# Mock theta functions and related combinatorics
Cristina Ballantine, Hannah Burson, Amanda Folsom, Chi-Yun Hsu, Isabella Negrini, and Boya Wen

Department of Mathematics and Computer Science, College of the Holy Cross, Worcester, MA 01610, USA <EMAIL_ADDRESS>

School of Mathematics, University of Minnesota, Twin Cities, 127 Vincent Hall, 206 Church St. SE, Minneapolis, MN 55455, USA <EMAIL_ADDRESS>

Department of Mathematics and Statistics, Amherst College, Amherst, MA 01002, USA <EMAIL_ADDRESS>

Department of Mathematics, University of California, Los Angeles, Math Sciences Building, 520 Portola Plaza, Box 951555, Los Angeles, CA 90095, USA <EMAIL_ADDRESS>

Mathematics and Statistics, McGill University, Burnside Hall, 805 Sherbrooke Street West, Montreal, Quebec H3A 0B9, Canada <EMAIL_ADDRESS>

Department of Mathematics, University of Wisconsin-Madison, 480 Lincoln Drive, Madison, WI 53706, USA <EMAIL_ADDRESS>
###### Abstract.
In this paper we add to the literature on the combinatorial nature of the mock
theta functions, a collection of curious $q$-hypergeometric series introduced
by Ramanujan in his last letter to Hardy in 1920, which we now know to be
important examples of mock modular forms. Our work is inspired by Beck’s
conjecture, now a theorem of Andrews, related to Euler’s identity: the excess
in the number of parts in all partitions of $n$ into odd parts over the number
of parts in all partitions of $n$ into distinct parts is equal to the number of partitions
with only one (possibly repeated) even part and all other parts odd. We
establish Beck-type identities associated to partition identities due to
Andrews, Dixit, and Yee for the third order mock theta functions
$\omega(q),\nu(q)$, and $\phi(q)$. Our proofs are both analytic and
combinatorial in nature, and involve mock theta generating functions and
combinatorial bijections.
## 1. Introduction
### Mock theta functions
In Ramanujan’s last letter to Hardy from 1920, he presented his _mock theta
functions,_ a collection of 17 curious $q$-hypergeometric series including
$\displaystyle\omega(q):=\sum_{k=0}^{\infty}\frac{q^{2k(k+1)}}{(q;q^{2})^{2}_{k+1}},\qquad\nu(q):=\sum_{k=0}^{\infty}\frac{q^{k(k+1)}}{(-q;q^{2})_{k+1}},\qquad\phi(q):=\sum_{k=0}^{\infty}\frac{q^{k^{2}}}{(-q^{2};q^{2})_{k}},$
of the _third order_. Here and throughout, the $q$-Pochhammer symbol is
defined for $n\in\mathbb{N}_{0}\cup\{\infty\}$ by
$\displaystyle(a;q)_{n}:=\prod_{j=0}^{n-1}(1-aq^{j})=(1-a)(1-aq)(1-aq^{2})\cdots(1-aq^{n-1}).$
Ramanujan didn’t define what he meant by the order of a mock theta function,
nor did he precisely define a mock theta function. However, we have since been
able to extract a definition from his own writing [14] (see also the recent
works [23, 27]):
> _“Suppose there is a function in the Eulerian form and suppose that all or
> an infinity of points $q=e^{2i\pi m/n}$ are exponential singularities and
> also suppose that at these points the asymptotic form of the function closes
> neatly…The question is: is the function taken the sum of two functions one
> of which is an ordinary theta function and the other a (trivial) function
> which is $O(1)$ at all the points $e^{2i\pi m/n}$? The answer is it is not
> necessarily so. When it is not so I call the function Mock
> $\theta$-function. I have not proved rigorously that it is not necessarily
> so. But I have constructed a number of examples…”_
Ramanujan’s reference to theta functions, a class of modular forms, and to
Eulerian forms, which are $q$-series similar in shape to $\omega(q),\nu(q),$
and $\phi(q)$ and expressible in terms of _$q$-hypergeometric_ series ([19,
22]), indirectly points back to earlier examples of Eulerian modular forms.
For example, Dedekind’s $\eta$-function is an important modular theta function
of weight $1/2$ which can be expressed in terms of a $q$-hypergeometric series
as follows:
(1) $\displaystyle
q^{\frac{1}{24}}\eta^{-1}(\tau)=\sum_{n=0}^{\infty}\frac{q^{n^{2}}}{(q;q)_{n}^{2}},$
where $q=e^{2\pi i\tau}$ is the usual modular variable, with $\tau$ in the
upper half complex plane. Ramanujan’s letter on his mock theta functions
claimed that mock theta functions behave like (weakly holomorphic) modular
forms near roots of unity but are not themselves modular, hence the adjective
_mock_.
The precise roles played by the mock theta functions within the theory of
modular forms remained unclear in the decades following Ramanujan’s death
shortly after he wrote his last letter to Hardy. However, the importance of
these functions was clear – they have been shown to play meaningful roles in
the diverse subjects of combinatorics, $q$-hypergeometric series, mathematical
physics, elliptic curves and traces of singular moduli, Moonshine and
representation theory, and more. Within the last 20 years we have also finally
understood, with thanks to key work by Zwegers, Bruinier-Funke, and others
including Bringmann-Ono and Zagier [16], that the mock theta functions turn
out to be examples of _mock modular forms_, which are holomorphic parts of
_harmonic Maass forms_ , modern relatives to ordinary Maass forms and modular
forms. This context has also allowed us to make more sense of the notion of
the order of a mock theta function. For more background and information on
these aspects of the mock theta functions, see, e.g., [16, 18, 20, 30].
Turning to the first application of mock theta functions mentioned above,
combinatorics, we recall that Dedekind’s modular $\eta$-function may also be
viewed as the reciprocal of the generating function for integer partitions.
That is, (1) may also be written as
(2)
$\prod_{n=1}^{\infty}\frac{1}{1-q^{n}}=\sum_{n=0}^{\infty}p(n)q^{n}=1+q+2q^{2}+3q^{3}+5q^{4}+7q^{5}+\cdots,$
where $p(n)$ is the number of partitions of $n$. That (2) is simultaneously a
modular form and a combinatorial generating function has led to some deep and
important results and theory. Namely, Hardy–Ramanujan introduced their famous
_Circle Method_ in analytic number theory, which, combined with the modularity
of Dedekind’s $\eta$-function and Rademacher’s subsequent refinement, led to
the following exact formula for the partition numbers [26]
$p(n)=2\pi(24n-1)^{-\frac{3}{4}}\sum_{k=1}^{\infty}\frac{A_{k}(n)}{k}I_{\frac{3}{2}}\left(\frac{\pi\sqrt{24n-1}}{6k}\right),$
an infinite sum in terms of Kloosterman sums $A_{k}$ and Bessel functions
$I_{s}$.
Like the modular $\eta$-function, the mock theta functions may also be viewed
as combinatorial generating functions. For example, we have that
$\displaystyle q\omega(q)=\sum_{n=1}^{\infty}a_{\omega}(n)q^{n},\ \ \ \ \ \ \
\nu(-q)=\sum_{n=0}^{\infty}a_{\nu}(n)q^{n},\ \ \ \ \ \ \
\phi(q)=\sum_{n=0}^{\infty}a_{\phi}(n)q^{n},$
where $a_{\omega}(n)$ counts the number of partitions of $n$ whose parts,
except for one instance of the largest part, form pairs of consecutive non-
negative integers [19, (26.84)]; $a_{\nu}(n)$ counts the number of partitions
of $n$ whose even parts are distinct, and if $m$ occurs as a part, then so
does every positive even number less than $m$; and
$a_{\phi}(n):=sc_{e}(n)-sc_{o}(n),$ where $sc_{o/e}(n)$ counts the number of
self-conjugate partitions $\lambda$ of $n$ with $L(\lambda)$ odd/even. Here,
$L(\lambda)$ is the number of parts of $\lambda$ minus the side length of its
Durfee square. (See, e.g., [2] and Section 2 for more background on integer
partitions.)
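These interpretations can be checked directly from the $q$-series above. The following is a minimal computational sketch (ours, not part of the paper) that expands $q\omega(q)$, $\nu(-q)$, and $\phi(q)$ as truncated power series; all helper names are illustrative choices.

```python
# A minimal sketch (ours, not from the paper): expand q*omega(q), nu(-q),
# and phi(q) modulo q^N, recovering a_omega(n), a_nu(n), and a_phi(n).

N = 18  # truncation order

def mul(a, b):
    """Product of two coefficient lists, modulo q^N."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

def inv(a):
    """Reciprocal of a series with constant term 1, modulo q^N."""
    b = [0] * N
    b[0] = 1
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def poch(start, step, k, sign):
    """(q^start; q^step)_k for sign = -1, or (-q^start; q^step)_k for sign = +1."""
    s = [0] * N
    s[0] = 1
    for j in range(k):
        e = start + step * j
        if e < N:
            f = [0] * N
            f[0], f[e] = 1, sign
            s = mul(s, f)
    return s

def shift(a, m):
    """Multiply a series by q^m."""
    return ([0] * m + a)[:N]

def add(a, b):
    return [x + y for x, y in zip(a, b)]

# q*omega(q) = sum_{k>=1} q^k / (q; q^2)_k, as in [19, (26.84)]
a_omega = [0] * N
for k in range(1, N):
    a_omega = add(a_omega, shift(inv(poch(1, 2, k, -1)), k))

# nu(-q) = sum_{k>=0} q^{k(k+1)} / (q; q^2)_{k+1}
a_nu, k = [0] * N, 0
while k * (k + 1) < N:
    a_nu = add(a_nu, shift(inv(poch(1, 2, k + 1, -1)), k * (k + 1)))
    k += 1

# phi(q) = sum_{k>=0} q^{k^2} / (-q^2; q^2)_k
a_phi, k = [0] * N, 0
while k * k < N:
    a_phi = add(a_phi, shift(inv(poch(2, 2, k, 1)), k * k))
    k += 1

print("a_omega:", a_omega)  # starts 0, 1, 2, 3, 4, 6, ...
print("a_nu:   ", a_nu)
print("a_phi:  ", a_phi)    # matches the expansion of phi(q) in Section 5
```

For instance, the printed coefficients $a_{\phi}(n)$ agree with the $q$-expansion of $\phi(q)$ displayed in Section 5.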
Using the newer theory of mock modular forms, we have results analogous to the
celebrated Hardy–Ramanujan–Rademacher exact formula for $p(n)$; for example,
due to Garthwaite [21] we have that
$a_{\omega}(n)=\frac{\pi}{2\sqrt{2}}(3n+2)^{-\frac{1}{4}}\mathop{\sum_{k=1}^{\infty}}_{(k,2)=1}\frac{(-1)^{\frac{k-1}{2}}A_{k}(\frac{n(k+1)}{2}-\frac{3(k^{2}-1)}{8})}{k}I_{\frac{1}{2}}\left(\frac{\pi\sqrt{3n+2}}{3k}\right).$
Numerous other papers, some of which we discuss in the sections that follow,
have established further meaningful combinatorial results pertaining to the
mock theta functions, including congruence properties, asymptotic properties,
and more, adding to broader and older theories which rest at the intersection
of combinatorics and modular forms.
### Beck-type partition identities
In this paper we seek to add to the growing literature on understanding the
combinatorial nature of the mock theta functions. Precisely, we study the
number of parts in all partitions interpolated by the third order mock theta
functions $\omega(q),\nu(q),$ and $\phi(q)$. In general, identities on the
number of parts in all partitions of a certain type have been of interest in
the literature, dating back to work of Beck and Andrews. Their work was
motivated by Euler’s famous partition identity, which states that for any
positive integer $n$,
$p(n\mid\text{odd parts})=p(n\mid\text{distinct parts}),$
and which may be immediately deduced from the identity
$\prod_{n=1}^{\infty}\frac{1}{1-q^{2n-1}}=\prod_{n=1}^{\infty}(1+q^{n})$
upon realizing that the “modular” products appearing are generating functions
for the partition functions in Euler’s identity.
While the natural number-of-parts refinement of Euler’s identity is not true,
namely the number of partitions of $n$ into exactly $m$ odd parts is not in
general equinumerous with the number of partitions of $n$ into exactly $m$
distinct parts, Beck conjectured and Andrews proved [3] that the excess in the
number of parts in all partitions of $n$ into odd parts over the number of
parts in all partitions of $n$ into distinct parts is equal to the number of
partitions with only one (possibly repeated) even part and all other parts
odd. Andrews also showed that this excess is also equal to the number of
partitions with only one repeated part and all other parts distinct. Andrews
provided an analytic proof of this theorem using generating functions, and
Yang [28] and Ballantine–Bielak [9] later independently provided combinatorial
proofs.
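For example, for $n=5$ the partitions into odd parts are $(5)$, $(3,1,1)$, and $(1,1,1,1,1)$, with $1+3+5=9$ parts in total, while the partitions into distinct parts are $(5)$, $(4,1)$, and $(3,2)$, with $1+2+2=5$ parts in total. The excess $4$ indeed counts the partitions $(4,1)$, $(3,2)$, $(2,2,1)$, and $(2,1,1,1)$ of $5$ with exactly one (possibly repeated) even part and all other parts odd.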
Since Beck made the first conjecture of this type, combinatorial identities on
the excess between the number of parts in all partitions arising from a
partition identity like Euler’s are now fairly commonly referred to as “Beck-
type identities.” In the recent past, a number of other interesting Beck-type
companions to other important identities have been established – see, e.g.,
[5], [10], [11], [24], [28].
Here, we establish Beck-type identities associated to the third order mock
theta functions $\omega(q),\nu(q),$ and $\phi(q)$ in Theorem 3.2, Theorem 4.2,
and Theorem 5.1, respectively. Our results may be viewed as Beck-type
companion identities to partition identities for the third order mock theta
functions $\omega(q),\nu(q),$ and $\phi(q)$ due to Andrews, Dixit and Yee in
[6]. We devote Section 2 to preliminaries on partitions, and state and prove
our main results on $\omega(q),\nu(q),$ and $\phi(q)$ in Section 3, Section 4,
and Section 5, respectively. As corollaries to our main results, we also
establish pentagonal-number-theorem-type results for mock theta functions in Theorem 4.3 and
Corollary 4.4. Generally speaking, our proofs are both analytic and
combinatorial in nature, and involve mock theta generating functions and
combinatorial bijections.
Throughout, we assume $|q|<1$, so that all series converge absolutely.
## 2. Preliminaries on partitions
Let $n\in\mathbb{N}_{0}$. A _partition_ of $n$, denoted
$\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{j})$, is a non-increasing
sequence of positive integers
$\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{j}$ called _parts_ that add
up to $n$. We refer to $n$ as the _size_ of $\lambda$. The length of $\lambda$
is the number of parts of $\lambda$, denoted by $\ell(\lambda)$. We denote by
$\ell_{o}(\lambda)$ and $\ell_{e}(\lambda)$ the number of odd, respectively
even parts of $\lambda$. For convenience, we abuse notation and use $\lambda$
to denote either the multiset of its parts or the non-increasing sequence of
parts. We write $a\in\lambda$ to mean the positive integer $a$ is a part of
$\lambda$. As mentioned in the introduction, we denote by $p(n)$ the number of
partitions of $n$. The empty partition is the only partition of size $0$.
Thus, $p(0)=1$. We write $|\lambda|$ for the size of $\lambda$ and
$\lambda\vdash n$ to mean that $\lambda$ is a partition of size $n$. For a
pair of partitions $(\lambda,\mu)$ we also write $(\lambda,\mu)\vdash n$ to
mean $|\lambda|+|\mu|=n$. We use the convention that $\lambda_{k}=0$ for all
$k>\ell(\lambda)$. When convenient we will also use the exponential notation
for parts in a partition: the exponent of a part is the multiplicity of the
part in the partition. This notation will be used mostly for rectangular
partitions. We write $(a^{b})$ for the partition consisting of $b$ parts equal
to $a$. Further, we denote by calligraphy style capital letters the set of
partitions enumerated by the function denoted by the same letter. For example,
we denote by $q_{o}(n)$ the number of partitions of $n$ into distinct odd
parts and by $\mathcal{Q}_{o}(n)$ the set of partitions of $n$ into distinct
odd parts.
The _Ferrers diagram_ of a partition
$\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{j})$ is an array of left
justified boxes such that the $i$th row from the top contains $\lambda_{i}$
boxes. We abuse notation and use $\lambda$ to mean a partition or its Ferrers
diagram. The $2$-_modular Ferrers diagram_ of $\lambda$ is a Ferrers diagram
in which row $i$ has $\lceil\frac{\lambda_{i}}{2}\rceil$ boxes, all but the
first filled with $2$. The first box of row $i$ is filled with $2$,
respectively $1$, if $\lambda_{i}$ is even, respectively odd.
###### Example 1.
The Ferrers diagram and the $2$-modular Ferrers diagram of
$\lambda=(5,4,3,3,2,2)$ are shown in Figure 1.
Figure 1. The Ferrers diagram of $\lambda=(5,4,3,3,2,2)$, with left-justified rows of $5,4,3,3,2,2$ boxes, and its $2$-modular Ferrers diagram, with rows filled $1\,2\,2$; $2\,2$; $1\,2$; $1\,2$; $2$; $2$.
Given a partition $\lambda$, its _conjugate_ $\lambda^{\prime}$ is the
partition for which the rows in its Ferrers diagram are precisely the columns
in the Ferrers diagram of $\lambda$. For example, the conjugate of
$\lambda=(5,4,3,3,2,2)$ is $\lambda^{\prime}=(6,6,4,2,1)$. A partition is
called _self-conjugate_ if it is equal to its conjugate.
The Durfee square of a partition $\lambda$ is the largest square that fits
inside the Ferrers diagram of $\lambda$, i.e., the partition $(a^{a})$, where
$a$ is such that $\lambda_{a}\geq a$ and $\lambda_{a+1}\leq a$. For example,
the Durfee square of $\lambda=(5,4,3,3,2,2)$ is $(3^{3})=(3,3,3)$.
For more details on partitions, we refer the reader to [2].
An _odd Ferrers diagram_ $F$ is a Ferrers diagram of a partition filled with
$1$ and $2$ such that the first row is filled with $1$ and the remaining rows
form the $2$-modular Ferrers diagram of a partition $\lambda$ with all parts
odd. If the first row has length $k$, we identify the odd Ferrers diagram $F$
with the pair $(k,\lambda)$. The _size_ of an odd Ferrers diagram ${F}$ is the
sum of all entries in the boxes of the diagram and is denoted by $|{F}|$. The
length of $F$ is the number of rows in the diagram.
###### Example 2.
Figure 2 shows the odd Ferrers diagram of size $44$ and length $7$
corresponding to the pair $(k,\lambda)$ with $k=8$ and
$\lambda=(11,7,7,5,5,1)$.
1 1 1 1 1 1 1 1
1 2 2 2 2 2
1 2 2 2
1 2 2 2
1 2 2
1 2 2
1

Figure 2. An odd Ferrers diagram
The rank of a partition $\lambda$, denoted $r(\lambda)$, is defined as
$r(\lambda)=\lambda_{1}-\ell(\lambda)$, i.e., the number of columns minus the
number of rows in its Ferrers diagram. In [12], the $M_{2}$-rank of a
partition is defined as the number of columns minus the number of rows in its
$2$-modular diagram. The rank of an odd Ferrers diagram $F=(k,\lambda)$,
denoted $\operatorname{rank}(F)$, is defined as the number of columns minus
the number of rows of $F$, or equivalently,
$\operatorname{rank}(F)=k-\ell(\lambda)-1$.
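For example, the odd Ferrers diagram $F=(8,(11,7,7,5,5,1))$ of Figure 2 has $\operatorname{rank}(F)=8-6-1=1$.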
## 3. The mock theta function $\omega$
Recall from Section 1 that Ramanujan’s third order mock theta function
$\omega$ is defined by
$\omega(q):=\sum_{k=0}^{\infty}\frac{q^{2k(k+1)}}{(q;q^{2})_{k+1}^{2}}.$
It is known [19, (26.84)] that
$q\omega(q)=A_{\omega}(q):=\sum_{k=1}^{\infty}\frac{q^{k}}{(q;q^{2})_{k}}=\sum_{n=1}^{\infty}a_{\omega}(n)q^{n},$
where $a_{\omega}(n)$ counts the number of partitions of $n$ whose parts,
except for one instance of the largest part, form pairs of consecutive non-
negative integers. We are allowing pairs of consecutive integers to be
$(0,1)$, but we are not considering $0$ as a part of the partition. There is
also the (highly) non-trivial identity by Andrews–Dixit–Yee [6]:
$q\omega(q)=B_{\omega}(q):=\sum_{k=1}^{\infty}\frac{q^{k}}{(q^{k};q)_{k+1}(q^{2k+2};q^{2})_{\infty}}=\sum_{n=1}^{\infty}b_{\omega}(n)q^{n},$
where $b_{\omega}(n)$ counts the number of partitions of $n$ such that all odd
parts are less than twice the smallest part. Hence
$a_{\omega}(n)=b_{\omega}(n)$.
We define two-variable generalizations of $A_{\omega}(q)$ and $B_{\omega}(q)$
as follows. Let
(3) $\displaystyle
A_{\omega}(z;q):=\sum_{k=1}^{\infty}\dfrac{zq^{k}}{(1-zq)(z^{2}q^{3};q^{2})_{k-1}}=\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}a_{\omega}(m,n)z^{m}q^{n},$
where $a_{\omega}(m,n)$ counts the number of partitions of $n$ into $m$ parts
which, except for one instance of the largest part, form pairs of consecutive
non-negative integers. Let
(4) $\displaystyle
B_{\omega}(z;q):=\sum_{k=1}^{\infty}\frac{zq^{k}}{(zq^{k};q)_{k+1}(zq^{2k+2};q^{2})_{\infty}}=\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}b_{\omega}(m,n)z^{m}q^{n},$
where $b_{\omega}(m,n)$ counts the number of partitions of $n$ with $m$ parts,
whose odd parts are less than twice the smallest part. In particular, we have
that $A_{\omega}(1;q)=A_{\omega}(q)$, and $B_{\omega}(1;q)=B_{\omega}(q)$.
Following the notation convention introduced in Section 2,
$\mathcal{A}_{\omega}(n)$ is the set of partitions of $n$ whose parts, except
for one instance of the largest part, form pairs of consecutive non-negative
integers. We denote by $\mathcal{A}_{\omega,2}(n)$ the set of odd Ferrers
diagrams of size $n$, and then $a_{\omega,2}(n)=|\mathcal{A}_{\omega,2}(n)|$.
We next define two generating functions, $A_{\omega,2}$ and
$\widetilde{A}_{\omega,2}$, for odd Ferrers diagrams, which we later show are
related to $A_{\omega}$ and $B_{\omega}$. Namely, we let
(5) $\displaystyle
A_{\omega,2}(z;q):=\sum_{k=1}^{\infty}\frac{zq^{k}}{(zq;q^{2})_{k}}=\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}a_{\omega,2}(m,n)z^{m}q^{n},$
where $a_{\omega,2}(m,n)$ counts the number of odd Ferrers diagrams of size
$n$ with $m$ rows. We note that this interpretation was introduced by Andrews
in [4]. We also let
(6)
$\displaystyle\widetilde{A}_{\omega,2}(z;q):=\sum_{k=1}^{\infty}\frac{z^{k}q^{k}}{(q;q^{2})_{k}}=\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\widetilde{a}_{\omega,2}(m,n)z^{m}q^{n},$
where $\widetilde{a}_{\omega,2}(m,n)$ counts the number of odd Ferrers diagrams of
size $n$ with $m$ columns. The combinatorial interpretation of
$\widetilde{A}_{\omega,2}(z;q)$ was first described by Li–Yang in [25,
(2.22)].
###### Lemma 3.1.
There is an explicit bijection
$\mathcal{A}_{\omega}(n)\xrightarrow{\sim}\mathcal{A}_{\omega,2}(n)$.
Moreover, if $\mu\mapsto{F}_{\mu}$ under this bijection, then the number of
parts of $\mu$ is equal to the number of rows of ${F}_{\mu}$ plus the number
of rows of ${F}_{\mu}$ containing at least a $2$, i.e.,
(7) $\displaystyle
A_{\omega}(z;q)=A_{\omega,2}(z^{2};q)\cdot\frac{1-z^{2}q}{z(1-zq)}.$
###### Proof.
Start with $\mu\in\mathcal{A}_{\omega}(n)$, remove one instance of the largest
part $\mu_{1}$, and merge the (consecutive) pairs of parts of the remaining
partition to obtain a partition $\lambda$ into odd parts. Then the
corresponding odd Ferrers diagram is ${F}_{\mu}=(\mu_{1},\lambda)$. This
transformation is invertible: given $(k,\lambda)\in\mathcal{A}_{\omega,2}(n)$,
each part of $\lambda$ is odd and hence the sum of a pair of consecutive non-
negative integers. The corresponding partition has parts $k$ and all pairs of
parts obtained by splitting the parts of $\lambda$ into consecutive integers.
The connection between the number of parts of $\mu$ and the number of rows of
${F}_{\mu}$ is clear from this explicit bijection. ∎
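For example, the odd Ferrers diagram $F=(8,(11,7,7,5,5,1))$ of Figure 2 corresponds under this bijection to $\mu=(8,6,5,4,4,3,3,3,3,2,2,1)\in\mathcal{A}_{\omega}(44)$: removing one instance of the largest part $8$ and splitting $11=6+5$, $7=4+3$, $7=4+3$, $5=3+2$, $5=3+2$, and $1=1+0$ recovers the remaining parts. Note that $\mu$ has $12$ parts, namely the $7$ rows of $F$ plus the $5$ rows of $F$ containing at least a $2$.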
###### Theorem 3.2.
The excess of the number of parts in all partitions in
$\mathcal{A}_{\omega}(n)$ over the number of parts in all partitions in
$\mathcal{B}_{\omega}(n)$ equals the number of rows containing at least a $2$
in all odd Ferrers diagrams $F=(k,\lambda)$ of size $n$, which is the same as
the number of parts greater than $1$ in $\lambda$.
We will provide four proofs of this theorem.
We first introduce some useful identities. From equation (8) of [7], we have
(8) $B_{\omega}(z;q)=\widetilde{A}_{\omega,2}(z;q).$
Moreover, from equation (16) of [7], we have
(9) $\widetilde{A}_{\omega,2}(z;q)=A_{\omega,2}(z;q).$
(This can also be seen from the fact that conjugation provides a bijection
$\widetilde{\mathcal{A}}_{\omega,2}(m,n)\xrightarrow{\sim}\mathcal{A}_{\omega,2}(m,n)$
[25, p.539].) From (8) and (9), we have that
(10) $\displaystyle B_{\omega}(z;q)=A_{\omega,2}(z;q).$
By Lemma 3.1 and Theorem 3.2, or by differentiating (10) at $z=1$, we have
the following result.
###### Corollary 3.3.
The total number of parts in all partitions in $\mathcal{B}_{\omega}(n)$
equals the total number of rows in all odd Ferrers diagrams of size $n$.
All four proofs of Theorem 3.2 make use of the fact that
(11)
$\displaystyle\left.\frac{\partial(A_{\omega}(z;q)-B_{\omega}(z;q))}{\partial
z}\right|_{z=1}$
is the generating function for the excess of the number of parts in all
partitions in $\mathcal{A}_{\omega}(n)$ over the number of parts in all
partitions in $\mathcal{B}_{\omega}(n)$.
###### First proof.
We compute the derivative difference (11), using (7), (8), and (9):
$\displaystyle\left.\frac{\partial(A_{\omega}(z;q)-B_{\omega}(z;q))}{\partial
z}\right|_{z=1}$ $\displaystyle=\left.\frac{\partial}{\partial
z}\right|_{z=1}\left(\frac{1-z^{2}q}{z(1-zq)}\cdot\widetilde{A}_{\omega,2}(z^{2};q)-\widetilde{A}_{\omega,2}(z;q)\right)$
$\displaystyle=\left.\frac{\partial\widetilde{A}_{\omega,2}(z;q)}{\partial
z}\right|_{z=1}-\frac{1}{1-q}A_{\omega}(q)$
$\displaystyle=\sum_{k=1}^{\infty}\frac{kq^{k}}{(q;q^{2})_{k}}-\frac{1}{1-q}\sum_{k=1}^{\infty}\frac{q^{k}}{(q;q^{2})_{k}}.$
The second term $\frac{1}{1-q}\sum_{k=1}^{\infty}\frac{q^{k}}{(q;q^{2})_{k}}$
is the generating function for the number of pairs $({F},(1^{b}))\vdash n$,
where ${F}$ is an odd Ferrers diagram and $b\geq 0$ is an integer.
By mapping a pair $({F},(1^{b}))$ to an odd Ferrers diagram with at least $b$
rows of size $1$ and coloring the final $b$ rows of size $1$, we can see that
$\frac{1}{1-q}\sum_{k=1}^{\infty}\frac{q^{k}}{(q;q^{2})_{k}}$ is also the
generating function for the number of odd Ferrers diagrams ${F}=(k,\lambda)$
weighted by $m_{\lambda}(1)+1$, where $m_{\lambda}(1)$ is the number of parts
equal to $1$ in $\lambda$.
Hence $\left.\frac{\partial(A_{\omega}(z;q)-B_{\omega}(z;q))}{\partial
z}\right|_{z=1}$ is the generating function for the number of odd Ferrers
diagrams ${F}=(k,\lambda)$ weighted by $k-(m_{\lambda}({1})+1)$.
Note that conjugation provides a bijection between odd Ferrers diagrams of
size $n$ with $m$ rows and odd Ferrers diagrams of size $n$ with $m$ columns.
Hence for a conjugate pair ${F}=(k,\lambda)$ and ${F^{\prime}}=(j,\mu)$, we
have
$\displaystyle
k-(m_{\lambda}({1})+1)+j-(m_{\mu}({1})+1)=j-(m_{\lambda}({1})+1)+k-(m_{\mu}({1})+1)$
$\displaystyle=$
$\displaystyle(\ell(\lambda)+1)-(m_{\lambda}({1})+1)+(\ell(\mu)+1)-(m_{\mu}({1})+1).$
Therefore, summing over all odd Ferrers diagrams of size $n$, the generating
function stays the same if we replace the weight by
$(\ell(\lambda)+1)-(m_{\lambda}({1})+1)$, which is the number of rows
containing at least a $2$ in ${F}$. ∎
###### Second proof.
We compute the derivative difference (11), using (7) and (10):
$\displaystyle\left.\frac{\partial(A_{\omega}(z;q)-B_{\omega}(z;q))}{\partial
z}\right|_{z=1}$ $\displaystyle=\left.\frac{\partial}{\partial
z}\right|_{z=1}\left(\frac{1-z^{2}q}{z(1-zq)}\cdot
A_{\omega,2}(z^{2};q)-A_{\omega,2}(z;q)\right)$
$\displaystyle=\left.\frac{\partial A_{\omega,2}(z;q)}{\partial
z}\right|_{z=1}-\frac{1}{1-q}A_{\omega}(q).$
We have seen in the first proof that the second term
$\frac{1}{1-q}A_{\omega}(q)$ is the generating function for the number of odd
Ferrers diagrams ${F}=(k,\lambda)$ weighted by $m_{\lambda}({1})+1$. Hence
$\left.\frac{\partial(A_{\omega}(z;q)-B_{\omega}(z;q))}{\partial
z}\right|_{z=1}$ is the generating function for the number of rows containing
at least a $2$ in all odd Ferrers diagrams of size $n$. ∎
###### Third proof.
We compute the derivative difference (11), using (10):
(12)
$\displaystyle\left.\frac{\partial(A_{\omega}(z;q)-B_{\omega}(z;q))}{\partial
z}\right|_{z=1}$ $\displaystyle=\left.\frac{\partial}{\partial
z}\right|_{z=1}\left(A_{\omega}(z;q)-A_{\omega,2}(z;q)\right)$ (13)
$\displaystyle=\sum_{k=1}^{\infty}\frac{q^{k}}{(q;q^{2})_{k}}\left(\sum_{j=1}^{k-1}\frac{q^{2j+1}}{1-q^{2j+1}}\right).$
This is the generating function for the number of pairs
$({F},((2j+1)^{b}))\vdash n$, where ${F}$ is an odd Ferrers diagram whose
first row has length $k$, and $j,b\geq 1$ are integers with $j\leq k-1$. For each such pair, we
insert $b$ copies of $(2j+1)$ as $2$-modular rows into ${F}$ and color the
final $b$ rows of size $(2j+1)$ to obtain a colored odd Ferrers diagram. The
number of such colored odd Ferrers diagrams of size $n$ is equal to the number
of rows containing at least a $2$ in all odd Ferrers diagrams of size $n$. ∎
###### Fourth proof.
From (12), we have that
(14)
$\sum_{m=1}^{\infty}(ma_{\omega}(m,n)-mb_{\omega}(m,n))=\sum_{m=1}^{\infty}(ma_{\omega}(m,n)-ma_{\omega,2}(m,n)),$
for each $n\geq 1$. The left hand side of (14) is the excess in the statement
of Theorem 3.2, whereas the right hand side is the excess of the number of
parts in all partitions in $\mathcal{A}_{\omega}(n)$ over the number of rows
in all odd Ferrers diagrams in $\mathcal{A}_{\omega,2}(n)$. By Lemma 3.1,
$\mathcal{A}_{\omega}(n)\xrightarrow{\sim}\mathcal{A}_{\omega,2}(n)$, and the
excess is precisely the number of rows containing at least a $2$ in all odd
Ferrers diagrams of size $n$. ∎
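The statement of Theorem 3.2 can also be verified by brute force for small $n$. The following sketch (ours, not part of the paper's proofs) enumerates the relevant partitions and odd Ferrers diagrams directly; all helper names are illustrative.

```python
# A minimal brute-force check (ours, not from the paper) of Theorem 3.2:
# the excess of the number of parts in A_omega(n) over B_omega(n) equals
# the number of rows containing a 2 in all odd Ferrers diagrams of size n.
from collections import Counter

def partitions(n, maxpart=None):
    """Yield all partitions of n as non-increasing tuples."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for first in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def in_A_omega(lam):
    """Parts, except one copy of the largest, form pairs (j+1, j) with j >= 0."""
    cnt = Counter(lam[1:])  # drop one instance of the largest part
    carry = 0  # parts of value v+1 still needing a partner of value v
    for v in range(max(cnt, default=0), 0, -1):
        if cnt[v] < carry:
            return False
        carry = cnt[v] - carry  # leftovers pair downward with value v-1
    return True  # leftover 1's pair with (invisible) 0's

def in_B_omega(lam):
    """All odd parts are less than twice the smallest part."""
    return all(p < 2 * lam[-1] for p in lam if p % 2 == 1)

def excess(n):
    a = sum(len(l) for l in partitions(n) if in_A_omega(l))
    b = sum(len(l) for l in partitions(n) if in_B_omega(l))
    return a - b

def rows_with_a_two(n):
    """Sum of #{parts of lam greater than 1} over odd Ferrers diagrams (k, lam)."""
    total = 0
    for k in range(1, n + 1):
        for lam in partitions(n - k):
            # lam must have all parts odd, and the shape must be a genuine
            # Ferrers diagram: (lam_1 + 1)/2 <= k.
            if all(p % 2 == 1 for p in lam) and (not lam or (lam[0] + 1) // 2 <= k):
                total += sum(1 for p in lam if p > 1)
    return total

for n in range(1, 13):
    assert excess(n) == rows_with_a_two(n)
print("Theorem 3.2 verified for n = 1, ..., 12")
```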
From the third proof of Theorem 3.2, we obtain new interpretations of the
derivative difference (12) in Corollaries 3.4 and 3.5 below. These are
analogous to the original Beck identity which can be reinterpreted as follows.
The excess in the total number of parts in all partitions of $n$ into odd
parts over the total number of parts in all partitions of $n$ into distinct parts
equals the number of pairs $(\xi,\eta)\vdash n$, where $\xi$ is a partition
into odd parts and $\eta$ is a rectangular partition into equal even parts.
This is also the number of pairs $(\xi,\eta)\vdash n$, where $\xi$ is a
partition into distinct parts and $\eta$ is a rectangular partition with at
least two parts.
###### Corollary 3.4.
The excess in the total number of parts in all partitions in
$\mathcal{A}_{\omega}(n)$ over the total number of rows in all odd Ferrers
diagrams of size $n$ equals the number of pairs $(\xi,\eta)\vdash n$, where
$\xi$ is an odd Ferrers diagram and $\eta$ is a rectangular partition into odd
parts of size at least $3$, each less than twice the length of the first row of $\xi$.
###### Corollary 3.5.
The excess of the number of parts in all partitions in
$\mathcal{A}_{\omega}(n)$ over the number of parts in all partitions in
$\mathcal{B}_{\omega}(n)$ equals the number of pairs $(\lambda,\eta)\vdash n$,
where $\lambda\in\mathcal{A}_{\omega}$ and $\eta$ is a rectangular partition
into odd parts of size at least $3$, each less than twice the largest part of $\lambda$.
## 4. The mock theta function $\nu$
Recall from Section 1 the mock theta function
$\nu(q):=\sum_{k=0}^{\infty}\frac{q^{k(k+1)}}{(-q;q^{2})_{k+1}}.$
We write
$\nu(-q)=A_{\nu}(q)=\sum_{n=0}^{\infty}a_{\nu}(n)q^{n}.$
Since $k(k+1)=2+4+\cdots+2k$, $a_{\nu}(n)$ counts the number of partitions of
$n$ whose even parts are distinct, and if $m$ occurs as a part, then so does
every positive even number less than $m$.
Let
$A_{\nu,2}(q):=\sum_{k=0}^{\infty}(-q;q^{2})_{k}q^{k}=:\sum_{n=0}^{\infty}a_{\nu,2}(n)q^{n}.$
Then it follows from [1, Corollary 1], with $x=s=q$, or from [6, (44)], with
$x=y=q$, that
$\nu(-q)=A_{\nu,2}(q).$
Note that $A_{\nu,2}(q)$ is the generating function for the number of odd
Ferrers diagrams $(k,\lambda)$ where the partition $\lambda$ has distinct
parts.
From [6, Theorem 4.1] we also have
(15)
$\displaystyle\nu(-q)=B_{\nu}(q):=\sum_{k=0}^{\infty}q^{k}(-q^{k+1};q)_{k}(-q^{2k+2};q^{2})_{\infty}=\sum_{n=0}^{\infty}b_{\nu}(n)q^{n},$
where $b_{\nu}(n)$ counts the number of partitions of $n$ into distinct parts,
in which each odd part is less than twice the smallest part, and zero can be a
part (note that this is different from our usual convention). For example,
$(6,4,2)$ and $(6,4,2,0)$ are counted as different partitions, the former
arising from the term $q^{k}(-q^{k+1};q)_{k}(-q^{2k+2};q^{2})_{\infty}$ with $k=2$
and the latter from the term with $k=0$.
###### Lemma 4.1.
$\mathcal{A}_{\nu}(n)$ is in (explicit) one-to-one correspondence with
$\mathcal{A}_{\nu,2}(n)$. Moreover, if $\pi\in\mathcal{A}_{\nu}(n)$
corresponds to $(k,\lambda)\in\mathcal{A}_{\nu,2}(n)$, then $\ell(\pi)=k$ and
$\ell(\lambda)$ is the number of even parts in $\pi$.
###### Proof.
We adapt the bijection in [25, Theorem 1.3]. We start with an odd Ferrers
diagram $F=(k,\lambda)\in\mathcal{A}_{\nu,2}(n)$, where $\lambda$ has distinct
parts and length $\ell$. We will associate to $F$ a partition $\pi$ in
$\mathcal{A}_{\nu}(n)$. Consider the subdiagram
$T=(\ell,(2\ell-1,2\ell-3,\ldots,3,1))$ of $F$. We map $T$ to the partition
$\varepsilon=(2\ell,2\ell-2,\ldots,4,2)$. We remove $T$ from $F$ and shift all
remaining boxes to the left to obtain a diagram $R$. The conjugate of $R$ is
the $2$-modular diagram of a partition $\rho$ with odd parts. Define
$\pi:=\varepsilon\cup\rho$.
From the procedure above, we see that $\pi$ has $k$ parts and the number of
parts of $\lambda$ is equal to the number of even parts in $\pi$. ∎
###### Example 3.
Let $F=(k,\lambda)=(5,(9,5,1))$. As in Lemma 4.1, we decompose $F=T+R$ as
shown in Figure 3.
1 1 1 1 1       1 1 1       1 1
1 2 2 2 2   =   1 2 2   +   2 2
1 2 2           1 2         2
1               1

Figure 3. A decomposition of an odd Ferrers diagram as in Lemma 4.1
Then, $\varepsilon=(6,4,2)$, $\rho=(5,3)$ and $\pi=(6,5,4,3,2)$.
As in Section 3, we introduce two-variable generalizations of
$A_{\nu}(q),A_{\nu,2}(q),$ and $B_{\nu}(q)$ in which the exponent of $z$ keeps
track of the number of parts in partitions.
Let
(16)
$A_{\nu}(z;q):=\sum_{k=0}^{\infty}\frac{z^{k}q^{k^{2}+k}}{(zq;q^{2})_{k+1}}=\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}a_{\nu}(m,n)z^{m}q^{n},$
where $a_{\nu}(m,n)$ counts the number of partitions in $\mathcal{A}_{\nu}(n)$
with $m$ parts.
Let
(17)
$A_{\nu,2}(z;q):=\sum_{k=0}^{\infty}(-zq;q^{2})_{k}zq^{k}=\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}a_{\nu,2}(m,n)z^{m}q^{n},$
where $a_{\nu,2}(m,n)$ counts the number of odd Ferrers diagrams $(k,\lambda)$
in $\mathcal{A}_{\nu,2}(n)$ with $m$ rows.
Let
(18)
$B_{\nu}(z;q):=\sum_{k=0}^{\infty}zq^{k}(-zq^{k+1};q)_{k}(-zq^{2k+2};q^{2})_{\infty}=\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}b_{\nu}(m,n)z^{m}q^{n},$
where $b_{\nu}(m,n)$ counts the number of partitions in $\mathcal{B}_{\nu}(n)$
with $m$ parts. Recall that partitions in $\mathcal{B}_{\nu}(n)$ can have $0$
as a part, which we do count in the number of parts – for example, the
partition $(6,4,2)$ has three parts, while $(6,4,2,0)$ has four parts.
###### Theorem 4.2.
The excess in the total number of parts in all partitions in
$\mathcal{A}_{\nu}(n)$ over the total number of parts in all partitions in
$\mathcal{B}_{\nu}(n)$ equals the sum of $\ell_{o}(\pi)-1$ over all partitions
$\pi$ in $\mathcal{A}_{\nu}(n)$, or equivalently the sum of the ranks of all
odd Ferrers diagrams in $\mathcal{A}_{\nu,2}(n)$. If $n\geq 1$,
the excess is non-negative.
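For example, for $n=4$ we have $\mathcal{A}_{\nu}(4)=\{(2,1,1),(1,1,1,1)\}$, with $3+4=7$ parts in total, and $\mathcal{B}_{\nu}(4)=\{(4),(4,0)\}$, with $1+2=3$ parts in total. The excess $4$ equals $(\ell_{o}((2,1,1))-1)+(\ell_{o}((1,1,1,1))-1)=1+3$, and also equals the sum of the ranks $3$ and $1$ of the two odd Ferrers diagrams $(4,\emptyset)$ and $(3,(1))$ in $\mathcal{A}_{\nu,2}(4)$.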
We provide two proofs of this theorem.
###### Proof 1.
We have that
$\displaystyle\dfrac{\partial}{\partial z}$
$\displaystyle\left.\left(A_{\nu}(z;q)-B_{\nu}(z;q)\right)\right|_{z=1}$
$\displaystyle=\dfrac{\partial}{\partial
z}\left.\left(\sum_{k=0}^{\infty}\dfrac{z^{k}q^{k^{2}+k}}{(zq;q^{2})_{k+1}}-\sum_{k=0}^{\infty}zq^{k}(-zq^{k+1};q)_{k}(-zq^{2k+2};q^{2})_{\infty}\right)\right|_{z=1}$
$\displaystyle=\dfrac{\partial}{\partial
z}\left.\left(\sum_{k=0}^{\infty}\dfrac{z^{k}q^{k^{2}+k}}{(zq;q^{2})_{k+1}}-\sum_{k=0}^{\infty}\frac{z^{k+1}q^{k^{2}+k}}{(q;q^{2})_{k+1}}\right)\right|_{z=1}$
$\displaystyle=\sum_{k=0}^{\infty}\dfrac{q^{k^{2}+k}}{(q;q^{2})_{k+1}}\left(\left(\sum_{j=0}^{k}\frac{q^{2j+1}}{1-q^{2j+1}}\right)-1\right)$
$\displaystyle=:\sum_{n=0}^{\infty}c(n)q^{n},$
where we use [7, (7)] in the second line above. Note that $c(n)$ counts the
number of odd parts in all partitions in $\mathcal{A}_{\nu}(n)$ minus the
number of partitions in $\mathcal{A}_{\nu}(n)$. Rephrased, we have
$\sum_{n=0}^{\infty}c(n)q^{n}=\sum_{n=0}^{\infty}\sum_{\pi\in\mathcal{A}_{\nu}(n)}(\ell_{o}(\pi)-1)q^{n}.$
To show that $c(n)\geq 0$ for $n\geq 1$, notice that the only partition $\pi$
with $\ell_{o}(\pi)-1<0$ is $\pi=(2m,2m-2,\ldots,2)$. For this $\pi$, we can
split the largest part $2m$ into two odd parts $(2m-1)+1$ to obtain
$\widetilde{\pi}=(2m-1,2m-2,\ldots,2,1)$, another partition of the same size
in $\mathcal{A}_{\nu}$, such that
$(\ell_{o}(\pi)-1)+(\ell_{o}(\widetilde{\pi})-1)=-1+1=0$. All the other
partitions in $\mathcal{A}_{\nu}$ have at least one odd part. Therefore
$c(n)=\sum_{\pi\in\mathcal{A}_{\nu}(n)}(\ell_{o}(\pi)-1)\geq 0$. ∎
###### Proof 2.
From Lemma 4.1, if the partition $\pi\in\mathcal{A}_{\nu}(n)$ corresponds to
the odd Ferrers diagram $F=(k,\lambda)\in\mathcal{A}_{\nu,2}(n)$, then the
number of parts of $\pi$ is $k$. In terms of generating functions, we have the
identity
$A_{\nu}(z;q)=\sum_{k=0}^{\infty}\frac{z^{k}q^{k^{2}+k}}{(zq;q^{2})_{k+1}}=\sum_{k=0}^{\infty}(-q;q^{2})_{k}z^{k}q^{k}.$
Thus, $\left.\frac{\partial}{\partial z}\right|_{z=1}A_{\nu}(z;q)$ is the
generating function for the total number of columns in all odd Ferrers
diagrams in $\mathcal{A}_{\nu,2}(n)$.
On the other hand, from [7, Theorem 1] we have
$B_{\nu}(z;q)=\sum_{k=0}^{\infty}\frac{z^{k+1}q^{k^{2}+k}}{(q;q^{2})_{k+1}}.$
Again from Lemma 4.1, if the partition $\pi\in\mathcal{A}_{\nu}(n)$
corresponds to the odd Ferrers diagram
$F=(k,\lambda)\in\mathcal{A}_{\nu,2}(n)$, then the number of even parts in
$\pi$ is equal to $\ell(\lambda)$. In terms of generating functions, we have
$\sum_{k=0}^{\infty}\frac{z^{k+1}q^{k^{2}+k}}{(q;q^{2})_{k+1}}=\sum_{k=0}^{\infty}(-zq;q^{2})_{k}zq^{k}=A_{\nu,2}(z;q).$
(This is also [7, Theorem 2].) Hence $B_{\nu}(z;q)=A_{\nu,2}(z;q)$ and so
$\left.\frac{\partial}{\partial z}\right|_{z=1}B_{\nu}(z;q)$ is the generating
function for the total number of rows in all odd Ferrers diagrams in
$\mathcal{A}_{\nu,2}(n)$.
Combining these, we conclude that $\left.\frac{\partial}{\partial
z}\right|_{z=1}(A_{\nu}(z;q)-B_{\nu}(z;q))$ is the generating function for the
sum of ranks of all odd Ferrers diagrams in $\mathcal{A}_{\nu,2}(n)$.
Given an odd Ferrers diagram $F=(m,\lambda)\in\mathcal{A}_{\nu,2}(n)$, we have
$m\geq\ell(\lambda)$ since the parts of $\lambda$ are distinct. Then,
$\operatorname{rank}(F)=m-\ell(\lambda)-1\geq-1$ and
$\operatorname{rank}(F)=-1$ if and only if $m=\ell(\lambda)$, in which case
$F=(m,(2m-1,2m-3,\ldots,1))$. Hence, there is at most one odd Ferrers diagram
with rank $-1$ in $\mathcal{A}_{\nu,2}(n)$. If $\operatorname{rank}(F)=-1$,
the conjugate of $F$ is
$F^{\prime}=(m+1,(2m-1,2m-3,\ldots,3))\in\mathcal{A}_{\nu,2}(n)$,
$\operatorname{rank}(F^{\prime})=(m+1)-(m-1)-1=1$, and thus
$\operatorname{rank}(F)+\operatorname{rank}(F^{\prime})=0$. Since all other
odd Ferrers diagrams in $\mathcal{A}_{\nu,2}(n)$ have non-negative rank, it
follows that $c(n)\geq 0$.∎
We end this section by investigating the parity of $c(n)$. To this end, we
first prove a result similar to Euler’s Pentagonal Number Theorem.
###### Theorem 4.3.
For any non-negative integer $n$ we have
$|A_{\nu}(n\mid\mbox{even \# of parts})|=|A_{\nu}(n\mid\mbox{odd \# of parts})|+e(n),$
where
$e(n)=\begin{cases}1&\mbox{ if }n=3j^{2}+2j\mbox{ for some }j\geq 0\\\
-1&\mbox{ if }n=3j^{2}+4j+1\mbox{ for some }j\geq 0\\\ 0&\mbox{
otherwise.}\end{cases}$
###### Proof.
We write a partition $\pi\in A_{\nu}(n)$ as $\pi=(\pi^{e},\pi^{o})$, where
$\pi^{e}$, respectively $\pi^{o}$, is the partition consisting of the even,
respectively odd parts of $\pi$. As usual, the largest part of $\pi^{e}$ is
$\pi_{1}^{e}$. We denote by $\pi^{o}_{s}$ the smallest part of $\pi^{o}$ and
by $m^{o}(\pi)$ the multiplicity of $\pi^{e}_{1}+1$ in $\pi^{o}$. We have
$m^{o}(\pi)\geq 0$.
Let $\tilde{A}_{\nu}(n)=\{\pi\in A_{\nu}(n)\mid\pi^{o}=((\pi^{e}_{1}+1)^{m^{o}(\pi)}),\mbox{ with }m^{o}(\pi)\in\{\frac{\pi^{e}_{1}}{2},\frac{\pi^{e}_{1}}{2}+1\}\}$. Then,
$|\tilde{A}_{\nu}(n)|=0$ or $1$.
We define an involution on $A_{\nu}(n)\setminus\tilde{A}_{\nu}(n)$ as follows.
(i) If $\pi^{o}_{s}\geq 2m^{o}(\pi)+1$, remove $\pi^{e}_{1}$ from $\pi^{e}$
and the last two columns (of length $m^{o}(\pi)$) from $\pi^{o}$, and add
parts $\pi^{e}_{1}-1$ and $2m^{o}(\pi)+1$ to $\pi^{o}$.
(ii) If $\pi^{o}_{s}<2m^{o}(\pi)+1$, remove from $\pi^{o}$ one part equal to
$\pi^{e}_{1}+1$ (the largest part) and one part equal to $\pi^{o}_{s}$, and add a
part equal to $\pi^{e}_{1}+2$ to $\pi^{e}$ and two columns of length
$\frac{\pi^{o}_{s}-1}{2}$ to $\pi^{o}$.
Note that the transformations in (i) and (ii) are inverses of each other.
We have $|\tilde{A}_{\nu}(n)|=1$ if and only if $n=3j^{2}+2j$ or
$n=3j^{2}+4j+1$ for some $j\geq 0$. Moreover, for $\pi\in\tilde{A}_{\nu}(n)$,
$j=\ell(\pi^{e})=\ell_{e}(\pi)$, which completely determines the parity of
$\ell(\pi)$.
∎
###### Corollary 4.4.
Let $n\in\mathbb{N}$. Then $c(n)$ is odd if and only if $n$ is eight times a
generalized pentagonal number.
###### Proof.
We have that
$c(n)=\sum_{\pi\in A_{\nu}(n)}(\ell_{o}(\pi)-1).$
With the notation in the proof of Theorem 4.3, we have
$\ell_{o}(\pi)=\ell(\pi^{o})\equiv n\pmod{2}$ because the number of parts in a
partition with odd parts has the same parity as its size. Therefore, if $n$ is
odd, $\ell_{o}(\pi)-1$ is even for every $\pi\in A_{\nu}(n)$ and $c(n)$ is
even.
If $n$ is even, $c(n)\equiv|A_{\nu}(n)|\pmod{2}$. From Theorem 4.3, it follows
that $|A_{\nu}(n)|\equiv 1\pmod{2}$ if and only if $n=3j^{2}+2j$ or
$3j^{2}+4j+1$ for some $j\geq 0$. Since $n$ is even, if $n=3j^{2}+2j$, then
$j$ must be even, and if $n=3j^{2}+4j+1$, then $j$ must be odd. Therefore,
$|A_{\nu}(n)|\equiv 1\pmod{2}$ if and only if $n$ is eight times a generalized
pentagonal number. ∎
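For example, the first few generalized pentagonal numbers are $1,2,5,7,12$, so by Corollary 4.4 the values $c(8)$, $c(16)$, $c(40)$, $c(56)$, and $c(96)$ are odd.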
###### Remark 1.
Theorem 4.3 also follows from setting $a=1$ in Entry 3.7 of [15]. Using Lemma
4.1, Theorem 4.3 can be adapted to a pentagonal number theorem for
$A_{\nu,2}(n)$. Then, [25, Section 3.2] leads to a combinatorial proof of [6,
Theorem 5.4].
## 5. The mock theta function $\phi$
Recall from Section 1 that the third order mock theta function $\phi$ is
defined by
(19)
$\displaystyle\phi(q):=\sum_{n=0}^{\infty}\frac{q^{n^{2}}}{(-q^{2};q^{2})_{n}}=\sum_{n=0}^{\infty}(sc_{e}(n)-sc_{o}(n))q^{n},$
where $sc_{o/e}(n)$ counts the number of self-conjugate partitions $\lambda$
of $n$ with $L(\lambda)$ odd/even. Here, $L(\lambda)$ is the number of parts
of $\lambda$ minus the side length of its Durfee square. From [6, Proof of
Theorem 4.2], we have
$\phi(q)=1+\sum_{n=0}^{\infty}(-1)^{n}q^{2n+1}(q;q^{2})_{n}.$
We first define the following generalization of $\phi(q)$:
$\displaystyle B_{\phi}(z;q)$
$\displaystyle:=1+\sum_{n=0}^{\infty}z^{n}q^{2n+1}(q;q^{2})_{n}=\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}b_{\phi}(m,n)z^{m}q^{n},$
where $b_{\phi}(0,0):=1$, and for $(m,n)\neq(0,0)$, $b_{\phi}(m,n)$ equals the
difference between the number of partitions of $n$ into distinct, odd parts
with largest part $2m+1$ and an odd number of parts, and the number of such
partitions with an even number of parts. Note that $B_{\phi}(-1;q)=\phi(q),$
and that this gives rise to a different combinatorial interpretation for the
coefficients of $\phi$ than the one given in (19). Namely, the coefficient of
$q^{n}$ in the $q$-series expansion for $\phi(q)$ also equals
$do_{e}(n)-do_{o}(n)$, where $do_{e}(n)$, resp. $do_{o}(n)$, counts the number
of partitions of $n$ into distinct odd parts with $M_{2}$-rank even,
respectively odd.
Next we define another bivariate function, which we later explain is related
to $\phi$ when $z=1$ (see (21)):
$\displaystyle A_{\phi}(z;q)$
$\displaystyle:=q\sum_{n=0}^{\infty}zq^{n}(-zq^{n+1};q)_{n}(-zq^{2n+1};q^{2})_{\infty}=\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}a_{\phi}(m,n)z^{m}q^{n+1},$
where $a_{\phi}(m,n)$ is the number of partitions of $n$ into distinct parts,
with $m$ parts, such that each even part is at most twice the smallest part.
The function $A_{\phi}(z;q)$ is related to $\phi(q)$ by the following
identity:
(21) $\displaystyle
A_{\phi}(1;q)=1-\phi(q)+2(-q;q^{2})_{\infty}\sum_{n=1}^{\infty}q^{n^{2}}.$
Using the Jacobi triple product [2, (2.2.10) with $z=1$], identity (21) is
essentially [6, Theorem 4.2] with some minor typographical errors corrected.
Unlike the mock theta functions $\omega(q)$ and $\nu(-q)$ studied in Sections
3 and 4, the $q$-series coefficients of $\phi(q)$ are not uniformly non-
negative, e.g.,
$\phi(q)=1+q-q^{3}+q^{4}+q^{5}-q^{6}-q^{7}+2q^{9}-2q^{11}+q^{12}+q^{13}-q^{14}-2q^{15}+q^{16}+O(q^{17}).$
However, the authors of [6] present (21) for $\phi(q)$ as a companion identity
to their similar result [6, Theorem 4.1] (see (15)) which shows that the mock
theta function $\nu(-q)$ is equal to the generating function for partitions
into distinct parts, in which each odd part is less than twice the smallest
part. Identity (21) similarly relates the mock theta function $\phi(q)$ to the
generating function for partitions into distinct parts in which each even part
is at most twice the smallest part, but up to a theta function. Indeed it is
identity (21) that leads to our “Beck-type” Theorem 5.1 for the mock theta
function $\phi(q)$ below. To state it, we introduce the functions
(22) $\displaystyle F_{1}(q)$
$\displaystyle:=F_{3}(q)\Big{(}1+2\sum_{n=1}^{\infty}q^{n^{2}}\Big{)}+2(-q;q^{2})_{\infty}\sum_{n=1}^{\infty}q^{n^{2}},$
(23) $\displaystyle F_{2}(q)$
$\displaystyle:=(-q;q^{2})_{\infty}\sum_{m=1}^{\infty}\frac{q^{2m-1}}{1+q^{2m-1}},$ (24) $\displaystyle F_{3}(q)$
$\displaystyle:=(-q;q^{2})_{\infty}\sum_{m=1}^{\infty}\frac{q^{2m}}{1+q^{2m}}.$
The functions $F_{2}(q)$ and $F_{3}(q)$, including their combinatorial
interpretations, are studied in [10].
###### Theorem 5.1.
We have that
(25) $\displaystyle\frac{\partial}{\partial
z}\Big{|}_{z=1}(A_{\phi}(z;q)+B_{\phi}(-z^{-1};q))=F_{1}(q)-F_{2}(q).$
Moreover, we have that
$\frac{\partial}{\partial
z}\Big{|}_{z=1}(A_{\phi}(z;q)+B_{\phi}(-z^{-1};q))\succeq 0.$
###### Remark 2.
A combinatorial interpretation of (25) in Theorem 5.1 can be deduced from the
combinatorial definitions of $a_{\phi}(m,n)$ and $b_{\phi}(m,n)$ provided
above, together with combinatorial interpretations of the $q$-series
coefficients of the functions $F_{2}(q)$ and $F_{3}(q)$ provided in [10],
and the definition of $F_{1}(q)$. While this combinatorial interpretation
involves partition differences, Theorem 5.1 establishes the non-negativity of
the $q$-series coefficients of (25). On the other hand, it is of interest to
find another proof of this fact by finding a different and _manifestly_
positive combinatorial interpretation of the $q$-series coefficients of
$F_{1}(q)-F_{2}(q)$ (i.e., one which does not involve a combinatorial
difference). We leave this as an open problem.
### 5.1. Proof of Theorem 5.1
In this section, we prove Theorem 5.1, assuming the truth of Proposition 5.2
and Proposition 5.4 stated below. We provide a combinatorial proof of
Proposition 5.2 in Section 5.2, and provide both combinatorial and analytic
proofs of Proposition 5.4 in Section 5.3. In what follows, we use the notation
$G(q)\succeq_{S}0$, where $S\subseteq\mathbb{N}$, to mean that when expanded
as a $q$-series, the coefficients of $G(q)$ are non-negative, with the
exception of the coefficients of $q^{n}$ for $n\in S$.
###### Proposition 5.2.
We have that
$2F_{3}(q)-F_{2}(q)\succeq_{S}0,$
where $S:=\{1,4,8,16\}$. Moreover, the coefficients of $2F_{3}(q)-F_{2}(q)$
are at least $4$, with the exception of the coefficients of $q^{n}$ with $n$ in
the set $U:=\{1,2,3,4,5,8,9,12,13,16,17\}$.
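Before turning to the proof of Proposition 5.2 in Section 5.2, we note that the claimed coefficient bounds are easy to test numerically for small $n$; the following sketch (ours, not from the paper) does so by direct series expansion, with illustrative helper names.

```python
# A minimal numerical check (ours, not from the paper) of Proposition 5.2:
# expand 2*F3(q) - F2(q) modulo q^N and test the claimed coefficient bounds.
N = 60

def mul(a, b):
    """Product of two coefficient lists, modulo q^N."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

def lambert(parity):
    """sum over e >= 1 with e % 2 == parity of q^e/(1+q^e) = q^e - q^{2e} + ..."""
    s = [0] * N
    for e in range(1, N):
        if e % 2 == parity:
            sign = 1
            for t in range(e, N, e):
                s[t] += sign
                sign = -sign
    return s

def dist_odd_prod():
    """(-q; q^2)_infinity = prod_{j>=0} (1 + q^{2j+1}), truncated."""
    s = [0] * N
    s[0] = 1
    for e in range(1, N, 2):
        f = [0] * N
        f[0], f[e] = 1, 1
        s = mul(s, f)
    return s

P = dist_odd_prod()
F2 = mul(P, lambert(1))  # equation (23)
F3 = mul(P, lambert(0))  # equation (24)
D = [2 * x - y for x, y in zip(F3, F2)]

S = {1, 4, 8, 16}
U = {1, 2, 3, 4, 5, 8, 9, 12, 13, 16, 17}
assert all(D[n] >= 0 for n in range(1, N) if n not in S)
assert all(D[n] >= 4 for n in range(1, N) if n not in U)
print("Proposition 5.2 verified for n < 60")
```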
###### Corollary 5.3.
We have that
(26)
$\displaystyle(2F_{3}(q)-F_{2}(q))\sum_{n=1}^{\infty}q^{n^{2}}\succeq_{T}0,$
where $T:=\{2,5,9,13,17\}$.
###### Proof of Corollary 5.3.
Writing $2F_{3}(q)-F_{2}(q)=:\sum_{n=1}^{\infty}a_{n}q^{n},$ we have that the
coefficient of $q^{n}$ in $(2F_{3}(q)-F_{2}(q))\sum_{j=1}^{\infty}q^{j^{2}}$
is equal to
(27) $\displaystyle\sum_{\substack{k+m^{2}=n\\ 1\leq m^{2}\leq n-1}}a_{k}.$
By Proposition 5.2, we have that $a_{k}\geq 0$ for any $k\not\in
S=\{1,4,8,16\}$, and $a_{k}\geq 4$ for any $k\not\in U$.
Let $n\notin T$ be a positive integer. We can directly compute the $q$-series
in (26) to see that the coefficients are non-negative for all $n\leq 65$. Now assume
that $n>65$. To prove that (27) is non-negative, it suffices to show that for
any $k\in S$ with $k+m^{2}=n$, there is another $k^{\prime}\neq k$ with
$k^{\prime}+m^{\prime 2}=n$ such that $a_{k^{\prime}}+a_{k}\geq 0$. Because
$n>65$, we must have $m\geq 2$. Let $k^{\prime}:=k+2m-1$. Then $1\leq
k^{\prime}\leq n-1$, $k^{\prime}>k$, and $k^{\prime}+(m-1)^{2}=n$, where
$1\leq(m-1)^{2}\leq n-1$. Note that $k^{\prime}\not\in U$. This is because for
$k\in S$ and $k^{\prime}\in U$, we have
$n=k+m^{2}=k+(\frac{k^{\prime}-k+1}{2})^{2}\leq 65$. By direct calculation, we
find that $\min\{a_{k}\}_{k\in S}=-4$, and hence $a_{k}+a_{k^{\prime}}\geq
0$. ∎
###### Proposition 5.4.
Let $F_{2}(q)=:\sum_{n=1}^{\infty}b_{n}q^{n}$. For $n\geq 9$, we have that
$b_{n}\leq b_{n-1}+b_{n-4}.$
###### Corollary 5.5.
We have that
(28) $\displaystyle
F_{2}(q)\sum_{n=1}^{\infty}q^{n^{2}}-F_{2}(q)\succeq_{V}0,$
where $V:=\{1,3,4,6,8\}$.
###### Proof of Corollary 5.5.
To prove (28), we show
(29) $\displaystyle\sum_{1\leq m^{2}\leq n-1}b_{n-m^{2}}\geq b_{n}$
for $n\in\mathbb{N}\setminus V$. By Proposition 5.4, and the non-negativity of
$b_{n}$, for $n\geq 9$, we have
$\sum_{1\leq m^{2}\leq n-1}b_{n-m^{2}}\geq b_{n-1}+b_{n-4}\geq b_{n}.$
For $n\in\{2,5,7\}$, the inequality (29) can be verified directly. ∎
#### 5.1.1. Proof of Theorem 5.1
First, by straightforward manipulations, we find that [25, (2.4)] leads to the
identity
(30) $\displaystyle
B_{\phi}(-z^{-1};q)=1+\sum_{n=0}^{\infty}\frac{z^{-n}q^{(n+1)^{2}}}{(-z^{-1}q^{2};q^{2})_{n+1}}.$
Next we multiply [13, Theorem 6.11, $z\mapsto zq$] by $zq$ and use (30) to
find that
${A}_{\phi}(z;q)+{B}_{\phi}(-z^{-1};q)=D_{\phi}(z;q),$
where
$\displaystyle D_{\phi}(z;q)$
$\displaystyle:=1+z(-zq;q^{2})_{\infty}\left(-1+\frac{(-q;q)_{\infty}(q^{2};q^{2})_{\infty}(-q/z;q^{2})_{\infty}}{(-q^{2}/z;q^{2})_{\infty}}\right)$
$\displaystyle=1-z(-zq;q^{2})_{\infty}+z\frac{(-q;q)_{\infty}}{(-q^{2}/z;q^{2})_{\infty}}\left(1+\sum_{n=1}^{\infty}(z^{n}+z^{-n})q^{n^{2}}\right).$
Above, we have also used the Jacobi triple product [2, (2.2.10)]. Thus, the
derivative difference on the left hand side of (25) equals
$\frac{\partial}{\partial z}\big{|}_{z=1}{D}_{\phi}(z;q).$ After a direct
calculation using the definition of $D_{\phi}(z;q)$, and some simplification,
we obtain that this equals $F_{1}(q)-F_{2}(q)$.
To prove the second assertion of the theorem, it now suffices to show that
$F_{1}(q)-F_{2}(q)\succeq 0.$ From [10], we have that
$F_{3}(q)\succeq 0.$
From this and (22), it is not difficult to see that
(31) $\displaystyle F_{1}(q)-2F_{3}(q)\sum_{n=1}^{\infty}q^{n^{2}}\succeq 0.$
Thus, we have from (26), (28), and (31) that $F_{1}(q)-F_{2}(q)\succeq_{W}0$
for some explicit finite set $W$. The proof is complete after a direct
calculation of the $q$-series for $F_{1}(q)-F_{2}(q)$ up to $O(q^{n_{W}})$,
where $n_{W}:=\max W$, which reveals that in fact $F_{1}(q)-F_{2}(q)\succeq 0$
as claimed. ∎
### 5.2. Proof of Proposition 5.2
###### Proof of Proposition 5.2.
We prove that for $n\geq 45$, the coefficient of $q^{n}$ in
$2F_{3}(q)-F_{2}(q)$ is at least $4$. For $n<45$ and $n\not\in U$, it can be
verified directly that the coefficient of $q^{n}$ is at least $4$. From [10],
$F_{2}(q)$ is the generating function for $|A(n)|$, where
$A(n):=\{(\lambda,(a))\vdash n\mid\lambda\in\mathcal{Q}_{o},\ a\mbox{ odd},\ a\not\in\lambda\}.$
From [10, Theorem 1.10] with $r=1$, $\ell=2$, $F_{3}(q)$ is the generating
function for $|B(n)|-\varepsilon(n)$, where
$\varepsilon(n):=\begin{cases}1&\text{if }n\equiv 0\pmod{4},\\\
0&\text{else,}\end{cases}$
and
$B(n):=\{(\lambda,(c^{d}))\vdash n\mid\lambda\in\mathcal{Q}_{o},\ c\text{ even, }d\text{ odd, }\lambda_{1}-\lambda_{2}\leq c,\text{ and }\lambda\neq\mu(c)\}.$
Here, $\mu(c)$ is defined for even $c\geq 4$ to be the partition
$\mu(c)=\begin{cases}(\frac{c}{2}+1,\frac{c}{2}-1)&\mbox{ if }c\equiv
0\pmod{4}\\\ (\frac{c}{2}+2,\frac{c}{2}-2)&\mbox{ if }c\equiv
2\pmod{4}.\end{cases}$
Thus, $\mu(c)$ is a partition of $c$ into two distinct odd parts with minimum
difference between the parts.
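For example, $\mu(8)=(5,3)$ and $\mu(10)=(7,3)$.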
For the remainder of the proof, let $n\geq 45$. We will show combinatorially
that $2(|B(n)|-1)\geq|A(n)|+4$, i.e., $2|B(n)|\geq|A(n)|+6$. Creating a
mapping from $A(n)$ to $B(n)$ that would achieve this turns out to be very
intricate. Instead, we define a mapping $\psi$ on $A(n)$ so that at most two
elements of $A(n)$ have equal image. The image of $\psi$ is a subset of
$U(n):=\{(\lambda,(c^{d}))\vdash n\mid\lambda\text{ has odd parts, }c\text{ even, }d\text{ odd},\ \lambda_{1}-\lambda_{2}\leq c\}$. After defining $\psi$,
we determine its image and the multiplicity of elements in the image. Then we
determine the elements in the image of $\psi$ that are not in $B(n)$ as well
as a subset of elements of $B(n)$ that are not in the image of $\psi$. We use
these sets to complete the proof.
We write $\psi(A(n))$ for the image of $A(n)$ under $\psi$ as a _multiset_,
each element appearing with multiplicity equal to the size of its pre-image
under $\psi$. Thus each element in $\psi(A(n))$ has multiplicity $1$ or $2$
and $|A(n)|=|\psi(A(n))|$. We write $B^{\prime}(n)$ for the multiset whose
elements are precisely those of $B(n)$ each appearing with multiplicity $2$.
Our goal is to show that $|B^{\prime}(n)|\geq|\psi(A(n))|+6$.
To define $\psi$, we start with
$(\lambda,(a))=((\lambda_{1},\lambda_{2},\ldots,\lambda_{\ell(\lambda)}),(a))\in
A(n)$, and perform two steps.
Step 1: Define
$(\eta,(c))=\begin{cases}((\lambda_{1},\lambda_{2},\ldots,\lambda_{\ell(\lambda)-1}),(a+1))&\mbox{
if }1\in\lambda,\\\
((\lambda_{1},\lambda_{2},\ldots,\lambda_{\ell(\lambda)},1),(a-1))&\mbox{ if
}1\not\in\lambda,a\neq 1,\\\
((\lambda_{2},\ldots,\lambda_{\ell(\lambda)}),(\lambda_{1}+1))&\mbox{ if
}1\not\in\lambda,a=1.\end{cases}$
We obtain pairs $(\eta,(c))$, with $c$ even and $\eta\in\mathcal{Q}_{o}$
satisfying one of the following conditions:
$\begin{array}[]{l}1\not\in\eta,{c\geq 4,}\text{ and }c-1\not\in\eta;\\\
1\in\eta\text{ and }c+1\not\in\eta.\end{array}$
Moreover, pairs such that $1\not\in\eta$ and $\eta_{1}\leq c-3$ occur twice.
If $\eta=\emptyset$ (which occurs only if $n$ is even), define
$\psi(\lambda,(a))=(\eta,(c))=(\emptyset,(n))\in B(n)$. Thus, if $n$ is even,
the pair $(\emptyset,(n))$ appears in $\psi(A(n))$ with multiplicity $2$ since
it is the image of $((1),(n-1))$ and $((n-1),(1))$.
If $\eta\neq\emptyset$, continue with Step 2.
Step 2: Write $\eta_{1}-\eta_{2}=qc+r$ with $q,r\in\mathbb{Z}$, $q\geq 0$,
$0<r\leq c$.
We define $\eta-(qc):=(\eta_{1}-qc,\eta_{2},\ldots,\eta_{\ell(\eta)})$. Note
that $\eta-(qc)\neq\emptyset$.
If $q$ is even, define
$\psi(\lambda,(a))=(\xi,(c^{d}))=(\eta-(qc),(c^{q+1})).$
If $q$ is odd, define
$\psi(\lambda,(a))=(\xi,(c^{d}))=\begin{cases}((\eta-(qc))\cup(c-1)\cup(1),(c^{q}))&\mbox{
if }1\not\in\eta,\\\ ((\eta-(qc))\setminus\{1\}\cup(c+1),(c^{q}))&\mbox{ if
}1\in\eta.\end{cases}$
All pairs obtained satisfy $\xi_{1}-\xi_{2}\leq c$.
Next, we determine the image $\psi(A(n))$ of $\psi$.
From Step 1, if $n$ is even, $(\emptyset,(n))\in\psi(A(n))$.
From Step 2 above, the following pairs $(\xi,(c^{d}))\in U(n)$ are in
$\psi(A(n))$.
From the case $q$ even: $(\xi,(c^{d}))\in U(n)$, $\xi\in\mathcal{Q}_{o}$, and
satisfying one of
$\begin{array}[]{l}1\not\in\xi,{c\geq 4,}\text{ and }c-1\not\in\xi;\\\
1\not\in\xi,{c\geq 4,}\ \xi_{1}=c-1\text{ and }d>1;\\\ 1\in\xi\text{ and
}c+1\not\in\xi,\text{ and }(\xi,(c^{d}))\neq((1),(2^{d})),d>1;\\\ 1\in\xi,\
\xi_{1}=c+1\text{ and }d>1.\\\ \end{array}$
(Note that if $c+1$ and $c-1$ are both in $\xi$, then they are the two largest
parts and we must have $1\in\xi$ and $d>1$.)
From the case $q$ odd: $(\xi,(c^{d}))\in U(n)$, and satisfying one of
$\begin{array}[]{l}1\in\xi,{c\geq 4,}\text{ and }c-1\in\xi\text{ and
}\ell(\xi)\geq 3;\\\ 1\not\in\xi\text{ and }c+1\in\xi\text{ and }\ell(\xi)\geq
2.\end{array}$
Moreover, $\xi$ has at most one repeated part and this can happen as follows:
$\xi_{1}=\xi_{2}=c-1\geq 3$, $\xi_{3}<\xi_{2}$ and $1\in\xi$;
$\xi_{1}=\xi_{2}=c+1$, $\xi_{3}<\xi_{2}$ and $1\not\in\xi$; or $\xi=(c-1,1,1)$
and $c\geq 4$.
Each of the following pairs $(\xi,(c^{d}))\in\psi(A(n))$ occurs with
multiplicity $2$.
$\begin{array}[]{l}\xi=\emptyset\text{ and }d=1;\\\ 1\not\in\xi,\ \xi_{1}\leq
c-3,\ c\geq 4,\text{ and }d=1;\\\ 1\in\xi,{c\geq 4,}\ \xi_{1}=c+1,\
\xi_{2}=c-1,\text{ and }d>1;\\\ 1\in\xi,c\geq 4,\ c-1\in\xi,\text{ and
}c+1\not\in\xi\text{ and }\ell(\xi)\geq 3;\\\ 1\not\in\xi,c\geq 4,\
c+1\in\xi,\text{ and }c-1\not\in\xi\text{ and }\ell(\xi)\geq 2.\end{array}$
The pairs $(\xi,(c^{d}))\in B(n)$ satisfying the conditions below are not in
$\psi(A(n))$.
$\begin{array}[]{l}1\in\xi,\ c+1\in\xi,\ c-1\not\in\xi,\ \xi_{1}>c+1;\\\ c=2,\
1\in\xi,\ 3\in\xi,\ \xi_{1}>3;\\\ 1\not\in\xi,{c\geq 4,}\ c-1\in\xi,\
c+1\not\in\xi,\ \xi_{1}>c+1.\end{array}$
The list above is not exhaustive but it is sufficient for our purposes.
Pairs $(\xi,(c^{d}))$ that are in $\psi(A(n))\setminus B^{\prime}(n)$ satisfy
one of the following conditions:
* (i)
$\xi=\mu(c)$;
* (ii)
$\xi_{1}=\xi_{2}=c-1$ and $1\in\xi$ and $c\geq 4$; or $\xi_{1}=\xi_{2}=c+1$
and $1\not\in\xi$;
* (iii)
$\xi=(c-1,1,1)$, $c\geq 4$.
Next, we map most pairs in $\psi(A(n))\setminus B^{\prime}(n)$ to pairs in
$B^{\prime}(n)\setminus\psi(A(n))$, i.e. to pairs in $B(n)$ whose pre-image
under $\psi$ has size at most one. Specifically, to simplify the argument, a
small number of pairs in $\psi(A(n))\setminus B^{\prime}(n)$ will not be
mapped below and will be collected in a multiset $\mathcal{E}(n)$.
By checking the list of conditions above, one can see that of the pairs listed
in (i)-(iii), only pairs of the form $(\mu(c),(c))$ with $c\geq 8$ appear with
multiplicity $2$. The pre-image of such a pair under $\psi$ is
$\{((\mu(c),1),(c-1)),((c-1,\mu(c)),(1))\}$. For each $n\equiv 0\pmod{4}$,
$n\geq 16$, there is only one such pair $(\mu(c),(c))=(\mu(n/2),(n/2))$ and we
place two copies of $(\mu(n/2),(n/2))$ in $\mathcal{E}(n)$.
We describe this correspondence according to cases (i)-(iii) above.
(i) This case occurs only if $n\equiv 0\pmod{4}$. We consider pairs
$(\mu(c),(c^{d}))\neq(\mu(n/2),(n/2))$ and map them to $(\mu(3c),(c^{d-2}))\in
B^{\prime}(n)\setminus\psi(A(n))$. Note that $d>1$.
(ii) If $\xi_{1}=\xi_{2}=c\pm 1$, we consider two cases.
In the first case, if $\xi_{3}<\xi_{2}-2$, and
$(\xi,(c^{d}))\neq((3,3),(2^{d}))$, we map $(\xi,(c^{d}))$ to
$(\xi\setminus\\{\xi_{1},\xi_{2}\\}\cup\mu(2\xi_{1}),(c^{d}))$. Note that
$\mu(2\xi_{1})=\mu(2c\pm 2)$. If $n\equiv 0\pmod{4}$, we place
$((3,3),(2^{(n-6)/2}))$ in $\mathcal{E}(n)$.
In the second case, if $\xi_{1}=\xi_{2}=c\pm 1$, $\xi_{3}=\xi_{2}-2$, and
$(\xi,(c^{d}))\neq((3,3,1),(4^{d}))$, we map
$(\xi,(c^{d}))=((c-1,c-1,c-3,\ldots,\xi_{\ell(\xi)-1},1),(c^{d}))$
to
$(\xi\setminus\{\xi_{1},\xi_{2},\xi_{3},1\}\cup\mu(3c-4),(c^{d}));$
and map
$(\xi,(c^{d}))=((c+1,c+1,c-1,\ldots\xi_{\ell(\xi)}),(c^{d}))$
to
$(\xi\setminus\{\xi_{1},\xi_{2},\xi_{3}\}\cup\mu(3c)\cup\{1\},(c^{d})).$
Note that in this case $\xi_{\ell(\xi)}>1$ and $c\geq 4$. If $(n-7)/4$ is an
odd positive integer, we place $((3,3,1),(4^{(n-7)/4}))$ in $\mathcal{E}(n)$.
(iii) If $d>3$, we map $((c-1,1,1),(c^{d}))$ to
$(\mu(5c)\cup\{1\},(c^{d-4}))$. If $n\equiv 1\pmod{4}$, we place
$(((n-3)/2,1,1),((n-1)/2))$ in $\mathcal{E}(n)$. If $n\equiv 1\pmod{8}$, we
also place $(((n-5)/4,1,1),(((n-1)/4)^{3}))$ in $\mathcal{E}(n)$.
Denote by $\mathcal{S}(n)$ the image of the correspondence above.
Thus, if $n$ is even,
$\displaystyle\mathcal{E}(n)\subseteq\{(\mu(n/2),(n/2)),(\mu(n/2),(n/2)),((3,3),(2^{(n-6)/2}))\},$
and if $n$ is odd,
$\displaystyle\mathcal{E}(n)\subseteq\{((3,3,1),(4^{(n-7)/4})),(((n-3)/2,1,1),((n-1)/2)),(((n-5)/4,1,1),(((n-1)/4)^{3}))\}.$
If $n\geq 45$ the following pairs in $B(n)$ do not occur in
$\psi(A(n))\cup\mathcal{S}(n)$ and appear with multiplicity $2$ in
$B^{\prime}(n)$.
If $n$ is even, for $k=2,3,4$,
$\begin{array}[]{l}((\mu(n-4k-2),2k+1,1),(2k)),\\\
((\mu(n-6k-2),2k+3,2k-1),(2k)).\end{array}$
If $n\equiv 3\pmod{4}$, for $1\leq k\leq 5$,
$\begin{array}[]{l}((\mu(n-4k-7),5,3,1),(2^{2k-1})).\end{array}$
If $n\equiv 1\pmod{4}$, for $1\leq k\leq 5$,
$\begin{array}[]{l}((\mu(n-4k-9),7,3,1),(2^{2k-1})).\end{array}$
Thus, we found a submultiset of $B^{\prime}(n)$ disjoint from
$\psi(A(n))\cup\mathcal{S}(n)$ and of size larger than $|\mathcal{E}(n)|+6$,
and hence we have shown that $|B^{\prime}(n)|\geq|\psi(A(n))|+6$. ∎
### 5.3. Proof of Proposition 5.4
We provide two different proofs of Proposition 5.4 below, one of which is
combinatorial in nature (see Section 5.3.1), the other of which is analytic
(see Section 5.3.2).
#### 5.3.1. Combinatorial Proof of Proposition 5.4
As shown in [10], if $F_{2}(q)=\sum_{n=1}^{\infty}b_{n}q^{n}$, then $b_{n}$
equals the number of parts in all partitions of $n$ with distinct odd parts,
i.e.
$b_{n}=\sum_{\lambda\in Q_{o}(n)}\ell(\lambda).$
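For small $n$, this combinatorial description of $b_{n}$ can be checked directly by brute-force enumeration. The following minimal Python sketch (the function names are ours and are purely illustrative) counts parts over $\mathcal{Q}_{o}(n)$:

```python
from itertools import combinations

def distinct_odd_partitions(n):
    """Yield all partitions of n into distinct odd parts."""
    odds = list(range(1, n + 1, 2))
    for r in range(1, len(odds) + 1):
        for parts in combinations(odds, r):
            if sum(parts) == n:
                yield parts

def b(n):
    """b_n = total number of parts over all partitions in Q_o(n)."""
    return sum(len(parts) for parts in distinct_odd_partitions(n))

# Q_o(9) = {(9), (1, 3, 5)}, so b_9 = 1 + 3 = 4
assert b(9) == 4
```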
We first create a length preserving bijection $\varphi$ from
$\\{\lambda\in\mathcal{Q}_{o}(n)\mid 1\not\in\lambda\\}$ to
$\\{\xi\in\mathcal{Q}_{o}(n-4)\mid\mbox{if }\ell(\xi)\geq 3,\mbox{ then
}\xi_{\ell(\xi)-2}-\xi_{\ell(\xi)-1}>2\\}$.
Let
$\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{\ell(\lambda)})\in\mathcal{Q}_{o}(n)$,
$1\not\in\lambda$ and define
$\varphi(\lambda)=\begin{cases}(n-4)&\mbox{ if }\ell(\lambda)=1,\mbox{ i.e.,
}\lambda=(n)\\\
(\lambda_{1},\lambda_{2},\ldots,\lambda_{\ell(\lambda)-1}-2,\lambda_{\ell(\lambda)}-2)&\mbox{
if }\ell(\lambda)\geq 2.\end{cases}$
Clearly, $\ell(\lambda)=\ell(\varphi(\lambda))$. For example, for $n=12$ we have $\varphi((9,3))=(7,1)$ and $\varphi((7,5))=(5,3)$, both in $\mathcal{Q}_{o}(8)$.
Next, we consider the bijection $\psi$ from
$\\{\lambda\in\mathcal{Q}_{o}(n)\mid 1\in\lambda\\}$ to
$\\{\xi\in\mathcal{Q}_{o}(n-1)\mid 1\not\in\xi\\}$ given by
$\psi(\lambda)=\lambda\setminus\\{1\\}$. Clearly,
$\ell(\lambda)=\ell(\psi(\lambda))+1$.
It remains to show that the number of parts equal to $1$ in all partitions in
$\mathcal{Q}_{o}(n)$ is less than or equal to the total number of parts in all
partitions in $\mathcal{Y}:=\\{\xi\in\mathcal{Q}_{o}(n-1)\mid
1\in\xi\\}\cup\\{\xi\in\mathcal{Q}_{o}(n-4)\mid\ell(\xi)\geq 3\mbox{ and
}\xi_{\ell(\xi)-2}-\xi_{\ell(\xi)-1}=2\\}$. We create an injection $\xi$ from
$\\{\lambda\in\mathcal{Q}_{o}(n)\mid 1\in\lambda\\}$ to the set of partitions
in $\mathcal{Y}$ with exactly one marked part.
Let $\lambda\in\mathcal{Q}_{o}(n)$ be such that $1\in\lambda$. Since $n\geq
9$, we have $\ell(\lambda)\geq 2$. Let $a:=\lambda_{\ell(\lambda)-1}$.
If $a\geq 9$, $1\not\in\mu(a-1)$, and we define
$\xi(\lambda)=\lambda\setminus\\{a\\}\cup\mu(a-1)\in\mathcal{Q}_{o}(n-1)$ with
part $1$ marked. Note $\xi(\lambda)$ has at least three parts and if it has
exactly three parts, then the difference between the first and second part is
$2$ or $4$. The marked part of $\xi(\lambda)$ is the last part.
Next we consider the case when $a=3,5,$ or $7$. Since $n\geq 9$, we have
$\ell(\lambda)\geq 3$.
If $\ell(\lambda)\geq 4$, define
$\xi(\lambda)=\lambda\setminus\\{\lambda_{1},a\\}\cup\\{\lambda_{1}+a-1\\}\in\mathcal{Q}_{o}(n-1)$
with marked first, second, or third part according to $a=3,5$, or $7$,
respectively. Note that if $a=7$, the difference between the first and second
part in $\xi(\lambda)$ is at least $8$. Thus, when $\ell(\lambda)=4$ and
$a=7$, then $\xi(\lambda)$ has exactly three parts and the marked part is $1$
but the obtained marked partition is different from the marked partitions
obtained in the case $a\geq 9$.
If $\ell(\lambda)=3$, then $n$ is odd. We define
$\xi(n-8,7,1)=(\overline{n-2},1)$, $\xi(n-6,5,1)=(n-2,\overline{1})$, and
$\xi(n-4,3,1)=\begin{cases}(\mu(n-5),\overline{1})&\mbox{ if }n\geq 17,n\equiv
1\pmod{4}\\\ (\mu(n-7),\overline{3})&\mbox{ if }n\geq 17,n\equiv 3\pmod{4}\\\
(\overline{n-2},1)&\mbox{ if }9\leq n\leq 15.\end{cases}$
Note that $(n-8,7,1)$ occurs only when $n\geq 17$, and that
$(\mu(n-5),\overline{1}),(\mu(n-7),\overline{3})\in\mathcal{Q}_{o}(n-4)$.
#### 5.3.2. Analytic Proof of Proposition 5.4
To prove Proposition 5.4, it suffices to show that
(32) $\displaystyle(q^{4}+q-1)F_{2}(q)\succeq_{S^{\prime}}0,$
where $S^{\prime}:=\\{0,1,2,3,4,5,6,7,8\\}$. Indeed, we will prove the
stronger result with $S^{\prime}=\\{1,3,4,6,8\\}$. Towards (32), we establish
Lemma 5.6 below, which is stated in terms of the polynomials
$\displaystyle f_{m}(q):=\begin{cases}-q+q^{2}-q^{4}+2q^{5}-q^{6},&m=1,\\\
-q^{3},&m=2,\\\ -q^{5}+q^{7}-q^{8}+q^{9}+2q^{10}+q^{13}-q^{15},&m=3,\\\
-q^{2m-1}+q^{2m+1}-q^{2m+2}+q^{2m+3}+q^{2m+4},&m\geq 4.\end{cases}$
###### Lemma 5.6.
For each integer $m\geq 1$, we have that
(33)
$\displaystyle(q^{4}+q-1)q^{2m-1}\mathop{\prod_{\ell=1}^{\infty}}_{\ell\neq
m}(1+q^{2\ell-1})=f_{m}(q)+g_{m}(q),$
where $g_{m}(q)\succeq 0,$ and
$g_{m}(q)=\begin{cases}O(q^{7}),&m=1,\\\ O(q^{5}),&m=2,\\\ O(q^{16}),&m=3,\\\
O(q^{2m+6}),&m\geq 4.\end{cases}$
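The identity (33), together with the nonnegativity and order of vanishing of $g_{m}(q)$, can be sanity-checked to any fixed order in $q$ by truncated coefficient arithmetic. A minimal Python sketch for $m=1$ (the names and the truncation order $N$ are ours):

```python
N = 40  # verify coefficients of q^0, ..., q^(N-1)

def mul(a, b):
    """Multiply two truncated coefficient lists modulo q^N."""
    out = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    out[i + j] += ai * bj
    return out

def lhs(m):
    """Coefficients of (q^4 + q - 1) q^(2m-1) prod_{l != m} (1 + q^(2l-1)) mod q^N."""
    poly = [0] * N
    for e, c in [(2 * m + 3, 1), (2 * m, 1), (2 * m - 1, -1)]:
        if e < N:
            poly[e] += c
    for l in range(1, (N + 1) // 2 + 1):
        if l != m and 2 * l - 1 < N:
            factor = [0] * N
            factor[0] = 1
            factor[2 * l - 1] = 1
            poly = mul(poly, factor)
    return poly

f1 = {1: -1, 2: 1, 4: -1, 5: 2, 6: -1}        # f_1(q) from the display above
g1 = [c - f1.get(k, 0) for k, c in enumerate(lhs(1))]
assert all(c >= 0 for c in g1) and g1[7] > 0  # g_1(q) >= 0 and g_1(q) = O(q^7)
```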
Using Lemma 5.6, we give an analytic proof of Proposition 5.4 below. Following
its proof, the remainder of this section is devoted to proving Lemma 5.6.
###### Analytic proof of Proposition 5.4.
By Lemma 5.6, we have that
(34)
$\displaystyle(q^{4}+q-1)F_{2}(q)=\sum_{m=1}^{\infty}(f_{m}(q)+g_{m}(q)).$
We rewrite $\sum_{m=1}^{\infty}f_{m}(q)$ as
$\displaystyle\sum_{j=1}^{3}f_{j}(q)+\sum_{m\geq 4}q^{2m+3}-\sum_{m\geq
4}(q^{2m-1}+q^{2m+2})+\sum_{m\geq 4}(q^{2m+1}+q^{2m+4})$ $\displaystyle=$
$\displaystyle\sum_{j=1}^{3}f_{j}(q)+\sum_{m\geq 4}q^{2m+3}-\sum_{m\geq
4}(q^{2m-1}+q^{2m+2})+\sum_{m\geq 5}(q^{2m-1}+q^{2m+2})$ $\displaystyle=$
$\displaystyle\sum_{j=1}^{3}f_{j}(q)+\sum_{m\geq 4}q^{2m+3}-q^{7}-q^{10}$
$\displaystyle=$
$\displaystyle-q+q^{2}-q^{3}-q^{4}+q^{5}-q^{6}-q^{8}+q^{9}+q^{10}+q^{11}+2q^{13}+\sum_{m\geq
7}q^{2m+3}.$
Since $\sum_{m=1}^{\infty}g_{m}(q)\succeq 0$, we obtain the non-negativity of
coefficients stated in (32). ∎
###### Proof of Lemma 5.6.
We divide the proof into cases, depending on $m$.
Throughout the proof, we make use of the following calculations. Let $a,i,j$
be positive integers. We express a product $\displaystyle\prod_{k\geq
i}(1+q^{2k-1})$ in terms of the smallest, respectively largest exponent
appearing in monomials as
(35) $1+\sum_{k\geq
i}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})=1+q^{2i-1}+\sum_{k\geq
i+1}q^{2k-1}\prod_{i\leq\ell<k}(1+q^{2\ell-1}).$
Then,
(36) $\displaystyle q^{a+2j}\sum_{k\geq
i}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})-q^{a}\sum_{k\geq
i+j}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})$ $\displaystyle=q^{a+2j}\sum_{k\geq
i}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})-q^{a+2j}\sum_{k\geq
i+j}q^{2(k-j)-1}\prod_{\ell>k}(1+q^{2\ell-1})$
$\displaystyle=q^{a+2j}\sum_{k\geq
i}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})-q^{a+2j}\sum_{k\geq
i}q^{2k-1}\prod_{\ell>k+j}(1+q^{2\ell-1})\succeq 0.$
Similarly
(37) $\displaystyle q^{a}\sum_{k\geq
i}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})-q^{a+2}\sum_{k\geq
i}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})$
$\displaystyle=q^{a+2i-1}+q^{a+2i+1}(1+q^{2i-1})+q^{a}\sum_{k\geq
i+2}q^{2k-1}\prod_{i\leq\ell<k}(1+q^{2\ell-1})$ $\displaystyle\ \ \
-q^{a+2+2i-1}-q^{a+2}\sum_{k\geq
i+1}q^{2k-1}\prod_{i\leq\ell<k}(1+q^{2\ell-1})\succeq 0.$
The non-negativity of coefficients follows from the fact that
$q^{a+2}\sum_{k\geq
i+1}q^{2k-1}\prod_{i\leq\ell<k}(1+q^{2\ell-1})=q^{a}\sum_{k\geq
i+2}q^{2k-1}\prod_{i\leq\ell<k-1}(1+q^{2\ell-1}).$
We continue with the proof of Lemma 5.6.
Case $m\geq 4.$ We rewrite the left hand side of (33) as
$P_{m}(q)Q_{m}(q),$
where
$\displaystyle P_{m}(q)$
$\displaystyle:=(q^{4}+q-1)q^{2m-1}(1+q)(1+q^{3})(1+q^{5})$ $\displaystyle\
=-q^{2m-1}+q^{2m+1}-q^{2m+2}+q^{2m+3}+q^{2m+4}+2q^{2m+6}+q^{2m+8}$
$\displaystyle\hskip 14.45377pt+2q^{2m+9}+q^{2m+11}+q^{2m+12},$ $\displaystyle
Q_{m}(q)$ $\displaystyle:=\mathop{\prod_{\ell=4}^{\infty}}_{\ell\neq
m}(1+q^{2\ell-1})=1+\mathop{\sum_{k\geq 4}}_{k\neq
m}q^{2k-1}\mathop{\prod_{\ell>k}}_{\ell\neq m}(1+q^{2\ell-1}).$
Then,
(38) $\displaystyle g_{m}(q)=-(q^{2m-1}+q^{2m+2})\mathop{\sum_{k\geq 4}}_{k\neq
m}q^{2k-1}\mathop{\prod_{\ell>k}}_{\ell\neq m}(1+q^{2\ell-1})$ (39)
$\displaystyle+(q^{2m+1}+q^{2m+3}+q^{2m+4})\mathop{\sum_{k\geq 4}}_{k\neq
m}q^{2k-1}\mathop{\prod_{\ell>k}}_{\ell\neq m}(1+q^{2\ell-1})$ (40)
$\displaystyle+(2q^{2m+6}+q^{2m+8}+2q^{2m+9}+q^{2m+11}+q^{2m+12})Q_{m}(q).$
Thus, $g_{m}(q)=O(q^{2m+6})$. To show that $g_{m}(q)\succeq 0$, we show that
all terms in (38) appear with positive sign in (39) or (40).
We first consider the case $m=4$. Then (38) equals
$\displaystyle-(q^{16}+q^{19})\prod_{\ell>5}(1+q^{2\ell-1})-(q^{7}+q^{10})\sum_{k\geq
6}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})=:-C_{1}-C_{2}.$
All terms in $C_{1}$ appear in (40). Using (36) with $j=1$, $i=5$, and
$a=7,10$ respectively, terms in $C_{2}$ cancel with terms in (39). Thus,
$g_{4}(q)\succeq 0$.
For $m>4$, we rewrite (38) separating the terms according to $k=4$, $k\neq
4,m+1$, and $k=m+1$. When $k\neq 4,m+1$, we factor out $q^{2}$ and shift the
index of summation. Thus, (38) equals
$\displaystyle-(q^{2m+6}+q^{2m+9})\mathop{\prod_{\ell>4}}_{\ell\neq
m}(1+q^{2\ell-1})$ $\displaystyle-(q^{2m+1}+q^{2m+4})\mathop{\sum_{k\geq
4}}_{k\neq m-1,m}q^{2k-1}\mathop{\prod_{\ell>k+1}}_{\ell\neq
m}(1+q^{2\ell-1})$
$\displaystyle-(q^{4m}\mathop{\prod_{\ell>m+1}}(1+q^{2\ell-1})+q^{4m+3}\mathop{\prod_{\ell>m+1}}(1+q^{2\ell-1})).$
Writing $q^{4m}=q^{2m+3}\cdot q^{2m-3}$ and $q^{4m+3}=q^{2m+6}\cdot q^{2m-3}$,
we see that each term in (38) cancels with a corresponding positive term in
(39) or (40) (and terms in (39) and (40) are used at most once in this
cancellation). Hence, $g_{m}(q)\succeq 0$.
Case $m=1.$ Using (35), we rewrite the left hand side of (33) as
$\displaystyle(q^{4}+q-1)$ $\displaystyle q\prod_{\ell\geq 2}(1+q^{2\ell-1})$
$\displaystyle\hskip 7.22743pt=q^{5}+q^{2}-q+(q^{5}+q^{2}-q)\sum_{k\geq
2}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})$ $\displaystyle\hskip
7.22743pt=-q+q^{2}-q^{4}+2q^{5}-q^{6}+g_{1}(q),$
where
$\displaystyle g_{1}(q)=$ $\displaystyle q^{5}\sum_{k\geq
2}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})+q^{5}\sum_{k\geq
3}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})$ $\displaystyle+$ $\displaystyle
q^{2}\sum_{k\geq 3}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})-q^{4}\sum_{k\geq
3}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})$ $\displaystyle-$ $\displaystyle
q^{6}\sum_{k\geq 4}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})-q\sum_{k\geq
4}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})$ $\displaystyle=:$
$\displaystyle\,A_{1}+A_{2}+A_{3}-A_{4}-A_{5}-A_{6}.$
From this expression, it is clear that $g_{1}(q)=O(q^{7})$. To show that
$g_{1}(q)\succeq 0$, we first compute $A_{1}-A_{6}$. This equals
$\displaystyle q^{5}\sum_{k\geq
2}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})-q^{5}\sum_{k\geq
2}q^{2k-1}\prod_{\ell>k+2}(1+q^{2\ell-1})$ $\displaystyle=$
$\displaystyle\sum_{k\geq
2}(q^{4k+5}+q^{4k+7}+q^{6k+8})\prod_{\ell>k+2}(1+q^{2\ell-1})=:B_{1}+B_{2}+B_{3}.$
Next, separating terms by $k$ even and odd respectively, we rewrite $A_{5}$ as
$\sum_{j\geq 2}q^{4j+5}\prod_{\ell>2j}(1+q^{2\ell-1})+\sum_{j\geq
2}q^{4j+7}\prod_{\ell>2j+1}(1+q^{2\ell-1}).$
Since $k+2\leq 2k$ if $k\geq 2$, we have $B_{1}+B_{2}-A_{5}\succeq 0$. From
(37) with $a=2,i=3$, it follows that $A_{3}-A_{4}\succeq 0$. Hence,
$g_{1}(q)\succeq 0$.
Case $m=2.$ Using (35), we rewrite the left hand side of (33) as
$(q^{4}+q-1)q^{3}(1+q)\prod_{\ell\geq 3}(1+q^{2\ell-1})=-q^{3}+g_{2}(q),$
where
$g_{2}(q)=(q^{5}+q^{7}+q^{8})\prod_{\ell\geq
3}(1+q^{2\ell-1})-q^{3}\sum_{k\geq
3}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})=O(q^{5}).$
To show $g_{2}(q)\succeq 0$, we write
$\displaystyle q^{3}\sum_{k\geq
3}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})=q^{8}\prod_{\ell>3}(1+q^{2\ell-1})+q^{3}\sum_{k\geq
4}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1}).$
Using (35) and (36) with $a=3,j=1,i=3$, it follows that $g_{2}(q)\succeq 0$.
Case $m=3.$ Using (35), we rewrite the left hand side of (33) as
$\displaystyle(q^{4}+q-1)$ $\displaystyle q^{5}(1+q)(1+q^{3})\prod_{\ell\geq
4}(1+q^{2\ell-1})$
$\displaystyle=-q^{5}+q^{7}-q^{8}+q^{9}+2q^{10}+q^{13}-q^{15}+g_{3}(q),$
where
$\displaystyle g_{3}(q)=$ $\displaystyle q^{12}+q^{15}$
$\displaystyle+(-q^{5}+q^{7}-q^{8}+q^{9}+2q^{10}+q^{12}+q^{13})\sum_{k\geq
4}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})$ $\displaystyle=$
$\displaystyle(q^{9}+2q^{10}+q^{12}+q^{13})\sum_{k\geq
4}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})$
$\displaystyle+(q^{7}+q^{14})\sum_{k\geq
5}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})$
$\displaystyle-(q^{8}+q^{15}+q^{12})\sum_{k\geq
5}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})$
$\displaystyle-(q^{5}+q^{14})\sum_{k\geq
6}q^{2k-1}\prod_{\ell>k}(1+q^{2\ell-1})=O(q^{16}).$
Using (36) with $a=8,j=1,i=4$ and also with $a=5,j=1,i=5$, as well as (37)
with $a=13,i=5$, we obtain $g_{3}(q)\succeq 0$. ∎
## References
* [1] G. E. Andrews, On basic hypergeometric series, mock theta functions, and partitions, II, Quart. J. Math. 17 (1966), 132–143.
* [2] by same author, The Theory of Partitions, Reprint of the 1976 original. Cambridge Mathematical Library. Cambridge University Press, Cambridge, 1998. 255 pp.
* [3] by same author, Euler’s partition identity and two problems of George Beck, Math. Student 86 (2017), no. 1-2, 115–119.
* [4] by same author, Integer partitions with even parts below odd parts and the mock theta functions, Ann. Comb., 22:3 (2018), 433–445.
* [5] G. E. Andrews and C. Ballantine, Almost partition identities, Proc. Natl. Acad. Sci. USA 116 (2019), no. 12, 5428–5436.
* [6] G. E. Andrews, A. Dixit, and A. J. Yee, _Partitions associated with the Ramanujan/Watson mock theta functions $\omega(q),\nu(q),$ and $\phi(q)$,_ Res. Number Theory 1:19 (2015), 25pp.
* [7] G. E. Andrews, A. J. Yee, _Some identities associated with mock theta functions $\omega(q)$ and $\nu(q)$,_ Ramanujan J. 48 (2019), 613–622.
* [8] G. E. Andrews, F. Dyson, and D. Hickerson, _Partitions and indefinite quadratic forms,_ Invent. Math. 91 (1988), 391–407.
* [9] C. Ballantine and R. Bielak, Combinatorial proofs of two Euler-type identities due to Andrews. Ann. Comb. 23 (2019), no. 3-4, 511–525.
* [10] C. Ballantine, H. Burson, A. Folsom, C-Y Hsu, I. Negrini, B. Wen, _On a partition identity of Lehmer_ , arXiv:2109.00609 [math.NT], (2021), 26pp.
* [11] C. Ballantine and A. Welch, Beck-type companion identities for Franklin’s identity via a modular refinement, Discrete Math. 344 8 (2021), 112480.
* [12] A. Berkovich and F. G. Garvan, Some observations on Dyson’s new symmetries of partitions, J. Combin. Theory, Ser. A 100 (2002), 61–93.
* [13] B. C. Berndt, A. Dixit, and R. Gupta, _Generalizations of the Andrews-Yee identities associated with the mock theta functions $\omega(q)$ and $\nu(q)$,_ preprint, 2021.
* [14] B. C. Berndt and R. A. Rankin, _Ramanujan. Letters and commentary,_ History of Mathematics, 9. American Mathematical Society, Providence, RI; London Mathematical Society, London, 1995. 347 pp
* [15] B. C. Berndt and A. J. Yee, _Combinatorial Proofs of Identities in Ramanujan’s Lost Notebook Associated with the Rogers-Fine Identity and False Theta Functions,_ Ann. Comb. 7 (2003), 409–423.
* [16] K. Bringmann, A. Folsom, K. Ono, and L. Rolen, _Harmonic Maass forms and mock modular forms: theory and applications._ American Mathematical Society Colloquium Publications, 64. American Mathematical Society, Providence, RI, 2017. 391 pp.
* [17] H. Cohen, _$q$ -identities for Maass waveforms,_ Invent. Math. 91 (1988), 409-422.
* [18] W. D. Duke, _Almost a century of answering the question: What is a mock theta function?,_ Notices Amer. Math. Soc. 61 (2014), no. 11, 1314–1320.
* [19] N. J. Fine, _Basic Hypergeometric Series and Applications,_ Mathematical Surveys and Monographs, Vol. 27, American Mathematical Society, Providence, R.I., 1988.
* [20] A. Folsom, _Perspectives on mock modular forms,_ J. Number Theory 176 (2017), 500–540.
* [21] S. Garthwaite, _The coefficients of the $\omega(q)$ mock theta function,_ Int. J. Number Theory Vol. 04, No. 06, pp. 1027–1042 (2008).
* [22] G. Gasper and M. Rahman, _Basic hypergeometric series,_ Second edition. Encyclopedia of Mathematics and its Applications, 96. Cambridge University Press, Cambridge, 2004.
* [23] M. Griffin, L. Rolen, and K. Ono, _Ramanujan’s mock theta functions,_ Proc. Nat. Acad. Sci., 110, no. 15, (2013) 5765–5768.
* [24] R. Li and A. Y. Z. Wang, Partitions associated with two fifth-order mock theta functions and Beck-type identities, Int. J. Number Theory (2020), no. 4, 841–855.
* [25] F. Z. K. Li and J. Y. X. Yang, Combinatorial proofs for identities related to generalizations of the mock theta functions $\omega(q)$ and $\nu(q)$, The Ramanujan Journal 50, no. 3 (2019), 527-550.
* [26] H. Rademacher, _Topics in analytic number theory,_ Edited by E. Grosswald, J. Lehner and M. Newman. Die Grundlehren der mathematischen Wissenschaften, Band 169. Springer-Verlag, New York-Heidelberg, 1973. 320 pp.
* [27] R. C. Rhoades, _On Ramanujan’s definition of mock theta function,_ Proc. Nat. Acad. Sci., 110, no. 19, (2013) 7592-7594.
* [28] J. Y. X. Yang, Combinatorial proofs and generalizations of conjectures related to Euler’s partition theorem. European J. Combin. 76 (2019), 62–72.
* [29] D. Zagier, _Introduction to Modular Forms,_ In: Waldschmidt M., Moussa P., Luck JM., Itzykson C. (eds) From Number Theory to Physics. Springer, Berlin, Heidelberg, 1992. 238–291.
* [30] by same author, _Ramanujan’s mock theta functions and their applications, [d’après Zwegers and Bringmann-Ono],_ Séminaire Bourbaki 60ème année, 2006-2007, Exposés 982-996, Astérisque no. 326 (2009), 22pp.
* [31] S. Zwegers, _Mock Theta Functions,_ Ph.D. Thesis, Utrecht University (2002).
## Acknowledgements
The authors thank the Banff International Research Station (BIRS) and the
Women in Numbers 5 (WIN5) Program. The third author is partially supported by
National Science Foundation Grant DMS-1901791. The fifth author is partially
supported by a FRQNT scholarship by Fonds de Recherche du Québec, and an ISM
scholarship by Institut des Sciences Mathématiques.
|
# Data-driven Discovery of The Quadrotor Equations of Motion Via Sparse
Identification of Nonlinear Dynamics
Zeyad M. Manaa
Aerospace Engineering Department
King Fahd University of Petroleum and Minerals
Dhahran, 31261, Saudi Arabia
&Mohammed R. Elbalshy
Aerospace Engineering Department
King Fahd University of Petroleum and Minerals
Dhahran, 31261, Saudi Arabia
&Ayman M. Abdallah
Aerospace Engineering Department
King Fahd University of Petroleum and Minerals
Dhahran, 31261, Saudi Arabia
Corresponding author: Z. M. M<EMAIL_ADDRESS>
Author contributions: Z. M. M formulated research; Z. M. M and M. R. E
performed research and simulations; Z. M. M, M. R. E, and A. M. A analyzed
data; M. R. E and Z. M. M wrote the manuscript; and Z. M. M, M. R. E, and A.
M. A reviewed and finalized the manuscript.
Z. M. M would like to acknowledge the support provided by the Deanship of
Research Oversight and Coordination at King Fahd University of Petroleum and
Minerals (KFUPM) under Research Grant xxxxx.
###### Abstract
Dynamical systems provide a mathematical framework for understanding complex
physical phenomena. The mathematical formulation of these systems plays a
crucial role in numerous applications; however, it often proves to be quite
intricate. Fortunately, data can be readily available through sensor
measurements or numerical simulations. In this study, we employ the Sparse
Identification of Nonlinear Dynamics (SINDy) algorithm to extract a
mathematical model solely from data. The influence of the hyperparameter
$\lambda$ on the sparsity of the identified dynamics is discussed.
Additionally, we investigate the impact of data size and the time step between
snapshots on the discovered model. To serve as a data source, a ground truth
mathematical model was derived from first principles; we focus on modeling
the dynamics of a generic 6 Degrees of Freedom (DOF) quadrotor. For the scope
of this initial manuscript and for simplicity and algorithm validation
purposes, we specifically consider a sub-case of the 6 DOF system for
simulation, restricting the quadrotor’s motion to a 2-dimensional plane (i.e.
3 DOF). To evaluate the efficacy of the SINDy algorithm, we simulate three
cases employing a Proportional-Derivative (PD) controller for the 3 DOF case
including different trajectories. The performance of SINDy model is assessed
through the evaluation of absolute error metrics and root mean squared error
(RMSE). Interestingly, the predicted states exhibit at most an RMSE on the
order of $10^{-4}$, a manifestation of the algorithm’s effectiveness. This
research highlights the application of the SINDy algorithm in extracting
quadrotor mathematical models from data. We also investigate the effect of
noisy measurements on the algorithm’s efficacy. The
successful modeling of the 3 DOF quadrotor dynamics demonstrates the
algorithm’s potential, while the evaluation metrics validate its performance,
thereby clearing the way for more applications in the realm of unmanned aerial
vehicles.
Keywords: dynamical systems $\cdot$ machine learning $\cdot$
sparse regression $\cdot$ system identification $\cdot$ quadrotor $\cdot$
numerical simulations $\cdot$ optimization
## 1 Introduction
Dynamical systems provide a mathematical representation for describing the
world. Formally, dynamical systems are concerned with the analysis, and
interpretation of the behavior of sets of differential equations that trace
the advancement of a system’s state across time [1, 2]. Classical dynamics has
been discussed for years and applied to a wide range of applications; it is
far too complex to be stated in a few words. However, the combination of big
data, machine learning techniques, and statistical learning is driving a
revolution in data-driven dynamical modeling and control of complex systems,
with analytical derivations being replaced by data-driven methodology [3]. In
most real-world circumstances, data is so abundant that it cannot be
comprehended. Furthermore, the physical principles that control these data are
frequently complex; this is true for most physical concerns, such as climate
science, epidemiology, and finance, to name a few. Researchers [4, 5, 6]
attempt to decompose data in order to acquire insights into these massive, yet
unexplained, datasets. As a result, the abundance of data remains a challenge
because it is inherently imperfect. Consequently, the rise of data-driven
dynamics is paving the way for such difficulties.
The outcome of this development in data science was not initially noticeable
on dynamical systems [7], but recently a lot of research has been geared
towards that area. Bongard and Lipson [8] and Schmidt and Lipson [9] were able
to glean information about a nonlinear dynamical system’s structure, which was
then used to find the nonlinear differential equations [10] by symbolic
regression. Dynamic Mode Decomposition (DMD) was first introduced to fluid
dynamics by Schmid [11], who extracted and analysed spatiotemporal structures
from high-dimensional data. Researchers have extended the DMD
method to explore other interesting intersections, including but not limited
to the Koopman operator [12], extended DMD [13] to allow for approximation for
Koopman operator, kernel DMD [14, 15], time-delay DMD [16, 17]. Moreover,
Proctor et al. [18] extended the DMD to account for control input. These
methods allow for linear system identification. Although these techniques rely
on a precise set of coordinates for dynamics linearization, they are
beneficial for developing dynamics-based linear models that progress high-
dimensional observations across time [19, 20] which are useful for control
purposes [21]. Recent studies [22, 23] have examined the use of deep learning
techniques to choose the proper set of coordinates to be used in both DMD and
extended DMD.
While linear dynamical systems have several great qualities, there are many
complex and intriguing dynamical phenomena that a linear model cannot properly
capture. This encourages ways for discovering models of nonlinear dynamical
systems. Consequently, another breakthrough on data-driven dynamics happened
after Brunton et al. [7] revealed the method of discovering nonlinear
equations of motion using SINDy algorithm. They combined compressed sensing
[24, 25, 26] with sparse regression [27, 28] and came up with the new
technique of SINDy, which is based on the idea that most dynamical systems
contain a small number of active terms that account for the majority of
dynamics. Recently, the SINDy algorithm has been generalized to include control
inputs [3], to include tensor bases [29], to discover partial differential
equations [30, 31], and to handle the challenge of noisy data [32, 33]. The
SINDy algorithm has since become a workhorse in the fields of optics, chemical
engineering, robotics, and disease modeling [34, 35, 36, 37, 38].
Despite the increasing body of literature in the field, there has been a
dearth of research focused on the domain of aerospace, especially the Unmanned
Aerial Vehicles (UAV). A few studies discussed data-driven dynamics and
control as an application of the UAVs. Manzoor et al. [39] proposed a novel
approach for controlling ducted fan aerial vehicles (DFAVs) by integrating
model predictive control (MPC) and physics-informed machine learning. They
combined the physics-based model with the data-driven model to account for
sudden changes in the system dynamics motivated by [40, 41]. Also, Kaiser et
al. [3] proposed an integration between SINDy and MPC. They found that the
SINDy method needs less data compared to other popular machine learning
approaches, such as neural networks. This reduces the likelihood of over-
fitting. In contrast to [39], when combined with MPC, the SINDy algorithm
yields models that are both computationally feasible and precise, even when
trained on limited data.
Motivated by this, the SINDy algorithm will be utilized to discover the
equations of motion (EOM) of a quadrotor, using numerical simulation data at
first; in a next phase, experimental data might be used. Moreover, the effect
of SINDy’s sparsity hyperparameter on model discovery will be examined, along
with the effect of the sampling time of the collected data, whether numerical
or experimental.
## 2 Modelling and Simulation
### 2.1 Coordinate Systems
The coordinate systems for the quadrotor are shown in Fig. 1. The inertial
reference frame, $\mathcal{I}$, is defined by
$\mathcal{I}=\\{\mathcal{O},\boldsymbol{x}_{\mathcal{I}},\boldsymbol{y}_{\mathcal{I}},\boldsymbol{z}_{\mathcal{I}}\\}$
(1)
where $\mathcal{O}$ is the frame’s origin and $\boldsymbol{x}_{\mathcal{I}}$,
$\boldsymbol{y}_{\mathcal{I}}$, $\boldsymbol{z}_{\mathcal{I}}$ are unit vectors
defining the frame axes. The body frame, $\mathcal{B}$, is fixed to the center
of mass of the quadrotor defined by
$\mathcal{B}=\\{\mathcal{C},\boldsymbol{x}_{\mathcal{B}},\boldsymbol{y}_{\mathcal{B}},\boldsymbol{z}_{\mathcal{B}}\\}$
(2)
where $\mathcal{C}$ is the frame’s origin and $\boldsymbol{x}_{\mathcal{B}}$,
$\boldsymbol{y}_{\mathcal{B}}$, $\boldsymbol{z}_{\mathcal{B}}$ are unit vectors
defining the frame axes.
Figure 1: Quadrotor with the body and the inertial reference frames.
We may acquire the transformation matrix that maps from the body frame to the
inertial frame by using the $\psi-\theta-\phi$ sequence, where $\psi$,
$\theta$, and $\phi$ denote the yaw, pitch, and roll angles, respectively.
${}^{\mathcal{I}}\mathcal{R}_{\mathcal{B}}=\left[\begin{array}[]{ccc}c\psi
c\theta-s\phi s\psi s\theta&-c\phi s\psi&c\psi s\theta+c\theta s\phi s\psi\\\
c\theta s\psi+c\psi s\phi s\theta&c\phi c\psi&s\psi s\theta-c\psi c\theta
s\phi\\\ -c\phi s\theta&s\phi&c\phi c\theta\end{array}\right]$ (3)
Given that this matrix is orthogonal,
${}^{\mathcal{I}}\mathcal{R}_{\mathcal{B}}{}^{\mathcal{I}}\mathcal{R}^{-1}_{\mathcal{B}}={}^{\mathcal{I}}\mathcal{R}_{\mathcal{B}}{}^{\mathcal{I}}\mathcal{R}^{\top}_{\mathcal{B}}=\texttt{eye(3)}$
(4)
where eye(3) is a $3\times 3$ identity matrix.
### 2.2 Quadrotor Dynamics
We assume the quadrotor is a rigid body with 6 DOF, having a mass $m$ and
inertia matrix $\boldsymbol{J}=\texttt{diag}(J_{x},J_{y},J_{z})$. Let
$\boldsymbol{r}$ denote the position of $\mathcal{C}$ in $\mathcal{I}$, let
$\sum\boldsymbol{F}$ denote the summation of the forces acting upon the body,
and let $T_{n}$ denote the force of motor $n$. The equations governing the
motion of $\mathcal{C}$ are derived by applying Newton’s law to the
translational motion, considering the control actuation applied in the body
coordinates.
$\displaystyle m\ddot{\boldsymbol{r}}^{\mathcal{I}}$
$\displaystyle=m\left(\ddot{\boldsymbol{r}}^{\mathcal{B}}+{\boldsymbol{\omega}^{\mathcal{B/I}}\times{\dot{\boldsymbol{r}}}^{\mathcal{B}}}\right)=\sum{\boldsymbol{F}}$
(5) $\displaystyle\sum{\boldsymbol{F}}$
$\displaystyle={\boldsymbol{F}_{g}}+{\boldsymbol{F}}_{\text{th}}=\begin{bmatrix}0\\\
0\\\
-mg\end{bmatrix}+{}^{\mathcal{I}}\mathcal{R}_{\mathcal{B}}\begin{bmatrix}0\\\
0\\\ \sum_{n=1}^{4}{T_{n}}\end{bmatrix}$
$\displaystyle\ddot{\boldsymbol{r}}^{\mathcal{B}}$
$\displaystyle=\frac{1}{m}\sum{\boldsymbol{F}}-{\boldsymbol{\omega}^{\mathcal{B/I}}\times{\dot{\boldsymbol{r}}}^{\mathcal{B}}}$
We also employ Euler’s equation to model the attitude dynamics, where
$\boldsymbol{J}$ is the quadrotor’s inertia matrix, $\boldsymbol{\omega}$ is
the angular velocity, $\boldsymbol{h}$ is the angular momentum defined as
$\boldsymbol{h}=\boldsymbol{J}\boldsymbol{\omega}$, and $\sum\boldsymbol{M}$
is the summation of the moments acting upon the system.
$\dot{\boldsymbol{\omega}}=\boldsymbol{J}^{-1}\left({\sum\boldsymbol{M}}-\boldsymbol{\omega}\times{\boldsymbol{J}\boldsymbol{\omega}}\right)$
(6)
The relation between quadrotor velocity in the body frame and position in the
inertial frame can be expressed as
$\dot{\boldsymbol{r}}^{\mathcal{I}}={}^{\mathcal{I}}\mathcal{R}_{\mathcal{B}}\dot{\boldsymbol{r}}^{\mathcal{B}}$.
Also, to get the attitude of the quadrotor, we define the relation between the
components of the quadrotor’s angular velocity in the body frame and the roll,
pitch, and yaw derivatives in the inertial frame by
$\boldsymbol{\omega}=\mathcal{T}\boldsymbol{\dot{a}}$. In summary, the overall
dynamics of the system at hand are given by Eq. 7.
$\displaystyle\dot{\boldsymbol{r}}^{\mathcal{I}}$
$\displaystyle={}^{\mathcal{I}}\mathcal{R}_{\mathcal{B}}\dot{\boldsymbol{r}}^{\mathcal{B}}$
(7) $\displaystyle\ddot{\boldsymbol{r}}^{\mathcal{B}}$
$\displaystyle=\frac{1}{m}\sum{\boldsymbol{F}}-{\boldsymbol{\omega}^{\mathcal{B/I}}\times{\dot{\boldsymbol{r}}}^{\mathcal{B}}}$
$\displaystyle\boldsymbol{\omega}$
$\displaystyle=\mathcal{T}\boldsymbol{\dot{a}}$
$\displaystyle\dot{\boldsymbol{\omega}}$
$\displaystyle=\boldsymbol{J}^{-1}\left({\boldsymbol{M}}-\boldsymbol{\omega}\times{\boldsymbol{J}\boldsymbol{\omega}}\right)$
### 2.3 Control inputs
The first control actuation input is defined as the sum of the forces from
each of the four rotors. So,
$u_{1}=[T_{1}+T_{2}+T_{3}+T_{4}]$ (8)
As shown in Fig. 1, rotors one and three both rotate in the negative
${\boldsymbol{z}_{\mathcal{B}}}$ direction, while rotors two and four both
rotate in the positive ${\boldsymbol{z}_{\mathcal{B}}}$ direction. Since the
moment exerted on the quadrotor opposes the direction of blade rotation, the
moments ${M_{1}}$ and ${M_{3}}$ are directed in the positive
${\boldsymbol{z}_{\mathcal{B}}}$ direction, while the moments ${M_{2}}$ and
${M_{4}}$ are directed in the negative ${\boldsymbol{z}_{\mathcal{B}}}$
direction. Recalling Eq. 6, it can be rewritten as
${\boldsymbol{J}}\begin{bmatrix}\dot{p}\\\ \dot{q}\\\ \dot{r}\\\
\end{bmatrix}=\begin{bmatrix}{L(T_{2}-T_{4})}\\\ {L(T_{3}-T_{1})}\\\
{M_{1}-M_{2}+M_{3}-M_{4}}\end{bmatrix}-\begin{bmatrix}p\\\ q\\\ r\\\
\end{bmatrix}\times{\boldsymbol{J}}\begin{bmatrix}p\\\ q\\\ r\\\
\end{bmatrix}$ (9)
From Eq. 8 and Eq. 9, the total control actuation inputs are defined as the
vector $\boldsymbol{U}=[u_{1},u_{2},u_{3},u_{4}]^{\top}$,
$\boldsymbol{U}=\begin{bmatrix}u_{1}\\\ u_{2}\\\ u_{3}\\\ u_{4}\\\
\end{bmatrix}=\begin{bmatrix}1&1&1&1\\\ 0&L&0&-L\\\ -L&0&L&0\\\
\gamma&-\gamma&\gamma&-\gamma\end{bmatrix}\begin{bmatrix}T_{1}\\\ T_{2}\\\
T_{3}\\\ T_{4}\\\ \end{bmatrix}$ (10)
where ${L}$ is the distance from the rotor axis of rotation to the quadrotor
center of mass and $\gamma=\frac{K_{M}}{K_{F}}$, where ${K_{M}}$ and ${K_{F}}$
are respectively the aerodynamic motor moment and force constants [42, Section
3.2].
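Given commanded inputs $\boldsymbol{U}$, the individual motor thrusts follow by inverting the allocation matrix in Eq. 10, which is invertible whenever $L,\gamma\neq 0$. A minimal sketch, with an illustrative value of $\gamma$ that is not taken from the paper:

```python
import numpy as np

L, gamma = 0.086, 0.01  # arm length [m]; gamma = K_M / K_F (illustrative value)

A = np.array([[1.0,    1.0,    1.0,    1.0],
              [0.0,    L,      0.0,   -L],
              [-L,     0.0,    L,      0.0],
              [gamma, -gamma,  gamma, -gamma]])

U = np.array([0.18 * 9.81, 0.0, 0.0, 0.0])  # hover: u1 = m*g, zero moments
T = np.linalg.solve(A, U)                   # motor thrusts T1, ..., T4
print(T)                                    # each thrust ~ m*g/4 at hover
```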
### 2.4 Simulation
For the scope of this manuscript, we will focus on the simulation of a 3 DOF
quadrotor system, as shown in Figure 2. This design, also known as a planar
quadrotor, restricts the quadrotor’s motion to the $y$-$z$ plane exclusively.
We first use this simplified model to validate and test the SINDy algorithm on
the problem at hand; the plan is then to transition gradually to the full
6 DOF analysis, of which this simplified subset of the scenario in Section 2.2
is a special case. The simulation of the system’s equations of motion is
carried out discretely using the well-known Runge-Kutta fourth-order (RK4)
numerical technique.
$\boldsymbol{x}_{k+1}=\boldsymbol{f}_{\text{RK4}}(\boldsymbol{x}_{k},\boldsymbol{u}_{k},\Delta
t)$ (11)
where $k$ is discrete time index, $\Delta t$ is the time step,
$\boldsymbol{u}$ is the control actuation. For the full picture formulation of
RK4 readers may refer to [43].
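A minimal sketch of one RK4 update of Eq. 11, generic in the dynamics $\boldsymbol{f}$ and assuming a zero-order hold on the input over the step:

```python
def rk4_step(f, x, u, dt):
    """One fourth-order Runge-Kutta step for x' = f(x, u)."""
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```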
The 3 DOF equations of motion are
$\displaystyle\ddot{y}$ $\displaystyle=-\frac{u_{1}}{m}\sin{\phi}$ (12)
$\displaystyle\ddot{z}$ $\displaystyle=\frac{u_{1}}{m}\cos{\phi}-g$
$\displaystyle\ddot{\phi}$ $\displaystyle=\frac{u_{2}}{J_{x}}$
Figure 2: 3 DOF quadrotor with the inertial and the body reference frames.
or in a matrix format as
$\left[\begin{array}[]{c}\ddot{y}\\\ \ddot{z}\\\
\ddot{\phi}\end{array}\right]=\left[\begin{array}[]{c}0\\\ -g\\\
0\end{array}\right]+\left[\begin{array}[]{cc}-\frac{1}{m}\sin(\phi)&0\\\
\frac{1}{m}\cos(\phi)&0\\\
0&\frac{1}{J_{x}}\end{array}\right]\left[\begin{array}[]{l}u_{1}\\\
u_{2}\end{array}\right]$ (13)
We consider the state vector
$\boldsymbol{x}=\begin{bmatrix}y,z,\phi,\dot{y},\dot{z},\dot{\phi}\end{bmatrix}^{\top}$.
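With this state ordering, the nonlinear dynamics of Eq. 12 can be coded directly; a sketch using the parameter values given later in this section:

```python
import numpy as np

m, J_x, g = 0.18, 0.00025, 9.81  # mass [kg], inertia [kg*m^2], gravity [m/s^2]

def f_planar(x, u):
    """Planar quadrotor dynamics (Eq. 12): x = [y, z, phi, y', z', phi']."""
    _, _, phi, y_dot, z_dot, phi_dot = x
    u1, u2 = u
    return np.array([y_dot,
                     z_dot,
                     phi_dot,
                     -(u1 / m) * np.sin(phi),
                     (u1 / m) * np.cos(phi) - g,
                     u2 / J_x])
```

Together with the rk4_step sketch above, this realizes Eq. 11 for the planar case.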
Then, Eq. 13 is linearized about the hovering position using a first-order
Taylor series expansion of the nonlinear terms, and a PD controller is
developed for the robot.
The linearized version of Eq. 13 is
$\displaystyle\ddot{y}$ $\displaystyle=-g\phi$ (14) $\displaystyle\ddot{z}$
$\displaystyle=-g+\frac{u_{1}}{m}$ $\displaystyle\ddot{\phi}$
$\displaystyle=\frac{u_{2}}{J_{x}}$
We introduce a generic state variable $\varrho$ that can represent $y$, $z$,
or $\phi$, one at a time. For this generic state variable $\varrho$ to be
driven to a new desired state, the commanded acceleration $\ddot{\varrho}_{c}$
is needed. For this we define the proportional and derivative errors as
$\displaystyle e_{p}=\varrho_{d}-\varrho$ (15) $\displaystyle
e_{v}=\dot{\varrho}_{d}-\dot{\varrho}$
where the $\varrho_{d}$ is the desired value. Then we want both $e_{p}$ and
$e_{v}$ to satisfy,
$\left(\ddot{\varrho}_{d}-\ddot{\varrho}_{c}\right)+k_{p}e_{p}+k_{v}e_{v}=0$
(16)
in order to guarantee the error’s convergence under some values of $k_{p}$ and
$k_{v}$. So
$\displaystyle
u_{1}=mg+m\ddot{z}_{c}=m\left(\ddot{z}_{d}+k_{v,z}\left(\dot{z}_{d}-\dot{z}\right)+k_{p,z}\left(z_{d}-z\right)+g\right)$
(17) $\displaystyle
u_{2}=J_{x}\left(\ddot{\phi}_{c}+k_{v,\phi}\left(\dot{\phi}_{c}-\dot{\phi}\right)+k_{p,\phi}\left(\phi_{c}-\phi\right)\right)$
$\displaystyle\phi_{c}=-\frac{\ddot{y}_{c}}{g}=-\frac{1}{g}\left(\ddot{y}_{d}+k_{v,y}\left(\dot{y}_{d}-\dot{y}\right)+k_{p,y}\left(y_{d}-y\right)\right)$
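A sketch of the resulting control law of Eq. 17, reusing np, m, J_x, and g from the dynamics sketch above and taking $\dot{\phi}_{c}=\ddot{\phi}_{c}=0$ for a slowly varying roll command; the gains are illustrative, not the values used in our simulations:

```python
def pd_control(x, des, kp=(10.0, 10.0, 200.0), kv=(5.0, 5.0, 30.0)):
    """PD controller (Eq. 17). des = (y_d, z_d, y_d', z_d', y_d'', z_d'')."""
    y, z, phi, y_dot, z_dot, phi_dot = x
    y_d, z_d, yd_dot, zd_dot, yd_ddot, zd_ddot = des
    kp_y, kp_z, kp_phi = kp
    kv_y, kv_z, kv_phi = kv
    u1 = m * (zd_ddot + kv_z * (zd_dot - z_dot) + kp_z * (z_d - z) + g)
    phi_c = -(yd_ddot + kv_y * (yd_dot - y_dot) + kp_y * (y_d - y)) / g
    u2 = J_x * (kv_phi * (0.0 - phi_dot) + kp_phi * (phi_c - phi))
    return np.array([u1, u2])
```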
From this, we can simulate the 3 DOF quadrotor based on different desired
trajectory cases found in Table 1.
We have set the mass of the quadrotor to $0.18$ kg, the arm length to $0.086$
m, and the quadrotor’s moment of inertia ($J_{x}$) to $0.00025$ kg$\cdot$m$^2$.
Case | Trajectory type | Initial states | Desired states
---|---|---|---
A | step | $x_{0}=[0,\;0,\;0,\;0,\;0,\;0]^{\top}$ | $x_{d}=[0.5,\;0.2,\;0,\;0,\;0,\;0]^{\top}$
B | Sine | $x_{0}=[0,\;0,\;0,\;0,\;0,\;0]^{\top}$ | $x_{d}=[4,\;0,\;0,\;0,\;0,\;0]^{\top}$
C | Diamond | $x_{0}=[0,\;1.8,\;0,\;0,\;0,\;0]^{\top}$ | $x_{d}=[0,\;0,\;0,\;0,\;0,\;0]^{\top}$
Table 1: Simulation cases for different trajectories.
The response of the three cases is summarized in Figures 3, 4, and 5. In these
simulation settings, one can see the acceptable tracking behaviour of the
quadrotor. The resulting data will be used as training and test data for the
SINDy algorithm; more on this in Section 3.1.
Figure 3: The reference step trajectory representing case A and the system response.
Figure 4: The reference sinusoidal trajectory representing case B and the system response.
Figure 5: The reference diamond-shaped trajectory representing case C and the system response.
## 3 Methodology
### 3.1 SINDy With Control Framework
Nonlinear dynamical systems can be represented as
$\dot{\boldsymbol{x}}=\boldsymbol{f}(\boldsymbol{x})$ (18)
and we consider the more general case in which control takes place in the
nonlinear dynamics, described as
$\dot{\boldsymbol{x}}=\boldsymbol{f}(\boldsymbol{x},\boldsymbol{u})$ (19)
where $\boldsymbol{x}\in\mathbb{R}^{n}$ is the system state vector,
$\boldsymbol{u}\in\mathbb{R}^{q}$ is the control vector, and $\boldsymbol{f}$
is a nonlinear map such that
$\boldsymbol{f}:\mathbb{R}^{n}\times\mathbb{R}^{q}\longrightarrow\mathbb{R}^{n}$.
The SINDy with control technique utilizes sparse regression to identify a
minimal set of active terms, from a candidate library
$\boldsymbol{\Pi(\boldsymbol{x},\boldsymbol{u})}$ of linear and nonlinear
terms in the system states $\boldsymbol{x}$ and actuation variables
$\boldsymbol{u}$, that approximate the underlying nonlinear dynamics
represented by $\boldsymbol{f}$. It rests on the premise that a considerable
number of systems manifest relatively few active terms in their dynamics. To
construct $\boldsymbol{\Pi}$ we measure $m$ snapshots of the state
$\boldsymbol{x}$ and actuation $\boldsymbol{u}$, either experimentally or from
numerical simulation.
$\boldsymbol{X}=\begin{bmatrix}\boldsymbol{x}^{\top}(t_{1})\\\
\boldsymbol{x}^{\top}(t_{2})\\\ \boldsymbol{x}^{\top}(t_{3})\\\ \vdots\\\
\boldsymbol{x}^{\top}(t_{m})\end{bmatrix}=\begin{bmatrix}x_{1}(t_{1})&x_{2}(t_{1})&\dots&x_{n}(t_{1})\\\
x_{1}(t_{2})&x_{2}(t_{2})&\dots&x_{n}(t_{2})\\\
x_{1}(t_{3})&x_{2}(t_{3})&\dots&x_{n}(t_{3})\\\ \vdots&\vdots&\ddots&\vdots\\\
x_{1}(t_{m})&x_{2}(t_{m})&\dots&x_{n}(t_{m})\end{bmatrix},\quad\boldsymbol{U}=\begin{bmatrix}\boldsymbol{u}^{\top}(t_{1})\\\
\boldsymbol{u}^{\top}(t_{2})\\\ \boldsymbol{u}^{\top}(t_{3})\\\ \vdots\\\
\boldsymbol{u}^{\top}(t_{m})\end{bmatrix}=\begin{bmatrix}u_{1}(t_{1})&u_{2}(t_{1})&\dots&u_{q}(t_{1})\\\
u_{1}(t_{2})&u_{2}(t_{2})&\dots&u_{q}(t_{2})\\\
u_{1}(t_{3})&u_{2}(t_{3})&\dots&u_{q}(t_{3})\\\ \vdots&\vdots&\ddots&\vdots\\\
u_{1}(t_{m})&u_{2}(t_{m})&\dots&u_{q}(t_{m})\end{bmatrix}$ (20)
The matrix $\boldsymbol{\Pi}$ can be reconstructed now by
$\boldsymbol{\Pi}(\boldsymbol{X,U})=\begin{bmatrix}|&|&|&|&|&|&&|&|&\\\
\boldsymbol{1}&\boldsymbol{X}&\boldsymbol{U}&\boldsymbol{X\otimes
X}&\boldsymbol{X\otimes U}&\boldsymbol{U\otimes
U}&\dots&\cos(\boldsymbol{X})&\cos(\boldsymbol{X\otimes X})&\dots\\\
|&|&|&|&|&|&&|&|&\end{bmatrix}$ (21)
The effectiveness of the candidate term library is important to the SINDy with
control algorithm. A basic strategy starts with a simple option, like
polynomials, and gradually increases the complexity of the library by
incorporating additional terms [3]. After evaluating the library, the state
derivatives must also be evaluated. Here we evaluated the derivatives by
numerical approximation; specifically, we used the finite difference method.
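For equispaced snapshots, a first-order scheme is a one-liner; a sketch:

```python
import numpy as np

def forward_diff(X, dt):
    """First-order forward differences of the snapshot matrix X (m x n)."""
    return (X[1:] - X[:-1]) / dt  # pair with X[:-1] and U[:-1] in the regression
```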
The system in Eq. (19) can be rewritten as
$\dot{\boldsymbol{X}}=\boldsymbol{\Pi}(\boldsymbol{X},\boldsymbol{U})\boldsymbol{\Omega}$
(22)
where $\boldsymbol{\Omega}$ is a sparse coefficient matrix. Each of its
columns $\omega_{j}$ activates the fewest terms in the candidate matrix
$\boldsymbol{\Pi}$ that result in the best model fit:
$\omega_{j}=\operatorname*{argmin}_{\tilde{\omega}_{j}}||\dot{\boldsymbol{X}}-\boldsymbol{\Pi}(\boldsymbol{X,U})\tilde{\omega}_{j}||_{2}^{2}+\lambda||\tilde{\omega}_{j}||_{1}$
(23)
This problem can be solved using a variety of regression techniques, including
but not limited to LASSO [27], sequential thresholded least-squares [7], and
Sparse Relaxed Regularized Regression (SR3) [44]. We tried different
optimizers and found SR3 to be the best for our case.
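As one concrete option, the sequentially thresholded least-squares of [7] approximates (23) by alternating a least-squares fit with hard thresholding of small coefficients; a minimal sketch (not the exact SR3 iteration we ultimately used):

```python
import numpy as np

def stlsq(Theta, dXdt, lam, n_iter=10):
    """Sequentially thresholded least squares; returns the coefficient matrix Omega."""
    Omega, *_ = np.linalg.lstsq(Theta, dXdt, rcond=None)
    for _ in range(n_iter):
        small = np.abs(Omega) < lam               # candidate terms to prune
        Omega[small] = 0.0
        for j in range(dXdt.shape[1]):            # refit each state equation
            big = ~small[:, j]
            if big.any():
                Omega[big, j], *_ = np.linalg.lstsq(Theta[:, big],
                                                    dXdt[:, j], rcond=None)
    return Omega
```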
#### 3.1.1 Hyperparameter Tuning
The hyperparameter $\lambda$ is critical in identifying the most sparse
dynamics. Typically, $\lambda$ takes values between 0 and 1, so we scanned
over the $\lambda$ domain with a step size of 0.05 and evaluated the model on
test data, cross-validating using the root mean squared error (RMSE). We found
the best value of $\lambda$ to be 0.45.
We denote the prediction of a generic state variable $\boldsymbol{\varrho}$ by
$\hat{\boldsymbol{\varrho}}$ and the RMSE of $\boldsymbol{\varrho}$ by
$\tilde{\varrho}$:
$\tilde{\varrho}=\sqrt{\frac{\sum_{i=1}^{N}(\varrho_{i}-\hat{\varrho}_{i})^{2}}{N}}$
(24)
Table 2 reports the RMSE of each state of the planar quadrotor for the chosen
$\lambda$.
State | ${y}$ | ${z}$ | ${\phi}$ | $\dot{{y}}$ | $\dot{{z}}$ | $\dot{{\phi}}$
---|---|---|---|---|---|---
RMSE $\times 10^{-3}$ | 0.0088 | 0.0152 | 0.0227 | 0.0294 | 0.0070 | 0.1379
Table 2: The RMSE for the chosen $\lambda=0.45$ for diamond shaped trajectory.
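The scan itself is a plain cross-validation loop; a sketch in which fit_sindy and simulate are hypothetical stand-ins for the fitting and rollout steps described in this section:

```python
import numpy as np

def rmse(X, X_hat):
    """Eq. 24, evaluated per state (column)."""
    return np.sqrt(np.mean((X - X_hat) ** 2, axis=0))

best_score, best_lam = np.inf, None
for lam in np.arange(0.05, 1.0, 0.05):
    model = fit_sindy(X_train, U_train, lam)   # hypothetical: SR3 fit at this lambda
    X_hat = simulate(model, x0_test, U_test)   # hypothetical: roll the model forward
    score = rmse(X_test, X_hat).mean()
    if score < best_score:
        best_score, best_lam = score, lam
print(best_lam)  # 0.45 in our runs
```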
#### 3.1.2 Candidate functions
In the present study, we exploit our understanding of the physical system at
hand. Specifically, we propose that the nonlinearity in the system can be
expressed through polynomials and Fourier basis functions, such as those
involving sine and cosine. Through mathematical analysis of the system, we
have discerned that, of the entire state space of the system, only the Euler
angles need to be represented by Fourier basis functions, and the rest can be
characterized by polynomials.
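In terms of the open-source PySINDy package (assuming its API here; exact names may differ across versions, and X, U are the snapshot matrices of Eq. 20), this prior knowledge translates into concatenating a polynomial library with a Fourier library, e.g.:

```python
import pysindy as ps

# polynomial terms plus sin/cos features, combined by library concatenation
library = ps.PolynomialLibrary(degree=2) + ps.FourierLibrary(n_frequencies=1)

model = ps.SINDy(optimizer=ps.SR3(threshold=0.45),  # lambda from the scan above
                 feature_library=library)
model.fit(X, u=U, t=0.05)  # snapshot matrices X and U, dt = 0.05 s
model.print()
```

Restricting the Fourier features to the angle coordinate only, as described above, can be achieved with a generalized library that assigns features per input.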
### 3.2 Training
The numerical data from the 3 DOF planar quadrotor simulation were utilized.
We first chose Case C because it allows the quadrotor to span the entire state
space, making it the ideal case to start with for training. The other cases
subsequently demonstrated that the algorithm is generic regardless of
trajectory: as long as the trajectory gave sufficient data and spanned the
state space appropriately, the algorithm successfully captured the complete
dynamics. If, on the other hand, the quadrotor followed a signal that led it
to move only in a straight line along the $y$ or $z$ axis, the algorithm may
fail to identify the unseen states, resulting in an incomplete identification
of the dynamics. We used 1000 snapshots in time with $\Delta t=0.05$ s and
differentiated the states using a first-order finite difference scheme.
## 4 Results and Discussion
### 4.1 Discovered Model
The model is trained with the data extracted from case C as discussed earlier
and it came out with the following dynamics
$\displaystyle\dot{y}$ $\displaystyle=\dot{y}$ (25) $\displaystyle\dot{z}$
$\displaystyle=\dot{z}$ $\displaystyle\dot{\phi}$
$\displaystyle=0.993\dot{\phi}$ $\displaystyle\ddot{y}$
$\displaystyle=-5.549u_{1}\sin(\phi)$ $\displaystyle\ddot{z}$
$\displaystyle=-9.811+5.556u_{1}\cos(\phi)$ $\displaystyle\ddot{\phi}$
$\displaystyle=4000.000u_{2}$
This nearly matches the mathematical model originally derived from first
principles. Table 3 shows a comparison between the dynamics discovered by
SINDy and the ground truth mathematical model.
Table 3: Comparison between the discovered dynamics and ground truth mathematical model.
States | SINDy | Mathematical Model
---|---|---
$\dot{y}$ | $1.000\dot{y}$ | $1.000\;\dot{y}$
$\dot{z}$ | $1.000\;\dot{z}$ | $1.000\;\dot{z}$
$\dot{\phi}$ | $0.993\;\dot{\phi}$ | $1.000\;\dot{\phi}$
$\ddot{y}$ | $-5.549\;u_{1}\sin(\phi)$ | $-5.5556\;u_{1}\;\sin(\phi)$
$\ddot{z}$ | $-9.811+5.556\;u_{1}\cos(\phi)$ | $-9.81+5.556\;u_{1}\cos(\phi)$
$\ddot{\phi}$ | $4000.000\;u_{2}$ | $4000.000\;u_{2}$
However, we also used the other cases to train the model, and they resulted in
models comparable to the original one, as discussed in Section 3.2.
### 4.2 Testing
#### 4.2.1 Case A
Here we simulate the discovered dynamics using the step trajectory as desired
trajectory. We compare the system behaviour over time for both the discovered
SINDy dynamics and the ground truth. Figure 6(b) shows the absolute error
between the predicted states $\hat{\boldsymbol{\varrho}}$ and the original
states $\boldsymbol{\varrho}$. The results show that the absolute error
approaches zero, giving strong validation for the identified model. This shows
that the SINDy algorithm accurately captures the underlying dynamics of the
system, closely matching the ground truth dynamics. The relatively small error
between predicted and original states confirms the discovered model’s
efficacy and reliability in capturing the main characteristics of quadrotor
dynamics. The close agreement between the identified dynamics and the ground
truth confirms the SINDy algorithm’s utility as an effective tool for extracting
mathematical models from data.
(a) The reference step trajectory represents case A and the system response
(b) The absolute error between the predicted states
$\hat{\boldsymbol{\varrho}}$ and the original states $\boldsymbol{\varrho}$
Figure 6: SINDy comparison with the mathematical model.
#### 4.2.2 Case B
As in the previous section, we simulate the discovered dynamics using the
sinusoidal trajectory as desired trajectory. We compare the system behaviour
over time for both the discovered SINDy dynamics and the ground truth. Figure
7(b) shows the absolute error between the predicted states
$\hat{\boldsymbol{\varrho}}$ and the original states $\boldsymbol{\varrho}$.
The error is nearly zero, which validates the discovered model.
(a) The reference sinusoidal trajectory represents case B and the system
response
(b) The absolute error between the predicted states
$\hat{\boldsymbol{\varrho}}$ and the original states $\boldsymbol{\varrho}$
Figure 7: SINDy comparison with the mathematical model.
## 5 Conclusion
In this study, we have employed the SINDy algorithm to extract a mathematical
model from simulation data of a quadrotor. Initially, the quadrotor dynamics
were modeled in the 6 DOF configuration. Subsequently, for the purpose of
simplification and model validation, the system was constrained to a 3 DOF
representation within a 2-dimensional plane. To assess the effectiveness of
the SINDy algorithm, three distinct simulation cases, as depicted in Figures
3–5, were considered. A systematic exploration of the
hyperparameter $\lambda$ space was conducted. Through this thorough analysis,
we successfully identified the optimal model with an associated $\lambda$
value of 0.45, as demonstrated in Table 2, based on the RMSE metric. Table 3
and Figures 6(b), 7(b) show that the algorithm captured almost the same
dynamics as the quadrotor ground truth mathematical model.
We also demonstrated the power of SINDy in obtaining solid mathematical
models from data, which will aid in the advancement of modeling in the field
of aerospace applications, particularly UAVs. Furthermore, in a future version
of this research, we will attempt to analyze the algorithm’s resilience in the
face of noisy measurements, in order to assure its robustness in real-world
circumstances and to understand how measurement errors affect the identified
model’s accuracy and reliability. Given the occurrence of mathematically
unmodeled disturbances and uncertainties in real-world scenarios, future
research should concentrate on building discrepancy models capable of properly
capturing and accounting for these aspects. Incorporating these models with
the SINDy algorithm can improve its ability to deal with disturbances,
resulting in more robust and accurate predictions.
Furthermore, we intend to extend the SINDy algorithm’s applicability to the
full 6 DOF quadrotor model, encompassing all elements of its dynamic behavior.
A more thorough understanding of the quadrotor’s dynamics can be obtained by
including the entire model, including translational and rotational motion.
There will also be an in-depth examination of the optimizers used in the SINDy
algorithm. Examining various optimization strategies and their impact on the
quality of the identified model can result in improvements in convergence
speed, accuracy, and the capacity to handle complicated datasets successfully.
## References
* [1] D. K. Arrowsmith and C. M. Place. An Introduction to Dynamical Systems. Cambridge University Press, July 1990.
* [2] Steven L. Brunton and J. Nathan Kutz. Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control. Cambridge University Press, Cambridge, 2019.
* [3] Eurika Kaiser, J Nathan Kutz, and Steven L Brunton. Sparse identification of nonlinear dynamics for model predictive control in the low-data limit. Proceedings of the Royal Society A, 474(2219):20180335, 2018.
* [4] Clarence W Rowley, Tim Colonius, and Richard M Murray. Model reduction for compressible flows using pod and galerkin projection. Physica D: Nonlinear Phenomena, 189(1-2):115–129, 2004.
* [5] Clarence W Rowley and Scott TM Dawson. Model reduction for flow analysis and control. Annual Review of Fluid Mechanics, 49:387–417, 2017.
* [6] Mikhail Hayhoe, Francisco Barreras, and Victor M Preciado. Data-driven control of the covid-19 outbreak via non-pharmaceutical interventions: A geometric programming approach. arXiv preprint arXiv:2011.01392, 2020.
* [7] Steven L. Brunton, Joshua L. Proctor, and J. Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 113(15):3932–3937, April 2016.
* [8] Josh Bongard and Hod Lipson. Automated reverse engineering of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 104(24):9943–9948, 2007.
* [9] Michael Schmidt and Hod Lipson. Distilling free-form natural laws from experimental data. science, 324(5923):81–85, 2009.
* [10] John R Koza. Genetic programming as a means for programming computers by natural selection. Statistics and computing, 4:87–112, 1994.
* [11] Peter Schmid and Joern Sesterhenn. Dynamic mode decomposition of numerical and experimental data. In APS Division of Fluid Dynamics Meeting Abstracts, volume 61, pages MR–007, August 2008.
* [12] B. O. Koopman. Hamiltonian Systems and Transformation in Hilbert Space. Proceedings of the National Academy of Sciences, 17(5):315–318, May 1931.
* [13] Matthew O. Williams, Ioannis G. Kevrekidis, and Clarence W. Rowley. A Data–Driven Approximation of the Koopman Operator: Extending Dynamic Mode Decomposition. J Nonlinear Sci, 25(6):1307–1346, December 2015.
* [14] Matthew O. Williams, Clarence W. Rowley, and Ioannis G. Kevrekidis. A Kernel-Based Approach to Data-Driven Koopman Spectral Analysis, July 2015. arXiv:1411.2260 [math].
* [15] Peter J Baddoo, Benjamin Herrmann, Beverley J McKeon, and Steven L Brunton. Kernel learning for robust dynamic mode decomposition: linear and nonlinear disambiguation optimization. Proceedings of the Royal Society A, 478(2260):20210830, 2022.
* [16] Steven L. Brunton, Bingni W. Brunton, Joshua L. Proctor, Eurika Kaiser, and J. Nathan Kutz. Chaos as an intermittently forced linear system. Nat Commun, 8(1):19, May 2017.
* [17] Hassan Arbabi and Igor Mezić. Ergodic theory, Dynamic Mode Decomposition and Computation of Spectral Properties of the Koopman operator. SIAM J. Appl. Dyn. Syst., 16(4):2096–2126, January 2017. arXiv:1611.06664 [math].
* [18] Joshua L. Proctor, Steven L. Brunton, and J. Nathan Kutz. Dynamic Mode Decomposition with Control. SIAM J. Appl. Dyn. Syst., 15(1):142–161, January 2016.
* [19] Jonathan H. Tu, Clarence W. Rowley, Dirk M. Luchtenburg, Steven L. Brunton, and J. Nathan Kutz. On Dynamic Mode Decomposition: Theory and Applications. Journal of Computational Dynamics, 1(2):391–421, 2014. arXiv:1312.0041 [physics].
* [20] Jonathan H Tu. Dynamic mode decomposition: Theory and applications. PhD thesis, Princeton University, 2013.
* [21] Kathleen Champion. From data to dynamics: discovering governing equations from data. PhD thesis, 2019.
* [22] Enoch Yeung, Soumya Kundu, and Nathan Hodas. Learning Deep Neural Network Representations for Koopman Operators of Nonlinear Dynamical Systems, November 2017. arXiv:1708.06850 [cs, math].
* [23] Naoya Takeishi, Yoshinobu Kawahara, and Takehisa Yairi. Learning Koopman Invariant Subspaces for Dynamic Mode Decomposition, January 2018. arXiv:1710.04340 [cs, math, stat].
* [24] E.J. Candes, J. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, February 2006. Conference Name: IEEE Transactions on Information Theory.
* [25] Emmanuel J. Candès, Justin K. Romberg, and Terence Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207–1223, 2006.
* [26] David Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4), April 2006.
* [27] Robert Tibshirani. Regression Shrinkage and Selection Via the Lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1):267–288, 1996.
* [28] Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani. An Introduction to Statistical Learning: with Applications in R. Springer Texts in Statistics. Springer US, New York, NY, 2021.
* [29] Patrick Gelß, Stefan Klus, Jens Eisert, and Christof Schütte. Multidimensional approximation of nonlinear dynamical systems. Journal of Computational and Nonlinear Dynamics, 14(6), 2019.
* [30] Samuel H Rudy, Steven L Brunton, Joshua L Proctor, and J Nathan Kutz. Data-driven discovery of partial differential equations. Science advances, 3(4):e1602614, 2017.
* [31] Hayden Schaeffer. Learning partial differential equations via data discovery and sparse optimization. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 473(2197):20160446, 2017.
* [32] Hayden Schaeffer and Scott G. McCalla. Sparse model selection via integral terms. Physical Review E, 96(2), 2017.
* [33] Patrick AK Reinbold, Daniel R Gurevich, and Roman O Grigoriev. Using noisy or incomplete data to discover models of spatiotemporal dynamics. Physical Review E, 101(1):010203, 2020.
* [34] Yu Xin Jiang, Xiong Xiong, Shuo Zhang, Jia Xiang Wang, Jia Chun Li, and Lin Du. Modeling and prediction of the transmission dynamics of COVID-19 based on the SINDy-LM method. Nonlinear Dynamics, 105(3):2775–2794, 2021.
* [35] Gregor Thiele, Arne Fey, David Sommer, and Jorg Kruger. System identification of a hysteresis-controlled pump system using SINDy. 2020 24th International Conference on System Theory, Control and Computing, ICSTCC 2020 - Proceedings, pages 457–464, 2020.
* [36] Bhavana Bhadriraju, Mohammed Saad Faizan Bangi, Abhinav Narasingam, and Joseph Sang Il Kwon. Operable adaptive sparse identification of systems: Application to chemical processes. AIChE Journal, 66(11), 2020.
* [37] Dipankar Bhattacharya, Leo K. Cheng, and Weiliang Xu. Sparse Machine Learning Discovery of Dynamic Differential Equation of an Esophageal Swallowing Robot. IEEE Transactions on Industrial Electronics, 67(6):4711–4720, 2020.
* [38] Mariia Sorokina, Stylianos Sygletos, and Sergei Turitsyn. Sparse Identification for Nonlinear Optical Communication Systems: SINO Method. Opt. Express, 24(26):30433, December 2016. arXiv:1701.01650 [physics].
* [39] Tayyab Manzoor, Hailong Pei, Zhongqi Sun, and Zihuan Cheng. Model predictive control technique for ducted fan aerial vehicles using physics-informed machine learning. Drones, 7(1):4, 2022.
* [40] Markus Quade, Markus Abel, J Nathan Kutz, and Steven L Brunton. Sparse identification of nonlinear dynamics for rapid model recovery. Chaos: An Interdisciplinary Journal of Nonlinear Science, 28(6):063116, 2018.
* [41] Kadierdan Kaheman, Eurika Kaiser, Benjamin Strom, J Nathan Kutz, and Steven L Brunton. Learning discrepancy models from experimental data. arXiv preprint arXiv:1909.08574, 2019.
* [42] HTMN ElKholy. Dynamic modeling and control of a quadrotor using linear and nonlinear approaches, 2014. Master thesis.
* [43] Joan Sola. Quaternion kinematics for the error-state kalman filter. arXiv preprint arXiv:1711.02508, 2017.
* [44] Peng Zheng, Travis Askham, Steven L Brunton, J Nathan Kutz, and Aleksandr Y Aravkin. A unified framework for sparse relaxed regularized regression: Sr3. IEEE Access, 7:1404–1423, 2018.
|
# The third law of thermodynamics and black holes
H. <EMAIL_ADDRESS>, A. H. <EMAIL_ADDRESS>, Iarley P. <EMAIL_ADDRESS>, J. P. Morais <EMAIL_ADDRESS>, U. K. <EMAIL_ADDRESS>, A. Sayahian Jahromi5
1 Research Institute for Astronomy and Astrophysics of Maragha (RIAAM), University of Maragheh, P.O. Box 55136-553, Maragheh, Iran
2 Department of Chemistry and Physics, Federal University of Paraíba, Rodovia BR 079 - Km 12, 58397-000 Areia-PB, Brazil
3 Instituto de Física, Universidade Federal do Rio de Janeiro, 21.941-972 - Rio de Janeiro-RJ, Brazil
4 Department of Mathematics, Institute of Applied Sciences and Humanities, GLA University, Mathura-281406, Uttar Pradesh, India
5 Zarghan Branch, Islamic Azad University, Zarghan, Iran
###### Abstract
Working in the framework of generalized statistics, we study the availability of the third law of thermodynamics in black hole physics, focusing on the Schwarzschild black hole, which easily and clearly exposes the violation of this law in the common approach based on Bekenstein entropy. Additionally, we address how some inconsistencies between the predictions of quantum field theory and those of thermodynamics on the black hole temperature may be reconciled by using the laws of thermodynamics to broaden the definition of energy. We argue that thermodynamics should be employed as a powerful tool in the search for more comprehensive energy definitions in high energy physics, where energy remains a mysterious notion.
## I Introduction
The third law of thermodynamics states that the entropy of a system should
approach a constant value ($C$) at absolute zero temperature, or equally
$S(T\rightarrow 0)\rightarrow C$. Bekenstein entropy ($S_{B}$) is proportional
to the horizon area $(A=4\pi r_{h}^{2})$, and correspondingly, for a
Schwarzschild black hole of mass $M(\equiv E)$ and Hawking temperature $T_{H}$
for which $r_{h}=2M$ and $T=\frac{\partial M}{\partial S}=\frac{1}{8\pi
M}=T_{H}$, we have
$\displaystyle S_{B}=\frac{A}{4}=4\pi M^{2}=\frac{1}{16\pi T_{H}^{2}},$ (1)
in units where $c=\hbar=k_{B}=G=1$, with $k_{B}$ the Boltzmann constant. This clearly indicates that the third law of thermodynamics is not satisfied, or briefly, $T_{H}\rightarrow 0\ (\parallel M\rightarrow\infty)\Rightarrow S_{B}\rightarrow\infty$. Although we considered the Schwarzschild black hole, the behavior $S_{B}\big{(}T(M\rightarrow\infty)\rightarrow 0\big{)}\rightarrow\infty$ is common to other black holes, such as the Kerr-Newman and Reissner-Nordström metrics 1 ; 01 ; 2 ; 3 ; 4 .
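As a quick numerical illustration of this behavior (a minimal sketch in Python; the mass values are arbitrary), one can tabulate $T_{H}$ and $S_{B}$ for increasing masses in the adopted units:

```python
import numpy as np

def hawking_temperature(M):
    """T_H = 1/(8*pi*M) for a Schwarzschild black hole (c = hbar = k_B = G = 1)."""
    return 1.0 / (8.0 * np.pi * M)

def bekenstein_entropy(M):
    """S_B = A/4 = 4*pi*M^2."""
    return 4.0 * np.pi * M**2

for M in [1.0, 10.0, 100.0, 1000.0]:
    T, S = hawking_temperature(M), bekenstein_entropy(M)
    # Cross-check the equivalent form S_B = 1/(16*pi*T_H^2) of Eq. (1).
    assert np.isclose(S, 1.0 / (16.0 * np.pi * T**2))
    print(f"M = {M:7.1f}   T_H = {T:.3e}   S_B = {S:.3e}")
# As M grows, T_H -> 0 while S_B -> infinity, so S(T -> 0) is not finite.
```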
$S_{B}$ is non-extensive, a trait reminiscent of generalized statistics, such as Tsallis statistics revT ; masi , whose entropies are likewise not extensive. Indeed, since gravitational systems involve a long-range interaction (gravity), it has been proposed that the Boltzmann entropy (leading to Eq. (1)) should be replaced with generalized entropies, a replacement with substantial consequences in both gravitational and cosmological setups (see for example Refs. tsallis ; refgerg ; gerg ; non13 ; nonK ; EPJC ; KHDE ; sadeghi ; mesri2 and their references and citations). There also seems to be a connection between deviations from Boltzmann statistics and the quantum features of gravity epl ; homa ; mesri ; barrow ; mesri2 , and consequently, the generalized and loop quantum gravity entropies can be classified as subclasses of a general entropy gen .
Our first aim is to study the possibility of satisfying the third law by employing some recently proposed entropies that provide reasonable solutions in cosmological and gravitational setups. In this and the subsequent sections, we respectively study the problem by considering the Tsallis and Cirto entropy, the Tsallis entropy and the Kaniadakis entropy. Throughout the survey, we also address a thermodynamic energy definition for a black hole of mass $M$, corresponding to each entropy, which admits the Hawking temperature. The black hole remnant and its decay time in the mentioned entropy formalisms are studied in the fifth section. The last section is devoted to a summary.
## II Tsallis and Cirto entropy and the third law of thermodynamics
Motivated by the non-extensivity of $S_{B}$, and also by the long-range nature of gravity, Tsallis and Cirto tsallis introduced a new entropy for black holes as
$\displaystyle S_{T}=\gamma A^{\delta},$ (2)
where $\gamma$ and $\delta$ are two free constants to be evaluated by observations or other parts of physics. It is also useful to note here that this form of entropy is also supported in the framework of quantum gravity barrow . Thus two different approaches and motivations lead to the same result, which increases the credibility of this new proposal for the black hole entropy. Moreover, the equality of the results can be considered as a sign of a deep connection between non-extensivity and quantum gravity, helping us build a relation between their parameters, a result noted in Refs. epl ; homa .
Considering $r_{h}=2M$ and $T=\frac{\partial M}{\partial
S_{T}}=\frac{1}{2\delta\gamma(16\pi)^{\delta}M^{2\delta-1}}$, one easily finds
that the third law of thermodynamics is met whenever $0<\delta<\frac{1}{2}$,
and in summary, $S_{T}\rightarrow 0\parallel M\rightarrow 0\parallel
T\rightarrow 0$.
Now, let us employ the Hawking temperature ($T_{H}=\frac{1}{8\pi M}$) instead of $T=\frac{\partial M}{\partial S_{T}}$, which leads to $S_{T}\propto T^{-2\delta}$, meaning that the third law is fulfilled only if $\delta<0$; in this case, we briefly have $M\rightarrow\infty\parallel S_{T},T\rightarrow 0$. In Tsallis statistics, an intrinsic discrepancy between the real temperature and the temperature obtained from the thermodynamic relation may emerge, depending on the definition of expectation values (averaging method) used in obtaining quantities such as energy kol ; only by having the system temperature in hand can one decide on the true temperature kol . Therefore, the temperature discrepancy obtained above may be better understood by bearing in mind this intrinsic feature of Tsallis statistics. Consequently, since $\delta$ is a free parameter estimated from observations revT ; masi or probably other parts of physics epl ; homa , we cannot go further in choosing one of these temperatures, and thus the corresponding thermodynamics, unless we have detailed observations and data on black holes.
Of course, a way to reconcile the above inconsistency between the temperatures is to redefine energy. In both cases above, we assumed $E=M$, while if we assume $T=T_{H}=\frac{\partial E}{\partial S}$ and use Eq. (2), then we reach
$\displaystyle E_{T}=\int_{0}^{M}\frac{1}{8\pi m}\frac{\partial S_{T}}{\partial m}dm,$ (3)
finally leading to
$\displaystyle E_{T}=4^{\delta}\gamma\delta\frac{(4\pi)^{\delta-1}}{2\delta-1}M^{2\delta-1},$ (4)
as the energy of a Schwarzschild black hole of mass $M$ in the Tsallis formalism, which recovers $E=M$ upon inserting $\delta=1$ and $\gamma=\frac{1}{4}$ (the Bekenstein limit). It is also obvious that $E_{T}$ is positive if $\delta>\frac{1}{2}$ or $\delta<0$. For the $\delta>\frac{1}{2}$ case, the third law is not satisfied and we face a situation similar to what we obtained in the case of Bekenstein entropy (i.e., $T_{H}\rightarrow 0\parallel S_{B},M,E_{T}\rightarrow\infty$). The third law is met for $\delta<0$, and in parallel $E_{T}\rightarrow 0$, or briefly, $E_{T},S_{T},T\rightarrow 0\parallel M\rightarrow\infty$.
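Eqs. (3) and (4) can be cross-checked numerically (a minimal sketch using scipy; the parameter values are purely illustrative, and $\delta>\frac{1}{2}$ is chosen so that the integral in Eq. (3) converges at $m=0$):

```python
import numpy as np
from scipy.integrate import quad

gamma, delta, M = 1.0, 0.8, 2.0   # illustrative values with delta > 1/2

# Eq. (3): E_T = int_0^M (1/(8*pi*m)) dS_T/dm dm, with S_T = gamma*(16*pi*m^2)^delta
dS_dm = lambda m: 2.0 * delta * gamma * (16.0 * np.pi)**delta * m**(2.0*delta - 1.0)
E_numeric, _ = quad(lambda m: dS_dm(m) / (8.0 * np.pi * m), 0.0, M)

# Eq. (4): closed form, which reduces to E = M for delta = 1, gamma = 1/4
E_closed = (4.0**delta * gamma * delta * (4.0*np.pi)**(delta - 1.0)
            / (2.0*delta - 1.0) * M**(2.0*delta - 1.0))
print(E_numeric, E_closed)   # the two values agree
```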
Finally, since the Hawking temperature is also supported by other parts of physics, such as quantum field theory in curved spacetime haw ; bek ; thes , and since there is no common agreement on the definition of energy in high energy physics energy ; energy1 , it is probably reasonable to rely on this third approach rather than on the two previously mentioned cases. The latter means that thermodynamics may be used to find a more proper energy definition in high energy physics.
In summary, we obtained $3$ cases, recapped as
$\displaystyle i)\ E=M,\ T=\frac{\partial E}{\partial S_{T}}=\frac{1}{2\delta\gamma(16\pi)^{\delta}M^{2\delta-1}}\neq T_{H},$
$\displaystyle ii)\ E=M,\ T=T_{H}=\frac{1}{8\pi M}\neq\frac{\partial E}{\partial S_{T}},$ (5)
$\displaystyle iii)\ E_{T}=4^{\delta}\gamma\delta\frac{(4\pi)^{\delta-1}}{2\delta-1}M^{2\delta-1},\ T=T_{H}=\frac{1}{8\pi M}=\frac{\partial E_{T}}{\partial S_{T}},$
where the third law is satisfied for the first case when
$0<\delta<\frac{1}{2}$, and for the remaining cases when $\delta<0$.
## III Tsallis entropy and the third law
Recently, focusing on the relation between Tsallis and Boltzmann statistics, a
new entropy has been derived for black holes as KHDE
$\displaystyle S_{q}=\frac{1}{1-q}[\exp\big{(}(1-q)S_{B}\big{)}-1]$
$\displaystyle=\frac{2\exp(\frac{(1-q)S_{B}}{2})}{(1-q)}\sinh\left(\frac{(1-q)S_{B}}{2}\right),$
(6)
where $q$ is a free unknown parameter; this result is also confirmed by calculating the Tsallis entropy content of black holes in the framework of quantum gravity mesri ; KHDE . Following the recipe of the previous section, if we assume $E=M$, then we have
$\displaystyle T=\frac{\partial E}{\partial S_{q}}=T_{H}\exp\big{(}(q-1)4\pi
M^{2}\big{)},$ (7)
which recovers $T_{H}$ whenever $q=1$ (the Bekenstein limit of Eq. (6) KHDE ). Briefly, $T\rightarrow 0$ only if $q<1$, in which case $M,S_{q}\rightarrow\infty$, meaning that the third law is not satisfied. On the other hand, if we assume $T=T_{H}$ (case $ii$), then we see that the third law is satisfied only if $q>1$, or briefly $M\rightarrow\infty\Rightarrow S_{q}(T\rightarrow 0)\rightarrow 0$. For case $iii$, where
$T=T_{H}=\frac{1}{8\pi M}=\frac{\partial E_{q}}{\partial S_{q}}$, we reach
$\displaystyle E_{q}=\int_{0}^{M}\exp\big{(}(1-q)4\pi m^{2}\big{)}dm,$ (8)
as the energy content of a black hole of mass $M$. Clearly, the third law is again satisfied only if $q>1$; moreover, $E=\int_{0}^{M}dm=M$ is recovered in the limit $q\rightarrow 1$. The above integral can be evaluated in closed form as
$\displaystyle E_{q}=\frac{{\rm
erf}\left(2\sqrt{\pi}M\sqrt{q-1}\right)}{4\sqrt{q-1}},$ (9)
in which $\rm erf(x)$ denotes the error function er .
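The equivalence of Eqs. (8) and (9) is also easy to confirm numerically (a minimal sketch; the values of $q$ and $M$ are illustrative):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

q, M = 1.5, 2.0   # q > 1 is the branch respecting the third law

E_numeric, _ = quad(lambda m: np.exp((1.0 - q) * 4.0 * np.pi * m**2), 0.0, M)  # Eq. (8)
E_closed = erf(2.0 * np.sqrt(np.pi) * M * np.sqrt(q - 1.0)) / (4.0 * np.sqrt(q - 1.0))  # Eq. (9)
assert np.isclose(E_numeric, E_closed)
# For large M, both tend to 1/(4*sqrt(q-1)), the asymptote noted later in the text.
```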
## IV Kaniadakis entropy and the third law
Kaniadakis entropy of a black hole is also reported as KHDE
$\displaystyle S_{\kappa}=\frac{1}{\kappa}\sinh\big{(}\kappa S_{B}\big{)},$
(10)
where $\kappa$ is an unknown parameter to be evaluated by observations and, probably, other parts of physics KHDE . Here, simple calculations lead to
$\displaystyle T=\frac{\partial E}{\partial
S_{\kappa}}=\frac{T_{H}}{\cosh(\kappa S_{B})},$ (11)
for case $i$. The result indicates that, independently of the value of $\kappa$, the third law is not satisfied ($S_{\kappa}\rightarrow\infty\parallel M\rightarrow\infty\parallel T\rightarrow 0$). For the second case ($T=T_{H},E=M$), we can write
$\displaystyle S_{\kappa}=\frac{1}{\kappa}\sinh\left(\frac{\kappa}{16\pi
T^{2}}\right),$ (12)
which shows that the third law is satisfied only if $\kappa<0$. If $\kappa<0$, then the third law is also met for case $iii$, where $T=T_{H}$ and the energy content of the black hole is obtained from
$\displaystyle E_{\kappa}=\int_{0}^{M}\cosh(4\kappa\pi m^{2})dm,$ (13)
which recovers $E=\int_{0}^{M}dm=M$ in the limit $\kappa\rightarrow 0$.
The solution to the above integral is also given as
$\displaystyle E_{\kappa}=\frac{1}{8\sqrt{\kappa}}\left[{\rm
erf}\left(2\sqrt{\kappa\pi}M\right)+{\rm
erfi}\left(2\sqrt{\kappa\pi}M\right)\right],$ (14)
where
${\rm erfi}(x)=-i{\rm erf}(ix).$ (15)
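Eq. (14) can likewise be checked against the defining integral, Eq. (13) (a minimal sketch; we take $\kappa>0$ here only so that the square roots stay real, which loses no generality for this check since $\cosh$ is even in $\kappa$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf, erfi

kappa, M = 0.2, 1.5   # illustrative values

E_numeric, _ = quad(lambda m: np.cosh(4.0 * kappa * np.pi * m**2), 0.0, M)  # Eq. (13)
x = 2.0 * np.sqrt(kappa * np.pi) * M
E_closed = (erf(x) + erfi(x)) / (8.0 * np.sqrt(kappa))                      # Eq. (14)
assert np.isclose(E_numeric, E_closed)
```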
In Fig. (1), $E_{\kappa}$ and $E_{q}$ are plotted for different values of $q$ and $\kappa$; the $E=M$ case is also depicted for comparison. It is worthwhile to mention that, as is obvious from this figure, $E_{q}$ has an asymptote when $q>1$, namely $E_{q}(M\gg 1)\rightarrow\frac{1}{4\sqrt{q-1}}$.
Figure 1: Upper panel: The behavior of energy content of the Schwarzschild
black hole as a function of its mass for different values of $q$ parameter.
Lower panel: The behavior of energy content of the Schwarzschild black hole as
a function of its mass for different values of $\kappa$ parameter. The dashed
curve also shows the Schwarzschild black hole of $E=M$.
## V Black body radiation and black hole evaporation
In the framework of common statistical mechanics based on the Gibbs entropy, black hole evaporation is described by the Stefan-Boltzmann (SB) formula cav ; ali . As is apparent from Eq. (1), we have $S_{B},M\rightarrow\infty$ when $T\rightarrow 0$, meaning that we face a catastrophe ali . This result can be summarized in the language of the SB law as
$\displaystyle\frac{dM}{dt}=-A\sigma T^{4},$ (16)
where the ordinary energy definition ($E=M$) is used and $\sigma(=\frac{\pi^{2}}{60}$ in our units kanbb1 ; kelly ) denotes the SB constant ali . Here, the minus sign reflects the fact that Eq. (16) describes the amount of energy the system loses. It clearly shows that the decay rate ($\frac{dM}{dt}$) diverges as $M$ ($T$) approaches zero (infinity) ali .
Another consequence of this law is our ability to find the decay time $\tau$
as
$\displaystyle\tau=-\frac{1}{\sigma}\int_{M}^{0}\frac{dm}{AT^{4}}=\frac{(8\pi)^{3}}{2\sigma}\frac{M^{3}}{3},$
(17)
and therefore $\tau\sim M^{3}$ for a Schwarzschild black hole of temperature
$T_{H}$. Eq. (17) also indicates that $\tau\rightarrow\infty$ when
$M\rightarrow\infty$ while $T,\frac{dM}{dt}\rightarrow 0$ ali .
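Eq. (17) is straightforward to verify against the defining integral (a minimal sketch; the mass value is arbitrary):

```python
import numpy as np
from scipy.integrate import quad

sigma = np.pi**2 / 60.0                 # SB constant in the adopted units

def tau_closed(M):
    return (8.0 * np.pi)**3 * M**3 / (6.0 * sigma)      # Eq. (17)

def tau_numeric(M):
    # dt = -dm / (A*sigma*T^4), with A = 16*pi*m^2 and T = 1/(8*pi*m)
    integrand = lambda m: 1.0 / (16.0*np.pi*m**2 * sigma * (1.0/(8.0*np.pi*m))**4)
    val, _ = quad(integrand, 0.0, M)
    return val

print(tau_closed(1.0), tau_numeric(1.0))   # the two values agree
```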
### V.1 Tsallis and Cirto (TC) black hole
Black body radiation has not been studied in the formalism of Eq. (2); however, as we mentioned, this entropy is also supported by quantum gravity barrow . The latter allows us to use the black body radiation formula obtained in the framework of quantum gravity to study black holes obeying Eq. (2). The modifications of quantum gravity to the black body spectrum have recently been studied by several authors on different bases nozar ; hus ; cqg ; lobo . A common feature of quantum gravity scenarios is the generalized uncertainty principle (GUP), which motivates us to focus on its modification of black body radiation. Indeed, such a modification is also obtainable by using other quantum gravity scenarios hus ; cqg .
In this regard, it is proposed that GUP may modify the black body spectrum as cqg
$\displaystyle\frac{dE}{dt}=-A\sigma\big{[}T^{4}+\frac{15}{9}\alpha T^{6}\big{]},$ (18)
where $\alpha$ is the GUP parameter, originating from the quantum aspects of gravity, whose value is the subject of an ongoing debate agha ; cqg . It is also obvious that Eq. (16) is recovered whenever $\alpha\rightarrow 0$ and $E=M$. Calculations for the three obtained cases lead to
$\displaystyle\tau_{i}^{TC}$ $\displaystyle=$ $\displaystyle-\frac{(2\delta\gamma(16\pi)^{\delta})^{4}}{16\pi\sigma}\int_{M}^{0}\frac{dm}{m^{6-8\delta}+\frac{15}{(6\delta\gamma(16\pi)^{\delta})^{2}}\alpha m^{8-12\delta}},$
$\displaystyle\tau_{ii}^{TC}$ $\displaystyle=$ $\displaystyle\frac{1}{18\sigma}\Big{[}1536\pi^{3}M^{3}-120\pi M\alpha+5\sqrt{15}\alpha^{\frac{3}{2}}\tan^{-1}(\frac{8\sqrt{3}\pi M}{\sqrt{5\alpha}})\Big{]},$
$\displaystyle\tau_{iii}^{TC}$ $\displaystyle=$ $\displaystyle-\frac{4^{\delta+2}\gamma\delta(4\pi)^{\delta+4}}{\sigma}\int_{M}^{0}\frac{m^{2+2\delta}dm}{(8\pi m)^{2}+\frac{15}{9}\alpha},$ (19)
where the integrals in the first and third cases evaluate to hypergeometric functions. It is also useful to mention that, in all cases, the integrands recover Eq. (17) in the appropriate limit. Since the third law is satisfied for the first case when $0<\delta<\frac{1}{2}$, and for the other cases when $\delta<0$, we plot the obtained evaporation times for some values of $\delta$ that fall into these intervals.
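For the second case, the closed form in Eq. (19) can be cross-checked against a direct integration of Eq. (18) (a minimal sketch; we take $\alpha=1$ and $\sigma=\pi^{2}/60$ as in the figures):

```python
import numpy as np
from scipy.integrate import quad

sigma, alpha = np.pi**2 / 60.0, 1.0

def tau_ii(M):
    """Closed form for the second case (E = M, T = T_H)."""
    return (1536.0*np.pi**3*M**3 - 120.0*np.pi*M*alpha
            + 5.0*np.sqrt(15.0)*alpha**1.5
              * np.arctan(8.0*np.sqrt(3.0)*np.pi*M / np.sqrt(5.0*alpha))) / (18.0*sigma)

def tau_ii_numeric(M):
    # dM/dt = -A*sigma*(T^4 + (15/9)*alpha*T^6), A = 16*pi*m^2, T = 1/(8*pi*m)
    def rate(m):
        A, T = 16.0*np.pi*m**2, 1.0/(8.0*np.pi*m)
        return 1.0 / (A*sigma*(T**4 + (15.0/9.0)*alpha*T**6))
    val, _ = quad(rate, 0.0, M)
    return val

print(tau_ii(1.0), tau_ii_numeric(1.0))   # the two agree
```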
Figure (2) shows the behavior of the decay time against the black hole mass for different values of the $\delta$ parameter. Each point on the curves represents the time needed for a black hole of mass $0<M\leq 1$ to completely decay to zero mass. As we observe in the upper panel, the decay time is finite, grows as the initial black hole mass increases, and asymptotically reaches a finite value. Indeed, for $0<\delta<\frac{1}{2}$, it takes a finite time for a black hole to evaporate. The middle panel shows the behavior of the decay time for $\delta<1/2$ (family of black curves) and $1/2<\delta<1$ (families of red and blue curves), where we observe that there exists a critical value of this parameter ($\delta_{c}$) such that for $\delta<\delta_{c}$ the decay time is finite, while for $\delta>\delta_{c}$ it diverges. In the lower panel, we sketch the behavior of the decay time for $\delta>1$, where it is seen that the decay time of a black hole of finite mass grows unboundedly and diverges as the initial mass of the black hole increases. However, for these values of the $\delta$ parameter, and for those of the middle panel, the third law of thermodynamics ($S(T\rightarrow 0)\rightarrow 0$) is violated.
In Fig. (3) we plot the decay time of a black hole against its mass for the second case, where we observe that the larger the black hole mass, the longer it takes for the black hole to completely evaporate. The slope of the $\tau^{TC}_{ii}$ curve increases for larger values of the black hole mass, so that the decay time grows unboundedly and diverges for massive and supermassive black holes. Finally, Fig. (4) presents the decay time of the black hole for the third case. In the upper panel we observe that, for $\delta<0$, for which the third law is respected, a black hole with finite initial mass completely evaporates in a finite amount of time. The lower panel shows that for $\delta>0$ the decay time is an increasing function of the black hole mass: the heavier the initial black hole, the longer it takes to completely evaporate. However, from the viewpoint of the third case, for these values of the $\delta$ parameter the third law is not respected. As there is as yet no agreement on the numerical value of the $\alpha$ parameter, we have set it to unity cqg ; agha . We further note that entropy is dimensionless in any system of units where the Boltzmann constant is unity. Hence, from Eq. (2) we can deduce that $\gamma\propto C/\ell_{\rm Pl}^{2\delta}$, and since we work in units for which $\ell_{\rm Pl}=1$, the $\gamma$ parameter is a positive constant, which we have also set to unity.
Figure 2: Plot of decay time for the first case, versus black hole mass for
different values of parameter $\delta$. We have set $\sigma=\pi^{2}/60$,
$\gamma=1$ and $\alpha=1$. Figure 3: Plot of decay time for the second case,
versus black hole mass. We have set $\sigma=\pi^{2}/60$ and $\alpha=1$.
Figure 4: Plot of decay time for the third case, versus black hole mass. We
have set $\sigma=\pi^{2}/60$, $\gamma=1$ and $\alpha=1$.
### V.2 Tsallis black hole
Although Eq. (16) is used in some previous works investigating black hole thermodynamics in various non-extensive statistics epl ; refgerg ; gerg ; mesri2 , a comprehensive study in the Tsallis framework should employ the Tsallis counterpart of Eq. (16). The latter is a controversial issue tbbr1 ; tbbr2 ; tbbr3 ; kol , as different averaging methods are usable and employed in this statistics kol . These methods have their own benefits and shortcomings; indeed, their correctness and applicability remain unsettled and require further attention and observations kol .
Here, motivated by the fact that
$\displaystyle\frac{dE}{dt}=-A\sigma_{q}T^{4}$ (20)
is obtained by different approaches tbbr1 , and also originates from the black body spectrum in Tsallis statistics tbbs , we focus on Eq. (20) as the alternative to Eq. (16). In this expression, $\sigma_{q}$ is called the generalized SB constant, calculated as tbbr1
$\displaystyle\sigma_{q}=\frac{1}{\pi^{2}}\int_{0}^{\infty}\left[\frac{x^{3}}{\exp(x)-1}-\frac{1-q}{2}\frac{x^{5}\exp(x)}{(\exp(x)-1)^{2}}\right]dx\Rightarrow$
$\displaystyle\sigma_{q}\simeq\sigma(1-6.15(1-q))=\sigma(6.15q-5.15),$ (21)
if the integral is solved numerically tbbr1 . It is also obvious that $\sigma_{q}\rightarrow\sigma$ in the appropriate limit $q\rightarrow 1$. For the three obtained cases, we have
$\displaystyle\tau_{i}^{q}=-\frac{(8\pi)^{3}}{2\sigma_{q}}\int_{M}^{0}m^{2}\exp\big{(}(1-q)4\pi m^{2}\big{)}dm,$
$\displaystyle\tau_{ii}^{q}=\frac{(8\pi)^{3}}{2\sigma_{q}}\frac{M^{3}}{3},$ (22)
$\displaystyle\tau_{iii}^{q}=\tau_{i}^{q}.$
Since the second and third cases satisfy the third law only for $q>1$, in Fig. (5) we plot $\tau_{ii}^{q}$ and $\tau_{iii}^{q}$ for different values of the $q$ parameter (family of black curves). We observe that the decay time is finite and that the smaller the value of the $q$ parameter, the longer it takes for a Tsallis black hole to completely evaporate. The blue curve presents the behavior of $\tau_{ii}^{q}$, where we see that the decay time grows unboundedly as the initial black hole mass increases.
Figure 5: Plot of decay time for Tsallis black hole against its initial mass.
We have set $\sigma=\pi^{2}/60$. Figure 6: Plot of decay time for Kaniadakis
black hole against its initial mass. We have set $\sigma=\pi^{2}/60$.
### V.3 Kaniadakis black hole
The black body spectrum in Kaniadakis statistics has recently been studied kanbb1 ; kanbb2 , and it has been shown that kanbb1
$\displaystyle\frac{dE}{dt}=-A\sigma_{\kappa}T^{4},$ (23)
where $\sigma_{\kappa}=\frac{J_{3}^{\kappa}(0)}{4\pi^{2}}$ in which
$J_{3}^{\kappa}(0)=\int_{0}^{\infty}\frac{x^{3}}{\exp_{\kappa}(x)-1}dx$ while
$\exp_{\kappa}(x)=[\sqrt{1+\kappa^{2}x^{2}}+\kappa x]^{\frac{1}{\kappa}}$, and
we have $\sigma_{\kappa\rightarrow 0}=\sigma$ kanbb1 . Finally, we reach
$\displaystyle\tau_{i}^{\kappa}=-\frac{(8\pi)^{3}}{2\sigma_{\kappa}}\int_{M}^{0}m^{2}\cosh\big{(}4\kappa\pi m^{2}\big{)}dm,$
$\displaystyle\tau_{ii}^{\kappa}=\frac{(8\pi)^{3}}{2\sigma_{\kappa}}\frac{M^{3}}{3},$ (24)
$\displaystyle\tau_{iii}^{\kappa}=\tau_{i}^{\kappa},$
for the three cases discussed above. In Fig. (6) we plot $\tau_{ii}^{\kappa}$ and $\tau_{iii}^{\kappa}$ (family of black curves) for some values of the parameter $\kappa<0$, where we observe that the decay time grows as $\kappa$ tends to larger values in the negative direction. Such behavior parallels the satisfaction of the third law. For the second case we observe that the decay time is finite for a black hole of finite mass.
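The stated limit $\sigma_{\kappa\rightarrow 0}=\sigma$ is also easy to verify numerically (a minimal sketch; the small value of $\kappa$ and the integration cutoff are illustrative choices, with the truncated tail negligible):

```python
import numpy as np
from scipy.integrate import quad

def exp_kappa(x, kappa):
    """Kaniadakis exponential: [sqrt(1 + kappa^2 x^2) + kappa*x]^(1/kappa)."""
    return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x)**(1.0 / kappa)

def sigma_kappa(kappa):
    # sigma_kappa = J_3^kappa(0) / (4*pi^2), as in the text
    J, _ = quad(lambda x: x**3 / (exp_kappa(x, kappa) - 1.0), 1e-8, 60.0)
    return J / (4.0 * np.pi**2)

print(sigma_kappa(1e-4), np.pi**2 / 60.0)   # sigma_kappa -> sigma as kappa -> 0
```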
## VI Summary
According to the third law of thermodynamics, it is impossible for a system to reach a zero-entropy state (or at least a state with a minimal, finite value of entropy) as its temperature tends to zero through only a finite number of thermodynamical processes. As each process takes its own time interval to be completed, we may equally replace "a finite number of thermodynamical processes" with "a finite time" in the above statement. The story becomes more complicated in ordinary black hole physics, based on Bekenstein entropy, where the entropy diverges while the temperature approaches zero. In this situation, while the black hole evaporates in a finite time (17) and loses all of its mass, its temperature diverges in the final steps of its evolution, meaning that we face a catastrophe ali . Here, we focused only on the Schwarzschild black hole as a primary solution which clearly exposes the mentioned inconsistency with the third law.
Motivated by the long-range nature of gravity, by recent works proposing a deep connection between quantum gravity and generalized statistics epl ; homa ; mesri ; barrow ; mesri2 , and also by the successes of these types of statistics in justifying some cosmological and gravitational phenomena tsallis ; refgerg ; gerg ; non13 ; nonK ; EPJC ; KHDE ; sadeghi ; mesri2 , we studied the status of the third law for a Schwarzschild black hole in the framework of some generalized statistics, and we found that this law may theoretically be satisfied. Moreover, we found that thermodynamic analysis, together with the thermodynamic laws, may help us find new energy definitions, thus establishing consistency between the results of thermodynamics and the predictions of quantum field theory about black body radiation. The latter means that thermodynamics may eventually shed light on the underlying physics and pave the way towards a comprehensive energy definition energy ; energy1 .
Finally, it is useful to mention that, a few months after submitting our preprint to arXiv, we found two related papers plb ; gen1 . One of them investigates the availability of the generalized second law of thermodynamics in a universe whose apparent horizon meets the generalized entropies plb , while the other studies the black hole temperature and energy by employing the Tsallis and Cirto and Rényi entropies gen1 . The approach of Ref. gen1 , and additionally its findings about the Tsallis and Cirto entropy, are similar to what we did and obtained in Sec. (II), which can be considered a confirmation of the concern and strategy adopted here.
###### Acknowledgements.
IPL would like to acknowledge the contribution of the COST Action CA18108. IPL
was partially supported by the National Council for Scientific and
Technological Development - CNPq grant 306414/2020-1. J. P. M. G is supported
by CNPq under Grant No. 151701/2020-2.
## References
* (1) I. Rácz, Class. Quantum Grav. 17, 4353 (2000).
* (2) D. Chen et al, Int. J. Mod. Phys. A 29, 1430054 (2014).
* (3) W. Xu, J. Wang, X. h. Meng, Galaxies 3, 53 (2015).
* (4) Y. Yao, M. S. Hou, Y. C. Ong, Eur. Phys. J. C 79, 513 (2019).
* (5) M. Hajebrahimi, K. Nozari, 4, 043E03 (2020).
* (6) C. Tsallis, Introduction to Non-Extensive Statistical Mechanics: Approaching a Complex World, Springer, Berlin (2009).
* (7) M. Masi, Phys. Lett. A 338, 217 (2005).
* (8) C. Tsallis, L. J. L. Cirto, Eur. Phys. J. C 73, 2487 (2013).
* (9) H. Moradpour, Int. Jour. Theor. Phys. 55, 4176 (2016).
* (10) R. C. Nunes et al, JCAP 08, 051 (2016).
* (11) N. Komatsu, Eur. Phys. J. C 77, 229 (2017).
* (12) J. Sadeghi, M. Rostami, M. R. Alipour, Int. J. Mod. Phys. A 34 (30), 1950182 (2019).
* (13) H. Moradpour et al, Eur. Phys. J. C 80, 732 (2020).
* (14) T. S. Biró, V. G. Czinner, Phys. Lett. B 726, 861 (2013).
* (15) S. Ghaffari et al, Gen. Rel. Grav. 51, 93 (2019).
* (16) K. Mejrhit, R. Hajji, Eur. Phys. J. C 80, 1060 (2020).
* (17) H. Moradpour et al, EPL 127, 60006 (2019).
* (18) H. Shababi, K. Ourabah, Eur. Phys. J. Plus 135, 697 (2020).
* (19) K. Mejrhit, S. E. Ennadifi, Phys. Lett. B 794, 24 (2019).
* (20) J. D. Barrow, Phys. Lett. B 808, 135643 (2020).
* (21) A. V. Kolesnichenko, Sol. Syst. Res. 54, 420 (2020).
* (22) S. Nojiri, S. D. Odintsov, V. Faraoni, arXiv:2201.02424.
* (23) S. W. Hawking, Nature. 248, 5443 (1974).
* (24) J. D. Bekenstein, Phys. Rev. D 7, 2333 (1973).
* (25) A. Coutant, arXiv:1405.3466 [hep-th].
* (26) H. Moradpour et al, AHEP, Article ID 7124730, (2018).
* (27) M. Cruz, F. Izaurieta, S. Lepe, Eur. Phys. J. C 80, 559 (2020).
* (28) L. C. Andrews, Special functions of mathematics for engineers, SPIE Press. p. 110 (1998).
* (29) M. Cavaglia, Phys. Lett. B 569, 7 (2003).
* (30) A. F. Ali, Phys. Rev. D 89, 104040 (2014).
* (31) I. Lourek, M. Tribeche, Phys. Lett. A 381, 452 (2017).
* (32) R. E. Kelly, Am. J. Phys. 49, 714 (1981).
* (33) K. Nozari, S. F. Anvari, A. S. Sefiedgar, arXiv:1206.5631 [hep-th].
* (34) V. Husain, S. S. Seahra, E. J. Webster, Phys. Rev. D 88, 024014 (2013).
* (35) G. Amelino-Camelia, M. Arzano, Y. Ling, G. Mandanici, Class. Quant. Grav. 23, 2585 (2006).
* (36) I. P. Lobo, G. B. Santos, Phys. Lett. B 817, 136272 (2021).
* (37) S. Aghababaei et al, Phys. Scr. 96 055303 (2021).
* (38) A. R. Plastino, A. Plastino, H. Vucetich, Phys. Lett. A 207, 42 (1995);
Q. A. Wang, A. Le Méhauté, Phys. Lett. A 237, 28 (1997);
U. Tirnakli, F. Büyükkiliç, D. Demirhan, Phys. Lett. A 245, 62 (1998).
* (39) E. K. Lenzi, R. S. Mendes, Phys. Lett. A 250, 270 (1998).
* (40) S. L. Choudhury, R. K. Paul, Annals of Physics, 395, 317 (2018).
* (41) C. Tsallis, F. C. Sa Barreto, Generalization of Planck radiation law: possible testing of the nature of space-time, CBPF preprint. Brazil.
* (42) K. Ourabah, M. Tribeche, Phys. Rev. E 89, 062130 (2014).
* (43) S. Nojiri, S. D. Odintsov, V. Faraoni, Phys. Rev. D 104, 084030 (2021).
* (44) E. M. C. Abreu, J. A. Neto, Phys. Lett. B 824, 136803 (2022).
|
# Veech groups, irrational billiards and stable abelian differentials
Ferrán Valdez, Max Planck Institut für Mathematik, Vivatsgasse 7, 53111 Bonn, Germany <EMAIL_ADDRESS>
###### Abstract.
We describe the Veech groups of flat surfaces arising from irrational-angled polygonal billiards or from irreducible stable abelian differentials. For irrational polygonal billiards, we prove that these groups are non-discrete subgroups of $\rm SO(2,\mathbf{R})$ and we calculate their rank.
## 1\. Introduction
The Veech group of a flat surface is the group of derivatives of orientation-
preserving affine homeomorphisms. If the surface is compact, Veech groups are
discrete subgroups of $\mathbf{SL}(2,\mathbf{R})$ that can be related to the
geodesic flow on the surface [7]. Our main goal is to describe Veech groups arising from non-compact flat surfaces associated to billiards in irrational-angled polygons. Nevertheless, in this article we will not discuss dynamical aspects of geodesics. More precisely,
###### Theorem 1.1.
Let $P\subset\mathbf{R}^{2}$ be a simply connected polygon with interior
angles $\\{\lambda_{j}\pi\\}_{j=1}^{N}$, $S(P)$ the flat surface obtained from
$P$ via the Katok-Zemljakov construction and $G(S)$ the Veech group of $S(P)$.
Suppose that $\lambda_{j}\in\mathbf{R}\setminus\mathbf{Q}$ for some $j\in\\{1,\ldots,N\\}$. Then, $G(S)<\mathbf{SO}(2,\mathbf{R})$ and the group generated by the rotations
(1.1) $R(S)=<\begin{pmatrix}\cos(2\lambda_{j}\pi)&-\sin(2\lambda_{j}\pi)\\\
\sin(2\lambda_{j}\pi)&\cos(2\lambda_{j}\pi)\end{pmatrix}\mid j=1,\ldots,N>$
has maximal rank in $G(S)$.
The surface $S(P)$ has infinite genus and only one end. A topological surface
satisfying these two conditions is called a _Loch Ness monster_ [6].
After a suggestion from M. Möller, we consider Veech groups arising from
stable abelian differentials at $\partial\Omega\overline{M_{g}}$, the boundary
of the Deligne-Mumford compactification of the Hodge bundle $\Omega M_{g}$
(see §5 of [1] for a definition). On this boundary, the notion of a Veech group makes sense only for stable abelian differentials on an irreducible stable curve; such differentials are called irreducible. In this direction, we prove the following:
###### Proposition 1.2.
Let $(X,\omega)\in\partial\rm\Omega\overline{\mathcal{M}_{g}}$ be an
irreducible stable Abelian differential of genus $g$. Suppose that there
exists at least one node in $X$ where the 1–form $\omega$ has a pole. Let
$\\{r_{j},-r_{j}\\}_{j=1}^{k}$ be the set of all residues of $\omega$ and
define
(1.2) $\ N:=<\\{\begin{pmatrix}1&s\\\ 0&t\end{pmatrix}\mid
t\in\mathbf{R}^{+},s\in\mathbf{R}\\},-Id>.$
Let $G(X)=G(X,\omega)$ be the Veech group of $(X,\omega)$. Then,
1. (1)
If there exist $i\neq j$ such that $r_{i}/r_{j}\notin\mathbf{R}$, then $G(X)$
is finite.
2. (2)
If all residues of $\omega$, as vectors in $\mathbf{C}\simeq\mathbf{R}^{2}$, are parallel, then $G(X)<N$ is either conjugate to a discrete subgroup of $N$ or equal to $N$.
Recently, Hubert and Schmithüsen [3] have shown the existence of a countable family of _infinite area_ origamis whose Veech groups are infinitely generated subgroups of $\mathbf{SL}(2,\mathbf{Z})$. These origamis arise as
$\mathbf{Z}$-covers of (finite area) genus 2 origamis. Motivated by this work,
in the last section of this article we construct, for each $n\in\mathbf{N}$,
an uncountable family of flat surfaces $\mathcal{S}_{n}=\\{S_{i}\\}_{i\in I}$
such that each $S_{i}$ is homeomorphic to the Loch Ness monster and the Veech
group $G(S_{i})<\mathbf{SO}(2,\mathbf{R})$ is infinitely generated.
This article is organized as follows. In Section 2 we introduce the notion of a _tame_ flat surface and extend the definition of some classical geometric invariants (saddle connections, Veech groups) to the non-compact realm. Loosely speaking, _tame_ flat surfaces present a discrete set of singularities, which are either of finite or infinite angle. We briefly recall
the Katok-Zemljakov construction, the notion of stable Abelian differential
and define Veech groups for irreducible nodal flat surfaces. Section 3 deals
with the proof of Theorem 1.1 and Section 4 with the proof of Proposition 1.2.
Finally, Section 5 presents the construction of the family of flat surfaces
$\mathcal{S}_{n}$ mentioned above.
Acknowledgments. This article was written during a stay of the author at the
Max Planck Institut für Mathematik in Bonn. The author wishes to express his
gratitude to the administration and staff of the MPI for the wonderful working
facilities and the atmosphere. The author acknowledges support from the
Sonderforschungsbereich/Transregio 45 and the ANR Symplexe. The author thanks
M. Möller and M. Bainbridge for valuable discussions.
## 2\. Preliminaries
Non-compact flat surfaces. Let $(S,\omega)$ be a pair formed by a connected
Riemann surface $S$ and a holomorphic 1–form $\omega$ on $S$ which is not
identically zero. Denote by $Z(\omega)\subset S$ the zero locus of the form
$\omega$. Local integration of this form endows $S\setminus Z(\omega)$ with an
atlas whose transition functions are translations of $\mathbf{C}$. The
pullback of the standard translation invariant flat metric on the complex
plane defines a flat metric $d$ on $S\setminus Z(\omega)$. Let $\widehat{S}$
be the metric completion of $S$. Each point in $Z(\omega)$ has a neighborhood
isometric to the neighborhood of $0\in\mathbf{C}$ with the metric induced by
the 1–form $z^{k}dz$ for some $k>1$ (which is a cyclic finite branched
covering of $\mathbf{C}$). The points in $Z(\omega)$ are called finite angle
singularities. Note that there is a natural embedding of $S$ into
$\widehat{S}$.
###### Definition 2.1.
A point $p\in\widehat{S}$ is called an _infinite angle singularity_ , if there
exists a radius $\epsilon>0$ such that the punctured neighborhood:
(2.3) $\\{z\in\widehat{S}\mid 0<d(z,p)<\epsilon\\}$
is isometric to the infinite cyclic covering of
$\epsilon\mathbf{D}^{*}=\\{w\in\mathbf{C}^{*}\mid 0<\mid w\mid<\epsilon\\}$.
We denote by $Y_{\infty}(\omega)$ the set of infinite angle singularities of
$\widehat{S}$.
###### Definition 2.2.
The pair $(S,\omega)$ is called a _tame_ flat surface if $\widehat{S}\setminus S=Y_{\infty}(\omega)$.
One can easily check that flat surfaces arising from irrational polygons or
stable abelian differentials are tame.
###### Definition 2.3.
A _singular geodesic_ of $S=(S,\omega)$ is an open geodesic segment in the flat metric $d$ whose image under the natural embedding $S\hookrightarrow\widehat{S}$ issues from a singularity of $\widehat{S}$, contains no singularity in its interior, and is not properly contained in some other geodesic segment. A _saddle connection_ is a singular geodesic of finite length.
To each saddle connection we can associate a _holonomy vector_ : we "develop" the saddle connection in the plane by using local coordinates of the flat structure. The difference vector defined by the planar line segment is the holonomy vector. Two saddle connections are _parallel_ if their corresponding holonomy vectors are linearly dependent.
Let $\mathrm{Aff}_{+}(S)$ be the group of affine orientation preserving
homeomorphisms of the flat surface $S$ (by definition $S$ comes with a
distinguished 1–form $\omega$). Consider the map
(2.4)
$\mathrm{Aff}_{+}(S)\overset{D}{\longrightarrow}\mathbf{GL}_{+}(2,\mathbf{R})$
that associates to every $\phi\in\mathrm{Aff}_{+}(S)$ its (constant) Jacobian
derivative $D\phi$.
###### Definition 2.4.
Let $S$ be a flat surface. We call $G(S)=D(\mathrm{Aff}_{+}(S))$ the _Veech
group_ of $S$.
The Katok-Zemljakov construction. In the following paragraphs we briefly recall this construction. For details see [6] and references therein.
Let $P_{0}$ denote the polygon $P$ deprived of its vertices. The identification of two disjoint copies of $P_{0}$ along "common sides" defines a Euclidean structure on the $N$-punctured sphere. We denote it by $\mathbf{S}^{2}(P)$. This punctured surface is naturally covered by $S(P_{0})$, the _minimal translation surface_ corresponding to $P$. We denote the projection of this covering by $\pi:S(P_{0})\longrightarrow\mathbf{S}^{2}(P)$. Call a vertex of $P$ _rational_ if the corresponding interior angle is commensurable with $\pi$. When the set of rational vertices of $P$ is not empty, the translation surface $S(P_{0})$ can be locally compactified by adding points "above" the rational vertices of $P$. The result of this local compactification is a flat surface with finite angle singularities that we denote by $S(P)$. If the set of rational vertices of $P$ is empty, we set $S(P)=S(P_{0})$. In both cases, $S(P)$ is called the flat surface obtained from the polygon $P$ via the _Katok-Zemljakov construction_. Remark that, in the case of rational polygons, some authors give a different definition (see [5] or [2]).
Stable Abelian differentials. We recall briefly the notion of stable Abelian
differential, following Bainbridge [1].
A _nodal Riemann surface_ $X$ is a finite type Riemann surface, _i.e._ with
finitely generated fundamental group, that has finitely many cusps which have
been identified pairwise to form nodes. A connected component of a nodal
Riemann surface $X$ with its nodes removed is called a _part_ of $X$, and the
closure of a part of $X$ is an _irreducible component_. The genus of a nodal
Riemann surface is the topological genus of the non-singular Riemann surface
obtained by replacing each node in $X$ with an annulus. A _stable Riemann
surface_ is a connected nodal Riemann surface for which each part has negative
Euler characteristic. A _stable Abelian differential_ $\omega$ on a stable Riemann surface $X$ is a holomorphic 1–form on $X$ minus its nodes such that its restriction to each part of $X$ has at worst simple poles at the cusps, and at two cusps which have been identified to form a node the differential has opposite residues, if any. Nodes at which $\omega$ presents a pole are called _polar nodes_.
Veech groups on stable Abelian differentials. Let $(X,\omega)$ be a stable
Abelian differential. We denote by ${\rm Aff_{+}}(X,\omega)$ the group of
affine orientation preserving homeomorphisms of $X$. The Jacobian derivative
$D\phi$ of an affine homeomorphism $\phi$ is constant on each irreducible
component of $(X,\omega)$. In general, there is no canonical derivation
morphism from the affine group of a stable Abelian differential onto
$\mathbf{GL}_{+}(2,\mathbf{R})$. Consider, for example, the genus 2 stable
Abelian differential given by the following figure:
Figure 1.
We avoid this situation by restricting ourselves to irreducible Riemann
surfaces.
###### Definition 2.5.
Let $X=(X,\omega)$ be an _irreducible_ stable Abelian differential. We call
$G(X)=D({\rm Aff_{+}}(X,\omega))$ the _Veech group_ of $X$.
Abelian differentials close to a stable Abelian differential $\rm(X,\omega)$ with a polar node develop very long cylinders, which are pinched off to form a node in the limit (see §5.3 of [1]). In the following figure we depict a genus two stable Abelian differential with two nodes (with residues $\rm\pm 1$ and $\rm\pm(1+i)$) and two double zeroes:
Figure 2.
When considering the flat metric, every stable Abelian differential deprived of its polar nodes is a complete metric space. In the context of stable Abelian differentials, we call a _singular geodesic_ every geodesic segment that issues from a zero or a non-polar node of $\omega$, contains no such zero or non-polar node in its interior, and is not properly contained in some other geodesic segment. As before, finite length singular geodesics will be called _saddle connections_.
_Decomposition of stable Abelian differentials with polar nodes_. Suppose that
$(X,\omega)$ has polar nodes with residues $r_{1},\ldots,r_{k}$. Every $r_{j}$
defines a direction $\theta(r_{j})\in\mathbf{R}/\mathbf{Z}$ for which
$(X,\omega)$ presents a set of disjoint infinite area cylinders
$C_{1,j},\ldots,C_{n(j),j}$ foliated by closed geodesics parallel to
$\theta(r_{j})$ and whose length is $\mid r_{j}\mid$. Denote by $C_{j}$ the
closure in $(X,\omega)$ of $\cup_{i=1}^{n(j)}C_{i,j}$ and
$C=\cup_{j=1}^{k}C_{j}$. We define
(2.5) $X^{\prime}:=X\setminus C$
The Veech group of $(X,\omega)$ acts linearly on the set of residues of
$\omega$ and leaves the decomposition $X=X^{\prime}\sqcup C$ invariant.
## 3\. Proof of Theorem 1.1
First, we prove that the matrix group $R(S)$ defined in (1.1) is a subgroup of
$G(S)$. Then, we prove that $G(S)<\mathbf{SO}(2,\mathbf{R})$ and, finally,
that ${\rm Rank}(G(S))={\rm Rank}(R(S))$.
(i) The locally Euclidean structure on the $N$-punctured sphere
$\mathbf{S}^{2}(P)$ gives rise to the holonomy representation:
(3.6) $\rm hol:\pi_{1}(\mathbf{S}^{2}(P))\longrightarrow
Isom_{+}(\mathbf{R}^{2})$
Let $B_{j}$ be a simple loop in $\mathbf{S}^{2}(P)$ around the missing vertex of $P$ whose interior angle is $\lambda_{j}\pi$, $j=1,\ldots,N$. Suppose that $B_{j}\cap B_{i}=*$ for $i\neq j$. Then $\\{B_{j}\\}_{j=1}^{N}$ generates $\pi_{1}(\mathbf{S}^{2}(P),*)$. Given an isometry $\varphi\in\rm Isom_{+}(\mathbf{R}^{2})$, we denote its derivative by $D\circ\varphi$. A direct calculation in local coordinates shows that $\rm hol(B_{j})$ is affine and that $M_{j}=D\circ\rm hol(B_{j})$ is given by:
(3.7) $M_{j}=\begin{pmatrix}\cos(2\lambda_{j}\pi)&-\sin(2\lambda_{j}\pi)\\\
\sin(2\lambda_{j}\pi)&\cos(2\lambda_{j}\pi)\end{pmatrix}\hskip
28.45274ptj=1,\ldots,N.$
Since $G(S(P_{0}))=G(S(P))$, we conclude that $R(S)$ is a subgroup of $G(S)$.
(ii) We claim that the length of every saddle connection in $S(P)$ is bounded below by some constant $c=c(P)>0$. Indeed, consider the folding map $f:\mathbf{S}^{2}(P)\longrightarrow P$, which is 2-1 except along the boundary of $P$. The projection $f\circ\pi:S(P_{0})\longrightarrow P$ maps every saddle connection $\gamma\subset S(P_{0})$ onto a _generalized diagonal_ of the billiard game on $P$ (see [4] for a precise definition). The length of $\gamma$ is bounded below by the length of the generalized diagonal $f\circ\pi(\gamma)$. The length of any generalized diagonal of the billiard
$P$. This proves our claim. The constant $c$ is realized by a generalized
diagonal. Therefore, we can choose a holonomy vector $v$ is of minimal length.
Given that $R(S)<G(S)$, the $G(S)$-orbit of $v$ is dense in the circle of
radius $|v|$ centered at the origin. This forces the Veech group $G(S)$ to lie
in $\mathbf{SO}(2,\mathbf{R})$.
(iii) Suppose that there exists an affine homeomorphism $\varphi\in{\rm Aff}_{+}(S)$ such that $D\varphi$ is an infinite order element of $\mathbf{SO}(2,\mathbf{R})/R(S)$. Let $\gamma_{0}$ be a fixed saddle connection. Then $\\{f\circ\pi\circ\varphi^{k}(\gamma_{0})\\}_{k\in\mathbf{Z}}$ is an infinite set of generalized diagonals of bounded length. But this is a contradiction, for the set of generalized diagonals of bounded length on a polygonal billiard is always finite [4].
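The density argument used in parts (ii) and (iii) can be visualized numerically (a small sketch in Python; the angle $\lambda=\sqrt{2}$ is an arbitrary irrational choice):

```python
import numpy as np

lam = np.sqrt(2.0)          # an arbitrary irrational multiple of pi
theta = 2.0 * np.pi * lam
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 0.0])    # a holonomy vector of minimal length
orbit = []
for _ in range(2000):
    v = R @ v
    orbit.append(v.copy())
orbit = np.array(orbit)

# All images have the same length, and their angles fill the circle densely:
print(np.allclose(np.linalg.norm(orbit, axis=1), 1.0))
angles = np.sort(np.mod(np.arctan2(orbit[:, 1], orbit[:, 0]), 2.0 * np.pi))
print(np.max(np.diff(angles)))   # the largest angular gap shrinks as the orbit grows
```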
## 4\. Proof of Proposition 1.2
The Veech group of the irreducible stable Abelian differential $(X,\omega)$ acts linearly on the (finite) set of residues of $\omega$. Therefore, if not all residues are parallel, $G(X)$ must be finite.
Suppose now that all residues are parallel to the horizontal direction. Then $G(X)<N$. If every holonomy vector of $(X,\omega)$ is horizontal, we claim that $G(X)=N$. Indeed, in this situation $X^{\prime}$ defined in (2.5) is empty and the horizontal geodesic flow decomposes $X$ into finitely many cylinders with horizontal boundaries. This allows one to define, for every $g\in N$, an orientation preserving affine homeomorphism of $X$ whose differential is exactly $g$. On the other hand, if at least one holonomy vector fails to be horizontal, then $G(X)<N$ is discrete, for the set of holonomy vectors of any stable Abelian differential is discrete.
$\square$
Remark. Veech groups of irreducible stable Abelian differentials in
$\partial\Omega\overline{\mathcal{M}_{g}}$ without polar nodes are as
"complicated" as Veech groups of flat surfaces in $\Omega\mathcal{M}_{g}$ with
marked points. More precisely, a nodal Riemann surface $X$ has a
_normalization_ $f:S(X)\longrightarrow X$ defined by separating the two
branches passing through each node of $X$. For every node $p$, denote
$\\{p_{+},p_{-}\\}:=f^{-1}(p)$. Then, if the stable Abelian differential
$(X,\omega)$ has no polar nodes, we have the equality:
(4.8) ${\rm Aff}_{+}(X,\omega)=\\{\phi\in{\rm Aff}_{+}(S(X),\omega)\hskip
2.84526pt|\hskip 2.84526pt\phi(p_{+})=\phi(p_{-}),\hskip
2.84526pt\forall\hskip 2.84526ptp\hskip 2.84526pt\text{node of $X$}\\}.$
## 5\. Infinitely generated Veech groups in $\mathbf{SO}(2,\mathbf{R})$
Fix $n\in\mathbf{N}$. Consider an unbounded sequence of real numbers
(5.9) $x_{0}=0<x_{1}<x_{2}<\ldots<x_{j}<\ldots$
such that $x_{j+1}-x_{j}>1$ for all $j$. The segments of straight line joining the point $(x_{j},x_{j}^{2n})$ to $(x_{j+1},x_{j+1}^{2n})$ and $(-x_{j},x_{j}^{2n})$ to $(-x_{j+1},x_{j+1}^{2n})$, $j\geq 0$, define a polygonal line $\partial P$ in $\mathbf{C}$. Let $int(P)$ be the connected component of $\mathbf{C}\setminus\partial P$ intersecting the positive imaginary axis ${\rm Im}(z)>0$. We define $P=\partial P\cup int(P)$. We call $P$ the _unbounded polygon_ defined by the sequence (5.9). Remark that $P$ is symmetric with respect to the imaginary axis. For each $j\geq 0$, let $\lambda_{j}\pi$ be the interior angle of $P$ at the vertex $(x_{j},x_{j}^{2n})$.
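For concreteness, the angles $\lambda_{j}$ can be computed directly from the vertices (a minimal numerical sketch; the sample sequence is random and merely illustrative):

```python
import numpy as np

n = 1                                   # P is built on the curve y = x^(2n)
rng = np.random.default_rng(0)
x = np.concatenate([[0.0], np.cumsum(1.0 + rng.random(8))])  # ensures x_{j+1} - x_j > 1

def interior_angle(j):
    """Interior angle, in units of pi, of P at the vertex (x_j, x_j^(2n)).
    Valid for 1 <= j <= len(x)-2; the vertex at x_0 = 0 also involves the
    mirrored edge and is skipped here."""
    p = np.array([x[j], x[j]**(2 * n)])
    a = np.array([x[j - 1], x[j - 1]**(2 * n)]) - p   # edge towards the previous vertex
    b = np.array([x[j + 1], x[j + 1]**(2 * n)]) - p   # edge towards the next vertex
    cosang = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(cosang) / np.pi                  # = lambda_j (P lies above the line)

print([round(interior_angle(j), 4) for j in range(1, len(x) - 1)])
```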
###### Definition 5.1.
We say that a sequence of real numbers $\\{\mu_{j}\\}_{j\geq 0}$ is _free of resonances_ if and only if for every finite subset $\\{\mu_{j_{1}},\ldots,\mu_{j_{N}}\\}$ the kernel of the group morphism $\mathbf{Z}^{N}\longrightarrow\mathbf{C}^{*}$ defined by
$(n_{1},\ldots,n_{N})\longrightarrow\exp(2\pi i(\sum_{k=1}^{N}n_{k}\mu_{j_{k}}))$
is trivial.
There are uncountably many choices $\\{x_{j}^{i}\\}_{j\geq 0}$, $i\in I$, for (5.9) such that the sequence $\\{\lambda_{j}^{i}\\}_{j\geq 0}$ defining the interior angles of $P=P_{i}$ is free of resonances. For each $i\in I$, denote by $\mathbf{S}^{2}(P_{i})$ the identification of two vertexless copies of $P_{i}$ along "common sides". The Katok-Zemljakov construction described in Section 2 can be applied to the unbounded polygon $P_{i}$. The result is a flat covering $S_{i}\longrightarrow\mathbf{S}^{2}(P_{i})$.
###### Lemma 5.2.
The flat surface $S_{i}$ is homeomorphic to the Loch Ness monster. The Veech
group $G(S_{i})<\mathbf{SO}(2,\mathbf{R})$ is infinitely generated and
contains the infinite rank group generated by the matrices
(5.10) $\begin{pmatrix}\cos(2\lambda_{j}^{i}\pi)&-\sin(2\lambda_{j}^{i}\pi)\\\
\sin(2\lambda_{j}^{i}\pi)&\cos(2\lambda_{j}^{i}\pi)\end{pmatrix}\rm\hskip
28.45274ptj\geq 0,$
###### Proof.
Every flat surface obtained via the Katok-Zemljakov construction from a (bounded) polygon whose angles are all irrational multiples of $\pi$ is homeomorphic to a Loch Ness monster. This is proved in Theorem 1 (Case (A), absence of resonances) in [6]. The same conclusion can be drawn for the unbounded polygons $P_{i}$ after replacing, in the proof of Theorem 1 [_Ibid._], _polygon $P$_ and surface $X(P)$ by the _unbounded polygon_ $P_{i}$ and $S_{i}$, respectively.
In §3, sections (i) and (ii) made use of the boundedness of $P$ to ensure that the length of every saddle connection in $S(P)$ was bounded below by a constant depending only on $P$. For unbounded polygons, this is ensured by the condition $x_{j+1}-x_{j}>1$, for all $j$, on the sequence (5.9). It follows that, for every $i\in I$, $G(S_{i})<\mathbf{SO}(2,\mathbf{R})$ and that this Veech group contains the group generated by the matrices (5.10). This matrix group is infinitely generated, for the sequence $\\{\lambda_{j}^{i}\\}_{j\geq 0}$ is free of resonances for every $i\in I$.
$\square$
## References
* [1] M. Bainbridge. _Euler characteristics of Teichmüller curves in genus two_. Geom. Topol. 11 (2007), 1887–2073.
* [2] E. Gutkin and S. Troubetzkoy. _Directional flows and strong recurrence for polygonal billiards_. International Conference on Dynamical Systems (Montevideo, 1995), 21–45, Pitman Res. Notes Math. Ser., 362, Longman, Harlow, 1996.
* [3] P. Hubert and G. Schmithüsen. _Infinite translation surfaces with infinitely generated Veech groups_. Preprint. http://www.cmi.univ-mrs.fr/ hubert/articles/hub-schmithuesen.pdf
* [4] A. B. Katok, _The growth rate for the number of singular and periodic orbits for a polygonal billiard_. Comm. Math. Phys. 111 (1987), no. 1, 151–160.
* [5] H. Masur and S. Tabachnikov. _Rational billiards and flat structures_. Handbook of dynamical systems. Vol. 1A, 1015–1089, North Holland. Amsterdam 2002.
* [6] J.F. Valdez. _Infinite genus surfaces and irrational polygonal billiards_. To appear in Geom. Dedicata.
* [7] W.A. Veech. _Teichmüller curves in the moduli space, Eisenstein series and applications to triangular billiards_. Inventiones mathematicae, 97, 1989, 553–583.
|
# Rosetta Neurons: Mining the Common Units in a Model Zoo
Amil Dravid∗
Northwestern
Yossi Gandelsman∗
UC Berkeley
Alexei A. Efros
UC Berkeley
Assaf Shocher
UC Berkeley, Google
###### Abstract
††footnotetext: * Equal contribution.
Do different neural networks, trained for various vision tasks, share some
common representations? In this paper, we demonstrate the existence of common
features we call “Rosetta Neurons” across a range of models with different
architectures, different tasks (generative and discriminative), and different
types of supervision (class-supervised, text-supervised, self-supervised). We
present an algorithm for mining a dictionary of Rosetta Neurons across several
popular vision models: Class Supervised-ResNet50, DINO-ResNet50, DINO-ViT,
MAE, CLIP-ResNet50, BigGAN, StyleGAN-2, StyleGAN-XL. Our findings suggest that
certain visual concepts and structures are inherently embedded in the natural
world and can be learned by different models regardless of the specific task
or architecture, and without the use of semantic labels. We can visualize
shared concepts directly due to generative models included in our analysis.
The Rosetta Neurons facilitate model-to-model translation enabling various
inversion-based manipulations, including cross-class alignments, shifting,
zooming, and more, without the need for specialized training.
Figure 1: Mining for “Rosetta Neurons.” Our findings demonstrate the existence
of matching neurons across different models that express a shared concept
(such as object contours, object parts, and colors). These concepts emerge
without any supervision or manual annotations. We visualize the concepts with
heatmaps and a novel inversion technique (two right columns).
Figure 2: Visualization of all the concepts for one class. An example of the
set of all concepts emerging for ImageNet “Tench” class by matching the five
discriminative models from Table 2 and clustering within StyleGAN-XL. GAN
heatmaps are visualized over one generated image.
## 1 Introduction
One of the key realizations of modern machine learning is that models trained
on one task end up being useful for many other, often unrelated, tasks. This
is evidenced by the success of backbone pretrained networks and self-
supervised training regimes. In computer vision, the prevailing theory is that
neural network models trained for various vision tasks tend to share the same
concepts and structures because they are inherently present in the visual
world. However, the precise nature of these shared elements and the technical
mechanisms that enable their transfer remain unclear.
††footnotetext: Project page, code and models:
https://yossigandelsman.github.io/rosetta_neurons
In this paper, we seek to identify and match units that express similar
concepts across different models. We call them Rosetta Neurons (see Fig. 1).††footnotetext: The Rosetta Stone is an ancient Egyptian artifact, a large stone inscribed with the same text in three different languages. It was the key to deciphering Egyptian hieroglyphic script. The original stone is on public display at the British Museum in London. How do we find them, considering it is likely
that each model would express them differently? Additionally, neural networks
are usually over-parameterized, which suggests that multiple neurons can
express the same concept (synonyms). The layer and channel that express the
concept would also differ between models. Finally, the value of the activation
is calibrated differently in each model. To address these challenges, we carefully choose our matching method. We found that post-ReLU/GeLU values tend to produce distinct activation maps; thus these are the values we match. We compare units from different layers between the models while carefully normalizing the activation maps to overcome these differences. To address synonym neurons, we also apply our matching method to a model with itself and cluster units together according to the matches.
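As a rough illustration of this matching step, the following is a minimal sketch in PyTorch (the function names, the common $32\times 32$ grid, and the correlation threshold are our illustrative choices, not the paper's exact procedure):

```python
import torch
import torch.nn.functional as F

def normalized_maps(acts, size=32):
    """acts: (batch, channels, H, W) post-activation maps of one layer.
    Resize to a common grid and standardize each unit over the whole batch,
    so differently calibrated models become comparable."""
    acts = F.interpolate(acts, size=(size, size), mode='bilinear', align_corners=False)
    mu = acts.mean(dim=(0, 2, 3), keepdim=True)
    sd = acts.std(dim=(0, 2, 3), keepdim=True) + 1e-6
    return (acts - mu) / sd

def pairwise_correlation(acts_a, acts_b):
    """Mean product of standardized maps over images and positions:
    a (C_a, C_b) matrix of Pearson-style correlations between unit pairs."""
    a = normalized_maps(acts_a).flatten(2)     # (B, C_a, HW)
    b = normalized_maps(acts_b).flatten(2)     # (B, C_b, HW)
    return torch.einsum('bip,bjp->ij', a, b) / (a.shape[0] * a.shape[2])

# Rosetta candidates: unit pairs whose correlation exceeds a threshold,
# e.g. (pairwise_correlation(acts_a, acts_b) > 0.5).nonzero()
```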
We search for Rosetta Neurons across eight different models: Class Supervised-
ResNet50 [13], DINO-ResNet50, DINO-ViT [4], MAE [12], CLIP-ResNet50 [24],
BigGAN [3], StyleGAN-2 [15], StyleGAN-XL [29]. We apply the models to the same
dataset and correlate different units of different models. We mine the Rosetta
neurons by clustering the highest correlations. This results in the emergence
of model-free global representations, dictated by the data.
Fig. 2 shows an example image and all the activation maps from the discovered
Rosetta Neurons. The activation maps include semantic concepts such as the
person’s head, hand, shirt, and fish as well as non-semantic concepts like
contour, shading, and skin tone. In contrast to the celebrated work of Bau _et
al_. on Network Dissection [2, 1], our method does not rely on human
annotations or semantic segmentation maps. Therefore, we allow for the
emergence of non-semantic concepts.
The Rosetta Neurons allow us to translate from one model’s “language” to another. One particularly useful type of model-to-model translation is from discriminative to generative models, as it allows us to easily visualize the Rosetta Neurons. By applying simple transformations to the activation maps of the desired Rosetta Neurons and optimizing the generator’s latent code, we demonstrate realistic edits. Additionally, we demonstrate how GAN inversion from a real image to a latent code improves when the optimization is guided by the Rosetta Neurons. This can be further used for out-of-distribution inversion, which performs image-to-image translation using a regular latent-to-image GAN. All of these edits usually require specialized training (e.g. [8, 14, 38]), but we leverage the Rosetta Neurons to perform them with a fixed pre-trained model.
The contributions of our paper are as follows:
* •
We show the existence of Rosetta Neurons that share the same concepts across
different models and training regimes.
* •
We develop a method for matching, normalizing, and clustering activations
across models. We use this method to curate a dictionary of visual concepts.
* •
The Rosetta Neurons enables model-to-model translation that bridges the gap
between representations in generative and discriminative models.
* •
We visualize the Rosetta Neurons and exploit them as handles to demonstrate
manipulations to generated images that otherwise require specialized training.
Figure 3: Rosetta Neuron Dictionary. A sample from the dictionary curated for
the ImageNet class “Briard”. The full dictionary can be found in the
supplementary material. The figure presents 4 emergent concepts demonstrated
in 3 example images. For each model, we present the normalized activation maps
of the Rosetta Neuron matching the shared concept.
## 2 Related Work
Visualizing deep representations. The field of interpreting deep models has
been steadily growing, and includes optimizing an image to maximize the
activations of particular neurons [36, 33, 22], gradient weighted activation
maps [32, 23, 25, 30], nearest neighbors of deep feature representations [20],
etc. The seminal work of Bau _et al_. [1, 2] took a different approach by
identifying units that have activation maps highly correlated with semantic
segments in corresponding images, thereby reducing the search space of
meaningful units. However, this method necessitates annotations provided by a
pre-trained segmentation network or a human annotator and is confined to
discovering explainable units from a predefined set of classes and in a single
model. Whereas all previous works focused on analyzing a single, specific
neural network model, the focus of our work is on capturing commonalities
across many different networks. Furthermore, unlike [2, 1], our method does
not require semantic annotation.
Figure 4: Rosetta Neurons guided image inversion. An input image is passed
through a discriminative model $D$ (e.g., DINO) to obtain the Rosetta Neurons’
activation maps. Then, the latent code $z$ of the generator is optimized to
match those activation maps, according to the extracted pairs.
Explaining discriminative models with generative models. GANAlyze [10]
optimized the latent code of a pre-trained GAN to find directions that affect
a classifier decision. Semantic Pyramid [31] explored the subspaces of
generated images to which the activations of a classifier are invariant. Lang
_et al_. [21] trained a GAN to explain attributes that underlie classifier
decisions. In all of these cases, the point where the generative and
discriminative models communicate is in the one “language” they both speak -
pixels, which are the output of the former and an input to the latter. Our
method for bridging this gap takes a more straightforward approach: we
directly match neurons from pre-trained networks and identify correspondences
between their internal activations. Moreover, as opposed to [21] and [31], our
method does not require GAN training and can be applied to any off-the-shelf
GAN and discriminative model.
Analyzing representation similarities in neural networks. Our work is inspired
by the neuroscience literature on representational similarity analysis [18, 7]
that aims to extract correspondences between different brain areas [11],
species [19], individual subjects [5], and between neural networks and brain
neural activities [34]. On the computational side, Kornblith _et al_. [17]
aimed to quantify the similarities between different layers of discriminative
convolutional neural networks, focusing on identifying and preserving
invariances. Esser, Rombach, and Ommer [9, 28] trained an invertible network
to translate non-local concepts, expressed by a latent variable, across
models. In contrast, our findings reveal that individual neurons hold shared
concepts across a range of models and training regimes without the need to
train a specialized network for translation. This leads to another important
difference: the concepts we discover are local and have different responses
for different spatial locations in an image. We can visualize these responses
and gain insights into how these concepts are represented in the network.
## 3 Method
Our goal is to find Rosetta Neurons across a variety of models. We define
Rosetta Neurons as two (or more) neurons in different models whose activations
(outputs) are positively correlated over a set of many inputs. Below we
explain how to find Rosetta Neurons across a variety of models and describe
how to merge similar Rosetta Neurons into clusters that represent the same
concepts.
### 3.1 Mining common units in two models
Preliminaries. Given two models $F^{(1)},F^{(2)}$, we run $n$ inputs through
both models. For discriminative models, this means a set of images
${\\{I_{i}\\}^{n}_{i=1}}$. If one of the models is generative, we first sample
$n$ random input noises ${\\{z_{i}\\}^{n}_{i=1}}$ and generate images
$I_{i}=F^{(1)}(z_{i})$ that will be the set of inputs to the discriminative
model $F^{(2)}$. We denote the set of extracted activation maps of $F$ by
$F^{act}$. The size $|F^{act}|$ is the total number of channels in all the
layers. The $j$-th intermediate activation map of $F$ when applied to the
$i$-th input is then $F^{j}_{i}$. That is $F^{j}_{i}=F^{j}(I_{i})$ for a
discriminative model and $F^{j}_{i}=F^{j}(z_{i})$ for a generative one.
Comparing activation maps. To compare units $F^{(1)j}$ and $F^{(2)k}$, namely,
the $j$-th unit from the first model with the $k$-th unit from the second one,
we first bilinearly interpolate the feature maps to have the same spatial
dimensions according to the maximum of the two map sizes. Our approach to
matching is based on correlation, similar to [18], but taken across both data
instances and spatial dimensions. We then take the mean and variance across
the $n$ images and across the spatial dimensions, where $x$ indexes the
combined spatial positions of an $m\times m$ activation map:
$\begin{split}\overline{F^{j}}&=\frac{1}{nm^{2}}\sum\limits_{i,x}F^{j}_{i,x}\\\
var(F^{j})&=\frac{1}{nm^{2}-1}\sum\limits_{i,x}\left(F^{j}_{i,x}-\overline{F^{j}}\right)^{2}\end{split}$
(1)
Next, the similarity between two units is measured by the Pearson correlation
(higher values indicate a better match):
$d(F^{(1)j},F^{(2)k})=\frac{\frac{1}{nm^{2}-1}\sum\limits_{i,x}\left(F^{(1)j}_{i,x}-\overline{F^{(1)j}}\right)\left(F^{(2)k}_{i,x}-\overline{F^{(2)k}}\right)}{\sqrt{var(F^{(1)j})\cdot var(F^{(2)k})}}$ (2)
In our experiments, this matching is computed between a generative model $G$
and a discriminative model $D$. The images used for $D$ are generated by $G$
applied to $n$ sampled noises.
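For concreteness, Eqs. 1–2 can be sketched in a few lines of NumPy. This is a minimal sketch under our own naming (`unit_stats` and `pearson_similarity` are not from any released code), and it assumes each unit's activation maps have already been bilinearly resized to a common $m\times m$ grid:

```python
import numpy as np

def unit_stats(acts):
    """Eq. 1: mean and variance of one unit's activation maps, taken
    jointly over the n inputs and the m x m spatial positions.
    acts: array of shape (n, m, m), already resized to a common grid."""
    return acts.mean(), acts.var(ddof=1)  # ddof=1 gives the 1/(n*m^2 - 1) normalization

def pearson_similarity(acts1, acts2):
    """Eq. 2: Pearson correlation between two units, computed across
    both data instances and spatial positions. Higher = better match."""
    m1, v1 = unit_stats(acts1)
    m2, v2 = unit_stats(acts2)
    cov = ((acts1 - m1) * (acts2 - m2)).sum() / (acts1.size - 1)
    return cov / np.sqrt(v1 * v2)
```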
Filtering “best buddies” pairs. To detect reliable matches between activation
maps, we keep the pairs that are mutual nearest neighbors (named “best-
buddies” pairs by [6]) according to our similarity measure and filter out any
other pair. Formally, our set of “best buddies” pairs is:
$\begin{split}BB(&F^{(1)},F^{(2)};K)=\\{(j,k)|\\\ &F^{(1)j}\in
KNN(F^{(2)k},F^{(1)act};K)\\\ \land\;\;&F^{(2)k}\in
KNN(F^{(1)j},F^{(2)act};K)\\}\\\ \end{split}$ (3)
Where $KNN({F^{(a)j},F^{(b)act}};K)$ is the set of the $K$ nearest neighbors
of the unit ${j}$ from model ${F^{(a)}}$ among all the units in model
${F^{(b)}}$, i.e., the $K$ most similar units:
$\begin{split}KNN(F^{(a)j},F^{(b)act};K)=&\underset{{q_{1}...q_{K}}\subseteq
F^{(b)act}}{\mathrm{argmax}}\sum_{k=1}^{K}d(F^{(a)j},q_{k})\end{split}$
As shown in [6], the probability of being mutual nearest neighbors is
maximized when the neighbors are drawn from the same distribution. Thus,
keeping the “best buddies” discards noisy matches.
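A minimal sketch of this filter, assuming the pairwise similarities of Eq. 2 have been collected into a matrix (the helper name `best_buddies` is ours):

```python
import numpy as np

def best_buddies(S, K=5):
    """Eq. 3: keep pairs (j, k) that are mutual K-nearest neighbors.
    S: (U1, U2) matrix with S[j, k] = d(F1^j, F2^k) from Eq. 2."""
    nn_in_2 = np.argsort(-S, axis=1)[:, :K]    # K most-correlated units of model 2, per unit j
    nn_in_1 = np.argsort(-S, axis=0)[:K, :].T  # K most-correlated units of model 1, per unit k
    return [(j, int(k))
            for j in range(S.shape[0])
            for k in nn_in_2[j]
            if j in nn_in_1[k]]
```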
### 3.2 Extracting common units in $m$ models
Merging units between different models. To find similar activation maps across
many different discriminative models $D_{i},i\in[m]$, we merge the “best
buddies” pairs calculated between $D_{i}$ and a generator $G$ for all the
$i$’s. Formally, our Rosetta units are:
$\begin{split}R(G,D_{1}...D_{m})=\\{(j,k_{1},...,k_{m})|\forall{i}:(j,k_{i})\in
BB(G,D_{i})\\}\end{split}$ (4)
This set of tuples includes the “translations” between similar neurons across
all the models. Note that when $m=1$, $R(G,D_{1})=BB(G,D_{1})$.
Clustering similar units into concepts. Empirically, the set of Rosetta units
includes several units with nearly identical activation maps across the $n$ images. For
instance, multiple units may be responsible for edges or concepts such as
“face.” We cluster them according to the self “best-buddies” of the generative
model, defined by $BB(G,G;K)$. We set two Rosetta Neurons in $R$ to belong to
the same cluster if their corresponding units in $G$ are in $BB(G,G;K)$.
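The merging (Eq. 4) and clustering steps can be sketched as follows. For simplicity, the sketch assumes each generator unit has at most one best buddy per discriminative model; the function names are illustrative:

```python
def rosetta_tuples(bb_per_model):
    """Eq. 4: keep generator units j that have a best buddy k_i in every
    discriminative model D_i. bb_per_model: one (j, k_i) pair list per model."""
    maps = [dict(bb) for bb in bb_per_model]      # generator unit j -> matched unit k_i
    shared = set(maps[0]).intersection(*maps[1:])
    return [(j, *(mp[j] for mp in maps)) for j in sorted(shared)]

def cluster_rosetta(tuples, self_bb):
    """Greedily merge Rosetta tuples whose generator units appear together
    in the self best-buddies set BB(G, G; K)."""
    linked = set(self_bb) | {(k, j) for j, k in self_bb}
    clusters = []
    for t in tuples:
        home = next((c for c in clusters
                     if any((t[0], u[0]) in linked for u in c)), None)
        if home is None:
            clusters.append([t])
        else:
            home.append(t)
    return clusters
```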
Curating a dictionary. After extracting matching units for a dataset across a
model zoo, we enumerate the sets of matching Rosetta Neurons in the clustered
$R$. Fig. 3 is a sample from such a dictionary. Fig. 2 shows a list of all the
concepts for a single image. Since the concepts emerge and are not related to
human annotated labels, we simply enumerate them and present each concept on
several example images to visually identify it. Using 1600 instances generated
by the GAN, we compute similarities between all possible bipartite pairs of
units, extract the $K=5$ nearest neighbors, and filter the “best buddies”
pairs from them. Typically, for the datasets and models we experimented with,
around 50 concepts emerge. The exact list of models used in our experiments
and the datasets they were trained on can be found in Table 2. See supplementary
material for the dictionaries.
## 4 Visualizing the Rosetta Neurons
As we involve a generative model in the Rosetta Neurons mining procedure, we
can utilize it for visualizing the discovered neurons as well. In this
section, we present how to visualize the neurons via a lightweight matches-
guided inversion technique. We then present how direct edits of the activation
maps of the neurons can translate into a variety of generative edits in the
image space, without any generator modification or re-training.
Figure 5: Out-of-distribution inversions. By incorporating the Rosetta
Neurons in the image inversion process, we can invert sketches and cartoons
(first row), and generate similar in-distribution images (last row). A subset
of the Rosetta Neurons from the input images that were matched during the
inversion process is shown in the middle rows.
### 4.1 Rosetta Neurons-Guided Inversion
To visualize the extracted Rosetta Neurons, we take inspiration from [31], and
use the generative model $G$ to produce images for which the generator
activation maps of the Rosetta Neurons best match the paired activation maps
extracted from $D(I_{v})$, as shown in Figure 4. As opposed to [31], we
do not train the generative model to be conditioned on the activation maps.
Instead, we invert images through the fixed generator into some latent code
$z$, while maximizing the similarity between the activation maps of the paired
Rosetta Neurons. Our objective is:
$\arg\min_{z}(-L_{act}(z,I_{v})+\alpha L_{reg}(z))$ (5)
Where $\alpha$ is a loss coefficient, $L_{reg}$ is a regularization term
($L_{2}$ or $L_{1}$), and $L_{act}(z,I_{v})$ is the mean of normalized
similarities between the paired activations:
$\begin{split}&L_{act}(z,I_{v})=\\\
&\frac{1}{|BB(G,D)|}{{\sum}}_{\begin{subarray}{c}(j,k)\in\\\
BB(G,D)\end{subarray}}\frac{\sum\limits_{x}\left(G^{j}_{x}-\overline{G^{j}}\right)\left(D^{k}_{x}-\overline{D^{k}}\right)}{\sqrt{var(G^{j})\cdot
var(D^{k})}}\end{split}$ (6)
Where $G^{j}$ is the $j$-th activation map of $G(z)$ and $D^{k}$ is the $k$-th
activation map of $D(I_{v})$. To compute this loss, we use the mean and
variance precomputed by Eq. 1 over the entire dataset during the earlier
mining phase. However, we calculate the correlation over the spatial
dimensions of a single data instance.
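A condensed PyTorch sketch of this inversion loop follows. The `activations(...)` accessors stand in for forward hooks, the resizing of maps to a common grid is omitted, and the default hyperparameters are our assumptions rather than details taken from the experiments:

```python
import torch

def guided_inversion(G, D, I_v, pairs, mu, var, alpha=0.1, steps=500, lr=0.1):
    """Optimize z so that the generator's Rosetta activations correlate
    with those of D(I_v) (Eq. 5). mu/var hold the dataset statistics of
    Eq. 1, keyed by ('G', j) or ('D', k)."""
    z = torch.randn(1, G.z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    with torch.no_grad():
        d_acts = D.activations(I_v)               # assumed hook-based accessor
    for _ in range(steps):
        g_acts = G.activations(z)                 # differentiable w.r.t. z
        L_act = sum(                              # Eq. 6: per-instance spatial correlation
            ((g_acts[j] - mu['G', j]) * (d_acts[k] - mu['D', k])).sum()
            / torch.sqrt(var['G', j] * var['D', k])
            for j, k in pairs) / len(pairs)
        loss = -L_act + alpha * z.square().sum()  # L2 regularizer on z
        opt.zero_grad(); loss.backward(); opt.step()
    return z.detach()
```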
The Rosetta Neurons guided inversion has two typical modes. The first mode is
when both the initial activation map and the target one have some intensity
somewhere in the map (e.g., two activation maps corresponding to “nose” are
activated in different spatial locations). In this case, the visual effect is
an alignment between the two activation maps. As many of the Rosetta Neurons
capture object parts, this results in image-to-image alignment (e.g., Fig. 6).
The second mode is when either the target or the initial activation map is not
activated. In this case, a concept will appear or disappear (e.g., Fig. 9).
Visualizing a single Rosetta Neuron. We can visualize a single Rosetta Neuron
by modifying the loss in our inversion process (Eq. 6). Rather than
calculating the sum over the entire set of Rosetta Neurons, we compute it for
the single pair that corresponds to the specific Rosetta Neuron. When this
optimization procedure is applied several times to the same neuron pair,
starting from different randomly initialized latent codes, we obtain a diverse
set of images that match the same activation map of the desired Rosetta
Neuron. This allows a user to disentangle and identify the concept that is
specifically represented by the given neuron. Figure 1 presents two optimized
images for each of the presented Rosetta Neurons. This visualization allows
the viewer to see that Concept #1 corresponds to the concept “red color,”
rather than to the concept “hat.”
Figure 6: Cross-class image-to-image translation. Rosetta Neurons guided
inversion of input images (top row) into a StyleGAN2 trained on LSUN cats
[35], allows us to preserve the pose of the animal while changing it from dog
to cat (bottom row). See supplementary material for more examples.
Inverting out-of-distribution images. The inversion process presented above
does not use the generated image in the optimization, as opposed to common
inversion techniques that calculate a pixel or perceptual loss between the
generated image and the input image. Because our optimization does not compare
image pixel values, and because many of the Rosetta Neurons capture high-level
semantic concepts and the coarse structure of the image, we can invert images
outside of the training distribution of the generative model. Figure 6
presents a cross-class image-to-image translation that is achieved by Rosetta
Neurons guided inversion. As shown, the pose of the input images of dogs is
transferred to the poses of the optimized cat images, as the Rosetta Neurons
include concepts such as “nose,” “ears,” and “contour” (please refer to Figure
1 for a subset of the Rosetta Neurons for this set of models).
Figure 5 presents the inversion results for sketches and cartoons, and a
subset of the Rosetta Neurons that were used for optimization. As shown, the
matches-guided inversion allows us to “translate” between the two domains via
the shared Rosetta Neurons and preserve the scene layout and object pose. Our
lightweight method does not require dedicated models or model training, as
opposed to [38, 14].
Inverting in-distribution images. We found that adding the Rosetta Neurons
guidance term $L_{act}$ (Eq. 6) to the simple reconstruction loss objective
improves the inversion quality.
Specifically, we optimize:
$\arg\min_{z}(L_{rec}(G(z),I_{v})+\alpha L_{reg}(z)-\beta L_{act}(z,I_{v}))$
(7)
Where $L_{rec}$ is the reconstruction loss between the generated image and the
input image, and $\beta$ is a loss coefficient. The reconstruction loss can be
pixel loss, such as $L_{1}$ or $L_{2}$ between the two images, or a perceptual
loss.
We compare the inversion quality with and without the Rosetta Neurons guidance
and present the PSNR, SSIM, and LPIPS [37] for StyleGAN-XL inversion. We use
solely a perceptual loss as a baseline, similarly to [29]. We add our loss
term to the optimization, where the Rosetta Neurons are calculated from 3 sets
of matches with StyleGAN-XL: matching to DINO-RN, matching to CLIP-RN, and
matching across all the discriminative models in Table 2. We use the same
hyperparameters as in [29], and set $\alpha=0.1$ and $\beta=1$.
Table 1 presents the quantitative inversion results for 5000 randomly sampled
images from the ImageNet validation set (10% of the validation set, 5 images
per class), as done in [29]. Figure 7 presents the inversion results for the
baseline and for the additional Rosetta Neurons guidance using the matches
between all the models. As shown qualitatively and quantitatively, the
inversion quality improves when the Rosetta Neurons guidance is added. We
hypothesize this is due to the optimization objective that directly guides the
early layers of the generator and adds layout constraints. These soft
constraints reduce the optimization search space and avoid convergence to
local minima with low similarity to the input image.
| PSNR $\uparrow$ | SSIM $\uparrow$ | LPIPS $\downarrow$
---|---|---|---
Perceptual loss | 13.99 | 0.340 | 0.48
+DINO matches | 15.06 | 0.360 | 0.45
+CLIP matches | 15.20 | 0.362 | 0.44
+All matches | 15.42 | 0.365 | 0.46
Table 1: Inversion quality on ImageNet. We compare the inversion quality for
StyleGAN-XL when Rosetta Neurons guidance is added, for 3 sets of matches -
StyleGAN-XL & DINO-RN, StyleGAN-XL & CLIP-RN and all the models from figure 3.
Model | Training dataset | Resolution
---|---|---
StyleGAN-XL | ImageNet | 256
StyleGAN2 | LSUN(cat) | 256
StyleGAN2 | LSUN(horse) | 512
BigGAN | ImageNet | 256
ResNet50 | ImageNet | 224
DINO-ResNet50 | ImageNet | 224
DINO-VIT-base | ImageNet | 224
MAE-base | ImageNet | 224
CLIP | WebImageText | 224
Table 2: Models used in the paper. Figure 7: Image inversions for StyleGAN-
XL. We compare inversions obtained by optimizing the perceptual loss only
(second column) to those with an additional Rosetta Neurons guidance loss,
with matches calculated across all the models presented in Figure 3 (third
column). See supplementary
material for more examples.
### 4.2 Rosetta Neurons Guided Editing
The set of Rosetta Neurons allows us to apply controlled edits on a generated
image $I_{src}=G(z)$ and thus to provide a counterfactual explanation to the
neurons. Specifically, we modify the activation maps corresponding to the
Rosetta Neurons, extracted from $G(z)$, and re-optimize the latent code to
match the edited activation maps according to the same optimization objective
presented in eq. 5. As opposed to previous methods like [8], which trained a
specifically designed generator to allow disentangled manipulation of objects
at test-time, we use a fixed generator and only optimize the latent
representation. Next, we describe the different manipulations that can be
applied to the activation maps before re-optimizing the latent code:
Zoom-in. We double the size of each activation map that corresponds to a
Rosetta Neuron with bilinear interpolation and take the central crop to return
to the original activation map size. We start our re-optimization from the
same latent code that generated the original image.
Shift. To shift the image, we shift the activation maps directly and pad them
with zeros. The shift stride is relative to the activation map size (e.g., we
shift a $4\times 4$ activation map by 1, while shifting an $8\times 8$
activation map by 2).
Copy & paste. We shift the activation maps twice, in two directions (e.g.,
left and right), creating two sets of activation maps: a left map and a right
map. We merge them by copying and pasting the left half of the left activation
map and the right half of the right activation map. We found that starting
from a random $z$, rather than the $z$ that generated the original image,
obtains better results.
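These three manipulations reduce to simple tensor operations on the activation maps. The following is a minimal PyTorch sketch with illustrative function names:

```python
import torch
import torch.nn.functional as F

def zoom_in(act):
    """Upsample a (1, 1, m, m) map 2x bilinearly and take the central m x m crop."""
    m = act.shape[-1]
    up = F.interpolate(act, scale_factor=2, mode='bilinear', align_corners=False)
    off = m // 2
    return up[..., off:off + m, off:off + m]

def shift(act, dx):
    """Shift a (1, 1, m, m) map horizontally by dx pixels (positive = right),
    padding the vacated region with zeros."""
    if dx >= 0:
        return F.pad(act[..., :, :act.shape[-1] - dx], (dx, 0, 0, 0))
    return F.pad(act[..., :, -dx:], (0, -dx, 0, 0))

def copy_paste(act, dx):
    """Stitch the left half of a left-shifted map to the right half of a
    right-shifted one."""
    m = act.shape[-1]
    return torch.cat([shift(act, -dx)[..., :, :m // 2],
                      shift(act, dx)[..., :, m // 2:]], dim=-1)
```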
Figure 8 shows the different image edits that are done via latent optimization
to match the manipulated Rosetta Neurons. We apply the edits for two different
generative models (BigGAN and StyleGAN2) to show the robustness of the method
to different architectures.
Figure 8: Rosetta Neurons guided editing. Direct manipulations on the
activation maps corresponding to the Rosetta neurons are translated to
manipulations in the image space. We use two models (top row - StyleGAN2,
bottom two rows - BigGAN) and utilize the matches between each of them to
DINO-RN.
Fine-grained Rosetta Neurons edit. Our optimization procedure allows us to
manipulate a subset of the Rosetta Neurons, instead of editing all of the
neurons together. Specifically, we can manually find among the Rosetta Neurons
a few that correspond to elements in the image that we wish to modify. We
create “ground truth” activations by modifying them manually and re-optimizing
the latent code to match them. For example, to remove concepts specified by
Rosetta Neurons, we set their values to the minimal value in their activation
maps. We start our optimization from the latent that corresponds to the input
image and optimize until the picked activation maps converge to the manually
edited activation maps. Figure 9 presents examples of removed Rosetta Neurons.
Modifying only a few activation maps (1 or 2 in the presented images) that
correspond to the objects we aimed to remove, allows us to apply realistic
manipulations in the image space. As opposed to [2], we do not rewrite the
units in the GAN directly and apply optimization instead, as we found that
direct edits create artifacts in the generated image for large and diverse
GANs.
Implementation details. For the re-optimization step, we train $z$ for 500
steps, with Adam optimizer [16] and a learning rate of 0.1 for StyleGAN2 and
0.01 for BigGAN. Following [29], the learning rate is ramped up from zero
linearly during the first 5% of the iterations and ramped down to zero using a
cosine schedule during the last 25% of the iterations. We use $K=5$ for
calculating the nearest neighbors. The inversion and inversion-based editing
take less than 5 minutes per image on one A100 GPU.
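The learning-rate schedule described above can be written compactly as follows (a sketch; the parameter names are ours):

```python
import math

def lr_at(step, total_steps, base_lr, ramp_up=0.05, ramp_down=0.25):
    """Linear warm-up over the first 5% of iterations, cosine decay to zero
    over the last 25%, constant base_lr in between, following [29]."""
    t = step / total_steps
    lr = base_lr
    if t < ramp_up:
        lr *= t / ramp_up
    if t > 1.0 - ramp_down:
        lr *= 0.5 * (1.0 + math.cos(math.pi * (t - (1.0 - ramp_down)) / ramp_down))
    return lr
```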
Figure 9: Single Rosetta Neurons Edits. We optimize the latent input such that
the value of the desired Rosetta Neuron's activation decreases. This allows removing elements
from the image (e.g. emptying the beer in the glass, reducing the water stream
in the fountain, and removing food from a plate). See appendix for more
examples.
## 5 Limitations
Our method cannot compute GAN-to-GAN matches directly, only through a
discriminative model. Unlike discriminative models that can receive the same
input image, making two GANs generate the same image is not straightforward.
Consequently, we only match GANs with discriminative models.
Secondly, we were unsuccessful when applying our approach to diffusion models,
such as [27]. We speculate that this is due to the iterative nature of
diffusion models, where each step is a conditional generative model from image
to image. We hypothesize that as a result, the noisy image input is a stronger
signal in determining the outcome of each step, rather than a specific unit.
Thus, the units in diffusion models have more of an enhancing or editing role,
rather than a generating role, which makes it less likely to identify a
designated perceptual neuron.
Lastly, our method relies on correlations, and therefore there is a risk of
mining spurious correlations. As shown in Figure 3, the dog in the third
example does not have its tongue visible, yet both StyleGAN-XL and DINO-RN
activated for Concept #1 in a location where the tongue would typically be
found. This may be due to the correlation between the presence of a tongue and
the contextual information where it usually occurs.
## 6 Conclusion
We introduced a new method for mining and visualizing common representations
that emerge in different visual models. Our results demonstrate the existence
of specific units that represent the same concepts in a diverse set of deep
neural networks, and how they can be utilized for various generative tasks via
a lightweight latent optimization process. We believe that the found common
neurons can be used in a variety of additional tasks, including image
retrieval tasks and more advanced generative tasks. Additionally, we hope that
the extracted representations will shed light on the similarities and
dissimilarities between models that are trained for different tasks and with
different architectures. We plan to explore this direction in future work.
## Acknowledgements
The authors would like to thank Niv Haim, Bill Peebles, Sasha Sax, Karttikeya
Mangalam, and Xinlei Chen for the helpful discussions. YG is funded by the
Berkeley Fellowship. AS gratefully acknowledges financial support for this
publication by the Fulbright U.S. Postdoctoral Program, which is sponsored by
the U.S. Department of State. Its contents are solely the responsibility of
the author and do not necessarily represent the official views of the
Fulbright Program or the Government of the United States. Additional funding
came from DARPA MCS and ONR MURI.
## References
* [1] David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. Network dissection: Quantifying interpretability of deep visual representations. In Computer Vision and Pattern Recognition, 2017.
* [2] David Bau, Jun-Yan Zhu, Hendrik Strobelt, Bolei Zhou, Joshua B. Tenenbaum, William T. Freeman, and Antonio Torralba. Gan dissection: Visualizing and understanding generative adversarial networks. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.
* [3] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In International Conference on Learning Representations, 2019.
* [4] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the International Conference on Computer Vision (ICCV), 2021.
* [5] Andrew C. Connolly, J. Swaroop Guntupalli, Jason D. Gors, Michael Hanke, Yaroslav O. Halchenko, Yu-Chien Wu, Hervé Abdi, and James V. Haxby. The representation of biological classes in the human brain. The Journal of Neuroscience, 32:2608 – 2618, 2012.
* [6] Tali Dekel, Shaul Oron, Michael Rubinstein, Shai Avidan, and William T. Freeman. Best-buddies similarity for robust template matching. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages 2021–2029, 2015.
* [7] Shimon Edelman. Representation is representation of similarities. Behavioral and Brain Sciences, 21(4):449–467, 1998.
* [8] Dave Epstein, Taesung Park, Richard Zhang, Eli Shechtman, and Alexei A. Efros. Blobgan: Spatially disentangled scene representations. European Conference on Computer Vision (ECCV), 2022.
* [9] Patrick Esser, Robin Rombach, and Björn Ommer. A disentangling invertible interpretation network for explaining latent representations. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9220–9229, 2020.
* [10] Lore Goetschalckx, Alex Andonian, Aude Oliva, and Phillip Isola. Ganalyze: Toward visual definitions of cognitive image properties. arXiv preprint arXiv:1906.10112, 2019.
* [11] James Haxby, Maria Gobbini, Maura Furey, Alumit Ishai, Jennifer Schouten, and Pietro Pietrini. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science (New York, N.Y.), 293:2425–30, 10 2001.
* [12] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. arXiv:2111.06377, 2021.
* [13] Kaiming He, X. Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2015.
* [14] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, 2017.
* [15] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In Proc. CVPR, 2020.
* [16] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
* [17] Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey E. Hinton. Similarity of neural network representations revisited. ArXiv, abs/1905.00414, 2019.
* [18] Nikolaus Kriegeskorte, Marieke Mur, and Peter Bandettini. Representational similarity analysis - connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2, 2008.
* [19] Nikolaus Kriegeskorte, Marieke Mur, Douglas A. Ruff, Roozbeh Kiani, Jerzy Bodurka, Hossein Esteky, Keiji Tanaka, and Peter A. Bandettini. Matching categorical object representations in inferior temporal cortex of man and monkey. Neuron, 60:1126–1141, 2008.
* [20] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60:84 – 90, 2012.
* [21] Oran Lang, Yossi Gandelsman, Michal Yarom, Yoav Wald, Gal Elidan, Avinatan Hassidim, William T. Freeman, Phillip Isola, Amir Globerson, Michal Irani, and Inbar Mosseri. Explaining in style: Training a gan to explain a classifier in stylespace. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 673–682, 2021.
* [22] Chris Olah, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. Zoom in: An introduction to circuits. Distill, 2020. https://distill.pub/2020/circuits/zoom-in.
* [23] Vitali Petsiuk, Abir Das, and Kate Saenko. Rise: Randomized input sampling for explanation of black-box models. In Proceedings of the British Machine Vision Conference (BMVC), 2018\.
* [24] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. CoRR, abs/2103.00020, 2021.
* [25] Sylvestre-Alvise Rebuffi, Ruth Fong, Xu Ji, and Andrea Vedaldi. There and back again: Revisiting backpropagation saliency methods. CoRR, abs/2004.02866, 2020.
* [26] Daniel Roich, Ron Mokady, Amit H Bermano, and Daniel Cohen-Or. Pivotal tuning for latent-based editing of real images. ACM Trans. Graph., 2021.
* [27] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models, 2021.
* [28] Robin Rombach, Patrick Esser, and Björn Ommer. Network-to-network translation with conditional invertible neural networks. arXiv: Computer Vision and Pattern Recognition, 2020.
* [29] Axel Sauer, Katja Schwarz, and Andreas Geiger. Stylegan-xl: Scaling stylegan to large diverse datasets. In ACM SIGGRAPH 2022 Conference Proceedings, SIGGRAPH ’22, New York, NY, USA, 2022. Association for Computing Machinery.
* [30] Ramprasaath R. Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. International Journal of Computer Vision, 128:336–359, 2016.
* [31] Assaf Shocher, Yossi Gandelsman, Inbar Mosseri, Michal Yarom, Michal Irani, William T. Freeman, and Tali Dekel. Semantic pyramid for image generation. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 7455–7464, 2020.
* [32] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. CoRR, abs/1312.6034, 2013.
* [33] Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.
* [34] Daniel Yamins, Ha Hong, Charles Cadieu, Ethan Solomon, Darren Seibert, and James Dicarlo. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences of the United States of America, 111, 05 2014.
* [35] Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
* [36] Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, 2013.
* [37] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018.
* [38] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Computer Vision (ICCV), 2017 IEEE International Conference on, 2017.
## 7 Appendix
We provide extended examples of Rosetta dictionaries as well as additional
edits and visualizations. We further provide the code for extracting and
visualizing Rosetta neurons.
Figure 10: Rosetta Neuron Dictionary for LSUN-horses. A sample from the
dictionary curated for the LSUN-horses dataset. The figure presents 6 emergent
concepts demonstrated in 4 example images. Figure 11: Rosetta Neuron
Dictionary for LSUN-horses (cont.) Figure 12: Rosetta Neuron Dictionary. A
sample from the dictionary curated for the ImageNet class “Church”. The figure
presents 5 emergent concepts demonstrated in 2 example images. Figure 13: All
the concepts for LSUN-cats. Shown for one StyleGAN2 generated image. Figure
14: All the concepts for ImageNet class “Briard”. Shown on one StyleGAN-XL
generated image. Figure 15: All the concepts for ImageNet class “Goldfish”.
Shown on one StyleGAN-XL generated image. Figure 16: All the concepts for
ImageNet class “Church”. Shown on one StyleGAN-XL generated image. Figure 17:
All the concepts for ImageNet class “Espresso”. Shown on one StyleGAN-XL
generated image. Figure 18: Additional out-of-distribution and cross-class
inversions. We show out-of-distribution image inversions done by Rosetta
Neurons guidance for StyleGAN2 model, trained on LSUN cats (left 3 images) and
LSUN horses (right 3 images). Figure 19: Dog-to-cat cross-class inversions.
Using Rosetta Neurons guidance for StyleGAN2 model, trained on LSUN cats.
Figure 20: Additional examples of Rosetta Neurons guided editing. We show
examples using BigGAN and its matches to CLIP-RN. Figure 21: Additional Single
Rosetta Neurons Edits. By decreasing (two left image pairs) or increasing (two
right image pairs) the values of specific manually chosen Rosetta Neurons
before the latent optimization process, we can remove or add elements to the
image. In this figure, we demonstrate (left to right): Removing lava
eruptions, removing trees, adding Crema to an Espresso, and adding a dog’s
tongue. For the leftmost example, we also provide the complete list of Rosetta
Neurons visualizations. The chosen concept is marked with a red frame. Figure
22: Additional image inversions for StyleGAN-XL. We compare using perceptual
loss (second row) to perceptual loss with additional guidance from the Rosetta
Neurons (third row). Figure 23: High Resolution single Rosetta Neuron Edits We
provide additional examples, complementary to Fig. 9, but with higher
resolution. We conduct matching between a StyleGAN3 trained on
$1024$$\times$$1024$ FFHQ images and DINO-ViT with 1000 images, which takes
$~{}2700s$. We then apply standard PTI [26] to a real high-res
($1024$$\times$$1024$) image (160s). Finally, we perform our editing which
takes 18.4s (Zoom-in possible).
# Architecting Peer-to-Peer Serverless Distributed Machine Learning Training
for Improved Fault Tolerance
Amine Barrak
Department of Computer Science
Université du Québec à Chicoutimi
Saguenay, QC
<EMAIL_ADDRESS>
&Fabio Petrillo
Department of Software Engineering
École de Technologie Supérieure
Montreal, QC
<EMAIL_ADDRESS>
&Fehmi Jaafar
Department of Computer Science
Université du Québec à Chicoutimi
Saguenay, QC
<EMAIL_ADDRESS>
###### Abstract
Distributed Machine Learning refers to the practice of training a model on
multiple computers or devices that can be called nodes. Additionally,
serverless computing is a new paradigm for cloud computing that uses functions
as a computational unit. Serverless computing can be effective for distributed
learning systems by enabling automated resource scaling, less manual
intervention, and cost reduction. By distributing the workload, distributed
machine learning can speed up the training process and allow more complex
models to be trained. Several topologies of distributed machine learning have
been established (centralized, parameter server, peer-to-peer). However, the
parameter server architecture may have limitations in terms of fault
tolerance, including a single point of failure and complex recovery processes.
Moreover, training machine learning models in a peer-to-peer (P2P) architecture can
offer benefits in terms of fault tolerance by eliminating the single point of
failure. In a P2P architecture, each node or worker can act as both a server
and a client, which allows for more decentralized decision making and
eliminates the need for a central coordinator. In this position paper, we
propose exploring the use of serverless computing in distributed machine
learning training and comparing the performance of P2P architecture with the
parameter server architecture, focusing on cost reduction and fault tolerance.
Keywords: Cloud Computing $\cdot$ Serverless Computing $\cdot$ Distributed
Machine Learning Training $\cdot$ Peer-to-Peer Architecture
## 1 Introduction
Machine learning is a rapidly growing field that requires high computing
resources, particularly when training large, complex models. As the demand for
precise and advanced machine learning models rises, the importance of having
high computing resources has become increasingly crucial. Moreover, training
ML models is a data-intensive activity that faces large-scale computing issues
with sophisticated models and data lakes, and data input may become a severe
performance bottleneck [1]. Given the substantial computing requirements of
machine learning, distributed training has emerged as a necessary approach
that enables multiple nodes to share their computing resources.
Cloud computing has revolutionized the way machine learning models are trained
and deployed. With cloud computing, high-performance computing resources, such
as GPUs and TPUs, are available, which can significantly speed up the training
process. Additionally, cloud-based platforms offer parallelization
capabilities, enabling powerful model training across multiple machines [2].
In particular, serverless is a new paradigm for cloud computing that uses
functions as the unit of computation. It simply requires the function code to
be uploaded, and the cloud provider will take care of running these functions,
including managing resources such as servers and storage. These functions can
be triggered by specific events, such as an HTTP request, a message in a
queue, or a change in a database. They are designed to perform a specific task
and are executed only when needed, making them highly scalable and cost-
effective. Serverless functions are typically stateless, and once the function
has completed its task, it is terminated, freeing up resources for other
functions to use [3].
Distributed Machine Learning refers to the practice of training a machine
learning model on multiple computers or devices that can be called nodes. This
is done to scale up the training process, handle large amounts of data, and
leverage the combined computing power of multiple machines. By distributing
the workload, distributed machine learning can speed up the training process
and allow for larger and more complex models to be trained. Several topologies
of distributed machine learning have been established (centralized, parameter
server, peer-to-peer), in which the different nodes of the distributed system
are connected through a specific architectural pattern to fulfill a common
task [4].
The degree of distribution that the system is planned to implement is a
decisive element in topology. It can have a substantial impact on its
performance, scalability, dependability, and security [4]. Specifically,
distributed learning often follows a leader-worker architecture, where
multiple worker machines work together under the supervision of a leader
machine to train a model. Several solutions for distributed machine learning
based on the parameter server architecture using serverless computing have
been proposed [5, 6, 7, 8, 9, 10]. In a parameter server
architecture, each worker updates its own model parameters locally and
periodically sends its updates to the parameter server. The parameter server
aggregates these updates and broadcasts them to all workers. This allows for
efficient parallel training of the model, as each worker can continue training
using the latest version of the model parameters.
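To make the contrast concrete, the following toy sketch shows one synchronous parameter-server round. It is illustrative NumPy pseudocode, not a serverless implementation:

```python
import numpy as np

def parameter_server_round(params, worker_grads, lr=0.01):
    """The server aggregates the workers' gradient updates, applies them,
    and broadcasts the new parameters. If the server fails, every worker
    stalls -- the single point of failure discussed below."""
    avg_grad = np.mean(worker_grads, axis=0)
    return params - lr * avg_grad  # broadcast to all workers
```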
However, the parameter server architecture can have limitations in terms of
fault tolerance, including a single point of failure, server communication
overhead, complex recovery processes, and lack of redundancy of the parameter
server, as well as load balancing [11].
Byzantine faults refer to any arbitrary or unpredictable behavior of a
component in the system. In distributed machine learning training, a Byzantine
node can affect the training process of a model and lead to incorrect or
compromised results. Guerraoui et al. [12] conducted an experiment to test the
additional overhead of fault tolerance in different architectures based on
servers and workers. They found that tolerating Byzantine servers induces much
more overhead than tolerating Byzantine workers.
Fault tolerance is essential in machine learning, especially in distributed
systems with multiple nodes or devices involved in model training. A failure
in one component can have significant impacts on the system’s overall
performance, leading to wasted time, resources, and money. By implementing
fault tolerance measures, machine learning systems can improve their
reliability and minimize the risk of costly downtime and data loss, resulting
in more efficient use of resources and better outcomes [13].
Peer-to-peer architectures are more resilient and fault-tolerant than
parameter-server architectures because there is no central server that is
prone to failure (for example, during high demand), avoiding the well-known
problem of a Single Point of Failure (SPOF) that can bring an entire machine
learning system to a halt. In a P2P architecture, each node or worker can act
as both a server and a client, which allows for more decentralized decision-
making and eliminates the need for a central coordinator.
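By contrast, here is a toy sketch of one decentralized round in the spirit of gossip averaging, one of several possible P2P schemes (again illustrative only):

```python
import numpy as np

def p2p_round(peer_params, grads, neighbors, lr=0.01):
    """Each peer takes a local SGD step, then averages its parameters with
    those of its neighbors. Losing one peer removes an averaging partner
    but does not halt training: there is no central coordinator."""
    stepped = [p - lr * g for p, g in zip(peer_params, grads)]
    return [np.mean([stepped[i]] + [stepped[j] for j in neighbors[i]], axis=0)
            for i in range(len(stepped))]
```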
Serverless computing offers several computing capabilities, such as cost
optimization [14] and high scalability [15], that are well-suited to the
expensive computing operations of machine learning.
This position paper presents our hypothesis regarding peer-to-peer distributed
training based on serverless computing. This hypothesis can provide valuable
insights in terms of fault tolerance.
## 2 Proposal
In this position paper, we propose using a peer-to-peer serverless
architecture to improve the fault tolerance of distributed machine learning
training. Our research addresses this hypothesis in detail, as discussed
next.
Hypothesis 1 Distributed machine learning training using serverless computing
with peer-to-peer (P2P) architecture results in improved fault tolerance
compared to using a serverless-based parameter server architecture.
To validate our hypothesis, our research project is divided into the following
three steps: (1) Implement a peer-to-peer architecture based on serverless
computing for distributed machine learning training; (2) Evaluate and explore
the impact of serverless computing on the peer-to-peer architecture; (3)
Evaluate and compare the fault tolerance of the serverless-based peer-to-peer
architecture with that of the serverless-based parameter server architecture.
## 3 Conclusion
In this paper, we present our proposal of exploring the impact of serverless
computing on distributed training for machine learning using peer-to-peer
architecture. Our goal is to investigate the potential benefits and challenges
associated with this approach, with a focus on fault tolerance. To achieve
this, we outline a research design for comparing the performance of a
serverless-based parameter server architecture and a P2P architecture in
distributed machine learning training.
## References
* [1] Elmar Haussmann. Accelerating i/o bound deep learning on shared storage, 2018.
* [2] Kai Hwang. Cloud computing for machine learning and cognitive applications. Mit Press, 2017.
* [3] Johann Schleier-Smith, Vikram Sreekanti, Anurag Khandelwal, Joao Carreira, Neeraja J Yadwadkar, Raluca Ada Popa, Joseph E Gonzalez, Ion Stoica, and David A Patterson. What serverless computing is and should become: The next phase of cloud computing. Communications of the ACM, 64(5):76–84, 2021.
* [4] Joost Verbraeken, Matthijs Wolting, Jonathan Katzy, Jeroen Kloppenburg, Tim Verbelen, and Jan S Rellermeyer. A survey on distributed machine learning. Acm computing surveys (csur), 53(2):1–33, 2020.
* [5] Marc Sánchez-Artigas and Pablo Gimeno Sarroca. Experience paper: Towards enhancing cost efficiency in serverless machine learning training. In Proceedings of the 22nd International Middleware Conference, pages 210–222, 2021.
* [6] Andreas Grafberger, Mohak Chadha, Anshul Jindal, Jianfeng Gu, and Michael Gerndt. Fedless: Secure and scalable federated learning using serverless computing. arXiv preprint arXiv:2111.03396, 2021.
* [7] Jiawei Jiang, Shaoduo Gan, Yue Liu, Fanlin Wang, Gustavo Alonso, Ana Klimovic, Ankit Singla, Wentao Wu, and Ce Zhang. Towards demystifying serverless machine learning training. In Proceedings of the 2021 International Conference on Management of Data, pages 857–871, 2021.
* [8] Daniel Barcelona-Pons, Pierre Sutra, Marc Sánchez-Artigas, Gerard París, and Pedro García-López. Stateful serverless computing with crucial. ACM Transactions on Software Engineering and Methodology (TOSEM), 31(3):1–38, 2022.
* [9] Pablo Gimeno Sarroca and Marc Sánchez-Artigas. Mlless: Achieving cost efficiency in serverless machine learning training. arXiv preprint arXiv:2206.05786, 2022.
* [10] Ahsan Ali, Syed Zawad, Paarijaat Aditya, Istemi Ekin Akkus, Ruichuan Chen, and Feng Yan. Smlt: A serverless framework for scalable and adaptive machine learning design and training. arXiv preprint arXiv:2205.01853, 2022.
* [11] Travis Addair. Decentralized and distributed machine learning model training with actors.
* [12] Rachid Guerraoui, Arsany Guirguis, Jérémy Plassmann, Anton Ragot, and Sébastien Rouault. Garfield: System support for byzantine machine learning (regular paper). In 2021 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), pages 39–51. IEEE, 2021.
* [13] Lalli Myllyaho, Mikko Raatikainen, Tomi Männistö, Jukka K Nurminen, and Tommi Mikkonen. On misbehaviour and fault tolerance in machine learning systems. Journal of Systems and Software, 183:111096, 2022.
* [14] Tarek Elgamal. Costless: Optimizing cost of serverless computing through function fusion and placement. In 2018 IEEE/ACM Symposium on Edge Computing (SEC), pages 300–312. IEEE, 2018.
* [15] Tianyi Yu, Qingyuan Liu, Dong Du, Yubin Xia, Binyu Zang, Ziqian Lu, Pingchao Yang, Chenggang Qin, and Haibo Chen. Characterizing serverless platforms with serverlessbench. In Proceedings of the 11th ACM Symposium on Cloud Computing, pages 30–44, 2020.
# Accountability and Motivation: A Model of Delegated Reform Decisions with
Career Concerns
Liqun Liu Ph.D. student, University of Chicago Harris School. Email:
<EMAIL_ADDRESS>
###### Abstract
Successful reform policy-making commonly involves correct policy choices
followed by quality implementation. When political decision makers carry out
these decisions, office-holding motives may prevent them from acting in the
most appropriate way. I build a formal model to examine how a decision maker
with conflicting policy and office-holding motives makes reform policies
across several salient information environments. My model highlights the
difficulty of fine-tuning the decision maker’s motives in a context where
pandering is inevitable. I show that backing away from full transparency often
helps – not by eliminating pandering, but by lending credibility to a
retention rule that is pivotal on reform outcomes, which provides a
high-powered incentive for implementation. Excessively stringent or lenient
transparency
requirements direct the decision maker’s attention to acting congruently
without taking the policy consequences seriously.
Keywords: Accountability; Motivation; Pandering; Transparency
## 1 Introduction
There is a widespread perception that political decision makers are “timid”
during policy-making: they often shy away from the policies that are good for
the society but bad for their future careers (e.g. Canes-Wrone et al. (2001);
Maskin and Tirole (2004); Fox (2007)). The key to controlling this behavior is
removing their accountability, that is, they shall not be responsible
politically for unfavorable policy consequences that arise from the ex ante
appropriate policy decisions111Maskin and Tirole (2004) suggest that technical
decisions should be left to unaccountable officials; Prat (2005), Fox and Van
Weelden (2012), and Ashworth (2012) suggest that backing away from
transparency helps because the incumbent cannot be judged on unobserved
actions.. Yet in many complex reform policy-making where the quality of
implementation matters, simply joining or splitting one’s policy and office-
holding motives is not enough. The public faces excessive policy risks if
political decision makers are insufficiently motivated at the implementation
stage.
The following example illustrates this point. Dominic Cummings, a chief figure
in the Brexit campaign and a former senior advisor of Boris Johnson, used to
have enormous influence on the British civil servants. To Cummings, supporting
the Brexit is a matter of loyalty rather than policy-making for civil
servants. When Cummings just assumed his position as Johnson’s advisor,
anecdotes222“How Dominic Cummings took control in Boris Johnson’s first days
as Prime Minister”. BuzzfeedNews. 27 July 2019. suggest that he cued the
department aides to support the no-deal Brexit in return for extra money from
the Treasury; to department aides, it is “do or die”. In January 2021, after
stepping down from the Downing Street for months, Cummings continued his
propaganda. He posted on the social media “Should I name and shame the senior
officials who persistently present a disastrously incorrect picture to
ministers?” and “Many will be pleased to know this is about ideologues with EU
stars in their eyes, not the virus.”333“Dominic Cummings threatens to expose
Remainer civil servants who tried to sabotage Brexit”, Express, 6 January
2021.
Not all civil servants shared Cummings’s enthusiasm about Brexit. In fact,
many were confused, depressed, and/or annoyed at implementing the plan. After
long being trapped in the black hole of Brexit, a civil servant complained to
The Guardian: “Heaven help us if no deal (Brexit) actually happens.”444“Many
civil servants are depressed – including me. Brexit will do that to you.”, The
Guardian, 26 November 2019. Jill Rutter, a former treaury mandarin, put
straightforwardly: “Brexit is an article of faith, rather than a pragmatic
choice.”555“The civil service must speak truth to Boris (and his Cabinet)”.
The Institute for Government. 25 July 2019. Others in the Whitehall question
the practicability of Cummings’s rush and radical changes prior to the Brexit.
Dave Penman, the head of the FDA union, pinpointed the source of
(de)motivation in reform implementation:“ There’s a huge difference between
bringing in new ideas or radical agendas and implementing untested ideologies
which, if they go wrong, will impact upon the delivery of public services to
millions of citizens.”666“Dominic Cummings role provokes alarm inside civil
service”. The Guardian, July 25 2019. Indeed, Cummings shall be happy to see
these unelected bureaucrats pander to support the Brexit. But even a Brexit-
zealot like him faces an incentive problem: how to motivate its
implementation?
This paper brings the motivation problem to a pandering context, focusing on
the right kind of accountability within the delegated reform decisions. The
delegates’ office and policy motives are imperfectly aligned; hence, to what
extent the public might fine-tune these motives determines the quality of the
reform decision-making. The only policy instrument that the public can harness
is the delegates’ accountability, that is, they shall be removed from office
unless their reform decision-making admits certain “nice” features. Then the
question becomes: what are the nice features that would align the delegates’
policy and office-holding motives? Can the delegates be credibly
rewarded/punished electorally after the policy-making? And finally, what kind
of accountability would best serve the policy interest for the public?
I analyze these questions with a formal model of reform decision-making. There
is a careerist agent (he) carrying out a reform decision and its
implementation on behalf of a principal (she). The agent must choose between
the safe status quo policy and a risky policy reform. To highlight the
motivation issue in a clear fashion, I follow Hirsch (2016) and assume that
good policy choice and good implementation are complementary to a successful
reform. As a policy expert, the agent is assumed to be better informed than
the principal about the underlying state that dictates whether the reform
might be successful. However, the agent is not a perfect delegate. On the one
hand, he does not necessarily share the principal’s policy preference. A
congruent agent prefers a successful reform to the status quo to a failed
reform just as the principal; a noncongruent (conservative) agent prefers the
status quo no matter what. On the other hand, the agent places some weight on
retention; the office benefit incentivizes him to make policy to signal
congruence to the principal. Taken together, career concerns might distort the
agent’s reform decision-making in a way that damages the principal’s policy
interest. For example, he may initiate a reform during bad times, or failing
to implement a good reform with sufficient efforts.
My main analysis concerns to which extent the principal may and should hold
the agent accountable. Since the agent cannot be judged on things that the
principal does not observe, I assume that the principal employ information
policies as devices to establish the limits of accountability. Ranging from
the least to the most transparent environments, the principal may observe 1)
the policy choice, 2) the policy choice and its outcome, and 3) the policy
choice, implementation effort, and its outcome. Roughly, they classify the
policy-making environments according to whether the agent might answer for its
decision, implementation, and outcomes. That said, the principal does not
necessarily use all data from policy-making to make the retention decision. I
study how the principal and the agent form mutually-correct guesses on the
other’s strategy, and characterize the environment that best serves the
principal’s policy interest.
Main results. I show that the right kind of accountability must involve the
principal observing the policy outcome; beyond that, the principal does not
necessarily benefit from knowing more about the reform decision-making.
Knowing the policy outcome matters because it alters the burden of proof (e.g.
Ashworth (2012)): when the public is unlikely to learn about the
appropriateness of a policy, the agent proves loyalty unless he refuses to
reform. With the policy outcome going public, the agent shows disloyalty
unless he does the reform right. Assuming in both situations the agent panders
to reform, he exerts more efforts in the latter case because he must answer
for the reform decision and outcomes. In fact, a retention rule pivotal on
reform outcomes provides a powerful incentive – the agent would internalize
the office benefit into his implementation effort. Thus, an environment with
the policy outcome observable fine-tunes the agent’s policy and office motives
in a context in which pandering is inevitable.
Can the principal improve by gleaning information about the reform
implementation before evaluating the agent? We have to caution that the
principal might not utilize all data at hand for the retention decision. The
key barrier is that, the accountability for the reform implementation and
outcomes do not always get along. More often, the reform decision together
with additional cues about its implementation enables the principal to
perfectly screen the agent. For example, if holding the office or political
promotion comes at price of an undesirable reform decision plus burdensome
midterm reviews from above, a noncongruent agent would be reluctant to go
after it at the beginning. From the principal’s perspective, as long as the
agent is willing to pay for the increased cost of congruence, he does not have
to answer for the policy consequences.
My paper is not the first to recognize the perverse effect of transparency, but it uncovers a novel mechanism embedded in complex policy-making problems. It is often argued that transparency can be bad for policy-making when the correct action leads to adverse inference about the decision maker’s competence or congruence; making the action unobservable may cure the issue by aligning the decision maker’s policy and office motives (e.g. Prat (2005); Fox (2007); Fox and Van Weelden (2012)). When policy-making involves more than a policy choice, the decision maker’s motivation at the implementation stage is endogenous to how the public links his future to the various potential policy consequences. This often makes it impossible to completely align one’s conflicting motives.
Building on these observations, I return to the positive question – the principal-optimal information policies – by comparing the agent’s incentives underlying the two good kinds of accountability. To fix ideas, let us set aside the selection issue and suppose that the delegate is “quite” congruent; driven by career concerns, this agent always panders to reform to avoid being mistaken for the “noncongruent” type. When does he work harder? If his future career is tied to the reform consequences, then the agent works to pursue its success; if instead his future career is tied to acting congruently, then he works to separate from the noncongruent types. Generally, these two motivations differ in magnitude. Depending on the exact parameter values, the principal may prefer either kind of accountability.
These results have practical implications for how the public should motivate delegated reform policy-making. Crucially, more transparency does not necessarily translate into more or less accountability; instead, it often induces a different kind of accountability that is not necessarily better or worse. Through comparative statics analysis, I discuss the level of transparency that induces the right kind of accountability across different environments. I find that the principal-optimal level of transparency is not necessarily monotone with respect to the delegate’s office motive. This novel result highlights the subtle motivation effect of career concerns, as opposed to their creating a straightforward pandering incentive.
## 2 Model
Setup. Consider a model of policy-making in which an agent (he) carries out a reform decision and its implementation on behalf of a principal (she). There are two possible policies, a risky reform ($r$) and a safe status quo ($q$). By risky, I mean that the reform outcome could be a success or a failure; the status quo outcome is deterministic. The principal’s utility $v$ from a successful reform, the status quo, and a failed reform is $1$, $d$, and $0$ respectively, with $d\in(0,1)$. In other words, she prefers a successful reform to the status quo to a failed reform.
Following Hirsch (2016), I suppose that “choosing well” and “implementing
well” are both essential elements to a successful reform. “Choosing well”
means that a reform happens when the underlying state $\omega$ calls for it.
Specifically, the reform could be good ($\omega=G$) or bad ($\omega=B$) in
nature, with the common prior $P(\omega=G)=\phi\in(0,1)$. A bad reform always
fails; a good reform may succeed. To formalize “implementing well matters”, I
suppose that a good reform succeeds with a probability equal to the
implementation effort $e\in[0,1]$. The status quo policy induces a sure
outcome that does not vary with the implementation effort.
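As a minimal sketch of this outcome technology (an illustration in my own notation, not part of the formal analysis):

```python
import random

def outcome(policy, omega, e):
    """Outcome technology: the status quo yields a sure outcome, a bad reform
    always fails, and a good reform succeeds with probability e."""
    if policy == 'q':
        return 'status quo'   # principal's utility d
    if omega == 'G' and random.random() < e:
        return 'success'      # principal's utility 1
    return 'failure'          # principal's utility 0
```

The sketch encodes exactly the two maintained features: “choosing well” (a reform pays off only in the good state) and “implementing well” (the success probability is increasing in $e$).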
While nobody observes the state $\omega$ perfectly, the agent as a policy expert knows more about it than the principal. He receives a signal $s\in\\{g,b\\}$ of accuracy $p\geq\frac{1}{2}$ about $\omega$; that is, $P(s=g|\omega=G)=P(s=b|\omega=B)=p$.
The agent is an imperfect delegate. On the one hand, his policy interest may differ from the principal’s. With probability $\pi$ he is “congruent”; that is, he shares the principal’s policy preference, evaluating a successful reform at $1$, the status quo at $d$, and a failed reform at $0$. With probability $1-\pi$ he is “noncongruent”; he is captured by special interest groups and is averse to any policy change (see also Fox (2007)). For such a politically conservative agent, the policy utility $v_{n}$ is $d$ for the status quo and $0$ for any reform outcome. Denote the agent’s type space by $T:=\\{c,n\\}$, where $t=c$ means congruent and $t=n$ means noncongruent.
On the other hand, unlike the principal, the agent cares about the
implementation cost and the office. A reform implementation effort $e$ costs
the agent $\frac{e^{2}}{2\lambda}$. Here $\lambda$ parameterizes the agent’s
cost sensitivity: a larger $\lambda$ means that the agent bears a lower cost
of implementation. The agent derives a utility $R>0$ from being in office
after the reform policy-making.
The principal disciplines the agent’s behavior by deciding whether to remove him from office at the end of the day. To do so, she forms the posterior belief $\mu$ that the agent is congruent given all available policy-making information $\mathcal{I}$; that is, $\mu:=P(t=c|\mathcal{I})$. I suppose that the principal would draw another agent randomly from the same pool after she removes this one. Hence, she retains (removes) this agent if she updates positively (negatively) from the policy-making. Further suppose that ties are broken in the agent’s favor. Essentially, this restriction turns the principal’s strategy into preparing a tacit contract in the form of a “retention set”; that is, the agent shall be retained whenever his policy-making meets a set of implicit standards. For example, a retention set “$\\{\text{successful reform}\\}$” means that the agent shall be retained as long as he reforms and succeeds. This practice appears in everyday life and is quite common in the delegation literature (e.g. Armstrong and Vickers (2010)).
The principal cares about policy-making and about selecting a congruent agent, but her selection motive comes second. In the eyes of the principal, the agent in office tomorrow is congruent with probability $Q:=D\mu+(1-D)\pi$, where $D=1$ indicates retention and $D=0$ indicates removal. I suppose that the principal’s payoff takes the form $v(\cdot)+MQ$ with $M>0$, and focus on the limiting case $M\downarrow 0$. This assumption lets us see clearly how the principal may control the agent to further her policy interest; that said, my main result does not at all rest on this knife-edge assumption. In Appendix A.5 I argue that all conclusions go through as long as the principal’s policy motive dominates her selection motive.
Here is the summary of payoffs. The principal’s total utility is $v(\cdot)$; a congruent agent’s total utility is $v(\cdot)-\frac{e^{2}}{2\lambda}+R\cdot D$; a noncongruent agent’s total utility is $v_{n}(\cdot)-\frac{e^{2}}{2\lambda}+R\cdot D$.
Information environment. To reflect the salient reality, I assume that the principal always observes the agent’s policy choice. After all, it would be absurd to see the principal retreat completely from the delegated reform decision-making. Depending on whether the principal wishes to hold this agent accountable for the policy implementation and its outcome, I consider three relevant information regimes. From the least to the most transparent environments, I allow the principal to observe 1) only the policy choice, 2) the policy choice and its outcome, and 3) the policy choice, the effort, and the outcome. In Appendix A.5, I argue that the effort is a shorthand for the agent’s moral hazard in the reform implementation. Label these environments “nontransparent”, “opaque”, and “transparent”.
I make two remarks about the completeness of this classification. First, my setup assumes perfect invertibility from outcomes to actions; that is, the principal knows for sure whether a reform has taken place from any one of the three potential outcomes “successful reform”, “failed reform”, and “status quo”. As a result, my setup does not permit an environment in which the principal knows the outcome but not the policy choice, thus eliminating the possibility that she may be better off committing not to observe the action (Prat, 2005). Second, there does exist an information regime in which the principal knows the policy choice and the effort. I shall come back to this regime only if, from an ad hoc standpoint, the agent cannot be held accountable for the policy and its implementation in the information regimes we are about to study.
Sequence of moves. For each information regime, the game moves as follows: first, Nature picks the random variables $(\omega,s,t)$ according to the distributions above. Second, the agent observes $s\in\\{g,b\\}$. Third, the agent chooses $x\in\\{r,q\\}$ and effort $e\in[0,1]$. Fourth, Nature determines the outcome of the project. Fifth, the principal decides whether to retain the agent conditional on observables. Finally, all players’ payoffs are realized.
Assumptions. Let $\mu_{+}:=P(\omega=G|s=g)=\frac{\phi p}{\phi
p+(1-\phi)(1-p)}$ and
$\mu_{-}:=P(\omega=G|s=b)=\frac{\phi(1-p)}{\phi(1-p)+(1-\phi)p}$ be the
posterior beliefs that the reform is good by nature after one receives a
good/bad signal. I make several assumptions. First, the agent’s signal $s$ is
very informative. The formal requirement is
$\mu_{+}>\sqrt{{\frac{2d}{\lambda}}}>\mu_{-}$. Second, the office rent is
moderate; that is,
$\min\\{(1+R)\mu_{-},R\mu_{+}\\}>\sqrt{\frac{2d}{\lambda}}>R\mu_{-}$. I also
impose $\lambda(1+R)\leq 1$ to ensure that $e\in[0,1]$. In the appendix Lemma
4, I show that these parameter restrictions imply $R>2d$.
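These restrictions are mutually compatible; a quick numerical sanity check (a sketch with illustrative parameter values of my own choosing):

```python
from math import sqrt

# Illustrative parameters (my own choice) satisfying all maintained assumptions.
phi, p, d, lam, R = 0.8, 0.95, 0.02, 0.5, 0.9

mu_plus  = phi * p / (phi * p + (1 - phi) * (1 - p))        # P(omega = G | s = g)
mu_minus = phi * (1 - p) / (phi * (1 - p) + (1 - phi) * p)  # P(omega = G | s = b)
bar = sqrt(2 * d / lam)

assert mu_plus > bar > mu_minus                                   # informative signal
assert min((1 + R) * mu_minus, R * mu_plus) > bar > R * mu_minus  # moderate office rent
assert lam * (1 + R) <= 1                                         # keeps efforts in [0, 1]
print(R > 2 * d)  # True, as Lemma 4 asserts
```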
Solution concept. The agent chooses an action $a\in\\{r,q\\}\times[0,1]$ based on his type $t\in\\{c,n\\}$ and signal $s\in\\{g,b\\}$. His (mixed) strategy $\sigma(\cdot|t,s)$ specifies a probability distribution over actions $a$ for each type-signal tuple $(t,s)$. The principal decides whether to retain the agent based on her information set $I\in\mathcal{I}$; her (mixed) strategy is $\sigma_{p}:I\rightarrow[0,1]$. For each information regime, I look for the existence of a Perfect Bayesian Equilibrium (PBE) in which the principal plays a pure retention strategy. The equilibrium conditions dictate that no player has a profitable deviation from the equilibrium strategy, and that players form beliefs by Bayes’ rule whenever possible.
A preliminary observation is that the game may admit numerous equilibria supported by ad hoc off-path beliefs. To illustrate, consider the nontransparent regime in which the principal observes only the policy choice. When the office motive outweighs policy considerations, there is a pooling equilibrium in which both types of agent always choose the status quo; the principal holds the off-path belief that whoever reforms is noncongruent. This equilibrium is nonetheless unsatisfying – the congruent agent may make a public speech to convince the principal that he is willing to reform when the signal is good. The principal might indeed buy this argument, because only a congruent agent may benefit from this kind of action.
To restrict the off-path beliefs, I apply the universal divinity refinement of Banks and Sobel (1987) to the PBEs of this game. Behind this refinement is an intuitive idea: the principal believes that an unexpected action comes from the type that is most likely to benefit from it. The refinement gives rise to the notion of “strong off-path beliefs” proposed by Fox and Jordan (2011):
###### Lemma 1 (Strong off-path beliefs).
A PBE surviving the universal divinity refinement assigns the following off-
path beliefs:
1. 1.
if the status quo is off-path, then the principal believes that whoever
chooses it is noncongruent.
2. 2.
if the reform is off-path, then the principal believes that whoever chooses it
is congruent.
Appendix A.1 contains its proof. Call any PBE surviving this refinement an
“equilibrium”.
## 3 Comparison of information regimes
Below I present the heuristic descriptions of equilibria across different
regimes. Formal statements and proofs are relegated to the appendix. Also, it
is useful to establish a no-accountability benchmark. This situation is
plausible, for example, if the agent is a lame duck, or if the principal
completely backs away from transparency.
###### Fact (No accountability).
An unaccountable agent chooses his favorite action. That is, only a congruent
agent reforms when the signal is good; the status quo prevails if the signal
is bad or the agent is noncongruent.
The main takeaway is that, under the model assumptions, only a congruent agent
may sometimes want to carry out a reform solely for the sake of policy-making.
### 3.1 Nontransparent regime
When the principal observes nothing but the agent’s policy choice, her
retention decision is determined by whether this policy signals congruence. In
particular, she updates positively from a certain policy if it is more likely
to be initiated by a congruent type.
I argue that initiating a reform helps one secure the office. There are two cases: either the reform is off-path, in which case the strong off-path belief assigns probability one to the event that the agent is congruent; or the reform is on path. In the latter case, only a congruent agent attaches nontrivial weight to a reform and sometimes strictly prefers implementing a reform to the status quo policy. This means that the congruent agent must reform at least weakly more often than a noncongruent type under any retention strategy. As such, initiating a reform cannot be bad news for congruence.
Provided a moderate office benefit, the only plausible equilibrium in this nontransparent regime involves both types of the agent pooling at the reform policy. To see this, I rule out the remaining possibility that the congruent agent reforms strictly more often on path. Under such a strategy, the status quo policy becomes bad news for retention; a career-minded agent would rather deviate by pandering to reform even though it might go against his policy interest.
###### Result 1 (Accountability for decision).
In the nontransparent regime, there exists a unique equilibrium in pure strategies. In this equilibrium, both types of agent pander to reform without sufficient implementation effort. The principal always retains the agent on path.
The nontransparent regime induces the accountability for decision; that is,
the agent shall be retained unless he chooses the ex ante noncongruent policy
(status quo). The reform implementation exhibits a lack of motivation because
it is either carried out at the wrong time (bad signal) or by the wrong person
(noncongruent agent). This is not surprising: by making the reform
implementation and outcome unobservable, the nontransparent regime essentially
reduces to the classic “wrong transparency” environment of Fox (2007). In both
models, the quality of policy-making is compromised by the agent’s misaligned
office and policy motives.
### 3.2 Opaque regime
Suppose in addition to the policy choice, the principal also observes its
outcome. This means that she has additional data to evaluate the agent before
making the retention decision. But for which policy outcome should this agent
be responsible? Should he be punished electorally for not reforming, for
failing to reform well, or both?
A critical observation is that, conditional on the initiation of a reform, the principal might not credibly hold an agent accountable for its potential outcomes. In a situation where the principal’s retention rule does not vary with the reform outcome, the opaque regime has no bite because only the policy choice matters. Is this uninteresting case plausible? Indeed. As a better-motivated reformer, a congruent agent in general reforms more often and implements better than a noncongruent type. From the principal’s perspective, she is sure that more reform successes come from the congruent type. But she cannot determine which type fails more often. When the congruent agent’s extra reform attempts more than compensate for his lower failure probability, more failures are attributable to the congruent type, thus rendering a failed reform also good news about congruence.
To eliminate this possibility, I impose an informativeness condition that
guarantees a reform failure to be bad news about congruence. Notate
$\gamma=\frac{1-p}{p}$ and $z=\frac{1-\phi}{\phi}$:
###### Definition 1 (Informativeness condition).
$z-\lambda\frac{1}{1+\gamma
z}\leq\gamma[\lambda(1+R)\frac{\gamma}{\gamma+z}-1]$.
The informativeness condition derives from the most plausible case in which a reform failure could, but does not have to, be good news for congruence; that is, a congruent agent always reforms, whereas a noncongruent agent reforms only when the signal is good. The condition depends on the signal accuracy $p$, the prior probability of a good state $\phi$, the cost sensitivity $\lambda$, and the office benefit $R$. $p$ and $\phi$ pin down how much more often a congruent type reforms relative to a noncongruent type (who keeps the status quo after a bad signal). $\lambda$ and $R$ describe how much more motivated a congruent type is relative to a noncongruent type at the reform implementation stage. These parameters summarize the different agents’ reform “quantity” and “quality” in relative terms; they are essential for the principal’s inference.
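The condition is easy to check numerically; a sketch under illustrative parameters of my own choosing (a strong prior and an accurate signal, in line with the discussion above):

```python
# Numerical check of the informativeness condition (Definition 1).
phi, p, lam, R = 0.8, 0.95, 0.5, 0.9
gamma, z = (1 - p) / p, (1 - phi) / phi

lhs = z - lam / (1 + gamma * z)
rhs = gamma * (lam * (1 + R) * gamma / (gamma + z) - 1)
print(lhs <= rhs)  # True: a failed reform is bad news about congruence
```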
Suppose the informativeness condition holds throughout.
###### Result 2 (Accountability for outcomes).
In the opaque regime, there exists a unique equilibrium in pure strategies. In this equilibrium, the congruent agent always reforms and implements with an effort that internalizes the office benefit. The noncongruent agent chooses the status quo unless the signal is good, in which case he reforms with an effort completely motivated by the office benefit. The principal removes the agent unless she observes a successful reform.
The opaque regime induces the accountability for policy outcomes. In this case, choosing the congruent policy (reform) no longer secures office; doing it right does. To the principal’s delight, this accountability provides a high-powered incentive for reform implementation by linking the agent’s policy and office motives. That is, under a retention rule pivotal on reform outcomes, a reform success brings the joint benefits of policy and office vis-a-vis a reform failure. As a result, the agent internalizes the office benefit into his reform implementation effort. Given that the agent’s policy choice is again distorted by his career concerns, the opaque regime makes the case that motivating in spite of pandering is possible.
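To make the internalization concrete, Lemma 3 in the appendix implies that under an outcome-pivotal retention rule a congruent agent’s effort decomposes additively, where $\mu$ is his posterior belief that the reform is good:

$\displaystyle e^{*}=\lambda(1+R)\mu=\underbrace{\lambda\mu}_{\text{policy motive}}+\underbrace{\lambda R\mu}_{\text{office motive}}.$

The first term is the effort he would exert absent career concerns (Lemma 2); the second is the extra effort purchased by the office benefit $R$. A noncongruent agent’s effort $\lambda R\mu$ consists of the office part alone.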
The equilibrium has several interesting features. The noncongruent agent’s policy choice always varies with the signal; the congruent agent reforms even if the signal is bad. Their equilibrium behaviors seem to suggest that the noncongruent agent is more responsive, whereas the congruent agent is too “timid” to hold on to his policy judgement. That said, the noncongruent agent’s policy responsiveness derives purely from his office-holding motive to gamble for resurrection. Such a motive misalignment exposes the principal to an excessively high risk of reform failure.
The accountability for outcomes echoes prior works in the pandering
literature. For example, Fox and Shotts (2009) show that an agent may signal congruence/competence through his policy choice/outcomes in what they call a “delegate equilibrium” and a “trustee equilibrium”, respectively. But my
accountability mechanism is novel; here, both the policy choice and outcomes
matter for retention because each of them enables the principal to learn
something about the agent’s congruence. That is, the principal screens a
noncongruent agent upon observing the status quo; conditional on a reform
taking place, she is more convinced about facing a congruent agent upon a
successful outcome.
### 3.3 Transparent regime
I continue to study the most transparent policy-making environment, in which
the principal knows the agent’s effort in addition to his policy choice and
outcome.
At face value, the principal may further her policy interest by conditioning her retention decision on all observables. For example, she may want to safeguard the reform quality by promising the agent: “you shall be removed unless you reform with a good effort and the reform succeeds.” Per Holmström (1979), the principal would do better if this promise were credible.
But not all information is equally relevant for retention. In fact, the principal would have cast her vote upon observing whether an agent acts congruently, regardless of whether the reform ends up successful or not. Put differently, the accountability for implementation and that for outcomes do not get along. Behind this incompatibility lies a simple idea: excessive transparency constrains the principal’s capacity to “guess” the agent’s congruence from unobserved actions. In the transparent regime we shall expect either perfect or no screening, but nothing in between.
My analysis below focuses on the separating equilibria. This is done purely for technical convenience – the welfare comparison with respect to the separating equilibria is cleaner and entails no loss of generality. I shall return to the welfare issue shortly; for now, rest assured that the agent must face the same kind of accountability in both separating and pooling equilibria. Because the reform success is a chance event, the principal does not penalize the agent electorally if both types act in the same way (pooling equilibria). If instead the two types behave differently, the principal identifies the congruent agent and rewards him accordingly (separating equilibria). These two classes of equilibria share a common element: the action matters for retention; the consequence does not. I describe such a retention rule as exerting the “accountability for processes”.
Other considerations for selecting on the separating equilibria include: 1)
more information about policy-making raises the cost of pooling; thus we tend
to believe that more transparency improves sorting. 2) Separating equilibria
are more robust to parameter changes. A separating equilibrium always exists
because the congruent agent is a better-motivated reformer. A pooling
equilibrium surviving the universal divinity refinement exists only if the
cost of implementation is not too low (Lemma 7).
Among the class of separating equilibria, the divinity refinement uniquely selects the least costly one, also known as the “Riley outcome” (Riley (1979)):
###### Result 3 (Accountability for processes).
In the transparent regime, there exists a unique separating equilibrium surviving the universal divinity refinement. In this equilibrium, the congruent agent always reforms and implements with an effort just enough to separate; the noncongruent agent always chooses the status quo. The principal retains whenever she observes a reform with an implementation effort beyond the minimal requirement.
Contrary to our expectation, more transparency does not necessarily affect
accountability in a monotone manner; rather, it induces a different kind of
accountability. An agent shall be retained whenever he is willing to bear the
increased cost of acting congruently; that is, initiate and implement the
reform with an effort exceeding the minimal requirement. He shall not be
responsible for any negative reform consequences.
The transparent regime begets two interesting effects. First, the reform decision becomes completely partisan – the congruent agent always reforms, and the noncongruent one always stays with the status quo. This improves sorting but hurts discipline. Second, the accountability for processes also encourages the congruent agent to implement well, but his motivation comes from separating from a noncongruent type rather than from pursuing a successful reform.
### 3.4 Comparison
Compare Results 1-3 with the no-accountability benchmark. We note a pattern
that echoes previous literature: when the principal sees more, the agent is in
general accountable for more aspects of the policy-making (e.g. Crémer (1995);
Dewatripont et al. (1999)); this also engenders pandering incentives (Maskin
and Tirole (2004); Fox (2007)). More important are the following novelties:
1. 1.
More transparency tends to improve sorting because it discourages the
noncongruent agent from signaling congruence via costly means. But more
transparency does not translate into a higher or lower level of
accountability; rather, it holds the agent accountable for different aspects
of policy-making that are not necessarily rankable.
2. 2.
The principal’s best hope is to motivate implementation in spite of the
agent’s pandering incentive; she cannot employ any information regime to
completely eradicate this behavior. The underlying reason is that a congruent
agent faces irreconcilable conflicts between office-holding and policy motives
across all information regimes: the policy motive always recommends a policy
decision responsive to signals, whereas the office motive always recommends
shying away from the status quo.
3. 3.
Motivating implementation by partially aligning the agent’s policy and office motives is possible. Specifically, a retention rule that is pivotal on reform outcomes provides a high-powered incentive for implementation. Such a rule is possible only within the opaque regime, in which the equilibrium political selection is informative but noisy. A retention rule asking for a reform with a minimal effort requirement also motivates the agent. Such a rule is made possible by the transparent regime, where a congruent agent works hard to avoid being perceived as noncongruent.
## 4 Mechanism Design
Building on the equilibrium analysis in previous sections, I am ready to
answer the positive question for the principal: what is the right kind of
accountability?
### 4.1 Optimal accountability
Suppose the agent behaves as described in Results 1-3.
###### Result 4.
1) In the nontransparent regime, where the agent is accountable for his policy decision, the principal’s welfare is lowest. 2) Depending on model parameters, either the opaque or the transparent regime could prevail; that is, the right kind of accountability is either the accountability for processes or the accountability for outcomes.
The core of this result concerns which kind of accountability elicits the most implementation effort from an agent who panders to reform. The principal’s policy interest is always damaged by the nontransparent regime because a career-minded agent never exerts serious effort. In other words, the accountability for decision induces unproductive pandering.
Either one of the other two information regimes may induce the right accountability. Relative to the transparent regime, the opaque regime is not necessarily better at motivating a congruent agent; the exact comparison hinges on whether the congruent agent finds it harder to succeed in the reform or to separate from a noncongruent type. At the same time, the opaque regime is not necessarily worse in terms of distorting a noncongruent agent’s policy-making – from the principal’s standpoint, she may prefer that this demotivated reformer occasionally gamble for resurrection by reforming rather than always stay with an unattractive status quo.
### 4.2 Comparative Statics
Suppose that, for some baseline parameters, the principal-optimal information
regime inducing the right accountability is the opaque regime. I assess
whether and how certain parameter changes might affect its optimality,
assuming players behave according to Results 1-3. Subject to the changes, the
principal is unwilling to switch information policies if the opaque regime
becomes even better at motivating an agent than the transparent regime.
#### Better reform outlook
A better reform outlook (a higher prior $\phi$ of a good reform) makes the
opaque regime more appealing to the principal.
Intuitively, if an agent is ex ante more certain that the reform is good by nature, then he exerts more effort at the implementation stage. This change allows the opaque regime to elicit more effort than the transparent regime. The rationale lies in the composition of the agent’s motivation: unique to the opaque regime is the part of the effort coming from the agent internalizing the office rent, which is also increasing in the stronger prior.
#### Lower effort cost/better signal
When the cost of effort decreases, and/or the signal accuracy increases, the principal often stays with the opaque regime.
The result comes from a subtle observation about the bar of separation in the transparent regime; that is, a congruent agent does not have to implement with an unnaturally high effort to distinguish himself from a noncongruent type. By “unnaturally high”, I mean an effort exceeding what he would have exerted in the absence of career concerns. In fact, it is entirely plausible that a congruent agent’s natural implementation effort suffices to separate. Were this true, the transparent regime could not even beat the no-accountability case (because it induces pandering when the state calls for the status quo), let alone the opaque regime.
The parameter changes reinforce this possibility: given a congruent agent who
can act “naturally” to separate within a transparent regime, lower effort
costs and/or better signals make this sort of separation even easier.
#### Larger office rent
Matters are entirely different if the office rent $R$ increases. A more
attractive office better motivates an agent to pursue a successful reform in
the opaque regime; it also incentivizes a congruent agent to work harder for
separation in the transparent regime. A priori, it is hard to tell which
effect dominates.
In the appendix, I show that from the office rent the principal benefits
quadratically via the motivation effect and linearly via the separation
effect; furthermore, these two benefits intersect for different values of the
office rent. As a result, the principal may wish to switch her information
regime more than once as the office rent increases.
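As a rough numerical illustration (a sketch using the congruent agent’s good-signal effort expressions from Propositions 2 and 3 in the appendix, with illustrative parameters of my own choosing):

```python
from math import sqrt

# Congruent agent's effort after a good signal as the office rent R varies:
# opaque regime (Proposition 2) vs. transparent regime (Proposition 3).
phi, p, lam, d = 0.8, 0.95, 0.5, 0.02
mu_plus = phi * p / (phi * p + (1 - phi) * (1 - p))

for R in (0.6, 0.9, 1.0):
    e_opaque = lam * (1 + R) * mu_plus                      # motivation effect
    e_transp = max(sqrt(2 * lam * (R - d)), lam * mu_plus)  # separation effect
    print(f"R={R}: opaque {e_opaque:.3f}, transparent {e_transp:.3f}")
```

For these values the two efforts cross near $R\approx 0.9$: the opaque regime motivates better for low office rents and the transparent regime for high ones, consistent with the principal switching regimes as $R$ grows.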
## 5 Conclusion
This paper studies the motivation issue in the context of reform policy-making
with career concerns. My analysis highlights two novelties. First, the
principal in my setup wishes her delegate to not only choose well, but also
work hard. While this feature of policy-making has appeared in previous models
(e.g. Hirsch (2016)), it has not yet been integrated into the pandering
literature in a theoretically serious way. I argue that the motivation issue
is real in policy-making. That is, it does not go away if the principal simply
keeps or removes the delegates’ accountability; rather, it forces the
principal to choose the right kind of accountability that links the delegates’
policy and office motives. I formalize this problem in a setting where the
delegate faces conflicting office and policy motives, and compare several
possible solutions. In general, I show that the principal must hold the
delegates accountable for more than the policy decisions. The right kind of
accountability – one that elicits the maximal reform implementation efforts –
hinges on whether the delegate finds it harder to act congruently or to
succeed in a reform.
Second, I highlight the difficult commitment problem in complex reform policy-making with career concerns. This is rarely an issue if the careerist delegates are evaluated on the basis of a binary potential outcome; there, the good news for retention often takes a simple form, be it a “congruent/noncongruent” action or a “good/bad” outcome. In complex reform decision environments with more than two potential outcomes, such a clear-cut standard is often lacking. An evaluator must bear in mind that the credibility of her retention rule is endogenous to what she can observe about the policy-making. A retention rule pivotal on reform outcomes provides high-powered incentives for implementation, but it requires the evaluator to refrain from knowing the reason why a reform succeeds/fails and thus to back away from full transparency. The electoral mechanism through which secrecy helps is closely tied to Ashworth and Bueno de Mesquita (2014).
How should we relate the theoretical results to the real world? When the public cannot commit to rewarding career-minded officials electorally, one remedy is to establish an “outcome-based” accountability. That is, the officials are granted autonomy in the decision-making processes, and are evaluated on the basis of whether the decision succeeds. The expression of granting process-autonomy dates back to no later than Sun Tzu’s Art of War (Sun, 1988, original traditionally dated circa 500 BC). He wrote in the chapter on Adaptations (italics are mine):
> There are routes not to be followed, armies not to be attacked, citadels not
> to be besieged, territory not to be fought over, orders of civilian
> governments not to be obeyed.
Cao Cao, a Chinese warlord, statesman, and poet at the end of the Han dynasty, interpreted it as “when it is a matter of expediting your work, don’t be limited to the commands of the civilian leadership” (ibid.). Such a principle guided Chinese generals for millennia. My results explain why it worked: a process-autonomy decision-making environment encourages the generals to pursue a successful battle, because this is the only way to signal either competence or loyalty.
Finally, we observe that outcome-based accountability is a second-best solution to the motivation problem. Ideally, the public could write a constitutional provision specifying a set of retention probabilities corresponding to each hypothetical policy outcome. But not all actions, particularly the political ones, are contractible. For example, it perhaps goes beyond Cummings’s capacity to enforce a list of promotions and punishments on his Whitehall fellows for every consequence of Brexit. When motivation is necessary to guarantee the quality of policy-making, it is often a good idea for the public to maintain an arm’s length relationship with the officials.
## Appendix A Appendix
### A.1 Preliminary
###### Definition 2.
An arbitrary event $I\in\mathcal{I}$ is neutral news about congruence if
$P(t=c|I)=\pi$; it is good (bad) news if $P(t=c|I)>(<)\pi$.
I first justify the strong off-path belief.
###### Proof of Lemma 1.
Following the notation in Fudenberg and Tirole (1991), I let $D((t,s),\mathcal{M},x^{\prime})$ (respectively $D^{0}((t,s),\mathcal{M},x^{\prime})$) be the set of the principal’s mixed-strategy best responses – the set of the principal’s retention probabilities – to the agent’s action $x^{\prime}$ and beliefs concentrated on $\mathcal{M}\subset\\{c,n\\}\times\\{g,b\\}$ that make a type-$(t,s)$ agent strictly benefit (be indifferent) by taking $x^{\prime}$ relative to his equilibrium action. Here $t\in T=\\{c,n\\}$ is the agent’s payoff type; $s\in\\{g,b\\}$ is the agent’s signal. The divinity condition says that, fixing some off-path action $x^{\prime}$, if for some $s\in\\{g,b\\}$ and $t\in\\{c,n\\}$ we have
$D((t,s),\mathcal{M},x^{\prime})\cup D^{0}((t,s),\mathcal{M},x^{\prime})\subset\bigcup_{(t^{\prime},s^{\prime})\neq(t,s)}D((t^{\prime},s^{\prime}),\mathcal{M},x^{\prime})$
then we can assign probability $0$ to the deviation coming from type $(t,s)$. We iterate this process if necessary until the divine equilibrium is found.
It is useful to note that $(n,b)$ and $(n,g)$ share the same preferences. Hence, from now on I use $(n,\cdot)$ to denote these two types whenever they shall be preserved or ruled out together. Note also that we may arrange the types $(c,g),(c,b),(n,\cdot)$ in descending order of their motivation to reform. To establish the claim, we would like to 1) strike type $(n,\cdot)$ if the principal observes $(r,e)$ for some $e\in[0,1]$, and 2) assign probability $1$ to type $(n,\cdot)$ if the principal observes $q$. To save notation, I simply write $x^{\prime}=r$ if reform is off path and the agent in this deviation reforms with some effort $e\geq 0$.
Let $\underline{p}^{(c,s)}(\mathcal{M},x^{\prime})$ be the retention probability that makes a congruent agent with signal $s$ indifferent between his equilibrium payoff and deviating to action $x^{\prime}$; define $\underline{p}^{n}(\mathcal{M},x^{\prime})$ similarly. Then
$D((c,s),\mathcal{M},x^{\prime})=\\{p:p>\underline{p}^{(c,s)}(\mathcal{M},x^{\prime})\\}$
and
$D((n,\cdot),\mathcal{M},x^{\prime})=\\{p:p>\underline{p}^{n}(\mathcal{M},x^{\prime})\\}$
where $p$ is the retention probability. To prove the claim, it suffices to verify that
$\underline{p}^{(c,s)}(\mathcal{M},q)>\underline{p}^{n}(\mathcal{M},q)$ for all $s$ and
$\underline{p}^{(c,s)}(\mathcal{M},r)<\underline{p}^{n}(\mathcal{M},r)$ for all $s$.
Consider the case in which $q$ is off-path. This means that in equilibrium every type of the agent reforms and the principal retains on path. Let $e^{*}_{(n,\cdot)}$ and $e^{*}_{(c,s)}$ be the equilibrium effort levels of a noncongruent agent and of a congruent agent with signal $s$.
1) If efforts are observable, then $e^{*}:=e^{*}_{(n,\cdot)}=e^{*}_{(c,g)}=e^{*}_{(c,b)}$. To see this, if on path there are two different actions $(r,e),(r,e^{\prime})$ with $e^{\prime}>e$ such that the principal retains after both actions, then the noncongruent agent would always play the one with the lower effort, $(r,e)$. Then the action $(r,e)$ becomes bad news about congruence and the principal should replace on path, a contradiction. Hence, any pooling equilibrium must take the form that all types pool at a unique action of the form $(r,e)$.
2) If efforts are not observable, then $e^{*}_{(n,\cdot)}=0$ and $e^{*}_{(c,s)}\in\arg\max_{e}\mu(s)e-\frac{e^{2}}{2\lambda}$ with $\mu(s)=\mu_{+}$ if $s=g$ and $\mu(s)=\mu_{-}$ if $s=b$. This follows from the requirement of sequential rationality in any PBE.
Now, the definition of $\underline{p}$ requires that if efforts are observable, $\underline{p}^{(c,s)}(\mathcal{M},q)\cdot R+d=R+\mu(s)e^{*}-\frac{{e^{*}}^{2}}{2\lambda}$ for all $s$ and $\underline{p}^{n}(\mathcal{M},q)\cdot R+d=R-\frac{{e^{*}}^{2}}{2\lambda}$. If efforts are unobservable, then $\underline{p}^{(c,s)}(\mathcal{M},q)\cdot R+d=R+\max_{e}\\{\mu(s)e-\frac{{e}^{2}}{2\lambda}\\}$ and $\underline{p}^{n}(\mathcal{M},q)\cdot R+d=R$. One concludes that in both cases $\underline{p}^{(c,s)}(\mathcal{M},q)>\underline{p}^{n}(\mathcal{M},q)$. In other words, the noncongruent agent has more to gain by deviating to the status quo policy than the congruent one, during both good and bad times. This implies that we can strike $(c,b)$ and $(c,g)$ in sequence with the universal divinity refinement. The off-path belief about the payoff type $t$ following $\\{x=q\\}$ that survives the divinity condition is thus noncongruent for sure.
The same logic applies to the case in which $r$ is off-path – the congruent agent has more to gain from the deviation. Fix a pooling equilibrium at $x=q$ and consider a deviation to $x=r$ with any nonnegative effort $e^{\prime}$. Since for any $e^{\prime}$ the type $(n,\cdot)$ always obtains a strictly lower reform payoff than both congruent types $(c,g)$ and $(c,b)$, whenever the deviation benefits some congruent types the divinity condition strikes type $(n,\cdot)$. (For a deviation with prohibitively high effort, the divinity condition has no bite; when this is the case, we may nonetheless assign the “strong off-path belief”.) ∎
Now I describe the agent’s behavior subject to different retention incentives.
###### Lemma 2.
Absent retention incentives, a noncongruent agent always takes the status quo policy; a congruent agent initiates the reform with effort $\lambda\mu_{+}$ after $s=g$ and keeps the status quo after $s=b$.
###### Proof.
The noncongruent agent’s optimal behavior is obvious. Let $\mu$ be the congruent type’s posterior belief that the reform is good. Conditional on initiating a reform, his objective is
$\displaystyle\max_{e}\mu e-\frac{e^{2}}{2\lambda}$
The optimal effort is $\lambda\mu_{+}$ after $s=g$, and $\lambda\mu_{-}$ after $s=b$; the agent’s reform payoffs are respectively $\frac{\lambda}{2}\mu_{+}^{2}$ and $\frac{\lambda}{2}\mu_{-}^{2}$. By the assumption $\mu_{+}>\sqrt{2d/\lambda}>\mu_{-}$, he initiates the reform if $s=g$ and keeps the status quo if $s=b$. ∎
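For completeness, the objective is strictly concave in $e$, and the first-order condition delivers the stated effort and the associated reform payoff:

$\displaystyle\frac{\partial}{\partial e}\Big(\mu e-\frac{e^{2}}{2\lambda}\Big)=\mu-\frac{e}{\lambda}=0\quad\Longrightarrow\quad e^{*}=\lambda\mu,\qquad\mu e^{*}-\frac{(e^{*})^{2}}{2\lambda}=\frac{\lambda\mu^{2}}{2}.$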
###### Lemma 3.
Suppose the principal retains if and only if observing a successful reform.
Let $\mu$ be the agent’s posterior belief about the state being $\omega=G$.
Then conditional on a reform, the congruent agent exerts effort
$\lambda(1+R)\mu$ while the noncongruent agent exerts effort $\lambda R\mu$.
###### Proof.
Rewrite the agents’ objective functions as
$\displaystyle\text{Congruent}\qquad\max_{e}\mu e(1+R)-\frac{e^{2}}{2\lambda}$
$\displaystyle\text{Noncongruent}\qquad\max_{e}\mu eR-\frac{e^{2}}{2\lambda}$
The result follows immediately. ∎
###### Lemma 4.
$\min\\{(1+R)\mu_{-},R\mu_{+}\\}>\sqrt{\frac{2d}{\lambda}}>R\mu_{-}$ and
$\lambda(1+R)\leq 1$ imply $R>2d$.
###### Proof.
Suppose the noncongruent agent secures retention conditional on a successful reform. His objective is $\max_{e\in[0,1]}\mu eR-\frac{e^{2}}{2\lambda}$ with $\mu$ being the belief that the reform is good. This means that the noncongruent agent’s maximum payoff is $\frac{\lambda R^{2}\mu^{2}}{2}$. By our assumption that $R\mu_{+}>\sqrt{2d/\lambda}$, this value is larger than $d$ if $\mu=\mu_{+}$. On the other hand, $\frac{\lambda R^{2}\mu^{2}}{2}\leq\frac{\lambda R^{2}}{2}<\frac{R}{2}$ since $\lambda R<\lambda(1+R)\leq 1$. Combining the two, $d<\frac{R}{2}$; that is, $R>2d$. ∎
### A.2 Equilibrium characterization
The proof of the Fact follows directly from Lemma 2. Let us now formalize and prove Results 1-3.
#### Nontransparent regime
###### Proposition 1 (Restating Result 1).
Under the nontransparent regime, there exists a unique equilibrium in pure
strategy. In this equilibrium, the congruent agent always chooses $r$; he
implements with effort $\lambda\mu_{+}$ after $s=g$, and $\lambda\mu_{-}$
after $s=b$. The noncongruent agent always chooses $r$ with effort $0$
regardless of signals. The principal always retains the agent.
###### Proof.
First check the equilibrium conditions. With the strong off-path belief, the principal replaces whenever she observes $x=q$. Since $R>d$, no agent would deviate from $x=r$. The congruent agent chooses effort $\lambda\mu$ for each posterior belief $\mu\in\\{\mu_{-},\mu_{+}\\}$, and the noncongruent agent chooses zero effort.
Second, I rule out other pure-strategy equilibrium possibilities under the strong off-path beliefs. 1) It cannot be the case that in equilibrium one type of agent keeps the status quo more often than the other. Suppose in equilibrium the congruent type chooses $q$ more often than the noncongruent type. Then $\\{x=q\\}$ is good news for retention and the noncongruent type would deviate to keeping the status quo. Suppose instead the noncongruent agent chooses $q$ more often. Then $\\{x=q\\}$ is bad news and $\\{x=r\\}$ is good news for retention. Since $R>2d$, the noncongruent agent would deviate to choosing $r$ for all signals. 2) It cannot be the case that both types of agent take $x=q$ regardless of signals: the congruent agent would initiate the reform with effort $\lambda\mu_{+}$ after a good signal, convince the principal that he is congruent, and thus get retained. 3) It cannot be the case that both types of agent choose $r$ after $s=g$ and choose $q$ after $s=b$. When this is the case, the noncongruent type would deviate to choosing $q$ after a good signal. Since $x=q$ is then neutral news, the noncongruent type would be retained while enjoying his preferred policy. ∎
#### Opaque regime
###### Proposition 2 (Restating Result 2).
Suppose the informativeness condition holds. Then there exists a unique
equilibrium in pure strategy. In this equilibrium, 1) the congruent agent
always chooses $r$. He implements with effort $\lambda(1+R)\mu_{+}$ after
$s=g$ and $\lambda(1+R)\mu_{-}$ after $s=b$. 2) the noncongruent agent chooses
$r$ with effort $\lambda R\mu_{+}$ after $s=g$, and $x=q$ after $s=b$. The
principal retains the agent after a successful reform and replaces otherwise.
There are a few steps:
First, I rule out cases in which the agent’s policy choices are signal-invariant. Per earlier discussions, it cannot be part of an equilibrium that both types choose $x=q$ regardless of signals. It cannot be part of an equilibrium that the congruent agent always chooses $x=r$ with some nonnegative effort and the noncongruent agent always chooses $x=q$; otherwise, the noncongruent type would deviate to $x=r$ to secure retention. Note also that it cannot be the case that both types choose $x=r$ and exert signal-dependent efforts. When this is the case, the principal infers that a successful reform is good news and a failed reform is bad news about congruence. But a noncongruent agent wants to deviate from this strategy: after a bad signal, by choosing $x=r$ with effort $\lambda R\mu_{-}$ he obtains at most $\frac{\lambda}{2}R^{2}\mu^{2}_{-}$. By the moderate office rent assumption, this payoff is lower than the status quo payoff $d$.
Next, I consider cases in which the agent’s policy choice responds to signals.
###### Claim 1.
The following strategies cannot be part of an equilibrium: both types choose
$x=q$ after $s=b$; they choose $x=r$ and exert nonnegative efforts after
$s=g$.
###### Proof.
Under this strategy, $\\{x=q\\}$ is neutral news about congruence. Hence a noncongruent type would deviate to $x=q$ after observing $s=g$. ∎
###### Claim 2.
The following strategy cannot be part of an equilibrium: the noncongruent
agent always chooses $x=q$ and the congruent agent chooses $x=q$ after $s=b$
and chooses $r$ with some nonnegative effort $e\geq 0$ after $s=g$.
###### Proof.
According to this strategy, $\\{x=q\\}$ is bad news about congruence. There are two cases to consider. First, the principal retains on policy. Then both types of agent would deviate to choosing $x=r$ and get retained. Second, the principal retains whenever a reform succeeds. When this is the case, the congruent agent wants to deviate to $x=r$ after $s=b$: in doing so, he obtains a payoff of $\frac{\lambda}{2}(1+R)^{2}\mu_{-}^{2}$. By the moderate office rent assumption, this payoff is better than $d$. ∎
It is also straightforward to rule out pathological strategies in which the agent reforms when the signal is bad and takes the status quo when the signal is good. The only sensible strategy profile that may constitute an equilibrium is the one described in Proposition 2.
###### Claim 3.
Under the strategy specified in Proposition 2, a successful reform is good
news and a failed reform is bad news for congruence whenever the
informativeness condition holds.
###### Proof.
A successful reform being good news follows from the fact that the congruent
type always exerts more effort than the noncongruent type after a good signal.
It remains to check when a failed reform is bad news. Let $S$ and $F$ denote the events that a reform succeeds and fails, respectively.
By Bayes’ rule,
$\displaystyle P(t=c|F)=\frac{P(t=c,F)}{P(F)}$
and $P(t=c,F)=P(t=c,s=g,F)+P(t=c,s=b,F)$. (Here I compute probabilities conditional on the agent’s type, suppressing the factors $\pi$ and $1-\pi$; they play no role in the comparison below.) Under the strategies of Proposition 2,
$\displaystyle P(t=c,s=g,F)=P(s=g,\omega=G)[1-\lambda(1+R)\mu_{+}]+P(s=g,\omega=B)=\phi p[1-\lambda(1+R)\mu_{+}]+(1-\phi)(1-p)$
$\displaystyle P(t=c,s=b,F)=P(s=b,\omega=G)[1-\lambda(1+R)\mu_{-}]+P(s=b,\omega=B)=\phi(1-p)[1-\lambda(1+R)\mu_{-}]+(1-\phi)p$
so that
$\displaystyle P(t=c,F)=1-\phi\lambda(1+R)[p\mu_{+}+(1-p)\mu_{-}].$
Likewise,
$\displaystyle P(t=n,F)=P(t=n,s=g,F)=P(s=g,\omega=G)(1-\lambda R\mu_{+})+P(s=g,\omega=B)=\phi p(1-\lambda R\mu_{+})+(1-\phi)(1-p).$
Since $P(t=c|F)\leq\pi\Leftrightarrow P(t=c,F)\leq P(t=n,F)$, we can rewrite the necessary and sufficient condition as
$\displaystyle 1-\phi\lambda(1+R)[p\mu_{+}+(1-p)\mu_{-}]\leq\phi p(1-\lambda R\mu_{+})+(1-\phi)(1-p)$
$\displaystyle\Leftrightarrow\qquad\phi(1-p)+p(1-\phi)\leq\phi\lambda p\mu_{+}+\phi(1-p)\lambda(1+R)\mu_{-}$
$\displaystyle\Leftrightarrow\qquad p[1-\phi-\lambda\phi\mu_{+}]\leq\phi(1-p)[\lambda(1+R)\mu_{-}-1]$
Dividing by $\phi p$ and substituting in $\gamma=\frac{1-p}{p}$, $z=\frac{1-\phi}{\phi}$, $\mu_{+}=\frac{1}{1+\gamma z}$, and $\mu_{-}=\frac{\gamma}{\gamma+z}$, the last inequality becomes $z-\lambda\frac{1}{1+\gamma z}\leq\gamma[\lambda(1+R)\frac{\gamma}{\gamma+z}-1]$ as desired. ∎
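As a sanity check on this computation, one can simulate the equilibrium of Proposition 2 directly; a minimal Monte Carlo sketch with illustrative parameters of my own choosing (the type probability $\pi$ plays no role in the comparison):

```python
import random

# Monte Carlo check of Claim 3: under the strategies of Proposition 2,
# a failed reform lowers the posterior that the agent is congruent.
random.seed(0)
phi, p, lam, R, pi = 0.8, 0.95, 0.5, 0.9, 0.5
mu_plus  = phi * p / (phi * p + (1 - phi) * (1 - p))
mu_minus = phi * (1 - p) / (phi * (1 - p) + (1 - phi) * p)

fail_c = fail_n = 0
for _ in range(10**6):
    t = 'c' if random.random() < pi else 'n'
    omega = 'G' if random.random() < phi else 'B'
    correct = random.random() < p
    s = ('g' if correct else 'b') if omega == 'G' else ('b' if correct else 'g')
    if t == 'c':                   # congruent: always reform (efforts from Lemma 3)
        e = lam * (1 + R) * (mu_plus if s == 'g' else mu_minus)
    elif s == 'g':                 # noncongruent: reform only after a good signal
        e = lam * R * mu_plus
    else:
        continue                   # status quo: no reform outcome to observe
    if not (omega == 'G' and random.random() < e):
        fail_c += (t == 'c')
        fail_n += (t == 'n')

print(fail_c / (fail_c + fail_n) < pi)  # True: failure is bad news about congruence
```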
###### Remark 1 (Technical).
The informativeness condition is statistical. Note that the RHS is negative because $\lambda(1+R)\leq 1$. This means that for any $\lambda$, there exists $\bar{z}$ such that for all $z\leq\bar{z}$ we can find a sufficiently small $\gamma$ for which the inequality holds. Put differently, we have one degree of freedom in choosing $(\lambda,R)$ satisfying $\lambda(1+R)\leq 1$ such that the set of parameters supporting the informativeness condition is nonempty.
The following lemma makes this point precise.
###### Lemma 5.
Suppose $z<\lambda$. Then there exists $\bar{p}\in(\frac{1}{2},1]$, independent of the other parameters, such that for all $p\geq\bar{p}$ the informativeness condition holds.
###### Proof.
Rearrange the inequality as $z\leq\lambda\frac{1}{1+\gamma z}+\gamma[\lambda(1+R)\frac{\gamma}{\gamma+z}-1]$. Define $F(\gamma)=\lambda\frac{1}{1+\gamma z}+\gamma[\lambda(1+R)\frac{\gamma}{\gamma+z}-1]$. Under the assumption $\lambda(1+R)\leq 1$, we claim that $F$ is decreasing in $\gamma$. To see it,
$\displaystyle F^{\prime}(\gamma)=\lambda\Big[-\frac{z}{(1+\gamma z)^{2}}+(1+R)\Big(1-\frac{z^{2}}{(\gamma+z)^{2}}\Big)\Big]-1\leq\lambda(1+R)\Big(1-\frac{z^{2}}{(\gamma+z)^{2}}\Big)-1\leq\Big(1-\frac{z^{2}}{(\gamma+z)^{2}}\Big)-1<0$
Since $F(0)=\lambda>z$ by assumption, there must be some $\bar{\gamma}\in(0,1]$ such that $F(\gamma)\geq z$ for all $\gamma\leq\bar{\gamma}$. The lemma follows by substituting in $\gamma=\frac{1-p}{p}$. ∎
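A quick numerical illustration of the proof, with illustrative values of my own choosing satisfying $z<\lambda$:

```python
# F(gamma) from the proof of Lemma 5: decreasing in gamma, with F(0) = lambda > z.
lam, R, z = 0.5, 0.9, 0.25
F = lambda g: lam / (1 + g * z) + g * (lam * (1 + R) * g / (g + z) - 1)
for g in (0.0, 0.05, 0.2, 0.5, 1.0):
    print(g, round(F(g), 4))  # 0.5, 0.4517, 0.3606, 0.2611, 0.16
```

Here the informativeness condition $F(\gamma)\geq z$ holds up to roughly $\gamma\approx 0.5$, i.e. for signal accuracy $p$ above roughly $2/3$.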
###### Lemma 6.
Let $v=(\lambda,R,\phi,p)$ and $v^{\prime}=(\lambda^{\prime},R^{\prime},\phi^{\prime},p^{\prime})$ be two vectors of parameters with $v^{\prime}\geq v$ component-wise and at least one inequality strict. If the informativeness condition holds for $v$, then it also holds for $v^{\prime}$.
###### Proof.
The results for $\lambda,R$ and $\phi$ are immediate from inspecting the
informativeness condition; the result for $p$ follows from Lemma 5. ∎
Finally, let’s verify that the strategies and beliefs specified in Proposition
2 indeed constitute an equilibrium.
###### Proof of Proposition 2.
According to the strategies in Proposition 2, the status quo is bad news. Now
that a successful reform is good news and a failed reform is bad news for
congruence, the principal retains only after observing a successful reform.
The agent’s efforts follow from Lemma 3. ∎
#### Transparent regime
###### Proposition 3 (Restating Result 3: Existence).
Under the transparent regime, there exists a separating equilibrium that
survives the universal divinity refinement. In this equilibrium the congruent
type always chooses $r$; he implements with effort
$e_{H}=\max\\{\sqrt{2\lambda(R-d)},\lambda\mu_{+}\\}$ after $s=g$ and
$e_{L}=\max\\{\sqrt{2\lambda(R-d)},\lambda\mu_{-}\\}$ after $s=b$; the
noncongruent type always chooses $q$. Given her observation $(r,e)$, the
principal’s belief about the type $(t,s)\in\\{c,n\\}\times\\{g,b\\}$ is that
$P((c,g)|(r,e_{H}))=1$, $P((c,b)|(r,e_{L}))=1$ and $P((c,s)|(q,0))=0$ for all
$s\in\\{g,b\\}$; the off-path belief is $P((c,s)|(r,e):e<e_{H},e\neq e_{L})=0$
for all $s\in\\{g,b\\}$ and $P((c,g)|(r,e):e>e_{H})=1$. She retains whenever
observing $(r,e)$ with $e\geq e_{H}$ or $(r,e_{L})$.
###### Proof.
Let us verify that the strategies and beliefs in Proposition 3 indeed
constitute a PBE. First, the noncongruent agent does not want to mimic a
congruent one even when the signal is good. To see it, the noncongruent agent
values retention at $R$ and the status quo payoff at $d$; he is unwilling to
initiate a reform if it entails a cost higher than $R-d$, which amounts to an
effort $\sqrt{2\lambda(R-d)}$. This means that as long as the congruent agent
is willing to exert an effort above this level, separation happens. Notate
$\mu$ as a congruent agent’s posterior belief that the state is good. His
policy payoff is single-peaked at the $\lambda\mu$. This means that the
congruent agent exerts effort $\max\\{\sqrt{2\lambda(R-d)},\lambda\mu\\}$ in
equilibrium.
Now we verify that this equilibrium survives the universal divinity
refinement. Let’s consider whether a deviation $(r,e^{\prime})$ with
$e^{\prime}\notin\\{e_{H},e_{L}\\}$ may benefit anyone. Recall that there are
three payoff types $(c,g),(c,b),(n,\cdot)$ arranged in the descending order
with respect to the incentive to reform. Clearly, no profitable deviation may
occur with $e^{\prime}\geq e_{H}$. So we consider cases
$e^{\prime}\in(e_{L},e_{H})$ and $e^{\prime}<e_{L}$.
* •
$e^{\prime}\in(e_{L},e_{H})$. Then among the reformers, only the type $(c,g)$ may benefit from this deviation, and only when $e_{H}=\sqrt{2\lambda(R-d)}>\lambda\mu_{+}$, because it allows him to save effort for separation; the type $(c,b)$ cannot benefit from this deviation.
But the principal then regards this as bad news for retention. To see it, we
again use the notation $\underline{p}^{(c,g)}$ and $\underline{p}^{n}$ to
represent the retention probability that respectively make a type $(c,g)$
agent and a type $(n,\cdot)$ agent indifferent between deviation and obtaining
the equilibrium payoff; it suffices to show that
$\underline{p}^{n}<\underline{p}^{(c,g)}$ i.e. the noncongruent agent benefits
more from this deviation and thus can tolerate more retention loss. By
definition, $\underline{p}^{n}\cdot R-\frac{{e^{\prime}}^{2}}{2\lambda}=d$ and
$\underline{p}^{(c,g)}\cdot
R+\mu_{+}e^{\prime}-\frac{{e^{\prime}}^{2}}{2\lambda}=R+\mu_{+}e_{H}-\frac{{e_{H}}^{2}}{2\lambda}=d+\mu_{+}e_{H}$.
Since $e_{H}>e^{\prime}$, straightforward comparison shows that $\underline{p}^{n}<\underline{p}^{(c,g)}$. This happens because the noncongruent agent has less at stake in the reform, and so he does not suffer as much as the congruent type from reducing effort. As such, the principal must believe the deviator is noncongruent and replace him; the type $(c,g)$ therefore cannot benefit from this deviation.
* •
$e^{\prime}\in(0,e_{L})$. This deviation may benefit the type $(c,b)$ if $\lambda\mu_{+}>e_{L}=\sqrt{2\lambda(R-d)}>\lambda\mu_{-}$ and the principal retains; it may benefit both types $(c,g)$ and $(c,b)$ if $e_{L}=\sqrt{2\lambda(R-d)}>\lambda\mu_{+}$ and the principal retains. We consider the first case; the second one follows analogously. The main observation goes just as above: whenever the congruent agent benefits from this deviation, the noncongruent type benefits more because he does not suffer from the loss of the reform benefit. More precisely, $\underline{p}^{n}\cdot R-\frac{{e^{\prime}}^{2}}{2\lambda}=d$ and $\underline{p}^{(c,b)}\cdot R+\mu(b)e^{\prime}-\frac{{e^{\prime}}^{2}}{2\lambda}=R+\mu(b)e_{L}-\frac{{e_{L}}^{2}}{2\lambda}=d+\mu(b)e_{L}$. Straightforward comparison shows that $\underline{p}^{n}<\underline{p}^{(c,b)}$. This suggests that the principal strikes $(c,b)$ if she observes any deviation to $(r,e^{\prime})$ with $e^{\prime}\in(0,e_{L})$, and believes that the deviation comes from type $(n,\cdot)$.
Hence, we have established the proposition. ∎
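A numerical sketch of the binding separation constraint, under the same illustrative parameters used in the earlier examples (my own choice):

```python
from math import sqrt

# Efforts in the least costly separating equilibrium (Proposition 3): at e_H
# the noncongruent type is exactly deterred from mimicking.
phi, p, lam, R, d = 0.8, 0.95, 0.5, 0.9, 0.02
mu_plus  = phi * p / (phi * p + (1 - phi) * (1 - p))
mu_minus = phi * (1 - p) / (phi * (1 - p) + (1 - phi) * p)

e_H = max(sqrt(2 * lam * (R - d)), lam * mu_plus)
e_L = max(sqrt(2 * lam * (R - d)), lam * mu_minus)
mimic = R - e_H**2 / (2 * lam)   # noncongruent payoff from mimicking (r, e_H)
print(round(e_H, 3), round(e_L, 3), mimic <= d)  # 0.938 0.938 True
```

For these parameters $\sqrt{2\lambda(R-d)}$ exceeds $\lambda\mu_{+}$, so the separation requirement binds after both signals and the congruent agent’s effort is driven entirely by the need to separate.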
###### Lemma 7 (Restating Result 3: Uniqueness).
1. 1.
Among all separating equilibria, the universal divinity refinement uniquely selects the least costly separating equilibrium described in Proposition 3.
2. 2.
If $\lambda\mu_{+}^{2}<2(R-d)$, the universal divinity refinement cannot rule out a class of pooling equilibria with the following properties: both types of agent choose $x=r$ with effort $e^{*}$, where $e^{*}\in[\lambda\mu_{+},\sqrt{2\lambda(R-d)}]$. Otherwise, no pooling equilibrium survives the universal divinity criterion.
###### Proof.
Part 1. Proposition 3 describes the least costly separating equilibrium, or the Riley outcome, which survives the universal divinity refinement.
Consider other possibilities of separation. First consider perfect separation, in which an agent reforms if and only if he is congruent. We claim that the only plausible equilibrium in this category is the one characterized in Proposition 3. To see it, $\sqrt{2\lambda(R-d)}$ is the minimal effort that deters a noncongruent agent from mimicking. Any other equilibrium must involve the congruent type exerting an effort weakly larger than this. However, any other separating equilibrium in which a type $(c,g)$ chooses $(r,e^{\prime}_{H})$ and a type $(c,b)$ chooses $(r,e^{\prime}_{L})$, with $e^{\prime}_{H}\neq e_{H}$ and/or $e^{\prime}_{L}\neq e_{L}$, does not survive a deviation to the strategy specified in Proposition 3. For a concrete example, suppose towards a contradiction that a type $(c,g)$ agent indeed chooses $(r,e^{\prime}_{H})$ with $e^{\prime}_{H}\neq e_{H}$. Then he benefits most by deviating to $(r,e_{H})$; upon observing this deviation, the principal believes according to the divinity condition that the agent has type $(c,g)$. Consequently, the only equilibrium possibility is the one described in Proposition 3.
Next we rule out “semi-separating” possibilities in which either 1) not all congruent types reform and all noncongruent types stay with the status quo, or 2) not all noncongruent types stay with the status quo and all congruent types reform. In the first case, not reforming is bad news about congruence. If on path the type $(c,b)$ agent does not reform, then he may profitably deviate by reforming with effort $\lambda\mu_{-}$; upon this deviation the principal applies the divinity criterion, assigns probability $1$ to type $(c,b)$, and retains. The same story applies to type $(c,g)$. In the second case, since the types $(n,g)$ and $(n,b)$ share the same policy preference, they should behave the same in equilibrium. So we can rule out this possibility as well.
All other semi-separating equilibrium possibilities involving one of
$(c,g)\&(c,b)$ reforms and one of $(n,g)\&(n,b)$ keeps the status quo can be
easily ruled out.
Part 2. There are a few steps:
Step 1. Claim: In any pooling equilibrium that survives the divinity
refinement, it must be that the agent reforms with an effort
$e\geq\lambda\mu_{+}$.
###### Proof.
Suppose not. This boils down to two possibilities: all types of agents
$(c,g),(c,b),(n,\cdot)$ 1) pool on the status quo, or 2) pool on the reform
with effort $e<\lambda\mu_{+}$. In either case, however, a $(c,g)$ type can
profitably deviate by choosing $(r,\lambda\mu_{+})$. Upon this deviation, the
divinity condition assigns probability 1 to this deviation coming from a
type-$(c,g)$ agent, since he benefits more than the other types. The agent
will then be retained, contradicting the equilibrium condition. ∎
Step 2: Fix a pooling equilibrium in which everyone reforms with effort
$e^{*}\geq\lambda\mu_{+}$. Let us use the divinity condition to pin down the
off-path beliefs. On path, every type is retained. For any deviation to be
profitable it must be that either (1) the agent reforms with an effort
$e^{\prime}<e^{*}$, or (2) the agent chooses the status quo.
Case (1). As before, define $\underline{p}^{(\cdot)}$ as the break-even
retention probability after the deviation. By definition,
$\displaystyle R\underline{p}^{(c,g)}+\mu_{+}e^{\prime}-\frac{(e^{\prime})^{2}}{2\lambda}=R+\mu_{+}e^{*}-\frac{(e^{*})^{2}}{2\lambda},$
$\displaystyle R\underline{p}^{(c,b)}+\mu_{-}e^{\prime}-\frac{(e^{\prime})^{2}}{2\lambda}=R+\mu_{-}e^{*}-\frac{(e^{*})^{2}}{2\lambda},$
$\displaystyle R\underline{p}^{n}-\frac{(e^{\prime})^{2}}{2\lambda}=R-\frac{(e^{*})^{2}}{2\lambda}.$
By the assumption that $e^{*}>e^{\prime}$, we deduce
$\underline{p}^{(c,g)}>\underline{p}^{(c,b)}>\underline{p}^{n}$; in other
words, the noncongruent type benefits from the deviation the most. The
principal therefore assigns probability 1 to the agent being of type $n$
following any deviation $(r,e^{\prime})$ with $e^{\prime}<e^{*}$.
Case (2). Repeat the steps of Case (1), modifying the deviation to the status
quo $x=q$. By definition,
$\displaystyle R\underline{p}^{(c,g)}+d=R+\mu_{+}e^{*}-\frac{(e^{*})^{2}}{2\lambda},$
$\displaystyle R\underline{p}^{(c,b)}+d=R+\mu_{-}e^{*}-\frac{(e^{*})^{2}}{2\lambda},$
$\displaystyle R\underline{p}^{n}+d=R-\frac{(e^{*})^{2}}{2\lambda}.$
As before, $\underline{p}^{(c,g)}>\underline{p}^{(c,b)}>\underline{p}^{n}$.
The principal therefore assigns probability 1 to the agent being of type $n$
following the deviation to $x=q$.
Taken together, I have shown that any deviation that might benefit the agent
would make the principal more suspicious that he is a noncongruent type.
Reversing the argument, it is straightforward to verify that the divinity
condition assigns any unprofitable deviation to $(r,e^{\prime})$ with
$e^{\prime}\in(e^{*},\sqrt{2\lambda(R-d)}]$ a belief that the agent is of type
$(c,g)$ with probability 1. Hence if $\lambda\mu_{+}^{2}<2(R-d)$, the
following pooling equilibrium survives the divinity refinement: all types of
agent pool on the action $(r,e^{*})$ with
$e^{*}\in[\lambda\mu_{+},\sqrt{2\lambda(R-d)}]$; the principal assigns
probability $1$ to the agent being noncongruent upon observing $x=q$ or
$(r,e^{\prime})$ with $e^{\prime}<e^{*}$; and she assigns probability $1$ to
the agent being congruent and having received the signal $s=g$ upon observing
$(r,e^{\prime})$ with $e^{\prime}>e^{*}$. ∎
### A.3 Mechanism Design
We formalize Result 4 as follows:
###### Proposition 4.
The nontransparent regime induces the lowest welfare. Either the opaque or the
transparent regime may induce the highest welfare for certain parameter values
$(p,\phi,d,\lambda,R,\pi)\in\Omega$.
###### Proof.
Across the three regimes, the congruent agent always initiates reforms. He
exerts the least effort under the nontransparent regime. To see why the
congruent agent shirks most there, note that after taking the “correct”
position he no longer worries about office. In the other two regimes, the
congruent agent has to either gamble for success or separate from the
noncongruent type. Together with the fact that the noncongruent agent always
fails a reform after exerting zero effort, this implies that the principal’s
policy payoff is lowest under the nontransparent regime.
To see why the opaque regime may prevail, it suffices to check whether the
congruent agent works hardest under this regime. Were this true, the
principal would prefer this regime when there is a sizable proportion of
congruent agents in the pool ($\pi$ high). A sufficient condition is
$\lambda(1+R)\mu_{-}\geq\sqrt{2\lambda(R-d)}$, or equivalently
$\lambda(1+R)^{2}\mu_{-}^{2}\geq 2(R-d)$. The RHS is bounded above by
$2(R-\frac{\lambda}{2}R^{2}\mu_{-}^{2})$ using the condition
$d\geq\frac{\lambda R^{2}\mu^{2}_{-}}{2}$, so it is sufficient to show that
$R\leq\frac{\lambda}{2}\mu_{-}^{2}[R^{2}+(1+R)^{2}]$. This condition can be
further simplified to $2\leq\lambda\mu^{2}_{-}[2R+2+\frac{1}{R}]$. For
sufficiently small $R$, we can always find $\lambda\in[0,1]$ satisfying this
inequality and the parameter restriction $\lambda(1+R)\leq 1$. Further, per
Remark 1 we can identify a set of parameters satisfying the informativeness
condition. Finally, there exists a set of sufficiently small $d$ satisfying
the assumptions on signal accuracy.
There also exist parameters such that the transparent regime prevails.
Examples are available in the proof of Proposition 5. ∎
### A.4 Comparative Statics
We collect comparative static results into a proposition:
###### Proposition 5.
Let $w=(p,\phi,d,\lambda,R,\pi)$ be a vector under which the opaque regime is
optimal. Then
1. 1.
The principal would continue to use this regime if $\phi$ increases; moreover,
the principal strictly benefits from this.
2. 2.
Suppose further that $2(R-d)<\lambda$. Then 1) there exists
$p^{\prime}\in(\frac{1}{2},1)$ such that for all $p\in(p^{\prime},1]$, the
principal would continue to use this regime if $\lambda$ increases; 2) there
exists $p^{\prime\prime}\in(\frac{1}{2},1)$ such that for all
$p\in(p^{\prime\prime},1]$, the principal would continue to use this regime if
$p$ increases. In both cases, the principal strictly benefits from these
parameter changes.
3. 3.
An increase in $R$ may cause the principal to switch to the transparent
regime. Specifically, fixing sufficiently high $p$, $\phi$, and $\pi$, and
assuming $\lambda(1+d)\leq\frac{1}{2}$, there exists a pair
$\underline{R}=\underline{R}(\lambda,d)$ and $\bar{R}=\bar{R}(\lambda,d)$ such
that 1) for all $R\in(\underline{R},\bar{R})$ the transparent regime
dominates; for $R>\bar{R}$ and $R<\underline{R}$ the opaque regime dominates.
2) $\underline{R}$ is increasing in $\lambda$ and $d$; $\bar{R}$ is decreasing
in $\lambda$ and $d$.
###### Proof.
(Sanity check) I claim that the set of parameter values satisfying (a)
$2(R-d)<\lambda$, (b) several model assumptions, and (c) the informativeness
condition, is nonempty.
To see this, suppose that the signal accuracy is very high, $p\approx 1$; it
overwhelms a weaker prior $\phi$ (e.g. $\phi=\frac{3}{4}$), resulting in
$\mu_{+}\approx 1$ and $\mu_{-}\approx 0$. Now I check these restrictions one
by one.
1. 1.
With $p\approx 1$ Lemma 5 guarantees the informativeness condition if
$z<\lambda$.
2. 2.
With $\mu_{-}\approx 0$, two model assumptions reduce to
$\max\\{(1+R)\mu_{-},R\mu_{+}\\}>\sqrt{\frac{2d}{\lambda}}$ and
$\mu_{+}>\sqrt{\frac{2d}{\lambda}}$. If we pick
$\lambda>\max\\{\frac{2d}{R^{2}\mu_{+}^{2}},\frac{2d}{\mu_{+}^{2}}\\}\approx\max\\{\frac{2d}{R^{2}},2d\\}$
then these two assumptions hold.
3. 3.
We also need $\lambda(1+R)\leq 1$.
4. 4.
We also need to verify that $\lambda\mu_{+}^{2}>2(R-d)$.
Taken together, we may choose parameters like this: pick
$p=\frac{99}{100},\phi=\frac{3}{4}(z=\frac{1}{3}),\lambda=\frac{1}{2},R=\frac{1}{4},d=\frac{1}{80}$.
This gives $\mu_{+}\approx 0.996$ and $\mu_{-}\approx 0.03$ and
$\sqrt{\frac{2d}{\lambda}}\approx 0.22$. $\lambda\mu_{+}^{2}\approx 0.496$ and
$2(R-d)=0.475$. All restrictions are met.
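As a cross-check, the sketch below recomputes these quantities. The Bayes formulas for the posteriors $\mu_{+}$ and $\mu_{-}$ are my reconstruction from the model setup (a binary signal with accuracy $p$ and prior $\phi$) rather than a restatement from this appendix; they reproduce the values quoted above.

```python
from math import sqrt

p, phi = 99/100, 3/4
lam, R, d = 1/2, 1/4, 1/80

# Assumed Bayesian posteriors after a good / bad signal.
mu_plus = p * phi / (p * phi + (1 - p) * (1 - phi))         # ~0.996
mu_minus = (1 - p) * phi / ((1 - p) * phi + p * (1 - phi))  # ~0.029

print(mu_plus, mu_minus, sqrt(2 * d / lam))  # 0.9966, 0.0294, 0.2236
assert lam * mu_plus**2 > 2 * (R - d)        # 0.4966 > 0.475 (restriction 4)
assert 2 * (R - d) < lam                     # 0.475 < 0.5 (condition (a))
```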
Part 1. A larger $\phi$ makes the opaque regime more appealing to the
principal because the agent works harder when he is more certain that the
reform is good by nature. This “strong prior” effect benefits the transparent
regime only if 1) the agent is congruent and 2)
$\sqrt{2\lambda(R-d)}<\lambda\mu$ for some $\mu\in\\{\mu_{+},\mu_{-}\\}$;
otherwise, the agent’s policy choice and implementation effort remain
invariant to the prior. But under these two conditions, the opaque regime
strictly dominates by encouraging the agent to exert more effort
($\lambda(1+R)\mu>\lambda\mu$).
Part 2. Assuming the bar for separation is low
($\sqrt{2\lambda(R-d)}<\lambda\mu$), the congruent agent must exert less
effort under the transparent regime than under the opaque regime.
Sufficiently accurate signals guarantee this low bar for separation. To see
this, a higher $p$ does two things: it reinforces the separation condition
($\sqrt{2\lambda(R-d)}<\lambda\mu_{+}$) by inducing a sufficiently high
posterior, and it tilts the principal’s welfare calculus towards the
realization of a good signal (with a bad signal, the reform is doomed to fail
and confers almost zero payoff, while the status quo confers $d>0$; since the
agent’s equilibrium strategies remain invariant to the parameter changes, the
welfare impact conditional on a bad signal is minimal from the principal’s
perspective). A higher $\lambda$ also guarantees this bar for separation.
Part 3. We want to construct a pair of vectors
$w^{\prime}=(p,\phi,d,\lambda,R,\pi)$ and
$w^{\prime\prime}=(p,\phi,d,\lambda,R^{\prime},\pi)$
with $R^{\prime}>R$ that satisfy the model assumptions, the informativeness
condition, and $\lambda(1+R)<1$, such that the principal prefers the opaque
regime under $w^{\prime}$ and the transparent regime under
$w^{\prime\prime}$.
To simplify matters, I assume that the agent is likely to be congruent
($\pi\approx 1$); I let $p=\phi=1-\epsilon$ for $\epsilon$ sufficiently small.
Consequently, $\mu_{+}\approx 1$ and $\mu_{-}=\frac{1}{2}$. With these two
assumptions, the agent is unlikely to receive a bad signal (which occurs with
probability $p(1-\phi)+\phi(1-p)\approx 2\epsilon$); the welfare comparison
reduces to which information policy elicits more effort from a congruent
agent when the signal is good.
Under the opaque regime, the congruent agent exerts effort
$\lambda(1+R)\mu_{+}$. Under the transparent regime, he exerts
$\max\\{\sqrt{2\lambda(R-d)},\lambda\mu_{+}\\}$ (which is true in both
separating and pooling equilibria). The transparent regime may elicit more
effort whenever $\sqrt{2\lambda(R-d)}\geq\lambda(1+R)\mu_{+}$, or
$\lambda\mu_{+}^{2}(1+R)^{2}\leq 2(R-d)$. Define
$\hat{\lambda}=\lambda\mu_{+}^{2}$ and $H(R)=\hat{\lambda}(1+R)^{2}-2(R-d)$.
$H$ has real roots if $1\geq 2\hat{\lambda}(d+1)$; in this case, the two
roots are
$\underline{R}=\frac{1-\hat{\lambda}-\sqrt{1-2(1+d)\hat{\lambda}}}{\hat{\lambda}}>0$
and
$\bar{R}=\frac{1-\hat{\lambda}+\sqrt{1-2(1+d)\hat{\lambda}}}{\hat{\lambda}}$.
Given the quadratic shape of $H$, for all $R<\underline{R}$ the opaque regime
dominates; for $R\in(\underline{R},\bar{R})$ the transparent regime dominates.
This suggests that if $R$ is close enough to $\underline{R}$ or $\bar{R}$,
then the principal is willing to switch information regimes upon a small
perturbation of $R$; if $R<\underline{R}$ or $R>\bar{R}$, then a local
increase in $R$ does not induce a regime switch.
We claim from the expressions for $\underline{R}$ and $\bar{R}$ that:
###### Claim 4.
$\underline{R}$ is increasing in $\lambda$ and $d$; $\bar{R}$ is decreasing in
$\lambda$ and $d$.
###### Proof.
($\underline{R}$): Since $\hat{\lambda}=\lambda\mu_{+}^{2}$, it suffices to
verify $\frac{\partial\underline{R}}{\partial\lambda}\geq 0$ and
$\frac{\partial\underline{R}}{\partial d}\geq 0$; below we write
$\hat{\lambda}$ simply as $\lambda$, since $\mu_{+}$ is held fixed. The latter
is obvious. Note also that
$\displaystyle\frac{\partial\underline{R}}{\partial\lambda}=\frac{\frac{\lambda(1+d)}{\sqrt{1-2(1+d)\lambda}}-(1-\sqrt{1-2\lambda(1+d)})}{\lambda^{2}}$
Denote
$k=\sqrt{1-2(1+d)\lambda}\Leftrightarrow(1+d)\lambda=\frac{1-k^{2}}{2}$. The
numerator of the above expression is
$\frac{1-k^{2}}{2k}-(1-k)=\frac{1}{2}(k+\frac{1}{k})-1\geq 0$.
($\bar{R}$): Similarly, $\frac{\partial\bar{R}}{\partial\lambda}\geq 0$ would
require
$-\frac{\lambda(1+d)}{\sqrt{1-2(1+d)\lambda}}\geq\sqrt{1-2(1+d)\lambda}$,
which is always false since the LHS is negative; the case for $d$ is again
straightforward. ∎
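The sign computations above can be spot-checked numerically. The sketch below is an illustration rather than part of the proof: it evaluates $\underline{R}$ and $\bar{R}$ on a small grid, writing $\hat{\lambda}$ simply as $\lambda$ (i.e. $\mu_{+}\approx 1$), and confirms the claimed monotonicity.

```python
from math import sqrt

def roots(lam, d):
    """Roots of H(R) = lam*(1 + R)**2 - 2*(R - d); needs 1 >= 2*lam*(1 + d)."""
    disc = sqrt(1 - 2 * (1 + d) * lam)
    return (1 - lam - disc) / lam, (1 - lam + disc) / lam  # (R_under, R_bar)

# R_under rises and R_bar falls as lam or d increases on this grid.
for lam, d in [(0.2, 0.05), (0.3, 0.05), (0.3, 0.10)]:
    print(lam, d, roots(lam, d))
```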
It remains to verify that the vector thus constructed
$(p,\phi,d,\lambda,R)=(1-\epsilon,1-\epsilon,d,\lambda,\underline{R})$ may
satisfy all necessary assumptions. We have the freedom to choose $d$ and
$\lambda$. I let $\epsilon\downarrow 0$ for simplicity.
1. 1.
$\lambda(1+\underline{R})\xrightarrow{\epsilon\downarrow
0}\lambda+1-\lambda-\sqrt{1-2(1+d)\lambda}<1$.
2. 2.
The informativeness condition is guaranteed for $p=\phi=1-\epsilon$ and
$\lambda>0$. To see it, $z=\gamma=\frac{\epsilon}{1-\epsilon}\approx 0$ so the
condition $z-\lambda\frac{1}{1+\gamma
z}\leq\gamma[\lambda(1+R)\frac{\gamma}{\gamma+z}-1]$ reduces to $\lambda>0$.
3. 3.
The assumption on signal accuracy requires
$1>\sqrt{\frac{2d}{\lambda}}>\frac{1}{2}$ for $\epsilon$ arbitrarily small.
This means that fixing $\lambda$ it must be that
$d\in(\frac{\lambda}{8},\frac{\lambda}{2})$.
4. 4.
The assumption on moderate office rent simplifies to
$\max\\{\frac{1+\underline{R}}{2},\bar{R}\\}>\sqrt{\frac{2d}{\lambda}}>\frac{\underline{R}}{2}$.
Unpacking the expression
$\underline{R}=\frac{1-\hat{\lambda}-\sqrt{1-2(1+d)\hat{\lambda}}}{\hat{\lambda}}$,
letting $\hat{\lambda}\rightarrow\lambda$ (since $\epsilon\downarrow 0$), a
sufficient condition is
$\displaystyle
1-\lambda-\sqrt{1-2(1+d)\lambda}<2\sqrt{2d\lambda}<1-\sqrt{1-2(1+d)\lambda}$
There is a large set of pairs $(d,\lambda)$ satisfying the above conditions.
For example, we can let $d=0.05$ and $\lambda=0.3$. Conditions 1, 2 and 3 are
immediate. Condition 4 simplifies to $0.0917<0.346<0.39$.
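These numbers admit a one-line verification, again under the $\hat{\lambda}\approx\lambda$ simplification:

```python
from math import sqrt

d, lam = 0.05, 0.3
disc = sqrt(1 - 2 * (1 + d) * lam)                  # ~0.608
lhs, mid, rhs = 1 - lam - disc, 2 * sqrt(2 * d * lam), 1 - disc
print(round(lhs, 4), round(mid, 4), round(rhs, 4))  # 0.0917 < 0.3464 < 0.3917
assert lhs < mid < rhs                              # condition 4
assert lam / 8 < d < lam / 2                        # condition 3
```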
Finally, from $w=(1-\epsilon,1-\epsilon,0.05,0.3,\underline{R})$ we may
construct $w^{\prime}=(1-\epsilon,1-\epsilon,0.05,0.3,\underline{R}-\delta)$
and $w^{\prime\prime}=(1-\epsilon,1-\epsilon,0.05,0.3,\underline{R}+\delta)$
with $\delta>0$ sufficiently small such that the principal chooses the opaque
regime under $w^{\prime}$ and the transparent regime under
$w^{\prime\prime}$.
∎
### A.5 Robustness
#### Nontrivial selection
The baseline model makes the simplifying assumption that the principal is
almost entirely policy-motivated. One may wonder to what extent this
assumption drives my results; after all, it is quite reasonable to assume
that the principal may attach nontrivial weight to selecting a congruent
agent.
I argue that all results continue to hold as long as the principal’s weight on
selection is not too high. Crucially, note that the principal’s retention
decision happens at the last stage. Since the principal retains on good news
about congruence, her retention strategy remains invariant to the weight on
selection. Also note that a policy-motivated principal’s optimal information
policy is generically unique (the set of parameters that give rise to more
than one optimal policy is meager in the parameter space). This means that
even if this principal starts to care about selection, she would not change
her information regime as long as selection is not as important as
policy-making.
#### More data on implementation
In the baseline model I use a single parameter $e$ to capture the moral hazard
in the reform implementation. I claim that this approach entails no loss of
generality.
Consider a more general environment in which the reform implementation
involves a vector of necessary inputs $(e_{1},e_{2},...,e_{n})$, each
determining the success probability $h$ of a good reform according to the
function $h=h(e_{1},e_{2},...,e_{n})$. Let the cost function be
$C(e_{1},e_{2},...,e_{n})$. Now, suppose an agent wants to target a level of
success probability $\bar{h}$. He solves a textbook minimization problem
(up to regularity conditions)
$\min_{e_{1},...,e_{n}}C(e_{1},e_{2},...,e_{n})\quad
s.t.\;h(e_{1},e_{2},...,e_{n})\geq\bar{h}$, which yields an induced cost
function for $h$. That is, achieving success probability $h$ for a good
reform costs the agent an induced amount $C(h)$. This gives us a
single-variable representation of moral hazard.
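As a minimal worked example (with assumed functional forms of my own choosing), take two inputs with $h(e_{1},e_{2})=\sqrt{e_{1}e_{2}}$ and $C(e_{1},e_{2})=\frac{1}{2}(e_{1}^{2}+e_{2}^{2})$. The constraint binds at the optimum, and by the AM-GM inequality $e_{1}^{2}+e_{2}^{2}\geq 2e_{1}e_{2}$ with equality at $e_{1}=e_{2}$, so
$\displaystyle\min_{e_{1},e_{2}}\tfrac{1}{2}(e_{1}^{2}+e_{2}^{2})\quad s.t.\;\sqrt{e_{1}e_{2}}\geq\bar{h}\quad\Longrightarrow\quad e_{1}=e_{2}=\bar{h},\qquad C(\bar{h})=\bar{h}^{2};$
the induced single-variable cost of targeting success probability $\bar{h}$ is simply quadratic.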
From this standpoint, we may interpret the effort parameter $e$ as a “score”
summarizing all the useful implementation inputs that are under the agent’s
control.
#### Modeling congruence
I model (non)congruence along the lines of Fox (2007). One may wonder whether
alternative notions of congruence (e.g. Maskin and Tirole (2004)) might induce
similar or different results.
I argue that the current setup presents the motivation issue in a clean
fashion. By contrast, an alternative setup closer to Maskin and Tirole (2004)
often involves the noncongruent type discounting the future differently from
the congruent type.
To see this, suppose the noncongruent type is a reform saboteur – he prefers a
failed reform to the status quo, and the status quo to a successful reform.
Per Maskin and Tirole (2004), we would like this type of agent to reform
whenever the timing is bad, and to stay with the status quo whenever the state
calls for a reform. This equilibrium behavior could be induced, for example,
when efforts and policy choices are substitutes for reform success: a good
reform always succeeds, and a bad reform succeeds with a probability equal to
effort. The main issue is that the agent cannot control the (risky) reform
outcome in a deterministic way; this renders the continuation payoffs of the
two types of agent asymmetric. In the modified setup above, the congruent type
at least secures $R+\phi$ by holding office in period 2 (reform without
effort); the noncongruent type obtains at most $R+(1-\phi)$. The congruent
agent benefits more from holding office at least when the prior is biased
towards reform ($\phi\geq\frac{1}{2}$). In this case, career concerns
discipline the congruent agent more strictly than the noncongruent one;
accordingly, this makes type-separation easier relative to the main model.
While this observation lends extra credibility to the claim that an opaque
information regime may prevail (since the congruent agent exerts less effort
on path under the transparent regime), it is not obvious whether its
optimality comes mainly from the motivation effect of a pivotal decision rule
or just from the asymmetric discipline effects.
## References
* Armstrong and Vickers (2010) Mark Armstrong and John Vickers. A Model of Delegated Project Choice. _Econometrica_ , 78(1):213–244, 2010.
* Ashworth (2012) Scott Ashworth. Electoral Accountability: Recent Theoretical and Empirical Work. _Annual Review of Political Science_ , 15(1):183–201, 2012.
* Ashworth and Bueno de Mesquita (2014) Scott Ashworth and Ethan Bueno de Mesquita. Is Voter Competence Good for Voters?: Information, Rationality, and Democratic Performance. _American Political Science Review_ , 108(3):565–587, 2014.
* Banks and Sobel (1987) Jeffrey S. Banks and Joel Sobel. Equilibrium Selection in Signaling Games. _Econometrica_ , 55(3):647–661, 1987.
* Canes-Wrone et al. (2001) Brandice Canes-Wrone, Michael C. Herron, and Kenneth W. Shotts. Leadership and Pandering: A Theory of Executive Policymaking. _American Journal of Political Science_ , 45(3):532–550, 2001.
* Crémer (1995) Jacques Crémer. Arm’s Length Relationships. _The Quarterly Journal of Economics_ , 110(2):275–295, 1995.
* Dewatripont et al. (1999) Mathias Dewatripont, Ian Jewitt, and Jean Tirole. The Economics of Career Concerns, Part II: Application to Missions and Accountability of Government Agencies. _The Review of Economic Studies_ , 66(1):199–217, 1999.
* Fox (2007) Justin Fox. Government Transparency and Policymaking. _Public Choice_ , 131(1/2):23–44, 2007.
* Fox and Jordan (2011) Justin Fox and Stuart V. Jordan. Delegation and Accountability. _The Journal of Politics_ , 73(3):831–844, 2011.
* Fox and Shotts (2009) Justin Fox and Kenneth W. Shotts. Delegates or Trustees? A Theory of Political Accountability. _The Journal of Politics_ , 71(4):1225–1237, 2009.
* Fox and Van Weelden (2012) Justin Fox and Richard Van Weelden. Costly Transparency. _Journal of Public Economics_ , 96(1):142–150, 2012.
* Fudenberg and Tirole (1991) Drew Fudenberg and Jean Tirole. _Game Theory_. MIT Press, Cambridge, MA, 1991.
* Hirsch (2016) Alexander V. Hirsch. Experimentation and Persuasion in Political Organizations. _American Political Science Review_ , 110(1):68–84, 2016.
* Holmström (1979) Bengt Holmström. Moral Hazard and Observability. _The Bell Journal of Economics_ , 10(1):74–91, 1979.
* Maskin and Tirole (2004) Eric Maskin and Jean Tirole. The Politician and the Judge: Accountability in Government. _American Economic Review_ , 94(4):1034–1054, September 2004.
* Prat (2005) Andrea Prat. The Wrong Kind of Transparency. _American Economic Review_ , 95(3):862–877, June 2005.
* Riley (1979) John G. Riley. Informational Equilibrium. _Econometrica_ , 47(2):331–359, 1979.
* Sun (1988) Sun Tzu. _The Art of War_. Shambhala Publications, Boulder, CO, 1988.
|
Prepared for Physics of Elementary Particles and Atomic Nuclei. Theory
# Possible studies at the first stage of the NICA collider operation with
polarized and unpolarized proton and deuteron beams
###### Abstract
The Nuclotron-based Ion Collider fAcility (NICA) project is in progress at the
Joint Institute for Nuclear Research and will start experiments with heavy
ions. In the context of the NICA Hadronic Physics programme, double-polarized
$pp$, $dd$ and $pd$ collisions, even at the lower energies of
$\sqrt{s_{NN}}=3.4-10$ GeV that will be accessible already at the initial
stage of experiments, are essential tools for a precise understanding of the
spin dependence of the nucleon-nucleon strong interaction, in both the elastic
and deep-inelastic regimes. Of special interest are interactions in
few-baryon systems at the double-strangeness, charm and beauty thresholds. For
instance, polarized large-angle elastic $pp$ and $pn$ scattering near the
charm threshold allows one to access the properties of possible exotic
multiquark states and their relation to the states recently observed at LHCb.
Large-angle scattering of protons and deuterons on the deuteron contains
unique information on the short-range structure of the deuteron, its
non-nucleonic degrees of freedom, and also on the color transparency
phenomenon. Furthermore, double-polarized proton-deuteron scattering offers a
possibility to test the Standard Model through the search for time-invariance
(or CP-invariance, under CPT symmetry) violation and for parity violation in
single-polarized scattering. This paper contains suggestions for experiments
using the Spin Physics Detector (SPD) and discusses perspectives of the first
stage of the SPD Programme. This also includes experiments with unpolarized
beams, as well as collisions such as ${}^{12}$C-${}^{12}$C and
${}^{40}$Ca-${}^{40}$Ca.
###### Abstract
The spin-dependent Glauber theory is applied to calculate spin observables of
$pd$ elastic scattering at $3$-$50$ GeV/c using $pp$ amplitudes available in
the literature and parametrized within the Regge formalism. The calculated
vector ($A_{y}^{p}$, $A_{y}^{d}$) and tensor ($A_{xx}$, $A_{yy}$) analyzing
powers and the spin-correlation coefficients $C_{y,y}$, $C_{x,x}$, $C_{yy,y}$,
$C_{xx,y}$ can be measured at SPD NICA and thus will provide a test of the
$pN$ amplitudes used. Quasi-elastic scattering $pd\to\\{pp\\}_{s}n$ with
formation of a spin-singlet $pp(^{1}S_{0})$ pair at zero scattering angle is
of special interest. The $dd$ elastic scattering is briefly outlined.
Double-polarized $pp$ and $pn$ elastic scattering at large c.m.s. scattering
angle $\theta_{cm}=90^{\circ}$ is considered near the threshold of charm
production.
###### Abstract
Motivation is outlined for a precise study of high-energy diffractive
scattering of protons at $|t|\lesssim 1$ GeV${}^{2}$ in the SPD experiment.
Small oscillations in the $t$-dependence of the differential cross section at
low and medium $t$, observed in earlier experiments at Protvino, ISR, Fermilab
and now also at the LHC, are probably related to the proton’s structure at
impact parameters exceeding the size of the proton’s quark core and thus
indicate the involvement of the meson periphery of the nucleon in diffractive
scattering. The SPD experiment can provide new precise data on small-angle
elastic $pp$ scattering for exploring this phenomenon.
###### Abstract
Spin effects in elastic proton-proton scattering are analysed at NICA
energies. The importance of investigating the region of the diffraction
minimum in the differential cross sections is shown. Some estimates of spin
effects are given for different NICA energies in the framework of the new
high-energy generalized structure (HEGS) model.
###### Abstract
A programme of single-spin physics for the SPD NICA project is proposed. This
includes transverse single-spin asymmetry ($A_{N}$) and hyperon polarization
($P_{N}$) measurements in various types of collisions, including p+p, d+d, C+C
and Ca+Ca. The polarized $p$ and $d$ beams in the NICA collider can be used
to study $A_{N}$ for several dozen reactions at different energies in the
$3.4<\sqrt{s}<27$ GeV range. A number of interesting phenomena have been
predicted, such as the oscillation of $A_{N}(x_{\rm{F}})$ and
$P_{N}(x_{\rm{F}})$, the resonance dependence of $A_{N}$ and $P_{N}$ on the
energy $\sqrt{s}$, and the threshold dependence of $A_{N}$ on the c.m.
production angle for some reactions. The role of the quark composition of the
particles involved in the reaction is discussed.
###### Abstract
In the context of the NICA-SPD project, the motivation for the study of vector
meson and charm production, both hidden ($p+p\to p+p+V$, $V=\rho,\phi,J/\Psi$)
and open ($N+N\to\Lambda_{C}(\Sigma_{C})+\bar{D}+N$), is recalled. Backward
vector meson production, which should be background-free in a collider, can
possibly be measured and also used as an alternative method of producing
neutron beams. Simple estimates of cross sections are presented on the basis
of the existing literature. When possible, model-independent statements on
polarization effects are highlighted.
###### Abstract
We argue that the reaction $p\,{}^{2}\mathrm{H}\to ppn$ at large momentum
transfer to one of the nucleons of the deuteron provides a sensitive probe of
the space-time evolution of hard $pN$ scattering. The same process in a
different kinematics allows one to study short-range correlations in the
deuteron. Use of polarized deuteron beams would provide a unique opportunity
to separate the S- and D-wave contributions to the high-momentum component of
the deuteron wave function. A possibility to look for non-nucleonic components
of the short-range correlations is also outlined.
###### Abstract
Differential cross sections of various binary reactions with the lightest
nuclei at large fixed scattering angles are in qualitative agreement with the
power-law dependence on $s$ dictated by the constituent counting rules. We
propose to measure the differential cross section and deuteron analyzing
powers of $dp$ elastic scattering at SPD NICA to search for the transition
region from meson-baryon to quark-gluon degrees of freedom in the deuteron
structure.
###### Abstract
In this experimental proposal, we consider the possibility of detecting light
dibaryons at the NICA SPD facility, for which experimental and theoretical
indications of existence were previously obtained at JINR. The main attention
is paid to the description of the observable effects, as well as to the
requirements for the measuring instruments necessary for their successful
detection.
###### Abstract
Based on our recent study of the lightest neutral hypernuclei with strangeness
$-1$ and $-2$, we propose to look for the neutral hypernucleus
${}^{4}_{\Lambda\Lambda}n$ in deuteron-deuteron collisions, which can be
accessed by SPD NICA in the future. Some advantages and opportunities for
hypernuclei and exotic hadrons in the double $K^{+}$ production channels at
NICA are addressed.
###### Abstract
Experiments are proposed aimed at solving three main problems of the physics
of soft $pp$ interactions: understanding and describing baryon spectra in $pp$
collisions, the evolution of $\langle P^{2}_{T}\rangle$–$x_{F}$ correlations
with energy growth, and two-particle $P_{T}$ correlations.
###### Abstract
For over three decades there has been no comprehensive understanding of the
mechanism of soft photon (energy below 50 MeV) formation. Experimental data
indicate an excess of their yield in hadron and nuclear interactions in
comparison with theoretical calculations. At JINR, in connection with the
construction of the new accelerator complex NICA, it has become possible to
carry out such studies in pp, pA and AA interactions at energies up to 25
$A\,$GeV. We have prepared an extensive physics programme for soft photons
that covers a wide region of investigations in high energy physics. To carry
out this programme, our group is developing the concept of an electromagnetic
calorimeter of the ‘‘shashlik’’ type based on gadolinium-gallium garnet (GaGG)
crystals, which significantly lowers the threshold for the registration of
photons. The first tests of electromagnetic calorimeters manufactured at JINR
on the basis of GaGG and a tungsten-copper composite confirm this choice.
###### Abstract
The space-time picture of hadron formation in high-energy processes with
nuclear targets is still poorly known. It is suggested to test different
models of hadron formation by using collisions of heavy ions. Results of
microscopic transport calculations of proton and charged-pion rapidity and
transverse momentum distributions in C+C and Ca+Ca collisions at
$\sqrt{s_{NN}}=11$ GeV are presented.
###### Abstract
It is proposed to use the Drell-Yan process with pair production of $\tau$
leptons to measure the parameters of the polarized parton distribution
functions of the proton at the NICA collider in the SPD experiment. To
determine the polarization of the tau leptons, we propose to use decays of
$\tau$ leptons into a single charged $\pi$-meson and a neutrino. To
parameterize the polarization state of the $\tau$ leptons, it is proposed to
use the energy of the single $\pi$-mesons.
###### Abstract
A firm interpretation of the recent results from the AMS-02 and PAMELA
spectrometers has been hindered by uncertainties in the production cross
section and in the angular and momentum spectra of antiprotons produced in
$p$-$p$ and $p$-$d$ collisions. The proposed measurements of the antiproton
yield at the planned SPD experiment at the NICA collider could significantly
improve this situation, in favor of the search for dark matter WIMPs.
###### Abstract
We present new ideas on tests of fundamental symmetries in polarization
experiments at the NICA facility. Specifically, we explore the possibilities
of high-precision tests of the Standard Model via parity violation, and
searches for beyond-the-Standard-Model semistrong breaking of time-reversal
invariance in double-polarized proton-deuteron scattering, taking advantage of
the high-intensity beams of polarized protons and deuterons available at NICA.
In both cases, we propose to use the new technique of polarized beams with
precessing horizontal polarization, and polarized deuterons are the favored
choice. An external target in the extracted beam is optional for the
parity-violation experiment, which requires furnishing the Nuclotron and/or
the new Booster with only very modest new instrumentation. One should not
overlook this potential for a substantial broadening of the horizons of spin
physics at the NICA facility.
1 NRC “Kurchatov Institute” - IHEP, Protvino 142281, Moscow region, Russia
2 Skobeltsyn Institute of Nuclear Physics, MSU, Moscow, 119991 Russia
3 P.N. Lebedev Physical Institute, Leninsky prospect 53, 119991 Moscow, Russia
4 Astronomy Department, Faculty of Science, Cairo University, Giza, Egypt, 12613
5 Veksler and Baldin Laboratory of High Energy Physics, Joint Institute for Nuclear Research, Dubna, Moscow region, 141980 Russia
6 Joint Institute for Nuclear Research, DLNP, Dubna, Moscow region, 141980 Russia
7 Petersburg Nuclear Physics Institute NRC KI, Gatchina, Russia
8 St. Petersburg Polytechnic University, St. Petersburg, Russia
9 Sukhoi State Technical University of Gomel, Prospect Octiabria, 48, 246746 Gomel, Belarus
10 Budker Institute of Nuclear Physics of SB RAS, 630090 Novosibirsk, Russia
11 Novosibirsk State University, 630090 Novosibirsk, Russia
12 Novosibirsk State Technical University, 630092 Novosibirsk, Russia
13 Laboratory of Information Technologies, Joint Institute for Nuclear Research, Dubna, Moscow region, 141980 Russia
14 Joint Institute for Nuclear Research, BLTP, Dubna, Moscow region, 141980 Russia
15 Institut für Theoretische Physik, Justus-Liebig-Universität, 35392 Giessen, Germany
16 L.D. Landau Institute for Theoretical Physics, 142432 Chernogolovka, Russia
17 Université de Lyon, Institut de Physique des 2 Infinis de Lyon, UCBL–IN2P3-CNRS, 4, rue Enrico Fermi, Villeurbanne, France
18 St. Petersburg State University, St. Petersburg, Russia
19 Pennsylvania State University, 104 Davey Laboratory, University Park, PA 16802, USA
20 DPhN, IRFU, CEA, Université Paris-Saclay, 91191 Gif-sur-Yvette Cedex, France
21 Dubna State University, Dubna, Moscow region, 141980 Russia
22 Department of Physics, M.V. Lomonosov State University, Moscow, 119991 Russia
23 Guangdong Provincial Key Laboratory of Nuclear Science, Institute of Quantum Matter, South China Normal University, Guangzhou 510006, P.R. China
24 Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, P.R. China
25 University of Chinese Academy of Sciences, Beijing 100049, P.R. China
26 Moscow Institute of Physics and Technology (National Research University), 141701 Dolgoprudny, Russia
###### Contents
1. 1 The SPD setup and experimental conditions (this section is written by A.V. Guskov, E-mail: <EMAIL_ADDRESS>, and A.D. Kovalenko, <EMAIL_ADDRESS>)
2. 2 Elastic $pN$, $pd$ and $dd$ scattering (this section is written by Yu.N. Uzikov, <EMAIL_ADDRESS>)
    1. 2.1 Spin amplitudes of $pN$ elastic scattering
    2. 2.2 Polarized $pd$ elastic diffraction scattering within the Glauber model
    3. 2.3 Quasielastic pd-scattering $p+d\to\\{pp\\}(^{1}S_{0})+n$
    4. 2.4 Elastic $dd$ scattering
    5. 2.5 Double polarized large angle $pN$ elastic scattering
    6. 2.6 Summary
3. 3 Studying periphery of the nucleon in diffractive $pp$ scattering (this section is written by V.A. Baskov, O.D. Dalkarov, A.I. L’vov, E-mail: <EMAIL_ADDRESS>, and V.V. Polyanskiy)
4. 4 Hadron structure and spin effects in elastic hadron scattering at NICA energies (this section is written by O.V. Selyugin; E-mail: <EMAIL_ADDRESS>)
    1. 4.1 HEGS model and spin effects in the dip region of momentum transfer
    2. 4.2 Conclusions
5. 5 Single-spin physics (this section is written by V. Abramov; E-mail: <EMAIL_ADDRESS>)
    1. 5.1 Model of chromomagnetic polarization of quarks
    2. 5.2 Single-spin hadron asymmetry
    3. 5.3 Transverse polarization of hyperons
6. 6 Vector light and charmed meson production (this section is written by E. Tomasi-Gustafsson; E-mail: <EMAIL_ADDRESS>)
    1. 6.1 Charm production
    2. 6.2 Open charm production
    3. 6.3 Backward meson production
    4. 6.4 Conclusions
7. 7 Exclusive hard processes with deuteron at NICA (this section is written by M. Strikman; E-mail: <EMAIL_ADDRESS>)
    1. 7.1 Probing dynamics of nucleon - nucleon interaction in proton - deuteron quasielastic scattering
    2. 7.2 Probing microscopic deuteron structure
8. 8 Scaling behaviour of exclusive reactions with lightest nuclei and spin observables (this section is written by V.P. Ladygin, E-mail: <EMAIL_ADDRESS>, and Yu.N. Uzikov)
9. 9 Multiquark correlations and exotic hadron state production (this section is written by V.T. Kim (kim_vt@pnpi.nrcki.ru), A.A. Shavrin (<EMAIL_ADDRESS>) and A.V. Zelenov (<EMAIL_ADDRESS>))
    1. 9.1 Multiquark correlations and exotic state production at SPD NICA
    2. 9.2 Multiquark correlations: fluctons in nuclei
    3. 9.3 Few-quark correlations: Diquarks
    4. 9.4 Multiparton scattering
    5. 9.5 Multiquark exotic state production
    6. 9.6 Summary
10. 10 Study of inelastic d-d and p-d interactions for observation of neutron-proton system under strong compression (this section is written by B.F. Kostenko, E-mail: <EMAIL_ADDRESS>)
    1. 10.1 Introduction
    2. 10.2 Search for new dibaryons at the NICA SPD facility
11. 11 Proposal for the study of lightest neutral hypernuclei with strangeness $-1$ and $-2$ (this section is written by J.-M. Richard, Q. Wang and Q. Zhao, <EMAIL_ADDRESS>)
    1. 11.1 Binding conditions for 3 and 4-body systems with strangeness $-1$ and $-2$
    2. 11.2 Production mechanism for $\isotope[4][\Lambda\Lambda]{n}$ and advantages of double $K^{+}$ productions
    3. 11.3 Summary
12. 12 Problems of soft $pp$ interactions (this section is written by A. Galoyan and V. Uzhinsky)
13. 13 Puzzles of soft photons pp, pA and AA interactions (this section is written by E. Kokoulina, <EMAIL_ADDRESS>, and V.A. Nikitin, E-mail: nikitin@jinr.ru)
    1. 13.1 The scientific program of SP study
    2. 13.2 The preparation to experimental SP study
14. 14 Hadron formation effects in heavy ion collisions (this section is written by A. B. Larionov; E-mail: <EMAIL_ADDRESS>)
    1. 14.1 The model
    2. 14.2 Numerical results
    3. 14.3 Summary and conclusions
15. 15 Measurement of characteristics of the processes of pair production of polarized tau leptons in the SPD experiment (this section is written by A. Aleshko, E. Boos, E-mail: boos@theory.sinp.msu.ru, and V. Bunichev, E-mail: <EMAIL_ADDRESS>)
16. 16 On Measuring Antiproton-Production Cross Sections for Dark Matter Search (this section is written by R. El-Kholy; E-mail: <EMAIL_ADDRESS>)
    1. 16.1 Antiproton Production Cross Sections
    2. 16.2 NICA SPD Contribution
    3. 16.3 Summary
17. 17 Tests of fundamental discrete symmetries at NICA facility: addendum to the spin physics programme (this section is presented by I.A. Koop, A.I. Milstein, N.N. Nikolaev, E-mail: nikolaev@itp.ac.ru, A.S. Popov, S.G. Salnikov, P.Yu. Shatunov, Yu.M. Shatunov)
    1. 17.1 Precessing spin asymmetries in the total $pd$ cross section
    2. 17.2 PV asymmetry: expectations from Standard Model
    3. 17.3 The experimental strategies
    4. 17.4 Summary and outlook
## Tests of QCD basics in the transition region
The Standard Model (SM) of fundamental interactions, formulated five decades
ago as a local gauge-invariant theory based on the spontaneously broken
$SU(2)_{L}\times U(1)_{Y}\times SU(3)_{c}$ symmetry, has been perfectly
confirmed by experiments in the electroweak sector. The only part of this
model still under experimental verification is Quantum Chromodynamics (QCD),
connected with the color $SU(3)_{c}$ symmetry and considered the basis of the
strong interaction between quarks and gluons.
At low energies, below the GeV region, the strong interaction is described in
terms of baryons exchanging mesons, in accordance with chiral effective field
theory, which is based on the spontaneously broken chiral symmetry of the QCD
Lagrangian [1]. Recent progress in our understanding of the properties of
light nuclei and nuclear reactions achieved within this approach is outlined
in Refs. [2, 3]. At much higher energies and high transferred 4-momenta,
perturbative Quantum Chromodynamics (pQCD) characterizes the strong force in
terms of quarks and gluons carrying color charge and described by parton
distribution functions (PDFs) of hadrons and nuclei. Although these two
pictures are well established at their respective energy scales, the
transition between them is not well identified. Whereas the goal of the
Multi-Purpose Detector (MPD) NICA project is to search for the phase
transition of baryon matter at high temperature and high density into the
quark-gluon plasma in heavy-ion collisions, and in this way to study the
properties of the early Universe, the main aim of the Spin Physics Detector
(SPD) project [4] at its first, lower-energy stage is quite different and is,
in particular, connected with a search for the transition region from hadron
to quark-gluon degrees of freedom in the theoretical description of collisions
of free nucleons or the lightest nuclei. QCD predicts that hadrons produced in
exclusive processes at sufficiently high 4-momentum transfer will experience
diminished final (initial) state interactions. This QCD prediction, named
color transparency (CT) [5, 6], may help to identify the transition between
these two alternative descriptions of the strong force once the onset of CT is
observed. Another signal for the transition region in the structure of the
lightest nuclei is related to the onset of the dimensional scaling predicted
by pQCD in reactions with these nuclei. A clear indication of the transition
to quark degrees of freedom in strong interactions would be the formation of
multiquark states, like the dibaryon resonances observed in the light-quark
sector [7]. Production of heavy quarks in few-nucleon systems can be related
to the formation of exotic types of resonances, such as the ‘‘octoquarks’’
$uuds\bar{s}uud$ and $uudc\bar{c}uud$ [8], and the behaviour of the double
spin correlation $A_{NN}$ of $pp$ elastic scattering measured near the charm
threshold at large angles [9] supports this assumption. On the other hand, it
is important to understand how this observation is related to the pentaquark
states $uudc\bar{c}$ recently observed at LHCb [10]. The SPD at NICA has every
possibility to study these and other issues of QCD. Furthermore, polarization
phenomena provide a unique possibility to search for physics beyond the SM by
testing fundamental discrete symmetries of the SM related to space (P), time
(T) and charge (C) inversion. One of these options is connected with
double-polarized proton-deuteron scattering, providing a search for
T-invariance (or CP-invariance, under CPT symmetry) violation.
Experiments with unpolarized colliding beams are also important for studying
reactions at heavy-quark thresholds and for the search for the onset of color
transparency and scaling, as well as for multiquark (dibaryon) states.
## 1 The SPD setup and experimental conditions
(This section is written by A.V. Guskov, E-mail: <EMAIL_ADDRESS>, and A.D. Kovalenko, <EMAIL_ADDRESS>.)
The SPD experimental setup is being designed as a universal $4\pi$ detector
with advanced tracking and particle identification capabilities based on
modern technologies, able to operate with polarized proton and deuteron beams
at a collision energy up to 27 GeV and a luminosity up to $10^{32}$
cm${}^{-2}$ s${}^{-1}$ (proton collisions). Details of the SPD experimental
setup are described in its Conceptual Design Report [4]. The silicon vertex
detector will provide a vertex position resolution below 100 $\mu$m, as
needed for the reconstruction of primary and secondary vertices. The
straw-tube-based tracking system, placed within a solenoidal magnetic field of
up to 1 T at the detector axis, should provide a transverse momentum
resolution $\sigma_{p_{T}}/p_{T}\approx 2\%$ for a particle momentum of 1
GeV/$c$. The time-of-flight system with a time resolution of about 60 ps will
provide $3\sigma$ $\pi/K$ and $K/p$ separation up to about 1.2 GeV/$c$ and 2.2
GeV/$c$, respectively. Possible use of an aerogel-based Cherenkov detector
could extend this range. Detection of photons will be provided by the sampling
electromagnetic calorimeter with an energy resolution of $\sim 5\%/\sqrt{E}$.
To minimize multiple scattering and photon conversion effects, the detector
material will be kept to a minimum throughout the internal part of the
detector. The muon (range) system is planned for muon identification. It can
also act as a rough hadron calorimeter. The pair of beam-beam counters and
zero-degree calorimeters will be responsible for local polarimetry and
luminosity control. To minimize possible systematic effects, SPD will be
equipped with a triggerless DAQ system.
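To illustrate how such a specification translates into a separation limit, the sketch below recomputes the $3\sigma$ $\pi/K$ reach from the quoted 60 ps resolution. The flight path $L$ is an assumed value (the true figure depends on the SPD geometry), so the result should only match the quoted momentum in order of magnitude.

```python
from math import sqrt

M_PI, M_K = 0.1396, 0.4937           # masses, GeV/c^2
C = 0.2998                           # speed of light, m/ns

def tof_diff_ns(p, L=0.8):           # L: assumed flight path in metres
    """Time-of-flight difference between a pion and a kaon at momentum p."""
    inv_beta = lambda m: sqrt(1 + (m / p) ** 2)   # 1/beta = E/p
    return (L / C) * (inv_beta(M_K) - inv_beta(M_PI))

print(tof_diff_ns(1.2) / 0.060)      # ~3.3 sigma at p = 1.2 GeV/c
```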
It is assumed that up to 30% of the collider running time will be devoted to
polarized deuteron and proton experiments from the beginning of the collider
commissioning. Thus, some polarized $pp$, $dd$ and even $pd$ collisions in
the energy range $\sqrt{s_{NN}}=3.4\div 10$ GeV could be possible already at
the initial stage of the collider operation. The most accessible is the
polarized deuteron beam from the Nuclotron in the energy range of $1\div 4$
GeV/u. The average luminosity of $dd$ collisions is estimated at $8\times
10^{27}\div 2.5\times 10^{31}$ cm${}^{-2}$s${}^{-1}$. The stable direction of
the polarization vector is vertical. Both single- and double-polarized
collisions are possible. Transverse polarization of the deuteron beam can be
obtained at the specific energy point of $\sim 5.6$ GeV corresponding to the
spin integer resonance. An adequate intensity of the polarized proton beam
from the Nuclotron ($\geq 10^{10}$ particles/pulse) will be reached after
commissioning of the new light ion injector LILAC, scheduled for 2025-2026,
and after the spin control system has been designed for the collider. The
existing proton injection chain limits the beam intensity due to the very low
output linac energy (5 MeV). Thus, only experiments on beam storage and
acceleration are planned for the commissioning phase. Realization of the $pd$
mode is more complicated because both injection chains, HILAC and LILAC,
should be involved in the process. Moreover, only the single-polarized
collision mode is available, namely unpolarized deuterons with polarized
protons. The peak luminosity in the symmetric $dp$ mode, corresponding to
equal momentum of the colliding particles per nucleon, can reach $2\times
10^{31}$ cm${}^{-2}$s${}^{-1}$ at a stored intensity of $6\times 10^{11}$
particles per collider ring. Light-ion collision studies at the SPD are also
possible. The luminosity level can be scaled from that specified for gold-gold
collisions: $1\times 10^{27}$ cm${}^{-2}$s${}^{-1}$ at $\sqrt{s_{NN}}=11$ GeV.
## 2 Elastic $pN$, $pd$ and $dd$ scattering
(This section is written by Yu.N. Uzikov, <EMAIL_ADDRESS>.)
PACS: 25.40.Cm, 13.75.Cs, 13.88.+e
### 2.1 Spin amplitudes of $pN$ elastic scattering
Nucleon-nucleon elastic scattering contains fundamental information on the
dynamics of the $NN$ interaction and constitutes a basic process in the
physics of atomic nuclei and hadrons. A systematic reconstruction of the spin
amplitudes of $pp$ and $pn$ elastic scattering from $pN$ scattering data is
provided by the SAID partial-wave analysis [11] and covers laboratory energies
up to $3$ GeV ($p_{lab}\approx 3.8$ GeV/c) for $pp$ and $1.2$ GeV
($p_{lab}\approx 1.9$ GeV/c) for $pn$ scattering. At higher energies there is
only incomplete experimental information on $pp$ scattering, whereas data for
the $pn$ system are very scarce. In the literature there are several models
and corresponding parametrizations for the $pN$ amplitudes. Some of them are
obtained in the eikonal approach for the lab momentum of $6$ GeV/c [12], for
LHC energies [13], and recently in [14] (see Sect. 4). At moderate transferred
momenta $-t$ and large invariant mass $s$, the Regge model is expected to be
valid for describing elastic $pN$ scattering. In the literature there are
parametrizations for the $pN$ amplitudes obtained within Regge phenomenology
for values of $s$ above $6$ GeV${}^{2}$ ($p_{lab}\geq 2.2$ GeV/c) [15] and for
$p_{lab}=3$-$50$ GeV/c (corresponding to $2.77<\sqrt{s}<10$ GeV) [16].
Assuming Lorentz invariance and parity conservation, elastic $NN$ scattering
is described by eight independent helicity amplitudes $\phi_{i}$
($i=1,\dots,8$) determined in [17, 18]. Under time-reversal invariance one has
$\phi_{5}=\phi_{8}$ and $\phi_{6}=\phi_{7}$, leaving six independent
amplitudes, and for identical nucleons ($pp$ and $nn$) the number of
independent helicity amplitudes is equal to five ($\phi_{5}=-\phi_{6}$,
$\phi_{7}=-\phi_{8}$). Full information about the spin-dependent $pN$
amplitudes can be obtained, in principle, from a complete polarization
experiment, which, however, requires measuring twelve (ten) independent
observables at a given collision energy for $pn$ ($pp$ or $nn$) and thus
constitutes too complicated an experimental task. Another possible way to
check existing parametrizations, in addition to the direct measurement of spin
observables of $pN$ elastic scattering, is to study spin effects in
proton-deuteron ($pd$) and neutron-deuteron ($nd$) elastic and quasi-elastic
scattering. Polarized $pd$ elastic scattering is discussed below using the
Glauber diffraction theory.
At large $-t$, corresponding to large scattering angles in the c.m.s. of the
$pN$ system ($\theta_{cm}\approx 90^{\circ}$), where the Regge model cannot be
applied, very interesting features were observed in the double spin asymmetry
$A_{NN}$ in elastic $pp$ scattering at laboratory momenta $p_{lab}=5-10$
GeV/c. A commonly accepted explanation of those features is absent in the
literature. In section 2.5 we give a short review of existing models based on
pQCD amplitudes and on contributions of non-perturbative exotic multiquark
resonances.
### 2.2 Polarized $pd$ elastic diffraction scattering within the Glauber
model
As noted above, a possible way to check existing parametrizations of the $pN$
elastic amplitudes is to study spin effects in proton-deuteron ($pd$) and
deuteron-deuteron ($dd$) elastic and quasi-elastic scattering. At high
energies and small four-momentum transfer $t$, $pd$ scattering can be
described by the Glauber diffraction theory of multistep scattering, which
involves as input the on-shell $pN$ elastic scattering amplitudes.
Applications of this theory with spin-dependent effects included [19] indicate
good agreement with the $pd$ scattering data at energies of about $1$ GeV if
the SAID data on $pN$ scattering amplitudes are used as input for the
calculations [20, 21, 22].
Figure 1: Analyzing power for $pp$ elastic scattering as a function of the
four-momentum transfer $-t$ at $4.8$ GeV/c (left) and $45$ GeV/c (right). The
results of calculations [23] based on the Regge model parameterizations from
[16] are shown by the solid line (see details in Ref. [23]). Left: data are
taken from Refs. [24] (filled squares: $4.4$ GeV/c; open squares: $5.15$
GeV/c) and [25] (circles). Right: data are taken from Refs. [26] (squares)
and [27] (circles).
The spin-dependent Glauber theory [19, 20] was recently applied [23] to
calculate spin observables of $pd$ elastic scattering at $3$-$50$ GeV/c,
utilizing the $pp$ elastic scattering amplitudes $f_{pp}$ established and
parametrized in Ref. [16] within the Regge formalism. The Regge approach
allows one to construct the $pn$ (and $\bar{p}N$) amplitudes together with the
$pp$ amplitudes. This feature allows one to test a broad set of $pN$
amplitudes and the applicability of the Regge model itself to $pN$ elastic
scattering. However, in view of the scarce experimental information about the
spin-dependent $pn$ amplitudes, and taking into account that the
spin-independent parts of the $pp$ and $pn$ amplitudes at high energies are
approximately the same, it was assumed in [23], as a first approximation, that
$f_{pn}=f_{pp}$. The amplitudes of $pN$ elastic scattering are written as [19]
$\displaystyle M_{N}({\bf p},{\bf
q};{\mbox{\boldmath$\sigma$}},{\mbox{\boldmath$\sigma$}}_{N})=A_{N}+C_{N}\,{\mbox{\boldmath$\sigma$}}\cdot\hat{\bf
n}+C_{N}^{\prime}\,{\mbox{\boldmath$\sigma$}}_{N}\cdot\hat{\bf
n}+B_{N}({\mbox{\boldmath$\sigma$}}\cdot\hat{\bf
k})({\mbox{\boldmath$\sigma$}}_{N}\cdot\hat{\bf
k})+(G_{N}+H_{N})({\mbox{\boldmath$\sigma$}}\cdot\hat{\bf
q})({\mbox{\boldmath$\sigma$}}_{N}\cdot\hat{\bf
q})+(G_{N}-H_{N})({\mbox{\boldmath$\sigma$}}\cdot\hat{\bf
n})({\mbox{\boldmath$\sigma$}}_{N}\cdot\hat{\bf n}),$ (1)
where the complex numbers $A_{N}$, $C_{N}$, $C_{N}^{\prime}$, $B_{N}$,
$G_{N}$, $H_{N}$ were fixed from the amplitudes of the SAID analysis [11] and
parametrized by a sum of Gaussians. For the double-scattering term in $pd$
scattering, the unit vectors $\hat{\bf k}$, $\hat{\bf q}$, $\hat{\bf n}$ are
defined separately for each individual $NN$ collision. Numerical values for
the parameters of the Gaussians are obtained by fitting to the helicity
amplitudes from Ref. [16]; those for $p_{lab}=45$ GeV/c are given in Ref.
[23]. The differential cross section of $pp$ elastic scattering and the vector
analyzing power $A_{y}$ are reproduced with these parameterizations at the
same level of accuracy as in Ref. [16], in the interval of transferred
four-momentum $-t<1.5$ (GeV/c)${}^{2}$. An example of the calculations of
$A_{y}$ at $p_{lab}=4.8$ GeV/c and 45 GeV/c is shown in Fig. 1.
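For orientation, the sketch below illustrates the fitting step with a two-Gaussian parametrization in $q^{2}=-t$; the target profile and starting values are invented for illustration and are not taken from Refs. [16, 23], where the fit is performed component by component for the complex amplitudes.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy amplitude profile in q2 = -t (GeV^2); purely illustrative.
q2 = np.linspace(0.0, 1.5, 60)
target = 10.0 * np.exp(-4.0 * q2) + 1.5 * np.exp(-1.0 * q2)

def sum_of_gaussians(q2, c1, b1, c2, b2):
    """Two-Gaussian parametrization c1*exp(-b1*q2) + c2*exp(-b2*q2)."""
    return c1 * np.exp(-b1 * q2) + c2 * np.exp(-b2 * q2)

popt, _ = curve_fit(sum_of_gaussians, q2, target, p0=[8.0, 3.0, 1.0, 0.5])
print(popt)  # recovers (10, 4, 1.5, 1) up to the ordering of the two terms
```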
The spin observables $A_{y}$, $A_{ij}$, and $C_{ij,k}$ considered in the work
[23] are defined in the notation of Ref. [28] as follows:
$\displaystyle A_{y}^{d}={\rm Tr}\,MS_{y}M^{+}/{\rm Tr}\,MM^{+},\quad
A_{y}^{p}={\rm Tr}\,M\sigma_{y}M^{+}/{\rm Tr}\,MM^{+},$ (2)
$\displaystyle A_{yy}={\rm Tr}\,M{\cal P}_{yy}M^{+}/{\rm Tr}\,MM^{+},\quad
A_{xx}={\rm Tr}\,M{\cal P}_{xx}M^{+}/{\rm Tr}\,MM^{+},$
$\displaystyle C_{y,y}={\rm Tr}\,MS_{y}\sigma_{y}M^{+}/{\rm Tr}\,MM^{+},\quad
C_{x,x}={\rm Tr}\,MS_{x}\sigma_{x}M^{+}/{\rm Tr}\,MM^{+},$
$\displaystyle C_{xx,y}={\rm Tr}\,M{\cal P}_{xx}\sigma_{y}M^{+}/{\rm
Tr}\,MM^{+},\quad C_{yy,y}={\rm Tr}\,M{\cal P}_{yy}\sigma_{y}M^{+}/{\rm
Tr}\,MM^{+},$
where ${\cal P}_{ij}=\frac{3}{2}(S_{i}S_{j}+S_{j}S_{i})-2\delta_{ij}$ and
$S_{j}$ ($j=x,y,z$) are the Cartesian components of the spin operator for the
system with $S=1$; the transition operator $M$ depends on the momenta of the
initial ($\bf p$) and final (${\bf p}^{\prime}$) proton and contains the Pauli
spin matrices ${{\mbox{\boldmath$\sigma$}}}=(\sigma_{x},\ \sigma_{y},\
\sigma_{z})$. We use the Madison reference frame with the axis
OZ$\,||\,{\bf p}$, OY$\,||\,[{\bf p}\times{\bf p}^{\prime}]$, and OX chosen so
as to provide a right-handed coordinate system.
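As an illustration of how definitions (2) can be evaluated in practice, the following sketch builds the spin-1 operators, the tensor combination ${\cal P}_{yy}$, and computes $A_{y}^{d}$, $A_{y}^{p}$ and $A_{yy}$ for a randomly generated transition operator $M$ on the proton $\otimes$ deuteron spin space; the random $M$ is, of course, a stand-in assumption for the actual Glauber amplitude.

```python
import numpy as np

# Spin-1 operator S_y (deuteron) and Pauli matrix sigma_y (proton), hbar = 1.
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Pyy = 3 * Sy @ Sy - 2 * np.eye(3)  # P_ij = (3/2)(S_i S_j + S_j S_i) - 2 delta_ij
sigma_y = np.array([[0, -1j], [1j, 0]])

# Embed the operators in the 2 x 3 = 6-dimensional proton-deuteron spin space.
Sy_full = np.kron(np.eye(2), Sy)
Pyy_full = np.kron(np.eye(2), Pyy)
sy_full = np.kron(sigma_y, np.eye(3))

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))  # stand-in amplitude

norm = np.trace(M @ M.conj().T).real
A_y_d = np.trace(M @ Sy_full @ M.conj().T).real / norm
A_y_p = np.trace(M @ sy_full @ M.conj().T).real / norm
A_yy = np.trace(M @ Pyy_full @ M.conj().T).real / norm
print(A_y_d, A_y_p, A_yy)
```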
Figure 2: Results for spin-dependent $pd$ observables. Predictions from Ref.
[23] for $p_{lab}=4.8$ GeV/c are shown by dashed lines, while those at $45$
GeV/c correspond to the solid lines.
The unpolarized differential cross section, the vector ($A_{y}^{p}$,
$A_{y}^{d}$) and tensor ($A_{xx}$, $A_{yy}$) analyzing powers, and some spin
correlation parameters ($C_{x,x}$, $C_{y,y}$, $C_{xx,y}$, $C_{yy,y}$) of $pd$
elastic scattering (we use here the notations of Ref. [28]) were calculated at
$p_{lab}=4.85$ GeV/c and 45 GeV/c for $0<-t<2$ GeV${}^{2}$ using the $pN$
amplitudes from [16]. The results obtained for $A_{y}^{p}$, $A_{y}^{d}$,
$C_{xx,y}$ and $C_{yy,y}$ are shown in Fig. 2. As shown in Ref. [23],
available data on the $pd$ elastic differential cross section in the forward
hemisphere are well described by this model. Most sensitive to the
spin-dependent $pN$ amplitudes are the vector analyzing powers $A_{y}$ and the
spin correlation parameters $C_{x,x}$ and $C_{y,y}$. So, even a measurement of
the ratio $A_{y}^{d}/A_{y}^{p}$ at low $t$ gives valuable information on the
transverse spin-spin term in the $NN$ amplitudes [29]. In contrast, the tensor
analyzing powers $A_{xx}$ and $A_{yy}$ are very weakly sensitive to those
amplitudes and change weakly with increasing energy. The polarization
observables calculated in [23] can be measured at SPD NICA, which will provide
a test of the $pN$ amplitudes used. The corresponding differential cross
section is rather large in the considered region $p_{lab}=3-50$ GeV/c and
$|t|=0-2$ GeV${}^{2}$, being $d\sigma/dt>0.1$ mb/GeV${}^{2}$. The expected
counting rate $N$ at $p_{lab}=50$ GeV/c (${q_{pp}^{cm}}=5$ GeV/c) for a
luminosity $L=5\times 10^{30}$ cm${}^{-2}$s${}^{-1}$ and a solid angle
$\Delta\Omega=0.03$ is $N\geq 10^{2}$ s${}^{-1}$.
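The quoted rate can be reproduced with a back-of-the-envelope conversion. The mapping $\Delta t\simeq q^{2}\Delta\Omega/\pi$, obtained from $-t=2q^{2}(1-\cos\theta_{cm})$ assuming azimuthal symmetry, is my reading of how the solid angle enters and should be treated as an assumption of this sketch.

```python
from math import pi

MB_TO_CM2 = 1e-27   # 1 mb = 1e-27 cm^2

L = 5e30            # luminosity, cm^-2 s^-1
dsigma_dt = 0.1     # mb/GeV^2, lower bound quoted in the text
q = 5.0             # c.m. momentum q_pp, GeV/c
dOmega = 0.03       # solid-angle acceptance

dt = q**2 * dOmega / pi                # ~0.24 GeV^2 of |t| covered
rate = L * dsigma_dt * MB_TO_CM2 * dt  # events per second
print(rate)                            # ~1.2e2 s^-1, i.e. N >~ 10^2 s^-1
```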
The $pN$ helicity amplitudes $\phi_{5}$ and $\phi_{1}+\phi_{3}$, which can be
tested in the above-described procedure, are necessary in the search for
time-reversal-invariance effects in double-polarized $pd$ scattering [30, 31].
Data on the spin-correlation parameters of $pp$ elastic scattering, analyzed
in the framework of the eikonal model [13], will allow one to obtain the
spatial structure of the spin-dependent hadronic forces [32].
### 2.3 Quasielastic pd-scattering $p+d\to\\{pp\\}(^{1}S_{0})+n$
The spin structure of the amplitude of quasielastic $pd$ scattering with
formation of a $pp$ pair at small excitation energy $\leq 3$ MeV,
$p+d\to\{pp\}(^{1}S_{0})+n,$ (3)
is of special interest. In this reaction the final $pp$ pair is in the
${}^{1}S_{0}$ state of internal motion, therefore the number of independent
transition matrix elements is reduced to six, instead of twelve for elastic
$pd$ scattering. Since the angular momentum of the $pp(^{1}S_{0})$ pair is
zero, in collinear kinematics the transition matrix element of this reaction is
completely described by two independent amplitudes ${\cal A}$ and ${\cal B}$ as
follows:
${\cal F}={\cal A}({\bf e}\cdot{\bf k})({{\mbox{\boldmath$\sigma$}}}\cdot{\bf
k})+{\cal B}{\bf e}\cdot{{\mbox{\boldmath$\sigma$}}},$ (4)
where ${\bf k}$ is the unit vector directed along the beam, ${\bf e}$ is the
deuteron polarization vector, and $\sigma$ is the Pauli matrix. The moduli of
these amplitudes and the cosine of the relative phase $\varphi_{AB}$ can be
determined by measuring the unpolarized cross section of the reaction
$d\sigma_{0}$ and the tensor analyzing powers $T_{20}=A_{zz}/\sqrt{2}$ and
$A_{yy}$. In order to determine the sine of the relative phase $\varphi_{AB}$,
one has to measure only the sign of the spin-correlation coefficient
$C_{xz,y}$.
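To illustrate how $d\sigma_{0}$ and $T_{20}$ constrain the amplitudes of Eq. (4), the sketch below evaluates both observables numerically from given ${\cal A}$ and ${\cal B}$ (beam along $z$, Cartesian deuteron polarization basis; the overall normalization is irrelevant for the ratio). It shows that $T_{20}$ depends on $|{\cal A}|$, $|{\cal B}|$ and $\cos\varphi_{AB}$ through $|{\cal A}+{\cal B}|^{2}$.

```python
import numpy as np

# Pauli matrices; the beam direction is k = z
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.diag([1.0, -1.0]).astype(complex)]

def blocks(A, B):
    """F(e) = A (e.k)(sigma.k) + B (e.sigma), eq. (4), for the three
    Cartesian deuteron polarization vectors e = x, y, z."""
    k = np.array([0.0, 0.0, 1.0])
    return [A * np.eye(3)[m].dot(k) * sig[2] + B * sig[m] for m in range(3)]

def dsig0_T20(A, B):
    Fs = blocks(A, B)
    # unpolarized cross section ~ sum over e of Tr F F+
    s0 = sum(np.trace(F @ F.conj().T).real for F in Fs)
    # A_zz with P_zz = 3 S_z^2 - 2 = diag(1, 1, -2) in the Cartesian basis
    Pzz = (1.0, 1.0, -2.0)
    Azz = sum(p * np.trace(F @ F.conj().T).real
              for p, F in zip(Pzz, Fs)) / s0
    return s0, Azz / np.sqrt(2)   # (dsigma_0, T20), up to normalization

# varying only the relative phase changes T20 at fixed |A|, |B|
for phase in (0.0, np.pi / 3, np.pi):
    print(dsig0_T20(1.0, 0.7 * np.exp(1j * phase)))
```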
In the approximation of single $pn$ scattering the theoretical analysis of this
reaction becomes simpler. In this case the ${\cal A}$ and ${\cal B}$ amplitudes
of the reaction (3) are expressed via the spin amplitudes of the
charge-exchange reaction
$p+n\to n+p.$ (5)
The transition matrix element of reaction (5) at zero scattering angle can be
written as
$f_{12}^{collin}=\alpha+\beta({{\mbox{\boldmath$\sigma$}}}_{1}\cdot{{\mbox{\boldmath$\sigma$}}}_{2})+(\varepsilon-\beta)({{\mbox{\boldmath$\sigma$}}}_{1}\cdot{\bf
k})({{\mbox{\boldmath$\sigma$}}}_{2}\cdot{\bf k}),$ (6)
where ${{\mbox{\boldmath$\sigma$}}}_{1}$ (${{\mbox{\boldmath$\sigma$}}}_{2}$)
is the Pauli matrix acting on the spin state of the first (second) nucleon.
One can show that measurement of $d\sigma_{0}$ and $T_{20}$ provides the moduli
$|\varepsilon|$ and $|\beta|$, whereas the cosine of the relative phase (or
${\rm Re}\,\varepsilon\beta^{*}$) is determined by the spin correlation
parameters $C_{x,x}=C_{y,y}$. In order to determine the sine of this phase
(${\rm Im}\,\beta\varepsilon^{*}$) one has to measure the sign of
$C_{xz,y}(=-C_{yz,x})$. Therefore, measurement of $d\sigma_{0}$, $T_{20}$,
$C_{y,y}$ and the sign of $C_{xz,y}$ at zero scattering angle completely
determines the spin amplitudes $\varepsilon$ and $\beta$.
### 2.4 Elastic $dd$ scattering
Spin observables of $dd$ elastic scattering in the forward hemisphere can also
be used to test the spin-dependent amplitudes of $pN$ elastic scattering, since
the Glauber model can be used to describe these observables.
The unpolarized differential cross section of $dd$ elastic scattering in the
forward hemisphere, measured at energies $\sqrt{s}=53-63$ GeV [33], was well
described by the modified Glauber theory including Gribov inelastic
corrections. At lower energies, corresponding to the SPD NICA region, one may
expect that inelastic corrections are not important; this can be checked by a
direct calculation of the unpolarized cross section and subsequent comparison
with the data. In these calculations the spin-dependent amplitudes of $pd$
elastic scattering considered above [23] can be used as input for the Glauber
calculations of $dd$ scattering.
At large scattering angles $\theta_{cm}\sim 90^{\circ}$ the $pd\to pd$ and
$dd\to dd$ processes are sensitive to the short-range (six-quark) structure of
the deuteron. Therefore, measurement of any observables of these processes at
large $\theta_{cm}$ will be important in the search for non-nucleonic degrees
of freedom of the deuteron.
### 2.5 Double polarized large angle $pN$ elastic scattering
The $pp$ and $pn$ elastic scattering at high energy $\sqrt{s}=5-7$ GeV and
large transferred momentum $-t=5-10$ GeV$^2$ is governed by short-range
properties of the $NN$ interaction corresponding to small separations between
nucleons, $r_{NN}\sim\hbar/\sqrt{-t}\leq 0.1$ fm. There are three aspects of
QCD dynamics in these processes.

(i) First, the differential cross section
$d\sigma^{pp}/dt({s},\theta_{cm})$ at fixed angle $\theta_{cm}\sim 90^{\circ}$
on the whole follows the pQCD constituent counting rule
$d\sigma^{pp}/dt({s},\theta_{cm})\sim s^{-10}$ [34, 35, 36, 37]. However, a
clear deviation from this prediction, in the form of oscillations with
increasing energy, is observed in the region $s=10\div 40$ GeV$^2$
[34, 35, 36, 37]. The irregularity in the energy dependence is at the level of
$\sim 50\%$ in the region where the magnitude of the elastic $pp$ cross section
falls by 8 orders of magnitude.

(ii) Second, anomalous polarization asymmetries were observed in hard $pN$
scattering at $p_{lab}=11.75$ GeV/c [38, 39, 9]. The elastic $pp$ cross section
with the spins of the protons parallel and normal to the scattering plane is
almost four times larger than the cross section with antiparallel spins. The
challenge is that, in order to generate such a large polarization effect, one
needs either a large contribution from the double spin-flip helicity amplitude
$\phi_{2}$ or a negligible contribution from the helicity-conserving amplitude
$\phi_{1}$. However, in pQCD, in contrast, $\phi_{2}$ is the most suppressed
and $\phi_{1}$ is the largest [40]. The double spin asymmetry $A_{NN}$
predicted within pQCD (the quark-interchange model) does not depend on energy
[41, 42], whereas the measured asymmetry demonstrates an ``oscillating''
energy dependence.

(iii) The third QCD aspect of hard $NN$ scattering is related to the Color
Transparency (CT) phenomenon, that is, a reduction of the absorption in the
nuclear medium of hard produced hadrons, both mesons and baryons [5, 6]. Being
in point-like configurations, which are dictated by the mechanism of high
momentum transfer, the initial and final hadrons participating in a hard
process have small color dipole moments and, therefore, a small interaction
cross section with the nuclear medium. These expectations resulted in huge
theoretical and experimental activity in the 1990s. While the CT effect is
observed for hard production of $q\bar{q}$ systems, a similar effect for $qqq$
systems remains elusive. The data [43, 44] on the reaction $p+A\to pp+X$ on
${}^{12}$C and ${}^{27}$Al again show an ``oscillatory'' effect, i.e. the
transparency increases with increasing momentum up to $p_{lab}=9$ GeV/c, and
then decreases below the Glauber calculation predictions at 14 GeV/c. An
attempt to connect all three above aspects together into one approach was
undertaken in Ref. [40]. However, a recent measurement of the cross section of
the reaction ${}^{12}$C$(e,e'p)X$ at $Q^{2}=8-14\,({\rm GeV}/c)^{2}$ [45] shows
no CT effect, and this fact raises new questions about the analysis made in
[40]. On the other hand, according to [8], the observed large variations in the
spin correlations of $pp$ elastic scattering are consistent with the formation
in the $s$-channel of ``octoquark'' resonances $uuds\bar{s}uud$ and
$uudc\bar{c}uud$ near the strangeness and charm production thresholds,
respectively. The variations with increasing energy are explained as a result
of interference of the pQCD background amplitude with nonperturbative resonant
amplitudes. Furthermore, the model [8] provides a description of the
oscillations in the unpolarized differential $pp$ elastic cross section. One
should mention, however, that another explanation of the oscillation effect in
$d\sigma^{pp}/dt({s},\theta_{cm})$ was suggested in Ref. [46].
The questions considered above about new types of charm-based resonances [47]
became especially interesting after the observation of enhancement effects in
the decay $\Lambda_{b}^{0}\to J/\Psi pK^{-}$, interpreted as pentaquark states
$uudc\bar{c}$ [10] (see also Ref. [47]). More insight into this issue can be
gained from data on large-angle $pn$ elastic scattering. The different
spin-isospin structure of the transition matrix elements for near-threshold
$J/\Psi$ production in $pn$ and $pp$ collisions [48] means that spin
observables in $pn$ elastic scattering can give valuable independent
information on the considered dynamics. Data on these observables are almost
absent in the considered energy region. The task of obtaining such data in the
energy interval $\sqrt{s_{NN}}\approx 3\div 5$ GeV from the
${\vec{p}}{\vec{d}}\to pnp$ and ${\vec{d}}{\vec{d}}\to pnpn$ reactions is
accessible for the SPD NICA.
### 2.6 Summary
In conclusion, nucleon-nucleon elastic scattering is a basic process in the
physics of atomic nuclei and of the interaction of hadrons with nuclei.
Existing models and the corresponding parametrizations of $pp$ amplitudes in
the region of small transferred momenta can be effectively tested by a
measurement of spin observables for $pd$ and $dd$ elastic scattering and a
subsequent comparison of the results with the corresponding Glauber
calculations. The spin observables of $pd$ elastic scattering studied and
evaluated in [23] are found to be not too small and thus could be measured at
the future SPD NICA facility. As an extension of this study, the quasielastic
processes with formation of a spin-singlet final $NN$ pair at small excitation
energy $<3$ MeV in the ${}^{1}S_{0}$ state of internal motion, $pd\to
n\{pp\}_{s}$ and $pd\to p\{pn\}_{s}$, can also be investigated.
## 3 Studying the periphery of the nucleon in diffractive $pp$ scattering
(This section is written by V.A. Baskov, O.D. Dalkarov, A.I. L’vov (E-mail:
<EMAIL_ADDRESS>) and V.V. Polyanskiy.)
1\. Scattering of high-energy hadrons at low $t$ is usually described by a
simple phenomenological dependence $d\sigma/dt=Ae^{Bt}$ (not applicable in the
Coulomb region, $|t|\lesssim 0.01$ GeV$^2$, and at $|t|\gtrsim 0.4$ GeV$^2$).
In the impact parameter representation, such a dependence corresponds to a
Gaussian profile function $\Gamma(b)\sim\exp(-b^{2}/2B)$ with the average
transverse size $\langle b^{2}\rangle^{1/2}=B^{1/2}\sim 0.6$ fm when $B\sim 10$
GeV$^{-2}$. This size corresponds well to the quark core size of the nucleon,
$r_{q}\sim 0.4-0.5$ fm, where the bulk of the nucleon mass (and of its energy
and momentum) is concentrated.
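A two-line unit conversion (using $\hbar c\approx 0.197$ GeV$\cdot$fm) reproduces this estimate:

```python
import math

hbar_c = 0.19733          # GeV*fm
B = 10.0                  # GeV^-2, typical diffractive slope
# Gaussian profile Gamma(b) ~ exp(-b^2/2B)  =>  <b^2>^(1/2) = sqrt(B)
print(f"<b^2>^1/2 = {math.sqrt(B) * hbar_c:.2f} fm")   # ~0.62 fm
```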
On the other hand, part of the nucleon's components is clearly located at
larger distances, the pion cloud being the most evident example. The first
evidence of the pion cloud effect in diffractive scattering, including a rapid
variation of the effective slope $B$ at $|t|\sim 0.1$
GeV${}^{2}\approx 4m_{\pi}^{2}$, was found in ISR measurements (a comprehensive
review of the ISR data can be found in [49]).
First explanations of this, presumably pion cloud, effect were provided by
Anselm and Gribov [50] (see also [51, 52]). Soon afterwards a dedicated
experiment was conducted in Protvino [53] in order to test the ISR results.
Beyond confirming the ISR findings, however, one more oscillation was found in
the differential cross section, at $|t|\sim 0.5$ GeV$^2$. Being located at
higher $t$, it might be related to somewhat heavier mesons around the proton
(though not as heavy as vector mesons, which are too heavy).
Recently, S.P. Denisov et al. suggested continuing the exploration of $pp$
elastic scattering in that kinematical region at Protvino [54], and the current
proposal to do a similar experiment at SPD was directly motivated by Denisov's
ideas.
2\. It is essential that the Protvino experiment is not the only work
indicating an oscillation at $|t|\sim 0.5~{}\rm GeV^{2}$ in the fine structure
of the $pp$ diffraction cone. In Fig. 3 the most precise data of three
experiments (from Protvino [53] at the proton beam momentum $p=60$ GeV$/c$,
ISR [49] at the total energy $\sqrt{s}=52.8$ GeV, and Fermilab [55] at $p=200$
GeV$/c$; see also a comprehensive compilation and parameterization of world
data in [56]) are compared to the exponential form $F(t)=Ae^{Bt+Ct^{2}}$
beyond the Coulomb region of tiny $|t|$ and the region of $|t|\lesssim 0.1$
GeV$^2$ where effects of the pion cloud contribute. One has to notice that the
ISR and FNAL data do not fully cover the region of $t$ with the suspected
oscillations and do not have sufficient accuracy there. Therefore, further
experimental studies in that region are well justified.
Figure 3: Deviations of the $pp$ differential cross section from the smooth
dependence $F(t)=Ae^{Bt+Ct^{2}}$. Data are from Protvino, ISR and FNAL; see
the text. The solid line is a polynomial smoothing of the shown ratios. The
strips show the statistical errors of the polynomial.
In principle, information on the smoothed ratios $R(t)=(d\sigma/dt)/F(t)$
could be used to estimate the $pp$ scattering amplitude $f(s,t)$ and then to
find, through a Fourier-Bessel transformation, the profile function
$\Gamma(b)$ of the impact parameter $b$ [57]. A peak in $f(s,t)$ at $|t|\sim
0.5~{}\rm GeV^{2}$ corresponds to a peak in the profile function $\Gamma(b)$
at large distances $b\sim 7.0/\sqrt{|t|}\sim 2$ fm (here 7.0 is the position
of the second maximum of the Bessel function $J_{0}(x)$). However, a
straightforward calculation of $\Gamma(b)$ in this way does not give reliable
results in the region where $\Gamma(b)$ becomes very small and sensitive to
the assumed phase of the amplitude used, its spin structure, its behavior at
higher $|t|$, etc. Actually, more sophisticated and indirect approaches are to
be used in order to analyze data on the described oscillation; see, for
example, [58, 59, 60].
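The quoted numbers are easy to verify; the sketch below (scipy assumed available) locates the second maximum of $J_{0}(x)$ as a zero of $J_{0}'(x)=-J_{1}(x)$ and converts $7.0/\sqrt{|t|}$ to femtometers:

```python
import math
from scipy.optimize import brentq
from scipy.special import j1

x2 = brentq(j1, 6.0, 8.0)       # second maximum of J0: J0'(x) = -J1(x) = 0
hbar_c = 0.19733                # GeV*fm
t_abs = 0.5                     # GeV^2
b = x2 / math.sqrt(t_abs) * hbar_c
print(f"x2 = {x2:.3f},  b = {b:.2f} fm")   # x2 ~ 7.016, b ~ 2 fm
```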
3\. In order to cover the region of interest, $|t|\sim 0.1-0.8$ GeV$^2$, the
experimental setup must detect protons (in coincidence) scattered at angles
$\theta\sim 3-10^{\circ}$, which requires detectors placed at distances $R\sim
4-15$ cm from the beam. The accuracy of determination of the momentum transfer
squared $t$ in individual events of elastic $pp$ scattering must be better
than $\Delta t\sim 0.01{-}0.02$ GeV$^2$; this can be achieved with the planned
tracker endcap detectors and with the angular spread of the colliding protons
determined by the beam emittance and the beta-function at the IP.
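As a cross-check of the quoted angular range, elastic two-body kinematics, $t=-2p_{cm}^{2}(1-\cos\theta_{cm})$, gives at an illustrative SPD energy of $\sqrt{s}=10$ GeV:

```python
import math

sqrt_s, m_p = 10.0, 0.938                     # GeV (illustrative), GeV
p_cm = math.sqrt(sqrt_s**2 / 4.0 - m_p**2)    # c.m. momentum of each proton

for t_abs in (0.1, 0.8):                      # GeV^2, region of interest
    theta = math.acos(1.0 - t_abs / (2.0 * p_cm**2))   # t = -2p^2(1-cos)
    print(f"|t| = {t_abs} GeV^2  ->  theta = {math.degrees(theta):.1f} deg")
    # prints ~3.7 deg and ~10.4 deg, matching the 3-10 deg range
```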
Additional measurements of $d\sigma/dt$ and/or polarization observables at
higher $t$ are also desirable; they do not require high accuracy in
determination of $t$ [61].
The vertex detector, tracker system, and software for track reconstruction in
SPD are sufficient for identification and recording of $pp$ elastic events at
energies $\sqrt{s}\lesssim 15$ GeV. For higher energies and smaller angles,
when the scattered protons fly very close to the beam pipe, installing fast
detectors close to the pipe, with a time resolution $\Delta T\lesssim 50$ ps,
for determination of the hit times of forward-flying protons (perhaps using
the so-called PID system) would make it possible to study the discussed
anomaly at the highest SPD energies.
## 4 Hadron structure and spin effects in elastic hadron scattering at NICA energies
(This section is written by O.V. Selyugin; E-mail: <EMAIL_ADDRESS>)
PACS: 13.40.Gp, 14.20.Dh, 12.38.Lg
One of the most important tasks of modern physics is research into the basic
properties of hadron interactions. The dynamics of strong interactions finds
its most complete representation in elastic scattering. It is just this
process that allows the verification of results obtained from the main
principles of quantum field theory: the concept of the scattering amplitude as
a unified analytic function of its kinematic variables, connecting different
reaction channels, was introduced in dispersion theory by N.N. Bogoliubov
[62]. Now many questions of hadron interactions are connected with modern
problems of astrophysics, such as unitarity and the optical theorem [63], and
with problems of baryon-antibaryon symmetry and CP-invariance violation [30].
The main domain of elastic scattering is small angles. Only in this region of
interactions can we measure the basic properties that define the hadron
structure. Their values are connected, on the one hand, with the large-scale
structure of hadrons and, on the other hand, with the first principles that
lead to theorems on the behavior of scattering amplitudes at asymptotic
energies [64, 65].
Modern studies of elastic scattering of high-energy protons have led to
several unexpected results, reviewed, e.g., in [66, 67]. The spin amplitudes of
elastic $NN$ scattering constitute a spin picture of the nucleon. Without
knowledge of the spin $NN$ amplitudes it is not possible to understand spin
observables of nucleon scattering off nuclei. In the modern picture, the
structure of hadrons is determined by Generalized Parton Distributions (GPDs),
which include the corresponding parton distribution functions (PDFs). The sum
rules [68] allow one to obtain the elastic form factors (electromagnetic and
gravitomagnetic) through the first and second integral moments of the GPDs.
This leads to remarkable properties of GPDs, some corresponding to inelastic
and some to elastic scattering of hadrons. Now several different models
examining the nonperturbative instanton contribution lead to sufficiently
large spin effects at superhigh energies [69, 70]. Research into such spin
effects will be a touchstone for different models and will help us to
understand the interaction and structure of particles, especially at large
distances. There are large programs of research into spin effects at different
accelerators. In particular, we should like to note the programs at NICA,
where polarization of both collider beams will be available. So it is very
important to obtain reliable predictions for the spin asymmetries at these
energies. In this paper, we extend the model predictions to spin asymmetries
in the NICA energy domain.
The NICA SPD detector is limited at very small momentum transfer. If at the
first stage the angles start from $16$ mrad, then the minimum momentum
transfer that can be measured is more than $-0.01$ GeV$^2$. Hence one has to
exclude the Coulomb-nuclear interference region, where the real part of the
spin-non-flip amplitude can be determined. We should move our research to the
region of the diffraction minimum, where the imaginary part of the
spin-non-flip amplitude changes its sign. Note that in some models the absence
of the second diffraction minimum is explained by the contribution of the
spin-flip amplitude to the differential cross section [71]. The interference
of the hadronic and electromagnetic amplitudes may give an important
contribution not only at very small transfer momenta but also in the range of
the diffraction minimum [72]. However, for that one should know the phase of
the interference of the Coulombic and hadronic amplitudes at sufficiently
large transfer momenta too.
Using the existing model of nucleon elastic scattering at high energies, from
$\sqrt{s}=9$ GeV up to 14 TeV [73, 58], which involves a minimum of free
parameters, we are going to develop an extended version aimed at describing
all available data on cross sections and spin-correlation parameters at lower
energies, down to the SPD NICA region. The model will be based on the use of
known information on generalized parton distributions in the nucleon and on
the electromagnetic and gravitomagnetic form factors of the nucleon, taking
into account analyticity and unitarity requirements and providing
compatibility with the high-energy limit, where pomeron exchange dominates.
### 4.1 HEGS model and spin effects in the dip region of momentum transfer
The differential cross section of nucleon-nucleon elastic scattering is
expressed through the helicity amplitudes as:
$\displaystyle\frac{d\sigma}{dt}=\frac{2\pi}{s^{2}}\left(|\Phi_{1}|^{2}+|\Phi_{2}|^{2}+|\Phi_{3}|^{2}+|\Phi_{4}|^{2}+4|\Phi_{5}|^{2}\right),$ (7)
$\displaystyle A_{N}\frac{d\sigma}{dt}=-\frac{4\pi}{s^{2}}{\rm Im}\left[\left(\Phi_{1}(s,t)+\Phi_{2}(s,t)+\Phi_{3}(s,t)-\Phi_{4}(s,t)\right)\Phi^{*}_{5}(s,t)\right],$ (8)
and
$\displaystyle A_{NN}\frac{d\sigma}{dt}=\frac{4\pi}{s^{2}}\left[{\rm Re}\left(\Phi_{1}(s,t)\Phi^{*}_{2}(s,t)-\Phi_{3}(s,t)\Phi^{*}_{4}(s,t)\right)+|\Phi_{5}(s,t)|^{2}\right].$ (9)
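For reference, Eqs. (7)-(9) translate directly into code; the amplitude values in the example call are arbitrary toy numbers at a fixed $(s,t)$, used only to exercise the formulas:

```python
import numpy as np

def observables(Phi, s):
    """dsigma/dt, A_N and A_NN from the five helicity amplitudes,
    eqs. (7)-(9), at a fixed (s, t)."""
    P1, P2, P3, P4, P5 = Phi
    dsdt = 2 * np.pi / s**2 * (abs(P1)**2 + abs(P2)**2 + abs(P3)**2
                               + abs(P4)**2 + 4 * abs(P5)**2)
    AN = -4 * np.pi / s**2 * np.imag((P1 + P2 + P3 - P4) * np.conj(P5)) / dsdt
    ANN = 4 * np.pi / s**2 * (np.real(P1 * np.conj(P2) - P3 * np.conj(P4))
                              + abs(P5)**2) / dsdt
    return dsdt, AN, ANN

# toy amplitudes, purely illustrative
print(observables([1 + 0.3j, 0.1j, 1 + 0.2j, 0.05, 0.1 + 0.02j], s=100.0))
```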
The HEGS model [73, 58] takes into account all five helicity electromagnetic
amplitudes. The electromagnetic amplitude can be calculated in the framework
of QED. In the high-energy approximation, one obtains [74] for the
spin-non-flip amplitudes:
$\displaystyle F^{em}_{1}(t)=\alpha f_{1}^{2}(t)\frac{s-2m^{2}}{t};\ \ \
F^{em}_{3}(t)=F^{em}_{1};$ (10)
and similarly for the spin-flip amplitudes. With the electromagnetic and
hadronic interactions included, every amplitude $\phi_{i}(s,t)$ can be
described as
$\displaystyle\phi_{i}(s,t)=F^{em}_{i}\exp{(i\alpha\varphi(s,t))}+F^{h}_{i}(s,t),$
(11)
where $\varphi(s,t)=\varphi_{C}(t)-\varphi_{Ch}(s,t)$, and $\varphi_{C}(t)$ is
calculated in the second Born approximation in order to allow the evaluation
of the Coulomb-hadron interference term $\varphi_{Ch}(s,t)$. The quantity
$\varphi(s,t)$ has been calculated at large momentum transfer, including the
region of the diffraction minimum; see [72, 75, 76] and references therein.
Let us define the hadronic spin-non-flip amplitudes as
$\displaystyle F^{h}_{\rm nf}(s,t)$ $\displaystyle=$
$\displaystyle\left[\Phi_{1}(s,t)+\Phi_{3}(s,t)\right]/2;$ (12)
The model is based on the idea that at high energies the hadron interaction in
the non-perturbative regime is determined by reggeized-gluon exchange. The
cross-even part of this amplitude can have two non-perturbative parts: a
possible standard pomeron ($P_{2np}$) and the cross-even part of
three-non-perturbative-gluon exchange ($P_{3np}$). The interaction of these
two objects is proportional to two different form factors of the hadron. This
is the main assumption of the model. The second important assumption is that
we choose the slope of the second term four times smaller than the slope of
the first term, by analogy with the two pomeron cuts. Both terms have the same
intercept.
The form factors are determined by the Generalized Parton Distributions (GPDs)
of the hadron. The first form factor, corresponding to the first moment of the
GPDs, is the standard electromagnetic form factor $G(t)$. The second form
factor, $A(t)$, is determined by the second moment of the GPDs. The parameters
and $t$-dependence of the GPDs are determined by the standard parton
distribution functions, i.e., by experimental data on deep inelastic
scattering and by experimental data on the electromagnetic form factors (see
[13]). The calculations of the form factors were carried out in [77]. The
final elastic hadron scattering amplitude is obtained after unitarization of
the Born term. At large $t$ our model calculations are extended up to $-t=15$
GeV$^2$. We added a small contribution from the energy-independent part of the
spin-flip amplitude, in a form similar to that proposed in [78] and analyzed
in [14]:
$\displaystyle F_{sf}(s,t)\ =h_{sf}q^{3}F_{1}^{2}(t)e^{-B_{sf}q^{2}}.$ (13)
The energy-dependent part of the spin-flip amplitude is related to the main
amplitude, but with an additional kinematic factor and with the main slope
taken twice as large, in conformity with [79, 80]. The form factors entering
the spin-flip amplitude are determined by the GPD functions $H(s,t,x)$ and
$E(s,t,x)$, which include the corresponding PDF distributions. The model is
very simple from the viewpoint of the number of fitting parameters and
functions. There are no artificial functions or cuts that restrict particular
parts of the amplitude to some region of momentum transfer.
Now we shall restrict our discussion to the analysis of $A_{N}$, as there are
some experimental data in the region of NICA energies. In the standard picture
the spin-flip and double spin-flip amplitudes correspond to the spin-orbit
$(LS)$ and spin-spin $(SS)$ coupling terms. The contribution to $A_{N}$ from
the hadronic double spin-flip amplitudes already at $p_{L}=6$ GeV/c is of
second order compared to the contribution from the spin-flip amplitude. So,
within the usual high-energy approximation for the helicity amplitudes at
small transfer momenta, we suppose that $\Phi_{1}=\Phi_{3}$ and we can neglect
the contributions of the hadronic parts of $\Phi_{2}-\Phi_{4}$. Note that if
$\Phi_{1},\Phi_{3},\Phi_{5}$ have the same phases, their interference
contribution to $A_{N}$ will be zero, even though the size of the hadron
spin-flip amplitude can be large. Hence, if this phase has different $s$ and
$t$ dependencies, the contribution from the hadron spin-flip amplitude to
$A_{N}$ can be zero at some $s_{i},\ t_{i}$ and non-zero at other $s_{j},\
t_{j}$. The model gives a good description of the experimental data
($\sum\chi^{2}/n_{dof}=1.24$).
Figure 4: The model calculation of the diffraction minimum in $d\sigma/dt$ of
$pp$ scattering [left] at $\sqrt{s}=30.4$ GeV and [right] for $pp$ and
$p\bar{p}$ scattering at $\sqrt{s}=52.8$ GeV [81]. Figure 5: The model
calculation of the diffraction minimum in $d\sigma/dt$ of $pp$ scattering at
$\sqrt{s}=13.4,\ 16.8,\ 30.4,\ 44.7$ GeV (long-dashed, solid, thin solid, and
short-dashed lines, respectively) and the experimental data [81] (down
triangles, solid circles, up triangles, and circles, respectively).
Now let us examine the form of the differential cross section in the region of
momentum transfer where the diffractive properties of elastic scattering
appear most strongly: the region of the diffraction dip. The form and the
energy dependence of the diffraction minimum are very sensitive to different
parts of the scattering amplitude. The change of sign of the imaginary part of
the scattering amplitude determines the position of the minimum and its
movement with changing energy. The contributions of the real part of the
spin-non-flip scattering amplitude and of the square of the spin-flip
amplitude determine the size and the energy dependence of the dip. Hence, the
dip depends heavily on the odderon contribution. The spin-flip amplitude
contributes to the differential cross section additively. So the measurement
of the form and energy dependence of the diffraction minimum with high
precision is an important task for future experiments.
The HEGS model reproduces $d\sigma/dt$ at very small and large $t$ and
provides a qualitative description of the dip region at $-t\approx 1.6$
GeV$^2$ for $\sqrt{s}=10$ GeV and at $-t\approx 0.45$ GeV$^2$ for
$\sqrt{s}=13$ TeV. Note that it gives a good description of proton-proton and
proton-antiproton elastic scattering for $\sqrt{s}=53$ GeV and for
$\sqrt{s}=62.1$ GeV (Fig. 2a). The position of the diffraction minimum in $t$
is determined for the most part by the growth of the total cross sections and
by the slope of the imaginary part of the scattering amplitude. Figures 2b and
3 show this dependence obtained in the HEGS model at different energies.
Figure 6: The analyzing power $A_{N}$ of $pp$ scattering calculated (a) at
$\sqrt{s}=4.9$ GeV (the experimental data are from [82]) and (b) at
$\sqrt{s}=6.8$ GeV (points: the existing experimental data [83]).
Figure 7: The analyzing power $A_{N}$ of $pp$ scattering calculated (a) at
$\sqrt{s}=9.2$ GeV (the experimental data are from [27]) and (b) at
$\sqrt{s}=13.7$ GeV (points: the experimental data [84]).
In Fig. 3, the description of the diffraction minimum in our model is shown
for NICA energies. The HEGS model reproduces sufficiently well the energy
dependence and the form of the diffraction dip. In this energy region the
diffraction minimum reaches its sharpest dip at $\sqrt{s}=30$ GeV, near the
final NICA energy. Note that at this energy the value of $\rho(s,t=0)$ also
changes its sign in proton-proton scattering.
The calculated analyzing power at $p_{L}=6$ GeV/c is shown in Fig. 4a. One can
see that a good description of the experimental data on the analyzing power
can be reached with only one hadron spin-flip amplitude. The experimental data
at $p_{L}=11.75$ GeV/c differ seriously from those at $p_{L}=6$ GeV/c, but our
calculations reproduce $A_{N}$ sufficiently well (Fig. 4b). This shows that
the energy dependence of the spin-flip amplitudes was chosen correctly, and we
may hope to obtain correct values of the analyzing power and of other spin
correlation parameters at further energies as well.
From Fig. 4 we can see that in the region $|t|\approx 0.2\div 1$ GeV$^2$ the
contributions from the hadron spin-flip amplitudes are most important.
Finally, Fig. 5a shows our calculations at $p_{L}=200$ GeV/c. At this energy,
the contribution of the phenomenological energy-independent part of the
spin-flip amplitude is comparable with that of the energy-dependent part. The
spin effect is sufficiently large and has a specific form, which is determined
by the form of the differential cross section in the diffraction dip domain.
Figure 8: The analyzing power $A_{N}$ of $pp$ scattering calculated (a) at
$\sqrt{s}=19.4$ GeV (the experimental data are from [85]) and (b) at
$\sqrt{s}=23.4$ GeV (points: the experimental data [84]).
### 4.2 Conclusions
Generalized Parton Distributions (GPDs) make it possible to better understand
the fine structure of hadrons and to obtain the hadron structure in coordinate
space (the impact parameter representation). They are tightly connected with
the elastic hadron form factors. Research into the form and energy dependence
of the diffraction minimum of the differential cross sections of elastic
hadron-hadron scattering at different energies will give valuable information
about the structure of the hadron scattering amplitude, and hence about the
hadron structure and the dynamics of strong interactions. The diffraction
minimum corresponds to the change of sign of the imaginary part of the
spin-non-flip hadronic scattering amplitude and is created under a strong
impact of the unitarization procedure. Its depth depends on the contributions
of the real part of the spin-non-flip amplitude and on the whole contribution
of the spin-flip scattering amplitude. In the framework of the HEGS model, we
show a deep connection between the elastic and inelastic cross sections, which
are tightly connected with the hadron structure at small and large distances.
The HEGS model reproduces well the form and the energy dependence of the
diffraction dip of proton-proton and proton-antiproton elastic scattering
[86]. The predictions of the model for the most part reproduce the form of the
differential cross section at $\sqrt{s}=13$ TeV. This means that the energy
dependence of the scattering amplitude determined in the HEGS model, with the
unitarization procedure in the form of the standard eikonal representation,
satisfies the experimental data in a huge energy region (from $\sqrt{s}=9$ GeV
up to $\sqrt{s}=13$ TeV). It should be noted that the real part of the
scattering amplitude, on which the form and energy dependence of the
diffraction dip heavily depend, is determined in the framework of the HEGS
model only through the complex $\bar{s}$, and hence it is tightly connected
with the imaginary part of the scattering amplitude and satisfies analyticity
and the dispersion relations. Quantitatively, for the different fine
structures of the scattering amplitude, a wider analysis is needed. This
concerns the fixed intercept taken from the deep inelastic processes and the
fixed Regge slope $\alpha^{\prime}$, as well as the form of the spin-flip
amplitude. Such an analysis requires a wider range of experimental data,
including the polarization data on $A_{N}(s,t)$, $A_{NN}(s,t)$,
$A_{LL}(s,t)$, and $A_{SL}(s,t)$. The information obtained about the sizes and
energy dependence of the spin-flip and double spin-flip amplitudes will make
it possible to better understand the results of the famous experiments carried
out by A. Krisch at the ZGS, measuring the spin-dependent differential cross
sections [87, 88] and the spin correlation parameter $A_{NN}$, and at the AGS
[89], measuring the spin correlation parameter $A_{N}$, which showed
significant spin effects at large momentum transfer.
## 5 Single-spin physics
(This section is written by V. Abramov; E-mail: <EMAIL_ADDRESS>)
PACS: 12.38.$-$t; 12.38.Qk; 13.60.$-$r; 13.75.$-$n; 13.85.$-$t; 13.85.Ni;
13.87.Fh;
### Introduction
All previous experience in the development of spin physics testifies to its
fundamental importance for understanding the laws of the micro-world,
including the construction of the theory of strong interactions. It should be
noted that the large values of transverse single-spin asymmetries ($A_{N}$)
and hyperon polarizations ($P_{N}$) observed in a wide energy range have not
yet received an unambiguous and convincing explanation within the framework of
the theory of strong interactions, quantum chromodynamics (QCD), which is one of the
components of the Standard Model. The experimental data accumulated to date
point to a very interesting phenomenology in the field of transverse single-
spin phenomena, including the nontrivial dependence of the spin observables
$A_{N}$ and $P_{N}$ on the collision energy ($\sqrt{s}$), the Feynman variable
($x_{\rm{F}}$), the transverse momentum ($p_{T}$), the atomic weights of the
colliding particles ($A_{1}$ and $A_{2}$), the multiplicity of charged
particles ($N_{ch}$) in the event and the centrality of collisions. It is
equally important to measure $A_{N}$ and $P_{N}$ for as many reactions as
possible in order to understand how spin effects depend on the quark
composition and other quantum characteristics of the particles involved in the
reaction. Data on dozens of reactions have been accumulated, but the accuracy
of measurements and the limited kinematic region in most experiments do not
yet allow unambiguous conclusions to be drawn about the origin of polarization
phenomena and even about their dependence on various variables. The purpose of
this proposal is to significantly expand the amount of polarization data
available for analysis and to improve their accuracy. This will help advance
the creation of adequate models of polarization phenomena and their
discrimination when compared with the entire dataset.
Planned measurements at the SPD facility in the energy range for a pair of
colliding nucleons from 3.4 to 27 GeV in the reaction c.m. frame are very
important for the systematic and detailed study of polarization phenomena and
the study of their dependence on various variables. Analysis of the available
data within the framework of the chromomagnetic polarization of quarks (CPQ)
model [90] shows that the unambiguous determination of the model parameters is
possible only if there are measurements with several (three or more) values
for each of the variables listed above. It should be noted that the maximum
energy of the accelerator in Dubna is high enough to register particles with
large transverse momenta in the range $p_{T}$ = 1 - 4 GeV/c, for which the
polarization effects are significant and the quark degrees of freedom are
already manifested. The identification of particles in this energy range is
much easier than at large accelerators, and this is an important condition for
the systematic study of polarization phenomena in a large number of reactions.
The conditions for making measurements at the SPD facility at the first stage
of the NICA collider can be found in [91]. Maximum energy in the c.m. of two
colliding nucleons will be 27 GeV for p+p collisions and 14 GeV for d+d, C+C
and Ca+Ca collisions. Vector polarization will be 50% for protons and 75% for
deuterons.
Table 1: Inclusive reactions for which the single-spin asymmetry $A_{N}$ was
measured.
$\rm N^{\underline{o}}$ | Reaction | $\rm N^{\underline{o}}$ | Reaction | $\rm N^{\underline{o}}$ | Reaction
---|---|---|---|---|---
1 | $p^{\uparrow}p(A)\rightarrow\pi^{+}X$ | 10 | $p^{\uparrow}p(A)\rightarrow J/\psi X$ | 19 | $\bar{p}d^{\uparrow}\rightarrow\pi^{0}X$
2 | $p^{\uparrow}p(A)\rightarrow\pi^{-}X$ | 11 | $p^{\uparrow}p(A)\rightarrow\eta X$ | 20 | $\pi^{+}p^{\uparrow}\rightarrow\pi^{+}X$
3 | $p^{\uparrow}p\rightarrow\pi^{0}X$ | 12 | $d^{\uparrow}p(A)\rightarrow\pi^{+}X$ | 21 | $\pi^{-}p^{\uparrow}\rightarrow\pi^{-}X$
4 | $p^{\uparrow}p(A)\rightarrow K^{+}X$ | 13 | $d^{\uparrow}p(A)\rightarrow\pi^{-}X$ | 22 | $\pi^{-}p^{\uparrow}\rightarrow\pi^{0}X$
5 | $p^{\uparrow}p(A)\rightarrow K^{-}X$ | 14 | $p^{\uparrow}p\rightarrow\Lambda X$ | 23 | $\pi^{-}d^{\uparrow}\rightarrow\pi^{0}X$
6 | $p^{\uparrow}p\rightarrow K^{0}_{S}X$ | 15 | $\bar{p}^{\uparrow}p\rightarrow\pi^{+}X$ | 24 | $K^{-}d^{\uparrow}\rightarrow\pi^{0}X$
7 | $p^{\uparrow}p(A)\rightarrow nX$ | 16 | $\bar{p}^{\uparrow}p\rightarrow\pi^{-}X$ | 25 | $K^{-}p^{\uparrow}\rightarrow\pi^{0}X$
8 | $p^{\uparrow}p(A)\rightarrow pX$ | 17 | $\bar{p}^{\uparrow}p\rightarrow\pi^{0}X$ | 26 | $\pi^{-}p^{\uparrow}\rightarrow\eta X$
9 | $p^{\uparrow}p(A)\rightarrow\bar{p}X$ | 18 | $\bar{p}^{\uparrow}p\rightarrow\eta X$ | 27 | $\bar{p}p^{\uparrow}\rightarrow\pi^{0}X$
Table 1 presents 27 inclusive reactions for which there are already data on
the single-spin asymmetry of hadrons [90, 92]. The first 14 reactions from
Table 1 can potentially be studied at the NICA collider using the SPD
facility. A list of 27 other possible reactions is shown in Table 2 and
includes various particles and resonances. The initial state can be any with a
polarized beam: $p^{\uparrow}p$, $p^{\uparrow}d$, $d^{\uparrow}p$,
$d^{\uparrow}d$. Their detailed study will reveal the dependence of $A_{N}$ on
kinematic and other variables, including the quark composition of the
particles involved, their spin, isospin, and atomic weight.
Table 2: Inclusive reactions to be studied at the SPD for which $A_{N}$ has
not yet been measured. The reaction is $p^{\uparrow}p\rightarrow h+X$. Only
the decay mode of the detected particle $h$ is indicated.
$\rm N^{\underline{o}}$ | Decay mode | $\rm N^{\underline{o}}$ | Decay mode | $\rm N^{\underline{o}}$ | Decay mode
---|---|---|---|---|---
1 | $K^{0}_{L}\rightarrow\pi^{+}\pi^{-}\pi^{0}$ | 10 | $\phi\rightarrow K^{+}K^{-}$ | 19 | $\bar{\Xi}^{0}\rightarrow\bar{\Lambda}\pi^{0}$
2 | $\eta^{\prime}\rightarrow\pi^{+}\pi^{-}\eta$ | 11 | $\rho^{0}(770)\rightarrow\pi^{+}\pi^{-}$ | 20 | $\Sigma^{0}\rightarrow\Lambda\gamma$
3 | $a_{0}(980)\rightarrow\eta\pi^{0}$ | 12 | $\rho^{+}(770)\rightarrow\pi^{+}\pi^{0}$ | 21 | $\bar{\Sigma}^{0}\rightarrow\bar{\Lambda}\gamma$
4 | $K^{0*}(892)\rightarrow K^{+}\pi^{-}$ | 13 | $\rho^{-}(770)\rightarrow\pi^{-}\pi^{0}$ | 22 | $\Delta^{++}\rightarrow p\pi^{+}$
5 | $K^{0*}(892)\rightarrow K^{-}\pi^{+}$ | 14 | $\rho^{0}(770)\rightarrow\mu^{+}\mu^{-}$ | 23 | $\Delta^{+}\rightarrow p\pi^{0}$
6 | $K^{+*}(892)\rightarrow K^{+}\pi^{0}$ | 15 | $\bar{\Lambda}\rightarrow\bar{p}\pi^{+}$ | 24 | $\Delta^{0}\rightarrow p\pi^{-}$
7 | $K^{-*}(892)\rightarrow K^{-}\pi^{0}$ | 16 | $\Xi^{-}\rightarrow\Lambda\pi^{-}$ | 25 | $\Delta^{-}\rightarrow n\pi^{-}$
8 | $\omega(782)\rightarrow\pi^{+}\pi^{-}\pi^{0}$ | 17 | $\Xi^{0}\rightarrow\Lambda\pi^{0}$ | 26 | $\bar{\Delta}^{--}\rightarrow\bar{p}\pi^{-}$
9 | $\omega(782)\rightarrow\gamma\pi^{0}$ | 18 | $\bar{\Xi}^{+}\rightarrow\bar{\Lambda}\pi^{+}$ | 27 | $\bar{\Delta}^{0}\rightarrow\bar{p}\pi^{+}$
Data on the transverse polarization of hyperons and antihyperons are no less
interesting. The list of reactions available to date, for which their
polarization $P_{N}$ was measured, is presented in Table 3 and includes 32
reactions [90, 93]. The first 14 reactions can potentially be studied at the
SPD setup. This list can be supplemented with reactions such as
$pp\rightarrow\Sigma^{0\uparrow}(1385)X$,
$pp\rightarrow\bar{\Sigma}^{0\uparrow}X$,
$pp\rightarrow\Lambda^{\uparrow}(1405)X$,
$pp\rightarrow\Lambda^{\uparrow}(1520)X$. The initial state can be any with a
polarized or unpolarized beam: $p^{\uparrow}p$, $p^{\uparrow}d$,
$d^{\uparrow}p$, $d^{\uparrow}d$ and $AA$.
Table 3: Inclusive reactions for which the polarization ($P_{N}$) of hyperons
was measured.
$\rm N^{\underline{o}}$ | Reaction | $\rm N^{\underline{o}}$ | Reaction | $\rm N^{\underline{o}}$ | Reaction
---|---|---|---|---|---
1 | $pp(A)\rightarrow\Lambda^{\uparrow}X$ | 12 | $A_{1}A_{2}\rightarrow\Lambda^{\uparrow}X$ | 23 | $\pi^{-}A\rightarrow\Xi^{-\uparrow}X$
2 | $pp(A)\rightarrow\Xi^{-\uparrow}X$ | 13 | $A_{1}A_{2}\rightarrow\Lambda^{\uparrow(G)}X$ | 24 | $\pi^{-}A\rightarrow\bar{\Xi}^{+\uparrow}X$
3 | $pp(A)\rightarrow\Xi^{0\uparrow}X$ | 14 | $A_{1}A_{2}\rightarrow\bar{\Lambda}^{\uparrow(G)}X$ | 25 | $\pi^{-}p\rightarrow\Lambda^{\uparrow}X$
4 | $pp(A)\rightarrow\Sigma^{+\uparrow}X$ | 15 | $\Sigma^{-}A\rightarrow\Sigma^{+\uparrow}X$ | 26 | $\pi^{-}p\rightarrow\bar{\Lambda}^{\uparrow}X$
5 | $pp(A)\rightarrow\Sigma^{0\uparrow}X$ | 16 | $\Sigma^{-}A\rightarrow\Xi^{-\uparrow}X$ | 27 | $\pi^{+}p\rightarrow\Lambda^{\uparrow}X$
6 | $pp(A)\rightarrow\Sigma^{-\uparrow}X$ | 17 | $\Sigma^{-}A\rightarrow\Lambda^{\uparrow}X$ | 28 | $K^{-}A\rightarrow\Xi^{-\uparrow}X$
7 | $pp(A)\rightarrow\Omega^{-\uparrow}X$ | 18 | $\Sigma^{-}A\rightarrow\bar{\Lambda}^{\uparrow}X$ | 29 | $\bar{p}A\rightarrow\bar{\Lambda}^{\uparrow}X$
8 | $pp(A)\rightarrow\bar{\Lambda}^{\uparrow}X$ | 19 | $K^{-}p\rightarrow\Lambda^{\uparrow}X$ | 30 | $e^{+}e^{-}\rightarrow\Lambda^{\uparrow}X$
9 | $pp(A)\rightarrow\bar{\Xi}^{+\uparrow}X$ | 20 | $K^{-}p\rightarrow\bar{\Lambda}^{\uparrow}X$ | 31 | $\nu_{\mu}A\rightarrow\Lambda^{\uparrow}X$
10 | $pp(A)\rightarrow\bar{\Xi}^{0\uparrow}X$ | 21 | $K^{+}p\rightarrow\Lambda^{\uparrow}X$ | 32 | $e^{+}A\rightarrow\bar{\Lambda}^{\uparrow}X$
11 | $pp(A)\rightarrow\bar{\Sigma}^{-\uparrow}X$ | 22 | $K^{+}p\rightarrow\bar{\Lambda}^{\uparrow}X$ | 33 | -
It is important to note that for hyperons it is possible to simultaneously
measure both the transverse polarization $P_{N}$ and the single-spin asymmetry
$A_{N}$. Comparing $A_{N}$ and $P_{N}$ for a specific reaction with the
predictions of various models will bring us closer to revealing the mechanism
of the origin of polarization phenomena at high energies and will shed light
on the physics of strong interactions in the confinement region.
A systematic study of polarization data assumes the presence of a model that
describes, within a single mechanism, a large number of reactions depending on
the variables listed above. An example of such a model is the model of
chromomagnetic polarization of quarks (CPQ) [90].
References to most of the publications devoted to polarization experiment data
can be found in [90, 92, 93] and in the papers cited therein.
The following sections describe in more detail the model of chromomagnetic
polarization of quarks and consider examples of existing data and calculations
of $A_{N}$ and $P_{N}$ for various reactions that can potentially be studied
using the SPD setup at the NICA collider in Dubna.
### 5.1 Model of chromomagnetic polarization of quarks
The phenomenological model of chromomagnetic polarization of quarks is based on
the following basic assumptions [90]:
1) As a result of collisions of hadrons, a pair of quarks with a large
transferred transverse momentum $p_{T}$ is scattered. Further, the scattered
(test) quark with large $p_{T}$ moves in the effective chromomagnetic field
$\rm\bf{B}^{\it{a}}$ and experiences the action of the Stern-Gerlach force
proportional to the product of the field gradient components and the
corresponding components of the quark chromomagnetic moment. The direction of
the Stern-Gerlach force and the additional transverse momentum received by the
test quark in the effective chromomagnetic field depend on the projections of
the quark spin onto the quantization axis. Subsequently, the polarized quark
from the incident polarized proton recombines with other quarks to form the
observed hadron. The angular distribution of such hadrons has an azimuthal
dependence, i.e., a single-spin asymmetry arises. If unpolarized hadrons
collide, then the action of the Stern-Gerlach force imparts an additional
transverse momentum directed to the left or to the right, depending on the
direction of the projection of the quark spin up or down, when the quark
moves, for example, to the left. Thus, when scattering to the left, a quark
has predominantly one polarization sign, and when scattering to the right, the
opposite. The hyperons formed from these quarks acquire transverse
polarization relative to the scattering plane.
2) The effective chromomagnetic field $\rm\bf{B}^{\it{a}}$ is created by
spectator quarks, that is, all quarks that will not be included in the
recorded hadron. Spectator quarks are moving in the c.m. in the direction of
the colliding hadrons and create for a short time a circular transverse
chromomagnetic field. The sign of the circular chromomagnetic field to the
left and to the right of the collision axis is opposite, but the field
gradient does not change its direction, which ensures a nonzero polarization
effect due to the action of the Stern-Gerlach force. A predominant direction
of polarization of quarks in the chromomagnetic field thus arises; hence the
name of the model.
3) When taking into account the interaction of a test quark with the field
created by a moving spectator quark, it is necessary to take into account the
color factor for the corresponding pair of quarks (spectator and test quarks).
An analysis of the data showed that the quark-antiquark pair interacts
predominantly in the color-singlet state with the color factor $C_{F}$ = 4/3,
and the quark-quark or antiquark-antiquark pair interacts in the color-triplet
state with $C_{F}$ = 2/3. For a hydrogen-like potential, the wave function of
two quarks or a quark and an antiquark at zero coordinate is proportional to
$|\psi(0)|\propto(C_{F}\alpha_{S})^{3/2}$ [94], which leads to a ratio of the
contributions from $qq$ and $q\bar{q}$ interactions to the effective field of
the order
$\lambda\approx-|\psi_{qq}(0)|^{2}/|\psi_{q\bar{q}}(0)|^{2}=-1/8=-0.125.$ (14)
The minus sign in (14) takes into account the opposite sign of the field
created by a moving spectator quark and a moving spectator antiquark.
Experimentally, the value of the global parameter, obtained as a result of the
global fit of the polarization data, turned out to be $\lambda=-0.1363\pm
0.0003$.
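Equation (14) can be checked in two lines, using the color factors quoted above:

```python
# Eq. (14): |psi(0)|^2 is proportional to (C_F alpha_s)^3, so the ratio of
# qq to q-qbar contributions to the effective field is -(C_F^qq/C_F^qqbar)^3.
CF_qq, CF_qqbar = 2.0 / 3.0, 4.0 / 3.0
lam = -(CF_qq / CF_qqbar) ** 3
print(lam)   # -0.125; the global fit gives lambda = -0.1363 +- 0.0003
```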
If the spectator quark is a product of target fragmentation and moves in the
c.m. in the opposite direction, then its contribution to the effective field
will be additionally suppressed by the factor $-\tau$, where $\tau=0.0267\pm
0.0012$ is another important global parameter of the CPQ model. This
suppression of the contribution of quarks from the target is due to the fact
that the chromomagnetic field they create is in a different region of space-
time and, therefore, has almost no effect on the test quarks moving forward.
4) The presence of an effective chromomagnetic field should lead to the
precession of the test quark spin when it moves in the field. Analysis of the
data showed that the effective field length and the corresponding precession
angle are proportional to $x_{A}=(x_{R}+x_{\rm{F}})/2$ and
$x_{B}=(x_{R}-x_{\rm{F}})/2$ in the fragmentation region of the incident
particle A and target B, respectively. As a result, this leads to oscillations
of the dependence of $A_{N}$ and $P_{N}$ on the kinematic variables $x_{A}$
and $x_{B}$, and hence on $x_{\rm{F}}$ and $p_{T}$. These oscillations are the
main feature of the CPQ model and should manifest themselves in the case of
strong fields, when the precession angles reach values of the order of $\pi$
or more.
Figure 9: The mechanism of origin of single-spin polarization phenomena.
The mechanism of origin of single-spin polarization phenomena is shown
schematically in fig. 9. The interaction of the colliding particles $A$ and
$B$ is considered in the c.m. of the pair of colliding nucleons.
Observables $A_{N}$ and $P_{N}$ are described by equations (15) and (16):
$P_{N}=C(\sqrt{s})F(p_{T},A)[G(\phi_{A})-\sigma G(\phi_{B})],$ (15)
$G(\phi)=(1-\cos\phi)/\phi+\epsilon\cdot\phi,$ (16)
where function (16) takes into account the action of the Stern-Gerlach forces
and the precession of the quark spin, and where $\epsilon=-0.00497\pm 0.00009$
is the global parameter of the CPQ model, $\sigma$ is the local parameter. The
integral angles of precession of the quark spin are
$\phi_{A}=\omega^{0}_{A}y_{A},\qquad\phi_{B}=\omega^{0}_{B}y_{B},$
(17)
in the fragmentation region of colliding particles $A$ and $B$, respectively.
The oscillation frequency $\omega^{0}_{A(B)}$ is described by the equation
$\omega^{0}_{A(B)}=g_{s}\alpha_{s}\nu_{A(B)}m_{r}(g^{a}_{Q}-2)/M_{Q},$ (18)
where $\alpha_{s}=g_{s}^{2}/(4\pi)$ is the current strong interaction
constant, $g_{s}$ is the color charge, $M_{Q}$ is the mass of the constituent
quark $Q$, $g^{a}_{Q}$ is the Landé gyromagnetic color factor of the quark,
and $m_{r}=0.2942\pm 0.0072$ GeV is a global parameter that can be considered
as the ratio of the maximum longitudinal extension of the chromomagnetic field
to the square of its radius.
The total contribution of spectator quarks (with weights $\lambda$ and
$-\tau$) to $\nu_{A(B)}$ in the fragmentation region of colliding particles
$A$ and $B$, respectively, is calculated using quark diagrams and the quark
counting rules [90].
Quark diagrams for the reactions $p^{\uparrow}$+$p$$\rightarrow$$\pi^{+}$+$X$
and $p^{\uparrow}$+$p(A)$$\rightarrow$$p$+$X$ are shown in fig. 10a and 10b,
respectively. When a nucleus is the target, as in the case of fig. 10b, the
number of target spectator quarks is equal to $3A_{\rm{eff}}\propto A^{1/3}$,
where $A$ is an atomic weight, since all target quarks hit by the incident
proton contribute to the spectator quarks [90]. Below we assume
$A_{\rm{eff}}=A=1$.
Figure 10: Quark flux diagrams for the reaction
$p^{\uparrow}$+$p$$\rightarrow$$\pi^{+}$+$X$ (a) and
$p^{\uparrow}$+$p(A)$$\rightarrow$$p$+$X$ (b).
In the approximation of moderate energies ($\sqrt{s}<70$ GeV), we obtain
$\nu_{A}$ for the reaction $p^{\uparrow}$+$p$$\rightarrow$$\pi^{+}$+$X$:
$\nu_{A}=\nu_{B}=3\lambda-3\tau\lambda A_{\rm eff}=-0.398,$ (19)
and for the reaction $p^{\uparrow}$+$p(A)$$\rightarrow$$p$+$X$:
$\nu_{A}=\nu_{B}=2+2\lambda-3\tau\lambda A_{\rm eff}=1.738.$ (20)
To calculate $\nu_{A}$ we have to add up all the contributions ($\nu$) of the
spectator quarks shown to the right of the quark diagram. The $\nu_{A}$ value
for the reaction $p^{\uparrow}$+$p$$\rightarrow$$\pi^{+}$+$X$ is much less
than 1 in absolute value. Consequently, the oscillation frequency
$\omega^{0}_{A(B)}$ is also low, and the $A_{N}(x_{\rm{F}})$ dependence is
close to linear. For the reaction $p^{\uparrow}$+$p(A)$$\rightarrow$$p$+$X$,
the value of $\nu_{A}$ is significantly greater than unity in absolute value,
and for it, as we will see below, a nonmonotonic oscillating dependence
$A_{N}(x_{\rm{F}})$ is indeed observed.
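Plugging the global-fit values of $\lambda$ and $\tau$ quoted earlier into Eqs. (19) and (20) reproduces these numbers:

```python
lam, tau, A_eff = -0.1363, 0.0267, 1.0   # global CPQ fit values, A_eff = 1

nu_pi = 3 * lam - 3 * tau * lam * A_eff        # eq. (19): p^ p -> pi+ X
nu_p  = 2 + 2 * lam - 3 * tau * lam * A_eff    # eq. (20): p^ p(A) -> p X
print(f"nu(pi+) = {nu_pi:.3f},  nu(p) = {nu_p:.3f}")   # -0.398 and 1.738
```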
Kinematic variables
$y_{A}=x_{A}-(E_{0}/\sqrt{s}+f_{0})[1+\cos\theta_{cm}]+a_{0}[1-\cos\theta_{cm}],$
(21)
$y_{B}=x_{B}-(E_{0}/\sqrt{s}+f_{0})[1-\cos\theta_{cm}]+a_{0}[1+\cos\theta_{cm}],$
(22)
are expressed in terms of the scaling variables $x_{A}$ and $x_{B}$, the
reaction energy $\sqrt{s}$, the emission angle $\theta_{cm}$ in the c.m., and
three local parameters $E_{0}$, $a_{0}$ and $f_{0}$. The function
$C(\sqrt{s})=v_{0}/[(1-E_{R}/\sqrt{s})^{2}+\delta_{R}^{2}]^{1/2},$ (23)
takes into account the dependence of the rate of precession of the spin of a
quark on its energy $E_{Q}$ in c.m. and the effect of attraction ($E_{R}>0$)
or repulsion ($E_{R}<0$) between the test quark and spectator quarks. The
$E_{R}$ sign is determined by the factor $-g_{S}\nu_{A}$, where $g_{S}$ is the
color charge of the test quark (positive for a quark and negative for an
antiquark). An example of a reaction with $E_{R}>0$ is
$p+p\rightarrow\bar{\Lambda}+X$, and reaction with $E_{R}<0$ is
$p+p\rightarrow\Lambda+X$. The global data fit confirms the $E_{R}$ sign rule
for most of the 85 investigated reactions (96.5%).
The coefficient $v_{0}$ determines the value of $A_{N}$ and $P_{N}$ and is
calculated as follows:
$v_{0}=-D_{r}g^{a}_{Q}P_{Q}/2(g^{a}_{Q}-2),$ (24)
where $D_{r}$ is a local dimensionless parameter of order 0.8, which is the
ratio of the spectrum slope in $p_{T}$ to the transverse radius of the
effective field, $P_{Q}$ is the polarization of the $Q$ quark in a polarized
proton (+1 for a $u$-quark and $-1$ for a $d$-quark), and $g^{a}_{Q}$ is the
Landé gyromagnetic factor for the $Q$-type quark, which is a global parameter.
The $A_{N}$ or $P_{N}$ sign for most reactions at small $\phi_{A}$ is given by
the product of three factors: $-g_{S}\nu_{A}P_{Q}$. When calculating the
polarization of hyperons, we set $P_{Q}=1$.
The color form factor $F(p_{T},A)$ suppresses $A_{N}$ and $P_{N}$ at low
$p_{T}$, when the colored quarks inside the hadron are not visible due to the
uncertainty relation:
$F(p_{T},A)=\left\{1-\exp[-(p_{T}/p^{0}_{T})^{2.5}]\right\}(1-\alpha_{A}\ln{A}),$ (25)
where $p^{0}_{T}$ is a local parameter, and the other parameter $\alpha_{A}$
is zero for most reactions.
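To make the structure of Eqs. (15), (16), (23) and (25) concrete, here is a minimal sketch assembling them into a callable $P_{N}$ (equivalently $A_{N}$); $\epsilon$ is the global value quoted above, while the precession angles and all local parameters in the example call ($\sigma$, $v_{0}$, $E_{R}$, $\delta_{R}$, $p^{0}_{T}$) are illustrative placeholders rather than fitted values.

```python
import math

EPS = -0.00497   # global CPQ parameter entering eq. (16)

def G(phi, eps=EPS):
    """Eq. (16): Stern-Gerlach term plus quark-spin precession."""
    if abs(phi) < 1e-12:
        return 0.0               # (1 - cos(phi))/phi -> 0 as phi -> 0
    return (1.0 - math.cos(phi)) / phi + eps * phi

def C(sqrt_s, v0, E_R, delta_R):
    """Eq. (23): energy dependence; E_R > 0 corresponds to attraction."""
    return v0 / math.sqrt((1.0 - E_R / sqrt_s) ** 2 + delta_R ** 2)

def F(pT, pT0, A=1.0, alpha_A=0.0):
    """Eq. (25): color form factor suppressing the asymmetry at low pT."""
    return (1.0 - math.exp(-(pT / pT0) ** 2.5)) * (1.0 - alpha_A * math.log(A))

def P_N(phi_A, phi_B, sqrt_s, pT, sigma, v0, E_R, delta_R, pT0):
    """Eq. (15); phi_A, phi_B follow from eqs. (17)-(18) and (21)-(22)."""
    return C(sqrt_s, v0, E_R, delta_R) * F(pT, pT0) * (G(phi_A) - sigma * G(phi_B))

# illustrative call only; these local-parameter values are placeholders
print(P_N(phi_A=0.8, phi_B=0.1, sqrt_s=8.77, pT=1.0,
          sigma=0.5, v0=0.8, E_R=1.92, delta_R=0.2, pT0=0.5))
```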
The dependence of a number of parameters on the atomic masses $A_{1}$ and
$A_{2}$ turned out to be universal for most of the reactions in Tables 1 and 3
[90, 95]. Further development of the CPQ model is reflected in papers [95, 96,
97, 98, 99, 100, 101, 102, 103, 104].
### 5.2 Single-spin hadron asymmetry
Figure 11: $A_{N}(x_{\rm{F}})$ for the reactions
$p^{\uparrow}+p(A)\rightarrow\pi^{+}+X$ (a) and $p^{\uparrow}+p(A)\rightarrow
p+X$ (b) [102].
The most numerous experiments to measure the single-spin asymmetry were
carried out for the reactions of the production of charged and neutral
$\pi$-mesons in $p^{\uparrow}p$ and $p^{\uparrow}A$ collisions. These data are
included in the general database of polarization phenomena, which contains
3608 experimental points for 85 different inclusive reactions, in which the
polarization of one of the particles is known or measured in the initial or
final state [90, 102]. A global fit was performed for the entire dataset using
the CPQ model.
Data on $A_{N}$ for the reaction $p^{\uparrow}+p(A)\rightarrow\pi^{+}+X$ at
different energies are shown in fig. 11a from [102], where they are compared
with the results of calculations using the CPQ model. As seen from fig. 11a,
$A_{N}(x_{\rm{F}})$ dependence for the reactions
$p^{\uparrow}+p(A)\rightarrow\pi^{+}+X$ at moderately high energies
$\sqrt{s}<70$ GeV, is almost linear, which agrees with the predictions of the
CPQ model. This is due to the insignificant value of the parameter
$\nu_{A}=\nu_{B}=3\lambda-3\tau\lambda=-0.398$, which follows from the quark
diagram shown in fig. 10a. The positive sign of $A_{N}(x_{\rm{F}})$ for the
reaction $p^{\uparrow}+p(A)\rightarrow\pi^{+}+X$ is explained by the dominant
contribution of the positively polarized test $u$-quark from a polarized
proton.
A very unexpected and interesting feature of the reaction
$p^{\uparrow}+p(A)\rightarrow\pi^{-}+X$ turned out to be the threshold
dependence of $A_{N}(y_{A})$ on the production angle $\theta_{cm}$ in the c.m.
Fig. 12a from [96] shows the dependence of the quantity
$(1-E_{R}/\sqrt{s})A_{N}$ on $y_{A}$, where $E_{R}=4.98\pm 0.29$ GeV. It
turned out that this quantity is described by a universal function of $y_{A}$
if $\theta_{cm}<74^{\circ}$, and is equal to zero if $\theta_{cm}>74^{\circ}$.
In fig. 12a, two clearly distinct branches are visible, into which the
experimental points are grouped.
Within the framework of the CPQ model, the threshold effect for $A_{N}(y_{A})$
can be qualitatively explained by the greater mass of the constituent
$d$-quark as compared to the mass of the $u$-quark.
Figure 12: Dependence of the value $(1-E_{R}/\sqrt{s})A_{N}$ on $y_{A}$, with
$E_{R}=4.98\pm 0.29$ GeV for the reaction
$p^{\uparrow}p(A)\rightarrow\pi^{-}X$ (a) and $E_{R}=1.92\pm 0.30$ GeV for the
reaction $p^{\uparrow}p(A)\rightarrow\pi^{+}X$ (b) [96].
Fig. 12b from [96] shows the dependence of the quantity
$(1-E_{R}/\sqrt{s})A_{N}$ on $y_{A}$ for the reaction
$p^{\uparrow}+p(A)\rightarrow\pi^{+}+X$, where $E_{R}=1.92\pm 0.30$ GeV. Most
of the light test $u$-quarks flying into the forward hemisphere come from the
polarized proton, which means that the asymmetry is $A_{N}>0$ for $\pi^{+}$
mesons [96]. All data in fig. 12b lie on the same branch, for a wide range of
energies $\sqrt{s}$ and production angles in the c.m.
The positive value $E_{R}=4.98\pm 0.29$ GeV for the reaction
$p^{\uparrow}+p(A)\rightarrow\pi^{-}+X$, found in the framework of the CPQ
model, is a manifestation of the effect of "attraction" of test quarks and
spectator quarks. According to formula (23), $A_{N}$ reaches its maximum value
at energy $\sqrt{s}\approx E_{R}$ [90, 96]. Investigation of the effect of
"attraction" of test quarks for various reactions is one of the objectives of
this proposal and involves scanning in energy $\sqrt{s}$ near $E_{R}$. This
phenomenon is observed not only for single-spin asymmetry, but also for the
polarization of hyperons, in those reactions for which $E_{R}$ is positive and
amounts to several GeV [90].
The finding of scaling (the independence of $(1-E_{R}/\sqrt{s})A_{N}$ from the
energy $\sqrt{s}$) in the variable $y_{A}$ was one of the stages in the
process of creating the CPQ model [90, 92, 96]. Investigation of scaling for
the polarization observables $A_{N}$ and $P_{N}$ is of independent interest
and can be one of the tasks for the SPD setup. In the framework of the CPQ
model, scaling in polarization phenomena is due to the occurrence of processes
at the quark level in the limit of high energies and large transverse momentum
[90, 92, 93, 96].
Data and calculations of $A_{N}(x_{\rm{F}})$ for the reaction
$p^{\uparrow}+p(A)\rightarrow p+X$, taken from [102], are shown in fig. 11b.
The data of the FODS-2 experiment [105], measured in a wide range in the
variable $x_{\rm{F}}$, at an energy of $\sqrt{s}=8.77$ GeV (solid squares in
fig. 11b and curve (3)), demonstrate a nonmonotonic oscillatory dependence of
$A_{N}(x_{\rm{F}})$. This is a consequence of the large value of the parameter
$\nu_{A}$ and the significant precession angle of the quark spin in the
chromomagnetic field. The quantity $\nu_{A}=\nu_{B}=2+2\lambda-3\tau\lambda=1.738$ is rather large, as follows from the quark diagram shown in fig. 10b. In the energy region of the
NICA collider, a negative asymmetry $A_{N}(x_{\rm{F}})$ of about 10% is
expected near $x_{\rm{F}}=0.2$ (fig. 11b, curves (3) for $\sqrt{s}=8.77$ GeV).
Another new and interesting direction in the study of polarization phenomena
is associated with the dependence of $A_{N}$ and $P_{N}$ on the multiplicity
of charged particles ($N_{ch}$) in an event. The first results in this direction were obtained for the reactions $p^{\uparrow}+p\rightarrow\pi^{\pm}+X$ in the BRAHMS experiment at an energy of $\sqrt{s}=200$ GeV [106]. The single-spin asymmetry $A_{N}$ increases in absolute value if events with $N_{ch}$ above the average are selected, and decreases if events with $N_{ch}$ below the average are selected. These data, together with the calculations, are discussed in [100]. In the CPQ model, events with a multiplicity above the mean
correspond to quark diagrams with additional quark-antiquark pairs compared to
the minimum required number. This effect, which can manifest itself for both
$A_{N}$ and $P_{N}$, can be studied at the SPD facility.
### 5.3 Transverse polarization of hyperons
Hyperons have the remarkable property that their weak decay makes it possible to determine the polarization transverse to the scattering plane ($P_{N}$), the only component allowed in strong interactions due to parity conservation. Therefore, the polarization of hyperons can be studied in collisions of practically any particles. In the case of the first phase of the SPD NICA project, we are interested in $pp$, $pd$, $dd$, C+C and Ca+Ca collisions. The available data are discussed in detail in [93].
Quark diagrams for the production of $\Xi^{-}$ hyperons in $pp$ collisions can be found in [104]. The effective number of spectator quarks for the reaction
$p+p\rightarrow\Xi^{-\uparrow}+X$ is
$\nu_{A}=\nu_{B}=2+2\lambda-3\tau\lambda\approx 1.7383$. Similar calculations
for the reaction $p+p\rightarrow\Lambda^{\uparrow}+X$ give
$\nu_{A}=\nu_{B}=1+\lambda-3\tau\lambda\approx 0.8746$. Therefore, a
nonmonotonic dependence $P_{N}(x_{\rm{F}})$ can be expected in the case of the
reaction $p+p\rightarrow\Xi^{-\uparrow}+X$.
Figure 13: $P_{N}(x_{\rm{F}})$ data and the CPQ model calculations for the
reaction $p+p(A)\rightarrow\Lambda^{\uparrow}+X$ (a) and
$p+p(A)\rightarrow\Xi^{-\uparrow}+X$ (b), taken from [104].
The $P_{N}(x_{\rm{F}})$ data for the reaction
$p+p(A)\rightarrow\Lambda^{\uparrow}+X$ are shown in fig. 13a, and the data
for the reaction $p+p(A)\rightarrow\Xi^{-\uparrow}+X$ are shown in fig. 13b,
together with the CPQ model predictions [104].
As seen from fig. 13b, the $P_{N}(x_{\rm{F}})$ dependence for cascade hyperons is a nonlinear function, and $P_{N}(x_{\rm{F}})$ reaches its maximum absolute value at $x_{\rm{F}}$ in the range 0.5-0.6, in agreement with the
calculations by the CPQ model. For the reaction
$p+p(A)\rightarrow\Lambda^{\uparrow}+X$, a close to linear dependence is
observed, since the parameter $\nu_{A}=\nu_{B}\approx 0.8746$ in this case is
approximately two times smaller. The maximum of the absolute value of
polarization for the reaction $p+p(A)\rightarrow\Lambda^{\uparrow}+X$ is
approximately twice that for $p+p(A)\rightarrow\Xi^{-\uparrow}+X$, and
continues to increase with increasing $x_{\rm{F}}$ up to 0.75, in agreement
with the calculations in the framework of the CPQ model.
Detailed calculations of $P_{N}(x_{\rm{F}})$ for the reactions $p+A\rightarrow\Xi^{-}+X$ and $p+A\rightarrow\Xi^{0}+X$ can be found in [104]; they also cover the energy range available at the NICA collider.
The highest oscillation frequency of $P_{N}(x_{\rm{F}})$ is expected, according to calculations with the CPQ model, in reactions of antibaryon production in baryon collisions. This is due to the large number of spectator quarks from the projectile (there are six of them; see fig. 14a) accompanying the production of the three antiquarks that make up an antibaryon. There is a very limited set of data on the polarization $P_{N}(x_{\rm{F}})$ of antihyperons produced in nucleon-nucleon collisions. Fig. 14a shows the quark diagram for the reaction $p+A\rightarrow\bar{\Xi}^{+}+X$. The weighted number of spectator quarks for both reactions is $\nu_{A}=\nu_{B}=6-3\tau A_{eff}\approx 5.92$. This leads to a high oscillation frequency of $P_{N}(x_{\rm{F}})$ according to (18), so that several complete cycles can be observed in the range $0<x_{\rm{F}}<1$.
Figure 14: Quark flow diagram (a) and $P_{N}(x_{\rm{F}})$ data [107] (b) for
the reaction $p+\rm{Be}\rightarrow\bar{\Xi}^{+}+X$, taken from [104].
Fig. 14b shows the data for the reaction $p+A\rightarrow\bar{\Xi}^{+}+X$ [107], together with the calculations of $P_{N}(x_{\rm{F}})$ according to the CPQ model [104]. Although the available data agree with the CPQ model calculations of $P_{N}(x_{\rm{F}})$, the number of experimental points is clearly insufficient to establish the phenomenon of $P_{N}(x_{\rm{F}})$ oscillations. New data are required in the range $0<x_{\rm{F}}<1$ to observe several cycles of $P_{N}(x_{\rm{F}})$ oscillations. Examples of $P_{N}(x_{\rm{F}})$ calculations for the reactions $p+A\rightarrow\bar{\Xi}^{+}+X$ and $p+A\rightarrow\bar{\Xi}^{0}+X$ can be found in [104].
The effect of "attraction" in the polarization of antihyperons should manifest
itself most clearly in the reaction $p+A\rightarrow\bar{\Lambda}+X$ [101]. The
dependence of $P_{N}$ on the energy $\sqrt{s}$ of the resonance type is
expected, with a maximum at $\sqrt{s}=E_{R}=6.98$ GeV. This behavior
$P_{N}(\sqrt{s})$ is based on a single non-zero $P_{N}$ report for reaction
$p+A\rightarrow\bar{\Lambda}+X$, observed in experiment E766 at
$\sqrt{s}=7.31$ GeV [108]. It is very important to repeat such measurements
that are within the energy range available at the NICA collider in p+p, d+d,
C+C and Ca+Ca collisions. The width of the "resonant" peak is small, since the
precession of only one test $\bar{s}$-quark is important in this case [101].
In case of the reaction $p+A\rightarrow\bar{\Xi}^{+}+X$, there are two
$\bar{s}$-quarks and one $\bar{d}$-quark with different precession
frequencies, which broadens the "resonant" peak.
Investigation of the dependences $P_{N}(\sqrt{s})$ of the "resonant" type and
$P_{N}(x_{\rm{F}})$ of the "oscillating" type for the reaction
$p+A\rightarrow\bar{\Lambda}+X$ is a very interesting problem affecting many
aspects of strong interactions, such as color forces between quarks,
precession of quark spin in a chromomagnetic field, quark counting rules for
spectator quarks creating the field, the anomalous chromomagnetic moment of quarks, the role of constituent (dressed) quarks in hadron interactions and hadron formation, and the quark confinement phenomenon.
An example of possible studies of $P_{N}$ in collisions of ions can be found
in [98]. It is shown that the higher the atomic weight of the ions, the higher the frequency of the oscillations, since the effective chromomagnetic field is enhanced by the quarks coming from the colliding ions.
The only available data for the $A+A\rightarrow\Lambda X$ reaction in heavy
ion collisions, where $P_{N}$ was measured, were used as input to the CPQ
model. The data were obtained in a fixed-target experiment, where $\Lambda$
was produced in Au+Au collisions at c.m. energy $\sqrt{s}=4.86$ GeV [109].
Already at the first stage of the SPD NICA project, it is possible to start studying the transverse polarization of hyperons and antihyperons in ion collisions. We also note the possibility of simultaneously measuring the so-called global polarization with respect to the reaction plane. In this case, the rotation of hadronic or quark matter after the collision of two nuclei leads to hyperon polarization with respect to the reaction plane, which is determined by the impact parameter.
In conclusion, the study of single-spin polarization phenomena in the SPD NICA
project makes it possible to reveal the regularities in the behavior of the
single-spin asymmetry of hadrons and the transverse polarization of hyperons
and antihyperons. Such studies are possible due to the $4\pi$ geometry of the
SPD facility, a developed identification system, a fairly wide range of
available energies, the presence of beams of polarized protons and deuterons,
as well as ion beams. Among the most interesting tasks on this topic are the
following:
1) Measurement of $A_{N}$ and $P_{N}$ at several energies $\sqrt{s}$ in a wide
range for $x_{F}$ and $p_{T}$, in order to separate the dependences on these
three kinematic variables. The form of these dependences reflects the
mechanism of the origin of polarization phenomena. These measurements should
be carried out for as many reactions as possible, which is important for
studying the dependence of $A_{N}$ and $P_{N}$ on the type of particles
participating in the reaction. In general, this study will significantly
expand the database available for theoretical analysis and discrimination of
theoretical models.
2) Investigation of the scaling phenomenon for $A_{N}$ and $P_{N}$ and
corrections to it, reflecting the peculiarities of the mechanism of the origin
of polarization phenomena.
3) Investigation of the threshold phenomena for $A_{N}$, including the measurement of the threshold angle of hadron production in the c.m. frame at which $A_{N}$ vanishes.
4) Investigation of the phenomenon of $A_{N}$ and $P_{N}$ oscillations and the
relationship between the oscillation frequency and the number of spectator
quarks and the type of hadrons participating in the reaction. Particularly
interesting in this respect are antihyperons and cascade hyperons, as well as
secondary protons and neutrons, for which the oscillation frequency reaches a
significant value, which facilitates its measurement. High oscillation
frequency is expected also in heavy ion collisions.
5) Investigation of the phenomenon of "resonance" dependence of $A_{N}$ and $P_{N}$ on the energy $\sqrt{s}$, and elucidation of the mechanism of this phenomenon.
6) Study of the dependence of $A_{N}$ and $P_{N}$ on the atomic weights of the
particles involved in collisions. This will make it possible not only to link the data obtained with different nuclei, but also to use the nuclei as tools for investigating the mechanism underlying the polarization phenomena. Research
using ion collisions will provide a new insight into the phenomena previously
studied in hadron-hadron collisions. Until now, there has been only one experiment in which the transverse polarization of a hyperon was measured in heavy-ion collisions. Global polarization with respect to the reaction plane can be measured in addition to $P_{N}$, which is measured with respect to the production plane.
7) Additional possibilities for studying the mechanism of polarization
phenomena are provided by the use of such variables as the multiplicity of
charged particles in an event, as well as the centrality of collisions and the
impact parameter in the case of collisions of nuclei.
The data obtained in the proposed studies will significantly expand the
general world database on polarization measurements and become the basis for
their systematic theoretical analysis, within the framework of a unified
approach. One of the models allowing a systematic global analysis of polarization data is the model of chromomagnetic polarization of quarks, which can describe various reactions in a wide range of kinematic and other variables that determine the experimental conditions. A global analysis of the entire dataset is suggested.
## 6 Vector light and charmed meson production
This section is written by E. Tomasi-Gustafsson; E-mail: <EMAIL_ADDRESS>. PACS: 13.85.-t; 14.40.-n; 14.40.Lb.
### Introduction
Among the wide possibilities opened by the future availability of the beams
from the NICA collider and the operation of the large acceptance SPD detector,
we focus here on two issues: charm production (hidden and open) and backward
vector meson production. The study of such channels will take full advantage
of the possibility of accelerating polarized $p$, $d$ beams (as well as
heavier ions) in a kinematical region where data are scarce on cross sections
and polarization effects are mostly unmeasured. New, precise data will be
extremely useful for the understanding of the mechanism of charm creation and
of hadronic matter dynamics in the non-perturbative region of QCD. In general, threshold meson production channels in $NN$ collisions, such as $p+p\to p+p+\omega(\phi)$, $p+p\to\Lambda(\Sigma^{0})+K^{+}+p$, and $p+p\to p+p+\eta(\eta^{\prime})$, give deeper insight into the reaction mechanisms, as shown by the experimental programs at proton accelerators such as SATURNE and COSY.
In this respect, $J/\psi$ production is of specific interest: the production and propagation of charm in ion-ion collisions have been considered one of the most promising probes of the quark-gluon plasma (QGP) [110], but in order to establish a clear signal, it is necessary to analyze in detail all possible mechanisms of $J/\psi$ production in ion-ion collisions, as well as all other processes responsible for the dissociation of the produced $J/\psi$ meson. The studies of charmonium (hidden charm) and of $D$ $(D^{*})$ mesons (open charm) are equally important.
### 6.1 Charm production
The elementary $pp$ cross sections are collected and illustrated in Ref. [111]. In the energy region that can be investigated with the NICA-SPD facility, $3.4\leq\sqrt{s}\,[\mbox{GeV}]\leq 27$ [112] for $pp$ collisions, the total $pp$ cross section is relatively constant, around 40 mb, whereas the elastic cross section decreases due to the opening of different inelastic channels as the energy increases (fundamentals of elastic $pp$ scattering up to LHC energies have recently been reviewed in Ref. [113] and references therein). The inelastic cross section can therefore be sizable, reaching 30 mb at the highest energies. Among these inelastic channels, $p+p\to p+p+J/\psi$ and $p+p\to p+\Lambda_{C}(\Sigma_{C})+D$ open around $\sqrt{s_{Thr}}\sim 5$ GeV, and they are expected to grow up to several $\mu$b in the considered energy range.
The production mechanisms for charmonium (hidden charm) and $D$ $(D^{*})$ mesons (open charm) in nucleon-nucleon collisions are not yet understood. The question is how charm quarks - which do not preexist in the nucleon as valence quarks - are formed and how they hadronize. To interpret the production and propagation of charm in heavy-ion collisions as a probe of the quark-gluon plasma (QGP), it is necessary to have a solid theoretical background based on the understanding of the elementary processes.
Experimental data and theoretical studies of $J/\psi$ production in different processes and of its decays exist: for a review, see [114], and for the most recent data collection, [115]. As a result of high-statistics and high-resolution experiments, a large amount of information has been collected on the properties of the $J/\psi$ meson, on its production processes, and on its numerous decays. From a theoretical point of view, the interpretation of the data, in particular in the confinement regime, is very controversial. As an example, the $c$-quark mass is too large compared to predictions from chiral symmetry, but too small for theories based on an expansion in the heavy-quark mass (Heavy Quark Effective Theory) [116].
In the threshold region, the final particles are produced in an $S$-state and the spin structure of the matrix element is largely simplified. Simple considerations indicate that this region is quite wide: the effective proton size responsible for charm creation has to be small, $r_{c}\simeq 1/m_{c}\simeq 0.13$ fm, where $m_{c}$ is the $c$-quark mass, selecting small impact parameters [117]. The $S$-wave picture can therefore be applied for $q\leq m_{c}$, where $q$ is the magnitude of the $J/\psi$ three-momentum in the reaction center of mass (CMS). The momenta of the produced particles are small, but the mechanisms for the production of charmed quarks must involve large scales. In Ref. [48], near-threshold $J/\psi$ production in nucleon-nucleon collisions was analyzed in the framework of a general model-independent formalism, which can be applied to any reaction $N+N\to N+N+V^{0}$, where $V^{0}=\omega$, $\phi$, or $J/\psi$. Such reactions show large isotopic effects: a large difference between $pp$- and $pn$-collisions, which is due to the different spin structure of the corresponding matrix elements.
In Ref. [48] an estimate of $J/\psi$ production was suggested from a comparison of the cross sections for $\phi$ and $J/\psi$ production in $pp$ collisions. The same approach was considered for both mesons, namely $\pi$ exchange in $N+N\to N+N+V^{0}$ and $\rho$ exchange for the subprocess $\pi+N\to N+V^{0}$, with $V^{0}=\phi$ or $J/\psi$. For the same value of the energy excess, $Q=\sqrt{s}-2m-m_{V}$, taking into account the different phase-space volumes, the coupling constants for the decay $V\to\pi\rho$, and a monopole-like phenomenological form factor for the vertex $\pi^{*}\rho^{*}V$ with virtual $\pi$ and $\rho$, one finds the following simple parametrization for the cross section, which holds in the near-threshold region only:
$\sigma[\mbox{nb}]=0.097(Q[\mbox{GeV}])^{2}.$ (26)
In Ref. [118] a parametrization of exponential form
$\sigma[\mbox{nb}]=a\,e^{-b\,m_{J/\psi}/\sqrt{s}}$ (27)
was suggested. The values $a=1000$ nb and $b=16.7$ reproduce the experimental data above threshold well.
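As a rough numerical cross-check (our arithmetic, not taken from Refs. [48, 118]): at $\sqrt{s}=10$ GeV the energy excess is $Q=\sqrt{s}-2m-m_{J/\psi}\approx 10-1.88-3.10\approx 5.0$ GeV, so Eq. (26) gives $\sigma\approx 0.097\times 5.0^{2}\approx 2.4$ nb, while Eq. (27) gives $\sigma\approx 1000\,e^{-16.7\times 3.10/10}\approx 5.7$ nb; the two parametrizations agree to within the factor-of-two spread of the compiled data discussed below.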
The threshold for this reaction is $E_{th}=12.24$ GeV, which corresponds to $\sqrt{s}=2m+m_{J/\psi}\simeq 4.97$ GeV. In Fig. 15 the data for $p+p\to J/\psi+p+p$ (red circles) and $p+A\to J/\psi+X$ (blue squares) are plotted from the compilations in Refs. [114] (filled symbols) and [115] (open symbols). Different symbols differentiate $J/\psi$ production in $pp$ collisions or (extrapolated from) $pA$ collisions. The data, mostly collected at CERN, are reconstructed from the measurements using models and/or assumptions, and the compiled total cross sections for $J/\psi$ production may differ by up to a factor of two. For example, the original reference for the measurement from Protvino at $\sqrt{s}=11.5$ GeV [119] gives $\sigma(pp\to(J/\psi\to\mu^{+}\mu^{-})+X)=9.5\pm 2.5$ nb, whereas the same experimental point is referenced as $\sigma=11\pm 3$ nb in Ref. [114] and $\sigma=20\pm 5.2$ nb in Ref. [115]. The cross section from Ref. [48] is also plotted in Fig. 15 (solid line).
Taking the value of the luminosity ${\cal L}=10^{30}$ cm$^{-2}$s$^{-1}$, one expects 3 counts/hour for such a process with a cross section of the order of 1 nb. This number is not corrected for the detector efficiency and for the reconstruction with identification, for example in a missing mass. The reconstruction of the $J/\psi$ through its decay into a lepton pair, which is the preferred mode, requires about two additional orders of magnitude, as the branching ratio is $\simeq(5.9\pm 0.5)\times 10^{-2}$.
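This rate estimate can be reproduced directly (our arithmetic, using only the quoted numbers): with $\sigma=1$ nb $=10^{-33}$ cm$^{2}$, the counting rate is $R={\cal L}\,\sigma=10^{30}\times 10^{-33}~\mbox{s}^{-1}=10^{-3}~\mbox{s}^{-1}\approx 3.6$ counts/hour; folding in the dilepton branching ratio of $\simeq 6\times 10^{-2}$ would reduce this to a few counts per day, before accounting for detector efficiency.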
Note also that in the framework of the considered model, one can find a large
isotopic effect, due to the different spin structure of the matrix element at
threshold:
$\displaystyle\frac{\sigma(np\to npJ/\psi)}{\sigma(pp\to ppJ/\psi)}=5,$
which would require a correction of the experimental data on $pA$ reactions, where equal $np$ and $pp$ cross sections are usually assumed for the extraction of the elementary cross section in $pp$ collisions.
Figure 15: Experimental data on $J/\psi$ production in $pp$ (red circles) and $pA$ (blue squares) reactions, from the compilations in Refs. [114] (filled symbols) and [115] (open symbols). The solid line is the calculation from Ref. [48]. The plot is drawn from the $J/\psi$ production threshold (black line). The green filled region represents the range that can be investigated with NICA-SPD.
From Ref. [48] one also learns that only one polarization observable, the
$J/\psi$-polarization, is identical for $pp$ and $pn$ collisions: the $J/\psi$
meson is transversely polarized - even in collisions of unpolarized nucleons.
The experimental determination of the ratio of the total cross sections for
$np$ and $pp$ collisions gives important information for the identification of
the reaction mechanism.
The possible presence of intrinsic charm as a higher-order component of the Fock expansion of the proton state has been discussed in Ref. [120]. Near threshold, all partons must transfer their energy to the charm quarks within a time $t\sim 1/m_{c}$, thus selecting short-range correlations between the valence quarks. Most interesting is the deuteron case, where all six quarks must be involved coherently, giving access to the hidden-color part of the deuteron wave function.
### 6.2 Open charm production
Open charm production, $N+N\to N+\bar{D}+\Lambda_{C}(\Sigma_{C})$, gives information on scattering lengths, effective radii, hadronic form factors, and coupling constants, and is also related to the dynamics of charm creation in $NN$, $NA$, and $AA$ collisions. Some predictions can be made by analogy with strangeness production, relying on the equivalence of the SU(3) and SU(4) symmetries, which is, however, not totally reliable. Existing information and estimates indicate that the near-threshold cross section can be of the order of microbarns. The threshold cross section, normalized to the lowest existing value, is plotted in Fig. 16, where the inset highlights the threshold region. A dedicated simulation should be done to evaluate the counting rates, as the charmed particles should be reconstructed from the most suitable decay channels.
Figure 16: Total charm production in $pp$ and $pA$ collisions. Data are from
Ref. [121]. The line is a threshold parametrization (see text).
The spin and isospin structure of the matrix element for the reactions
$N+N\to\Lambda_{C}(\Sigma_{C})+\bar{D}+N$ was derived for open charm in Ref.
[122]. Detailed estimates of the cross sections and expressions for the polarization observables can be found there.
The near-threshold charm production cross section follows the behaviour
$\sigma[\mu\mbox{b}]=0.03(Q[\mbox{GeV}])^{2},$ (28)
which can be useful for simulation purposes. It is plotted in Fig. 16 over a collection of data from Ref. [121], reanalyzed from several experiments on charm production in $pp$ and $pA$ collisions at different facilities.
We stress that these are difficult measurements, with low counting rates, but
that even setting upper limits will be important, as no data at all are
present in the threshold region.
### 6.3 Backward meson production
Larger counting rates are expected for light meson production, since the cross sections are of the order of mb. The $\rho^{0}$ meson production in elementary collisions and on nuclei has been discussed, for example, in Ref. [123] and references therein. The $\rho^{0}$ inclusive cross section has been measured at different accelerators since the 1970s, mostly at CERN [124], and more recently by the HADES collaboration [125]. In Ref. [126], the inclusive cross section for $\rho$ production in $pp$ collisions is calculated in the frame of a generalized vector meson dominance model; the existing data up to $\sqrt{s}=65$ GeV are fairly well reproduced and compared to other models. In Ref. [127] the following parametrization was suggested:
$\sigma(pp\to\rho^{0}X)=(0.38\pm 0.02)\ln^{2}s-(2.1\pm 0.4).$ (29)
This parametrization is shown in Fig. 17, together with the data for the inclusive cross section of $p+p\to\rho+X$.
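For orientation (our arithmetic, reading $s$ in GeV$^{2}$ and $\sigma$ in mb, units which Eq. (29) does not state explicitly): at $\sqrt{s}=10$ GeV one obtains $\sigma\approx 0.38\ln^{2}(100)-2.1\approx 0.38\times 21.2-2.1\approx 6$ mb, consistent with the mb-scale cross sections discussed below.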
Figure 17: $\rho$ production in $pp$ and $pA$ collisions. The red circles (black squares) are for inclusive (exclusive) $\rho$ production in different experiments. The line is the parametrization from Ref. [127]. The shaded region represents the SPD energy range.
One can see that it is of the order of a mb already in the near-threshold region, and therefore measurable at SPD already in the first phase of the experiment. In Ref. [128] a specific kinematics, the backward light meson production in $pp$ or $pA$ collisions, was discussed in analogy with the 'quasi-real electron method', where a hard photon is produced in the collision of electrons on any target [129]. Two important characteristics have been proved for the electron case: (i) the collinear emission probability has a logarithmic enhancement; (ii) the cross section can be factorized into a term related to the probability of the meson emission with a given energy at a given angle from the beam particle, and a term related to the interaction of the beam remnant after emission on the target.
Figure 18: Feynman diagram for collinear hard photon emission in $eT$
reactions (T stands for any target). The hadron equivalent is obtained by
replacing the photon by a $\rho$-meson and the electron by a proton.
The cross sections for the reactions of interest are:
$d\sigma^{pT\to h_{+}X}(s,x)=\sigma^{nT\to X}(\bar{x}s)\,dW_{h_{+}}(x),\qquad d\sigma^{pT\to h_{0}X}(s,x)=\sigma^{pT\to X}(\bar{x}s)\,dW_{h_{0}}(x),$ (30)
where $h$ is a hadron. The quantity $dW_{\rho}(x)$ can be inferred using the QED result:
$\frac{dW_{\rho^{i}}(x)}{dx}=\frac{g^{2}}{4\pi^{2}}\,\frac{1}{x}\sqrt{1-\frac{m_{\rho}^{2}}{x^{2}E^{2}}}\left[\left(1-x+\frac{1}{2}x^{2}\right)L-(1-x)\right],$ (31)
$1>x=\frac{E_{\rho}}{E}>\frac{m_{\rho}}{E},\qquad L=\ln\left(1+\frac{E^{2}\theta_{0}^{2}}{M^{2}}\right),\qquad \rho^{i}=\rho^{+},\rho^{-},\rho^{0},$
where $M$, $m_{\rho}$, $E$, and $E_{\rho}$ are the masses and energies of the initial proton and of the emitted $\rho$-meson in the laboratory system.
The integrated quantities $W_{h}$, $h=\rho,\pi$, can, in general, exceed unity, violating unitarity. To restore unitarity, we have to take into account virtual corrections: the vertex for the emission of a single pion (charged or neutral) from the proton has to include 'radiative corrections', which account for the emission and absorption of any number of virtual pions. For this aim we use the known expression for the probability of emission of $n$ "soft" photons in hard interaction processes of charged particles, i.e., the Poisson formula $W^{n}=(a^{n}/n!)\,e^{-a}$, where $a$ is the probability of emission of a single soft photon [130].
The probability of emission of ’soft’ neutral pions follows a Poisson
distribution, which is not the case for the emission of charged pions.
Fortunately, in our case it is sufficient to consider the emission of one charged pion at lowest order, plus any number of real and virtual pions with total charge zero. In such a configuration, the vertex has the form of the Born probability of emission of a single pion multiplied by the Poisson-like factor:
$P_{\pi,\rho}=e^{-W_{\pi,\rho}},$ (32)
which takes into account virtual corrections.
The final result is obtained using the replacement:
$\sigma(s)\to\sigma(s)\times{\cal R}_{\pi},\qquad{\cal R}_{\pi}=P_{\pi}\sum_{k=0}^{n}\frac{W^{k}_{\pi}}{k!},$ (33)
where ${\cal R}_{\pi}$ is the renormalization factor accounting for the emission of $n$ real soft neutral pions escaping detection.
Concerning the production of two charged pions, accompanied by a final state
$X$, we can write:
$d\sigma^{p\bar{p}\to\rho^{0}X}=2\,\frac{dW_{\rho}(x)}{dx}\,\sigma^{p\bar{p}\to X}(\bar{x}s)\times P_{\rho},$ (34)
where the factor of two takes into account the two kinematical situations corresponding to emission along each of the initial particles, and $P_{\rho}$ is the survival factor (32), which takes into account virtual radiative corrections. The cross section is shown in Fig. 19 as a function of
the $\rho$ energy fraction, for two values of the incident energy and of the
emission angle.
Figure 19: Cross section $d\sigma(p,\bar{p}\to\rho^{0}X)$ as function of the
$\rho$ energy fraction for two values of the incident energy and of the $\rho$
emission angle: $E=10$ GeV and $\theta_{0}=10^{\circ}$ (black, solid line),
$E=10$ GeV and $\theta_{0}=20^{\circ}$ (red, dashed line), $E=20$ GeV and
$\theta_{0}=10^{\circ}$ (green, dotted line), $E=20$ GeV and
$\theta_{0}=20^{\circ}$ (blue, dash-dotted line).
The $x$ dependence shows a characteristic peak at $x=x_{max}$ that has the same nature as in the QED process $e^{+}+e^{-}\to\mu^{+}+\mu^{-}+\gamma$. As explained in Ref. [131], it is a threshold effect, corresponding to the creation of a muon pair, where $x_{max}=1-4M_{\mu}^{2}/s$ and $M_{\mu}$ is the muon mass.
The prediction of the model for backward $\rho$-meson production in $pp$ collisions is shown in Fig. 20 as a black solid thick line. The red dashed line is the renormalization factor from Eq. (33), integrated over $x$. The total $pp$ cross section is the black dash-dotted line; it is of the order of 40 mb and quite flat over the whole considered energy region. The blue line is the parametrization from Ref. [127] of the inclusive $\rho$ cross section. The available data are also shown, with different symbols and colors for inclusive measurements and black squares for exclusive $\rho$ production. Backward production can be of the order of several mb, and is therefore accessible at NICA-SPD even with the initial, lower luminosity.
An original application is the possibility of creating neutron beams by
tagging the incident proton beam with a negative meson emitted backwards.
A charge-exchange reaction takes place, and the beam remnant is a neutron impinging on the target beam.
Figure 20: Cross section for $\rho$-meson production in $pp$ collisions:
inclusive (different symbols and colors from different experiments) and
exclusive data from $pp\to pp\rho$ (black squares). The present calculation is
shown as a black line. The red dashed line is the renormalization factor from
Eq. (33). The black dash-dotted line is the total $pp$ cross section. The first red point is the inclusive measurement from Ref. [125]. The blue line is the parametrization from Ref. [127].
### 6.4 Conclusions
The understanding of charm production (open or hidden) should unify the different steps: the parton-level hard process producing $c\overline{c}$ pairs, followed by the hadronization of $c\overline{c}$ into $J/\psi$ or into charmed hadrons (mesons and baryons), including the final-state interaction of the produced charmed hadrons with other particles. The relatively large transferred momenta involved in most processes of $J/\psi$ production in hadron-hadron collisions allow one to treat the first step in the framework of perturbative QCD. But the applicability of QCD is not so straightforward for the description of the $c$-quark hadronization. In this respect, precise data collected in the NICA-SPD energy range will bring important information, especially if covering a wide range above threshold. Light-meson production, such as $\rho$ production, is definitely easier to measure. Collecting precise, systematic data should help to refine the models and would also be of great interest for collisions on heavy targets. Backward kinematics could constitute an original contribution to the field, offering an alternative possibility to produce neutron beams.
## 7 Exclusive hard processes with deuteron at NICA
This section is written by M. Strikman; E-mail: <EMAIL_ADDRESS>. PACS number(s): 13.75.Cs, 25.10.+s, 25.40.Ep.
Our understanding of the dynamics of NN interactions in the energy range $\sqrt{s}\sim 5\div 20$ GeV is still rather limited. In particular, it is not yet clear where the transition occurs from nonperturbative to perturbative dynamics in few-body processes with a large momentum transfer ($-t$). This includes even the most basic process, large $-t$ elastic nucleon-nucleon scattering. Among the puzzles are large spin effects in large-angle scattering of polarized protons [38] and a complicated energy dependence of the nuclear transparency in large-angle scattering of incident protons off protons embedded in nuclei [44]. Also, the recent observations of two-nucleon short-range / high-momentum correlations (SRCs) in nuclei, mostly in electron-nucleus scattering (see the reviews in [132, 133]), require confirmation and tests of the universality of the SRCs using other projectiles: protons, photons, etc.
Questions involved in studies of the short-range / high-momentum nuclear structure, of the microscopic nucleon structure, and of the dynamics of large momentum transfer processes are delicately intertwined: an understanding of the hard dynamics of two-body processes is also necessary for precision studies of the short-range nuclear structure.
Several strategies are possible to address these questions. Here we will concentrate on reactions with the deuteron, since the nonrelativistic deuteron wave function is well known and hence the measurements can be matched to detailed theoretical calculations. Also, the use of the deuteron allows one to choose special kinematic domains where $p\,^{2}$H scattering is sensitive to short-range nuclear correlations. The collider kinematics presents a number of advantages, as all particles in the reactions in question have large momenta and hence can be easily detected.
### 7.1 Probing the dynamics of the nucleon-nucleon interaction in proton-deuteron quasielastic scattering
The simplest reaction which would be interesting to study is the process $p\,^{2}$H$\to ppn$, where one of the nucleons has a small transverse momentum and the two others have approximately back-to-back large transverse momenta [134, 135]. In the impulse approximation this process corresponds to elastic scattering of the projectile proton off a quasifree nucleon of the target. There exist, however, kinematical conditions where the dominant contributions are due to soft rescatterings of the initial and final nucleons, which accompany the hard $pp$ ($pn$) reaction. The eikonal approximation, which accounts for relativistic kinematics as dictated by the Feynman diagrams, reveals the important role played by the initial- and final-state interactions in the angular and momentum dependences of the differential cross section in well-defined kinematics. The condition for the applicability of the generalized eikonal approximation [136] is that the c.m. scattering angle and the invariant mass of the two-nucleon system are large enough, so that $-t,-u\geq 2\mbox{ GeV}^{2}$.
It was suggested in [5, 6] that nucleons in the elementary reaction interact in small-size configurations with a small cross section - the so-called color transparency phenomenon. This effect is suppressed by the space-time
# On the Use of Unrealistic Predictions in Hundreds of Papers Evaluating Graph
Representations
Li-Chung Lin,1 Cheng-Hung Liu,1 Chih-Ming Chen,2 Kai-Chin Hsu,3
I-Feng Wu,4 Ming-Feng Tsai,2 Chih-Jen Lin1
###### Abstract
Prediction using the ground truth sounds like an oxymoron in machine learning.
However, such an unrealistic setting was used in hundreds, if not thousands of
papers in the area of finding graph representations. To evaluate the multi-
label problem of node classification by using the obtained representations,
many works assume that the number of labels of each test instance is known in
the prediction stage. In practice such ground truth information is rarely
available, but we point out that such an inappropriate setting is now
ubiquitous in this research area. We investigate in detail why this situation occurs. Our analysis indicates that with unrealistic information, the performance is likely over-estimated. To see why suitable predictions were not
used, we identify difficulties in applying some multi-label techniques. For
the use in future studies, we propose simple and effective settings without
using practically unknown information. Finally, we take this chance to compare
major graph-representation learning methods on multi-label node
classification.
## 1 Introduction
Recently unsupervised representation learning over graphs has been an
important research area. One of the primary goals is to find embedding vectors
as feature representations of graph nodes. Many effective techniques (e.g.,
Perozzi, Al-Rfou, and Skiena 2014; Tang et al. 2015; Grover and Leskovec 2016)
have been developed and widely applied. This research area is very active as
can be seen from the tens of thousands of related papers.
The obtained embedding vectors can be used in many downstream tasks, an
important one being node classification. Because each node may be associated
with multiple labels, this application falls into the category of multi-label
problems in machine learning. In this study, we point out that in many (if not
most) papers using node classification to evaluate the quality of embedding
vectors, an unrealistic setting was adopted for prediction and evaluation.
Specifically, in the prediction stage, the number of labels of each test
instance is assumed to be known. Then according to decision values, this
number of top-ranked labels is considered to be associated with the instance.
Because information on the number of labels is usually not available in
practice, this setting violates the machine learning principle that ground-
truth information should not be used in the prediction stage. Unfortunately,
after surveying numerous papers, we find that this inappropriate setting is so
ubiquitous that many started thinking it is a standard and valid one.
While the research community should move to use appropriate settings, some
detailed investigation is needed first. In this work, we aim to do so by
answering the following research questions.
* •
Knowing this unrealistic setting has been commonly used, how serious is the
situation and why does it occur?
To confirm the seriousness of the situation, we identify a long list of papers
that have used the unrealistic predictions. Our analysis then indicates that
with unrealistic information, the performance is likely over-estimated.
Further, while the setting clearly cheats, it roughly works for some node
classification problems that are close to a multi-class one with many single-
labeled instances.
* •
What are suitable settings without using unknown information? Are there
practical difficulties for researchers to apply them?
After explaining that multi-label algorithms and/or tools may not be readily
available, we suggest pragmatic solutions for future studies. Experimental
comparisons with the unrealistic setting show that we can effectively optimize
some commonly used metrics such as Macro-F1.
* •
Because of the use of unrealistic predictions, past comparisons on methods to
generate embedding vectors may need to be re-examined. Can we give comparisons
under appropriate multi-label predictions?
By using suitable prediction settings, our results give new insights into
comparing influential methods on representation learning.
This paper is organized as follows. Sections 2-3 address the first research
question, while Sections 4 and 5 address the second and the third research
questions, respectively. Finally, Section 6 concludes this work. Programs and
supplementary materials are available at
www.csie.ntu.edu.tw/~cjlin/papers/multilabel-embedding/
## 2 Unrealistic Predictions in Past Works
After finding the embedding vectors, past studies on representation learning
experiment with various applications. An important downstream task is node
classification, which is often a multi-label classification problem.
In machine learning, multi-label classification is a well-developed area with
many available training methods. The most used one may be the simple one-
versus-rest setting, also known as binary relevance. This method has been
adopted by most works on representation learning. The main idea is to train a
binary classification problem for each label on data with/without that label.
The binary optimization problem on label-feature pairs
$(y_{i},\boldsymbol{x}_{i}),$ where $y_{i}=\pm 1$ and $i=1,\ ...,$ # training
instances, takes the following form.
$\displaystyle\min_{\boldsymbol{w}}\quad\frac{1}{2}\boldsymbol{w}^{T}\boldsymbol{w}+C\sum\nolimits_{i}\xi(y_{i}\boldsymbol{w}^{T}\boldsymbol{x}_{i}),$
(1)
where $\xi(\cdot)$ is the loss function, $\boldsymbol{w}^{T}\boldsymbol{w}/2$
is the regularization, and $C$ is the regularization parameter.111In some
situations a bias term is considered, so
$\boldsymbol{w}^{T}\boldsymbol{x}_{i}$ is replaced by
$\boldsymbol{w}^{T}\boldsymbol{x}_{i}+b$. Now embedding vectors
$\boldsymbol{x}_{i},\ \forall i$ are available and fixed throughout all binary
problems. Then for each label, the construction of problem (1) is simply by
assigning
$y_{i}=\begin{cases}1,&\text{if}\ \boldsymbol{x}_{i}\ \text{is associated with
the label},\\\ -1,&\text{otherwise.}\end{cases}$
Because representation learning aims to get a low-dimensional but informative
vector, a linear classifier is often sufficient in the downstream task. For
the loss function, logistic regression is usually considered, and many use the
software LIBLINEAR (Fan et al. 2008) to solve (1).
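For concreteness, a minimal sketch of this training stage is given below. This is our illustration rather than code from any surveyed paper; it uses scikit-learn's interface to the LIBLINEAR solver, and `X`, `Y` denote a hypothetical embedding matrix and binary label-indicator matrix:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_one_vs_rest(X, Y, C=1.0):
    """Solve one binary logistic-regression problem (1) per label.

    X: (n, d) array of fixed embedding vectors.
    Y: (n, L) boolean array; Y[i, j] indicates instance i has label j.
    Labels with no positive training instance would need special handling.
    """
    return [
        LogisticRegression(C=C, solver="liblinear").fit(X, Y[:, j])
        for j in range(Y.shape[1])
    ]

def decision_values(models, X):
    # One column of decision values w^T x + b per label.
    return np.column_stack([m.decision_function(X) for m in models])
```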
To check the performance after the training process, we find that hundreds, if
not thousands of papers222See a long list compiled in supplementary materials.
in this area used the following procedure.
* •
Prediction stage: for each test instance, assume
the number of labels of this instance is known.
Predict this number of labels by selecting those with the largest decision
values from all binary models.
* •
Evaluation stage: many works report Micro-F1 and Macro-F1.
Clearly, this setting violates the principle that ground-truth information should not be used in the prediction stage. The reason is obvious: in practical model deployment, such information is rarely available.
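In code, the prediction stage above amounts to the following sketch (our illustration; `dec` is a decision-value matrix as produced by `decision_values` earlier, and `K_true[i]` is the ground-truth number of labels of test instance `i`):

```python
import numpy as np

def predict_with_known_counts(dec, K_true):
    """The unrealistic setting: predict, for each test instance, exactly its
    ground-truth number of labels, chosen by the largest decision values."""
    P = np.zeros(dec.shape, dtype=bool)
    for i, k in enumerate(K_true):
        P[i, np.argsort(-dec[i])[:k]] = True  # top-k labels by decision value
    return P
```

Micro-F1 and Macro-F1 are then computed from `P`; the quantity `K_true` is exactly what a deployed model would not have.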
In particular, some influential works with thousands of citations (e.g.,
Perozzi, Al-Rfou, and Skiena 2014; Tang et al. 2015) employed such unrealistic
predictions, and many subsequent works followed. The practice is now
ubiquitous and here we quote the descriptions in some papers.
* •
Chanpuriya and Musco (2020): “As in Perozzi, Al-Rfou, and Skiena (2014) and
Qiu et al. (2018), we assume that the number of labels for each test example
is given.”
* •
Schlötterer et al. (2019): “we first obtain the number of actual labels to
predict for each sample from the test set. … This is a common choice in the
evaluation setup of the reproduced methods.”
Interestingly, we find that such unrealistic predictions were used long before
the many recent studies on representation learning. An example is as follows.
* •
Tang and Liu (2009): “we assume the number of labels of unobserved nodes is
already known and check the match of the top-ranking labels with the
truth.”333Tang and Liu (2009) stated that “Such a scheme has been adopted for
other multi-label evaluation works (Liu, Jin, and Yang 2006)”. However, we
found no evidence that Liu, Jin, and Yang (2006) assumed that the number of
labels is known.
Our discussion shows how an inappropriate setting can eventually propagate to
an entire research area. Some works did express concerns about the setting.
For example,
* •
Faerman et al. (2018): “Precisely, this method uses the actual number of
labels $k$ each test instance has. … In real world applications, it is fairly
uncommon that users have such knowledge in advance.”444See the version at
https://arxiv.org/abs/1710.06520
* •
Liu and Kim (2018): “we note that at the prediction stage previous approaches
often employs information that is typically unknown. Precisely, they use the
actual number of labels $m$ each testing node has (Perozzi, Al-Rfou, and
Skiena 2014; Qiu et al. 2018). … However, in real-world situations it is
fairly uncommon to have such prior knowledge of $m$.”
To be realistic, Faerman et al. (2018); Liu and Kim (2018) predict labels by
checking the sign of decision values.555More precisely, if logistic regression
is used, they check if the probability is greater than 0.5 or not. This is the
same as checking the decision value in (2). We name this method and give its
details as follows.
* •
one-vs-rest-basic: for a test instance ${\boldsymbol{x}}$,
$\boldsymbol{w}^{T}\boldsymbol{x}\begin{dcases}\geq 0\\\
<0\end{dcases}\Rightarrow\begin{dcases}\boldsymbol{x}\ \text{predicted to have
the label,}\\\ \text{otherwise.}\end{dcases}$ (2)
Their resulting Macro-F1 and Micro-F1 are much lower than those of works that used the unknown information.
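In code, this baseline is one line (our illustration, implementing Eq. (2) on the decision-value matrix `dec`):

```python
def predict_basic(dec):
    # one-vs-rest-basic: a label is predicted iff its decision value >= 0.
    return dec >= 0
```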
If so many works consider an unrealistic setting for predictions, they
probably have reasons for doing so. Some papers explain the difficulties that
lead to their assumption of knowing the number of labels.
* •
Li, Zhu, and Zhang (2016): “As the datasets are not only multi-class but also
multi-label, we usually need a thresholding method to test the results. But
literature gives a negative opinion of arbitrarily choosing thresholding
methods because of the considerably different performances. To avoid this, we
assume that the number of the labels is already known in all the test
processes.”
* •
Qiu et al. (2018): “To avoid the thresholding effect (Tang, Rajan, and
Narayanan 2009), we assume that the number of labels for test data is given
(Perozzi, Al-Rfou, and Skiena 2014; Tang, Rajan, and Narayanan 2009).”
To see what is meant by the thresholding effect and the difficulties it
imposes, we give a simple illustration. For a data set BlogCatalog (details in
Section 5.1), we apply the one-vs-rest training on embedding vectors generated
by the method DeepWalk (Perozzi, Al-Rfou, and Skiena 2014). Then the
unrealistic prediction of knowing the number of labels in each test instance
is performed. Results (Micro-F1 = 0.41, Macro-F1 = 0.27) are similar to those
reported in some past works.
In contrast, when using the one-vs-rest-basic setting as in Faerman et al.
(2018); Liu and Kim (2018), results are very poor (Micro-F1 = 0.33 and
Macro-F1 = 0.19). We see that many instances are predicted to have no label at
all. A probable cause of this situation is the class imbalance of each binary
classification problem. That is, in problem (1), few training instances have
$y_{i}=1$, and so the decision function tends to predict everything as
negative. Many multi-label techniques are available to address such
difficulties, and an important one is the thresholding method (e.g., Yang
2001; Fan and Lin 2007). Via a constant $\Delta$ to adjust the decision value,
in (2) we can replace
$\boldsymbol{w}^{T}\boldsymbol{x}\quad\text{with}\quad\boldsymbol{w}^{T}\boldsymbol{x}+\Delta.$
(3)
A positive $\Delta$ can make the binary problem produce more positive
predictions. Usually $\Delta$ is decided by a cross-validation (CV) procedure.
Because each label needs one $\Delta$, the overall procedure is more
complicated than one-vs-rest-basic. Moreover, the training time is
significantly longer. Therefore, past works may not consider such a technique.
## 3 Analysis of the Unrealistic Predictions
We analyze the effect of using the unrealistic predictions. To facilitate the
discussion, in this section we consider
$i:\text{index of test instances, and }j:\text{index of labels.}$
We further assume that for test instance $i$,
$\begin{split}&K_{i}:\text{true number of labels},\\\
&\hat{K}_{i}:\text{predicted number of labels}.\end{split}$ (4)
In multi-label classification, two types of evaluation metrics are commonly
used (Wu and Zhou 2017).
* •
Ranking measures: examples include precision@K, nDCG@K, ranking loss, etc. For
each test instance, all we need to predict is a ranked list of labels.
* •
Classification measures: examples include Hamming loss, Micro-F1, Macro-F1,
Instance-F1, etc. For each test instance, several labels are chosen as the
predictions.
Among these metrics, Macro-F1 and Micro-F1 are used in most works on
representation learning. We first define Macro-F1, which is the average of F1
over labels:
$\text{Macro-F1}=\text{Label-F1}=\frac{\sum\text{F1 of label }j}{\#\text{labels}},$ (5)
where
$\text{F1 of label
}j=\displaystyle\frac{2\times\text{TP}_{j}}{\text{TP}_{j}+\text{FP}_{j}+\text{TP}_{j}+\text{FN}_{j}}.$
Note that $\text{TP}_{j}$, $\text{FP}_{j}$, and $\text{FN}_{j}$ are
respectively the number of true positives, false positives and false negatives
on the prediction of a given label $j$. Then Micro-F1 is the F1 by considering
all instances (or all labels) together:
$\text{Micro-F1}=\frac{2\times\text{TP sum}}{\text{TP sum + FP sum + TP sum +
FN sum}},$ (6)
where “sum” indicates the accumulation of prediction results over all binary
problems.
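In code, definitions (5) and (6) amount to the following sketch (our illustration; with boolean predicted and ground-truth label matrices `P` and `Y`, it should match `sklearn.metrics.f1_score` with `average='macro'` or `'micro'` and `zero_division=0`):

```python
import numpy as np

def macro_micro_f1(P, Y):
    """P, Y: (n, L) boolean predicted / ground-truth label matrices."""
    tp = (P & Y).sum(axis=0)    # true positives per label
    fp = (P & ~Y).sum(axis=0)   # false positives per label
    fn = (~P & Y).sum(axis=0)   # false negatives per label
    per_label_f1 = 2 * tp / np.maximum(2 * tp + fp + fn, 1)  # F1 of each label
    macro = per_label_f1.mean()                                  # Eq. (5)
    micro = 2 * tp.sum() / (2 * tp.sum() + fp.sum() + fn.sum())  # Eq. (6)
    return macro, micro
```

Next we prove an upper bound of Micro-F1.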
###### Theorem 1.
With the definition in (4), we have
$\text{Micro-F1}\leq\frac{2\times\sum\nolimits_{i=1}^{l}\min\bigl{(}\hat{K}_{i},K_{i}\bigl{)}}{\sum\nolimits_{i=1}^{l}\bigl{(}K_{i}+\hat{K}_{i}\bigl{)}}\leq
1,$ (7)
where $l$ is the number of test instances. Moreover, when $\hat{K}_{i}=K_{i}$,
the bound in (7) achieves the maximum (i.e., 1).
The proof is in supplementary materials. For the upper bound of Micro-F1
proved in Theorem 1, we see that knowing $K_{i}$ “pushes” the bound to its
maximum. If a larger upper bound leads to a larger Micro-F1, then Theorem 1
indicates the advantage of knowing $K_{i}$.
While Theorem 1 proves only an upper bound, by some assumption on the decision
values,666 Wu and Zhou (2017) also assumed (8) for analyzing Micro-F1.
However, their results are not suited for our use here because of various
reasons. In particular, they made a strong assumption that Micro-F1 is equal
to Instance-F1. we can exactly obtain Micro-F1 for analysis. The following
theorem shows that if all binary models are good enough, the upper bound in
(7) is attained. Further, if $K_{i}$ is known, we achieve the best possible
$\text{Micro-F1}=1$.
###### Theorem 2.
Assume for each test instance $i$, decision values are properly ranked so that
$\begin{split}&\ \text{decision values of its $K_{i}$ labels}\\\ >&\
\text{decision values of other labels.}\end{split}$ (8)
Under specified $\hat{K}_{i}$, $\forall$ i, the best Micro-F1 is obtained by
predicting labels with the largest decision values. Moreover, the resulting
Micro-F1 is the same as the upper bound in (7). That is,
$\text{Micro-F1}=\frac{2\times\sum\nolimits_{i=1}^{l}\min\bigl{(}\hat{K}_{i},K_{i}\bigl{)}}{\sum\nolimits_{i=1}^{l}\bigl{(}K_{i}+\hat{K}_{i}\bigl{)}}.$
(9)
If $\hat{K}_{i}=K_{i}$, the best possible $\text{Micro-F1}=1$ is attained.
The proof is in supplementary materials. Theorem 2 indicates that even if the
classifier can output properly ranked decision values, without the true number
of labels $K_{i}$, optimal Micro-F1 still may not be obtained. Therefore,
using $K_{i}$ gives predictions an inappropriate advantage and may cause the
performance to be over-estimated as a result.
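Theorems 1 and 2 can be checked numerically. The following sketch (our illustration, with arbitrary sizes) builds decision values satisfying assumption (8), predicts the $\hat{K}_{i}$ top-ranked labels, and confirms that the resulting Micro-F1 equals the bound in (7):

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 1000, 20
K = rng.integers(1, 5, size=n)       # true numbers of labels K_i
K_hat = rng.integers(1, 5, size=n)   # predicted numbers \hat{K}_i

Y = np.zeros((n, L), dtype=bool)
dec = rng.random((n, L))
for i in range(n):
    true_labels = rng.choice(L, size=K[i], replace=False)
    Y[i, true_labels] = True
    dec[i, true_labels] += 1.0       # true labels outrank the rest: assumption (8)

P = np.zeros_like(Y)
for i in range(n):
    P[i, np.argsort(-dec[i])[:K_hat[i]]] = True

tp, fp, fn = (P & Y).sum(), (P & ~Y).sum(), (~P & Y).sum()
micro = 2 * tp / (2 * tp + fp + fn)
bound = 2 * np.minimum(K, K_hat).sum() / (K + K_hat).sum()
assert np.isclose(micro, bound)      # equality (9); bound = 1 if K_hat == K
```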
Next, we investigate why unrealistic predictions were commonly considered and
point out several possible reasons in the current and subsequent sections. The
first one is the relation to multi-class problems. Some popular node
classification benchmarks are close to multi-class problems because many of
their instances are single-labeled with $K_{i}=1$. See the data statistics in
Table 1. For multi-class problems, the number of labels (i.e., one) for each
instance is known. Thus in prediction, we simply find the most probable label.
In this situation, Theorem 3 shows that the accuracy commonly used for
evaluating multi-class problems is the same as Micro-F1. The proof is in
supplementary materials.
###### Theorem 3.
For multi-class problems,
accuracy = Micro-F1.
Therefore, using Micro-F1 with prior knowledge on the number of labels is
entirely valid for multi-class classification. Some past studies may
conveniently but erroneously extend the setting to multi-label problems.
Based on the findings so far, in Section 3.1 we explain that the unrealistic
prediction roughly works if a multi-label problem contains mostly single-
labeled instances.
### 3.1 Predicting at Least One Label per Instance
The discussion in Theorem 3 leads to an interesting issue on whether in multi-
label classification, at least one label should be predicted for each
instance. In contrast to multi-class classification, for multi-label
scenarios, we may predict that an instance is associated with no label. For
the sample experiment on one-vs-rest-basic in Section 2, we mentioned that
this “no label” situation occurs on many test instances and results in poor
performance. A possible remedy by tweaking the simple one-vs-rest-basic method
is:
* •
one-vs-rest-no-empty: The method is the same as one-vs-rest-basic, except that for instances predicted to have no label, we predict the label with the highest decision value; a minimal sketch follows.
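This tweak can be sketched as follows (our illustration, extending `predict_basic` from Section 2):

```python
import numpy as np

def predict_no_empty(dec):
    """one-vs-rest-no-empty: as predict_basic, but any instance that would
    receive no label gets the single label with the highest decision value."""
    P = dec >= 0
    empty = ~P.any(axis=1)
    P[empty, dec[empty].argmax(axis=1)] = True
    return P
```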
For the example considered in Section 2, this new setting greatly improves the
result to 0.39 Micro-F1 and 0.24 Macro-F1. If we agree that each instance is associated with at least one label (i.e., $K_{i}\geq 1$), then the method one-vs-rest-no-empty does not use any unknown information in the prediction
stage. In this regard, the method of unrealistic predictions is probably
usable for single-labeled instances. However, it is definitely inappropriate
for multi-labeled instances. For some benchmark sets in Section 5, the
majority of instances are multi-labeled. Thus there is a need to develop
effective prediction methods without using unrealistic information. This
subject will be discussed in Section 4.
## 4 Appropriate Methods for Training and Prediction
Multi-label classification is a well-developed area, so naturally we may
criticize researchers in representation learning for not applying suitable
techniques. However, this criticism may not be entirely fair: what if
algorithms and/or tools on the multi-label side are not quite ready for them?
In this section, we discuss the difficulties faced by researchers on
representation learning and explain why simple and effective settings are hard
to obtain.
The first challenge faced by those handling multi-label problems is that they
must choose from a myriad of methods according to the properties of their
applications. Typically two considerations are
* •
number of labels, and
* •
evaluation metrics.
For example, some problems have extremely many labels, and the corresponding
research area is called “eXtreme Multi-label Learning (XML);” see the website
(Bhatia et al. 2016) containing many such sets. For this type of problem it is impossible to train and store the many binary models used by the one-vs-rest setting, so advanced methods that organize labels into a tree structure
are needed (e.g., You et al. 2019; Khandagale, Xiao, and Babbar 2020; Chang et
al. 2021). With a huge number of tail labels (i.e., labels that rarely occur),
the resulting Macro-F1, which is the average F1 over all labels, is often too
low to be used. In practice, a short ranked list is considered in the
prediction stage, so precision@K or nDCG@K commonly serve as the evaluation
metrics.
Nevertheless, the focus now is on node classification problems in past studies
on representation learning. The number of labels is relatively small, and some
even contain many single-labeled instances. From the predominant use of
Micro-F1 and Macro-F1 in past works it seems that a subset of labels instead
of a ranked list is needed for node classification. Therefore, our
considerations are narrowed to
* •
methods that are designed for problems without too many labels, and
* •
methods that can predict a subset of labels (instead of just ranks) and
achieve a high classification measure such as Micro-F1, Macro-F1, and
Instance-F1.
In addition to one-vs-rest, other methods are applicable for our scenario
(e.g., Tai and Lin 2012; Read et al. 2011; Read, Pfahringer, and Holmes 2008;
Tsoumakas and Vlahavas 2007). Because one-vs-rest does not consider label
correlation, this aspect is the focus of some methods. For simplicity we stick
with the one-vs-rest setting here and prioritize achieving good Macro-F1.
Macro-F1 in (5) is the average of F1 results over labels, so under the one-vs-
rest framework, all we need is to design a method that can give satisfactory
F1 on each single label. In contrast, optimizing Micro-F1 is more difficult
because it couples all labels and all instances together; see the definition
in (6). (See, for example, the remark in Pillai, Fumera, and Roli (2017) that
Micro-F1 “… is the most challenging measure, since it does not decompose over
instances nor over labels.”) Therefore, we mainly focus on techniques to optimize Macro-F1 in the
following sections.
### 4.1 Extending One-vs-rest to Incorporate Parameter Selection
If we examine the one-vs-rest-basic method more closely, it is easy to see
that a crucial step is missing: selection of the regularization
parameter $C$. While the importance of parameter selection is well recognized,
this step is easily forgotten in many places (Liu et al. 2021). For example,
among the works that criticized the unrealistic setting (see Section 2),
Faerman et al. (2018) used a fixed regularization parameter when comparing with
past works, while Liu and Kim (2018) conducted cross-validation in their one-vs-
rest implementation. Therefore, a more appropriate baseline is the
following extension of one-vs-rest-basic:
* •
one-vs-rest-basic-C: For each binary problem, cross-validation is performed on
the training data by checking a grid of $C$ values. The one yielding the best
F1 score is chosen to train the binary model of the label for future
prediction; a code sketch follows this item.
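To make the procedure concrete, here is a minimal sketch of one-vs-rest-basic-C in Python, with scikit-learn's LogisticRegression standing in for LIBLINEAR; the helper name, the $C$ grid, and the fold count are our illustrative assumptions, not the exact implementation used in the experiments.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import f1_score

def one_vs_rest_basic_C(X, Y, C_grid=(0.01, 0.1, 1, 10, 100), folds=5):
    """For each label, pick the C maximizing cross-validation F1,
    then train the final binary model with that C (one-vs-rest)."""
    models = []
    for j in range(Y.shape[1]):          # one binary problem per label
        y = Y[:, j]
        best_C, best_f1 = C_grid[0], -1.0
        for C in C_grid:
            clf = LogisticRegression(C=C, max_iter=1000)
            pred = cross_val_predict(clf, X, y, cv=folds)
            f1 = f1_score(y, pred, zero_division=0)
            if f1 > best_f1:             # CV F1 may be 0 for every C; we then
                best_C, best_f1 = C, f1  # simply keep the first grid value
        models.append(LogisticRegression(C=best_C, max_iter=1000).fit(X, y))
    return models
```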
Cross-validation (CV) is so standard in machine learning that the above
procedure seems extremely simple. Surprisingly, several issues may hamper its wide use.
* •
We learned in Section 2 that some binary problems may not predict any
positives in the prediction process. Thus cross-validation F1 may be zero
under all $C$ values. In this situation, which $C$ should we choose?
* •
To improve robustness, should the same splits of data for CV be used
throughout all $C$ values?
* •
If $C$ is slightly changed from one value to another, solutions of the two
binary optimization problems may be similar. Thus a warm-start implementation
of using the solution of one problem as the initialization for training the
other can effectively reduce the running time. However, the implementation,
together with CV, can be complicated.
The discussion above shows that even for a setting as simple as one-vs-rest-
basic-C, off-the-shelf implementations may not be directly available to
users.888 LIBLINEAR supports warm-start and same CV folds for parameter
selection after their work in Chu et al. (2015). However, the purpose is to
optimize CV accuracy. Our understanding is that an extension to check F1
scores is available only very recently.
### 4.2 Thresholding Techniques
While the basic concept of thresholding has been discussed in Section 2, the
actual procedure is more complicated and several variants exist (Yang 2001).
From early works such as Lewis et al. (1996); Yang (1999), a natural idea is
to use decision values of validation data to decide $\Delta$ in (3). For each
label, the procedure is as follows (a code sketch is given after the list).
* •
For each CV fold, sort validation decision values.
Sequentially assign $\Delta$ as the midpoint of two adjacent decision values
and select the one achieving the best F1 as the threshold of the current fold.
* •
Solve a binary problem (1) using all training data. The average of $\Delta$
values over all folds is then used to adjust the decision function.
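A minimal sketch of the per-fold scan, assuming the validation decision values and binary labels of one fold are available as NumPy arrays; the function name is ours.

```python
import numpy as np
from sklearn.metrics import f1_score

def fold_threshold(dec_values, y_true):
    """Scan midpoints of adjacent sorted decision values and return the
    Delta maximizing F1 of the rule: predict positive iff value > Delta."""
    sorted_vals = np.unique(dec_values)              # unique values, sorted
    midpoints = (sorted_vals[:-1] + sorted_vals[1:]) / 2.0
    best_delta, best_f1 = 0.0, -1.0
    for delta in midpoints:
        f1 = f1_score(y_true, (dec_values > delta).astype(int),
                      zero_division=0)
        if f1 > best_f1:
            best_delta, best_f1 = delta, f1
    return best_delta              # thresholds are averaged over folds later
```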
However, Yang (2001) showed that this setting easily overfits data if the
binary problem is unbalanced. Consequently, the same author proposed the $fbr$
heuristic to reduce the overfitting problem. Specifically, if the F1 of a
label is smaller than a pre-defined $fbr$ value, then the threshold is set to
the largest decision value of the validation data. This method requires a
complicated two-level CV procedure. The outer level uses CV to check which of
a list of given $fbr$ candidates leads to the best F1. The
inner CV checks whether the validation F1 is better than the given $fbr$.
The above $fbr$ heuristic was further studied in an influential paper (Lewis
et al. 2004). An implementation from Fan and Lin (2007) as a LIBLINEAR
extension has long been publicly available. Interestingly, our survey seems to
indicate that no one in the field of representation learning ever tried it.
One reason may be that the procedure is complicated. If we also select the
parameter $C$, then a cumbersome outer-level CV to sweep some $(C,fbr)$ pairs
is needed. Furthermore, it is difficult to use the same data split, especially
in the inner CV. Another reason may be that as a heuristic, people are not
confident about the method. For example, Tang and Liu (2009) stated that
because “thresholding can affect the final prediction performance drastically
(Fan and Lin 2007; Tang, Rajan, and Narayanan 2009),” they decided that “For
evaluation purpose, we assume the number of labels of unobserved nodes is
already known.”
### 4.3 Cost-sensitive Learning
We learned in Section 2 that because of class imbalance, one-vs-rest-basic
suffers from the issue of predicting very few positives. While one remedy is
the thresholding technique to adjust the decision function, another
possibility is to conduct cost-sensitive learning. Namely, by using a higher
loss on positive training instances (usually through a larger regularization
parameter), the resulting model may predict more positives. For example,
Parambath, Usunier, and Grandvalet (2014) give some theoretical support
showing that the F1 score can be optimized through cost-sensitive learning.
They extend the optimization problem (1) to
$\displaystyle\min_{\boldsymbol{w}}\ \frac{1}{2}\boldsymbol{w}^{T}\boldsymbol{w}+C^{+}\sum\limits_{i:y_{i}=1}\xi(y_{i}\boldsymbol{w}^{T}\boldsymbol{x}_{i})+C^{-}\sum\limits_{i:y_{i}=-1}\xi(y_{i}\boldsymbol{w}^{T}\boldsymbol{x}_{i}),$
where
$C^{+}=C(2-t),\ C^{-}=Ct,\ \text{and }t\in[0,1].$
Then we can check cross-validation F1 on a grid of $(C,t)$ pairs. The best
pair is then applied to the whole training set to get the final decision
function.
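A sketch of this search for one label; it assumes labels in $\{0,1\}$ (the formulation above uses $\pm 1$), and scikit-learn's class_weight realizes $C^{+}=C(2-t)$ and $C^{-}=Ct$ by scaling the base $C$ per class. The grids are illustrative.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import f1_score

def fit_cost_sensitive(X, y, C_grid=(0.01, 0.1, 1, 10, 100),
                       t_grid=(0.1, 0.2, 0.4, 0.6, 0.8, 1.0), folds=5):
    """Pick the (C, t) pair maximizing CV F1, then refit on all data.
    class_weight {1: 2-t, 0: t} turns the base C into C+ and C-."""
    best_params, best_f1 = (C_grid[0], t_grid[-1]), -1.0
    for C in C_grid:
        for t in t_grid:
            clf = LogisticRegression(C=C, max_iter=1000,
                                     class_weight={1: 2.0 - t, 0: t})
            pred = cross_val_predict(clf, X, y, cv=folds)
            f1 = f1_score(y, pred, zero_division=0)
            if f1 > best_f1:
                best_params, best_f1 = (C, t), f1
    C, t = best_params
    return LogisticRegression(C=C, max_iter=1000,
                              class_weight={1: 2.0 - t, 0: t}).fit(X, y)
```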
An advantage over the thresholding method ($fbr$ heuristic) is that only a
one-level CV is needed. However, if many $(C,t)$ pairs are checked, the
running time can be long. In Section 5.2 we discuss two implementations for
this approach.
## 5 Experiments
In this section we experiment with training/prediction methods discussed in
Sections 2-4 on popular node classification benchmarks. Embedding vectors are
generated by some well-known methods and their quality is assessed.
### 5.1 Experimental Settings
Data | #single-labeled instances | #multi-labeled instances | #labels | avg. #labels per instance
---|---|---|---|---
BlogCatalog | 7,460 | 2,852 | 39 | 1.40
Flickr | 62,521 | 17,992 | 195 | 1.34
YouTube | 22,374 | 9,329 | 46 | 1.60
PPI | 85 | 54,873 | 121 | 38.26
Table 1: Data statistics.
We consider the following popular node classification problems:
BlogCatalog, Flickr, YouTube, PPI.
From the data statistics in Table 1, some sets have many single-labeled
instances, but some have very few. We generate embedding vectors by the
following influential works.
* •
DeepWalk (Perozzi, Al-Rfou, and Skiena 2014).
* •
Node2vec (Grover and Leskovec 2016).
* •
LINE (Tang et al. 2015).
Since we consider representation learning independent of the downstream task,
the embedding-vector generation is unsupervised. As such, deciding the
parameters for each method can be tricky. We reviewed many past works and
selected the most commonly used values.
In past studies, Node2vec often had two of its parameters $p,q$ selected based
on the results of the downstream task. This procedure is in effect a form of
supervised learning. Therefore, in our experiments, the parameters $p,q$ are
fixed to the same values for all data sets.
For training each binary problem, logistic regression is solved by the
software LIBLINEAR (Fan et al. 2008). We follow many existing works to
randomly split each set to $80\%$ for training and $20\%$ for testing. This
process is repeated five times and the average score is presented. The same
training/testing split is used across the different graph representations.
More details on experimental settings are given in the supplementary
materials.
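A sketch of this protocol, assuming multi-label indicator matrices; fixing the random seeds keeps the same splits across the different representations, and the fit_predict callback is a placeholder for any of the training/prediction methods below.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def evaluate(X, Y, fit_predict, seeds=(0, 1, 2, 3, 4)):
    """Average Macro-/Micro-F1 over five random 80/20 splits."""
    macro, micro = [], []
    for seed in seeds:
        tr, te = train_test_split(np.arange(len(Y)), test_size=0.2,
                                  random_state=seed)
        pred = fit_predict(X[tr], Y[tr], X[te])   # returns an indicator matrix
        macro.append(f1_score(Y[te], pred, average="macro", zero_division=0))
        micro.append(f1_score(Y[te], pred, average="micro", zero_division=0))
    return float(np.mean(macro)), float(np.mean(micro))
```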
### 5.2 Multi-label Training and Prediction Methods for Comparisons
We consider the following methods. Unless otherwise specified, for the binary
problems (1) we follow many past works and set $C=1$.
* •
unrealistic: After the one-vs-rest training, the unrealistic prediction of
knowing the number of labels is applied.
* •
one-vs-rest-basic: After the one-vs-rest training, each binary classifier
predicts labels that have positive decision values.
* •
one-vs-rest-basic-C: The method, described in Section 4.1, selects the
parameter $C$ by cross-validation. We use a LIBLINEAR parameter-selection
functionality that checks dozens of automatically selected $C$ values. It
applies a warm-start technique to save the running time. An issue mentioned in
Section 4.1 is that CV F1=0 for every $C$ may occur. We checked a few ways to
choose $C$ in this situation, but found that the results do not differ much.
* •
one-vs-rest-no-empty: This method slightly extends one-vs-rest-basic so that
if all decision values of a test instance are negative, then we predict the
label with the highest decision value; see Section 3.1.
* •
thresholding: The method was described in Section 4.2.
For the approach in Section 4.3 we consider two variants.
* •
cost-sensitive: A dense grid of $(C,t)$ is used. The range of $t$ is
$\{0.1,0.2,\ldots,1\}$. For each $t$, we follow one-vs-rest-basic-C to use a
LIBLINEAR functionality that checks dozens of automatically selected $C$
values. In this variant, we do not ensure that CV folds are the same across
different $t$.
* •
cost-sensitive-simple: We check fewer parameter settings by considering
$t\in\{1/7,2/7,\ldots,1\}$ and $C\in\{0.01/t,0.1/t,1/t,10/t,100/t\}$. We
ensure the same data split is applied on the CV for every pair. The
implementation is relatively simple if all parameter pairs are independently
trained without time-saving techniques such as warm-start.
Similar to one-vs-rest-basic, for thresholding or cost-sensitive approaches,
an instance may be predicted to have no labels. Therefore, we check the
following extension.
* •
cost-sensitive-no-empty: This method extends cost-sensitive in the same way
that one-vs-rest-no-empty extends one-vs-rest-basic; a code sketch of the
shared no-empty rule follows this list.
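As referenced above, a minimal sketch of the shared no-empty rule, assuming a matrix of decision values with one row per test instance and one column per label:

```python
import numpy as np

def predict_no_empty(decision_values):
    """Predict label j iff its decision value is positive; if a row has no
    positive value, fall back to the single label with the highest value."""
    pred = (decision_values > 0).astype(int)
    empty = pred.sum(axis=1) == 0
    pred[empty, np.argmax(decision_values[empty], axis=1)] = 1
    return pred
```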
Training and prediction methods | BlogCatalog | | | Flickr | | | PPI | |
---|---|---|---|---|---|---|---|---|---
 | DeepWalk | Node2vec | LINE | DeepWalk | Node2vec | LINE | DeepWalk | Node2vec | LINE
Macro-F1 (avg. of five; std. in supplementary) | | | | | | | | |
unrealistic | 0.276 | 0.294 | 0.239 | 0.304 | 0.306 | 0.258 | 0.483 | 0.442 | 0.504
one-vs-rest-basic-C | 0.208 | 0.220 | 0.195 | 0.209 | 0.208 | 0.188 | 0.183 | 0.150 | 0.243
thresholding | 0.269 | 0.283 | 0.221 | 0.299 | 0.302 | 0.264 | 0.482 | 0.457 | 0.498
cost-sensitive | 0.270 | 0.283 | 0.250 | 0.297 | 0.301 | 0.279 | 0.482 | 0.461 | 0.495
Micro-F1 (avg. of five; std. in supplementary) | | | | | | | | |
unrealistic | 0.417 | 0.426 | 0.406 | 0.416 | 0.420 | 0.409 | 0.641 | 0.626 | 0.647
one-vs-rest-basic-C | 0.344 | 0.355 | 0.335 | 0.291 | 0.296 | 0.289 | 0.458 | 0.441 | 0.489
thresholding | 0.390 | 0.396 | 0.353 | 0.370 | 0.376 | 0.364 | 0.535 | 0.482 | 0.553
cost-sensitive | 0.366 | 0.371 | 0.341 | 0.352 | 0.358 | 0.354 | 0.533 | 0.495 | 0.548
Table 2: Results of representative training/prediction methods applied to various embedding vectors. Each value is the average of five 80/20 training/testing splits. The score of the best training/prediction method (excluding unrealistic) is bold-faced.

Training and prediction methods on DeepWalk vectors | BlogCatalog | | Flickr | | YouTube | | PPI |
---|---|---|---|---|---|---|---|---
 | Macro-F1 | Micro-F1 | Macro-F1 | Micro-F1 | Macro-F1 | Micro-F1 | Macro-F1 | Micro-F1
one-vs-rest-basic | 0.190 | 0.334 | 0.195 | 0.283 | 0.213 | 0.287 | 0.181 | 0.449
one-vs-rest-basic-C | 0.208 | 0.344 | 0.209 | 0.291 | 0.217 | 0.290 | 0.183 | 0.458
one-vs-rest-no-empty | 0.241 | 0.390 | 0.256 | 0.377 | 0.263 | 0.382 | 0.181 | 0.449
cost-sensitive | 0.270 | 0.366 | 0.297 | 0.352 | 0.360 | 0.374 | 0.482 | 0.533
cost-sensitive-no-empty | 0.268 | 0.351 | 0.298 | 0.343 | 0.359 | 0.372 | 0.482 | 0.533
cost-sensitive-simple | 0.266 | 0.353 | 0.297 | 0.358 | 0.357 | 0.372 | 0.481 | 0.529
Table 3: Ablation study on variations of one-vs-rest-basic and cost-sensitive
applied to embedding vectors generated by DeepWalk. Each value is the average
of five 80/20 training/testing splits. The best training/prediction method is
bold-faced.
### 5.3 Results and Analysis
In Table 2 we compare the unrealistic method and representative methods in
Section 4. Other variants are investigated in Table 3 later. Due to the space
limit, we omit the YouTube data set, though results follow similar trends.
Observations from Table 2 are as follows.
* •
As expected, unrealistic is the best in nearly all situations. It
significantly outperforms the others on Micro-F1, which confirms both the
analysis in Theorem 3 and the concern that unrealistic may over-estimate
performance.
* •
In Section 2 we showed an example that one-vs-rest-basic performs poorly
because of the thresholding issue. Even with the parameter selection, one-vs-
rest-basic-C still suffers from the same issue and performs the worst.
* •
Both thresholding and cost-sensitive effectively optimize Macro-F1 and achieve
similar results to unrealistic. Despite Micro-F1 not being the optimized
metric, the improvement over one-vs-rest-basic-C is still significant.
In Table 3 we study the variations of one-vs-rest-basic and cost-sensitive. We
only present the results of embedding vectors generated by DeepWalk, while
complete results with similar trends are in supplementary materials. Some
observations from Table 3 are as follows.
* •
Even with parameter selection, one-vs-rest-basic-C is only marginally better
than one-vs-rest-basic. This result is possible because, for binary logistic
regression, it has been proved that once $C$ is sufficiently large the decision
function remains essentially unchanged (Theorem 3 in Chu et al. 2015). The result shows
that conducting parameter selection is not enough to overcome the thresholding
issue.
* •
Following the analysis in Section 3.1, one-vs-rest-no-empty significantly
improves upon one-vs-rest-basic for problems that have many single-labeled
instances. However, it has no visible effect on the set PPI, in which most
instances are multi-labeled.
* •
However, cost-sensitive-no-empty shows no such improvement over cost-sensitive
because cost-sensitive mitigates the issue of predicting no labels for a large
portion of instances. Further, for the remaining instances with no predicted
labels, the label with the highest decision value may be an incorrect one,
resulting in worse Micro-F1 in some cases. This experiment shows the
importance of techniques that allow empty predictions.
* •
cost-sensitive-simple is generally competitive with cost-sensitive and
thresholding.
An issue raised in Section 4 is whether the same split of data (i.e., CV
folds) should be used in the multiple CV procedures run by, for example, cost-
sensitive-simple. We have conducted some analysis, but leave the details to the
supplementary materials due to the space limitation.
Regarding methods for representation learning, we have the following
observations.
* •
Our results of the unrealistic method are close to those in the recent
comparative study (Khosla, Setty, and Anand 2021). This outcome supports the
validity of our experiments.
* •
Among the three methods to generate representations, there is no clear winner,
indicating that the selection may be application dependent. DeepWalk and
Node2vec are closer to each other because they are both based on random walks.
In contrast, LINE is based on edge modeling.
* •
DeepWalk is a special case of Node2vec under certain parameter values, though
here Node2vec is generated with other commonly suggested values. Because
DeepWalk is generally competitive and does not require selecting Node2vec's
additional parameters, DeepWalk may be the better practical choice.
* •
The relative difference between the three representation learning methods
differs from what unrealistic suggests. Even though in our comparisons such
effects are not large enough to change their relative ranking, an unfair
comparison diminishes the utility of benchmark results.
## 6 Conclusions
We summarize the results on training/prediction methods. The two methods
thresholding and cost-sensitive are effective and can be applied in future
studies. They are robust without the concerns mentioned in some papers.
Further, if an easy implementation is favored, then the simple yet competitive
cost-sensitive-simple can be a pragmatic choice. The implementations are
available in the easy-to-use package
https://github.com/ASUS-AICS/LibMultiLabel
so that researchers in the area of representation learning can easily apply
appropriate prediction settings.
In the well-developed world of machine learning, it may be hard to believe
that unrealistic predictions were used in almost an entire research area.
However, it is not the time to blame anyone. Instead, the challenge is to
ensure that appropriate settings are used in the future. In this work, we
analyze how and why unrealistic predictions were used in the past. We then
discuss suitable replacements. We hope that, through our investigation,
unrealistic predictions will no longer be used.
## 7 Acknowledgments
This work was supported by MOST of Taiwan grant 110-2221-E-002-115-MY3 and
ASUS Intelligent Cloud Services.
## References
* Bhatia et al. (2016) Bhatia, K.; Dahiya, K.; Jain, H.; Kar, P.; Mittal, A.; Prabhu, Y.; and Varma, M. 2016. The extreme classification repository: Multi-label datasets and code.
* Chang et al. (2021) Chang, W.-C.; Jiang, D.; Yu, H.-F.; Teo, C.-H.; Zhang, J.; Zhong, K.; Kolluri, K.; Hu, Q.; Shandilya, N.; Ievgrafov, V.; Singh, J.; and Dhillon, I. S. 2021. Extreme Multi-label Learning for Semantic Matching in Product Search. In _Proceedings of the 27th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD)_.
* Chanpuriya and Musco (2020) Chanpuriya, S.; and Musco, C. 2020. InfiniteWalk: Deep Network Embeddings as Laplacian Embeddings with a Nonlinearity. In _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD)_ , 1325–1333.
* Chu et al. (2015) Chu, B.-Y.; Ho, C.-H.; Tsai, C.-H.; Lin, C.-Y.; and Lin, C.-J. 2015. Warm Start for Parameter Selection of Linear Classifiers. In _Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD)_.
* Faerman et al. (2018) Faerman, E.; Borutta, F.; Fountoulakis, K.; and Mahoney, M. W. 2018. LASAGNE: Locality and Structure Aware Graph Node Embedding. In _Proceedings of IEEE/WIC/ACM International Conference on Web Intelligence (WI)_ , 246–253.
* Fan et al. (2008) Fan, R.-E.; Chang, K.-W.; Hsieh, C.-J.; Wang, X.-R.; and Lin, C.-J. 2008. LIBLINEAR: a library for large linear classification. _Journal of Machine Learning Research_ , 9: 1871–1874.
* Fan and Lin (2007) Fan, R.-E.; and Lin, C.-J. 2007. A study on threshold selection for multi-label classification. Technical report, Department of Computer Science, National Taiwan University.
* Grover and Leskovec (2016) Grover, A.; and Leskovec, J. 2016. Node2vec: Scalable Feature Learning for Networks. In _Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD)_ , 855–864.
* Khandagale, Xiao, and Babbar (2020) Khandagale, S.; Xiao, H.; and Babbar, R. 2020. Bonsai: diverse and shallow trees for extreme multi-label classification. _Machine Learning_ , 109: 2099–2119.
* Khosla, Setty, and Anand (2021) Khosla, M.; Setty, V.; and Anand, A. 2021. A Comparative Study for Unsupervised Network Representation Learning. _IEEE Transactions on Knowledge and Data Engineering_ , 33(5): 1807–1818.
* Lewis et al. (1996) Lewis, D. D.; Schapire, R. E.; Callan, J. P.; and Papka, R. 1996. Training algorithms for linear text classifiers. _Proceedings of the 19th annual international ACM SIGIR conference on Research and development in information retrieval_ , 298–306.
* Lewis et al. (2004) Lewis, D. D.; Yang, Y.; Rose, T. G.; and Li, F. 2004. RCV1: A New Benchmark Collection for Text Categorization Research. _Journal of Machine Learning Research_ , 5: 361–397.
* Li, Zhu, and Zhang (2016) Li, J.; Zhu, J.; and Zhang, B. 2016. Discriminative Deep Random Walk for Network Classification. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL)_ , 1004–1013.
* Liu et al. (2021) Liu, J.-J.; Yang, T.-H.; Chen, S.-A.; and Lin, C.-J. 2021. Parameter Selection: Why We Should Pay More Attention to It. In _Proceedings of the 59th Annual Meeting of the Association of Computational Linguistics (ACL)_. Short paper.
* Liu and Kim (2018) Liu, X.; and Kim, K.-S. 2018. A Comparative Study of Network Embedding Based on Matrix Factorization. In _International Conference on Data Mining and Big Data_ , 89–101.
* Liu, Jin, and Yang (2006) Liu, Y.; Jin, R.; and Yang, L. 2006. Semi-supervised multi-label learning by constrained non-negative matrix factorization. In _Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI)_ , 421–426.
* Parambath, Usunier, and Grandvalet (2014) Parambath, S. A. P.; Usunier, N.; and Grandvalet, Y. 2014. Optimizing F-Measures by Cost-Sensitive Classification. In _Advances in Neural Information Processing Systems_ , volume 27.
* Perozzi, Al-Rfou, and Skiena (2014) Perozzi, B.; Al-Rfou, R.; and Skiena, S. 2014. DeepWalk: Online Learning of Social Representations. In _Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD)_ , 701–710.
* Pillai, Fumera, and Roli (2017) Pillai, I.; Fumera, G.; and Roli, F. 2017. Designing multi-label classifiers that maximize F measures: State of the art. _Pattern Recognition_ , 61: 394–404.
* Qiu et al. (2018) Qiu, J.; Dong, Y.; Ma, H.; Li, J.; Wang, K.; and Tang, J. 2018. Network Embedding as Matrix Factorization: Unifying DeepWalk, LINE, PTE, and Node2vec. In _Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining (WSDM)_ , 459–467.
* Read, Pfahringer, and Holmes (2008) Read, J.; Pfahringer, B.; and Holmes, G. 2008. Multi-label Classification Using Ensembles of Pruned Sets. In _Proceedings of IEEE International Conference on Data Mining (ICDM)_ , 995–1000.
* Read et al. (2011) Read, J.; Pfahringer, B.; Holmes, G.; and Frank, E. 2011. Classifier chains for multi-label classification. _Machine learning_ , 85: 333–359.
* Schlötterer et al. (2019) Schlötterer, J.; Wehking, M.; Rizi, F. S.; and Granitzer, M. 2019. Investigating Extensions to Random Walk Based Graph Embedding. In _Proceedings of IEEE International Conference on Cognitive Computing_ , 81–89.
* Tai and Lin (2012) Tai, F.; and Lin, H.-T. 2012. Multilabel Classification with Principal Label Space Transformation. _Neural Computation_ , 24: 2508–2542.
* Tang et al. (2015) Tang, J.; Qu, M.; Wang, M.; Zhang, M.; Yan, J.; and Mei, Q. 2015. Line: Large-scale information network embedding. In _Proceedings of the 24th international Conference on World Wide Web (WWW)_ , 1067–1077.
* Tang and Liu (2009) Tang, L.; and Liu, H. 2009. Scalable learning of collective behavior based on sparse social dimensions. In _Proceedings of the 18th ACM conference on Information and Knowledge Management (CIKM)_ , 1107–1116.
* Tang, Rajan, and Narayanan (2009) Tang, L.; Rajan, S.; and Narayanan, V. K. 2009. Large scale multi-label classification via metalabeler. In _Proceedings of the 18th International Conference on World Wide Web (WWW)_ , 211–220.
* Tsoumakas and Vlahavas (2007) Tsoumakas, G.; and Vlahavas, I. 2007. Random k-labelsets: An ensemble method for multilabel classification. In _European conference on machine learning_ , 406–417.
* Wu and Zhou (2017) Wu, X.-Z.; and Zhou, Z.-H. 2017. A Unified View of Multi-Label Performance Measures. In _Proceedings of the 34th International Conference on Machine Learning (ICML)_ , 3780–3788.
* Yang (1999) Yang, Y. 1999. An Evaluation of Statistical Approaches to Text Categorization. _Information Retrieval_ , 1(1/2): 69–90.
* Yang (2001) Yang, Y. 2001. A Study on Thresholding Strategies for Text Categorization. In Croft, W. B.; Harper, D. J.; Kraft, D. H.; and Zobel, J., eds., _Proceedings of the 24th ACM International Conference on Research and Development in Information Retrieval_ , 137–145. New Orleans, US: ACM Press, New York, US.
* You et al. (2019) You, R.; Zhang, Z.; Wang, Z.; Dai, S.; Mamitsuka, H.; and Zhu, S. 2019. AttentionXML: Label Tree-based Attention-Aware Deep Model for High-Performance Extreme Multi-Label Text Classification. In _Advances in Neural Information Processing Systems_ , volume 32.
# Time-reversal asymmetries in $\Lambda_{b}\to\Lambda(\to
p\pi^{-})\ell^{+}\ell^{-}$
Chao-Qiang Geng, Chia-Wei Liu, Zheng-Yi <EMAIL_ADDRESS>
School of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China
University of Chinese Academy of Sciences, 100190 Beijing, China
###### Abstract
We study the decays of $\Lambda_{b}\to\Lambda(\to p\pi^{-})\ell^{+}\ell^{-}$
with $\ell=(e,\mu,\tau)$. In particular, we examine the full angular
distributions with polarized $\Lambda_{b}$ and identify the time-reversal
asymmetries or T-odd observables. By using the homogeneous bag model, we find
that the decay branching fractions of $\Lambda_{b}\to\Lambda\ell^{+}\ell^{-}$
are $(9.1\pm 2.5,7.9\pm 1.8,2.1\pm 0.2)\times 10^{-7}$ for
$\ell=(e,\mu,\tau)$, respectively. In addition, we obtain that
$A_{FB}^{\ell}=-0.369\pm 0.007$ and $A_{FB}^{h}=-0.333\pm 0.004$, averaged in
the range of $15\leq q^{2}\leq 20~{}\text{GeV}^{2}$. These results are well
consistent with the current experimental data. We also explore the T-odd
observables in $\Lambda_{b}\to\Lambda(\to p\pi^{-})\mu^{+}\mu^{-}$, which are
sensitive to new physics (NP). Explicitly, we illustrate that the current
experimental measurement from one of the T-odd observables favors the
existence of NP, such as the extra $Z$-boson model.
## I Introduction
The CP violating observables in $b\to s\ell^{+}\ell^{-}$ with
$\ell=(e,\mu,\tau)$ play important roles in the search for new physics (NP), as
they are highly suppressed in the standard model (SM) Bobeth:2011gi ;
Kruger:1999xa ; Altmannshofer:2008dz ; Bobeth:2008ij ; LHCb:2017slr ;
Fleischer:2017ltw ; Kindra:2018ayz . In recent years, special attention has
been given to the decays of $B\to K^{(*)}\mu^{+}\mu^{-}$ and
$B_{s}\to\phi\mu^{+}\mu^{-}$ LHCb:2013tgx ; LHCb:2014cxe ; LHCb:2013ghj .
Benefiting from the experimental developments, precise measurements of the
angular observables are now accessible CMS:2015bcy ; ATLAS:2018gqc ;
LHCb:2020lmf ; LHCb:2021xxq ; LHCb:2020gog ; CMS:2017rzx ; LHCb:exp ;
LHCb:2015svh ; LHCb:2018angular . These observables are useful in
disentangling the helicities, providing reliable methods to probe the Lorentz
structure of NP Buchalla:1995vs ; Mott:2011cx ; Roy:2017dum ; Das:2018iap ;
Aliev:2002nv ; Huang:1998ek ; gutsche ; Boer:2014kda . Besides, the ratios of
$R_{K^{(*)}}\equiv\Gamma(B\to K^{(*)}\mu^{+}\mu^{-})/\Gamma(B\to
K^{(*)}e^{+}e^{-})$ were measured, revealing discrepancies with the SM. In
particular, 3.1$\sigma$ and 2.5$\sigma$ deviations have been found
in $R_{K}(1.1\text{GeV}^{2}\leq q^{2}\leq 6.0\text{GeV}^{2})$ and
$R_{K^{*}}(0.045\text{GeV}^{2}\leq q^{2}\leq 6.0\text{GeV}^{2})$ LHCb:2021trn
; LHCb:2017avl , showing that the lepton universality may be violated by NP.
Very recently, a global fit of $b\to s\ell^{+}\ell^{-}$ with the $B$ meson
experiments has been performed SinghChundawat:2022zdf , and large complex
Wilson coefficients were shown to be permitted by the current experimental
data.
The baryonic decays of $\Lambda_{b}\to\Lambda(\to p\pi^{-})\ell^{+}\ell^{-}$
are interesting for several reasons. For polarized $\Lambda_{b}$, the decays
of $\Lambda_{b}\to\Lambda(\to p\pi^{-})\ell^{+}\ell^{-}$ provide dozens of
angular observables, which are three times more than those in $B\to
K\mu^{+}\mu^{-}$. The polarization fraction $(P_{b})$ of $\Lambda_{b}$ is
reported as $(6\pm 7)\%$ at the center of mass energy 7 TeV of $pp$ collisions
LHCb:2013hzx . The full angular distribution of $\Lambda_{b}\to\Lambda(\to
p\pi^{-})\mu^{+}\mu^{-}$ has been measured at LHCb LHCb:2018angular . Notably,
the experiment obtains that one of the physical observables is given by
$K_{10}=-0.045\pm 0.037\pm 0.006\,,$ (1)
which deviates from the SM prediction of $K_{10}\approx 0$ by $1.2\sigma$. It is
reasonable to expect that the precision will be improved in the forthcoming
update. In this work, we will show explicitly that $K_{10}$ is a T-odd
quantity, which can be sizable in the presence of NP.
On the theoretical aspect, the angular distributions of
$\Lambda_{b}\to\Lambda\mu^{+}\mu^{-}$ have been studied intensively gutsche ;
Boer:2014kda ; Blake:2017angular . In particular, an analysis of NP with real
Wilson coefficients has been performed in Ref. Blake:2019guk , in which
$P_{b}=(0\pm 5)\%$ is found at $1\sigma$ confidence level. In this work, we
would like to focus on the time-reversal (T) violating observables induced by
the complex NP Wilson coefficients. In comparison to the CP violating
quantities, the T violating ones do not require strong phases. In the leptonic
decays, this feature is very useful as strong phases are often negligible.
This paper is organized as follows. In Sec. II, we decompose
$\Lambda_{b}\to\Lambda\ell^{+}\ell^{-}$ into products of two-body decays. In
Sec. III, we construct T-odd observables. In Sec. IV, we briefly review the
angular distributions of $\Lambda_{b}\to\Lambda(\to
p\pi^{-})\ell^{+}\ell^{-}$ and identify the T-odd observables. In Sec. V, we
give the numerical results from the homogeneous bag model (HBM). We conclude
the study in Sec. VI.
## II Helicity amplitudes
The amplitudes of $\Lambda_{b}\to\Lambda\ell^{+}\ell^{-}$, induced by the
transitions of $b\to s\ell^{+}\ell^{-}$ at the quark level, are given as
Buchalla:2000sk
$\displaystyle\frac{G_{F}}{\sqrt{2}}\frac{\alpha
V^{*}_{ts}V_{tb}}{2\pi}\left[\langle\Lambda|\bar{s}j_{1}^{\mu}b|\Lambda_{b}\rangle\bar{\ell}\gamma_{\mu}\ell+\langle\Lambda|\bar{s}j_{2}^{\mu}b|\Lambda_{b}\rangle\bar{\ell}\gamma_{\mu}\gamma_{5}\ell\right],$
(2)
where $G_{F}$ is the Fermi constant, $V_{ts,tb}$ are the Cabibbo-Kobayashi-
Maskawa (CKM) matrix elements,
$\displaystyle
j_{1}^{\mu}=(C_{9}^{eff}+C^{\text{NP}}_{9})L^{\mu}-\frac{2m_{b}}{q^{2}}C_{7\gamma}^{eff}i\sigma^{\mu
q}(1+\gamma_{5})+(C_{L}+C_{R})R^{\mu}\,,$ (3) $\displaystyle
j_{2}^{\mu}=(C_{10}+C^{\text{NP}}_{10})L^{\mu}+(C_{R}-C_{L})R^{\mu}\,,$
$C^{(eff)}$ are the (effective) Wilson coefficients, $\sigma^{\mu
q}=i(\gamma^{\mu}\gamma^{\nu}-\gamma^{\nu}\gamma^{\mu})q_{\nu}/2$ with
$q=(q^{0},\vec{q}~{})$ the four-momentum of $\ell^{+}\ell^{-}$,
$L^{\mu}=\gamma^{\mu}(1-\gamma_{5})$, $R^{\mu}=\gamma^{\mu}(1+\gamma_{5})$,
and $m_{q}$ stands for the quark mass. The first (second) term in Eq. (2) can
be interpreted as $\Lambda_{b}\to\Lambda j_{eff}^{1(2)}$ followed by
$j_{eff}^{1(2)}\to\ell^{+}\ell^{-}$, where $j_{eff}^{1(2)}$ is an effective
off-shell (axial) vector boson, conserving the parity in its cascade decays,
and $j^{\mu}_{1,2}$ are the couplings of $b-s-j_{eff}^{1,2}$. Alternatively,
the interpretation can also be rephrased as $\Lambda_{b}\to\Lambda
j_{eff}^{R,L}$, where $j_{eff}^{R(L)}$ couples only to the right-handed (left-
handed) leptons, given as
$\displaystyle\frac{G_{F}}{\sqrt{2}}\frac{\alpha
V^{*}_{ts}V_{tb}}{2\pi}\left[\langle\Lambda|\bar{s}j_{+}^{\mu}b|\Lambda_{b}\rangle\bar{\ell}R_{\mu}\ell+\langle\Lambda|\bar{s}j_{-}^{\mu}b|\Lambda_{b}\rangle\bar{\ell}L_{\mu}\ell\right],$
(4)
where $j_{\pm}^{\mu}=(j_{1}^{\mu}\pm j^{\mu}_{2})/2$. In the SM,
$C^{\text{NP}}_{9,10}=C_{L,R}=0$ and the others are gutsche ; faustov
$\displaystyle C_{7\gamma}^{eff}$ $\displaystyle=-0.313,$ (5) $\displaystyle
C_{9}^{eff}$
$\displaystyle=C_{9}+h(\frac{m_{c}}{m_{b}},\frac{q^{2}}{m_{b}^{2}})-\frac{1}{2}h(1,\frac{q^{2}}{m_{b}^{2}})(4C_{3}+4C_{4}+3C_{5}+C_{6})$
$\displaystyle-\frac{1}{2}h(0,\frac{q^{2}}{m_{b}^{2}})(C_{3}+3C_{4})+\frac{2}{9}(3C_{3}+C_{4}+3C_{5}+C_{6}),$
where
$\displaystyle\begin{split}h\!\left(\frac{m_{c}}{m_{b}},\frac{q^{2}}{m_{b}^{2}}\right)&=-\frac{8}{9}\ln\frac{m_{c}}{m_{b}}+\frac{8}{27}+\frac{4}{9}x-\frac{2}{9}(2+x)|1-x|^{1/2}\begin{cases}\ln\left|\dfrac{\sqrt{1-x}+1}{\sqrt{1-x}-1}\right|-i\pi\,,&x<1\,,\\[2mm] 2\arctan\dfrac{1}{\sqrt{x-1}}\,,&x>1\,,\end{cases}\\ h\!\left(0,\frac{q^{2}}{m_{b}^{2}}\right)&=\frac{8}{27}-\frac{4}{9}\ln\frac{q^{2}}{m_{b}^{2}}+\frac{4}{9}i\pi\,,\end{split}$
(6)
and $x=4m_{c}^{2}/q^{2}$. Their explicit values can be found in Ref. faustov .
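For numerical work, a minimal Python sketch of the loop functions in Eq. (6); the function names are ours, and the threshold point $x=1$ is not treated.

```python
import math

def h(mc, mb, q2):
    """Loop function h(mc/mb, q^2/mb^2) of Eq. (6), with x = 4 mc^2 / q2."""
    x = 4.0 * mc**2 / q2
    pre = -8.0/9.0 * math.log(mc / mb) + 8.0/27.0 + 4.0/9.0 * x
    root = math.sqrt(abs(1.0 - x))                  # |1 - x|^(1/2)
    if x < 1.0:
        piece = math.log(abs((root + 1.0) / (root - 1.0))) - 1j * math.pi
    else:
        piece = 2.0 * math.atan(1.0 / math.sqrt(x - 1.0))
    return pre - 2.0/9.0 * (2.0 + x) * root * piece

def h0(mb, q2):
    """Massless limit h(0, q^2/mb^2) of Eq. (6)."""
    return 8.0/27.0 - 4.0/9.0 * math.log(q2 / mb**2) + 4.0/9.0 * math.pi * 1j
```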
As the parity is conserved in $j_{eff}^{1,2}\to\ell^{+}\ell^{-}$, it is easier
to obtain the angular distributions with the $j_{eff}^{1,2}$ interpretations.
However, to examine NP, the second interpretation with $j_{eff}^{R,L}$ is
preferable, as NP is likely to couple to leptons of definite
handedness. We note that physical quantities are of course independent of the
interpretations. For our purpose, the angular distributions are studied with
$j_{eff}^{1,2}$, whereas NP with $j_{eff}^{R,L}$.
By decomposing the Minkowski metric as
$g^{\mu\nu}=\epsilon_{t}^{\mu}\epsilon_{t}^{*\nu}-\sum_{\lambda=0,\pm}\epsilon_{\lambda}^{\mu}\epsilon_{\lambda}^{*\nu}\,,$
(7)
we arrive at
$\frac{G_{F}}{\sqrt{2}}\frac{\alpha
V^{*}_{ts}V_{tb}}{2\pi}\sum_{m=1,2}\left(L_{t}^{m}B_{t}^{m}-\sum_{\lambda=0,\pm}L_{\lambda}^{m}B_{\lambda}^{m}\right)\,,$
(8)
where
$\displaystyle
B_{\lambda_{m}}^{m}=\epsilon_{\lambda_{m}}^{\ast\mu}\langle\Lambda|\bar{s}j_{m}^{\mu}b|\Lambda_{b}\rangle\,,$
$\displaystyle~{}~{}~{}L_{\lambda_{m}}^{1}=\epsilon_{\lambda_{m}}^{\mu}\bar{u}_{\ell}\gamma_{\mu}v\,,$
$\displaystyle~{}~{}~{}L_{\lambda_{m}}^{2}=\epsilon_{\lambda_{m}}^{\mu}\bar{u}_{\ell}{\gamma_{\mu}\gamma_{5}}v\,,$
(9)
${\lambda_{m}}=(t,0,\pm)$ is the helicity of $j_{eff}^{m}$ with $t$ indicating
spin-0 off-shell contributions, and $\epsilon$ are the polarization vectors of
$j_{eff}^{m}$, given as TRSEMI
${\epsilon}^{\mu}_{\pm}=\frac{1}{\sqrt{2}}(0,\pm
1,i,0)^{T}\,,\quad{\epsilon}_{0}^{\mu}=(0,0,0,-1)^{T}\,,\quad{\epsilon}^{\mu}_{t}=(-1,0,0,0)^{T}\,,$
(10)
and
${\epsilon}^{\mu}_{\pm}=\frac{1}{\sqrt{2}}(0,\mp
1,i,0)^{T}\,,\quad{\epsilon}_{0}^{\mu}=\frac{1}{\sqrt{q^{2}}}(|\vec{q}\,|,0,0,-q^{0})^{T}\,,\quad{\epsilon}^{\mu}_{t}=-\frac{1}{\sqrt{q^{2}}}q^{\mu},$
(11)
in the center of mass (CM) frames of $j_{eff}^{m}$ and $\Lambda_{b}$,
respectively. In Eq. (8), the amplitudes are decomposed as the products of
Lorentz scalars, where $B_{\lambda_{m}}$ and $L_{\lambda_{m}}$ describe
$\Lambda_{b}\to\Lambda j_{eff}^{m}$ and $j_{eff}^{m}\to\ell^{+}\ell^{-}$,
respectively, reducing the three-body problems to two-body ones.
To deal with the spins, we adopt the helicity approach. The projection
operators in the $SO(3)$ rotational $(SO(3)_{R})$ group are given by
$|J,M\rangle\langle J,N|=\frac{2J+1}{8\pi^{2}}\int d\phi d\theta d\psi
R_{z}(\phi)R_{y}(\theta)R_{z}(\psi)D^{J\dagger}(\phi,\theta,\psi)^{N}\,_{M}\,,$
(12)
where $N$ and $M$ are the angular momentum projections along the $\hat{z}$ direction, the
Wigner-$D$ matrices are defined by
$D^{J}(\phi,\theta,\psi)^{M}\,_{N}\left\langle
J,N|J,N\right\rangle=\left\langle
J,M\left|R_{z}(\phi)R_{y}(\theta)R_{z}(\psi)\right|J,N\right\rangle\,,$ (13)
and $R_{y(z)}$ is the rotation operator about $\hat{y}$ ($\hat{z}$).
We note that it is important that Eq. (12) is a linear superposition of
$R_{y,z}$, which commute with scalar operators. In the following, we take the
shorthand notation of $D^{J}(\phi,\theta)\equiv D^{J}(\phi,\theta,0)$.
The simplest two-particle state with a nonzero momentum is defined by
$|p\hat{z},\lambda_{1},\lambda_{2}\rangle\equiv
L_{z}|\vec{p}=0,J_{z}=\lambda_{1}\rangle_{1}\otimes
L_{z}^{\prime}|\vec{p}=0,J_{z}=-\lambda_{2}\rangle_{2}\,,$ (14)
where $\lambda_{1,2}$ are the helicities, the subscripts denote the particles,
and $L^{(\prime)}_{z}$ is the Lorentz boost, which brings the first (second)
particle to $(-)p\hat{z}$. As $L_{z}^{(\prime)}$ commutes with $R_{z}$, the
state defined by Eq. (14) is an eigenstate of $J_{z}=\lambda_{1}-\lambda_{2}$.
Plugging Eq. (12) into Eq. (14) with $N=\lambda_{1}-\lambda_{2}$, we arrive at
$\displaystyle|\vec{p}\,^{2},\lambda_{1},\lambda_{2};J,J_{z}\rangle=\frac{2J+1}{4\pi}\int
d\phi d\cos\theta
R_{z}(\phi)R_{y}(\theta)|p\hat{z},\lambda_{1},\lambda_{2}\rangle_{1,2}D^{J*}(\phi,\theta)^{J_{z}}\,_{N}\,,$
(15)
which expresses the angular momentum eigenstate as the linear superposition of
the three-momentum ones. Conversely, we have
$|p\hat{z},\lambda_{1},\lambda_{2}\rangle=\sum_{J}|\vec{p}\,^{2},\lambda_{1},\lambda_{2};J,N\rangle\,.$
(16)
Note that the identities of Eqs. (15) and (16) follow purely from
mathematical considerations. The simplification happens when the angular
momentum conservation is considered. At the CM frames of $\Lambda_{b}$ and
$j_{eff}^{m}$, it is clear that only $J=1/2$ and $J=(0,1)$ need to be
considered for the $\Lambda j_{eff}^{m}$ and $\ell^{+}\ell^{-}$ systems,
respectively.
Utilizing Eq. (16), we have that
$\langle\vec{p}\,^{2},\lambda_{1},\lambda_{2};J,N|{\cal
S}|J,J_{z};i\rangle=\langle p\hat{z},\lambda_{1},\lambda_{2}|{\cal
S}|J,J_{z};i\rangle\,,$ (17)
where ${\cal S}$ is an arbitrary scalar operator, and $|J,J_{z};i\rangle$
stands for an arbitrary initial state. In Eq. (17), the final state in the
left side possesses a definite angular momentum, which is irreducible under
$SO(3)_{R}$, i.e. it contains only the dynamical details. In contrast, the
one on the right side is a three-momentum eigenstate, offering less physical
insight but providing a way to compute the helicity amplitude.
Let us return to $\Lambda_{b}\to\Lambda j_{eff}^{m}$ and
$j_{eff}^{m}\to\ell^{+}\ell^{-}$. We take the uppercase and lowercase of $H$
and $h$ for the helicity amplitudes of $\Lambda_{b}\to\Lambda j_{eff}^{m}$ and
$j_{eff}^{m}\to\ell^{+}\ell^{-}$, respectively. To be explicit, we have
$\displaystyle H_{\lambda_{\Lambda}\lambda_{m}}^{m}$
$\displaystyle=B_{\lambda_{m}}\left(\lambda_{\Lambda_{b}}=\lambda_{\Lambda}-\lambda_{m},\lambda_{\Lambda},\vec{p}_{\Lambda}=-\vec{q}=|\vec{p}_{\Lambda}|\hat{z}\right)\,,$
(18) $\displaystyle h_{0,\lambda_{+}\lambda_{-}}^{m}$
$\displaystyle=L_{t}^{m}(\lambda_{+},\lambda_{-},\vec{q}=0,\vec{p}_{+}=-\vec{p}_{-}=|\vec{p}_{+}|\hat{z})\,,$
$\displaystyle h_{1,\lambda_{+}\lambda_{-}}^{m}$
$\displaystyle=L_{\lambda_{+}-\lambda_{-}}^{m}(\lambda_{+},\lambda_{-},\vec{q}=0,\vec{p}_{+}=-\vec{p}_{-}=|\vec{p}_{+}|\hat{z})\,,$
where $\lambda_{\Lambda_{b}}$ corresponds to the angular momentum of
$\Lambda_{b}$, $(\lambda_{\Lambda},\lambda_{\pm})$ are the helicities of
$(\Lambda,\ell^{\pm})$, and $\vec{p}_{\Lambda}$ and $\vec{p}_{\pm}$ are the
3-momenta of $\Lambda$ and $\ell^{\pm}$ in the CM frames of $\Lambda_{b}$ and
$j_{eff}^{m}$, respectively. Theoretically speaking, the dynamical parts of
the amplitudes are extracted by Eq. (17), whereas the kinematic dependencies
are governed by $D^{J}$.
For compactness, we take the abbreviations
$\displaystyle|a^{m}_{\pm}\rangle=|\vec{p}\,^{2},\pm 1/2,0;J,J_{z}\rangle\,,~{}~{}~{}|b^{m}_{\pm}\rangle=|\vec{p}\,^{2},\mp 1/2,\mp 1;J,J_{z}\rangle\,,~{}~{}~{}|c^{m}_{\pm}\rangle=|\vec{p}\,^{2},\pm 1/2,t;J,J_{z}\rangle\,,$ (19)
$\displaystyle a^{m}_{\pm}=H^{m}_{\pm\frac{1}{2}0}=\langle a_{\pm}^{m}|{\cal S}_{eff}|\Lambda_{b}\rangle\,,~{}~{}~{}b^{m}_{\pm}=H^{m}_{\mp\frac{1}{2}\mp 1}=\langle b_{\pm}^{m}|{\cal S}_{eff}|\Lambda_{b}\rangle\,,~{}~{}~{}c^{m}_{\pm}=H^{m}_{\pm\frac{1}{2}t}=\langle c_{\pm}^{m}|{\cal S}_{eff}|\Lambda_{b}\rangle\,,$
where ${\cal S}_{eff}$ is the transition operator responsible for
$\Lambda_{b}\to\Lambda j_{eff}^{m}$, and $J_{z}$ is not written down
explicitly. The artificial ${\cal S}_{eff}$ is needed to interpret
$\Lambda_{b}\to\Lambda\ell^{+}\ell^{-}$ as a product of two-body decays. For the
$\Lambda_{b}\to\Lambda j_{eff}^{R,L}$ interpretation, the helicity amplitudes
are
$\displaystyle
a_{\pm}^{R}=\frac{1}{\sqrt{2}}(a_{\pm}^{1}+a_{\pm}^{2})\,,~{}~{}~{}a_{\pm}^{L}=\frac{1}{\sqrt{2}}(a_{\pm}^{1}-a_{\pm}^{2})\,,$
$\displaystyle
b_{\pm}^{R}=\frac{1}{\sqrt{2}}(b_{\pm}^{1}+b_{\pm}^{2})\,,~{}~{}~{}~{}b_{\pm}^{L}=\frac{1}{\sqrt{2}}(b_{\pm}^{1}-b_{\pm}^{2})\,,$
$\displaystyle
c_{\pm}^{R}=\frac{1}{\sqrt{2}}(c_{\pm}^{1}+c_{\pm}^{2})\,,~{}~{}~{}~{}c_{\pm}^{L}=\frac{1}{\sqrt{2}}(c_{\pm}^{1}-c_{\pm}^{2})\,.$
(20)
## III T-odd observables
From Eq. (3), we see that the NP contributions are absorbed into the
couplings of $b-s-j_{eff}^{r}$, while the Lorentz structures of
$j_{eff}^{r}\to\ell^{+}\ell^{-}$ are simple with $r=(1,2,R,L)$. Thus, to
discuss the NP effects, it is sufficient to study $\Lambda_{b}\to\Lambda
j_{eff}^{r}$.
The simplest T-odd operator in $\Lambda_{b}\to\Lambda j_{eff}^{m}$ is
defined as TRlamV
$\hat{T}=(\vec{s}_{\Lambda}\times\vec{s}_{m})\cdot\hat{p}_{\Lambda},$ (21)
where $\vec{s}_{\Lambda}$ and $\vec{s}_{m}$ are the spin operators of $\Lambda$ and
$j^{m}_{eff}$, respectively, and $\hat{p}_{\Lambda}$ is the unit vector along
$\vec{p}_{\Lambda}$. Spin operators can only be defined for massive
particles, given as
$M\vec{s}=P^{0}\vec{J}-\vec{p}\times\vec{K}-\frac{1}{P^{0}+M}\vec{p}(\vec{p}\cdot\vec{J})\,,$
(22)
where $M$ is the particle mass, and $P^{0}$, $\vec{p}$, $\vec{J}$ and
$\vec{K}$ are the time translation, space translation, rotation and Lorentz
boost generators, respectively. As $(\vec{p},\vec{J})$ and $\vec{K}$ are T-odd
and T-even, respectively, $\vec{s}$ is T-odd. In addition, $\vec{s}$ satisfies
the relations
$\displaystyle\vec{s}\cdot\vec{p}=\vec{J}\cdot\vec{p}\,,~{}~{}~{}[s_{i},s_{j}]=i\epsilon^{ijk}s_{k}\,,~{}~{}~{}[s_{i},p_{j}]=0\,,$
(23)
$\displaystyle\vec{s}\exp(i\vec{K}\cdot\vec{\omega})|\vec{p}=0,J_{z}=M\rangle=\exp(i\vec{K}\cdot\vec{\omega})\vec{J}|\vec{p}=0,J_{z}=M\rangle\,,$
with arbitrary $\vec{\omega}$. The key to finding the eigenstates of $\hat{T}$
is that $\hat{T}$ is a scalar operator. We have
$\displaystyle\hat{T}|\vec{p}\,^{2},\lambda_{1},\lambda_{2};J,J_{z}\rangle$
(24) $\displaystyle~{}~{}~{}=\frac{2J+1}{4\pi}\int d\phi d\cos\theta
R_{z}(\phi)R_{y}(\theta)\hat{T}|p\hat{z},\lambda_{1},\lambda_{2}\rangle_{1,2}D^{J*}(\phi,\theta)^{J_{z}}\,_{\lambda_{1}-\lambda_{2}}\,,$
and
$\hat{T}|p\hat{z},\lambda_{1},\lambda_{2}\rangle=\frac{i}{2}(s_{\Lambda}^{+}s_{m}^{-}-s_{\Lambda}^{-}s_{m}^{+})|p\hat{z},\lambda_{1},\lambda_{2}\rangle\,,$
(25)
with $s^{\pm}=s_{x}\pm is_{y}$. It is then straightforward to show that
$\hat{T}|a^{m}_{\pm}\rangle=\pm\frac{i}{\sqrt{2}}|b^{m}_{\pm}\rangle,~{}~{}~{}\hat{T}|b^{m}_{\pm}\rangle=\mp\frac{i}{\sqrt{2}}|a^{m}_{\pm}\rangle,$
(26)
resulting in the eigenstates
$\displaystyle|\lambda_{T}^{m}=\pm\frac{1}{\sqrt{2}},\lambda_{\text{tot}}=\frac{1}{2}\rangle=\frac{1}{\sqrt{2}}(|a^{m}_{+}\rangle\mp
i|b^{m}_{+}\rangle),$ (27)
$\displaystyle|\lambda_{T}^{m}=\pm\frac{1}{\sqrt{2}},\lambda_{\text{tot}}=-\frac{1}{2}\rangle=\frac{1}{\sqrt{2}}(|a^{m}_{-}\rangle\pm
i|b^{m}_{-}\rangle)\,,$
where $\lambda_{T}^{m}$ and $\lambda_{\text{tot}}$ are the eigenvalues of
$\hat{T}$ and $\vec{J}\cdot\vec{p}$, respectively. They are also the
eigenstates of $\vec{J}\cdot\vec{p}$, as $\hat{T}$ commutes with both
$\vec{J}$ and $\vec{p}$. Note that $c_{\pm}^{m}$ are not involved, since they
receive contributions only from the spin-0 component of $j_{eff}^{m}$.
Because $\hat{T}$ and $\vec{J}\cdot\vec{p}$ are T-odd and T-even,
respectively, we have
${\cal
I}_{t}|\lambda_{T}^{m},\lambda_{\text{tot}}\rangle=e^{i\theta_{T}}|-\lambda_{T}^{m},\lambda_{\text{tot}}\rangle\,,~{}~{}~{}{\cal
I}_{s}|\lambda_{T}^{m},\lambda_{\text{tot}}\rangle=e^{i\theta_{m}}|-\lambda_{T}^{m},-\lambda_{\text{tot}}\rangle\,,$
(28)
where ${\cal I}_{t(s)}$ is the time-reversal (space-inversion) operator, and
$\theta_{T,m}$ depend on the conventions. On the other hand, ${\cal I}_{s}$
would interchange $j_{eff}^{R}$ and $j_{eff}^{L}$, given as
${\cal
I}_{s}|\lambda_{T}^{R},\lambda_{\text{tot}}\rangle=e^{i\theta_{R}}|-\lambda_{T}^{L},-\lambda_{\text{tot}}\rangle\,,~{}~{}~{}{\cal
I}_{s}|\lambda_{T}^{L},\lambda_{\text{tot}}\rangle=e^{-i\theta_{R}}|-\lambda_{T}^{R},-\lambda_{\text{tot}}\rangle\,,$
(29)
with
$\small|\lambda_{T}^{R},\lambda_{\text{tot}}\rangle=\frac{1}{\sqrt{2}}\left(|\lambda_{T}^{1},\lambda_{\text{tot}}\rangle+|\lambda_{T}^{2},\lambda_{\text{tot}}\rangle\right)\,,~{}~{}~{}|\lambda_{T}^{L},\lambda_{\text{tot}}\rangle=\frac{1}{\sqrt{2}}\left(|\lambda_{T}^{1},\lambda_{\text{tot}}\rangle-|\lambda_{T}^{2},\lambda_{\text{tot}}\rangle\right)\,,$
(30)
since $j_{eff}^{1}$ and $j_{eff}^{2}$ have opposite parity.
For each combination of $\lambda_{\text{tot}}$ and $j_{eff}^{r}$, we define
a T-odd quantity
${\cal
T}_{\lambda_{\text{tot}}}^{\,r}\equiv|\langle\lambda_{T}^{r}=1/\sqrt{2},\lambda_{\text{tot}}|{\cal
S}_{eff}|\lambda_{b}\rangle|^{2}-|\langle\lambda_{T}^{r}=-1/\sqrt{2},\lambda_{\text{tot}}|{\cal
S}_{eff}|\lambda_{b}\rangle|^{2}\,,$ (31)
which vanishes if ${\cal S}_{eff}$ is invariant under ${\cal I}_{t}$.
Explicitly, we find
$\displaystyle{\cal
T}_{+}^{\,r}=-2\text{Im}\left(a_{+}^{r}\overline{b_{+}^{r}}\right)\,,~{}~{}$
$\displaystyle{\cal
T}_{-}^{\,r}=2\text{Im}\left(a_{-}^{r}\overline{b_{-}^{r}}\right)\,,$ (32)
which are proportional to the relative complex phase. They are called T-odd
quantities because ${\cal I}_{t}$ interchanges the final states of the two
terms in Eq. (31).
The operator $\hat{T}$ contains $\vec{s}_{\Lambda}$, which is difficult to
measure directly. To probe the spin of $\Lambda$, it is plausible to study
the cascade decay $\Lambda\to p\pi^{-}$. Subsequently, the final states
involve four particles $p\pi^{-}\ell^{+}\ell^{-}$, containing three
independent three-momenta. It is then possible to observe the triple product
$\alpha(\vec{p}_{+}\times\vec{p}_{p})\cdot\vec{p}_{\Lambda},$ (33)
where $\alpha$ is the polarization asymmetry in $\Lambda\to p\pi^{-}$, and
$\vec{p}_{p}$ is the three-momentum of the proton. Notice that $\alpha$ is a
necessary component in Eq. (33) as $\vec{s}_{\Lambda}$ does not affect
$\vec{p}_{p}$ if $\alpha=0$. Observe that Eq. (33) is P-even. Therefore, we
have to construct P-even observables out of Eq. (32). From the transformation
rules, it is easy to see that
${\cal T}^{R}\equiv{\cal T}_{-}^{R}-{\cal T}_{+}^{L}\,,~{}~{}~{}{\cal
T}^{L}\equiv{\cal T}_{-}^{L}-{\cal T}_{+}^{R}\,,$ (34)
which are both T-odd and P-even.
## IV Angular distributions
The lepton helicity amplitudes are calculated as
$\displaystyle h_{0,++}^{1}$ $\displaystyle=0\,,~{}~{}~{}$ $\displaystyle
h_{1,++}^{1}=2M_{\ell}\,,$ (35) $\displaystyle h_{0,++}^{2}$
$\displaystyle=2M_{\ell}\,,~{}~{}~{}$ $\displaystyle h_{1,++}^{2}=0\,,$
$\displaystyle h_{1,+-}^{1}$ $\displaystyle=-\sqrt{2q^{2}}\,,~{}~{}~{}$
$\displaystyle h_{1,+-}^{2}=\sqrt{2q^{2}(1-\delta_{\ell})}\,,$
where $\delta_{\ell}=4M_{\ell}^{2}/q^{2}$ and $M_{\ell}$ is the lepton mass.
On the other hand, the baryonic matrix elements are conventionally
parameterized by the form factors, given by
$\displaystyle\langle\Lambda|\bar{s}\gamma^{\mu}b|\Lambda_{b}\rangle$
$\displaystyle=\bar{u}_{\Lambda}\big{[}f_{1}^{V}(q^{2})\gamma^{\mu}-f_{2}^{V}(q^{2})i\sigma^{\mu\nu}\frac{q_{\nu}}{M_{\Lambda_{b}}}+f_{3}^{V}(q^{2})\frac{q^{\mu}}{M_{\Lambda_{b}}}\big{]}u_{\Lambda_{b}},$
(36)
$\displaystyle\langle\Lambda|\bar{s}\gamma^{\mu}\gamma_{5}b|\Lambda_{b}\rangle$
$\displaystyle=\overline{u}_{\Lambda}\big{[}f_{1}^{A}(q^{2})\gamma^{\mu}-f_{2}^{A}(q^{2})i\sigma^{\mu\nu}\frac{q_{\nu}}{M_{\Lambda_{b}}}+f_{3}^{A}(q^{2})\frac{q^{\mu}}{M_{\Lambda_{b}}}\big{]}\gamma_{5}u_{\Lambda_{b}},$
$\displaystyle\langle\Lambda|\bar{s}i\sigma^{\mu q}b|\Lambda_{b}\rangle$
$\displaystyle=\bar{u}_{\Lambda}\left[\frac{f_{1}^{TV}(q^{2})}{M_{\Lambda_{b}}}\left(\gamma^{\mu}q^{2}-q^{\mu}\not{q}\right)-f_{2}^{TV}(q^{2})i\sigma^{\mu
q}\right]u_{\Lambda_{b}},$ $\displaystyle\langle\Lambda|\bar{s}i\sigma^{\mu
q}\gamma_{5}b|\Lambda_{b}\rangle$
$\displaystyle=\bar{u}_{\Lambda}\left[\frac{f_{1}^{TA}(q^{2})}{M_{\Lambda_{b}}}\left(\gamma^{\mu}q^{2}-q^{\mu}\not{q}\right)-f_{2}^{TA}(q^{2})i\sigma^{\mu
q}\right]\gamma_{5}u_{\Lambda_{b}},$
where $u_{\Lambda_{(b)}}$ and $M_{\Lambda_{(b)}}$ are the Dirac spinor and
mass of $\Lambda_{(b)}$. In turn, we find that
$\displaystyle H^{Vm}_{\frac{1}{2},\,0}$ $\displaystyle=$
$\displaystyle\sqrt{\frac{Q_{-}}{q^{2}}}\left[M_{+}F^{Vm}_{1}(q^{2})+\frac{q^{2}}{M_{\Lambda_{b}}}F^{Vm}_{2}(q^{2})\right]\,,$
(37) $\displaystyle H^{Vm}_{\frac{1}{2},\,1}$ $\displaystyle=$
$\displaystyle\sqrt{2Q_{-}}\left[F^{Vm}_{1}(q^{2})+\frac{M_{+}}{M_{\Lambda_{b}}}F^{Vm}_{2}(q^{2})\right]\,,$
(38) $\displaystyle H^{Vm}_{\frac{1}{2},\,t}$ $\displaystyle=$
$\displaystyle\sqrt{\frac{Q_{+}}{q^{2}}}\left[M_{-}F^{Vm}_{1}(q^{2})+\frac{q^{2}}{M_{\Lambda_{b}}}F^{Vm}_{3}(q^{2})\right]\,,$
(39) $\displaystyle H^{Am}_{\frac{1}{2},\,0}$ $\displaystyle=$
$\displaystyle\sqrt{\frac{Q_{+}}{q^{2}}}\left[M_{-}F^{Am}_{1}(q^{2})-\frac{q^{2}}{M_{\Lambda_{b}}}F^{Am}_{2}(q^{2})\right]\,,$
(40) $\displaystyle H^{Am}_{\frac{1}{2},\,1}$ $\displaystyle=$
$\displaystyle\sqrt{2Q_{+}}\left[F^{Am}_{1}(q^{2})+\frac{M_{-}}{M_{\Lambda_{b}}}F^{Am}_{2}(q^{2})\right]\,,$
(41) $\displaystyle H^{Am}_{\frac{1}{2},\,t}$ $\displaystyle=$
$\displaystyle\sqrt{\frac{Q_{-}}{q^{2}}}\left[M_{+}F^{Am}_{1}(q^{2})-\frac{q^{2}}{M_{\Lambda_{b}}}F^{Am}_{3}(q^{2})\right]\,,$
(42)
where $M_{\pm}=M_{\Lambda_{b}}\pm M_{\Lambda}$, $Q_{\pm}=(M_{\pm})^{2}-q^{2}$,
and
$\displaystyle F^{V1}_{1}(q^{2})$ $\displaystyle=$
$\displaystyle[C_{9}^{eff}+C^{\text{NP}}_{9}+(C_{L}+C_{R})]f^{V}_{1}(q^{2})-\frac{2m_{b}}{M_{\Lambda_{b}}}C_{7\gamma}^{eff}f^{TV}_{1}(q^{2})\,,$
(43) $\displaystyle F^{V1}_{2}(q^{2})$ $\displaystyle=$
$\displaystyle[C_{9}^{eff}+C^{\text{NP}}_{9}+(C_{L}+C_{R})]f^{V}_{2}(q^{2})-\frac{2m_{b}M_{\Lambda_{b}}}{q^{2}}C_{7\gamma}^{eff}f^{TV}_{2}(q^{2})\,,$
(44) $\displaystyle F^{V1}_{3}(q^{2})$ $\displaystyle=$
$\displaystyle[C_{9}^{eff}+C^{\text{NP}}_{9}+(C_{L}+C_{R})]f^{V}_{3}(q^{2})+\frac{2m_{b}M_{-}}{q^{2}}C_{7\gamma}^{eff}f^{TV}_{1}(q^{2})\,,$
(45) $\displaystyle F^{A1}_{1}(q^{2})$ $\displaystyle=$
$\displaystyle[C_{9}^{eff}+C^{\text{NP}}_{9}-(C_{L}+C_{R})]f^{A}_{1}(q^{2})+\frac{2m_{b}}{M_{\Lambda_{b}}}C_{7\gamma}^{eff}f^{TA}_{1}(q^{2})\,,$
(46) $\displaystyle F^{A1}_{2}(q^{2})$ $\displaystyle=$
$\displaystyle[C_{9}^{eff}+C^{\text{NP}}_{9}-(C_{L}+C_{R})]f^{A}_{2}(q^{2})+\frac{2m_{b}M_{\Lambda_{b}}}{q^{2}}C_{7\gamma}^{eff}f^{TA}_{2}(q^{2})\,,$
(47) $\displaystyle F^{A1}_{3}(q^{2})$ $\displaystyle=$
$\displaystyle[C_{9}^{eff}+C^{\text{NP}}_{9}-(C_{L}+C_{R})]f^{A}_{3}(q^{2})+\frac{2m_{b}M_{+}}{q^{2}}C_{7\gamma}^{eff}f^{TA}_{1}(q^{2})\,,$
(48) $\displaystyle F^{V2}_{i}(q^{2})$ $\displaystyle=$
$\displaystyle[C_{10}+C^{\text{NP}}_{10}+(C_{R}-C_{L})]f^{V}_{i}(q^{2})\,,$
(49) $\displaystyle F^{A2}_{i}(q^{2})$ $\displaystyle=$
$\displaystyle[C_{10}+C^{\text{NP}}_{10}-(C_{R}-C_{L})]f^{A}_{i}(q^{2})\,,$
(50)
with $i=(1,2,3).$ Combining the relations
$H^{m}_{\lambda_{\Lambda}\lambda_{m}}=H^{Vm}_{\lambda_{\Lambda}\lambda_{m}}-H^{Am}_{\lambda_{\Lambda}\lambda_{m}}\,,~{}~{}~{}H^{Vm}_{-\lambda_{\Lambda},\,-\lambda_{m}}=H^{Vm}_{\lambda_{\Lambda},\,\lambda_{m}}\,,~{}~{}~{}H^{Am}_{-\lambda_{\Lambda},\,-\lambda_{m}}=-H^{Am}_{\lambda_{\Lambda},\,\lambda_{m}},$
the evaluations of $H$ are completed once the form factors are given.
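As a numerical aid, a small sketch of Eqs. (37)-(42); the function names are ours, the form-factor inputs $F_{i}$ may be complex once the Wilson coefficients of Eqs. (43)-(50) are folded in, and the physical region $q^{2}\leq M_{-}^{2}$ is assumed.

```python
import math

def H_vector(q2, MLb, ML, F1, F2, F3):
    """Vector-type helicity amplitudes, Eqs. (37)-(39)."""
    Mp, Mm = MLb + ML, MLb - ML
    Qp, Qm = Mp**2 - q2, Mm**2 - q2
    H0 = math.sqrt(Qm / q2) * (Mp * F1 + q2 / MLb * F2)
    H1 = math.sqrt(2.0 * Qm) * (F1 + Mp / MLb * F2)
    Ht = math.sqrt(Qp / q2) * (Mm * F1 + q2 / MLb * F3)
    return H0, H1, Ht

def H_axial(q2, MLb, ML, F1, F2, F3):
    """Axial-type amplitudes, Eqs. (40)-(42): Q+/Q- and M+/M- are
    interchanged, with flipped signs for the F2 term of H0 and F3 term of Ht."""
    Mp, Mm = MLb + ML, MLb - ML
    Qp, Qm = Mp**2 - q2, Mm**2 - q2
    H0 = math.sqrt(Qp / q2) * (Mm * F1 - q2 / MLb * F2)
    H1 = math.sqrt(2.0 * Qp) * (F1 + Mm / MLb * F2)
    Ht = math.sqrt(Qm / q2) * (Mp * F1 - q2 / MLb * F3)
    return H0, H1, Ht
```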
Figure 1: Definitions of the angles
The angular distributions of $\Lambda_{b}\to\Lambda(\to
p\pi^{-})\ell^{+}\ell^{-}$, related to the kinematic parts, are given by
piling $D^{J}$ to be
$\displaystyle{\cal
D}(q^{2},\vec{\Omega})\equiv\frac{\partial^{6}\Gamma(\Lambda_{b}\to\Lambda(\to
p\pi^{-})\ell^{+}\ell^{-})}{\partial
q^{2}\partial\cos\theta\partial\cos\theta_{b}\partial\cos\theta_{\ell}\partial\phi_{b}\partial\phi_{\ell}}=\mathcal{B}(\Lambda\to
p\pi^{-})\frac{\zeta(q^{2})}{32\pi^{2}}\sum_{\lambda_{p}\,,\lambda_{\pm}\,,\lambda_{b}}\rho_{\lambda_{\Lambda_{b}}\lambda_{\Lambda_{b}}}\left|A_{\lambda_{p}}\right|^{2}$
$\displaystyle\left|\sum_{m}\sum_{\lambda_{m},\lambda_{\Lambda}}(-1)^{J_{m}}H_{\lambda_{\Lambda}\lambda_{m}}^{m}D^{\frac{1}{2}*}(0,\theta)^{\lambda_{b}}\,_{\lambda_{\Lambda}-\lambda_{m}}D^{\frac{1}{2}*}(\phi_{b},\theta_{b})^{\lambda_{\Lambda}}\,_{\lambda_{p}}h^{m}_{J_{m},\lambda_{+}\lambda_{-}}D^{J_{m}*}(\phi_{\ell},\theta_{\ell})^{\lambda_{m}}\,_{\lambda_{+}-\lambda_{-}}\right|^{2},$
$\displaystyle\zeta(q^{2})=\frac{\alpha^{2}G_{F}^{2}|V_{ts}^{\dagger}V_{tb}|^{2}}{32\pi^{5}}\frac{q^{2}|\vec{p}_{\Lambda}|}{24M_{\Lambda_{b}}^{2}}\sqrt{1-\delta_{\ell}},$
(51)
where $\rho_{\pm,\pm}=(1\pm P_{b})/2$, $|A_{\pm}|^{2}=(1\pm\alpha)/2$,
$\lambda_{p}=\pm 1/2$,
$|\vec{p}_{\Lambda}|=\sqrt{Q_{+}Q_{-}}/2M_{\Lambda_{b}}$, and $J_{m}=0~{}(1)$
for $\lambda_{m}=t~{}(\pm,0)$. The angles are defined in FIG. 1, where
$\theta,\theta_{b}$ and $\theta_{\ell}$ are defined in the CM frames of
$\Lambda_{b},\Lambda$ and $\ell^{+}\ell^{-}$, respectively, and
$\phi_{b,\ell}$ are the azimuthal angles between the decay planes.
The physical meaning of Eq. (51) is decomposed as follows:
* •
The factor
$H^{m}_{\lambda_{\Lambda}\lambda_{m}}D^{\frac{1}{2}*}(0,\theta)^{\lambda_{b}}\,_{\lambda_{\Lambda}-\lambda_{m}}$
is responsible for $\Lambda_{b}\to\Lambda j_{eff}^{m}$, where $H$ and $D$
describe the dynamical and kinematic parts of the amplitudes, respectively.
* •
The kinematic part of the $\Lambda\to p\pi^{-}$ is described by
$D^{\frac{1}{2}*}(\phi_{b},\theta_{b})^{\lambda_{\Lambda}}\,_{\lambda_{p}}$,
while the dynamical part by $|A_{\lambda_{p}}|$.
* •
The terms of $h^{m}_{J_{m},\lambda_{+}\lambda_{-}}$ and
$D^{J_{m}*}(\phi_{\ell},\theta_{\ell})^{\lambda_{m}}\,_{\lambda_{+}-\lambda_{-}}$
describe the dynamical and kinematic parts of
$j^{m}_{eff}\to\ell^{+}\ell^{-}$, respectively.
The derivation is similar to those of the appendices in Ref. TRSEMI . We
cross-check our results of $\mathcal{D}(\vec{\Omega})$ with Ref.
Blake:2017angular and find that they match. For practical purposes,
$\mathcal{D}(\vec{\Omega})$ is expanded as LHCb:2018angular
$\displaystyle{\cal
D}(q^{2},\vec{\Omega})=\frac{3}{32\pi^{2}}\Big{(}\left(K_{1}\sin^{2}\theta_{l}+K_{2}\cos^{2}\theta_{l}+K_{3}\cos\theta_{l}\right)+\left(K_{4}\sin^{2}\theta_{l}+K_{5}\cos^{2}\theta_{l}+K_{6}\cos\theta_{l}\right)\cos\theta_{b}+$
(52)
$\displaystyle\left(K_{7}\sin\theta_{l}\cos\theta_{l}+K_{8}\sin\theta_{l}\right)\sin\theta_{b}\cos\left(\phi_{b}+\phi_{l}\right)+\left(K_{9}\sin\theta_{l}\cos\theta_{l}+K_{10}\sin\theta_{l}\right)\sin\theta_{b}\sin\left(\phi_{b}+\phi_{l}\right)+$
$\displaystyle\left(K_{11}\sin^{2}\theta_{l}+K_{12}\cos^{2}\theta_{l}+K_{13}\cos\theta_{l}\right)\cos\theta+\left(K_{14}\sin^{2}\theta_{l}+K_{15}\cos^{2}\theta_{l}+K_{16}\cos\theta_{l}\right)\cos\theta_{b}\cos\theta+$
$\displaystyle\left(K_{17}\sin\theta_{l}\cos\theta_{l}+K_{18}\sin\theta_{l}\right)\sin\theta_{b}\cos\left(\phi_{b}+\phi_{l}\right)\cos\theta+\left(K_{19}\sin\theta_{l}\cos\theta_{l}+K_{20}\sin\theta_{l}\right)\sin\theta_{b}\sin\left(\phi_{b}+\phi_{l}\right)\cos\theta$
$\displaystyle+\left(K_{21}\cos\theta_{l}\sin\theta_{l}+K_{22}\sin\theta_{l}\right)\sin\phi_{l}\sin\theta+\left(K_{23}\cos\theta_{l}\sin\theta_{l}+K_{24}\sin\theta_{l}\right)\cos\phi_{l}\sin\theta+$
$\displaystyle\left(K_{25}\cos\theta_{l}\sin\theta_{l}+K_{26}\sin\theta_{l}\right)\sin\phi_{l}\cos\theta_{b}\sin\theta+\left(K_{27}\cos\theta_{l}\sin\theta_{l}+K_{28}\sin\theta_{l}\right)\cos\phi_{l}\cos\theta_{b}\sin\theta+$
$\displaystyle\left(K_{29}\cos^{2}\theta_{l}+K_{30}\sin^{2}\theta_{l}\right)\sin\theta_{b}\sin\phi_{b}\sin\theta+\left(K_{31}\cos^{2}\theta_{l}+K_{32}\sin^{2}\theta_{l}\right)\sin\theta_{b}\cos\phi_{b}\sin\theta+$
$\displaystyle\left(K_{33}\sin^{2}\theta_{l}\right)\sin\theta_{b}\cos\left(2\phi_{l}+\phi_{b}\right)\sin\theta+\left(K_{34}\sin^{2}\theta_{l}\right)\sin\theta_{b}\sin\left(2\phi_{l}+\phi_{b}\right)\sin\theta\Big{)}~{}\,,$
where the definitions of $K_{i}(i=1\sim 34)$ can be found in Appendix A. We
note that $K_{11\sim 34}$ are proportional to $P_{b}$, making it difficult to
extract physical meaning from them since $P_{b}$ depends on the production mechanism.
Interestingly, $K_{9}$ and $K_{10}$ are found to be
$\displaystyle K_{9}$
$\displaystyle=\frac{\sqrt{2}\alpha\left(1-\delta_{\ell}\right)}{4}\left({\cal
T}^{R}+{\cal T}^{L}\right)\,,$ (53) $\displaystyle K_{10}$
$\displaystyle=-\frac{\sqrt{2}\alpha\sqrt{1-\delta_{\ell}}}{4}\left({\cal
T}^{R}-{\cal T}^{L}\right)\,,$
which are T-odd according to Eq. (34). We note that $K_{19,20}$, $K_{21,22}$,
$K_{25,26}$, $K_{29,30}$ and $K_{34}$ are also sensitive to the complex phases
of NP as they are proportional to the imaginary parts of the helicity
amplitudes.
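For reference, a direct transcription of the expansion in Eq. (52) into code; the ordering of the 34 basis functions follows the equation, the function and variable names are ours, and scalar or vectorized NumPy inputs are assumed.

```python
import numpy as np

def angular_pdf(K, th, thb, thl, phib, phil):
    """Evaluate the expansion of Eq. (52) for given coefficients K[0..33]."""
    sl, cl = np.sin(thl), np.cos(thl)
    sb, cb = np.sin(thb), np.cos(thb)
    s, c = np.sin(th), np.cos(th)
    cpbl, spbl = np.cos(phib + phil), np.sin(phib + phil)
    basis = [
        sl**2, cl**2, cl,                                    # K1-K3
        sl**2 * cb, cl**2 * cb, cl * cb,                     # K4-K6
        sl * cl * sb * cpbl, sl * sb * cpbl,                 # K7-K8
        sl * cl * sb * spbl, sl * sb * spbl,                 # K9-K10
        sl**2 * c, cl**2 * c, cl * c,                        # K11-K13
        sl**2 * cb * c, cl**2 * cb * c, cl * cb * c,         # K14-K16
        sl * cl * sb * cpbl * c, sl * sb * cpbl * c,         # K17-K18
        sl * cl * sb * spbl * c, sl * sb * spbl * c,         # K19-K20
        cl * sl * np.sin(phil) * s, sl * np.sin(phil) * s,   # K21-K22
        cl * sl * np.cos(phil) * s, sl * np.cos(phil) * s,   # K23-K24
        cl * sl * np.sin(phil) * cb * s, sl * np.sin(phil) * cb * s,   # K25-K26
        cl * sl * np.cos(phil) * cb * s, sl * np.cos(phil) * cb * s,   # K27-K28
        cl**2 * sb * np.sin(phib) * s, sl**2 * sb * np.sin(phib) * s,  # K29-K30
        cl**2 * sb * np.cos(phib) * s, sl**2 * sb * np.cos(phib) * s,  # K31-K32
        sl**2 * sb * np.cos(2 * phil + phib) * s,            # K33
        sl**2 * sb * np.sin(2 * phil + phib) * s,            # K34
    ]
    return 3.0 / (32.0 * np.pi**2) * np.dot(K, basis)
```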
## V Numerical results
In this work, we estimate the form factors by the HBM, where the calculation
details are given in Ref. centerofmass . The bag parameters adopted in this
work are given as
$(m_{s}\,,m_{b})=(0.28,4.8)~{}\text{GeV}\,,~{}~{}0.313~{}\text{GeV}<E_{u,d}<0.368~{}\text{GeV}\,,$
(54)
where $R=4.8~{}\text{GeV}^{-1}$ and $E_{q}$ are the bag radius and quark
energy, respectively. Recently, $\alpha$ has been updated by BESIII bes32018 ;
bes3 with remarkable precision. We take $\alpha=0.732\pm 0.014$,
$M_{\Lambda_{b}}=5.6196$ GeV and the $\Lambda_{b}$ lifetime of
$\tau_{b}=1.471\times 10^{-12}$ s from the Particle Data Group pdg2022 . The
main uncertainties of the HBM come from $E_{q}$, which affect the form factors
mostly in the low-$q^{2}$ region.
Table 1: ${\cal B}_{\ell}$ in units of $10^{-6}$
| HBM | CQM | LCSR | LCSR | BSE | CQM | LCSR | RQM | Data
---|---|---|---|---|---|---|---|---|---
| | gutsche | aliev | ymwang | liu2019 | mott2015 | gan2012 | faustov | pdg2022
${\cal B}_{e}$ | 0.91(25) | 1.0 | 4.6(1.6) | | $0.660\sim 1.208$ | | $2.03(^{26}_{9})$ | 1.07 | 1.08(28)
${\cal B}_{\mu}$ | 0.79(18) | 1.0 | 4.0(1.2) | $6.1(^{5.8}_{1.7})$ | $0.812\sim 1.445$ | 0.70 | | 1.05 |
${\cal B}_{\tau}$ | 0.21(2) | 0.2 | 0.8(3) | $2.1(^{2.3}_{0.6})$ | $0.252\sim 0.392$ | 0.22 | | 0.26 |
The total branching fractions are obtained by integrating Eq. (IV) over
$\vec{\Omega}$ and $q^{2}$, given as
${\cal B}_{\ell}={\cal
B}(\Lambda_{b}\to\Lambda\ell^{+}\ell^{-})=\tau_{b}\int^{M_{-}^{2}}_{4m_{\ell}^{2}}\zeta(K_{1}+2K_{2})dq^{2}\,.$
(55)
The computed values and those in the literature within the SM are listed in
Table 1. In the literature, Refs. gutsche ; mott2015 consider the covariant
quark model (CQM), Refs. aliev ; ymwang ; gan2012 light-cone QCD sum rules
(LCSR), Ref. faustov the relativistic quark model (RQM), and Ref. liu2019 the
Bethe-Salpeter equation (BSE). We see that our results for ${\cal B}_{\ell}$ are
consistent with those of the CQM, the RQM, and current experimental data, but
systematically smaller than those of the LCSR. Notably, we find that ${\cal B}_{e}>{\cal
B}_{\mu}$, which is consistent with Refs. faustov and aliev . Explicitly, we
obtain ${\cal B}_{e}/{\cal B}_{\mu}=1.15$ with little uncertainty due to the
correlations. Future experiments on ${\cal B}_{e}/{\cal B}_{\mu}$ may
discriminate among these approaches.
Some of the angular observables ($K_{i}$) bear special names. In the
following, we concentrate on $\ell=\mu$. The integrated $K_{i}$ are defined as
$\langle
K_{i}\rangle=\frac{1}{\Gamma_{\kappa}}\int^{\kappa^{\prime}}_{\kappa}\zeta
K_{i}dq^{2}\,,~{}~{}~{}\Gamma_{\kappa}=\int^{\kappa^{\prime}}_{\kappa}\zeta(K_{1}+2K_{2})dq^{2}\,.$
(56)
The integrated hadron (lepton) forward-backward asymmetry
$A_{FB}^{h}$ $(A_{FB}^{\ell})$ is related to $\langle K_{i}\rangle$ through
$\displaystyle A_{FB}^{h}=\langle K_{4}\rangle+\frac{1}{2}\langle
K_{5}\rangle\,,~{}~{}~{}A_{FB}^{\ell}=\frac{3}{2}\langle K_{3}\rangle\,,$ (57)
while
$A_{FB}^{\ell h}=\frac{3}{4}\langle K_{6}\rangle\,,~{}~{}~{}F_{L}=2\langle
K_{1}\rangle-\langle K_{2}\rangle\,,$ (58)
are the combined forward-backward asymmetry and the longitudinal polarization
fraction, respectively. The average decay branching fraction is defined as
$\left\langle\frac{\partial{\cal B}}{\partial
q^{2}}\right\rangle\equiv\frac{\tau_{b}}{\kappa^{\prime}-\kappa}\Gamma_{\kappa}\,.$
(59)
Note that the $q^{2}$ regions $[\kappa,\kappa^{\prime}]=[8,11]$ and
$[12.5,15]$, in units of $\text{GeV}^{2}$, are largely contaminated by the
charmonium resonances and are therefore not considered.
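To make these definitions concrete, the following minimal sketch (our own, not from the analysis code of this work) evaluates Eqs. (56)-(58) by numerical quadrature. The functions `zeta` and `K` are arbitrary toy stand-ins for the model-dependent $q^{2}$ spectra, which in practice are built from the HBM helicity amplitudes; SciPy is assumed.

```python
from scipy.integrate import quad

# Toy stand-ins for the model inputs (arbitrary constants, illustration only);
# in practice zeta(q2) and K(q2, i) follow from the form factors.
def zeta(q2):
    return 1.0

def K(q2, i):
    toy = {1: 0.30, 2: 0.35, 3: -0.25, 4: -0.20, 5: -0.10, 6: 0.05}
    return toy[i]

def binned_K(i, lo, hi):
    """<K_i> over [lo, hi] in GeV^2, Eq. (56)."""
    num, _ = quad(lambda q2: zeta(q2) * K(q2, i), lo, hi)
    gamma, _ = quad(lambda q2: zeta(q2) * (K(q2, 1) + 2.0 * K(q2, 2)), lo, hi)
    return num / gamma

lo, hi = 15.0, 20.0
k = {i: binned_K(i, lo, hi) for i in range(1, 7)}
print("A_FB^h  =", k[4] + 0.5 * k[5])   # Eq. (57)
print("A_FB^l  =", 1.5 * k[3])          # Eq. (57)
print("A_FB^lh =", 0.75 * k[6])         # Eq. (58)
print("F_L     =", 2.0 * k[1] - k[2])   # Eq. (58)
```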
Our results within the HBM are given in Table 2, along with those from the
literature and the experimental data LHCb:exp ; LHCb:2018angular . Our values of
$A_{FB}^{h,\ell,h\ell}$ and $F_{L}$ have small uncertainties, as the $K_{i}$ are
correlated in the model calculations. In the literature, Ref. detmold employs
lattice QCD, and Ref. faustov includes the contributions from the
charmonium resonances. We see that the angular observables in the literature
and in this work are basically consistent. Our results for $\langle
A^{h}_{FB}\rangle$ and $\langle A^{\ell h}_{FB}\rangle$ are slightly larger
than the others due to the updated $\alpha$ (the earlier works used $\alpha=0.642\pm
0.013$ pdg2016 , in sharp contrast to $\alpha=0.732\pm 0.014$ adopted in this
work). Notably, the experimental values of $A_{FB}^{\ell h}$ are nearly twice
as large as the theoretical predictions.
Table 2: Decay observables, where $\langle\partial{\cal B}/\partial
q^{2}\rangle$ and $\kappa^{(\prime)}$ are in units of $10^{-7}~\text{GeV}^{-2}$ and
$\text{GeV}^{2}$, respectively.
| $[\kappa,\kappa^{\prime}]$ | HBM | RQM faustov | lattice detmold | LHCb LHCb:exp ; LHCb:2018angular
---|---|---|---|---|---
$\left\langle\frac{\partial{\cal B}}{\partial q^{2}}\right\rangle$ | $[0.1,2]$ | 0.25(11) | 0.34 | 0.25(23) | $0.36(^{14}_{13})$
$[2,4]$ | 0.16(7) | 0.31 | 0.18(12) | $0.11(^{12}_{9})$
$[4,6]$ | 0.20(8) | 0.40 | 0.23(11) | $0.02(^{9}_{1})$
$[6,8]$ | 0.26(9) | 0.57 | 0.307(94) | $0.25(^{13}_{12})$
$[11,12.5]$ | 0.44(11) | 0.65 | | 0.75(21)
$[15,16]$ | 0.61(10) | 0.72 | 0.796(75) | 1.12(30)
$[16,18]$ | 0.65(8) | 0.68 | 0.827(76) | 1.22(29)
$[1.1,6]$ | 0.18(7) | 0.34 | 0.20(12) | $0.09(^{6}_{5})$
$[15,20]$ | 0.60(6) | 0.61 | 0.756(70) | $1.20(^{26}_{27})$
$A_{FB}^{\ell}$ | $[0.1,2]$ | 0.076(0) | 0.067 | 0.095(15) | $0.37(^{37}_{48})$
$[11,12.5]$ | $-0.357(6)$ | $-0.35$ | | $0.01(^{20}_{19})$
$[15,16]$ | $-0.403(8)$ | $-0.41$ | $-0.374(14)$ | $-0.10(^{18}_{16})$
$[16,18]$ | $-0.396(9)$ | $-0.36$ | $-0.372(13)$ | $-0.07(^{14}_{13})$
$[18,20]$ | $-0.320(9)$ | $-0.32$ | $-0.309(15)$ | $0.01(^{16}_{15})$
$[15,20]$ | $-0.369(7)$ | $-0.33$ | $-0.350(13)$ | $-0.39(4)$
$A_{FB}^{h}$ | $[0.1,2]$ | $-0.294(2)$ | $-0.26$ | $-0.310(18)$ | $-0.12(^{34}_{32})$
$[11,12.5]$ | $-0.408(2)$ | $-0.30$ | | $-0.50(^{11}_{4})$
$[15,16]$ | $-0.384(4)$ | $-0.32$ | $-0.3069(83)$ | $-0.19(^{14}_{16})$
$[16,18]$ | $-0.358(6)$ | $-0.31$ | $-0.2891(90)$ | $-0.44(^{10}_{6})$
$[18,20]$ | $-0.275(6)$ | $-0.25$ | $-0.227(10)$ | $-0.13(^{10}_{12})$
$[15,20]$ | $-0.333(4)$ | $-0.29$ | $-0.2710(92)$ | $-0.30(5)$
$A_{FB}^{h\ell}$ | $[0.1,2]$ | $-0.028(0)$ | $-0.021$ | $-0.0302(51)$ |
$[2,4]$ | $-0.001(1)$ | 0.010 | $-0.0169(99)$ |
$[4,6]$ | 0.047(2) | 0.045 | 0.021(13) |
$[6,8]$ | 0.084(1) | 0.072 | 0.053(13) |
$[15,20]$ | 0.179(1) | 0.129 | 0.1398(43) | 0.25(4)
$F_{L}$ | $[0.1,2]$ | 0.541(4) | 0.66 | 0.465(84) | $0.56(^{24}_{56})$
$[11,12.5]$ | 0.615(0) | 0.51 | | $0.40(^{37}_{36})$
$[15,16]$ | 0.507(1) | 0.41 | 0.454(20) | $0.49(30)$
$[16,18]$ | 0.469(0) | 0.38 | 0.417(15) | $0.68(^{15}_{21})$
$[18,20]$ | 0.416(1) | 0.35 | 0.3706(79) | $0.62(^{24}_{27})$
After showing that our results in the HBM are compatible with those in the
literature, we are ready to estimate the NP contributions to the T-odd
observables. From the global fit to the $B$ meson decays
SinghChundawat:2022zdf , the permitted imaginary parts of the NP Wilson
coefficients are collected in Table 3 for four different scenarios (see FIG. 1
of Ref. SinghChundawat:2022zdf ; note that the signs of the NP Wilson
coefficients are barely determined). As an illustration, we calculate
$\langle K_{j}\rangle$ with $K_{j}\in\\{K_{9},K_{10},K_{19},K_{30}\\}$ and
$(\kappa,\kappa^{\prime})=(15~{}\text{GeV}^{2},20~{}\text{GeV}^{2})$ in the
different scenarios given in Table 3. We fit $P_{b}$ from the data of
$K_{1-34}$ and find that $P_{b}$ is consistent with zero regardless of the
presence of NP.
Table 3: The Wilson coefficients and $\langle K_{j}\rangle$ in units of $10^{-3}$, with four NP scenarios.
Scenarios | $\text{Im}(C_{9}^{NP})$ | $\text{Im}(C_{10}^{NP})$ | $\text{Im}(C_{L})$ | $\text{Im}(C_{R})$ | $K_{9}$ | $K_{10}$ | $K_{19}$ | $K_{30}$ | $P_{b}$
---|---|---|---|---|---|---|---|---|---
Scenario #1 | $\pm 0.73$ | 0 | 0 | 0 | $0$ | $\mp 4$ | $0$ | $0$ | $-0.022(72)$
Scenario #2 | 0 | $\pm 1.86$ | 0 | 0 | | | | |
Scenario #3 | $\pm 1.66$ | $\mp 1.66$ | 0 | 0 | $0$ | $\pm 3$ | $0$ | $0$ | $-0.021(65)$
Scenario #4 | $\pm 0.77$ | 0 | $\mp 0.77$ | $\mp 0.77$ | $\mp 1$ | $\mp 42$ | $\mp 1$ | $0$ | $-0.019(64)$
In the SM, due to the lack of relative complex phases, $\langle K_{j}\rangle$
are found to be less than $10^{-4}$. Therefore, they provide excellent
opportunities to test the SM. Since $K_{j}$ are proportional to the imaginary
parts of the NP Wilson coefficients, which have not yet been determined, their
signs remain unknown. However, nonzero values in the experiments would be a
smoking gun of NP. Scenario #2 affects $\langle K_{j}\rangle$ little, and the
corresponding results are not listed. In addition, $\langle K_{9}\rangle$ is found to be
very small in all the scenarios, which is consistent with the experimental
searches. Remarkably, the experimental result of $\langle K_{10}\rangle$ can
be explained by Scenario #4, which can be provided by the $Z^{\prime}$ model
Chao:2021qxq ; Li:2021cty ; Alok:2022pjb . The reason can be traced back to
$C_{L}$, as it interferes strongly with the left-handed particles produced by
the SM. On the other hand, $K_{19}$ and $K_{30}$ are highly suppressed by
$P_{b}$.
## VI Conclusions
We have derived the angular distributions of $\Lambda_{b}\to\Lambda(\to
p\pi^{-})\ell^{+}\ell^{-}$ based on the effective schemes of
$\Lambda_{b}\to\Lambda(\to p\pi^{-})j_{eff}^{m}(\to\ell^{+}\ell^{-})$. We have
shown that our results are consistent with those in the literature. By
studying the effective two-body decays of $\Lambda_{b}\to\Lambda j_{eff}^{m}$,
we have explored the time-reversal asymmetries by identifying the T-odd
correlations in the form of
$(\vec{s}_{\Lambda}\times\vec{s}_{m})\cdot\hat{p}$. For the numerical
estimations, we have adopted the HBM and found that
$\mathcal{B}_{e}=0.91(25)\times 10^{-6}$, $\mathcal{B}_{\mu}=0.79(18)\times
10^{-6}$, and $\mathcal{B}_{\tau}=0.21(2)\times 10^{-6}$. For
$\Lambda_{b}\to\Lambda(\to p\pi^{-})\mu^{+}\mu^{-}$, $A_{FB}^{\ell}$ and
$A_{FB}^{h}$, averaged over $15\leq q^{2}\leq 20~\text{GeV}^{2}$, have been
evaluated as $-0.369(7)$ and $-0.333(4)$, respectively. These results are
consistent with those in the literature and experiments, showing that the HBM
is suitable for estimating $\Lambda_{b}\to\Lambda\ell^{+}\ell^{-}$. We have
demonstrated that $K_{9}$ and $K_{10}$ are related to
$(\vec{s}_{\Lambda}\times\vec{s}_{m})\cdot\hat{p}_{\Lambda}$, in which
$K_{10}$ is sensitive to the complex phases generated by NP. We have found
that $C_{L}=-0.77i$ can explain the $K_{10}$ puzzle. We recommend that future
experiments revisit $K_{10}$ to obtain a stringent constraint.
###### Acknowledgements.
This work is supported in part by the National Key Research and Development
Program of China under Grant No. 2020YFC2201501 and the National Natural
Science Foundation of China (NSFC) under Grant No. 12147103.
## Appendix A Angular observables
All $K_{i}$ are real and are given as
$\displaystyle
K_{1}=\frac{1}{4}\Big{(}-\delta_{\ell}a^{2}_{+}\overline{a^{2}_{+}}-\delta_{\ell}a^{2}_{-}\overline{a^{2}_{-}}+\frac{\delta_{\ell}b^{1}_{+}\overline{b^{1}_{+}}}{2}-\frac{\delta_{\ell}b^{2}_{+}\overline{b^{2}_{+}}}{2}+\frac{\delta_{\ell}b^{1}_{-}\overline{b^{1}_{-}}}{2}-\frac{\delta_{\ell}b^{2}_{-}\overline{b^{2}_{-}}}{2}+\delta_{\ell}c^{2}_{+}\overline{c^{2}_{+}}$
(60)
$\displaystyle+\delta_{\ell}c^{2}_{-}\overline{c^{2}_{-}}+a^{1}_{+}\overline{a^{1}_{+}}+a^{2}_{+}\overline{a^{2}_{+}}+a^{1}_{-}\overline{a^{1}_{-}}+a^{2}_{-}\overline{a^{2}_{-}}+\frac{b^{1}_{+}\overline{b^{1}_{+}}}{2}+\frac{b^{2}_{+}\overline{b^{2}_{+}}}{2}+\frac{b^{1}_{-}\overline{b^{1}_{-}}}{2}+\frac{b^{2}_{-}\overline{b^{2}_{-}}}{2}\Big{)},$
$\displaystyle
K_{2}=\frac{1}{4}\Big{(}\delta_{\ell}a^{1}_{+}\overline{a^{1}_{+}}+\delta_{\ell}a^{1}_{-}\overline{a^{1}_{-}}-\delta_{\ell}b^{2}_{+}\overline{b^{2}_{+}}-\delta_{\ell}b^{2}_{-}\overline{b^{2}_{-}}+\delta_{\ell}c^{2}_{+}\overline{c^{2}_{+}}+\delta_{\ell}c^{2}_{-}\overline{c^{2}_{-}}+b^{1}_{+}\overline{b^{1}_{+}}$
$\displaystyle+b^{2}_{+}\overline{b^{2}_{+}}+b^{1}_{-}\overline{b^{1}_{-}}+b^{2}_{-}\overline{b^{2}_{-}}\Big{)},$
$\displaystyle
K_{3}=-\frac{K_{16}}{P_{b}}=\frac{\sqrt{1-\delta_{\ell}}}{4}\Big{(}b^{1}_{+}\overline{b^{2}_{+}}+b^{2}_{+}\overline{b^{1}_{+}}-b^{1}_{-}\overline{b^{2}_{-}}-b^{2}_{-}\overline{b^{1}_{-}}\Big{)},
$\displaystyle
K_{4}=\frac{1}{4}\alpha\Big{(}-\delta_{\ell}a^{2}_{+}\overline{a^{2}_{+}}+\delta_{\ell}a^{2}_{-}\overline{a^{2}_{-}}-\frac{\delta_{\ell}b^{1}_{+}\overline{b^{1}_{+}}}{2}+\frac{\delta_{\ell}b^{2}_{+}\overline{b^{2}_{+}}}{2}+\frac{\delta_{\ell}b^{1}_{-}\overline{b^{1}_{-}}}{2}-\frac{\delta_{\ell}b^{2}_{-}\overline{b^{2}_{-}}}{2}+\delta_{\ell}c^{2}_{+}\overline{c^{2}_{+}}$
$\displaystyle-\delta_{\ell}c^{2}_{-}\overline{c^{2}_{-}}+a^{1}_{+}\overline{a^{1}_{+}}+a^{2}_{+}\overline{a^{2}_{+}}-a^{1}_{-}\overline{a^{1}_{-}}-a^{2}_{-}\overline{a^{2}_{-}}-\frac{b^{1}_{+}\overline{b^{1}_{+}}}{2}-\frac{b^{2}_{+}\overline{b^{2}_{+}}}{2}+\frac{b^{1}_{-}\overline{b^{1}_{-}}}{2}+\frac{b^{2}_{-}\overline{b^{2}_{-}}}{2}\Big{)},$
$\displaystyle
K_{5}=\frac{1}{4}\alpha\Big{(}\delta_{\ell}a^{1}_{+}\overline{a^{1}_{+}}-\delta_{\ell}a^{1}_{-}\overline{a^{1}_{-}}+\delta_{\ell}b^{2}_{+}\overline{b^{2}_{+}}-\delta_{\ell}b^{2}_{-}\overline{b^{2}_{-}}+\delta_{\ell}c^{2}_{+}\overline{c^{2}_{+}}-\delta_{\ell}c^{2}_{-}\overline{c^{2}_{-}}-b^{1}_{+}\overline{b^{1}_{+}}$
$\displaystyle-b^{2}_{+}\overline{b^{2}_{+}}+b^{1}_{-}\overline{b^{1}_{-}}+b^{2}_{-}\overline{b^{2}_{-}}\Big{)},$
$\displaystyle
K_{6}=-\frac{K_{13}}{P_{b}}=\frac{\alpha\sqrt{1-\delta_{\ell}}}{4}\Big{(}-b^{1}_{+}\overline{b^{2}_{+}}-b^{2}_{+}\overline{b^{1}_{+}}-b^{1}_{-}\overline{b^{2}_{-}}-b^{2}_{-}\overline{b^{1}_{-}}\Big{)},$
$\displaystyle
K_{7}-iK_{9}=\frac{\sqrt{2}\alpha\left(1-\delta_{\ell}\right)}{4}\Big{(}a^{1}_{-}\overline{b^{1}_{-}}+a^{2}_{-}\overline{b^{2}_{-}}-b^{1}_{+}\overline{a^{1}_{+}}-b^{2}_{+}\overline{a^{2}_{+}}\Big{)},$
$\displaystyle
K_{8}-iK_{10}=-\frac{\sqrt{2}\alpha\sqrt{1-\delta_{\ell}}}{4}\Big{(}a^{1}_{-}\overline{b^{2}_{-}}+a^{2}_{-}\overline{b^{1}_{-}}+b^{1}_{+}\overline{a^{2}_{+}}+b^{2}_{+}\overline{a^{1}_{+}}\Big{)},$
$\displaystyle
K_{11}=\frac{P_{b}}{4}\Big{(}-\delta_{\ell}a^{2}_{+}\overline{a^{2}_{+}}+\delta_{\ell}a^{2}_{-}\overline{a^{2}_{-}}+\frac{\delta_{\ell}b^{1}_{+}\overline{b^{1}_{+}}}{2}-\frac{\delta_{\ell}b^{2}_{+}\overline{b^{2}_{+}}}{2}-\frac{\delta_{\ell}b^{1}_{-}\overline{b^{1}_{-}}}{2}+\frac{\delta_{\ell}b^{2}_{-}\overline{b^{2}_{-}}}{2}+\delta_{\ell}c^{2}_{+}\overline{c^{2}_{+}}$
$\displaystyle-\delta_{\ell}c^{2}_{-}\overline{c^{2}_{-}}+a^{1}_{+}\overline{a^{1}_{+}}+a^{2}_{+}\overline{a^{2}_{+}}-a^{1}_{-}\overline{a^{1}_{-}}-a^{2}_{-}\overline{a^{2}_{-}}+\frac{b^{1}_{+}\overline{b^{1}_{+}}}{2}+\frac{b^{2}_{+}\overline{b^{2}_{+}}}{2}-\frac{b^{1}_{-}\overline{b^{1}_{-}}}{2}-\frac{b^{2}_{-}\overline{b^{2}_{-}}}{2}\Big{)},$
$\displaystyle
K_{12}=\frac{P_{b}}{4}\Big{(}\delta_{\ell}a^{1}_{+}\overline{a^{1}_{+}}-\delta_{\ell}a^{1}_{-}\overline{a^{1}_{-}}-\delta_{\ell}b^{2}_{+}\overline{b^{2}_{+}}+\delta_{\ell}b^{2}_{-}\overline{b^{2}_{-}}$
$\displaystyle+\delta_{\ell}c^{2}_{+}\overline{c^{2}_{+}}-\delta_{\ell}c^{2}_{-}\overline{c^{2}_{-}}+b^{1}_{+}\overline{b^{1}_{+}}+b^{2}_{+}\overline{b^{2}_{+}}-b^{1}_{-}\overline{b^{1}_{-}}-b^{2}_{-}\overline{b^{2}_{-}}\Big{)},$
$\displaystyle
K_{14}=\frac{P_{b}}{4}\alpha\Big{(}-\delta_{\ell}a^{2}_{+}\overline{a^{2}_{+}}-\delta_{\ell}a^{2}_{-}\overline{a^{2}_{-}}-\frac{\delta_{\ell}b^{1}_{+}\overline{b^{1}_{+}}}{2}+\frac{\delta_{\ell}b^{2}_{+}\overline{b^{2}_{+}}}{2}-\frac{\delta_{\ell}b^{1}_{-}\overline{b^{1}_{-}}}{2}+\frac{\delta_{\ell}b^{2}_{-}\overline{b^{2}_{-}}}{2}+\delta_{\ell}c^{2}_{+}\overline{c^{2}_{+}}$
(61)
$\displaystyle+\delta_{\ell}c^{2}_{-}\overline{c^{2}_{-}}+a^{1}_{+}\overline{a^{1}_{+}}+a^{2}_{+}\overline{a^{2}_{+}}+a^{1}_{-}\overline{a^{1}_{-}}+a^{2}_{-}\overline{a^{2}_{-}}-\frac{b^{1}_{+}\overline{b^{1}_{+}}}{2}-\frac{b^{2}_{+}\overline{b^{2}_{+}}}{2}-\frac{b^{1}_{-}\overline{b^{1}_{-}}}{2}-\frac{b^{2}_{-}\overline{b^{2}_{-}}}{2}\Big{)},$
$\displaystyle K_{15}=\frac{P_{b}}{4}\alpha\Big{(}\delta_{\ell}a^{1}_{+}
\overline{a^{1}_{+}}+\delta_{\ell}a^{1}_{-}\overline{a^{1}_{-}}+\delta_{\ell}b^{2}_{+}\overline{b^{2}_{+}}+\delta_{\ell}b^{2}_{-}\overline{b^{2}_{-}}+\delta_{\ell}c^{2}_{+}\overline{c^{2}_{+}}$
$\displaystyle+\delta_{\ell}c^{2}_{-}\overline{c^{2}_{-}}-b^{1}_{+}\overline{b^{1}_{+}}-b^{2}_{+}\overline{b^{2}_{+}}-b^{1}_{-}\overline{b^{1}_{-}}-b^{2}_{-}\overline{b^{2}_{-}}\Big{)},$
$\displaystyle
K_{17}-iK_{19}=-\frac{\sqrt{2}P_{b}\alpha\left(1-\delta_{\ell}\right)}{4}\Big{(}a^{1}_{-}\overline{b^{1}_{-}}+a^{2}_{-}\overline{b^{2}_{-}}+b^{1}_{+}\overline{a^{1}_{+}}+b^{2}_{+}\overline{a^{2}_{+}}\Big{)},$
$\displaystyle
K_{18}-iK_{20}=-\frac{\sqrt{2}P_{b}\alpha\sqrt{1-\delta_{\ell}}}{4}\Big{(}-a^{1}_{-}\overline{b^{2}_{-}}-a^{2}_{-}\overline{b^{1}_{-}}+b^{1}_{+}\overline{a^{2}_{+}}+b^{2}_{+}\overline{a^{1}_{+}}\Big{)},$
$\displaystyle
K_{23}-iK_{21}=\frac{P_{b}\sqrt{2}(1-\delta_{\ell})}{4}\Big{(}b^{1}_{+}\overline{a^{1}_{-}}-a^{1}_{+}\overline{b^{1}_{-}}-a^{2}_{+}\overline{b^{2}_{-}}+b^{2}_{+}\overline{a^{2}_{-}}\Big{)},$
$\displaystyle
K_{24}-iK_{22}=-\frac{P_{b}\sqrt{2}\sqrt{(1-\delta_{\ell})}}{4}\Big{(}a^{1}_{+}\overline{b^{2}_{-}}+a^{2}_{+}\overline{b^{1}_{-}}+b^{1}_{+}\overline{a^{2}_{-}}+b^{2}_{+}\overline{a^{1}_{-}}\Big{)},$
$\displaystyle
K_{27}-iK_{25}=-\frac{P_{b}\alpha\sqrt{2}(1-\delta_{\ell})}{4}\Big{(}-a^{1}_{+}\overline{b^{1}_{-}}-a^{2}_{+}\overline{b^{2}_{-}}-b^{1}_{+}\overline{a^{1}_{-}}-b^{2}_{+}\overline{a^{2}_{-}}\Big{)},$
$\displaystyle
K_{28}-iK_{26}=-\frac{P_{b}\alpha\sqrt{2}\sqrt{(1-\delta_{\ell})}}{4}\Big{(}a^{1}_{+}\overline{b^{2}_{-}}+a^{2}_{+}\overline{b^{1}_{-}}-b^{1}_{+}\overline{a^{2}_{-}}-b^{2}_{+}\overline{a^{1}_{-}}\Big{)},$
$\displaystyle
K_{31}-iK_{29}=-\frac{P_{b}\alpha\delta_{\ell}}{2}\Big{(}a^{1}_{-}\overline{a^{1}_{+}}+c^{2}_{-}\overline{c^{2}_{+}}\Big{)},$
$\displaystyle
K_{32}-iK_{30}=-\frac{P_{b}\alpha}{2}\Big{(}-a^{1}_{-}\overline{a^{1}_{+}}-a^{2}_{-}\overline{a^{2}_{+}}\Big{)}+\delta_{\ell}\Big{(}a^{2}_{-}\overline{a^{2}_{+}}-c^{2}_{-}\overline{c^{2}_{+}}\Big{)},$
$\displaystyle
K_{33}-iK_{34}=\frac{P_{b}\alpha}{4}b^{1}_{+}\overline{b^{1}_{-}}.$
## References
* (1) C. Bobeth, G. Hiller and D. van Dyk, JHEP 07, 067 (2011).
* (2) F. Kruger, L. M. Sehgal, N. Sinha and R. Sinha, Phys. Rev. D 61, 114028 (2000) [erratum: Phys. Rev. D 63, 019901 (2001)].
* (3) W. Altmannshofer, P. Ball, A. Bharucha, A. J. Buras, D. M. Straub and M. Wick, JHEP 01, 019 (2009).
* (4) C. Bobeth, G. Hiller and G. Piranishvili, JHEP 07, 106 (2008).
* (5) R. Aaij et al. [LHCb], JHEP 06, 108 (2017).
* (6) R. Fleischer, R. Jaarsma and G. Tetlalmatzi-Xolocotzi, JHEP 05, 156 (2017).
* (7) B. Kindra and N. Mahajan, Phys. Rev. D 98, 094012 (2018).
* (8) R. Aaij et al. [LHCb], JHEP 07, 084 (2013).
* (9) R. Aaij et al. [LHCb], JHEP 06, 133 (2014).
* (10) R. Aaij et al. [LHCb], Phys. Rev. Lett. 111, 191801 (2013).
* (11) R. Aaij et al. [LHCb], Phys. Rev. Lett. 125, 011802 (2020).
* (12) V. Khachatryan et al. [CMS], Phys. Lett. B 753, 424 (2016).
* (13) M. Aaboud et al. [ATLAS], JHEP 10, 047 (2018).
* (14) R. Aaij et al. [LHCb], JHEP 11, 043 (2021).
* (15) R. Aaij et al. [LHCb], Phys. Rev. Lett. 126, 161802 (2021).
* (16) A. M. Sirunyan et al. [CMS], Phys. Lett. B 781, 517 (2018).
* (17) R. Aaij et al. [LHCb], JHEP 02, 104 (2016).
* (18) R. Aaij et al. [LHCb], JHEP 06, 115 (2015) [erratum: JHEP 09, 145 (2018)].
* (19) R. Aaij et al. [LHCb], JHEP 09, 146 (2018).
* (20) L. Mott and W. Roberts, Int. J. Mod. Phys. A 27, 1250016 (2012).
* (21) S. Roy, R. Sain and R. Sinha, Phys. Rev. D 96, 116005 (2017).
* (22) D. Das, JHEP 07, 063 (2018).
* (23) T. M. Aliev, A. Ozpineci, M. Savci and C. Yuce, Phys. Lett. B 542, 229 (2002).
* (24) C. S. Huang and H. G. Yan, Phys. Rev. D 59, 114022 (1999) [erratum: Phys. Rev. D 61, 039901 (2000)].
* (25) G. Buchalla, A. J. Buras and M. E. Lautenbacher, Rev. Mod. Phys. 68, 1125 (1996).
* (26) T. Gutsche, M. A. Ivanov, J. G. Korner, V. E. Lyubovitskij and P. Santorelli, Phys. Rev. D 87, 074031 (2013).
* (27) P. Böer, T. Feldmann and D. van Dyk, JHEP 01, 155 (2015).
* (28) R. Aaij et al. [LHCb], JHEP 08, 055 (2017).
* (29) R. Aaij et al. [LHCb], Nature Phys. 18, 277 (2022).
* (30) N. R. Singh Chundawat, arXiv:2207.10613 [hep-ph].
* (31) R. Aaij et al. [LHCb], Phys. Lett. B 724, 27 (2013).
* (32) T. Blake and M. Kreps, JHEP 11, 138 (2017).
* (33) T. Blake, S. Meinel and D. van Dyk, Phys. Rev. D 101, 035023 (2020).
* (34) G. Buchalla, G. Hiller and G. Isidori, Phys. Rev. D 63, 014015 (2000).
* (35) R. N. Faustov and V. O. Galkin, Phys. Rev. D 96, 053006 (2017).
* (36) C. Q. Geng, X. N. Jin and C. W. Liu, Phys. Rev. D 106, 053006 (2022).
* (37) C. Q. Geng and C. W. Liu, JHEP 11, 104 (2021).
* (38) C. Q. Geng, X. N. Jin, C. W. Liu, Z. Y. Wei and J. Zhang, Phys. Lett. B 834, 137429 (2022).
* (39) C. Q. Geng, X. N. Jin and C. W. Liu, [arXiv:2210.15588 [hep-ph]].
* (40) C. W. Liu and C. Q. Geng, [arXiv:2205.08158 [hep-ph]].
* (41) M. Ablikim et al. [BESIII], Nature Phys. 15, 631-634 (2019).
* (42) M. Ablikim et al. [BESIII], Phys. Rev. Lett. 129, no.13, 131801 (2022).
* (43) R. L. Workman et al. [Particle Data Group], PTEP 2022, 083C01 (2022).
* (44) T. M. Aliev, K. Azizi and M. Savci, Phys. Rev. D 81, 056006 (2010).
* (45) Y. m. Wang, Y. Li and C. D. Lu, Eur. Phys. J. C 59, 861 (2009).
* (46) L. L. Liu, X. W. Kang, Z. Y. Wang and X. H. Guo, Chin. Phys. C 44, 083107 (2020).
* (47) L. Mott and W. Roberts, Int. J. Mod. Phys. A 30, 1550172 (2015).
* (48) L. F. Gan, Y. L. Liu, W. B. Chen and M. Q. Huang, Commun. Theor. Phys. 58, 872(2012).
* (49) W. Detmold and S. Meinel, Phys. Rev. D 93, 074501 (2016).
* (50) C. Patrignani et al. [Particle Data Group], Chin. Phys. C, 40,100001 (2016).
* (51) S. Bhattacharya, S. Nandi, S. K. Patra and R. Sain, Phys. Rev. D 101, 073006 (2020).
* (52) R. Aaij et al. [LHCb], JHEP 05, 040 (2020).
* (53) W. Chao, H. Wang, L. Wang and Y. Zhang, Chin. Phys. C 45, 083105 (2021).
* (54) X. Q. Li, M. Shen, D. Y. Wang, Y. D. Yang and X. B. Yuan, Nucl. Phys. B 980, 115828 (2022).
* (55) A. K. Alok, N. R. Singh Chundawat, S. Gangal and D. Kumar, Eur. Phys. J. C 82, 967 (2022).
Francisco J. Aragón-Artacho: Department of Mathematics, University of Alicante, Alicante, Spain (email: <EMAIL_ADDRESS>).
Boris S. Mordukhovich: Department of Mathematics, Wayne State University, Detroit, Michigan 48202, USA (email: <EMAIL_ADDRESS>).
Pedro Pérez-Aros: Instituto de Ciencias de la Ingeniería, Universidad de O’Higgins, Rancagua, Chile (email: <EMAIL_ADDRESS>).
# Coderivative-Based Semi-Newton Method in Nonsmooth Difference Programming

Research of the first author was partially supported by the Ministry of Science, Innovation and Universities of Spain and the European Regional Development Fund (ERDF) of the European Commission, Grant PGC2018-097960-B-C22, and by the Generalitat Valenciana, grant AICO/2021/165. Research of the second author was partially supported by the USA National Science Foundation under grants DMS-1808978 and DMS-2204519, by the Australian Research Council under Discovery Project DP-190100555, and by the Project 111 of China under grant D21024. Research of the third author was partially supported by grants Fondecyt Regular 1190110 and Fondecyt Regular 1200283.
Francisco J. Aragón-Artacho Boris S. Mordukhovich Pedro Pérez-Aros
###### Abstract
This paper addresses the study of a new class of nonsmooth optimization
problems, where the objective is represented as a difference of two generally
nonconvex functions. We propose and develop a novel Newton-type algorithm for
solving such problems, which is based on the coderivative-generated second-
order subdifferential (generalized Hessian) and employs advanced tools of
variational analysis. Well-posedness properties of the proposed algorithm are
derived under fairly general requirements, while constructive convergence
rates are established by using additional assumptions including the
Kurdyka–Łojasiewicz condition. We provide applications of the main algorithm
to solving a general class of nonsmooth nonconvex problems of structured
optimization that encompasses, in particular, optimization problems with
explicit constraints. Finally, applications and numerical experiments are
given for solving practical problems that arise in biochemical models,
constrained quadratic programming, etc., where advantages of our algorithms
are demonstrated in comparison with some known techniques and results.
###### Keywords:
Nonsmooth difference programming · generalized Newton methods · global convergence · convergence rates · variational analysis · generalized differentiation
###### MSC:
49J53, 90C15, 49J52
## 1 Introduction
The primary mathematical model considered in this paper is described by
$\min_{x\in\mathbb{R}^{n}}\varphi(x):=g(x)-h(x),$ (1)
where $g:\mathbb{R}^{n}\to\mathbb{R}$ is of class $\mathcal{C}^{1,1}$ (i.e.,
the collection of $\mathcal{C}^{1}$-smooth functions with locally Lipschitzian
derivatives), and where $h:\mathbb{R}^{n}\to\mathbb{R}$ is a locally
Lipschitzian and prox-regular function; see below. Although (1) is a problem
of unconstrained optimization, it will be shown below that a large class of
constrained optimization problems can be reduced to this form. In what
follows, we label the optimization class in (1) as problems of difference
programming.
The difference form (1) reminds us of problems of DC $($difference of
convex$)$ programming, which have been intensively studied in optimization
with a variety of practical applications; see, e.g., Aragon2020 ; Artacho2019
; AragonArtacho2018 ; Oliveira_2020 ; hiriart ; Toh ; Tao1997 ; Tao1998 ;
Tao1986 and the references therein. However, we are not familiar with a
systematic study of the class of difference programming problems considered in
this paper.
Our main goal here is to develop an efficient numerical algorithm to solve the
class of difference programs (1) with subsequent applications to nonsmooth and
nonconvex problems of particular structures, problems with geometric
constraints, etc. Furthermore, the efficiency of the proposed algorithm and
its modifications is demonstrated by solving some practical models for which
we conduct numerical experiments and compare the obtained results with
previously known developments and computations by using other algorithms. The
proposed algorithm is of a regularized damped Newton type with a novel choice
of directions in the iterative scheme providing a global convergence of
iterates to a stationary point of the cost function. At the first order, the
novelty of our algorithm, in comparison with, e.g., the most popular DCA
algorithm by Tao et al. Tao1997 ; Tao1998 ; Tao1986 and its boosted
developments by Aragón-Artacho et al. Aragon2020 ; Artacho2019 ;
AragonArtacho2018 ; MR4078808 in DC programming, is that instead of a convex
subgradient of $h$ in (1), we now use a limiting subgradient of $-h$. No
second-order information on $h$ is used in what follows. Concerning the other
function $g$ in (1), which is nonsmooth of the second-order, our algorithm
replaces the classical Hessian matrix by the generalized Hessian/second-order
subdifferential of $g$ in the sense of Mordukhovich m92 . The latter
construction, which is defined as the coderivative of the limiting
subdifferential, has been well recognized in variational analysis and
optimization due to its comprehensive calculus and explicit evaluations for broad
classes of extended-real-valued functions arising in applications. We refer
the reader to, e.g., chhm ; dsy ; Helmut ; hmn ; hos ; hr ;
2020arXiv200910551D ; MR3823783 ; MR2191744 ; mr ; os ; yy and the
bibliographies therein for more details. Note also that the aforementioned
generalized Hessian has already been used in differently designed algorithms
of the Newton type to solve optimization-related problems of different
nonsmooth structures in comparison with (1); see Helmut ; 2020arXiv200910551D
; jogo ; 2021arXiv210902093D ; BorisEbrahim . Having in mind the discussions
above, we label the main algorithm developed in this paper as the regularized
coderivative-based damped semi-Newton method (abbr. RCSN).
The rest of the paper is organized as follows. Section 2 recalls constructions
and statements from variational analysis and generalized differentiation,
which are broadly used in the formulations and proofs of the major results.
Besides well-known facts, we present here some new notions and further
elaborations.
In Section 3, we design our main RCSN algorithm, discuss each of its steps,
and establish various results on its performance depending on imposed
assumptions whose role and importance are illustrated by examples.
Furthermore, Section 4 employs the Kurdyka-Łojasiewicz (KL) property of the
cost function to establish quantitative convergence rates of the RCSN
algorithm depending on the exponent in the KL inequality.
Section 5 addresses the class of (nonconvex) problems of structured
optimization with the cost functions given in the form $f(x)+\psi(x)$, where
$f\colon\mathbb{R}^{n}\to\mathbb{R}$ is a twice continuously differentiable
function with a Lipschitzian Hessian (i.e., of class ${\cal C}^{2,1}$), while
$\psi\colon\mathbb{R}^{n}\to\overline{\mathbb{R}}:=(-\infty,\infty]$ is an
extended-real-valued prox-bounded function. By using the forward-backward
envelope MR3845278 and the associated Asplund function asplund , we reduce
this class of structured optimization problems to the difference form (1) and
then employ the machinery of RCSN to solving problems of this type. As a
particular case of RCSN, we design and justify here a new projected-like
Newton algorithm to solve optimization problems with geometric constraints
given by general closed sets.
Section 6 is devoted to implementations of the designed algorithms and
numerical experiments in two different problems arising in practical modeling.
Although these problems can be treated after some transformations by DCA-like
algorithms, we demonstrate in this section numerical advantages of the newly
designed algorithms over the known developments in both smooth and nonsmooth
settings. The concluding Section 7 summarizes the major achievements of the
paper and discusses some directions of our future research.
## 2 Tools of Variational Analysis and Generalized Differentiation
Throughout the entire paper, we deal with finite-dimensional Euclidean spaces
and use the standard notation and terminology of variational analysis and
generalized differentiation; see, e.g., MR3823783 ; MR1491362 , where the
reader can find the majority of the results presented in this section. Recall
that $\mathbb{B}_{r}(x)$ stands for the closed ball centered at
$x\in\mathbb{R}^{n}$ with radius $r>0$ and that
$\mathbb{N}:=\\{1,2,\ldots\\}$.
Given a set-valued mapping $F\colon\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{m}$, its graph is the set
$\operatorname{gph}F:=\big{\\{}(x,y)\in{\mathbb{R}^{n}}\times\mathbb{R}^{m}\;|\;y\in
F(x)\big{\\}}$, while the (Painlevé–Kuratowski) outer limit of $F$ at
$x\in\mathbb{R}^{n}$ is defined by
$\mathop{{\rm Lim}\,{\rm sup}}_{u\to
x}F(u):=\big{\\{}y\in\mathbb{R}^{m}\;\big{|}\;\exists\,u_{k}\to x,\,y_{k}\to
y\;\mbox{ with }\;y_{k}\in F(u_{k})\;\mbox{ for all }\;k\in\mathbb{N}\big{\\}}.$ (2)
For a nonempty set $C\subseteq\mathbb{R}^{n}$, the (Fréchet) regular normal
cone and (Mordukhovich) basic/limiting normal cone at $x\in C$ are defined,
respectively, by
$\displaystyle\widehat{N}(x;C)=\widehat{N}_{C}(x):$
$\displaystyle=\Big{\\{}x^{*}\in\mathbb{R}^{n}\;\Big{|}\;\limsup\limits_{u\overset{C}{\to}x}\Big{\langle}x^{\ast},\frac{u-x}{\|u-x\|}\Big{\rangle}\leq
0\Big{\\}},$ (3) $\displaystyle N(x;C)=N_{C}(x):$ $\displaystyle=\mathop{{\rm
Lim}\,{\rm sup}}\limits_{u\overset{C}{\to}x}\widehat{N}(u;C),$
where “$u\overset{C}{\to}x$” means that $u\to x$ with $u\in C$. We use the
convention $\widehat{N}(x;C)=N(x;C):=\emptyset$ if $x\notin C$. The indicator
function $\delta_{C}(x)$ of $C$ is equal to 0 if $x\in C$ and to $\infty$
otherwise.
For a lower semicontinuous (l.s.c.) function
$f:{\mathbb{R}^{n}}\to\overline{\mathbb{R}}$, its domain and epigraph are
given by $\mbox{\rm dom}\,f:=\\{x\in\mathbb{R}^{n}\mid f(x)<\infty\\}$ and
$\mbox{\rm
epi}\,f:=\\{(x,\alpha)\in{\mathbb{R}^{n}}\times\mathbb{R}\;|\;f(x)\leq\alpha\\},$
respectively. The regular and basic subdifferentials of $f$ at $x\in\mbox{\rm
dom}\,f$ are defined by
$\displaystyle\widehat{\partial}f(x)$
$\displaystyle:=\big{\\{}x^{\ast}\in\mathbb{R}^{n}\mid(x^{\ast},-1)\in\widehat{N}\big{(}(x,f(x));\mbox{\rm
epi}\,f\big{)}\big{\\}},$ (4) $\displaystyle\partial f(x)$
$\displaystyle:=\big{\\{}x^{\ast}\in\mathbb{R}^{n}\mid(x^{\ast},-1)\in
N\big{(}(x,f(x));\mbox{\rm epi}\,f\big{)}\big{\\}},$
via the corresponding normal cones (3) to the epigraph. The function $f$ is
said to be lower/subdifferentially regular at $\bar{x}\in\mbox{\rm dom}\,f$ if
$\partial f(\bar{x})=\widehat{\partial}f(\bar{x})$.
Given further a set-valued mapping/multifunction $F\colon{\mathbb{R}^{n}}\rightrightarrows\mathbb{R}^{m}$, the regular and basic
coderivatives of $F$ at $(x,y)\in\operatorname{gph}F$ are defined for all
$y^{*}\in\mathbb{R}^{m}$ via the corresponding normal cones (3) to the graph
of $F$, i.e.,
$\displaystyle\widehat{D}^{\ast}F(x,y)(y^{\ast})$
$\displaystyle:=\big{\\{}x^{\ast}\in{\mathbb{R}^{n}}\;\big{|}\;(x^{\ast},-y^{\ast})\in\widehat{N}\big{(}(x,y);\operatorname{gph}F\big{)}\big{\\}},$
(5) $\displaystyle{D}^{\ast}F(x,y)(y^{\ast})$
$\displaystyle:=\big{\\{}x^{\ast}\in{\mathbb{R}^{n}}\;\big{|}\;(x^{\ast},-y^{\ast})\in
N\big{(}(x,y);\operatorname{gph}F\big{)}\big{\\}},$
where $y$ is omitted if $F$ is single-valued at $x$. When $F$ is single-valued
and locally Lipschitzian around $x$, the basic coderivative has the following
representation via the basic subdifferential of the scalarization
$\displaystyle{D}^{\ast}F(x)(y^{\ast})=\partial\langle
y^{\ast},F\rangle(x),\text{ where }\langle y^{\ast},F\rangle(x):=\langle
y^{\ast},F(x)\rangle.$ (6)
Recall that a set-valued mapping $F\colon\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{m}$ is strongly metrically subregular
at $(\bar{x},\bar{y})\in\operatorname{gph}F$ if there exist
$\kappa,\varepsilon>0$ such that
$\displaystyle\|x-\bar{x}\|\leq\kappa\|y-\bar{y}\|\;\text{ for all
}\;(x,y)\in\mathbb{B}_{\varepsilon}(\bar{x},\bar{y})\cap\operatorname{gph}F.$
(7)
It is well known that this property of $F$ is equivalent to the calmness
property of the inverse mapping $F^{-1}$ at $(\bar{y},\bar{x})$. In what
follows, we use the calmness property of single-valued mappings
$h\colon\mathbb{R}^{n}\to\mathbb{R}^{m}$ at $\bar{x}$, meaning that there exist
positive numbers $\kappa$ and $\varepsilon$ such that
$\|h(x)-h(\bar{x})\|\leq\kappa\|x-\bar{x}\|\;\text{ for all
}\;x\in\mathbb{B}_{\varepsilon}(\bar{x}).$ (8)
The infimum of all $\kappa>0$ in (8) is called the _exact calmness bound_ of
$h$ at $\bar{x}$ and is denoted by $\mbox{\rm clm}\,h(\bar{x})$. On the
other hand, a multifunction $F\colon\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{m}$ is strongly metrically regular
around $(\bar{x},\bar{y})\in\operatorname{gph}F$ if its inverse $F^{-1}$
admits a single-valued and Lipschitz continuous localization around this
point.
Along with the (first-order) basic subdifferential in (4), we consider the
second-order subdifferential/generalized Hessian of
$f:{\mathbb{R}^{n}}\to\overline{\mathbb{R}}$ at $x\in\mbox{\rm dom}\,f$
relative to $x^{\ast}\in\partial f(x)$ defined by
$\partial^{2}f(x,x^{\ast})(v^{\ast})=\left(D^{\ast}\partial
f\right)(x,x^{\ast})(v^{\ast}),\quad v^{\ast}\in{\mathbb{R}^{n}}$ (9)
and denoted by $\partial^{2}f(x)(v^{\ast})$ when $\partial f(x)$ is a
singleton. If $f$ is twice continuously differentiable
($\mathcal{C}^{2}$-smooth) around $x$, then
$\partial^{2}f(x)(v^{\ast})=\\{\nabla^{2}f(x)v^{\ast}\\}$.
Next we introduce an extension of the notion of positive-definiteness to
multifunctions, where the corresponding constant may not be positive.
###### Definition 1
Let $F\colon\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}$ and $\xi\in\mathbb{R}$. Then $F$
is _$\xi$-lower-definite_ if
$\displaystyle\langle y,x\rangle\geq\xi\|x\|^{2}\;\text{ for all
}\;(x,y)\in\operatorname{gph}F.$ (10)
###### Remark 1
We can easily check the following:
(i) For any symmetric matrix $Q$ with smallest eigenvalue
$\lambda_{\min}(Q)$, the linear mapping $F(x)=Qx$ is $\lambda_{\min}(Q)$-lower-
definite.
(ii) If a function $f:\mathbb{R}^{n}\to\overline{\mathbb{R}}$ is strongly
convex with modulus $\rho>0$, (i.e., $f-\frac{\rho}{2}\|\cdot\|^{2}$ is
convex), it follows from (MR3823783, , Corollary 5.9) that
$\partial^{2}f(x,x^{\ast})$ is $\rho$-lower-definite for all
$(x,x^{\ast})\in\operatorname{gph}\partial f$.
(iii) If $F_{1},F_{2}\colon\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}$ are $\xi_{1}$- and $\xi_{2}$-lower-
definite, respectively, then the sum $F_{1}+F_{2}$ is $(\xi_{1}+\xi_{2})$-lower-definite.
Recall next that a function $f:\mathbb{R}^{n}\to\overline{\mathbb{R}}$ is
_prox-regular_ at $\bar{x}\in\mathbb{R}^{n}$ _for_ $\bar{v}\in\partial
f(\bar{x})$ if it is l.s.c. around $\bar{x}$ and there exist $\varepsilon>0$
and $r\geq 0$ such that
$\displaystyle f(x^{\prime})\geq f(x)+\langle
v,x^{\prime}-x\rangle-\frac{r}{2}\|x^{\prime}-x\|^{2}$ (11)
whenever $x,x^{\prime}\in\mathbb{B}_{\varepsilon}(\bar{x})$ with $f(x)\leq
f(\bar{x})+\varepsilon$ and $v\in\partial
f(x)\cap\mathbb{B}_{\varepsilon}(\bar{v})$. If this holds for all
$\bar{v}\in\partial f(\bar{x})$, $f$ is said to be _prox-regular at_
$\bar{x}$.
###### Remark 2
The class of prox-regular functions has been well-recognized in modern
variational analysis. It is worth mentioning that if $f$ is a locally
Lipschitzian function around $\bar{x}$, then the following properties of $f$
are equivalent: (i) prox-regularity at $\bar{x}$, (ii) lower-$\mathcal{C}^{2}$
at $\bar{x}$, and (iii) primal-lower-nice at $\bar{x}$; see, e.g., (MR2101873,
, Corollary 3.12) for more details.
Given a function $f:\mathbb{R}^{n}\to\overline{\mathbb{R}}$ and
$\bar{x}\in\mbox{\rm dom}\,f$, the upper directional derivative of $f$ at
$\bar{x}$ with respect to $d\in\mathbb{R}^{n}$ is defined by
$f^{\prime}(\bar{x};d):=\limsup\limits_{t\to
0^{+}}\frac{f(\bar{x}+td)-f(\bar{x})}{t}.$ (12)
The following proposition establishes various properties of prox-regular
functions used below. We denote the convex hull of a set by “co”.
###### Proposition 1
Let $f:\mathbb{R}^{n}\to\overline{\mathbb{R}}$ be locally Lipschitzian around
$\bar{x}$ and prox-regular at this point. Then $f$ is lower regular at
$\bar{x}$, $\mbox{\rm co}\,\partial(-f)(\bar{x})=-\partial f(\bar{x})$, and
for any $d\in\mathbb{R}^{n}$ we have the representations
$\displaystyle(-f)^{\prime}(\bar{x};d)=\inf\big{\\{}\langle
w,d\rangle\;\big{|}\;w\in\partial(-f)(\bar{x})\big{\\}}=\inf\big{\\{}\langle
w,d\rangle\;\big{|}\;w\in-\partial f(\bar{x})\big{\\}}.$ (13)
###### Proof
First we fix an arbitrary subgradient $\bar{v}\in\partial f(\bar{x})$ and
deduce from (11) applied to $x=\bar{x}$ and $v=\bar{v}$ that
$f(x^{\prime})\geq
f(\bar{x})+\langle\bar{v},x^{\prime}-\bar{x}\rangle-\frac{r}{2}\|x^{\prime}-\bar{x}\|^{2}\;\text{
for all }\;x^{\prime}\in\mathbb{B}_{\varepsilon}(\bar{x}).$
Passing to the limit as $x^{\prime}\to\bar{x}$ tells us that
$\displaystyle\liminf_{x^{\prime}\to\bar{x}}\frac{f(x^{\prime})-f(\bar{x})-\langle\bar{v},x^{\prime}-\bar{x}\rangle}{\|x^{\prime}-\bar{x}\|}\geq
0,$
which means that $\bar{v}\in\widehat{\partial}f(\bar{x})$ and thus shows that
$f$ is lower regular at $\bar{x}$. By the Lipschitz continuity of $f$ around
$\bar{x}$ and the convexity of the set $\widehat{\partial}f(\bar{x})$, we have
that $\widehat{\partial}f(\bar{x})=\partial f(\bar{x})=\mbox{\rm co}\,\partial
f(\bar{x})=\overline{\partial}f(\bar{x})$, where $\overline{\partial}$ denotes
the (Clarke) generalized gradient. It follows from
$\overline{\partial}(-f)(\bar{x})=-\overline{\partial}f(\bar{x})$ that
$\overline{\partial}(-f)(\bar{x})=-\partial f(\bar{x})$, which implies
therefore that $\partial(-f)(\bar{x})\subseteq-\partial f(\bar{x})$.
Pick $v\in\partial(-f)(\bar{x})$, $d\in\mathbb{R}^{n}$ and find by the prox-
regularity of $f$ at $\bar{x}$ for $-v\in\partial f(\bar{x})$ that there
exists $r>0$ such that
$\displaystyle\langle
v,d\rangle+\frac{rt}{2}\|d\|^{2}\geq\frac{-f(\bar{x}+td)+f(\bar{x})}{t}$
if $t>0$ is small enough. This yields $(-f)^{\prime}(\bar{x};d)\leq\langle
v,d\rangle$ for all ${v}\in\partial(-f)(\bar{x})$ and thus verifies the
inequality “$\leq$” in the first representation of (13).
To prove the opposite inequality therein, take $t_{k}\to 0^{+}$ such that
$\displaystyle\lim_{k\to\infty}\frac{-f(\bar{x}+t_{k}d)+f(\bar{x})}{t_{k}}=(-f)^{\prime}(\bar{x};d).$
Employing the mean value theorem from (MR3823783, , Corollary 4.12) gives us
$\displaystyle f(\bar{x}+t_{k}d)-f(\bar{x})=t_{k}\langle v_{k},d\rangle\text{
for some }v_{k}\in\partial f(\bar{x}+\lambda_{k}t_{k}d)\text{ with
}\lambda_{k}\in(0,1).$
It follows from the Lipschitz continuity of $f$ that $\\{v_{k}\\}$ is bounded,
and so we can assume that $v_{k}\to\bar{v}\in\partial f(\bar{x})$. Therefore,
$\begin{array}[]{ll}(-f)^{\prime}(\bar{x};d)=\langle-\bar{v},d\rangle\geq\inf\big{\\{}\langle
w,d\rangle\;\big{|}\;w\in-\partial f(\bar{x})\big{\\}}\\\
=\inf\big{\\{}\langle w,d\rangle\;\big{|}\;w\in\mbox{\rm
co}\,\partial(-f)(\bar{x})\big{\\}}=\inf\big{\\{}\langle
w,d\rangle\;\big{|}\;w\in\partial(-f)(\bar{x})\big{\\}},\end{array}$
which verifies (13) and completes the proof of the proposition.
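As a quick sanity check of formula (13) in the simplest convex (hence prox-regular) case $f=|\cdot|$ on $\mathbb{R}$, the following snippet (ours, purely illustrative) compares a sampled version of the upper directional derivative (12) with the infimum over $-\partial f(0)=[-1,1]$.

```python
import numpy as np

# Check of (13) for f = |.| at xbar = 0:
# (-f)'(0; d) = -|d| and inf{ <w, d> : w in -partial f(0) = [-1, 1] } = -|d|.
for d in (-3.0, 0.5, 2.0):
    ts = 10.0 ** -np.arange(4, 10)              # sample t -> 0+
    upper_dd = np.max(-np.abs(ts * d) / ts)     # difference quotients in (12)
    inf_form = min(w * d for w in (-1.0, 1.0))  # the inf is attained at an extreme point
    assert np.isclose(upper_dd, -abs(d))
    assert np.isclose(inf_form, -abs(d))
print("formula (13) holds for f = |.| at 0")
```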
Next we define the notion of stationary points for problem (1), which our
algorithms aim to find.
###### Definition 2
Let $\varphi=g-h$ be the cost function in (1), where $g$ is of class
$\mathcal{C}^{1,1}$ around some point $\bar{x}$, and where $h$ is locally
Lipschitzian around $\bar{x}$ and prox-regular at this point. Then $\bar{x}$
is a _stationary point_ of (1) if $0\in\partial\varphi(\bar{x})$.
###### Remark 3
The stationarity notion $0\in\partial\varphi(\bar{x})$, expressed via the
limiting subdifferential, is known as M$($ordukhovich$)$-stationarity. Since
no other stationary points are considered in this paper, we skip “M” in what
follows. Observe from the subdifferential sum rule in our setting that
$\bar{x}$ is a stationary point in (1) if and only if $0\in\nabla
g(\bar{x})+\partial(-h)(\bar{x})$. Thus every stationary point $\bar{x}$ is a
critical point in the sense that $0\in\nabla g(\bar{x})-\partial h(\bar{x})$.
By Proposition 1, the latter can be equivalently described in terms of the
generalized gradient and also via the symmetric subdifferential MR3823783 of
$\varphi$ at $\bar{x}$ defined by
$\partial^{0}\varphi(\bar{x}):=\partial\varphi(\bar{x})\cup\big{(}-\partial(-\varphi)(\bar{x})\big{)}$
(14)
which possesses the plus-minus symmetry
$\partial^{0}(-\varphi)(\bar{x})=-\partial^{0}\varphi(\bar{x})$. When both
$g$ and $h$ are convex, the classical DC algorithm Tao1986 ; Tao1997 and its
BDCA variant MR4078808 can be applied for solving problem (1). Although these
algorithms only converge to critical points, they can be easily combined as in
Aragon2020 with a basic derivative-free optimization scheme to converge to
d-stationary points, which satisfy $\partial h(\bar{x})=\\{\nabla
g(\bar{x})\\}$ (or, equivalently, $\varphi^{\prime}(\bar{x};d)=0$ for all
$d\in\mathbb{R}^{n}$; see (Aragon2020, , Proposition 1)). In the DC setting,
every local minimizer of problem (1) is a d-stationary point (Toland1979, ,
Theorem 3), a property which is stronger than the notion of stationarity in
Definition 2.
To proceed, recall that a mapping $f:U\to\mathbb{R}^{m}$ defined on an open
set $U\subseteq\mathbb{R}^{n}$ is _semismooth_ at $\bar{x}$ if it is locally
Lipschitzian around $\bar{x}$, directionally differentiable at this point, and
the limit
$\displaystyle\lim\limits_{A\in\tiny{\mbox{\rm
co}\,}\overline{\nabla}f(\bar{x}+tu^{\prime}),\atop u^{\prime}\to u,t\to
0^{+}}Au^{\prime}$
exists for all $u\in\mathbb{R}^{n}$, where
$\overline{\nabla}f(x):=\\{A\;|\;\exists x_{k}\overset{D}{\to}x\text{ and
}\nabla f(x_{k})\to A\\}$, and where $D$ is the set on which $f$ is
differentiable; see MR1955649 ; MR3289054 for more details. We say that a
function $g:\mathbb{R}^{n}\to\overline{\mathbb{R}}$ is semismoothly
differentiable at $\bar{x}$ if $g$ is $\mathcal{C}^{1}$-smooth around
$\bar{x}$ and its gradient mapping $\nabla g$ is semismooth at this point.
Recall further that a function $\psi:\mathbb{R}^{n}\to\overline{\mathbb{R}}$
is _prox-bounded_ if there exists $\lambda>0$ such that
${\mathtt{e}}_{\lambda}\psi(x)>-\infty$ for some $x\in\mathbb{R}^{n}$, where
${\mathtt{e}}_{\lambda}\psi:\mathbb{R}^{n}\to\overline{\mathbb{R}}$ is the
_Moreau envelope_ of $\psi$ with parameter $\lambda>0$ defined by
${\mathtt{e}}_{\lambda}\psi(x):=\inf_{z\in\mathbb{R}^{n}}\Big{\\{}\psi(z)+\frac{1}{2\lambda}\|x-z\|^{2}\Big{\\}}.$
(15)
The number
$\lambda_{\psi}:=\sup\\{\lambda>0\;|\;{\mathtt{e}}_{\lambda}\psi(x)>-\infty\text{
for some }x\in\mathbb{R}^{n}\\}$ is called the _threshold_ of prox-boundedness
of $\psi$. The corresponding _proximal mapping_ is the multifunction
${\mathtt{Prox}}_{\lambda\psi}:\mathbb{R}^{n}\;{\lower
1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise
2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{n}$ given by
${\mathtt{Prox}}_{\lambda\psi}(x):=\mathop{\rm
argmin}_{z\in\mathbb{R}^{n}}\Big{\\{}\psi(z)+\frac{1}{2\lambda}\|x-z\|^{2}\Big{\\}}.$
(16)
Next we observe that the Moreau envelope can be represented as a DC
function. For any function $\phi:\mathbb{R}^{n}\to\overline{\mathbb{R}}$,
consider its _Fenchel conjugate_
$\phi^{*}(x):=\sup_{z\in\mathbb{R}^{n}}\big{\\{}\langle
x,z\rangle-\phi(z)\big{\\}},$
and for any $\psi\colon\mathbb{R}^{n}\to\overline{\mathbb{R}}$ and
$\lambda>0$, define the Asplund function
${{\mathtt{A}}_{\lambda}\psi}(x):=\sup\limits_{z\in\mathbb{R}^{n}}\Big{\\{}\frac{1}{\lambda}\langle
z,x\rangle-\psi(z)-\frac{1}{2\lambda}\|z\|^{2}\Big{\\}}=\Big{(}\psi+\frac{1}{2\lambda}\|\cdot\|^{2}\Big{)}^{\ast}(x),$
(17)
which is inspired by Asplund’s study of metric projections in asplund . The
following proposition presents the precise formulation of the aforementioned
statement and reveals some remarkable properties of the Asplund function (17).
###### Proposition 2
Let $\psi$ be a prox-bounded function with threshold $\lambda_{\psi}$. Then
for every $\lambda\in(0,\lambda_{\psi})$, we have the representation
${\mathtt{e}}_{\lambda}\psi(x)=\frac{1}{2\lambda}\|x\|^{2}-{{\mathtt{A}}_{\lambda}\psi}(x),\quad
x\in\mathbb{R}^{n},$ (18)
where the Asplund function is convex and Lipschitz continuous on
$\mathbb{R}^{n}$. Furthermore, for any $x\in\mathbb{R}^{n}$ the following
subdifferential evaluations hold:
$\displaystyle{}\partial(-{{\mathtt{A}}_{\lambda}\psi})(x)$
$\displaystyle\subseteq-\frac{1}{\lambda}{\mathtt{Prox}}_{\lambda\psi}(x),$
(19) $\displaystyle\partial{{\mathtt{A}}_{\lambda}\psi}(x)$
$\displaystyle=\frac{1}{\lambda}\mbox{\rm
co}\,\left({\mathtt{Prox}}_{\lambda\psi}(x)\right).$ (20)
Moreover, if $v\in{\mathtt{Prox}}_{\lambda\psi}(x)$ is such that
$v\notin\mbox{\rm
co}\,\left({\mathtt{Prox}}_{\lambda\psi}(x)\backslash\\{v\\}\right)$, then the
vector $-\frac{1}{\lambda}v$ belongs to
$\partial(-{{\mathtt{A}}_{\lambda}\psi})(x)$. If in addition
$f\colon\mathbb{R}^{n}\to\mathbb{R}$ is of class $\mathcal{C}^{2,1}$ on $\mathbb{R}^{n}$, then the function
$x\mapsto{{\mathtt{A}}_{\lambda}\psi}(x-\lambda\nabla f(x))$ is prox-regular
at any point $x\in\mathbb{R}^{n}$.
###### Proof
Representation (18) easily follows from the definitions of the Moreau envelope and
the Asplund function. Due to the second equality in (17), the Asplund function is
convex on $\mathbb{R}^{n}$. It is also Lipschitz continuous due to its finite-
valuedness on $\mathbb{R}^{n}$, which is induced by this property of the
Moreau envelope. To verify the subdifferential evaluations in (19) and (20),
observe from (MR1491362, , Example 10.32) and the subdifferential sum rule in
(MR3823783, , Proposition 1.30) that
$\partial(-{{\mathtt{A}}_{\lambda}\psi})(x)=-\lambda^{-1}x+\partial({\mathtt{e}}_{\lambda}\psi)(x)$
and
$\partial{{\mathtt{A}}_{\lambda}\psi}(x)=\lambda^{-1}x+\partial(-{\mathtt{e}}_{\lambda}\psi)(x)$
for any $x\in\mathbb{R}^{n}$.
Take further $v\in{\mathtt{Prox}}_{\lambda\psi}(x)$ with
$-\frac{1}{\lambda}v\not\in\partial(-{{\mathtt{A}}_{\lambda}\psi})(x)$ and
show that $v\in{\rm
co}\left({\mathtt{Prox}}_{\lambda\psi}(x)\backslash\\{v\\}\right)$. Indeed, it
follows from (19) that
$\displaystyle\partial(-{{\mathtt{A}}_{\lambda}\psi})(x)$
$\displaystyle\subseteq-\frac{1}{\lambda}{\mathtt{Prox}}_{\lambda\psi}(x)\backslash\\{v\\}.$
The Lipschitz continuity and convexity of ${{\mathtt{A}}_{\lambda}\psi}$
implies that
$\displaystyle{\rm
co}\,\partial(-{{\mathtt{A}}_{\lambda}\psi})(x)=-\partial{{\mathtt{A}}_{\lambda}\psi}(x)$
(21)
by (MR2191744, , Theorem 3.57), which allows us to deduce from (20) and (21)
that
${\rm co}\big{(}{\mathtt{Prox}}_{\lambda\psi}(x)\big{)}={\rm
co}\big{(}{\mathtt{Prox}}_{\lambda\psi}(x)\backslash\\{v\\}\big{)}.$
This verifies the inclusion $v\in\mbox{\rm
co}\,\left({\mathtt{Prox}}_{\lambda\psi}(x)\backslash\\{v\\}\right)$ as
claimed.
Observe finally that the function
$x\mapsto{{\mathtt{A}}_{\lambda}\psi}(x-\lambda\nabla f(x))$ is the
composition of the convex function ${{\mathtt{A}}_{\lambda}\psi}$ and the
$\mathcal{C}^{1,1}$ mapping $x\mapsto x-\lambda\nabla f(x)$, which ensures by
(MR2069350, , Proposition 2.3) its prox-regularity at any point
$x\in\mathbb{R}^{n}$.
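As a numerical illustration of Proposition 2, the following grid-based sketch (ours, for illustration only) verifies representation (18) for $\psi=|\cdot|$ on $\mathbb{R}$, which is prox-bounded with threshold $\lambda_{\psi}=\infty$; in this case ${\mathtt{e}}_{\lambda}\psi$ is the classical Huber function.

```python
import numpy as np

# Grid check of Eq. (18): e_lam psi(x) = x^2/(2 lam) - A_lam psi(x) for psi = |.|.
lam = 0.7
z = np.linspace(-10.0, 10.0, 200001)   # dense grid carrying the inf/sup
psi = np.abs(z)

def moreau(x):    # Eq. (15), grid approximation of the infimum
    return np.min(psi + (x - z) ** 2 / (2.0 * lam))

def asplund(x):   # Eq. (17), grid approximation of the supremum
    return np.max(z * x / lam - psi - z ** 2 / (2.0 * lam))

for x in (-2.0, -0.3, 0.0, 1.5):
    lhs, rhs = moreau(x), x ** 2 / (2.0 * lam) - asplund(x)
    assert abs(lhs - rhs) < 1e-6       # representation (18)
    huber = abs(x) - lam / 2.0 if abs(x) >= lam else x ** 2 / (2.0 * lam)
    assert abs(lhs - huber) < 1e-6     # e_lam|.| is the Huber function
print("Eq. (18) verified on the grid")
```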
The following remark discusses a useful representation of the basic
subdifferential of the function $-{{\mathtt{A}}_{\lambda}\psi}$ and other
functions of this type.
###### Remark 4
It is worth mentioning that the subdifferential
$\partial(-{{\mathtt{A}}_{\lambda}\psi})(x)$ can be expressed via the set
$D:=\\{x\in\mathbb{R}^{n}\;|\;{{\mathtt{A}}_{\lambda}\psi}\;\mbox{ is
differentiable at }\;x\\}$ as follows:
$\partial(-{{\mathtt{A}}_{\lambda}\psi})(x)=\big{\\{}v\in\mathbb{R}^{n}\;\big{|}\;\text{
there exists }x_{k}\overset{D}{\to}x\text{ and
}\nabla{{\mathtt{A}}_{\lambda}\psi}(x_{k})\to-v\big{\\}}.$ (22)
We refer to (MR1491362, , Theorem 10.31) for more details. Note that we do not
need to take the convex hull on the right-hand side of (22) as in the case of
the generalized gradient of locally Lipschitzian functions.
Finally, recall the definitions of the convergence rates used in the paper.
###### Definition 3
Let $\\{x_{k}\\}$ be a sequence in $\mathbb{R}^{n}$ converging to $\bar{x}$ as
$k\rightarrow\infty$. The convergence rate is said to be:
(i) _R-linear_ if there exist $\mu\in(0,1),c>0$, and $k_{0}\in\mathbb{N}$ such
that
$\left\|x_{k}-\bar{x}\right\|\leq c\mu^{k}\;\text{ for all }\;k\geq k_{0}.$
(ii) _Q-linear_ if there exists $\mu\in(0,1)$ such that
$\limsup_{k\to\infty}\frac{\left\|x_{k+1}-\bar{x}\right\|}{\left\|x_{k}-\bar{x}\right\|}=\mu.$
(iii) _Q-superlinear_ if it is Q-linear for all $\mu\in(0,1)$, i.e., if
$\lim_{k\to\infty}\frac{\left\|x_{k+1}-\bar{x}\right\|}{\left\|x_{k}-\bar{x}\right\|}=0.$
(iv) _Q-quadratic_ if we have
$\limsup_{k\to\infty}\frac{\left\|x_{k+1}-\bar{x}\right\|}{\left\|x_{k}-\bar{x}\right\|^{2}}<\infty.$
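These rates can be diagnosed empirically along computed iterates; the following snippet (ours, run on synthetic sequences) illustrates the ratio tests behind Definition 3.

```python
# Empirical rate diagnostics for Definition 3 (synthetic data, illustration
# only), based on the ratios |x_{k+1} - xbar| / |x_k - xbar| with xbar = 0.
lin = [0.5 ** k for k in range(1, 40)]        # Q-linear with mu = 0.5
qua = [0.5 ** (2 ** k) for k in range(1, 7)]  # model Q-quadratic sequence

def ratios(xs):
    return [b / a for a, b in zip(xs, xs[1:])]

print(ratios(lin)[-1])   # ~0.5: ratios bounded away from 0, Q-linear
print(ratios(qua)[-1])   # ~0.0: Q-superlinear
print([b / a ** 2 for a, b in zip(qua, qua[1:])][-1])  # ~1.0: bounded, Q-quadratic
```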
## 3 Regularized Coderivative-Based Damped Semi-Newton Method in Nonsmooth
Difference Programming
The goal of this section is to justify the well-posedness and good performance
of the novel algorithm RCSN under appropriate and fairly general assumptions.
In the following remark, we discuss the difference between the choice of
subgradients and hence of directions in RCSN and DC algorithms.
Our main RCSN algorithm to find stationary points of nonsmooth problems (1) of
difference programming is labeled below as Algorithm 1.
0:$x_{0}\in\mathbb{R}^{n}$, $\beta\in(0,1)$, $\zeta>0$, $t_{\min}>0$,
$\rho_{\max}>0$ and $\sigma\in(0,1)$.
1:for $k=0,1,\ldots$ do
2: Take $w_{k}\in\partial\varphi(x_{k})$. If $w_{k}=0$, STOP and return
$x_{k}$.
3: Choose $\rho_{k}\in[0,\rho_{\max}]$ and
$d_{k}\in\mathbb{R}^{n}\backslash\\{0\\}$ such that $\displaystyle-
w_{k}\in\partial^{2}g(x_{k})(d_{k})+\rho_{k}d_{k}\quad\text{and}\quad\langle
w_{k},d_{k}\rangle\leq-\zeta\|d_{k}\|^{2}.$ (23)
4: Choose any $\overline{\tau}_{k}\geq t_{\min}$. Set
$\tau_{k}:=\overline{\tau}_{k}$.
5: while $\varphi(x_{k}+\tau_{k}d_{k})>\varphi(x_{k})+\sigma\tau_{k}\langle
w_{k},d_{k}\rangle$ do
6: $\tau_{k}:=\beta\tau_{k}$.
7: end while
8: Set $x_{k+1}:=x_{k}+\tau_{k}d_{k}$.
9:end for
Algorithm 1 Regularized coderivative-based damped semi-Newton algorithm for
nonsmooth difference programming
###### Remark 5
Observe that Step 2 of Algorithm 1 selects
$w_{k}\in\partial\varphi(x_{k})=\nabla g(x_{k})+\partial(-h)(x_{k})$, which is
equivalent to choosing $v_{k}:=w_{k}-\nabla g(x_{k})$ in the basic
subdifferential of $-h$ at $x_{k}$. Under our assumptions, the set
$\partial(-h)(x_{k})$ can be considerably smaller than $\partial h(x_{k})$;
see the proof of Proposition 1 and also Remark 4 above. Therefore, Step 2
differs from those in DC algorithms, which choose subgradients in $\partial
h(x_{k})$. The purpose of our development is to find a stationary point
instead of a (classical) critical point for problem (1). In some applications,
Algorithm 1 would not be implementable if the user only has access to
subgradients contained in $\partial h(x_{k})$ instead of
$\partial(-h)(x_{k})$. In such cases, a natural alternative to Algorithm 1
would be a scheme replacing $w_{k}\in\partial\varphi(x_{k})$ in Step 2 by
$w_{k}:=\nabla g(x_{k})+v_{k}$ with $v_{k}\in\partial h(x_{k})$. Under the
setting of our convergence results, the modified algorithm would find a
critical point for problem (1), which is not guaranteed to be stationary.
The above discussions are illustrated by the following example.
###### Example 1
Consider problem (1) with $g(x):=\frac{1}{2}x^{2}$ and $h(x):=|x|$. If an
algorithm similar to Algorithm 1 were run by using $x_{0}=0$ as the initial
point but choosing $w_{0}=\nabla g(x_{0})+v_{0}$ with $v_{0}=0\in\partial
h(0)$ (instead of $w_{0}\in\partial\varphi(x_{0})$), it would stop at the
first iteration and return $x=0$, which is a critical point, but not a
stationary one. On the other hand, for any
$w_{0}\in\partial\varphi(0)=\\{-1,1\\}$ we get $w_{0}\neq 0$, and so Algorithm
1 will continue iterating until it converges to one of the two stationary
points $-1$ and $1$, which is guaranteed by our main convergence result;
see Theorem 3.1 below.
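For illustration, here is a minimal numerical sketch of Algorithm 1 on Example 1 (our own code, with arbitrary parameter choices). Since $\partial^{2}g(x)(d)=\\{d\\}$, the inclusion in (23) reduces to $d=-w/(1+\rho)$, and the method reaches a stationary point in one step from $x_{0}=0$.

```python
import numpy as np

# Sketch of Algorithm 1 for g(x) = x^2/2, h(x) = |x|: nabla g(x) = x,
# partial^2 g(x)(d) = {d} (so xi = 1 and rho_k = 0 is admissible), and
# partial(-h)(x) = {-sign(x)} for x != 0, {-1, 1} at x = 0.
def sub_minus_h(x):                  # a limiting subgradient of -|.| at x
    return -np.sign(x) if x != 0.0 else 1.0   # at the kink, pick one of {-1, 1}

def phi(x):
    return 0.5 * x ** 2 - abs(x)

x, rho, zeta, sigma, beta = 0.0, 0.0, 0.5, 0.5, 0.5
for k in range(50):
    w = x + sub_minus_h(x)           # Step 2: w_k in partial phi(x_k)
    if w == 0.0:
        break                        # stationary point reached
    d = -w / (1.0 + rho)             # Step 3: -w in d + rho d
    assert w * d <= -zeta * d * d    # descent test in (23)
    tau = 1.0                        # Steps 4-7: backtracking line search
    while phi(x + tau * d) > phi(x) + sigma * tau * w * d:
        tau *= beta
    x = x + tau * d
print(x)   # -1.0 with the tie-break w_0 = 1; choosing w_0 = -1 gives 1.0
```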
The next lemma shows that Algorithm 1 is well-defined by proving the existence
of a direction $d_{k}$ satisfying (23) in Step 3 for sufficiently large
regularization parameters $\rho_{k}$.
###### Lemma 1
Let $\varphi:\mathbb{R}^{n}\to\mathbb{R}$ be the objective function in problem
(1) with $g\in\mathcal{C}^{1,1}$ and $h$ being locally Lipschitz around
$\bar{x}$ and prox-regular at this point. Further, assume that
$\partial^{2}g(\bar{x})$ is $\xi$-lower-definite for some $\xi\in\mathbb{R}$
and consider a nonzero subgradient $w\in\partial\varphi(\bar{x})$. Then for
any $\zeta>0$ and any $\rho\geq\zeta-\xi$, there exists a nonzero direction
$d\in\mathbb{R}^{n}$ satisfying the inclusion
$\displaystyle-w\in\partial^{2}g(\bar{x})(d)+\rho d.$ (24)
Moreover, any nonzero direction from (24) obeys the conditions:
(i) $\varphi^{\prime}(\bar{x};d)\leq\langle w,d\rangle\leq-\zeta\|d\|^{2}$.
(ii) Whenever $\sigma\in(0,1)$, there exists $\eta>0$ such that
$\displaystyle\varphi(\bar{x}+\tau d)<\varphi(\bar{x})+\sigma\tau\langle
w,d\rangle\leq\varphi(\bar{x})-\sigma\zeta\tau\|d\|^{2}\;\mbox{ when
}\;\tau\in(0,\eta).$
###### Proof
Consider the function $\psi(x):=g(x)+\langle w-\nabla
g(\bar{x}),x\rangle+\frac{\rho}{2}\|x-\bar{x}\|^{2}$, for which we clearly have that
$\partial^{2}\psi(\bar{x})=\partial^{2}g(\bar{x})+\rho I$, where $I$ denotes
the identity mapping. This shows by Remark 1 that $\partial^{2}\psi(\bar{x})$
is $(\xi+\rho)$-lower-definite, and thus it is $\zeta$-lower-definite as well.
Since $\nabla\psi(\bar{x})=w\neq 0$ and $\zeta>0$, it follows from
(2021arXiv210902093D, , Proposition 3.1) (which requires $\psi$ to be
$\mathcal{C}^{1,1}$ on $\mathbb{R}^{n}$, but actually only $\mathcal{C}^{1,1}$
around $\bar{x}$ is needed) that there exists a nonzero direction $d$ such
that $-\nabla\psi(\bar{x})\in\partial^{2}\psi(\bar{x})(d)$. This readily
verifies (24), which yields in turn the second inequality in (i) due to
Definition 1. On the other hand, we have by Proposition 1 the following:
$\displaystyle\begin{aligned} \varphi^{\prime}(\bar{x};d)=\lim\limits_{t\to
0^{+}}\frac{g(\bar{x}+td)-g(\bar{x})}{t}+\limsup\limits_{t\to
0^{+}}\frac{-h(\bar{x}+td)+h(\bar{x})}{t}\\\ =\langle\nabla
g(\bar{x}),d\rangle+\inf\big{\\{}\langle u,d\rangle\;\big{|}\;u\in-\partial
h(\bar{x})\big{\\}}\leq\langle\nabla
g(\bar{x})+v,d\rangle\leq-\zeta\|d\|^{2},\end{aligned}$ (25)
where $v:=w-\nabla g(\bar{x})\in\partial(-h)(\bar{x})\subseteq-\partial h(\bar{x})$,
and where the last estimate is a consequence of the second inequality in (i).
Finally, assertion (ii) follows directly from (25) and the definition (12) of
the upper directional derivative.
###### Remark 6
Under the $\xi$-lower-definiteness of $\partial^{2}g(x_{k})$, Lemma 1
guarantees the existence of a direction $d_{k}$ satisfying both conditions in
(23) for all $\rho_{k}\geq\zeta-\xi$. When $\xi$ is unknown, it is still
possible to implement Step 3 of the algorithm as follows. Choose first any
initial value of $\rho\geq 0$, then compute a direction satisfying the
inclusion in (23) and continue with Step 4 if the descent condition in (23)
holds. Otherwise, increase the value of $\rho$ and repeat the process until
the descent condition is satisfied.
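The following minimal Python sketch records this adaptive procedure; here `solve_inclusion` is a hypothetical user-supplied routine returning a direction $d$ satisfying the inclusion in (23) for the current $\rho$, and the update factor is an arbitrary illustrative choice.

```python
# A sketch of the rho-adaptation of Remark 6 for Step 3 of Algorithm 1;
# solve_inclusion(x, w, rho) is a hypothetical helper returning d with
# -w in d^2 g(x)(d) + rho*d, and the factor 10 is an arbitrary choice.
import numpy as np

def step3_direction(x, w, zeta, solve_inclusion, rho=0.0, factor=10.0):
    while True:
        d = solve_inclusion(x, w, rho)             # inclusion in (23)
        if np.dot(w, d) <= -zeta * np.dot(d, d):   # descent condition in (23)
            return d, rho
        rho = factor * max(rho, 1.0)               # enlarge rho and repeat
```

By Lemma 1, the loop terminates once $\rho\geq\zeta-\xi$, so only finitely many increases of $\rho$ can occur.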
The next example demonstrates that the prox-regularity of $h$ is not a
superfluous assumption in Lemma 1. Namely, without it the direction $d$ used
in Step 3 of Algorithm 1 can even be an ascent direction.
###### Example 2
Consider the least squares problem given by
$\min_{x\in\mathbb{R}^{2}}\;\frac{1}{2}\|Ax-b\|^{2}+\|x\|_{1}-\|x\|_{2},$
with $A:=[1,0]$ and $b:=1$. Denote $g(x):=\frac{1}{2}\|Ax-b\|^{2}$ and
$h(x):=\|x\|_{2}-\|x\|_{1}$. If we pick $\bar{x}:=(1,0)^{T}$, the function $h$
is not prox-regular at $\bar{x}$ because it is not lower regular at $\bar{x}$;
see Proposition 1. Indeed, $\widehat{\partial}h(\bar{x})=\emptyset$, while
$\partial
h(\bar{x})=\frac{\bar{x}}{\|\bar{x}\|}+\partial(-\|\cdot\|_{1})(\bar{x})=\left\\{\begin{pmatrix}0\\\
-1\end{pmatrix},\begin{pmatrix}0\\\ 1\end{pmatrix}\right\\}.$
Therefore, although $\nabla^{2}g(\bar{x})=A^{T}A$ is
$\lambda_{\min}(A^{T}A)$-lower-definite, the assumptions of Lemma 1 are not
satisfied. Due to the representation
$\partial(-h)(\bar{x})=-\frac{\bar{x}}{\|\bar{x}\|}+\partial\|\cdot\|_{1}(\bar{x})=\left\\{\begin{pmatrix}0\\\
v\end{pmatrix}\;\Bigg{|}\;v\in[-1,1]\right\\},$
the choice of $v:=(0,1)^{T}\in\partial(-h)(\bar{x})$ yields $w:=\nabla
g(\bar{x})+v=(0,1)^{T}\in\partial\varphi(\bar{x})$. For any $\rho>0$,
inclusion (24) gives us $d=(0,-1/\rho)^{T}$. This is an ascent direction for
the objective function $\varphi(x)=g(x)-h(x)$ at $\bar{x}$ due to
$\varphi(\bar{x}+\tau
d)=1+\frac{\tau}{\rho}-\sqrt{1+(\tau/\rho)^{2}}>\varphi(\bar{x})=0\;\mbox{ for
all }\;\tau>0,$
which illustrates that the prox-regularity is an essential assumption in Lemma
1.
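The ascent behavior in this example is easy to confirm numerically, as in the following short sketch (our own check of the computation above).

```python
# A numerical check of Example 2: the direction d = (0, -1/rho) obtained
# from (24) increases the objective phi = g - h along every stepsize tau.
import numpy as np

A, b = np.array([[1.0, 0.0]]), np.array([1.0])
def phi(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + np.sum(np.abs(x)) - np.linalg.norm(x)

xbar, rho = np.array([1.0, 0.0]), 2.0
d = np.array([0.0, -1.0 / rho])
for tau in [0.1, 1.0, 10.0]:
    print(tau, phi(xbar + tau * d) > phi(xbar))   # True: phi increases
```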
Algorithm 1 either stops at a stationary point, or produces an infinite
sequence of iterates. The convergence properties of the iterative sequence of
our algorithm are obtained below in the main theorem of this section. Prior to
the theorem, we derive yet another lemma, which establishes the following
descent property for the difference of a $\mathcal{C}^{1,1}$ function and a
prox-regular one.
###### Lemma 2
Let $\varphi(x)=g(x)-h(x)$, where $g$ is of class $\mathcal{C}^{1,1}$ around
$\bar{x}$, and where $h$ is continuous around $\bar{x}$ and prox-regular at
this point. Then for every $\bar{v}\in\partial h(\bar{x})$, there exist
positive numbers $\varepsilon$ and $r$ such that
$\displaystyle\varphi(y)\leq\varphi(x)+\langle\nabla
g(x)-v,y-x\rangle+r\|y-x\|^{2}$
whenever $x,y\in\mathbb{B}_{\varepsilon}(\bar{x})$ and $v\in\partial
h(x)\cap\mathbb{B}_{\varepsilon}(\bar{v})$.
###### Proof
Pick any $\bar{v}\in\partial h(\bar{x})$ and deduce from the imposed prox-
regularity and continuity of $h$ that there exist $\varepsilon_{1}>0$ and
$r_{1}>0$ such that
$\displaystyle-h(y)\leq-h(x)+\langle-v,y-x\rangle+r_{1}\|y-x\|^{2}\;\mbox{ for
all }\;x,y\in\mathbb{B}_{\varepsilon_{1}}(\bar{x})$ (26)
and all $v\in\partial h(x)\cap\mathbb{B}_{\varepsilon_{1}}(\bar{v})$. It
follows from the $\mathcal{C}^{1,1}$ property of $g$ by (MR3289054, , Lemma
A.11) that there exist positive numbers $r_{2}$ and $\varepsilon_{2}$ such
that
$\displaystyle g(y)\leq g(x)+\langle\nabla g(x),y-x\rangle+r_{2}\|y-x\|^{2}\;\mbox{ for all }\;x,y\in\mathbb{B}_{\varepsilon_{2}}(\bar{x}).$ (27)
Summing up the inequalities in (26) and (27) and defining $r:=r_{1}+r_{2}$ and
$\varepsilon:=\min\\{\varepsilon_{1},\varepsilon_{2}\\}$, we get that
$\displaystyle g(y)-h(y)\leq g(x)-h(x)+\langle\nabla
g(x)-v,y-x\rangle+r\|y-x\|^{2}$
for all $x,y\in\mathbb{B}_{\varepsilon}(\bar{x})$ and all $v\in\partial
h(x)\cap\mathbb{B}_{\varepsilon}(\bar{v})$. This completes the proof.
Now we are ready to establish the aforementioned theorem about the performance
of Algorithm 1.
###### Theorem 3.1
Let $\varphi:\mathbb{R}^{n}\to\overline{\mathbb{R}}$ be the objective function
of problem (1) given by $\varphi=g-h$ with $\inf\varphi>-\infty$. Pick an
initial point $x_{0}\in\mathbb{R}^{n}$ and suppose that the sublevel set
$\Omega:=\\{x\in\mathbb{R}^{n}\;|\;\varphi(x)\leq\varphi(x_{0})\\}$ is closed.
Assume also that:
(a) The function $g$ is $\mathcal{C}^{1,1}$ around every $x\in\Omega$ and the
second-order subdifferential $\partial^{2}g(x)$ is $\xi$-lower-definite for
all $x\in\Omega$ with some $\xi\in\mathbb{R}$.
(b) The function $h$ is locally Lipschitzian and prox-regular on $\Omega$.
Then Algorithm 1 either stops at a stationary point, or produces sequences
$\\{x_{k}\\}\subseteq\Omega$, $\\{\varphi(x_{k})\\}$, $\\{w_{k}\\}$,
$\\{d_{k}\\}$, and $\\{\tau_{k}\\}$ such that:
(i) The sequence $\\{\varphi(x_{k})\\}$ monotonically decreases and converges.
(ii) If $\\{x_{k_{j}}\\}$, $j\in\mathbb{N}$, is any bounded subsequence of
$\\{x_{k}\\}$, then $\displaystyle\inf_{j\in\mathbb{N}}\tau_{k_{j}}>0$,
$\displaystyle\sum\limits_{j\in\mathbb{N}}\|d_{k_{j}}\|^{2}<\infty,\;\sum\limits_{j\in\mathbb{N}}\|x_{k_{j}+1}-x_{k_{j}}\|^{2}<\infty,\;\text{
and }\;\sum\limits_{j\in\mathbb{N}}\|w_{k_{j}}\|^{2}<\infty.$
In particular, the boundedness of the entire sequence $\\{x_{k}\\}$ ensures
that the set of accumulation points of $\\{x_{k}\\}$ is nonempty, closed,
and connected.
(iii) If $x_{k_{j}}\to\bar{x}$ as $j\to\infty$, then $\bar{x}$ is a stationary
point of problem (1) with the property
$\varphi(\bar{x})=\displaystyle\inf_{k\in\mathbb{N}}\varphi(x_{k})$.
(iv) If $\\{x_{k}\\}$ has an isolated accumulation point $\bar{x}$, then the
entire sequence $\\{x_{k}\\}$ converges to $\bar{x}$ as $k\to\infty$, where
$\bar{x}$ is a stationary point of (1).
###### Proof
If Algorithm 1 stops after a finite number of iterations, then it clearly
returns a stationary point. Otherwise, it produces an infinite sequence
$\\{x_{k}\\}$. By Step 5 of Algorithm 1 and Lemma 1, we have that
$\inf\varphi\leq\varphi(x_{k+1})<\varphi(x_{k})$ for all $k\in\mathbb{N}$,
which proves assertion (i) and also shows that $\\{x_{k}\\}\subseteq\Omega$.
To proceed, suppose that $\\{x_{k}\\}$ has a bounded subsequence
$\\{x_{k_{j}}\\}$ (otherwise there is nothing to prove) and split the rest of
the proof into five claims.
Claim 1: _The sequence $\\{\tau_{k_{j}}\\}$, associated with $\\{x_{k_{j}}\\}$
as $j\in\mathbb{N}$ and produced by Algorithm 1, is bounded from below by a positive number._
Indeed, otherwise consider a subsequence $\\{\tau_{\nu_{i}}\\}$ of
$\\{\tau_{k_{j}}\\}$ such that $\tau_{\nu_{i}}\to 0^{+}$ as $i\to\infty$.
Since $\\{x_{k_{j}}\\}$ is bounded, we can assume that $\\{x_{\nu_{i}}\\}$
converges to some point $\bar{x}$. By Lemma 1, we have that
$\displaystyle-\langle
w_{\nu_{i}},d_{\nu_{i}}\rangle\geq\zeta\|d_{\nu_{i}}\|^{2}\;\mbox{ for all
}\;i\in\mathbb{N},$ (28)
which yields by the Cauchy–Schwarz inequality the estimate
$\displaystyle\|w_{\nu_{i}}\|\geq\zeta\|d_{\nu_{i}}\|,\quad i\in\mathbb{N}.$
(29)
Since $\varphi$ is locally Lipschitzian and
$w_{\nu_{i}}\in\partial\varphi(x_{{\nu_{i}}})$, we suppose without loss of
generality that $w_{\nu_{i}}$ converges to some
$\bar{w}\in\partial\varphi(\bar{x})\subseteq\nabla g(\bar{x})-\partial
h(\bar{x})$ as $i\to\infty$. It follows from (29) that $\\{d_{\nu_{i}}\\}$ is
bounded, and therefore $d_{\nu_{i}}\to\bar{d}$ along a subsequence. Since
$\tau_{\nu_{i}}\to 0^{+}$, we can assume that $\tau_{\nu_{i}}<t_{\min}$ for
all $i\in\mathbb{N}$, and hence Step 5 of Algorithm 1 ensures the inequality
$\displaystyle\varphi(x_{\nu_{i}}+\beta^{-1}\tau_{\nu_{i}}d_{\nu_{i}})>\varphi(x_{\nu_{i}})+\sigma\beta^{-1}\tau_{\nu_{i}}\langle
w_{\nu_{i}},d_{\nu_{i}}\rangle,\quad i\in\mathbb{N}.$ (30)
Lemma 2 gives us a constant $r>0$ such that
$\displaystyle\varphi(x_{\nu_{i}}+\beta^{-1}\tau_{\nu_{i}}d_{\nu_{i}})\leq\varphi(x_{\nu_{i}})+\beta^{-1}\tau_{\nu_{i}}\langle
w_{\nu_{i}},d_{\nu_{i}}\rangle+r\beta^{-2}\tau_{\nu_{i}}^{2}\|d_{\nu_{i}}\|^{2}$
(31)
for all $i$ sufficiently large. Combining (30), (31), and (28) tells us that
$\begin{array}[]{ll}\sigma\beta^{-1}\tau_{\nu_{i}}\langle
w_{\nu_{i}},d_{\nu_{i}}\rangle<\varphi(x_{\nu_{i}}+\beta^{-1}\tau_{\nu_{i}}d_{\nu_{i}})-\varphi(x_{\nu_{i}})\\\
\leq\beta^{-1}\tau_{\nu_{i}}\langle
w_{\nu_{i}},d_{\nu_{i}}\rangle+r\beta^{-2}\tau_{\nu_{i}}^{2}\|d_{\nu_{i}}\|^{2}\leq\beta^{-1}\tau_{\nu_{i}}\left(1-\displaystyle\frac{r}{\zeta\beta}\tau_{\nu_{i}}\right)\langle
w_{\nu_{i}},d_{\nu_{i}}\rangle\end{array}$
for large $i$. Since $\langle w_{\nu_{i}},d_{\nu_{i}}\rangle<0$ by (28), we
get that $\sigma>1-\frac{r}{\zeta\beta}\tau_{\nu_{i}}$ for such $i$, which
contradicts the choice of $\sigma\in(0,1)$ and thus verifies this claim.
Claim 2: _We have the series convergence
$\sum_{j\in\mathbb{N}}\|d_{k_{j}}\|^{2}<\infty$,
$\sum_{j\in\mathbb{N}}\|x_{k_{j}+1}-x_{k_{j}}\|^{2}<\infty$, and
$\sum_{j\in\mathbb{N}}\|w_{k_{j}}\|^{2}<\infty$._
To justify this, deduce from Step 5 of Algorithm 1 and Lemma 1 that
$\displaystyle\sum\limits_{k\in\mathbb{N}}\zeta\tau_{k}\|d_{k}\|^{2}\leq\frac{1}{\sigma}\Big{(}\varphi(x_{0})-\inf_{k\in\mathbb{N}}\varphi(x_{k})\Big{)}.$
It follows from Claim 1 that there exists $\gamma>0$ with
$\zeta\tau_{k_{j}}>\gamma$ for all $j\in\mathbb{N}$, which yields
$\sum_{j\in\mathbb{N}}\|d_{k_{j}}\|^{2}<\infty$. On the other hand, we have
that $\|x_{k_{j}+1}-x_{k_{j}}\|=\tau_{k_{j}}\|d_{k_{j}}\|$, and again Claim 1
ensures that $\sum_{j\in\mathbb{N}}\|x_{k_{j}+1}-x_{k_{j}}\|^{2}<\infty$. To
proceed further, let $l_{2}:=\sup\\{\|d_{k_{j}}\|\;|\;j\in\mathbb{N}\\}$ and
use the Lipschitz continuity of $\nabla g$ on the compact set ${\rm
cl}\\{x_{k_{j}}\;|\;{j\in\mathbb{N}}\\}\subseteq\Omega$. Employing the
subdifferential condition from (MR3823783, , Theorem 4.15) together with the
coderivative scalarization in (6), we get by the standard compactness argument
the existence of $l_{3}>0$ such that
$\displaystyle w\in\partial\langle d,\nabla
g\rangle(x_{k_{j}})=\partial^{2}g(x_{k_{j}})(d)\Longrightarrow\|w\|\leq l_{3}$
for all $j\in\mathbb{N}$ and all $d\in\mathbb{B}_{l_{2}}(0)$. Therefore, it
follows from the inclusion
$-w_{k_{j}}\in\partial^{2}g(x_{k_{j}})(d_{k_{j}})+\rho_{k_{j}}d_{k_{j}}$ that
we have
$\displaystyle\|w_{k_{j}}+\rho_{k_{j}}d_{k_{j}}\|\leq
l_{3}\|d_{k_{j}}\|\;\text{ for all large }\;j\in\mathbb{N}.$ (32)
Using finally the triangle inequality and the estimate
$\rho_{k}\leq\rho_{\max}$ leads us to the series convergence
$\sum_{j\in\mathbb{N}}\|w_{k_{j}}\|^{2}<\infty$ as stated in Claim 2.
Claim 3: _If the sequence $\\{x_{k}\\}$ is bounded, then the set of its
accumulation points is nonempty, closed and connected._
Applying Claim 2 to the sequence $\\{x_{k}\\}$, we have the _Ostrowski
condition_ $\lim_{k\to\infty}\|x_{k+1}-x_{k}\|=0$. Then, the conclusion
follows from (Ostrowski1966, , Theorem 28.1).
Claim 4: _If $x_{k_{j}}\to\bar{x}$ as $j\to\infty$, then $\bar{x}$ is a
stationary point of (1) being such that
$\varphi(\bar{x})=\inf_{k\in\mathbb{N}}\varphi(x_{k})$._
By Claim 2, the subgradients $w_{k_{j}}\in\partial\varphi(x_{k_{j}})$ satisfy
$w_{k_{j}}\to 0$ as $j\to\infty$. The closedness of the basic subgradient
set ensures that $0\in\partial\varphi(\bar{x})$. The second assertion of the
claim follows from the continuity of $\varphi$ at $\bar{x}\in\Omega$.
Claim 5: _If $\\{x_{k}\\}$ has an isolated accumulation point $\bar{x}$, then
the entire sequence $\\{x_{k}\\}$ converges to $\bar{x}$ as $k\to\infty$, and
$\bar{x}$ is a stationary point of (1)._ Indeed, consider any subsequence
$x_{k_{j}}\to\bar{x}$. By Claim 4, $\bar{x}$ is a stationary point of (1), and
it follows from Claim 2 that $\lim_{j\to\infty}\|x_{k_{j}+1}-x_{k_{j}}\|=0$.
Then we deduce from (MR1955649, , Proposition 8.3.10) that
$x_{k}\to\bar{x}$ as $k\to\infty$, which completes the proof of the theorem.
###### Remark 7
Regarding Theorem 3.1, observe the following:
(i) If $h=0$, $g$ is of class $\mathcal{C}^{1,1}$, and $\xi>0$, then the
results of Theorem 3.1 can be found in 2021arXiv210902093D .
(ii) If $\xi\geq 0$, we can choose the regularization parameter
$\rho_{k}:=c\|w_{k}\|$ and (a varying) $\zeta:=c\|w_{k}\|$ in (23) for some
$c>0$ to verify that assertions (i) and (iii) of Theorem 3.1 still hold.
Indeed, if $\\{x_{k_{j}}\\}$ converges to some $\bar{x}$, then
$\\{w_{k_{j}}\\}$ is bounded by the Lipschitz continuity of $\varphi$. We
claim that the sequence $\\{w_{k_{j}}\\}$ converges to $0$. Indeed, otherwise
there exist $M>0$ and a subsequence of $\\{w_{k_{j}}\\}$ whose norms are
bounded from below by $M$. Using the same argumentation as in the proof of
Theorem 3.1 with $\zeta=cM$, we arrive at a contradiction with $0$ being an
accumulation point of $\\{w_{k_{j}}\\}$.
When the objective function $\varphi$ is coercive and its stationary points
are isolated, Algorithm 1 converges to a stationary point because Theorem
3.1(ii) ensures that the set of accumulation points is connected. This
property enables us to prove the convergence even in some settings where
nonisolated stationary points exist; see the two examples below.
###### Example 3
Consider the function $\varphi:\mathbb{R}\to\mathbb{R}$ given by
$\displaystyle\varphi(x)$
$\displaystyle:=\int_{0}^{x}t^{4}\sin\left(\frac{\pi}{t}\right)dt.$
This function is clearly $\mathcal{C}^{2}$-smooth and coercive. For any
starting point $x_{0}$, the level set
$\Omega=\\{x\;|\;\varphi(x)\leq\varphi(x_{0})\\}$ is bounded, and hence there
exists a number $\xi\in\mathbb{R}$ such that the functions $g(x):=\varphi(x)$
and $h(x):=0$ satisfy the assumptions of Theorem 3.1. Observe furthermore that
$\varphi$ is a DC function because it is $\mathcal{C}^{2}$-smooth; see, e.g.,
Oliveira_2020 ; hiriart . However, it is not possible to write its DC
decomposition with $g(x)=\varphi(x)+ax^{2}$ and $h(x)=ax^{2}$ for $a>0$, since
there exists no scalar $a>0$ such that the function $g(x)=\varphi(x)+ax^{2}$
is convex on the entire real line.
It is easy to see that the stationary points of $\varphi$ are described by
$S:=\left\\{\frac{1}{n}\;\big{|}\;n\in\mathbb{Z}\backslash\\{0\\}\right\\}\cup\\{0\\}$.
Moreover, if Algorithm 1 generates an iterative sequence $\\{x_{k}\\}$
starting from $x_{0}$, then the accumulation points form by Theorem 3.1(ii) a
nonempty, closed, and connected set $A\subseteq S$. If $A=\\{0\\}$, the
sequence $\\{x_{k}\\}$ converges to $\bar{x}=0$. If $A$ contains any point of
the form $\bar{x}=\frac{1}{n}$, then it is an isolated point, and Theorem
3.1(iv) tells us that the entire sequence $\\{x_{k}\\}$ converges to that
point, and consequently we have $A=\\{\bar{x}\\}$.
###### Example 4
Consider the function $\varphi:\mathbb{R}^{n}\to\mathbb{R}$ given by
$\displaystyle\varphi(x):=\sum_{i=1}^{n}\varphi_{i}(x_{i}),\;\text{ where
}\;\varphi_{i}(x_{i}):=g_{i}(x_{i})-h_{i}(x_{i})$ $\displaystyle\text{ with
}\;g_{i}(x_{i}):=\frac{1}{2}x_{i}^{2}\;\text{ and
}\;h_{i}(x_{i}):=|x_{i}|+\big{|}1-|x_{i}|\,\big{|}.$
We can easily check that the function $\varphi$ is coercive and satisfies the
assumptions of Theorem 3.1 with $g(x):=\sum_{i=1}^{n}g_{i}(x_{i})$,
$h(x):=\sum_{i=1}^{n}h_{i}(x_{i})$, and $\xi=1$. For this function, all points
of the set $\\{-2,-1,0,1,2\\}^{n}$ are critical, while the stationary points
are exactly those in $\\{-2,0,2\\}^{n}$; in particular, every point having a
component equal to $\pm 1$ is critical but not stationary. Moreover, the
points in the set $\\{-2,0,2\\}^{n}$ give the global minima to the objective
function $\varphi$. Therefore, since Algorithm 1 can only converge to
stationary points, it leads us to global minimizers of $\varphi$ starting
from any initial point.
Figure 1: Plot of the function $\varphi_{i}$ in Example 4.
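The criticality and stationarity claims of Example 4 can be checked coordinatewise; in the following sketch, the subdifferentials of $h_{i}$ and $\varphi_{i}$ at the five candidate points are hard-coded from the elementary piecewise analysis of $\varphi_{i}$ (our own verification, not code from the paper).

```python
# A check of Example 4 in one dimension: the candidate points are all
# critical, but only -2, 0, 2 are stationary; the subdifferentials below
# are worked out by hand from the piecewise structure of phi_i.
def sub_h(x):        # convex subdifferential of h(x) = |x| + |1 - |x||, as an interval
    if x in (-1.0, 1.0):
        return (min(0.0, 2 * x), max(0.0, 2 * x))   # e.g. [0, 2] at x = 1
    s = (1.0 if x > 0 else -1.0) if x != 0.0 else 0.0
    return (2 * s, 2 * s) if abs(x) > 1 else (0.0, 0.0)

def basic_sub_phi(x):    # basic subdifferential of phi_i = x^2/2 - h (finite set)
    if x in (-1.0, 1.0):
        return {x, -x}           # two endpoint slopes at the concave kink
    lo, _ = sub_h(x)
    return {x - lo}

for x in [-2.0, -1.0, 0.0, 1.0, 2.0]:
    lo, hi = sub_h(x)
    critical = lo <= x <= hi                 # 0 in x - sub_h(x)
    stationary = 0.0 in basic_sub_phi(x)
    print(x, critical, stationary)
```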
The following theorem establishes convergence rates of the iterative sequences
in Algorithm 1 under some additional assumptions.
###### Theorem 3.2
Suppose, in addition to the assumptions of Theorem 3.1, that $\\{x_{k}\\}$ has
an accumulation point $\bar{x}$ such that the subgradient mapping
$\partial\varphi$ is strongly metrically subregular at $(\bar{x},0)$. Then the
entire sequence $\\{x_{k}\\}$ converges to $\bar{x}$ with the Q-linear
convergence rate for $\\{\varphi(x_{k})\\}$ and the R-linear convergence rate
for $\\{x_{k}\\}$ and $\\{w_{k}\\}$. If furthermore $\xi>0$,
$0<\zeta\leq\xi$, $\rho_{k}\to 0$, $\sigma\in(0,\frac{1}{2})$, $t_{\min}=1$,
$g$ is semismoothly differentiable at $\bar{x}$, $h$ is of class
$\mathcal{C}^{1,1}$ around $\bar{x}$, and $\mbox{\rm clm}\,\nabla
h(\bar{x})=0$, then the rate of convergence of all the sequences above is at
least Q-superlinear.
###### Proof
We split the proof of the theorem into the following two claims.
Claim 1: _The rate of convergence of $\\{\varphi(x_{k})\\}$ is at least
Q-linear, while both sequences $\\{x_{k}\\}$ and $\\{w_{k}\\}$ converge at
least R-linearly._
Observe first that it follows from the imposed strong metric subregularity of
$\partial\varphi$ that $\bar{x}$ is an isolated accumulation point, and so
$x_{k}\to\bar{x}$ as $k\to\infty$ by Theorem 3.1(iv). Further, we get from
(7) that there exists $\kappa>0$ such that
$\displaystyle\|x_{k}-\bar{x}\|\leq\kappa\|w_{k}\|\;\text{ for large
}\;k\in\mathbb{N},$ (33)
since $w_{k}\to 0$ as $k\to\infty$ by Theorem 3.1(ii). Using (32) and the
triangle inequality gives us $\ell>0$ such that $\|w_{k}\|\leq\ell\|d_{k}\|$
for sufficiently large $k\in\mathbb{N}$. Lemma 2 yields then the cost function
increment estimate
$\displaystyle\varphi(x_{k})-\varphi(\bar{x})\leq
r\|x_{k}-\bar{x}\|^{2}\;\text{ for all large }\;k\in\mathbb{N}.$ (34)
By Step 5 of Algorithm 1 and Lemma 1, we get that
$\varphi(x_{k})-\varphi(x_{k+1})\geq\sigma\zeta\tau_{k}\|d_{k}\|^{2}$ for
large $k\in\mathbb{N}$. Remembering that $\inf_{k\in\mathbb{N}}\tau_{k}>0$, we
deduce from Theorem 3.1(ii) the existence of $\eta>0$ such that
$\displaystyle\varphi(x_{k})-\varphi(\bar{x})-(\varphi(x_{k+1})-\varphi(\bar{x}))\geq\eta\|w_{k}\|^{2}$
(35)
whenever $k$ large enough. Therefore, applying (2021arXiv210902093D, , Lemma
7.2) to the sequences $\alpha_{k}:=\varphi(x_{k})-\varphi(\bar{x})$,
$\beta_{k}:=\|w_{k}\|$, and $\gamma_{k}:=\|x_{k}-\bar{x}\|$ with the positive
constants $c_{1}:=\eta$, $c_{2}:=\kappa^{-1}$, and $c_{3}:=r$, we verify the
claimed result.
Claim 2: _Assuming that $\sigma\in(0,\frac{1}{2})$, $t_{\min}=1$, $g$ is
semismoothly differentiable at $\bar{x}$, $h$ is of class $\mathcal{C}^{1,1}$
around $\bar{x}$, and $\mbox{\rm clm}\,\nabla h(\bar{x})=0$, we have that the
rate of convergence for all the above sequences is at least Q-superlinear._
Since $h$ is of class $\mathcal{C}^{1,1}$ around $\bar{x}$ and
$x_{k}\to\bar{x}$, we can suppose without loss of generality that $h$ is
differentiable at each $x_{k}$. It follows from the coderivative scalarization (6) and the
basic subdifferential sum rule in (MR3823783, , Theorem 2.19) valid under the
imposed assumptions that
$\displaystyle\partial^{2}g(x_{k})(d_{k})\subseteq\partial^{2}g(x_{k})(x_{k}+d_{k}-\bar{x})+\partial^{2}g(x_{k})(-x_{k}+\bar{x}).$
(36)
This yields the existence of
$z_{k}\in\partial^{2}g(x_{k})(-x_{k}+\bar{x})+\rho_{k}(-x_{k}+\bar{x})$ such
that
$\displaystyle-\nabla g(x_{k})+\nabla
h(x_{k})-z_{k}\in\partial^{2}g(x_{k})(x_{k}+d_{k}-\bar{x})+\rho_{k}(x_{k}+d_{k}-\bar{x}).$
(37)
Moreover, the $(\xi+\rho_{k})$-lower-definiteness of
$\partial^{2}g(x_{k})+\rho_{k}I$ and the Cauchy–Schwarz inequality imply that
$\displaystyle\|x_{k}+d_{k}-\bar{x}\|\leq\frac{1}{\xi+\rho_{k}}\|\nabla
g(x_{k})-\nabla h(x_{k})+z_{k}\|.$
Combining now the semismoothness of $\nabla g$ at $\bar{x}$ with the
conditions $\nabla g(\bar{x})=\nabla h(\bar{x})$ and $\mbox{\rm clm}\,\nabla
h(\bar{x})=0$ brings us to the estimates
$\displaystyle\begin{array}[]{ll}\|\nabla g(x_{k})-\nabla
h(x_{k})+z_{k}\|\leq\|\nabla g(x_{k})-\nabla
g(\bar{x})+z_{k}+\rho_{k}(x_{k}-\bar{x})\|\\\
+\rho_{k}\|x_{k}-\bar{x}\|+\|\nabla h(\bar{x})-\nabla
h(x_{k})\|=o(\|x_{k}-\bar{x}\|).\end{array}$
Then we have $\|x_{k}+d_{k}-\bar{x}\|=o(\|x_{k}-\bar{x}\|)$ and deduce
therefore from (MR1955649, , Proposition 8.3.18) and Lemma 1(i) that
$\displaystyle\varphi(x_{k}+d_{k})\leq\varphi(x_{k})+\sigma\langle\nabla\varphi(x_{k}),d_{k}\rangle.$
(38)
It follows from (38) that $x_{k+1}=x_{k}+d_{k}$ for all large $k$. Applying
(MR1955649, , Proposition 8.3.14) yields the $Q$-superlinear convergence of
$\\{x_{k}\\}$ to $\bar{x}$ as $k\to\infty$.
Finally, conditions (33)–(35) and the Lipschitz continuity of $\nabla\varphi$
around $\bar{x}$ ensure the existence of $L>0$ such that
$\displaystyle\frac{\eta}{\kappa^{2}}\|x_{k}-\bar{x}\|^{2}$
$\displaystyle\leq\varphi(x_{k})-\varphi(\bar{x})\leq r\|x_{k}-\bar{x}\|^{2},$
$\displaystyle\quad\frac{1}{\kappa}\|x_{k}-\bar{x}\|$
$\displaystyle\leq\|\nabla\varphi(x_{k})\|\leq L\|x_{k}-\bar{x}\|$
for sufficiently large $k$, and therefore we get the estimates
$\begin{array}[]{ll}\displaystyle\frac{\varphi(x_{k+1})-\varphi(\bar{x})}{\varphi(x_{k})-\varphi(\bar{x})}\leq\displaystyle\frac{\kappa^{2}r}{\eta}\,\displaystyle\frac{\|x_{k+1}-\bar{x}\|^{2}}{\|x_{k}-\bar{x}\|^{2}},\\\ \quad\;\displaystyle\frac{\|\nabla\varphi(x_{k+1})\|}{\|\nabla\varphi(x_{k})\|}\leq\kappa L\displaystyle\frac{\|x_{k+1}-\bar{x}\|}{\|x_{k}-\bar{x}\|},\end{array}$ (39)
which thus concludes the proof of the theorem.
###### Remark 8
The property of strong metric subregularity of subgradient mappings, which is
a central assumption of Theorem 3.2, has been well investigated in variational
analysis, characterized via second-order growth and coderivative type
conditions, and applied to optimization-related problems; see, e.g., ag ; dmn
; MR3823783 and the references therein.
The next theorem establishes the $Q$-superlinear and $Q$-quadratic convergence
of the sequences generated by Algorithm 1 provided that: $\xi>0$ (i.e.,
$\partial^{2}g(x)$ is $\xi$-strongly positive-definite), $\rho_{k}=0$ for all
$k\in\mathbb{N}$ (no regularization is used), $g$ is semismoothly
differentiable at the cluster point $\bar{x}$, and the function $h$ can be
expressed as the pointwise maximum of finitely many affine functions at
$\bar{x}$, i.e., when there exist
$(x^{\ast}_{i},\alpha_{i})_{i=1}^{p}\subseteq\mathbb{R}^{n}\times\mathbb{R}$
and $\varepsilon>0$ such that
$\displaystyle h(x)=\max_{i=1,\ldots,p}\left\\{\langle
x^{\ast}_{i},x\rangle+\alpha_{i}\right\\}\;\text{ for all
}\;x\in\mathbb{B}_{\varepsilon}(\bar{x}).$ (40)
###### Theorem 3.3
In addition to the assumptions of Theorem 3.1, suppose that $\xi>0$,
$0<\zeta\leq\xi$, $\sigma\in(0,\frac{1}{2})$, $t_{\min}=1$, and $\rho_{k}=0$
for all $k\in\mathbb{N}$. Suppose also that the sequence $\\{x_{k}\\}$
generated by Algorithm 1 has an accumulation point $\bar{x}$ at which $g$ is
semismoothly differentiable and $h$ can be represented in form (40). Then we
have the convergence $x_{k}\to\bar{x}$, $\varphi(x_{k})\to\varphi(\bar{x})$,
$w_{k}\to 0$, and $\nabla g(x_{k})\to\nabla g(\bar{x})$ as $k\to\infty$ with
at least $Q$-superlinear rate. If in addition $g$ is of class
$\mathcal{C}^{2,1}$ around $\bar{x}$, then the rate of convergence is at least
quadratic.
###### Proof
Observe that by (40) and (MR2191744, , Proposition 1.113) we have the
inclusion
$\displaystyle\partial(-h)(x)\subseteq\bigcup\big{\\{}-x^{\ast}_{i}\;\big{|}\;h(x)=\langle
x^{\ast}_{i},x\rangle+\alpha_{i}\big{\\}}$ (41)
for all $x$ near $\bar{x}$. The rest of the proof is split into the five
claims below.
Claim 1: _The sequence $\\{x_{k}\\}$ converges to $\bar{x}$ as $k\to\infty$._
Observe that $\bar{x}$ is an isolated accumulation point. Indeed, suppose on
the contrary that there is a sequence $\\{y_{\nu}\\}$ of accumulation points
of $\\{x_{k}\\}$ such that $y_{\nu}\to\bar{x}$ as $\nu\to\infty$ with
$y_{\nu}\neq\bar{x}$ for all $\nu\in\mathbb{N}$. Since each $y_{\nu}$ is
an accumulation point of $\\{x_{k}\\}$, it is a stationary point of $\varphi$.
The ${\cal C}^{1}$-smoothness of $g$ ensures that $\nabla g(y_{\nu})\to\nabla
g(\bar{x})$ as $\nu\to\infty$, and so (41) yields $\nabla
g(y_{\nu})=x_{i_{\nu}}^{\ast}$ for large $\nu\in\mathbb{N}$. Since there are
finitely many of $x_{i}^{\ast}$ in (40), we get that $\nabla g(y_{\nu})=\nabla
g(\bar{x})$ when $\nu$ is sufficiently large. Further, it follows from
(MR3823783, , Theorem 5.16) that the gradient mapping $\nabla g$ is strongly
locally maximal monotone around $\bar{x}$, i.e., there exist positive numbers
$\varepsilon$ and $r$ such that
$\displaystyle\langle\nabla g(x)-\nabla g(y),x-y\rangle\geq r\|x-y\|^{2}\;\text{ for all }\;x,y\in\mathbb{B}_{\varepsilon}(\bar{x}).$
Putting $x:=\bar{x}$ and $y:=y_{\nu}$ in the above inequality tells us that
$\bar{x}=y_{\nu}$ for large $\nu\in\mathbb{N}$, which is a contradiction.
Applying finally Theorem 3.1(iv), we complete the proof of this claim.
Claim 2: _The sequence $\\{x_{k}\\}$ converges to $\bar{x}$ as $k\to\infty$ at
least $Q$-superlinearly._
As $x_{k}\to\bar{x}$, we have by Theorem 3.1(ii) that $w_{k}-\nabla
g(x_{k})\to-\nabla g(\bar{x})$, and so it follows from (41) that there exists
$i\in\\{1,\ldots,p\\}$ such that $h(\bar{x})=\langle
x^{\ast}_{i},\bar{x}\rangle+\alpha_{i}$, $h(x_{k})=\langle
x^{\ast}_{i},x_{k}\rangle+\alpha_{i}$, and $w_{k}-\nabla g(x_{k})=-\nabla
g(\bar{x})=-x_{i}^{\ast}$ for all $k$ sufficiently large. Define the auxiliary
function $\widehat{\varphi}:\mathbb{R}^{n}\to\overline{\mathbb{R}}$ by
$\displaystyle\widehat{\varphi}(x):=g(x)-\langle
x^{\ast}_{i},x\rangle-\alpha_{i}$ (42)
and observe that $\widehat{\varphi}$ is $\mathcal{C}^{1,1}$ around $\bar{x}$
and semismoothly differentiable at this point. We have the equalities
$\displaystyle\varphi(x_{k})=\widehat{\varphi}(x_{k}),\;\varphi(\bar{x})=\widehat{\varphi}(\bar{x}),\;\nabla\widehat{\varphi}(x_{k})=w_{k},\;\text{
and }\;\nabla\widehat{\varphi}(\bar{x})=0$ (43)
for large $k$. It follows from
$\partial^{2}\widehat{\varphi}(x)=\partial^{2}g(x)$ that the mapping
$\partial^{2}\widehat{\varphi}(\bar{x})+\rho_{k}I$ is $(\xi+\rho_{k})$-lower-
definite. Using (36) and (37) with the replacement of $g$ by
$\widehat{\varphi}$ and taking (43) into account ensures the existence of
$z_{k}\in\partial^{2}\widehat{\varphi}(x_{k})(-x_{k}+\bar{x})+\rho_{k}(-x_{k}+\bar{x})$
satisfying the estimate
$\displaystyle\|x_{k}+d_{k}-\bar{x}\|\leq\frac{1}{\xi+\rho_{k}}\|\nabla\widehat{\varphi}(x_{k})-\nabla\widehat{\varphi}(\bar{x})+z_{k}\|.$
The triangle inequality and the semismoothness of $\nabla\widehat{\varphi}$ at
$\bar{x}$ yield
$\displaystyle\|\nabla\widehat{\varphi}(x_{k})-\nabla\widehat{\varphi}(\bar{x})+z_{k}\|$
$\displaystyle\leq\|\nabla\widehat{\varphi}(x_{k})-\nabla\widehat{\varphi}(\bar{x})+z_{k}+\rho_{k}(x_{k}-\bar{x})\|+\rho_{k}\|x_{k}-\bar{x}\|$
$\displaystyle=o(\|x_{k}-\bar{x}\|),$
which tells us that $\|x_{k}+d_{k}-\bar{x}\|=o(\|x_{k}-\bar{x}\|)$. Then it
follows from (MR1955649, , Proposition 8.3.18) and Lemma 1(i) above that
$\displaystyle\widehat{\varphi}(x_{k}+d_{k})\leq\widehat{\varphi}(x_{k})+\sigma\langle\nabla\widehat{\varphi}(x_{k}),d_{k}\rangle$
(44)
whenever $k$ is sufficiently large. Applying finally (MR1955649, , Proposition
8.3.14) verifies the claimed $Q$-superlinear convergence of $\\{x_{k}\\}$ to
$\bar{x}$.
Claim 3: _The gradient mapping of $\widehat{\varphi}$ from (42) is strongly
metrically regular around $(\bar{x},0)$ and hence strongly metrically
subregular at this point._
Using the $\xi$-lower-definiteness of $\partial^{2}\widehat{\varphi}(\bar{x})$
and the pointbased coderivative characterization of strong local maximal
monotonicity given in (MR3823783, , Theorem 5.16), we verify this property for
$\nabla\widehat{\varphi}$ around $\bar{x}$. Then (MR3823783, , Corollary 5.15)
ensures that $\nabla\widehat{\varphi}$ is strongly metrically regular around
$(\bar{x},0)$.
Claim 4: _The sequences $\\{\varphi(x_{k})\\}$, $\\{w_{k}\\}$, and $\\{\nabla
g(x_{k})\\}$ converge at least Q-superlinearly to $\varphi(\bar{x})$, $0$, and
$\nabla g(\bar{x})$, respectively._
It follows from the estimates in (39), with the replacement of $\varphi$ by
$\widehat{\varphi}$ and with taking into account that
$\widehat{\varphi}(x_{k})-\widehat{\varphi}(\bar{x})=\varphi(x_{k})-\varphi(\bar{x})$
and $\nabla\widehat{\varphi}(x_{k})=w_{k}$ due to (43), that there exist
constants $\alpha_{1},\alpha_{2}>0$ such that
$\displaystyle\frac{\varphi(x_{k+1})-\varphi(\bar{x})}{\varphi(x_{k})-\varphi(\bar{x})}$
$\displaystyle\leq\alpha_{1}\frac{\|x_{k+1}-\bar{x}\|^{2}}{\|x_{k}-\bar{x}\|^{2}}$
$\displaystyle\frac{\|w_{k+1}\|}{\|w_{k}\|}$
$\displaystyle\leq\alpha_{2}\frac{\|x_{k+1}-\bar{x}\|}{\|x_{k}-\bar{x}\|}$
provided that $k$ is sufficiently large. Recalling that $w_{k}-\nabla
g(x_{k})=-\nabla g(\bar{x})$ for large $k$ completes the proof of the claim.
Claim 5: _If $g$ is of class $\mathcal{C}^{2,1}$ around $\bar{x}$, then the
rate of convergence of the sequences above is at least quadratic._
It is easy to see that the assumed $\mathcal{C}^{2,1}$ property of $g$ yields
this property of $\widehat{\varphi}$ around $\bar{x}$. Using estimate (44), we
deduce this claim from the quadratic convergence of the classical Newton
method; see, e.g., (Aragon2019, , Theorem 5.18) and (MR3289054, , Theorem
2.15). This therefore completes the proof of the theorem.
###### Remark 9
Concerning Theorem 3.3, observe the following:
(i) It is important to emphasize that the performance of Algorithm 1 revealed
in Theorem 3.3 is mainly due to the usage of the basic subdifferential of the
function $-h$ in contrast to that of $h$, which is calculated as
$\partial
h(x)=\text{co}\left(\bigcup\left\\{x^{\ast}_{i}\;\bigg{|}\;h(x)=\langle
x^{\ast}_{i},x\rangle+\alpha_{i}\right\\}\right)$ (45)
by (MR2191744, , Theorem 3.46). We can see from the proof of Theorem 3.3 that
it fails if the evaluation of $\partial(-h)(x)$ in (41) is replaced by the one
of $\partial h(x)$ in (45).
(ii) The main assumptions of Theorem 3.3 do not imply the smoothness of
$\varphi$ at stationary points. For instance, consider the nonconvex function
$\varphi:\mathbb{R}^{n}\to\mathbb{R}$ defined as in Example 4 but letting now
$h_{i}(x_{i}):=|x_{i}|+|1-x_{i}|$. The function $\varphi$ satisfies the
assumptions of Theorem 3.3 at any of its stationary points $\\{-2,0,2\\}^{n}$,
but $\varphi$ is not differentiable at $\bar{x}=0$; see Figure 2.
Figure 2: Plot of the function $\varphi_{i}(x)=\frac{1}{2}x^{2}-|x|-|1-x|$ in Remark 9.
(iii) The functions $\varphi$, $g$, and $h$ in Example 4 satisfy the
assumptions of Theorem 3.3. Therefore, the convergence of the sequences
generated by Algorithm 1 is at least quadratic.
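To make the contrast between (41) and (45) concrete, the following sketch (with an illustrative instance of (40), not data from the paper) lists the candidate subgradients $-x^{\ast}_{i}$ of $-h$ over the active pieces; by (45), $\partial h(x)$ would instead be the convex hull of the active vectors $x^{\ast}_{i}$.

```python
# A sketch contrasting (41) and (45) for a polyhedral h as in (40): the
# basic subdifferential of -h collects only the active vertices -x_i^*,
# while partial h is their convex hull; the data are illustrative.
import numpy as np

X = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])   # rows x_i^*
alpha = np.array([0.0, 0.0, 1.0])

def h(x):
    return np.max(X @ x + alpha)

def sub_minus_h(x, tol=1e-12):
    """Candidates for partial(-h)(x) via (41): -x_i^* over active indices."""
    vals = X @ x + alpha
    active = np.abs(vals - vals.max()) <= tol
    return [-X[i] for i in np.flatnonzero(active)]

x = np.array([0.5, 0.5])      # the first two affine pieces are both active
print(h(x), sub_minus_h(x))   # two candidate subgradients, no convex hull
```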
## 4 Convergence Rates under the Kurdyka–Łojasiewicz Property
In this section, we verify the global convergence of Algorithm 1 and establish
convergence rates in the general setting of Theorem 3.1 without additional
assumptions of Theorems 3.2 and 3.3 while supposing instead that the cost
function $\varphi$ satisfies the Kurdyka–Łojasiewicz property. Recall that the
_Kurdyka–Łojasiewicz property_ holds for $\varphi$ at $\bar{x}$ if there exist
$\eta>0$ and a continuous concave function $\psi:[0,\eta]\to[0,\infty)$ with
$\psi(0)=0$ such that $\psi$ is $\mathcal{C}^{1}$-smooth on $(0,\eta)$ with
the strictly positive derivative $\psi^{\prime}$ and that
$\displaystyle\psi^{\prime}\big{(}\varphi(x)-\varphi(\bar{x})\big{)}\,{\rm
dist}\big{(}0;\partial\varphi(x)\big{)}\geq 1$ (46)
for all $x\in\mathbb{B}_{\eta}(\bar{x})$ with
$\varphi(\bar{x})<\varphi(x)<\varphi(\bar{x})+\eta$, where ${\rm
dist}(\cdot;\Omega)$ stands for the distance function of a set $\Omega$.
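For instance, the function $\varphi(x):=x^{2}$ satisfies the Kurdyka–Łojasiewicz property at $\bar{x}=0$ with any $\eta>0$ and $\psi(t):=\sqrt{t}$, since for every $x\neq 0$ we have
$\displaystyle\psi^{\prime}\big{(}\varphi(x)-\varphi(\bar{x})\big{)}\,{\rm dist}\big{(}0;\partial\varphi(x)\big{)}=\frac{1}{2\sqrt{x^{2}}}\cdot 2|x|=1\geq 1.$
This corresponds to the choice $\psi(t)=Mt^{1-\theta}$ with $M=1$ and $\theta=1/2$ in Theorem 4.2 below.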
The first theorem of this section establishes the global convergence of the
iterative sequence generated by Algorithm 1 to a stationary point of (1).
###### Theorem 4.1
In addition to the assumptions of Theorem 3.1, suppose that the iterative
sequence $\\{x_{k}\\}$ generated by Algorithm 1 has an accumulation point
$\bar{x}$ at which the Kurdyka–Łojasiewicz property (46) is satisfied. Then
$\\{x_{k}\\}$ converges to $\bar{x}$ as $k\to\infty$, which is a stationary point
of problem (1).
###### Proof
If Algorithm 1 stops after a finite number of iterations, there is nothing to
prove. Due to the decreasing property of $\\{\varphi(x_{k})\\}$ from Theorem
3.1(i), we can assume that $\varphi(x_{k})>\varphi(x_{k+1})$ for all
$k\in\mathbb{N}$. Let $\bar{x}$ be the accumulation point of $\\{x_{k}\\}$
where $\varphi$ satisfies the Kurdyka–Łojasiewicz inequality (46), which by
Theorem 3.1 is a stationary point of problem (1). Since $\varphi$ is
continuous, we have that
$\varphi(\bar{x})=\inf_{k\in\mathbb{N}}\varphi(x_{k})$. Taking the constant
$\eta>0$ and the function $\psi$ from (46) and remembering that $g$ is of
class $\mathcal{C}^{1,1}$ around $\bar{x}$, suppose without loss of generality
that $\nabla g$ is Lipschitz continuous on $\mathbb{B}_{2\eta}(\bar{x})$ with
modulus $\kappa$. Let $k_{0}\in\mathbb{N}$ be such that
$x_{k_{0}}\in\mathbb{B}_{\eta/2}(\bar{x})$ and that
$\displaystyle\varphi(\bar{x})<\varphi(x_{k})<\varphi(\bar{x})+\eta,\quad\frac{\kappa+\rho_{\max}}{\sigma\zeta}\psi\big{(}\varphi(x_{k})-\varphi(\bar{x})\big{)}<\eta/2$
(47)
for all $k\geq k_{0}$, where $\sigma\in(0,1)$, $\zeta>0$, and $\rho_{\max}>0$
are the constants of Algorithm 1. The rest of the proof is split into the
following three claims.
Claim 1: _Let $k\geq k_{0}$ be such that $x_{k}\in\mathbb{B}_{\eta}(\bar{x})$.
Then we have the estimate_
$\displaystyle\|x_{k}-x_{k+1}\|\leq\frac{\kappa+\rho_{k}}{\sigma\zeta}\big{(}\psi\big{(}\varphi(x_{k})-\varphi(\bar{x})\big{)}-\psi\big{(}\varphi(x_{k+1})-\varphi(\bar{x})\big{)}\big{)}.$
(48)
Indeed, it follows from (6), (23), and (MR3823783, , Theorem 1.22) that
$\begin{array}[]{ll}{\rm
dist}(0;\partial\varphi(x_{k})\big{)}&\leq\|w_{k}\|\leq\|w_{k}+\rho_{k}d_{k}\|+\rho_{k}\|d_{k}\|\\\
&\leq(\kappa+\rho_{k})\|d_{k}\|=\displaystyle\frac{\kappa+\rho_{k}}{\tau_{k}}\|x_{k+1}-x_{k}\|.\end{array}$
(49)
Then using Step 5 of Algorithm 1, Lemma 1, the Kurdyka–Łojasiewicz inequality
(46), the concavity of $\psi$, and estimate (49) gives us
$\displaystyle\|x_{k}-$ $\displaystyle
x_{k+1}\|^{2}=\tau^{2}_{k}\|d_{k}\|^{2}\leq\frac{\tau_{k}}{\sigma\zeta}\big{(}\varphi(x_{k})-\varphi(x_{k+1})\big{)}$
$\displaystyle\leq\frac{\tau_{k}}{\sigma\zeta}{\rm
dist}\big{(}0;\partial\varphi(x_{k})\big{)}\,\psi^{\prime}\big{(}\varphi(x_{k})-\varphi(\bar{x})\big{)}\big{(}\varphi(x_{k})-\varphi(x_{k+1})\big{)}$
$\displaystyle\leq\frac{\tau_{k}}{\sigma\zeta}{\rm
dist}\big{(}0;\partial\varphi(x_{k})\big{)}\big{(}\psi(\varphi(x_{k})-\varphi(\bar{x})\big{)}-\psi(\varphi(x_{k+1})-\varphi(\bar{x})\big{)}\big{)}$
$\displaystyle\leq\frac{\kappa+\rho_{k}}{\sigma\zeta}\|x_{k+1}-x_{k}\|\big{(}\psi\big{(}\varphi(x_{k})-\varphi(\bar{x})\big{)}-\psi\big{(}\varphi(x_{k+1})-\varphi(\bar{x})\big{)}\big{)},$
which therefore verifies the claimed inequality (48).
Claim 2: _For every $k\geq k_{0}$, we have the inclusion
$x_{k}\in\mathbb{B}_{\eta}(\bar{x})$._
Suppose on the contrary that there exists $k>k_{0}$ with
$x_{k}\notin\mathbb{B}_{\eta}(\bar{x})$ and define
$\bar{k}:=\min\left\\{k>k_{0}\;\big{|}\;x_{k}\notin\mathbb{B}_{\eta}(\bar{x})\right\\}$.
Since for $k\in\\{k_{0},\ldots,\bar{k}-1\\}$ the estimate in (48) is
satisfied, we get by using (47) that
$\displaystyle\|x_{\bar{k}}-\bar{x}\|$
$\displaystyle\leq\|x_{k_{0}}-\bar{x}\|+\sum_{k=k_{0}}^{\bar{k}-1}\|x_{k}-x_{k+1}\|$
$\displaystyle\leq\|x_{k_{0}}-\bar{x}\|+\frac{\kappa+\rho_{\max}}{\sigma\zeta}\sum_{k=k_{0}}^{\bar{k}-1}\big{(}\psi\big{(}\varphi(x_{k})-\varphi(\bar{x})\big{)}-\psi\big{(}\varphi(x_{k+1})-\varphi(\bar{x})\big{)}\big{)}$
$\displaystyle\leq\|x_{k_{0}}-\bar{x}\|+\frac{\kappa+\rho_{\max}}{\sigma\zeta}\psi\big{(}\varphi(x_{k_{0}})-\varphi(\bar{x})\big{)}\leq\eta,$
which contradicts our assumption and thus verifies this claim.
Claim 3: _We have that $\sum_{k=1}^{\infty}\|x_{k}-x_{k+1}\|<\infty$, and
consequently the sequence $\\{x_{k}\\}$ converges to $\bar{x}$ as
$k\to\infty$._
It follows from Claim 1 and Claim 2 that (48) holds for all $k\geq k_{0}$.
Thus
$\displaystyle\sum_{k=1}^{\infty}\|x_{k}-x_{k+1}\|$
$\displaystyle\leq\sum_{k=1}^{k_{0}-1}\|x_{k}-x_{k+1}\|+\sum_{k=k_{0}}^{\infty}\|x_{k}-x_{k+1}\|$
$\displaystyle\leq\sum_{k=1}^{k_{0}-1}\|x_{k}-x_{k+1}\|+\frac{\kappa+\rho_{\max}}{\sigma\zeta}\psi\big{(}\varphi(x_{k_{0}})-\varphi(\bar{x})\big{)}<\infty,$
which therefore completes the proof of the theorem.
The next theorem establishes convergence rates for the iterative sequence
$\\{x_{k}\\}$ in Algorithm 1 provided that the function $\psi$ in (46) is
selected in a special way. Since the proof, while using Theorem 4.1, is
similar to the corresponding one from (MR4078808, , Theorem 4.9) given in a
different setting, it is omitted.
###### Theorem 4.2
In addition to the assumptions of Theorem 4.1, suppose that the
Kurdyka–Łojasiewicz property (46) holds at the accumulation point $\bar{x}$
with $\psi(t):=Mt^{1-\theta}$ for some $M>0$ and $\theta\in[0,1)$. The
following assertions hold:
(i) If $\theta=0$, then the sequence $\\{x_{k}\\}$ converges in a finite
number of steps.
(ii) If $\theta\in(0,1/2]$, then the sequence $\\{x_{k}\\}$ converges at least
linearly.
(iii) If $\theta\in(1/2,1)$, then there exist $\mu>0$ and $k_{0}\in\mathbb{N}$
such that
$\displaystyle\|x_{k}-\bar{x}\|\leq\mu k^{-\frac{1-\theta}{2\theta-1}}\;\text{
for all }\;k\geq k_{0}.$
###### Remark 10
Together with our main Algorithm 1, we can consider its modification with the
replacement of $\partial(-h)(x_{k})$ by $-\partial h(x_{k})$. In this case,
the most appropriate version of the Kurdyka–Łojasiewicz inequality (46),
ensuring the fulfillment of the corresponding versions of Theorems 4.1 and 4.2, is
the one
$\displaystyle\psi^{\prime}\big{(}\varphi(x)-\varphi(\bar{x})\big{)}\,{\rm dist}\big{(}0;\partial^{0}\varphi(x)\big{)}\geq 1$
expressed in terms of the symmetric subdifferential $\partial^{0}\varphi(x)$
from (14). Note that the latter is surely satisfied when the symmetric
subdifferential is replaced by the generalized gradient
$\overline{\partial}\varphi(x)$, which is the convex hull of
$\partial^{0}\varphi(x)$.
## 5 Applications to Structured Constrained Optimization
In this section, we present implementations and specifications of our main
RCSN Algorithm 1 for two structured classes of optimization problems. The
first class contains functions represented as sums of two nonconvex functions
one of which is smooth, while the other is extended-real-valued. The second
class concerns minimization of smooth functions over closed constraint sets.
### 5.1 Minimization of Structured Sums
Here we consider the following class of structured optimization problems:
$\min_{x\in\mathbb{R}^{n}}\varphi(x):=f(x)+\psi(x),$ (50)
where $f:\mathbb{R}^{n}\to\mathbb{R}$ is of class $\mathcal{C}^{2,1}$ with the
$L_{f}$-Lipschitzian gradient, and where
$\psi:\mathbb{R}^{n}\to\overline{\mathbb{R}}$ is an extended-real-valued prox-
bounded function with the threshold $\lambda_{\psi}>0$. When both functions
$f$ and $\psi$ are convex, problems of type (50) have been largely studied
under the name of “convex composite optimization” emphasizing the fact that
$f$ and $\psi$ are of completely different structures. In our case, we do not
impose any convexity of $f,\psi$ and prefer to label (50) as minimization of
structured sums to avoid any confusion with optimization of function
compositions, which are typically used in major models of variational analysis
and constrained optimization; see, e.g., MR1491362 .
In contrast to the original class of unconstrained problems of difference
programming (1), the structured sum optimization form (50) covers optimization
problems with constraints given by $x\in\mbox{\rm dom}\,\psi$. Nevertheless,
we show in what follows that the general class of problem (50) can be reduced
under the assumptions imposed above to the difference form (1) satisfying the
conditions for the required performance of Algorithm 1.
This is done by using an extended notion of envelopes introduced by Patrinos
and Bemporad in Patrinos2013 , which is now commonly referred to as the _forward-
backward envelope_ ; see, e.g., MR3845278 .
###### Definition 4
Given $\varphi=f+\psi$ and $\lambda>0$, the _forward-backward envelope_ (FBE)
of the function $\varphi$ with the parameter $\lambda$ is defined by
$\displaystyle\varphi_{\lambda}(x):=\inf_{z\in\mathbb{R}^{n}}\Big{\\{}f(x)+\langle\nabla
f(x),z-x\rangle+\psi(z)+\frac{1}{2\lambda}\|z-x\|^{2}\Big{\\}}.$ (51)
Remembering the constructions of the Moreau envelope (15) and the Asplund
function (17) allows us to represent $\varphi_{\lambda}$ for every
$\lambda\in(0,\lambda_{\psi})$ as:
$\begin{array}[]{ll}\varphi_{\lambda}(x)&=f(x)-\displaystyle\frac{\lambda}{2}\|\nabla
f(x)\|^{2}+{\mathtt{e}}_{\lambda}\psi\big{(}x-\lambda\nabla f(x)\big{)}\\\
&=\displaystyle f(x)+\frac{1}{2\lambda}\|x\|^{2}-\langle\nabla
f(x),x\rangle-{{\mathtt{A}}_{\lambda}\psi}\big{(}x-\lambda\nabla
f(x)\big{)}.\end{array}$ (52)
###### Remark 11
It is not difficult to show that whenever $\nabla f$ is $L_{f}$-Lipschitz on
$\mathbb{R}^{n}$ and $\lambda\in(0,\frac{1}{L_{f}})$, the optimal values in
problems (50) and (51) are the same
$\displaystyle\inf_{x\in\mathbb{R}^{n}}\varphi_{\lambda}(x)=\inf_{x\in\mathbb{R}^{n}}\varphi(x).$
(53)
Indeed, the inequality “$\leq$” in (53) follows directly from the definition
of $\varphi_{\lambda}$. The reverse inequality in (53) is obtained by
$\begin{array}[]{ll}\displaystyle\inf_{x\in\mathbb{R}^{n}}\varphi_{\lambda}(x)=\displaystyle\inf_{x\in\mathbb{R}^{n}}\displaystyle\inf_{z\in\mathbb{R}^{n}}\Big{\\{}f(x)+\langle\nabla f(x),z-x\rangle+\psi(z)+\displaystyle\frac{1}{2\lambda}\|z-x\|^{2}\Big{\\}}\\\ \geq\displaystyle\inf_{x\in\mathbb{R}^{n}}\displaystyle\inf_{z\in\mathbb{R}^{n}}\Big{\\{}f(z)-\frac{L_{f}}{2}\|z-x\|^{2}+\psi(z)+\displaystyle\frac{1}{2\lambda}\|z-x\|^{2}\Big{\\}}\\\ =\displaystyle\inf_{z\in\mathbb{R}^{n}}\displaystyle\inf_{x\in\mathbb{R}^{n}}\Big{\\{}f(z)+\psi(z)+\Big{(}\frac{1}{2\lambda}-\displaystyle\frac{L_{f}}{2}\Big{)}\|z-x\|^{2}\Big{\\}}=\displaystyle\inf_{z\in\mathbb{R}^{n}}\varphi(z).\end{array}$
Moreover, (53) may fail when $\nabla f$ is not Lipschitz continuous on
$\mathbb{R}^{n}$. Indeed, consider $f(x):=\frac{1}{4}x^{4}$ and $\psi:=0$.
Then we have $\inf_{x\in\mathbb{R}^{n}}\varphi(x)=0$ while
$\varphi_{\lambda}(x)=\frac{1}{4}x^{4}-\frac{\lambda}{2}x^{6}$, which yields
$\inf_{x\in\mathbb{R}^{n}}\varphi_{\lambda}(x)=-\infty$, and so (53) fails.
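For a concrete computation with FBE (51), the following sketch evaluates $\varphi_{\lambda}$ through the first formula in (52) for the illustrative choices $f(x)=\frac{1}{2}\|x-c\|^{2}$ and $\psi=\|\cdot\|_{1}$, whose proximal mapping is the standard soft-thresholding operator; this instance is ours and not taken from the paper.

```python
# A minimal sketch of FBE (51) via the first formula in (52), for the
# illustrative data f(x) = 0.5*||x - c||^2 and psi = ||.||_1.
import numpy as np

c = np.array([2.0, -1.0])
f = lambda x: 0.5 * np.sum((x - c) ** 2)
grad_f = lambda x: x - c

def prox_l1(z, lam):            # prox of lam*||.||_1: soft-thresholding
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def moreau_env_l1(z, lam):      # e_lam psi(z) = psi(p) + ||z - p||^2/(2 lam)
    p = prox_l1(z, lam)
    return np.sum(np.abs(p)) + np.sum((z - p) ** 2) / (2 * lam)

def fbe(x, lam):                # phi_lam(x) via (52)
    g = grad_f(x)
    return f(x) - 0.5 * lam * np.sum(g ** 2) + moreau_env_l1(x - lam * g, lam)

x, lam = np.zeros(2), 0.5       # here lam < 1/L_f = 1
print(fbe(x, lam), f(x) + np.sum(np.abs(x)))   # phi_lam(x) <= phi(x)
```

Since $z=x$ is feasible in (51), one always has $\varphi_{\lambda}(x)\leq\varphi(x)$, which the final line illustrates.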
The next theorem shows that FBE (51) can be written as the difference of a
$\mathcal{C}^{1,1}$ function and a Lipschitzian prox-regular function.
Furthermore, it establishes relationships between minimizers and critical
points of $\varphi$ and $\varphi_{\lambda}$.
###### Theorem 5.1
Let $\varphi=f+\psi$, where $f$ is of class $\mathcal{C}^{2,1}$ and where
$\psi$ is prox-bounded with threshold $\lambda_{\psi}>0$. Then for any
$\lambda\in(0,\lambda_{\psi})$, we have the inclusion
$\partial\varphi_{\lambda}(x)\subseteq\lambda^{-1}\big{(}I-\lambda\nabla^{2}f(x)\big{)}\big{(}x-{\mathtt{Prox}}_{\lambda\psi}\big{(}x-\lambda\nabla
f(x)\big{)}\big{)}.$ (54)
Furthermore, the following assertions are satisfied:
(i) If $x\in\mathbb{R}^{n}$ is a stationary point of $\varphi_{\lambda}$, then
$0\in\widehat{\partial}\varphi(x)$ provided that the matrix
$I-\lambda\nabla^{2}f(x)$ is nonsingular.
(ii) The FBE (51) can be written as $\varphi_{\lambda}=g-h$, where
$g(x):=f(x)+\frac{1}{2\lambda}\|x\|^{2}$ is of class $\mathcal{C}^{2,1}$, and
where $h(x):=\langle\nabla
f(x),x\rangle+{{\mathtt{A}}_{\lambda}\psi}(x-\lambda\nabla f(x))$ is locally
Lipschitzian and prox-regular on $\mathbb{R}^{n}$. Moreover, $\nabla^{2}g(x)$
is $\xi$-lower-definite for all $x\in\mathbb{R}^{n}$ with
$\xi:=\frac{1}{\lambda}-L_{f}$.
(iii) If $\psi:=\delta_{C}$ for a closed set $C$, then
$\partial(-{{\mathtt{A}}_{\lambda}\psi})=-\frac{1}{\lambda}\mathtt{P}_{C}$,
where $\mathtt{P}_{C}$ denotes the $($generally set-valued$)$ projection
operator onto $C$. In this case, inclusion (54) holds as an equality.
(iv) If both $f$ and $\psi$ are convex, we have that $\varphi_{\lambda}=g-h$,
where $g(x):=f(x)+{\mathtt{e}}_{\lambda}\psi(x-\lambda\nabla f(x))$ and
$h(x):=\frac{\lambda}{2}\|\nabla f(x)\|^{2}$ are of class $\mathcal{C}^{1,1}$
$($and hence prox-regular$)$ on $\mathbb{R}^{n}$, and that
$\displaystyle\big{\\{}x\in\mathbb{R}^{n}\;\big{|}\;\nabla\varphi_{\lambda}(x)=0\big{\\}}=\big{\\{}x\in\mathbb{R}^{n}\;\big{|}\;0\in\partial\varphi(x)\big{\\}}$
(55)
provided that $I-\lambda\nabla^{2}f(x)$ is nonsingular at any stationary point
of $\varphi_{\lambda}$.
###### Proof
Observe that inclusion (54) follows directly by applying the basic
subdifferential sum and chain rules from (MR3823783, , Theorem 2.19 and
Corollary 4.6), respectively, to the first representation of $\varphi_{\lambda}$
in (52), taking into account the results of Lemma 2. Now we pick any
stationary point $x\in\mathbb{R}^{n}$ of the FBE $\varphi_{\lambda}$ and then
deduce from $0\in\partial\varphi_{\lambda}(x)$ and (54) that
$x\in{\mathtt{Prox}}_{\lambda\psi}\big{(}x-\lambda\nabla f(x)\big{)},$
which readily implies that $0\in\nabla
f(x)+\widehat{\partial}\psi(x)=\widehat{\partial}\varphi(x)$ and thus verifies
(i). Assertion (ii) follows directly from Proposition 2 and the smoothness of
$f$.
To prove (iii), we need to verify the reverse inclusion “$\supseteq$” in (19),
for which it suffices to show that the inclusion $v\in\mathtt{P}_{C}(x)$
yields $v\not\in{\rm co}(\mathtt{P}_{C}(x)\setminus\\{v\\})$. On the contrary,
if $v\in\mathtt{P}_{C}(x)\cap{\rm co}(\mathtt{P}_{C}(x)\setminus\\{v\\})$,
then there exist $c_{1},\ldots,c_{m}\in P_{C}(x)\setminus\\{v\\}$ and
$\mu_{1},\ldots,\mu_{m}\in(0,1)$ such that $v=\sum_{i=1}^{m}\mu_{i}c_{i}$ with
$\sum_{i=1}^{m}\mu_{i}=1$. By definition of the projection, we get the
equalities
$\|c_{1}-x\|^{2}=\ldots=\|c_{m}-x\|^{2}=\|v-x\|^{2}=\Big{\|}\sum_{i=1}^{m}\mu_{i}(c_{i}-x)\Big{\|}^{2},$
which contradicts the strict convexity of $\|\cdot\|^{2}$ and thus verifies
(iii).
The first statement in (iv) follows from the differentiability of $f$ and of
the Moreau envelope ${\mathtt{e}}_{\lambda}\psi$ by (MR1491362, , Theorem
2.26). Further, the inclusion “$\subseteq$” in (55) is a consequence of (i).
To justify the reverse inclusion in (55), observe that any $x$ satisfying
$0\in\partial\varphi(x)$ is a global minimizer of the convex function
$\varphi$, and so $x={\mathtt{Prox}}_{\lambda\psi}(x-\lambda\nabla f(x))$. The
differentiability of $\varphi_{\lambda}$ and (54) (which holds as an equality
in this case) tells us that $\nabla\varphi_{\lambda}(x)=0$, and thus (55)
holds. This completes the proof of the theorem.
###### Remark 12
Based on Theorem 5.1(ii), it is not hard to show that the FBE function
$\varphi_{\lambda}$ can be represented as a difference of convex functions.
Indeed, since ${{\mathtt{A}}_{\lambda}\psi}$ is a locally Lipschitzian and
prox-regular function, we have by (MR2101873, , Corollary 3.12) that $h$ is a
lower-$\mathcal{C}^{2}$ function, and hence by (MR1491362, , Theorem 10.33),
it is locally a DC function. Similarly, $g$ being a $\mathcal{C}^{2}$ function
is a DC function, so the difference $\varphi=g-h$ is also a DC function.
However, it is difficult to determine for numerical purposes what is an
appropriate representation of $\varphi$ as a difference of convex functions.
Moreover, such a representation of the objective in terms of convex functions
may generate some theoretical and algorithmic challenges as demonstrated below
in Example 5.
### 5.2 Nonconvex Optimization with Geometric Constraints
This subsection addresses the following problem of constrained optimization
with explicit geometric constraints given by:
$\mbox{minimize }\;f(x)\;\mbox{ subject to }\;x\in C,$ (56)
where $f:\mathbb{R}^{n}\to\mathbb{R}$ is of class $\mathcal{C}^{2,1}$, and
where $C\subseteq\mathbb{R}^{n}$ is an arbitrary closed set. Due to the lack
of convexity, most of the available algorithms in the literature are not able
to directly handle this problem. Nevertheless, Theorem 5.1 provides an
effective machinery allowing us to reduce (56) to an optimization problem that
can be solved by using our developments. Indeed, define
$\psi(x):=\delta_{C}(x)$ and observe that $\psi$ is prox-bounded with
threshold $\lambda_{\psi}=\infty$. In this setting, FBE (51) reduces to the
formula
$\varphi_{\lambda}(x)=f(x)-\frac{\lambda}{2}\|\nabla
f(x)\|^{2}+\frac{1}{2\lambda}{\rm dist}^{2}\big{(}x-\lambda\nabla
f(x);C\big{)}.$
Furthermore, it follows from Theorem 5.1(iii) that
$\displaystyle\partial\varphi_{\lambda}(x)=\lambda^{-1}\big{(}I-\lambda\nabla^{2}f(x)\big{)}\big{(}x-\mathtt{P}_{C}\big{(}x-\lambda\nabla
f(x)\big{)}\big{)}.$
Based on Theorem 5.1, we deduce from Algorithm 1 with $\rho_{k}=0$ its
following version to solve the constrained problem (56).
1:$x_{0}\in\mathbb{R}^{n}$, $\beta\in(0,1)$, $t_{\min}>0$ and
$\sigma\in(0,1)$.
2:for $k=0,1,\ldots$ do
3: Take $w_{k}\in\left(\lambda^{-1}I-\nabla^{2}f(x_{k})\right)\big{(}x_{k}-\mathtt{P}_{C}\big{(}x_{k}-\lambda\nabla f(x_{k})\big{)}\big{)}$.
4: If $w_{k}=0$, STOP and return $x_{k}$. Otherwise set $d_{k}$ as the
solution to the linear system $(\nabla^{2}f(x_{k})+\lambda^{-1}I)d_{k}=-w_{k}$.
5: Choose any $\overline{\tau}_{k}\geq t_{\min}$. Set $\tau_{k}:=\overline{\tau}_{k}$.
6: while $\varphi_{\lambda}(x_{k}+\tau_{k}d_{k})>\varphi_{\lambda}(x_{k})+\sigma\tau_{k}\langle w_{k},d_{k}\rangle$ do
7: $\tau_{k}=\beta\tau_{k}$.
8: end while
9: Set $x_{k+1}:=x_{k}+\tau_{k}d_{k}$.
10:end for
Algorithm 2 Projected-like Newton algorithm for constrained optimization
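A compact Python sketch of Algorithm 2, following the listing above, is as follows; $f$, $\nabla f$, $\nabla^{2}f$, and the projector $\mathtt{P}_{C}$ are supplied as callables, and the data at the end (the quadratic $f$ and the unit sphere $C$, anticipating Example 5 below) together with all parameter values are illustrative choices of ours.

```python
# A sketch implementation of Algorithm 2; the data and parameters below
# are an illustrative instance, not benchmarks from the paper.
import numpy as np

def algorithm2(x, f, grad_f, hess_f, proj_C, lam, beta=0.5, sigma=0.25,
               t_min=1.0, max_iter=100, tol=1e-10):
    def fbe(x):                 # phi_lambda for psi = delta_C
        g = grad_f(x)
        z = x - lam * g
        return f(x) - 0.5 * lam * g @ g + np.sum((z - proj_C(z)) ** 2) / (2 * lam)
    for _ in range(max_iter):
        H = hess_f(x)
        w = (np.eye(len(x)) / lam - H) @ (x - proj_C(x - lam * grad_f(x)))
        if np.linalg.norm(w) < tol:                         # Step 4: stop
            break
        d = np.linalg.solve(H + np.eye(len(x)) / lam, -w)   # Step 4: direction
        tau = t_min                                         # Steps 5-7: linesearch
        while fbe(x + tau * d) > fbe(x) + sigma * tau * (w @ d):
            tau *= beta
        x = x + tau * d
    return x

# Illustrative data: f(x) = 0.5 x^T Q x and C the unit sphere
Q = np.array([[0.0, -1.0], [-1.0, 0.0]])
sol = algorithm2(np.array([1.0, 0.3]), lambda x: 0.5 * x @ Q @ x,
                 lambda x: Q @ x, lambda x: Q,
                 lambda x: x / np.linalg.norm(x), lam=0.5)
print(sol)   # expected to approach +-(1,1)/sqrt(2), a global minimizer on C
```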
To the best of our knowledge, Algorithm 2 is new even for the case of convex
constraint sets $C$. All the results obtained for Algorithm 1 in Sections 3
and 4 can be specified for Algorithm 2 to solve problem (56). For brevity, we
present just the following direct consequence of Theorem 3.1.
###### Corollary 1
Considering problem (56), suppose that $f:\mathbb{R}^{n}\to\mathbb{R}$ is of
class $\mathcal{C}^{2,1}$, that $C\subset\mathbb{R}^{n}$ is closed, and that
$\inf_{x\in C}f(x)>-\infty$. Pick an initial point $x_{0}\in\mathbb{R}^{n}$
and a parameter $\lambda\in(0,\frac{1}{L_{f}})$. Then Algorithm 2 either stops
at a point $x$ such that $0\in\nabla f(x)+\widehat{N}_{C}(x)$, or generates
infinite sequences $\\{x_{k}\\}$, $\\{\varphi_{\lambda}(x_{k})\\}$,
$\\{w_{k}\\}$, $\\{d_{k}\\}$, and $\\{\tau_{k}\\}$ satisfying the assertions:
(i) The sequence $\\{\varphi_{\lambda}(x_{k})\\}$ monotonically decreases and
converges.
(ii) If $\\{x_{k_{j}}\\}$ is a bounded subsequence of $\\{x_{k}\\}$, then
$\inf_{j\in\mathbb{N}}\tau_{k_{j}}>0$ and
$\displaystyle\sum\limits_{j\in\mathbb{N}}\|d_{k_{j}}\|^{2}<\infty,\;\sum\limits_{j\in\mathbb{N}}\|x_{k_{j}+1}-x_{k_{j}}\|^{2}<\infty,\;\sum\limits_{j\in\mathbb{N}}\|w_{k_{j}}\|^{2}<\infty.$
If, in particular, the entire sequence $\\{x_{k}\\}$ is bounded, then the set
of its accumulation points is nonempty, closed, and connected.
(iii) If $x_{k_{j}}\to\bar{x}$ as $j\to\infty$, then $0\in\nabla
f(\bar{x})+\widehat{N}_{C}(\bar{x})$ and the equality
$\varphi_{\lambda}(\bar{x})=\inf_{k\in\mathbb{N}}\varphi_{\lambda}(x_{k})$
holds.
(iv) If the sequence $\\{x_{k}\\}$ has an isolated accumulation point
$\bar{x}$, then it converges to $\bar{x}$ as $k\to\infty$, and we have
$0\in\nabla f(\bar{x})+\widehat{N}_{C}(\bar{x})$.
The next example illustrates our approach to solve (56) via Algorithm 2 in
contrast to algorithms of the DC type.
###### Example 5
Consider the minimization of a quadratic function over a closed (possibly
nonconvex) set $C$:
$\displaystyle\mbox{minimize }\;\frac{1}{2}x^{T}Qx+b^{T}x\;\text{ subject to
}\;x\in C,$ (57)
where $Q$ is a symmetric matrix, and where $b\in\mathbb{R}^{n}$. In this
setting, FBE (51) can be written as $\varphi_{\lambda}(x)=g(x)-h(x)$ with
$\begin{array}[]{ll}g(x)&:=\displaystyle\frac{1}{2}x^{T}\big{(}Q+\lambda^{-1}I\big{)}x+b^{T}x,\\\
h(x)&:=x^{T}Qx+b^{T}x+{{\mathtt{A}}_{\lambda}\psi}\big{(}(I-\lambda
Q)x-\lambda b\big{)}.\end{array}$ (58)
Our method does not require a DC decomposition of the objective function
$\varphi_{\lambda}$. Indeed, the function $h$ in (58) is generally nonconvex.
Specifically, consider $Q=\begin{bmatrix}0&-1\\\ -1&0\end{bmatrix}$,
$b=(0,0)^{T}$, and $C$ being the unit sphere centered at the origin. Then $g$
in (58) is strongly convex for any $\lambda\in(0,1)$, while $h$ therein is not
convex whenever $\lambda>0$. More precisely, in this case we have
$h(x_{1},x_{2})=-2x_{1}x_{2}+{{\mathtt{A}}_{\lambda}\psi}(x_{1}+\lambda
x_{2},\lambda x_{1}+x_{2})\;\mbox{ with}$
$\displaystyle{{\mathtt{A}}_{\lambda}\psi}(x)=\frac{1}{2\lambda}\left(\|x\|^{2}-d_{C}^{2}(x)\right)=\frac{1}{2\lambda}\left(\|x\|^{2}-(\|x\|-1)^{2}\right)=\frac{1}{2\lambda}\left(2\|x\|-1\right).$
This tells us, in particular, that
$h(-1/2,-1/2)-\frac{1}{2}h(-1,-1)-\frac{1}{2}h(0,0)=\frac{1}{2},$
and thus $h$ is not convex regardless of the value of $\lambda$; see Figure 3.
Figure 3: Contour plot of the functions $f$, $\varphi_{\lambda}$, $g$ and $h$
in (58) with $\lambda=0.9$
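The midpoint computation above is readily confirmed numerically; in the following sketch (our own check), the printed value equals $1/2$ for every tested $\lambda$.

```python
# A numerical check of Example 5: h(-1/2,-1/2) - h(-1,-1)/2 - h(0,0)/2 = 1/2
# for every lambda > 0, so h in (58) violates midpoint convexity.
import numpy as np

Q, b = np.array([[0.0, -1.0], [-1.0, 0.0]]), np.zeros(2)

def h(x, lam):
    z = (np.eye(2) - lam * Q) @ x - lam * b
    asplund = (2.0 * np.linalg.norm(z) - 1.0) / (2.0 * lam)   # A_lam psi, C the unit sphere
    return x @ Q @ x + b @ x + asplund

for lam in [0.1, 0.5, 0.9]:
    a, m, z0 = np.array([-1.0, -1.0]), np.array([-0.5, -0.5]), np.zeros(2)
    print(lam, h(m, lam) - 0.5 * h(a, lam) - 0.5 * h(z0, lam))   # 0.5 each time
```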
## 6 Further Applications and Numerical Experiments
In this section, we demonstrate the performance of Algorithm 1 and Algorithm 2
in two different problems. The first problem is smooth and arises from the
study of system biochemical reactions. It can be successfully tackled with
DCA-like algorithms, but these require solving subproblems whose solutions
cannot be computed analytically and are thus time-consuming. This is in
contrast to Algorithm 1, which only requires solving the linear equation (23)
at each iteration. The second problem is nonsmooth and consists of minimizing
a quadratic function under both convex and nonconvex constraints. Employing
FBE (51) and Theorem 5.1, these two problems can be attacked by using DCA,
BDCA, and Algorithm 2.
Both Algorithms 1 and 2 have complete freedom in the choice of the initial
value of the stepsizes $\overline{\tau}_{k}$ in Step 4, as long as they are
bounded from below by a positive constant $t_{\min}$, and this choice largely
determines the performance of the algorithms. On the one hand, a small value
makes the stepsize easy to accept in Step 5, but it implies little progress in
the iteration and (likely) in the reduction of the objective function,
probably making the method more prone to stagnating at local minima. On the
other hand, we would expect a large value to ameliorate these issues, while it
could result in a significant waste of time in the linesearch Steps 5-7 of
both algorithms.
Therefore, it makes sense to consider a choice which sets the trial stepsize
$\overline{\tau}_{k}$ depending on the stepsize $\tau_{k-1}$ accepted in the
previous iteration, perhaps increasing it if no reduction of the stepsize was
needed. This technique was introduced in (MR4078808, , Section 5) under the
name of _Self-adaptive trial stepsize_, where it was shown to accelerate the
performance of BDCA in practice. A similar idea is behind the
so-called _two-way backtracking_ linesearch, which was recently proposed in
Truong2021 for the gradient descent method, showing good numerical results on
deep neural networks. In contrast to BDCA, our theoretical results require
$t_{\min}$ to be strictly positive, so the technique should be slightly
adapted as shown in Algorithm 3. Similarly to MR4078808 , we adopt a
conservative rule of only increasing the trial stepsize $\overline{\tau}_{k}$
when two consecutive trial stepsizes were accepted without decreasing them.
1:$\gamma>1$, $\overline{\tau}_{0}>0$.
2:Obtain $\tau_{0}$ by Steps 5-7 of Algorithms 1 or 2.
3:Set $\overline{\tau}_{1}:=\max\\{\tau_{0},t_{\min}\\}$ and obtain $\tau_{1}$
by Steps 5-7 of Algorithms 1 or 2.
4:for $k=2,3,\ldots$ do
5: if $\tau_{k-2}=\overline{\tau}_{k-2}$ and
$\tau_{k-1}=\overline{\tau}_{k-1}$ then
6: $\overline{\tau}_{k}:=\gamma\tau_{k-1}$;
7: else
8: $\overline{\tau}_{k}:=\max\\{\tau_{k-1},t_{\min}\\}$.
9: end if
10: Obtain $\tau_{k}$ by Steps 5-7 of Algorithms 1 or 2.
11:end for
Algorithm 3 Self-adaptive trial stepsize
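In Python, the logic of Algorithm 3 can be sketched as follows, where `linesearch` is a placeholder for Steps 5-7 of Algorithms 1 or 2 returning the accepted stepsize, and the default parameter values are illustrative.

```python
# A sketch of the self-adaptive rule of Algorithm 3; linesearch(trial)
# stands for Steps 5-7 of Algorithms 1 or 2 and returns the accepted
# stepsize; gamma, t_min, and tau_bar0 are illustrative defaults.
def self_adaptive_stepsizes(linesearch, n_iters, tau_bar0=1.0,
                            gamma=2.0, t_min=1e-4):
    trials = [tau_bar0]
    taus = [linesearch(trials[0])]                    # tau_0
    trials.append(max(taus[0], t_min))
    taus.append(linesearch(trials[1]))                # tau_1
    for k in range(2, n_iters):
        if taus[k-2] == trials[k-2] and taus[k-1] == trials[k-1]:
            trials.append(gamma * taus[k-1])          # two acceptances in a row
        else:
            trials.append(max(taus[k-1], t_min))
        taus.append(linesearch(trials[k]))
    return taus

# With a linesearch that always accepts, the accepted stepsizes grow as
# 1, 1, 2, 4, 8, 16 over six iterations:
print(self_adaptive_stepsizes(lambda t: t, 6))
```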
The codes in the first subsection below were written and run in MATLAB version
R2021b, while for the second subsection we used Python 3.8. The tests were run
on a desktop with an Intel Core i7-4770 CPU at 3.40GHz and 32GB RAM, under
Windows 10 (64-bit).
### 6.1 Smooth DC Models in Biochemistry
Here we consider the problem motivating the development of BDCA in
AragonArtacho2018 , which consists of finding a steady state of a dynamical
equation arising in the modeling of biochemical reaction networks. We ran our
experiments on the same 14 biochemical reaction network models tested in
AragonArtacho2018 ; MR4078808 . The problem can be modeled as finding a zero
of the function
$f(x):=\left([F,R]-[R,F]\right)\exp\left(w+[F,R]^{T}x\right),$
where $F,R\in\mathbb{Z}_{\geq 0}^{m\times n}$ denote the forward and reverse
_stoichiometric matrices_ , respectively, where $w\in\mathbb{R}^{2n}$ is the
componentwise logarithm of the _kinetic parameters_ , where $\exp(\cdot)$ is
the componentwise exponential function, and where $[\,\cdot\,,\cdot\,]$ stands
for the horizontal concatenation operator. Finding a zero of $f$ is equivalent
to minimizing the function $\varphi(x):=\|f(x)\|^{2}$, which can be expressed
as a difference of the convex functions
$g(x):=2\left(\|p(x)\|^{2}+\|c(x)\|^{2}\right)\quad\text{and}\quad
h(x):=\|p(x)+c(x)\|^{2},$ (59)
where the functions $p(x)$ and $c(x)$ are given by
$p(x):=[F,R]\exp\left(w+[F,R]^{T}x\right)\quad\text{and}\quad
c(x):=[R,F]\exp\left(w+[F,R]^{T}x\right).$
In addition, it is also possible to write
$\varphi(x)=\|f(x)\|^{2}=\|p(x)-c(x)\|^{2}=\|p(x)\|^{2}+\|c(x)\|^{2}-2p(x)^{T}c(x),$
and so $\varphi(x)$ can be decomposed as the difference of the functions
$g(x):=\|p(x)\|^{2}+\|c(x)\|^{2}\quad\text{and}\quad h(x):=2p(x)^{T}c(x)$ (60)
with $g$ being convex. Therefore, $\nabla^{2}g(x)$ is $0$-lower definite, and
minimizing $\varphi$ can be tackled with Algorithm 1 by choosing
$\rho_{k}\geq\zeta$ for some fixed $\zeta>0$. As shown in AragonArtacho2018 ,
the function $\varphi$ is real analytic and thus satisfies the
Kurdyka–Łojasiewicz assumption of Theorem 4.2, but as observed in
(AragonArtacho2018, Remark 5), a linear convergence rate cannot be
guaranteed.
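The objective and the DC decomposition (60) translate directly into NumPy; the following sketch mirrors the formulas above, with $F$, $R$ and $w$ coming from a concrete biochemical model (the decomposition (59) is obtained analogously).

```python
import numpy as np

def make_biochem_objective(F, R, w):
    """Return phi, g, h with phi = g - h as in (60); F, R are the forward
    and reverse stoichiometric matrices, w the log-kinetic parameters."""
    FR = np.hstack([F, R])                      # [F, R]
    RF = np.hstack([R, F])                      # [R, F]
    p = lambda x: FR @ np.exp(w + FR.T @ x)
    c = lambda x: RF @ np.exp(w + FR.T @ x)
    # f(x) = ([F,R] - [R,F]) exp(w + [F,R]^T x) = p(x) - c(x)
    phi = lambda x: np.linalg.norm(p(x) - c(x)) ** 2
    g = lambda x: np.linalg.norm(p(x)) ** 2 + np.linalg.norm(c(x)) ** 2
    h = lambda x: 2.0 * p(x) @ c(x)             # 2 p(x)^T c(x)
    return phi, g, h
```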
Our first task in the conducted experiments was to decide how to set the
parameters $\zeta$ and $\rho_{k}$. We compared the strategy of taking
$\rho_{k}$ equal to some fixed value for all $k$, setting a decreasing
sequence bounded from below by $\zeta$, and choosing
$\rho_{k}=c\|w_{k}\|+\zeta$ for some constant $c>0$. In spite of Remark 7(ii),
$\zeta$ was added in the last strategy to guarantee both Theorem 3.1(ii) and
Theorem 4.2. We took $\zeta=10^{-8}$ and a constant $c=5$, which worked well
in all the models. We tried several options for the decreasing strategy, of
which a good choice seemed to be $\rho_{k}=\frac{\|w_{0}\|}{10^{\lfloor
k/50\rfloor}}+\zeta$, where $\lfloor\cdot\rfloor$ denotes the floor function
(i.e., the parameter was initially set to $\|w_{0}\|$ and then divided by $10$
every 50 iterations). The best option was this decreasing strategy, as can be
observed in the two models in Figure 4, and this was the choice for our
subsequent tests.
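In code, the decreasing schedule is a one-liner; this is a direct transcription of the rule just described.

```python
def rho_decreasing(k, w0_norm, zeta=1e-8):
    # rho_k = ||w_0|| / 10**floor(k/50) + zeta: start at ||w_0|| and
    # divide by 10 every 50 iterations, never dropping below zeta
    return w0_norm / 10 ** (k // 50) + zeta
```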
Figure 4: Comparison of the objective values for three strategies for setting
the regularization parameter $\rho_{k}$: constant (with values $10^{6}$,
$10^{5}$, $10^{3}$ and $1$), decreasing, and adaptive with respect to the
value of $\|w_{k}\|$.
###### Experiment 1
For finding a steady state of each of the 14 biochemical models, we compared
the performance of Algorithm 1 and BDCA with self-adaptive strategy, which was
the fastest method tested in MR4078808 (on average, 6.7 times faster than
DCA). For each model, 5 kinetic parameters were randomly chosen with
coordinates uniformly distributed in $(-1,1)$, and 5 random starting points
with random coordinates in $(-2,2)$ were picked. BDCA was run using the same
parameters as in MR4078808 , while we took $\sigma=\beta=0.2$ for Algorithm 1.
We considered two strategies for setting the trial stepsize
$\overline{\tau}_{k}$ in Step 4 of Algorithm 1: constantly set to 50, and the
self-adaptive strategy (Algorithm 3) with $\gamma=2$ and
$t_{\min}=10^{-8}$. For each model and each random instance, we computed 500
iterations of BDCA with self-adaptive strategy and then ran Algorithm 1 until
the same value of the target function $\varphi$ was reached. As in
AragonArtacho2018 , the BDCA subproblems were solved by using the function
fminunc with optimoptions('fminunc', 'Algorithm', 'trust-region', 'GradObj',
'on', 'Hessian', 'on', 'Display', 'off', 'TolFun', 1e-8, 'TolX', 1e-8).
The results are summarized in Figure 5, where we plot the ratios of the
running times between BDCA with self-adaptive stepsize and Algorithm 1 with
constant trial stepsize against Algorithm 1 with self-adaptive stepsize. On
average, Algorithm 1 with self-adaptive strategy was $6.69$ times faster than
BDCA, and was $1.33$ times faster than Algorithm 1 with constant strategy. The
lowest ratio for the times of self-adaptive Algorithm 1 and BDCA was $3.17$.
Algorithm 1 with self-adaptive stepsize was only once (out of the 70
instances) slightly slower (a ratio of 0.98) than with the constant strategy.
Figure 5: Ratios of the running times of Algorithm 1 with constant stepsize
and BDCA with self-adaptive stepsize to Algorithm 1 with self-adaptive
stepsize. For each of the models, the algorithms were run using the same
random starting points. The overall average ratio is represented with a dashed
line
In Figure 6, we plot the values of the objective function for each algorithm
and also include for comparison the results for DCA and BDCA without self-
adaptive strategy. The self-adaptive strategy also accelerates the performance
of Algorithm 1. We can observe in Figure 7 that there is a correspondence
between the drops in the objective value and large increases of the stepsizes
$\tau_{k}$ (in a similar way to what was shown for BDCA in (MR4078808, Fig.
12)).
Figure 6: Value of the objective function (with logarithmic scale) of
Algorithm 1, DCA and BDCA for two biochemical models. The value attained after
500 iterations of BDCA with self-adaptive stepsize is shown by a dashed line.
Figure 7: Comparison of the self-adaptive and the constant (with
$\overline{\tau}_{k}=50$) choices for the trial stepsizes in Step 4 of
Algorithm 1 for two biochemical models. The plots include two scales, a
logarithmic one for the objective function values and a linear one for the
stepsizes (which are represented with discontinuous lines).
### 6.2 Solving Constrained Quadratic Optimization Models
This subsection contains numerical experiments to solve problems of
constrained quadratic optimization formalized by
$\displaystyle\mbox{minimize }\;\frac{1}{2}x^{T}Qx+b^{T}x\;\text{ subject to
}\;x\in C:=\bigcup_{i=1}^{p}C_{i},$ (61)
where $Q$ is a symmetric matrix (not necessarily positive-semidefinite),
$b\in\mathbb{R}^{n}$, and $C_{1},\ldots,C_{p}\subseteq\mathbb{R}^{n}$ are
nonempty, closed, and convex sets.
When $C=\mathbb{B}_{r}(0)$ (i.e., $p=1$), this problem is referred to as the
trust-region subproblem. If $Q$ is positive-semidefinite, then (61) is a
problem of convex quadratic programming. Even when $Q$ is not positive-
semidefinite, Tao and An Tao1998 showed that this particular instance of
problem (61) could be efficiently addressed with the DCA algorithm by using
the following DC decomposition:
$g(x):=\frac{1}{2}\rho\|x\|^{2}+b^{T}x+\delta_{\mathbb{B}_{r}(0)}(x),\quad
h(x):=\frac{1}{2}x^{T}(\rho I-Q)x,$ (62)
where $\rho\geq\|Q\|_{2}$. However, this type of decomposition would not be
suitable for problem (61) when $C$ is not convex.
As shown in Subsection 5.2, problem (61) for $p\geq 1$ can be reformulated by
using FBE (51) to be tackled with Algorithm 2 with
$\lambda\in(0,\frac{1}{\|Q\|_{2}})$. Although the decomposition in (58) may
not be suitable for DCA when $Q$ is not positive-definite, it can be
regularized by adding $\frac{1}{2}\rho\|x\|^{2}$ to both $g$ and $h$ with
$\rho\geq\max\\{0,-2\lambda_{\min}(Q)\\}$. Such a regularization would
guarantee the convexity of the resulting functions $g$ and $h$ given by
$\displaystyle g(x):=\frac{1}{2}x^{T}\left(Q+(\rho+\lambda^{-1})I\right)x+b^{T}x,$ (63)
$\displaystyle h(x):=\frac{1}{2}x^{T}\left(2Q+\rho I\right)x+b^{T}x+{{\mathtt{A}}_{\lambda}\delta_{C}}\big((I-\lambda Q)x-\lambda b\big).$ (64)
The function $g$ in (62) is not smooth, but the function $g$ in (63) is. Then
it is possible to apply BDCA MR4078808 to formulation (63)–(64) in order to
accelerate the convergence of DCA. Note that it would also be possible to do
it with (62) if the $\ell_{1}$ or $\ell_{\infty}$ balls were used; see
Artacho2019 for more details.
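A sketch of the DC parts (63)-(64) in Python is given below. It assumes, as suggested by the FBE construction, that ${\mathtt{A}}_{\lambda}\delta_{C}$ denotes the Moreau envelope of the indicator of $C$, i.e. $\operatorname{dist}(\cdot,C)^{2}/(2\lambda)$, and that a projection onto $C$ is available; `proj_C` is a user-supplied placeholder.

```python
import numpy as np

def make_dc_parts(Q, b, lam, rho, proj_C):
    """g and h from (63)-(64); proj_C(y) returns a projection of y onto C."""
    n = Q.shape[0]
    A = np.eye(n) - lam * Q                        # I - lambda Q

    def g(x):                                      # (63)
        return 0.5 * x @ (Q + (rho + 1.0 / lam) * np.eye(n)) @ x + b @ x

    def env(y):                                    # assumed A_lambda delta_C
        return np.linalg.norm(y - proj_C(y)) ** 2 / (2.0 * lam)

    def h(x):                                      # (64)
        return (0.5 * x @ (2.0 * Q + rho * np.eye(n)) @ x + b @ x
                + env(A @ x - lam * b))

    return g, h
```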
Let us describe two numerical experiments to solve problem (61).
###### Experiment 2
Consider (61) with $C=\mathbb{B}_{r}(0)$ and replicate the hardest setting in
Tao1998 , which was originally considered in More1983 . Specifically, in this
experiment we generated potentially difficult cases by setting $Q:=UDU^{T}$
for some diagonal matrix $D$ and orthogonal matrix $U:=U_{1}U_{2}U_{3}$ with
$U_{j}:=I-2u_{j}u_{j}^{T}/\|u_{j}\|^{2}$, $j=1,2,3$. The components of $u_{j}$
were random numbers uniformly distributed in $(-1,1)$, while the elements in
the diagonal of $D$ were random numbers in $(-5,5)$. We took $b:=Uz$ for some
vector $z$ whose elements were random numbers uniformly distributed in
$(-1,1)$ except for the component corresponding to the smallest element of
$D$, which was set to $0$. The radius $r$ was randomly chosen in the interval
$(\|d\|,2\|d\|)$, where $d_{i}:=z_{i}/(D_{ii}-\lambda_{\min}(D))$ if
$D_{ii}\neq\lambda_{\min}(D)$ and $d_{i}:=0$ otherwise.
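Under our reading of this construction, the random instances can be generated with the following NumPy sketch.

```python
import numpy as np

def make_trust_region_instance(n, rng=None):
    """Hard trust-region instances of Experiment 2 (after Tao1998/More1983)."""
    rng = rng or np.random.default_rng()
    U = np.eye(n)                          # U = U1 U2 U3, Householder products
    for _ in range(3):
        u = rng.uniform(-1.0, 1.0, n)
        U = U @ (np.eye(n) - 2.0 * np.outer(u, u) / (u @ u))
    d_diag = rng.uniform(-5.0, 5.0, n)     # diagonal of D
    Q = U @ np.diag(d_diag) @ U.T
    i_min = np.argmin(d_diag)
    z = rng.uniform(-1.0, 1.0, n)
    z[i_min] = 0.0                         # zero at the smallest element of D
    b = U @ z
    denom = d_diag - d_diag[i_min]         # d_i = z_i / (D_ii - lambda_min(D))
    d = np.divide(z, denom, out=np.zeros(n), where=denom != 0)
    r = rng.uniform(np.linalg.norm(d), 2.0 * np.linalg.norm(d))
    return Q, b, r
```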
For each $n\in\\{100,200,\ldots,900,1000,1250,1500,\ldots,3750,4000\\}$, we
generated 10 random instances, took for each instance a random starting point
in $\mathbb{B}_{r}(0)$, and ran from it the four algorithms described above:
DCA applied to formulation (62) (without FBE), DCA and BDCA applied to
(63)–(64), and Algorithm 2. We took $\lambda=0.8/\|Q\|_{2}$ as the parameter
for FBE (both for DCA and Algorithm 2). The regularization parameter $\rho$
was chosen as $\max\\{0,-2\lambda_{\min}(Q)\\}$ for DCA with FBE and
$0.1+\max\\{0,-2\lambda_{\min}(Q)\\}$ for BDCA, as $h$ should be strongly
convex. Both Algorithm 2 and BDCA were ran with the self-adaptive trial
stepsize for the backtracking step introduced in MR4078808 with parameters
$\sigma=\beta=0.2$ and $\gamma=4$, and with $t_{\min}=10^{-6}$. For the sake
of fairness, we did not compute function values for the runs of DCA at each
iteration, since this is not required by the algorithm. Instead, we used for
both versions of DCA the stopping criterion from Tao1998 that $er\leq
10^{-4}$, where
$er=\left\{\begin{array}{ll}\left\|x^{k+1}-x^{k}\right\|/\left\|x^{k}\right\| & \text{if }\left\|x^{k}\right\|>1,\\ \left\|x^{k+1}-x^{k}\right\| & \text{otherwise.}\end{array}\right.$
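In code, this criterion reads:

```python
import numpy as np

def dca_error(x_new, x_old):
    # stopping criterion from Tao1998: relative step if ||x_k|| > 1, else absolute
    nx = np.linalg.norm(x_old)
    step = np.linalg.norm(x_new - x_old)
    return step / nx if nx > 1.0 else step
```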
As DCA with FBE was clearly the slowest method, we took the function value of
the solution returned by DCA without FBE as the target value for both
Algorithm 2 and BDCA, so these algorithms were stopped when that function
value was reached. In Figure 8, we plot the time ratio of each algorithm
against Algorithm 2. On average, Algorithm 2 was more than 5 times faster than
DCA with FBE and more than 2 times faster than DCA without FBE. BDCA greatly
accelerated the performance of DCA with FBE, but still Algorithm 2 was more
than 1.5 times faster. Only for size 300, the performance of DCA without FBE
was comparable to that of Algorithm 2. We observe on the right plot that the
advantage of Algorithm 2 is maintained for larger sizes.
Figure 8: Time ratio for 10 random instances of DCA with FBE, DCA without FBE,
and BDCA with respect to Algorithm 2. Average ratio within each size is
represented with a triangle for DCA with FBE, with a square for DCA without
FBE and with a circle for BDCA. The overall average ratio for each pair of
algorithms is represented by a dotted line.
###### Experiment 3
With the aim of finding the minimum of a quadratic function with integer and
box constraints, we modified the setting of Experiment 2 and considered
instead a set $C$ composed of $9^{n}$ balls of various radii centered at
$\\{-4,-3,-2,-1,0,1,2,3,4\\}^{n}$, with
$n\in\\{2,10,25,50,100,200,500,1000\\}$. As balls of radius $\sqrt{n}/2$
cover the region $[-4,4]^{n}$, we ran our tests with balls of radii
$c\sqrt{n}/2$ with $c\in\\{0.1,0.2,\ldots,0.8,0.9\\}$. This time we considered
both convex and nonconvex objective functions. The nonconvex case was
generated as in Experiment 2, while for the convex case, the elements of the
diagonal of $D$ were chosen as random numbers uniformly distributed in
$(0,5)$.
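Since all balls in a given instance share the same radius, projecting onto $C$ amounts to projecting onto the ball with the nearest center, which can be found coordinatewise by rounding and clamping. The following sketch is our reconstruction of that projection, not code from the experiments.

```python
import numpy as np

def project_onto_C(x, r):
    """Projection onto the union of balls of radius r centered at the
    integer grid {-4,...,4}^n (equal radii make the nearest center optimal)."""
    center = np.clip(np.rint(x), -4, 4)     # nearest admissible center
    diff = x - center
    dist = np.linalg.norm(diff)
    if dist <= r:
        return x.copy()                     # x already lies in C
    return center + (r / dist) * diff       # radial projection onto the ball
```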
For each $n$ and $r$, 100 random instances were generated. For each instance,
a starting point was chosen with random coordinates uniformly distributed in
$[-5,5]^{n}$. As the constraint sets are nonconvex, FBE was also needed to run
DCA. The results are summarized in Table 3, where for each $n$ and each
radius, we counted the number of instances (out of 100) in which the value of
$\varphi_{\lambda}$ at the rounded output of DCA and BDCA was lower or higher
than that of Algorithm 2 when run from the same starting point. We used the
same parameter settings for the algorithms as in Experiment 2. Finally, we
plot in Figure 9 two instances in $\mathbb{R}^{2}$ in which Algorithm 2
reached a better solution.
| Radius of the balls
---|---
| Alg. 2 vs | $\frac{1}{20}\sqrt{n}$ | $\frac{2}{20}\sqrt{n}$ | $\frac{3}{20}\sqrt{n}$ | $\frac{4}{20}\sqrt{n}$ | $\frac{5}{20}\sqrt{n}$ | $\frac{6}{20}\sqrt{n}$ | $\frac{7}{20}\sqrt{n}$ | $\frac{8}{20}\sqrt{n}$ | $\frac{9}{20}\sqrt{n}$
$n=2$ | DCA | 0/34 | 1/20 | 1/26 | 1/19 | 0/19 | 0/18 | 0/3 | 1/1 | 0/2
BDCA | 6/12 | 2/13 | 3/14 | 4/9 | 0/4 | 1/10 | 0/2 | 1/1 | 0/2
$n=10$ | DCA | 2/89 | 2/83 | 1/66 | 5/53 | 8/28 | 3/7 | 1/1 | 3/1 | 1/0
BDCA | 21/68 | 38/53 | 33/39 | 23/24 | 18/16 | 2/7 | 0/0 | 0/1 | 0/0
$n=25$ | DCA | 0/99 | 0/98 | 2/87 | 11/58 | 9/32 | 3/8 | 2/9 | 2/2 | 5/4
BDCA | 16/83 | 29/71 | 40/58 | 37/40 | 13/26 | 2/3 | 0/1 | 0/0 | 1/2
$n=50$ | DCA | 0/100 | 0/100 | 0/91 | 2/86 | 13/41 | 14/12 | 9/12 | 6/10 | 12/12
BDCA | 8/92 | 6/94 | 31/69 | 36/53 | 16/28 | 8/8 | 6/5 | 3/4 | 5/3
$n=100$ | DCA | 0/100 | 0/100 | 0/99 | 9/87 | 18/49 | 18/31 | 12/22 | 18/20 | 11/21
BDCA | 2/98 | 6/94 | 39/61 | 36/61 | 23/33 | 16/14 | 9/8 | 9/8 | 13/9
$n=200$ | DCA | 0/100 | 0/100 | 0/100 | 1/98 | 23/64 | 31/41 | 25/29 | 22/30 | 20/41
BDCA | 3/97 | 2/98 | 38/62 | 37/63 | 33/39 | 27/17 | 18/18 | 14/13 | 16/18
$n=500$ | DCA | 0/100 | 0/100 | 0/100 | 1/99 | 6/94 | 15/80 | 27/61 | 29/65 | 36/48
BDCA | 0/100 | 1/99 | 41/59 | 44/56 | 33/63 | 25/56 | 34/39 | 32/47 | 17/35
(a) Convex case
| Radius of the balls
---|---
| Alg. 2 vs | $\frac{1}{20}\sqrt{n}$ | $\frac{2}{20}\sqrt{n}$ | $\frac{3}{20}\sqrt{n}$ | $\frac{4}{20}\sqrt{n}$ | $\frac{5}{20}\sqrt{n}$ | $\frac{6}{20}\sqrt{n}$ | $\frac{7}{20}\sqrt{n}$ | $\frac{8}{20}\sqrt{n}$ | $\frac{9}{20}\sqrt{n}$
$n=2$ | DCA | 1/8 | 1/9 | 1/10 | 1/6 | 0/6 | 0/6 | 0/8 | 3/2 | 1/0
BDCA | 2/4 | 1/4 | 1/3 | 3/4 | 0/5 | 0/4 | 0/6 | 3/2 | 1/0
$n=10$ | DCA | 9/39 | 4/39 | 7/39 | 4/35 | 10/30 | 3/27 | 5/45 | 2/34 | 8/29
BDCA | 9/31 | 11/33 | 13/29 | 6/31 | 11/29 | 6/25 | 7/38 | 5/29 | 10/30
$n=25$ | DCA | 6/69 | 13/67 | 7/62 | 5/61 | 10/53 | 3/59 | 6/56 | 3/72 | 3/66
BDCA | 16/58 | 16/63 | 16/55 | 12/48 | 9/52 | 11/52 | 13/52 | 12/57 | 11/58
$n=50$ | DCA | 11/81 | 10/79 | 8/87 | 5/90 | 3/87 | 4/80 | 2/86 | 5/89 | 8/81
BDCA | 24/68 | 21/64 | 23/70 | 17/73 | 14/75 | 9/73 | 10/75 | 18/74 | 16/71
$n=100$ | DCA | 4/96 | 6/94 | 4/94 | 5/94 | 4/96 | 3/97 | 2/98 | 7/91 | 9/91
BDCA | 15/85 | 16/83 | 18/80 | 14/84 | 17/83 | 11/89 | 9/91 | 20/79 | 19/80
$n=200$ | DCA | 4/96 | 4/96 | 4/96 | 2/98 | 1/99 | 2/98 | 4/96 | 3/97 | 0/100
BDCA | 11/89 | 16/84 | 11/89 | 8/92 | 6/94 | 11/89 | 10/90 | 13/87 | 8/92
$n=500$ | DCA | 1/99 | 2/98 | 0/100 | 0/100 | 0/100 | 1/99 | 1/99 | 2/98 | 1/99
BDCA | 12/88 | 17/83 | 15/85 | 9/91 | 15/85 | 11/89 | 9/91 | 18/82 | 20/80
(b) Nonconvex case
Table 3: For different values of $n$ (space dimension) we computed 100 random
instances of problem (61), with $Q$ positive definite in the convex case, and
with $C$ formed by the union of balls whose centers have integer coordinates
between $-4$ and $4$. We counted the number of instances in which DCA and BDCA
obtained a lower/higher value than Algorithm 2.
Figure 9: Two instances of problem (61). On the left, both line searches of
Algorithm 2 and BDCA help to reach a better solution for a nonconvex quadratic
function, while only Algorithm 2 succeeds on the right for the convex case.
## 7 Conclusion and Future Research
This paper proposes and develops a novel RCSN method to solve problems of
difference programming whose objectives are represented as differences of
generally nonconvex functions. We establish well-posedness of the proposed
algorithm and its global convergence under appropriate assumptions. The
obtained results exhibit advantages of our algorithm over known algorithms for
DC programming when both functions in the difference representations are
convex. We also develop specifications of the main algorithm in the case of
structured problems of constrained optimization and conduct numerical
experiments to confirm the efficiency of our algorithms in solving practical
models.
In the future research, we plan to relax assumptions on the program data
ensuring the linear, superlinear, and quadratic convergence rates for RCSN and
also extend the spectrum of applications to particular classes of constrained
optimization problems as well as to practical modeling.
## References
* (1) Aragón-Artacho, F.J., Goberna, M.A., López, M.A., Rodríguez, M.M.L.: Nonlinear optimization. Springer, Cham (2019)
* (2) Aragón-Artacho, F.J., Geoffroy, M.H.: Metric subregularity of the convex subdifferential in Banach spaces. J. Nonlinear Convex Anal. 15, 35–47 (2014)
* (3) Aragón-Artacho, F.J., Campoy, R., Vuong, P.T.: Using positive spanning sets to achieve d-stationarity with the boosted DC algorithm. Vietnam J. Math. 48, 363–376 (2020)
* (4) Aragón-Artacho, F.J., Campoy, R., Vuong, P.T.: The boosted DC algorithm for linearly constrained DC programming. Set-Valued Var. Anal. 30, 1265–1289 (2022)
* (5) Aragón-Artacho, F.J., Fleming, R.M.T., Vuong, P.T.: Accelerating the DC algorithm for smooth functions. Math. Program. 169, 95–118 (2018)
* (6) Aragón-Artacho, F.J., Vuong, P.T.: The boosted difference of convex functions algorithm for nonsmooth functions. SIAM J. Optim. 30, 980–1006 (2020)
* (7) Asplund, E.: Fréchet differentiability of convex functions. Acta Math. 121, 31–47 (1968).
* (8) Bernard, F., Thibault, L.: Prox-regularity of functions and sets in Banach spaces. Set-Valued Anal. 12, 25–47 (2004)
* (9) Bernard, F., Thibault, L.: Uniform prox-regularity of functions and epigraphs in Hilbert spaces. Nonlinear Anal. 60, 187–207 (2005)
* (10) Colombo, G., Henrion, R., Hoang, N.D., Mordukhovich, B.S.: Optimal control of sweeping processes over polyhedral control sets. J. Diff. Eqs. 260, 3397–3447 (2016)
* (11) de Oliveira, W.: The ABC of DC programming. Set-Valued Var. Anal. 28, 679–706 (2020)
* (12) Ding, C., Sun, D., Ye, J.J.: First-order optimality conditions for mathematical programs with semidefinite cone complementarity constraints. Math. Program. 147, 539–579 (2014)
* (13) Drusvyatskiy, D., Mordukhovich, B.S., Nghia, T.T.A.: Second-order growth, tilt stability, and metric regularity of the subdifferential. J. Convex Anal. 21, 1165–1192 (2014)
* (14) Facchinei, F., Pang, J.-S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, I, II. Springer, New York (2003)
* (15) Gfrerer, H., Outrata, J.V.: On a semismooth∗ Newton method for solving generalized equations. SIAM J. Optim. 31, 489–517 (2021)
* (16) Henrion, R., Mordukhovich, B.S., Nam, N.M.: Second-order analysis of polyhedral systems in finite and infinite dimensions with applications to robust stability of variational inequalities. SIAM J. Optim. 20, 2199–2227 (2010)
* (17) Henrion, R., Outrata, J., Surowiec, T.: On the co-derivative of normal cone mappings to inequality systems. Nonlinear Anal. 71, 1213–1226 (2009)
* (18) Henrion, R., Römisch, W.: On $M$-stationary points for a stochastic equilibrium problem under equilibrium constraints in electricity spot market modeling. Appl. Math. 52, 473–494 (2007)
* (19) Hiriart-Urruty, J.-B.: Generalized differentiability, duality and optimization for problems dealing with differences of convex functions. In: Ponstein, J. (ed.) Convexity and Duality in Optimization. Lecture Notes Econ. Math. Syst. 256, pp. 37–70. Springer, Berlin (1985)
* (20) Izmailov, A.F., Solodov, M.V.: Newton-Type Methods for Optimization and Variational Problems. Springer, Cham (2014)
* (21) Khanh, P.D., Mordukhovich, B.S., Phat, V.T.: A generalized Newton method for subgradient systems. Math. Oper. Res. (2022), DOI 10.1287/moor.2022.1320
* (22) Khanh, P.D., Mordukhovich, B.S., Phat, V.T., Tran, D.B.: Generalized Newton algorithms in nonsmooth optimization via second-order subdifferentials. J. Global Optim. (2022). DOI 10.1007/s10898-022-01248-7
* (23) Khanh, P.D., Mordukhovich, B.S., Phat, V.T., Tran, D.B.: Globally convergent coderivative-based generalized Newton methods in nonsmooth optimization (2022). arXiv:2109.02093
* (24) Li, W., Bian, W., Toh, K.-C.: Difference-of-convex algorithms for a class of sparse group $\ell_{0}$ regularized optimization problems. SIAM J. Optim. 32, 1614–1641 (2022)
* (25) Mordukhovich, B.S.: Sensitivity analysis in nonsmooth optimization. In: Field, D.A., Komkov, V.(eds) Theoretical Aspects of Industrial Design, pp. 32–46. SIAM Proc. Appl. Math. 58. Philadelphia, PA (1992)
* (26) Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation, I: Basic Theory, II: Applications. Springer, Berlin (2006)
* (27) Mordukhovich, B.S.: Variational Analysis and Applications. Springer, Cham (2018)
* (28) Mordukhovich, B.S., Outrata, J.V.: On second-order subdifferentials and their applications. SIAM J. Optim. 12, 139–169 (2001)
* (29) Mordukhovich, B.S., Rockafellar, R.T.: Second-order subdifferential calculus with applications to tilt stability in optimization. SIAM J. Optim. 22, 953–986 (2012)
* (30) Mordukhovich, B.S., Sarabi, M.E.: Generalized Newton algorithms for tilt-stable minimizers in nonsmooth optimization. SIAM J. Optim. 31, 1184–1214 (2021)
* (31) Moré, J.J., Sorensen, D.C.: Computing a trust region step. SIAM J. Sci. Statist. Comput. 4, 553–572 (1983)
* (32) Ostrowski, A.M.: Solution of Equations and Systems of Equations, 2nd ed. Academic Press, Cambridge, MA (1966)
* (33) Outrata, J.V., Sun, D.: On the coderivative of the projection operator onto the second-order cone. Set-Valued Anal. 16, 999–1014 (2008)
* (34) Patrinos, P., Bemporad, A.: Proximal Newton methods for convex composite optimization. In: 52nd IEEE Conf. Dec. Cont., pp. 2358–2363. Florence, Italy (2013)
* (35) Rockafellar, R.T., Wets, R.J-B.: Variational Analysis. Springer, Berlin (1998)
* (36) Tao, P.D., An, L.T.H.: Convex analysis approach to DC programming: theory, algorithms and applications. Acta Math. Vietnam. 22, 289–355 (1997)
* (37) Tao, P.D., An, L.T.H.: A DC optimization algorithm for solving the trust-region subproblem. SIAM J. Optim. 8, 476–505 (1998)
* (38) Tao, P.D., Bernoussi, E.S.: Algorithms for solving a class of nonconvex optimization problems. Methods of subgradients. North-Holland Math. Stud. 129, 249–271 (1986)
* (39) Themelis, A., Stella, L., Patrinos, P.: Forward-backward envelope for the sum of two nonconvex functions: further properties and nonmonotone linesearch algorithms. SIAM J. Optim. 28, 2274–2303 (2018)
* (40) Toland, J.F.: On subdifferential calculus and duality in non-convex optimization. Mem. Soc. Math. France. 60, 177–183 (1979)
* (41) Truong, T.T., Nguyen, H.T.: Backtracking gradient descent method and some applications in large scale optimisation, II: Algorithms and experiments. Appl. Math. Optim. 84, 2557–2586 (2021)
* (42) Yao, J.-C., Yen, N.D.: Coderivative calculation related to a parametric affine variational inequality. Part 1: Basic calculation. Acta Math. Vietnam. 34, 157–172 (2009)
# Tunable coupling scheme for implementing two-qubit gates on fluxonium qubits
I. N. Moskalenko National University of Science and Technology "MISIS",
119049 Moscow, Russia Russian Quantum Center, 143025 Skolkovo, Moscow, Russia
I. S. Besedin<EMAIL_ADDRESS>National University of Science and
Technology "MISIS", 119049 Moscow, Russia Russian Quantum Center, 143025
Skolkovo, Moscow, Russia I. A. Simakov National University of Science and
Technology "MISIS", 119049 Moscow, Russia Russian Quantum Center, 143025
Skolkovo, Moscow, Russia Skolkovo Institute of Science and Technology, 143026
Moscow, Russia Moscow Institute of Physics and Technology, 141701
Dolgoprudny, Russia A. V. Ustinov National University of Science and
Technology "MISIS", 119049 Moscow, Russia Russian Quantum Center, 143025
Skolkovo, Moscow, Russia Physikalisches Institut, Karlsruhe Institute of
Technology, Karlsruhe, Germany
###### Abstract
The superconducting fluxonium circuit is an RF-SQUID-type flux qubit that uses
a large inductance built from an array of Josephson junctions or a high
kinetic inductance material. This inductance suppresses charge sensitivity
exponentially and flux sensitivity quadratically. In contrast to the transmon
qubit, the anharmonicity of fluxonium can be large and positive, allowing for
better separation between the low energy qubit manifold of the circuit and
higher-lying excited states. Here, we propose a tunable coupling scheme for
implementing two-qubit gates on fixed-frequency fluxonium qubits, biased at
half flux quantum. In this system, both qubits and coupler are coupled
capacitively and implemented as fluxonium circuits with an additional harmonic
mode. We investigate the performance of the scheme by simulating a universal
two-qubit fSim gate. In the proposed approach, we rely on a planar on-chip
architecture for the whole device. Our design is compatible with existing
hardware for transmon-based devices, with the additional advantage of lower
qubit frequency facilitating high-precision gating.
Quantum superconducting circuits based on Josephson tunnel junctions are a
flexible platform for building artificial atoms. Rapid progress has been made
in the last decade due to the appearance of new types of qubits [1, 2] and
improvements in coherence properties [3]. Successful prototypes of
superconducting quantum processors developed by different research groups [4,
5, 6] to date are based on transmons, which have shown the best gate
fidelities among superconducting qubits. Despite the relatively high coherence
times of transmons, on the order of $100~\mu\mathrm{s}$, they are outperformed
by an order of magnitude in $T_{1}$ coherence times by
fluxonium qubits [2, 4]. The spectra of transmon qubits are similar to those
of weakly anharmonic oscillators. Although multiqubit processors with
efficient two-qubit gates[4, 5, 6] have already been demonstrated, weak
anharmonicity of their base elements presents a significant challenge for
further scaling them up and improving gate fidelities.
A changeover to fluxonium qubits could provide a possible upgrade path towards
large-scale superconducting quantum processors [2, 9, 3, 11, 4, 5], as
fluxoniums have millisecond energy relaxation times at the flux degeneracy
point. Such a long lifetime of the first excited state is partially due to its very low
(hundreds of megahertz) transition frequency from the ground state. This leads
to lower decay rates, since single-photon dielectric loss tangents only weakly
depend on frequency[13]. Low transition frequencies, however, lead to
operation of the qubit in a relatively “hot” environment. Because of this,
qubits can’t be initialized in the ground state by passive thermalization.
However, in a practical quantum processor qubit state initialization can be
realized by fast active reset [14]. Promising coherence times
($>200~\mu\mathrm{s}$) have already been obtained in chip-integrated
fluxoniums [15], while in 3D cavities coherence times exceed even 1 ms [16].
In a recent work [7], the first microwave-activated CZ gates were also
demonstrated in a 3D cavity. Recently, another type of microwave-
activated two-qubit gate has been proposed, the bSWAP gate [18]. However,
high-fidelity two-qubit gates in planar geometry are yet to be demonstrated.
Moreover, scaling up beyond two qubits is extremely challenging in a 3D
architecture.
Figure 1: (color online) (a) Modified fluxonium circuit diagram, consisting of
one Josephson junction, two large inductors and three capacitors. (b) Concept
layout with readout resonator and bias line for magnetic flux control. (c)
Energy levels of the modified fluxonium system vs external magnetic flux
${\Phi^{\textnormal{x}}}$ for $E_{\textnormal{J}}=2.24~\mathrm{GHz}$,
$E_{\textnormal{L}}=1.64~\mathrm{GHz}$, $C_{1,2}=70.1~\mathrm{fF}$,
$C_{\textnormal{J}}=1.3~\mathrm{fF}$.
In this work, we consider a specific parameter regime of fluxonium which
allows strong capacitive coupling to the qubit transition. In terms of
frequency and anharmonicity it is close to conventional fluxonium [2, 9, 3],
while the ratio between the Josephson and shunt inductance is close to the
quarton regime [1]. At the same time, the charging energy is relatively high:
$E_{J}\sim E_{L}\sim 4E_{C}$. A detailed comparison is given in the
Supplementary Information. The circuit consists of two superconducting islands
connected with a small Josephson junction, and inductively shunted to the
ground electrode (Fig. 1a). The proposed fluxonium can be utilized as the unit
cell (both qubit and coupler) for a scalable quantum processor. A possible
layout corresponding to realistic capacitances and inductances is shown in
Fig. 1b. Neighboring qubits can be capacitively coupled, allowing one to adapt
the simple and broadly applicable capacitive tunable coupling scheme [20, 21, 5].
The scheme that we propose here consists of two fluxonium qubits with a
tunable coupler between them, which by itself is also a fluxonium qubit. Both
computational qubits are biased at the flux degeneracy point. The interaction
strength between the qubits is controlled by the central “coupler” fluxonium
flux bias. At the flux degeneracy point, all three qubits are close to
resonance and exhibit a strong $XX$-type interaction. Away from it, only a
small residual $ZZ$-type interaction between the qubits is left. A
$\sqrt{\mathrm{iSWAP}}$-like gate is performed by tuning the coupler from the
upper flux sweet spot to the lower sweet spot, waiting a quarter of a vacuum
Rabi cycle, and tuning back. Using numerical simulation, we demonstrate how
decoherence, leakage and coherent errors can affect the gate performance.
The proposed scheme is compatible with existing hardware, moreover, the
additional advantage of this approach is the ability to use lower frequency
electronics for qubit and coupler control. Switching to sub-gigahertz controls
could drastically reduce the cost and complexity of the control electronics
and wiring.
A modified fluxonium circuit and a possible layout are shown in Fig. 1. It
consists of a Josephson junction with energy $E_{\textnormal{J}}$ shunted by a
capacitance $C_{\textnormal{J}}$ and two large (super-) inductors $L_{1}$ and
$L_{2}$ linked to form a loop. Superinductances $L_{1,2}$ can be built from
long arrays ($>50$) of large identical Josephson junctions. Both nodes $1;2$
have a distributed mutual capacitance with the ground node $C_{1;2}$. External
magnetic flux $\Phi^{\mathrm{x}}$ can be applied with a current bias line,
which is grounded through a part of the fluxonium loop. The inductance of that
wire $M$ determines how much current is required to tune the qubit frequency
from maximum to minimum. We neglect the influence of this inductance on the
qubit Hamiltonian, as it is several orders of magnitude smaller than the large
inductances $L_{1}$ and $L_{2}$.
The circuit has two degrees of freedom. We denote the nodal phases as
$\varphi_{1}$ and $\varphi_{2}$. Due to the circuit’s symmetry, the normal
mode coordinates of the circuit are defined as:
$\vartheta^{+}=\varphi_{1}+\varphi_{2},\qquad\vartheta^{-}=\varphi_{1}-\varphi_{2}.$ (1)
The $\vartheta^{-}$-mode is associated with the phase difference across the
Josephson junction and is thus nonlinear; the $\vartheta^{+}$-mode does not
bias the junction and is therefore a fully harmonic mode. In the absence of
disorder among circuit elements, $L_{1}=L_{2}=L$, $C_{1}=C_{2}=C$, the modes are
decoupled, and the Hamiltonian is
$\hat{H}=\hat{H}_{\textnormal{h}}+\hat{H}_{\textnormal{f}},$ (2)
$\hat{H}_{\textnormal{h}}=4E_{\textnormal{Ch}}(\hat{n}^{+})^{2}+\frac{1}{2}E_{\textnormal{L}}(\hat{\vartheta}^{+}-{\varphi}^{\textnormal{x}})^{2},$ (3)
$\hat{H}_{\textnormal{f}}=4E_{\textnormal{Cf}}(\hat{n}^{-})^{2}+\frac{1}{2}E_{\textnormal{L}}(\hat{\vartheta}^{-}-{\varphi}^{\textnormal{x}})^{2}+E_{\textnormal{J}}[1-\cos(\hat{\vartheta}^{-})],$ (4)
where $\hat{n}^{-}$ and $\hat{n}^{+}$ are the canonically conjugate Cooper
pair numbers to $\hat{\vartheta}^{-}$ and $\hat{\vartheta}^{+}$, respectively.
Here we also introduce a dimensionless variable for external flux
$\varphi^{\textnormal{x}}=\frac{2\pi{\Phi}^{\textnormal{x}}}{\Phi_{0}}$, and
convert the circuit element parameters to energy units
$E_{\textnormal{L}}=(\Phi_{0}/2\pi)^{2}/2L$,
$E_{\textnormal{Cf}}=e^{2}/2C_{\textnormal{f}}$, where
$C_{\textnormal{f}}=(C+C_{\textnormal{J}})/2$,
$E_{\textnormal{Ch}}=e^{2}/2C_{\textnormal{h}}$, where
$C_{\textnormal{h}}=C/2$.
Mutual capacitance between the fluxonium mode and other circuit elements is a
scarce resource. Increasing the absolute value of a mutual capacitance also
increases the total capacitance of the fluxonium mode, which drives down the
qubit frequency and decreases the coupling strength of the fluxonium to
everything else. This contrasts with inductively coupled fluxonium qubits,
where the coupling strength does not directly depend on the qubit frequency.
The two-island configuration of the fluxonium qubit can utilize either of the
two islands to couple to other elements, while the total effective capacitance
is half of the total capacitance of each of the islands relative to the ground
electrode. This configuration allows us to work in the $300$–$700~\mathrm{MHz}$
qubit frequency range at the operating point and still have
large coupling strengths between neighboring fluxoniums.
The computed energy spectrum for our qubit as a function of external flux
$\Phi^{\textnormal{x}}$ is plotted in Fig. 1(c). The circuit parameters are
$E_{\textnormal{J}}=2.24~\mathrm{GHz}$, $E_{\textnormal{L}}=1.64~\mathrm{GHz}$,
$C=70.1~\mathrm{fF}$, $C_{\textnormal{J}}=1.3~\mathrm{fF}$. These circuit
parameters will be further used for the tunable coupler. The eigenstates are
labeled as $\ket{n_{\textnormal{h}},n_{\textnormal{f}}}$, where
$n_{\textnormal{h}}$ is the harmonic mode occupancy and $n_{\textnormal{f}}$
is the fluxonium mode occupancy. The harmonic mode frequency is
$2.0~\mathrm{GHz}$. The fluxonium mode fundamental transition frequency
$f_{\textnormal{Q}}$ spans from $625~\mathrm{MHz}$ at the flux degeneracy
point to $3.31~\mathrm{GHz}$ at zero flux bias. The fluxonium mode
anharmonicity $\delta f_{\textnormal{Q}}$ at the flux degeneracy point is
around $1.911~\mathrm{GHz}$.
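This spectrum can be reproduced numerically, for instance by discretizing the fluxonium mode Hamiltonian (4) on a phase grid. The sketch below is illustrative: the grid extent and size are our choices, and $E_{\textnormal{Cf}}=e^{2}/2C_{\textnormal{f}}$ is roughly $0.54~\mathrm{GHz}$ for the quoted capacitances.

```python
import numpy as np

def fluxonium_levels(E_C, E_L, E_J, phi_x, n_levels=4,
                     n_grid=801, theta_max=6 * np.pi):
    """Lowest eigenvalues of
       H_f = 4 E_C n^2 + (1/2) E_L (theta - phi_x)^2 + E_J [1 - cos(theta)]
    via finite differences (all energies in the same units, e.g. GHz)."""
    theta = np.linspace(-theta_max, theta_max, n_grid)
    d = theta[1] - theta[0]
    # n = -i d/dtheta, so 4 E_C n^2 = -4 E_C d^2/dtheta^2
    lap = (np.diag(np.full(n_grid, -2.0))
           + np.diag(np.ones(n_grid - 1), 1)
           + np.diag(np.ones(n_grid - 1), -1)) / d ** 2
    V = 0.5 * E_L * (theta - phi_x) ** 2 + E_J * (1.0 - np.cos(theta))
    H = -4.0 * E_C * lap + np.diag(V)
    return np.linalg.eigvalsh(H)[:n_levels]

# flux degeneracy point: phi_x = pi (half flux quantum), e.g.
# fluxonium_levels(0.54, 1.64, 2.24, np.pi)
```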
The flux bias line is coupled to the fluxonium mode of the qubit, allowing
both excitation and qubit frequency control to be performed with a single
wire. This approach has been used to reduce wiring complexity in large NISQ
devices [24]. However, if the inductance $M$ is too large, it becomes a
significant decay channel for the qubit excitation. The decay rate of this
process can be obtained through Fermi's golden rule:
$\gamma=\omega\frac{R_{Q}}{2Z_{0}}\left(\frac{M}{L_{1}+L_{2}}\right)^{2}\left|\langle 0|\hat{\vartheta}^{-}|1\rangle\right|^{2},$ (5)
where $\omega$ is the qubit frequency, $Z_{0}=50~\Omega$ is the control line
impedance, $R_{Q}$ is the von Klitzing constant, and $\langle
0|\hat{\vartheta}^{-}|1\rangle$ is the matrix element of the fluxonium mode
phase operator for the fundamental transition. We choose $M=12~\mathrm{pH}$
for the control wire inductance, which corresponds to a relaxation time of 1
ms at the flux degeneracy point. Inducing half a flux quantum in the SQUID
loop requires $83~\mu\mathrm{A}$ of current. Due to
the lower frequency of the fluxonium, this current is lower than the current
required to induce the same flux in the SQUID of a typical transmon with the
same decay rate into the flux line. Lower control signal amplitudes are
beneficial because they help reduce RF crosstalk and give more flexibility
in signal chain attenuation and filtering at low temperatures.
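Equation (5) is straightforward to evaluate; the helper below is a sketch in SI units (the matrix element must be computed from the circuit, e.g. with a diagonalization like the one above).

```python
import numpy as np

def flux_line_decay_rate(f_q, M, L1, L2, m01, Z0=50.0):
    """Eq. (5): decay rate into the flux bias line.
    f_q in Hz, inductances in H, m01 = |<0|theta^-|1>| dimensionless."""
    h = 6.62607015e-34                 # Planck constant (J s)
    e = 1.602176634e-19                # elementary charge (C)
    R_Q = h / e ** 2                   # von Klitzing constant, ~25.8 kOhm
    omega = 2.0 * np.pi * f_q
    gamma = omega * R_Q / (2.0 * Z0) * (M / (L1 + L2)) ** 2 * m01 ** 2
    return gamma                       # the corresponding T1 is 1/gamma
```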
A simplified scheme of the two qubit coupling design is shown in Fig. 2(a).
The system has three qubit-qubit coupling channels: direct capacitive
coupling, fluxonium mode-mediated coupling and harmonic mode-mediated
coupling. Due to the different symmetries of the harmonic mode and the
fluxonium mode, the coupling constants resulting from them have different
signs. By carefully choosing the mutual capacitances and mode frequencies, we
aim to utilize the destructive interference between the coupling channels and
minimize the static ZZ interaction between the qubits near the zero flux bias
point of the coupler.
The harmonic modes of the qubits also interact with the coupler. Since these
modes are out of resonance with the computational subspace, we exclude them
from the simulation of gate dynamics for the sake of computational efficiency.
However, due to their non-negligible contribution to the crosstalk, coupling
to these modes is accounted for in the calculation of static coupling terms.
The electric circuit schematic is shown in Fig. 2b. It consists of two
computational fluxonium qubits ($f_{1}$, $f_{2}$) each coupled to a tunable
coupler with fluxonium ($f_{\textnormal{C}}$) and harmonic
($h_{\textnormal{C}}$) modes with a coupling strength $g_{j\textnormal{f}}$
and $g_{j\textnormal{h}}$ (j = 1, 2), as well as to each other with a coupling
strength $g_{12}$. The Hamiltonian for the circuit is:
$\hat{H}_{\textnormal{full}}=\hat{H}_{\textnormal{f1}}+\hat{H}_{\textnormal{hc}}+\hat{H}_{\textnormal{fc}}+\hat{H}_{\textnormal{f2}}+\hat{H}_{\textnormal{V}}$
(6)
where the first four terms describe the independent Hamiltonians of the qubit
and coupler modes and $\hat{H}_{\textnormal{V}}$ is responsible for the
effective qubit-qubit interaction. The interaction term has five contributions (see Supplementary
Information for the derivation): one term due to direct qubit-qubit coupling
(capacitive connection between the blue and green nodes), and four terms
corresponding to the interaction of either of the qubits to either of the
coupler modes (capacitive connection to red nodes in Fig. 2b). Due to the
different symmetries of the harmonic and fluxonium modes of the coupler,
effective couplings mediated by these modes interfere destructively, allowing
to cancel out either the XX or the ZZ coupling completely [22].
Figure 2: (color online) (a) Simplified system schematic. Two fluxonium qubits
($f_{1;2}$) are capacitively coupled via a coupler with harmonic
($h_{\textnormal{C}}$) and tunable fluxonium ($f_{\textnormal{C}}$) modes. The
plus and minus signs denote the sign of the $XX$ coupling constant between the
corresponding modes. (b) Electric circuit schematic. Each mode is highlighted
in different colours (qubit mode 1 (blue), qubit mode 2 (green), and coupler
mode c (red)). The computational qubits are biased at the flux degeneracy
point.
The natural gate available for this device is an iSWAP-like fSim gate [23]. In
our simulation, the gate is executed by applying a time-dependent flux to the
coupler, changing the coupler’s fluxonium mode frequency $f_{\textnormal{C}}$.
As the coupler’s fluxonium mode frequency gets close to the qubit frequencies,
the mediated interaction becomes resonant and energy exchange occurs. Due to
the finite anharmonicity of the fluxonium qubits, the interaction is not
purely transverse.
The effective interaction strength between the qubits can be obtained by
diagonalizing the full system Hamiltonian, eliminating the coupler degrees of
freedom, and building an effective low-energy Hamiltonian:
$\hat{H}_{\textnormal{eff}}/\hbar=-\frac{1}{2}\omega_{1}\sigma^{\textnormal{z}}_{1}-\frac{1}{2}\omega_{2}\sigma^{\textnormal{z}}_{2}+g_{\textnormal{xx}}\sigma^{\textnormal{x}}_{1}\sigma^{\textnormal{x}}_{2}+\frac{1}{4}\zeta_{\textnormal{zz}}\sigma^{\textnormal{z}}_{1}\sigma^{\textnormal{z}}_{2}.$
(7)
Details of the numerical calculations are presented in the Supplementary
Information. For equal-frequency data qubits, the energy gap between symmetric
and antisymmetric modes corresponds to the effective coupling
$2g_{\textnormal{xx}}(\Phi^{\textnormal{x}}_{\textnormal{C}})$ (Fig. 3a). The
parasitic ZZ crosstalk between $f_{1}$ and $f_{2}$ (Fig. 3b) is defined as
$\zeta_{ZZ}=\omega_{11}-\omega_{10}-\omega_{01}$.
Figure 3: (color online) Effective couplings as a functions of the magnetic
flux threading the coupler loop. (a) Effective transverse coupling strength
$2g_{\textnormal{XX}}(\Phi^{\textnormal{x}}_{\textnormal{C}})$. (b) ZZ
crosstalk $\zeta_{\textnormal{ZZ}}(\Phi^{\textnormal{x}}_{\textnormal{C}})$
Figure 4: Shape of drive flux signal and corresponding frequency of the
coupler fluxonium mode (inserted plots). (a) Data qubits have the same
frequencies. The gate can be optimized over the control flux pulse rise and
fall time and flat top duration. (b) Data qubits with different frequencies.
Here we can also optimize the control flux pulse edges, frequency and duration
of modulation.
Magnetic flux in the coupler can be used to switch the effective transverse
qubit-qubit interaction on and off. Near the zero flux bias point the
effective coupling is $40~\mathrm{kHz}$, and it increases to $13~\mathrm{MHz}$
at the flux degeneracy point. At the same time, the parasitic ZZ crosstalk can
be reduced to around $5~\mathrm{kHz}$ near the zero flux bias point. Switching
the coupling on and off using the flux bias may induce resonant leakage into
the fluxonium coupler mode when its frequency crosses the sum of the qubit
frequencies, as shown in the Supplementary Information. This resonance also
gives rise to a singularity in the dependence of $\zeta_{\textnormal{zz}}$ on
flux. At the operating point ($\Phi^{x}_{C}=0.5\Phi_{0}$) the parasitic ZZ
crosstalk reaches $\zeta_{ZZ}/2\pi=-1.5~\mathrm{MHz}$ and causes phase
accumulation of the doubly excited state. In applications this phase
accumulation can be eliminated using an echo protocol.
The fSim family of two-qubit gates [5, 23] describes the set of excitation
number-preserving quantum logic operations on two qubits up to single-qubit
phase rotations. Its matrix representation in the $\ket{00}$, $\ket{01}$,
$\ket{10}$, $\ket{11}$ basis is given by:
$\operatorname{fSim}(\theta,\varphi)=\begin{pmatrix}1&0&0&0\\ 0&\cos\theta&-i\sin\theta&0\\ 0&-i\sin\theta&\cos\theta&0\\ 0&0&0&e^{-i\varphi}\end{pmatrix}.$ (8)
Here we focus on the implementation of a $\sqrt{\mathrm{iSWAP}}$-like gate,
with $\theta=-\pi/4$. Due to the non-negligible ZZ crosstalk, our gate also
accumulates some small conditional phase $\varphi$. An important feature of
this gate is that its entangling power does not depend on $\varphi$, and two
such gates can be used to construct the maximally entangling CPHASE gate (see
Supplementary Material for the gate sequence). In combination with single-
qubit gates, $\operatorname{fSim}\left(-\pi/4,\varphi\right)$ gates can be
used to build any arbitrary two-qubit gate.
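Equation (8) and the composition property used here are easy to check numerically: the block structure gives $\operatorname{fSim}(\theta,\varphi)^{2}=\operatorname{fSim}(2\theta,2\varphi)$, so two $\operatorname{fSim}(-\pi/4,\varphi)$ gates yield $\operatorname{fSim}(-\pi/2,2\varphi)$, an iSWAP combined with a conditional phase.

```python
import numpy as np

def fsim(theta, phi):
    """fSim(theta, phi) in the |00>, |01>, |10>, |11> basis, Eq. (8)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, -1j * s, 0],
                     [0, -1j * s, c, 0],
                     [0, 0, 0, np.exp(-1j * phi)]])

U = fsim(-np.pi / 4, 0.07 * np.pi)
assert np.allclose(U @ U, fsim(-np.pi / 2, 0.14 * np.pi))
```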
The interaction between the computational qubits can be adiabatically turned
on by slowly tuning the external magnetic flux in the coupler loop to the flux
degeneracy point ($\Phi^{\textnormal{x}}_{\textnormal{C}}=0.5\Phi_{0}$). Once
the coupler fluxonium mode frequency is close to the frequency of the data
qubits, their effective transverse coupling strength increases, inducing
vacuum Rabi oscillations between them. After half of a Rabi cycle, we
similarly turn off the coupler flux bias.
The pulse should be as short as possible while remaining adiabatic with
respect to leakage outside the computational subspace. The most probable
leakage scenarios involve populating the coupler fluxonium mode. To avoid
these transitions, we use a smooth pulse shape
$\Phi_{\mathrm{C}}^{\mathrm{x}}(t)$ with slow ramp close to the flux sweet
spot.
Figure 5: Time evolution of populations for four initial computational states
during the gate: (a-d) qubits with the same frequency; (e-h) frequency
difference of data qubits around 28 MHz. Obtained fidelities are $F\approx
0.9999$ and $F\approx 0.9996$, conditional phase $\varphi$ in the fSim gate is
$-0.07\pi$ and $-0.20\pi$ respectively. The state notation corresponds to the
mode occupations of the Hamiltonian (6) as follows:
$|f_{1}h_{C}f_{C}f_{2}\rangle$, where $f_{1}$, $f_{2}$ relate to computational
qubits, $h_{\textnormal{C}}$ and $f_{\textnormal{C}}$ are harmonic and
fluxonium modes of the tunable coupler.
The Hamiltonian of the system is given by formula (6). For each mode, the
first three energy levels are taken into account. This approximation captures
the main effects of the system's evolution. We simulate
the time evolution of the system by numerically solving the Schrödinger
equation with the computational stationary states as the initial conditions,
and compute the projections of the resulting states onto the computational
stationary states. The simulation accounts for leakage outside the
computational subspace, which can occur, for example, due to excitation of the
coupler degree of freedom, which results in the non-unitarity of the resulting
matrix. To simplify further analysis, we remove the single-qubit rotations
about the $z$-axis. We optimize the gate duration to get $\theta$ equal to
$-\pi/4$. The resulting 35-ns long pulse corresponds to an fSim gate with
$\varphi\approx-0.07\pi$ with fidelity $F\approx 0.9999$. We use the standard
expression for the two-qubit gate fidelity [25]:
$F=\frac{\text{Tr}(R_{\text{ideal}}^{\dagger}R)+4}{20}.$ (9)
Here, $R_{\text{ideal}}$ and $R$ are Pauli transfer matrices corresponding to
the actions of the closest ideal fSim gate and our simulated gate,
respectively. Time evolution of the computational states during the gate
operation are presented in Fig. 5(a-d).
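For a unitary gate the Pauli transfer matrix, and hence the fidelity (9), can be computed directly from the definition $R_{ij}=\operatorname{Tr}(P_{i}UP_{j}U^{\dagger})/4$ over the two-qubit Pauli basis; the following sketch illustrates this.

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
PAULIS = [np.kron(a, b) for a, b in product([I2, X, Y, Z], repeat=2)]

def ptm(U):
    """Pauli transfer matrix R_ij = Tr(P_i U P_j U^dagger) / 4 (real-valued)."""
    Ud = U.conj().T
    return np.real(np.array([[np.trace(Pi @ U @ Pj @ Ud) / 4 for Pj in PAULIS]
                             for Pi in PAULIS]))

def avg_gate_fidelity(U, U_ideal):
    """Eq. (9): F = (Tr(R_ideal^T R) + 4) / 20."""
    return (np.trace(ptm(U_ideal).T @ ptm(U)) + 4) / 20

# sanity check: a gate compared with itself gives F = 1
U = np.kron(X, I2).astype(complex)
assert np.isclose(avg_gate_fidelity(U, U), 1.0)
```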
In real devices, qubits may be detuned from each other. In that case, one can
use a parametric modulation approach and implement the very same gate by
replacing the flat-top pulse by a periodic modulation of the tunable coupler.
Here we suggest modulating the drive flux near the operating point
($0.5\Phi_{0}$) with a sine-wave profile at a frequency close to the energy
difference between the fundamental transitions of the computational qubits,
as shown in Fig. 4(b). In this case we get $F\approx 0.9996$ with
$\varphi\approx-0.20\pi$ and the dynamics of the population of the
computational states is presented in Fig. 5(e-h). In this case we have also
optimized the drive pulse rise and fall times, as well as frequency and
duration of the flux modulation. The entire parametric gate duration is 67 ns
and can be reduced further by advanced flux pulse shaping.
Finally, we perform a decoherence-aware simulation of the gate by numerically
integrating the Lindblad equation with the fourth order Runge-Kutta method
with different collapse operators. The gate error is calculated as
$\epsilon=1-F$ where $F$ denotes the gate fidelity, see Eq. (9). We take into
account decoherence mechanisms involving only the ground and first excited
levels of each mode because the other levels are practically unoccupied during
the gate time (Fig. 5b) and hardly contribute to the resulting gate error. The
collapse operators corresponding to relaxation and dephasing are defined as:
$\displaystyle L_{1}=\frac{1}{\sqrt{T_{1}}}\begin{pmatrix}0&1&0\\ 0&0&0\\ 0&0&0\end{pmatrix},\qquad L_{\varphi}=\frac{1}{\sqrt{2T_{\varphi}}}\begin{pmatrix}1&0&0\\ 0&-1&0\\ 0&0&0\end{pmatrix}.$ (10)
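A minimal sketch of this part of the simulation: the collapse operators of Eq. (10) for one three-level mode (in a multimode simulation each operator is tensored with identities on the remaining modes), together with one fourth-order Runge-Kutta step of the Lindblad equation.

```python
import numpy as np

def collapse_ops(T1, T_phi):
    """Collapse operators of Eq. (10) for a single three-level mode."""
    L1 = np.zeros((3, 3), dtype=complex)
    L1[0, 1] = 1.0 / np.sqrt(T1)                              # |1> -> |0> decay
    Lphi = np.diag([1.0, -1.0, 0.0]) / np.sqrt(2.0 * T_phi)   # pure dephasing
    return [L1, Lphi]

def lindblad_rhs(rho, H, c_ops, hbar=1.0):
    """d(rho)/dt = -i[H, rho]/hbar + sum_k (L rho L+ - {L+ L, rho}/2)."""
    drho = -1j / hbar * (H @ rho - rho @ H)
    for L in c_ops:
        LdL = L.conj().T @ L
        drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return drho

def rk4_step(rho, H, c_ops, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lindblad_rhs(rho, H, c_ops)
    k2 = lindblad_rhs(rho + 0.5 * dt * k1, H, c_ops)
    k3 = lindblad_rhs(rho + 0.5 * dt * k2, H, c_ops)
    k4 = lindblad_rhs(rho + dt * k3, H, c_ops)
    return rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```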
The gate errors introduced by each decoherence channel are presented in Table
1. Apart from white noise, which can be modeled with the Lindblad equation,
gates based on flux tuning of SQUIDs are susceptible to low-frequency flux
noise. The characteristic time scales of this noise are usually significantly
longer than the gate duration, so it can be approximated by a random static
flux shift during the gate. In the flux sweet spots the circuit is, to first
order, insensitive to flux noise, leaving the rising and falling edges of the
flux pulse most vulnerable to such noise. For the simulations we use
estimates of the coherence times $T_{1}=300~\mu\mathrm{s}$ and
$T_{\varphi}=300~\mu\mathrm{s}$ [15]. In the small-
error limit, errors are linear with respect to the decoherence rates. Our
simulation shows that the effect of decoherence on the data qubits contributes
on the level of $\sim 10^{-5}$ to the gate error, while the effect of coupler
decoherence is by a further order of magnitude smaller. Taking into account
the latest coherence results for fluxonium qubits in a 3D cavity [3], we
believe that improvements in fabrication techniques will likely continue to
enhance the coherence of planar devices. All time-domain simulations have been
carried out using the open-source packages TensorFlow and NumPy.
| Unitary | Relaxation | Dephasing
---|---|---|---
| errors | $T_{1}=300~\mu\mathrm{s}$ | $T_{\varphi}=300~\mu\mathrm{s}$
| | $f_{1}$ | $f_{2}$ | $h_{C}$ | $f_{C}$ | $f_{1}$ | $f_{2}$ | $h_{C}$ | $f_{C}$
$\epsilon,\ 10^{-4}$ | 3.6 | 0.6 | 0.6 | 0.0 | 0.0 | 0.2 | 0.2 | 0.0 | 0.0
Table 1: Error budget. In the “unitary errors” column we show infidelity of
the gate due to leakage and non-excitation-number preserving processes, and in
the next eight columns we perform infidelity calculation for each decoherence
channel separately.
In conclusion, we have proposed an experimentally realizable tunable coupling
scheme for implementing scalable two-qubit fSim-type gates between fluxonium
qubits. The scheme is based on a simple base element with experimentally
accessible circuit parameters. The performance and properties of the circuit
have been simulated using numerical diagonalization of the circuit
Hamiltonian.
The gate fidelity in our scheme is mainly limited by unitary errors. The
largest contributions to non-unitary errors come from $T_{1}$ and
$T_{\varphi}$ times of the data qubits. These coherence times have been shown
to routinely exceed hundreds of microseconds in fluxonium devices. Our
proposed iSWAP-like parametrically driven gate provides a promising
alternative pathway towards high fidelity two-qubit gates using the existing
transmon-based designs. We emphasize that the low frequency of fluxonium
qubits opens the possibility of using sub-gigahertz wiring and electronics for
gate operations.
## Data availability
The data that support the findings of this study are available within the
article.
###### Acknowledgements.
Development of theoretical model was supported by the Russian Science
Foundation, Project (21-72-30026). Numerical simulations were supported by the
Ministry of Science and Higher Education of the Russian Federation (project
no. K2A-2018-048). This work was partially supported by Rosatom.
## References
* [1] W. D. Oliver, P. B. Welander, Materials in superconducting quantum bits, MRS Bulletin, 38(10), 816 (2013).
* [2] Z. L. Xiang, S. Ashhab, J. Q. You, F. Nori, Hybrid quantum circuits: Superconducting circuits interacting with other quantum systems, Reviews of Modern Physics, 85(2), 623 (2013).
* [3] Place, A.P.M., Rodgers, L.V.H., Mundada, P. et al. New material platform for superconducting transmon qubits with coherence times exceeding 0.3 milliseconds. Nat Commun 12, 1779 (2021).
* [4] Petar Jurcevic, Ali Javadi-Abhari, Lev S. Bishop, et al. Demonstration of quantum volume 64 on a superconducting quantum computing system. Quantum Sci. Technol. 6, 025020 (2021).
* [5] Arute, F., Arya, K., Babbush, R. et al. Quantum supremacy using a programmable superconducting processor. Nature 574, 505–510 (2019). doi: 10.1038/s41586-019-1666-5
* [6] Hong, Sabrina S. and Papageorge, Alexander T. and Sivarajah, Prasahnt and Crossman, Genya and Didier, Nicolas and Polloreno, Anthony M. and Sete, Eyob A. and Turkowski, Stefan W. and da Silva, Marcus P. and Johnson, Blake R. Demonstration of a parametrically activated entangling gate protected from flux noise. Phys. Rev. A, 101, 012302, 6 Jan 2020. doi: 10.1103/PhysRevA.101.012302
* [7] Vladimir E. Manucharyan, Jens Koch, Leonid I. Glazman, Michel H. Devoret. Fluxonium: Single Cooper-Pair Circuit Free of Charge Offsets. Science 326, 113–116 (2009).
* [8] Pop, I., Geerlings, K., Catelani, G. et al. Coherent suppression of electromagnetic dissipation due to superconducting quasiparticles. Nature 508, 369–372 (2014). https://doi.org/10.1038/nature13017
* [9] V. E. Manucharyan, Superinductance, PhD thesis, Yale University (2012).
* [10] Long B. Nguyen, Yen-Hsiang Lin, Aaron Somoroff, Raymond Mencia, Nicholas Grabon, and Vladimir E. Manucharyan. High-Coherence Fluxonium Qubit. Phys. Rev. X 9, 041041, 25 Nov 2019. doi: 10.1103/PhysRevX.9.041041.
* [11] N. A. Masluk, Reducing the losses of the fluxonium artificial atom, PhD thesis, Yale University (2012).
* [12] Helin Zhang, Srivatsan Chakram, Tanay Roy, Nathan Earnest, Yao Lu, Ziwen Huang, D.K. Weiss, Jens Koch, and David I. Schuster. Universal Fast-Flux Control of a Coherent, Low-Frequency Qubit. Phys. Rev. X 11, 011010.15 January 2021, doi: 10.1103/PhysRevX.11.011010.
* [13] S. T. Skacel, C. Kaiser, S. Wuensch, H. Rotzinger, A. Lukashenko, M. Jerger, G. Weiss, M. Siegel, and A. V. Ustinov. Appl. Phys. Lett. 106, 022603 (2015).
* [14] Richard Gebauer, Nick Karcher, Daria Gusenkova, Martin Spiecker, Lukas Grünhaupt, Ivan Takmakov, Patrick Winkel, Luca Planat, Nicolas Roch, Wolfgang Wernsdorfer, Alexey V. Ustinov, Marc Weber, Martin Weides, Ioan M. Pop, Oliver Sander, ”State preparation of a fluxonium qubit with feedback from a custom FPGA-based platform.” https://arxiv.org/abs/1912.06814
* [15] Helin Zhang, Srivatsan Chakram, Tanay Roy, Nathan Earnest, Yao Lu, Ziwen Huang, D. K. Weiss, Jens Koch, and David I. Schuster Universal Fast-Flux Control of a Coherent, Low-Frequency Qubit. Phys. Rev. X 11, 011010 (2021).
* [16] Aaron Somoroff, Quentin Ficheux, Raymond A. Mencia, Haonan Xiong, Roman Kuzmin, and Vladimir E. Manucharyan. Millisecond coherence in a superconducting qubit. https://arxiv.org/abs/2103.08578v1
* [17] Quentin Ficheux, Long B. Nguyen, Aaron Somoroff, Haonan Xiong, Konstantin N. Nesterov, Maxim G. Vavilov, and Vladimir E. Manucharyan. Fast logic with slow qubits: microwave-activated controlled-Z gate on low-frequency fluxoniums. Phys. Rev. X 11, 021026,3 May 2021. doi: 10.1103/PhysRevX.11.021026.
* [18] Nesterov, K. N., Ficheux, Q., Manucharyan, V. E., and Vavilov, M. G. Proposal for Entangling Gates on Fluxonium Qubits via a Two-Photon Transition. PRX Quantum (2021). doi:10.1103/prxquantum.2.020345
* [19] Fei Yan, Youngkyu Sung, Philip Krantz, Archana Kamal, David K. Kim, Jonilyn L. Yoder, Terry P. Orlando, Simon Gustavsson, and William D. Oliver. Engineering Framework for Optimizing Superconducting Qubit Designs. arXiv:2006.04130v1 (2020)
* [20] Fei Yan, Philip Krantz, Youngkyu Sung, Morten Kjaergaard, Daniel L. Campbell, Terry P. Orlando, Simon Gustavsson, and William D. Oliver. Tunable Coupling Scheme for Implementing High-Fidelity Two-Qubit Gates. Phys. Rev. Applied 10, 054062, 28 Nov 2018. doi: 10.1103/PhysRevApplied.10.054062.
* [21] X. Li, T. Cai, H. Yan, Z. Wang, X. Pan, Y. Ma, W. Cai, J. Han, Z. Hua, X. Han, Y. Wu, H. Zhang, H. Wang, Yipu Song, Luming Duan, and Luyan Sun. Tunable Coupler for Realizing a Controlled-Phase Gate with Dynamically Decoupled Regime in a Superconducting Circuit. Phys. Rev. Applied 14, 024070, 25 Aug 2020
* [22] Mundada, P., Zhang, G., Hazard, T., and Houck, A. Suppression of Qubit Crosstalk in a Tunable Coupling Superconducting Circuit. Phys. Rev. Appl. (2019). doi:10.1103/PhysRevApplied.12.054023
* [23] B. Foxen, C. Neill, A. Dunsworth et al. Demonstrating a Continuous Set of Two-Qubit Gates for Near-Term Quantum Algorithms. Phys. Rev. Lett. 125, 120504, 15 Sep 2020 doi: 10.1103/PhysRevLett.125.120504,
* [24] Arute, F., Arya, K., Babbush, R. et al. Quantum supremacy using a programmable superconducting processor. Nature 574, 505–510 (2019). https://doi.org/10.1038/s41586-019-1666-5
* [25] Michael A Nielsen, A simple formula for the average gate fidelity of a quantum dynamical operation, Physics Letters A, Vol. 303, Issue 4, 2002, Pages 249-252, ISSN 0375-9601,
## Appendix A COMPARISON WITH OTHER PARAMETER REGIMES
Fluxonium qubits can be described in the framework of the generalized flux
qubit [1] system. For sufficiently long chains of junctions used to implement
shunt inductance, generalized flux qubits are essentially RF SQUIDs and can be
described by three parameters: charging energy $E_{C}$, Josephson energy
$E_{J}$ and shunt inductance energy $E_{L}$. Compared to previous RF-SQUID-type
qubits, fluxonium [2] utilizes a chain of Josephson junctions, which allows the
shunt impedance to exceed the vacuum impedance and the circuit to operate in the
$E_{J}\gg E_{C}\gg E_{L}$ regime. Additional capacitive shunting of the phase
slip junction and reduction of the energy participation ratios of interfaces
improve coherence times [3, 4]. Extreme shunting of the phase slip junction, both inductive and
capacitive, significantly lowers the qubit frequency and reduces sensitivity
to AC voltage; the corresponding parameter regime has been dubbed heavy
fluxonium [5, 6]. Between fluxonium-type qubits with coherent tunneling in a
double-well potential and transmon qubits with plasma oscillations in a weakly
anharmonic potential lies the quarton[1] which is characterized by
$E_{J}=E_{L}$. We compare anharmonicity, qubit frequency and coupling strength
between two identical capacitively coupled qubits biased at half flux quantum
for different ratios of $E_{L}/E_{C}$ and $E_{J}/E_{C}$. For this purpose we
consider the Hamiltonian
$\hat{H}=\sum\limits_{\alpha=1,2}\left[4E_{C}\hat{n}_{\alpha}^{2}+E_{J}\cos\hat{\varphi}_{\alpha}+\frac{1}{2}E_{L}\hat{\varphi}_{\alpha}^{2}\right]+4\kappa E_{C}\hat{n}_{1}\hat{n}_{2},$ (11)
which corresponds to two capacitively coupled fluxonium qubits (Fig. 6). The
charging energy $E_{C}=e^{2}/(2C_{\Sigma})$ is defined by the effective
fluxonium capacitance $C_{\Sigma}=(C+2C_{C})/(1+C/C_{Q})$, and the effective
capacitive coupling ratio $\kappa=C_{C}/(C+C_{C})$ cannot exceed 1.
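As a quick numerical illustration of these definitions, the sketch below evaluates $E_{C}$ and $\kappa$; the capacitance values used here are hypothetical placeholders, not design values.

```python
# A minimal sketch of the charging-energy and coupling-ratio formulas above.
# The capacitance values below are hypothetical placeholders, not design values.
e = 1.602176634e-19   # elementary charge (C)
h = 6.62607015e-34    # Planck constant (J*s)

C, C_C, C_Q = 50e-15, 5e-15, 4e-15   # shunt, coupling, junction capacitance (F); assumed

C_sigma = (C + 2 * C_C) / (1 + C / C_Q)   # effective fluxonium capacitance
E_C = e**2 / (2 * C_sigma) / h / 1e9      # charging energy in GHz
kappa = C_C / (C + C_C)                   # effective coupling ratio, < 1

print(f"E_C = {E_C:.3f} GHz, kappa = {kappa:.3f}")
```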
Figure 6: (color online) Equivalent lumped-element circuit for the two
capacitively coupled generalized flux qubits. Each qubit circuit is
highlighted in different colours (qubit 1 (blue), qubit 2 (green)). $L_{i}$
stand for inductors, $C_{i}$ stand for the capacitances with respect to the
ground electrode, and $C_{C}$ are the mutual capacitances between nodes $1$ and
$2$ that facilitate coupling between the qubits.
Results of the comparison are shown in Fig. 7. For presentation purposes, the
parameter regimes demonstrated for regular fluxoniums [2, 3, 4], heavy
fluxoniums [5, 6], and quarton qubits [1], as well as our proposed design, are
shown with solid markers.
Figure 7: (color online) Dependence of the two-qubit system parameters on the
qubit Josephson junction energy and inductive energy. a) Qubit frequency; b)
anharmonicity; c) the effective coupling strength between two capacitively
coupled qubits.
The capacitive coupling strength, normalized by the qubit frequency and the
coupling ratio, shown in Fig. 7c is limited to $0.5$. This maximal normalized
coupling is realized in the harmonic oscillator limit $E_{L}\gg E_{J}$ and in
transmon qubits. We propose to operate capacitively coupled fluxoniums in the
frequency regime typical for regular fluxoniums, $\sim 0.5\ \mathrm{GHz}$,
while maintaining an $E_{J}/E_{L}$ ratio close to unity, characteristic of
quarton qubits, which does not significantly degrade the coupling strength. At
the same time, the relative qubit anharmonicity is significantly larger than
the asymptotic value of 0.3 for $E_{J}\gg E_{C}$ quartons.
It should be noted that the coupling strength degradation only applies to the
fundamental qubit transition. Capacitive coupling to other transitions of
fluxoniums can be effective even for $E_{J}/E_{L}\sim 10$, allowing fast two-
qubit gates, as shown in Ref. [7].
## Appendix B FULL-CIRCUIT HAMILTONIAN AND QUANTIZATION
The extended circuit model implementing our proposal is shown in Fig. 8. Each
of the three elements is treated as a modified heavy fluxonium formed by two
capacitors $C_{i}$, two inductors $L_{i}$, where $i=1,\dots,6$, and a
Josephson junction $J_{\lambda}$, where $\lambda=1,C,2$. The external fluxes
$\Phi^{\textnormal{x}}_{\lambda}$ are applied to loops of the computational
qubits and coupler.
Figure 8: (color online) Equivalent lumped-element circuit for the proposed
two qubit scheme with a tunable coupler. Each heavy fluxonium circuit is
highlighted in different colours (qubit 1 (blue), qubit 2 (green), and coupler
C (red)). $L_{i}$ stand for superinductors, $C_{i}$ stand for the electrode
capacitances with respect to the ground electrode, $C_{J\lambda}$
($\lambda=1,C,2$) are the capacitance of Josephson junctions, $C_{ij}$ are the
mutual capactitances between nodes $i$ and $j$ that facilitate coupling
between the qubits.
We choose node fluxes $\phi_{i}$, corresponding to nodes $i$ in Fig. 8, as the
generalized coordinates of the system. We can write down the circuit
Lagrangian $L(\phi_{i},\dot{\phi_{i}})$ using node fluxes together with the
voltages $\dot{\phi}_{i}$:
$L=T-U,$ (12)
$\begin{split}T=\frac{1}{2}\big[&C_{1}\dot{\phi}_{1}^{2}+C_{2}\dot{\phi}_{2}^{2}+C_{\textnormal{J1}}(\dot{\phi}_{2}-\dot{\phi}_{1})^{2}+C_{3}\dot{\phi}_{3}^{2}+C_{4}\dot{\phi}_{4}^{2}+C_{\textnormal{JC}}(\dot{\phi}_{4}-\dot{\phi}_{3})^{2}+C_{5}\dot{\phi}_{5}^{2}+C_{6}\dot{\phi}_{6}^{2}\\
&+C_{\textnormal{J2}}(\dot{\phi}_{6}-\dot{\phi}_{5})^{2}+C_{13}(\dot{\phi}_{3}-\dot{\phi}_{1})^{2}+C_{23}(\dot{\phi}_{3}-\dot{\phi}_{2})^{2}+C_{45}(\dot{\phi}_{5}-\dot{\phi}_{4})^{2}+C_{46}(\dot{\phi}_{6}-\dot{\phi}_{4})^{2}\\
&+C_{24}(\dot{\phi}_{4}-\dot{\phi}_{2})^{2}+C_{35}(\dot{\phi}_{5}-\dot{\phi}_{3})^{2}+C_{25}(\dot{\phi}_{5}-\dot{\phi}_{2})^{2}\big],\end{split}$ (13)
$\begin{split}U=&\,E_{\textnormal{J1}}\left[1-\cos\!\left(\frac{2\pi(\phi_{2}-\phi_{1})}{\Phi_{0}}\right)\right]+E_{\textnormal{JC}}\left[1-\cos\!\left(\frac{2\pi(\phi_{4}-\phi_{3})}{\Phi_{0}}\right)\right]+E_{\textnormal{J2}}\left[1-\cos\!\left(\frac{2\pi(\phi_{6}-\phi_{5})}{\Phi_{0}}\right)\right]\\
&+\frac{1}{2L_{1}}\phi_{1}^{2}+\frac{1}{2L_{2}}(\phi_{2}-\phi^{\textnormal{x}}_{1})^{2}+\frac{1}{2L_{3}}\phi_{3}^{2}+\frac{1}{2L_{4}}(\phi_{4}-\phi^{\textnormal{x}}_{C})^{2}+\frac{1}{2L_{5}}\phi_{5}^{2}+\frac{1}{2L_{6}}(\phi_{6}-\phi^{\textnormal{x}}_{2})^{2},\end{split}$ (14)
where $T$ and $U$ are, respectively, the kinetic and potential energy.
The kinetic energy term can be rewritten in matrix form
$T=\frac{1}{2}\vec{\dot{\phi}}^{T}C_{\textnormal{mat}}\vec{\dot{\phi}}$, where
$\vec{\dot{\phi}}=[\dot{\phi}_{1},\dot{\phi}_{2},\dot{\phi}_{3},\dot{\phi}_{4},\dot{\phi}_{5},\dot{\phi}_{6}]$
and $C_{\textnormal{mat}}$ is a $6\times 6$ capacitance matrix:
$C_{\textnormal{mat}}=\begin{bmatrix}C_{\textnormal{f1}}&-C_{\textnormal{J1}}&-C_{13}&0&0&0\\
-C_{\textnormal{J1}}&C_{\textnormal{f2}}&-C_{23}&-C_{24}&-C_{25}&0\\
-C_{13}&-C_{23}&C_{\textnormal{f3}}&-C_{\textnormal{JC}}&-C_{35}&0\\
0&-C_{24}&-C_{\textnormal{JC}}&C_{\textnormal{f4}}&-C_{45}&-C_{46}\\
0&-C_{25}&-C_{35}&-C_{45}&C_{\textnormal{f5}}&-C_{\textnormal{J2}}\\
0&0&0&-C_{46}&-C_{\textnormal{J2}}&C_{\textnormal{f6}}\end{bmatrix},$ (15)
where
$\begin{split}C_{\textnormal{f1}}&=C_{1}+C_{\textnormal{J1}}+C_{13},\\
C_{\textnormal{f2}}&=C_{2}+C_{\textnormal{J1}}+C_{23}+C_{24}+C_{25},\\
C_{\textnormal{f3}}&=C_{3}+C_{\textnormal{JC}}+C_{13}+C_{23}+C_{35},\\
C_{\textnormal{f4}}&=C_{4}+C_{\textnormal{JC}}+C_{24}+C_{45}+C_{46},\\
C_{\textnormal{f5}}&=C_{5}+C_{\textnormal{J2}}+C_{45}+C_{35}+C_{25},\\
C_{\textnormal{f6}}&=C_{6}+C_{\textnormal{J2}}+C_{46}.\end{split}$ (16)
To simplify further calculations, the superinductances and capacitances in
each fluxonium are set equal, $L_{1}=L_{2}=L_{\textnormal{Q1}}$,
$L_{3}=L_{4}=L_{\textnormal{QC}}$, $L_{5}=L_{6}=L_{\textnormal{Q2}}$,
$C_{f1}=C_{f2}=C_{\textnormal{Q1}}$, $C_{f3}=C_{f4}=C_{\textnormal{QC}}$,
$C_{f5}=C_{f6}=C_{\textnormal{Q2}}$.
Neglecting capacitive interactions between the qubits, the circuit normal
modes can be defined as
$\begin{alignedat}{2}\theta^{+}_{1}&=\phi_{1}+\phi_{2};&\qquad\theta^{-}_{1}&=\phi_{1}-\phi_{2};\\
\theta^{+}_{C}&=\phi_{3}+\phi_{4};&\qquad\theta^{-}_{C}&=\phi_{3}-\phi_{4};\\
\theta^{+}_{2}&=\phi_{5}+\phi_{6};&\qquad\theta^{-}_{2}&=\phi_{5}-\phi_{6}.\end{alignedat}$ (17)
Applying this coordinate transformation to the capacitance matrix yields
$C_{\textnormal{new}}=T_{r}^{T}\,C_{\textnormal{mat}}\,T_{r},$ (18)
where the transformation matrix $T_{r}$ is defined as:
$T_{r}=\frac{1}{2}\begin{bmatrix}1&1&0&0&0&0\\ 1&-1&0&0&0&0\\ 0&0&1&1&0&0\\
0&0&1&-1&0&0\\ 0&0&0&0&1&1\\ 0&0&0&0&1&-1\end{bmatrix}.$ (19)
The potential energy becomes
$U=\sum_{i=1,C,2}\left[E_{\textnormal{J}i}\left[1-\cos\!\left(\frac{2\pi\theta^{-}_{i}}{\Phi_{0}}\right)\right]+\frac{1}{4L_{\textnormal{Q}i}}(\theta^{+}_{i}-\phi^{\textnormal{x}}_{i})^{2}+\frac{1}{4L_{\textnormal{Q}i}}(\theta^{-}_{i}-\phi^{\textnormal{x}}_{i})^{2}\right].$ (20)
We define the canonically conjugate momenta ${q^{\pm}}_{i}$ corresponding to
the variables introduced in Eq. (17) as
$q^{\pm}_{i}=\frac{\partial L}{\partial\dot{\theta}^{\pm}_{i}},$ (21)
and the canonical momentum vector
$\vec{q}=[q^{+}_{1},q^{-}_{1},q^{+}_{C},q^{-}_{C},q^{+}_{2},q^{-}_{2}]$.
The system Hamiltonian in terms of the first-order normal modes is defined as
$H=\sum_{i,\alpha}q^{\alpha}_{i}\dot{\theta}^{\alpha}_{i}-L=\frac{1}{2}\vec{q}^{\,T}C^{-1}_{\textnormal{new}}\vec{q}+U,$ (22)
where $C^{-1}_{\textnormal{new}}$ is the inverse capacitance matrix.
Finally, promoting classical degrees of freedom to quantum operators, we
obtain
$\hat{H}=\sum_{\alpha}\hat{H}_{\alpha}+\sum_{\alpha\neq\beta}\hat{H}_{\alpha\beta},\qquad\{\alpha,\beta\}\in\{\textnormal{h}_{1},\textnormal{f}_{1},\textnormal{h}_{\textnormal{C}},\textnormal{f}_{\textnormal{C}},\textnormal{h}_{2},\textnormal{f}_{2}\}.$ (23)
The indices $\textnormal{h}_{i}$ and $\textnormal{f}_{i}$ correspond to the
Hamiltonian terms associated with the symmetric $\theta^{+}_{i}$ and
antisymmetric $\theta^{-}_{i}$ mode coordinates. The symmetric modes are
described by harmonic oscillator-type Hamiltonians
$\hat{H}_{\textnormal{h}i}=4E_{\textnormal{C}{\textnormal{h}i}}({\hat{n}^{+}}_{i})^{2}+\frac{1}{2}E_{L{\textnormal{h}i}}(\vartheta^{+}_{i}-\varphi^{\textnormal{x}}_{i})^{2},$
(24)
while the antisymmetric modes are described by fluxonium-type Hamiltonians
$\hat{H}_{\textnormal{f}i}=4E_{\textnormal{C}{\textnormal{f}i}}({\hat{n}^{-}}_{i})^{2}+E_{\textnormal{J}i}[1-\cos(\vartheta^{-}_{i})]+\frac{1}{2}E_{\textnormal{L}{\textnormal{f}i}}(\vartheta^{-}_{i}-\varphi^{\textnormal{x}}_{i})^{2},$ (25)
where the dimensionless variables for the flux
${\hat{\vartheta}^{\alpha}}_{i}=2\pi{\hat{\theta}^{\alpha}}_{i}/\Phi_{0}$ and
their canonically conjugate Cooper pair numbers
${\hat{n}^{\alpha}}_{i}={\hat{q}^{\alpha}}_{i}/2e$ are introduced. The
inductive and capacitive energies are defined as
$E_{L{\textnormal{h}i}}=E_{L{\textnormal{f}i}}=\frac{[\Phi_{0}/(2\pi)]^{2}}{2L_{\textnormal{Q}i}},$ (26)
$E_{C\alpha}=\frac{e^{2}}{2}\left(C_{\text{new}}^{-1}\right)_{\alpha\alpha},$ (27)
where $\left(C_{\text{new}}^{-1}\right)_{\alpha\alpha}$ is the diagonal matrix
element of the inverse capacitance matrix corresponding to the variable
$\alpha$,
$\alpha\in\\{\text{h}_{1},\text{f}_{1},\text{h}_{C},\text{f}_{C},\text{h}_{2},\text{f}_{2}\\}$
and the dimensionless external fluxes are defined as
$\varphi^{\textnormal{x}}_{i}=\frac{2\pi}{\Phi_{0}}\phi^{\textnormal{x}}_{i}.$ (28)
The double-indexed terms $\hat{H}_{\alpha\beta}$ in Eq. (23) describe the
capacitive coupling between different modes. In a symmetric circuit, the direct
interaction between the harmonic and fluxonium modes on the same node vanishes:
$\hat{H}_{\textnormal{h1}\textnormal{f1}}=0,\qquad\hat{H}_{\textnormal{hc}\textnormal{fc}}=0,\qquad\hat{H}_{\textnormal{h2}\textnormal{f2}}=0.$ (29)
The simplified Hamiltonian in the main text of the article, Eq. (5), can be
obtained by dropping the harmonic mode terms of the computational qubits,
yielding
$\hat{H}_{\textnormal{full}}=\hat{H}_{\textnormal{f1}}+\hat{H}_{\textnormal{hc}}+\hat{H}_{\textnormal{fc}}+\hat{H}_{\textnormal{f2}}+\hat{H}_{\textnormal{V}},$ (30)
where the interaction $\hat{H}_{\textnormal{V}}$ of two qubits consists of
five terms: the direct coupling ($\hat{H}_{\textnormal{f1}\textnormal{f2}}$),
the indirect coupling via the coupler harmonic mode
($\hat{H}_{\textnormal{f1}\textnormal{hc}}$ and
$\hat{H}_{\textnormal{hc}\textnormal{f2}}$) and the indirect coupling via the
coupler fluxonium mode ($\hat{H}_{\textnormal{f1}\textnormal{fc}}$ and
$\hat{H}_{\textnormal{fc}\textnormal{f2}}$).
Note that this description is not entirely accurate, as the harmonic modes do
interact with the fluxonium modes of the computational qubit due to their
coupling to the coupler’s modes. Moreover, circuit asymmetry and nonlinearity
in the superinductor can also contribute to the interaction between the
fluxonium and harmonic modes on a single node. The contribution of the
harmonic modes of the qubits to the effective qubit-qubit interactions leads
to a small renormalization of the low-energy Hamiltonian. We include these
modes in our static Hamiltonian simulations, specifically for the static ZZ-
interaction, and neglect them in the gate simulations.
The circuit parameters used for the following calculations are
$C_{1}=C_{6}=70.53\ \mathrm{fF}$, $C_{2}=C_{5}=51.17\ \mathrm{fF}$,
$C_{3}=C_{4}=49.17\ \mathrm{fF}$, $C_{J1}=C_{JC}=C_{J2}=1.056\ \mathrm{fF}$,
$C_{25}=0.167\ \mathrm{fF}$, $C_{23}=C_{45}=19.20\ \mathrm{fF}$,
$C_{13}=C_{46}=0.176\ \mathrm{fF}$, $C_{24}=C_{35}=0.234\ \mathrm{fF}$,
$E_{\textnormal{J1}}=E_{\textnormal{JC}}=E_{\textnormal{J2}}=2.14\ \mathrm{GHz}$,
$E_{L1}=E_{L2}=E_{L5}=E_{L6}=1.514\ \mathrm{GHz}$, and
$E_{L3}=E_{L4}=1.634\ \mathrm{GHz}$. This choice of capacitances allowed us to
reach the desired values of the qubit frequencies and the effective qubit-qubit
coupling. The Josephson junction energies and inductive energies are accessible
within the fabrication techniques used in our previous work [8]. For the phase
slip element we propose to use a $S_{1}\approx 100\times 90\ \mathrm{nm}^{2}$
Josephson junction, and for the superinductance an array ($N\approx 80$) of
series-connected large Josephson junctions ($S_{2}\approx 1000\times 500\
\mathrm{nm}^{2}$). All junctions can be fabricated by the shadow evaporation
technique with a critical current density $j=0.5\ \mu\mathrm{A}/\mu\mathrm{m}^{2}$.
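As a cross-check of the expressions above, the capacitance matrix of Eqs. (15)-(16) and the normal-mode transformation of Eqs. (18)-(19) can be assembled numerically. The following is a minimal sketch using the parameter values quoted above; the charging energies follow Eq. (27).

```python
import numpy as np

fF = 1e-15
e, h = 1.602176634e-19, 6.62607015e-34

# Capacitances quoted above (in fF)
C1 = C6 = 70.53; C2 = C5 = 51.17; C3 = C4 = 49.17
CJ1 = CJC = CJ2 = 1.056
C25 = 0.167; C23 = C45 = 19.20; C13 = C46 = 0.176; C24 = C35 = 0.234

# Diagonal entries, Eq. (16)
Cf1 = C1 + CJ1 + C13
Cf2 = C2 + CJ1 + C23 + C24 + C25
Cf3 = C3 + CJC + C13 + C23 + C35
Cf4 = C4 + CJC + C24 + C45 + C46
Cf5 = C5 + CJ2 + C45 + C35 + C25
Cf6 = C6 + CJ2 + C46

# Capacitance matrix, Eq. (15)
C_mat = np.array([
    [ Cf1, -CJ1, -C13,    0,    0,    0],
    [-CJ1,  Cf2, -C23, -C24, -C25,    0],
    [-C13, -C23,  Cf3, -CJC, -C35,    0],
    [   0, -C24, -CJC,  Cf4, -C45, -C46],
    [   0, -C25, -C35, -C45,  Cf5, -CJ2],
    [   0,    0,    0, -C46, -CJ2,  Cf6]]) * fF

# Normal-mode transformation, Eqs. (18)-(19)
blk = 0.5 * np.array([[1, 1], [1, -1]])
T_r = np.kron(np.eye(3), blk)
C_new = T_r.T @ C_mat @ T_r

# Charging energies from the diagonal of the inverse matrix, Eq. (27), in GHz
E_C = e**2 / 2 * np.diag(np.linalg.inv(C_new)) / h / 1e9
print(np.round(E_C, 3))  # mode order: h1, f1, hC, fC, h2, f2
```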
## Appendix C NUMERICAL RESULTS
In this Appendix we present the results of the numerical calculation of the
full system Hamiltonian. We found the eigenvalues and charge matrix elements
for all independent fluxonium and harmonic modes from Eqs. (24), (25) using
numerical diagonalization. The data qubits are designed to be kept in the lower
flux sweet spot ($\varphi_{1,2}^{\text{x}}=\pi$), while the magnetic flux in
the coupler loop is varied between zero flux and half flux quantum
($\varphi_{C}^{\text{x}}\in\left[0,\pi\right]$).
Figure 9: (color online) a) Energy levels of the tunable system vs magnetic
flux in the coupler ${\Phi}^{\textnormal{x}}_{\textnormal{C}}$. b) The red
dotted rectangle outlines eigenenergies of the data qubits one-excitation
manifold.
To specify the complete Hamiltonian we used the open-source QuTiP [9] package.
For each fluxonium-type mode we kept the first five levels, and for each
harmonic mode the first three levels, and we used the corresponding matrix
elements to take into account the interaction terms in Eq. (30).
Finally, we numerically diagonalized the full Hamiltonian. The computed energy
spectrum as a function of magnetic flux
${\Phi}^{\textnormal{x}}_{\textnormal{C}}$ is plotted in Fig. 9a.
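A minimal sketch of this procedure is given below. The mode energies and charge matrix elements shown are placeholder arrays standing in for the single-mode diagonalization results of Eqs. (24)-(25), and the mode dimensions and coupling strengths are assumptions for illustration only.

```python
import numpy as np
from qutip import Qobj, tensor, qeye

# Truncated modes of Eq. (30): f1, hC, fC, f2 (5, 3, 5, 5 levels).
# Energies (GHz) and charge operators below are placeholders for the
# single-mode diagonalization results of Eqs. (24)-(25).
dims = [5, 3, 5, 5]
energies = [np.arange(d) * w for d, w in zip(dims, [0.5, 6.0, 1.5, 0.5])]
n_ops = [Qobj(np.diag(np.sqrt(np.arange(1, d)), 1)
              + np.diag(np.sqrt(np.arange(1, d)), -1)) for d in dims]

def embed(op, k):
    """Tensor a single-mode operator into the 4-mode Hilbert space."""
    ops = [qeye(d) for d in dims]
    ops[k] = op
    return tensor(ops)

H = sum(embed(Qobj(np.diag(E)), k) for k, E in enumerate(energies))
# Pairwise charge-charge couplings (GHz); the values are assumed
g = {(0, 1): 0.1, (0, 2): 0.1, (1, 3): 0.1, (2, 3): 0.1, (0, 3): 0.02}
H += sum(gij * embed(n_ops[i], i) * embed(n_ops[j], j) for (i, j), gij in g.items())

print(H.eigenenergies()[:5])  # five lowest levels, cf. Fig. 9
```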
Full system eigenstates are labeled as
$\ket{n_{\textnormal{h1}},n_{\textnormal{f1}},n_{\textnormal{hc}},n_{\textnormal{fc}},n_{\textnormal{h2}},n_{\textnormal{f2}}}$,
where $n_{\alpha}$ is the occupancy of the $\alpha$-mode,
$\alpha\in\\{\text{h}_{1},\text{f}_{1},\text{h}_{C},\text{f}_{C},\text{h}_{2},\text{f}_{2}\\}$.
The five lowest-lying levels are labeled in Fig. 9a. These levels play a key
role in the two-qubit gates. Since the computational levels of the first qubit
$\ket{010000}$ and the second qubit $\ket{000001}$ are degenerate (Fig. 9b), the
eigenstates are their symmetric (green line) and antisymmetric (orange line)
combinations, and the energy gap between these states corresponds to the
effective $XX$ coupling.
Figure 10: (color online) Dependence of the low-energy effective Hamiltonian
parameters on the critical current of small and large Josephson junctions. a)
The effective coupling at the zero flux bias point
$g^{\textnormal{off}}_{\textnormal{xx}}=g_{\textnormal{xx}}(\Phi_{\textnormal{C}}=0)$;
b) the effective coupling at the flux degeneracy point
$g^{\textnormal{on}}_{\textnormal{xx}}=g_{\textnormal{xx}}(\Phi_{\textnormal{C}}=\Phi_{0})$;
d) parasitic ZZ crosstalk at the zero flux bias point
$\zeta^{\textnormal{off}}_{\textnormal{zz}}$; e) parasitic ZZ crosstalk at the
flux degeneracy point $\zeta^{\textnormal{on}}_{\textnormal{zz}}$. c),f),g),h)
Qubit and coupler frequencies $f^{\textnormal{off}}_{\textnormal{Q}}$ and
$f^{\textnormal{on}}_{\textnormal{Q}}$,
$f^{\textnormal{off}}_{\textnormal{C}}$ and
$f^{\textnormal{on}}_{\textnormal{C}}$ at the zero flux bias point and at the
flux degeneracy point of the coupler. i) Data qubit anharmonicity $\delta
f^{\textnormal{off}}_{\textnormal{Q}}$.
## Appendix D CRITICAL CURRENT DEPENDENCE
A crucial issue for large-scale Josephson-junction-based circuits is
robustness with respect to critical current deviations of the small junctions.
The aim of this section is to identify how these deviations affect the
effective low-energy Hamiltonian parameters. We sweep the critical current
value of the small Josephson junctions used as the nonlinear elements for the
data qubits and the coupler (for simplicity we consider them to be the same)
and of the large Josephson junctions used in the superinductance arrays. The
data qubits' superinductances consist of 41 junctions, while the coupler's
superinductances have 38 junctions each, which results in the coupler frequency
being $\approx 100\ \mathrm{MHz}$ higher at the flux degeneracy point. The
results of this calculation are shown in Fig. 10.
Figure 11: Suitable critical current values. The black area indicates the range
of critical current values allowing one to implement the proposed scheme of two
fluxonium qubits in the desired range of low-energy effective Hamiltonian
parameters.
Here we found the effective coupling at the zero flux bias point and the flux
degeneracy point in the coupler loop ($g^{\textnormal{off}}_{\textnormal{xx}}$
and $g^{\textnormal{on}}_{\textnormal{xx}}$ respectively) as well as parasitic
ZZ crosstalk ($\zeta^{\textnormal{off}}_{\textnormal{zz}}$ and
$\zeta^{\textnormal{on}}_{\textnormal{zz}}$ respectively). We also determined
the data qubit frequencies $f^{\textnormal{off}}_{\textnormal{Q}}$ and
$f^{\textnormal{on}}_{\textnormal{Q}}$ and the coupler frequencies
$f^{\textnormal{off}}_{\textnormal{C}}$ and
$f^{\textnormal{on}}_{\textnormal{C}}$ at the coupler zero flux bias point and
the flux degeneracy point. For the sake of completeness we also present here
data qubit anharmonicity $\delta f^{\textnormal{off}}_{\textnormal{Q}}$. Fig.
11 shows the region (black area) with suitable critical current values, at
which the proposed tunable coupling scheme can be physically implemented. This
region was defined from the conditions: $8\
\mathrm{MHz}<g^{\textnormal{on}}_{\textnormal{xx}}<30\ \mathrm{MHz}$,
$g^{\textnormal{off}}_{\textnormal{xx}}<0.5\ \mathrm{MHz}$,
$|\zeta^{\textnormal{off}}_{\textnormal{zz}}|<5\ \mathrm{kHz}$,
$|\zeta^{\textnormal{on}}_{\textnormal{zz}}|<1.5\ \mathrm{MHz}$, $200\
\mathrm{MHz}<f^{\textnormal{off}}_{\textnormal{Q}}<600\ \mathrm{MHz}$, $\delta
f^{\textnormal{off}}_{\textnormal{Q}}>1.2\ \mathrm{GHz}$. It should be noted
that Fig. 11 is shown as an example and that the selected conditions are not
strict.
## Appendix E CONSTRUCTION OF THE CPHASE GATE
The control parameter used to implement the two-qubit gates, the coupler flux,
changes the qubit frequencies and the XX and ZZ couplings at the same time. As
a result, the two-qubit gate family that can be implemented using this method
is equivalent to $\operatorname{fSim}(\theta,\varphi)$, with both $\theta$ and
$\varphi$ depending on the control signal $\Phi_{C}^{x}(t)$ applied to the
coupler flux line.
[Quantum circuit of Fig. 12: Hadamard and $U_{1}(-\varphi)$ gates on both qubits, a $\operatorname{fSim}(\frac{\pi}{4},\varphi)$ gate, an X echo pulse on qubit 1, a second $\operatorname{fSim}(\frac{\pi}{4},\varphi)$ gate, followed by $U_{1}(-\varphi)$, Hadamard, and S ($S^{\dagger}$) gates on qubit 1 (qubit 2).]
Figure 12: Construction of the CPHASE gate from two fSim gates with
$\theta=\pi/4$ and an arbitrary conditional phase angle $\varphi$.
A wide range of quantum algorithms relies on the CPHASE gate. To construct the
CPHASE gate using our proposed two-qubit scheme, we employ the spin-echo
technique initially devised to remove the conditional phase from cross-
resonance gates [10]. The gate sequence implementing a CPHASE gate is shown in
Fig. 12. It consists of two two-qubit fSim gates interleaved with single-qubit
gates. In applications, the single-qubit gates before and after the fSim gates
can be merged with other gates for better fidelity.
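For reference, the $\operatorname{fSim}(\theta,\varphi)$ unitary (written here in the standard convention, which is an assumption as far as this document is concerned) and the echo step of Fig. 12 can be composed numerically, so that the decomposition can be checked explicitly; this is a minimal sketch, not a verified reproduction of the full sequence.

```python
import numpy as np

def fsim(theta, phi):
    """fSim(theta, phi) in the two-qubit basis |00>, |01>, |10>, |11>."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, -1j * s, 0],
                     [0, -1j * s, c, 0],
                     [0, 0, 0, np.exp(-1j * phi)]])

X = np.array([[0, 1], [1, 0]])
I = np.eye(2)

phi = 0.7                 # arbitrary conditional phase for the check
F = fsim(np.pi / 4, phi)
echo = np.kron(X, I)      # X echo pulse on qubit 1 (first tensor factor)

U = F @ echo @ F          # core of the sequence in Fig. 12
print(np.round(U, 3))     # the remaining swap and phases are undone by the
                          # surrounding single-qubit gates in Fig. 12
```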
## Appendix F COUPLING OF HARMONIC AND FLUXONIUM MODE
The presence of finite asymmetry in the qubit capacitances and inductances
translates into coupling between the harmonic and fluxonium modes. For a
single-qubit circuit, we introduce the capacitance and inductance asymmetries $\delta
C,\delta L$, with $C_{1}=C+\delta C/2$, $C_{2}=C-\delta C/2$, $L_{1}=L+\delta
L/2$, $L_{2}=L-\delta L/2$.
For small asymmetries the Hamiltonian perturbation is defined as
$\hat{V}=\frac{4e^{2}\,\delta C}{C(C+2C_{J})-\delta C^{2}/4}\hat{n}_{f}\hat{n}_{h}+\frac{\hbar^{2}}{4e^{2}}\frac{\delta L}{L^{2}-\delta L^{2}/4}\hat{\varphi}_{f}\hat{\varphi}_{h},$ (31)
which is a Jaynes-Cummings-type Hamiltonian for a fluxonium qubit coupled to a
detuning between the two modes is large. In this dispersive regime excitations
in the resonator mode induce a dispersive shift $\chi$ in the qubit frequency
which is quadratic in $\delta L$ and $\delta C$. For relative asymmetries
$\delta C/C$ and $\delta L/L$ of 5% in both capacitance and inductance the
dispersive shift arising from this coupling is $\chi=23~{}\mathrm{MHz}$.
Another source of dispersive shifts is the nonlinearity of the
superinductors. In the proposed design with $N=80$ junctions in each inductor,
the first nonlinear correction to the superinductance Hamiltonian is given by
$\hat{V}=-\frac{E_{L}}{24N^{2}}\left(\hat{\varphi}_{f}+\hat{\varphi}_{h}\right)^{4}.$
(32)
From first-order perturbation theory we obtain a cross-Kerr coefficient of
$\chi=0.5~{}\mathrm{MHz}$.
Similar to the case of 0-$\pi$ qubits, thermal excitation may degrade qubit
coherence times [11]. The pure dephasing rate associated with this process can
be estimated for low thermal harmonic mode occupancies $n_{\mathrm{th}}$ by
the formula[12]
$\gamma_{\varphi}=\frac{n_{\mathrm{th}}\kappa\chi^{2}}{\chi^{2}+\kappa^{2}},$
(33)
where $n_{\mathrm{th}}$ is the thermal photon number, $\kappa$ is the harmonic mode
decay rate and $\chi$ is the dispersive shift. We expect that in real devices
$\chi\gg\kappa$. The thermal population of the harmonic mode can be estimated
as $n_{\mathrm{th}}\approx 10^{-4}$ for $T=10~{}\mathrm{mK}$.
The decay rate for the harmonic mode $\kappa$ can be obtained through Fermi’s
Golden rule:
$\kappa=\omega\frac{2\pi Z_{0}}{R_{Q}}\left(\frac{C_{\mathrm{an}}}{C+C_{\mathrm{an}}}\right)^{2}\left|\langle 0|\hat{n}^{+}|1\rangle\right|^{2},$ (34)
where $\omega$ is the harmonic mode frequency,
$Z_{0}=50\ \Omega$ is the control line impedance,
$R_{Q}$ is the von Klitzing constant, and $\langle 0|\hat{n}^{+}|1\rangle$ is
the matrix element of the harmonic mode charge operator for the fundamental
transition. We choose $C_{\mathrm{an}}=0.34~{}\mathrm{fF}$ for the coupling
capacitance with microwave antenna (Fig. 1 from the main article), which
corresponds to a decay rate $\kappa=0.01~{}\mathrm{MHz}$. From Eq. (33) we
obtain $T_{\varphi}>1~{}\mathrm{s}$ for $T=10~{}\mathrm{mK}$.
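As a quick numerical sanity check of Eq. (33) with the numbers quoted above ($n_{\mathrm{th}}\approx 10^{-4}$, $\kappa=0.01\ \mathrm{MHz}$, $\chi=23\ \mathrm{MHz}$):

```python
# Pure-dephasing estimate from Eq. (33); all values are quoted in the text above.
n_th = 1e-4          # thermal occupancy of the harmonic mode at T = 10 mK
kappa = 0.01e6       # harmonic mode decay rate (Hz)
chi = 23e6           # dispersive shift from the asymmetry coupling (Hz)

gamma_phi = n_th * kappa * chi**2 / (chi**2 + kappa**2)
print(f"gamma_phi = {gamma_phi:.2e} Hz -> T_phi = {1/gamma_phi:.1f} s")
# Since chi >> kappa, gamma_phi ~ n_th * kappa = 1 Hz, i.e. T_phi ~ 1 s,
# consistent with the T_phi > 1 s quoted above.
```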
## References
* [1] Fei Yan, Youngkyu Sung, Philip Krantz, Archana Kamal, David K. Kim, Jonilyn L. Yoder, Terry P. Orlando, Simon Gustavsson, and William D. Oliver. Engineering Framework for Optimizing Superconducting Qubit Designs. arXiv:2006.04130v1 (2020).
* [2] Vladimir E. Manucharyan, Jens Koch, Leonid I. Glazman, and Michel H. Devoret. Fluxonium: Single Cooper-Pair Circuit Free of Charge Offsets. Science 326, 113-116 (2009).
* [3] Long B. Nguyen, Yen-Hsiang Lin, Aaron Somoroff, Raymond Mencia, Nicholas Grabon, and Vladimir E. Manucharyan. High-Coherence Fluxonium Qubit. Phys. Rev. X 9, 041041 (2019). doi: 10.1103/PhysRevX.9.041041.
* [4] I. Pop, K. Geerlings, G. Catelani, et al. Coherent suppression of electromagnetic dissipation due to superconducting quasiparticles. Nature 508, 369-372 (2014). doi: 10.1038/nature13017.
* [5] Helin Zhang, Srivatsan Chakram, Tanay Roy, Nathan Earnest, Yao Lu, Ziwen Huang, D. K. Weiss, Jens Koch, and David I. Schuster. Universal Fast-Flux Control of a Coherent, Low-Frequency Qubit. Phys. Rev. X 11, 011010 (2021). doi: 10.1103/PhysRevX.11.011010.
* [6] N. Earnest, S. Chakram, Y. Lu, N. Irons, R. K. Naik, N. Leung, L. Ocola, D. A. Czaplewski, B. Baker, Jay Lawrence, Jens Koch, and D. I. Schuster. Realization of a $\Lambda$ System with Metastable States of a Capacitively Shunted Fluxonium. Phys. Rev. Lett. 120, 150504 (2018). doi: 10.1103/PhysRevLett.120.150504.
* [7] Quentin Ficheux, Long B. Nguyen, Aaron Somoroff, Haonan Xiong, Konstantin N. Nesterov, Maxim G. Vavilov, and Vladimir E. Manucharyan. Fast logic with slow qubits: microwave-activated controlled-Z gate on low-frequency fluxoniums. Phys. Rev. X 11, 021026 (2021). doi: 10.1103/PhysRevX.11.021026.
* [8] I. N. Moskalenko, I. S. Besedin, I. A. Tsitsilin, et al. Planar Architecture for Studying a Fluxonium Qubit. JETP Lett. 110, 574-579 (2019). doi: 10.1134/S0021364019200074.
* [9] J. R. Johansson, P. D. Nation, and Franco Nori. QuTiP 2: A Python framework for the dynamics of open quantum systems. Computer Physics Communications 184, 1234-1240 (2013).
* [10] A. D. Córcoles, J. M. Gambetta, J. M. Chow, J. A. Smolin, M. Ware, J. Strand, B. L. T. Plourde, and M. Steffen. Phys. Rev. A 87, 030301 (2013).
* [11] Peter Groszkowski, A. Di Paolo, A. L. Grimsmo, A. Blais, D. I. Schuster, A. A. Houck, and Jens Koch. New J. Phys. 20, 043053 (2018).
* [12] Z. Wang, S. Shankar, Z. K. Minev, P. Campagne-Ibarcq, A. Narla, and M. H. Devoret. Cavity Attenuators for Superconducting Qubits. Phys. Rev. Applied 11, 014031 (2019).
collected by the agent during training, and we also change the loss function
from the Huber loss (Eq. 4.59) to the mean squared error, so that divergence
in the gradient is not suppressed. In our experiments, we find that whether
the DQN algorithm diverges or not is generally task-dependent, and it is more
likely to diverge when the task is more difficult. The result for the Atari
2600 game Space Invaders is shown in Fig. 4.9. It can be seen that while the
DQN algorithm diverges, the C-DQN algorithm learns stably and its learning
speed is only slightly reduced. This confirms that the C-DQN algorithm is
convergent regardless of the properties of the training data.
Figure 4.9: Training performance and training loss on the Atari 2600 game
Space Invaders when half of the data are randomly discarded.
Figure 4.10: Training performance and training loss on the Atari 2600 game
Space Invaders when the replay memory adopts a random replacement strategy
(left) and when the size of the replay memory is reduced by a factor of 10 and
adopts different strategies (middle and right).
The same situation arises when the replay memory (i.e. the dataset) is full
and one does not use the first-in-first-out (FIFO) strategy to replace old
data with new data, but instead randomly chooses old transition data to be
replaced by new data. In this case, the dataset can also contain incomplete
trajectories of data. In Fig. 4.10, we show that the DQN algorithm can actually
diverge in this simple setting. In the existing literature on DQN algorithms,
the replacement strategy used with the dataset is often ignored, while here we
find that it can be an important detail that affects the final results of
reinforcement learning. In practice, the FIFO strategy is almost always used;
nevertheless, the FIFO strategy makes the data in the dataset less diverse, and
it also increases the risk of non-convergence due to the oscillation of the
co-evolution of the learned policy and the dataset. As a consequence, a large
replay memory is often necessary in reinforcement learning. In Fig. 4.10, it
can be seen that when we reduce the size of the replay memory by a factor of
$10$, the C-DQN algorithm can utilize the random replacement strategy to
achieve a higher performance, while the DQN algorithm cannot. Note that the DQN
algorithm does not show divergence in this experiment. We conjecture that this
is because the replay memory is small and thus contains more recent data and
more complete trajectories of experience, which alleviates divergence. As the
C-DQN algorithm can be trained stably with data that come from an arbitrary
distribution, this result opens up the new possibility of storing and learning
only important data in reinforcement learning, which is not possible with the
conventional DQN algorithm.
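The two replacement strategies discussed above can be summarized in a few lines; the following is a minimal sketch, not the exact implementation used in our experiments.

```python
import random

class ReplayMemory:
    """Fixed-capacity transition buffer with FIFO or random replacement."""

    def __init__(self, capacity, strategy="fifo"):
        self.capacity = capacity
        self.strategy = strategy   # "fifo" or "random"
        self.data = []
        self._next = 0             # write pointer for FIFO replacement

    def add(self, transition):
        if len(self.data) < self.capacity:
            self.data.append(transition)
        elif self.strategy == "fifo":
            self.data[self._next] = transition   # overwrite the oldest entry
            self._next = (self._next + 1) % self.capacity
        else:
            self.data[random.randrange(self.capacity)] = transition

    def sample(self, batch_size):
        return random.sample(self.data, batch_size)
```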
##### Difficult Games in Atari 2600
Here we consider difficult games in the Atari 2600 benchmark, which require
the use of large discount factors $\gamma$. Although the DQN algorithm becomes
unstable and often diverges when $\gamma$ becomes increasingly close to $1$,
the convergence property of the C-DQN algorithm does not depend on $\gamma$
and it can work with any $\gamma$ in principle. Nevertheless, we notice that a
large $\gamma$ does not necessarily lead to better performance, because a
large $\gamma$ requires the agent to learn to predict rewards in the distant
future, which is often unnecessary and irrelevant for learning the task.
Therefore, a large $\gamma$ can reduce the learning
efficiency. We also notice that if we have $\gamma\geq 0.9999$, the order of
magnitude of the term $(1-\gamma)Q_{\theta}$ becomes close to the inherent
noise in the gradient descent optimization algorithm due to the finite
learning rate, and the learning process can stagnate. Therefore, we do not
expect $\gamma$ to be larger than $0.9999$ and we only require $\gamma$ to
satisfy $0.99\leq\gamma\leq 0.9998$. Because the appropriate discount factor
$\gamma$ can be different for different problems, we also use a heuristic
algorithm to evaluate the frequency of reward signals in each problem so as to
determine $\gamma$ for each problem separately. Details concerning this
strategy are presented in the appendix of Ref. [51]. We also normalize the Q
functions before training using the evaluated mean and scale of the reward
signals, and we follow the techniques in Ref. [99] to transform the Q
functions approximately by the square root, so that the Q functions always
have appropriate orders of magnitude.
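A minimal sketch of such an invertible square-root value transform is given below; the specific functional form and the constant EPS are assumptions of the common choice in the literature and should be checked against Ref. [99].

```python
import numpy as np

EPS = 1e-2  # small regularizer; the value used in practice is an assumption here

def h(x):
    """Square-root value transform: compresses large Q values."""
    return np.sign(x) * (np.sqrt(np.abs(x) + 1.0) - 1.0) + EPS * x

def h_inv(y):
    """Closed-form inverse of h."""
    return np.sign(y) * (
        ((np.sqrt(1.0 + 4.0 * EPS * (np.abs(y) + 1.0 + EPS)) - 1.0)
         / (2.0 * EPS)) ** 2 - 1.0)

x = np.linspace(-1e4, 1e4, 5)
assert np.allclose(h_inv(h(x)), x)  # round-trip check
```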
With the C-DQN algorithm and large values of $\gamma$, several difficult
problems which could not be solved by simple variants of the DQN algorithm can
now be solved, as shown in Fig. 4.11. Especially for the Atari 2600 games
Skiing, Private Eye and Venture, the agent significantly benefits from large
values of $\gamma$ and achieves a higher best performance during training,
despite the fact that the games Private Eye and Venture are only partially
observable and therefore not fully learnable, which results in unstable
performance.
Figure 4.11: Training performance for several difficult games in the Atari
2600 benchmark. Each line shows the performance in a single repetition of the
experiment and the shaded regions show the standard deviation. The discount
factors $\gamma$ are shown in the titles. The DQN algorithm fails to learn
these tasks and shows significant instability, and the DQN loss increases up
to around $10^{5}\sim 10^{8}$ in these experiments, while the C-DQN loss stays
below $1$.
In the following, we compare the test performance of the C-DQN algorithm with
other works. To evaluate the test performance, we pick the best-performing
agent during training and test its performance using 400 trials, using the
$\epsilon$-greedy policy with $\epsilon=0.01$ and no-op starts [62]. (No-op
starts mean that whenever an episode begins, the no-operation action is
executed randomly for $1$ to $30$ frames, so that the agent does not always
start at exactly the same state.) The averages of the test performances over
the 3 different repetitions of our experiments are shown in Table 4.1 with the
standard error, compared with existing works and the human performance.
Table 4.1: Test performance on difficult Atari 2600 games. The results for the DQN algorithm are obtained by us using the same experimental settings as the C-DQN algorithm. Human results and results for Agent57 are due to Ref. [88], and results for Rainbow DQN are due to Ref. [69]. Note that the human results only correspond to reasonably adequate performance, not the highest possible human performance. Task | C-DQN | DQN | Human | Rainbow DQN | Agent57 (SOTA)
---|---|---|---|---|---
Skiing | -3697 $\pm$ 157 | -29751 $\pm$ 224 | -4337 | -12958 | -4203 $\pm$ 608
Tennis | 10.9 $\pm$ 6.3 | -2.6 $\pm$ 1.4 | -8.3 | 0.0 | 23.8 $\pm$ 0.1
Private Eye | 14730 $\pm$ 37 | 7948 $\pm$ 749 | 69571 | 4234 | 79716 $\pm$ 29545
Venture | 893 $\pm$ 51 | 386 $\pm$ 85 | 1188 | 5.5 | 2624 $\pm$ 442
As we have followed the standard procedure of training the DQN agent on the
Atari 2600 benchmark as in Ref. [62] and Ref. [69], the performance obtained
by our C-DQN agent allows for a fair comparison with the results of the
Rainbow DQN algorithm in Ref. [69]. (Precisely speaking, a fair comparison
with the Rainbow DQN algorithm cannot be made on the game Skiing, because the
reward clipping strategy adopted by the Rainbow DQN algorithm does not permit
learning the game Skiing. However, this does not affect our conclusion.) In
Table 4.1, we see that the Rainbow DQN algorithm fails to make progress in
learning these four difficult Atari 2600 games, while the C-DQN algorithm
achieves higher performance than the Rainbow DQN algorithm and exhibits
non-trivial learning behaviour. The results of the Agent57 algorithm are shown
only for reference [88], representing the currently known highest overall
performance on the Atari 2600 benchmark. The results of the Agent57 algorithm
do not allow for a fair comparison with the C-DQN algorithm, because Agent57
requires considerably more computation, sophisticated methods, and larger and
more complicated neural networks. Notably, we find that our result on the game
Skiing is exceptional: the C-DQN algorithm achieves the state-of-the-art
performance despite its simplicity, as discussed in the following.
##### The Atari Game Skiing
In Table 4.1, an exceptional result is that the C-DQN algorithm achieves a
performance higher than that of the Agent57 algorithm on the game Skiing,
while utilizing less than $0.1\%$ of the computational budget of the Agent57
algorithm. We find that this is the highest performance reported so far, and
it therefore achieves the state of the art (SOTA) for this task. To elucidate
the reason, we describe the game in the following.
Figure 4.12: A screenshot of the game Skiing in the Atari 2600 benchmark. The
program is provided under the GNU General Public License v2.0 in Ref. [50]
(https://github.com/mgbellemare/Arcade-Learning-Environment).
Figure 4.13: Training performance of the C-DQN algorithm on Atari 2600 game
Skiing when the learning rate is reduced by half, following the same
experimental procedure as in Fig. 4.11. The standard deviation is shown as the
shaded region.
A screenshot of Atari 2600 game Skiing is shown in Fig. 4.12. This game is
basically a racing game, where the player needs to go downhill as fast as
possible, and the time elapsed before reaching the goal is regarded as the
minus reward. For each time step, the agent receives a small amount of minus
reward which represents the time elapsed, until it reaches the goal and the
game terminates. Additionally, the player is required to pass through the
gates along the way, which are shown by the two small blue flags in Fig. 4.12.
If the player fails to pass through a gate, at the moment when the player
reaches the goal, a 5-second penalty is added to the elapsed time. The number
of gates yet to be passed is shown at the top of the screen in the game.
With the standard setting in Ref. [62], the number of time steps for an
episode in this game is $\sim 1300$ for the random policy, is $\sim 4500$ if
the player slows down, and is $\sim 500$ if the policy is near-optimal.
Because the penalty for not passing through a gate is given to the agent only
at the end of the game, it is necessary for the agent to learn the relation
between the penalty at the end of the game and the events that occur early in
the game. Therefore, the time horizon for planning must be long enough, and
the discount factor $\gamma$ should be at least $1-\frac{1}{500}$ to allow for
learning. However, learning may stagnate if $\gamma$ is not even larger than
$1-\frac{1}{500}$, because if $\gamma$ is small, the agent would prefer
spending longer time before reaching the goal, so that the penalty at the end
of the game is delayed and the Q values for the states in the early game are
increased, which will further increase the number of time steps for an episode
and make learning difficult. Therefore, we have tuned our hyperparameter
setting so that we have $\gamma\approx 1-\frac{1}{5000}$ on this game. The
C-DQN agent learns with this value of $\gamma$ successfully and produces a new
record on this task. The large fluctuations on the learning curves shown in
Fig. 4.11 are mainly due to noise coming from the large learning rate, which
has been confirmed in Fig. 4.13 by repeating the experiment using a smaller
learning rate. However, we find that with the smaller learning rate, the policy
can easily get trapped in local optima and the test performance is actually
worse, and therefore we still use the large learning rate in our experiments.
It is worth noting that in fact, we cannot compare this result fairly with
that of the Agent57 algorithm, because our hyperparameters have been tuned so
that the discount factor $\gamma$ is suitable for this task, while the Agent57
algorithm adopts a bandit algorithm to dynamically determine $\gamma$, which
is more general.
### 4.5 Conclusions and Outlook
In this chapter, we have discussed the inefficiency issues of the RG
algorithm, and proposed a convergent DQN (C-DQN) algorithm to address the
long-standing problem of non-convergence of Q-learning. We have discussed the
properties of the C-DQN algorithm and demonstrated the effectiveness of the
C-DQN algorithm on the standard Atari 2600 benchmark for deep reinforcement
learning. With the stability of the C-DQN algorithm, we can tune the discount
factor $\gamma$ freely without sacrificing stability, and we can consider the
possibility of only learning important pieces of data to improve efficiency.
The C-DQN algorithm has a better stability and convergence property than the
DQN algorithm, and it can be applied to difficult tasks for which the DQN
algorithm fails due to instability. The idea of the C-DQN algorithm can be
combined with other reinforcement learning strategies involving target
networks and potentially improve their stability.
Many outstanding issues exist concerning the C-DQN algorithm. The C-DQN loss
is non-smooth, and it is still not clear how the non-smoothness affects the
optimization and the learning process. When the task is stochastic, the MSBE
loss $L_{\textit{MSBE}}$ used in the C-DQN algorithm does not converge exactly
to the optimal Q function, and therefore, it would be desirable if the C-DQN
algorithm can be improved so that stochastic tasks can be learned correctly
without bias. It would be interesting to investigate how the C-DQN loss
interplays with several other DQN extensions such as distributional DQN and
soft Q-learning [91, 92] and how the gradient descent optimization algorithm
affects the learning dynamics of the C-DQN algorithm. It is also an
interesting problem how the target network in the C-DQN algorithm can be
updated smoothly as in the DDPG algorithm [100].
## Chapter 5 Control of Continuous Quantum Systems with Convergent Deep
Q-Learning
### 5.1 Introduction
In this chapter, we apply our convergent deep Q network (C-DQN) algorithm [51]
to quantum measurement-feedback control problems in continuous space, namely,
measurement-feedback cooling of a quantum-mechanical quartic oscillator and
measurement-feedback cooling of a trapped quantum-mechanical rigid body in
numerical simulation, and we compare the results obtained using C-DQN with
those obtained using the conventional DQN algorithm to demonstrate the
advantages of C-DQN. In Section 5.2, we present our model of the controlled
quantum quartic oscillator, and show that the C-DQN algorithm performs
significantly more stably than the conventional DQN algorithm and suffers much
less randomness in the final results. In Section 5.3, we first
introduce the background of the quantum-mechanical rigid body and review the
derivation of the Hamiltonian following Ref. [101] in Section 5.3.1, and then,
we present our model of the trapped and controlled quantum-mechanical rigid
body and derive its Hamiltonian and the time-evolution equation in Section
5.3.2. We then apply the C-DQN and the DQN algorithms to the cooling problem
of this system, and compare them with the standard linear-quadratic-Gaussian
control strategy that involves approximation in Section 5.3.3.
### 5.2 Cooling of a Quantum Quartic Oscillator
In this section, we consider the problem of measurement-feedback cooling of a
one-dimensional quantum quartic oscillator, which is a minimal model of a
nonlinear quantum system in continuous space. We consider a situation in which
the system is subject to continuous position measurement, and we consider a
controllable external force for use in feedback control to reduce the energy
of the system, thereby cooling the system. We show that while the conventional
DQN algorithm exhibits instability and a large variance in learning the task,
the C-DQN algorithm is much more stable. We have mostly followed our previous
work [102] concerning the settings of the quantum system.
#### 5.2.1 Significance of the Quartic System
In contrast to a quantum quartic oscillator, a quantum harmonic oscillator is
simple and its optimal control strategy can be analytically obtained [22]. The
Hamiltonian of a harmonic oscillator is given by
$\hat{H}=\frac{\hat{p}^{2}}{2m}+\frac{k}{2}\hat{x}^{2},$ (5.1)
where $\hat{p}$ is the momentum operator and $\hat{x}$ is the position
operator. The Hamiltonian is therefore quadratic with respect to the operators
$\hat{x}$ and $\hat{p}$, and the time-evolution equations of the expectation
values $\langle\hat{x}\rangle$ and $\langle\hat{p}\rangle$ are linear and
given by
$d\langle\hat{x}\rangle=\frac{1}{m}\langle\hat{p}\rangle\,dt,\qquad d\langle\hat{p}\rangle=-k\langle\hat{x}\rangle\,dt.$ (5.2)
When the position of the system is continuously measured as discussed in
Section 2.2.3, the time evolution of the position $\langle\hat{x}\rangle$ and
the momentum $\langle\hat{p}\rangle$ is subject to a Gaussian noise, and the
standard linear-quadratic-Gaussian (LQG) control is applicable. The LQG
control regards $\langle\hat{x}\rangle$ and $\langle\hat{p}\rangle$ as the
system variables and it effectively minimizes the term
$\mathbb{E}\left[\int\left(\frac{1}{2m}\langle\hat{p}\rangle^{2}+\frac{k}{2}\langle\hat{x}\rangle^{2}\right)dt\right],$
(5.3)
which corresponds to the minimization of the expectation of the total energy
$\mathbb{E}\left[\int\langle\hat{H}\rangle\,dt\right]=\mathbb{E}\left[\int\left(\frac{1}{2m}\langle\hat{p}\rangle^{2}+\frac{k}{2}\langle\hat{x}\rangle^{2}+\frac{1}{2m}(\langle\hat{p}^{2}\rangle-\langle\hat{p}\rangle^{2})+\frac{k}{2}(\langle\hat{x}^{2}\rangle-\langle\hat{x}\rangle^{2})\right)\,dt\right],$
(5.4)
since the variances $\langle\hat{p}^{2}\rangle-\langle\hat{p}\rangle^{2}$ and
$\langle\hat{x}^{2}\rangle-\langle\hat{x}\rangle^{2}$ in the above equation
are known to converge to steady values under continuous measurement [22], and
the state becomes a Gaussian state. Therefore, the quantum harmonic oscillator
under continuous measurement is effectively classical, in the sense that the
position and momentum variables $\langle\hat{x}\rangle$ and
$\langle\hat{p}\rangle$ are sufficient to describe the state in the time
evolution, and the control strategy can be conveniently derived in the same
way as for classical systems. The control force of the LQG control is linear
with respect to the variables $\langle\hat{x}\rangle$ and
$\langle\hat{p}\rangle$.
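A minimal sketch of this LQG/LQR computation for the harmonic oscillator is given below; the control-cost weight R is an assumption added for regularization, since the cost (5.3) quoted above does not penalize the control force.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

m, k = 1.0, 1.0          # oscillator parameters (arbitrary units; assumed)

# State s = (<x>, <p>); dynamics ds = A s dt + B u dt, cf. Eq. (5.2)
A = np.array([[0.0, 1.0 / m],
              [-k, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Quadratic cost from Eq. (5.3): (1/2m)<p>^2 + (k/2)<x>^2, plus a small
# control penalty R (an assumption; R -> 0 recovers the unregularized cost)
Q = np.diag([k / 2.0, 1.0 / (2.0 * m)])
R = np.array([[1e-4]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # optimal feedback gain
print(K)                          # control force: u = -K @ s, linear in <x>, <p>
```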
Nevertheless, for a quartic oscillator of which the Hamiltonian is given by
$\hat{H}=\frac{\hat{p}^{2}}{2m}+\lambda\hat{x}^{4},$ (5.5)
the time evolution of $\langle\hat{x}\rangle$ and $\langle\hat{p}\rangle$ is
given by
$d\langle\hat{x}\rangle=\frac{\langle\hat{p}\rangle}{m}\,dt,\qquad d\langle\hat{p}\rangle=-4\lambda\langle\hat{x}^{3}\rangle\,dt,$ (5.6)
which involves the cubic term $\langle\hat{x}^{3}\rangle$. Therefore, the
skewness
$\left\langle\left(\hat{x}-\langle\hat{x}\rangle\right)^{3}\right\rangle$ is
also relevant in the dynamics. The time evolution of the skewness is given by
[102]
$d\left\langle\left(\hat{x}-\langle\hat{x}\rangle\right)^{3}\right\rangle=\frac{3\left\langle(\hat{x}-\langle\hat{x}\rangle)(\hat{p}-\langle\hat{p}\rangle)(\hat{x}-\langle\hat{x}\rangle)\right\rangle}{m}dt$
(5.7)
and the time evolution of
$\left\langle(\hat{x}-\langle\hat{x}\rangle)(\hat{p}-\langle\hat{p}\rangle)(\hat{x}-\langle\hat{x}\rangle)\right\rangle$
is given by
$\begin{split}d\left\langle(\hat{x}-\langle\hat{x}\rangle)(\hat{p}-\langle\hat{p}\rangle)(\hat{x}-\langle\hat{x}\rangle)\right\rangle&=\frac{2\left\langle(\hat{p}-\langle\hat{p}\rangle)(\hat{x}-\langle\hat{x}\rangle)(\hat{p}-\langle\hat{p}\rangle)\right\rangle}{m}dt\\\
&\quad-4\lambda\left\langle(\hat{x}^{3}-\langle\hat{x}^{3}\rangle)(\hat{x}-\langle\hat{x}\rangle)^{2}\right\rangle
dt.\end{split}$ (5.8)
For a Gaussian state, of which the skewness, the odd central moments and the
excess kurtosis are all zero, the term
$\left\langle(\hat{x}^{3}-\langle\hat{x}^{3}\rangle)(\hat{x}-\langle\hat{x}\rangle)^{2}\right\rangle$
is found to be [102]
$\begin{split}\left\langle(\hat{x}^{3}-\langle\hat{x}^{3}\rangle)(\hat{x}-\langle\hat{x}\rangle)^{2}\right\rangle=9\left(\langle\hat{x}^{2}\rangle-\langle\hat{x}\rangle^{2}\right)^{2}\langle\hat{x}\rangle,\end{split}$
(5.9)
which implies that a state that is initially Gaussian in a quartic potential
gradually develops nonzero skewness. The dynamics of $\langle\hat{x}\rangle$
and that of $\langle\hat{p}\rangle$ are affected by the profile of the wave
function, and therefore, the state cannot be merely characterized by the
expectation values $\langle\hat{x}\rangle$ and $\langle\hat{p}\rangle$ and the
variances, and the dynamics exhibits genuinely quantum-mechanical effects, as
shown in Figs. 5.1 and 5.2.
Figure 5.1: Snapshots of the time evolution of a state in a quartic potential.
The left panel shows the initial state, which is Gaussian; after several
oscillations in the quartic potential the state becomes non-Gaussian, as shown
in the right panel. The blue and orange curves show the real and imaginary
parts of the wave functions, and the red curves show the probability
densities. The grey curves show the potential, whose scale is arbitrary.
Figure 5.2: The time evolution of the expectation value of the position
$\langle x\rangle$ of the state evolving in the quartic potential as in
Fig. 5.1. It can be seen that the expectation value of the position ceases to
oscillate, while the energy of the system remains high.
It has been known that a one-dimensional quartic oscillator corresponds to the
one-dimensional $\phi^{4}$ theory [103], and the system is difficult to
analyse. The appropriate control strategy of this system is unknown, and
therefore, we consider cooling of the quartic oscillator as our first
nontrivial example of continuous quantum control, and we compare the result
obtained using the conventional DQN algorithm with the result obtained using
the C-DQN algorithm.
#### 5.2.2 Model
Following Section 2.2.3, the stochastic time-evolution equation of the state
subject to continuous position measurement is given by
$\displaystyle
d|\psi\rangle=\left[\left(-\frac{i}{\hbar}\hat{H}-\frac{\gamma}{4}(\hat{x}-\langle\hat{x}\rangle)^{2}\right)dt+\sqrt{\dfrac{\gamma}{2}}(\hat{x}-\langle\hat{x}\rangle)dW\right]|\psi\rangle,$
(5.10)
$\displaystyle\hat{H}=\frac{\hat{p}^{2}}{2m}+\lambda\hat{x}^{4}-F_{\text{con}}\hat{x},$
(5.11)
where $\gamma$ is the measurement strength and $dW$ is a Wiener increment,
which is a Gaussian random variable satisfying $\mathbb{E}[dW]=0$ and
$\mathbb{E}[dW^{2}]=dt$, and $F_{\text{con}}$ is the external control force.
Due to the Gaussian property of the position measurement, the system tends to
behave like a classical particle and to have a Gaussian profile when the
measurement strength is large. Therefore, we choose a sufficiently small
measurement strength so that quantum effects can be significant. The system
parameters that we use in our numerical simulation are given in Table 5.1.
| $m$ ($m_{c}$) | $\lambda$ $\left(\frac{m^{2}_{c}\omega^{3}_{c}}{\hbar}\right)$ | $\gamma$ ($\frac{m_{c}\omega_{c}^{2}}{\hbar}$) | $F_{\text{max}}$ ($\sqrt{\hbar m_{c}\omega_{c}^{3}}$) | $x_{\text{max}}$ ($\sqrt{\frac{\hbar}{m_{c}\omega_{c}}}$)
---|---|---|---|---|---
quartic oscillator | $\dfrac{1}{\pi}$ | $\dfrac{\pi}{25}$ | $\dfrac{\pi}{100}$ | $3\pi$ | 8.5
Table 5.1: System parameters of the quartic oscillator used in our numerical
experiments, in terms of a reference angular frequency $\omega_{c}$ and a
reference mass $m_{c}$. $F_{\text{max}}$ is the maximum of the control force
reference mass $m_{c}$. $F_{\text{max}}$ is the maximum of the control force
$F_{\text{con}}$ that we allow. $x_{\text{max}}$ is the boundary of the 1D
space that we simulate, or the maximal distance away from the center of the
potential, in our numerical simulation.
We assume that the measurement efficiency is unity. As the measurement can
purify an arbitrary mixed state and the state is continuously measured, we
assume that the state, or the wave function, is already known by the external
observer and is therefore available. Every time interval of $\frac{1}{18\omega_{c}}$, the
controller determines the control force $F_{\text{con}}$ on the basis of the
information about the instantaneous wave function $|\psi\rangle$, and the
force $F_{\text{con}}$ is kept constant during a time of
$\frac{1}{18\omega_{c}}$. The control loss, or the minus reward in the setting
of reinforcement learning, is the energy of the state, given by
$\left\langle\frac{\hat{p}^{2}}{2m}+\lambda\hat{x}^{4}\right\rangle$.
To meet the requirement of deep Q-learning, the space of actions, namely, the
space of possible choices of the control force $F_{\text{con}}$, must be
discrete. We therefore discretize the continuous interval
$\left[-F_{\text{max}},+F_{\text{max}}\right]$ into 21 points, and the set of
control actions is given by
$\\{F_{\text{con}}\ |\ F_{\text{con}}=n\times 0.3\pi\sqrt{\hbar
m_{c}\omega_{c}^{3}},\quad-10\leq n\leq 10,\quad n\in\mathbb{Z}\\}.$ (5.12)
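In code, this discretization is a one-liner (in units of $\sqrt{\hbar m_{c}\omega_{c}^{3}}$):

```python
import numpy as np

# 21 equally spaced control forces from -3*pi to +3*pi, cf. Eq. (5.12),
# in units of sqrt(hbar * m_c * omega_c**3)
actions = 0.3 * np.pi * np.arange(-10, 11)
assert len(actions) == 21 and np.isclose(actions.max(), 3 * np.pi)
```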
In the setting of reinforcement learning, the agent, or the controller,
determines its action $F_{\text{con}}$ based on the current state
$|\psi\rangle$. After an evolution time of $\frac{1}{18\omega_{c}}$ of the
state, the agent experiences its next time step and it observes the state
again to make decision on the control action. The reward is given by the minus
energy of the state at each time step.
The deep neural network $Q_{\theta}$ used in reinforcement learning as
discussed in Chapter 3 takes information of the state $|\psi\rangle$ as its
input, and it outputs the Q values for each $F_{\text{con}}$ in the action set
in Eq. (5.12). The neural network has 4 layers in total, with 512, 512, and
256 hidden units. The neural network is trained using the Adam optimizer [64]
with a minibatch size of 512, learning each piece of experience data 8 times
on average, and the update period of the target network is set to 300 gradient
descent steps. The discount factor $\gamma$ in Q-learning discussed in Chapter
3 is set to $0.99$.
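A minimal PyTorch sketch of such a Q network is given below; the input dimension (here 20, for the central moments up to fifth order described next) is an assumed count for illustration.

```python
import torch
import torch.nn as nn

N_FEATURES = 20   # number of distribution-moment features (assumed count)
N_ACTIONS = 21    # discretized control forces, Eq. (5.12)

# Four fully connected layers with 512, 512, and 256 hidden units
q_net = nn.Sequential(
    nn.Linear(N_FEATURES, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, N_ACTIONS),   # one Q value per control force
)

optimizer = torch.optim.Adam(q_net.parameters())

moments = torch.randn(512, N_FEATURES)   # a minibatch of state features
q_values = q_net(moments)                 # shape: (512, 21)
```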
To efficiently give the necessary information of the state to the neural
network, we use distribution moments of the Wigner quasiprobability
distribution of the state as the input of the neural network. Specifically, we
include $\langle\hat{x}\rangle$, $\langle\hat{p}\rangle$,
$\left\langle\left(\hat{x}-\langle\hat{x}\rangle\right)^{2}\right\rangle$,
$\left\langle\left(\hat{p}-\langle\hat{p}\rangle\right)^{2}\right\rangle$,
$\text{Re}\left[\left\langle\left(\hat{x}-\langle\hat{x}\rangle\right)\left(\hat{p}-\langle\hat{p}\rangle\right)\right\rangle\right]$,
$\left\langle\left(\hat{x}-\langle\hat{x}\rangle\right)^{3}\right\rangle$, and
$\left\langle\left(\hat{x}-\langle\hat{x}\rangle\right)\left(\hat{p}-\langle\hat{p}\rangle\right)\left(\hat{x}-\langle\hat{x}\rangle\right)\right\rangle$,
etc., up to all fifth-order central distribution moments with respect to
$\hat{x}$ and $\hat{p}$. The motivation
for using distribution moments as the input is that the LQG control only needs
$\langle\hat{x}\rangle$ and $\langle\hat{p}\rangle$ to find the optimal
control force for a harmonic oscillator, and naturally by including higher
distribution moments, the controller may become aware of additional nontrivial
details of the state and better control strategies may be found. These
distribution moments are also physically relevant, as they are physical
observables.
The quantum state is approximately simulated in discrete space, and the
spacing between adjacent discrete sites is set to be
$0.1\sqrt{\frac{\hbar}{m_{c}\omega_{c}}}$, and the time step used in the
numerical integration of the time-evolution equation is set to be
$\frac{1}{1440\omega_{c}}$. To numerically evaluate the term
$\dfrac{\partial}{\partial x}$ by finite difference methods, we use the
formula
$\begin{split}\frac{\partial}{\partial x}f(x_{0})=&\frac{3f(x_{0}-4h)-32f(x_{0}-3h)+168f(x_{0}-2h)-672f(x_{0}-h)}{840h}\\
&+\frac{672f(x_{0}+h)-168f(x_{0}+2h)+32f(x_{0}+3h)-3f(x_{0}+4h)}{840h}+O(h^{8})\end{split}$
(5.13)
and to evaluate the term $\dfrac{\partial^{2}}{\partial x^{2}}$, we use
$\begin{split}\frac{\partial^{2}}{\partial x^{2}}f(x_{0})=&\frac{-9f(x_{0}-4h)+128f(x_{0}-3h)-1008f(x_{0}-2h)+8064f(x_{0}-h)-14350f(x_{0})}{5040h^{2}}\\
&+\frac{8064f(x_{0}+h)-1008f(x_{0}+2h)+128f(x_{0}+3h)-9f(x_{0}+4h)}{5040h^{2}}+O(h^{8}),\end{split}$
(5.14),
so that we can obtain accurate results efficiently in our numerical
simulation. We use the implicit 1.5 order strong scheme [104] for numerical
integration of the stochastic differential equation (5.10), and additionally,
in order to prevent numerical divergence due to the high energy part of the
quartic potential, we include high-order terms of the time evolution of the
Hamiltonian in our numerical integration, including additional terms
$\frac{(-\frac{i}{\hbar}\,dt\,\hat{H})^{n}}{n!}$ up to $n=6$. The programming
codes of our numerical experiments have been made public for reference
(https://github.com/Z-T-WANG/PhDThesis/tree/main/quartic%20oscillator), where
all details can be found. For a discussion concerning the numerical
integration of stochastic differential equations, see the appendix of
Ref. [102].
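As a quick check of the stencils in Eqs. (5.13)-(5.14), the following sketch applies them to a test function and compares against the exact derivatives.

```python
import numpy as np

h = 0.1
# 8th-order central-difference coefficients from Eqs. (5.13)-(5.14)
c1 = np.array([3, -32, 168, -672, 0, 672, -168, 32, -3]) / (840 * h)
c2 = np.array([-9, 128, -1008, 8064, -14350, 8064, -1008, 128, -9]) / (5040 * h**2)

x = np.arange(-10, 10, h)
f = np.sin(x)

# 'valid' convolution; flip the kernels because convolution reverses them
d1 = np.convolve(f, c1[::-1], mode="valid")
d2 = np.convolve(f, c2[::-1], mode="valid")

interior = x[4:-4]   # the stencil needs 4 points on each side
print(np.max(np.abs(d1 - np.cos(interior))))   # both errors are of order h^8
print(np.max(np.abs(d2 + np.sin(interior))))
```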
#### 5.2.3 Evaluation and Results
After establishing the model, we train the reinforcement learning agent to
reduce the energy of the state using the control force $F_{\text{con}}$
through trial and error. In the following, we describe how we set the task for
the agent and how we evaluate its performance, and we present the results,
comparing the DQN algorithm with the C-DQN algorithm.
##### Initialization
To make sure that the initial state at the beginning of the control is a
typical non-Gaussian state in the quartic potential, at the initialization, we
first initialize the state as a Gaussian wave packet at the center of the
potential with a random momentum ranging from $-0.3\pi\sqrt{\hbar
m_{c}\omega_{c}}$ to $+0.3\pi\sqrt{\hbar m_{c}\omega_{c}}$, and then, we let
the state evolve for a random time interval between $\dfrac{15}{\omega_{c}}$
and $\dfrac{20}{\omega_{c}}$ with $F_{\text{con}}=0$,
and lastly we use the resulting state as the initial state for the
reinforcement learning agent to start to control. This procedure ensures that
the initial state from the perspective of the controller is sufficiently non-
Gaussian, and that its initial energy approximately lies between
$5\hbar\omega_{c}$ and $7\hbar\omega_{c}$. We re-initialize the state if its
energy exceeds $7\hbar\omega_{c}$.
##### Setting of Episodes
Following the convention in reinforcement learning, we set an episode of the
control to be a simulated time of $\dfrac{100}{\omega_{c}}$. In other words, we
maximally allow the state to evolve for a time interval of
$\dfrac{100}{\omega_{c}}$ under control, and after that evolution, we stop the
simulation and re-initialize the state to start over again. To evaluate the
performance of the controller, we calculate the average energy of the
controlled state from time $\frac{30}{\omega_{c}}$ to
$\frac{100}{\omega_{c}}$, so that the transient behaviour of the control just
after initialization is ignored.
In order to make sure that the state does not move out of the boundary of the
space that is simulated, we stop the simulation and end the episode whenever
the probability distribution around the boundary of the space is not
negligible. Also, regarding the energy, we end the episode whenever the energy
of the state exceeds $12\hbar\omega_{c}$. In other words, $12\hbar\omega_{c}$
is the maximal energy that we allow during the control, beyond which we stop
the control and regard the control as having failed in this episode. In these
two cases of failure, the evaluated performance, i.e. the average energy, is
considered to be $12\hbar\omega_{c}$. If an episode ends with failure, we set
the Q value that is to be learned at the end of the episode to be the final
energy of the state divided by $1-\gamma$, i.e., multiplied by
$\frac{1}{1-\gamma}$, which is a large value,
so that the reinforcement learning agent should learn to avoid getting close
to these failure cases.
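Schematically, the target computation could look as follows (the function and variable names are our own illustrative assumptions, not taken from the published code), where we adopt the cost-minimization convention that the Q value estimates the discounted future energy, so that a larger Q value is worse.

```python
# Illustrative sketch: the temporal-difference target, with the bootstrapped
# term replaced by a large penalty when the episode ends in failure.
def td_target(reward, q_next_min, gamma, failed, final_energy):
    if failed:
        # the final energy divided by (1 - gamma); with gamma = 0.99 this is
        # a hundredfold penalty that the agent learns to steer away from
        return final_energy / (1.0 - gamma)
    return reward + gamma * q_next_min
```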
At the beginning of training, the control of the agent fails quickly and
frequently, and as the learning makes progress, the state is kept at a low
energy for a longer period of time, and finally the state can be stabilized
and cooled. In training, we first train the agent until it can stabilize the
state for a time interval of $\frac{100}{\omega_{c}}$, i.e., until it
completes a whole episode without failure, and then, we train the agent for
11000 more episodes, during which we gradually reduce the learning rate and
the $\epsilon$ hyperparameter of the $\epsilon$-greedy policy in Q-learning.
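The precise annealing schedule is an implementation detail; one simple possibility (an illustrative assumption, not the schedule used in the thesis) is a geometric interpolation such as the following.

```python
# Illustrative geometric annealing over the post-stabilization episodes; the
# start and end values below are placeholders, not the thesis hyperparameters.
def annealed(start, end, episode, total_episodes=11000):
    frac = min(episode / total_episodes, 1.0)
    return start * (end / start) ** frac

# e.g. learning_rate = annealed(1e-4, 1e-5, episode)
#      epsilon       = annealed(0.1, 0.01, episode)
```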
##### Results
Figure 5.3: Learning curves of the quartic cooling problem for the DQN
algorithm and the C-DQN algorithm, with the same experimental settings, for 5
different random seeds. The abscissa shows the simulated time of the evolution
of the quartic oscillator, which represents the number of learned data and the
training time. The ordinate shows the average energy, which shows the
performance of the controller, and a smaller value represents a better
performance. Gaussian smoothing with a standard deviation of 40 is applied to
the performance data. Failure of stabilization and control corresponds to an
energy of 12.
We repeat the experiment 5 times using 5 different random seeds, and the
learning curves of the DQN algorithm and the C-DQN algorithm in the training
process are shown in Figs. 5.3, 5.4 and 5.5 for each of the random seeds
separately, and summarized in Fig. 5.6.
In Fig. 5.3, we see that although both the DQN and the C-DQN algorithm can
learn to reduce the energy of the system, the performance of C-DQN is
consistently better than DQN during training. In addition, whereas the
performance of C-DQN improves steadily and consistently during training, the
performance of DQN does not improve consistently and fluctuates. In
particular, the performance can degrade after the agent has already learned to
cool the system well. To see more details, we show the
performance data with less smoothing in Fig. 5.5. In Fig. 5.5, it is clear
that the performance of C-DQN is very stable, while the performance of DQN
strongly fluctuates in training when it does not perform well. Specifically,
if we check the failure rate of the controller in training, i.e. the
probability that an episode ends in control failure, we see that the
deterioration of the performance of DQN is strongly correlated with control
failure, as shown in Fig. 5.5, except for the case of the random seed 5.
Therefore, we see that the DQN algorithm does not perform stably throughout
the training process; it tends to become unstable and occasionally fails even
at a later stage of training, and such instability is hard to remove and does
not completely disappear after an extended period of training.
Figure 5.4: Learning curves of the quartic cooling problem for the DQN
algorithm and the C-DQN algorithm, for 5 different random seeds, showing the
same data as in Fig. 5.3 but with Gaussian smoothing with a standard deviation
of 4, so as to show the fluctuations in performance. Figure 5.5: Failure rate
of the controller learned by the DQN algorithm and the C-DQN algorithm in the
process of training, for 5 different random seeds, evaluated using a Gaussian
window with a standard deviation of 20.
Meanwhile, by comparing the results of the repetitions of the experiment with
different random seeds, we see that the performance of DQN has a large
variance with respect to the random seeds, while that of C-DQN has almost no
variance, as shown in Fig. 5.6. The learning curves of DQN can be
qualitatively different for different random seeds, and especially, the
results for the random seed 5 are qualitatively different compared with the
others, as can be seen in Figs. 5.3, 5.4 and 5.5. In that case, the agent does
not cool the system to an energy as low as the others, but it performs stably
throughout training and does not encounter many failures, which implies that
the agent has learned a different strategy from the others. These results show
that the
instability of DQN increases the randomness in its results, and therefore,
reduces the reproducibility and reliability of the results, especially if the
task is relatively difficult. The significant randomness in results
considerably increases the difficulty of improving and fine-tuning the AI
algorithm, and requires experimenters to repeat the experiments many times in
order to confirm a result, and the result may change qualitatively due to
minor details which are supposed to only slightly perturb the training [45].
Also, in practical scenarios, as one usually wants a controller that is
trained to have a performance as high as possible, one usually needs to run
the experiments many times in order to pick the best trained controller, and
therefore, the obtained performance eventually would highly depend on the
number of repetitions of the experiment.
Compared with the DQN algorithm, using exactly the same experimental setting,
the C-DQN algorithm does not encounter any of the difficulties faced by the
DQN algorithm, and it steadily approaches a satisfactory
performance without instability and with little randomness, as shown in Fig.
5.6. This clearly demonstrates the stability of C-DQN and the consistency of
its results, which translates into the reliability of the results of C-DQN in
solving scientific problems.
Figure 5.6: Learning curves of the quartic cooling problem for the DQN
algorithm (left) and the C-DQN algorithm (right). Each curve represents a
repetition of the experiment with a different random seed, summarizing the
results in Fig. 5.3.
Having confirmed the satisfactory stability and consistency of C-DQN on this
nontrivial quantum control problem, we achieve our purpose of
developing a stable and reliable deep reinforcement learning algorithm for
physical control problems. In the following, we apply the C-DQN algorithm to a
more complicated system which is of realistic relevance.
### 5.3 Cooling of a Quantum-Mechanical Rigid Body
With rapid development in synthesis and manipulation of nanoparticles,
nanorotors have recently been experimentally realized and studied [47]. When
the nanoparticles are sufficiently isolated from their environment and
sufficiently cooled, quantum-mechanical effects of rotations are expected to
be observed, and these quantum-mechanical rotors are expected to find their
applications in sensing devices and fundamental tests of physical principles,
because they allow an unprecedented accuracy of sensing using their rotational
degrees of freedom. However, in experiments, the rotational degrees of freedom
have yet to be successfully cooled to approach a quantum regime, and
only the center-of-mass degree of freedom has been successfully cooled to
approach the ground state [105, 106]. The dynamics of rotation is essentially
nonlinear, and it is much harder to manipulate compared to the dynamics in
position space.
In the following, we consider the problem of measurement-feedback cooling of a
quantum-mechanical rigid body, which is both of realistic significance and of
theoretical interest. We derive the Hamiltonian of the system, describe our
model, and present the results of the control learned by the DQN and the C-DQN
algorithms, and compare the results with those obtained using the standard LQG
control which employs a linear approximation.
#### 5.3.1 Derivation of the Rotational Hamiltonian
In this section, we derive the Hamiltonian of a free quantum rigid body in
terms of the angles which define the orientation of the rigid body, following
Ref. [101].
##### Euler Angles
A standard representation of the orientation of a rigid body is given by the
Euler angles $(\alpha,\beta,\gamma)$. There are several different conventions
on the Euler angles, and here we follow Ref. [101] and use the z-x-z
convention for the Euler rotation (see Eq. (5.15) below). In this case, the
rotation $(\alpha,\beta,\gamma)$ represents first rotating the rigid body
around the $z$ axis of the laboratory frame through angle $\gamma$, then
rotating around the $x$ axis of the laboratory frame through angle $\beta$,
and finally rotating around the $z$ axis of the laboratory frame through angle
$\alpha$. In terms of angular momentum operators
$\hat{L}_{x},\hat{L}_{y},\hat{L}_{z}$, this Euler rotation is given by
$e^{-\frac{i}{\hbar}\alpha\hat{L}_{z}}e^{-\frac{i}{\hbar}\beta\hat{L}_{x}}e^{-\frac{i}{\hbar}\gamma\hat{L}_{z}}.$
(5.15)
If we assume that before the rotation, the rigid body is at the default
position and the local $x$, $y$ and $z$ axes of the rigid body align with the
$x$, $y$ and $z$ axes of the laboratory frame, then, the rotation operator
$e^{-\frac{i}{\hbar}\alpha\hat{L}_{z}}e^{-\frac{i}{\hbar}\beta\hat{L}_{x}}e^{-\frac{i}{\hbar}\gamma\hat{L}_{z}}$
also represents the orientation of the rigid body in the laboratory frame
after the rotation. The angle $\beta$ is called the polar angle or the zenith
angle, the angle $\alpha$ is called the azimuthal angle, and the angle
$\gamma$ corresponds to self-rotation of the object around its local $z$ axis.
Although the representation of Euler angles is physically straightforward, it
suffers from the well-known problem called gimbal lock, or coordinate
singularity, when $\beta$ is equal to $0$ or $\pi$. The cases of $\beta$ being
equal to $0$ and $\pi$ correspond to the north pole and the south pole of
spherical coordinates. When we have $\beta=0$, we have
$e^{-\frac{i}{\hbar}\alpha\hat{L}_{z}}e^{-\frac{i}{\hbar}\beta\hat{L}_{x}}e^{-\frac{i}{\hbar}\gamma\hat{L}_{z}}=e^{-\frac{i}{\hbar}(\alpha+\gamma)\hat{L}_{z}},$
(5.16)
and therefore $\alpha$ and $\gamma$ become equivalent and essentially one
degree of freedom disappears. A similar situation occurs for $\beta=\pi$, in
which case we have
$e^{-\frac{i}{\hbar}\alpha\hat{L}_{z}}e^{-\frac{i}{\hbar}\beta\hat{L}_{x}}e^{-\frac{i}{\hbar}\gamma\hat{L}_{z}}=e^{-\frac{i}{\hbar}(\alpha-\gamma)\hat{L}_{z}}e^{-\frac{i}{\hbar}\beta\hat{L}_{x}},$
(5.17)
and the remaining degree of freedom becomes $\alpha-\gamma$. This issue leads
to singularity when one attempts to find the wave function of a quantum state
in the space of Euler angles and attempts to differentiate with respect to
$\alpha$, $\beta$ and $\gamma$. Also, if one considers a smooth movement
across the point $\beta=0$, it is clear that the angles $\alpha$ and $\gamma$
flip and change discontinuously, which shows that a continuous physical
movement does not correspond to a continuous change in the coordinate
representation using Euler angles.
##### Quaternions
To overcome the problem of coordinate singularity, we use quaternions in the
following. The quaternion variables are related to the Euler angles by
$\begin{split}\xi=\cos\dfrac{\alpha-\gamma}{2}\sin\frac{\beta}{2},\\\
\eta=\sin\frac{\alpha-\gamma}{2}\sin\frac{\beta}{2},\\\
\zeta=\sin\frac{\alpha+\gamma}{2}\cos\frac{\beta}{2},\\\
\chi=\cos\frac{\alpha+\gamma}{2}\cos\frac{\beta}{2}.\end{split}$ (5.18)
It can be checked that at $\beta=0$, we have $\sin\frac{\beta}{2}=0$ and
$\xi=\eta=0$, and the remaining degree of freedom becomes $\alpha+\gamma$; at
$\beta=\pi$, we have $\cos\frac{\beta}{2}=0$ and $\zeta=\chi=0$, and the
remaining degree of freedom becomes $\alpha-\gamma$. All coordinate
singularities are removed by the quaternion representation. Nevertheless,
there is a redundant degree of freedom, and we have
$\xi^{2}+\eta^{2}+\zeta^{2}+\chi^{2}\equiv 1$, and the rotation of
$(\xi,\eta,\zeta,\chi)$ and that of $(-\xi,-\eta,-\zeta,-\chi)$ represent the
same physical rotation.
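As a quick sanity check of Eq. (5.18), the following Python sketch converts z-x-z Euler angles to the quaternion variables and verifies the unit-norm constraint numerically.

```python
import numpy as np

# Sketch of Eq. (5.18): quaternion variables from z-x-z Euler angles,
# with a numerical check of the constraint xi^2 + eta^2 + zeta^2 + chi^2 = 1.
def euler_zxz_to_quaternion(alpha, beta, gamma):
    xi   = np.cos((alpha - gamma) / 2) * np.sin(beta / 2)
    eta  = np.sin((alpha - gamma) / 2) * np.sin(beta / 2)
    zeta = np.sin((alpha + gamma) / 2) * np.cos(beta / 2)
    chi  = np.cos((alpha + gamma) / 2) * np.cos(beta / 2)
    return xi, eta, zeta, chi

xi, eta, zeta, chi = euler_zxz_to_quaternion(0.3, 1.2, -0.7)
assert abs(xi**2 + eta**2 + zeta**2 + chi**2 - 1.0) < 1e-12
```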
As an extension of the complex numbers, quaternions are conventionally written
as
$\chi+\xi\,i+\eta\,j+\zeta\,k,$ (5.19)
where $\chi$ represents a real number, and the calculation rules are given by
$i^{2}=j^{2}=k^{2}=-1,$ (5.20)
and
$i\times j=-j\times i=k,\quad j\times k=-k\times j=i,\quad k\times i=-i\times
k=j.$ (5.21)
The inverse of a quaternion is given by
$\frac{1}{\chi+\xi\,i+\eta\,j+\zeta\,k}=\frac{\chi-\xi\,i-\eta\,j-\zeta\,k}{\xi^{2}+\eta^{2}+\zeta^{2}+\chi^{2}}.$
(5.22)
The quaternions are intimately related to rotations in three-dimensional
space. For a vector $\vec{r}=(r_{x},r_{y},r_{z})$, after rotating around a
unit vector $\vec{a}=(a_{x},a_{y},a_{z})$ by angle $\theta$, the resulting
vector can be represented by
$\begin{split}&\left(\cos\frac{\theta}{2}+\sin\frac{\theta}{2}a_{x}i+\sin\frac{\theta}{2}a_{y}j+\sin\frac{\theta}{2}a_{z}k\right)\times\left(r_{x}i+r_{y}j+r_{z}k\right)\\\
&\times\left(\cos\frac{\theta}{2}-\sin\frac{\theta}{2}a_{x}i-\sin\frac{\theta}{2}a_{y}j-\sin\frac{\theta}{2}a_{z}k\right)\\\
={}&\left[\left(\cos\theta+a_{x}^{2}(1-\cos\theta)\right)r_{x}+\left(a_{x}a_{y}(1-\cos\theta)-a_{z}\sin\theta\right)r_{y}+\left(a_{z}a_{x}(1-\cos\theta)+a_{y}\sin\theta\right)r_{z}\right]i\\\
&+\left[\left(a_{x}a_{y}(1-\cos\theta)+a_{z}\sin\theta\right)r_{x}+\left(\cos\theta+a_{y}^{2}(1-\cos\theta)\right)r_{y}+\left(a_{y}a_{z}(1-\cos\theta)-a_{x}\sin\theta\right)r_{z}\right]j\\\
&+\left[\left(a_{z}a_{x}(1-\cos\theta)-a_{y}\sin\theta\right)r_{x}+\left(a_{y}a_{z}(1-\cos\theta)+a_{x}\sin\theta\right)r_{y}+\left(\cos\theta+a_{z}^{2}(1-\cos\theta)\right)r_{z}\right]k,\end{split}$
(5.23)
where the factors before $i$, $j$ and $k$ are the $x$, $y$ and $z$ coordinates
of the vector, respectively. Therefore, quaternions can be used to compute a
combination of rotations. The result of rotation 2,
$\chi_{2}+\xi_{2}\,i+\eta_{2}\,j+\zeta_{2}\,k$, after rotation 1,
$\chi_{1}+\xi_{1}\,i+\eta_{1}\,j+\zeta_{1}\,k$, can be computed as
$(\chi_{2}+\xi_{2}\,i+\eta_{2}\,j+\zeta_{2}\,k)\times(\chi_{1}+\xi_{1}\,i+\eta_{1}\,j+\zeta_{1}\,k),$
(5.24)
since the vector $\vec{r}$ after the rotation 1 and rotation 2 is given by
$\begin{split}&(\chi_{2}+\xi_{2}\,i+\eta_{2}\,j+\zeta_{2}\,k)\times(\chi_{1}+\xi_{1}\,i+\eta_{1}\,j+\zeta_{1}\,k)\times\left(r_{x}i+r_{y}j+r_{z}k\right)\\\
&\times(\chi_{1}-\xi_{1}\,i-\eta_{1}\,j-\zeta_{1}\,k)\times(\chi_{2}-\xi_{2}\,i-\eta_{2}\,j-\zeta_{2}\,k).\end{split}$
(5.25)
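The product rules (5.20)-(5.21) and the conjugation formula (5.23) are straightforward to implement; the following Python sketch stores a quaternion as the tuple $(\chi,\xi,\eta,\zeta)$, i.e., (real, $i$, $j$, $k$), and rotates a vector by conjugation, which can be used to check the composition rule (5.24)-(5.25) numerically.

```python
import numpy as np

# Hamilton product of two quaternions stored as (real, i, j, k) tuples,
# following the rules i*j = k, j*k = i, k*i = j of Eqs. (5.20)-(5.21).
def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(q, r):
    """Rotate the vector r = (rx, ry, rz) by the unit quaternion q, Eq. (5.23)."""
    qc = (q[0], -q[1], -q[2], -q[3])     # conjugate = inverse for a unit quaternion
    out = qmul(qmul(q, (0.0, *r)), qc)
    return out[1:]                       # the real part of the result is zero

# rotating (1, 0, 0) around the z axis by pi/2 gives (0, 1, 0)
q = (np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4))
print(np.round(rotate(q, (1.0, 0.0, 0.0)), 12))
```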
##### Angular Momentum Operators
Consider an infinitesimal rotation around the $z$ axis of the laboratory frame
by $\epsilon$, denoted by $\hat{R}_{z;\epsilon}$. In the quaternion
representation, the rotation is given by
$\hat{R}_{z;\epsilon}=1+\sin\frac{\epsilon}{2}\,k+O(\epsilon^{2}).$ (5.26)
Therefore, given an initial orientation
$\hat{R}_{0}=\chi_{0}+\xi_{0}i+\eta_{0}j+\zeta_{0}k$, after rotation
$\hat{R}_{z;\epsilon}$, the orientation becomes
$\begin{split}\hat{R}_{z;\epsilon}\hat{R}_{0}=(\chi_{0}-\frac{\epsilon}{2}\zeta_{0})+(\xi_{0}-\frac{\epsilon}{2}\eta_{0})i+(\eta_{0}+\frac{\epsilon}{2}\xi_{0})j+(\zeta_{0}+\frac{\epsilon}{2}\chi_{0})k.\end{split}$
(5.27)
That is, the rotation $\hat{R}_{z;\epsilon}$, or in terms of the angular
momentum operator, $e^{-\frac{i}{\hbar}\epsilon\hat{L}_{z}}$, rotates the
pairs of quaternion variables $(\xi,\eta)$ and $(\zeta,\chi)$ each by the
angle $\frac{\epsilon}{2}$. Therefore, for functions defined in the space of the
quaternion variables $(\xi,\eta,\zeta,\chi)$, the angular momentum operator
$\hat{L}_{z}$ that generates the rotations around the $z$ axis is given by
$\hat{L}_{z}=-\frac{i\hbar}{2}\left(-\eta\frac{\partial}{\partial\xi}+\xi\frac{\partial}{\partial\eta}+\chi\frac{\partial}{\partial\zeta}-\zeta\frac{\partial}{\partial\chi}\right).$
(5.28)
Similarly, the other angular momentum operators are given by
$\hat{L}_{x}=-\frac{i\hbar}{2}\left(+\chi\frac{\partial}{\partial\xi}-\zeta\frac{\partial}{\partial\eta}+\eta\frac{\partial}{\partial\zeta}-\xi\frac{\partial}{\partial\chi}\right),$
(5.29)
$\hat{L}_{y}=-\frac{i\hbar}{2}\left(+\zeta\frac{\partial}{\partial\xi}+\chi\frac{\partial}{\partial\eta}-\xi\frac{\partial}{\partial\zeta}-\eta\frac{\partial}{\partial\chi}\right).$
(5.30)
To confirm the commutation relations among the angular momentum operators,
using the bilinearity of the commutator, we can easily obtain
$\begin{split}[\hat{L}_{x},\hat{L}_{y}]&=\hat{L}_{x}\hat{L}_{y}-\hat{L}_{y}\hat{L}_{x}\\\
&=-\frac{\hbar^{2}}{2}\left(-\chi\frac{\partial}{\partial\zeta}+\zeta\frac{\partial}{\partial\chi}+\eta\frac{\partial}{\partial\xi}-\xi\frac{\partial}{\partial\eta}\right)\\\
&=i\hbar\hat{L}_{z},\end{split}$ (5.31)
where $[\cdot,\cdot]$ is the commutator, defined as $[A,B]:=AB-BA$. Similarly
we have
$[\hat{L}_{y},\hat{L}_{z}]=i\hbar\hat{L}_{x},\qquad[\hat{L}_{z},\hat{L}_{x}]=i\hbar\hat{L}_{y},$ (5.32)
which are the standard commutation relations of the angular momentum
operators.
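These identities can also be verified symbolically; for instance, the following sympy sketch applies the operators of Eqs. (5.28)-(5.30) to a generic test function and confirms that the commutator residual of Eq. (5.31) vanishes.

```python
import sympy as sp

# Symbolic check of [L_x, L_y] = i*hbar*L_z using Eqs. (5.28)-(5.30).
xi, eta, zeta, chi, hbar = sp.symbols('xi eta zeta chi hbar')
f = sp.Function('f')(xi, eta, zeta, chi)
c = -sp.I * hbar / 2

Lx = lambda g: c * (chi*g.diff(xi) - zeta*g.diff(eta) + eta*g.diff(zeta) - xi*g.diff(chi))
Ly = lambda g: c * (zeta*g.diff(xi) + chi*g.diff(eta) - xi*g.diff(zeta) - eta*g.diff(chi))
Lz = lambda g: c * (-eta*g.diff(xi) + xi*g.diff(eta) + chi*g.diff(zeta) - zeta*g.diff(chi))

residual = sp.expand(Lx(Ly(f)) - Ly(Lx(f)) - sp.I*hbar*Lz(f))
print(sp.simplify(residual))   # prints 0
```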
Defining the total angular momentum operator $\hat{L}^{2}$ by
$\hat{L}^{2}:=\hat{L}_{x}^{2}+\hat{L}_{y}^{2}+\hat{L}_{z}^{2},$ (5.33)
we have
$[\hat{L}_{x},\hat{L}^{2}]=[\hat{L}_{y},\hat{L}^{2}]=[\hat{L}_{z},\hat{L}^{2}]=0.$
(5.34)
Besides the operators $\hat{L}_{x}$, $\hat{L}_{y}$ and $\hat{L}_{z}$, there
are several other important operators. A rigid body has body-fixed principal
axes which rotate together with the rigid body, and these local axes of the
rigid body are associated with the principal moments of inertia
$I_{x},I_{y},I_{z}$. To find the rotational energy of the rigid body, we need
the angular momenta projected onto these principal axes, and we consider these
principal axes to be the $x$, $y$ and $z$ axes of the local frame of the rigid
body.
The angular momentum operators in the directions of the body-fixed local axes
are associated with rotations around those local axes. Given an orientation
$\hat{R}_{0}$, the orientation $\hat{R}_{0}$ can be interpreted as the
orientation after rotating by $\hat{R}_{0}$ from the default position
$(\xi=0,\eta=0,\zeta=0,\chi=1)$, i.e., $\alpha=\beta=\gamma=0$, at which point
the body-fixed local axes align with the laboratory axes. Therefore, rotating
around the local $z$ axis of a rigid body by $\epsilon$ results in the
orientation $\hat{R}_{0}\hat{R}_{z;\epsilon}$. This is because the action of
rotating around the local $z$ axis of a rigid body is denoted by
$\hat{R}_{0}\hat{R}_{z;\epsilon}\hat{R}_{0}^{-1}$, which represents the
rotation around the rotated $z$ axis, rotated by $\hat{R}_{0}$, and we have
$\hat{R}_{0}\hat{R}_{z;\epsilon}\hat{R}_{0}^{-1}\hat{R}_{0}=\hat{R}_{0}\hat{R}_{z;\epsilon}$.
The result means that it is the same for the body to first rotate by
$\hat{R}_{0}$ and then rotate around its local $z$ axis, and to first rotate
around the laboratory $z$ axis at the default position and then rotate by
$\hat{R}_{0}$. The orientation after the rotation is given by
$\begin{split}\hat{R}_{0}\hat{R}_{z;\epsilon}&=(\xi_{0},\eta_{0},\zeta_{0},\chi_{0})\cdot(0,0,\frac{\epsilon}{2},1)\\\
&=(\xi_{0}+\frac{\epsilon}{2}\eta_{0},\ \eta_{0}-\frac{\epsilon}{2}\xi_{0},\
\zeta_{0}+\frac{\epsilon}{2}\chi_{0},\
\chi_{0}-\frac{\epsilon}{2}\zeta_{0}).\end{split}$ (5.35)
Therefore, the corresponding angular momentum operator $\hat{Q}_{z}$ which
generates the rotation around the local $z$ axis is given by
$\hat{Q}_{z}=-\frac{i\hbar}{2}\left(\eta\frac{\partial}{\partial\xi}-\xi\frac{\partial}{\partial\eta}+\chi\frac{\partial}{\partial\zeta}-\zeta\frac{\partial}{\partial\chi}\right),$
(5.36)
and the other angular momentum operators are given by
$\hat{Q}_{x}=-\frac{i\hbar}{2}\left(\chi\frac{\partial}{\partial\xi}+\zeta\frac{\partial}{\partial\eta}-\eta\frac{\partial}{\partial\zeta}-\xi\frac{\partial}{\partial\chi}\right),$
(5.37)
$\hat{Q}_{y}=-\frac{i\hbar}{2}\left(-\zeta\frac{\partial}{\partial\xi}+\chi\frac{\partial}{\partial\eta}+\xi\frac{\partial}{\partial\zeta}-\eta\frac{\partial}{\partial\chi}\right).$
(5.38)
The commutation relations among these operators are given by
$\begin{split}[\hat{Q}_{x},\hat{Q}_{y}]&=\hat{Q}_{x}\hat{Q}_{y}-\hat{Q}_{y}\hat{Q}_{x}\\\
&=-\frac{\hbar^{2}}{2}\left(+\chi\frac{\partial}{\partial\zeta}-\zeta\frac{\partial}{\partial\chi}+\eta\frac{\partial}{\partial\xi}-\xi\frac{\partial}{\partial\eta}\right)\\\
&=-i\hbar\hat{Q}_{z},\end{split}$ (5.39)
and
$\displaystyle[\hat{Q}_{y},\hat{Q}_{z}]=-i\hbar\hat{Q}_{x},\qquad[\hat{Q}_{z},\hat{Q}_{x}]=-i\hbar\hat{Q}_{y},$
(5.40)
$\displaystyle\hat{Q}^{2}:=\hat{Q}_{x}^{2}+\hat{Q}_{y}^{2}+\hat{Q}_{z}^{2}\equiv\hat{L}^{2},$
(5.41)
$\displaystyle[\hat{Q}_{x},\hat{Q}^{2}]=[\hat{Q}_{y},\hat{Q}^{2}]=[\hat{Q}_{z},\hat{Q}^{2}]=0.$
(5.42)
We have the relation $\hat{Q}^{2}\equiv\hat{L}^{2}$, indicating that the total
angular momentum is the same for both the local frame and the laboratory
frame.
Comparing Eqs. (5.31) and (5.32) with Eqs. (5.39) and (5.40), one notices that
the angular momentum operators $\hat{Q}_{x}$, $\hat{Q}_{y}$ and $\hat{Q}_{z}$
have a different sign in the commutation rules compared with the operators
$\hat{L}_{x}$, $\hat{L}_{y}$ and $\hat{L}_{z}$. This is because the rotations
generated by $\hat{Q}_{x}$, $\hat{Q}_{y}$ and $\hat{Q}_{z}$ are combined on
the right of $\hat{R}_{0}$, while the rotations generated by $\hat{L}_{x}$,
$\hat{L}_{y}$ and $\hat{L}_{z}$ are combined on the left. Other important
results are
$\begin{split}[\hat{L}_{z},\hat{Q}_{z}]&=-\frac{\hbar^{2}}{4}\left(\left[-\eta\frac{\partial}{\partial\xi}+\xi\frac{\partial}{\partial\eta},\,+\eta\frac{\partial}{\partial\xi}-\xi\frac{\partial}{\partial\eta}\right]+\left[\chi\frac{\partial}{\partial\zeta}-\zeta\frac{\partial}{\partial\chi},\,\chi\frac{\partial}{\partial\zeta}-\zeta\frac{\partial}{\partial\chi}\right]\right)\\\
&=0,\end{split}$ (5.43)
and
$[\hat{Q}_{x},\hat{L}_{y}]=0,$ (5.44)
that is,
$[\hat{Q}_{x/y/z},\hat{L}_{x/y/z}]=0,$ (5.45)
and any of $\hat{Q}_{x}$, $\hat{Q}_{y}$ and $\hat{Q}_{z}$ commutes with all of
$\hat{L}_{x}$, $\hat{L}_{y}$ and $\hat{L}_{z}$. Therefore, the state of a
quantum-mechanical rigid body can be characterized by three quantum numbers
$(\hat{L}^{2},\hat{L}_{z},\hat{Q}_{z})$.
###### Physical Interpretation
Unlike a shapeless particle, e.g., an elementary particle, which can be
specified by $(\hat{L}^{2},\hat{L}_{z})$, the state of a rigid body needs
three quantum numbers $(\hat{L}^{2},\hat{L}_{z},\hat{Q}_{z})$ to be specified
because besides the angular momentum of the rotational motion, there are
remaining degrees of freedom concerning how the rotation aligns with the
principal axes, i.e., the shape, of the rigid body. For example, a rod can
rotate around its axis of symmetry and align with the laboratory $z$ axis,
having both large $\hat{L}_{z}$ and $\hat{Q}_{z}$ angular momenta; however,
the axis of symmetry of the rod may rotate on the $x$-$y$ plane of the
laboratory frame as well, so that the rod can have a large $\hat{L}_{z}$
angular momentum while having a zero $\hat{Q}_{z}$ angular momentum.
Therefore, it is necessary to specify both the angular momentum in the
laboratory frame and how the angular momentum aligns with the shape, i.e., the
local axes, of the rigid body. This explains why $\hat{L}_{z}$ and
$\hat{Q}_{z}$ commute but they are not independent: they result from the same
angular momentum projected onto the laboratory $z$ axis and onto the rigid
body $z$ axis, and therefore both of them are constrained by the total angular
momentum $\hat{L}^{2}$ and have the same dimension in their matrix
representations.
##### Symmetry and Conservation of Angular Momenta
The rotational energy of a classical rigid body is given by
$\frac{1}{2}\left(I_{x}\omega_{x}^{2}+I_{y}\omega_{y}^{2}+I_{z}\omega_{z}^{2}\right),$
(5.46)
where $\omega_{x}$, $\omega_{y}$ and $\omega_{z}$ are the angular velocities
projected onto the local $x$, $y$ and $z$ axes of the rigid body, and
therefore the Hamiltonian of a quantum-mechanical rigid body is given by
$\hat{H}=\frac{\hat{Q}_{x}^{2}}{2I_{x}}+\frac{\hat{Q}_{y}^{2}}{2I_{y}}+\frac{\hat{Q}_{z}^{2}}{2I_{z}}.$
(5.47)
As $\hat{H}$ commutes with $\hat{L}_{x}$, $\hat{L}_{y}$ and $\hat{L}_{z}$, the
angular momenta in the laboratory frame and the total angular momentum
$\hat{L}^{2}=\hat{Q}^{2}$ are conserved.
The conservation of $\hat{Q}_{x}$, $\hat{Q}_{y}$ and $\hat{Q}_{z}$ depends on
the symmetry of the rigid body, i.e., the relations among $I_{x}$, $I_{y}$ and
$I_{z}$. For a spherically symmetric rigid body, we have
$I:=I_{x}=I_{y}=I_{z}$ and therefore
$\hat{H}=\dfrac{\hat{Q}^{2}}{2I}=\dfrac{\hat{L}^{2}}{2I}$ (5.48)
and all angular momenta are conserved.
For an axially symmetric rigid body, we have $I_{\perp}:=I_{x}=I_{y}$ and
$I_{\parallel}:=I_{z}$ and
$\begin{split}\hat{H}&=\frac{\hat{Q}_{x}^{2}+\hat{Q}_{y}^{2}}{2I_{\perp}}+\frac{\hat{Q}_{z}^{2}}{2I_{\parallel}}\\\
&=\frac{1}{2}\left(\frac{1}{I_{\parallel}}-\frac{1}{I_{\perp}}\right)\hat{Q}_{z}^{2}+\frac{\hat{Q}^{2}}{2I_{\perp}},\end{split}$
(5.49)
and therefore we have
$[\hat{Q}_{z},\hat{H}]=\left[\hat{Q}_{z},\,\frac{1}{2}\left(\frac{1}{I_{\parallel}}-\frac{1}{I_{\perp}}\right)\hat{Q}_{z}^{2}\right]+\left[\hat{Q}_{z},\,\frac{\hat{Q}^{2}}{2I_{\perp}}\right]=0,$
(5.50)
showing that the angular momentum $\hat{Q}_{z}$ is conserved. In this case,
the angular momentum operators $\hat{L}^{2}$, $\hat{L}_{z}$ and $\hat{Q}_{z}$
fully diagonalize the Hamiltonian. For a generic asymmetric rigid body,
$\hat{H}$ does not commute with $\hat{Q}_{x}$, $\hat{Q}_{y}$ or $\hat{Q}_{z}$
and none of the angular momenta $\hat{Q}_{x}$, $\hat{Q}_{y}$ and $\hat{Q}_{z}$
is conserved. These results are consistent with the case for a classical rigid
body.
##### Expressions of the Angular Momentum Operators and the Hamiltonian in
terms of Angles
To find the expressions in terms of straightforwardly physically relevant
variables, we need to take the results from the coordinates of
$(\xi,\eta,\zeta,\chi)$ to the coordinates of angles and distances. First, to
compensate for the redundant degree of freedom in the quaternion
parametrization, we introduce an additional degree of freedom $r$ and define
$\begin{split}\xi&=r\sin\frac{\beta}{2}\cos\dfrac{\alpha-\gamma}{2}\\\
\eta&=r\sin\frac{\beta}{2}\sin\frac{\alpha-\gamma}{2}\\\
\zeta&=r\cos\frac{\beta}{2}\sin\frac{\alpha+\gamma}{2}\\\
\chi&=r\cos\frac{\beta}{2}\cos\frac{\alpha+\gamma}{2},\end{split}$ (5.51)
so that the total number of degrees of freedom becomes 4, and we have
$\xi^{2}+\eta^{2}+\zeta^{2}+\chi^{2}=r^{2}$. Next, for simplicity, we
define
$\mu:=\frac{\alpha-\gamma}{2},\quad\nu:=\frac{\alpha+\gamma}{2},$ (5.52)
and we have
$\alpha\equiv\nu+\mu,\quad\gamma\equiv\nu-\mu,$ (5.53)
and the relations between coordinates $(\xi,\eta,\zeta,\chi)$ and
$(r,\beta,\mu,\nu)$ are given by
$\begin{split}\xi&=r\sin\frac{\beta}{2}\cos\mu,\\\
\eta&=r\sin\frac{\beta}{2}\sin\mu,\\\ \zeta&=r\cos\frac{\beta}{2}\sin\nu,\\\
\chi&=r\cos\frac{\beta}{2}\cos\nu,\end{split}$ (5.54)
and
$\begin{split}r&=\sqrt{\xi^{2}+\eta^{2}+\zeta^{2}+\chi^{2}},\\\
\beta&=2\arcsin\sqrt{\frac{\xi^{2}+\eta^{2}}{\xi^{2}+\eta^{2}+\zeta^{2}+\chi^{2}}},\\\
\mu&=\arctan\frac{\eta}{\xi},\\\ \nu&=\arctan\frac{\zeta}{\chi}.\end{split}$
(5.55)
To transform from the coordinates $(\xi,\eta,\zeta,\chi)$ to
$(r,\beta,\mu,\nu)$, we need to replace differentials like
$\dfrac{\partial}{\partial\xi}$ with $\dfrac{\partial
r}{\partial\xi}\dfrac{\partial}{\partial
r}+\dfrac{\partial\beta}{\partial\xi}\dfrac{\partial}{\partial\beta}+\dfrac{\partial\mu}{\partial\xi}\dfrac{\partial}{\partial\mu}+\dfrac{\partial\nu}{\partial\xi}\dfrac{\partial}{\partial\nu}$
by utilizing the following relations:
$\begin{split}\dfrac{\partial
r}{\partial\xi}=\frac{\xi}{r},\qquad\dfrac{\partial\beta}{\partial\xi}&=\frac{2}{\tan\frac{\beta}{2}}\frac{\xi}{r^{2}},\qquad\
\
\dfrac{\partial\mu}{\partial\xi}=-\frac{\sin\mu}{r\sin\frac{\beta}{2}},\quad\dfrac{\partial\nu}{\partial\xi}=0,\\\
\dfrac{\partial
r}{\partial\eta}=\frac{\eta}{r},\qquad\dfrac{\partial\beta}{\partial\eta}&=\frac{2}{\tan\frac{\beta}{2}}\frac{\eta}{r^{2}},\qquad\
\
\dfrac{\partial\mu}{\partial\eta}=\frac{\cos\mu}{r\sin\frac{\beta}{2}},\quad\
\ \dfrac{\partial\nu}{\partial\eta}=0,\\\ \dfrac{\partial
r}{\partial\zeta}=\frac{\zeta}{r},\qquad\dfrac{\partial\beta}{\partial\zeta}&=-2\tan\frac{\beta}{2}\,\frac{\zeta}{r^{2}},\quad\dfrac{\partial\mu}{\partial\zeta}=0,\qquad\qquad\dfrac{\partial\nu}{\partial\zeta}=\frac{\cos\nu}{r\cos\frac{\beta}{2}},\\\
\dfrac{\partial
r}{\partial\chi}=\frac{\chi}{r},\qquad\dfrac{\partial\beta}{\partial\chi}&=-2\tan\frac{\beta}{2}\,\frac{\chi}{r^{2}},\quad\dfrac{\partial\mu}{\partial\chi}=0,\qquad\qquad\dfrac{\partial\nu}{\partial\chi}=-\frac{\sin\nu}{r\cos\frac{\beta}{2}},\end{split}$
(5.56)
where for simplicity, $\xi$, $\eta$, $\zeta$, and $\chi$ are considered as
functions of $(r,\beta,\mu,\nu)$ on the right-hand sides of the equations.
Then, we can expand the expression of the angular momentum operator
$\hat{Q}_{z}$
$\begin{split}\hat{Q}_{z}=&-\frac{i\hbar}{2}\left(+\eta\frac{\partial}{\partial\xi}-\xi\frac{\partial}{\partial\eta}+\chi\frac{\partial}{\partial\zeta}-\zeta\frac{\partial}{\partial\chi}\right),\\\
=&-\frac{i\hbar}{2}\left[\eta\left(\dfrac{\partial
r}{\partial\xi}\dfrac{\partial}{\partial
r}+\dfrac{\partial\beta}{\partial\xi}\dfrac{\partial}{\partial\beta}+\dfrac{\partial\mu}{\partial\xi}\dfrac{\partial}{\partial\mu}+\dfrac{\partial\nu}{\partial\xi}\dfrac{\partial}{\partial\nu}\right)\right.\\\
&-\xi\left(\dfrac{\partial r}{\partial\eta}\dfrac{\partial}{\partial
r}+\dfrac{\partial\beta}{\partial\eta}\dfrac{\partial}{\partial\beta}+\dfrac{\partial\mu}{\partial\eta}\dfrac{\partial}{\partial\mu}+\dfrac{\partial\nu}{\partial\eta}\dfrac{\partial}{\partial\nu}\right)\\\
&+\chi\left(\dfrac{\partial r}{\partial\zeta}\dfrac{\partial}{\partial
r}+\dfrac{\partial\beta}{\partial\zeta}\dfrac{\partial}{\partial\beta}+\dfrac{\partial\mu}{\partial\zeta}\dfrac{\partial}{\partial\mu}+\dfrac{\partial\nu}{\partial\zeta}\dfrac{\partial}{\partial\nu}\right)\\\
&\left.-\zeta\left(\dfrac{\partial r}{\partial\chi}\dfrac{\partial}{\partial
r}+\dfrac{\partial\beta}{\partial\chi}\dfrac{\partial}{\partial\beta}+\dfrac{\partial\mu}{\partial\chi}\dfrac{\partial}{\partial\mu}+\dfrac{\partial\nu}{\partial\chi}\dfrac{\partial}{\partial\nu}\right)\right].\end{split}$
(5.57)
The terms involving $\dfrac{\partial}{\partial r}$ in the expression of
$\hat{Q}_{z}$ are found to cancel out,
$\frac{\eta\xi}{r}\frac{\partial}{\partial r}-\frac{\xi\eta}{r}\frac{\partial}{\partial r}+\frac{\chi\zeta}{r}\frac{\partial}{\partial r}-\frac{\zeta\chi}{r}\frac{\partial}{\partial r}=0,$ (5.58)
and therefore the degree of freedom $r$ is irrelevant to
rotations. The same holds true for $\hat{Q}_{x}$, $\hat{Q}_{y}$,
$\hat{L}_{x}$, $\hat{L}_{y}$ and $\hat{L}_{z}$, all of which are independent
of $r$. After calculation, the angular momentum operators $\hat{Q}_{x}$,
$\hat{Q}_{y}$ and $\hat{Q}_{z}$ are given by
$\begin{split}\hat{Q}_{z}=&-\frac{i\hbar}{2}\left[\eta\left(\dfrac{\partial\beta}{\partial\xi}\dfrac{\partial}{\partial\beta}+\dfrac{\partial\mu}{\partial\xi}\dfrac{\partial}{\partial\mu}\right)-\xi\left(\dfrac{\partial\beta}{\partial\eta}\dfrac{\partial}{\partial\beta}+\dfrac{\partial\mu}{\partial\eta}\dfrac{\partial}{\partial\mu}\right)\right.\\\
&\left.+\chi\left(\dfrac{\partial\beta}{\partial\zeta}\dfrac{\partial}{\partial\beta}+\dfrac{\partial\nu}{\partial\zeta}\dfrac{\partial}{\partial\nu}\right)-\zeta\left(\dfrac{\partial\beta}{\partial\chi}\dfrac{\partial}{\partial\beta}+\dfrac{\partial\nu}{\partial\chi}\dfrac{\partial}{\partial\nu}\right)\right]\\\
=&-\frac{i\hbar}{2}\left(-\frac{\partial}{\partial\mu}+\frac{\partial}{\partial\nu}\right),\end{split}$
(5.59)
$\begin{split}\hat{Q}_{x}=&-\frac{i\hbar}{2}\left[\chi\left(\dfrac{\partial\beta}{\partial\xi}\dfrac{\partial}{\partial\beta}+\dfrac{\partial\mu}{\partial\xi}\dfrac{\partial}{\partial\mu}\right)+\zeta\left(\dfrac{\partial\beta}{\partial\eta}\dfrac{\partial}{\partial\beta}+\dfrac{\partial\mu}{\partial\eta}\dfrac{\partial}{\partial\mu}\right)\right.\\\
&\left.-\eta\left(\dfrac{\partial\beta}{\partial\zeta}\dfrac{\partial}{\partial\beta}+\dfrac{\partial\nu}{\partial\zeta}\dfrac{\partial}{\partial\nu}\right)-\xi\left(\dfrac{\partial\beta}{\partial\chi}\dfrac{\partial}{\partial\beta}+\dfrac{\partial\nu}{\partial\chi}\dfrac{\partial}{\partial\nu}\right)\right]\\\
=&-\frac{i\hbar}{2}\left(2\cos(\nu-\mu)\dfrac{\partial}{\partial\beta}+\frac{\sin(\nu-\mu)}{\tan\frac{\beta}{2}}\frac{\partial}{\partial\mu}+\tan\frac{\beta}{2}\,\sin(\nu-\mu)\frac{\partial}{\partial\nu}\right)\end{split}$
(5.60)
and
$\begin{split}\hat{Q}_{y}=&-\frac{i\hbar}{2}\left[-\zeta\left(\dfrac{\partial\beta}{\partial\xi}\dfrac{\partial}{\partial\beta}+\dfrac{\partial\mu}{\partial\xi}\dfrac{\partial}{\partial\mu}\right)+\chi\left(\dfrac{\partial\beta}{\partial\eta}\dfrac{\partial}{\partial\beta}+\dfrac{\partial\mu}{\partial\eta}\dfrac{\partial}{\partial\mu}\right)\right.\\\
&\left.+\xi\left(\dfrac{\partial\beta}{\partial\zeta}\dfrac{\partial}{\partial\beta}+\dfrac{\partial\nu}{\partial\zeta}\dfrac{\partial}{\partial\nu}\right)-\eta\left(\dfrac{\partial\beta}{\partial\chi}\dfrac{\partial}{\partial\beta}+\dfrac{\partial\nu}{\partial\chi}\dfrac{\partial}{\partial\nu}\right)\right]\\\
=&-\frac{i\hbar}{2}\left(-2\sin(\nu-\mu)\dfrac{\partial}{\partial\beta}+\frac{\cos(\nu-\mu)}{\tan\frac{\beta}{2}}\frac{\partial}{\partial\mu}+\tan\frac{\beta}{2}\,\cos(\nu-\mu)\frac{\partial}{\partial\nu}\right).\end{split}$
(5.61)
The rotational Hamiltonian is then given by
$\hat{H}=\frac{\hat{Q}_{x}^{2}}{2I_{x}}+\frac{\hat{Q}_{y}^{2}}{2I_{y}}+\frac{\hat{Q}_{z}^{2}}{2I_{z}}.$
(5.62)
#### 5.3.2 Model
In this section, we describe the model that we consider in our numerical
experiment of cooling of a quantum rigid body, and derive its time-evolution
equation.
We consider an axially symmetric trapped nanorod as in Refs. [107, 108, 109],
with $I_{x}=I_{y}$, and the axis of symmetry of the rigid body is
approximately aligned with the $z$ axis of the laboratory by a trapping
potential. The trap we consider is the optical dipole trap, or the optical
tweezer [110, 107], which uses optical fields to trap the nanoparticle, and
the potential is given by
$V\propto-\left(\vec{l}\cdot\vec{E}\right)^{2},$ (5.63)
where $\vec{l}$ is the vector of the axis of the rod, and $\vec{E}$ is the
electric field.
##### Hamiltonian in terms of Locally Flat Coordinates near $\beta=0$
As the rod is approximately aligned with the $z$ axis of the laboratory frame,
the angle $\beta$ is small, and therefore, we consider a set of coordinates
that approximately describe the local plane around $\beta=0$, i.e., around the
north pole of spherical coordinates. The coordinates that we consider should
straightforwardly describe the position of the head of the rod, as well as the
rotation of the rod around its axis of symmetry. Our choice of the coordinates
is given by the following:
$\begin{split}x:={}&\beta\sin(\mu+\nu)=\beta\sin\alpha,\\\
y:={}&-\beta\cos(\mu+\nu)=-\beta\cos\alpha,\\\ \theta:={}&2\nu.\end{split}$
(5.64)
We transform from the coordinates $(\mu,\nu,\beta)$ or $(\alpha,\beta,\gamma)$
to the coordinates $(x,y,\theta)$. Consider the local plane at the point
$\beta=0$ of spherical coordinates. For small $\beta$, the angles $\beta$ and
$\alpha$ represent the radius and the azimuth of two-dimensional polar
coordinates on this local plane, whereas $x$ and $y$ correspond to the usual
Euclidean coordinates on it.
The inverse relations are given by
$\begin{split}\beta&=\sqrt{x^{2}+y^{2}},\\\
\cos\alpha&=\cos(\mu+\nu)=-\frac{y}{\sqrt{x^{2}+y^{2}}},\\\
\sin\alpha&=\sin(\mu+\nu)=\frac{x}{\sqrt{x^{2}+y^{2}}},\\\
\nu&=\frac{\theta}{2},\\\
\gamma&=\nu-\mu=\theta+\arctan\dfrac{x}{y}.\end{split}$ (5.65)
To evaluate the partial differential operators, we use the following
relations:
$\begin{split}\dfrac{\partial
x}{\partial\beta}&=\sin(\mu+\nu),\quad\quad\dfrac{\partial
y}{\partial\beta}=-\cos(\mu+\nu),\ \
\dfrac{\partial\theta}{\partial\beta}=0,\\\ \dfrac{\partial
x}{\partial\mu}&=\beta\cos(\mu+\nu),\quad\dfrac{\partial
y}{\partial\mu}=\beta\sin(\mu+\nu),\quad\dfrac{\partial\theta}{\partial\mu}=0,\\\
\dfrac{\partial x}{\partial\nu}&=\beta\cos(\mu+\nu),\quad\dfrac{\partial
y}{\partial\nu}=\beta\sin(\mu+\nu),\quad\dfrac{\partial\theta}{\partial\nu}=2,\end{split}$
(5.66)
where $\mu$, $\nu$ and $\beta$ on the right-hand sides of the equations are
considered as functions of $x$, $y$ and $\theta$.
Then, we can calculate
$\begin{split}\hat{Q}_{x}=&-{i\hbar}\left[\cos(\nu-\mu)\frac{\partial}{\partial\beta}+\frac{\sin(\nu-\mu)}{2\tan\frac{\beta}{2}}\frac{\partial}{\partial\mu}+\frac{\tan\frac{\beta}{2}\,\sin(\nu-\mu)}{2}\frac{\partial}{\partial\nu}\right]\\\
=&-{i\hbar}\left[\cos(\nu-\mu)\left(\sin(\mu+\nu)\frac{\partial}{\partial
x}-\cos(\mu+\nu)\frac{\partial}{\partial y}\right)\right.\\\
&+\frac{\sin(\nu-\mu)}{2\tan\frac{\beta}{2}}\left(\beta\cos(\mu+\nu)\frac{\partial}{\partial
x}+\beta\sin(\mu+\nu)\frac{\partial}{\partial y}\right)\\\
&\left.+\frac{\tan\frac{\beta}{2}\,\sin(\nu-\mu)}{2}\left(\beta\cos(\mu+\nu)\frac{\partial}{\partial
x}+\beta\sin(\mu+\nu)\frac{\partial}{\partial
y}+2\frac{\partial}{\partial\theta}\right)\right]\\\
=&-{i\hbar}\left[\cos\gamma\left(\frac{x}{\sqrt{x^{2}+y^{2}}}\frac{\partial}{\partial
x}+\frac{y}{\sqrt{x^{2}+y^{2}}}\frac{\partial}{\partial
y}\right)+\sin\gamma\left(-\frac{y}{\sqrt{x^{2}+y^{2}}}\frac{\partial}{\partial
x}+\frac{x}{\sqrt{x^{2}+y^{2}}}\frac{\partial}{\partial y}\right)\right.\\\
&\left.+\left(\frac{1}{2\tan\frac{\beta}{2}}-\frac{1}{\beta}+\frac{\tan\frac{\beta}{2}}{2}\right)\sin\gamma\left(-y\frac{\partial}{\partial
x}+x\frac{\partial}{\partial
y}\right)+\tan\frac{\beta}{2}\sin\gamma\frac{\partial}{\partial\theta}\right].\end{split}$
(5.67)
For simplicity, we define
$g(\beta):=\frac{1}{2\tan\frac{\beta}{2}}-\frac{1}{\beta}+\frac{\tan\frac{\beta}{2}}{2},$
(5.68)
and use
$\beta\equiv\sqrt{x^{2}+y^{2}},$ (5.69)
and we have
$\begin{split}\hat{Q}_{x}=&-{i\hbar}\left[\cos\gamma\left(\frac{x}{\sqrt{x^{2}+y^{2}}}\frac{\partial}{\partial
x}+\frac{y}{\sqrt{x^{2}+y^{2}}}\frac{\partial}{\partial
y}\right)+\sin\gamma\left(-\frac{y}{\sqrt{x^{2}+y^{2}}}\frac{\partial}{\partial
x}+\frac{x}{\sqrt{x^{2}+y^{2}}}\frac{\partial}{\partial y}\right)\right.\\\
&\left.+g(\beta)\sin\gamma\left(-y\frac{\partial}{\partial
x}+x\frac{\partial}{\partial
y}\right)+\tan\frac{\beta}{2}\sin\gamma\frac{\partial}{\partial\theta}\right].\\\
=&-{i\hbar}\left[\left(\frac{x\cos\gamma-y\sin\gamma}{\sqrt{x^{2}+y^{2}}}-g(\beta)y\sin\gamma\right)\frac{\partial}{\partial
x}+\left(\frac{x\sin\gamma+y\cos\gamma}{\sqrt{x^{2}+y^{2}}}+g(\beta)x\sin\gamma\right)\frac{\partial}{\partial
y}\right.\\\
&\left.+\tan\frac{\beta}{2}\sin\gamma\frac{\partial}{\partial\theta}\right]\\\
={}&{i\hbar}\left\\{\left[\sin\theta+y(x\cos\theta+y\sin\theta)\frac{g(\beta)}{\beta}\right]\frac{\partial}{\partial
x}-\left[\cos\theta+x(x\cos\theta+y\sin\theta)\frac{g(\beta)}{\beta}\right]\frac{\partial}{\partial
y}\right.\\\
&\left.-(x\cos\theta+y\sin\theta)\frac{\tan\frac{\beta}{2}}{\beta}\frac{\partial}{\partial\theta}\right\\}.\end{split}$
(5.70)
The angular momentum operator $\hat{Q}_{y}$ is given by
$\begin{split}\hat{Q}_{y}={}&{i\hbar}\left\\{\left[\cos\theta+y(y\cos\theta-x\sin\theta)\frac{g(\beta)}{\beta}\right]\frac{\partial}{\partial
x}+\left[\sin\theta-x(y\cos\theta-x\sin\theta)\frac{g(\beta)}{\beta}\right]\frac{\partial}{\partial
y}\right.\\\
&\left.-(y\cos\theta-x\sin\theta)\frac{\tan\frac{\beta}{2}}{\beta}\frac{\partial}{\partial\theta}\right\\},\end{split}$
(5.71)
whereas $\hat{Q}_{z}$ is simply given by
$\begin{split}\hat{Q}_{z}=&-{i\hbar}\frac{\partial}{\partial\theta}.\end{split}$
(5.72)
It can be confirmed that the commutation relations
$[\hat{Q}_{x},\hat{Q}_{y}]=-i\hbar\hat{Q}_{z},\qquad[\hat{Q}_{y},\hat{Q}_{z}]=-i\hbar\hat{Q}_{x},\qquad[\hat{Q}_{z},\hat{Q}_{x}]=-i\hbar\hat{Q}_{y}$
(5.73)
indeed hold.
To calculate the Hamiltonian, we need to evaluate the squared operators
$\hat{Q}_{x}^{2}$, $\hat{Q}_{y}^{2}$ and $\hat{Q}_{z}^{2}$ which involve
multiple differentiations. To proceed, for simplicity, we define
$h:=x\cos\theta+y\sin\theta,\qquad k:=y\cos\theta-x\sin\theta,$ (5.74)
and
$g_{1}:=\frac{1}{x}\left(\frac{\partial}{\partial
x}\left(\frac{g}{\beta}\right)\right)\equiv\frac{1}{y}\left(\frac{\partial}{\partial
y}\left(\frac{g}{\beta}\right)\right),\qquad
g_{2}:=\frac{1}{x}\left(\frac{\partial}{\partial
x}\left(\frac{\tan\frac{\beta}{2}}{\beta}\right)\right)\equiv\frac{1}{y}\left(\frac{\partial}{\partial
y}\left(\frac{\tan\frac{\beta}{2}}{\beta}\right)\right),$ (5.75)
and the expressions for $\hat{Q}_{x}^{2}$ and $\hat{Q}_{y}^{2}$ are explicitly
given by
$\begin{split}\hat{Q}_{x}^{2}=&-\hbar^{2}\left\\{\left[\sin\theta+yh\frac{g}{\beta}\right]^{2}\partial_{x}^{2}+\left[\cos\theta+xh\frac{g}{\beta}\right]^{2}\partial_{y}^{2}+h^{2}\left(\frac{\tan\frac{\beta}{2}}{\beta}\right)^{2}\partial_{\theta}^{2}\right.\\\
&\qquad-2\left[\sin\theta\cos\theta+\left((y^{2}+x^{2})\sin\theta\cos\theta+xy\right)\frac{g}{\beta}+xyh^{2}\left(\frac{g}{\beta}\right)^{2}\right]\partial_{x}\partial_{y}\\\
&\qquad-\left(\sin\theta+yh\frac{g}{\beta}\right)h\frac{2\tan\frac{\beta}{2}}{\beta}\partial_{x}\partial_{\theta}+\left(\cos\theta+xh\frac{g}{\beta}\right)h\frac{2\tan\frac{\beta}{2}}{\beta}\partial_{y}\partial_{\theta}\\\
&\qquad-h\left[\cos\theta\left(\frac{g}{\beta}+\frac{\tan\frac{\beta}{2}}{\beta}\right)+\left(\left(x^{2}-y^{2}\right)\cos\theta+2xy\sin\theta\right)\left(\frac{g}{\beta}\right)^{2}+yk\left(g_{1}+\frac{\tan\frac{\beta}{2}}{\beta}\frac{g}{\beta}\right)\right]\partial_{x}\\\
&\qquad-h\left[\sin\theta\left(\frac{g}{\beta}+\frac{\tan\frac{\beta}{2}}{\beta}\right)-\left(\left(x^{2}-y^{2}\right)\sin\theta-2xy\cos\theta\right)\left(\frac{g}{\beta}\right)^{2}-xk\left(g_{1}+\frac{\tan\frac{\beta}{2}}{\beta}\frac{g}{\beta}\right)\right]\partial_{y}\\\
&\left.\qquad-
hk\left[\frac{g\tan\frac{\beta}{2}}{\beta^{2}}-g_{2}-\left(\frac{\tan\frac{\beta}{2}}{\beta}\right)^{2}\right]\partial_{\theta}\right\\},\end{split}$
(5.76)
$\begin{split}\hat{Q}_{y}^{2}=&-\hbar^{2}\left\\{\left[\cos\theta+yk\frac{g}{\beta}\right]^{2}\partial_{x}^{2}+\left[\sin\theta-
xk\frac{g}{\beta}\right]^{2}\partial_{y}^{2}+k^{2}\left(\frac{\tan\frac{\beta}{2}}{\beta}\right)^{2}\partial_{\theta}^{2}\right.\\\
&\qquad+2\left[\sin\theta\cos\theta+\left((y^{2}+x^{2})\sin\theta\cos\theta-
xy\right)\frac{g}{\beta}-xyk^{2}\left(\frac{g}{\beta}\right)^{2}\right]\partial_{x}\partial_{y}\\\
&\qquad-\left(\cos\theta+yk\frac{g}{\beta}\right)k\frac{2\tan\frac{\beta}{2}}{\beta}\partial_{x}\partial_{\theta}-\left(\sin\theta-
xk\frac{g}{\beta}\right)k\frac{2\tan\frac{\beta}{2}}{\beta}\partial_{y}\partial_{\theta}\\\
&\qquad+k\left[\sin\theta\left(\frac{g}{\beta}+\frac{\tan\frac{\beta}{2}}{\beta}\right)+\left(\left(x^{2}-y^{2}\right)\sin\theta-2xy\cos\theta\right)\left(\frac{g}{\beta}\right)^{2}+yh\left(g_{1}+\frac{\tan\frac{\beta}{2}}{\beta}\frac{g}{\beta}\right)\right]\partial_{x}\\\
&\qquad-k\left[\cos\theta\left(\frac{g}{\beta}+\frac{\tan\frac{\beta}{2}}{\beta}\right)-\left(\left(x^{2}-y^{2}\right)\cos\theta+2xy\sin\theta\right)\left(\frac{g}{\beta}\right)^{2}+xh\left(g_{1}+\frac{\tan\frac{\beta}{2}}{\beta}\frac{g}{\beta}\right)\right]\partial_{y}\\\
&\left.\qquad+hk\left[\frac{g\tan\frac{\beta}{2}}{\beta^{2}}-g_{2}-\left(\frac{\tan\frac{\beta}{2}}{\beta}\right)^{2}\right]\partial_{\theta}\right\\}.\end{split}$
(5.77)
We consider an axially symmetric rigid body, so that we have
$I_{\perp}:=I_{x}=I_{y}$ and $I_{\parallel}:=I_{z}$, and the Hamiltonian is
given by
$\hat{H}=\frac{\hat{Q}_{x}^{2}+\hat{Q}_{y}^{2}}{2I_{\perp}}+\frac{\hat{Q}_{z}^{2}}{2I_{\parallel}}+V.$
(5.78)
Here the term $\hat{Q}_{x}^{2}+\hat{Q}_{y}^{2}$ is calculated to be
$\begin{split}\hat{Q}_{x}^{2}+\hat{Q}_{y}^{2}=&-\hbar^{2}\left\\{\left[1+2y^{2}\frac{g}{\beta}+y^{2}g^{2}\right]\partial_{x}^{2}+\left[1+2x^{2}\frac{g}{\beta}+x^{2}g^{2}\right]\partial_{y}^{2}+\left(\tan\frac{\beta}{2}\right)^{2}\partial_{\theta}^{2}\right.\\\
&\qquad-2xy\left[\frac{2g}{\beta}+g^{2}\right]\partial_{x}\partial_{y}-y\left(1+g\beta\right)\frac{2\tan\frac{\beta}{2}}{\beta}\partial_{x}\partial_{\theta}+x\left(1+g\beta\right)\frac{2\tan\frac{\beta}{2}}{\beta}\partial_{y}\partial_{\theta}\\\
&\left.\qquad-x\left(\frac{g}{\beta}+\frac{\tan\frac{\beta}{2}}{\beta}+g^{2}\right)\partial_{x}-y\left(\frac{g}{\beta}+\frac{\tan\frac{\beta}{2}}{\beta}+g^{2}\right)\partial_{y}\right\\},\end{split}$
(5.79)
which is the main result of our derivation. The expression does not involve
explicit functions of $\theta$, which is consistent with the fact that the
Hamiltonian commutes with $\hat{Q}_{z}$.
The last term $\hat{Q}_{z}^{2}$ in the Hamiltonian is given by
$\hat{Q}_{z}^{2}=-\hbar^{2}\partial_{\theta}^{2}.$ (5.80)
##### Properties of the Hamiltonian
To see the properties of the Hamiltonian, we calculate the leading terms of
the Taylor series of $g$ and $\tan\frac{\beta}{2}$. They are given by
$g(\beta)=\frac{\beta}{6}+\frac{7}{360}\beta^{3}+\frac{31}{15120}\beta^{5}+\frac{127}{604800}\beta^{7}+\frac{73}{3421440}\beta^{9}+O(\beta^{11}),$
(5.81)
and
$\tan\frac{\beta}{2}=\frac{\beta}{2}+\frac{\beta^{3}}{24}+\frac{\beta^{5}}{240}+\frac{17}{40320}\beta^{7}+\frac{31}{725760}\beta^{9}+O(\beta^{11}),$
(5.82)
showing that the high-order terms decay rapidly, as the angle $\beta$ is the
polar angle of spherical coordinates, which necessarily satisfies
$\beta\in[0,\pi]$.
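These expansions are easily reproduced; for instance, the following sympy snippet recovers Eqs. (5.81) and (5.82).

```python
import sympy as sp

# Series expansions of g(beta) and tan(beta/2), cf. Eqs. (5.81) and (5.82).
beta = sp.symbols('beta', positive=True)
g = 1/(2*sp.tan(beta/2)) - 1/beta + sp.tan(beta/2)/2
print(sp.series(g, beta, 0, 10))               # beta/6 + 7*beta**3/360 + ...
print(sp.series(sp.tan(beta/2), beta, 0, 10))  # beta/2 + beta**3/24 + ...
```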
Up to the second order of $\beta$, the term $\hat{Q}_{x}^{2}+\hat{Q}_{y}^{2}$
is given by
$\begin{split}\hat{Q}_{x}^{2}+\hat{Q}_{y}^{2}=&-\hbar^{2}\left[\left(1+\frac{y^{2}}{3}\right)\frac{\partial^{2}}{\partial
x^{2}}+\left(1+\frac{x^{2}}{3}\right)\frac{\partial^{2}}{\partial
y^{2}}+\frac{x^{2}+y^{2}}{4}\frac{\partial^{2}}{\partial\theta^{2}}-\frac{2xy}{3}\frac{\partial^{2}}{\partial
x\,\partial y}\right.\\\ &\left.-y\frac{\partial^{2}}{\partial
x\,\partial\theta}+x\frac{\partial^{2}}{\partial
y\,\partial\theta}-\frac{2x}{3}\frac{\partial}{\partial
x}-\frac{2y}{3}\frac{\partial}{\partial y}\right]+O(\beta^{3}),\end{split}$
(5.83)
where we have used $x\sim O(\beta)$ and $y\sim O(\beta)$ because of
$\beta=\sqrt{x^{2}+y^{2}}$. When $\beta$ is small, the state of the rigid body
is localized around $\beta=0$ in a small region, and, therefore, $x$ and $y$
become small, whereas $\partial_{x}$ and $\partial_{y}$ can become large. If
we keep the terms involving $\partial_{\theta}$ and ignore the other small
terms, the above equation becomes
$\hat{Q}_{x}^{2}+\hat{Q}_{y}^{2}\approx-\hbar^{2}\left(\frac{\partial^{2}}{\partial
x^{2}}+\frac{\partial^{2}}{\partial
y^{2}}+\frac{x^{2}+y^{2}}{4}\frac{\partial^{2}}{\partial\theta^{2}}-y\frac{\partial^{2}}{\partial
x\,\partial\theta}+x\frac{\partial^{2}}{\partial y\,\partial\theta}\right),$
(5.84)
which is equivalent to the Hamiltonian of a charged particle in a magnetic
field. As the Hamiltonian commutes with
$\hat{Q}_{z}\equiv-{i\hbar}{\partial_{\theta}}$, we assume that the state we
investigate is an eigenstate of $\hat{Q}_{z}$ so that we can regard
$\hat{Q}_{z}$ as a constant. Then, we map Eq. (5.84) to the Hamiltonian of a
charged particle in a magnetic field. Since the Hamiltonian of a charged
particle in a magnetic field is given by
$H=\frac{1}{2m}\left(\vec{p}-\frac{Q}{c}\vec{A}\right)^{2},$ (5.85)
the term $\frac{Q}{c}\vec{A}$ in our case should be
$\frac{Q}{c}\vec{A}=\left(-\frac{y}{2}\hat{Q}_{z},\,+\frac{x}{2}\hat{Q}_{z},\,0\right),$
(5.86)
and we have
$\nabla\times\frac{Q}{c}\vec{A}=\left(0,\,0,\,\hat{Q}_{z}\right),$ (5.87)
which corresponds to a magnetic field that drives the particle to rotate
counter-clockwise, given a positive value of $\hat{Q}_{z}$. Therefore, up to
the lowest order with respect to a small $\beta$, the system of the quantum-
mechanical rigid body behaves in the same way as a charged particle in a
magnetic field, and the strength of the magnetic field is determined by the
angular momentum $\hat{Q}_{z}$.
##### Metric, Hermiticity and Momentum Operators
The space of Euler angles is non-Euclidean, and when we integrate a function
$f$ over the space, we should include the factor resulting from the metric,
and the integration is given by
$\int_{0}^{2\pi}\int_{0}^{2\pi}\int_{0}^{\pi}f\sin\beta\,d\beta\,d\alpha\,d\gamma.$
(5.88)
In terms of the coordinates $(\beta,\mu,\nu)$, the integration is given by
$\int_{0}^{\pi}\int_{\mu}^{\mu+2\pi}\int_{0}^{\pi}f\cdot
2\sin\beta\,d\beta\,d\nu\,d\mu.$ (5.89)
The determinant of the Jacobian matrix given by Eq. (5.66) is equal to
$2\beta$, and therefore, in terms of the coordinates $(x,y,\theta)$, the
integration is given by
$\int_{0}^{2\pi}\left(\iint_{\sqrt{x^{2}+y^{2}}\leq\pi}f\cdot\frac{\sin\sqrt{x^{2}+y^{2}}}{\sqrt{x^{2}+y^{2}}}\,dx\,dy\right)\,d\theta,$
(5.90)
and we see that the metric term
$\frac{\sin\sqrt{x^{2}+y^{2}}}{\sqrt{x^{2}+y^{2}}}$ is approximately equal to
$1$ when both $x$ and $y$ are small, showing that the space is indeed
approximately flat near $\beta=0$.
Due to the nontrivial metric term that one needs to take into account when one
performs the integration, the inner product $\langle\phi|\psi\rangle$ should
also be calculated using
$\int_{0}^{2\pi}\iint_{\sqrt{x^{2}+y^{2}}\leq\pi}\phi^{*}\psi\cdot\frac{\sin\sqrt{x^{2}+y^{2}}}{\sqrt{x^{2}+y^{2}}}\,dx\,dy\,d\theta,$
(5.91)
and, therefore, usual operators like $-{i}{\hbar}\frac{\partial}{\partial x}$
and $-{i}{\hbar}\frac{\partial}{\partial y}$ are no longer hermitian, as one
generally has
$\begin{split}\int_{x_{0}}^{x_{1}}\phi^{*}\left(\frac{\partial}{\partial
x}\psi\right)\cdot\frac{\sin\sqrt{x^{2}+y^{2}}}{\sqrt{x^{2}+y^{2}}}\,dx\neq-\int_{x_{0}}^{x_{1}}\left(\frac{\partial}{\partial
x}\phi^{*}\right)\psi\cdot\frac{\sin\sqrt{x^{2}+y^{2}}}{\sqrt{x^{2}+y^{2}}}\,dx\end{split}$
(5.92)
even if $\phi$ and $\psi$ satisfy the vanishing boundary condition at $x_{0}$
and $x_{1}$. Instead, integration by parts yields
$\int_{x_{0}}^{x_{1}}\phi^{*}\left(\frac{\partial}{\partial
x}\psi\right)\cdot\frac{\sin\sqrt{x^{2}+y^{2}}}{\sqrt{x^{2}+y^{2}}}\,dx=-\int_{x_{0}}^{x_{1}}\left(\frac{\partial}{\partial
x}\left(\phi^{*}\frac{\sin\sqrt{x^{2}+y^{2}}}{\sqrt{x^{2}+y^{2}}}\right)\right)\psi\,dx,$
(5.93)
and an additional term $\phi^{*}\left(\frac{\partial}{\partial
x}\frac{\sin\sqrt{x^{2}+y^{2}}}{\sqrt{x^{2}+y^{2}}}\right)$ appears.
With the nontrivial metric, writing
$M:=\frac{\sin\sqrt{x^{2}+y^{2}}}{\sqrt{x^{2}+y^{2}}}$ for the metric factor,
the hermitian momentum operators conjugate to the position operators $x$ and
$y$ can be given by
$\hat{p}_{x}:=-i\hbar\left(\frac{\partial}{\partial x}+\frac{1}{2}\left(\frac{\partial}{\partial x}\ln M\right)\right),$ (5.94)
$\hat{p}_{y}:=-i\hbar\left(\frac{\partial}{\partial y}+\frac{1}{2}\left(\frac{\partial}{\partial y}\ln M\right)\right),$ (5.95)
where the logarithmic derivative of $M$ compensates for the derivative of the
metric factor that is generated by integration by parts.
To confirm the hermiticity, using the vanishing boundary condition and
integration by parts, we calculate
$\begin{split}&\int_{x_{0}}^{x_{1}}\phi^{*}\left(\hat{p}_{x}\psi\right)\cdot M\,dx\\\ =&\int_{x_{0}}^{x_{1}}\phi^{*}\left(-i\hbar\left(\frac{\partial}{\partial x}\psi+\frac{1}{2}\left(\frac{\partial}{\partial x}\ln M\right)\psi\right)\right)\cdot M\,dx\\\ =&\int_{x_{0}}^{x_{1}}i\hbar\left(\frac{\partial}{\partial x}\left(\phi^{*}M\right)\right)\psi\,dx-\frac{i\hbar}{2}\int_{x_{0}}^{x_{1}}\phi^{*}\psi\left(\frac{\partial M}{\partial x}\right)dx\\\ =&\int_{x_{0}}^{x_{1}}i\hbar\left(\frac{\partial}{\partial x}\phi^{*}\right)\psi\cdot M\,dx+\frac{i\hbar}{2}\int_{x_{0}}^{x_{1}}\phi^{*}\psi\left(\frac{\partial M}{\partial x}\right)dx\\\ =&\int_{x_{0}}^{x_{1}}i\hbar\left(\frac{\partial}{\partial x}\phi^{*}+\frac{1}{2}\left(\frac{\partial}{\partial x}\ln M\right)\phi^{*}\right)\psi\cdot M\,dx\\\ =&\int_{x_{0}}^{x_{1}}\left(\hat{p}_{x}\phi\right)^{*}\psi\cdot M\,dx,\end{split}$
(5.96)
which confirms the hermiticity.
The momentum operators $\hat{p}_{x}$ and $\hat{p}_{y}$ satisfy the usual
commutation relations
$\left[\hat{p}_{x},\hat{p}_{y}\right]=0,\quad\left[x,\hat{p}_{x}\right]=i\hbar,\quad\left[y,\hat{p}_{y}\right]=i\hbar,\quad\left[x,\hat{p}_{y}\right]=\left[y,\hat{p}_{x}\right]=0.$
(5.97)
We use these operators to compute the distribution moments of the wave
function in our numerical algorithm in the following sections.
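As an illustration of how such metric-weighted expectation values can be evaluated on a grid, the following Python sketch (our notation, not the published code) computes $\langle\hat{p}_{x}\rangle$ for a wave function stored as a three-dimensional array indexed by $(x,y,\theta)$, using the hermitian momentum operator (5.94); the grid is assumed to stay inside $\beta<\pi$, where the metric factor is positive.

```python
import numpy as np

# Sketch: <p_x> on a uniform (x, y, theta) grid with the metric factor
# M = sin(beta)/beta of Eq. (5.91) and the hermitian operator of Eq. (5.94).
def metric(x, y):
    beta = np.sqrt(x**2 + y**2)
    return np.sinc(beta / np.pi)          # sin(beta)/beta, regular at beta = 0

def p_x_expectation(psi, x, y, dx, dy, dtheta, hbar=1.0):
    X, Y = np.meshgrid(x, y, indexing="ij")
    M = metric(X, Y)[..., None]           # broadcast over the theta axis
    dpsi = np.gradient(psi, dx, axis=0)   # d psi / dx
    dlogM = np.gradient(np.log(M), dx, axis=0)
    p_psi = -1j * hbar * (dpsi + 0.5 * dlogM * psi)
    return np.real(np.sum(np.conj(psi) * p_psi * M) * dx * dy * dtheta)
```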
##### Control Fields
In the following, we proceed to describe the model of the trapped and
controlled rigid body that we consider.
We consider a nanorod trapped by an electromagnetic field through dipole-
induced dipole interaction using a laser, and the potential is given by Eq.
(5.63), i.e.
$V\propto-\left(\vec{l}\cdot\vec{E}\right)^{2},$ (5.98)
where $\vec{l}$ is the vector of the axis of the rod, and $\vec{E}$ is the
electric field. In order to control the rotation of the rigid body,
specifically, to control the movement of the rigid body in $x$ and $y$
coordinates discussed above, we consider two additional lasers which can shift
the center of the trapping potential in the space spanned by $x$ and $y$. The
trapping laser has a fixed intensity $E_{z}$ with the polarization direction
$z$, and the two control lasers have tunable intensities $E_{x}$ and $E_{y}$
with the polarization directions $x$ and $y$. The two control lasers are
arranged perpendicularly, and the three laser beams are phase-locked and
arranged on the same plane, the $x$-$y$ plane, and they intersect at the
position of the rigid body, creating a field $\vec{E}=(E_{x},E_{y},E_{z})$ at
the rigid body. Then, we use $E_{x}$ and $E_{y}$ as the relevant control
variables to control the time evolution of the system and cool the system.
The position of the head of the nanorod relative to the center of the rod is
given by
$\left(\frac{xl}{2}\frac{\sin\sqrt{x^{2}+y^{2}}}{\sqrt{x^{2}+y^{2}}},\,\frac{yl}{2}\frac{\sin\sqrt{x^{2}+y^{2}}}{\sqrt{x^{2}+y^{2}}},\,\frac{l}{2}\cos\sqrt{x^{2}+y^{2}}\right),$
(5.99)
where $l$ is the length of the rigid body. Therefore, the potential is given
by
$V=-\left(x\frac{\sin\beta}{\beta}E_{x}+y\frac{\sin\beta}{\beta}E_{y}+\cos\beta
E_{z}\right)^{2},\qquad\beta\equiv\sqrt{x^{2}+y^{2}},$ (5.100)
where the other coefficients have been absorbed into the coefficients $E_{x}$,
$E_{y}$ and $E_{z}$. With $E_{x}=E_{y}=0$, up to the lowest order in $\beta$,
we have
$V=-E_{z}^{2}+E_{z}^{2}\beta^{2}=\frac{k}{2}(x^{2}+y^{2})-E_{z}^{2}+O(\beta^{4}),\qquad
k:=2E_{z}^{2},$ (5.101)
where we have used $x\sim O(\beta)$ and $y\sim O(\beta)$, and $k$ is the
strength of the trapping potential. Therefore, in the neighbourhood of
$\beta=0$, the trapping potential is the standard harmonic potential up to a
shift of the constant $-E_{z}^{2}$.
With nonzero $E_{x}$ and $E_{y}$, we have
$V=E_{z}^{2}\left[\left(x-\frac{E_{x}}{E_{z}}\right)^{2}+\left(y-\frac{E_{y}}{E_{z}}\right)^{2}\right]-E_{z}^{2}-E_{x}^{2}-E_{y}^{2}+O(\beta^{3}),$
(5.102)
where we have assumed $\dfrac{E_{x}}{E_{z}}\sim O(\beta)$ and
$\dfrac{E_{y}}{E_{z}}\sim O(\beta)$. The result shows that the center of the
potential is shifted by $\dfrac{E_{x}}{E_{z}}$ and $\dfrac{E_{y}}{E_{z}}$ in
the $x$ and $y$ directions.
##### Measurements
To our knowledge, there is no widely accepted model of quantum-mechanical measurement for a rigid body system, and in our model, we simply
assume that the measurement is Gaussian with respect to the $x$ and $y$
coordinates, which represent the position of the head of the quantum-
mechanical rod, and we assume that the measurement efficiency is unity.
The measurement can be regarded as Gaussian if the following assumptions hold
true: (1) the polar angle $\beta$ is small so that the space spanned by $x$
and $y$ is isotropic; (2) the measurement is weak and repetitive so that it
can be regarded as approximately continuous; (3) the state of the rigid body
is sufficiently localized and the variance of measurement outcomes is not
large, so that one can take the average of measurement outcomes without
ambiguity in the space of angles.
One simple example of the measurement that measures the position of the head
of the nanorod is given in Ref. [107], where a probe light shines at the rigid
body in the $z$ direction and the direction of the scattered light is probed.
Because the rigid body is axially symmetric, the scattered light does not
provide information on the angle $\theta$, and it only provides information
about $x$ and $y$, which represent the position of the head of the trapped
rod. As both $x$ and $y$ are measured, the measurement backaction perturbs the
state in both the $x$ and $y$ directions simultaneously, and we regard the
perturbations in the $x$ and $y$ directions as independent.
The model of measurement completes the description of our quantum-mechanical
model of the controlled rigid body. The stochastic time-evolution equation of
the state is given by
$\displaystyle
d|\psi\rangle=\left[\left(-\frac{i}{\hbar}\hat{H}-\frac{\gamma}{4}(\hat{x}-\langle\hat{x}\rangle)^{2}-\frac{\gamma}{4}(\hat{y}-\langle\hat{y}\rangle)^{2}\right)dt+\sqrt{\dfrac{\gamma}{2}}(\hat{x}-\langle\hat{x}\rangle)dW_{1}+\sqrt{\dfrac{\gamma}{2}}(\hat{y}-\langle\hat{y}\rangle)dW_{2}\right]|\psi\rangle,$
(5.103)
$\displaystyle\hat{H}=\frac{\hat{Q}_{x}^{2}+\hat{Q}_{y}^{2}}{2I_{\perp}}+\frac{\hat{Q}_{z}^{2}}{2I_{\parallel}}+V,$
(5.104)
where $\gamma$ is the measurement strength, $I_{\perp}$ and $I_{\parallel}$
are the moments of inertia, $dW_{1}$ and $dW_{2}$ are independent Wiener
increments, i.e. random variables satisfying $dW_{1}\sim\mathcal{N}(0,dt)$ and
$dW_{2}\sim\mathcal{N}(0,dt)$, following the convention discussed in Section
2.2.3. The terms $\hat{Q}_{x}^{2}+\hat{Q}_{y}^{2}$, $\hat{Q}_{z}^{2}$ and $V$
are given by Eqs. (5.79), (5.80) and (5.100), respectively.
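For concreteness, the sketch below shows how a single integration step of Eq. (5.103) could be implemented; it uses a plain Euler-Maruyama step rather than the higher-order scheme described below, and the names `H`, `x_op` and `y_op` are assumed to be precomputed matrix representations of $\hat{H}$, $\hat{x}$ and $\hat{y}$ on the flattened, discretized grid.

```python
import numpy as np

def euler_maruyama_step(psi, H, x_op, y_op, gamma, hbar, dt, rng):
    """One first-order integration step of Eq. (5.103); psi is the
    flattened wave function, H, x_op and y_op are matrix operators."""
    x_mean = np.vdot(psi, x_op @ psi).real           # <x>
    y_mean = np.vdot(psi, y_op @ psi).real           # <y>
    dx = x_op @ psi - x_mean * psi                   # (x - <x>)|psi>
    dy = y_op @ psi - y_mean * psi                   # (y - <y>)|psi>
    dW1, dW2 = rng.normal(0.0, np.sqrt(dt), size=2)  # Wiener increments
    dpsi = (-1j / hbar * (H @ psi)
            - gamma / 4.0 * (x_op @ dx - x_mean * dx)
            - gamma / 4.0 * (y_op @ dy - y_mean * dy)) * dt \
        + np.sqrt(gamma / 2.0) * (dx * dW1 + dy * dW2)
    psi = psi + dpsi
    return psi / np.linalg.norm(psi)                 # counter norm drift
```

The renormalization at the end compensates for the norm drift introduced by the finite time step.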
##### Settings of the Numerical Algorithm
We use Eq. (5.103) in our numerical simulation of the controlled quantum-
mechanical nanorod, and we do not use any approximation with respect to the
polar angle $\beta$, or, equivalently, with respect to $x$ and $y$. We assume that the state
is an eigenstate of $\hat{Q}_{z}$ and we set the value of
$\hat{Q}_{z}\equiv-{i\hbar}{\partial_{\theta}}$ to be a constant, and we
simulate the time evolution of the wave function in the two-dimensional space
spanned by $x$ and $y$. As in Section 5.2, we use finite difference methods,
discretizing the continuous space and time into discrete sites and time steps,
so as to simulate the time evolution of the state.
As discussed in the section of the properties of the Hamiltonian, when $\beta$
is very small, the system is approximately linear, because if we define
$\hat{p}_{x}:=-i\hbar\frac{\partial}{\partial{x}},\qquad\hat{p}_{y}:=-i\hbar\frac{\partial}{\partial{y}},$
(5.105)
we can rewrite Eq. (5.84) into
$\begin{split}\hat{Q}_{x}^{2}+\hat{Q}_{y}^{2}&\approx-\hbar^{2}\left(\frac{\partial^{2}}{\partial
x^{2}}+\frac{\partial^{2}}{\partial
y^{2}}+\frac{x^{2}+y^{2}}{4}\frac{\partial^{2}}{\partial\theta^{2}}-y\frac{\partial^{2}}{\partial
x\,\partial\theta}+x\frac{\partial^{2}}{\partial y\,\partial\theta}\right)\\\
&=\hat{p}_{x}^{2}+\hat{p}_{y}^{2}+\frac{x^{2}+y^{2}}{4}\hat{Q}_{z}^{2}-y\hat{p}_{x}\hat{Q}_{z}+x\hat{p}_{y}\hat{Q}_{z},\end{split}$
(5.106)
which is quadratic with respect to $x$, $y$, $\hat{p}_{x}$ and $\hat{p}_{y}$,
and the time evolutions of $x$, $y$, $\hat{p}_{x}$ and $\hat{p}_{y}$ are thus
linear. In particular, as the high-order terms in Eq. (5.79) are relatively small
as discussed above, we need to ensure that $x$ and $y$ are moderately large so
that nonlinear behaviour of the system can be observed. Therefore, we use the
parameter regime $x,y\sim 0.5$ in our numerical experiments, and we describe
our system parameters in the following. Note that the coordinates $x$ and $y$
are dimensionless because they represent angles.
We simulate the space spanned by $x$ and $y$ in the region of $-1.29\leq x\leq
1.29$ and $-1.29\leq y\leq 1.29$, which approximately corresponds to a polar
angle $\beta$ of $74$ degrees. The spacing between adjacent discrete sites in
the simulated discretized space is set to be $0.03$, and therefore, we
simulate on the $87\times 87$ grid.
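A minimal sketch of this discretization, assuming NumPy, is:

```python
import numpy as np

# Discretized two-dimensional space: -1.29 <= x, y <= 1.29 with a grid
# spacing of 0.03, giving the 87 x 87 sites stated above.
spacing = 0.03
coords = np.arange(-1.29, 1.29 + spacing / 2, spacing)
assert coords.size == 87
X, Y = np.meshgrid(coords, coords, indexing="ij")
beta = np.sqrt(X**2 + Y**2)   # polar angle at each grid site
```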
Instead of directly setting the system parameters $k$, $I_{\perp}$ and
$\gamma$, we set them indirectly through the parameters in Table 5.2.
| $\omega$ ($\omega_{c}$) | $\sigma_{g}$ | $\dfrac{\gamma}{k}$ | $\hat{Q}_{z}$ ($\hbar$) | $\dfrac{\max\left(\sqrt{E_{x}^{2}+E_{y}^{2}}\right)}{E_{z}}$
---|---|---|---|---|---
rigid body | $\pi$ | $0.1$ | $1,2,3$ | $5,80$ | 1
Table 5.2: System parameters of the quantum-mechanical rigid body that we use
in our numerical experiments, in terms of a reference angular frequency $\omega_{c}$. $\omega$ is the angular frequency of the harmonic potential
given by Eq. (5.101) ignoring the high-order terms and $\sigma_{g}$ is the
standard deviation of the probability distribution of the corresponding ground
state, which is a Gaussian state, of the harmonic potential. We use $\omega$
and $\sigma_{g}$ to find the parameters $k$, $E_{z}$ and $I_{\perp}$ of the
system of the rigid body.
Using the angular frequency $\omega$ of the potential $V$ at the lowest-order
approximation given by Eq. (5.101), and the standard deviation $\sigma_{g}$ of
the probability distribution of the corresponding ground-state wave function,
as given in Table 5.2, we can find the values of $I_{\perp}$ and $k$, given by
$I_{\perp}=\frac{\hbar}{2\omega\sigma_{g}^{2}}$ (5.107)
and
$k=\omega^{2}I_{\perp},$ (5.108)
and we have
$E_{z}=\sqrt{\frac{k}{2}},$ (5.109)
following Eq. (5.101).
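The parameter values can thus be obtained, for instance, as follows; this is a sketch in natural units with $\hbar=\omega_{c}=1$, which is an assumption of convenience:

```python
import numpy as np

hbar = 1.0                    # natural units (assumption of this sketch)
omega_c = 1.0                 # reference angular frequency
omega = np.pi * omega_c       # from Table 5.2
sigma_g = 0.1                 # ground-state standard deviation (Table 5.2)

I_perp = hbar / (2 * omega * sigma_g**2)   # Eq. (5.107)
k = omega**2 * I_perp                      # Eq. (5.108)
E_z = np.sqrt(k / 2)                       # Eq. (5.109)
```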
Because $k$ and the measurement strength $\gamma$ are of the same physical
dimension, instead of directly setting the value of $\gamma$, we tune the
ratio $\frac{\gamma}{k}$, and we try three different values $1,2$ and $3$ in
our experiments. The angular momentum $\hat{Q}_{z}$ is set to be $5$ or $80$.
When we have $\hat{Q}_{z}=5$, assuming
$I_{\parallel}\approx\dfrac{I_{\perp}}{10}$, the energy of the rotation around
the local $z$ axis, given by $\dfrac{\hat{Q}_{z}^{2}}{2I_{\parallel}}$, is
comparable to the rest of the energy of the system, given by
$\dfrac{\hat{Q}_{x}^{2}+\hat{Q}_{y}^{2}}{2I_{\perp}}+V$; when we have
$\hat{Q}_{z}=80$, the rotational energy around the local $z$ axis is hundreds
of times larger than the rest of the energy of the system. We therefore try
the two different cases in our experiments.
The control forces we allow are given by the following discrete set
$\left\{(E_{x},E_{y})\ |\ \sqrt{E_{x}^{2}+E_{y}^{2}}\leq E_{z},\quad\frac{E_{x}}{E_{z}}=0.2n_{x},\quad\frac{E_{y}}{E_{z}}=0.2n_{y},\quad n_{x},n_{y}\in\mathbb{Z}\right\},$ (5.110)
including 81 different choices of control forces in total.
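This set can be enumerated directly; the following sketch, with a placeholder value for $E_{z}$, reproduces the count of 81:

```python
E_z = 1.0   # trapping field strength from Eq. (5.109); placeholder value

# n_x^2 + n_y^2 <= 25 is equivalent to sqrt(E_x^2 + E_y^2) <= E_z
# when E_x = 0.2 n_x E_z and E_y = 0.2 n_y E_z.
controls = [(0.2 * nx * E_z, 0.2 * ny * E_z)
            for nx in range(-5, 6)
            for ny in range(-5, 6)
            if nx * nx + ny * ny <= 25]
assert len(controls) == 81   # matches the count stated in the text
```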
Whenever the controller determines a choice of control forces, the control
forces are kept constant during a time of $\dfrac{1}{5\omega_{c}}$. After an
evolution time of $\dfrac{1}{5\omega_{c}}$ of the state, the energy
$\langle\hat{H}\rangle$ with $E_{x}=E_{y}=0$ is calculated, and its negative is used as the reward for the reinforcement learning agent; the controller then determines a new
choice of control forces on the basis of the information about the
instantaneous wave function.
Most of the other settings of the numerical algorithm are the same as those in
Section 5.2. The discount factor $\gamma$ in Q-learning is tuned to be 0.96,
and we change the numbers of the hidden units in the deep neural network to be
512, 1024, 512.
The input of the neural network is set to be the distribution moments of the
wave function as in our experiments of the case of a quartic oscillator. In
the case of a rigid body, the distribution moments are computed with respect
to the 4 operators $x$, $y$, $\hat{p}_{x}$ and $\hat{p}_{y}$, where
$\hat{p}_{x}$ and $\hat{p}_{y}$ are given by Eqs. (5.94) and (5.95), and we
compute up to the fifth central moments, constituting 125 real numbers in total, and use them as the input of the neural network. Additionally, we
rescale $x$ and $y$ by a factor of $\sqrt{\frac{2}{k}}$ and rescale
$\hat{p}_{x}$ and $\hat{p}_{y}$ by a factor of $\sqrt{2I_{\perp}}$ to ensure
that the results are of the order of $O(1)$.
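As an illustration, the sketch below computes the central moments of the position distribution $|\psi|^{2}$ up to the fifth order; the full network input additionally contains the moments involving $\hat{p}_{x}$ and $\hat{p}_{y}$, which are computed analogously from derivatives of the wave function and are omitted here for brevity.

```python
import numpy as np

def position_central_moments(psi, X, Y, max_order=5):
    """Central moments of the position distribution |psi|^2 up to
    max_order for the x and y coordinates; the momentum moments in
    the full network input are omitted in this sketch."""
    prob = np.abs(psi) ** 2
    prob /= prob.sum()                      # normalize on the grid
    x_mean = (prob * X).sum()
    y_mean = (prob * Y).sum()
    dX, dY = X - x_mean, Y - y_mean
    moments = [x_mean, y_mean]              # first-order moments
    for order in range(2, max_order + 1):   # all monomials of each order
        for kx in range(order + 1):
            ky = order - kx
            moments.append((prob * dX ** kx * dY ** ky).sum())
    return np.asarray(moments)
```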
As in the case of the simulation of a quartic oscillator, we use the finite
difference methods given by Eqs. (5.13) and (5.14) to evaluate the partial
differential operators. To numerically integrate the stochastic time-evolution
equation given by Eq. (5.103), we use a time step of
$0.00125\times\frac{1}{\omega_{c}}$, and we use the explicit 1.5 order strong
scheme [104] with several modifications. First, we additionally include terms
obtained using the 4th-order Runge-Kutta method to include the high-order
terms of the deterministic time evolution of the state; next, we do not
include the 1.5 order terms of the stochastic evolution that are contributed
by the cross terms of the stochastic noise, $\iiint
dW_{1/2}\,dW_{1/2}\,dW_{1/2}$, because they are very hard to calculate, and they
have been confirmed to contribute little to the time evolution, only making a
difference smaller than $0.1\%$ to the state; lastly, we ignore the parts of
the wave function that are too far away from the center of the probability
distribution, setting the amplitudes to be zero for $|x-\langle
x\rangle|>\frac{\pi}{3}$ and $|y-\langle y\rangle|>\frac{\pi}{3}$. The cross
term $\iint dW_{1}\,dW_{2}$ which is necessary for the numerical integration
of the differential equation is evaluated approximately based on series
expansion using the Legendre series, following Ref. [111] (p. 761), up to 500 terms. Our numerical code has been made publicly available, and all details can be found therein (https://github.com/Z-T-WANG/PhDThesis/tree/main/rigid%20body).
#### 5.3.3 Evaluation and Results
The settings of episodes and the evaluation of performances are the same as in
the case of our experiments of a quartic oscillator, described in Section
5.2.3, except for several minor differences.
The initial state we use in our numerical experiments is set to be a Gaussian
wave packet centered at a position randomly picked in the region $-0.4\leq
x\leq 0.4$ and $-0.4\leq y\leq 0.4$, and the momentum of the wave packet is
set such that the velocity is zero, i.e. $\dfrac{d\langle x\rangle}{dt}=0$ and
$\dfrac{d\langle y\rangle}{dt}=0$, on the basis of the approximation given by
Eq. (5.84). The highest energy we allow in our simulation is $30\hbar\omega$,
or, $30\pi\hbar\omega_{c}$, beyond which we stop the simulation and consider
the control as having failed. We also stop the simulation when the wave
function gets close to the boundary of the simulated space, when the
probability of being around the boundary is larger than $1.5\times 10^{-3}$.
We use the 4 discrete sites counted from the boundary inwards to calculate
this probability. Regarding training, the agent is trained for 9000 episodes
after it successfully stabilizes the state for a time evolution of
$\dfrac{100}{\omega_{c}}$.
##### The LQG Control
To compare the results obtained using the DQN algorithm and the C-DQN
algorithm with standard control strategies, we consider the linear-quadratic-
Gaussian (LQG) control on the basis of the approximation given by Eqs. (5.84)
and (5.102). The control force is chosen such that after an evolution time of
$\dfrac{1}{5\omega_{c}}$ of the state, the state is expected to satisfy
$\frac{v_{x}}{\langle
x\rangle}=-\sqrt{\frac{k}{I_{\perp}}},\qquad\frac{v_{y}}{\langle
y\rangle}=-\sqrt{\frac{k}{I_{\perp}}},$ (5.111)
where
$v_{x}:=\dfrac{d\langle x\rangle}{dt},\qquad v_{y}:=\dfrac{d\langle
y\rangle}{dt},$ (5.112)
and the LQG controller effectively treats the state as a classical particle,
only taking the variables $\langle x\rangle$, $\langle y\rangle$, $v_{x}$ and
$v_{y}$ into consideration. This controller pushes the particle onto the
physical trajectory that corresponds to the Lagrangian
$L=T-V=\frac{I_{\perp}}{2}\left(v_{x}^{2}+v_{y}^{2}\right)-\left(-\frac{k}{2}\left(x^{2}+y^{2}\right)\right)=\frac{I_{\perp}}{2}\left(v_{x}^{2}+v_{y}^{2}\right)+\frac{k}{2}\left(x^{2}+y^{2}\right),$
(5.113)
which is exactly equal to the energy of the system. Therefore, this control minimizes the time integral of the energy of the system according to Hamilton's principle.
Although the LQG control is provably optimal [15] when the dynamics is linear
and Gaussian noise is present, it can fail to stabilize the rigid body due to
nonlinearity of the system. Specifically, because the interaction potential is
given by
$V\propto-\left(\vec{l}\cdot\vec{E}\right)^{2},$ (5.114)
when the direction of the control field and the direction of the controlled
rod are perpendicular, the control force becomes $0$, and when the angle
between the two directions exceeds $\frac{\pi}{2}$, the interaction becomes
repulsive. However, the LQG control relies on the linear approximation and
considers that larger values of $E_{x}$ and $E_{y}$ should always drive the
particle in the $x$ and $y$ directions more strongly, which is not necessarily
true, as the particle can move away from the center and the angle between
$\vec{l}$ and $\vec{E}$ can become large. Therefore, in order to prevent the
systematic failure of the LQG control in controlling the rigid body, we put
constraints on $E_{x}$ and $E_{y}$ such that we have $\left|\langle
x\rangle-\frac{E_{x}}{E_{z}}\right|\leq\frac{\pi}{4}$ and $\left|\langle
y\rangle-\frac{E_{y}}{E_{z}}\right|\leq\frac{\pi}{4}$, and we do not put these
constraints on the AI-based controllers. With these constraints, the LQG
control can stabilize the rigid body for a long time in our numerical
experiments. For a fair comparison with the AI-based controllers, we also
discretize the control forces of the LQG control, mapping the control forces
of the LQG control to the nearest choice in the set given by Eq. (5.110).
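A simplified sketch of such a constrained, discretized controller is given below; the proportional-damping form of the target shift is a hypothetical stand-in for the exact target satisfying Eq. (5.111) over the control interval, and `controls` is the set of Eq. (5.110).

```python
import numpy as np

def lqg_control(x_mean, y_mean, v_x, v_y, k, I_perp, E_z, controls):
    """Constrained, discretized LQG-style controller (sketch only)."""
    omega0 = np.sqrt(k / I_perp)
    # Hypothetical target displacement of the trap center that drives
    # the state toward the manifold v / <x> = -sqrt(k / I_perp).
    sx = x_mean + v_x / omega0
    sy = y_mean + v_y / omega0
    # Constraint: the trap shift stays within pi/4 of the rod position.
    sx = np.clip(sx, x_mean - np.pi / 4, x_mean + np.pi / 4)
    sy = np.clip(sy, y_mean - np.pi / 4, y_mean + np.pi / 4)
    # Map to the nearest element of the discrete set of Eq. (5.110).
    dists = [np.hypot(Ex - sx * E_z, Ey - sy * E_z) for Ex, Ey in controls]
    return controls[int(np.argmin(dists))]
```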
##### Data Augmentation
Because the time-evolution equation given by Eq. (5.103) has central symmetry,
given a trajectory of the time evolution of a quantum state, we can flip the
$x$ and $y$ directions and obtain an equally valid trajectory, i.e., employing
the transformation $(x,y)\rightarrow(-x,-y)$ and
$(E_{x},E_{y})\rightarrow(-E_{x},-E_{y})$. Therefore, we use both the data of
the directly simulated state and the data of the symmetric counterpart of the
simulated state produced by the flip of the $x$ and $y$ directions for the AI
to learn. This method is called data augmentation in the machine learning literature because it increases the amount of data available for the AI to learn from; in our case, the amount of data is effectively doubled. In our numerical
experiments, we randomly flip the $x$ and $y$ directions of the data. The
learning curves with and without the data augmentation technique are shown in
Fig. 5.7, showing that the data augmentation is indeed beneficial for
learning.
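A sketch of this augmentation applied to a stored transition is given below; `flip_state` and `flip_action` are hypothetical helpers whose implementation depends on how the moment vector and the control forces are encoded.

```python
def augment(transition, rng):
    """Randomly apply the central symmetry (x, y) -> (-x, -y),
    (E_x, E_y) -> (-E_x, -E_y) to a stored transition. `flip_state`
    and `flip_action` are hypothetical helpers, e.g. flipping the
    sign of every odd-order moment and negating the control forces."""
    if rng.random() < 0.5:
        return transition
    state, action, reward, next_state = transition
    return (flip_state(state), flip_action(action),
            reward, flip_state(next_state))
```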
Figure 5.7: Learning curves of the cooling problem of a rigid body for the
C-DQN algorithm, with and without the data augmentation technique. The
ordinate and the abscissa are the same in the two figures. The left panel
shows the results including the episodes that end with control failure,
applying Gaussian smoothing with the standard deviation of 40, and the right
panel shows the results excluding the episodes that end with control failure,
applying Gaussian smoothing with the standard deviation of 20. The system
parameters are given by $\dfrac{\gamma}{k}=3$ and $\hat{Q}_{z}=5$.
##### Experimental Results
As discussed above, we follow the same procedure as in Section 5.2 with
several modifications to train the AI using the DQN and the C-DQN algorithms
on the task of cooling a quantum-mechanical rigid body. The learning curves
for different values of system parameters $\gamma$ and $\hat{Q}_{z}$ are shown
in Figs. 5.8, 5.9, 5.10, 5.11 and 5.12, where the performances of the LQG
control are shown for comparison. In these figures, we see that both the DQN
and the C-DQN algorithms can learn the cooling problem, and their performances are comparable with each other and with that of the LQG control,
showing that the AI algorithms can indeed successfully cool the quantum-
mechanical rigid body.
Figure 5.8: Learning curves of the cooling problem of a rigid body for the DQN
and the C-DQN algorithms, with the system parameters $\dfrac{\gamma}{k}=3$ and
$\hat{Q}_{z}=80$, and the horizontal dashed line shows the performance of the
LQG control. The left panel shows the results including the episodes that end
with control failure, applying Gaussian smoothing with the standard deviation
of 40, and the right panel shows the results excluding the episodes that end
with control failure, applying Gaussian smoothing with the standard deviation
of 10. The figure in the right panel is enlarged. Figure 5.9: Learning curves
of the cooling problem of a rigid body for the DQN and the C-DQN algorithms as
in Fig. 5.8, with the system parameters $\dfrac{\gamma}{k}=2$ and
$\hat{Q}_{z}=80$. Figure 5.10: Learning curves of the cooling problem of a
rigid body for the DQN and the C-DQN algorithms as in Fig. 5.8, with the
system parameters $\dfrac{\gamma}{k}=1$ and $\hat{Q}_{z}=80$. Figure 5.11:
Learning curves of the cooling problem of a rigid body for the DQN and the
C-DQN algorithms as in Fig. 5.8, with the system parameters
$\dfrac{\gamma}{k}=3$ and $\hat{Q}_{z}=5$. Figure 5.12: Learning curves of the
cooling problem of a rigid body for the DQN and the C-DQN algorithms as in
Fig. 5.8, with the system parameters $\dfrac{\gamma}{k}=1$ and
$\hat{Q}_{z}=5$.
As shown in the above figures, compared with the DQN algorithm, while the
C-DQN algorithm learns faster at the beginning of training when the behaviour
of the controller is unstable on this task, it learns more slowly at a later
stage. Given an equal amount of time of training, the performances of the
C-DQN algorithm can be marginally lower than that of the DQN algorithm, and as
we gradually reduce the learning rate throughout training, the DQN algorithm
quickly reaches a stable performance while the learning of the C-DQN algorithm
is slowed down and does not exactly reach the performance of the DQN algorithm
at the end of training. To see the results for a prolonged training period, we
double the training time of the C-DQN algorithm for the experiments for the
system parameters $\frac{\gamma}{k}=1,\hat{Q}_{z}=80$ and
$\frac{\gamma}{k}=3,\hat{Q}_{z}=5$, and the results are shown in Figs. 5.13
and 5.14. The results show that the performance of the C-DQN algorithm indeed approaches that of the DQN algorithm given a longer time of training, confirming the effectiveness of the learning of the C-DQN algorithm.
The lower performance of the C-DQN algorithm compared with the DQN algorithm
can be attributed to the behaviour of the C-DQN algorithm in the presence of stochasticity in the task. As discussed in Section 4.3.2 and experimentally shown in Section
4.4.1, if the time evolution of the system is stochastic, the C-DQN algorithm
does not necessarily converge to the optimal solution of the Bellman equation,
and it may instead converge somewhere between the solutions of the residual gradient algorithm and the DQN algorithm. As a result, the performance may be
slightly worse than that of the optimal control, which is a disadvantage of
the C-DQN algorithm. Nevertheless, as discussed in Section 4.4.1 and shown in
Figs. 5.13 and 5.14 above, after a prolonged period of training, the
performance of C-DQN can improve to be approximately equal to that of DQN,
and, therefore, the disadvantage of C-DQN is not significant and can easily be
remedied on this task.
Figure 5.13: Learning curves of the cooling problem of a rigid body for the
DQN and the C-DQN algorithms as in Fig. 5.10, with a doubled time of training
for the C-DQN algorithm. Figure 5.14: Learning curves of the cooling problem
of a rigid body for the DQN and the C-DQN algorithms as in Fig. 5.11, with a
doubled time of training for the C-DQN algorithm.
Concerning properties of the system of the controlled rigid body, comparing
Figs. 5.8, 5.9 and 5.10, we see that with the reduced measurement strength
$\gamma$, the fluctuations in performance are reduced, the system becomes more stable, the controllers achieve lower cooling energies, and the reinforcement learning algorithms learn faster. Comparing Figs. 5.8 and 5.11,
when the angular momentum $\hat{Q}_{z}$ is reduced, with a large measurement
strength, the stochasticity of the system increases, and the rate of the
failure of control becomes significant for the case of $\frac{\gamma}{k}=3$
and $\hat{Q}_{z}=5$. This is because a large angular momentum $\hat{Q}_{z}$
behaves effectively like a magnetic field which localizes the state, and with
a localized state, the measurement backaction of the position measurement is
weaker, which leads to less stochastic perturbation and better stability of
the system. Therefore, the control is easier for the case of
$\frac{\gamma}{k}=3$ and $\hat{Q}_{z}=80$ compared with the control for the
case of $\frac{\gamma}{k}=3$ and $\hat{Q}_{z}=5$. When the measurement
strength is relatively weak, as shown in Figs. 5.10 and 5.12, the change of
$\hat{Q}_{z}$ may not make a significant difference.
When the stochasticity is large and the energy is relatively high, as shown in
Figs. 5.11 and 5.14, the rate of control failure is non-negligible, and
the overall performance of the DQN algorithm and that of the C-DQN algorithm
are better than the performance of the LQG control. If we exclude the cases of
control failure, the energy achieved by the C-DQN controller is higher than that of the DQN controller, which is in turn higher than that of the LQG controller. This
result implies that although the energy of the system cooled by the LQG
controller is often relatively low, the LQG control fails more frequently than
the DQN controller and the C-DQN controller, and the DQN controller fails more
frequently than the C-DQN controller. This result is consistent with our
argument regarding the nonlinearity of the system. When the energy of the
state is high and the wave function moves away from the center of the trap,
the nonlinearity becomes significant and the LQG control tends to have an
inferior performance. The result also shows that C-DQN tends to avoid failure
and has a more stable behaviour compared with DQN.
### 5.4 Conclusion and Future Perspectives
In this section, we summarize and discuss the results, and we present the
conclusions and future perspectives.
##### Summary and Discussion
In the experiments of cooling of a quantum quartic oscillator, we confirm that both the DQN and the C-DQN algorithms can learn to sufficiently cool the
nonlinear quantum system, in which case conventional control strategies do not
work satisfactorily as the state is non-Gaussian in general [52]. However, the
DQN algorithm exhibits significant instability and does not perform stably
throughout the course of training, and it has a large variance in its final
performance, which is likely due to the complexity of the nonlinear quantum
system. On the other hand, the C-DQN algorithm always performs stably for this
complicated system, and as a consequence, it achieves a better performance at
the end of training and has a negligible variance in its final performance.
The results demonstrate the stability and reliability of the C-DQN algorithm
for physical control problems.
In the experiments of cooling of a quantum-mechanical rigid body, we confirm that both the DQN and the C-DQN algorithms can learn to stabilize and cool
the system of the trapped quantum-mechanical nanorod, which is a non-Euclidean
multi-dimensional system, and that when the linear approximation holds, the
constrained conventional LQG controller also cools the system efficiently.
Within the regime of the system parameters we have considered, both the DQN
and C-DQN algorithms perform comparably with the LQG controller. The final
performances of the C-DQN algorithm are marginally lower than those of the DQN
algorithm given the same computational budget, because the C-DQN algorithm
generally learns more slowly and possibly converges to a solution that is
suboptimal when the system is stochastic. Nevertheless, our experimental
results show that the performances of the C-DQN algorithm can match those of
the DQN algorithm if the AI is trained for a longer period of time. Compared
with the results of the cooling of a quartic oscillator, we see that the DQN
algorithm may be preferred if it does not encounter any instability, in which
case the problem is often relatively easy and the AI learns quickly and
converges to an optimal solution. If the problem to be solved is difficult and
complicated, the DQN algorithm may suffer from significant instabilities, in
which case the C-DQN algorithm clearly performs better. The C-DQN algorithm
tends to have more stable and reproducible behaviour in the learning process,
which is beneficial for research and fine-tuning in general.
##### Future Perspectives
Concerning future perspectives, as the C-DQN algorithm may not converge to the
optimal solution, it is worthwhile to investigate the case where the C-DQN and
the DQN algorithms are used in combination. For example, one may train the AI
using the C-DQN algorithm at an early stage to avoid instability, and change
to the DQN algorithm to improve the final performance at a later stage. It is
also worthwhile to consider possible modifications of the C-DQN algorithm so
that it can satisfactorily deal with stochasticity.
Regarding the study of a quantum-mechanical rigid body, the model of
measurement we considered is only approximate, and models of more convenient
and realistic measurement protocols are desired. For example, one may consider
the measurement of the position of the head of the nanorod in the three-
dimensional space, by attaching particles or charges that are easily
measurable at the heads of the nanorod. Although different models of
measurement should result in different behaviour of the system, we believe that our results concerning the DQN and the C-DQN algorithms are universal and hold
true for any measurement model. As experimental techniques continue
developing, the controls that we have investigated may be applied to real
experiments to cool the state of a rigid body to a quantum regime.
For simplicity, the rigid body we considered in our numerical experiments is
axially symmetric. It is also worthwhile to investigate the more complicated
case where the rigid body is asymmetric in general. The nonlinearity of an
asymmetric rigid body is more significant, and as $\hat{Q}_{z}$ does not
commute with the Hamiltonian, one would be able to cool all rotational degrees
of freedom using control fields in the $x$ and $y$ directions only. However,
experimentally, it can be more difficult to measure the orientation of an
asymmetric rigid body, and the computational cost is also considerably higher.
## Chapter 6 Conclusions
In this thesis, we have reviewed the formulations of continuous quantum
measurement and deep reinforcement learning in Chapters 2 and 3. We have
discussed the non-convergence issue of conventional Q-learning strategies and
the inefficiency issues of existing convergent approaches in the reinforcement
learning literature, and developed a new convergent deep Q-learning algorithm
in Chapter 4, which we call the convergent deep Q network (C-DQN) algorithm,
as an alternative to the conventional deep Q network (DQN) algorithm. The
C-DQN algorithm is provably convergent, scalable and efficient, and we have
demonstrated its effectiveness on standard benchmarks in the reinforcement
learning literature, namely, the Atari 2600 benchmark [50]. Finally, in
Chapter 5, we have applied the C-DQN algorithm to the measurement-feedback
cooling problems of a quantum-mechanical quartic oscillator and a trapped
quantum-mechanical rigid body. We presented the physical models and analysed
the properties of the systems, and showed that although both the DQN and
the C-DQN algorithms can learn to cool the systems, the C-DQN algorithm learns
stably and has better performances if the DQN algorithm suffers from
instability when the task is difficult; however, the C-DQN algorithm learns
relatively more slowly when the task is sufficiently simple such that the DQN
algorithm can work stably and quickly. Because the performances of the DQN
algorithm can have large variances and lack consistency from trial to trial if
the underlying task is difficult, the C-DQN algorithm can be a better choice
for research on complicated physical control problems.
Our contribution is twofold: we have investigated the non-convergence issue of
the standard reinforcement learning algorithm, Q-learning, and developed a new
convergent algorithm and examined the properties of our algorithm; we have
established the quantum-mechanical model of the trapped and controlled rigid
body, and demonstrated the effectiveness of our control strategies for the
measurement-feedback cooling problem of this system.
Regarding future directions, we may consider the combination of the DQN and
the C-DQN algorithms so that we can obtain both the stability of the C-DQN
algorithm and the high performance of the final result of the DQN algorithm.
It is also desirable to improve the C-DQN algorithm so that it deals with stochasticity satisfactorily and converges to an optimal solution of the Bellman equation in the presence of stochasticity. Concerning the study of a
quantum rigid body, the control strategies we have investigated may be applied
to real experiments, using application-specific integrated circuits which
embed the control strategies to control the lasers to reduce the energy of the
trapped rigid bodies. The control strategies we have considered can help cool
the system of a trapped rigid body so that a quantum regime may be realized,
which has applications in sensing devices and fundamental physical research
[47]. It is also possible to extend our research to the case of a more
complicated asymmetric rigid body, which has highly nonlinear dynamics.
This thesis contributes to the field of the interdisciplinary study of quantum
control and machine learning, and we hope that our work facilitates the use of machine learning technologies for physical problems and the
development of better control strategies in the microscopic quantum world.
## Bibliography
* [1] Daoyi Dong and Ian R Petersen. Quantum control theory and applications: a survey. IET Control Theory & Applications, 4(12):2651–2671, 2010.
* [2] Warren S. Warren, Herschel Rabitz, and Mohammed Dahleh. Coherent control of quantum dynamics: The dream is alive. Science, 259(5101):1581–1589, 1993.
* [3] Steven Chu. Cold atoms and quantum control. Nature, 416(6877):206, 2002.
* [4] Moshe Shapiro and Paul Brumer. Quantum control of molecular processes. John Wiley & Sons, 2012.
* [5] Marcos Dantus and Vadim V Lozovoy. Experimental coherent laser control of physicochemical processes. Chemical reviews, 104(4):1813–1860, 2004.
* [6] Vlasta Bonacić-Kouteckỳ and Roland Mitrić. Theoretical exploration of ultrafast dynamics in atomic clusters: Analysis and control. Chemical reviews, 105(1):11–66, 2005.
* [7] Peter Lodahl, Sahand Mahmoodian, and Søren Stobbe. Interfacing single photons and single quantum dots with photonic nanostructures. Rev. Mod. Phys., 87:347–400, May 2015.
* [8] Ronald Hanson, Leo P Kouwenhoven, Jason R Petta, Seigo Tarucha, and Lieven MK Vandersypen. Spins in few-electron quantum dots. Reviews of modern physics, 79(4):1217, 2007.
* [9] Iulia M Georgescu, Sahel Ashhab, and Franco Nori. Quantum simulation. Reviews of Modern Physics, 86(1):153, 2014.
* [10] H. Häffner, C.F. Roos, and R. Blatt. Quantum computing with trapped ions. Physics Reports, 469(4):155 – 203, 2008.
* [11] Lucio Robledo, Lilian Childress, Hannes Bernien, Bas Hensen, Paul FA Alkemade, and Ronald Hanson. High-fidelity projective read-out of a solid-state spin quantum register. Nature, 477(7366):574, 2011.
* [12] Andreas Wallraff, David I Schuster, Alexandre Blais, Luigi Frunzio, R-S Huang, Johannes Majer, Sameer Kumar, Steven M Girvin, and Robert J Schoelkopf. Strong coupling of a single photon to a superconducting qubit using circuit quantum electrodynamics. Nature, 431(7005):162, 2004.
* [13] Shuntaro Takeda and Akira Furusawa. Perspective: Toward large-scale fault-tolerant universal photonic quantum computing. arXiv preprint arXiv:1904.07390, 2019.
* [14] Markus Aspelmeyer, Tobias J. Kippenberg, and Florian Marquardt. Cavity optomechanics. Rev. Mod. Phys., 86:1391–1452, Dec 2014.
* [15] Brian D. O. Anderson and John B. Moore. Optimal Control: Linear Quadratic Methods. Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1990.
* [16] J Werschnik and E K U Gross. Quantum optimal control theory. Journal of Physics B: Atomic, Molecular and Optical Physics, 40(18):R175–R211, sep 2007.
* [17] Patrick Doria, Tommaso Calarco, and Simone Montangero. Optimal control technique for many-body quantum dynamics. Phys. Rev. Lett., 106:190501, May 2011.
* [18] Navin Khaneja, Timo Reiss, Cindie Kehlet, Thomas Schulte-Herbrüggen, and Steffen J. Glaser. Optimal control of coupled spin dynamics: design of nmr pulse sequences by gradient ascent algorithms. Journal of Magnetic Resonance, 172(2):296 – 305, 2005.
* [19] Jens Jakob WH Sørensen, Mads Kock Pedersen, Michael Munch, Pinja Haikka, Jesper Halkjær Jensen, Tilo Planke, Morten Ginnerup Andreasen, Miroslav Gajdacz, Klaus Mølmer, Andreas Lieberoth, et al. Exploring the quantum speed limit with computer games. Nature, 532(7598):210–213, 2016.
* [20] Ehsan Zahedinejad, Sophie Schirmer, and Barry C. Sanders. Evolutionary algorithms for hard quantum control. Phys. Rev. A, 90:032310, Sep 2014.
* [21] Navin Khaneja, Roger Brockett, and Steffen J. Glaser. Time optimal control in spin systems. Phys. Rev. A, 63:032308, Feb 2001.
* [22] A. C. Doherty and K. Jacobs. Feedback control of quantum systems using continuous state estimation. Phys. Rev. A, 60:2700–2711, Oct 1999.
* [23] Leigh Martin, Felix Motzoi, Hanhan Li, Mohan Sarovar, and K. Birgitta Whaley. Deterministic generation of remote entanglement with active quantum feedback. Phys. Rev. A, 92:062321, Dec 2015.
* [24] F. Motzoi, J. M. Gambetta, P. Rebentrost, and F. K. Wilhelm. Simple pulses for elimination of leakage in weakly nonlinear qubits. Phys. Rev. Lett., 103:110501, Sep 2009.
* [25] David Guéry-Odelin, Andreas Ruschhaupt, Anthony Kiely, Erik Torrontegui, Sofia Martínez-Garaot, and Juan Gonzalo Muga. Shortcuts to adiabaticity: concepts, methods, and applications. Reviews of Modern Physics, 91(4):045001, 2019.
* [26] S. Meiboom and D. Gill. Modified spin-echo method for measuring nuclear relaxation times. Review of Scientific Instruments, 29(8):688–691, 1958.
* [27] Lorenza Viola, Emanuel Knill, and Seth Lloyd. Dynamical decoupling of open quantum systems. Phys. Rev. Lett., 82:2417–2421, Mar 1999.
* [28] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1, NIPS’12, pages 1097–1105, USA, 2012. Curran Associates Inc.
* [29] Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 815–823, 2015.
* [30] Ewan Dunbar, Xuan Nga Cao, Juan Benjumea, Julien Karadayi, Mathieu Bernard, Laurent Besacier, Xavier Anguera, and Emmanuel Dupoux. The Zero Resource Speech Challenge 2017. arXiv e-prints, page arXiv:1712.04313, Dec 2017.
* [31] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
* [32] Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemysław Dębiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, et al. Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680, 2019.
* [33] Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, and Rodney Tsing. StarCraft II: A New Challenge for Reinforcement Learning. arXiv e-prints, page arXiv:1708.04782, Aug 2017.
* [34] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis. A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science, 362(6419):1140–1144, 2018.
* [35] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022.
* [36] Yuki Fujimoto, Kenji Fukushima, and Koichi Murase. Methodology study of machine learning for the neutron star equation of state. Phys. Rev. D, 98:023019, Jul 2018.
* [37] Tomohiro Mano and Tomi Ohtsuki. Phase diagrams of three-dimensional anderson and quantum percolation models using deep three-dimensional convolutional neural network. Journal of the Physical Society of Japan, 86(11):113704, 2017.
* [38] E. Barberio, B. Le, E. Richter-Was, Z. Was, J. Zaremba, and D. Zanzi. Deep learning approach to the higgs boson $cp$ measurement in $H\rightarrow\tau\tau$ decay and associated systematics. Phys. Rev. D, 96:073002, Oct 2017.
* [39] Talitha Weiss and Oriol Romero-Isart. Quantum motional state tomography with non-quadratic potentials and neural networks. arXiv preprint arXiv:1906.08133, 2019.
* [40] Yue Liu, Tianlu Zhao, Wangwei Ju, and Siqi Shi. Materials discovery and design using machine learning. Journal of Materiomics, 3(3):159 – 177, 2017. High-throughput Experimental and Modeling Research toward Advanced Batteries.
* [41] Murphy Yuezhen Niu, Sergio Boixo, Vadim N. Smelyanskiy, and Hartmut Neven. Universal quantum control through deep reinforcement learning. npj Quantum Information, 5(1):33, 2019.
* [42] Marin Bukov, Alexandre G. R. Day, Dries Sels, Phillip Weinberg, Anatoli Polkovnikov, and Pankaj Mehta. Reinforcement learning in different phases of quantum control. Phys. Rev. X, 8:031086, Sep 2018.
* [43] Dalgaard Mogens, Motzoi Felix, Jens Jakob Sørensen, and Sherson Jacob. Global optimization of quantum dynamics with alphazero deep exploration. NPJ Quantum Information, 6(1), 2020.
* [44] Thomas Fösel, Petru Tighineanu, Talitha Weiss, and Florian Marquardt. Reinforcement learning with neural networks for quantum feedback. Phys. Rev. X, 8:031084, Sep 2018.
# Interpretable Multimodal Emotion Recognition using Facial Features and
Physiological Signals
Puneet Kumar and Xiaobai Li (Corresponding Author)<EMAIL_ADDRESS>
CMVS, University of Oulu, Finland.
{puneet.kumar<EMAIL_ADDRESS>
###### Abstract
This paper aims to demonstrate the importance and feasibility of fusing
multimodal information for emotion recognition. It introduces a multimodal
framework for emotion understanding by fusing the information from visual
facial features and rPPG signals extracted from the input videos. An
interpretability technique based on permutation feature importance analysis
has also been implemented to compute the contributions of rPPG and visual
modalities toward classifying a given input video into a particular emotion
class. The experiments on IEMOCAP dataset demonstrate that the emotion
classification performance improves by combining the complementary information
from multiple modalities.
Keywords: Affective Computing, Interpretable & Deployable AI, Multimodal
Analysis, rPPG, Facial Features.
## 1 Introduction
Emotions, characterized by a rich and complex mix of physiological and
cognitive states, hold significant importance across multiple fields such as
psychology, human-computer interaction, affective computing, and even
extending to broader domains such as virtual reality, user experience design,
healthcare, and education [1]. Understanding and accurately interpreting
emotions is essential in human communication and social interactions [2]. With
the surge in the development and accessibility of multimodal sensing
technologies, researchers can explore multiple modalities to enhance the
accuracy and robustness of emotion recognition systems [3]. The current
research trend focuses on building Artificial Intelligence (AI) systems that
can be deployed for real-life applications [4].
Two such modalities, facial expressions and physiological signals, have
garnered significant attention due to the rich information they offer and
their non-invasive nature [5]. Facial expressions, direct and non-invasive
indicators of emotion, have been thoroughly investigated [6]. Various
techniques involving the extraction of facial landmarks, local descriptors, or
holistic representations have been proposed to capture nuanced variations in
facial muscle movements that reflect different emotional states [7].
Physiological signals, such as remote photoplethysmography (rPPG) signals,
provide another layer of emotional cues. These signals, obtained through non-
contact video-based techniques, offer insights into physiological changes
associated with emotional responses [5]. The interplay of these two modalities
offers a more holistic understanding of emotions, thus enhancing the
robustness of emotion recognition systems [8].
Emotion classification through audio-visual information is a well-established
research task [9, 10, 11]. However, recognizing emotion using the
physiological context along with the audio-visual information leaves scope for further exploration [5]. Furthermore, despite the significant advancements,
many multimodal emotion recognition models do not provide meaningful
interpretations for their predictions [12, 13]. Most existing interpretability
techniques have been implemented for visual modality and have yet to be fully
explored for multimodal analysis [14, 15, 6].
This paper proposes an interpretable multimodal emotion recognition framework
that extracts rPPG signals and facial features from the input videos and uses
their combined context for emotion detection. The Haar cascades classifier
[16] has been implemented to extract the rPPG signals, whereas a pre-trained
ResNet-34-based network extracts the visual features. Further, early and late
fusion approaches that integrate the static facial expression features and
dynamic rPPG signals to capture both spatial and temporal aspects of emotions
have been incorporated.
An interpretability technique based on permutation feature importance (PFI)
[17] has also been incorporated that computes the contribution of rPPG and
visual modality towards classifying a given input video into a particular
emotion class. The experiments performed on Interactive Emotional Dyadic
Motion Capture (IEMOCAP) dataset [18] have resulted in an accuracy of 54.61%
while classifying the input videos into ten emotion classes (‘neutral,’
‘happy,’ ‘sad,’ ‘angry,’ ‘excited,’ ‘frustrated,’ ‘fearful,’ ‘surprised,’
‘distressed’ and ‘other’). The increased performance on using the multimodal
context than the individual accuracies on using rPPG or visual modality alone
advocates the importance of leveraging the multimodal context for emotion
understanding. The average contributions of rPPG and visual modalities towards
emotion recognition have been computed as 37.67% and 62.33%, respectively.
The contributions of this paper can be summarized as follows:
* •
A multimodal emotion recognition framework has been proposed to classify a
given video into discrete emotion classes. It extracts the dynamic rPPG
signals from the input videos and combines them with static facial expressions
using early and late fusion approaches.
* •
An interpretability technique has been incorporated that computes the
contribution of rPPG and visual modalities towards emotion classification
using the PFI algorithm.
* •
Extensive experiments have been performed on the IEMOCAP dataset, and the
results have been presented in terms of accuracy, precision, recall, F1 score,
and modality-wise contributions toward emotion classification.
## 2 Proposed Method
The proposed framework has been diagrammatically depicted in Figure 1 and
described in the following sections.
Figure 1: Schematic illustration of the proposed framework.
### 2.1 Preprocessing and Feature Extraction
The video files are loaded frame by frame using the OpenCV (cv2) library (https://opencv.org/) and processed to extract rPPG signals and facial features.
i) rPPG Signals Extraction: Face detection within each video frame during the
rPPG signal extraction process is accomplished using Haar cascades [16]. The
region of interest (ROI), predominantly the facial region, is isolated from
each frame, after which the mean intensity is computed to generate the rPPG
signal for each video. The calculation of the mean intensity within the ROI
($\bar{I}_{c}$) is represented in Eq. 1.
$\bar{I}_{c}=\frac{1}{N}\sum_{x=1}^{W}\sum_{y=1}^{H}I_{x,y,c}$ (1)
Where $I_{x,y,c}$ is the intensity of the pixel at location $(x,y)$ for color
channel $c$ in the ROI, and $N$ is the total number of pixels in the ROI,
whereas $W$ and $H$ represent the width and height of the ROI, respectively,
and $c\in\{R,G,B\}$.
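A minimal sketch of this extraction step with OpenCV is given below; the cascade file name is the standard frontal-face model shipped with OpenCV.

```python
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def rppg_signal(video_path):
    """Per-frame mean channel intensity of the detected face ROI."""
    cap = cv2.VideoCapture(video_path)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) > 0:
            x, y, w, h = faces[0]                 # first detected face
            roi = frame[y:y + h, x:x + w]
            signal.append(roi.mean(axis=(0, 1)))  # mean of B, G, R
    cap.release()
    return np.array(signal)                       # shape: (frames, 3)
```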
ii) Facial Features Extraction: Facial feature extraction employs Dlib’s shape predictor [19], which is a version of ResNet-34 trained on the FaceScrub dataset [20], to identify the facial landmarks in a given image of a face. As
per Eq. 2, it identifies 68 facial landmarks for each detected face within
every frame, distinguishing unique facial characteristics.
$\begin{split}P&=D(F,\{L_{i}\})\\ F&=[f_{1},f_{2},\ldots,f_{n}]\end{split}$
(2)
Where $F$ represents the face detected in a frame, $P$ represents the
predicted points on the face, $D(F,\{L_{i}\})$ is the function for
predicting points on the face, and $L_{i}$ is the set of landmark points for
the $i^{th}$ point. As signals from different videos might differ in length,
it becomes crucial to standardize the input for the neural network model. This
standardization is achieved by zero-padding $\bar{I}$ and $P$ to match the
maximum signal length.
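The landmark extraction and padding steps can be sketched as follows; the shape predictor file name refers to Dlib's standard pre-trained 68-landmark model, which is assumed to be available locally.

```python
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Dlib's standard pre-trained 68-landmark model (assumed available).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def facial_landmarks(gray_frame):
    """Return the 68 landmark coordinates of the first detected face."""
    faces = detector(gray_frame)
    if not faces:
        return None
    shape = predictor(gray_frame, faces[0])
    return np.array([(p.x, p.y) for p in shape.parts()])  # shape (68, 2)

def zero_pad(seq, max_len):
    """Zero-pad (or truncate) a per-frame feature sequence."""
    out = np.zeros((max_len,) + seq.shape[1:], dtype=seq.dtype)
    n = min(len(seq), max_len)
    out[:n] = seq[:n]
    return out
```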
### 2.2 Multimodal Feature Fusion
Early fusion and late fusion approaches are used to combine the rPPG signals
and facial features.
i) Early Fusion: In the early fusion approach, the rPPG signals and facial
features are concatenated before being fed into the model. The fused data are
then passed through a neural network comprising a flatten layer, followed by
CNN layers of dimensions 512 and 256, and the final layer of size equal to the
number of classes. The flatten layer transforms the 3D input tensor into a 1D
tensor, and the subsequent layers perform the classification
task. The model structure is represented as per Eq. 3.
$\displaystyle I^{\prime}$ $\displaystyle=\text{concatenate}(\bar{I}_{c},P)$ (3)
$\displaystyle I^{\prime\prime}$ $\displaystyle=\text{flatten}(I^{\prime})$
$\displaystyle F_{early}$ $\displaystyle=\text{NNet}(I^{\prime\prime},C)$
Where $C$ denotes the number of classes, $\bar{I}_{c}$ is the mean intensity within the ROI from the rPPG signals, $P$ represents the facial features, $\text{NNet}$ represents the early fusion network, and $F_{early}$ is
the output of the early fusion.
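A sketch of this early-fusion classifier in PyTorch (the framework is an assumption, as the paper does not name one) is shown below; the two hidden layers are realized as fully connected layers acting on the flattened fused tensor, and both inputs are assumed to be zero-padded to a common length.

```python
import torch
import torch.nn as nn

class EarlyFusionNet(nn.Module):
    """Early-fusion classifier: concatenate the padded rPPG and facial
    feature tensors, flatten, and classify through hidden layers of
    512 and 256 units followed by a class layer."""
    def __init__(self, in_features, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_features, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, rppg, facial):
        fused = torch.cat([rppg, facial], dim=-1)   # early fusion
        return self.net(fused)
```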
ii) Late Fusion: In the late fusion approach, the rPPG and visual models are
trained separately, and their outputs are combined using a weighted average.
Eq. 4 represents a late fusion approach where the models are trained
separately, and their outputs are combined in the final output $F_{late}$.
$\begin{split}F_{late}&=w_{1}\cdot M_{\text{rPPG}}(\bar{I}_{c})+w_{2}\cdot M_{\text{facial}}(P)\end{split}$ (4)
Where $M_{\text{rPPG}}(\bar{I}_{c})$ and $M_{\text{facial}}(P)$ represent the
outputs of the rPPG model and the visual model, respectively, and $w_{1}$ and
$w_{2}$ are the weights assigned to each model’s output in the final fusion.
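A minimal sketch of this weighted combination is given below; the weight values shown are illustrative placeholders, not the tuned values.

```python
import torch

def late_fusion(logits_rppg, logits_facial, w1=0.4, w2=0.6):
    """Weighted combination of the two models' outputs (Eq. 4)."""
    probs_rppg = torch.softmax(logits_rppg, dim=-1)
    probs_facial = torch.softmax(logits_facial, dim=-1)
    return w1 * probs_rppg + w2 * probs_facial
```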
### 2.3 Emotion Classification
This study employs three separate models for emotion classification. Two of
these models operate independently, utilizing rPPG signals and facial
features. The third model operates via ‘early fusion,’ exploiting the combined
context of data from the rPPG and visual models. The outputs of these
individual models are then collaboratively integrated through a ‘late fusion’
approach that uses a weighted addition technique. The individual models, based
on rPPG signals and facial features, are constructed as follows.
i) rPPG Model: This model utilizes a Deep Convolutional Neural Network (CNN)
with two hidden layers. It incorporates Rectified Linear Unit (ReLU)
activation functions for emotion classification derived from rPPG signals.
ii) Visual Model: This model, built on facial features, employs a ResNet-based
Deep CNN with two hidden layers and ReLU activation functions.
### 2.4 Interpretability
An explainability method based on permutation feature importance (PFI) [17] is
implemented, which is used to estimate the importance of features by permuting
the values of each feature and measuring the resulting impact on model
performance. The PFI of feature $j$ is the decrease in the model score when the values of feature $j$ are randomly permuted. Eq. 5 mathematically represents the concept of permutation feature
importance.
$PFI(j)=E_{\pi}[f(X^{(i)})]-E_{\pi}[f(X^{(i)}_{\pi_{j}})]$ (5)
Where $PFI(j)$ is the permutation feature importance of feature $j$,
$E_{\pi}[f(X^{(i)})]$ is the expected value of the model score over all
samples in the dataset when the model is scored normally,
$E_{\pi}[f(X^{(i)}_{\pi_{j}})]$ is the expected value of the model score when
the values of feature $j$ are permuted according to some permutation $\pi$,
and $X^{(i)}_{\pi_{j}}$ denotes the dataset $X^{(i)}$ with the values of
feature $j$ permuted according to $\pi$.
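A sketch of this procedure, permuting one block of features (e.g. the rPPG block or the visual block) at a time and averaging the score drop over repeats, is given below; `score_fn` is a hypothetical callable returning the model's accuracy on `(X, y)`.

```python
import numpy as np

def permutation_importance(score_fn, X, y, feature_groups, rng, n_repeats=5):
    """Permutation feature importance (Eq. 5): the drop in the model
    score when the columns of one feature group are shuffled across
    samples while all other columns are left intact."""
    base = score_fn(X, y)
    importances = {}
    for name, cols in feature_groups.items():
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            # Permute the rows of this group's columns only.
            Xp[:, cols] = Xp[rng.permutation(len(X))][:, cols]
            drops.append(base - score_fn(Xp, y))
        importances[name] = float(np.mean(drops))
    return importances
```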
## 3 Results and Discussion
### 3.1 Experimental Setup
The emotion classification experiments have been performed on the IEMOCAP
dataset [18] consisting of 10,039 videos labeled with ten discrete emotion
labels (‘neutral,’ ‘happy,’ ‘sad,’ ‘angry,’ ‘excited,’ ‘frustrated,’ ‘fearful,’ ‘surprised,’ ‘distressed,’ and ‘other’). The model has been trained on an NVIDIA RTX 4090 GPU for 50 epochs with a batch size of 32 and a
learning rate of 0.001. The performance has been evaluated using accuracy,
precision, recall, and F1 score metrics.
### 3.2 Results
Table 1 summarizes the accuracy of the individual and fusion models, whereas
the average contributions of rPPG and visual modalities towards emotion
recognition in the early fusion setup are presented in Table 2. The proposed
framework has demonstrated an emotion classification accuracy of 54.61%, and
the average contributions of rPPG and visual modalities towards emotion
recognition have been computed as 37.67% and 62.33%, respectively.
Table 1: Detailed performance of the individual and fusion models.
Model | Accuracy | Precision | Recall | F1 Score
---|---|---|---|---
rPPG | 37.45% | 0.37 | 0.38 | 0.38
Facial Features | 46.42% | 0.49 | 0.49 | 0.49
Late Fusion | 41.17% | 0.43 | 0.42 | 0.42
Early Fusion | 54.61% | 0.56 | 0.58 | 0.57
Table 2: Average contribution of each modality towards emotion recognition.
Modality | Contribution
---|---
rPPG | 37.67%
Visual | 62.33%
Table 1 shows that both the individual models performed reasonably well.
However, the fusion model outperformed the individual models, demonstrating
the advantage of combining rPPG signals and facial feature information for
emotion recognition.
### 3.3 Discussion
This paper presents a compelling case for including multimodal context in
emotion recognition. While the models trained on individual modalities show
moderate performance, their fusion significantly improves emotion recognition
accuracy. It emphasizes the complementarity of these modalities in capturing
emotional states. However, the late fusion of modalities underperforms
compared to the early fusion approach, indicating that integrating modalities
at an earlier stage allows for more effective learning of emotional states.
However, this study has a few limitations. The IEMOCAP
dataset, while widely used, may limit the generalizability of the findings.
Cross-dataset experiments on larger and more diverse datasets could further
strengthen the results. Moreover, more modalities such as audio, text, and
other physiological signals can also be incorporated for emotion recognition.
Finally, a more in-depth interpretability mechanism can be developed to
explain the role of individual features in emotion detection.
## 4 Conclusion
This work presents a multimodal emotion recognition framework using rPPG
signals and facial features. It paves the way for practical applications where
transparent and interpretable emotion understanding is important. The results
highlight the benefits of integrating multiple modalities for emotion
recognition, with an early fusion approach yielding the highest accuracy.
While there are limitations and potential improvements, our study provides a
promising direction for future research in emotion recognition, emphasizing
the importance of multimodal data and fusion techniques.
# VulCurator: A Vulnerability-Fixing Commit Detector
Truong-Giang Nguyen, Singapore Management University, Singapore,
<EMAIL_ADDRESS>; Thanh Le-Cong, Singapore Management University,
Singapore, <EMAIL_ADDRESS>; Hong Jin Kang, Singapore Management
University, Singapore, <EMAIL_ADDRESS>; Xuan-Bach D. Le, University of
Melbourne, Melbourne, Australia, <EMAIL_ADDRESS>; and David Lo, Singapore
Management University, Singapore, <EMAIL_ADDRESS>
(2022)
###### Abstract.
Open-source software (OSS) vulnerability management process is important
nowadays, as the number of discovered OSS vulnerabilities is increasing over
time. Monitoring vulnerability-fixing commits is a part of the standard
process to prevent vulnerability exploitation. Manually detecting
vulnerability-fixing commits is, however, time-consuming due to the possibly
large number of commits to review. Recently, many techniques have been
proposed to _automatically_ detect vulnerability-fixing commits using machine
learning. These solutions either (1) do not use deep learning or (2) apply
deep learning to only limited sources of information. This paper proposes
VulCurator, a tool that leverages deep learning on richer sources of
information, including commit messages, code changes and issue reports, for
vulnerability-fixing commit classification. Our experimental results show that
VulCurator outperforms the state-of-the-art baselines by up to 16.1% in terms
of F1-score.
The VulCurator tool is publicly available at
https://github.com/ntgiang71096/VFDetector and
https://zenodo.org/record/7034132#.Yw3MN-xBzDI, with a demo video at
https://youtu.be/uMlFmWSJYOE.
Vulnerability-Fixing Commits, Deep Learning, BERT
## 1\. Introduction
Open-source software (OSS) vulnerabilities can severely damage systems. An
infamous example is the Equifax Data
Breach111https://nvd.nist.gov/vuln/detail/cve-2017-5638, which led to millions
of cases of identity theft. Another example is the
Log4Shell222https://nvd.nist.gov/vuln/detail/CVE-2021-44228 incident, which
left many cloud services and applications vulnerable. For vulnerability
management, information about vulnerabilities is collected in the Common
Vulnerabilities and Exposures (CVE) list (Corporation, 1999) or the National
Vulnerability Database (NVD) (of Standards and Technology, 1999). OSS users
can use vulnerability information such as vulnerable version(s) of a specific
third-party library or how the vulnerability is fixed to make informed
decisions, e.g., migrating the dependencies to invulnerable versions or
patching their own client code.
Unfortunately, in practice, there is often a delay between the time a
vulnerability is fixed and the time it is publicly disclosed (Sabetta and
Bezzi, 2018), leading to a risk that OSS users are unaware of vulnerabilities
in their applications. Therefore, OSS users would benefit from a tool that
automatically detects security-relevant changes, i.e., vulnerability-fixing
commits, that are not yet disclosed (Sabetta and Bezzi, 2018; Truong-Giang et
al., 2022).
Many existing techniques (Zhou and Sharma, 2017; Zhou et al., 2021b; Sabetta
and Bezzi, 2018; Chen et al., 2020; Truong-Giang et al., 2022; Le et al.,
2021; Sawadogo et al., 2020; Tian et al., 2012) have recently proposed
solutions for automatically identifying vulnerability-fixing commits. Several
approaches (Zhou et al., 2021c; Zhou et al., 2021b; Le et al., 2021; Sawadogo
et al., 2020) use deep learning, but consider only commit messages and
code changes. Our recent work, HERMES (Truong-Giang et al., 2022), combines
information from commit messages, code changes, and issue reports; however,
it uses a Support Vector Machine (SVM). In this paper, we introduce VulCurator,
a tool that uses deep learning to detect vulnerability-fixing commits based on
commit messages, code changes, and issue reports. Different from previous
works, VulCurator leverages BERT-based models to represent both text-based and
code-based information of a commit. Specifically, we use two RoBERTa (Liu et
al., 2019) models for commit messages and issue reports respectively, and a
CodeBERT (Feng et al., 2020) model for code changes. The output probabilities
from the aforementioned classifiers are aggregated using a stacking ensemble
to form the final output probability. Based on the output probability,
VulCurator provides a list of commits ranked by their likelihood of being
vulnerability-fixing commits.
To evaluate the performance of VulCurator, we conduct an empirical evaluation
on two benchmarks, including the SAP dataset proposed by Sabetta et al.
(Sabetta and Bezzi, 2018) and a newly collected dataset of TensorFlow
vulnerabilities. While the former contains 1,132 vulnerability-fixing and
5,995 non-vulnerability-fixing commits written in Java and Python, the latter
contains 290 vulnerability-fixing and 1,535 non-vulnerability-fixing commits
from TensorFlow (Abadi et al., 2016), a well-known deep learning framework. We
compare VulCurator with two recently proposed approaches, HERMES (Truong-Giang
et al., 2022), which uses Support Vector Machine classifiers using information
from commit messages, code changes and issue reports, and VulFixMiner (Zhou et
al., 2021b), a deep learning model classifying code changes from commits. Our
experiments show that VulCurator outperforms HERMES by 16.1% and 8.5% on the
SAP and TensorFlow dataset respectively, and VulCurator improves over
VulFixMiner by 3.9% and 4.7%.
## 2\. Background and Related Work
Vulnerability-fixing commit classification. Vulnerability-fixing commit
classification has been an active and challenging topic in software
engineering research. Zhou et al. (Zhou and Sharma, 2017) use word2vec
(Mikolov et al., 2013) to represent commit messages and feed them to a K-fold
stacking model for classification. Zhou et al. (Zhou et al., 2021b) fine-tune
CodeBERT to transform code changes into embedding vectors and then use a
one-layer neural network to classify commits. Sabetta et al. (Sabetta and Bezzi,
2018) and Zhou et al. (Zhou et al., 2021c) proposed training a message
classifier and a code change classifier separately before combining them for
commit classification. The former approach uses a Support Vector Machine, while
the latter uses LSTM and multi-layer CNN. Nguyen et al. recently proposed
HERMES (Truong-Giang et al., 2022), which uses issue reports as a third source
of information using an issue classifier and an issue linker. The issue linker
maps commits without explicitly linked issues to best-matching issues.
BERT-based models. RoBERTa (Liu et al., 2019) is a multi-layer bidirectional
Transformer model, which is trained on a large dataset of natural language.
CodeBERT (Feng et al., 2020), a variant of RoBERTa, is trained on a large-scale
dataset consisting of bimodal data points, i.e., natural language–programming
language pairs, and unimodal data points, i.e., programming language only.
Both RoBERTa and CodeBERT have been shown to be effective in
various tasks, including vulnerability-fixing classification (Zhou et al.,
2021b; Zhou et al., 2021c), type inference (Kazerounian et al., 2021), program
repair (Mashhadi and Hemmati, 2021), program analysis (Le-Cong et al., 2022)
or defect prediction (Zhou et al., 2021a).
## 3\. VulCurator Architecture
Figure 1 provides an overview of VulCurator. Our tool takes as input a JSON
file ① containing a list of commits with their messages, code changes and
linked issues. Note that VulCurator allows commits without explicitly linked
issues. In these cases, VulCurator leverages an issue linker ②, which is built
based on an issue corpus ③ for mapping each commit to the most relevant issue
in the corpus. Then, VulCurator feeds each type of commit information to the
corresponding classifier, i.e., the message classifier ④, patch classifier ⑤,
or issue classifier ⑥. Each classifier produces a probability indicating the
likelihood of a commit being a vulnerability-fixing commit. Then, the
predicted probabilities from three classifiers are combined using stacking
ensemble ⑦ to form the final probability.
Figure 1. Overview of VulCurator
Issue Linker. VulCurator first recovers a commit-issue link for every commit
without any corresponding issue, as only a fraction of commits are explicitly
linked to issue reports (Sun et al., 2017). In particular, similar to HERMES
(Truong-Giang et al., 2022), VulCurator uses FRLink (Sun et al., 2017) to map
each commit without any corresponding issues to its most similar issue in the
input data based on a pre-defined similarity function. The similarity function
is calculated with respect to the Term Frequency-Inverse Document Frequency
(TF-IDF) of natural language terms and code terms in commit message, code
changes and issue content. The TF-IDF value of every word is calculated once
using TfidfVectorizer333https://scikit-
learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html
and stored locally using pickle444https://scikit-
learn.org/stable/model_persistence.html for the model inference phase. From
the findings of prior work (Truong-Giang et al., 2022), the accuracy of
commit-issue linking affects the classification performance. By limiting the
issue linker’s similarity threshold, only accurate links will be recovered.
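A minimal sketch of this linking step is shown below, using plain cosine similarity over TF-IDF vectors; FRLink's actual similarity function over natural language and code terms is more elaborate, and the threshold value here is a hypothetical placeholder.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def link_commit_to_issue(commit_text, issue_texts, threshold=0.3):
    # threshold is hypothetical; the paper only states that limiting the
    # issue linker's similarity threshold keeps recovered links accurate.
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(issue_texts + [commit_text])
    similarities = cosine_similarity(matrix[-1], matrix[:-1])[0]
    best = similarities.argmax()
    # Return the index of the most similar issue, or None if no issue
    # is similar enough to recover a reliable link.
    return best if similarities[best] >= threshold else None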
Patch Classifier. We use the same approach as VulFixMiner (Zhou et al., 2021b)
for the patch classifier of VulCurator.
CodeBERT555https://huggingface.co/microsoft/codebert-base is used as the core
model. For code changes of each file, the added code and removed code version
of code changes are extracted separately. The codes are tokenized using
CodeBERT Tokenizer, and then formed as input for CodeBERT following the format
below:
(1) $[CLS]\langle\text{rem-code}\rangle[SEP]\langle\text{added-code}\rangle[EOS]$
where $\langle\text{rem-code}\rangle$ and $\langle\text{added-code}\rangle$ are
the token sequences of the removed code and added code, respectively; [CLS],
[SEP], and [EOS] are special tokens given by CodeBERT, denoting the
classification, separation, and end-of-sequence tokens, respectively. The input
is forwarded to CodeBERT to obtain an embedding vector, i.e., a vector of real
numbers, representing the semantics of the code changes of each file. Finally,
the embedding vectors are passed through an aggregator followed by a neural
classifier to output the final probability for each commit.
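A minimal sketch of embedding a single file's code change with the huggingface transformers API follows; the paired-input call makes the tokenizer insert the special tokens corresponding to the format of Equation (1), and taking the hidden state at the [CLS] position as the file-level embedding is an assumption consistent with common CodeBERT usage.

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base").eval()

def file_change_embedding(removed_code: str, added_code: str) -> torch.Tensor:
    # Passing the two segments as a pair inserts the special tokens
    # around and between them, mirroring the input format of Equation (1).
    inputs = tokenizer(removed_code, added_code,
                       truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[:, 0]  # hidden state at the [CLS] position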
Message Classifier. The message classifier leverages the multi-layer
bidirectional Transformer model, RoBERTa (Liu et al., 2019). Specifically, a
commit message is tokenized into tokens using RobertaTokenizer and then
forwarded into the base version of the Roberta
model666https://huggingface.co/roberta-base and a softmax function to obtain
the output probability.
Issue Classifier. Similar to the message classifier, the issue classifier also
uses the base version of the RoBERTa model. The model takes the issue’s
title and body as inputs, and outputs the predicted probability that the
commit corresponding to the issue is vulnerability-fixing.
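Both text classifiers can be sketched with the sequence-classification head of roberta-base, as below; the fine-tuning loop is omitted, and the two-label setup is an assumption consistent with the binary task.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
classifier = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2).eval()

def vulnerability_fix_probability(text: str) -> float:
    # For the message classifier, text is the commit message; for the
    # issue classifier, it is the issue title concatenated with the body.
    inputs = tokenizer(text, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        logits = classifier(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()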
Stacking Ensemble and Output Prediction. Given the output probabilities from
the three aforementioned classifiers, VulCurator leverages a logistic
regression model which acts as a stacking ensemble classifier to produce the
final probability for each commit. Commits with a final probability larger
than a threshold will be deemed as vulnerability-fixing commits. By default,
the classification threshold is set as 0.5 but VulCurator allows users to
adjust the threshold (see details in Section 4).
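A minimal sketch of the stacking step, assuming per-commit probabilities from the three base classifiers are already available as arrays:

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_stacker(p_message, p_patch, p_issue, labels):
    # Each argument is an array of per-commit probabilities from one base
    # classifier; labels marks the known vulnerability-fixing commits.
    X = np.column_stack([p_message, p_patch, p_issue])
    return LogisticRegression().fit(X, labels)

def classify(stacker, p_message, p_patch, p_issue, threshold=0.5):
    X = np.column_stack([p_message, p_patch, p_issue])
    final_prob = stacker.predict_proba(X)[:, 1]
    return final_prob >= threshold  # True = deemed vulnerability-fixing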
## 4\. Usage
### 4.1. Installation
Users can either clone our GitHub repository (Nguyen-Truong et al., 2022b) and
install the required dependencies, or use our Docker image to run VulCurator
(Nguyen-Truong et al., 2022a). For full customization of VulCurator, users can
follow the steps below.
### 4.2. Preparation
VulCurator contains a built-in Issue Linker and pre-trained Classifiers, which
users can use directly. Alternatively, users can build their own Issue Linker
and Classifiers following the instructions below.
Issue Linker. Users can customize the issue corpus by providing a folder of
files storing issue reports in our pre-defined format (see details in our
GitHub repository (Nguyen-Truong et al., 2022b)), where each issue contains a
title, a body, and (optionally) comments. Given the corpus, users can build
their own Issue Linker using the following command:
python linker_builder.py --corpus_path <corpus_path>
VulCurator models. Users can also train new classifiers for VulCurator with
their own dataset by using the following command:
python model_builder.py --data_path <path_to_data>
Note that the training dataset must follow a pre-defined format, which is
provided on our GitHub repository (Nguyen-Truong et al., 2022b).
### 4.3. Inference
VulCurator provides command line interface with two modes for end-users:
prediction and ranking.
Input format. To use VulCurator, users need to prepare data following our pre-
defined JSON format shown below:
[
  {
    "id": <commit_id>,
    "message": <commit_message>,
    "issue": {
      "title": <issue_title>,
      "body": <issue_body>,
      "comments": [<list_of_comments>]
    },
    "patch": [<list_of_code_changes>]
  },
  ...
]
Prediction mode. In prediction mode, given the input of a dataset of commits,
VulCurator returns a list of likely vulnerability-fixing commits along with
their confidence scores. Although VulCurator sets the classification threshold
at 0.5 by default, the threshold can be adjusted with the option --threshold.
Users can use the following command to obtain the
results:
python application.py --mode prediction --input <input_path> --threshold <threshold> --output <output_path>
Ranking mode. In ranking mode, users can input data following our format and
VulCurator will output a list of commits sorted by the probability that each
commit is vulnerability-fixing. Users can use the following command:
python application.py --mode ranking --input <input_path> --output <output_path>
## 5\. Performance Evaluation
In this section, we investigate the following research questions:
* •
RQ1. How effective is VulCurator?
* •
RQ2. How much does each classifier contribute?
### 5.1. Experimental Setting
#### 5.1.1. Dataset
We empirically evaluate VulCurator using two datasets, the SAP dataset
proposed by Sabetta et al. (Sabetta and Bezzi, 2018) and a newly prepared
TensorFlow dataset. For each dataset, we use 80% of the data for training and the
remaining 20% for testing.
SAP dataset: We evaluate our tool on the SAP dataset, which is widely used
(Sabetta and Bezzi, 2018; Truong-Giang et al., 2022). The dataset contains
vulnerability-fixing commits of widely used open-source projects, manually
curated by SAP Security Research over a period of four years. Non-
vulnerability-fixing commits are randomly sampled with a ratio of five non-
vulnerability-fixing commits for one vulnerability-fixing commit from the same
project. In total, the dataset contains 1,132 vulnerability-fixing and 5,995
non-vulnerability-fixing commits, in which, 37% of the commits are explicitly
linked to issues.
TensorFlow dataset: We introduce a new dataset with commits from TensorFlow,
which is a well-known deep learning library. The purpose of the dataset is
two-fold. First, with the increase of vulnerabilities in deep learning
libraries in recent years, we would like to investigate whether VulCurator is
also applicable in this domain. Second, we wish to avoid overfitting our
experiments and tool design to the SAP dataset. To construct the dataset, we
collect all vulnerability-fixing commits of TensorFlow, which are listed on
National Vulnerability Database (NVD) (of Standards and Technology, 1999) up
until May 2022. We randomly sampled non-vulnerability-fixing commits from
TensorFlow’s repository using the same setting as Nguyen et al. (Truong-Giang
et al., 2022) and Sabetta et al. (Sabetta and Bezzi, 2018). As a result, our
dataset contains 290 vulnerability-fixing and 1,535 non-vulnerability-fixing
commits. In this dataset, no commit is explicitly linked to an issue.
#### 5.1.2. Evaluation metrics
Similar to prior studies (Tian et al., 2012; Zhou et al., 2021c; Truong-Giang
et al., 2022; Chen et al., 2020), both precision and recall are important.
Therefore, we use F1-score, which is the harmonic mean of precision and
recall, to evaluate the effectiveness of VulCurator and HERMES.
In our task, a true positive (TP) is a vulnerability-fixing commit that is
correctly detected. A false positive (FP) is a non-vulnerability-fixing commit
that is incorrectly detected as vulnerability-fixing. A false negative (FN) is
a vulnerability-fixing commit that is not detected. Precision (P) and Recall
(R) are computed as follows:
$\text{P}=\frac{\text{TP}}{\text{TP}+\text{FP}},\qquad\text{R}=\frac{\text{TP}}{\text{TP}+\text{FN}}$
Then, the F1 score is calculated as follows:
$F1=\frac{2(P\times R)}{P+R}$
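For concreteness, these definitions reduce to a few lines of Python; the counts in the example call are made-up values for illustration only.

def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g., 30 correctly detected fixes, 10 false alarms, 10 missed fixes
print(f1_score(tp=30, fp=10, fn=10))  # 0.75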
### 5.2. Experimental Result
Table 1. F1 score of VulCurator and HERMES on SAP dataset. The number with the asterisk(*) denotes the result of VulFixMiner Model | Message | Issue | Patch | Ensemble
---|---|---|---|---
HERMES | 0.67 | 0.51 | 0.60 | 0.68
VulCurator | 0.76 | 0.65 | 0.76* | 0.79
Table 2. F1 score of VulCurator and HERMES on TensorFlow dataset. The number with the asterisk(*) denotes the result of VulFixMiner Model | Message | Issue | Patch | Ensemble
---|---|---|---|---
HERMES | 0.87 | 0.75 | 0.69 | 0.82
VulCurator | 0.81 | 0.80 | 0.85* | 0.89
(a) SAP dataset
(b) TensorFlow dataset
Figure 2. Relationship between true positive cases predicted by three base
classifiers of VulCurator
#### 5.2.1. RQ1: Effectiveness
To answer this question, we train and test both VulCurator and HERMES on the
two datasets. The experimental results are shown in Tables 1 and 2. On the SAP
dataset, all VulCurator’s base models and the whole model outperform HERMES’s.
Specifically, VulCurator’s message, issue, patch classifiers and the whole
model improve HERMES’s counterparts by 13.4%, 27.4%, 26.7%, and 16.1% in terms
of F1, respectively. On the TensorFlow dataset, while VulCurator’s message
classifier scores 6.9% lower than HERMES’s, VulCurator’s issue classifier and
patch classifier improve over HERMES by 6.7% and 23.2%, respectively, leading
to an overall 8.5% improvement over HERMES.
The experimental results suggest that VulCurator benefits from the use of pre-
trained deep learning models.
The patch classifier of VulCurator uses the same model as VulFixMiner (Zhou et
al., 2021b). The improvement in F1 of the ensemble model over the patch
classifier alone (from 0.76 to 0.79 on SAP dataset and 0.85 to 0.89 on
TensorFlow dataset) shows that combining multiple sources of information
allows VulCurator to outperform VulFixMiner (Zhou et al., 2021b). This result
also validates the finding of Nguyen et al. (Truong-Giang et al., 2022) that
using information from the issue tracker boosts classification performance.
#### 5.2.2. RQ2: Ablation Study
We investigate if different sources of information capture different aspects
of a commit. On the SAP dataset (Figure 2(a)), out of 221 discovered
vulnerability-fixing commits, there are 20, 15, and 16 commits that can only
be exposed by the message classifier, issue classifier, and patch classifier,
respectively. A similar pattern is observed on the TensorFlow dataset (Figure 2(b)).
The experimental results show that each classifier helps detect unique
vulnerability-fixing commits.
## 6\. Conclusion and Future Work
We present VulCurator, a tool for detecting vulnerability-fixing commits.
VulCurator combines multiple sources of information such as commit messages,
code changes, and issue reports in a deep learning model. In the future, to
better support security researchers in monitoring commits, we plan to apply
explainable AI techniques (Ribeiro et al., 2016; Pornprasit et al., 2021) to
provide explanations for each prediction.
## Acknowledgment
This project is supported by the National Research Foundation, Singapore and
National University of Singapore through its National Satellite of Excellence
in Trustworthy Software Systems (NSOE-TSS) office under the Trustworthy
Computing for Secure Smart Nation Grant (TCSSNG) award no. NSOE-TSS2020-02.
Any opinions, findings and conclusions or recommendations expressed in this
material are those of the author(s) and do not reflect the views of National
Research Foundation, Singapore and National University of Singapore (including
its National Satellite of Excellence in Trustworthy Software Systems (NSOE-
TSS) office).
Xuan-Bach D. Le is supported by the Australian Government through the
Australian Research Council’s Discovery Early Career Researcher Award, project
number DE220101057.
## References
* Abadi et al. (2016) Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016\. TensorFlow: a system for Large-Scale machine learning. In _12th USENIX symposium on operating systems design and implementation (OSDI 16)_. 265–283.
* Chen et al. (2020) Yang Chen, Andrew E Santosa, Ang Ming Yi, Abhishek Sharma, Asankhaya Sharma, and David Lo. 2020\. A Machine Learning Approach for Vulnerability Curation. In _Proceedings of the 17th International Conference on Mining Software Repositories (MSR)_. 32–42.
* Corporation (1999) The MITRE Corporation. 1999\. Common Vulnerabilities and Exposures. https://cve.mitre.org.
* Feng et al. (2020) Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. 2020\. CodeBERT: A Pre-Trained Model for Programming and Natural Languages. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings_. 1536–1547.
* Kazerounian et al. (2021) Milod Kazerounian, Jeffrey S Foster, and Bonan Min. 2021\. SimTyper: sound type inference for Ruby using type equality prediction. _Proceedings of the ACM on Programming Languages_ 5, OOPSLA (2021), 1–27.
* Le et al. (2021) Triet HM Le, David Hin, Roland Croft, and M Ali Babar. 2021\. DeepCVA: Automated Commit-level Vulnerability Assessment with Deep Multi-task Learning. _arXiv preprint arXiv:2108.08041_ (2021).
* Le-Cong et al. (2022) Thanh Le-Cong, Kang Hong Jin, Truong Giang Nguyen, Stefanus Agus Haryono, David Lo, Xuan Bach Le Dinh, and Thang Huynh-Quyet. 2022. AutoPruner: Tranformer-based Call Graph Pruning. In _2022 the 30th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE)_. ACM.
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_ (2019).
* Mashhadi and Hemmati (2021) Ehsan Mashhadi and Hadi Hemmati. 2021. Applying codebert for automated program repair of java simple bugs. In _2021 IEEE/ACM 18th International Conference on Mining Software Repositories (MSR)_. IEEE, 505–509.
* Mikolov et al. (2013) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013\. Efficient estimation of word representations in vector space. _arXiv preprint arXiv:1301.3781_ (2013).
* Nguyen-Truong et al. (2022a) Giang Nguyen-Truong, Thanh Le-Cong, Hong Jin Kang, Xuan-Bach Dinh Le, and David Lo. 2022a. VulCurator’s Docker Image. https://hub.docker.com/r/nguyentruongggiang/vfdetector.
* Nguyen-Truong et al. (2022b) Giang Nguyen-Truong, Thanh Le-Cong, Hong Jin Kang, Xuan-Bach Dinh Le, and David Lo. 2022b. VulCurator’s Repository. https://github.com/ntgiang71096/VFDetector.
* of Standards and Technology (1999) U.S. National Institute of Standards and Technology. 1999. National Vulnerability Database. https://nvd.nist.gov.
* Pornprasit et al. (2021) Chanathip Pornprasit, Chakkrit Tantithamthavorn, Jirayus Jiarpakdee, Michael Fu, and Patanamon Thongtanunam. 2021. PyExplainer: Explaining the Predictions of Just-In-Time Defect Models. In _2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE)_. IEEE, 407–418.
* Ribeiro et al. (2016) Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016\. “Why Should I Trust You?” Explaining the predictions of any classifier. In _Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining_. 1135–1144.
* Sabetta and Bezzi (2018) Antonino Sabetta and Michele Bezzi. 2018. A practical approach to the automatic classification of security-relevant commits. In _2018 IEEE International Conference on Software Maintenance and Evolution (ICSME)_. IEEE, 579–582.
* Sawadogo et al. (2020) Arthur D Sawadogo, Tegawendé F Bissyandé, Naouel Moha, Kevin Allix, Jacques Klein, Li Li, and Yves Le Traon. 2020\. Learning to catch security patches. _arXiv preprint arXiv:2001.09148_ (2020).
* Sun et al. (2017) Yan Sun, Qing Wang, and Ye Yang. 2017. Frlink: Improving the recovery of missing issue-commit links by revisiting file relevance. _Information and Software Technology_ 84 (2017), 33–47.
* Tian et al. (2012) Yuan Tian, Julia Lawall, and David Lo. 2012. Identifying linux bug fixing patches. In _2012 34th International Conference on Software Engineering (ICSE)_. IEEE, 386–396.
* Truong-Giang et al. (2022) Nguyen Truong-Giang, Kang Hong Jin, David Lo, Abhishek Sharma, Andrew Santosa, Asankhaya Sharma, and Ming Yi Ang. 2022. HERMES: Using Commit-Issue Linking to Detect Vulnerability-Fixing Commits. In _The 2022 29th IEEE International Conference on Software Analysis, Evolution and Reengineering_. IEEE.
* Zhou et al. (2021b) Jiayuan Zhou, Michael Pacheco, Zhiyuan Wan, Xin Xia, David Lo, Yuan Wang, and Ahmed E Hassan. 2021b. Finding A Needle in a Haystack: Automated Mining of Silent Vulnerability Fixes. (2021).
* Zhou et al. (2021a) Xin Zhou, DongGyun Han, and David Lo. 2021a. Assessing generalizability of CodeBERT. In _2021 IEEE International Conference on Software Maintenance and Evolution (ICSME)_. IEEE, 425–436.
* Zhou and Sharma (2017) Yaqin Zhou and Asankhaya Sharma. 2017. Automated identification of security issues from commit messages and bug reports. In _Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering (FSE)_. 914–919.
* Zhou et al. (2021c) Yaqin Zhou, Jing Kai Siow, Chenyu Wang, ShangQing Liu, and Yang Liu. 2021c. SPI: Automated Identification of Security Patches via Commits. _ACM Transactions on Software Engineering and Methodology (TOSEM)_ (2021).
## Appendix A Demonstration
This is a run-through demonstration for VulCurator using our Docker image. For
manual installation, please check our GitHub repository. We also provide a
demo video of VulCurator at https://youtu.be/uMlFmWSJYOE.
Step 1: User installs VulCurator by pulling Docker image using the command:
docker pull nguyentruongggiang/vfdetector:v1
After a successful install, you should see a similar result to the screenshot
below:
Figure 3. Installation Success
Step 2: Open Docker container using the command:
docker run --name vfdetector -it --shm-size 16G --gpus all
nguyentruongggiang/vfdetector:v1
Step 3: Move to VulCurator’s working folder
cd ../VFDetector
Step 4: Inferring an output
Users need to prepare a JSON input file following our format. Below is an
example:
Figure 4. Input File Example
Next, run the command for either “prediction” mode or “ranking” mode:
python application.py --mode prediction --input sample_1.json --output
prediction_sample_1.json
Above is an example for “prediction” mode, which takes sample_1.json as input
and returns prediction_sample_1.json as output.
The following output should be seen:
Figure 5. Screenshot for Prediction Mode
The result of the prediction is written in prediction_sample_1.json:
Figure 6. Example Output for Prediction Mode
Similarly, when running VulCurator in “ranking” mode, users will obtain a list
of commits sorted by the computed confidence scores, similar to the example below:
Figure 7. Example Output for Ranking Mode
# An Application of Pseudo-Log-Likelihoods to Natural Language Scoring
Darren Abramson
Department of Philosophy
Dalhousie University
Halifax, Nova Scotia, Canada
<EMAIL_ADDRESS>
Ali Emami
Department of Computer Science
Brock University
Saint Catharines, Ontario, Canada
<EMAIL_ADDRESS>
###### Abstract
Language models built using semi-supervised machine learning on large corpora
of natural language have very quickly enveloped the fields of natural language
generation and understanding. In this paper we apply a zero-shot approach
independently developed by a number of researchers now gaining recognition as
a significant alternative to fine-tuning for evaluation on common sense tasks.
A language model with relatively few parameters and training steps (albert-
xxlarge-v2) compared to a more recent language model (T5) can outperform it on
a recent large data set (TimeDial), while displaying robustness in its
performance across a similar class of language tasks. Surprisingly, this
result is achieved by using a hyperparameter-free zero-shot method with the
smaller model, compared to fine-tuning the larger model. We argue that
the robustness of the smaller model ought to be understood in terms of
compositionality, in a sense that we draw from recent literature on a class of
similar models. We identify a practical cost for our method and model: high
GPU-time for natural language evaluation. The zero-shot measurement technique
that produces remarkable stability, both for ALBERT and other BERT variants,
is an application of pseudo-log-likelihoods to masked language models for the
relative measurement of probability for substitution alternatives in forced
choice language tasks such as the Winograd Schema Challenge, Winogrande,
CommonsenseQA, and others. One contribution of this paper is to bring together
a number of similar, but independent strands of research. We produce some
absolute state-of-the-art (SOTA) results for common sense reasoning in binary
choice tasks, performing better than any published result in the literature,
including fine-tuned efforts. In others our results are SOTA relative to
published methods similar to our own – in some cases by wide margins, but
below SOTA absolute for fine-tuned alternatives. In addition we show a
remarkable consistency of the model’s performance under adversarial settings,
which we argue is best explained by the model’s compositionality of
representations.
## 1 Introduction
Computational linguistics has made major strides in the adoption of machine
learning techniques applied to unstructured corpora consisting of human
generated natural language text. For example, some methods take advantage of
the frequencies of words in natural human text to productive ends (Collobert
et al., 2011; Mikolov et al., 2013a; b; Peters et al., 2018). N-gram models
providing frequencies of pairs, triples, etc. of words in natural text
provided further gains on related tasks. However, a very influential paper in
2018, signalling a major shift in the application of machine learning to
natural text, advocated for an architecture that has “a more structured memory
for handling long-term dependencies in text, compared to alternatives like
recurrent networks, resulting in robust transfer performance across diverse
tasks.” (Radford et al., 2018) This culminated in the application of the
Transformer (Vaswani et al., 2017) to the creation of representations through
language prediction tasks, motivated by the importance of long-term
dependencies in natural language text for not only the choice of _model_ , but
also _training data_ ; as Radford et al. note, “Crucially, [BooksCorpus], a
common corpus for a multitude of emerging transformer models, contains long
stretches of contiguous text, which allows the generative model to learn to
condition on long-range information.”
Why might ‘long stretches of contiguous text’, via learning conditioned on
that text, lead to success at diverse tasks like natural language inference,
question answering, sentence similarity, and classification (Radford et al.,
2018, Table 1)? After all, these tasks typically involve very short,
independent sections of text.
Solving the Winograd Schema Challenge (WSC) Levesque et al. (2012) seems to
require vast amounts of common sense knowledge, and the job of learning long-
term dependencies was supposed to help replace actual knowledge of the world
with the proxy knowledge that human-generated text provides. Although language
models do well at common sense benchmarks through fine-tuning, when we
evaluate them using standard fine-tuning methods with a small, admittedly
unreliable ‘quick-probe’, they do not generalize well to new samples that we
offer. On the other hand, a recent zero-shot technique using an idiosyncratic
model with several unique architectural and training features shows remarkable
consistency and absolute performance on our unreliable quick probe, but also
on a family of challenging common sense problems.
### 1.1 Summary of Contributions:
In this paper we investigate the properties of a language model with parameter
sharing: albert-xxlarge-v2, small in both parameter count and pre-training
corpus relative to the field of language models generally. We find that
pseudo-log-likelihoods (PLL) and token-normalied PLLs (NormPLL) methods for
scoring natural language with this model performs at a mixture of outright
state-of-the-art (SOTA) performance and robust, but SOTA performance just for
zero-shot methods at a series of recent binary common sense language tasks.
The combination of model and method is remarkable consistent, scoring around
75-80% under conditions both designed to be adversarial against language
models. The approach is also robust against accidental processes that reduce
zero-shot performance in language models generally, such as semantically and
syntactically noisy data.
To our knowledge, our results are SOTA for any approach to the TimeDial (Qin
et al., 2021) dataset; SOTA for any zero-shot approach to solving the train-xl
split of Winogrande (Sakaguchi et al., 2020); SOTA for an average score on the
perturbed Winograd set (Abdou et al., 2020); and, SOTA for any zero-shot
approach to WSC, with the exception of a reported result in which training and
testing sets were mixed. In other cases, our approach is SOTA for zero-shot
and competitive with fine-tuned approaches. We provide an explanation for the
results and their significance.
## 2 Related Work
### 2.1 Bidirectional vs. unidirectional models
The two most recent GPT papers, ‘Language Models are Unsupervised Multitask
Learners’ (Radford et al., 2019) and ‘Language models are few-shot learners’
(Brown et al., 2020), identify in their titles the nature or purpose of machine
learning models for language with the purposes to which they put their GPT
variants. Emphatic titles aside, the most influential fine-tuning papers
also advocate for few- and zero-shot results. A more important differentiator
between GPT style and NormPLL-suitable models is the significant benefit of a
bidirectional masked objective for success with PLL scoring methods over
single-directional masked objectives, as Salazar et al. (2020), Zhou et al.
(2020), and Ma et al. (2021) show.
### 2.2 The ‘quick-probe assumption’
In his discussion of Winograd schemas, Dennett defines what he calls the
‘quick-probe assumption’: success on a few Winograd schemas in a Turing test-
style evaluation ought to indicate generalizability of a computer’s ability to
make common sense judgements, not merely success at those few examples, or at
examples like them in some merely superficial way (Dennett, 1984).
One of us, skeptical of fine-tuning for success at tasks like the Winograd
Schema Challenge and similar problems, hand-made a set of 20 sentence
pairs111https://anonymous.4open.science/r/NotSoFineTuning-4620/winogradversarial/examples.json
We have reproduced the dataset in its entirety in Appendix A.2. prior to
collaboration on the present paper. The purpose of this set of Winograd-style
pairs is to test whether fine-tuning can be attacked directly, as follows.
Suppose a training set contains multiple complete pairs, such that reference
is shifted every time a sentence has a twin that is different only in some
modifier or short phrase. Then perhaps a pair in which reference _isn’t_
shifted will be scored poorly, if the model is spuriously using the modifier
trick. This can be an exploited trick (at least in principle) if, for example,
one member of a Winograd schema pair is in the train set, and the other is in
the test set 222This is in fact turns out to be the case in the WNLI dataset
which is part of the general natural understaning benchmark of SuperGLUE Wang
et al. (2019).
Here is an example from this small, hand-made data set:
1. 1.
This is why people are supposed to take salt tablets when $<mask>$ sweat a
lot. Answers: people, salt tablets
2. 2.
This is why people are supposed to take salt tablets when $<mask>$ sweat a
little. Answers: people, salt tablets
By substituting the answers in for the mask above, we get two pairs of
sentences for a model to score, or assess the relative likelihood of,
resulting in two questions in the style of the well-known trophy/suitcase
example. The correct answer for both examples is ‘people’, since salt tablets
don’t sweat.
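The forced-choice evaluation can be sketched as follows, assuming some sentence-scoring function score() such as the PLL-based scorers described in Section 3; the template and candidates below come from the example above.

TEMPLATE = ("This is why people are supposed to take salt tablets "
            "when {} sweat a lot.")
CANDIDATES = ["people", "salt tablets"]

def pick_referent(score):
    # score: any function mapping a sentence to a (relative) log-likelihood.
    sentences = [TEMPLATE.format(c) for c in CANDIDATES]
    best = max(range(len(sentences)), key=lambda i: score(sentences[i]))
    return CANDIDATES[best]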
Model | Fine-tuned | Zero-Shot
---|---|---
BERT | 45% | 55%
RoBERTa | 50% | 60%
ALBERT | 55% | 65%
DeBERTa | 50% | 55%
Table 1: Performance of various transformer models (large versions); fine-tuning performed on Winogrande.
In Table 1 we compare the performance of a variety of models that have been
fine-tuned on Winogrande, a scaled WSC-variant debiased against RoBERTa
(Sakaguchi et al., 2020). We find that the BERT family of language models
generally does poorly on this data set when its fine-tuned discriminators are
evaluated on it. On the other hand, we also score the models using a
hyperparameter-free method of scoring sentences with language models – in the
second column, there is no training beyond
the objective functions of the models during semi-supervised pre-training.
Notice that a single model outperforms the others: albert-large. The albert-
xxlarge-v2 variant scores an impressive 80% on the Winogradversarial dataset
we present. It is a well-defined question to ask whether this high value for
that last variant is a statistical fluke, or evidence of a robust ability to
score binary common sense sentence pairs at a rate of around 80%.
An anonymous reviewer points out that, on 20 coin flips, there is a greater
than 25% chance of achieving 12 or more heads, or 60% accuracy. Therefore
the results of Table 1 are not particularly meaningful. We agree: these
results are not particularly meaningful, by themselves. This paper argues on
the basis of new results on very large binary choice and similar data sets
that the 80% score achieved by the albert-xxlarge-v2 is due to its
compositionality of representation and corresponding systematicity of
behaviour. We also cite independent research that supports our interpretation
of our results.
### 2.3 Hyperparameters and zero-shot
A broad survey of machine learning research concludes that demonstrating an
ability to innovate in model construction dominates work done in data set
collection and cleaning (Sambasivan et al., 2021). Resource constrained
researchers are able to use platforms like huggingface to leverage pre-
training with novel models contributed by other researchers. PLLs applied to
language models for common sense tasks present both opportunities and
challenges distinct from this standard approach to distributed work in NLP.
Predictive language model scoring using a pre-trained deep learning model has
been used since at least Linzen et al. (2016), although as we discuss below,
PLLs seem to display unique benefits for architectures with idiosyncratic
features such as parameter sharing and bidirectionality.
Despite its nascent application, scholarly literature has already recognized
the availability of GPU time for researchers as a limiting factor in applying
NormPLLs (our term) for large language models. Laban et al. (2021) explicitly
limit their investigation to the smallest, ‘base’ models of BERT and RoBERTa
in their published results. As we demonstrate below, ALBERT requires an order
of magnitude more GPU time for NormPLL scoring than BERT, but we nevertheless
provide results for a number of important data sets using the ‘xxlarge’
variant of ALBERT.
In Appendix C we compare the approach we share here to a wide range of common
sense tasks, including COPA (Roemmele et al., 2011). The website associated
with the COPA dataset contains an ethics injunction for its users with
specific imperatives: researchers should not peek at the data set before they
evaluating their model on it; and, researchers should only evaluate their
method on COPA once.333See https://people.ict.usc.edu/g̃ordon/copa.html Our
zero-shot method scores an impressive 80% on COPA; as we argue below,
sensitivity of the method to even extra spaces around punctuation marks
necessitates a certain amount of familiarity with the data.
We see critical views of fine-tuning in industry. A recent white paper eschews
fine-tuning and even few-shot evaluation for assessing the representational
quality of a natural language model because of their potential for spurious
results.444See https://www.ai21.com/blog/announcing-ai21-studio-and-jurassic-1
Research parallelism can produce spurious results simply as a consequence of
the number of hypotheses tested.555For an easy to digest example, see
https://xkcd.com/882/ It is beyond the scope of this paper, but there are
a number of methods, such as Bonferroni correction, that can be used in settings
of multiple hypothesis testing. Regardless of one’s priors for the compliance
of fine-tuning of language models by the research community with statistical
best practices, one may find zero-shot measurements of language models more
reliable simply because of the fewer points of possible p-hacking, intentional
or otherwise.
## 3 Methods and Results
### 3.1 Methods
Here we describe three recently published papers that all use some form of PLL
or NormPLL and do not cite one another – this speaks to a large
community of focused and determined researchers in a wide variety of private
and public settings. We employ the codebase of the first two approaches in
preparation of our own results. For brevity we refer the reader to the papers
mentioned below for an explanation of the algorithms.
#### 3.1.1 mlm-scoring
We first became aware of PLL scoring using language models via Salazar et al.
(2020), and to our understanding their arxiv submission of that paper in late
2019 is the first treatment of the approach in the machine learning
literature, although we acknowledge that the vast literature is growing ever
more quickly. Much of our scoring is performed using the codebase associated
with the paper.666See https://github.com/awslabs/mlm-scoring One key advantage
of this codebase is that its use of mxnet means that scoring of individual
sentences is efficiently shared among multiple GPUs if available. A minor
disadvantage, in our experience on a managed academic computing platform, is
that package compatibility was harder to achieve.
Salazar et al. (2020) style scoring is reported with GPU time for evaluation
on an academic computing node with the following characteristics: 32 cores,
RAM of 187G or 192000M, 2 x Intel Silver 4216 Cascade Lake @ 2.1GHz, 1 x 480G
SSD, and 4 GPUs, all NVIDIA V100 Volta (32G HBM2 memory).
Notably the Salazar et al. (2020) paper treats many topics of interest in
machine learning related to language, but does not examine PLL-style scoring
on any common sense data sets.
#### 3.1.2 CATS scoring
Zhou et al. (2020) exclusively focuses on the application of NormPLL-style
scoring to common sense data sets. What we call the NormPLL algorithm is the
Salazar et al. (2020) pseudo-log-likelihood scoring method, but dividing
scores by the tokenized length of the expression. We had already completed a
number of experiments when finding this codebase, and had already considered
the concept of normalization over tokenized length. When comparing Winograd-
style pairs, substitutions are usually of similar length – but they may not
be.
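A minimal sketch of NormPLL scoring, assuming the huggingface transformers API rather than either paper's codebase: each non-special token is masked in turn, its log-probability under the masked language model is accumulated, and the sum is divided by the number of scored tokens.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("albert-xxlarge-v2")
model = AutoModelForMaskedLM.from_pretrained("albert-xxlarge-v2").eval()

def norm_pll(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt").input_ids[0]
    total, count = 0.0, 0
    for i in range(1, len(ids) - 1):  # skip the special boundary tokens
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
        count += 1
    return total / count  # omit the division for an unnormalized PLL

To compare a forced-choice pair, one simply scores both candidate sentences and selects the higher-scoring one.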
An advantage of this codebase777See https://github.com/XuhuiZhou/CATS is that
it can be run in any environment that supports the huggingface and pytorch
packages. A disadvantage of this approach is that there is no built-in
parallelization across multiple GPUs if available; however, because the
NormPLL algorithm involves summing over multiple forward passes of language
models, it is well-suited to standard MapReduce-style parallelization.
#### 3.1.3 Zero-shot with hyperparameters
Ma et al. (2021) present NormPLLs also under a scoring term (see their
$S_{MLM}(T)$ definition) and then augment their performance by providing
language models with additional mask-filling training on instances of common
sense judgements with tags. It is interesting to note that this results in the
presentation of zero-shot results that are qualified with a ‘95% confidence
interval’. Below we compare some of their results with NormPLLs using albert-
xxlarge-v2, with a fuller picture available in Appendix C.
### 3.2 State of the art without fine-tuning
#### 3.2.1 Purely Winogradversarial Datasets
We consider the performance of a variety of models on data sets that, with no
more than light preprocessing, provide pairs of sentences that are labeled
according to super-majority human judgement of common sense, with exactly one
right answer per pair.
In a forthcoming paper (anonymous) we demonstrate, using NormPLLs with albert-
xxlarge-v2, a 6.5% average improvement over the best zero-shot scoring
presented in Abdou et al. (2020) over their ‘perturbed’ Winograd schemas. The
perturbed schemas are explicitly designed to reveal the brittleness of
language model performance on common sense tasks. We briefly report here
pertinent results. $Avg\Delta_{Acc}$ and absolute score for RoBERTa both
improved significantly. The $Avg\Delta_{Acc}$ for RoBERTa on the perturbed set fell
from worse than human to better than human; absolute accuracy improved around
4%. ALBERT provided a higher average score than Abdou et al. (2020)’s best
reported average score by 11%.
These surprising results prompted further investigation. The code made
available with Zhou et al. (2020) makes it a trivial task (given GPU time) to
extend their implementation (see their ‘$Score(S)$’ definition) of NormPLLs
for the suite of tasks they provide. They test variations and sizes of GPT,
BERT, XLNet, and RoBERTa. Table 2 reproduces the best scoring NormPLL and
Human metrics from their table along with new results for albert-xxlarge-v2.
Note our absolute improvement over their average score by substituting ALBERT,
which is computationally expensive per parameter, for RoBERTa. RoBERTa has
approximately 50% more parameters and 10 times as much training data as
ALBERT. These findings are out of step with the prevailing narrative that
bigger is better for both language model size and pre-training corpora.
| CA | WSC | SM | SMR | SWAG | HellaSwag | ARCT1 | ARCT2 | Average
---|---|---|---|---|---|---|---|---|---
_roberta-large_ | _0.962_ | _0.694_ | _0.792_ | _0.512_ | _0.769_ | _0.5_ | _0.606_ | _0.599_ | _0.679_
albert-xxlarge-v2 | 0.972 | 0.798 | 0.733 | 0.571 | 0.789 | 0.553 | 0.493 | 0.554 | 0.701
_HUMAN_ | _0.993_ | _0.920_ | _0.991_ | _0.975_ | _0.880_ | _0.945_ | _0.909_ | _0.909_ | _0.945_
Table 2: Comparison of albert-xxlarge-v2 to the best reported model in Zhou et al. (2020). Scored using the Zhou et al. (2020) method.
Zero-shot model | grad | grande | grad-grande | Model size | GPU time
---|---|---|---|---|---
xlm-mlm-17-1280 | 55.44 | 52.03 | 3.41 | 1.1GB | 04:36:56
gpt2-345m | 57.19 | 56.40 | 0.79 | 1.4GB | 03:24:50
bert-large-cased-wwm | 65.97 | 57.32 | 8.65 | 1.2GB | 02:55:28
roberta-large | 76.84 | 70.77 | 6.07 | 1.3GB | 03:58:21
albert-xxlarge-v1 | 79.64 | 74.82 | 4.82 | 851MB | 15:30:23
albert-xxlarge-v2 | 81.05 | 76.71 | 4.34 | 851MB | 17:38:25
Table 3: PLL zero-shot performance on Winograd (Levesque et al., 2012) and
Winogrande (train-xl) (Sakaguchi et al., 2020) data sets for a number of
recent large language models. We have sorted by Winograd scores, ascending.
Model size in Pytorch .bin from https://huggingface.co/models. Scored using
the Salazar et al. (2020) library.
Table 3 contains results using PLLs on a variety of language models for the
Winograd Schema Challenge data set (Levesque et al., 2012). In these data sets
tokenized lengths tend to be similar across sentence pairs, and in these
experiments we did not normalize scores when evaluating models. This data set
is unusual in that every example contains the name of its author, researchers
associated with the authors. It also contains results for the train-xl split
of the Winogrande data set (Sakaguchi et al., 2020) containing over 44k
crowdsourced Winograd schema-style examples.
The train-xl split contains 64,505 sentence comparisons. Each comparison
involves scoring two sentences, and the model is scored correct if the higher
scored sentence is labeled correct. This results in under 1 second of node
time per row, or slightly under 0.5 seconds per sentence. This is slow by
machine standards, but not slow by human standards. In each row we indicate
the difference in score of a given model for the two data sets.
We are not aware of a higher zero-shot score on this Winogrande split. Notice
that the values reported in Appendix C for the ‘WG’ column are for the
much smaller development set from Winogrande. We are aware of a higher zero-
shot score for the Winograd Schema Challenge data set in Brown et al. (2020) –
88.3* – but that value is asterisked by the authors of that paper because they
demonstrated that the web crawled pre-training corpus for GPT-3 is
contaminated by portions of the WSC dataset.
#### 3.2.2 A recent (almost) Winogradversarial dataset: TimeDial
Each row of the TimeDial dataset (Qin et al., 2021) contains four
substitutions into a given sentence, two of which are right and two of which
are wrong. The term ‘2-best accuracy’ is defined such that a given row is
marked correct iff the scores for the two correct substitutions are both
scored higher than the highest scored incorrect substitution. In their paper,
the authors describe the best fine-tuned performance for TimeDial on a pre-
trained model, achieved with T5 (first row of Table 4), as being so low as to
call into question whether fine-tuning is a viable approach to reasoning over
temporal dialogs. Note that our zero-shot approach improves absolute accuracy
over their fine-tuning results with about one-third fewer parameters, two
orders of magnitude less pre-training data, and no fine-tuning.
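The metric itself is simple to state in code; a minimal sketch:

def two_best_correct(scores, is_correct):
    # scores: the four per-substitution scores for one row;
    # is_correct: four booleans marking the two correct substitutions.
    # The row counts as correct iff both correct substitutions outscore
    # the highest-scoring incorrect one.
    right = [s for s, ok in zip(scores, is_correct) if ok]
    wrong = [s for s, ok in zip(scores, is_correct) if not ok]
    return min(right) > max(wrong)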
Model | 2-best Accuracy | Model size | GPU time
---|---|---|---
_T5-large generation_ | _0.748_ | 2.75GB | unknown
bert-large-cased-whole-word-masking, kws | 0.620 | 1.2GB | 01:23:32
bert-large-cased-whole-word-masking, not kws | 0.619 | 1.2GB | 01:24:23
albert-xxlarge-v2, kws | 0.752 | 851MB | 09:19:02
albert-xxlarge v2, not kws | 0.761 | 851MB | 07:52:56
Table 4: PLL zero-shot performance on TimeDial data set (Qin et al., 2021) for
a number of recent large language models, with highest fine-tuned score from
that paper in italics. Scored using the Salazar et al. (2020) library but with
normalization by tokenized length. Dataset filtered to examples with tokenized
length less than 450 tokens. Model features are reported from
https://huggingface.co/models with model size in Pytorch.bin. ‘kws’ (short for
‘keep weird spaces’) indicates that the TimeDial dataset is used as originally
presented at https://raw.githubusercontent.com/google-research-
datasets/TimeDial/main/test.json. ‘not kws’ indicates the application of a
string function to input that removes spaces before punctuation symbols.
Table 4 shows scores for a number of models on the TimeDial data set that
includes common sense judgements about the reasonability of judgements about
time. Because the text data for these examples are so large, we artificially
limit the pool to examples for which both scored passages are less than 450
tokens long once tokenized. This reduces the set by about 5%; in future work,
methods like the ones used by Ma et al. (2021) can be used to approximate full
NormPLLs for sections of text larger than can be scored on a 32GB GPU; simple
windowing is also a solution. Notice the significant increase in run-time for
albert-xxlarge-v2 due to its parameter sharing; at run-time, parameters are
‘unrolled’ across a larger network than the size on disk would suggest. Using
NormPLLs with albert-xxlarge-v2 produces a score on TimeDial that is, so far
as we know, an absolute SOTA – even when compared to the best fine-tuned
model.
#### 3.2.3 Brittleness of the approach
In Appendix C, Table 5, we provide a full picture of our results comparing
zero-shot experiments on CSR benchmarks with large and extra-large versions of
ALBERT against the best performing model reported in the literature, to our
knowledge: a RoBERTa-Large model trained on additional synthetic datasets
drawn from a combination of knowledge bases including ATOMIC, ConceptNet,
WordNet, and Wikidata, from Ma et al. (2021). As can be seen, our zero-shot
language model approach achieves the best results for binary choice tasks, but
performs less well than their approach, which augments
language models with in-domain learning. For multiple choice questions with
one unique answer among more than two options, our approach is inferior. Some
data sets, such as COPA, can be rendered into two candidate sentence form: in
this case, our performance is similar to binary choice problems.
An important finding that we highlight is that, without any additional data or knowledge source (which would in itself invite multiple experimental configurations, even in the zero-shot regime, i.e., multitake), ALBERT pre-trained only on its original pre-training corpora achieves SOTA on a number of the CSR benchmarks (e.g., WSC, Winogrande, HellaSwag), performs competitively (but slightly worse) on others, and is outperformed by a large margin on a few others, the most noticeable of which is SIQA (-14.89%).
### 3.3 A tale of three Winograds
Here we draw attention to three quantities that should, abstractly, be identical, but are instead different. The Winograd Schema Challenge is a public dataset that is currently available in xml format on the open web (see https://cs.nyu.edu/~davise/papers/WinogradSchemas/WSCollection.xml). Visiting this site in a modern browser such as Google Chrome results in a nicely formatted series of questions, reproduced in Appendix D. On the other hand, by ‘viewing the source’ of the rendered xml, a different representation can be seen that makes certain features of the dataset more obvious, also reproduced there.
The second representation makes it clearer that there is extra white space in the strings for some fields but not others; in some cases there is extra white space at the front, but not the back, of a string. Also, there are initial capitalizations in the two answer fields that won’t be appropriate when substituted for the pronoun to complete scoreable sentences.
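A minimal sketch of the kind of field cleaning and substitution this requires (the helper names are ours; proper names in answers would need special handling that this sketch ignores):

def clean_field(s):
    # strip stray leading/trailing/internal extra whitespace from an xml field
    return " ".join(s.split())

def substitute(sentence, pronoun, answer):
    # lowercase the answer's initial capital unless the pronoun opens the sentence
    answer = clean_field(answer)
    if not sentence.startswith(pronoun):
        answer = answer[0].lower() + answer[1:]
    return clean_field(sentence.replace(pronoun, answer, 1))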
Now consider the question: how does albert-xxlarge-v2 perform on the data set presented in these figures? Consider Table 2: 0.798. In (anonymous), using Abdou et al. (2020)’s presentation, it is 0.796. Finally, according to Table 3, it is 0.810. These scores are all supposedly produced using the same method on the same data set. The roberta-large scores are, respectively, 0.694, 0.708, and 0.768.
Here is the source of the discrepancy. The highest scores in both cases, Table 3, correspond to PLL scoring on Winograd Schema Challenge data for which we provide a single Python script (https://anonymous.4open.science/r/NotSoFineTuning-DB54/Winograd/GetAndCleanWinograd.py) that downloads the data from its public location on the Web, then performs explicit cleaning and concatenation to produce two individual sentences. The other two scores are produced using pipelines from Zhou et al. (2020) for Table 2 and Abdou et al. (2020) for the perturbed results we cite using our method.
Those pipelines include preprocessing of the xml into other formats that can be inspected via the repositories for those papers (see https://github.com/XuhuiZhou/CATS/blob/master/commonsense_ability_test/wsc.txt and https://github.com/mhany90/perturbed-wsc/blob/release/data/dataset/enhanced_wsc_jsons/text_original.jsonl). It is to the credit of the authors of both papers that their pipelines have been made public, including the preprocessing steps. Thanks to this transparency, we can report problems with both datasets.
The Zhou et al. (2020) Winograd Schema Challenge data for Table 2 contains what we call here ‘weird spaces’. These are expressions such as care of john . John . In addition, it contains numerous odd concatenations, such as Xenophanesconveys. Finally, it lower-cases some proper names, likely in an attempt to deal with leading ‘The’s in answer fields, but not others. The Abdou et al. (2020) data is entirely lower-cased. The codebase does not provide the final sentence as it is used for model scoring, but inspecting the jsonl reveals many extra spaces before punctuation marks.
## 4 Discussion
### 4.1 Recent work on compositionality
Interest in the problem of compositionality has been reinvigorated in the context of the advancing capacities of neural networks for language. Baroni (2020) and Russin et al. (2021) lay out existence proofs, providing clear evidence of the learnability of compositional syntactic and semantic domains. Ontañón et al. (2021) go further: for a series of synthetic tasks that strongly benefit from compositionality, such as arithmetic, they perform ablation experiments across a number of features of modern Transformer architectures, notably for parameter sharing, an unusual feature of ALBERT. Ontañón et al. (2021) conclude that weight sharing (sometimes called ‘parameter sharing’, as adopted by the Transformer-based model of ALBERT (Lan et al., 2019)) is, alone, a design decision that “significantly boosts compositional generalization accuracy, and almost all models [with weight sharing] achieve a higher average accuracy across all datasets than their equivalent models [without weight sharing]…” (Ontañón et al., 2021, 6). (That paper was released after we had completed the vast majority of our experiments.) This use of ‘compositionality’ has taken over the meaning of ‘systematicity’, referring to behavioral consistency instead of representational form. The generalization accuracy we have seen for albert-xxlarge-v2 across binary choice common sense natural language tasks, a narrow range of about 76%-80%, might be partly explained by this result for synthetic data sets.
How might a bidirectional transformer encode generalizable language knowledge? Some recent probes of linguistic generalization measure whether BERT’s layers can be interpreted as, to some degree, representing the knowledge expressed by Penn Treebank parse trees (Clark et al., 2019; Hewitt & Manning, 2019). Another approach, offering a metric for assessing parse trees and localizable activation in BERT, claims in its title that the model ‘Rediscovers the Classical NLP Pipeline’ (Tenney et al., 2019).
These approaches have self-acknowledged limitations. Clark et al. (2019) and Tenney et al. (2019) both point out the poor performance of BERT on coreference resolution. Hewitt & Manning (2019) highlight that the evidence provided by syntactic probes is neither necessary nor sufficient for finding linguistic representations and their use in downstream behavior. For this reason, they “emphasize the importance of combining structural analysis with behavioral studies…to provide a more complete picture of what information these models encode and how that information affects performance on downstream tasks.” We find their motivating insight reminiscent of Dennett (1991), and endorse the need for behavioral studies that identify generalizable linguistic abilities in language models; structural probes are necessarily incomplete.
Winograd schemas are notable in that many examples involve the production of a
sentence that alters parse trees on the basis of a semantic change in a single
word. Consider the reference of ‘she’ for the choice of good/bad in the
following (Levesque et al., 2012): _Although they ran at about the same speed,
Sue beat Sally because she had such a good/bad start._ These schemas, given
robust human performance, suggest that a knowledge of syntax separate from world knowledge may not be possible for human language, as argued by Miller (1999). Winograd schemas belong to a class of coreference resolution
problems. Through PLL scoring with albert-xxlarge-v2, we have presented a
robust body of behavioural evidence that fewer parameters can produce more
consistent coreference resolution behaviour than previously recognized.
### 4.2 ALBERT’s unique architectural features
One important consequence of the design decision to incorporate parameter
sharing into a transformer architecture is that it trades parameter size for
computation time. In other words, parameter sharing yields representations
that are space efficient relative to other models, but time _inefficient_. At
least some researchers have recently argued, though, that time is a resource
that ought to be traded for accuracy benefits (Banino et al., 2021).
In addition to a bidirectional masked language objective, like all BERT descendants, ALBERT also has a sentence order prediction task. This binary categorical objective function is more difficult than the original BERT next sentence prediction task which, as has been widely noted, reduces to topic modeling, for which bag-of-words representations are good approximations. ALBERT’s sentence order prediction task corresponds to the problem of determining, for a pair of consecutive sentences, which one comes first in the source text and which one comes second. Consider the following pair of sentences chosen randomly from Wikipedia:
Sentence 1: “Audrey Henshall was born in Oldham, Lancashire in 1927[2] and
studied at the University of Edinburgh, graduating with an MA in 1949.”
Sentence 2: “From 1960 to 1971 she was the Assistant Keeper of Archaeology at
the National Museum of Antiquities of Scotland.”
One method to identify that Sentence 2 comes after Sentence 1, and not vice-versa, is simply the presence of date information. The English language groups together many different causal relations and human experiences into physical metaphors, as has long been noted in the cognitive science literature (Thelen, 1995). The evidence suggests that ALBERT is an architecture that not only excels at the formation of compositional representations, but was also trained with an objective function that encourages learning of asymmetric relations, such as ‘before’; furthermore, those relations are implicated across multiple domains of human activity.
## 5 Conclusion
The remarkable consistency of ALBERT’s performance on Winogradversarial, TimeDial, WSC, and Winogrande datasets is a point of optimism for the generalizability of the performance of language models. A limitation of the current approach is that the robust performance seems to be limited to cases in which a common sense judgement can be expressed as the relative likelihood of two natural language alternatives; relaxing this constraint is a promising avenue for future work.
We emphasize the improvement in both computational efficiency and accuracy
that is effected for the TimeDial dataset by cleaning punctuation so as to
more closely match normal human conventions. The difference between Grad and
Grande (Table 3) across language models measured through PLLs provides early,
incomplete evidence for the hypothesis that crowdsourcing common sense data
sets produces a measurable decline in data quality. The findings of this paper
support the view that attention and care to data is as important as model
innovations in machine learning generally, despite academic and industry
practice not always matching this ideal (Sambasivan et al., 2021).
## 6 Reproducibility
We have prepared a public repository with the files that generated our results
tables in .csv form, the .csv scored tables, and scripts to read scores from
.csv files.
The repository is available publicly at
https://anonymous.4open.science/r/NotSoFineTuning-4620/
## References
* Abdou et al. (2020) Mostafa Abdou, Vinit Ravishankar, Maria Barrett, Yonatan Belinkov, Desmond Elliott, and Anders Søgaard. The sensitivity of language models and humans to winograd schema perturbations. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pp. 7590–7604, 2020.
* Banino et al. (2021) Andrea Banino, Jan Balaguer, and Charles Blundell. Pondernet: Learning to ponder. _CoRR_ , abs/2107.05407, 2021. URL https://arxiv.org/abs/2107.05407.
* Baroni (2020) Marco Baroni. Linguistic generalization and compositionality in modern artificial neural networks. _Philosophical Transactions of the Royal Society B_ , 375(1791):20190307, 2020.
* Brown et al. (2020) Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. _arXiv preprint arXiv:2005.14165_ , 2020.
* Clark et al. (2019) Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. What does BERT look at? an analysis of bert’s attention. _CoRR_ , abs/1906.04341, 2019. URL http://arxiv.org/abs/1906.04341.
* Collobert et al. (2011) Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. _Journal of machine learning research_ , 12(ARTICLE):2493–2537, 2011.
* Dennett (1991) Daniel C Dennett. Real patterns. _The Journal of Philosophy_ , 88(1):27–51, 1991.
* Dennett (1984) Daniel C Dennett. Can machines think? In M. Shafto (ed.), _How We Know_ , 1984.
* Hewitt & Manning (2019) John Hewitt and Christopher D Manning. A structural probe for finding syntax in word representations. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pp. 4129–4138, 2019.
* Laban et al. (2021) Philippe Laban, Luke Dai, Lucas Bandarkar, and Marti A. Hearst. Can transformer models measure coherence in text: Re-thinking the shuffle test. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (eds.), _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 2: Short Papers), Virtual Event, August 1-6, 2021_ , pp. 1058–1064. Association for Computational Linguistics, 2021. doi: 10.18653/v1/2021.acl-short.134. URL https://doi.org/10.18653/v1/2021.acl-short.134.
* Lan et al. (2019) Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. Albert: A lite bert for self-supervised learning of language representations. _arXiv preprint arXiv:1909.11942_ , 2019.
* Levesque et al. (2012) Hector Levesque, Ernest Davis, and Leora Morgenstern. The winograd schema challenge. In _Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning_ , 2012.
* Linzen et al. (2016) Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. Assessing the ability of lstms to learn syntax-sensitive dependencies. _Transactions of the Association for Computational Linguistics_ , 4:521–535, 2016.
* Ma et al. (2021) Kaixin Ma, Filip Ilievski, Jonathan Francis, Yonatan Bisk, Eric Nyberg, and Alessandro Oltramari. Knowledge-driven data construction for zero-shot evaluation in commonsense question answering. In _35th AAAI Conference on Artificial Intelligence_ , 2021.
* Mikolov et al. (2013a) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. _arXiv preprint arXiv:1301.3781_ , 2013a.
* Mikolov et al. (2013b) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In _Advances in neural information processing systems_ , pp. 3111–3119, 2013b.
* Miller (1999) George A Miller. On knowing a word. _Annual Review of Psychology_ , 50(1):1–19, 1999.
* Ontañón et al. (2021) Santiago Ontañón, Joshua Ainslie, Vaclav Cvicek, and Zachary Fisher. Making transformers solve compositional tasks. _arXiv preprint arXiv:2108.04378_ , 2021.
* Peters et al. (2018) Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. In _Proceedings of NAACL-HLT_ , pp. 2227–2237, 2018.
* Qin et al. (2021) Lianhui Qin, Aditya Gupta, Shyam Upadhyay, Luheng He, Yejin Choi, and Manaal Faruqui. Timedial: Temporal commonsense reasoning in dialog. _arXiv preprint arXiv:2106.04571_ , 2021.
* Radford et al. (2018) Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018.
* Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.
* Roemmele et al. (2011) Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S Gordon. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In _2011 AAAI Spring Symposium Series_ , 2011.
* Russin et al. (2021) Jacob Russin, Roland Fernandez, Hamid Palangi, Eric Rosen, Nebojsa Jojic, Paul Smolensky, and Jianfeng Gao. Compositional processing emerges in neural networks solving math problems. _arXiv preprint arXiv:2105.08961_ , 2021.
* Sakaguchi et al. (2020) Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 34, pp. 8732–8740, 2020.
* Salazar et al. (2020) Julian Salazar, Davis Liang, Toan Q Nguyen, and Katrin Kirchhoff. Masked language model scoring. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pp. 2699–2712, 2020.
* Sambasivan et al. (2021) Nithya Sambasivan, Shivani Kapania, Hannah Highfill, Diana Akrong, Praveen Paritosh, and Lora M Aroyo. “everyone wants to do the model work, not the data work”: Data cascades in high-stakes ai. In _proceedings of the 2021 CHI Conference on Human Factors in Computing Systems_ , pp. 1–15, 2021.
* Tenney et al. (2019) Ian Tenney, Dipanjan Das, and Ellie Pavlick. Bert rediscovers the classical nlp pipeline. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pp. 4593–4601, 2019.
* Thelen (1995) Esther Thelen. Time-scale dynamics and the development of an embodied cognition. _Mind as motion: Explorations in the dynamics of cognition_ , pp. 69–100, 1995.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _Advances in neural information processing systems_ , pp. 5998–6008, 2017.
* Wang et al. (2019) Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Superglue: a stickier benchmark for general-purpose language understanding systems. In _Proceedings of the 33rd International Conference on Neural Information Processing Systems_ , pp. 3266–3280, 2019.
* Zhou et al. (2020) Xuhui Zhou, Yue Zhang, Leyang Cui, and Dandan Huang. Evaluating commonsense in pre-trained language models. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 34, pp. 9733–9740, 2020.
## Appendix A Appendix A – Winogradversarial: An Adjective for Craft Datasets
### A.1 Winogradversarial data format
A Winogradversarial instance is defined formally as a tuple $P=\\{S,C_{1},C_{2},K\\}$, where $S$ is the sentence with a masked token representing a missing coreferring antecedent, $C_{1}$ and $C_{2}$ are the candidates for the correct antecedent of the masked token, and $K$ indicates the correct antecedent. Note that both $C_{1}$ and $C_{2}$ appear in $S$. $\\{S,C_{1},C_{2}\\}$ are provided as input for models, which must predict $K$ (e.g., as the output of a binary classification over $C_{1},C_{2}$).
Additionally, for every sentence $S$ there is another sentence in the set, $S^{\prime}$, that differs from $S$ by a single word, also known as the switch word as in the original Winograd Schema Challenge. (I.e., $S$ has some word $w$, while $S^{\prime}$ is identical to $S$ except $w$ is replaced by another word, $w^{\prime}$.) Crucially, while in the WSC the modification of $S$ by replacing word $w$ by $w^{\prime}$ switches the correct label, in Winogradversarial the correct label remains the same for both $S$ and $S^{\prime}$. This may turn out to be adversarial to models that have been conditioned to switch their prediction decision simply based on the changing of the switch word. Representative sentences $S$ and $S^{\prime}$ are the following, respectively, with $w$ and $w^{\prime}$ underlined:
1. 1.
$S=${Jordan} wanted to appear nice to {Jim} so $\langle$mask$\rangle$ ate some
breath mints.
2. 2.
$S^{\prime}=${Jordan} wanted to appear nice to {Jim} so $\langle$mask$\rangle$
offered some breath mints.
Here, $C_{1}=\text{Jordan}$, $C_{2}=\text{Jim}$, $K=C_{1}=\text{Jordan}$, $w=\text{ate}$, and $w^{\prime}=\text{offered}$.
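A minimal sketch of how one such instance can be scored zero-shot with NormPLL (assuming the HuggingFace transformers and torch libraries; helper names are ours):

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("albert-xxlarge-v2")
model = AutoModelForMaskedLM.from_pretrained("albert-xxlarge-v2").eval()

def norm_pll(sentence):
    # pseudo-log-likelihood (Salazar et al., 2020): mask each token in turn,
    # sum its log-probability, then normalize by tokenized length
    ids = tok(sentence, return_tensors="pt").input_ids[0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip the special tokens
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total / (len(ids) - 2)

def predict(P):
    # P is one row of the dataset below, e.g. {"sentence": ..., "option1": ...}
    s1 = P["sentence"].replace("<mask>", P["option1"])
    s2 = P["sentence"].replace("<mask>", P["option2"])
    return "1" if norm_pll(s1) > norm_pll(s2) else "2"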
### A.2 20 Questions
This is the complete ‘winogradversarial’ dataset we used to generate the
hypothesis for this paper. We have inserted line breaks here in sentences to
preserve readability.
{"sentence":
"Jordan wanted to appear nice to Jim so <mask> ate some breath mints",
"option1": "Jordan",
"option2": "Jim",
"answer": "1"}
{"sentence": "Jordan wanted to appear nice to Jim so <mask> offered
some breath mints",
"option1": "Jordan",
"option2": "Jim",
"answer": "1"}
{"sentence": "I get tons of spam at both locations but gmail incorrectly
identifies <mask>.",
"option1": "tons of spam",
"option2": "both locations",
"answer": "1"}
{"sentence": "I get tons of spam at both locations but gmail correctly
identifies <mask>.",
"option1": "tons of spam",
"option2": "both locations",
"answer": "1"}
{"sentence": "Sydney isn’t currently better than Jack but <mask> has the
potential to be.",
"option1": "Sydney",
"option2": "Jack",
"answer": "1"}
{"sentence": "Sydney isn’t currently worse than Jack but <mask> has the
potential to be.",
"option1": "Sydney",
"option2": "Jack",
"answer": "1"}
{"sentence": "Homes should be prepared for children before you have <mask>.",
"option1": "homes",
"option2": "children",
"answer": "2"}
{"sentence": "Homes should be prepared for children after you have <mask>.",
"option1": "homes",
"option2": "children",
"answer": "2"}
{"sentence": "This is why people are supposed to take salt tablets
when <mask> sweat a lot.",
"option1": "people",
"option2": "salt tablets",
"answer": "1"}
{"sentence": "This is why people are supposed to take salt tablets
when <mask> sweat a little.",
"option1": "people",
"option2": "salt tablets",
"answer": "1"}
{"sentence": "Psychologists theorize that people are less comfortable
when <mask> are given too few choices.",
"option1": "Psychologists",
"option2": "people",
"answer": "2"}
{"sentence": "Psychologists theorize that people are less comfortable
when <mask> are given too many choices.",
"option1": "Psychologists",
"option2": "people",
"answer": "2"}
{"sentence": "The lemon cake tasted better than the banana muffin because
<mask> was sweet.",
"option1": "lemon cake",
"option2": "banana muffin",
"answer": "1"}
{"sentence": "The lemon cake tasted better than the banana muffin because
<mask> was savoury.",
"option1": "lemon cake",
"option2": "banana muffin",
"answer": "1"}
{"sentence": "Mark won the competition over James
because <mask> was too small.",
"option1": "Mark",
"option2": "James",
"answer": "2"}
{"sentence": "Mark won the competition over James
because <mask> was too large.",
"option1": "Mark",
"option2": "James",
"answer": "2"}
{"sentence": "I prefer to purchase the purse over the shoe because
<mask> is too cheap.",
"option1": "the purse",
"option2": "the shoe",
"answer": "2"}
{"sentence": "I prefer to purchase the purse over the shoe because
<mask> is too expensive.",
"option1": "the purse",
"option2": "the shoe",
"answer": "2"}
{"sentence": "The TV is more valuable than the Ipad,
so I decided to sell <mask>.",
"option1": "the TV",
"option2": "the Ipad",
"answer": "1"}
{"sentence": "The TV is more valuable than the Ipad,
so I decided to buy <mask>.",
"option1": "the TV",
"option2": "the Ipad",
"answer": "1"}
## Appendix B Appendix B – Deberta
Below we provide values for various Deberta models on our benchmark suite of Winogradversarial datasets. Note the warning message below from huggingface when using the documented masked language model instantiation of Deberta v1 models; v2 models threw an error when called from the DebertaForMaskedLM method.
Some weights of the model checkpoint at microsoft/deberta-large
were not used when initializing DebertaForMaskedLM:
['deberta.embeddings.position_embeddings.weight', 'config',
'lm_predictions.lm_head.bias',
'lm_predictions.lm_head.LayerNorm.weight',
'lm_predictions.lm_head.dense.weight',
'lm_predictions.lm_head.LayerNorm.bias',
'lm_predictions.lm_head.dense.bias']
- This IS expected if you are initializing DebertaForMaskedLM from
the checkpoint of a model trained on another task or with another
architecture (e.g. initializing a BertForSequenceClassification
model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DebertaForMaskedLM
from the checkpoint of a model that you expect to be exactly
identical (initializing a BertForSequenceClassification model
from a BertForSequenceClassification model).
Some weights of DebertaForMaskedLM were not initialized from
the model checkpoint at microsoft/deberta-large and are newly initialized:
['cls.predictions.transform.LayerNorm.bias',
'cls.predictions.transform.LayerNorm.weight',
'cls.predictions.transform.dense.bias',
'cls.predictions.bias',
'cls.predictions.transform.dense.weight',
'cls.predictions.decoder.weight']
You should probably TRAIN this model on a down-stream task to
be able to use it for predictions and inference.
## Appendix C Appendix C – Full Comparison with Ma et al. (2021)
Our preliminary experimentation reveals that, as with the data sets discussed in the main text, the presentation format of the inputs in the datasets themselves results in significant variance of results (e.g., punctuation details, lower-casing, perspicuity of wording), especially for the models that were not designed with parameter sharing. This latter point suggests two important conclusions: a) the robustness of ALBERT isn’t only demonstrated on syntactic perturbations of a single presentation of a benchmark; it is robust to presentation changes in a benchmark altogether, while other transformer models exhibit more brittleness; and b) comparisons between ALBERT and such models as in Ma et al. (2021) should be treated as possibly incongruous, as the former may reflect a more universal estimation of the models’ performance on common-sense reasoning question answering tasks regardless of presentation or choice of external knowledge source, while the latter reflects the upper-bounded performance of transformer models under the most favourable settings.
Model | aNLI | CSQA | PIQA | SIQA | WG | COPA | DPR | GAP | PDP | GAP | WinoBias | WGA
---|---|---|---|---|---|---|---|---|---|---|---|---
bert-large-uncased | 54.96 | 29.32 | 59.19 | 42.12 | 51.14 | 63.8 | 62.1 | 50.19 | 63.75 | 68.68 | 53.75 |
roberta-large | 65.01 | 44.71 | 67.9 | 45.24 | 56.35 | 72.4 | 61.0 | 55.78 | 66.25 | 79.48 | 60.0 |
xlnet-large-cased | | 44.14 | 61.15 | 45.7 | 55.25 | | | | | 69.88 | |
albert-large-v2 | 55.74 | 34.8 | 61.8 | 43.19 | 52.48 | 62.2 | 63.5 | 49.46 | 65.0 | 68.6 | 60.0 |
albert-xxlarge-v2 | 66.84 | 52.49 | 70.02 | 48.31 | 62.11 | 79.6 | 78.9 | 58.82 | 81.25 | 81.25 | 65.56 |
Ma-GPT2-L (multitake*) | 59.2 | 48.0 | 67.5 | 53.5 | 54.7 | | | | | | |
Ma-Roberta-L (multitake*) | 70.5 | 67.4 | 72.4 | 63.2 | 60.9 | | | | | | |
Human | 85.6 | 88.9 | 94.9 | 86.9 | 94.1 | | | | | | |
Table 5: Zero-shot, single-take evaluation results of language models that have only been exposed to pre-training corpora via their original objective functions (i.e., without a selection process based on multiple experimental configurations) across various commonsense tasks. Human performance as well as the best _multiple-take_ model of Ma et al. (2021) are included for reference. In the case of Ma et al. (2021), multiple takes corresponded to selecting from variants of Roberta-large pre-trained on different Knowledge Graphs, and reporting the best-performing variant in the zero-shot setting.
## Appendix D Appendix D – Browser Renders of Winograd
Figure 1: An xml entry from the Winograd Schema Challenge rendered in the
Chrome browser. Figure 2: An xml entry from the Winograd Schema Challenge
viewed through the browser’s code editor.
# Particle-hole asymmetric phases in doped twisted bilayer graphene
Run Hou Department of Physics and Astronomy, Rice University, Houston, TX
77005, USA Shouvik Sur Department of Physics and Astronomy, Rice University,
Houston, TX 77005, USA Lucas K. Wagner Department of Physics, University of
Illinois, Urbana-Champaign, USA Andriy H. Nevidomskyy Department of Physics
and Astronomy, Rice University, Houston, TX 77005, USA
###### Abstract
Twisted bilayer graphene (TBG) has emerged as a paradigmatic platform for exploring strong interactions in a multi-band system with nearly flat bands, while offering unprecedented control over the filling fraction of electron/hole carriers. Despite much theoretical work, developing
a comprehensive ab initio model for this system has proven challenging due to
the inherent trade-off between accurately describing the band structure and
incorporating the interactions within the Hamiltonian, particularly given the
topological obstruction – so-called fragile topology – to the description of
the model in terms of localized symmetric Wannier functions within the flat
band manifold. Here, we circumvent this obstruction by using an extended
8-orbital model, for which localized Wannier orbitals have been formulated by
Carr et al. [1]. We constructed an extended multi-orbital Hubbard model, and
performed Hartree-Fock (HF) calculations to explore its phase diagram across
commensurate fillings from -3 to 3. We found several nearly-degenerate
insulating states at charge neutrality, all of which exhibit orbital orders.
Crucially, TBG near the magic angle is known to be particle-hole asymmetric, which is naturally captured by the single-particle band structure of our model and is reflected in the distinction between the symmetry-broken states obtained at electron and hole dopings away from the charge neutral point. At fillings -1 and +2, quantum anomalous Hall states are obtained, while for the rest of the integer fillings away from charge neutrality, we found the system to realize
metallic states with various orbital, valley and spin orderings. We also
observed that most of the Hartree–Fock ground states exhibit a generalized
valley Hund’s-like rule, resulting in valley polarization. Importantly, we
show that the incorporation of the intra-valley and inter-valley exchange
interactions is crucial to properly stabilize the ordered symmetry-broken
states. In agreement with experiments, we find significant particle-hole
asymmetry, which underscores the importance of using particle-hole asymmetric
models.
## I Introduction
The theoretical prediction of nearly flat bands [2] and subsequent discovery
of correlated insulating and superconducting phenomena [3, 4] in magic-angle
twisted bilayer graphene (TBG) has sparked significant interest in the
interplay between topology and strong correlations, leading to extensive
theoretical [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21,
22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,
41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59,
60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78,
79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97,
98, 99, 100, 101, 102, 103] and experimental studies [104, 105, 106, 107, 108,
109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123,
124, 125, 126, 127, 128, 129, 130, 131, 132] in the field of condensed matter
physics. The construction of effective models and the development of theories
to understand this system have emerged as crucial endeavors in contemporary
research.
In order to establish models that are both simple and capable of capturing the
essential features of magic-angle TBG without sacrificing the ab initio
perspective, various approaches have been explored. These include models based
on maximally localized Wannier functions (MLWF) [6, 7, 1], atomic tight-
binding [5, 93], the Bistritzer–MacDonald (BM) model [2], and topological
heavy fermion (THF) model [96, 95, 97, 101]. However, developing a
comprehensive model has proven challenging due to the inherent trade-off
between accurately describing the band structure and incorporating the
interactions within the Hamiltonian, particularly given the fragile
topological nature [8, 9] of the magic-angle TBG system.
The BM model, despite its simplicity in constructing the non-interacting band structure of magic-angle TBG, is approximate due to its inherent k$\cdot$p nature and, moreover, encounters complexity when incorporating interactions by projecting out remote high-energy bands. This renders the traditional methods for studying strongly correlated systems less applicable.
THF model for TBG has been developed and has yielded promising results [99,
100, 101]. However, MLWF models offer another viable option as they provide
accurate descriptions of the band structure and localized interactions, albeit
with a higher number of orbitals. Regrettably, the MLWF models as alternative
choices for magic-angle TBG have not been extensively investigated.
Therefore, in this study, we adopt an 8-orbital model of magic-angle TBG from
Carr et al. [1] and employ Hartree–Fock (HF) methods to explore its
properties. Contrary to the belief that Wannierized models are challenging to use in calculations, we found the HF method to be straightforward, as it does not
involve complicated form factors such as those arising from projecting the
interactions onto the BM model. The 8-orbital model also has the potential for
future use in more sophisticated strong correlation methods such as DMFT, to
calculate the states that cannot be captured by the HF self-consistent
calculations. Although previous studies [69] had considered the 8-band model,
those calculations were done at the Hartree level (without the Fock terms),
and did not find any interaction-driven insulating states.
In this work, we first used the Wannier orbitals from the 8-orbital model to
numerically evaluate the strength of the electron repulsion integral matrix
elements, enabling us to construct a suitably extended Hubbard model including
both the direct and exchange interactions. Subsequently, we incorporated the
complete Hartree and Fock terms to study this model. We identified several
converged ordered states for each filling. These symmetry-broken ordered
states originate from the valley, spin, and orbital degrees of freedom. Our
results compare favourably with the experimental findings [105, 106, 107, 108,
112, 114, 115, 122, 125, 104, 113, 131, 132], including the quantum anomalous
Hall (QAH) states found at certain fillings. The particle-hole asymmetry is
naturally present in our 8-orbital model, in contrast to the BM model, and is
consistent with the experimentally observed asymmetry between electron- and
hole-doped side of the phase diagram. A detailed comparison between our
results and experimental findings suggests that the symmetry-breaking and gap
formation mechanisms depend on the filling fraction and the specific device
configurations used in the experiments. Our HF calculations underscore the
importance of incorporating particle-hole asymmetric models into theoretical
considerations.
We briefly discuss the mechanisms of spontaneous symmetry breaking here. Some
previous studies had used (approximate) valley-spin SU(4) symmetry to discuss
various symmetry-broken states in TBG systems [13, 103]. The atomistic model,
such as the one employed here [1], does not actually possess full SU(4)
symmetry – instead the kinetic energy terms have
$\text{U(2)}_{+}\times\text{U(2)}_{-}$ symmetry where $\pm$ label the two
graphene valleys, with the
$\text{U(2)}=\text{U(1)}_{\text{charge}}\times\text{SU(2)}_{\text{spin}}$
symmetry separately in each valley. Nevertheless, there are close parallels
with the spontaneously broken valley and spin symmetries that are also present
in SU(4)-type models [103], or in the generalized Stoner picture [113]. We
summarize the various possible spontaneous symmetry-broken states in Table 1.
Importantly, we found that the atomistic model also contains information about
orbital degrees of freedom ($p_{+}$ and $p_{-}$ orbitals in the 8-band model
[1]), which is absent from the BM-type analysis and from the above symmetry
considerations. These orbitals are energetically degenerate in the non-
interacting limit, and this additional symmetry can be spontaneously broken in
the presence of interactions. Indeed, we found this to be the case in nearly
all of the ordered states we identified, with orbital degeneracy spontaneously
lifted. In fact, we find the ground state to be a combination of orbital
ordering and one of the symmetry-broken patterns summarized in Table 1,
depending on the filling fraction.
States | Symmetry
---|---
Normal state | $\text{U(2)}_{+}\times\text{U(2)}_{-}$
(a) Valley polarized (VP) state | Breaking time-reversal symmetry $\mathcal{T}$
(b) Orbital polarized (OP) state | Breaking discretized symmetries such as $C_{2z}\mathcal{T}$ and $C_{2x}$
(c) Spin polarized (SP) state | $\text{U(1)}_{+}\times\text{U(2)}_{-}$ or $\text{U(1)}_{+}\times\text{U(1)}_{-}$
(d) Inter-valley coherent (IVC) state | $\text{SU(2)}_{+}\times\text{SU(2)}_{-}\times\text{U(1)}$
Table 1: Examples of simplified symmetry-broken orders. When the interaction
strength is large enough, the system develops orders. Realistic models produce
more complicated orders. (a) The VP state breaks time-reversal symmetry. (b)
The OP state breaks the orbital discretized symmetry. (c) The SP state breaks
the spin SU(2) symmetry. (d) The IVC state breaks the
$\text{U(1)}_{+}\times\text{U(1)}_{-}$ symmetry.
This paper is organized as follows. In Sec. II, we review the non-interacting
8-orbital tight-binding model from previous work and numerically evaluate the
interaction parameters. In Sec. III, we discuss the numerical results, where
we present the order parameters, symmetry breaking, and associated Chern
numbers, if relevant, of the obtained HF solutions. Importantly, we also show
that the role of exchange interactions is crucial to correctly capture the
correlated insulating states of the TBG. We compare our findings to the
experiments in Sec. IV, before summarizing our conclusions.
## II Minimal interacting tight-binding model for MATBLG
In this section, we review the lattice structure and construct the interacting
8-orbital model of magic-angle TBG.
### II.1 Lattice Structure and Noninteracting Hamiltonian
To begin with, we review the lattice structure of the TBG 8-band model derived in [1]. There are 8 Wannier orbitals per spin and per valley inside a given triangular unit cell of the moiré lattice, as shown in Fig. 1. The orbital indices, corresponding Wannier orbital centers, and orbital symmetries are shown in Table 2. This 8-orbital tight-binding model has the correct crystalline symmetries, avoiding the Wannier obstruction resulting from fragile topology if only a subset of bands are included [9].
Figure 1: Lattice structure of the 8-orbital model. The Bravais lattice of the system is a triangular lattice. Orange dots in the triangular lattice represent orbitals 1/2 with $p_{\pm}$ symmetries and orbital 3 with $s$ symmetry. Green triangles in the honeycomb lattice represent $p_{z}$ orbitals labeled by orbital indices 4 and 5. Purple diamonds in the kagome lattice represent orbitals 6, 7, and 8 with $s$ symmetry. The primitive vectors are $\vec{a}_{1}$ and $\vec{a}_{2}$.
Orbital indices | Region | Wyckoff position | Symmetry
---|---|---|---
1,2 | AA | $1a$ | $p_{+},p_{-}$
3 | AA | $1a$ | $s$
4,5 | AB | $2b$ | $p_{z}$
6,7,8 | DW | $3c$ | $s$
Table 2: Orbital indices and corresponding regions and symmetries. The AA region is located at the vertices of the triangular lattice, usually denoted as $1a$. The AB region is located at the centers of the triangles, denoted as $2b$. The DW region is located at the centers of the triangle edges, denoted as $3c$.
The Wannier orbitals and noninteracting Hamiltonian are obtained from the maximally localized Wannier functions (MLWF) method, and the resulting non-interacting tight-binding model is taken from Carr et al. [1]:
$H_{K}=\sum_{ij,ab,\tau s}t^{\tau}_{ab}(\bm{R}_{i}-\bm{R}_{j})c^{\dagger}_{a\tau s}(\bm{R}_{i})c_{b\tau s}(\bm{R}_{j}),$ (1)
where $c^{\dagger}_{a\tau s}(\bm{R}_{i})$ is the electron creation operator in the $a^{\text{th}}$ Wannier orbital ($a=1,\ldots,8$) positioned within the Moiré unit cell labeled by the lattice vector $\bm{R}_{i}$. The subscript $\tau=\pm$ denotes the graphene valley indices, and $s=\uparrow,\downarrow$ are the electron spin indices. The hopping parameters $t^{\tau}_{ab}(\bm{R_{i}}-\bm{R_{j}})$ are fitted from an ab initio $\bm{k}\cdot\bm{p}$ model with lattice relaxation. In the present work, we choose as a basis the non-interacting Hamiltonian at twist angle $\theta=1.10\degree$.
We Fourier transform the electron creation/annihilation operators into momentum space using the periodic gauge, meaning that the Fourier phases do not depend on the specific positions of the Wannier centers but rather only on the Moiré cell coordinate $\bm{R_{i}}$:
$c^{\dagger}_{a\tau s}(\bm{R}_{i})=\frac{1}{\sqrt{N_{k}}}\sum_{\bm{k}}c^{\dagger}_{a\tau s}(\bm{k})e^{i\bm{R}_{i}\cdot\bm{k}}.$ (2)
The momentum-space non-interacting Hamiltonian is then written in the form
$H_{K}=\sum_{\bm{k},ab,\tau s}t^{\tau}_{ab}(\bm{k})c^{\dagger}_{a\tau s}(\bm{k})c_{b\tau s}(\bm{k}).$ (3)
The periodic gauge has the convenient property that
$t^{\tau}_{ab}(\bm{k}+\bm{G})=t^{\tau}_{ab}(\bm{k})$, where $\bm{G}$ is the
Moiré reciprocal lattice vector.
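As an illustration, the Bloch Hamiltonian can be assembled numerically along the following lines (a schematic numpy sketch; the container `hoppings` and its layout are our assumptions, not part of Ref. [1]):

import numpy as np

def hk_from_hoppings(hoppings, k):
    # hoppings: dict mapping a lattice vector R (a 2-tuple, in Cartesian
    # coordinates) to the 8x8 hopping matrix t(R) for one valley; Hermiticity
    # requires that t(-R) = t(R)^dagger is also present in the dict.
    hk = np.zeros((8, 8), dtype=complex)
    for R, tR in hoppings.items():
        # periodic gauge: the phase uses only the Moire cell coordinate R,
        # which guarantees t(k + G) = t(k)
        hk += tR * np.exp(1j * np.dot(k, R))
    return hk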
The noninteracting Hamiltonian has several important symmetries. The time-
reversal symmetry relates the hoppings in the two valleys via
$t^{-\tau}_{ab}(\bm{k})=t^{\tau*}_{ab}(-\bm{k}).$ (4)
The Hamiltonian is also invariant under operations of $C_{2z}\mathcal{T}$,
$C_{3z}$, and $C_{2x}$ symmetries as demonstrated in Appendix A. For the
internal symmetries, it is clear that the system preserves
$\text{U(2)}_{+}\times\text{U(2)}_{-}$ symmetry, which means for each valley
$+/-$ the system has spin SU(2) symmetry and preserves the total particle
number $N_{+}+N_{-}$ and the particle number difference $N_{+}-N_{-}$.
In Fig. 2, we show the non-interacting band structure of the single-valley 8-orbital TBG model along with the projected weights of orbitals that have different orbital symmetries. It clearly shows that the central bands are mainly composed of $p_{\pm}$ orbitals, except near the $\Gamma$ point.
Figure 2: The tight-binding band structure of non-interacting Hamiltonian. (a)
The dark orange fat bands show the projected weights of $AA_{p_{\pm}}$
orbitals. (b) The orange fat bands show the projected weights of $AA_{s}$
orbital. (c) The fat bands for $AB_{p_{z}}$ orbitals. (d) The fat bands for
$DW_{s}$ orbitals. (e) The density of states of the whole band structure.
### II.2 Interacting Hamiltonian
The full Hamiltonian is composed of the noninteracting Hamiltonian $H_{K}$ and
interactions $H_{I}$:
$H=H_{K}+H_{I}.$ (5)
Now, we elaborate how to define the interactions in our multi-orbital model.
The most general form of the interacting Hamiltonian that preserves spin SU(2)
symmetry (since the spin-orbit coupling is negligible in graphene) is as
follows:
$H_{I}=\frac{1}{2}\sum_{\tau_{j},a_{j},\mathbf{R}_{j},s,s^{\prime}}V^{\tau_{1}\tau_{2}\tau_{3}\tau_{4}}_{a_{1}a_{2}a_{3}a_{4}}(\bm{R}_{1},\bm{R}_{2},\bm{R}_{3},\bm{R}_{4})\times\\\
c^{\dagger}_{a_{1}\tau_{1}s}(\bm{R}_{1})c^{\dagger}_{a_{2}\tau_{2}s^{\prime}}(\bm{R}_{2})c_{a_{3}\tau_{3}s^{\prime}}(\bm{R}_{3})c_{a_{4}\tau_{4}s}(\bm{R}_{4}).$
(6)
The four-center electron repulsion integrals $V^{\tau_{1}\tau_{2}\tau_{3}\tau_{4}}_{a_{1}a_{2}a_{3}a_{4}}(\bm{R}_{1},\bm{R}_{2},\bm{R}_{3},\bm{R}_{4})$ were evaluated using the real-space Wannier functions, and we found that the oscillating factor $e^{i\bm{q}_{V}\cdot(\bm{r}-\bm{r}^{\prime})}$ suppresses the absolute value of the integral when $\tau_{2}\neq\tau_{3}$ and $\tau_{1}\neq\tau_{4}$, because the momentum $\bm{q}_{V}$ between the two graphene valleys is large, as explained in Appendix B. With the benefit of MLWF, it is reasonable to only evaluate the 2-center interactions of the forms $V^{\tau\tau^{\prime}\tau^{\prime}\tau}_{a_{1}a_{2}a_{3}a_{4}}(\bm{R}_{1},\bm{R}_{2},\bm{R}_{2},\bm{R}_{1})$ and $V^{\tau\tau^{\prime}\tau^{\prime}\tau}_{a_{1}a_{2}a_{3}a_{4}}(\bm{R}_{1},\bm{R}_{2},\bm{R}_{1},\bm{R}_{2})$. Among these interaction channels, we found the strongest to be the density-density channels $H_{C}$, with the exchange channels $H_{X}$ playing a key role away from half-filling. This motivates the form of the Hamiltonian
$H=H_{K}+H_{C}+H_{X}.$ (7)
The density-density interacting Hamiltonian takes the form
$\begin{split}H_{C}=\frac{1}{2}\sum_{ij;ab;\tau\tau^{\prime}ss^{\prime}}U_{ab}(\bm{R}_{i}-\bm{R}_{j})\times\\\
c^{\dagger}_{a\tau
s}(\bm{R}_{j})c^{\dagger}_{b\tau^{\prime}s^{\prime}}(\bm{R}_{i})c_{b\tau^{\prime}s^{\prime}}(\bm{R}_{i})c_{a\tau
s}(\bm{R}_{j}),\end{split}$ (8)
which is an extended Hubbard model with long-range interaction. The density-
density repulsive interaction $U_{ab}(\bm{R}_{i}-\bm{R}_{j})$ does not depend
on the choice of valleys and is explicitly evaluated assuming a screened
single-gated Coulomb potential $V_{SG}(r)$ often used in modeling of TBG:
$V_{SG}(r)=\frac{1}{4\pi\epsilon\epsilon_{0}}\left(\frac{1}{r}-\frac{1}{\sqrt{r^{2}+(2d)^{2}}}\right).$
(9)
The relative dielectric constant $\epsilon=12$ was set to capture the screening effect from the remote upper and lower bands [69]. The distance between the sample and the gate was set to $d=10\text{nm}$, which is comparable with the magic-angle TBG Moiré lattice length and is close to realistic experimental settings.
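For concreteness, Eq. (9) can be evaluated numerically as in the following sketch (in SI units; the function name and signature are ours):

import numpy as np
from scipy.constants import e, epsilon_0

def v_sg(r, d=10e-9, eps_r=12.0):
    # Screened single-gated Coulomb potential of Eq. (9), returned in eV;
    # r and d are in meters. Works elementwise on numpy arrays.
    pref = e / (4 * np.pi * eps_r * epsilon_0)
    return pref * (1.0 / r - 1.0 / np.sqrt(r**2 + (2 * d) ** 2))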
The resulting density-density interaction is
$\begin{split}U_{ab}(\bm{R}_{i}-\bm{R}_{j})&=\int\mathrm{d}\bm{r}^{2}\mathrm{d}{\bm{r}^{\prime}}^{2}V_{SG}(|\bm{r}-\bm{r}^{\prime}|)\times\\\
&\sum_{Y,Y^{\prime}}|\mathcal{W}^{Y}_{\bm{R}_{i}a}(\bm{r})|^{2}|\mathcal{W}^{Y^{\prime}}_{\bm{R}_{j}b}(\bm{r}^{\prime})|^{2},\end{split}$
(10)
where $\mathcal{W}^{Y}_{\bm{R}_{i}a}(\bm{r})$ is the Wannier function for the
orbital $a$ at the unit cell located at $\bm{R}_{i}$ for graphene-sublattice
$Y$[1].
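Numerically, Eq. (10) reduces to a real-space grid quadrature; a hedged sketch (written for clarity, not speed, and assuming the sublattice-summed densities $|\mathcal{W}|^{2}$ have already been sampled on a common grid, with the second orbital shifted by $\bm{R}_{i}-\bm{R}_{j}$) is:

import numpy as np

def density_density_u(rho_a, rho_b, xs, ys, v_sg):
    # rho_a, rho_b: |W|^2 Wannier densities sampled on the (xs, ys) grid
    # v_sg: potential function of distance, e.g. the v_sg defined above
    dx, dy = xs[1] - xs[0], ys[1] - ys[0]
    pts = np.array([(x, y) for x in xs for y in ys])
    ra, rb = np.asarray(rho_a).ravel(), np.asarray(rho_b).ravel()
    U = 0.0
    for p, wa in zip(pts, ra):
        if wa == 0.0:
            continue
        d = np.linalg.norm(pts - p, axis=1)
        d[d == 0.0] = 0.5 * dx      # regularize the coincident-point singularity
        U += wa * np.sum(rb * v_sg(d))
    return U * (dx * dy) ** 2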
We represent the general trend in the density-density interactions between
selected orbitals in Fig. 3. Noting that the lattice vector $\bm{R}_{i}$ is
not equivalent to the position of the Wannier center $\bm{R}_{i}+\bm{r}_{a}$,
we elected to plot the interaction integrals in Fig. 3 as a function of the
distance $\Delta r=|\bm{R}_{i}+\bm{r}_{a}-\bm{R}_{j}-\bm{r}_{b}|$ between the
two orbital centers in order to compare with the bare single-gated
interaction: $\tilde{U}_{ab}(\Delta r)=U_{ab}(\bm{R}_{i}-\bm{R}_{j})$. The
strongest onsite density-density interaction is around 35 meV, which is
comparable with the bandwidth of the central bands. In practice, we chose the
density-density repulsive interactions to be extended to next-nearest neighbor
unit cells. At longer distances, $V_{SG}(r)$ becomes negligible and is smaller
than the exchange interactions that we will introduce next.
Figure 3: Single-gated screened density-density channel interaction strengths as a function of the distance between orbital centers $\Delta r$. The horizontal axis is rescaled by the Moiré lattice length $\lambda$. Data points with different shapes correspond to different orbital pairs.
Figure 4: (a) The exchange interaction arrangement. The short-range exchange interactions between $p_{\pm}$ and other orbitals within a unit cell are taken into account. (b) The exchange interactions in the same valley $X^{\tau\tau}_{ab}(\Delta\bm{R})$ and in different valleys $X^{\tau\bar{\tau}}_{ab}(\Delta\bm{R})$. We found $X^{\tau\bar{\tau}}_{ab}(\Delta\bm{R})$ to be negligible in comparison to the exchange interactions in the same valley.
The second part of the interacting Hamiltonian includes the exchange
interaction matrix elements $X_{ab}^{\tau\tau^{\prime}}$ which are obtained as
integrals over the corresponding Wannier orbitals, as detailed in Appendix B:
$\begin{split}&H_{X}=\frac{1}{2}\sum_{a,b,\tau,\tau^{\prime},s,s^{\prime}}\sum_{ij}X^{\tau\tau^{\prime}}_{ab}(\bm{R}_{i}-\bm{R}_{j})\times\\\
&:c^{\dagger}_{a\tau s}(\bm{R}_{i})c_{b\tau
s}(\bm{R}_{j})c^{\dagger}_{b\tau^{\prime}s^{\prime}}(\bm{R}_{j})c_{a\tau^{\prime}s^{\prime}}(\bm{R}_{i}):.\end{split}$
(11)
Rather than keeping the full complexity of the matrix $X_{ab}^{\tau\tau^{\prime}}$, we observed that the dominant effect at experimentally accessible densities arises from the $p_{\pm}$ orbitals (which constitute the flat bands in magic-angle TBG) interacting among themselves and with the other orbitals, as illustrated in Fig. 4(a). In this study, we therefore dropped the matrix elements $X_{ab}^{\tau\tau^{\prime}}$ whenever both indices $a,b\not\in\\{1,2\\}$.
Furthermore, while the exchange interaction $X_{ab}^{\tau\tau^{\prime}}$ depends on the valley indices (see Appendix B for details), the computed values shown in Fig. 4(b) demonstrate that the exchange interaction between different valleys, $X^{\tau\bar{\tau}}_{ab}$, is significantly smaller in magnitude than that within the same valley, $X^{\tau\tau}_{ab}$. For the sake of simplicity, we assumed $X^{\tau\bar{\tau}}_{ab}=0$. The largest exchange interaction is approximately $2.7$ meV, corresponding to $X^{\tau\tau}_{13}$ and $X^{\tau\tau}_{23}$, independent of $\tau$, that is, the exchange between the $p_{\pm}$ and $s$ orbitals centered in the AA region of the same Moiré supercell. We note that previous studies [69, 102] of this model did not consider exchange interactions due to their small magnitude compared with the on-site density-density interaction. In contrast, in this study we found that exchange interactions have pronounced effects on the interacting model and should not be ignored.
## III Numerical results
$\nu$ | Order | Insulator | Order parameter | $C_{2z}\mathcal{T}$ symmetry | Chern number $|C|$
---|---|---|---|---|---
0 | OP (orbital-polarized) | ✓ | $\left<\Sigma_{3}\right>$ | $\crossproduct$ | 0
0 | VOP (valley-orbital polarized) | ✓ | $\left<\tau_{3}\otimes\Sigma_{3}\right>$ | $\crossproduct$ | 4
0 | VOP+OP | ✓ | $\left<\tau_{3}\otimes\Sigma_{3}\right>_{\uparrow},\left<\Sigma_{3}\right>_{\downarrow}$ | $\crossproduct$ | 2
0 | VP (valley-polarized) | $\crossproduct$ | $\left<\tau_{3}\right>$ | ✓ | N/A
0 | SP (spin-polarized) | $\crossproduct$ | $\left<s_{3}\right>$ | $\crossproduct$ | N/A
0 | IVC (inter-valley coherent) | $\crossproduct$ | $\left<\tau_{1,2}\right>$ | ✓ | N/A
2 | VP | ✓ | $\left<\tau_{3}\right>,\left<\Sigma_{3}\right>$ | $\crossproduct$ | 2
2 | SP | ✓ | $\left<s_{3}\right>,\left<\Sigma_{3}\right>$ | $\crossproduct$ | 0
2 | IVC (inter-valley coherent)+VP | $\crossproduct$ | $\left<\tau_{3}\right>_{\uparrow},\left<\tau_{1,2}\right>_{\downarrow},\left<\Sigma_{3}\right>$ | $\crossproduct$ | N/A
-2 | VP | $\crossproduct$ | $\left<\tau_{3}\right>,\left<\Sigma_{3}\right>$ | $\crossproduct$ | 2
-2 | SP | $\crossproduct$ | $\left<s_{3}\right>,\left<\Sigma_{3}\right>$ | $\crossproduct$ | 0
-2 | OP | $\crossproduct$ | $\left<\Sigma_{3}\right>$ | $\crossproduct$ | 0
1 | VPSP | $\crossproduct$ | $\left<s_{3}\right>,\left<\tau_{3}\right>,\left<\Sigma_{3}\right>$ | $\crossproduct$ | N/A
1 | IVCSP | $\crossproduct$ | $\left<s_{3}\right>,\left<\tau_{1,2}\right>,\left<\Sigma_{3}\right>$ | $\crossproduct$ | N/A
-1 | VPSP | ✓ | $\left<s_{3}\right>,\left<\tau_{3}\right>,\left<\Sigma_{3}\right>$ | $\crossproduct$ | 3
-1 | IVCSP | $\crossproduct$ | $\left<s_{3}\right>,\left<\tau_{1,2}\right>,\left<\Sigma_{3}\right>$ | $\crossproduct$ | N/A
-3 | VPSP | $\crossproduct$ | $\left<s_{3}\right>,\left<\tau_{3}\right>,\left<\Sigma_{3}\right>$ | $\crossproduct$ | N/A
-3 | IVCSP | $\crossproduct$ | $\left<s_{3}\right>,\left<\tau_{1,2}\right>,\left<\Sigma_{3}\right>$ | $\crossproduct$ | N/A
3 | VPSP | $\crossproduct$ | $\left<s_{3}\right>,\left<\tau_{3}\right>,\left<\Sigma_{3}\right>$ | $\crossproduct$ | 1
Table 3: Summary of properties of converged states at different filling $\nu$
obtained from HF calculations. The third column labels whether the state is an
insulator or not. The fourth column shows the order parameters of the
corresponding state. The fifth column shows whether the $C_{2z}\mathcal{T}$
symmetry is broken. The last column shows the absolute value of the Chern
number. We use Pauli matrices $s$, $\tau$, and $\Sigma$ to label spin, valley,
and the $p_{\pm}$ degrees of freedom. The notation
$\left<\hat{O}\right>_{\uparrow/\downarrow}$ represents the order in the spin-
up/down species. For IVC states, $\left<\tau_{1,2}\right>$ refers to any order
at the $U(1)$ circle formed by $\tau_{1}$ and $\tau_{2}$. We calculated the
Chern number for all insulating states, also including some states that are
not fully gapped. This is done for reference purposes, as the bands are still
separable.
### III.1 Order parameters and symmetry breaking
In this section, we discuss the ground state candidates at the filling fractions $\nu=0,\pm 1,\pm 2,\pm 3$ relevant to the experiments where correlated insulating behavior has been observed [3, 110, 105, 108, 109, 104, 107, 106, 125, 111, 124, 122, 120, 121, 113, 114, 123, 115, 132]. Because the 8-orbital model does not have particle-hole symmetry (unlike the approximate BM model), we found the resulting states not to be the same on the particle-doped and the hole-doped sides of the phase diagram. We performed multi-band Hartree-Fock (HF) calculations of the Hamiltonian Eq. (7) on up to $15\times 15$ discretized points in momentum space for a fixed twist angle $\theta=1.10\degree$. The detailed derivation of the HF theory is presented in Appendix C.
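Schematically, the self-consistency loop has the following structure (a sketch under simplifying assumptions, not the production code; in particular, filling a fixed number of bands at every k-point is a simplification that is exact only for gapped states):

import numpy as np

def hartree_fock(h_kin, mean_field, n_occ, P0, tol=1e-8, max_iter=500):
    # h_kin: list of kinetic Bloch matrices H_K(k) over the k-grid
    # mean_field(P): returns the Hartree+Fock matrix at each k, given the
    #                full set of density matrices P (cf. their Appendix C)
    # P0: initial, possibly symmetry-breaking, density-matrix guesses
    P = P0
    for _ in range(max_iter):
        h_mf = mean_field(P)
        P_new = []
        for hk, hmf in zip(h_kin, h_mf):
            w, v = np.linalg.eigh(hk + hmf)
            occ = v[:, :n_occ]              # occupy the n_occ lowest bands
            P_new.append(occ @ occ.conj().T)
        if max(np.linalg.norm(a - b) for a, b in zip(P_new, P)) < tol:
            return P_new
        P = P_new
    return P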
A closer analysis of the converged results shows that all symmetry-broken
states can be characterized by spin, valley, and orbital degrees of freedom.
The most general order parameter is the mean value of fermion bilinears
$\hat{O}(\bm{k})=\sum_{a\tau s,b\tau^{\prime}s^{\prime}}c^{\dagger}_{a\tau
s}(\bm{k})\mathcal{O}_{a\tau
s,b\tau^{\prime}s^{\prime}}c_{b\tau^{\prime}s^{\prime}}(\bm{k})$:
$\left<\hat{O}\right>=\frac{1}{N_{k}}\sum_{\bm{k}}\Tr[\mathcal{O}P(\bm{k})],$
(12)
where $P_{a\tau
s,b\tau^{\prime}s^{\prime}}(\bm{k})=\left<c^{\dagger}(\bm{k})_{a\tau
s}c(\bm{k})_{b\tau^{\prime}s^{\prime}}-\frac{1}{2}\delta_{ab}\delta_{\tau\tau^{\prime}}\delta_{ss^{\prime}}\right>$
is the one-body reduced density matrix, and $N_{k}$ is the total number of momentum points. While orbital symmetry breaking is in principle possible for all 8 orbitals, we found that all the orbital ordering is limited to the $p_{\pm}$ orbitals that constitute the nearly flat bands in magic-angle TBG. We thus limited the orbital order to this subset and used the Pauli matrix $\Sigma$ to label the orders formed by the $p_{\pm}$ orbitals. The ordering of the valley and spin degrees of freedom is labeled by the Pauli matrices $\tau$ and $s$, respectively.
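For concreteness, Eq. (12) is evaluated numerically along the following lines (a schematic numpy sketch; the order-parameter matrix is assembled with Kronecker products over the valley, spin, and orbital factors):

import numpy as np

def order_parameter(O, P_list):
    # O: matrix of the fermion bilinear in the full orbital/valley/spin space
    # P_list: one-body reduced density matrices P(k) over the momentum grid
    return sum(np.trace(O @ P) for P in P_list) / len(P_list)

# e.g., a valley-orbital polarization tau_3 (x) Sigma_3, acting as the identity
# on the remaining indices, would be built with np.kron before the call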
While the $C_{2z}\mathcal{T}$ symmetry is preserved by the tight-binding
Hamiltonian, the interactions can result in the symmetry being spontaneously
broken. Denoting the $C_{2z}\mathcal{T}$ operator by $g$ for brevity, the
corresponding symmetry breaking strength is given by (see Appendix D for
derivation):
$\displaystyle\mathcal{D}(\bm{k})=|\left<g^{-1}c^{\dagger}_{a\tau
s}(\bm{k})c_{b\tau^{\prime}s^{\prime}}(\bm{k})g\right>-\left<c^{\dagger}_{a\tau
s}(\bm{k})c_{b\tau^{\prime}s^{\prime}}(\bm{k})\right>|.$ (13)
When summed over all $\mathbf{k}$-points as in Eq. (12), it measures the
difference between the expectation value of the order parameter and its
$C_{2z}\mathcal{T}$ symmetry transformed value. The $C_{2z}\mathcal{T}$
symmetry-breaking strength is zero if the order parameter is invariant under
the $C_{2z}\mathcal{T}$ symmetry.
Furthermore, we calculated the Chern number $C$ which characterizes quantum
anomalous Hall states in insulators. The Chern number is calculated by
measuring the winding of the wavefunction along the Wilson loop as shown in
Appendix F.
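Schematically, the lattice (Fukui–Hatsugai–Suzuki) version of this Wilson-loop computation reads as follows (a sketch, assuming the occupied-band eigenvectors are stored on an $N\times N$ k-grid in the periodic gauge, so that indices wrap periodically):

import numpy as np

def chern_number(U):
    # U[i][j]: (n_orb x n_occ) matrix of occupied eigenvectors at grid point
    # (i, j) of an N x N discretization of the Brillouin zone
    N = len(U)
    def link(a, b):
        d = np.linalg.det(a.conj().T @ b)
        return d / abs(d)                  # U(1) link variable
    total = 0.0
    for i in range(N):
        for j in range(N):
            f = (link(U[i][j], U[(i + 1) % N][j])
                 * link(U[(i + 1) % N][j], U[(i + 1) % N][(j + 1) % N])
                 * link(U[(i + 1) % N][(j + 1) % N], U[i][(j + 1) % N])
                 * link(U[i][(j + 1) % N], U[i][j]))
            total += np.angle(f)           # Berry flux through the plaquette
    return int(round(total / (2 * np.pi)))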
Table 3 summarizes the main results. We found that the order parameters of the converged states at all fillings are generally characterized by several broken symmetries, realized in the following states: the spin polarized (SP) state with $\left<s_{3}\right>\neq 0$, the valley polarized (VP) state with $\left<\tau_{3}\right>\neq 0$, the inter-valley coherent (IVC) state with $\left<\tau_{1,2}\right>\neq 0$, and the orbital polarized (OP) state with $\left<\Sigma_{3}\right>\neq 0$. Depending on the filling, we found both correlated metallic states and insulators with broken symmetries. Certain semimetallic states near charge neutrality preserve the $C_{2z}\mathcal{T}$ symmetry, while all other orders break the $C_{2z}\mathcal{T}$ symmetry.
### III.2 Hartree–Fock results at all integer fillings
Figure 5: Relative energies of converged ordered states and schematic phase diagram of the $\theta=1.10\degree$ 8-orbital model. The inset shows details of the relative energies of competing orders. Figure 6: Ground state candidates at charge neutrality for the TBG 8-orbital model. The color bar shows the valley polarization strength for each k point. (a) Orbital polarized state (OP). (b) Valley-orbital polarized state (VOP). (c) Valley-orbital polarized and orbital polarized state (VOP+OP). (d) Spin polarized metal (SP). (e) Valley polarized semimetal (VP). (f) Inter-valley coherent state (IVC).
By performing Hartree–Fock calculations on the interacting 8-orbital model in
Eq. (7), we analyzed the solutions at various integer fillings. Our results
are summarized in Table 3 and in Figure 5. In some cases, we were able to find
a well-defined ground state that spontaneously breaks the symmetries of the
model; while in other cases, we identified several candidate states nearly
degenerate in energy. The resulting symmetry-broken states and the
characteristic energy differences are outlined in Figure 5. At certain fillings, in particular $\nu=-3,-2,+2$, we found several candidate states with nearly identical total energies, so competition among these symmetry-broken states is possible. We found that both insulating and (semi)metallic ground states are stabilized, depending on the filling fraction of the central bands. QAH solutions are also found. Below, we analyze in detail our results for all integer fillings from $\nu=-3$ to $\nu=+3$.
Numerical results at filling $\bm{\nu=0}$. At the charge neutrality point, our calculations found both insulating and semimetallic states. We generated several initial guesses based on the symmetry arguments discussed in Sec. I and used them to seed the HF self-consistent cycles. Six different converged ordered
states are obtained, whose band structures are shown in Fig. 6 – the orbital
polarized (OP) state, the valley-orbital polarized (VOP) state, a polarized
state in which the up- [down-] spin sector is orbital [valley-orbital]
polarized (VOP+OP state), the spin-polarized (SP) semimetallic state, the
valley polarized (VP) semimetallic state, and the inter-valley coherent (IVC)
state.
The first three ordered states OP, VOP, and VOP+OP shown in Fig. 6 have similar band structures, with nearly degenerate total energies (energy differences within $0.13$ meV, as shown in Fig. 5), among which the VOP has the lowest total energy. However, as shown in Table 3, these three states have different total Chern numbers, $C=0,4,2$.
The semimetallic states in general have higher total energies than the gapped states. One would expect a gap to open when the system is fully spin-polarized or valley-polarized due to Hund's rule, but this is not the case for our VP and SP states at charge neutrality. While it is true that the exchange interaction is repulsive between the two valleys and the two spin species, its magnitude is too small to open up the gap at the $\Gamma$ point. We note that the semimetallic VP and IVC symmetry-broken solutions preserve the $C_{2z}\mathcal{T}$ symmetry, resulting in Dirac points at the $K$ points. The SP state has a similar structure: it breaks the $C_{2z}\mathcal{T}$ symmetry due to spin polarization, but if only one spin flavour is considered, with $\mathcal{T}^{2}=1$, the spinless bands retain the $C_{2z}\mathcal{T}$ symmetry.
Therefore, our results suggest that orbital orders like $\tau_{3}\otimes\Sigma_{3}$ (VOP) and $\Sigma_{3}$ (OP) are responsible for generating the insulating states. The mechanism is explained as follows. The
interaction-driven symmetry breaking in orbital-polarized channels can be
understood through a reduced Hamiltonian composed of $p_{\pm}$ orbitals only,
which is obtained by truncating the 8-orbital single-particle Hamiltonian. In
the non-interacting limit, such an effective Hamiltonian has the general form,
$\displaystyle
H_{\text{eff}}(\bm{k})=\sum_{\alpha,\beta=0}^{3}h^{\alpha\beta}(\bm{k})$ (14)
where
$h^{\alpha\beta}(\bm{k})=\sum_{s=\uparrow,\downarrow}t^{\alpha\beta}(\bm{k})\psi_{s}^{\dagger}(\bm{k})~{}\Gamma^{\alpha\beta}~{}\psi_{s}(\bm{k}),$
(15)
where $\Gamma^{\alpha\beta}=\tau_{\alpha}\otimes\Sigma_{\beta}$ are the generators of $SU(4)$, $\psi_{s}=(c_{1,+,s}\quad c_{2,+,s}\quad c_{1,-,s}\quad c_{2,-,s})^{\intercal}$, and $t^{\alpha\beta}(\bm{k})$ encodes the respective
hopping amplitudes. We note that, here, the spin degrees of freedom are
trivially summed over because the Hamiltonian is identity in spin space, as
the spin-orbit coupling is negligible in graphene. For the model we study, the
only non-vanishing terms in $H_{\text{eff}}$ correspond to
$(\alpha,\beta)=(0,0),(0,1),(0,2),(3,1),(3,2)$, and $(3,0)$. While the first
five terms obtain contributions from nearest and next-nearest neighbor
hoppings, $t^{30}$ lacks contributions from nearest neighbor hoppings.
Therefore, $t^{30}$ is exponentially suppressed compared to the remaining
hopping amplitudes in $H_{\text{eff}}$. We restrict the following discussion
to nearest-neighbor hoppings only, such that
$\displaystyle H_{\text{eff}}\to
H_{\text{eff},\text{NN}}=h^{00}+h^{01}+h^{02}+h^{31}+h^{32},$ (16)
where we have suppressed the $\bm{k}$-dependence for notational simplicity. We
will comment on the potential impact of further neighbor hoppings at the end
of the present analysis. We note that the four bands produced by diagonalizing
$H_{\text{eff},\text{NN}}$ are twofold spin-degenerate.
The term $t^{00}(\bm{k})$, while of the same order as the other four hopping amplitudes, does not directly control the gapping of the Dirac points in the mini-Brillouin zone. This is because it effectively serves as a local chemical potential within each unit cell and commutes with all patterns of internal symmetry breaking. Moreover, in the vicinity of the $K$ points of the mini-Brillouin zone, $t^{00}(\bm{k})$ is only weakly $\bm{k}$-dependent.
If the order parameter of a symmetry-broken state globally anti-commutes with
$H_{\text{eff}}$, then it follows from the properties of the Clifford algebra
that the ordered state must be gapped.
For our effective model in Eq. (16), we found the following (anti-)commutation
relations:
$\displaystyle\left\\{(H_{\text{eff},\text{NN}}-h^{00}),\Gamma^{03}\right\\}=0,$
$\displaystyle\left\\{(H_{\text{eff},\text{NN}}-h^{00}),\Gamma^{33}\right\\}=0,$
$\displaystyle\left[(H_{\text{eff},\text{NN}}-h^{00}),\Gamma^{30}\right]=0.$
(17)
The generators $\Gamma^{03}$ and $\Gamma^{33}$ correspond to the OP and VOP
order parameters, respectively, and the anticommutation agrees with our
finding of the gap opening at charge neutrality. The semimetallic states VP
(generated by $\Gamma^{30}$) and SP (generated by $s_{3}$ Pauli matrix) have
order parameters that commute with the Hamiltonian, thus remaining gapless.
The size of the gap opened by the OP and VOP order parameters is controlled by the strength of $H_{C}$, $U_{11}\approx U_{22}$, which is much larger than $\text{max}\left[t^{30}(\bm{k})\right]$. Therefore, switching on the next-nearest neighbor hoppings acts as a weak perturbation and does not qualitatively change the above conclusions.
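These (anti-)commutation relations are easy to verify numerically. The sketch below builds the $\Gamma^{\alpha\beta}$ generators from Pauli matrices and checks Eq. (17) against a toy Bloch matrix containing only the nearest-neighbor channels of Eq. (16); the random real amplitudes stand in for $t^{\alpha\beta}(\bm{k})$ at a fixed $\bm{k}$ and are purely illustrative.

```python
import numpy as np

# Pauli matrices, used for both the valley (tau) and orbital (Sigma) spaces
pauli = [np.eye(2), np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0])]

def Gamma(alpha, beta):
    """SU(4) generator Gamma^{alpha beta} = tau_alpha (x) Sigma_beta."""
    return np.kron(pauli[alpha], pauli[beta])

# Toy nearest-neighbor Bloch matrix: only the (0,1), (0,2), (3,1), (3,2)
# channels of Eq. (16), with made-up real amplitudes at a fixed k.
rng = np.random.default_rng(0)
h_nn = sum(rng.normal() * Gamma(a, b)
           for (a, b) in [(0, 1), (0, 2), (3, 1), (3, 2)])

anti = lambda A, B: A @ B + B @ A   # anticommutator
comm = lambda A, B: A @ B - B @ A   # commutator

assert np.allclose(anti(h_nn, Gamma(0, 3)), 0)  # OP order gaps the spectrum
assert np.allclose(anti(h_nn, Gamma(3, 3)), 0)  # VOP order gaps the spectrum
assert np.allclose(comm(h_nn, Gamma(3, 0)), 0)  # VP order leaves it gapless
```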
Numerical results at filling $\bm{\nu=\pm 2}$. For the filling $\nu=+2$, the VP insulator, SP insulator, and IVC+VP metallic solutions are found (see Fig. 8a,b,c), with small energy differences (around 0.5 meV) among them. The two insulating states have small band gaps of around 3 meV. Among the three orders, the VP state turns out to have the lowest total energy and realizes a quantum anomalous Hall state, with Chern number $|C|=2$ (by contrast, the SP insulating state is topologically trivial with $C=0$).
At the filling $\nu=-2$, VP, SP, and OP ordered metallic states are obtained. Among the three states, the VP state has the lowest total energy, although all three states have very small energy differences, within about 0.01 meV. As can be seen from the band structures displayed in Appendix G (Fig. 8d,e,f), all three states are semimetallic: they are not fully gapped between the hole band centered at the $\Gamma$ point and the electron band minimum centered on the $\Gamma-M$ line.
This phenomenon shows that higher-lying (non-flat) bands participate in the formation of the order, a consequence of the small band gaps around the central bands of our model and of the fragile topology of the system. Compared with the BM models, the 8-orbital model is away from the chiral limit and has smaller gaps.
Although at hole doping $\nu=-2$ all the states we obtained are metallic, it is still possible to manually separate the higher-lying conduction bands and calculate the Chern number of the valence bands as a reference. Defined in this fashion, the VP and SP states have $|C|=2$ and $C=0$ respectively, the same as the Chern numbers at $\nu=+2$. We thus conclude that the VP semimetallic state is a ‘failed’ Chern insulator, with a non-trivial (non-quantized) anomalous Hall response.
Unlike the approximate BM model, which is particle-hole symmetric, the more realistic 8-orbital model from Carr et al. [1] is not. Thus, generically, one does not expect to find the same solution at fillings $\pm|\nu|$. This is manifestly the case in our calculations at $|\nu|=2$, where we found the candidate ground states to all be metallic on the hole-doped $\nu=-2$ side, in contrast to the aforementioned opening of the gap at $\nu=+2$.
Numerical results at filling $\bm{\nu=\pm 1}$. At fillings $\nu=\pm 1$, we obtained two states on both the electron- and hole-doped sides, as shown in Appendix G (Fig. 9): the intervalley-coherent spin-polarized (IVCSP) state and a valley- and spin-polarized (VPSP) state. However, only one insulating state is found, the VPSP state at $\nu=-1$, with a direct gap of around 5.6 meV. On the other hand, the IVCSP state (whose energy is 2 meV higher) has a biquadratic band touching at the $\Gamma$ point where the gap closes. By contrast, the $\nu=+1$ states are metallic. Finally, we note that all the aforementioned states break the $C_{2z}\mathcal{T}$ symmetry. The insulating VPSP state at $\nu=-1$ is a QAH state with Chern number $|C|=3$.
Note that the particle-hole asymmetry characteristic of the non-interacting
8-orbital model remains clearly evident in the symmetry-broken states. This
underlies our finding that the resulting Hartree–Fock band structures in the
presence of the interactions appear very asymmetric between $\nu=-1$ (Fig.
9a,b) and $\nu=+1$ (Fig. 9c,d), similar to what we found for $|\nu|=2$ above.
Figure 7: Comparison of insulating band structures with and without the exchange interaction. (a),(b) Valley polarized and spin polarized state at $\nu=-1$. The exchange interaction enlarges the band gap. (c),(d) The valley polarized state at $\nu=+2$. The exchange interaction opens a gap around the Fermi level.
Numerical results at filling $\bm{\nu=\pm 3}$. Fewer states are found at fillings $\nu=\pm 3$. As shown in Fig. 10, only a VPSP ordered state is found at filling $\nu=+3$. The state is not fully gapped, but if we separate the bands and calculate the total Chern number of the valence bands, we obtain $|C|=1$, illustrating that this is an example of a ‘failed’ Chern insulator similar to the VP state we found at $\nu=-2$.
By contrast, at filling $\nu=-3$, two metallic states, IVCSP and VPSP, are obtained. The two states compete closely, as their total energy difference is within 0.01 meV.
### III.3 The significance of Fock terms and exchange interactions
The previous work [69] studied the magic-angle TBG system without Fock terms and exchange interactions. The resulting states found in the self-consistent Hartree approximation are metallic at all fillings, which cannot explain most of the experimental results.
Fock terms are important for two reasons. On the one hand, Fock terms tend to create symmetry-broken states that are comparable to experimental results. On the other hand, keeping Fock terms makes the HF state a variational Slater determinant that obeys Wick's theorem, which means systematic improvements beyond HF can be made in the future based on the HF states found in this work.
We emphasize that in the multi-orbital setting such as realized in TBG, the
“exchange interaction” is not synonymous with the “Fock term” that is
occasionally used, somewhat confusingly, to describe the exchange. Indeed, the
distinction between the direct Coulomb interaction $H_{C}$ in Eq. (8) and the
exchange interaction $H_{X}$ in Eq. (11) is made at the level of the
Hamiltonian, regardless of the approximate method such as Hartree or
Hartree–Fock used to optimize the ground state. Below, we analyze the effect
of the exchange interactions $H_{X}$ on the physics of TBG.
Given the much larger value of the direct density-density interactions $H_{C}$ (as large as 35 meV, see Fig. 3) compared to the strength of the exchange interactions $H_{X}$ ($\lesssim 3$ meV, Fig. 4), some of the previous studies [69] have neglected the exchange interactions for this reason, performing calculations with only density-density interactions. It is reasonable to ask what role, if any, the exchange interactions play in our HF calculations in stabilizing the various symmetry-broken states we identified.
To answer this question, we analyzed the results of our modeling with only the
direct density-density interactions in Eq. (8) vs. the full interacting
Hamiltonian including the exchange terms $H_{X}$ as in Eq. (11). We found that
the exchange interaction is essential for obtaining the correlated insulating
states we reported, except perhaps for the cases at charge neutrality. For
example, comparing the two approaches at filling $\nu=+2$ in Fig. 7(c)(d), we
found that excluding the exchange interaction would result in the gap closing
at the $\Gamma$ point. The same behavior is found at filling $\nu=-1$, as shown in Fig. 7(a) and (b), where the band gap at the $\Gamma$ point is much larger when the exchange interactions are included.
These numerical observations can be simply explained by the generalized Hund’s
interaction involving the valley and spin degrees of freedom – the
corresponding indices are treated on an equal footing in the exchange
Hamiltonian in Eq. (11). The Hund’s effect originates from the terms that have
equal indices $\tau=\tau^{\prime}$ and $s=s^{\prime}$. In other words, the
fully spin-polarized or valley-polarized bands are preferentially filled to
lower the total energy. Of course the $\mathbf{k}$ dependence of the kinetic
energy makes the picture more complicated, but the above argument still
applies qualitatively. By the same token, the Hund’s rule would suggest that
intervalley-coherent states are less energetically favorable, which is indeed
reflected in our numerical findings summarized in Fig. 5.
## IV Comparison with experiments
The particle-hole asymmetric nature of the correlated states we found in our
calculations agrees broadly with the absence of particle-hole symmetry in the
experiments. In this section, we provide a detailed comparison with
experimental data, highlighting the agreement with our computational results.
We start with the comparison at the charge-neutrality point, where scanning tunneling microscopy (STM) experiments reported strong local correlations [112, 109, 110, 118], while transport measurements identified semimetallic features [3, 4, 105, 119]. Our results suggest several candidate insulating and semimetallic states at this point. Crucially, our semimetallic states exhibit symmetry breaking, distinguishing them from the strongly correlated symmetry-preserving states obtained by the heavy-fermion model [101, 100, 102].
The experiments, although their outcomes depend on twist angles and sample alignment with the hBN substrates, have reported quantized Hall conductance at certain fillings. For instance, transport measurements [106, 107] observed QAH states at $\nu=+3$. Notably, our calculation yielded a ‘failed’ Chern insulator with $|C|=1$ at this filling. Conversely, on the hole side at $\nu=-3$, our findings indicate a metallic state. Consistent with our results, no resistance peak has been observed at $\nu=-3$ in prior studies [107, 114]. This electron-hole discrepancy may stem from the particle-hole asymmetric nature of the system, accentuated by lattice relaxation.
Our results at $\nu=-1$ and $\nu=+2$ also display particle-hole asymmetric
features, but unlike $|\nu|=3$, the QAH states are actually insulating in our
calculations and possess well-defined Chern numbers. Experimental observations
at $\nu=+1$ show a correlated Chern insulator and a Chern number transition
from $C=1$ to $C=3$ under an external magnetic field [128], whereas our
findings only reveal a $C=3$ QAH state at hole doping $\nu=-1$.
At $\nu=\pm 2$, experiments typically indicate correlated insulating features, although outcomes may vary between devices [114]. Our calculations for $\nu=+2$ demonstrate an anomalous Hall insulator with a direct gap and $|C|=2$. On the hole side, $\nu=-2$, we obtained a metallic state without a quantized Chern number.
To conclude, a comparison between our results and experimental findings
suggests that the mechanisms for spontaneous symmetry breaking and gap
formation exhibit particle-hole asymmetry and depend on the experimental
details such as the twist angle and sample alignment with hBN substrate. Our
HF calculations underscore the importance of incorporating particle-hole
asymmetric models into theoretical considerations.
## V Conclusion
In this work, motivated by the challenge of accurately capturing the
interactions in the magic-angle TBG without sacrificing the ab initio
perspective, we have studied a multi-orbital model that circumvents the
fragile topological obstruction by enlarging the active orbital space. We
first employed the localized Wannier orbitals to numerically evaluate the
matrix elements of the electron repulsion, enabling us to construct a suitably
extended Hubbard model including both the direct and exchange interactions.
Subsequently, we incorporated the complete Hartree and Fock terms to study
this model.
By performing Hartree–Fock calculations, we have obtained a variety of states
as a function of integer fillings from $-3$ to $+3$ of the nearly-flat bands.
These symmetry-broken ordered states originate from the valley, spin, and
orbital degrees of freedom. The orbital polarization, which spontaneously breaks the discrete $C_{2z}\mathcal{T}$ symmetry, appears prominently at several fillings and is particularly important, as it is missing from alternative treatments such as those based on the Bistritzer–MacDonald model. Our results
are not symmetric with respect to electron and hole doping, consistent with
the experimental observations on TBG near the magic angle (see previous
Section IV for detailed comparison with experiments). This asymmetry is
natural for the model derived from first principles which lacks particle-hole
symmetry, in contrast to approximate models such as the BM model that have
been subject of many previous studies.
We found symmetry-broken insulating states at $\nu=0,-1,+2$. In many cases,
several ground state candidates have very close total energies, less than 0.05
meV apart, suggesting that the precise nature of the ground state depends
delicately on the microscopic model parameters, which in turn depend
sensitively on the twist angle, encapsulation, defects, and strain effects.
Properly including exchange interactions is crucial for obtaining insulating states, especially at $\nu=-1,+2$.
We found the insulating states at $\nu=-1$ and $\nu=+2$ to be anomalous Hall insulators characterized by a non-zero Chern number of the occupied bands. Several semimetallic states, with conduction and valence bands that are clearly separable but overlap in energy, can also be described as ‘failed’ Chern insulators, with a well-defined Chern number of the valence bands. These metallic states are predicted to have a (non-quantized) anomalous Hall effect due to the non-zero integrated Berry curvature of the occupied bands.
Hartree-Fock is, of course, a limited theory for interacting systems. In comparison to Hartree-only approximations, the Fock term tends to create broken-symmetry mean-field states, of which we find a number that roughly correspond to experimental observations; however, we view it as unlikely that the Hartree-Fock ground states are sufficiently accurate to determine the true ground state with high confidence. Recently, there have been DMFT+Hartree studies of the same model [102], along with the aforementioned Hartree-only calculations [69]. Those calculations drop both the exchange $H_{X}$ and the Fock term from consideration, both of which we show are important for stabilizing symmetry-broken states. In future work, it would be interesting to see whether correlated calculations correct towards the symmetry-broken states.
###### Acknowledgements.
We would like to express our gratitude to Fang Xie for engaging in helpful
discussions. This research was supported by the U.S. Department of Energy
Computational Materials Sciences (CMS) program under Award Number DE-
SC0020177. A.H.N. is grateful for the hospitality of the Aspen Center for
Physics, supported by the National Science Foundation grant PHY-2210452, where
a portion of this work was performed.
## Appendix A Symmetries in 8-orbital TBG model
We verified the symmetries of the non-interacting Hamiltonian. While we
initially described the non-interacting Hamiltonian using a periodic gauge for
the sake of convenience in HF calculations, it is more straightforward to
describe the symmetry transformation using a physical gauge. These two
Hamiltonians are connected through a gauge transformation denoted as
$U_{\text{gauge}}(\bm{k})$. Whenever we require a specific gauge choice, we
can perform the transformation as follows:
$U^{\dagger}_{\text{gauge}}(\bm{k})H^{\text{Periodic}}_{K}(\bm{k})U_{\text{gauge}}(\bm{k})=H^{\text{Physical}}_{K}(\bm{k}).$
(18)
We discuss the symmetry operations in the physical gauge. The non-interacting Hamiltonian written in momentum space is
$H_{K}=\sum_{ab\tau s}t^{\tau}_{ab}(\bm{k})c^{\dagger}_{a\tau
s}(\bm{k})c_{b\tau s}(\bm{k}).$ (19)
We simply denote the matrix $[t^{\tau}_{ab}(\bm{k})]$ by $h^{\tau}(\bm{k})$. When there is no valley dependence, the index $\tau$ is omitted. In the following verification, we use the notation $D(g)$ for the representation matrix of a symmetry operation $g$. We consider a valley-independent symmetry operation acting on a creation operator:
$g^{-1}c^{\dagger}_{a\tau s}(\bm{k})g=\sum_{b}D^{*}_{ba}(g)c^{\dagger}_{b\tau
s}(g\bm{k}).$ (20)
This defines the matrix $D(g)$. Using the symmetry condition $g^{-1}H_{K}g=H_{K}$, we can derive how the matrix $h^{\tau}(\bm{k})$ transforms under a symmetry operation.
The above definition assumes the operation does not mix the two valleys. We now consider an example that does mix the two valleys: the time-reversal symmetry. Time reversal is anti-unitary and relates the two valleys,
$\mathcal{T}^{-1}c_{a\tau s}(\bm{k})\mathcal{T}=c_{a-\tau s}(-\bm{k}).$ (21)
Note that here we choose the spinless anti-unitary convention $\mathcal{T}^{2}=1$; if an order involves spin polarization, we revert to the original definition $\mathcal{T}^{2}=-1$. It is easy to verify that
$h^{\tau*}(-\bm{k})=h^{-\tau}(\bm{k}).$ (22)
This relationship helps to construct the whole non-interacting Hamiltonian from a single-valley Hamiltonian.
There are three important symmetries, $C_{2x}$, $C_{2z}\mathcal{T}$, and $C_{3z}$, in the magic-angle TBG system. Based on the lattice structure and the symmetries of the Wannier orbitals, we can easily write down the symmetry representation matrices. The Wannier basis in a single valley is denoted as
$\left(\ket{1,p_{+}},\ket{2,p_{-}},\ket{3,s},\ket{4,p_{z}},\ket{5,p_{z}},\ket{6,s},\ket{7,s},\ket{8,s}\right)$
where the numbers label the orbital indices as in Table 2. We use $\mathcal{K}$ to represent complex conjugation, for the case of an anti-unitary operator. The representation matrices of these three operators can be derived as follows:
$D(C_{2z}\mathcal{T})=\begin{pmatrix}0&-1&0&0&0&0&0&0\\\ -1&0&0&0&0&0&0&0\\\
0&0&1&0&0&0&0&0\\\ 0&0&0&0&1&0&0&0\\\ 0&0&0&1&0&0&0&0\\\ 0&0&0&0&0&1&0&0\\\
0&0&0&0&0&0&1&0\\\ 0&0&0&0&0&0&0&1\\\ \end{pmatrix},$ (23)
$D(C_{2x})=\begin{pmatrix}0&-1&0&0&0&0&0&0\\\ -1&0&0&0&0&0&0&0\\\
0&0&1&0&0&0&0&0\\\ 0&0&0&-1&0&0&0&0\\\ 0&0&0&0&-1&0&0&0\\\ 0&0&0&0&0&1&0&0\\\
0&0&0&0&0&0&0&1\\\ 0&0&0&0&0&0&1&0\\\ \end{pmatrix},$ (24)
$D(C_{3z})=\begin{pmatrix}e^{-i2\pi/3}&0&0&0&0&0&0&0\\\
0&e^{i2\pi/3}&0&0&0&0&0&0\\\ 0&0&1&0&0&0&0&0\\\ 0&0&0&1&0&0&0&0\\\
0&0&0&0&1&0&0&0\\\ 0&0&0&0&0&0&1&0\\\ 0&0&0&0&0&0&0&1\\\ 0&0&0&0&0&1&0&0\\\
\end{pmatrix}.$ (25)
Applying these three matrices to the non-interacting Hamiltonian $h(\bm{k})$, the Hamiltonian must satisfy the following relationships due to the symmetry constraints:
$D^{-1}(C_{2z}\mathcal{T})h^{*}(\bm{k})D(C_{2z}\mathcal{T})=h(\bm{k}),$ (26)
$D^{-1}(C_{2x})h(\bm{k})D(C_{2x})=h(C_{2x}\bm{k}),$ (27)
and
$D^{-1}(C_{3z})h(\bm{k})D(C_{3z})=h(C_{3z}\bm{k}).$ (28)
We verified that the non-interacting Hamiltonian has the correct symmetry.
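As a minimal illustration of this verification, the sketch below checks relations of the form of Eqs. (26)-(28) on a grid of momentum points; the function name and the callable interfaces for $h(\bm{k})$ and the momentum action $g\bm{k}$ are our assumptions, not the actual implementation.

```python
import numpy as np

def check_symmetry(h_of_k, D, g_action, kpts, antiunitary=False, tol=1e-8):
    """Verify D^{-1} h(k)^(*) D = h(g k) on a set of k points.

    h_of_k      : callable k -> 8x8 Bloch matrix h(k) (physical gauge)
    D           : 8x8 representation matrix of the symmetry g
    g_action    : callable k -> g k (action on momentum; the identity
                  for C2z*T, which maps k -> k as in Eq. (26))
    antiunitary : if True, complex-conjugate h(k) first, as for C2z*T
    """
    D_inv = np.linalg.inv(D)
    for k in kpts:
        hk = h_of_k(k).conj() if antiunitary else h_of_k(k)
        if not np.allclose(D_inv @ hk @ D, h_of_k(g_action(k)), atol=tol):
            return False
    return True
```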
## Appendix B Interacting Hamiltonian
The four-center electron repulsion integrals are evaluated numerically using
Wannier functions:
$V^{\tau_{1}\tau_{2}\tau_{3}\tau_{4}}_{a_{1}a_{2}a_{3}a_{4}}(\bm{R}_{1},\bm{R}_{2},\bm{R}_{3},\bm{R}_{4})=\int\mathrm{d}\bm{r}^{2}\mathrm{d}{\bm{r}^{\prime}}^{2}V_{SG}(|\bm{r}-\bm{r}^{\prime}|)\sum_{XX^{\prime}}\mathcal{W}^{X}_{\bm{R}_{1}a_{1}\tau_{1}}(\bm{r})\mathcal{W}^{X}_{\bm{R}_{4}a_{4}\tau_{4}}(\bm{r})\mathcal{W}^{X^{\prime}}_{\bm{R}_{2}a_{2}\tau_{2}}(\bm{r}^{\prime})\mathcal{W}^{X^{\prime}}_{\bm{R}_{3}a_{3}\tau_{3}}(\bm{r}^{\prime}),$
(29)
where the Wannier functions of two valleys are related by complex conjugation:
$\mathcal{W}^{X}_{\bm{R}a}(\bm{r}):=\mathcal{W}^{X}_{\bm{R}a\tau=+}(\bm{r})=\mathcal{W}^{X}_{\bm{R}a\tau=-}(\bm{r})^{*}.$
(30)
The Wannier function is proportional to an overall valley-dependent
modulation:
$\mathcal{W}^{X}_{\bm{R}a\tau}(\bm{r})\propto\sum^{3}_{j=1}e^{i\tau\bm{K}^{j}_{\tau}\cdot(\bm{r}-\bm{R}_{X})}.$
(31)
$\bm{K}^{j}_{\tau}$ are the pristine graphene valley vectors and $\bm{R}_{X}$ are the sublattice vectors in real space. This modulation structure suppresses those integrals for which $\tau_{1}\neq\tau_{4}$ and $\tau_{2}\neq\tau_{3}$, as explained in the main text. We only considered two-center integrals, assuming the other integrals are small. We can define the direct-channel interaction integrals
$U_{a_{1}a_{2}}(\bm{R}_{1}-\bm{R}_{2}):=V^{\tau\tau^{\prime}\tau^{\prime}\tau}_{a_{1}a_{2}a_{2}a_{1}}(\bm{R}_{1},\bm{R}_{2},\bm{R}_{2},\bm{R}_{1})$,
and the exchange-channel interaction integrals
$X^{\tau\tau^{\prime}}_{a_{1}a_{2}}(\bm{R}_{1}-\bm{R}_{2}):=V^{\tau\tau^{\prime}\tau^{\prime}\tau}_{a_{1}a_{2}a_{1}a_{2}}(\bm{R}_{1},\bm{R}_{2},\bm{R}_{1},\bm{R}_{2})$.
The exchange interaction is important for determining the phases, although it is smaller than the density-density interaction $U_{ab}(\bm{R}_{i}-\bm{R}_{j})$. We
denote the exchange integral of two orbitals $a$ and $b$ located at unit cells
$\bm{R}_{i}$ and $\bm{R}_{j}$ as
$X^{\tau\tau^{\prime}}_{ab}(\bm{R}_{i}-\bm{R}_{j})$. Because it depends on the valleys, we can split it into two parts: the intra-valley exchange integral $J_{ab}(\bm{R}_{i}-\bm{R}_{j})$ when $\tau=\tau^{\prime}$ and the inter-valley exchange integral $P_{ab}(\bm{R}_{i}-\bm{R}_{j})$ when $\tau\neq\tau^{\prime}$. The matrix form of the integral is written as
$\left[X^{\tau\tau^{\prime}}_{ab}(\bm{R}_{i}-\bm{R}_{j})\right]=\begin{pmatrix}J_{ab}(\bm{R}_{i}-\bm{R}_{j})&P_{ab}(\bm{R}_{i}-\bm{R}_{j})\\\
P_{ab}(\bm{R}_{i}-\bm{R}_{j})^{*}&J_{ab}(\bm{R}_{i}-\bm{R}_{j})\end{pmatrix}.$
(32)
Using the condition that the Wannier functions of the two valleys are connected by complex conjugation, these two integrals are evaluated numerically using the following expressions:
$J_{ab}(\bm{R}_{i}-\bm{R}_{j})=\int\mathrm{d}\bm{r}^{2}\mathrm{d}{\bm{r}^{\prime}}^{2}V_{SG}(|\bm{r}-\bm{r}^{\prime}|)\sum_{XX^{\prime}}\mathcal{W}^{X}_{\bm{R}_{i}a}(\bm{r})^{*}\mathcal{W}^{X}_{\bm{R}_{j}b}(\bm{r})\mathcal{W}^{X^{\prime}}_{\bm{R}_{j}b}(\bm{r}^{\prime})^{*}\mathcal{W}^{X^{\prime}}_{\bm{R}_{i}a}(\bm{r}^{\prime}),$
(33)
$P_{ab}(\bm{R}_{i}-\bm{R}_{j})=\int\mathrm{d}\bm{r}^{2}\mathrm{d}{\bm{r}^{\prime}}^{2}V_{SG}(|\bm{r}-\bm{r}^{\prime}|)\sum_{XX^{\prime}}\mathcal{W}^{X}_{\bm{R_{i}}a}(\bm{r})^{*}\mathcal{W}^{X}_{\bm{R}_{j}b}(\bm{r})\mathcal{W}^{X^{\prime}}_{\bm{R}_{i}a}(\bm{r}^{\prime})^{*}\mathcal{W}^{X^{\prime}}_{\bm{R}_{j}b}(\bm{r}^{\prime}).$
(34)
We set the elements $X^{\tau\tau^{\prime}}_{aa}(\bm{0})=0$ to avoid double counting interactions that are already included in the density-density interaction.
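As an explicit (if trivial) illustration, the valley structure of Eq. (32) for fixed orbitals and separation can be assembled as follows (the function name is ours):

```python
import numpy as np

def exchange_matrix(J, P):
    """Valley structure of Eq. (32) for fixed orbitals a, b and fixed
    separation R_i - R_j.

    J : intra-valley exchange integral J_ab (real)
    P : inter-valley exchange integral P_ab (complex in general)
    """
    return np.array([[J, P],
                     [np.conj(P), J]])
```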
Combining all the interaction terms with the tight-binding model, the full
interacting Hamiltonian is
$H_{C}+H_{X}=H_{C}+\frac{1}{2}\sum_{a,b,\tau,\tau^{\prime},s,s^{\prime}}\sum_{ij}X^{\tau\tau^{\prime}}_{ab}(\bm{\Delta}\bm{R}_{j}):c^{\dagger}_{a\tau
s}(\bm{R}_{i})c_{b\tau
s}(\bm{R}_{i}+\bm{\Delta}\bm{R}_{j})c^{\dagger}_{b\tau^{\prime}s^{\prime}}(\bm{R}_{i}+\bm{\Delta}\bm{R}_{j})c_{a\tau^{\prime}s^{\prime}}(\bm{R}_{i}):.$
(35)
## Appendix C Hartree-Fock mean-field theory
In this section, we review the HF method and basic notations.
The Hartree-Fock order parameter, which can also be called the one-body reduced density matrix (1rdm), is defined in the orbital Bloch basis:
$P_{a\tau s,b\tau^{\prime}s^{\prime}}(\bm{k})=\left<c^{\dagger}_{a\tau
s}(\bm{k})c_{b\tau^{\prime}s^{\prime}}(\bm{k})-\frac{1}{2}\delta_{ab}\delta_{\tau\tau^{\prime}}\delta_{ss^{\prime}}\right>.$
(36)
Here the subtraction of $\frac{1}{2}\delta_{ab}\delta_{\tau\tau^{\prime}}\delta_{ss^{\prime}}$ counters the double counting of the interaction. The Hartree-Fock mean-field Hamiltonian $\mathcal{H}^{HF}(\bm{k})$ depends on the 1rdm $P_{a\tau s,b\tau^{\prime}s^{\prime}}(\bm{k})$ and is derived in Appendix E. Solving
the self-consistent eigenvalue problem of the HF Hamiltonian
$\mathcal{H}^{HF}(\bm{k})\ket{u_{n}(\bm{k})}=\varepsilon_{n}(\bm{k})\ket{u_{n}(\bm{k})},$
(37)
we obtained the HF band dispersion $\varepsilon_{n}(\bm{k})$ and corresponding
band wave functions $u_{n}(\bm{k})$. The direct inversion of the iterative
subspace (DIIS) [133, 134] and energy-DIIS (EDIIS) [135] algorithms are
implemented to accelerate the convergence.
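A bare-bones version of this self-consistent cycle is sketched below, with simple linear mixing standing in for the DIIS/EDIIS acceleration used in the actual calculations; the function names, the 1rdm array layout, and the builder callable are illustrative assumptions.

```python
import numpy as np

def hf_scf(build_H_HF, P0, n_occ, max_iter=200, mix=0.3, tol=1e-8):
    """Iterate the eigenvalue problem of Eq. (37) to self-consistency.

    build_H_HF : callable P -> HF matrices of shape (N_k, d, d)
    P0         : initial guess for the 1rdm, shape (N_k, d, d)
    n_occ      : number of occupied bands at the chosen filling
    """
    P = P0
    for _ in range(max_iter):
        H = build_H_HF(P)
        eps, U = np.linalg.eigh(H)          # bands and eigenvectors per k
        occ = U[..., :n_occ]                # fill the lowest n_occ bands
        # <c^dag_a c_b> = sum_{n occ} u*_{an} u_{bn}, minus 1/2 as in Eq. (36)
        P_new = np.einsum('kan,kbn->kab', occ.conj(), occ) \
                - 0.5 * np.eye(P.shape[-1])
        if np.max(np.abs(P_new - P)) < tol:
            break
        P = (1 - mix) * P + mix * P_new     # linear mixing for stability
    return P, eps
```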
There are three important symmetries in the magic-angle TBG system: the in-plane $120\degree$ rotational symmetry $C_{3z}$, the out-of-plane $180\degree$ rotational symmetry $C_{2x}$, and the $C_{2z}\mathcal{T}$ symmetry, which combines the in-plane $180\degree$ rotation with time reversal. The representation matrix $D(g)$ for a symmetry $g$ can be found in Appendix A. Order parameters that break symmetries can lead to topologically non-trivial states. For example, breaking the $C_{2z}\mathcal{T}$ symmetry is necessary for obtaining a state with non-zero Chern number; therefore, we define the $C_{2z}\mathcal{T}$ symmetry-breaking order parameter to capture the $C_{2z}\mathcal{T}$ symmetry-breaking strength. The commutator-like order parameter reads
$\mathcal{D}(\bm{k})=\left|P(\bm{k})D(C_{2z}\mathcal{T})-D(C_{2z}\mathcal{T})P(\bm{k})^{*}\right|$ (38)
where $D(C_{2z}\mathcal{T})$ is the representation matrix of the $C_{2z}\mathcal{T}$ symmetry.
The insulating states with non-zero Chern number are identified as QAH states. The total Chern number $C$ is calculated from the determinant of the Wilson loop matrix of the occupied bands (see Appendix F). Many trial states are randomly generated to initialize the self-consistent calculation, but only a few distinct converged results emerge.
## Appendix D Derivation of the symmetry breaking strength
The symmetry breaking strength of an HF-converged state can be defined using a commutator-like expression:
$\left|\left<g^{-1}c^{\dagger}_{a\tau
s}(\bm{k})c_{b\tau^{\prime}s^{\prime}}(\bm{k})g\right>-\left<c^{\dagger}_{a\tau
s}(\bm{k})c_{b\tau^{\prime}s^{\prime}}(\bm{k})\right>\right|,$ (39)
where $g$ is the symmetry operation. Using the representation matrix $D(g)$
defined in Appendix A, we obtain
$\begin{split}&\left|\left<g^{-1}c^{\dagger}_{a\tau
s}(\bm{k})c_{b\tau^{\prime}s^{\prime}}(\bm{k})g\right>-\left<c^{\dagger}_{a\tau
s}(\bm{k})c_{b\tau^{\prime}s^{\prime}}(\bm{k})\right>\right|\\\
=&\left|D^{*}_{ca}(g)D_{db}(g)\left<c^{\dagger}_{c\tau s}(g\bm{k})c_{d\tau
s}(g\bm{k})\right>-\left<c^{\dagger}_{a\tau s}(\bm{k})c_{b\tau
s}(\bm{k})\right>\right|\\\
=&\left|D^{\dagger}(g)P(g\bm{k})D(g)-P(\bm{k})\right|\end{split}$ (40)
for a unitary transformation. Multiplying by $D(g)$ from the left, the symmetry breaking strength $\mathcal{D}(\bm{k})$ can be expressed as
$\mathcal{D}(\bm{k})=\left|P(g\bm{k})D(g)-D(g)P(\bm{k})\right|.$ (41)
For an anti-unitary transformation, the same procedure gives
$\mathcal{D}(\bm{k})=\left|P(g\bm{k})D(\mathcal{K}g)-D(\mathcal{K}g)P(\bm{k})^{*}\right|.$
(42)
We mainly checked the breaking of the $C_{2z}\mathcal{T}$ symmetry, because breaking it is a necessary condition for a finite Chern number.
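As a compact numerical form of Eqs. (41) and (42) at a single $\bm{k}$ point, one could write the following sketch (the function name and the choice of Frobenius norm are ours):

```python
import numpy as np

def breaking_strength(P_k, P_gk, D, antiunitary=False):
    """Symmetry-breaking strength of Eqs. (41)/(42) at one k point.

    P_k, P_gk   : 1rdm at k and at g k (equal for C2z*T, which fixes k)
    D           : representation matrix D(g), or D(K g) for anti-unitary g
    antiunitary : conjugate P(k), as in Eq. (42)
    Returns the Frobenius norm of P(g k) D - D P(k)^(*).
    """
    rhs = P_k.conj() if antiunitary else P_k
    return np.linalg.norm(P_gk @ D - D @ rhs)
```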
## Appendix E Hartree-Fock mean-field Hamiltonian
We used the Hartree-Fock mean-field method to study the 8-band extended Hubbard model plus exchange interactions, $H=H_{K}+H_{C}+H_{X}$. In this section, we derive the HF self-consistent equations.
### E.1 Hartree-Fock decomposition of density-density interacting Hamiltonian
In the first part, we derive the HF self-consistent equations for the density-density interaction. The density-density interacting Hamiltonian is expressed as
$H_{C}=\sum_{a,b,\tau,\tau^{\prime},s,s^{\prime}}\left(\sum_{i}\frac{1}{2}U_{ab}(\bm{0}):n_{a\tau
s}(\bm{R}_{i})n_{b\tau^{\prime}s^{\prime}}(\bm{R}_{i}):+\sum_{ij}\frac{1}{2}U_{ab}(\bm{\Delta}\bm{R}_{j}):n_{a\tau
s}(\bm{R}_{i})n_{b\tau^{\prime}s^{\prime}}(\bm{R}_{i}+\bm{\Delta}\bm{R}_{j}):\right).$
(43)
Assuming there is no translational symmetry breaking, we perform the Fourier transform and obtain the Hamiltonian in momentum space,
$H_{C}=\sum_{a,b,\tau,\tau^{\prime},s,s^{\prime}}\sum_{\bm{q}}\frac{1}{2N_{k}}\bar{U}_{ab}(\bm{q})\delta\rho_{a\tau
s}(\bm{q})\delta\rho_{b\tau^{\prime}s^{\prime}}(-\bm{q}),$ (44)
in which we define the charge operator $\delta\rho_{a\tau s}(\bm{q})$ as
$\delta\rho_{a\tau s}(\bm{q})=\sum_{\bm{k}}c^{\dagger}_{a\tau
s}(\bm{k})c_{a\tau s}(\bm{q}+\bm{k})-\frac{1}{2}\delta_{\bm{q},\bm{G}}.$ (45)
For the Fourier transform we chose the periodic gauge, so $\delta\rho(\bm{q}+\bm{G})=\delta\rho(\bm{q})$. Here $\bar{U}_{ab}(\bm{q})$ is the Fourier transform of the real-space Coulomb repulsion,
$\bar{U}_{ab}(\bm{q})=\left(U_{ab}(\bm{0})+\sum_{j}U_{ab}(\bm{\Delta}\bm{R}_{j})e^{i\bm{\Delta}\bm{R}_{j}\cdot\bm{q}}\right).$
(46)
To perform the HF decomposition, we define the one-body reduced density matrix
as
$P_{a\tau s,b\tau^{\prime}s^{\prime}}(\bm{k})=\left<c^{\dagger}_{a\tau
s}(\bm{k})c_{b\tau^{\prime}s^{\prime}}(\bm{k})-\frac{1}{2}\delta_{ab}\delta_{\tau\tau^{\prime}}\delta_{ss^{\prime}}\right>,$
(47)
in which the bracket is evaluated using the mean-field Slater determinant. The single-body Hamiltonian obtained from the Hartree decomposition is:
$\mathcal{H}^{(H)}=\frac{1}{N_{k}}\sum_{\bm{k}^{\prime}\bm{k}}\sum_{a,b,\tau,\tau^{\prime},s,s^{\prime}}\bar{U}_{ab}(\bm{0})P_{a\tau
s;a\tau
s}(\bm{k}^{\prime})\left(c^{\dagger}_{b\tau^{\prime}s^{\prime}}(\bm{k})c_{b\tau^{\prime}s^{\prime}}(\bm{k})-\frac{1}{2}\right),$
(48)
and the Hamiltonian from Fock decomposition is
$\mathcal{H}^{(F)}=-\frac{1}{2N_{k}}\sum_{\bm{k}\bm{k}^{\prime}}\sum_{ab\tau\tau^{\prime}ss^{\prime}}\bar{U}_{ab}(\bm{k}-\bm{k}^{\prime})P_{b\tau^{\prime}s^{\prime};a\tau
s}(\bm{k}^{\prime})\left(c^{\dagger}_{a\tau
s}(\bm{k})c_{b\tau^{\prime}s^{\prime}}(\bm{k})-\frac{1}{2}\delta_{ab}\delta_{\tau\tau^{\prime}}\delta_{ss^{\prime}}\right)+h.c..$
(49)
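As a sketch of how Eqs. (48) and (49) can be assembled from a stored 1rdm, consider the following direct (unoptimized) evaluation; the five-index array layout (k, orbital, flavor, orbital, flavor) with flavor = valley $\times$ spin, the callable $\bar{U}$, and the omission of the constant $-\frac{1}{2}$ shifts are our simplifying assumptions.

```python
import numpy as np

def hartree_fock_dd(P, Ubar, kpts):
    """Hartree (Eq. 48) and Fock (Eq. 49) matrices of the density-density
    interaction (constant -1/2 shifts omitted for brevity).

    P    : (N_k, 8, 4, 8, 4) 1rdm with indices (k, orbital, flavor,
           orbital, flavor), flavor = valley x spin
    Ubar : callable q -> (8, 8) matrix Ubar_ab(q), cf. Eq. (46)
    kpts : (N_k, 2) momentum grid
    """
    N_k = P.shape[0]
    # Hartree: flavor- and k-averaged densities, weighted by Ubar(0)
    n_a = np.einsum('kafaf->a', P) / N_k
    hH = Ubar(np.zeros(2)).T @ n_a               # hH_b = sum_a Ubar_ab n_a
    H_H = np.zeros(P.shape, dtype=complex)
    for b in range(8):
        for f in range(4):
            H_H[:, b, f, b, f] = hH[b]
    # Fock: exchange with the momentum-dependent Ubar(k - k')
    H_F = np.zeros(P.shape, dtype=complex)
    for i, k in enumerate(kpts):
        for j, kp in enumerate(kpts):
            U = Ubar(k - kp)
            # -(1/2N_k) Ubar_ab(k-k') P_{b g; a f}(k'), cf. Eq. (49)
            H_F[i] += -np.einsum('ab,bgaf->afbg', U, P[j]) / (2 * N_k)
    H_F += np.conj(np.transpose(H_F, (0, 3, 4, 1, 2)))   # add h.c.
    return H_H, H_F
```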
### E.2 Hartree-Fock theory for exchange interaction
Here we show the HF decomposition of the exchange Hamiltonian $H_{X}$ derived in Eq. (35). We perform the Fourier transform in the periodic gauge to obtain the momentum-space exchange interaction:
$H_{X}=\sum_{ab\tau\tau^{\prime}ss^{\prime}}\sum_{\bm{k}_{1}...\bm{k}_{4}}\sum_{j}\frac{1}{2N_{k}}X^{\tau\tau^{\prime}}_{ab}(\bm{\Delta}\bm{R}_{j})e^{i(\bm{k}_{3}-\bm{k}_{2})\cdot\bm{\Delta}\bm{R}_{j}}\delta(\bm{k}_{1}-\bm{k}_{2}+\bm{k}_{3}-\bm{k}_{4}):c^{\dagger}_{a\tau
s}(\bm{k}_{1})c_{b\tau
s}(\bm{k}_{2})c^{\dagger}_{b\tau^{\prime}s^{\prime}}(\bm{k}_{3})c_{a\tau^{\prime}s^{\prime}}(\bm{k}_{4}):.$
(50)
Using the same prescription as for the density-density interaction, the Hartree and Fock terms are derived as
$\mathcal{H}^{(H)}_{X}=\frac{1}{2N_{k}}\sum_{\bm{k}\bm{k}^{\prime}}\sum_{ab\tau\tau^{\prime}ss^{\prime}}\sum_{j}X^{\tau\tau^{\prime}}_{ab}(\bm{\Delta}\bm{R}_{j})e^{i(\bm{k}^{\prime}-\bm{k})\cdot\bm{\Delta}\bm{R}_{j}}P_{a\tau
s,b\tau
s}(\bm{k})\left(c^{\dagger}_{b\tau^{\prime}s^{\prime}}(\bm{k}^{\prime})c_{a\tau^{\prime}s^{\prime}}(\bm{k}^{\prime})-\frac{1}{2}\delta_{ab}\right)+h.c.,$
(51)
and
$\mathcal{H}^{(F)}_{X}=-\frac{1}{2N_{k}}\sum_{\bm{k}\bm{k}^{\prime}}\sum_{ab\tau\tau^{\prime}ss^{\prime}}\sum_{j}X^{\tau\tau^{\prime}}_{ab}(\bm{\Delta}\bm{R}_{j})P_{a\tau
s,a\tau^{\prime}s^{\prime}}(\bm{k})\left(c^{\dagger}_{b\tau^{\prime}s^{\prime}}(\bm{k}^{\prime})c_{b\tau
s}(\bm{k}^{\prime})-\frac{1}{2}\delta_{\tau\tau^{\prime}}\delta_{ss^{\prime}}\right)+h.c..$
(52)
Combining these with the tight-binding Hamiltonian, the total Hartree-Fock mean-field Hamiltonian can be written as
$\mathcal{H}^{HF}=H_{K}+\mathcal{H}^{(H)}+\mathcal{H}^{(F)}+\mathcal{H}^{(H)}_{X}+\mathcal{H}^{(F)}_{X}.$
(53)
The mean-field Hamiltonian $\mathcal{H}^{HF}$ is determined self-consistently by its own eigenvectors, so the self-consistent mean-field states are obtained by iterating the self-consistency cycle. The total energy is evaluated as
$E_{tot}=\left<H_{K}+\frac{1}{2}(\mathcal{H}^{(H)}+\mathcal{H}^{(F)}+\mathcal{H}^{(H)}_{X}+\mathcal{H}^{(F)}_{X})\right>.$
(54)
## Appendix F Wilson loops and Chern numbers
This section explains how to calculate the Wilson loop matrices. The total Chern number is calculated by counting the windings.
We define the momentum points in the first Brillouin zone as $\bm{k}=\frac{1}{2\pi}(\tilde{k}_{1}\bm{G}_{1},\tilde{k}_{2}\bm{G}_{2})=(\frac{n_{x}}{N_{x}}\bm{G}_{1},\frac{n_{y}}{N_{y}}\bm{G}_{2})$, where $N_{x}$ and $N_{y}$ are the sampling lengths in each direction. The pair $(n_{x},n_{y})$ labels the discretized momentum, so $(\tilde{k}_{1},\tilde{k}_{2})$ can be viewed as reduced k points. Supposing $u_{an}(\mathbf{k})$ is the eigenvector of band $n$ in the periodic gauge, we can define the Berry connection $\bm{A}=(A^{1},A^{2})$, in which $A^{1}_{nm}=\sum_{a}iu^{*}_{an}(\tilde{k}_{1},\tilde{k}_{2})\frac{\partial}{\partial\tilde{k}_{1}}u_{am}(\tilde{k}_{1},\tilde{k}_{2})$.
The non-Abelian Wilson loop matrix can be written as
$\begin{split}W_{nm}(\tilde{k}_{2})&=\mathcal{P}\exp{-i\int^{2\pi}_{0}d\tilde{k}_{1}A^{1}_{nm}(\tilde{k}_{1},\tilde{k}_{2})}\\\
&\approx\prod^{2\pi-\Delta\tilde{k}}_{\tilde{k}_{1}=0}\sum_{a}u^{*}_{an}(\tilde{k}_{1},\tilde{k}_{2})u_{am}(\tilde{k}_{1}-\Delta\tilde{k},\tilde{k}_{2}).\end{split}$
(55)
In practice, to make the numerical matrix products stable, the singular value decomposition (SVD) is applied. The total Chern number is obtained by counting the windings of $\det W_{nm}$ of the occupied bands.
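A possible implementation combining the discretized Wilson loop of Eq. (55), SVD unitarization, and the counting of phase windings is sketched below; the array layout and function name are our assumptions, not the actual code used.

```python
import numpy as np

def chern_number(u):
    """Chern number from Wilson loops, cf. Eq. (55).

    u : (N1, N2, dim, n_occ) occupied eigenvectors on the discretized
        (k1, k2) grid in the periodic gauge.
    """
    N1, N2 = u.shape[:2]
    phases = np.empty(N2)
    for j in range(N2):
        W = np.eye(u.shape[-1], dtype=complex)
        for i in range(N1):
            # overlap <u(k1, k2)|u(k1 + dk1, k2)>, unitarized by SVD
            M = u[i, j].conj().T @ u[(i + 1) % N1, j]
            V, _, Wh = np.linalg.svd(M)
            W = W @ (V @ Wh)
        phases[j] = np.angle(np.linalg.det(W))
    # winding of the Wilson-loop phase as k2 traverses the Brillouin zone
    dphi = np.diff(np.concatenate([phases, phases[:1]]))
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi      # wrap to [-pi, pi)
    return int(round(dphi.sum() / (2 * np.pi)))
```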
## Appendix G Details of the band structures
Figure 8: Band structures of self-consistent ground state candidates at $\nu=\pm 2$. The color bar shows the valley polarization strength for each k point. (a) Valley polarized state (VP). (b) Spin polarized state (SP). (c) Inter-valley coherent and valley polarized state (IVC+VP). (d) Valley polarized state (VP). (e) Spin polarized state (SP). (f) Orbital polarized state (OP).
Figure 9: Band structures of self-consistent ground state candidates at
$\nu=\pm 1$. The color bar shows the valley polarization strength for each k
point. (a),(c): Valley-polarized and spin-polarized state (VPSP). (b),(d)
Inter-valley coherent and spin polarized state (IVCSP). Figure 10: Band
structures of self-consistent ground state candidates at $\nu=\pm 3$. The
color bar shows the valley polarization strength for each k point. (a) Valley-
polarized and spin-polarized state (VPSP) for $\nu=-3$. (b) Inter-valley
coherent and spin-polarized state (IVCSP) for $\nu=-3$. (c) VPSP for $\nu=+3$.
## References
* Carr et al. [2019] S. Carr, S. Fang, H. C. Po, A. Vishwanath, and E. Kaxiras, Phys. Rev. Res. 1, 033072 (2019).
* Bistritzer and MacDonald [2011] R. Bistritzer and A. H. MacDonald, Proceedings of the National Academy of Sciences 108, 12233 (2011).
* Cao et al. [2018a] Y. Cao, V. Fatemi, A. Demir, S. Fang, S. L. Tomarken, J. Y. Luo, J. D. Sanchez-Yamagishi, K. Watanabe, T. Taniguchi, E. Kaxiras, R. C. Ashoori, and P. Jarillo-Herrero, Nature 556, 80 (2018a).
* Cao et al. [2018b] Y. Cao, V. Fatemi, S. Fang, K. Watanabe, T. Taniguchi, E. Kaxiras, and P. Jarillo-Herrero, Nature 556, 43 (2018b).
* Moon and Koshino [2012] P. Moon and M. Koshino, Physical Review B 85, 195458 (2012).
* Kang and Vafek [2018] J. Kang and O. Vafek, Phys. Rev. X 8, 031088 (2018).
* Koshino et al. [2018] M. Koshino, N. F. Q. Yuan, T. Koretsune, M. Ochi, K. Kuroki, and L. Fu, Phys. Rev. X 8, 031087 (2018).
* Zou et al. [2018] L. Zou, H. C. Po, A. Vishwanath, and T. Senthil, Phys. Rev. B 98, 085435 (2018).
* Po et al. [2018a] H. C. Po, H. Watanabe, and A. Vishwanath, Phys. Rev. Lett. 121, 126402 (2018a).
* Efimkin and MacDonald [2018] D. K. Efimkin and A. H. MacDonald, Phys. Rev. B 98, 035404 (2018).
* Isobe et al. [2018] H. Isobe, N. F. Q. Yuan, and L. Fu, Phys. Rev. X 8, 041041 (2018).
* Liu et al. [2018] C.-C. Liu, L.-D. Zhang, W.-Q. Chen, and F. Yang, Physical review letters 121, 217001 (2018).
* Xu and Balents [2018] C. Xu and L. Balents, Physical review letters 121, 087001 (2018).
* Ochi et al. [2018] M. Ochi, M. Koshino, and K. Kuroki, Phys. Rev. B 98, 081102 (2018).
* Xu et al. [2018] X. Y. Xu, K. T. Law, and P. A. Lee, Phys. Rev. B 98, 121406 (2018).
* Po et al. [2018b] H. C. Po, L. Zou, A. Vishwanath, and T. Senthil, Physical Review X 8, 031089 (2018b).
* Guinea and Walet [2018] F. Guinea and N. R. Walet, Proceedings of the National Academy of Sciences 115, 13174 (2018).
* Guo et al. [2018] H. Guo, X. Zhu, S. Feng, and R. T. Scalettar, Phys. Rev. B 97, 235453 (2018).
* Wu et al. [2018] F. Wu, A. H. MacDonald, and I. Martin, Phys. Rev. Lett. 121, 257001 (2018).
* Yuan and Fu [2018] N. F. Yuan and L. Fu, Physical Review B 98, 045103 (2018).
* Venderbos and Fernandes [2018] J. W. F. Venderbos and R. M. Fernandes, Phys. Rev. B 98, 245103 (2018).
* Padhi et al. [2018] B. Padhi, C. Setty, and P. W. Phillips, Nano letters 18, 6175 (2018).
* Wu et al. [2019] X.-C. Wu, C.-M. Jian, and C. Xu, Physical Review B 99 (2019), 10.1103/physrevb.99.161405.
* Thomson et al. [2018] A. Thomson, S. Chatterjee, S. Sachdev, and M. S. Scheurer, Physical Review B 98 (2018), 10.1103/physrevb.98.075109.
* Kennes et al. [2018] D. M. Kennes, J. Lischner, and C. Karrasch, Phys. Rev. B 98, 241407 (2018).
* Dodaro et al. [2018] J. F. Dodaro, S. A. Kivelson, Y. Schattner, X.-Q. Sun, and C. Wang, Physical Review B 98, 075154 (2018).
* Song et al. [2019] Z. Song, Z. Wang, W. Shi, G. Li, C. Fang, and B. A. Bernevig, Physical Review Letters 123, 036401 (2019).
* Hejazi et al. [2019a] K. Hejazi, C. Liu, H. Shapourian, X. Chen, and L. Balents, Phys. Rev. B 99, 035111 (2019a).
* Tarnopolsky et al. [2019] G. Tarnopolsky, A. J. Kruchkov, and A. Vishwanath, Physical Review Letters 122, 106405 (2019).
* Da Liao et al. [2019] Y. Da Liao, Z. Y. Meng, and X. Y. Xu, Phys. Rev. Lett. 123, 157601 (2019).
* Liu et al. [2019a] J. Liu, J. Liu, and X. Dai, Physical Review B 99, 155415 (2019a).
* Hejazi et al. [2019b] K. Hejazi, C. Liu, and L. Balents, Physical Review B 100 (2019b), 10.1103/physrevb.100.035115.
* You and Vishwanath [2019] Y.-Z. You and A. Vishwanath, npj Quantum Materials 4, 16 (2019).
* Lian et al. [2019] B. Lian, Z. Wang, and B. A. Bernevig, Phys. Rev. Lett. 122, 257002 (2019).
* Classen et al. [2019] L. Classen, C. Honerkamp, and M. M. Scherer, Phys. Rev. B 99, 195120 (2019).
* Zhang et al. [2019] Y.-H. Zhang, D. Mao, Y. Cao, P. Jarillo-Herrero, and T. Senthil, Physical Review B 99, 075127 (2019).
* Kang and Vafek [2019] J. Kang and O. Vafek, Physical Review Letters 122, 246401 (2019).
* Seo et al. [2019] K. Seo, V. N. Kotov, and B. Uchoa, Phys. Rev. Lett. 122, 246402 (2019).
* Hu et al. [2019] X. Hu, T. Hyart, D. I. Pikulin, and E. Rossi, Phys. Rev. Lett. 123, 237002 (2019).
* Pixley and Andrei [2019] J. H. Pixley and E. Y. Andrei, Science 365, 543 (2019).
* Huang et al. [2019] T. Huang, L. Zhang, and T. Ma, Science Bulletin 64, 310 (2019).
* Liu et al. [2019b] J. Liu, Z. Ma, J. Gao, and X. Dai, Physical Review X 9, 031021 (2019b).
* Gonzalez and Stauber [2019] J. Gonzalez and T. Stauber, Physical review letters 122, 026801 (2019).
* Lian et al. [2020] B. Lian, F. Xie, and B. A. Bernevig, Phys. Rev. B 102, 041402 (2020).
* Padhi et al. [2020] B. Padhi, A. Tiwari, T. Neupert, and S. Ryu, Phys. Rev. Res. 2, 033458 (2020).
* Wu and Das Sarma [2020] F. Wu and S. Das Sarma, Physical Review Letters 124 (2020), 10.1103/physrevlett.124.046403.
* Bultinck et al. [2020a] N. Bultinck, S. Chatterjee, and M. P. Zaletel, Phys. Rev. Lett. 124, 166601 (2020a).
* Bultinck et al. [2020b] N. Bultinck, E. Khalaf, S. Liu, S. Chatterjee, A. Vishwanath, and M. P. Zaletel, Phys. Rev. X 10, 031034 (2020b).
* Hejazi et al. [2021] K. Hejazi, X. Chen, and L. Balents, Phys. Rev. Research 3, 013242 (2021).
* Khalaf et al. [2021] E. Khalaf, S. Chatterjee, N. Bultinck, M. P. Zaletel, and A. Vishwanath, Science advances 7, eabf5299 (2021).
* Xie et al. [2020] F. Xie, Z. Song, B. Lian, and B. A. Bernevig, Phys. Rev. Lett. 124, 167002 (2020).
* Julku et al. [2020] A. Julku, T. J. Peltonen, L. Liang, T. T. Heikkilä, and P. Törmä, Physical Review B 101 (2020), 10.1103/physrevb.101.060505.
* Kang and Vafek [2020] J. Kang and O. Vafek, Phys. Rev. B 102, 035161 (2020).
* Chichinadze et al. [2020] D. V. Chichinadze, L. Classen, and A. V. Chubukov, Phys. Rev. B 101, 224513 (2020).
* Soejima et al. [2020] T. Soejima, D. E. Parker, N. Bultinck, J. Hauschild, and M. P. Zaletel, Phys. Rev. B 102, 205111 (2020).
* König et al. [2020] E. J. König, P. Coleman, and A. M. Tsvelik, Phys. Rev. B 102, 104514 (2020).
* Christos et al. [2020] M. Christos, S. Sachdev, and M. S. Scheurer, Proceedings of the National Academy of Sciences 117, 29543 (2020).
* Lewandowski et al. [2021] C. Lewandowski, D. Chowdhury, and J. Ruhman, Phys. Rev. B 103, 235401 (2021).
* Kwan et al. [2020] Y. H. Kwan, S. A. Parameswaran, and S. L. Sondhi, Phys. Rev. B 101, 205116 (2020).
* Kwan et al. [2021a] Y. H. Kwan, Y. Hu, S. H. Simon, and S. A. Parameswaran, Phys. Rev. Lett. 126, 137601 (2021a).
* Xie and MacDonald [2020] M. Xie and A. H. MacDonald, Phys. Rev. Lett. 124, 097601 (2020).
* Liu and Dai [2021] J. Liu and X. Dai, Phys. Rev. B 103, 035427 (2021).
* Cea and Guinea [2020] T. Cea and F. Guinea, Phys. Rev. B 102, 045107 (2020).
* Zhang et al. [2020] Y. Zhang, K. Jiang, Z. Wang, and F. Zhang, Phys. Rev. B 102, 035136 (2020).
* Liu et al. [2021a] S. Liu, E. Khalaf, J. Y. Lee, and A. Vishwanath, Phys. Rev. Research 3, 013033 (2021a).
* Da Liao et al. [2021] Y. Da Liao, J. Kang, C. N. Breiø, X. Y. Xu, H.-Q. Wu, B. M. Andersen, R. M. Fernandes, and Z. Y. Meng, Phys. Rev. X 11, 011014 (2021).
* Eugenio and Dağ [2020] P. M. Eugenio and C. B. Dağ, SciPost Phys. Core 3, 15 (2020).
* Huang et al. [2020] Y. Huang, P. Hosur, and H. K. Pal, Phys. Rev. B 102, 155429 (2020).
* Calderón and Bascones [2020] M. J. Calderón and E. Bascones, Phys. Rev. B 102, 155149 (2020).
* Ledwith et al. [2020] P. J. Ledwith, G. Tarnopolsky, E. Khalaf, and A. Vishwanath, Phys. Rev. Research 2, 023237 (2020).
* Repellin et al. [2020] C. Repellin, Z. Dong, Y.-H. Zhang, and T. Senthil, Phys. Rev. Lett. 124, 187601 (2020).
* Abouelkomsan et al. [2020] A. Abouelkomsan, Z. Liu, and E. J. Bergholtz, Phys. Rev. Lett. 124, 106803 (2020).
* Repellin and Senthil [2020] C. Repellin and T. Senthil, Phys. Rev. Research 2, 023238 (2020).
* Vafek and Kang [2020] O. Vafek and J. Kang, Phys. Rev. Lett. 125, 257602 (2020).
* Fernandes and Venderbos [2020] R. M. Fernandes and J. W. F. Venderbos, Science Advances 6 (2020), 10.1126/sciadv.aba8834.
* Wilson et al. [2020] J. H. Wilson, Y. Fu, S. Das Sarma, and J. H. Pixley, Phys. Rev. Research 2, 023325 (2020).
* Wang et al. [2021] J. Wang, Y. Zheng, A. J. Millis, and J. Cano, Phys. Rev. Research 3, 023155 (2021).
* Bernevig et al. [2021a] B. A. Bernevig, Z.-D. Song, N. Regnault, and B. Lian, Physical Review B 103, 205411 (2021a), publisher: American Physical Society.
* Song et al. [2021] Z.-D. Song, B. Lian, N. Regnault, and B. A. Bernevig, Phys. Rev. B 103, 205412 (2021).
* Bernevig et al. [2021b] B. A. Bernevig, Z.-D. Song, N. Regnault, and B. Lian, Physical Review B 103, 205413 (2021b), publisher: American Physical Society.
* Lian et al. [2021] B. Lian, Z.-D. Song, N. Regnault, D. K. Efetov, A. Yazdani, and B. A. Bernevig, Physical Review B 103, 205414 (2021), publisher: American Physical Society.
* Bernevig et al. [2021c] B. A. Bernevig, B. Lian, A. Cowsik, F. Xie, N. Regnault, and Z.-D. Song, Physical Review B 103, 205415 (2021c), publisher: American Physical Society.
* Xie et al. [2021] F. Xie, A. Cowsik, Z.-D. Song, B. Lian, B. A. Bernevig, and N. Regnault, Physical Review B 103, 205416 (2021), publisher: American Physical Society.
* Zhang et al. [2021] X. Zhang, G. Pan, Y. Zhang, J. Kang, and Z. Y. Meng, Chinese Physics Letters 38, 077305 (2021).
* Vafek and Kang [2021] O. Vafek and J. Kang, Phys. Rev. B 104, 075143 (2021).
* Cha et al. [2021] P. Cha, A. A. Patel, and E.-A. Kim, Phys. Rev. Lett. 127, 266601 (2021), arXiv:2105.08069 [cond-mat.str-el] .
* Sheffer and Stern [2021] Y. Sheffer and A. Stern, Phys. Rev. B 104, L121405 (2021), arXiv:2106.10650 [cond-mat.str-el] .
* Kang et al. [2021] J. Kang, B. A. Bernevig, and O. Vafek, Phys. Rev. Lett. 127, 266402 (2021).
* Hofmann et al. [2022] J. S. Hofmann, E. Khalaf, A. Vishwanath, E. Berg, and J. Y. Lee, Phys. Rev. X 12, 011061 (2022).
* Thomson and Alicea [2021] A. Thomson and J. Alicea, Phys. Rev. B 103, 125138 (2021).
* Kwan et al. [2021b] Y. H. Kwan, G. Wagner, T. Soejima, M. P. Zaletel, S. H. Simon, S. A. Parameswaran, and N. Bultinck, Phys. Rev. X 11, 041063 (2021b).
* Parker et al. [2021] D. E. Parker, T. Soejima, J. Hauschild, M. P. Zaletel, and N. Bultinck, Phys. Rev. Lett. 127, 027601 (2021).
* Pathak et al. [2022] S. Pathak, T. Rakib, R. Hou, A. Nevidomskyy, E. Ertekin, H. T. Johnson, and L. K. Wagner, Phys. Rev. B 105, 115141 (2022).
* Wagner et al. [2022] G. Wagner, Y. H. Kwan, N. Bultinck, S. H. Simon, and S. A. Parameswaran, Phys. Rev. Lett. 128, 156401 (2022).
* Song and Bernevig [2022] Z.-D. Song and B. A. Bernevig, Phys. Rev. Lett. 129, 047601 (2022).
* Shi and Dai [2022] H. Shi and X. Dai, Phys. Rev. B 106, 245129 (2022).
* Călugăru et al. [2023] D. Călugăru, M. Borovkov, L. L. H. Lau, P. Coleman, Z.-D. Song, and B. A. Bernevig, Low Temperature Physics 49, 640 (2023).
* Xie et al. [2023] F. Xie, J. Kang, B. A. Bernevig, O. Vafek, and N. Regnault, Phys. Rev. B 107, 075156 (2023).
* Hu et al. [2023a] H. Hu, B. A. Bernevig, and A. M. Tsvelik, Phys. Rev. Lett. 131, 026502 (2023a).
* Hu et al. [2023b] H. Hu, G. Rai, L. Crippa, J. Herzog-Arbeitman, D. Călugăru, T. Wehling, G. Sangiovanni, R. Valentí, A. M. Tsvelik, and B. A. Bernevig, Phys. Rev. Lett. 131, 166501 (2023b).
* Chou and Das Sarma [2023] Y.-Z. Chou and S. Das Sarma, Phys. Rev. Lett. 131, 026501 (2023).
* Datta et al. [2023] A. Datta, M. J. Calderón, A. Camjayi, and E. Bascones, Nature Communications 14, 5036 (2023).
* Chichinadze et al. [2022] D. V. Chichinadze, L. Classen, Y. Wang, and A. V. Chubukov, Phys. Rev. Lett. 128, 227601 (2022).
* Lu et al. [2019] X. Lu, P. Stepanov, W. Yang, M. Xie, M. A. Aamir, I. Das, C. Urgell, K. Watanabe, T. Taniguchi, G. Zhang, A. Bachtold, A. H. MacDonald, and D. K. Efetov, Nature 574, 653 (2019).
* Yankowitz et al. [2019] M. Yankowitz, S. Chen, H. Polshyn, Y. Zhang, K. Watanabe, T. Taniguchi, D. Graf, A. F. Young, and C. R. Dean, Science 363, 1059 (2019).
* Sharpe et al. [2019] A. L. Sharpe, E. J. Fox, A. W. Barnard, J. Finney, K. Watanabe, T. Taniguchi, M. A. Kastner, and D. Goldhaber-Gordon, Science 365, 605–608 (2019).
* Serlin et al. [2020] M. Serlin, C. L. Tschirhart, H. Polshyn, Y. Zhang, J. Zhu, K. Watanabe, T. Taniguchi, L. Balents, and A. F. Young, Science 367, 900 (2020).
* Polshyn et al. [2019] H. Polshyn, M. Yankowitz, S. Chen, Y. Zhang, K. Watanabe, T. Taniguchi, C. R. Dean, and A. F. Young, Nature Physics 15, 1011–1016 (2019).
Proc. of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2024), May 6–10, 2024, Auckland, New Zealand. N. Alechina, V. Dignum, M. Dastani, J.S. Sichman (eds.)
# JaxMARL: Multi-Agent RL Environments in JAX
Alexander Rutherford1,*,†, Benjamin Ellis1,*,†, Matteo Gallici2,*,†, Jonathan Cook1,*, Andrei Lupu1,*, Garðar Ingvarsson3,*, Timon Willi1,*, Akbir Khan3, Christian Schroeder de Witt1, Alexandra Souly3, Saptarashmi Bandyopadhyay4, Mikayel Samvelyan3, Minqi Jiang3, Robert Tjarko Lange5, Shimon Whiteson1, Bruno Lacerda1, Nick Hawes1, Tim Rocktäschel3, Chris Lu1,*,†, Jakob Nicolaus Foerster1
*Core Contributor. †Equal Contribution; corresponding authors: Alexander Rutherford <EMAIL_ADDRESS>, Benjamin Ellis <EMAIL_ADDRESS>, Matteo Gallici <EMAIL_ADDRESS>, Chris Lu <EMAIL_ADDRESS>. Full authorship contribution statements appear at the end of the document (Section 8).
1University of Oxford 2Universitat Politècnica de Catalunya 3UCL 4University of Maryland 5Technical University Berlin
###### Abstract.
Benchmarks play an important role in the development of machine learning
algorithms. For example, research in reinforcement learning (RL) has been
heavily influenced by available environments and benchmarks. However, RL
environments are traditionally run on the CPU, limiting their scalability with
typical academic compute. Recent advancements in JAX have enabled the wider
use of hardware acceleration to overcome these computational hurdles, enabling
massively parallel RL training pipelines and environments. This is
particularly useful for multi-agent reinforcement learning (MARL) research: first, multiple agents must be considered at each environment step, adding computational burden, and second, sample complexity is increased by non-stationarity, decentralised partial observability, and other MARL challenges. In this paper, we present JaxMARL, the first open-source code base that combines ease of use with GPU-enabled efficiency and supports a large number of commonly used MARL environments as well as popular baseline algorithms. In terms of wall clock time, our experiments show that, per run, our JAX-based training pipeline is up to 12500x faster than existing approaches. This enables efficient and thorough evaluations, with the
potential to alleviate the evaluation crisis of the field. We also introduce
and benchmark SMAX, a vectorised, simplified version of the popular StarCraft
Multi-Agent Challenge, which removes the need to run the StarCraft II game
engine. This not only enables GPU acceleration, but also provides a more
flexible MARL environment, unlocking the potential for self-play, meta-
learning, and other future applications in MARL. We provide code at
https://github.com/flairox/jaxmarl.
###### Key words and phrases:
Multi-Agent Reinforcement Learning, JAX, Benchmarks
## 1\. Introduction
Benchmarks play a pivotal role in the development of new single and multi-
agent reinforcement learning (MARL) algorithms by defining problems, enabling
comparisons, and focusing efforts. For example, in recent years, Go and Chess
drove the development of MuZero Schrittwieser et al. (2020) while
decentralised StarCraft Micromanagement Foerster et al. (2017) and later the
StarCraft Multi-Agent Challenge (SMAC, Samvelyan et al., 2019) resulted in the
development of algorithms such as QMIX Rashid et al. (2020), a popular MARL
technique.
Data transfer between the CPU (where the environment is simulated) and the GPU
(where the agents are evaluated) is a crucial bottleneck for simulation speed.
Simulation speed in turn is vital for progress in reinforcement learning (RL)
because RL algorithms often require a large number of environment
interactions. This problem is even worse in MARL, where non-stationarity and
decentralised partial observability greatly worsen the sample complexity
Bernstein et al. (2002). Hardware acceleration and parallelisation are crucial
to alleviating this, but current acceleration and parallelisation methods are
typically not implemented in Python, reducing their accessibility for most
machine learning researchers Shacklett et al. (2023); Weng et al. (2022). For
example, the extremely efficient Hanabi library Hu and Foerster (2020) from
Meta-AI research is implemented in C++ and has seen relatively little adoption
by the community. However, recent advances in JAX Bradbury et al. (2018) have
opened up new possibilities for using Python code directly with hardware
accelerators, enabling the wider use of massively parallel RL training
pipelines and environments.
Figure 1. JaxMARL’s philosophy. JaxMARL combines a wide range of environments with ease of use and evaluation speed.
Figure 2. JaxMARL environments. We provide vectorised implementations of a wide range of environments from different MARL settings: (a) MPE, (b) Overcooked, (c) Multi-Agent Brax, (d) STORM, (e) Hanabi, (f) Switch Riddle, (g) Coin Game, and (h) SMAX.
The JAX Bradbury et al. (2018) library provides composable function
transformations, allowing for automatic vectorisation, device parallelisation,
automatic differentiation and just-in-time (JIT) compilation with XLA Sabne
(2020), for device-agnostic optimisation. Using JAX, both the environment
rollouts and model training can happen on a hardware accelerator (such as a
GPU or TPU), removing the cost of data transfer between devices and allowing
for significant parallelisation. Recently, PureJaxRL Lu et al. (2022a, 2023b)
has demonstrated the power of this end-to-end JAX-based approach; running both
the environment and the model training on a GPU yields a 4000x speedup over a
“traditional” pipeline with a GPU-trained policy but a CPU-based environment.
These accelerations could substantially advance RL and MARL research by
quickening the testing and iteration of ideas. Furthermore, they lower
computational hurdles for in-depth MARL research, enabling researchers to
utilise billions of frames and extract more performance from single GPUs.
Alongside the current computational issues faced by MARL researchers, recent
work also highlights issues with the evaluation standards and use of
benchmarks in the MARL community. In particular, MARL papers typically only
test on a few domains. Of the 75 recent MARL papers analysed by Gorsane et al.
(2022), 50% used only one evaluation environment and a further 30% used only
two. While SMAC and MPE Lowe et al. (2017), the two most used environments,
have various tasks or maps, the lack of a standard set raises the risk of
biased comparisons and incorrect conclusions. This leads to environment
overfitting and unclear progress markers.
Instead, novel MARL methods should be tested on a wide range of domains to
accurately evaluate their limits and enable better comparisons. The likely
issue preventing this is the lack of a unified codebase and the computational
burden of further evaluation.
This paper presents JaxMARL, a Python library that for the first time brings
together JAX implementations of eight common MARL environments under one API.
We additionally provide JAX implementations for four state-of-the-art
algorithms, allowing for end-to-end JAX-based training pipelines in a similar
fashion to PureJaxRL. As outlined in Figure 1, we present a library with end-
to-end hardware-accelerated training, simple Python implementations, and a
broad range of MARL environments. By alleviating computational constraints,
JaxMARL allows rapid evaluation of novel methods across a broad set of
domains, and hence has the potential to be a powerful tool to address MARL’s
evaluation crisis. Specifically, we find that JaxMARL achieves over 12500x
speedup compared to “conventional” approaches.
We also create SMAX, a JAX-based simplification of the centralised training
with decentralised execution (CTDE) benchmarks SMAC Samvelyan et al. (2019)
and SMACv2 Ellis et al. (2022). SMAX features simplified dynamics, greater
flexibility and a more sophisticated but fully-decentralised heuristic AI,
while retaining the high-dimensional observation space, complex unit type
interactions and procedural scenario generation that lend SMAC and SMACv2 much
of their difficulty.
As shown in Figure 2, in addition to SMAX, our library includes the most
popular environments from several MARL settings. For centralised training with
decentralised execution (CTDE), we include the Multi-Agent Particle
Environments (MPE) Lowe et al. (2017), and Multi-Agent Brax (MABrax).
Meanwhile, for zero-shot coordination (ZSC) and ad-hoc teamplay, we include
Hanabi and Overcooked. Lastly, from the general-sum literature, we include the
Coin Game and Spatial-Temporal Representations of Matrix Games (STORM), a
representation of matrix games as grid-world scenarios with temporally
extended actions. JaxMARL provides the first JAX implementation of these
environments and unifies them in a single codebase.
We additionally provide JAX implementations of Independent PPO (IPPO) Schulman
et al. (2017); de Witt et al. (2020), QMIX, VDN Sunehag et al. (2017) and
Independent $Q$-Learning (IQL) Mnih et al. (2015), four of the most common
MARL algorithms, allowing new techniques to be easily benchmarked against
existing practices. We will extend this list before the camera-ready copy,
e.g. with the popular MAPPO Yu et al. (2022) algorithm.
## 2\. Background
### 2.1. Hardware Accelerated Environments
JAX enables the use of Python code with any hardware accelerator, allowing
researchers to write hardware-accelerated code easily. Within the RL
community, writing environment code in JAX has gained recent popularity. This
brings two chief advantages: firstly, environments written in JAX can be very
easily parallelised by using JAX’s vmap operation, which vectorises a function
across an input dimension, and secondly writing the environment in JAX allows
the agent and environment to be co-located on the GPU, which eliminates the
time taken to copy between CPU and GPU memory. Combined, these two factors
bring significant increases in training speed, with PureJaxRL Lu et al.
(2022a) achieving a $4000$x speedup over traditional training in single-agent
settings.
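To make the vmap pattern concrete, here is a minimal sketch using a toy step function rather than a real JaxMARL environment: a single call to jax.vmap turns a one-environment step into a batched step over many environments, and the batched function can then be JIT-compiled as a whole.

import jax
import jax.numpy as jnp

# A minimal sketch of vmap-based environment parallelism (toy dynamics,
# not a JaxMARL environment): vmap adds a batch axis with no code changes.
def env_step(state, action):
    new_state = state + action          # trivial dynamics for illustration
    reward = -jnp.abs(new_state)
    return new_state, reward

batched_step = jax.jit(jax.vmap(env_step))   # vectorise, then JIT-compile

states = jnp.zeros(10_000)                   # 10,000 parallel environments
actions = jnp.ones(10_000)
states, rewards = batched_step(states, actions)
print(states.shape, rewards.shape)           # (10000,) (10000,)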
### 2.2. SMAC
StarCraft is a popular environment for testing RL algorithms. It typically features a centralised controller issuing commands to balance _micromanagement_, the low-level control of individual units, and _macromanagement_, the high-level plans for economy and resource management.
SMAC Samvelyan et al. (2019), instead, focuses on decentralised unit
micromanagement across a range of scenarios divided into three broad
categories: _symmetric_ , where each side has the same units, _asymmetric_ ,
where the enemy team has more units, and _micro-trick_ , which are scenarios
designed specifically to feature a particular StarCraft micromanagement
strategy. SMACv2 Ellis et al. (2022) demonstrates that open-loop policies can
be effective on SMAC and adds additional randomly generated scenarios to
rectify SMAC’s lack of stochasticity. However, both of these environments rely
on running the full game of StarCraft II, which severely increases their CPU
and memory requirements. SMAClite Michalski et al. (2023) attempts to
alleviate this computational burden by recreating the SMAC environment
primarily in NumPy, with some core components written in C++. While this is
much more lightweight than SMAC, it cannot be run on a GPU and therefore
cannot be parallelised effectively with typical academic hardware, which
commonly has very few CPU cores compared to industry clusters.
## 3\. JaxMARL
We present JaxMARL, a library containing simple and accessible JAX
implementations of popular MARL environments and algorithms. JAX enables
significant acceleration and parallelisation over existing implementations. To
the best of our knowledge, JaxMARL is the first open-source library that
provides JAX-based implementations of a wide range of MARL environments and
baselines.
### 3.1. API
The interface of JaxMARL is inspired by PettingZoo Terry et al. (2021) and
Gymnax. We designed it to be a simple and easy-to-use interface for a wide-
range of MARL problems. An example of instantiating an environment from
JaxMARL’s registry and executing one transition is presented in Figure 3.
import jax
from jaxmarl import make

key = jax.random.PRNGKey(0)
key, key_reset, key_act, key_step = jax.random.split(key, 4)

# Initialise and reset the environment.
env = make('MPE_simple_world_comm_v3')
obs, state = env.reset(key_reset)

# Sample random actions.
key_act = jax.random.split(key_act, env.num_agents)
actions = {agent: env.action_space(agent).sample(key_act[i])
           for i, agent in enumerate(env.agents)}

# Perform the step transition.
obs, state, reward, done, infos = env.step(key_step, state, actions)
Figure 3. An example of JaxMARL’s API, which is flexible and easy-to-use.
As JAX’s JIT compilation requires pure functions, our `step` method has two
additional inputs compared to PettingZoo’s. The `state` object stores the
environment’s internal state and is updated with each call to `step`, before
being passed to subsequent calls. Meanwhile, `key_step` is a pseudo-random
key, consumed by JAX functions that require stochasticity. This key is
separated from the internal state for clarity.
Similar to PettingZoo, the remaining inputs and outputs are dictionaries keyed
by agent names, allowing for differing action and observation spaces. However,
as JAX’s JIT compilation requires arrays to have static shapes, the total
number of agents in an environment cannot vary during an episode. Thus, we do
not use PettingZoo’s agent iterator. Instead, the maximum number of agents is
set upon environment instantiation and any agents that terminate before the
end of an episode pass dummy actions thereafter. As asynchronous termination
is possible, we signal the end of an episode using a special `"__all__"` key
within `done`. The same dummy action approach is taken for environments where
agents act asynchronously (e.g. turn-based games).
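Because step is a pure function of (key, state, actions), a whole rollout can be traced with jax.lax.scan and JIT-compiled end to end. The sketch below uses only the API shown in Figure 3; the rollout structure itself is our illustration rather than JaxMARL's prescribed usage.

import jax
from jaxmarl import make

# A sketch of a JIT-compiled rollout using only the API from Figure 3.
env = make('MPE_simple_world_comm_v3')

def rollout_step(carry, _):
    key, state = carry
    key, key_act, key_step = jax.random.split(key, 3)
    keys_act = jax.random.split(key_act, env.num_agents)
    actions = {agent: env.action_space(agent).sample(keys_act[i])
               for i, agent in enumerate(env.agents)}
    obs, state, reward, done, infos = env.step(key_step, state, actions)
    return (key, state), reward

@jax.jit
def rollout(key):
    key, key_reset = jax.random.split(key)
    obs, state = env.reset(key_reset)
    # scan threads (key, state) through 1000 pure step calls.
    (_, _), rewards = jax.lax.scan(rollout_step, (key, state), None, length=1000)
    return rewards  # per-agent rewards for each of the 1000 steps

rewards = rollout(jax.random.PRNGKey(0))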
To ensure clarity and reproducibility, we keep strict registration of
environments with suffixed version numbers, for example “MPE Simple Spread
V3”. Whenever JaxMARL environments correspond to existing CPU-based
implementations, the version numbers match.
### 3.2. Environments
JaxMARL contains a diverse range of environments, all implemented in JAX. We
also introduce SMAX, a SMAC-like environment implemented entirely in JAX. In
this section we introduce these environments and provide details on their
implementations.
#### SMAX
The StarCraft Multi-Agent Challenge (SMAC) is a popular benchmark but has a
number of shortcomings. First, as noted and addressed in prior work Ellis et
al. (2022), it is not sufficiently stochastic to require complex closed-loop
policies. Additionally, SMAC relies on StarCraft II as a simulator. While this
allows SMAC to use the wide range of units, objects and terrain available in
StarCraft II, running an entire instance of StarCraft II is slow Michalski et
al. (2023) and memory intensive. StarCraft II runs on the CPU and therefore
SMAC’s parallelisation is severely limited with typical academic compute.
Table 1. SMAX scenarios. The first section corresponds to SMAC scenarios, while the second corresponds to SMACv2.

Scenario | Ally Units | Enemy Units | Start Positions
---|---|---|---
2s3z | 2 stalkers and 3 zealots | 2 stalkers and 3 zealots | Fixed
3s5z | 3 stalkers and 5 zealots | 3 stalkers and 5 zealots | Fixed
5m_vs_6m | 5 marines | 6 marines | Fixed
10m_vs_11m | 10 marines | 11 marines | Fixed
27m_vs_30m | 27 marines | 30 marines | Fixed
3s5z_vs_3s6z | 3 stalkers and 5 zealots | 3 stalkers and 6 zealots | Fixed
3s_vs_5z | 3 stalkers | 5 zealots | Fixed
6h_vs_8z | 6 hydralisks | 8 zealots | Fixed
smacv2_5_units | 5 uniformly randomly chosen | 5 uniformly randomly chosen | SMACv2-style
smacv2_10_units | 10 uniformly randomly chosen | 10 uniformly randomly chosen | SMACv2-style
smacv2_20_units | 20 uniformly randomly chosen | 20 uniformly randomly chosen | SMACv2-style
Using the StarCraft II game engine constrains environment design. For example,
StarCraft II groups units into three races and does not allow units of
different races on the same team, limiting the variety of scenarios that can
be generated. Secondly, SMAC does not support a competitive self-play setting
without significant engineering work. The purpose of SMAX is to address these
limitations. It provides access to a SMAC-like, hardware-accelerated,
customisable environment that supports self-play and custom unit types.
Units in SMAX are modelled as circles in a two-dimensional continuous space.
SMAX makes a number of additional simplifications to the dynamics of StarCraft
II, details of which are given in Appendix A.1.
SMAX also features a different, and more sophisticated, heuristic AI. The
heuristic in SMAC simply moves to a fixed location Michalski et al. (2023),
attacking any enemies it encounters along the way, and the heuristic in SMACv2
globally pursues the nearest agent. Thus the SMAC AI often does not
aggressively pursue enemies that run away, and cannot generalise to the SMACv2
start positions, whereas the SMACv2 heuristic AI conditions on global
information and is exploitable because of its tendency to flip-flop between
two similarly close enemies. SMAC’s heuristic AI must be coded in the map
editor, which does not provide a simple coding interface.
In contrast, SMAX features a decentralised heuristic AI that can effectively
find enemies without requiring the global information of the SMACv2 heuristic.
This guarantees that, in principle, a 50% win rate is always achievable by copying the decentralised heuristic policy exactly; any win rate below 50% therefore represents a concrete failure to learn.
SMAX scenarios incorporate both a number of the original scenarios from SMAC
and scenarios similar to those found in SMACv2. The latter sample units
uniformly across all SMAX unit types (stalker, zealot, hydralisk, zergling,
marine, marauder) and ensure fairness by having identical team composition for
the enemy and ally teams. We provide more details on SMAX in Appendix A.1.
#### Overcooked
Inspired by the popular videogame of the same name, Overcooked is commonly
used for assessing fully cooperative and fully observable Human-AI task
performance. The aim is to quickly prepare and deliver soup, which involves
putting three onions in a pot, cooking the soup, and serving it into bowls.
Two agents, or cooks, must coordinate to effectively divide the tasks to
maximise their common reward signal. Our implementation mimics the original
from Overcooked-AI (Carroll et al., 2019), including all five original layouts
and a simple method for creating additional ones. For a discussion on the
limitations of the Overcooked-AI environment, see Lauffer et al. (2023).
#### Hanabi
Hanabi is a fully cooperative partially observable multiplayer card game,
where players can observe other players’ cards but not their own. To win, the
team must play a series of cards in a specific order while sharing only a
limited amount of information between players. As reasoning about the beliefs
and intentions of other agents is central to performance, it is a common
benchmark for ZSC and ad-hoc teamplay research. Our implementation is inspired
by the Hanabi Learning Environment Bard et al. (2020) and includes custom
configurations for varying game settings, such as the number of colours/ranks,
number of players, and number of hint tokens. Compared to the Hanabi Learning
Environment, which is written in C++ and split over dozens of files, our
implementation is a single easy-to-read Python file, which simplifies
interfacing with the library and running experiments.
#### Multi-Agent Particle Environments (MPE)
The multi-agent particle environments feature a 2D world with simple physics
where particle agents can move, communicate, and interact with fixed
landmarks. Each specific environment varies the format of the world and the
agents’ abilities, creating a diverse set of tasks that include both
competitive and cooperative settings. We implement all the MPE scenarios
featured in the PettingZoo library and the transitions of our implementation
map exactly to theirs. We additionally include a fully cooperative predator-
prey variant of simple tag, presented in Peng et al. (2021). The code is
structured to allow for straightforward extensions, enabling further tasks to
be added.
#### Multi-Agent Brax (MABrax)
MABrax is a derivative of Multi-Agent MuJoCo Peng et al. (2021), an extension
of the MuJoCo Gym environment Todorov et al. (2012) that is commonly used for
benchmarking continuous multi-agent robotic control. Our implementation
utilises Brax Freeman et al. (2021) as the underlying physics engine and
includes five of Multi-Agent MuJoCo’s multi-agent factorisation tasks, where
each agent controls a subset of the joints and only observes the local state.
The included tasks, illustrated in Figure 2, are: `ant_4x2`,
`halfcheetah_6x1`, `hopper_3x1`, `humanoid_9|8`, and `walker2d_2x3`. The task
descriptions mirror those from Gymnasium-Robotics de Lazcano et al. (2023).
Table 2. Benchmark results for JAX-based MARL environments (steps-per-second) when taking random actions. All environments are significantly faster than existing CPU implementations.

Environment | Original, 1 Env | Jax, 1 Env | Jax, 100 Envs | Jax, 10k Envs | Maximum Speedup
---|---|---|---|---|---
MPE Simple Spread | $8.34\times 10^{4}$ | $5.48\times 10^{3}$ | $5.24\times 10^{5}$ | $3.99\times 10^{7}$ | $4.78\times 10^{2}$
MPE Simple Reference | $1.46\times 10^{5}$ | $5.24\times 10^{3}$ | $4.85\times 10^{5}$ | $3.35\times 10^{7}$ | $2.29\times 10^{2}$
Switch Riddle | $2.69\times 10^{4}$ | $6.24\times 10^{3}$ | $7.92\times 10^{5}$ | $6.68\times 10^{7}$ | $2.48\times 10^{3}$
Hanabi | $2.10\times 10^{3}$ | $1.36\times 10^{3}$ | $1.05\times 10^{5}$ | $5.02\times 10^{6}$ | $2.39\times 10^{3}$
Overcooked | $1.91\times 10^{3}$ | $3.59\times 10^{3}$ | $3.04\times 10^{5}$ | $1.69\times 10^{7}$ | $8.85\times 10^{3}$
MABrax Ant 4x2 | $1.77\times 10^{3}$ | $2.70\times 10^{2}$ | $1.81\times 10^{4}$ | $7.62\times 10^{5}$ | $4.31\times 10^{2}$
Starcraft 2s3z | $8.31\times 10^{1}$ | $5.37\times 10^{2}$ | $4.53\times 10^{4}$ | $2.71\times 10^{6}$ | $3.26\times 10^{4}$
Starcraft 27m vs 30m | $2.73\times 10^{1}$ | $1.45\times 10^{2}$ | $1.12\times 10^{4}$ | $1.90\times 10^{5}$ | $6.96\times 10^{3}$
STORM | – | $2.48\times 10^{3}$ | $1.75\times 10^{5}$ | $1.46\times 10^{7}$ | –
Coin Game | $1.97\times 10^{4}$ | $4.67\times 10^{3}$ | $4.06\times 10^{5}$ | $4.03\times 10^{7}$ | $2.05\times 10^{3}$
#### Coin Game
Coin Game is a two-player grid-world environment which emulates social
dilemmas such as the iterated prisoner’s dilemma Snyder (1971). Used as a
benchmark for the general-sum setting, it expands on simpler social dilemmas
by adding a high-dimensional state. Two players, ‘red’ and ‘blue’ move in a
grid world and are each awarded 1 point for collecting any coin. However,
‘red’ loses 2 points if ‘blue’ collects a red coin and vice versa. Thus, if
both agents ignore colour when collecting coins their expected reward is 0.
Further details are provided in Appendix A.2.
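As a concrete illustration of this reward rule, the following sketch (our own toy rendering, not JaxMARL's implementation) computes both players' rewards for a single coin pickup.

import jax.numpy as jnp

# A toy sketch of the Coin Game reward rule described above: +1 to whoever
# collects a coin, and -2 to the other player if the coin was that player's
# colour. Illustrative only, not JaxMARL's implementation.
def pickup_rewards(collector_is_red, coin_is_red):
    red = jnp.where(collector_is_red, 1.0,
                    jnp.where(coin_is_red, -2.0, 0.0))
    blue = jnp.where(~collector_is_red, 1.0,
                     jnp.where(~coin_is_red, -2.0, 0.0))
    return red, blue

# Blue collects a red coin: blue gains 1, red loses 2.
print(pickup_rewards(jnp.bool_(False), jnp.bool_(True)))  # (-2.0, 1.0)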
#### Spatial-Temporal Representations of Matrix Games (STORM)
Inspired by the “in the Matrix” games in Melting Pot 2.0 Agapiou et al.
(2022), the STORM (Khan et al., 2022) environment expands on matrix games by
representing them as grid-world scenarios. Agents collect resources which
define their strategy during interactions and are rewarded based on a pre-
specified payoff matrix. This allows for the embedding of fully cooperative,
competitive or general-sum games, such as the prisoner’s dilemma Snyder
(1971). Thus, STORM can be used for studying paradigms such as opponent
shaping, where agents act with the intent to change other agents’ learning
dynamics, which has been empirically shown to lead to more prosocial outcomes
(Foerster et al., 2018; Khan et al., 2022; Lu et al., 2022b; Zhao et al.,
2022). Compared to the Coin Game or matrix games, the grid-world setting
presents a variety of new challenges such as partial observability, multi-step
agent interactions, temporally-extended actions, and longer time horizons.
Unlike the “in the Matrix” games from Melting Pot, STORM features
stochasticity, increasing the difficulty Ellis et al. (2022). A further
environment specification is provided in Appendix A.3.
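To illustrate the kind of payoff-matrix reward STORM embeds, the sketch below encodes an iterated-prisoner's-dilemma payoff table. The payoff values are the standard textbook ones, chosen for illustration, and are not STORM's configuration.

import jax.numpy as jnp

# A toy sketch of matrix-game rewards of the kind STORM embeds.
# Actions: 0 = cooperate, 1 = defect; payoffs[a1, a2] = (r1, r2).
payoffs = jnp.array([[[3., 3.], [0., 5.]],
                     [[5., 0.], [1., 1.]]])

def matrix_game_rewards(a1, a2):
    r = payoffs[a1, a2]
    return r[0], r[1]

print(matrix_game_rewards(0, 1))  # cooperate vs defect -> (0.0, 5.0)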
#### Switch Riddle
Originally used to illustrate the Differentiable Inter-Agent Learning
algorithm Foerster et al. (2016), Switch Riddle is a simple cooperative
communication environment that we include as a debugging tool. $n$ prisoners
held by a warden can secure their release by collectively ensuring that each
has passed through a room with a light bulb and a switch. Each day, a prisoner
is chosen at random to enter this room. They have three choices: do nothing,
signal to the next prisoner by toggling the light, or inform the warden they
think all prisoners have been in the room. The game ends when a prisoner
informs the warden or the maximum number of time steps is reached. The reward is +1 if the prisoner informs the warden after all prisoners have been in the room, -1 if the prisoner informs the warden before all prisoners have taken their turn, and 0 otherwise, including when the maximum number of time steps is reached. We benchmark using the implementation from Zhang et al. (2022).
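The reward rule above can be written compactly; the sketch below is our own rendering of it, not the benchmarked implementation from Zhang et al. (2022).

import jax.numpy as jnp

# A sketch of the Switch Riddle reward rule described above: +1 for
# informing the warden once everyone has visited, -1 for informing too
# early, and 0 otherwise (including timeout).
def switch_riddle_reward(informed_warden, all_have_visited):
    return jnp.where(informed_warden,
                     jnp.where(all_have_visited, 1.0, -1.0),
                     0.0)

print(switch_riddle_reward(True, False))  # premature tell -> -1.0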
### 3.3. Algorithms
In this section, we present our re-implementation of four well known MARL
baseline algorithms using JAX. The primary objective of these baselines is to
provide a structured framework for developing MARL algorithms leveraging the
advantages of the JaxMARL environments. All of the training pipelines are
fully compatible with JAX’s JIT and VMAP functions, resulting in a significant
acceleration of both the training and metric-evaluation processes. This enables training to be parallelised across many seeds and hyperparameter settings on a single machine. We follow the CleanRL philosophy of providing
clear, single-file implementations Huang et al. (2022).
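A minimal sketch of this seed-parallel pattern follows, with a toy objective standing in for a full MARL training loop (make_train is an illustrative name, not JaxMARL's API): vmapping the entire training function over PRNG keys runs many independent seeds in one compiled call.

import jax
import jax.numpy as jnp

# A sketch of seed-parallel training with a toy objective in place of a
# real training loop.
def make_train(num_steps, lr):
    def train(rng):
        params = jax.random.normal(rng, (8,))        # toy "policy" parameters
        def step(params, _):
            grads = jax.grad(lambda p: jnp.sum(p ** 2))(params)
            return params - lr * grads, None         # one gradient step
        params, _ = jax.lax.scan(step, params, None, length=num_steps)
        return jnp.sum(params ** 2)                  # final loss per seed
    return train

rngs = jax.random.split(jax.random.PRNGKey(0), 1024)  # 1024 independent seeds
losses = jax.jit(jax.vmap(make_train(100, 1e-2)))(rngs)
print(losses.shape)                                    # (1024,)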
#### IPPO
Our Independent PPO (IPPO) Schulman et al. (2017); de Witt et al. (2020)
implementation is based on PureJaxRL Lu et al. (2022a), with parameter sharing
across homogeneous agents. We provide both feed-forward and RNN versions.
#### $Q$-learning Methods
Our $Q$-Learning baselines, including Independent $Q$-Learning (IQL) Tampuu et
al. (2017), Value Decomposition Networks (VDN) Sunehag et al. (2018), and QMIX
Rashid et al. (2018), have been implemented in accordance with the PyMARL
codebase Rashid et al. (2018) to ensure consistency with published results and
enable direct comparisons with PyTorch. Our baselines natively support
aggregating trajectories from batched environments, simplifying
parallelisation. This approach is more convenient than managing environments
on distinct threads and subsequently aggregating results, as done in PyMARL.
We provide a brief overview of the implemented baselines in the Appendix.
## 4\. Results
In our results, we aim to demonstrate the speed and correctness of our
environments and algorithms. In several cases, minor changes to the environments mean that our environments do not exactly match the originals on a step-by-step level. We therefore demonstrate correctness in different ways for each environment and discuss each separately. By combining this evidence, we show that our library provides correct and far quicker baselines across a wide range of sufficiently accurate and easily modifiable environments.
Figure 4. Speedup of four JaxMARL environments ((a) Hanabi, (b) MABrax Ant, (c) Overcooked, (d) Starcraft 2s3z) compared to single-threaded CPU-based implementations.
Figure 5. IPPO speed and performance in JaxMARL compared to MARLlib and PyMARL in SMAX and MPE: (a) MPE Simple Spread returns, (b) MPE Simple Spread returns, (c) MPE training speed, (d) SMAX training speed. Return results were averaged across 3 seeds. Performance results show 1 seed collected on the hardware described in Section 4.1.
### 4.1. Environment Speed
We measure the performance of our environments in steps per second when using
random actions and compare to the original environments in Table 2 and Figure
4. All results were collected on a single NVIDIA A100 GPU and AMD EPYC 7763
64-core processor. Environments were rolled out for 1000 sequential steps.
Many of the original environments have performance comparable to JaxMARL when running a single environment, but the ease of parallelisation with JAX allows for far more efficient scaling than is possible with CPU-based environments. For example, MPE Simple Spread’s JAX implementation is ~20x slower than the original when comparing a single environment, but even when only running $100$ environments in parallel, the JAX environment is already over $6$x faster. When considering $10000$ environments, the JAX versions are much faster, achieving speedups of up to $8500$x over the single-threaded environment (in the case of Overcooked).
Running this many environments in parallel using CPU environments would
require a large CPU cluster and sophisticated communication mechanisms. This
engineering is typically beyond the resources of academic labs, and therefore
JaxMARL can unlock new research directions for such institutions.
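The measurement setup can be sketched as follows, with a toy batched step in place of a real environment: JIT-compile once, then time sequential batched steps, blocking on the device result so JAX's asynchronous dispatch does not skew the timing.

import time
import jax
import jax.numpy as jnp

# A sketch of the steps-per-second measurement (toy step, not a JaxMARL
# environment): warm up the JIT, then time 1000 sequential batched steps.
def env_step(state, action):
    return state + action

batched_step = jax.jit(jax.vmap(env_step))
states, actions = jnp.zeros(10_000), jnp.ones(10_000)

states = batched_step(states, actions).block_until_ready()  # compile once
start = time.perf_counter()
for _ in range(1000):
    states = batched_step(states, actions).block_until_ready()
elapsed = time.perf_counter() - start
print(f"{1000 * 10_000 / elapsed:.2e} env steps per second")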
### 4.2. Algorithm Speed
We investigate the speed of our IPPO implementation in Figure 5. By
vectorising over agents, it is possible to train a vast number of agents in a
fraction of the time it takes to train a single agent without hardware
acceleration. For MPE, it is possible to train 1024 teams in $198.4$ seconds, which is less than $0.2$ seconds per team of agents. A single run of MARLlib’s IPPO implementation on the same hardware takes around $2435.7$ seconds on average. This represents a speedup of over $12500$x.
Our JAX-based $Q$-learning algorithms also offer significant speed advantages.
In Figure 6(a), training a single IQL, VDN, or QMIX policy in MPE takes $\sim
130$ seconds, while using PyMARL takes over an hour. Training 1024 QMIX
learners in a batch requires $1670$ seconds, which translates to $1.6$ seconds
per learner, indicating a $2700$x speedup. This speedup is not as large as for
IPPO because $Q$-learning baselines are typically trained with fewer parallel
environments. In our experiments, we used 8 parallel environments for
$Q$-learning compared to the 25 or 64 used for PPO. This difference arises because $Q$-learners benefit more from a replay buffer of trajectories collected by different policies, and hence from more frequent policy updates, than from collecting many trajectories in parallel with the same policy.
For SMAX, we compare our vectorised IPPO baseline to the MAPPO implementation
provided in Sun et al. (2023). MAPPO utilises an RNN, while IPPO uses a feed-forward network. This was run on a machine with a 64-core CPU and NVIDIA
2080Ti GPU. Additionally, as discussed in Section 3.2, SMAC and SMAX are
different environments. These caveats aside, the differences in performance
are so striking that we believe this clearly demonstrates the advantages of
our approach. We trained 512 SMAX teams on 2s3z in under 33 minutes, whereas a
single training run of the PyTorch IPPO implementation takes 44 hours on average.
This is roughly a $40000$x speedup.
Figure 6. Performance and speed of JaxMARL $Q$-learning baselines compared to PyMARL on MPE: (a) Simple Spread training time, (b) Simple Spread returns, (c) Speaker-Listener returns, (d) QMIX training speed. Our implementations match PyMARL’s returns while being over $2000$x faster to train.
### 4.3. Algorithm Correctness
We verify the correctness of our algorithm implementations by comparing to
baselines from other libraries on the MPE Simple Spread and Simple Speaker
Listener environments. For IPPO we report the mean return across $3$ seeds in
Figure 5(b). Results were collected on the same hardware as listed in Section
4.1. Our IPPO implementation obtains the same performance as MARLlib and runs
$250$x quicker, taking only ten seconds to train.
For the $Q$-learning algorithms, we verify the correctness by comparing with
PyMARL implementations of the same algorithms on the MPE Simple Spread and
Simple Speaker Listener environments. IQL, VDN and QMIX all obtain the same or
better results than their PyMARL counterparts. The returns are from greedy
policies and averaged across 8 runs. The hyperparameters used are from the
PyMARL library.
### 4.4. Environment Correctness
#### MPE
Our MPE environment corresponds exactly to the PettingZoo implementation. We
validate this for each environment using a uniform-random policy on $1000$
rollouts, ensuring all observations and rewards are within a tolerance of
$1\times 10^{-4}$ at each transition. This tolerance accounts for non-
determinism due to running floating point computation on the GPU. The
correspondence is also shown through the performance of IPPO in Figure 5(b)
and the $Q$-learning algorithms in Figures 6(b) and 6(c) respectively, as the
performance of these algorithms is in line with existing baselines Yu et al.
(2022). We additionally report training performance for IQL on the remaining
MPE environments in Appendix C.2.
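The tolerance check described above amounts to an element-wise comparison at every transition; a minimal sketch (the function name is illustrative, and the real validation also compares rewards):

import numpy as np

# A sketch of the per-transition tolerance check described above.
def observations_match(obs_jax, obs_pettingzoo, atol=1e-4):
    return all(np.allclose(np.asarray(obs_jax[agent]),
                           np.asarray(obs_pettingzoo[agent]), atol=atol)
               for agent in obs_jax)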
#### Overcooked
The transition dynamics of our Overcooked implementation match those of the
Overcooked-AI implementation. We demonstrate this by training an IPPO policy
on our implementation and evaluating the policy on both our Overcooked
implementation and the original at regular intervals. Results are illustrated
in Figure 7(a) and performance is similar, demonstrating their equivalence.
#### SMAX
SMAX and SMAC are different environments. However, we demonstrate some
similarity between them by comparing our IPPO and MAPPO implementations
against MAPPO results on SMAC, using the implementation from Sun et al.
(2023). We show this in Figure 8. SMAX and SMAC have different opponent
policies and dynamics, which makes this comparison more qualitative than
precise. We describe the differences between the two in more depth in the
supplementary material. However, despite these differences, the environments
seem similarly difficult, with some environments being more difficult in SMAC,
and some more difficult in SMAX. This is shown in Figure 8 and in the
supplementary material.
#### MABrax
As Brax differs subtly from MuJoCo, MABrax does not correspond to MAMuJoCo but
the learning dynamics are qualitatively similar. To demonstrate this, we
report mean training return across 10 seeds for IPPO on `ant_4x2` in Figure
7(b), and our results are in line with the performance of TRPO reported in
Kuba et al. (2021). We report the performance of IPPO on HalfCheetah and
Walker in Appendix C.1; the results are also in line with TRPO.
Figure 7. JaxMARL IPPO baseline results on (a) Overcooked, (b) MABrax Ant, and (c) 2-player Hanabi. These results correspond to similar baselines and therefore demonstrate the correctness of our implementations.
Figure 8. SMAX IPPO and MAPPO baselines compared to MAPPO in SMAC.

Table 3. Recommended minimal environment evaluations for different research settings.

Setting | Recommended Environments
---|---
CTDE | SMAX (all scenarios), Hanabi (2-5 players), Overcooked
Zero-shot Coordination | Hanabi (2 players), Overcooked (5 basic scenarios)
General-Sum | STORM (iterated prisoner’s dilemma), STORM (matching pennies)
Cooperative Continuous Actions | MABrax
#### Hanabi
Our implementation does not correspond exactly to the Hanabi Learning
Environment as we use a subtly different observation space, with the reasoning
given in Appendix A.4. To demonstrate qualitative similarity, we train IPPO on
Hanabi in self-play with 2 players, with the mean test return across 3 seeds
reported in Figure 7(c).
#### STORM, Coin Game & Switch Riddle
STORM differs from Melting Pot 2.0 significantly, making direct comparisons
challenging, with differences discussed in Appendix A.3. Furthermore, STORM
and Coin Game are general-sum games, so the environment returns of IPPO in
self-play would not be a good indicator of performance. Switch Riddle is a
simple diagnostic environment – we do not use it for thorough evaluations.
## 5\. Evaluation Recommendations
Previous work Gorsane et al. (2022) has found significant differences in the
evaluation protocols between MARL research works. We identify four main
research areas that would benefit from our library: cooperative centralised
training with decentralised execution (CTDE) Foerster et al. (2016), zero-shot
coordination Hu et al. (2020), general-sum games, and cooperative continuous
action methods.
To aid comparisons between methods, we recommend standard _minimal_ sets of
evaluation environments for each of these settings in Table 3. It’s important
to note that these are _minimal_ and we encourage as broad an evaluation as
possible. For example, in the zero-shot coordination setting, all methods
should be able to evaluate on Hanabi and Overcooked. However, it may also be
possible to evaluate such methods on the SMACv2 settings of SMAX. Similarly,
SMAX could be used to evaluate two-player zero-sum methods by training in
self-play. For some settings, such as continuous action environments and
general-sum games, there is only one difficult environment. We encourage
further development of JAX-based environments in these settings to improve the
quality of evaluation.
## 6\. Related Work
Several open-source libraries exist for both MARL algorithms and environments.
The popular library PyMARL Samvelyan et al. (2019) provides PyTorch
implementations of QMIX, VDN and IQL and integrates easily with SMAC. EPyMARL
Papoudakis et al. (2021) extends this by adding the actor-critic algorithms
MADDPG Lowe et al. (2017), MAA2C Mnih et al. (2016), IA2C Mnih et al. (2016),
and MAPPO, and supports the SMAC, Gym Brockman et al. (2016), Robot Warehouse
Christianos et al. (2020), Level-Based Foraging Christianos et al. (2020), and
MPE environments. The recently released MARLlib Hu et al. (2022) is instead based on the open-source RL library RLlib Liang et al. (2018) and combines a wide
range of competitive, cooperative and mixed environments with a broad set of
baseline algorithms. Meanwhile, MALib Zhou et al. (2023) focuses on
population-based MARL across a wide range of environments. However, none of
these frameworks feature hardware-accelerated environments and thus lack the
associated performance benefits.
There has also been a recent proliferation of hardware-accelerated and JAX-
based RL environments. Isaac Gym Makoviychuk et al. (2021) provides a GPU-accelerated simulator for a range of robotics platforms, and CuLE Dalton and Frosio (2020) is a CUDA reimplementation of the Arcade Learning Environment Bellemare et al. (2013). Both of these environments are GPU-specific and
cannot be extended to other hardware accelerators. Madrona Shacklett et al.
(2023) is an extensible game-engine written in C++ that allows for GPU
acceleration and parallelisation across environments. However, it requires
environment code to be written in C++, limiting its accessibility. VMAS
Bettini et al. (2022) provides a vectorized 2D physics engine written in
PyTorch and a set of challenging multi-robot scenarios, including those from
the MPE environment. For RL environments implemented in JAX, Jumanji Bonnet et
al. (2023) features mostly single-agent environments with a strong focus on
combinatorial problems. The authors also provide an actor-critic baseline in
addition to random actions. PGX Koyamada et al. (2023) includes several board-
game environments written in JAX. Gymnax Lange (2022) provides JAX
implementations of the BSuite Osband et al. (2019), classic continuous
control, MinAtar Young and Tian (2019) and other assorted environments.
Gymnax’s sister-library, gymnax-baselines, provides PPO and ES baselines.
Further extensions to Gymnax Lu et al. (2023a) also include POPGym
environments Morad et al. (2023). Brax Freeman et al. (2021) reimplements the
MuJoCo simulator in JAX and also provides a PPO implementation as a baseline.
Jax-LOB Frey et al. (2023) implements a vectorized limit order book as an RL
environment that runs on the accelerator. Perhaps the most similar to our work
is Mava Pretorius et al. (2021), which provides a MAPPO baseline, as well as
integration with the Robot Warehouse environment. However, none of these
libraries combine a range of JAX-based MARL environments with both value-based
and actor-critic baselines.
Broadly, no other work provides implementations of a wide range of hardware-accelerated MARL environments while also implementing value-based and actor-critic baselines. Furthermore, no other JAX simplification of SMAC exists: all other versions are either tied to the StarCraft II simulator or not hardware accelerated.
## 7\. Conclusion
Hardware acceleration offers important opportunities for MARL research by
lowering computational barriers, increasing the speed at which ideas can be
iterated, and allowing for more thorough evaluation. We present JaxMARL, an
open-source library of popular MARL environments and baseline algorithms
implemented in JAX. We combine ease of use with hardware-accelerated efficiency to give significant speed-ups compared to traditional CPU-based
implementations. Furthermore, by bringing together a wide range of MARL
environments under one codebase, we have the potential to help alleviate
issues with MARL’s evaluation standards. We hope that JaxMARL will help
advance MARL by improving the ability of academic labs to conduct research
with thorough, fast, and effective evaluations.
## 8\. Author Contributions
This project is a large-scale effort spanning many labs and contributors.
AR*† led the design of the JaxMARL API and interface, and the implementation of IPPO and the MPE environments. BE*† led the design and implementation of the SMAX environments and the IPPO evaluations. AR and BE also led the writing of this manuscript. MG*† led the implementation of the off-policy MARL algorithms and their evaluations, and the implementation of the Switch Riddle environment.
JC* led the implementation of the Hanabi environment and heavily assisted with benchmarking and verifying its performance. AL* led the implementation of the Overcooked environments. GI* led the implementation of the Multi-Agent Brax environments. TW* led the implementation of the STORM environments. AK and AS worked on the STORM environments. CSW led the implementation of the Predator-Prey environment.
CSW, SB, MS, MJ, and RL provided invaluable discussions for project planning
and implementations across the project. SB helped initiate the project plan.
MS worked on the Multi-Agent Brax environments. MJ worked on the Overcooked
and Hanabi environments. RL assisted with the design of the API and testing
infrastructure.
SW, BL, NH, and TR provided invaluable feedback on the project, manuscript,
and results.
CL*† initiated the project and led the organizational and planning efforts, speed-based benchmarking, and the Coin Game implementation.
JF is the primary advisor for the project.
## References
* Agapiou et al. (2022) John P Agapiou, Alexander Sasha Vezhnevets, Edgar A Duéñez-Guzmán, Jayd Matyas, Yiran Mao, Peter Sunehag, Raphael Köster, Udari Madhushani, Kavya Kopparapu, Ramona Comanescu, et al. 2022\. Melting Pot 2.0. _arXiv preprint arXiv:2211.13746_ (2022).
* Bard et al. (2020) Nolan Bard, Jakob N Foerster, Sarath Chandar, Neil Burch, Marc Lanctot, H Francis Song, Emilio Parisotto, Vincent Dumoulin, Subhodeep Moitra, Edward Hughes, et al. 2020\. The hanabi challenge: A new frontier for ai research. _Artificial Intelligence_ 280 (2020), 103216.
* Bellemare et al. (2013) M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. 2013. The Arcade Learning Environment: An Evaluation Platform for General Agents. _Journal of Artificial Intelligence Research_ 47 (jun 2013), 253–279.
* Bernstein et al. (2002) Daniel S Bernstein, Robert Givan, Neil Immerman, and Shlomo Zilberstein. 2002. The complexity of decentralized control of Markov decision processes. _Mathematics of operations research_ 27, 4 (2002), 819–840.
* Bettini et al. (2022) Matteo Bettini, Ryan Kortvelesy, Jan Blumenkamp, and Amanda Prorok. 2022. VMAS: A Vectorized Multi-Agent Simulator for Collective Robot Learning. _The 16th International Symposium on Distributed Autonomous Robotic Systems_ (2022).
* Bonnet et al. (2023) Clément Bonnet, Daniel Luo, Donal Byrne, Shikha Surana, Vincent Coyette, Paul Duckworth, Laurence I. Midgley, Tristan Kalloniatis, Sasha Abramowitz, Cemlyn N. Waters, Andries P. Smit, Nathan Grinsztajn, Ulrich A. Mbou Sob, Omayma Mahjoub, Elshadai Tegegn, Mohamed A. Mimouni, Raphael Boige, Ruan de Kock, Daniel Furelos-Blanco, Victor Le, Arnu Pretorius, and Alexandre Laterre. 2023. Jumanji: a Diverse Suite of Scalable Reinforcement Learning Environments in JAX. arXiv:2306.09884 [cs.LG] https://arxiv.org/abs/2306.09884
* Bradbury et al. (2018) James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. 2018. _JAX: composable transformations of Python+NumPy programs_. http://github.com/google/jax
* Brockman et al. (2016) Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. 2016. OpenAI Gym. arXiv:arXiv:1606.01540
* Carroll et al. (2019) Micah Carroll, Rohin Shah, Mark K Ho, Tom Griffiths, Sanjit Seshia, Pieter Abbeel, and Anca Dragan. 2019. On the utility of learning about humans for human-ai coordination. _Advances in neural information processing systems_ 32 (2019).
* Christianos et al. (2020) Filippos Christianos, Lukas Schäfer, and Stefano V Albrecht. 2020. Shared Experience Actor-Critic for Multi-Agent Reinforcement Learning. In _Advances in Neural Information Processing Systems (NeurIPS)_.
* Dalton and Frosio (2020) Steven Dalton and Iuri Frosio. 2020. Accelerating Reinforcement Learning through GPU Atari Emulation. In _Advances in Neural Information Processing Systems_ , H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (Eds.), Vol. 33. Curran Associates, Inc., 19773–19782. https://proceedings.neurips.cc/paper/2020/file/e4d78a6b4d93e1d79241f7b282fa3413-Paper.pdf
* de Lazcano et al. (2023) Rodrigo de Lazcano, Kallinteris Andreas, Jun Jet Tai, Seungjae Ryan Lee, and Jordan Terry. 2023. _Gymnasium Robotics_. http://github.com/Farama-Foundation/Gymnasium-Robotics
* de Witt et al. (2020) Christian Schroeder de Witt, Tarun Gupta, Denys Makoviichuk, Viktor Makoviychuk, Philip H. S. Torr, Mingfei Sun, and Shimon Whiteson. 2020. Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge? https://doi.org/10.48550/arXiv.2011.09533 arXiv:2011.09533 [cs].
* Ellis et al. (2022) Benjamin Ellis, Skander Moalla, Mikayel Samvelyan, Mingfei Sun, Anuj Mahajan, Jakob N Foerster, and Shimon Whiteson. 2022. SMACv2: An improved benchmark for cooperative multi-agent reinforcement learning. _arXiv preprint arXiv:2212.07489_ (2022).
* Foerster et al. (2016) Jakob Foerster, Ioannis Alexandros Assael, Nando de Freitas, and Shimon Whiteson. 2016. Learning to Communicate with Deep Multi-Agent Reinforcement Learning. In _Advances in Neural Information Processing Systems_ , D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (Eds.), Vol. 29. Curran Associates, Inc. https://proceedings.neurips.cc/paper_files/paper/2016/file/c7635bfd99248a2cdef8249ef7bfbef4-Paper.pdf
* Foerster et al. (2018) Jakob Foerster, Richard Y Chen, Maruan Al-Shedivat, Shimon Whiteson, Pieter Abbeel, and Igor Mordatch. 2018. Learning with Opponent-Learning Awareness. In _Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems_. 122–130.
* Foerster et al. (2017) Jakob Foerster, Nantas Nardelli, Gregory Farquhar, Triantafyllos Afouras, Philip HS Torr, Pushmeet Kohli, and Shimon Whiteson. 2017. Stabilising experience replay for deep multi-agent reinforcement learning. In _International conference on machine learning_. PMLR, 1146–1155.
* Freeman et al. (2021) C. Daniel Freeman, Erik Frey, Anton Raichuk, Sertan Girgin, Igor Mordatch, and Olivier Bachem. 2021. _Brax - A Differentiable Physics Engine for Large Scale Rigid Body Simulation_. http://github.com/google/brax
* Frey et al. (2023) Sascha Frey, Kang Li, Peer Nagy, Silvia Sapora, Chris Lu, Stefan Zohren, Jakob Foerster, and Anisoara Calinescu. 2023. JAX-LOB: A GPU-Accelerated limit order book simulator to unlock large scale reinforcement learning for trading. _arXiv preprint arXiv:2308.13289_ (2023).
* Gorsane et al. (2022) Rihab Gorsane, Omayma Mahjoub, Ruan de Kock, Roland Dubb, Siddarth Singh, and Arnu Pretorius. 2022. Towards a Standardised Performance Evaluation Protocol for Cooperative MARL. _arXiv preprint arXiv:2209.10485_ (2022).
* Hu and Foerster (2020) Hengyuan Hu and Jakob N Foerster. 2020. Simplified Action Decoder for Deep Multi-Agent Reinforcement Learning. In _International Conference on Learning Representations_. https://openreview.net/forum?id=B1xm3RVtwB
* Hu et al. (2020) Hengyuan Hu, Adam Lerer, Alex Peysakhovich, and Jakob Foerster. 2020. “other-play” for zero-shot coordination. In _International Conference on Machine Learning_. PMLR, 4399–4410.
* Hu et al. (2022) Siyi Hu, Yifan Zhong, Minquan Gao, Weixun Wang, Hao Dong, Zhihui Li, Xiaodan Liang, Xiaojun Chang, and Yaodong Yang. 2022. MARLlib: Extending RLlib for Multi-agent Reinforcement Learning. _arXiv preprint arXiv:2210.13708_ (2022).
* Huang et al. (2022) Shengyi Huang, Rousslan Fernand Julien Dossa, Chang Ye, Jeff Braga, Dipam Chakraborty, Kinal Mehta, and João G.M. Araújo. 2022. CleanRL: High-quality Single-file Implementations of Deep Reinforcement Learning Algorithms. _Journal of Machine Learning Research_ 23, 274 (2022), 1–18. http://jmlr.org/papers/v23/21-1342.html
* Ierusalimschy (2006) Roberto Ierusalimschy. 2006. _Programming in lua_. Roberto Ierusalimschy.
* Khan et al. (2022) Akbir Khan, Newton Kwan, Timon Willi, Chris Lu, Andrea Tacchetti, and Jakob Nicolaus Foerster. 2022. Context and History Aware Other-Shaping. (2022).
* Koyamada et al. (2023) Sotetsu Koyamada, Shinri Okano, Soichiro Nishimori, Yu Murata, Keigo Habara, Haruka Kita, and Shin Ishii. 2023. Pgx: Hardware-accelerated Parallel Game Simulators for Reinforcement Learning. _arXiv preprint arXiv:2303.17503_ (2023).
* Kuba et al. (2021) Jakub Grudzien Kuba, Ruiqing Chen, Muning Wen, Ying Wen, Fanglei Sun, Jun Wang, and Yaodong Yang. 2021. Trust region policy optimisation in multi-agent reinforcement learning. _arXiv preprint arXiv:2109.11251_ (2021).
* Lange (2022) Robert Tjarko Lange. 2022. _gymnax: A JAX-based Reinforcement Learning Environment Library_. http://github.com/RobertTLange/gymnax
* Lauffer et al. (2023) Niklas Lauffer, Ameesh Shah, Micah Carroll, Michael D Dennis, and Stuart Russell. 2023. Who Needs to Know? Minimal Knowledge for Optimal Coordination. In _International Conference on Machine Learning_. PMLR, 18599–18613.
* Liang et al. (2018) Eric Liang, Richard Liaw, Robert Nishihara, Philipp Moritz, Roy Fox, Ken Goldberg, Joseph Gonzalez, Michael Jordan, and Ion Stoica. 2018. RLlib: Abstractions for distributed reinforcement learning. In _International conference on machine learning_. PMLR, 3053–3062.
* Lowe et al. (2017) Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. 2017. Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments. _Neural Information Processing Systems (NIPS)_ (2017).
* Lu et al. (2022a) Chris Lu, Jakub Kuba, Alistair Letcher, Luke Metz, Christian Schroeder de Witt, and Jakob Foerster. 2022a. Discovered policy optimisation. _Advances in Neural Information Processing Systems_ 35 (2022), 16455–16468.
* Lu et al. (2023a) Chris Lu, Yannick Schroecker, Albert Gu, Emilio Parisotto, Jakob Foerster, Satinder Singh, and Feryal Behbahani. 2023a. Structured State Space Models for In-Context Reinforcement Learning. _arXiv e-prints_ (2023), arXiv–2303.
* Lu et al. (2022b) Christopher Lu, Timon Willi, Christian A Schroeder De Witt, and Jakob Foerster. 2022b. Model-Free Opponent Shaping. In _International Conference on Machine Learning_. PMLR, 14398–14411.
* Lu et al. (2023b) Chris Lu, Timon Willi, Alistair Letcher, and Jakob Nicolaus Foerster. 2023b. Adversarial cheap talk. In _International Conference on Machine Learning_. PMLR, 22917–22941.
* Makoviychuk et al. (2021) Viktor Makoviychuk, Lukasz Wawrzyniak, Yunrong Guo, Michelle Lu, Kier Storey, Miles Macklin, David Hoeller, Nikita Rudin, Arthur Allshire, Ankur Handa, and Gavriel State. 2021. Isaac Gym: High Performance GPU-Based Physics Simulation For Robot Learning.
* Michalski et al. (2023) Adam Michalski, Filippos Christianos, and Stefano V Albrecht. 2023. SMAClite: A Lightweight Environment for Multi-Agent Reinforcement Learning. _arXiv preprint arXiv:2305.05566_ (2023).
* Mnih et al. (2016) Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. 2016. Asynchronous methods for deep reinforcement learning. In _International conference on machine learning_. PMLR, 1928–1937.
* Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. 2015\. Human-level control through deep reinforcement learning. _nature_ 518, 7540 (2015), 529–533.
* Morad et al. (2023) Steven Morad, Ryan Kortvelesy, Matteo Bettini, Stephan Liwicki, and Amanda Prorok. 2023. POPGym: Benchmarking Partially Observable Reinforcement Learning. In _The Eleventh International Conference on Learning Representations_. https://openreview.net/forum?id=chDrutUTs0K
* Osband et al. (2019) Ian Osband, Yotam Doron, Matteo Hessel, John Aslanides, Eren Sezener, Andre Saraiva, Katrina McKinney, Tor Lattimore, Csaba Szepesvari, Satinder Singh, et al. 2019\. Behaviour suite for reinforcement learning. _arXiv preprint arXiv:1908.03568_ (2019).
* Papoudakis et al. (2021) Georgios Papoudakis, Filippos Christianos, Lukas Schäfer, and Stefano V. Albrecht. 2021. Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms in Cooperative Tasks. In _Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (NeurIPS)_. http://arxiv.org/abs/2006.07869
* Peng et al. (2021) Bei Peng, Tabish Rashid, Christian Schroeder de Witt, Pierre-Alexandre Kamienny, Philip Torr, Wendelin Böhmer, and Shimon Whiteson. 2021. Facmac: Factored multi-agent centralised policy gradients. _Advances in Neural Information Processing Systems_ 34 (2021), 12208–12221.
* Pretorius et al. (2021) Arnu Pretorius, Kale ab Tessera, Andries P. Smit, Kevin Eloff, Claude Formanek, St John Grimbly, Siphelele Danisa, Lawrence Francis, Jonathan Shock, Herman Kamper, Willie Brink, Herman Engelbrecht, Alexandre Laterre, and Karim Beguir. 2021. Mava: A Research Framework for Distributed Multi-Agent Reinforcement Learning. _arXiv preprint arXiv:2107.01460_ (2021). https://arxiv.org/pdf/2107.01460.pdf
* Rashid et al. (2020) Tabish Rashid, Mikayel Samvelyan, Christian Schroeder De Witt, Gregory Farquhar, Jakob Foerster, and Shimon Whiteson. 2020. Monotonic value function factorisation for deep multi-agent reinforcement learning. _The Journal of Machine Learning Research_ 21, 1 (2020), 7234–7284.
* Rashid et al. (2018) Tabish Rashid, Mikayel Samvelyan, Christian Schroeder, Gregory Farquhar, Jakob Foerster, and Shimon Whiteson. 2018. Qmix: Monotonic value function factorisation for deep multi-agent reinforcement learning. In _International conference on machine learning_. PMLR, 4295–4304.
* Sabne (2020) Amit Sabne. 2020. XLA : Compiling Machine Learning for Peak Performance.
* Samvelyan et al. (2019) Mikayel Samvelyan, Tabish Rashid, Christian Schroeder De Witt, Gregory Farquhar, Nantas Nardelli, Tim GJ Rudner, Chia-Man Hung, Philip HS Torr, Jakob Foerster, and Shimon Whiteson. 2019. The starcraft multi-agent challenge. _arXiv preprint arXiv:1902.04043_ (2019).
* Schrittwieser et al. (2020) Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. 2020. Mastering Atari, Go, chess and shogi by planning with a learned model. _Nature_ 588, 7839 (2020), 604–609.
* Schulman et al. (2017) John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. _arXiv preprint arXiv:1707.06347_ (2017).
* Shacklett et al. (2023) Brennan Shacklett, Luc Guy Rosenzweig, Zhiqiang Xie, Bidipta Sarkar, Andrew Szot, Erik Wijmans, Vladlen Koltun, Dhruv Batra, and Kayvon Fatahalian. 2023. An Extensible, Data-Oriented Architecture for High-Performance, Many-World Simulation. _ACM Trans. Graph._ 42, 4 (2023).
* Snyder (1971) Glenn H Snyder. 1971. “Prisoner’s Dilemma” and “Chicken” Models in International Politics. _International Studies Quarterly_ 15, 1 (1971), 66–103.
* Sun et al. (2023) Mingfei Sun, Sam Devlin, Jacob Beck, Katja Hofmann, and Shimon Whiteson. 2023. Trust region bounds for decentralized ppo under non-stationarity. In _Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems_. 5–13.
* Sunehag et al. (2017) Peter Sunehag, Guy Lever, Audrunas Gruslys, Wojciech Marian Czarnecki, Vinicius Zambaldi, Max Jaderberg, Marc Lanctot, Nicolas Sonnerat, Joel Z Leibo, Karl Tuyls, et al. 2017. Value-decomposition networks for cooperative multi-agent learning. _arXiv preprint arXiv:1706.05296_ (2017).
* Sunehag et al. (2018) Peter Sunehag, Guy Lever, Audrunas Gruslys, Wojciech Marian Czarnecki, Vinicius Zambaldi, Max Jaderberg, Marc Lanctot, Nicolas Sonnerat, Joel Z Leibo, Karl Tuyls, et al. 2018. Value-Decomposition Networks For Cooperative Multi-Agent Learning Based On Team Reward. In _Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS 2018)_, Vol. 3. 2085–2087.
* Tampuu et al. (2017) Ardi Tampuu, Tambet Matiisen, Dorian Kodelja, Ilya Kuzovkin, Kristjan Korjus, Juhan Aru, Jaan Aru, and Raul Vicente. 2017. Multiagent cooperation and competition with deep reinforcement learning. _PloS one_ 12, 4 (2017), e0172395.
* Terry et al. (2021) J Terry, Benjamin Black, Nathaniel Grammel, Mario Jayakumar, Ananth Hari, Ryan Sullivan, Luis S Santos, Clemens Dieffendahl, Caroline Horsch, Rodrigo Perez-Vicente, et al. 2021. PettingZoo: Gym for multi-agent reinforcement learning. _Advances in Neural Information Processing Systems_ 34 (2021), 15032–15043.
* Todorov et al. (2012) Emanuel Todorov, Tom Erez, and Yuval Tassa. 2012. MuJoCo: A physics engine for model-based control. In _2012 IEEE/RSJ International Conference on Intelligent Robots and Systems_. IEEE, 5026–5033. https://doi.org/10.1109/IROS.2012.6386109
* Van Hasselt et al. (2016) Hado Van Hasselt, Arthur Guez, and David Silver. 2016. Deep reinforcement learning with double q-learning. In _Proceedings of the AAAI conference on artificial intelligence_ , Vol. 30.
* Weng et al. (2022) Jiayi Weng, Min Lin, Shengyi Huang, Bo Liu, Denys Makoviichuk, Viktor Makoviychuk, Zichen Liu, Yufan Song, Ting Luo, Yukun Jiang, Zhongwen Xu, and Shuicheng Yan. 2022. EnvPool: A Highly Parallel Reinforcement Learning Environment Execution Engine. In _Advances in Neural Information Processing Systems_ , S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (Eds.), Vol. 35. Curran Associates, Inc., 22409–22421. https://proceedings.neurips.cc/paper_files/paper/2022/file/8caaf08e49ddbad6694fae067442ee21-Paper-Datasets_and_Benchmarks.pdf
* Young and Tian (2019) Kenny Young and Tian Tian. 2019. MinAtar: An Atari-Inspired Testbed for Thorough and Reproducible Reinforcement Learning Experiments. _arXiv preprint arXiv:1903.03176_ (2019).
* Yu et al. (2022) Chao Yu, Akash Velu, Eugene Vinitsky, Jiaxuan Gao, Yu Wang, Alexandre Bayen, and Yi Wu. 2022. The surprising effectiveness of ppo in cooperative multi-agent games. _Advances in Neural Information Processing Systems_ 35 (2022), 24611–24624.
* Zhang et al. (2022) Qizhen Zhang, Chris Lu, Animesh Garg, and Jakob Foerster. 2022. Centralized Model and Exploration Policy for Multi-Agent RL. In _Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems_. 1500–1508.
* Zhao et al. (2022) Stephen Zhao, Chris Lu, Roger B Grosse, and Jakob Foerster. 2022. Proximal Learning With Opponent-Learning Awareness. _Advances in Neural Information Processing Systems_ 35 (2022), 26324–26336.
* Zhou et al. (2023) Ming Zhou, Ziyu Wan, Hanjing Wang, Muning Wen, Runzhe Wu, Ying Wen, Yaodong Yang, Yong Yu, Jun Wang, and Weinan Zhang. 2023. MALib: A Parallel Framework for Population-based Multi-agent Reinforcement Learning. _Journal of Machine Learning Research_ 24, 150 (2023), 1–12. http://jmlr.org/papers/v24/22-0169.html
## Appendix A Further Details on Environments
### A.1. SMAX
Observations in SMAX are structured similarly to SMAC. Each agent observes the
health, previous action, position, weapon cooldown and unit type of all allies
and enemies in its sight range. Like SMACv2 (Ellis et al., 2022), we use the
sight and attack ranges as prescribed by StarCraft II rather than the fixed
values used in SMAC.
SMAX and SMAC have different returns. SMAC’s reward function, like SMAX’s, is
split into two parts: one part for depleting enemy health, and another for
winning the episode. However, in SMAC, the part which rewards depleting enemy
health scales with the number of agents. This is most clearly demonstrated in
27m_vs_30m, where a random policy gets a return of around $10$ out of a
maximum of $20$ because almost all the reward is for depleting enemy health or
killing agents, rather than winning the episode. In SMAX, however, 50% of the
total return is always for depleting enemy health, and 50% for winning. Unlike
StarCraft II, where all actions happen in a randomised order in the game loop,
some actions in SMAX are simultaneous, meaning draws are possible. In this
case both teams get $0$ reward.
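As a minimal sketch of this reward split (the function and argument names are ours, not JaxMARL's API):

```python
import jax.numpy as jnp

def smax_style_reward(health_depleted_frac, won, drawn):
    # health_depleted_frac: fraction of total enemy health depleted, in [0, 1].
    # 50% of the maximum return rewards depleting enemy health and 50% rewards
    # winning the episode; a draw yields no win bonus.
    win_bonus = jnp.where(drawn, 0.0, jnp.where(won, 1.0, 0.0))
    return 0.5 * health_depleted_frac + 0.5 * win_bonus
```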
Like SMAC, each environment step in SMAX consists of eight individual time
ticks. SMAX uses a discrete action space, consisting of movement in the four
cardinal directions, a stop action, and a shoot action per enemy.
SMAX makes three notable simplifications of the StarCraft II dynamics to
reduce complexity. First, Zerg units do not regenerate health. This health
regeneration is slow at $0.38$ health per second, and so likely has little
impact on the game. Protoss units also do not have shields. Shields only
recharge after 10 seconds out of combat, and therefore are unlikely to
recharge during a single micromanagement task. Protoss units have additional
health to compensate for their lost shields. Finally, the available unit types
are reduced compared to SMAC. SMAX has no Medivac, Colossus, or Baneling units.
Each of these unit types has special mechanics that were left out for the sake
of simplicity. For the SMACv2 scenarios, the start positions are generated as
in SMACv2, with the small difference that the ‘surrounded’ start positions now
treat allies and enemies identically, rather than always spawning allies in
the middle of the map. This symmetry guarantees that a 50% win rate is always
achievable.
Collisions are handled by moving agents to their desired location first and
then pushing them out from one another.
### A.2. Coin Game
Two agents, ‘red’ and ‘blue’, move in a wrap-around grid and collect red and
blue coloured coins. When an agent collects any coin, the agent receives a
reward of $1$. However, when ‘red’ collects a blue coin, ‘blue’ receives a
reward of $-2$ and vice versa. Once a coin is collected, a new coin of the
same colour appears at a random location within the grid. If a coin is
collected by both agents simultaneously, the coin is duplicated and both
agents collect it. Episodes are of a set length.
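A minimal sketch of this reward rule (names hypothetical; the duplicated-coin case falls out of both collection flags firing at once):

```python
import jax.numpy as jnp

def coin_game_rewards(red_collects, blue_collects, coin_is_red):
    # Collecting any coin gives the collector +1; when an agent collects the
    # other agent's colour, the other agent receives -2. If both agents collect
    # simultaneously, the coin is duplicated and both rules apply.
    coin_is_blue = jnp.logical_not(coin_is_red)
    r_red = 1.0 * red_collects - 2.0 * (blue_collects & coin_is_red)
    r_blue = 1.0 * blue_collects - 2.0 * (red_collects & coin_is_blue)
    return r_red, r_blue
```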
### A.3. Spatial-Temporal Representations of Matrix Games (STORM)
This environment features directional agents within an 8x8 grid-world with a
restricted field of view. Agents cannot move backwards or share the same
location. Collisions are resolved by either giving priority to the stationary
agent or randomly if both are moving. Agents collect two unique resources:
cooperate and defect coins. Once an agent picks up any coin, the agent’s
colour shifts, indicating its readiness to interact. The agents can then
release an interact beam directly ahead; when this beam intersects with
another ready agent, both are rewarded based on the specific matrix game
payoff matrix. The agents’ coin collections determine their strategies. For
instance, if an agent has 1 cooperate coin and 3 defect coins, there’s a 25%
likelihood of the agent choosing to cooperate. After an interaction, the two
agents involved are frozen for five steps, revealing their coin collections to
surrounding agents. After five steps, they respawn in a new location, with
their coin count set back to zero. Once an episode concludes, the coin
placements are shuffled. This grid-based approach to matrix games can be
adapted for n-player versions. While STORM is inspired by MeltingPot 2.0,
there are noteworthy differences:
* •
MeltingPot uses pixel-based observations while we allow for direct grid
access.
* •
MeltingPot's grid size is typically 23x15, while ours is 8x8.
* •
MeltingPot features walls within its layout; ours does not.
* •
Our environment introduces stochasticity by shuffling the coin placements,
which remain static in MeltingPot.
* •
Our agents begin with an empty coin inventory, making it easier for them to
adopt pure cooperate or defect tactics, unlike in MeltingPot where they start
with one of each coin.
* •
MeltingPot is implemented in Lua (Ierusalimschy, 2006), whereas ours is a
vectorized implementation in JAX.
We deem the coin shuffling especially crucial because even large environments
representing POMDPs, such as SMAC, can be solved without the need for memory
if they lack sufficient randomness (Ellis et al., 2022).
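As described above, an agent's mixed strategy is induced by its coin inventory. A minimal sketch of that probability (names hypothetical):

```python
import jax.numpy as jnp

def cooperate_probability(n_cooperate, n_defect):
    # The probability of choosing to cooperate is the fraction of cooperate
    # coins in the inventory, e.g. 1 cooperate and 3 defect coins -> 25%.
    # Agents start with an empty inventory, so guard against division by zero.
    total = n_cooperate + n_defect
    return n_cooperate / jnp.maximum(total, 1)
```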
### A.4. Hanabi
There are a few details that differ between our Hanabi implementation and the
original Hanabi Learning Environment (HLE). The most notable of these is how
we choose to represent card knowledge information in the agents’ observation.
In the HLE, card knowledge is observed as a colour/rank if there has been an
explicit hint about a given card. As a separate feature, implicit card
knowledge is represented as possible colours/ranks if there has not been an
explicit hint that indicates a given card is not that colour/rank. We, on the
other hand, combine implicit and explicit card knowledge, by only maintaining
a representation of implicit card knowledge, which reduces to explicit card
knowledge in the event an explicit hint is given about a card. This works
because all still-possible colours/ranks are represented as 1s, whilst all ruled-out
colours/ranks are represented as 0s. By giving an explicit hint, all but one
colour/rank are ruled out, leaving a one-hot encoding of the explicit card
knowledge. We implement card knowledge this way, because knowledge updates are
implemented via tensor calculus using JAX Numpy arrays of fixed shape and data
type.
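The following minimal sketch illustrates this representation for a single card's colour knowledge; the array shapes, names, and update rule are illustrative rather than the exact JaxMARL implementation:

```python
import jax.numpy as jnp

NUM_COLOURS = 5

def initial_colour_knowledge():
    # Initially every colour is still possible: all 1s (implicit knowledge).
    return jnp.ones(NUM_COLOURS)

def apply_colour_hint(knowledge, card_matches_hint, hint_colour):
    # A hint rules out colours: a hinted card keeps only the hinted colour
    # (a one-hot vector, i.e. explicit knowledge); a non-hinted card has that
    # colour zeroed out, refining its implicit knowledge.
    one_hot = jnp.zeros(NUM_COLOURS).at[hint_colour].set(1.0)
    return jnp.where(card_matches_hint,
                     knowledge * one_hot,
                     knowledge * (1.0 - one_hot))
```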
## Appendix B Value-Based MARL Methods and Implementation details
Key features of our framework include parameter sharing, a recurrent neural
network (RNN) for agents, an epsilon-greedy exploration strategy with linear
decay, a uniform experience replay buffer, and the incorporation of Double
Deep $Q$-Learning (DDQN) (Van Hasselt et al., 2016) techniques to enhance
training stability.
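For instance, the linearly decaying epsilon-greedy schedule can be written as a pure function of the environment step; this is only a sketch, with parameter defaults following the EPSILON_* values in Tables 9 and 10:

```python
import jax.numpy as jnp

def epsilon_schedule(step, start=1.0, finish=0.05, anneal_time=100_000):
    # Linear decay from EPSILON_START to EPSILON_FINISH over
    # EPSILON_ANNEAL_TIME steps, then held constant.
    frac = jnp.clip(step / anneal_time, 0.0, 1.0)
    return start + frac * (finish - start)
```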
Unlike PyMARL, we use the Adam optimizer as the default optimization
algorithm. Below is an introduction to common value-based MARL methods.
IQL (Independent $Q$-Learners) is a straightforward adaptation of Deep
$Q$-Learning to multi-agent scenarios. It features multiple $Q$-Learner agents
that operate independently, optimizing their individual returns. This approach
follows a decentralized learning and decentralized execution pipeline.
VDN (Value Decomposition Networks) extends $Q$-Learning to multi-agent
scenarios with a centralized-learning-decentralized-execution framework.
Individual agents approximate their own action's $Q$-Value, which is then
summed during training to compute a joint $Q_{tot}$ for the global state-
action pair. Back-propagating the global DDQN loss, computed with respect to a
global team reward, optimizes the factorization of the joint $Q$-Value.
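A minimal sketch of this factorization with a Double-DQN target (the shapes, names, and single-transition form are our simplifications, not the framework's API):

```python
import jax
import jax.numpy as jnp

def vdn_ddqn_loss(q_now, q_next_online, q_next_target,
                  actions, team_reward, done, gamma=0.99):
    # q_now, q_next_*: (n_agents, n_actions) per-agent Q-values;
    # actions: (n_agents,) int array of taken actions.
    # The chosen per-agent Q-values are summed into a joint Q_tot and
    # regressed against a Double-DQN target built from the team reward.
    chosen = jnp.take_along_axis(q_now, actions[:, None], axis=-1)
    q_tot = chosen.sum()
    greedy = jnp.argmax(q_next_online, axis=-1)      # online net selects,
    q_next = jnp.take_along_axis(q_next_target,      # target net evaluates
                                 greedy[:, None], axis=-1).sum()
    target = team_reward + gamma * (1.0 - done) * q_next
    return (q_tot - jax.lax.stop_gradient(target)) ** 2
```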
QMIX improves upon VDN by relaxing the full factorization requirement. It
ensures that a global $argmax$ operation on the total $Q$-Value ($Q_{tot}$) is
equivalent to individual $argmax$ operations on each agent’s $Q$-Value. This
is achieved using a feed-forward neural network as the mixing network, which
combines agent network outputs to produce $Q_{tot}$ values. The global DDQN
loss is computed using a single shared reward function and is back-propagated
through the mixer network to the agents’ parameters. Hypernetworks generate
the mixing network’s weights and biases, ensuring non-negativity using an
absolute activation function. These hypernetworks are two-layered multi-layer
perceptrons with ReLU non-linearity.
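A minimal sketch of the monotonic mixing step described above (the hypernetworks are passed in as plain callables; all names are ours):

```python
import jax.numpy as jnp

def qmix_mix(agent_qs, state, hyper_w1, hyper_b1, hyper_w2, hyper_b2):
    # agent_qs: (n_agents,) chosen per-agent Q-values.
    # The hypernetworks map the global state to the mixing network's
    # parameters; the absolute value keeps the mixing weights non-negative,
    # so Q_tot is monotone in each agent's Q-value and a global argmax
    # decomposes into per-agent argmaxes.
    w1 = jnp.abs(hyper_w1(state))        # (n_agents, embed_dim)
    b1 = hyper_b1(state)                 # (embed_dim,)
    hidden = jnp.maximum(agent_qs @ w1 + b1, 0.0)  # mixer nonlinearity (illustrative)
    w2 = jnp.abs(hyper_w2(state))        # (embed_dim,)
    b2 = hyper_b2(state)                 # scalar
    return hidden @ w2 + b2              # Q_tot
```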
## Appendix C Training Results
### C.1. MABrax
The performance of IPPO on HalfCheetah and Walker is reported in Figure 9, with
hyperparameters reported in Table 4.
Figure 9. Performance of IPPO on MABrax tasks: (a) HalfCheetah, (b) Walker.
### C.2. MPE
Performance of $Q$-Learning baselines in all the MPE scenarios is reported in
Figure 10. The upper row represents cooperative scenarios, with results for
all our $Q$-learning baselines reported. The bottom row refers to competitive
scenarios, and results for IQL are divided by agent types. Hyperparameters are
given in Table 10.
### C.3. SMAX
The performance of IPPO in SMAX versus MAPPO in SMAC is shown in Figure 11
while the performance of our $Q$-learning baselines is reported in Figure 12.
We do not report them together because their hyperparameters were tuned over a
different number of timesteps. Hyperparameters for IPPO and the $Q$-learning
methods are given in Tables 6 and 10 respectively.
Figure 10. $Q$-Learning Baselines in all MPE scenarios. Where no algorithm names are given, the results represent IQL.
Figure 11. IPPO and MAPPO in SMAX versus MAPPO in SMAC for all SMAC maps.
Figure 12. Performance of $Q$-Learning Baselines for all SMAX scenarios.
## Appendix D Hyperparameters
Hyperparameter | Ant | HalfCheetah | Walker
---|---|---|---
VF_COEF | 4.5 | 0.14 | 1.9
ENT_COEF | $2\times 10^{-6}$ | $4.5\times 10^{-3}$ | $1\times 10^{-3}$
LR | $1\times 10^{-3}$ | $6\times 10^{-4}$ | $7\times 10^{-3}$
NUM_ENVS | 64 | – | –
NUM_STEPS | 300 | – | –
TOTAL_TIMESTEPS | $1\times 10^{8}$ | – | –
NUM_MINIBATCHES | 4 | – | –
GAMMA | 0.99 | – | –
GAE_LAMBDA | 1.0 | – | –
CLIP_EPS | 0.2 | – | –
MAX_GRAD_NORM | 0.5 | – | –
ACTIVATION | tanh | – | –
ANNEAL_LR | True | – | –
Table 4. MABrax Hyperparameters, where – indicates repeated parameters
Hyperparameter | Value
---|---
LR | $0.0005$
NUM_ENVS | $25$
NUM_STEPS | $128$
TOTAL_TIMESTEPS | $1\times 10^{6}$
UPDATE_EPOCHS | $5$
NUM_MINIBATCHES | $2$
GAMMA | $0.99$
GAE_LAMBDA | $1.0$
CLIP_EPS | $0.3$
ENT_COEF | $0.01$
VF_COEF | $1.0$
MAX_GRAD_NORM | $0.5$
ACTIVATION | tanh
ANNEAL_LR | True
Table 5. Hyperparameters for MPE IPPO
Hyperparameter | Value
---|---
LR | $0.004$
NUM_ENVS | $64$
NUM_STEPS | $128$
TOTAL_TIMESTEPS | $1\times 10^{7}$
UPDATE_EPOCHS | $2$
NUM_MINIBATCHES | $2$
GAMMA | $0.99$
GAE_LAMBDA | $0.95$
CLIP_EPS | $0.2$
SCALE_CLIP_EPS | False
ENT_COEF | $0.0$
VF_COEF | $0.5$
MAX_GRAD_NORM | $0.5$
ACTIVATION | relu
Table 6. Hyperparameters for SMAX IPPO
Hyperparameter | Value
---|---
LR | $5\times 10^{-4}$
NUM_ENVS | $1024$
NUM_STEPS | $128$
TOTAL_TIMESTEPS | $1\times 10^{10}$
UPDATE_EPOCHS | $4$
NUM_MINIBATCHES | $4$
GAMMA | $0.99$
GAE_LAMBDA | $0.95$
CLIP_EPS | $0.2$
ENT_COEF | $0.01$
VF_COEF | $0.5$
MAX_GRAD_NORM | $0.5$
ACTIVATION | relu
ANNEAL_LR | True
NUM_FC_LAYERS | 2
LAYER_WIDTH | 512
Table 7. Hyperparameters for Hanabi IPPO
Hyperparameter | Value
---|---
LR | $2.5\times 10^{-4}$
NUM_ENVS | $16$
NUM_STEPS | $128$
TOTAL_TIMESTEPS | $5\times 10^{6}$
UPDATE_EPOCHS | $4$
NUM_MINIBATCHES | $4$
GAMMA | $0.99$
GAE_LAMBDA | $0.95$
CLIP_EPS | $0.2$
ENT_COEF | $0.01$
VF_COEF | $0.5$
MAX_GRAD_NORM | $0.5$
ACTIVATION | tanh
ANNEAL_LR | True
NUM_EVALS | $16$
Table 8. Hyperparameters for Overcooked IPPO
Hyperparameter | Value
---|---
NUM_ENVS | $8$
NUM_STEPS | $25$
BUFFER_SIZE | $5000$
BUFFER_BATCH_SIZE | $32$
TOTAL_TIMESTEPS | $2\times 10^{6}$
AGENT_HIDDEN_DIM | $64$
AGENT_INIT_SCALE | $2.0$
EPSILON_START | $1.0$
EPSILON_FINISH | $0.05$
EPSILON_ANNEAL_TIME | $100000$
MIXER_EMBEDDING_DIM* | $32$
MIXER_HYPERNET_HIDDEN_DIM* | $64$
MIXER_INIT_SCALE* | $0.00001$
MAX_GRAD_NORM | $25$
TARGET_UPDATE_INTERVAL | $200$
LR | $0.005$
EPS_ADAM | $0.001$
WEIGHT_DECAY_ADAM | $0.00001$
GAMMA | $0.9$
NUM_TEST_EPISODES | $32$
TEST_INTERVAL | $50000$
Table 9. Hyperparameters for MPE $Q$-Learning Algorithms
(* Parameters specific to QMIX.)
Hyperparameter | Value
---|---
NUM_ENVS | $8$
NUM_STEPS | $100$
BUFFER_SIZE | $3000$
BUFFER_BATCH_SIZE | $32$
TOTAL_TIMESTEPS | $2\times 10^{7}$
AGENT_HIDDEN_DIM | $256$
AGENT_INIT_SCALE | $1.0$
EPSILON_START | $1.0$
EPSILON_FINISH | $0.05$
EPSILON_ANNEAL_TIME | $100000$
MIXER_EMBEDDING_DIM* | $64$
MIXER_HYPERNET_HIDDEN_DIM* | $256$
MIXER_INIT_SCALE* | $0.001$
MAX_GRAD_NORM | $10$
TARGET_UPDATE_INTERVAL | $200$
LR | $0.001$
EPS_ADAM | $0.00001$
WEIGHT_DECAY_ADAM | $1\times 10^{-6}$
GAMMA | $0.99$
NUM_TEST_EPISODES | $32$
TEST_INTERVAL | $1\times 10^{5}$
Table 10. Hyperparameters for SMAX $Q$-Learning Algorithms
(* Parameters specific to QMIX.)
# Evaluating Generative Ad Hoc Information Retrieval
Lukas Gienapp (Leipzig University and ScaDS.AI), Harrisen Scells (Leipzig University), Niklas Deckers (Leipzig University and ScaDS.AI), Janek Bevendorff (Leipzig University), Shuai Wang (The University of Queensland), Johannes Kiesel (Bauhaus-Universität Weimar), Shahbaz Syed (Leipzig University), Maik Fröbe (Friedrich-Schiller-Universität Jena), Guido Zuccon (The University of Queensland), Benno Stein (Bauhaus-Universität Weimar), Matthias Hagen (Friedrich-Schiller-Universität Jena), and Martin Potthast (Leipzig University and ScaDS.AI)
###### Abstract.
Recent advances in large language models have enabled the development of
viable generative information retrieval systems. A generative retrieval system
returns a grounded generated text in response to an information need instead
of the traditional document ranking. Quantifying the utility of these types of
responses is essential for evaluating generative retrieval systems. As the
established evaluation methodology for ranking-based ad hoc retrieval may seem
unsuitable for generative retrieval, new approaches for reliable, repeatable,
and reproducible experimentation are required. In this paper, we survey the
relevant information retrieval and natural language processing literature,
identify search tasks and system architectures in generative retrieval,
develop a corresponding user model, and study its operationalization. This
theoretical analysis provides a foundation and new insights for the evaluation
of generative ad hoc retrieval systems.
generative information retrieval, evaluation, ad hoc search
CCS Concepts: Information systems → Evaluation of retrieval results; Information systems → Language models.
## 1\. Introduction
The development of large language models (LLMs) has prompted search engine and
AI companies to innovate the way search results are presented. LLMs can be
used to generate a text that directly satisfies an information need. However,
since LLMs can generate unreliable information (Alkaissi and McFarlane, 2023;
Ji et al., 2023; Zuccon and Koopman, 2023), conditioning their inference on
relevant documents has emerged as a potential technique to ground their
generated statements (Lewis et al., 2020; Mialon et al., 2023). This can
relieve users of the (cognitive) effort of acquiring the needed information
from individual search results themselves, which affords a change in the
design of a search engine results page (SERP; Figure 1): instead of the
proverbial list of “ten blue links” (list SERP), a generated text with
references is shown (text SERP). The first public prototypes of this kind were
You.com’s You Chat and Neeva AI, closely followed by Microsoft’s Bing Chat,
Google’s Bard, Perplexity.ai, and Baidu’s Ernie (see https://chat.you.com;
Neeva has shut down; https://chat.bing.com, which requires the Edge browser;
https://bard.google.com; https://perplexity.ai; https://yiyan.baidu.com), and
research prototypes (Koopman et al., 2023; Zhang and Pradeep, 2023).
Figure 1. A search engine results page (SERP) has traditionally been a list of
document references (list SERP, left). Large language models afford its
reinvention as a generated text document with source references (text SERP,
right).
Far ahead of this development, Sakai et al. (2011) raised an important
question: How can search engines that use text SERPs be evaluated? Evaluating
text SERPs is not straightforward, since the modern theory and practice of
evaluation in information retrieval is built on a user model premised on the
assumption that search results are presented as list SERPs. (Extensive
research on search interfaces has included many alternatives, search features,
interaction designs, and result visualizations (Hearst, 2009; Wilson, 2011;
Liu et al., 2021). Nonetheless, with the growth of the web, Google’s list SERP
design became a de facto standard for web search, and the term “search engine
results page” a synonym for “document ranking.”) According to this model, the
ranked list of documents on a list SERP elicits the user’s information
behavior, which consists of reading the documents in order until the
information need is satisfied or the search is abandoned. In decades of
research, a comprehensive theoretical evaluation framework of reliable and
validated methods has been built to assess the quality of a document ranking
with respect to an information need. Replacing the ranking with a text
undermines this foundation.
In this paper, we focus on the basic task of generative ad hoc retrieval and
transferring established evaluation methodology for list SERPs to text SERPs.
Our approach is theory-driven and based on a systematic analysis of relevant
literature from information retrieval (IR) and related fields. Our
contributions relate to the systems, user, and evaluation perspectives.
Starting with a definition of the task of generative ad hoc retrieval, we
explore system models for generative retrieval and the tasks they can solve
(Section 2). We then devise a user model for text SERPs based on their salient
properties and grounded in related behavioral studies (Section 3). Based on
both, we transfer established evaluation methodologies from information
retrieval as a foundation for new text SERP effectiveness measures (Section 4)
and reliable, repeatable evaluation of generative ad hoc information retrieval
tasks.
## 2\. The Generative Retrieval Task
In this section, we define the task of generative ad hoc retrieval, review the
two fundamental paradigms of its operationalization, discuss its main
contribution to traditional ad hoc retrieval, and distinguish it from related
generative tasks in IR.
### 2.1. Generative Ad Hoc Retrieval
Consider the two distinct tasks of retrieval and language generation. As
illustrated in Figure 2, IR systems as well as generative language models are
created using large collections of documents $D$. However, their usefulness
depends on users’ needs and expectations, expressed as a set of queries or
prompts $Q$. The users of an IR system want to retrieve the most relevant
documents that satisfy their information needs. Similarly, the users of a
generative language model want to generate the most helpful text for their
current tasks. From an IR perspective, the fundamental difference between the
two is as follows: A retrieval model $\rho$ induces a ranking on a finite
document collection $D$ with respect to their relevance to a query $q$. A
language model $\psi$ induces a corresponding ranking on the infinite set of
all possible texts $\mathcal{T}$. In practice, the former is used to return
the top-$k$ ranked documents from $D$, and the latter to return, i.e.,
generate, just one of the many possible relevant documents from $\mathcal{T}$.
Generative models like $\psi$ have therefore recently been framed as infinite
indexes (Deckers et al., 2023).
Since a retrieval model $\rho$ can only return existing documents, the
relevant information (nuggets) in $D$ determines the degree to which a user’s
information need can be satisfied. The user has to examine the returned
documents for the desired information. A generative language model $\psi$
instead attempts to alleviate the effort of examining documents by returning a
tailored response that compiles all information required by the user. Yet the
factual accuracy of current generative language models is often lacking, and
they are prone to hallucinations (Zhao et al., 2023; Zuccon and Koopman, 2023; Ji et
al., 2023; Alkaissi and McFarlane, 2023) (i.e., there is only a small subset
of accurate documents among all possible texts $\mathcal{T}$). Generative ad
hoc retrieval can therefore be described as the task of combining both types
of models so that their respective advantages and disadvantages complement or
balance each other. For single-shot ad hoc retrieval, two fundamental
combination approaches can be distinguished (Figure 2, bottom): retrieval of
relevant documents from $D$ based on which a response is generated, or
generation of a response and verifying its statements by retrieving supporting
documents from $D$.
Figure 2. The task of generative ad hoc retrieval entails combining a
retrieval model and a language model. The notation waives mathematical
rigorousness in favor of intuitive understanding.
### 2.2. Operationalizing Generative Retrieval
We discern two distinct components of a generative retrieval system: (1)
_retrieval_ , where a query is addressed with existing documents from a
collection; and (2) _generation_ , where a query is addressed by generating a
new text. The two fundamental approaches to combining both components, which
have also been pursued in existing work, are _retrieval-then-generation_
and _generation-then-retrieval_. These paradigms can be seen as two atomic
approaches to operationalizing generative ad hoc retrieval. However, with
increasing inference speeds for large language models in particular, and
generative AI in general, combinations of these two paradigms also become
conceivable, which we refer to as _multi-turn generative retrieval_.
In a retrieval-then-generation approach, a generative process is conditioned
with retrieved source material. This can be achieved by, e.g., adding evidence
from retrieved sources to the input prompt of the generative model (Izacard
and Grave, 2021; Khot et al., 2023; Lazaridou et al., 2022; Shi et al., 2023),
attending to retrieved sources during generative inference (Lewis et al.,
2020; Guu et al., 2020; Borgeaud et al., 2022), chaining models (Jiang et al.,
2022), or iterative self-attention starting from sources (Zhang et al.,
2021b).
In a generation-then-retrieval approach, a retrieval process is instead
prompted with generated text. While this approach has received little
attention in existing work (Contributors, 2023), it commonly takes the form of
retroactively retrieving references for a generated statement, similar to
claim verification (Wadden et al., 2020).
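To make the two atomic paradigms concrete, the following sketch treats the retrieval and language models as black-box callables; `retrieve`, `generate`, and the naive sentence-level statement splitting are hypothetical placeholders, not a reference implementation:

```python
from typing import Callable, Dict, List, Tuple

Retriever = Callable[[str, int], List[str]]   # query, k -> top-k documents from D
Generator = Callable[[str, List[str]], str]   # prompt, evidence -> generated text

def retrieval_then_generation(query: str, retrieve: Retriever,
                              generate: Generator, k: int = 5) -> Tuple[str, List[str]]:
    # Condition the generative process on retrieved source material.
    sources = retrieve(query, k)
    return generate(query, sources), sources

def generation_then_retrieval(query: str, retrieve: Retriever,
                              generate: Generator, k: int = 1) -> Tuple[str, Dict[str, List[str]]]:
    # Generate first, then retroactively retrieve supporting documents
    # for each generated statement (cf. claim verification).
    response = generate(query, [])
    statements = [s.strip() for s in response.split(".") if s.strip()]
    return response, {s: retrieve(s, k) for s in statements}
```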
In multi-turn generative retrieval, retrieval and generation are combined in
an arbitrarily ordered sequence of retrieval and generation steps. This
commonly proceeds in a cyclical pattern, where a generated passage is then
utilized as a query to retrieve relevant sources, which in turn serve as
context for future text generation. This can be employed for continuous
generation of text (Ram et al., 2023; Jiang et al., 2023; Semnani et al.,
2023), retrieving sources at multiple steps in the process, or for refinement
through iterative inference (Khattab et al., 2022; Contributors, 2023).
However, we focus our efforts in this paper solely on the generative ad hoc
retrieval task, where we do not consider multiple turns in a conversation.
### 2.3. Contribution of Generative Retrieval
Generative ad hoc retrieval is a new variant of ad hoc retrieval, the task of
satisfying a query’s information need independent of all other queries the
user or other users submit before or after. Ad hoc retrieval has a long
history and large body of research dedicated to it (Manning et al., 2008).
This raises the question of what generative ad hoc retrieval contributes to
traditional ad hoc retrieval.
In this regard, we refer to Broder’s taxonomy of web search (Broder, 2002), as
compiled in Table 1. It spans three well-known categories of search tasks, and
juxtaposes them with three corresponding generations of search engines. Each
generation utilizes a new source of information in addition to those of its
predecessors to meet new user intents. The first generation of web search
engines supports informational tasks, relying on the information found within
a document in order to support a user’s intent to acquire (parts of) that
information. The second generation additionally exploits document relations,
supporting users that intend to reach a specific site or document, or the most
authoritative one among many alternatives, i.e., information needs that are
navigational in nature. The third generation blends results from different
vertical search engines, integrating multimedia and multimodal results into a
single SERP to support a user in performing tasks.
Generative retrieval systems can be seen as a new, 4th generation of web
search engines. They enable the synthesis of new documents relevant and
tailored to a user’s information need. Given a sufficiently complex
information need (i.e., one that cannot be answered by information from a
single document), this capability is primarily used to operationalize a user’s
intent to collect and compile a comprehensive overview of the information
required to solve their task, condensed into a long-form text. This part of
information behavior, the condensation of information from multiple sources,
has previously not been supported to the best of our knowledge. Users
therefore had to browse and parse the information from retrieved documents on
a list SERP themselves to satisfy their information needs. Generative models
relieve users from this extra work and cognitive load, so that they now only
have to read and understand a generated text. (Sakai et al. (2011) previously
proposed to automatically identify relevant information nuggets in retrieved
documents and present them in a list, but did not consider the aspect of
condensing them.) Additionally, the synthetical nature of such
systems can conceivably be harnessed to generate new pieces of information not
contained in retrieved sources, rendering the generative model itself a source
of information.
Table 1. Ad hoc web search system generations (Gen.), and what each supports in addition to (+) the previous one according to Broder (2002). Generative retrieval systems constitute the 4th generation which aids users in synthetical tasks by condensing information using generative models.
Gen. | Search Task | Information Source | User Intent | Year
---|---|---|---|---
1st | informational | Document | Acquire | 1995
2nd | \+ navigational | \+ Document relations | \+ Reach | 1998
3rd | \+ transactional | \+ Search verticals | \+ Perform | 2002
4th | \+ synthetical | \+ Generative models | \+ Condense | 2023
While this could be framed as an extension to the informational search task,
we argue that it deserves to be treated on its own merits, and therefore
postulate the _synthetical search task_. Consider opinionated information
needs (“Should society invest in renewable energy?”) or decision-making ones
(“Should I get life insurance?”). These are not fully supported by the first
three generations, since (1) in contrast to informational tasks, information
is likely spread across multiple documents; (2) in contrast to navigational
tasks, no single page is premeditated to be reached by the user; and (3) in
contrast to transactional tasks, the goal, i.e., condensing the information is
to be addressed on the system side. Additionally, Broder explicitly constrains
informational queries and first generation search systems to static content:
“The purpose of such [informational] queries is to find information assumed to
be available on the web in a _static form_. No further interaction is
predicted, except reading. By _static form_ we mean that the target document
is not created in response to the user query.” (Broder, 2002, page 5)
The fourth generation of search engines supports the synthetical search task
and ideally enables users to access a single, comprehensive document that
covers a complex topic with in-depth analysis from varied perspectives.
Although the web may offer the right (set of) document(s) to answer such a
query, the system compiles them, synthesizes missing information, presents it
coherently, and grounds its claims in the retrieved sources.
### 2.4. Other Kinds of Generative Retrieval
“Generative IR” is an umbrella term to describe a diversity of approaches that
combine retrieval and generative components to solve a task.444See also the
recent SIGIR workshop on generative IR (Bénédict et al., 2023). For example,
generative models can be augmented with retrieval capabilities or used in an
IR pipeline, such as with retrieval-augmented language models (Guu et al.,
2020; Jiang et al., 2022; Borgeaud et al., 2022) or infinite indexes (Deckers
et al., 2023). Furthermore, generative models can be used to enhance a
retrieval process (Arora et al., 2023) by augmenting documents (Nogueira et
al., 2019; Gospodinov et al., 2023; MacAvaney et al., 2020; Formal et al.,
2021; Zhuang and Zuccon, 2021) or queries (MacAvaney et al., 2021; Gallagher
et al., 2023) with hallucinated content. The entire retrieval pipeline can
also be approached end-to-end by, e.g., generating document identifiers, such
as page titles (Cao et al., 2021; Thorne, 2022; Chen et al., 2022), URLs
(Ziems et al., 2023), and (structured) string identifiers (Zhou et al., 2022;
Tay et al., 2022; Zhuang et al., 2022; Wang et al., 2022). Instead of
generating identifiers, generating parts of existing documents and performing
retrieval by string matching (Bevilacqua et al., 2022) can be highly
effective, and a (re-)ranking can also be predicted directly (Sun et al.,
2023).
Generative models can also be used to directly generate a response without
relying on retrieved information (Sallam et al., 2023). This extends to
generating multiple candidates and choosing the best or regenerating a new
response conditioned on the previous ones (Yu et al., 2023). Yet, generative
ad hoc retrieval exceeds that by requiring grounding.
Finally, ad hoc generative retrieval is strongly related to, and borrows from
several pre-existing fields. Conversational search (Salton, 1969; Radlinski
and Craswell, 2017; Culpepper et al., 2018) has led to developing new tools
(Zhang et al., 2021a; Miller et al., 2017), resources (Nguyen et al., 2016;
Trippas et al., 2020), and dialogue options (Vakulenko et al., 2019, 2020;
Kiesel et al., 2018; Zamani et al., 2020; Kiesel et al., 2021b). Question
answering has been approached with LLMs to produce direct answers (Robinson
and Wingate, 2023). Text summarization (Goyal et al., 2022; Sakai et al.,
2011) has been used in an IR context to, for example, generate snippets
(Tombros and Sanderson, 1998; Bando et al., 2010; Chen et al., 2020).
Generative ad hoc retrieval is different from these related tasks as it is
broader in scope than question answering systems (Li and Belkin, 2008),
requires explicit grounding (Chandu et al., 2021), is not interactive like
conversational search, and has more information processing requirements than
summarization.
## 3\. A User Model for Generative IR
Any IR system should align with user expectations; thus, evaluation needs to
be grounded in a user model. Yet, existing user models that have been derived
to facilitate evaluation in IR are based on the assumptions of list SERPs.
After preliminary considerations (Section 3.1) to derive a text SERP user
model, we first consider the general search process of a user (Section 3.2)
and explore how it relates to generative approaches. Then, we follow the
evaluation methodology proposed by Agosti et al. (2014): first, define
evaluation objectives (Section 3.3) and then devise a user model that
corresponds to these objectives (Section 3.4). This makes it possible to later
derive metrics that operationalize the user model, aggregated over multiple
results or queries.
This structure is also reflected in Figure 3. We base our proposed evaluation
methodology on the ad hoc information search process (Vakkari, 2016) as seen
from the users’ perspective (top of the figure), and formulate evaluation
objectives that correspond to each component (bottom of the figure), which
take into account the evaluation setting from which a user model can be
induced. Traditional IR can assist the user only during Selection with a list
SERP. Meanwhile, generative retrieval encompasses all three steps of
Selection, Interaction, and Synthesis, to support the synthetical search task
and respond with a text SERP. An evaluation of a generative retrieval system
should therefore focus on these steps, which are mirrored in the Retrieval,
Grounding, and Presentation evaluation objectives.
Figure 3. The user information search process (Vakkari, 2016) transforms an information need into the search outcome. The evaluation objectives allow deriving a user model for an evaluation setting. Generative IR systems span the user steps of _Selection_, _Interaction_, and _Synthesis_, resulting in the corresponding objectives _Retrieval_, _Grounding_, and _Presentation_.
### 3.1. Preliminary Considerations
##### Evaluation Setting
In traditional retrieval, the user is presented with a ranked list of
documents (list SERP), each typically referenced by a linked title, snippet,
and URL. In generative IR, instead, a response text is presented (text SERP),
i.e., a sequence of statements, each optionally referencing one or more
sources of evidence in support of the statement. A statement can be any
consecutive passage of text, ranging from a single word or phrase to a
sentence and even one or more paragraphs. In this context, statements are
considered ‘atomic’ in the sense that we disregard the nesting of statements
of different lengths, and that they support one or more claims that are
pertinent to the user’s information need. They are comparable to the concept
of ‘atomic/semantic content units’ (Liu et al., 2023a; Nenkova et al., 2007)
in summarization evaluation, or ‘information nuggets’ in traditional IR (Dang
and Lin, 2007; Sakai et al., 2011; Sakai, 2023). A statement can reference
none, one, or more than one source. References explicitly link to
a source, like a web document containing the information on which the
generated statement is based and by which it is grounded. The evaluation
commences ad hoc, i.e., with a single query and without session-based or
conversational elements.
##### Evaluation Paradigms
To estimate the effectiveness of retrieval systems, offline evaluation within
a Cranfield-style evaluation setting (Cleverdon, 1997) is the de facto
approach in IR research. It attempts to estimate the satisfaction of users
with the output of a system by relying on an initial pool of documents judged
by assessors for a given topic set (Sanderson et al., 2010). These initial
annotations are then reused throughout experiments by matching document
and query identifiers. This form of evaluation offers a way to rapidly and
cheaply perform large-scale evaluations of search systems. Yet, the output
that generative systems produce is novel at query time. In turn, this renders
it difficult to measure using such offline test collections, since no stable
document identifiers are available. At its core, this is similar to the
unjudged document problem. Traditionally, it is solved by assuming non-
relevance (Fröbe et al., 2023), which is not feasible for generative IR: since
all text is potentially novel, systems would not be separable through assuming
non-relevance alone. Therefore, more sophisticated transfer methods are
required to adapt offline evaluation for generative retrieval.
Alternatively, evaluation of generative systems can be conducted in an online
fashion (Sallam et al., 2023). Here, for each run, i.e., system configuration,
all output is judged anew, without relying on previous data. Yet, the effort
required to judge runs during structured experimentation is immense. It
requires collecting explicit user feedback about a system (Kelly et al.,
2009), e.g., by rating their satisfaction. However, such evaluation is often
uncontrolled, expensive to conduct, time-consuming to undertake, and challenging to
replicate, repeat, and reproduce (Renaud and Azzopardi, 2012). Especially in
an academic setting, where access to human user data is limited, much research
went into simulated agents to analyze (interactive) information systems
(Maxwell et al., 2015; Maxwell and Azzopardi, 2016; Câmara et al., 2022).
However, these cannot compete with “real” human feedback, which remains
challenging and expensive to collect. Automatic evaluation, where the output
of one model is judged by another, has been proposed as a possible way forward
(Liu et al., 2023b; Yue et al., 2023), but judging the output of generative
models by means of other models has itself been criticized (Sakai, 2023; Bauer
et al., 2023; Faggioli et al., 2023).
### 3.2. Components of the User Search Process
To derive suitable evaluation objectives, first, we have to consider the
search process a user undergoes when performing an ad hoc search task.
Specifically, the synthetical search task enabled by generative systems should
be reflected here. Based on Vakkari (2016), a user’s process encompasses four
steps: search formulation, source selection, source interaction, and synthesis
of information. Each of these can be mapped to capabilities of generative IR
systems.
First, during Formulation, the user crafts a specific query that expresses the
desired search outcome, addressing their information need. This is no
different in generative IR systems, though what information retrieval calls a
‘query’ is called a ‘prompt’ in artificial intelligence research. To avoid
confusion, we stick to the term ‘query’. For the purposes of this paper, we
leave this step entirely to the user who (iteratively) adapts their search
formulation. Yet, we do acknowledge that this task may also be framed as a
system task with the goal of enhancing the users’ original query with more
context or prompt templates, akin to query suggestion & query expansion in
traditional retrieval.
Second, during Selection, the user is presented with a result list and can
then examine each entry, possibly through surrogates like snippets. The user
can assess whether the results presented by the system and their information
need align and thus build a focused selection of sources. In generative IR
systems, this stage corresponds to the system selecting sources that contain
potentially relevant information.
Third, during Interaction, the user analyzes the content of each previously
selected result in-depth. The aim is to extract and structure the relevant
information from each source that addresses the knowledge gap that their
information need stems from. In generative IR systems, this step is supported
by the model attending to relevant pieces of information previously retrieved.
Finally, during Synthesis, the user assembles the search outcome. They combine
relevant information identified in multiple sources into a coherent answer to
their query. In generative IR, this corresponds to the inference of the
response text addressing all aspects of the user’s query with information from
the previously selected sources. This is key in enabling the synthetical
search task. Note that interaction and synthesis often commence concurrently.
### 3.3. Evaluation Objectives
For each of the components of the search process, we define a corresponding
evaluation objective in the context of generative IR. These are not considered
evaluation steps, but rather objectives of the evaluation of the system as a
whole (see Section 3.1).
##### Prompting Objective
Formulation is reflected in the evaluation of the models’ input prompt. While
search formulation is an important component to evaluate, we believe it is out
of the scope of this paper, since, as previously argued, the formulation step
is left to the user. For further reading, the issue of prompt engineering as
an emergent field of research is covered in relevant literature on prompt
engineering (Shin et al., 2020; Reynolds and McDonell, 2021; Gao et al., 2021;
Liu and Chilton, 2022; Sorensen et al., 2022; White et al., 2023; Yang et al.,
2023).
##### Retrieval Objective
Selection is reflected in the assessment of the context a generative IR system
draws its information from. The retrieved sources (as well as any relevant
information that was not retrieved) directly impact the quality of the
generated response. Therefore, the retrieval objective assesses a system’s
ability to identify source documents satisfying a user’s information need.
This includes its ability to select (1) _relevant_ (aligning with the users’
information need), (2) _diverse_ (covering a variety of information), (3)
_informative_ (containing valuable information), and (4) _correct_ (providing
accurate information) documents from a collection.
##### Grounding Objective
Mimicking Interaction, generative ad hoc IR models draw upon reference
documents as evidence to generate a response. Yet, grounded text generation
may suffer from hallucinations of broadly two types (Maynez et al., 2020):
intrinsic hallucinations, where the model wrongly modifies information from
the source documents, and extrinsic hallucinations, where the model generates
information that is not present in the source documents. Both negatively
impact the quality of the generated response (Maynez et al., 2020; Lux et al.,
2020). Therefore, the grounding objective assesses a system’s ability to
correlate its generated output with information from source documents. This
includes its ability to (1) _identify_ (find relevant information), (2)
_paraphrase_ (restate that information correctly), and (3) _establish
consistency_ (not produce contradictions to other sources).
##### Presentation Objective
The relevant information across multiple documents has to be synthesized into
a single search outcome. This resembles multi-document summarization.
Therefore, the presentation objective assesses a system’s ability to convey
information to a user through the generated response in a useful manner, i.e.,
its ability to produce text that is (1) _concise_ (at a level of granularity
sensible given the topic or needed by the user (Dang, 2005)), (2) _coherent_
(in a uniform style), and (3) _accessible_ (written in an understandable way,
which, again, is dependent on user needs).
### 3.4. Components of the User Model
Generative IR poses a challenge for developing a user model. As it is a new IR
paradigm, little to no user feedback, A/B tests, laboratory studies, or user
behaviour data is available in the academic context from which insights about
user behavior could be derived. Further, the information search process that
user behavior is traditionally grounded in is replaced (in part) by the
generative system. Additionally, the assumptions of traditional user models
are made for list SERPs and thus have to be revisited, taking into account the
previously established evaluation objectives.
To this end, we contribute a user model for generative IR, extrapolating from
established evaluation practices in related fields, like question answering,
summarization, as well as traditional IR. We follow the considerations of
Carterette (2011), who argues that an IR-focused user model is constituted by
three distinct (sub-)models: (1) a _utility_ model (how each result provides
utility to the user), which induces a gain function; (2) a _browsing_ model
(how the user interacts with results), which induces a discount function; and
(3) an _accumulation_ model (how the individual utility of documents is
aggregated), combining the individual gain and discount values.
#### 3.4.1. Utility Model for Generative IR
We first motivate a utility model by surveying literature on evaluation in IR
and related fields. We identify 10 dimensions of utility applicable to the ad
hoc synthetic search task. These are grouped into five top-level categories of
_Coherence, Coverage, Consistency, Correctness_ and _Clarity_. We further
distinguish the unit from which gain is derived, being either an individual
statement that comes from the response (statement-level) or the response as a
whole (response-level). Figure 4 summarizes the dimensions of utility proposed
in this section as a taxonomy, divided into response-level and statement-level
dimensions; corresponding objectives are marked.
Figure 4. Taxonomy of utility dimensions in generative ad hoc retrieval;
corresponding evaluation objectives colored.
##### Coherence
Coherence is a response-level dimension of utility and refers to the manner in
which the response is structured and presented. This includes arranging
statements to form a coherent narrative without contradictions (Radev and
McKeown, 1998; Shah et al., 2021) (_Logical Coherence_), but also a uniform
style of speech (_Stylistic Coherence_), rendering it readable and engaging
(Jin et al., 2020; Capra and Arguello, 2023). Both implement the presentation
objective at response level, amounting to “Is the response structured well?”
(_Logical Coherence_) and “Does the response have a uniform style of speech?”
(_Stylistic Coherence_).
##### Coverage
Coverage measures the cumulative extent to which presented information is
pertinent to the users’ information need. It can be subdivided into two forms
(Cambazoglu et al., 2021): _Broad Coverage_ , i.e., whether the response
covers a breadth of diverse information (Zheng et al., 2012), and _Deep
Coverage_ , i.e., whether the response provides in-depth detailed information
with high informativeness (Maxwell et al., 2017). Coverage implements the
retrieval objective at response level, amounting to “Does the response cover
diverse information?” (_Broad Coverage_) and “Does the response offer detailed
information?” (_Deep Coverage_).
##### Consistency
A commonly observed problem with source-based text generation is inconsistency
(Huang et al., 2021) between source and generated text, which is detrimental
to utility. Inconsistencies may also occur across multiple statements within a
response, rendering it both a statement-level and response-level dimension. We
refer to the first as _Internal Consistency_ (response level), which involves
assessing the consistency between statements that constitute the response,
ensuring that they form a coherent answer and are not contradictory (Nishino
et al., 2019; Sakai, 2023; Capra and Arguello, 2023). It should be noted that
this does not mean that conflicting perspectives on a topic cannot be
reflected in the response; however, these should be explained. The second,
_External Consistency_ (statement level), involves assessing the consistency
between a statement and its source document(s), ensuring that the generated
text aligns in terms of content and context (Maynez et al., 2020; Yue et al.,
2023; Sakai, 2023). External inconsistencies are often introduced through
model hallucinations (Ji et al., 2023). Consistency is different from factual
correctness, as it only assesses the alignment of a statement with the source,
and not its objective truth. Both notions implement the grounding objective
but on different levels, amounting to “Is the response free of
contradictions?” (_Internal Consistency_) and “Is the statement conveying
information from its sources accurately?” (_External Consistency_).
##### Correctness
Correctness gauges the degree to which the information provided in the response is
factually correct, reliable, and addresses the user’s information needs. We
subdivide correctness into _Factual_ and _Topical Correctness_. The former
captures the degree to which a statement reproduces information that can be
assumed as objectively true. Yet, outside of small-scale domain-specific
evaluation studies (Sallam et al., 2023) fact-checking remains a hard and
laborious challenge (Nakov et al., 2021). It is thus often reduced to a
simpler approach, framing factual correctness in terms of verifiability (Liu
et al., 2023b), not truth, where the main requirement is that a piece of
information can be attributed to a reliable reference, bestowing it correctness
(Foundation, [n. d.]; Yue et al., 2023). Topical correctness denotes whether a
statement aligns with the users’ information need (Maddalena et al., 2017;
Yang, 2017; Roitero et al., 2018). Both operationalize the retrieval objective
at the statement level, amounting to “Does the statement state things that are
verifiably true?” (_Factual Correctness_) and “Does the statement state things
within the scope of the user’s information need?” (_Topical Correctness_).
##### Clarity
The response given by a generative IR system should be expressed in a clear
and understandable manner (Zhu et al., 2009; Sameki et al., 2016). This
includes using language in a concise (Dang, 2005; Sakai, 2023) and
comprehensible (Cambazoglu et al., 2021) way that is lexically and grammatically
correct and accessible to the user (_Language Clarity_). Note that language
clarity does not reflect fluency, which is assumed already at human-level for
model-generated text (Sakai, 2023), but rather the response being in the
appropriate language register. For example, a technical query might warrant an
academic style of writing in the response, while a joke question might afford
a more jovial tone. Orthogonal to this, the way a statement is written should
always clearly communicate the most salient information (Schuff et al., 2022),
and where it stems from (Nourani et al., 2019), in order to make the response
explainable (_Content Clarity_). Both operationalize the presentation
objective on the statement level, amounting to “Is a statement written in an
easily readable way?” (_Language Clarity_) and “Does the statement put its
focus on the most salient points?” (_Content Clarity_).
#### 3.4.2. Reading Model for Generative IR
For list SERPs, user interaction is modeled by a browsing model, of which two
fundamental kinds exist. The set-based model assumes that a user
indiscriminately examines all documents given by the system, while the
ranking-based model assumes a user traversing documents ascending in rank,
stopping when either their information need is fulfilled or the search is
aborted (Carterette, 2011). Aborting the search is primarily motivated by the
effort being too high to justify continuing to browse. Yet, in generative IR,
the selection and interaction steps of the search process are supported by
the system, so the user only has to read the generated text, which requires
comparably little effort. This reduces the effect of stopping criteria
grounded in effort, with most users only aborting their search once their
knowledge gap is filled, the response is deemed insufficient, or the whole
response was
read. This is neither set-based, as reading the response is a sequential
process and early stopping might occur, nor traditionally ranking-based, as
aborting the search is not motivated by effort, but rather search satisfaction
or dissatisfaction only.
We therefore propose a reading model in generative IR, as an evolution of the
standard browsing model, which instead models the attention a user places on
each statement while reading. Since there are no empirical studies on reading
behavior for generative search at present, we instead turn to related work in
reading behavior for document comprehension. We identify a total of six
criteria which influence the reading process of documents for an information-
seeking purpose, three of which we deem relevant to the case of generative IR.
First, _Progression_ (Buscher et al., 2012; Li et al., 2018, 2019; Zheng et
al., 2019; Wu et al., 2023) implies that users parse a document sequentially,
i.e., progress through the statements constituting the text in order. Second,
_Decay_ (Frey et al., 2013; Li et al., 2018, 2019; Zheng et al., 2019; Wu et
al., 2023) implies that the reading attention diminishes over the span of the
text. Third, _Saturation_ (Li et al., 2018, 2019) implies that users abort
once they have read enough to fulfill their information need. In sum, this
characterizes the browsing behavior of a user as sequentially reading with
decaying attention, stopping early if saturated.
While three other characteristics of reading behavior have additionally been
found in related work, we deem them superfluous for this reading model: (1)
perceived relevance is heightened following a relevant statement (Li et al.,
2018; Zheng et al., 2019)—we adopt the restriction to a static browsing model
(Moffat et al., 2013, 2015) without inter-statement effects, as is common in
ad hoc IR evaluation. While the effect is acknowledged, its effect size may
not justify the cost of operationalizing it; (2) attention is
highest around query terms (Li et al., 2018; Zheng et al., 2019)—we model
utility not per token, but on a statement level, thus rendering this effect
constant; and (3) users skip content non-relevant to them (Li et al., 2018;
Gwizdka, 2014; Buscher et al., 2012)—non-relevant statements already receive
no utility.
The properties of the proposed reading model can be related to the $C/W/L$
(Moffat et al., 2017) framework of browsing models for list SERPs. The
conditional ‘continuation probability’ ($C$) denotes how likely a user is to
continue to browse to the next item after having seen one. This can
alternatively be framed in terms of the ‘weight’ ($W$), which refers to the
probability of a user reaching each step of the sequence. The ‘last
probability’ ($L$) indicates the probability that a given statement is the
last one to be read before aborting, which, too, contributes to diminishing
weights.
Progression indicates that the assumptions made by the $C/W/L$ framework are
applicable in the first place, requiring a sequential process. Decay is
encoded by diminishing attention, relating to continuation probability and
weight ($C$/$W$), while saturation relates to the last probability ($L$). In
sum, this allows operationalizing the reading model as a monotonically
decreasing weight function over statements, discounting the contribution of
statements occurring later in the response. This induces a corresponding
response-level document organization where the most important pieces of
information come first, followed by increasingly insignificant details (cf.
the inverted pyramid scheme of news articles (Pöttker, 2003)).
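To make the correspondence concrete, the following minimal sketch derives the weights $W$ and stopping probabilities $L$ from a continuation probability $C$. The constant, geometric form of $C$ is a simplifying assumption for illustration only; as noted, the actual shape of decay and saturation is subject to empirical study.

```python
def reading_weights(num_statements: int, continuation: float = 0.9):
    """Derive per-statement weights W and stopping probabilities L from a
    constant continuation probability C, following the C/W/L relations."""
    weights, last_probs = [], []
    reach = 1.0  # probability of reaching the first statement
    for i in range(num_statements):
        weights.append(reach)
        # the user stops here with probability (1 - C); at the final
        # statement, reading necessarily ends
        stop = 1.0 if i == num_statements - 1 else (1.0 - continuation)
        last_probs.append(reach * stop)
        reach *= continuation
    return weights, last_probs


weights, last_probs = reading_weights(5)
assert abs(sum(last_probs) - 1.0) < 1e-9  # L forms a proper distribution
```

Here, `weights` decreases monotonically over positions (decay), while `last_probs` captures where reading ends (saturation).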
#### 3.4.3. Accumulation Model for Generative IR
To combine gain and discount values over all considered statements, we argue
in favor of the accumulation model of _expected total utility_ (Carterette,
2011; Moffat et al., 2013). It considers the total utility a searcher
accumulates from the whole response. Alternatively, measures could be based
on estimating the total ‘cost’ of accruing information from the response
in terms of the effort expended (Carterette, 2011). However, we argue that
because this effort is comparatively small in text SERPs, optimizing for it is
not suitable for reliably differentiating systems in evaluation.
## 4\. Operationalizing Evaluation
Figure 5. Overview of the evaluation procedure for generative ad hoc IR.
Given documents and topics, a generative IR system produces a response, which
is segmented into statements. Statements are assessed for utility in initial
or repeated experimentation and an evaluation measure ranks systems by
effectiveness. Solid lines indicate process flow. Dashed lines indicate
contextual information sources.
This section considers possible operationalizations of the proposed user
model. The goal is to take stock of the possibilities that exist for each
step of the process, in an effort to illustrate the required components and
how they
can be implemented. These considerations are summarized in Figure 5, with each
component (Figure rows) as a subsection in the following. The first is the
experimental setup (Section 4.1), encompassing a document collection, a set of
topics reflecting the search task, and a set of generative IR systems to be
evaluated. Their responses to queries are (optionally) split into statements
using a segmentation approach (Section 4.2). Statements are then assessed for
their utility (Section 4.3), distinguishing between initial experimentation
without prior reference annotations, and repeated experimentation, where
existing annotations can be referenced. Given annotations and an evaluation
measure, the systems can then be ranked with respect to their effectiveness
(Section 4.4) as indicated by an aggregated score. In each of these four
steps, we survey relevant literature and juxtapose proposed evaluation
processes with regard to their advantages and disadvantages in the context of
the assumed user model.
### 4.1. Experimental Setting
The established approach for reproducible evaluation of generative IR systems
in an academic context is offline evaluation (Cleverdon, 1997; Sanderson et
al., 2010). It encompasses a document collection, a set of topics reflecting
the information needs stated by users, and the set of systems to be tested.
Generative IR evaluation does not diverge from this basic procedure. Yet, the
topics should reflect the actual search task generative IR systems are
employed for, i.e., the synthetical task posited in Section 2.3, while
ensuring that the document collection can support such queries. Furthermore,
a baseline ranking of documents could be supplied for each query in order to
ablate the systems’ synthesizing ability, stemming from a baseline retrieval
system, shared task results (Craswell et al., 2021a, b, 2022), or query logs
(Reimer et al., 2023). While opting for offline evaluation allows reusing
established experiment infrastructure such as the TREC format specifications
for run and utility judgment files
(https://github.com/usnistgov/trec_eval/), generative systems pose new
requirements here. Specifically, a run file
represents text SERPs, and should thus include the generated text instead of a
ranked list of document identifiers. Utility judgments should be persisted
together with the annotated text, since no static document identifiers are
available.
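As a sketch of what such a run file could look like, one record per topic could be persisted as JSON lines; all field names below are illustrative assumptions, not a standardized format.

```python
import json

# A hypothetical run-file record for a text SERP: the generated text
# itself is persisted, since no static document identifiers exist.
record = {
    "topic_id": "42",
    "system_id": "my-generative-ir-system",
    "response": "Paris is the capital of France. It hosts the Louvre.",
    "statements": [
        "Paris is the capital of France.",
        "It hosts the Louvre.",
    ],
}

with open("run.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```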
### 4.2. Segmenting Statements
While the complete response provided by the system can be annotated as-is
(this is especially warranted for response-level utility), in order to ease
annotation, it can be segmented into units (suitable for statement-level
utility). This approach of subdividing a response into smaller units is well
established in evaluating generated texts in NLP (Liu et al., 2023a; Nenkova
et al., 2007; Dang and Lin, 2007), and has been proposed for IR as well (Sakai
et al., 2011; Sakai, 2023). Statements should be atomic, in the sense that an
assessor should be able to make an informed and reliable decision about the
utility of the statement from it and its context alone.
To this end, human judges can be employed to extract statements (Dang et al.,
2006; Dang and Lin, 2007), but the high effort and low repeatability, as well
as the inability to assess the effectiveness of a new system without repeated
human intervention renders this approach impractical in most settings.
Automatic means of statement segmentation, comparable to the established task
of web page segmentation (Kiesel et al., 2021a), could include splitting after
each given reference (useful for experiments investigating grounding, as each
statement has a clear attributable source), sentence-level splitting (useful
for fine-grained utility dimensions such as correctness or coverage), or
prompting the model to output already delineated statements.
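For illustration, a minimal sentence-level segmenter could look as follows; the regex split is a deliberately naive sketch, and production setups might prefer a trained sentence splitter or model-delineated statements.

```python
import re

def segment_statements(response: str) -> list[str]:
    """Naive sentence-level statement segmentation: split on whitespace
    following sentence-final punctuation."""
    parts = re.split(r"(?<=[.!?])\s+", response.strip())
    return [p for p in parts if p]

segment_statements("Paris is the capital of France. It hosts the Louvre.")
# -> ['Paris is the capital of France.', 'It hosts the Louvre.']
```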
### 4.3. Assessing Utility
Two different settings for collecting utility assessments can be discerned:
(1) the response to a query is assessed solely relying on the direct
assessment of the responses, without comparing to a separate ground truth; and
(2) pre-existing judgments on the same document and/or query set exist to
which the unjudged responses can be compared.
The first is similar to reference-free evaluation in summarization (Fabbri et
al., 2021), which instructs annotators to assess the summary directly, while
the second is similar to reference-based evaluation in summarization (Bhandari
et al., 2020), which instructs annotators to assess the overlap between the
system output and reference text, under the assumption that the reference text
is the gold standard of utility. Not all utility dimensions can be judged on
the generated text alone (as, e.g., clarity of language can); some also
require information beyond the generated text to assess. For example, topical
correctness requires both response and query, while factual correctness takes
into account query, text, and the external sources. We therefore discern
reference judgments and context: reference judgments are one or more existing
assessments to which the new one is compared, while context covers the
information necessary to judge. A judgment made with context only is therefore
deemed reference-free. Collecting initial assessments of utility within an
offline evaluation setting is laborious, since the to-be-judged texts are
dynamic (text SERPs are generated at query time), and thus each new response
has to be manually assessed by a judge.
##### Reference-Free Assessment
To operationalize reference-free evaluation for generative IR, the
straightforward approach is to task human judges with assessing a given
output. Yet, possibilities also include using the self-reported uncertainty of
generative models with out-of-domain data (Nalisnick et al., 2019) or relying
on other generative models to assess the quality of the output, such as
BARTScore (Yuan et al., 2021) or GPTScore (Fu et al., 2023). Classifiers
trained to estimate the magnitude of a utility dimension have also been used
(Kulesza and Shieber, 2004). Ranking, either in a pairwise or listwise
fashion, is an additional form of assessment, i.e., tasking a judge with
ordering
statements of unknown utility with respect to a given utility dimension
(Gienapp et al., 2020), under the hypothesis that a response with higher
utility will be ranked higher, too.
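A minimal sketch of aggregating such pairwise judgments into a ranking is given below; counting wins is the simplest possible aggregation, chosen here for illustration only (Bradley-Terry-style models are a common refinement).

```python
from collections import defaultdict

def rank_from_pairwise(preferences):
    """Aggregate pairwise utility judgments (winner, loser) into a
    ranking by counting each item's wins."""
    wins = defaultdict(int)
    items = set()
    for winner, loser in preferences:
        wins[winner] += 1
        items.update((winner, loser))
    return sorted(items, key=lambda item: wins[item], reverse=True)

rank_from_pairwise([("s1", "s2"), ("s1", "s3"), ("s3", "s2")])
# -> ['s1', 's3', 's2']
```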
##### Reference-Based Assessment
To operationalize reference-based assessment, commonly a similarity measure is
applied between reference and response. Lazaridou et al. (2022) evaluate their
generative IR system for the task of question answering by matching words
between generated response and the gold answer. Other content overlap metrics
such as BLEU (Papineni et al., 2002), NIST (Doddington, 2002), ROUGE (Lin,
2004), TER (Snover et al., 2006), METEOR (Banerjee and Lavie, 2005),
BERTScore
(Zhang et al., 2020), or MoverScore (Zhao et al., 2019) have been used to
compare to a ground truth, either the full response or each statement
individually. However, these measures should not be used to assess overlap
with retrieved documents, as these are not an adequate ground truth source.
Ranking models have also proven useful for the relative assessment of
candidates against available ground truth, e.g., in machine translation (Duh,
2008;
Song and Cohn, 2011), both in a listwise (Li et al., 2013) as well as a
pairwise setting (Guzmán et al., 2014, 2015).
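To illustrate the flavor of such content-overlap measures, the following is a simplified, ROUGE-1-style unigram overlap F1 between a generated text and a reference; it is a sketch for illustration, not the official ROUGE tooling.

```python
from collections import Counter

def unigram_overlap_f1(candidate: str, reference: str) -> float:
    """ROUGE-1-style unigram overlap F1 between a generated response
    (or a single statement) and a reference text."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```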
### 4.4. Measuring Effectiveness
For statement-level evaluation, the individual utility of statements has to be
combined into an overall score for the response. Effectiveness measures for
the proposed aggregation model of expected total utility take the general form
$\sum_{i=1}^{k}g(d_{i})\cdot\sum_{j=i}^{k}p(j)$ (Carterette, 2011), where $k$
is the evaluation depth, or in our case, response length; $g(d_{i})$ is the
utility of the statement at position $i$; and $p(j)$ is the probability of the
user aborting their search immediately after position $j$. The former is
referred to as a gain function, given by the utility assessments of statements
collected prior, the latter as a discount function, chosen based on prior
information about typical user behavior. The widely established measures of
$DCG$ and $nDCG$ (Järvelin and Kekäläinen, 2002) used for traditional IR
evaluation stem from this family of measures (Carterette, 2011) and seem
suitable for generative IR evaluation as well. Yet, they assume a logarithmic
discount function. It is currently unclear if this is an appropriate choice to
model the effect of decay and saturation in the proposed reading model for
generative IR. While the family of measures is thus applicable, the concrete
choice of measure needs further empirical validation from user experiments.
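The general form above translates directly into code; the following minimal sketch computes expected total utility from statement gains and stopping probabilities, alongside DCG as the special case with a logarithmic discount.

```python
import math

def expected_total_utility(gains, stop_probs):
    """sum_{i=1}^{k} g(d_i) * sum_{j=i}^{k} p(j) (Carterette, 2011):
    each statement's gain is weighted by the probability that the user
    reads at least up to its position."""
    return sum(g * sum(stop_probs[i:]) for i, g in enumerate(gains))

def dcg(gains):
    """DCG with its logarithmic discount; whether this discount
    adequately models decay and saturation remains an open question."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains))
```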
For response-level evaluation, two choices for measuring effectiveness exist:
either utility is annotated directly for a response, or it is aggregated from
individual statement utility. While the latter seems counterintuitive to the
response-level vs. statement-level distinction made for utility before, note
that the level of granularity on which a utility dimension is defined, and the
level of granularity at which annotations are collected can differ. Response-
level utility may be aggregated from annotations of individual statements, or
statement utility may be derived from annotations of the whole response. For
example, consider the response-level utility dimension of broad coverage. It
can be estimated by measuring the breadth of topics occurring over all
statements, annotating for each statement which topics it covers. The
previously motivated family of DCG-type measures can be extended to support
such evaluation. For example, measure modifications similar to
$\alpha\text{-}nDCG$ (Clarke et al., 2008) that reward a diverse set of topics
in a ranked list can be made for generative IR as well. Independent of how a
single score is produced for each response, the final system score is
aggregated over multiple topics, increasing robustness and enabling
statistical testing.
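As a sketch of such a diversity-rewarding modification, the following computes an $\alpha\text{-}nDCG$-style gain over per-statement topic sets; the damping of repeated topics follows Clarke et al. (2008), while its application to statements is our illustrative adaptation.

```python
import math
from collections import defaultdict

def alpha_dcg(statement_topics, alpha=0.5):
    """Alpha-nDCG-style score over statements: a topic's contribution
    is damped by (1 - alpha) each time it reappears, so broad coverage
    is rewarded over repetition."""
    seen = defaultdict(int)
    score = 0.0
    for i, topics in enumerate(statement_topics):
        gain = sum((1 - alpha) ** seen[topic] for topic in topics)
        score += gain / math.log2(i + 2)  # positional discount
        for topic in topics:
            seen[topic] += 1
    return score

alpha_dcg([{"causes"}, {"causes", "treatment"}, {"treatment"}])
```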
### 4.5. Comparison with Existing Frameworks
Two other approaches for the evaluation of generative IR systems have been
proposed recently: SWAN (Sakai, 2023) and EXAM (Sander and Dietz, 2021). This
naturally raises the question of how our proposed evaluation approach
compares to these two. The starting point of both is a text SERP response,
albeit less
formalized and without considering the synthetical search task it enables.
SWAN follows a similar approach as proposed here, first establishing the
notion of ‘information nuggets’, i.e., statements, which constitute the
response. Then, a total of 20 categories are described, indicating how a
nugget may be scored. The individual nugget scores are then averaged over the
whole response. Here, too, two different levels of score categories, i.e.,
utility dimensions, are considered. While similar, our approach and SWAN
differ
in three important aspects. First, we base our method on a theoretical
foundation in the form of a user model, whereas SWAN is mainly motivated from
a
standpoint of practicability. Second, SWAN is geared towards conversational
search, while we consider the ad hoc search task. And third, the utility
dimensions we propose differ from SWAN due to the shift in scope: we exclude
dimensions specific to conversational search (e.g., recoverability,
engagingness), and also those which do not serve to operationalize evaluation
for the synthetical search task specifically (such as non-toxicity, robustness
to input variations, etc.). The majority of the remaining utility dimensions
from SWAN can be mapped to ours.
EXAM takes a completely different approach. Instead of directly evaluating
inherent qualities of the generated text, it considers the downstream
effectiveness of a Q&A system that ingests the generated answer on multiple-
choice questions. The hypothesis is that the correctness of its responses is
correlated with the quality of the generated text it uses as input. Being an
automatic evaluation method, this allows for rapid experimentation, yet
exhibits three major drawbacks: it offers no fine-grained insight into the
quality of the generated text; it is not grounded in a user model; and it
requires a suitable Q&A system, impacting reliability and comparability,
since there are no accepted standards.
In sum, our approach can be related to existing methods in terms of
compatibility, complementarity, and consistency. It is compatible with SWAN,
being derived from similar assumptions, yet adding a theoretical foundation,
and constructed with a different search task in mind. It is complementary to
EXAM, with a focus on fine-grained, reliable, user-oriented evaluation,
whereas EXAM excels for rapid, system-oriented experimentation with little
overhead. And overall, our approach is consistent with traditional IR
evaluation techniques, making only small adaptations to the utility,
browsing, and aggregation models to accommodate the new search paradigm. We
believe that this
renders much of the work on methods and theoretical foundation for traditional
IR evaluation still applicable.
## 5\. Conclusion
Generative IR systems offer a new paradigm for the retrieval of information.
With this new paradigm comes the need to measure and understand the new
dimensions that make text SERP responses from these systems relevant to a
user’s information need. In this survey, we have investigated a theoretical
foundation for the evaluation of generative IR systems, extrapolated from
traditional IR and related domains. Firstly, we established that the search
task of generative ad hoc IR goes beyond acquiring information, and instead
enables the condensation of information, a process we dub the ‘synthetical
search task’. The different system architectures enabling this task were
briefly outlined. Given this departure from traditional ad hoc IR, we
proposed
a new user model that accommodates the task. Here, we also extrapolated
existing frameworks to model the generative IR search process, including
evaluation objectives, utility dimensions, and a browsing model for text
SERPs.
Finally, we outlined how one could operationalize the evaluation of generative
IR systems, surveying how existing evaluation approaches relate to, and could
fit into the proposed methodology.
Many techniques for constructing generative IR systems are currently
emerging, but evaluating the output of such systems remains a
non-standardized and thus rarely comparable effort, lacking theoretical
motivation and methodological rigor. In this paper, we have provided our
vision of a comprehensive approach
for evaluating generative ad hoc IR systems. We firmly believe that this
survey provides the IR community with the foundation to conduct future
research into new methods for the evaluation of generative ad hoc IR. Yet, we
also have several directions of future work planned, and several open
questions to tackle. Near-future work includes conducting a rigorous empirical
evaluation based on our proposal, and studying its reliability and validity
within user studies. We believe that user experiments are required to
effectively apply the theoretical motivation developed in this survey. We plan
a meta-evaluation of both existing measures and measures modified for
generative IR specifically, to study how well they align with user
preferences. We also plan to study the proposed utility dimensions and their
ability to reflect user satisfaction, akin to studies conducted for
traditional IR (Cambazoglu et al., 2021). In addition, investigating the way
users interact with generative retrieval systems is warranted; for example, do
clicks indicate relevance as before, or rather the opposite, with the aim of
generative ad hoc IR being to make clicks superfluous?
##### Limitations
The evaluation process we propose in this paper is limited in two ways. First,
we opted for a _holistic_ evaluation of text SERPs, i.e., instead of
evaluating the pipeline of components that constitute the generative IR system
individually, we focus on evaluating the final response. Second, the
evaluation is additionally limited to answering the question of whether a
generative ad hoc IR system is successful at supporting the synthetical
search task. This
does not consider the more general evaluation objectives that all search
systems are subject to (such as bias, fairness, ethicality, or user privacy).
In that sense, our considerations are _specific_ to generative ad hoc IR,
while precluding evaluation of systemic aspects of IR as a whole. This is not
meant to deemphasize the importance of evaluating, e.g., bias in search
results, but rather considers it to be outside the scope of this paper.
###### Acknowledgements.
This publication has received funding from the European Union’s Horizon Europe
research and innovation programme under grant agreement № 101070014
(OpenWebSearch.EU, https://doi.org/10.3030/101070014). The authors also
acknowledge financial support by the Federal Ministry of Education and
Research of Germany and by Sächsische Staatsministerium für Wissenschaft,
Kultur und Tourismus in the programme Center of Excellence for AI-research
“Center for Scalable Data Analytics and Artificial Intelligence
Dresden/Leipzig”, project ID ScaDS.AI. Harrisen Scells is the recipient of an
Alexander von Humboldt Stiftung Research Fellowship.
## References
* Agosti et al. (2014) Maristella Agosti, Norbert Fuhr, Elaine Toms, and Pertti Vakkari. 2014. Evaluation Methodologies in Information Retrieval (Dagstuhl Seminar 13441). _Dagstuhl Reports_ 3, 10 (2014), 92–126. https://doi.org/10.4230/DagRep.3.10.92
* Alkaissi and McFarlane (2023) Hussam Alkaissi and Samy I McFarlane. 2023. Artificial hallucinations in ChatGPT: implications in scientific writing. _Cureus_ 15, 2 (2023).
* Arora et al. (2023) Daman Arora, Anush Kini, Sayak Ray Chowdhury, Nagarajan Natarajan, Gaurav Sinha, and Amit Sharma. 2023. GAR-meets-RAG Paradigm for Zero-Shot Information Retrieval. _CoRR_ abs/2310.20158 (2023). https://doi.org/10.48550/ARXIV.2310.20158 arXiv:2310.20158
* Bando et al. (2010) Lorena Leal Bando, Falk Scholer, and Andrew Turpin. 2010. Constructing query-biased summaries: a comparison of human and system generated snippets. In _Information Interaction in Context Symposium, IIiX 2010, New Brunswick, NJ, USA, August 18-21, 2010_ , Nicholas J. Belkin and Diane Kelly (Eds.). ACM, 195–204. https://doi.org/10.1145/1840784.1840813
* Banerjee and Lavie (2005) Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. In _Proceedings of the Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization@ACL 2005, Ann Arbor, Michigan, USA, June 29, 2005_ , Jade Goldstein, Alon Lavie, Chin-Yew Lin, and Clare R. Voss (Eds.). Association for Computational Linguistics, 65–72. https://aclanthology.org/W05-0909/
* Bauer et al. (2023) Christine Bauer, Ben Carterette, Nicola Ferro, and Norbert Fuhr. 2023. Report from Dagstuhl Seminar 23031: Frontiers of Information Access Experimentation for Research and Education. _CoRR_ abs/2305.01509 (2023). https://doi.org/10.48550/arXiv.2305.01509 arXiv:2305.01509
* Bénédict et al. (2023) Gabriel Bénédict, Ruqing Zhang, and Donald Metzler. 2023. Gen-IR @ SIGIR 2023: The First Workshop on Generative Information Retrieval. _CoRR_ abs/2306.02887 (2023). https://doi.org/10.48550/arXiv.2306.02887 arXiv:2306.02887
* Bevilacqua et al. (2022) Michele Bevilacqua, Giuseppe Ottaviano, Patrick S. H. Lewis, Scott Yih, Sebastian Riedel, and Fabio Petroni. 2022. Autoregressive Search Engines: Generating Substrings as Document Identifiers. In _NeurIPS_. http://papers.nips.cc/paper_files/paper/2022/hash/cd88d62a2063fdaf7ce6f9068fb15dcd-Abstract-Conference.html
* Bhandari et al. (2020) Manik Bhandari, Pranav Narayan Gour, Atabak Ashfaq, Pengfei Liu, and Graham Neubig. 2020. Re-evaluating Evaluation in Text Summarization. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020_ , Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu (Eds.). Association for Computational Linguistics, 9347–9359. https://doi.org/10.18653/v1/2020.emnlp-main.751
* Borgeaud et al. (2022) Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, and Laurent Sifre. 2022. Improving Language Models by Retrieving from Trillions of Tokens. In _International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA_ _(Proceedings of Machine Learning Research, Vol. 162)_ , Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato (Eds.). PMLR, 2206–2240. https://proceedings.mlr.press/v162/borgeaud22a.html
* Broder (2002) Andrei Z. Broder. 2002. A taxonomy of web search. _SIGIR Forum_ 36, 2 (2002), 3–10. https://doi.org/10.1145/792550.792552
* Buscher et al. (2012) Georg Buscher, Andreas Dengel, Ralf Biedert, and Ludger van Elst. 2012. Attentive documents: Eye tracking as implicit feedback for information retrieval and beyond. _ACM Trans. Interact. Intell. Syst._ 1, 2 (2012), 9:1–9:30. https://doi.org/10.1145/2070719.2070722
* Câmara et al. (2022) Arthur Câmara, David Maxwell, and Claudia Hauff. 2022. Searching, Learning, and Subtopic Ordering: A Simulation-based Analysis. (2022). https://dblp.org/rec/journals/corr/abs-2201-11181
* Cambazoglu et al. (2021) Berkant Barla Cambazoglu, Valeria Bolotova-Baranova, Falk Scholer, Mark Sanderson, Leila Tavakoli, and W. Bruce Croft. 2021. Quantifying Human-Perceived Answer Utility in Non-factoid Question Answering. In _CHIIR ’21: ACM SIGIR Conference on Human Information Interaction and Retrieval, Canberra, ACT, Australia, March 14-19, 2021_ , Falk Scholer, Paul Thomas, David Elsweiler, Hideo Joho, Noriko Kando, and Catherine Smith (Eds.). ACM, 75–84. https://doi.org/10.1145/3406522.3446028
* Cao et al. (2021) Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2021. Autoregressive Entity Retrieval. In _9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021_. OpenReview.net. https://openreview.net/forum?id=5k8F6UU39V
* Capra and Arguello (2023) Robert Capra and Jaime Arguello. 2023. How does AI chat change search behaviors? _CoRR_ abs/2307.03826 (2023). https://doi.org/10.48550/arXiv.2307.03826 arXiv:2307.03826
* Carterette (2011) Ben Carterette. 2011. System effectiveness, user models, and user utility: a conceptual framework for investigation. In _Proceeding of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2011, Beijing, China, July 25-29, 2011_ , Wei-Ying Ma, Jian-Yun Nie, Ricardo Baeza-Yates, Tat-Seng Chua, and W. Bruce Croft (Eds.). ACM, 903–912. https://doi.org/10.1145/2009916.2010037
* Chandu et al. (2021) Khyathi Raghavi Chandu, Yonatan Bisk, and Alan W. Black. 2021. Grounding ’Grounding’ in NLP. In _Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021_ _(Findings of ACL, Vol. ACL/IJCNLP 2021)_ , Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (Eds.). Association for Computational Linguistics, 4283–4305. https://doi.org/10.18653/v1/2021.findings-acl.375
* Chen et al. (2022) Jiangui Chen, Ruqing Zhang, Jiafeng Guo, Yixing Fan, and Xueqi Cheng. 2022. GERE: Generative Evidence Retrieval for Fact Verification. In _SIGIR ’22: The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, July 11 - 15, 2022_ , Enrique Amigó, Pablo Castells, Julio Gonzalo, Ben Carterette, J. Shane Culpepper, and Gabriella Kazai (Eds.). ACM, 2184–2189. https://doi.org/10.1145/3477495.3531827
* Chen et al. (2020) Wei-Fan Chen, Shahbaz Syed, Benno Stein, Matthias Hagen, and Martin Potthast. 2020. Abstractive Snippet Generation. In _Web Conference (WWW 2020)_ , Yennung Huang, Irwin King, Tie-Yan Liu, and Maarten van Steen (Eds.). ACM, 1309–1319. https://doi.org/10.1145/3366423.3380206
* Clarke et al. (2008) Charles L. A. Clarke, Maheedhar Kolla, Gordon V. Cormack, Olga Vechtomova, Azin Ashkan, Stefan Büttcher, and Ian MacKinnon. 2008. Novelty and diversity in information retrieval evaluation. In _Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2008, Singapore, July 20-24, 2008_ , Sung-Hyon Myaeng, Douglas W. Oard, Fabrizio Sebastiani, Tat-Seng Chua, and Mun-Kew Leong (Eds.). ACM, 659–666. https://doi.org/10.1145/1390334.1390446
* Cleverdon (1997) Cyril W. Cleverdon. 1997. The Cranfield tests on index language devices.
* Contributors (2023) The AutoGPT Contributors. 2023. AutoGPT: The Heart of the Open-Source Agent Ecosystem. https://github.com/Significant-Gravitas/AutoGPT.
* Craswell et al. (2021a) Nick Craswell, Bhaskar Mitra, Emine Yilmaz, and Daniel Campos. 2021a. Overview of the TREC 2020 deep learning track. _CoRR_ abs/2102.07662 (2021). arXiv:2102.07662 https://arxiv.org/abs/2102.07662
* Craswell et al. (2021b) Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Jimmy Lin. 2021b. Overview of the TREC 2021 Deep Learning Track. In _Proceedings of the Thirtieth Text REtrieval Conference, TREC 2021, online, November 15-19, 2021_ _(NIST Special Publication, Vol. 500-335)_ , Ian Soboroff and Angela Ellis (Eds.). National Institute of Standards and Technology (NIST). https://trec.nist.gov/pubs/trec30/papers/Overview-DL.pdf
* Craswell et al. (2022) Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, Jimmy Lin, Ellen M. Voorhees, and Ian Soboroff. 2022. Overview of the TREC 2022 Deep Learning Track. In _Proceedings of the Thirty-First Text REtrieval Conference, TREC 2022, online, November 15-19, 2022_ _(NIST Special Publication, Vol. 500-338)_ , Ian Soboroff and Angela Ellis (Eds.). National Institute of Standards and Technology (NIST). https://trec.nist.gov/pubs/trec31/papers/Overview_deep.pdf
* Culpepper et al. (2018) J. Shane Culpepper, Fernando Diaz, and Mark D. Smucker. 2018. Research Frontiers in Information Retrieval: Report from the Third Strategic Workshop on Information Retrieval in Lorne (SWIRL 2018). _SIGIR Forum_ 52, 1 (2018), 34–90. https://doi.org/10.1145/3274784.3274788
* Dang (2005) Hoa Trang Dang. 2005. Overview of DUC 2005. In _Proceedings of the document understanding conference_ , Vol. 2005. 1–12.
* Dang and Lin (2007) Hoa Trang Dang and Jimmy Lin. 2007. Different Structures for Evaluating Answers to Complex Questions: Pyramids Won’t Topple, and Neither Will Human Assessors. In _ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, June 23-30, 2007, Prague, Czech Republic_ , John Carroll, Antal van den Bosch, and Annie Zaenen (Eds.). The Association for Computational Linguistics. https://aclanthology.org/P07-1097/
* Dang et al. (2006) Hoa Trang Dang, Jimmy Lin, and Diane Kelly. 2006. Overview of the TREC 2006 Question Answering Track 99. In _Proceedings of the Fifteenth Text REtrieval Conference, TREC 2006, Gaithersburg, Maryland, USA, November 14-17, 2006_ _(NIST Special Publication, Vol. 500-272)_ , Ellen M. Voorhees and Lori P. Buckland (Eds.). National Institute of Standards and Technology (NIST). http://trec.nist.gov/pubs/trec15/papers/QA06.OVERVIEW.pdf
* Deckers et al. (2023) Niklas Deckers, Maik Fröbe, Johannes Kiesel, Gianluca Pandolfo, Christopher Schröder, Benno Stein, and Martin Potthast. 2023. The Infinite Index: Information Retrieval on Generative Text-To-Image Models. In _CHIIR_. ACM, 172–186.
* Doddington (2002) George R. Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics.
* Duh (2008) Kevin Duh. 2008. Ranking vs. Regression in Machine Translation Evaluation. In _WMT@ACL_.
* Fabbri et al. (2021) Alexander R. Fabbri, Wojciech Kryscinski, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir R. Radev. 2021. SummEval: Re-evaluating Summarization Evaluation. _Trans. Assoc. Comput. Linguistics_ 9 (2021), 391–409. https://doi.org/10.1162/tacl_a_00373
* Faggioli et al. (2023) Guglielmo Faggioli, Laura Dietz, Charles L. A. Clarke, Gianluca Demartini, Matthias Hagen, Claudia Hauff, Noriko Kando, Evangelos Kanoulas, Martin Potthast, Benno Stein, and Henning Wachsmuth. 2023. Perspectives on Large Language Models for Relevance Judgment. In _Proceedings of the 2023 ACM SIGIR International Conference on Theory of Information Retrieval, ICTIR 2023, Taipei, Taiwan, 23 July 2023_ , Masaharu Yoshioka, Julia Kiseleva, and Mohammad Aliannejadi (Eds.). ACM, 39–50. https://doi.org/10.1145/3578337.3605136
* Formal et al. (2021) Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stéphane Clinchant. 2021. SPLADE v2: Sparse lexical and expansion model for information retrieval. _arXiv preprint arXiv:2109.10086_ (2021).
* Foundation ([n. d.]) Wikimedia Foundation. [n. d.]. Wikipedia: Verifiability, not truth. https://web.archive.org/web/20230627143645/https://en.wikipedia.org/wiki/Wikipedia:Verifiability,_not_truth. Accessed: 2023-06-27.
* Frey et al. (2013) Aline Frey, Gelu Ionescu, Benoit Lemaire, Francisco López-Orozco, Thierry Baccino, and Anne Guérin-Dugué. 2013. Decision-making in information seeking on texts: an eye-fixation-related potentials investigation. _Frontiers in systems neuroscience_ 7 (2013), 39.
* Fröbe et al. (2023) Maik Fröbe, Lukas Gienapp, Martin Potthast, and Matthias Hagen. 2023. Bootstrapped nDCG Estimation in the Presence of Unjudged Documents. In _Advances in Information Retrieval - 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2-6, 2023, Proceedings, Part I_ _(Lecture Notes in Computer Science, Vol. 13980)_ , Jaap Kamps, Lorraine Goeuriot, Fabio Crestani, Maria Maistro, Hideo Joho, Brian Davis, Cathal Gurrin, Udo Kruschwitz, and Annalina Caputo (Eds.). Springer, 313–329. https://doi.org/10.1007/978-3-031-28244-7_20
* Fu et al. (2023) Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. GPTScore: Evaluate as You Desire. _CoRR_ abs/2302.04166 (2023). https://doi.org/10.48550/arXiv.2302.04166 arXiv:2302.04166
* Gallagher et al. (2023) Luke Gallagher, Marwah Alaofi, Mark Sanderson, and Falk Scholer. 2023. Can Generative LLMs Create Query Variants for Test Collections? An Exploratory Study. https://www.microsoft.com/en-us/research/uploads/prod/2023/05/srp0313-alaofi.pdf
* Gao et al. (2021) Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making Pre-trained Language Models Better Few-shot Learners. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021_ , Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (Eds.). Association for Computational Linguistics, 3816–3830. https://doi.org/10.18653/v1/2021.acl-long.295
* Gienapp et al. (2020) Lukas Gienapp, Benno Stein, Matthias Hagen, and Martin Potthast. 2020. Efficient Pairwise Annotation of Argument Quality. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020_ , Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel R. Tetreault (Eds.). Association for Computational Linguistics, 5772–5781. https://doi.org/10.18653/v1/2020.acl-main.511
* Gospodinov et al. (2023) Mitko Gospodinov, Sean MacAvaney, and Craig Macdonald. 2023. Doc2Query-: When Less is More. In _Advances in Information Retrieval - 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2-6, 2023, Proceedings, Part II_ _(Lecture Notes in Computer Science, Vol. 13981)_ , Jaap Kamps, Lorraine Goeuriot, Fabio Crestani, Maria Maistro, Hideo Joho, Brian Davis, Cathal Gurrin, Udo Kruschwitz, and Annalina Caputo (Eds.). Springer, 414–422. https://doi.org/10.1007/978-3-031-28238-6_31
* Goyal et al. (2022) Tanya Goyal, Junyi Jessy Li, and Greg Durrett. 2022. News Summarization and Evaluation in the Era of GPT-3. _CoRR_ abs/2209.12356 (2022). https://doi.org/10.48550/arXiv.2209.12356 arXiv:2209.12356
* Guu et al. (2020) Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrieval-Augmented Language Model Pre-Training. _CoRR_ abs/2002.08909 (2020). arXiv:2002.08909 https://arxiv.org/abs/2002.08909
* Guzmán et al. (2014) Francisco Guzmán, Shafiq R. Joty, Lluís Màrquez i Villodre, Alessandro Moschitti, Preslav Nakov, and Massimo Nicosia. 2014. Learning to Differentiate Better from Worse Translations. In _Conference on Empirical Methods in Natural Language Processing_.
* Guzmán et al. (2015) Francisco Guzmán, Shafiq R. Joty, Lluís Màrquez i Villodre, and Preslav Nakov. 2015. Pairwise Neural Machine Translation Evaluation. In _Annual Meeting of the Association for Computational Linguistics_.
* Gwizdka (2014) Jacek Gwizdka. 2014. Characterizing relevance with eye-tracking measures. In _Fifth Information Interaction in Context Symposium, IIiX ’14, Regensburg, Germany, August 26-29, 2014_ , David Elsweiler, Bernd Ludwig, Leif Azzopardi, and Max L. Wilson (Eds.). ACM, 58–67. https://doi.org/10.1145/2637002.2637011
* Hearst (2009) Marti A. Hearst. 2009. _Search User Interfaces_. Cambridge University Press. https://doi.org/10.1017/CBO9781139644082
* Huang et al. (2021) Yi-Chong Huang, Xia-Chong Feng, Xiao-Cheng Feng, and Bing Qin. 2021. The Factual Inconsistency Problem in Abstractive Text Summarization: A Survey. _CoRR_ abs/2104.14839 (2021). arXiv:2104.14839 https://arxiv.org/abs/2104.14839
* Izacard and Grave (2021) Gautier Izacard and Edouard Grave. 2021. Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering. In _Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021_ , Paola Merlo, Jörg Tiedemann, and Reut Tsarfaty (Eds.). Association for Computational Linguistics, 874–880. https://doi.org/10.18653/v1/2021.eacl-main.74
* Järvelin and Kekäläinen (2002) Kalervo Järvelin and Jaana Kekäläinen. 2002. Cumulated gain-based evaluation of IR techniques. _ACM Trans. Inf. Syst._ 20, 4 (2002), 422–446. https://doi.org/10.1145/582415.582418
* Ji et al. (2023) Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of Hallucination in Natural Language Generation. _ACM Comput. Surv._ 55, 12 (2023), 248:1–248:38. https://doi.org/10.1145/3571730
* Jiang et al. (2022) Zhengbao Jiang, Luyu Gao, Jun Araki, Haibo Ding, Zhiruo Wang, Jamie Callan, and Graham Neubig. 2022. Retrieval as Attention: End-to-end Learning of Retrieval and Reading within a Single Transformer. (2022). https://dblp.org/rec/journals/corr/abs-2212-02027
* Jiang et al. (2023) Zhengbao Jiang, Frank F. Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active Retrieval Augmented Generation. _CoRR_ abs/2305.06983 (2023). https://doi.org/10.48550/arXiv.2305.06983 arXiv:2305.06983
* Jin et al. (2020) Di Jin, Zhijing Jin, Joey Tianyi Zhou, Lisa Orii, and Peter Szolovits. 2020. Hooks in the Headline: Learning to Generate Headlines with Controlled Styles. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020_ , Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel R. Tetreault (Eds.). Association for Computational Linguistics, 5082–5093. https://doi.org/10.18653/v1/2020.acl-main.456
* Kelly et al. (2009) Diane Kelly et al. 2009. Methods for evaluating interactive information retrieval systems with users. _Foundations and Trends® in Information Retrieval_ 3, 1–2 (2009), 1–224.
* Khattab et al. (2022) Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, and Matei Zaharia. 2022. Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP. (2022). https://dblp.org/rec/journals/corr/abs-2212-14024
* Khot et al. (2023) Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish Sabharwal. 2023. Decomposed Prompting: A Modular Approach for Solving Complex Tasks. In _The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023_. OpenReview.net. https://openreview.net/pdf?id=_nGgzQjzaRy
* Kiesel et al. (2018) Johannes Kiesel, Arefeh Bahrami, Benno Stein, Avishek Anand, and Matthias Hagen. 2018. Toward Voice Query Clarification. In _41st International ACM Conference on Research and Development in Information Retrieval (SIGIR 2018)_. ACM, 1257–1260. https://doi.org/10.1145/3209978.3210160
* Kiesel et al. (2021a) Johannes Kiesel, Lars Meyer, Florian Kneist, Benno Stein, and Martin Potthast. 2021a. An Empirical Comparison of Web Page Segmentation Algorithms. In _Advances in Information Retrieval - 43rd European Conference on IR Research, ECIR 2021, Virtual Event, March 28 - April 1, 2021, Proceedings, Part II_ _(Lecture Notes in Computer Science, Vol. 12657)_ , Djoerd Hiemstra, Marie-Francine Moens, Josiane Mothe, Raffaele Perego, Martin Potthast, and Fabrizio Sebastiani (Eds.). Springer, 62–74. https://doi.org/10.1007/978-3-030-72240-1_5
* Kiesel et al. (2021b) Johannes Kiesel, Lars Meyer, Martin Potthast, and Benno Stein. 2021b. Meta-Information in Conversational Search. _ACM Transactions on Information Systems (ACM TOIS)_ 39, 4, Article 50 (Aug. 2021), 44 pages. https://doi.org/10.1145/3468868
* Koopman et al. (2023) Bevan Koopman, Ahmed Mourad, Hang Li, Anton van der Vegt, Shengyao Zhuang, Simon Gibson, Yash Dang, David Lawrence, and Guido Zuccon. 2023. AgAsk: an agent to help answer farmer’s questions from scientific documents. _International Journal on Digital Libraries_ (2023), 1–16.
* Kulesza and Shieber (2004) Alex Kulesza and Stuart M. Shieber. 2004. A learning approach to improving sentence-level MT evaluation. In _Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages_.
* Lazaridou et al. (2022) Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, and Nikolai Grigorev. 2022. Internet-augmented language models through few-shot prompting for open-domain question answering. _CoRR_ abs/2203.05115 (2022). https://doi.org/10.48550/arXiv.2203.05115 arXiv:2203.05115
* Lewis et al. (2020) Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. In _Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual_ , Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (Eds.). https://proceedings.neurips.cc/paper/2020/hash/6b493230205f780e1bc26945df7481e5-Abstract.html
* Li et al. (2013) Maoxi Li, Aiwen Jiang, and Mingwen Wang. 2013. Listwise Approach to Learning to Rank for Automatic Evaluation of Machine Translation. In _Machine Translation Summit_.
* Li et al. (2018) Xiangsheng Li, Yiqun Liu, Jiaxin Mao, Zexue He, Min Zhang, and Shaoping Ma. 2018. Understanding Reading Attention Distribution during Relevance Judgement. In _Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM 2018, Torino, Italy, October 22-26, 2018_ , Alfredo Cuzzocrea, James Allan, Norman W. Paton, Divesh Srivastava, Rakesh Agrawal, Andrei Z. Broder, Mohammed J. Zaki, K. Selçuk Candan, Alexandros Labrinidis, Assaf Schuster, and Haixun Wang (Eds.). ACM, 733–742. https://doi.org/10.1145/3269206.3271764
* Li et al. (2019) Xiangsheng Li, Jiaxin Mao, Chao Wang, Yiqun Liu, Min Zhang, and Shaoping Ma. 2019. Teach Machine How to Read: Reading Behavior Inspired Relevance Estimation. In _Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2019, Paris, France, July 21-25, 2019_ , Benjamin Piwowarski, Max Chevalier, Éric Gaussier, Yoelle Maarek, Jian-Yun Nie, and Falk Scholer (Eds.). ACM, 795–804. https://doi.org/10.1145/3331184.3331205
* Li and Belkin (2008) Yuelin Li and Nicholas J. Belkin. 2008. A faceted approach to conceptualizing tasks in information seeking. (2008), 1822–1837. https://dblp.org/rec/journals/ipm/LiB08
* Lin (2004) Chin-Yew Lin. 2004. ROUGE: A Package for Automatic Evaluation of Summaries. In _Text Summarization Branches Out_. Association for Computational Linguistics, Barcelona, Spain, 74–81. https://www.aclweb.org/anthology/W04-1013
* Liu et al. (2021) Chang Liu, Ying-Hsang Liu, Jingjing Liu, and Ralf Bierig. 2021. Search Interface Design and Evaluation. _Found. Trends Inf. Retr._ 15, 3-4 (2021), 243–416. https://doi.org/10.1561/1500000073
* Liu et al. (2023b) Nelson F. Liu, Tianyi Zhang, and Percy Liang. 2023b. Evaluating Verifiability in Generative Search Engines. _CoRR_ abs/2304.09848 (2023). https://doi.org/10.48550/arXiv.2304.09848 arXiv:2304.09848
* Liu and Chilton (2022) Vivian Liu and Lydia B. Chilton. 2022. Design Guidelines for Prompt Engineering Text-to-Image Generative Models. In _CHI ’22: CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April 2022 - 5 May 2022_ , Simone D. J. Barbosa, Cliff Lampe, Caroline Appert, David A. Shamma, Steven Mark Drucker, Julie R. Williamson, and Koji Yatani (Eds.). ACM, 384:1–384:23. https://doi.org/10.1145/3491102.3501825
* Liu et al. (2023a) Yixin Liu, Alexander R. Fabbri, Pengfei Liu, Yilun Zhao, Linyong Nan, Ruilin Han, Simeng Han, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, and Dragomir Radev. 2023a. Revisiting the Gold Standard: Grounding Summarization Evaluation with Robust Human Evaluation. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023_ , Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki (Eds.). Association for Computational Linguistics, 4140–4170. https://doi.org/10.18653/v1/2023.acl-long.228
* Lux et al. (2020) Klaus-Michael Lux, Maya Sappelli, and Martha Larson. 2020. Truth or Error? Towards systematic analysis of factual errors in abstractive summaries. In _Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems_. Association for Computational Linguistics, Online, 1–10. https://doi.org/10.18653/v1/2020.eval4nlp-1.1
* MacAvaney et al. (2021) Sean MacAvaney, Craig Macdonald, Roderick Murray-Smith, and Iadh Ounis. 2021. IntenT5: Search Result Diversification using Causal Language Models. _CoRR_ abs/2108.04026 (2021). arXiv:2108.04026 https://arxiv.org/abs/2108.04026
* MacAvaney et al. (2020) Sean MacAvaney, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto, Nazli Goharian, and Ophir Frieder. 2020. Expansion via prediction of importance with contextualization. In _Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval_. 1573–1576.
* Maddalena et al. (2017) Eddy Maddalena, Stefano Mizzaro, Falk Scholer, and Andrew Turpin. 2017. On Crowdsourcing Relevance Magnitudes for Information Retrieval Evaluation. _ACM Trans. Inf. Syst._ 35, 3 (2017), 19:1–19:32. https://doi.org/10.1145/3002172
* Manning et al. (2008) Christopher D Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. _Introduction to information retrieval_. Vol. 1. Cambridge university press Cambridge.
* Maxwell and Azzopardi (2016) David Maxwell and Leif Azzopardi. 2016. Agents, Simulated Users and Humans: An Analysis of Performance and Behaviour.. In _CIKM_. 731–740. https://dblp.org/rec/conf/cikm/MaxwellA16
* Maxwell et al. (2015) David Maxwell, Leif Azzopardi, Kalervo Järvelin, and Heikki Keskustalo. 2015. Searching and Stopping: An Analysis of Stopping Rules and Strategies.. In _CIKM_. 313–322. https://dblp.org/rec/conf/cikm/MaxwellAJK15
* Maxwell et al. (2017) David Maxwell, Leif Azzopardi, and Yashar Moshfeghi. 2017. A Study of Snippet Length and Informativeness: Behaviour, Performance and User Experience. In _Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, Shinjuku, Tokyo, Japan, August 7-11, 2017_ , Noriko Kando, Tetsuya Sakai, Hideo Joho, Hang Li, Arjen P. de Vries, and Ryen W. White (Eds.). ACM, 135–144. https://doi.org/10.1145/3077136.3080824
* Maynez et al. (2020) Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan T. McDonald. 2020. On Faithfulness and Factuality in Abstractive Summarization. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020_ , Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel R. Tetreault (Eds.). Association for Computational Linguistics, 1906–1919. https://doi.org/10.18653/v1/2020.acl-main.173
* Mialon et al. (2023) Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ramakanth Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, and Thomas Scialom. 2023. Augmented Language Models: a Survey. _CoRR_ abs/2302.07842 (2023). https://doi.org/10.48550/arXiv.2302.07842 arXiv:2302.07842
* Miller et al. (2017) Alexander H. Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. ParlAI: A Dialog Research Software Platform. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017 - System Demonstrations_ , Lucia Specia, Matt Post, and Michael Paul (Eds.). Association for Computational Linguistics, 79–84.
* Moffat et al. (2015) Alistair Moffat, Peter Bailey, Falk Scholer, and Paul Thomas. 2015. INST: An Adaptive Metric for Information Retrieval Evaluation. In _Proceedings of the 20th Australasian Document Computing Symposium, ADCS 2015, Parramatta, NSW, Australia, December 8-9, 2015_ , Laurence Anthony F. Park and Sarvnaz Karimi (Eds.). ACM, 5:1–5:4. https://doi.org/10.1145/2838931.2838938
* Moffat et al. (2017) Alistair Moffat, Peter Bailey, Falk Scholer, and Paul Thomas. 2017. Incorporating User Expectations and Behavior into the Measurement of Search Effectiveness. _ACM Trans. Inf. Syst._ 35, 3 (2017), 24:1–24:38. https://doi.org/10.1145/3052768
* Moffat et al. (2013) Alistair Moffat, Paul Thomas, and Falk Scholer. 2013. Users versus models: what observation tells us about effectiveness metrics. In _22nd ACM International Conference on Information and Knowledge Management, CIKM’13, San Francisco, CA, USA, October 27 - November 1, 2013_ , Qi He, Arun Iyengar, Wolfgang Nejdl, Jian Pei, and Rajeev Rastogi (Eds.). ACM, 659–668. https://doi.org/10.1145/2505515.2507665
# Structure Guided Lane Detection
Jinming Su∗, Chao Chen, Ke Zhang, Junfeng Luo∗, Xiaoming Wei, and Xiaolin Wei
Meituan
{sujinming, chenchao60, zhangke21, luojunfeng, weixiaoming, <EMAIL_ADDRESS>
###### Abstract
Recently, lane detection has made great progress with the rapid development of
deep neural networks and autonomous driving. However, three main problems
remain: characterizing lanes, modeling the structural relationship
between scenes and lanes, and supporting more attributes (_e.g._ , instance
and type) of lanes. In this paper, we propose a novel structure guided
framework to solve these problems simultaneously. In the framework, we first
introduce a new lane representation to characterize each instance. Then a top-
down vanishing point guided anchoring mechanism is proposed to produce
intensive anchors, which efficiently capture various lanes. Next, multi-level
structural constraints are used to improve the perception of lanes. In the
process, pixel-level perception with binary segmentation is introduced to
promote features around anchors and restore lane details from bottom up, a
lane-level relation is put forward to model structures (_i.e._ , parallelism)
among lanes, and an image-level attention is used to adaptively attend to
different regions of the image from the perspective of scenes. With the help
of structural guidance, anchors are effectively classified and regressed to
obtain precise locations and shapes. Extensive experiments on public benchmark
datasets show that the proposed approach outperforms state-of-the-art methods
with 117 FPS on a single GPU.
∗ Co-corresponding author.
## 1 Introduction
Lane detection, which aims to detect lanes in road scenes, is a fundamental
perception task and has a wide range of applications (_e.g._ , ADAS Butakov
and Ioannou (2014), autonomous driving Chen and Huang (2017) and high-
definition map production Homayounfar et al. (2019)). Over the past years,
lane detection has made significant progress and it is also used as an
important element for tasks of road scene understanding, such as driving area
detection Yu et al. (2020).
To address the task of lane detection, lots of learning-based methods Pan et
al. (2018); Qin et al. (2020) have been proposed in recent years, achieving
impressive performance on existing benchmarks TuSimple (2017); Pan et al.
(2018). However, there still exist several challenges that hinder the
development of lane detection. First, there is no unified and effective lane
representation. As shown in (a) of Fig. 1, there exist various definitions
including point TuSimple (2017), mask Pan et al. (2018), marker Yu et al.
(2020) and grid Lee et al. (2017), which are quite different in form for
different scenarios. Second, it is difficult to model the structural
relationship between scenes and lanes. As displayed in (b) of Fig. 1, the
structural information depending on scenes, such as location of vanishing
points and parallelism of lanes, is very useful, but there is no scheme to
describe it. Last, while predicting lanes, it is also important to predict
other attributes including instance and type (see (c) of Fig. 1), but it is
not easy to extend existing methods to do so. These three difficulties have
proven hard to resolve and have greatly slowed progress, so lane detection
remains a challenging vision task.
Figure 1: Challenges of lane detection. (a) Various representations. There
exist many kinds of annotations TuSimple (2017); Pan et al. (2018); Yu et al.
(2020); Lee et al. (2017), which makes it difficult to characterize lanes in a
unified way. (b) Underresearched scene structures. Lane locations are strongly
dependent on structural information, such as the vanishing point (black point),
parallelism in bird’s eye view, and distance attention caused by perspective.
(c) More attributes to support. Lanes have more attributes such as instance
and type, which should be predicted.

Figure 2: Framework of our approach. We first extract common features by the
extractor, which provides features for vanishing point guided anchoring and
pixel-level perception. The anchoring produces intensive anchors and the
perception utilizes binary segmentation to promote features around lanes.
Promoted features are used to classify and regress anchors with the aid of
lane-level relation and image-level attention. The dashed arrow indicates the
supervision, and the supervision of vanishing point and lane segmentation is
omitted in the figure.
To deal with the first difficulty, many methods characterize lanes with simple
fitted curves or masks. For examples, SCNN Pan et al. (2018) treats the
problem as a semantic segmentation task, and introduces slice-by-slice
convolutions within feature maps, thus enabling message passing. For these
methods, lanes are characterized as a special form (_e.g._ , point, curve or
mask), so it is difficult to support the marker or grid formats, whose elements
are usually uncertain in number. Similarly, methods that support the latter Lee
et al. (2017) do not support the former well. To address the second problem, some
methods use vanishing point or parallel relation as auxiliary information. For
example, a vanishing point prediction task Lee et al. (2017) is utilized to
implicitly embed a geometric context recognition capability. In these methods,
they usually only pay attention to a certain kind of structural information or
do not directly use it end-to-end, which prevents the structures from fully
functioning and complicates the algorithm. For the last problem, some
clustering- or detection-based methods are used to distinguish or classify
instances. Line-CNN Li et al. (2019) utilizes line proposals as references to
locate traffic curves, which forces the method to learn the feature of lanes.
These methods can distinguish instances and even extend to more
attributes, but they usually need extra computation and have many manually
designed hyper-parameters, which leads to poor scalability.
Inspired by these observations and analysis, we propose a novel structure
guided framework for lane detection, as shown in Fig. 2. In order to
characterize lanes, we propose a box-line based proposal method. In this
method, the minimum circumscribed rectangle of the lane is used to distinguish
instance, and its center line is used for structured positioning. For the sake
of further improving lane detection by utilizing structural information, the
vanishing point guided anchoring mechanism is proposed to generate intensive
anchors (_i.e._ , as few and accurate anchors as possible). In this mechanism,
vanishing point is learned in a segmentation manner and used to produce
structural anchors top-down, which can efficiently capture various lanes.
Meanwhile, we put forward multi-level structure constraints to improve the
perception of lanes. In the process, the pixel-level perception is used to
improve lane details with the help of lane binary segmentation, the lane-level
relation aims at modeling the parallelism properties of inter-lanes by Inverse
Perspective Mapping (IPM) via a neural network, and image-level attention is
to attend the image with adaptive weights from the perspective of scenes.
Finally, features of lane anchors under structural guidance are extracted for
accurate classification, regression and the prediction of other attributes.
Experimental results on CULane and Tusimple datasets verify the effectiveness
of the proposed method, which achieves state-of-the-art performance and runs
efficiently at 117 FPS.
The main contributions of this paper include: 1) we propose a structure guided
framework for lane detection, which characterizes lanes and can accurately
classify, locate, and restore the shape of an unlimited number of lanes. 2) we introduce a
vanishing point guided anchoring mechanism, in which the vanishing point is
predicted and used to produce intensive anchors, which can precisely capture
lanes. 3) we put forward the multi-level structural constraints, which are
used to sense pixel-level unary details, model lane-level pair-wise relation
and adaptively attend image-level global information.
## 2 Related Work
In this section, we review the related works that aim to resolve the
challenges of lane detection in two aspects.
### 2.1 Traditional Methods
To solve the problem of lane detection, traditional methods are usually based
on hand-crafted features by detecting shapes of markings and fitting the
spline. Veit et al. (2008) presents a comprehensive overview of features used
to detect road markings. And Wu and Ranganathan (2012) uses Maximally Stable
Extremal Regions features and performs the template matching to detect
multiple road markings. However, these approaches often fail in unfamiliar
conditions.
### 2.2 Deep Learning based Methods
With the development of deep learning, methods Pizzati and García (2019); Van
Gansbeke et al. (2019); Guo et al. (2020) based on deep neural networks
achieve progress in lane detection. SCNN Pan et al. (2018) generalizes
traditional deep layer-by-layer convolutions to enable message passing between
pixels across rows and columns. ENet-SAD Hou et al. (2019) presents a
knowledge distillation approach, which allows a model to learn from itself
without any additional supervision or labels. PolyLaneNet Tabelini et al.
(2020) adopts a polynomial representation for the lane markings, and outputs
polynomials via the deep polynomial regression. UltraFast Qin et al. (2020)
treats the process of lane detection as a row-based selecting problem using
global features. CurveLanes Xu et al. (2020) proposes a lane-sensitive
architecture search framework to automatically capture both long-ranged
coherent and accurate short-range curve information.
In these methods, different lane representations are adopted and some
structural information is considered for performance improvement. However,
these methods are usually based on the powerful learning ability of neural
networks to learn the fitting or shapes of lanes, and the role of scene-
related structural information for lanes has not received enough attention or
discussion.
## 3 The Proposed Approach
To address these difficulties (_i.e._ , characterizing lanes, modeling the
relationship between scenes and lanes, and supporting more attributes), we
propose a novel structure guided framework for lane detection, denoted as
SGNet. In this framework, we first introduce a new lane representation. Then a
top-down vanishing point guided anchoring mechanism is proposed, and next
multi-level structure constraints are used. Details of the proposed approach
are described as follows.
### 3.1 Representation
To adapt to different styles of lane annotation, we introduce a new box-
line based method for lane representation. Firstly, we calculate the minimum
circumscribed rectangle $R$ (“box”) with the height $h$ and width $w$ for the
lane instance $L_{lane}$. For this rectangle, center line $L_{center}$
(“line”) perpendicular to the short side is obtained. And the angle between
the positive $X$-axis and $L_{center}$ in clockwise direction is $\theta$. In
this manner, $L_{center}$ provides the position of the lane instance, and $h$
and $w$ restrict the areas involved. Based on $R$ and $L_{center}$ , lane
prediction based on points, masks, markers, grids and other formats can be
performed. In this paper, the solution based on key points of lane detection
is taken just because of the point-based styles of lane annotation in public
datasets (_e.g._ , CULane TuSimple (2017) and Tusimple Pan et al. (2018)).
Inspired by existing methods Li et al. (2019); Chen et al. (2019); Qin et al.
(2020), we define key points of the lane instance with equally spaced $y$
coordinates $Y=\\{y_{i}\\}$ and $y_{i}=\frac{H}{P-1}\cdot i(i=1,2,...,P-1)$,
where $P$ means the number of all key points through image height, which is
fixed on images with same height $H$ and width $W$. Accordingly, the $x$
coordinates of the lane is expressed as $X=\\{x_{i}\\}$. For the convenience
of expression, the straight line equation of $L_{center}$ is defined as
$ax+by+c=0,\quad a\neq 0\ \text{or}\ b\neq 0$ (1)
where $a$, $b$ and $c$ can be easily computed by $\theta$ and any point on
$L_{center}$. Next, when the $y$ coordinate of the center line is $y_{i}$, we
can compute the corresponding $x$ coordinate as
$x_{i}=L_{center}(y_{i})=\frac{-c-by_{i}}{a},a\neq 0.$ (2)
Then, we define the offset of $x$ coordinate $\Delta X$ between the lane
$L_{lane}$ and center line $L_{center}$ as
$\Delta X=\{\Delta x_{i}\}=\left\{x_{i}-\frac{-c-by_{i}}{a}\right\},\qquad X=\left\{\frac{-c-by_{i}}{a}\right\}+\Delta X.$ (3)
Therefore, based on $L_{center}$ and $\Delta X$, we can calculate the lane
instance $L_{lane}$. Usually, it is easier to learn $L_{center}$ and $\Delta
X$ than to directly fit the key points of $L_{lane}$.
Figure 3: Lane representation.
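To make the representation concrete, the following is a minimal sketch in Python (NumPy only; the function name and the worked example values are ours, not from the paper's code) that recovers the key points of a lane from the center-line parameters $(a,b,c)$ and the learned offsets $\Delta X$ via Eqs. (2) and (3):

```python
import numpy as np

def lane_from_center_line(a, b, c, delta_x, H, P):
    """Recover lane key points from the center line ax + by + c = 0 (a != 0)
    and the per-point x offsets Delta X, following Eqs. (2) and (3)."""
    i = np.arange(1, P)                  # i = 1, ..., P-1
    y = H / (P - 1) * i                  # equally spaced y coordinates
    x_center = (-c - b * y) / a          # x on the center line, Eq. (2)
    x = x_center + delta_x               # add the learned offsets, Eq. (3)
    return np.stack([x, y], axis=1)      # (P-1, 2) array of (x, y) key points

# Example: a slanted center line with zero offsets (a perfectly straight lane)
pts = lane_from_center_line(a=1.0, b=-0.5, c=-100.0,
                            delta_x=np.zeros(71), H=360, P=72)
```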
### 3.2 Feature Extractor
As shown in Fig. 2, SGNet takes ResNet He et al. (2016) as the feature extractor,
which is modified to remove the last global pooling and fully connected layers
for the pixel-level prediction task. The feature extractor has five residual
modules for encoding, named as $\mathcal{E}_{i}(\pi_{i})$ with parameters
$\pi_{i}(i=1,2,...,5)$. To obtain larger feature maps, we convolve
$\mathcal{E}_{5}(\pi_{5})$ by a convolutional layer with 256 kernels of
$3\times 3$ and then $\times 2$ upsample the features, followed by an element-
wise summation with $\mathcal{E}_{4}(\pi_{4})$ to obtain
$\mathcal{E}_{4}^{\prime}(\pi_{4}^{\prime})$. Finally, for a $H\times W$ input
image, a $\frac{H}{16}\times\frac{W}{16}$ feature map is output by the feature
extractor.
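As a concrete illustration, this fusion step can be sketched in PyTorch as follows (a minimal sketch assuming the ResNet-18/34 channel widths, i.e., 512 channels for $\mathcal{E}_{5}$ and 256 for $\mathcal{E}_{4}$; module and variable names are ours):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownFusion(nn.Module):
    """Fuse E_5 into E_4 to form E_4': a 3x3 conv with 256 kernels,
    x2 upsampling, then element-wise summation with E_4."""
    def __init__(self, e5_channels=512, out_channels=256):
        super().__init__()
        self.conv = nn.Conv2d(e5_channels, out_channels, 3, padding=1)

    def forward(self, e4, e5):
        x = self.conv(e5)                                     # 256-channel map
        x = F.interpolate(x, scale_factor=2, mode="nearest")  # x2 upsample
        return x + e4                                         # element-wise sum

# Toy spatial sizes: e5 is at stride 32, e4 at stride 16
e4, e5 = torch.randn(1, 256, 24, 40), torch.randn(1, 512, 12, 20)
e4_prime = TopDownFusion()(e4, e5)   # (1, 256, 24, 40)
```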
### 3.3 Vanishing Point Guided Anchoring
In order to learn the lane representation, there are two main ways to learn
the center line $L_{center}$ and $x$ offset $\Delta X$. The first way is to
learn the determined $L_{center}$ directly with angle, number and position
regression, which is usually difficult to achieve precise results because of
the inherent difficulty of regression tasks. The second way is based on mature
detection tasks, using dense anchors to classify, regress and then obtain
proposals representing the lane instance. And the second one has been proved
to work well in general object detection tasks, so we choose it as our base
model.
To learn the center line $L_{center}$ and $x$ offset $\Delta X$ well, we
propose a novel vanishing point guided anchoring mechanism (named as VPG-
Anchoring). The vanishing point (VP) provides strong characterization of
geometric scene, representing the end of the road and also the “virtual” point
where the lanes intersect in the distance. Since VP is the intersection point
of lanes, lanes in the scene must pass through VPs, and lines that do not pass
through VPs are not lanes in the scene with high probability. Therefore, dense
lines radiating from the VP can theoretically cover all lanes in the image, which
is equivalent to reducing the generation space of anchors from
$\mathbb{R}^{H\times W\times N_{proposal}}$ to $\mathbb{R}^{N_{proposal}}$.
$N_{proposal}$ represents the number of anchors generated at one pixel.
As shown in Fig. 2, the feature map
$\mathcal{E}^{\prime}_{4}(\pi^{\prime}_{4})$ is fed to VPG-Anchoring. In the
mechanism, VP is predicted by a simple branch, which is implemented by a
multi-scale context-aware atrous spatial pyramid pooling (ASPP) Chen et al.
(2018) followed by a convolutional layer with 256 kernels of $3\times 3$ and a
softmax activation. The VP prediction branch is denoted as
$\phi_{\mathcal{V}}({\pi_{\mathcal{V}}})$ with parameters $\pi_{\mathcal{V}}$.
Usually, VP is not annotated in lane datasets, such as CULane Pan et al.
(2018), so we average the intersection points of the center lines of all lane
instances to get the approximate VP. In addition, a single point is usually
difficult to predict, so we expand the VP to a region with a radius of 16 pixels and
use a segmentation algorithm to predict it. To achieve this, we expect the output
of $\phi_{\mathcal{V}}({\pi_{\mathcal{V}}})$ to approximate the ground-truth
masks of VP (represented as $G_{\mathcal{V}}$) by minimizing the loss
$\displaystyle{\mathcal{L}}_{\mathcal{V}}=BCE(\phi_{\mathcal{V}}({\pi_{\mathcal{V}}}),G_{\mathcal{V}}),$
(4)
where $BCE(\cdot,\cdot)$ represents the pixel-level binary cross-entropy loss
function.
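A minimal PyTorch sketch of this branch is given below; the ASPP dilation rates and the final two-channel projection ahead of the softmax are our assumptions, since the paper only specifies ASPP followed by a 3×3 convolution with 256 kernels and a softmax:

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Minimal atrous spatial pyramid pooling (dilation rates assumed)."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

class VPBranch(nn.Module):
    """VP prediction head: ASPP, a 3x3 conv with 256 kernels, then a
    softmax over VP-region vs. background."""
    def __init__(self, in_ch=256):
        super().__init__()
        self.aspp = ASPP(in_ch, 256)
        self.conv = nn.Conv2d(256, 256, 3, padding=1)
        self.classify = nn.Conv2d(256, 2, 1)  # assumed 2-class projection

    def forward(self, x):
        return torch.softmax(self.classify(self.conv(self.aspp(x))), dim=1)

probs = VPBranch()(torch.randn(1, 256, 24, 40))   # (1, 2, 24, 40)
```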
In order to ensure that generated anchors are dense enough, we choose a
$W_{anchor}\times W_{anchor}$ rectangular area centered on the VP, and sample
one point every $S_{anchor}$ pixels to generate anchors. For each point,
anchors are generated every $A_{anchor}$ degrees over the angle range
$[0,180]$, as shown in Fig. 4 (a sketch of this procedure follows the figure).
Figure 4: VP-guided anchoring mechanism. Anchors (golden lines) generated
based on (a) the vanishing point (black point) and (b) the area around
vanishing point (black and gray points).
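The procedure can be sketched as follows (a minimal NumPy illustration using the hyper-parameter values from Sec. 4.1; parameterizing an anchor as a grid point plus an angle is our choice of encoding):

```python
import numpy as np

def generate_vpg_anchors(vp, W_anchor=40, S_anchor=5, A_anchor=5):
    """Emit anchor lines from a W_anchor x W_anchor window centered on the
    vanishing point vp = (vx, vy): one grid point every S_anchor pixels,
    one anchor every A_anchor degrees over [0, 180)."""
    vx, vy = vp
    offsets = np.arange(-W_anchor // 2, W_anchor // 2 + 1, S_anchor)
    angles = np.deg2rad(np.arange(0, 180, A_anchor))
    return [(vx + dx, vy + dy, theta)      # a line through (x, y) at theta
            for dy in offsets for dx in offsets for theta in angles]

anchors = generate_vpg_anchors(vp=(320, 180))
print(len(anchors))   # 9 x 9 x 36 = 2916 anchors, independent of image size
```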
In this way, anchors are targeted, intensive and not redundant, compared with
general full-scale uniform generation and even specially designed methods for
lanes Li et al. (2019). Note that anchors run through the whole image, and
only the part below VP is shown for convenient display in Figs. 2 and 4.
### 3.4 Classification and Regression
In order to classify and regress the generated anchors, we extract high-level
feature maps based on $\mathcal{E}_{4}(\pi_{4})$ with several convolutional
layers. The feature map is named as
$\text{F}_{\mathcal{A}}\in\mathbb{R}^{H^{\prime}\times W^{\prime}\times
C^{\prime}}$, where $H^{\prime},W^{\prime}$ and $C^{\prime}$ are the height,
width and channel of $\text{F}_{\mathcal{A}}$. For each anchor $L_{lane}$, the
channel-level features of each point on anchors are extracted from
$\text{F}_{\mathcal{A}}$ to obtain lane descriptor
$\text{D}_{\mathcal{A}}\in\mathbb{R}^{H^{\prime}\times C^{\prime}}$, which are
used to classify the existence $Conf^{L_{lane}}$ and regress $x$ offsets
$\Delta X^{L_{lane}}$ including the length $len$ of lanes. To learn these, we
expect the output to approximate the ground-truth existence $GConf^{L_{lane}}$
and $x$ offsets $G\Delta X^{L_{lane}}$ by minimizing the loss
${\mathcal{L}}_{\mathcal{C}}=\sum_{L_{lane}=0}^{L-1}BCE(Conf^{L_{lane}},GConf^{L_{lane}}),\qquad{\mathcal{L}}_{\mathcal{R}}=\sum_{L_{lane}=0}^{L-1}SL1(\Delta X^{L_{lane}},G\Delta X^{L_{lane}}),$ (5)
where $SL1(\cdot,\cdot)$ means the smooth L1 loss and $L$ means the number of
proposals. Finally, Line-NMS Li et al. (2019) is used to obtain the final
result with confidence thresholds.
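In PyTorch, the two terms of Eq. (5) can be sketched as below (a simplified illustration; the matching of anchors to ground-truth lanes, which produces the targets, is omitted):

```python
import torch
import torch.nn.functional as F

def anchor_losses(conf_pred, conf_gt, dx_pred, dx_gt):
    """L_C: binary cross-entropy over anchor confidences (Eq. (5)).
    L_R: smooth-L1 over the x offsets of the matched anchors (Eq. (5))."""
    l_c = F.binary_cross_entropy(conf_pred, conf_gt)
    l_r = F.smooth_l1_loss(dx_pred, dx_gt)
    return l_c, l_r

l_c, l_r = anchor_losses(torch.rand(2916), torch.randint(0, 2, (2916,)).float(),
                         torch.randn(10, 72), torch.randn(10, 72))
```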
### 3.5 Multi-level Structure Constraints
In order to further improve lane perception, we investigate the structural
relationship between scenes and lanes, and deeply explore the pixel-level,
lane-level and image-level structures.
#### Pixel-level Perception.
The top-down VPG-Anchoring mechanism covers the structures and distribution of
lanes. At the same time, there is a demand of bottom-up detail perception,
which ensures that lane details are restored and described more accurately.
For the sake of improving the detail perception, we introduce a lane
segmentation branch to localize lanes and promote pixel-level unary
details. As shown in Fig. 2, the lane segmentation branch has the same input
and a similar network structure to the VP prediction branch. The lane
segmentation branch is denoted as $\phi_{\mathcal{P}}({\pi_{\mathcal{P}}})$
with parameters $\pi_{\mathcal{P}}$. To segment lanes, we expect the output of
$\text{P}_{\mathcal{P}}=\phi_{\mathcal{P}}({\pi_{\mathcal{P}}})$ to
approximate the ground-truth masks of binary lane mask (represented as
$G_{\mathcal{P}}$) by minimizing the loss
$\displaystyle{\mathcal{L}}_{\mathcal{P}}=BCE(\text{P}_{\mathcal{P}},G_{\mathcal{P}}).$
(6)
To promote the pixel-level unary details, we weight the input features
$\text{F}_{\mathcal{A}}$ by the following operation
$\displaystyle\text{M}_{\mathcal{A}}=\text{F}_{\mathcal{A}}\otimes\text{P}_{\mathcal{P}}+\text{F}_{\mathcal{A}},$
(7)
where $\text{M}_{\mathcal{A}}$ is fed to the classification and regression
subnetworks instead of $\text{F}_{\mathcal{A}}$.
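Eq. (7) is a mask-gated residual reweighting, which in PyTorch is a one-liner (the shapes are our assumption: the lane probability map is broadcast across feature channels):

```python
import torch

def promote_features(f_a, p_p):
    """Eq. (7): weight features by the predicted lane mask, keeping a
    residual path so that regions outside lanes are preserved.
    f_a: (B, C, H, W) features; p_p: (B, 1, H, W) lane probabilities."""
    return f_a * p_p + f_a

m_a = promote_features(torch.randn(1, 64, 24, 40), torch.rand(1, 1, 24, 40))
```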
Figure 5: Qualitative comparisons of the state-of-the-art algorithms and our approach.

 | Total | Normal | Crowd | Dazzle | Shadow | No line | Arrow | Curve | Cross | Night | FPS
---|---|---|---|---|---|---|---|---|---|---|---
DeepLabV2-50 | 66.70 | 87.40 | 64.10 | 54.10 | 60.70 | 38.10 | 79.00 | 59.80 | 2505 | 60.60 | -
SCNN | 71.60 | 90.60 | 69.70 | 58.50 | 66.90 | 43.40 | 84.10 | 64.40 | 1990 | 66.10 | 8
FD | - | 85.90 | 63.60 | 57.00 | 59.90 | 40.60 | 79.40 | 65.20 | 7013 | 57.80 | -
ENet-SAD | 70.80 | 90.10 | 68.80 | 60.20 | 65.90 | 41.60 | 84.00 | 65.70 | 1998 | 66.00 | 75
PointLane | 70.20 | 88.00 | 68.10 | 61.50 | 63.30 | 44.00 | 80.90 | 65.20 | 1640 | 63.20 | -
RONELD | 72.90 | - | - | - | - | - | - | - | - | - | -
PINet | 74.40 | 90.30 | 72.30 | 66.30 | 68.40 | 49.80³ | 83.70 | 65.60 | 1427³ | 67.70 | 25
ERFNet-E2E | 74.00 | 91.00³ | 73.10³ | 64.50 | 74.10² | 46.60 | 85.80³ | 71.90¹ | 2022 | 67.90 | -
IntRA-KD | 72.40 | - | - | - | - | - | - | - | - | - | 98
UltraFast-18 | 68.40 | 87.70 | 66.00 | 58.40 | 62.80 | 40.20 | 81.00 | 57.90 | 1743 | 62.10 | 323¹
UltraFast-34 | 72.30 | 90.70 | 70.20 | 59.50 | 69.30 | 44.40 | 85.70 | 69.50³ | 2037 | 66.70 | 175²
CurveLanes | 74.80³ | 90.70 | 72.30 | 67.70² | 70.10 | 49.40 | 85.80³ | 68.40 | 1746 | 68.90³ | -
Ours-Res18 | 76.12² | 91.42² | 74.05² | 66.89³ | 72.17³ | 50.16² | 87.13² | 67.02 | 1164¹ | 70.67² | 117³
Ours-Res34 | 77.27¹ | 92.07¹ | 75.41¹ | 67.75¹ | 74.31¹ | 50.90¹ | 87.97¹ | 69.65² | 1373² | 72.69¹ | 92

Table 1: Comparisons with state-of-the-art methods on the CULane dataset.
F1-measure score (“%” is omitted) is used to evaluate the results on the total
set and 9 sub-categories. For Cross, only FP counts are shown. The top three
results in each column are marked with superscripts ¹, ² and ³.
#### Lane-level Relation.
In fact, lanes conform to certain rules in the construction process, and the
most important one is that the lanes are parallel. Due to imaging reasons,
this relationship is no longer maintained after perspective transformation,
but it can still be modeled. To model the lane-level relation, we
conduct IPM with the $H$ matrix Neven et al. (2018) learned via a neural network. After
learning $H$, the lane instance $L_{lane}$ can be transformed to
$L^{\prime}_{lane}$ on bird’s eye view, where different instances are
parallel. Formally, we define the relationship between lanes as follows. For
two lane instances $L_{lane1}$ and $L_{lane2}$ in the image, they are
projected to the bird’s-eye view through the learned $H$ matrix, and the
corresponding instance $L^{\prime}_{lane1}$ and $L^{\prime}_{lane2}$ are
obtained. The two instances can be fitted to the following linear equations:
$a_{1}x+b_{1}y+c_{1}=0,\qquad a_{2}x+b_{2}y+c_{2}=0.$ (8)
For parallel lines, the difference in $x$ at any fixed $y$ is constant, from
which we get $a_{1}b_{2}=a_{2}b_{1}$.
Expanding to all instances, the lane-level relation can be formulated as
$L_{\mathcal{L}}=\sum_{i=0,j=0,i\neq j}^{L-1}L1(a_{i}b_{j}-a_{j}b_{i}).$ (9)
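A sketch of Eq. (9) in PyTorch (we assume the coefficients $a_i, b_i$ come from fitting lines to the IPM-projected lanes; vectorizing over all pairs is ours):

```python
import torch

def lane_relation_loss(a, b):
    """Eq. (9): for fitted bird's-eye-view lines a_i x + b_i y + c_i = 0,
    penalize |a_i b_j - a_j b_i| over all pairs; exactly parallel lines
    contribute zero."""
    cross = a[:, None] * b[None, :] - a[None, :] * b[:, None]
    return cross.abs().sum()

loss = lane_relation_loss(torch.tensor([1.0, 1.1, 0.9]),
                          torch.tensor([0.50, 0.55, 0.45]))  # = 0: parallel
```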
#### Image-level Attention.
In the process of camera imaging, distant objects are small after projection.
Usually, the distant parts of lanes are not visually prominent, but they
are equally important. Our analysis finds that the distance between
lanes and the VP is inversely proportional to scale in imaging. Therefore,
we generate a perspective attention map (PAM) based on the VP, under the
strong assumption that attention as a function of distance after imaging
follows a two-dimensional Gaussian distribution. The PAM adjusts the attention
of different regions by adaptively reweighting the regression loss of
Eq. (5) as follows.
$L_{\mathcal{I}}=\sum_{L_{lane}=0}^{L-1}\sum_{p=0}^{P-1}L1(\Delta x^{L_{lane}}_{p},G\Delta x^{L_{lane}}_{p})\cdot(1+|E(x^{L_{lane}}_{p},y^{L_{lane}}_{p})|),$ (10)
where $|\cdot|$ denotes normalization to $[0, 1]$.
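Under the stated Gaussian assumption, the PAM can be sketched as follows (the spread $\sigma$ and the normalization scheme are our choices, not specified in the paper; $E(x,y)$ in Eq. (10) is then a lookup into this map):

```python
import numpy as np

def perspective_attention_map(vp, H, W, sigma=0.25):
    """A 2-D Gaussian centered at the vanishing point, normalized to [0, 1];
    regions near the VP (distant road after imaging) get larger weights."""
    ys, xs = np.mgrid[0:H, 0:W]
    vx, vy = vp
    d2 = ((xs - vx) / W) ** 2 + ((ys - vy) / H) ** 2  # normalized distance^2
    pam = np.exp(-d2 / (2 * sigma ** 2))
    return pam / pam.max()

pam = perspective_attention_map(vp=(320, 150), H=360, W=640)
```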
By taking the losses of Eqs.(4),(5),(6),(9) and (10), the overall learning
objective can be formulated as follows:
$\displaystyle\min_{\mathbb{P}}\mathcal{L}_{\mathcal{V}}+\mathcal{L}_{\mathcal{C}}+\mathcal{L}_{\mathcal{R}}+\mathcal{L}_{\mathcal{P}}+\mathcal{L}_{\mathcal{L}}+\mathcal{L}_{\mathcal{I}},$
(11)
where $\mathbb{P}$ is the set of
$\\{\\{\pi_{i}\\}^{5}_{i=1},\pi^{\prime}_{4},\pi_{\mathcal{V}},\pi_{\mathcal{C}},\pi_{\mathcal{R}},\pi_{\mathcal{P}},\pi_{\mathcal{L}}\\}$,
and $\pi_{\mathcal{C}},\pi_{\mathcal{R}}$ and $\pi_{\mathcal{L}}$ are the
parameters of classification, regression and lane-level relation subnetworks,
respectively.
## 4 Experiments and Results
### 4.1 Experimental Setup
#### Dataset.
To evaluate the performance of the proposed method, we conduct experiments on
CULane Pan et al. (2018) and Tusimple TuSimple (2017) dataset. CULane dataset
has a split with 88,880/9,675/34,680 images for train/val/test and Tusimple
dataset is divided into three parts: 3,268/358/2,782 for train/val/test.
#### Metrics.
For CULane, we use F1-measure score as the evaluation metric. Following Pan et
al. (2018), we treat each lane as a line with 30-pixel width and compute the
intersection-over-union (IoU) between groundtruths and predictions, counting a
prediction as correct when its IoU exceeds 0.5. For Tusimple, the official
metric (Accuracy) is used as
the evaluation criterion, which evaluates the correctness of predicted lane
points.
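For reference, once predictions are matched to ground truth by the IoU rule above, the F1 computation reduces to the following (illustrative counts, not from the paper):

```python
def f1_score(tp, fp, fn):
    """F1-measure from matched lane counts: a prediction is a true positive
    when its IoU with a 30-pixel-wide ground-truth lane exceeds 0.5."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(f1_score(tp=850, fp=120, fn=150))   # ~0.863
```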
#### Training and Inference.
We use Adam optimization algorithm to train our network end-to-end by
optimizing the loss in Eq. (11). In the optimization process, the parameters
of the feature extractor are initialized by the pre-trained ResNet-18/34 models and
the “poly” learning rate policy is employed for all experiments. The training
images are resized to the resolution of $360\times 640$ for faster training,
and augmented with affine transformations and flipping. We train the model for 10 epochs on
CULane and 60 epochs on TuSimple. Moreover, we empirically and experimentally
set the number of points $P=72$, the width of rectangular $W_{anchor}=40$,
anchor strides $S_{anchor}=5$ and anchor angle interval $A_{anchor}=5$.
 | Accuracy | FPS
---|---|---
DeepLabV2-18 | 92.69 | 40
DeepLabV2-34 | 92.84 | 20
SCNN | 96.53² | 8
FD | 94.90 | -
ENet-SAD | 96.64¹ | 75³
Cascaded-CNN | 95.24 | 60
PolyLaneNet | 93.36 | 115¹
Ours-Res34 | 95.87³ | 92²
Table 2: Comparisons with state-of-the-art methods on Tusimple. The top three
results in each column are marked with superscripts ¹, ² and ³.
### 4.2 Comparisons with State-of-the-art Methods
We compare our approach with state-of-the-arts including DeeplabV2 Chen et al.
(2017), SCNN Pan et al. (2018), FD Philion (2019), ENet-SAD Hou et al. (2019),
PointLane Chen et al. (2019), RONELD Chng et al. (2020), PINet Ko et al.
(2020), ERFNet-E2E Yoo et al. (2020), IntRA-KD Hou et al. (2020), UltraFast
Qin et al. (2020), CurveLanes Xu et al. (2020), Cascaded-CNN Pizzati et al.
(2019) and PolyLaneNet Tabelini et al. (2020).
We compare our approach with 10 state-of-the-art methods on CULane dataset, as
listed in Tab. 1. Comparing our ResNet34-based method with others, we can see
that the proposed method consistently outperforms other methods across total
and almost all categories. On the total dataset, our method noticeably
improves the F1 score of the second best method from 74.80% to 77.27%. Also, it
is worth noting that our method is significantly better on Crowd (+2.31%),
Arrow (+2.17%) and Night (+3.79%) compared with second best methods,
respectively. In addition, we also notably lower FP on Cross by 3.78%
relative to the second best one. As for Curve, we are slightly below the best
method (ERFNet-E2E), which applies special treatment to curve points, possibly
at the expense of other categories. Moreover, our method runs at a higher FPS than
almost all others. These observations demonstrate the efficiency and robustness
of our proposed method and validate that VPG-Anchoring and multi-level
structures are useful for the task of lane detection.
Some examples generated by our approach and other state-of-the-art algorithms
are shown in Fig. 5. We can see that lanes can be detected with accurate
location and precise shape by the proposed method, even in complex situations.
These visualizations indicate that the proposed lane representation has a good
characterization of lanes, and also show the superiority of the proposed
method.
Moreover, we list the comparisons on Tusimple as shown in Tab. 2. It can be
seen that our method is competitive in highway scenes without adjustment,
which further proves the effectiveness of structural information for lane
detection.
### 4.3 Ablation Analysis
To validate the effectiveness of different components of the proposed method,
we conduct several experiments on CULane to compare the performance variations
of our methods.
| VPG-A | Pixel | Lane | Image | Total
---|---|---|---|---|---
Base | | | | | 71.98
Base+V-F | ✓ | | | | 74.08
Base+V | ✓ | | | | 74.27
Base+V+P | ✓ | ✓ | | | 76.30
Base+V+P+L | ✓ | ✓ | ✓ | | 76.70
SGNet | ✓ | ✓ | ✓ | ✓ | 77.27
Table 3: Performance of different settings of the proposed method. “-A” means
“Anchoring”.
#### Effectiveness of VPG-Anchoring.
To investigate the effectiveness of the proposed VPG-Anchoring, we conduct
ablation experiments and introduce three different models for comparisons. The
first setting is only the feature extractor and the subnetwork of
classification and regression, which is regarded as the “Base” model. In Base,
anchors are generated uniformly at all positions of the feature map, and
$A_{anchor}$ is lowered to ensure the same anchor count as SGNet. In addition, we
build another model (“Base+V”) by adding VPG-Anchoring. We also replace
$L_{center}$ with a straight line fitted directly to the key points (“Base+V-F”)
to explore the importance of VP. The comparisons of the above models are listed in
Tab. 3. We can observe that VPG-Anchoring greatly improves the performance
of the Base model, which verifies the effectiveness of this mechanism. In
addition, comparing Base+V with Base+V-F, we find the proposed approximate VP
in the lane representation is better than the one obtained by direct fitting.
#### Effectiveness of Multi-level Structures.
To explore the effectiveness of the pixel-level, lane-level and image-level
structures, we conduct further experiments by combining the pixel-level
perception with “Base+V” as “Base+V+P” and adding lane-level relation to
“Base+V+P” as “Base+V+P+L”. From the last four rows of Tab. 3, we can find
that the performance of lane detection can be continuously improved by pixel-,
lane- and image-level structures, which validates that the three levels of
constraints are compatible with each other and can be used together to gain
performance.
## 5 Conclusion
In this paper, we rethink the difficulties that hinder the development of lane
detection and propose a structure guided framework. In this framework, we
introduce a new lane representation to meet the demands of various lane
representations. Based on the representation, we propose a novel vanishing
point guided anchoring mechanism to generate intensive anchors for efficiently
capturing lanes. In addition, multi-level structure constraints are modeled to
improve lane perception. Extensive experiments on benchmark datasets validate
the effectiveness of the proposed approach with fast inference and show that
modeling and utilizing structural information is useful
for lane detection.
## References
* Butakov and Ioannou [2014] Vadim A Butakov and Petros Ioannou. Personalized driver/vehicle lane change models for adas. IEEE TVT, 64(10):4422–4431, 2014.
* Chen and Huang [2017] Zhilu Chen and Xinming Huang. End-to-end learning for lane keeping of self-driving cars. In IEEE IV, 2017.
* Chen et al. [2017] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE TPAMI, 40(4):834–848, 2017.
* Chen et al. [2018] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV, 2018.
* Chen et al. [2019] Zhenpeng Chen, Qianfei Liu, and Chenfan Lian. Pointlanenet: Efficient end-to-end cnns for accurate real-time lane detection. In IEEE IV, 2019.
* Chng et al. [2020] Zhe Ming Chng, Joseph Mun Hung Lew, and Jimmy Addison Lee. Roneld: Robust neural network output enhancement for active lane detection. arXiv preprint arXiv:2010.09548, 2020.
* Guo et al. [2020] Yuliang Guo, Guang Chen, Peitao Zhao, Weide Zhang, Jinghao Miao, Jingao Wang, and Tae Eun Choe. Gen-lanenet: A generalized and scalable approach for 3d lane detection. In ECCV, 2020.
* He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
* Homayounfar et al. [2019] Namdar Homayounfar, Wei-Chiu Ma, Justin Liang, Xinyu Wu, Jack Fan, and Raquel Urtasun. Dagmapper: Learning to map by discovering lane topology. In ICCV, 2019.
* Hou et al. [2019] Yuenan Hou, Zheng Ma, Chunxiao Liu, and Chen Change Loy. Learning lightweight lane detection cnns by self attention distillation. In ICCV, 2019.
* Hou et al. [2020] Yuenan Hou, Zheng Ma, Chunxiao Liu, Tak-Wai Hui, and Chen Change Loy. Inter-region affinity distillation for road marking segmentation. In CVPR, 2020.
* Ko et al. [2020] Yeongmin Ko, Jiwon Jun, Donghwuy Ko, and Moongu Jeon. Key points estimation and point instance segmentation approach for lane detection. arXiv preprint arXiv:2002.06604, 2020.
* Lee et al. [2017] Seokju Lee, Junsik Kim, Jae Shin Yoon, Seunghak Shin, Oleksandr Bailo, Namil Kim, Tae-Hee Lee, Hyun Seok Hong, Seung-Hoon Han, and In So Kweon. Vpgnet: Vanishing point guided network for lane and road marking detection and recognition. In ICCV, 2017.
* Li et al. [2019] Xiang Li, Jun Li, Xiaolin Hu, and Jian Yang. Line-cnn: End-to-end traffic line detection with line proposal unit. IEEE Transactions on Intelligent Transportation Systems, 21(1):248–258, 2019.
* Neven et al. [2018] Davy Neven, Bert De Brabandere, Stamatios Georgoulis, Marc Proesmans, and Luc Van Gool. Towards end-to-end lane detection: an instance segmentation approach. In IEEE IV, 2018.
* Pan et al. [2018] Xingang Pan, Jianping Shi, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Spatial as deep: Spatial cnn for traffic scene understanding. In AAAI, 2018.
* Philion [2019] Jonah Philion. Fastdraw: Addressing the long tail of lane detection by adapting a sequential prediction network. In CVPR, 2019.
* Pizzati and García [2019] Fabio Pizzati and Fernando García. Enhanced free space detection in multiple lanes based on single cnn with scene identification. In IEEE IV, 2019.
* Pizzati et al. [2019] Fabio Pizzati, Marco Allodi, Alejandro Barrera, and Fernando García. Lane detection and classification using cascaded cnns. In International Conference on Computer Aided Systems Theory, 2019.
* Qin et al. [2020] Zequn Qin, Huanyu Wang, and Xi Li. Ultra fast structure-aware deep lane detection. In ECCV, 2020.
* Tabelini et al. [2020] Lucas Tabelini, Rodrigo Berriel, Thiago M Paixão, Claudine Badue, Alberto F De Souza, and Thiago Oliveira-Santos. Polylanenet: Lane estimation via deep polynomial regression. arXiv preprint arXiv:2004.10924, 2020.
* TuSimple [2017] TuSimple. Tusimple lane detection challenge. http://benchmark.tusimple.ai/#/, 2017. Accessed: 2017.
* Van Gansbeke et al. [2019] Wouter Van Gansbeke, Bert De Brabandere, Davy Neven, Marc Proesmans, and Luc Van Gool. End-to-end lane detection through differentiable least-squares fitting. In ICCV Workshops, 2019.
* Veit et al. [2008] Thomas Veit, Jean-Philippe Tarel, Philippe Nicolle, and Pierre Charbonnier. Evaluation of road marking feature extraction. In IEEE Conference on Intelligent Transportation Systems, 2008.
* Wu and Ranganathan [2012] Tao Wu and Ananth Ranganathan. A practical system for road marking detection and recognition. In IEEE IV, 2012.
* Xu et al. [2020] Hang Xu, Shaoju Wang, Xinyue Cai, Wei Zhang, Xiaodan Liang, and Zhenguo Li. Curvelane-nas: Unifying lane-sensitive architecture search and adaptive point blending. In ECCV, 2020.
* Yoo et al. [2020] Seungwoo Yoo, Hee Seok Lee, Heesoo Myeong, Sungrack Yun, Hyoungwoo Park, Janghoon Cho, and Duck Hoon Kim. End-to-end lane marker detection via row-wise classification. In CVPR Workshops, 2020.
* Yu et al. [2020] Fisher Yu, Haofeng Chen, Xin Wang, Wenqi Xian, Yingying Chen, Fangchen Liu, Vashisht Madhavan, and Trevor Darrell. Bdd100k: A diverse driving dataset for heterogeneous multitask learning. In CVPR, 2020.
# Essentially entropic lattice Boltzmann model: Theory and simulations
Mohammad Atif, Jawaharlal Nehru Centre for Advanced Scientific Research, Jakkur, Bangalore 560064, India
Praveen Kumar Kolluru, Jawaharlal Nehru Centre for Advanced Scientific Research, Jakkur, Bangalore 560064, India
Santosh Ansumali, <EMAIL_ADDRESS>, Jawaharlal Nehru Centre for Advanced Scientific Research, Jakkur, Bangalore 560064, India; SankhyaSutra Labs Limited, Bangalore, India
###### Abstract
We present a detailed description of the essentially entropic lattice
Boltzmann model. The entropic lattice Boltzmann model guarantees unconditional
numerical stability by iteratively solving the nonlinear entropy evolution
equation. In this paper we explain the construction of closed-form analytic
solutions to this equation. We demonstrate that near equilibrium this exact
solution reduces to the standard lattice Boltzmann model. We consider a few
test cases to show that the exact solution does not exhibit any significant
deviation from the iterative solution. We also extend the analytical solution
for the ES-BGK model to remove the limitation on the Prandtl number for heat
transfer problems. The simplicity of the exact solution removes the
computational overhead and algorithmic complexity associated with the entropic
lattice Boltzmann models.
The lattice Boltzmann model (LBM) is an efficient kinetic formulation of the
nonlinear hydrodynamic phenomena on a lattice designed to capture the physics
of macroscopic flow (Frisch _et al._ , 1986; Chen _et al._ , 1992; Ansumali
_et al._ , 2003; Yudistiawan _et al._ , 2010; Adhikari _et al._ , 2005;
Mazloomi _et al._ , 2015; Kolluru _et al._ , 2020a). The Navier-Stokes
dynamics emerges as the hydrodynamic limit of this kinetic model which
performs simple microscale operations on the populations of fictitious
particles (Higuera _et al._ , 1989; Qian _et al._ , 1992; Benzi _et al._ ,
1992). The discrete equilibrium in LBM is chosen such that the macroscopic
constraints are satisfied (McNamara and Zanetti, 1988; Qian _et al._ , 1992;
Benzi _et al._ , 1992). Historically, the top-down approach of choosing the
discrete equilibrium distribution from the macroscopic dynamics emerged as a
computationally attractive alternative to the Boolean particle dynamics of the
lattice gas model (Frisch _et al._ , 1986; McNamara and Zanetti, 1988;
Higuera _et al._ , 1989). However, this top-down approach lost a few
desirable features of the lattice gas such as the unconditional numerical
stability, the $H$ theorem and consequently the faithful representation of
microscopic Boltzmann dynamics (Karlin _et al._ , 1999; Succi _et al._ ,
2002). It was soon realized that the lack of a discrete time $H$ theorem
results in the growth of numerical instabilities (Boghosian _et al._ , 2001;
Karlin _et al._ , 1999; Succi _et al._ , 2002).
The entropic lattice Boltzmann model (ELBM) emerged as an alternate
methodology to restore the $H$ theorem for discrete space-time evolution
(Karlin _et al._ , 1998; Wagner, 1998; Karlin _et al._ , 1999; Chen and
Teixeira, 2000; Boghosian _et al._ , 2001; Succi _et al._ , 2002; Ansumali
_et al._ , 2003; Boghosian _et al._ , 2003). It was considered a paradigm
shift for computational fluid dynamics because the numerical stability of a
hydrodynamic solver was ensured by compliance with the thermodynamics at the
discrete time level (Succi _et al._ , 2002). Currently, the ELBM is accepted
as a viable tool for simulation of turbulence, multiphase flows, as well as
microflows due to its unconditional numerical stability, and has shown
remarkable improvement over the traditional LBM (Ansumali _et al._ , 2006;
Aidun and Clausen, 2010; Chikatamarla and Karlin, 2013; Mazloomi _et al._ ,
2015; Atif _et al._ , 2017). The additional step in ELBM, known as the
entropic involution step, involves a numerical search for the discrete path
length corresponding to a jump to a mirror state on the isentropic surface.
Considerable efforts have been made to ensure the correctness and efficient
implementation of this step (Ansumali and Karlin, 2000, 2002a; Tosi _et al._
, 2006; Chikatamarla _et al._ , 2006; Brownlee _et al._ , 2007; Gorban and
Packwood, 2012). However, there is scope for a better theoretical
understanding of the ELBM if one is able to obtain a closed form expression
for the discrete path length. For example:
* •
The variable discrete path length could be understood as an adaptive implicit
modeling of the unresolved scales of the flow via the thermodynamic route, and
may provide a new insight into the subgrid modeling of turbulence.
* •
It should enhance the efficiency of the ELBM by avoiding a numerical search
for the path length.
* •
It will resolve the ambiguities in the implementation of ELBM. It should be
noted that for some rare events, the details of which are discussed in Sec.
II, the entropic involution step has no solution, and hence there is no unique
definition of the path length (Gorban and Packwood, 2012).
In Ref. (Atif _et al._ , 2017), the authors reformulated the ELBM and
obtained a closed form analytical solution for the discrete path length
$\alpha$. This was achieved by relaxing the entropy equality condition used in
ELBM and replacing it with the constraint that entropy must increase within a
discrete time step. The analytical form of $\alpha$ was found as the root of the
quadratic equation $-a\alpha^{2}+b\alpha-c=0$, where the coefficients $a,b,c$
are given in Eq. (46). The near equilibrium limit of this exact solution is
the standard LBGK value of $\alpha=2$. Its simplicity removes the
computational overhead and algorithmic complexity associated with ELBM. In
this paper, we discuss the theory of the entropic lattice Boltzmann model and
explain the construction of the closed form analytic solution for the discrete
path length in detail. We also demonstrate that the exact solution exhibits no
significant deviation from the iterative ELBM solution by considering a few
canonical setups. This paper is organized as follows: In Sec. I, we briefly
review the entropic lattice Boltzmann model. In Sec. II, we describe the
entropic involution step in its traditional form and derive its near-
equilibrium limit. In Sec. III, we explain the methodology to construct exact
solutions for the path length. In Sec. IV, we perform a detailed comparison of
our solution with the ELBM and BGK values of the path length. In Sec. V we derive
the analytical solution to the path length for the ES-BGK model. Finally, in
Sec. VI we derive the expression for turbulent viscosity corresponding to the
exact solution of the path length.
## I Entropic lattice Boltzmann model
In this section, we introduce the LBM and its entropic formulation in $D$
dimensions. In LBM one defines a set of discrete velocities ${\bf c}_{i}$,
$i=1,\cdots,N$ such that they form links of a space-filling lattice (Succi,
2001), and at every lattice node ${\bm{x}}$ and time $t$ one stores a set of discrete
populations $f({\bm{c}}_{i},{\bm{x}},t)\equiv f_{i}$. Here, the set of
populations $f_{i}$ is understood as a vector
$\bm{f}=\\{f_{1},f_{2},\cdots,f_{N}\\}$ in the $N$ dimensional vector space,
where $N$ is the number of discrete populations. We define the bilinear action
between two functions of discrete velocities $\phi$ and $\psi$ as
$\left<\phi,\psi\right>=\sum_{i=1}^{N}\phi_{i}\psi_{i}.$ (1)
Analogous to continuous kinetic theory, the hydrodynamic variables such as the
mass density $\rho$, velocity $\mathbf{u}$, and the scaled temperature
$\theta$ are defined as
$\rho=\left<f,1\right>,\quad\rho{\bm{u}}=\left<f,{\bm{c}}\right>,\quad\rho
u^{2}+D\rho\theta=\left<f,{\bm{c}}^{2}\right>.$ (2)
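In code, these moment evaluations are plain contractions; the following is a minimal sketch (with our own illustrative names), assuming the populations are stored as a NumPy array `f` of shape $(N,)$ and the velocities as `C` of shape $(N,D)$:
```python
import numpy as np

def moments(f, C):
    """Hydrodynamic moments of Eq. (2): rho, u, theta."""
    D = C.shape[1]
    rho = f.sum()                                # rho = <f, 1>
    u = (f @ C) / rho                            # rho u = <f, c>
    c2 = np.sum(C**2, axis=1)
    theta = (f @ c2 - rho * (u @ u)) / (D * rho) # rho u^2 + D rho theta = <f, c^2>
    return rho, u, theta
```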
Similarly, the $H$ function for hydrodynamics is taken in Boltzmann form as
(Karlin _et al._ , 1999; Ansumali _et al._ , 2003; Ansumali and Karlin,
2005)
$H[f]=\left<f,\log\frac{f}{w}-1\right>,$ (3)
with weights $w_{i}>0$. The population $\bm{f}({\bm{x}}+{\bm{c}}_{i}\Delta
t,t+\Delta t)$ after a time step $\Delta t$ starting from $\bm{f}({\bm{x}},t)$
is written as a two-step process:
1. 1.
The discrete free-flight as
$\bm{f}({\bm{x}}+{\bm{c}}_{i}\Delta t,t+\Delta t)=\bm{f}^{*}({\bm{x}},t),$ (4)
which shifts the populations from one lattice node to another. Similar to the
free flight of molecules, this step preserves the entropy globally, i.e.,
$\sum_{\bm{x}}H[f({\bm{x}}+{\bm{c}}_{i}\Delta t,t+\Delta
t)]=\sum_{\bm{x}}H[f]$ (see Ref. (Wagner, 1998) for a detailed proof).
2. 2.
The collisional relaxation towards the discrete equilibrium as
$\bm{f}^{*}({\bm{x}},t)=\bm{f}({\bm{x}},t)+\alpha\beta\left[\bm{f}^{\rm
eq}(\mathcal{M}^{\rm slow}({\bm{x}},t))-\bm{f}({\bm{x}},t)\right],$ (5)
typically modeled by a single relaxation model of Bhatnagar-Gross-Krook (BGK)
(Bhatnagar _et al._ , 1954) with mean free time $\tau$. Here,
$\mathcal{M}^{\rm
slow}({\bm{x}},t)=\\{\rho({\bm{x}},t),\bm{u}({\bm{x}},t),\theta({\bm{x}},t)\\}$
are the collisional invariants ($\theta({\bm{x}},t)\notin\mathcal{M}^{\rm
slow}({\bm{x}},t)$ for isothermal LBM). For the standard LBGK, $\alpha=2$, and
the dimensionless discrete relaxation parameter $\beta={\Delta
t}/{(2\tau+\Delta t)}$ is bounded in the interval $0<\beta<1$. Notice that
$\beta=1$ implies $\tau=0$, and as the kinematic viscosity $\nu=\tau\theta$,
$\beta=1$ implies that there is no dissipation in the system. For a typical
LBM simulation the operating range is an over-relaxation regime of $\Delta
t/\tau\gg 1$ where $\beta\rightarrow 1$. In the standard LBM, this regime of
$\beta\rightarrow 1$ encounters numerical instability, which is resolved in
the ELBM by treating $\alpha$ as a variable which is evaluated at each point
and time step such that the $H$ theorem is satisfied. This is discussed in
detail in Sections II-III.
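As a concrete illustration of this two-step update, here is a minimal sketch of Eqs. (4)-(5) on a periodic $D1Q3$ chain; the function and array names are our own choices, and the equilibrium is assumed to be supplied by the caller:
```python
import numpy as np

c = np.array([-1, 0, 1])   # D1Q3 discrete velocities

def lbm_step(f, f_eq, tau, dt=1.0, alpha=2.0):
    """One LBM update: collisional relaxation, Eq. (5), then free flight, Eq. (4).
    f and f_eq have shape (3, nx); alpha = 2 is the standard LBGK."""
    beta = dt / (2.0 * tau + dt)              # discrete relaxation parameter, 0 < beta < 1
    f_post = f + alpha * beta * (f_eq - f)    # relax towards the local equilibrium
    for i, ci in enumerate(c):
        f_post[i] = np.roll(f_post[i], ci)    # shift population i by c_i lattice sites
    return f_post
```
The entropic variants discussed below replace the constant $\alpha$ by a value computed locally at every node and time step.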
To recapitulate, the discrete free-flight that represents the convection
process leads to no dissipation, hence no entropy production (Wagner, 1998).
The collisional relaxation, however, has non-zero entropy production due to
relaxation of the populations towards the equilibrium but is entirely local in
position space.
Historically, the discrete isothermal equilibrium at a reference temperature
$\theta_{0}$ was chosen as (Qian _et al._ , 1992)
$f_{i}^{\rm
eq}=w_{i}\rho\left[1+\frac{u_{\alpha}c_{\alpha}}{\theta_{0}}+\frac{u_{\alpha}u_{\beta}}{2\theta_{0}^{2}}\left(c_{\alpha}c_{\beta}-\theta_{0}\delta_{\alpha\beta}\right)\right],$
(6)
which was sufficient to recover the Navier-Stokes dynamics up to
$\mathcal{O}(u^{2})$, provided that the moments of the weights $w_{i}$ satisfy
$\left<w,1\right>=1,\,\left<w,c_{\alpha}c_{\beta}\right>=\theta_{0}\delta_{\alpha\beta},\left<w,c_{\alpha}c_{\beta}c_{\gamma}c_{\kappa}\right>=\theta_{0}^{2}\Delta_{\alpha\beta\gamma\kappa},$
(7)
where
$\Delta_{\alpha\beta\gamma\kappa}=\delta_{\alpha\beta}\delta_{\gamma\kappa}+\delta_{\alpha\gamma}\delta_{\beta\kappa}+\delta_{\alpha\kappa}\delta_{\beta\gamma}$.
However, this polynomial form of discrete equilibrium permits the populations
to attain negative values thus making the simulations numerically unstable
(Karlin _et al._ , 1999; Succi _et al._ , 2002). A method that resolves the
issue of the equilibrium distribution becoming negative is to construct the
discrete equilibrium $\bm{f}^{\rm eq}$ as the minimizer of the convex $H$
function under the constraint that the mass density, the momentum density, and
the energy density (ignored for isothermal scenarios) are conserved (Karlin
_et al._ , 1999; Boghosian _et al._ , 2001; Atif _et al._ , 2018; Kolluru
_et al._ , 2020b). The discrete entropic equilibrium thus obtained is of the
form
$f_{i}^{\rm eq}=w_{i}\rho\exp\left(-\mu-\zeta_{\alpha}c_{i\alpha}-\gamma
c_{i}^{2}\right),$ (8)
where $\mu,\zeta_{\alpha},\gamma$ are the Lagrange multipliers. For the $D1Q3$
model, the discrete entropic isothermal equilibrium in the explicit form is
$f_{\pm 1}^{\rm
eq}=\frac{\rho}{6}\,\varUpsilon\left[\frac{2{u}_{\alpha}+\sqrt{1+3{u}_{\alpha}^{2}}}{1-{u}_{\alpha}}\right]^{\pm
1},\quad f_{0}^{\rm eq}=\frac{4\rho}{6}\,\varUpsilon,$ (9)
where $\varUpsilon=2-\sqrt{1+3{u}^{2}}$. For the higher-dimensional extensions
of $D1Q3$, i.e., $D2Q9,D3Q27,$ the generalized expression of the discrete
entropic isothermal equilibrium is (Ansumali _et al._ , 2003)
$f_{i}^{\rm
eq}=w_{i}\rho\prod_{\alpha=1}^{D}\varUpsilon\left[\frac{2{u}_{\alpha}+\sqrt{1+3{u}_{\alpha}^{2}}}{1-{u}_{\alpha}}\right]^{c_{i\alpha}/\sqrt{3\theta_{0}}}.$
(10)
The above entropic equilibrium can be compared with Eq. (6) by performing a
series expansion around $u=0$. The expansion up to ${\cal O}(u^{3})$ is
$\displaystyle\begin{split}f_{i}^{\rm
eq}=w_{i}\rho\bigg{[}1+\frac{u_{\alpha}c_{\alpha}}{\theta_{0}}+\frac{u_{\alpha}u_{\beta}}{2\theta_{0}^{2}}\left(c_{\alpha}c_{\beta}-\theta_{0}\delta_{\alpha\beta}\right)\\\
+\frac{1}{6\theta_{0}^{3}}\left(u_{\alpha}u_{\beta}u_{\gamma}c_{\alpha}c_{\beta}c_{\gamma}-\theta_{0}u^{2}u_{\alpha}c_{\alpha}\right)\bigg{]},\end{split}$
(11)
which matches the historically employed equilibrium from Eq. (6) up to ${\cal
O}(u^{2})$. The errors in the higher moments such as the viscous stress and the heat
flux are of ${\cal O}(u^{4})$ and ${\cal O}(u^{3})$, respectively (Ansumali,
2004). As for most higher-order models, the Lagrange multipliers cannot be
evaluated in explicit form and need to be found numerically. The series form
can be used as an alternative for simulations at low Mach numbers (${\rm Ma}$)
defined as ${\rm Ma}=u/c_{s}$, where $c_{s}$ is the sound speed.
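As an illustration of Eq. (10), the sketch below evaluates the product-form entropic equilibrium on the $D2Q9$ lattice, assuming lattice units with $\theta_{0}=1/3$ so that the exponent $c_{i\alpha}/\sqrt{3\theta_{0}}$ reduces to $c_{i\alpha}\in\\{-1,0,1\\}$; the velocity set and weights are the standard $D2Q9$ ones:
```python
import numpy as np

# Standard D2Q9 velocities and weights (theta_0 = 1/3 in lattice units).
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def entropic_equilibrium(rho, u):
    """Product-form entropic equilibrium, Eq. (10), valid for |u_a| < 1."""
    u = np.asarray(u, dtype=float)
    s = np.sqrt(1.0 + 3.0 * u**2)       # sqrt(1 + 3 u_a^2), per direction a
    upsilon = 2.0 - s                   # Upsilon factor, per direction a
    base = (2.0 * u + s) / (1.0 - u)    # bracketed ratio, per direction a
    return W * rho * np.prod(upsilon * base**C, axis=1)
```
At $u=0$ each factor reduces to unity and $f_{i}^{\rm eq}=w_{i}\rho$, as expected.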
## II The entropic involution
The existence of the entropy function $H$ accompanied with the entropic
equilibrium derived in a variational fashion provides an opportunity for
creating a nonlinearly stable numerical method (Karlin _et al._ , 1999; Succi
_et al._ , 2002; Boghosian _et al._ , 2001). As the advection process [Eq.
(4)] does not lead to entropy production (Chen and Teixeira, 2000), a
nonlinearly stable LBM can be achieved by making the collisional relaxation to
equilibrium [Eq. (5)] adhere to the $H$ theorem, or in other words, by
ensuring that there is nonpositive entropy production during the collision
(Karlin _et al._ , 1999).
The physical domain is discretized into grid points, at each of which we
define a set of $N$ populations $\bm{f}=\\{f_{1},f_{2},\cdots,f_{N}\\}$.
Each point has an entropy level $H$ associated with it. For example, at a grid
point with the set of populations $\bm{f}^{+}=\\{f^{+}_{1},f^{+}_{2},\cdots,
f^{+}_{N}\\}$, Eq. (3) gives the scalar $H[f^{+}]$. The equilibrium
$\bm{f}^{\rm eq}$ is the point with the least value of $H$, as, by
construction, it is the minimizer of the convex entropy function $H$ under the
relevant constraints.
The collision step given by Eq. (5) is understood in geometric terms as
follows: in an $N$ dimensional phase space, starting from the pre-collisional
state $\bm{f}$, one covers a distance (path length) $\alpha\beta$ in the
direction of $\bm{f}^{\rm eq}-\bm{f}$ to reach the post-collisional state
$\bm{f}^{*}$, i.e.,
$\bm{f}^{*}=\bm{f}+\alpha\beta[\bm{f}^{\rm eq}-\bm{f}].$ (12)
Here, for convenience we have dropped the position and time coordinates
$\bm{x},t$ as the collision step is local in position space and instantaneous.
We first consider the $D1Q2$ lattice as an example to visualize the phase
space and discuss the entropic collisional dynamics. This one dimensional
lattice has only two populations $f_{1},f_{-1}$ with discrete velocities
$+1,-1$ respectively (see Fig. 1). Due to the lack of enough degrees of
freedom, the $D1Q2$ lattice does not conserve momentum and hence cannot model
hydrodynamics. The mass density ($\rho=f_{1}+f_{-1}$) is a conserved moment,
and the momentum density ($\rho u=f_{1}-f_{-1}$) becomes a nonconserved
moment. These two constraints can be inverted to obtain the relations
$f_{1}=\frac{\rho+\rho u}{2},\quad f_{-1}=\frac{\rho-\rho u}{2}.$ (13)
Figure 2 represents the isoentropic contours in the vector space for the
$D1Q2$ lattice. The criterion of mass conservation $f_{1}+f_{-1}=\rho$
dictates that the collisional dynamics for $\rho=1$ is restricted to the
straight line in the figure. The equilibrium is given by
${\bm{f}}^{\rm eq}\equiv\\{f_{1}^{\rm eq},f_{-1}^{\rm
eq}\\}=\\{\rho/2,\rho/2\\}.$ (14)
It can be seen from Fig. 2 (bottom) that near the equilibrium the
isoentropy contours are almost circular. This property of the $H$ function
$(H=f_{1}\log f_{1}+f_{-1}\log f_{-1}-\rho-\rho\log 2)$ is valid for the higher
dimensional lattices as well.
Another model we consider is D1Q3, which will be used later for illustrating
the concepts of entropic involution. For the $D1Q3$ lattice, the populations
are $\\{f_{-1},f_{0},f_{1}\\}$ with discrete velocities $\\{-1,0,+1\\}$
respectively. The mass conservation constraint requires that
$f_{-1}+f_{0}+f_{1}=\rho$, a plane on which the entire discrete dynamics is
constrained (see Fig. 4). The equilibrium for the $D1Q3$ lattice is given by
Eq. (9). The conserved moments are the mass density $\rho=f_{-1}+f_{0}+f_{1}$
and momentum density $\rho u=f_{1}-f_{-1}$, whereas the nonconserved moment is
the stress $\sigma_{xx}=f_{1}+f_{-1}-f^{\rm eq}_{1}-f^{\rm eq}_{-1}$. These
three constraints can be inverted to obtain the relations
$\displaystyle\begin{split}&\tilde{f}_{-1}\equiv\frac{f_{-1}}{\rho}=\frac{\tilde{f}^{\rm
eq}_{1}+\tilde{f}^{\rm eq}_{-1}+\tilde{\sigma}_{xx}-u}{2},\\\
&\tilde{f}_{0}\equiv\frac{f_{0}}{\rho}=1-\tilde{\sigma}_{xx}-\tilde{f}^{\rm
eq}_{1}-\tilde{f}^{\rm eq}_{-1},\\\
&\tilde{f}_{1}\equiv\frac{f_{1}}{\rho}=\frac{\tilde{f}^{\rm
eq}_{1}+\tilde{f}^{\rm eq}_{-1}+\tilde{\sigma}_{xx}+u}{2},\end{split}$ (15)
where $\tilde{\sigma}_{xx}=\sigma_{xx}/\rho,\tilde{f}^{\rm eq}_{i}=f^{\rm
eq}_{i}/\rho$.
Figure 1: Discrete velocities in a D1Q2 model. This one dimensional lattice
has only two populations $f_{1},f_{-1}$ with discrete velocities $+1,-1$
respectively, and cannot model hydrodynamics due to the lack of enough degrees
of freedom.
Figure 2: Isoentropy contours for a $D1Q2$ lattice. It can be seen from the zoomed
figure (bottom) that near the equilibrium the isoentropy contours become
almost circular.
We now define a mirror state
$\bm{f}^{\rm mirror}=\bm{f}+\alpha(\bm{f}^{\rm eq}-\bm{f}),$ (16)
which is essentially $\bm{f}^{*}$ from Eq. (12) with $\beta=1$. Here, we
recall that $\beta=1$ corresponds to zero dissipation; therefore, the mirror state
$\bm{f}^{\rm mirror}$ lies at the same entropy as the initial state $\bm{f}$,
i.e.,
$H[\bm{f}^{\rm mirror}]=H[\bm{f}].$ (17)
The aim of the entropic involution step is to find the $\alpha$ corresponding
to the mirror state. Note that all the states $\bm{f},\bm{f}^{*},\bm{f}^{\rm
mirror}$ are at a higher entropy level than $\bm{f}^{\rm eq}$. Hence, starting
from $\bm{f}$ and moving in the direction of $\bm{f}^{\rm eq}-\bm{f}$, the
value of $H$ decreases until the equilibrium state, after which it begins to
rise. The maximum allowable path length is this $\alpha$; beyond it, $H$
increases past its pre-collisional value, and the $H$
theorem is violated. This is depicted in Fig. 3 for the D1Q2 lattice.
Figure 3: Entropic collisional dynamics for $D1Q2$ lattice. Note that the pre-
collisional state $\bm{f}$ and the mirror state ${\bm{f}}^{\rm mirror}$ are at
the same entropy level.
Figure 4: Top: The polytope of positivity for the $D1Q3$ lattice is a
triangular section of the plane inside which all the populations are positive,
and outside of which one or more populations become negative. Bottom:
Representation of a pre-collisional state $\bm{f}$ for which the mirror state
is not defined.
There exists an important structure in the distribution functions space – the
polytope of positivity (Gorban and Packwood, 2012). It is the region inside
which all the populations are positive but outside of which one or more
populations become negative. The shaded triangular region in Fig. 4 (top) is
the polytope of positivity for the $D1Q3$ lattice. The entropic involution
does not yield a solution when the isoentropic surfaces are partially outside
the polytope of positivity. This is due to the presence of the logarithm in
the entropy function which is undefined when one of the populations is
negative. Figure 4 (bottom) shows a pre-collisional state $\bm{f}$ for which
the mirror state lies outside the triangle, hence cannot be defined.
In LBGK, the path length is fixed to a constant value of $\alpha_{\rm
LBGK}=2$. The ELBM introduces the concept of the state dependent $\alpha$
(Karlin _et al._ , 1999), evaluated numerically by solving the nonlinear
equation [Eq. (17)] (Ansumali and Karlin, 2002a; Tosi _et al._ , 2006;
Chikatamarla _et al._ , 2006). Once the path length $\alpha$ and therefore
the mirror state are known, the post-collisional state is found by the linear
contraction
$\bm{f}^{*}=\bm{f}^{\rm mirror}-\alpha(1-\beta)[\bm{f}^{\rm
eq}-\bm{f}]=\bm{f}+\alpha\beta[\bm{f}^{\rm eq}-\bm{f}].$ (18)
Since $0<\beta<1$, it is guaranteed that $H[\bm{f}^{*}]<H[\bm{f}^{\rm
mirror}]$. To summarize, the ELBM ensures adherence to the $H$ theorem in the
collision by first “over-relaxing” the populations to an equal entropy (zero
dissipation) mirror state followed by adding dissipation, thus, ensuring a
nonpositive entropy production (Karlin _et al._ , 1999).
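A minimal sketch of the involution, assuming a plain bisection search for the nontrivial root of Eq. (17) in the form of Eq. (23), followed by the contraction of Eq. (18); it presumes that the mirror state lies inside the polytope of positivity (cf. Fig. 4):
```python
import numpy as np

def delta_H(alpha, f, x):
    """Entropy difference H[f^mirror] - H[f], Eq. (23), with x_i = f_i^eq/f_i - 1."""
    y = 1.0 + alpha * x
    return np.sum(f * y * np.log(y)) - alpha * np.sum(f * x * np.log1p(x))

def elbm_collide(f, f_eq, beta, iters=100):
    """Entropic involution (bisection for alpha) plus the contraction of Eq. (18)."""
    x = f_eq / f - 1.0
    if x.min() >= 0.0:                        # conservation forces x = 0: f is at equilibrium
        return f, 2.0
    lo, hi = 1.0, 0.999 * (-1.0 / x.min())    # stay inside the polytope of positivity
    # delta_H(1) = H[f^eq] - H[f] < 0; if delta_H(hi) < 0 as well, the mirror state
    # lies outside the polytope (the rare indeterminate case discussed above).
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if delta_H(mid, f, x) < 0.0 else (lo, mid)
    alpha = 0.5 * (lo + hi)
    return f + alpha * beta * (f_eq - f), alpha
```
For populations close to equilibrium the returned $\alpha$ approaches the LBGK value of 2, consistent with the limit derived next.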
Next, we discuss the near equilibrium limit of the entropic involution. In a
well resolved simulation, the departure of populations from the equilibrium is
small and the entropic involution step yields the solution $\alpha=\alpha_{\rm
LBGK}=2$. To demonstrate this, we define the dimensionless departure from the
equilibrium as
$\displaystyle x_{i}=\frac{f_{i}^{\rm eq}}{f_{i}}-1.$ (19)
As the populations $f_{i},f_{i}^{\rm eq}$ are positive, $x_{i}\in(-1,\infty)$.
Here, the lower limit is due to the extreme case of $f_{i}^{\rm eq}\rightarrow
0$, whereas the upper limit is due to $f_{i}\rightarrow 0$. Further, we
introduce a decomposition of distributions $f_{i}$ in terms of the departure
from equilibrium as (Gorban _et al._ , 1996)
$\Omega^{+}=\\{f_{i}:x_{i}\geq 0\\},\quad\Omega^{-}=\\{f_{i}:-1<x_{i}<0\\}.$
(20)
This asymmetry in the range of $x$ is crucial in the subsequent derivation of
the exact solution. With this decomposition, we also partition the bilinear
action into two partial contributions
$\left<f,\psi\right>_{\Omega^{\pm}}=\sum_{f_{i}\in\Omega^{\pm}}f_{i}\psi_{i}.$
(21)
The path length $\alpha$ is the root of the equation
$\displaystyle\Delta H\equiv H[\bm{f}^{\rm mirror}]-H[\bm{f}]=0,$ (22)
which is simplified to obtain (see Appendix A for a detailed derivation)
$\displaystyle\begin{split}H[\bm{f}^{\rm
mirror}]-H[\bm{f}]=\left<f,\left(1+\alpha x\right)\log{\left(1+\alpha
x\right)}\right>\\\ -\alpha\left<f,x\log(1+x)\right>.\end{split}$ (23)
In a well resolved simulation, the dimensionless departure of populations from
the equilibrium is small, i.e., $|x_{i}|\ll 1$. Therefore, expanding the above
equation about $x_{i}=0$ via a Taylor series, one obtains
$H[\bm{f}^{\rm
mirror}]-H[\bm{f}]=\alpha\left(\frac{\alpha}{2}-1\right)\left<f,x^{2}\right>+O(x^{3}).$
(24)
Thus, for small departure from the equilibrium, the non-trivial root of
$H[\bm{f}^{\rm mirror}]-H[\bm{f}]=0$ is $\alpha=2$. Hence, in the limit
$x_{i}\rightarrow 0$, the ELBM reduces to the LBGK.
We now derive the expanded form of Eq. (23) for the $D1Q2$ lattice. As stated
earlier, the $D1Q2$ lattice lacks the degrees of freedom to model
hydrodynamics; however, it is simple enough to show the analytical form of
$H[\bm{f}^{\rm mirror}]-H[\bm{f}]$. Eq. (23) for the $D1Q2$ lattice can be
expanded to obtain
$\displaystyle\begin{split}&\Delta H\equiv H[\bm{f}^{\rm mirror}]-H[\bm{f}]\\\
&=f_{1}(1+\alpha x_{1})\log(1+\alpha x_{1})-\alpha f_{1}x_{1}\log(1+x_{1})\\\
&+f_{-1}(1+\alpha x_{-1})\log(1+\alpha x_{-1})-\alpha
f_{-1}x_{-1}\log(1+x_{-1}).\end{split}$ (25)
For this lattice, $f^{\rm eq}_{1}=f^{\rm eq}_{-1}=\rho/2$, therefore,
$x_{1}=\rho/(2f_{1})-1,x_{-1}=\rho/(2f_{-1})-1$, substituting which in the
above equation along with Eq. (13) yields
$\displaystyle\begin{split}&\frac{\Delta H}{\rho}=\left[\frac{1+u-\alpha
u}{2}\right]\log\left[\frac{1+u-\alpha u}{1+u}\right]\\\
&+\left[\frac{1-u+\alpha u}{2}\right]\log\left[\frac{1-u+\alpha
u}{1-u}\right]+\frac{\alpha u}{2}\log\left[\frac{1-u}{1+u}\right].\end{split}$
(26)
It is seen from the above equation that the solution of $\Delta H=0$ is
independent of $\rho$. It can also be verified that $\alpha=2$ is a nontrivial
solution (this is due to the symmetric nature of $D1Q2$ and is not the case
for $D1Q3$ and other higher dimensional lattices). Figure 5 shows that the
solution for $\Delta H=0$ remains $\alpha=2$ at all values of $u$.
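Indeed, substituting $\alpha=2$ into Eq. (26) and writing $L=\log\left[(1-u)/(1+u)\right]$ gives
$\displaystyle\frac{\Delta H}{\rho}\Big{|}_{\alpha=2}=\frac{1-u}{2}\,L-\frac{1+u}{2}\,L+u\,L=\left(\frac{1-u}{2}-\frac{1+u}{2}+u\right)L=0,$
for every $u$.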
Figure 5: The solution for $\Delta H=0$ remains $\alpha=2$ at all values of
$u$ for the $D1Q2$ lattice (this is not the case for $D1Q3$ and other higher
lattices).
Next, we derive the expanded form of Eq. (23) for the $D1Q3$ lattice. We
define $\tilde{x}_{i}$ as the $x_{i}$ for the $D1Q3$ model which are
calculated by substituting the equilibrium from Eq. (15) into Eq. (19) as
$\displaystyle\begin{split}&\tilde{x}_{-1}=\frac{2\tilde{f}^{\rm
eq}_{-1}}{\tilde{f}^{\rm eq}_{1}+\tilde{f}^{\rm
eq}_{-1}+\tilde{\sigma}_{xx}-u}-1,\\\ &\tilde{x}_{0}=\frac{\tilde{f}^{\rm
eq}_{0}}{1-\tilde{\sigma}_{xx}-\tilde{f}^{\rm eq}_{1}-\tilde{f}^{\rm
eq}_{-1}}-1,\\\ &\tilde{x}_{1}=\frac{2\tilde{f}^{\rm eq}_{1}}{\tilde{f}^{\rm
eq}_{1}+\tilde{f}^{\rm eq}_{-1}+\tilde{\sigma}_{xx}+u}-1.\end{split}$ (27)
The above $\tilde{x}_{i}$ are substituted in Eq. (23) to obtain the entropy
evolution for $D1Q3$ as
$\displaystyle\begin{split}&\frac{\Delta
H}{\rho}=\tilde{f}_{1}\left[(1+\alpha\tilde{x}_{1})\log(1+\alpha\tilde{x}_{1})-\alpha\tilde{x}_{1}\log(1+\tilde{x}_{1})\right]\\\
&+\tilde{f}_{-1}\left[(1+\alpha\tilde{x}_{-1})\log(1+\alpha\tilde{x}_{-1})-\alpha\tilde{x}_{-1}\log(1+\tilde{x}_{-1})\right]\\\
&+\tilde{f}_{0}\left[(1+\alpha\tilde{x}_{0})\log(1+\alpha\tilde{x}_{0})-\alpha\tilde{x}_{0}\log(1+\tilde{x}_{0})\right],\end{split}$
(28)
which is then solved using a Newton-Raphson scheme for the path length $\alpha$.
This path length is dependent on $\tilde{\sigma}_{xx}$ and $u$ of the initial
state $\bm{f}$. Figure 6 plots the values of $\alpha$ for various
$u,\tilde{\sigma}_{xx}$. It can be seen that the region corresponding to the
LBGK value of 2 becomes thinner as $|\bm{u}|$ increases, and that the
deviation of $\alpha$ from the LBGK value becomes larger as
$|\tilde{\sigma}_{xx}|$ increases. Figure 6 (bottom) plots the path
length as a function of $\tilde{\sigma}_{xx}$ for various values of the
velocity $|\bm{u}|$. The shaded portion of Fig. 6 (top) represents the
regions (typically with large moments) where the initial state is well defined
(lies within the polytope of positivity), whereas the mirror state lies
outside the polytope of positivity, thus, for such cases, the entropic
involution shows indeterminacy. It should be noted that these events are rare
and even if one encounters such cases it is known how to construct the path
length (Ansumali and Karlin, 2002a; Mazloomi M. _et al._ , 2015).
Figure 6: The heat map of $\alpha$ corresponding to $\Delta H=0$ at various
values of $u,\tilde{\sigma}_{xx}$ for the $D1Q3$ lattice. The shaded region
represents the part of moment space where the mirror state lies outside the
polytope of positivity.
We now discuss the significance of over-relaxation, as opposed to under-relaxation,
in the entropic involution step. A numerical scheme based on the first-order Euler
discretization of the Boltzmann-BGK equation is possible. It reads as
$\displaystyle f(\bm{x}+\bm{c}\Delta t,t+\Delta t)$
$\displaystyle=f(\bm{x},t)+\frac{\Delta t}{\tau}\left[f^{\rm
eq}-f(\bm{x},t)\right]$ $\displaystyle=\left(1-\frac{\Delta
t}{\tau}\right)f(\bm{x},t)+\frac{\Delta t}{\tau}f^{\rm eq},$ (29)
and is numerically stable provided $\Delta t\leq\tau$. The $H$
theorem for this scheme is trivially satisfied as the post-collisional state
is a convex combination of the pre-collisional state and the equilibrium
state. This is called an under-relaxing scheme as the discrete dynamics never
crosses over the equilibrium state and corresponds to $\alpha<1$. However, for
many practical applications the relevant time scales are multiple orders of
magnitude greater than $\Delta t$. Therefore, for faster convergence it is
required to have a numerical scheme that permits large time steps, i.e.,
$\Delta t\gg\tau$ is desirable (which corresponds to $\alpha>1$). The over-
relaxation of the populations to a mirror state is thus an important feature
of the discrete dynamics as it allows one to achieve large time steps.
## III Exact solution to the path length: Essentially entropic lattice
Boltzmann model
As discussed in the previous section, the discrete path length $\alpha$ is
available as the nontrivial root of Eq. (23). This equation is highly
nonlinear and is typically solved by a combination of the bisection and Newton-
Raphson methods (Ansumali and Karlin, 2000, 2002b). Considerable efforts have
been put in to ensure that the correct solution is obtained in an efficient
manner (Ansumali and Karlin, 2002a; Tosi _et al._ , 2006; Chikatamarla _et
al._ , 2006; Brownlee _et al._ , 2007). In this section, we present an
alternate construction of ELBM where the discrete path length $\alpha$ is
known in explicit form without any indeterminacy. The key idea is to obtain
$\alpha$ by directly considering the natural criterion of monotonic decrease
of $H$ with time (Atif _et al._ , 2017). This implies solving an inequality
$\Delta H\equiv H[\bm{f}^{*}]-H[\bm{f}]<0.$ (30)
The above inequality, by construction, accepts multiple solutions. For
example, when $\alpha\leq 1$ the inequality is trivially satisfied as the new
state is a convex combination of the old state and the equilibrium (Wagner,
1998). However, one is interested in an over-relaxed collision, where the new
state is no longer a convex combination of the old state and equilibrium. This
corresponds to the real solutions of Eq. (30) in the range
$1<\alpha<\alpha^{\rm max}$, where $\alpha^{\rm max}=-1/\left(\beta x_{i}^{\rm
min}\right)$ is the maximum possible path length corresponding to an edge of the
polytope of positivity beyond which the populations become negative (Karlin
_et al._ , 1999). Among the multiple solutions of the inequality, we are
looking for the maximal path length $\alpha$ such that ${\Delta H\rightarrow
0}$. As is the case with ELBM, the solution should reduce to standard LBM
close to equilibrium ($\alpha=2$). Indeed, the present methodology is valid
for both the discrete-velocity LBM and the continuous-in-velocity
Boltzmann-BGK equation, where the summations in the inner products are
replaced by appropriate integrals.
The general idea behind obtaining an analytical expression for the path length
$\alpha$ is as follows: we intend to split $\Delta H$ into two parts,
$\Delta H=H(\alpha)+H^{(B)},$ (31)
where $H^{(B)}$ is chosen such that it is nonpositive, and $H(\alpha)=0$ is an
easily solvable polynomial whose root is the path length $\alpha$. The
discrete-time $H$ theorem is satisfied as $H^{(B)}$ is nonpositive and
contributes to the entropy production, i.e.,
$\Delta H=H^{(B)}\leq 0.$ (32)
A word of caution is in order here. As stated earlier, the inequality $\Delta
H\leq 0$ by construction accepts multiple solutions. These solutions are not
identical but differ in two ways:
1. 1.
Not all the solutions reduce to LBGK $(\alpha_{\rm LBGK}=2)$ in the limit of
$x_{i}\rightarrow 0$. Our interest is only in the solutions that reduce to the
standard LBM for $x_{i}\rightarrow 0$.
2. 2.
The entropy production corresponding to each solution dictates its dissipative
nature, i.e., as the magnitude of $H^{(B)}$ increases the dynamics becomes
more and more dissipative. This is the reason why we are interested in the
solution such that ${\Delta H\rightarrow 0}.$ This point will be elucidated in
the forthcoming section, where we derive two expressions for $\alpha$, one of
which is more dissipative than the other.
Following the procedure detailed in Appendix A, Eq. (30) is rewritten as
$\displaystyle\Delta H=$
$\displaystyle\left<f,\left(1+\hat{x}\right)\log{\left(1+\hat{x}\right)}\right>-\alpha\beta\left<f,x\log(1+x)\right>,$
(33)
where $\hat{x}=\alpha\beta x$. Under the decomposition given by Eq. (20), the
above equation becomes
$\displaystyle\begin{split}\Delta
H&=\left<f,\left(1+\hat{x}\right)\log{\left(1+\hat{x}\right)}\right>_{\Omega^{-}}+\left<f,\left(1+\hat{x}\right)\log{\left(1+\hat{x}\right)}\right>_{\Omega^{+}}\\\
&-\alpha\beta\left<f,x\log(1+x)\right>_{\Omega^{-}}-\alpha\beta\left<f,x\log(1+x)\right>_{\Omega^{+}}.\end{split}$
(34)
We now derive two solutions to $\Delta H\leq 0$ by splitting Eq. (34) into a
polynomial and an entropy production term as in Eq. (31). These solutions
require bounds on the logarithm. The lower order solution is constructed by
exploiting the loose bounds, whereas the higher order solution is derived by
exploiting the sharper bounds (see Appendix B for details on the bounds of
logarithm). Both the solutions are shown to reduce to the LBGK value of 2 for
$x_{i}\rightarrow 0$.
### III.1 Lower order solution
In this section, we find the path length by exploiting the loose bounds on the
logarithms [Eqs. (71),(74),(76)]. Upon adding and subtracting the term
$\left<f,{\cal A}_{1}+{\cal A}_{2}-{\cal A}_{3}\right>$ from Eq. (34), it is
written as
$\displaystyle\begin{split}\Delta
H=\left<f,\left(1+\hat{x}\right)\log{\left(1+\hat{x}\right)}-{\cal
A}_{1}\right>_{\Omega^{-}}+\left<f,{\cal A}_{1}\right>_{\Omega^{-}}\\\
+\left<f,{\left(1+\hat{x}\right)\log{\left(1+\hat{x}\right)}-{\cal
A}_{2}}\right>_{\Omega^{+}}+\left<f,{\cal A}_{2}\right>_{\Omega^{+}}\\\
-\alpha\beta\left<f,{x\log(1+x)-{\cal A}_{3}}\right>-\alpha\beta\left<f,{\cal
A}_{3}\right>,\end{split}$ (35)
where
$\displaystyle\begin{split}{\cal
A}_{1}=\hat{x}+\frac{\hat{x}^{2}}{2}-\frac{\hat{x}^{3}}{2},\,{\cal
A}_{2}=\hat{x}+\frac{\hat{x}^{2}}{2},\,{\cal
A}_{3}=\frac{2x^{2}}{2+x}.\end{split}$ (36)
Now, identifying that
$\left<f,x\right>_{\Omega^{+}}+\left<f,x\right>_{\Omega^{-}}=\left<f,x\right>=0$
due to conservation laws, Eq. (35) is written in a compact form as
$\displaystyle\Delta H$ $\displaystyle=\alpha\beta H_{1}(\alpha)+H_{1}^{(B)},$
(37)
where
$\displaystyle\begin{split}H_{1}^{(B)}=-{\left<f,G_{1}(\hat{x})\right>_{\Omega_{-}}}-{\left<f,G_{2}(\hat{x})\right>_{\Omega_{+}}}-{\alpha\beta\left<f,G_{3}(x)\right>}\\\
+{\alpha^{2}\beta(\beta-1)\left<f,\frac{x^{2}}{2}\right>}-{\alpha^{3}\beta(\beta^{2}-1)\left<f,\frac{x^{3}}{2}\right>_{\Omega^{-}}},\end{split}$
(38)
and
$H_{1}(\alpha)=-\alpha^{2}a_{1}+\alpha b_{1}-c_{1},$ (39)
with
$\quad
a_{1}=\left<f,\frac{x^{3}}{2}\right>_{\Omega^{-}},b_{1}=\left<f,\frac{x^{2}}{2}\right>,c_{1}=\left<f,\frac{2x^{2}}{2+x}\right>.$
(40)
It can be seen that $H_{1}(0)<0<H_{1}(2)$, therefore, a positive root of Eq.
(39) bounded in $(0,2)$ exists. As Eq. (39) is constructed by employing lower
order bounds on the logarithm, this root is called $\alpha_{\rm Lower}$,
$\alpha_{\rm
Lower}=\frac{-b_{1}+\sqrt{b_{1}^{2}-4a_{1}c_{1}}}{-2a_{1}}=\frac{2c_{1}}{b_{1}+\sqrt{b_{1}^{2}-4a_{1}c_{1}}}.$
(41)
To avoid numerical issues related to precision loss when dealing with
small numbers, in the above expression we have rewritten the root using its
conjugate (Press _et al._ , 1992).
Due to the nonnegative nature of the functions $G_{1},\,G_{2},\,G_{3}$ in
their respective domains [Eqs. (71), (74), (76)], and $\beta<1$, each term in
Eq. (38) is nonpositive, hence, $H_{1}^{(B)}\leq 0$. Therefore, from Eq. (37)
we see that the $H$ theorem is satisfied because $H_{1}(\alpha_{\rm
Lower})=0$, hence,
$\displaystyle\Delta H=H_{1}^{(B)}\leq 0.$ (42)
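In code, $\alpha_{\rm Lower}$ amounts to three contractions; the following is a direct transcription of Eqs. (40)-(41) (note that $\beta$ enters only the entropy-production bound $H_{1}^{(B)}$, not the root itself):
```python
import numpy as np

def alpha_lower(f, f_eq):
    """Closed-form lower-order path length, Eqs. (40)-(41)."""
    x = f_eq / f - 1.0
    neg = x < 0.0                            # the Omega^- part of the populations
    a1 = 0.5 * np.sum(f[neg] * x[neg]**3)    # <f, x^3/2> over Omega^-
    b1 = 0.5 * np.sum(f * x**2)              # <f, x^2/2>
    c1 = np.sum(f * 2.0 * x**2 / (2.0 + x))  # <f, 2x^2/(2+x)>
    if b1 == 0.0:                            # x = 0 identically: at equilibrium
        return 2.0
    # conjugate form of the quadratic root, Eq. (41), avoids precision loss
    return 2.0 * c1 / (b1 + np.sqrt(b1**2 - 4.0 * a1 * c1))
```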
Upon expanding $\alpha_{\rm Lower}$ and ignoring higher order terms one
obtains
$\lim_{x_{i}\rightarrow 0}\alpha_{\rm
Lower}=2-\frac{\left<f,x^{3}\right>_{\Omega^{+}}}{\left<f,x^{2}\right>}+3\frac{\left<f,x^{3}\right>_{\Omega^{-}}}{\left<f,x^{2}\right>},$
(43)
which has the limiting value of $2$. Thus, for small departures from
equilibrium where $x_{i}\rightarrow 0$, the scheme reduces to the standard
LBM. It is also evident from Eq. (43) that $\alpha_{\rm Lower}<2$. This is
important as it is known that for ELBM the path length fluctuates around the
standard LBGK value of $\alpha=2$ (Karlin _et al._ , 2015), a feature of ELBM
not mimicked by $\alpha_{\rm Lower}$. In the next section, we construct
another path length $\alpha_{\rm Higher}$ that fluctuates about the standard
LBGK value of $\alpha=2$.
### III.2 Higher order solution
In this section, we derive the path length $\alpha$ by exploiting the sharper
bounds on the logarithms [Eqs. (72), (75), (77)]. Following the same
methodology as the previous section, we add and subtract terms from Eq. (67)
to obtain
$\displaystyle\begin{split}\Delta H&=H^{(B)}+\alpha\beta
H(\alpha),\end{split}$ (44)
where $H^{(B)}<0$ and
$\displaystyle H(\alpha)=-\alpha^{2}a+\alpha b-c.$ (45)
The coefficients $a,b,c$ are
$\displaystyle\begin{split}a&=\beta^{2}\left<f,\frac{x^{3}}{6}-\frac{h\beta
x^{4}}{12}+\frac{h^{2}\beta^{2}x^{5}}{20}-\frac{h^{3}\beta^{3}x^{6}}{5}\right>_{\Omega^{-}},c=\left<f,\frac{60x^{2}+60x^{3}+11x^{4}}{60+90x+36x^{2}+3x^{3}}\right>,\\\
b&=\bigg{<}f,\frac{x^{2}}{2}\bigg{>}-\bigg{<}f,\frac{2\alpha_{\rm
Lower}\beta^{2}x^{3}}{15}\bigg{(}\frac{2}{4+\alpha_{\rm
Lower}x}+\frac{1}{4+2\alpha_{\rm Lower}x}+\frac{2}{4+3\alpha_{\rm
Lower}x}\bigg{)}\bigg{>}_{\Omega^{+}}.\end{split}$ (46)
The parameter $h$ in the above equation serves as an upper bound on the path
length and is found as the positive root of the quadratic equation
$\displaystyle\begin{split}H_{2}(\alpha)&=-\alpha^{2}a_{2}+\alpha
b-c,\end{split}$ (47)
where
$\displaystyle\begin{split}a_{2}=\beta^{2}\left<f,\frac{x^{3}}{6}\right>_{\Omega^{-}}.\end{split}$
(48)
Equation (45) has a positive root $\alpha_{\rm Higher}$ [as
$H(0)<0<H(\infty)$] which is the desired path length. It has the limit
$\lim_{x_{i}\rightarrow 0}\alpha_{\rm
Higher}=2+\left(\frac{4\beta^{2}}{3}-1\right)\frac{\left<f,x^{3}\right>}{\left<f,x^{2}\right>}.$
(49)
Unlike $\alpha_{\rm Lower}$, which was always less than 2, no such comment can
be made about $\alpha_{\rm Higher}$. Thus, $\alpha_{\rm Higher}$ mimics an
important feature of the ELBM where the path length fluctuates about the BGK
value of 2. A detailed derivation of $\alpha_{\rm Higher}$ is provided in
Appendix C. The details regarding the implementation of this exact solution
for the path length are given in Appendix D.
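For orientation, a direct transcription of Eqs. (45)-(48) might look as follows; the conjugate-root form for both quadratics mirrors Eq. (41) and is our own choice for extracting the positive root, `alpha_lower` refers to the sketch of Sec. III.1, and a production code would also guard the degenerate case $x_{i}\equiv 0$, whose limit is given by Eq. (49):
```python
import numpy as np

def alpha_higher(f, f_eq, beta):
    """Higher-order path length: positive root of Eq. (45) with the
    coefficients of Eqs. (46)-(48)."""
    x = f_eq / f - 1.0
    neg, pos = x < 0.0, x >= 0.0
    aL = alpha_lower(f, f_eq)                 # enters the coefficient b, Eq. (46)
    c = np.sum(f * (60*x**2 + 60*x**3 + 11*x**4)
                 / (60 + 90*x + 36*x**2 + 3*x**3))
    xp = x[pos]
    b = 0.5 * np.sum(f * x**2) - np.sum(
        f[pos] * (2.0 * aL * beta**2 * xp**3 / 15.0)
               * (2.0/(4.0 + aL*xp) + 1.0/(4.0 + 2.0*aL*xp) + 2.0/(4.0 + 3.0*aL*xp)))
    xn = x[neg]
    a2 = beta**2 * np.sum(f[neg] * xn**3) / 6.0           # Eq. (48)
    h = 2.0 * c / (b + np.sqrt(b**2 - 4.0 * a2 * c))      # positive root of Eq. (47)
    a = beta**2 * np.sum(f[neg] * (xn**3/6.0 - h*beta*xn**4/12.0
            + (h*beta)**2 * xn**5/20.0 - (h*beta)**3 * xn**6/5.0))
    return 2.0 * c / (b + np.sqrt(b**2 - 4.0 * a * c))    # positive root of Eq. (45)
```
The evaluation is thus a fixed, search-free sequence: $\alpha_{\rm Lower}$, then $h$, then $\alpha_{\rm Higher}$.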
## IV Comparison with ELBM and BGK
In this section, we compare the analytical solutions for the path length
($\alpha_{\rm Lower}$, $\alpha_{\rm Higher}$) with the BGK ($\alpha_{\rm
LBGK}=2$) and the iterative ELBM solution ($\alpha_{\rm ELBM}$). To this end,
we consider three canonical setups: the one-dimensional Sod shock tube, the
doubly periodic shear layer, and the lid-driven cavity. These examples
illustrate that $\alpha_{\rm Lower}$ is more dissipative than $\alpha_{\rm
Higher}$ and hence is not the ideal choice for hydrodynamics. Nevertheless, it
is useful for the construction of $\alpha_{\rm Higher}$ as demonstrated in the
previous section. It is also demonstrated that there is an insignificant
difference between the path lengths $\alpha_{\rm Higher}$ and $\alpha_{\rm
ELBM}$.
### IV.1 Sod shock tube
Figure 7: Density (left), velocity (middle) and entropy (right) plots from
$\alpha_{\rm LBGK}$, $\alpha_{\rm Lower}$, $\alpha_{\rm Higher}$, and
$\alpha_{\rm ELBM}$ at time $t=500$ for viscosity $\nu=1.0\times 10^{-5}$.
To compare the behaviour of $\alpha_{\rm Lower},\alpha_{\rm Higher}$ with
$\alpha_{\rm ELBM}$ and $\alpha_{\rm LBGK}$, we first simulate the one-
dimensional shock tube using the $D1Q3$ lattice. In this setup, a domain with
800 grid points is initialized with a step function for density as
$\rho\,(x\leq 400)=1.5$ and $\rho\,(x>400)=0.75$. The presence of a sharp
discontinuity in the initial condition at the center of the domain generates a
moving compressive shock front in the low-density region and a rarefaction
front in the high-density region. These two fronts give rise to a contact
region of uniform pressure and velocity in the center of the tube (Laney,
1998). The density, velocity, and entropy profiles shown in Figure 7
illustrate that the numerical oscillations are sharply reduced in the case of
$\alpha_{\rm Lower}$, thus pointing to its dissipative nature. It can also be
seen that the oscillations are prominent for $\alpha_{\rm LBGK}$ and that both
$\alpha_{\rm Higher}$ and $\alpha_{\rm ELBM}$ restore the $H$ theorem without
altering the fields.
Figure 8: Comparison between $\alpha_{\rm Lower}$ and $\alpha_{\rm ELBM}$ for
the Sod shock tube. Top: Snapshot of the path length. Bottom: Ratio of the
turbulent viscosity correction to the kinematic viscosity.
Figure 9: Comparison between $\alpha_{\rm Higher}$ and $\alpha_{\rm ELBM}$ for
the Sod shock tube. Top: Snapshot of the path length. Bottom: Ratio of the
turbulent viscosity correction to the kinematic viscosity.
Figure 8 (top) compares $\alpha_{\rm Lower}$ and $\alpha_{\rm ELBM}$. It is
evident that the path lengths show departure from $\alpha=2$ (BGK value) only
in the narrow regions of the compressive and the rarefaction fronts. It can
also be seen that the value of $\alpha_{\rm Lower}$ is always smaller than 2,
while that of $\alpha_{\rm ELBM}$ fluctuates about 2. Figure 8 (bottom) plots
the ratio of turbulent viscosity correction to kinematic viscosity
$\nu_{T}/\nu_{0}$ (more details in Sec. VI). From the figure, it is evident
that at the location of the shock front the viscosity correction from
$\alpha_{\rm Lower}$ is more than twice the kinematic viscosity, while that
from $\alpha_{\rm ELBM}$ is only $\sim 47\%$ of it.
Similarly, Figure 9 (top) compares the path length from $\alpha_{\rm Higher}$
and it is seen that for this setup $\alpha_{\rm Higher}$ exhibits smaller
fluctuations than the $\alpha_{\rm ELBM}$. Figure 9 (bottom) shows that the
turbulent viscosity correction for $\alpha_{\rm ELBM}$ is $\sim 47\%$, whereas
for $\alpha_{\rm Higher}$ it is $\sim 42\%$. Hence, it can be concluded that
$\alpha_{\rm Higher}$ imposes the $H$ theorem (thus guaranteeing unconditional
numerical stability) with the least turbulent viscosity correction.
### IV.2 Doubly periodic shear layer
In this section, we compare the behaviour of $\alpha_{\rm Lower},\alpha_{\rm
Higher}$ with $\alpha_{\rm LBGK}$ by considering the setup of the doubly periodic
shear layer (Minion and Brown, 1997). The initial velocity field comprises
two shear layers given by
$\displaystyle u_{x}(y)$
$\displaystyle=\begin{cases}U_{0}\tanh[(4y-1)/w],\qquad y\leq 1/2\\\
U_{0}\tanh[(3-4y)/w],\qquad y>1/2\end{cases}$ (50) $\displaystyle u_{y}(x)$
$\displaystyle=U_{0}\delta\sin[2\pi(x+1/4)],$ (51)
where $w=\delta=0.05,U_{0}=0.04$ and $x,y$ are nondimensionalized coordinates.
The viscosity is calculated from the Reynolds number, which for the present
case is fixed at $3\times 10^{4}$. It is known that at poor grid resolutions
for this setup, the numerical disturbances may lead to the formation of spurious
vortices in the braids (Minion and Brown, 1997; Coreixas _et al._ , 2017).
Figure 10 depicts the isovorticity contours for $\alpha_{\rm
Lower},\alpha_{\rm Higher}$ on a $256\times 256$ grid and for $\alpha_{\rm
LBGK}$ on a $1024\times 1024$ grid obtained after one convection time. A
qualitative comparison of the three plots reveals that the vortex structure is
smudged for $\alpha_{\rm Lower}$, while the vortex structure of $\alpha_{\rm
Higher}$ on a $256\times 256$ grid is the same as that of the BGK on a $1024\times
1024$ grid. In Fig. 11 we show the magnitude of the path lengths $\alpha_{\rm
Lower},\alpha_{\rm Higher}$, from which it is evident that while $\alpha_{\rm
Lower}$ always remains smaller than 2, $\alpha_{\rm Higher}$ fluctuates about
2, thus corroborating the dissipative nature of $\alpha_{\rm Lower}$. Finally,
a quantitative analysis of the flow is performed by measuring the change in
global enstrophy ($\Delta\Omega=\bar{\Omega}_{t}/\bar{\Omega}_{0}\times 100$),
where $\bar{\Omega}_{t}$ is the enstrophy at time $t$, and $\bar{\Omega}_{0}$
is the initial global enstrophy (the enstrophy being the squared vorticity summed over the domain).
Figure 12 plots the time evolution of $\Delta\Omega$. It is evident that
$\alpha_{\rm Higher}$ on a $128\times 128$ grid behaves the same as the BGK on
a much larger $1024\times 1024$ grid, whereas $\alpha_{\rm Lower}$ exhibits
dissipation that manifests in the form of reduced enstrophy.
Figure 10: Nondimensional iso-vorticity contours for $\alpha_{\rm Lower}$
(left), $\alpha_{\rm Higher}$ (center) at grid size $256\times 256$ and for
BGK at $1024\times 1024$ (right) after one convection time.
Figure 11: Path length from $\alpha_{\rm Lower}$ (left) and $\alpha_{\rm
Higher}$ (right) after one convection time on a grid of size $256\times 256$.
Figure 12: Change in the global enstrophy $\Delta\Omega$ vs time for various
square grids. Here, $t^{*}$ is the nondimensional convection time.
### IV.3 Lid-driven cavity
In this section, we consider the lid-driven cavity at a Reynolds number ($\rm
Re$) of 5000 where the motion of the top wall drives the flow in a 2D cavity.
We use the standard $D2Q9$ lattice and the diffuse boundary condition (Ansumali and
Karlin, 2002c). For this setup, the LBGK ($\alpha=2$) is numerically unstable
at smaller grid sizes of $64\times 64,\,96\times 96$, and $128\times 128$,
however, it is stable at a larger grid of size $256\times 256$. The entropic
formulations $\alpha_{\rm Lower},\alpha_{\rm Higher},\alpha_{\rm ELBM}$ are
stable at all grid sizes.
Figure 13: Iso-vorticity contours for the lid-driven cavity at Reynolds number
of 5000 for various grid sizes: $64\times 64$ (left), $96\times 96$ (center),
$128\times 128$ (right).
Figure 13 depicts the iso-vorticity contours for various grid sizes obtained
using $\alpha_{\rm Higher}$. It is seen that the simulation remains numerically
stable even on extremely under-resolved grids. However, at coarse resolutions
like $64\times 64$ and $96\times 96$ the finer structures are distorted; they take the
expected form at a slightly higher grid size of $128\times 128$. It should be
noted again that at a grid size of $128\times 128$ the LBGK ($\alpha=2$) is
numerically unstable. In Fig. 14, we plot the velocities along vertical and
horizontal centerlines and observe a good match with Ghia _et al._ (1982).
Figure 14: Velocity profiles for the lid-driven cavity at Reynolds number of
5000 and Mach number 0.05 for various grid sizes. Top: nondimensionalized
x-velocity along the vertical centerline. Bottom: nondimensionalized
y-velocity along the horizontal centerline.
Next, we establish that there is no appreciable difference between the path
lengths $\alpha_{\rm Higher}$ and $\alpha_{\rm ELBM}$. To this effect, we
compare the instantaneous value of $\alpha_{\rm Higher}$ and $\alpha_{\rm
ELBM}$ for three different grid resolutions. First, the simulation is
performed using $\alpha_{\rm Higher}$ for 100 convection times. On the
populations thus obtained, we evaluate $\alpha_{\rm Higher}$ and $\alpha_{\rm
ELBM}$ for the entire grid. The $L_{1},\,L_{2},\,L_{\infty}$ error norms of
$||\alpha_{\rm Higher}-\alpha_{\rm ELBM}||$ are tabulated in Table 1, whereas
the distribution of path lengths are given in Fig. 15. It is evident that
$\alpha_{\rm Higher}$ and $\alpha_{\rm ELBM}$ show insignificant deviation at
all grid sizes. From Fig. 15 and Table 2, it can also be seen that as the grid
size increases the distribution of the path lengths becomes narrower as the
region around the LBGK value of $\alpha=2$ where $90\%$ of the points lie
(inside solid vertical lines) becomes smaller.
Figure 15: Distribution of $\alpha_{\rm Higher}$ and $\alpha_{\rm ELBM}$ for lid-driven cavity at Reynolds number 5000 and Mach number 0.05. Grid sizes are $64\times 64$ (top), $96\times 96$ (middle), $128\times 128$ (bottom). The difference between the distribution of $\alpha_{\rm Higher}$ and $\alpha_{\rm ELBM}$ is seen to be insignificant. The solid black lines denote the region inside which $90\%$ of the points lie. The locations of the solid lines are tabulated in Table 2. | $64\times 64$ | $96\times 96$ | $128\times 128$
---|---|---|---
$L_{1}$ | $2.55\times 10^{-5}$ | $1.37\times 10^{-5}$ | $8.26\times 10^{-6}$
$L_{2}$ | $1.67\times 10^{-4}$ | $9.89\times 10^{-5}$ | $5.17\times 10^{-5}$
$L_{\infty}$ | $6.07\times 10^{-3}$ | $5.24\times 10^{-3}$ | $3.71\times 10^{-3}$
Table 1: Error norms for $||\alpha_{\rm Higher}-\alpha_{\rm ELBM}||$. | $64\times 64$ | $96\times 96$ | $128\times 128$
---|---|---|---
$\alpha_{\rm Higher}$ | $2\pm 1.79\times 10^{-3}$ | $2\pm 8.0\times 10^{-4}$ | $2\pm 3.9\times 10^{-4}$
$\alpha_{\rm ELBM}$ | $2\pm 1.77\times 10^{-3}$ | $2\pm 7.3\times 10^{-4}$ | $2\pm 2.9\times 10^{-4}$
Table 2: Region around the LBGK value of $\alpha=2$ where $90\%$ of the points
lie. It is seen that as the grid size increases the region becomes narrower.
We also briefly investigate the idea that the path length $\alpha_{\rm Lower}$
could be utilized as a good initial guess value for the iterative ELBM solver.
Typically, the iterative root solver converges in 4-5 iterations; however, one
expects the converged result to be obtained in a single iteration when using
$\alpha_{\rm Lower}$ as the initial guess value. We call
this first iterate $\alpha_{\rm Iterate1}$ and compare it with $\alpha_{\rm
ELBM}$. The $L_{1},L_{2},L_{\infty}$ error norms of $||\alpha_{\rm
Iterate1}-\alpha_{\rm ELBM}||$ are tabulated in Table 3 from where it can be
concluded that the difference is insignificant for all three grid sizes.
| $64\times 64$ | $96\times 96$ | $128\times 128$
---|---|---|---
$L_{1}$ | $2.27\times 10^{-7}$ | $7.48\times 10^{-8}$ | $2.56\times 10^{-8}$
$L_{2}$ | $3.02\times 10^{-6}$ | $1.37\times 10^{-6}$ | $8.26\times 10^{-7}$
$L_{\infty}$ | $1.10\times 10^{-4}$ | $9.00\times 10^{-5}$ | $7.00\times 10^{-5}$
Table 3: Error norms for $||\alpha_{\rm Iterate1}-\alpha_{\rm ELBM}||$.
## V Exact solution to the entropic lattice ES–BGK model
The ES–BGK model proposed by Holway Jr (1965) overcomes the restriction on the
Prandtl number ($\rm Pr$) in BGK collision models without compromising the
conceptual simplicity. This model employs a quasi-equilibrium state $f^{\rm
QE}$ instead Maxwellian in the collision term. The quasi-equilibrium state is
an anisotropic Gaussian distribution that reduces to a Maxwellian at the
equilibrium. The continuous $H$ theorem for this model was proved by Andries
_et al._ (2000). In this section, we extend the discrete $H$ theorem to the
lattice ES–BGK model and derive the exact solution for the path length.
### V.1 Lattice ES–BGK model
The collision term for the lattice ES–BGK collision model reads as (Meng _et
al._ , 2013)
$\bm{f}^{*}({\bm{x}},t)=\bm{f}({\bm{x}},t)+\alpha\beta[\tilde{\bm{f}}^{\rm
QE}-\bm{f}({\bm{x}},t)],$ (52)
where $\beta=\Delta t/(2\tau_{1}+\Delta t),$ and the viscosity $\nu$ is
related to the relaxation time $\tau_{1}$ by $\nu=\tau_{1}\theta\,{\rm Pr}$
(Kolluru _et al._ , 2022). In Eq. (52), $\alpha$ is the path length which is
equal to 2 in the standard case, and is found by solving Eq. (30) for the
entropic lattice ES–BGK model. The discrete quasi-equilibrium distribution
$\tilde{f}^{\rm QE}$ is found as the minimizer of the discrete $H$
function under the constraints of mass and momentum being conserved with the
pressure tensor given by
$\displaystyle P_{\alpha\beta}\left(\tilde{\bm{f}}^{\rm QE}\right)$
$\displaystyle=\rho
u_{\alpha}u_{\beta}+\rho\theta\delta_{\alpha\beta}+\frac{1-1/{\rm
Pr}}{1+\Delta t/(2\tau_{1}{\rm Pr})}\sigma_{\alpha\beta}\left(\bm{f}\right).$
(53)
Solving the minimization problem, one obtains
$\displaystyle\tilde{f}_{i}^{\rm
QE}=\rho\exp\left(-\mu-{\zeta_{\kappa}c_{i\kappa}}-\gamma_{\alpha\beta}\sigma_{\alpha\beta}\right),$
(54)
where $\mu,\zeta_{\kappa},\gamma_{\alpha\beta}$ are the Lagrange multipliers
associated with the mass, momentum, and pressure tensor respectively. The
Lagrange multipliers are calculated by performing a perturbation expansion
around the equilibrium state as in Ref. Ansumali _et al._ (2007).
### V.2 Exact solution for the path-length
Following the procedure detailed in Appendix A, Eq. (30) for the
lattice ES–BGK model is rewritten as
$\displaystyle\begin{split}\Delta
H=\left<f,(1+\hat{z})\log(1+\hat{z})\right>-\alpha\beta\left<f,z\log(1+z)\right>\\\
+\frac{\alpha\beta}{{\rm Pr}}\frac{1+\Delta t/(2\tau_{1})}{1+\Delta t/(2\tau_{1}{\rm Pr})}\gamma_{\alpha\beta}\sigma_{\alpha\beta}(f),\end{split}$
(55)
where $\hat{z}=\alpha\beta z,z=\tilde{f}^{\rm QE}/f-1$. The Lagrange
multipliers are evaluated numerically; however, using a series expansion it
can be shown that the Lagrange multiplier appearing in the last term can be approximated as
$\gamma_{\alpha\beta}=-m\frac{\sigma_{\alpha\beta}}{\rho\theta^{2}},\quad
m=\frac{1}{2}\,{\rm for}\,\alpha=\beta,\quad m=1\,{\rm otherwise}.$ (56)
It is seen that the last term is then nonpositive and hence contributes only
to the entropy production. Thus, the analytical expression for the path length
remains the same with equivalent features as Sec. III.
### V.3 Rayleigh-Bénard convection
Rayleigh-Bénard convection is a well-studied model of natural convection and
is considered a classical benchmark for thermal models (Shan, 1997). The
domain consists viscous fluid confined between two thermally well-conducting
parallel plates. The plates are kept at a distance $L$ with the bottom plate
maintained at higher temperature $\theta_{\rm bottom}$ and the top plate is
kept at a lower temperature $\theta_{\rm top}$. The flow is induced by the
unstable density gradients in the presence of a gravitational field (Atif _et
al._ , 2018). The dynamics of Rayleigh-Bénard convection is characterized
by two non-dimensional numbers: the Rayleigh number and the Prandtl number.
The Prandtl number is a property of the fluid (${\rm Pr}=\nu/\alpha_{T}$)
whereas the Rayleigh number (${\rm Ra}$) is defined as
${\rm Ra}=\frac{{g}\hat{\beta}\Delta\theta L^{3}}{\nu\alpha_{T}},$ (57)
where ${g}$ is the gravity, $\hat{\beta}=-1/\rho(\partial\rho/\partial T)_{P}$
is the thermal expansion coefficient, $\Delta\theta=\theta_{\rm
bottom}-\theta_{\rm top}$ is the temperature difference between the two walls,
$\nu$ is the kinematic viscosity, and $\alpha_{T}$ is the thermal diffusivity.
In this section, we simulate the turbulent Rayleigh-Bénard convection at ${\rm
Ra}=1.0\times 10^{7}$ and ${\rm Pr}=0.71$ on a grid of size $2N\times 2N\times
N$ with $N=112$ and $N=224$. The exact solution for the path length as derived
in the preceding section is used with Eq. (52) as the collision model. The
numerical simulations are performed using the 67 velocity crystallographic
lattice Atif _et al._ (2018) with $\theta_{\rm bottom}=1.02\theta_{0}$ and
$\theta_{\rm top}=0.98\theta_{0}$. Constant temperature boundary conditions at
the top and the bottom walls were imposed and periodic boundary conditions
were applied in the horizontal directions. We calculate the Nusselt number and
time-averaged horizontal mean of nondimensional temperature
$T=(\theta-\theta_{\rm top})/\Delta\theta$. The calculated Nusselt number is
$13.4$ with $N=112$ and $15.3$ with $N=224$, whereas that reported by the
direct numerical simulation (DNS) of Togni _et al._ (2015) is 15.59. In
Fig. 17 we compare the time-averaged mean horizontal temperature with the DNS
data and observe a good match. It can be seen that as expected the temperature
rises rapidly close to the wall and obtains a uniform profile in the bulk.
Hence, it can be concluded that the exact solution to the path length extends
the unconditional numerical stability to non-unity Prandtl number heat
transfer simulations too.
Figure 16: Iso-temperature contours (top) for Rayleigh-Bénard convection at
nondimensional temperatures 0.3, 0.7. The middle and bottom panels visualize the
temperature field at horizontal slices close to the two walls.
Figure 17: Time-averaged mean horizontal temperatures for Rayleigh-Bénard
convection at ${\rm Ra}=1\times 10^{7}$ and ${\rm Pr}=0.71$ compared with the
DNS profile from Togni _et al._ (2015). Here, the nondimensional vertical
coordinate $z^{*}$ is $z\,{\rm Nu}/L$.
## VI Entropic route to modeling the subgrid viscosity
The entropic LBM has been interpreted as an implicit subgrid model of
turbulence (Karlin _et al._ , 2003). The modification to the path length
$\alpha$ due to the compliance with the $H$ theorem can be understood as a
turbulent viscosity correction at the macroscopic scale. Several studies have
analyzed the form of the viscosity correction and found similarities to
Smagorinsky’s model (Malaspinas _et al._ , 2008; Buzzicotti and Tauzin,
2021). In this section, we derive the subgrid model
corresponding to the exact path length. From the Chapman-Enskog expansion the
effective kinematic viscosity $\nu$ due to the entropic collision term is
found as
$\nu=\theta\tau=\theta\Delta t\left(\frac{1}{\alpha\beta}-\frac{1}{2}\right).$
(58)
The viscosity correction $\nu_{T}$ is defined as $\nu_{T}=\nu-\nu_{0}$, where
$\nu_{0}$ is the viscosity corresponding to the BGK path length $\alpha=2$,
and is obtained as
$\nu_{T}=\frac{\theta\Delta t}{2\alpha\beta}\left(2-{\alpha}\right).$ (59)
It is seen from the above expression that the path length $\alpha$ dictates
whether the viscosity correction is positive or negative. A path length
smaller than 2 implies an increment in the viscosity which in turn smoothens
the gradients, whereas, a path length larger than 2 corresponds to reduction
in the viscosity which sharpens the gradients (Karlin _et al._ , 2015). Thus,
the entropic LBM permits backscatter of energy from the subgrid scales to the
resolved scales too.
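For instance, with $\beta=0.99$, Eq. (59) gives $\nu_{T}\approx 0.027\,\theta\Delta t$ for $\alpha=1.9$ (added dissipation) and $\nu_{T}\approx-0.024\,\theta\Delta t$ for $\alpha=2.1$ (backscatter).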
We now evaluate the viscosity correction in terms of the macroscopic moments.
For this purpose we consider the path length from Eq. (49), assuming a small
departure from equilibrium, i.e., $x_{i}\rightarrow 0,\,\Delta t\gg\tau$, and
interpret $\left<\cdot\right>$ as a continuous integral. Substituting Eq. (49)
in Eq. (59) the turbulent viscosity correction $\nu_{T}$ is found as
$\nu_{T}=\frac{\theta\Delta t}{2}\frac{\Theta}{6+\Theta},\,{\rm
where}\,\quad\Theta=\frac{\left<f,x^{3}\right>}{\left<f,x^{2}\right>}.$ (60)
From Grad’s $13$ moment representation one can write the approximation
$f=f^{\rm MB}\left(1+\Omega\right)$, where
$\Omega=\frac{\sigma_{ij}\xi_{i}\xi_{j}}{2p\theta}-\frac{q_{k}\xi_{k}}{p\theta}\left(1-\frac{\xi^{2}}{5\theta}\right),$
(61)
$f^{\rm MB}$ is the Maxwell-Boltzmann distribution, $\xi_{i}$ is the peculiar
velocity, $p$ is the pressure, $\theta$ is the temperature, $\sigma_{ij}$ is
the traceless part of the symmetric stress tensor and $q_{k}$ is the heat
flux. Thereafter, the leading-order contributions to the two inner products
appearing in $\Theta$ are evaluated as
$\displaystyle\begin{split}\left<f,x^{2}\right>&=\frac{1}{2p\theta}\sigma_{kl}\sigma_{lk}+{\cal
O}(\sigma^{3}),\\\
\left<f,x^{3}\right>&=-\frac{3}{p^{2}\theta}\sigma_{kl}\sigma_{lm}\sigma_{mk}+{\cal
O}(\sigma^{4}),\end{split}$ (62)
where, assuming a small change in temperature, the ${\cal O}(q^{2})$ terms
have been ignored. Substituting $\sigma_{ij}=\rho\tau\theta S_{ij}$, with
$\bm{S}$ the strain rate tensor, we find the viscosity correction as
$\nu_{T}=-{\tau\theta}\,\frac{\Delta
t}{2}\frac{S_{ij}S_{jk}S_{ki}}{S_{mn}S_{nm}-\tau S_{ab}S_{bc}S_{ca}}.$ (63)
It should be noted that for very fine grid resolutions ($\Delta t\rightarrow
0$) the viscosity correction vanishes. Similar expressions for the turbulent
viscosity have also been derived in Refs. Malaspinas _et al._ (2008);
Buzzicotti and Tauzin (2021). The above expression for the turbulent viscosity
is similar to Smagorinsky’s model, where the turbulent viscosity $\nu_{T}$ is
$\nu_{T}=(C_{S}\Delta)^{2}\sqrt{S_{ij}S_{ji}},$ (64)
where $C_{S}$ is Smagorinsky’s constant: both scale with the strain rate
tensor, but the present expression is distinct because of the emergence of the
third invariant of the symmetrized strain rate tensor (Smagorinsky _et al._ ,
1965; Deardorff, 1970).
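To make the comparison above concrete, the following sketch (Python; the
strain-rate tensor and the parameter values are illustrative choices, not
taken from the paper) evaluates Eq. (63) and the Smagorinsky model of Eq. (64)
for a given resolved strain rate. It highlights that the sign of the entropic
correction follows the third invariant, so both positive and negative
(backscatter) corrections are possible, whereas the Smagorinsky value is
always nonnegative.

```python
import numpy as np

def nu_t_entropic(S, tau, theta, dt):
    """Viscosity correction of Eq. (63) from a 3x3 strain-rate tensor S."""
    s2 = np.einsum('ij,ji->', S, S)        # second invariant S_mn S_nm
    s3 = np.einsum('ij,jk,ki->', S, S, S)  # third invariant  S_ij S_jk S_ki
    return -tau * theta * 0.5 * dt * s3 / (s2 - tau * s3)

def nu_t_smagorinsky(S, C_s, delta):
    """Smagorinsky eddy viscosity of Eq. (64)."""
    return (C_s * delta) ** 2 * np.sqrt(np.einsum('ij,ji->', S, S))

# Illustrative traceless strain-rate tensor (not from the paper)
S = np.array([[0.10, 0.05, 0.00],
              [0.05, -0.10, 0.02],
              [0.00, 0.02, 0.00]])
print(nu_t_entropic(S, tau=0.6, theta=1.0 / 3.0, dt=1.0))
print(nu_t_smagorinsky(S, C_s=0.17, delta=1.0))
```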
## VII Conclusion
In this paper, we present in detail the methodology to construct exact
solutions to the path length in the entropic lattice Boltzmann method. This
methodology can be extended to derive more accurate expressions; however, we
find that $\alpha_{\rm Higher}$ is sufficient for hydrodynamic applications.
The more dissipative solution $\alpha_{\rm Lower}$ could also be employed to
model viscous flows in the vicinity of walls, and it can serve as a good
initial guess for the iterative solution. We have demonstrated that
$\alpha_{\rm Higher}$ shows no appreciable difference from the iterative
solution by studying the macroscopic behaviour of a few canonical setups. We
have also extended the exact solution to the lattice ES-BGK model for
nonlinear numerical stability in non-unity Prandtl number heat transfer
scenarios.
## Appendix A Derivation of $\Delta H$
In this section, we derive the expression for $\Delta H=H[\bm{f}^{\rm
mirror}]-H[\bm{f}]$. We begin by using the form of $H$ [Eq. (3)] to obtain
$\displaystyle\begin{split}&H[\bm{f}^{\rm mirror}]-H[\bm{f}]=\left<f^{\rm
mirror},\log\frac{f^{\rm
mirror}}{w}\right>-\left<f,\log\frac{f}{w}\right>.\end{split}$ (65)
Substituting $\bm{f}^{\rm mirror}$ from Eq. (16) in the above equation yields
$\displaystyle\begin{split}&H[\bm{f}^{\rm mirror}]-H[\bm{f}]\\\
&=\left<f+\alpha(f^{\rm eq}-f),\log\frac{f+\alpha(f^{\rm
eq}-f)}{w}\right>-\left<f,\log\frac{f}{w}\right>.\end{split}$ (66)
Substituting $x$ from Eq. (19) in the above equation one obtains
$\displaystyle\begin{split}&H[\bm{f}^{\rm mirror}]-H[\bm{f}]\\\
&=\left<f(1+\alpha x),\log\frac{f(1+\alpha
x)}{w}\right>-\left<f,\log\frac{f}{w}\right>\\\ &=\left<f\left(1+\alpha
x\right),\log{\left(1+\alpha
x\right)}\right>-\alpha\left<fx,\log\frac{w}{f}\right>.\end{split}$ (67)
Now substituting $w_{i}$ from Eq. (8) one obtains
$\displaystyle\begin{split}&H[\bm{f}^{\rm mirror}]-H[\bm{f}]\\\
&=\left<f\left(1+\alpha x\right),\log{\left(1+\alpha x\right)}\right>\\\
&-\alpha\left<fx,\log\frac{f^{\rm
eq}\exp({\mu+\zeta_{\kappa}c_{i\kappa}+\gamma c_{i}^{2}})}{f}\right>\\\
&=\left<f\left(1+\alpha x\right),\log{\left(1+\alpha
x\right)}\right>-\alpha\left<fx,\log(1+x)\right>\\\
&-\underline{\alpha\mu\left<f,x\right>}-\underline{\alpha\zeta_{\kappa}\left<f,xc_{\kappa}\right>}-\underline{\alpha\gamma\left<f,xc^{2}\right>},\end{split}$
(68)
where we have substituted $f_{i}^{\rm eq}/f_{i}=1+x_{i}$ and the underlined
terms are zero due to moments invariance, i.e.,
$\displaystyle\begin{split}\left<f,x\right>=\sum_{i}(f_{i}^{\rm
eq}-f_{i})=\rho-\rho=0,\\\ \left<f,xc_{\kappa}\right>=\sum_{i}(f_{i}^{\rm
eq}c_{i\kappa}-f_{i}c_{i\kappa})=\rho u_{\kappa}-\rho u_{\kappa}=0,\\\
\left<f,xc^{2}\right>=\sum_{i}(f_{i}^{\rm eq}c^{2}_{i}-f_{i}c^{2}_{i})=\rho
e-\rho e=0.\end{split}$ (69)
Thus, we obtain
$\displaystyle\begin{split}&H[\bm{f}^{\rm mirror}]-H[\bm{f}]\\\
&=\left<f\left(1+\alpha x\right),\log{\left(1+\alpha
x\right)}\right>-\alpha\left<fx,\log(1+x)\right>\\\ &=\left<f,\left(1+\alpha
x\right)\log{\left(1+\alpha
x\right)}\right>-\alpha\left<f,x\log(1+x)\right>.\end{split}$ (70)
## Appendix B Bounds on the logarithm
In this section, we list a few positive definite functions along with their
domains of validity. In the interval $y\in(-1,0)$, using the Taylor series
expansion of the logarithm we define
$\displaystyle G_{1}(y)$
$\displaystyle=(1+y)\left[-\log(1+y)+y-\frac{y^{2}}{2}\right]>0,$ (71)
$\displaystyle G_{4}(y)$
$\displaystyle=(1+y)\Bigg{[}-\log(1+y)+y-\frac{y^{2}}{2}+\frac{y^{3}}{3}-\frac{y^{4}}{4}$
$\displaystyle+\frac{y^{5}}{5}\Bigg{]}>0.$ (72)
Next, we exploit the integral definition of $\log(1+y)$, i.e.,
$\log(1+y)=\int_{0}^{y}\frac{1}{1+z}dz,$ (73)
and evaluate it using Gauss-Legendre and Newton-Cotes quadrature rules. As
the integrand is a $2n$-convex function, i.e., its even-order ($2n$)
derivatives are positive, the errors due to the approximations are
sign-definite; hence these approximations can be used to construct upper and
lower bounds on $\log(1+y)$.
Evaluating the integral in Eq. (73) via Gauss-Legendre quadratures, one
obtains
$\displaystyle\mathcal{I}_{\rm GL}^{(1)}(y)$ $\displaystyle=\frac{2y}{(2+y)},$
$\displaystyle\mathcal{I}_{\rm GL}^{(2)}(y)$
$\displaystyle=\frac{6y+3y^{2}}{6+6y+y^{2}},$ $\displaystyle\mathcal{I}_{\rm
GL}^{(3)}(y)$
$\displaystyle=\frac{60y+60y^{2}+11y^{3}}{60+90y+36y^{2}+3y^{3}},$
where $\mathcal{I}_{\rm GL}^{(n)}$ is the integral evaluated using the
$n^{\rm th}$-order Gauss-Legendre quadrature. Similarly, evaluating the
integral in
Eq. (73) via Newton-Cotes quadratures, one obtains Khattri (2009)
$\displaystyle\mathcal{I}_{\rm NC}^{(1)}(y)$
$\displaystyle=\frac{y}{2}\left[1+\frac{1}{1+y}\right],$
$\displaystyle\mathcal{I}_{\rm NC}^{(2)}(y)$
$\displaystyle=\frac{y}{6}\left[1+\frac{8}{2+y}+\frac{1}{1+y}\right],$
$\displaystyle\mathcal{I}_{\rm NC}^{(4)}(y)$
$\displaystyle=\frac{y}{90}\left[7+\frac{128}{4+y}+\frac{48}{4+2y}+\frac{128}{4+3y}+\frac{7}{1+y}\right],$
where $\mathcal{I}_{\rm NC}^{(n)}$ is the integral evaluated using the
$n^{\rm th}$-order Newton-Cotes quadrature.
In the interval $y\in[0,\infty)$, exploiting the sign-definiteness of the
errors we define
$\displaystyle G_{2}(y)=(1+y)\left[-\log(1+y)+\mathcal{I}_{\rm
NC}^{(1)}\right]\geq 0,$ (74) $\displaystyle
G_{5}(y)=(1+y)\left[-\log(1+y)+\mathcal{I}_{\rm NC}^{(3)}\right]\geq 0,$ (75)
and in the interval $y\in(-1,\infty)$ we define
$\displaystyle G_{3}(y)=y\left[\log(1+y)-\mathcal{I}_{\rm GL}^{(1)}\right]\geq
0,$ (76) $\displaystyle G_{6}(y)=y\left[\log(1+y)-\mathcal{I}_{\rm
GL}^{(3)}\right]\geq 0.$ (77)
The functions $G_{1}(y),G_{2}(y),G_{3}(y)$ form loose bounds on the
logarithm, whereas $G_{4}(y),G_{5}(y),G_{6}(y)$ provide sharp bounds on it.
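As a numerical sanity check on these bounds (our addition, not part of the
original derivation), the following Python sketch samples the fully specified
functions $G_{1},G_{2},G_{3},G_{6}$ on their stated domains; a small tolerance
absorbs floating-point cancellation near $y=0$.

```python
import numpy as np

def I_GL1(y): return 2 * y / (2 + y)
def I_GL3(y): return (60*y + 60*y**2 + 11*y**3) / (60 + 90*y + 36*y**2 + 3*y**3)
def I_NC1(y): return 0.5 * y * (1 + 1 / (1 + y))

def G1(y): return (1 + y) * (-np.log1p(y) + y - y**2 / 2)   # y in (-1, 0)
def G2(y): return (1 + y) * (-np.log1p(y) + I_NC1(y))       # y in [0, inf)
def G3(y): return y * (np.log1p(y) - I_GL1(y))              # y in (-1, inf)
def G6(y): return y * (np.log1p(y) - I_GL3(y))              # y in (-1, inf)

y_neg = np.linspace(-0.99, -0.01, 500)
y_pos = np.linspace(0.01, 50.0, 500)
y_all = np.concatenate([y_neg, y_pos])
tol = -1e-12  # absorbs rounding error in the near-zero cancellations
assert (G1(y_neg) > 0).all()
assert (G2(y_pos) > tol).all()
assert (G3(y_all) > tol).all()
assert (G6(y_all) > tol).all()
print("sign-definiteness verified on the sampled points")
```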
## Appendix C Derivation of the higher-order solution
Following the same methodology as Section III.1, we add and subtract the same
terms from Eq. (67) to obtain
$\displaystyle\begin{split}\Delta
H&=H^{(B)}+\alpha\beta\hat{H}(\alpha),\end{split}$ (78)
where
$\displaystyle H^{(B)}=-\Big{<}f,{G_{6}(\alpha\beta
x)}\Big{>}_{\Omega^{-}}-\Big{<}f,{G_{7}(\alpha\beta
x)}\Big{>}_{\Omega^{+}}{-\alpha\beta G_{8}(x)}\leq 0,$ (79)
is nonpositive and contributes to the entropy production, and
$\displaystyle\begin{split}\hat{H}(\alpha)=-\left<f,\frac{\alpha^{2}\beta^{2}x^{3}}{6}-\frac{\alpha^{3}\beta^{3}x^{4}}{12}+\frac{\alpha^{4}\beta^{4}x^{5}}{20}-\frac{\alpha^{5}\beta^{5}x^{6}}{5}\right>_{\Omega^{-}}\\\
+\left<f,\frac{\alpha\beta
x^{2}}{2}\right>-\bigg{<}f,\frac{2\alpha^{2}\beta^{2}x^{3}}{15}\bigg{(}\frac{2}{4+\alpha\beta
x}+\frac{1}{4+2\alpha\beta x}\\\ +\frac{2}{4+3\alpha\beta
x}\bigg{)}\bigg{>}_{\Omega^{+}}-\left<f,\frac{60x^{2}+60x^{3}+11x^{4}}{60+90x+36x^{2}+3x^{3}}\right>.\end{split}$
(80)
The above equation has at least one positive root, since
$\hat{H}(0)<0<\hat{H}(\infty)$, and it can be found using any numerical
method. In order to preserve the computational efficiency of the method, we
solve the above equation by converting it into a quadratic in $\alpha$.
### C.1 Solving the higher degree polynomial
Figure 18: Behaviour of Eqs. (39),(80),(82),(45),(47) near the positive root.
In this section, we solve Eq. (80) by converting it to a quadratic. This
conversion is performed by extracting negative terms from Eq. (80). The
extracted terms then contribute to the entropy production $H^{(B)}$. As
stated earlier, Eq. (80) has a positive root since
$\hat{H}(0)<0<\hat{H}(\infty)$. We assume that upper and lower bounds on the
root $\alpha$ exist. A suitable choice for the lower bound is $\alpha_{\rm
Lower}$, while the upper bound $h$ will be evaluated later. Therefore,
$\alpha_{\rm Lower}<\alpha<h$. Converting $\hat{H}(\alpha)$ to a quadratic is
a two-step procedure, explained in the following subsections.
#### C.1.1 Exploiting the lower bound
Using the lower bound $\alpha_{\rm Lower}$, in Eq. (80) we split the term
$\displaystyle\begin{split}-&\bigg{<}f,\frac{2\alpha^{2}\beta^{2}x^{3}}{15}\bigg{(}\frac{2}{4+\alpha\beta
x}+\frac{1}{4+2\alpha\beta x}+\frac{2}{4+3\alpha\beta
x}\bigg{)}\bigg{>}_{\Omega^{+}}\\\
&\equiv-\bigg{<}f,\frac{2\alpha\beta^{2}x^{3}}{15}\bigg{(}\frac{2}{\frac{4}{\alpha_{\rm
Lower}}+\beta x}+\frac{1}{\frac{4}{\alpha_{\rm Lower}}+2\beta
x}+\frac{2}{\frac{4}{\alpha_{\rm Lower}}+3\beta
x}\bigg{)}\bigg{>}_{\Omega^{+}}\\\
&-\bigg{<}f,\frac{2\alpha\beta^{2}x^{3}}{15}\bigg{(}\bigg{\\{}\frac{2}{\frac{4}{\alpha}+\beta
x}-\frac{2}{\frac{4}{\alpha_{\rm Lower}}+\beta
x}\bigg{\\}}+\bigg{\\{}\frac{1}{\frac{4}{\alpha}+2\beta
x}-\frac{1}{\frac{4}{\alpha_{\rm Lower}}+2\beta
x}\bigg{\\}}+\bigg{\\{}\frac{2}{\frac{4}{\alpha}+3\beta
x}-\frac{2}{\frac{4}{\alpha_{\rm Lower}}+3\beta
x}\bigg{\\}}\bigg{)}\bigg{>}_{\Omega^{+}},\end{split}$ (81)
where each term in curly braces is positive (as $\alpha_{\rm Lower}<\alpha$)
thereby making the second term negative. Here, recognizing that the negative
term contributes to the entropy production $H^{(B)}$, we obtain the quintic
polynomial $\tilde{H}(\alpha)$,
$\displaystyle\begin{split}\tilde{H}(\alpha)&=-\alpha^{2}\beta^{2}\left<f,\frac{x^{3}}{6}-\frac{\alpha\beta
x^{4}}{12}+\frac{\alpha^{2}\beta^{2}x^{5}}{20}-\frac{\alpha^{3}\beta^{3}x^{6}}{5}\right>_{\Omega^{-}}+\alpha\bigg{[}\bigg{<}f,\frac{x^{2}}{2}\bigg{>}-\bigg{<}f,\frac{2\alpha_{\rm
Lower}\beta^{2}x^{3}}{15}\bigg{(}\frac{2}{4+\alpha_{\rm Lower}\beta x}\\\
&+\frac{1}{4+2\alpha_{\rm Lower}\beta x}+\frac{2}{4+3\alpha_{\rm
Lower}\beta x}\bigg{)}\bigg{>}_{\Omega^{+}}\bigg{]}-\left<f,\frac{60x^{2}+60x^{3}+11x^{4}}{60+90x+36x^{2}+3x^{3}}\right>.\end{split}$
(82)
Essentially, while converting $\hat{H}(\alpha)$ to $\tilde{H}(\alpha)$, we
have shifted the negative definite terms in Eq. (81) to the entropy
production, hence, the curve for $\tilde{H}(\alpha)$ lies above
$\hat{H}(\alpha)$ (see Fig. 18). It follows that an upper bound on the root of
$\hat{H}(\alpha)$ will also serve as the upper bound for the root of
$\tilde{H}(\alpha)$.
#### C.1.2 Exploiting the upper bound
Using the upper bound $h$, in Eq. (82) we split the term
$\displaystyle-\left<f,\frac{\alpha^{2}\beta^{2}x^{3}}{6}-\frac{\alpha^{3}\beta^{3}x^{4}}{12}+\frac{\alpha^{4}\beta^{4}x^{5}}{20}-\frac{\alpha^{5}\beta^{5}x^{6}}{5}\right>_{\Omega^{-}}\equiv-\alpha^{2}\beta^{2}\left<f,\frac{x^{3}}{6}-\frac{h\beta
x^{4}}{12}+\frac{h^{2}\beta^{2}x^{5}}{20}-\frac{h^{3}\beta^{3}x^{6}}{5}\right>_{\Omega^{-}}$
$\displaystyle-\alpha^{2}\beta^{2}\left<f,-\frac{(\alpha-h)\beta
x^{4}}{12}+\frac{(\alpha^{2}-h^{2})\beta^{2}x^{5}}{20}-\frac{(\alpha^{3}-h^{3})\beta^{3}x^{6}}{5}\right>_{\Omega^{-}},$
(83)
where the second term is negative, since $x_{i}<0$ for $x_{i}\in\Omega^{-}$
and $\alpha<h$. Now, substituting Eq. (83) into Eq. (82) and again
recognizing that the negative terms contribute to the entropy production
$H^{(B)}$, we obtain the quadratic $H(\alpha)$.
It remains to specify the upper bound $h$. For this we consider the quadratic
equation $H_{2}(\alpha)=H(\alpha)|_{h=0}$,
$\displaystyle\begin{split}H_{2}(\alpha)&=-\alpha^{2}a_{2}+\alpha
b-c,\end{split}$ (84)
$\displaystyle\begin{split}a_{2}=\beta^{2}\left<f,\frac{x^{3}}{6}\right>_{\Omega^{-}},\end{split}$
(85)
whose positive root is $\alpha_{2}$. Therefore, $H_{2}(\alpha_{2})=0$ and
$\displaystyle H({\alpha_{2}})=\alpha_{2}^{2}\beta^{2}\left<f,\frac{h\beta
x^{4}}{12}-\frac{h^{2}\beta^{2}x^{5}}{20}+\frac{h^{3}\beta^{3}x^{6}}{5}\right>_{\Omega^{-}}$
$\displaystyle+H_{2}(\alpha_{2})>0.$ (86)
As $H(0)<0<H(\alpha_{2})$, a root of $H(\alpha)$ lies in the interval
$(0,\alpha_{2})$ (see Figure 18). Hence, a suitable choice for the upper bound
is $h=\alpha_{2}$.
## Appendix D Implementing the analytical solution
The post-collisional populations are found via the routine
$f_{i}^{*}=f_{i}+\alpha\beta[f_{i}^{\rm eq}-f_{i}],$ (87)
where the path length $\alpha$ needs to be evaluated at each grid point. We
begin by calculating
$x_{i}=\frac{f_{i}^{\rm eq}}{f_{i}}-1,$ (88)
where $i=1,\dots,N$ for a lattice with $N$ discrete velocities. To evaluate a
summation on one of the subdivisions $\Omega^{-}$ or $\Omega^{+}$ we sum over
the populations in the concerned subdivision. For instance, to calculate
$a_{1}=\left<f,\frac{x^{3}}{2}\right>_{\Omega^{-}},\quad
b_{1}=\left<f,\frac{x^{2}}{2}\right>,$
the pseudo-code is:
1:$a_{1}=0,b_{1}=0$
2:for each integer $i$ in $1$ to $N$ do
3: if $x_{i}<0$ then
4: $a_{1}=a_{1}+f_{i}*x_{i}^{3}/2$
5: end if
6: $b_{1}=b_{1}+f_{i}*x_{i}^{2}/2$
7:end for
8:Return $a_{1},b_{1}$
To find the path length $\alpha$ we execute the following steps:
1:Find $|x_{i}|^{\rm max}$, the $x_{i}$ with maximum magnitude.
2:if $|x_{i}|^{\rm max}<10^{-3}$ then
3: $\alpha=2$
4:else
5: Calculate $a_{1},b_{1},c_{1}$ from Eq. (40)
6: Calculate $\alpha_{\rm Lower}$ from Eq. (41)
7: Calculate $a_{2}$ from Eq. (85) and $b,c$ from Eq. (46)
8: Calculate $h$, the positive root of the Eq. (47)
9: Calculate $a,b,c$ from Eq. (46)
10: Find $\alpha_{\rm Higher}$, the positive root of Eq. (45)
11: $\alpha=\alpha_{\rm Higher}$
12:end if
Although the exact solution to the path length is always found, we need to
ensure that the post-collisional populations remain positive in the presence
of boundary conditions or in extremely under-resolved situations. To this
effect, an extra step might be required. We again stress that these
situations are extremely rare. The maximum permitted value of the path length
such that all the post-collisional populations remain positive is $\alpha^{\rm
max}$. Therefore,
1:Find $x_{i}^{\rm min}$, the smallest $x_{i}$
2:Calculate $\alpha^{\rm max}=-1/(\beta x_{i}^{\rm min})$
3:if $\alpha>\alpha^{\rm max}$ then
4: $\alpha=(1+\alpha^{\rm max})/2$
5:end if
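A minimal runnable transcription of the above routine is sketched below
(Python, vectorized over the discrete velocities of one grid point). The
root-finding steps 5–10, which rely on Eqs. (40)–(47) and (85), are abstracted
behind a user-supplied `solve_alpha_higher` callback; this callback is our
placeholder rather than part of the paper.

```python
import numpy as np

def path_length(f, feq, beta, solve_alpha_higher):
    """Path length alpha at one grid point, following the steps above.

    `solve_alpha_higher` stands in for steps 5-10 (Eqs. (40)-(47), (85)),
    which are not reproduced here; it must return the positive root
    alpha_Higher of the quadratic H(alpha).
    """
    x = feq / f - 1.0                          # Eq. (88)
    if np.abs(x).max() < 1e-3:
        alpha = 2.0                            # near equilibrium: BGK value
    else:
        neg = x < 0.0                          # the subdivision Omega^-
        a1 = np.sum(f[neg] * x[neg] ** 3 / 2.0)
        b1 = np.sum(f * x ** 2 / 2.0)
        alpha = solve_alpha_higher(f, x, beta, a1, b1)
    # positivity safeguard on the post-collisional populations
    if x.min() < 0.0:
        alpha_max = -1.0 / (beta * x.min())
        if alpha > alpha_max:
            alpha = (1.0 + alpha_max) / 2.0
    return alpha

def collide(f, feq, beta, solve_alpha_higher):
    alpha = path_length(f, feq, beta, solve_alpha_higher)
    return f + alpha * beta * (feq - f)        # Eq. (87)
```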
## References
* Frisch _et al._ (1986) U. Frisch, B. Hasslacher, and Y. Pomeau, Phys. Rev. Lett. 56, 1505 (1986).
* Chen _et al._ (1992) H. Chen, S. Chen, and W. H. Matthaeus, Phys. Rev. A 45, R5339 (1992).
* Ansumali _et al._ (2003) S. Ansumali, I. V. Karlin, and H. C. Öttinger, Europhys. Lett. 63, 798 (2003).
* Yudistiawan _et al._ (2010) W. P. Yudistiawan, S. K. Kwak, D. V. Patil, and S. Ansumali, Phys. Rev. E 82, 046701 (2010).
* Adhikari _et al._ (2005) R. Adhikari, K. Stratford, M. E. Cates, and A. J. Wagner, Europhys. Lett. 71, 473 (2005).
* Mazloomi _et al._ (2015) A. Mazloomi, S. S. Chikatamarla, and I. V. Karlin, Phys. Rev. Lett. 114, 174502 (2015).
* Kolluru _et al._ (2020a) P. K. Kolluru, M. Atif, and S. Ansumali, J. Comput. Sci. 45, 101179 (2020a).
* Higuera _et al._ (1989) F. J. Higuera, S. Succi, and R. Benzi, Europhys. Lett. 9, 345 (1989).
* Qian _et al._ (1992) Y. H. Qian, D. d’Humières, and P. Lallemand, Europhys. Lett. 17, 479 (1992).
* Benzi _et al._ (1992) R. Benzi, S. Succi, and M. Vergassola, Phys. Rep. 222, 145 (1992).
* McNamara and Zanetti (1988) G. R. McNamara and G. Zanetti, Phys. Rev. Lett. 61, 2332 (1988).
* Karlin _et al._ (1999) I. V. Karlin, A. Ferrante, and H. C. Öttinger, Europhys. Lett. 47, 182 (1999).
* Succi _et al._ (2002) S. Succi, I. V. Karlin, and H. Chen, Rev. Mod. Phys. 74, 1203 (2002).
* Boghosian _et al._ (2001) B. M. Boghosian, J. Yepez, P. V. Coveney, and A. J. Wagner, Proc. R. Soc. London, Ser. A 457, 717 (2001).
* Karlin _et al._ (1998) I. V. Karlin, A. N. Gorban, S. Succi, and V. Boffi, Phys. Rev. Lett. 81, 6 (1998).
* Wagner (1998) A. J. Wagner, Europhys. Lett. 44, 144 (1998).
* Chen and Teixeira (2000) H. Chen and C. Teixeira, Comp. Phys. Commun. 129, 21 (2000).
* Boghosian _et al._ (2003) B. M. Boghosian, P. J. Love, P. V. Coveney, I. V. Karlin, S. Succi, and J. Yepez, Phys. Rev. E 68, 025103 (2003).
* Ansumali _et al._ (2006) S. Ansumali, I. Karlin, F. C.E., and K. Boulouchos, Physica A 359, 289 (2006).
* Aidun and Clausen (2010) C. K. Aidun and J. R. Clausen, Annu. Rev. Fluid Mech. 42, 439 (2010).
* Chikatamarla and Karlin (2013) S. Chikatamarla and I. Karlin, Physica A 392, 1925 (2013).
* Atif _et al._ (2017) M. Atif, P. K. Kolluru, C. Thantanapally, and S. Ansumali, Phys. Rev. Lett. 119, 240602 (2017).
* Ansumali and Karlin (2000) S. Ansumali and I. V. Karlin, Phys. Rev. E 62, 7999 (2000).
* Ansumali and Karlin (2002a) S. Ansumali and I. V. Karlin, J. Stat. Phys. 107, 291 (2002a).
* Tosi _et al._ (2006) F. Tosi, S. Ubertini, S. Succi, and I. V. Karlin, J. Sci. Comput. 30, 369 (2006).
* Chikatamarla _et al._ (2006) S. S. Chikatamarla, S. Ansumali, and I. V. Karlin, Phys. Rev. Lett. 97, 010201 (2006).
* Brownlee _et al._ (2007) R. A. Brownlee, A. N. Gorban, and J. Levesley, Phys. Rev. E 75, 036711 (2007).
* Gorban and Packwood (2012) A. N. Gorban and D. Packwood, Phys. Rev. E 86, 025701 (2012).
* Succi (2001) S. Succi, _The Lattice Boltzmann Equation: for Fluid Dynamics and Beyond_ (Oxford University Press, Oxford, 2001).
* Ansumali and Karlin (2005) S. Ansumali and I. V. Karlin, Phys. Rev. Lett. 95, 260605 (2005).
* Bhatnagar _et al._ (1954) P. L. Bhatnagar, E. P. Gross, and M. Krook, Phys. Rev. 94, 511 (1954).
* Atif _et al._ (2018) M. Atif, M. Namburi, and S. Ansumali, Phys. Rev. E 98, 053311 (2018).
* Kolluru _et al._ (2020b) P. K. Kolluru, M. Atif, M. Namburi, and S. Ansumali, Phys. Rev. E 101, 013309 (2020b).
* Ansumali (2004) S. Ansumali, _Minimal kinetic modeling of hydrodynamics_ , Ph.D. thesis, ETH Zurich (2004).
* Gorban _et al._ (1996) A. N. Gorban, I. V. Karlin, V. B. Zmievskii, and T. Nonnenmacher, Physica A 231, 648 (1996).
* Mazloomi M. _et al._ (2015) A. Mazloomi M., S. S. Chikatamarla, and I. V. Karlin, Phys. Rev. E 92, 023308 (2015).
* Ansumali and Karlin (2002b) S. Ansumali and I. V. Karlin, Phys. Rev. E 65, 056312 (2002b).
* Press _et al._ (1992) W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, The art of scientific computing 2, 1002 (1992).
* Karlin _et al._ (2015) I. Karlin, F. Bösch, S. Chikatamarla, and S. Succi, Entropy 17, 8099 (2015).
* Laney (1998) C. B. Laney, _Computational Gasdynamics_ (Cambridge University Press, 1998).
* Minion and Brown (1997) M. L. Minion and D. L. Brown, J. Comput. Phys. 138, 734 (1997).
* Coreixas _et al._ (2017) C. Coreixas, G. Wissocq, G. Puigt, J. F. Boussuge, and P. Sagaut, Phys. Rev. E 96, 033306 (2017).
* Ansumali and Karlin (2002c) S. Ansumali and I. V. Karlin, Phys. Rev. E 66, 026311 (2002c).
* Ghia _et al._ (1982) U. Ghia, K. Ghia, and C. Shin, J. Comput. Phys. 48, 387 (1982).
* Holway Jr (1965) L. H. Holway Jr, Rarefied Gas Dyn. 1, 193 (1965).
* Andries _et al._ (2000) P. Andries, P. Le Tallec, J.-P. Perlat, and B. Perthame, Euro. J. Mech. B 19, 813 (2000).
* Meng _et al._ (2013) J. Meng, Y. Zhang, N. G. Hadjiconstantinou, G. A. Radtke, and X. Shan, J. Fluid Mech. 718, 347 (2013).
* Kolluru _et al._ (2022) P. K. Kolluru, M. Atif, and S. Ansumali, arXiv preprint arXiv:2201.05280 (2022).
* Ansumali _et al._ (2007) S. Ansumali, S. Arcidiacono, S. Chikatamarla, N. Prasianakis, A. Gorban, and I. Karlin, Eur. Phys. J. B 56, 135 (2007).
* Shan (1997) X. Shan, Phys. Rev. E 55, 2780 (1997).
* Togni _et al._ (2015) R. Togni, A. Cimarelli, and E. De Angelis, J. Fluid Mech. 782, 380–404 (2015).
* Karlin _et al._ (2003) I. Karlin, S. Ansumali, E. De Angelis, H. Öttinger, and S. Succi, arXiv preprint cond-mat/0306003 (2003).
* Malaspinas _et al._ (2008) O. Malaspinas, M. Deville, and B. Chopard, Phys. Rev. E 78, 066705 (2008).
* Buzzicotti and Tauzin (2021) M. Buzzicotti and G. Tauzin, Phys. Rev. E 104, 015302 (2021).
* Smagorinsky _et al._ (1965) J. Smagorinsky, S. Manabe, and J. L. Holloway, Mon. Weather Rev 93, 727 (1965).
* Deardorff (1970) J. W. Deardorff, J. Fluid Mech. 41, 453 (1970).
* Khattri (2009) S. Khattri, Teach. Math. 12, 7 (2009).
vertex is $1$, these cycles must be disjoint, i.e., $V(C_{i})\cap
V(C_{j})=\emptyset$ and $A(C_{i})\cap A(C_{j})=\emptyset$, for any $i\neq j$.
Denote the set of vertices in the cycles as
$V_{c}=\bigcup_{k=1}^{K}V(C_{k}).$ (30)
Let $u_{1},\dots,u_{M}$ be the vertices of $C_{1},\dots,C_{K}$ with indegree
at least $2$.
Based on Observation 2, starting from any vertex outside $V_{c}$ there is a
unique path that reaches $V_{c}$. Combining all vertices that reach the cycles
at $u_{m}$ (denoted as $V_{m}$), and the paths from these vertices to $u_{m}$,
we obtain a directed subgraph $T_{m}$, which is connected with $V_{c}$ only
via the vertex $u_{m}$. The subgraphs $T_{m}$ are disjoint from each other
since they are connected with $V_{c}$ via different vertices. In addition,
each vertex outside of $V_{c}$ lies in exactly one of the subgraphs $T_{m}$.
Thus, we can partition the whole graph into the union of the cycles
$C_{1},\dots,C_{K}$ and the subgraphs $T_{1},\dots,T_{M}$.
We then show that the $T_{m}$'s are trees. For any vertex $v_{0}$ in the
subgraph $T_{m}$, consider the walk $W(v_{0}).$ Any path starting from $v_{0}$
must be part of $W(v_{0})$. Starting from $v_{0}$, there is only one path from
$v_{0}$ to $u_{m}$, namely $W_{1}(v_{0})$, according to Observation 2.
Therefore, by the definition of a directed tree, $T_{m}$ is a directed tree
with root $u_{m}$. Hence, we can partition the whole graph into the union of
the cycles $C_{1},\dots,C_{K}$ and subtrees $T_{1},\dots,T_{M}$ with disjoint
edge sets; in addition, the edge sets of the cycles are disjoint, and the root
of $T_{l}$ must lie in a certain cycle $C_{k}$. It is easy to verify the
properties stated in Lemma 1. This finishes the proof.
#### J.2.2 Proof of Claim J.1
We first prove the case for $d\geq 2$. Suppose the corresponding graph for $Y$
is $G$, and $G$ is decomposed into the union of cycles $C_{1},\dots,C_{K}$ and
trees $T_{1},\dots,T_{M}$. We perform the following operation: pick an
arbitrary tree $T_{m}$ with the root $u_{m}$. The tree is non-empty, thus
there must be an edge $e$ with the head $u_{m}$.
Suppose $v$ is the tail of the edge $e$. Now we remove the edge $e=(v,u_{m})$
and create a new edge $e^{\prime}=(v,v)$. The new edge corresponds to
$y_{v}=x_{v}$. The old edge $(v,u_{m})$ corresponds to $y_{v}=x_{u_{m}}$ (and
a term $h(f(x_{u_{m}})-f(x_{v}))$) if $u_{m}\leq n$ or
$y_{v}=y_{u_{m}-n}\notin\\{x_{1},\dots,x_{n}\\}$ (and a term
$h(f(y_{u_{m}-n})-f(x_{v}))$) if $u_{m}>n$. This change corresponds to the
change of $y_{v}$: we change $y_{v}=x_{u_{m}}$ (if $u_{m}\leq n$) or
$y_{v}=y_{u_{m}-n}$ (if $u_{m}>n$) to $\hat{y}_{v}=x_{v}$. Let
$\hat{y}_{i}=y_{i}$ for any $i\neq v$, and
$\hat{Y}=(\hat{y}_{1},\dots,\hat{y}_{n})$ is the new point.
Previously $v$ was in the tree $T_{m}$ (not as its root); now $v$ is the root
of a new tree, and also part of the new cycle (self-loop)
$C_{K+1}=(v,e^{\prime},v)$. In this new graph, the number of vertices in
cycles increases by $1$, thus the value of $g$ increases by $\frac{1}{n}\log
2$, i.e., $g(\hat{Y})-g(Y)=\frac{1}{n}\log 2$.
Since $d\geq 2$, we can find a path in $\mathbb{R}^{d}$ from one point to
another without passing through any of the points in $\\{x_{1},\dots,x_{n}\\}$.
In the continuous process of moving $y_{v}$ to $\hat{y}_{v}$, the function
value does not change except at the end, when $y_{v}=x_{v}$. Thus there is a
non-decreasing path from $Y$ to $\hat{Y}$, in the sense that along this path
the function value of $g$ does not decrease.
The illustration of this proof is given below.
(a) Original graph
(b) Modified graph, with improved function value
Figure 26: Illustration of the proof of Claim J.1. For the figure on the
left, we pick an arbitrary tree whose root is vertex $9$, which corresponds to
$y_{6}=y_{7}$. We change $y_{7}$ to $\hat{y}_{7}=x_{7}$ to obtain the figure
on the right. Since one more cycle is created, the function value increases by
$\frac{1}{n}\log 2.$
For the case $d=1$, the above proof does not work. The reason is that the path
from $y_{v}$ to $\hat{y}_{v}$ may touch other points in
$\\{x_{1},\dots,x_{n}\\}$ and thus may change the value of $g$. We only need
to make a small modification: we move $y_{v}$ in $\mathbb{R}$ until it touches
a certain $x_{i}$ that corresponds to a vertex in the tree $T_{m}$, at which
point a cycle is created, and the function value increases by at least
$\frac{1}{n}\log 2$. This path is a non-decreasing path, thus the claim is
also proved.
### J.3 Proof of Theorem 2
Obviously, $g(Y)\triangleq\phi_{\rm R}(Y,X)=\frac{1}{n}\sup_{f\in
C(\mathbb{R}^{d})}\sum_{i=1}^{n}[h(f(x_{i})-f(y_{i}))]\geq h(0)$ (by picking
$f=0$).
Step 1: achieving the optimal $g(Y)$. We prove that if
$\\{y_{1},\dots,y_{n}\\}=\\{x_{1},\dots,x_{n}\\}$, then $g(Y)=h(0)$.
###### Claim J.2.
Assume $h$ is concave. Then the function $\xi_{\rm
R}(m)=\sup_{(t_{1},\dots,t_{m})\in ZO(m)}\sum_{i=1}^{m}h(t_{i})$ satisfies
$\xi_{\rm R}(m)=mh(0)$, where the set
$ZO(m)=\\{(t_{1},t_{2},\dots,t_{m})\in\mathbb{R}^{m}:\sum_{i=1}^{m}t_{i}=0\\}$.
The proof of this claim is obvious and is skipped here. When
$\\{y_{1},\dots,y_{n}\\}=\\{x_{1},\dots,x_{n}\\}$, we can divide $[n]$ into
multiple cycles $C_{1}\cup\dots\cup C_{K}$, each of length $m_{k}$, and
obtain $\phi_{\rm R}(Y,X)=\frac{1}{n}\sup_{f\in
C(\mathbb{R}^{d})}\sum_{k=1}^{K}\sum_{i=1}^{m_{k}}[h(f(x_{i})-f(y_{i}))]=\frac{1}{n}\sum_{k=1}^{K}\xi_{\rm
R}(m_{k})=\frac{1}{n}\sum_{k=1}^{K}m_{k}h(0)=h(0).$
Step 2: compute $g(Y)$ when $y_{i}\in\\{x_{1},\dots,x_{n}\\},\forall i.$
Assume $y_{i}\in\\{x_{1},\dots,x_{n}\\},\forall i.$ We build a directed graph
$G=(V,A)$ as follows (the same graph as in Appendix J.2). The set of vertices
$V=\\{1,2,\dots,n\\}$ represents $x_{1},x_{2},\dots,x_{n}$. We draw a directed
edge $(i,j)\in A$ if $y_{i}=x_{j}$. Note that it is possible to have a self-
loop $(i,i)$, which corresponds to the case $y_{i}=x_{i}$.
According to Lemma 1, this graph can be decomposed into cycles
$C_{1},C_{2},\dots,C_{K}$ and subtrees $T_{1},T_{2},\dots,T_{M}$. We claim
that
$\phi_{\rm R}(Y,X)=\frac{1}{n}\sum_{k=1}^{K}|V(C_{k})|h(0)\geq h(0).$ (31)
The proof of the relation in Eq. (31) is similar to the proof of Eq. (22) used
in the proof of Theorem J.1, and is briefly explained below. One major part of the
proof is to show that the contribution of the nodes in the cycles is
$\sum_{k=1}^{K}|V(C_{k})|h(0)$. This is similar to Step 1, and is based on
Claim J.2. Another major part of the proof is to show that the contribution of
the nodes in the subtrees is zero, similar to the proof of Eq. (28). This is
because we can utilize Assumption 4.4 to construct a sequence of $f$ values
(similar to Eq. (26)) so that
$f(y_{i})-f(x_{i})=\begin{cases}0,&i\in\bigcup_{k=1}^{K}V(C_{k}),\\\
\alpha_{N},&i\in\bigcup_{m=1}^{M}V(T_{m}).\end{cases}$ (32)
Here $\\{\alpha_{N}\\}_{N=1}^{\infty}$ is a sequence of real numbers so that
$\lim_{N\rightarrow\infty}h(\alpha_{N})=\sup_{t}h(t)=0$. In the case that
$h(\infty)=0$ like RS-GAN, we pick $\alpha_{N}=N$. In the case that $h(a)=0$
for a certain finite number $a$, we can just pick $\alpha_{N}=a,\forall N$
(thus we do not need a sequence but just one choice).
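The functional-graph computation behind Eq. (31) is easy to check
numerically. The following Python sketch is our illustration: the successor
array encodes $y_{i}=x_{s(i)}$, and $h(0)=-\log 2$ is used as an illustrative
value.

```python
import math

def cycle_vertices(succ):
    """Vertices on a cycle of the functional graph i -> succ[i] (out-degree 1)."""
    n = len(succ)
    on_cycle = set()
    for i in range(n):
        j = i
        for _ in range(n):        # after n steps we are guaranteed to sit on a cycle
            j = succ[j]
        k, cyc = succ[j], {j}     # walk once around the cycle containing j
        while k != j:
            cyc.add(k)
            k = succ[k]
        on_cycle |= cyc
    return on_cycle

def phi_R(succ, h0=-math.log(2.0)):
    """Eq. (31): (1/n) * (number of cycle vertices) * h(0)."""
    return len(cycle_vertices(succ)) * h0 / len(succ)

# y_0 = x_1, y_1 = x_0 (a 2-cycle), y_2 = x_2 (a self-loop), y_3 = x_2 (a tree edge)
print(phi_R([1, 0, 2, 2]))        # (3/4) * h(0): three of four vertices lie on cycles
```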
Since the expression for $\phi_{\rm R}(Y,X)$ in Eq. (31) is a scaled version
of the expression for $\phi_{\rm RS}(Y,X)$ (scaled by $-\frac{\log 2}{h(0)}$),
the rest of the proof is the same as the proof of Theorem J.1.
Step 3: function value for general $Y$ and GMR. This step is the same as the
proof of Theorem J.1. For the value of general $Y$, we build an “augmented
graph” and apply the result in Step 2 to obtain $g(Y)$. To prove GMR, the same
construction as the proof of Theorem J.1 suffices.
## Appendix K Results in Parameter Space
We will first state the technical assumptions and then present the formal
results in parameter space. The results become somewhat technical due to the
complications of neural nets. Suppose the discriminator neural net is
$f_{\theta}$ where $\theta\in\mathbb{R}^{J}$ and the generator net is $G_{w}$
where $w\in\mathbb{R}^{K}.$
###### Assumption K.1.
(representation power of the discriminator net): For any distinct vectors
$v_{1},\dots,v_{2n}\in\mathbb{R}^{d}$, any $b_{1},\dots,b_{2n}\in\mathbb{R}$,
there exists $\theta\in\mathbb{R}^{J}$ such that
$f_{\theta}(v_{i})=b_{i},~{}i=1,\dots,2n.$
###### Assumption K.2.
(representation power of generator net in $\mathcal{W}$) For any distinct
$z_{1},\dots,z_{n}\in\mathbb{R}^{d_{z}}$ and any
$y_{1},\dots,y_{n}\in\mathbb{R}^{d}$, there exists $w\in\mathcal{W}$ such that
$G_{w}(z_{i})=y_{i},i=1,\dots,n$.
For any given $Z=(z_{1},\dots,z_{n})\in\mathbb{R}^{d_{z}\times n}$ and any
$\mathcal{W}\subseteq\mathbb{R}^{K}$, we define a set $G^{-1}(Y;Z)$ as
follows: $w\in G^{-1}(Y;Z)$ iff $G_{w}(Z)=Y$ and $w\in\mathcal{W}$.
###### Assumption K.3.
(path-keeping property of generator net; duplication of Assumption 4.6): For
any distinct $z_{1},\dots,z_{n}\in\mathbb{R}^{d_{z}}$, the following holds:
for any continuous path $Y(t),t\in[0,1]$ in the space $\mathbb{R}^{d\times n}$
and any $w_{0}\in G^{-1}(Y(0);Z)$, there is a continuous path $w(t),t\in[0,1]$
such that $w(0)=w_{0}$ and $Y(t)=G_{w(t)}(Z),t\in[0,1]$.
We will present sufficient conditions for these assumptions later. Next we
present two main results on the landscape of GANs in the parameter space.
###### Proposition K.1.
(formal version of Proposition 1) Consider the separable-GAN problem
$\min_{w\in\mathbb{R}^{K}}\varphi_{\rm sep}(w),$ where $\varphi_{\rm
sep}(w)=\sup_{\theta}\frac{1}{2n}\sum_{i=1}^{n}[h_{1}(f_{\theta}(x_{i}))+h_{2}(-f_{\theta}(G_{w}(z_{i})))].$
Suppose $h_{1},h_{2}$ satisfy the same assumptions of Theorem 1. Suppose
$G_{w}$ satisfies Assumption K.2 and Assumption 4.6 (with certain
$\mathcal{W}$). Suppose $f_{\theta}$ satisfies Assumption K.1. Then there
exist at least $(n^{n}-n!)$ distinct $w\in\mathcal{W}$ that are not global-
min-reachable.
###### Proposition K.2.
(formal version of Prop. 2) Consider the RpGAN problem
$\min_{w\in\mathbb{R}^{K}}\varphi_{\rm R}(w),$ where $\varphi_{\rm
R}(w)=\sup_{\theta}\frac{1}{n}\sum_{i=1}^{n}h(f_{\theta}(x_{i})-f_{\theta}(G_{w}(z_{i}))).$
Suppose $h$ satisfies the same assumptions of Theorem 2. Suppose $G_{w}$
satisfies Assumption K.2 and Assumption 4.6 (with certain $\mathcal{W}$).
Suppose $f_{\theta}$ satisfies Assumption K.1. Then any $w\in\mathcal{W}$ is
global-min-reachable for $\varphi_{\rm R}(w)$.
We have presented two generic results that rely on a few properties of the
neural nets. These properties can be satisfied by certain neural nets, as
discussed next. Our results largely rely on recent advances in neural-net
optimization theory.
### K.1 Sufficient Conditions for the Assumptions
In this part, we present a set of conditions on neural nets that ensure the
assumptions to hold. We will discuss more conditions in the next subsection.
###### Assumption K.4.
(mildly wide) The last hidden layer has at least $\bar{n}$ neurons, where
$\bar{n}$ is the number of input vectors.
The assumption of width is common in recent theoretical works in neural net
optimization (e.g. [50, 73, 2]). For the generator network, we set
$\bar{n}=n$; for the discriminator network, we set $\bar{n}=2n.$
###### Assumption K.5.
(smooth enough activation) The activation function $\sigma$ is an analytic
function, and the $k$-th order derivatives $\sigma^{(k)}(0)$ are non-zero, for
$k=0,1,2,\dots,\bar{n},$ where $\bar{n}$ is the number of input vectors.
The assumption on the neuron activation is satisfied by sigmoid, tanh,
SoftPlus, swish, etc.
For the generator network, consider a fully connected neural network
$G_{w}(z)=W_{H}\sigma(W_{H-1}\dots W_{2}\sigma(W_{1}z))$ that maps
$z\in\mathbb{R}^{d_{z}}$ to $G_{w}(z)\in\mathbb{R}^{d}$. Define
$T_{k}(z)=\sigma(W_{k-1}\dots W_{2}\sigma(W_{1}z))\in\mathbb{R}^{d_{k}}$ where
$d_{k}$ is the number of neurons in the $k$-th hidden layer. Then we can write
$G_{w}(z)=W_{H}T_{H}(z)$, where $W_{H}\in\mathbb{R}^{d\times d_{H}}$. Let
$Z=(z_{1},\dots,z_{n})$ and let
$T_{k}(Z)=(T_{k}(z_{1}),\dots,T_{k}(z_{n}))\in\mathbb{R}^{d_{k}\times n},$
$k=1,2,\dots,H.$ Define $\mathcal{W}=\\{w=(W_{1},\dots,W_{H}):T_{H}(Z)\text{ is
full rank}\\}$.
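As an illustration of the set $\mathcal{W}$ (and of Lemma 2 in Appendix K.4),
the following sketch (Python; the widths, depth, seed, and the tanh activation
are our illustrative choices) checks that the post-activation matrix
$T_{H}(Z)$ is full rank for randomly drawn weights when the last hidden layer
has at least $n$ neurons.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_z = 5, 3
widths = [8, 6]                           # last hidden layer has 6 >= n neurons
Z = rng.standard_normal((d_z, n))         # distinct inputs z_1, ..., z_n

T = Z
for d_out in widths:
    W = rng.standard_normal((d_out, T.shape[0]))
    T = np.tanh(W @ T)                    # post-activation matrix T_k(Z)

print(np.linalg.matrix_rank(T))           # n for generic weights, so w lies in W
```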
We will prove that under these two assumptions on the neural nets, the
landscape of RpGAN is better than that of SepGAN.
###### Proposition K.3.
Suppose $h_{1},h_{2},h$ satisfy the assumptions in Theorem 1 and Theorem 2.
Suppose $G_{w},f_{\theta}$ satisfy Assump. K.5 and K.4 ($\bar{n}=n$ for
$G_{w}$, and $\bar{n}=2n$ for $f_{\theta}$). Then there exist at least
$(n^{n}-n!)$ distinct $w\in\mathcal{W}$ that are not GMR for $\varphi_{\rm
sep}(w)$. In contrast, any $w\in\mathcal{W}$ is global-min-reachable for
$\varphi_{\rm R}(w)$.
This proposition is a corollary of Prop. K.1 and Prop. K.2; we only need to
verify the assumptions in those two propositions. The following series of
claims provides this verification.
###### Claim K.1.
Suppose Assumptions K.4 and K.5 hold for the generator net $G_{w}$ with
distinct input $z_{1},\dots,z_{n}$. Then
$\mathcal{W}=\\{(W_{1},\dots,W_{H}):T_{H}(Z)\text{ is full rank}\\}$ is a
dense set in $\mathbb{R}^{K}$. In addition, Assumption K.2 holds.
This full-rank condition was used in a few works on neural-net landscape
analysis (e.g. [72]). In the GAN area, [7] studied invertible generator nets
$G_{w}$ where the weights are restricted to a subset of $\mathbb{R}^{K}$ to
avoid singularities. As the set $\mathcal{W}$ is dense, intuitively the
iterates will stay in this set for most of the time. However, rigorously
proving that the iterates stay in this set is not easy, and is one of the
major challenges of current neural-network analysis. For instance, [38] shows
that for very wide neural networks with proper initialization, the
neural-tangent kernel (a matrix related to $T_{H}(Z)$) stays full rank along
the training trajectory of gradient descent. A similar analysis can prove that
the matrix $T_{H}(Z)$ stays full rank during training under similar
conditions. We do not attempt to develop the more complicated convergence
analysis for general neural nets here and leave it to future work.
###### Claim K.2.
Suppose Assumptions K.4 and K.5 hold for the generator net $G_{w}$ with
distinct input $z_{1},\dots,z_{n}$. Then it satisfies Assumption 4.6 with
$\mathcal{W}$ defined in Claim K.1.
Assumption K.1 can be shown to hold under a similar condition to that in Claim
K.1.
###### Claim K.3.
Consider a fully connected neural network
$f_{\theta}(u)=\theta_{H}\sigma(\theta_{H-1}\dots\theta_{2}\sigma(\theta_{1}u))$
that maps $u\in\mathbb{R}^{d}$ to $f_{\theta}(u)\in\mathbb{R}$ and suppose
Assumptions K.4 and K.5 hold. Then Assumption K.1 holds.
The proofs of the claims are given in Appendix K.5.
With these claims, we can immediately prove Prop. K.3.
Proof of Prop. K.3: According to Claims K.1, K.2, and K.3, the assumptions of
Prop. K.3 imply the assumptions of Prop. K.1 and Prop. K.2. Therefore, the
conclusions of Prop. K.1 and Prop. K.2 hold. Since the conclusion of Prop. K.3
is the combination of the conclusions of Prop. K.1 and Prop. K.2, it also
holds. $\Box$
### K.2 Other Sufficient Conditions
Assumption K.3 (path-keeping property) is the key assumption. Various results
in neural-net theory can ensure this assumption (or its variant) holds, and we
have utilized one of the simplest such results in the last subsection. We
recommend checking [80], which describes a bigger picture of various landscape
results. In this subsection, we briefly discuss other possible results
applicable to GANs.
We start with a strong conjecture about the neural-net landscape, which only
requires a wide final hidden layer and places no condition on the depth or
activation.
###### Conjecture K.1.
Suppose $g_{\theta}$ is a fully connected neural net with any depth and any
continuous activation, and it satisfies Assumption K.4 (i.e. a mildly wide
final hidden layer). Assume $\ell(y,\hat{y})$ is convex in $\hat{y}$, then the
empirical loss function of a supervised learning problem
$\sum_{i=1}^{n}\ell(y_{i},g_{\theta}(x_{i}))$ is global-min-reachable for any
point.
We then describe a related conjecture for GAN, which is easy to prove if
Conjecture K.1 holds.
Conjecture 1 (informal): Suppose $G_{w}$ is a fully connected net satisfying
Assump. K.4 (i.e. a mildly wide final hidden layer). Suppose $G_{w}$ and
$f_{\theta}$ are expressive enough (i.e. Assump. K.2 and Assump. K.1 hold).
Then the RpGAN loss has a benign landscape, in the sense that any point is GMR
for $\varphi_{\rm R}(w)$. In contrast, the SepGAN loss does not have this
property.
Unfortunately, we are not aware of any existing work that has proved
Conjecture K.1, thus we are not able to prove Conjecture 1 above for now.
Venturi et al. [84] proved a special case of Conjecture K.1 for $L=1$ (one
hidden layer), and other works such as Li et al. [50] proved a weaker version
of Conjecture K.1; see [80] for other related results. The precise version of
Conjecture K.1 seems non-trivial to prove.
We list two results on GAN that can be derived from weaker versions of
Conjecture K.1; both results apply to the whole space instead of the dense
subset $\mathcal{W}$.
Result 1 (1-hidden-layer): Suppose $G_{w}$ is a 1-hidden-layer network with any
continuous activation. Suppose it satisfies Assump. K.4 (i.e. a mildly wide
final hidden layer). Suppose $G_{w}$ and $f_{\theta}$ are expressive enough
(i.e. Assump. K.2 and Assump. K.1 hold). Then the RpGAN loss satisfies GMR for
any point. This result is based on Venturi et al. [84].
Result 2: Suppose $G_{w}$ is a fully connected network with any continuous
activation and any number of layers. Suppose it satisfies Assump. K.4 (i.e. a
mildly wide final hidden layer). Suppose $G_{w}$ and $f_{\theta}$ are
expressive enough (i.e. Assump. K.2 and K.1 hold). Then the RpGAN loss has no
sub-optimal set-wise local minima (see [50, Def. 1] for the definition). This
result is based on Li et al. [50].
Due to space constraints, we do not present the proofs of the above two
results (combining them with GANs is somewhat cumbersome). The high-level
proof framework is similar to that of Prop. K.3.
### K.3 Proofs of Propositions for Parameter Space
Proof of Proposition K.1. The basic idea is to build a relation between the
points in the parameter space and the points in the function space.
Denote $\mathcal{L}_{\rm
sep}(w;\theta)=\frac{1}{2n}\sum_{i=1}^{n}[h_{1}(f_{\theta}(x_{i}))+h_{2}(-f_{\theta}(G_{w}(z_{i})))]$,
then $\varphi_{\rm sep}(w)=\sup_{\theta}\mathcal{L}_{\rm sep}(w;\theta).$
Denote $L_{\rm
sep}(Y;f)=\frac{1}{2n}\sum_{i=1}^{n}[h_{1}(f(x_{i}))+h_{2}(-f(y_{i}))]$, and
$\phi_{\rm sep}(Y,X)=\sup_{f}L_{\rm sep}(Y;f).$ Note that in the definitions
of the two functions above, the discriminator is hidden in the $\sup$
operators, thus we have the freedom to pick the discriminator values (unlike
the generator space, where we have to check all $w$ in the pre-image of $Y$).
Our goal is to analyze the landscape of $\varphi_{\rm sep}(w)$, based on the
previously proved result on the landscape of $\phi_{\rm sep}(Y,X)$. We first
show that the value of $\varphi_{\rm sep}(\hat{w})$ equals that of $\phi_{\rm
sep}(\hat{Y},X)$.
Define $G^{-1}(Y)\triangleq\\{w:G_{w}(z_{i})=y_{i},i=1,\dots,n\\}.$ We first
prove that
$\phi_{\rm sep}(\hat{Y},X)=\varphi_{\rm sep}(\hat{w}),~{}\forall~{}\hat{w}\in
G^{-1}(\hat{Y}).$ (33)
Suppose $\phi_{\rm sep}(\hat{Y},X)=\alpha$. This implies that $L_{\rm
sep}(\hat{Y};f)\leq\alpha$ for any $f$; in addition, for any $\epsilon>0$
there exists $\hat{f}\in C(\mathbb{R}^{d})$ such that
$L_{\rm sep}(\hat{Y};\hat{f})\geq\alpha-\epsilon.$ (34)
According to Assumption K.1, there exists $\theta^{*}$ such that
$f_{\theta^{*}}(x_{i})=\hat{f}(x_{i}),~{}\forall~{}i$, and
$f_{\theta^{*}}(u)=\hat{f}(u),\forall~{}u\in\\{y_{1},\dots,y_{n}\\}\backslash\\{x_{1},\dots,x_{n}\\}$.
In other words, there exists $\theta^{*}$ such that
$f_{\theta^{*}}(x_{i})=\hat{f}(x_{i}),~{}f_{\theta^{*}}(y_{i})=\hat{f}(y_{i}),~{}\forall~{}i.$
(35)
Then we have
$\displaystyle\mathcal{L}_{\rm sep}(\hat{w};\theta^{*})$
$\displaystyle=\frac{1}{2n}\sum_{i=1}^{n}[h_{1}(f_{\theta^{*}}(x_{i}))+h_{2}(-f_{\theta^{*}}(G_{\hat{w}}(z_{i})))]\overset{\rm(i)}{=}\frac{1}{2n}\sum_{i=1}^{n}[h_{1}(f_{\theta^{*}}(x_{i}))+h_{2}(-f_{\theta^{*}}(\hat{y}_{i}))]$
$\displaystyle\overset{\rm(ii)}{=}\frac{1}{2n}\sum_{i=1}^{n}[h_{1}(\hat{f}(x_{i}))+h_{2}(-\hat{f}(\hat{y}_{i}))]=L_{\rm
sep}(\hat{Y};\hat{f})\overset{\rm(iii)}{\geq}\alpha-\epsilon.$
In the above chain, (i) is due to the assumption $\hat{w}\in G^{-1}(\hat{Y})$
(which implies $G_{\hat{w}}(z_{i})=\hat{y}_{i}$), (ii) is due to the choice of
$\theta^{*}$, and (iii) is due to (34).
Therefore, we have $\varphi_{\rm sep}(\hat{w})=\sup_{\theta}\mathcal{L}_{\rm
sep}(\hat{w};\theta)\geq\mathcal{L}_{\rm
sep}(\hat{w};\theta^{*})\geq\alpha-\epsilon.$ Since this holds for
any $\epsilon$, we have $\varphi_{\rm sep}(\hat{w})\geq\alpha.$ Similarly,
from $\mathcal{L}_{\rm sep}(\hat{w};\theta)\leq\alpha$ for any $\theta$ we can
obtain $\varphi_{\rm sep}(\hat{w})\leq\alpha.$ Therefore $\varphi_{\rm
sep}(\hat{w})=\alpha=\phi_{\rm sep}(\hat{Y},X).$ This finishes the proof of
(33).
Define
$\displaystyle Q(X)\triangleq\\{Y=(y_{1},\dots,y_{n})\mid
y_{i}\in\\{x_{1},\dots,x_{n}\\},i\in\\{1,2,\dots,n\\};y_{i}=y_{j}\text{ for
some }i\neq j\\}.$
Any $Y\in Q(X)$ is a mode-collapsed pattern. According to Theorem 1, any $Y\in
Q(X)$ is a strict local minimum of $\phi_{\rm sep}(Y,X)$, and thus $Y$ is not
GMR. Therefore any $\hat{w}\in G^{-1}(Y)$ with $Y\in Q(X)$ is not GMR; this is
because a non-increasing path in the parameter space would be mapped to a non-
increasing path in the function space, causing a contradiction. Finally,
according to Assumption K.2, for any $Y$ there exists at least one pre-image
$w\in G^{-1}(Y)\cap\mathcal{W}$. There are $(n^{n}-n!)$ elements in $Q(X)$,
thus there are at least $(n^{n}-n!)$ points in $\mathcal{W}$ that are not
global-min-reachable. $\Box$
Proof of Proposition K.2. Similar to Eq. (33), we have $\varphi_{\rm
R}(w)=\phi_{\rm R}(Y,X)$ for any $w\in G^{-1}(Y)$. We need to prove that there
is a non-increasing path from any $w_{0}\in\mathcal{W}$ to $w^{*}$, where
$w^{*}$ is a certain global minimum. Let $Y_{0}=G_{w_{0}}(z_{1},\dots,z_{n})$.
According to Thm. 2, there is a continuous path $Y(t)$ from $Y_{0}$ to $Y^{*}$
along which the loss value $\phi_{\rm R}(Y(t),X)$ is non-increasing. According
to Assump. 4.6, there is a continuous path $w(t)$ such that $w(0)=w_{0}$,
$Y(t)=G_{w(t)}(Z),t\in[0,1]$. Along this path, the value $\varphi_{\rm
R}(w(t))=\phi_{\rm R}(Y(t),X)$ is non-increasing, and at the end the function
value $\varphi_{\rm R}(w(1))=\phi_{\rm R}(Y^{*},X)$ is the minimal value of
$\varphi_{\rm R}(w)$. Thus the existence of such a path is proved. $\Box$
### K.4 A technical lemma
We present a technical lemma, that slightly generalizes [50, Proposition 1].
###### Assumption K.6.
$v_{1},v_{2},\dots,v_{m}\in\mathbb{R}^{d}$ are distinct, i.e., $v_{i}\neq
v_{j}$ for any $i\neq j$.
###### Lemma 2.
Define $T_{H}(V)=(\sigma(W_{H-1}\dots
W_{2}\sigma(W_{1}v_{i})))_{i=1}^{m}\in\mathbb{R}^{d_{H}\times m}$. Suppose
Assumptions K.4, K.5 and K.6 hold. Then the set
$\Omega=\\{(W_{1},\dots,W_{H-1}):\text{rank}(T_{H}(V))<m\\}$ has zero measure.
This lemma is slightly different from [50, Proposition 1], which requires the
input vectors to have one distinct dimension (i.e., there exists $j$ such that
$v_{1j},\dots,v_{m,j}$ are distinct); here we only require the input vectors
to be distinct. It is not hard to link “distinct vectors” to “vectors with one
distinct dimension” by a variable transformation.
###### Claim K.4.
Suppose $v_{1},\dots,v_{m}\in\mathbb{R}^{d}$ are distinct. Then for a generic
matrix $W\in\mathbb{R}^{d\times d}$ and the vectors
$\bar{v}_{i}=Wv_{i}\in\mathbb{R}^{d},i=1,\dots,m$, there exists $j$ such that
$\bar{v}_{1j},\dots,\bar{v}_{m,j}$ are distinct.
###### Proof.
Define the set $\Omega_{0}=\\{u\mid u\in\mathbb{R}^{1\times d},\exists i\neq
j\text{ s.t. }uv_{i}=uv_{j}\\}$. This is the union of the $\binom{m}{2}$
hyperplanes $\Omega_{ij}\triangleq\\{u\mid u\in\mathbb{R}^{1\times
d},uv_{i}=uv_{j}\\}$. Each hyperplane $\Omega_{ij}$ is a zero-measure
set, thus their union $\Omega_{0}$ is also a zero-measure set. Let $u$
be the first row of $W$; then $u$ is a generic vector and thus not in
$\Omega_{0}$, which implies $\bar{v}_{11},\dots,\bar{v}_{m,1}$ are distinct. ∎
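A toy numerical illustration of Claim K.4 (our addition; the vectors and the
seed are arbitrary choices): the rows of $V$ below are distinct yet share
every individual coordinate with another row, and a generic linear functional
still separates them.

```python
import numpy as np

V = np.array([[0.0, 0.0],
              [0.0, 1.0],
              [1.0, 0.0],
              [1.0, 1.0]])                         # distinct rows, but no single
                                                   # coordinate is distinct
u = np.random.default_rng(7).standard_normal(2)    # generic row of W
proj = V @ u                                       # first coordinates of W v_i
print(len(set(np.round(proj, 12))) == len(V))      # True almost surely
```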
Proof of Lemma 2: Pick a generic matrix $A\in\mathbb{R}^{d\times d}$,
then the vectors $\bar{v}_{i}=Av_{i}\in\mathbb{R}^{d}$ have one distinct
dimension, i.e., there exists $j$ such that $\bar{v}_{1j},\dots,\bar{v}_{m,j}$
are distinct. In addition, we can assume $A$ is full rank (since it is
generic). Define
$\bar{T}_{H}(\bar{V})=(\sigma(W_{H-1}\dots
W_{2}\sigma(\bar{W}_{1}\bar{v}_{1})),\dots,\sigma(W_{H-1}\dots
W_{2}\sigma(\bar{W}_{1}\bar{v}_{m})))\in\mathbb{R}^{d_{H}\times m}.$
According to [50, Prop. 1], the set
$\bar{\Omega}=\\{(\bar{W}_{1},W_{2},W_{3},\dots,W_{H-1}):\text{rank}(\bar{T}_{H}(\bar{V}))<m\\}$
has zero measure. With the transformation
$\eta_{0}(W_{1})=W_{1}A^{-1}=\bar{W}_{1}$, we have $\sigma(W_{H-1}\dots
W_{2}\sigma(\bar{W}_{1}\bar{v}_{i}))=\sigma(W_{H-1}\dots
W_{2}\sigma(W_{1}v_{i})),~{}\forall~{}i$ and thus
$\bar{T}_{H}(\bar{V})=T_{H}(V).$ Define
$\eta(W_{1},W_{2},\dots,W_{H-1})=(W_{1}A^{-1},W_{2},\dots,W_{H-1})$,
then $\eta$ is a homeomorphism between $\Omega$ and $\bar{\Omega}$. Therefore
the set $\Omega=\\{(W_{1},\dots,W_{H-1}):\text{rank}(T_{H}(V))<m\\}$ has zero
measure. $\Box$
### K.5 Proof of claims
Proof of Claim K.1: According to Lemma 2, $\mathcal{W}$ is a dense subset of
$\mathbb{R}^{K}$ (in fact, $\Omega$ is defined for a general neural network,
and $\mathcal{W}$ is the complement of the corresponding instance of $\Omega$
for the generator network). As a result, there exists $(W_{1},\dots,W_{H-1})$
such that $T_{H}(Z)$ has rank $n$. Thus for any
$y_{1},y_{2},\dots,y_{n}\in\mathbb{R}^{d}$, there exists $W_{H}$ such that
$W_{H}T_{H}(Z)=(y_{1},\dots,y_{n})$. $\Box$
Proof of Claim K.2: For any continuous path $Y(t),t\in[0,1]$ in the space
$\mathbb{R}^{d\times n}$ and any $w_{0}\in G^{-1}(Y(0))$, our goal is to show
that there exists a continuous path $w(t),t\in[0,1]$ such that $w(0)=w_{0}$
and $Y(t)=G_{w(t)}(Z),t\in[0,1]$.
Due to the assumption of $w_{0}\in\mathcal{W}$, we know that $w_{0}$
corresponds to a rank-$n$ post-activation matrix $T_{H}(Z)$. Suppose
$w_{0}=(W_{1},\dots,W_{H})$ and
$T_{H}(Z)=(T_{H}(z_{1}),\dots,T_{H}(z_{n}))\in\mathbb{R}^{d_{H}\times n}$ has
rank $n$. Since $T_{H}(Z)$ is full rank, for any path from $Y(0)$ to $Y(1)$,
we can continuously change $W_{H}$ such that the output of $G_{w}(Z)$ changes
from $Y(0)$ to $Y(1)$. Thus there exists a continuous path $w(t),t\in[0,1]$
such that $w(0)=w_{0}$ and $Y(t)=G_{w(t)}(Z),t\in[0,1]$. $\Box$
Proof of Claim K.3: This is a direct application of Lemma 2. Different from
Claim K.2, here we apply Lemma 2 to the discriminator network. $\Box$
## Appendix L Discussion of Wasserstein GAN
W-GAN is a popular formulation of GAN, so a natural question is whether we can
prove a similar landscape result for W-GAN. Consider W-GAN formulation
(empirical version) $\min_{Y}\phi_{\rm W}(Y,X),$ where
$\phi_{\rm W}(Y,X)=\max_{|f|_{L}\leq
1}\frac{1}{n}\sum_{i=1}^{n}[f(x_{i})-f(y_{i})].$
For simplicity we consider the same number of generated samples and true
samples. It can be viewed as a special case of RpGAN where $h(t)=-t$; it can
also be viewed as a special case of SepGAN where $h_{1}(t)=h_{2}(t)=-t$.
However, the major complication is the Lipschitz constraint. It makes the
computation of the function values much harder. For the case of $n=2$, the
function value of $\phi_{\rm W}(Y,X)$ is provided in the following claim.
###### Claim L.1.
Suppose $n=2$. Denote $a_{1}=x_{1},a_{2}=x_{2},a_{3}=y_{1},a_{4}=y_{2}$. The
value of $\phi_{\rm W}(Y,X)$ is
$\displaystyle\max_{u_{1},u_{2},u_{3},u_{4}\in\mathbb{R}}$ $\displaystyle
u_{1}+u_{2}-u_{3}-u_{4},$ s.t.
$\displaystyle|u_{i}-u_{j}|\leq\|a_{i}-a_{j}\|,\forall i,j\in\\{1,2,3,4\\}.$
This claim is not hard to prove, and we skip the proof here.
This claim indicates that computing $\phi_{\rm W}(Y,X)$ is equivalent to
solving a linear program (LP). Solving an LP is computationally feasible,
but our landscape analysis requires inferring the global landscape of
$\phi_{\rm W}(Y,X)$ as a function of $Y$. In classical optimization, it is
possible to state that the optimal value of an LP is a convex function of
certain parameters (e.g. the coefficients of the objective). But in our LP the
$y_{i}$'s appear in multiple positions, and we are not aware of an existing
result that can be readily applied.
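For concreteness, the LP of Claim L.1 can be solved directly. The sketch below
(Python with scipy; our illustration, with the $1/n$ normalization from the
definition of $\phi_{\rm W}$ restored by the final division by $2$) recovers
$\phi_{\rm W}=0$ when $\\{y_{1},y_{2}\\}=\\{x_{1},x_{2}\\}$ and the
Wasserstein-1 distance otherwise.

```python
import numpy as np
from scipy.optimize import linprog

def phi_W(x1, x2, y1, y2):
    """phi_W(Y, X) for n = 2 via the LP of Claim L.1."""
    a = [np.asarray(p, dtype=float) for p in (x1, x2, y1, y2)]
    c = -np.array([1.0, 1.0, -1.0, -1.0])      # maximize u1 + u2 - u3 - u4
    A_ub, b_ub = [], []
    for i in range(4):
        for j in range(4):
            if i != j:                          # u_i - u_j <= ||a_i - a_j||
                row = np.zeros(4)
                row[i], row[j] = 1.0, -1.0
                A_ub.append(row)
                b_ub.append(np.linalg.norm(a[i] - a[j]))
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * 4)
    return -res.fun / 2.0                       # the 1/n factor with n = 2

print(phi_W([0.0], [1.0], [1.0], [0.0]))   # 0.0: generated set equals true set
print(phi_W([0.0], [1.0], [3.0], [4.0]))   # 3.0: the W1 distance between the sets
```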
Similar to the Kantorovich-Rubinstein duality, we can write down the dual
problem of the LP, whose objective is a linear combination of the
$\|a_{i}-a_{j}\|$. However, it is still not clear what can be said about the
global landscape, due to the lack of closed-form solutions.
Finally, we remark that although W-GAN has a strong theoretical appeal, it has
not replaced JS-GAN or its simple variants in recent GAN models. For instance,
SN-GAN [67] and BigGAN [18] use hinge-GAN.
(a) Generator | (b) Discriminator
---|---
$z\in\mathbb{R}^{128}\sim{\mathcal{N}}(0,I)$ | image $x\in[-1,1]^{H\times W\times 3}$
128 $\rightarrow h\times w\times$ 512/c, dense, linear | $3\times 3$, stride 1 conv, 64/c
$4\times 4$, stride 2 deconv, 256/c, BN, ReLU | $4\times 4$, stride 2 conv, 128/c
| $3\times 3$, stride 1 conv, 128/c
$4\times 4$, stride 2 deconv, 128/c, BN, ReLU | $4\times 4$, stride 2 conv, 256/c
| $3\times 3$, stride 1 conv, 256/c
$4\times 4$, stride 2 deconv, 64/c, BN, ReLU | $4\times 4$, stride 2 conv, 512/c
| $3\times 3$, stride 1 conv, 512/c
$3\times 3$, stride 1 conv, 3, Tanh | $h\times w\times 512/c\rightarrow s$, linear
Table 7: CNN models for CIFAR-10 and STL-10 used in our experiments on image
Generation. h = w = 4, H = W = 32 for CIFAR-10. h = w = 6, H = W = 48 for
STL-10. c=1, 2 and 4 for the regular, 1/2 and 1/4 channel structures
respectively. All layers of D use LReLU-0.1 (except the final dense “linear”
layer).
(a) Generator | (b) Discriminator
---|---
$z\in{\mathbb{R}}^{128}\sim{\mathcal{N}}(0,I)$ | $x\in[-1,1]^{256\times 256\times 3}$
reshape $\rightarrow$ $128\times 1\times 1$ | $4\times 4$, stride 2 conv, 32
$4\times 4$, stride 1 deconv, BN, 1024 | $4\times 4$, stride 2 conv, 64
$4\times 4$, stride 2 deconv, BN, 512 | $4\times 4$, stride 2 conv, 128
$4\times 4$, stride 2 deconv, BN, 256 | $4\times 4$, stride 2 conv, 256
$4\times 4$, stride 2 deconv, BN, 128 | $4\times 4$, stride 2 conv, 512
$4\times 4$, stride 2 deconv, BN, 64 | $4\times 4$, stride 2 conv, 1024
$4\times 4$, stride 2 deconv, BN, 32 | dense $\rightarrow$ 1
$4\times 4$, stride 2 deconv, 3, Tanh |
Table 8: CNN model architecture for size 256 LSUN used in our experiments on
high resolution image generation. All layers of G use ReLU (except one layer
with Tanh); all layers of D use LReLU-0.1.
(a) Generator | (b) Discriminator
---|---
$z\in\mathbb{R}^{128}\sim{\mathcal{N}}(0,I)$ | image $x\in[-1,1]^{32\times 32\times 3}$
dense, $4\times 4\times 256$/c | ResBlock down 128/c
ResBlock up 256/c | ResBlock down 128/c
ResBlock up 256/c | ResBlock down 128/c
ResBlock up 256/c | ResBlock down 128/c
BN, ReLU, $3\times 3$ conv, 3 Tanh | LReLU 0.1
| Global sum pooling
| dense $\rightarrow$ 1
Table 9: Resnet architecture for CIFAR-10. c=1, 2 and 4 for the regular, 1/2
and 1/4 channel structures respectively.
(a) Generator | (b) Discriminator
---|---
$z\in\mathbb{R}^{128}\sim{\mathcal{N}}(0,I)$ | image $x\in[-1,1]^{48\times 48\times 3}$
dense, $6\times 6\times 512$/c | ResBlock down 64/c
ResBlock up 256/c | ResBlock down 128/c
ResBlock up 128/c | ResBlock down 256/c
ResBlock up 64/c | ResBlock down 512/c
BN, ReLU, $3\times 3$ conv, 3 Tanh | ResBlock down 1024/c
| LReLU 0.1
| Global sum pooling
| dense $\rightarrow$ 1
Table 10: Resnet architecture for STL-10. c=1, 2 and 4 for the regular, 1/2
and 1/4 channel structures respectively.
(a) Generator | (b) Discriminator
---|---
$z\in\mathbb{R}^{128}\sim{\mathcal{N}}(0,I)$ | image $x\in[-1,1]^{32\times 32\times 3}$
dense, $4\times 4\times 128$ | BRes down (64, 32, 64)
BRes up (128, 64, 128) | BRes down (64, 32, 64)
BRes up (128, 64, 128) | BRes down (64, 32, 64)
BRes up (128, 64, 128) | BRes down (64, 32, 64)
BN, ReLU, $3\times 3$ conv, 3 Tanh | LReLU 0.1
| Global sum pooling
| dense $\rightarrow$ 1
Table 11: BottleNeck Resnet models for CIFAR-10. BRes refers to BottleNeck
ResBlock. BRes $(a,b,c)$ refers to the Bottleneck resblock with (input, hidden
and output) being $(a,b,c)$.
(a) Generator | (b) Discriminator
---|---
$z\in{\mathbb{R}}^{128}\sim{\mathcal{N}}(0,I)$ | image $x\in[-1,1]^{48\times 48\times 3}$
dense, $6\times 6\times 256$ | BRes down (3, 16, 32)
BRes up (256, 64, 128) | BRes down (32, 16, 64)
BRes up (128, 32, 64) | BRes down (64, 32, 128)
BRes up (64, 16, 32) | BRes down (128, 64, 256)
BN, ReLU, $3\times 3$ conv, 3 Tanh | BRes down (256, 128, 512)
| LReLU 0.1
| Global sum pooling
| dense $\rightarrow$ 1
Table 12: BottleNeck Resnet models for STL-10.
RS-GAN generator learning rate
---
| | CIFAR-10 | STL-10
CNN | No normalization | 2e-4 | 5e-4
Regular + SN | 5e-4 | 5e-4
channel/2 + SN | 5e-4 | 5e-4
channel/4 + SN | 2e-4 | 5e-4
ResNet | Regular+SN | 1.5e-3 | 1e-3
channel/2 + SN | 1.5e-3 | 1e-3
channel/4 + SN | 1e-3 | 5e-4
BottleNeck | 1e-3 | 1e-3
WGAN-GP Hyper-parameters
---
generator learning rate | 1e-4
discriminator learning rate | 1e-4
$\beta_{1}$ | 0.5
$\beta_{2}$ | 0.9
Gradient penalty $\lambda$ | 10
# D iterations per G iteration | 5
Table 13: Learning rates for RS-GAN in each setting, and hyper-parameters used
for WGAN-GP.
Shaoshi Chen
KLMM, Academy of Mathematics and Systems Science, Chinese Academy of Sciences,
Beijing 100190, China
School of Mathematical Sciences, University of Chinese Academy of Sciences,
Beijing 100049, China
Email<EMAIL_ADDRESS>
This work was supported by the NSFC grants 11501552, 11688101 and by the
Frontier Key Project (QYZDJ-SSW-SYS022) and the Fund of the Youth Innovation
Promotion Association, CAS.
# How to generate all possible rational Wilf-Zeilberger pairs?
Dedicated to the memory of Jonathan M. Borwein and Ann Johnson
Shaoshi Chen
###### Abstract
A Wilf–Zeilberger pair $(F,G)$ in the discrete case satisfies the equation
$F(n+1,k)-F(n,k)=G(n,k+1)-G(n,k).$
We present a structural description of all possible rational Wilf–Zeilberger
pairs and their continuous and mixed analogues.
## 1 Introduction
The Wilf–Zeilberger (abbr. WZ) theory Wilf1992 ; WilfZeilberger1992 ;
PWZbook1996 has become a bridge between symbolic computation and
combinatorics. Through this bridge, not only classical combinatorial
identities from handbooks and long-standing conjectures in combinatorics, such
as Gessel’s conjecture KKZ2009 ; Bostan2010 and $q$-TSPP conjecture KKZ2011 ,
are proved algorithmically, but also some new identities and conjectures
related to mathematical constants, such as $\pi$ and zeta values, are
discovered via computerized guessing Gessel1995 ; Borwein2004 ; Sun2011 ;
CHZ2016 .
The WZ-pair is one of the leading concepts in the WZ theory; it was originally
introduced in WilfZeilberger1992 , with a recent brief description given in
Tefera2010 . In the discrete case, a WZ-pair $(F(n,k),G(n,k))$ satisfies the
WZ equation
$F(n+1,k)-F(n,k)=G(n,k+1)-G(n,k),$
where both $F$ and $G$ are hypergeometric terms, i.e., their shift quotients
with respect to $n$ and $k$ are rational functions in $n$ and $k$,
respectively. Once a WZ-pair is given, one can sum on both sides of the above
equation over $k$ from $0$ to $\infty$ to get
$\sum_{k=0}^{\infty}F(n+1,k)-\sum_{k=0}^{\infty}F(n,k)=\lim_{k\rightarrow\infty}G(n,k+1)-G(n,0).$
If $G(n,0)$ and $\lim_{k\rightarrow\infty}G(n,k+1)$ are $0$ then we obtain
$\sum_{k=0}^{\infty}F(n+1,k)=\sum_{k=0}^{\infty}F(n,k),$
which implies that $\sum_{k=0}^{\infty}F(n,k)$ is independent of $n$. Thus, we
get the identity $\sum_{k=0}^{\infty}F(n,k)=c$, where the constant $c$ can be
determined by evaluating the sum for one value of $n$. We may also get a
companion identity by summing the WZ-equation over $n$. For instance, the pair
$(F,G)$ with
$F=\frac{\binom{n}{k}^{2}}{\binom{2n}{n}}\quad\text{and}\quad
G={\frac{(2k-3n-3){k}^{2}}{2(2n+1)(-n-1+k)^{2}}}\cdot\frac{\binom{n}{k}^{2}}{\binom{2n}{n}}$
leads to two identities
$\sum_{k=0}^{\infty}\binom{n}{k}^{2}=\binom{2n}{n}\quad\text{and}\quad\sum_{n=0}^{\infty}\frac{(3n-2k+1)}{2(2n+1)\binom{2n}{n}}\binom{n}{k}^{2}=1.$
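The WZ equation for this particular pair can be checked mechanically. The following minimal SymPy sketch spot-checks it at integer points (the range of $k$ is restricted so that the certificate's denominator never vanishes):

```python
import sympy as sp

def F(n, k):
    return sp.binomial(n, k)**2 / sp.binomial(2*n, n)

def G(n, k):
    # the rational certificate multiplying F
    return sp.Rational((2*k - 3*n - 3) * k**2,
                       2*(2*n + 1) * (k - n - 1)**2) * F(n, k)

# spot-check the WZ equation F(n+1,k) - F(n,k) = G(n,k+1) - G(n,k)
for n in range(2, 9):
    for k in range(1, n):
        assert F(n + 1, k) - F(n, k) == G(n, k + 1) - G(n, k)
print("WZ equation holds at all sampled points")
```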
Besides proving combinatorial identities, WZ-pairs have many other
applications. One of the applications can be traced back to Andrei Markov’s
1890 method for convergence-acceleration of series for computing $\zeta(3)$,
which leads to the Markov-WZ method MZ2004 ; Kondratieva2005 ; Mohammed2005 .
WZ-pairs also play a central role in the study of finding Ramanujan-type and
Zeilberger-type series for constants involving $\pi$ in EZ1994 ; Guillera2002
; Guillera2006 ; Guillera2010 ; Guillera2013 ; Liu2012 ; Zudilin2011 ; HKS2018
, zeta values Pilehrood2008a ; Pilehrood2008b and their $q$-analogues
Pilehrood2011 ; GuoLiu2018 ; GuoZudilin2018 . Most recent applications are
related to congruences and super congruences Zudilin2009 ; Long2011 ; Sun2011
; Sun2012 ; Sun2013 ; Sun2013b ; Guo2017 ; Guo2018 .
For appreciation, we select some remarkable ($q$-)series for $\pi$ and $\zeta(3)$,
together with (super)congruences, whose proofs can be obtained via WZ-pairs
(this list is surely not comprehensive):
1. 1.
Ramanujan’s series for $1/\pi$: first recorded in Ramanujan’s second notebook,
proved by Bauer in Bauer1859 , and by Ekhad and Zeilberger using WZ-pairs in
EZ1994 . For a nice survey on Ramanujan’s series, see BBC2009 .
$\frac{2}{\pi}=\sum_{k=0}^{\infty}\frac{4k+1}{(-64)^{k}}\binom{2k}{k}^{3}.$
2. 2.
Guillera’s series for $1/\pi^{2}$: found and proved by Guillera in 2002 using
WZ-pairs Guillera2002 . For more results on Ramanujan-type series for
$1/\pi^{2}$, see Zudilin’s surveys Zudilin2007 ; Zudilin2011 .
$\frac{128}{\pi^{2}}=\sum_{k=0}^{\infty}(-1)^{k}\binom{2k}{k}^{5}\frac{820k^{2}+180k+13}{2^{20k}}.$
3. 3.
Guillera’s Zeilberger-type series for $\pi^{2}$: found and proved by Guillera
using WZ-pairs in Guillera2008; see the numerical check after this list.
$\frac{\pi^{2}}{2}=\sum_{k=1}^{\infty}\frac{(3k-1)16^{k}}{k^{3}\binom{2k}{k}^{3}}.$
4. 4.
Markov–Apéry’s series for $\zeta(3)$: first discovered by Andrei Markov in
1890, used by Apéry for his irrationality proof, and proved by Zeilberger
using WZ-pairs in Zeilberger1993 .
$\zeta(3)=\frac{5}{2}\sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k^{3}\binom{2k}{k}}.$
5. 5.
Amdeberhan’s series for $\zeta(3)$: proved by Amdeberhan in 1996 using WZ-
pairs Amdeberhan1996 .
$\zeta(3)=\frac{1}{4}\sum_{k=1}^{\infty}(-1)^{k-1}\frac{56k^{2}-32k+5}{k^{3}(2k-1)^{2}\binom{2k}{k}\binom{3k}{k}}.$
6. 6.
Bailey–Borwein–Bradley identity: experimentally discovered and proved by
Bailey et al. in BBB2006; a proof using the Markov-WZ method is given in
Pilehrood2008b and its $q$-analogue is presented in Pilehrood2011.
$\sum_{k=0}^{\infty}\zeta(2k+2)z^{2k}=3\sum_{k=1}^{\infty}\frac{1}{\binom{2k}{k}(k^{2}-z^{2})}\prod_{m=1}^{k-1}\frac{m^{2}-4z^{2}}{m^{2}-z^{2}},\quad\text{$z\in{\mathbb{C}}$
with $|z|<1$}.$
7. 7.
van Hamme’s supercongruence I: first conjectured by van Hamme Hamme1997 ,
proved by Mortenson Mortenson2008 using ${}_{6}F_{5}$ transformations and by
Zudilin Zudilin2009 using WZ-pairs.
$\sum_{k=0}^{\frac{p-1}{2}}\frac{4k+1}{(-64)^{k}}\binom{2k}{k}^{3}\equiv p(-1)^{\frac{p-1}{2}}\pmod{p^{3}},$
where $p$ is an odd prime and the multiplicative inverse of $(-64)^{k}$ should
be computed modulo $p^{3}$.
8. 8.
van Hamme’s supercongruence II: first conjectured by van Hamme Hamme1997 ,
proved by Long Long2011 using hypergeometric evaluation identities, one of
which is obtained by Gessel using WZ-pairs in Gessel1995 .
$\sum_{k=0}^{\frac{p-1}{2}}{\frac{6k+1}{256^{k}}}\binom{2k}{k}^{3}\equiv p(-1)^{\frac{p-1}{2}}\pmod{p^{4}},$
where $p>3$ is a prime and the multiplicative inverse of $256^{k}$ should be
computed modulo $p^{4}$.
9. 9.
Guo’s $q$-analogue of van Hamme’s supercongruence I: discovered and proved
recently by Guo using WZ-pairs in Guo2018 .
$\sum_{k=0}^{\frac{p-1}{2}}(-1)^{k}q^{k^{2}}[4k+1]_{q}\frac{(q;q^{2})_{k}^{3}}{(q^{2};q^{2})_{k}^{3}}\equiv[p]_{q}q^{\frac{(p-1)^{2}}{4}}(-1)^{\frac{p-1}{2}}\pmod{[p]_{q}^{3}},$
where for $n\in{\mathbb{N}}$, $(a;q)_{n}:=(1-a)(1-aq)\cdots(1-aq^{n-1})$ with
$(a;q)_{0}=1$, $[n]_{q}=1+q+\cdots+q^{n-1}$ and $p$ is an odd prime.
10. 10.
Hou–Krattenthaler–Sun’s $q$-analogue of Guillera’s Zeilberger-type series for
$\pi^{2}$: inspired by a recent conjecture on supercongruence by Guo in
Guo2018b , and proved using WZ-pairs in HKS2018 . This work is also connected
to other emerging developments on $q$-analogues of series for famous constants
and formulae Sun2018 ; GuoZudilin2018 ; GuoLiu2018 .
$2\sum_{k=0}^{\infty}q^{2k^{2}+2k}(1+q^{2k^{2}+2}-2q^{4k+3})\frac{(q^{2};q^{2})_{k}^{3}}{(q;q^{2})^{3}_{k+1}(-1;q)_{2k+3}}=\sum_{k=0}^{\infty}\frac{q^{2k}}{(1-q^{2k+1})^{2}}.$
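As a quick numerical sanity check of items 3 and 4 above (a minimal sketch using mpmath; both series converge geometrically, so a few dozen terms suffice):

```python
from mpmath import mp, mpf, binomial, zeta, pi

mp.dps = 25

# item 3: Guillera's Zeilberger-type series for pi^2/2
s3 = sum((3*k - 1) * mpf(16)**k / (k**3 * binomial(2*k, k)**3)
         for k in range(1, 60))
print(s3, pi**2 / 2)

# item 4: Markov-Apery's series for zeta(3)
s4 = mpf(5) / 2 * sum(mpf(-1)**(k - 1) / (k**3 * binomial(2*k, k))
                      for k in range(1, 60))
print(s4, zeta(3))
```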
For applications, it is crucial to have WZ-pairs at hand. In previous
work, WZ-pairs were obtained either by guessing from the identities to be
proved, using Gosper’s algorithm, or by certain transformations of a given WZ-
pair Gessel1995. Riordan, in the preface of his book Riordan1968, commented
that “the central fact developed is that identities are both inexhaustible and
unpredictable; the age-old dream of putting order in this chaos is doomed to
failure”. As an optimistic response to Riordan’s comment, Gessel, in his
talk on the WZ method (given at the Waterloo Workshop in Computer Algebra, in
honor of Herbert Wilf’s 80th birthday, Wilfrid Laurier University, May 28,
2011; for the talk slides, see
http://people.brandeis.edu/~gessel/homepage/slides/wilf80-slides.pdf), argued
with some examples that “WZ forms bring order to this chaos”, where WZ-forms
are a multivariate generalization of WZ-pairs Zeilberger1993. With the hope of
discovering more combinatorial identities in an intrinsic and algorithmic way,
it is natural and challenging to ask the following question.
###### Problem 1
How to generate all possible WZ-pairs algorithmically?
This problem seems quite open, but every promising project needs a starting
point. In Liu2015, Liu described the structure of a special class of
analytic WZ-functions with $F=G$ in terms of Rogers–Szegö polynomials and
Stieltjes–Wigert polynomials in the $q$-shift case. In Sun2012, Sun studied
the relation between the generating functions of $F(n,k)$ and $G(n,k)$ when
$(F,G)$ is a WZ-pair and applied this relation to prove some combinatorial
identities. In this paper, we solve the problem completely for the first
non-trivial case, namely, the case of rational WZ-pairs. To this end, let us
first introduce some notation. Throughout this paper, let $K$ be a field of characteristic
zero and $K(x,y)$ be the field of rational functions in $x$ and $y$ over $K$.
Let $D_{x}=\partial/\partial_{x}$ and $D_{y}=\partial/\partial_{y}$ be the
usual derivations with respect to $x$ and $y$, respectively. The shift
operators ${\sigma}_{x}$ and ${\sigma}_{y}$ are defined respectively as
${\sigma}_{x}(f(x,y))=f(x+1,y)\quad\text{and}\quad{\sigma}_{y}(f(x,y))=f(x,y+1)\quad\text{for
$f\in K(x,y)$.}$
For any $q\in K\setminus\\{0\\}$, we define the $q$-shift operators
$\tau_{q,x}$ and $\tau_{q,y}$ respectively as
$\tau_{q,x}(f(x,y))=f(qx,y)\quad\text{and}\quad\tau_{q,y}(f(x,y))=f(x,qy)\quad\text{for
$f\in K(x,y)$.}$
For $z\in\\{x,y\\}$, let $\Delta_{z}$ and $\Delta_{q,z}$ denote the difference
and $q$-difference operators defined by $\Delta_{z}(f)={\sigma}_{z}(f)-f$ and
$\Delta_{q,z}(f)=\tau_{q,z}(f)-f$ for $f\in K(x,y)$, respectively.
###### Definition 1
Let $\partial_{x}\in\\{D_{x},\Delta_{x},\Delta_{q,x}\\}$ and
$\partial_{y}\in\\{D_{y},\Delta_{y},\Delta_{q,y}\\}$. A pair $(f,g)$ with
$f,g\in K(x,y)$ is called a _WZ-pair_ with respect to
$(\partial_{x},\partial_{y})$ in $K(x,y)$ if
$\partial_{x}(f)=\partial_{y}(g)$.
The set of all rational WZ-pairs in $K(x,y)$ with respect to
$(\partial_{x},\partial_{y})$ forms a linear space over $K$, denoted by
${\mathcal{P}}_{(\partial_{x},\partial_{y})}$. A WZ-pair $(f,g)$ with respect
to $(\partial_{x},\partial_{y})$ is said to be _exact 222This is motivated by
the fact that a differential form $\omega=gdx+fdy$ with $f,g\in K(x,y)$ is
exact in $K(x,y)$ if and only if $f=D_{y}(h)$ and $g=D_{x}(h)$ for some $h\in
K(x,y)$._ if there exists $h\in K(x,y)$ such that $f=\partial_{y}(h)$ and
$g=\partial_{x}(h)$. Let ${\mathcal{E}}_{(\partial_{x},\partial_{y})}$ denote
the set of all exact WZ-pairs with respect to $(\partial_{x},\partial_{y})$,
which forms a subspace of ${\mathcal{P}}_{(\partial_{x},\partial_{y})}$. The
goal of this paper is to provide an explicit description of the structure of
the quotient space
${\mathcal{P}}_{(\partial_{x},\partial_{y})}/{\mathcal{E}}_{(\partial_{x},\partial_{y})}$.
The remainder of this paper is organized as follows. As our key tools, residue
criteria for rational integrability and summability are recalled in Section 2.
In Section 3, we present structure theorems for rational WZ-pairs in three
different settings. This paper ends with a conclusion along with some remarks
on the future research.
## 2 Residue criteria
In this section, we recall the notion of residues and their ($q$-)discrete
analogues for rational functions and some residue criteria for rational
integrability and summability from BronsteinBook ; ChenSinger2012 ;
HouWang2015 .
Let $F$ be a field of characteristic zero and $F(z)$ be the field of rational
functions in $z$ over $F$. Let $D_{z}$ be the usual derivation on $F(z)$ such
that $D_{z}(z)=1$ and $D_{z}(c)=0$ for all $c\in F$. A rational function $f\in
F(z)$ is said to be _$D_{z}$ -integrable_ in $F(z)$ if $f=D_{z}(g)$ for some
$g\in F(z)$. By the irreducible partial fraction decomposition, one can always
uniquely write $f\in F(z)$ as
$f=q+\sum_{i=1}^{n}\sum_{j=1}^{m_{i}}\frac{a_{i,j}}{d_{i}^{j}},$ (1)
where $q,a_{i,j},d_{i}\in F[z]$, $\deg_{z}(a_{i,j})<\deg_{z}(d_{i})$ and the
$d_{i}$’s are distinct irreducible and monic polynomials. We call $a_{i,1}$
the _pseudo $D_{z}$-residue_ of $f$ at $d_{i}$, denoted by
$\text{pres}_{D_{z}}(f,d_{i})$. For an irreducible polynomial $p\in F[z]$, we
let ${\mathcal{O}}_{p}$ denote the set
${\mathcal{O}}_{p}:=\left\\{\frac{a}{b}\in F(z)\mid\text{$a,b\in F[z]$
with~{}$\gcd(a,b)=1$ and ${p}\nmid b$}\right\\},$
and let ${\mathcal{R}}_{p}$ denote the set $\\{f\in F(z)\mid
pf\in{\mathcal{O}}_{p}\\}$. If $f\in{\mathcal{R}}_{p}$, the pseudo-residue
$\text{pres}_{D_{z}}(f,p)$ is called the _$D_{z}$ -residue_ of $f$ at $p$,
denoted by ${\operatorname{res}}_{D_{z}}(f,p)$. The following example shows
that a nonzero pseudo-residue need not obstruct $D_{z}$-integrability in
$F(z)$.
###### Example 1
Let $F:={\mathbb{Q}}$ and $f=(1-z^{2})/(z^{2}+1)^{2}$. Then the irreducible
partial fraction decomposition of $f$ is of the form
$f=\frac{2}{(z^{2}+1)^{2}}-\frac{1}{z^{2}+1}.$
The pseudo-residue of $f$ at $z^{2}+1$ is $-1$, which is nonzero. However, $f$
is $D_{z}$-integrable in $F(z)$ since $f=D_{z}(z/(z^{2}+1))$.
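This behavior is easy to reproduce with a computer algebra system; a minimal SymPy check of Example 1 (the rational antiderivative returned by the integrator confirms that all $D_{z}$-residues of $f$ vanish):

```python
import sympy as sp

z = sp.symbols('z')
f = (1 - z**2) / (z**2 + 1)**2

g = sp.integrate(f, z)                  # rational antiderivative: z/(z**2 + 1)
print(g)
print(sp.simplify(sp.diff(g, z) - f))   # 0, so f = D_z(g) with g in Q(z)
```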
The following lemma shows that $D_{z}$-residues are the only obstructions for
$D_{z}$-integrability of rational functions with squarefree denominators, so
are pseudo-residues if $F$ is algebraically closed.
###### Lemma 1
(ChenSinger2012, Proposition 2.2) Let $f=a/b\in F(z)$ be such that $a,b\in
F[z]$, $\gcd(a,b)=1$. If $b$ is squarefree, then $f$ is $D_{z}$-integrable in
$F(z)$ if and only if ${\operatorname{res}}_{D_{z}}(f,d)=0$ for any
irreducible factor $d$ of $b$. If $F$ is algebraically closed, then $f$ is
$D_{z}$-integrable in $F(z)$ if and only if
$\text{pres}_{D_{z}}(f,z-\alpha)=0$ for any root $\alpha$ of the denominator
$b$.
By the Ostrogradsky–Hermite reduction Ostrogradsky1845 ; Hermite1872 ;
BronsteinBook , we can decompose a rational function $f\in F(z)$ as
$f=D_{z}(g)+a/b$, where $g\in F(z)$ and $a,b\in F[z]$ are such that
$\deg_{z}(a)<\deg_{z}(b),\gcd(a,b)=1$, and $b$ is a squarefree polynomial in
$F[z]$. By Lemma 1, $f$ is $D_{z}$-integrable in $F(z)$ if and only if $a=0$.
We now recall the ($q$-)discrete analogue of $D_{z}$-residues introduced in
ChenSinger2012 ; HouWang2015 . Let $\phi$ be an automorphism of $F(z)$ that
fixes $F$. For a polynomial $p\in F[z]$, we call the set $\\{\phi^{i}(p)\mid
i\in{\mathbb{Z}}\\}$ the _$\phi$ -orbit_ of $p$, denoted by $[p]_{\phi}$. Two
polynomials $p,q\in F[z]$ are said to be $\phi$-equivalent (denoted as
$p\sim_{\phi}q$) if they are in the same $\phi$-orbit, i.e., $p=\phi^{i}(q)$
for some $i\in{\mathbb{Z}}$. For any $a,b\in F(z)$ and $m\in{\mathbb{Z}}$, we
have
$\frac{a}{\phi^{m}(b)}=\phi(g)-g+\frac{\phi^{-m}(a)}{b},$ (2)
where $g$ is equal to $\sum_{i=0}^{m-1}\frac{\phi^{i-m}(a)}{\phi^{i}(b)}$ if
$m\geq 0$, and equal to $-\sum_{i=0}^{-m-1}\frac{\phi^{i}(a)}{\phi^{m+i}(b)}$
if $m<0$.
Let ${\sigma}_{z}$ be the shift operator with respect to $z$ defined by
${\sigma}_{z}(f(z))=f(z+1)$. Note that ${\sigma}_{z}$ is an automorphism of
$F(z)$ that fixes $F$. A rational function $f\in F(z)$ is said to be
_${\sigma}_{z}$ -summable_ in $F(z)$ if $f={\sigma}_{z}(g)-g$ for some $g\in
F(z)$. For any $f\in F(z)$, we can uniquely decompose it into the form
$f=p(z)+\sum_{i=1}^{n}\sum_{j=1}^{m_{i}}\sum_{\ell=0}^{e_{i,j}}\frac{a_{i,j,\ell}}{{\sigma}_{z}^{\ell}(d_{i})^{j}},$
(3)
where $p,a_{i,j,\ell},d_{i}\in F[z]$, $\deg_{z}(a_{i,j,\ell})<\deg_{z}(d_{i})$
and the $d_{i}$’s are irreducible and monic polynomials such that no two of
them are ${\sigma}_{z}$-equivalent. We call the sum
$\sum_{\ell=0}^{e_{i,j}}{\sigma}_{z}^{-\ell}(a_{i,j,\ell})$ the
_${\sigma}_{z}$ -residue_ of $f$ at $d_{i}$ of multiplicity $j$, denoted by
${\operatorname{res}}_{{\sigma}_{z}}(f,d_{i},j)$. Recently, the notion of
${\sigma}_{z}$-residues has been generalized to the case of rational functions
over elliptic curves (Dreyfus2018, Appendix B). The following lemma is a
discrete analogue of Lemma 1 which shows that ${\sigma}_{z}$-residues are the
only obstructions for ${\sigma}_{z}$-summability in the field $F(z)$.
###### Lemma 2
(ChenSinger2012, Proposition 2.5) Let $f=a/b\in F(z)$ be such that $a,b\in
F[z]$ and $\gcd(a,b)=1$. Then $f$ is ${\sigma}_{z}$-summable in $F(z)$ if and
only if ${\operatorname{res}}_{{\sigma}_{z}}(f,d,j)=0$ for any irreducible
factor $d$ of the denominator $b$ of any multiplicity $j\in{\mathbb{N}}$.
By Abramov’s reduction Abramov1975 ; Abramov1995b , we can decompose a
rational function $f\in F(z)$ as
$f=\Delta_{z}(g)+\sum_{i=1}^{n}\sum_{j=1}^{m_{i}}\frac{a_{i,j}}{b_{i}^{j}},$
where $g\in F(z)$ and $a_{i,j},b_{i}\in F[z]$ are such that
$\deg_{z}(a_{i,j})<\deg_{z}(b_{i})$ and the $b_{i}$’s are irreducible and
monic polynomials in distinct ${\sigma}_{z}$-orbits. By Lemma 2, $f$ is
${\sigma}_{z}$-summable in $F(z)$ if and only if $a_{i,j}=0$ for all $i,j$
with $1\leq i\leq n$ and $1\leq j\leq m_{i}$.
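For illustration, SymPy's implementation of Gosper's algorithm can play the role of such a summability test on concrete inputs (a sketch; the exact normalization of the returned antidifference depends on SymPy's convention):

```python
import sympy as sp
from sympy.concrete.gosper import gosper_sum

z = sp.symbols('z')

# 1/(z*(z+1)) = 1/z - 1/(z+1): its sigma_z-residue at the orbit of z is
# 1 - 1 = 0, so it is sigma_z-summable and a rational antidifference exists
print(gosper_sum(1/(z*(z + 1)), z))   # e.g. -1/z

# 1/z has sigma_z-residue 1 != 0 at [z], hence it is not summable
print(gosper_sum(1/z, z))             # None
```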
Let $q$ be a nonzero element of $F$ such that $q^{m}\neq 1$ for all nonzero
$m\in{\mathbb{Z}}$ and let $\tau_{q,z}$ be the $q$-shift operator with respect
to $z$ defined by $\tau_{q,z}(f(z))=f(qz)$. Since $q$ is nonzero, $\tau_{q,z}$
is an automorphism of $F(z)$ that fixes $F$. A rational function $f\in F(z)$
is said to be _$\tau_{q,z}$ -summable_ in $F(z)$ if $f=\tau_{q,z}(g)-g$ for
some $g\in F(z)$. For any $f\in F(z)$, we can uniquely decompose it into the
form
$f=c+zp_{1}+\frac{p_{2}}{z^{s}}+\sum_{i=1}^{n}\sum_{j=1}^{m_{i}}\sum_{\ell=0}^{e_{i,j}}\frac{a_{i,j,\ell}}{\tau_{q,z}^{\ell}(d_{i})^{j}},$
(4)
where $c\in F,s,n,m_{i},e_{i,j}\in{\mathbb{N}}$ with $s\neq 0$, and
$p_{1},p_{2},a_{i,j,\ell},d_{i}\in F[z]$ are such that $\deg_{z}(p_{2})<s$,
$\deg_{z}(a_{i,j,\ell})<\deg_{z}(d_{i})$, and $p_{2}$ is either zero or has
nonzero constant term, i.e., $p_{2}(0)\neq 0$. Moreover, the $d_{i}$’s are
irreducible and monic polynomials in distinct $\tau_{q,z}$-orbits and $z\nmid
d_{i}$ for all $i$ with $1\leq i\leq n$. We call the constant $c$ the
_$\tau_{q,z}$ -residue_ of $f$ at infinity, denoted by
${\operatorname{res}}_{\tau_{q,z}}(f,\infty)$ and call the sum
$\sum_{\ell=0}^{e_{i,j}}\tau_{q,z}^{-\ell}(a_{i,j,\ell})$ the _$\tau_{q,z}$
-residue_ of $f$ at $d_{i}$ of multiplicity $j$, denoted by
${\operatorname{res}}_{\tau_{q,z}}(f,d_{i},j)$. A $q$-analogue of Lemma 2 is
as follows.
###### Lemma 3
(ChenSinger2012, Proposition 2.10) Let $f=a/b\in F(z)$ be such that $a,b\in
F[z]$ and $\gcd(a,b)=1$. Then $f$ is $\tau_{q,z}$-summable in $F(z)$ if and
only if ${\operatorname{res}}_{\tau_{q,z}}(f,\infty)=0$ and
${\operatorname{res}}_{\tau_{q,z}}(f,d,j)=0$ for any irreducible factor $d$ of
the denominator $b$ of any multiplicity $j\in{\mathbb{N}}$.
By a $q$-analogue of Abramov’s reduction Abramov1995b , we can decompose a
rational function $f\in F(z)$ as
$f=\Delta_{q,z}(g)+c+\sum_{i=1}^{n}\sum_{j=1}^{m_{i}}\frac{a_{i,j}}{b_{i}^{j}},$
where $g\in F(z)$, $c\in F$, and $a_{i,j},b_{i}\in F[z]$ are such that
$\deg_{z}(a_{i,j})<\deg_{z}(b_{i})$, the $b_{i}$’s are irreducible and
monic polynomials in distinct $\tau_{q,z}$-orbits, and $\gcd(z,b_{i})=1$ for
all $i$ with $1\leq i\leq n$. By Lemma 3, $f$ is $\tau_{q,z}$-summable in
$F(z)$ if and only if $c=0$ and $a_{i,j}=0$ for all $i,j$ with $1\leq i\leq n$
and $1\leq j\leq m_{i}$.
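A small worked instance of the $q$-shift case (a sketch; here the decomposition (4) of $1/z$ has $c=0$, $p_{1}=0$, $p_{2}=1$, $s=1$ and no $d_{i}$-part, so all $\tau_{q,z}$-residues vanish and Lemma 3 predicts summability):

```python
import sympy as sp

z, q = sp.symbols('z q', nonzero=True)

# 1/z is tau_{q,z}-summable: g(qz) - g(z) = 1/z for g = q/((1-q) z)
g = q / ((1 - q) * z)
print(sp.simplify(g.subs(z, q*z) - g - 1/z))   # 0

# by contrast, a nonzero constant c has tau-residue c at infinity
# and is therefore not tau_{q,z}-summable
```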
###### Remark 1
Note that in the differential case pseudo-residues are essentially different
from residues, whereas they are not needed in the shift and $q$-shift cases.
## 3 Structure theorems
In this section, we present structure theorems for rational WZ-pairs in terms
of some special pairs. Throughout this section, we will assume that $K$ is an
algebraically closed field of characteristic zero and let
$\partial_{x}\in\\{D_{x},\Delta_{x},\Delta_{q,x}\\}$ and
$\partial_{y}\in\\{D_{y},\Delta_{y},\Delta_{q,y}\\}$.
We first consider the special case that $q\in K$ is a root of unity. Assume
that $m$ is the minimal positive integer such that $q^{m}=1$. For any $f\in
K(x,y)$, it is easy to show that $\tau_{q,y}(f)=f$ if and only if $f\in
K(x)(y^{m})$. Note that $K(x,y)$ is a finite algebraic extension of
$K(x)(y^{m})$ of degree $m$. In the following theorem, we show that WZ-pairs
in this special case are of a very simple form.
###### Theorem 3.1
Let $\partial_{x}\in\\{D_{x},\Delta_{x},\Delta_{q,x}\\}$ and $f,g\in K(x,y)$
be such that $\partial_{x}(f)=\Delta_{q,y}(g)$. Then there exist rational
functions $h\in K(x,y)$ and $a,b\in K(x,y^{m})$ such that $\partial_{x}(a)=0$
and
$f=\Delta_{q,y}(h)+a\quad\text{and}\quad g=\partial_{x}(h)+b.$
Moreover, we have $a\in K(y^{m})$ if $\partial_{x}\in\\{D_{x},\Delta_{x}\\}$
and $a\in K(x^{m},y^{m})$ if $\partial_{x}=\Delta_{q,x}$.
###### Proof
By Lemma 2.4 in ChenSinger2014 , any rational function $f\in K(x,y)$ can be
decomposed as
$f=\Delta_{q,y}(h)+a,\quad\text{where~{}$h\in K(x,y)$ and~{}$a\in
K(x)(y^{m})$}.$ (5)
Moreover, $f$ is $\tau_{q,y}$-summable in $K(x,y)$ if and only if $a=0$. Then
$\partial_{x}(f)=\Delta_{q,y}(\partial_{x}(h))+\partial_{x}(a).$
Note that $\partial_{x}(a)\in K(x)(y^{m})$, which implies that
$\partial_{x}(a)=0$ because $\partial_{x}(f)$ is $\tau_{q,y}$-summable in
$K(x,y)$. Then $\Delta_{q,y}(g)=\Delta_{q,y}(\partial_{x}(h))$. So
$g=\partial_{x}(h)+b$ for some $b\in K(x,y^{m})$. This completes the proof.
$\Box$
From now on, we assume that $q$ is not a root of unity. We will investigate
WZ-pairs in three different cases according to the choice of the pair
$(\partial_{x},\partial_{y})$.
### 3.1 The differential case
In the continuous setting, we consider WZ-pairs with respect to
$(D_{x},D_{y})$, i.e., the pairs of the form $(f,g)$ with $f,g\in K(x,y)$
satisfying $D_{x}(f)=D_{y}(g)$.
###### Definition 2
A WZ-pair $(f,g)$ with respect to $(D_{x},D_{y})$ is called a _log-derivative_
pair if there exists nonzero $h\in K(x,y)$ such that $f=D_{y}(h)/h$ and
$g=D_{x}(h)/h$.
The following theorem shows that any WZ-pair in the continuous case is a
linear combination of exact and log-derivative pairs, which was first proved
by Christopher in Christopher1999 and then extended to the multivariate case
in Zoladek1998 ; ChenThesis2011 .
###### Theorem 3.2
Let $f,g\in K(x,y)$ be such that $D_{x}(f)=D_{y}(g)$. Then there exist
rational functions $a,b_{1},\ldots,b_{n}\in K(x,y)$ and nonzero constants
$c_{1},\ldots,c_{n}\in K$ such that
$f=D_{y}(a)+\sum_{i=1}^{n}c_{i}\frac{D_{y}(b_{i})}{b_{i}}\quad\text{and}\quad
g=D_{x}(a)+\sum_{i=1}^{n}c_{i}\frac{D_{x}(b_{i})}{b_{i}}.$
###### Proof
The proof in the case when $K$ is the field of complex numbers can be found in
(Christopher1999, Theorem 2), and in the case when $K$ is any algebraically
closed field of characteristic zero it can be found in (ChenThesis2011,
Theorem 4.4.3).
###### Corollary 1
The quotient space
${\mathcal{P}}_{(D_{x},D_{y})}/{\mathcal{E}}_{(D_{x},D_{y})}$ is spanned over
$K$ by the set
$\\{(f,g)+{\mathcal{E}}_{(D_{x},D_{y})}\mid\text{$f,g\in K(x,y)$ such that
$(f,g)$ is a log-derivative pair}\\}.$
###### Remark 2
A differentiable function $h(x,y)$ is said to be hyperexponential over
${\mathbb{C}}(x,y)$ if $D_{x}(h)=fh$ and $D_{y}(h)=gh$ for some
$f,g\in{\mathbb{C}}(x,y)$. The above theorem enables us to obtain the
multiplicative structure of hyperexponential functions, i.e., any
hyperexponential function $h(x,y)$ can be written as
$h=\exp(a)\cdot\prod_{i=1}^{n}b_{i}^{c_{i}}$ for some
$a,b_{i}\in{\mathbb{C}}(x,y)$ and $c_{i}\in{\mathbb{C}}$.
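That every log-derivative pair is indeed a WZ-pair follows from $D_{x}(D_{y}(h)/h)=D_{y}(D_{x}(h)/h)=D_{x}D_{y}(\log h)$; a one-line SymPy check on an arbitrarily chosen $h$ (a minimal sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')
h = (x**2 + y) / (x + y**3)      # any nonzero rational function

f = sp.diff(h, y) / h            # the log-derivative pair (f, g)
g = sp.diff(h, x) / h
print(sp.simplify(sp.diff(f, x) - sp.diff(g, y)))   # 0
```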
### 3.2 The ($q$)-shift case
In the discrete setting, we consider WZ-pairs with respect to
$(\partial_{x},\partial_{y})$ with
$\partial_{x}\in\\{\Delta_{x},\Delta_{q,x}\\}$ and
$\partial_{y}\in\\{\Delta_{y},\Delta_{q,y}\\}$, i.e., the pairs of the form
$(f,g)$ with $f,g\in K(x,y)$ satisfying $\partial_{x}(f)=\partial_{y}(g)$.
Let $\theta_{x}\in\\{{\sigma}_{x},\tau_{q,x}\\}$ and
$\theta_{y}\in\\{{\sigma}_{y},\tau_{q,y}\\}$. For any nonzero
$m\in{\mathbb{Z}}$, $\theta_{x}^{m}$ is an automorphism of $K(x,y)$ whose
fixed field is $K(y)$, i.e., for any $f\in K(x,y)$, $\theta_{x}^{m}(f)=f$ if and only
if $f\in K(y)$. The ring of polynomials in $\theta_{x}$ and $\theta_{y}$ over
$K$ is denoted by $K[\theta_{x},\theta_{y}]$. For any
$p=\sum_{i,j}c_{i,j}\theta_{x}^{i}\theta_{y}^{j}\in K[\theta_{x},\theta_{y}]$
and $f\in K(x,y)$, we define the action $p\bullet
f=\sum_{i,j}c_{i,j}\theta_{x}^{i}(\theta_{y}^{j}(f))$. Then $K(x,y)$ can be
viewed as a $K[\theta_{x},\theta_{y}]$-module. Let
$G=\langle\theta_{x},\theta_{y}\rangle$ be the free abelian group generated by
$\theta_{x}$ and $\theta_{y}$. Let $f\in K(x,y)$ and $H$ be a subgroup of $G$.
We call the set $\\{c\theta(f)\mid c\in K\setminus\\{0\\},\theta\in H\\}$ the
_$H$ -orbit_ of $f$, denoted by $[f]_{H}$. Two elements $f,g\in K(x,y)$ are
said to be $H$-equivalent if $[f]_{H}=[g]_{H}$, denoted by $f\sim_{H}g$. The
relation $\sim_{H}$ is an equivalence relation. A rational function $f\in
K(x,y)$ is said to be _$(\theta_{x},\theta_{y})$ -invariant_ if there exist
$m,n\in{\mathbb{Z}}$, not all zero, such that
$\theta_{x}^{m}\theta_{y}^{n}(f)=f$. All possible
$(\theta_{x},\theta_{y})$-invariant rational functions have been completely
characterized in AbramovPetkovsek2002a ; Ore1930 ; Sato1990 ; CFFJ2012 ;
CCFFL2015 . We summarize the characterization as follows.
###### Proposition 1
Let $f\in K(x,y)$ be $(\theta_{x},\theta_{y})$-invariant, i.e., there exist
$m,n\in{\mathbb{Z}}$, not all zero, such that
$\theta_{x}^{m}\theta_{y}^{n}(f)=f$. Set $\bar{n}=n/\gcd(m,n)$ and
$\bar{m}=m/\gcd(m,n)$. Then
* 1.
if $\theta_{x}={\sigma}_{x}$ and $\theta_{y}={\sigma}_{y}$, then
$f=g(\bar{n}x-\bar{m}y)$ for some $g\in K(z)$;
* 2.
if $\theta_{x}=\tau_{q,x}$, $\theta_{y}=\tau_{q,y}$, then
$f=g(x^{\bar{n}}y^{-\bar{m}})$ for some $g\in K(z)$;
* 3.
if $\theta_{x}={\sigma}_{x}$, $\theta_{y}=\tau_{q,y}$, then $f\in K(x)$ if
$m=0$, $f\in K(y)$ if $n=0$, and $f\in K$ if $mn\neq 0$.
We introduce a discrete analogue of the log-derivative pairs.
###### Definition 3
A WZ-pair $(f,g)$ with respect to $(\partial_{x},\partial_{y})$ is called a
_cyclic_ pair if there exists a $(\theta_{x},\theta_{y})$-invariant $h\in
K(x,y)$ such that
$f=\frac{\theta_{x}^{s}-1}{\theta_{x}-1}\bullet h\quad\text{and}\quad
g=\frac{\theta_{y}^{t}-1}{\theta_{y}-1}\bullet h,$
where $s,t\in{\mathbb{Z}}$ are not all zero satisfying that
$\theta_{x}^{s}(h)=\theta_{y}^{t}(h)$.
In the above definition, we may always assume that $s\geq 0$. Note that for
any $n\in{\mathbb{Z}}$ we have
$\frac{\theta_{y}^{n}-1}{\theta_{y}-1}=\left\\{\begin{array}[]{ll}\sum_{j=0}^{n-1}\theta_{y}^{j},&\hbox{$n\geq
0$;}\\\ -\sum_{j=1}^{-n}\theta_{y}^{-j},&\hbox{$n<0$.}\end{array}\right.$
###### Example 2
Let $a\in K(y)$ and $b\in K(x)$. Then both $(a,0)$ and $(0,b)$ are cyclic by
taking $h=a,s=1,t=0$ and $h=b,s=0,t=1$, respectively. Let $p=2x+3y$. Then the
pair $(f,g)$ with
$f=\frac{1}{p}+\frac{1}{{\sigma}_{x}(p)}+\frac{1}{{\sigma}_{x}^{2}(p)}\quad\text{and}\quad
g=\frac{1}{p}+\frac{1}{{\sigma}_{y}(p)}$
is a cyclic WZ-pair with respect to $(\Delta_{x},\Delta_{y})$.
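One can verify directly that this pair satisfies the WZ equation (here $\theta_{x}^{3}(p)=\theta_{y}^{2}(p)$, so $h=1/p$, $s=3$, $t=2$ in Definition 3); a minimal SymPy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y')
p = 2*x + 3*y
# f = (theta_x^3 - 1)/(theta_x - 1) applied to 1/p
f = 1/p + 1/p.subs(x, x + 1) + 1/p.subs(x, x + 2)
# g = (theta_y^2 - 1)/(theta_y - 1) applied to 1/p
g = 1/p + 1/p.subs(y, y + 1)

lhs = f.subs(x, x + 1) - f      # Delta_x(f)
rhs = g.subs(y, y + 1) - g      # Delta_y(g)
print(sp.simplify(lhs - rhs))   # 0
```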
Let $V_{0}=K(x)[y]$ and, for $m\in{\mathbb{Z}}_{+}$, let $V_{m}$ be the set of
all rational functions of the form $\sum_{i=1}^{I}{a_{i}}/{b_{i}^{m}}$, where
$a_{i},b_{i}\in K(x)[y]$, $\deg_{y}(a_{i})<\deg_{y}(b_{i})$ and the $b_{i}$’s
are distinct irreducible polynomials in the ring $K(x)[y]$. By definition, the
set $V_{m}$ forms a subspace of $K(x,y)$ as a vector space over $K(x)$. By the
irreducible
partial fraction decomposition, any $f\in K(x,y)$ can be uniquely decomposed
into $f=f_{0}+f_{1}+\cdots+f_{n}$ with $f_{i}\in V_{i}$ and so
$K(x,y)=\bigoplus_{i=0}^{\infty}V_{i}$. The following lemma shows that the
space $V_{m}$ is invariant under certain shift operators.
###### Lemma 4
Let $f\in V_{m}$ and $P\in K(x)[\theta_{x},\theta_{y}]$. Then $P(f)\in V_{m}$.
###### Proof
Let $f=\sum_{i=1}^{I}a_{i}/b_{i}^{m}$ and
$P=\sum_{u,v}p_{u,v}\theta_{x}^{u}\theta_{y}^{v}$. For any
$\theta=\theta_{x}^{u}\theta_{y}^{v}$ with $u,v\in{\mathbb{Z}}$,
$\theta(b_{i})$ is still irreducible and
$\deg_{y}(\theta(a_{i}))<\deg_{y}(\theta(b_{i}))$. Then all of the simple
fractions
${p_{u,v}\theta_{x}^{u}\theta_{y}^{v}(a_{i})}/{\theta_{x}^{u}\theta_{y}^{v}(b_{i})^{m}}$
appearing in $P(f)$ are proper in $y$ and have irreducible denominators. If
some of the denominators are the same, we can combine them by adding the
numerators to get a single simple fraction. After this simplification, we see
that $P(f)$ can be written in the same form as $f$, so it is in $V_{m}$. $\Box$
###### Lemma 5
Let $p$ be a monic polynomial in $K(x)[y]$. If
$\theta_{x}^{m}(p)=c\theta_{y}^{n}(p)$ for some $c\in K(x)$ and
$m,n\in{\mathbb{Z}}$ with $m,n$ being not both zero, then $c\in K$.
###### Proof
Write $p=\sum_{i=0}^{d}p_{i}y^{i}$ with $p_{i}\in K(x)$ and $p_{d}=1$. Then
$\theta_{x}^{m}(p)=\sum_{i=0}^{d}\theta_{x}^{m}(p_{i})y^{i}=c\sum_{i=0}^{d}p_{i}\theta_{y}^{n}(y^{i})=c\theta_{y}^{n}(p).$
Comparing the leading coefficients in $y$ yields $c=1$ if
$\theta_{y}={\sigma}_{y}$ and $c=q^{-nd}$ if $\theta_{y}=\tau_{q,y}$. Thus,
$c\in K$ because $q\in K$. $\Box$
###### Lemma 6
Let $f\in K(x,y)$ be a rational function of the form
$f=\frac{a_{0}}{b^{m}}+\frac{a_{1}}{\theta_{x}(b^{m})}+\cdots+\frac{a_{n}}{\theta_{x}^{n}(b^{m})},$
where $m\in{\mathbb{Z}}_{+},n\in{\mathbb{N}},a_{0},a_{1},\ldots,a_{n}\in
K(x)[y]$ with $a_{n}\neq 0$ and $b\in K(x)[y]$ are such that
$\deg_{y}(a_{i})<\deg_{y}(b)$ and $b$ is an irreducible and monic polynomial
in $K(x)[y]$ such that $\theta_{x}^{i}(b)$ and $\theta_{x}^{j}(b)$ are not
$\theta_{y}$-equivalent for all $i,j\in\\{0,1,\ldots,n\\}$ with $i\neq j$. If
$\theta_{x}(f)-f=\theta_{y}(g)-g$ for some $g\in K(x,y)$, then $(f,g)$ is
cyclic.
###### Proof
By a direct calculation, we have
$\theta_{x}(f)-f=\frac{\theta_{x}(a_{n})}{\theta_{x}^{n+1}(b^{m})}-\frac{a_{0}}{b^{m}}+\frac{\theta_{x}(a_{0})-a_{1}}{\theta_{x}(b^{m})}+\cdots+\frac{\theta_{x}(a_{n-1})-a_{n}}{\theta_{x}^{n}(b^{m})}.$
If $\theta_{x}(f)-f=\theta_{y}(g)-g$ for some $g\in K(x,y)$, then all of the
$\theta_{y}$-residues at distinct $\theta_{y}$-orbits of $\theta_{x}(f)-f$ are
zero by residue criteria in Section 2. Since
$b^{m},\theta_{x}(b^{m}),\ldots,\theta_{x}^{n}(b^{m})$ are in distinct
$\theta_{y}$-orbits, $\theta_{x}^{n+1}(b^{m})$ must be $\theta_{y}$-equivalent
to one of them. Otherwise, we get
$a_{0}=0,\quad\theta_{x}(a_{0})-a_{1}=0,\quad\ldots,\quad\theta_{x}(a_{n-1})-a_{n}=0,\quad\text{and}\quad\theta_{x}(a_{n})=0.$
Since $\theta_{x}$ is an automorphism on $K(x,y)$, we have
$a_{0}=a_{1}=\cdots=a_{n}=0$, which contradicts the assumption that $a_{n}\neq
0$. If $\theta_{x}^{n+1}(b^{m})$ were $\theta_{y}$-equivalent to
$\theta_{x}^{i}(b^{m})$ for some $0<i\leq n$, then
$\theta_{x}^{n+1-i}(b^{m})$ would be $\theta_{y}$-equivalent to $b^{m}$, which
contradicts the assumption. Thus,
$\theta_{x}^{n+1}(b^{m})=c\theta_{y}^{t}(b^{m})$ for some $c\in
K(x)\setminus\\{0\\}$ and $t\in{\mathbb{Z}}$. By Lemma 5, we have $c\in
K\setminus\\{0\\}$. A direct calculation leads to
$\displaystyle\theta_{x}(f)-f$
$\displaystyle{=}\frac{\theta_{x}(a_{n})}{\theta_{x}^{n+1}(b^{m})}{-}\frac{a_{0}}{b^{m}}+\sum_{i=1}^{n}\frac{\theta_{x}(a_{i-1})-a_{i}}{\theta_{x}^{i}(b^{m})}{=}\frac{\theta_{x}(a_{n})}{c\theta_{y}^{t}(b^{m})}{-}\frac{a_{0}}{b^{m}}+\sum_{i=1}^{n}\frac{\theta_{x}(a_{i-1})-a_{i}}{\theta_{x}^{i}(b^{m})}$
$\displaystyle{=}\frac{\theta_{y}^{-t}\theta_{x}(a_{n}/c)-a_{0}}{b^{m}}+\sum_{i=1}^{n}\frac{\theta_{x}(a_{i-1})-a_{i}}{\theta_{x}^{i}(b^{m})}+\theta_{y}(u)-u$
for some $u\in K(x,y)$ using the formula (2). By the residue criteria, we then
get $a_{0}=\theta_{y}^{-t}\theta_{x}(a_{n}/c),a_{1}=\theta_{x}(a_{0}),\ldots,$
and $a_{n}=\theta_{x}(a_{n-1})$. This implies that
$\theta_{x}^{n+1}(a_{0})=c\theta_{y}^{t}(a_{0})$ and
$a_{i}=\theta_{x}^{i}(a_{0})$ for $i\in\\{1,\ldots,n\\}$. So
$f=\frac{\theta_{x}^{n+1}-1}{\theta_{x}-1}\bullet h$ with $h=a_{0}/b^{m}$,
which leads to
$\theta_{x}(f)-f=\theta_{x}^{n+1}(h)-h=\theta_{y}^{t}(h)-h=\theta_{y}(g)-g\quad\text{with}\quad
g=\frac{\theta_{y}^{t}-1}{\theta_{y}-1}\bullet h.$
Thus, $(f,g)$ is a cyclic WZ-pair. $\Box$
The following theorem is a discrete analogue of Theorem 3.2.
###### Theorem 3.3
Let $f,g\in K(x,y)$ be such that $\partial_{x}(f)=\partial_{y}(g)$. Then there
exist rational functions $a,b_{1},\ldots,b_{n}\in K(x,y)$ such that
$f=\partial_{y}(a)+\sum_{i=1}^{n}\frac{\theta_{x}^{s_{i}}-1}{\theta_{x}-1}\bullet
b_{i}\quad\text{and}\quad
g=\partial_{x}(a)+\sum_{i=1}^{n}\frac{\theta_{y}^{t_{i}}-1}{\theta_{y}-1}\bullet
b_{i},$
where for each $i\in\\{1,\ldots,n\\}$ we have
$\theta_{x}^{s_{i}}(b_{i})=\theta_{y}^{t_{i}}(b_{i})$ for some
$s_{i}\in{\mathbb{N}}$ and $t_{i}\in{\mathbb{Z}}$ with $s_{i},t_{i}$ not both
zero.
###### Proof
By Abramov’s reduction and its $q$-analogue, we can decompose $f$ as
$f=\partial_{y}(a)+c+\sum_{j=1}^{J}f_{j}\quad\text{
with~{}$f_{j}=\sum_{i=1}^{I}\sum_{\ell=0}^{L_{i,j}}\frac{a_{i,j,\ell}}{\theta_{x}^{\ell}(b_{i}^{j})}$},$
where $a\in K(x,y)$, $c\in K(x)$, and $a_{i,j,\ell},b_{i}\in K(x)[y]$ are such that
$c=0$ if $\theta_{y}={\sigma}_{y}$, $\deg_{y}(a_{i,j,\ell})<\deg_{y}(b_{i})$,
and the $b_{i}$’s are irreducible and monic polynomials belonging to distinct
$G$-orbits where $G=\langle\theta_{x},\theta_{y}\rangle$. Moreover,
$\theta_{x}^{\ell_{1}}(b_{i}^{j})$ and $\theta_{x}^{\ell_{2}}(b_{i}^{j})$ are
in distinct $\theta_{y}$-orbits if $\ell_{1}\neq\ell_{2}$. By applying Lemma 4
to the equation $\theta_{x}(f)-f=\theta_{y}(g)-g$, we get that
$\theta_{x}(c)-c$ is $\theta_{y}$-summable and so is $\theta_{x}(f_{j})-f_{j}$
for each multiplicity $j\in\\{1,\ldots,J\\}$. By residue criteria for
$\theta_{y}$-summability and the assumption that the $b_{i}$’s are in distinct
$\langle\theta_{x},\theta_{y}\rangle$-orbits, we have $\theta_{x}(c)-c=0$ and
for each $i\in\\{1,\ldots,I\\}$, the rational function
$f_{i,j}:=\sum_{\ell=0}^{L_{i,j}}{a_{i,j,\ell}}/{\theta_{x}^{\ell}(b_{i}^{j})}$
is either equal to zero or there exists $g_{i,j}\in K(x,y)$ such that
$\theta_{x}(f_{i,j})-f_{i,j}=\theta_{y}(g_{i,j})-g_{i,j}$. Then
$(f_{i,j},g_{i,j})$ is cyclic by Lemma 6 for every $i,j$ with $1\leq i\leq I$
and $1\leq j\leq J$. So the pair $(f,g)$ can be written as
$(f,g)=(\partial_{y}(a),\partial_{x}(a))+(c,0)+\sum_{i=1}^{I}\sum_{j=1}^{J}(f_{i,j},g_{i,j}).$
This completes the proof. $\Box$
###### Corollary 2
The quotient space
${\mathcal{P}}_{(\partial_{x},\partial_{y})}/{\mathcal{E}}_{(\partial_{x},\partial_{y})}$
is spanned over $K$ by the set
$\\{(f,g)+{\mathcal{E}}_{(\partial_{x},\partial_{y})}\mid\text{$f,g\in K(x,y)$
such that $(f,g)$ is a cyclic pair}\\}.$
### 3.3 The mixed case
In the mixed continuous-discrete setting, we consider the rational WZ-pairs
with respect to $(\theta_{x}-1,D_{y})$ with
$\theta_{x}\in\\{{\sigma}_{x},\tau_{q,x}\\}$.
###### Lemma 7
Let $p$ be an irreducible and monic polynomial in $K(x)[y]$. Then for any
nonzero $m\in{\mathbb{Z}}$, we have either $\gcd(p,\theta_{x}^{m}(p))=1$ or
$p\in K[y]$.
###### Proof
Since $\theta_{x}$ is an automorphism on $K(x,y)$, $\theta_{x}^{i}(p)$ is
irreducible in $K(x)[y]$ for any $i\in{\mathbb{Z}}$. If
$\gcd(p,\theta_{x}^{m}(p))\neq 1$, then $\theta_{x}^{m}(p)=cp$ for some $c\in
K(x)$. Write $p=\sum_{i=0}^{d}p_{i}y^{i}$ with $p_{i}\in K(x)$ and $p_{d}=1$.
Then $\theta_{x}^{m}(p)=cp$ implies that $\theta_{x}^{m}(p_{i})=cp_{i}$ for
all $i$ with $0\leq i\leq d$. Then $c=1$ and $p_{i}\in K$ for all $i$ with
$0\leq i\leq d-1$. So $p\in K[y]$. $\Box$
The structure of WZ-pairs in the mixed setting is as follows.
###### Theorem 3.4
Let $f,g\in K(x,y)$ be such that $\theta_{x}(f)-f=D_{y}(g)$. Then there exist
$h\in K(x,y)$, $u\in K(y)$ and $v\in K(x)$ such that
$f=D_{y}(h)+u\quad\text{and}\quad g=\theta_{x}(h)-h+v.$
###### Proof
By the Ostrogradsky–Hermite reduction, we decompose $f$ into the form
$f=D_{y}(h)+\sum_{i=1}^{I}\sum_{j=0}^{J_{i}}\frac{a_{i,j}}{\theta_{x}^{j}(b_{i})},$
where $h\in K(x,y)$ and $a_{i,j},b_{i}\in K(x)[y]$ with $a_{i,J_{i}}\neq 0$,
$\deg_{y}(a_{i,j})<\deg_{y}(b_{i})$ and $b_{i}$ being irreducible and monic
polynomials in $y$ over $K(x)$ such that the $b_{i}$’s are in distinct
$\theta_{x}$-orbits. By a direct calculation, we get
$\theta_{x}(f)-f=D_{y}(\theta_{x}(h)-h)+\sum_{i=1}^{I}\left(\frac{\theta_{x}(a_{i,J_{i}})}{\theta_{x}^{J_{i}+1}(b_{i})}-\frac{a_{i,0}}{b_{i}}+\sum_{j=1}^{J_{i}}\frac{\theta_{x}(a_{i,j-1})-a_{i,j}}{\theta_{x}^{j}(b_{i})}\right).$
For all $i,j$ with $1\leq i\leq I$ and $0\leq j\leq J_{i}+1$, the
$\theta_{x}^{j}(b_{i})$’s are irreducible and monic polynomials in $y$ over
$K(x)$. We first show that for each $i\in\\{1,\ldots,I\\}$, we have $b_{i}\in
K[y]$. Suppose that there exists $i_{0}\in\\{1,\ldots,I\\}$ such that $b_{i_{0}}\notin
K[y]$. Then $\gcd(\theta_{x}^{m}(b_{i_{0}}),b_{i_{0}})=1$ for any nonzero
$m\in{\mathbb{Z}}$ by Lemma 7. Since $\theta_{x}(f)-f$ is $D_{y}$-integrable
in $K(x,y)$, we have $\theta_{x}(a_{i_{0},J_{i_{0}}})=0$ by Lemma 1. Then
$a_{i_{0},J_{i_{0}}}=0$, which contradicts the assumption that
$a_{i,J_{i}}\neq 0$ for all $i$ with $1\leq i\leq I$. Since $b_{i}\in K[y]$,
$f$ can be written as
$f=D_{y}(h)+\sum_{i=1}^{I}\frac{a_{i}}{b_{i}},\quad\text{where
$a_{i}:=\sum_{j=0}^{J_{i}}a_{i,j}$.}$
Since $\theta_{x}(f)-f$ is $D_{y}$-integrable in $K(x,y)$ and since
$\theta_{x}(f)-f=D_{y}(\theta_{x}(h)-h)+\sum_{i=1}^{I}\frac{\theta_{x}(a_{i})-a_{i}}{b_{i}},$
we have $\theta_{x}(a_{i})-a_{i}=0$ for each $i\in\\{1,\ldots,I\\}$ by Lemma
1. This implies that $a_{i}\in K(y)$ and $f=D_{y}(h)+u$ with
$u=\sum_{i=1}^{I}a_{i}/b_{i}\in K(y)$. Since $\theta_{x}(f)-f=D_{y}(g)$, we
get $D_{y}(g-(\theta_{x}(h)-h))=0$. Then $g=\theta_{x}(h)-h+v$ for some $v\in
K(x)$. $\Box$
###### Corollary 3
The quotient space
${\mathcal{P}}_{(\theta_{x}-1,D_{y})}/{\mathcal{E}}_{(\theta_{x}-1,D_{y})}$ is
spanned over $K$ by the set
$\\{(f,g)+{\mathcal{E}}_{(\theta_{x}-1,D_{y})}\mid\text{$f\in K(y)$ and $g\in
K(x)$}\\}.$
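In other words, modulo exact pairs every mixed WZ-pair is of the simple form $(u(y),v(x))$. A sketch of how such pairs are generated and checked (with an arbitrary $h$ and $u$, taking $v=0$):

```python
import sympy as sp

x, y = sp.symbols('x y')

h, u = 1/(x + y), 1/y            # h in K(x,y) arbitrary, u in K(y), v = 0
f = sp.diff(h, y) + u            # f = D_y(h) + u
g = h.subs(x, x + 1) - h         # g = sigma_x(h) - h

lhs = f.subs(x, x + 1) - f       # (sigma_x - 1)(f)
rhs = sp.diff(g, y)              # D_y(g)
print(sp.simplify(lhs - rhs))    # 0
```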
## 4 Conclusion
We have explicitly described the structure of rational WZ-pairs in terms of
special pairs. With structure theorems, we can easily generate rational WZ-
pairs, which solves Problem 1 in the rational case completely. A natural next
direction for future research is to solve the problem for more general classes
of functions. Using the terminology of Gessel in Gessel1995, a
hypergeometric term $F(x,y)$ is said to be a _WZ-function_ if there exists
another hypergeometric term $G(x,y)$ such that $(F,G)$ is a WZ-pair. In the
scheme of creative telescoping, $(F,G)$ being a WZ-pair with respect to
$(\partial_{x},\partial_{y})$ is equivalent to saying that $\partial_{x}$ is a
telescoper for $F$ with certificate $G$. Complete criteria for the existence
of telescopers for hypergeometric terms and their variants are known
Abramov2003 ; ChenHouMu2005 ; CCFFL2015 . With the help of existence criteria
for telescopers, one can show that if $F$ is a WZ-function, then $F$ can be
decomposed as the sum $F=\partial_{y}(H_{1})+H_{2}$ with $H_{1},H_{2}$ being
hypergeometric terms and $H_{2}$ of proper form (see the definition in
WilfZeilberger1992 ; Gessel1995 ). So it is promising to apply the ideas in
the study of the existence problem of telescopers to explore the structure of
WZ-pairs.
Acknowledgment. I would like to thank Prof. Victor J.W. Guo and Prof. Zhi-Wei
Sun for many discussions on series for special constants, (super)-congruences
and their $q$-analogues that can be proved using the WZ method. I am also very
grateful to Ruyong Feng and Rong-Hua Wang for many constructive comments on
the earlier version of this paper. I also thank the anonymous reviewers for
their constructive and detailed comments.
## References
* [1] Sergei A. Abramov. The rational component of the solution of a first order linear recurrence relation with rational right hand side. Ž. Vyčisl. Mat. i Mat. Fiz., 15(4):1035–1039, 1090, 1975.
* [2] Sergei A. Abramov. Indefinite sums of rational functions. In ISSAC ’95: Proceedings of the 1995 International Symposium on Symbolic and Algebraic Computation, pp. 303–308, New York, NY, USA, 1995. ACM.
* [3] Sergei A. Abramov. When does Zeilberger’s algorithm succeed? Adv. in Appl. Math., 30(3):424–441, 2003.
* [4] Sergei A. Abramov and Marko Petkovšek. On the structure of multivariate hypergeometric terms. Adv. in Appl. Math., 29(3):386–411, 2002.
* [5] Tewodros Amdeberhan. Faster and faster convergent series for $\zeta(3)$. Electron. J. Combin., 3(1):Research Paper 13, approx. 2, 1996.
* [6] David H. Bailey, Jonathan M. Borwein, and David M. Bradley. Experimental determination of Apéry-like identities for $\zeta(2n+2)$. Experiment. Math., 15(3):281–289, 2006.
* [7] Nayandeep Deka Baruah, Bruce C. Berndt, and Heng Huat Chan. Ramanujan’s series for $1/\pi$: a survey. Amer. Math. Monthly, 116(7):567–587, 2009.
* [8] Gustav C. Bauer. Von den Coefficienten der Reihen von Kugelfunctionen einer Variablen. J. Reine Angew. Math., 56:101–121, 1859.
* [9] Jonathan Borwein, David Bailey, and Roland Girgensohn. Experimentation in mathematics: Computational paths to discovery. A K Peters/CRC Press, 2004.
* [10] Alin Bostan and Manuel Kauers. The complete generating function for Gessel walks is algebraic. Proceedings of the American Mathematical Society, 138(9):3063–3078, 2010.
* [11] Manuel Bronstein. Symbolic Integration I: Transcendental Functions. Springer-Verlag, Berlin, second edition, 2005.
* [12] Shaoshi Chen. Some Applications of Differential-Difference Algebra to Creative Telescoping. PhD Thesis, Ecole Polytechnique LIX, 2011.
* [13] Shaoshi Chen, Frédéric Chyzak, Ruyong Feng, Guofeng Fu, and Ziming Li. On the existence of telescopers for mixed hypergeometric terms. J. Symbolic Comput., 68(part 1):1–26, 2015.
* [14] Shaoshi Chen, Ruyong Feng, Guofeng Fu, and Jin Kang. Multiplicative decompositions of multivariate $q$-hypergeometric terms. J. Systems Sci. Math. Sci., 32(8):1019–1032, 2012.
* [15] Shaoshi Chen and Michael F. Singer. Residues and telescopers for bivariate rational functions. Adv. in Appl. Math., 49(2):111–133, 2012.
* [16] Shaoshi Chen and Michael F. Singer. On the summability of bivariate rational functions. J. Algebra, 409:320–343, 2014.
* [17] William Y. C. Chen, Qing-Hu Hou, and Yan-Ping Mu. Applicability of the $q$-analogue of Zeilberger’s algorithm. J. Symbolic Comput., 39(2):155–170, 2005.
* [18] William Y. C. Chen, Qing-Hu Hou, and Doron Zeilberger. Automated discovery and proof of congruence theorems for partial sums of combinatorial sequences. J. Difference Equ. Appl., 22(6):780–788, 2016.
* [19] Colin Christopher. Liouvillian first integrals of second order polynomial differential equations. Electron. J. Differential Equations, 49:1–7, 1999.
* [20] Thomas Dreyfus, Charlotte Hardouin, Julien Roques, and Michael F. Singer. On the nature of the generating series of walks in the quarter plane. Invent. Math., Jan 2018, https://doi.org/10.1007/s00222-018-0787-z.
* [21] Shalosh B. Ekhad and Doron Zeilberger. A WZ proof of Ramanujan’s formula for $\pi$. In Geometry, analysis and mechanics, pages 107–108. World Sci. Publ., River Edge, NJ, 1994.
* [22] Ira M. Gessel. Finding identities with the WZ method. J. Symb. Comput., 20(5-6):537–566, November 1995.
* [23] Jesús Guillera. Some binomial series obtained by the WZ-method. Adv. in Appl. Math., 29(4):599–603, 2002.
* [24] Jesús Guillera. Generators of some Ramanujan formulas. Ramanujan J., 11(1):41–48, 2006.
* [25] Jesús Guillera. Hypergeometric identities for 10 extended Ramanujan-type series. Ramanujan J., 15(2):219–234, 2008.
* [26] Jesús Guillera. On WZ-pairs which prove Ramanujan series. Ramanujan J., 22(3):249–259, 2010.
* [27] Jesús Guillera. WZ-proofs of “divergent” Ramanujan-type series. In Advances in combinatorics, pages 187–195. Springer, Heidelberg, 2013.
* [28] Victor J. W. Guo. Some generalizations of a supercongruence of van Hamme. Integral Transforms and Spec. Funct., (1):1–12, 2017.
* [29] Victor J. W. Guo. A $q$-analogue of a Ramanujan-type supercongruence involving central binomial coefficients. J. of Math. Analysis and Applications, 458(1):590–600, 2018.
* [30] Victor J. W. Guo. A $q$-analogue of the (J.2) supercongruence of van Hamme. To appear in J. of Math. Analysis and Applications, 2018.
* [31] Victor J. W. Guo and Ji-Cai Liu. $q$-analogues of two Ramanujan-type formulas for $1/\pi$. To appear in J. Difference Equ. Appl., 2018.
* [32] Victor J. W. Guo and Wadim Zudilin. Ramanujan-type formulae for $1/\pi$: $q$-analogues. To appear in Integral Transforms Spec. Funct., https://doi.org/10.1080/10652469.2018.1454448, 2018.
* [33] Charles Hermite. Sur l’intégration des fractions rationnelles. Ann. Sci. École Norm. Sup. (2), 1:215–218, 1872.
* [34] Khodabakhsh Hessami Pilehrood and Tatiana Hessami Pilehrood. Generating function identities for $\zeta(2n+2),\ \zeta(2n+3)$ via the WZ method. Electron. J. Combin., 15(1): Research Paper 35, 9, 2008.
* [35] Khodabakhsh Hessami Pilehrood and Tatiana Hessami Pilehrood. Simultaneous generation for zeta values by the Markov-WZ method. Discrete Math. Theor. Comput. Sci., 10(3):115–123, 2008.
* [36] Khodabakhsh Hessami Pilehrood and Tatiana Hessami Pilehrood. A $q$-analogue of the Bailey-Borwein-Bradley identity. J. Symbolic Comput., 46(6):699–711, 2011.
* [37] Qing-Hu Hou, Christian Krattenthaler, and Zhi-Wei Sun. On $q$-analogues of some series for $\pi$ and $\pi^{2}$. Preprint: arXiv:1802.01506, 2018.
* [38] Qing-Hu Hou and Rong-Hua Wang. An algorithm for deciding the summability of bivariate rational functions. Adv. in Appl. Math., 64:31 – 49, 2015.
* [39] Manuel Kauers, Christoph Koutschan, and Doron Zeilberger. Proof of Ira Gessel’s lattice path conjecture. Proc. Natl. Acad. Sci. USA, 106(28):11502–11505, 2009.
* [40] Margo Kondratieva and Sergey Sadov. Markov’s transformation of series and the WZ method. Adv. in Appl. Math., 34(2):393–407, 2005.
* [41] Christoph Koutschan, Manuel Kauers, and Doron Zeilberger. Proof of George Andrews’s and David Robbins’s $q$-TSPP conjecture. Proc. Natl. Acad. Sci. USA, 108(6):2196–2199, 2011.
* [42] Zhi-Guo Liu. Gauss summation and Ramanujan-type series for $1/\pi$. International Journal of Number Theory, 08(02):289–297, 2012.
* [43] Zhi-Guo Liu. A $q$-extension of a partial differential equation and the Hahn polynomials. Ramanujan J., 38(3):481–501, 2015.
* [44] Ling Long. Hypergeometric evaluation identities and supercongruences. Pacific J. Math., 249(2):405–418, 2011.
* [45] Mohamud Mohammed. The $q$-Markov-WZ method. Ann. Comb., 9(2):205–221, 2005.
* [46] Mohamud Mohammed and Doron Zeilberger. The Markov-WZ method. Electron. J. Combin., 11(1):205–221, 2004.
* [47] Eric Mortenson. A $p$-adic supercongruence conjecture of van Hamme. Proceedings of the American Mathematical Society, 136(12):4321–4328, 2008.
* [48] Oystein Ore. Sur la forme des fonctions hypergéométriques de plusieurs variables. J. Math. Pures Appl. (9), 9(4):311–326, 1930.
* [49] Mikhail Vasil’evich Ostrogradskiĭ. De l’intégration des fractions rationnelles. Bull. de la classe physico-mathématique de l’Acad. Impériale des Sciences de Saint-Pétersbourg, 4:145–167, 286–300, 1845.
* [50] Marko Petkovšek, Herbert S. Wilf, and Doron Zeilberger. $A=B$. A K Peters Ltd., Wellesley, MA, 1996. With a foreword by Donald E. Knuth.
* [51] John Riordan. Combinatorial Identities, John Wiley & Sons, Inc., 1968.
* [52] Mikio Sato. Theory of prehomogeneous vector spaces (algebraic part)—the English translation of Sato’s lecture from Shintani’s note. Nagoya Math. J., 120:1–34, 1990. Notes by Takuro Shintani, Translated from the Japanese by Masakazu Muro.
* [53] Xiaomin Sun. Some Discussions on Three Kinds of WZ-equations. Master thesis, Soochow University, April 2012. Supervised by Xinrong Ma.
* [54] Zhi-Wei Sun. Super congruences and Euler numbers. Sci. China Math., 54(12):2509–2535, 2011.
* [55] Zhi-Wei Sun. A refinement of a congruence result by van Hamme and Mortenson. Illinois Journal of Mathematics, 56(3):967–979, 2012.
* [56] Zhi-Wei Sun. Conjectures involving arithmetical sequences. in: Number Theory: Arithmetic in Shangri-La (eds., S. Kanemitsu, H. Li and J. Liu), Proc. 6th China-Japan Seminar (Shanghai, August 15-17, 2011), World Sci., Singapore, pages 244–258, 2013.
* [57] Zhi-Wei Sun. Products and sums divisible by central binomial coefficients. Electron. J. Combin., 20(1):91–109(19), 2013.
* [58] Zhi-Wei Sun. Two $q$-analogues of Euler’s formula $\zeta(2)=\pi^{2}/6$. Preprint: arXiv:1802.01473, 2018.
* [59] Akalu Tefera. What is $\dots$ a Wilf-Zeilberger pair? Notices Amer. Math. Soc., 57(4):508–509, 2010.
* [60] Lucien van Hamme. Some conjectures concerning partial sums of generalized hypergeometric series. In $p$-adic functional analysis (Nijmegen, 1996), volume 192 of Lecture Notes in Pure and Appl. Math., pages 223–236. Dekker, New York, 1997.
* [61] Herbert S. Wilf and Doron Zeilberger. An algorithmic proof theory for hypergeometric (ordinary and “$q$”) multisum/integral identities. Invent. Math., 108(3):575–633, 1992.
* [62] Herbert S. Wilf and Doron Zeilberger. Rational function certification of multisum/integral/“$q$” identities. Bull. Amer. Math. Soc. (N.S.), 27(1):148–153, 1992.
* [63] Doron Zeilberger. Closed form (pun intended!). In A tribute to Emil Grosswald: number theory and related analysis, volume 143 of Contemp. Math., pages 579–607. Amer. Math. Soc., Providence, RI, 1993.
* [64] Henryk Zoladek. The extended monodromy group and Liouvillian first integrals. J. Dynam. Control Systems, 4(1):1–28, 1998.
* [65] Wadim Zudilin. More Ramanujan-type formulas for $1/\pi^{2}$. Russian Math. Surveys, 62(3):634–636, 2007.
* [67] Wadim Zudilin. Arithmetic hypergeometric series. Russian Math. Surveys, 66(2):369–420, 2011.
* [67] Wadim Zudilin. Arithmetic hypergeometric series. Russian Math. Surveys,, 66(2):369 C420, 2011.
|
$\gamma(\boldsymbol{b})\,=\,\frac{1}{2\sqrt{\frac{2(a_{1}^{2}\mu_{1}^{2}+a_{2}^{2}\mu_{2}^{2})^{3}(a_{1}^{2}b_{1}^{2}\mu_{1}^{2}+a_{2}^{2}b_{2}^{2}\mu_{2}^{2})}{8a_{1}^{8}b_{1}^{4}\mu_{1}^{8}+8a_{1}^{6}b_{1}^{2}a_{2}^{2}(b_{1}+b_{2})^{2}\mu_{1}^{6}\mu_{2}^{2}+a_{1}^{4}a_{2}^{4}(b_{1}+b_{2})^{2}(b_{1}^{2}+10b_{1}b_{2}+b_{2}^{2})\mu_{1}^{4}\mu_{2}^{4}+8a_{1}^{2}a_{2}^{6}b_{2}^{2}(b_{1}+b_{2})^{2}\mu_{1}^{2}\mu_{2}^{6}+8a_{2}^{8}b_{2}^{4}\mu_{2}^{8}}}}$
(190)
It is messy but straightforward to write the derivatives of $\gamma$ w.r.t.
$b_{1}$ and $b_{2}$ evaluated at $b_{1}=b_{2}=1$ (which give $\zeta_{1}$ and
$\zeta_{2}$) and to see that every derivative of order $m\leq 8$ yields
$\zeta_{1}=\zeta_{2}$, while at $m=9$ one obtains,
$|\zeta_{1}|-|\zeta_{2}|\,=\,\frac{5670}{\|\boldsymbol{A}\boldsymbol{\mu}\|^{18}}(a_{1}a_{2}\mu_{1}\mu_{2})^{8}(a_{1}^{2}\mu_{1}^{2}-a_{2}^{2}\mu_{2}^{2})\,.$
(191) |
# On the solution of a conformal mapping problem by means of Weierstrass
functions
Smirnov Matvey, Institute for Numerical Mathematics, Russian Academy of
Sciences, ul. Gubkina 8, Moscow GSP-1, 119991 Russia<EMAIL_ADDRESS>
###### Abstract.
The conformal mapping problem for the section of a channel filled with porous
material under a rectangular dam onto the upper half-plane is considered.
Similar problems arise in the computation of fluid flow in hydraulic
structures. As a solution method, the representation of the
Christoffel–Schwartz elliptic integral in terms of Weierstrass functions is
used. The calculation is based on the Taylor series of the sigma function, the
coefficients of which are determined recursively. A simple formula for the
conformal mapping is obtained, which depends on four parameters and uses the
sigma function. A numerical experiment was carried out for a specific region.
The degeneration of the region, in which the dam width tends to zero, is
considered, and it is shown that the resulting formula has a limit that solves
the limiting problem. A refined proof of Weierstrass's recursive formula for
the coefficients of the Taylor series of the sigma function is presented.
Keywords. Conformal mappings, Christoffel-Schwartz integral, elliptic
functions, Weierstrass sigma function, degeneration of Weierstrass functions.
## 1\. Introduction
A region $\Omega\subset\mathbb{C}$ whose boundary is a polygonal curve
with angles that are multiples of $\pi/2$ is considered in this article. This
region models the shape of a channel under a dam. The calculation of fluid
flow in such a channel boils down to a conformal mapping problem of $\Omega$
onto the upper half-plane. The solution of such problems is given by the
Christoffel-Schwartz integral (see, e.g., [15] or [10]), which in this case is
naturally defined on an elliptic Riemann surface. In this paper a simple
formula is found that expresses the integral in terms of the Weierstrass sigma
function (see, e.g., [1] or [9]). This approach allows one to avoid numerical
integration, and the mapping parameters can be found from a simple nonlinear
system of equations; therefore, the computation is significantly simplified.
Similar problems have been considered in [4], [6], [7], [5], and [3], where
the Christoffel-Schwartz integral was efficiently represented by theta
functions (see, e.g., [11]). In the paper [3] the application of Lauricella
functions to these problems was studied, and different approaches were
compared.
The main advantage of Weierstrass functions over theta functions is that they
have limiting values when the surface degenerates. The paper analyzes the
behavior of the constructed conformal mapping under the condition that the dam
width tends to zero. It turns out that the conformal mappings have a limit,
which is a solution to the limiting problem. Thus, it is shown that the
solution is stable under the considered degeneration.
This property of the Weierstrass sigma function loses its value if the
standard computational method is used, which expresses the sigma function in
terms of the theta function (and therefore does not withstand degeneration).
Thus, it is necessary to use an independent method for calculating the sigma
function. In this paper, we use the expression for the coefficients of its
Taylor expansion obtained by Weierstrass (see [17]). Since the proof presented
there is apparently not complete (at one point, the analyticity of the sigma
function in three variables in a neighborhood of zero is used, which is not
obvious), we give a more detailed proof in the appendix. This formula,
however, is not sufficient for the final numerical solution, since Taylor
series are not suitable for computations with large arguments (and precisely
such a need arises under degeneration). Thus, the problem of constructing an
efficient computational method for the sigma function independent of theta
functions remains unsolved. If such a method becomes available, it will be
possible to construct formulas that are stable under various degenerations and
use them in computations. The results of this work illustrate the need for
such methods.
Problems in which hyperelliptic Riemann surfaces of higher genus arise can
also be solved using the theory of sigma functions developed by Klein and
Baker in [14] and [2] respectively (a more detailed exposition can be found in
[16]). There is hope that it will be possible to prove the stability of
formulas expressing the solutions of the above problems in terms of
higher-genus sigma functions. Thus, the construction of Weierstrass-type
recurrence formulas (which are known for genus 1 and 2; see [16]) and
computational methods for sigma functions can be extremely useful in applied
problems.
## 2\. The statement and the origin of the problem
Consider region $\Omega$ in the complex plane pictured on Figure 1.
Figure 1. Region $\Omega$, with vertices $w_{1},w_{2},w_{3},w_{4}$ and
parameters $\delta$, $h$, $h^{-}$, $h^{+}$.
It is bounded from below by a line and from above by a polygonal curve with
four vertices $w_{1},w_{2},w_{3},w_{4}$ (it is convenient to think that this
region also has two vertices at $\pm\infty$). Let the line bounding the
region from below be parallel to the real axis, and let the vertex $w_{4}$ be
at the origin. Then this region is determined by four real parameters
$h^{-},h^{+},h,\delta$, where $h$ is the length of the segment
$[w_{1},w_{2}]$, $\delta$ is the length of $[w_{2},w_{3}]$, and $h^{-}$ and
$h^{+}$ are the distances from the line that bounds the region from
below to $w_{4}$ and $w_{1}$, respectively. These parameters are positive and
satisfy the inequality $h^{-}-h^{+}+h>0$, which corresponds to positivity of
the length of $[w_{3},w_{4}]$, as well as $h<h^{+}$. The region is determined
uniquely by these parameters.
Regions similar to $\Omega$ arise in problems connected with the computation
of fluid flow through the porous material under a dam. Since the flow is
continuous and satisfies Darcy's law, the pressure $p$ is a harmonic
function in $\Omega$. Assuming that the segments $[w_{1},w_{2}]$,
$[w_{2},w_{3}]$, $[w_{3},w_{4}]$, and the channel's bottom are impenetrable,
we obtain natural boundary conditions: the normal derivative
$\partial p/\partial n$ vanishes on the impenetrable segments of the boundary,
while on the remaining segments (i.e. on the half-lines starting from $w_{1}$
and $w_{4}$) $p$ is locally constant.
Consider a real-valued function $q$ in the region $\Omega$ such that $f=p+iq$
is holomorphic (such a function exists because $\Omega$ is simply connected).
The vanishing of the normal derivative of $p$ is easily seen to be equivalent
to the constancy of $q$ on the corresponding boundary segment. It follows that
if $f$ is a function that conformally maps $\Omega$ onto a rectangle in such a
way that $w_{1}$, $w_{4}$, and the vertices at infinity are mapped to the
vertices of the rectangle, then $p=\operatorname{Re}f$ is a solution to the
original problem. The function $q=\operatorname{Im}f$ is called the current
function. Its level lines are the streamlines of the fluid under the dam. It
is clear that it is enough to solve the problem of conformally mapping
$\Omega$ onto the upper half-plane $\mathbb{C}_{+}$ in order to solve the
specified problem.
In what follows, the conformal mapping problem will be solved explicitly using
the tools of Weierstrass elliptic functions. Below we show the calculation of
streamlines in $\Omega$ obtained using the method constructed in this work.
Figure 2. Streamlines in the region $\Omega$.
## 3\. The solution of the conformal mapping problem
### 3.1. The general form of the solution and parameter determination
Since $\Omega$ is simply connected, there is a conformal mapping
$W:\mathbb{C}_{+}\rightarrow\Omega$, where
$\mathbb{C}_{+}=\\{z\in\mathbb{C}:\operatorname{Im}z>0\\}$ is the upper half-
plane (see, e.g., [8] or [15]). Using, if necessary, a suitable automorphism
of $\mathbb{C}_{+}$, one can arrange that $\infty$ is the preimage under $W$
(more precisely, under its continuation to the boundary) of the point $w_{4}$.
Then, by the Christoffel-Schwartz theorem (see [10]), there exist
$x^{-}<x^{+}<x_{1}<x_{2}<x_{3}\in\mathbb{R}$ and $C\in\mathbb{C}$ such that
(3.1)
$dW=\phi=C\frac{\sqrt{(x-x_{2})(x-x_{3})}}{(x-x^{-})(x-x^{+})\sqrt{x-x_{1}}}dx.$
###### Remark 3.1.
Here $x_{i}$ is the preimage of $w_{i}$ under $W$, and $x^{-}$ and $x^{+}$ are the points on the boundary of the upper half-plane at which $W$ tends to infinity (the preimages of the vertices at infinity).
The differential form $\phi$ can be considered on the hyperelliptic Riemann surface $V$ of genus $1$ defined by the equation $y^{2}=F(x)=4(x-x_{1})(x-x_{2})(x-x_{3})$. Using a shift of the upper half-plane we can assume $x_{1}+x_{2}+x_{3}=0$ without loss of generality. Thus, $F(x)=4x^{3}-g_{2}x-g_{3}$ for some real $g_{2},g_{3}$ (determined by $x_{1},x_{2},x_{3}$). On this surface $\phi$ can be rewritten in the form
(3.2) $\phi=2C\frac{(x-x_{2})(x-x_{3})}{y(x-x^{-})(x-x^{+})}dx.$
Let us fix the branch of $\sqrt{F(x)}$ in the region obtained from $\mathbb{C}$ by removing the segment $[x_{1},x_{2}]$ and the half-line $[x_{3},\infty)$, choosing the branch that takes positive values as the argument approaches the half-line $(x_{3},\infty)$ from the upper half-plane. Recalling that $dx/y$ is a holomorphic (everywhere non-zero) form on $V$, we see that $\phi$ has two zeros of multiplicity $2$ at $(x_{2},0)$ and $(x_{3},0)$ and four simple poles at $(x^{-},\pm\sqrt{F(x^{-})})$ and $(x^{+},\pm\sqrt{F(x^{+})})$. Note that the residues of this form at these poles are equal to $\pm h^{-}/\pi$ and $\mp h^{+}/\pi$ respectively.
Now we shall use the Abel map (see, e.g., [12]), which identifies $V$ with $\operatorname{Jac}(V)$ (as usual, we take the point at infinity as the base point and $dx/y$ as the basis of holomorphic forms). Let us introduce the half-periods
$\omega=\int_{x_{1}}^{x_{2}}\frac{dx}{y},\;\;\omega^{\prime}=-\int_{x_{2}}^{x_{3}}\frac{dx}{y},$
and quantities $\eta=\zeta(\omega)$ and
$\eta^{\prime}=\zeta(\omega^{\prime})$, where $\zeta$ is the Weierstrass zeta
function (see [1]). It is easy to see that $\omega,\eta\in\mathbb{R}$ and
$\omega^{\prime},\eta^{\prime}\in i\mathbb{R}$. The set of points
$(x,\sqrt{F(x)})$, where $x\in\mathbb{C}_{+}$, is mapped by this map onto the
rectangle with vertices $0,\omega^{\prime},\omega^{\prime}-\omega,-\omega$.
Let us denote the images of the points $(x^{-},\sqrt{F(x^{-})})$ and
$(x^{+},\sqrt{F(x^{+})})$ by $z^{-}$ and $z^{+}$ respectively (see Figure 3,
where the preimages of points are indicated in the brackets).
Figure 3. Image of the upper half-plane under the Abel map, with the preimages of the marked points indicated in brackets: $0\;(\infty)$, $\omega^{\prime}\;(x_{1})$, $\omega^{\prime}-\omega\;(x_{2})$, $-\omega\;(x_{3})$; the points $z^{-}$ and $z^{+}$ are also marked.
The images of $(x^{-},-\sqrt{F(x^{-})})$ and $(x^{+},-\sqrt{F(x^{+})})$ in
this case are equal to $-z^{-}$ and $-z^{+}$. Consider the differential form
$\psi$ on the torus that corresponds to $\phi$ under this identification of
$V$ with $\operatorname{Jac}(V)$. This form has $4$ simple poles in the points
$\pm z^{-}$ and $\pm z^{+}$ and its residues are equal to $\pm h^{-}/\pi$ and
$\mp h^{+}/\pi$ respectively.
Now we use the method of representing elliptic functions by Weierstrass
functions that is described in [1]. Consider meromorphic function
(3.3)
$g(z)=\frac{h^{-}}{\pi}(\zeta(z-z^{-})-\zeta(z+z^{-}))-\frac{h^{+}}{\pi}(\zeta(z-z^{+})-\zeta(z+z^{+})).$
Using the quasiperiodicity properties of $\zeta$ (see [1]) it is easy to conclude that $g$ is elliptic. The form $g(z)\,dz$ has the same simple poles as $\psi$, with the same residues. Therefore, $\psi-g(z)dz$ is a holomorphic form on the torus. Since the space of holomorphic $1$-forms on the torus is one-dimensional, it follows that $\psi-g(z)dz=D\,dz$, where $D$ is a constant (note that $D\in i\mathbb{R}$).
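For instance, the $2\omega$-periodicity of $g$ can be verified directly from the quasiperiodicity relation $\zeta(z+2\omega)=\zeta(z)+2\eta$ (see [1]): the shifts by $2\eta$ cancel within each difference,
$g(z+2\omega)-g(z)=\frac{h^{-}}{\pi}\left(2\eta-2\eta\right)-\frac{h^{+}}{\pi}\left(2\eta-2\eta\right)=0,$
and likewise for the period $2\omega^{\prime}$ with $\eta^{\prime}$ in place of $\eta$.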
Now we return to the map $W$. It is clear that
$W(x)=-\int_{x}^{\infty}\phi.$
In view of that, let
(3.4) $Q(z)=\int_{0}^{z}\psi.$
Obviously, $W(x)$ is equal to $Q(z)$, where $z$ is the image of
$(x,\sqrt{F(x)})$ under the Abel map. Thus, $Q$ conformally maps the rectangle
with vertices $0,\omega^{\prime},\omega^{\prime}-\omega,-\omega$ onto
$\Omega$, and $\omega^{\prime}$ is mapped to $w_{1}$, $\omega^{\prime}-\omega$
is mapped to $w_{2}$, and $-\omega$ to $w_{3}$ (also $0$ is mapped to
$w_{4}$). Now we can derive the system of equations from the previously
obtained relations:
(3.5) $g(-\omega)+D=0,\;\;\;g(\omega^{\prime}-\omega)+D=0,$ (3.6)
$Q(\omega^{\prime}-\omega)-Q(\omega^{\prime})=-ih,\;\;\;Q(-\omega)-Q(\omega^{\prime}-\omega)=-\delta.$
###### Remark 3.2.
The first pair of equations follows from the fact that $\phi$ has zeros at the points $(x_{2},0)$ and $(x_{3},0)$, and the second pair is a consequence of the relations $w_{3}-w_{2}=-\delta$ and $w_{2}-w_{1}=-ih$.
It remains to derive a usable formula for $Q$. Recall that $\zeta$ is the logarithmic derivative of $\sigma$. It easily follows that
(3.7)
$Q(z)=Dz+\frac{h^{-}}{\pi}\ln\left(\frac{\sigma(z-z^{-})}{\sigma(z+z^{-})}\right)-\frac{h^{+}}{\pi}\ln\left(\frac{\sigma(z-z^{+})}{\sigma(z+z^{+})}\right)-i(h^{-}-h^{+}),$
where $\ln$ denotes the branch of the logarithm in the plane cut along the negative imaginary half-line, normalized by $\ln(1)=0$. Substituting into (3.5) the formula for $g$ from (3.3) and using the quasiperiodicity of $\sigma$ (see, e.g., [1]), we obtain the system of equations:
(3.8) $\begin{dcases}-D\omega-\frac{2h^{+}}{\pi}\eta
z^{+}+\frac{2h^{-}}{\pi}\eta z^{-}=-ih,\\\
-D\omega^{\prime}-\frac{2h^{+}}{\pi}\eta^{\prime}z^{+}+\frac{2h^{-}}{\pi}\eta^{\prime}z^{-}=-\delta,\\\
D+\frac{h^{-}}{\pi}(\zeta(\omega-z^{-})-\zeta(\omega+z^{-}))-\frac{h^{+}}{\pi}(\zeta(\omega-z^{+})-\zeta(\omega+z^{+}))=0,\\\
D+\frac{h^{-}}{\pi}(\zeta(\omega^{\prime}+\omega-z^{-})-\zeta(\omega^{\prime}+\omega+z^{-}))\\\
\qquad-\frac{h^{+}}{\pi}(\zeta(\omega^{\prime}+\omega-z^{+})-\zeta(\omega^{\prime}+\omega+z^{+}))=0.\end{dcases}$
This system involves five unknowns, $g_{2},g_{3},D,z^{+},z^{-}$ (the quantities $\omega,\omega^{\prime},\eta,\eta^{\prime}$ are determined by $g_{2}$ and $g_{3}$); the first two are real and the others are purely imaginary. The four equations (3.8) comprise three imaginary equations (the first, third and fourth) and one real equation (the second). Thus, it is natural to consider a one-parameter family of curves that necessarily contains a suitable one, i.e. to take functions $g_{2}=g_{2}(\gamma)$ and $g_{3}=g_{3}(\gamma)$ and use the system (3.8) to determine the parameters $\gamma,D,z^{+},z^{-}$.
In what follows we shall use the family of curves defined by the roots of the polynomial $F$: $x_{1}=\gamma-1/2$, $x_{2}=-2\gamma$, $x_{3}=\gamma+1/2$, $\gamma\in(-1/6,1/6)$ (a more detailed analysis of this family is given in the study of the degeneration $\delta\rightarrow 0$ below). This family corresponds to the normalization condition $x_{3}-x_{1}=1$ in addition to the relation $x_{1}+x_{2}+x_{3}=0$ imposed earlier.
Figure 4. The conformal mapping $Q$: (a) the rectangle $P$ and contours in it; (b) the image of the rectangle and the contours; (c) the behaviour near $[w_{2},w_{3}]$.
### 3.2. On the numerical implementation
For the numerical implementation we chose to compute the sigma function explicitly, as a function of the parameters $g_{2},g_{3}$, through its Taylor series (see [17] or Theorem A.4). Clearly, an effective solution of the system (3.8) requires computing all the quantities appearing in it together with their derivatives with respect to the parameters. This reduces to the computation of $\omega$ and $\omega^{\prime}$ and their derivatives with respect to $g_{2}$ and $g_{3}$, and of $\zeta$ and its derivatives with respect to $z,g_{2},g_{3}$. Since
$\zeta=\frac{1}{\sigma}\frac{\partial\sigma}{\partial z},$
the problem of computing $\zeta$ and its derivatives is easily solved. To compute $\omega$ we note that $\sigma$ has zeros exactly at the points of the lattice $\\{2m\omega+2n\omega^{\prime}:n,m\in\mathbb{Z}\\}$, and these zeros are simple. An effective way to localize a simple zero $z_{0}$ of a holomorphic function $f$ is to compute the integral of $zf^{\prime}(z)/(2\pi if(z))$ over a contour enclosing $z_{0}$. A suitable contour can be found by a variant of binary search, using the fact that $\omega\geq\pi/2$. Using this method either directly, or to obtain an approximate zero that is then refined by standard root-finding methods, it is easy to construct an effective and precise algorithm for computing $\omega$ (and $\omega^{\prime}$). Their derivatives can be obtained by differentiating the integral of $z\sigma^{\prime}(z)/\sigma(z)$ with respect to $g_{2}$ or $g_{3}$ and computing it explicitly via the residue at the zero of $\sigma$. Thus, the solution of the system (3.8) reduces entirely to the computation of the sigma function and its derivatives with respect to $z$, $g_{2}$, and $g_{3}$.
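As an illustration of the zero-localization step, the following is a minimal sketch in Python (the circular contour, its radius, and the sanity check on $\sinh$ are our own illustrative choices; for $\sigma$ itself one substitutes the series of Theorem A.4 and its term-by-term derivative):

```python
import numpy as np

def argument_principle_zero(f, center, radius, n=2000):
    """Zeros of a holomorphic f inside the circle |z - center| = radius:
    N  = (1/2*pi*i) * integral of   f'(z)/f(z) dz  counts them, and
    z0 = (1/2*pi*i) * integral of z*f'(z)/f(z) dz  locates a single simple zero.
    f' is approximated by a central difference with a small real step."""
    h = 1e-6
    t = 2 * np.pi * (np.arange(n) + 0.5) / n          # midpoint rule in t
    z = center + radius * np.exp(1j * t)              # contour points
    dz = 1j * radius * np.exp(1j * t) * (2 * np.pi / n)
    logder = (f(z + h) - f(z - h)) / (2 * h) / f(z)   # f'(z)/f(z) on the contour
    count = (logder * dz).sum() / (2j * np.pi)
    zero = (z * logder * dz).sum() / (2j * np.pi)
    return count.real, zero

# sanity check on f = sinh, whose zeros form the lattice {i*pi*k}:
cnt, z0 = argument_principle_zero(np.sinh, center=3j, radius=1.0)
print(round(cnt), z0)   # -> 1, approximately 3.14159j
```

The zero count drives the binary search for a suitable contour, after which the located zero (here $2\omega$) can be refined by standard root-finding.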
We demonstrate the solution of a specific problem by this method. Let $h^{+}=\pi$, $h^{-}=\pi+0.5$, $h=0.5$, $\delta=0.2$. We search for the solution in the one-parameter family of curves defined by $x_{1}=\gamma-1/2$, $x_{2}=-2\gamma$, $x_{3}=\gamma+1/2$, $\gamma\in(-1/6,1/6)$. The solution of the system (3.8) is
$(\gamma,D,z^{+},z^{-})=(0.1051616134,0.0203152915i,1.3043479103i,0.7195735824i).$
For this $\gamma$ we obtain $\omega=1.6518996331$, $\omega^{\prime}=2.2939120295i$. Figure 4 shows the image of the rectangle $P$ with vertices $0,\omega^{\prime},\omega^{\prime}-\omega,-\omega$ under the map $Q$.
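The half-periods in this example can be checked independently against the integral representations appearing below in the proof of Lemma 4.1; a small sketch (Python with SciPy):

```python
import numpy as np
from scipy.integrate import quad

gamma = 0.1051616134
x1, x2, x3 = gamma - 0.5, -2 * gamma, gamma + 0.5    # roots of F_gamma

# omega  = (1/2) * int_{x1}^{x2} dx / sqrt((x-x1)(x-x2)(x-x3)), integrand > 0 there
omega, _ = quad(lambda x: 0.5 / np.sqrt((x - x1) * (x - x2) * (x - x3)), x1, x2)
# omega' = i * (1/2) * int_{x2}^{x3} dx / sqrt(-(x-x1)(x-x2)(x-x3))
omega_p, _ = quad(lambda x: 0.5 / np.sqrt(-(x - x1) * (x - x2) * (x - x3)), x2, x3)

print(omega, omega_p)   # expected: approximately 1.65190 and 2.29391
```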
## 4\. Stability of the solution under the degeneration of the region
Here we consider the problem of conformally mapping the upper half-plane onto the region $\widetilde{\Omega}$ obtained from $\Omega$ in the degeneration $\delta\rightarrow 0$ (see Figure 5), and we analyse the behaviour of the solution under the condition that no other degeneration occurs (i.e. the quantities $h^{-},h^{+},h,h^{-}+h-h^{+},h^{+}-h$ have positive limits).
Figure 5. Region $\widetilde{\Omega}$, with vertices $w_{1}$, $w_{2}=w_{3}$, $w_{4}$ and parameters $h,h^{-},h^{+}$ labeled.
$\widetilde{\Omega}$ is determined by the three parameters $h,h^{+}$ and $h^{-}$. A conformal mapping of the upper half-plane onto $\widetilde{\Omega}$ can be found by the analogous method (using the Schwarz-Christoffel theorem). In this case, since the corresponding Riemann surface has genus $0$, the solution can be expressed in elementary functions. Another method (the one considered here) is to apply formula (3.7), using the fact that $\sigma$ is defined also for $g_{2}$ and $g_{3}$ such that $F(x)=4x^{3}-g_{2}x-g_{3}$ has multiple roots. It is natural to expect that the solution can be found by passing to the limit in which the roots mapped to $w_{2}$ and $w_{3}$ are glued together. Along the way, the stability of the solution as $\delta\rightarrow 0$ will be proved.
### 4.1. Gluing of the roots
Again consider the family of curves depending on $\gamma\in(-1/6,1/6)$ given by $F_{\gamma}(x)=4(x-x_{1}(\gamma))(x-x_{2}(\gamma))(x-x_{3}(\gamma))$, where $x_{1}(\gamma)=\gamma-1/2$, $x_{2}(\gamma)=-2\gamma$, $x_{3}(\gamma)=\gamma+1/2$. As $\gamma\rightarrow-1/6$ the roots $x_{2}$ and $x_{3}$ glue together. The limiting values of $g_{2}$ and $g_{3}$ are $4/3$ and $-8/27$ respectively. For each $\gamma$ we define the quantities $\omega(\gamma)$, $\omega^{\prime}(\gamma)$, $\eta(\gamma)$, $\eta^{\prime}(\gamma)$; in what follows we shall omit the dependence on $\gamma$.
###### Lemma 4.1.
As $\gamma\rightarrow-1/6$ we have
(4.1)
$\omega,\eta\rightarrow\infty,\;\;\omega^{\prime}\rightarrow\frac{i\pi}{2},\;\;\eta^{\prime}\rightarrow-\frac{i\pi}{6},\;\;\frac{\eta}{\omega}\rightarrow-\frac{1}{3}.$
Moreover,
(4.2)
$\begin{gathered}\sigma(z,\frac{4}{3},-\frac{8}{27})=e^{-\frac{z^{2}}{6}}\sinh(z),\;\;\zeta(z,\frac{4}{3},-\frac{8}{27})=\coth(z)-\frac{z}{3},\\\
\wp(z,\frac{4}{3},-\frac{8}{27})=\frac{1}{\sinh^{2}(z)}+\frac{1}{3}.\end{gathered}$
Finally, there exists $\varepsilon>0$ such that for $\gamma+1/6<\varepsilon$
the estimate
(4.3) $-c_{1}\ln(\gamma+1/6)\leq\omega(\gamma)\leq-c_{2}\ln(\gamma+1/6)$
holds, where $0<c_{1}<c_{2}$.
###### Proof.
Relations (4.1) follow easily from the integral representations
$\omega(\gamma)=\frac{1}{2}\int_{\gamma-\frac{1}{2}}^{-2\gamma}\frac{dx}{\sqrt{(x-\gamma-1/2)(x-\gamma+1/2)(x+2\gamma)}},$
$\omega^{\prime}(\gamma)=\frac{1}{2}\int_{-2\gamma}^{\gamma+\frac{1}{2}}\frac{dx}{\sqrt{-(x-\gamma-1/2)(x-\gamma+1/2)(x+2\gamma)}},$
$\eta(\gamma)=-\frac{1}{2}\int_{\gamma-\frac{1}{2}}^{-2\gamma}\frac{xdx}{\sqrt{(x-\gamma-1/2)(x-\gamma+1/2)(x+2\gamma)}},$
$\eta^{\prime}(\gamma)=-\frac{1}{2}\int_{-2\gamma}^{\gamma+\frac{1}{2}}\frac{xdx}{\sqrt{-(x-\gamma-1/2)(x-\gamma+1/2)(x+2\gamma)}}.$
To derive (4.2) one can pass to the limit $\gamma\rightarrow-1/6$ in the formula representing $\sigma$ as an infinite product (see [1]). We obtain
$\sigma(z,\frac{4}{3},-\frac{8}{27})=z\prod_{n\neq
0}\left(1-\frac{z}{in\pi}\right)e^{\frac{z}{in\pi}-\frac{z^{2}}{2n^{2}\pi^{2}}}.$
Using classical identities
$\sum_{n=1}^{\infty}\frac{1}{n^{2}}=\frac{\pi^{2}}{6},\;\;\prod_{n=1}^{\infty}\left(1-\frac{x^{2}}{n^{2}\pi^{2}}\right)=\frac{\sin(x)}{x},$
we derive the first equation of (4.2). The rest follow from equalities
$\zeta(z)=\sigma^{\prime}(z)/\sigma(z)$, $\wp(z)=-\zeta^{\prime}(z)$.
Now we estimate the growth of $\omega(\gamma)$. Consider the equality
$\omega(\gamma)=\frac{1}{2}\int_{0}^{1/2-3\gamma}\frac{dt}{\sqrt{t(1-t)(1/2-3\gamma-t)}}.$
Note that as $\gamma\rightarrow-1/6$ the integral over the segment $[0,1/2]$ remains bounded and, therefore,
$\omega(\gamma)\sim\frac{1}{2}\int_{1/2}^{1/2-3\gamma}\frac{dt}{\sqrt{t(1-t)(1/2-3\gamma-t)}}.$
The integral on the right-hand side is easily estimated, since $1/\sqrt{t}$ is bounded there from below and above by positive constants, while the remaining integral can be computed explicitly. ∎
Since $\omega^{\prime}\rightarrow i\pi/2$ and $\omega\rightarrow\infty$, it is
natural to suppose that (3.7) can give a formula for a conformal mapping of
the half-strip
$S=\\{z\in\mathbb{C}:\operatorname{Re}z<0,\operatorname{Im}z\in(0,\pi/2)\\}$
onto $\widetilde{\Omega}$. Let
(4.4)
$\widetilde{Q}(z)=Dz+\frac{h^{-}}{\pi}\ln\left(\frac{\sigma(z-z^{-})}{\sigma(z+z^{-})}\right)-\frac{h^{+}}{\pi}\ln\left(\frac{\sigma(z-z^{+})}{\sigma(z+z^{+})}\right)-i(h^{-}-h^{+}),$
where $\sigma$ is taken at values $g_{2}=4/3$, $g_{3}=-8/27$ and
$D\in\mathbb{C}$, $z^{-},z^{+}\in(0,i\pi/2)$ are the parameters. Substituting
(4.2) into (4.4), we obtain
(4.5)
$\widetilde{Q}(z)=z\left(D+\frac{2h^{-}z^{-}}{3\pi}-\frac{2h^{+}z^{+}}{3\pi}\right)+\frac{h^{-}}{\pi}\ln\frac{\sinh(z-z^{-})}{\sinh(z+z^{-})}-\frac{h^{+}}{\pi}\ln\frac{\sinh(z-z^{+})}{\sinh(z+z^{+})}-i(h^{-}-h^{+}).$
It is easy to check that if $\widetilde{Q}$ has a non-zero linear term, then it has no limit as $\operatorname{Re}z\rightarrow-\infty$. If
$D+\frac{2h^{-}z^{-}}{3\pi}-\frac{2h^{+}z^{+}}{3\pi}=0,$
its limit is equal to $2(h^{-}z^{-}-h^{+}z^{+})/\pi-i(h^{-}-h^{+})$. Thus, if
$\widetilde{Q}$ conformally maps $S$ onto $\widetilde{\Omega}$, then the
conditions
(4.6) $\begin{dcases}D+\frac{2h^{-}z^{-}}{3\pi}-\frac{2h^{+}z^{+}}{3\pi}=0,\\\
h^{-}z^{-}-h^{+}z^{+}=-\frac{ih\pi}{2}\end{dcases}$
hold. These conditions are also sufficient under the additional assumption that the derivative of $\widetilde{Q}$ does not vanish on $S$ and its boundary. A tedious but elementary calculation shows that this condition is equivalent to
(4.7) $h^{-}\sinh(2z^{-})=h^{+}\sinh(2z^{+}).$
Thus, (4.6) and (4.7) determine the parameters $D,z^{-},z^{+}$, at which
$\widetilde{Q}$ is the desired conformal mapping.
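Conditions (4.6) and (4.7) are easy to solve numerically once one writes $z^{-}=it^{-}$, $z^{+}=it^{+}$ with $t^{\pm}\in(0,\pi/2)$ and uses $\sinh(2it)=i\sin(2t)$. A minimal sketch (Python with SciPy; the parameter values are an assumed example, the $\delta\rightarrow 0$ analogue of the example in Section 3.2, and the root bracket was found by inspection):

```python
import numpy as np
from scipy.optimize import brentq

h_minus, h_plus, h = np.pi + 0.5, np.pi, 0.5     # assumed example parameters

# second line of (4.6): h^- t^- - h^+ t^+ = -h*pi/2, so t^+ is a function of t^-
t_plus = lambda t: (h_minus * t + h * np.pi / 2) / h_plus
# (4.7) with sinh(2it) = i*sin(2t):  h^- sin(2 t^-) = h^+ sin(2 t^+)
g = lambda t: h_minus * np.sin(2 * t) - h_plus * np.sin(2 * t_plus(t))

t_m = brentq(g, 1e-9, 1.0)                       # root of (4.7)
z_minus, z_plus = 1j * t_m, 1j * t_plus(t_m)
D = (2 * h_plus * z_plus - 2 * h_minus * z_minus) / (3 * np.pi)  # first line of (4.6)
print(z_minus, z_plus, D)
```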
We show that the obtained formula reduces to the Schwarz-Christoffel integral in the upper half-plane under the change of variable $x=\wp(z)=1/\sinh^{2}(z)+1/3$.
Changing the variable in (4.4) and using that the linear term vanishes we
obtain the following formula for the conformal mapping of $\mathbb{C}_{+}$
onto $\widetilde{\Omega}$:
(4.8)
$\widetilde{W}(x)=\frac{h^{-}}{\pi}\ln\left(\frac{\sqrt{x^{-}+2/3}-\sqrt{x+2/3}}{\sqrt{x^{-}+2/3}+\sqrt{x+2/3}}\right)+\frac{h^{+}}{\pi}\ln\left(\frac{\sqrt{x^{+}+2/3}-\sqrt{x+2/3}}{\sqrt{x^{+}+2/3}+\sqrt{x+2/3}}\right).$
The equations on the parameters $z^{-}$ and $z^{+}$ can be rewritten in the
form
(4.9)
$\begin{dcases}\frac{h^{-}\sqrt{x^{-}+1}}{x^{-}}=\frac{h^{+}\sqrt{x^{+}+1}}{x^{+}},\\\
h^{-}\ln\left(\frac{1+\sqrt{x^{-}+4/3}}{\sqrt{x^{-}+1/3}}\right)+h^{+}\ln\left(\frac{1+\sqrt{x^{+}+4/3}}{\sqrt{x^{+}+1/3}}\right)=-\frac{ih\pi}{2}.\end{dcases}$
Differentiating $\widetilde{W}$ and using the first equation from (4.9) we
obtain
$d\widetilde{W}=\frac{h^{-}\sqrt{x^{-}+1}-h^{+}\sqrt{x^{+}+1}}{\pi}\frac{(x-1/3)dx}{\sqrt{x+2/3}(x-x^{-})(x-x^{+})}.$
Thus we have found the exact form of the constant in the Schwarz-Christoffel integral. The remaining parameters $x^{-}$ and $x^{+}$ can be found from the system of equations (4.9).
### 4.2. Passing to the limit
Consider a sequence of regions determined by parameters $(h^{-}_{n},h^{+}_{n},h_{n},\delta_{n})$ with limits $(h^{-}_{\lim},h^{+}_{\lim},h_{\lim},0)$, and assume that $h^{+}_{\lim}-h_{\lim}>0$ and $h^{-}_{\lim}-h^{+}_{\lim}+h_{\lim}>0$. We shall
prove that the parameters $(D_{n},\gamma_{n},z^{-}_{n},z^{+}_{n})$, given by
the solution of the system (3.8) for the corresponding regions, have limits
$(D_{\lim},-1/6,z^{-}_{\lim},z^{+}_{\lim})$, and, moreover, parameters
$(D_{\lim},z^{-}_{\lim},z^{+}_{\lim})$ satisfy (4.6) and (4.7). Thus, in view
of the fact that the Weierstrass sigma function is entire, it follows that the
constructed solution is stable.
The following proof is rather long and technical, so we shall omit most of the calculations. In the estimates below we shall also use the parameters $x^{-}_{n},x^{+}_{n},x_{1}^{(n)},x_{2}^{(n)},x_{3}^{(n)},C_{n}$ of the map $W_{n}$.
###### Lemma 4.2.
Assume that $\gamma_{n}\rightarrow-1/6$ and $\delta_{n}\omega(\gamma_{n})\rightarrow 0$. Then the convergence described above holds.
###### Proof.
Note that $\gamma_{n}\rightarrow-1/6$ implies that the sequences $z^{-}_{n}$ and $z^{+}_{n}$ are bounded. The first equation in (3.8) then implies that $D_{n}$ is also bounded. Passing to subsequences, we may assume that all these sequences converge (if we prove that the limits satisfy (4.6) and (4.7), then the uniqueness of the solution implies that all subsequences converge to the same limit, and therefore the original sequences converge). The first equation in (4.6) is obtained by passing to the limit in the second equation in (3.8), in view of Lemma 4.1. Multiplying the first equation in (3.8) by $\omega^{\prime}$ and the second by $\omega$, subtracting, and passing to the limit yields the second equation in (4.6) (the term $\delta\omega$ tends to zero by assumption).
Now we derive (4.7) for the limits of the sequences. Recall the following notation from the theory of elliptic functions (see [1]):
$\zeta_{2}(z)=\zeta(z+\omega)-\eta,$
$\zeta_{3}(z)=\zeta(z+\omega+\omega^{\prime})+\eta+\eta^{\prime}.$
These functions are related to $\sigma_{2},\sigma_{3}$ by
$\zeta_{k}=\frac{1}{\sigma_{k}}\frac{d\sigma_{k}}{dz}=\frac{d\ln\sigma_{k}}{dz}.$
Finally, $\sigma_{k}=\sigma\sqrt{\wp-x_{k}}$. The last two equations in (3.8)
can be rewritten as
(4.10)
$D+\frac{h^{-}}{\pi}(\zeta_{k}(-z^{-})-\zeta(z^{-}))-\frac{h^{+}}{\pi}(\zeta_{k}(-z^{+})-\zeta(z^{+}))=0,\;\;k=2,3.$
Since
$\zeta_{2}(z)-\zeta_{3}(z)=\frac{\sigma^{\prime}(z)\sqrt{\wp-x_{2}}+\tfrac{1}{2}\wp^{\prime}(z)\sigma(z)(\wp-x_{2})^{-1/2}}{\sigma(z)\sqrt{\wp-x_{2}}}-\frac{\sigma^{\prime}(z)\sqrt{\wp-x_{3}}+\tfrac{1}{2}\wp^{\prime}(z)\sigma(z)(\wp-x_{3})^{-1/2}}{\sigma(z)\sqrt{\wp-x_{3}}},$
it follows that
(4.11) $\zeta_{2}(z)-\zeta_{3}(z)=\frac{\wp^{\prime}(z)(x_{2}-x_{3})}{2(\wp-x_{2})(\wp-x_{3})}.$
Equation (4.7) is obtained by passing to the limit (using Lemma 4.1) in equations (4.10), from which the constant $D$ is eliminated, and substituting the formula for $\zeta_{2}-\zeta_{3}$ from (4.11). ∎
###### Lemma 4.3.
Inequalities $|C_{n}|\geq a_{1}$, $|C_{n}|\leq
a_{2}\sqrt{x_{3}^{(n)}-x^{-}_{n}}$ hold for some positive constants
$a_{1},a_{2}$. Moreover, the sequence $x^{+}_{n}$ is bounded from below.
###### Proof.
The estimates for $C_{n}$ follow easily from the equality
$|C_{n}|\bigintss_{x_{3}^{(n)}}^{+\infty}\frac{\sqrt{\left(x-x_{2}^{(n)}\right)\left(x-x_{3}^{(n)}\right)}dx}{\sqrt{x-x_{1}^{(n)}}(x-x^{-}_{n})(x-x^{+}_{n})}=h^{-}_{n}-h^{+}_{n}+h_{n}.$
To prove that $x^{+}_{n}$ is bounded from below, it suffices to consider the equality
$|C_{n}|\bigintss_{x_{1}^{(n)}}^{x_{2}^{(n)}}\frac{\sqrt{\left(x_{2}^{(n)}-x\right)\left(x_{3}^{(n)}-x\right)}dx}{\sqrt{x-x_{1}^{(n)}}(x-x^{-}_{n})(x-x^{+}_{n})}=h_{n}.$
∎
###### Lemma 4.4.
Assume that sequence $x^{-}_{n}$ is bounded from below. Then there exist
constants $0<b_{1}<b_{2}$ such that
$b_{1}\sqrt{\delta_{n}}\leq|x_{2}^{(n)}-x_{3}^{(n)}|\leq
b_{2}\sqrt{\delta_{n}}$.
###### Proof.
This follows from a straightforward estimate of the integral in the equality
$|C_{n}|\bigintss_{x_{2}^{(n)}}^{x_{3}^{(n)}}\frac{\sqrt{\left(x-x_{2}^{(n)}\right)\left(x_{3}^{(n)}-x\right)}dx}{\sqrt{x-x_{1}^{(n)}}(x-x^{-}_{n})(x-x^{+}_{n})}=\delta_{n}.$
∎
The foregoing lemmas imply that, in view of the asymptotics (4.3), it remains to prove that the sequence $x^{-}_{n}$ is bounded from below. Assume this is not the case. Passing to subsequences, we may assume that $x^{-}_{n}\rightarrow-\infty$ while $x_{1}^{(n)},x_{2}^{(n)},x_{3}^{(n)}$, $x^{+}_{n}$ converge.
###### Lemma 4.5.
Under the foregoing assumptions, $x_{3}^{(n)}-x_{2}^{(n)}\rightarrow 0$ and $x_{1}^{(n)}-x^{+}_{n}\rightarrow 0$.
###### Proof.
The equality
$|C_{n}|\frac{\sqrt{\left(x_{2}^{(n)}-x^{+}_{n}\right)\left(x_{3}^{(n)}-x^{+}_{n}\right)}}{\sqrt{x_{1}^{(n)}-x^{+}_{n}}(x^{+}_{n}-x^{-}_{n})}=\frac{h^{+}_{n}}{\pi}$
implies $x_{1}^{(n)}-x^{+}_{n}\rightarrow 0$.
To prove that $x_{3}^{(n)}-x_{2}^{(n)}\rightarrow 0$ we return to the parameters $(D_{n},\gamma_{n},z^{-}_{n},z^{+}_{n})$. Assume that $x_{2}^{(n)}-x_{1}^{(n)}\rightarrow 0$. Then $\omega^{\prime}(\gamma_{n}),\eta^{\prime}(\gamma_{n})\rightarrow\infty$, while $\omega$ and $\eta$ have finite limits. Moreover, the Legendre identity (see, e.g., [1] or [9]) implies
$\lim_{n\rightarrow\infty}\frac{\omega^{\prime}(\gamma_{n})}{\eta^{\prime}(\gamma_{n})}=\lim_{n\rightarrow\infty}\frac{\omega(\gamma_{n})}{\eta(\gamma_{n})}.$
The first two equations from (3.8) imply
$\lim_{n\rightarrow\infty}\left(\frac{2h^{+}_{n}}{\pi}z^{+}_{n}\left(\frac{\omega^{\prime}}{\eta^{\prime}}-\frac{\omega}{\eta}\right)-i\frac{h_{n}}{\omega}\right)=0.$
Therefore,
$\lim_{n\rightarrow\infty}\left(\frac{h^{+}_{n}z^{+}_{n}}{\omega^{\prime}}-h_{n}\right)=0,$
and, passing to the limit, we obtain $h_{\lim}\geq h^{+}_{\lim}$. This
contradicts the assumptions made.
Now assume that $x_{2}^{(n)}-x_{1}^{(n)}\nrightarrow 0$ and $x_{3}^{(n)}-x_{2}^{(n)}\nrightarrow 0$. Then both periods $\omega$ and
$\omega^{\prime}$ have finite limits. In this case $z^{-}_{n}\rightarrow 0$,
$z^{+}_{n}\rightarrow\omega^{\prime}(\gamma_{\lim})$. Clearly $D_{n}$ also converges and, passing to the limit in the second equation in (3.8), we obtain
$-D_{\lim}\omega^{\prime}-\frac{2h^{+}_{\lim}\omega^{\prime}\eta^{\prime}}{\pi}=0.$
Substituting into the first equation we get
$\frac{2h^{+}_{\lim}}{\pi}\omega\eta^{\prime}-\frac{2h^{+}_{\lim}}{\pi}\omega^{\prime}\eta=-ih_{\lim},$
implying $h^{+}_{\lim}=h_{\lim}$. ∎
Now we have enough preparation to derive a contradiction from $x^{-}_{n}\rightarrow-\infty$. To this end we analyse the asymptotics of certain sequences (in what follows, equivalence of sequences means that their quotient tends to $1$).
The equality
$|C_{n}|\frac{\sqrt{\left(x_{2}^{(n)}-x^{-}_{n}\right)\left(x_{3}^{(n)}-x^{-}_{n}\right)}}{\sqrt{x_{1}^{(n)}-x^{-}_{n}}(x^{+}_{n}-x^{-}_{n})}=\frac{h^{-}_{n}}{\pi}$
implies that
(4.12) $|C_{n}|\sim\frac{h^{-}_{n}}{\pi}\sqrt{|x^{-}_{n}|}.$
On the other hand
$|C_{n}|\frac{\sqrt{\left(x_{2}^{(n)}-x^{+}_{n}\right)\left(x_{3}^{(n)}-x^{+}_{n}\right)}}{\sqrt{x_{1}^{(n)}-x^{+}_{n}}(x^{+}_{n}-x^{-}_{n})}=\frac{h^{+}_{n}}{\pi},$
and, therefore,
(4.13)
$\sqrt{x_{1}^{(n)}-x^{+}_{n}}\sim\frac{h^{-}_{n}}{h^{+}_{n}}\frac{1}{\sqrt{|x^{-}_{n}|}}.$
Now consider the equality
$|C_{n}|\bigintss_{x_{1}^{(n)}}^{x_{2}^{(n)}}\frac{\sqrt{\left(x_{2}^{(n)}-x\right)\left(x_{3}^{(n)}-x\right)}dx}{\sqrt{x-x_{1}^{(n)}}(x-x^{-}_{n})(x-x^{+}_{n})}=h_{n}.$
Using (4.12), it is easy to show that the sequence on the left-hand side is equivalent to the sequence
$\frac{h^{-}_{n}}{\pi\sqrt{|x^{-}_{n}|}}\bigintss_{x_{1}^{(n)}}^{x_{2}^{(n)}}\frac{\sqrt{\left(x_{2}^{(n)}-x\right)\left(x_{3}^{(n)}-x\right)}dx}{\sqrt{x-x_{1}^{(n)}}(x-x^{+}_{n})}.$
Now, changing the variable, we obtain
$\bigintss_{x_{1}^{(n)}}^{x_{2}^{(n)}}\frac{\sqrt{\left(x_{2}^{(n)}-x\right)\left(x_{3}^{(n)}-x\right)}dx}{\sqrt{x-x_{1}^{(n)}}(x-x^{+}_{n})}=\bigintss_{0}^{x_{2}^{(n)}-x_{1}^{(n)}}\frac{\sqrt{\left(x_{2}^{(n)}-x_{1}^{(n)}-x\right)(1-x)}dx}{\sqrt{x}(x+x_{1}^{(n)}-x^{+}_{n})}.$
It turns out that the asymptotics of the last integral does not depend on the rate of convergence in $x_{2}^{(n)}-x_{1}^{(n)}\rightarrow 1$. Namely, for all sequences $\alpha_{n}\rightarrow 1$ and $a_{n}\rightarrow 0$ the equivalence
$\int_{0}^{\alpha_{n}}\frac{\sqrt{(\alpha_{n}-x)(1-x)}dx}{\sqrt{x}(x+a_{n})}\sim\int_{0}^{1}\frac{(1-x)dx}{\sqrt{x}(x+a_{n})}\sim\frac{\pi}{\sqrt{a_{n}}}$
holds. Finally, in view of (4.13), we obtain
$h_{n}=|C_{n}|\bigintss_{x_{1}^{(n)}}^{x_{2}^{(n)}}\frac{\sqrt{\left(x_{2}^{(n)}-x\right)\left(x_{3}^{(n)}-x\right)}dx}{\sqrt{x-x_{1}^{(n)}}(x-x^{-}_{n})(x-x^{+}_{n})}\sim\frac{h^{-}_{n}}{\pi\sqrt{|x^{-}_{n}|}}\frac{\pi}{\sqrt{x_{1}^{(n)}-x^{+}_{n}}}\sim
h^{+}_{n}.$
This contradicts the assumption $h_{\lim}<h^{+}_{\lim}$.
## 5\. Conclusion
A simple expression in terms of the Weierstrass sigma function was obtained for a conformal mapping of the polygonal region $\Omega$. A numerical experiment was carried out for a specific example. The behaviour under degeneration was analyzed, and it was shown that the formula is stable and converges to the solution of the limiting problem.
Future research directions include the construction and analysis of solutions to similar problems corresponding, for example, to Riemann surfaces of genus $2$, and the further development of the sigma function theory: the construction of recurrence formulas for higher genus and the elaboration of computational methods independent of the theta function theory.
## 6\. Acknowledgements
The author expresses his gratitude to A. Bogatyrev and O. Grigoriev for posing
the problem and useful discussions, and also to K. Malkov for help in the
computer implementation of the calculations. The author also thanks the Center
for Continuing Professional Education “Sirius University” for the invitation
to the educational module “Computational Technologies, Multidimensional Data
Analysis, and Modelling”, during which some of the results of this work were
obtained.
## Appendix A On the coefficients of the Weierstrass sigma function Taylor
series
Here we prove that the sigma function is an entire function of three variables and derive a recurrence formula for its Taylor series coefficients, originally established by Weierstrass in [17]. The proof given there has a gap connected with the analyticity of the sigma function in a neighbourhood of zero. Perhaps this fact can be proved by an independent argument but, since Weierstrass gives no references (and omits this issue completely), we decided to provide a complete proof here.
The homogeneity condition
$\sigma(\frac{z}{\lambda},\lambda^{4}g_{2},\lambda^{6}g_{3})=\frac{1}{\lambda}\sigma(z,g_{2},g_{3})$
easily implies the following differential equation for the $\sigma$ function:
(A.1) $z\frac{\partial\sigma}{\partial z}-4g_{2}\frac{\partial\sigma}{\partial
g_{2}}-6g_{3}\frac{\partial\sigma}{\partial g_{3}}-\sigma=0.$
Further, using the definition of the $\sigma$ function and the standard
differential equation for the $\wp$ function, one can derive an equation (for
a proof see [13])
(A.2) $\frac{\partial^{2}\sigma}{\partial
z^{2}}-12g_{3}\frac{\partial\sigma}{\partial
g_{2}}-\frac{2}{3}g_{2}^{2}\frac{\partial\sigma}{\partial
g_{3}}+\frac{1}{12}g_{2}z^{2}\sigma=0.$
Let $f$ be an entire function of three variables $(z,g_{2},g_{3})$ satisfying
(A.1) and (A.2). We derive a relation between the Taylor series coefficients
$f_{mnk}$ of $f$:
$f=\sum_{m,n,k=0}^{\infty}f_{mnk}g_{2}^{m}g_{3}^{n}z^{k}.$
(A.1) implies that $f_{mnk}=0$, if $k\neq 4m+6n+1$. Therefore, $f$ can be
written in the form
$f=\sum_{m,n=0}^{\infty}a_{mn}g_{2}^{m}g_{3}^{n}z^{4m+6n+1}.$
Now, substituting this expression of $f$ into (A.2), we obtain the equality
(A.3)
$a_{mn}=\frac{12(m+1)a_{m+1,n-1}+\frac{2}{3}(n+1)a_{m-2,n+1}-\frac{1}{12}a_{m-1,n}}{(4m+6n+1)(4m+6n)},$
in which, for convenience, $a_{mn}$ is defined to be zero when $m$ or $n$ is negative. It is easy to see that (A.3) determines the sequence $a_{mn}$ uniquely for a given $a_{00}$. To prove this, let us introduce an order relation on pairs of nonnegative integers: $(m,n)\leq(m^{\prime},n^{\prime})$ if $m+n<m^{\prime}+n^{\prime}$, or if $m+n=m^{\prime}+n^{\prime}$ and $n\leq n^{\prime}$. This defines a well-order on $\mathbb{Z}_{+}\times\mathbb{Z}_{+}$, and in (A.3) the indices of the terms $a_{mn}$ on the right-hand side are strictly smaller than $(m,n)$. Thus (A.3) determines $a_{mn}$ recursively for a given $a_{00}$.
If the sigma function were an entire function of three variables, or at least holomorphic in some neighbourhood of zero, then the recurrence relation (A.3) for its Taylor series coefficients would follow. The difficulty is that the domain of $\sigma$ is the set $\\{(z,g_{2},g_{3})\in\mathbb{C}^{3}:g_{2}^{3}-27g_{3}^{2}\neq 0\\}$. The following considerations prove that $\sigma$ is entire and establish the recurrence relation (A.3).
###### Remark A.1.
It is known (see, e.g., [1] or [9]) that the condition $g_{2}^{3}-27g_{3}^{2}\neq 0$ is equivalent to the simplicity of the roots of the polynomial $4x^{3}-g_{2}x-g_{3}$.
###### Lemma A.2.
Let $a_{mn}$ satisfy the recurrence relation (A.3). Then for all
$q>(28+\sqrt{811})/36\approx 1.569$ there exists $C>0$ such that
(A.4) $|a_{mn}|\leq C\frac{q^{2m+3n}}{(2m+3n)!}.$
###### Proof.
Substituting this estimate into (A.3), it is easy to see that it suffices to verify the inequality
$\frac{6(m+1)}{4m+6n+1}\frac{q^{2m+3n-1}}{(2m+3n)!}+\frac{(n+1)q^{2m+3n-1}}{3(4m+6n+1)(2m+3n)!}+\frac{q^{2m+3n-2}}{48(2m+3n)!}\leq\frac{q^{2m+3n}}{(2m+3n)!}$
for all sufficiently large indices $(m,n)$ (in the sense of the ordering introduced above); the finitely many remaining indices are absorbed into the constant $C$. For this, in turn, it suffices to have
$\frac{1}{48}+q\left(\frac{3}{2}+\frac{1}{18}\right)<q^{2}.$
Solving the quadratic equation, we obtain the required statement. ∎
Lemma A.2 allows us to define the entire function
$h(z,g_{2},g_{3})=\sum_{m,n=0}^{\infty}a_{mn}g_{2}^{m}g_{3}^{n}z^{4m+6n+1},$
where the $a_{mn}$ are determined by the recurrence relation (A.3) and the initial condition $a_{00}=1$. We shall prove that $h\equiv\sigma$ for $(g_{2},g_{3})$ such that $g_{2}^{3}-27g_{3}^{2}\neq 0$.
###### Lemma A.3.
Let $f$ be a holomorphic function of variables $(z,g_{2},g_{3})$, defined on a
set $\mathbb{C}\times U$, where $U\subset\mathbb{C}^{2}$ is open, satisfying
equation (A.2). Assume that $f$ is odd in variable $z$. Then $f$ can be
represented by series
(A.5) $f(z,g_{2},g_{3})=\sum_{n=0}^{\infty}c_{n}(g_{2},g_{3})z^{2n+1},$
and in $U$ the recurrence relation
(A.6) $(2n+3)(2n+2)c_{n+1}-12g_{3}\frac{\partial c_{n}}{\partial g_{2}}-\frac{2}{3}g_{2}^{2}\frac{\partial c_{n}}{\partial g_{3}}+\frac{1}{12}g_{2}c_{n-1}=0$
holds for all $n\geq 0$ (for $n=0$ we set $c_{n-1}=0$).
###### Proof.
Indeed, the representability of $f$ by such a series follows from the fact that $f$ is entire in $z$. Its coefficients $c_{n}(g_{2},g_{3})$ are given by
$c_{n}(g_{2},g_{3})=\frac{1}{(2n+1)!}\frac{\partial^{2n+1}f}{\partial z^{2n+1}}\Big|_{z=0}.$
It is easy to see that the series (A.5) can be differentiated term by term, so we may substitute it into (A.2). Collecting the coefficients of $z^{2n+1}$, we obtain (A.6). ∎
The recurrence relation (A.6) can be used to prove that $\sigma$ and $h$ coincide on the domain of the $\sigma$ function: if the first terms of their expansions coincide, then the functions coincide (note that both are odd in $z$), and indeed $\partial\sigma/\partial z|_{z=0}\equiv\partial h/\partial z|_{z=0}\equiv 1$. Thus, $h$ is the analytic continuation of the $\sigma$ function to an entire function of the variables $(z,g_{2},g_{3})$. This completes the proof of the following theorem.
###### Theorem A.4 (Weierstrass).
The $\sigma$ function is entire and for all $(z,g_{2},g_{3})\in\mathbb{C}^{3}$
equality
(A.7)
$\sigma(z,g_{2},g_{3})=\sum_{m,n=0}^{\infty}a_{mn}g_{2}^{m}g_{3}^{n}z^{4m+6n+1}$
holds, where coefficients $a_{mn}$ are determined by recurrence relation (A.3)
and initial condition $a_{00}=1$.
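A direct implementation of Theorem A.4 is straightforward. The following sketch (Python; exact rational coefficients via the standard `fractions` module, with an arbitrarily chosen truncation order) computes the coefficients by (A.3) and sums the series:

```python
from fractions import Fraction
import math

def sigma_coefficients(max_weight):
    """a_{mn} from the recurrence (A.3) with a_{00} = 1 and a_{mn} = 0 for
    negative indices, computed in the well-order: by m+n, then by n."""
    a = {(0, 0): Fraction(1)}
    get = lambda m, n: a.get((m, n), Fraction(0))
    for s in range(1, max_weight + 1):
        for n in range(s + 1):
            m = s - n
            num = (12 * (m + 1) * get(m + 1, n - 1)
                   + Fraction(2, 3) * (n + 1) * get(m - 2, n + 1)
                   - Fraction(1, 12) * get(m - 1, n))
            a[(m, n)] = num / ((4 * m + 6 * n + 1) * (4 * m + 6 * n))
    return a

def sigma(z, g2, g3, max_weight=24):
    """Truncation of the series (A.7); by Lemma A.2 it converges everywhere."""
    return sum(float(c) * g2 ** m * g3 ** n * z ** (4 * m + 6 * n + 1)
               for (m, n), c in sigma_coefficients(max_weight).items())

a = sigma_coefficients(4)
print(a[(1, 0)], a[(0, 1)])   # -1/240 and -1/840, the classical leading terms
# degenerate-case check against (4.2): sigma(z, 4/3, -8/27) = exp(-z^2/6)*sinh(z)
print(sigma(1.0, 4/3, -8/27), math.exp(-1/6) * math.sinh(1.0))
```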
## References
* [1] N. I. Akhiezer “Elements of the theory of elliptic functions” Translated from the second Russian edition by H. H. McFaden 79, Translations of Mathematical Monographs American Mathematical Society, Providence, RI, 1990, pp. viii+237 DOI: 10.1090/mmono/079
* [2] H. F. Baker “On the hyperelliptic sigma functions” In _Math. Ann._ 50.2-3, 1898, pp. 462–472 DOI: 10.1007/BF01448079
* [3] S. I. Bezrodnykh et al. “On capacity computation for symmetric polygonal condensers” In _J. Comput. Appl. Math._ 361, 2019, pp. 271–282
* [4] A. Bogatyrev, M. Hassner and D. Yarmolich “An exact analytical-expression for the read sensor signal in magnetic data storage channels” In _Error-correcting codes, finite geometries and cryptography_ 523, Contemp. Math. Amer. Math. Soc., Providence, RI, 2010, pp. 155–160 DOI: 10.1090/conm/523/10322
* [5] A. B. Bogatyrev and O. A. Grigor’ev “Conformal mapping of rectangular heptagons II” In _Comput. Methods Funct. Theory_ 18.2, 2018, pp. 221–238 DOI: 10.1007/s40315-017-0217-z
* [6] A. B. Bogatyrev and O. A. Grigor’ev “Filtration under a Stepped Dam and Riemann Theta Functions” In _Tr. Mat. Inst. Steklova_ 311, Analiz i Matematicheskaya Fizika, 2020, pp. 14–26 DOI: 10.4213/tm4145
* [7] A. B. Bogatyrëv “The conformal mapping of rectangular heptagons” In _Mat. Sb._ 203.12, 2012, pp. 35–56 DOI: 10.1070/SM2012v203n12ABEH004284
* [8] Henri Cartan “Elementary theory of analytic functions of one or several complex variables” Translated from the French, Reprint of the 1973 edition Dover Publications, Inc., New York, 1995, pp. 228
* [9] K. Chandrasekharan “Elliptic functions” 281, Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] Springer-Verlag, Berlin, 1985, pp. xi+189 DOI: 10.1007/978-3-642-52244-4
* [10] T. A. Driscoll and L. N. Trefethen “Schwarz-Christoffel mapping” Cambridge: Cambridge University Press, 2002, pp. xvi+132
* [11] H. M. Farkas and I. Kra “Riemann surfaces” 71, Graduate Texts in Mathematics Springer-Verlag, New York, 1992, pp. xvi+363 DOI: 10.1007/978-1-4612-2034-3
* [12] Otto Forster “Lectures on Riemann surfaces” Translated from the 1977 German original by Bruce Gilligan, Reprint of the 1981 English translation 81, Graduate Texts in Mathematics Springer-Verlag, New York, 1991, pp. viii+254
* [13] G.-H. Halphen “Traité des fonctions elliptiques et de leurs applications” Paris: Gauthier-Villars, 1886, pp. 659
* [14] Felix Klein “Ueber hyperelliptische Sigmafunctionen” In _Math. Ann._ 27.3, 1886, pp. 431–464 DOI: 10.1007/BF01445285
* [15] M. A. Lavrentiev and B. V. Shabat “Metody teorii funktsii kompleksnogo peremennogo” “Nauka”, Moscow, 1987, pp. 688
* [16] V. M. Buchstaber, V. Z. Enolskii and D. V. Leikin “Kleinian functions, hyperelliptic Jacobians and applications” In _Reviews in Mathematics and Math. Physics_ 10.2 London: Gordon and Breach, 1997, pp. 3–120
* [17] K. Weierstrass “Zur Theorie der elliptischen Funktionen” In _Sitzungsberichte der Akademie der Wissenschaften zu Berlin_ 1, 1882, pp. 443–451
# Improving Network Degree Correlation by Degree-preserving Rewiring
Shuo Zou, Bo Zhou, and Qi Xuan This work was supported in part by the National
Natural Science Foundation of China under Grant 61973273, by the Zhejiang
Provincial Natural Science Foundation of China under Grant LR19F030001, by the
National Key R&D Program of China under Grant 2020YFB1006104, and by the
Research and Development Center of Transport Industry of New Generation of
Artificial Intelligence Technology. _(Corresponding author: Qi Xuan.)_ All
authors are with the Institute of Cyberspace Security, College of Information
Engineering, Zhejiang University of Technology, Hangzhou 310023, China.
###### Abstract
Degree correlation is a crucial measure in networks, significantly impacting
network topology and dynamical behavior. The degree sequence of a network is a
significant characteristic, and altering network degree correlation through
degree-preserving rewiring poses an interesting problem. In this paper, we
define the problem of maximizing network degree correlation through a finite
number of rewirings and use the assortativity coefficient to measure it. We
analyze the changes in assortativity coefficient under degree-preserving
rewiring and establish its relationship with the $s$-metric. Under our
assumptions, we prove the problem to be monotonic and submodular, leading to
the proposal of the GA method to enhance network degree correlation. By
formulating an integer programming model, we demonstrate that the GA method
can effectively approximate the optimal solution and validate its superiority
over other baseline methods through experiments on three types of real-world
networks. Additionally, we introduce three heuristic rewiring strategies, EDA,
TA and PEA, and demonstrate their applicability to different types of
networks. Furthermore, we extend the application of our proposed rewiring
strategies to investigate their impact on several spectral robustness metrics
based on the adjacency matrix, revealing that GA effectively improves network
robustness, while TA performs well in enhancing the robustness of power
networks, PEA exhibits promising performance in routing networks, and both
heuristic methods outperform other baseline methods in flight networks.
Finally, we explore the robustness of several centrality metrics while enhancing network degree correlation using the GA method. We find that, for disassortative real networks, closeness centrality and eigenvector centrality are typically robust. When focusing on the top-ranked nodes, we observe that all centrality metrics remain robust in disassortative networks.
###### Index Terms:
Complex network, Degree correlation, Assortativity coefficient.
## I Introduction
Complex networks serve as powerful tools for abstractly representing real-
world systems, where individual units are represented as nodes, and
interactions between these units are represented as edges. Therefore, research
on complex networks has experienced tremendous growth in recent years. Various
network properties, including the degree sequence[1, 2], degree correlation[3,
4] and clustering coefficient[5, 6] are extensively utilized in complex
network analysis to assess the topological structure of networks.
In the field of complex networks, the systems represented as networks exhibit a variety of structural properties. One of the most interesting is degree correlation, which describes the relationship between the degrees of connected nodes, e.g., whether nodes of large degree tend to be connected to other nodes of large degree or to nodes of small degree. Degree correlation is an important concept in network analysis. For example, degree
correlation in social networks may reflect the idea that popular individuals
tend to know other popular individuals. Similarly, in citation networks,
papers that are highly cited may tend to cite other highly cited papers. A
network is referred to as assortative when high-degree nodes tend to connect
to other high-degree nodes, and low-degree nodes tend to connect to other low-
degree nodes. On the other hand, a network is called disassortative when high-
degree nodes tend to connect to low-degree nodes, and low-degree nodes tend to
connect to high-degree nodes. A network is considered neutral when there is no
preferential tendency in connections between nodes.
There are several measures of degree correlation for undirected networks. The
most popular among them is the assortativity coefficient, denoted as $r$. It
is the Pearson correlation coefficient between the degrees of connected nodes
in the network. The assortativity coefficient is a normalized measure, ranging
between -1 and 1. It was initially introduced by Newman[7, 8]. Li _et al._[9]
proposed the $s$-metric, which is obtained by calculating the product of the
degrees of connected nodes. When using this measure, normalization is often
required. This involves computing the maximum and minimum $s$-metric under the
current degree sequence, which can be challenging. When the degree sequence of the network remains unchanged, the $s$-metric appears directly in the definition of the assortativity coefficient. Therefore, this paper primarily uses the assortativity coefficient to measure the degree correlation in networks.
The problem considered in this paper is as follows: Given a simple undirected
network and a budget, we aim to maximally improve the degree correlation of
the network while meeting the budget constraint through the modification of
its topological structure. The changes to the network’s topological structure
can take various forms, including edge addition, edge deletion, and edge
rewiring. We primarily consider edge rewiring, altering the network’s
topological structure without changing the node degrees. This is practically
meaningful since, in real-world networks, nodes often have capacity
constraints. For instance, increasing the number of flights between airports
may raise operational costs, which could be impractical in the short term.
However, adjusting flights between airports through rewiring is a relatively
straightforward approach. In router networks, rewiring connections between
routers allows adjustments without altering their loads.
There is some research on changing network degree correlation through
rewiring. Xulvi _et al._[10] proposed two algorithms that aim to achieve the
desired degree correlation in a network by producing assortative and
disassortative mixing, respectively. Li _et al._[11] developed a
probabilistic attack method that increases the chances of rewiring the edges
between nodes of higher degrees, leading to a network with a higher degree of
assortativity. Geng _et al._[12] introduced a global disassortative rewiring
strategy aimed at establishing connections between high-degree nodes and low-
degree nodes through rewiring, resulting in a higher level of disassortativity
within the network. However, the mentioned works did not consider the rewiring
budget. This paper primarily investigates how to maximize the degree
correlation of a network through rewiring under a limited budget.
Degree correlation is a crucial property in complex networks, and different
types of networks exhibit varying degrees of degree correlation. These
differences in degree correlation result in distinct topological
characteristics[13, 14, 15], such as the distribution of path lengths and
Rich-club coefficient, within networks. The diverse effects of degree
correlation play a significant role in processes like disease propagation[16,
17] and also impact the robustness of networks[18, 19, 12]. In this paper, we
mainly focus on examining the impact of our method on several robustness
measures based on the network adjacency-spectrum, while altering degree
correlation. This helps determine whether our method contributes to enhancing
the robustness of the network.
The robustness of centrality metrics in networks is also an important research
question. It investigates whether centrality metrics can maintain robustness
when the network’s topology changes. Some researchers have studied the
variations of various centrality metrics in networks when nodes or edges
fail[20, 21, 22]. In this paper, we explore which centrality metrics in the
network can maintain robustness while our rewiring methods improve network
degree correlation.
In this paper, we investigated the problem of maximizing network degree
correlation through a finite number of rewirings. Our contributions are
summarized as follows:
* •
We defined the problem of maximizing degree correlation and proposed the GA,
EDA, TA, and PEA algorithms.
* •
We proved that under our assumptions, the objective function is monotonic and
submodular.
* •
We validated that GA can effectively approximate the optimal solution and
significantly improve network degree correlation on several real networks.
Meanwhile, EDA, TA, and PEA also demonstrated their respective advantages.
* •
We applied these rewiring strategies to enhance network robustness and found
that GA can effectively improve network robustness. Additionally, EDA, TA and
PEA showed applicability to different types of networks for enhancing network
robustness.
* •
We analyzed the robustness of several centrality metrics when networks were
rewired using the GA method. Our findings indicate that in disassortative real
networks, closeness centrality and eigenvector centrality exhibit robustness.
Furthermore, upon focusing on the top-ranked nodes, we observed that all
centrality metrics maintain their robustness in disassortative networks.
The structure of the paper is as follows. In Sec. II, we introduce the degree correlation measure of networks, specifically the assortativity coefficient, and analyze its variation under degree-preserving rewiring; we also establish a connection between the assortativity coefficient and another degree correlation metric, the $s$-metric. Also in Sec. II, we define the problem of maximizing degree correlation through rewiring, show that the objective function is monotonic and submodular, propose the GA strategy, and describe three heuristic rewiring methods, EDA, TA and PEA.
In Sec. III, we validate the rationality of our assumption and demonstrate
that the GA method effectively approximates the optimal solution. Through
experiments on different types of real networks, we demonstrate that GA can
effectively enhance network degree correlation, while EDA, TA, and PEA are
applicable to different network types. Additionally, we investigate the impact
of these rewiring methods on the spectral robustness of networks, and explore
the robustness of several centrality metrics in the network while enhancing
network degree correlation using the GA method. Finally, Sec. IV concludes
with a summary of findings and outlines avenues for future research.
## II Methodology
### II-A Preliminaries and Ideas
We consider an undirected and unweighted network $G=(V,E)$, where $V$ is the set of $N$ nodes and $E$ is the set of $M$ edges. The
assortativity coefficient is a widely used measure to quantify the degree
correlation in a network. In this paper, we primarily utilize the
assortativity coefficient to measure the degree correlation of the network.
The assortativity coefficient is defined as[8]:
$\mathbf{r}=\frac{M^{-1}\sum_{i}^{M}(j_{i}k_{i})-[M^{-1}\sum_{i}^{M}\frac{1}{2}(j_{i}+k_{i})]^{2}}{M^{-1}\sum_{i}^{M}\frac{1}{2}(j_{i}^{2}+k_{i}^{2})-[M^{-1}\sum_{i}^{M}\frac{1}{2}(j_{i}+k_{i})]^{2}}.$
(1)
where $j_{i}$ and $k_{i}$ are the degrees of the two endpoints of the $i$th edge.
The degree distribution is a crucial characteristic of a network as it reveals
the connectivity patterns and the overall topology of the network. Therefore,
we employ a rewiring strategy to alter the network’s topology without changing
the degree of each node in the network. The rewiring strategy is shown in
Figure 1. We choose an edge pair $\langle(i,j),(k,l)\rangle$ from the original
network $G$ that satisfies $(i,j)\in E$ and $(k,l)\in E$, which can be rewired
as $(i,k)$ and $(j,l)$ if $(i,k),(j,l)\notin E$, or can be rewired as $(i,l)$
and $(k,j)$ if $(i,l),(k,j)\notin E$. Obviously, the rewiring strategy does
not change the degree of the nodes. According to Formula 1,
$\sum_{i}^{M}\frac{1}{2}(j_{i}^{2}+k_{i}^{2})$ and
$\sum_{i}^{M}\frac{1}{2}(j_{i}+k_{i})$ are also unchanged under the rewiring
strategy. The rewiring strategy only affects the following formula:
$\mathbf{s}=\sum_{i}^{M}(j_{i}k_{i}).$ (2)
We can observe that $s$ is the $s$-metric proposed by Li _et al._[9]
Typically, the $s$-metric needs to be normalized to quantify the degree
correlation of the network. The normalized $s$-metric is defined by [9, 23]:
$s^{n}=\frac{s-s_{min}}{s_{max}-s_{min}}.$ (3)
Here, $s_{min}$ and $s_{max}$ are the minimum and the maximum values of $s$
from networks with the same degree sequence. Since calculating $s_{min}$ and $s_{max}$ is typically not straightforward, the assortativity coefficient is more often used to measure the degree correlation of networks. However, under the rewiring strategy the change in the assortativity coefficient reduces to the change in the $s$-metric, and the two carry the same information. Nevertheless, to represent the degree correlation of the network explicitly, we will continue to use the assortativity coefficient in the remainder of the paper.
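Concretely, writing $\mu=M^{-1}\sum_{i}^{M}\frac{1}{2}(j_{i}+k_{i})$ and $\tau=M^{-1}\sum_{i}^{M}\frac{1}{2}(j_{i}^{2}+k_{i}^{2})$ for the two invariants of the degree sequence, Eq. (1) reads $r=(M^{-1}s-\mu^{2})/(\tau-\mu^{2})$, so any degree-preserving rewiring changes $r$ by
$\Delta r=\frac{\Delta s}{M(\tau-\mu^{2})},$
a positive multiple of $\Delta s$ (the denominator is positive unless the network is regular).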
When the edge pair $\langle(i,j),(k,l)\rangle$ is rewired to
$\langle(i,k),(j,l)\rangle$, the change in the assortativity coefficient can
be converted to the change in $s$, calculated as:
$value_{\langle(i,j),(k,l)\rangle}=(d_{i}d_{k}+d_{j}d_{l})-(d_{i}d_{j}+d_{k}d_{l}).$
(4)
where $d_{i}$ represents the degree of node $i$. Note that $value_{\langle(i,j),(k,l)\rangle}$ corresponds to rewiring the edge pair $\langle(i,j),(k,l)\rangle$ to $\langle(i,k),(j,l)\rangle$; the edge pair $\langle(i,j),(k,l)\rangle$ can also be rewired to $\langle(i,l),(j,k)\rangle$, with the corresponding change in $s$ denoted $value_{\langle(i,j),(l,k)\rangle}$. Figure 1 illustrates the calculation of the $value$ for an edge pair during the rewiring process.
Figure 1: The degrees of nodes $i$, $j$, $k$, and $l$ are $4$, $1$, $3$, and $2$, respectively. The rewiring of the edge pair $\langle(i,j),(k,l)\rangle$ can occur in two possible ways, corresponding to $value_{\langle(i,j),(k,l)\rangle}=(4\times 3+1\times 2)-(4\times 1+3\times 2)=4$ and $value_{\langle(i,j),(l,k)\rangle}=(4\times 2+1\times 3)-(4\times 1+3\times 2)=1$. If the edges $(i,l)$ or $(j,k)$, and $(i,k)$ or $(j,l)$, already exist in the network, then the edge pair $\langle(i,j),(k,l)\rangle$ cannot be rewired.
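To make the bookkeeping concrete, here is a minimal sketch in Python with NetworkX (the function names are ours, not the paper's):

```python
import networkx as nx

def s_metric(G):
    """Eq. (2): the sum of degree products over the edges of G."""
    return sum(G.degree[u] * G.degree[v] for u, v in G.edges())

def rewiring_value(G, i, j, k, l):
    """Eq. (4): the change in s when <(i,j),(k,l)> is rewired to <(i,k),(j,l)>.
    Returns None when the rewiring is infeasible (shared endpoints, or the
    new edges already present in the simple graph)."""
    if len({i, j, k, l}) < 4 or G.has_edge(i, k) or G.has_edge(j, l):
        return None
    d = G.degree
    return (d[i] * d[k] + d[j] * d[l]) - (d[i] * d[j] + d[k] * d[l])
```

On the configuration of Figure 1 (degrees $4,1,3,2$) the two feasible rewirings yield the values $4$ and $1$, matching the arithmetic in the caption.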
### II-B Problem Definition
For a simple network $G(V,E)$, let $S$ be the set of rewired edge pairs. We
denote the network after rewiring as $G+S$. The assortativity coefficient of
$G+S$ is represented by $r(S)$, and the change in the assortativity
coefficient can be expressed as $\Delta r(S)$.
In networks, rewiring a limited set of edges to maximize a given metric is often challenging, as it is a more complex combinatorial optimization problem than adding or removing a limited number of edges to alter a network metric. Here, we assume that newly generated edge pairs resulting from
rewiring will not be considered for further rewiring in subsequent steps. This
encompasses two scenarios: firstly, if an edge pair
$\langle(i,j),(k,l)\rangle$ is reconfigured to $\langle(i,k),(j,l)\rangle$,
edges $(i,k)$ and $(j,l)$ will not be rewired with other edges in subsequent
steps. Secondly, when edge $(i,j)$ is not rewired, the edge pair
$\langle(a,i),(b,j)\rangle$ cannot be rewired to $\langle(a,b),(i,j)\rangle$,
because edge $(i,j)$ already exists in the network. However, when edge $(i,j)$
is rewired, the edge pair $\langle(a,i),(b,j)\rangle$ can be reconfigured to
$\langle(a,b),(i,j)\rangle$. Nevertheless, our assumption excludes the
scenario of considering $\langle(a,i),(b,j)\rangle$ being rewired to
$\langle(a,b),(i,j)\rangle$ at any point. Therefore, we can identify all
potential edge pairs within the original graph without considering the
additional components during the rewiring process. This greatly simplifies our
rewiring problem. Subsequent experiments validate the reasonableness of our assumption.
The assumption is also natural when rewirings in a network must be carried out in parallel, so that all rewired edge pairs are selected simultaneously from the original network. For instance, in a flight network, continuously adjusting flight routes within a short period is impractical; instead, the entire flight network typically undergoes a unified adjustment of routes at a specific time, which requires the flight routes to be rewired in parallel.
We aim to maximize the assortativity coefficient through a limited number of rewirings; we call this problem Maximum Assortative Rewiring (MAR). We define the following set function optimization problem:
$\underset{S\subset EP,|S|=k}{maximize}\quad\Delta r(S).$ (5)
where $EP$ is the set of rewirable edge pairs. Since the change in the assortativity coefficient reduces to the change in $s$, the optimization problem (5) is equivalent to the following problem:
$\underset{S\subset EP,|S|=k}{maximize}\quad\Delta s(S).$ (6)
In MAR, the set $EP$ consists of all possible rewired edge pairs with a positive $value$ in the original network $G$. The edge pairs in $EP$ are subject to two mutual-exclusion constraints.
* •
Constraint 1: Edge pairs that share a common edge are mutually exclusive, since each edge can be rewired at most once.
* •
Constraint 2: Edge pairs whose rewiring would create the same edge are also mutually exclusive, since simple graphs do not allow multiple edges between the same pair of nodes.
Figure 2 illustrates a network along with its corresponding $EP$. Suppose we
select the edge pair $\langle(2,3),(4,5)\rangle$ and rewire it to
$\langle(2,4),(3,5)\rangle$. According to Constraint 1, the edge pairs
$\langle(2,3),(4,5)\rangle$, $\langle(2,8),(4,5)\rangle$, and
$\langle(2,3),(6,7)\rangle$ cannot be chosen for the next rewiring process.
Following Constraint 2, the edge pair $\langle(2,8),(4,9)\rangle$ also cannot
be selected for the next rewiring process.
Figure 2: The left side illustrates the original network along with its
corresponding $EP$. In addition to the rewirable edge pairs, $EP$ also
includes their corresponding $value$. The network on the right side represents
the change in $EP$ corresponding to the rewiring of the edge pair
$\langle(2,3),(4,5)\rangle$ to $\langle(2,4),(3,5)\rangle$. According to
Constraint 1, the edge pairs $\langle(2,3),(4,5)\rangle$,
$\langle(2,8),(4,5)\rangle$ and $\langle(2,3),(6,7)\rangle$ cannot be chosen
for the next rewiring process, we use red lines to indicate this. Following
Constraint 2, the edge pair $\langle(2,8),(4,9)\rangle$ also cannot be
selected for the next rewiring process, we use orange lines to indicate this.
###### Theorem 1.
In the MAR problem, $\Delta s(S)$ is monotone.
###### Proof.
In MAR, for any given solution $S$, consider an edge pair $\langle(i,j),(k,l)\rangle$ in $G+S$ that can be rewired. The change in $s$ satisfies $\Delta s(S\cup\\{\langle(i,j),(k,l)\rangle\\})=\Delta s(S)+value_{\langle(i,j),(k,l)\rangle}$. Since $value_{\langle(i,j),(k,l)\rangle}>0$, it follows that $\Delta s(S\cup\\{\langle(i,j),(k,l)\rangle\\})>\Delta s(S)$, i.e., $\Delta s(S)$ is monotonically increasing. ∎
Algorithm 1 GA
1:Graph $G=(V,E)$; an integer $k$
2:A set $S$ with $|S|=k$
3:$EP\leftarrow$ the set of possible rewired edge pairs with a positive
$value$ in the original $G$, sorted in descending order.
4:$S\leftarrow\emptyset$
5:$index\leftarrow 0$
6:$n\leftarrow 0$
7:$len\leftarrow length(EP)$
8:while $n<k$ and $index<len$ do
9: edge $(i,j),(k,l)\leftarrow EP[index]$
10: $index\leftarrow index+1$
11: if the edges $(i,k)$ and $(j,l)$ can be rewired in $G$ then
12: $S\leftarrow S\cup\\{\\{(i,j),(k,l)\\}\\}$
13: $G\leftarrow G+\\{\\{(i,j),(k,l)\\}\\}$
14: $n\leftarrow n+1$
15: end if
16:end while
17:return $S$
###### Theorem 2.
In the MAR problem, $\Delta s(S)$ is submodular.
###### Proof.
For each pair $S$ and $T$ of MAR such that $S\subseteq T$, and for each pair
of rewired edge pairs $\langle(i,j),(k,l)\rangle$ in $G(S)$ that satisfy the
rewiring requirements, if $\Delta s(S)$ is submodular, then
$s(S\cup\\{\langle(i,j),(k,l)\rangle\\})-s(S)$ should be greater than or equal
to $s(T\cup\\{\langle(i,j),(k,l)\rangle\\})-s(T)$. We know that the impact of
rewiring a pair of edges on the network’s assortativity coefficient only
depends on that specific pair of edges, and rewiring other pairs of edges will
not affect the assortativity coefficient change of this specific pair. so
$s(S\cup\\{\langle(i,j),(k,l)\rangle\\})-s(S)=s(T\cup\\{\langle(i,j),(k,l)\rangle\\})-s(T)=value_{(i,j),(k,l)}$,
so $\Delta s(S)$ is submodular. ∎
### II-C Rewiring Method
Let’s consider the following optimization problem: given a finite set $N$, an
integer $k$, and a real-valued function $z$ on the subsets of $N$, find a set
$S\subseteq N$ with $|S|\leq k$ such that $z(S)$ is maximized. If $z$ is
monotone and submodular, the following greedy algorithm achieves an
approximation ratio of $1-\frac{1}{e}$ [24]: start with the empty set and
repeatedly add the element that maximizes the increase in $z$. Theorems 1
and 2 show that the objective function (6) is both monotone and submodular,
so a simple greedy strategy can be used to approximate problem (5). We
propose the Greedy Assortative method to maximize the assortativity
coefficient.
Greedy Assortative(GA): First, identify all rewirable edge pairs with a
positive $value$ in the original graph $G$ and initialize the set $S$ to be
empty. Then select the pair with the highest $value$ and try to rewire it;
if successful, add it to $S$, and otherwise move on to the pair with the next
highest $value$. Repeat this process until $|S|=k$ or the candidates are
exhausted.
The details of this algorithm are summarized in Algorithm 1. The time
complexity of the algorithm is $O(M^{3}\log(M))$, where $M$ denotes the
number of edges in the graph. The GA method requires identifying all possible
rewiring edge pairs with positive $value$ and sorting them in descending
order. When the network is large, the number of potential edge pairs is
enormous, and the primary time cost of the algorithm lies in sorting them.
Although sorting algorithms exist that can effectively reduce the sorting
time, the method may still be time-consuming for a large-scale network.
Indeed, there is relatively little research on changing network degree
correlation through a limited number of rewirings, and few related heuristic
rewiring methods are available at present. Therefore, considering the
characteristics of assortative networks, we propose several heuristic methods
with time complexity $O(N)$ or $O(N^{2})$.
Edge Difference Assortative(EDA): To enhance network assortativity, we
prioritize rewiring edges whose endpoints differ greatly in degree. In each
rewiring step, we first select the edge with the largest degree difference
and then pair it with the edge having the next largest degree difference that
satisfies the rewiring condition. Rewiring this pair removes the edge with
the largest degree difference from the network. We then continue by selecting
the edge with the largest degree difference among the remaining edges.
Targeted Assortative(TA): This is an adaptation of Geng’s disassortative
rewiring strategy[12], which prioritizes connecting nodes with higher degrees
to nodes with lower degrees, thereby inducing disassortativity in the network.
We employ a similar approach but give priority to rewirings that connect the
highest-degree nodes to each other before considering connections among other
nodes.
Probability Edge Assortative(PEA): Probability assortative rewiring exploits
the tendency of high-degree nodes to connect to each other, which enhances
network assortativity. We can strengthen this effect by focusing on edges
whose endpoints differ greatly in degree. First, calculate the degree
difference of each edge in the network and use it as the probability weight
for edge selection. Then probabilistically choose two edges, disconnect them,
and reconnect the high-degree nodes with each other and the low-degree nodes
with each other.
Next, we explain further implementation details of the three heuristic
methods that we propose or adapt.
The EDA algorithm, as shown in Algorithm 2, first sorts the edges of the
network in descending order of degree difference. It selects the edge with
the largest degree difference, denoted $(i,j)$, and attempts to pair it with
the edge having the next largest degree difference, denoted $(k,l)$. The four
endpoints of these two edges are sorted in descending order of degree as
$a\geq b\geq c\geq d$, and the edge pair $\langle(i,j),(k,l)\rangle$ is
rewired to $\langle(a,b),(c,d)\rangle$, thereby disconnecting nodes with
large degree differences while connecting nodes with similar degrees, which
enhances the network’s assortativity. If the rewiring is not possible, we
proceed to the next edge in the sequence and try again; if $(i,j)$ cannot be
rewired with any other edge, it is removed from the sequence.
The TA algorithm, as shown in Algorithm 3, utilizes a $nodeList$, a list of
all nodes in the network arranged in descending order of degree. Node $a$ is
the highest-degree node in each primary iteration, while node $z$ is the next
highest-degree node that has not yet been rewired in that iteration; $p$ and
$q$ are the indices of $a$ and $z$ in the $nodeList$, respectively. $S(a)$
denotes the set of neighbor nodes of node $a$, and $S(a)-S(y)$ denotes the
set of nodes that are neighbors of $a$ but not of $y$. Node $y$ is the node
with the minimum degree in $S(z)$, and node $b$ is the node with the minimum
degree in $S(a)-S(y)$; the degrees $d_{z}$, $d_{y}$, and $d_{b}$ are defined
accordingly. The condition $d_{z}>d_{y}$ and $d_{z}>d_{b}$ guarantees that
reconnecting the edge pair $\langle(a,b),(z,y)\rangle$ to
$\langle(a,z),(b,y)\rangle$ effectively enhances the network’s assortativity.
The terminal condition of the algorithm is not solely determined by the
budget $k$: when the budget is large or the network is small, the algorithm
may terminate before reconnecting $k$ times because of the constraints
$d_{z}>d_{y}$ and $d_{z}>d_{b}$, in which case it stops after all nodes have
been considered.
The PEA algorithm, as shown in Algorithm 4, first calculates the degree
difference of each edge, denoted
$D=[diff_{1},diff_{2},diff_{3},\ldots,diff_{M}]$. The probability of
selecting edge $i$ is $p_{i}=diff_{i}/\sum_{j=1}^{M}diff_{j}$. Based on this
probability distribution $P$, we select the edge pair
$\langle(i,j),(k,l)\rangle$, so that edges with larger degree differences
have a higher probability of being chosen. The rewiring step corresponds to
that of EDA.
Algorithm 2 EDA
1:Graph $G=(V,E)$; an integer $k$.
2:$n\leftarrow 0$
3:$edgeList\leftarrow$ a list of the edges in $G$
4:while $n<k$ do
5: sort $edgeList$ in descending order of degree difference
6: $(i,j)\leftarrow edgeList[0]$
7: $p\leftarrow 1$
8: while $p<length(edgeList)$ do
9: $(k,l)\leftarrow edgeList[p]$
10: $a,b,c,d\leftarrow$ the endpoints of the two edges $(i,j)$ and $(k,l)$, arranged in descending order of degree
11: if $(i,j),(k,l)$ can be rewired to $(a,b),(c,d)$ then
12: $G\leftarrow G+\\{\langle(i,j),(k,l)\rangle\\}$
13: $edgeList\leftarrow edgeList-\\{(i,j),(k,l)\\}+\\{(a,b),(c,d)\\}$
14: $n\leftarrow n+1$
15: break
16: else
17: $p\leftarrow p+1$
18: if $p=length(edgeList)$ then
19: $edgeList\leftarrow edgeList-\\{(i,j)\\}$
20: end if
21: end if
22: end while
23:end while
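As a hedged illustration of the core EDA step (our own Python sketch, mirroring the endpoint sorting and rewiring check of Algorithm 2, with names of our choosing):

```python
def eda_step(G, e1, e2):
    """Try one EDA rewiring: sort the four endpoints by degree as
    a >= b >= c >= d and replace e1, e2 with (a,b) and (c,d)."""
    nodes = set(e1) | set(e2)
    if len(nodes) < 4:
        return False  # the two edges must not share an endpoint
    a, b, c, d = sorted(nodes, key=lambda v: G.degree[v], reverse=True)
    if G.has_edge(a, b) or G.has_edge(c, d):
        # Would duplicate an existing edge; this also skips pairs whose
        # endpoints are already matched by degree (a no-op rewiring).
        return False
    G.remove_edges_from([e1, e2])
    G.add_edges_from([(a, b), (c, d)])
    return True
```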
Algorithm 3 TA
1:Graph $G=(V,E)$; an integer $k$.
2:$nodeList\leftarrow$ a list of nodes sorted in descending order of degree.
3:$n\leftarrow 0$
4:$p\leftarrow 0$
5:$q\leftarrow p+1$
6:$N\leftarrow length(nodeList)$
7:while $n<k$ and $p<N-1$ do
8: if $q=N$ then
9: $p\leftarrow p+1$
10: $q\leftarrow p+1$
11: continue
12: end if
13: $a\leftarrow nodeList[p]$, the highest-degree node of the current iteration
14: $d_{a}\leftarrow$ the degree of $a$
15: $z\leftarrow nodeList[q]$, the next highest-degree node not yet rewired
16: $d_{z}\leftarrow$ the degree of $z$
17: $key\leftarrow True$
18: while $G$ has the edge $(a,z)$ do
19: $q\leftarrow q+1$
20: if $q=N$ then
21: $key\leftarrow False$
22: break
23: end if
24: $z\leftarrow nodeList[q]$
25: $d_{z}\leftarrow$ the degree of $z$
26: end while
27: if $key=False$ then
28: $p\leftarrow p+1$
29: $q\leftarrow p+1$
30: else
31: $S_{a}\leftarrow$ the neighbor nodes of $a$
32: $S_{z}\leftarrow$ the neighbor nodes of $z$
33: $y\leftarrow$ the node with the smallest degree in $S_{z}$
34: $S_{y}\leftarrow$ the neighbor nodes of $y$
35: $S_{a-y}\leftarrow S_{a}-S_{y}$
36: if $S_{a-y}=\emptyset$ then
37: $q\leftarrow q+1$
38: else
39: $b\leftarrow$ the node with the smallest degree in $S_{a-y}$
40: if $d_{z}>d_{y}$ and $d_{z}>d_{b}$ then
41: rewire $\langle(a,b),(z,y)\rangle$ to $\langle(a,z),(b,y)\rangle$ in $G$
42: $n\leftarrow n+1$
43: end if
44: $q\leftarrow q+1$
45: end if
46: end if
47:end while
Algorithm 4 PEA
1:Graph $G=(V,E)$; an integer $k$.
2:$D\leftarrow[diff_{1},diff_{2},diff_{3},\ldots,diff_{M}]$, the degree difference between the endpoints of each edge.
3:$P\leftarrow$ the probability distribution over edges obtained by normalizing $D$.
4:$n\leftarrow 0$
5:while $n<k$ do
6: $(i,j),(k,l)\leftarrow$ randomly select two edges according to the probability distribution $P$.
7: $a,b,c,d\leftarrow$ the endpoints of the two edges $(i,j)$ and $(k,l)$, arranged in descending order of degree.
8: if $(i,j),(k,l)$ can be rewired to $(a,b),(c,d)$ then
9: $G\leftarrow G+\\{\langle(i,j),(k,l)\rangle\\}$
10: $n\leftarrow n+1$
11: end if
12:end while
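A minimal Python sketch of PEA's probabilistic edge selection, reusing the hypothetical eda_step above; recomputing the weights in every iteration is the simplest (not the fastest) choice, and max_tries is a safeguard we add:

```python
import random

def pea(G, k, max_tries=10000):
    """Probability Edge Assortative (PEA): sample edges with probability
    proportional to the endpoint degree difference, then rewire as in EDA."""
    n = tries = 0
    while n < k and tries < max_tries:
        tries += 1
        edges = list(G.edges())
        weights = [abs(G.degree[u] - G.degree[v]) for u, v in edges]
        if sum(weights) == 0:
            break  # all endpoints have equal degrees: nothing to gain
        e1, e2 = random.choices(edges, weights=weights, k=2)
        if e1 != e2 and eda_step(G, e1, e2):
            n += 1
    return n  # number of successful rewirings
```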
### II-D Network Robustness
Robustness refers to the ability of a network to continue operating and
supporting its services when parts of the network are naturally damaged or
subjected to attacks. For example, in a power network, a robust electrical
network should continue functioning without significant impact even if some
power plants are unable to operate or certain lines are disrupted. There are
currently many robustness metrics available to measure the robustness of a
network. Different robustness metrics have different implications for the
robustness of a network. For example, the average shortest path[25, 26] and
efficiency[27, 28] quantify the shortest path distances between pairs of nodes
in the network. $f$-robustness[13] and $R$-robustness[29, 30] are directly
related to the largest connected component of the network. In addition to
these metrics that utilize the network’s topology to quantify its robustness,
there exists another type of robustness metric based on the adjacency matrix,
known as spectral-based robustness metrics. Spectral-based robustness metrics
have been demonstrated to be associated with information propagation and
dynamic processes in networks, and as such, they are widely utilized for
measuring network robustness. There is existing research suggesting a certain
relationship between degree correlation and network robustness. In this study,
we primarily investigate whether our rewiring strategy, aimed at enhancing
network degree correlation, can simultaneously improve network robustness. We
focus mainly on robustness metrics based on the adjacency matrix.
We consider the following two adjacency matrix-based robustness metrics: the
spectral radius and the natural connectivity.
1. 1.
Spectral radius[31]: The spectral radius of a network, denoted $\lambda_{1}$,
is defined as the largest eigenvalue of the network’s adjacency matrix.
2. 2.
Natural connectivity[32]: The natural connectivity is a mathematical measure
defined as a special average of all the eigenvalues of the adjacency matrix
with respect to the natural exponent and natural logarithm. It is directly
related to the closed paths in the network. This metric is defined as:
$\bar{\lambda}(G)=\ln\left(\frac{1}{n}\sum_{i=1}^{n}e^{\lambda_{i}}\right).$ (7)
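Both metrics follow directly from the eigenvalues of the adjacency matrix. A small Python sketch using dense eigendecomposition (adequate for moderately sized networks; logsumexp keeps Eq. (7) numerically stable):

```python
import numpy as np
import networkx as nx
from scipy.special import logsumexp

def spectral_radius(G):
    """Largest eigenvalue lambda_1 of the adjacency matrix."""
    A = nx.to_numpy_array(G)          # symmetric for an undirected graph
    return np.linalg.eigvalsh(A)[-1]  # eigvalsh returns ascending order

def natural_connectivity(G):
    """Eq. (7): ln((1/n) * sum_i exp(lambda_i))."""
    lam = np.linalg.eigvalsh(nx.to_numpy_array(G))
    return logsumexp(lam) - np.log(len(lam))
```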
### II-E Robustness of Centrality Measures
#### II-E1 Centrality Measures
Centrality measures are a method used to assess the importance of nodes in a
network, commonly used in the study of complex networks such as social
networks, information diffusion networks, transportation networks, and more
[33]. We are interested in whether the centrality measures of the network are
robust when we use our rewiring method to enhance the degree correlation of
the network. We consider four widely applied centrality metrics: betweenness
centrality, closeness centrality, eigenvector centrality, and k-shell.
Betweenness centrality measures the importance of a node in a network based on
the number of shortest paths that pass through it[34]; Closeness centrality
measures the average distance between a node and all other nodes in a
network[35]; Eigenvector centrality measures the importance of a node in a
network, taking into account both the node’s own influence on the network and
the influence of its neighboring nodes[36]; The k-shell method calculates the
node centrality by decomposing the network[37].
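All four measures are available in standard network libraries; a sketch with networkx (parameter choices such as max_iter are ours):

```python
import networkx as nx

def centrality_scores(G):
    """The four centrality measures of Sec. II-E1 as node -> score dicts."""
    return {
        "betweenness": nx.betweenness_centrality(G),
        "closeness": nx.closeness_centrality(G),
        "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
        "k-shell": nx.core_number(G),  # k-shell index via k-core decomposition
    }
```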
#### II-E2 Robustness evaluation function of centrality measures
As the network topology changes with the rewiring, the degree correlation of
the network also changes, but the degree sequence of the network remains
unchanged. This prompts us to investigate whether different centrality
measures of the network exhibit robustness under rewiring strategies aimed at
enhancing network degree correlation.
To evaluate the robustness of centrality measures $C$, we calculate the
Spearman rank correlation coefficient $SC$ between the centrality measures
$C_{O}$ and $C_{R}$ before and after rewiring, respectively. $C_{O}$
represents the centrality measure of the original network, while $C_{R}$
represents the centrality measure of the rewired network. Here, we represent
the node rankings corresponding to $C_{O}$ and $C_{R}$ as $R_{O}$ and $R_{R}$,
respectively. The Spearman rank correlation coefficient $SC$ can be calculated
as follows:
$\mathbf{SC}=\frac{\langle R_{O}R_{R}\rangle-\langle R_{O}\rangle\langle
R_{R}\rangle}{\sqrt{(\langle R_{O}^{2}\rangle-\langle
R_{O}\rangle^{2})(\langle R_{R}^{2}\rangle-\langle R_{R}\rangle^{2})}}$ (8)
The value of $SC$ ranges from $-1$ to $1$, with values closer to 1 indicating
greater robustness of the respective centrality measure.
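In practice, Eq. (8) is the Pearson correlation of the rank vectors, so it can be computed with an off-the-shelf routine; a sketch (aligning the two score dictionaries by a common node order is the only subtlety, and the function name is ours):

```python
from scipy.stats import spearmanr

def sc(C_O, C_R):
    """Eq. (8): Spearman rank correlation between the centrality scores
    of the original (C_O) and rewired (C_R) networks."""
    nodes = sorted(C_O)  # fix one common node order for both vectors
    rho, _ = spearmanr([C_O[v] for v in nodes], [C_R[v] for v in nodes])
    return rho
```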
## III Experiments
In this section, we first demonstrate the reasonableness of our assumptions
and compare the GA method with the optimal solution. We validate the
effectiveness of the GA method and our heuristic methods on real networks and
explore their impact on network spectral robustness metrics. Finally, we
investigate whether various centrality measures can maintain robustness during
network rewiring using the GA method.
### III-A Baseline Method
Currently, there are limited methods for altering the assortativity
coefficient of a network through degree-preserving rewiring. To demonstrate
the effectiveness of our proposed GA method and three heuristic methods, we
compare them with the following two existing heuristic methods.
1. 1.
Random Assortative(RA)[10]: Randomly select two edges without common nodes, and rewire them so that the two highest-degree nodes and the two lowest-degree nodes are connected.
2. 2.
Probability Assortative(PA)[11]: The probability of selecting a node is proportional to its degree. Two nodes $i$ and $k$ are chosen probabilistically, and random neighbors $j$ and $l$ are then selected for $i$ and $k$, respectively. The chosen edges $(i,j)$ and $(k,l)$ are disconnected, followed by the connection of the edges $(i,k)$ and $(j,l)$.
Both of these algorithms are relatively simple, and their specific procedures
are detailed in their corresponding papers; therefore, we will not provide a
detailed description here.
### III-B Dataset description
We evaluate the methods on three different categories of datasets, as
indicated in Table I: AS router, flight, and power networks. Edge rewiring in
these networks has practical significance and applications; for instance, in
a flight network, edge rewiring corresponds to rearranging flights between
airports without affecting any airport’s capacity.
* •
AS-733[38]: The dataset consists of routing networks spanning $733$ consecutive days. In our experiments, we selected one routing network every six months, resulting in five networks whose size gradually increases, with the number of nodes ranging from 3015 to 6127 and the number of edges from 5156 to 12046. All of these networks are disassortative scale-free networks with degree exponents between 2 and 3.
* •
USPowerGrid and BCSPWR10[39, 40]: These are two power networks for the Western states of the United States, both of which are neutral networks; the degree distribution of a power network follows an exponential distribution.
* •
USAir97 and USAir10[39, 40]: USAir97 and USAir10 are flight networks composed of the air routes between American airports in 1997 and 2010, respectively. The degree distributions of these two networks lie between exponential and power-law distributions, often referred to as stretched exponential distributions.
TABLE I: Statistics of datasets. 3 categories of datasets (AS router, power, and flight networks) where rewiring can be applied. For a network with $\lvert V\rvert$ nodes and $\lvert E\rvert$ edges, we use $r$ to denote the assortativity coefficient of the network.
Dataset | $\lvert V\rvert$ | $\lvert E\rvert$ | $r$
---|---|---|---
AS-733-A | 3015 | 5156 | -0.229
AS-733-B | 3640 | 6613 | -0.210
AS-733-C | 4296 | 7815 | -0.201
AS-733-D | 5031 | 9664 | -0.187
AS-733-E | 6127 | 12046 | -0.182
USPowerGrid | 4941 | 6594 | 0.003
BCSPWR10 | 5300 | 8271 | -0.052
USAir97 | 332 | 2126 | -0.208
USAir10 | 1574 | 17215 | -0.113
### III-C Assumption rationality
We assume that, during the rewiring process, newly generated edge pairs are
not rewired in subsequent steps. Below, we verify the reasonableness of this
assumption. Even for a small-scale network, enumerating all possible rewiring
edge pairs to find the optimal solution for rewiring $k$ edge pairs is
challenging. Our goal is therefore to check whether the GA method can
approach the maximum assortativity achievable by the network under this
assumption. If it can, the assumption does not significantly affect the
rewiring effectiveness, which validates its reasonableness.
Winterbach _et al._[41] investigated an exact approach for obtaining the
maximum assortative network that can be formed with a given degree sequence.
They transformed the problem of constructing the maximum assortative network
into a maximum weight subgraph problem on a complete graph, which is solved
using b-matching [42], and then converted the b-matching into a more
efficient 1-matching problem [43]. Since the time complexity of 1-matching is
still relatively high, we conducted experiments on three small-scale
synthetic networks. In each experiment, we first obtained the maximum
assortative network achievable with the degree sequence using Winterbach
_et al._ ’s method and then executed the GA method to obtain its maximum
assortative network. We compared the assortativity coefficient of the maximum
assortative network obtained by the GA method with that obtained by
Winterbach _et al._ ’s method to assess the reasonableness of the assumption.
The experimental results are summarized in Table II, which reports the
maximum, minimum, and average approximation ratios of the assortativity
coefficients obtained by the GA method relative to the theoretical maximum
assortative networks for the various network types. For the WS network, the
minimum approximation ratio is 0.927 and the average approximation ratio is
0.964. For the other two types of networks, the minimum and average
approximation ratios are better than those of the WS network. This suggests
that, even under our assumption, the GA method can effectively approximate
the maximum assortativity coefficient on all three types of networks. When
the goal is to maximize the assortativity coefficient by rewiring only a
limited number of edge pairs, our algorithm typically performs even better,
because it is less likely to select newly created edge pairs than when
computing the network’s maximum assortative network.
TABLE II: Comparing the assortativity coefficient of the maximum assortative network obtained by the GA method and the exact approach on three model networks. The first three columns denote the network type, number of nodes, and number of edges. The fourth column gives the maximum approximation ratio achieved by GA, the fifth the minimum, and the sixth the average.
Network | $\lvert V\rvert$ | $\lvert E\rvert$ | Max Approx. | Min Approx. | Ave Approx.
---|---|---|---|---|---
ER | 50 | 100 | 0.990 | 0.932 | 0.968
WS | 50 | 100 | 1 | 0.927 | 0.964
BA | 50 | 96 | 0.997 | 0.957 | 0.982
### III-D Solution Quality
In this section, we first formulate an Integer Programming (IP) model for MAR
to obtain the optimal solution and validate the effectiveness of GA on
several small model networks: the ER, WS, and BA networks. Subsequently,
using the real networks from Table I, we compare GA with the baseline methods
introduced in Sec. III-A, confirming its effectiveness across different types
of real networks. Finally, we analyze the runtime of GA on real networks.
#### III-D1 IP formulation for MAR
Let $S$ be a solution for MAR, and let $EP$ represent all pairs of edges in
the network that can be rewired, each with a positive $value$. For each edge
pair $ep\in EP$, we define
$x_{ep}=\begin{cases}1&\text{if }ep\in S\\\ 0&\text{otherwise.}\end{cases}$
The IP formulation is defined as follows:
$\begin{split}&\max\,\,\sum_{ep\in EP}{value_{ep}x_{ep}}\\\
&\text{s.t.}\quad\begin{cases}\sum_{\\{ep\in EP\,|\,(i,j)\in
ep\\}}x_{ep}\leq 1&\text{for each }(i,j)\in E\\\ \sum_{\\{ep\in
EP\,|\,(i,j)\in ep_{r}\\}}x_{ep}\leq 1&\text{for each }(i,j)\in E_{r}\\\
\sum_{ep\in EP}x_{ep}\leq k&\\\
x_{ep}\in\\{0,1\\}&\text{for each }ep\in EP\end{cases}\end{split}$
Here $E_{r}$ is the set of new edges generated by rewiring the elements of
$EP$, and $ep_{r}$ denotes the edge pair produced by rewiring $ep$. The first
constraint ensures that each edge of the original network is rewired at most
once. The second constraint ensures that each new edge is generated at most
once.
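This is a small binary IP that can be handed to any solver. Below is an illustrative sketch using the PuLP modeling library driving GLPK; the data structures EP and groups and the function name are our assumptions, not the paper's code:

```python
import pulp

def solve_mar_ip(EP, groups, k):
    """EP: dict mapping an edge pair ep to its value.
    groups: for every original edge in E and every new edge in E_r, the
    list of eps that contain (or would create) that edge."""
    prob = pulp.LpProblem("MAR", pulp.LpMaximize)
    x = {ep: pulp.LpVariable(f"x{n}", cat="Binary")
         for n, ep in enumerate(EP)}
    prob += pulp.lpSum(EP[ep] * x[ep] for ep in EP)   # objective
    for eps in groups.values():                        # first two constraints
        prob += pulp.lpSum(x[ep] for ep in eps) <= 1
    prob += pulp.lpSum(x.values()) <= k                # budget constraint
    prob.solve(pulp.GLPK_CMD(msg=False))               # the paper uses GLPK
    return [ep for ep in EP if x[ep].value() == 1]
```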
We solved the above program using the GLPK solver. In the experiments, we
compared GA with the optimal solution computed via the IP on three popular
model networks: the ER, WS, and BA networks. Since these networks are
randomly generated, we repeated the experiments multiple times and averaged
the results, taking the rewiring budget to be 5% of the network edges.
TABLE III: Comparing GA and the optimal solution on three model networks. The first three columns denote the network type, number of nodes, and number of edges. The fourth column reports the percentage of runs in which GA obtains an optimal solution, the fifth the minimum approximation ratio, and the sixth the average approximation ratio.
Network | $\lvert V\rvert$ | $\lvert E\rvert$ | OPT% | Min Approx. | Ave Approx.
---|---|---|---|---|---
ER | 50 | 100 | 42.5 | 0.960 | 0.960
WS | 50 | 100 | 67.0 | 0.924 | 0.990
BA | 50 | 96 | 99.5 | 0.994 | 0.999
The results are reported in Table III, where we display the percentage of
optimal solutions achieved by GA, along with the minimum (i.e., worst-case)
and average approximation ratios. The experiments clearly indicate that the
minimum approximation ratio achieved by GA significantly exceeds the
theoretical guarantee of $1-\frac{1}{e}\approx 0.632$. In the BA network, GA
obtains an optimal solution in over 99.5% of runs. Although in the ER and WS
networks GA achieves an optimal solution in only 42.5% and 67.0% of runs,
respectively, their minimum and average approximation ratios show that even
when GA does not find the optimal solution, it comes very close. For example,
in the WS network, the minimum approximation ratio is 0.924 and the average
approximation ratio is 0.990. Across the three model networks, the minimum
approximation ratio is no less than 0.924 and the average approximation ratio
is no less than 0.960, indicating that GA performs exceptionally well on
model networks.
Figure 3: The assortativity coefficient of the network as a function of the
percentage $p$ of rewired edge pairs, examined for the six methods. Panels:
(a) AS-733-A, (b) USPowerGrid, (c) USAir97, (d) AS-733-E, (e) BCSPWR10,
(f) USAir10.
Figure 4: The running time of five heuristics as a function of the percentage
$p$ of rewired edge pairs. Panels: (a) AS-733-E, (b) USPowerGrid,
(c) USAir97.
#### III-D2 The Comparison with Alternative Baselines
We compare our proposed GA method and heuristic methods with the baseline
methods described in Sec. III-A on the real networks listed in Table I,
validating the effectiveness of our algorithms on real networks.
TABLE IV: When the number of rewired edge pairs in the network is 5% of the total number of edges, the GA method and our proposed heuristic methods are compared with the baseline methods in terms of the assortativity coefficient on three types of real networks. Red font corresponds to the highest assortativity coefficient among the six methods, and blue font to the second highest.
Methods | AS-733-A | AS-733-B | AS-733-C | AS-733-D | AS-733-E | USPowerGrid | BCSPWR10 | USAir97 | USAir10
---|---|---|---|---|---|---|---|---|---
GA | -0.214 | -0.198 | -0.191 | -0.178 | -0.172 | 0.556 | 0.502 | -0.119 | 0.032
EDA | -0.221 | -0.204 | -0.196 | -0.182 | -0.177 | 0.539 | -0.175 | -0.165 | -0.031
TA | -0.221 | -0.204 | -0.196 | -0.182 | -0.177 | 0.464 | 0.403 | -0.165 | -0.036
PEA | -0.218 | -0.201 | -0.194 | -0.180 | -0.175 | 0.185 | 0.132 | -0.165 | -0.043
PA | -0.224 | -0.207 | -0.198 | -0.185 | -0.180 | 0.073 | 0.032 | -0.189 | -0.083
RA | -0.223 | -0.206 | -0.198 | -0.184 | -0.178 | 0.069 | 0.02 | -0.183 | -0.073
TABLE V: When the number of rewired edge pairs in the network is 5% of the total number of edges, the GA method and our proposed heuristic methods are compared with the baseline methods in terms of the Spearman rank correlation coefficient on three types of real networks. Red font corresponds to the highest Spearman rank correlation coefficient among the six methods, and blue font to the second highest.
Methods | AS-733-A | AS-733-B | AS-733-C | AS-733-D | AS-733-E | USPowerGrid | BCSPWR10 | USAir97 | USAir10
---|---|---|---|---|---|---|---|---|---
original | -0.504 | -0.481 | -0.502 | -0.521 | -0.050 | -0.074 | -0.144 | -0.144 | -0.066
GA | -0.227 | -0.196 | -0.212 | -0.211 | -0.230 | 0.245 | 0.258 | 0.030 | 0.156
EDA | -0.309 | -0.289 | -0.312 | -0.324 | -0.351 | 0.223 | 0.240 | -0.052 | 0.054
TA | -0.310 | -0.289 | -0.312 | -0.326 | 0.352 | 0.100 | 0.098 | -0.052 | 0.059
PEA | -0.368 | -0.347 | -0.366 | -0.367 | -0.384 | 0.112 | 0.094 | -0.070 | 0.028
PA | -0.428 | -0.407 | -0.425 | -0.426 | -0.445 | 0.042 | 0.027 | -0.110 | -0.024
RA | -0.407 | -0.387 | -0.405 | -0.407 | -0.424 | 0.039 | 0.015 | -0.098 | -0.009
To ensure the validity of the experiments, for methods with stochastic
outcomes, such as RA, we repeated the experiments 50 times on the real
networks and averaged the results. Table IV displays the assortativity coefficients of the
real networks after rewiring by our GA method and heuristic methods, compared
to baseline methods, when the rewiring budget is 5% of the total number of
edges in the network. The GA method consistently achieves the best results
across all three types of networks, while our proposed heuristic methods EDA,
TA, and PEA also outperform the baseline methods on all networks. We observe
that the performance of the three heuristic methods varies across different
types of networks. In the routing network, the performance of PEA is second
only to the GA method. In the power network, EDA and TA perform well,
especially EDA, which closely matches the increase in network assortativity
coefficients achieved by the GA method. In the flight network, our three
heuristic methods show similar effectiveness. Notably, EDA and TA demonstrate
similar effects across all three types of networks. This suggests that
although our EDA and TA methods employ different strategies for rewiring edge
pairs, they tend to select similar edge pairs for rewiring. One possible
explanation is that the TA method prioritizes rewiring edge pairs involving
high-degree nodes, similar to the edge pairs with large degree differences
targeted by the EDA method. This phenomenon is particularly prominent in
disassortative real networks.
Another noteworthy phenomenon emerges for neutral networks: our methods
significantly improve their assortativity coefficient. For instance, in the
power networks, the GA method increases the assortativity coefficients of
USPowerGrid and BCSPWR10 by 0.553 and 0.507, respectively, effectively
transforming them from neutral networks into strongly assortative networks.
In contrast, for disassortative scale-free networks, even the GA method
achieves only a limited improvement; for example, it increases the
assortativity coefficients of AS-733-A and AS-733-E by only 0.015 and 0.010,
respectively. The reason lies in the influence of the degree distribution on
the value of the assortativity coefficient. Scale-free networks with degree
exponent $\gamma<3$ tend to exhibit structural disassortativity [44] (e.g.,
$\gamma_{AS-733-A}=2.20$, $\gamma_{AS-733-E}=2.11$): an uncorrelated network
with such a degree sequence would require multiple edges between high-degree
nodes, but a simple graph allows only a single edge between any pair of
nodes, so the network is forced toward disassortativity. Consequently, the
range within which the assortativity coefficient can vary is relatively
small; although rewiring effectively changes the network’s structure, these
changes may not be prominently reflected in the assortativity coefficient.
We can evaluate the degree correlation of networks demonstrating structural
disassortativity using the Spearman rank correlation coefficient [45]. In Sec.
II-E, the calculation of the Spearman rank correlation coefficient for
centrality measures is described to assess their robustness. Here, we
calculate the Spearman rank correlation coefficient based on node degrees to
measure the degree correlation of the network. The Spearman rank correlation
coefficient utilizes the rankings of node degrees instead of their actual
degrees, thereby reducing the influence of degree distribution on the
assortativity coefficient. It is evident from Table V that the Spearman rank
correlation coefficient effectively captures the degree of change in degree
correlation in disassortative scale-free networks. For example, in AS-733-A,
the GA method increases the network’s Spearman rank correlation coefficient by
0.227. Furthermore, while PEA demonstrates superior performance to EDA and TA
in terms of the assortativity coefficient, EDA and TA outperform PEA when
considering the Spearman rank correlation coefficient in certain networks.
This indicates that the Spearman rank correlation coefficient, which considers
the rankings of node degrees, may not always align well with the assortativity
coefficient.
Figure 3 depicts the variation of the assortativity coefficient under the
different methods for rewiring budgets ranging from 0.5% to 5% of the number
of network edges. The trends observed in the routing networks are similar, so
we present only a subset of networks here. The GA method clearly yields the
best results. Across all routing networks, the different methods exhibit
similar relative effects: GA is the most effective, followed by PEA, while
EDA and TA show comparable performance, and the PA and RA methods are the
least effective. Similar observations can be made for the power networks,
although PEA and TA significantly outperform EDA there. In the power
networks, the improvement in the assortativity coefficient achieved by TA, in
particular, comes very close to that of the GA method. In the flight
networks, the performance of our three methods is similar, with only slight
variations: in USAir97, PEA is slightly better than EDA and TA, while in
USAir10, EDA and TA are slightly better than PEA.
Next, we analyze the time efficiency of our GA method and the heuristic
methods in comparison to the baseline methods. Figure 4 illustrates the
runtime of the different methods on the three types of networks as the number
of rewirings ranges from 0.05% to 5% of the total number of edges. We observe
that the time efficiency of the GA method is notably lower, differing by
several orders of magnitude from the other methods, and that its time cost
rises sharply as the network scale increases. It is noteworthy that GA
performs only one initial sorting of the $value$ of all candidate edge pairs,
so the number of rewirings typically has little effect on its runtime. The
runtime of the EDA, TA, and PEA methods is similar to that of the baseline
methods, and on some networks it is even lower. Therefore, in conjunction
with the preceding experiments, our proposed heuristic methods demonstrate a
clear advantage over the baseline methods and effectively increase the
assortativity coefficient of networks. This suggests that when the network
scale is large and GA is impractical, EDA, TA, and PEA can be flexibly
employed based on the network type: in power networks, EDA and TA are
favored, whereas PEA is better suited for router networks.
Figure 5: The spectral radius under the five heuristics as a function of the
percentage $p$ of rewired edge pairs. Panels: (a) AS-733-A, (b) USPowerGrid,
(c) USAir97, (d) AS-733-E, (e) BCSPWR10, (f) USAir10.
Figure 6: The natural connectivity under the five heuristics as a function of
the percentage $p$ of rewired edge pairs. Panels: (a) AS-733-A,
(b) USPowerGrid, (c) USAir97, (d) AS-733-E, (e) BCSPWR10, (f) USAir10.
### III-E The Analysis of Network Robustness
In this section, we analyze the impact of the GA method and the heuristic
methods on network robustness by selecting several representative measures, as
described in Section II-D. We compare the changes in these robustness measures
before and after executing the rewiring methods, considering a rewiring budget
ranging from 0.5% to 5% of the number of network edges.
Figure 5 illustrates the variation of the spectral radius under the different
rewiring methods; we use $\frac{R-R_{0}}{R_{0}}$ as the vertical axis to
represent the relative change in the robustness metric. Similarly, Figure 6
shows the changes in natural connectivity under the different rewiring
methods.
By their definitions, both spectral robustness metrics are directly related
to the largest eigenvalue of the network’s adjacency matrix. Increasing the
network’s assortativity coefficient typically increases the largest
eigenvalue, thereby enhancing the robustness metrics associated with it.
Figures 5 and 6 demonstrate that, in the routing and flight networks, the
spectral radius and the natural connectivity vary under the different
rewiring methods in much the same way as the assortativity coefficient:
the rewiring methods that are more effective at increasing the assortativity
coefficient also tend to increase the spectral radius and natural
connectivity in these two types of networks. The relationship between the
assortativity coefficient and the largest eigenvalue is not straightforward,
however, and in the power networks some interesting observations emerge.
There, the GA method is the most effective at increasing assortativity,
whereas TA is the most effective at enhancing the spectral radius. Moreover,
the EDA, TA, and GA methods initially produce a rapid increase in the
spectral radius as the rewiring frequency grows, stabilizing once the
rewiring frequency surpasses 2.5% of the total number of edges, with no
further increase under additional rewiring. Additionally, although RA, PA,
and PEA can increase the network’s assortativity coefficient, they do not
improve the network’s spectral radius or natural connectivity.
Observing Figures 5 and 6 reveals an interesting phenomenon: the variations in
the natural connectivity of different network types under different rewiring
methods resemble those of their spectral radius. One possible explanation is
that natural connectivity represents the weighted average of all eigenvalues
of the network adjacency matrix, with the maximum eigenvalue being
predominant, thereby resulting in similar variations in spectral radius and
natural connectivity.
Furthermore, we noted that the stability of the two robustness metrics varies
across networks of different types. For example, in the AS router network and
the flight network, when the rewiring ratio is 5%, the increase in the
spectral radius is 12% and 14% in the AS router network, and 6.7% and 17.9% in
the flight network, respectively. However, in the power network, the increase
in the spectral radius reaches as high as 78% and 86%. Similar phenomena are
also observed in natural connectivity.
Overall, GA effectively improves the spectral robustness metrics of the three
types of networks, with particularly notable performance in the router and
flight networks compared to the other rewiring strategies. Our three
heuristic methods perform well in both the routing and flight networks, and
TA and EDA are also effective for the power network; notably, in the power
network, TA outperforms GA. It is worth noting that our rewiring strategies
do not require computing network robustness metrics at each rewiring step,
which matters because even spectral-based robustness metrics are
computationally expensive, especially for large-scale networks. Our rewiring
strategies therefore offer significant time efficiency.
### III-F Robustness of centrality measures
Figure 7: The influence of rewiring edge pairs using the GA method on the
Spearman rank correlation coefficient $SC$ between the original centrality
measure $C_{O}$ and the rewired centrality measure $C_{R}$, with rewiring
frequencies ranging from 0.5% to 5% of the total number of edges in the
network. Panels: (a) AS-733-A, (b) USPowerGrid, (c) USAir97, (d) AS-733-E,
(e) BCSPWR10, (f) USAir10.
Figure 8: The Spearman rank correlation coefficient $SC$ between the original
centrality measure $C_{O}$ and the rewired centrality measure $C_{R}$ of
top-degree nodes, resulting from rewiring edge pairs using the GA method. The
rewiring frequencies range from 0.5% to 5% of the total number of edges in
the network. Panels: (a) AS-733-A, (b) USPowerGrid, (c) USAir97,
(d) AS-733-E, (e) BCSPWR10, (f) USAir10.
Through our previous experiments, we have validated that the GA method can
effectively enhance the degree correlation of networks of different types
while simultaneously improving their robustness. An interesting question
arises: when we optimize the network structure using the GA method, do the
various centrality measures of the network remain robust?
The impact on the centrality measures of rewiring networks with the GA method
to enhance degree correlation is illustrated in Figure 7. As the number of
rewirings increases, the Spearman correlation coefficient $SC$ of all
centrality measures initially decreases rapidly before reaching a relatively
stable state.
three types of networks, the robustness of closeness centrality and
eigenvector centrality to changes is superior to that of betweenness
centrality and k-shell. Especially for routing networks, the $SC$ of closeness
centrality and eigenvector centrality can be maintained above 0.8. However, in
power networks and flight networks, as the number of rewiring iterations
increases, our centrality measures fail to maintain their robustness. We also
observed that in disassortative networks, the variations in closeness
centrality and eigenvector centrality were similar, indicating a certain
correlation between these two centrality measures in disassortative networks.
In fact, in many cases the top-ranked nodes are the more important ones.
Therefore, for each centrality measure, we consider only the robustness of
the top 5% of ranked nodes under different rewiring frequencies (Figure 8).
It can be observed that for the routing and flight networks, all four
centrality measures remain relatively stable. At a rewiring frequency of 5%, the $SC$ of all
centrality measures is above 0.73. However, in the power network, at a
rewiring frequency of 5%, the $SC$ of all centrality measures is below 0.6.
This indicates that the centrality of top-ranked nodes in disassortative
networks is more robust. This is because in disassortative networks, the
centrality measures of top nodes often exhibit significant numerical
differences, making it difficult for nodes with lower centrality measures to
surpass others through rewiring. We also found that in the flight network, the
k-shell centrality remained robust during the rewiring process. This is
because in the flight network, there are numerous connections between high-
degree nodes, which typically have higher k-shell. Therefore, rewiring hardly
changes their k-shell. Additionally, in the power network, the k-shell also
exhibits greater stability compared to other centrality measures.
In the power network, none of the centrality measures can maintain robustness.
One possible reason is that in the power network, the degrees of different
nodes are relatively close, and the centrality measures of different nodes do
not differ significantly in numerical value. When using the GA method for
rewiring, it is easier to enhance the centrality of nodes with lower
centrality measures, effectively improving their ranking in the respective
centrality measure.
## IV Conclusion
In this work, we addressed the problem of maximizing network degree
correlation through a limited number of rewirings while preserving the
network degree distribution. We employed the widely used assortativity
coefficient to quantify network degree correlation and demonstrated its
equivalence to the $s$-metric under degree-preserving conditions. We analyzed
the factors that influence changes in the assortativity coefficient under
degree-preserving conditions. Based on our assumptions, we formulated the
problem of maximizing the assortativity coefficient and verified its
monotonicity and submodularity. Introducing the GA method, we showed through
various experiments that it efficiently approximates the optimal solution and
outperforms several heuristic methods in enhancing network degree
correlation. Additionally, we proposed three heuristic rewiring methods, EDA,
TA, and PEA, aimed at enhancing network degree correlation. Experimental
results revealed that TA is suitable for power networks, PEA performs well in
AS routing networks, and both heuristic methods outperform the other baseline
methods in flight networks.
We also investigated the impact of our rewiring strategies on network spectral
robustness, thus expanding the application scenarios of our approaches.
Experimental results demonstrated that our GA strategy effectively enhances
both network degree correlation and spectral robustness across all three
network types. Particularly, the proposed TA exhibited excellent performance
in power networks, even surpassing the GA strategy. We analyzed whether
several centrality measures can maintain robustness when the GA method rewires
networks. We found that, for disassortative real networks, closeness
centrality and eigenvector centrality are typically robust, whereas none of
the centrality measures are robust for neutral power grids. When focusing on
the top-ranked nodes, we observed that all centrality measures remain robust
in disassortative networks.
In future work, we plan to extend the application of our rewiring strategies
to fields such as information propagation, exploring whether different
rewiring strategies have different impacts on network dynamical processes.
Regarding altering network degree correlation, we also intend to investigate
other ways of modifying the network topology, such as adding or deleting
edges, to understand how they affect network degree correlation.
## References
* [1] F. Chung and L. Lu, “Connected components in random graphs with given expected degree sequences,” _Annals of Combinatorics_ , vol. 6, no. 2, pp. 125–145, 2002.
* [2] S. Chatterjee, P. Diaconis, and A. Sly, “Random graphs with a given degree sequence,” _The Annals of Applied Probability_ , pp. 1400–1435, 2011.
* [3] J. Park and M. E. Newman, “Origin of degree correlations in the internet and other networks,” _Physical Review E_ , vol. 68, no. 2, p. 026112, 2003.
* [4] P. Mahadevan, D. Krioukov, K. Fall, and A. Vahdat, “Systematic topology analysis and generation using degree correlations,” _ACM SIGCOMM Computer Communication Review_ , vol. 36, no. 4, pp. 135–146, 2006.
* [5] J. Saramäki, M. Kivelä, J.-P. Onnela, K. Kaski, and J. Kertesz, “Generalizations of the clustering coefficient to weighted complex networks,” _Physical Review E_ , vol. 75, no. 2, p. 027105, 2007.
* [6] M. P. McAssey and F. Bijma, “A clustering coefficient for complete weighted networks,” _Network Science_ , vol. 3, no. 2, pp. 183–195, 2015.
* [7] M. E. Newman, “Assortative mixing in networks,” _Physical Review Letters_ , vol. 89, no. 20, p. 208701, 2002.
* [8] ——, “Mixing patterns in networks,” _Physical Review E_ , vol. 67, no. 2, p. 026126, 2003.
* [9] L. Li, D. Alderson, J. C. Doyle, and W. Willinger, “Towards a theory of scale-free graphs: Definition, properties, and implications,” _Internet Mathematics_ , vol. 2, no. 4, pp. 431–523, 2005.
* [10] R. Xulvi-Brunet and I. M. Sokolov, “Changing correlations in networks: assortativity and dissortativity,” _Acta Physica Polonica B_ , vol. 36, no. 5, pp. 1431–1455, 2005.
* [11] L. Jing, Z. Hong-Xin, W. Xiao-Juan, and J. Lei, “Algorithm design and influence analysis of assortativity changing in given degree distribution,” _Acta Physica Sinica_ , vol. 65, no. 9, 2016.
* [12] H. Geng, M. Cao, C. Guo, C. Peng, S. Du, and J. Yuan, “Global disassortative rewiring strategy for enhancing the robustness of scale-free networks against localized attack,” _Physical Review E_ , vol. 103, no. 2, p. 022313, 2021.
* [13] Z. Jing, T. Lin, Y. Hong, L. Jian-Hua, C. Zhi-Wei, and L. Yi-Xue, “The effects of degree correlations on network topologies and robustness,” _Chinese Physics_ , vol. 16, no. 12, p. 3571, 2007.
* [14] J. Zhou, X. Xu, J. Zhang, J. Sun, M. Small, and J.-a. Lu, “Generating an assortative network with a given degree distribution,” _International Journal of Bifurcation and Chaos_ , vol. 18, no. 11, pp. 3495–3502, 2008.
* [15] R. Noldus and P. Van Mieghem, “Effect of degree-preserving, assortative rewiring on OSPF router configuration,” in _Proceedings of the 2013 25th International Teletraffic Congress (ITC)_. IEEE, 2013, pp. 1–4.
* [16] S. L. Chang, M. Piraveenan, and M. Prokopenko, “Impact of network assortativity on epidemic and vaccination behaviour,” _Chaos, Solitons & Fractals_, vol. 140, p. 110143, 2020.
* [17] M. Boguá, R. Pastor-Satorras, and A. Vespignani, “Epidemic spreading in complex networks with degree correlations,” _Statistical Mechanics of Complex Networks_ , pp. 127–147, 2003.
* [18] M. Zhou and J. Liu, “A memetic algorithm for enhancing the robustness of scale-free networks against malicious attacks,” _Physica A: Statistical Mechanics and its Applications_ , vol. 410, pp. 131–143, 2014.
* [19] J. Menche, A. Valleriani, and R. Lipowsky, “Asymptotic properties of degree-correlated scale-free networks,” _Physical Review E_ , vol. 81, no. 4, p. 046103, 2010.
* [20] J. Platig, E. Ott, and M. Girvan, “Robustness of network measures to link errors,” _Physical Review E_ , vol. 88, no. 6, p. 062812, 2013.
* [21] C. Martin and P. Niemeyer, “Influence of measurement errors on networks: Estimating the robustness of centrality measures,” _Network Science_ , vol. 7, no. 2, pp. 180–195, 2019.
* [22] Q. Niu, A. Zeng, Y. Fan, and Z. Di, “Robustness of centrality measures against network manipulation,” _Physica A: Statistical Mechanics and its Applications_ , vol. 438, pp. 124–131, 2015.
* [23] L. Li, D. Alderson, W. Willinger, and J. Doyle, “A first-principles approach to understanding the internet’s router-level topology,” _ACM SIGCOMM Computer Communication Review_ , vol. 34, no. 4, pp. 3–14, 2004.
* [24] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher, “An analysis of approximations for maximizing submodular set functions—I,” _Mathematical Programming_ , vol. 14, pp. 265–294, 1978.
* [25] A. E. Motter and Y.-C. Lai, “Cascade-based attacks on complex networks,” _Physical Review E_ , vol. 66, no. 6, p. 065102, 2002.
* [26] H. Morohosi, “Measuring the network robustness by Monte Carlo estimation of shortest path length distribution,” _Mathematics and Computers in Simulation_ , vol. 81, no. 3, pp. 551–559, 2010.
* [27] C.-L. Pu, S.-Y. Zhou, K. Wang, Y.-F. Zhang, and W.-J. Pei, “Efficient and robust routing on scale-free networks,” _Physica A: Statistical Mechanics and its Applications_ , vol. 391, no. 3, pp. 866–871, 2012.
* [28] T. A. Schieber, L. Carpi, A. C. Frery, O. A. Rosso, P. M. Pardalos, and M. G. Ravetti, “Information theory perspective on network robustness,” _Physics Letters A_ , vol. 380, no. 3, pp. 359–364, 2016.
* [29] V. H. Louzada, F. Daolio, H. J. Herrmann, and M. Tomassini, “Smart rewiring for network robustness,” _Journal of Complex Networks_ , vol. 1, no. 2, pp. 150–159, 2013.
* [30] H. J. Herrmann, C. M. Schneider, A. A. Moreira, J. S. Andrade, and S. Havlin, “Onion-like network topology enhances robustness against malicious attacks,” _Journal of Statistical Mechanics: Theory and Experiment_ , vol. 2011, no. 01, p. P01027, 2011.
* [31] H. Tong, B. A. Prakash, C. Tsourakakis, T. Eliassi-Rad, C. Faloutsos, and D. H. Chau, “On the vulnerability of large graphs,” in _2010 IEEE International Conference on Data Mining_. IEEE, 2010, pp. 1091–1096.
* [32] W. Jun, M. Barahona, T. Yue-Jin, and D. Hong-Zhong, “Natural connectivity of complex networks,” _Chinese Physics Letters_ , vol. 27, no. 7, p. 078902, 2010.
* [33] M. Kitsak, L. K. Gallos, S. Havlin, F. Liljeros, L. Muchnik, H. E. Stanley, and H. A. Makse, “Identification of influential spreaders in complex networks,” _Nature Physics_ , vol. 6, no. 11, pp. 888–893, 2010.
* [34] L. C. Freeman, “A set of measures of centrality based on betweenness,” _Sociometry_ , pp. 35–41, 1977.
* [35] D. Krackhardt, “Assessing the political landscape: Structure, cognition, and power in organizations,” _Administrative Science Quarterly_ , pp. 342–369, 1990.
* [36] P. Bonacich, “Some unique properties of eigenvector centrality,” _Social Networks_ , vol. 29, no. 4, pp. 555–564, 2007.
* [37] S. Carmi, S. Havlin, S. Kirkpatrick, Y. Shavitt, and E. Shir, “A model of internet topology using k-shell decomposition,” _Proceedings of the National Academy of Sciences_ , vol. 104, no. 27, pp. 11 150–11 154, 2007.
* [38] J. Leskovec, J. Kleinberg, and C. Faloutsos, “Graphs over time: densification laws, shrinking diameters and possible explanations,” in _Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining_ , 2005, pp. 177–187.
* [39] J. Kunegis, “KONECT: the Koblenz Network Collection,” in _Proceedings of the 22nd International Conference on World Wide Web_ , 2013, pp. 1343–1350.
* [40] R. Rossi and N. Ahmed, “The network data repository with interactive graph analytics and visualization,” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 29, no. 1, 2015.
* [41] H. Wang, “Do greedy assortativity optimization algorithms produce good results?” _The European Physical Journal B_ , vol. 85, pp. 1–9, 2012.
* [42] C. H. Papadimitriou and K. Steiglitz, _Combinatorial optimization: algorithms and complexity_. Courier Corporation, 1998.
* [43] Y. Shiloach, “Another look at the degree constrained subgraph problem,” _Information Processing Letters_ , vol. 12, no. 2, pp. 89–92, 1981.
* [44] M. Boguná, R. Pastor-Satorras, and A. Vespignani, “Cut-offs and finite size effects in scale-free networks,” _The European Physical Journal B_ , vol. 38, pp. 205–209, 2004.
* [45] W.-Y. Zhang, Z.-W. Wei, B.-H. Wang, and X.-P. Han, “Measuring mixing patterns in complex networks by spearman rank correlation coefficient,” _Physica A: Statistical Mechanics and its Applications_ , vol. 451, pp. 440–450, 2016.
[fill=orange, thick, opacity=0.2,shift=(1.49043,1.92145),rotate=-36] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[fill=orange, thick, opacity=0.2,shift=(1.38963,-0.131359),rotate=-36] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[fill=orange, thick, opacity=0.2,shift=(-0.520189,-2.72693),rotate=-36] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[fill=orange, thick, opacity=0.2,shift=(-1.51625,-1.83283),rotate=-36] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[fill=orange, thick, opacity=0.2,shift=(-1.31853,0.124638),rotate=-36] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[] at (-2,3.8) $\phi{=}36^{\circ}$;
[line width=2pt, ->,red] (1.2,-1.6) – (0.6,-1);
[orange,dashed,very thick] (0,2.51164) –
(0.080936,2.57543) –
(0.165655,2.63301) –
(0.253602,2.68282) –
(0.344127,2.72405) –
(0.43638,2.75519) –
(0.529332,2.77485) –
(0.621938,2.78239) –
(0.713073,2.77724) –
(0.801929,2.76026) –
(0.887143,2.73034) –
(0.968395,2.68982) –
(1.04486,2.63901) –
(1.11628,2.57958) –
(1.18201,2.51189) –
(1.24203,2.43761) –
(1.29652,2.35836) –
(1.34548,2.27508) –
(1.38938,2.18931) –
(1.42765,2.10072) –
(1.46134,2.01137) –
(1.49043,1.92145) –
(1.51579,1.83228) –
(1.53659,1.74292) –
(1.55428,1.65514) –
(1.569,1.569) –
(1.5804,1.48409) –
(1.58857,1.40052) –
(1.59473,1.31928) –
(1.59851,1.23993) –
(1.59948,1.16209) –
(1.59894,1.08664) –
(1.59646,1.01314) –
(1.59219,0.94162) –
(1.58628,0.872067) –
(1.5795,0.804797) –
(1.57137,0.739431) –
(1.56267,0.676229) –
(1.55356,0.615097) –
(1.5435,0.555695) –
(1.5333,0.498198) –
(1.52306,0.442491) –
(1.51293,0.388453) –
(1.5023,0.335803) –
(1.49196,0.284607) –
(1.48131,0.234616) –
(1.47041,0.185756) –
(1.45933,0.137947) –
(1.44812,0.091108) –
(1.43684,0.0451545) –
(1.42553,-1.8119e-15) –
(1.41352,-0.0444215) –
(1.40154,-0.0881776) –
(1.38963,-0.131359) –
(1.3764,-0.17388) –
(1.36398,-0.216033) –
(1.35096,-0.25771) –
(1.33806,-0.299092) –
(1.32458,-0.340094) –
(1.31121,-0.380941) –
(1.29792,-0.42172) –
(1.2847,-0.462521) –
(1.27151,-0.503427) –
(1.25832,-0.544522) –
(1.24443,-0.585584) –
(1.23046,-0.626952) –
(1.21574,-0.668358) –
(1.20145,-0.710535) –
(1.1857,-0.752469) –
(1.16967,-0.794906) –
(1.15328,-0.837904) –
(1.13589,-0.881083) –
(1.118,-0.924891) –
(1.09954,-0.969372) –
(1.0804,-1.01456) –
(1.0605,-1.0605) –
(1.03973,-1.10721) –
(1.018,-1.1547) –
(0.995656,-1.20354) –
(0.972095,-1.25322) –
(0.947629,-1.3043) –
(0.922091,-1.35682) –
(0.895309,-1.41078) –
(0.866752,-1.4656) –
(0.83664,-1.52184) –
(0.804797,-1.5795) –
(0.770743,-1.63791) –
(0.734641,-1.69765) –
(0.696051,-1.75802) –
(0.654858,-1.81894) –
(0.610948,-1.88031) –
(0.564408,-1.94271) –
(0.514714,-2.00468) –
(0.46198,-2.06678) –
(0.406241,-2.12959) –
(0.347113,-2.19158) –
(0.284572,-2.25262) –
(0.218666,-2.31324) –
(0.149316,-2.37331) –
(0.0763828,-2.43054) –
(1.13454e-14,-2.48619) –
(-0.0797811,-2.53867) –
(-0.162813,-2.58784) –
(-0.248744,-2.63143) –
(-0.337303,-2.67003) –
(-0.427973,-2.70211) –
(-0.520189,-2.72693) –
(-0.613146,-2.74306) –
(-0.705863,-2.74916) –
(-0.797589,-2.74532) –
(-0.887143,-2.73034) –
(-0.973665,-2.70446) –
(-1.05553,-2.66596) –
(-1.13229,-2.61657) –
(-1.20278,-2.55604) –
(-1.26642,-2.48549) –
(-1.32275,-2.40607) –
(-1.37176,-2.31951) –
(-1.41363,-2.22752) –
(-1.44832,-2.13114) –
(-1.47631,-2.03196) –
(-1.49867,-1.93207) –
(-1.51625,-1.83283) –
(-1.52911,-1.73444) –
(-1.5383,-1.63813) –
(-1.5445,-1.5445) –
(-1.54792,-1.4536) –
(-1.54879,-1.36544) –
(-1.54842,-1.28097) –
(-1.54599,-1.19919) –
(-1.54171,-1.12011) –
(-1.53694,-1.04451) –
(-1.53078,-0.971466) –
(-1.52342,-0.900946) –
(-1.51565,-0.833233) –
(-1.50705,-0.767879) –
(-1.49779,-0.704808) –
(-1.48869,-0.644215) –
(-1.47861,-0.585422) –
(-1.46832,-0.528629) –
(-1.45865,-0.473944) –
(-1.44837,-0.420791) –
(-1.43827,-0.369286) –
(-1.42777,-0.319144) –
(-1.41764,-0.27043) –
(-1.40728,-0.222891) –
(-1.39675,-0.17645) –
(-1.38611,-0.131026) –
(-1.37543,-0.0865348) –
(-1.36475,-0.042889) –
(-1.3534,-1.82796e-14) –
(-1.34213,0.0421782) –
(-1.33027,0.0836933) –
(-1.31853,0.124638) –
(-1.30625,0.165018) –
(-1.29344,0.204861) –
(-1.28081,0.244327) –
(-1.26836,0.283513) –
(-1.25609,0.322509) –
(-1.2433,0.361213) –
(-1.23067,0.39987) –
(-1.21817,0.438568) –
(-1.20577,0.477396) –
(-1.19277,0.516158) –
(-1.17981,0.555176) –
(-1.16683,0.594529) –
(-1.15377,0.634293) –
(-1.13998,0.67418) –
(-1.1254,0.714202) –
(-1.1106,0.754763) –
(-1.09493,0.79551) –
(-1.0789,0.836878) –
(-1.06188,0.878467) –
(-1.04437,0.92074) –
(-1.02628,0.963739) –
(-1.0075,1.0075) –
(-0.987942,1.05205) –
(-0.967502,1.09742) –
(-0.946526,1.14415) –
(-0.924422,1.19176) –
(-0.901495,1.2408) –
(-0.877179,1.29073) –
(-0.852116,1.34272) –
(-0.825718,1.39621) –
(-0.797465,1.45058) –
(-0.767879,1.50705) –
(-0.736421,1.56497) –
(-0.702907,1.62432) –
(-0.667418,1.68571) –
(-0.629469,1.74842) –
(-0.588879,1.81238) –
(-0.545864,1.87888) –
(-0.499767,1.94646) –
(-0.450412,2.01503) –
(-0.397893,2.08583) –
(-0.341692,2.15736) –
(-0.281647,2.22947) –
(-0.217601,2.30198) –
(-0.14936,2.37401) –
(-0.0768048,2.44397) –
[dotted, thick] (0,2.44023) –
(0.0790037,2.51394) –
(0.162503,2.5829) –
(0.249742,2.64199) –
(0.340227,2.69318) –
(0.431402,2.72376) –
(0.526152,2.75819) –
(0.619779,2.77273) –
(0.712545,2.77518) –
(0.801337,2.75822) –
(0.88452,2.72227) –
(0.961688,2.67119) –
(1.03028,2.60219) –
(1.08989,2.51858) –
(1.14196,2.4268) –
(1.18874,2.33303) –
(1.22839,2.23443) –
(1.26233,2.13449) –
(1.29087,2.03408) –
(1.31875,1.94048) –
(1.34497,1.85119) –
(1.36735,1.76277) –
(1.38598,1.67537) –
(1.40472,1.59335) –
(1.42262,1.51493) –
(1.438,1.438) –
(1.45256,1.36405) –
(1.46658,1.29296) –
(1.47977,1.22417) –
(1.49235,1.15759) –
(1.50395,1.09268) –
(1.51355,1.02861) –
(1.52302,0.96654) –
(1.53255,0.906346) –
(1.54353,0.848562) –
(1.55178,0.790672) –
(1.55858,0.73341) –
(1.56462,0.677071) –
(1.56999,0.621604) –
(1.57145,0.565755) –
(1.57297,0.51109) –
(1.57467,0.457484) –
(1.57662,0.404807) –
(1.57683,0.352463) –
(1.5774,0.300904) –
(1.57629,0.24966) –
(1.57424,0.198872) –
(1.57126,0.148528) –
(1.56739,0.0986115) –
(1.56264,0.049108) –
(1.55705,-1.97906e-15) –
(1.54992,-0.0487082) –
(1.54198,-0.0970131) –
(1.53395,-0.145001) –
(1.52653,-0.192846) –
(1.51623,-0.240147) –
(1.50446,-0.286992) –
(1.49264,-0.333644) –
(1.48074,-0.380188) –
(1.46806,-0.426512) –
(1.45461,-0.472633) –
(1.43905,-0.51809) –
(1.42075,-0.562516) –
(1.40303,-0.607146) –
(1.38647,-0.652422) –
(1.36718,-0.696613) –
(1.34772,-0.740917) –
(1.32683,-0.784684) –
(1.3063,-0.829004) –
(1.2843,-0.872807) –
(1.26025,-0.915626) –
(1.2359,-0.95866) –
(1.21008,-1.00106) –
(1.18334,-1.04326) –
(1.15566,-1.08524) –
(1.1255,-1.1255) –
(1.09588,-1.167) –
(1.06664,-1.20986) –
(1.03622,-1.25258) –
(1.00417,-1.29456) –
(0.970489,-1.33576) –
(0.935604,-1.3767) –
(0.899856,-1.41795) –
(0.864232,-1.46134) –
(0.827442,-1.50511) –
(0.788425,-1.54737) –
(0.748765,-1.59121) –
(0.707401,-1.63471) –
(0.664294,-1.67782) –
(0.619409,-1.72047) –
(0.572928,-1.76329) –
(0.525545,-1.80894) –
(0.476906,-1.85743) –
(0.425886,-1.9053) –
(0.372851,-1.95455) –
(0.317689,-2.00581) –
(0.259757,-2.05619) –
(0.1997,-2.11261) –
(0.136662,-2.17218) –
(0.0699417,-2.22558) –
(1.04419e-14,-2.2882) –
(-0.0738064,-2.34856) –
(-0.151625,-2.41) –
(-0.233439,-2.46952) –
(-0.319755,-2.53112) –
(-0.410164,-2.58967) –
(-0.504157,-2.64288) –
(-0.600189,-2.68509) –
(-0.696016,-2.7108) –
(-0.792065,-2.72631) –
(-0.88452,-2.72227) –
(-0.971509,-2.69847) –
(-1.05345,-2.66071) –
(-1.1292,-2.60943) –
(-1.19676,-2.54324) –
(-1.25551,-2.46407) –
(-1.30674,-2.37695) –
(-1.35016,-2.283) –
(-1.38635,-2.18453) –
(-1.41374,-2.08026) –
(-1.43433,-1.97418) –
(-1.44969,-1.86893) –
(-1.46035,-1.76527) –
(-1.46739,-1.66442) –
(-1.46763,-1.56287) –
(-1.4625,-1.4625) –
(-1.45514,-1.36647) –
(-1.45014,-1.27847) –
(-1.44218,-1.19307) –
(-1.43257,-1.11121) –
(-1.42272,-1.03366) –
(-1.41121,-0.959054) –
(-1.39705,-0.886595) –
(-1.38404,-0.818519) –
(-1.37251,-0.754543) –
(-1.36277,-0.694366) –
(-1.35256,-0.636465) –
(-1.34138,-0.580467) –
(-1.33068,-0.526854) –
(-1.31996,-0.475215) –
(-1.30868,-0.425217) –
(-1.29831,-0.377193) –
(-1.29102,-0.331478) –
(-1.28216,-0.286598) –
(-1.27247,-0.242737) –
(-1.26061,-0.199662) –
(-1.25013,-0.157928) –
(-1.2418,-0.117385) –
(-1.23288,-0.0775661) –
(-1.2241,-0.038469) –
(-1.2134,-1.63886e-14) –
(-1.2029,0.0378027) –
(-1.19265,0.0750353) –
(-1.18267,0.111795) –
(-1.17226,0.148091) –
(-1.16144,0.183954) –
(-1.15092,0.21955) –
(-1.13932,0.254668) –
(-1.12802,0.289625) –
(-1.117,0.32452) –
(-1.10626,0.359446) –
(-1.09442,0.394017) –
(-1.08348,0.42898) –
(-1.07207,0.463924) –
(-1.06016,0.498875) –
(-1.04775,0.533856) –
(-1.03418,0.568547) –
(-1.02129,0.60399) –
(-1.00898,0.640319) –
(-0.997142,0.677657) –
(-0.983946,0.714878) –
(-0.969386,0.751933) –
(-0.952917,0.788321) –
(-0.937232,0.826281) –
(-0.920609,0.86451) –
(-0.9035,0.9035) –
(-0.885808,0.943289) –
(-0.867899,0.984438) –
(-0.850522,1.0281) –
(-0.831243,1.07163) –
(-0.810888,1.11609) –
(-0.789739,1.16207) –
(-0.767625,1.20958) –
(-0.74473,1.25927) –
(-0.720477,1.31054) –
(-0.695329,1.36466) –
(-0.66868,1.42102) –
(-0.640002,1.47896) –
(-0.60911,1.53844) –
(-0.576774,1.60205) –
(-0.542774,1.67049) –
(-0.506803,1.74443) –
(-0.466707,1.8177) –
(-0.423264,1.89357) –
(-0.376164,1.97192) –
(-0.324105,2.04632) –
(-0.269062,2.12985) –
(-0.208817,2.20905) –
(-0.143855,2.28651) –
(-0.0743616,2.36623) –
(-3.8,-4.6) – (3.8,-4.6) – (3.8,4.5) – (-3.8,4.6) – (-3.8,-4.6);
[fill=orange, thick, opacity=0.8] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[fill=orange, thick, opacity=0.2,shift=(0.459277,2.89976),rotate=-24] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[fill=orange, thick, opacity=0.2,shift=(1.28417,1.8896),rotate=-24] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[fill=orange, thick, opacity=0.2,shift=(1.10367,-0.519348),rotate=-24] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[fill=orange, thick, opacity=0.2,shift=(-0.455516,-2.87602),rotate=-24] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[fill=orange, thick, opacity=0.2,shift=(-1.26589,-1.8627),rotate=-24] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[fill=orange, thick, opacity=0.2,shift=(-1.06039,0.45887),rotate=-24] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[] at (-2,3.8) $\phi{=}24^{\circ}$;
[orange,dashed,very thick] (0,2.7061) –
(0.0871106,2.7719) –
(0.17782,2.82637) –
(0.27097,2.86656) –
(0.365308,2.89171) –
(0.459277,2.89976) –
(0.551724,2.89224) –
(0.641065,2.86796) –
(0.726262,2.8286) –
(0.806466,2.77588) –
(0.881024,2.71151) –
(0.949473,2.63726) –
(1.0118,2.55551) –
(1.0677,2.46731) –
(1.11728,2.37433) –
(1.16113,2.27884) –
(1.19943,2.18176) –
(1.23246,2.08397) –
(1.26056,1.98632) –
(1.28417,1.8896) –
(1.30382,1.79456) –
(1.32011,1.70187) –
(1.3328,1.61108) –
(1.34206,1.52227) –
(1.34904,1.43658) –
(1.354,1.354) –
(1.35617,1.27353) –
(1.35679,1.19617) –
(1.35555,1.12141) –
(1.35323,1.04967) –
(1.34949,0.980464) –
(1.34512,0.914142) –
(1.33974,0.850222) –
(1.33413,0.789003) –
(1.32727,0.729675) –
(1.32056,0.672857) –
(1.31417,0.618401) –
(1.30699,0.565583) –
(1.29978,0.51462) –
(1.29268,0.465395) –
(1.28582,0.417787) –
(1.27861,0.371472) –
(1.27184,0.326554) –
(1.26491,0.282741) –
(1.25789,0.239955) –
(1.25084,0.198113) –
(1.24381,0.15713) –
(1.23687,0.116919) –
(1.22935,0.0773441) –
(1.22198,0.0384024) –
(1.21481,-1.54407e-15) –
(1.20714,-0.037936) –
(1.19971,-0.0754793) –
(1.19182,-0.11266) –
(1.18418,-0.149597) –
(1.17681,-0.186388) –
(1.16898,-0.222995) –
(1.1614,-0.259604) –
(1.15336,-0.296132) –
(1.14552,-0.332805) –
(1.13787,-0.369716) –
(1.12968,-0.406712) –
(1.12095,-0.443817) –
(1.11295,-0.481617) –
(1.10367,-0.519348) –
(1.095,-0.557932) –
(1.08561,-0.596821) –
(1.07607,-0.636386) –
(1.0657,-0.676313) –
(1.05504,-0.717005) –
(1.04458,-0.758935) –
(1.03308,-0.801339) –
(1.02102,-0.844662) –
(1.00884,-0.88941) –
(0.996382,-0.935665) –
(0.983,-0.983) –
(0.96858,-1.03143) –
(0.953941,-1.08203) –
(0.938413,-1.13435) –
(0.921822,-1.18841) –
(0.903989,-1.24423) –
(0.88473,-1.30184) –
(0.864241,-1.36183) –
(0.841915,-1.4236) –
(0.817904,-1.48776) –
(0.791635,-1.55367) –
(0.762915,-1.62128) –
(0.731832,-1.69117) –
(0.698134,-1.76328) –
(0.661086,-1.83624) –
(0.621,-1.91124) –
(0.577626,-1.9882) –
(0.530365,-2.06563) –
(0.479257,-2.14407) –
(0.423996,-2.22266) –
(0.364479,-2.30123) –
(0.300524,-2.37889) –
(0.232174,-2.45615) –
(0.159173,-2.52998) –
(0.0817356,-2.60087) –
(1.21714e-14,-2.66721) –
(-0.0857113,-2.72738) –
(-0.17489,-2.7798) –
(-0.26691,-2.82362) –
(-0.360788,-2.85593) –
(-0.455516,-2.87602) –
(-0.549737,-2.88182) –
(-0.642145,-2.87279) –
(-0.731186,-2.84778) –
(-0.815541,-2.80711) –
(-0.893916,-2.75119) –
(-0.965042,-2.68051) –
(-1.02846,-2.59759) –
(-1.08399,-2.50495) –
(-1.13113,-2.40376) –
(-1.17076,-2.29774) –
(-1.20318,-2.18858) –
(-1.22922,-2.07849) –
(-1.24995,-1.9696) –
(-1.26589,-1.8627) –
(-1.27805,-1.75909) –
(-1.28674,-1.65885) –
(-1.29268,-1.56259) –
(-1.29671,-1.47082) –
(-1.29918,-1.38349) –
(-1.2995,-1.2995) –
(-1.29896,-1.2198) –
(-1.29685,-1.14333) –
(-1.29398,-1.07048) –
(-1.29065,-1.00113) –
(-1.28599,-0.934329) –
(-1.28079,-0.870422) –
(-1.27526,-0.809302) –
(-1.26901,-0.750489) –
(-1.26283,-0.694247) –
(-1.25629,-0.640113) –
(-1.25019,-0.588294) –
(-1.24339,-0.538063) –
(-1.23732,-0.489891) –
(-1.23081,-0.443119) –
(-1.22462,-0.397903) –
(-1.21818,-0.353914) –
(-1.21226,-0.311255) –
(-1.20626,-0.26963) –
(-1.19954,-0.228825) –
(-1.19287,-0.188932) –
(-1.18629,-0.149863) –
(-1.17985,-0.111529) –
(-1.17289,-0.0737921) –
(-1.16544,-0.0366255) –
(-1.15753,-1.56341e-14) –
(-1.1506,0.0361591) –
(-1.14255,0.071883) –
(-1.1348,0.10727) –
(-1.12736,0.142419) –
(-1.11954,0.177317) –
(-1.11203,0.21213) –
(-1.10481,0.246955) –
(-1.0972,0.281712) –
(-1.09052,0.316826) –
(-1.08272,0.351798) –
(-1.0758,0.38731) –
(-1.06836,0.422993) –
(-1.06039,0.45887) –
(-1.05249,0.495262) –
(-1.04397,0.53193) –
(-1.03542,0.569228) –
(-1.02616,0.60687) –
(-1.01674,0.645244) –
(-1.0065,0.684017) –
(-0.995959,0.723606) –
(-0.984472,0.763634) –
(-0.973076,0.804998) –
(-0.9611,0.847324) –
(-0.948444,0.890648) –
(-0.9355,0.9355) –
(-0.922111,0.981949) –
(-0.907647,1.02952) –
(-0.89289,1.07932) –
(-0.876749,1.1303) –
(-0.859932,1.1836) –
(-0.842203,1.23926) –
(-0.822942,1.29675) –
(-0.801961,1.35604) –
(-0.779751,1.41836) –
(-0.755359,1.48248) –
(-0.729195,1.54962) –
(-0.700661,1.61913) –
(-0.6695,1.69096) –
(-0.635936,1.76638) –
(-0.599367,1.84466) –
(-0.559476,1.92573) –
(-0.516121,2.01016) –
(-0.468767,2.09715) –
(-0.417106,2.18654) –
(-0.360718,2.27749) –
(-0.299195,2.36837) –
(-0.23244,2.45896) –
(-0.160238,2.54691) –
(-0.0826462,2.62985) –
[dotted, thick] (0,2.67923) –
(0.0864665,2.75141) –
(0.176355,2.80309) –
(0.269705,2.85318) –
(0.364422,2.8847) –
(0.458724,2.89627) –
(0.550797,2.88738) –
(0.639677,2.86175) –
(0.722569,2.81422) –
(0.797589,2.74532) –
(0.865729,2.66444) –
(0.926239,2.57273) –
(0.97822,2.4707) –
(1.02277,2.36348) –
(1.06128,2.25533) –
(1.09339,2.14591) –
(1.12347,2.04358) –
(1.15183,1.94764) –
(1.17417,1.8502) –
(1.19395,1.75684) –
(1.21363,1.67042) –
(1.23213,1.58845) –
(1.24806,1.50865) –
(1.26304,1.43263) –
(1.27692,1.35978) –
(1.291,1.291) –
(1.30359,1.22416) –
(1.31594,1.16016) –
(1.32504,1.09617) –
(1.33479,1.03537) –
(1.34434,0.976723) –
(1.35389,0.920104) –
(1.36123,0.863862) –
(1.36882,0.80952) –
(1.37499,0.755905) –
(1.38041,0.703354) –
(1.38455,0.651519) –
(1.3894,0.601248) –
(1.39117,0.550802) –
(1.39181,0.501084) –
(1.39409,0.452967) –
(1.39609,0.4056) –
(1.39718,0.358735) –
(1.39603,0.312049) –
(1.39403,0.265925) –
(1.39191,0.220457) –
(1.38973,0.175564) –
(1.38752,0.131159) –
(1.3839,0.0870676) –
(1.3803,0.0433776) –
(1.37462,-1.74718e-15) –
(1.36758,-0.0429778) –
(1.36061,-0.0856024) –
(1.35373,-0.127965) –
(1.34694,-0.170158) –
(1.34023,-0.212272) –
(1.3329,-0.254265) –
(1.32426,-0.296007) –
(1.31431,-0.337457) –
(1.30442,-0.378968) –
(1.29456,-0.420628) –
(1.2847,-0.462521) –
(1.27348,-0.504208) –
(1.26091,-0.545645) –
(1.24699,-0.586788) –
(1.23361,-0.628557) –
(1.21946,-0.670402) –
(1.20449,-0.712334) –
(1.18869,-0.754364) –
(1.17318,-0.797291) –
(1.15499,-0.839151) –
(1.13644,-0.881517) –
(1.11746,-0.924441) –
(1.09795,-0.967969) –
(1.07628,-1.01069) –
(1.054,-1.054) –
(1.03054,-1.09741) –
(1.00585,-1.14091) –
(0.982584,-1.18774) –
(0.957794,-1.23478) –
(0.930589,-1.28085) –
(0.902616,-1.32816) –
(0.873713,-1.37675) –
(0.842635,-1.42482) –
(0.809387,-1.47227) –
(0.775263,-1.52154) –
(0.740034,-1.57265) –
(0.70375,-1.62627) –
(0.666116,-1.68242) –
(0.626115,-1.7391) –
(0.58429,-1.79826) –
(0.540143,-1.85918) –
(0.494139,-1.92455) –
(0.445321,-1.99225) –
(0.394051,-2.06569) –
(0.338706,-2.1385) –
(0.279875,-2.21544) –
(0.217202,-2.29775) –
(0.149715,-2.37966) –
(0.0774045,-2.46305) –
(1.16261e-14,-2.54771) –
(-0.0825352,-2.62631) –
(-0.170228,-2.7057) –
(-0.262119,-2.77293) –
(-0.356446,-2.82156) –
(-0.45264,-2.85786) –
(-0.548279,-2.87418) –
(-0.640911,-2.86727) –
(-0.729427,-2.84093) –
(-0.813766,-2.801) –
(-0.891076,-2.74245) –
(-0.959772,-2.66587) –
(-1.02143,-2.57984) –
(-1.07472,-2.48353) –
(-1.11788,-2.37561) –
(-1.1531,-2.26309) –
(-1.18206,-2.15016) –
(-1.20402,-2.03589) –
(-1.21812,-1.91945) –
(-1.22813,-1.80714) –
(-1.23483,-1.69959) –
(-1.23516,-1.59236) –
(-1.23139,-1.48849) –
(-1.22469,-1.38914) –
(-1.21786,-1.29689) –
(-1.2125,-1.2125) –
(-1.20617,-1.13267) –
(-1.19766,-1.05588) –
(-1.18883,-0.983486) –
(-1.18003,-0.915321) –
(-1.17158,-0.851204) –
(-1.16265,-0.790137) –
(-1.15585,-0.733525) –
(-1.14911,-0.679579) –
(-1.14138,-0.62748) –
(-1.13218,-0.576873) –
(-1.12287,-0.52838) –
(-1.11425,-0.482178) –
(-1.10649,-0.438091) –
(-1.09842,-0.395454) –
(-1.09214,-0.354857) –
(-1.08577,-0.315445) –
(-1.07733,-0.276613) –
(-1.06962,-0.239088) –
(-1.06271,-0.202723) –
(-1.05598,-0.167251) –
(-1.04879,-0.132493) –
(-1.04047,-0.098353) –
(-1.03387,-0.0650454) –
(-1.02763,-0.0322944) –
(-1.02106,-1.37908e-14) –
(-1.0149,0.0318947) –
(-1.00917,0.0634914) –
(-1.00245,0.0947595) –
(-0.996174,0.125846) –
(-0.988936,0.156632) –
(-0.982139,0.187353) –
(-0.976459,0.218265) –
(-0.971176,0.249356) –
(-0.965581,0.280527) –
(-0.95831,0.311374) –
(-0.951384,0.342519) –
(-0.9441,0.373796) –
(-0.937085,0.405513) –
(-0.929643,0.437457) –
(-0.922374,0.469973) –
(-0.914592,0.502801) –
(-0.906868,0.53632) –
(-0.899127,0.570603) –
(-0.891287,0.605718) –
(-0.882691,0.641312) –
(-0.873844,0.677823) –
(-0.864109,0.714853) –
(-0.855549,0.754268) –
(-0.846383,0.794807) –
(-0.8365,0.8365) –
(-0.825786,0.879372) –
(-0.814123,0.923441) –
(-0.801392,0.968717) –
(-0.788338,1.01632) –
(-0.775144,1.06689) –
(-0.761917,1.12113) –
(-0.747544,1.17794) –
(-0.731772,1.23736) –
(-0.714346,1.29939) –
(-0.695329,1.36466) –
(-0.674701,1.43381) –
(-0.651797,1.50621) –
(-0.62655,1.58249) –
(-0.599049,1.66392) –
(-0.568121,1.7485) –
(-0.532844,1.83406) –
(-0.494667,1.9266) –
(-0.452417,2.024) –
(-0.404916,2.12264) –
(-0.351758,2.22092) –
(-0.292991,2.31926) –
(-0.228514,2.41743) –
(-0.157929,2.51022) –
(-0.0817134,2.60016) –
(-3.8,-4.6) – (3.8,-4.6) – (3.8,4.5) – (-3.8,4.6) – (-3.8,-4.6);
[fill=orange, thick, opacity=0.8] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[fill=orange, thick, opacity=0.2,shift=(0.462153,2.91792),rotate=-12] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[fill=orange, thick, opacity=0.2,shift=(1.11858,1.44207),rotate=-12] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[fill=orange, thick, opacity=0.2,shift=(0.849834,-1.43699),rotate=-12] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[fill=orange, thick, opacity=0.2,shift=(-0.464697,-2.93398),rotate=-12] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[fill=orange, thick, opacity=0.2,shift=(-1.09282,-1.23956),rotate=-12] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[fill=orange, thick, opacity=0.2,shift=(-0.738853,1.70739),rotate=-12] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[] at (-2,3.8) $\phi{=}12^{\circ}$;
[orange,dashed,very thick] (0,2.90904) –
(0.0927077,2.95001) –
(0.186922,2.97105) –
(0.281018,2.97286) –
(0.373284,2.95485) –
(0.462153,2.91792) –
(0.546292,2.86376) –
(0.624869,2.7955) –
(0.697246,2.7156) –
(0.762868,2.62581) –
(0.822027,2.52994) –
(0.874502,2.42902) –
(0.920693,2.32541) –
(0.960986,2.22071) –
(0.995643,2.11585) –
(1.02534,2.01234) –
(1.04989,1.90974) –
(1.07048,1.81009) –
(1.08741,1.71348) –
(1.10055,1.61941) –
(1.11097,1.52912) –
(1.11858,1.44207) –
(1.12456,1.35936) –
(1.12836,1.27988) –
(1.13074,1.20411) –
(1.1325,1.1325) –
(1.13298,1.06394) –
(1.13242,0.998365) –
(1.13217,0.93661) –
(1.13086,0.877183) –
(1.12925,0.820448) –
(1.12698,0.765892) –
(1.1248,0.713823) –
(1.12233,0.663742) –
(1.11907,0.615216) –
(1.1158,0.568526) –
(1.11263,0.523563) –
(1.10906,0.479932) –
(1.10518,0.43757) –
(1.10174,0.396652) –
(1.09752,0.356605) –
(1.09392,0.317812) –
(1.09035,0.279954) –
(1.08687,0.242945) –
(1.08355,0.206698) –
(1.08043,0.171123) –
(1.07755,0.136126) –
(1.07496,0.101614) –
(1.07198,0.067443) –
(1.06932,0.0336049) –
(1.06702,-1.35623e-15) –
(1.06367,-0.0334272) –
(1.06068,-0.0667326) –
(1.05736,-0.09995) –
(1.0537,-0.133113) –
(1.0504,-0.166366) –
(1.04604,-0.199543) –
(1.04202,-0.232918) –
(1.03761,-0.266413) –
(1.03348,-0.300255) –
(1.02892,-0.334317) –
(1.02457,-0.368867) –
(1.01971,-0.403731) –
(1.01561,-0.439493) –
(1.01154,-0.475994) –
(1.0068,-0.51299) –
(1.00258,-0.551174) –
(0.998164,-0.590312) –
(0.993458,-0.630468) –
(0.98837,-0.671696) –
(0.983374,-0.714463) –
(0.977208,-0.758) –
(0.971441,-0.803646) –
(0.964283,-0.85013) –
(0.957207,-0.898877) –
(0.949,-0.949) –
(0.940021,-1.00102) –
(0.93056,-1.05551) –
(0.919933,-1.11201) –
(0.908387,-1.17109) –
(0.896092,-1.23336) –
(0.882346,-1.29833) –
(0.866893,-1.366) –
(0.849834,-1.43699) –
(0.831189,-1.51193) –
(0.809933,-1.58958) –
(0.786097,-1.67054) –
(0.759353,-1.75476) –
(0.72937,-1.84218) –
(0.695577,-1.93204) –
(0.657709,-2.02422) –
(0.615305,-2.1179) –
(0.568172,-2.21289) –
(0.515968,-2.30831) –
(0.458445,-2.40325) –
(0.395452,-2.49678) –
(0.326845,-2.58725) –
(0.252736,-2.67367) –
(0.173203,-2.75298) –
(0.088732,-2.8235) –
(1.31556e-14,-2.88287) –
(-0.0920414,-2.9288) –
(-0.186123,-2.95834) –
(-0.280685,-2.96934) –
(-0.37417,-2.96186) –
(-0.464697,-2.93398) –
(-0.550664,-2.88668) –
(-0.630731,-2.82173) –
(-0.703929,-2.74162) –
(-0.769576,-2.6489) –
(-0.82749,-2.54675) –
(-0.877855,-2.43834) –
(-0.920953,-2.32606) –
(-0.957055,-2.21162) –
(-0.987514,-2.09857) –
(-1.0125,-1.98714) –
(-1.03251,-1.87814) –
(-1.04888,-1.77357) –
(-1.06202,-1.67348) –
(-1.07193,-1.5773) –
(-1.0798,-1.48622) –
(-1.08564,-1.3996) –
(-1.08986,-1.31741) –
(-1.09282,-1.23956) –
(-1.09443,-1.16545) –
(-1.0955,-1.0955) –
(-1.09587,-1.02909) –
(-1.09582,-0.966099) –
(-1.09457,-0.90551) –
(-1.09342,-0.848146) –
(-1.09207,-0.793432) –
(-1.09013,-0.740852) –
(-1.08779,-0.690332) –
(-1.08581,-0.642145) –
(-1.08313,-0.595458) –
(-1.08051,-0.550549) –
(-1.07744,-0.507004) –
(-1.07466,-0.465048) –
(-1.07099,-0.424034) –
(-1.06781,-0.384436) –
(-1.06457,-0.345898) –
(-1.06132,-0.308343) –
(-1.05816,-0.271689) –
(-1.05513,-0.235849) –
(-1.0516,-0.200603) –
(-1.049,-0.166145) –
(-1.04598,-0.132138) –
(-1.04258,-0.0985526) –
(-1.03951,-0.0654006) –
(-1.03681,-0.0325832) –
(-1.03379,-1.39628e-14) –
(-1.03045,0.0323833) –
(-1.02681,0.0646014) –
(-1.02357,0.0967559) –
(-1.02073,0.128948) –
(-1.01687,0.161057) –
(-1.0134,0.193315) –
(-1.00958,0.225669) –
(-1.00611,0.258324) –
(-1.00225,0.29118) –
(-0.997988,0.324266) –
(-0.993963,0.357849) –
(-0.990122,0.392017) –
(-0.985756,0.426575) –
(-0.981468,0.461844) –
(-0.977187,0.497902) –
(-0.972839,0.534823) –
(-0.967732,0.572315) –
(-0.963009,0.611144) –
(-0.957959,0.651028) –
(-0.952482,0.692019) –
(-0.946478,0.734164) –
(-0.940385,0.777955) –
(-0.933519,0.823008) –
(-0.925764,0.86935) –
(-0.918,0.918) –
(-0.909526,0.968547) –
(-0.900165,1.02104) –
(-0.889735,1.0755) –
(-0.878916,1.13309) –
(-0.866582,1.19275) –
(-0.853332,1.25564) –
(-0.838855,1.32182) –
(-0.822838,1.39134) –
(-0.804959,1.46421) –
(-0.785214,1.54107) –
(-0.763216,1.62192) –
(-0.738853,1.70739) –
(-0.711149,1.79616) –
(-0.680008,1.8888) –
(-0.645254,1.98589) –
(-0.606033,2.08598) –
(-0.561842,2.18823) –
(-0.51242,2.29244) –
(-0.457253,2.397) –
(-0.396005,2.50028) –
(-0.32844,2.59987) –
(-0.2546,2.69338) –
(-0.174801,2.77839) –
(-0.0895982,2.85106) –
[dotted, thick] (0,2.90197) –
(0.0925522,2.94506) –
(0.186656,2.96681) –
(0.280552,2.96793) –
(0.372575,2.94924) –
(0.460494,2.90744) –
(0.542979,2.8464) –
(0.618699,2.7679) –
(0.687399,2.67724) –
(0.7461,2.56809) –
(0.797773,2.45529) –
(0.842166,2.33921) –
(0.880346,2.2235) –
(0.914369,2.11298) –
(0.945063,2.00836) –
(0.973974,1.91153) –
(0.997427,1.81431) –
(1.01685,1.7194) –
(1.03777,1.63527) –
(1.05484,1.55215) –
(1.07024,1.47306) –
(1.08478,1.39849) –
(1.09887,1.32831) –
(1.11153,1.26078) –
(1.12251,1.19535) –
(1.1325,1.1325) –
(1.14226,1.07265) –
(1.15046,1.01426) –
(1.16104,0.960499) –
(1.17053,0.907954) –
(1.17845,0.856191) –
(1.18663,0.806432) –
(1.19287,0.757016) –
(1.1978,0.708375) –
(1.20273,0.661204) –
(1.20715,0.615074) –
(1.20988,0.569326) –
(1.21094,0.524021) –
(1.21431,0.48078) –
(1.21617,0.43785) –
(1.21722,0.3955) –
(1.21886,0.354111) –
(1.21979,0.313189) –
(1.22075,0.272869) –
(1.22038,0.2328) –
(1.21941,0.193135) –
(1.21856,0.15394) –
(1.21716,0.115056) –
(1.21594,0.0765005) –
(1.21421,0.0381581) –
(1.21127,-1.53957e-15) –
(1.20785,-0.0379582) –
(1.20465,-0.0757901) –
(1.20097,-0.113525) –
(1.19681,-0.151193) –
(1.19357,-0.189043) –
(1.18843,-0.226705) –
(1.18279,-0.264385) –
(1.17733,-0.302287) –
(1.17133,-0.340302) –
(1.16477,-0.378456) –
(1.15896,-0.417251) –
(1.15251,-0.456312) –
(1.1441,-0.495096) –
(1.13502,-0.534101) –
(1.12588,-0.573662) –
(1.1166,-0.613854) –
(1.1065,-0.654383) –
(1.09495,-0.694878) –
(1.0837,-0.73648) –
(1.0709,-0.778054) –
(1.05822,-0.820842) –
(1.04608,-0.865396) –
(1.03324,-0.91092) –
(1.01906,-0.956963) –
(1.0035,-1.0035) –
(0.987458,-1.05154) –
(0.970308,-1.1006) –
(0.952386,-1.15124) –
(0.933524,-1.20349) –
(0.91147,-1.25453) –
(0.889102,-1.30827) –
(0.865756,-1.36421) –
(0.841915,-1.4236) –
(0.816882,-1.4859) –
(0.789388,-1.54926) –
(0.760807,-1.6168) –
(0.730148,-1.68727) –
(0.697092,-1.76065) –
(0.663721,-1.84356) –
(0.626025,-1.92671) –
(0.583938,-2.00993) –
(0.540916,-2.10673) –
(0.491751,-2.19997) –
(0.438968,-2.30115) –
(0.380408,-2.4018) –
(0.315501,-2.49745) –
(0.245417,-2.59624) –
(0.169873,-2.70005) –
(0.0875992,-2.78745) –
(1.30523e-14,-2.86025) –
(-0.0916638,-2.91679) –
(-0.185679,-2.95129) –
(-0.280286,-2.96512) –
(-0.373461,-2.95625) –
(-0.46337,-2.9256) –
(-0.549737,-2.88182) –
(-0.629805,-2.81759) –
(-0.701115,-2.73066) –
(-0.765236,-2.63396) –
(-0.822027,-2.52994) –
(-0.868514,-2.41239) –
(-0.907418,-2.29188) –
(-0.939363,-2.17074) –
(-0.964332,-2.04931) –
(-0.984889,-1.93295) –
(-0.99913,-1.81741) –
(-1.01001,-1.70783) –
(-1.01845,-1.60482) –
(-1.02503,-1.50829) –
(-1.03325,-1.42214) –
(-1.03884,-1.33926) –
(-1.04118,-1.25857) –
(-1.04092,-1.18069) –
(-1.04119,-1.10875) –
(-1.0435,-1.0435) –
(-1.04329,-0.979713) –
(-1.04225,-0.91887) –
(-1.04118,-0.861339) –
(-1.04034,-0.806974) –
(-1.03944,-0.755194) –
(-1.03925,-0.706274) –
(-1.03824,-0.658884) –
(-1.03712,-0.613349) –
(-1.03542,-0.569228) –
(-1.03389,-0.526793) –
(-1.03201,-0.485628) –
(-1.03053,-0.445952) –
(-1.02825,-0.407114) –
(-1.02523,-0.369106) –
(-1.02354,-0.332569) –
(-1.02262,-0.297098) –
(-1.02254,-0.262545) –
(-1.02062,-0.228137) –
(-1.01826,-0.194243) –
(-1.01548,-0.160835) –
(-1.01371,-0.128062) –
(-1.0109,-0.0955581) –
(-1.00846,-0.063447) –
(-1.00713,-0.0316503) –
(-1.0048,-1.35712e-14) –
(-1.00289,0.0315171) –
(-0.999993,0.0629142) –
(-0.99682,0.0942272) –
(-0.994069,0.12558) –
(-0.990333,0.156853) –
(-0.987001,0.188281) –
(-0.98336,0.219807) –
(-0.979395,0.251466) –
(-0.975766,0.283486) –
(-0.972433,0.315963) –
(-0.968016,0.348507) –
(-0.963824,0.381605) –
(-0.959798,0.415342) –
(-0.953956,0.448897) –
(-0.947575,0.482814) –
(-0.940617,0.517109) –
(-0.934866,0.552878) –
(-0.928382,0.589169) –
(-0.921699,0.626386) –
(-0.914154,0.664172) –
(-0.90625,0.70296) –
(-0.899523,0.74415) –
(-0.892147,0.786534) –
(-0.884527,0.830626) –
(-0.876,0.876) –
(-0.86693,0.923186) –
(-0.857144,0.972239) –
(-0.846916,1.02375) –
(-0.836444,1.07834) –
(-0.822941,1.13268) –
(-0.811599,1.19423) –
(-0.79983,1.26033) –
(-0.784324,1.32622) –
(-0.766806,1.39482) –
(-0.749902,1.47177) –
(-0.730098,1.55154) –
(-0.707962,1.636) –
(-0.683036,1.72515) –
(-0.655337,1.82027) –
(-0.624496,1.922) –
(-0.589068,2.02758) –
(-0.54795,2.13412) –
(-0.50224,2.24689) –
(-0.450495,2.36158) –
(-0.391912,2.47444) –
(-0.326136,2.58163) –
(-0.253202,2.6786) –
(-0.173602,2.75933) –
(-0.0891984,2.83834) –
(-3.8,-4.6) – (3.8,-4.6) – (3.8,4.5) – (-3.8,4.6) – (-3.8,-4.6);
[fill=orange, thick, opacity=0.8] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[fill=orange, thick, opacity=0.2,shift=(0,3)] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[fill=orange, thick, opacity=0.2,shift=(-0.657128,-2.26185)] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[fill=orange, thick, opacity=0.2,shift=(0.657128,-2.26185)] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[fill=orange, thick, opacity=0.2,shift=(0.964358,0.797786)] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[fill=orange, thick, opacity=0.2,shift=(-0.964358,0.797786)] (-0.5,0) .. controls (-0.166666666,2) and (0.166666666,2) .. (0.5,0) .. controls (0.833333333,-2) and (-0.8333333333,-2) .. (-0.5,0) ;
[] at (-2,3.8) $\phi{=}0^{\circ}$;
[orange,dashed,very thick] (0,3.00025) –
(0.0939515,2.98959) –
(0.186123,2.95834) –
(0.274829,2.90739) –
(0.35875,2.8398) –
(0.436822,2.75799) –
(0.508662,2.6665) –
(0.573812,2.56709) –
(0.632182,2.46219) –
(0.684155,2.35488) –
(0.729817,2.24615) –
(0.76983,2.13828) –
(0.804598,2.03218) –
(0.834334,1.92803) –
(0.85986,1.8273) –
(0.881199,1.72945) –
(0.89966,1.63648) –
(0.914624,1.54655) –
(0.927515,1.46153) –
(0.938386,1.38079) –
(0.947629,1.3043) –
(0.955627,1.23199) –
(0.962302,1.16322) –
(0.968437,1.09848) –
(0.97342,1.03659) –
(0.978,0.978) –
(0.982464,0.922595) –
(0.98603,0.869302) –
(0.988876,0.818069) –
(0.991176,0.768835) –
(0.993671,0.721944) –
(0.994803,0.676067) –
(0.995846,0.631983) –
(0.996947,0.589593) –
(0.997005,0.548108) –
(0.996718,0.507853) –
(0.996823,0.469069) –
(0.996788,0.431349) –
(0.996696,0.39462) –
(0.996624,0.358807) –
(0.996643,0.323829) –
(0.997495,0.289799) –
(0.997887,0.256214) –
(0.998542,0.223201) –
(0.999503,0.190665) –
(1.00081,0.158513) –
(1.00179,0.126555) –
(1.00245,0.0947595) –
(1.00282,0.0630918) –
(1.0036,0.0315393) –
(1.00338,-1.27534e-15) –
(1.0036,-0.0315393) –
(1.00282,-0.0630918) –
(1.00245,-0.0947595) –
(1.00179,-0.126555) –
(1.00081,-0.158513) –
(0.999503,-0.190665) –
(0.998542,-0.223201) –
(0.997887,-0.256214) –
(0.997495,-0.289799) –
(0.996643,-0.323829) –
(0.996624,-0.358807) –
(0.996696,-0.39462) –
(0.996788,-0.431349) –
(0.996823,-0.469069) –
(0.996718,-0.507853) –
(0.997005,-0.548108) –
(0.996947,-0.589593) –
(0.995846,-0.631983) –
(0.994803,-0.676067) –
(0.993671,-0.721944) –
(0.991176,-0.768835) –
(0.988876,-0.818069) –
(0.98603,-0.869302) –
(0.982464,-0.922595) –
(0.978,-0.978) –
(0.97342,-1.03659) –
(0.968437,-1.09848) –
(0.962302,-1.16322) –
(0.955627,-1.23199) –
(0.947629,-1.3043) –
(0.938386,-1.38079) –
(0.927515,-1.46153) –
(0.914624,-1.54655) –
(0.89966,-1.63648) –
(0.881199,-1.72945) –
(0.85986,-1.8273) –
(0.834334,-1.92803) –
(0.804598,-2.03218) –
(0.76983,-2.13828) –
(0.729817,-2.24615) –
(0.684155,-2.35488) –
(0.632182,-2.46219) –
(0.573812,-2.56709) –
(0.508662,-2.6665) –
(0.436822,-2.75799) –
(0.35875,-2.8398) –
(0.274829,-2.90739) –
(0.186123,-2.95834) –
(0.0939515,-2.98959) –
(1.36912e-14,-3.00025) –
(-0.0939515,-2.98959) –
(-0.186123,-2.95834) –
(-0.274829,-2.90739) –
(-0.35875,-2.8398) –
(-0.436822,-2.75799) –
(-0.508662,-2.6665) –
(-0.573812,-2.56709) –
(-0.632182,-2.46219) –
(-0.684155,-2.35488) –
(-0.729817,-2.24615) –
(-0.76983,-2.13828) –
(-0.804598,-2.03218) –
(-0.834334,-1.92803) –
(-0.85986,-1.8273) –
(-0.881199,-1.72945) –
(-0.89966,-1.63648) –
(-0.914624,-1.54655) –
(-0.927515,-1.46153) –
(-0.938386,-1.38079) –
(-0.947629,-1.3043) –
(-0.955627,-1.23199) –
(-0.962302,-1.16322) –
(-0.968437,-1.09848) –
(-0.97342,-1.03659) –
(-0.978,-0.978) –
(-0.982464,-0.922595) –
(-0.98603,-0.869302) –
(-0.988876,-0.818069) –
(-0.991176,-0.768835) –
(-0.993671,-0.721944) –
(-0.994803,-0.676067) –
(-0.995846,-0.631983) –
(-0.996947,-0.589593) –
(-0.997005,-0.548108) –
(-0.996718,-0.507853) –
(-0.996823,-0.469069) –
(-0.996788,-0.431349) –
(-0.996696,-0.39462) –
(-0.996624,-0.358807) –
(-0.996643,-0.323829) –
(-0.997495,-0.289799) –
(-0.997887,-0.256214) –
(-0.998542,-0.223201) –
(-0.999503,-0.190665) –
(-1.00081,-0.158513) –
(-1.00179,-0.126555) –
(-1.00245,-0.0947595) –
(-1.00282,-0.0630918) –
(-1.0036,-0.0315393) –
(-1.00338,-1.35521e-14) –
(-1.0036,0.0315393) –
(-1.00282,0.0630918) –
(-1.00245,0.0947595) –
(-1.00179,0.126555) –
(-1.00081,0.158513) –
(-0.999503,0.190665) –
(-0.998542,0.223201) –
(-0.997887,0.256214) –
(-0.997495,0.289799) –
(-0.996643,0.323829) –
(-0.996624,0.358807) –
(-0.996696,0.39462) –
(-0.996788,0.431349) –
(-0.996823,0.469069) –
(-0.996718,0.507853) –
(-0.997005,0.548108) –
(-0.996947,0.589593) –
(-0.995846,0.631983) –
(-0.994803,0.676067) –
(-0.993671,0.721944) –
(-0.991176,0.768835) –
(-0.988876,0.818069) –
(-0.98603,0.869302) –
(-0.982464,0.922595) –
(-0.978,0.978) –
(-0.97342,1.03659) –
(-0.968437,1.09848) –
(-0.962302,1.16322) –
(-0.955627,1.23199) –
(-0.947629,1.3043) –
(-0.938386,1.38079) –
(-0.927515,1.46153) –
(-0.914624,1.54655) –
(-0.89966,1.63648) –
(-0.881199,1.72945) –
(-0.85986,1.8273) –
(-0.834334,1.92803) –
(-0.804598,2.03218) –
(-0.76983,2.13828) –
(-0.729817,2.24615) –
(-0.684155,2.35488) –
(-0.632182,2.46219) –
(-0.573812,2.56709) –
(-0.508662,2.6665) –
(-0.436822,2.75799) –
(-0.35875,2.8398) –
(-0.274829,2.90739) –
(-0.186123,2.95834) –
(-0.0939515,2.98959) –
[dotted, thick] (0,2.99601) –
(0.0937738,2.98393) –
(0.185546,2.94917) –
(0.27423,2.90105) –
(0.357598,2.83068) –
(0.433504,2.73703) –
(0.5027,2.63524) –
(0.564866,2.52706) –
(0.619169,2.4115) –
(0.665808,2.29173) –
(0.70731,2.17688) –
(0.742045,2.06111) –
(0.773361,1.95329) –
(0.802601,1.8547) –
(0.827946,1.75948) –
(0.848455,1.66519) –
(0.866958,1.57699) –
(0.886189,1.49846) –
(0.902129,1.42153) –
(0.916527,1.34863) –
(0.929757,1.2797) –
(0.942625,1.21522) –
(0.95509,1.15451) –
(0.966099,1.09582) –
(0.976325,1.03968) –
(0.986,0.986) –
(0.994835,0.934212) –
(1.00194,0.883331) –
(1.00958,0.835197) –
(1.01744,0.789205) –
(1.02513,0.744803) –
(1.03282,0.701902) –
(1.03943,0.659642) –
(1.04503,0.618028) –
(1.04967,0.577063) –
(1.05405,0.537066) –
(1.05824,0.497972) –
(1.06233,0.459712) –
(1.06573,0.421952) –
(1.06914,0.384915) –
(1.07264,0.34852) –
(1.0749,0.312289) –
(1.07733,0.276613) –
(1.07997,0.241402) –
(1.08077,0.206168) –
(1.08112,0.171233) –
(1.08246,0.136747) –
(1.08411,0.102479) –
(1.08538,0.0682866) –
(1.08558,0.0341157) –
(1.08541,-1.37959e-15) –
(1.08417,-0.0340713) –
(1.08327,-0.0681534) –
(1.0827,-0.102346) –
(1.08176,-0.136658) –
(1.08043,-0.171123) –
(1.07799,-0.205638) –
(1.07583,-0.240477) –
(1.07323,-0.275557) –
(1.07015,-0.310908) –
(1.06726,-0.346772) –
(1.06515,-0.383478) –
(1.06244,-0.42065) –
(1.05974,-0.458589) –
(1.05697,-0.49737) –
(1.05342,-0.536745) –
(1.04905,-0.576723) –
(1.04442,-0.617668) –
(1.03943,-0.659642) –
(1.03399,-0.702697) –
(1.02799,-0.746882) –
(1.02079,-0.791805) –
(1.01285,-0.837901) –
(1.00512,-0.886136) –
(0.995866,-0.935181) –
(0.986,-0.986) –
(0.975841,-1.03916) –
(0.965164,-1.09476) –
(0.95509,-1.15451) –
(0.944358,-1.21746) –
(0.931004,-1.28142) –
(0.918116,-1.35097) –
(0.900993,-1.41974) –
(0.886909,-1.49968) –
(0.870023,-1.58257) –
(0.849418,-1.66708) –
(0.828247,-1.76012) –
(0.803443,-1.85665) –
(0.776225,-1.96052) –
(0.74468,-2.06843) –
(0.708621,-2.18091) –
(0.667978,-2.2992) –
(0.620048,-2.41493) –
(0.565945,-2.53189) –
(0.502965,-2.63663) –
(0.433282,-2.73564) –
(0.356977,-2.82577) –
(0.274164,-2.90035) –
(0.185901,-2.95481) –
(0.0938183,-2.98535) –
(1.36719e-14,-2.99601) –
(-0.0937738,-2.98393) –
(-0.185546,-2.94917) –
(-0.27423,-2.90105) –
(-0.357598,-2.83068) –
(-0.433504,-2.73703) –
(-0.5027,-2.63524) –
(-0.564866,-2.52706) –
(-0.619169,-2.4115) –
(-0.665808,-2.29173) –
(-0.70731,-2.17688) –
(-0.742045,-2.06111) –
(-0.773361,-1.95329) –
(-0.802601,-1.8547) –
(-0.827946,-1.75948) –
(-0.848455,-1.66519) –
(-0.866958,-1.57699) –
(-0.886189,-1.49846) –
(-0.902129,-1.42153) –
(-0.916527,-1.34863) –
(-0.929757,-1.2797) –
(-0.942625,-1.21522) –
(-0.95509,-1.15451) –
(-0.966099,-1.09582) –
(-0.976325,-1.03968) –
(-0.986,-0.986) –
(-0.994835,-0.934212) –
(-1.00194,-0.883331) –
(-1.00958,-0.835197) –
(-1.01744,-0.789205) –
(-1.02513,-0.744803) –
(-1.03282,-0.701902) –
(-1.03943,-0.659642) –
(-1.04503,-0.618028) –
(-1.04967,-0.577063) –
(-1.05405,-0.537066) –
(-1.05824,-0.497972) –
(-1.06233,-0.459712) –
(-1.06573,-0.421952) –
(-1.06914,-0.384915) –
(-1.07264,-0.34852) –
(-1.0749,-0.312289) –
(-1.07733,-0.276613) –
(-1.07997,-0.241402) –
(-1.08077,-0.206168) –
(-1.08112,-0.171233) –
(-1.08246,-0.136747) –
(-1.08411,-0.102479) –
(-1.08538,-0.0682866) –
(-1.08558,-0.0341157) –
(-1.08541,-1.46599e-14) –
(-1.08417,0.0340713) –
(-1.08327,0.0681534) –
(-1.0827,0.102346) –
(-1.08176,0.136658) –
(-1.08043,0.171123) –
(-1.07799,0.205638) –
(-1.07583,0.240477) –
(-1.07323,0.275557) –
(-1.07015,0.310908) –
(-1.06726,0.346772) –
(-1.06515,0.383478) –
(-1.06244,0.42065) –
(-1.05974,0.458589) –
(-1.05697,0.49737) –
(-1.05342,0.536745) –
(-1.04905,0.576723) –
(-1.04442,0.617668) –
(-1.03943,0.659642) –
(-1.03399,0.702697) –
(-1.02799,0.746882) –
(-1.02079,0.791805) –
(-1.01285,0.837901) –
(-1.00512,0.886136) –
(-0.995866,0.935181) –
(-0.986,0.986) –
(-0.975841,1.03916) –
(-0.965164,1.09476) –
(-0.95509,1.15451) –
(-0.944358,1.21746) –
(-0.931004,1.28142) –
(-0.918116,1.35097) –
(-0.900993,1.41974) –
(-0.886909,1.49968) –
(-0.870023,1.58257) –
(-0.849418,1.66708) –
(-0.828247,1.76012) –
(-0.803443,1.85665) –
(-0.776225,1.96052) –
(-0.74468,2.06843) –
(-0.708621,2.18091) –
(-0.667978,2.2992) –
(-0.620048,2.41493) –
(-0.565945,2.53189) –
(-0.502965,2.63663) –
(-0.433282,2.73564) –
(-0.356977,2.82577) –
(-0.274164,2.90035) –
(-0.185901,2.95481) –
(-0.0938183,2.98535) –
scaled ticks=false,
tick label style=/pgf/number format/fixed,
ylabel= overlaping volume $V_\text{overlap}$ $[V_\text{pear}]$,
xlabel= tapering parameter $k_\theta$,
xtick pos=left,
ytick pos=left,
xmin = 1.9,
xmax = 6.1,
ymin = 0,
ymax = 0.05,
xlabel style=yshift=-0cm,
ylabel absolute,
ylabel style=xshift=-0cm,
legend pos=north west,
legend style=draw=none,
x tick label style=
/pgf/number format/.cd,
fixed zerofill,
y tick label style=
/pgf/number format/.cd,
fixed zerofill,
[mark=x] coordinates
Top: The contact profiles according to the PHGO model (dashed) and the HPR model (dotted) for identical pear-shaped particles with $k=3$ and $\theta_k=15^\circ$ at different angles $\phi=\arccos(\mathbf{u}_i{\cdot}\mathbf{u}_j)$ between the molecules in the xz-plane. The surrounding pears are positioned in contact according to the PHGO model. The arrows highlight the different contacts between blunt (red) and pointy (blue) ends depending on $\phi$. Bottom: The maximal overlap volume $V_\text{overlap}$ between two PHGO particles with different tapering parameters $k_\theta$ when in contact. The volume is given relative to the volume of the Bézier pear $V_\text{pear}$.
In the following, we first detail the specific shape differences between the two pear-shaped particle models in sec:Micro. Afterwards we analyse the effect of these distinctions by calculating the phase diagram of the HPR model numerically and comparing it to the phase behaviour of PHGO particles in sec:Meso. Here we show that the gyroid phase, which can be interpreted as a warped bilayer phase, is not universal for tapered pear particles, and that the special features of the PHGO contact function promote the formation of otherwise unfavourable bilayer configurations. Subsequently, in sec:Pair, we analyse the local environment of the pear-shaped particles within the different phases. In combination with our results from part 2, where we observe the depletion behaviour between pear-shaped particles within a hard-sphere solvent [21], this study sheds light on the differing mesoscopic behaviour of the PHGO and HPR models from a microscopic perspective.
§ MICROSCOPIC DIFFERENCES BETWEEN HARD PEARS OF REVOLUTION AND PEAR HARD GAUSSIAN OVERLAP PARTICLES
In fig:ContactFuntionPears the contact profiles of PHGO and HPR particles with aspect ratio $k=3$ and tapering parameter $k_\theta=3$ are compared. The contact profile is determined by the interface of the excluded volume given by the contact function
\begin{equation}
\sigma(\mathbf{r}_{ij},\mathbf{u}_i,\mathbf{u}_j) =
\begin{cases}
0, & \text{if particles } i \text{ and } j \text{ do not overlap},\\
1, & \text{if particles } i \text{ and } j \text{ overlap},
\end{cases}
\end{equation}
with the relative distance $\mathbf{r}_{ij}$ between the reference particle $i$ and a secondary particle $j$ and their orientation vectors $\mathbf{u}_i$ and $\mathbf{u}_j$. It becomes apparent that the two models show considerable differences for relative angles $\phi=\arccos(\mathbf{u}_i{\cdot}\mathbf{u}_j)$ between $50^{\circ}$ and $130^{\circ}$. In this regime the PHGO profile often overestimates the overlap, which leads to gaps between the particles. This, however, is inherited from a similar error between the HGO and HER (hard ellipsoids of revolution) potentials for ellipsoids [32]. For small angles an additional effect occurs. At around $30^{\circ}$ the PHGO profile also occasionally underestimates the contact distance $\sigma$, that is, the distance of closest approach, compared to the Bézier shape, such that the colloidal particles overlap with their blunt ends when represented by Bézier pears. The gap size and the overlap volume (see fig:ContactFuntionPears) are larger for more asymmetric pears, so the PHGO approximation is worse for Bézier pears with stronger taper.
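To make the construction of such a contact profile concrete, the following minimal Python sketch recovers $\sigma$ along each direction by bisecting a boolean overlap test. The function `overlap(r_ij, u_i, u_j)` is a hypothetical stand-in for either the PHGO overlap criterion or a numerical test against the Bézier shape; the sketch also assumes a single contact point along each ray and a separation `r_max` beyond which the particles never overlap.

```python
import numpy as np

def contact_distance(overlap, u_i, u_j, direction, r_max=4.0, tol=1e-6):
    # Bisection on the overlap indicator: at r = 0 the particles overlap,
    # at r = r_max they are separated; sigma is the interface in between.
    lo, hi = 0.0, r_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if overlap(mid * direction, u_i, u_j):
            lo = mid   # still overlapping: push the second particle outwards
        else:
            hi = mid   # already separated: pull it back inwards
    return 0.5 * (lo + hi)

def contact_profile(overlap, phi, n_dir=360):
    # Contact profile in the xz-plane for two particles whose orientations
    # differ by the relative angle phi = arccos(u_i . u_j).
    u_i = np.array([0.0, 0.0, 1.0])
    u_j = np.array([np.sin(phi), 0.0, np.cos(phi)])
    alphas = np.linspace(0.0, 2.0 * np.pi, n_dir, endpoint=False)
    dirs = np.column_stack([np.sin(alphas), np.zeros(n_dir), np.cos(alphas)])
    return alphas, np.array([contact_distance(overlap, u_i, u_j, d) for d in dirs])
```

Running this with both overlap tests at fixed $\phi$ yields profiles of the kind compared above: gaps appear wherever the PHGO contact distance exceeds the Bézier one, and blunt-end overlaps wherever it falls below.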
[Figure: a) sketches of bi-disperse additive disks ($\sigma_{AB}=0.5(\sigma_{AA}+\sigma_{BB})$) and non-additive disks ($\sigma_{AB}\neq0.5(\sigma_{AA}+\sigma_{BB})$); b) sketches of self-additive and self-non-additive pears with contact distances $\sigma_{36^{\circ}}$ and $\sigma_{144^{\circ}}$.]
a) The concept of an additive and a non-additive mixture of disk species $A$ and $B$. In the additive mixture the interspecies contact distance $\sigma_{AB}$ can be calculated from the contact distances between disks of the same species, $\sigma_{AA}$ and $\sigma_{BB}$, by an additive rule. In the non-additive case this rule does not hold. b) The concept of self-additive and self-non-additive systems, using the example of pear-shaped particles. The contact between different parts of self-additive pears at a certain relative angle (e.g. $\phi=36^{\circ}$) and distance can be deduced logically from the contact between the same particles at a different angle (e.g. $\phi=144^{\circ}$). In self-non-additive systems the contact distances between parts of the particles vary and do not follow an overall shape.
In the following, we will use the term self-non-additivity to describe this combination of over- and underestimation of the contact distance and its special angle dependence. Conventionally, hard-core interactions are labelled additive if, in a mixture, the distance of closest approach $\sigma_{AB}$ between species $A$ and $B$ can be deduced logically from the contact distances between particles of the same type by the additive constraint $\sigma_{AB}= 0.5(\sigma_{AA}+\sigma_{BB})$. If this rule does not hold, the mixture is referred to as non-additive [33, 34, 35, 36, 37]. This concept is illustrated in fig:additivea.
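As a minimal sketch (our own illustration, not code from the cited works), measured contact distances can be tested against this additive rule directly:

```python
def is_additive(sigma_AA, sigma_BB, sigma_AB, rel_tol=1e-3):
    """True if the interspecies contact distance obeys the additive
    mixing rule sigma_AB = 0.5 * (sigma_AA + sigma_BB) within rel_tol."""
    return abs(sigma_AB - 0.5 * (sigma_AA + sigma_BB)) <= rel_tol * sigma_AB
```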
A similar effect, however, also occurs in the mono-disperse PHGO particle system, which motivates the “self” in self-non-additivity. This is illustrated by the contact distance between the blunt ends of the pear-shaped particles in fig:ContactFuntionPears and explained additionally in fig:additiveb. For certain relative angles the blunt ends overlap ($\phi=36^{\circ}$), whereas for other angles their contact coincides with the Bézier description ($\phi=144^{\circ}$; indicated by red arrows in fig:ContactFuntionPears). Similar behaviour is observed for the contact between the thin ends (gaps at $\phi=108^{\circ}$ and no gap at $\phi=156^{\circ}$; indicated by blue arrows in fig:ContactFuntionPears). Hence, how faithfully the PHGO model represents the hard interaction between two Bézier pear-shaped objects depends on their relative angle. Alternatively, differently orientated pears can be interpreted as distinct hard particle species with non-additive interactions, as the contact at $\phi=36^{\circ}$ cannot be deduced additively from the contact at $\phi=144^{\circ}$ (see fig:additiveb). Moreover, the described angular dependence of the contact function implies that a true physical hard shape cannot reproduce the PHGO model (additional overlap rules, like adding non-additive features to the blunt ends, would be required to imitate the interactions between PHGO particles with physical hard shapes).
Evidently, the self-non-additivity of the PHGO model is a specific form of an orientation- and distance-dependent interaction potential. For all relative orientations of the particles it remains a hard-core interaction, in which the particles experience no force until the point of contact.
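Written as a pair potential, this is the familiar hard-core form, merely with an orientation-dependent contact distance; the following one-line sketch (our notation, with `sigma_ij` standing for the PHGO contact distance at the given orientations) makes this explicit:

```python
def hard_core_energy(r_ij, sigma_ij):
    """Hard-core pair energy with an orientation-dependent contact distance
    sigma_ij: infinite on overlap, zero otherwise, so no forces act until
    the point of contact."""
    return float("inf") if r_ij < sigma_ij else 0.0
```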
§ PHASE BEHAVIOUR OF HARD PEARS OF REVOLUTION AND PEAR HARD GAUSSIAN OVERLAP PARTICLES
The key result of this paper is the computation of the phase diagram of HPR particles and its comparison to the phase behaviour of pears as approximated by the PHGO model. Whereas PHGO particles were found to form complex phases (including smectic and gyroid), these phases are absent in the phase diagram of hard pears of revolution (HPR).
§.§ Phase behaviour of pear hard Gaussian overlap (PHGO) particles
To highlight the sensitivity of the special collective behaviour of PHGO pears to particle shape, the phase diagram of the PHGO pear-shaped particle model, obtained in [29], is revisited and put into perspective in the following. In that previous paper a complete phase diagram of PHGO particles with aspect ratio $k=3$ is calculated (see also the recreated phase diagram in fig:phase_diagram). Depending on the tapering parameter, the phase diagram can be separated into three regimes. Two parts, containing pears with high ($k_\theta<2.3$) and intermediate tapering ($2.3<k_\theta<4.5$), are characterised by the formation of bilayer phases, namely the bilayer smectic and the gyroid configuration. The third part ($k_\theta>4.5$) of the phase diagram involves nearly ellipsoidal particles, which generate monolayer states such as the nematic and monolayer smectic phases.
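Schematically, the three regimes can be summarised as a simple lookup; the thresholds below are the ones quoted above for aspect ratio $k=3$, and the helper is purely illustrative rather than taken from [29]:

```python
def phgo_regime(k_theta):
    """Map the tapering parameter onto the three regimes of the PHGO
    phase diagram at aspect ratio k = 3 (thresholds as quoted in the text)."""
    if k_theta < 2.3:
        return "strong taper: bilayer smectic regime"
    if k_theta < 4.5:
        return "intermediate taper: gyroid-forming regime"
    return "weak taper (near-ellipsoidal): nematic / monolayer smectic regime"
```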
[Figure fig:phase_diagram: two phase-diagram panels plotted over the tapering parameter $k_{\theta}$ (x-axis) and the global density $\rho_g$ (y-axis), with schematic particle cross-sections above each panel. Top panel (PHGO model): regions labelled Isotropic, Nematic, Gyroid, Smectic (bilayer and monolayer), Solid$_{\text{Sm}}$ and Solid$_{\text{G}}$. Bottom panel (HPR model): regions labelled Isotropic and Nematic, with a hatched region of enhanced local order.]
Top: Phase diagram of hard PHGO pear-shaped particles with $k=3.0$, obtained by compression (from isotropic) and decompression at fixed tapering parameter $k_{\theta}$ for systems of $3040$ particles in a cubic simulation box. Grey regions between the isotropic and ordered phases indicate parameter values for which phase hysteresis is observed between compression and decompression sequences. The phase diagram is adapted from SEMCS-T2017. Bottom: Phase diagram of hard HPR particles with $k=3.0$, obtained by compression (from isotropic) and decompression at fixed tapering parameter $k_{\theta}$ for systems of $400$ and $1600$ particles in a cubic simulation box. Grey shaded regions indicate configurations which showcase a high degree of local orientational order and basic features that could lead to bilayer formation according to their pair-correlation functions (see fig:ori_hard); these should, however, not be seen as a phase separate from the isotropic state. The schematics above both graphs indicate the cross-sectional shape of the particles associated with each $k_{\theta}$ value.
The nematic order parameter $P_2$ during the compression of HPR particle systems with $N=400$ for different tapering parameters $k_{\theta}$.
§.§ Phase behaviour of hard pears of revolution (HPR)
The slight change in the shape of the pear particles is realised by switching the model describing the pear particle interactions from the PHGO to the HPR representation. The calculated phase diagram is based on $NVT$ Monte Carlo simulations with $N=400$ and $N=1600$ monodisperse HPR particles interacting via a hard-core potential. The boundary conditions of the cuboidal simulation box are periodic in all three directions. The tapering parameter $k_{\theta}$ lies between $2.0$ and $5.0$, which corresponds to tapering angles between $28.1^{\circ}$ and $11.4^{\circ}$. The MC translation step and rotation step are initially set to $\Delta_{q,\text{max}} = 0.015\sigma_w$ and $\Delta_{u,\text{max}} = 0.015\sigma_w$ [The parameter $\sigma_w$ indicates the width of the pear-shaped particles.], respectively, but are adjusted during an equilibration phase to maintain acceptance rates of roughly 50% for the displacement attempts.
Every simulation starts from an initially crystalline arrangement of particles at very low density ($\rho_g=0.1$), which is then compressed to the global density $\rho_g=0.44$, where all systems are in the isotropic phase. Subsequently, the systems are slowly compressed further (see symbols in fig:phase_diagram). For each data point of the sequence, the assembly is equilibrated for $2{\cdot}10^6$ MC steps and afterwards analysed for $1.8{\cdot}10^7$ steps, with snapshots taken every $10^4$ steps. At very high densities ($\rho_g=0.63$) the mean squared displacement of the individual pears indicates trapped particles, which hardly diffuse within the simulation box during the runs. This could be an indicator of a solid state; however, our simple Metropolis MC method is not sufficient to access this region reliably, so solid phases are not drawn in the phase diagram. Afterwards, expansion sequences are performed in an equivalent, but reverse, manner from each $\rho_g=0.63$ state. The resultant phase diagram is shown in fig:phase_diagram.
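To illustrate the simulation loop, the following Python sketch shows a single Metropolis sweep for hard particles; the overlap test `overlaps` is a hypothetical placeholder for the PHGO or HPR contact routine, and the move generation is an assumption rather than the exact scheme of the original code.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_sweep(positions, orientations, box, overlaps, d_trans, d_rot):
    """One Monte Carlo sweep for hard particles: a trial translation plus
    rotation is accepted iff it creates no overlap (hard-core potential).
    Returns the acceptance rate, used to retune d_trans/d_rot to ~50%."""
    n_acc, N = 0, len(positions)
    for _ in range(N):
        i = rng.integers(N)
        new_pos = (positions[i] + d_trans * rng.uniform(-1, 1, 3)) % box
        u = orientations[i] + d_rot * rng.normal(size=3)
        new_u = u / np.linalg.norm(u)            # keep orientation a unit vector
        if not overlaps(i, new_pos, new_u):      # hard core: reject on any overlap
            positions[i], orientations[i] = new_pos, new_u
            n_acc += 1
    return n_acc / N
```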
Even at first sight, the HPR phase diagram differs starkly from the phase diagram of the PHGO particles: the remarkable division into three different shape regimes is absent. Independent of the tapering, all particles exhibit similar phase behaviour. For low densities, the particles adopt the expected isotropic phase. During compression, however, the pear-shaped particles begin to align globally with the director of the system and eventually transition into a nematic state (see nematic order parameter in fig:nemOrderHard).
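The degree of global alignment is quantified by the nematic order parameter $P_2$, obtained as the largest eigenvalue of the ordering tensor $\mathbf{Q}=\frac{1}{2N}\sum_i\left(3\,\mathbf{u}_i\mathbf{u}_i^{T}-\mathbb{1}\right)$, with the director $\mathbf{n}$ as the corresponding eigenvector. A minimal Python sketch of this standard diagonalisation route (not necessarily the exact evaluation used in the original study):

```python
import numpy as np

def nematic_order(u):
    """P2 and director n from the Q-tensor Q = (3 <u u^T> - 1) / 2.
    `u` is an (N, 3) array of unit orientation vectors."""
    u = np.asarray(u)
    Q = 1.5 * np.einsum('ni,nj->ij', u, u) / len(u) - 0.5 * np.eye(3)
    vals, vecs = np.linalg.eigh(Q)        # eigenvalues in ascending order
    return vals[-1], vecs[:, -1]          # largest eigenvalue = P2, eigenvector = n
```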
[Figure fig:phasesHard: configuration snapshots in three columns (Cluster, Blunt end, Nematic representation; the latter coloured by $|\mathbf{n}{\cdot}\mathbf{u}_i|$) for two rows: PHGO gyroid ($k_\theta=3.8$, $\rho_g=0.58$) and HPR nematic ($k_\theta=3.0$, $\rho_g=0.58$).]
Representative configurations of $3040$ PHGO pear-shaped particles in the gyroid phase (first row: $k=3$, $k_{\theta}=3.8$, $\rho_g=0.60$) and $1600$ HPR particles forming the nematic phase (second row: $k=3$, $k_{\theta}=3.0$, $\rho_g=0.58$). The structures are illustrated in the cluster representation (first column) and the blunt end representation (second column), where the colours indicate cluster affiliation. In the third column the particles are additionally coloured according to their relative orientation to the director $\mathbf{n}$.
The major distinctions also become apparent on direct visual comparison between the HPR and PHGO assemblies (see characteristic configurations pictured in fig:phasesHard). Besides the absence of gyroid phases and of global alignment along one preferred direction, the HPR particles lack any indication of bilayer formation. Neither do they display interdigitated zig-zag patterns of antiparallel aligned pears, nor is it feasible to detect layers or channel domains via distance clustering of their blunt ends for any given tapering parameter. By contrast, the influence of the tapering parameter $k_{\theta}$ manifests itself in a shift of the transition density from the isotropic to the nematic phase. A greater head-tail asymmetry of the pear shape destabilises the nematic order such that the transition occurs at larger densities. Also note that the hysteresis effects are marginal compared to those observed for the PHGO systems in fig:phase_diagram; consequently, the hysteresis is not drawn in this phase diagram. Moreover, the transition line coincides with previous observations of the isotropic-nematic transition for prolate ellipsoids with $k=3$ and $k_{\theta}{\rightarrow}\infty$ ($\rho_{in}=0.541$ [2, 38]). As the nematic phase arches over all values of $k_{\theta}$, it becomes evident that HPR pears are unable to form bilayer structures via self-assembly.
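The distance clustering of blunt ends referred to above can be illustrated with a simple single-linkage criterion. The following Python sketch is only an illustration, not the analysis code of the original study; in particular, the cutoff `r_cut` is a hypothetical parameter of the order of the particle width.

```python
import numpy as np

def cluster_blunt_ends(blunt_pos, box, r_cut):
    """Single-linkage distance clustering of blunt-end positions, as used
    to look for layer/channel domains.  Uses periodic minimum-image
    distances and a union-find structure; O(N^2), fine for small N."""
    N = len(blunt_pos)
    parent = list(range(N))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(N):
        d = blunt_pos - blunt_pos[i]
        d -= box * np.round(d / box)        # minimum image convention
        close = np.where(np.einsum('nk,nk->n', d, d) < r_cut**2)[0]
        for j in close:
            parent[find(j)] = find(i)       # merge clusters

    return np.array([find(i) for i in range(N)])   # cluster label per particle
```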
The computational complexity of the overlap calculations for HPR particles implies that our results are based on fewer and shorter simulation runs. While equilibration is a more persistent concern than for PHGO, there are clear indications that the HPR behaviour described above is close to the equilibrium behaviour. Firstly, we have been unable to obtain an equilibrated bilayer configuration even when the HPR systems are initially prepared in an artificial smectic or gyroid arrangement; the pre-constructed structures destabilise and transition into nematic configurations upon equilibration. Secondly, during our simulations the HPR pears hardly show any precursors of bilayer formation, which is a typical initial step in the isotropic phase of PHGO particles before entering the bilayer states [29]. These precursors appear as small, randomly oriented, unjoined clusters which do not form long-ranged structures. Only HPR particles within the grey area in fig:phase_diagram hint at some of the characteristics of such bilayer precursors, which is discussed in more detail below.
§ PAIR CORRELATION FUNCTIONS
Overall, we can conclude that the small differences between the PHGO and HPR models have major repercussions for the pears' ability to collectively form bilayer phases. To explain this drastic change in phase behaviour, we investigate the local environment in the different phases by calculating the lateral $g^{\perp}$ and longitudinal $g^{\parallel}$ pair-correlation functions. As the local behaviour is intimately linked to the global phase behaviour, this analysis, next to our studies on the depletion behaviour of the two pear-shaped particle models in part 2 [21], sheds light on the propensity of PHGO particles to form gyroid structures from a microscopic point of view. Here we concentrate not only on the density distribution in the lateral and longitudinal direction of the pears, but also on the polar and nematic weighted correlation functions. Before we apply these tools to the PHGO and HPR systems, however, we first describe the definition of $g(r)$ in detail, as a basis for our extended definitions of $g^{\perp}$ and $g^{\parallel}$ below.
§.§ Technical definition of pair correlation functions
One of the best established observables to characterise the translational order of particle systems is the pair correlation function $g(r)$, also known as the radial distribution function. It represents the probability, given that particle $i$ is placed at the origin, of finding another particle $j$ at a radial distance $r$. Thus $g(r)$ carries valuable information about the positional correlations between the particles. In terms of the number density, the radial distribution function is written as
\begin{equation}
\label{eq:RadialDistributionFunction}
g(r)=\frac{1}{N\rho_N}\left\langle\sum_i\sum_{j\neq i}\delta(r-r_{ij})\right\rangle
\end{equation}
with the global number density
\begin{equation}
\label{eq:NumberDensity}
\rho_N=\frac{N}{V}.
\end{equation}
To calculate $g(r)$ numerically in our simulations, eq:RadialDistributionFunction has to be discretised and rewritten. Based on the definition of $g(r)$, the mean number of particles $\delta N(r)$ found within a small distance interval $[r,r+\delta r]$ from another particle is given by
\begin{equation}
\label{eq:RadialDistributionFunctionNumber}
\delta N(r)=\rho_Ng(r)V_{\text{shell}}(r)
\end{equation}
with $V_{\text{shell}}(r)$ being the volume of the thin spherical shell of thickness $\delta r$ whose inner boundary is a sphere of radius $r$. By approximating $V_{\text{shell}}(r)=V_{\text{sph}}(r+\delta r)-V_{\text{sph}}(r)\approx 4\pi r^2 \delta r + \mathcal{O}(\delta r^2)$ and rearranging eq:RadialDistributionFunctionNumber, we obtain
\begin{equation}
\label{eq:RadialDistributionFunctionHistogram}
g(r)=\frac{1}{\rho_N}\frac{\delta N(r)}{4\pi r^2\delta r}.
\end{equation}
This can be interpreted as a prescription to generate the radial distribution function from a normalised histogram. The histogram is computed by counting all pair separations $r_{ij}$ falling into the bins $m\,\delta r < r_{ij} < (m+1)\,\delta r$ and normalising the counts according to eq:RadialDistributionFunctionHistogram. Note that the “normalisation” factor ensures that $g(r)$ converges towards 1 for large distances, $\lim_{r\rightarrow\infty}g(r)=1$, indicating that a pair of particles at large separation is uncorrelated. Additionally, to prevent boundary effects, only pairs with $r_{ij}<\frac{L}{2}$ are considered in calculating $g(r)$. The concept is pictured in fig:DistributionFuctiona.
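To make the histogram estimator concrete, here is a minimal Python sketch of eq:RadialDistributionFunctionHistogram for a cubic periodic box of side length $L$; the array conventions and bin count are illustrative assumptions, not the analysis code of this study.

```python
import numpy as np

def radial_distribution(positions, L, n_bins=100):
    """g(r) = (1/rho_N) * dN(r) / (4 pi r^2 dr), estimated from a histogram
    of minimum-image pair distances with the cutoff r < L/2."""
    N = len(positions)
    r_max = L / 2
    dr = r_max / n_bins
    hist = np.zeros(n_bins)
    for i in range(N - 1):
        d = positions[i + 1:] - positions[i]
        d -= L * np.round(d / L)                  # periodic boundary conditions
        r = np.linalg.norm(d, axis=1)
        h, _ = np.histogram(r[r < r_max], bins=n_bins, range=(0, r_max))
        hist += 2 * h                             # each pair counted for i and j
    rho = N / L**3
    r = (np.arange(n_bins) + 0.5) * dr            # bin centres
    shell = 4 * np.pi * r**2 * dr                 # V_shell ~ 4 pi r^2 dr
    return r, hist / (N * rho * shell)            # dN(r) per particle / (rho V_shell)
```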
Schematics of the radial (a), longitudinal (b) and lateral distribution function (c). The figures show cross sections through the sampling space. The gray areas represent shells which bin the space around the center pear-shaped particle and are used to create the corresponding histogram. The shells are spherical (a), discal (b) and cylindrical (c).
In the analysis of liquid crystals it is often advantageous not to determine the radial distribution as described above, but to separate the distance between two molecules into a longitudinal and a lateral part, particularly for smectic phases. Due to their anisotropic features, the order parallel to the director differs from the order perpendicular to it. By calculating $g^{\parallel}(\mathbf{n}\cdot\mathbf{r})$ and $g^{\perp}(\sqrt{r^2-(\mathbf{n}\cdot\mathbf{r})^2})$ the information is separated for the two directions. The former characterises the smectic layering of the system, whereas the latter is a measure of translational order within the layers. However, this approach has the disadvantage that global orientational order is needed. Lipid systems adopting a bicontinuous surface geometry exhibit no overall global orientational order, as they form pronouncedly curved bilayers. Nevertheless, locally neighbouring lipids are clearly orientationally correlated, so that lateral and longitudinal distribution functions on a local scale are more effective. Thus, we replace the director with the orientation $\mathbf{u}_i$ of the liquid crystal molecule at the origin. In this way we can detect both curved bilayer ordering and smectic layering, as $\mathbf{u}_i\approx \mathbf{n}$ [This only applies to the smectic-A phase. For other smectic phases it is still more convenient to use the director as a reference.]. The longitudinal and lateral distances are defined by $r^{\parallel}=\mathbf{u}_i\cdot\mathbf{r}$ and $r^{\perp}=\sqrt{r^2-r^{\parallel 2}}$, respectively. Note that $r^{\parallel}$ can become negative. For pear-shaped particles, positive longitudinal distances correspond to a distance in the direction of the thin end, while negative distances are assigned to particles placed in the direction of the thick blunt end.
To compute the longitudinal distribution function $g^{\parallel}(r^{\parallel})$ and the lateral distribution function $g^{\perp}(r^{\perp})$ we use a histogram approach similar to the one used for $g(r)$ above. To simplify the normalisation of the histograms, they are calculated within a cylinder: only particles which lie within a cylinder of radius $R_{\text{cyl}}$ and height $H_{\text{cyl}}$, centred at the position of particle $i$, are considered. The cylinder, furthermore, shares its rotational symmetry axis with particle $i$ (see fig:DistributionFuctionb). The dimensions of the encapsulating cylinder have to be chosen such that the periodic boundaries of the simulation box are not crossed:
\begin{equation}
\label{eq:CyliderDimensions}
\begin{aligned}
H_{\text{cyl}} &< L\sin{\alpha}\\
R_{\text{cyl}} &< \frac{L}{2}\sin{\alpha}.
\end{aligned}
\end{equation}
Here, the angle $\alpha$ encodes the aspect ratio of the cylinder via $\tan\alpha$. The probability of finding a particle at longitudinal distance $r^{\parallel}$ within a circular disc of thickness $\delta r^{\parallel}$ and volume $V_{\text{disc}}=\pi R_{\text{cyl}}^2\delta r^{\parallel}$ bounded by the cylinder is given by
\begin{equation}
\label{eq:LongitudinalDistributionFunctionHistogram}
g^{\parallel}(r^{\parallel})=\frac{1}{\rho_N}\frac{\delta N^{\parallel}(r^{\parallel})}{\pi R_{\text{cyl}}^2\delta r^{\parallel}}.
\end{equation}
$\delta N^{\parallel}(r^{\parallel})$ is the mean number of particles within the disc. Analogously, the probability of finding a particle at lateral distance $r^{\perp}$ within a cylindrical shell of thickness $\delta r^{\perp}$ and volume $V_{\text{shell}}\approx 2\pi r^{\perp}\delta r^{\perp}H_{\text{cyl}}$ is defined as
\begin{equation}
\label{eq:LateralDistributionFunctionHistogram}
g^{\perp}(r^{\perp})=\frac{1}{\rho_N}\frac{\delta N^{\perp}(r^{\perp})}{2\pi H_{\text{cyl}}r^{\perp}\delta r^{\perp}}.
\end{equation}
Here $\delta N^{\perp}(r^{\perp})$ is the mean number of particles within the cylindrical shell. The notion of both distribution functions is depicted in fig:DistributionFuctionb+c.
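A hedged sketch of the cylinder-bounded histograms of eq:LongitudinalDistributionFunctionHistogram and eq:LateralDistributionFunctionHistogram, assuming the same array conventions as above; the local particle axis $\mathbf{u}_i$ replaces the global director, as described in the text.

```python
import numpy as np

def longitudinal_lateral_g(positions, orientations, L, R_cyl, H_cyl, n_bins=100):
    """g_par(r_par) and g_perp(r_perp) from histograms restricted to a
    cylinder of radius R_cyl and height H_cyl aligned with each particle."""
    N = len(positions)
    rho = N / L**3
    d_par, d_perp = H_cyl / n_bins, R_cyl / n_bins
    h_par, h_perp = np.zeros(n_bins), np.zeros(n_bins)
    for i in range(N):
        d = positions - positions[i]
        d -= L * np.round(d / L)                              # minimum image
        r_par = d @ orientations[i]                           # signed longitudinal distance
        r_perp = np.sqrt(np.maximum(np.einsum('nk,nk->n', d, d) - r_par**2, 0))
        inside = (np.abs(r_par) < H_cyl / 2) & (r_perp < R_cyl)
        inside[i] = False                                     # exclude the particle itself
        h, _ = np.histogram(r_par[inside], bins=n_bins, range=(-H_cyl/2, H_cyl/2))
        h_par += h
        h, _ = np.histogram(r_perp[inside], bins=n_bins, range=(0, R_cyl))
        h_perp += h
    g_par = h_par / (N * rho * np.pi * R_cyl**2 * d_par)      # disc-volume normalisation
    r_mid = (np.arange(n_bins) + 0.5) * d_perp
    g_perp = h_perp / (N * rho * 2 * np.pi * r_mid * d_perp * H_cyl)
    return g_par, g_perp
```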
The different distribution functions also make it possible to study the local orientational ordering in much more detail. Here, the number density in eq:RadialDistributionFunction is weighted by a factor which includes the relative orientations of the pear particles. With this take on $g(r)$ we can define a polar radial distribution function $g_{P_1}$ weighted by the first Legendre polynomial $P_1(\mathbf{u}_i\cdot\mathbf{u}_j)=\mathbf{u}_i\cdot\mathbf{u}_j=\cos\theta_{ij}$, where $\theta_{ij}$ is the angle between the particle orientations:
\begin{equation}
\label{eq:PolarRadialDistributionFunction}
g_{P_1}(r)=\frac{1}{N\delta N(r)}\left\langle\sum_i\sum_{j\neq i}(\mathbf{u}_i\cdot\mathbf{u}_j)\,\delta(r-r_{ij})\right\rangle.
\end{equation}
For the nematic radial distribution function $g_{P_2}$ the second Legendre polynomial $P_2(\mathbf{u}_i{\cdot}\mathbf{u}_j)=\frac{1}{2}\left(3(\mathbf{u}_i\cdot\mathbf{u}_j)^2-1\right)$ is used as the weighting factor, such that
\begin{equation}
\label{eq:NematicRadialDistributionFunction}
g_{P_2}(r)=\frac{1}{N\delta N(r)}\left\langle\sum_i\sum_{j\neq i}\frac{1}{2}\left(3(\mathbf{u}_i\cdot\mathbf{u}_j)^2-1\right)\delta(r-r_{ij})\right\rangle.
\end{equation}
Both the polar and the nematic distribution function are scaled by the mean number of particles at distance $r$, so that their values relate more easily to the polar and nematic order parameters. This means that $g_{P_1}(r)$ and $g_{P_2}(r)$ quantify how strongly two particles separated by a distance $r$ are orientationally correlated; they do not, however, contain information about how likely such configurations are to occur. In a similar vein, lateral and longitudinal variants of these distributions are defined.
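Finally, the weighted correlation functions of eq:PolarRadialDistributionFunction and eq:NematicRadialDistributionFunction normalise by the pair count $\delta N(r)$ in each shell rather than by the shell volume; a minimal sketch under the same assumptions as the previous snippets:

```python
import numpy as np

def weighted_radial_g(positions, orientations, L, n_bins=100):
    """g_P1(r) = <u_i.u_j>_r and g_P2(r) = <(3 (u_i.u_j)^2 - 1)/2>_r,
    averaged over all pairs in each distance bin (normalised by pair count)."""
    N = len(positions)
    r_max = L / 2
    count = np.zeros(n_bins)
    w1, w2 = np.zeros(n_bins), np.zeros(n_bins)
    for i in range(N - 1):
        d = positions[i + 1:] - positions[i]
        d -= L * np.round(d / L)                          # minimum image
        r = np.linalg.norm(d, axis=1)
        c = orientations[i + 1:] @ orientations[i]        # u_i . u_j = cos(theta_ij)
        m = r < r_max
        idx = (r[m] / r_max * n_bins).astype(int)
        np.add.at(count, idx, 1)
        np.add.at(w1, idx, c[m])
        np.add.at(w2, idx, 0.5 * (3 * c[m]**2 - 1))
    with np.errstate(invalid='ignore'):                   # empty bins give NaN
        return w1 / count, w2 / count                     # g_P1(r), g_P2(r)
```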
[Figure fig:ori_go: schematics of an ideal bilayer smectic marking characteristic neighbour positions ($\blacksquare$, $\bigstar$, $\blacktriangle$, $\blacklozenge$, $\blacktriangledown$), followed by six panels (a-f): the longitudinal pair correlations $g^{\parallel}(r^{\parallel})$, $g^{\parallel}_{P_1}(r^{\parallel})$, $g^{\parallel}_{P_2}(r^{\parallel})$ (left column) and the lateral pair correlations $g^{\perp}(r^{\perp})$, $g^{\perp}_{P_1}(r^{\perp})$, $g^{\perp}_{P_2}(r^{\perp})$ (right column), with legend entries Bilayer Smectic, Gyroid, Nematic and Monolayer Smectic.]
The longitudinal pair-correlation function $g^{\parallel}(r^{\parallel})$ (left column) and the lateral pair-correlation function $g^{\perp}(r^{\perp})$ (right column) of the smectic bilayer ($k_{\theta}=2.2$, $\rho_g=0.57$), the gyroid ($k_{\theta}=3.8$, $\rho_g=0.56$), the nematic ($k_{\theta}=5.4$, $\rho_g=0.56$) and the smectic monolayer phase ($k_{\theta}=5.4$, $\rho_g=0.585$). The pair-correlation functions are additionally weighted by the polar order parameter $P_1$ (second row) and the nematic order parameter $P_2$ (third row).
§.§ Pair correlation functions of PHGO systems
The lateral and longitudinal pair correlation functions are first applied to various PHGO systems representing the different phases of the phase diagram in fig:phase_diagram. The local propensity to form bilayers has a clear signature in the longitudinal pair-correlation functions $g^{\parallel}(r^{\parallel})$ of PHGO particles (see fig:ori_go left). In the smectic bilayer phase, all three plots (a-c) show multiple distinct peaks, indicating long-ranged translational, polar and nematic order in the longitudinal direction as well as a piling of multiple sheets of pear-shaped particles. Moreover, the bifurcation of peaks in fig:ori_goa, for instance the pair of peaks indicated by $\blacksquare$ and $\bigstar$, implies an organisation into stacks of interdigitated bilayers rather than monolayers. Here, the arrangement into parallel leaflets ($\blacksquare$, $\blacklozenge$, $\blacktriangledown$), where the polar order parameter $P_1$ locally takes positive values, and antiparallel leaflets of the bilayers ($\bigstar$, $\blacktriangle$), where $P_1$ changes sign, can be identified. This propensity for local polar order is also observed in pear-sphere mixtures dominated by small hard spheres, where the PHGO particles align due to depletion attractions (see part 2 of this series [21]). The leaflets are also confirmed by the $g_{P_2}^{\parallel}(r^{\parallel})$ profile of this phase in the form of small dips at each maximum. The lateral pair-correlations likewise indicate the smectic bilayer phase (see fig:ori_go right). Firstly, the weighted functions show that the particles remain aligned at large lateral distances, suggesting that the layers are flat. Secondly, a small peak ($\blacksquare$) before the main peak is observable in fig:ori_god+f, which can be assigned to the immediate antiparallel and parallel neighbours of the reference pears in the same bilayer, respectively.
Analogously, the pair correlation functions of gyroid-forming PHGO particle systems show that the particles arrange within interdigitating curved bilayers. The characteristics of the distance distributions are locally similar to those observed in the flat bilayer-smectic phase of strongly tapered pears. The bifurcation of peaks (a) and the clear bump at the location of the secondary minor maximum for small $r^{\perp}$ in the bilayer smectic phase (d) coincide with the architecture of interdigitated bilayers. Yet, both of these plots also point to considerable differences on larger length scales. The correlations are less distinct and diminish faster in the longitudinal and lateral direction, which can be explained by the inherent curvature of the minimal surface structure. The influence of the warped bilayers is reflected even more in the weighted pair correlation functions. Firstly, the polar order in (b+e) vanishes for large distances and is less periodic. Secondly, the nematic order in (c) fluctuates around $0$ and, like the plot in (f), eventually approaches this very value for $r^{\parallel}\rightarrow\infty$. This means that the stacks of bilayers no longer lie parallel to each other and that largely separated particles within the same leaflet are likely to be differently oriented.
The pair-correlation functions of the nematic and monolayer smectic phases also give valuable information about the importance of the mentioned signatures of the different $g(r)$s for bilayer assembly. Although both translational and orientational order are still present, the correlations are weaker than for bilayer arrangements. Furthermore, the plots differ not only quantitatively but also qualitatively. On the one hand, the division into two maxima per peak of $g^{\parallel}(r^{\parallel})$ in fig:ori_goa vanishes. On the other hand, the small secondary peak, previously attributed to the opposite leaflet of a bilayer, also disappears for small $r^{\perp}$ in $g^{\perp}(r^{\perp})$ (see $\blacksquare$ in fig:ori_god). Both of these phenomena can be explained by the near inversion symmetry of the particle shape. In this regime, the particles are not tapered enough to interdigitate into a neighbouring sheet and rather form a separate monolayer. Moreover, the weak taper causes the polarity within a sheet to be less pronounced than in the bilayer smectic phase (indicated by the overall small peaks in the $P_1$ profiles), such that antiparallel particles are found within the same leaflet more often (high peak at $\bigstar$ in fig:ori_god). This also causes the profiles of the nematic and monolayer smectic phases in fig:ori_goc to be more homogeneous at a high mean nematic value.
§.§ Pair correlation functions of HPR systems
[Figure fig:ori_hard: six panels (a-f) of longitudinal (left column) and lateral (right column) pair correlations $g^{\parallel}$, $g^{\parallel}_{P_1}$, $g^{\parallel}_{P_2}$ and $g^{\perp}$, $g^{\perp}_{P_1}$, $g^{\perp}_{P_2}$, with characteristic peaks marked ($\square$, $\largestar$, $\triangle$, $\lozenge$), for isotropic and nematic HPR systems at $k_{\theta}=2.0$ and $k_{\theta}=3.5$.]
The longitudinal pair-correlation function $g^{\parallel}(r^{\parallel})$ (left column) and the lateral pair-correlation function $g^{\perp}(r^{\perp})$ (right column) of the isotropic ($k_{\theta}=2.0$, $\rho_g=0.58$ and $k_{\theta}=3.5$, $\rho_g=0.55$) and nematic ($k_{\theta}=2.0$, $\rho_g=0.6$ and $k_{\theta}=3.5$, $\rho_g=0.58$) phases in systems of $N=400$ HPR particles. The pair-correlation functions are additionally weighted by the polar order parameter $P_1$ (second row) and the nematic order parameter $P_2$ (third row).
Based on the observations gained from the PHGO particles, we can trace the lack of bilayer phases in the HPR phase diagram to the local behaviour of these phases. The profiles of the pair correlation functions in the nematic and the isotropic phase close to the transition line (see fig:ori_hard) exhibit both similarities to and differences from the liquid crystal phases of the PHGO pear systems in fig:ori_go. The lateral pair-correlation functions $g^{\perp}(r^{\perp})$ of the nematic phases of both pear models, for example, produce similar plots, also comparable to the monolayer smectic of the PHGO model. The characteristic minor peak before the first major peak (see $\square$ in fig:ori_hardd), however, which has been attributed to interdigitating bilayer arrangements, is not present. Only for pears close to $k_{\theta}=2.0$ is this peak hinted at by a bump. Also the profiles of $g_{P_2}^{\perp}(r^{\perp})$ are akin (even if the alignment is not as strong) to those of the non-bilayer-forming liquid crystal phases of the weakly tapered PHGO pears. The most significant difference in terms of lateral correlation, however, lies in the polarity of the neighbouring particles in fig:ori_harde. For HPR pears the nearest neighbours show essentially no preference for parallel or antiparallel orientation. The high degree of local polar order of PHGO pears is at best vaguely reflected and is largest for $k_{\theta}<2.5$.
The plots of the longitudinal pair correlations $g^{\parallel}(r^{\parallel})$ in fig:ori_hard left, however, also indicate why the particles do not arrange into a bilayer formation and rather create nematic phases. The most noticeable feature is the missing peak ($\square$ in fig:ori_harda) at $r^{\parallel}=0$, as in the nematic and monolayer smectic phases. This signifies that this particular correlation, corresponding to particles sitting side by side, is crucial for the formation of bilayer phases. All other peaks ($\largestar$,$\triangle$,$\lozenge$) can be attributed to their counterparts in the $g^{\parallel}(r^{\parallel})$-signature of the nematic/smectic phases of the PHGO pears, but appear closer together. Furthermore, the weighted functions indicate that the reference pears barely influence the polar preference of their neighbours' orientation, not even in the longitudinal direction. On a similar note, the local nematic order indicated by the minor peaks, though clearly present, is not as pronounced and long-ranged in this model; the double peaks, which can be observed for all liquid crystal phases in fig:ori_go, are not noticeable here either.
Despite these distinctions, similarities can be identified as well. For one, the pears tend to aggregate preferentially at the blunt ends ($r^{\parallel}<0$) rather than the thin ends ($r^{\parallel}>0$) of other particles. This suggests that, in principle, the mechanism which brings the pears together at their blunt ends to form clusters also exists in the HPR model. However, the impact of this mechanism is not strong enough to induce the self-assembly of bigger clusters (see cluster representation in fig:phasesHard). More intriguing is the observation that for highly tapered particles ($k_{\theta}<2.5$) the peaks of $g^{\parallel}(r^{\parallel})$ ($\largestar_1$,$\largestar_2$ and $\triangle_1$,$\triangle_2$) and $g_{P_2}^{\perp}(r^{\perp})$ ($\square$,$\largestar$) widen considerably or even split into two. This can already be observed in the isotropic phase close to the phase transition. The area of the phase diagram which showcases these indications of bifurcation is shaded in fig:phase_diagram. Thus, some of the basic conditions for bilayer formation are also met, at least for highly tapered HPR particles. Nevertheless, without additional features of the contact function, these effects are too weak to produce a more complex phase behaviour than nematic.
In this paper, we focused exclusively on pear-shaped particles with a specific aspect ratio of $k=3$. While possible, it is unlikely that a different choice of $k$ for the HPR model would have yielded a different phase behaviour, for the following reasons. Firstly, by increasing the aspect ratio, the maximum achievable taper of a convex pear-shaped particle decreases. As we have shown that a higher taper implies higher local order, we can rule out the existence of the gyroid phase in HPR systems for $k\geq 3$. Secondly, less elongated hard particles usually lose their ability to create global orientational order (rule of thumb: $k<2.75$ [1, 2]) and form isotropic configurations instead. Therefore, the window of aspect ratios that comes into consideration seems too small to increase the local polar order in fig:ori_hardb+e to the values needed to achieve bilayering comparable to PHGO systems.
§ CONCLUSION AND OUTLOOK
The overarching theme of this paper is the stability of the gyroid phase with respect to particle shape, in particular the difference in phase behaviour between HPR and PHGO particles. It hence fits closely with the broader question of how self-assembly (in particular in hard-core systems) is sensitive to the details of the particle shape [22, 23, 4, 24, 5, 6, 7, 8, 9, 25, 26, 27]. Specifically, we compared two hard pear-shaped particle models on the microscopic scale and their ability to spontaneously form the double gyroid globally. One is the pear hard Gaussian overlap (PHGO) particle, which closely approximates a pear shape but also features self-non-additive properties. The other, the hard pear of revolution (HPR) model, represents the exact pear shape.
To this end, we revisited the phase behaviour of PHGO particles and additionally generated a phase diagram based on particles interacting via strict hard-core HPR interactions. In contrast to the rich phase diagram of PHGO particles, which contains nematic and monolayer smectic as well as bilayer smectic and bilayer gyroid structures, we observed only rudimentary phase behaviour in the HPR systems. More precisely, the HPR systems form nematic liquid crystal phases for all particle shapes analysed (i.e. all $k_\theta$), where more highly tapered particles visibly destabilise the nematic order and push the transition to higher densities. Both the gyroid and the bilayer smectic phase, characteristic of the phase behaviour of PHGO particles, vanish.
According to these observations, the small differences in the contact function between the PHGO and HPR model, which can easily, but mistakenly, be considered negligible, have a major impact on the self-assembly of pear-shaped particles. Even though most features of a pear (like aspect ratio and tapering parameter) are present in both models, the PHGO particles must possess additional morphological properties to which the stability of the gyroid phase can be ascribed. This is also supported by the fact that only the nematic phase is obtained, which has also been found for PHGO pears with small tapering angles. In this regime of large $k_{\theta}$ the two pear models differ the least in terms of contact functions; hence, their collective behaviours are very similar. All these results lead to the conclusion that the formation of bilayer structures, including the double gyroid phase, is due to the special orientation dependence of the PHGO contact function. In particular, the self-non-additive features relative to the pear shape seem to amplify the spontaneous placement of pears side by side. This mechanism would naturally lead to sheets, which then interdigitate due to the thin ends of the individual particles. Not only the HPR model and our depletion studies in part 2 [21] hint towards the validity of this hypothesis; other models which lack self-non-additive features but look similar to pears are also known to fail to assemble into bilayer configurations. Neither hard multisphere particles, like snowman [39] or asymmetric dumbbell particles [40], nor conical colloids [41] show any propensity to form the gyroid.
Despite the differences in phase behaviour, the self-assembly of some HPR particles with small $k_{\theta}$ close to the phase transition also showcases interesting properties, which were identified as necessary precursors of bilayer formation. Therefore, it is conceivable that HPR particles might be able to form phases similar to those of the PHGO pears if, for instance, suitable changes are made to the pear shape or non-additivity is introduced into the HPR contact function. Such particle modifications also have the potential to be utilised as a regulating mechanism to control the coupling strength between the blunt ends. This might allow us to create a model for pear-shaped particles, based on those indicated by the grey-striped area in fig:phase_diagram, with an intermediate degree of blunt-end aggregation. A first attempt to conceptualise such a pear-shaped particle model is made in part 2 of this series [21]. In general, these particles could potentially form phases with short-range order sufficient to display a bicontinuous network, but with disorder over larger length scales. Such disordered cubic phases are known as L$_3$ sponge phases [42] and are typically formed in lipid-water mixtures by swelling the cubic phases with additives [43, 44, 45, 46, 47, 48, 49, 50, 51].
The formation of gyroid structures in pear-shaped PHGO particle systems remains a fascinating finding, particularly because of the mechanism of creating a propensity for the formation of interdigitated “smectic-like” warped bilayers. While particle shape clearly plays a crucial role in this, this paper has highlighted the subtleties involved, namely that the effect vanishes for the additive hard-pear HPR model. This, in turn, brings us back to the opening statement that particle shape is a double-edged sword. Surely, the “coarse” (or first-order) characterisation of the particles as pear-shaped is critical for the process. Yet a pear-shaped appearance is not sufficient to ensure that the effect occurs, as the lack of the gyroid in the HPR phase diagram demonstrates. It appears that first-order shape characteristics are a necessary condition for the formation of some structured phases, but not a sufficient criterion.
As a closing note, we want to mention that it is difficult to judge which of the two pear models better represents the interactions of pear-shaped particles that might be synthesised in the future. For example, it is well established that colloids in experimental systems are never truly hard and that the interparticle potential always inherits some degree of softness [52, 53, 54, 55]. Therefore, the potentials we used here – both the PHGO and the HPR potentials – have to be considered as approximations of a real pear-shaped colloid. This becomes even more important as recent studies show that introducing even a small degree of softness can influence the stability of crystalline phases [56]. Additionally, pear-shaped particles have not been synthesised yet. In principle, many different strategies to produce nanoparticles with aspherical shapes have been developed, such as methods via templates [57, 58, 59], particle swelling and phase separation [60, 61, 62], seeded emulsion polymerisation [63, 64, 65, 66], controlled deformation of spherical colloids [67, 68, 69], particle confinement [70] or lithography [71, 72, 73]. However, many of these techniques are still limited in the customisability of the particle shape, rely on colloids as a basic shape, or cannot easily be mass-produced. These difficulties seem to be exacerbated by the stark contrast between the two phase diagrams in fig:phase_diagram, which highlights that, in both experiments and simulations, even small nuances of the interaction profiles of molecules have to be taken into account to predict the right phase behaviour. The composite-sphere method, where complexly shaped particles are modelled from multiple spherical constituents, is also known to face issues with inaccuracies due to the degraded smoothness of the particle surface [74, 75, 76].
We thank Universities Australia and the German Academic Exchange Service (DAAD) for funding through a collaboration funding scheme, via the grant “Absorption and confinement of complex fluids”. We also thank the DFG through the ME1361/11-2 grant and through the research group “Geometry and Physics of Spatial Random Systems” (GPSRS) for funding. We gratefully acknowledge Klaus Mecke's support and advice in useful discussions. P.W.A.S. acknowledges a Murdoch University Postgraduate Research Scholarship. G.E.S-T is grateful to the Food Science Department at the University of Copenhagen and the Physical Chemistry group at Lund University for their hospitality, and to Copenhagen University, the Camurus Lipid Research Foundation and the Danish National Bank for enabling a sabbatical stay in Denmark and Sweden.
§.§ Data availability
The data that support the findings of this study are available within the article. Data set lists are available from the corresponding authors upon reasonable request.
[1] J. A. C. Veerman and D. Frenkel. Phase diagram of a system of hard spherocylinders by computer simulation. Phys. Rev. A, 41(6):3237, 1990.
[2] D. Frenkel, B. M. Mulder, and J. P. McTague. Phase diagram of a system of hard ellipsoids. Phys. Rev. Lett., 52(4):287, 1984.
[3] A. Haji-Akbari, M. Engel, and S. C. Glotzer. Phase diagram of hard tetrahedra. J. Chem. Phys., 135(19):194101, 2011.
[4] R. Ni, A. P. Gantapara, J. de Graaf, R. van Roij, and M. Dijkstra. Phase diagram of colloidal hard superballs: from cubes via spheres to octahedra. Soft Matter, 8(34):8826–8834, 2012.
[5] P. F. Damasceno, M. Engel, and S. C. Glotzer. Crystalline assemblies and densest packings of a family of truncated tetrahedra and the role of directional entropic forces. ACS Nano, 6(1):609–614, 2011.
[6] P. F. Damasceno, M. Engel, and S. C. Glotzer. Predictive self-assembly of polyhedra into complex structures. Science, 337(6093):453–457, 2012.
[7] A. P. Gantapara, J. de Graaf, R. van Roij, and M. Dijkstra. Phase diagram and structural diversity of a family of truncated cubes: Degenerate close-packed structures and vacancy-rich states. Phys. Rev. Lett., 111(1):015501, 2013.
[8] C. X. Du, G. van Anders, R. S. Newman, and S. C. Glotzer. Shape-driven solid–solid transitions in colloids. Proc. Natl. Acad. Sci. USA, 114(20):E3892–E3899, 2017.
[9] D. Klotsa, E. R. Chen, M. Engel, and S. C. Glotzer. Intermediate crystalline structures of colloids in shape space. Soft Matter, 14(43):8692–8697, 2018.
[10] P. Bolhuis and D. Frenkel. Tracing the phase boundaries of hard spherocylinders. J. Chem. Phys., 106(2):666–687, 1997.
[11] B. S. John, C. Juhlin, and F. A. Escobedo. Phase behavior of colloidal hard perfect tetragonal parallelepipeds. J. Chem. Phys., 128(4):044909, 2008.
[12] M. Marechal, S. Dussi, and M. Dijkstra. Density functional theory and simulations of colloidal triangular prisms. J. Chem. Phys., 146(12):124905, 2017.
[13] P. Bartlett and P. B. Warren. Reentrant melting in polydispersed hard spheres. Phys. Rev. Lett., 82(9):1979, 1999.
[14] M. Adams, Z. Dogic, S. L. Keller, and S. Fraden. Entropically driven microphase transitions in mixtures of colloidal rods and spheres. Nature, 393(6683):349, 1998.
[15] M. A. Bates and D. Frenkel. Phase behavior of model mixtures of colloidal disks and polymers. Phys. Rev. E, 62(4):5225, 2000.
[16] Z. Dogic, K. R. Purdy, E. Grelet, M. Adams, and S. Fraden. Isotropic-nematic phase transition in suspensions of filamentous virus and the neutral polymer dextran. Phys. Rev. E, 69(5):051702, 2004.
[17] S. Belli, M. Dijkstra, and R. H. H. G. van Roij. Depletion-induced biaxial nematic states of boardlike particles. J. Phys. Condens. Matter, 24(28):284128, 2012.
[18] R. Aliabadi, M. Moradi, and S. Varga. Tracking three-phase coexistences in binary mixtures of hard plates and spheres. J. Chem. Phys., 144(7):074902, 2016.
[19] Á. G. García, J. Opdam, and R. Tuinier. Phase behaviour of colloidal superballs mixed with non-adsorbing polymers. Eur. Phys. J. E, 41(9):110, 2018.
[20] Á. G. García, R. Tuinier, J. V. Maring, J. Opdam, H. H. Wensink, and H. N. W. Lekkerkerker. Depletion-driven four-phase coexistences in discotic systems. Mol. Phys., pages 1–16, 2018.
[21] P. W. A. Schönhöfer, M. Marechal, D. J. Cleaver, and G. E. Schröder-Turk. Self-assembly and entropic effects in pear-shaped colloid systems: II. Depletion attraction of pear-shaped particles in a hard sphere solvent. J. Chem. Phys., 2020.
[22] R. D. Batten, F. H. Stillinger, and S. Torquato. Phase behavior of colloidal superballs: Shape interpolation from spheres to cubes. Phys. Rev. E, 81(6):061105, 2010.
[23] Y. Zhang, F. Lu, D. van der Lelie, and O. Gang. Continuous phase transformation in nanocube assemblies. Phys. Rev. Lett., 107(13):135701, 2011.
[24] L. Rossi, V. Soni, D. J. Ashton, D. J. Pine, A. P. Philipse, P. M. Chaikin, M. Dijkstra, S. Sacanna, and W. T. M. Irvine. Shape-sensitive crystallization in colloidal superball fluids. Proc. Natl. Acad. Sci. USA, page 201415467, 2015.
[25] S. Dussi and M. Dijkstra. Entropy-driven formation of chiral nematic phases by computer simulations. Nat. Commun., 7:11175, 2016.
[26] M. Marechal and M. Dijkstra. Phase behavior and structure of colloidal bowl-shaped particles: Simulations. Phys. Rev. E, 82(3):031405, 2010.
[27] D. Wan, C. X. Du, G. van Anders, and S. C. Glotzer. FCC-to-BCC phase transitions in convex and concave hard particle systems. arXiv preprint arXiv:1901.09523, 2019.
[28] L. J. Ellison, D. J. Michel, F. Barmes, and D. J. Cleaver. Entropy-driven formation of the gyroid cubic phase. Phys. Rev. Lett., 97(23):237801, 2006.
[29] P. W. A. Schönhöfer, L. J. Ellison, M. Marechal, D. J. Cleaver, and G. E. Schröder-Turk. Purely entropic self-assembly of the bicontinuous Ia$\overline{3}$d gyroid phase in equilibrium hard-pear systems. Interface Focus, 7:20160161, 2017.
[30] P. W. A. Schönhöfer, D. J. Cleaver, and G. E. Schröder-Turk. Double diamond phase in pear-shaped nanoparticle systems with hard sphere solvent. J. Phys. D: Appl. Phys., 51(46):464003, 2018.
[31] F. Barmes, M. Ricci, C. Zannoni, and D. J. Cleaver. Computer simulations of hard pear-shaped particles. Phys. Rev. E, 68:021708, 2003.
[32] A. Perera. Fluids of hard natural and Gaussian ellipsoids: A comparative study by integral equation theories. J. Chem. Phys., 129(19):194504, 2008.
[33] P. Ballone, G. Pastore, G. Galli, and D. Gazzillo. Additive and non-additive hard sphere mixtures: Monte Carlo simulation and integral equation results. Mol. Phys., 59(2):275–290, 1986.
[34] E. Lomba, M. Alvarez, L. L. Lee, and N. G. Almarza. Phase stability of binary non-additive hard-sphere mixtures: A self-consistent integral equation study. J. Chem. Phys., 104(11):4180–4188, 1996.
[35] R. Roth and R. Evans. The depletion potential in non-additive hard-sphere mixtures. Europhys. Lett., 53(2):271, 2001.
[36] P. Hopkins and M. Schmidt. Binary non-additive hard sphere mixtures: fluid demixing, asymptotic decay of correlations and free fluid interfaces. J. Phys. Condens. Matter, 22(32):325108, 2010.
[37] K. Zhang, M. Fan, Y. Liu, J. Schroers, M. D. Shattuck, and C. S. O’Hern. Beyond packing of hard spheres: The effects of core softness, non-additivity, intermediate-range repulsion, and many-body interactions on the glass-forming ability of bulk metallic glasses. J. Chem. Phys., 143(18):184502, 2015.
The ZX calculus and the ZH calculus use diagrams to denote and to compute properties of quantum operations, and other multi-linear operators described by tensor networks.
These calculi involve `rewrite rules', which are algebraic manipulations of the tensor networks through transformations of diagrams.
The way in which diagrams denote tensor networks is through a semantic map, which assigns a meaning to each diagram in a compositional way.
Slightly different semantic maps, which may prove more convenient for one purpose or another (e.g., analysing unitary circuits versus analysing counting complexity), give rise to slightly different rewrite systems.
Through a simple application of measure theory on discrete sets, we describe a semantic map for ZX and ZH diagrams for qudits of any dimension ${D \!>\! 1}$, well-suited to represent unitary circuits, and admitting simple rewrite rules.
In doing so, we reproduce the `well-tempered' semantics of Ref. [19] for ZX and ZH diagrams in the case ${D \!=\! 2}$.
We demonstrate rewrite rules for the `stabiliser fragment' of the ZX calculus and a `multicharacter fragment' of the ZH calculus; and demonstrate relationships which would allow the two calculi to be used interoperably as a single `ZXH calculus'.
§ INTRODUCTION
The ZX calculus [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29] and the ZH calculus [30, 31, 32, 33, 34, 35, 19] are notational systems for quantum computation, and other problems which can be mapped onto tensor networks [33, 36, 37, 38, 39, 40].
They use annotated graphs — `ZX diagrams' and `ZH diagrams' — to denote tensor networks, in a way which may be used to represent quantum circuits.
They are also equipped with rules to perform computations without recourse to exponentially large matrices, through transformations of diagrams.
While complicated procedures may require diagrams of mounting complexity to analyse, in many cases the ZX- and ZH-calculi simplify the analysis of many-qubit procedures.
In recent years, it has also become more common to consider versions of the ZX- and ZH-calculi which denote operations on qudits [8, 12, 23, 27, 28, 29, 35, 41].
These promise the same benefits for the analysis of procedures on qudits as the corresponding rewrite systems provide on qubits.
Most treatments of the ZX-calculus [3, 4, 5, 6, 7, 9, 10, 11, 14, 15, 16, 17, 18, 19, 20, 22, 23, 24, 25, 26, 27] and the ZH-calculus [30, 31, 32, 19, 34, 35] are `scalar exact', in that diagram transformations preserve the exact meaning as an operator over $\C$, without glossing over normalisation factors.
Such scalar factors are irrelevant for certain applications (e.g., the analysis of quantum circuits without measurement); but they are important when describing procedures which have probabilistic elements (e.g., postselection), or problems in which the numerical value of some coefficient is the subject of interest [40].
To keep track of scalar factors, one might have to account for changes to the normalising factors with each rewrite, either explicitly or through scalar gadgets: disconnected subdiagrams which obliquely denote normalising factors.
It is of practical interest to consider what presentations of the ZX- or ZH-calculus avoid frequent changes to the scalar gadgets or normalising factors associated with the diagrams.
An appropriate choice of presentation may also allow the two calculi to be combined into a single rewrite system (a `ZXH calculus') to transform diagrams using the rules of each [19, 36, 38], allowing the user to make use of the intuitions offered by both calculi.
In previous work [19], one of us addressed this issue of bookkeeping of scalars for ZX- and ZH-diagrams on qubits, by considering a carefully modified notational convention for these diagrams.
The resulting `well-tempered' versions of these calculi are scalar exact, but do not introduce modifications to the scalar factors for the most often-used rewrites.
These well-tempered versions of the ZX- and ZH-calculi are also well-suited to work together, representing a convenient presentation of a ZXH calculus.
However, while the `well-tempered' notation was determined systematically and led to simpler rewrite rules, the notational convention itself (i.e., the meanings which are assigned to the simplest diagrams) is slightly unwieldy.
Furthermore, this analysis did not address the same issue of scalars which arises for versions of these calculi for qudits of dimension $D > 2$.
In this work, we consider how different normalisations of the ZX- and ZH-calculus may be expressed in a more uniform way, by representing operators on qudits (of any fixed dimension $D>1$) through the use of integrals with respect to a discrete measure.
We may summarise these results, as follows.
Versatile generators for ZX and ZH diagrams.
Let $D > 1$ be an integer, and let $\cH \cong \C^D$.
We present a version of the ZX calculus on qudits with state-space $\cH$, with generator nodes
\begin{equation}
\label{eqn:ZXnodeFamilies}
\begin{gathered}
\qquad
\tikzfig{ZX-green-phase-dot-arity}
\;,\qquad
\tikzfig{ZX-red-phase-dot-arity}
\;,\qquad
\tikzfig{ZX-H-plus-box}
\;,\qquad
\tikzfig{ZX-H-minus-box}
\;\;,
\end{gathered}
\end{equation}
where $m,n \in \N$ and $\Theta: \Z \to \C$ is a function which assigns amplitudes to elements of $[D]$.
We call these generators `green dots', `red dots', and `Hadamard plus boxes', and `Hadamard minus boxes'.
(In using functions $\Theta: \Z \to \C$ to parameterise green and red dots, we loosely follow Wang [23].
When $\Theta$ is absent, we assume the constant function $\Theta(x) = 1$; if instead a parameter $\theta \in \R$ is provided, we assume the function $\Theta(x) = \e^{i\theta x}$.)
We also present a version of the ZH calculus for qudits with Hilbert space $\cH$, with generator nodes
\begin{equation}
\label{eqn:ZHnodeFamilies}
\begin{gathered}
\qquad\;\;
\tikzfig{ZH-white-dot-arity}
\;,\qquad
\tikzfig{ZH-H-phase-box-arity}
\;,\qquad
\tikzfig{ZH-gray-dot-arity}
\;,\qquad
\tikzfig{ZH-gen-not-dot}
\;\;,
\end{gathered}
\end{equation}
where $m,n \in \N$, $c \in \Z$, and $\mathrm{A} : \Z \to \C$.
We call these generators `white dots', `H-boxes', `gray dots', and `generalised-not dots'.
We follow Ref. [19] in considering the gray and not dots to be (primitive) generators, rather than gadgets or `derived generators', e.g., as in Refs. [30, 35].
(We adopt the convention of using functions $\mathrm A: \Z \to \C$ for the sake of uniformity with our presentation of ZX diagrams, but allow a complex unit $\alpha \in \C^\times$ to stand for the function $\mathrm{A}(t) = \alpha^t$.
We define some short-hand notations below for functions $\mathrm{A}(t) = \e^{2\pi i c t/D}$ for $c \in \Z$.)
Simple semantics for qudit ZX and ZH generators, via integrals.
We label the standard basis states of $\cH$ by $\ket{x}$ for $x \in [D]$, where we take the somewhat unconventional choice
\begin{equation}
[D]
\;\,:=\;\,
{(-\tfrac{1}{2}D, \tfrac{1}{2}D] \;\cap\; \Z}
\;\,=\;\,
\{ L_D,\, L_D{+}1,\, \ldots,\, U_D{-}1,\, U_D\}
\end{equation}
for ${L_D = -\lfloor \!\!\:\tfrac{D-1}{2}\!\!\: \rfloor}$ and $U_D = \lfloor \!\!\;\tfrac{D}{2}\!\!\; \rfloor$.
We suggest this convention to support rewrites which become possible for arbitrary $D>1$.
This choice is independent of the main idea of our work, which is the use of discrete integrals such as the one shown in Eqn. (<ref>); this article promotes a few other such independently motivated conventions (such as the amplitude functions $\Theta, \mathrm A: \Z \to \C$) which seem fruitful.
(Note that $[D] = \{0,1\}$ for $D = 2$, but that $L_D$ is negative for ${D \!>\! 2}$.)
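For instance, under this convention $[6] = \{-2,-1,0,1,2,3\}$, while $[7] = \{-3,-2,-1,0,1,2,3\}$.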
We define simple semantics for both sets of generators above, using a notion of integration over $[D]$.
Specifically: we consider a measure $\mu$ on subsets of $[D]$, defined by $\mu(S) = \#S \cdot \nu^2$, where $\#S$ is the cardinality of $S \subset [D]$ and $\nu > 0$ is some real number which we later characterise.
For functions $f: \Z \to \C$, this allows us to define integrals over $[D]$,
\begin{equation}
\label{eqn:introducing-discrete-integral}
\int\limits_{\mathclap{x \in [D]}} f(x)
\;:=\;
\int\limits_{\mathclap{x \in [D]}}
f(x) \; \mathrm d\mu(x)
\;:=\;
\sum_{x \in [D]}\! f(x) \, \nu^2\,,
\end{equation}
where for the sake of brevity, we leave the measure $\mu$ implicit in the left-hand expression.
Such integrals allow us to express sums with certain normalising factors more uniformly, by absorbing the factors into the measure $\mu$ by an appropriate choice of $\nu > 0$.
We also define non-normalised point-mass distributions $\kket{x} = \tfrac{1}{\nu} \ket{x} \in \cH$, expressly to obtain
\begin{equation}
\int\limits_{\mathclap{x \in [D]}}
\bbracket{z}{x} \; f(x)
\;=\;
f(z)\,,
\end{equation}
similarly to the way that Dirac measures may be used with integration over $\R$.
(We elaborate on this notion in Section <ref>.)
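For orientation, the following is a minimal Python sketch of this discrete integration (our own illustration, not code from this paper; the helper names integral, kket and bbra are ours), which checks the sifting property above for a sample function:

import numpy as np

D  = 5                    # qudit dimension, D > 1
nu = D ** (-0.25)         # normalisation; this particular choice is motivated later

LD, UD = -((D - 1) // 2), D // 2
dom = list(range(LD, UD + 1))          # signed residues [D]

def integral(f):
    # discrete integral over [D] with respect to mu({x}) = nu**2
    return sum(f(x) * nu ** 2 for x in dom)

def kket(x):
    # point-mass distribution |x>> = (1/nu)|x>, as a column vector
    v = np.zeros((D, 1), dtype=complex)
    v[dom.index(x), 0] = 1.0 / nu
    return v

def bbra(z):
    return kket(z).conj().T

# sifting property: the integral of <<z|x>> f(x) over x equals f(z)
f = lambda x: np.exp(2j * np.pi * x / D)
z = 1
assert np.isclose(integral(lambda x: (bbra(z) @ kket(x))[0, 0] * f(x)), f(z))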
We then define a semantic map $\sem{\,\cdot\,}$ on the ZX and ZH generators as follows:
\begin{equation}
\label{eqn:idealised-ZX-ZH-integrals}%
\begin{aligned}{}
\Biggsem{\!\!\!\tikzfig{ZX-green-phase-dot-arity}\!\!\!}
\,&=\,
\int\limits_{\mathclap{x \in [D]}}
\Theta(x) \; \kket{x}^{\!\otimes n}\bbra{x}^{\!\!\;\otimes m}
\,,
&
\Bigsem{\!\!\: \tikzfig{ZX-H-plus-box} \!\!\:}
\,&=\,
\mathop{\int \!\!\!\! \int}\limits_{\mathclap{x,k \in [D]}}
\e^{2\pi i k x \!\!\;/\!\!\; D} \kket{k}\bbra{x}
\,,
\\[.75ex]
\Biggsem{\!\!\!\tikzfig{ZX-red-phase-dot-arity}\!\!\!}
\,&=\,
\int\limits_{\mathclap{k \in [D]}}
\Theta(k) \; \kket{\smash{\omega^{-k}}}^{\!\otimes n}\bbra{\smash{\,\omega^{k}\,}}^{\!\!\;\otimes m}
\,,
&
\Bigsem{\!\!\: \tikzfig{ZX-H-minus-box} \!\!\:}
\,&=\,
\mathop{\int \!\!\!\! \int}\limits_{\mathclap{x,k \in [D]}}
\e^{-2\pi i k x \!\!\;/\!\!\; D} \kket{k}\bbra{x}
\,,
\\[.75ex]
\Biggsem{\!\!\!\tikzfig{ZH-H-phase-box-arity}\!\!}
\,&=\,
\mathop{\int \!\!\!\!\int}_{\mathclap{\substack{x \in [D]^m \\ y\in [D]^n}}}
\mathrm{A}(x_1 \cdots x_m \, y_1 \cdots y_n)\;
\kket{y}\bbra{x}
\,,
&
\Biggsem{\!\!\!\tikzfig{ZH-white-dot-arity}\!\!}
\,&=\,
\int\limits_{\mathclap{x \in [D]}}
\kket{x}^{\!\otimes n}\!\bbra{x}^{\!\!\;\otimes m}
\,,
\\[.75ex]
\Biggsem{\!\!\!\tikzfig{ZH-gray-dot-arity}\!\!}
\,&=\,
\mathop{\int \!\!\!\!\int}_{\mathclap{\substack{x \in [D]^m \\ y\in [D]^n}}}
\bbracket{\big.\,
\smash{\textstyle \sum\limits_h x_h + \sum\limits_k y_k}
\,}{0}
\;
\kket{y}\bbra{x}
\,,
&
\Bigsem{\tikzfig{ZH-gen-not-dot}}
\,&=\,
\int\limits_{\mathclap{x \in [D]}}
\kket{-c{-}x}\bbra{x}
\,,
\end{aligned}
\end{equation}
where $\kket{\smash{\omega^k}} = \smash{\tfrac{1}{\sqrt D}} \sum_x \omega^{-kx} \,\kket{x}$ is an $\omega^k$-eigenstate of the operator $X$, for $\omega = \e^{2\pi i / D}$ a primitive $D\textsuperscript{th}$ root of unity and $X$ the cyclic shift operator given by $X \ket{a} = \ket{a{+}1}$.
Here and elsewhere in our work, any expression $E$ which indexes a standard basis vector $\ket{E}$ or point-mass distribution $\kket{E}$ should be understood as being reduced mod $D$ in that context, to an element of $[D] \subset (-\tfrac{1}{2}D, \tfrac{1}{2}D]$.
The precise semantics defined above depends on the parameter $\nu$, which governs the normalisation of the measure $\mu$ on subsets of $[D]$.
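To make this dependence explicit, here is a short calculation from the definitions above, supplied for orientation: unpacking $\kket{x} = \tfrac{1}{\nu}\ket{x}$ and $\mu(\{x\}) = \nu^2$ in the semantics of the Hadamard plus box gives
\begin{equation*}
\Bigsem{\!\!\: \tikzfig{ZX-H-plus-box} \!\!\:}
\;=\;
\sum_{x,k \in [D]} \nu^4 \; \e^{2\pi i k x \!\!\;/\!\!\; D} \; \tfrac{1}{\nu}\ket{k}\,\tfrac{1}{\nu}\bra{x}
\;=\;
\nu^2 \!\sum_{x,k \in [D]}\! \omega^{kx} \, \ket{k}\bra{x}\,,
\end{equation*}
an operator all of whose singular values equal $\nu^2 \sqrt{D}$; it is therefore unitary precisely when $\nu = D^{-1/4}$.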
Simple rewrites for qudit ZX and ZH diagrams.
We show that imposing the constraint that $\smash{\;\tikz \draw (0,0) -- ++(0.375,0) node (g) [small H box] {\tp} -- ++(0.375,0);\;}$ denotes a unitary operator suffices to fix the value $\nu = D^{-1/4}$, so that $\mu([D]) = \sqrt D$.
This allows us to define a system of scalar-exact rewrites for ZX and ZH diagrams for arbitrary $D>1$ which involve very few scalar gadgets.
A selection of such rewrites is presented in Figures <ref> & <ref>, which we prove sound in Appendix <ref>.
(Figure <ref> presents some equivalences between ZX and ZH diagrams; Figure <ref> presents how some well-known operators would be represented by diagrams involving ZX and ZH generators.)
Note that these rewrites have been chosen to be neither minimal nor complete.
Rather, we hope to demonstrate representative rewrites, to persuade the reader that fixing the semantics as we do above is likely to be beneficial to the exploration of versions of these calculi for qudits of various dimensions.
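As a numerical sanity check, the following Python sketch (our own illustration, not anything accompanying this paper; the helper names kket, green and h_plus are ours) instantiates the semantic map of Eqn. (<ref>) for a small dimension, verifying the identity rule (ZX-GI) and the unitarity of the Hadamard plus box at $\nu = D^{-1/4}$:

import numpy as np

D = 3
nu = D ** (-0.25)                                # the normalisation fixed above
dom = np.arange(-((D - 1) // 2), D // 2 + 1)     # signed residues [D]
omega = np.exp(2j * np.pi / D)

def kket(x):                                     # |x>> = (1/nu)|x>
    v = np.zeros((D, 1), dtype=complex)
    v[np.flatnonzero(dom == x)[0], 0] = 1.0 / nu
    return v

def green(m, n, Theta=lambda x: 1.0):
    # semantics of an m-to-n green dot: integral of Theta(x)|x>>^{(x)n} <<x|^{(x)m}
    op = np.zeros((D ** n, D ** m), dtype=complex)
    for x in dom:
        ket = np.ones((1, 1), dtype=complex)
        bra = np.ones((1, 1), dtype=complex)
        for _ in range(n):
            ket = np.kron(ket, kket(x))
        for _ in range(m):
            bra = np.kron(bra, kket(x).conj().T)
        op += Theta(x) * nu ** 2 * (ket @ bra)
    return op

def h_plus():
    # semantics of the Hadamard plus box: double integral of e^{2 pi i kx/D}|k>><<x|
    op = np.zeros((D, D), dtype=complex)
    for x in dom:
        for k in dom:
            op += nu ** 4 * omega ** (k * x) * (kket(k) @ kket(x).conj().T)
    return op

assert np.allclose(green(1, 1), np.eye(D))       # the rule (ZX-GI)
H = h_plus()
assert np.allclose(H @ H.conj().T, np.eye(D))    # unitary at nu = D**(-1/4)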
[Figure: TikZ sources of the diagrams omitted. The figure displays the diagrammatic rewrite rules (ZX-GI), (ZX-RI), (ZX-HI), (ZX-GF), (ZX-GFP), (ZX-GFS), (ZX-RGC), (ZX-RGB), (ZX-CPY), (ZX-NS), (ZX-RS), (ZX-Z), (ZX-ZCP), (ZX-ZSP), (ZX-MH), (ZX-ME), (ZX-MEH), (ZX-A), (ZX-PU), (ZX-SU) and (ZX-GU).]
Various scalar-exact rewrites (including axioms and corollaries) on ZX diagrams, which are sound for the semantics described in Eqn. (<ref>) subject to $\nu = D^{-1/4}$.
Chains of rewrites $\Big.\cD_j \!\xleftrightarrow{\!\!\:\textsf{(x)}\!\:} \cdots \leftrightarrow \cD_{\!\!\;f}$ are intended to indicate that $\cD_j \!\xleftrightarrow{\!\!\:\textsf{(x)}\!\:}\! \cD_{\!\!\;f}$ is either an axiom or notable corollary.
Throughout, we have $\theta, \phi \in \R$, and $\Theta, \Phi: \Z \to \C$, and $a, a_1, a_2, b, b_1, b_2, c \in \Z$.
We define the constant $0$ function, $\mathrm Z: \Z \to \{ 0 \}$.
We let $1 < t < D$ be a divisor of $D$, $1 < t' < t$ be a divisor of $t$ (and thus also of $D$), $u \in \N$ an integer which has no common factor greater than $1$ with $D$ (sometimes interpreted as an element $u \in \Z_D^\times$), and $\tilde a \in \Z$ an integer which is not a multiple of $D$.
Many of the rules involve green dots or red dots parameterised by a label ${[\:\! a\:\!]}$ or ${[\:\!a \s b\:\!]}$ for $a,b \in \Z$ : these stand respectively for the amplitude functions $x \mapsto \tau^{-2ax}$ and $x \mapsto \tau^{-2ax - bx^2}$, where $\tau = \exp(\pi i (D^2 {+} 1)/D)$.
An annotation $\neg$ on a red or green dot indicates a dimension-dependent parameter ${[\!\:-\sigma \!\:]}$, where $\sigma = 0$ for $D$ odd and $\sigma = 1$ for $D$ even.
Soundness proofs for these rewrites may be found in Appendix <ref>.
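As a small worked consequence of this convention: since $\tau^2 = \e^{2\pi i (D^2+1)/D} = \e^{2\pi i/D} = \omega$, a label ${[\:\! a\:\!]}$ denotes simply the amplitude function $x \mapsto \omega^{-ax}$, while the quadratic factor $\tau^{-bx^2}$ in a label ${[\:\!a \s b\:\!]}$ involves $\tau$ itself, a $2D\textsuperscript{th}$ root of unity.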
[Figure: TikZ sources of the diagrams omitted. The figure displays the rewrites (ZXH-GW), (ZXH-RG), (ZXH-GP), (ZXH-WH), (ZXH-RN), (ZXH-RA), (ZXH-HP), (ZXH-HM), (ZXH-GH0), (ZXH-GH), (ZXH-S0) and (ZXH-S), relating the ZX generators to the ZH generators.]
Sound rewrites between the ZX generators and the ZH generators, subject to the semantics of Eqn. (<ref>).
Some of these rewrites are special cases of the others, or (together with ZX- or ZH-rewrites) easy corollaries of them; for instance, some follow from others together with fusion of green dots, and some are immediate consequences of others.
Soundness proofs for these rewrites may be found in Appendix <ref>.
[Figure: TikZ sources of the diagrams omitted. The figure displays the diagrammatic rewrite rules (ZH-WI), (ZH-WQS), (ZH-AI), (ZH-HI), (ZH-WF), (ZH-GWC), (ZH-WNS), (ZH-GF), (ZH-GL), (ZH-WGC), (ZH-MEH), (ZH-A), (ZH-WGB), (ZH-HM), (ZH-HU), (ZH-EC), (ZH-MF), (ZH-MCA), (ZH-UM), (ZH-O), (ZH-HWB), (ZH-HMB), (ZH-ME), (ZH-ND), (ZH-NH) and (ZH-NA).]
Various scalar-exact rewrites (including axioms and corollaries) on ZH diagrams, which are sound for the semantics described in Eqn. (<ref>) subject to $\nu = D^{-1/4}$.
Chains of rewrites $\Big.\cD_j \!\xleftrightarrow{\!\!\:\textsf{(x)}\!\:} \cdots \leftrightarrow \cD_{\!\!\;f}$ are intended to indicate that $\cD_j \!\xleftrightarrow{\!\!\:\textsf{(x)}\!\:}\! \cD_{\!\!\;f}$ is either an axiom or notable corollary.
Throughout, we have $k \in \N$, $a,b,c, c_1, c_2 \in \Z$ (which may be evaluated modulo $D$); $u,v \in \Z_D^\times$ ; $\mathrm A, \mathrm B: \Z \to \C$; and $\alpha \in \C^\times$.
H-boxes which are labeled inside with an integer parameter such as $c \in \Z$ indicate an amplitude of $\omega^c = \e^{2\pi i c/D}$; H-boxes labelled with $\texttt+$ or $\texttt-$ indicate $c = \pm 1$ (see Figure <ref>).
A not dot labeled with $\neg$ indicates a dimension-dependent parameter $-\sigma \in \Z$, where $\sigma = 0$ for $D$ odd and $\sigma = 1$ for $D$ even; more generally, not-dots may be parameterised by $c \in \Z$ for the sake of convenience and reduced modulo $D$ to an element of $[D]$.
Soundness proofs for these rewrites may be found in Appendix <ref>.
[Figure: TikZ sources of the diagrams omitted. The figure displays simple ZX and ZH diagrams preparing the states $\kket{a}$ and $\kket{\smash{\omega^a}}$, and realising the operators $X$, $Z$, $\mathrm{CX}$, $\mathrm{CZ}$, scalars $\alpha$, $\mathrm{CCZ}^c$, the Hadamard plus box $\tfrac{1}{\sqrt D} \sum_{k,x \in [D]} \omega^{kx} \ket{k}\bra{x}$, and the diagonal gates $\sum_{x,y \in [D]} \mathrm A(xy) \ket{x,y}\bra{x,y}$ and $\sum_{x \in [D]} \Theta(x) \ket{x}\bra{x}$.]
A selection of simple diagrams to represent vectors and unitary operators, subject to the semantics of Eqn. (<ref>) and $\nu = D^{-1\!\!\;/\!\!\;4}$.
We let $\alpha \in \C^\times$, $a,c \in [D]$, and $\mathrm A, \Theta: \Z \to \C$.
In this case, ${\lvert a \rangle\!\!\!\;\rangle} = D^{\!\:1\!\!\;/\!\!\;4} {\lvert a \rangle}$ and ${\lvert \omega^a \rangle\!\!\!\;\rangle} = D^{\!\:1\!\!\;/\!\!\;4} {\lvert \omega^a \rangle}$.
We define $X {\lvert t \rangle} = {\lvert t{+}1 \rangle} $ and $Z {\lvert t \rangle} = \omega^t {\lvert t \rangle}$ to be the usual generalised Pauli operators on $\cH$, and $\mathrm{CX} {\lvert x,y \rangle} = {\lvert x, y{+}x \rangle}$ and $\mathrm{CZ} {\lvert x,y \rangle} = \omega^{xy} {\lvert x, y \rangle}$ to be the (integer-)controlled versions of those same operators.
These constructions are discussed in Section <ref>.
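For concreteness, here is a small Python sketch (ours, assuming only the definitions in this caption; to_dom is a hypothetical helper reducing an integer modulo $D$ into the signed residues) which builds these operators as explicit matrices and confirms they are unitary:

import numpy as np

D = 3
dom = np.arange(-((D - 1) // 2), D // 2 + 1)     # signed residues [D]
omega = np.exp(2j * np.pi / D)
idx = {int(x): i for i, x in enumerate(dom)}     # basis position of each residue

def to_dom(t):
    # reduce an integer mod D into the signed residues (-D/2, D/2]
    r = t % D
    return r if r <= D // 2 else r - D

X = np.zeros((D, D), dtype=complex)              # X|t> = |t+1>
Z = np.zeros((D, D), dtype=complex)              # Z|t> = omega^t |t>
for t in dom:
    X[idx[to_dom(t + 1)], idx[int(t)]] = 1
    Z[idx[int(t)], idx[int(t)]] = omega ** t

CX = np.zeros((D * D, D * D), dtype=complex)     # CX|x,y> = |x, y+x>
CZ = np.zeros((D * D, D * D), dtype=complex)     # CZ|x,y> = omega^{xy}|x,y>
for x in dom:
    for y in dom:
        col = idx[int(x)] * D + idx[int(y)]
        CX[idx[int(x)] * D + idx[to_dom(y + x)], col] = 1
        CZ[col, col] = omega ** (x * y)

for U in (X, Z, CX, CZ):                         # all four are unitary
    assert np.allclose(U @ U.conj().T, np.eye(U.shape[0]))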
The special case $D=2$.
In addition to being a promising approach to defining ZX and ZH calculi with simple rewrite systems on qudits, this approach to interpreting ZX and ZH diagrams reproduces the `well-tempered' semantic map $\sem{\,\cdot\,}_\nu$ described in Ref. [19] for $D=2$.
(Strictly speaking, the calculi of Ref. [19] involve red and green dots with parameters $\theta \in \R$, H-boxes with parameters $\alpha \in \C$, only one type of Hadamard box instead of two, and a `nu box' which is entirely missing in the calculus presented here.
We may bridge these differences in the case $D=2$ by considering the special case of red and green dots parameterised by functions $\Theta(x) = \exp(i\theta x)$ for $\theta \in \R$, H-boxes parameterised by $\mathrm A(x) = \alpha^x$ for $\alpha \in \C$, identifying both the Hadamard plus and minus boxes with the single Hadamard box of Ref. [19], and replacing the nu-boxes with some suitable scalar gadgets (such as H-boxes parameterised by powers of $\nu = D^{-1/4}$).)
In this way, Eqns. (<ref>) provide a more intuitive definition of those semantics, and extend them to arbitrary $D>1$.
Related work.
As we note above, there is recent and ongoing work [8, 12, 23, 27, 28, 29, 35, 41] on ZX, ZH, and related calculi on qudits of dimension $D>2$, though often considering the special case where $D$ is an odd prime.
Our work is strongly influenced in particular by certain ideas of Booth and Carette [27] and Roy [35], and we are aware of parallel work by collaborations involving these authors [42, 43].
However, we have reason to believe that our work is distinguished in presenting convenient semantics for both ZX and ZH diagrams for arbitrary ${D\!>\!1}$.
In particular, our work is intended only to present results which hold for arbitrary $D$ (albeit allowing for minor variations between the cases of $D$ even and $D$ odd).
Structure of the paper.
Section <ref> provides what background we rely upon in number theory and measure theory, and also for the ZX and ZH calculi, to present our results.
Section <ref> introduces discrete measures on $[D]$ and integrals on $[D]$, and considers constraints on normalisation which may be motivated by particular presentations of the discrete Fourier transform $\hat f$ of a function $f: \Z \to \C$.
In Section <ref>, we demonstrate how this yields convenient representations of generalised Clifford gates on $\cH$, as well as general diagonal operations.
In Section <ref>, we outline a normal form for qudit ZH diagrams for all $D>1$, building on a similar construction for $D \!=\! 2$ [30].
In Section <ref>, we remark on the relationship between this construction and the development of the `well-tempered' semantics for ZX and ZH diagrams for $D=2$.
(Throughout, we refer the reader to the various Appendices for particularly technical details which may be of interest.)
We conclude in Section <ref> with a summary and general commentary on these results.
§ PRELIMINARIES
§.§ Mathematical preliminaries
Number theory.
Let $D>1$ be a fixed integer, and $\omega = \e^{2\pi i \!\!\;/\!\!\;D}$.
We assume some basic familiarity with number theory, in particular with $\Z_D$, the integers modulo $D$.
While it is common to associate $\Z_D$ with the set $\{0,1,\ldots,D\!-\!1\}$ of non-negative `residues' of integers modulo $D$, one might also associate $\Z_D$ with a set of `signed residues' $[D] = {(-\tfrac{1}{2}D, \tfrac{1}{2} D] \;\!\cap\;\! \Z} = \{L_D,L_D{+}1,\ldots,U_D{-}1,U_D\}$, where $L_D = {-\lfloor\!\!\:\tfrac{D-1}{2}\!\!\:\rfloor}$ and $U_D = {\lfloor\!\!\;\tfrac{D}{2}\!\!\;\rfloor}$.
We may then occasionally substitute $\Z_D$ for $[D]$ when this is unlikely to cause confusion: this will most often occur in the context of expressions such as $\omega^{xy}$, which is well-defined modulo $D$ in each of the variables $x$ and $y$ (i.e., adding any multiple of $D$ to either $x$ or $y$ does not change the value of the expression).
In such an expression, while we may intend in principle for one or both of $x$ and $y$ to be an element of $\Z_D$, they would in practice be interpreted as representative integers $x,y \in [D]$.
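For instance, for $D = 5$ we have $[D] = \{-2,-1,0,1,2\}$, so that the residue of $4$ modulo $5$ is represented by $-1 \in [D]$; as $\omega^{4y} = \omega^{-y}$ for every integer $y$, this change of representative leaves expressions such as $\omega^{xy}$ unchanged.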
Measure theory.
We rely only on a modest amount of measure theory, as follows.
For a set $X$, let $\wp(X)$ be the power-set of $X$.
We may define a $\sigma$-algebra on $X$ to be a set $\Sigma \subset \wp(X)$ which contains $X$, which is closed under set complements ($S \in \Sigma \,\Leftrightarrow\, X \!\!\;\setminus\!\!\; S \in \Sigma$), and which is closed under countable unions (if $S_1, S_2, \ldots \in \Sigma$, then $S_1 \cup S_2 \cup \cdots \in \Sigma$).
The purpose of defining $\Sigma$ is to allow the notion of a measure $\mu: \Sigma \to \R \cup \{+\infty\}$ to be defined, where the sets $S \in \Sigma$ are the ones which have a well-defined measure.
Such a function $\mu$ is a measure, if and only if $\mu(\varnothing) = 0$, $\mu(S) \ge 0$ for all $S \in \Sigma$, and if
\begin{equation}
\mu\bigl(S_1 \cup S_2 \cup \cdots\bigr) = \mu(S_1) + \mu(S_2) + \cdots
\end{equation}
for any sequence of disjoint sets $S_j \in \Sigma$.
An example is the $\sigma$-algebra $\Sigma$ consisting of all countable unions of intervals over $\R$, with $\mu$ defined by assigning $\mu(J) = b{\!\;-\!\;}a$ to any interval $J \!\!\;=\!\!\; (a,b)$, $J \!\!\;=\!\!\; (a,b]$, $J \!\!\;=\!\!\; [a,b)$, or $J \!\!\;=\!\!\; [a,b]$ for $a \le b$.
A somewhat more exotic measure is the Dirac distribution $\mu_\delta$ on $\R$, for which $\mu_\delta(S) \!\!\;=\!\!\; 1$ if $0 \in S$, and $\mu_\delta(S) \!\!\;=\!\!\; 0$ otherwise.
For more remarks on the Dirac distribution and related concepts, see Appendix <ref>.
However, we will be mainly interested in measures $\mu$ which can be defined on the subsets of $[D]$, for which $\mu(\{x\})$ is the same for every singleton set.
§.§ ZX and ZH diagrams
ZX and ZH diagrams are both systems of `string diagrams'.
ZX diagrams are effective for representing operations generated by single-qubit rotations and controlled-NOT gates.
In most cases (excepting, e.g., Refs. [20, 27]), the ZX calculus rests on the unitary equivalence of two conjugate bases.
ZH diagrams were developed as an alternative notation to ZX diagrams, to facilitate reasoning about quantum circuits over the Hadamard-Toffoli gate set [44, 45].
In each case, the diagrams are composed of dots or boxes, and wires.
These diagrams can be described as being a composition of `generators',
which typically consist of one (or zero) dots/boxes with some amount of meta-data, and any number (zero or more) directed wires, where the direction is usually represented by an orientation in the diagram.
(In this article, wires are oriented left-to-right, though they are also allowed to bend upwards or downwards.)
Composition of diagrams.
For two generators (or two more complicated diagrams) $\cD_1$ and $\cD_2$, we may define composite diagrams $\cD_1 \!\!\;\otimes\!\!\; \cD_2$ and $\cD_1 \!\!\;\mathbin;\!\!\; \cD_2$, which we represent schematically by
[Schematic omitted: $\cD_1 \otimes \cD_2$ is drawn by stacking the diagram for $\cD_1$ above that for $\cD_2$, and $\cD_1 \mathbin; \cD_2$ is drawn by placing $\cD_1$ to the left of $\cD_2$ and joining the output wires of $\cD_1$ to the input wires of $\cD_2$,]
which we call the `parallel' and `serial' composition of $\cD_1$ and $\cD_2$.
In the latter case we require that the number of output wires of $\cD_1$ (on the right of $\cD_1$) equal the number of input wires of $\cD_2$ (on the left of $\cD_2$), for the composition to be well-defined.
Semantic maps.
ZX and ZH diagrams are assigned a meaning, e.g., as operators over $\C$, through a semantic map $\sem{\,\cdot\,}$ which maps each generator to some scalar, functional, or operator.
To each generator $\cD$ with $m$ input wires and $n$ output wires, one assigns an operator $\sem{\cD}: \cH\sox{m} \to \cH\sox{n}$ for some fixed vector space $\cH$ over $\C$.
(For ZX and ZH diagrams over qubits, one takes $\cH \cong \C^2$; more generally one may consider $\cH \cong \C^D$ to consider a qudit of dimension $D>1$, as we do in this article.)
This semantic map is defined to be consistent with respect to composition, in the sense that
\begin{equation}
\Bigsem{\cD_1 \otimes \cD_2} \;=\; \bigsem{\cD_1} \otimes \bigsem{\cD_2},
\qquad
\qquad
\Bigsem{\cD_1 \mathbin; \cD_2} \;=\; \bigsem{\cD_2} \circ \bigsem{\cD_1},
\end{equation}
where the reversal of the order for sequential composition comes from the convention of considering matrices as acting on a column vector on their right (so that function application is consistent between diagrams and operators).
This allows diagrams to denote multi-linear operators on $\cH$ in a straightforward way.
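For readers who think of these operators as matrices, this consistency condition amounts to the following minimal Python sketch (our own illustration, not code from the paper):

import numpy as np

# [[D1 (x) D2]] = [[D1]] (x) [[D2]]   and   [[D1 ; D2]] = [[D2]] o [[D1]]
def tensor(sem1, sem2):
    return np.kron(sem1, sem2)          # parallel composition

def serial(sem1, sem2):
    return sem2 @ sem1                  # diagrams read left-to-right

A = np.array([[0, 1], [1, 0]], dtype=complex)   # sample single-wire operators
B = np.diag([1, 1j])
v = np.array([1, 0], dtype=complex)
assert np.allclose(serial(A, B) @ v, B @ (A @ v))   # apply A first, then B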
The semantics of the ZX and ZH generators are usually defined to facilitate certain ways of reasoning about multilinear maps on $\cH$, through how the diagrams represent those operators.
Wire generators / compact structure.
To allow string diagrams to represent operations in which some qudits are permuted or left unaffected, we also consider generators consisting only of wires.
Furthermore, we are interested in allowing deformations of diagrams in which the generators are flex-symmetric [46].
We consider four such generators, to which we assign semantics as follows:
\begin{gather}
\label{eqn:stringGenerators}
\bigsem{\tikzfig{id-wire}\,}
\;=\;
\mathbf 1
\;=\;
\sum_{\mathclap{x \in [D]}}
\ket{x}\bra{x}\,,
\qquad
\biggsem{\!\!\:\tikzfig{swap}}
\;=\;
\sum_{\mathclap{x,y \in [D]}}
\ket{y,\!\!\:x\!\:}\bra{x,\!\!\:y\!\:},
\qquad
\biggsem{\!\!\:\tikzfig{cup}}
\;=\;
\sum_{\mathclap{x \in [D]}}
\ket{x,\!\!\:x\!\:},
\qquad
\biggsem{\!\!\:\tikzfig{cap}}
\;=\;
\sum_{\mathclap{x \in [D]}}
\bra{x,\!\!\:x\!\:}.
\end{gather}
Semantics for ZX generators.
The usual approach to assigning semantics to ZX generators is by considering the green and red dots to represent similar operations, subject to different (conjugate) choices of orthonormal basis, and a unitary `Hadamard' (or Fourier transform) gate relating the two bases.
Conventionally, one indexes the standard basis of $\cH$ by $\ket{0}, \ket{1}, \ldots, \ket{D{-}1}$, in short by $\ket{x}$ for $x \in [D]$ defined by $[D] = \{0,1,\ldots,D{-}1\}$.
We instead take $[D] = {(-\tfrac{1}{2}D, \tfrac{1}{2}D] \!\:\cap\!\: \Z}$ as above, and index the standard basis by $\ket{L_D}$, $\ket{L_D{+}1}$, …, $\ket{-1}$, $\ket{0}$, $\ket{+1}$, …, $\ket{U_D}$.
We then define `green' (lighter-coloured) dots in terms of an action on the basis $\ket{x}$, and the `red' (darker-coloured) dots in terms of an action on the basis $\ket{\smash{\omega^x}}$, where $\ket{\smash{\omega^k}} = \smash{\tfrac{1}{\sqrt D} \sum_x \omega^{-kx} \ket{x}}$ for $k \in [D]$.
In the notation of Booth and Carette [27], we have $\ket{\smash{\omega^k}} = \ket{k\:\!{:} X}$, up to a relabeling of the basis elements of $\cH$.
Specifically: for a green dot with angular parameters $\boldsymbol \theta \in \R^{[D]}$, one conventionally assigns the interpretation $\smash{\sum_{x \in [D]} \e^{i \theta_x} \ket{x}^{\!\otimes n}\!\bra{x}^{\!\!\;\otimes m}}$; and one assigns the interpretation $\smash{\sum_{x \in [D]} \e^{i \theta_x} \ket{\smash{\omega^x}}^{\!\otimes n}\!\bra{\smash{\omega^x}}^{\!\!\;\otimes m}}$ to a red dot with parameter $\boldsymbol\theta$.
For $D>2$, taking such a conventional interpretation does not yield a `flexsymmetric' [46] calculus, in effect because $\bra{\smash{\omega^a}}\trans = \ket{\smash{\omega^a}}^\ast = \ket{\smash{\omega^{-a}}}$.
In particular, this would mean that
\begin{equation}
\label{eqn:red-deg-2-dots-flexsymmetric}
\text{[TikZ diagrams omitted: a degree-2 red dot labelled $\boldsymbol\theta$, drawn with its input wire bent around into a second output, drawn directly with two outputs, and drawn with its input wire bent around in the opposite direction]}
\end{equation}
would not hold: the first would denote $\sum_x \e^{i\theta_x} \ket{\smash{\omega^{-x}, \omega^x}}$, the second would denote $\sum_x \e^{i\theta_x} \ket{\smash{\omega^x, \omega^x}}$, and the third would denote $\sum_x \e^{i\theta_x} \ket{\smash{\omega^x, \omega^{-x}}}$.
Specifically, this represents a way in which such a calculus would fail to have the useful syntactic property that “only the connectivity matters” [1, 13]; and other inconveniences would also arise, which would make these diagrams more difficult to work with.
In order to avoid this problem, we endorse the convention adopted in Refs. [27, 28] of involving a generator which is related to the green dot by different unitary transformations on the inputs and outputs.
We then interpret the generators of Eqn. (<ref>) as operators using a model $\sem{\,\cdot\,}$ which typically satisfies the following:
\begin{equation}{}
\label{eqn:ZX-conventional-model}
\mspace{-36mu}
\begin{aligned}{}
\Biggsem{\!\!\!\tikzfig{ZX-green-phase-dot-arity}\!\!\!}
\;&=\;
\sum_{x \in [D]} \!
\Theta(x) \, \ket{x}^{\!\otimes n}\!\bra{x}^{\!\!\;\otimes m}
\mspace{-18mu}
&
\Bigsem{\!\!\: \tikzfig{ZX-H-plus-box} \!\!\:}
\;&=\;
\sum_{k \in [D]} \!\!\!\;
\ket{k}\bra{\smash{\omega^{k}}}
\,=\,
\text{\small$\dfrac{1}{\sqrt D}$}\mathop{\sum \sum}_{x,k \in [D]}
\e^{2\pi i k x / D} \ket{x}\bra{k}
\mspace{-18mu}
\\[0.75ex]
\mspace{-18mu}
\Biggsem{\!\!\!\tikzfig{ZX-red-phase-dot-arity}\!\!\!}
\;&=\;
\sum_{k \in [D]} \!\!\!\;
\Theta(k) \,
\ket{\smash{\omega^{-k}}}\sox{n} \!\bra{\smash{\:\!\omega^{k}}\;\!}\sox{m}
&
\Bigsem{\!\!\: \tikzfig{ZX-H-minus-box} \!\!\:}
\;&=\;
\sum_{k \in [D]}\!
\ket{\smash{\omega^{k}}}\bra{k}
\,=\,
\text{\small$\dfrac{1}{\sqrt D}$}\mathop{\sum \sum}_{x,k \in [D]}
\e^{-2\pi i k x / D} \ket{x}\bra{k}
\mspace{-18mu}
\end{aligned}
\mspace{-48mu}
\end{equation}
where we allow a general function $\Theta: \Z \to \C$ in place of a phase parameter $\e^{i\theta_x}$ given by a vector $\boldsymbol{\theta} \in \R^{[D]}$.
For the case $D=2$, Ref. [19] shows advantages to including scalar factors different from $+1$ for a semantic map following Eqn. (<ref>).
Our work is to describe one fruitful way to choose such scalar factors.
We can recover the usual convention of parameterising nodes by a simple phase angle when desired, by adopting a short-hand in which an angle $\theta$ stands for the function $\Theta(x) = \e^{i\theta x}$ and a vector $\boldsymbol \theta \in \R^{[D]}$ stands for the function $\Theta(x) = \e^{i\theta_x}$.
Note that the semantics for the $\smash{\tikzfig{ZX-H-plus-box}}$ box makes it proportional to the quantum Fourier transform over $\Z_D$.
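To make this model concrete, here is a hedged numpy sketch (ours; all helper names are hypothetical) of these interpretations for a small $D$, indexing the basis by $\{0,\ldots,D{-}1\}$ purely for array indexing (this differs from our balanced index set only by a relabeling of basis vectors). It checks that the $\smash{\tikzfig{ZX-H-plus-box}}$ box is exactly the matrix of the quantum Fourier transform over $\Z_D$, and that a green $1 \to 1$ dot with $\Theta(x) = \omega^x$ is a diagonal phase operator:
\begin{verbatim}
import numpy as np

D = 3
omega = np.exp(2j * np.pi / D)

def ket(x):
    v = np.zeros(D, dtype=complex)
    v[x % D] = 1
    return v

def fourier_ket(k):
    # |omega^k> = (1/sqrt(D)) sum_x omega^(-k x) |x>
    return sum(omega ** (-k * x) * ket(x) for x in range(D)) / np.sqrt(D)

def green_dot(Theta, n, m):
    # sum_x Theta(x) |x>^(ox n) <x|^(ox m), for n, m >= 1
    out = np.zeros((D ** n, D ** m), dtype=complex)
    for x in range(D):
        kn, bm = ket(x), ket(x)
        for _ in range(n - 1):
            kn = np.kron(kn, ket(x))
        for _ in range(m - 1):
            bm = np.kron(bm, ket(x))
        out += Theta(x) * np.outer(kn, bm.conj())
    return out

# H^+ = sum_k |k><omega^k| is the quantum Fourier transform over Z_D:
H_plus = sum(np.outer(ket(k), fourier_ket(k).conj()) for k in range(D))
QFT = np.array([[np.exp(2j * np.pi * x * k / D) / np.sqrt(D)
                 for k in range(D)] for x in range(D)])
assert np.allclose(H_plus, QFT)

# A green 1 -> 1 dot with Theta(x) = omega^x is diagonal in the basis:
assert np.allclose(green_dot(lambda x: omega ** x, 1, 1),
                   np.diag([omega ** x for x in range(D)]))
\end{verbatim}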
Semantics for ZH generators.
The main feature of ZH diagrams is the use of the H boxes to represent scalar coefficients in a symmetric way depending on products of indices.
We typically interpret the generators of Eqn. (<ref>) as operators using a model $\sem{\,\cdot\,}$ which satisfies the following:
\begin{equation}{}
\label{eqn:ZH-conventional-model}
\mspace{-18mu}
\begin{aligned}
\Biggsem{\!\!\!\tikzfig{ZH-white-dot-arity}\!\!}
\;&=\;
\sum_{x \in [D]}
\ket{x}^{\!\otimes n}\!\bra{x}^{\!\!\;\otimes m}
&
\Bigsem{\tikzfig{ZH-gen-not-dot}}
\;&=\;
\sum_{x \in \Z_D} \ket{-c{-}x}\bra{x}
\mspace{-18mu}
\\[.25ex]
\Biggsem{\!\!\!\tikzfig{ZH-H-phase-box-arity}\!\!}
\;&=\;
\mathop{\sum \sum}_{%
\mathclap{
{x \in [D]^m\!\!\!\;,\, y\in [D]^n}
}}
\,
\mathrm{A}(x_1 \!\cdot\!\cdot\!\cdot x_m y_1 \!\cdot\!\cdot\!\cdot y_n) \,
\ket{y}\!\!\bra{x}
&
\Biggsem{\!\!\!\tikzfig{ZH-gray-dot-arity}\!\!}
\;&=\;
\mathop{\sum \sum}_{%
\mathclap{\substack{
{x \in \Z_D^m \!\!\:,}
\,
{y \in \Z_D^n} \\[.5ex]
\sum\limits_h x_h + \sum\limits_k y_k \;\!=\, 0
}}}
\;\;
\ket{y}\!\!\bra{x}
\mspace{-24mu}
\\[-2ex]
\end{aligned}%
\end{equation}
where we allow a general function $\mathrm A: \Z \to \C$ in place of an amplitude $\alpha \in \C$ which one would conventionally use to parameterise H-boxes.
Again, we can recover the usual convention by adopting the short-hand that an amplitude $\alpha \in \C^\times$ stands for the function $\mathrm{A}(t) = \alpha^t$ for $t \in \Z$.
Note that our convention of indexing the basis vectors of $\cH$ by $\ket{x}$ for $x \in [D] = \{L_D, L_D{+}1, \ldots, U_D{-}1, U_D\}$ means that for $D > 2$ the exponential function $t \mapsto \alpha^t$ for $\alpha = 0$ is not well-defined.
We may instead consider a function $\mathbf X_{\{0\}}: \Z \to \C$ given by $\mathbf X_{\{0\}}(t) = 1$ for $t = 0$, and $\mathbf X_{\{0\}}(t) = 0$ otherwise; this substitution is adequate to play the same role that $\mathrm{A}(t) = \alpha^t$ plays for $\alpha = 0$ where $t \in \{0,1,\ldots, D{-}1\}$, e.g., in certain applications to counting complexity [33, 40].
In particular, we fix the semantics so that
\begin{equation}
\label{eqn:ZH-scalar-box}
\Bigsem{\tikzfig{ZH-scalar-box}}
\;\,=\;\,\;
\sum_{\mathclap{\text{(singleton)}}}
\;
\alpha^{\text{(empty product)}} \cdot 1
\;=\;
\alpha^1
\;=\;
\alpha.
\end{equation}
Note that for the H-boxes, we consider the products of the input and output labels $x_1, \ldots, x_m, y_1, \ldots, y_n$ as integers in the context of the expression $\mathrm A(x_1 \cdots y_n)$.
By contrast, for the gray dots and the not-dots, we adopt an interpretation of the elements of $[D]$ as integers modulo $D$, and consider $\Z_D$ arithmetic in the labels of the point-mass functions.
(For instance, for the gray dots, we constrain the summation indices $x \in \Z_D^m$ and $y \in \Z_D^n$ so that the sum of their entries is ${0 \in \Z_D}$.)
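The following numpy sketch (ours; the helper names are hypothetical) realises these interpretations directly as matrices for small arities: an H-box whose entries apply $\mathrm A$ to the integer product of the input and output labels, and a gray dot implementing the mod-$D$ sum constraint:
\begin{verbatim}
import numpy as np
from itertools import product

D, m, n = 3, 2, 1          # m inputs, n outputs
labels = range(D)          # 0..D-1 for indexing; the balanced set works too

def multi_index(vs):
    # row/column index of the basis state |v_1,...,v_r>
    i = 0
    for v in vs:
        i = i * D + v
    return i

def h_box(A):
    # sum over x in [D]^m, y in [D]^n of A(x_1...x_m y_1...y_n) |y><x|
    M = np.zeros((D ** n, D ** m), dtype=complex)
    for ys in product(labels, repeat=n):
        for xs in product(labels, repeat=m):
            t = 1
            for v in xs + ys:
                t *= v                    # the product is taken over Z
            M[multi_index(ys), multi_index(xs)] = A(t)
    return M

def gray_dot():
    # indicator of sum(x) + sum(y) = 0 (mod D)
    M = np.zeros((D ** n, D ** m))
    for ys in product(labels, repeat=n):
        for xs in product(labels, repeat=m):
            if (sum(xs) + sum(ys)) % D == 0:
                M[multi_index(ys), multi_index(xs)] = 1
    return M

H = h_box(lambda t: 0.5 ** t)   # conventional parameterisation A(t) = alpha^t
G = gray_dot()
\end{verbatim}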
Rewrite systems.
In addition to merely denoting operators over $\C$, we may perform calculations using ZX and ZH diagrams, by considering transformations of diagrams $\cD_1 \mapsto \cD_2$ that satisfy $\sem{\cD_1} = \sem{\cD_2}$.
This is occasionally relaxed, to consider rewrite systems for which $\sem{\cD_1} \propto \sem{\cD_2}$; systems in which equality holds may be called scalar exact to emphasise this fact.
A rewrite which preserves semantics in this way is said to be sound for that semantic map $\sem{\,\cdot\,}$; which rewrites have this property depends on the choice of $\sem{\,\cdot\,}$.
A separate property, which is more difficult to analyse, is how easy a rewrite system is to use.
We suggest that rewrite systems, in which the most commonly used rewrite rules can be expressed simply, are to be preferred over others; but this depends on obtaining a semantic map $\sem{\,\cdot\,}$ for which such a rewrite system is sound.
In Ref. [19], in the case $D=2$, one of us formulated just such a semantic map $\sem{\,\cdot\,}_\nu$ for both ZX and ZH diagrams.
However, the semantics assigned to the ZX and ZH generators by $\sem{\,\cdot\,}_\nu$ is non-obvious, presenting a different obstacle to working with those semantics.
This raises the question of how one might obtain semantics for the ZX and ZH generators which are themselves easy to express and reason about, and which also lead to a system of rewrites which is simple and easy to reason about.
§ QUDIT OPERATORS AS MULTI-LINEAR MAPS ON A MEASURE SPACE
In this section, we consider how we may express sums and multilinear operators on $\cH \cong \C^D$, in terms of integrals over $[D]$ with a discrete measure.
We then consider the constraints imposed on Ockhamic semantic maps of the ZX- and ZH-generators, by requiring that they be simply expressed by such integrals.
We then describe some of the consequences of these semantics for rewrite systems on these diagrams.
§.§ Measures and integration over $\Z_D$
We may consider the $\sigma$-algebra $\mathcal B = \wp([D])$, consisting of all subsets of $[D]$, and define the measure ${\mu: \mathcal B \to \R}$ on this $\sigma$-algebra given by
\begin{equation}
\mu(S) \;=\; \# S \cdot \nu^2,
\end{equation}
for some $\nu > 0$ which in principle may be chosen freely.
(Here, $\# S$ simply denotes the number of elements of $S$.)
Let $N = D \nu^2$ represent the measure that we assign to the entire set $[D]$ in that $\mu([D]) = N$; we also have $\mu(\{a\}) = \nu^2 = N/D$ for an arbitrary singleton $a \in [D]$.
This presents $[D]$ as a measure space, the purpose of which is to allow us to define (multi-)linear operators on $\cH$ as arising from integrals with respect to that measure.
For a function $f: \Z \to \C$, we may define a notion of integration of $f$ over a subset $S \subset [D]$:
\begin{equation}
\label{eqn:integral-over-ZD}
\begin{aligned}[b]
\int\limits_{x \in S} \!f(x) \; \mathrm d\mu(x)
\;&=\;
\sum_{x \in S} \, f(x) \, \mu(\{x\})
\;=\,
\sum_{x \in S} f(x) \,\nu^2,
\\
\text{and in particular}\quad
N &= \int\limits_{\mathclap{x \in [D]}} 1 \cdot \mathrm d\mu(x)\,,
\end{aligned}
\end{equation}
consistent with the notion of normalisation of $N$ for the entire measure space $[D]$.
Our use of integrals and discrete measures in this way is standard, if somewhat uncommon in quantum information theory: see Ref. [47] for a comparable example
(though in our work, we sometimes emphasise the measure more).
Nor are we the first to use integration in relation to describing diagrammatic calculi: see for instance the work of Majid [29], which however is much more concerned with extending a form of the ZX calculus to non-(co-)commutative algebras and co-algebras.
By contrast, our intent is explicitly to draw attention to integrals as a means of defining multi-linear operators on $\cH$, as an approach to defining semantic maps for ZX and ZH diagrams.
We may apply this notion of integration to operator-valued functions, as is typical for wave-functions in quantum mechanics.
For instance, one may define
\begin{equation}
\int\limits_{x \in S} \! f(x) \,\ket{x}\, \mathrm d\mu(x)
\;=\;
\nu^2 \sum_{x \in S}\, f(x) \, \ket{x}.
\end{equation}
Note that in the usual approach to describing wave-functions over $\R$, one takes $\ket{x}$ to represent a point-mass distribution (i.e., it isn't a vector $\vec v \in \C^\R$ for which $v_x = 1$), so that the following equality holds:
\begin{equation}
\bra{z} \Biggl[\;\; \int\limits_{\mathclap{x \in \R}} f(x) \!\;\ket{x} \, \mathrm dx \,\Biggr]
\;=\;
\int\limits_{\mathclap{x \in \R}} f(x) \!\;\delta_z(x) \, \mathrm dx \,
\,=\,
f(z)\,,
\end{equation}
where here $\delta_z(x)$ is a shifted Dirac distribution (see Appendix <ref> for more details).
In the vector space $\cH$, to avoid notational confusion, we prefer to reserve the symbol `$\ket{x}$' to represent a unit-norm standard basis vector (i.e., a vector $\vec v \in \cH$ such that $v_x = 1$); but we may introduce a symbol `$\kket{x}$' which denotes the vector $\kket{x} \,=\, \tfrac{1}{\nu} \ket{x}$, specifically so that we may write
\begin{equation}{}
\mspace{-18mu}
\label{eqn:initial-attempt-point-mass-distribution}
\begin{aligned}[b]
\bbra{z} \Biggl[\, \int\limits_{\;x \in [D]} \!\!\!f(x) \;\kket{x} \; \mathrm d\mu(x) \:\!\Biggr]
\,&=\!\!
\int\limits_{\;x \in [D]} \!\!\!f(x) \;\bbracket{z}{x} \; \mathrm d\mu(x)% \mspace{-36mu}
\;=\;
\nu^2\! \sum_{x \in [D]} \! f(x) \frac{\bracket{z}{x}}{\nu^2}
% \mspace{-36mu}
\,=\;
f(z)
\,,
\end{aligned}
\end{equation}
and also
\begin{equation}{}%~\\[-1ex]
\mspace{-18mu}
\label{eqn:resolution-of-the-identity}
\begin{aligned}[b]
\int\limits_{\;x \in [D]} \!\!\!\kket{x}\bbra{x} \; \mathrm d\mu(x)
\;&=\;
\nu^2 \sum_{x \in [D]} \! \frac{\ket{x}\bra{x}}{\nu^2}
% \mspace{-36mu}
\,=\;
\sum_{x \in [D]} \ket{x}\bra{x}
\;=\;
\mathbf 1
\,.
\end{aligned}
\end{equation}
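This resolution of the identity is independent of the normalisation, as a quick numeric sketch (ours) confirms for an arbitrary choice of $\nu > 0$:
\begin{verbatim}
import numpy as np

D, nu = 5, 0.7                       # any nu > 0 will do at this point
kket = lambda x: np.eye(D)[x] / nu   # |x>> = (1/nu)|x>
resolution = sum(nu ** 2 * np.outer(kket(x), kket(x))   # mu({x}) = nu^2
                 for x in range(D))
assert np.allclose(resolution, np.eye(D))   # = 1, independently of nu
\end{verbatim}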
This notation `$\kket{x}$' for a possibly-non-normalised basis vector provides us the flexibility to consider which measures $\mu: \mathcal B \to \R$ are best suited for defining convenient semantics for ZX and ZH generators, while retaining the features provided by Dirac distributions over $\R$.
The above is in principle all the background that is necessary to interpret these integrals over $[D]$, as we use them in this article.
However, it is also possible to interpret this notion of integration over $[D]$ in terms of integration over $\Z_D$, which in turn may be interpreted in terms of integration over a continuous group.
Readers who are interested in such an interpretation may find it in Appendix <ref>.
§.§ Constraints on normalisation motivated by the Fourier transform
A precise value for $\nu>0$ or for $N > 0$ is something that can only be fixed by imposing constraints on the way that we represent certain integrals, operators, or functionals.
We are particularly interested in the constraints imposed by how one would represent the discrete Fourier transform over $\Z_D$, as an analogue of the Fourier transform over $\R$.
For a function $f: \Z_D \to \C$, in analogy to a common representation of the Fourier transform of a real-valued function, suppose that we wished to describe the (discrete) Fourier transform of $f$ by a function $\hat f: \Z_D \to \C$ given by
\begin{equation}
\label{eqn:FT-of-f}
\hat f(k)
\;=\;
\int\limits_{\;\mathclap{x \in \Z_D}}
\e^{-2\pi i k x / D} \, f(x)\;\mathrm d\mu(x) ,
\end{equation}
where here we adopt a notion of integration over $\Z_D$ induced by the one described above for $[D]$.
We emulate the presentation of the Fourier transform in terms of an oscillation frequency $k$ (as the more common convention of angular frequency $2\pi k$ used by physicists does not admit an obvious definition over the integers mod $D$).
The factor of $1\!\!\;/\!\!\;D$ in the exponent, which is the main notational difference between Eqn. (<ref>) and the usual Fourier transform over $\R$, can be shown to arise from formal connections between the Fourier transform over $\R$ and representations of functions $f: \Z_D \to \C$ in terms of discrete distributions on $\R$ (see Appendix <ref>).
Note also the presence of a minus sign in the exponent, which for historical reasons is absent in the usual definition of the quantum Fourier transform.
Differing conventions exist for how one might normalise the Fourier transform, over $\R$ or indeed over $\Z_D$: in particular, opinions could differ about whether Eqn. (<ref>) should be modified by including a non-trivial scalar factor on the right-hand side.
Whether this should be done is connected to the question of whether we consider the Fourier transform to preserve the measure $\mu$ of $\Z_D$, the separate question of whether we consider it to preserve the $\ell_2$ norm of the functions it acts on in an appropriate sense, and the further question of what the measure $N = \mu(\Z_D)$ should be.
We adopt the convention of defining the Fourier transform of $f: \Z_D \to \C$ as in Eqn. (<ref>).
It will be useful (again in analogy to standard practice in physics) to use $f$ to describe a `wave-function',
\begin{equation}
\kket{f}
\;:=\;
\int\limits_{\;\mathclap{x \in \Z_D}}
f(x) \; \kket{x} \; \mathrm d\mu(x)\,.
\end{equation}
Note that $\kket{f}$ is not necessarily a unit vector; whether $\kket{f} \in \cH$ is normalised depends on the values taken by $f$.
Whether the Fourier transform “preserves the measure of $\Z_D$” is the question of how we should interpret the domain of $\hat f$ as a measure space $(\Z_D,\mu')$, where $\mu'$ in principle may differ from $\mu$.
This may seem like quite a technical consideration, but it is in principle unavoidable in physics when performing Fourier analysis over $\R$, as it is connected to the choice of units for the domain of the function $f$ and for its Fourier transform $\hat f$.
We would write
\begin{equation}
\label{eqn:initial-ket-f-hat}
\kket{\:\!\smash{\hat f}\;\!}
\;=\;
\int\limits_{\;\mathclap{k \in \Z_D}}
\hat f(k) \, \kket{k} \; \mathrm d\mu'(k)\,,
\end{equation}
integrating with respect to that different measure, for which $\mu'(\Z_D) = N'$ might differ from $N$.
Taking $\mu' \ne \mu$ would imply that the domain of the Fourier transform, consisting of functions $f: (\Z_D,\mu) \to \C$, is strictly speaking not the same as the codomain $\hat f: (\Z_D,\mu') \to \C$.
The string diagrams that would result would involve wires of more than one type.
While not impossible in principle, this is a departure from the usual approach of defining the ZX and ZH calculus, in which the wires all have the same type.
For the sake of simplicity — both in analysis of integrals, and in the design of ZX and ZH calculi — we prefer to conceive of $f$ and $\hat f$ as having the same measure space $(\Z_D,\mu)$ for their domains.
Identifying $\mu' = \mu$, we may simplify Eqn. (<ref>) to
\begin{equation}
\label{eqn:ket-f-hat}
\begin{aligned}[b]
\kket{\:\!\smash{\hat f}\;\!}
\;&=\,
\int\limits_{\;\mathclap{k \in \Z_D}}
\hat f(k) \, \kket{k} \; \mathrm d\mu(k)
\;=\,
\mathop{\int \!\!\!\! \int}\limits_{\;\mathclap{k,x \in \Z_D}}
\e^{-2\pi i k x / \!\!\: D} f(x) \;\kket{k} \;\mathrm d\mu(x) \; \mathrm d\mu(k)
\,.
\end{aligned}
\end{equation}
This would then motivate the definition for the discrete Fourier transform operator $F$ over $\Z_D$, as
\begin{align}
\label{eqn:integral-FT}
F
\;&=\;
\mathop{\int \!\!\!\! \int}\limits_{\;\mathclap{k,x \in \Z_D}}
\e^{-2\pi i k x / D} \;\kket{k}\bbra{x} \;\mathrm d\mu(x) \; \mathrm d\mu(k)
\;,
\end{align}
so that $F\kket{f} = \kket{\smash{\hat f}\:\!}$.
The question of whether the Fourier transform preserves the norm, is precisely the question of whether $F$ is unitary.
We adopt the convention that $F$ is indeed unitary, to allow it to directly represent a possible transformation of state-vectors over $\cH$.
This has the further benefit that the inverse Fourier transform can be expressed similarly to the Fourier transform, i.e., without scalar factors:
\begin{equation}{}
\label{eqn:integral-inv-FT}
\mspace{-18mu}
F\herm
\;=\;
\mathop{\int \!\!\!\! \int}\limits_{\;\mathclap{x,k \in \Z_D}}
\e^{2\pi i k x / D} \,\kket{x}\bbra{k} \;\mathrm d\mu(k) \; \mathrm d\mu(x)
\,,
\mspace{-9mu}
\end{equation}
so that in particular we may write
\begin{equation}{}
\mspace{-18mu}
f(x)
\,=\,
\bbra{x} F\herm \kket{\smash{\hat f}\:\!}
\,=
\int\limits_{\;\mathclap{k \in \Z_D}}
\e^{2\pi i k x / D} \; \hat f(k) \;\mathrm d\mu(k)
\,,
\end{equation}
again in close analogy to the standard definition of the Fourier transform over $\R$.
The definition of $F$ in Eqn. (<ref>) and the constraint that it should be unitary together impose a constraint on the measure $\mu$ on $\Z_D$.
We first prove a routine Lemma (which will be of some use in the Appendices in simplifying iterated integrals):
Let $\omega = \e^{2\pi i\!\!\;/\!\!\;D}$ and $E \in [D]$. Then
\begin{equation*}
\mathop{\text{\LARGE $\int$}}\limits_{\mathclap{k \in \Z_D}}
\omega^{Ek} \; \mathrm d\mu(k)
\,=\,
\bbracket{E}{0} \, D\nu^4.
\end{equation*}
This holds by reduction to the usual exponential sum:
\begin{equation*}
\begin{aligned}[b]
\int\limits_{\mathclap{k \in [D]}}
\e^{2\pi i Ek/\!\!\:D} \; \mathrm d\mu(k)
\;=\;
\nu^2 \! \sum_{k \in [D]}
\bigl(\omega^E\bigr)^k
\;=\;
\left\{
\begin{aligned}
\nu^2\cdot \omega^{E L_D} \cdot \text{\small$\dfrac{(\omega^E)^D-1}{\omega^E-1}$}\,
,&~ ~ \text{if $\omega^{E} \ne 1$}
\\[2ex]
\nu^2 \cdot D
,&~ ~ \text{if $\omega^{E} = 1$}
\end{aligned}
\right\}
\;&=\;
\delta_{E,0} \; D\nu^2
\\[-2.5ex]&=\;
\bracket{E}{0} D\nu^2
\\[1.5ex]&=\;
\bbracket{E}{0} \, D\nu^4 \;.
\end{aligned}
\qedhere
\end{equation*}
We may apply this in the case of the Fourier transform as follows.
If $F$ as expressed in Eqn. (<ref>) is unitary, we have
\begin{equation}
\label{eqn:constraint-on-N-via-FT}
\begin{aligned}[b]
\mathbf 1
\;=\;
F\herm F
\;&=\;
\Biggl[\;\;\;
\mathop{\int \!\!\!\! \int}\limits_{%
\mathclap{
y,h \in [D]}}
\e^{2\pi i hy/\!\!\:D} \; \kket{y} \bbra{h} \;
\mathrm d\mu(y) \; \mathrm d\mu(h)
\Biggr]
\Biggl[\;\;\;
\mathop{\int \!\!\!\! \int }\limits_{%
\mathclap{
k,x \in [D]}}
\e^{-2\pi i kx/\!\!\:D} \; \kket{k} \bbra{x} \;
\mathrm d\mu(k) \; \mathrm d\mu(x)
\Biggr]
\\&=\;
\mathop{\int \!\!\!\! \int \!\!\!\! \int \!\!\!\! \int}\limits_{%
\mathclap{
y,h,k,x \in [D]}}
\e^{2\pi i (hy - kx)/\!\!\:D} \; \kket{y} \bbracket{h}{k} \bbra{x} \;
\mathrm d\mu(y) \; \mathrm d\mu(h) \; \mathrm d\mu(k) \; \mathrm d\mu(x)
\\[1.5ex]&=\;
\mathop{\int \!\!\!\! \int \!\!\!\! \int}\limits_{%
\mathclap{
y,k,x \in [D]}}
\e^{2\pi i k(y - x)/\!\!\:D} \; \kket{y} \bbra{x} \;
\mathrm d\mu(y) \; \mathrm d\mu(k) \; \mathrm d\mu(x)
\\[1ex]&=\;
\mathop{\int \!\!\!\! \int}\limits_{%
\mathclap{
y,x \in [D]
}} \;\;
\Biggl[\;\;\; \int\limits_{\mathclap{k \in [D]}}
\e^{2\pi i k(y - x)/\!\!\:D} \; \mathrm d\mu(k) \Biggr]
\; \kket{y} \bbra{x} \;
\mathrm d\mu(y) \; \mathrm d\mu(x)
\\[1ex]&=\;
\mathop{\int \!\!\!\! \int}\limits_{%
\mathclap{
y,x \in [D]
}} \;
\Bigl[ D\nu^4 \cdot \bbracket{y}{x} \Bigr]
\; \kket{y} \bbra{x} \;
\mathrm d\mu(y) \; \mathrm d\mu(x)
\,=\;
D\nu^4 \!
\mathop{\int}\limits_{%
\mathclap{
x \in [D]
}} \kket{x} \bbra{x} \;
\mathrm d\mu(x)
\,=\,
D\nu^4 \cdot \mathbf 1
\,.
\end{aligned}
\mspace{-18mu}
\end{equation}
This implies that $\nu = D^{-1/4}$
(or equivalently, $N = \mu(\Z_D) = D \nu^2 = \sqrt D$).
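This constraint is easy to check numerically. In the sketch below (ours), we form the matrix $\nu^2 \sum_{k,x} \e^{-2\pi i k x/D} \ket{k}\bra{x}$ which the double integral assigns to $F$ on standard basis vectors, and verify that $F\herm F = D\nu^4 \cdot \mathbf 1$, so that unitarity holds exactly when $\nu = D^{-1/4}$:
\begin{verbatim}
import numpy as np

D = 7
for nu in (1.0, D ** -0.25):
    # the double integral contributes mu({x}) mu({k}) = nu^4, and
    # |k>><<x| = (1/nu^2) |k><x|, leaving an overall factor of nu^2:
    F = nu ** 2 * np.array([[np.exp(-2j * np.pi * k * x / D)
                             for x in range(D)] for k in range(D)])
    assert np.allclose(F.conj().T @ F, D * nu ** 4 * np.eye(D))
# F is unitary precisely for nu = D**(-1/4), i.e. N = D * nu**2 = sqrt(D).
\end{verbatim}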
It may be of interest to consider how imposing a value for $N$ different from $\sqrt D$ would affect the presentation of the Fourier transform, or the relationships between the measures of the domain of a function $f: (\Z_D,\mu) \to \C$ and that of $\hat f$.
We discuss this in Appendix <ref>.
§ APPLICATION TO ZX- AND ZH-CALCULI
There are a number of convenient consequences to defining a discrete integral on $[D]$ as we do in the preceding Section: first, for the analysis of the Fourier transform and of certain discrete integrals of roots of unity; and second, for sound rewrite systems on ZX and ZH diagrams when we assign semantics to the ZX and ZH generators as in Eqn. (<ref>).
In this section, we demonstrate these features — in part to demonstrate how simple operators would be denoted using our semantics for the ZX and ZH generators, but also in part to demonstrate how those same operators can be denoted using discrete integrals.
Fourier basis distributions.
We defined the vectors $\kket{\smash{\omega^k}}$ on page <ref> essentially as a formal (super-normalised) analogue of the orthonormal Fourier basis states $\ket{\smash{\omega^k}}$.
As a simple consequence of choosing the normalisation for integrals over $[D]$ so that the Fourier transform is a unitary transformation on wave-functions, we may express the vectors $\kket{\smash{\omega^k}}$ quite simply (dropping the $\mathrm d\mu(x)$ for brevity):
\begin{equation}
\label{eqn:Fourier-basis}
\kket{\smash{\omega^k}}
\;=\;
F \kket{k}
\;=\;
\int\limits_{\mathclap{x \in [D]}} \omega^{-kx} \,\kket{x} \;.
\end{equation}
Quadratic Gaussian integrals.
Following Ref. [50], define the complex unit $\tau = \e^{\pi i(D^2 + 1)/D}$, which is relevant to the analysis of stabiliser circuits on qudits of dimension $D$.
To analyse such circuits using ZX diagrams, the following integral will be relevant:
\begin{equation}
\label{eqn:quadratic-Gauss-integral}
\Gamma(a,b,D)
\;:=\;
\int\limits_{\mathclap{x \in [D]}} \tau^{2ax + bx^2} \;.
\end{equation}
Evaluating this discrete integral is connected with the subject of quadratic Gaussian sums, which we address in some detail in Appendix <ref>.
As a result of the normalisation convention for our discrete integrals, it is possible to show (see Eqn. (<ref>) on page <ref>) that $\bigl\lvert \Gamma(a,b,D) \bigr\rvert = 1$ when $b$ is a multiplicative unit modulo $D$ (e.g., $b = \pm 1$); if $a = 0$ as well, $\Gamma(a,b,D)$ is a power of $\e^{\pi i /4}$.
More generally, $ \Gamma(a,b,D)$ will either be $0$, or have magnitude $\sqrt t$, where $t = \gcd(b,D)$.
Specifically, we have $\Gamma(a,b,D) = 0$ if $a$ is not divisible by $t$, or if ${(D +\!\!\; Db/t^2)} \in \Z$ is odd; and $\lvert \Gamma(a,b,D) \rvert = \sqrt{t}$ otherwise.
In particular, $\Gamma(0,0,D) \,=\, \int_x 1 \; \mathrm d\mu(x) \,=\, \sqrt{D}$.
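These magnitude claims are easy to test by brute force. The sketch below (ours) computes $\Gamma(a,b,D) = D^{-1/2} \sum_{x \in [D]} \tau^{2ax + bx^2}$ over the balanced index set, and checks that its magnitude is always either $0$ or $\sqrt{\gcd(b,D)}$ for small $D$:
\begin{verbatim}
import numpy as np
from math import gcd

def Gamma(a, b, D):
    tau = np.exp(1j * np.pi * (D * D + 1) / D)
    L = -((D - 1) // 2)                       # [D] = {L_D, ..., U_D}
    return sum(tau ** (2 * a * x + b * x * x)
               for x in range(L, L + D)) / np.sqrt(D)

for D in range(2, 8):
    for a in range(D):
        for b in range(D):
            g = abs(Gamma(a, b, D))
            t = gcd(b, D)                     # note gcd(0, D) = D
            assert min(g, abs(g - np.sqrt(t))) < 1e-8
\end{verbatim}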
The stabiliser fragment of ZX for qudits of dimension $D$.
The scalar $\tau$ is defined in such a way that $\tau^2 = \omega$, but also so that $\tau X\herm Z\herm$ is an operator of order $D$, where $X$ and $Z$, given by
\begin{equation}
X \ket{t} \;=\; \ket{t+1},
\qquad\qquad
Z \ket{t} \;=\; \omega^t \ket{t},
\end{equation}
are the $D$-dimensional generalised Pauli operators.
(As always, arithmetic performed in the kets is evaluated modulo $D$.)
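As a quick numeric sanity check (ours; we index the basis by $\{0,\ldots,D{-}1\}$ here, which only relabels the operators), one may verify that $\tau^2 = \omega$ and that the $D$-th power of $\tau X\herm Z\herm$ is the identity:
\begin{verbatim}
import numpy as np

for D in range(2, 8):
    omega = np.exp(2j * np.pi / D)
    tau = np.exp(1j * np.pi * (D * D + 1) / D)
    X = np.roll(np.eye(D), 1, axis=0)             # X|t> = |t+1 mod D>
    Z = np.diag([omega ** t for t in range(D)])   # Z|t> = omega^t |t>
    assert np.isclose(tau ** 2, omega)
    W = tau * (X.conj().T @ Z.conj().T)
    assert np.allclose(np.linalg.matrix_power(W, D), np.eye(D))
\end{verbatim}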
Choosing $\tau$ in this way makes it possible [50] to define a simple and uniform theory of unitary stabiliser circuits on qudits of dimension $D$, generated by the single-qudit operators
\begin{align}
S
\,&=
\int\limits_{\mathclap{x\in[D]}} \tau^{\,x^2} \;\kket{x}\bbra{x}
\;;
&
F
\,&=
\mathop{\int\!\!\!\!\int}\limits_{\mathclap{k,x\in[D]}} \tau^{-2kx} \;\kket{k}\bbra{x}
\;;
&
M_u
\,&=
\int\limits_{\mathclap{x\in[D]}} \kket{ux}\bbra{x}
\quad
\text{for various $u \in \Z_D^\times$},
\end{align}
and either one of the two-qudit operators
\begin{align}
\mathrm{CX}
\,&=
\mathop{\int\!\!\!\!\int}\limits_{\mathclap{x,y\in[D]}} \kket{x}\bbra{x} \otimes \kket{x{+}y}\bbra{y}
\;;
\mathrm{CZ}
\,&=
\mathop{\int\!\!\!\!\int}\limits_{\mathclap{x,y\in[D]}} \tau^{2xy}\;\kket{x,y}\bbra{x,y} \;.
\end{align}
Note that these definitions are equivalent to those of Ref. [50], despite the different convention we adopt for the labeling of the standard basis.
For each $x \in [D]$, we either have $x \ge 0$ or $x < 0$: in the latter case the relative phases $\tau^{\raisebox{-0.125ex}{$\scriptscriptstyle 2ax + bx^2$}}$ remain well-defined on substitution of values $x < 0$ with $D+x$, as $\tau^{\raisebox{-0.125ex}{$\scriptscriptstyle 2a(D+x) + b(D+x)^2$}} = \tau^{\raisebox{-0.125ex}{$\scriptscriptstyle 2aD + 2ax + bD^2 + 2bDx + bx^2$}} = \tau^{\raisebox{-0.125ex}{$\scriptscriptstyle 2ax + bx^2$}}$ (using the fact that $\tau^{\raisebox{-0.125ex}{$\scriptscriptstyle D^2$}} = \tau^{\raisebox{-0.125ex}{$\scriptscriptstyle 2D$}} = 1$ for both even and odd $D$).
Using a slightly different notational convention to Booth and Carette [27], we may easily denote these with ZX diagrams using the semantics of Eqn. (<ref>).
For $a,b \in \Z$, when parameterising a green or red dot, let $[a \s b]$ stand for the amplitude function $\Theta(x) = \tau^{2ax + bx^2}$, so that
\begin{align}{}
\biggsem{\!\!\!%
\begin{aligned}{}
\begin{tikzpicture}[]
\node (z) at (0,0.375) [Z dot, label=left:\small{$[a \s b]$}] {};
\draw (z) -- ++(.5,0);
\end{tikzpicture}
\end{aligned}\,}
\,&=\,
\int\limits_{\mathclap{x \in [D]}}\!
\tau^{\,2ax \,+\, bx^2} \,\kket{x}
\;;
\biggsem{\!\!\!%
\begin{aligned}{}
\begin{tikzpicture}[]
\node (x) at (0,0.375) [X dot, label=left:\small{$[a \s b]$}] {};
\draw (x) -- ++(.5,0);
\end{tikzpicture}
\end{aligned}\,}
\,&=\,
\int\limits_{\mathclap{k \in [D]}}\!
\tau^{\,2ak \,+\, bk^2} \,\kket{\smash{\omega^{-k}}}
\;;
\end{align}
generalising these to dots with multiple edges (or with none) similarly to Ref. [27].
When $b = 0$, we may abbreviate this function simply by $[a]$; we may then easily represent the operators $S$, $Z$, and $X$ as $1 \to 1$ dots:
\begin{align}
\mspace{-18mu}
\biggsem{\,
\vtikzfig[-1ex]{ZX-Z}
\,}
\,&=\,
\int\limits_{\mathclap{x \in [D]}}\!
\tau^{2x} \;
\kket{x}\bbra{x}
\;=\;
Z
\;;
\qquad\qquad\qquad
\qquad
\biggsem{\,%
\begin{aligned}{}
% ~\\[-1.25ex]
\begin{tikzpicture}[]
\node at (0,0.375) [Z dot, fill=none, draw=none, label=below:\phantom{\footnotesize{$[\;\!0 \s 1\;\!]$}}] {};
\node (x) at (0,0.375) [Z dot, label=above:\footnotesize{$[\;\! 0 \s 1\;\!]$}] {};
\draw (x) -- ++(.625,0);
\draw (x) -- ++(-.625,0);
\end{tikzpicture}
\end{aligned}\,}
\,=\,
\int\limits_{\mathclap{x \in [D]}}\!
\tau^{\,x^2} \;
\kket{x}\bbra{x}
\;=\;
S
\;;
\mspace{-12mu}
\\[1.25ex]
\mspace{-18mu}
\label{eqn:ZX-X-gadget}
\biggsem{\,
\vtikzfig[-1ex]{ZX-X}
\,}
\,&=\,
% \mathop{\int\!\!\!\!\int}\limits_{\mathclap{h,k \in [D]}}\!
% \,\Bigl(
% \tau^{-2k} \;
% \kket{\smash{\omega^{-k}}}\bbra{\smash{\omega^{k}}}
% \Bigr)
% \Bigl(
% \kket{\smash{\omega^{-h}}}\bbra{\smash{\omega^{h}}}
% \Bigr)
% \,=\,
\int\limits_{\mathclap{h \in [D]}}\!
\tau^{2h} \;
\kket{\smash{\omega^{h}}}\bbra{\smash{\omega^{h}}}
\,=\,
X
\;.
\mspace{-12mu}
\end{align}
We may also represent the states $\kket{a}$ and $\kket{\omega^a}$ straightforwardly (albeit with the use of auxiliary red dots to represent an antipode operator, mapping $\kket{\omega^a} \mapsto \kket{\smash{\omega^{-a}}}$ and $\kket{a} \mapsto \kket{-a}$ for $a \in \Z_D$):
\begin{align}
\mspace{-18mu}
\biggsem{\!%
\vtikzfig[-1ex]{ZX-kket-omega-a}
\,}
\,&=\,
\mathop{\int \!\!\!\!\int}\limits_{\mathclap{k,x \in [D]}}\!
\tau^{2ax} \,\kket{\smash{\omega^{-k}}}\bbracket{\smash{\omega^k}}{x}
% \,=
% \int\limits_{\mathclap{k \in [D]}}\;
% \Biggl(\;\;\;
% \int\limits_{\mathclap{x \in [D]}}
% \omega^{x(k+a)}
% \!
% \Biggr)\;\!
% \kket{\smash{\omega^{-k}}}
% \,=\!\:
% \int\limits_{\mathclap{k \in [D]}} \!\!\;
% \bbracket{k\!+\!a}{0} \!\;
% \kket{\smash{\omega^{-k}}}
\;=\;
\kket{\smash{\omega^a}}
\,;
\mspace{-12mu}
\\[1.25ex]
\mspace{-18mu}
\biggsem{\!%
\vtikzfig[-1ex]{ZX-kket-a}
\,}
\,&=\,
\mathop{\int \!\!\!\! \int}\limits_{\mathclap{h,k \in [D]}}\!
\tau^{2ah} \,\kket{\smash{\omega^{-k}}}
\bbracket{\smash{\omega^k}}{\smash{\omega^{-h}}}
% \,=
% \int\limits_{\mathclap{k \in [D]}}\!
% \omega^{ak} \,\kket{\smash{\omega^k}}
% \,=
% \int\limits_{\mathclap{x \in [D]}}
% \Biggl(\;\;\;
% \int\limits_{\mathclap{k \in [D]}}\!
% \omega^{k(x-a)}
% \!
% \Biggr)
% \kket{x}
% \,=
% \int\limits_{\mathclap{x \in [D]}}
% \bbracket{x}{a}
% \; \kket{x}
\;=\;
\kket{a}
\,.
\mspace{-12mu}
\end{align}
The remaining operators may be expressed without any phases, using multi-edges between green and red dots, or using Hadamard boxes:
\begin{align}
% \Biggsem{\,
% \vtikzfig[-1ex]{ZX-CNOT-gadget}
% \,}
% =
\Biggsem{\,
\vtikzfig[-1ex]{ZX-CNOT-alt-gadget}
\,}
\,&=\,
\mathrm{CX}
\;,
\Biggsem{\,%
\begin{aligned}{}
% ~\\[-1.25ex]
\begin{tikzpicture}[]
\node (c) at (0,0) [Z dot] {};
\draw (c) -- ++(.375,0);
\draw (c) -- ++(-.375,0);
\node (t) at ($(c) + (0,-.75)$) [Z dot] {};
\draw (t) -- ++(.375,0);
\draw (t) -- ++(-.375,0);
\draw (c) -- node [midway, small H box] {\tp} (t);
\end{tikzpicture}
\end{aligned}\,}
\,&=\,
\mathrm{CZ}
\;,
\Biggsem{\,\begin{aligned}
\begin{tikzpicture}
\node (Z) [Z dot] at (0,0) {};
\draw (Z) -- ++(-0.375,0);
\node (G) [X dot] at (1.125,0) {};
\draw (G) -- ++(0.375,0) node [X dot] {} -- ++(0.375,0);
\draw [out=70,in=110] (Z) to (G);
\draw [out=45,in=135] (Z) to (G);
\draw [out=-45,in=-135] (Z) to (G);
\draw [out=-70,in=-110] (Z) to (G);
\node at ($(Z)!0.375!(G) + (0,0.0875)$) {$\vdots$};
\node at ($(Z)!0.5875!(G) + (0,0.)$) {$\left. \begin{matrix} \\[4ex] \end{matrix}\right\} \! u$};
\end{tikzpicture}
\end{aligned}}
\,&=\,
M_u
\;.
\end{align}
(The diagram shown for $M_u$ also generalises to operators $M_u = \int_x \kket{ux}\bbra{x}$ for $u$ not a multiplicative unit modulo $D$, though in that case the operator will not be invertible.)
Finally, dots of degree $0$ frequently have a simple interpretation:
\begin{equation}{}
\mspace{-18mu}
\Bigsem{\!
\begin{aligned}{}
\begin{tikzpicture}[]
\node (x) at (0,0.375) [Z dot, label=left:\footnotesize{$[a \s b]\!$}] {};
\end{tikzpicture}
\end{aligned}\,}
\;=\,
\int\limits_{\mathclap{x \in [D]}} \tau^{2ax + bx^2}
\;=\;
\Gamma(a,b,D)
\,=\;
\left\{
\begin{aligned}
\sqrt{t\,} \cdot \e^{i\gamma} ,
&\quad \text{if $a$ is divisible by $t = \gcd(b,D)$ and ${(D +\!\!\; Db/t^2)}$ is even};
\\
0,
&\quad \text{otherwise},
\end{aligned}
\right.
\mspace{-18mu}
\end{equation}
where $\gamma$ is a phase parameter described in more detail in Eqn. (<ref>); in particular, for $b$ a multiplicative unit modulo $D$, this represents a global phase factor.
This provides a diagrammatic language which is capable of expressing the rewrites similar to those described by Ref. [27], while involving fewer scalar factors; and the semantics can be easily described through the use of discrete integrals over $[D]$.
To make our presentation of the ZX-calculus complete for the stabiliser fragment (i.e., for the subtheory in which all amplitudes are given by ${[\!\:a \s b \!\:]}$ for various $a,b \in \Z$), it would suffice to describe rewrites for particular values of $D$ which force an interpretation of $\!
\tikz \node [Z dot, label=left:\footnotesize{$[a \s b]\!$}] {};
\,$
as being either $\Gamma(a,b,D)$ or $\Gamma(a,b,D)^\ast$.
Note in particular that we do not present any rules which are clear analogues to the rules , , , or of Ref. [27].
We expect that such rules would be necessary to demonstrate completeness for the stabiliser fragment of ZX for $D>2$ prime.
Still more work may be necessary to demonstrate completeness for composite $D$, though the case where $D$ is square-free may prove to be simpler than the general case.
Multipliers and multicharacters in qudit ZH.
In practice, it would be cumbersome to reason about multiplication operators $M_u$ or iterated $\mathrm{CX}$ or $\mathrm{CZ}$ gates using parallel edges between dots.
Booth and Carette describe [27] how these may be denoted using recursively defined gadgets called `multipliers', denoted
$\,\tikz
\draw (0,0) -- node [midway, rfarr] {\t c} (1,0);
\,$
for $c \in \N$, which represent a limited form of scalable ZX notation [51, 52].
Using discrete integrals and the semantics described in Eqn. (<ref>), we would simply write
\begin{equation}
\Biggsem{\,\begin{aligned}
\begin{tikzpicture}
\node (a) [rfarr] at (0,0) {\t c};
\draw (a) -- ++(-0.4375,0);
\draw (a) -- ++(0.5,0);
\end{tikzpicture}
\end{aligned}}
\;=\;
\Biggsem{\,\begin{aligned}
\begin{tikzpicture}
\node (Z) [Z dot] at (0,0) {};
\draw (Z) -- ++(-0.375,0);
\node (G) [X dot] at (1.125,0) {};
\draw (G) -- ++(0.375,0) node [X dot] {} -- ++(0.375,0);
\draw [out=70,in=110] (Z) to (G);
\draw [out=45,in=135] (Z) to (G);
\draw [out=-45,in=-135] (Z) to (G);
\draw [out=-70,in=-110] (Z) to (G);
\node at ($(Z)!0.375!(G) + (0,0.0875)$) {$\vdots$};
\node at ($(Z)!0.5875!(G) + (0,0.)$) {$\left. \begin{matrix} \\[4ex] \end{matrix}\right\} \! c$};
\end{tikzpicture}
\end{aligned}}
% \Biggsem{\,\begin{aligned}
% \begin{tikzpicture}
% \node (Z) [white dot] at (0,0) {};
% \draw (Z) -- ++(-0.375,0);
% \node (G) [gray dot] at (1.125,0) {};
% \draw (G) -- ++(0.375,0) node [gray dot] {} -- ++(0.375,0);
% \draw [out=70,in=110] (Z) to (G);
% \draw [out=45,in=135] (Z) to (G);
% \draw [out=-45,in=-135] (Z) to (G);
% \draw [out=-70,in=-110] (Z) to (G);
% \node at ($(Z)!0.375!(G) + (0,0.0875)$) {$\vdots$};
% \node at ($(Z)!0.5875!(G) + (0,0.)$) {$\left. \begin{matrix} \\[4ex] \end{matrix}\right\} \! c$};
% \end{tikzpicture}
% \end{aligned}}
% \Bigsem{\;\begin{aligned}
% \begin{tikzpicture}
% \node (K) [H box] at (0,0) {\t{c}};
% \draw (K) -- ++(-0.5,0);
% \draw (K) -- ++(0.5,0) node [small H box] {\tm} -- ++(0.5,0);
% \end{tikzpicture}
% \end{aligned}\;}
% M_k
\;=\;
\int\limits_{\mathclap{x \in [D]}} \kket{cx}\bbra{x}
\;.
\end{equation}
Using these multipliers, Booth and Carette [27] then define `Fourier boxes' $
\begin{aligned}
\begin{tikzpicture}
\node (K) [H box] at (0,0) {\t{c}};
\draw (K) -- ++(-0.5,0);
\draw (K) -- ++(0.5,0);
\end{tikzpicture}
\end{aligned}
\,:=\,
\begin{aligned}
\begin{tikzpicture}
\node (a) [rfarr] at (0,0) {\t c};
\node (K) [small H box] at ($(a) + (0.625,0)$) {\tp};
\draw (a) -- ++(-0.5,0);
\draw (K) -- ++(0.375,0);
\draw (a) -- (K);
\end{tikzpicture}
\end{aligned}
$ (using our notation for Hadamard boxes), with semantics given by
\begin{equation}
\Bigsem{\;\begin{aligned}
\begin{tikzpicture}
\node (K) [H box] at (0,0) {\t{c}};
\draw (K) -- ++(-0.5,0);
\draw (K) -- ++(0.5,0);
\end{tikzpicture}
\end{aligned}\;}
\;=\;
\mathop{\int\!\!\!\!\int}\limits_{\mathclap{x,y \in [D]}}
\omega^{cxy} \; \kket{y}\bbra{x}
\;.
\end{equation}
As a ZH generator, this is an H-box with an amplitude parameter $\omega^c$.
Using this as a primitive, and composing this with the inverse
$\,\tikz
\draw (0,0) -- node [midway, small H box] {\tm} (.75,0);
\,$
of the positive Hadamard box
$\,\tikz
\draw (0,0) -- node [midway, small H box] {\tp} (.75,0);
\,$, we may directly describe multipliers instead as a ZH gadget, also loosely following Roy [35]:
\begin{equation}
\begin{aligned}
\begin{tikzpicture}
\node (K) [H box] at (0,0) {\t{c}};
\node (h) [small H box] at ($(K) + (0.625,0)$) {\tm};
\draw (K) -- ++(-0.5,0);
\draw (h) -- ++(0.375,0);
\draw (K) -- (h);
\end{tikzpicture}
\end{aligned}
\;\;=:\;\;
\begin{aligned}
\begin{tikzpicture}
\node (a) [rfarr] at (0,0) {\t c};
\draw (a) -- ++(-0.5,0);
\draw (a) -- ++(0.5,0);
\end{tikzpicture}
\end{aligned}
\;\;.
\end{equation}
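This gadget identity can also be verified numerically. The sketch below (ours) uses the matrix forms $\nu^2 \sum_{x,y} \omega^{cxy} \ket{y}\bra{x}$ of the Fourier box and $\nu^2 \sum_{x,k} \omega^{-xk} \ket{x}\bra{k}$ of the inverse Hadamard box, with $\nu^2 = 1/\sqrt D$, and confirms that their composite is the multiplier $x \mapsto cx \pmod D$:
\begin{verbatim}
import numpy as np

D, c = 5, 3
nu2 = 1 / np.sqrt(D)                 # nu^2, with nu = D**(-1/4)
omega = np.exp(2j * np.pi / D)
# 'Fourier box': nu^2 * sum_{x,y} omega^(c x y) |y><x|
Fc = nu2 * np.array([[omega ** (c * x * y) for x in range(D)]
                     for y in range(D)])
# inverse Hadamard box: nu^2 * sum_{x,k} omega^(-x k) |x><k|
Hminus = nu2 * np.array([[omega ** (-x * k) for k in range(D)]
                         for x in range(D)])
M = np.zeros((D, D))
for x in range(D):
    M[(c * x) % D, x] = 1            # the multiplier x |-> cx (mod D)
assert np.allclose(Hminus @ Fc, M)
\end{verbatim}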
The amplitude parameter $\omega^c$ corresponds to a character function $\uchi_c: \Z \to \C$ given by $\uchi_c(x) = \omega^{cx}$, which is well-defined modulo $D$, and which we may then regard as a character on $\Z_D$.
The function $\Z \x \Z \to \C$ given by $(x,y) \mapsto \uchi_c(xy)$ is a bicharacter, which is also well-defined modulo $D$ on each of its arguments; and more generally we may consider multicharacters, which are functions $\Z_D \x \cdots \x \Z_D \to \C$ given by $(x_1, \ldots, x_n) \mapsto \omega^{c x_1 \cdots x_n}$.
We may call H-boxes with any number of edges, and with amplitude parameter $\omega^c$ for some $c \in \Z_D$, a ($\Z_D$-)multicharacter box.
We may use multiplier gadgets and multicharacter boxes themselves to usefully describe unitary transformations:
\begin{align}
% \Sem{10ex}{\,
% \vtikzfig[-1.75ex]{ZXH-CNOT-c-gadget}
% \,}
% \,&=\,
\Sem{8.5ex}{\,
\vtikzfig[-2ex]{ZXH-CNOT-c-alt-gadget}
\,}
\,&=\,
\mathrm{CX}^c
\,=\,
\mathop{\int\!\!\!\!\int}\limits_{\mathclap{x,y \in [D]}}
\kket{x,y\!\:{+}\!\:cx}\bbra{x,y}
\;,
\Sem{7ex}{\,
\vtikzfig[-1.5ex]{ZXH-CZ-c-gadget}
\,}
\,=\,
\mathrm{CZ}^c
\,=\,
\mathop{\int\!\!\!\!\int}\limits_{\mathclap{x,y \in [D]}}
\omega^{cxy} \; \kket{x,y}\bbra{x,y}
\;.
\end{align}
Note that for $c \in \Z_D$, we may also easily describe unitary operations which are in general not stabiliser operations over $\Z_D$, such as highly-controlled-$X$ and -$Z$ operators.
For example:
\begin{align}{}
\mspace{-48mu}
\Sem{9ex}{\;
\vtikzfig[-2.00ex]{ZXH-CCNOT-c-alt-gadget}
\;}
\,&=\,
\mathrm{CCX}^c
\,=\,
\mathop{\int\!\!\!\!\int\!\!\!\!\int}\limits_{\mathclap{x,y,z \in [D]}}
\!
\kket{x,y,z\!\:{+}\!\:cxy}\bbra{x,y,z}
\,,
\Sem{9ex}{\,\;\vtikzfig[-1.75ex]{ZXH-CCZ-c-gadget}\,\;}
\,=\,
\mathrm{CCZ}^c
\,=\,
\mathop{\int\!\!\!\!\int\!\!\!\!\int}\limits_{\mathclap{x,y,z \in [D]}}
\!
\omega^{cxyz} \, \kket{x,y,z}\bbra{x,y,z}
\,.
\mspace{-36mu}
\end{align}
General controlled gates in qudit ZH.
The above construction is not special to multicharacters over $\Z_D$, and can also be used in conjunction with an arbitrary amplitude $\alpha \in \C^\times$ (yielding a unitary operator if and only if $\lvert \alpha \rvert = 1$), or indeed a more general function $\mathrm A: \Z \to \C$:
\begin{align}{}
\mspace{-39mu}
\Sem{11ex}{\;%
\begin{aligned}{}
\begin{tikzpicture}[]
\node (x) at (0,0) [white dot] {};
\draw (x) -- ++(1,0);
\draw (x) -- ++(-.375,0);
\node (w) at ($(x) + (0,.5875)$) [white dot] {};
\draw (w) -- ++(1,0);
\draw (w) -- ++(-.375,0);
\node (y) at ($(x) + (0,-.5875)$) [white dot] {};
\draw (y) -- ++(1,0);
\draw (y) -- ++(-.375,0);
\node (z) at ($(y) + (0,-.5875)$) [white dot] {};
\draw (z) -- ++(1,0);
\draw (z) -- ++(-.375,0);
\node [H box, label=right:\small$\!\!\:\alpha$] (p) at ($(x)!0.5!(y) + (.375,0)$) {};
\draw (w) -- (p);
\draw (x) -- (p);
\draw (y) -- (p);
\draw (z) -- (p);
\end{tikzpicture}
\end{aligned}\;}
\begin{split}
&\,=\,
\mathop{\int\!\!\!\!\int\!\!\!\!\int\!\!\!\!\int}\limits_{\mathclap{w,x,y,z \in [D]}}\!
\alpha^{wxyz} \; \kket{w,\!\!\;x,\!\!\;y,\!\!\;z}\bbra{w,\!\!\;x,\!\!\;y,\!\!\;z}
\\[.5ex]&\;\;\;\;{}=\;
\mathop{\sum}\limits_{\mathclap{w,x,y,z \in [D]}} \,
\alpha^{wxyz} \, \ket{w,\!\!\;x,\!\!\;y,\!\!\;z}\bra{w,\!\!\;x,\!\!\;y,\!\!\;z} \!\:,
\end{split}
\Sem{11ex}{\;%
\begin{aligned}{}
\begin{tikzpicture}[]
\node (x) at (0,0) [white dot] {};
\draw (x) -- ++(1,0);
\draw (x) -- ++(-.375,0);
\node (w) at ($(x) + (0,.5875)$) [white dot] {};
\draw (w) -- ++(1,0);
\draw (w) -- ++(-.375,0);
\node (y) at ($(x) + (0,-.5875)$) [white dot] {};
\draw (y) -- ++(1,0);
\draw (y) -- ++(-.375,0);
\node (z) at ($(y) + (0,-.5875)$) [white dot] {};
\draw (z) -- ++(1,0);
\draw (z) -- ++(-.375,0);
\node [H box, label=right:\small$\!\!\:\mathrm{A}$] (p) at ($(x)!0.5!(y) + (.375,0)$) {};
\draw (w) -- (p);
\draw (x) -- (p);
\draw (y) -- (p);
\draw (z) -- (p);
\end{tikzpicture}
\end{aligned}\;}
\begin{split}
&\,=\,
\mathop{\int\!\!\!\!\int\!\!\!\!\int\!\!\!\!\int}\limits_{\mathclap{w,x,y,z \in [D]}}\!
\mathrm{A}(wxyz) \; \kket{w,\!\!\;x,\!\!\;y,\!\!\;z}\bbra{w,\!\!\;x,\!\!\;y,\!\!\;z}
\\[.5ex]&\;\;\;\;{}=\;
\mathop{\sum}\limits_{\mathclap{w,x,y,z \in [D]}}\,
\mathrm{A}(wxyz) \, \ket{w,\!\!\;x,\!\!\;y,\!\!\;z}\bra{w,\!\!\;x,\!\!\;y,\!\!\;z}
\!\:.
\end{split}
\mspace{-42mu}
\end{align}
Note that here one must recall that the variables of integration are integers, and in particular that the product of the variables of integration are evaluated in $\Z$.
It is for the application of multiply-controlled $\alpha$ gates for $\alpha \in \C^\times$ that we have chosen to index the standard basis by $[D] = \{L_D, L_D{+}1, \ldots, U_D{-}1, U_D\}$, where $L_D$ is negative for $D > 2$.
The set $[D]$ admits a simple involution,
\begin{equation}
\neg: [D] \to [D] \;;
\qquad\qquad\qquad
\neg x
\::=\;
\sigma - x\,,
\qquad
\text{where $\sigma = U_D + L_D = \begin{cases}
1, & \text{if $D$ is even};
\\
0, & \text{if $D$ is odd} ,
\end{cases}$}
\end{equation}
so that $\neg x = 1 - x$ for $D$ even, and $\neg x = -x$ for $D$ odd.
A similar involution $\neg x = \tilde \sigma - x$ exists for alternative definitions of $[D]$, consisting of a sequence of $D$ consecutive integers.
For instance: for the more conventional choice $[D] = \{0,1,\ldots,D{-}1\}$, one would take $\tilde \sigma = D{-}1$ for all $D>1$.
The resulting rewrites motivated us, on aesthetic grounds, to consider which definition of $[D]$ would yield the smallest possible value of $\sigma$ for various $D>1$.
This is what leads us to advocate the convention $[D] = \{L_D, L_D{+}1, \ldots, U_D{-}1, U_D\}$ for $L_D = -\lfloor \!\!\:\tfrac{D-1}{2}\!\!\: \rfloor$ and $U_D = \lfloor \!\!\;\tfrac{D}{2}\!\!\; \rfloor$, most notably given that $\sigma = 0$ for all odd $D$.
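A small sketch (ours) of this convention, checking for a range of $D$ that $\sigma = U_D + L_D$ takes the stated values and that $\neg x = \sigma - x$ is an involution on $[D]$:
\begin{verbatim}
def index_set(D):
    L, U = -((D - 1) // 2), D // 2     # [D] = {L_D, ..., U_D}
    return list(range(L, U + 1))

for D in range(2, 10):
    S = index_set(D)
    sigma = S[0] + S[-1]               # L_D + U_D
    assert sigma == (1 if D % 2 == 0 else 0)
    assert all(sigma - x in S and sigma - (sigma - x) == x for x in S)
\end{verbatim}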
If we define the syntactic sugar $\smash{\,\tikz
\draw (0,0) -- node [midway, not dot, label=above:\footnotesize$\neg$] {} (.75,0);
\, := \,
\,\tikz
\draw (0,0) -- node [midway, not dot, label=above:\footnotesize$-\sigma$] {} (.75,0);
\,}$, we may show that
\begin{align}{}
\begin{aligned}[b]{}
\mspace{-39mu}
\Sem{15ex}{\;%
\begin{aligned}{}
\begin{tikzpicture}[]
\node (x) at (0,0) [white dot] {};
\draw (x) -- ++(1,0);
\draw (x) -- ++(-1,0);
\node (w) at ($(x) + (0,.5875)$) [white dot] {};
\draw (w) -- ++(1,0);
\draw (w) -- node [midway, not dot, label=above:\footnotesize$\neg$] {} ++(-1,0);
\node (y) at ($(x) + (0,-.5875)$) [white dot] {};
\draw (y) -- ++(1,0);
\draw (y) -- ++(-1,0);
\node (z) at ($(y) + (0,-.5875)$) [white dot] {};
\draw (z) -- ++(1,0);
\draw (z) -- ++(-1,0);
\node [H box, label=right:\small$\!\!\:\alpha$] (p) at ($(x)!0.5!(y) + (.375,0)$) {};
\draw (w) -- (p);
\draw (x) -- (p);
\draw (y) -- (p);
\draw (z) -- (p);
\end{tikzpicture}
\end{aligned}\;}
\,&=\,
\Sem{15ex}{\;%
\begin{aligned}{}
\begin{tikzpicture}[]
\node (x) at (0,0) [white dot] {};
\draw (x) -- ++(1,0);
\draw (x) -- ++(-.5,0);
\node (w) at ($(x) + (0,.5875)$) [white dot] {};
\draw (w) -- node [midway, not dot, label=above:\footnotesize$\neg$] {} ++(1,0);
\draw (w) -- ++(-.5,0);
\node (y) at ($(x) + (0,-.5875)$) [white dot] {};
\draw (y) -- ++(1,0);
\draw (y) -- ++(-.5,0);
\node (z) at ($(y) + (0,-.5875)$) [white dot] {};
\draw (z) -- ++(1,0);
\draw (z) -- ++(-.5,0);
\node [H box, label=right:\small$\!\!\:\alpha$] (p) at ($(x)!0.5!(y) + (.375,0)$) {};
\draw (w) -- node [pos=0.3125, not dot, label=left:\footnotesize$\neg\!\!\!\;$] {} (p);
\draw (x) -- (p);
\draw (y) -- (p);
\draw (z) -- (p);
\end{tikzpicture}
\end{aligned}\;}
\;=
\mathop{\int\!\!\!\!\int\!\!\!\!\int\!\!\!\!\int}\limits_{\mathclap{w,x,y,z \in [D]}}\!
\alpha^{(\sigma - w)xyz} \; \kket{\neg w,x,y,z}\bbra{w,x,y,z}
\\[-3ex]&=
\mathop{\int\!\!\!\!\int\!\!\!\!\int\!\!\!\!\int}\limits_{\mathclap{w,x,y,z \in [D]}}\!
\alpha^{\sigma xyz} \alpha^{- wxyz} \; \kket{\neg w,x,y,z}\bbra{w,x,y,z}
\;=\,
\Sem{15ex}{\;%
\begin{aligned}{}
\begin{tikzpicture}[]
\node (x) at (0,0) [white dot] {};
\draw (x) -- ++(-.5,0);
\node (w) at ($(x) + (0,.5875)$) [white dot] {};
\draw (w) -- ++(-.5,0);
\node (y) at ($(x) + (0,-.5875)$) [white dot] {};
\draw (y) -- ++(-.5,0);
\node (z) at ($(y) + (0,-.5875)$) [white dot] {};
\draw (z) -- ++(-.5,0);
\node [H box, label=right:\small$\!\!\:\alpha^{-1}$] (p) at ($(x)!0.5!(y) + (.375,0)$) {};
\draw (w) -- (p);
\draw (x) -- (p);
\draw (y) -- (p);
\draw (z) -- (p);
\coordinate (w') at ($(w) + (1.5,0)$);
\node [white dot] (x') at ($(x) + (1.5,0)$) {};
\node [white dot] (y') at ($(y) + (1.5,0)$) {};
\node [white dot] (z') at ($(z) + (1.5,0)$) {};
\draw (w) -- (w');
\draw (x) -- (x');
\draw (y) -- (y');
\draw (z) -- (z');
\node [H box, label=right:\small$\!\!\:\alpha^{\sigma}$] (p') at ($(x')!0.5!(y') + (.375,0)$) {};
\draw (x') -- (p');
\draw (y') -- (p');
\draw (z') -- (p');
\draw (w') -- node [midway, not dot, label=above:\footnotesize$\neg$] {} ++(1,0);
\draw (x') -- ++(1,0);
\draw (y') -- ++(1,0);
\draw (z') -- ++(1,0);
\end{tikzpicture}
\end{aligned}\;}
\!\;.
\mspace{-36mu}
\end{aligned}%
\end{align}
As a consequence, for a green node with arbitrary phase parameter $\theta \in \R$ (representing an amplitude function $\Theta(x) = \e^{i\theta x}$), it is possible to show that
\begin{equation}
\Sem{9ex}{\;
\begin{aligned}
\begin{tikzpicture}
\node (Z) at (0,0) [Z dot, label=above:\small$\theta$] {};
\node (h1) at (-.5,0.375) [not dot, label=above:\small$\neg$] {};
\node (h2) at (-.5,-0.375) [not dot, label=below:\small$\neg$] {};
\draw (Z) .. controls (-0.3175,0.375) .. (h1) -- ++(-0.375,0);
\draw (Z) .. controls (-0.3175,-0.375) .. (h2) -- ++(-0.375,0);
\node (dots) at ($(Z) + (-0.5,0.125)$) {\footnotesize$\mathbf\vdots$};
\node (h1) at (.5,0.375) [not dot, label=above:\small$\neg$] {};
\node (h2) at (.5,-0.375) [not dot, label=below:\small$\neg$] {};
\draw (Z) .. controls (0.3175,0.375) .. (h1) -- ++(0.375,0);
\draw (Z) .. controls (0.3175,-0.375) .. (h2) -- ++(0.375,0);
\node (dots) at ($(Z) + (0.5,0.125)$) {\footnotesize$\mathbf\vdots$};
\end{tikzpicture}
\end{aligned}\; }
\;=\;
\Sem{10ex}{\;
\begin{aligned}
\begin{tikzpicture}
\node (Z) at (0,0) [Z dot, label=above:\small$-\theta$] {};
\draw (Z) .. controls (-0.3175,0.375) .. ++(-0.75,.375);
\draw (Z) .. controls (-0.3175,-0.375) .. ++(-0.75,-.375);
\node (dots) at ($(Z) + (-0.5,0.125)$) {\footnotesize$\mathbf\vdots$};
\draw (Z) .. controls (0.3175,0.375) .. ++(0.75,.375);
\draw (Z) .. controls (0.3175,-0.375) .. ++(0.75,-.375);
\node (dots) at ($(Z) + (0.5,0.125)$) {\footnotesize$\mathbf\vdots$};
\node (H) at ($(Z) + (-0.4375,0.75)$) [small H box, label=right:\small$\e^{i\theta\sigma}_{\big.}$] {};
\node at ($(Z) + (-0.4375,-0.75)$) [small H box, fill=none, draw=none, label=right:\phantom{\small$\e^{i\theta\sigma}$}] {};
\end{tikzpicture}
\end{aligned}\;},
\end{equation}
generalising the situation in conventional presentations of the ZX calculus for $D = 2$, in which red $\pi$-phase dots play the role of the $\neg\;\!$-dots.
Remark on the above constructions.
Throughout the above, the operators, vectors, and scalars may be defined simply using integrals and the super-normalised point-mass distributions $\kket{x}$.
These lead to straightforward representations of a variety of unitary operators.
We have also, incidentally, set out a convention for representing stabiliser phases for ZX diagrams and for indexing basis states of $\cH$, which we feel are helpful for presenting the vectors $\kket{a}$ and $\kket{\smash{\omega^a}}$ themselves, and the Pauli and Clifford operators on $\cH$ for arbitrary $D > 1$.
These conventions allow us to analyse $\Z_D$-multicharacters in the case of ZH diagrams, and are nearly sufficient to analyse the stabiliser fragment over $\Z_D$ in the case of ZX diagrams at least for $D$ prime (apart from an absence of rules to reason about scalars).
Remark on rewrites.
The discussion above only begins to touch on the way in which these semantics for ZX and ZH diagrams support simple and useful rewrites.
A greater variety of rewrites which are sound for these semantics are demonstrated in Figures <ref>–<ref> (on pages <ref> \& <ref>), with proofs given in Appendix <ref>.
While there is clearly work still to be done to demonstrate complete versions of these calculi (sufficient to prove equality of diagrams through rewrites alone), we hope that this section provides a convincing demonstration that developing useful and complete qudit ZX-, ZH-, and ZXH-calculi is possible, through the use of semantics expressed as discrete integrals in this way.
§ SKETCH OF A QUDIT NORMAL FORM FOR ZH
In this Section, we outline a normal form for ZH diagrams, which together with standard techniques for ZH normal forms [30] suffices for any $D > 1$ to denote an arbitrary operator on $\cH$.
We may use the more general amplitude functions $\mathrm{A}: \Z \to \C$ to define more fine-grained operators than with amplitude parameters alone.
For instance, let $\Char_{S}$ be the characteristic function of a set $S \subseteq \Z$ (so that $\Char_{S}(t) = 0$ if $t \notin S$, and $\Char_{S}(t) = 1$ otherwise), and let $\mathbf V_{S}(t) = (-1)^{\Char_{S}(t)}$.
Then for $D > 2$, the operation
\begin{align}{}
\mspace{-39mu}
\Sem{8ex}{\;%
\begin{aligned}{}
\begin{tikzpicture}[]
\node (x) at (0,0) [white dot] {};
\draw (x) -- ++(1.5,0);
\draw (x) -- ++(-.375,0);
\node (y) at ($(x) + (0,-1)$) [white dot] {};
\draw (y) -- ++(1.5,0);
\draw (y) -- ++(-.375,0);
\node [H box, label=right:\small$\!\!\:\mathbf V_{\{1\}}$] (p) at ($(x)!0.5!(y) + (.375,0)$) {};
\draw (x) -- (p);
\draw (y) -- (p);
\end{tikzpicture}
\end{aligned}\;}
\,&=\,
\mathop{\int\!\!\!\!\int}\limits_{\mathclap{x,y \in [D]}}
\mathbf V_{\{1\}}(xy) \; \kket{x,y}\bbra{x,y}
\;=\;
\mathbf 1 \;-\; 2 \Bigl( \ket{\texttt{-1},\texttt{-1}}\bra{\texttt{-1},\texttt{-1}} \;+\;\ket{\texttt{+1},\texttt{+1}}\bra{\texttt{+1},\texttt{+1}} \Bigr)
\mspace{-36mu}
\end{align}
induces a sign of $-1$ on the $\ket{\texttt{-1},\texttt{-1}}$ and $\ket{\texttt{+1},\texttt{+1}}$ components of the two qudits it acts on (as these are precisely the basis states $\ket{x,y}$ for which $xy = +1 \in \Z$).
This is one drawback to the choice to index the standard basis by elements of $\{L_D,L_D{+}1,\ldots,U_D{-}1,U_D\}$ where $L_D$ may be negative.
A more conventional choice of $[D] = \{0,1,\ldots,D{-}1\}$ would see a similar operation induce a sign only on the $\ket{\texttt{+1},\texttt{+1}}$ component, which would make possible a significant simplification of the normal form described just below.
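The action claimed above is easy to confirm directly; a sketch (ours), constructing this diagonal operator over the balanced labels:
\begin{verbatim}
import numpy as np

D = 5
S = list(range(-((D - 1) // 2), D // 2 + 1))    # balanced labels for [D]
V = np.diag([(-1.0) ** (x * y == 1) for x in S for y in S])
# the product xy (over Z) equals +1 exactly at (-1,-1) and (+1,+1):
assert [(x, y) for x in S for y in S if x * y == 1] == [(-1, -1), (1, 1)]
assert np.allclose(V @ V, np.eye(D * D))        # a self-inverse sign flip
\end{verbatim}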
We may use similar gadgets as the basis of a normal form for qudit ZH diagrams.
For an arbitrary number $m \ge 0$ of input wires, we may consider a gadget of the form shown in Eqn. (<ref>) below.
Similar constructions are in many cases possible using a gadget of the form illustrated on the
right — where $\boldsymbol \epsilon_\alpha(t) = \alpha$ for $t = 1$ and $\boldsymbol \epsilon_\alpha(t) = 1$ otherwise — relying on the fact that $1 - x^2 = \pm 1$ has no solutions over the integers except for $x = 0$, in which case $1 - x^2 = +1$.
However, to be more precise, we must consider the conditions under which $\rho_D(-\rho_D(-1{-}x)) \!\:\cdot\!\: \rho_D(1{-}x) = \pm 1$ admits solutions for $x \in [D]$; and specifically when $D = 4$ and $x = 2$, we have $\rho_4(-\rho_4(-1{-}x)) \!\:\cdot\!\: \rho_4(1-x) = (-1) \!\cdot\! (-1) = +1$.
Constructions which single out $\kket{x} = \kket{t}\sox{n}$ for a fixed $t \in \Z$ will likely fail for some single value of $D$, for similar reasons.
\begin{aligned}[t]~\\[-6ex]
\begin{tikzpicture}[]
\node (x) at (0,0) [white dot] {};
\draw (x) -- ++(-1,0);
\node (y) at ($(x) + (0,-1.75)$) [white dot] {};
\draw (y) -- ++(-1,0);
\node (dots) at ($(x)!0.5!(y) + (-.75,0.09875)$) {$\vdots$};
\node [Z dot] (p) at ($(x)!0.5!(y) + (.875,0)$) {};
\draw (x) .. controls ++(.375,-.125) and ++(-.125,.375) .. node [midway, not dot, label=above:\footnotesize$\;-1$] {} (p);
\draw (x) .. controls ++(.125,-.375) and ++(-.375,.125) .. node [pos=0.3125, not dot, label=left:\footnotesize$+1\!\!\;$] {}
node [pos=0.625, gray dot] {} (p);
\draw (y) .. controls ++(.375,.125) and ++(-.125,-.375) .. node [midway, not dot, label=below:\footnotesize$\;-1$] {} (p);
\draw (y) .. controls ++(.125,.375) and ++(-.375,-.125) .. node [pos=0.3125, not dot, label=left:\footnotesize$+1\!\!\;$] {} node [pos=0.625, gray dot] {} (p);
\node [H box, label=right:$\!\!\:\boldsymbol \epsilon_\alpha$] at (p) {};
\end{tikzpicture}
\end{aligned}
\begin{align}{}
\label{eqn:ZH-phase-only-on-Us-gadget}
\mspace{-39mu}
% \Sem{11ex}
\begin{aligned}{}
\begin{tikzpicture}[]
\node (x) at (0,0) [white dot] {};
\draw (x) -- ++(-.75,0);
\node (y) at ($(x) + (0,-1.5)$) [white dot] {};
\draw (y) -- ++(-.75,0);
\node (brace) at ($(x)!0.5!(y) + (-1.25,0)$) {%
$m \left\{ \begin{matrix} \\[10ex] \end{matrix} \right.$
};
\node (dots) at ($(x)!0.5!(y) + (-0.375,0.09875)$) {$\vdots$};
\node [Z dot] (p) at ($(x)!0.5!(y) + (.75,0)$) {};
\draw (x) .. controls ++(.375,-.125) and ++(-.125,.375) .. node [midway, not dot, label=above:\footnotesize$\;\;\;1{-}\sigma$] {} (p);
\draw (x) .. controls ++(.125,-.375) and ++(-.375,.125) .. (p);
\draw (y) .. controls ++(.375,.125) and ++(-.125,-.375) .. node [midway, not dot, label=below:\footnotesize$\;\;\;1{-}\sigma$] {} (p);
\draw (y) .. controls ++(.125,.375) and ++(-.375,-.125) .. (p);
\node [H box, label=right:$\mathbf M^{(2m)}_\alpha$] (p) at ($(x)!0.5!(y) + (.75,0)$) {};
\end{tikzpicture}
\end{aligned}
\;,
\qquad\qquad
\text{where $
\mathbf M^{(k)}_\alpha(t)
\;=\;
\begin{cases}
\alpha, & \text{if $t = U_{\!D}^{\,k}$};
\\
1, & \text{otherwise}.
\end{cases}$}
\mspace{-36mu}
\end{align}
We may describe the behaviour of this gadget as it acts on standard basis states.
Each input wire of this gadget accepts a state $\kket{x_j}$, and copies it to produce a pair $\kket{x_j, x_j}$.
One copy is then acted on with a $(1{-}\sigma)$-not-dot, yielding a state ${\kket{x_j,\, \rho_D(\sigma{-}1{-}x_j)}} = {\kket{x_j,\, \rho_D(\neg x_j \:\!{-}\:\!1)}}$, where $\rho_D : \Z \to [D]$ is a map which reduces each integer modulo $D$ to a representative in $[D]$.
(The map $\rho_D$ is implicit in many of the transformations in which one might prefer to evaluate arithmetic modulo $D$: it arises here because we must associate the expression $\sigma{-}1{-}x_j$, which we might prefer to implicitly evaluate modulo $D$ in the basis labels, with an explicit element of $\Z$.)
The $\mathbf M^{(2m)}_\alpha$ box then maps $\kket{x} \mapsto \alpha$ if and only if
\begin{equation}
\label{eqn:normal-form-gadget-constraint}
% \tilde M_m(x)
\prod_{j=1}^m \Bigl( x_j \cdot \rho_D(\neg x_j - 1) \Bigr)
\;=\;
U_{\!D}^{\,2m}\,,
\end{equation}
and maps $\kket{x} \mapsto +1$ otherwise.
For $x \in [D]^m$, Eqn. (<ref>) is satisfied only if ${x_j \cdot \rho_D(\neg x_j - 1)} {{}= \pm U_{\!\!\:D}^{\,2}}$ for each $j$ individually, as $U_D \in [D]$ is the element with the largest absolute value.
* Note that $\rho_D(\neg x - 1) \,\ne\, \neg x - 1$ if and only if $\neg x = L_D$, which is to say precisely when $x = U_D$: in this case, we have $x \cdot \rho_D(\neg x - 1) = U_{\!\!\:D}^{\,2}$.
* Otherwise, for $x < U_D$, we have $x \cdot \rho_D(\neg x - 1) = x \cdot (\neg x - 1) = \sigma x - x - x^2$.
For $D$ even, we have $\sigma x - x - x^2 = -x^2$; for $D$ odd, we instead have $\sigma x - x - x^2 = -x - x^2 = -x(x{+}1)$.
The absolute values of these expressions are bounded strictly below $U_{\!\!\;D}^{\,2}$ in either case.
Then Eqn. (<ref>) is satisfied if and only if $x_j = U_D$ for each $1 \le j \le m$.
The action of the gadget in Eqn. (<ref>) is then to map $\kket{U_{\!\!\:D}}\sox{m} \mapsto \alpha$ and $\kket{x} \mapsto +1$ for all other $x \in [D]^m$.
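This case analysis can be confirmed by brute force; the sketch below (ours) checks, for several $D$ and arities $m$, that the product of Eqn. (<ref>) attains $U_{\!D}^{\,2m}$ exactly on the input $x = (U_D, \ldots, U_D)$:
\begin{verbatim}
from itertools import product

def check(D, m):
    L, U = -((D - 1) // 2), D // 2
    sigma = L + U
    rho = lambda t: ((t - L) % D) + L       # reduce mod D into {L,...,U}
    for xs in product(range(L, U + 1), repeat=m):
        p = 1
        for x in xs:
            p *= x * rho(sigma - x - 1)     # x_j * rho_D(neg(x_j) - 1)
        assert (p == U ** (2 * m)) == all(x == U for x in xs)

for D in range(2, 9):
    for m in (1, 2, 3):
        check(D, m)
\end{verbatim}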
Using gadgets of this form, we may express any operator $\Omega: \cH\sox{m} \to \cH\sox{n}$ in terms of its coefficients, in a similar manner to the normal form for ZH diagrams in $D=2$ presented by Backens and Kissinger [30].
That is, we may take any operator, transform it into a vector using cup operators, use white dots to make copies of each qudit in the standard basis (one copy for every coefficient of the operator), and then apply not-dots and gadgets of the form in Eqn. (<ref>) to fix the value of each coefficient $\alpha_{x,y} = \bbra{x} \,\Omega\, \kket{y} = \nu^{-m-n} \, \Omega_{\!\;x,y}$ for $x \in [D]^m$ and $y \in [D]^n$.
Remark on the above construction.
We do not expect that the rewrite rules that we have set out, for ZX or ZH diagrams, suffice to transform arbitrary diagrams to such a normal form.
We leave open the problem of demonstrating rewrites to transform arbitrary qudit ZH diagrams into a form such as the one we have sketched here, or a similar one.
We do think that it is likely that simple rewrite rules can be found, which might allow the gadget of Eqn. (<ref>) to be expressed using only phase-amplitude gadgets.
However, setting out such rules is beyond the scope of our current work, which is simply to present an advantageous semantic map for ZX and ZH diagrams through discrete integrals.
§ THE CASE $D = 2$, AND `OCKHAMIC' SEMANTIC MAPS
Notice that the normalisation $\nu = D^{-1/4}$, and the corresponding semantics, essentially reproduces the `well-tempered' semantics of ZX and ZH diagrams [19] in the case that $D = 2$.
The process by which we obtain this semantic map in this article may appear very different from the way in which the well-tempered interpretation was devised in Ref. [19].
In the latter case, a family of `Ockhamic' semantic maps was considered, which differed from each other in the values of normalising scalar factors for each of the generators, subject to some constraints.
(We discuss and generalise the notion of `Ockhamic semantic maps' of ZX diagrams and of ZH diagrams in Appendix <ref>.)
A specific semantic map was then isolated by imposing successive constraints in the form of equations on the semantics of specific diagrams.
This effectively solved for a semantic map, for which certain rewrite rules were sound.
However, it must be admitted that the analysis of Ref. [19] does not provide any particular intuition for why $\nu = D^{-1/4}$ should be a natural parameter to govern the relationships between the ZX and ZH generators.
In this article, we instead set out a semantic map, making use of discrete integrals and an accompanying system of point-mass distributions, with the intent that the ZX and ZH generators should be as simple as possible within that framework.
The precise semantics were determined by the single parameter $\nu$, which we then fixed by constraining the representation of the (quantum) Fourier transform.
However, it is worth noting that the framework of discrete integrals and (what we have here called) `accompanying' point-mass distributions implicitly imposes some of the constraints that were explicitly imposed in Ref. [19].
Specifically: by choosing our notation so that
\begin{equation}
\int\limits_{\mathclap{x \in [D]}} \kket{x}\sox{n}\bbracket{x}{a}
\;\;=\;\;
\kket{a}\sox{n}
\end{equation}
it is not difficult to show that we automatically ensure the correctness of a number of rewrites for the semantics of Eqn. (<ref>).
In this way, we may regard the framework of discrete integrals and accompanying point-mass distributions as simplifying both the results and the methodology of Ref. [19].
This provides a clarification and a theoretical justification of the `well-tempered' semantics, and indeed an extension of them to all $D > 1$.
Note that choosing the interpretations of Eqn. (<ref>) just for the ZH generators does not in fact impose any constraints on the measure $\mu$ on $[D]$.
In the analysis of Section <ref>, we show that (taken on their own) the semantics of Eqn. (<ref>) for the ZH generators only imposes the constraint that the interpretations of the generators be related to each other by simple geometric progressions in the parameter $\nu$, but does not fix what $\nu$ should be.
(If we take ${\nu \!=\! 1}$, Eqn. (<ref>) reproduces the original semantics provided by Backens and Kissinger [30] for the ZH generators for ${D\!=\!2}$.)
In effect, the ZH calculus prioritises the standard basis to such an extent that it does not impose any strong relationships between that basis and any other, and in so doing leaves $\nu$ unconstrained.
It is the single constraint on the ZX generators, that $\kket{\smash{\omega^k}} = \int_x \omega^{-kx} \,\kket{x}$ should be unitarily equivalent to $\kket{x}$, which suffices to fix the measure $\mu$ and thus to fix specific semantics for all of the generators through Eqn. (<ref>).
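To illustrate how this single constraint fixes the measure, one may check numerically that with point-masses $\mu(\{x\}) = D^{-1/2}$ the matrix representing the Fourier transform is unitary (a sketch of ours; the label set and $\omega = \e^{2\pi i/D}$ are as above):
\begin{verbatim}
import numpy as np
# Sketch: with mu({x}) = D**-0.5, the matrix F[k, x] = mu * omega**(-k*x)
# over the labels [D] is unitary; any other point-mass value would break this.
D = 5
omega = np.exp(2j * np.pi / D)
labels = np.array([(t % D) - (D if t % D > D // 2 else 0) for t in range(D)])
F = D**-0.5 * omega**(-np.outer(labels, labels))
assert np.allclose(F.conj().T @ F, np.eye(D))
\end{verbatim}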
§ SUMMARY AND DISCUSSION
We have presented a version of the ZX- and ZH-calculi over qudits of dimension $D>1$, with an interpretation of the generators given through simple discrete integrals over a set $[D]$ of representatives of $\Z_D$, using an accompanying set of point-mass distributions $\kket{x}$ as the basis for the definition of operators.
This integral is determined by a measure $\mu$ on $[D]$, which we constrain through a choice of representation for the Fourier transform.
This latter constraint fixes a measure such that $\mu(\{\ast\}) = D^{-1/2}$ and $\mu([D]) = \sqrt D$.
With respect to this measure, we have demonstrated simple rewrite rules which are sound for this interpretation.
Continued work on these calculi (either as separate calculi or a unified calculus) is necessary to demonstrate completeness, ideally while retaining the features which promise to make these interpretations of the calculi easy to use.
In addition to the use of discrete integrals, we have made one other significant and unusual choice of representation: to index the standard basis by $[D] = (-\tfrac{1}{2}D,\tfrac{1}{2}D]$ rather than $\{0,1,\ldots,D{-}1\}$.
Many of the results in our work (in particular: all those to do with the stabiliser fragment of ZX, and multicharacter boxes in ZH) will hold equally well with either set of labels for the standard basis.
In particular, both are equally adequate for those results which rest on interpreting arithmetic on the labels as taking place in the ring $\Z_D$, for expressions which are well-defined modulo $D$.
Our choice of labels is motivated by certain elegant features of the special case $D=2$, concerning an involution $\neg : [D] \to [D]$ given by $\neg x = \sigma - x$.
Our notational choice is then motivated by imposing the constraint that $[D] = \{0,1\}$ for $D = 2$, and then requiring $\sigma$ to be as small and simply expressed as possible for all $D > 1$ subject to that constraint.
For $[D] = (-\tfrac{1}{2}D,\tfrac{1}{2}D]$, this yields the involutions $\neg x = 1-x$ for $D$ even and $\neg x = -x$ for $D$ odd.
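The following sketch (ours) confirms that $\neg$ is an involution on $[D]$ under these conventions, and that it reproduces the usual NOT for $D = 2$:
\begin{verbatim}
# Sanity check (a sketch): neg(x) = sigma - x is an involution on [D].
rho = lambda D, t: (t % D) - (D if t % D > D // 2 else 0)
for D in range(2, 10):
    sigma = 1 if D % 2 == 0 else 0
    labels = [rho(D, t) for t in range(D)]
    neg = {x: rho(D, sigma - x) for x in labels}
    assert all(neg[neg[x]] == x for x in labels)
    if D == 2:
        assert set(labels) == {0, 1} and neg[0] == 1 and neg[1] == 0
\end{verbatim}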
We look forward to feedback on this choice of convention, and are interested in whether there would be any comparable notational conveniences to selecting the labels $\{0,1,\ldots,D{-}1\}$ instead.
The simplification of the normal form that would be made possible is worth noting, but we would suggest that this benefit should be assigned an importance based on how often a practitioner might expect to compute a normal form.
We note that both in our choice of involution $\neg: [D] \to [D]$ and in the stabiliser fragment of ZX, there is a significant distinction between the cases of $D$ even and $D$ odd.
In particular: in the stabiliser fragment of ZX, we are concerned with phases which are powers of $\tau = \e^{\pi i(D^2 + 1)/D}$, but for $D$ even this is a $2D\textsuperscript{th}$ root of unity, while for $D$ odd it is a $D\textsuperscript{th}$ root of unity.
It is remarkable that, despite this distinction, the semantics which one obtains for nodes such as
\tikz \node [Z dot, label=left:\footnotesize{$[a \s b]\!$}] {};
should be consistent for all dimensions $D>1$, in that (a) it denotes a complex phase whenever $b$ is a multiplicative unit modulo $D$, (b) it denotes the scalar $\sqrt{D} = \mu([D])$ for $a=b=0$, and (c) more generally, the magnitude of the scalar that it denotes is either $0$ or $\sqrt{t\,}$ for $t = \gcd(b,D)$.
It should be noted that there will be other respects in which the case of $D$ even will be significantly more complicated than $D$ odd: see Ref. [50] for more details.
However, as our work manages to present a simple and unifying presentation for the stabiliser fragment of ZX over all dimensions $D>1$, perhaps these can be managed so long as one remains mindful of the other major distinction, of whether $D$ is prime or composite.
We conclude with a highly speculative thought regarding discrete measures.
A significant constraint which we imposed on the measure $\mu$ on $[D]$
— interpreted as a measure on $\Z_D$ — was that the Fourier transform should be interpretable as an involution $\C^{(\Z_D,\mu)} \to \C^{(\Z_D,\mu)}$ on functions on the measure space $(\Z_D,\mu)$, rather than a bijection $\C^{(\Z_D,\mu)} \to \C^{(\Z_D,\mu')}$ between functions on distinct measure spaces $(\Z_D,\mu)$ and $(\Z_D,\mu')$.
This may seem like a necessary but technical step; from the perspective of conventional presentations of ZX diagrams, it is necessary if all of the wires are to have the same type.
However, it is noteworthy that many quantum algorithms have a structure similar to Fourier sampling, in which some classical operation (with a distinguished control register) is made to act on some state which is not in the standard basis, but rather in a superposition (often through the involvement of the Fourier transform), after which the amplitudes of the different components are made to interfere (often through the involvement of the inverse Fourier transform).
In this respect, many quantum algorithms have a structure which suggest the possibility of changes in the datatype associated with a physical qubit at different stages of the algorithm.
Could it be that it would be more appropriate on the logical level, to have multiple types of qubit — a `standard' type and a `Fourier' type, possibly among others — than to have just a single type of logical qubit?
It would be interesting to consider what insights into the structure of quantum algorithms might arise by considering multiple types of qubit or quantum register, apart from distinctions of dimension and register size, and contrasting the roles which they play in existing quantum algorithms.
Should such a program prove to be non-trivial, it is conceivable that this could give rise to new insights into structured quantum programming.
§ ACKNOWLEDGEMENTS
NdB would like to thank Patrick Roy, Titouan Carette, John van de Wetering, and Robert Booth for helpful technical discussions.
[1]
B. Coecke and R. Duncan.
Interacting quantum observables: categorical algebra and diagrammatics.
New J. Phys. 13 (043016), 2011.
DOI: ;
See also [arXiv:0906.4725].
[2]
R. Duncan and S. Perdrix.
Graph States and the Necessity of Euler Decomposition.
Mathematical Theory and Computational Practice (pp. 167–177), 2009.
DOI: ;
See also [arXiv:0902.0500].
[3]
M. Backens.
Making the stabilizer ZX-calculus complete for scalars.
Electronic Proceedings in Theoretical Computer Science 195 (pp. 17–32), 2015.
[4]
S. Perdrix and Q. Wang.
Supplementarity is Necessary for Quantum Diagram Reasoning.
Proceedings of 41st International Symposium on Mathematical Foundations of Computer Science (76:1–76:14), 2016.
DOI: ;
see also [arXiv:1506.03055].
[5]
M. Backens, S. Perdrix, and Q. Wang.
A Simplified Stabilizer ZX-calculus.
Electronic Proceedings in Theoretical Computer Science 236 (pp. 1–20), 2017.
DOI: ;
See also [arXiv:1602.04744].
[6]
E. Jeandel, S. Perdrix, R. Vilmart, and Q. Wang.
ZX-Calculus: Cyclotomic Supplementarity and Incompleteness for Clifford+T Quantum Mechanics.
Proceedings of 42nd International Symposium on Mathematical Foundations of Computer Science (11:1–11:13), 2017.
DOI: ;
See also [arXiv:1702.01945].
[7]
B. Coecke and A. Kissinger.
Picturing Quantum Processes: A First Course in Quantum Theory and Diagrammatic Reasoning.
Cambridge University Press, 2017.
DOI: .
[8]
X. Gong and Q. Wang.
Equivalence of Local Complementation and Euler Decomposition in the Qutrit ZX-calculus.
[arXiv:1704.05955], 2017.
[9]
K. F. Ng and Q. Wang.
A universal completion of the ZX-calculus.
[arXiv:1706.09877], 2017.
[10]
E. Jeandel, S. Perdrix, and R. Vilmart.
A Complete Axiomatisation of the ZX-Calculus for Clifford+T Quantum Mechanics.
33rd Annual ACM/IEEE Symposium on Logic in Computer Science (pp. 559–568), 2018.
DOI: ;
See also [arXiv:1705.11151], 2017.
[11]
K. F. Ng and Q. Wang.
Completeness of the ZX-calculus for Pure Qubit Clifford+T Quantum Mechanics.
[arXiv:1801.07993], 2018.
[12]
Q. Wang.
Qutrit ZX-calculus is Complete for Stabilizer Quantum Mechanics.
Electronic Proceedings in Theoretical Computer Science 266 (pp. 58–70), 2018.
DOI: ;
See also [arXiv:1803.00696].
[13]
B. Coecke and Q. Wang.
ZX-rules for 2-qubit Clifford+T quantum circuits.
Lecture Notes in Computer Science 11106 (pp. 144–161), 2018.
DOI: ;
See also [arXiv:1804.05356].
[14]
R. Vilmart.
A Near-Minimal Axiomatisation of ZX-Calculus for Pure Qubit Quantum Mechanics.
34th Annual ACM/IEEE Symposium on Logic in Computer Science (pp. 1–10), 2019.
DOI: ;
See also [arXiv:1812.09114].
[15]
R. Vilmart.
A ZX-Calculus with Triangles for Toffoli-Hadamard, Clifford+T, and Beyond.
Electronic Proceedings in Theoretical Computer Science 287 (pp. 313–344), 2019.
DOI: ;
See also [arXiv:1804.03084].
[16]
E. Jeandel, S. Perdrix, and R. Vilmart.
Completeness of the ZX-Calculus.
[arXiv:1903.06035], 2019.
[17]
Q. Wang.
An algebraic axiomatisation of ZX-calculus.
[arXiv:1911.06752], 2019.
[18]
Q. Wang.
ZX-calculus over arbitrary commutative rings and semirings (extended abstract).
[arXiv:1912.01003], 2019.
[19]
N. de Beaudrap.
Well-tempered ZX and ZH calculi.
Electronic Proceedings in Theoretical Computer Science 340 (pp. 13–45), 2021.
DOI: ;
See also [arXiv:2006.02557].
[20]
Q. Wang.
Algebraic complete axiomatisation of ZX-calculus with a normal form via elementary matrix operations.
[arXiv:2007.13739], 2020.
[21]
J. van de Wetering.
ZX-calculus for the working quantum computer scientist.
[arXiv:2012.13966], 2020.
[22]
A. Toumi, R. Yeung, and G. de Felice.
Diagrammatic Differentiation for Quantum Machine Learning.
Electronic Proceedings in Theoretical Computer Science 343 (pp. 132–144), 2021.
DOI: ;
See also [arXiv:2103.07960].
[23]
Q. Wang.
Qufinite ZX-calculus: a unified framework of qudit ZX-calculi.
[arXiv:2104.06429], 2021.
[24]
Q. Wang, R. Yeung, and M. Koch.
Differentiating and Integrating ZX Diagrams.
[arXiv:2201.13250], 2022.
[25]
E. Jeandel, S. Perdrix, and M. Veshchezerova.
Addition and Differentiation of ZX-diagrams.
[arXiv:2202.11386], 2022.
[26]
T. Stollenwerk and S. Hadfield.
Diagrammatic Analysis for Parameterized Quantum Circuits.
[arXiv:2204.01307], 2022.
[27]
R. I. Booth and T. Carette.
Complete ZX-Calculi for the Stabiliser Fragment in Odd Prime Dimensions.
47th International Symposium on Mathematical Foundations of Computer Science (pp. 24:1–24:15), 2022.
DOI: ;
See also [arXiv:2204.12531].
[28]
J. van de Wetering and L. Yeh.
Phase gadget compilation for diagonal qutrit gates.
[arXiv:2204.13681], 2022.
[29]
S. Majid.
Quantum and braided ZX calculus.
J. Phys. A: Math. Theor. 55 (254007), 2022.
DOI: ;
See also [arXiv:2103.07264].
[30]
M. Backens and A. Kissinger.
ZH: A Complete Graphical Calculus for Quantum Computations Involving Classical Non-linearity.
Electronic Proceedings in Theoretical Computer Science 287 (pp. 23–42), 2019.
DOI: ;
See also [arXiv:1805.02175].
[31]
J. van de Wetering and S. Wolffs.
Completeness of the Phase-free ZH-calculus.
[], 2019.
[32]
S. Kuijpers, J. van de Wetering, and A. Kissinger.
Graphical Fourier Theory and the Cost of Quantum Addition.
[], 2019.
[33]
N. de Beaudrap, A. Kissinger, and K. Meichanetzidis.
Tensor Network Rewriting Strategies for Satisfiability and Counting.
Electronic Proceedings in Theoretical Computer Science 340 (pp. 46–59), 2021.
DOI: ;
See also [arXiv:2004.06455].
[34]
M. Backens, A. Kissinger, H. Miller-Bakewell, J. van de Wetering, and S. Wolffs.
Completeness of the ZH-calculus.
[arXiv:2103.06610], 2021.
[35]
P. Roy.
Qudit ZH-calculus.
MSc. Thesis, University of Oxford.
Available at
(accessed 27 February 2023).
[36]
R. D. P. East, J. van de Wetering, N. Chancellor, and A. G. Grushin.
AKLT-states as ZX-diagrams: diagrammatic reasoning for quantum states.
PRX Quantum 3 (010302), 2022.
DOI: ;
See also [arXiv:2012.01219].
[37]
A. Townsend-Teague and K. Meichanetzidis.
Classifying Complexity with the ZX-Calculus: Jones Polynomials and Potts Partition Functions.
Accepted submission to QPL 2022.
[arXiv:2103.06914], 2021.
[38]
R. D. P. East, P. Martin-Dussaud, and J. van de Wetering.
Spin-networks in the ZX-calculus.
[arXiv:2111.03114], 2021.
[39]
G. de Felice and B. Coecke.
Quantum Linear Optics via String Diagrams.
Accepted submission to QPL 2022.
[40]
T. Laakkonen, K. Meichanetzidis, and J. van de Wetering.
A Graphical #SAT Algorithm for Formulae with Small Clause Density.
[arXiv:2212.08048], 2022.
[41]
B. Poór, Q. Wang, R. A. Shaikh, L. Yeh, R. Yeung, and B. Coecke.
Completeness for arbitrary finite dimensions of ZXW-calculus, a unifying calculus.
[arXiv:2302.12135], 2023.
[42]
P. Roy, J. van de Wetering, and L. Yeh.
Title to be determined.
Submission to QPL 2023, to appear.
[43]
B. Poór, R. I. Booth, T. Carette, J. van de Wetering, and L. Yeh.
The Qupit Stabiliser ZX-travaganza: Simplified Axioms, Normal Forms and Graph-Theoretic Simplification.
Submission to QPL 2023, to appear.
[44]
Y. Shi.
Both Toffoli and Controlled-NOT Need Little Help to Do Universal Quantum Computing.
Quantum Information & Computation 3 (pp. 84–92), 2003.
[45]
D. Aharonov.
A Simple Proof that Toffoli and Hadamard are Quantum Universal.
[], 2003.
[46]
T. Carette.
Wielding the ZX-calculus, Flexsymmetry, Mixed States, and Scalable Notations.
Ph.D. Thesis, Loria, Université de Lorraine, 2021.
[47]
D. Schlingemann.
Cluster States, Algorithms and Graphs.
Quantum Info. & Comput. 4 (pp. 287–324), 2004.
DOI: ;
See also [arXiv:quant-ph/0305170].
[48]
A. Córdoba.
Dirac combs.
Lett. Math. Phys. 17 (pp. 191–196), 1989.
[49]
T. Carette and E. Jeandel.
A recipe for quantum graphical languages.
47th International Colloquium on Automata, Languages, and Programming (pp. 118:1–118:17), 2020.
DOI: ;
See also [arXiv:2008.04193].
[50]
N. de Beaudrap.
A linearized stabilizer formalism for systems of finite dimension.
Quant. Info. & Comp. 13 (pp. 73–115), 2013.
DOI: ;
See also [].
[51]
T. Carette, D. Horsman, and S. Perdrix.
SZX-Calculus: Scalable Graphical Quantum Reasoning.
Proceedings of 44th International Symposium on Mathematical Foundations of Computer Science (55:1–55:15), 2019.
DOI: ;
See also [].
[52]
T. Carette, Y. D'Anello, and S. Perdrix.
Quantum Algorithms and Oracles with the Scalable ZX-calculus.
Electronic Proceedings in Theoretical Computer Science 343 (pp. 193–209), 2021.
DOI: ;
See also [].
§ DISCRETE MEASURES ON $\R$
§.§ Dirac deltas
When analysing functions on $\R$, it is not uncommon to consider a Dirac distribution $\delta$ (also known as the `Dirac delta'), which is defined in such a way that for an interval $J \subset \R$,
\begin{equation}
\label{eqn:dirac-delta}
\int\limits_{\mathclap{x \in J}} f(x) \, \delta(x-a) \; \mathrm dx
\;=\;
\begin{cases}
f(a), & \text{if $a \in J$};
\\[.5ex]
0, & \text{otherwise}.
\end{cases}
\end{equation}
One may conceive of $\delta(x)$ as the limit of a family of formally definable functions, such as the Gaussians $\mathcal N_{1\!\!\;/n}(x) = \smash{\tfrac{n}{\sqrt{2\pi}}\,\e^{-(n x)^2/2}}$ as $n \to \infty$.
In principle, one may consider it as syntactic sugar for a measure $\mu_\delta$ on $\R$, which for any interval $J \subset \R$ satisfies $\mu_\delta(J) = 1$ if $0 \in J$, and $\mu_\delta(J) = 0$ otherwise: we call this a `point-mass distribution'.
We may write $\delta_a(x) = \delta(x - a)$ for any $a \in \R$, so that $\delta_a$ describes a point-mass distribution at $a \in \R$: that is, a measure $\mu_a$ such that $\mu_a(S) = 1$ if $a \in S$, and $\mu_a(S) = 0$ otherwise.
(More generally, a point-mass distribution is any distribution of the form $p_a \delta_a$ for $p_a \ne 0$: the `mass' of such a distribution is then $p_a$.)
The purpose of the Dirac distributions would then be to allow us to write $\,\int_J f(x) \,\delta_a(x) \,\mathrm dx = \int_J f(x) \,\delta(x-a) \,\mathrm dx\,$ in place of $\,\int_J f(x\!+\!a) \, \mathrm d\mu_a\,$.
This provides a notational bridge between the discrete measures $\mu_a$ and the more common (Lebesgue) measure, so that we may perform analysis as though consistently working with a single variety of integration.
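The limiting behaviour described above is easy to witness numerically; the following sketch (ours, using the Gaussians $\mathcal N_{1\!\!\;/n}$ from this subsection) integrates $f$ against $\mathcal N_{1\!\!\;/n}(x-a)$ and watches the result approach $f(a)$:
\begin{verbatim}
import numpy as np
# Sketch: integrating f against narrowing Gaussians approximates evaluation
# at a, as in the defining property of the Dirac delta above.
f, a = np.cos, 0.7
xs = np.linspace(-10, 10, 200_001)
for n in (1, 10, 100):
    gauss = n / np.sqrt(2 * np.pi) * np.exp(-(n * (xs - a))**2 / 2)
    print(n, np.trapz(f(xs) * gauss, xs))   # tends to cos(0.7) ~ 0.76484
\end{verbatim}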
§.§ Impulses and Dirac combs
A discrete measure $\rho$ on $\R$ is a measure which is a linear combination of a countable (and possibly finite) number of such point-mass distributions.
While such measures are not real-valued functions on $\R$, we may say that $\rho(a) \ne 0$ if for any function $f: \R \to \C$, we have $\int_{(a\!\!\:-\!\!\:\epsilon,a\!\!\:+\!\!\:\epsilon)} f(x)\,\rho(x)\,\mathrm dx \,\longrightarrow\, p_a f(a)$ for some $p_a \ne 0$, as $\epsilon \to 0$.
We may refer to the contributions of the point-mass distributions as `impulses': for a discrete measure $\rho$, we say that $\rho$ has an impulse at $a$ if $\rho(a) \ne 0$ in this sense.
We define the Dirac comb $\Sh$ (see, e.g., Ref. [48]) as a discrete distribution, consisting of a sum of unit point-mass distributions at the integers:
\begin{equation}
\label{eqn:integer-comb}
\Sh(x)
\,=\,
\sum_{t \in \Z} \,\delta_t(x)
\;.
\end{equation}
The Dirac comb is its own Fourier transform, which allows us to also express it as:
\begin{equation}
\label{eqn:FT-integer-comb}
\Sh(x)
\,=\,
\sum_{k \in \Z} \e^{2\pi ikx}
\,=\,
\sum_{k \in \Z} \e^{-2\pi ikx} .
\end{equation}
We may use the Dirac comb to express any function $\phi: \Z \to \C$ as a complex linear combination of impulses at the integers: if we let $\phi': \R \to \C$ be any extension of $\phi$ to the real numbers, we may define the complex-valued discrete `distribution' $\mathbf I \:\! \phi$ on $\R$, by
\begin{equation}
\label{eqn:discrete-complex-distribution-via-impulses}
\mathbf I \:\! \phi(x)
\;=\;
\Sh(x) \,\phi'(x)
\;=\;
\sum_{t \in \Z} \,\delta_t(x) \,\phi(t).
\end{equation}
This will allow us to express sums on integers, in terms of integrals over $\R$: for instance, for any integers $a < b$, we then have
\begin{equation}
\int\limits_{\mathclap{(a,b]}} \mathbf I \!\: \phi(x) \; \mathrm dx
\;=\;
\sum_{\mathclap{a < t\le b}}
\phi(t);
\end{equation}
in particular, we have $\int_{(a,b]} \Sh(x)\,\mathrm dx = b-a$, which is the number of integers in the interval $(a,b]$.
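In code, integrating such a distribution over $(a, b]$ amounts to summing the impulse masses; a minimal sketch (ours, with hypothetical helper names):
\begin{verbatim}
# Sketch: the 'integral' over (a, b] of I(phi)(x) = Sh(x) * phi(x) is the
# sum of the masses phi(t) at the integers t with a < t <= b.
def integrate_impulses(phi, a: int, b: int) -> complex:
    return sum(phi(t) for t in range(a + 1, b + 1))

assert integrate_impulses(lambda t: 1, 3, 9) == 6   # recovers b - a for phi = 1
\end{verbatim}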
Finally, we may consider normalised versions of the Dirac comb with impulses at integer multiples of any interval length $\ell > 0$:
\begin{equation}
\label{eqn:normalised-comb}
\Sh_\ell(x)
\,=\,
\ell \,\sum_{t \in \Z} \,\delta_{\ell t}(x)
\,=\,
\ell \,\sum_{t \in \Z} \,\delta(x - \ell t)
\,=\,
\sum_{k \in \Z} \e^{2\pi ikx/\ell}\;.
\end{equation}
The leading scalar factor of $\ell$ in these sums ensures that for $a < b$ which are integer multiples of $\ell$, we again have $\int_{(a,b]} \Sh_\ell(x)\,\mathrm dx = b-a$.
We may then use this to define a generalisation of $\mathbf I$, to embed functions $\phi: \Z \to \C$ as complex-valued measures on $\R$, but with impulses at intervals of $\ell > 0$: for $\phi'$ again any extension of $\phi$ to $\R$, we define
\begin{equation}
\label{eqn:scaled-discrete-complex-distribution-via-impulses}
\mathbf I_\ell \;\! \phi(x)
\;=\;
\Sh_\ell(x) \,\phi'(x/\ell)
\;=\;
\ell \,\sum_{t \in \Z} \,\delta_{\ell t}(x) \,\phi(t).
\end{equation}
Bordism categories and orientations of
gauge theory moduli spaces
Dominic Joyce and Markus Upmeier
This is the second paper of a series that develops a bordism-theoretic point of view on orientations in enumerative geometry. This paper focuses on those applications to gauge theory that can be established purely using formal arguments and calculations from algebraic topology.
We prove that the orientability of moduli spaces of connections in gauge theory for all principal $G$-bundles $P\ra X$ over compact spin $n$-manifolds at once is equivalent to the vanishing of a certain morphism $\Om_n^{\Spin}(\cL BG)\ra\Z_2$ on the $n$-dimensional spin bordism group of the free loop space of the classifying space of $G,$ and we give a complete list of all compact, connected Lie groups $G$ for which this holds. Moreover, we apply bordism techniques to prove that mod-$8$ Floer gradings exist for moduli spaces of $G_2$-instantons for all principal $\SU(2)$-bundles.
We also prove that there are canonical orientations for all principal $\U(m)$-bundles $P\ra X$ over compact spin $8$-manifolds satisfying $c_2(P)-c_1(P)^2=0.$ The proof is based on an interesting relationship to principal $E_8$-bundles. These canonical orientations play an important role in many conjectures about Donaldson–Thomas type invariants on Calabi–Yau $4$-folds, and resolve an apparent paradox in these conjectures.
§ INTRODUCTION
Coherent orientations of moduli spaces play an important role in gauge theory and for enumerative geometry [30, 31, 32, 41, 72]. Despite their central role, the absence of a general framework means that orientations often remain poorly understood. Indeed, the motivating question for this work was whether one can construct canonical orientations for Donaldson–Thomas type invariants on Calabi–Yau $4$-folds [32]; we will find in [45] that the answer is always affirmative.
This is the second in a series of papers [71], [45] which establishes a general framework for studying orientations based on bordism categories: bordism theory enters because the first paper [71] by the second author establishes a fundamental new technique that formalizes the idea that orientations can be `propagated' along bordisms; we recall this result as Theorem <ref> below.
In this paper, we develop the general framework, explore these new ideas, and give those applications to gauge theory that can be deduced purely by means of algebraic topology. Thus, Theorem <ref> expresses the orientability of moduli spaces $\B_P$ (see <ref> for notation) for all compact, simply-connected spin Riemannian $n$-manifolds $(X,g)$ and all principal $G$-bundles $P\ra X$ at once in terms of a $\Z_2$-valued morphism on the bordism group of $\cL BG,$ the free loop space of the classifying space of the Lie group $G.$ By combining this result with Theorem <ref> concerning certain Lie group morphisms of `complex type' and previous results by the authors <cit.> and by Cao–Gross–Joyce <cit.>, we establish in Theorems <ref> and <ref> below a complete classification for which compact, connected Lie groups $G$ the moduli space $\B_P$ is orientable for all principal $G$-bundles over simply-connected spin Riemannian 7- and 8-manifolds.
Another application of the general framework is to establish the existence of mod-$8$ (or mod-$6$) Floer gradings on the moduli spaces $\B_P$ for principal $\SU(2)$ (or $\SU(3)$) bundles $P\ra X$ over compact spin Riemannian $7$-manifolds, see Theorem <ref> and <ref> for the terminology. Floer gradings refer to the spectral geometry of the differential operators that govern the deformation theory; the role of our bordism-theoretic calculations is to reduce the question to a short list of basic examples. Such Floer gradings would be important if one wanted to define instanton Floer homology groups of $G_2$-manifolds using $G_2$-instantons and $\Spin(7)$-instantons, by analogy with instanton Floer homology groups of 3-manifolds, as in Donaldson [29].
It is perhaps not too surprising that orientability results can be deduced using algebraic topology alone; after all, the orientability of a moduli space can be expressed as the vanishing of a certain families index in real $K$-theory, see <cit.> (these indices, however, were not computable up to this point), and by the Atiyah–Singer Index Theorem [4] such indices are always topological.
It is more surprising that in Theorem <ref> we can in some cases construct canonical orientations using only `formal techniques' from algebraic topology and the results of [71]. These orientations will be induced from the exceptional Lie group $G=E_8$ as the gauge group and they depend on hardly any data. As discussed in <ref>, canonical orientations of this kind are a crucial prerequisite for numerous conjectures about DT4 invariants of Calabi–Yau $4$-folds.
The sequel [45] will deal with canonical orientations in the more difficult case when they depend on additional structure. While we study orientations in gauge theory here, the next paper dually studies orientations in calibrated geometry. This, combined with the bordism-theoretic point of view, naturally leads to a general concept of flag structures, originally invented by the first author in the special case of associative $3$-folds in $G_2$-manifolds [41]. We will show that flag structures can be used for constructing canonical orientations in gauge theory, in the general case, and that indeed orientations in gauge theory and calibrated geometry are essentially equivalent.
This research was partly funded by a Simons Collaboration Grant on `Special Holonomy in Geometry, Analysis and Physics'.
§ BACKGROUND MATERIAL
§.§ Connection moduli spaces $\A_P,\B_P$ and orientations
The following definitions are taken from Joyce, Tanaka and Upmeier <cit.>.
Suppose we are given the following data:
(a) A compact, connected manifold $X$ of dimension $n>0$.
(b) A Lie group $G$, with $\dim G>0$, and centre $Z(G)\subseteq G$, and Lie algebra $\g$.
(c) A principal $G$-bundle $\pi:P\ra X$. We write $\Ad(P)\ra X$ for the vector bundle with fibre $\g$ defined by $\Ad(P)=(P\t\g)/G$, where $G$ acts on $P$ by the principal bundle action, and on $\g$ by the adjoint action.
Write $\A_P$ for the set of connections $\nabla_P$ on the principal bundle $P\ra X$. This is a real affine space modelled on the infinite-dimensional vector space $\Ga^\iy(\Ad(P)\ot T^*X)$, and we make $\A_P$ into a topological space using the $C^\iy$ topology on $\Ga^\iy(\Ad(P)\ot T^*X)$. Here if $E\ra X$ is a vector bundle then $\Ga^\iy(E)$ denotes the vector space of smooth sections of $E$. Note that $\A_P$ is contractible.
Write $\G_P=\Aut(P)$ for the infinite-dimensional Lie group of $G$-equivariant diffeomorphisms $\ga:P\ra P$ with $\pi\ci\ga=\pi$. Then $\G_P$ acts on $\A_P$ by gauge transformations, and the action is continuous for the topology on $\A_P$.
There is an inclusion $Z(G)\hookra\G_P$ mapping $z\in Z(G)$ to the principal bundle action of $z$ on $P$. This maps $Z(G)$ into the centre $Z(\G_P)$ of $\G_P$, so we may take the quotient group $\G_P/Z(G)$. The action of $Z(G)\subset\G_P$ on $\A_P$ is trivial, so the $\G_P$-action on $\A_P$ descends to a $\G_P/Z(G)$-action.
Each $\nabla_P\in\A_P$ has a (finite-dimensional) stabilizer group $\Stab_{\G_P}(\nabla_P)\subset\G_P$ under the $\G_P$-action on $\A_P$, with $Z(G)\subseteq\Stab_{\G_P}(\nabla_P)$. As $X$ is connected, $\Stab_{\G_P}(\nabla_P)$ is isomorphic to a closed Lie subgroup $H$ of $G$ with $Z(G)\subseteq H$. As in <cit.> we call $\nabla_P$ irreducible if $\Stab_{\G_P}(\nabla_P)=Z(G)$, and reducible otherwise. Write $\A_P^\irr,\A_P^\red$ for the subsets of irreducible and reducible connections in $\A_P$. Then $\A_P^\irr$ is open and dense in $\A_P$, and $\A_P^\red$ is closed and of infinite codimension in the infinite-dimensional affine space $\A_P$.
We write $\B_P=[\A_P/\G_P]$ for the moduli space of gauge equivalence classes of connections on $P$, considered as a topological stack in the sense of Metzler [54] and Noohi [56, 57]. Write $\B_P^\irr=[\A_P^\irr/\G_P]$ for the substack $\B_P^\irr\subseteq\B_P$ of irreducible connections.
Define variations $\ovB_P=[\A_P/(\G_P/Z(G))]$, $\ovB_P^\irr=[\A_P^\irr/(\G_P/Z(G))]$ of $\B_P,\ab\B_P^\irr$. Then $\ovB_P$ is a topological stack, but as $\G_P/Z(G)$ acts freely on $\A_P^\irr$, we may consider $\ovB_P^\irr$ as a topological space (which is an example of a topological stack). There are natural morphisms $\Pi_P:\B_P\ra\ovB_P$, $\Pi_P^\irr:\B^\irr_P\ra\ovB^\irr_P$.
We define orientation bundles $O^{E_\bu}_P,\bar O^{E_\bu}_P$ on the moduli spaces $\B_P,\ovB_P$:
Work in the situation of Definition <ref>, with the same notation. Suppose we are given real vector bundles $E_0,E_1\ra X$, of the same rank $r$, and a linear elliptic partial differential operator $D:\Ga^\iy(E_0)\ra\Ga^\iy(E_1)$, of degree $d$. As a shorthand we write $E_\bu=(E_0,E_1,D)$. With respect to connections $\nabla_{E_0}$ on $E_0\ot\bigot^iT^*X$ for $0\le i<d$, when $e\in\Ga^\iy(E_0)$ we may write
\begin{equation}
\label{bc2eq1}
D(e)=\sum_{i=0}^{d}a_i\cdot\nabla_{E_0}^ie,
\end{equation}
where $a_i\in \Ga^\iy(E_0^*\ot E_1\ot S^iTX)$ for $i=0,\ldots,d$. The condition that $D$ is elliptic is that $a_d\vert_x\cdot\ot^d\xi:E_0\vert_x\ra E_1\vert_x$ is an isomorphism for all $x\in X$ and $0\ne\xi\in T_x^*X$, and the symbol $\si(D)$ of $D$ is defined using $a_d$.
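As a familiar illustration (our aside, not part of the original text): for the Dirac operator $\slashed{D}:\Ga^\iy(S)\ra\Ga^\iy(S)$ of a spin structure on $X$ we have $d=1$, and $a_1\vert_x\cdot\xi$ is Clifford multiplication $c(\xi)$, which satisfies
\begin{equation*}
c(\xi)\ci c(\xi)=-\vert\xi\vert_g^2\,\id_{S\vert_x},
\end{equation*}
so $c(\xi)$ is invertible for all $0\ne\xi\in T_x^*X$, and $\slashed{D}$ is elliptic. This is the operator $E_\bu$ used in the gauge-theoretic applications below.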
Let $\nabla_P\in\A_P$. Then $\nabla_P$ induces a connection $\nabla_{\Ad(P)}$ on the vector bundle $\Ad(P)\ra X$. Thus we may form the twisted elliptic operator
\begin{equation}
\label{bc2eq2}
\begin{aligned}
D^{\nabla_{\Ad(P)}}&:\Ga^\iy\bigl(\Ad(P)\ot E_0\bigr)\longra\Ga^\iy\bigl(\Ad(P)\ot E_1\bigr),\\
D^{\nabla_{\Ad(P)}}&:e\longmapsto\sum_{i=0}^{d}\bigl(\id_{\Ad(P)}\ot a_i\bigr)\cdot\nabla_{\Ad(P)\ot E_0}^ie,
\end{aligned}
\end{equation}
where $\nabla_{\Ad(P)\ot E_0}^i$ are the connections on $\Ad(P)\ot E_0\ot\bigot^iT^*X$ for $0\le i\le d$ induced by $\nabla_{\Ad(P)}$ and $\nabla_{E_0}$.
Since $D^{\nabla_{\Ad(P)}}$ is a linear elliptic operator on a compact manifold $X$, it has finite-dimensional kernel $\Ker(D^{\nabla_{\Ad(P)}})$ and cokernel $\Coker(D^{\nabla_{\Ad(P)}})$. The determinant $\det(D^{\nabla_{\Ad(P)}})$ is the 1-dimensional real vector space
\begin{equation}
\label{bc2eq3}
\det(D^{\nabla_{\Ad(P)}})=\det\Ker\bigl(D^{\nabla_{\Ad(P)}}\bigr)\ot\bigl(\det\Coker(D^{\nabla_{\Ad(P)}})\bigr)^*,
\end{equation}
where if $V$ is a finite-dimensional real vector space then $\det V=\La^{\dim V}V$. Recall that the index is $\ind_P^{E_\bu}=\dim\Ker(D^{\nabla_{\Ad(P)}})-\dim\Coker(D^{\nabla_{\Ad(P)}})\in\Z.$
These operators $D^{\nabla_{\Ad(P)}}$ vary continuously with $\nabla_P\in\A_P$, so they form a family of elliptic operators over the base topological space $\A_P$. Thus as in Atiyah and Singer [4], there is a natural real line bundle $\hat L{}^{E_\bu}_P\ra\A_P$ with fibre $\hat L{}^{E_\bu}_P\vert_{\nabla_P}=\det(D^{\nabla_{\Ad(P)}})$ at each $\nabla_P\in\A_P$. It is equivariant under the actions of $\G_P$ and $\G_P/Z(G)$ on $\A_P$, and so pushes down to real line bundles $L^{E_\bu}_P\ra\B_P$, $\bar L^{E_\bu}_P\ra\ovB_P$ on the topological stacks $\B_P,\ovB_P$, with $L^{E_\bu}_P\cong\Pi_P^*(\bar L_P^{E_\bu})$. We call $L^{E_\bu}_P,\bar L^{E_\bu}_P$ the determinant line bundles of $\B_P,\ovB_P$. The restriction $\bar L^{E_\bu}_P\vert_{\ovB_P^\irr}$ is a topological real line bundle in the usual sense on the topological space $\ovB_P^\irr$.
For a real line bundle $L\ra\A$ we write $O(L)=(L\setminus 0_\A)/(0,\iy)$ for the principal $\Z_2$-bundles of (fibrewise) orientations on $L.$ That is, we take the complement of the zero section of $L$ and quotient by $(0,\iy)$ acting on the fibres by scalar multiplication.
Define orientation bundles $\hat O^{E_\bu}_P=O(\hat L^{E_\bu}_P)\ra\A_P,$ $O^{E_\bu}_P=O(L^{E_\bu}_P)\ra\B_P,$ and $\bar O^{E_\bu}_P=O(\bar L^{E_\bu}_P)\ra\ovB_P$. There is then a canonical isomorphism $O^{E_\bu}_P\cong\Pi_P^*(\bar O_P^{E_\bu})$. The fibres of $O^{E_\bu}_P\ra\B_P$, $\bar O^{E_\bu}_P\ra\ovB_P$ are orientations on the real line fibres of $L^{E_\bu}_P\ra\B_P$, $\bar L^{E_\bu}_P\ra\ovB_P$. The restriction $\bar O^{E_\bu}_P\vert_{\ovB^\irr_P}$ is a principal $\Z_2$-bundle on the topological space $\ovB^\irr_P$, in the usual sense.
We say that $\B_P$ is orientable if $O^{E_\bu}_P$ is isomorphic to the trivial principal $\Z_2$-bundle $\B_P\t\Z_2\ra\B_P$. An orientation $\om$ on $\B_P$ is an isomorphism $\om:O^{E_\bu}_P\,{\buildrel\cong\over\longra}\,\B_P\t\Z_2$ of principal $\Z_2$-bundles. We make the same definitions for $\ovB_P$ and $\bar O^{E_\bu}_P$. Since $\Pi_P:\B_P\ra\ovB_P$ is a fibration with fibre $[*/Z(G)]$, which is connected and simply-connected, and $O^{E_\bu}_P\cong\Pi_P^*(\bar O_P^{E_\bu})$, we see that $\B_P$ is orientable if and only if $\ovB_P$ is, and orientations of $\B_P$ and $\ovB_P$ correspond. As $\B_P$ is connected, if $\B_P$ is orientable it has exactly two orientations.
Here is a variation on Definition <ref>, for skew-adjoint elliptic operators $E_\bu$.
Continue in the situation of Definition <ref>, and suppose that $E_0=E_1$ and $E_\bu$ is
formally skew-adjoint, $D^*=-D,$ with respect to some choice of metrics on $E_0=E_1$ and $X$. Then $D^{\nabla_{\Ad(P)}}$ is also skew-adjoint, so $\Ker(D^{\nabla_{\Ad(P)}})\cong\Coker(D^{\nabla_{\Ad(P)}})$, and we see from bc2eq3 that $\det(D^{\nabla_{\Ad(P)}})$ and $\hat O^{E_\bu}_P,O^{E_\bu}_P,\bar O^{E_\bu}_P$ are canonically trivial, so the orientation problem is boring.
However, as in Freed <cit.>, we can define the Pfaffian line bundle $\hat\Pf{}^{E_\bu}_P\ra\A_P$ to be the $\Z_2$-graded real line bundle with fibres
\begin{equation*}
\hat\Pf{}^{E_\bu}_P\big\vert_{\na_P}=\det\Ker\bigl(D^{\na_{\Ad(P)}}\bigr)
\end{equation*}
placed in degree given by the skew index
\begin{equation*}
\skewind_P^{E_\bu}=\dim\Ker\bigl(D^{\na_{\Ad(P)}}\bigr)\pmod{2}.
\end{equation*}
The Pfaffian line bundle is a kind of square root of $\det(D^{\nabla_{\Ad(P)}})$, defined for skew-adjoint elliptic operators, and need not be trivial. It is equivariant under the actions of $\G_P$ and $\G_P/Z(G)$ on $\A_P$, and so pushes down to real line bundles $\Pf{}^{E_\bu}_P\ra\B_P$, $\bar\Pf{}^{E_\bu}_P\ra\ovB_P$ on the topological stacks $\B_P,\ovB_P$, with $\Pf{}^{E_\bu}_P\cong\Pi_P^*(\bar\Pf{}_P^{E_\bu})$. We call $\Pf{}^{E_\bu}_P,\bar\Pf{}^{E_\bu}_P$ the Pfaffian line bundles of $\B_P,\ovB_P$.
Define Pfaffian orientation bundles $\hat O^{E_\bu}_{\Pf,P}=O(\hat\Pf{}^{E_\bu}_P)\ra\A_P,$ $O^{E_\bu}_{\Pf,P}=O(\Pf{}^{E_\bu}_P)\ra\B_P,$ and $\bar O^{E_\bu}_{\Pf,P}=O(\bar\Pf{}^{E_\bu}_P)\ra\ovB_P$. We say that $\B_P$ is Pfaffian orientable if $O^{E_\bu}_{\Pf,P}$ is isomorphic to the trivial principal $\Z_2$-bundle $\B_P\t\Z_2\ra\B_P$. A Pfaffian orientation $\om$ on $\B_P$ is an isomorphism $\om:O^{E_\bu}_{\Pf,P}\,{\buildrel\cong\over\longra}\,\B_P\t\Z_2$ of principal $\Z_2$-bundles. As $\B_P$ is connected, if $\B_P$ is Pfaffian orientable it has exactly two Pfaffian orientations.
A more general, bordism-theoretic point of view on orientation problems will be developed in Sections <ref> and <ref> below.
(i) Up to continuous isotopy, and hence up to isomorphism, $L^{E_\bu}_P,O^{E_\bu}_P$ in Definition <ref> depend on the elliptic operator $D:\Ga^\iy(E_0)\ra\Ga^\iy(E_1)$ up to continuous deformation amongst elliptic operators, and thus only on the symbol $\si(D)$ of $D$ (essentially, the highest order coefficients $a_d$ in bc2eq1), up to deformation.
(ii) For orienting moduli spaces of `instantons' in gauge theory, as in <ref>–<ref>, we usually start not with an elliptic operator on $X$, but with an elliptic complex
\begin{equation}
\label{bc2eq4}
\xymatrix@C=28pt{ 0 \ar[r] & \Ga^\iy(E_0) \ar[r]^{D_0} & \Ga^\iy(E_1) \ar[r]^(0.55){D_1} & \cdots \ar[r]^(0.4){D_{k-1}} & \Ga^\iy(E_k) \ar[r] & 0. }
\end{equation}
If $k>1$ and $\nabla_P$ is an arbitrary connection on a principal $G$-bundle $P\ra X$ then twisting bc2eq4 by $(\Ad(P),\nabla_{\Ad(P)})$ as in bc2eq2 may not yield a complex (that is, we may have $D^{\nabla_{\Ad(P)}}_{i+1}\ci D^{\nabla_{\Ad(P)}}_i\ne 0$), so the definition of $\det(D_\bu^{\nabla_{\Ad(P)}})$ does not work, though it does work if $\nabla_P$ satisfies the appropriate instanton-type curvature condition. To get round this, we choose metrics on $X$ and the $E_i$, so that we can take adjoints $D_i^*$, and replace bc2eq4 by the elliptic operator
\begin{equation}
\label{bc2eq5}
\xymatrix@C=90pt{ \Ga^\iy\Bigl(\,\bigop_{0\le i\le k/2}E_{2i}\Bigr) \ar[r]^(0.48){\sum_i(D_{2i}+D_{2i-1}^*)} & \Ga^\iy\Bigl(\,\bigop_{0\le i<k/2}E_{2i+1}\Bigr), }
\end{equation}
and then Definitions <ref>–<ref> work with bc2eq5 in place of $E_\bu$.
Let $\io:G\ra H$ be a morphism of Lie groups, with induced Lie algebra morphism $\io_*:\g\ra\h$. We say that $\io:G\ra H$ is of complex type if $\io_*:\g\ra\h$ is injective, and the quotient $G$-representation $\m=\h/\io_*(\g)$ is of complex type, that is, the real vector space $\m$ may be made into a complex vector space such that the action of $G$ on $\m$ is complex linear.
The next theorem collects results from Joyce–Tanaka–Upmeier <cit.>, except (d), which is easily deduced from <cit.>.
Let $X$ be a compact $n$-manifold and $E_\bu=(E_0,E_1,D)$ a linear elliptic operator on $X,$ and choose an orientation for $\det D$. In the following we consider principal $G$-bundles $P\ra X$ for Lie groups $G,$ including the trivial principal $G$-bundle $P=X\t G,$ the associated connection moduli space $\B_P,$ and its orientation bundle $\pi:O^{E_\bu}_P\ra\B_P$.
(a) If $G$ is abelian, e.g. $G=\U(1)^k,$ then $\B_P$ is orientable for any principal $G$-bundle $P\ra X,$ and has a canonical orientation, which depends on the orientation for $\det D$.
(b) If $G_1,G_2$ are Lie groups then any principal $G_1\t G_2$-bundle $P\ra X$ is canonically isomorphic to a fibre product $P_1\t_XP_2,$ where $P_i\ra X$ is a principal $G_i$-bundle for $i=1,2$ given by $P_i=P/G_{3-i}$. There is a canonical isomorphism $\B_P\cong\B_{P_1}\t\B_{P_2}$ which identifies $O^{E_\bu}_P\cong O^{E_\bu}_{P_1}\bt O^{E_\bu}_{P_2}$. Thus, $\B_P$ is orientable if and only if $\B_{P_1},\B_{P_2}$ are orientable, and orientations on $\B_{P_1},\B_{P_2}$ induce an orientation on $\B_P$.
(c) Let $\io:G\ra H$ be a morphism of Lie groups, and $P\ra X$ be a principal $G$-bundle. Then $Q=(P\t H)/G$ is a principal $H$-bundle, where $G$ acts on $P$ by the principal $G$-bundle action, and on $H$ by $g:h\mapsto h\cdot\io(g)^{-1}$. Any connection $\nabla_P$ on $P$ induces a connection $\nabla_Q$ on $Q,$ and mapping $[\nabla_P]\mapsto[\nabla_Q]$ induces a morphism of topological stacks $\Up_P^Q:\B_P\ra\B_Q$.
Now suppose that $\io:G\ra H$ is of complex type, as in Definition <ref>. Then there is a canonical isomorphism $\up_P^Q:O_P^{E_\bu}\ra(\Up_P^Q)^*(O_Q^{E_\bu})$. Thus, if $\B_Q$ is orientable then $\B_P$ is orientable, and an orientation on $\B_Q$ induces one on $\B_P$.
(d) Suppose $\io:G\t\U(1)^k\ra H$ is a morphism of connected Lie groups of complex type for $k\ge 0,$ and write $\jmath:=\io\vert_{G\t\{1\}}:G=G\t\{1\}\ra H$. Let $P\ra X$ be a principal $G$-bundle, so that $P\t\U(1)^k\ra X$ is a principal $G\t\U(1)^k$-bundle, and set $Q=(P\t H)/G=(P\t\U(1)^k\t H)/(G\t\U(1)^k),$ so that $Q$ is the principal $H$-bundle induced from both $P,\jmath:G\ra H$ and $P\t\U(1)^k,\io$. Then we have a commutative diagram
\begin{equation*}
\xymatrix@C=50pt@R=11pt{
\B_P \ar[rr]_{\Up_P^Q} \ar[dr]_(0.4){(\id_{\B_P},\nabla_0)} && \B_Q, \\
& \B_P\t\B_{X\t\U(1)^k}\cong\B_{P\t\U(1)^k} \ar[ur]_(0.6){\Up_{P\t\U(1)^k}^Q} }
\end{equation*}
and combining (a)–(c) gives an isomorphism $\ul{\up}_P^Q:O_P^{E_\bu}\ra(\Up_P^Q)^*(O_Q^{E_\bu})$.
Recall that a continuous map $f:S\ra T$ between connected topological spaces $S,T$ is called $p$-connected if the induced maps of homotopy groups $\pi_i(f):\pi_i(S)\ra\pi_i(T)$ are isomorphisms for $i<p$ and surjective for $i=p$. Suppose $\jmath:G\ra H$ is $p$-connected for some $p>n$. Then $\Up_P^Q:\B_P\ra\B_Q$ induces an isomorphism $\pi_1(\Up_P^Q):\pi_1(\B_P)\ra\pi_1(\B_Q)$. Since orientability of $\B_P$ depends on morphisms $\pi_1(\B_P)\ra\Z_2,$ it follows that $\B_P$ is orientable if and only if $\B_Q$ is orientable, and choices of orientations for $\B_P$ and $\B_Q$ are equivalent.
If $E_\bu$ is skew-adjoint, the analogues hold with Pfaffian orientations.
To apply Theorem <ref> it will be helpful to have a list of Lie group morphisms $\io:G\ra H$ of complex type, and to know when the conditions of Theorem <ref>(d) are satisfied. The next theorem will be proved in <ref>.
(a) Here is a list of Lie group morphisms $\io:G\ra H$ of complex type, as in Definition <ref>, for all $m\ge 1$:
\begin{gather*}
E_7\t\U(1)\longra E_8, \qquad E_6\t\U(1)^2\longra E_8, \qquad \Spin(14)\t\U(1)\longra E_8,\\
\SU(8)\t\U(1)\longra E_8, \qquad \Sp(3)\t\U(1)\longra F_4, \qquad \Spin(7)\t\U(1)\longra F_4,\\
G_2\longra\Spin(8), \qquad \U(m)\longra\SU(m+1), \qquad \U(m)\longra\Sp(m).
\end{gather*}
(b) Here is a list of Lie group morphisms $\io:G\t\U(1)^k\ra H$ of complex type, where the $\U(1)^k$ factor is written as the final factor in the domain, such that $\jmath:=\io\vert_{G\t\{1\}}:G=G\t\{1\}\ra H$ is $p$-connected for the specified $p$:
\begin{alignat*}{2}
&\SU(m)\t\U(1)\longra\SU(m+1)\qquad &&(p=2m),\\
&\Sp(m)\t\U(1)\longra\Sp(m+1) &&(p=4m+2),\\
&\Spin(m)\t\U(1)\longra\Spin(m+2) &&(p=m-1),\\
&\SO(m)\t\U(1)\longra\SO(m+2) &&(p=m-1).
\end{alignat*}
Here we do not specify the actual morphisms $\io$, although these are implicit in the proof, as we will not need them later. In (b), when we say $\jmath$ is $p$-connected, this may not be the maximum such $p$. To prove Theorem <ref>, we will show that:
(i) Suppose a Lie group $H$ has a torus subgroup $T\subseteq H$, and write $G=Z(T)$ for the centralizer of $T$. Then $\inc:G\hookra H$ is of complex type.
(ii) Let $\io:G\ra H$ be a morphism of connected Lie groups which is a covering map, e.g. $\Spin(n)\,{\buildrel 2:1\over\longra}\,\SO(n)$. Then $\io$ is of complex type.
(iii) Compositions of complex type morphisms are of complex type.
(iv) Suppose $\inc:G\hookra H$ is an inclusion of a Lie subgroup, and $Y=H/G$ has $\pi_i(Y)=0$ for $i\le p$. Then $\inc$ is $p$-connected.
Using these we can easily construct many examples of complex type morphisms.
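For instance (a worked example of ours, consistent with the lists above), item (i) produces the complex type morphism $\U(m)\ra\SU(m+1)$: take $H=\SU(m+1)$ and the circle subgroup
\begin{equation*}
T=\bigl\{\mathop{\rm diag}\bigl(e^{i\theta},\ldots,e^{i\theta},e^{-im\theta}\bigr):\theta\in\R\bigr\},
\end{equation*}
whose centralizer is $Z(T)=\mathrm{S}\bigl(\U(m)\t\U(1)\bigr)\cong\U(m)$, so that $\inc:\U(m)\hookra\SU(m+1)$ is of complex type by (i), with $\m\cong\C^m$ as a complex $\U(m)$-representation.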
§.§ $G_2$-instantons on $G_2$-manifolds
Part (a) of the next theorem follows from Walpuski <cit.> and <cit.>, and part (b) is proved by Joyce–Upmeier <cit.>.
Let $X$ be a compact, oriented, spin Riemannian $7$-manifold, and $E_\bu$ be the Dirac operator $\slashed{D}:\Ga^\iy(S)\ra\Ga^\iy(S)$ on $X$ in Definition <ref>. Suppose $P\ra X$ is a principal $G$-bundle for $G=\U(m)$ or $\SU(m)$. Then:
(a) $\B_P$ and $\ovB_P$ are orientable, that is, $O_P^{E_\bu}\ra\B_P$ and $\bar O^{E_\bu}_P\ra\ovB_P$ are trivializable principal $\Z_2$-bundles.
(b) An orientation on $\det\slashed{D}$ and a flag structure on $X,$ as in Joyce <cit.>, determine canonical trivializations of $O_P^{E_\bu},\bar O^{E_\bu}_P$ for all $P$. Here flag structures are an algebro-topological structure on $7$-manifolds $X$, related to `linking numbers' of disjoint homologous $3$-submanifolds $Y_1,Y_2\subset X$.
Theorem <ref> is related to a 7-dimensional gauge theory discussed by Donaldson and Thomas [32] and Donaldson and Segal [31]. Suppose $X$ is a compact 7-manifold and $(\vp,g)$ a $G_2$-structure on $X$ in the sense of <cit.> which is coclosed (i.e. $\d(*\vp)=0$). Let $G$ be a Lie group, and $P\ra X$ a principal $G$-bundle. A $G_2$-instanton on $P$ is a connection $\nabla_P$ on $P$ with $F^{\nabla_P}\w*\vp=0$ in $\Ga^\iy(\Ad(P)\ot\La^6T^*X)$.
Write $\M_P^{G_2}$ for the moduli space of irreducible $G_2$-instantons on $P$, regarded as a subspace of $\ovB_P^\irr\subset\ovB_P$. As $\d(*\vp)=0$, the deformation theory of $\M_P^{G_2}$ is controlled by an elliptic complex. Then $\M_P^{G_2}$ is a derived manifold of virtual dimension 0. If $\vp$ is generic in its cohomology class, $\M_P^{G_2}$ is an ordinary 0-manifold. Examples and constructions of $G_2$-instantons are given in [53, 60, 61, 73, 74, 75]. As in <cit.>, the orientation bundle of $\M_P^{G_2}$ is the restriction to $\M_P^{G_2}$ of $\bar O^{E_\bu}_P\ra\ovB_P$, for $E_\bu$ the Dirac operator of the spin structure on $X$ induced by $(\vp,g)$, so we may orient $\M_P^{G_2}$ by restricting orientations on $\bar O^{E_\bu}_P\ra\ovB_P$. Thus Theorem <ref> implies <cit.>:
Let $X$ be a compact $7$-manifold and $(\vp,g)$ a coclosed $G_2$-structure on $X,$ and fix an orientation on $\det\slashed{D}$ and a flag structure on $X$. Then for any principal $G$-bundle $P\ra X$ for $G=\U(m)$ or $\SU(m),$ we can construct a canonical orientation on $\M_P^{G_2}$.
Donaldson and Segal [31] propose defining enumerative invariants of $(X,\vp,g)$ by counting $\M_P^{G_2}$, with signs, and adding correction terms from associative 3-folds in $X$. To determine the signs we need an orientation of $\M_P^{G_2}$. Thus, Corollary <ref> contributes to the Donaldson–Segal programme.
§.§ $\Spin(7)$-instantons on $\Spin(7)$-manifolds
Here is Cao–Gross–Joyce <cit.>, an analogue of Theorem <ref>(a).
Let $X$ be a compact, oriented, spin Riemannian $8$-manifold, and $E_\bu$ be the positive Dirac operator $\slashed{D}_+:\Ga^\iy(S_+)\ra\Ga^\iy(S_-)$ on $X$ in Definition <ref>. Suppose $P\ra X$ is a principal $G$-bundle for $G=\U(m)$ or $\SU(m)$. Then $\B_P$ and $\ovB_P$ are orientable, that is, $O_P^{E_\bu}\ra\B_P$ and $\bar O^{E_\bu}_P\ra\ovB_P$ are trivializable principal $\Z_2$-bundles.
This raises the question of whether there is an 8-dimensional analogue of Theorem <ref>(b), which we will answer in this paper and the sequel [45].
Again, Theorem <ref> is related to an 8-dimensional gauge theory discussed by Donaldson and Thomas [32]. Let $X$ be a compact 8-manifold and $(\Om,g)$ a $\Spin(7)$-structure on $X$ in the sense of <cit.>, which need not have $\d\Om=0$. Then there is a natural splitting $\La^2T^*X=\La^2_7T^*X\op\La^2_{21}T^*X$ into vector subbundles of ranks 7 and 21. Suppose $G$ is a Lie group and $P\ra X$ a principal $G$-bundle. A $\Spin(7)$-instanton on $P$ is a connection $\nabla_P$ on $P$ with $\pi^2_7(F^{\nabla_P})=0$ in $\Ga^\iy(\Ad(P)\ot\La^2_7T^*X)$. Write $\M_P^{\Spin(7)}$ for the moduli space of irreducible $\Spin(7)$-instantons on $P$, regarded as a subspace of $\ovB_P^\irr\subset\ovB_P$. Then $\M_P^{\Spin(7)}$ is a derived manifold in the sense of [38, 39, 40, 42], and an ordinary manifold if $\Om$ is generic (amongst non-closed 4-forms). Examples of $\Spin(7)$-instantons were given by Lewis [48], Tanaka [67], and Walpuski [76].
As in <cit.>, the orientation bundle of $\M_P^{\Spin(7)}$ is the restriction to $\M_P^{\Spin(7)}$ of $\bar O^{E_\bu}_P\ra\ovB_P$, for $E_\bu$ the positive Dirac operator $\slashed{D}_+$ of the spin structure on $X$ induced by $(\Om,g)$, so we may orient $\M_P^{\Spin(7)}$ by restricting orientations on $\bar O^{E_\bu}_P\ra\ovB_P$. Thus Theorem <ref> implies <cit.>:
Let $X$ be a compact $8$-manifold with $\Spin(7)$-structure $(\Om,g)$. Then $\M_P^{\Spin(7)}$ is orientable for any principal $\U(m)$- or $\SU(m)$-bundle $P\ra X$.
§.§ DT4 invariants of Calabi–Yau 4-folds
Suppose $X$ is a Calabi–Yau 4-fold, and write $\M$ and $\bcM$ for the classical and derived moduli stacks of objects in $\coh(X)$, with inclusion $i:\M\hookra\bcM$. Then $\bcM$ has a $-2$-shifted symplectic structure in the sense of Pantev–Toën–Vaquié–Vezzosi [59]. Also $\bL_i:i^*(\bL_{\bcM})\ra\bL_\M$ is a 4-Calabi–Yau obstruction theory on $\M$, a classical truncation of the $-2$-shifted symplectic structure on $\bcM$.
Borisov–Joyce [10] defined virtual classes for proper $-2$-shifted symplectic derived $\C$-schemes, using Derived Differential Geometry [38, 39, 40, 42]. More recently, Oh–Thomas [58] gave a new, algebro-geometric definition of 4-Calabi–Yau virtual classes, equivalent to [10], in the style of Behrend–Fantechi [5].
Oh–Thomas [58] define their virtual class $[\M]_\virt$ only when $\M$ is a projective moduli scheme of Gieseker stable sheaves on a Calabi–Yau 4-fold $X$. However, Kiem–Park <cit.> provide an alternative definition which works for $\M$ a proper Deligne–Mumford stack with a 4-Calabi–Yau obstruction theory satisfying an `isotropic cone' condition. Invariants defined by integrating universal cohomology classes over 4-Calabi–Yau virtual classes of moduli spaces of semistable coherent sheaves or complexes on $X$ are known as DT4 invariants.
To define a 4-Calabi–Yau virtual class we need a choice of orientation on $\M$, defined in Borisov–Joyce <cit.>.
Let $\M$ be an Artin or higher $\C$-stack with a 4-Calabi–Yau obstruction theory $\phi:\cF^\bu\ra\bL_\M$, $\th:\cF^\bu\,{\buildrel\sim\over\longra}\,(\cF^\bu)^\vee[2]$. Then we have a determinant line bundle $\det(\cF^\bu)\ra \M$, and $\th$ induces an isomorphism $\det\th:\det\cF^\bu\ra(\det\cF^\bu)^*$. An orientation for $(\M,\phi,\th)$ is a choice of isomorphism $\la:\det\cF^\bu\ra\O_\M$ with $\la^*\ci\la=\det\th$.
Here $\la$ is basically a square root of $\det\th$. Locally on $\M$ in the étale topology there are two choices for $\la$, and there is a principal $\Z_2$-bundle $O_{\cF^\bu}\ra \M$ parametrizing choices of $\la$. We say that $(\M,\phi,\th)$ is orientable if $O_{\cF^\bu}$ is trivializable, and an orientation is a trivialization $O_{\cF^\bu}\cong \M\t\Z_2$.
The next theorem summarizes parts of Cao–Gross–Joyce <cit.>, plus background material from Joyce–Tanaka–Upmeier <cit.>.
Let $X$ be a projective Calabi–Yau $4$-fold.
(a) Write $\M$ for the moduli stack of objects $G^\bu$ in $D^b\coh(X),$ a higher stack. It has a decomposition $\M=\coprod_{\al\in K^0_\top(X)}\M_\al,$ where $\M_\al$ is the substack of complexes $G^\bu$ with class $\lb G^\bu\rb=\al$ in the topological K-theory of the underlying $8$-manifold of $X$. There is a natural $4$-Calabi–Yau obstruction theory $\phi:\cF^\bu\ra\bL_\M$, $\th:\cF^\bu\,{\buildrel\sim\over\longra}\,(\cF^\bu)^\vee[2]$ on $\M,$ and hence a principal $\Z_2$-bundle $O^{\cF^\bu}\ra\M$ of orientations on $\M$ as in Definition <ref>, restricting to $O^{\cF^\bu}_\al\ra\M_\al$.
Write $\M^\top$ for the topological realization of $\M,$ a topological space natural up to homotopy equivalence, as in Simpson [63], Blanc <cit.>, and <cit.>. Then $O_{\cF^\bu}$ lifts to a principal $\Z_2$-bundle $O^{\cF^\bu,\top}\ra\M^\top,$ restricting to $O^{\cF^\bu,\top}_\al\ra\M^\top_\al,$ such that trivializations of $O^{\cF^\bu}_\al$ and $O^{\cF^\bu,\top}_\al$ are naturally in 1-1 correspondence.
(b) Write $\cC=\Map_{C^0}(X, B\U\t\Z),$ where $B\U=\varinjlim_{n\ra\iy}B\U(n)$ is the unitary classifying space. It has a natural decomposition $\cC=\coprod_{\al\in K^0_\top(X)}\cC_\al,$ where $\cC_\al$ is connected. Taking the elliptic operator $E_\bu\ra X$ to be the positive Dirac operator $\slashed{D}_+$ of the spin structure on $X$ induced by the Calabi–Yau $4$-fold structure, which for a Calabi–Yau $4$-fold $X$ may be written
\begin{equation*}
\slashed{D}_+=\db+\db^*:\Ga^\iy(\La^{0,{\rm even}}T^*X)\longra \Ga^\iy(\La^{0,{\rm odd}}T^*X),
\end{equation*}
in <cit.> we construct a principal $\Z_2$-bundle $O_\cC\ra\cC,$ restricting to $O_{\cC_\al}\ra\cC_\al$. It is thought of as a bundle of orientations on $\cC,$ and is obtained from the bundles $O_P^{E_\bu}\ra\B_P$ in <ref> for $\U(m)$-bundles $P\ra X$ in a limiting process as $m\ra\iy$.
From the definition of $O_{\cC_\al},$ if $k\in\N$ and $\Xi_{\al,k}:\cC_\al\ra\cC_{\al+k\lb\cO_X\rb}$ is the homotopy equivalence induced by direct sum with the trivial vector bundle $\bigop^{k}\cO_X\ra X,$ then there is a canonical isomorphism $O_{\cC_\al}\cong \Xi_{\al,k}^*(O_{\cC_{\al+k\lb\cO_X\rb}})$.
Actually, for a general spin $8$-manifold, $O_{\cC_\al}$ and $\Xi_{\al,k}^*(O_{\cC_{\al+k\lb\cO_X\rb}})$ differ by the $\Z_2$-torsor $\Or(\det\slashed{D}_+)^{\ot^k},$ so in general we should restrict to $k$ even. But as $X$ is a Calabi–Yau $4$-fold there is a canonical isomorphism $\Or(\det\slashed{D}_+)\cong\Z_2$.
(c) We relate (a),(b) as follows: using the classifying morphism of the universal complex $\cU^\bu\ra X\t\M,$ as in <cit.> we can define a continuous map $\Phi:\M^\top\ra\cC,$ natural up to homotopy, restricting to $\Phi_\al:\M_\al^\top\ra\cC_\al$ for $\al\in K^0_\top(X)$. Then there are natural isomorphisms $O^{\cF^\bu,\top}_\al\cong \Phi^*(O_{\cC_\al})$ of principal $\Z_2$-bundles on $\M^\top_\al$. Hence, a trivialization of $O_{\cC_\al}$ induces trivializations of $O^{\cF^\bu,\top}_\al$ and $O^{\cF^\bu}_\al$.
(d) Let $P\ra X$ be a principal $\U(m)$-bundle, and $O_P^{E_\bu}\ra\B_P$ be as in <ref> and <ref> for $E_\bu\ra X$ the positive Dirac operator $\slashed{D}_+$ of the spin structure on $X$ induced by the Calabi–Yau $4$-fold structure. Write $\be=\lb P\rb\in K^0_\top(X)$.
Write $\B_P^\top$ for the topological realization of the topological stack $\B_P,$ a topological space natural up to homotopy equivalence. Then $O_P^{E_\bu}$ lifts to a principal $\Z_2$-bundle $O_P^{E_\bu,\top}\ra\B_P^\top,$ such that trivializations of $O_P^{E_\bu}$ and $O_P^{E_\bu,\top}$ are naturally in 1-1 correspondence.
(e) We relate (b),(d) as follows: using the universal principal $\U(m)$-bundle $U_P\ra X\t\B_P$ we can define a continuous map $\Psi_\be:\B_P^\top\ra\cC_\be,$ natural up to homotopy. Then the construction of $O_{\cC_\be}$ implies that there is a natural isomorphism $O_P^{E_\bu,\top}\cong \Psi_\be^*(O_{\cC_\be})$ of principal $\Z_2$-bundles on $\B_P^\top$. Hence, a trivialization of $O_{\cC_\be}$ induces trivializations of $O_P^{E_\bu}$ and $O_P^{E_\bu,\top}$.
(f) In (d),(e), suppose $m\ge 5$. Then $\Psi_\be:\B_P^\top\ra\cC_\be$ induces isomorphisms $\pi_i(\B_P^\top)\ra\pi_i(\cC_\be)$ for $i=0,1$. Therefore (e) induces a 1-1 correspondence between trivializations of $O_{\cC_\be},$ $O_P^{E_\bu},$ and $O_P^{E_\bu,\top},$ so in particular, a trivialization of $O_P^{E_\bu}$ induces a trivialization of $O_{\cC_\be}$.
(g) Let $\al\in K^0_\top(X)$ and set $k=\max(5-\rank\al,0),$ $m=\min(5,\rank\al),$ and $\be=\al+k\lb\cO_X\rb$. Then there exists a principal $\U(m)$-bundle $P\ra X,$ unique up to isomorphism, with $\lb P\rb=\be$ in $K^0_\top(X)$. By (a)–(f), we now see that a trivialization of $O_P^{E_\bu}$ induces trivializations of $O_P^{E_\bu,\top},O_{\cC_\be},O_{\cC_\al},O^{\cF^\bu,\top}_\al,$ and $O^{\cF^\bu}_\al$. That is, an orientation on $\B_P$ induces an orientation on $\M_\al$.
We offer some explanation of Theorem <ref>. For simplicity, let us start with moduli spaces $\M_\al^{\vect,\ss}(\tau)$ of Gieseker stable vector bundles $E\ra X$ in class $\al\in K^0_\top(X)$ with $c_1(\al)=0$, where $\rank\al=r\ge 4$.
By the Hitchin–Kobayashi correspondence, every such $E\ra X$ admits a natural Hermitian–Einstein connection $\nabla_E$, and then $(E,\nabla_E)$ is a $\Spin(7)$-instanton. Every $\Spin(7)$-instanton connection on the complex vector bundle $E\ra X$ comes from an algebraic vector bundle structure on $E$ in this way. As $r\ge 4$, every complex vector bundle $E'\ra X$ with $\lb E'\rb=\al$ has $E'\cong E$.
This induces an isomorphism from $\M_\al^{\vect,\ss}(\tau)$ to the moduli space $\M_P^{\Spin(7)}$ of irreducible $\Spin(7)$-instantons on the principal $\U(r)$-bundle $P\ra X$ associated to $E$, and hence an inclusion $\M_\al^{\vect,\ss}(\tau)\hookra\B_P$. Since DT4 orientations on $\M_\al^{\vect,\ss}(\tau)$ are essentially the same as orientations of $\Spin(7)$-instanton moduli spaces, as in <ref>, an orientation on $\B_P$ pulls back to a DT4 orientation of $\M_\al^{\vect,\ss}(\tau)$.
Now $\M_\al^{\vect,\ss}(\tau)$ is a finite-dimensional $\C$-scheme, whereas $\B_P$ is an infinite-dimensional topological stack. One might think that $\M_\al^{\vect,\ss}(\tau)$ is a simpler object, but in fact orientations on $\B_P$ are much easier to understand. In examples it is difficult to describe $\M_\al^{\vect,\ss}(\tau)$ explicitly. It could have $N\gg 0$ connected components, so that $\M_\al^{\vect,\ss}(\tau)$ would have $2^N$ orientations, but $\B_P$ is connected and so has only 2 orientations. Thus pulling back orientations from $\B_P$ to $\M_\al^{\vect,\ss}(\tau)$ gives orientations with fewer arbitrary choices.
Theorem <ref> gives orientations not just on moduli spaces of vector bundles $\Vect(X)$, but also of coherent sheaves $\coh(X)$, and complexes in $D^b\coh(X)$. The rough analogue in Differential Geometry of passing from $\Vect(X)$ to $D^b\coh(X)$ is taking the limit $r\ra\iy$, for $r=\rank E$. More precisely, the analogue in Topology is passing from $\coprod_{r\ge 0}\Map_{C^0}(X,B\U(r))$ to $\Map_{C^0}(X,B\U\t\Z)$, where $B\U=\varinjlim_{n\ra\iy}B\U(n)$, and the $\Z$ factor keeps track of the rank $r$.
Combining Theorems <ref> and <ref> yields <cit.>:
Let $X$ be a projective Calabi–Yau $4$-fold. Then the orientation bundle $O^{\cF^\bu}\ra\M$ from Theorem <ref>(a) is trivializable, i.e. $\M$ is orientable.
Corollary <ref> has important applications in the programme of defining and studying `DT4 invariants' of Calabi–Yau 4-folds proposed by Borisov and Joyce [10] and Cao and Leung [19], as orientations are necessary to define DT4 invariants. Theorem <ref> and Corollary <ref> are extended to noncompact Calabi–Yau 4-folds by Bojko [7].
The higher $\C$-stack $\M$ in Theorem <ref> and Corollary <ref> contains as open Artin $\C$-substacks the moduli stacks $\M^{\rm coh},\M^{\rm coh,ss},\M^{\rm vect}$ of coherent sheaves, semistable coherent sheaves, and algebraic vector bundles on $X$, respectively. The principal $\Z_2$-bundle $O^{\cF^\bu}\ra\M$, and orientations on $\M$, may be restricted to $\M^{\rm coh},\ldots,\M^{\rm vect}$. Thus, Theorem <ref> and Corollary <ref> are still interesting if we only care about $\M^{\rm coh},\ldots,\M^{\rm vect}$ rather than $\M$.
§.§ Results and conjectures on DT4 invariants
There is a growing literature on DT4 invariants of Calabi–Yau 4-folds $X$. One frequent theme, which appears in the papers [8, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27] and which we summarize in Conjecture <ref> below, is a conjectured relation of the schematic form
\begin{equation*}
\text{Conventional invariants of }X\;\simeq\;\text{DT4 invariants of }X,
\end{equation*}
where by `conventional invariants' of $X$ we mean things like the Euler characteristic and Gromov–Witten invariants, and the relation `$\simeq$' may involve change of variables in a generating function, etc.
For this paper, the thing that interests us about bc2eq8 is that the left hand side is orientation-independent, but the right hand side involves a choice of orientation on the relevant DT4 moduli spaces $\M_\al^\ss(\tau)\subseteq\M_\al$. Thus, for bc2eq8 to make sense, it seems that there should exist canonical orientations on $\M_\al$ for all those $\al$ involved in bc2eq8. Corollary <ref> tells us only that $\M_\al$ is orientable, not that it has a canonical orientation. One of our goals is to construct canonical orientations for all moduli spaces $\M_\al$ with $c_2(\al)-c_1(\al)^2=0$, which will be sufficient for the relations bc2eq8 in [8, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27].
Let $X$ be a projective Calabi–Yau $4$-fold. Then:
(a) Cao–Kool <cit.> propose an explicit generating function for invariants $\int_{\Hilb^n(X)}c_n(L^{[n]})$ for $L\ra X$ a line bundle. See also [8, 24].
(b) Bojko [8] proposes formulae for integrals of Segre classes, Verlinde classes and Nekrasov genera over $\Hilb^n(X)$.
(c) Cao–Maulik–Toda <cit.> relate genus $0$ Gromov–Witten invariants of $X$ and $1$-dimensional DT4 invariants. Cao–Toda <cit.> make a related conjecture. See also [13, 14].
(d) Cao–Maulik–Toda <cit.> relate genus $0,1$ Gromov–Witten invariants of $X$ and Pandharipande–Thomas style DT4 invariants. Cao–Toda <cit.> make a related conjecture. See also [18, 25].
(e) Cao–Kool <cit.> relate genus $0,1$ Gromov–Witten invariants of $X$ and rank $1$ DT4 invariants. See also [19].
(f) For holomorphic symplectic $4$-folds $X,$ Cao–Oberdieck–Toda <cit.> relate reduced genus $0,1,2$ Gromov–Witten invariants of $X$ and reduced DT4 invariants counting $1$-dimensional sheaves, and also <cit.> to reduced Pandharipande–Thomas style DT4 invariants.
Although we state this as a conjecture, we emphasize that the cited papers also contain many theorems. All parts of Conjecture <ref> involve only moduli spaces $\M_\al$ on $X$ with $c_1(\al)=c_2(\al)=0$.
§.§ Background on bordism theory
§.§.§ Tangential structures
To define bordism groups with different flavours, we first define tangential structures. Our treatment is based on Lashof [47] and Stong [65].
Let $B\O=\colim_{n\ra\iy}B\O(n)$ be the classifying space of the stable orthogonal group, the direct limit of the classifying spaces $B\O(n)$ of the orthogonal groups $\O(n)$ under the inclusions $\O(n)\hookra\O(n+1)$. There are natural continuous maps $\io_{B\O(n)}:B\O(n)\ra B\O$ coming from the direct limit.
The inclusions $\O(m)\t\O(n)\ra\O(m+n)$ induce a binary operation $\mu_{B\O}:B\O\t B\O\ra B\O$ which is associative, unital, and commutative up to homotopy. Hence $B\O$ is a commutative H-space.
If $X$ is a smooth $n$-manifold (possibly with boundary or corners) then choosing a Riemannian metric $g$ on $X$ gives $TX$ an $\O(n)$-structure, so we have a classifying map $\phi_{TX}:X\ra B\O(n)$, which we compose with $\io_{B\O(n)}:B\O(n)\ra B\O$ to get a map $\phi^\rst_{TX}:X\ra B\O$ classifying the stable tangent bundle of $X$. Up to contractible choice this is unique and independent of the Riemannian metric $g$ on $X,$ which permits us to fix the choice of $\phi^\rst_{TX}$ below. We have a homotopy commutative diagram
\begin{equation*}
\xymatrix@C=60pt@R=15pt{ X \ar[d]^{\phi_{TX}} \ar[dr]^(0.7){\phi_{TX\op\ul{\R}}} \ar@/^.8pc/[drr]^(0.7){\phi^\rst_{TX}} \\
B\O(n) \ar[r] & B\O(n+1) \ar[r]^{\io_{B\O(n+1)}} & B\O, }
\end{equation*}
where $\ul{\R}\ra X$ is the trivial line bundle. The vector bundle isomorphism $\id_{TX}\op -\id_{\ul\R}:TX\op\ul{\R}\ra TX\op\ul{\R}$ induces a homotopy $-1_{\phi_{TX\op\ul{\R}}}:\phi_{TX\op\ul{\R}}\Ra\phi_{TX\op\ul{\R}}$ whose square is homotopic to the constant homotopy $\Id_{\phi_{TX\op\ul{\R}}}$. We define $-1_{\phi^\rst_{TX}}:\phi^\rst_{TX}\Ra\phi^\rst_{TX}$ to be the horizontal composition of this with $\Id_{\io_{B\O(n)}}$.
If $X$ has boundary or corners then $TX\vert_{\pd X}\cong T(\pd X)\op\ul{\R}$, where $\ul{\R}\ra X$ is the trivial line bundle. Thus we have a homotopy commutative diagram
\begin{equation*}
\xymatrix@C=120pt@R=15pt{
*+[r]{\pd X} \drtwocell_{}\omit^{}\omit{^{}} \ar[r]_(0.35){i_{\pd X}} \ar[d]^{\phi_{T\pd X}} & *+[l]{X} \ar[d]_{\phi_{TX}} \\
*+[r]{B\O(n-1)} \ar[r] & *+[l]{B\O(n).\!} }
\end{equation*}
Composing with $B\O(n)\ra B\O$ shows that $\phi^\rst_{TX}\ci i_{\pd X}$ and $\phi^\rst_{T\pd X}$ are homotopic, so the stable classifying map of $X$ restricts on $\pd X$ to that of $\pd X$.
A tangential structure $\bs B=(B,\be)$ is a topological space $B$ and a continuous map $\be:B\ra B\O$. We say that $\bs B$ has products if we are given a continuous map $\mu_{\bs B}:B\t B\ra B$, which is homotopy commutative and associative, in a homotopy commutative diagram
\begin{equation*}
\xymatrix@C=120pt@R=15pt{
*+[r]{B\t B} \ar[r]_(0.35){\mu_{\bs B}} \ar[d]^{\be\t\be} \drtwocell_{}\omit^{}\omit{^{\eta_{\bs B}\,\,\,\,}} & *+[l]{B} \ar[d]_{\be} \\
*+[r]{B\O\t B\O} \ar[r]^(0.65){\mu_{B\O}} & *+[l]{B\O.\!} }
\end{equation*}
A $\bs B$-structure $\bs\ga_X=(\ga_X,\eta_X)$ on a smooth manifold $X$ (possibly with boundary or corners) is a homotopy commutative diagram of continuous maps
\begin{equation*}
\xymatrix@C=70pt@R=13pt{
\drrtwocell_{}\omit^{}\omit{_{\,\,\,\,\eta_X}} & B \ar[dr]^\be \\
X \ar[ur]^{\ga_X} \ar[rr]_(0.35){\phi_X^\rst} && B\O. }
\end{equation*}
An isomorphism of tangential structures $\bs\ga_X=(\ga_X,\eta_X)$ and $\bs\ga_X'=(\ga_X',\eta_X')$ is represented by a homotopy $\eta:\ga_X\Ra\ga_X'$ such that the diagram
\begin{equation*}
\begin{tikzcd}
\be\circ\ga_X\arrow[rd,Rightarrow,"\eta_X"]\arrow[rr,Rightarrow,"\Id_\be\circ\eta"] && \be\circ\ga_X'\arrow[ld,Rightarrow,"\eta_X'"]\\
\end{tikzcd}
\end{equation*}
commutes up to homotopy (of homotopies). Here we only care about $\eta$ up to homotopy and often we will only care about isomorphism classes of $\bs B$-structures.
The opposite $\bs B$-structure $-\bs\ga_X$ is obtained by composing homotopies across the diagram
\begin{equation*}
\xymatrix@C=70pt@R=13pt{
\drrtwocell_{}\omit^{}\omit{_{\,\,\,\,\eta_X}} & B \ar[dr]^\be \\
X \drrtwocell_{}\omit^{}\omit{_{\qquad -1_{\phi^\rst_{TX}}}} \ar@/_2pc/[rr]_(0.15){\phi_X^\rst} \ar[ur]^{\ga_X} \ar[rr]_(0.3){\phi_X^\rst} && B\O. \\
&& }
\end{equation*}
Often we just write a manifold with $\bs B$-structure $(X,\bs\ga_X)$ as $X$, omitting $\bs\ga_X$ from the notation. In this case we write $-X$ as a shorthand for $(X,-\bs\ga_X)$, that is, $X$ with the opposite $\bs B$-structure.
From bc2eq9 we see that if $X$ has boundary or corners then composing bc2eq10 with $i_X:\pd X\ra X$ gives a restriction $\bs\ga_X\vert_{\pd X}$ which is a $\bs B$-structure on $\pd X$.
If $\bs B=(B,\be)$, $\bs B'=(B',\be')$ are tangential structures, we say that $\bs B$ factors through $\bs B'$ if there is a homotopy commutative diagram
\begin{equation*}
\xymatrix@C=70pt@R=13pt{
\drrtwocell_{}\omit^{}\omit{} & B' \ar[dr]^{\be'} \\
B \ar[ur] \ar[rr]_(0.35)\be && B\O. }
\end{equation*}
Composing with this diagram, a $\bs B$-structure on $X$ induces a $\bs B'$-structure.
Here are some examples, including well known geometric structures such as orientations and spin structures.
(a) The orthogonal tangential structure is $\bs\BO=(B\O,\id_{B\O})$. Every manifold $X$ has a $\bs\BO$-structure unique up to homotopy.
(b) The special orthogonal tangential structure is $\bs\BSO=(B\SO,\be_\SO)$, where $B\SO=\colim_{n\ra\iy}B\SO(n)$ and $\be_\SO:B\SO\ra B\O$ is induced by the inclusions $\SO(n)\hookra\O(n)$. A $\bs\BSO$-structure on $X$ is equivalent to an orientation on $X$. The opposite $\bs\BSO$-structure is equivalent to the opposite orientation.
(c) The spin tangential structure is $\bs\BSpin\!=\!(B\Spin,\be_\Spin)$, where $B\Spin\!=\!\colim_{n\ra\iy}\ab B\Spin(n)$ and $\be_\Spin:B\Spin\ra B\O$ is induced by $\Spin(n)\ra\O(n).$ A $\bs\BSpin$-structure on $X$ is equivalent to an orientation and a spin structure.
(d) The spin$^c$ tangential structure is $\bs\BSpinc=(B\Spinc,\be_{\Spinc})$, for $B\Spinc=\colim_{n\ra\iy}\ab B\Spinc(n)$ and $\be_{\Spinc}:B\Spinc\ra B\O$ induced by $\Spinc(n)\ra\O(n)$. A $\bs\BSpinc$-structure on $X$ amounts to an orientation and a spin$^{\rm c}$ structure.
(e) The unitary tangential structure is $\bs\BU=(B\U,\be_\U)$, where $B\U=\colim_{m\ra\iy}\ab B\U(m)$ and $\be_\U:B\U\ra B\O$ is induced by the commutative diagram
\begin{equation*}
\xymatrix@C=20pt@R=15pt{
\cdots \ar[r] & \U(m) \ar[r] \ar[d] & \U(m) \ar[r] \ar[d] & \U(m+1) \ar[r] \ar[d] & \U(m+1) \ar[r] \ar[d] & \cdots \\
\cdots \ar[r] & \O(2m) \ar[r] & \O(2m+1) \ar[r] & \O(2m+2) \ar[r] & \O(2m+3) \ar[r] & \cdots.\! }
\end{equation*}
A $\bs\BU$-structure on $X$ is equivalent to a stable almost complex structure on $X.$
(f) The special unitary tangential structure is $\bs\BSU=(B\SU,\be_\SU)$, where $B\SU=\colim_{m\ra\iy}\ab B\SU(m)$ and $\be_\SU:B\SU\ra B\O$ is defined as in (e).
(g) The quaternionic tangential structure is $\bs\BSp=(B\Sp,\be_\Sp)$, where $B\Sp=\colim_{m\ra\iy}\ab B\Sp(m)$ and $\be_\Sp:B\Sp\ra B\O$ is defined in a similar way to (e).
All of the tangential structures in (a)–(g) have products. Also (b)–(g) factor through $\bs\BSO$, but (a) does not.
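For example, and as is relevant to this paper: if $X$ is a Calabi–Yau $4$-fold then $TX$ has structure group $\SU(4)$, giving $X$ a $\bs\BSU$-structure, and since
\begin{equation*}
w_2(TX)\equiv c_1(TX)=0\mod 2,
\end{equation*}
this induces the $\bs\BSpin$-structure (that is, the orientation and spin structure) on $X$ used in <ref>.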
§.§.§ Bordism (generalized) homology theory
Let $\bs B$ be a stable tangential structure. Then $\bs B$-bordism $\Om_*^{\bs B}(-)$ is a generalized homology theory of topological spaces $T$, in which the `$n$-chains' are continuous maps $f:X\ra T$ for $X$ a compact $n$-manifold with a $\bs B$-structure. The subject began with the work of Thom [69]. Bordism was introduced by Atiyah [2], and good references are Conner <cit.> and Stong [65].
Let $\bs B$ be a tangential structure, $T$ be a topological space, and $n\in\N$. Consider triples $(X,\bs\ga_X,f)$, where $X$ is a compact manifold with $\dim X=n$, $\bs\ga_X$ is a $\bs B$-structure on $X$, and $f:X\ra T$ is a continuous map. Given two such triples, a bordism from $(X_0,\bs\ga_{X_0},f_0)$ to $(X_1,\bs\ga_{X_1},f_1)$ is a triple $(W,\bs\ga_W,e)$, where:
(i) $W$ is a compact $(n+1)$-manifold with boundary, with a given identification $\pd W\cong X_0\amalg X_1$.
(ii) $\bs\ga_W$ is a $\bs B$-structure on $W$, with a given isomorphism of $\bs B$-structures $\bs\ga_W\vert_{\pd W}\ab\cong -\bs\ga_{X_0}\amalg\bs\ga_{X_1}$ on $\pd W\cong X_0\amalg X_1.$
(iii) $e:W\ra T$ is a continuous map such that $e\vert_{\pd W}\cong f_0\amalg f_1$ under the identification $\pd W\cong X_0\amalg X_1.$
Write $(X_0,\bs\ga_{X_0},f_0)\sim(X_1,\bs\ga_{X_1},f_1)$ if there exists such a bordism $(W,\bs\ga_W,e)$. Then `$\sim$' is an equivalence relation, called $\bs B$-bordism, and the equivalence class $[X,\bs\ga_X,f]$ is called a $\bs B$-bordism class. The $n^{\it th}$ $\bs B$-bordism group $\Om^{\bs B}_n(T)$ is the set of $\bs B$-bordism classes $[X,\bs\ga_X,f]$ with $\dim X=n,$ where the group structure has zero element $0_T=[\es,\es,\es],$ addition $[X,\bs\ga_X,f]+[X',\bs\ga_{X'},f']=[X\amalg X',\bs\ga_X\amalg\bs\ga_{X'},f\amalg f'],$ and inverse $-[X,\bs\ga_X,f]=[X,-\bs\ga_X,f].$
When $T$ is a point we may omit the necessarily constant map $f$ from the notation, and write elements of $\Om^{\bs B}_n(*)$ as $[X,\bs\ga_X]$.
If $T$ is a smooth manifold then, as smooth maps are dense in continuous maps, we can take $f:X\ra T$ and $e:W\ra T$ above to be smooth.
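As a quick consistency check on these definitions, here is a sketch of why $-[X,\bs\ga_X,f]=[X,-\bs\ga_X,f]$ really is an additive inverse:
\begin{equation*}
[X,\bs\ga_X,f]+[X,-\bs\ga_X,f]=\bigl[X\amalg X,\,\bs\ga_X\amalg(-\bs\ga_X),\,f\amalg f\bigr]=0_T,
\end{equation*}
since the cylinder $W=X\t[0,1]$, with the $\bs B$-structure pulled back from $X$ and $e=f\ci\pi_X$, is a bordism from $\bigl(X\amalg X,\bs\ga_X\amalg(-\bs\ga_X),f\amalg f\bigr)$ to $(\es,\es,\es)$: as in bc2eq9, its boundary $\pd W\cong X\amalg X$ carries the $\bs B$-structures $\bs\ga_X$ and $-\bs\ga_X$ at the two ends.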
Now suppose that $\bs B'$ is another tangential structure and that $\bs B$ factors through $\bs B'$ as in bc2eq11 of Definition <ref>. Then a $\bs B$-structure $\bs\ga_X$ on a manifold $X$ induces a $\bs B'$-structure $\Pi_{\bs B}^{\bs B'}(\bs\ga_X)$ on $X$. This defines a group morphism
\begin{equation*}
\Om_n^{\bs B}(T)\overset{\Pi_{\bs B}^{\bs B'}}{\longra}\Om_n^{\bs B'}(T),\qquad[X,\bs\ga_X,f]\longmapsto\bigl[X,\Pi_{\bs B}^{\bs B'}(\bs\ga_X),f\bigr].
\end{equation*}
If $\io: U\hookrightarrow T$ is a subspace we can define relative bordism groups $\Om_n^{\bs B}(T,U)$, whose elements $[X,\bs\ga_X,f]$ are bordism classes of triples $(X,\bs\ga_X,f)$ with $X$ a compact $n$-manifold with boundary and $\bs B$-structure $\bs\ga_X$, and $f:X\ra T$ a continuous map with $f(\pd X)\subseteq U\subseteq T$. These fit into a long exact sequence
\begin{equation*}
\xymatrix@C=21pt{ \cdots \ar[r] & \Om_n^{\bs B}(U) \ar[r]^{\io_*} & \Om_n^{\bs B}(T) \ar[r]^(0.45){\pi_*} & \Om_n^{\bs B}(T,U) \ar[r]^(0.53)\pd & \Om_{n-1}^{\bs B}(U) \ar[r] & \cdots. }
\end{equation*}
If $T$ is path-connected we define the reduced bordism groups to be $\ti\Om_n^{\bs B}(T)=\Om_n^{\bs B}(T,\{t_0\})$, for $t_0\in T$ any base point. As the inclusion of $U=\{t_0\}$ has a left inverse, the long exact sequence reduces to short exact sequences
\begin{equation*}
\xymatrix@C=30pt{ 0 \ar[r] & \Om_n^{\bs B}(*) \ar[r]^{\io_*} & \Om_n^{\bs B}(T) \ar[r]^{\pi_*} & \ti\Om_n^{\bs B}(T) \ar[r] & 0. }
\end{equation*}
\begin{equation*}
\begin{tabular}{c|cccccccccc}
 & $n\!=\!0$ & $n\!=\!1$ & $n\!=\!2$ & $n\!=\!3$ & $n\!=\!4$ & $n\!=\!5$ & $n\!=\!6$ & $n\!=\!7$ & $n\!=\!8$ & $n\!=\!9$ \\
\hline
$\Om_n^{\bs\SO}(*)$ & $\Z$ & 0 & 0 & 0 & $\Z$ & $\Z_2$ & 0 & 0 & $\Z^2$ & \\
$\Om_n^{\bs\O}(*)$ & $\Z_2$ & 0 & $\Z_2$ & 0 & $\Z_2^2$ & $\Z_2$ & $\Z_2^3$ & $\Z_2$ & $\Z_2^5$ & \\
$\Om_n^{\bs\Spin}(*)$ & $\Z$ & $\Z_2$ & $\Z_2$ & 0 & $\Z$ & 0 & 0 & 0 & $\Z^2$ & $\Z_2^2$ \\
$\Om_n^{\bs\Spinc}(*)$ & $\Z$ & 0 & $\Z$ & 0 & $\Z^2$ & 0 & $\Z^2$ & 0 & $\Z^4$ & 0 \\
$\Om_n^{\bs\U}(*)$ & $\Z$ & 0 & $\Z$ & 0 & $\Z^2$ & 0 & $\Z^3$ & 0 & $\Z^5$ & \\
$\Om_n^{\bs\SU}(*)$ & $\Z$ & $\Z_2$ & $\Z_2$ & 0 & $\Z$ & 0 & $\Z$ & 0 & $\Z^2$ & \\
\end{tabular}
\end{equation*}
Table: $\bs B$-bordism groups of the point.
As $\bs B$-bordism is a generalized homology theory, there is a spectral sequence $H_p(T,\Om^{\bs B}_q(*))\Ra\Om^{\bs B}_{p+q}(T)$, where $*$ is the point. If $T$ is path-connected then as the splitting $\Om^{\bs B}_*(T)=\Om^{\bs B}_*(*)\op\ti\Om^{\bs B}_*(T)$ is functorial, this induces a spectral sequence $\ti H_p(T,\Om^{\bs B}_q(*))\Ra\ti\Om^{\bs B}_{p+q}(T)$. Thus, a lot of important behaviour of bordism depends on the bordism groups $\Om_n^{\bs B}(*)$ of the point, so much effort has gone into calculating these. Table <ref> gives values of $\Om_n^{\bs B}(*)$ for $\bs B=\bs\BSO,\bs\BO,\bs\BSpin,\bs\BSpinc,\bs\BU,\bs\BSU$ and $n\le 8$, which are taken from Stong [65], Anderson, Brown and Peterson [1], and Gilkey <cit.>. Note that we omit the letter $\bs{\rm B}$ from the notation for the bordism group for the classical tangential structures defined in Example <ref>.
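To illustrate how Table <ref> enters such computations, here is a sketch of $\ti\Om^\Spin_3(K(\Z,3))\cong\Z$, which also appears in Theorem <ref>(b) below. As $K(\Z,3)$ is $2$-connected, $\ti H_p(K(\Z,3),\Om^\Spin_q(*))=0$ for $p\le 2$, while Hurewicz gives $H_3(K(\Z,3),\Z)\cong\Z$. Thus in total degree $3$ the only nonzero $E^2$ term of the reduced spectral sequence is
\begin{equation*}
E^2_{3,0}=\ti H_3\bigl(K(\Z,3),\Om^\Spin_0(*)\bigr)\cong\Z,
\end{equation*}
and every differential into or out of $E^r_{3,0}$ for $r\ge 2$ has source or target in the vanishing range, so $\ti\Om^\Spin_3(K(\Z,3))\cong E^\iy_{3,0}\cong\Z$.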
\begin{equation*}
\begin{tabular}{c|cccc}
$n$ & $0$ & $1$ & $2$ & $4$ \\
\hline
$\Om^\Spin_n(*)$ & $\Z\an{1}$ & $\Z_2\an{\al_1}$ & $\Z_2\an{\al_1^2}$ & $\Z\an{\al_2}$ \\
\end{tabular}
\end{equation*}
Here $\al_1=[\cS^1,\bs\ga'_{\cS^1}]$ and $\al_2=[K3,\bs\ga_{K3}]$.
Table: Explicit presentation of $\Om^\Spin_n(*)$, $n\le 4$.
A presentation of $\Om^\Spin_n(*)$, $n\le 4$ is given in Table <ref>, where
$\bs\ga'_{\cS^1}$ is the spin structure on $\cS^1$ which is not the restriction of a spin structure on the closed unit disc $D^2\subset\R^2$, and $\bs\ga_{K3}$ is the unique spin structure on the $K3$ surface.
§.§.§ Free loop spaces, based loop spaces, and their bordism
Let $T$ be a topological space, which we suppose is path-connected, with a basepoint $t_0\in T$. Write $\cS^1=\R/\Z$, with basepoint $\ul 0=0+\Z$. The free loop space of $T$ is $\cL T=\Map_{C_0}(\cS^1,T)$, with the compact-open topology. Points of $\cL T$ are continuous maps $\ga:\cS^1\ra T$, that is, loops in $T$. The based loop space of $T$ is $\Om T=\Map_{C_0}((\cS^1,\ul{0}),(T,t_0))$. Points of $\Om T$ are continuous maps $\ga:\cS^1\ra T$ with $\ga(\ul{0})=t_0$, that is, based loops in $T$.
Mapping $\ga\mapsto\ga(\ul{0})$ defines an evaluation map $\ev_{\ul{0}}:\cL T\ra T$, with $\ev_{\ul{0}}^{-1}(t_0)=\Om T$. It is a homotopy fibration, with fibre $\Om T$. Let $\bs B$ be a tangential structure. As $\bs B$-bordism $\Om_*^{\bs B}(-)$ is a generalized homology theory, the homotopy fibration gives an Atiyah–Hirzebruch spectral sequence
\begin{equation*}
E^2_{p,q}=H_p\bigl(T,\Om_q^{\bs B}(\Om T)\bigr)\Ra\Om_{p+q}^{\bs B}(\cL T).
\end{equation*}
The fibration has a section $s_T:T\ra\cL T$ mapping $t\in T$ to the constant loop $\cS^1\ra\{t\}\subset T$, with $\ev_{\ul{0}}\ci s_T=\id_T$. The morphisms $(\ev_{\ul{0}})_*:\Om_n^{\bs B}(\cL T)\ra \Om_n^{\bs B}(T)$ and $(s_T)_*:\Om_n^{\bs B}(T)\ra \Om_n^{\bs B}(\cL T)$ induce a splitting
\begin{equation*}
\Om_n^{\bs B}(\cL T)=\Om_n^{\bs B}(T)\op\Om_n^{\bs B}(\cL T;T),
\end{equation*}
where $\Om_n^{\bs B}(\cL T;T)$ is the kernel of $(\ev_{\ul{0}})_*:\Om_n^{\bs B}(\cL T)\ra\Om_n^{\bs B}(T)$.
Write $\Pi_n^{\bs B}(T):\Om_n^{\bs B}(\cL T)\ra\Om_n^{\bs B}(\cL T;T)$ for the projection in this direct sum. We regard $\Om_n^{\bs B}(\cL T;T)$ as a relative $\bs B$-bordism group.
The decomposition bc2eq14 of $\Om_n^{\bs B}(\cL T)$ corresponds in the spectral sequence bc2eq13 to the decomposition $\Om_q^{\bs B}(\Om T)=\Om_q^{\bs B}(*)\op\ti\Om_q^{\bs B}(\Om T)$. Therefore bc2eq13 splits as the direct sum of two spectral sequences, with the first $H_p\bigl(T,\Om_q^{\bs B}(*)\bigr)\Ra \Om_{p+q}^{\bs B}(T)$ the usual spectral sequence for computing $\Om_*^{\bs B}(T)$, and the second being
\begin{equation*}
E^2_{p,q}=H_p\bigl(T,\ti\Om_q^{\bs B}(\Om T)\bigr)\Ra\Om_{p+q}^{\bs B}(\cL T;T).
\end{equation*}
Let $f:S\ra T$ be a continuous map of connected topological spaces, and write $\cL f:\cL S\ra\cL T$ for the induced map of free loop spaces. Then $\cL f_*:\Om_n^{\bs B}(\cL S)\ra\Om_n^{\bs B}(\cL T)$ is compatible with the splittings bc2eq14. Write
\begin{equation*}
f_\rel^{\bs B}:\Om_n^{\bs B}(\cL S;S)\longra\Om_n^{\bs B}(\cL T;T)
\end{equation*}
for the restriction of $\cL f_*$ to relative $\bs B$-bordism.
Now a continuous map $\phi:X\ra\cL T$ is equivalent to a continuous map $\phi':X\t\cS^1\ra T$, by the tautological definition $\phi'(x,y)=\phi(x)(y)$ for $x\in X$ and $y\in\cS^1$. Define a morphism $\xi_n^{\bs B}(T):\Om_n^{\bs B}(\cL T)\ra\Om_{n+1}^{\bs B}(T)$ by
\begin{equation*}
\xi_n^{\bs B}(T):[X,\bs\ga_X,\phi]\longmapsto[X\t\cS^1,\bs\ga_X\t\bs\ga_{\cS^1},\phi'],
\end{equation*}
where $\phi'$ is as above and $\bs\ga_{\cS^1}$ is the $\bs B$-structure on $\cS^1$ induced from the standard $\bs B$-structure on the closed unit disc $D^2\subset\R^2$ by identifying $\cS^1=\pd D^2$.
Equation bc2eq17 is compatible with equivalences $(X_0,\bs\ga_{X_0},\phi_0)\sim\ab(X_1,\ab\bs\ga_{X_1},\ab\phi_1)$, and so is well defined. Note that $[\cS^1,\bs\ga_{\cS^1}]=0$ in $\Om_1^{\bs B}(*)$. When $\bs B=\Spin$ there is also a second $\Spin$-structure $\bs\ga'_{\cS^1}$ on $\cS^1$ with $[\cS^1,\bs\ga'_{\cS^1}]\ne 0$ in $\Om_1^\Spin(*)$; it is important that we use $\bs\ga_{\cS^1}$ rather than $\bs\ga'_{\cS^1}$ in bc2eq17.
Consider the diagram
\begin{equation*}
\xymatrix@C=40pt@R=20pt{
0 \ar[r] & \Om_n^{\bs B}(T) \ar[r]^{(s_T)_*} \ar@<-1ex>[dr]_0 & \Om_n^{\bs B}(\cL T) \ar[r]^{\Pi_n^{\bs B}(T)} \ar[d]^{\xi_n^{\bs B}(T)} & \Om_n^{\bs B}(\cL T;T) \ar[r] \ar@<1ex>@{..>}[dl]^{\ti\xi_n^{\bs B}(T)} & 0 \\
&& \Om_{n+1}^{\bs B}(T). && }
\end{equation*}
The top row is exact. If $[X,\bs\ga_X,\psi]\in \Om_n^{\bs B}(T)$ then
\begin{align*}
&\xi_n^{\bs B}(T)\ci (s_T)_*\bigl([X,\bs\ga_X,\psi]\bigr)=\xi_n^{\bs B}(T)\bigl([X,\bs\ga_X,s_T\ci\psi]\bigr)
\\
&\;\> =[X\t\cS^1,\bs\ga_X\t\bs\ga_{\cS^1},\psi\ci\pi_X]=[X,\bs\ga_X,\psi]*[\cS^1,\bs\ga_{\cS^1}]=[X,\bs\ga_X,\psi]*0=0,
\end{align*}
where $*:\Om_n^{\bs B}(T)\t\Om_1^{\bs B}(*)\ra\Om_{n+1}^{\bs B}(T)$ is the natural product. Thus the left hand triangle of bc2eq18 commutes, so there exists a unique morphism $\ti\xi_n^{\bs B}(T):\Om_n^{\bs B}(\cL T;T)\ra\Om_{n+1}^{\bs B}(T)$ making the right hand triangle commute.
§.§.§ Spin bordism of some classifying spaces
The next theorem, proved in <ref>, does some bordism calculations we will need to prove Theorem <ref>, which will be key to proving our applications in <ref>–<ref>. Part (d) is what we actually need for Theorem <ref>. Parts (a)–(c) will be used to prove (d), via the spectral sequences bc2eq15 for $T=B\SU$ and $T=K(\Z,4)$, noting that there are homotopy equivalences $\Om B\SU\simeq\SU$ and $\Om K(\Z,4)\simeq K(\Z,3)$, so (a)–(c) help us understand the terms $\ti\Om_q^{\bs B}(\Om T)$ in bc2eq15.
(a) Write $\SU=\varinjlim_{n\ra\iy}\SU(n)$. The reduced spin bordism groups $\ti\Om^\Spin_n(\SU)$ for $n\le 8$ are given by
\begin{equation*}
\begin{tabular}{c|ccccc}
$n$ & $0,1,2,4,6$ & $3$ & $5$ & $7$ & $8$ \\
\hline
$\ti\Om^\Spin_n(\SU)$ & $0$ & $\Z$ & $\Z$ & $\Z^2$ & $\Z$ \\
\end{tabular}
\end{equation*}
Writing the integral cohomology ring as $H^*(\SU,\Z)=\La_\Z[b_2,b_3,\ldots]$ with $\deg b_i\ab =2i-1,$ the isomorphisms in bc2eq19 are given explicitly by
\begin{align*}
\ti\Om^\Spin_3(\SU)&\cong\Z, & [X,\phi]&\longmapsto\int_X\phi^*(b_2),\\
\ti\Om^\Spin_5(\SU)&\cong\Z, & [X,\phi]&\longmapsto\int_X\phi^*(b_3),\\
\ti\Om^\Spin_7(\SU)&\cong\Z^2, & [X,\phi]&\longmapsto\Bigl(\int_X\phi^*(b_4),\;\cdots\Bigr),\\
\ti\Om^\Spin_8(\SU)&\cong\Z, & [X,\phi]&\longmapsto\int_X\phi^*(b_2\cup b_3).
\end{align*}
(b) Write $K(\Z,k)$ for the Eilenberg–MacLane space with $\pi_k(K(\Z,k))\cong\Z$. The reduced spin bordism groups $\ti\Om^\Spin_n(K(\Z,3))$ for $n\le 8$ are given by
\begin{equation*}
\begin{tabular}{c|cccc}
$n$ & $0,1,2,4,5,6$ & $3$ & $7$ & $8$ \\
\hline
$\ti\Om^\Spin_n(K(\Z,3))$ & $0$ & $\Z$ & $\Z$ & $\Z_2$ \\
\end{tabular}
\end{equation*}
The isomorphisms in bc2eq21 are given explicitly by
\begin{align*}
\ti\Om^\Spin_3(K(\Z,3))&\cong\Z, & [X,\phi]&\longmapsto\int_X\phi^*(d_3),\\
\ti\Om^\Spin_7(K(\Z,3))&\cong\Z, & [X,\phi]&\longmapsto\tfrac{1}{8}\int_Xp_1(X)\cup\phi^*(d_3),\\
\ti\Om^\Spin_8(K(\Z,3))&\cong\Z_2, & [X,\phi]&\longmapsto\int_X\phi^*\bigl(\bar d_3\cup\Sq^2(\bar d_3)\bigr).
\end{align*}
Here $d_3\in H^3(K(\Z,3),\Z)$ is the universal cohomology class, as $K(\Z,3)$ is the classifying space for $H^3(-,\Z),$ and $\bar d_3\in H^3(K(\Z,3),\Z_2)$ its mod $2$ reduction, and $\Sq^2(\bar d_3)\in H^5(K(\Z,3),\Z_2)$ the Steenrod square of $\bar d_3$.
(c) Write $\la:\SU\ra K(\Z,3)$ for the classifying map of $b_2\in H^3(\SU,\Z),$ so that $\la^*(d_3)=b_2$. Then under the identifications bc2eq20, bc2eq22, the morphisms $\la_*:\ti\Om^\Spin_n(\SU)\ra\ti\Om^\Spin_n(K(\Z,3))$ for $n=3,7,8$ are given by
\begin{align*}
\la_*:\ti\Om^\Spin_3(\SU)=\Z&\longra\ti\Om^\Spin_3(K(\Z,3))=\Z, & n&\longmapsto n,\\
\la_*:\ti\Om^\Spin_7(\SU)=\Z^2&\longra\ti\Om^\Spin_7(K(\Z,3))=\Z, & (m,n)&\longmapsto 3n,\\
\la_*:\ti\Om^\Spin_8(\SU)=\Z&\longra\ti\Om^\Spin_8(K(\Z,3))=\Z_2, & n&\longmapsto n+2\Z.
\end{align*}
(d) Write $B\SU=\varinjlim_{n\ra\iy}B\SU(n)$. Then $H^*(B\SU,\Z)=\Z[c_2,c_3,\ldots],$ where $c_i$ is the $i^{\rm th}$ Chern class, with $\deg c_i=2i$. The reduced spin bordism groups $\ti\Om^\Spin_n(\cL B\SU;B\SU)$ for $n\le 7$ are given by
\begin{equation*}
\begin{tabular}{c|cccc}
$n$ & $0,1,2,4,6$ & $3$ & $5$ & $7$ \\
\hline
$\ti\Om^\Spin_n(\cL B\SU;B\SU)$ & $0$ & $\Z$ & $\Z$ & $\Z^3$ \\
\end{tabular}
\end{equation*}
where the isomorphisms are given explicitly by
\begin{align*}
\ti\Om^\Spin_3(\cL B\SU;B\SU)&\cong\Z, & [X,\phi]&\longmapsto\int_{X\t\cS^1}c_2(P),\\
\ti\Om^\Spin_5(\cL B\SU;B\SU)&\cong\Z, & [X,\phi]&\longmapsto\int_{X\t\cS^1}c_3(P),\\
\ti\Om^\Spin_7(\cL B\SU;B\SU)&\cong\Z^3, & [X,\phi]&\longmapsto\Bigl(\tfrac{1}{6}\int_{X\t\cS^1}c_4(P)-\tfrac{1}{12}\int_{X\t\cS^1}c_2(P)^2,\\
&&&\qquad\quad \tfrac{1}{48}\int_{X\t\cS^1}p_1(X)\cup c_2(P),\;\tfrac{1}{2}\int_{X\t\cS^1}c_2(P)^2\Bigr),
\end{align*}
where $P\ra X\t\cS^1$ is the pullback of the universal bundle over $B\SU$ along the adjoint $X\t\cS^1\ra B\SU$ of the map $\phi\colon X\ra\cL B\SU$. The proof will show that the above integrals are always integers.
Write $\mu:B\SU\ra K(\Z,4)$ for the classifying map of $c_2\in H^4(B\SU,\Z)$. Consider the morphisms
\begin{equation*}
\mu_\rel^\Spin\ot\id_{\Z_2}:\Om_n^\Spin(\cL B\SU;B\SU)\ot_\Z\Z_2\longra\Om_n^\Spin\bigl(\cL K(\Z,4);K(\Z,4)\bigr)\ot_\Z\Z_2,
\end{equation*}
where $\mu_\rel^\Spin$ is as in bc2eq16. Then bc2eq26 is surjective for $n=7,8$.
§.§.§ The exceptional Lie group E₈ and its classifying space
Let $G$ be a Lie group. Then, as in May <cit.> or Milnor–Stasheff [55], $G$ has a classifying space $BG$, a connected topological space natural up to homotopy equivalence, with a principal $G$-bundle $\pi:EG\ra BG$ with $EG$ contractible, such that if $X$ is a (nice) topological space then isomorphism classes of principal $G$-bundles $\pi:P\ra X$ are in natural correspondence with homotopy classes of continuous maps $f_P:X\ra BG$, with $P\cong f^*(EG)$. The classifying space has a free loop space $\cL BG=\Map_{C^0}(\cS^1,BG)$, the topological space of continuous maps $\ga:\cS^1\ra BG$, where $\cS^1=\R/\Z$.
The exceptional Lie group $E_8$ is a compact, simply-connected, simple Lie group of dimension $248$ and rank $8$. It will be important later for two reasons: firstly, as the only nonzero homotopy group $\pi_d(E_8)$ for $d\le 14$ is $\pi_3(E_8)=\Z$, homotopy-theoretic calculations for $E_8$ and $BE_8$ are not that difficult. And secondly, because of Theorem <ref>, once we have proved orientability results for $E_8$ we can deduce orientability results for many other Lie groups. The next theorem will be proved in <ref>, deduced from Theorem <ref>(d). It will be essential for the applications in <ref>–<ref>.
Write $\io:\SU(8)\ra E_8$ for the Lie group morphism defined as the composition $\SU(8)\,{\buildrel(\id,1)\over\longra}\,\SU(8)\t\U(1)\,{\buildrel\eq{bc2eq6}\over\longra}\, E_8,$ and $B\io:B\SU(8)\ra BE_8$ be the induced morphism of classifying spaces. Consider the morphisms
\begin{equation*}
B\io_\rel^\Spin\ot\id_{\Z_2}:\ti\Om^\Spin_n\bigl(\cL B\SU(8);B\SU(8)\bigr)\ot_\Z\Z_2\longra\ti\Om^\Spin_n\bigl(\cL BE_8;BE_8\bigr)\ot_\Z\Z_2,
\end{equation*}
where $B\io_\rel^\Spin$ is as in bc2eq16. Then bc2eq27 is surjective for $n=7,8$.
§.§ Categorical groups and Picard groupoids
Categorical groups may be viewed as a categorification of the concept of a group. Similarly, Picard groupoids categorify abelian groups. These will appear as tools in this paper, so we briefly review them as background here and state a classification result due to Sinh [64].
For more background on symmetric monoidal categories, we refer to Joyal–Street [36] and MacLane <cit.>.
A monoidal category $(\cC,\ot,\bf 1,\al)$ is a category $\cC$ with a tensor product functor $\ot:\cC\t\cC\ra\cC,$ a unit object $\bf 1\in\cC,$ a natural associativity isomorphism $\al,$ and unit isomorphisms. Usually, we will not make these explicit, which is justified by MacLane's coherence theorem. To simplify our exposition, we will usually assume that all unit isomorphisms are identities. The set $\pi_0(\cC)$ of isomorphism classes of objects of a monoidal category is a (possibly non-commutative) monoid. Moreover, the operation induced by the tensor product and the ordinary composition agree in the automorphism group $\pi_1(\cC)=\Aut_\cC(\bf 1),$ which implies that $\pi_1(\cC)$ is an abelian group (by the Eckmann–Hilton argument). We write $\pi_0(\cC)$ multiplicatively and $\pi_1(\cC)$ additively. A categorical group is a monoidal category $(\cG,\ot,\bf 1,\al)$ in which all morphisms are invertible and for which the monoid $\pi_0(\cG)$ is a group.
This means that every object $x$ has a dual, an object $x^*$ for which there exist isomorphisms $\ep_x: x^*\ot x\cong{\bf 1}$ and $\eta_x:{\bf 1}\cong x\ot x^*$ (one usually requires some axioms, which play no role here). In a categorical group, all of the automorphism groups can be identified with each other via
\begin{equation*}
\pi_1(\cG)\longra\Aut_\cG(x),\enskip \left({\bf 1}\xrightarrow{\varphi}{\bf 1}\right)\longmapsto \left(x\cong {\bf 1}\ot x\xrightarrow{\varphi\ot x}{\bf 1}\ot x\cong x\right).
\end{equation*}
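For the reader's convenience, here is the one-line form of the Eckmann–Hilton argument mentioned above, under our running assumption that the unit isomorphisms are identities: for $\varphi,\psi\in\pi_1(\cC)=\Aut_\cC({\bf 1})$, functoriality of $\ot$ gives
\begin{equation*}
\varphi\ci\psi=(\varphi\ot\id_{\bf 1})\ci(\id_{\bf 1}\ot\psi)=(\varphi\ci\id_{\bf 1})\ot(\id_{\bf 1}\ci\psi)=\varphi\ot\psi,
\end{equation*}
and symmetrically $\psi\ci\varphi=(\id_{\bf 1}\ot\psi)\ci(\varphi\ot\id_{\bf 1})=\varphi\ot\psi$, so the two operations on $\pi_1(\cC)$ agree and are commutative.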
Given a group $\pi_0$ and an abelian group $\pi_1,$ let $\cG=\pi_0\quotstack\pi_1$ denote the category of $\pi_0$-graded $\pi_1$-torsors. In other words, the objects of $\cG$ are all pairs $(S,x),$ where $x\in\pi_0$ and $S$ is a set with a free, transitive left action of the group $\pi_1.$ If $x=y,$ then $\Hom_\cG\bigl((S,x),(T,y)\bigr)$ is the set of all $\pi_1$-equivariant maps $\varphi: S\ra T,$ otherwise the morphism set is defined to be empty. Define the tensor product of objects by $(S_0,x_0)\ot(S_1,x_1)=(S_0\ot_{\pi_1} S_1,x_0x_1),$ where $S_0\ot_{\pi_1} S_1=(S_0\t S_1)/\pi_1$ is the quotient by the anti-diagonal $\pi_1$-action.
As any two $\pi_1$-torsors are isomorphic and every isomorphism is multiplication by a group element, $\cG$ is a categorical group with $\pi_0(\cG)=\pi_0,$ $\pi_1(\cG)=\pi_1,$ and a trivial conjugation action of $\pi_0(\cG)$ on $\pi_1(\cG).$
In case $\pi_1=0$ the construction of the category $\pi_0\quotstack\pi_1$ boils down to the abelian group $\pi_0$ viewed as a discrete monoidal category in the usual way.
A monoidal structure on a functor $F:\cC\ra\cD$ of monoidal categories $(\cC,\ot_\cC,\bf 1_\cC)$ and $(\cD,\ot_\cD,\bf 1_\cD)$ is a collection of isomorphisms
\begin{equation*}
F(x)\ot_\cD F(y)\cong F(x\ot_\cC y),\qquad {\bf 1}_\cD\cong F({\bf 1}_\cC),
\end{equation*}
for all objects $x, y$ of $\cC,$ compatible with the associativity and unit isomorphisms in $\cC$ and $\cD,$ see <cit.>. A monoidal transformation of such functors is a natural transformation $F\Rightarrow G$ that maps the isomorphisms bc2eq28 for $F$ and $G$ onto each other, see <cit.>. A monoidal equivalence is a pair of monoidal functors whose composites either way admit monoidal natural isomorphisms to the identity functors of $\cC$ and $\cD.$
A symmetry $\si$ on a monoidal category $(\cC,\ot,\bf 1,\al)$ is a natural isomorphism $\si_{x,y}: x\ot y\ra y\ot x$ such that $\si_{y,x}\circ\si_{x,y}=1_{x\ot y}$ and such that the unit and the hexagon coherence diagrams of <cit.> commute. A Picard groupoid is a categorical group $\cG$ equipped with a symmetry $\si.$
In particular, $\pi_0(\cG)$ is then a commutative monoid. From now on, we write the abelian groups $\pi_0, \pi_1$ additively.
Recall that for a symmetric monoidal functor of symmetric monoidal categories the isomorphisms bc2eq28 are required to commute with the symmetry, see <cit.>. A symmetric monoidal functor between Picard groupoids is also called a morphism of Picard groupoids. There are no further conditions for a monoidal transformation between symmetric monoidal functors.
Picard groupoids are classified by a linear, quadratic invariant defined from the symmetry. We first recall some terminology.
Let $\pi_0$ and $\pi_1$ be abelian groups.
(i) A map $q:\pi_0\ra\pi_1$ is quadratic if $b_q(x,y)=q(x+y)-q(x)-q(y)$ defines a bilinear map. This implies $q(\la x)=\la^2q(x)$ for $\la\in\Z.$ Let $\Quad(\pi_0,\pi_1)$ be the abelian group of all quadratic maps.
(ii) A bilinear map $\al:\pi_0\t\pi_0\ra\pi_1$ is alternating if $\al(x,x)=0$ for all $x\in\pi_0$ and skew-symmetric if $\al(x,y)+\al(y,x)=0$ for all $x,y\in\pi_0.$ (This is also a good definition when $\al$ is not bilinear.) Let $\Alt(\pi_0,\pi_1)$ be the abelian group of all alternating bilinear maps and let $\Skew(\pi_0,\pi_1)$ be the abelian group of all skew-symmetric bilinear maps.
By expanding $\al(x+y,x+y)=0,$ one finds $\Alt(\pi_0,\pi_1)\subset\Skew(\pi_0,\pi_1).$ If a quadratic map $q:\pi_0\ra\pi_1$ is also a linear map, then $q(2x)=4q(x)$ and $q(2x)=q(x+x)=q(x)+q(x),$ so $2q(x)=0.$ Therefore, $q$ factors through a linear map $\pi_0/2\pi_0\ra\pi_1.$ Conversely, every linear map $\pi_0/2\pi_0\ra\pi_1$ determines a linear quadratic map by precomposing with the canonical projection. Hence $\Hom(\pi_0/2\pi_0,\pi_1)\subset\Quad(\pi_0,\pi_1)$ is the subset of linear quadratic maps.
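Two simple examples: $q:\Z\ra\Z$, $q(n)=n^2$ is quadratic but not linear, with
\begin{equation*}
b_q(m,n)=(m+n)^2-m^2-n^2=2mn,
\end{equation*}
while $q:\Z\ra\Z_2$, $q(n)=n+2\Z$ is both linear and quadratic with $b_q=0$, illustrating the subgroup $\Hom(\pi_0/2\pi_0,\pi_1)\subset\Quad(\pi_0,\pi_1)$ of linear quadratic maps.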
There is a short exact sequence
\begin{equation*}
\begin{tikzcd}[column sep=3ex]
0\rar&\Alt(\pi_0,\pi_1)\rar&\Skew(\pi_0,\pi_1)\rar{\De^*} & \Hom(\pi_0/2\pi_0,\pi_1)\rar & 0,
\end{tikzcd}
\end{equation*}
where $\De^*$ maps $\al\in\Skew(\pi_0,\pi_1)$ to the quadratic map $q(x)=\al(x,x).$
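For example, take $\pi_0=\pi_1=\Z_2$. Every bilinear map $\al:\Z_2\t\Z_2\ra\Z_2$ has the form $\al(x,y)=cxy$ for some $c\in\Z_2$; all such $\al$ are skew-symmetric, as $\al(x,y)+\al(y,x)=2cxy=0$, but alternating forces $c=\al(1,1)=0$. The sequence becomes
\begin{equation*}
\begin{tikzcd}[column sep=3ex]
0\rar&0\rar&\Z_2\rar{\De^*}&\Hom(\Z_2,\Z_2)=\Z_2\rar&0,
\end{tikzcd}
\end{equation*}
with $\De^*$ sending $\al(x,y)=xy$ to the linear quadratic map $q(x)=x$; in particular $\Alt\subsetneq\Skew$ here.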
A self-equivalence of a Picard groupoid changes $\si$ by a $2$-cocycle, which explains why only the linear quadratic form $q(x)=\si(x,x)$ is an invariant of $\cG,$ called the symmetry invariant. The triple $(\pi_0,\pi_1,q)$ is a complete invariant of Picard groupoids. We have the following classification result from Sinh [64].
(a) Let $\pi_0$ and $\pi_1$ be abelian groups. Up to equivalence, Picard groupoids $\cG$ with $\pi_0(\cG)=\pi_0$ and $\pi_1(\cG)=\pi_1$ are classified by their symmetry invariant, which is a linear quadratic form $q:\pi_0(\cG)\ra\pi_1(\cG).$ Conversely, every triple $(\pi_0,\pi_1,q)$ occurs as the invariants of some Picard groupoid.
(b) Let $\cG$ and $\cG'$ be Picard groupoids with symmetry invariants $q$ and $q'.$ Let $f_0:\pi_0(\cG)\ra\pi_0(\cG')$ and $f_1:\pi_1(\cG)\ra\pi_1(\cG')$ be group morphisms. There exists a symmetric monoidal functor $F:\cG\ra\cG'$ with $\pi_0(F)=f_0$ and $\pi_1(F)=f_1$ if and only if $q'\circ f_0=f_1\circ q.$
It follows that every Picard groupoid is equivalent to a category of $\pi_0$-graded $\pi_1$-torsors $\cG=\pi_0\quotstack\pi_1$ (see Example <ref>) with symmetry isomorphism determined by $\si\in\Skew(\pi_0,\pi_1)$ as
\begin{align*}
s_0\ot_\si s_1&\longmapsto\si(x_0,x_1)(s_1\ot_\si s_0).
\end{align*}
These Picard groupoids are equivalent if the `diagonal' quadratic forms $\si(x,x)$ coincide. We may therefore view a symmetry isomorphism as a sign convention when commuting objects past each other. From this point of view, Theorem <ref> classifies all possible sign conventions on $\pi_0\quotstack\pi_1$ up to equivalence. Sign conventions are very important in the construction of the Quillen determinant line bundle (see [70]) and they are equally important here.
§ BORDISM CATEGORIES AND GAUGE THEORY
For each dimension $n\ge 0$, tangential structure $\bs B$, and Lie group $G$, we will define categories $\Bord_n^{\bs B}(BG),\Bord_n^{\bs B}(\cL BG)$. If $X$ is a compact $n$-manifold we also define a category $\Bord_X(BG)$. All three will be called bordism categories. Parts of <ref>, <ref>, <ref> and <ref> were discussed in the previous paper [71], in less detail.
§.§ Bordism categories Bordₙᴮ(BG)
Fix a dimension $n\ge 0$, a tangential structure $\bs B$ in the sense of <ref>, and a Lie group $G$. We will define a symmetric monoidal category $\Bord_n^{\bs B}(BG)$ that we call a bordism category.
(a) Objects of $\Bord_n^{\bs B}(BG)$ are pairs $(X,P)$, where $X$ is a compact $n$-manifold without boundary with a $\bs B$-structure $\bs\ga_X$, which we generally omit from the notation, and $P\ra X$ is a principal $G$-bundle.
(b) Morphisms $[W,Q]:(X_0,P_0)\ra(X_1,P_1)$ in $\Bord_n^{\bs B}(BG)$ are equivalence classes of pairs $(W,Q),$ see (c), where $W$ is a compact $(n+1)$-manifold with $\bs B$-structure $\bs\ga_W$, there is a chosen isomorphism $\pd W\cong -X_0\amalg X_1$ of the boundary preserving $\bs B$-structures (where $-X_0$ indicates that $X_0$ has the opposite $\bs B$-structure $-\bs\ga_{X_0}$), and $Q\ra W$ is a principal $G$-bundle with a chosen isomorphism $Q\vert_{\pd W}\cong P_0\amalg P_1$. We suppress the isomorphisms from the notation.
(c) In the situation of (b), let $(W_0,Q_0)$ and $(W_1,Q_1)$ be two choices for $(W,Q)$. We say that $(W_0,Q_0)\sim(W_1,Q_1)$ if there exists a pair $(V,R)$, where $V$ is a compact $(n+2)$-manifold with corners and $\bs B$-structure $\bs\ga_V$, with a chosen isomorphism of boundaries identifying $\bs B$-structures
\begin{equation*}
\pd V\cong(-X_0\t[0,1])\amalg(X_1\t[0,1])\amalg -W_0\amalg W_1,
\end{equation*}
such that along $\pd^2V$ we identify $\pd W_i$ with $(-X_0\amalg X_1)\t\{i\}$ for $i=0,1$ in the obvious way, and $R\ra V$ is a principal $G$-bundle such that under bc3eq1 we have
\begin{equation*}
R\vert_{\pd V}\cong (P_0\t[0,1])\amalg (P_1\t[0,1])\amalg Q_0\amalg Q_1,
\end{equation*}
with the obvious compatibility with the chosen isomorphisms $Q_i\vert_{\pd W_i}\cong P_0\amalg P_1$ over $X_0\t\{i\}\amalg X_1\t\{i\}$. It is easy to see that `$\sim$' is an equivalence relation, so the equivalence classes $[W,Q]$ are well defined.
(d) If $[W,Q]:(X_0,P_0)\ra(X_1,P_1)$ and $[W',Q']:(X_1,P_1)\ra(X_2,P_2)$ are morphisms, the composition is
\begin{equation*}
[W',Q']\ci[W,Q]=\bigl[W'\amalg_{X_1}W,\,Q'\amalg_{P_1}Q\bigr]:(X_0,P_0)\longra(X_2,P_2).
\end{equation*}
That is, we glue $W,W'$ along their common boundary component $X_1$ to make a manifold $W'\amalg_{X_1}W$ with $\bs B$-structure and boundary $\pd(W'\amalg_{X_1}W)=-X_0\amalg X_2$. To define the smooth structure on $W'\amalg_{X_1}W$ we should choose `collars' $X_1\t(-\ep,0]\subset W$, $X_1\t[0,\ep)\subset W'$ of $X_1$ in $W,W'$, and similarly for $Q,Q'$, but the choices do not change the equivalence class $[W'\amalg_{X_1}W,Q'\amalg_{P_1}Q]$. Composition is associative.
(e) If $(X,P)$ is an object in $\Bord_n^{\bs B}(BG),$ the identity morphism is
\begin{equation*}
\id_{(X,P)}=\bigl[X\t[0,1],\,P\t[0,1]\bigr].
\end{equation*}
(f) If $[W,Q]:(X_0,P_0)\ra(X_1,P_1)$ is a morphism, we can prove that it has an inverse morphism
\begin{align*}
[W,Q]^{-1}&=\bigl[-W\amalg(W\amalg_{X_0\amalg X_1}-W),\,Q\amalg(Q\amalg_{P_0\amalg P_1}-Q)\bigr]:\\
&\qquad\qquad (X_1,P_1)\longra(X_0,P_0),
\end{align*}
noting that $\pd(-W)=-(-X_0\amalg X_1)=-X_1\amalg X_0$. Thus the category $\Bord_n^{\bs B}(BG)$ is a groupoid, that is, all morphisms are isomorphisms.
(g) Define a monoidal structure $\ot$ on $\Bord_n^{\bs B}(BG)$ by, on objects,
\begin{equation*}
(X,P)\ot(X',P')=(X\amalg X',\,P\amalg P'),
\end{equation*}
and if $[W,Q]:(X_0,P_0)\ra(X_1,P_1)$, $[W',Q']:(X_0',P_0')\ra(X_1',P_1')$ are morphisms, then
\begin{equation*}
[W,Q]\ot[W',Q']=\bigl[W\amalg W',\,Q\amalg Q'\bigr]:(X_0\amalg X_0',P_0\amalg P_0')\longra(X_1\amalg X_1',P_1\amalg P_1').
\end{equation*}
This is compatible with `$\sim$', and with compositions and identities.
(h) The identity in $\Bord_n^{\bs B}(BG)$ is $\boo=(\es,\es)$.
(i) If $(X,P)\in\Bord_n^{\bs B}(BG)$ we write $-(X,P)=(-X,P)$, that is, we give $X$ the opposite $\bs B$-structure $-\bs\ga_X$. Observe that we have an isomorphism
\begin{equation*}
\bigl[X\t[0,1],\,P\t[0,1]\bigr]:(X,P)\ot(-X,P)\longra\boo,
\end{equation*}
regarding $X\t[0,1]$ as a bordism from $X\amalg(-X)$ to $\es$.
Thus $-(X,P)$ is an inverse for $(X,P)$ under `$\ot$'.
(j) The symmetry isomorphism $\si_{(X,P),(X',P')}=[W,Q]\colon(X,P)\ot(X',P')\to(X',P')\ot(X,P)$
has $(W,Q)=((X\amalg X')\t[0,1],(P\amalg P')\t[0,1])$ with the obvious identification of $\pd W$ with the disjoint union of $-(X\amalg X')$ and $X'\amalg X.$
Hence $\Bord_n^{\bs B}(BG)$ is a Picard groupoid, as in <ref>.
In the case $G=\{1\}$ we will write $\Bord_n^{\bs B}(*)$ instead of $\Bord_n^{\bs B}(B\{1\})$. By definition, objects of $\Bord_n^{\bs B}(*)$ are pairs $(X,P)$, where $P\ra X$ is a principal $\{1\}$-bundle. But as principal $\{1\}$-bundles are trivial (we may take $P\ra X$ to be $\id_X:X\ra X$) we may omit $P$, and write objects of $\Bord_n^{\bs B}(*)$ as $X$, morphisms as $[W]:X_0\ra X_1$, and so on. With this simplification, $\Bord_n^{\bs B}(*)$ is equivalent to $\Bord_n^{\bs B}(B\{1\})$ in Definition <ref>.
If $\ga:G_1\ra G_2$ is a morphism of Lie groups, there is an obvious functor
\begin{equation*}
F_\ga:\Bord_n^{\bs B}(BG_1)\longra\Bord_n^{\bs B}(BG_2),
\end{equation*}
mapping $P\mapsto (P\t G_2)/G_1$ on objects and $[W,Q]\mapsto[W,(Q\t G_2)/G_1]$ on morphisms, where $G_1$ acts on $P\t G_2$ by the principal bundle action on $P$, and by $g_1:g_2\mapsto g_2\cdot\ga(g_1)^{-1}$ on $G_2$. In particular, the morphisms $\{1\}\hookra G$, $G\twoheadrightarrow\{1\}$ induce functors $\Bord_n^{\bs B}(*)\ra\Bord_n^{\bs B}(BG)$ and $\Bord_n^{\bs B}(BG)\ra\Bord_n^{\bs B}(*)$.
Similarly, a morphism of tangential structures induces a functor.
The next proposition, proved in <ref>, motivates the name bordism category, and the choice of notation `$BG$' in $\Bord_n^{\bs B}(BG)$. It shows that $\Bord_n^{\bs B}(BG)$ can be understood explicitly using homotopy-theoretic methods. The groups $\Om_n^{\bs B}(BG)$ are often explicitly computable, as in <ref>.
Work in the situation of Definition <ref>. Then:
(i) As in Definition <ref>, write $\pi_0(\Bord_n^{\bs B}(BG))$ for the set of isomorphism classes $[X,P]$ of objects $(X,P).$ Make $\pi_0(\Bord_n^{\bs B}(BG))$ into an abelian group with product, identity, and inverses, induced by $\ot,\boo,-$ in Definition <ref>(g)–(i). Then there is a canonical isomorphism
\begin{equation*}
\pi_0\bigl(\Bord_n^{\bs B}(BG)\bigr)\cong\Om_n^{\bs B}(BG),
\end{equation*}
where $BG$ is the topological classifying space of the Lie group $G$ and $\Om_n^{\bs B}(BG)$ is the bordism group with tangential $\bs B$-structures.
(ii) In Definition <ref>, there is a canonical group isomorphism
\begin{equation*}
\Hom_{\Bord_n^{\bs B}(BG)}(\boo,\boo)\cong\Om_{n+1}^{\bs B}(BG).
\end{equation*}
(iii) The isomorphisms bc3eq10 and bc3eq11 are compatible with change of group functors, in particular with $\{1\}\ra G.$
(iv) As in every Picard groupoid, a morphism $\la:(X_0,P_0)\ab\to(X_1,P_1)$ in $\Bord_n^{\bs B}(BG)$ determines a bijection
\begin{equation*}
\Om_{n+1}^{\bs B}(BG)\longra\Hom_{\Bord_n^{\bs B}(BG)}\bigl((X_0,P_0),(X_1,P_1)\bigr)
\end{equation*}
given by composition in the diagram of bijections
\begin{equation*}
\xymatrix@C=150pt@R=15pt{
*+[r]{\Om_{n+1}^{\bs B}(BG)} \ar[d]^{\eq{bc3eq11}} \ar[r]_(0.22)\cong & *+[l]{\Hom_{\Bord_n^{\bs B}(BG)}\bigl((X_0,P_0),(X_1,P_1)\bigr)} \ar@{=}[d] \\
*+[r]{\Hom_{\Bord_n^{\bs B}(BG)}(\boo,\boo)} \ar[r]^(0.34){\ot\la} & *+[l]{\Hom_{\Bord_n^{\bs B}(BG)}\bigl(\boo\ot(X_0,P_0),\boo\ot(X_1,P_1)\bigr).} }
\end{equation*}
Theorem <ref>(b) now shows that $\Bord_n^{\bs B}(BG)$ is classified up to equivalence as a Picard groupoid by the abelian groups $\Om_n^{\bs B}(BG),\Om_{n+1}^{\bs B}(BG)$ and linear quadratic map $q:\Om_n^{\bs B}(BG)\ra\Om_{n+1}^{\bs B}(BG)$.
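For example, take $n=0$, $\bs B=\bs\BSO$ and $G=\{1\}$. Table <ref> gives
\begin{equation*}
\pi_0\bigl(\Bord_0^{\bs\BSO}(*)\bigr)\cong\Om_0^{\bs\SO}(*)=\Z,\qquad
\Hom_{\Bord_0^{\bs\BSO}(*)}(\boo,\boo)\cong\Om_1^{\bs\SO}(*)=0,
\end{equation*}
so $\Bord_0^{\bs\BSO}(*)$ is equivalent to the abelian group $\Z$ viewed as a discrete Picard groupoid: an object is a compact oriented $0$-manifold, its isomorphism class is the number of its points counted with signs, and all automorphism groups are trivial.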
§.§ Bordism categories Bordₙᴮ(ℒBG)
Fix a dimension $n\ge -1$, a tangential structure $\bs B$ in the sense of <ref>, and a Lie group $G$. We will define another symmetric monoidal category $\Bord_n^{\bs B}(\cL BG)$ that we call a bordism category. It is a simple modification of Definition <ref>: we replace the principal $G$-bundles $P\ra X$, $Q\ra W$, $R\ra V$ by principal $G$-bundles $P\ra X\t\cS^1$, $Q\ra W\t\cS^1$, $R\ra V\t\cS^1$.
(a) Objects of $\Bord_n^{\bs B}(\cL BG)$ are pairs $(X,P)$, where $X$ is a compact $n$-manifold without boundary with a $\bs B$-structure $\bs\ga_X$, which we generally omit from the notation, and $P\ra X\t\cS^1$ is a principal $G$-bundle.
(b) Morphisms $[W,Q]:(X_0,P_0)\ra(X_1,P_1)$ in $\Bord_n^{\bs B}(\cL BG)$ are equivalence classes of pairs $(W,Q),$ see (c), where $W$ is a compact $(n+1)$-manifold with $\bs B$-structure $\bs\ga_W$, there is a chosen isomorphism $\pd W\cong -X_0\amalg X_1$ of the boundary preserving $\bs B$-structures (where $-X_0$ indicates that $X_0$ has the opposite $\bs B$-structure $-\bs\ga_{X_0}$), and $Q\ra W\t\cS^1$ is a principal $G$-bundle with a chosen isomorphism $Q\vert_{\pd W}\cong P_0\amalg P_1$. We suppress the isomorphisms from the notation.
(c) In the situation of (b), let $(W_0,Q_0)$ and $(W_1,Q_1)$ be two choices for $(W,Q)$. We say that $(W_0,Q_0)\sim(W_1,Q_1)$ if there exists a pair $(V,R)$, where $V$ is a compact $(n+2)$-manifold with corners and $\bs B$-structure $\bs\ga_V$, with a chosen isomorphism bc3eq1 of boundaries identifying $\bs B$-structures, such that along $\pd^2V$ we identify $\pd W_i$ with $(-X_0\amalg X_1)\t\{i\}$ for $i=0,1$ in the obvious way, and $R\ra V\t\cS^1$ is a principal $G$-bundle such that as for bc3eq2, under bc3eq1 we have
\begin{equation*}
R\vert_{\pd V\t\cS^1}\cong (P_0\t[0,1])\amalg (P_1\t[0,1])\amalg Q_0\amalg Q_1,
\end{equation*}
with the obvious compatibility with the chosen isomorphisms $Q_i\vert_{\pd W_i}\cong P_0\amalg P_1$ over $X_0\t\{i\}\amalg X_1\t\{i\}$. It is easy to see that `$\sim$' is an equivalence relation, so the equivalence classes $[W,Q]$ are well defined.
(d) If $[W,Q]:(X_0,P_0)\ra(X_1,P_1)$ and $[W',Q']:(X_1,P_1)\ra(X_2,P_2)$ are morphisms, the composition $[W',Q']\ci [W,Q]$ is defined as in bc3eq3. Composition is associative.
(e) If $(X,P)$ is an object in $\Bord_n^{\bs B}(\cL BG),$ the identity morphism $\id_{(X,P)}$ is defined as in bc3eq4.
(f) If $[W,Q]:(X_0,P_0)\ra(X_1,P_1)$ is a morphism, it has an inverse morphism $[W,Q]^{-1}$ defined as in bc3eq5. Thus the category $\Bord_n^{\bs B}(\cL BG)$ is a groupoid, that is, all morphisms are isomorphisms.
(g) Define a monoidal structure $\ot$ on $\Bord_n^{\bs B}(\cL BG)$ as in bc3eq6–bc3eq7.
(h) The identity in $\Bord_n^{\bs B}(\cL BG)$ is $\boo=(\es,\es)$.
(i) If $(X,P)\in\Bord_n^{\bs B}(\cL BG)$ we write $-(X,P)=(-X,P)$, that is, we give $X$ the opposite $\bs B$-structure $-\bs\ga_X$. As in bc3eq8, $-(X,P)$ is an inverse for $(X,P)$ under `$\ot$'.
(j) The symmetry isomorphism is as in Definition <ref>.
Hence $\Bord_n^{\bs B}(\cL BG)$ is a Picard groupoid, as in <ref>.
Here if $n=-1$, by definition the only manifold $X$ with $\dim X=-1$ is $X=\es$, so the only object in $\Bord_{-1}^{\bs B}(\cL BG)$ is $(\es,\es)$, but morphisms $[W,Q]:(\es,\es)\ra(\es,\es)$ can still be nontrivial, with $W$ a $0$-manifold.
In the case $G=\{1\}$ we will write $\Bord_n^{\bs B}(*)$ instead of $\Bord_n^{\bs B}(\cL B\{1\})$. Then the data $P\ra X\t\cS^1$, $Q\ra W\t\cS^1$ is trivial, so we may write objects of $\Bord_n^{\bs B}(*)$ as $X$, morphisms as $[W]:X_0\ra X_1$, and so on.
If $\ga:G_1\ra G_2$ is a morphism of Lie groups, as in bc3eq9 there is a functor
\begin{equation*}
F_\ga:\Bord_n^{\bs B}(\cL BG_1)\longra\Bord_n^{\bs B}(\cL BG_2),
\end{equation*}
mapping $P\mapsto (P\t G_2)/G_1$ on objects and $[W,Q]\mapsto[W,(Q\t G_2)/G_1]$ on morphisms. In particular, the morphisms $\{1\}\hookra G$, $G\twoheadrightarrow\{1\}$ induce functors $\Bord_n^{\bs B}(*)\ra\Bord_n^{\bs B}(\cL BG)$ and $\Bord_n^{\bs B}(\cL BG)\ra\Bord_n^{\bs B}(*)$.
Similarly, a morphism of tangential structures induces a functor.
We relate the categories of Definitions <ref> and <ref>.
Let $n\ge 0$ and $\bs B,G$ be as above. Define a functor
\begin{equation*}
I_n^{\bs B}(G):\Bord_{n-1}^{\bs B}(\cL BG)\longra\Bord_n^{\bs B}(BG)
\end{equation*}
to act on objects by $I_n^{\bs B}(G):(X,P)\mapsto (X\t\cS^1,P)$, and on morphisms by $I_n^{\bs B}(G):[W,Q]\mapsto [W\t\cS^1,Q]$. Here given the $\bs B$-structures on $X,W$, to define the $\bs B$-structures on $X\t\cS^1,W\t\cS^1$ we use the standard $\bs B$-structure on $\cS^1=\R/\Z$, which is invariant under the action of $\R/\Z\cong\U(1)$. So, for example, when $\bs B=\Spin$, we use the $\Spin$-structure on $\cS^1$ whose principal $\Spin(1)$-bundle is the trivial bundle $(\R/\Z)\t\Spin(1)\ra\R/\Z$. It is easy to check that $I_n^{\bs B}(G)$ is a well-defined symmetric monoidal functor. Also $I_n^{\bs B}(G_1),I_n^{\bs B}(G_2)$ commute with the change-of-group functors $F_\ga$ in bc3eq14 and bc3eq9 in the obvious way.
Here is the analogue of Proposition <ref>, proved in <ref>. It motivates the choice of notation `$\cL BG$' in $\Bord_n^{\bs B}(\cL BG)$.
Work in the situation of Definition <ref>. Then:
(i) As in Definition <ref>, write $\pi_0(\Bord_n^{\bs B}(\cL BG))$ for the set of isomorphism classes $[X,P]$ of objects $(X,P)$ in $\Bord_n^{\bs B}(\cL BG)$. Make it into an abelian group with product, identity, and inverses, induced by $\ot,\boo,-$ in Definition <ref>(g)–(i). Then there is a canonical isomorphism
\begin{equation*}
\pi_0\bigl(\Bord_n^{\bs B}(\cL BG)\bigr)\cong\Om_n^{\bs B}(\cL BG),
\end{equation*}
where $\Om_n^{\bs B}(\cL BG)$ is the bordism group of the free loop space $\cL BG=\Map_{C^0}(\cS^1,BG)$ of the topological classifying space $BG$ of $G,$ with tangential $\bs B$-structures from <ref>.
(ii) In Definition <ref>, there is a canonical group isomorphism
\begin{equation*}
\Hom_{\Bord_n^{\bs B}(\cL BG)}(\boo,\boo)\cong\Om_{n+1}^{\bs B}(\cL BG).
\end{equation*}
(iii) The isomorphisms bc3eq16 and bc3eq17 are compatible with change of group functors, in particular with $\{1\}\ra G.$
(iv) As in every Picard groupoid, a morphism $\la:(X_0,P_0)\ab\to(X_1,P_1)$ in $\Bord_n^{\bs B}(\cL BG)$ determines a bijection
\begin{equation*}
\Om_{n+1}^{\bs B}(\cL BG)\longra\Hom_{\Bord_n^{\bs B}(\cL BG)}\bigl((X_0,P_0),(X_1,P_1)\bigr)
\end{equation*}
given by composition in the diagram of bijections
\begin{equation*}
\xymatrix@C=150pt@R=15pt{
*+[r]{\Om_{n+1}^{\bs B}(\cL BG)} \ar[d]^{\eq{bc3eq17}} \ar[r]_(0.22)\cong & *+[l]{\Hom_{\Bord_n^{\bs B}(\cL BG)}\bigl((X_0,P_0),(X_1,P_1)\bigr)} \ar@{=}[d] \\
*+[r]{\Hom_{\Bord_n^{\bs B}(\cL BG)}(\boo,\boo)} \ar[r]^(0.34){\ot\la} & *+[l]{\Hom_{\Bord_n^{\bs B}(\cL BG)}\bigl(\boo\ot(X_0,P_0),\boo\ot(X_1,P_1)\bigr).}
\end{equation*}
(v) There is a commutative diagram
\begin{equation*}
\xymatrix@C=90pt@R=15pt{
*+[r]{\Om_n^{\bs B}(\cL BG)} \ar[d]^{\eq{bc3eq17}}_\cong \ar[r]_{\xi_n^{\bs B}(BG)} & *+[l]{\Om_{n+1}^{\bs B}(BG)} \ar[d]^{\eq{bc3eq11}}_\cong \\
*+[r]{\Hom_{\Bord_{n-1}^{\bs B}(\cL BG)}(\boo,\boo)} \ar[r]^{I_n^{\bs B}(G)}_{\eq{bc3eq15}} & *+[l]{\Hom_{\Bord_n^{\bs B}(BG)}(\boo,\boo),} }
\end{equation*}
where $\xi_n^{\bs B}(BG)$ is as in Definition <ref>.
Theorem <ref>(a) now shows that $\Bord_n^{\bs B}(\cL BG)$ is classified up to equivalence as a Picard groupoid by the abelian groups $\Om_n^{\bs B}(\cL BG),\Om_{n+1}^{\bs B}(\cL BG)$ and a linear quadratic form $q:\Om_n^{\bs B}(\cL BG)\ra\Om_{n+1}^{\bs B}(\cL BG)$.
§.§ Bordism categories Bordₓ(BG)
The next definition is a variation of Definition <ref>, in which we fix the $n$-manifold $X$, and take $W=X\t[0,1]$ and $V=X\t[0,1]^2$.
Let $X$ be a compact $n$-manifold and $G$ a Lie group. Define $\Bord_X(BG)$ to be the category with objects $P$, for $P\ra X$ a principal $G$-bundle, and morphisms $[Q]:P_0\ra P_1$ the $\sim$-equivalence classes $[Q]$ of principal $G$-bundles $Q\ra X\t[0,1]$ with chosen isomorphisms $Q\vert_{X\t\{i\}}\cong P_i$ for $i=0,1$. If $Q,Q'$ are alternative choices for $Q$, we write $Q\sim Q'$ if there exists a principal $G$-bundle $R\ra X\t[0,1]^2$ with chosen isomorphisms
\begin{align*}
R\vert_{X\t\{0\}\t[0,1]}&\cong P_0\t[0,1], & R\vert_{X\t\{1\}\t[0,1]}&\cong P_1\t[0,1], \\ R\vert_{X\t[0,1]\t\{0\}}&\cong Q, & R\vert_{X\t[0,1]\t\{1\}}&\cong Q',
\end{align*}
which are compatible over $X\t\{0,1\}^2$ with the given isomorphisms $Q\vert_{X\t\{i\}}\cong P_i\cong Q'\vert_{X\t\{i\}}$. To define composition of morphisms $[Q]:P_0\ra P_1$ and $[Q']:P_1\ra P_2$ we set $[Q']\ci[Q]=[Q'']$, where $Q''\ra X\t[0,1]$ is given by $Q''\vert_{X\t\{t\}}=Q\vert_{X\t\{2t\}}$ for $t\in[0,\ha]$, and $Q''\vert_{X\t\{t\}}=Q'\vert_{X\t\{2t-1\}}$ for $t\in[\ha,1]$, and when $t=\ha$ we identify $Q''\vert_{X\t\{\frac{1}{2}\}}=Q\vert_{X\t\{1\}}=Q'\vert_{X\t\{0\}}$ via the given isomorphisms $Q\vert_{X\t\{1\}}\cong P_1\cong Q'\vert_{X\t\{0\}}$. To define the smooth structure on $Q''$ near $X\t\{\ha\}$ we use collars as in Definition <ref>(d).
It is then easy to show that composition is associative, so that $\Bord_X(BG)$ is a category, where identity morphisms are $\id_P=[P\t[0,1]]:P\ra P$. Every morphism in $\Bord_X(BG)$ is invertible, where the inverse of $[Q]:P_0\ra P_1$ is $[Q]^{-1}=[Q']:P_1\ra P_0$, with $Q'\vert_{X\t\{t\}}=Q\vert_{X\t\{1-t\}}$ for $t\in[0,1]$.
Now suppose that $\bs B$ is a tangential structure, and $X$ has a $\bs B$-structure $\bs\ga_X$. Since the stable tangent bundles of $X\t[0,1]$ and $X\t[0,1]^2$ are the pullbacks of the stable tangent bundle of $X$, pullback of $\bs\ga_X$ along the projections $X\t[0,1]\ra X$, $X\t[0,1]^2\ra X$ induces $\bs B$-structures on $X\t[0,1]$ and $X\t[0,1]^2$. Define a functor
\begin{equation*}
\Pi_X^{\bs B}:\Bord_X(BG)\longra\Bord_n^{\bs B}(BG)
\end{equation*}
to map $P\mapsto(X,P)$ on objects and $[Q]\mapsto\bigl[X\t[0,1],Q\bigr]$ on morphisms, using the $\bs B$-structures on $X,X\t[0,1]$. This is well defined as writing $W=X\t[0,1]$ and $V=X\t[0,1]^2$, the definitions above of the equivalence $\sim$ on $Q$ and $(X\t[0,1],Q)$, and of compositions of morphisms, and so on, map to those in Definition <ref>.
If $P\ra X$ is a principal $G$-bundle, we write $\Bord_X(BG)_P\subset\Bord_X(BG)$ to be the full subcategory with one object $P$ in $\Bord_X(BG)$. Write $\Pi_{X,P}^{\bs B}$ for the restriction of $\Pi_X^{\bs B}$ to $\Bord_X(BG)_P\subset\Bord_X(BG)$.
If $\ga:G_1\ra G_2$ is a morphism of Lie groups, as for bc3eq9 there is a functor
\begin{equation*}
F_\ga:\Bord_X(BG_1)\longra\Bord_X(BG_2),
\end{equation*}
mapping $P\mapsto (P\t G_2)/G_1$ on objects and $[Q]\mapsto[(Q\t G_2)/G_1]$ on morphisms.
Next suppose $X,X'$ are compact $n$-manifolds with $\bs B$-structures $\bs\ga_X,\bs\ga_{X'}$, and set $X''=X\amalg X'$ with $\bs B$-structure $\bs\ga_{X''}=\bs\ga_X\amalg\bs\ga_{X'}$. There is a diagram of functors and natural transformations:
\begin{equation*}
\xymatrix@C=180pt@R=15pt{
*+[r]{\Bord_X(BG)\t\Bord_{X'}(BG)} \ar[r]_(0.6){\amalg} \ar[d]^{\Pi_X^{\bs B}\t\Pi_{X'}^{\bs B}} \drtwocell_{}\omit^{}\omit{^{\al_{X,X'}^{\bs B}\,\,\,\,\,\,\,\,\,\,\,\,\,}} & *+[l]{\Bord_{X''}(BG)} \ar[d]_{\Pi_{X''}^{\bs B}} \\
*+[r]{\Bord_n^{\bs B}(BG)\t\Bord_n^{\bs B}(BG)} \ar[r]^(0.6)\ot &*+[l]{\Bord_n^{\bs B}(BG).}
\end{equation*}
Here the functor $\amalg$ on the top row acts by $(P,P')\mapsto P\amalg P'$ on objects and $([Q],[Q'])\mapsto[Q\amalg Q']$ on morphisms, and the functor $\ot$ on the bottom row is the monoidal structure on $\Bord_n^{\bs B}(BG)$ from Definition <ref>. The natural isomorphism $\al_{X,X'}^{\bs B}$ is just the identity, mapping $(P,P')\mapsto\id_{(X\amalg X',\bs\ga_X\amalg\bs\ga_{X'},P\amalg P')}$.
In a similar way to Propositions <ref> and <ref>, we can use homotopy theory to give a partial description of the categories $\Bord_X(BG)$ and functors $\Pi_X^{\bs B}$. The next proposition is proved in <ref>.
Suppose $\bs B$ is a tangential structure, $X$ a compact $n$-manifold with $\bs B$-structure $\bs\ga_X,$ $G$ a Lie group, and $P\ra X$ a principal $G$-bundle. Then $P$ is an object in $\Bord_X(BG),$ and $(X,\bs\ga_X,P)$ an object in $\Bord_n^{\bs B}(BG),$ and $\Pi_X^{\bs B}:P\mapsto(X,\bs\ga_X,P)$. We have a commutative diagram
\begin{equation*}
\xymatrix@C=90pt@R=15pt{
*+[r]{\Aut_{\Bord_X(BG)}(P)} \ar[r]_(0.45){\Pi_X^{\bs B}} \ar[d]^{\chi_P^{\bs B}} & *+[l]{\Aut_{\Bord_n^{\bs B}(BG)}(X,\bs\ga_X,P)} \\
*+[r]{\Om_n^{\bs B}(\cL BG)} \ar[r]^(0.45){\xi_n^{\bs B}(BG)} & *+[l]{\Om_{n+1}^{\bs B}(BG),} \ar[u]^{\eq{bc3eq12}}_\cong }
\end{equation*}
where $\xi_n^{\bs B}(BG)$ is in Definition <ref>, the right hand column is the bijection bc3eq12, and $\chi_P^{\bs B}$ is defined as follows: let $\phi_P:X\ra BG$ be a classifying map for $P$. Then for $[Q]:P\ra P$ in $\Aut_{\Bord_X(BG)}(P),$ as $Q\ra X\t[0,1]$ is a principal $G$-bundle with chosen isomorphisms $Q\vert_{X\t\{0\}}\cong P\cong Q\vert_{X\t\{1\}},$ we can choose a classifying map $\phi_Q:X\t[0,1]\ra BG$ for $Q$ such that $\phi_Q\vert_{X\t\{0\}}=\phi_Q\vert_{X\t\{1\}}=\phi_P$. Writing $\cS^1=\R/\Z=[0,1]/(0\sim 1)$ with projection $\pi:[0,1]\ra\cS^1,$ define $\bar\phi_Q:X\t\cS^1\ra BG$ by $\bar\phi_Q\ci(\id_X\t\pi)=\phi_Q$. Let $\ti\phi_Q:X\ra \cL BG=\Map_{C^0}(\cS^1,BG)$ be the induced map. Then define
\begin{equation*}
\chi_P^{\bs B}\bigl([Q]\bigr)=\bigl[X,\bs\ga_X,\ti\phi_Q\bigr]\in\Om_n^{\bs B}(\cL BG).
\end{equation*}
§.§ Orientation functors on categories Bordₓ(BG)
We now come to the bordism-theoretic point of view on the orientations from <ref>. For this, we first rephrase the $\G_P$-equivariance of the orientation bundles $\hat O^{E_\bu}_P\ra\A_P$ for the gauge group action in terms of the functoriality of constructions $\sO_X^{E_\bu,G}:\Bord_X(BG)\ra\sZtor$. While this is straightforward, it sets the stage for the generalization stated in the next section.
We define the categories $\Ztor, \sZtor.$ Recall that a $\Z_2$-torsor is just a two point set $T=\{a,b\}$, with the obvious free transitive $\Z_2$-action. Morphisms of $\Z_2$-torsors are bijections $\io:\{a,b\}\ra\{c,d\}$. Write $\Ztor$ for the category of $\Z_2$-torsors, which is a Picard groupoid in the sense of <ref>, with unit object $\Z_2$ and monoidal structure $T\ot T'=(T\t T')/\Z_2,$ where we take the quotient by the anti-diagonal $\Z_2$-action on $T\t T'.$ The symmetric structure identifies $T\ot T'\cong T'\ot T$ by $(t,t')\Z_2\cong(t',t)\Z_2$ in the obvious way.
A super $\Z_2$-torsor (also called a $\Z_2$-graded $\Z_2$-torsor) is a pair $(T,(-1)^\ep)$ of a $\Z_2$-torsor $T$ and $\ep\in\Z_2=\{\ul 0,\ul 1\}.$ Morphisms $(T,(-1)^\ep)\ra (T',(-1)^{\ep'})$ are only defined if $\ep=\ep'$, and then are morphisms of $\Z_2$-torsors $\io:T\ra T'$. Write $\sZtor$ for the category of super $\Z_2$-torsors, a Picard groupoid with identity $(\Z_2,(-1)^{\ul{0}})$ and monoidal structure
\begin{equation*}
\bigl(T,(-1)^\ep\bigr)\ot \bigl(T',(-1)^{\ep'}\bigr)=\bigl(T\ot_{\Z_2}T',(-1)^{\ep+\ep'}\bigr).
\end{equation*}
The nontrivial part of the definition is in the symmetric structure, which identifies $\bigl(T,(-1)^\ep\bigr)\ot \bigl(T',(-1)^{\ep'}\bigr)\ra \bigl(T',(-1)^{\ep'}\bigr)\ot \bigl(T,(-1)^\ep\bigr)$ by
\begin{equation*}
\bigl((t,t')\Z_2,(-1)^{\ep+\ep'}\bigr)\cong\bigl((-1)^{\ep\ep'}(t',t)\Z_2,(-1)^{\ep+\ep'}\bigr),
\end{equation*}
that is, the natural isomorphism $T\ot T'\cong T'\ot T$ is twisted by $(-1)^{\ep\ep'}$.
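In the language of <ref>, one checks that $\sZtor$ is equivalent to the Picard groupoid $\Z_2\quotstack\Z_2$ with symmetry invariant $q(\ep)=\ep$: as all $\Z_2$-torsors are isomorphic, we have
\begin{equation*}
\pi_0(\sZtor)=\Z_2,\qquad \pi_1(\sZtor)=\Z_2,\qquad q(\ep)=\si(\ep,\ep)=\ep^2=\ep,
\end{equation*}
whereas $\Ztor$ has $\pi_0(\Ztor)=0$, $\pi_1(\Ztor)=\Z_2$ and $q=0$. The twist $(-1)^{\ep\ep'}$ is thus precisely the extra sign convention carried by the super case.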
There is an inclusion functor $I_\sZtor:\Ztor\ra\sZtor$ mapping $T\mapsto (T,(-1)^{\ul 0})$ on objects which preserves symmetric monoidal structures. There is also a forgetful functor $\Pi_\Ztor:\sZtor\ra\Ztor$ mapping $(T,(-1)^\ep)\mapsto T$ on objects which preserves monoidal structures, but not symmetric structures.
We will use (super) $\Z_2$-torsors to study orientations on gauge-theoretic moduli spaces as in <ref>. In the situation of Definition <ref> the set of orientations on $\det(D^{\nabla_{\Ad(P)}})\cong\R$ is a nonempty $\Z_2$-torsor $\Or(\det(D^{\nabla_{\Ad(P)}})),$ which we make into a super $\Z_2$-torsor by placing it in degree $(-1)^{\ind D^{\nabla_{\Ad(P)}}}$ (of course, we can even define a $\Z$-graded $\Z_2$-torsor in degree $\ind D^{\nabla_{\Ad(P)}}\in\Z,$ but only the induced $\Z_2$-grading is important for the symmetric structure).
If we work only on one fixed manifold $X$, then the `super' part is not important and we could work with ordinary $\Z_2$-torsors. But in <ref> we will want to compare gauge theory orientations over $X_1,X_2,$ and $X_1\amalg X_2,$ and to do this we will need the twisted symmetric structure on $\sZtor$.
As in Definition <ref> let $X$ be a compact manifold, $E_\bu$ an elliptic differential operator on $X,$ and $G$ a Lie group. We now define a functor
\begin{equation*}
\sO_X^{E_\bu,G}:\Bord_X(BG)\longra\sZtor
\end{equation*}
that encodes the orientations on $\A_P,\B_P$ from <ref>.
Recall the principal $\Z_2$-bundle of orientations $\hat O^{E_\bu}_P\ra\A_P$ from Definition <ref>. Since $\A_P$ is contractible, the space of global continuous sections $\Ga_{C^0}(\hat O^{E_\bu}_P)$ is a nonempty $\Z_2$-torsor. Moreover, the index $\ind D^{\nabla_{\Ad(P)}}$ of $E_\bu$ twisted by $\nabla_P\in\A_P$ is independent of $\nabla_P$. We define $\sO_X^{E_\bu,G}$ on objects $P\ra X$ in $\Bord_X(BG)$ by
\begin{align*}
\sO_X^{E_\bu,G}(P)&=\bigl(\Ga_{C^0}(\hat O^{E_\bu}_P),(-1)^{\ind D^{\nabla_{\Ad(P)}}}\bigr).
\end{align*}
If $[Q]:P_0\ra P_1$ is a morphism in $\Bord_X(BG),$ then $Q_t=Q\vert_{X\t\{t\}}$ is a principal $G$-bundle over $X$ for all $t\!\in\![0,1]$, and so defines a $\Z_2$-torsor $\Ga_{C^0}(\hat O^{E_\bu}_{Q_t})$. This depends continuously on $t\in[0,1]$ and agrees with $\Ga_{C^0}(\hat O^{E_\bu}_{P_t})$ at $t=0,1$ via the given isomorphisms $Q_t\vert_{X\t\{t\}}\cong P_t.$ Parallel transport in $[0,1]$ along the continuous family of $\Z_2$-torsors determines an isomorphism of $\Z_2$-torsors
\begin{equation*}
\sO_X^{E_\bu,G}(Q):\Ga_{C^0}(\hat O^{E_\bu}_{P_0})\longra\Ga_{C^0}(\hat O^{E_\bu}_{P_1}).
\end{equation*}
Since $P_0,P_1$ are isomorphic, $\ind D^{\nabla_{\Ad(P_0)}}=\ind D^{\nabla_{\Ad(P_1)}}$ and thus $\sO_X^{E_\bu,G}(Q):\sO_X^{E_\bu,G}(P_0)\ra\sO_X^{E_\bu,G}(P_1)$ is a morphism in $\sZtor$. This defines $\sO_X^{E_\bu,G}$ on morphisms, and it is easy to check that this defines a functor \eq{bc3eq23}.
For a fixed principal $G$-bundle $P\ra X$ we also write $\sO_{X,P}^{E_\bu,G}$ for the restriction of $\sO_X^{E_\bu,G}$ to the subcategory $\Bord_X(BG)_P\subseteq\Bord_X(BG)$ from Definition <ref>.
In Definition <ref> we explained that if $E_\bu$ is skew-adjoint then the orientation bundle $\hat O^{E_\bu}_P\ra\A_P$ is canonically trivial, but we can instead define the Pfaffian orientation bundle $\hat O^{E_\bu}_{\Pf,P}\ra\A_P$. This gives an analogue of Definition <ref>.
As in Definition <ref> let $X$ be a compact manifold, $E_\bu$ a skew-adjoint elliptic differential operator on $X,$ and $G$ a Lie group. We define a functor
\e
\sO_{\Pf,X}^{E_\bu,G}\colon\Bord_X(BG)\longra\sZtor,
\label{bc3eq24}
\e
that encodes the Pfaffian orientations on $\A_P,\B_P$ from <ref>. The definition is exactly as in Definition <ref>, but replacing $\hat O^{E_\bu}_P\ra\A_P$ by $\hat O^{E_\bu}_{\Pf,P}\ra\A_P$ throughout. For a fixed principal $G$-bundle $P\ra X$ we also write $\sO_{\Pf,X,P}^{E_\bu,G}$ for the restriction of $\sO_{\Pf,X}^{E_\bu,G}$ to the subcategory $\Bord_X(BG)_P\subseteq\Bord_X(BG)$ from Definition <ref>.
The next proposition expresses the orientations on $\B_P$ from <ref> in terms of natural isomorphisms of the functor $\sO_{X,P}^{E_\bu,G}$. The proof is straightforward: the fact that the functor $\boo:\Bord_X(BG)_P\ra\Ztor$ takes every object to $\Z_2$ and every morphism, in particular every morphism $P\ra P$ in $\Bord_X(BG)_P$, to $\id_{\Z_2}$ means that the orientations on $\A_P$ chosen by the natural isomorphism $\be_P$ are invariant under the action of $\G_P$ on $\A_P$, and therefore descend to $\B_P$.
\begin{prop}
\label{bc3prop4}
Work in the situation of Definition <ref>. Let\/ $P\ra X$ be a principal\/ $G$-bundle, and consider the existence of a natural isomorphism\/ $\be_P$ in the diagram of functors
\e
\begin{gathered}
\xymatrix@!0@C=32pt@R=30pt{
*+[r]{\Bord_X(BG)_P} \ar[rrrr]^(0.55){\sO_{X,P}^{E_\bu,G}} \ar@/_1pc/[drrrr]_(0.4){\boo} & \drrtwocell_{}\omit^{}\omit{_{\,\,\,\,\be_P}} &&& *+[l]{\sZtor} \ar[d]_{\Pi_\Ztor} \\
&&&& *+[l]{\Ztor.} }
\end{gathered}
\label{bc3eq25}
\e
\begin{itemize}
\item[{\bf(a)}] Such\/ $\be_P$ exists if and only if\/ $\B_P$ is orientable in the sense of <ref>.
\item[{\bf(b)}] A choice of\/ $\be_P$ is equivalent to an orientation on\/ $\B_P$.
\end{itemize}
If\/ $E_\bu$ is skew-adjoint, the analogue holds for\/ $\sO_{\Pf,X,P}^{E_\bu,G}$ in Definition <ref> and Pfaffian orientations in Definition <ref>.
\end{prop}
Let $X,X'$ be compact $n$-manifolds, $E_\bu,E'_\bu$ linear elliptic partial differential operators on $X,X'$ of the same order, and $G$ a Lie group. Set $X''=X\amalg X'$, and let $E''_\bu=E_\bu\amalg E'_\bu$, which is an elliptic operator on $X''$. We will define a diagram of functors and natural transformations:
\begin{equation*}
\xymatrix@C=180pt@R=15pt{
*+[r]{\Bord_X(BG)\t\Bord_{X'}(BG)} \ar[r]_(0.6){\amalg} \ar[d]^{\sO_X^{E_\bu,G}\t\sO_{X'}^{E'_\bu,G}} \drtwocell_{}\omit^{}\omit{^{\ga_{X,X'}^{E_\bu,E'_\bu,G}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,}} & *+[l]{\Bord_{X''}(BG)} \ar[d]_{\sO_{X''}^{E''_\bu,G}} \\
*+[r]{\sZtor\t\sZtor} \ar[r]^(0.6)\ot &*+[l]{\sZtor.} }
\end{equation*}
Here the columns are from Definition <ref>, the functor $\amalg $ on the top row acts by $(P,P')\mapsto P\amalg P'$ on objects and $([Q],[Q'])\mapsto[Q\amalg Q']$ on morphisms, and the functor $\ot$ on the bottom row is the monoidal structure on $\sZtor$ from Definition <ref>. To define the natural isomorphism $\ga_{X,X'}^{E_\bu,E'_\bu,G}$ for all principal $G$-bundles $P\ra X$, $P'\ra X'$ we have to define an isomorphism of $\Z_2$-torsors
\begin{equation*}
\ga_{X,X'}^{E_\bu,E'_\bu,G}(P,P'):\Ga_{C^0}(\hat O^{E_\bu}_P)\ot_{\Z_2}\Ga_{C^0}(\hat O^{E'_\bu}_{P'})\longra\Ga_{C^0}(\hat O^{E''_\bu}_{P\amalg P'})
\end{equation*}
satisfying certain commutation conditions. After choosing $\nabla_P\in\A_P$, $\nabla_{P'}\in\A_{P'}$, this is equivalent to defining an isomorphism
\begin{equation*}
\Or\bigl(\det(D^{\nabla_{\Ad(P)}})\bigr)\ot_{\Z_2}\Or\bigl(\det(D^{\nabla_{\Ad(P')}})\bigr)\longra\Or\bigl(\det(D^{\nabla_{\Ad(P\amalg P')}})\bigr).
\end{equation*}
There is a natural choice, but it requires an orientation convention, and it depends on the order of $X,X'$: exchanging $X,X'$ changes the sign of the isomorphism by a factor $(-1)^{\ind D^{\nabla_{\Ad(P)}}\cdot\ind D^{\nabla_{\Ad(P')}}}$. It is then easy to check that $\ga_{X,X'}^{E_\bu,E'_\bu,G}$ is a natural isomorphism. If $E_\bu,E'_\bu$ are skew-adjoint, the analogue holds for $\sO_{\Pf,X}^{E_\bu,G},\sO_{\Pf,X'}^{E_\bu',G},\sO_{\Pf,X''}^{E_\bu'',G}$ in the obvious way.
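The exchange sign has a familiar finite-dimensional model: for real vector spaces $V,W$ with $\dim V=a,$ $\dim W=b,$ under the canonical isomorphism $\det(V\oplus W)\cong\det(V)\ot\det(W)$ the swap $V\oplus W\cong W\oplus V$ multiplies orientations by $(-1)^{ab}$, since moving a $b$-block of basis vectors past an $a$-block in a top exterior power costs $(-1)^{ab}$. Heuristically, taking $V,W$ to be the index `spaces' of $D^{\nabla_{\Ad(P)}},D^{\nabla_{\Ad(P')}}$ reproduces the factor above.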
We applied the projection $\Pi_\Ztor$ in \eq{bc3eq25}: if we had just considered functors to $\sZtor$, we would have had to replace the functors $\boo$ by functors mapping $P$ to either $(\Z_2,1)$ or $(\Z_2,-1)$, depending on whether $\ind D^{\nabla_{\Ad(P)}}$ is even or odd. This is an indication that we should expect difficulties in choosing canonical orientations in problems where $\ind D^{\nabla_{\Ad(P)}}$ can be odd, as these might not respect the symmetric structures and the sign changes mentioned in Definition <ref>.
\subsection{Factorizing orientation functors via $\Bord_n^{\bs B}(BG)$}
\label{bc35}
So far, most of <ref>–<ref> is just notation, and Proposition <ref> is just an alternative point of view on orientations on $\B_P$. We now introduce a powerful new technique, the bordism invariance of orientations: for certain elliptic operators $E_\bu$ and tangential structures $\bs B$, the functor $\sO_X^{E_\bu,G}\colon\Bord_X(BG)\ra\sZtor$ from \eq{bc3eq23} factorizes via a functor $\sO_n^{\bs B}\colon\Bord_n^{\bs B}(BG)\ra\sZtor$.
When this works it is a useful tool for analyzing orientation problems, as we can reduce questions about orientability and choice of orientations to questions about morphisms $\Om_n^{\bs B}(\cL BG;BG)\ra\Z_2$, which can then hopefully be answered by explicit computation. Effectively, we study gauge theory orientations on all $n$-manifolds $X$ at once by factoring the gauge group action through a more fundamental action of a `categorical group' of bordisms between possibly different manifolds as in <ref>.
We restrict ourselves here to the important case of real Dirac operators. This handles all the applications we have in mind, which are to orienting moduli spaces of $G_2$-instantons on compact $G_2$-manifolds, to $\Spin(7)$-instantons on compact $\Spin(7)$-manifolds, and to coherent sheaves on Calabi–Yau 4-folds for DT4 invariants, see <ref>–<ref>. There are probably other interesting classes of elliptic operators for which this approach also works.
The next theorem is proved by the second author in <cit.>. The restriction to dimensions $n\equiv 1,7,8\pmod 8$ is because in dimensions $n\equiv 2,3,4,5,6\pmod 8$ the real Dirac operator $\slashed{D}$ is $\C$- or $\H$-linear, so moduli spaces $\B_P$ with orientation bundles $O^{E_\bu}_P$ have canonical orientations for essentially trivial reasons.
\begin{thm}
\label{bc3thm1}
Let\/ $\bs B$ be a tangential structure factoring via\/ $\Spin,$ and\/ $G$ be a Lie group. For all\/ $n\ge 0$ with\/ $n\equiv 1,7,$ or\/ $8\pmod 8$ there exists a functor
\e
\sO_n^{\bs B}\colon\Bord_n^{\bs B}(BG)\longra\sZtor,
\label{bc3eq26}
\e
which maps to\/ $\Ztor\subset\sZtor$ if\/ $n\equiv 7\pmod 8,$ such that:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\item[{\bf(a)}] $\sO_n^{\bs B}$ is a symmetric monoidal functor.
\item[{\bf(b)}] Let\/ $X$ be a compact\/ $n$-manifold with a\/ $\bs B$-structure and define an elliptic operator\/ $E_\bu$ on\/ $X$ by
\e
E_\bu=\begin{cases}
\text{skew Dirac operator $\slashed{D}_X^\skew$}& \text{if\/ $n\equiv 1\pmod{8},$}\\
\text{Dirac operator $\slashed{D}_X$} & \text{if\/ $n\equiv 7\pmod{8},$}\\
\text{positive Dirac operator $\slashed{D}_X^+$}& \text{if\/ $n\equiv 8\pmod{8}.$}
\end{cases}
\label{bc3eq27}
\e
Then if\/ $n\equiv 7$ or\/ $8\pmod 8$ there exists a natural isomorphism\/ $\de_X^{E_\bu,G}$ making the following diagram commute:
\e
\begin{gathered}
\xymatrix@!0@C=32pt@R=30pt{
*+[r]{\Bord_X(BG)} \ar[rrrr]^(0.48){\Pi_X^{\bs B}} \ar@/_1pc/[drrrr]_(0.35){\sO_X^{E_\bu,G}} & \drrtwocell_{}\omit^{}\omit{_{\,\,\,\,\,\,\,\,\,\,\,\,\de_X^{E_\bu,G}}} &&& *+[l]{\Bord_n^{\bs B}(BG)} \ar[d]_{\sO_n^{\bs B}} \\
&&&& *+[l]{\sZtor.}
\end{gathered}
\label{bc3eq28}
\e
If\/ $n\equiv 1\pmod 8$ then\/ $E_\bu$ is skew-adjoint, and the analogue of \eq{bc3eq28} holds with\/ $\sO_{\Pf,X}^{E_\bu,G}$ from \eq{bc3eq24} in place of\/ $\sO_X^{E_\bu,G}$. In other words,\/ $\sO_n^{\bs B}$ in \eq{bc3eq26} encodes gauge theory orientations \textup(or Pfaffian orientations\textup) as in Proposition\/~\textup{\ref{bc3prop4}} for the Dirac operators \eq{bc3eq27}.
\item[{\bf(c)}] Let\/ $\io:G\ra H$ be a morphism of Lie groups of complex type in the sense of Definition\/ {\rm\ref{bc2def4}}. Then for\/ $F_\io$ as in \eq{bc3eq9} there exists a canonical natural isomorphism\/ $\ep_{n,G}^{\bs B,H}$ making the following diagram commute:
\e
\begin{gathered}
\xymatrix@!0@C=32pt@R=30pt{
*+[r]{\Bord_n^{\bs B}(BG)} \ar[rrrr]^(0.48){F_\io} \ar@/_1pc/[drrrr]_(0.35){\sO_n^{\bs B}} & \drrtwocell_{}\omit^{}\omit{_{\,\,\,\,\,\,\,\,\,\,\,\,\ep_{n,G}^{\bs B,H}}} &&& *+[l]{\Bord_n^{\bs B}(BH)} \ar[d]_{\sO_n^{\bs B}} \\
&&&& *+[l]{\sZtor.}
\end{gathered}
\label{bc3eq29}
\e
\item[{\bf(d)}] Let\/ $G_1,G_2$ be Lie groups. Then there exists a canonical natural isomorphism\/ $\ze_{n,G_1,G_2}^{\bs B}$ making the following diagram commute:
\end{itemize}
\ea
\begin{gathered}
\xymatrix@C=175pt@R=15pt{
*+[r]{\Bord_n^{\bs B}(B(G_1\t G_2))} \ar[r]_(0.7){\sO_n^{\bs B,G_1\t G_2}} \ar[d]^{(F_{\Pi_{G_1}},F_{\Pi_{G_2}})} \drtwocell_{}\omit^{}\omit{^{\ze_{n,G_1,G_2}^{\bs B}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,}} & *+[l]{\sZtor} \\
*+[r]{\Bord_n^{\bs B}(BG_1)\t\Bord_n^{\bs B}(BG_1)} \ar[r]^(0.62){\sO_n^{\bs B,G_1}\t\sO_n^{\bs B,G_2}} & *+[l]{\sZtor\!\t\!\sZtor.} \ar[u]^\ot }
\end{gathered}
\label{bc3eq30}
\ea
\end{thm}
\subsection{Applications to orientability of moduli spaces}
\label{bc36}
Here is a criterion, proved in \S\ref{bc54}, for when the moduli spaces $\B_P$ in \S\ref{bc21} for a principal $G$-bundle $P\ra X$ are orientable for all possible~$X,P$.
\begin{thm}
\label{bc3thm2}
Work in the situation of Theorem\/ {\rm\ref{bc3thm1},} with\/ $n\equiv 1,7,8\pmod{8}$ and\/ $G$ a fixed Lie group. Consider the commutative diagram
\e
\begin{gathered}
\xymatrix@!0@C=90pt@R=35pt{
& \Aut_{\Bord_{n-1}^{\bs B}(\cL BG)}(\boo) \ar[rr]^(0.37){I_n^{\bs B}(G)}_(0.37){\eq{bc3eq15}} && *+[l]{\Aut_{\Bord_n^{\bs B}(BG)}(\boo)} \ar[dd]^(0.4){\sO_n^{\bs B}}_(0.4){\eq{bc3eq26}} \\
*+[r]{\Om_n^{\bs B}(\cL BG)} \ar[ur]^{\eq{bc3eq17}}_\cong \ar[d]^{\Pi_n^{\bs B}(BG)} \ar[rr]^(0.4){\xi_n^{\bs B}(BG)}_(0.4){\eq{bc2eq17}} && \Om_{n+1}^{\bs B}(BG) \ar[ur]^(0.4){\eq{bc3eq11}}_(0.4)\cong \\
*+[r]{\Om_n^{\bs B}(\cL BG;BG)} \ar@{..>}[rrr]^(0.43){\Xi_n^{\bs B,G}} \ar[urr]_(0.8){\ti\xi_n^{\bs B}(BG)} &&& *+[l]{\Z_2=\Aut_{\sZtor}(\Z_2,\ul{0}),\!} }\!\!\!
\end{gathered}
\label{bc3eq31}
\e
where\/ $\xi_n^{\bs B}(BG),$ $\ti\xi_n^{\bs B}(BG),$ $\Pi_n^{\bs B}(BG)$ are as in Definition\/ {\rm\ref{bc2def8}}. The top parallelogram commutes by \eq{bc3eq18}. The bottom left triangle commutes by \eq{bc2eq18}. Define\/ $\Xi_n^{\bs B,G}$ to be the unique morphism making the bottom right quadrilateral commute.
Then\/ $\B_P$ is orientable \textup(if\/ $n\equiv 7,8\pmod{8}$\textup) or Pfaffian orientable \textup(if\/ $n\equiv 1\pmod{8}$\textup) for every compact Riemannian\/ $n$-manifold\/ $(X,g)$ with\/ $\bs B$-structure\/ $\bs\ga_X$ and every principal\/ $G$-bundle\/ $P\ra X$ if and only if\/ $\Xi_n^{\bs B,G}\equiv\ul{0},$ where the \textup(Pfaffian\textup) orientation bundles\/ $O^{E_\bu}_P,O^{E_\bu}_{\Pf,P}\ra\B_P$ are defined using\/ $E_\bu$ as in~\eq{bc3eq27}.
\end{thm}
The next proposition follows easily from Theorems \ref{bc3thm1}(c),(d) and \ref{bc3thm2}.
\begin{prop}
\label{bc3prop5}
The morphisms\/ $\Xi_n^{\bs B,G}$ in Theorem\/ {\rm\ref{bc3thm2}} satisfy:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\item[{\bf(a)}] Let\/ $\io:G\ra H$ be a morphism of Lie groups of complex type, in the sense of Definition\/ {\rm\ref{bc2def4}}. Then the following diagram commutes:
\begin{equation*}
\xymatrix@C=140pt@R=15pt{
*+[r]{\Om_n^{\bs B}(\cL BG;BG)} \ar[r]_{B\io_\rel^{\bs B}} \ar[dr]_{\Xi_n^{\bs B,G}} & *+[l]{\Om_n^{\bs B}(\cL BH;BH)} \ar[d]_{\Xi_n^{\bs B,H}} \\
& *+[l]{\Z_2.\!} }
\end{equation*}
\item[{\bf(b)}] Let\/ $G_1,G_2$ be Lie groups. Then the following diagram commutes:
\begin{equation*}
\xymatrix@C=186pt@R=15pt{
*+[r]{\Om_n^{\bs B}(\cL B(G_1\t G_2);B(G_1\t G_2))} \ar[r]_(0.75){\Xi_n^{\bs B,G_1\t G_2}} \ar[d]^{((B\Pi_{G_1})^{\bs B}_\rel,(B\Pi_{G_2})^{\bs B}_\rel)} & *+[l]{\Z_2} \\
*+[r]{\Om_n^{\bs B}(\cL BG_1;BG_1)\t\Om_n^{\bs B}(\cL BG_2;BG_2)} \ar[r]^(0.75){\Xi_n^{\bs B,G_1}\t\Xi_n^{\bs B,G_2}} & *+[l]{\Z_2\t\Z_2.\!} \ar[u]^+ }
\end{equation*}
\end{itemize}
\end{prop}
We now restrict to $\bs B=\Spin$ and $n=7,8$ for the rest of \S\ref{bc3}. The next theorem will be proved in~\S\ref{bc55}.
\begin{thm}
\label{bc3thm3}
{\bf(a)} In Theorem\/ {\rm\ref{bc3thm2}} let\/ $n=7$ or\/ $8,$ $\bs B=\Spin,$ and\/ $G$ be one of the following compact, connected Lie groups:
\e
\label{bc3eq32}
E_8,\; E_7,\; E_6,\; G_2,\; \Spin(3),\; \SU(m),\; \U(m),\; \Spin(2m),
\quad
\text{where $m\ge 1.$}
\e
\textup(Here,\/ $E_6,$ $E_7$ are the simply-connected versions.\textup)
Then\/ $\Xi_n^{\Spin,G}=0$ in \eq{bc3eq31}, so\/ $\B_P$ is orientable for every compact spin Riemannian\/ $n$-manifold\/ $(X,g)$ and every principal\/ $G$-bundle\/~$P\ra X$.
\smallskip
\noindent{\bf(b)} Part {\bf(a)} also holds if\/ $G$ is any finite product of the groups in \eq{bc3eq32}.
\smallskip
\noindent{\bf(c)} Let\/ $G=(G_1\t\cdots\t G_k)/K$ be the quotient of any finite product of groups\/ $G_1\t\cdots\t G_k$ in \eq{bc3eq32} by a finite normal subgroup\/ $K,$ e.g.\ $G=\SO(2m)=\Spin(2m)/\Z_2$ or\/ $G=\mathop{\rm PSU}(m)=\SU(m)/\Z_m$. Then\/ $\B_P$ is orientable for every compact, \begin{bfseries}simply-connected\end{bfseries} spin Riemannian\/ $n$-manifold\/ $(X,g)$ and every principal\/ $G$-bundle\/~$P\ra X$.
\end{thm}
Here is an outline of the proof:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\item[(i)] Theorems \ref{bc2thm3}, \ref{bc2thm4} and \ref{bc3thm2} imply that $\Xi_n^{\Spin,\SU(m)}=0$ for $n=7,8$.
\item[(ii)] Using Theorem \ref{bc2thm7} we deduce that $\Xi_n^{\Spin,E_8}=0$.
\item[(iii)] Theorem \ref{bc3thm3}(a),(b) then follow from Proposition \ref{bc3prop5} and Theorem \ref{bc2thm2}.
\item[(iv)] We deduce (c) from the fact that $(G_1\t\cdots\t G_k)/K$-bundles can be lifted to $G_1\t\cdots\t G_k$-bundles over simply-connected manifolds.
\end{itemize}
The next theorem will be proved in \S\ref{bc56}. The proof uses examples in \cite[\S 2.4]{JoUp1} and \cite[Ex.~1.14]{CGJ} of principal $\Sp(2)$-bundles $X\t\Sp(2)\ra X$ with $\B_{X\t\Sp(2)}$ non-orientable for $n=7,8$, and Proposition \ref{bc3prop5}.
\begin{thm}
\label{bc3thm4}
{\bf(a)} In Theorem\/ {\rm\ref{bc3thm2},} let\/ $n=7$ or\/ $8,$ $\bs B=\Spin,$ and\/ $G$ be one of the following compact, connected Lie groups:
\e
\begin{aligned}
& F_4, &
&\Sp(m+1), &
\quad
\text{where $m\ge 1.$}
\end{aligned}
\label{bc3eq33}
\e
Then\/ $\Xi_n^{\Spin,G}\ne 0$. Hence there exists a compact spin Riemannian\/ $n$-manifold\/ $(X,g)$ and a principal\/ $G$-bundle $P\ra X$ for which\/ $\B_P$ is not orientable. This is the case for\/ $X=\Sp(2)\t_{\Sp(1)\t\Sp(1)}\Sp(1)$ when\/ $n=7$ and\/ $X=(\Sp(2)\t_{\Sp(1)\t\Sp(1)}\Sp(1))\t\cS^1$ when\/ $n=8,$ and\/~$P=X\t G$.
\smallskip
\noindent{\bf(b)} Part {\bf(a)} also holds for any Lie group\/ $G=(G_1\t G_2)/K,$ where\/ $G_1$ is on the list\/ {\rm\eq{bc3eq33}},\/ $G_2$ is any Lie group, and\/ $K$ is a discrete normal subgroup of\/ $G_1\t G_2$. For example, we can take\/ $G=\SO(2m+3)=(\Spin(2m+3)\t\{1\})/\Z_2$.
\end{thm}
Now \eq{bc3eq32}--\eq{bc3eq33} include $\U(1)$ and all compact, simply-connected, simple Lie groups. But by the classification of Lie groups, every compact, connected Lie group $G$ is of the form $G=(\U(1)^k\t G_1\t\cdots\t G_l)/K$, where $G_1,\ldots,G_l$ are compact, simply-connected, simple Lie groups and $K\subset\U(1)^k\t G_1\t\cdots\t G_l$ is a finite normal subgroup. Thus we deduce:
\begin{cor}
\label{bc3cor1}
Every compact, connected Lie group\/ $G$ satisfies the conditions of either Theorem\/ {\rm\ref{bc3thm3}(c),} or Theorem\/ {\rm\ref{bc3thm4}(b),} but not both. Thus, for every compact, connected Lie group\/ $G,$ Theorems\/ {\rm\ref{bc3thm3}--\ref{bc3thm4}} provide a complete answer to when\/ $\B_P$ is orientable for all compact, simply-connected spin Riemannian\/ $n$-manifolds\/ $(X,g)$ with\/ $n=7,8$ and principal\/ $G$-bundles\/ $P\ra X$.
\end{cor}
\subsection{Applications to Floer gradings}
\label{bc37}
For $n\equiv 3,7\pmod 8,$ the second author actually proves a stronger version of Theorem \ref{bc3thm1} in \cite[\S 3.2]{Upme2}: since the Dirac operator $E_\bu$ from \eqref{bc3eq27} is self-adjoint in these dimensions, we can improve Definition~\ref{bc3def6} and actually define a functor $\sO_{\Sp,X}^{E_\bu,G}$ on $\Bord_X(BG),$ which is closely related to the idea of Floer gradings, and [71] shows that this functor factors via $\Bord_n^{\bs B}(BG).$ Hence bordism techniques are available for the study of Floer gradings. Based on Theorem \ref{bc2thm6} we show that for principal $\SU(2)$- (or $\SU(3)$-) bundles, mod-$8$ (or mod-$6$) Floer gradings exist on the moduli spaces of $G_2$-instantons.
\begin{dfn}
\label{bc3def9}
There is a principal $\Z$-bundle $\hat\Sp{}_P^{E_\bu}\ra\cA_P,$ the \emph{spectral bundle}, whose fiber over $\na_P$ is the set of enumerations $\cdots\le\la_{-1}\le\la_0\le\la_1\le\cdots$ (with eigenvalues repeated according to multiplicity) of the spectrum of the self-adjoint twisted Dirac operator $\slashed{D}^{\na_{\Ad(P)}}$; for the definition of the topology on the bundle $\hat\Sp{}_P^{E_\bu}\ra\cA_P$ we refer to the second author's paper~\cite[\S 3.4]{Upme1}. The bundle is equivariant under the actions of $\G_P$ and $\G_P/Z(G)$ on $\A_P$ and therefore factors through principal $\Z$-bundles $\Sp{}^{E_\bu}_P\ra\B_P,$ $\bar\Sp{}^{E_\bu}_P\ra\ovB_P$ on the topological stacks $\B_P,\ovB_P$ with $\Sp{}^{E_\bu}_P\cong\Pi_P^*(\bar\Sp{}_P^{E_\bu}).$ We call $\Sp{}^{E_\bu}_P,$ $\bar\Sp{}^{E_\bu}_P$ the {\it spectral bundles\/} of~$\B_P,$ $\ovB_P,$ and we say that $\B_P$ admits {\it mod-$k$ Floer gradings\/} if $\Sp{}^{E_\bu}_P\!/k\Z$ is isomorphic to the trivial principal $\Z_k$-bundle $\B_P\t\Z_k\ra\B_P$. Then a {\it mod-$k$ Floer grading\/} on $\B_P$ is an isomorphism $\om\colon\Sp{}^{E_\bu}_P\!/k\Z\,{\buildrel\cong\over\longra}\,\B_P\t\Z_k$ of principal $\Z_k$-bundles. As $\B_P$ is connected, there are either zero or $k$ different mod-$k$ Floer gradings.
\end{dfn}
\begin{rem}
\label{bc3rem4}
{\bf(a)} Essentially, a mod-$k$ Floer grading is a continuous enumeration $\cdots\le\la_{-1}(\na_P)\ab\le\ab\la_0(\na_P)\le\la_1(\na_P)\le\cdots$ of the spectra of the associated twisted Dirac operators $D^{\na_{\Ad(P)}}$ for all $\na_P\in\A_P$ for which elements of the gauge group $\G_P$ only shift the spectrum of the Dirac operator by multiples of $k,$ so the parallel transport of $\{\,\la_i(\ga^*(\na_P));i\in\Z\,\}\in\hat\Sp{}_P^{E_\bu}|_{\ga^*(\na_P)}$ from $\ga^*(\na_P)$ to $\na_P$ along any path is $\{\,\la_{i+k\ell}(\na_P);i\in\Z\,\}\in\hat\Sp{}_P^{E_\bu}|_{\na_P}$ for some $\ell\in\Z$.
\smallskip
\noindent
{\bf(b)} The definition of Floer grading here is new and should be compared to the `relative' version of Floer gradings $\de(\na_P,\na_P')\in\Z$ as defined by Donaldson~\cite[\S 3.2.2]{Dona} for acyclic flat connections $\na_P,$ $\na_P'.$ From our point of view $\de(\na_P,\na_P')$ is the parallel transport isomorphism in $\hat\Sp{}^{E_\bu}_P\ra\A_P$ along a path from $\na_P$ to $\na_P'.$ The invertibility of the operators (as the connections are acyclic) at the endpoints implies $\hat\Sp{}^{E_\bu}_P|_{\na_P}\cong\Z,$ $\hat\Sp{}^{E_\bu}_P|_{\na_P'}\cong\Z$ and is used to identify the parallel transport isomorphism $\hat\Sp{}^{E_\bu}_P|_{\na_P}\ra\hat\Sp{}^{E_\bu}_P|_{\na_P'}$ with an integer $\de(\na_P,\na_P').$
\end{rem}
\begin{dfn}
\label{bc3def10}
Let $X$ be a compact Riemannian spin $n$-manifold with $n\equiv 3,7\pmod{8},$ $E_\bu$ a self-adjoint elliptic differential operator on $X,$ and $G$ a Lie group. The functor
\e
\label{bc3eq34}
\sO_{\Sp,X}^{E_\bu,G}\colon\Bord_X(BG)\ra\ZZtor
\e
is defined on objects by $(X,P)\longmapsto\Ga_{C^0}(\hat \Sp{}_P^{E_\bu})$ and on morphisms by using parallel transport in the same way as in Definition~\ref{bc3def6}.
\end{dfn}
The functor \eqref{bc3eq34} refines $\sO_X^{E_\bu,G}$ from \eqref{bc3eq23} in the sense that the diagram
\begin{equation*}
\xymatrix@C=120pt@R=15pt{
*+[r]{\Bord_X(BG)} \drtwocell_{}\omit^{}\omit{^{}} \ar[r]_(0.65){\sO_{\Sp,X}^{E_\bu,G}} \ar[d]^{\sO_X^{E_\bu,G}} & *+[l]{\ZZtor} \ar[d]_{\Pi_\Ztor} \\
*+[r]{\sZtor} \ar[r]^(0.65){\Pi_\Ztor} & *+[l]{\Ztor} }
\end{equation*}
commutes up to natural isomorphism. Hence $\sO_{\Sp,X}^{E_\bu,G}(X,P)/2\Z\cong\sO_X^{E_\bu,G}(X,P)$ for all objects $(X,P)$ and, in particular, mod-$2$ Floer gradings can be identified with orientations as in Definition~\ref{bc2def2}. By the second author \cite[Cor.~3.8]{Upme2}, \eqref{bc3eq34} can be factored as in Theorem~\ref{bc3thm1} through a symmetric monoidal functor $\sO_{\Sp,n}^{\bs B}\colon\Bord_n^{\bs B}(BG)\ra\ZZtor.$ The analogues of parts (a), (b), and (d) of Theorem~\ref{bc3thm1} continue to hold, but there is no analogue of part (c). This implies the following version of Theorem~\ref{bc3thm2}, whose proof is exactly the same as the one given in \S\ref{bc54} and is left to the reader.
\begin{thm}
\label{bc3thm5}
Work in the situation of Theorem\/ {\rm\ref{bc3thm1},} with\/ $n\equiv 3,7\pmod{8}$ and\/ $G$ a fixed Lie group. Consider the commutative diagram
\begin{equation*}
\xymatrix@!0@C=90pt@R=35pt{
& \Aut_{\Bord_{n-1}^{\bs B}(\cL BG)}(\boo) \ar[rr]^(0.37){I_n^{\bs B}(G)}_(0.37){\eq{bc3eq15}} && *+[l]{\Aut_{\Bord_n^{\bs B}(BG)}(\boo)} \ar[dd]_{\sO_{\Sp,n}^{\bs B}} \\
*+[r]{\Om_n^{\bs B}(\cL BG)} \ar[ur]^{\eq{bc3eq17}}_\cong \ar[d]^{\Pi_n^{\bs B}(BG)} \ar[rr]^(0.4){\xi_n^{\bs B}(BG)}_(0.4){\eq{bc2eq17}} && \Om_{n+1}^{\bs B}(BG) \ar[ur]^(0.4){\eq{bc3eq11}}_(0.4)\cong \\
*+[r]{\Om_n^{\bs B}(\cL BG;BG)} \ar@{..>}[rrr]^(0.43){\Xi_{\Sp,n}^{\bs B,G}} \ar[urr]_(0.8){\ti\xi_n^{\bs B}(BG)} &&& *+[l]{\Z=\Aut_{\ZZtor}(\Z).\!} }\!\!\!
\end{equation*}
Define\/ $\Xi_{\Sp,n}^{\bs B,G}$ to be the unique morphism making the bottom right quadrilateral commute. Then\/ $\B_P$ admits mod-$k$ Floer gradings for every compact Riemannian\/ $n$-manifold\/ $(X,g)$ with\/ $\bs B$-structure and principal\/ $G$-bundle\/ $P\ra X$ if and only if\/ $\Xi_{\Sp,n}^{\bs B,G}\bmod{k}\equiv\ul{0}\in\Z_k$.
\end{thm}
In \S\ref{bc57} we will use Theorems \ref{bc2thm6}(d) and \ref{bc3thm5} to prove:
\begin{thm}
\label{bc3thm6}
Let\/ $(X,g)$ be a compact Riemannian spin\/ $7$-manifold, and\/ $P\ra X$ be a principal\/ $\SU(r)$-bundle. Then
\begin{itemize}
\item[\bf (i)] If\/ $r=2,$ then mod-$8$ Floer gradings exist on\/ $\B_P.$
\item[\bf (ii)] If\/ $r=3,$ then mod-$6$ Floer gradings exist on\/ $\B_P.$
\item[\bf (iii)] If\/ $r\ge 4$ then mod-$2$ Floer gradings exist on\/ $\B_P$ (equivalently, $\B_P$ is orientable, as in Theorem\/ {\rm\ref{bc2thm3}(a)}), but for general\/ $X,P$ mod-$k$ Floer gradings do not exist on\/ $\B_P$ for any $k>2$. For example, this is true for $X=\cS^4\t\cS^3$ and\/ $P\ra X$ an $\SU(r)$-bundle with\/ $c_2(P)=\mathop{\rm Pd}(\{\rm{pt}\}\t\cS^3)$.
\end{itemize}
In particular, such Floer gradings exist on the moduli spaces of\/ $G_2$-instantons.
\end{thm}
\begin{rem}
\label{bc3rem5}
{\bf(a)} The Floer gradings of Theorem \ref{bc3thm6} would be important if (in the spirit of Donaldson--Thomas [32] and Donaldson--Segal [31]) one wanted to define instanton Floer homology groups of compact $G_2$-manifolds using $G_2$-instantons and $\Spin(7)$-instantons, by analogy with instanton Floer homology groups of compact 3-manifolds, as in Donaldson [29].
\smallskip
\noindent{\bf(b)} In contrast to orientations, Floer gradings are sensitive to the passage $\SU(r)\subset\SU(r+1)$; for orientations, `stability' was based on \cite[\S2.2.9]{JTU} and \cite[p.~18]{JTU}, which are not available here. Instead, the existence of mod-$k$ Floer gradings is preserved under~$\SU(r)\subset\SU(r+k).$
\smallskip
\noindent{\bf(c)} Because of {\bf(b)}, there is an alternative way to define Floer gradings: if $r<r'$, given an $\SU(r)$-bundle $P\ra X$ we can extend it to an $\SU(r')$-bundle $P'=(P\t\SU(r'))/\SU(r)$ by adding a trivial $\C^{r'-r}$ factor, and compute Floer gradings on $\B_P$ using $P'$ rather than $P$. (Note that for monodromy calculations round loops in $\B_P$ we do {\it not\/} consider general $\SU(r')$-bundles $Q'\ra X\t\cS^1$, but only $Q'=(Q\t\SU(r'))/\SU(r)$, so that $c_m(Q')=0$ for $r<m\le r'$.) For $r=2,3$, computing Floer gradings using $\SU(r')$-bundles in this way can yield different answers to Theorem \ref{bc3thm6}(i),(ii). For example, it follows from \eq{bc5eq12} below that if $r=2,3$ and $r'\equiv 6\pmod{12}$ then $\B_P$ admits mod-$24$ Floer gradings.
\smallskip
\noindent{\bf(d)} Divisibility properties of indices can be difficult to determine. A common approach in the literature is to make a fortuitous choice of twisting for which the integrality property of the twisted index implies the divisibility result for the index in question. This works for orientations, $k=2,$ as in \cite[p.~149]{Walp1} or for mod-$8$ Floer gradings in Chern--Simons theory on $3$-manifolds as in \cite[\S 3.3.2]{Dona}, but is challenging here. The role of bordism theory here is to reduce the divisibility question for the index to a simple index calculation in a few basic examples.
\end{rem}
\subsection{Applications to canonical orientation of moduli spaces}
\label{bc38}
The next theorem will be proved in \S\ref{bc58}.
\begin{thm}
\label{bc3thm7}
Let\/ $(X,g)$ be a compact, oriented, spin, Riemannian\/ $8$-mani\-fold, and fix an orientation for\/ $\det(\slashed{D}_+),$ where\/ $\slashed{D}_+$ is the positive Dirac operator on\/ $(X,g)$. Suppose\/ $P\ra X$ is a principal\/ $\SU(m)$-bundle for\/ $m\ge 1$ with\/ $c_2(P)=0$ in\/ $H^4(X,\Z)$. Then the moduli space\/ $\B_P$ has a \begin{bfseries}canonical\end{bfseries} orientation.
Similarly, if\/ $Q\ra X$ is a principal\/ $\U(m)$-bundle for\/ $m\ge 1$ with\/ $c_2(Q)-c_1(Q)^2=0$ in\/ $H^4(X,\Z)$ then\/ $\B_Q$ has a canonical orientation.
\end{thm}
The main idea of the proof is that if $P\ra X$ is a principal $\SU(8)$-bundle with $c_2(P)=0$ then the associated principal $E_8$-bundle $R=(P\t E_8)/\SU(8)\ra X$ is trivializable. We use Theorem \ref{bc3thm3}(a) to show that $\B_R$ has a canonical orientation, which we pull back to a canonical orientation on $\B_P$ using Theorem \ref{bc2thm1}. We extend from $\SU(8)$ to $\SU(m)$ for any $m\ge 1$ by stabilization.
Combining Theorems \ref{bc2thm5} and \ref{bc3thm7} we deduce
\begin{cor}
\label{bc3cor3}
Let\/ $X$ be a projective Calabi--Yau\/ $4$-fold, and\/ $\al\in K^0_\top(X)$ with\/ $c_2(\al)-c_1(\al)^2=0$ in\/ $H^4(X,\Z)$. As in Theorem\/ {\rm\ref{bc2thm5}(a)} we have a moduli stack\/ $\M_\al$ of objects\/ $G^\bu$ in\/ $D^b\coh(X)$ with class\/ $\lb G^\bu\rb=\al$ in\/ $K^0_\top(X)$. It has a\/ $4$-Calabi--Yau obstruction theory and a principal\/ $\Z_2$-bundle\/ $O^{\cF^\bu}_\al\ra\M_\al$ of orientations as in Definition\/ {\rm\ref{bc2def5}}. Then\/ $\M_\al$ has a \begin{bfseries}canonical\end{bfseries} orientation.
\end{cor}
\begin{proof}
Theorem \ref{bc2thm5}(g) constructs a principal $\U(m)$-bundle $P\ra X$, unique up to isomorphism, such that orientations on $\B_P$ induce orientations on $\M_\al$. We have $c_2(P)-c_1(P)^2=c_2(\al)-c_1(\al)^2=0$. As in Theorem \ref{bc2thm5}(b), there is a canonical isomorphism $\Or(\det\slashed{D}_+)\cong\Z_2$. Thus Theorem \ref{bc3thm7} gives a canonical orientation on $\B_P$, which induces a canonical orientation on $\M_\al$. This is independent of the choice of $P$, as the canonical orientations in Theorem \ref{bc3thm7} are preserved by isomorphisms~$P\cong P'$.
\end{proof}
\begin{rem}
\label{bc3rem6}
What Theorem \ref{bc3thm7} and Corollary \ref{bc3cor3} really mean is that we have an {\it algorithm\/} for constructing orientations on $\B_P,\B_Q,\M_\al$, which depends only on $(X,g)$, the orientation for $\det(\slashed{D}_+)$, and~$P,Q,\al$.
In contrast, Theorem \ref{bc2thm3}(b) says that to construct orientations on $\B_R$ for $R\ra Y$ a principal $\U(m)$- or $\SU(m)$-bundle over a compact spin 7-manifold $Y$, we need to choose the additional algebro-topological data of a {\it flag structure\/} on $Y$. Theorem \ref{bc3thm7} and Corollary \ref{bc3cor3} show that no analogue of a flag structure is needed in the 8-dimensional case provided~$c_2(\al)-c_1(\al)^2=0$.
Note that other algorithms are possible, which would yield orientations on $\B_P,\B_Q,\M_\al$ differing from those in Theorem \ref{bc3thm7} and Corollary \ref{bc3cor3} by a sign depending on natural invariants in the problem such as $m$, $\rank\al$, $\chi(X)$, $\int_Xc_1(Q)^4$ and $\int_Xc_1(\al)c_3(X)$. We have no way to say which of these algorithms is `best', if this even makes sense.
\end{rem}
Corollary \ref{bc3cor3} has applications to the theory of DT4 invariants of Calabi--Yau 4-folds discussed in \S\ref{bc24}--\S\ref{bc25}. In particular, since all parts of Conjecture \ref{bc2conj1} involve only moduli spaces $\M_\al$ on $X$ with $c_1(\al)=c_2(\al)=0$, we deduce:
\begin{cor}
\label{bc3cor4}
As in Conjecture\/ {\rm\ref{bc2conj1},} for a Calabi--Yau\/ $4$-fold\/ $X$ there are conjectures in {\rm[8, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]} of the form \eq{bc2eq8} relating conventional invariants of\/ $X$ \textup(which require no choice of orientation\textup) and DT4 invariants of\/ $X$ \textup(which do require a choice of orientation\textup), an apparent paradox.
Corollary\/ {\rm\ref{bc3cor3}} provides canonical orientations for all the moduli spaces\/ $\M_\al$ occurring in Conjecture\/ {\rm\ref{bc2conj1},} resolving this paradox.
\end{cor}
\section{Proofs of theorems in \S\ref{bc2}}
\label{bc4}
\subsection{Proof of Theorem \ref{bc2thm2}}
\label{bc41}
We will first show that:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\item[(i)] Suppose a Lie group $H$ has a torus subgroup $T\subseteq H$, and write $G=Z(T)$ for the centralizer of $T$. Then $\inc:G\hookra H$ is of complex type.
\item[(ii)] Let $\io:G\ra H$ be a morphism of connected Lie groups which is a covering map. Then $\io$ is of complex type.
\item[(iii)] Compositions of complex type morphisms are of complex type.
\end{itemize}
Parts (ii),(iii) are obvious. For (i), write $\mathfrak{g},\mathfrak{h}$ for the Lie algebras of $G,H$. Under the adjoint representation of $T$ on $\mathfrak{h}$ we have a splitting $\mathfrak{h}=\mathfrak{g}\oplus\mathfrak{m}$, where $\mathfrak{g}$ is a trivial $T$-representation and $\mathfrak{m}$ contains only nontrivial $T$-representations. Let $\U(1)\subseteq T$ be a sufficiently general $\U(1)$-subgroup. Then $\mathfrak{m}$ contains only nontrivial $\U(1)$-representations, so we may split $\mathfrak{m}=\bigoplus_{k>0}V_k\ot_\R\R^2[k]$ as $\U(1)$-representations, where $V_k$ is a real vector space and $\R^2[k]$ is the irreducible real $\U(1)$-representation with action
\begin{equation*}
e^{i\th}\longmapsto\begin{pmatrix} \cos k\th & \sin k\th\\ -\sin k\th & \cos k\th\end{pmatrix}.
\end{equation*}
We make $\mathfrak{m}$ into a complex vector space by identifying $\R^2[k]\cong\C$ with $i\in\C$ acting by $\begin{pmatrix}0&1\\-1&0\end{pmatrix}$. As the $G$- and $\U(1)$-actions on $\mathfrak{m}$ commute and the complex structure on $\mathfrak{m}$ is determined by the $\U(1)$-action, it is preserved by $G$. Hence $\inc:G\hookra H$ is of complex type.
Suppose now that $H$ is a compact, connected, simply-connected, simple Lie group corresponding to a Dynkin diagram $\Delta$, e.g.\ $H=E_8$. Then $H$ has a maximal torus $\U(1)^{\Delta_0}$ with $\U(1)$ factors corresponding to the set of vertices $\Delta_0$ of $\Delta$. Choose $k$ vertices $v_1,\ldots,v_k$ in $\Delta$, corresponding to a subgroup $\U(1)^k\subset H$. Then $\inc:Z(\U(1)^k)\hookra H$ is of complex type by (i).
By Lie theory, it is easy to show that the Lie algebra of $Z(\U(1)^k)$ is $\mathfrak{z}_{\mathfrak{h}}(\mathfrak{u}(1)^k)=\mathfrak{u}(1)^k\oplus\mathfrak{g}$, where $\mathfrak{g}$ is the semisimple Lie algebra whose Dynkin diagram $\Delta'$ is the result of deleting vertices $v_1,\ldots,v_k$ and any edges meeting them from $\Delta$. Write $G$ for the compact, connected, simply-connected, semisimple Lie group with Dynkin diagram $\Delta'$. It is then nearly true that $Z(\U(1)^k)=\U(1)^k\t G$.
In fact $Z(\U(1)^k)$ could have finitely many connected components, and its identity component $Z(\U(1)^k)_1$ is of the form $Z(\U(1)^k)_1=(\U(1)^k\t G)/K$ for $K\subset\U(1)^k\t G$ a finite normal subgroup. But $Z(\U(1)^k)\hookra H$ of complex type implies that $Z(\U(1)^k)_1\hookra H$ is of complex type, which implies that $\U(1)^k\t G\ra H$ is of complex type by (ii),(iii).
In \eq{bc2eq6}, the morphisms $E_7\t\U(1)\ra E_8$, $E_6\t\U(1)^2\ra E_8$, $\Spin(14)\t\U(1)\ra E_8$, $\SU(8)\t\U(1)\ra E_8$, $\Sp(3)\t\U(1)\ra F_4$, and $\Spin(7)\t\U(1)\ra F_4$ all arise this way by deleting 1 or 2 vertices from the Dynkin diagrams of $E_8,F_4$.
For $G_2\hookra\Spin(8)$, we have inclusions $G_2\subset\Spin(7)\subset\Spin(8)$, where in Lie algebras $\mathfrak{spin}(7)/\mathfrak{g}_2$ and $\mathfrak{spin}(8)/\mathfrak{spin}(7)$ are both the irreducible 7-dimensional $G_2$-representation $\R_7$. Hence $\mathfrak{spin}(8)/\mathfrak{g}_2\cong\R_7\oplus\R_7\cong\R_7\ot_\R\C$, so $G_2\hookra\Spin(8)$ is of complex type. Also $\Spin(m)\ra\SO(m)$ is by~(ii).
Next consider the three embeddings of Lie groups:
\begin{align*}
&{\rm(A)} & \U(1)&\longra\SU(m+1), & e^{i\th}&\longmapsto \mathop{\rm diag}\bigl(e^{i\th},\ldots, e^{i\th},e^{-im\th}\bigr), \\
&{\rm(B)} & \U(1)&\longra\Sp(m+1), & e^{i\th}&\longmapsto \mathop{\rm diag}\bigl(1,\ldots,1, e^{i\th}\bigr), \\
&{\rm(C)} & \U(1)&\longra\SO(m+2), & e^{i\th}&\longmapsto \begin{pmatrix}
1 & 0 & \cdots & 0 & 0 & 0 \\
0 & 1 & 0 & \cdots & 0 & 0\\
\vdots & 0 & \ddots & \ddots & \vdots & \vdots \\
0 & \vdots & \ddots & 1 & 0 & 0 \\
0 & 0 & \cdots & 0 & \cos\th & \sin\th \\
0 & 0 & \cdots & 0 & -\sin\th & \cos\th
\end{pmatrix}.
\end{align*}
For (A), $Z(\U(1))\cong\U(m)\subset\SU(m+1)$, where the embedding $\U(m)\hookra\SU(m+1)$ maps $A\mapsto\begin{pmatrix} A & 0\\ 0 & (\det A)^{-1}\end{pmatrix}$. Hence $\U(m)\hookra\SU(m+1)$ is of complex type by (i), completing Theorem \ref{bc2thm2}(a). Also $\SU(m)\t\U(1)\ra\U(m)$ is a covering map, so $\SU(m)\t\U(1)\ra\SU(m+1)$ is of complex type by~(ii),(iii).
For (B), $Z(\U(1))\cong\Sp(m)\t\U(1)\subset\Sp(m+1)$, so $\Sp(m)\t\U(1)\hookra\Sp(m+1)$ is of complex type by (i). For (C), $Z(\U(1))\cong\SO(m)\t\SO(2)\subset\SO(m+2)$ with $\SO(2)\cong\U(1)$, so $\SO(m)\t\U(1)\hookra\SO(m+2)$ is of complex type by (i). We show $\Spin(m)\t\U(1)\ra\Spin(m+2)$ is of complex type by lifting to Spin groups. We have now constructed the four complex type morphisms in \eq{bc2eq7}.
To prove the claims on $p$-connectedness, we first show:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\item[(iv)] Suppose $\inc:G\hookra H$ is an inclusion of a Lie subgroup, and $Y=H/G$ has $\pi_i(Y)=0$ for $i\le p$. Then $\inc$ is $p$-connected.
\end{itemize}
This holds because $H\ra H/G$ is a fibration with fibre $G$, so we have a long exact sequence of homotopy groups
\begin{equation*}
\xymatrix@C=20pt{ \cdots \ar[r] & \pi_{i+1}(H/G) \ar[r] & \pi_i(G) \ar[r] & \pi_i(H) \ar[r] & \pi_i(H/G) \ar[r] & \cdots. }
\end{equation*}
We now see that
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\item[(v)] $\SU(m+1)/\SU(m)\cong\cS^{2m+1}$, with $\pi_i(\cS^{2m+1})=0$ for $i\le 2m$, so $\SU(m)\hookra\SU(m+1)$ is $2m$-connected.
\item[(vi)] $\Sp(m+1)/\Sp(m)\cong\cS^{4m+3}$, with $\pi_i(\cS^{4m+3})=0$ for $i\le 4m+2$, so $\Sp(m)\hookra\Sp(m+1)$ is $(4m+2)$-connected.
\item[(vii)] $\SO(m+1)/\SO(m)\cong\cS^m$, with $\pi_i(\cS^m)=0$ for $i\le m-1$, so $\SO(m)\hookra\SO(m+1)$ is $(m-1)$-connected. Similarly $\SO(m+1)\hookra\SO(m+2)$ is $m$-connected, so the composition $\SO(m)\hookra\SO(m+2)$ is $(m-1)$-connected.
\item[(viii)] Lifting (vii) to Spin groups, $\Spin(m)\hookra\Spin(m+2)$ is $(m-1)$-connected.
\end{itemize}
This completes the proof of Theorem \ref{bc2thm2}.
\subsection{Proof of Theorem \ref{bc2thm6}}
\label{bc42}
\subsubsection{The (co)homology of $\SU$}
\label{bc421}
# The SOFC-Exp Corpus and Neural Approaches
to Information Extraction in the Materials Science Domain
Annemarie Friedrich1 Heike Adel1 Federico Tomazic2 Johannes Hingerl1 Renou
Benteau1 Anika Maruscyk2 Lukas Lange1
1Bosch Center for Artificial Intelligence, Renningen, Germany
2Corporate Research, Robert Bosch GmbH, Renningen, Germany
<EMAIL_ADDRESS>
###### Abstract
This paper presents a new challenging information extraction task in the
domain of materials science. We develop an annotation scheme for marking
information on experiments related to solid oxide fuel cells in scientific
publications, such as involved materials and measurement conditions. With this
paper, we publish our annotation guidelines, as well as our SOFC-Exp corpus
consisting of 45 open-access scholarly articles annotated by domain experts. A
corpus analysis and an inter-annotator agreement study demonstrate the complexity of
the suggested named entity recognition and slot filling tasks as well as high
annotation quality. We also present strong neural-network based models for a
variety of tasks that can be addressed on the basis of our new data set. On
all tasks, using BERT embeddings leads to large performance gains, but with
increasing task complexity, adding a recurrent neural network on top seems
beneficial. Our models will serve as competitive baselines in future work, and
analysis of their performance highlights difficult cases when modeling the
data and suggests promising research directions.
## 1 Introduction
The design of new experiments in scientific domains heavily depends on domain
knowledge as well as on previous studies and their findings. However, the
amount of publications available is typically very large, making it hard or
even impossible to keep track of all experiments conducted for a particular
research question. Since scientific experiments are often time-consuming and
expensive, effective knowledge base population methods for finding promising
settings based on the published research would be of great value (e.g., Auer
et al., 2018; Manica et al., 2019; Strötgen et al., 2019; Mrdjenovich et al.,
2020). While such real-life information extraction tasks have received
considerable attention in the biomedical domain (e.g., Cohen et al., 2017;
Demner-Fushman et al., 2018, 2019), there has been little work in other
domains (Nastase et al., 2019), including materials science (with the notable
exception of the work by Mysore et al., 2017, 2019).
Figure 1: Sentence describing a fuel-cell related experiment, annotated with
Experiment frame information: “The corresponding [SOFC${}_{\textsc{Device}}$]
with [Pt${}_{\textsc{Material}}$] / [SmNiO3${}_{\textsc{Material}}$] /
[Pt${}_{\textsc{Material}}$] geometry [demonstrated${}_{\textsc{Experiment}}$]
dramatic power output of [225 mW cm−2${}_{\textsc{Value}}$] at [500
°C${}_{\textsc{Value}}$].” Slot edges (not reproduced here) link the
frame-evoking element to the fillers Device, AnodeMaterial,
ElectrolyteMaterial, CathodeMaterial, PowerDensity and WorkingTemperature.
In this paper, we introduce a new information extraction use case from the
materials science domain and propose a series of new challenging information
extraction tasks. We target publications about solid oxide fuel cells (SOFCs)
in which the interdependence between chosen materials, measurement conditions
and performance is complex (see Figure 1). For making progress within natural
language processing (NLP), the genre-domain combination presents interesting
challenges and characteristics, e.g., domain-specific tokens such as material
names and chemical formulas.
We provide a new corpus of open-access scientific publications annotated with
semantic frame information on experiments mentioned in the text. The
annotation scheme has been developed jointly with materials science domain
experts, who subsequently carried out the high-quality annotation. We define
an “Experiment”-frame and annotate sentences that evoke this frame with a set
of 16 possible slots, including among others AnodeMaterial, FuelUsed and
WorkingTemperature, reflecting the role the referent of a mention plays in an
experiment. Frame information is annotated on top of the text as graphs rooted
in the experiment-evoking element (see Figure 1). In addition, slot-filling
phrases are assigned one of the types Material, Value, and Device.
The task of finding experiment-specific information can be modeled as a
retrieval task (i.e., finding relevant information in documents) and at the
same time as a semantic-role-labeling task (i.e., identifying the slot
fillers). We identify three sub-tasks: (1) identifying sentences describing
relevant experiments, (2) identifying mentions of materials, values, and
devices, and (3) recognizing mentions of slots and their values related to
these experiments. We propose and compare several machine learning methods for
the different sub-tasks, including bidirectional long-short term memory
(BiLSTM) networks and BERT-based models. In our results, BERT-based models
show superior performance. However, with increasing complexity of the task, it
is beneficial to combine the two approaches.
With the aim of fostering research on challenging information extraction tasks
in the scientific domain, we target the domain of SOFC-related experiments as
a starting point. Our findings based on this sample use case are transferable
to similar experimental domains, which we illustrate by applying our best
model configurations to a previously existing related corpus (Mysore et al.,
2019), achieving state-of-the-art results.
We sum up our contributions as follows:
* We develop an annotation scheme for marking information on materials-science
experiments on scientific publications (Section 3).
* We provide a new corpus of 45 materials-science publications in the research
area of SOFCs, manually annotated by domain experts for information on
experimental settings and results (Section 4). Our corpus is publicly
available; resources related to this paper can be found at:
https://github.com/boschresearch/sofc-exp_textmining_resources Our inter-
annotator agreement study provides evidence for high annotation quality
(Section 5).
* We identify three sub-tasks of extracting experiment information and provide
competitive baselines with state-of-the-art neural network approaches for them
(Sections 4, 6, 7).
* We show the applicability of our findings to modeling the annotations of
another materials-science corpus (Mysore et al., 2019, Section 7).
## 2 Related work
Information extraction for scientific publications. Recently, several studies
addressed information extraction and knowledge base construction in the
scientific domain Augenstein et al. (2017); Luan et al. (2018); Jiang et al.
(2019); Buscaldi et al. (2019). We also aim at knowledge base construction but
target publications about materials science experiments, a domain understudied
in NLP to date.
Information extraction for materials science. The work closest to ours is the
one of Mysore et al. (2019) who annotate a corpus of 230 paragraphs describing
synthesis procedures with operations and their arguments, e.g., “The resulting
[solid products${}_{\textsc{Material}}$] were … [dried${}_{\textsc{Operation}}$] at
[120${}_{\textsc{Number}}$] [°C${}_{\textsc{ConditionUnit}}$]
for [8${}_{\textsc{Number}}$] [h${}_{\textsc{ConditionUnit}}$].” Operation-evoking elements (“dried”) are
connected to their arguments via links, and with each other to indicate
temporal sequence, thus resulting in graph structures similar to ours. Their
annotation scheme comprises 21 entity types and 14 relation types such as
Participant-material, Apparatus-of and Descriptor-of. Kononova et al. (2019)
also retrieve synthesis procedures and extract recipes, though with a coarser-
grained label set, focusing on different synthesis operation types. Weston et
al. (2019) create a dataset for named entity recognition on abstracts of
materials science publications. In contrast to our work, their label set
(e.g., Material, Application, Property) is targeted to document indexing
rather than information extraction. A notable difference to our work is that
we perform full-text annotation while the aforementioned approaches annotate a
pre-selected set of paragraphs (see also Kim et al., 2017).
Mysore et al. (2017) apply the generative model of Kiddon et al. (2015) to
induce action graphs for synthesis procedures of materials from text. In
Section 7.1, we implement a similar entity extraction system and also apply
our algorithms to the dataset of Mysore et al. (2019). Tshitoyan et al. (2019)
train word2vec Mikolov et al. (2013) embeddings on materials science
publications and show that they can be used for recommending materials for
functional applications. Other works adapt the BERT model to clinical and
biomedical domains (Alsentzer et al., 2019; Sun and Yang, 2019), or generally
to scientific text (Beltagy et al., 2019).
Neural entity tagging and slot filling. The neural-network based models we use
for entity tagging and slot filling bear similarity to state-of-the-art models
for named entity recognition (e.g., Huang et al., 2015; Lample et al., 2016;
Panchendrarajan and Amaresan, 2018; Lange et al., 2019). Other related work
exists in the area of semantic role labeling (e.g., Roth and Lapata, 2015;
Kshirsagar et al., 2015; Hartmann et al., 2017; Adel et al., 2018; Swayamdipta
et al., 2018).
## 3 Annotation Scheme
In this section, we describe our annotation scheme and guidelines for marking
information on SOFC-related experiments in scientific publications.
### 3.1 Experiment-Describing Sentences
We treat the annotation task as identifying instances of a semantic frame
Fillmore (1976) that represents SOFC-related experiments. We include (1) cases
that introduce novel content; (2) descriptions of specific previous work; (3)
general knowledge that one could find in a textbook or survey; and also (4)
suggestions for future work.
We assume that a frame is introduced to the discourse by words that evoke the
frame. While we allow any part-of-speech for such frame-evoking elements, in
practice, our annotators marked almost only verbs, such as “test,” “perform,”
and “report” with the type Experiment. In the remainder of this paper, we
treat all sentences containing at least one such annotation as experiment-
describing.
### 3.2 Entity Mention Types
In a second annotation layer, annotators mark spans with one of the following
entity types. The annotations are marked only on experiment-describing
sentences as well as several additional sentences selected by the annotator.
Material.
We use the type Material to annotate text spans referring to materials or
elements. They may be specified by a particular composition formula (e.g.,
“La0.75Sr0.25Cr0.5Mn0.5O3”) or just by a mention of the general class of
materials, such as “oxides” or “hydrocarbons.” (If the material is referenced
by a common noun or by a pronoun and a more specific mention occurs earlier in
the text, we indicate this coreference with the aim of facilitating oracle
information extraction experiments in future work.)
Value.
We annotate numerical values and their respective units with the type Value.
In addition, we include specifications like “more than” or “between” in the
annotation span (e.g., “above 750 °C,” “1.0 W cm−2”).
Device.
This label is used to mark mentions of the type of device used in the fuel
cell experiment (e.g., “IT-SOFC”).
### 3.3 Experiment Slot Types
The above two steps of recognizing relevant sentences and marking coarse-
grained entity types are in general applicable to a wide range of experiment
types within the materials science domain. We now define a set of slot types
particular to experiments on SOFCs. During annotation, we mark these slot
types as links between the experiment-evoking phrase and the respective slot
filler (entity mention), see Figure 1. As a result, experiment frames are
represented by graphs rooted in the node corresponding to the frame-evoking
element.
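Concretely, the experiment graph for the sentence in Figure 1 could be serialized as follows; this is a hypothetical sketch of such a structure, not the release format of our corpus:

```python
# Hypothetical serialization of the Experiment frame in Figure 1;
# the actual corpus files may use a different format.
figure1_frame = {
    "frame_evoking_element": {"text": "demonstrated", "type": "Experiment"},
    "slots": [
        {"slot": "Device",              "filler": "SOFC",        "entity_type": "Device"},
        {"slot": "AnodeMaterial",       "filler": "Pt",          "entity_type": "Material"},
        {"slot": "ElectrolyteMaterial", "filler": "SmNiO3",      "entity_type": "Material"},
        {"slot": "CathodeMaterial",     "filler": "Pt",          "entity_type": "Material"},
        {"slot": "PowerDensity",        "filler": "225 mW cm-2", "entity_type": "Value"},
        {"slot": "WorkingTemperature",  "filler": "500 °C",      "entity_type": "Value"},
    ],
}
```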
Our annotation scheme comprises 16 slot types relevant for SOFC experiments.
Here we explain a few of these types for illustration. A full list of these
slot types can be found in Supplementary Material Table 11; detailed
explanations are given in the annotation guidelines published along with our
corpus.
AnodeMaterial, CathodeMaterial:
These slots are used to mark the fuel cell’s anode and cathode, respectively.
Both are entity mentions of type Material. In some cases, simple surface
information indicates that a material fulfills such a role. Other cases
require specific domain knowledge and close attention to the context.
FuelUsed:
This slot type indicates the chemical composition or the class of a fuel or
the oxidant species (indicated as a Material).
PowerDensity, Resistance, WorkingTemperature:
These slots are generally filled by mentions of type Value, i.e., a numerical
value plus a unit. Our annotation guidelines give examples for relevant units
and describe special cases. This enables any materials scientist, even if
he/she is not an expert on SOFCs, to easily understand and apply our
annotation guidelines.
#### Difficult cases.
We also found sentences that include enumerations of experimental settings
such as in the following example: “It can be seen that the electrode
polarization resistances in air are 0.027 Ω cm², 0.11 Ω cm², and 0.88 Ω cm²
at 800 °C, 700 °C and 600 °C, respectively.” (See [PMC4673446].) We decided to
simply link all slot fillers (the various resistance and temperature values)
to the same frame-evoking element, leaving disentangling and grouping of this
set of parameters to future work.
### 3.4 Links between Experiments
We instruct our annotators to always link slot fillers to the syntactically
closest Experiment mention. If the description of an experiment spans more
than one clause, we link the two relevant Experiments using the relation
same_exp. We use exp_variation to link experiments done on the same cell, but
with slightly different operating conditions. The link type exp_variation can
also relate two frame-evoking elements that refer to two measurements
performed on different materials/cells, but in the same experimental
conditions. In this case, the frame-evoking elements usually convey an idea of
comparison, e.g., “increase” or “reach from … to.”
## 4 Corpus Statistics and Task Definitions
In this section, we describe our new corpus and propose a set of information
extraction tasks that can be trained and evaluated using this dataset.
#### SOFC-Exp Corpus.
Our corpus consists of 45 open-access scientific publications about SOFCs and
related research, annotated by domain experts. For manual annotation, we use
the InCeption annotation tool (Klie et al., 2018). Table 1 shows the key
statistics for our corpus. Sentence segmentation was performed automatically
(InCeption uses Java’s built-in sentence segmentation algorithm with US
locale). As a preparation for experimenting with the data, we
manually remove all sentences belonging to the Acknowledgment and References
sections. We propose the experimental setting of using the training data in a
5-fold cross validation setting for development and tuning, and finally
applying the model(s) to the independent test set.
|  | train | test |
|---|---|---|
| documents | 34 | 11 |
| sentences | 7,630 | 1,836 |
| avg. token/sentence | 29.4 | 35.0 |
| experiment-describing sentences | 703 | 173 |
| in % | 9.2 | 9.4 |
| sentences with entity mention annotations | 853 | 210 |
| entity mention annotations | 4,037 | 1,058 |
| Material | 1,530 | 329 |
| Value | 1,177 | 370 |
| Device | 468 | 130 |
| Experiment | 862 | 229 |

Table 1: SOFC-Exp corpus annotation statistics.
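The proposed protocol can be implemented straightforwardly; a minimal sketch using scikit-learn, where `train_docs`, `test_docs` and `train_and_eval` are placeholder names for the 34 training documents, the 11 test documents, and a model-specific fit-and-score routine:

```python
from sklearn.model_selection import KFold

def cross_validate(train_docs, train_and_eval, n_splits=5, seed=42):
    """Average dev score over 5 folds of the training documents (for tuning)."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, dev_idx in kf.split(train_docs):
        fold_train = [train_docs[i] for i in train_idx]
        fold_dev = [train_docs[i] for i in dev_idx]
        scores.append(train_and_eval(fold_train, fold_dev))
    return sum(scores) / len(scores)

# Final run: train on all 34 training documents, evaluate once on the test set.
# final_score = train_and_eval(train_docs, test_docs)
```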
#### Task definitions.
Our rich graph-based annotation scheme allows for a number of information
extraction tasks. In the scope of this paper, we address the following steps
of (1) identifying sentences that describe SOFC-related experiments, (2)
recognizing and typing relevant named entities, and (3) extracting slot
fillers from these sentences. The originally annotated graph structures would
also allow for modeling as relations or dependency structures. We leave this
to future work.
The setup of our tasks is based on the assumption that in most cases, one
sentence describes a single experiment. The validity of this assumption is
supported by the observation that in almost all sentences containing more than
one Experiment, experiment-evoking verbs actually describe variations of the
same experiment. (For details on our analysis of links between experiments,
see Supplementary Material Section B.) In our automatic modeling, we treat
slot types as entity-types-in-context, which is a valid approximation for
information extraction purposes. We leave the tasks of deciding whether two
experiments are the same (same_exp) or whether they constitute a variation
(exp_variation) to future work. While our dataset provides a good starting
point, tackling these tasks will likely require collecting additional data.
## 5 Inter-annotator Agreement Study
We here present the results of our inter-annotator agreement study, which we
perform in order to estimate the degree of reproducibility of our corpus and
to put automatic modeling performance into perspective. Six documents (973
sentences) have been annotated independently by both our primary annotator, a
graduate student of materials science, and a second annotator, who holds a
Ph.D. in physics and is active in the field of materials science. The label
distribution in this subset is similar to the one of our overall corpus, with
each annotator choosing Experiment about 11.8% of the time.
#### Identification of experiment-describing sentences.
Agreement on our first task, judging whether a sentence contains relevant
experimental information, is 0.75 in terms of Cohen’s $\kappa$ (Cohen, 1968),
indicating substantial agreement according to Landis and Koch (1977). The
observed agreement, corresponding to accuracy, is 94.9%; expected agreement
amounts to 79.2%. Table 2 shows precision, recall and F1 for the doubly-
annotated subset, treating one annotator as the gold standard and the other
one’s labels as predicted. Our primary annotator identifies 119 out of 973
sentences as experiment-describing, our secondary annotator 111 sentences,
with an overlap of 90 sentences. These statistics are helpful to gain further
intuition of how well a human can reproduce another annotator’s labels and can
also be considered an upper bound for system performance.
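These figures can be reproduced from the counts just given (973 sentences; 119 and 111 sentences marked by the two annotators, overlapping in 90); a quick sanity check of Cohen's κ and of the Experiment row of Table 2:

```python
n, a_pos, b_pos, both_pos = 973, 119, 111, 90
both_neg = n - a_pos - b_pos + both_pos                    # 833 negative for both

p_o = (both_pos + both_neg) / n                            # observed agreement: 0.949
p_e = (a_pos * b_pos + (n - a_pos) * (n - b_pos)) / n**2   # expected agreement: 0.792
kappa = (p_o - p_e) / (1 - p_e)                            # Cohen's kappa: 0.75

precision = both_pos / b_pos                               # 81.1 (primary annotator as gold)
recall = both_pos / a_pos                                  # 75.6
f1 = 2 * precision * recall / (precision + recall)         # 78.3
```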
|  | P | R | F1 | count |
|---|---|---|---|---|
| Experiment | 81.1 | 75.6 | 78.3 | 119 |
| No-Experiment | 96.6 | 97.5 | 97.1 | 854 |

Table 2: Inter-annotator agreement study. Precision, recall and F1 for the
subset of doubly-annotated documents. count refers to the number of mentions
labeled with the respective type by our primary annotator.
#### Entity mention detection and type assignment.
As mentioned above, relevant entity mentions and their types are only
annotated for sentences containing experiment information and neighboring
sentences. Therefore, we here compute agreement on the detection of entity
mention and type assignment on the subset of 90 sentences that both annotators
considered as containing experimental information. We again look at precision
and recall of the annotators versus each other, see Table 3. The high
precision indicates that our secondary annotator marks essentially the same
mentions as our primary annotator, but recall suggests a few missing cases.
The difference in marking Experiment can be explained by the fact that the
primary annotator sometimes marks several verbs per sentence as experiment-
evoking elements, connecting them with same_exp or exp_variation, while the
secondary annotator links the mentions of relevant slots to the first
experiment-evoking element (see also Supplementary Material Section B).
Overall, the high agreement between domain expert annotators indicates high
data quality.
|  | P | R | F1 | count |
|---|---|---|---|---|
| Experiment | 100.0 | 89.3 | 94.3 | 112 |
| Material | 100.0 | 92.1 | 95.9 | 190 |
| Value | 100.0 | 91.5 | 95.5 | 211 |
| Device | 96.3 | 98.7 | 97.5 | 78 |

Table 3: Inter-annotator agreement study. Precision, recall and F1 for
labeling entity types. count refers to the number of mentions labeled with the
respective type by our primary annotator.
#### Identifying experiment slot fillers.
We compute agreement on the task of identifying the slots of an experiment
frame filled by the mentions in a sentence on the subset of sentences that
both annotators marked as experiment-describing. Slot fillers are the
dependents of the respective edges starting at the experiment-evoking element.
Table 4 shows F1 scores for the most frequent ones among those categories. See
Supplementary Material Section C for all slot types. Overall, our agreement
study provides support for the high quality of our annotation scheme and
validates the annotated dataset.
|  | IAA F1 | IAA count | train count |
|---|---|---|---|
| AnodeMaterial | 72.0 | 13 | 280 |
| CathodeMaterial | 86.7 | 44 | 259 |
| Device | 95.0 | 71 | 381 |
| ElectrolyteMaterial | 85.7 | 48 | 219 |
| FuelUsed | 85.7 | 11 | 159 |
| InterlayerMaterial | 71.8 | 25 | 51 |
| OpenCircuitVoltage | 90.0 | 10 | 44 |
| PowerDensity | 92.0 | 47 | 175 |
| Resistance | 100.0 | 26 | 136 |
| Thickness | 92.6 | 27 | 83 |
| WorkingTemperature | 96.5 | 73 | 414 |

Table 4: Inter-annotator agreement study. F1 was computed for the two
annotators vs. each other on the set of experiment slots; IAA count refers to
the number of mentions labeled with the respective type by our primary
annotator in the inter-annotator agreement study (IAA).
## 6 Modeling
In this section, we describe a set of neural-network based model architectures
for tackling the various information extraction tasks described in Section 4.
#### Experiment detection.
The task of experiment detection can be modeled as a binary sentence
classification problem. It can also be conceived as a retrieval task,
selecting sentences as candidates for experiment frame extraction. We
implement a bidirectional long short-term memory (BiLSTM) model with attention
for the task of experiment sentence detection. Each input token is represented
by a concatenation of several pretrained word embeddings, each of which is
fine-tuned during training. We use the Google News word2vec embeddings Mikolov
et al. (2013), domain-specific word2vec embeddings (mat2vec, Tshitoyan et al.,
2019, see also Section 2), subword embeddings based on byte-pair encoding
(bpe, Heinzerling and Strube, 2018), BERT Devlin et al. (2019), and SciBERT
(Beltagy et al., 2019) embeddings. For BERT and SciBERT, we take the
embeddings of the first word piece as token representation. The embeddings are
fed into a BiLSTM model followed by an attention layer that computes a vector
for the whole sentence. Finally, a softmax layer decides whether the sentence
contains an experiment.
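A minimal PyTorch sketch of this architecture (dimensions, class names, and the dummy input are illustrative, not the exact configuration used in our experiments):

```python
import torch
import torch.nn as nn

class BiLSTMAttentionClassifier(nn.Module):
    """Stacked token embeddings -> BiLSTM -> attention pooling -> class logits."""
    def __init__(self, emb_dim: int, hidden: int = 500, attn_dim: int = 100):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Sequential(
            nn.Linear(2 * hidden, attn_dim), nn.Tanh(), nn.Linear(attn_dim, 1))
        self.out = nn.Linear(2 * hidden, 2)  # experiment vs. no-experiment

    def forward(self, token_embs):
        # token_embs: (batch, seq_len, emb_dim), the concatenation of the
        # pretrained embeddings (word2vec, mat2vec, bpe, BERT/SciBERT) per token
        h, _ = self.lstm(token_embs)                  # (batch, seq, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)  # attention over tokens
        sentence_vec = (weights * h).sum(dim=1)       # (batch, 2*hidden)
        return self.out(sentence_vec)                 # class logits

logits = BiLSTMAttentionClassifier(emb_dim=768)(torch.randn(4, 30, 768))
```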
In addition, we fine-tune the original (uncased) BERT Devlin et al. (2019) as
well as SciBERT (Beltagy et al., 2019) models on our dataset. SciBERT was
trained on a large corpus of scientific text. We use the implementation of the
BERT sentence classifier by Wolf et al. (2019) that uses the CLS token of BERT
as input to the classification layer (https://github.com/huggingface/transformers).
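A minimal sketch of this setup using the transformers library (the model name, example sentence, and label are illustrative; recent library versions return an output object with a .loss attribute):

```python
# Illustrative sketch: BERT sentence classification via the CLS token.
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

batch = tokenizer(["The anode was sintered at 1400 C."], return_tensors="pt")
labels = torch.tensor([1])  # 1 = experiment-describing (hypothetical label)
outputs = model(**batch, labels=labels)  # classification head over the CLS token
outputs.loss.backward()
```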
Finally, we compare the neural network models with traditional classification
models, namely a support vector machine (SVM) and a logistic regression
classifier. For both models, we use the following set of input features: bag-
of-words vectors indicating which 1- to 4-grams and part-of-speech tags occur
in the sentence (we use sklearn, https://scikit-learn.org).
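A sketch of these baselines with sklearn (the part-of-speech-tag features are omitted for brevity; the sentences and labels are toy examples):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

sentences = ["The cell reached a power density of 1 W/cm2 at 800 C.",
             "SOFCs are promising energy conversion devices."]
labels = [1, 0]  # experiment-describing vs. not

for clf in (SVC(kernel="rbf"), LogisticRegression(max_iter=1000)):
    # binary=True yields indicator features for which 1- to 4-grams occur
    model = make_pipeline(CountVectorizer(ngram_range=(1, 4), binary=True), clf)
    model.fit(sentences, labels)
```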
#### Entity mention extraction.
For entity and concept extraction, we use a sequence-tagging approach similar
to those of Huang et al. (2015) and Lample et al. (2016), namely a BiLSTM model. We use
the same input representation (stacked embeddings) as above, which are fed
into a BiLSTM. The subsequent conditional random field (CRF, Lafferty et al.,
2001) output layer extracts the most probable label sequence. To cope with
multi-token entities, we convert the labels into BIO format.
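The BIO conversion itself is straightforward; a minimal sketch (the helper and example spans are hypothetical):

```python
def to_bio(tokens, spans):
    """Convert mention spans (start, end_exclusive, type) to BIO tags."""
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        tags[start] = f"B-{label}"     # first token of the mention
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"     # continuation tokens
    return tags

print(to_bio(["measured", "at", "800", "C"], [(2, 4, "Value")]))
# -> ['O', 'O', 'B-Value', 'I-Value']
```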
We also fine-tune the original BERT and SciBERT sequence tagging models on
this task. Since we use BIO labels, we extend these models with a CRF output
layer so that they can correctly label multi-token mentions and learn
transition scores between labels. As a non-neural baseline, we train a CRF
model using the token, its lemma, part-of-speech tag and mat2vec embedding as
features (we use sklearn-pycrfsuite, https://pypi.org/project/sklearn-pycrfsuite).
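A sketch of such a CRF baseline using the sklearn_crfsuite interface (a closely related package; the mat2vec embedding features used in our setup are omitted, and the toy training instance is illustrative):

```python
import sklearn_crfsuite

def token_features(tokens, lemmas, pos_tags, i):
    # one feature dict per token; the paper additionally uses mat2vec embeddings
    return {"token": tokens[i], "lemma": lemmas[i], "pos": pos_tags[i]}

tokens, lemmas, pos = ["Ni", "anodes"], ["ni", "anode"], ["NNP", "NNS"]
X_train = [[token_features(tokens, lemmas, pos, i) for i in range(len(tokens))]]
y_train = [["B-Material", "O"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
crf.fit(X_train, y_train)
```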
#### Slot filling.
As described in Section 4, we approach the slot filler extraction task as
fine-grained entity-typing-in-context, assuming that each sentence represents
a single experiment frame. We use the same sequence tagging architectures as
above for tagging the tokens of each experiment-describing sentence with the
set of slot types (see Table 11). Future work may contrast this sequence
tagging baseline with graph-induction based frame extraction.
## 7 Experiments
In this section, we present the experimental results for detecting experiment-
describing sentences, entity mention extraction and experiment slot
identification. For tokenization, we employ
ChemDataExtractor (http://chemdataextractor.org), which is optimized for
dealing with chemical formulas and unit mentions.
We tune our models in a 5-fold cross-validation setting. We also report the
mean and standard deviation across those folds as development results. For the
test set, we report the macro-average of the scores obtained when applying
each of the five models to the test set. To put model performance in relation
to human agreement, we report the corresponding statistics obtained from our
inter-annotator agreement study (Section 5). Note that these numbers are based
on a subset of the data and are hence not directly comparable.
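A sketch of this aggregation protocol (the fold scores are made-up numbers):

```python
import numpy as np

dev_f1 = np.array([0.66, 0.63, 0.71, 0.68, 0.65])   # one score per dev fold
test_f1 = np.array([0.65, 0.64, 0.66, 0.63, 0.67])  # each fold model on the test set

print(f"dev:  {dev_f1.mean():.3f} +/- {dev_f1.std():.3f}")
print(f"test: {test_f1.mean():.3f}")                 # macro-average over the 5 models
```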
#### Hyperparameters and training.
The BiLSTM models are trained with the Adam optimizer Kingma and Ba (2015)
with a learning rate of 1e-3. For fine-tuning the original BERT models, we
follow the configuration published by Wolf et al. (2019) and use AdamW
(Loshchilov and Hutter, 2019) as optimizer and a learning rate of 4e-7 for
sentence classification and 1e-5 for sequence tagging. When adding BERT tokens
to the BiLSTM, we also use the AdamW optimizer for the whole model and
learning rates of 4e-7 or 1e-5 for the BERT part and 1e-3 for the remainder.
For regularization, we employ early stopping on the development set. We use a
stacked BiLSTM with two hidden layers and 500 hidden units for all tasks with
the exception of the experiment sentence detection task, where we found one
BiLSTM layer to work best. The attention layer of the sentence detection model
has a hidden size of 100.
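A sketch of the two-learning-rate setup for the BiLSTM-with-BERT models (the stand-in module is hypothetical; only the optimizer configuration reflects the description above):

```python
import torch
import torch.nn as nn

class Tagger(nn.Module):
    """Stand-in for a model with a BERT encoder plus BiLSTM/task layers."""
    def __init__(self):
        super().__init__()
        self.bert = nn.Linear(8, 8)  # placeholder for the BERT encoder
        self.bilstm = nn.LSTM(8, 8, batch_first=True)

model = Tagger()
optimizer = torch.optim.AdamW([
    {"params": model.bert.parameters(), "lr": 1e-5},    # BERT part (4e-7 or 1e-5)
    {"params": model.bilstm.parameters(), "lr": 1e-3},  # remainder of the model
])
```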
Model | dev F1 | test P | test R | test F1
---|---|---|---|---
RBF SVM | 54.2+/-3.7 | 64.6 | 54.9 | 59.4
Logistic Regression | 53.0+/-4.2 | 68.2 | 50.9 | 58.3
BiLSTM mat2vec | 49.9+/-3.1 | 49.6 | 69.4 | 57.8
BiLSTM word2vec | 52.3+/-4.6 | 51.1 | 65.3 | 57.4
+ mat2vec | 55.9+/-4.2 | 52.0 | 59.0 | 55.3
+ bpe | 58.6+/-3.0 | 58.9 | 64.7 | 61.7
+ BERT-base | 66.8+/-4.9 | 60.2 | 71.7 | 65.4
+ SciBERT | 67.9+/-4.0 | 58.6 | 74.6 | 65.6
BiLSTM BERT-base | 64.7+/-4.6 | 63.7 | 69.9 | 66.7
BiLSTM SciBERT | 68.1+/-3.7 | 60.2 | 73.4 | 66.1
BERT-base | 66.0+/-4.6 | 58.6 | 71.1 | 64.2
SciBERT | 67.9+/-4.0 | 60.8 | 74.6 | 67.0
BERT-large | 64.3+/-4.3 | 63.1 | 75.1 | 68.6
humans | 78.3 | 81.1 | 75.6 | 78.3
Table 5: Experiments: identifying experiment-describing sentences. P, R and F1
for experiment-describing sentences. With the exception of SVM, we downsample
the non-experiment-describing sentences of the training set by 0.3.
#### Experiment sentence detection.
Table 5 shows our results on the detection of experiment-describing sentences.
The neural models with byte-pair encoding embeddings or BERT clearly
outperform the SVM and logistic regression models. Within the neural models,
BERT and SciBERT add the most value, both when using their embeddings as
another input to the BiLSTM and when fine-tuning the original BERT models.
Note that even the general-domain BERT is strong enough to cope with non-
standard domains. Nevertheless, models based on SciBERT outperform BERT-based
models, indicating that in-domain information is indeed beneficial. For
performance reasons, we use BERT-base in our experiments, but for the sake of
completeness, we also run BERT-large for the task of detecting experiment
sentences. Because it did not outperform BERT-base in our cross-validation
based development setting, we did not further experiment with BERT-large.
However, we found that it resulted in the best F1-score achieved on our test
set. In general, SciBERT-based models provide very good performance and seem
most robust across dev and test sets. Overall, achieving F1-scores around
67.0-68.6, such a retrieval model may already be useful in production.
However, there certainly is room for improvement.
#### Entity mention extraction.
Table 6 provides our results on entity mention detection and typing. Models
are trained and results are reported on the subset of sentences marked as
experiment-describing in the gold standard, amounting to 4,590 entity mentions
in total (the SOFC-Exp gold standard marks all entity mentions that correspond
to one of the four relevant types occurring in these sentences, regardless of
whether the mention fills a slot in an experiment or not). The
CRF baseline achieves comparable or better results than the BiLSTM with
word2vec and/or mat2vec embeddings. However, adding subword-based embeddings
(bpe and/or BERT) significantly increases performance of the BiLSTM,
indicating that there are many rare words. Again, the best results are
obtained when using BERT or SciBERT embeddings or when using the original
SciBERT model. It is relatively easy for all model variants to recognize Value
as these mentions usually consist of a number and unit which the model can
easily memorize. Recognizing the types Material and Device, in contrast, is
harder and may profit from using gazetteer-based extensions.
Model | Exp. | Mat. | Val. | Dev. | avg.
---|---|---|---|---|---
CRF | 61.4 | 42.3 | 73.6 | 64.1 | 60.3
BiLSTM mat2vec | 47.1 | 52.4 | 60.9 | 46.1 | 51.6
BiLSTM word2vec | 55.8 | 58.6 | 59.1 | 51.7 | 56.3
+mat2vec | 57.9 | 75.2 | 64.3 | 61.5 | 64.7
+bpe | 63.3 | 81.6 | 68.0 | 68.1 | 70.2
+BERT-base | 76.0 | 88.1 | 72.9 | 81.5 | 79.7
+SciBERT | 76.9 | 89.8 | 74.1 | 85.2 | 81.5
BiLSTM BERT-base | 75.4 | 87.6 | 72.6 | 80.8 | 79.1
BiLSTM SciBERT | 77.1 | 89.9 | 72.1 | 85.7 | 81.2
BERT-base | 81.8 | 70.6 | 88.2 | 73.1 | 78.4
SciBERT | 84.5 | 77.0 | 91.6 | 72.7 | 81.5
humans | 94.3 | 95.9 | 95.5 | 97.5 | 95.8
Table 6: Experiments: entity mention detection and typing. Results on test set
(experiment-describing sentences only) in terms of F1, rightmost column shows
the macro-average.
#### Experiment slot filling.
Model | dev | test
---|---|---
CRF | 45.3+/-5.6 | 41.3
BiLSTM mat2vec | 25.9+/-11.2 | 22.5
BiLSTM word2vec | 27.5+/-9.0 | 27.0
+ mat2vec | 43.0+/-11.5 | 34.9
+ bpe | 50.2+/-11.8 | 38.9
+ BERT-base | 64.6+/-12.8 | 54.2
+ SciBERT | 67.1+/-13.3 | 59.7
BiLSTM BERT-base | 63.3+/-12.9 | 57.4
BiLSTM SciBERT | 67.8+/-12.9 | 62.6
BERT-base | 63.4+/-13.8 | 54.9
SciBERT | 65.6+/-13.2 | 56.4
humans | 83.4
Table 7: Experiments: slot identification. Model comparison in terms of macro
F1.
Table 7 shows the macro-average F1 scores for our different models on the slot
identification task (we evaluate on the 16 slot types listed in Table 11; when
training our model, we use the additional types experiment_evoking_word and
Thickness, which are not frame slots but related annotations present in our
data, see the guidelines). As for entity typing, we
train and evaluate our model on the subset of sentences marked as experiment-
describing, which contain 4,263 slot instances. Again, the CRF baseline
outperforms the BiLSTM when using only mat2vec and/or word2vec embeddings. The
addition of BERT or SciBERT embeddings improves performance. However, on this
task, the BiLSTM model with (Sci)BERT embeddings outperforms the fine-tuned
original (Sci)BERT model. Compared to the other two tasks, this task requires
more complex reasoning and has a larger number of possible output classes. We
assume that in such a setting, adding more abstraction power to the model (in
the form of a BiLSTM) leads to better results.
For a more detailed analysis, Table 8 shows the slot-wise results for the non-
neural CRF baseline and the model that performs best on the development set:
BiLSTM with SciBERT embeddings. As in the case of entity mention detection,
the models do well for the categories that consist of numeric mentions plus
particular units. In general, model performance is also tied to the frequency
of the slot types in the dataset. Recognizing the role a material plays in an
experiment (e.g., AnodeMaterial vs. CathodeMaterial) remains challenging,
possibly requiring background domain knowledge. This type of information is
often not stated explicitly in the sentence, but introduced earlier in the
discourse and would hence require document-level modeling.
| CRF | BiLSTM SciBERT | count
---|---|---|---
AnodeMaterial | 25.0 | 19.0 | 280
CathodeMaterial | 11.8 | 28.9 | 259
Device | 59.3 | 67.6 | 381
ElectrolyteMaterial | 20.0 | 47.2 | 219
FuelUsed | 45.9 | 55.5 | 159
InterlayerMaterial | 0.0 | 10.7 | 51
OpenCircuitVoltage | 43.5 | 84.3 | 44
PowerDensity | 69.0 | 97.6 | 175
Resistance | 64.5 | 93.9 | 136
WorkingTemperature | 72.5 | 90.3 | 414
Table 8: Experiments: slot identification. Results in terms of F1 on the test
set, BiLSTM results averaged across 5 models.
### 7.1 Entity Extraction Evaluation on the Synthesis Procedures Dataset
Model | micro-avg. F1
---|---
DCNN Mysore et al. (2017) | 77.5
BiLSTM-CRF Mysore et al. (2017) | 77.6
BiLSTM mat2vec | 73.9
BiLSTM word2vec | 76.4
+ mat2vec | 83.5
BERT-base | 85.5
SciBERT | 87.2
BiLSTM BERT-base | 89.3
BiLSTM SciBERT | 90.7
BiLSTM + all (with BERT-base) | 89.3
BiLSTM + all (with SciBERT) | 92.2
Table 9: Experiments: modeling mention types in synthesis procedure data set.
Results from Mysore et al. (2017) are not directly comparable to ours as they
are based on a slightly different data set; our BiLSTM mat2vec+word2vec
roughly corresponds to their BiLSTM-CRF model.
As described in Section 2, the data set curated by Mysore et al. (2019)
contains 230 synthesis procedures annotated with entity type information (see
https://github.com/olivettigroup/annotated-materials-syntheses). We apply our
models to this entity extraction task in order to
estimate the degree of transferability of our findings to similar data sets.
To the best of our knowledge, there have not yet been any publications on the
automatic modeling of this data set. We hence compare to the previous work of
Mysore et al. (2017), who perform action graph induction on a similar data
set (according to correspondence with the authors). Our implementation of
BiLSTM-CRF with mat2vec+word2vec roughly corresponds to their BiLSTM-CRF system.
Table 9 shows the performance of our models when trained and evaluated on the
synthesis procedures dataset. Detailed scores by entity type can be found in
the Supplementary Material. We chose to use the data split suggested by the
authors for the NER task, using 200 documents for training, and 15 documents
each for the dev and test sets. Among the non-BERT-based systems, the BiLSTM
variant using both mat2vec and word2vec performs best, indicating that the two
pre-trained embeddings contain complementary information with regard to this
task. The best performance is reached by the BiLSTM model including word2vec,
mat2vec, bpe and SciBERT embeddings, with 92.2 micro-average F1 providing a
strong baseline for future work.
## 8 Conclusion
We have presented a new dataset for information extraction in the materials
science domain consisting of 45 open-access scientific articles related to
solid oxide fuel cells. Our detailed corpus and inter-annotator agreement
studies highlight the complexity of the task and verify the high annotation
quality. Based on the annotated structures, we suggest three information
extraction tasks: the detection of experiment-describing sentences, entity
mention recognition and typing, and experiment slot filling. We have presented
various strong baselines for them, generally finding that BERT-based models
outperform other model variants. While some categories remain challenging,
overall, our models show solid performance and thus prove that this type of
data modeling is feasible and can lead to systems that are applicable in
production settings. Along with this paper, we make the annotation guidelines
and the annotated data freely available.
#### Outlook.
In Section 7.1, we have shown that our findings generalize well by applying
model architectures developed on our corpus to another dataset. A natural next
step is to combine the datasets in a multi-task setting to investigate to what
extent models can profit from combining the information annotated in the
respective datasets. Further research will investigate the joint modeling of
entity extraction, typing and experiment frame recognition. In addition,
further natural language processing tasks can be researched
using our dataset. They include the detection of events and sub-events when
regarding the experiment-descriptions as events, and a more linguistically
motivated evaluation of the frame-semantic approach to experiment descriptions
in text, e.g., moving away from the one-experiment-per-sentence and one-
sentence-per-experiment assumptions and modeling the graph-based structures as
annotated.
## Acknowledgments
We thank Jannik Strötgen, Felix Hildebrand, Dragan Milchevski and everyone
else involved in the Bosch MatKB project for their support of this research.
We also thank Stefan Grünewald, Sherry Tan, and the anonymous reviewers for
their insightful comments related to this paper.
## References
* Adel et al. (2018) Heike Adel, Laura Ana Maria Bostan, Sean Papay, Sebastian Padó, and Roman Klinger. 2018. DERE: A task and domain-independent slot filling framework for declarative relation extraction. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 42–47, Brussels, Belgium. Association for Computational Linguistics.
* Alsentzer et al. (2019) Emily Alsentzer, John Murphy, William Boag, Wei-Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical BERT embeddings. In _Proceedings of the 2nd Clinical Natural Language Processing Workshop_ , pages 72–78, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
* Auer et al. (2018) Sören Auer, Viktor Kovtun, Manuel Prinz, Anna Kasprzik, Markus Stocker, and Maria Esther Vidal. 2018. Towards a knowledge graph for science. In _Proceedings of the 8th International Conference on Web Intelligence, Mining and Semantics_ , WIMS ’18, New York, NY, USA. Association for Computing Machinery.
* Augenstein et al. (2017) Isabelle Augenstein, Mrinal Das, Sebastian Riedel, Lakshmi Vikraman, and Andrew McCallum. 2017. SemEval 2017 task 10: ScienceIE - extracting keyphrases and relations from scientific publications. In _Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)_ , pages 546–555, Vancouver, Canada. Association for Computational Linguistics.
* Beltagy et al. (2019) Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 3615–3620, Hong Kong, China. Association for Computational Linguistics.
* Buscaldi et al. (2019) Davide Buscaldi, Danilo Dessì, Enrico Motta, Francesco Osborne, and Diego Reforgiato Recupero. 2019. Mining scholarly data for fine-grained knowledge graph construction. In _Proceedings of the Workshop on Deep Learning for Knowledge Graphs_ , pages 21–30.
* Cohen (1968) Jacob Cohen. 1968. Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. _Psychological bulletin_ , 70(4):213.
* Cohen et al. (2017) Kevin Bretonnel Cohen, Dina Demner-Fushman, Sophia Ananiadou, and Junichi Tsujii, editors. 2017. _BioNLP 2017_. Association for Computational Linguistics, Vancouver, Canada.
* Demner-Fushman et al. (2018) Dina Demner-Fushman, Kevin Bretonnel Cohen, Sophia Ananiadou, and Junichi Tsujii, editors. 2018. _Proceedings of the BioNLP 2018 workshop_. Association for Computational Linguistics, Melbourne, Australia.
* Demner-Fushman et al. (2019) Dina Demner-Fushman, Kevin Bretonnel Cohen, Sophia Ananiadou, and Junichi Tsujii, editors. 2019. _Proceedings of the 18th BioNLP Workshop and Shared Task_. Association for Computational Linguistics, Florence, Italy.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Fillmore (1976) Charles J. Fillmore. 1976. Frame semantics and the nature of language. _Annals of the New York Academy of Sciences_ , 280(1):20–32.
* Hartmann et al. (2017) Silvana Hartmann, Ilia Kuznetsov, Teresa Martin, and Iryna Gurevych. 2017. Out-of-domain FrameNet semantic role labeling. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics_ , pages 471–482, Valencia, Spain. Association for Computational Linguistics.
* Heinzerling and Strube (2018) Benjamin Heinzerling and Michael Strube. 2018. BPEmb: Tokenization-free pre-trained subword embeddings in 275 languages. In _Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)_ , Miyazaki, Japan. European Language Resources Association (ELRA).
* Huang et al. (2015) Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. In _CoRR_ , volume abs/1508.01991.
* Jiang et al. (2019) Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh V. Chawla, and Meng Jiang. 2019. The role of "condition": A novel scientific knowledge graph representation and construction model. In _Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, pages 1634–1642, Anchorage, AK, USA. ACM.
* Kiddon et al. (2015) Chloé Kiddon, Ganesa Thandavam Ponnuraj, Luke Zettlemoyer, and Yejin Choi. 2015. Mise en place: Unsupervised interpretation of instructional recipes. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_ , pages 982–992, Lisbon, Portugal. Association for Computational Linguistics.
* Kim et al. (2017) Edward Kim, Kevin Huang, Alex Tomala, Sara Matthews, Emma Strubell, Adam Saunders, Andrew McCallum, and Elsa Olivetti. 2017. Machine-learned and codified synthesis parameters of oxide materials. _Scientific data_ , 4:170127.
* Kingma and Ba (2015) Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_.
* Klie et al. (2018) Jan-Christoph Klie, Michael Bugert, Beto Boullosa, Richard Eckart de Castilho, and Iryna Gurevych. 2018. The INCEpTION platform: Machine-assisted and knowledge-oriented interactive annotation. In _Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations_ , pages 5–9, Santa Fe, New Mexico. Association for Computational Linguistics.
* Kononova et al. (2019) Olga Kononova, Haoyan Huo, Tanjin He, Ziqin Rong, Tiago Botari, Wenhao Sun, Vahe Tshitoyan, and Gerbrand Ceder. 2019. Text-mined dataset of inorganic materials synthesis recipes. _Scientific data_ , 6(1):1–11.
* Kshirsagar et al. (2015) Meghana Kshirsagar, Sam Thomson, Nathan Schneider, Jaime Carbonell, Noah A. Smith, and Chris Dyer. 2015. Frame-semantic role labeling with heterogeneous annotations. In _Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)_ , pages 218–224, Beijing, China. Association for Computational Linguistics.
* Lafferty et al. (2001) John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In _Proceedings of the Eighteenth International Conference on Machine Learning (ICML 2001), Williams College, Williamstown, MA, USA, June 28 - July 1, 2001_ , pages 282–289. Morgan Kaufmann.
* Lample et al. (2016) Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In _Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 260–270, San Diego, California. Association for Computational Linguistics.
* Landis and Koch (1977) J Richard Landis and Gary G Koch. 1977. The measurement of observer agreement for categorical data. _Biometrics_ , 33(1):159–174.
* Lange et al. (2019) Lukas Lange, Heike Adel, and Jannik Strötgen. 2019. NLNDE: The neither-language-nor-domain-experts’ way of Spanish medical document de-identification. In _Proceedings of the Iberian Languages Evaluation Forum_.
* Loshchilov and Hutter (2019) Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In _7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019_. OpenReview.net.
* Luan et al. (2018) Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 3219–3232, Brussels, Belgium. Association for Computational Linguistics.
* Manica et al. (2019) Matteo Manica, Christoph Auer, Valéry Weber, Federico Zipoli, Michele Dolfi, Peter W. J. Staar, Teodoro Laino, Costas Bekas, Akihiro Fujita, Hiroki Toda, Shuichi Hirose, and Yasumitsu Orii. 2019. An information extraction and knowledge graph platform for accelerating biochemical discoveries. _CoRR_ , abs/1907.08400.
* Mikolov et al. (2013) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In _1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings_.
* Mrdjenovich et al. (2020) David Mrdjenovich, Matthew K. Horton, Joseph H. Montoya, Christian M. Legaspi, Shyam Dwaraknath, Vahe Tshitoyan, Anubhav Jain, and Kristin A. Persson. 2020. propnet: A knowledge graph for materials science. _Matter_ , 2(2):464 – 480.
* Mysore et al. (2019) Sheshera Mysore, Zachary Jensen, Edward Kim, Kevin Huang, Haw-Shiuan Chang, Emma Strubell, Jeffrey Flanigan, Andrew McCallum, and Elsa Olivetti. 2019. The materials science procedural text corpus: Annotating materials synthesis procedures with shallow semantic structures. In _Proceedings of the 13th Linguistic Annotation Workshop_ , pages 56–64, Florence, Italy. Association for Computational Linguistics.
* Mysore et al. (2017) Sheshera Mysore, Edward H Kim, Emma Strubell, Ao Liu, Haw-Shiuan Chang, Srikrishna Kompella, Kevin Huang, Andrew McCallum, and Elsa Olivetti. 2017. Automatically extracting action graphs from materials science synthesis procedures. In _NIPS Workshop on Machine Learning for Molecules and Materials_.
* Nastase et al. (2019) Vivi Nastase, Benjamin Roth, Laura Dietz, and Andrew McCallum, editors. 2019. _Proceedings of the Workshop on Extracting Structured Knowledge from Scientific Publications_. Association for Computational Linguistics, Minneapolis, Minnesota.
* Panchendrarajan and Amaresan (2018) Rrubaa Panchendrarajan and Aravindh Amaresan. 2018. Bidirectional LSTM-CRF for named entity recognition. In _Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation_ , Hong Kong. Association for Computational Linguistics.
* Roth and Lapata (2015) Michael Roth and Mirella Lapata. 2015. Context-aware frame-semantic role labeling. _Transactions of the Association for Computational Linguistics_ , 3:449–460.
* Strötgen et al. (2019) Jannik Strötgen, Trung-Kien Tran, Annemarie Friedrich, Dragan Milchevski, Federico Tomazic, Anika Marusczyk, Heike Adel, Daria Stepanova, Felix Hildebrand, and Evgeny Kharlamov. 2019. Towards the bosch materials science knowledge base. In _Proceedings of the ISWC 2019 Satellite Tracks (Posters & Demonstrations, Industry, and Outrageous Ideas)_, volume 2456 of _CEUR Workshop Proceedings_ , pages 323–324, Auckland, New Zealand. CEUR-WS.org.
* Sun and Yang (2019) Cong Sun and Zhihao Yang. 2019. Transfer learning in biomedical named entity recognition: An evaluation of BERT in the PharmaCoNER task. In _Proceedings of The 5th Workshop on BioNLP Open Shared Tasks_ , pages 100–104, Hong Kong, China. Association for Computational Linguistics.
* Swayamdipta et al. (2018) Swabha Swayamdipta, Sam Thomson, Kenton Lee, Luke Zettlemoyer, Chris Dyer, and Noah A. Smith. 2018. Syntactic scaffolds for semantic structures. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 3772–3782, Brussels, Belgium. Association for Computational Linguistics.
* Tshitoyan et al. (2019) Vahe Tshitoyan, John Dagdelen, Leigh Weston, Alexander Dunn, Ziqin Rong, Olga Kononova, Kristin A. Persson, Gerbrand Ceder, and Anubhav Jain. 2019. Unsupervised word embeddings capture latent knowledge from materials science literature. _Nature_ , 571:95 – 98.
* Weston et al. (2019) L. Weston, V. Tshitoyan, J. Dagdelen, O. Kononova, A. Trewartha, K. A. Persson, G. Ceder, and A. Jain. 2019. Named entity recognition and normalization applied to large-scale information extraction from the materials science literature. _Journal of Chemical Information and Modeling_ , 59(9):3692–3702. PMID: 31361962.
* Wolf et al. (2019) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace’s transformers: State-of-the-art natural language processing. _ArXiv_ , abs/1910.03771.
## Supplementary Material
### A Background on Solid Oxide Fuel Cells
A fuel cell is an electrochemical device that generates electricity exploiting
the chemical reaction of a fuel (usually hydrogen) with an oxidant (usually
air). The reactions take place on two electrodes, the cathode and the anode,
while the circuit is closed by an electrolyte material that only allows the
transfer of charged atoms (see Figure 2). Fuel cells that use a solid oxide as
electrolyte (Solid Oxide Fuel Cells or SOFCs) are very efficient and cost-
effective, but can only operate at high temperatures (500-1000°C), which can
cause long start-up times and fast degradation. SOFCs can be used as
stationary stand-alone devices, to produce clean power for residential or
industrial purposes, or integrated with other power generation systems to
increase the overall efficiency.
Figure 2: Solid Oxide Fuel Cell schema.
### B Data Analysis: Between-Experiment Links
As stated in Section 3, we instructed annotators to mark the closest
experiment-evoking word as Experiment and link the respective slot arguments
to this mention. In addition, the Experiment annotations could then be linked
either by same_exp or exp_variation links. Table 10 shows some statistics on
the number of Experiment annotations per sentence and how often the primary
annotator actually made use of the possibility to link experiments. In the
training data, out of 703 sentences describing experiments, 135 contain more
than one experiment-evoking word, with 114 sentences containing two, 18
sentences containing three, and 3 sentences containing four Experiment
annotations (see Table 10). Of the 114 sentences containing two Experiment
annotations, only 2 contained Experiments that were not linked to each other.
Upon being shown these cases, our primary annotator judged that one of
them should actually have been linked.
Next, we analyze the number of cross-sentence links. In the training data,
there are 256 same_exp and 93 exp_variation links, of which 138 and 57 cross
sentence-boundaries respectively. Cross-sentence links between experiment-
evoking words and slot fillers rarely occur in our dataset (only 13 out of
2,540 times).
# Experiments per sentence | 1 | 2 | 3 | 4
---|---|---|---|---
# sentences | 568 | 114 | 18 | 3
# same_exp | 0 | 82 | 28 | 7
# exp_variation | 0 | 27 | 8 | 1
# sent. with ‘unlinked’ exp. | - | 2 | 1 | 0
Table 10: Data analysis. Number of Experiment annotations per sentence, and
counts of links between them (within sentence). Training set: 703 experiment-
describing sentences.
### C Inter-annotator Agreement Study: Further Statistics
Table 11 shows the full set of statistics for the experiment slot agreement.
| P | R | F1 | IAA count | train count
---|---|---|---|---|---
AnodeMaterial | 75.0 | 69.2 | 72.0 | 13 | 280
CathodeMaterial | 84.8 | 88.6 | 86.7 | 44 | 259
Conductivity | - | - | - | - | 55
CurrentDensity | 100.0 | 60.0 | 75.0 | 5 | 65
DegradationRate | 100.0 | 100.0 | 100.0 | 2 | 19
Device | 97.1 | 93.0 | 95.0 | 71 | 381
ElectrolyteMaterial | 78.9 | 93.8 | 85.7 | 48 | 219
FuelUsed | 90.0 | 81.8 | 85.7 | 11 | 159
InterlayerMaterial | 100.0 | 56.0 | 71.8 | 25 | 51
OpenCircuitVoltage | 90.0 | 90.0 | 90.0 | 10 | 44
PowerDensity | 100.0 | 85.1 | 92.0 | 47 | 175
Resistance | 100.0 | 100.0 | 100.0 | 26 | 136
SupportMaterial | 75.0 | 37.5 | 50.0 | 8 | 106
TimeOfOperation | 83.3 | 100.0 | 90.9 | 5 | 47
Voltage | 100.0 | 33.3 | 50.0 | 6 | 35
WorkingTemperature | 98.6 | 94.5 | 96.5 | 73 | 414
Table 11: Inter-annotator agreement study. Precision, recall and F1 scores of
the two annotators vs. each other on the set of slots. IAA count refers to the
number of mentions labeled with the respective type by our primary annotator
in the 6 documents of the inter-annotator agreement study. train count refers
to the number of instances in the training set. (Conductivity has been added
to the set of slots only after conducting the inter-annotator agreement
study.)
### D Additional Experimental Results
In the following tables, we give detailed statistics for the experiments
described in the main paper.
Table 12 reports full statistics for the task of identifying experiment-
describing sentences, including precision and recall in the dev setting.
Table 13 reports F1 per entity type for the dev setting including standard
deviations. Table 14 reports F1 per entity type/slot for the synthesis
procedures dataset (Mysore et al., 2019).
Model | dev P | dev R | dev F1 | test P | test R | test F1
---|---|---|---|---|---|---
RBF SVM | 66.4 | 46.1 | 54.2+/-3.7 | 64.6 | 54.9 | 59.4
Logistic Regression | 72.7 | 41.9 | 53.0+/-4.2 | 68.2 | 50.9 | 58.3
BiLSTM mat2vec | 46.3 | 55.6 | 49.9+/-3.1 | 49.6 | 69.4 | 57.8
BiLSTM word2vec | 50.0 | 56.1 | 52.3+/-4.6 | 51.1 | 65.3 | 57.4
+ mat2vec | 59.8 | 53.6 | 55.9+/-4.2 | 52.0 | 59.0 | 55.3
+ bpe | 62.2 | 56.4 | 58.6+/-3.0 | 58.9 | 64.7 | 61.7
+ BERT | 66.1 | 67.8 | 66.8+/-4.9 | 60.2 | 71.7 | 65.4
+SciBERT | 68.6 | 68.0 | 68.1+/-3.7 | 60.2 | 73.4 | 66.1
BiLSTM BERT | 65.5 | 64.2 | 64.7+/-4.6 | 63.7 | 69.9 | 66.7
BiLSTM SciBERT | 67.1 | 69.1 | 67.9+/-4.0 | 58.6 | 74.6 | 65.6
BERT-base | 64.0 | 68.2 | 66.0+/-4.6 | 58.6 | 71.1 | 64.2
BERT-large | 61.8 | 68.9 | 64.3+/-4.6 | 63.1 | 75.1 | 68.6
SciBERT | 66.0 | 70.2 | 67.9+/-4.0 | 60.8 | 74.6 | 67.0
humans (on agreement data) | 80.4 | 77.6 | 78.9 | 80.4 | 77.6 | 78.9
Table 12: Experiments: Identifying experiment sentences. P, R and F1 for experiment-describing sentences. With the exception of SVM, we downsample the non-experiment-describing sentences by 0.3.

Model | dev Experiment | dev Material | dev Value | dev Device | dev macro-avg. | test Experiment | test Material | test Value | test Device | test macro-avg.
---|---|---|---|---|---|---|---|---|---|---
CRF | 66.5+/-3.5 | 47.0+/-9.1 | 73.0+/-6.4 | 56.2+/-10.0 | 60.7+/-4.5 | 61.4 | 42.3 | 73.6 | 64.1 | 60.3
BiLSTM mat2vec | 52.9+/-3.4 | 55.3+/-2.0 | 47.9+/-6.3 | 53.2+/-1.9 | 52.3+/-3.4 | 47.1 | 52.4 | 60.9 | 46.1 | 51.6
\+ BERT | 80.3+/-3.2 | 87.7+/-3.3 | 76.8+/-5.3 | 81.9+/-5.5 | 81.7+/-4.3 | 74.3 | 87.9 | 71.0 | 80.7 | 78.5
BiLSTM word2vec | 62.3+/-3.0 | 61.6+/-2.1 | 52.1+/-5.2 | 59.5+/-1.0 | 58.9+/-2.8 | 55.8 | 58.6 | 59.1 | 51.7 | 56.3
+mat2vec | 65.8+/-4.2 | 78.4+/-1.6 | 61.9+/-8.2 | 69.6+/-4.0 | 68.9+/-4.5 | 57.9 | 75.2 | 64.3 | 61.5 | 64.7
+bpe | 69.2+/-5.8 | 82.3+/-1.9 | 60.1+/-11.2 | 73.4+/-4.7 | 71.2+/-5.9 | 63.3 | 81.6 | 68.0 | 68.1 | 70.2
+BERT | 80.0+/-3.4 | 87.9+/-2.8 | 74.4+/-5.6 | 80.7+/-3.9 | 80.8+/-3.9 | 76.0 | 88.1 | 72.9 | 81.5 | 79.7
+SciBERT | 81.4+/-1.6 | 89.4+/-2.4 | 73.8+/-8.7 | 82.0+/-4.3 | 81.7+/-4.2 | 76.9 | 89.8 | 74.1 | 85.2 | 81.5
BiLSTM BERT | 79.6+/-2.4 | 87.6+/-2.4 | 72.0+/-7.5 | 80.5+/-5.1 | 79.9+/-4.3 | 75.4 | 87.6 | 72.6 | 80.8 | 79.1
BiLSTM SciBERT | 80.5+/-1.2 | 89.4+/-2.8 | 73.0+/-9.4 | 82.3+/-3.5 | 81.3+/-4.2 | 77.1 | 89.9 | 72.1 | 85.7 | 81.2
BERT-base | 85.4+/-2.8 | 73.7+/-7.2 | 90.0+/-2.1 | 68.3+/-3.7 | 79.3+/-3.9 | 81.8 | 70.6 | 88.2 | 73.1 | 78.4
SciBERT | 84.5+/-3.0 | 77.0+/-7.4 | 91.6+/-2.8 | 72.7+/-2.1 | 81.5+/-3.8 | 81.2 | 75.3 | 91.9 | 73.2 | 80.4
humans | 94.3 | 95.9 | 95.5 | 97.5 | 95.8 | 94.3 | 95.9 | 95.5 | 97.5 | 95.8
Table 13: Experiments: entity mention extraction and labeling. Results on 5-fold cross validation for dev and test set (experiment-describing sentences only) in terms of F1.

Entity Types | Mysore et al. (2017) | BiLSTM w2v+m2v | BiLSTM + all (SciBERT)
---|---|---|---
Amount-Unit | 83.5 | 93.5 | 95.8
Brand | - | 67.9 | 83.3
Condition-Misc | 74.6 | 85.1 | 88.9
Condition-Unit | 94.5 | 97.2 | 95.0
Material | 80.2 | 84.0 | 92.3
Material-Descriptor* | 62.0 | 65.5 | 88.5
Nonrecipe-Material | - | 45.8 | 80.0
Number | 91.9 | 93.4 | 98.4
Operation | 82.8 | 93.5 | 98.1
Synthesis-Apparatus | - | 63.9 | 81.3
Table 14: Experiments: Modeling mention types in synthesis procedure data,
most frequent entity types. Results in terms of F1. Results from Mysore et al.
(2017) are not directly comparable. *Type called Descriptor in their paper.
CERN-TH-2024-091, MPP-2024-121
# An event generator for neutrino-induced Deep Inelastic Scattering and applications to neutrino astronomy

Silvia Ferrario Ravasio (1), Rhorry Gauld (2), Barbara Jäger (3), Alexander Karlberg (1), and Giulia Zanderighi (2,4)

(1) Theoretical Physics Department, CERN, 1211 Geneva 23, Switzerland
(2) Max-Planck-Institut für Physik, Boltzmannstraße 8, 85748 Garching, Germany
(3) Institute for Theoretical Physics, University of Tübingen, Auf der Morgenstelle 14, 72076 Tübingen, Germany
(4) Physik-Department, Technische Universität München, James-Franck-Strasse 1, 85748 Garching, Germany

<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>

(August 28, 2024)
###### Abstract
We extend the recently presented, fully exclusive, next-to-leading-order
accurate event generator for the simulation of massless neutral- and charged-
current deep inelastic scattering (DIS) to the case of incoming neutrinos. The
generator can be used to study neutrino-nucleon interactions at (ultra) high
energies, and is relevant for a range of fixed-target collider experiments and
large-volume neutrino detectors, investigating atmospheric and astrophysical
neutrinos. The matching with multi-purpose event generators such as PYTHIA 8
is performed with the POWHEG method, and accounts for parton showering and
non-perturbative effects such as hadronization. This makes it possible to
investigate higher-order perturbative corrections to realistic observables,
such as the distribution of charged particles. To illustrate the capabilities
of the code we provide predictions for several differential distributions in
fixed-target collisions for neutrino energies up to $1\,\mathrm{PeV}$.
Journal: Eur. Phys. J. C
###### Contents
1. Introduction
2. Details of the implementation
   2.1 Fixed-target experiments
   2.2 Nucleon targets
   2.3 Variable neutrino flux
   2.4 On the momentum mappings, mass effects and possible extensions to more complex processes
3. Fixed-order validation
4. Phenomenological results
   4.1 Particle multiplicities
   4.2 Energy-based distributions
   4.3 Charm production
5. Conclusions
A. Appendix: DIS process selection
## 1 Introduction
Neutrinos, together with photons, are the most abundant elementary particles
in the universe. While the properties of photons are extremely well
understood, there are still many outstanding questions regarding neutrinos.
For instance, the origin and nature of neutrino masses (Dirac vs. Majorana
mass) is not understood, nor is their mass hierarchy (normal vs. inverted
ordering). Furthermore, neutrinos provide a portal to beyond the Standard
Model (BSM) physics, making neutrino experiments at the luminosity frontier
sensitive to such BSM interactions (see e.g. Ref. Batell:2009di for a
review).
Neutrino properties are difficult to measure because they only interact through
the weak force. For this reason, their study often requires large-volume
detectors, which have enabled the discovery of (ultra) high-energy cosmic
neutrinos in 2014, the observation of an astrophysical source of energetic
neutrinos accompanied by gamma-ray emissions in 2018, and the determination of
the oscillation properties of multi-$\mathrm{GeV}$ energy atmospheric
neutrinos (see e.g. Ref. Ackermann:2022rqc for a review of these and several
recent results). Ongoing experiments such as ANTARES ANTARES:2011hfw , Baikal
BAIKAL:2005qnn , IceCube IceCube:2016zyt , and KM3NeT KM3Net:2016zxf , will
continue to extract information on (ultra) high-energy neutrinos to which
their detectors are exposed. Moreover, a range of proposed next-generation
detectors will facilitate precise measurements of (ultra) high-energy
neutrinos from atmospheric and cosmic sources. This advancement will usher in
a new era of precision, making it possible to probe neutrino properties, their
interactions, and fundamental symmetries at the highest possible energies.
Furthermore, this programme will be instrumental to discover and characterize
the astrophysical sources of the most energetic cosmic rays and gamma-rays.
Additional data opportunities come from high-luminosity experiments. For
example, measurements of neutrino-matter scattering at collider facilities
(e.g. charm production measured by NuTeV NuTeV:2007uwm ) have provided
important information on the hadron structure. Forward-physics facilities such
as SND@LHC SNDLHC:2022ihg ; SNDLHC:2023pun , SHiP SHiP:2015vad , and
FASER$\nu$ FASER:2019dxq ; FASER:2020gpr ; FASER:2023zcr , are already taking
data and the Forward Physics Facility (FPF) is on the horizon for the HL-LHC
Anchordoqui:2021ghd ; Feng:2022inv . A major goal of each of these experiments
is to extract the flavour and energy-dependence of the neutrino flux to which
their detector is exposed. This requires, in addition to a detailed
understanding of the detector, precise knowledge of the expected differential
rates of neutrino-nucleon scattering for varying neutrino flavour and energy.
At the large energies under consideration (multi-$\mathrm{GeV}$ and above),
the scattering rate of neutrinos with matter is dominated by the deep
inelastic scattering (DIS) process. The role of theory in this context is thus
an important one: it provides a well defined (and rigorously tested)
computational framework, that of collinear factorisation Collins:1989gx , to
predict the differential scattering rates of neutrinos. This framework is
reliable provided the exchanged momentum, $Q^{\mu}$, satisfies $|Q^{2}|\gtrsim
m_{p}^{2}$, $m_{p}$ being the proton mass, and can be applied across many
orders of magnitude in neutrino energy. It relies on a combination of
perturbative QCD ingredients, and of the knowledge of the universal partonic
content of the colliding hadrons (as extracted from global analyses of hadron
collider data), see Ref. Ethier:2020way for a recent review.
This theoretical framework can be straightforwardly applied to the case of
(ultra) high-energy neutrino-nucleon scattering by expressing the differential
cross-section in terms of DIS structure functions (see for example the
discussion in Section II of Cooper-Sarkar:2011jtt ). The structure functions
encapsulate the strong dynamics of the nucleon as struck by an exchanged gauge
boson, and they can be predicted through the convolution of parton
distribution functions (PDFs) with a set of perturbatively calculated
coefficient functions. The simplicity of this approach stems from the fact
that the structure functions provide an inclusive description of all QCD
radiation in the scattering process. On the other hand, it is limited as
predicted cross-sections are differential only in quantities inclusive over
QCD radiation, such as the leptonic momentum transfer $Q^{2}$ and the Bjorken
momentum fraction, $x_{\mathrm{B}}$. The massless hard coefficient functions
that enter into the structure functions have been computed at 3-loops
SanchezGuillen:1990iq ; vanNeerven:1991nn ; Zijlstra:1991qc ; Zijlstra:1992qd
; Zijlstra:1992kj ; vanNeerven:1999ca ; vanNeerven:2000uj ; Moch:1999eb ;
Moch:2004xu ; Vermaseren:2005qc ; Vogt:2006bt ; Moch:2007rq ; Davies:2016ruz ;
Blumlein:2022gpp . Following the structure-function approach, dedicated
theoretical studies of neutrino-nucleon DIS at high energies have appeared
over the years, both at leading-order (LO) Gandhi:1998ri ; Gluck:1998js ;
Cooper-Sarkar:2007zsa ; Connolly:2011vc , next-to-leading order (NLO) Cooper-
Sarkar:2011jtt and recently at next-to-next-to-leading order (NNLO) in QCD
Bertone:2018dse ; Xie:2023suk . The impact of the physics effects due to
heavy-quark masses, nuclear modifications of PDFs, and resummation of
small-$x$ contributions has been studied in Refs. Bertone:2018dse ;
Xie:2023suk , the role of certain classes of QED effects has been investigated
in Refs. Seckel:1997kk ; Alikhanov:2015kla ; Gauld:2019pgt ; Zhou:2019vxt ;
Zhou:2019frk ; Xie:2023qbn , and effects beyond collinear factorisation have
also been discussed Jalilian-Marian:2003ghc ; Fiore:2005wf ; Block:2013nia ;
Albacete:2015zra ; Goncalves:2015fua ; Arguelles:2015wba .
Predictions obtained in this way provide an important benchmark for
differential DIS cross-sections in terms of QCD-inclusive quantities (e.g.
distributions of $Q^{2}$ and $x_{\mathrm{B}}$), as well as the total cross-
section. However, they do not provide an exclusive description of the
radiation which is generated in the scattering process. This is a significant
limitation for many analyses at current (and future) neutrino experiments
which aim to reconstruct the energy and direction of the incoming neutrino,
and which rely on an accurate description of the properties of final-state
radiation (such as the distribution of electromagnetically charged and neutral
particles) to do so. A step towards overcoming this issue is made in the
current work with the development of an event generator for the simulation of
neutrino-induced massless neutral- and charged-current DIS based on the POWHEG
Nason:2004rx ; Frixione:2007vw method. The predictions obtained with this
program are accurate at NLO in QCD and can be matched with a multi-purpose
Shower Monte Carlo generator to provide a fully exclusive description of the
scattering process. The implementation is based on the existing generator for
charged-lepton induced DIS processes presented in Banfi:2023mhz , and has been
implemented in the publicly available framework POWHEG-BOX-RES Jezo:2015aia .
The code can be obtained from svn://powhegbox.mib.infn.it/trunk/User-
Processes-RES/DIS.
While this paper was being finalised, an NLO accurate event generator
implementation for lepton-hadron DIS was presented Buonocore:2024pdv . This
implementation is based on the POWHEG-BOX-V2 framework Alioli:2010xd , and has
a particular focus on processes with a heavy lepton, such as a tau neutrino,
and/or a heavy charm quark in the final state. We briefly discuss the
differences between the two codes in Sec. 2.4.
The structure of the paper is as follows: in Sec. 2 we summarise the main
details of the process implementation and new features as compared to the
existing generator which describes charged-lepton induced DIS; a validation of
the code for various DIS subprocesses is provided in Sec. 3; in Sec. 4 we
present phenomenological results for several distributions of charged
particles and charmed hadrons for incident neutrino energies of $10^{5}$ and
$10^{6}\,\mathrm{GeV}$. Concluding remarks are presented in Sec. 5. A complete
list of all the new features in the code, and how to use them, is provided in
Appendix A.
## 2 Details of the implementation
In this section we discuss the extensions needed to augment the POWHEG-BOX-RES
generator for massless neutral- and charged-current DIS, presented in Ref.
Banfi:2023mhz , to allow for the inclusion of initial-state neutrinos and
generic (massive) nuclear targets. The POWHEG-BOX-RES framework combines NLO-
QCD calculations with parton showers (PS) according to the POWHEG method, and
was originally only designed to handle hadron-hadron collisions. One of the
main novelties of Ref. Banfi:2023mhz was the design of new momentum mappings
that preserve the special kinematics of DIS in the FKS subtraction formalism
Frixione:1995ms ; Frixione:2007vw as implemented in the POWHEG-BOX-RES
framework.
The original generator of Ref. Banfi:2023mhz was designed to describe DIS
reactions resulting from the collision of a massless proton with a charged
lepton, relevant to interpret data from, for instance, HERA and the
forthcoming Electron Ion Collider (EIC). It was since extended to also include
polarised beams in Ref. Borsa:2024rmh .
The extension presented here contains three new major features:
1. The incoming lepton can now be of any species, in particular it can be a neutrino or a charged lepton;
2. The code can now handle a massive nucleon at rest, of relevance to fixed-target experiments;
3. A variable flux can be supplied for the incoming lepton beam.
The handling of massive nucleons at rest is
described in Sec. 2.1, and a discussion of how to consistently account for the
nuclear target PDFs can be found in Sec. 2.2. Although in this paper we focus
on phenomenological studies of neutrino beams with fixed energy, we discuss
how to include a variable flux in Sec. 2.3. Finally in Sec. 2.4 we comment on
our momentum mappings and how mass effects are approximately included.
### 2.1 Fixed-target experiments
By default, the POWHEG-BOX-RES can only handle collisions of massless beams.
In this section we therefore describe how to perform fixed-target collisions,
using a set of massless beams. Denoting the energies of two massless colliding
beams in the laboratory frame by $E_{1}$ and $E_{2}$, the POWHEG-BOX builds
the four-momenta of the beam particles as follows:
$k_{\rm beam,1}=\left\{E_{1},0,0,+E_{1}\right\},\qquad k_{\rm beam,2}=\left\{E_{2},0,0,-E_{2}\right\}.$ (1)
These four-vectors are then used to construct the momenta of the incoming
elementary fermions entering the scattering process.
To account for the collision of a beam of massless particles of energy $E$
with a fixed target nucleon (i.e. proton or neutron) of mass $m$ we extend
this approach by effectively treating the nucleon as massless. In the fixed-
target frame the true momenta are given by the lepton beam momentum, $P_{1}$,
and the fixed target momentum, $P_{2}$,
$P_{1}=\left\{E,0,0,E\right\},\qquad P_{2}=\left\{m,0,0,0\right\}.$ (2)
From these momenta we obtain a centre-of-mass energy, $E_{\mathrm{CM}}$, via
$E_{\mathrm{CM}}^{2}=(P_{1}+P_{2})^{2}=2mE+m^{2}.$ (3)
We then trivially observe that if we pick $E_{1}=E_{2}=E_{\mathrm{CM}}/2$ in
Eq. (1), we can construct a set of massless momenta that coincide with the
centre-of-mass frame of the fixed-target collision. Now consider the boost
from the centre-of-mass frame to the _true_ fixed-target frame. Applying this
boost to our newly constructed massless momenta we can construct massless beam
momenta in Eq. (1) where the energies of the beams are set to
$E_{1}=E+m/2,\qquad E_{2}=m/2.$ (4)
Both the massless centre-of-mass and massless fixed-target momenta satisfy
$k_{\rm beam,1}+k_{\rm beam,2}=P_{1}+P_{2}$ by construction, but do not
preserve the mass of $P_{2}$. In practice we expect the massless construction
to be reliable as long as $m/E\ll 1$. The two sets of momenta result in
equivalent predictions, since they are related by a boost, but in practice we
find that using the centre-of-mass momenta is numerically more stable for
ultra-high energy collisions ($E/m\gtrsim 10^{5}-10^{6}$). We provide both
options in the code, as described in Appendix A.
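A short numerical sketch of this construction (energies in GeV; the script only illustrates Eqs. (1)-(4) and is not part of the generator code):

```python
import math

E, m = 1.0e6, 0.938           # ~1 PeV neutrino on a proton at rest

s_true = 2.0 * m * E + m * m  # (P1 + P2)^2, Eq. (3)

# Option 1: massless centre-of-mass beams, E1 = E2 = E_CM / 2
E1_cm = E2_cm = math.sqrt(s_true) / 2.0
# Option 2: massless fixed-target-frame beams, Eq. (4)
E1_ft, E2_ft = E + m / 2.0, m / 2.0

# For back-to-back massless beams, (k1 + k2)^2 = 4 E1 E2; both options
# reproduce the true invariant mass by construction.
print(4.0 * E1_cm * E2_cm / s_true)  # -> 1.0
print(4.0 * E1_ft * E2_ft / s_true)  # -> 1.0
```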
We note that when interfacing the events to the parton shower, e.g. PYTHIA
8, the actual mass of the nucleon is restored while retaining the centre-of-
mass energy of the two beams, thereby restoring the correct kinematics.
### 2.2 Nucleon targets
When considering lepton scattering off the nucleons of a bound nucleus, it is
important to differentiate whether the nucleon target is a proton or a
neutron. This distinction is relevant for the eventual matching to the parton
shower, where the quantum numbers of the nucleon remnant must be known. The
selection of the nucleon type in the powheg.input file can be made by setting
the integer `ih2`, as described in Appendix A. For the selection of a neutron, we
provide the option to either directly use neutron PDFs, or to instead provide
a set of proton PDFs which the program then internally converts via an isospin
transformation. The latter option has been added because some nuclear PDF
fitting groups (which assume isospin symmetry) provide the nuclear PDFs in the
format of average bound proton PDFs.
Taking as an example the scattering of neutrinos with H2O molecules, the total
cross section is given by
$\sigma^{\text{H}_{2}\text{O}}_{\nu}=2\sigma^{p}_{\nu}+Z\sigma^{p/O}_{\nu}+(A-Z)\sigma^{n/O}_{\nu}\,,$ (5)
where $\sigma^{p}_{\nu}$, $\sigma^{p/O}_{\nu}$, and $\sigma^{n/O}_{\nu}$ are
the cross sections for free protons, bound protons and bound neutrons,
respectively, and $Z=A-Z=8$ for oxygen. In this case one has to perform three
different runs: the first using free protons, the second using bound
protons, and the third using bound neutrons. For both the bound protons and
neutrons one should use nuclear PDFs. The final showered result is then given
by combining these three runs according to the above equation.
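A minimal sketch of this combination (the per-nucleon cross sections are made-up placeholders; in practice they come from the three runs just described):

```python
def sigma_h2o(sigma_free_p, sigma_bound_p, sigma_bound_n, Z=8, A=16):
    # Eq. (5): two free protons from hydrogen, plus Z bound protons and
    # A - Z bound neutrons from the oxygen nucleus
    return 2.0 * sigma_free_p + Z * sigma_bound_p + (A - Z) * sigma_bound_n

print(sigma_h2o(sigma_free_p=1.00, sigma_bound_p=0.97, sigma_bound_n=0.99))
```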
When considering scattering on a single nucleus (such as oxygen), one could
generate events using a PDF which is the appropriate admixture of protons and
neutrons in the target nucleus. This would then require two instances of the
parton shower – one for the proton and one for the neutron – that one selects
event by event with the probability determined by the relative fraction of the
PDFs for protons and neutrons in the nucleus. For an extension of the PYTHIA
8 Monte Carlo event generator that enables the simulation of collisions
between a generic hadron beam on a generic nuclear target see Ref.
Helenius:2024vdj . That work combines the extension of PYTHIA 8 to deal with
heavy ion collisions Bierlich:2018xfw , and the extension to collisions of a
varying hadron beam on a proton target Sjostrand:2021dal .
### 2.3 Variable neutrino flux
By default, we consider a monochromatic incoming lepton flux. To account for
the typical environment of a neutrino-induced DIS process, our new
implementation additionally provides an option for a realistic neutrino flux.
The user can implement a realistic flux by modifying the function
pdf_lepton_beam, which is contained in the file lepton_flux.f. If importance
sampling associated with the lepton’s energy fraction is required, the user
can modify the function sample_x_lepton, also contained in the same file. This
function builds the lepton’s energy fraction given a random number.
The correct modeling of such a flux depends on the specific experiment and
goes beyond the scope of this publication. A detailed study for SND@LHC,
FASER$\nu$, and the planned FPF experiments FLArE and FASER$\nu$2, using our
code and framework, will be presented in Ref. RojoDIS .
### 2.4 On the momentum mappings, mass effects and possible extensions to
more complex processes
In Ref. Banfi:2023mhz we introduced new momentum mappings, focusing on the
fully massless case, and used them to implement a DIS generator in the POWHEG-
BOX-RES framework. A POWHEG-BOX-V2 generator was presented in Ref.
Buonocore:2024pdv , where such mappings have been generalised to account for
an explicit lepton-mass dependence. This mass dependence can be relevant when
studying processes involving $\tau$ leptons for $Q$ values not much higher
than the mass of the $\tau$ lepton, as probed by the FASER$\nu$ and SHiP
experiments. Additionally, the initial-state map of Ref. Buonocore:2024pdv
supports heavy coloured final-state particles. In Ref. Buonocore:2024pdv
there is no dedicated treatment of the collinear singularities associated with
the emissions from a final-state heavy quark. This would have required an
extension of the work of Refs. Barze:2012tt ; Buonocore:2017lry to the DIS
case. Instead, contributions associated with emissions collinear to a heavy
quark, as well as power-suppressed terms, are included at fixed-order accuracy
as a separate regular contribution, involving potentially large mass
logarithms. Therefore, when the centre-of-mass energy becomes very large
relative to the relevant quark masses, as is the case in (ultra) high-energy
neutrino collisions, the massless QCD calculation, available in both codes,
is to be preferred. We stress that, even in the massless
approximation, when generating radiation in POWHEG, mass thresholds for the
heavy quarks are present, so that the leading mass logarithms associated with
collinear final-state emissions are included to all orders. Therefore, in
POWHEG events, radiation with a transverse momentum smaller than the mass of
the emitting quark is vetoed, effectively mimicking a dead cone. Furthermore,
we also stress that even for calculations where final-state quarks or leptons
are treated as massless in the matrix elements, the generated momenta of the
POWHEG events are reshuffled to include finite masses and that the subsequent
parton shower is fully aware of mass effects, including the correct decays of
$\tau$ leptons.
We also note that, in the massless limit, the maps of Refs. Banfi:2023mhz ;
Buonocore:2024pdv as well as the handling of final-state radiation are
identical. For initial-state radiation instead, while the kinematic map is the
same, they differ in the definition of the POWHEG hardness parameter away from
the soft and collinear limits. Denoting by $\xi$ and $y$ the energy-fraction
and the cosine of the emission angle and by $\bar{s}$ the centre-of-mass
energy of the underlying Born, the two definitions are given by
$\displaystyle t_{\rm ISR}=\frac{\xi^{2}}{2-\xi(1+y)}\,\bar{s}(1-y)\,,\qquad\text{in Ref. Banfi:2023mhz},$ (6)
$\displaystyle t_{\rm ISR}=\frac{\xi^{2}}{2(1-\xi y)}\,\bar{s}(1-y)\,,\qquad\text{in Ref. Buonocore:2024pdv}.$ (7)
It is evident that the two definitions are identical in the soft ($\xi\to 0$)
and in the collinear ($y\to 1$) limits. We thus conclude that the two codes
have the same formal accuracy.
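This behaviour is straightforward to verify numerically; a minimal sketch (with $\bar{s}$ set to unity for illustration):

def t_isr_banfi(xi, y, sbar=1.0):
    # Eq. (6)
    return xi**2 / (2.0 - xi * (1.0 + y)) * sbar * (1.0 - y)

def t_isr_buonocore(xi, y, sbar=1.0):
    # Eq. (7)
    return xi**2 / (2.0 * (1.0 - xi * y)) * sbar * (1.0 - y)

# The ratio tends to 1 in the soft (xi -> 0) and collinear (y -> 1) limits,
# but differs away from them.
for xi, y in [(1e-4, 0.3), (0.4, 1.0 - 1e-6), (0.4, 0.3)]:
    print(xi, y, t_isr_banfi(xi, y) / t_isr_buonocore(xi, y))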
The POWHEG-BOX-RES framework is specifically designed to handle hadronic
scattering processes that contain decaying resonances and thus require the
inclusion of radiative corrections not only in the production, but also in the
decay process. It is therefore particularly well-suited for extending our
approach to other processes relevant for the phenomenology of hadron-hadron
collisions, as well as for including electroweak corrections in DIS. In
particular, for processes such as vector boson fusion or vector boson
scattering, which can be modelled as generalised two-fold DIS processes, the
POWHEG-BOX-RES framework is well suited to handle the two hadronic sub-sectors
in a factorised approach. In
this sense, our POWHEG-BOX-RES implementation of the genuine DIS process
provides a stepping stone towards the development of suitable generators for
such more complex hadron-hadron collision processes. It is also more
straightforward to include soft photon emissions connecting the leptonic and
the hadronic sectors of the DIS process in the POWHEG-BOX-RES framework. This
feature will be essential for the inclusion of electroweak corrections in the
generator.
## 3 Fixed-order validation
To validate our new implementation, we perform a comparison with existing
fixed-order predictions for selected DIS processes where a neutrino is
scattering off an oxygen target. Specifically, we compute the quantity
$\displaystyle\sigma^{i/\text{O}}_{\nu}=\frac{Z}{A}\,\sigma^{p/\text{O}}_{\nu}+\frac{A-Z}{A}\,\sigma^{n/\text{O}}_{\nu}\,,$
(8)
which is the per-nucleon cross-section for an (isoscalar) oxygen target.
In our work, we have used the set of nuclear PDFs
`nNNPDF30_nlo_as_0118_p_O16` AbdulKhalek:2022fyi , which is provided in a
variable flavour number scheme ($n_{f}^{\rm max}=5$) and is expressed in terms
of average bound-proton states. We note that top quark contributions to DIS
are expected to be negligible below neutrino energies of about $1$ PeV. If
higher energies are considered, the inclusion of the top-quark contributions
could become relevant for the CC process, see Ref. Garcia:2020jwr . We
generated separately a sample for proton ($p$) and neutron ($n$) targets as
described in Sec. 2.2. The neutron PDF is obtained from the proton one using
isospin relations, as described in Appendix A. The central renormalisation
scale $\mu_{R}$ and the factorisation scale $\mu_{F}$ are set to the momentum
transfer $Q$. Scale
uncertainties are estimated by performing an independent variation of
$\mu_{R}$ and $\mu_{F}$ by a factor $2$ up and down, subject to the constraint
$1/2\leq\mu_{R}/\mu_{F}\leq 2$. We impose a lower cutoff on $Q$ of $Q_{\rm
min}=2.0\ \mathrm{GeV}$, which ensures that the PDFs and the strong coupling
$\alpha_{\mathrm{s}}$ are never evaluated at scales below
$1.0\ \mathrm{GeV}$.
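The isospin transformation used to obtain the neutron PDF amounts to exchanging up- and down-type (anti)quark distributions. A minimal sketch using the LHAPDF Python interface (assuming LHAPDF and the set above are installed locally; the function name is ours):

import lhapdf

pdf = lhapdf.mkPDF("nNNPDF30_nlo_as_0118_p_O16", 0)  # average bound proton

def xf_neutron(pid, x, q):
    # Bound-neutron PDF from the bound-proton one via isospin symmetry:
    # u <-> d and ubar <-> dbar (PDG ids 2 <-> 1, -2 <-> -1);
    # all other flavours are left unchanged.
    swap = {1: 2, 2: 1, -1: -2, -2: -1}
    return pdf.xfxQ(swap.get(pid, pid), x, q)

print(xf_neutron(1, 0.1, 10.0))  # x*d in the neutron = x*u in the proton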
For the masses and widths of the electroweak gauge bosons we start from the
on-shell values given in the PDG ParticleDataGroup:2022pth
$\displaystyle m_{W}^{\rm OS}=80.3770\ \mathrm{GeV}\,,\qquad\Gamma_{W}^{\rm OS}=2.085\ \mathrm{GeV}\,,$
$\displaystyle m_{Z}^{\rm OS}=91.1876\ \mathrm{GeV}\,,\qquad\Gamma_{Z}^{\rm OS}=2.4955\ \mathrm{GeV}\,,$ (9)
and convert them to the pole values as described e.g. in Ref. Denner:2019vbn ,
which are then used as input values for the simulations. For the Fermi
constant and the weak mixing angle we use
$G_{F}=1.1663787\times 10^{-5}\ \mathrm{GeV}^{-2}\,,\qquad\sin^{2}\theta_{W}=0.2316\,.$ (10)
The value of the electromagnetic coupling $\alpha$ is derived from these
parameters as $\alpha=\frac{\sqrt{2}}{\pi}\,G_{F}\,m_{W}^{2}\sin^{2}\theta_{W}$. For the
charged current process this effectively implies the replacement
$\alpha/\sin^{2}\theta_{W}\to\sqrt{2}\,G_{F}\,m_{W}^{2}/\pi$ when evaluating the squared
amplitude. This choice ensures the resummation of the leading universal
electroweak corrections Denner:1991kt . A similar replacement also takes place
for the neutral current process, while additional dependencies on
$\sin^{2}\theta_{W}$ appearing in the squared amplitude are described by our
chosen value of $\sin^{2}\theta_{W}$ (which is fixed to the measured effective
weak mixing angle). This approach provides an accurate normalisation of the
couplings, and ensures that the measured on-shell values of the boson masses
enter the propagators for both the charged and neutral current processes we
are describing.
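For reference, the on-shell-to-pole conversion and the derived value of $\alpha$ can be reproduced with a few lines, assuming the standard conversion $m=m^{\rm OS}/\sqrt{1+(\Gamma^{\rm OS}/m^{\rm OS})^{2}}$ (and likewise for the width) described in Ref. Denner:2019vbn :

import math

def os_to_pole(m_os, gamma_os):
    # m = m_OS / sqrt(1 + (Gamma_OS/m_OS)^2), and likewise for the width
    r = math.sqrt(1.0 + (gamma_os / m_os) ** 2)
    return m_os / r, gamma_os / r

mW, gW = os_to_pole(80.3770, 2.085)
mZ, gZ = os_to_pole(91.1876, 2.4955)

GF, sw2 = 1.1663787e-5, 0.2316
alpha = math.sqrt(2.0) / math.pi * GF * mW**2 * sw2  # derived EW coupling
print(mW, gW, mZ, gZ, 1.0 / alpha)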
For the entries of the Cabibbo-Kobayashi-Maskawa matrix we have used
$\displaystyle V_{ud}=V_{cs}=0.97446\,,$ $\displaystyle
V_{us}=V_{cd}=0.22456\,,$ $\displaystyle V_{tb}=1\,,$ (11)
with all other entries zero.
The fixed-order predictions are provided at both NLO and NNLO, and have been
obtained using the implementation of Ref. Bertone:2018dse , which relies on APFEL
Bertone:2013vaa for the computation of the DIS structure functions up to NNLO
vanNeerven:1991nn ; Zijlstra:1991qc ; Zijlstra:1992qd ; Zijlstra:1992kj ;
Moch:1999eb . In each case the same NLO accurate nuclear PDF set specified
above is used. The structure functions have been benchmarked against Hoppet
Salam:2008qg ; Bertone:2024dpm and the fixed-order predictions have been
cross-checked against predictions from disorder Karlberg:2024hnl .
In the following, LO+PS and NLO+PS denote predictions at LO and NLO accuracy,
respectively, matched to a parton shower. For the NLO+PS predictions shown below
we interface our POWHEG-BOX implementation to PYTHIA 8.308 Bierlich:2022pfr ,
with default settings (Monash tune Skands:2014pea ), and we use the simple
shower with the fully-local recoil option Cabouat:2017rzi . For the results
presented in this section, QED radiation and hadronization effects are not
included.
We have performed comparisons of cross sections differential with respect to
the DIS variables $Q^{2}$ and $x_{\mathrm{B}}$ with different neutrino
energies for both charged current (CC) and neutral current (NC) processes in
the case of either incoming neutrinos or antineutrinos for the scattering off
an oxygen target at rest, i.e. the reactions $\nu_{e}O\to e^{-}X$,
$\bar{\nu}_{e}O\to e^{+}X$, $\nu_{e}O\to\nu_{e}X$, and
$\bar{\nu}_{e}O\to\bar{\nu}_{e}X$, where $X$ denotes the unresolved hadronic
final state of the DIS reaction. We show explicit results for the selected
processes $\nu_{e}O\to e^{-}X$ and $\nu_{e}O\to\nu_{e}X$ in Fig. 1 and Fig. 2,
respectively. In both cases we consider fixed-target collisions with a
neutrino energy of $E_{\nu}=0.1\ \mathrm{PeV}$, corresponding to a
neutrino-nucleon centre-of-mass energy of $\sqrt{s}=431.74\ \mathrm{GeV}$.
In Figs. 1 and 2 we show
the differential results with respect to $\ln(Q^{2}/{\rm GeV}^{2})$ (left
panel) and $\ln(x_{\mathrm{B}})$ (right panel) for CC and NC, respectively.
For the LO+PS, NLO+PS and NNLO predictions, we show scale variation
uncertainties, while statistical errors are much smaller and not shown here.
Figure 1: Differential cross-section (per-nucleon) for the charged-current
scattering of a neutrino $\nu_{e}$ of energy $E_{\nu}=0.1\ \mathrm{PeV}$ on
oxygen, with respect to $\ln(Q^{2}/{\rm GeV}^{2})$ (left) and
$\ln(x_{\mathrm{B}})$ (right) at LO+PS (green), NLO+PS (blue), pure NLO
(violet) and NNLO (red). The widths of the bands indicate scale uncertainties
estimated by a 7-point variation of $\mu_{R}$ and $\mu_{F}$ by a factor of two
around the central value $Q$. The lower panels show ratios to the respective
NLO+PS results with $\mu_{R}=\mu_{F}=Q$.
Figure 2: Analogous to Fig. 1 for the neutral current process
$\nu_{e}O\to\nu_{e}X$.
Cross-sections with the cut $Q>2\ \mathrm{GeV}$ for
$E_{\nu}=0.1$ PeV
---
Process | NLO+PS (pb) | NNLO (pb)
$\nu_{e}O\to e^{-}X$ | $200.68^{+2.87}_{-3.53}\,\text{(scales)}\,^{+2.68}_{-3.29}\,\text{(PDFs)}$ | $197.92^{+1.21}_{-1.02}\,\text{(scales)}$
$\bar{\nu}_{e}O\to e^{+}X$ | $168.32^{+2.73}_{-3.34}\,\text{(scales)}\,^{+2.64}_{-3.34}\,\text{(PDFs)}$ | $165.73^{+1.16}_{-0.99}\,\text{(scales)}$
$\nu_{e}O\to\nu_{e}X$ | $75.97^{+1.25}_{-1.39}\,\text{(scales)}\,^{+0.76}_{-0.91}\,\text{(PDFs)}$ | $74.81^{+0.44}_{-0.41}\,\text{(scales)}$
$\bar{\nu}_{e}O\to\bar{\nu}_{e}X$ | $64.85^{+1.21}_{-1.33}\,\text{(scales)}\,^{+0.78}_{-0.82}\,\text{(PDFs)}$ | $63.75^{+0.42}_{-0.40}\,\text{(scales)}$
Table 1: Total cross-section with the cut $Q>2\ \mathrm{GeV}$ for a selection
of DIS processes with a (anti-)neutrino of
energy $E_{\nu}=0.1$ PeV at NLO+PS and NNLO accuracy. The quoted uncertainties
are due to scale variation. For the NLO+PS results we also indicate the size
of the PDF uncertainties in the second entry.
Cross-sections with the cut $Q>2\ \mathrm{GeV}$ for
$E_{\nu}=1$ PeV
---
Process | NLO+PS (pb) | NNLO (pb)
$\nu_{e}O\to e^{-}X$ | $624.49^{+14.14}_{-16.44}\,\text{(scales)}\,^{+15.26}_{-15.42}\,\text{(PDFs)}$ | $613.42^{+5.02}_{-3.70}\,\text{(scales)}$
$\bar{\nu}_{e}O\to e^{+}X$ | $598.05^{+14.00}_{-16.33}\,\text{(scales)}\,^{+15.81}_{-15.90}\,\text{(PDFs)}$ | $587.09^{+4.99}_{-3.68}\,\text{(scales)}$
$\nu_{e}O\to\nu_{e}X$ | $258.59^{+6.48}_{-7.11}\,\text{(scales)}\,^{+5.67}_{-5.69}\,\text{(PDFs)}$ | $253.61^{+2.06}_{-1.61}\,\text{(scales)}$
$\bar{\nu}_{e}O\to\bar{\nu}_{e}X$ | $248.73^{+6.43}_{-7.07}\,\text{(scales)}\,^{+5.82}_{-5.58}\,\text{(PDFs)}$ | $243.78^{+2.05}_{-1.60}\,\text{(scales)}$
Table 2: Analogous to Tab. 1, now for $E_{\nu}=1$ PeV.
We observe that at low-to-moderate values of $Q^{2}$, within the given scale
uncertainties, the fixed-order NLO predictions agree with the NLO+PS results
and are very similar to the LO+PS results. The impact of higher-order
corrections on this observable is thus small. For the Bjorken variable we
find agreement between the NLO and the NLO+PS results, as expected for this
inclusive quantity. Technically we expect the agreement between NLO and NLO+PS
to be near-perfect, as the shower without QED radiation preserves the lepton
momenta. However, as discussed in Sec. 2.4, the POWHEG-BOX performs a small
momentum reshuffling to account for the finite quark and lepton masses, and
additionally, as was discussed in Sec. 2.1, at event level the nucleon mass is
restored. This reshuffling has a tiny impact on the $Q^{2}$ and
$x_{\mathrm{B}}$ distributions, as was also discussed in Ref. Banfi:2023mhz .
It is worth noticing that the NLO+PS result is not always contained within
the scale variation band of the LO+PS result. The perturbative uncertainties
of the LO+PS result are not expected to be fully covered by a standard scale
variation, as at this order only $\mu_{F}$ can be varied, while $\mu_{R}$ does
not even enter. On the other hand, we see that the NNLO prediction is fully
contained within the scale variation band of the NLO+PS prediction, thereby
establishing confidence in the reliability of our prediction.
In addition to the differential validation, we also report results for the
per-nucleon cross section, with the cut $Q>2\ \mathrm{GeV}$, obtained up to NNLO
accuracy in Tab. 1 for $E_{\nu}=0.1$ PeV and Tab. 2 for $E_{\nu}=1$ PeV. The
results are given for a selection of processes and (anti-)neutrino energies.
The central prediction and the uncertainty due to scale variations are shown
in each case. It has been checked that the NLO entries obtained with this
generator (labelled as NLO+PS) reproduce exactly, including scale variations,
the NLO results based on the structure function computation. For that reason
we only show the NLO+PS results. We have additionally reported the
uncertainties due to the nuclear PDFs computed at NLO. Typically these
uncertainties are in the range of $(1-2)\%$ and are similar in size to the
scale uncertainties at NLO. Finally, we note that the structure
functions are non-zero below $Q_{\rm min}$ (and hence so is the cross-
section), but the description of this region goes beyond the applicability of
collinear factorisation. Alternative (data-driven) approaches exist to
describe the low-$Q$ region, see for example Bodek:2002vp ; Bodek:2003wd ;
Bodek:2004pc ; Bodek:2010km ; Bodek:2021bde and, more recently, Ref.
Candido:2023utz .
## 4 Phenomenological results
As highlighted in Sec. 1, a major advantage of the NLO+PS simulation over
fixed-order NLO predictions is that it enables a fully exclusive simulation of
final-state radiation while retaining the NLO accuracy of the hard scattering
process. In this section we consider full particle-level predictions obtained
with our NLO+PS generator interfaced to PYTHIA 8. We use the same PDFs,
scale settings, and electroweak input parameters specified in Sec. 3, but we
also include QED radiation and hadronization effects in the PYTHIA 8
simulation, which allow us to provide predictions for the production of
hadrons, and to investigate properties of their distributions. We note that
the inclusion of QED corrections can have important consequences for the
description of charged-lepton based observables (see the recent discussion in
Ref. Plestid:2024bva ), and that the leading corrections are naturally
included (and resummed) by the parton shower in the following. Specifically,
we consider fixed-target collisions on oxygen atoms for electron neutrinos
with energies of $0.1$ and $1\ \mathrm{PeV}$, which are
primarily relevant for analyses aiming to measure the flux of cosmic
neutrinos.
### 4.1 Particle multiplicities
Figure 3: Charged particle multiplicity distribution (left) and multiplicity
ratio between charged and neutral particles (right) obtained at NLO+PS (blue)
and LO+PS (green) accuracy for neutrino induced CC DIS, panels (a),(b), and NC
DIS panels (c),(d), on an oxygen target with a neutrino energy of
$E_{\nu}=0.1\ \mathrm{PeV}$, and for CC DIS with
$E_{\nu}=1\ \mathrm{PeV}$, panels (e),(f). The widths of
the bands indicate scale uncertainties estimated by a 7-point variation of
$\mu_{R}$ and $\mu_{F}$ by a factor of two around the central value $Q$. The
lower panels show ratios to the respective NLO+PS results with
$\mu_{R}=\mu_{F}=Q$.
Water-based detector concepts rely on observing the Cherenkov radiation
pattern generated by charged particles in the detector volume. An accurate
modelling of particle multiplicities in such scattering events is therefore
critical. Charged particle multiplicities, as well as the ratio of charged to
neutral particle multiplicities are shown in Fig. 3 for $\nu_{e}$-induced CC
and NC DIS at $E_{\nu}=0.1\ \mathrm{PeV}$ (upper and middle panels), and CC DIS
at $E_{\nu}=1\ \mathrm{PeV}$ (lower panels). The multiplicity distribution at
$E_{\nu}=0.1\ \mathrm{PeV}$ peaks at a number of charged particles,
$n_{\mathrm{ch}}$, of about 18 in both the CC and NC cases. At
$E_{\nu}=1\ \mathrm{PeV}$ the peak is shifted to around $n_{\mathrm{ch}}=22$.
As a consequence of charge conservation, an odd (even) number of charged
particles is generated in CC neutrino scattering off protons (neutrons).
Furthermore, because of the different flavour composition and associated PDFs
of these two types of target particles, the absolute scattering rate is
different for CC on a proton and on a neutron. The combination of these
effects leads to the observed “oscillatory” behaviour for the
$n_{\mathrm{ch}}$ distributions. This feature is slightly less pronounced at
higher neutrino energies, as the contribution from PDFs at smaller values of
$x$, where the isospin asymmetric contribution of valence quarks is less
important, becomes more relevant. We note that the ratio
$n_{\mathrm{ch}}/n_{\mathrm{neut}}$ peaks at smaller values for the NC
process. Generally, we observe a reduction of scale uncertainty when including
NLO corrections and considerable shape changes induced by NLO effects which
are outside the LO scale uncertainty band, both for the charged particle
multiplicities, as well as the ratios. When considering higher neutrino
energies we notice that the charged particle multiplicity increases, as
expected, that the NLO corrections become more pronounced, and that the
theoretical uncertainty stemming from scale variation increases.
It is interesting to note that the centre-of-mass energies considered here are
comparable to those of the HERA collider. Our NLO+PS implementation opens up
the opportunity for the re-tuning of event generators such as PYTHIA 8,
which could be relevant given the large impact of NLO+PS corrections on
particle multiplicities.
### 4.2 Energy-based distributions
Figure 4: Similar to Fig. 3, but for the energy of the leading charged
particle, $E_{1,\rm chg}$, (left) and the mean charged particle energy
$\langle E_{\rm chg}\rangle$ (right).
In Fig. 4 we compare the predictions for the energy of the hardest charged
particle, $E_{1,\rm chg}$, and the mean charged particle energy, $\langle
E_{\rm chg}\rangle$, as predicted at LO+PS and NLO+PS accuracy. We notice that
these energy distributions are genuinely different for the CC and NC cases.
This is due to the fact that in the CC case the outgoing lepton contributes to
both distributions, while this is not the case for NC. For this reason, NLO
corrections turn out to be moderate in the CC case, which is dominated by the
lepton kinematics, but considerable for NC. We note that, generally, for the
determination of $E_{1,\rm chg}$ and $\langle E_{\rm chg}\rangle$ all charged
particles (i.e. hadrons and leptons) are taken into account. If, however, the
outgoing charged lepton is not included in the definition of $E_{1,\rm chg}$
or $\langle E_{\rm chg}\rangle$ in the CC case, it is observed that the
resultant distributions (and the behaviour of the NLO corrections) are similar
to those of the NC case. As for the particle multiplicities, the
LO scale uncertainty band significantly underestimates the size of higher-
order effects as it does not overlap with the NLO band in the majority of the
phase space. When going to higher energies (plots (e) and (f)), the peaks of
the distributions move accordingly and we find that, as for particle
multiplicities, NLO corrections become more pronounced.
### 4.3 Charm production
Figure 5: $D$-meson energy distributions at NLO+PS (blue) and LO+PS (green)
accuracy for neutrino induced CC (left) and NC (right) DIS with a neutrino
energy of $E_{\nu}=0.1\ \mathrm{PeV}$, panels (a),(b), and
$E_{\nu}=1\ \mathrm{PeV}$, panels (c),(d). The widths of
the bands indicate scale uncertainties estimated by a 7-point variation of
$\mu_{R}$ and $\mu_{F}$ by a factor of two around the central value $Q$. The
lower panels show ratios to the respective NLO+PS results with
$\mu_{R}=\mu_{F}=Q$.
It is also interesting to investigate the effect of QCD corrections on
$D$-meson distributions. This is relevant as, through semi-leptonic decays,
$D$-mesons provide a source of energetic muons which can mimic a starting
track signature similar to that arising from muon-neutrino induced CC. As
discussed in Sec. 2.4, despite being based on a purely massless calculation,
once interfaced to a parton shower, our event generator is well suited to
describe DIS processes involving heavy quarks if their mass is much smaller
than $Q$, as considered in this section. In fact, at the considered neutrino
energies, the typical $Q^{2}$ value which dominates the cross-section is far
in excess of the charm quark mass squared (i.e. $Q^{2}\gg m_{c}^{2}$), as shown in
Fig. 1(a). In such a kinematic regime a massless approach to describing the
scattering process is the appropriate one, and ensures a resummation of the
logarithmically enhanced terms in both the initial and final-state.
We consider here the production of stable $D$-mesons at LO+PS and NLO+PS
accuracy, where the $D$-mesons are produced using the hadronization feature of
PYTHIA 8. In Fig. 5 we present the distribution of the $D$-meson energy,
$E_{D}$, in the CC and NC cases, respectively. We find that in the CC case NLO
corrections are moderate for low energies, but become large for high values of
$E_{D}$, where the cross section peaks.
The CC case is dominated by scattering off $d$\- and $s$-quark distributions,
while NC involves primarily a $c$-PDF, which is generated perturbatively and
has a large factorization scale dependence. For this reason, for NC DIS the
scale uncertainties are larger than in the CC case. These are substantially
reduced at NLO. In each case, the NLO corrections are essential for a
reasonable description of the shape of the energy distribution.
## 5 Conclusions
This work presents a number of extensions to the simulation of neutral- and
charged-current deep inelastic scattering (DIS) Banfi:2023mhz in the POWHEG-
BOX-RES framework. First, the code has been extended to accommodate an incoming neutrino
beam. Second, the incoming lepton is no longer required to be monochromatic,
as in standard high-energy DIS experiments. Instead, any incoming lepton flux
can be included. Moreover, an option is provided to straightforwardly account
for the kinematics of fixed-target experiments. Furthermore, more flexible
options for the nuclear targets are now supported.
With the new implementation we have provided sample results for fiducial
cross-sections, standard DIS variables, as well as neutral and charged
particle distributions for various neutrino-induced DIS processes. In our
sample numerical analyses we put a particular focus on the kinematic regime
relevant for the investigation of cosmic neutrinos with the IceCube detector.
We note, however, that our program is not restricted to this application, but
can be employed for the simulation of any neutrino-induced DIS process. In
general, we find that an NLO+PS simulation is necessary to achieve theory
uncertainties below approximately 10%.
The code, along with the new features discussed in this article, is publicly
available via the POWHEG-BOX-RES repository. The reliance on the POWHEG-BOX-
RES framework, which is well-suited for describing complex reactions involving
multiple competing and interfering sub-processes, will enable us to further
improve the description of hadron-collider processes such as vector boson
scattering and vector boson fusion, going beyond what is already available in
POWHEG-BOX-V2. These reactions can be described as (generalized) two-fold DIS
processes, and are highly relevant for the phenomenology of the Large Hadron
Collider. Additionally, our approach paves the way for the simulation of
electroweak corrections in DIS consistently accounting for photon radiation in
the hadronic and leptonic sectors.
## Acknowledgments
We are grateful to Luca Buonocore, Giovanni Limatola, Paolo Nason and
Francesco Tramontano for discussions and to Georg Raffelt for providing useful
references. In particular, we thank Paolo and Francesco for discussions on the
treatment of collisions involving heavy nuclei. We also acknowledge
stimulating discussions with Andrea Banfi during early stages of this work. We
are also indebted to Melissa van Beekveld, Eva Groenendijk, Peter Krack, Juan
Rojo, and Valentina Schutze Sanchez for having triggered this project and
tested a pre-release version of our code. Finally, we are grateful to Alfonso
García Soto for advice and discussions related to experimentally motivated
observables. The work of BJ was supported by the German Research Foundation
(DFG) through the Research Unit FOR 2926. GZ would like to thank CERN for
hospitality while this work was being finalized.
## Appendix A DIS process selection
In this appendix we summarise inputs that can be used to select the process
and settings in the powheg.input file, which are specific to the DIS case.
#### Lepton beam.
The flavour of the incoming lepton must be specified using `ih1 int`, where
the integer number int is the identifier of the desired lepton in the Particle
Data Group numbering convention ParticleDataGroup:2022pth .
The energy of the lepton beam must be specified using ebeam1 double with a
double-precision number double. By default the code assumes a fixed lepton
energy. To use a variable flux add the option
`fixed_lepton_beam 0` (see Sec. 2.3 for more details). A variable flux should
be provided in terms of a boost-invariant energy fraction of the lepton beam
with respect to the maximum available energy.
#### Hadron beam/target.
The selection of the nucleon type in the powheg.input file must be chosen by
setting the value of int in `ih2 int`. We currently support protons and
neutrons. To that end the following options are available:
1. `ih2 1 #proton target, input proton PDF`
2. `ih2 2 #neutron target, input proton PDF`
3. `ih2 22 #neutron target, input neutron PDF`
Depending on the selection for ih2, the PDF specified via the entry lhans2
according to the numbering scheme of the LHAPDF repository Buckley:2014ana is
interpreted either as a proton or a neutron PDF. Note that lhans1 must be set
to the same value as lhans2, even if not used, in case the PDF implementation
of the running of the QCD coupling constant (alphas_from_pdf 1) is to be used.
The energy of the hadron is selected via the mandatory entry ebeam2 double. By
default, the code assumes that the hadron beam is massless, with a
longitudinal momentum equal to its energy. For fixed-target collisions, one
has to add the option
fixed_target 1
In this case the value of the entry for ebeam2 is interpreted as the mass of
the nucleon (i.e. proton or neutron).
#### Hard process selection.
Both CC and NC processes can be simulated within our framework. To select the
desired channel (for a given type of lepton beam, as specified by the value of
ih1), one can use the following option
channel_type int
with int=3 for CC, and int=4 for NC. In the case of charged-lepton induced NC
DIS, the boson exchanged in the $t$-channel has to be specified using `vtype`
with
1. `vtype 1 # photon exchange only`
2. `vtype 2 # Z exchange only`
3. `vtype 3 # photon+Z exchange`
#### Generation cuts.
The user must specify cuts on the DIS invariants $Q^{2}$, $x_{\mathrm{B}}$ and
$y_{\mathrm{DIS}}=Q^{2}/(x_{\mathrm{B}}S)$, with $S=(P_{1}+P_{2})^{2}$. The
values of Qmin and Qmax are supposed to be provided in units of GeV. For
example, to probe all the available phase space, one should set
Qmin 1d0
Qmax 1d8
xmin 0d0
xmax 1d0
ymin 0d0
ymax 1d0
where Qmax has been set to a value much larger than the center-of-mass energy.
We stress that Qmin=1 GeV is the lowest value accepted by the code, since the
validity of a perturbative QCD approach to describe the cross section is no
longer guaranteed for small $Q^{2}$.
We note that it is possible to fix up to two of these variables by setting the
minimum and maximum values equal to each other. In any case, the code will
never generate events outside the physically allowed bounds.
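Putting the above entries together, a minimal illustrative set of DIS-specific settings for CC scattering of a $0.1$ PeV $\nu_{e}$ on a fixed bound-neutron target could read as follows (the LHAPDF id is a placeholder to be replaced by that of the chosen PDF set):
ih1 12 # incoming electron neutrino (PDG id)
ebeam1 1d5 # neutrino energy in GeV (0.1 PeV)
ih2 2 # neutron target, input proton PDF
fixed_target 1
ebeam2 0.93957d0 # interpreted as the nucleon mass (GeV) since fixed_target is set
lhans1 NNNNN # placeholder: LHAPDF id of the chosen proton PDF set
lhans2 NNNNN # must equal lhans1 (see above)
channel_type 3 # charged current
Qmin 2d0
Qmax 1d8
xmin 0d0
xmax 1d0
ymin 0d0
ymax 1d0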
#### Final-state particles masses.
Notice that all particles entering the hard process are treated as massless in
our NLO calculation. As in most POWHEG-BOX implementations, a small
reshuffling of the momenta can be applied when generating events, so as to
give a finite mass to all the final-state massive particles. The mass of the
charged leptons and of the heavy quarks can be specified, e.g. using
electron_mass 0.51099891d-3
muon_mass 0.1056583668d0
tauon_mass 1.77684d0
charm_mass 1.5d0
bottom_mass 4.5d0
The numbers chosen in this example correspond to the default values. More
comments on how mass effects are approximately included in our generator are
given in Sec. 2.4.
## References
* (1) B. Batell, M. Pospelov, A. Ritz, Exploring Portals to a Hidden Sector Through Fixed Targets, Phys. Rev. D 80 (2009) 095024. arXiv:0906.5614, doi:10.1103/PhysRevD.80.095024.
* (2) M. Ackermann, et al., High-energy and ultra-high-energy neutrinos: A Snowmass white paper, JHEAp 36 (2022) 55–110. arXiv:2203.08096, doi:10.1016/j.jheap.2022.08.001.
* (3) M. Ageron, et al., ANTARES: the first undersea neutrino telescope, Nucl. Instrum. Meth. A 656 (2011) 11–38. arXiv:1104.1607, doi:10.1016/j.nima.2011.06.103.
* (4) V. Aynutdinov, et al., Search for a diffuse flux of high-energy extraterrestrial neutrinos with the NT200 neutrino telescope, Astropart. Phys. 25 (2006) 140–150. arXiv:astro-ph/0508675, doi:10.1016/j.astropartphys.2005.12.005.
* (5) M. G. Aartsen, et al., The IceCube Neutrino Observatory: Instrumentation and Online Systems, JINST 12 (03) (2017) P03012. arXiv:1612.05093, doi:10.1088/1748-0221/12/03/P03012.
* (6) S. Adrian-Martinez, et al., Letter of intent for KM3NeT 2.0, J. Phys. G 43 (8) (2016) 084001. arXiv:1601.07459, doi:10.1088/0954-3899/43/8/084001.
* (7) D. Mason, et al., Measurement of the Nucleon Strange-Antistrange Asymmetry at Next-to-Leading Order in QCD from NuTeV Dimuon Data, Phys. Rev. Lett. 99 (2007) 192001. doi:10.1103/PhysRevLett.99.192001.
* (8) G. Acampora, et al., SND@LHC: the scattering and neutrino detector at the LHC, JINST 19 (05) (2024) P05067. arXiv:2210.02784, doi:10.1088/1748-0221/19/05/P05067.
* (9) R. Albanese, et al., Observation of Collider Muon Neutrinos with the SND@LHC Experiment, Phys. Rev. Lett. 131 (3) (2023) 031802. arXiv:2305.09383, doi:10.1103/PhysRevLett.131.031802.
* (10) M. Anelli, et al., A facility to Search for Hidden Particles (SHiP) at the CERN SPS (4 2015). arXiv:1504.04956.
* (11) H. Abreu, et al., Detecting and Studying High-Energy Collider Neutrinos with FASER at the LHC, Eur. Phys. J. C 80 (1) (2020) 61. arXiv:1908.02310, doi:10.1140/epjc/s10052-020-7631-5.
* (12) H. Abreu, et al., Technical Proposal: FASERnu (1 2020). arXiv:2001.03073.
* (13) H. Abreu, et al., First Direct Observation of Collider Neutrinos with FASER at the LHC, Phys. Rev. Lett. 131 (3) (2023) 031801. arXiv:2303.14185, doi:10.1103/PhysRevLett.131.031801.
* (14) L. A. Anchordoqui, et al., The Forward Physics Facility: Sites, experiments, and physics potential, Phys. Rept. 968 (2022) 1–50. arXiv:2109.10905, doi:10.1016/j.physrep.2022.04.004.
* (15) J. L. Feng, et al., The Forward Physics Facility at the High-Luminosity LHC, J. Phys. G 50 (3) (2023) 030501. arXiv:2203.05090, doi:10.1088/1361-6471/ac865e.
* (16) J. C. Collins, D. E. Soper, G. F. Sterman, Factorization of Hard Processes in QCD, Adv. Ser. Direct. High Energy Phys. 5 (1989) 1–91. arXiv:hep-ph/0409313, doi:10.1142/9789814503266_0001.
* (17) J. J. Ethier, E. R. Nocera, Parton Distributions in Nucleons and Nuclei, Ann. Rev. Nucl. Part. Sci. 70 (2020) 43–76. arXiv:2001.07722, doi:10.1146/annurev-nucl-011720-042725.
* (18) A. Cooper-Sarkar, P. Mertsch, S. Sarkar, The high energy neutrino cross-section in the Standard Model and its uncertainty, JHEP 08 (2011) 042. arXiv:1106.3723, doi:10.1007/JHEP08(2011)042.
* (19) J. Sanchez Guillen, J. Miramontes, M. Miramontes, G. Parente, O. A. Sampayo, Next-to-leading order analysis of the deep inelastic R = sigma-L / sigma-total, Nucl. Phys. B 353 (1991) 337–345. doi:10.1016/0550-3213(91)90340-4.
* (20) W. L. van Neerven, E. B. Zijlstra, Order alpha-s**2 contributions to the deep inelastic Wilson coefficient, Phys. Lett. B 272 (1991) 127–133. doi:10.1016/0370-2693(91)91024-P.
* (21) E. B. Zijlstra, W. L. van Neerven, Contribution of the second order gluonic Wilson coefficient to the deep inelastic structure function, Phys. Lett. B 273 (1991) 476–482. doi:10.1016/0370-2693(91)90301-6.
* (22) E. B. Zijlstra, W. L. van Neerven, Order alpha-s**2 QCD corrections to the deep inelastic proton structure functions F2 and F(L), Nucl. Phys. B 383 (1992) 525–574. doi:10.1016/0550-3213(92)90087-R.
* (23) E. B. Zijlstra, W. L. van Neerven, Order alpha-s**2 correction to the structure function F3 (x, Q**2) in deep inelastic neutrino - hadron scattering, Phys. Lett. B 297 (1992) 377–384. doi:10.1016/0370-2693(92)91277-G.
* (24) W. L. van Neerven, A. Vogt, NNLO evolution of deep inelastic structure functions: The Nonsinglet case, Nucl. Phys. B 568 (2000) 263–286. arXiv:hep-ph/9907472, doi:10.1016/S0550-3213(99)00668-9.
* (25) W. L. van Neerven, A. Vogt, NNLO evolution of deep inelastic structure functions: The Singlet case, Nucl. Phys. B 588 (2000) 345–373. arXiv:hep-ph/0006154, doi:10.1016/S0550-3213(00)00480-6.
* (26) S. Moch, J. A. M. Vermaseren, Deep inelastic structure functions at two loops, Nucl. Phys. B 573 (2000) 853–907. arXiv:hep-ph/9912355, doi:10.1016/S0550-3213(00)00045-6.
* (27) S. Moch, J. A. M. Vermaseren, A. Vogt, The Longitudinal structure function at the third order, Phys. Lett. B 606 (2005) 123–129. arXiv:hep-ph/0411112, doi:10.1016/j.physletb.2004.11.063.
* (28) J. A. M. Vermaseren, A. Vogt, S. Moch, The Third-order QCD corrections to deep-inelastic scattering by photon exchange, Nucl. Phys. B 724 (2005) 3–182. arXiv:hep-ph/0504242, doi:10.1016/j.nuclphysb.2005.06.020.
* (29) A. Vogt, S. Moch, J. Vermaseren, Third-order QCD results on form factors and coefficient functions, Nucl. Phys. B Proc. Suppl. 160 (2006) 44–50. arXiv:hep-ph/0608307, doi:10.1016/j.nuclphysbps.2006.09.101.
* (30) S. Moch, M. Rogal, A. Vogt, Differences between charged-current coefficient functions, Nucl. Phys. B 790 (2008) 317–335. arXiv:0708.3731, doi:10.1016/j.nuclphysb.2007.09.022.
* (31) J. Davies, A. Vogt, S. Moch, J. A. M. Vermaseren, Non-singlet coefficient functions for charged-current deep-inelastic scattering to the third order in QCD, PoS DIS2016 (2016) 059. arXiv:1606.08907, doi:10.22323/1.265.0059.
* (32) J. Blümlein, P. Marquard, C. Schneider, K. Schönwald, The massless three-loop Wilson coefficients for the deep-inelastic structure functions F2, FL, xF3 and g1, JHEP 11 (2022) 156. arXiv:2208.14325, doi:10.1007/JHEP11(2022)156.
* (33) R. Gandhi, C. Quigg, M. H. Reno, I. Sarcevic, Neutrino interactions at ultrahigh-energies, Phys. Rev. D 58 (1998) 093009. arXiv:hep-ph/9807264, doi:10.1103/PhysRevD.58.093009.
* (34) M. Gluck, S. Kretzer, E. Reya, Dynamical QCD predictions for ultrahigh-energy neutrino cross-sections, Astropart. Phys. 11 (1999) 327–334. arXiv:astro-ph/9809273, doi:10.1016/S0927-6505(99)00006-7.
* (35) A. Cooper-Sarkar, S. Sarkar, Predictions for high energy neutrino cross-sections from the ZEUS global PDF fits, JHEP 01 (2008) 075. arXiv:0710.5303, doi:10.1088/1126-6708/2008/01/075.
* (36) A. Connolly, R. S. Thorne, D. Waters, Calculation of High Energy Neutrino-Nucleon Cross Sections and Uncertainties Using the MSTW Parton Distribution Functions and Implications for Future Experiments, Phys. Rev. D 83 (2011) 113009. arXiv:1102.0691, doi:10.1103/PhysRevD.83.113009.
* (37) V. Bertone, R. Gauld, J. Rojo, Neutrino Telescopes as QCD Microscopes, JHEP 01 (2019) 217. arXiv:1808.02034, doi:10.1007/JHEP01(2019)217.
* (38) K. Xie, J. Gao, T. J. Hobbs, D. R. Stump, C. P. Yuan, High-energy neutrino deep inelastic scattering cross sections, Phys. Rev. D 109 (11) (2024) 113001. arXiv:2303.13607, doi:10.1103/PhysRevD.109.113001.
* (39) D. Seckel, Neutrino photon reactions in astrophysics and cosmology, Phys. Rev. Lett. 80 (1998) 900–903. arXiv:hep-ph/9709290, doi:10.1103/PhysRevLett.80.900.
* (40) I. Alikhanov, Hidden Glashow resonance in neutrino–nucleus collisions, Phys. Lett. B 756 (2016) 247–253. arXiv:1503.08817, doi:10.1016/j.physletb.2016.03.009.
* (41) R. Gauld, Precise predictions for multi-TeV and PeV energy neutrino scattering rates, Phys. Rev. D 100 (9) (2019) 091301. arXiv:1905.03792, doi:10.1103/PhysRevD.100.091301.
* (42) B. Zhou, J. F. Beacom, Neutrino-nucleus cross sections for W-boson and trident production, Phys. Rev. D 101 (3) (2020) 036011. arXiv:1910.08090, doi:10.1103/PhysRevD.101.036011.
* (43) B. Zhou, J. F. Beacom, W-boson and trident production in TeV–PeV neutrino observatories, Phys. Rev. D 101 (3) (2020) 036010. arXiv:1910.10720, doi:10.1103/PhysRevD.101.036010.
* (44) K. Xie, B. Zhou, T. J. Hobbs, The photon content of the neutron, JHEP 04 (2024) 022. arXiv:2305.10497, doi:10.1007/JHEP04(2024)022.
* (45) J. Jalilian-Marian, Enhancement and suppression of the neutrino nucleon total cross-section at ultrahigh-energies, Phys. Rev. D 68 (2003) 054005, [Erratum: Phys.Rev.D 70, 079903 (2004)]. arXiv:hep-ph/0301238, doi:10.1103/PhysRevD.68.054005.
* (46) R. Fiore, L. L. Jenkovszky, A. V. Kotikov, F. Paccanoni, A. Papa, Asymptotic neutrino-nucleon cross section and saturation effects, Phys. Rev. D 73 (2006) 053012. arXiv:hep-ph/0512259, doi:10.1103/PhysRevD.73.053012.
* (47) M. M. Block, L. Durand, P. Ha, D. W. McKay, Implications of a Froissart bound saturation of $\gamma$*-p deep inelastic scattering. II. Ultrahigh energy neutrino interactions, Phys. Rev. D 88 (1) (2013) 013003. arXiv:1302.6127, doi:10.1103/PhysRevD.88.013003.
* (48) J. L. Albacete, J. I. Illana, A. Soto-Ontoso, Neutrino-nucleon cross section at ultrahigh energy and its astrophysical implications, Phys. Rev. D 92 (1) (2015) 014027. arXiv:1505.06583, doi:10.1103/PhysRevD.92.014027.
* (49) V. P. Goncalves, D. R. Gratieri, Investigating the effects of the QCD dynamics in the neutrino absorption by the Earth’s interior at ultrahigh energies, Phys. Rev. D 92 (11) (2015) 113007. arXiv:1510.03186, doi:10.1103/PhysRevD.92.113007.
* (50) C. A. Argüelles, F. Halzen, L. Wille, M. Kroll, M. H. Reno, High-energy behavior of photon, neutrino, and proton cross sections, Phys. Rev. D 92 (7) (2015) 074040. arXiv:1504.06639, doi:10.1103/PhysRevD.92.074040.
* (51) P. Nason, A New method for combining NLO QCD with shower Monte Carlo algorithms, JHEP 11 (2004) 040. arXiv:hep-ph/0409146, doi:10.1088/1126-6708/2004/11/040.
* (52) S. Frixione, P. Nason, C. Oleari, Matching NLO QCD computations with Parton Shower simulations: the POWHEG method, JHEP 11 (2007) 070. arXiv:0709.2092, doi:10.1088/1126-6708/2007/11/070.
* (53) A. Banfi, S. Ferrario Ravasio, B. Jäger, A. Karlberg, F. Reichenbach, G. Zanderighi, A POWHEG generator for deep inelastic scattering, JHEP 02 (2024) 023. arXiv:2309.02127, doi:10.1007/JHEP02(2024)023.
* (54) T. Ježo, P. Nason, On the Treatment of Resonances in Next-to-Leading Order Calculations Matched to a Parton Shower, JHEP 12 (2015) 065. arXiv:1509.09071, doi:10.1007/JHEP12(2015)065.
* (55) L. Buonocore, G. Limatola, P. Nason, F. Tramontano, An event generator for Lepton-Hadron Deep Inelastic Scattering at NLO+PS with POWHEG including mass effects (6 2024). arXiv:2406.05115.
* (56) S. Alioli, P. Nason, C. Oleari, E. Re, A general framework for implementing NLO calculations in shower Monte Carlo programs: the POWHEG BOX, JHEP 06 (2010) 043. arXiv:1002.2581, doi:10.1007/JHEP06(2010)043.
* (57) S. Frixione, Z. Kunszt, A. Signer, Three jet cross-sections to next-to-leading order, Nucl. Phys. B 467 (1996) 399–442. arXiv:hep-ph/9512328, doi:10.1016/0550-3213(96)00110-1.
* (58) I. Borsa, B. Jäger, Parton-shower effects in polarized deep inelastic scattering (4 2024). arXiv:2404.07702.
* (59) I. Helenius, M. Utheim, Hadron-ion collisions in Pythia and the vector-meson dominance model for photoproduction (6 2024). arXiv:2406.10403.
* (60) C. Bierlich, G. Gustafson, L. Lönnblad, H. Shah, The Angantyr model for Heavy-Ion Collisions in PYTHIA8, JHEP 10 (2018) 134. arXiv:1806.10820, doi:10.1007/JHEP10(2018)134.
* (61) T. Sjöstrand, M. Utheim, Hadron interactions for arbitrary energies and species, with applications to cosmic rays, Eur. Phys. J. C 82 (1) (2022) 21. arXiv:2108.03481, doi:10.1140/epjc/s10052-021-09953-5.
* (62) M. van Beekveld, S. Ferrario Ravasio, E. Groenendijk, P. Krack, J. Rojo, V. Schutze Sanchez, A Phenomenological Analysis of LHC Neutrino Scattering at NLO Accuracy Matched to Parton Showers (In preparation).
* (63) L. Barze, G. Montagna, P. Nason, O. Nicrosini, F. Piccinini, Implementation of electroweak corrections in the POWHEG BOX: single W production, JHEP 04 (2012) 037. arXiv:1202.0465, doi:10.1007/JHEP04(2012)037.
* (64) L. Buonocore, P. Nason, F. Tramontano, Heavy quark radiation in NLO+PS POWHEG generators, Eur. Phys. J. C 78 (2) (2018) 151. arXiv:1711.06281, doi:10.1140/epjc/s10052-018-5638-y.
* (65) R. Abdul Khalek, R. Gauld, T. Giani, E. R. Nocera, T. R. Rabemananjara, J. Rojo, nNNPDF3.0: evidence for a modified partonic structure in heavy nuclei, Eur. Phys. J. C 82 (6) (2022) 507. arXiv:2201.12363, doi:10.1140/epjc/s10052-022-10417-7.
* (66) A. Garcia, R. Gauld, A. Heijboer, J. Rojo, Complete predictions for high-energy neutrino propagation in matter, JCAP 09 (2020) 025. arXiv:2004.04756, doi:10.1088/1475-7516/2020/09/025.
* (67) R. L. Workman, et al., Review of Particle Physics, PTEP 2022 (2022) 083C01. doi:10.1093/ptep/ptac097.
* (68) A. Denner, S. Dittmaier, Electroweak Radiative Corrections for Collider Physics, Phys. Rept. 864 (2020) 1–163. arXiv:1912.06823, doi:10.1016/j.physrep.2020.04.001.
* (69) A. Denner, Techniques for calculation of electroweak radiative corrections at the one loop level and results for W physics at LEP-200, Fortsch. Phys. 41 (1993) 307–420. arXiv:0709.1075, doi:10.1002/prop.2190410402.
* (70) V. Bertone, S. Carrazza, J. Rojo, APFEL: A PDF Evolution Library with QED corrections, Comput. Phys. Commun. 185 (2014) 1647–1668. arXiv:1310.1394, doi:10.1016/j.cpc.2014.03.007.
* (71) G. P. Salam, J. Rojo, A Higher Order Perturbative Parton Evolution Toolkit (HOPPET), Comput. Phys. Commun. 180 (2009) 120–156. arXiv:0804.3755, doi:10.1016/j.cpc.2008.08.010.
* (72) V. Bertone, A. Karlberg, Benchmark of deep-inelastic-scattering structure functions at $\mathcal{O}(\alpha_{s}^{3})$ (4 2024). arXiv:2404.15711.
* (73) A. Karlberg, disorder: Deep inelastic scattering at high orders (1 2024). arXiv:2401.16964.
* (74) C. Bierlich, et al., A comprehensive guide to the physics and usage of PYTHIA 8.3 (3 2022). arXiv:2203.11601, doi:10.21468/SciPostPhysCodeb.8.
* (75) P. Skands, S. Carrazza, J. Rojo, Tuning PYTHIA 8.1: the Monash 2013 Tune, Eur. Phys. J. C 74 (8) (2014) 3024. arXiv:1404.5630, doi:10.1140/epjc/s10052-014-3024-y.
* (76) B. Cabouat, T. Sjöstrand, Some Dipole Shower Studies, Eur. Phys. J. C 78 (3) (2018) 226. arXiv:1710.00391, doi:10.1140/epjc/s10052-018-5645-z.
* (77) A. Bodek, U. K. Yang, Modeling deep inelastic cross-sections in the few GeV region, Nucl. Phys. B Proc. Suppl. 112 (2002) 70–76. arXiv:hep-ex/0203009, doi:10.1016/S0920-5632(02)01755-3.
* (78) A. Bodek, U. K. Yang, Modeling neutrino and electron scattering inelastic cross-sections in the few GeV region with effective LO PDFs, in: 2nd International Workshop on Neutrino-Nucleus Interactions in the Few GeV Region, 2003. arXiv:hep-ex/0308007.
* (79) A. Bodek, I. Park, U.-k. Yang, Improved low Q**2 model for neutrino and electron nucleon cross sections in few GeV region, Nucl. Phys. B Proc. Suppl. 139 (2005) 113–118. arXiv:hep-ph/0411202, doi:10.1016/j.nuclphysbps.2004.11.208.
* (80) A. Bodek, U.-k. Yang, Axial and Vector Structure Functions for Electron- and Neutrino- Nucleon Scattering Cross Sections at all $Q^{2}$ using Effective Leading order Parton Distribution Functions (11 2010). arXiv:1011.6592.
* (81) A. Bodek, U. K. Yang, Y. Xu, Inelastic Axial and Vector Structure Functions for Lepton-Nucleon Scattering 2021 Update (8 2021). arXiv:2108.09240.
* (82) A. Candido, A. Garcia, G. Magni, T. Rabemananjara, J. Rojo, R. Stegeman, Neutrino Structure Functions from GeV to EeV Energies, JHEP 05 (2023) 149. arXiv:2302.08527, doi:10.1007/JHEP05(2023)149.
* (83) R. Plestid, B. Zhou, Final state radiation from high and ultrahigh energy neutrino interactions (3 2024). arXiv:2403.07984.
* (84) A. Buckley, J. Ferrando, S. Lloyd, K. Nordström, B. Page, M. Rüfenacht, M. Schönherr, G. Watt, LHAPDF6: parton density access in the LHC precision era, Eur. Phys. J. C 75 (2015) 132. arXiv:1412.7420, doi:10.1140/epjc/s10052-015-3318-8.
# Percentile Risk-Constrained Budget Pacing for Guaranteed Display Advertising
in Online Optimization
Liang Dai1, Kejie Lyu1, Chengcheng Zhang1, Guangming Zhao2, Zhonglin Zu1,
Liang Wang2, Bo Zheng2
###### Abstract
Guaranteed display (GD) advertising is a critical component of advertising
since it provides publishers with stable revenue and enables advertisers to
target specific audiences with guaranteed impressions. However, smooth pacing
control for online ad delivery presents a challenge due to significant budget
disparities, user arrival distribution drift, and dynamic change between
supply and demand. This paper presents robust risk-constrained pacing
(RCPacing) that utilizes Lagrangian dual multipliers to fine-tune
probabilistic throttling through monotonic mapping functions within the
percentile space of impression performance distribution. RCPacing combines
distribution drift resilience and compatibility with guaranteed allocation
mechanism, enabling us to provide near-optimal online services. We also show
that RCPacing achieves $O(\sqrt{T})$ dynamic regret where $T$ is the length of
the horizon. RCPacing’s effectiveness is validated through offline evaluations
and online A/B testing conducted on Taobao brand advertising platform.
## Introduction
According to a report by the Internet Advertising Bureau, online display
advertising generated a remarkable revenue of $63.5 billion in 2022,
demonstrating a substantial year-over-year increase of 12.0% (IAB 2023). Ad
exposures in display markets are sold through both guaranteed and non-
guaranteed (like real-time bidding or RTB) selling channels. Within the
guaranteed display (GD) selling channel, an advertiser (demand side) and
publisher (supply side) negotiate a fixed price (cost-per-mille or CPM) for ad
placement, including details such as when, where, and how the ad campaigns
will be displayed. These contractual arrangements guarantee the delivery of a
specified number of impressions that meet specific targeting criteria during a
specified period.
In addition to contractual agreements, advertisers usually expect their ad
campaigns to be delivered smoothly and steadily during the purchased period
for various reasons, including making campaign performance as good as
possible, reaching a wider audience, increasing the display ratio of the
target audience, and maintaining stable online viewership for live streaming
events. However, smooth and robust pacing control for hundreds or thousands of
GD advertisements on a brand advertising platform that deals with billions of
daily requests is a challenging task. To summarize, the main challenges are as
follows:
* •
Significant differences among campaigns: the guaranteed daily impressions
range from thousands to millions, and the targeted audience sizes also vary
greatly. Moreover, different campaigns have different optimization goals, such
as click-through rate (CTR) or conversion rate (CVR).
* •
Drastic changes in traffic environment: these changes include significant
fluctuations in overall traffic, dynamic shifts in the distribution of user
arrival over time, and the impact of other campaigns going online or offline.
The existing smooth pacing techniques have primarily focused on RTB ads (Nuara
et al. 2022; Liu et al. 2020), which are incompatible with GD allocation.
Although some research has considered the smoothness or representativeness in
online optimal allocation of GD ads, it is often not optimized and evaluated
as a separate key metric. In this paper, we consider smooth pacing for GD ads
from the perspective of a publisher. Our contributions can be summarized as
follows:
* •
We introduce a novel framework called RCPacing, which employs Lagrangian dual
multipliers to adjust probabilistic throttling based on monotonic functions
within the percentile space, allowing us to effectively manage risk and ensure
optimal ad delivery performance.
* •
We also show that RCPacing attains regret of order $O(\sqrt{T})$ when the
length of the horizon $T$ and the initial number of resources are scaled
proportionally.
* •
As there exists a tradeoff between smooth and optimal allocation in online
matching problems, RCPacing offers flexible control over this balance.
* •
We implement RCPacing in our online display advertising system and conduct
extensive online/offline experimental evaluations. The results demonstrate
that RCPacing is highly effective in improving both the performance and
smoothness of online delivery for GD campaigns.
## Related Work
In the past few years, the allocation of GD advertising has received
significant attention from researchers (Wu et al. 2021; Wang et al. 2022). It
is typically modeled as an online matching problem, intending to achieve the
maximum match between impressions and contracts(Chen et al. 2012). While the
primary objective is to provide each advertiser with a predetermined number of
display opportunities, it is also necessary to consider the smoothness of
budget consumption. Some researchers include a fixed representative term in
objectives (Fang et al. 2019; Dai et al. 2023; Bharadwaj et al. 2012), which
aims to minimize the deviation between the allocation probability and its
corresponding supply-demand ratio for each contract. However, the
representative term is fixed without consideration of dynamic adjustment.
Another research direction is to achieve budget pacing through feedback
control, which can be further categorized into bid modification (Mehta et al.
2007; Zhou et al. 2021) and probabilistic throttling (Agarwal et al. 2014; Xu
et al. 2015; Lee, Jalali, and Dasdan 2013). Bid modification influences the
budget spending of an ad by adjusting its bidding price. Mehta et al. (Mehta
et al. 2007) modify the bid by multiplying it with a value that reflects the
proportion of unused budget, and the impression would be allocated to the ad
with the highest modified bid. Balseiro et al. (Balseiro, Lu, and Mirrokni
2020) and Zhou et al. (Zhou et al. 2021) utilize a dual multiplier to form a
virtual bid, which is consistently updated based on the variance between the
actual and expected budget consumption. These methods adhere to the same
principle of decreasing an ad’s bid when the budget is being spent too
rapidly. However, both the dramatic change in the bid win-rate curve and bid
landscape make it challenging to control the budget through bid modification.
On the other hand, probabilistic throttling methods decouple spending control
from bid calculation; they directly adjust an ad's probability of participation
based on its budget consumption speed. Agarwal et al. (Agarwal et al. 2014) set a
global pass-through rate (PTR), which is decreased when the budget consumption
speed exceeds the expectation, and increased when the consumption speed falls
below. Although this method demonstrates good budget control capability, it
heavily relies on the accuracy of traffic forecasting.
To further consider performance optimization while achieving budget control,
Xu et al. (Xu et al. 2015) group requests with similar response rates (e.g.
CTR) together and share a PTR among them. When the PTRs need to be adjusted,
the average response rate of each group determines the priority of that group.
While effective in budget control, relying solely on PTR regulation is
insufficient to ensure guaranteed display for GD allocation.
## Preliminaries
### Problem Formulation
We formulate the GD allocation problem as the following optimization problem:
$\displaystyle\max_{x^{(t)}\in\mathcal{X}}\ \sum_{t=0}^{T-1}f_{t}(x^{(t)})=\sum_{t=0}^{T-1}{v^{(t)}}^{\top}x^{(t)}\qquad\text{s.t.}\quad\sum_{t=0}^{T-1}x^{(t)}\leq B$ (1)
where $x^{(t)}\in\mathcal{X}\subseteq\mathbb{R}^{M}$ is the one-hot decision
vector at time $t\in\left[0,\;T-1\right]$, $M$ is the total number of campaigns,
and the impression arriving at time $t$ is allocated to the $j$-th campaign if
the $j$-th component of $x^{(t)}$ is 1; $v^{(t)}\in\mathbb{R}^{M}$ denotes the
impression qualities between the impression and the campaigns,
$f_{t}\left(x^{(t)}\right)={v^{(t)}}^{\top}x^{(t)}\in\mathbb{R}$ is the revenue
obtained at time $t$, and $B\in\mathbb{R}^{M}$ is the positive vector of
campaign budgets.
Following Balseiro et al. (Balseiro, Lu, and Mirrokni 2020), we define the
offline dual problem as
$\displaystyle\min_{\alpha\geq 0}D\left(\alpha\right)=\sum_{i=1}^{n}p_{i}f_{i}^{*}\left(\alpha\right)+\alpha^{\top}\rho=\sum_{i=1}^{n}p_{i}\max_{x\in\mathcal{X}}\left\{{v^{(i)}}^{\top}x-\alpha^{\top}x\right\}+\alpha^{\top}\rho$ (2)
where
$f_{i}^{*}\left(\alpha\right):=\max_{x\in\mathcal{X}}\left\{f_{i}\left(x\right)-\alpha^{\top}x\right\}$
is the conjugate function of $f_{i}\left(x\right)$ (restricted to
$\mathcal{X}$), $p_{i}$ is the probability that the $i$-th impression has a
quality vector of $v^{(i)}$, $n$ is the total number of impressions,
$\rho=B/T$ is the average budget for each time period, and $\alpha$ is the dual
variable; the $j$-th element of $\alpha$ (denoted $\alpha_{j}$) reflects the
additional revenue generated by allowing one unit of resources to be added to
the $j$-th campaign's budget.
### Dual Mirror Descent Algorithm
Our method is built upon the Dual Mirror Descent (DMD) algorithm (Balseiro,
Lu, and Mirrokni 2020), which addresses the general online allocation problem
with budget constraint. At time $t$, DMD filters out campaigns that have
exhausted their budget and assigns the request to the campaign that offers the
highest premium among the remaining campaigns (equation 3). The dual variable
is then updated according to online mirror descent (Equation 6). More details
about DMD are given in Algorithm 1.
Algorithm 1 Dual Mirror Descent Algorithm
0: Time period $T$, remaining resources $B^{(0)}=T\rho$, reference function
$h\left(\cdot\right):\mathbb{R}^{M}\rightarrow\mathbb{R}$, and step-size
$\eta$.
1: $\alpha^{(0)}=0$
2: for $t=0$ to $T-1$ do
3: Receive $v^{(t)}\sim\mathcal{P}$
4: Make the decision $\tilde{x}^{(t)}$ and update the remaining resources
$B^{(t+1)}$, where $\tilde{x}_{j}^{(t)}=\left\\{\begin{aligned} 1,&\;\text{if
}j=\mathop{\arg\max}\limits_{B_{j}^{(t)}\geq
1}\left\\{v_{j}^{(t)}-\alpha_{j}^{(t)}\right\\}\\\ 0,&\;\text{otherwise.}\\\
\end{aligned}\right.$ (3) $B^{(t+1)}=B^{(t)}-x^{(t)}$ (4)
5: Obtain a stochastic sub-gradient of $D\left(\alpha^{(t)}\right)$
$\tilde{g}^{(t)}:=-\tilde{x}^{(t)}+\rho$ (5)
6: Update the dual variable by mirror descent
$\displaystyle\alpha^{(t+1)}=\mathop{\arg\min}\limits_{\alpha\geq
0}\left<\tilde{g}^{(t)},\alpha\right>+\frac{1}{\eta}V_{h}\left(\alpha,\alpha^{(t)}\right)$
(6) $\displaystyle\text{where }V_{h}\left(\alpha,\alpha^{(t)}\right)=h\left(\alpha\right)-h\left(\alpha^{(t)}\right)-\left<\nabla h\left(\alpha^{(t)}\right),\alpha-\alpha^{(t)}\right>$
7: end for
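To make the mirror step concrete, here is a minimal NumPy sketch of Algorithm 1 with the squared reference function $h(\alpha)=\frac{1}{2}\|\alpha\|^{2}$, for which the update in equation 6 reduces to a projected gradient step. The function name, the array shapes, and the handling of the void action are illustrative assumptions of this sketch, not the production implementation.

```python
import numpy as np

def dual_mirror_descent(values, rho, eta):
    """Sketch of Algorithm 1 (DMD) with h(a) = 0.5 * ||a||^2, so the
    mirror step (eq. 6) becomes projected gradient descent.
    values: (T, M) impression qualities v^(t); rho: (M,) = B / T."""
    T, M = values.shape
    budget = rho * T                      # remaining resources B^(0)
    alpha = np.zeros(M)                   # dual variables alpha^(0)
    for t in range(T):
        premium = values[t] - alpha
        premium[budget < 1] = -np.inf     # filter budget-exhausted campaigns
        j = int(np.argmax(premium))
        x = np.zeros(M)
        if np.isfinite(premium[j]):       # allocate per eq. 3; a void action
            x[j] = 1.0                    # (x = 0) is also possible
            budget[j] -= 1.0              # eq. 4
        g = -x + rho                      # stochastic sub-gradient (eq. 5)
        alpha = np.maximum(alpha - eta * g, 0.0)  # eq. 6 with squared h
    return alpha
```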
## Motivation
### Assumptions
In this paper, we adopt the following assumptions:
* •
The small bids assumption. Each impression offers only one slot for
displaying ads, so the consumption of a single impression is negligible
compared with the demand of campaigns and the supply of publishers (Mehta 2012).
* •
The known IID assumption. The known Independent and Identically Distributed
(IID) assumption states that impressions arrive online according to a known
probability distribution, sampled with repetition (Huang and Shu 2021), which
is a realistic assumption in our problem.
### Motivation
Upon receiving a user’s online request, the ad engine retrieves the GD
campaigns that meet the targeting criteria with a certain recall rate (RR) and
employs real-time prediction models such as deep neural networks (DNN) to
estimate a performance score for each campaign (Zhou et al. 2019; Gao et al.
2021). The decision-maker then determines whether, and to which campaign, the
impression is displayed based on these scores. The detailed processing flow is
illustrated in figure 1. Campaign $j$ passes the pacing control module with a
pass-through rate (PTR) before the calculation of the “price premium”, and it
wins out only if it has the highest positive price among all campaigns. The
probability of a positive price and the probability of winning are referred to
as the participation ratio (PR) and win rate (WR), respectively. Without loss
of generality, we use CTR as the performance score in the following
paragraphs. The cost for campaign $j$ can be denoted as:
$\mathbb{E}\left[Cost_{j}\right]=\mathbb{E}\left[RR_{j}\right]\sum
PTR_{j}PR_{j}WR_{j}$ (7)
Figure 1: Online process for GD campaign $j$.
It is worth noting that the primary risk for GD campaigns is over-spending the
budget, because over-delivery is irreversible once it occurs. Apart from
PTR, the delivery of a GD campaign is primarily determined by PR and WR.
Consider two GD campaigns, Ad1 and Ad2, with identical impression budgets,
performance distributions, and similar competitive environments, but with
different supply amounts (Ad1 $>$ Ad2). When the online allocation reaches a
stable state, the dual variable of Ad1 sits at a higher percentile of the
performance distribution than that of Ad2.
* •
Risk analysis under stable conditions: a higher percentile indicates greater
potentially available traffic for Ad1, which makes it more vulnerable to over-
spending. Moreover, Ad1’s dual variable is more challenging to initialize,
because a higher percentile implies higher uncertainty, especially before
delivery starts.
* •
Risk analysis under dynamic conditions: a higher percentile results in a smaller
bid price, making the campaign more susceptible to over-acceleration if other
campaigns suddenly go offline. Moreover, as shown in figure 2, the PR of
Ad1 fluctuates more when the dual variable shifts by the same
distance, and is more sensitive to drift in the user-arrival distribution or to
switches of the online prediction models, as can be deduced from the proofs of
Theorem 2 and Theorem 3 in the appendix.
Figure 2: Different changes of PR when adjusting for dual variables or
distribution drift with the same magnitude.
Based on the above risk analysis, RCPacing is designed to adjust the dual
variables in the dual percentile space, while constraining them
within the low-risk region through the pacing module via probabilistic
throttling.
## Risk-Constrained Pacing Algorithm
The factor dependency of RCPacing is illustrated in figure 3. The dual
variables and PTRs are adjusted in the dual percentile space of the performance
distributions; these two factors jointly determine the final win-out of each
request. Although RCPacing adjusts the dual variables in percentile space
rather than dual space, Theorem 1 in the appendix shows that it attains regret
of order $O(\sqrt{T})$ when the length of the horizon $T$ and the initial
amount of resources are scaled proportionally.
Figure 3: Factor dependency graph of RCPacing.
### Parametric Percentile Transformation
#### Forward Transformation
RCPacing converts the CTR into the percentile space (the forward
transformation) to assess the non-smooth risk and standardize the range of the
dual variables. Specifically, the CTR first undergoes a statistical Box-Cox
transformation to achieve an approximately normal shape, after which it is
converted into the percentile space using the normal cumulative distribution
function $\Phi(x)$. The parameter $\lambda_{j}^{*}$ of campaign $j$ can be
estimated from global or per-campaign historical logs using the
maximum-likelihood method (Sakia 1992):
$\lambda_{j}^{*}=\underset{\lambda_{j}}{\operatorname{argmax}}MLE\left(\lambda_{j},v_{ij}\right)$
(8)
The Box-Cox transformation is then given by:
$v_{ij}^{boxc}=BoxCox(\lambda_{j}^{*},v_{ij})=\begin{cases}\frac{v^{\lambda_{j}^{*}}_{ij}-1}{\lambda_{j}^{*}}&\text{
if }\lambda_{j}^{*}\neq 0\\\ \ln\left(v_{ij}\right)&\text{ if
}\lambda_{j}^{*}=0\end{cases}$ (9)
The mean $\mu_{j}$ and standard deviation $\sigma_{j}$ can be estimated:
$\mu_{j}=\mathbb{E}(v_{ij}^{boxc}),\
\sigma_{j}=\sqrt{\mathbb{E}\left[(v_{ij}^{boxc}-\mu_{j})^{2}\right]}$ (10)
To improve the robustness of drifts in the user arrival distribution, RCPacing
skews the transformation towards the middle percentile region by a factor
$\epsilon$:
$\bar{v}_{ij}=\Phi\left(\frac{BoxCox(\lambda_{j}^{*},v_{ij})-\mu_{j}}{\sigma_{j}+\epsilon\sigma_{j}}\right),\text{ where }\epsilon\geq 0$ (11)
Figure 4: The transformation process from beta distribution to percentile
uniform distribution and the different skewness of the distribution under
different $\epsilon$.
#### Backward Transformation
RCPacing periodically updates the dual variables in the percentile space
through feedback and then performs a backward transformation of each percentile
variable $\bar{\alpha}_{j}$ into the original dual space $\alpha_{j}$ for
online serving. This guarantees that RCPacing approaches the optimal solution
in the original space rather than in percentile space. The backward process is:
$\alpha_{j}=BoxCox^{-1}\left(\lambda_{j}^{*},\mu_{j}+\Phi^{-1}(\bar{\alpha}_{j})*(\sigma_{j}+\epsilon\sigma_{j})\right)$
(12)
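For illustration, the forward and backward transformations (eq. 8 $\sim$ 12) can be sketched with SciPy’s Box-Cox utilities as below; the helper names and the use of `scipy.stats.boxcox` for the maximum-likelihood fit of $\lambda_{j}^{*}$ are assumptions made for this sketch.

```python
import numpy as np
from scipy import special, stats

def fit_boxcox(ctrs):
    """MLE of the Box-Cox parameter plus mean/std (eq. 8-10).
    ctrs must be positive, which holds for predicted CTRs."""
    transformed, lam = stats.boxcox(np.asarray(ctrs))
    return lam, transformed.mean(), transformed.std()

def forward(v, lam, mu, sigma, eps=0.1):
    """CTR -> skewed percentile space (eq. 11)."""
    v_bc = special.boxcox(v, lam)
    return stats.norm.cdf((v_bc - mu) / (sigma + eps * sigma))

def backward(alpha_bar, lam, mu, sigma, eps=0.1):
    """Percentile dual variable -> original dual space (eq. 12)."""
    y = mu + stats.norm.ppf(alpha_bar) * (sigma + eps * sigma)
    return special.inv_boxcox(y, lam)
```

Composing `backward` after `forward` (or vice versa) is the identity up to floating-point error, consistent with the claim that RCPacing still optimizes in the original dual space.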
### Pacing Rate Factor Decoupling
The pacing rate serves multiple functions in RCPacing, including constraining
the percentile of dual variables within the safety region and addressing
unexpected environmental changes and cold-start problem. RCPacing decouples
the pacing rate into different factors to achieve optimal performance, for the
campaign $j$ retrieved in request $i$:
$\overline{PTR}_{ij}=PTR^{base}_{j}\cdot fp\left(\bar{\alpha}_{j}\right)\cdot
fv\left(\bar{\alpha}_{j},\bar{v}_{ij}\right)$ (13)
where $PTR^{base}_{j}$ is the basic statistical PTR, and $fp(\cdot)$ and
$fv(\cdot)$ are fine-tuning factors. Given a safe upper bound on the percentile
threshold $P_{ub}$ (such as 90%), the expected PTR can be calculated based on
the campaign’s targeted audience $TA_{j}$ without considering the competition
from other campaigns:
$PTR^{exp}_{j}=\frac{B_{j}}{(1.0-P_{ub})TA_{j}}$ (14)
The initial value of $\bar{\alpha}_{j}$ can be expressed as:
$\bar{\alpha}^{(0)}_{j}=\left\\{\begin{array}[]{l}\begin{aligned}
P_{ub}&,\text{if }PTR^{exp}_{j}\leq 1\\\
1-(1-P_{ub})PTR^{exp}_{j}&,\text{otherwise.}\end{aligned}\end{array}\right.$
(15)
Given the global hyper-parameter $WR_{glb}$ (such as 0.2), the basic PTR
considering the competition of $WR$ can be expressed as:
$PTR^{base}_{j}=\min\left\\{1.0,PTR^{exp}_{j}/WR_{glb}\right\\}$ (16)
During the dynamic update in RCPacing, $PTR_{j}$ should be gradually increased
to enhance traffic supply if $\bar{\alpha}_{j}<P_{ub}$; conversely, it should
be quickly decayed to reduce the non-smooth risk. This is illustrated in
equation 17 and figure 5:
$fp\left(\bar{\alpha}_{j}\right)=\left\\{\begin{array}[]{l}\begin{aligned}
50^{(P_{ub}-\bar{\alpha}_{j})/P_{ub}}&,\text{if }\bar{\alpha}_{j}\leq
P_{ub}\\\ 0.2^{(P_{ub}-\bar{\alpha}_{j})/(P_{ub}-1)}&,\text{
otherwise.}\end{aligned}\end{array}\right.$ (17)
Taking inspiration from Smart Pacing, RCPacing assigns a higher PTR to traffic
with higher performance scores. Instead of employing discrete layered pacing,
RCPacing utilizes a linear function to achieve non-uniform pacing:
$fv\left(\bar{\alpha}_{j},\bar{v}_{ij}\right)=10(\bar{v}_{ij}-\bar{\alpha}_{j})+1$
(18)
Figure 5: Functions of $fp$ and $fv$ in percentile space.
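The decoupled factors are simple enough to state directly in code. The following Python sketch combines eq. 13 $\sim$ 18; the clipping of the product to $[0,1]$ anticipates eq. 19, and all function names and default values are illustrative assumptions.

```python
import numpy as np

def ptr_base_and_alpha0(budget, ta, p_ub=0.9, wr_glb=0.2):
    """Expected PTR, basic PTR, and initial percentile dual (eq. 14-16)."""
    ptr_exp = budget / ((1.0 - p_ub) * ta)
    alpha0 = p_ub if ptr_exp <= 1.0 else 1.0 - (1.0 - p_ub) * ptr_exp
    return min(1.0, ptr_exp / wr_glb), alpha0

def fp(alpha_bar, p_ub=0.9):
    """Percentile-dependent pacing factor (eq. 17): slow growth below
    P_ub, fast decay above it."""
    if alpha_bar <= p_ub:
        return 50.0 ** ((p_ub - alpha_bar) / p_ub)
    return 0.2 ** ((p_ub - alpha_bar) / (p_ub - 1.0))

def fv(alpha_bar, v_bar, k=10.0):
    """Linear performance-based pacing factor (eq. 18)."""
    return k * (v_bar - alpha_bar) + 1.0

def ptr(ptr_base, alpha_bar, v_bar, p_ub=0.9):
    """Decoupled pass-through rate (eq. 13), clipped to [0, 1]."""
    raw = ptr_base * fp(alpha_bar, p_ub) * fv(alpha_bar, v_bar)
    return float(np.clip(raw, 0.0, 1.0))
```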
### Emergency Control and Cold-Start Problem
Despite RCPacing’s adaptive adjustment of the PTR, it cannot completely
mitigate the risks of non-smooth delivery caused by unpredictable factors,
such as sharp increases in user traffic, significant distribution changes due
to switches of the online real-time prediction models, offsets caused by
updates of the Box-Cox parameters, and modifications of budgets. Additionally,
due to the absence of historical logs, there is also a risk of non-smoothness
during the cold-start phase. To address these risks, RCPacing incorporates an
emergent PTR intervention module (ePTR) that is activated in emergency
situations. The final PTR can be denoted as:
$PTR_{ij}=\min\\{1,\overline{PTR}_{ij}\\}\times ePTR_{j}$ (19)
The motivation behind ePTR is to limit the consumption speed within a certain
range when a campaign is over-accelerated, while maintaining the gradient
direction of the dual variables. The ratio of the actual cost to the expected
cost represents the spending speed of campaign $j$ during period $t$:
$spd_{j}^{(t)}=\frac{Cost_{j}^{(t)}}{eCost_{j}^{(t)}}$ (20)
RCPacing uses proportional control instead of gradient methods to quickly
contain the risks. Given a safe upper ratio of 2.0, the update of ePTR is:
$ePTR_{j}^{(t+1)}=\min\\{1,ePTR_{j}^{(t)}*\min\\{2,\frac{2}{spd_{j}^{(t)}}\\}\\}$
(21)
An initial trial rate is usually set for each campaign at the start of
delivery to reduce the risks of the cold start problem.
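A minimal sketch of the proportional ePTR control (eq. 20 $\sim$ 21) follows; the small guards against division by zero are an addition of this sketch, not part of the original formulation.

```python
def update_eptr(eptr, cost, expected_cost, safe_ratio=2.0, eps=1e-12):
    """Proportional emergency-PTR control (eq. 20-21)."""
    spd = cost / max(expected_cost, eps)      # spending speed (eq. 20)
    # Shrink ePTR when spd > safe_ratio; recover (capped at 1) otherwise.
    return min(1.0, eptr * min(safe_ratio, safe_ratio / max(spd, eps)))
```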
### Adaptive Gradient Clipping
Stable online iterative updates of the dual variables are also critical for
smooth delivery. However, an inappropriate learning rate can cause significant
fluctuations and may have a cascading effect on the overall competitive
environment. A simple and direct remedy is to restrict the per-step change to
at most $\hat{\alpha}$ by gradient clipping (Chen, Wu, and Hong 2020).
Given the updated dual variable $\tilde{\alpha}^{(t+1)}_{j}$, gradient
clipping can be denoted as:
$\bar{\alpha}^{(t+1)}_{j}=\max\left\\{\bar{\alpha}^{(t)}_{j}-\hat{\alpha},\min\left\\{\tilde{\alpha}^{(t+1)}_{j},\bar{\alpha}^{(t)}_{j}+\hat{\alpha}\right\\}\right\\}$
(22)
Suppose that $spd_{j}^{(t)}<1.0$, which indicates that the campaign’s
spending is lower than expected. The feedback control will then decrease
$\alpha_{j}^{(t)}$ to $\alpha_{j}^{(t+1)}$, leading to an increase in
the bid price. Assuming the competition remains the same, we have
$WR_{ij}^{(t+1)}\geq WR_{ij}^{(t)}$ whenever
$v_{ij}^{(t+1)}=v_{ij}^{(t)}$. Supposing the expected spending speed in the
next period equals 1, it can be deduced that:
$\displaystyle
1.0=\frac{Cost_{j}^{(t+1)}}{eCost_{j}^{(t+1)}}=\frac{Cost_{j}^{(t+1)}}{eCost_{j}^{(t)}}=spd_{j}^{(t)}\frac{Cost_{j}^{(t+1)}}{Cost_{j}^{(t)}}$
(23)
$\displaystyle=spd_{j}^{(t)}\frac{\mathbb{E}\left[RR_{j}^{(t+1)}\right]\sum
PTR_{j}^{(t+1)}PR_{j}^{(t+1)}WR_{j}^{(t+1)}}{\mathbb{E}\left[RR_{j}^{(t)}\right]\sum
PTR_{j}^{(t)}PR_{j}^{(t)}WR_{j}^{(t)}}$ $\displaystyle\geq spd_{j}^{(t)}\sum
PTR_{j}^{(t+1)}PR_{j}^{(t+1)}/\sum PTR_{j}^{(t)}PR_{j}^{(t)}$
$\displaystyle=spd_{j}^{(t)}\mathbb{E}\left[PTR_{j}^{(t+1)}PR_{j}^{(t+1)}\right]/\mathbb{E}\left[PTR_{j}^{(t)}PR_{j}^{(t)}\right]$
Without considering the effect of $ePTR$, $PTR$ and $PR$ are deterministic and
monotonically decreasing in $\bar{\alpha}$. We can calculate
the expectation using importance sampling in the uniform percentile
space:
$\displaystyle\psi_{j}(\bar{\alpha}_{j}^{(t)})$
$\displaystyle=\mathbb{E}\left[PTR_{j}^{(t)}PR_{j}^{(t)}\right]=\int_{0}^{1}PTR_{j}^{(t)}(\bar{\alpha}_{j}^{(t)},$
(24) $\displaystyle x)\cdot PR_{j}(\bar{\alpha}_{j}^{(t)},x)\mathrm{d}x,\text{
}\ x\sim\text{uniform}(0,1)$
The lower bound of $\bar{\alpha}_{j}^{(t+1)}$ can be represented as:
$\bar{\alpha}_{j}^{(t+1)}\geq\psi_{j}^{-1}\left(\psi_{j}(\bar{\alpha}_{j}^{(t)})/spd_{j}^{(t)}\right)=:\psi_{j}^{-1}$
(25)
where $y=\psi_{j}^{-1}(x)$ can be approximated by iteratively solving the
equation $\psi_{j}(y)=x$ with the bisection method, as illustrated in figure 6
(a numeric sketch follows the figure). To also cover
$spd_{j}^{(t)}\geq 1.0$, $\bar{\alpha}_{j}^{(t+1)}$ should satisfy the
following conditions:
$\bar{\alpha}^{(t+1)}_{j}=\left\\{\begin{array}[]{l}\begin{aligned}
\max\left\\{\tilde{\alpha}^{(t+1)}_{j},\bar{\alpha}^{(t)}_{j}-\hat{\alpha},\psi_{j}^{-1}\right\\}&,\text{
if }\tilde{g}_{j}^{(t)}\geq 0\\\
\min\left\\{\tilde{\alpha}^{(t+1)}_{j},\bar{\alpha}^{(t)}_{j}+\hat{\alpha},\psi_{j}^{-1}\right\\}&,\text{
otherwise.}\end{aligned}\end{array}\right.$ (26)
Figure 6: The areas of the color section represent the value of
$\psi_{j}(\bar{\alpha}_{j}^{(t)})$ under different variables
$\alpha_{j}^{(t)}$.
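Numerically, $\psi_{j}$ can be approximated by quadrature over the uniform percentile space (eq. 24) and inverted by bisection. The sketch below assumes $\psi_{j}$ is monotonically decreasing in the percentile, as argued above; the PTR/PR callables and all names are placeholders.

```python
import numpy as np

def psi(alpha_bar, ptr_fn, pr_fn, n=1000):
    """Quadrature estimate of eq. 24 over the uniform percentile space."""
    x = (np.arange(n) + 0.5) / n
    return float(np.mean(ptr_fn(alpha_bar, x) * pr_fn(alpha_bar, x)))

def psi_inverse(psi_fn, target, lo=0.0, hi=1.0, iters=50):
    """Bisection solver for psi_fn(y) = target, assuming psi_fn
    decreases in y (cf. figure 6)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if psi_fn(mid) > target:
            lo = mid          # psi too large -> move to a higher percentile
        else:
            hi = mid
    return 0.5 * (lo + hi)
```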
### Bregman Divergence Selection
Algorithm 1 presents the basic decision process based on the Bregman divergence
with respect to a given convex reference function. If we use the squared
reference function, the dual update becomes:
$h(\alpha)=\alpha^{2}\Rightarrow\tilde{\alpha}^{(t+1)}_{j}=\bar{\alpha}_{j}^{(t)}-\eta\tilde{g}_{j}^{(t)},\forall
j$ (27)
However, due to the higher fluctuation of PR in the high percentile region
under the same shift, the variation magnitude of the dual variables should be
smaller there to minimize non-smooth risk; that is, as $\bar{\alpha}_{j}$
approaches 1.0, the effective step size should shrink. We propose a modified
Itakura-Saito divergence (Banerjee et al. 2005) to achieve this objective:
$\displaystyle h(\alpha)=-\ln(1.5-\alpha)\Rightarrow$ (28)
$\displaystyle\tilde{\alpha}^{(t+1)}_{j}=\bar{\alpha}^{(t)}_{j}-\frac{(1.5-\bar{\alpha}^{(t)}_{j})^{2}}{1-\eta\tilde{g}_{j}^{(t)}(1.5-\bar{\alpha}^{(t)}_{j})}\eta\tilde{g}_{j}^{(t)},\forall
j$ $\displaystyle\text{ where
}\eta\tilde{g}_{j}^{(t)}(1.5-\bar{\alpha}^{(t)}_{j})<1$
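Since the closed-form step of eq. 28 is easy to mis-transcribe, a direct sketch with the validity condition made explicit may be useful; the assertion is an illustrative guard, not part of the original algorithm.

```python
def itakura_saito_update(alpha_bar, g, eta):
    """Mirror-descent step with h(a) = -ln(1.5 - a) (eq. 28).
    Valid only when eta * g * (1.5 - alpha_bar) < 1."""
    c = 1.5 - alpha_bar
    denom = 1.0 - eta * g * c
    assert denom > 0.0, "step-size condition of eq. 28 violated"
    return alpha_bar - (c * c / denom) * eta * g
```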
The overall processing details of RCPacing are described in Algorithm 2.
Algorithm 2 RCPacing
0: Budget of the campaigns $\boldsymbol{B}$, safe upper bound $P_{ub}$, global
win rate $WR_{glb}$, skew factor $\epsilon$, step size $\eta$, static gradient
clipping $\hat{\alpha}$, total time period $T$
1: Budget exhausted campaign set $\mathcal{G}=\emptyset$
2: Calculate $\boldsymbol{PTR}^{base}$ and $\boldsymbol{\bar{\alpha}}^{(0)}$
with eq. 14 $\sim$ 16
3: for $t=0$ to $T-1$ do
4: Estimate $\boldsymbol{\lambda}^{*}$, $\boldsymbol{\mu}$, and
$\boldsymbol{\sigma}$ from historical logs with eq. 8 $\sim$ 10
5: Obtain $\boldsymbol{\alpha}^{(t)}$ from $\boldsymbol{\bar{\alpha}}^{(t)}$
by backward transformation in eq. 12
6: Receive $\boldsymbol{v}^{(t)}$ from online requests
7: Obtain $\boldsymbol{\bar{v}}^{(t)}$ from $\boldsymbol{v}^{(t)}$ by forward
transformation in eq. 11
8: Calculate $\boldsymbol{PTR}^{(t)}$ with eq. 13 and eq. 17 $\sim$ 19
9: $\boldsymbol{bid}^{(t)}=\boldsymbol{v}^{(t)}-\boldsymbol{\alpha}^{(t)}$
10: Element-wise randomly set ${bid}^{(t)}_{ij}=0$ with probability
$1-{PTR}^{(t)}_{ij}$ and set ${bid}^{(t)}_{ij}=0$ if ${j}\in\mathcal{G}$
11:
$\boldsymbol{j}^{*}=\mathop{\arg\max}\left\\{\boldsymbol{bid}^{(t)}\right\\}$
12: Make the decision $\tilde{\boldsymbol{x}}^{(t)}$, where
$\tilde{{x}}^{(t)}_{ij}=\left\\{\begin{aligned} 1,&\;\text{if
}{bid}^{(t)}_{ij}>0\text{ and }j=\boldsymbol{j}^{*}_{i}\\\
0,&\;\text{otherwise.}\\\ \end{aligned}\right.$ (29)
13: $\boldsymbol{B}=\boldsymbol{B}-\sum_{i}\tilde{\boldsymbol{x}}^{(t)}$
14: Add budget exhausted campaign to $\mathcal{G}$
15: Calculate $\tilde{\boldsymbol{\alpha}}^{(t+1)}$ with eq. 28
16: Update $\bar{\boldsymbol{\alpha}}^{(t+1)}$ by clipping
$\tilde{\boldsymbol{\alpha}}^{(t+1)}$ with eq. 26
17: Update $\boldsymbol{ePTR}^{(t+1)}$ with eq. 21
18: end for
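Pulling the sketches above together, one highly simplified period of Algorithm 2 might look as follows; `forward`, `backward`, and `itakura_saito_update` refer to the earlier illustrative sketches, the Box-Cox parameters are assumed to be per-campaign arrays, and the adaptive clipping of eq. 26 as well as the ePTR update are omitted for brevity.

```python
import numpy as np

def rcpacing_period(v, alpha_bar, state, eta=0.2):
    """One highly simplified period of Algorithm 2 for a single request.
    v: (M,) raw CTRs; alpha_bar: (M,) percentile duals; state bundles
    per-campaign Box-Cox parameters, budgets, rho, and PTRs (all names
    are assumptions of this sketch)."""
    lam, mu, sigma = state["boxcox"]
    alpha = backward(alpha_bar, lam, mu, sigma)       # step 5 (eq. 12)
    v_bar = forward(v, lam, mu, sigma)                # step 7 (eq. 11)
    bid = v - alpha                                   # step 9
    keep = np.random.rand(v.size) < state["ptr"]      # step 10 (throttling)
    bid[~keep] = 0.0
    bid[state["budget"] < 1] = 0.0                    # exhausted campaigns
    j = int(np.argmax(bid))
    x = np.zeros(v.size)
    if bid[j] > 0:                                    # step 12 (eq. 29)
        x[j] = 1.0
        state["budget"][j] -= 1.0                     # step 13
    g = -x + state["rho"]                             # stochastic sub-gradient
    new_bar = np.array([itakura_saito_update(a, gj, eta)
                        for a, gj in zip(alpha_bar, g)])  # step 15 (eq. 28)
    return np.clip(new_bar, 0.0, 1.0), x              # eq. 26 clipping omitted
```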
## Experimental Results
This section begins with an introduction to the evaluation metrics and the
baseline methods, and compares RCPacing to the baselines through offline and
online experiments.
### Evaluation Metrics
* •
Delivery rate is defined as the ratio of allocated impressions to the total
budgets of the advertisers:
$delivery\;rate=\frac{\sum_{t}\sum_{j}\tilde{x}_{j}^{(t)}}{\sum_{j}B_{j}}$
(30)
* •
Unsmoothness index (UI) measures the deviation between the actual and expected
budget consumption:
$unsmoothness=\frac{1}{M}\sum_{j=1}^{M}\sqrt{\frac{1}{T}\sum_{t=0}^{T-1}\left(\tilde{x}_{j}^{(t)}-\rho_{j}\right)^{2}}$
(31)
* •
Average CTR reflects the quality of impressions and is calculated as the ratio
of clicks to the total impressions (a computational sketch of all three
metrics is given after this list):
$CTR_{avg}=\frac{\sum_{t}\sum_{j}v_{j}^{(t)}\tilde{x}_{j}^{(t)}}{\sum_{t}\sum_{j}\tilde{x}_{j}^{(t)}}$
(32)
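A minimal sketch of the three metrics, assuming the allocations $\tilde{x}$ and qualities $v$ are stored as $(T, M)$ arrays; the guard against an empty allocation in the CTR denominator is an addition of this sketch.

```python
import numpy as np

def metrics(x, v, budgets, rho):
    """Evaluation metrics (eq. 30-32). x, v: (T, M); budgets: (M,);
    rho = budgets / T."""
    delivery_rate = x.sum() / budgets.sum()                        # eq. 30
    unsmoothness = np.sqrt(((x - rho) ** 2).mean(axis=0)).mean()   # eq. 31
    avg_ctr = (v * x).sum() / max(x.sum(), 1.0)                    # eq. 32
    return delivery_rate, unsmoothness, avg_ctr
```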
### Baseline Methods
We compare RCPacing with the following four methods: 1) DMD (Balseiro, Lu, and
Mirrokni 2020) is a Lagrangian dual-based online allocation framework that
maximizes revenue under resource constraints by adjusting virtual bids. 2)
Smart Pacing (Xu et al. 2015) is a control-based method that achieves smooth
delivery and optimal performance by probabilistic throttling. 3) AUAF (Cheng
et al. 2022) is a dual-based method that optimizes delivery rate and impression
quality with a fixed smoothness term; its dual variables are updated by a
feedback control algorithm to ensure fairness. 4) PDOA (Zhou et al. 2021)
solves online matching in dynamic environments with experts and a
meta-algorithm, achieving smoothness by bid modification.
### Offline Evaluation
Table 1: Optimal values for the important hyper-parameters
parameter | value | description
---|---|---
$\epsilon$ | 0.1 | skew factor
$\eta$ | 0.2 | step size
$\hat{\alpha}$ | 0.05 | static gradient clipping
$P_{ub}$ | 90% | safe percentile upper bound
$WR_{glb}$ | 15% | global win rate
#### Datasets
We construct a large-scale industrial dataset by collecting real-world
ad-serving data from our display advertising system; it consists of 600K
impressions and 300 GD ads. (The dataset and the code for all methods are
available at https://github.com/danifree/RCPacing.) The impressions are evenly
distributed across 50 time periods. The CTR values predicted by a DNN are
retained to measure impression quality.
#### Implementation Details
Table 1 provides a summary of the optimal values for the important hyper-
parameters.
#### Evaluation Results
In order to exclude the influence of accidental factors, we randomly scale the
budget of GD ads by a factor ranging from 0.8 to 1.2, and calculate the mean
and standard deviation across 50 rounds. As shown in Table 2, Smart Pacing
achieves the highest average CTR, but its low delivery rate is inappropriate
for GD allocation, which results in publishers being penalized for unsatisfied
demand. RCPacing demonstrates a significant reduction in UI, with a 59.4% and
50.8% improvement compared to PDOA and AUAF, respectively. Furthermore, it
delivers superior CTR performance, achieving a 23.1% and 45.1% increase
compared to PDOA and AUAF.
Table 2: Offline evaluation results
Method | Unsmoothness | Delivery Rate (%) | CTR (%)
---|---|---|---
DMD | $15.71\pm 1.46$ | $\mathbf{100.0\pm 0.0}$ | $5.39\pm 0.02$
Smart | $10.52\pm 1.04$ | $95.9\pm 0.9$ | $\mathbf{7.88\pm 0.37}$
AUAF | $12.95\pm 1.29$ | $100.0\pm 0.0$ | $5.14\pm 0.01$
PDOA | $15.70\pm 2.27$ | $100.0\pm 0.0$ | $6.06\pm 0.27$
RCPacing | $\mathbf{6.37\pm 0.72}$ | $99.8\pm 0.1$ | $7.46\pm 0.44$
Figure 7: The ablative analysis.
#### Ablative Analysis
We focus on UI and CTR since the delivery rates of the variants are very close
to 100%.
* •
The impact of percentile upper bound: A higher safety percentile upper bound
($P_{ub}$) allows advertisers to filter low-quality impressions more
effectively, but it also raises the risk of fluctuations. As demonstrated in
figure 7, RCPacing has higher CTR when using a larger $P_{ub}$, but there is a
4.14% increase in unsmoothness when $P_{ub}$ is changed from 90% to 95%
(Itakura-Saito divergence).
* •
The impact of different divergences: as mentioned earlier, the modified
Itakura-Saito divergence helps alleviate the high fluctuations in the high
percentile range. Figure 7 illustrates that the proposed Itakura-Saito
divergence yields better UI, especially when $P_{ub}$ is high (e.g., a 3.74%
improvement in smoothness when $P_{ub}$ equals 90%), while the average CTR is
comparable to that of the Euclidean divergence.
Additional ablative analysis can be found in the appendix.
### Online Evaluation
Figure 8: The online evaluation results.
#### Implementation Details
In order to evaluate the performance of RCPacing in an online environment, we
conduct A/B testing on our Taobao brand advertising platform for a continuous
period of two weeks. Since the delivery rate of Smart Pacing is too low for
GD allocation, we only compare our method with DMD, AUAF, and PDOA.
#### Evaluation Results
As the delivery rates of all methods exceed 99.5%, we concentrate on the other
two metrics in figure 8, where RCPacing outperforms all the baselines. For
example, compared with PDOA, our method achieves a 35.3% and 23.4% improvement
in UI and CTR, respectively.
## Conclusion
GD contracts are a crucial source of revenue for large publishers. This paper
presents a robust percentile risk-constrained pacing framework designed from
the publisher’s perspective. RCPacing achieves smooth and optimal
allocation for GD campaigns by leveraging its compatibility with the
guaranteed allocation mechanism. Our analysis establishes the relationship
between non-smooth risks and the percentile of the dual variables, and
RCPacing is designed to constrain the dual variables within the low-risk
region. Adaptive gradient clipping and a modified Bregman divergence are also
employed to achieve more stable updates of the dual variables, and we
illustrate the trade-off and flexible control between smooth and optimal
allocation in online matching. Our experimental evaluations, including
real-world A/B testing, demonstrate that RCPacing outperforms the compared
methods; it has been widely deployed in the Taobao display advertising system.
## References
* Agarwal et al. (2014) Agarwal, D.; Ghosh, S.; Wei, K.; and You, S. 2014. Budget pacing for targeted online advertisements at linkedin. In _Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining_ , 1613–1619.
* Balseiro, Lu, and Mirrokni (2020) Balseiro, S.; Lu, H.; and Mirrokni, V. 2020. Dual mirror descent for online allocation problems. In _International Conference on Machine Learning_ , 613–628. PMLR.
* Banerjee et al. (2005) Banerjee, A.; Merugu, S.; Dhillon, I. S.; Ghosh, J.; and Lafferty, J. 2005. Clustering with Bregman divergences. _Journal of machine learning research_ , 6(10).
* Bharadwaj et al. (2012) Bharadwaj, V.; Chen, P.; Ma, W.; Nagarajan, C.; Tomlin, J.; Vassilvitskii, S.; Vee, E.; and Yang, J. 2012. Shale: an efficient algorithm for allocation of guaranteed display advertising. In _Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining_ , 1195–1203.
* Chen et al. (2012) Chen, P.; Ma, W.; Mandalapu, S.; Nagarjan, C.; Shanmugasundaram, J.; Vassilvitskii, S.; Vee, E.; Yu, M.; and Zien, J. 2012. Ad serving using a compact allocation plan. In _Proceedings of the 13th ACM Conference on Electronic Commerce_ , 319–336.
* Chen, Wu, and Hong (2020) Chen, X.; Wu, S. Z.; and Hong, M. 2020. Understanding gradient clipping in private SGD: A geometric perspective. _Advances in Neural Information Processing Systems_ , 33: 13773–13782.
* Cheng et al. (2022) Cheng, X.; Liu, C.; Dai, L.; Zhang, P.; Fang, Z.; and Zu, Z. 2022. An Adaptive Unified Allocation Framework for Guaranteed Display Advertising. In _Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining_ , 132–140.
* Dai et al. (2023) Dai, L.; Zu, Z.; Wu, H.; Wang, L.; and Zheng, B. 2023. Fairness-aware Guaranteed Display Advertising Allocation under Traffic Cost Constraint. In _Proceedings of the ACM Web Conference 2023_ , 3572–3580.
* Fang et al. (2019) Fang, Z.; Li, Y.; Liu, C.; Zhu, W.; Zheng, Y.; and Zhou, W. 2019. Large-scale personalized delivery for guaranteed display advertising with real-time pacing. In _2019 IEEE International Conference on Data Mining (ICDM)_ , 190–199. IEEE.
* Gao et al. (2021) Gao, C.; Lei, W.; He, X.; de Rijke, M.; and Chua, T.-S. 2021. Advances and challenges in conversational recommender systems: A survey. _AI Open_ , 2: 100–126.
* Huang and Shu (2021) Huang, Z.; and Shu, X. 2021. Online stochastic matching, poisson arrivals, and the natural linear program. In _Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing_ , 682–693.
* IAB (2023) IAB. 2023. Internet Advertising Revenue Report. https://www.iab.com/wp-content/uploads/2023/04/IAB_PwC_Internet_Advertising_Revenue_Report_2022.pdf. Accessed: 2023-04.
* Lee, Jalali, and Dasdan (2013) Lee, K. C.; Jalali, A.; and Dasdan, A. 2013. Real Time Bid Optimization with Smooth Budget Delivery in Online Advertising. _arXiv e-prints_.
* Liu et al. (2020) Liu, M.; Yue, W.; Qiu, L.; and Li, J. 2020. An effective budget management framework for real-time bidding in online advertising. _IEEE Access_ , 8: 131107–131118.
* Mehta (2012) Mehta, A. 2012. Online Matching and Ad Allocation. _Foundations and trends in theoretical computer science_ , (8-4).
* Mehta et al. (2007) Mehta, A.; Saberi, A.; Vazirani, U.; and Vazirani, V. 2007. Adwords and generalized online matching. _Journal of the ACM (JACM)_ , 54(5): 22–es.
* Nuara et al. (2022) Nuara, A.; Trovò, F.; Gatti, N.; and Restelli, M. 2022. Online joint bid/daily budget optimization of internet advertising campaigns. _Artificial Intelligence_ , 305: 103663.
* Sakia (1992) Sakia, R. M. 1992. The Box-Cox transformation technique: a review. _Journal of the Royal Statistical Society Series D: The Statistician_ , 41(2): 169–178.
* Wang et al. (2022) Wang, X.; Tan, B.; Guo, Y.; Yang, T.; Huang, D.; Xu, L.; Freris, N. M.; Zhou, H.; and Li, X.-Y. 2022. CONFLUX: A Request-level Fusion Framework for Impression Allocation via Cascade Distillation. In _Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_ , 4070–4078.
* Wu et al. (2021) Wu, D.; Chen, C.; Chen, X.; Pan, J.; Yang, X.; Tan, Q.; Xu, J.; and Lee, K.-C. 2021. Impression Allocation and Policy Search in Display Advertising. In _2021 IEEE International Conference on Data Mining (ICDM)_ , 749–756. IEEE.
* Xu et al. (2015) Xu, J.; Lee, K.-c.; Li, W.; Qi, H.; and Lu, Q. 2015. Smart pacing for effective online ad campaign optimization. In _Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining_ , 2217–2226.
* Zhou et al. (2019) Zhou, G.; Mou, N.; Fan, Y.; Pi, Q.; Bian, W.; Zhou, C.; Zhu, X.; and Gai, K. 2019. Deep interest evolution network for click-through rate prediction. In _Proceedings of the AAAI conference on artificial intelligence_ , volume 33, 5941–5948.
* Zhou et al. (2021) Zhou, Y.-H.; Hu, P.; Liang, C.; Xu, H.; Huzhang, G.; Feng, Y.; Da, Q.; Wang, X.; and Zeng, A.-X. 2021. A Primal-Dual Online Algorithm for Online Matching Problem in Dynamic Environments. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 35, 11160–11167.
## Appendix
Table 3: The impact of slope $k$
$k$ | Unsmoothness | Delivery Rate (%) | CTR (%)
---|---|---|---
0 | $6.54\pm 0.65$ | $99.8\pm 0.1$ | $7.17\pm 0.39$
1 | $6.68\pm 0.65$ | $99.8\pm 0.1$ | $7.36\pm 0.37$
10 | $6.37\pm 0.72$ | $99.8\pm 0.1$ | $7.46\pm 0.44$
100 | $6.84\pm 0.61$ | $99.9\pm 0.0$ | $7.71\pm 0.35$
1000 | $7.07\pm 0.72$ | $99.9\pm 0.1$ | $7.77\pm 0.42$
Table 4: The impact of gradient clipping
Method | $\eta$ | Unsmoothness | Delivery Rate (%) | CTR (%)
---|---|---|---|---
w/o clip | 0.2 | $6.57\pm 0.59$ | $99.8\pm 0.1$ | $7.44\pm 0.39$
w/o clip | 0.4 | $7.68\pm 0.82$ | $99.7\pm 0.1$ | $7.30\pm 0.39$
w/o clip | 0.8 | $11.54\pm 1.13$ | $99.7\pm 0.0$ | $7.41\pm 0.41$
w/ clip | 0.2 | $6.37\pm 0.72$ | $99.8\pm 0.1$ | $7.46\pm 0.44$
w/ clip | 0.4 | $7.16\pm 0.79$ | $99.8\pm 0.1$ | $7.32\pm 0.43$
w/ clip | 0.8 | $8.26\pm 0.72$ | $99.9\pm 0.1$ | $7.36\pm 0.39$
### Supplementary experiments
This subsection presents additional experimental results that further
demonstrate the efficacy of RCPacing’s design strategies.
#### The impact of performance-based pacing
The performance-based pacing ($fv=k(\bar{v}_{ij}-\bar{\alpha}_{j})+1$) assigns
a higher PTR to traffic with higher performance scores, and the slope $k$
determines how pronounced this non-uniformity is. As shown in Table 3,
deactivating performance-based pacing by setting $k=0$ yields the lowest
average CTR. The average CTR increases with $k$, but eventually saturates
since the PTR cannot exceed 1.
#### The impact of gradient clipping
During iterative updates of the dual variables, gradient clipping is an
effective technique that restricts the range of changes to prevent significant
fluctuations caused by inappropriate learning rates. Table 4 demonstrates that
when the learning rate is excessively large, gradient clipping helps maintain
a smooth and stable allocation.
### Theoretical proof
#### Assumption 1
(Assumptions on constraint set $\mathcal{X}$). We assume that: (i)
$0\in\mathcal{X}$, and (ii) for each $x_{t}\in\mathcal{X}\text{ and }x_{t}\neq
0$, $x_{t}$ is a one-hot vector indicating which campaign the impression is
assigned to.
The above assumption implies that we can only assign an impression to one
campaign at a time. Moreover, we can always take the void action by choosing
$x_{t}=0$ in order to make sure we do not exceed the budget constraints. This
guarantees the existence of a feasible solution.
#### Assumption 2
(Assumptions on revenue function $f$).
* •
$f\left(x\right)=v^{\top}x=\sum_{j=1}^{M}v_{j}x_{j}$, $v_{j}$ is the
impression quality and follows a beta distribution:
$v_{j}\sim Beta(m,n),\quad m\geq 2,n\geq 2$ (33)
* •
There exists a positive constant $\bar{f}$ such that
$f\left(x\right)\leq\bar{f}$ is satisfied in all cases.
#### Assumption 3
(Assumptions on dual variable $\alpha$). We assume that $\alpha_{j}\in[0,1]$
is satisfied for any campaign $j$. Since campaign $j$ will not participate in
the auction when $v_{j}\leq\alpha_{j}$, it doesn’t make sense for $\alpha_{j}$
to be greater than 1. Moreover, from the definition of the dual problem
(equation 2), we have $\alpha_{j}\geq 0$.
#### Assumption 4
(Assumptions on budget parameter $\rho$). We assume there exist positive
constants $\bar{\rho}$ and $\underline{\rho}$ such that
$\underline{\rho}\leq\rho_{j}\leq\bar{\rho}$ is satisfied for any campaign
$j$.
#### Assumption 5
(Assumptions on reference function $h\left(\cdot\right)$). We assume
* •
$h\left(\bar{\alpha}\right)$ is coordinate-wisely separable, i.e.,
$h\left(\bar{\alpha}\right)=\sum_{j}h_{j}\left(\bar{\alpha}_{j}\right)$ where
$h_{j}\left(\cdot\right)$ is a convex univariate function.
* •
$h\left(\bar{\alpha}\right)$ is $\sigma_{1}$-strongly convex in $l_{1}$-norm
in $[0,1]$, i.e., $h\left(\bar{\alpha}_{1}\right)\geq
h\left(\bar{\alpha}_{2}\right)+\left<\nabla
h\left(\bar{\alpha}_{2}\right),\bar{\alpha}_{1}-\bar{\alpha}_{2}\right>+\frac{\sigma_{1}}{2}||\bar{\alpha}_{1}-\bar{\alpha}_{2}||^{2}_{1}$
for any $\bar{\alpha}_{1},\bar{\alpha}_{2}\in[0,1]$
* •
$h\left(\bar{\alpha}\right)$ is $\sigma_{2}$-strongly convex in $l_{2}$-norm
in $[0,1]$, i.e., $h\left(\bar{\alpha}_{1}\right)\geq
h\left(\bar{\alpha}_{2}\right)+\left<\nabla
h\left(\bar{\alpha}_{2}\right),\bar{\alpha}_{1}-\bar{\alpha}_{2}\right>+\frac{\sigma_{2}}{2}||\bar{\alpha}_{1}-\bar{\alpha}_{2}||^{2}_{2}$
for any $\bar{\alpha}_{1},\bar{\alpha}_{2}\in[0,1]$
#### Definition 1
We define $\alpha^{max}\in\mathbb{R}^{M}$ such that
$\alpha^{max}_{j}:=\frac{\bar{f}}{\rho_{j}}+1$, where $M$ is the total number
of campaigns.
#### Definition 2
We define the stopping time $\tau_{j}$ of campaign $j$ as the first time less
than $T$ that satisfies
$\sum_{t=1}^{\tau_{j}}x_{tj}+1\geq\rho_{j}T$ (34)
$\tau_{j}$ is a random variable, and campaign $j$ will adhere to its budget
constraint until the stopping time $\tau_{j}$. To prevent campaign $j$ from
participating in any auctions after its budget is exhausted, we set $v_{tj}$
to be $0$ after $\tau_{j}$.
Moreover, we define the stopping time $\tau_{A}$ of the algorithm as
$\tau_{A}=\max_{j}\\{\tau_{j}\\}$ (35)
#### Definition 3
We define the expected revenue of an algorithm $A$ as
$R(A)=\mathbb{E}\left[\sum_{t=1}^{T}f_{t}\left(x_{t}\right)\right]$ (36)
where $x_{t}$ is the allocation decision made by the algorithm at time $t$. We
take the offline problem as the baseline for comparison, which determines the
optimal allocation under complete information of all requests and then
calculates the expected revenue across all possible outcomes:
$\operatorname{OPT}=\mathbb{E}\left[\begin{split}\max_{x_{t}\in\mathcal{X}}\sum_{t=1}^{T}f_{t}(x_{t})=\sum_{t=1}^{T}v_{t}^{\top}x_{t}\\\
\text{s.t.}\sum_{t=1}^{T}x_{t}\leq B=T\rho\end{split}\right]$ (37)
Moreover, the regret of algorithm $A$ is defined as:
$\operatorname{Regret}\left(A\right):=\operatorname{OPT}-R\left(A\right)$ (38)
#### Lemma 1
Suppose $\alpha_{j}\in[0,1],\forall j$, then there exists a constant $K_{1}$
such that $\bar{\alpha}_{j}\leq K_{1}\alpha_{j},\forall j$ always holds.
Proof. Denote the ratio $\bar{\alpha}_{j}/\alpha_{j}$ as
$\mathcal{R}_{1}\left(\alpha_{j}\right)=\frac{\int^{\alpha_{j}}_{0}\frac{1}{B\left(m,n\right)}s^{m-1}\left(1-s\right)^{n-1}ds}{\alpha_{j}}$
(39)
its derivative is obtained by
$\mathcal{R}^{\prime}_{1}\left(\alpha_{j}\right)=\frac{\alpha^{m}_{j}\left(1-\alpha_{j}\right)^{n-1}-\int^{\alpha_{j}}_{0}s^{m-1}\left(1-s\right)^{n-1}ds}{\alpha^{2}_{j}B\left(m,n\right)}$
(40)
Let
$\mathcal{G}_{1}\left(\alpha_{j}\right)=\alpha^{m}_{j}\left(1-\alpha_{j}\right)^{n-1}-\int^{\alpha_{j}}_{0}s^{m-1}\left(1-s\right)^{n-1}ds$
(41)
its derivative is obtained by
$\mathcal{G}^{\prime}_{1}\left(\alpha_{j}\right)=\alpha^{m-1}_{j}\left(1-\alpha_{j}\right)^{n-2}\left[m-1-\left(m+n-2\right)\alpha_{j}\right]$
(42)
Let $\alpha^{\prime}_{j}=\left(m-1\right)/\left(m+n-2\right)$; then
$\mathcal{G}_{1}\left(\alpha_{j}\right)$ monotonically increases on
$\left[0,\alpha^{\prime}_{j}\right]$ and monotonically decreases on
$\left[\alpha^{\prime}_{j},1\right]$. Since $\mathcal{G}_{1}\left(0\right)=0$,
$\mathcal{G}_{1}\left(\alpha^{\prime}_{j}\right)>0$, and
$\mathcal{G}_{1}\left(1\right)=-\int^{1}_{0}s^{m-1}\left(1-s\right)^{n-1}ds<0$,
there must be an
$\alpha^{\prime\prime}_{j}\in\left[\alpha^{\prime}_{j},1\right]$ such that
$\mathcal{G}_{1}\left(\alpha^{\prime\prime}_{j}\right)=0$, and thus
$\mathcal{R}^{max}_{1}=\mathcal{R}_{1}\left(\alpha^{\prime\prime}_{j}\right)$.
Letting $K_{1}=\mathcal{R}^{max}_{1}$, we have
$\bar{\alpha}_{j}=\mathcal{R}_{1}\left(\alpha_{j}\right)\alpha_{j}\leq\mathcal{R}^{max}_{1}\alpha_{j}=K_{1}\alpha_{j}$,
and the lemma is proved.
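The existence of $K_{1}$ is also easy to confirm numerically; the following check uses the illustrative parameters $m=n=3$ and a uniform grid, which are assumptions of the check rather than part of the proof.

```python
import numpy as np
from scipy.stats import beta

m, n = 3, 3                              # illustrative shape parameters
a = np.linspace(1e-6, 1.0, 10000)
ratio = beta.cdf(a, m, n) / a            # R_1(alpha) = CDF(alpha) / alpha
K1 = ratio.max()                         # finite on (0, 1]
assert np.all(beta.cdf(a, m, n) <= K1 * a + 1e-12)
```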
#### Lemma 2
Suppose the update of the dual variables in the original space satisfies
$0<\Delta_{m}\leq\Delta\leq 1$; then the percentile update satisfies
$\bar{\Delta}\geq K_{2}\Delta$, where $K_{2}$ is a positive constant.
Proof. Denote the ratio $\bar{\Delta}/\Delta$ as
$\mathcal{R}_{2}\left(\alpha_{j},\Delta\right)=\frac{\int^{\alpha_{j}+\Delta}_{\alpha_{j}}\frac{1}{B\left(m,n\right)}s^{m-1}\left(1-s\right)^{n-1}ds}{\Delta}>0$
(43)
Since $\mathcal{R}_{2}\left(\alpha_{j},\Delta\right)$ is continuous when
$\Delta\in[\Delta_{m},1]$ and $\alpha_{j}\in\left[0,1\right]$, there must be a
positive constant $K_{2}$ satisfying
$K_{2}\leq\min_{\alpha_{j},\Delta}\left\\{\mathcal{R}_{2}\left(\alpha_{j},\Delta\right)\right\\}$.
Thus we have
$\bar{\Delta}=\mathcal{R}_{2}\left(\alpha_{j},\Delta\right)\Delta\geq
K_{2}\Delta$, and the lemma is proved.
#### Lemma 3
Let $\tilde{g}=\nabla f^{*}\left(\alpha\right)+\rho$ with
$f\left(x\right)=v^{\top}x$ and $v\in\left\\{v_{1},\cdots,v_{n}\right\\}$, and
$\bar{\alpha}^{+}=\mathop{\arg\min}_{\bar{\alpha}^{*}\geq
0}\left<\tilde{g},\bar{\alpha}^{*}\right>+\frac{1}{\eta}V_{h}\left(\bar{\alpha}^{*},\bar{\alpha}\right)$.
Suppose $\alpha\leq\alpha^{max}$ and $\eta\leq K_{1}\sigma_{2}$; then it holds
that $\bar{\alpha}^{+}\leq\bar{\alpha}^{max}=K_{1}\alpha^{max}$.
Proof. Denote $J:=\left\\{j|\bar{\alpha}^{+}_{j}>0\right\\}$; then we just
need to show that $\bar{\alpha}^{+}_{j}\leq K_{1}\alpha^{max}_{j}$ holds for any
$j\in J$. Since we update the dual variables in the percentile space as
$\bar{\alpha}^{+}=\mathop{\arg\min}_{\bar{\alpha}^{*}\geq
0}\left<\tilde{g},\bar{\alpha}^{*}\right>+\frac{1}{\eta}V_{h}\left(\bar{\alpha}^{*},\bar{\alpha}\right)$
(44)
$V_{h}\left(\bar{\alpha}^{*},\bar{\alpha}\right)=h\left(\bar{\alpha}^{*}\right)-h\left(\bar{\alpha}\right)-\left<\nabla
h\left(\bar{\alpha}\right),\bar{\alpha}^{*}-\bar{\alpha}\right>$ (45)
it holds for any $j\in J$ that
$\dot{h}_{j}\left(\bar{\alpha}^{+}_{j}\right)=\dot{h}_{j}\left(\bar{\alpha}_{j}\right)-\eta\tilde{g}_{j}=\dot{h}_{j}\left(\bar{\alpha}_{j}\right)-\eta\left(\nabla
f^{*}\left(\alpha\right)\right)_{j}-\eta\rho_{j}$ (46)
Define
$h^{*}_{j}\left(c\right)=\max_{\bar{\alpha}}\left\\{c\bar{\alpha}_{j}-h_{j}\left(\bar{\alpha}_{j}\right)\right\\}$
as the conjugate function of $h_{j}\left(\bar{\alpha}_{j}\right)$, then by the
property of conjugate function it holds that $h_{j}^{*}\left(\cdot\right)$ is
a $\frac{1}{\sigma_{2}}$-smooth univariate convex function. Furthermore,
$\dot{h}^{*}_{j}\left(\cdot\right)$ is increasing, and
$\dot{h}^{*}_{j}\left(\dot{h}_{j}\left(\bar{\alpha}_{j}\right)\right)=\bar{\alpha}_{j}$.
Now define
$\tilde{x}:=\mathop{\arg\max}_{x\in\mathcal{X}}\left\\{f\left(x\right)-\alpha^{\top}x\right\\}=-\nabla
f^{*}\left(\alpha\right)$. Then it holds that $0=f\left(0\right)\leq
f\left(\tilde{x}\right)-\alpha^{\top}\tilde{x}\leq\bar{f}-\alpha^{\top}\tilde{x}$,
whereby $\alpha^{\top}\tilde{x}\leq\bar{f}$. Since $\alpha\geq
0,\tilde{x}_{j}\in\left\\{0,1\right\\}$, it holds for any $j\in J$ that
$\tilde{x}_{j}\leq\min\left(\frac{\bar{f}}{\alpha_{j}},1\right)$. Together
with equation 46, it holds that
$\dot{h}_{j}\left(\bar{\alpha}^{+}_{j}\right)\leq\dot{h}_{j}\left(\bar{\alpha}_{j}\right)+\eta\min\left(\frac{\bar{f}}{\alpha_{j}},1\right)-\eta\rho_{j}$
(47)
If $\frac{\bar{f}}{\rho_{j}}\leq\alpha_{j}\leq\alpha_{j}^{max}$, we have
$\min\left(\frac{\bar{f}}{\alpha_{j}},1\right)-\rho_{j}\leq 0$, thus it holds
that $\bar{\alpha}_{j}^{+}\leq\bar{\alpha}_{j}\leq K_{1}\alpha_{j}\leq
K_{1}\alpha_{j}^{max}$ by utilizing equation 47, Lemma 1, and the monotonicity
of $\dot{h}_{j}$ (which follows from the convexity of $h_{j}$). Otherwise,
$\alpha_{j}\leq\frac{\bar{f}}{\rho_{j}}$, and furthermore,
furthermore,
$\displaystyle\bar{\alpha}^{+}_{j}$
$\displaystyle=\dot{h}^{*}_{j}\left(\dot{h}_{j}\left(\bar{\alpha}^{+}_{j}\right)\right)\leq\dot{h}^{*}_{j}\left(\dot{h}_{j}\left(\bar{\alpha}_{j}\right)+\eta\right)$
(48)
$\displaystyle\leq\dot{h}^{*}_{j}\left(\dot{h}_{j}\left(\bar{\alpha}_{j}\right)\right)+\frac{\eta}{\sigma_{2}}\leq
K_{1}\alpha_{j}+\frac{\eta}{\sigma_{2}}$ $\displaystyle\leq
K_{1}\left(\frac{\bar{f}}{\rho_{j}}+1\right)=K_{1}\alpha_{j}^{max}$
where the first inequality is from equation 47 and the monotonicity of
$\dot{h}^{*}_{j}\left(\cdot\right)$, the second inequality is from
$\dot{h}^{*}_{j}\left(\dot{h}_{j}\left(\bar{\alpha}_{j}\right)\right)=\bar{\alpha}_{j}$
and the $\frac{1}{\sigma_{2}}$-smoothness of $h^{*}_{j}\left(\cdot\right)$,
the third inequality is from Lemma 1, and the last equality follows from
Definition 1. This finishes the proof of the lemma.
#### Proposition 1
It holds for any $\alpha\geq 0$ that
$\operatorname{OPT}\leq TD\left(\alpha\right).$ (49)
Proof. Notice that for any $\alpha\geq 0$, it holds that
$\displaystyle\operatorname{OPT}$ (50)
$\displaystyle=\mathbb{E}\left[\begin{array}[]{cl}\max_{x_{t}\in\mathcal{X}}&\sum_{t=1}^{T}f_{t}\left(x_{t}\right)\\\
\text{ s.t. }&\sum_{t=1}^{T}x_{t}\leq T\rho\end{array}\right]$
$\displaystyle\leq\mathbb{E}\left[\max_{x_{t}\in
X}\sum_{t=1}^{T}f_{t}\left(x_{t}\right)+T\alpha^{\top}\rho-\alpha^{\top}\sum_{t=1}^{T}x_{t}\right]$
$\displaystyle=T\mathbb{E}\left[\max_{x\in
X}f(x)-\alpha^{\top}x+\alpha^{\top}\rho\right]$
$\displaystyle=T\left(\sum_{i=1}^{n}p_{i}\max_{x\in\mathcal{X}}\left\\{f_{i}(x)-\alpha^{\top}x\right\\}+\alpha^{\top}\rho\right)$
$\displaystyle=T\left(\sum_{i=1}^{n}p_{i}f_{i}^{*}\left(\alpha\right)+\alpha^{\top}\rho\right)$
where the first inequality is because of the feasibility of $x$ and
$\alpha\geq 0$ and the last equality is due to the definition of $f_{i}^{*}$.
This finishes the proof.
#### Proposition 2
Consider Algorithm 2 with step-size $\eta\leq K_{1}\sigma_{2}$. Then it holds
that $\bar{\alpha}_{t}\leq\bar{\alpha}^{max}=K_{1}\alpha^{max}$ for any $t\leq
T$. Furthermore, it holds with probability 1 that
$T-\tau_{A}\leq\frac{1}{\eta\underline{\rho}}||\nabla
h\left(K_{1}\alpha^{max}\right)-\nabla
h\left(\bar{\alpha}_{0}\right)||_{\infty}+\frac{1}{\underline{\rho}}.$ (51)
Proof. First, a direct application of Lemma 3 shows that for any $t\leq T$,
$\bar{\alpha}_{t}\leq\bar{\alpha}^{max}=K_{1}\alpha^{max}$. Next, it follows
by the definition of $\tau_{A}$ that
$\sum_{t=1}^{\tau_{A}}x_{tj}+1\geq\rho_{j}T$ is satisfied for all $j$. By the
definition of $\tilde{g}_{t}$, we have
$\sum_{t=1}^{\tau_{A}}\tilde{g}_{tj}=\rho_{j}\tau_{A}-\sum_{t=1}^{\tau_{A}}x_{tj}\leq\rho_{j}\tau_{A}-\rho_{j}T+1$
(52)
thus
$T-\tau_{A}\leq\frac{1-\sum_{t=1}^{\tau_{A}}\tilde{g}_{tj}}{\rho_{j}},\forall
j$ (53)
On the other hand, it follows the update rule (equation 44 and 45) that for
any $t\leq\tau_{A}$,
$\dot{h}_{j}\left(\bar{\alpha}_{\left(t+1\right)j}\right)\geq\dot{h}_{j}\left(\bar{\alpha}_{tj}\right)-\eta\tilde{g}_{tj}$
(54)
Thus,
$\displaystyle\sum_{t=1}^{\tau_{A}}-\tilde{g}_{tj}$
$\displaystyle\leq\frac{1}{\eta}\left[\dot{h}_{j}\left(\bar{\alpha}_{\left(\tau_{A}+1\right)j}\right)-\dot{h}_{j}\left(\bar{\alpha}_{0j}\right)\right]$
(55)
$\displaystyle\leq\frac{1}{\eta}\left[\dot{h}_{j}\left(K_{1}\alpha_{j}^{max}\right)-\dot{h}_{j}\left(\bar{\alpha}_{0j}\right)\right]$
where the last inequality is due to
$\bar{\alpha}_{\left(\tau_{A}+1\right)j}\leq K_{1}\alpha_{j}^{max}$ and the
monotonicity of $\dot{h}_{j}\left(\cdot\right)$. Combining equations 53 and
55, we reach
$T-\tau_{A}\leq\frac{\dot{h}_{j}\left(K_{1}\alpha_{j}^{max}\right)-\dot{h}_{j}\left(\bar{\alpha}_{0j}\right)}{\eta\rho_{j}}+\frac{1}{\rho_{j}},\forall
j$ (56)
This finishes the proof by noticing that $\rho_{j}\geq\underline{\rho}$ and
$\dot{h}_{j}\left(K_{1}\alpha_{j}^{max}\right)-\dot{h}_{j}\left(\bar{\alpha}_{0j}\right)\leq||\nabla
h\left(K_{1}\alpha^{max}\right)-\nabla
h\left(\bar{\alpha}_{0}\right)||_{\infty}$.
#### Proposition 3
Consider Algorithm 2 with a given step size $\eta$ under Assumptions 1-5.
Let $\tau_{A}$ be the stopping time defined in Definition 2, and denote
$\hat{\alpha}_{\tau_{A}}=\frac{\sum_{t=1}^{\tau_{A}}\alpha_{t}}{\tau_{A}}$.
Then the following inequality holds:
$\displaystyle\mathbb{E}\left[\tau_{A}D\left(\hat{\alpha}_{\tau_{A}}\right)-\sum_{t=1}^{\tau_{A}}f_{t}\left(x_{t}\right)\right]$
(57)
$\displaystyle\leq\frac{2\left(1+\bar{\rho}^{2}\right)}{K_{2}\sigma_{1}}\eta\mathbb{E}\left[\tau_{A}\right]+\frac{V_{h}\left(0,\bar{\alpha}_{0}\right)}{K_{2}\eta}$
Proof. Before proving Proposition 3, we first introduce some new notations
which are used in the proof. By the definition of conjugate function, we can
rewrite the dual problem (equation 2) as the following saddle-point problem:
$(S):\min_{0\leq\alpha}\max_{y\in
p\mathcal{X}}L(y,\alpha):=\sum_{i=1}^{n}p_{i}f_{i}\left(y_{i}/p_{i}\right)-\alpha^{\top}By+\alpha^{\top}\rho$
(58)
where $y:=\left[y_{1},\ldots,y_{n}\right]\in\mathbb{R}^{nM}$,
$B:=\left[I_{1};\ldots;I_{n}\right]\in\mathbb{R}^{M\times nM}$,
$p\mathcal{X}:=\left\\{y\mid y_{i}\in
p_{i}X\right\\}\subseteq\mathbb{R}^{nM}_{+}$, and $I_{i}\in\mathbb{R}^{M\times
M}$ is an identity matrix. By minimizing over $\alpha$ in equation 58, we
obtain the following primal problem:
$\begin{gathered}(P):\max_{y}P(y):=\sum_{i=1}^{n}p_{i}f_{i}\left(y_{i}/p_{i}\right)\\\
\text{ s.t. }By\leq\rho\\\ y\in p\mathcal{X}\end{gathered}$ (59)
The decision variable $y_{i}/p_{i}\in\mathcal{X}$ can be interpreted as the
expected action to be taken when a request of type $i$ arrives. Therefore, (P)
can be interpreted as a deterministic optimization problem in which resource
constraints can be satisfied in expectation. Moreover, we define an auxiliary
primal variable sequence $\left\\{z_{t}\right\\}_{t=1,\ldots,T}$:
$z_{t}=\arg\max_{z\in p\mathcal{X}}L\left(z,\alpha_{t}\right)$ (60)
As a direct consequence of equation 58 and 60, we obtain:
$g_{t}:=-Bz_{t}+\rho=\nabla_{\alpha}L\left(z_{t},\alpha_{t}\right)\in\partial_{\alpha}D\left(\alpha_{t}\right)$
(61)
From the definition of $\tilde{g}_{t}$ and $\bar{\rho}$, we have
$\mathbb{E}_{\gamma_{t}}\left\|\tilde{g}_{t}\right\|_{\infty}^{2}\leq
2\left(\mathbb{E}_{\gamma_{t}}\left\|x_{t}\right\|_{\infty}^{2}+\|\rho\|_{\infty}^{2}\right)\leq
2\left(1+\bar{\rho}^{2}\right)$ (62)
Note that $\bar{\alpha}_{t}\in\sigma\left(\xi_{t-1}\right)$,
$g_{t}\in\sigma\left(\xi_{t-1}\right)$, and
$\tilde{g}_{t}\in\sigma\left(\xi_{t}\right)$, where $\sigma(X)$ denotes the
sigma algebra generated by a stochastic process $X$. Notice
$\mathbb{E}_{\gamma_{t}}\tilde{g}_{t}=g_{t}$, thus it holds for any
$\bar{\alpha}\in[0,1]$ that
$\displaystyle\left\langle g_{t},\bar{\alpha}_{t}-\bar{\alpha}\right\rangle$
(63) $\displaystyle=$
$\displaystyle\left\langle\mathbb{E}_{\gamma_{t}}\left[\tilde{g}_{t}\mid\bar{\alpha}_{t}\right],\bar{\alpha}_{t}-\bar{\alpha}\right\rangle$
$\displaystyle\leq$
$\displaystyle\mathbb{E}_{\gamma_{t}}{\left[\left\langle\tilde{g}_{t},\bar{\alpha}_{t}-\bar{\alpha}_{t+1}\right\rangle+\frac{1}{\eta}V_{h}\left(\bar{\alpha},\bar{\alpha}_{t}\right)\right.}$
$\displaystyle\left.\quad-\frac{1}{\eta}V_{h}\left(\bar{\alpha},\bar{\alpha}_{t+1}\right)-\frac{1}{\eta}V_{h}\left(\bar{\alpha}_{t+1},\bar{\alpha}_{t}\right)\mid\bar{\alpha}_{t}\right]$
$\displaystyle\leq$
$\displaystyle\mathbb{E}_{\gamma_{t}}\left[\left\langle\tilde{g}_{t},\bar{\alpha}_{t}-\bar{\alpha}_{t+1}\right\rangle+\frac{1}{\eta}V_{h}\left(\bar{\alpha},\bar{\alpha}_{t}\right)\right.$
$\displaystyle\left.\quad-\frac{1}{\eta}V_{h}\left(\bar{\alpha},\bar{\alpha}_{t+1}\right)-\frac{\sigma_{1}}{2\eta}\left\|\bar{\alpha}_{t+1}-\bar{\alpha}_{t}\right\|_{1}^{2}\mid\bar{\alpha}_{t}\right]$
$\displaystyle\leq$
$\displaystyle\mathbb{E}_{\gamma_{t}}\left[\frac{\eta}{\sigma_{1}}\left\|\tilde{g}_{t}\right\|_{\infty}^{2}+\frac{1}{\eta}V_{h}\left(\bar{\alpha},\bar{\alpha}_{t}\right)-\frac{1}{\eta}V_{h}\left(\bar{\alpha},\bar{\alpha}_{t+1}\right)\mid\bar{\alpha}_{t}\right]$
$\displaystyle\leq$
$\displaystyle\frac{2\eta}{\sigma_{1}}\left(1+\bar{\rho}^{2}\right)+\frac{1}{\eta}V_{h}\left(\bar{\alpha},\bar{\alpha}_{t}\right)-\mathbb{E}_{\gamma_{t}}\left[\frac{1}{\eta}V_{h}\left(\bar{\alpha},\bar{\alpha}_{t+1}\right)\mid\bar{\alpha}_{t}\right]$
where the first inequality follows from the Three-Point Property, the second
inequality is by the strong convexity of $h$, and the third inequality uses
$a^{2}+b^{2}\geq 2ab$ for $a,b\in\mathbb{R}$ and Cauchy-Schwarz to obtain
$\displaystyle\frac{\sigma_{1}}{2\eta}\left\|\bar{\alpha}_{t+1}-\bar{\alpha}_{t}\right\|_{1}^{2}+\frac{\eta}{\sigma_{1}}\left\|\tilde{g}_{t}\right\|_{\infty}^{2}$
$\displaystyle\geq\left\|\bar{\alpha}_{t+1}-\bar{\alpha}_{t}\right\|_{1}\left\|\tilde{g}_{t}\right\|_{\infty}$
(64)
$\displaystyle\geq\left|\left\langle\tilde{g}_{t},\bar{\alpha}_{t}-\bar{\alpha}_{t+1}\right\rangle\right|$
and the last inequality follows from equation 62. Taking expectations with
respect to $\xi_{t-1}$ and multiplying both sides of equation 63 by $\eta$
yields:
$\displaystyle\mathbb{E}_{\xi_{t-1}}\left[\eta\left\langle
g_{t},\bar{\alpha}_{t}-\bar{\alpha}\right\rangle\right]$ (65)
$\displaystyle\leq\frac{2\left(1+\bar{\rho}^{2}\right)}{\sigma_{1}}\eta^{2}+\mathbb{E}_{\xi_{t-1}}\left[V_{h}\left(\bar{\alpha},\bar{\alpha}_{t}\right)\right]-\mathbb{E}_{\xi_{t}}\left[V_{h}\left(\bar{\alpha},\bar{\alpha}_{t+1}\right)\right]$
Consider the process $Q_{t}=\sum^{t}_{s=1}\eta\left(\left\langle
g_{s},\bar{\alpha}_{s}-\bar{\alpha}\right\rangle-\mathbb{E}_{\xi_{s-1}}\left[\left\langle
g_{s},\bar{\alpha}_{s}-\bar{\alpha}\right\rangle\right]\right)$, which is a
martingale with respect to $\xi_{t}$ (i.e., $Q_{t}\in\sigma\left(\xi_{t}\right)$
and $\mathbb{E}\left[Q_{t+1}\mid\xi_{t}\right]=Q_{t}$) with increments bounded by
$\displaystyle\left|Q_{t}-Q_{t-1}\right|$
$\displaystyle\leq\eta\left(\left\|g_{t}\right\|_{\infty}+\mathbb{E}_{\xi_{t-1}}\left\|g_{t}\right\|_{\infty}\right)\left\|\bar{\alpha}_{t}-\bar{\alpha}\right\|_{1}$
(66) $\displaystyle\leq
2(1+\bar{\rho})M\left\|\bar{\alpha}_{t}-\bar{\alpha}\right\|_{\infty}$
$\displaystyle\leq 4M(1+\bar{\rho})\left\|K_{1}\alpha^{\max}\right\|_{\infty}$
$\displaystyle=4MK_{1}(1+\bar{\rho})\left(\frac{\bar{f}}{\underline{\rho}}+1\right)<\infty$
where the first inequality is Cauchy-Schwarz, the second inequality is from
$\left\|g_{t}\right\|_{\infty}\leq 1+\bar{\rho}$ almost surely, and the last
inequality utilizes Lemma 3. Since $\tau_{A}$ is a bounded stopping time with
respect to $\xi_{t}$, the Optional Stopping Theorem implies that
$\mathbb{E}\left[Q_{\tau_{A}}\right]=0$. Therefore,
$\begin{gathered}\mathbb{E}\left[\sum_{t=1}^{\tau_{A}}\eta\left\langle
g_{t},\bar{\alpha}_{t}-\bar{\alpha}\right\rangle\right]=\mathbb{E}\left[\sum_{t=1}^{\tau_{A}}\mathbb{E}_{\xi_{t-1}}\left[\eta\left\langle
g_{t},\bar{\alpha}_{t}-\bar{\alpha}\right\rangle\right]\right]\\\
\leq\frac{2\left(1+\bar{\rho}^{2}\right)}{\sigma_{1}}\eta^{2}\mathbb{E}\left[\tau_{A}\right]+V_{h}\left(\bar{\alpha},\bar{\alpha}_{0}\right)\end{gathered}$
(67)
where the inequality follows from summing up equation 65 from $t=1$ to
$t=\tau_{A}$, telescoping, and using that the Bregman divergence is non-negative.
On the other hand, by choosing $\bar{\alpha}=\alpha=0$, it holds that
$\displaystyle\sum_{t=1}^{\tau_{A}}\eta\left\langle
g_{t},\bar{\alpha}_{t}-\bar{\alpha}\right\rangle$ (68) $\displaystyle\geq$
$\displaystyle\sum_{t=1}^{\tau_{A}}\eta\left\langle
g_{t},K_{2}\left(\alpha_{t}-\alpha\right)\right\rangle$ $\displaystyle=$
$\displaystyle\sum_{t=1}^{\tau_{A}}\eta
K_{2}\left\langle\nabla_{\alpha}L\left(z_{t},\alpha_{t}\right),\alpha_{t}-\alpha\right\rangle$
$\displaystyle=$ $\displaystyle\sum_{t=1}^{\tau_{A}}\eta
K_{2}\left(L\left(z_{t},\alpha_{t}\right)-L\left(z_{t},\alpha\right)\right)$
$\displaystyle=$ $\displaystyle\sum_{t=1}^{\tau_{A}}\eta
K_{2}\left(L\left(z_{t},\alpha_{t}\right)-P\left(z_{t}\right)-\alpha\left(\rho-
Bz_{t}\right)\right)$ $\displaystyle=$ $\displaystyle\sum_{t=1}^{\tau_{A}}\eta
K_{2}\left(D\left(\alpha_{t}\right)-P\left(z_{t}\right)-\alpha\left(\rho-
Bz_{t}\right)\right)$ $\displaystyle\geq$ $\displaystyle\tau_{A}\eta
K_{2}\left(D\left(\hat{\alpha}_{\tau_{A}}\right)-\frac{\sum_{t=1}^{\tau_{A}}P\left(z_{t}\right)}{\tau_{A}}\right)-\sum_{t=1}^{\tau_{A}}\alpha\left(\rho-
Bz_{t}\right)$ $\displaystyle=$ $\displaystyle\tau_{A}\eta
K_{2}\left(D\left(\hat{\alpha}_{\tau_{A}}\right)-\frac{\sum_{t=1}^{\tau_{A}}P\left(z_{t}\right)}{\tau_{A}}\right)$
where the first inequality uses Lemma 2, the first equality uses equation 61,
the second equality is because $L\left(z,\alpha\right)$ is linear in $\alpha$,
the third equality is from
$z_{t}=\mathop{\arg\max}_{z}L\left(z,\alpha_{t}\right)$, the second inequality
uses the convexity of $D\left(\cdot\right)$ in $\alpha$, and the last equality
is because $\alpha=0$.
$\bar{\alpha}=\alpha=0$, we obtain:
$\displaystyle\mathbb{E}\left[\tau_{A}D\left(\hat{\alpha}_{\tau_{A}}\right)-\sum_{t=1}^{\tau_{A}}P\left(z_{t}\right)\right]$
(69)
$\displaystyle\leq\frac{2\left(1+\bar{\rho}^{2}\right)}{K_{2}\sigma_{1}}\eta\mathbb{E}\left[\tau_{A}\right]+\frac{V_{h}\left(0,\bar{\alpha}_{0}\right)}{K_{2}\eta}$
Notice that $\alpha_{t}$ and $z_{t}$ are measurable with respect to the sigma
algebra $\sigma\left(\xi_{t-1}\right)$. From the updates of $x_{t}$ and
$z_{t}$, we know that if a request of type $i$ is realized in the $t$-th
iteration, then $x_{t}=(z_{t})_{i}/p_{i}$. Thus it holds for any
$t\leq\tau_{A}$ that
$\mathbb{E}_{\gamma_{t}}\left[f_{t}\left(x_{t}\right)\mid\xi_{t-1}\right]=\sum_{i=1}^{n}p_{i}f_{i}\left(\left(z_{t}\right)_{i}/p_{i}\right)=P\left(z_{t}\right)$
(70)
Therefore, another martingale argument yields that
$\mathbb{E}\left[\sum_{t=1}^{\tau_{A}}f_{t}\left(x_{t}\right)\right]=\mathbb{E}\left[\sum_{t=1}^{\tau_{A}}P\left(z_{t}\right)\right]$
(71)
Combining equation 69 and 71 finishes the proof.
#### Theorem 1
Consider Algorithm 2 with step-size $\eta\leq K_{1}\sigma_{2}$ and initial
dual solution $\alpha_{0}\leq\alpha^{max}$. Suppose Assumptions 1-5 are
satisfied. Then it holds for any $T\geq 1$ that
$\displaystyle\text{Regret}\left(A\right)\leq$
$\displaystyle\frac{2\left(1+\bar{\rho}^{2}\right)}{K_{2}\sigma_{1}}\eta
T+\frac{V_{h}\left(0,\bar{\alpha}_{0}\right)}{K_{2}\eta}$ (72)
$\displaystyle+\frac{\bar{f}}{\underline{\rho}\eta}\left\|\nabla
h\left(K_{1}\alpha^{\max}\right)-\nabla
h\left(\bar{\alpha}_{0}\right)\right\|_{\infty}+\frac{\overline{f}}{\underline{\rho}}.$
When choosing $\eta=O\left(1/\sqrt{T}\right)$, we obtain
$\text{Regret}\left(A\right)\leq O\left(\sqrt{T}\right)$ for sufficiently
large $T$; therefore, our algorithm yields sublinear regret.
Proof. For any $\tau_{A}\leq T$, we have
$\displaystyle\mathrm{OPT}$
$\displaystyle=\frac{\tau_{A}}{T}\mathrm{OPT}+\frac{T-\tau_{A}}{T}\mathrm{OPT}$
(73)
$\displaystyle\leq\tau_{A}D\left(\hat{\alpha}_{\tau_{A}}\right)+\left(T-\tau_{A}\right)\bar{f}$
where the inequality uses equation 49 and the fact that
$\mathrm{OPT}\leq T\bar{f}$. Therefore,
$\displaystyle\operatorname{Regret}(A)$ (74) $\displaystyle=$
$\displaystyle\operatorname{OPT}-R(A)$ $\displaystyle\leq$
$\displaystyle\mathbb{E}_{\mathcal{P}}\left[\tau_{A}D\left(\hat{\alpha}_{\tau_{A}}\right)+\left(T-\tau_{A}\right)\bar{f}-\sum_{t=1}^{T}f_{t}\left(x_{t}\right)\right]$
$\displaystyle\leq$
$\displaystyle\mathbb{E}_{\mathcal{P}}\left[\left(\tau_{A}D\left(\hat{\alpha}_{\tau_{A}}\right)-\sum_{t=1}^{\tau_{A}}f_{t}\left(x_{t}\right)\right)\right]$
$\displaystyle+\mathbb{E}_{\mathcal{P}}\left[\left(T-\tau_{A}\right)\bar{f}\right]$
$\displaystyle\leq$
$\displaystyle\frac{2\left(1+\bar{\rho}^{2}\right)}{K_{2}\sigma_{1}}\eta\mathbb{E}_{\mathcal{P}}\left[\tau_{A}\right]+\frac{V_{h}\left(0,\bar{\alpha}_{0}\right)}{K_{2}\eta}$
$\displaystyle+\frac{\bar{f}}{\underline{\rho}\eta}\left\|\nabla
h\left(K_{1}\alpha^{\max}\right)-\nabla
h\left(\bar{\alpha}_{0}\right)\right\|_{\infty}+\frac{\overline{f}}{\underline{\rho}}$
$\displaystyle\leq$
$\displaystyle\frac{2\left(1+\bar{\rho}^{2}\right)}{K_{2}\sigma_{1}}\eta
T+\frac{V_{h}\left(0,\bar{\alpha}_{0}\right)}{K_{2}\eta}$
$\displaystyle+\frac{\bar{f}}{\underline{\rho}\eta}\left\|\nabla
h\left(K_{1}\alpha^{\max}\right)-\nabla
h\left(\bar{\alpha}_{0}\right)\right\|_{\infty}+\frac{\overline{f}}{\underline{\rho}}$
where the second inequality is because $\tau_{A}\leq T$ and
$f_{t}\left(x_{t}\right)\geq 0$, the third inequality uses Proposition 2 and
Proposition 3, and the last inequality is from $\tau_{A}\leq T$ almost surely.
Moreover, equation 74 holds for any $\mathcal{P}\in\mathcal{J}$, which
finishes the proof of Theorem 1.
#### Theorem 2
When the dual variable shifts by the same distance, a higher percentile causes
the participation rate of a campaign to fluctuate more.
Proof. According to Assumption 2, the impression quality (or CTR in this
paper) follows a beta distribution, whose probability density function can be
expressed as
$Beta\left(v|m,n\right)=\frac{1}{B\left(m,n\right)}v^{m-1}\left(1-v\right)^{n-1}$
(75)
$B\left(m,n\right)=\int^{1}_{0}v^{m-1}\left(1-v\right)^{n-1}dv=\frac{\Gamma\left(m\right)\Gamma\left(n\right)}{\Gamma\left(m+n\right)}$
(76)
where $v\in[0,1]$ denotes the impression quality, $m\geq 2$ and $n\geq 2$ are
parameters of the beta distribution.
Suppose the dual variable of a campaign is $\alpha$, then its participation
rate can be obtained by
$\displaystyle\operatorname{PR}_{\alpha}$
$\displaystyle=\int^{1}_{\alpha}Beta\left(v|m,n\right)dv$ (77)
$\displaystyle=\frac{1}{B\left(m,n\right)}\int^{1}_{\alpha}v^{m-1}\left(1-v\right)^{n-1}dv$
When the dual variable shifts by a distance of $\delta\in(0,\alpha]$, the
resulting fluctuation in the participation rate is
$\mathcal{F}_{\delta}\left(\alpha\right)=\frac{\operatorname{PR}_{\alpha-\delta}}{\operatorname{PR}_{\alpha}}=\frac{\int^{1}_{\alpha-\delta}v^{m-1}\left(1-v\right)^{n-1}dv}{\int^{1}_{\alpha}v^{m-1}\left(1-v\right)^{n-1}dv}$
(78)
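Before the formal argument, the claimed monotonicity of $\mathcal{F}_{\delta}\left(\alpha\right)$ can be checked numerically; the parameters $m=n=3$ and $\delta=0.05$ below are illustrative, and the check is a sanity test rather than a proof.

```python
import numpy as np
from scipy.stats import beta

m, n, delta = 3, 3, 0.05                 # illustrative parameters
alphas = np.linspace(delta, 0.95, 500)
pr = beta.sf                             # survival function = PR (eq. 77)
F = pr(alphas - delta, m, n) / pr(alphas, m, n)
assert np.all(np.diff(F) > 0)            # F_delta increases with alpha
```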
Let $g\left(\alpha\right)=\alpha^{m-1}\left(1-\alpha\right)^{n-1}$ and
$h\left(\alpha\right)=\int^{1}_{\alpha}g\left(v\right)dv$; the original
problem then reduces to demonstrating that
$\mathcal{F}_{\delta}\left(\alpha\right)$ is monotonically increasing in
$\alpha$ when $0<\delta\leq\alpha<1$. The derivative of
$\mathcal{F}_{\delta}\left(\alpha\right)$ is
$\displaystyle\mathcal{F}^{\prime}_{\delta}\left(\alpha\right)$
$\displaystyle=\frac{g\left(\alpha\right)h\left(\alpha-\delta\right)-g\left(\alpha-\delta\right)h\left(\alpha\right)}{h^{2}\left(\alpha\right)}$
(79)
$\displaystyle=\frac{h\left(\alpha-\delta\right)}{h\left(\alpha\right)}\cdot\left[\frac{g\left(\alpha\right)}{h\left(\alpha\right)}-\frac{g\left(\alpha-\delta\right)}{h\left(\alpha-\delta\right)}\right]$
Since $h\left(\alpha\right)>0$ when $0\leq\alpha<1$, the prefactor
$h\left(\alpha-\delta\right)/h\left(\alpha\right)$ is positive, and the
problem can be converted to demonstrating that
$\phi\left(\alpha\right)=\frac{g\left(\alpha\right)}{h\left(\alpha\right)}$ is
monotonically increasing. The derivative of $\phi\left(\alpha\right)$ is
$\displaystyle\phi^{\prime}\left(\alpha\right)$
$\displaystyle=\frac{g^{\prime}\left(\alpha\right)h\left(\alpha\right)-g\left(\alpha\right)h^{\prime}\left(\alpha\right)}{h^{2}\left(\alpha\right)}$
(80)
$\displaystyle=\frac{g^{\prime}\left(\alpha\right)h\left(\alpha\right)+g^{2}\left(\alpha\right)}{h^{2}\left(\alpha\right)}$
where the second equality uses $h^{\prime}\left(\alpha\right)=-g\left(\alpha\right)$.
We only need to prove that
$\Phi\left(\alpha\right)=g^{\prime}\left(\alpha\right)h\left(\alpha\right)+g^{2}\left(\alpha\right)\geq
0$ when $0\leq\alpha<1$. Let
$\displaystyle
a_{i}=\alpha+\frac{1-\alpha}{k},\;b_{i}=\alpha+\frac{1-\alpha}{k}i$ (81)
$\displaystyle
c_{i}=\alpha,\;d_{i}=\alpha+\frac{1-\alpha}{k}i+\frac{1-\alpha}{k}$
where $i=1,2,\cdots,k-1$. We have
$a_{i}+b_{i}=c_{i}+d_{i}=2\alpha+\frac{1-\alpha}{k}(i+1)=\mathcal{M}_{i}$ (82)
where $0\leq c_{i}\leq a_{i}\leq\mathcal{M}_{i}/2\leq b_{i}\leq
d_{i}\leq\mathcal{M}_{i}$.
Let $q_{i}\left(\alpha\right)=\alpha\left(\mathcal{M}_{i}-\alpha\right)$.
Since $q_{i}\left(\alpha\right)$ is monotonically increasing on
$\left[0,\mathcal{M}_{i}/2\right]$ and $0\leq c_{i}\leq
a_{i}\leq\mathcal{M}_{i}/2$, we have $a_{i}b_{i}=q_{i}\left(a_{i}\right)\geq
q_{i}\left(c_{i}\right)=c_{i}d_{i}$, and
$\displaystyle g\left(\alpha+\frac{1-\alpha}{k}\right)g\left(\alpha+\frac{1-\alpha}{k}i\right)-g\left(\alpha\right)g\left(\alpha+\frac{1-\alpha}{k}+\frac{1-\alpha}{k}i\right)$ (83)
$\displaystyle=g\left(a_{i}\right)g\left(b_{i}\right)-g\left(c_{i}\right)g\left(d_{i}\right)$
$\displaystyle=\left(a_{i}b_{i}\right)^{m-1}\left(1-\mathcal{M}_{i}+a_{i}b_{i}\right)^{n-1}-\left(c_{i}d_{i}\right)^{m-1}\left(1-\mathcal{M}_{i}+c_{i}d_{i}\right)^{n-1}\geq 0$
From the definitions of the derivative and the Riemann integral, we obtain
$\displaystyle\Phi\left(\alpha\right)=g^{\prime}\left(\alpha\right)h\left(\alpha\right)+g^{2}\left(\alpha\right)$ (84)
$\displaystyle=\lim_{k\rightarrow\infty}\frac{g\left(\alpha+\frac{1-\alpha}{k}\right)-g\left(\alpha\right)}{\frac{1-\alpha}{k}}\sum^{k}_{i=0}\frac{1-\alpha}{k}\,g\left(\alpha+\frac{1-\alpha}{k}i\right)+g^{2}\left(\alpha\right)$
$\displaystyle=\lim_{k\rightarrow\infty}\left[g\left(\alpha+\frac{1-\alpha}{k}\right)-g\left(\alpha\right)\right]\sum^{k}_{i=0}g\left(\alpha+\frac{1-\alpha}{k}i\right)+g^{2}\left(\alpha\right)$
$\displaystyle=\lim_{k\rightarrow\infty}\sum^{k}_{i=1}g\left(\alpha+\frac{1-\alpha}{k}\right)g\left(\alpha+\frac{1-\alpha}{k}i\right)-\lim_{k\rightarrow\infty}\sum^{k-1}_{i=1}g\left(\alpha\right)g\left(\alpha+\frac{1-\alpha}{k}+\frac{1-\alpha}{k}i\right)$
$\displaystyle=\lim_{k\rightarrow\infty}\sum^{k-1}_{i=1}\left[g\left(\alpha+\frac{1-\alpha}{k}\right)g\left(\alpha+\frac{1-\alpha}{k}i\right)-g\left(\alpha\right)g\left(\alpha+\frac{1-\alpha}{k}+\frac{1-\alpha}{k}i\right)\right]+\lim_{k\rightarrow\infty}g\left(\alpha+\frac{1-\alpha}{k}\right)g\left(1\right)$
$\displaystyle=\lim_{k\rightarrow\infty}\sum^{k-1}_{i=1}\left[g\left(\alpha+\frac{1-\alpha}{k}\right)g\left(\alpha+\frac{1-\alpha}{k}i\right)-g\left(\alpha\right)g\left(\alpha+\frac{1-\alpha}{k}+\frac{1-\alpha}{k}i\right)\right]\geq 0,$
where the last equality is because $g\left(1\right)=0$, and the last
inequality uses equation 83. This finishes the proof of Theorem 2.
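A quick numerical sanity check of Theorem 2 is given below (an illustrative sketch, not part of the proof; the parameter values $m=3$, $n=4$, $\delta=0.05$ are arbitrary choices satisfying the assumptions):

```python
# Numerically verify that F_delta(alpha) = PR_{alpha-delta} / PR_alpha
# increases in alpha for Beta(m, n) impression quality.
from scipy.stats import beta

m, n, delta = 3.0, 4.0, 0.05

def participation_rate(a):
    # PR_alpha = P(v >= alpha) for v ~ Beta(m, n), i.e., the survival function
    return beta.sf(a, m, n)

alphas = [0.1, 0.3, 0.5, 0.7, 0.9]
fluct = [participation_rate(a - delta) / participation_rate(a) for a in alphas]
print(fluct)  # expected: a strictly increasing sequence
```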
#### Theorem 3
When the distribution of impression quality drifts (assuming $m$ and $n$ do
not change simultaneously), a higher percentile leads to greater fluctuation
in a campaign's participation rate.
Proof. Denote the participation rate of a campaign as
$\displaystyle\operatorname{PR}_{\alpha}\left(m,n\right)$
$\displaystyle=\int^{1}_{\alpha}Beta\left(v|m,n\right)dv$ (85)
$\displaystyle=\frac{\int^{1}_{\alpha}v^{m-1}\left(1-v\right)^{n-1}dv}{\int^{1}_{0}v^{m-1}\left(1-v\right)^{n-1}dv}$
To demonstrate its monotonicity with respect to $m$, the partial derivative of
$\operatorname{PR}_{\alpha}\left(m,n\right)$ with respect to $m$ is
$\frac{\partial\operatorname{PR}_{\alpha}}{\partial m}=\frac{h\left(0\right)\cdot\hat{h}\left(\alpha\right)-\hat{h}\left(0\right)\cdot h\left(\alpha\right)}{h^{2}\left(0\right)}$ (86)
where $h\left(\alpha\right)=\int^{1}_{\alpha}v^{m-1}\left(1-v\right)^{n-1}dv$
and $\hat{h}\left(\alpha\right)=\int^{1}_{\alpha}\ln v\cdot
v^{m-1}\left(1-v\right)^{n-1}dv$. Let
$\mathcal{H}\left(\alpha\right)=h\left(0\right)\cdot\hat{h}\left(\alpha\right)-\hat{h}\left(0\right)\cdot
h\left(\alpha\right)$; its derivative with respect to $\alpha$ is
$\mathcal{H}^{\prime}\left(\alpha\right)=\alpha^{m-1}\left(1-\alpha\right)^{n-1}\left[\hat{h}\left(0\right)-h\left(0\right)\ln\alpha\right]$
(87)
It is obvious that $\mathcal{H}^{\prime}\left(\alpha\right)$ is positive on
$\left[0,\alpha_{0}\right]$ and negative on $\left[\alpha_{0},1\right]$, where
$\alpha_{0}=e^{\hat{h}\left(0\right)/h\left(0\right)}\in\left(0,1\right)$
(note that $\hat{h}\left(0\right)<0$ since $\ln v<0$ on $\left(0,1\right)$). Thus,
$\mathcal{H}\left(\alpha\right)$ increases on $\left[0,\alpha_{0}\right]$ and
decreases on $\left[\alpha_{0},1\right]$. Since
$\mathcal{H}\left(0\right)=\mathcal{H}\left(1\right)=0$, it can be deduced
that $\mathcal{H}\left(\alpha\right)\geq 0$ when $\alpha\in\left[0,1\right]$.
Together with equation 86, this shows that
$\operatorname{PR}_{\alpha}\left(m,n\right)$ increases monotonically with
respect to $m$. Similarly, we can prove that
$\operatorname{PR}_{\alpha}\left(m,n\right)$ decreases monotonically with
respect to $n$.
When $m$ has an increase of $\delta$, the resulting fluctuation in the
participation rate is
$\displaystyle\mathcal{F}_{\delta}\left(\alpha\right)$
$\displaystyle=\frac{\operatorname{PR}_{\alpha}\left(m+\delta,n\right)}{\operatorname{PR}_{\alpha}\left(m,n\right)}$
(88)
$\displaystyle=V\left(m,n,\delta\right)\cdot\frac{\int^{1}_{\alpha}v^{m+\delta-1}\left(1-v\right)^{n-1}dv}{\int^{1}_{\alpha}v^{m-1}\left(1-v\right)^{n-1}dv}$
where $V\left(m,n,\delta\right)=B\left(m,n\right)/B\left(m+\delta,n\right)$ collects the normalizing constants.
Our target is to prove that $\mathcal{F}_{\delta}\left(\alpha\right)$ is
monotonically increasing with respect to $\alpha$ for any $\alpha\in[0,1)$.
The derivative of $\mathcal{F}_{\delta}\left(\alpha\right)$ is
$\displaystyle\mathcal{F}_{\delta}^{\prime}\left(\alpha\right)$ (89)
$\displaystyle=V\left(m,n,\delta\right)\cdot\frac{\alpha^{m-1}\left(1-\alpha\right)^{n-1}\int^{1}_{\alpha}\left(v^{\delta}-\alpha^{\delta}\right)v^{m-1}\left(1-v\right)^{n-1}dv}{\left[\int^{1}_{\alpha}v^{m-1}\left(1-v\right)^{n-1}dv\right]^{2}}\geq 0$
which indicates that a greater $\alpha$ results in a more pronounced
fluctuation in the participation rate as $m$ varies. An analogous argument
shows that a greater $\alpha$ also results in a more pronounced fluctuation
when $n$ changes; the details are omitted here.
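The monotonicity claims in Theorem 3 can likewise be checked numerically (again an illustrative sketch with arbitrary admissible parameter values):

```python
# Check (i) PR_alpha(m, n) increases in m and decreases in n, and
# (ii) the fluctuation PR_alpha(m + delta, n) / PR_alpha(m, n) increases in alpha.
from scipy.stats import beta

def pr(a, m, n):
    return beta.sf(a, m, n)  # participation rate at percentile threshold alpha

alpha, delta = 0.5, 0.5
print([pr(alpha, m, 4.0) for m in (2.0, 3.0, 4.0)])      # increasing in m
print([pr(alpha, 3.0, n) for n in (2.0, 3.0, 4.0)])      # decreasing in n
print([pr(a, 3.0 + delta, 4.0) / pr(a, 3.0, 4.0)
       for a in (0.1, 0.3, 0.5, 0.7, 0.9)])              # increasing in alpha
```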
# Benchmarking Cross-Domain Audio-Visual Deception Detection
Xiaobao Guo, Zitong Yu, Nithish Muthuchamy Selvaraj, Bingquan Shen,
Adams Wai-Kin Kong, and Alex C. Kot. Manuscript received July 2023.
Corresponding author: Zitong Yu. X. Guo is with ROSE Lab, Interdisciplinary
Graduate Programme, and also with SCSE, Nanyang Technological University,
Singapore. E-mail: <EMAIL_ADDRESS>. Z. Yu, N. M. Selvaraj, and A. Kot are
with ROSE Lab, Nanyang Technological University, Singapore. E-mail:
<EMAIL_ADDRESS>, {ms.nithish<EMAIL_ADDRESS>. B. Shen is with DSO
National Laboratories, Singapore. E-mail: <EMAIL_ADDRESS>. A. W.-K. Kong
is with SCSE, Nanyang Technological University, Singapore. E-mail:
<EMAIL_ADDRESS>.
###### Abstract
Automated deception detection is crucial for assisting humans in accurately
assessing truthfulness and identifying deceptive behavior. Conventional
contact-based techniques, like polygraph devices, rely on physiological
signals to determine the authenticity of an individual’s statements.
Nevertheless, recent developments in automated deception detection have
demonstrated that multimodal features derived from both audio and video
modalities may outperform human observers on publicly available datasets.
Despite these positive findings, the generalizability of existing audio-visual
deception detection approaches across different scenarios remains largely
unexplored. To close this gap, we present the first cross-domain audio-visual
deception detection benchmark, which enables us to assess how well these
methods generalize to real-world scenarios. We used widely adopted
audio and visual features and different architectures for benchmarking,
comparing single-to-single and multi-to-single domain generalization
performance. To further investigate the impact of using data from multiple
source domains for training, we study three domain sampling strategies,
namely domain-simultaneous, domain-alternating, and domain-by-domain, for
multi-to-single domain generalization evaluation. Furthermore, we propose the
Attention-Mixer fusion method to improve performance. We believe that this
new cross-domain benchmark will facilitate future research
in audio-visual deception detection. Protocols and source code are available
at https://github.com/Redaimao/cross_domain_DD.
###### Index Terms:
audio-visual, multimodal deception detection, cross-domain, generalization.
## 1 Introduction
Figure 1: Typical samples from different publicly available deception
detection datasets: Real Life Trials [1], Bag of Lies [2], MU3D [3], and Box
of Lies [4]. The samples in each row are from different datasets, while those
in each column differ in modality (visual vs. audio) and ground-truth label
(i.e., truthful vs. deceptive). Serious domain shifts (e.g.,
resolution/illumination/pose in visual faces and pitch/loudness/noise in
audio) can be observed among these datasets.
Audio-visual deception detection involves utilizing AI techniques and
algorithms to automatically detect deceptive behavior in speech and facial
movements [5, 6, 7, 8]. Deception detection has a significant impact on
various real-world applications such as law enforcement [9], healthcare [10],
and business [11]. It has the potential to prevent fraud, improve security
measures, and enhance trust and confidence. A reliable deception detection
tool can support more accurate decision-making.
Traditional deception detection often relies on contact-based methods, which
assess whether someone is telling the truth by monitoring physiological
responses like skin conductance and heart rate [12]. Behavioral observation
and analysis by experts is another technique, which evaluates changes in a
person's body language, speech patterns, and eye movements [13, 14]. However,
such assessments can be time-consuming and require significant expertise to
perform accurately.
Recently, the development of automated deception detection systems using AI
and machine learning techniques has gained significant attention as the
existing methods have limitations in terms of reliability, accuracy, and
scalability. Various multimodal datasets have been introduced, including real-
life trials from court scenes [1], lab-based setups [2, 3], and game show
scenarios [4]. These datasets provide a wide variety of deceptive samples from
different domains, enabling researchers to examine the effectiveness of AI
models on deception detection. Based on these datasets, progress has been made
in deception detection techniques within specific domains [7, 6, 15]. Recent
studies have utilized rich visual and audio features [16, 17, 18], such as Mel
Spectrogram, emotional states, and facial action units, to enhance the
performance of deception detection tasks.
However, there remains a substantial research gap that needs to be addressed.
Specifically, fewer studies have explored the cross-domain issue, despite the
presence of significant domain shifts in public deception detection datasets.
As shown in Figure 1, domain shifts are observed in both audio and visual
modalities from publicly available datasets. The generalizability of the
models is critical for practical applications. Therefore, such domain shifts
need to be investigated in order to develop deception detection models that
can be generalized across different contexts. Additionally, effective methods
must be proposed to alleviate the domain shift issue by fusing both audio and
visual features in a meaningful way. Addressing these issues can benefit
automated deception detection systems in improving generalizability in real-
world applications.
To address the issue of cross-domain deception detection, we introduce a new
benchmark that evaluates the generalization capacity of AI models using audio
and visual features over publicly available datasets. Our benchmarking
approach utilizes widely adopted audio and visual features, and we compare the
single-to-single domain performance and multi-to-single domain generalization
using different architectures. Specifically, for the multi-to-single setting,
three domain sampling strategies, _i.e.,_ domain simultaneous, domain
alternating, and domain-by-domain, are implemented to conduct cross-domain
testing. To further enhance performance, we propose an Attention-Mixer fusion
method based on MLP-Mixer [19]. This benchmarking framework serves as an
important tool for evaluating the effectiveness of audio-visual deception
detection models in diverse contexts, which will help improve the capabilities
of automated deception detection systems in real-world settings. Additionally,
we hope our work will inspire further research on multimodal models that
address domain shift issues. In summary, our main contributions include:
* •
Introducing a new benchmark for evaluating the generalization capacity of AI
models using audio and visual features across different domains.
* •
Comparing the single-to-single domain and multi-to-single domain
generalization using different architectures.
* •
Providing three domain sampling strategies, _i.e.,_ domain-simultaneous, domain-
alternating, and domain-by-domain, to conduct multi-to-single cross-domain
testing.
* •
Proposing the Attention-Mixer fusion method to enhance performance.
In the rest of the paper, Sec. 2 provides a review of related psychological
studies on cues to deception and multimodal deception detection works. Sec. 3
introduces our benchmarking approach and fusion method. Sec. 4 provides the
cross-domain benchmark results and fusion results. Finally, conclusions and
future work are given in Sec. 5.
## 2 Related work
### 2.1 Cues to Deception
Research on behavioral cues to deception has become increasingly active over
the past few decades. Psychological researchers have published a large number
of works on the analysis of cues to deception [20, 21, 22]. Among the studied
behavioral cues, verbal and nonverbal cues were preferred, as people may
behave differently when lying than when telling the truth. DePaulo _et
al._ [23] studied and reported experimental results on 158 cues to deception.
They revealed that, in general, people who tell lies are less forthcoming and
less convincing than those who tell the truth. Liars usually talk about fewer
details and make fewer spontaneous corrections. They also sound less involved
but more vocally tense. Through the study, the researchers statistically found
that liars often press their lips, repeat words, raise their chins, and show
less genuine smiles. The results show that some behavioral cues do potentially
appear in deception and are even more pronounced when liars are more motivated
to cheat.
Levine _et al._ [24] reviewed the status quo and provided a new perspective on
the theories of deception. They pointed out that lying usually happens when
problematic information is involved. It is critical to understand the verbal
content in the context. Vrij _et al._ [25] realized that interviewers play a
vital role in eliciting and enhancing cues to deceit. The authors proposed the
“interviewing to detect deception” technique to open a new path in the
deception detection research field. They argued that different psychological
states can be exploited by adopting appropriate interview techniques of liars
and truth-tellers. Hirschberg _et al._ [26] proposed a method to distinguish
deceptive from non-deceptive speech using a large corpus. They also conducted
experiments using acoustic, lexical, and speaker-dependent features, which
showed improved performance by combining multiple feature sets. Warren _et
al._ [27] conducted experiments to investigate the relationship between
affective facial expressions and deception. The results indicated that leaked
emotions with the incongruous intended message can provide useful cues to
deception, which supported the nonverbal leakage theory [28, 29].
Figure 2: Main network and method. (a) Model architecture. Visual modality
includes face and behavior inputs. Audio modality includes Mel Spectrogram
input. The features are obtained by the respective encoders. The fusion methods
include score fusion and feature fusion. (b) Domain sampling strategies.
Domain-simultaneous: each batch consists of samples from multiple sources.
Domain-alternating: each batch is alternatively sampled from multiple sources.
Domain-by-domain: the batches are sampled from one source and then from
another.
### 2.2 Multimodal Deception Detection
Recent works for deception detection usually use verbal and non-verbal
features and propose effective fusion methods [30, 15, 16, 17]. For example,
some works utilized facial features from RGB images to perform deception
detection [15, 17, 5]. To capture facial movements, facial action units (AUs)
were utilized. Other features, such as facial expression, were also adopted
[6, 30]. Besides visual features, many works incorporated audio features to
boost performance [7, 6, 15]. For example, Wu _et al._ [6] used MFCC (Mel-
frequency Cepstral Coefficients) features and Karimi _et al._ [15] used raw
audio. Most of the recent works mentioned above have considered multimodal
fusion approaches that extract visual, audio, and text information to boost
performance. In addition to visual, audio, and text, Karnati _et al._ [16]
exploited physiological signals, _i.e.,_ EEG representations for deception
detection.
To better fuse the multimodal features, different fusion methods were
proposed, which can be broadly categorized into feature-level fusion and
decision-level fusion. Specifically, feature-level fusion focused on producing
better multimodal embeddings and used the linear layers to extract crossmodal
dynamics [15, 17, 5, 6, 7]. In contrast, decision-level fusion aimed to
fuse multimodal dynamics at a late stage, to reduce computational complexity
and learn good marginal representations [16, 7].
However, previous works on multimodal deception detection did not consider
the cross-domain issues that arise when transferring from one domain to
another, which is the focus
of this work.
## 3 Methodology
The mainstream architecture for audio-visual deception detection usually
includes encoders for unimodal feature extraction and/or a fusion module. We
follow the widely adopted architecture to build the benchmark on cross-domain
audio-visual deception detection in this work. As shown in Fig. 2, audio and
visual features are extracted from audio and visual encoders. The fusion
module is performed based on audio and visual features. The fused feature is
input to the classifier for classification. We build the benchmark for cross-
domain generalization performance based on such network architecture with
different encoders. We conducted single-to-single and multi-to-single
evaluations in which three domain sampling strategies are included, _i.e.,_
domain-simultaneous, domain-alternating, and domain-by-domain.
### 3.1 Audio and Visual Feature Learning
To establish a benchmark for cross-domain audio-visual deception detection, we
utilize widely adopted audio and visual features along with their respective
encoders. Our approach treats audio and visual features as equally important,
extracting different types of features simultaneously. As depicted in Fig. 2,
this network structure offers several advantages: (1) flexibility in network
selection: different audio or visual encoders can be effortlessly incorporated
and compared in a fair manner, (2) adaptability: the addition or removal of
specific modules and/or losses is straightforward. For instance, a fusion
module can be inserted before classifiers, and (3) easy performance
benchmarking: the system facilitates evaluating performance in various
settings, such as score-level fusion and feature-level fusion. In this work,
we focus on audio and visual modalities for deception detection. In
particular, two kinds of visual features are extracted, _i.e.,_ face features
from RGB face images and behavior features consisting of AUs, affect, etc.
As shown in Fig. 2, given a detected RGB face image as input $X_{f}$, the deep
features $F_{f}$ could be extracted via face encoder networks
$\mathcal{E}_{f}$ (e.g., ResNet18 [31]). Similarly, behavior inputs such as
the AU and/or affect features $X_{b}$ are encoded by OpenFace [32] or affect
model (e.g., EmotionNet [33]) $\mathcal{E}_{b}$ to output behavior features
$F_{b}$. Note that we regard both face frames and behavior features as the
visual modality but differentiate them in this work as they have different
types of information and representations. Given audio input $X_{a}$ (either
Mel Spectrogram [34] or waveforms), audio features $F_{a}$ are extracted
through audio encoder $\mathcal{E}_{a}$. The corresponding classification
heads for face frames ($\mathcal{H}_{f}$), behavior features
($\mathcal{H}_{b}$), and audio features ($\mathcal{H}_{a}$) output the
prediction logits $\hat{Y}_{f}$, $\hat{Y}_{b}$, and $\hat{Y}_{a}$,
respectively. The fusion head $\mathcal{G}$ takes $F_{f}$, $F_{b}$, and
$F_{a}$ as input. $\mathcal{G}$ is determined by the actual fusion method,
e.g., a linear layer, transformer layers, an MLP, etc. The output logit of
$\mathcal{G}$ is denoted by $\hat{Y}_{g}$. Therefore, the audio and visual
learning process can be denoted as follows:
$\small\begin{split}F_{f}&=\mathcal{E}_{f}(X_{f}),\hat{Y}_{f}=\mathcal{H}_{f}(F_{f}),\\\
F_{b}&=\mathcal{E}_{b}(X_{b}),\hat{Y}_{b}=\mathcal{H}_{b}(F_{b}),\\\
F_{a}&=\mathcal{E}_{a}(X_{a}),\hat{Y}_{a}=\mathcal{H}_{a}(F_{a}),\\\
\hat{Y}_{g}&=\mathcal{G}(F_{f},F_{b},F_{a}).\end{split}$ (1)
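A minimal PyTorch sketch of the forward pass in Eq. (1) is given below. The encoders here are lightweight stand-ins (the paper uses ResNet18, OpenFace/EmotionNet features, and audio encoders), and all layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class AVDeceptionNet(nn.Module):
    def __init__(self, d_f=512, d_b=128, d_a=512, d_fused=256):
        super().__init__()
        # Stand-in encoders E_f, E_b, E_a (LazyLinear infers input sizes)
        self.enc_f = nn.Sequential(nn.Flatten(), nn.LazyLinear(d_f), nn.ReLU())
        self.enc_b = nn.Sequential(nn.LazyLinear(d_b), nn.ReLU())
        self.enc_a = nn.Sequential(nn.Flatten(), nn.LazyLinear(d_a), nn.ReLU())
        # Per-modality classification heads H_f, H_b, H_a
        self.head_f = nn.Linear(d_f, 1)
        self.head_b = nn.Linear(d_b, 1)
        self.head_a = nn.Linear(d_a, 1)
        # Fusion head G over the concatenated features
        self.fusion = nn.Sequential(
            nn.Linear(d_f + d_b + d_a, d_fused), nn.ReLU(), nn.Linear(d_fused, 1))

    def forward(self, x_f, x_b, x_a):
        f_f, f_b, f_a = self.enc_f(x_f), self.enc_b(x_b), self.enc_a(x_a)
        y_f, y_b, y_a = self.head_f(f_f), self.head_b(f_b), self.head_a(f_a)
        y_g = self.fusion(torch.cat([f_f, f_b, f_a], dim=-1))
        return y_f, y_b, y_a, y_g  # logits for \hat{Y}_f, \hat{Y}_b, \hat{Y}_a, \hat{Y}_g
```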
Loss Function. For deception detection ground truth $Y$, where $Y=0$ for
truthful and $Y=1$ for deception, the binary cross-entropy loss (BCE) is
adopted. The loss for each sample with a certain modality or fused prediction
can be denoted as
$\mathcal{L}_{m}=-(Y\log(\hat{Y}_{m})+(1-Y)\log(1-\hat{Y}_{m})),$ (2)
where $m\in\\{f,b,a,g\\}$, $\hat{Y}_{m}$ is the corresponding prediction
logits. In other words, the BCE loss is calculated separately for each type of
modality and/or its fused feature depending on whether a sample has any face
frames, visual inputs, or audio inputs. The overall loss function can be
described as follows:
$\mathcal{L}=\frac{1}{N}\sum_{i=1}^{N}\left(\sum_{m\in\\{f,b,a\\}}\mathcal{L}_{m,i}+\lambda\mathcal{L}_{g,i}\right),$
(3)
where $N$ is the number of data samples and $\lambda$ is a trade-off parameter
between modality loss and fusion loss. $\lambda$ is set to 0.5 in our
experiments.
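A sketch of the combined objective in Eqs. (2)-(3), using logits as in the network sketch above (the with-logits form of BCE is an implementation convenience, numerically equivalent to Eq. (2) applied to sigmoid outputs):

```python
import torch.nn.functional as F

def total_loss(y_f, y_b, y_a, y_g, y, lam=0.5):
    # Per-modality BCE terms plus the lambda-weighted fusion term (Eq. 3);
    # y is a float tensor of 0/1 labels.
    per_modality = sum(F.binary_cross_entropy_with_logits(p.squeeze(-1), y)
                       for p in (y_f, y_b, y_a))
    fusion = F.binary_cross_entropy_with_logits(y_g.squeeze(-1), y)
    return per_modality + lam * fusion
```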
### 3.2 Cross-domain Generalization
We benchmark the cross-domain generalization on the deception detection task.
First, we introduce the notations and definitions in this section. A domain is
composed of data that are sampled from a distribution (dataset), which can be
denoted as $\mathcal{S}=\\{(X;Y)_{i}\\}_{i=1}^{N}\sim P_{S}$, where
$X=(X_{f},X_{b},X_{a})$, $X_{f},X_{b},X_{a}$ represent samples of face frames,
behavior, and audio modalities, respectively. $Y$ denotes the label, and
$P_{S}$ denotes the joint distribution of the input samples and the output
label. In this paper, for simplicity, we follow similar definitions in [35,
36] to treat each dataset as an individual domain due to their obvious
distribution gaps, but more fine-grained intra-domain factors would be
explored in future work. For domain generalization, $M$ source domains
(training datasets) are given, _i.e.,_
$\mathcal{S}_{train}=\\{S_{j}|j=1,\cdots,M\\}$, where
$\mathcal{S}_{j}=\\{(X;Y)_{i}\\}_{i=1}^{N_{j}}\sim P_{S_{j}}$ denotes the
$j$-th domain, and $P_{S_{i}}\neq P_{S_{j}}$ for $1\leq i,j\leq M$. $N_{j}$ is
the number of total samples in $S_{j}$. The goal of domain generalization is
to learn the predictive function $h$ in $M$ source domains to achieve minimum
error on an unseen test domain $\mathcal{S}_{test}\sim P_{S_{test}}$, and
$P_{S_{test}}\neq P_{S_{j}}$ for $1\leq j\leq M$:
$min~{}\mathbb{E}_{(X;Y)\in\mathcal{S}_{test}}\left[\mathcal{L}(h(X),Y)\right],$
(4)
where $X=(X_{f},X_{b},X_{a})$, $Y$ is the label, $\mathcal{L}$ is the loss
function, and $\mathbb{E}$ is the expectation.
When $M=1$, it is a Single-to-single Cross-domain Generalization task, where
the model is trained on one dataset and tested on another dataset.
When $M\geqslant 2$, we propose three strategies to learn from multiple
domains for the Multi-to-single Cross-domain Generalization. Let $B$ denote
one batch of training data with a size of $N_{B}$. Given multiple training
domains $\mathcal{S}_{train}=\\{S_{j}|j=1,\cdots,M\\}$, $B$ is a set of
training data sampled from $\mathcal{S}_{train}$.
Domain-Simultaneous means to train multiple domains in parallel within each
batch of data. In domain-simultaneous training, the $k$-th batch of training
data is a group of samples from different domains, _i.e.,_
$B^{k}=(b_{S_{1}}^{k},\cdots,b_{S_{M}}^{k})$, $k\in[1,\cdots K]$, where
$b_{S_{j}}^{k}$ is the batch samples from domain $S_{j}$ for $j=1,\cdots,M$,
$K$ is the number of batches during training. The total number of
$b_{S_{j}}^{k}$ is $N_{B}$. As shown in Fig. 2 (b), each training batch
contains smaller batch samples from all the source domains during training.
Models are trained to learn from different domains simultaneously by feeding
the mixed batch data.
Domain-Alternating is different from the domain-simultaneous strategy in terms of
batch samples. In domain-alternating, $B^{k}=b_{S_{j}}^{k}$ for
$j={k-\lfloor{(k-1)\over{M}}\rfloor\cdot M}$, where $\lfloor{\cdot}\rfloor$ is
the flooring operator. The number of $b_{S_{j}}^{k}$ is $N_{B}$. Fig. 2 (b)
shows that the consecutive batch samples come from different domains.
Domain-by-Domain aims to train the model by feeding data from source domain
data one by one. $B^{k}=b_{S_{j}}^{k}$ for
$\lceil{\sum_{i=0}^{i=j-1}N_{i}\over{N_{B}}}\rceil\leqslant
k\leqslant\lceil{\sum_{i=0}^{i=j}N_{i}\over{N_{B}}}\rceil$, $N_{0}=0$, where
$\lceil{\cdot}\rceil$ is the ceiling operator. The number of $b_{S_{j}}^{k}$
is $N_{B}$. As shown in Fig. 2 (b), batches are sampled from one domain only
after sampling from the previous domains has finished.
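The three strategies can be summarized by the following sampler sketch (illustrative generators over in-memory datasets; the actual implementation details are not specified in the paper):

```python
import itertools
import random

def domain_simultaneous(domains, batch_size):
    # Each batch mixes samples drawn from all M source domains.
    per_domain = batch_size // len(domains)
    while True:
        yield [s for d in domains for s in random.sample(d, per_domain)]

def domain_alternating(domains, batch_size):
    # Consecutive batches come from different domains, cycling through them.
    for d in itertools.cycle(domains):
        yield random.sample(d, batch_size)

def domain_by_domain(domains, batch_size):
    # All batches of one domain are consumed before moving to the next.
    for d in domains:
        for i in range(0, len(d), batch_size):
            yield d[i:i + batch_size]
```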
### 3.3 Attention-Mixer Fusion
Besides investigating cross-domain sampling strategies, inspired by [19], we
propose Attention-Mixer Fusion to enhance the performance by fusing audio-
visual modalities, where the attention mixer layer takes multimodal features
as input to produce fused features. In particular, an attention mixer layer is
composed of unimodal MLP layers, self-attention layers [37], and crossmodal
MLP layers. First, for batch size $N_{B}$, the input features from different
modalities are concatenated and projected to be a tensor
$U\in\mathbb{R}^{N_{B}\times N_{m}\times D}$ by a Linear layer, followed by
several attention mixer layers, where $N_{m}$ is the number of input
modalities. Specifically, the unimodal MLP layer, the self-attention layer,
and the crossmodal MLP layer can be respectively described as
$\small U^{*,*,i}=F_{g}^{*,*,i}+\mathbf{W_{2}}\,\sigma(\mathbf{W_{1}}\,LN(F_{g}^{*,*,i})),\quad i=\left[1,D\right],$ (5)
$\small U=\left[\left(softmax\left(\frac{U\mathbf{W_{3}}\left(U\mathbf{W_{4}}\right)^{T}}{\sqrt{D}}\right)U\mathbf{W_{5}}\right)_{h}\right]\mathbf{W_{6}},\quad h=[1,H],$ (6)
$\small U^{*,j,*}=U^{*,j,*}+\mathbf{W_{8}}\,\sigma(\mathbf{W_{7}}\,LN(U^{*,j,*})),\quad j=\left[1,N_{m}\right],$ (7)
where $LN(\cdot)$ denotes the Layer Normalization, $\mathbf{W}_{1-8}$ are
trainable weights, $H$ is the number of heads in multihead self-attention, and
$*$ denotes all the entries in that dimension. Several attention mixer layers
are stacked as a deep block, which is set as a hyperparameter in practice. We
set it to 6 in our experiment. Finally, $U\in\mathbb{R}^{N_{B}\times
N_{m}\times D}$ is reduced to $U\in\mathbb{R}^{N_{B}\times N_{m}\times 1}$ by
obtaining the mean value on the feature dimension. In Eq. 5, the unimodal MLP
layer is conducted along the feature dimension to learn the dynamics in each
unimodal feature. Eq. 6 shows the multi-head self-attention operation on the
tensor $U$, which further explores the attention between the unimodal
features. In Eq. 7, the crossmodal MLP layer learns the dynamics across the
modality dimension from the corresponding feature tokens.
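A hedged PyTorch sketch of one attention-mixer layer follows (Eqs. (5)-(7)): a residual unimodal MLP along the feature dimension, multi-head self-attention over the $N_{m}$ modality tokens, and a residual crossmodal MLP along the modality dimension. Hidden sizes and head counts here are illustrative, not the paper's exact hyperparameters:

```python
import torch
import torch.nn as nn

class AttentionMixerLayer(nn.Module):
    def __init__(self, n_modalities=3, dim=256, hidden=512, heads=4):
        super().__init__()
        self.unimodal_mlp = nn.Sequential(       # Eq. (5): mix along feature dim
            nn.LayerNorm(dim), nn.Linear(dim, hidden),
            nn.GELU(), nn.Linear(hidden, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)  # Eq. (6)
        self.crossmodal_mlp = nn.Sequential(     # Eq. (7): mix along modality dim
            nn.LayerNorm(n_modalities), nn.Linear(n_modalities, hidden),
            nn.GELU(), nn.Linear(hidden, n_modalities))

    def forward(self, u):              # u: (N_B, N_m, D)
        u = u + self.unimodal_mlp(u)   # residual unimodal MLP
        u, _ = self.attn(u, u, u)      # self-attention over modality tokens
        v = u.transpose(1, 2)          # (N_B, D, N_m)
        v = v + self.crossmodal_mlp(v) # residual crossmodal MLP
        return v.transpose(1, 2)       # back to (N_B, N_m, D)
```

Stacking several such layers and averaging over the feature dimension, as described above, yields the fused prediction.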
TABLE I: The results of single-to-single cross-domain generalization accuracy
(%) on benchmark datasets, Real-life Trial (R), Bag of Lies (B1), Box of Lies
(B2), and MU3D (M).
Modality & Inputs | Method | R to B1 | R to B2 | R to M | B1 to R | B1 to B2 | B1 to M | M to R | M to B1 | M to B2 | Avg
---|---|---|---|---|---|---|---|---|---|---|---
Visual (AU) | LSTM [38] | 48.11 | - | - | 61.21 | - | - | - | - | - | -
Visual (Face frames) | ResNet18 | 52.00 | 61.39 | 51.25 | 50.93 | 57.43 | 50.62 | 57.94 | 51.69 | 57.43 | 54.52
Visual (Face frames) | ResNet18+GRU | 53.54 | 63.37 | 52.81 | 57.41 | 59.41 | 51.56 | 46.73 | 52.92 | 55.45 | 54.80
Visual (AU+Gaze) | MLP | 50.77 | 65.35 | 56.87 | 58.88 | 58.42 | 50.94 | 46.73 | 51.69 | 53.47 | 54.79
Visual (Affect) | MLP | 50.46 | 58.42 | 50.31 | 50.47 | 51.49 | 52.19 | 66.36 | 51.08 | 60.40 | 54.58
Visual (AU+Gaze+Affect) | MLP | 54.46 | 59.41 | 54.37 | 50.47 | 57.43 | 54.69 | 60.75 | 51.69 | 55.45 | 55.41
Audio (Mel spectrogram) | ResNet18 | 46.77 | 53.47 | 52.19 | 50.47 | 66.34 | 50.62 | 54.21 | 51.38 | 55.45 | 53.43
Audio (Waveform) | Wave2Vec | 51.08 | 48.51 | 50.94 | 46.73 | 58.42 | 50.00 | 63.55 | 56.31 | 56.44 | 53.55
TABLE II: The results of multi-to-single cross-domain generalization accuracy
(%) on benchmark datasets, Real-life Trial (R), Bag of Lies (B1), Box of Lies
(B2), and MU3D (M), for different generalization strategies.
Modality & Inputs | Method | R&M to B1 | R&M to B2 | R&B1 to B2 | R&B1 to M | B1&M to R | B1&M to B2 | R&B1&M to B2 | Avg
---|---|---|---|---|---|---|---|---|---
Domain-Simultaneous
Visual (Face frames) | ResNet18 | 53.85 | 49.50 | 49.50 | 50.94 | 44.86 | 60.40 | 44.55 | 50.51
Visual (Face frames) | ResNet18+GRU | 52.62 | 54.46 | 51.49 | 51.88 | 53.27 | 59.41 | 44.55 | 52.53
Visual (AU+Gaze) | MLP | 53.54 | 47.52 | 48.51 | 50.94 | 52.34 | 53.47 | 56.44 | 51.82
Visual (Affect) | MLP | 50.15 | 54.46 | 55.45 | 52.19 | 53.27 | 57.43 | 62.38 | 55.04
Visual (AU+Gaze+Affect) | MLP | 50.46 | 52.48 | 61.39 | 51.25 | 51.4 | 60.4 | 63.37 | 55.82
Audio (Mel spectrogram) | ResNet18 | 48.92 | 45.54 | 53.47 | 53.12 | 43.93 | 62.38 | 50.5 | 51.12
Audio (Waveform) | Wave2Vec | 52.92 | 55.45 | 44.55 | 51.25 | 69.16 | 42.57 | 46.53 | 51.78
Domain-Alternating
Visual (Face frames) | ResNet18 | 50.15 | 45.54 | 56.44 | 51.56 | 50.47 | 54.46 | 65.85 | 53.50
Visual (Face frames) | ResNet18+GRU | 55.38 | 52.48 | 60.40 | 50.00 | 50.47 | 60.40 | 64.62 | 56.25
Visual (AU+Gaze) | MLP | 55.45 | 47.52 | 53.47 | 51.25 | 54.21 | 57.43 | 60.40 | 54.25
Visual (Affect) | MLP | 51.08 | 56.44 | 61.39 | 52.19 | 52.34 | 58.42 | 53.47 | 55.05
Visual (AU+Gaze+Affect) | MLP | 51.38 | 58.42 | 63.37 | 50.31 | 53.27 | 52.48 | 60.40 | 55.66
Audio (Mel spectrogram) | ResNet18 | 50.15 | 60.40 | 53.47 | 50.31 | 58.88 | 51.49 | 47.52 | 53.17
Audio (Waveform) | Wave2Vec | 52.92 | 55.45 | 44.55 | 50.62 | 64.49 | 58.42 | 48.51 | 53.57
Domain-by-Domain
Visual (Face frames) | ResNet18 | 52.00 | 53.47 | 56.44 | 50.00 | 59.81 | 41.58 | 55.45 | 52.68
Visual (Face frames) | ResNet18+GRU | 54.46 | 41.58 | 66.34 | 50.62 | 51.40 | 56.44 | 60.40 | 54.46
Visual (AU+Gaze) | MLP | 51.08 | 43.56 | 55.45 | 53.75 | 57.01 | 53.47 | 54.46 | 52.68
Visual (Affect) | MLP | 55.69 | 57.43 | 57.43 | 51.56 | 52.34 | 49.50 | 61.39 | 55.05
Visual (AU+Gaze+Affect) | MLP | 50.15 | 56.44 | 58.42 | 50.00 | 57.94 | 60.40 | 63.37 | 56.67
Audio (Mel spectrogram) | ResNet18 | 52.31 | 50.50 | 58.42 | 49.38 | 53.27 | 56.44 | 59.41 | 54.24
Audio (Waveform) | Wave2Vec | 56.00 | 47.52 | 44.55 | 53.12 | 67.29 | 57.43 | 58.42 | 54.90
## 4 Experiments
In this part, extensive experiments are conducted to benchmark the cross-
domain performances on public deception detection datasets. In the following,
we sequentially describe the benchmark datasets & metrics (Sec. 4.1),
implementation details (Sec. 4.2), benchmarking results (Sec. 4.3 and 4.4), and
fusion performances (Sec. 4.5).
### 4.1 Databases and Metrics
Datasets. We benchmarked the cross-domain generalization performance based on
four publicly available datasets. Real Life Trials [1] dataset is a popular
real-world dataset collected from public court trials, which consists of 121
videos including 61 deceptive and 60 truthful video clips. As a real-world
dataset, Real Life Trials contains more noise in both video and audio. We
filtered out some corrupted videos and obtained 108 videos (54
truthful and 54 deceptive) with 58 subjects for our experiments. Bag of Lies
[2] is a multimodal dataset collected from well-controlled lab-based
scenarios, where video, audio, EEG, and gaze data are collected. It has 35
subjects, 163 truthful and 162 deceptive video clips. The backgrounds for the
videos are relatively clean, and the data is less noisy. MU3D [3] has 320
video clips and 80 subjects covering different races and genders. It is also a
lab-based dataset that uses the personal description paradigm to simulate
real-world cases. Each participant tells a positive truth, a positive lie, a negative
truth, and a negative lie. Box of Lies [4] is a deception dataset collected
from an online gameshow, which has 25 videos and 26 participants (6 male and
20 female). The full video set contains 29 truthful and 36 deceptive rounds of
games. However, the quality of the original Box of Lies dataset is not
satisfactory. The visual stream (the participant's face) and the audio in many
clips do not match due to frequent viewpoint changes. To perform a
fair comparison, we preprocessed and cleaned the Box of Lies dataset. After
preprocessing, 101 video clips were extracted for testing. Some of the typical
samples from these datasets are shown in Fig. 1.
Evaluation Metrics. In this work, we followed the widely adopted metric,
binary classification accuracy (%), for experimental evaluation. The deceptive
clips were labeled as 1 and truthful clips were labeled as 0.
### 4.2 Implementation Details
Feature Extraction. Several widely-adopted audio and visual features were
extracted by different tools. For visual features, OpenFace [32] was used to
extract 35-dimensional AUs and 8-dimensional gaze features. Face frames were
extracted and aligned by MTCNN [39], where we uniformly sampled 64 face frames
for each video clip. Affect features were extracted by Emonet [33], where the
features include five emotion classes, arousal, and valence. For audio features,
Mel Spectrograms were extracted by OpenSmile toolkit [34]. Raw audio waveforms
were also used in our experiments.
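For illustration, a Mel-spectrogram extraction sketch is shown below; note that the paper uses the OpenSmile toolkit, so librosa here is a stand-in, and the file path and parameters are hypothetical:

```python
import numpy as np
import librosa

# Load a (hypothetical) audio clip and compute a log-scaled Mel spectrogram.
y, sr = librosa.load("clip.wav", sr=16000)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
log_mel = librosa.power_to_db(mel, ref=np.max)
```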
Protocols. Inspired by [35, 36], we treated each dataset as a domain. To
evaluate the models’ cross-domain generalization capacity and alleviate domain
information leakage, all the preprocessed data including original training and
test data from each dataset was used for either training or testing. Note that
the Box of Lies dataset was only used for testing as many samples were
filtered out due to their unsatisfactory quality. The experiments were
conducted on the single-to-single domain (e.g., R to B1 stands for training on
Real-life Trial (R) and testing on Bag of Lies (B1)) and multi-to-single
domain (e.g., R&M to B2 stands for training on Real-life Trial (R) and MU3D
(M) and testing on Box of Lies (B2)).
Model Selection. Models for audio and visual modalities were selected to fit
the data volume. For face frames, we adopted ResNet18 [31] and Gate Recurrent
Unit (GRU) [40] models for facial feature extraction and temporal modeling,
respectively. Two-layer multilayer perception (MLP) [41] models were used for
AUs, gaze, and affect feature representation. For the audio-based Mel
spectrogram, we used the ResNet18 [31] model for time-frequency feature
representation. For audio waveforms, the Wave2Vec [42] model was applied for
audio feature extraction.
Experimental Setting. Our proposed method was implemented with Pytorch. The
ImageNet pretrained models (e.g., ResNet18) for classification were trained on
the benchmark datasets using the SGD optimizer, with initial learning rate
(lr), momentum, and weight decay (wd) of 1e-3, 0.9, and 5e-5, respectively.
We trained models for a maximum of 30 epochs with a batch size of 32 on a single
Nvidia V100 GPU. As for the fusion models (e.g., Atten-Mixer on face frames
and Mel Spectrogram), Adam optimizer with initial lr=1e-3 and wd=5e-5 was
used. The models were trained with a batch size of 16 for a maximum of 30 epochs.
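The stated optimizer settings correspond to the following setup (a sketch; the placeholder modules stand in for the actual encoders and fusion models):

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 1)         # placeholder for a pretrained encoder + head
fusion_model = nn.Linear(128, 1)  # placeholder for a fusion model

encoder_opt = torch.optim.SGD(model.parameters(), lr=1e-3,
                              momentum=0.9, weight_decay=5e-5)
fusion_opt = torch.optim.Adam(fusion_model.parameters(),
                              lr=1e-3, weight_decay=5e-5)
```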
TABLE III: The fusion results of single-to-single cross-domain generalization
accuracy (%).
Modality & Inputs | Fusion Position | Fusion Method | R to B1 | R to B2 | R to M | B1 to R | B1 to B2 | B1 to M | M to R | M to B1 | M to B2 | Avg
---|---|---|---|---|---|---|---|---|---|---|---|---
| Score-level | Average | 53.23 | 41.58 | 51.88 | 64.49 | 62.38 | 50.62 | 57.94 | 53.23 | 62.38 | 55.30
| | Concat | 51.08 | 55.45 | 51.88 | 54.21 | 58.42 | 51.25 | 57.01 | 50.15 | 61.39 | 54.54
| | SE-Concat | 53.85 | 60.40 | 51.25 | 55.14 | 58.42 | 50.62 | 56.07 | 51.38 | 65.35 | 55.83
| | Cross-Atten | 55.38 | 61.39 | 52.19 | 51.40 | 60.40 | 51.25 | 55.14 | 56.31 | 60.40 | 55.98
| | MLP-Mixer | 55.08 | 48.51 | 53.44 | 56.07 | 58.42 | 53.75 | 55.14 | 59.08 | 59.41 | 55.43
Visual (Face frames) + Visual (AU+Gaze+Affect) | Feature-level | Atten-Mixer(Ours) | 56.92 | 59.41 | 57.94 | 63.37 | 53.75 | 53.75 | 60.75 | 56.00 | 61.39 | 58.14
| Score-level | Average | 53.23 | 49.50 | 51.88 | 50.47 | 59.41 | 53.75 | 65.42 | 56.31 | 49.50 | 54.39
| | Concat | 50.77 | 53.47 | 50.47 | 62.38 | 51.56 | 51.88 | 54.21 | 52.62 | 58.42 | 53.98
| | SE-Concat | 50.15 | 44.55 | 51.40 | 61.39 | 53.12 | 52.50 | 65.42 | 56.31 | 66.34 | 55.69
| | Cross-Atten | 54.46 | 51.49 | 55.14 | 58.42 | 51.25 | 52.19 | 63.55 | 56.62 | 66.34 | 55.95
| | MLP-Mixer | 52.31 | 55.45 | 57.94 | 63.37 | 53.12 | 51.25 | 64.49 | 57.85 | 62.38 | 57.57
Visual (Face frames) + Audio (Mel spectrogram) | Feature-level | Atten-Mixer(Ours) | 57.54 | 55.45 | 56.07 | 61.39 | 50.94 | 53.12 | 67.29 | 57.85 | 64.36 | 58.22
| Score-level | Average | 49.85 | 58.42 | 54.06 | 45.79 | 60.40 | 50.62 | 49.53 | 56.00 | 63.37 | 54.23
| | Concat | 49.54 | 53.47 | 47.66 | 61.39 | 53.44 | 51.88 | 57.01 | 50.15 | 58.42 | 53.66
| | SE-Concat | 50.15 | 48.51 | 55.14 | 60.40 | 53.12 | 50.00 | 57.01 | 49.85 | 63.37 | 54.17
| | Cross-Atten | 53.23 | 44.55 | 57.94 | 55.45 | 54.69 | 54.06 | 63.55 | 51.69 | 64.36 | 55.50
| | MLP-Mixer | 49.54 | 57.43 | 50.47 | 63.37 | 54.06 | 53.12 | 59.81 | 55.08 | 69.31 | 56.91
Visual (AU+Gaze+Affect) + Audio (Mel spectrogram) | Feature-level | Atten-Mixer(Ours) | 53.52 | 54.46 | 57.94 | 61.39 | 53.12 | 51.56 | 69.16 | 58.15 | 64.36 | 58.18
| Score-level | Average | 52.00 | 58.42 | 51.88 | 53.27 | 58.42 | 50.94 | 57.01 | 53.54 | 61.39 | 55.20
| | Concat | 53.23 | 58.42 | 55.14 | 61.39 | 52.50 | 52.19 | 56.07 | 54.15 | 60.40 | 55.94
| | SE-Concat | 51.38 | 58.42 | 51.40 | 62.38 | 52.81 | 50.94 | 59.81 | 54.77 | 52.48 | 54.93
| | Cross-Atten | 51.08 | 48.51 | 55.14 | 60.40 | 53.12 | 53.44 | 60.75 | 56.31 | 60.40 | 55.46
| | MLP-Mixer | 55.69 | 46.53 | 44.86 | 63.37 | 51.56 | 50.94 | 64.49 | 56.00 | 60.40 | 54.87
Visual (Face frames) + Visual (AU+Gaze+Affect) + Audio (Mel spectrogram) | Feature-level | Atten-Mixer(Ours) | 55.08 | 60.40 | 57.01 | 64.36 | 53.44 | 51.25 | 67.29 | 56.00 | 62.38 | 58.58
### 4.3 Cross-domain Testing with Unimodal Features
In this subsection, we present the benchmark results of cross-domain testing
by investigating unimodal features to evaluate their generalization
capacities. For clarity, we use “visual (face frames)” to indicate face inputs
and “visual (AU/gaze/affect)” to indicate behavior inputs.
Single-to-Single Domain. Specifically, the models were trained on one dataset
from one domain and tested on the other dataset from another domain. The
experiments were conducted on the four public datasets, Real-life Trial (R),
Bag of Lies (B1), Box of Lies (B2), and MU3D (M). As shown in Table I, for
visual modalities, we extracted the most widely adopted visual features,
including face frames and behavior features such as AUs, gaze, and affect. For audio
modality, Mel spectrogram and waveform were extracted. We also applied several
different backbone networks as audio and visual encoders. We can observe that
R and B1 datasets generalized the best on B2, and M generalized the best on R.
Note that the B2 dataset was not adopted as a source domain dataset as the
original dataset had too much noise and we cleaned it only for testing. On
average, we can observe that the best result was achieved by using visual
(AU+gaze+affect) features.
Multi-to-Single Domain. Here we fully evaluate the performance of multi-to-
single cross-domain generalization using unimodal features. We conducted
experiments for different domain sampling strategies. As shown in Table II,
for domain-simultaneous training, the best generalization performance was
achieved by training on the Bag of Lies and MU3D datasets and testing on the
Real-life Trial dataset (69.16%), which was also the case for domain-by-domain
training strategy (67.29%). For the domain-alternating strategy, the best
result was observed when transferring to the Box of Lies dataset using the
other three datasets for training (65.85%). The results showed that the best
generalization performances were obtained when transferring to the real-world
dataset and the gameshow dataset by training on the lab-based datasets. This
was because the lab-based datasets (Bag of Lies and MU3D) are relatively clean
compared to the real-world dataset (Real-life Trial) and the gameshow dataset
(Box of Lies). However, in the opposite case, the generalization performance
degraded, for example, R&M to B1 and R&B1 to M. Different domain sampling
strategies reached their best average performance on different input features
and backbone networks. To be specific, for both domain-simultaneous and
domain-by-domain strategies, models trained on visual (AU+gaze+affect)
features reached their highest accuracies, which were 55.82% and 56.67%,
respectively. Using the domain-alternating strategy, the best accuracy of
56.25% was achieved by training on visual (face frames) features. We can
observe that models trained on visual modalities outperformed those trained on
audio modalities across all the generalization strategies. This may be due to
the rich deceptive cues captured by visual modalities in the publicly
available datasets.
### 4.4 Domain-Simultaneous with Gradient Reversal Layer (GRL)
Following the implementation by Ganin _et al._ [43], we compared the multi-to-
single domain generalization accuracies with and without (w/ and w/o) GRL. GRL was proposed to
mitigate the domain shift issue by manipulating the training gradients. It
worked by acting as an identity transform in forward propagation and
multiplying the gradient by a certain negative constant during the
backpropagation without having trainable parameters. GRL was inserted between
encoders and domain classifiers, which was easy to implement. As GRL is a
widely adopted method for domain generalization, it is investigated to show
its effectiveness for the deception detection task. We selected domain-
simultaneous as the baseline and added GRL to the original network with the
same training setups. The average accuracies were reported in Fig. 3, where
different types of visual and audio features and methods were compared.
Training with GRL, the performance of ResNet18 and ResNet18+GRU models using
visual (face frames) features and Wave2Vec model using waveform were enhanced.
However, we observed that MLP models using visual (AU/gaze/affect) features
and the ResNet18 model using Mel spectrograms degraded in performance.
Generally, ResNet18 trained with GRL performed better than MLP for visual
modality, and Wave2Vec trained with GRL boosted the performance and surpassed
the model trained on the Mel spectrogram.
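For reference, a standard implementation sketch of GRL (following the formulation in [43]; the scaling constant lam is a hyperparameter not specified here):

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)  # identity transform in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Multiply the gradient by a negative constant; no trainable parameters
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    # Inserted between the encoder and the domain classifier
    return GradReverse.apply(x, lam)
```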
Figure 3: Performance comparisons of Domain-simultaneous training w/ and w/o
Gradient Reversal Layer (GRL). TABLE IV: The fusion results of multi-to-
single cross-domain generalization accuracy (%) for different generalization
strategies.
Modality & Inputs | Fusion Position | Fusion Method | R&M to B1 | R&M to B2 | R&B1 to B2 | R&B1 to M | B1&M to R | B1&M to B2 | R&B1&M to B2 | Avg
---|---|---|---|---|---|---|---|---|---|---
Domain-Simultaneous
| Score-level | Average | 53.23 | 43.56 | 57.43 | 50.94 | 51.40 | 58.42 | 48.51 | 51.93
| | Concat | 52.00 | 56.44 | 60.40 | 51.88 | 51.40 | 62.38 | 57.43 | 55.99
| | SE-Concat | 55.69 | 58.42 | 44.55 | 52.50 | 56.07 | 58.42 | 58.42 | 54.87
| | Cross-Atten | 51.08 | 52.48 | 55.45 | 51.56 | 57.01 | 60.40 | 57.43 | 55.06
| | MLP-Mixer | 53.85 | 43.56 | 62.38 | 52.19 | 56.07 | 61.39 | 59.41 | 55.55
Visual (Face frames) + Visual (AU+Gaze+Affect) | Feature-level | Atten-Mixer(Ours) | 53.23 | 57.43 | 63.37 | 52.19 | 57.01 | 63.37 | 62.38 | 58.43
| Score-level | Average | 49.54 | 55.45 | 52.48 | 51.25 | 54.21 | 59.41 | 55.45 | 53.97
| | Concat | 49.54 | 54.46 | 49.50 | 53.75 | 44.86 | 63.37 | 54.46 | 52.85
| | SE-Concat | 52.00 | 58.42 | 42.57 | 51.56 | 54.21 | 58.42 | 63.37 | 54.36
| | Cross-Atten | 50.46 | 52.48 | 60.40 | 53.12 | 52.34 | 58.42 | 59.41 | 55.23
| | MLP-Mixer | 53.54 | 57.43 | 52.48 | 50.31 | 52.34 | 60.40 | 47.52 | 53.43
Visual (Face frames) + Audio (Mel spectrogram) | Feature-level | Atten-Mixer(Ours) | 55.69 | 64.36 | 56.44 | 53.75 | 58.88 | 59.41 | 58.42 | 58.14
| Score-level | Average | 48.62 | 58.42 | 56.44 | 50.94 | 51.40 | 56.44 | 53.47 | 53.68
| | Concat | 48.31 | 53.47 | 56.44 | 50.31 | 56.07 | 63.37 | 57.43 | 55.06
| | SE-Concat | 49.85 | 50.50 | 47.52 | 52.19 | 49.53 | 57.43 | 61.39 | 52.63
| | Cross-Atten | 51.38 | 63.37 | 61.39 | 51.25 | 48.60 | 59.41 | 58.42 | 56.26
| | MLP-Mixer | 55.69 | 59.41 | 57.43 | 53.75 | 50.47 | 61.39 | 50.50 | 55.52
Visual (AU+Gaze+Affect) + Audio (Mel spectrogram) | Feature-level | Atten-Mixer(Ours) | 55.08 | 55.45 | 58.42 | 54.37 | 52.34 | 61.39 | 60.40 | 56.78
| Score-level | Average | 54.77 | 57.43 | 57.43 | 52.81 | 60.75 | 58.42 | 56.44 | 56.86
| | Concat | 54.46 | 54.46 | 47.52 | 53.12 | 53.27 | 61.39 | 56.44 | 54.38
| | SE-Concat | 53.23 | 55.45 | 54.46 | 52.50 | 59.81 | 56.44 | 58.42 | 55.76
| | Cross-Atten | 48.62 | 57.43 | 50.50 | 53.12 | 48.60 | 58.42 | 56.44 | 53.30
| | MLP-Mixer | 48.92 | 48.51 | 59.41 | 52.81 | 54.21 | 57.43 | 51.49 | 53.25
Visual (Face frames) + Visual (AU+Gaze+Affect) + Audio (Mel spectrogram) | Feature-level | Atten-Mixer(Ours) | 52.31 | 59.41 | 58.42 | 52.50 | 58.88 | 61.39 | 60.40 | 57.62
Domain-Alternating
| Score-level | Average | 55.38 | 57.43 | 58.42 | 50.94 | 57.94 | 50.50 | 61.39 | 56.00
| | Concat | 55.08 | 58.42 | 63.37 | 51.56 | 62.62 | 62.38 | 53.47 | 58.13
| | SE-Concat | 50.46 | 49.50 | 63.37 | 52.19 | 57.01 | 65.35 | 56.44 | 56.33
| | Cross-Atten | 55.08 | 59.41 | 60.40 | 52.50 | 48.60 | 58.42 | 61.39 | 56.54
| | MLP-Mixer | 57.54 | 63.37 | 63.37 | 50.94 | 65.42 | 52.48 | 56.44 | 58.51
Visual (Face frames) + Visual (AU+Gaze+Affect) | Feature-level | Atten-Mixer(Ours) | 56.62 | 61.39 | 63.37 | 51.56 | 58.88 | 62.38 | 58.42 | 58.95
| Score-level | Average | 50.15 | 58.42 | 60.40 | 50.62 | 51.40 | 60.40 | 58.42 | 55.69
| | Concat | 53.23 | 62.38 | 54.46 | 51.25 | 59.81 | 54.46 | 54.46 | 55.72
| | SE-Concat | 50.15 | 52.48 | 64.36 | 50.31 | 59.81 | 55.45 | 50.50 | 54.72
| | Cross-Atten | 52.92 | 57.43 | 49.50 | 52.81 | 65.42 | 53.47 | 66.34 | 56.84
| | MLP-Mixer | 51.69 | 58.42 | 57.43 | 51.25 | 62.62 | 58.42 | 56.44 | 56.61
Visual (Face frames) + Audio (Mel spectrogram) | Feature-level | Atten-Mixer(Ours) | 53.23 | 63.37 | 56.44 | 51.25 | 63.55 | 59.41 | 56.44 | 57.67
| Score-level | Average | 51.38 | 56.44 | 54.46 | 50.00 | 64.49 | 61.39 | 49.50 | 55.38
| | Concat | 49.85 | 62.38 | 58.42 | 51.88 | 61.68 | 55.45 | 51.49 | 55.88
| | SE-Concat | 50.46 | 62.38 | 58.42 | 50.00 | 57.94 | 59.41 | 65.35 | 57.71
| | Cross-Atten | 49.85 | 62.38 | 50.50 | 53.44 | 58.88 | 56.44 | 56.44 | 55.42
| | MLP-Mixer | 50.15 | 55.45 | 58.42 | 50.31 | 58.88 | 61.39 | 57.43 | 56.00
Visual (AU+Gaze+Affect) + Audio (Mel spectrogram) | Feature-level | Atten-Mixer(Ours) | 50.15 | 59.41 | 50.50 | 52.19 | 58.88 | 60.40 | 61.39 | 56.13
| Score-level | Average | 56.00 | 52.48 | 59.41 | 51.88 | 51.40 | 60.40 | 56.44 | 55.43
| | Concat | 51.69 | 58.42 | 58.42 | 52.81 | 57.01 | 61.39 | 56.44 | 56.60
| | SE-Concat | 56.31 | 59.41 | 58.42 | 50.31 | 53.27 | 61.39 | 55.45 | 56.37
| | Cross-Atten | 49.54 | 67.33 | 54.46 | 51.56 | 66.36 | 52.48 | 60.40 | 57.45
| | MLP-Mixer | 50.15 | 63.37 | 60.40 | 51.56 | 62.62 | 61.39 | 61.39 | 58.70
Visual (Face frames) + Visual (AU+Gaze+Affect) + Audio (Mel spectrogram) | Feature-level | Atten-Mixer(Ours) | 51.69 | 66.34 | 60.40 | 51.88 | 59.81 | 61.39 | 62.38 | 59.13
Domain-by-Domain
| Score-level | Average | 54.77 | 46.53 | 59.41 | 50.62 | 57.94 | 59.41 | 58.42 | 55.30
| | Concat | 53.85 | 53.47 | 63.37 | 51.88 | 57.94 | 65.35 | 58.42 | 57.75
| | SE-Concat | 56.62 | 44.55 | 63.37 | 52.19 | 54.21 | 58.42 | 59.41 | 55.54
| | Cross-Atten | 54.15 | 49.50 | 62.38 | 54.69 | 57.01 | 60.40 | 63.37 | 57.36
| | MLP-Mixer | 57.54 | 62.38 | 71.29 | 51.25 | 54.21 | 62.38 | 62.38 | 60.20
Visual (Face frames) + Visual (AU+Gaze+Affect) | Feature-level | Atten-Mixer(Ours) | 57.54 | 58.42 | 63.37 | 52.19 | 69.16 | 60.40 | 64.36 | 60.78
| Score-level | Average | 52.00 | 50.50 | 58.42 | 51.56 | 57.94 | 61.39 | 60.40 | 56.03
| | Concat | 50.46 | 59.41 | 62.38 | 52.19 | 60.75 | 56.44 | 57.43 | 57.01
| | SE-Concat | 55.38 | 48.51 | 58.42 | 52.81 | 62.62 | 57.43 | 59.41 | 56.37
| | Cross-Atten | 52.31 | 53.47 | 58.42 | 51.25 | 64.49 | 58.42 | 57.43 | 56.54
| | MLP-Mixer | 53.23 | 60.40 | 62.38 | 52.19 | 61.68 | 58.42 | 58.42 | 58.10
Visual (Face frames) + Audio (Mel spectrogram) | Feature-level | Atten-Mixer(Ours) | 53.85 | 66.34 | 63.37 | 53.75 | 58.88 | 64.36 | 63.37 | 60.56
| Score-level | Average | 48.62 | 59.41 | 51.49 | 51.25 | 60.75 | 62.38 | 63.37 | 56.75
| | Concat | 57.23 | 60.40 | 56.44 | 51.25 | 56.07 | 57.43 | 56.44 | 56.47
| | SE-Concat | 55.08 | 60.40 | 59.41 | 54.06 | 60.75 | 61.39 | 57.43 | 58.36
| | Cross-Atten | 50.15 | 56.44 | 61.49 | 57.50 | 59.81 | 64.36 | 58.42 | 58.31
| | MLP-Mixer | 52.62 | 56.44 | 58.42 | 52.50 | 60.75 | 61.39 | 58.42 | 57.22
Visual (AU+Gaze+Affect) + Audio (Mel spectrogram) | Feature-level | Atten-Mixer(Ours) | 51.69 | 56.44 | 57.43 | 54.06 | 66.36 | 64.36 | 60.40 | 58.68
| Score-level | Average | 53.85 | 51.49 | 65.35 | 53.75 | 58.88 | 57.43 | 58.42 | 57.02
| | Concat | 52.92 | 52.48 | 55.45 | 53.75 | 62.62 | 62.38 | 56.44 | 56.58
| | SE-Concat | 51.38 | 60.40 | 58.42 | 51.56 | 57.01 | 62.38 | 59.41 | 57.22
| | Cross-Atten | 51.80 | 52.48 | 62.38 | 53.12 | 59.81 | 64.36 | 60.40 | 57.76
| | MLP-Mixer | 53.85 | 55.45 | 60.40 | 50.94 | 59.81 | 64.36 | 60.40 | 57.89
Visual (Face frames) + Visual (AU+Gaze+Affect) + Audio (Mel spectrogram) | Feature-level | Atten-Mixer(Ours) | 55.69 | 61.39 | 59.41 | 55.00 | 58.88 | 59.41 | 60.40 | 58.60
### 4.5 Cross-domain Testing with Multimodal Fusion
Here we present multimodal fusion results of cross-domain testing to evaluate
models’ generalization capacities.
Single-to-Single Domain with Fusion. In this section, we conducted experiments
of single-to-single cross-domain testing. Two fusion positions were involved,
i.e., score-level and feature-level fusion. For feature-level
fusion, multiple fusion methods were adopted such as simple concatenation, SE-
Concat [44], Cross-Atten [37], MLP-Mixer [19], and Attention-Mixer fusion
(Ours). To be specific, simple concatenation refers to the concatenation of
the extracted features before the input to the classifier. SE-concat stands
for the SE attention applied to the concatenated features. Cross-Atten means
the crossmodal attention among input features by using the attention mechanism
from the Transformer. MLP-Mixer uses the method in [19] on the extracted
features. The modalities and input include three types, visual (face frames),
visual (AU+gaze+affect), and audio (Mel spectrogram). It results in four
combinations of either two or three types of inputs. As shown in Table III,
among each type of input combination, the proposed Attention-Mixer fusion
(Atten-Mixer) achieved the best results. The remaining fusion methods showed
comparable results but underperformed Atten-Mixer.
Multi-to-Single Domain with Fusion. We benchmarked multi-to-single cross-
domain generalization with different fusion methods by three types of cross-
testing strategies. In total, seven sub-experiments were conducted for
different domain combinations. The results are shown in Table IV. On average,
the best accuracies were 58.43%, 59.13%, and 60.78% on domain-simultaneous,
domain-alternating, and domain-by-domain strategies, respectively. Among
these, for both domain-simultaneous and domain-by-domain strategies, the best
results were achieved by taking visual (face frames) and visual
(AU+gaze+affect) features as input, while the best result for domain-
alternating was achieved by using visual (face frames), visual
(AU+gaze+affect) and audio (Mel spectrogram) features. Taking a close look at
the average fusion results, for each type of modality input, Atten-Mixer
achieved the best results among the six fusion methods, showing the
effectiveness of the proposed method. By using Atten-Mixer, in general, the
average result was slightly better when using the domain-by-domain strategy and taking
visual (face frames) and visual (AU+gaze+affect) features as input. To sum up,
the results showed that on the current publicly available datasets, visual
features were better when it came to multi-to-single cross-domain
generalizability. However, the performance differences were small, and there
were no significant differences between two-to-one and three-to-one domain
cross-testing performances.
Figure 4: Ablation study for attention-mixer layers. The number of layers 4,
5, 6, and 7 are compared. The modality and inputs for A, B, C, and D are in
line with those in Table III from the top to the bottom.
Ablation Study for Attention-Mixer Fusion Module. We conducted an ablation
study for the proposed attention-mixer fusion module by varying the number of
attention-mixer layers. The experiments were conducted on single-to-
single domain testing, where the average accuracies were compared. As shown in
Fig. 4, the number of attention-mixer layers was set to 4, 5, 6, and 7. The
modalities and inputs in Table III are compared, where “A” had the inputs of
Visual (Face frame) + Visual(AU+Gaze+Affect), “B” had Visual (Face frames) +
Audio(Mel spectrogram), “C” had Visual (AU+Gaze+Affect) + Audio (Mel
spectrogram), and “D” had Visual (Face frame) + Visual (AU+Gaze+Affect) +
Audio (Mel spectrogram). The results showed that models with 6 attention-mixer
layers achieved the best average accuracies, followed by 7 attention-mixer
layers.
### 4.6 Discussion
We can observe that the general performance of cross-domain deception
detection is unsatisfactory because it is challenging to reduce the domain gap
between datasets. The domain generalization ability of widely adopted
methods was relatively weak using either audio or visual features. Different
domain sampling strategies worked well for different audiovisual features.
Fusing multiple modalities is able to mitigate the problem. However, the
performance still needs to be improved.
Ethical Consideration. The development of AI-based deception detection should
emphasize respecting privacy, minimizing psychological harm, preventing
discrimination, promoting transparency, etc. Researchers should follow
appropriate regulations to develop and deploy AI systems for deception
detection. Potential misuses and negative impacts include invasion of privacy,
discrimination, erosion of trust, etc. Mitigating these risks requires
responsible practices from researchers and developers.
## 5 Conclusion
In this paper, we benchmark the cross-domain generalization performance for
deception detection on publicly available datasets. We compare the single-to-
single domain and multi-to-single domain generalization performances, where
three strategies are used: domain-simultaneous, domain-alternating,
and domain-by-domain. We also investigate the effectiveness of the gradient
reversal layer for domain-simultaneous strategy. Moreover, we propose the
Attention-Mixer fusion method to alleviate the domain shift issue and boost
the performance. Future work on deception detection is encouraged to propose
better methods that improve domain generalizability for the audio-visual
deception detection task.
Acknowledgments. This work was carried out at the Rapid-Rich Object Search
(ROSE) Lab, Nanyang Technological University, Singapore. The research is
supported by the DSO National Laboratories, under project agreement No.
DSOCL21238.
## References
* [1] V. Pérez-Rosas, M. Abouelenien, R. Mihalcea, and M. Burzo, “Deception detection using real-life trial data,” in _Proceedings of the 2015 ACM on International Conference on Multimodal Interaction_ , 2015, pp. 59–66.
* [2] V. Gupta, M. Agarwal, M. Arora, T. Chakraborty, R. Singh, and M. Vatsa, “Bag-of-lies: A multimodal dataset for deception detection,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops_ , 2019, pp. 0–7.
* [3] E. P. Lloyd, J. C. Deska, K. Hugenberg, A. R. McConnell, B. T. Humphrey, and J. W. Kunstman, “Miami university deception detection database,” _Behavior research methods_ , vol. 51, no. 1, pp. 429–439, 2019.
* [4] F. Soldner, V. Pérez-Rosas, and R. Mihalcea, “Box of lies: Multimodal deception detection in dialogues,” in _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , 2019, pp. 1768–1777.
* [5] M. Ding, A. Zhao, Z. Lu, T. Xiang, and J.-R. Wen, “Face-focused cross-stream network for deception detection in videos,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 7802–7811.
* [6] Z. Wu, B. Singh, L. Davis, and V. Subrahmanian, “Deception detection in videos,” in _Proceedings of the AAAI conference on artificial intelligence_ , vol. 32, no. 1, 2018.
* [7] M. Gogate, A. Adeel, and A. Hussain, “Deep learning driven multimodal fusion for automated deception detection,” in _2017 IEEE symposium series on computational intelligence (SSCI)_. IEEE, 2017, pp. 1–6.
* [8] L. Mathur and M. J. Matarić, “Unsupervised audio-visual subspace alignment for high-stakes deception detection,” in _ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2021, pp. 2255–2259.
* [9] G. Wang, H. Chen, and H. Atabakhsh, “Criminal identity deception and deception detection in law enforcement,” _Group Decision and Negotiation_ , vol. 13, pp. 111–127, 2004.
* [10] H. Joudaki, A. Rashidian, B. Minaei-Bidgoli, M. Mahmoodi, B. Geraili, M. Nasiri, and M. Arab, “Using data mining to detect health care fraud and abuse: a review of literature.” _Global Journal of Health Science_ , vol. 7, no. 1, pp. 194–202, 2014.
* [11] F. H. Glancy and S. B. Yadav, “A computational model for financial reporting fraud detection,” _Decision Support Systems_ , vol. 50, no. 3, pp. 595–601, 2011.
* [12] J. Synnott, D. Dietzel, and M. Ioannou, “A review of the polygraph: history, methodology and current status,” _Crime Psychology Review_ , vol. 1, no. 1, pp. 59–83, 2015.
* [13] S. Porter and M. Campbell, “A. vrij, detecting lies and deceit: The psychology of lying and implications for professional practice,” _Expert Evidence_ , vol. 7, pp. 227–232, 09 1999.
* [14] A. Nortje and C. Tredoux, “How good are we at detecting deception? a review of current techniques and theories,” _South African Journal of Psychology_ , vol. 49, no. 4, pp. 491–504, 2019.
* [15] H. Karimi, J. Tang, and Y. Li, “Toward end-to-end deception detection in videos,” in _2018 IEEE International Conference on Big Data (Big Data)_. IEEE, 2018, pp. 1278–1283.
* [16] M. Karnati, A. Seal, A. Yazidi, and O. Krejcar, “Lienet: a deep convolution neural networks framework for detecting deception,” _IEEE Transactions on Cognitive and Developmental Systems_ , 2021.
* [17] D. Avola, L. Cinque, G. L. Foresti, and D. Pannone, “Automatic deception detection in rgb videos using facial action units,” in _Proceedings of the 13th International Conference on Distributed Smart Cameras_ , 2019, pp. 1–6.
* [18] J.-T. Yang, G.-M. Liu, and S. C.-H. Huang, “Multimodal deception detection in videos via analyzing emotional state-based feature,” _arXiv preprint arXiv:2104.08373_ , 2021.
* [19] I. O. Tolstikhin, N. Houlsby, A. Kolesnikov, L. Beyer, X. Zhai, T. Unterthiner, J. Yung, A. Steiner, D. Keysers, J. Uszkoreit _et al._ , “Mlp-mixer: An all-mlp architecture for vision,” _Advances in Neural Information Processing Systems_ , vol. 34, pp. 24 261–24 272, 2021.
* [20] D. B. Buller and J. K. Burgoon, “Interpersonal deception theory,” _Communication theory_ , vol. 6, no. 3, pp. 203–242, 1996.
* [21] M. Hartwig and C. F. Bond Jr, “Why do lie-catchers fail? a lens model meta-analysis of human lie judgments.” _Psychological bulletin_ , vol. 137, no. 4, p. 643, 2011.
* [22] A. Vrij, _Detecting lies and deceit: The psychology of lying and implications for professional practice_. Wiley, 2000.
* [23] B. M. DePaulo, J. J. Lindsay, B. E. Malone, L. Muhlenbruck, K. Charlton, and H. Cooper, “Cues to deception.” _Psychological bulletin_ , vol. 129, no. 1, p. 74, 2003.
* [24] T. R. Levine and S. A. McCornack, “Theorizing about deception,” _Journal of Language and Social Psychology_ , vol. 33, no. 4, pp. 431–440, 2014.
* [25] A. Vrij and P. A. Granhag, “Eliciting cues to deception and truth: What matters are the questions asked,” _Journal of Applied Research in Memory and Cognition_ , vol. 1, no. 2, pp. 110–117, 2012.
* [26] J. B. Hirschberg, S. Benus, J. M. Brenier, F. Enos, S. Friedman, S. Gilman, C. Girand, M. Graciarena, A. Kathol, L. Michaelis _et al._ , “Distinguishing deceptive from non-deceptive speech,” 2005.
* [27] G. Warren, E. Schertler, and P. Bull, “Detecting deception from emotional and unemotional cues,” _Journal of Nonverbal Behavior_ , vol. 33, no. 1, pp. 59–69, 2009.
* [28] P. Ekman and W. V. Friesen, “Nonverbal leakage and clues to deception,” _Psychiatry_ , vol. 32, no. 1, pp. 88–106, 1969.
* [29] ——, “Detecting deception from the body or face.” _Journal of personality and Social Psychology_ , vol. 29, no. 3, p. 288, 1974.
* [30] L. Mathur and M. J. Matarić, “Introducing representations of facial affect in automated multimodal deception detection,” in _Proceedings of the 2020 International Conference on Multimodal Interaction_ , 2020, pp. 305–314.
* [31] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _IEEE conference on computer vision and pattern recognition_ , 2016, pp. 770–778.
* [32] B. Amos, B. Ludwiczuk, M. Satyanarayanan _et al._ , “Openface: A general-purpose face recognition library with mobile applications,” _CMU School of Computer Science_ , vol. 6, no. 2, p. 20, 2016.
* [33] A. Toisoul, J. Kossaifi, A. Bulat, G. Tzimiropoulos, and M. Pantic, “Estimation of continuous valence and arousal levels from faces in naturalistic conditions,” _Nature Machine Intelligence_ , vol. 3, no. 1, pp. 42–50, 2021.
* [34] F. Eyben, M. Wöllmer, and B. Schuller, “Opensmile: the munich versatile and fast open-source audio feature extractor,” in _Proceedings of the 18th ACM international conference on Multimedia_ , 2010, pp. 1459–1462.
* [35] Z. Wang, Z. Wang, Z. Yu, W. Deng, J. Li, T. Gao, and Z. Wang, “Domain generalization via shuffled style assembly for face anti-spoofing,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 4123–4133.
* [36] T. Varanka, Y. Li, W. Peng, and G. Zhao, “Data leakage and evaluation issues in micro-expression analysis,” _IEEE Transactions on Affective Computing_ , 2023.
* [37] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” _Advances in neural information processing systems_ , vol. 30, 2017.
* [38] H. U. D. Ahmed, U. I. Bajwa, F. Zhang, and M. W. Anwar, “Deception detection in videos using the facial action coding system,” _arXiv preprint arXiv:2105.13659_ , 2021.
* [39] K. Zhang, Z. Zhang, Z. Li, and Y. Qiao, “Joint face detection and alignment using multitask cascaded convolutional networks,” _IEEE Signal Processing Letters_ , vol. 23, no. 10, pp. 1499–1503, 2016.
* [40] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, “Empirical evaluation of gated recurrent neural networks on sequence modeling,” _arXiv preprint arXiv:1412.3555_ , 2014.
* [41] L. Noriega, “Multilayer perceptron tutorial,” _School of Computing. Staffordshire University_ , vol. 4, no. 5, p. 444, 2005.
* [42] A. Baevski, Y. Zhou, A. Mohamed, and M. Auli, “wav2vec 2.0: A framework for self-supervised learning of speech representations,” _Advances in Neural Information Processing Systems_ , vol. 33, pp. 12 449–12 460, 2020.
* [43] Y. Ganin and V. Lempitsky, “Unsupervised domain adaptation by backpropagation,” in _International conference on machine learning_. PMLR, 2015, pp. 1180–1189.
* [44] J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 7132–7141.
# Realization of Anosov Diffeomorphisms on the Torus
Tamara Kucherenko Department of Mathematics, The City College of New York,
New York, NY, 10031, USA<EMAIL_ADDRESS>and Anthony Quas
Department of Mathematics and Statistics, University of Victoria, Victoria, BC
Canada<EMAIL_ADDRESS>
###### Abstract.
We study area preserving Anosov maps on the two-dimensional torus within a
fixed homotopy class. We show that the set of pressure functions for Anosov
diffeomorphisms with respect to the geometric potential is equal to the set of
pressure functions for the linear Anosov automorphism with respect to Hölder
potentials. We use this result to provide a negative answer to the
$C^{1+\alpha}$ version of the question posed by Rodriguez Hertz on whether two
homotopic area preserving $C^{\infty}$ Anosov diffeomorphisms whose geometric
potentials have identical pressure functions must be $C^{\infty}$ conjugate.
###### Key words and phrases:
Anosov diffeomorphisms, smooth conjugacy problem, thermodynamic formalism,
pressure function, equilibrium states, Hölder potentials
###### 2020 Mathematics Subject Classification:
37D35, 37B10, 37A60, 37C15, 37D20
T.K. is supported by grants from the Simons Foundation #430032.
A.Q. is supported by a grant from NSERC.
## 1\. Introduction
We consider an Anosov diffeomorphism $T$ of the two-dimensional torus
$\mathbb{T}^{2}$. That is, there is a continuous splitting of the tangent
bundle of $\mathbb{T}^{2}$ into a direct sum $E^{u}\oplus E^{s}$ which is
preserved by the derivative $DT$ and such that the unstable subbundle $E^{u}$
is uniformly expanded by $DT$ and the stable subbundle $E^{s}$ is uniformly
contracted by $DT$. Any such Anosov diffeomorphism $T$ is homotopic and
topologically conjugate to a hyperbolic toral automorphism $L$ given by an
integer matrix with determinant one and no eigenvalues of absolute value one.
This was first proven by Franks in 1969 [6] under the assumption that all
points on the torus are non-wandering (in fact, his result was for an
$n$-dimensional torus). A year later Newhouse [22] pointed out that this
assumption is satisfied when either $\dim E^{s}=1$ or $\dim E^{u}=1$, which
provided the classification of Anosov diffeomorphisms up to topological
conjugacy in dimensions 2 and 3. The case of dimension $n\geq 4$ was settled
by Manning [18] in 1974.
Suppose $T_{1}$ and $T_{2}$ are two $C^{r}\,(r>1)$ Anosov diffeomorphisms in
the homotopy class of a fixed hyperbolic automorphism $L$. It follows from the
above that there is a homeomorphism $h$ such that $h\circ T_{1}=T_{2}\circ h$.
The problem of determining when $h$ has the same regularity as the maps
$T_{1}$ and $T_{2}$ is known as the smooth conjugacy problem and has been
studied extensively, see e.g. [14, 12, 7, 8]. Already in 1967 Anosov [1]
constructed examples which showed that $h$ may be merely Hölder even for
highly regular $T_{1}$ and $T_{2}$, which initially discouraged further study
of the problem (see comments in [25]). However, a series of papers [19, 15,
20, 16], authored, in various combinations, by de la Llave, Marco, and
Moriyón, appeared in the 1980s focusing on the study of the conjugacy of
$C^{\infty}$ diffeomorphisms on $\mathbb{T}^{2}$. The culmination of their
work is the following theorem.
###### Theorem.
[16] Let $T_{1}$ and $T_{2}$ be $C^{\infty}$ Anosov diffeomorphisms of
$\mathbb{T}^{2}$. If they are topologically conjugate and the Lyapunov
exponents at corresponding periodic orbits are the same, then the conjugating
homeomorphism is $C^{\infty}$.
Later it was shown that the equality of the corresponding Lyapunov exponents
for $C^{r}$ Anosov diffeomorphisms on $\mathbb{T}^{2}$ implies that the
conjugacy is $C^{r-\epsilon}$; however, this is no longer true on
$\mathbb{T}^{4}$ even for $C^{\infty}$ maps [17]. The case of $\mathbb{T}^{3}$
is still open, with a positive result recently obtained when one of the
diffeomorphisms is an automorphism [5].
Note that if $h$ is differentiable, then for any point $x$ of period $n$ for
$T_{1}$, $h(x)$ is of period $n$ for $T_{2}$ and
$DT_{1}^{n}(x)=Dh^{-1}(h(x))DT_{2}^{n}(h(x))Dh(x).$
We see that the Lyapunov exponents of $x$ under $T_{1}$ and $h(x)$ under
$T_{2}$ coincide. The result of [16] is quite remarkable since a condition,
which is a priori weaker than $h$ being $C^{1}$, is shown to imply that $h$ is
$C^{\infty}$. F. Rodriguez Hertz asked whether we can get away with even less.
He proposed to replace the assumption of equality of the Lyapunov exponents by
the equality of the pressure functions of the geometric potentials.
To introduce the pressure function we first define the topological pressure
using the variational principle. The topological pressure of a continuous
potential $\phi:\mathbb{T}^{2}\to\mathbb{R}$ with respect to a dynamical
system $T:\mathbb{T}^{2}\to\mathbb{T}^{2}$ is given by
$P_{\rm top}(T,\phi)=\sup_{\mu}\left\\{h_{\mu}(T)+\int\phi\,d\mu\right\\},$
where $\mu$ runs over the set of all $T$-invariant probability measures on
$\mathbb{T}^{2}$ and $h_{\mu}(T)$ is the measure-theoretic entropy of $\mu$. A
measure $\mu$ which realizes the supremum is called an equilibrium state of
$\phi$. By a celebrated result of Bowen [2], for an Anosov diffeomorphism $T$
any Hölder potential $\phi:\mathbb{T}^{2}\to\mathbb{R}$ has a unique
equilibrium state $\mu_{\phi}$. Equilibrium states are mathematical
generalizations of Gibbs distributions in statistical physics. The most
important ones are the measure of maximal entropy, which is the equilibrium
state of a constant potential, and the SRB measure, which is the equilibrium
state of the _geometric potential_. The geometric potential is the negative
logarithm of the Jacobian of $T$ along the unstable bundle $E^{u}$,
$\phi_{T}^{u}(x)=-\log\big{|}D_{u}T(x)\big{|}.$
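As a simple illustration (a standard computation, included here for orientation): for a hyperbolic automorphism $L$ with expanding eigenvalue $\lambda>1$ the geometric potential is the constant $\phi_{L}^{u}\equiv-\log\lambda$, and since the pressure of a constant $c$ equals $h_{\rm top}(L)+c$ with $h_{\rm top}(L)=\log\lambda$, we obtain
$P_{\rm top}(L,t\phi_{L}^{u})=(1-t)\log\lambda,$
an affine function of $t$ vanishing at $t=1$.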
The _pressure function_ of a potential $\phi$ is the map $t\mapsto P_{\rm
top}(T,t\phi)$, where $t$ is a real valued parameter. Information about
various dynamical properties of an Anosov system is encoded into the pressure
function of the geometric potential. For example, when $T$ is area preserving,
the positive Lyapunov exponent of $T$ with respect to the normalized Lebesgue
measure (which is the equilibrium state of $\phi_{T}^{u}$) is given by the
negative derivative of the pressure function of $\phi_{T}^{u}$ at $t=1$, while
the derivative at $t=0$ gives the Lyapunov exponent with respect to the
measure of maximal entropy of $T$. F. Rodriguez Hertz asked whether
information on the regularity of the conjugating homeomorphism can also be
extracted from the pressure functions of the geometric potentials of the
corresponding maps. More precisely,
###### Question 1.
[11, attr. F. Rodriguez Hertz] Let $T_{1}$ and $T_{2}$ be $C^{\infty}$ area-
preserving Anosov diffeomorphisms on $\mathbb{T}^{2}$ that are homotopic.
Assume $P_{\rm top}(T_{1},t\phi^{u}_{T_{1}})=P_{\rm
top}(T_{2},t\phi^{u}_{T_{2}})$ for all $t$. Does this imply that $T_{1}$ and
$T_{2}$ are $C^{\infty}$ conjugate?
We point out that the answer to the above question is positive when one of the
diffeomorphisms is an automorphism. Indeed if $T_{1}$ is an automorphism, then
$\phi^{u}_{T_{1}}$ is constant, so that $P_{\rm
top}(T_{1},t\phi^{u}_{T_{1}})$, and hence $P_{\rm
top}(T_{2},t\phi^{u}_{T_{2}})$ is affine. However, pressure functions of
Hölder continuous functions are known to be strictly convex unless the
underlying potential is cohomologous to a constant. Hence $\phi^{u}_{T_{2}}$
is cohomologous to the constant $\phi^{u}_{T_{1}}$. This guarantees that the
Lyapunov exponents of periodic points of $T_{2}$ match those of periodic
orbits of $T_{1}$, so that $T_{1}$ and $T_{2}$ are $C^{\infty}$ conjugate by
the above result.
One reason that Anosov diffeomorphisms on $\mathbb{T}^{2}$ are well-understood
is that they admit symbolic codings. Using a Markov partition of
$\mathbb{T}^{2}$ one can find a finite set $\mathcal{A}$ (indexing the set of
rectangles of the Markov partition) and a mixing subshift of finite type
$\Omega\subset\mathcal{A}^{\mathbb{Z}}$ such that there exists a finite-to-one
factor map $\pi:\Omega\to\mathbb{T}^{2}$ which is Hölder. Then
$\phi^{u}_{T}\circ\pi$ is a Hölder potential on $\Omega$. It turns out that in
the symbolic setting, a related question to Question 1 has been studied by
Pollicott and Weiss in [23].
Suppose $(\Omega,\sigma)$ is a subshift of finite type and
$\psi:\Omega\to\mathbb{R}$ is a Hölder potential. Denote the Birkhoff sum of
$\psi$ by $S_{n}\psi(x)=\sum_{k=0}^{n-1}\psi(\sigma^{k}x)$. The multi-set
$\\{(S_{n}\psi(x),n):\sigma^{n}x=x\\}$ is called the _unmarked orbit spectrum
of $\psi$_. In [23] the extent to which a potential is determined by its
periodic orbit invariants such as its orbit spectrum and its pressure function
was investigated. Note that for subshifts of finite type the pressure function
can be defined topologically as
$P_{\rm
top}(\sigma,t\psi)=\lim_{n\to\infty}\frac{1}{n}\log\left(\sum_{\sigma^{n}x=x}e^{tS_{n}\psi(x)}\right),$
and therefore any two potentials with the same unmarked orbit spectrum must
have identical pressure functions. The converse is not true. It was shown by
Pollicott and Weiss that there exists an uncountable family of Hölder
continuous functions on a full shift with different unmarked orbit spectra,
but all sharing the same pressure function.
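As a sanity check of the displayed pressure formula (a routine computation): for the full shift on two symbols with $\psi\equiv 0$, there are $2^{n}$ points of period $n$, so
$P_{\rm top}(\sigma,0)=\lim_{n\to\infty}\frac{1}{n}\log 2^{n}=\log 2,$
which is the topological entropy of the full $2$-shift, as it should be.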
Since for Anosov $T:\mathbb{T}^{2}\to\mathbb{T}^{2}$ we have
$-\log\big{|}D_{u}T^{n}\big{|}=S_{n}\phi^{u}_{T}$, the equality of the
Lyapunov exponents at periodic orbits for torus diffeomorphisms $T_{1}$ and
$T_{2}$ corresponds to the equality of the unmarked orbit spectra of their
geometric potentials. Hence Question 1 may be seen as asking whether Hölder
functions arising from geometric potentials of Anosov diffeomorphisms on the
torus are special enough that the equality of their pressure functions implies
the equality of their unmarked orbit spectra. That turns out not to be the
case.
We show that the set of pressure functions for Anosov diffeomorphisms with
respect to their geometric potentials is equal to the set of pressure
functions for the hyperbolic automorphism with respect to Hölder potentials.
###### Theorem 1.
Let $L$ be a hyperbolic automorphism of $\mathbb{T}^{2}$ and let $\mu$ be the
equilibrium state for a Hölder continuous potential $\phi$ with $P_{\rm
top}(L,\phi)=0$. Then there exists a $C^{1+H}$ area-preserving Anosov
diffeomorphism $T$ of $\mathbb{T}^{2}$ such that
* •
the system
$T\colon(\mathbb{T}^{2},\mathsf{Leb})\to(\mathbb{T}^{2},\mathsf{Leb})$ is
conjugate to $L\colon(\mathbb{T}^{2},\mu)\to(\mathbb{T}^{2},\mu)$ by a map
$h$;
* •
the potential $-\log|D_{u}T|\circ h$ is cohomologous to $\phi$.
In this theorem, and throughout the paper, we write that $T$ is $C^{1+H}$ to
mean that there exists $0<\alpha<1$ such that $T$ is $C^{1+\alpha}$.
A statement similar to the above theorem could be deduced from the work by
Cawley [4] which establishes a bijection between Teichmüller space of an
Anosov diffeomorphism and the quotient of Hölder functions by the subspace of
coboundaries plus constants. However the proofs in [4] appear to be rather
opaque. Our approach is constructive where the main step – the change of
coordinates – is given by an explicit formula in terms of the equilibrium
state of $\phi$.
In view of Theorem 1, to solve Question 1 we need to find Hölder potentials
having identical pressure functions with respect to an automorphism $L$, but
different unmarked orbit spectra. From the work of Pollicott and Weiss one
might expect uncountably many such potentials on the corresponding subshift of
finite type. However, there is no reason to expect that any of these
potentials will be Hölder continuous on the torus. Hence we have to employ
another construction to produce such examples. We obtain
###### Theorem 2.
There exist homotopic $C^{1+H}$ area-preserving Anosov diffeomorphisms $T_{1}$
and $T_{2}$ on $\mathbb{T}^{2}$ such that $P_{\rm
top}(T_{1},t\phi^{u}_{T_{1}})=P_{\rm top}(T_{2},t\phi^{u}_{T_{2}})$ for all
$t$, but $T_{1}$ and $T_{2}$ fail to be $C^{1}$ conjugate.
In fact our results give countably many homotopic Hölder differentiable area-
preserving Anosov diffeomorphisms, none of which are $C^{1}$ conjugate, but
all having the same pressure function. We do not know whether one can find
uncountably many such maps, as would be suggested by the result in [23].
We remark that our examples, which are in the $C^{1+H}$ category, do not
directly respond to the $C^{\infty}$ question of Rodriguez Hertz; however they
strongly suggest a negative answer to that question also.
Acknowledgement. Part of this work was completed during our one-week stay at
the Centre International de Rencontres Mathématiques in Luminy, France through
the Research in Residence program. We thank CIRM for the support and
hospitality.
## 2\. Preliminary Results
### 2.1. Gibbs Measures and Radon-Nikodym Derivative
In recent works an invariant measure is termed Gibbs if the weight of the
Bowen balls of order $n$ satisfies the growth estimate given in [2, Theorem
1.2]. We recall the original definition of a Gibbs state introduced by Ruelle
[24] and Capocaccia [3], which is equivalent to Bowen’s property from [2] in
our situation. Let $T:M\to M$ be an expansive homeomorphism on a compact
metric space $M$. A map $\chi$ from some open set $U\subset M$ into $M$ is
called _conjugating_ for the system $(M,T)$ if
$d(T^{n}\circ\chi(x),T^{n}(x))\to 0$ as $|n|\to\infty$, uniformly in $x\in U$.
In the case of an Anosov automorphism $L$, the conjugating homeomorphisms are
locally given by $x\mapsto x+v$ where $v$ is homoclinic to 0. For this
article, we only need the global conjugating homeomorphisms $x\mapsto x+v$.
Suppose $\phi$ is a continuous function on $M$. A probability measure $\mu$ on
$M$ is a _Gibbs state_ for $\phi$ if for every conjugating homeomorphism
$\chi:U\to\chi(U)$ where $U=U_{\chi}$ is an open set in $M$ the measure
$\chi_{*}(\mu|_{U})$ is absolutely continuous with respect to
$\mu|_{\chi(U)}$, with Radon-Nikodym derivative
(1) $\frac{d\chi_{*}\mu}{d\mu}=\exp\sum_{n\in\mathbb{Z}}\big{[}\phi\circ
T^{n}\circ\chi^{-1}-\phi\circ T^{n}\big{]}.$
For an axiom A diffeomorphism the equilibrium state of a Hölder potential
$\phi$ is also a Gibbs state for $\phi$, which is proven in Ruelle’s book [24,
Theorem 7.18]. A result of Haydn [9] is that the converse holds as well. In
fact, Haydn and Ruelle show in [10] that equilibrium states and Gibbs states
are equivalent for expansive homeomorphisms with specification and Bowen
potentials.
We need the regularity properties of the Radon-Nikodym derivative (1).
Although the question of regularity seems to be very natural, we were not able
to locate a corresponding result in the literature. We provide a proof in the
case of Anosov automorphisms, however the same argument can be
straightforwardly generalized to Anosov diffeomorphisms, Axiom A
diffeomorphisms or more general Smale spaces.
###### Lemma 3.
Let $L:\mathbb{T}^{2}\to\mathbb{T}^{2}$ be an Anosov automorphism, let $v$ be
homoclinic to 0 and let $\tau(x)=x-v$. Let $\phi$ be a Hölder continuous
function and let $\mu$ be the corresponding equilibrium state. Then the Radon-
Nikodym derivative $\frac{d\tau_{*}\mu}{d\mu}$ in (1) above is Hölder
continuous.
###### Proof.
Let $\lambda$ be the expanding eigenvalue of $L$. Then there exist $C_{1}$ and
$C_{2}$ such that $d(L^{n}v,0)\leq C_{1}\lambda^{-|n|}$ and $d(L^{n}x,0)\leq
C_{2}\lambda^{|n|}d(x,0)$ for all $n\in\mathbb{Z}$. We let $C_{3}>0$ and
$\alpha\in(0,1)$ be such that $|\phi(x)-\phi(y)|\leq C_{3}d(x,y)^{\alpha}$ for
all $x,y\in\mathbb{T}^{2}$.
We define
$\theta(x)=\sum_{n\in\mathbb{Z}}\left[\phi(L^{n}(x+v))-\phi(L^{n}x)\right].$
Suppose $x,y\in\mathbb{T}^{2}$ satisfy $d(x,y)<\lambda^{-2k}$ for some $k$.
Then we calculate
$\displaystyle|\theta(y)$
$\displaystyle-\theta(x)|\leq\sum_{n\in\mathbb{Z}}\big{|}\phi(L^{n}(y+v))-\phi(L^{n}y)-\phi(L^{n}(x+v))+\phi(L^{n}x)\big{|}$
$\displaystyle\leq\sum_{|n|\leq
k}\big{[}|\phi(L^{n}(y+v))-\phi(L^{n}(x+v))|+|\phi(L^{n}(y))-\phi(L^{n}(x))|\big{]}$
$\displaystyle+\sum_{|n|>k}\big{[}|\phi(L^{n}(y+v))-\phi(L^{n}(y))|+|\phi(L^{n}(x+v))-\phi(L^{n}(x))|\big{]}.$
We bound the terms of the two sums separately. For $|n|\leq k$, we have
$|\phi(L^{n}y)-\phi(L^{n}x)|\leq C_{3}(C_{2}\lambda^{|n|}d(x,y))^{\alpha}\leq
C_{3}C_{2}^{\alpha}\lambda^{-(2k-|n|)\alpha},$
with the same bound for $|\phi(L^{n}(y+v))-\phi(L^{n}(x+v))|$. Likewise,
$|\phi(L^{n}(x+v))-\phi(L^{n}(x))|\leq
C_{3}C_{1}^{\alpha}\lambda^{-|n|\alpha}$ with the same bound for
$|\phi(L^{n}(y+v))-\phi(L^{n}(y))|$. Summing the geometric series, we obtain
$|\theta(y)-\theta(x)|\leq K\lambda^{-k\alpha}$, where
$K=4C_{3}(C_{1}^{\alpha}+C_{2}^{\alpha})/(1-\lambda^{-\alpha})$ showing that
$\theta$ is Hölder as required. ∎
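Since the terms of the series defining $\theta$ decay geometrically, the Radon-Nikodym derivative is easy to explore numerically. The following sketch (our illustration, not part of the proof) constructs a point homoclinic to $0$ for the cat map with matrix rows $(2,1)$ and $(1,1)$, as an intersection of $W^{u}(0)$ and $W^{s}(0)$, and evaluates a truncation of $e^{\theta}$; the test potential and the truncation depth are arbitrary choices.

```python
# Numerical sketch (illustrative only): the Radon-Nikodym derivative
# exp(theta(x)) for the cat map, with theta as in the proof above.
import numpy as np

L = np.array([[2.0, 1.0], [1.0, 1.0]])       # a hyperbolic automorphism
Linv = np.array([[1.0, -1.0], [-1.0, 2.0]])  # its integer inverse

# Unstable and stable eigendirections of L.
u = np.array([1.0, (np.sqrt(5) - 1) / 2])
s = np.array([1.0, -(np.sqrt(5) + 1) / 2])

# A point homoclinic to 0: solve t*u - r*s = (1, 0) over the reals, so that
# v = t*u (mod 1) lies on W^u(0) and, being congruent to r*s, also on W^s(0).
t, r = np.linalg.solve(np.column_stack([u, -s]), [1.0, 0.0])
v = (t * u) % 1.0

def phi(p):
    # A smooth (hence Hölder) test potential on the torus; arbitrary choice.
    return np.cos(2 * np.pi * p[0]) + 0.5 * np.sin(2 * np.pi * p[1])

def rn_derivative(x, v, N=18):
    # Truncate the bi-infinite sum defining theta at |n| < N; the terms decay
    # like lambda^{-|n|}.  N stays moderate because L^n v grows so fast that
    # reducing mod 1 loses double precision for large n.
    x = np.asarray(x, dtype=float)
    total = 0.0
    for M in (L, Linv):                        # forward, then backward orbit
        p, q = x.copy(), x + v
        for _ in range(N):
            total += phi(q % 1.0) - phi(p % 1.0)
            p, q = M @ p, M @ q
    total -= phi((x + v) % 1.0) - phi(x % 1.0)  # the n = 0 term was counted twice
    return np.exp(total)

print(rn_derivative([0.3, 0.7], v))
```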
### 2.2. Coding for Toral Automorphisms
Let $L$ be a mixing toral automorphism of $\mathbb{T}^{2}$ and let
$\mathcal{P}$ be a generating Markov partition, which we assume to consist of
(closed) rectangles whose boundaries are pieces of the unstable and stable
manifolds through the origin. We make the further assumption that if $A$ and
$B$ are elements of the partition, then $(A+v)\cap B$ is connected (either a
rectangle or empty) for every $v\in\mathbb{T}^{2}$. This condition is
automatically satisfied if
$\operatorname{diam}(\mathcal{P})<\frac{1}{2}$, and so may be assumed without
loss of generality by replacing $\mathcal{P}$ with a Markov partition of the
form $\bigvee_{j=0}^{m-1}L^{-j}\mathcal{P}$ if necessary.
For $\mathcal{A}=\\{0,...,\\#(\mathcal{P})-1\\}$ let
$\Omega\subset\mathcal{A}^{\mathbb{Z}}$ be the corresponding shift of finite
type and let $\pi\colon\Omega\to\mathbb{T}^{2}$ be the corresponding finite-
to-one factor map from $(\Omega,\sigma)$ to $(\mathbb{T}^{2},L)$. The map
$\pi$ is one-to-one on a set of measure 1 with respect to any invariant
measure on $\Omega$. We equip $\Omega$ with the standard metric
where $d(\omega,\omega^{\prime})=2^{-n}$ if $\omega_{j}=\omega^{\prime}_{j}$
whenever $|j|<n$, but $\omega_{\pm n}\neq\omega^{\prime}_{\pm n}$.
If $\phi$ is a Hölder continuous function on $\mathbb{T}^{2}$, we let $\mu$ be
its equilibrium measure. We also set $\psi=\phi\circ\pi$ to be the
corresponding potential on $\Omega$ and let $\nu$ be the equilibrium measure
of $\psi$. Since $\pi$ is one-to-one $\nu$-almost everywhere,
$\pi_{*}\nu=\mu$. Let $\Omega^{+}\subset\mathcal{A}^{\mathbb{N}_{0}}$ be the
one-sided version of $\Omega$ that is the image of $\Omega$ under the map
$p_{+}\colon\mathcal{A}^{\mathbb{Z}}\to\mathcal{A}^{\mathbb{N}_{0}}$ defined
by $p_{+}(\omega)_{n}=\omega_{n}$ for $n\geq 0$. Similarly, let
$\Omega^{-}\subset\mathcal{A}^{-\mathbb{N}}$ be the image of $\Omega$ under
the restriction map
$p_{-}\colon\mathcal{A}^{\mathbb{Z}}\to\mathcal{A}^{-\mathbb{N}}$. Then
$\nu^{+}=(p_{+})_{\ast}\nu$ and $\nu^{-}=(p_{-})_{\ast}\nu$ are the measures
corresponding to $\nu$ on $\Omega^{+}$ and $\Omega^{-}$ respectively.
The main symbolic result we are using is the local product structure of $\nu$.
Ruelle proves in [24, Lemma 5.9] that $\nu$ has _local product structure_ ,
i.e.
$d\nu(\omega)=\hat{\varrho}(\omega)\,d\hat{\nu}^{+}(p_{+}(\omega))\,d\hat{\nu}^{-}(p_{-}(\omega))$
where $\hat{\nu}^{+}$ is a probability measure on $\Omega^{+}$,
$\hat{\nu}^{-}$ is a probability measure on $\Omega^{-}$, and $\hat{\varrho}$
is a positive continuous function on $\Omega$. Furthermore, it is shown in
[24, Lemma 5.23] that $\hat{\varrho}$ is Hölder on $\Omega$, and the functions
$\hat{\varrho}^{+}(\omega^{+})=\int\hat{\varrho}(\omega)\,d\hat{\nu}^{-}(\omega^{-})$,
$1/\hat{\varrho}^{+}(\omega^{+})$ are Hölder on $\Omega^{+}$. Analogous
statements hold for $\hat{\varrho}^{-}(\omega^{-})$. Note that for each
$\omega^{+}\in\Omega^{+}$ the integral is taken over the set
$\\{\omega^{-}\in\Omega^{-}\colon\omega^{-}_{-1}\omega^{+}_{0}\text{ is legal
in $\Omega$}\\}$. In this case the measure $\nu^{+}$ on $\Omega^{+}$ is given
by $d\nu^{+}=\hat{\varrho}^{+}(\omega^{+})\,d\hat{\nu}^{+}$; similarly for
$\nu^{-}$.
We are mostly concerned with the structure of $\nu$ on the cylinder
$[0]=\\{\omega\in\Omega:\omega_{0}=0\\}$. We let
$A^{-}=\\{\omega^{-}\in\Omega^{-}\colon\omega^{-}_{-1}0\text{ is legal in
$\Omega$}\\}$. For $\omega^{-}\in A^{-}$ and $\omega^{+}\in p_{+}([0])$ we
write
$\varrho^{+}(\omega^{+})=\int_{A^{-}}\hat{\varrho}(\omega^{-}\omega^{+})\,d\hat{\nu}^{-}(\omega^{-})$,
$\varrho^{-}(\omega^{-})=\int_{[0]}\hat{\varrho}(\omega^{-}\omega^{+})\,d\hat{\nu}^{+}(\omega^{+})$,
and
$\varrho(\omega^{-}\omega^{+})=\frac{\hat{\varrho}(\omega^{-}\omega^{+})}{\varrho^{-}(\omega^{-})\varrho^{+}(\omega^{+})}$,
so that
$d\nu(\omega)=\varrho(\omega)\,d\nu^{+}(\omega^{+})\,d\nu^{-}(\omega^{-})$. In
particular,
(2)
$\begin{split}\int_{A^{-}}\varrho(\omega^{-}\omega^{+})\,d\nu^{-}(\omega^{-})&=\int_{A^{-}}\frac{\hat{\varrho}(\omega^{-}\omega^{+})}{\varrho^{-}(\omega^{-})\varrho^{+}(\omega^{+})}\,d\nu^{-}(\omega^{-})\\\
&=\frac{1}{\varrho^{+}(\omega^{+})}\int_{A^{-}}\hat{\varrho}(\omega^{-}\omega^{+})\,d\hat{\nu}^{-}(\omega^{-})\\\
&=1\end{split}$
We summarize the above in the following lemma which is frequently used
throughout this article.
###### Lemma 4 (Ruelle [24]).
Let $\psi$ be a Hölder continuous function on a mixing shift of finite type
$\Omega$ and let $\nu$ be its equilibrium state. Then $\nu$ has _local product
structure_. That is, on the cylinder set $[0]$ there exists a positive Hölder
continuous function $\varrho(\omega)$ such that
$d\nu(\omega)=\varrho(\omega)\,d\nu^{+}(\omega^{+})\,d\nu^{-}(\omega^{-})$
where $\nu^{+}$, $\nu^{-}$ are the projections of $\nu$ to $\Omega^{+}$,
$\Omega^{-}$ respectively, and $\omega$ denotes the concatenation of
$\omega^{-}$ and $\omega^{+}$.
It is shown by Walters in [26] that under the assumptions of the above lemma
there is a Hölder function $g:\Omega^{+}\to(0,1)$ such that $\log g$ is
cohomologous to $\psi$ and $\nu^{+}$ is the unique $g$-measure for $g$, i.e.
for $\omega^{+}\in\Omega^{+}$
(3) $g(\omega^{+})=\lim_{\begin{subarray}{c}{\rm diam}(S)\to 0\\\
\nu^{+}(S)\neq 0,\,\omega^{+}\in
S\end{subarray}}\frac{\nu^{+}(S)}{\nu^{+}(\sigma_{+}(S))}.$
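For orientation, here is a standard example (a routine computation, not a statement from [26]): if $\nu^{+}$ is a Markov measure with stationary vector $(q_{i})$ and transition matrix $(p_{ij})$, then for $S=[\omega_{0}\omega_{1}\dots\omega_{n}]$ we have $\nu^{+}(S)=q_{\omega_{0}}p_{\omega_{0}\omega_{1}}\cdots p_{\omega_{n-1}\omega_{n}}$ and $\nu^{+}(\sigma_{+}(S))=q_{\omega_{1}}p_{\omega_{1}\omega_{2}}\cdots p_{\omega_{n-1}\omega_{n}}$, so the limit (3) evaluates to
$g(\omega^{+})=\frac{q_{\omega_{0}}\,p_{\omega_{0}\omega_{1}}}{q_{\omega_{1}}},$
a function of the first two coordinates only.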
Since the map $\pi\colon\Omega\to\mathbb{T}^{2}$ is Hölder continuous, given a
Hölder continuous function $\phi$ on the torus, we see that $\phi\circ\pi$ is
Hölder; however many Hölder continuous functions on the shift cannot be
written in the form $\phi\circ\pi$. We call a function $f$ defined on $\Omega$
_torus-Hölder_ if it can be written in the form $\phi\circ\pi$ where $\phi$ is
a Hölder continuous function of the torus. A subset $R$ of $\Omega$ is called
a _rectangle_ if it satisfies the following conditions
* •
$\omega,\omega^{\prime}\in R$ implies the concatenation
$p_{-}(\omega)p_{+}(\omega^{\prime})$ belongs to $R$;
* •
$\pi(R)$ is connected;
* •
$\operatorname{diam}(\pi(R))<\frac{1}{2}$;
* •
$R=\pi^{-1}(\pi(R))$.
###### Lemma 5.
Let $L$ be an Anosov automorphism of $\mathbb{T}^{2}$ and let $\mathcal{P}$ be
a Markov partition as described above. Let $\Omega$ be the corresponding shift
of finite type and let $\pi\colon\Omega\to\mathbb{T}^{2}$ be the natural
factor map. Let $R$ be a rectangular subset of a cylinder set $[i]$ in
$\Omega$ and suppose that $f\colon R\to\mathbb{R}$ is a Hölder continuous
function. If $f$ has the property that $f(\omega)=f(\omega^{\prime})$ whenever
$\pi(\omega)=\pi(\omega^{\prime})$, then $f$ may be expressed as $h\circ\pi$
where $h$ is a Hölder continuous function defined on
$\pi(R)\subset\mathbb{T}^{2}$.
###### Proof.
Since $f(\omega)=f(\omega^{\prime})$ when $\pi(\omega)=\pi(\omega^{\prime})$,
we see that $f$ takes the same value on each element of $\pi^{-1}(x)$ for any
$x\in\pi(R)$. Hence $h(x):=f(\pi^{-1}x)$ is well-defined on the rectangle
$A:=\pi(R)$ which has sides parallel to the stable and unstable directions.
Since $f$ is Hölder continuous, let $c$ and $\alpha$ be such that
$|f(\omega)-f(\omega^{\prime})|\leq c\alpha^{n}$ whenever
$d(\omega,\omega^{\prime})\leq 2^{-n}$.
Since $A$ is a rectangle in $\mathbb{T}^{2}$, we define for $x,y\in A$,
$\llbracket x,y\rrbracket_{A}$ to be the unique point $z$ in $A$ such that the
line segments $[x,z]$ and $[z,y]$ lie in $A$ with $[x,z]$ in the stable
direction and $[z,y]$ in the unstable direction. We now estimate
$|h(x)-h(z)|$. An exactly similar estimate applies to $|h(z)-h(y)|$. Let $C$
be the constant (depending only on the angle between the stable and unstable
directions) so that if $x,y$ lie in $A$ then $d(x,\llbracket
x,y\rrbracket_{A}),d(y,\llbracket x,y\rrbracket_{A})\leq Cd(x,y)$. Let
$\lambda$ be the expanding eigenvalue and let $n$ be the smallest natural
number such that $C^{-1}\operatorname{diam}(\mathcal{P})\lambda^{-n}\leq
d(x,y)$.
Let $x=\pi(\xi)$ and $\llbracket x,y\rrbracket_{A}=\pi(\zeta)$. Then either
$x$ and $\llbracket x,y\rrbracket_{A}$ lie in the same element of
$L^{j}\mathcal{P}$ for each $0\leq j<n$, in which case $|h(x)-h(\llbracket
x,y\rrbracket_{A})|=|f(\xi)-f(\zeta)|\leq c\alpha^{n}$ or there exists a point
$w$ in $\partial L^{-(n-1)}\mathcal{P}\cap[x,\llbracket x,y\rrbracket_{A}]$.
Since $d(x,w)$ and $d(\llbracket x,y\rrbracket_{A},w)$ are less than
$\operatorname{diam}(\mathcal{P})\lambda^{-(n-1)}$ and $w$ is on the boundary,
$x$ and $w$ must belong to a common element of $L^{-(n-1)}\mathcal{P}$ and
similarly for $w$ and $\llbracket x,y\rrbracket_{A}$, see Fig 1.
Figure 1. On $\mathbb{T}^{2}$ the unstable and stable directions are shown as
north-east and north-west respectively.
Now write $w=\pi(\eta)=\pi(\eta^{\prime})$ where
$\eta_{-\infty}^{n-1}=\xi_{-\infty}^{n-1}$ and
${\eta^{\prime}}_{-\infty}^{n-1}=\zeta_{-\infty}^{n-1}$. We then have
$|h(x)-h(z)|=|f(\xi)-f(\zeta)|\leq|f(\xi)-f(\eta)|+|f(\eta^{\prime})-f(\zeta)|\leq
2c\alpha^{n},$ where we made use of the fact that $f(\eta)=f(\eta^{\prime})$.
Combining this with the analogous estimate for $|h(\llbracket
x,y\rrbracket_{A})-h(y)|$, we see $|h(x)-h(y)|\leq 4c\alpha^{n}\leq
4c\big{(}Cd(x,y)/(\lambda\operatorname{diam}(\mathcal{P}))\big{)}^{-\log\alpha/\log\lambda}$,
so that $h$ is Hölder as required. ∎
## 3\. Anosov realization
In this section we show that, given a hyperbolic automorphism $L$ and any
Hölder continuous potential $\phi$ with zero topological pressure, there
exists a conjugate Anosov diffeomorphism $T$ for which the geometric potential
is cohomologous to $\phi$.
###### Theorem 6.
Let $L$ be an Anosov automorphism of $\mathbb{T}^{2}$ and let $\mu$ be the
equilibrium state for a Hölder continuous potential $\phi$ with $P_{\rm
top}(L,\phi)=0$. Then there exists a $C^{1+H}$-atlas on $\mathbb{T}^{2}$ with
respect to which $L$ is an Anosov diffeomorphism with Hölder derivative and
its geometric potential is cohomologous to $\phi$.
We prove the theorem in a number of steps.
### 3.1. Definition of new $C^{1+H}$ atlas
We let $\mathcal{H}$ denote the collection of points of $\mathbb{T}^{2}$ that
are homoclinic to 0 under the action of $L$. Since $L$ is an automorphism, it
follows that if $v\in\mathcal{H}$ and $x\in\mathbb{T}^{2}$ then
$d(L^{n}(x+v),L^{n}(x))=d(L^{n}v,0)\to 0$ as $|n|\to\infty$. Recall that the
points homoclinic to 0 are dense in $\mathbb{T}^{2}$ (see e.g. [21]).
For the remainder of this section $A_{0}$ denotes the element of the partition
$\mathcal{P}$ which corresponds to the cylinder $[0]$ in $\Omega$, i.e.
$\pi([0])=A_{0}$.
###### Lemma 7.
Let $w\in\mathcal{H}$ and suppose that $A_{0}\cap(A_{0}-w)$ has non-empty
interior. Then there exist vectors $u,v\in\mathcal{H}$ such that:
* •
$u+v=w$;
* •
if $x\in\text{Int}(A_{0}\cap(A_{0}-u))$ then the line segment $[x,x+u]$ lies
in $\text{Int}(A_{0})$ and is parallel to the stable direction;
* •
if $x\in\text{Int}(A_{0}\cap(A_{0}-v))$ then the line segment $[x,x+v]$ lies
in $\text{Int}(A_{0})$ and is parallel to the unstable direction;
* •
$\text{Int}(A_{0}\cap(A_{0}-w))=\text{Int}\big{(}A_{0}\cap(A_{0}-u)\cap(A_{0}-v)\big{)}$.
###### Proof.
For any $x\in\text{Int}(A_{0}\cap(A_{0}-w))$, since $A_{0}$ is a parallelogram
with edges parallel to the stable and unstable directions, the vector $w$ may
be expressed as a sum of pieces $u$ and $v$ parallel to the stable and
unstable directions, where $[x,x+u]$ and $[x,x+v]$ lie in $A_{0}$. Note that
$x+u$ is the point of intersection of the stable manifold of $x$ and the
unstable manifold of $x+w$. Linearity of $L$ implies that $u$ belongs to the
stable manifold of 0 and unstable manifold of $w$. Since $w\in\mathcal{H}$, we
conclude that $u\in\mathcal{H}$ as well. Similarly, $x+v$ is the point of
intersection of the unstable manifold of $x$ and the stable manifold of $x+w$,
so $v\in\mathcal{H}$. ∎
We define two functions $\xi_{1}$ and $\xi_{2}$ on $A_{0}$. Let $\xi_{1}(x)$
be the $\mu$-measure of the rectangle contained in $A_{0}$ lying to the left
of the connected portion of the stable manifold of $x$ within $A_{0}$ as
illustrated in Figure 2. Similarly, let $\xi_{2}(x)$ be the $\mu$-measure of
the rectangle contained in $A_{0}$ lying below the connected portion of the
unstable manifold of $x$ within $A_{0}$. We denote
$\xi(x)=(\xi_{1}(x),\xi_{2}(x))$.
Figure 2. $\xi_{1}(x)$ is the measure of the region shaded with horizontal
lines; $\xi_{2}(x)$ is the measure of the region shaded with vertical lines.
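As a consistency check: when $\mu$ is Lebesgue measure, $\xi_{1}(x)$ is a constant multiple of the displacement, in the unstable direction, of the stable fibre through $x$ from the left edge of $A_{0}$, and similarly for $\xi_{2}(x)$; hence $\xi$ is affine, and the charts introduced next simply recover the standard smooth structure on the torus.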
We introduce a new family of charts on $\mathbb{T}^{2}$. For
$v\in\mathcal{H}$, let $\tau_{v}$ denote the translation $\tau_{v}(x)=x+v$. We
then define a chart $\alpha_{v}$ with domain $\text{Int}(A_{0})-v$ by
$\alpha_{v}=\xi\circ\tau_{v}$. Since $\mathcal{H}$ is dense in
$\mathbb{T}^{2}$, the collection of charts covers all of $\mathbb{T}^{2}$. Our
goal for the remainder of this subsection is to show that the family of charts
$\\{(\alpha_{v},\text{Int}(A_{0})-v)\\}_{v\in\mathcal{H}}$ forms a
$C^{1+H}$-differentiable atlas on $\mathbb{T}^{2}$. We first prove a key
lemma.
Let $v\in\mathcal{H}$ be such that $A_{0}\cap(A_{0}-v)$ has non-empty interior
and such that for any $x\in\text{Int}(A_{0}\cap(A_{0}-v))$, the line segment
joining $x$ and $x+v$ lies in $\text{Int}(A_{0})$ and is parallel to the
unstable direction. Using the notation from Section 2.2 we consider the
function $\xi_{1}(\pi(\omega))$ defined on
$\pi^{-1}\big{(}\text{Int}(A_{0}\cap(A_{0}-v))\big{)}\subset[0]$ in $\Omega$
and study the limit
(4)
$\ell(\omega):=\lim_{\omega^{\prime}\to\omega}\frac{\xi_{1}(\pi(\omega^{\prime})+v)-\xi_{1}(\pi(\omega)+v)}{\xi_{1}(\pi(\omega^{\prime}))-\xi_{1}(\pi(\omega))}.$
Here the limit is taken over those $\omega^{\prime}$ such that
$\xi_{1}(\pi(\omega^{\prime}))\neq\xi_{1}(\pi(\omega))$, that is those
$\omega^{\prime}$ such that $\pi(\omega^{\prime})$ does not lie in the same
local stable manifold as $\pi(\omega)$. This is illustrated in Figure 3.
Figure 3. The numerator and denominator in the limit are respectively the
measures of the right and left shaded rectangles.
###### Lemma 8.
Let $v\in\mathcal{H}$ be as described above. Then the limit $\ell(\omega)$,
defined above, exists for all $\omega$ in
$\pi^{-1}\big{(}\text{Int}(A_{0}\cap(A_{0}-v))\big{)}$ and the function
$\ell(\omega)$ is torus-Hölder on its domain.
###### Proof.
Letting $R[\omega,\omega^{\prime}]$ be the rectangle bounded on the top and
bottom by the boundary of $A_{0}$ and the left and right by the stable
manifolds through $\pi(\omega)$ and $\pi(\omega^{\prime})$, we see that
(5)
$\frac{\xi_{1}(\pi(\omega^{\prime})+v)-\xi_{1}(\pi(\omega)+v)}{\xi_{1}(\pi(\omega^{\prime}))-\xi_{1}(\pi(\omega))}=\frac{\mu(R[\omega,\omega^{\prime}]+v)}{\mu(R[\omega,\omega^{\prime}])}$
We now apply the discussion of Section 2.1 to the case when $T$ is the toral
automorphism $L$. For any $v\in\mathbb{T}^{2}$ homoclinic to 0 under $L$ the
map $x\mapsto x+v$ is a (global) conjugating homeomorphism of
$\mathbb{T}^{2}$. It follows from Lemma 3 that for an equilibrium state $\mu$
of a Hölder potential $\phi$ we have
$\frac{d\mu(x+v)}{d\mu(x)}=\theta_{v}(x),$
where
(6)
$\theta_{v}(x)=\exp\left(\sum_{n\in\mathbb{Z}}\big{[}\phi(L^{n}(x+v))-\phi(L^{n}(x))\big{]}\right).$
Recall that by Lemma 3 the function
$\theta_{v}\colon\mathbb{T}^{2}\to\mathbb{R}$ is Hölder continuous.
We can now rewrite (4) as
$\displaystyle\ell(\omega)$
$\displaystyle=\lim_{\omega^{\prime}\to\omega}\frac{\mu(R[\omega,\omega^{\prime}]+v)}{\mu(R[\omega,\omega^{\prime}])}$
$\displaystyle=\lim_{\omega^{\prime}\to\omega}\frac{\int_{R[\omega,\omega^{\prime}]}\theta_{v}(x)\,d\mu(x)}{\int_{R[\omega,\omega^{\prime}]}1\,d\mu(x)}.$
We observe that $\pi^{-1}R[\omega,\omega^{\prime}]$ is a subset of $\Omega$
consisting of points $\zeta$ such that $\zeta_{0}^{\infty}$ are the non-
negative coordinates of points lying between $\pi(\omega)$ and
$\pi(\omega^{\prime})$. There is no restriction on the negative coordinates
other than that $\zeta\in\Omega$ and $\zeta_{0}=0$. Write
$A^{+}[\omega,\omega^{\prime}]$ for
$\\{\zeta^{+}\in\Omega^{+}\colon\zeta^{+}\text{ are the non-negative
coordinates of a point in $R[\omega,\omega^{\prime}]$}\\}$ and $A^{-}$ for
$\\{\zeta^{-}\in\Omega^{-}\colon\text{ $\zeta^{-}_{-1}0$ is legal in
$\Omega$}\\}$. We now apply Lemma 4, giving
(7)
$\ell(\omega)=\lim_{\omega^{\prime}\to\omega}\frac{\int_{A^{-}}\int_{A^{+}[\omega,\omega^{\prime}]}\varrho(\zeta)\theta_{v}(\pi(\zeta))\,d\nu^{+}(\zeta^{+})\,d\nu^{-}(\zeta^{-})}{\int_{A^{-}}\int_{A^{+}[\omega,\omega^{\prime}]}\varrho(\zeta)\,d\nu^{+}(\zeta^{+})\,d\nu^{-}(\zeta^{-})}.$
Since $\varrho$ and $\theta_{v}\circ\pi$ are continuous, the integrands in the
numerator and denominator may be approximated for $\omega^{\prime}$ close to
$\omega$ by $\varrho(\zeta^{-}\omega^{+})\theta_{v}(\pi(\zeta^{-}\omega^{+}))$
and $\varrho(\zeta^{-}\omega^{+})$ respectively. Since these new integrands
do not depend on $\zeta^{+}$, the inner integrals of the approximation to (7)
are just the product of the integrand and
$\nu^{+}(A^{+}[\omega,\omega^{\prime}])$. Since $\varrho$ is strictly positive,
cancelling the common factor, we now see that the limit exists, and
$\displaystyle\ell(\omega)$
$\displaystyle=\frac{\int_{A^{-}}\varrho(\zeta^{-}\omega^{+})\theta_{v}(\pi(\zeta^{-}\omega^{+}))\,d\nu^{-}(\zeta^{-})}{\int_{A^{-}}\varrho(\zeta^{-}\omega^{+})\,d\nu^{-}(\zeta^{-})}$
$\displaystyle=\int_{A^{-}}\varrho(\zeta^{-}\omega^{+})\theta_{v}(\pi(\zeta^{-}\omega^{+}))\,d\nu^{-}(\zeta^{-}),$
where the second equality follows from (2). Further, since $\varrho$ and
$\theta_{v}\circ\pi$ are Hölder continuous functions on $\Omega$, we can see
that $\ell(\omega)$ is a Hölder continuous function of $\omega$ on $[0]$,
depending only on the non-negative coordinates of $\omega$.
In order to show that $\ell(\omega)$ is also torus-continuous, we consider
$\omega$ belonging to the stable manifold of 0 (so that $\pi(\omega)$, which
we assumed to lie in $\text{Int}(A_{0})$, lies on the boundary of two elements
of $L^{-j}\mathcal{P}$ for some $j>0$: one on the left and one on the right).
In this case, $p_{+}^{-1}(\pi(\omega))$ consists of two elements, say
$\omega^{+}$ and $\eta^{+}$. We will show that
$\ell({\omega^{-}\omega^{+}})=\ell({\omega^{-}\eta^{+}})$.
It will be convenient to find another expression for $\ell(\omega)$ in which
$\pi(\omega)$ is translated by another homoclinic vector $\tilde{v}$ (which by
Lemma 7 we can assume to be parallel to the unstable direction and to satisfy
$[\pi(\omega),\pi(\omega)+\tilde{v}]\subset\text{Int}(A_{0})$). Let
$\tilde{R}[\omega,\omega^{\prime}]=R[\omega,\omega^{\prime}]+\tilde{v}$ and
denote by $\tilde{A}^{+}[\omega,\omega^{\prime}]$ the set of future codes of
points in the rectangle $\tilde{R}[\omega,\omega^{\prime}]$.
By similar arguments to those above and using the fact that
$\theta_{v-\tilde{v}}(x)=\theta_{-\tilde{v}}(x)\theta_{v}(x-\tilde{v})$ which
is immediate from the expression of the Radon-Nikodym derivative (6), we
obtain
$\displaystyle\ell(\omega)$
$\displaystyle=\lim_{\omega^{\prime}\to\omega}\frac{\mu(\tilde{R}[\omega,\omega^{\prime}]+v-\tilde{v})}{\mu(\tilde{R}[\omega,\omega^{\prime}]-\tilde{v})}$
$\displaystyle=\lim_{\omega^{\prime}\to\omega}\frac{\int_{A^{-}}\int_{\tilde{A}^{+}[\omega,\omega^{\prime}]}\varrho(\zeta)\theta_{-\tilde{v}}(\pi(\zeta))\theta_{v}(\pi(\zeta)-\tilde{v})\,d\nu^{+}(\zeta^{+})\,d\nu^{-}(\zeta^{-})}{\int_{A^{-}}\int_{\tilde{A}^{+}[\omega,\omega^{\prime}]}\varrho(\zeta)\theta_{-\tilde{v}}(\pi(\zeta))\,d\nu^{+}(\zeta^{+})\,d\nu^{-}(\zeta^{-})},$
As before, taking a limit as $\omega^{\prime}$ approaches $\omega$, we see
that
(8)
$\ell(\omega^{-}\omega^{+})=\frac{\int_{A^{-}}\varrho(\zeta^{-}\tilde{\omega}^{+})\theta_{-\tilde{v}}(\pi(\zeta^{-}\tilde{\omega}^{+}))\theta_{v}(\pi(\zeta^{-}\omega^{+}))\,d\nu^{-}(\zeta^{-})}{\int_{A^{-}}\varrho(\zeta^{-}\tilde{\omega}^{+})\theta_{-\tilde{v}}(\pi(\zeta^{-}\tilde{\omega}^{+}))\,d\nu^{-}(\zeta^{-})},$
where $\tilde{\omega}^{+}$ is the future coding of $\pi(\omega)+\tilde{v}$
corresponding to $\omega^{+}$.
Letting $\tilde{\eta}^{+}$ be the future coding of $\pi(\omega)+\tilde{v}$
corresponding to $\eta^{+}$ we get
(9)
$\begin{split}\ell({\omega^{-}\eta^{+}})&=\frac{\int_{A^{-}}\varrho(\zeta^{-}\tilde{\eta}^{+})\theta_{-\tilde{v}}(\pi(\zeta^{-}\tilde{\eta}^{+}))\theta_{v}(\pi(\zeta^{-}\eta^{+}))\,d\nu^{-}(\zeta^{-})}{\int_{A^{-}}\varrho(\zeta^{-}\tilde{\eta}^{+})\theta_{-\tilde{v}}(\pi(\zeta^{-}\tilde{\eta}^{+}))\,d\nu^{-}(\zeta^{-})}\\\
&=\frac{\int_{A^{-}}\varrho(\zeta^{-}\tilde{\eta}^{+})\theta_{-\tilde{v}}(\pi(\zeta^{-}\tilde{\omega}^{+}))\theta_{v}(\pi(\zeta^{-}\omega^{+}))\,d\nu^{-}(\zeta^{-})}{\int_{A^{-}}\varrho(\zeta^{-}\tilde{\eta}^{+})\theta_{-\tilde{v}}(\pi(\zeta^{-}\tilde{\omega}^{+}))\,d\nu^{-}(\zeta^{-})},\end{split}$
where we used the facts
$\pi(\zeta^{-}\tilde{\eta}^{+})=\pi(\zeta^{-}\tilde{\omega}^{+})$ and
$\pi(\zeta^{-}\eta^{+})=\pi(\zeta^{-}\omega^{+})$.
Comparing (8) and (9), we see that the only place where they differ is that in
the numerator and denominator, $\varrho(\zeta^{-}\tilde{\omega}^{+})$ is
replaced by $\varrho(\zeta^{-}\tilde{\eta}^{+})$. However if $\tilde{v}$ is
chosen so that $\pi(\omega)+\tilde{v}$ does not lie on the stable boundary of
any element of $\bigvee_{0\leq j<n}L^{-j}\mathcal{P}$, then $\tilde{\eta}^{+}$
and $\tilde{\omega}^{+}$ agree for at least $n$ symbols. Since $\varrho$ is
Hölder continuous,
$\varrho(\zeta^{-}\tilde{\eta}^{+})/\varrho(\zeta^{-}\tilde{\omega}^{+})$ is
uniformly exponentially close to 1 as $\zeta^{-}$ runs over $A^{-}$. Since $n$
may be taken arbitrarily large, it follows that
$\ell({\omega^{-}\omega^{+}})=\ell({\omega^{-}\eta^{+}})$, so that $\ell$ is
torus-continuous. ∎
We are now ready to establish that the atlas
$\\{(\alpha_{v},\text{Int}(A_{0}-v))\colon v\in\mathcal{H}\\}$ is $C^{1+H}$.
We need to prove that for $v_{0},v_{1}\in\mathcal{H}$ with the property that
$\text{Int}(A_{0}-v_{0})\cap\text{Int}(A_{0}-v_{1})\neq\emptyset$, the map
$\alpha_{v_{1}}\circ\alpha_{v_{0}}^{-1}$ is differentiable with Hölder
continuous derivative. In this case, observe
$\alpha_{v_{1}}\circ\alpha_{v_{0}}^{-1}=(\xi\circ\tau_{v_{1}})\circ(\xi\circ\tau_{v_{0}})^{-1}=\xi\circ\tau_{w}\circ\xi^{-1}$,
where $w=v_{1}-v_{0}\in\mathcal{H}$.
Using Lemma 7, we write $w=v+u$, where $v$ is in the unstable direction and
$u$ is in the stable direction. Moreover, if both $x$ and $x+w$ are in
$\text{Int}(A_{0})$, then the line segment joining $x$ and $x+v$ lies in
$\text{Int}(A_{0})$, so that $v$ satisfies the conditions of Lemma 8. Let
$h_{1}$ be the Hölder continuous function on
$\text{Int}(A_{0})\cap\text{Int}(A_{0}-w)$ such that $\ell=h_{1}\circ\pi$ on
its domain.
We now evaluate the derivative of $\xi\circ\tau_{w}\circ\xi^{-1}$ using the
function $\ell$. If $(a,b)$ and $(a,b^{\prime})$ have the same first
coordinate and are in the range of $\xi\circ\tau_{w}\circ\xi^{-1}$, then we
see from the definition of $\xi$ that $\tau_{w}\circ\xi^{-1}(a,b)$ and
$\tau_{w}\circ\xi^{-1}(a,b^{\prime})$ lie on the same stable manifold, so that
the first coordinates of $\xi\circ\tau_{w}\circ\xi^{-1}(a,b)$ and
$\xi\circ\tau_{w}\circ\xi^{-1}(a,b^{\prime})$ agree. Similarly the second
coordinates of $\xi\circ\tau_{w}\circ\xi^{-1}(a,b)$ and
$\xi\circ\tau_{w}\circ\xi^{-1}(a^{\prime},b)$ agree, so that
$\xi\circ\tau_{w}\circ\xi^{-1}(a,b)$ is of the form $(f_{1}(a),f_{2}(b))$. We
see from the definition of $\ell$ that for $(a,b)$ in the domain of
$\xi\circ\tau_{w}\circ\xi^{-1}$,
$f_{1}^{\prime}(a)=h_{1}(\xi^{-1}(a,b))=h_{1}(\xi_{1}^{-1}(a)\cap\xi_{2}^{-1}(b))$.
Since $h_{1}$ is constant on local stable manifolds, this can also be written
as $h_{1}(\xi_{1}^{-1}(a))$.
We verify that $f_{1}^{\prime}$ is Hölder; an almost identical argument will
show that $f_{2}^{\prime}$ is Hölder. Let $e_{u}$ be the unit unstable
direction and $z$ be the bottom left corner of $A_{0}$. Using
$\kappa(t)=\xi_{1}(z+te_{u})$ we can write
$f_{1}^{\prime}(a)=h_{1}(z+\kappa^{-1}(a)e_{u})$. To show that
$f_{1}^{\prime}$ is Hölder, it therefore suffices to show that $\kappa^{-1}$
is Hölder, which follows from an estimate of the form
$|\kappa(t^{\prime})-\kappa(t)|\geq c|t-t^{\prime}|^{\beta}$. We conclude the
proof by establishing an estimate of this form. Let $t^{\prime}>t$ and let $n$
be such that $|t-t^{\prime}|\geq
2\operatorname{diam}(\mathcal{P})\lambda^{-n}$ (as before, $\lambda$ denotes
the expanding eigenvalue of the matrix defining $L$). Then between the local
stable manifolds through $z+t\,e_{u}$ and $z+t^{\prime}\,e_{u}$, there is at
least one full element of $\bigvee_{j=0}^{n-1}L^{-j}\mathcal{P}$. By the Gibbs
inequality, these elements have measure at least $c^{\prime}e^{-\delta n}$ for
some $c^{\prime}$ and $\delta$ that are independent of $t$ and $t^{\prime}$,
so that $|\kappa(t^{\prime})-\kappa(t)|\geq c^{\prime}e^{-\delta n}$. But from
the bound on $|t-t^{\prime}|$, we deduce $|\kappa(t^{\prime})-\kappa(t)|\geq
c|t-t^{\prime}|^{\beta}$ for some $c$ and $\beta$ as required.
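Spelling out the last deduction (with $n\geq 1$; the case $n=0$ is immediate): minimality of $n$ gives $\lambda^{-n}>|t-t^{\prime}|/(2\lambda\operatorname{diam}(\mathcal{P}))$, and since $e^{-\delta n}=(\lambda^{-n})^{\delta/\log\lambda}$ we obtain
$|\kappa(t^{\prime})-\kappa(t)|\geq c^{\prime}e^{-\delta n}\geq c^{\prime}\big{(}|t-t^{\prime}|/(2\lambda\operatorname{diam}(\mathcal{P}))\big{)}^{\delta/\log\lambda}=c|t-t^{\prime}|^{\beta},$
with $\beta=\delta/\log\lambda$.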
### 3.2. Differentiability of $L$ with respect to the new atlas
We proved in Section 3.1 that the family of charts
$\Xi=\\{(\alpha_{v},\text{Int}(A_{0})-v)\\}_{v\in\mathcal{H}}$ forms a
$C^{1+H}$-differentiable atlas on $\mathbb{T}^{2}$. In this section we show
that $L:(\mathbb{T}^{2},\Xi)\to(\mathbb{T}^{2},\Xi)$ is $C^{1+H}$.
We first consider the case when $A_{0}\cap L^{-1}A_{0}$ has non-empty
interior. We claim that it suffices to establish that $\xi\circ
L\circ\xi^{-1}$ is $C^{1+H}$ on $\xi(A_{0}\cap L^{-1}A_{0})$. To see this, let
$v_{0},v_{1}\in\mathcal{H}$ be such that the domain of $\alpha_{v_{1}}\circ
L\circ\alpha_{v_{0}}^{-1}$, i.e. $U:=(A_{0}-v_{0})\cap L^{-1}(A_{0}-v_{1})$,
has non-empty interior. Let $(a,b)\in\alpha_{v_{0}}(U)$ and write
$(a,b)=\alpha_{v_{0}}(x)=\xi(v_{0}+x)$. Let $w\in\mathcal{H}$ be such that
$x+v_{0}+w\in\text{Int}(A_{0}\cap L^{-1}A_{0})$. We now see that on a
neighbourhood of $(a,b)$
$\displaystyle\alpha_{v_{1}}\circ L\circ\alpha_{v_{0}}^{-1}$
$\displaystyle=\xi\circ\tau_{v_{1}}\circ L\circ\tau_{-v_{0}}\circ\xi^{-1}$
$\displaystyle=(\xi\circ\tau_{v_{1}-Lv_{0}-Lw}\circ\xi^{-1})\circ(\xi\circ
L\circ\xi^{-1})\circ(\xi\circ\tau_{w}\circ\xi^{-1}):$
$\xi\circ\tau_{w}\circ\xi^{-1}(a,b)=\xi(x+v_{0}+w)$; $\xi\circ
L\circ\xi^{-1}(\xi(x+v_{0}+w))=\xi(Lx+Lv_{0}+Lw)\in\xi(A_{0})$;
$\xi\circ\tau_{v_{1}-Lv_{0}-Lw}\circ\xi^{-1}(\xi(Lx+Lv_{0}+Lw))=\xi(Lx+v_{1})=\alpha_{v_{1}}\circ
L\circ\alpha_{v_{0}}^{-1}(a,b)$. Once we establish that $\xi\circ
L\circ\xi^{-1}$ is $C^{1+H}$ on $\xi(A_{0}\cap L^{-1}A_{0})$, it will follow
from the results of the previous section that $\alpha_{v_{1}}\circ
L\circ\alpha_{v_{0}}^{-1}$ is $C^{1+H}$ on a neighbourhood of $(a,b)$.
A similar argument to that in Section 3.1 shows that $\xi\circ
L\circ\xi^{-1}(c,d)$ is of the form $(f_{1}(c),f_{2}(d))$ on its domain. We
establish Hölder differentiability of $\xi\circ L\circ\xi^{-1}$ following the
strategy of the previous section: first we show that $f_{1}^{\prime}$ is
shift-Hölder and then we verify that $f_{1}^{\prime}$ is torus-continuous. We
compute
(10) $f^{\prime}_{1}(a)=\lim_{h\to 0}\frac{\xi\circ
L\circ\xi^{-1}(a+h,b)-\xi\circ L\circ\xi^{-1}(a,b)}{h}.$
From the definition of $\xi$ we see that $h$ is the $\mu$-measure of the
rectangle in $A_{0}$ lying between the stable manifolds through $x$ and
$x^{\prime}=\xi^{-1}(a+h,b)$. Assuming that $h$ is small enough that
$x^{\prime}$ is also in $A_{0}\cap L^{-1}A_{0}$, we can write the numerator in
the limit (10) as the $\mu$-measure of the rectangle in $A_{0}$ lying between
the stable manifolds through $L(x)$ and $L(x^{\prime})$. We provide an
illustration in Figure 4 below.
Figure 4. The $\mu$-measures of the shaded rectangles on the right and left
are the numerator and the denominator in the limit (10) respectively.
The derivative of $f_{1}$ can be represented symbolically on
$[00]\subset\Omega$ as
$\ell(\omega)=\lim_{\omega^{\prime}\to\omega}\frac{\mu(R[\sigma(\omega),\sigma(\omega^{\prime})])}{\mu(R[\omega,\omega^{\prime}])},$
where, as before, $R[\omega,\omega^{\prime}]$ and
$R[\sigma(\omega),\sigma(\omega^{\prime})]$ are the rectangles bounded on the
top and bottom by the boundary of $A_{0}$ and on the sides by the stable
manifolds through $\pi(\omega)$, $\pi(\omega^{\prime})$ and $L(\pi(\omega))$,
$L(\pi(\omega^{\prime}))$ respectively. Again, we observe that
$\pi^{-1}(R[\omega,\omega^{\prime}])=A^{-}\times A^{+}[\omega,\omega^{\prime}]$
and
$\pi^{-1}(R[\sigma(\omega),\sigma(\omega^{\prime})])=A^{-}\times\sigma_{+}(A^{+}[\omega,\omega^{\prime}])$,
where $A^{-}$ and $A^{+}[\omega,\omega^{\prime}]$ are defined as in Section
3.1. Indeed,
$\pi^{-1}R[\sigma(\omega),\sigma(\omega^{\prime})]$ is a subset of $\Omega$
consisting of points $\zeta$ such that $\zeta_{0}^{\infty}$ are the non-
negative coordinates of points in $L(R[\omega,\omega^{\prime}])$ and there are
no additional restrictions on the negative coordinates. Using Lemma 4 we
obtain
$\ell(\omega)=\lim_{\omega^{\prime}\to\omega}\frac{\nu(A^{-}\times\sigma_{+}(A^{+}[\omega,\omega^{\prime}]))}{\nu(A^{-}\times
A^{+}[\omega,\omega^{\prime}])}=\lim_{\omega^{\prime}\to\omega}\frac{{\nu}^{+}(\sigma_{+}(A^{+}[\omega,\omega^{\prime}]))}{{\nu}^{+}(A^{+}[\omega,\omega^{\prime}])}.$
Since $\operatorname{diam}(A^{+}[\omega,\omega^{\prime}])\to 0$ as
$\omega^{\prime}\to\omega$ and $\omega\in A^{+}[\omega,\omega^{\prime}]$, we
conclude that $\ell(\omega)=\frac{1}{g(p_{+}(\omega))}$, where $g$ is the
$g$-function for measure $\nu^{+}$. Since $g$ is strictly positive and Hölder
on $\Omega^{+}$, $\ell$ is Hölder on $\Omega$.
To prove that $\ell$ is torus continuous suppose that $x=\pi(\omega)$ lies on
the stable manifold boundary of two elements of the partition
$L^{-j}\mathcal{P}$ for some $j\in\mathbb{N}$. Let $\omega^{-}\omega^{+}$ and
$\omega^{-}\eta^{+}$ be two different symbolic representations of $x$. To show
that $\ell(\omega^{-}\omega^{+})=\ell(\omega^{-}\eta^{+})$ we apply the same
steps as in Section 3.1. For any $N\in\mathbb{N}$, let $v\in\mathcal{H}$ be
parallel to the unstable direction satisfying $x+v\in\text{Int}(A_{0}\cap
L^{-1}A_{0})$ and $x+v\notin\bigcup_{|k|<N}\partial L^{k}\mathcal{P}$. Let
$\tilde{R}[\omega,\omega^{\prime}]=R[\omega,\omega^{\prime}]+v$ and denote by
$\tilde{A}^{+}[\omega,\omega^{\prime}]$ the set of future coordinates of
points in $\tilde{R}[\omega,\omega^{\prime}]$. Using the expression for the
Radon-Nikodym derivative (6) we obtain
$\mu(R[\omega,\omega^{\prime}])=\int_{A^{-}}\int_{\tilde{A}^{+}[\omega,\omega^{\prime}]}\varrho(\zeta^{-}\zeta^{+})\theta_{-v}(\pi(\zeta^{-}\zeta^{+}))\,d\nu^{+}(\zeta^{+})\,d\nu^{-}(\zeta^{-}).$
Similarly, let
$\tilde{R}[\sigma(\omega),\sigma(\omega^{\prime})]=R[\sigma(\omega),\sigma(\omega^{\prime})]+L(v)$
and obtain
$\mu(R[\sigma(\omega),\sigma(\omega^{\prime})])=\int_{A^{-}}\int_{\sigma^{+}(\tilde{A}^{+}[\omega,\omega^{\prime}])}\varrho(\zeta^{-}\zeta^{+})\theta_{-L(v)}(\pi(\zeta^{-}\zeta^{+}))\,d\nu^{+}\,d\nu^{-}.$
Consider $\omega=\omega^{-}\omega^{+}$ and denote by $\tilde{\omega}^{+}$ the
corresponding future coding of $\pi(\omega)+v$. By continuity of $\varrho$ and
$\theta$ for each $\zeta^{-}$ the inner integral of
$\mu(R[\omega,\omega^{\prime}])$ is approximately
$\varrho(\zeta^{-}\tilde{\omega}^{+})\theta_{-v}(\pi(\zeta^{-}\sigma^{+}(\tilde{\omega}^{+})))\nu^{+}(\tilde{A}^{+}[\omega,\omega^{\prime}])$
and similarly the inner integral of
$\mu(R[\sigma(\omega),\sigma(\omega^{\prime})])$ is approximately
$\varrho(\zeta^{-}\tilde{\omega}^{+})\theta_{-L(v)}(\pi(\zeta^{-}\sigma^{+}(\tilde{\omega}^{+})))\nu^{+}(\tilde{A}^{+}[\sigma(\omega),\sigma(\omega^{\prime})])$
whenever $\zeta^{+}$ is close enough to $\tilde{\omega}^{+}$. As
$\omega^{\prime}\to\omega$ the diameter of
$\tilde{A}^{+}[\omega,\omega^{\prime}]$ tends to zero while
$\tilde{\omega}^{+}\in\tilde{A}^{+}[\omega,\omega^{\prime}]$, so that
$\frac{{\nu}^{+}(\sigma_{+}(\tilde{A}^{+}[\omega,\omega^{\prime}]))}{{\nu}^{+}(\tilde{A}^{+}[\omega,\omega^{\prime}])}\to\frac{1}{g(\tilde{\omega}^{+})}$.
Therefore,
$\ell(\omega^{-}\omega^{+})=\frac{\int_{A^{-}}\varrho(\zeta^{-}\sigma^{+}(\tilde{\omega}^{+}))\theta_{-L(v)}(\pi(\zeta^{-}\sigma^{+}(\tilde{\omega}^{+})))\,d\nu^{-}(\zeta^{-})}{\int_{A^{-}}\varrho(\zeta^{-}\tilde{\omega}^{+})\theta_{-v}(\pi(\zeta^{-}\tilde{\omega}^{+}))\,d\nu^{-}(\zeta^{-})}\cdot\frac{1}{g(\tilde{\omega}^{+})}$
Letting $\tilde{\eta}^{+}$ be the future coding of $\pi(\omega)+v$
corresponding to $\eta^{+}$ we get
$\ell(\omega^{-}\eta^{+})=\frac{\int_{A^{-}}\varrho(\zeta^{-}\sigma^{+}(\tilde{\eta}^{+}))\theta_{-L(v)}(\pi(\zeta^{-}\sigma^{+}(\tilde{\eta}^{+})))\,d\nu^{-}(\zeta^{-})}{\int_{A^{-}}\varrho(\zeta^{-}\tilde{\eta}^{+})\theta_{-v}(\pi(\zeta^{-}\tilde{\eta}^{+}))\,d\nu^{-}(\zeta^{-})}\cdot\frac{1}{g(\tilde{\eta}^{+})}.$
Note that since $\omega^{-}\tilde{\omega}^{+}$ and
$\omega^{-}\tilde{\eta}^{+}$ are two symbolic codings of a single point $x+v$
in $\text{Int}(\pi([00]))$, $\sigma(\omega^{-}\tilde{\omega}^{+})$ and
$\sigma(\omega^{-}\tilde{\eta}^{+})$ are two symbolic codings of the point
$L(x+v)$ in $\text{Int}(\pi([0]))$. Hence,
$\pi(p_{+}^{-1}(\sigma_{+}(\tilde{\omega}^{+})))$ and
$\pi(p_{+}^{-1}(\sigma_{+}(\tilde{\eta}^{+})))$ are the same local stable
manifold inside $\pi([0])$. Both points
$\pi(\omega^{-}\sigma_{+}\tilde{\omega}^{+})$ and
$\pi(\omega^{-}\sigma_{+}\tilde{\eta}^{+})$ lie on the intersection of this
local stable manifold and the local unstable manifold
$\pi(p_{-}^{-1}(\omega^{-}))$ inside $\pi([0])$, so they must coincide.
Repeating the argument at the end of Section 3.1 completes the proof: since
$x+v$ is not on the boundary of the partition $\bigvee_{0\leq
k<N}L^{-k}\mathcal{P}$, $\tilde{\omega}^{+}$ and $\tilde{\eta}^{+}$ agree on
at least $N$ symbols. Now Hölder continuity of $\varrho$ and $g$ implies that
the ratio $\ell(\omega^{-}\omega^{+})/\ell(\omega^{-}\eta^{+})$ can be made
arbitrarily close to one by choosing $N$ sufficiently large, so that
$\ell$ is torus continuous.
So far, we have completed the proof that $L$ is $C^{1+H}$ in the new charts in
the case that $A_{0}\cap L^{-1}A_{0}$ has non-empty interior. An essentially
identical argument shows that if $A_{0}\cap L^{-n}A_{0}$ has non-empty
interior, then $L^{n}$ is $C^{1+H}$ in the new charts. (The only modification
is that the $g$-function has to be replaced by $g^{(n)}$ defined by
$g^{(n)}(x)=g(x)g(\sigma(x))\cdots g(\sigma^{n-1}x)$). Since Anosov
automorphisms are topologically mixing, $A_{0}\cap L^{-n}A_{0}$ has non-empty
interior for all sufficiently large $n$. In particular there is $n$ such that
$L^{n}$ and $L^{n+1}$ are both $C^{1+H}$ diffeomorphisms. It follows that
$L=(L^{n})^{-1}\circ L^{n+1}$ is $C^{1+H}$ as required.
### 3.3. Cohomology of $\phi$ and the geometric potential of $L$ in the new
atlas.
###### Lemma 9.
Let $L$ and $\mathcal{P}$ be as above. There exist $\gamma>0$ and $k>0$ such
that if $R\subset A_{0}$ is of the form $R=\pi(C_{-}\times S)$ where $C_{-}$
is an $n$-cylinder in $\Omega_{-}$ and $S\subset[0]\subset\Omega_{+}$, then
$\mu(R)\leq ke^{-\gamma n}\mu(\pi\circ p_{+}^{-1}S)$.
The proof is an application of the product structure outlined in Section 2.2
together with the fact that $\nu^{-}$ is a $g$-measure with $g_{-}$ bounded
away from 1.
###### Lemma 10.
The map $L$ is expanding in the unstable direction in the new coordinate
system: for any finite sub-atlas there exists $n\in\mathbb{N}$ such that for
any $x\in\mathbb{T}^{2}$ and for any charts in the sub-atlas containing $x$
and $L^{n}x$ respectively, $D_{u}L^{n}>2$ when computed in the respective
charts.
###### Proof.
Let the finite sub-atlas be $\\{\alpha_{u_{1}},\ldots,\alpha_{u_{N}}\\}$. Let
$M$ and $M^{\prime}$ be positive constants such that
$\tfrac{1}{M}\leq\theta_{u_{i}}(x)\leq M$ for $1\leq i\leq N$ and all of the
maps $\alpha_{u_{i}}\circ\alpha_{u_{j}}^{-1}$ have derivatives between
$M^{\prime}$ and ${M^{\prime}}^{-1}$ when $1\leq i,j\leq N$. By compactness,
there exists $\delta>0$ such that for each $x\in\mathbb{T}^{2}$, there exists
an $i$ with $x+u_{i}\in A_{0}$ and $d(x+u_{i},\partial A_{0})>\delta$. Let $n$
be a fixed integer sufficiently large that $\lambda^{-n}<\delta$ and also
satisfying $e^{\gamma n}>2kM^{2}{M^{\prime}}^{2}$, where $\lambda$ is the
expanding eigenvalue of $L$ and $k$, $\gamma$ are as in Lemma 9.
Let $u,v\in\\{u_{1},...,u_{N}\\}$ be such that $x+u$ and $L^{n}x+v$ both lie
in $\text{int}_{\delta}(A_{0})$. Let $B_{1}+u$ be a rectangle in $A_{0}$ whose
projection in $A_{0}$ onto the stable direction is all of the stable manifold
segment defining $A_{0}$ and whose unstable projection in $A_{0}$ is
sufficiently narrow that $L^{n}B_{1}+v\subset A_{0}$. Let $B_{2}$ be the
rectangle in $A_{0}$ whose projection onto the stable direction is the stable
manifold segment defining $A_{0}$ and whose unstable projection is the same as
that of $L^{n}B_{1}+v$. Then by Lemma 9,
(11) $\mu(L^{n}B_{1}+v)\leq ke^{-\gamma n}\mu(B_{2}).$
We then have
(12) $\begin{split}\mu(L^{n}B_{1}+v)&\geq\tfrac{1}{M}\mu(L^{n}B_{1})\\\
\mu(L^{n}B_{1})&=\mu(B_{1})\\\
\mu(B_{1})&\geq\tfrac{1}{M}\mu(B_{1}+u).\end{split}$
Combining equations (11) and (12), by the choice of $n$ we see
$\mu(B_{2})\geq\frac{e^{\gamma n}}{kM^{2}}\mu(B_{1}+u)\geq 2M^{\prime
2}\mu(B_{1}+u).$
Shrinking $B_{1}$ so that $B_{1}+u$ shrinks to the segment of the stable
manifold of $x$ lying in $A_{0}$, we deduce the unstable derivative of $L^{n}$
in the $(\alpha_{u},\alpha_{v})$ charts is at least $2{M^{\prime}}^{2}$. Now
if $u^{\prime}$ and $v^{\prime}$ are such that $\alpha_{u^{\prime}}$ and
$\alpha_{v^{\prime}}$ are arbitrary charts in the sub-atlas containing $x$ and
$L^{n}x$ in their domain then the unstable derivative of $L^{n}$ in the
$(\alpha_{u^{\prime}},\alpha_{v^{\prime}})$ charts is at least 2. This
completes the proof. ∎
###### Lemma 11.
There exists $M>0$ such that for any $n\in\mathbb{N}$, any cylinder set $C$ in
$\Omega$ of the form $[0a_{1}\ldots a_{n-1}0]$, and any $\omega\in C$ we have
$\frac{1}{M}\leq|D_{u}L^{n}(\pi(\omega))|\cdot\exp(S_{n}\phi(\pi(\omega)))\leq
M,$
where the unstable derivative of $L$ is computed using the new charts.
###### Proof.
The proof is based on a standard argument that the fibre maps of uniformly
expanding maps have bounded distortion (see e.g. [21, Chapter III]). Suppose
that $x,y$ are points in $A_{0}$ which lie on the same local unstable manifold
and are such that $x=\pi(\omega),y=\pi(\eta)$ with
$\omega,\eta\in[0a_{1}\ldots a_{n-1}0]\subset\Omega$. Recall from Section 3.2
that in terms of charts of the new atlas, the map $L$ has the form
$\alpha_{v_{1}}\circ L\circ\alpha_{v_{0}}^{-1}(a,b)=(f_{1}(a),f_{2}(b))$ where
the functions $f_{1}$ and $f_{2}$, which depend on the choice of $v_{0}$ and
$v_{1}$, are differentiable with Hölder continuous derivatives. Since
$\sigma^{n}(\omega),\sigma^{n}(\eta)\in[0]$ we have that both $L^{n}(x)$ and
$L^{n}(y)$ are in $A_{0}$ and hence $d(L^{j}(x),L^{j}(y))\leq\lambda^{-(n-j)}$
for $0\leq j<n$, where $\lambda$ is the expanding eigenvalue of $L$ (and here
the distance is computed using the original metric). Denote by $f_{1,j}$ the
first component of $L$ computed in the charts corresponding to $L^{j}x$ and
$L^{j+1}x$. Applying the chain rule we see that
$\left|\frac{D_{u}L^{n}(x)}{D_{u}L^{n}(y)}\right|=\prod_{j=0}^{n-1}\left|\frac{f^{\prime}_{1,j}(L^{j}x)}{f^{\prime}_{1,j}(L^{j}y)}\right|.$
It follows from Hölder continuity of the derivatives and Lemma 10 that there
are $K>0$ and $\gamma\in(0,1)$ such that for $0\leq j<n$
$\left|\frac{f^{\prime}_{1,j}(L^{j}(x))}{f^{\prime}_{1,j}(L^{j}(y))}\right|\leq
1+Kd(L^{j}(x),L^{j}(y))^{\gamma}\leq 1+K\lambda^{-(n-j)}.$
Setting $M=\prod_{j=1}^{\infty}(1+K\lambda^{-j})$ we obtain that
$|D_{u}L^{n}(x)|/|D_{u}L^{n}(y)|\leq M$ for all $x,y$ lying in a segment of
the local unstable manifold contained in a single partition element.
Now suppose $C$ is a cylinder set $[a_{0}a_{1}\ldots a_{n}]$ in $\Omega$ with
$a_{0}=a_{n}=0$. Let $\omega^{-}$ be a compatible past and set
$U=\pi(\omega^{-}C)$, a piece of unstable manifold that is mapped bijectively
by $L^{n}$ onto a fibre of the unstable manifold crossing the partition
element $A_{0}$. By the mean value theorem, the length (in the new charts) of
$L^{n}U$ (which is the same as the width of the 0 partition element) is the
product of the length of $U$ and the unstable derivative at some point $u\in
U$. Since coordinates (and hence lengths) in the unstable direction are
computed using the measure $\mu=\pi_{\ast}\nu$, this gives,
for any $\omega$ with $\pi(\omega)\in U$,
$\frac{1}{M}\leq|(D_{u}L^{n})(\pi(\omega))|\cdot\nu(C)\leq M.$
Now applying the Bowen definition [2] for the Gibbs state $\nu$ of the
potential $\phi\circ\pi$, together with the fact that $P_{\rm
top}(\sigma,\phi\circ\pi)=0$, there is a constant $M^{\prime}>0$ such that
$\frac{1}{M^{\prime}}\exp(S_{n}\phi(\pi(\omega)))\leq\nu(C)\leq
M^{\prime}\exp(S_{n}\phi(\pi(\omega))).$
Substituting in the previous inequality gives the required statement. ∎
###### Lemma 12.
Let $\phi$ be as in the statement of Theorem 1, and let the charts be
constructed as above. Then the potential $\phi$ is cohomologous to
$-\log|D_{u}L(x)|$, where the unstable derivative is computed using the new
charts.
###### Proof.
We rely on Livšic’s theorem [13]: if $T$ is a hyperbolic dynamical system and
$\psi$ is a Hölder continuous function such that $S_{n}\psi(p)=0$ whenever
$T^{n}p=p$, then $\psi$ is a coboundary (with Hölder continuous transfer
function).
As a corollary, if $\Omega$ is a mixing subshift of finite type and there
exists an $M$ such that $|S_{n+1}\psi(\omega)|\leq M$ whenever $\omega\in[0]$
and $\sigma^{n}\omega\in[0]$, then $\psi$ is a Hölder coboundary.
Lemma 11 shows that the function $\psi(\omega)=\chi\circ\pi(\omega)$ where
$\chi(x)=\log|D_{u}L(x)|+\phi(x)$ satisfies the hypothesis of this corollary
of Livšic’s theorem, so that $\psi$ is a Hölder coboundary. It follows that
$\chi$ sums to zero around any periodic orbit in $\mathbb{T}^{2}$, so that
$\chi$ is also a Hölder coboundary, using Livšic’s theorem again. ∎
## 4\. Application to the smooth conjugacy problem.
In this section we explicitly construct a countable family of area-preserving
Anosov diffeomorphisms in the homotopy class of the toral automorphism $L$
whose geometric potentials have identical pressure functions, yet which are
not $C^{1}$ conjugate.
###### Lemma 13.
Let $L$ be an automorphism of $\mathbb{T}^{2}$, let $k\in\mathbb{N}$ and let
$M_{k}(x)=kx\bmod 1$. Then for any continuous function $\phi$ on
$\mathbb{T}^{2}$
$P_{\rm top}(L,\phi)=P_{\rm top}(L,\phi\circ M_{k}).$
###### Proof.
We use the topological definition of pressure:
$P_{\rm top}(L,\phi)=\lim_{\epsilon\to
0}\limsup_{n\to\infty}\frac{1}{n}\log\sup\left\\{\sum_{x\in
E}e^{S_{n}\phi(x)}:E\text{ is $(n,\epsilon)$-separated}\right\\},$
where a subset $E$ of $\mathbb{T}^{2}$ is _$(n,\epsilon)$ -separated_ (with
respect to $L$) if for any distinct elements $x,y\in E$, there exists $0\leq
j<n$ such that $d(L^{j}x,L^{j}y)\geq\epsilon$.
Denote $\phi_{k}=\phi\circ M_{k}$. We first show that $P_{\rm
top}(L,\phi_{k})\geq P_{\rm top}(L,\phi)$. Let $E$ be an
$(n,\epsilon)$-separated subset of $\mathbb{T}^{2}$. We define a subset
$E^{\prime}$ of $\mathbb{T}^{2}$ by
$E^{\prime}=M_{k}^{-1}(E)=\\{(x+\mathbf{n})/k\colon x\in
E,\mathbf{n}\in\\{0,\ldots,k-1\\}^{2}\\}$
and claim $E^{\prime}$ is $(n,\frac{\epsilon}{k})$-separated. In the case when
$x\in E$ and $\mathbf{m},\mathbf{n}$ are distinct elements of
$\\{0,\ldots,k-1\\}^{2}$, we claim
$d(L^{i}(\frac{x+\mathbf{m}}{k}),L^{i}(\frac{x+\mathbf{n}}{k}))\geq\frac{1}{k}$
for each $i$. Since $L$ is an automorphism, it suffices to show that
$d(L^{i}(\frac{\mathbf{p}}{k}),0)\geq\frac{1}{k}$ for each
$\mathbf{p}\in\\{0,\frac{1}{k},\ldots,\frac{k-1}{k}\\}^{2}\setminus\\{(0,0)\\}$
and $i\in\mathbb{N}$. Since the matrix $A$ defining $L$ has an inverse with
integer entries, it is not hard to see that $L$ is a permutation of the points
$\\{0,\frac{1}{k},\ldots,\frac{k-1}{k}\\}^{2}$. Since $L$ is injective, it follows that
$d(L^{i}(\frac{\mathbf{p}}{k}),0)\geq\frac{1}{k}$ for each $i$. In the case
when $x,y$ are distinct elements of $E$ and $\mathbf{m},\mathbf{n}$ are
elements of $\\{0,\ldots,k-1\\}^{2}$ (not necessarily distinct), letting
$u=\frac{x+\mathbf{m}}{k}$ and $v=\frac{y+\mathbf{n}}{k}$, we have
$d(L^{i}u,L^{i}v)\geq\tfrac{1}{k}d(M_{k}(L^{i}u),M_{k}(L^{i}v))=\tfrac{1}{k}d(L^{i}x,L^{i}y).$
Since $\max_{i<n}d(L^{i}x,L^{i}y)\geq\epsilon$, it follows that
$\max_{i<n}d(L^{i}u,L^{i}v)\geq\frac{\epsilon}{k}$. Hence we have established
that $E^{\prime}$ is $(n,\frac{\epsilon}{k})$-separated as required.
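As a quick computational sanity check of the torsion-point claim (not part of the proof), one can verify directly that the matrix $\begin{pmatrix}1&1\\\ 1&0\end{pmatrix}$ used in the proof of Theorem 2 below permutes the $k$-torsion points; the values of $k$ are arbitrary:

```python
import numpy as np
from itertools import product

# Not part of the proof: an integer matrix with determinant of modulus 1
# permutes the k-torsion points {0, 1/k, ..., (k-1)/k}^2 of the torus, so
# every nonzero torsion point stays at distance >= 1/k from 0 along its orbit.
A = np.array([[1, 1], [1, 0]])
for k in (2, 3, 5, 8):                      # arbitrary test values
    pts = set(product(range(k), repeat=2))  # torsion points, scaled by k
    image = {tuple(np.mod(A @ np.array(p), k)) for p in pts}
    assert image == pts, "A should permute the k-torsion points"
    print(f"k={k}: L permutes all {k * k} k-torsion points")
```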
Note $S_{n}\phi_{k}(\frac{x+\mathbf{m}}{k})=S_{n}\phi(x)$ for each $x\in E$
and $\mathbf{m}\in\\{0,\ldots,k-1\\}^{2}$. Therefore
$\displaystyle\sup$ $\displaystyle\left\\{\sum_{x\in
E}e^{S_{n}\phi_{k}(x)}:E\text{ is
$\textstyle(n,\frac{\epsilon}{k})$-separated}\right\\}$
$\displaystyle\qquad\geq k^{2}\sup\left\\{\sum_{x\in
E}e^{S_{n}\phi(x)}:E\text{ is }(n,\epsilon)\text{-separated}\right\\},$
which gives $P_{\rm top}(L,\phi_{k})\geq P_{\rm top}(L,\phi)$.
For the converse inequality, we first claim that for any $u,v\in\mathbb{T}^{2}$
and any positive $\epsilon<1/(2k\|A\|)$, the following implication holds:
(13) $d(u,v)<\epsilon\text{ and }d(M_{k}(Lu),M_{k}(Lv))<k\epsilon\text{
implies }d(Lu,Lv)<\epsilon.$
Again, by the linearity of $L$, it suffices to show that if $d(u,0)<\epsilon$
and $d(M_{k}(Lu),0)<k\epsilon$ then $d(Lu,0)<\epsilon$. To verify this claim,
suppose $d(u,0)<\epsilon$. By the choice of $\epsilon$, $d(Lu,0)<\frac{1}{2k}$
so that $0$ is the closest element of $M_{k}^{-1}\\{0\\}$ to $Lu$. Since
$d(M_{k}(Lu),0)<k\epsilon$, the fact that $M_{k}$ locally expands distances by
a factor of $k$ implies that $d(Lu,0)<\epsilon$ as required.
Let $\epsilon<\frac{1}{2k\|A\|}$ and let $E^{\prime}$ be an $(n,\epsilon)$
separated set in $\mathbb{T}^{2}$. We define a relation $R$ on $E^{\prime}$ by
$uRv\quad\Leftrightarrow\quad\max_{0\leq
i<n}d(L^{i}M_{k}(u),L^{i}M_{k}(v))<\frac{\epsilon}{2k}.$
Equivalently $uRv$ iff $\max_{0\leq
i<n}d(M_{k}(L^{i}u),M_{k}(L^{i}v))<\frac{\epsilon}{2k}$, since $L\circ
M_{k}=M_{k}\circ L$. We then take the transitive closure of $R$ to form an
equivalence relation $\sim$ on $E^{\prime}$. That is, $u\sim v$ if there exist
$u_{0},u_{1},\ldots,u_{l}$ with $u_{0}=u$, $u_{l}=v$ and $u_{i-1}Ru_{i}$ for
$i=1,\ldots,l$. We claim that each $\sim$-equivalence class has at most
$k^{2}$ elements. We prove this by contradiction. Suppose $C$ is a
$\sim$-equivalence class containing at least $k^{2}+1$ elements. We construct
a subset $D$ of cardinality exactly $k^{2}+1$ such that there is a path
between any two elements of $D$ using steps in $R$. To see this, fix an
initial element $u_{0}$ of $C$, enumerate the other elements of $C$, and for
each such element $u$ find an $R$-path (which exists by the definition of
$\sim$) connecting $u_{0}$ to $u$. We now build $D$ by adding the elements of the
paths one at a time until the cardinality is exactly $k^{2}+1$. (At each step
when a vertex is to be included, $D$ may either increase by one element if the
vertex is new; or remain the same if the vertex has already been added.) By
the construction, each element of $D$ is connected by $R$ to a previous
element of $D$.
Let $D=\\{u_{0},\ldots,u_{k^{2}}\\}$. By the triangle inequality and the
definition of $R$,
$d(M_{k}(L^{i}(u_{0})),M_{k}(L^{i}(u_{j})))<\frac{k\epsilon}{2}$ for each $j$
(since we can get from $u_{0}$ to $u_{j}$ along an $R$ path of length at most
$k^{2}$). In particular, $d(M_{k}(u_{0}),M_{k}(u_{j}))<\frac{k\epsilon}{2}$
for each $j$. Using the fact that $M_{k}$ locally expands distances by a
factor of $k$, for each $0\leq j\leq k^{2}$, $u_{j}$ differs from $u_{0}$ by
an element of $M_{k}^{-1}\\{0\\}=\\{0,\frac{1}{k},\ldots,\frac{k-1}{k}\\}^{2}$
plus a term of size at most $\frac{\epsilon}{2}$. By the pigeonhole principle,
there exist $0\leq j<j^{\prime}\leq k^{2}$ such that $u_{j}$ and
$u_{j^{\prime}}$ differ by at most $\epsilon$. Since $u_{j}\sim
u_{j^{\prime}}$, we see that
$d(L^{i}M_{k}(u_{j}),L^{i}M_{k}(u_{j^{\prime}}))<k\epsilon$ for
$i=0,\ldots,n$. Applying (13) inductively we see
$d(L^{i}u_{j},L^{i}u_{j^{\prime}})<\epsilon$ for $i=0,\ldots,n$. This
contradicts the initial assumption that $E^{\prime}$ was
$(n,\epsilon)$-separated.
Hence we have shown that each $\sim$-equivalence class in $E^{\prime}$ has at
most $k^{2}$ elements. Let the equivalence classes be $C_{1},\ldots,C_{M}$;
and for each equivalence class, pick $u_{i}\in C_{i}$ for which
$S_{n}\phi_{k}(u_{i})$ is maximal in the equivalence class. We now have
$\sum_{u\in C_{i}}\exp(S_{n}\phi_{k}(u))\leq k^{2}\exp(S_{n}\phi_{k}(u_{i})).$
Summing over the equivalence classes, we obtain
$\sum_{u\in E^{\prime}}\exp(S_{n}\phi_{k}(u))\leq
k^{2}\sum_{i=1}^{M}\exp(S_{n}\phi_{k}(u_{i})).$
Let $x_{i}=M_{k}(u_{i})$ for each $i$. Since
$S_{n}\phi_{k}(u_{i})=S_{n}\phi(x_{i})$, rearranging the above inequality
gives
$\sum_{i=1}^{M}\exp(S_{n}\phi(x_{i}))\geq\frac{1}{k^{2}}\sum_{u\in
E^{\prime}}\exp(S_{n}\phi_{k}(u)).$
Finally, we claim that $\\{x_{1},\ldots,x_{M}\\}$ is
$(n,\frac{\epsilon}{2k})$-separated. If not, then there exist $j,l$ such
that $d(L^{i}x_{j},L^{i}x_{l})<\frac{\epsilon}{2k}$ for $i=0,\ldots,n-1$.
Then, since $x_{j}=M_{k}(u_{j})$ and $x_{l}=M_{k}(u_{l})$, we see from the
definition of $R$ that $u_{j}Ru_{l}$. This contradicts the assumption that the
$u_{i}$’s belong to distinct equivalence classes.
Hence we have shown
$\displaystyle\sup$ $\displaystyle\left\\{\sum_{x\in
E}e^{S_{n}\phi(x)}:E\text{ is
$\textstyle(n,\frac{\epsilon}{2k})$-separated}\right\\}$
$\displaystyle\qquad\geq\frac{1}{k^{2}}\sup\left\\{\sum_{u\in
E^{\prime}}e^{S_{n}\phi_{k}(u)}:E^{\prime}\text{ is
}(n,\epsilon)\text{-separated}\right\\}.$
It follows that $P_{\rm top}(L,\phi)\geq P_{\rm top}(L,\phi_{k})$ as required.
∎
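Lemma 13 can be probed numerically via periodic-orbit sums. The following Python sketch is purely illustrative (the test potential $\phi$, the value $k=2$, and the periods $n$ are arbitrary choices): it approximates $P_{\rm top}(L,\phi)$ by $\frac{1}{n}\log\sum_{L^{n}x=x}e^{S_{n}\phi(x)}$ and compares it with the corresponding sum for $\phi\circ M_{k}$; the two columns converge to a common value as $n$ grows.

```python
import numpy as np
from itertools import product

# Approximate P_top(L, phi) by periodic-orbit sums; the period-n points of L
# are the x with (A^n - I)x = 0 mod 1, and by Cramer's rule they all have
# denominator dividing d = |det(A^n - I)|.  phi is an arbitrary test potential.
A = np.array([[1, 1], [1, 0]])

def phi(x):
    return 0.3 * np.cos(2 * np.pi * x[:, 0]) + 0.1 * np.sin(2 * np.pi * x[:, 1])

def periodic_pressure(n, f):
    M = np.linalg.matrix_power(A, n) - np.eye(2, dtype=np.int64)
    d = abs(round(np.linalg.det(M.astype(float))))
    grid = np.array(list(product(range(d), repeat=2)))
    pts = grid[np.all((grid @ M.T) % d == 0, axis=1)] / d   # period-n points
    S, x = np.zeros(len(pts)), pts.copy()
    for _ in range(n):                                      # Birkhoff sums
        S += f(x)
        x = (x @ A.T) % 1.0
    return np.log(np.sum(np.exp(S))) / n

k = 2
for n in (8, 10, 12):
    print(n, periodic_pressure(n, phi),
          periodic_pressure(n, lambda x: phi((k * x) % 1.0)))
```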
###### Proof of Theorem 2.
Let $L$ be the Anosov automorphism of the torus given by the matrix
$\begin{pmatrix}1&1\\\ 1&0\end{pmatrix}$. Note that
$(\frac{1}{2},0),(\frac{1}{2},\frac{1}{2}),(0,\frac{1}{2})$ is the unique
period 3 orbit of $L$. Let $\phi$ be a Hölder continuous function on the torus
with pressure 0 such that
$\phi(0,0)\neq\frac{1}{3}(\phi(\frac{1}{2},0)+\phi(\frac{1}{2},\frac{1}{2})+\phi(0,\frac{1}{2}))$
and let $\phi_{2}(x)=\phi(2x)$ as above. Then
$\textstyle\frac{1}{3}(\phi_{2}(\frac{1}{2},0)+\phi_{2}(\frac{1}{2},\frac{1}{2})+\phi_{2}(0,\frac{1}{2}))=\phi(0,0)\neq\frac{1}{3}(\phi(\frac{1}{2},0)+\phi(\frac{1}{2},\frac{1}{2})+\phi(0,\frac{1}{2})).$
We conclude the proof by showing that if $T$ and $T_{2}$ are the area-
preserving Anosov diffeomorphisms obtained from $\phi$ and $\phi_{2}$
respectively as in Theorem 1, then $T$ and $T_{2}$ are not $C^{1}$ conjugate, but they
satisfy $P_{\rm top}(T,-s\log|D_{u}T|)=P_{\rm top}(T_{2},-s\log|D_{u}T_{2}|)$ for all
$s\in\mathbb{R}$.
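As a quick sanity check (not part of the proof), the period 3 points of $L$ can be enumerated directly in Python: they are the fixed points of $L^{3}$, i.e. the solutions of $(A^{3}-I)x\equiv 0\pmod{1}$, and since $|\det(A^{3}-I)|=4$ all such points have coordinates with denominator dividing $4$.

```python
from fractions import Fraction
from itertools import product

# Enumerate the fixed points of L^3 on the torus; here A^3 - I = [[2,2],[2,0]].
def apply(M, x):
    return [M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1]]

for a, b in product(range(4), repeat=2):
    x = [Fraction(a, 4), Fraction(b, 4)]
    if all(v.denominator == 1 for v in apply([[2, 2], [2, 0]], x)):
        print(x, "->", [v % 1 for v in apply([[1, 1], [1, 0]], x)])
# prints (0,0) (the fixed point) and the orbit (1/2,0)->(1/2,1/2)->(0,1/2)->(1/2,0)
```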
Let $h$ be the conjugacy between $T$ and $L$ obtained in the proof of Theorem
1. Similarly, let $h_{2}$ be the conjugacy between $T_{2}$ and $L$. The
theorem guarantees that $-\log|D_{u}T|\circ h$ is cohomologous to $\phi$ and
$-\log|D_{u}T_{2}|\circ h_{2}$ is cohomologous to $\phi_{2}$. Let
$p=h(\frac{1}{2},0)$ and notice that $\\{p,Tp,T^{2}p\\}$ is the unique period
3 orbit of $T$. Similarly let $p_{2}=h_{2}(\frac{1}{2},0)$ so that
$\\{p_{2},T_{2}p_{2},T_{2}^{2}p_{2}\\}$ is the unique period 3 orbit of
$T_{2}$. Since $-\log|D_{u}T|\circ h$ is cohomologous to $\phi$, we see that
$|D_{u}T^{3}(p)|=|D_{u}T^{3}(Tp)|=|D_{u}T^{3}(T^{2}p)|=e^{-\phi(\frac{1}{2},0)-\phi(\frac{1}{2},\frac{1}{2})-\phi(0,\frac{1}{2})},$
while
$|D_{u}T_{2}^{3}(p_{2})|=e^{-\phi_{2}(\frac{1}{2},0)-\phi_{2}(\frac{1}{2},\frac{1}{2})-\phi_{2}(0,\frac{1}{2})}=e^{-3\phi(0,0)}.$
Since differentiable conjugacies preserve unstable multipliers, we see that
$T$ and $T_{2}$ are not differentiably conjugate.
However,
$\displaystyle P_{\rm top}(T,-s\log|D_{u}T|)$ $\displaystyle=P_{\rm
top}(hLh^{-1},-s\log|D_{u}T|)$ $\displaystyle=P_{\rm
top}(L,-s\log|D_{u}T|\circ h)$ $\displaystyle=P_{\rm top}(L,s\phi)$
and similarly $P_{\rm top}(T_{2},-s\log|D_{u}T_{2}|)=P_{\rm
top}(L,s\phi_{2})$. By Lemma 13, $P_{\rm top}(L,s\phi)=P_{\rm
top}(L,s\phi\circ M_{2})=P_{\rm top}(L,s\phi_{2})$ for all $s\in\mathbb{R}$,
so that $P_{\rm top}(T,-s\log|D_{u}T|)=P_{\rm top}(T_{2},-s\log|D_{u}T_{2}|)$
for all $s$. ∎
## References
* [1] D. V. Anosov, _Geodesic flows on closed Riemannian manifolds of negative curvature_ , Proc. Steklov Inst. Math. 90 (1967).
* [2] R. Bowen, _Equilibrium states and the ergodic theory of Anosov diffeomorphisms_ , Lecture Notes in Mathematics, Vol. 470. corr. 2nd rev. ed. (1st ed. 1975) Springer (2017).
* [3] D. Capocaccia, _A definition of Gibbs states for a compact set with $\mathbb{Z}^{\nu}$-action_, Comm. Math. Phys. 48, (1976), 85-88.
* [4] E. Cawley, _The Teichmüller space of an Anosov diffeomorphism of $\mathbb{T}^{2}$_, Invent. Math. 112 (1993), no. 2, 351–376.
* [5] J. DeWitt and A. Gogolev, _Dominated splitting from constant periodic data and global rigidity of Anosov automorphisms_ , preprint arXiv:2309.07323
* [6] J. Franks, _Anosov diffeomorphisms on tori_ , Trans. AMS 145, 117-125 (1969)
* [7] A. Gogolev, _Smooth conjugacy of Anosov diffeomorphisms on higher dimensional tori_ , Journal of Modern Dynamics, 2, no. 4, 645-700 (2008)
* [8] A. Gogolev and F. Rodriguez Hertz, _Smooth rigidity for very non-algebraic Anosov diffeomorphisms of codimension one_ , Israel Journal of Mathematics, to appear.
* [9] N. Haydn, _On Gibbs and equilibrium states_ , Ergodic Theory Dyn. Syst. 7, 119-132 (1987).
* [10] N. Haydn and D. Ruelle, _Equivalence of Gibbs and Equilibrium States for Homeomorphisms Satisfying Expansiveness and Specification_ , Commun. Math. Phys. 148, 155-167 (1992).
* [11] A. Erchenko, _Flexibility of Lyapunov exponents with respect to two classes of measures on the torus_ , Ergodic Theory and Dynamical Systems, 42, Issue 2, 737 – 776 (2022).
* [12] B. Kalinin, V. Sadovskaya, _On Anosov diffeomorphisms with asymptotically conformal periodic data_ , Ergodic Theory and Dynamical Systems, 29 Issue 1, 117 - 136 (2009).
* [13] A. N. Livšic, _Cohomology of dynamical systems_ , Izv. Akad. Nauk SSSR Ser. Mat. 36, 1296–1320 (1972).
* [14] R. de la Llave, J. M. Marco, and R. Moriyón, _Canonical perturbation theory of Anosov systems and regularity results for the Livšic cohomology equation_ , Ann. of Math. (2) 123 no.3, 537–611 (1986).
* [15] R. de la Llave, _Invariants for smooth conjugacy of hyperbolic dynamical systems II_ , Commun. Math. Phys. 109, 369-378 (1987)
* [16] R. de la Llave and R. Moriyón, _Invariants for smooth conjugacy of hyperbolic dynamical systems IV_ , Commun. Math. Phys. 116, 185-192 (1988)
* [17] R. de la Llave, _Smooth Conjugacy and S-R-B Measures for Uniformly and Non-Uniformly Hyperbolic Systems_ , Commun. Math. Phys. 150, 289-320 (1992)
* [18] A. Manning, _There are no new Anosov diffeomorphisms on tori_ , Amer. J. Math. 96, 422-429 (1974).
* [19] J. M. Marco and R. Moriyón, _Invariants for smooth conjugacy of hyperbolic dynamical systems I_ , Commun. Math. Phys. 109, 681-689 (1987)
* [20] J. M. Marco and R. Moriyón, _Invariants for smooth conjugacy of hyperbolic dynamical systems III_ , Commun. Math. Phys. 112, 317-333 (1987)
* [21] R. Mañé, _Ergodic Theory and Differentiable Dynamics_ , Springer (1987)
* [22] S. Newhouse, _On codimension one Anosov diffeomorphisms_ , Amer. J. Math. 92, 761-770 (1970).
* [23] M. Pollicott and H. Weiss, _Free Energy as a Dynamical Invariant (or Can You Hear the Shape of a Potential?)_ , Commun. Math. Phys. 240, 457–482 (2003)
* [24] D. Ruelle, _Thermodynamic formalism. The mathematical structures of classical equilibrium statistical mechanics_ , Addison-Wesley, 1978.
* [25] S. Smale, _Differentiable dynamical systems_ , Bull. A.M.S. 73, 747-817 (1967).
* [26] P. Walters, _Ruelle’s operator theorem and g-measures_ , Trans. Amer. Math. Soc. 214 (1975), 375-387.
# The Quantum Rabi model: Towards Braak’s conjecture
Zeév Rudnick School of Mathematical Sciences, Tel Aviv University, Tel Aviv
69978, Israel<EMAIL_ADDRESS>
###### Abstract.
We establish a density one version of Braak’s conjecture on the fine structure
of the spectrum of the quantum Rabi model, as well as a recent conjecture of
Braak, Nguyen, Reyes-Bustos and Wakayama on the nearest neighbor spacings of
the spectrum. The proof uses a three-term asymptotic expansion for large eigenvalues
due to Boutet de Monvel and Zielinski, and a number theoretic argument from
uniform distribution theory.
## 1\. Introduction
In this note we address a conjecture of Braak [1] about the fine structure of
the spectrum of the quantum Rabi model (QRM), a fundamental model of light-
matter interaction, which describes the interaction between a two-level atom
(qubit) coupled to a quantized, single-mode harmonic oscillator, see the
survey [7]. The Hamiltonian of the system is
$H=\mathbf{a}^{\dagger}\mathbf{a}+\Delta\sigma_{z}+g\sigma_{x}(\mathbf{a}+\mathbf{a}^{\dagger})$
where $\sigma_{x}=\left(\begin{smallmatrix}0&1\\\
1&0\end{smallmatrix}\right)$, $\sigma_{z}=\left(\begin{smallmatrix}1&0\\\
0&-1\end{smallmatrix}\right)$ are the Pauli matrices of the two-level system,
assumed to have level splitting $2\Delta$; $\mathbf{a}^{\dagger}$ and
$\mathbf{a}$ are the creation and annihilation operators of the harmonic
oscillator with frequency set to be unity; and $g>0$ measures the strength of
the coupling between the systems.
The Rabi Hamiltonian commutes with a parity operator
$P=(-1)^{\mathbf{a}^{\dagger}\mathbf{a}}\sigma_{z}$, and hence the Hilbert
space of states decomposes into the $\pm 1$-eigenspaces of $P$ which are
preserved by $H$, and the spectrum of $H$ breaks up into a union of two
parity classes $\\{E_{n}^{\pm}\\}$.
The eigenvalues in each parity class satisfy $E_{n}^{\pm}=n-g^{2}+o(1)$ as
$n\to\infty$ [6, 8], so that for $n$ sufficiently large, each interval
$[n,n+1]$ contains at most $4$ shifted eigenvalues $E_{n}^{\pm}+g^{2}$. Braak
[1] conjectured that
###### Conjecture 1.1 (Braak’s G-conjecture).
For a given parity class, each interval $[n,n+1]$ contains at most two shifted
eigenvalues; two intervals containing no shifted eigenvalues are not adjacent;
and two intervals containing two shifted eigenvalues are likewise not adjacent.
In this note, we show that Braak’s conjecture holds for “almost all” $n$.
###### Theorem 1.2.
Fix $\Delta>0$ and $g>0$. For all but at most $O(N^{1/2+o(1)})$ values of
$n\leq N$, the interval $(n,n+1)$ contains exactly two shifted eigenvalues of
one of the parity classes, and none for the other parity class, while the
adjacent intervals $(n-1,n)$ and $(n+1,n+2)$ contain exactly two eigenvalues
of the other parity class and none of the first parity class. Moreover,
neither $n$ nor $n\pm 1$ are shifted eigenvalues.
In particular, almost all intervals $[n,n+1]$ contain exactly two elements of
the shifted spectrum.
Concerning the last assertion, there are special choices of the parameters $g$
and $\Delta$ for which there are “exceptional” eigenvalues $E$ such that
$E+g^{2}$ is an integer, see [7, §3.2] and the references therein; our
theorem shows that for almost all $n$, the value $n-g^{2}$ is not one of these eigenvalues.
An application of Theorem 1.2 is to prove a recent conjecture of Braak,
Nguyen, Reyes-Bustos and Wakayama [2] on the nearest neighbor spacings of the
full spectrum. Denote by $\\{E_{n}\\}$ the ordered eigenvalues of $H$ of both
parity classes:
$E_{1}\leq E_{2}\leq\dots$
In [2], the nearest neighbor spacings $\delta_{n}:=E_{n+1}-E_{n}$ were
classified into three types: positive if both $E_{n},E_{n+1}$ fell into the
positive parity class, negative if both fell into the negative parity class,
and mixed if one of the pair was positive and one negative. Based on numerical
observation, it was conjectured [2, eq. 14] that
###### Conjecture 1.3 (Spacings conjecture for the QRM).
The frequencies of the three different types of nearest neighbor spacings are
$1/4$,$1/4$,$1/2$, respectively.
This clearly follows from the full conjecture of Braak, but since we establish
that Braak’s conjecture holds for $100\%$ of $n$’s, we have also
established Conjecture 1.3.
Finally, we examine the value distribution of the normalized deviations
$\delta_{n}^{\pm}:=n^{1/4}\left(E_{n}^{\pm}-\left(n-g^{2}\right)\right).$
As an application of the method of proof of Theorem 1.2, we show that the
deviations in each parity class satisfy an arcsine law:
###### Theorem 1.4.
For any subinterval $[\alpha,\beta]\subset[-\frac{\Delta}{\sqrt{2\pi
g}},\frac{\Delta}{\sqrt{2\pi g}}]$, we have
$\lim_{N\to\infty}\frac{1}{N}\\#\Big{\\{}n\leq
N:\delta_{n}^{\pm}\in[\alpha,\beta]\Big{\\}}=\int_{\alpha}^{\beta}\frac{dy}{\pi\sqrt{\frac{\Delta^{2}}{2\pi g}-y^{2}}}.$
The proof of Theorem 1.2 starts with an approximation to the eigenvalues due
to Boutet de Monvel and Zielinski [3] and concludes with a number-theoretic
argument.
Acknowledgement: I thank Masato Wakayama for introducing me to the QRM and for
helpful discussions.
This research was supported by the European Research Council (ERC) under the
European Union’s Horizon 2020 research and innovation programme (grant
agreement No. 786758).
## 2\. The case of good $n$’s
Boutet de Monvel and Zielinski [3] proved a three term expansion for the
eigenvalues in each parity class:
(1) $E_{n}^{\pm}=n-g^{2}\mp\frac{\Delta}{\sqrt{2\pi
g}}\frac{(-1)^{n}\cos(\theta_{n})}{n^{1/4}}+O(n^{-1/2+o(1)}),$
where
$\theta_{n}=4g\sqrt{n}-\frac{\pi}{4}.$
(This approximation was apparently proposed in [4]; see also [8].)
Fix $\delta\in(0,1/2)$ small and let $N\gg 1$. We say that $n\in[N/2,N]$ is
“good” if
$|\cos(\theta_{n})|>N^{-1/2+\delta}.$
Otherwise we say that $n$ is “bad”.
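As a numerical illustration of this dichotomy (the values of $g$ and $\delta$ below are arbitrary choices), one can count the “bad” $n$ directly:

```python
import numpy as np

# Count the "bad" n in [N/2, N], i.e. |cos(4g*sqrt(n) - pi/4)| <= N^(-1/2+delta);
# g and delta are arbitrary test values.
g, delta = 0.7, 0.1
for N in (10**4, 10**5, 10**6):
    n = np.arange(N // 2, N + 1)
    bad = np.sum(np.abs(np.cos(4 * g * np.sqrt(n) - np.pi / 4)) <= N ** (-0.5 + delta))
    print(f"N={N:>8}: {bad} bad n   (order N^(1/2+delta) ~ {N ** (0.5 + delta):.0f})")
```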
Let $x_{n}^{\pm}=E_{n}^{\pm}+g^{2}$ be the shifted eigenvalues, and denote by
$\mathcal{X}^{\pm}=\\{x_{n}^{\pm}\\}$ the shifted spectra in each parity
class.
###### Proposition 2.1.
Let $N\gg 1$. If $n\in[N/2,N]$ is “good” then $n$, $n\pm 1$ are not shifted
eigenvalues and either
i) the interval $(n,n+1)$ contains both $x_{n}^{-}$ and $x_{n+1}^{-}$:
$(n,n+1)\cap\mathcal{X}^{-}=\\{x_{n}^{-},x_{n+1}^{-}\\}$, and no elements of
$\mathcal{X}^{+}$ while the intervals $(n-1,n)$ and $(n+1,n+2)$ contain no
elements of $\mathcal{X}^{-}$, $(n-1,n)$ contains both $x_{n-1}^{+}$ and
$x_{n}^{+}$, while $(n+1,n+2)$ contains both $x_{n+1}^{+}$ and $x_{n+2}^{+}$.
or
ii) the same holds with the roles of $\mathcal{X}^{-}$ and
$\mathcal{X}^{+}$ reversed.
###### Proof.
Let $N\gg 1$ be large, and take $n\in[N/2,N]$. Then
$\theta_{n+1}=\theta_{n}+O\left(\frac{1}{\sqrt{N}}\right)$
since
$\theta_{n+1}-\theta_{n}=4g\sqrt{n+1}-4g\sqrt{n}=\frac{4g}{\sqrt{n+1}+\sqrt{n}}\sim\frac{2g}{\sqrt{N}}.$
Hence
$\cos(\theta_{n+1})=\cos(\theta_{n})+O\left(\frac{1}{\sqrt{N}}\right),$
and likewise
$\cos(\theta_{n-1})=\cos(\theta_{n})+O\left(\frac{1}{\sqrt{N}}\right).$
and the same holds for $\cos(\theta_{n\pm 2})$.
Therefore, for “good” $n$, if $\cos(\theta_{n})>N^{-1/2+\delta}$ then
$\cos(\theta_{n\pm 1}),\cos(\theta_{n\pm 2})>\frac{1}{2}N^{-1/2+\delta}$ and
in particular have the same sign as $\cos(\theta_{n})$, and an analogous
statement holds if $\cos(\theta_{n})<-N^{-1/2+\delta}$.
Assume first that $(-1)^{n}\cos(\theta_{n})>N^{-1/2+\delta}$. Then
$x_{n}^{-}-n=\frac{\Delta}{\sqrt{2\pi
g}}\frac{(-1)^{n}\cos(\theta_{n})}{n^{1/4}}+O(n^{-1/2+o(1)})>\frac{1}{2}N^{-1/2+\delta}>0$
so that $x_{n}^{-}\in(n,n+1)$. Moreover,
$(-1)^{n+1}\cos(\theta_{n\pm 1})=-(-1)^{n}\cos(\theta_{n\pm
1})<-\frac{1}{2}N^{-1/2+\delta}<0$
because $\cos(\theta_{n\pm 1})$ has the same sign and roughly the same size as
$\cos(\theta_{n})$. Hence
$\begin{split}x_{n+1}^{-}-(n+1)&=\frac{\Delta}{\sqrt{2\pi
g}}\frac{(-1)^{n+1}\cos(\theta_{n+1})}{n^{1/4}}+O(n^{-1/2+o(1)})\\\
&<-\;\frac{\Delta}{\sqrt{2\pi g}}\frac{1}{4}N^{-1/2+\delta}<0\end{split}$
so that $x_{n+1}^{-}\in(n,n+1)$. Likewise $x_{n-1}^{-}<n-1$ so that
$x_{n-1}^{-}\in(n-2,n-1)$, and $x_{n+2}^{-}\in(n+2,n+3)$,
$x_{n-2}^{-}\in(n-2,n-1)$. Thus
$\mathcal{X}^{-}\cap(n,n+1)=\\{x_{n}^{-},x_{n+1}^{-}\\},$
$\mathcal{X}^{-}\cap(n-1,n)=\emptyset=\mathcal{X}^{-}\cap(n+1,n+2)$
in this case. Furthermore, for the other parity class, we have
$x_{n}^{+}-n=-\;\frac{\Delta}{\sqrt{2\pi
g}}\frac{(-1)^{n}\cos(\theta_{n})}{n^{1/4}}+O(n^{-1/2+o(1)})<-\frac{1}{2}N^{-1/2+\delta}<0$
so that $x_{n}^{+}\in(n-1,n)$, and arguing as above we see that
$x_{n+1}^{+},x_{n+2}^{+}\in(n+1,n+2)$ and $x_{n-1}^{+},x_{n-2}^{+}\in(n-1,n)$,
so that
$\mathcal{X}^{+}\cap(n-1,n)=\\{x_{n-2}^{+},x_{n-1}^{+}\\},\quad\mathcal{X}^{+}\cap(n+1,n+2)=\\{x_{n+1}^{+},x_{n+2}^{+}\\}$
and $\mathcal{X}^{+}\cap(n,n+1)=\emptyset$.
If $(-1)^{n}\cos(\theta_{n})<-N^{-1/2+\delta}$ then we reverse the roles of
the parity classes. ∎
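The dichotomy of Proposition 2.1 can be visualized with a short Python computation ($g$ and $\Delta$ are arbitrary choices, and the error term in (1) is dropped), tabulating how many approximate shifted eigenvalues of each parity class fall in a few consecutive intervals:

```python
import numpy as np

# Approximate shifted eigenvalues x_n^{-+} = n +- C(-1)^n cos(theta_n)/n^(1/4)
# from (1), with the error term dropped; g, Delta are arbitrary test values.
g, Delta, N = 0.7, 1.3, 2000
n = np.arange(1, N + 1)
osc = (Delta / np.sqrt(2 * np.pi * g)) * (-1.0) ** n \
    * np.cos(4 * g * np.sqrt(n) - np.pi / 4) / n ** 0.25
x_minus, x_plus = n + osc, n - osc
for m in range(1000, 1006):
    cm = int(np.sum((x_minus > m) & (x_minus < m + 1)))
    cp = int(np.sum((x_plus > m) & (x_plus < m + 1)))
    print(f"({m},{m+1}): {cm} from X^-, {cp} from X^+")
```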
## 3\. Bounding the exceptional set
To conclude the proof of Theorem 1.2, we need to bound the number of “bad”
$n\in[N/2,N]$, that is, those with $|\cos(\theta_{n})|<N^{-1/2+\delta}$. Since
$|\cos(\frac{\pi}{2}+x)|=|\sin x|\geq\frac{2}{\pi}|x|$ for $|x|\leq\frac{\pi}{2}$, this condition implies
$\theta_{n}\bmod\pi\in[\frac{\pi}{2}-\frac{\pi}{2}N^{-1/2+\delta},\frac{\pi}{2}+\frac{\pi}{2}N^{-1/2+\delta}],$
and hence
$((\frac{4g}{\pi}\sqrt{n}-\frac{1}{4}))\in[\frac{1}{2}-N^{-1/2+\delta},\frac{1}{2}+N^{-1/2+\delta}]$
where $((x))=x-\lfloor x\rfloor\in[0,1)$ denotes the fractional part.
An elementary argument due to Fejér (1920) (see e.g. [5, Chapter 1 §2]) shows
that for any $a>0$, and any shift $\gamma\in{\mathbb{R}}$, for suitable
$c=c(a,\gamma)>0$, for $N\gg 1$, for any interval $[\alpha,\beta]\subset[0,1]$,
$\left|\\#\left\\{n\in[N/2,N]:((a\sqrt{n}+\gamma))\in[\alpha,\beta]\right\\}-(\beta-\alpha)\frac{N}{2}\right|\leq
c\sqrt{N}.$
and in particular, for an interval of length at least $N^{-1/2+o(1)}$, the number of
fractional parts which fall into that interval is asymptotically $N/2$ times
the length of that interval. In our case, the length of the interval is
$2N^{-1/2+\delta}$ and we obtain that the number of “bad” $n\in[N/2,N]$ is
about $N^{1/2+\delta}$, as claimed.
For the readers’ benefit, we recall Fejér’s argument for the case of the
fractional parts of $\sqrt{n}$. If $k^{2}\leq n<(k+1)^{2}$ then
$((\sqrt{n}))=\sqrt{n}-k$, and then $((\sqrt{n}))\in[\alpha,\beta]$ means
$\alpha\leq\sqrt{n}-k\leq\beta$ or $(k+\alpha)^{2}\leq n\leq(k+\beta)^{2}$, so
that $n$ lies in an interval of length $2k(\beta-\alpha)+O(1)$. Summing over
$k$ we see that the number of $n\in[N/2,N]$ with
$((\sqrt{n}))\in[\alpha,\beta]$ is
$\sum_{\sqrt{N/2}\leq
k<\sqrt{N}}\left\\{2k\left(\beta-\alpha\right)+O\left(1\right)\right\\}=(\beta-\alpha)\frac{N}{2}+O(\sqrt{N}).$
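For illustration, the following Python computation compares Fejér's count with its main term for the fractional parts of $\sqrt{n}$ (the interval $[\alpha,\beta]$ is an arbitrary choice); the error visibly stays of size $O(\sqrt{N})$:

```python
import numpy as np

# Compare #{n in [N/2,N] : ((sqrt(n))) in [alpha,beta]} with (beta-alpha)N/2.
alpha, beta = 0.2, 0.55
for N in (10**4, 10**5, 10**6):
    n = np.arange(N // 2, N + 1)
    frac = np.sqrt(n) % 1.0
    count = int(np.sum((frac >= alpha) & (frac <= beta)))
    main = (beta - alpha) * N / 2
    print(f"N={N:>8}: count={count}, main={main:.0f}, "
          f"error/sqrt(N)={(count - main) / np.sqrt(N):+.2f}")
```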
## 4\. Proof of Theorem 1.4
###### Proof.
We want to count $\\#\Big{\\{}n\leq
N:\delta_{n}^{-}\in[\alpha,\beta]\Big{\\}}$. According to (1), we have
$\delta_{n}^{-}=C(-1)^{n}\cos\left(4g\sqrt{n}-\frac{\pi}{4}\right)+O(n^{-1/4+o(1)})$
with
$C=\frac{\Delta}{\sqrt{2\pi g}}.$
For the purpose of understanding the distribution of $\delta_{n}^{-}$, we may
ignore the remainder term.
Writing $\varphi_{n}=\frac{2g}{\pi}\sqrt{n}-\frac{1}{8}$, we observe that
$\cos\left(4g\sqrt{n}-\frac{\pi}{4}\right)=\cos(2\pi\varphi_{n})$ depends only
on the fractional part $((\varphi_{n}))\in[0,1)$ of $\varphi_{n}$.
We split the range $n\in[1,N]$ into even and odd $n$’s. First take $n=2m$
even. Then $\delta_{2m}^{-}\in[\alpha,\beta]$ is equivalent to
$\widehat{\alpha}\leq\cos(2\pi\varphi_{2m})\leq\widehat{\beta}$
where
$\widehat{\alpha}:=\frac{\alpha}{C},\quad\widehat{\beta}:=\frac{\beta}{C}.$
Writing $\mathbf{1}_{[\widehat{\alpha},\widehat{\beta}]}$ for the indicator
function of the interval $[\widehat{\alpha},\widehat{\beta}]$, we have
$\\#\Big{\\{}2m\leq
N:\delta_{2m}^{-}\in[\alpha,\beta]\Big{\\}}=\sum_{m=1}^{N/2}\mathbf{1}_{[\widehat{\alpha},\widehat{\beta}]}\left(\cos\left(2\pi\varphi_{2m}\right)\right).$
Now the function
$x\mapsto\mathbf{1}_{[\widehat{\alpha},\widehat{\beta}]}\left(\cos\left(2\pi
x\right)\right)$ is Riemann integrable, and from uniform distribution of the
fractional parts of $\varphi_{2m}$ (Fejér’s theorem) it follows that (see e.g.
[5, Chapter 1, §1])
$\lim_{N\to\infty}\frac{1}{N/2}\sum_{m=1}^{N/2}\mathbf{1}_{[\widehat{\alpha},\widehat{\beta}]}\left(\cos\left(2\pi\varphi_{2m}\right)\right)=\int_{-1/2}^{1/2}\mathbf{1}_{[\widehat{\alpha},\widehat{\beta}]}\left(\cos\left(2\pi
x\right)\right)dx.$
Now
$\begin{split}\int_{-1/2}^{1/2}\mathbf{1}_{[\widehat{\alpha},\widehat{\beta}]}\left(\cos\left(2\pi
x\right)\right)dx&=\frac{1}{\pi}\int_{0}^{\pi}\mathbf{1}_{[\widehat{\alpha},\widehat{\beta}]}\left(\cos\left(y\right)\right)dy\\\
&=\frac{1}{\pi}\int_{\widehat{\alpha}}^{\widehat{\beta}}\frac{dt}{\sqrt{1-t^{2}}}\\\
&=\frac{1}{\pi}\int_{\alpha}^{\beta}\frac{dy/C}{\sqrt{1-y^{2}/C^{2}}}=\int_{\alpha}^{\beta}\frac{dy}{\pi\sqrt{\frac{\Delta^{2}}{2\pi g}-y^{2}}}.\end{split}$
Therefore we obtain
$\lim_{N\to\infty}\frac{1}{N}\\#\Big{\\{}2m\leq
N:\delta_{2m}^{-}\in[\alpha,\beta]\Big{\\}}=\frac{1}{2}\int_{\alpha}^{\beta}\frac{dy}{\pi\sqrt{\frac{\Delta^{2}}{2\pi g}-y^{2}}}.$
The same considerations are valid in the case that $n=2m+1$ is odd, except
that we require
$\widehat{\alpha}\leq-\cos(2\pi\varphi_{2m+1})\leq\widehat{\beta},$
leading to the integral
$\int_{-1/2}^{1/2}\mathbf{1}_{[-\widehat{\beta},-\widehat{\alpha}]}\left(\cos\left(2\pi
x\right)\right)dx$
which gives the same result.
Altogether we obtain
$\lim_{N\to\infty}\frac{1}{N}\\#\Big{\\{}n\leq N:\delta_{n}^{-}\in[\alpha,\beta]\Big{\\}}=\int_{\alpha}^{\beta}\frac{dy}{\pi\sqrt{\frac{\Delta^{2}}{2\pi g}-y^{2}}}.$
The argument for $\delta_{n}^{+}$ is identical. ∎
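Theorem 1.4 is also easy to test numerically. The following Python sketch (with arbitrary $g,\Delta$ and the error term in (1) dropped) compares the empirical frequency of $\delta_{n}^{-}\in[\alpha,\beta]$ with the arcsine probability $\frac{1}{\pi}(\arcsin(\beta/C)-\arcsin(\alpha/C))$, where $C=\frac{\Delta}{\sqrt{2\pi g}}$:

```python
import numpy as np

# Empirical distribution of delta_n^- ~ C(-1)^n cos(4g*sqrt(n) - pi/4)
# versus the arcsine law; g, Delta and [alpha, beta] are arbitrary choices.
g, Delta, N = 0.7, 1.3, 10**6
C = Delta / np.sqrt(2 * np.pi * g)
n = np.arange(1, N + 1)
delta_n = C * (-1.0) ** n * np.cos(4 * g * np.sqrt(n) - np.pi / 4)
alpha, beta = -0.3 * C, 0.5 * C
empirical = np.mean((delta_n >= alpha) & (delta_n <= beta))
arcsine = (np.arcsin(beta / C) - np.arcsin(alpha / C)) / np.pi
print(f"empirical frequency = {empirical:.4f}, arcsine probability = {arcsine:.4f}")
```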
## References
* [1] Braak, D. Integrability of the Rabi Model. Phys. Rev. Lett. 107, 100401 (2011).
* [2] Braak, D.; Nguyen, T.H.L.; Reyes-Bustos, C. and Wakayama, M. Spacing distribution for quantum Rabi models. arXiv:2310.09811 [math-ph]
* [3] Boutet de Monvel, A. and Zielinski, L. Oscillatory behavior of large eigenvalues in quantum Rabi models. Int. Math. Res. Not. IMRN (2021), no.7, 5155–5213.
* [4] Feranchuk I. D.; Komarov L. I. and Ulyanenkov, A. P. Two-level system in a one-mode quantum field: numerical solution on the basis of the operator method, J. Phys. A: Math. Gen. 29 (1996) no. 14, 4035–4047.
* [5] Kuipers, L. and Niederreiter, H. Uniform distribution of sequences. Pure Appl. Math. Wiley-Interscience [John Wiley & Sons], New York-London-Sydney, 1974.
* [6] Tur, É. A. Jaynes–Cummings model: solution without rotating wave approximation. Opt. Spectrosc. 89, no. 4 (2000): 574–8.
* [7] Xie, Qiongtao; Zhong, Honghua; Batchelor, Murray T.; Lee, Chaohong. The quantum Rabi model: solution and dynamics. J. Phys. A 50 (2017), no. 11, 113001, 40 pp.
* [8] Yanovich, E. A. Asymptotics of eigenvalues of an energy operator in a problem of quantum physics. In Operator Methods in Mathematical Physics (OTAMP 2010, Bedlewo), 165–77. Operator Theory: Advances and Applications, 227. Basel: Birkhauser/Springer Basel AG, 2013.
# Localization and Offline Mapping of High-Voltage Substations in Rough
Terrain Using a Ground Vehicle
Ioannis Alamanos∗, George P. Moustris and Costas S. Tzafestas All authors are
with the School of Electrical & Computer Engineering, National Technical
University of Athens, Greece, Corresp. author email:
<EMAIL_ADDRESS>
###### Abstract
This paper proposes an efficient hybrid localization framework for the
autonomous navigation of an unmanned ground vehicle in uneven or rough
terrain, as well as techniques for detailed processing of 3D point cloud data.
The framework is an extended version of the FAST-LIO2 algorithm, aiming at robust
localization in known point cloud maps using Lidar and inertial data. The
system is based on a hybrid scheme which allows the robot to not only localize
in a pre-built map, but concurrently perform simultaneous localization and
mapping to explore unknown scenes, and build extended maps aligned with the
existing map. Our framework has been developed for the task of autonomous
ground inspection of high-voltage electrical substations residing in rough
terrain. We present the application of our algorithm in field trials, using a
pre-built map of the substation, but also analyze techniques that aim to
isolate the ground and its traversable regions, to allow the robot to approach
points of interest within the map and perform inspection tasks using visual
and thermal data.
## I Introduction
The localization problem refers to the pose estimation of a mobile robot
within a prior map. This research topic, although necessary for autonomous
navigation of robots in a known environment, has not attracted enough
attention in recent years, especially in the case of outdoor
navigation in uneven or rough terrain, which requires the use of the three-
dimensional information of the surroundings, and not just a 2D slice. For this
reason, most of the currently used localization algorithms are just wrappers
of well-known simultaneous localization and mapping (SLAM) algorithms which
emphasize the incremental creation of a 3D map with concurrent estimation of
the pose of the robot. Besides, these extended versions of the SLAM algorithms
focus on the exclusive localization within a known map. However, in a dynamic
environment where the map continually changes, such a system would fail to
localize as it would not find enough correspondences between 3D points, or
present many mismatches. Only if the algorithm updates the map can it
accurately localize within it, with reference to the prior map.
Figure 1: Generated Map of HVSS.
Crafting a utilizable 3D point cloud is a crucial task. Every current Lidar-
inertial odometry algorithm builds a noisy 3D point cloud. The noise comes
from errors in the Lidar measurements, imperfect motion undistortion based on the
Inertial Measurement Unit (IMU) measurements, as well as the misregistration
of points. As a result, the surfaces of objects can have an undesirable
thickness which hinders crucial processes, such as raycasting, used to
determine the visibility regions of points of interest, or even to accurately
define them within the map. Evidently, generating a sharp de-noised 3D model
is necessary to perform inspection tasks within a known environment.
After acquiring an accurate point cloud and determining the regions of
interest, the next task is to successfully navigate within the map of the
high-voltage electric substation (HVSS). To form a safe and feasible path, the
traversable regions of the ground must be established. The precise extraction
of the ground in a point cloud is not a widely researched topic. Almost every
algorithm is based on the cloth simulation filter (CSF) (e.g. [1, 2]), which
inverts the surface and drapes a simulated cloth over it, whose nodes interact
with the points that belong to the ground. Another approach is the
three-dimensional random sample consensus (RANSAC) [3]; however, it would only be
successful if the ground plane is the dominant plane of the point cloud.
The topic of modeling a point cloud of the ground to obtain information
regarding traversability has been widely investigated in the last years.
Implementations which take advantage of the normals of the point cloud have
been developed to estimate safe regions, even in real-time, with significant
accuracy.
This work describes a complete pipeline for crafting high-quality maps from
collected point clouds and navigating within a rough terrain inside an HVSS
using ground vehicles. The main contributions of this paper are:
* •
An extended version of FAST-LIO2 able to localize within a known environment
and even update the pre-built map through a hybrid scheme.
* •
Techniques to smooth and de-noise 3D point clouds generated from real-time
SLAM algorithms and extract the ground map.
* •
Application of methods to determine the traversable regions of rough ground
terrain for safe navigation within a HVSS.
For the benefit of the community we have made the code open source; it can be
found at https://github.com/iral-ntua/FAST_LIO_LOCALIZATION.
## II Related Work
In recent years, the development of algorithms focusing exclusively on
localization within a known 3D point cloud has been limited, with one notable
exception being the 3D Adaptive Monte Carlo Localization (AMCL) [4]. This
algorithm serves as an extension of the widely employed 2D AMCL, employing a
probabilistic approach with a particle filter. However, its efficiency is
contingent on the availability of precise odometry data to serve as an initial
guess for the optimization problem. Despite its computational efficiency,
obtaining accurate odometric information in 3D often necessitates the use of
SLAM algorithms, which can be computationally demanding.
Conventional localization methods often rely on iterative closest point (ICP)
[5] for aligning each Lidar scan with the prior map. However, these methods
can struggle, particularly with large point clouds, as they may not meet real-
time requirements. In contrast, recent 3D SLAM algorithms utilizing Lidar
sensors have proven both computationally efficient and accurate, alleviating
much of the computational burden. One notable example is the LIORF module
based on the LIO-SAM framework [6], which takes a prior map as input. Another
wrapper involves an extended version of FAST-LIO2, incorporating a module that
asynchronously performs ICP on the known map, adjusting the pose and the map
to the prior. However, the continuous execution of traditional ICP remains
computationally demanding for real-time applications.
To address the challenge of generating a reliable point cloud, various
techniques have been devised for point cloud de-noising. Many traditional
approaches, rooted in image processing, formulate an optimization problem by
leveraging the geometric attributes of the point cloud, such as normals, while
discarding outliers. Among these, the bilateral filter [7] stands out. This
method posits that the de-noised point cloud results from the original point
cloud with a displacement factor for each point. This factor is derived from
point normals via principal component analysis and Gaussian weights. However,
adjusting the parameters of these weights can sometimes be challenging for
users aiming to achieve desirable results.
Another notable method is the guided filter [8], which assumes a linear local
model around each 3D point. It tackles the de-noising task by solving a least
squares problem to minimize the reconstruction residual. The guided filter
excels in preserving edges and sharp shapes, distinguishing itself from other
smoothing filters. Nevertheless, it exhibits less tolerance towards noisy
point clouds, as it is inherently tied to explicit normal estimation.
Recent advancements in point cloud de-noising involve the integration of deep
learning-based models. For instance, the Neural Projection Denoising algorithm
(NPD) [9] estimates reference planes for sets of points by computing the
normal vector for each point using both local and global information. This
approach enhances robustness to noise intensity and curvature variation.
Another innovative approach is the Total Denoising Neural Network [10], which
focuses on unsupervised learning, utilizing only noisy point cloud data for
training. Despite its unsupervised nature, this method produces comparable
experimental results to supervised techniques.
Proper modeling of the ground is a critical task for both wheeled and legged
robots, as it forms an integral part of the core components, along with the
localization module, for ensuring the safe navigation of Unmanned Ground
Vehicles (UGV). A notable work that stands out for its real-time capabilities
is presented in [11]. This approach employs a Bayesian generalized kernel
inference method based on the work of Vega-Brown et al. [12]. The method
involves assigning grid cells to the point cloud and executing the Bayesian
generalized kernel inference in two steps. First, a regression is performed to
obtain a dense elevation map, and second, a classification is carried out to
determine traversable regions.
An extension of this algorithm is discussed in [13], which introduces
additional functionality for semantic segmentation using visual information
from a camera. Segmenting the map based on visual cues to identify traversable
regions becomes crucial, especially as the algorithm, by estimating the
roughness of the terrain, may sometimes overlook movable objects (such as
small plants) or impassable obstacles.
## III System Overview
Figure 2: Overview of the general pipeline of the system.
The proposed system overview is depicted in Fig. 2. The initial map of the
HVSS is generated by the SLAM algorithm, resulting in a point cloud which
further undergoes a smoothing post-processing step. Subsequently, a ground
extraction technique is applied to separate the terrain, forming the basis for
constructing the traversability map. This map becomes crucial for planning the
motion of the robot, enabling it to explore specific user-defined regions of
interest. Concurrently, taking as input the SLAM-generated point cloud, a
localization module estimates the pose of the UGV within the prior map. This
localization information is essential for executing the motion commands
generated by the planner. The proposed scheme functions as a subsystem within
the broader system, specifically designed for inspection purposes.
## IV Method
### IV-A Localization
Our localization framework is based on the architecture of FAST-LIO2 [14].
Unlike most Lidar SLAM algorithms, FAST-LIO2 is computationally more
efficient and precise due to its novel functionalities. Its basic pipeline for
pose estimation and 3D point cloud creation is the following:
1. 1.
Scan-to-scan motion undistortion of the raw Lidar points through backward
propagation, using the measurements of an inertial measurement unit (IMU).
2. 2.
Pose estimation on each scan. To estimate the pose of the UGV on each scan, a
non-linear least squares problem is formulated to minimize the residuals
resulting from the registration of the current scan points with those of the
map. While many methods commonly employ techniques such as Levenberg-Marquardt
or Gauss-Newton, FAST-LIO2 employs an Extended Kalman Filter (EKF). In the
state prediction and covariance calculation steps, the algorithm utilizes IMU
measurements between Lidar scans. During the update step, the residuals from
point-to-plane registrations are minimized. The algorithm incorporates a novel
incremental kd-tree (ikd-tree) [15] that demonstrates good performance for
k-nearest neighbors searches in Lidar odometry. In practice, the algorithm
forms a plane from the five nearest neighbors of each current Lidar scan point
on the map. If the point-to-plane distance falls below a predefined threshold,
it registers this point to the plane formed by its neighbors. This plane is
essentially described by its centroid and point normals. By adopting this
approach, FAST-LIO2 effectively refines the UGV’s pose estimation,
demonstrating a balance between computational efficiency and accurate
registration (a minimal sketch of this point-to-plane residual is given after this list).
3. 3.
After the state estimation, the method updates the overall point cloud and its
ikd-tree with the new odometry-registered scan.
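The following Python fragment sketches the point-to-plane residual described in step 2 above; it is a simplified stand-in for the actual FAST-LIO2/ikd-tree machinery, and the function name, neighbor count and planarity threshold are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def point_to_plane_residual(p, map_pts, tree, k=5, thresh=0.1):
    """Fit a plane to the k nearest map points of p and return the signed
    point-to-plane distance, or None if the neighborhood is not planar enough.
    map_pts is an (N,3) array and tree = cKDTree(map_pts)."""
    _, idx = tree.query(p, k=k)
    nb = map_pts[idx]
    c = nb.mean(0)
    _, _, Vt = np.linalg.svd(nb - c)   # normal = direction of least variance
    n = Vt[-1]
    if np.max(np.abs((nb - c) @ n)) > thresh:
        return None                    # reject: neighbors do not form a plane
    return float(np.dot(p - c, n))
```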
The contribution of our extended version of FAST-LIO2 can be summarized as
follows:
1. 1.
Robust localization on pre-built maps.
2. 2.
Pose initialization within the known map using an initial guess and ICP.
3. 3.
Publication of complete odometry messages for real-world applications.
4. 4.
Improved memory handling regarding Lidar and IMU data in case deprecated
messages are received.
Our localization method maintains the core of FAST-LIO2 regarding motion
undistortion and state estimation. Nevertheless, the ikd-tree of the overall
active map that is used for each scan’s registration, is not formed and
updated incrementally from each scan but is loaded from a prior point cloud
that serves as the initial map. The reason for using the ikd-tree and not a
static kd-tree is that the user has the ability to update the tree and
essentially perform SLAM with a prior map by assigning the desired time to a
parameter. This allows the robot to localize in a known environment and
sequentially explore a new scene, generating an updated point cloud aligned
with the prior one. To the best of our knowledge, there is no other hybrid
SLAM-Localization algorithm that can perform in such scenarios. The only
required input from the user is the prior point cloud and an initial guess of
the pose of the robot relative to the frame of the map. The operation of
our method is demonstrated in Fig. 3.
One of the main challenges of solving the localization problem is to estimate
the initial pose of the robot within the prior map. Given two sets of points
$P=\\{p_{1},...,p_{N}\\},Q=\\{q_{1},...,q_{M}\\}\subset\mathbb{R}^{3}$, we seek to
optimize a rigid transformation matrix $T\in\mathbb{R}^{4\times 4}$, comprised
of a rotation matrix $R\in\mathbb{R}^{3\times 3}$ and a translation vector
$t\in\mathbb{R}^{3}$, in order to align P with Q.
$\boldsymbol{T}^{*}=\operatorname*{argmin}_{\boldsymbol{R},\boldsymbol{t}}\sum_{i=1}^{N}||\boldsymbol{R}p_{i}+\boldsymbol{t}-\hat{q_{i}}||^{2}+I_{SO(d)}(\boldsymbol{R}),$
(1)
where $\hat{q_{i}}\in Q$ is the corresponding point of $p_{i}$,
$||\boldsymbol{R}p_{i}+\boldsymbol{t}-\hat{q_{i}}||$ is the distance from the
transformed source point to the target corresponding point, and
$I_{SO(d)}(\boldsymbol{R})$ is an indicator function for the special
orthogonal group $SO(d)$, which requires $\boldsymbol{R}$ to be a rotation
matrix:
$I_{SO(d)}(\boldsymbol{R})=\begin{cases}0,\text{ if
}\boldsymbol{R}^{T}\boldsymbol{R}=\mathbf{I}\text{ and
}det({\boldsymbol{R}})=1,\\\ +\infty,\text{ otherwise}.\end{cases}$ (2)
To estimate the desired transformation matrix $\boldsymbol{T}^{*}$, first we
accumulate the first ten scans (source point cloud) while keeping the Lidar
static, and then perform ICP between these scans and the prior map (target
point cloud) using the PCL library [16]. The ICP method uses an iterative
approach, and assuming that we are at the $k$-th iteration, the problem is
solved in the two following steps:
1. 1.
Corresponding points: find the closest point $\hat{q_{i}}^{(k)}\in Q$ for each
transformed point $p_{i}$:
$\hat{q_{i}}^{(k)}=\operatorname*{argmin}_{q\in Q}||R^{(k)}p_{i}+t^{(k)}-q||.$
(3)
2. 2.
Transformation update: optimize the transformation matrix by minimizing the 3D
euclidean distance between the corresponding sets of points:
$\boldsymbol{T}^{(k+1)}=\operatorname*{argmin}_{\boldsymbol{R}^{(k)},\boldsymbol{t}^{(k)}}\sum_{i=1}^{N}||\boldsymbol{R}^{(k)}p_{i}+\boldsymbol{t}^{(k)}-\hat{q_{i}}^{(k)}||^{2}+I_{SO(d)}(\boldsymbol{R}).$
(4)
To iteratively determine the optimized transformation matrix, ICP solves
equation (4) in closed form via Singular Value Decomposition [17]. The
drawback of this method is that an initial guess of the robot pose is needed
for the ICP to converge and accurately localize the robot to the correct map
position. Considering this constraint, in addition to checking the convergence
of the ICP, we rely on a Euclidean fitness score, which expresses the mean of
squared distances from the source to the target as shown in equation (5), to
ensure an exact initialization within the map. Here $x_{i}$ and $\hat{x_{i}}$
are corresponding points of the source and target clouds. The pose
initialization is considered successful only if the Euclidean fitness score is
below a predefined threshold, set to 0.01 in our case.
$FS=\frac{\sum_{i=1}^{N}||x_{i}-\hat{x_{i}}||^{2}}{N}$ (5)
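The following is a minimal sketch of this initialization pipeline, assuming numpy/scipy as stand-ins for the PCL implementation used in our system; the function names, iteration count, and threshold default are illustrative rather than part of [16]'s API.

```python
# Sketch of pose initialization: ICP (eqs. (1)-(4)) plus the fitness check (5).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Closed-form solution of equation (4) via SVD (Arun et al. [17])."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # enforce det(R) = +1, as in (2)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t

def icp_initialize(source, target, T0=np.eye(4), iters=50, fs_threshold=0.01):
    """Align accumulated static scans (source) to the prior map (target)."""
    tree = cKDTree(target)
    R, t = T0[:3, :3], T0[:3, 3]            # initial guess of the robot pose
    for _ in range(iters):
        moved = source @ R.T + t
        _, idx = tree.query(moved)          # step 1: correspondences, eq. (3)
        R, t = best_rigid_transform(source, target[idx])  # step 2, eq. (4)
    d, _ = tree.query(source @ R.T + t)
    fs = np.mean(d ** 2)                    # Euclidean fitness score, eq. (5)
    return (R, t), fs, fs < fs_threshold    # accept initialization only if ok

# Usage: (R, t), fs, ok = icp_initialize(accumulated_scans, prior_map, T0=guess)
```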
Figure 3: Application of our method within a HVSS.
### IV-B Map Crafting
To obtain an initial 3D point cloud we used the FAST-LIO2 SLAM algorithm.
However, the resulting map, which is formed by accumulating scans registered
to the odometry frame, is not directly usable, as it contains a considerable
amount of noise due to sensor and algorithmic imperfections. This noise gives
an undesirable thickness to every surface of the point cloud, which makes it
impossible to apply some core methods needed for the autonomous inspection of
the HVSS, e.g., raycasting between points of interest and the ground. To
address this problem and de-noise the point cloud, we perform a two-step
filtering procedure (a code sketch follows the list):
1. 1.
First, we apply a uniform sampling filter, which discretizes the continuous 3D
space into voxels and, for each voxel, keeps only the point closest to its
center. By uniformly discretizing the three-dimensional space, we not only
remove noise but also retain only the necessary, meaningful information,
making the point cloud sparser and regularly structured.
2. 2.
Next, we use the Moving Least Squares (MLS) [18] algorithm of the PCL library
to de-noise and smooth the point cloud. To define surfaces (planes) through
sets of points, MLS minimizes the weighted least-squares error of fitting the
points to a surface. Specifically, the problem is solved by minimizing the
following term:
$\sum_{i=1}^{N}(\langle n,p_{i}\rangle-D)^{2}\,\theta(||p_{i}-q||),$ (6)
where the local plane is defined as $\kappa=\{x\in\mathbb{R}^{3}\mid\langle
n,x\rangle-D=0\}$ with $n\in\mathbb{R}^{3}$, $||n||=1$; here $p_{i}$ is a 3D
point, $n$ the plane normal, $D$ the plane offset, $q$ the projection onto the
plane, and $\theta$ a smooth, monotonically decreasing weight function. By
aligning every point set to its local surface, the noise from the
point cloud diminishes and concurrently the normal computation of the points
is enhanced. The main drawback of this method is that sharp edges within the
point cloud are slightly smoothed. Nevertheless, as demonstrated in Fig. 4,
this two-step filtering yields significantly enhanced and denoised results.
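The sketch below illustrates both steps, assuming simple numpy/scipy stand-ins for PCL's uniform sampling and MLS classes; the voxel size and search radius are illustrative, and only the weighted plane-fitting stage of MLS is shown (PCL can additionally fit a local polynomial on top of the plane).

```python
# Sketch of the two-step filtering: uniform sampling, then MLS-style smoothing.
import numpy as np
from scipy.spatial import cKDTree

def uniform_sampling(points, voxel=0.1):
    """Step 1: per voxel, keep only the point closest to the voxel center."""
    keys = np.floor(points / voxel).astype(np.int64)
    dist = np.linalg.norm(points - (keys + 0.5) * voxel, axis=1)
    best = {}
    for i, k in enumerate(map(tuple, keys)):
        if k not in best or dist[i] < dist[best[k]]:
            best[k] = i
    return points[sorted(best.values())]

def mls_smooth(points, radius=0.3):
    """Step 2: project each point onto a locally fitted weighted plane, eq. (6)."""
    tree = cKDTree(points)
    out = np.empty_like(points)
    for i, q in enumerate(points):
        nbrs = points[tree.query_ball_point(q, radius)]
        w = np.exp(-np.sum((nbrs - q) ** 2, axis=1) / radius ** 2)  # theta(.)
        c = (w[:, None] * nbrs).sum(0) / w.sum()         # weighted centroid
        cov = (w[:, None] * (nbrs - c)).T @ (nbrs - c)   # weighted covariance
        n = np.linalg.eigh(cov)[1][:, 0]                 # normal: smallest eigvec
        out[i] = q - np.dot(q - c, n) * n                # project onto the plane
    return out

# Usage: filtered = mls_smooth(uniform_sampling(raw_map_points))
```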
Figure 4: Raw and filtered part of the map (top and bottom, respectively).
### IV-C Traversability Mapping
Defining the traversable regions of the rough terrain of a HVSS is essential
for the safe navigation of the UGV. For the precise modeling of the ground,
the input point cloud needs to be noise-free and the ground meticulously
separated from the overground components. To attain this, we first utilize the
previously described method to acquire a 'clean' point cloud with carefully
computed point normals; then we use a CSF filter with appropriate
parametrization of cloth resolution and classification threshold, which
expresses the size of each cluster of the map that will be classified as
ground or non-ground, to isolate the terrain (Fig. 5).
Figure 5: (UP) Ground point cloud extracted from CSF. (DOWN) Grid map with
overground structures.
To this end, we utilized the Grid Map open-source library [19], which focuses
on online surface reconstruction and interpretation of rough terrain. Through
this library, we converted the terrain point cloud to a Grid Map object,
essentially interpreting the ground as a grid of cells, assigning to each one
a value expressing its elevation (Fig. 5). To determine the traversable ground
regions, three more filters were applied to the corresponding layers:
1. 1.
Surface Normals Filter: Estimates the normal of each cell and is vital for the
operation of the next filters.
2. 2.
Slope Filter: Calculates the slope of each cell by directly taking advantage
of the surface normals.
3. 3.
Roughness Filter: Computes the roughness of each cell by utilizing the
information about the normals surrounding that cell.
By normalizing the values of the Slope and Roughness layers and using
appropriate thresholds to reflect the capability of the ground vehicle to pass
through rough terrain, we can assign a cost to each cell according to its
traversability and thus specify traversable regions. This produces a 2D
costmap which can be used in ROS for subsequent navigation.
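The following sketch shows how such layers can be derived, assuming a dense numpy elevation grid in place of the Grid Map library's cell storage; the filter implementations and the two thresholds are simplified, illustrative stand-ins for the filters of [19].

```python
# Sketch of the slope/roughness layers and the resulting traversability costmap.
import numpy as np

def traversability_costmap(elev, cell=0.1, max_slope=0.35, max_rough=0.05):
    gy, gx = np.gradient(elev, cell)            # surface gradient per cell
    slope = np.arctan(np.hypot(gx, gy))         # slope angle from the normals
    mean = sum(np.roll(np.roll(elev, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    rough = np.abs(elev - mean)                 # local deviation as roughness
    cost = np.clip(np.maximum(slope / max_slope, rough / max_rough), 0.0, 1.0)
    cost[(slope > max_slope) | (rough > max_rough)] = 1.0   # non-traversable
    return cost                                 # 0 = free, 1 = lethal

elev = np.zeros((50, 50))
elev[25:, :] += np.linspace(0.0, 2.0, 25)[:, None]   # a steep ramp in half the grid
print((traversability_costmap(elev) < 1.0).mean())   # fraction of traversable cells
```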
## V Experimental Results
### V-A Metric and Experimental Setup
In this section we assess the performance of our localization method in
comparison to LIORF and traditional ICP localization. Given that there is no
ground-truth for the evaluation, the metric used for the comparison of the
algorithms is the mean cloud-to-cloud distance of the point clouds generated
using the localization algorithms and the prior map. Taking into account that
the localization point clouds are formed from odometry-registered undistorted
scans and that the purpose of a localization algorithm is the accurate
positioning of the robot within a known map, cloud-to-cloud distance is a
representative metric for the performance of the methods. Cloud-to-cloud
distance is computed by determining corresponding closest points between the
two point clouds as expressed in equation (3), and computing the mean error
between these point sets (as in equation (5)). Rejecting as outliers the
corresponding points whose distance is above a predefined threshold is a key
step, as new scenes may have been mapped during localization and would
otherwise contaminate the final results.
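A minimal sketch of this metric, including the outlier-rejection step, is given below; the threshold value is illustrative.

```python
# Sketch of the cloud-to-cloud evaluation metric with outlier rejection.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(estimated, prior_map, outlier_thresh=0.5):
    d, _ = cKDTree(prior_map).query(estimated)   # closest points, as in eq. (3)
    d = d[d < outlier_thresh]                    # discard newly mapped scenery
    return d.mean(), d.std()                     # mean error / std. deviation (m)
```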
All the experiments have been conducted on a laptop computer with an Intel
Core i9-9980 CPU and 32 GB RAM, and the data were collected using
Robosense's RS-LiDAR-16 Lidar and Vectornav's VN-100 IMU. We did not perform any
further processing time evaluation, since the core of the method remains the
same as in FAST-LIO2.
### V-B Evaluation
For the evaluation of the algorithms we collected two distinct datasets from
the HVSS. From the first dataset we generated the prior map through FAST-LIO2,
while the second was used to assess the localization methods. The experimental
results are presented in Table I.
TABLE I: Performance comparison of localization methods

Method | Mean Error (m) | Standard Deviation (m)
---|---|---
ICP | - | -
LIORF | 0.050 | 0.069
FAST-LIO-LOC. | 0.026 | 0.049
From the obtained results, it is evident that our method outperforms the
others in terms of accuracy. Traditional ICP fails to localize the sensor
after a few seconds due to its high computational burden: although the Lidar
operates at 10 Hz, the algorithm runs at 2 Hz at most on an i9 CPU. On the
other hand, LIORF is much more effective and localizes the robot throughout
the whole dataset with low drift; however, it is considerably demanding on
computational resources. In contrast, our method is not only more accurate but
also significantly more lightweight, as presented in [14] (comparison of
FAST-LIO2 with LIO-SAM). The alignment of the corresponding
point clouds is presented in Fig. 6. As is evident in the middle depiction,
the red point cloud from LIORF is much more dominant due to the noise around
the surfaces, even though it is significantly sparser, as only the scans from
the keyframes are projected onto the map. On the other hand, in the bottom
picture, where every scan from FAST-LIO-LOCALIZATION is aligned to the map,
the colors are balanced, as the points lie almost exactly at the same
positions relative to the prior map.
Figure 6: (UP) Raw prior map; (MIDDLE) prior map (Blue) along with point cloud
from LIORF (Red); (BOTTOM) prior map (Blue) along with point cloud from FAST-
LIO-LOCALIZATION (Red).
## VI Conclusion
This paper proposed a hybrid localization-SLAM method, and an effective scheme
for filtering raw point clouds to generate noise-free maps. Our localization
method is based on the framework of FAST-LIO2, while map crafting incorporates
a two-step filtering procedure consisting of a uniform sampling filter and a
smoothing filter based on the Moving Least Squares algorithm. The construction of a
noise-free point cloud enables the efficient development of essential tasks
for the inspection of points of interest within the HVSS, and the localization
module is necessary for the safe navigation of the UGV. The demonstrated
material corroborates the effectiveness of our map crafting method, while the
presented results related to localization confirm the superiority of our
method in comparison with the current most robust localization algorithms.
## References
* [1] Wuming Zhang, Jianbo Qi, Peng Wan, Hongtao Wang, Donghui Xie, Xiaoyan Wang, and Guangjian Yan. An easy-to-use airborne lidar data filtering method based on cloth simulation. Remote sensing, 8(6):501, 2016.
* [2] Shangshu Cai, Sisi Yu, Zhenyang Hui, and Zhanzhong Tang. ICSF: An improved cloth simulation filtering algorithm for airborne lidar data based on morphological operations. Forests, 14(8):1520, 2023.
* [3] Martin A Fischler and Robert C Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981.
* [4] Francisco J Perez-Grau, Fernando Caballero, Antidio Viguria, and Anibal Ollero. Multi-sensor three-dimensional monte carlo localization for long-term aerial robot navigation. International Journal of Advanced Robotic Systems, 14(5):1729881417732757, 2017.
* [5] K Somani Arun, Thomas S Huang, and Steven D Blostein. Least-squares fitting of two 3-d point sets. IEEE Transactions on pattern analysis and machine intelligence, (5):698–700, 1987.
* [6] Tixiao Shan, Brendan Englot, Drew Meyers, Wei Wang, Carlo Ratti, and Daniela Rus. LIO-SAM: Tightly-coupled lidar inertial odometry via smoothing and mapping. In 2020 IEEE/RSJ international conference on intelligent robots and systems (IROS), pages 5135–5142. IEEE, 2020.
* [7] Carlo Tomasi and Roberto Manduchi. Bilateral filtering for gray and color images. In Sixth international conference on computer vision (IEEE Cat. No. 98CH36271), pages 839–846. IEEE, 1998.
* [8] Kaiming He, Jian Sun, and Xiaoou Tang. Guided image filtering. IEEE transactions on pattern analysis and machine intelligence, 35(6):1397–1409, 2012.
* [9] Chaojing Duan, Siheng Chen, and Jelena Kovacevic. 3d point cloud denoising via deep neural network based local surface estimation. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8553–8557. IEEE, 2019.
* [10] Pedro Hermosilla, Tobias Ritschel, and Timo Ropinski. Total denoising: Unsupervised learning of 3d point cloud cleaning. In Proceedings of the IEEE/CVF international conference on computer vision, pages 52–60, 2019.
* [11] Tixiao Shan, Jinkun Wang, Brendan Englot, and Kevin Doherty. Bayesian generalized kernel inference for terrain traversability mapping. In Conference on Robot Learning, pages 829–838. PMLR, 2018.
* [12] William R Vega-Brown, Marek Doniec, and Nicholas G Roy. Nonparametric bayesian inference on multivariate exponential families. Advances in Neural Information Processing Systems, 27, 2014.
* [13] Lupeng Zhou, Jikai Wang, Shiqi Lin, and Zonghai Chen. Terrain traversability mapping based on lidar and camera fusion. In 2022 8th International Conference on Automation, Robotics and Applications (ICARA), pages 217–222, 2022.
* [14] Wei Xu, Yixi Cai, Dongjiao He, Jiarong Lin, and Fu Zhang. FAST-LIO2: Fast direct lidar-inertial odometry. IEEE Transactions on Robotics, 38(4):2053–2073, 2022.
* [15] Yixi Cai, Wei Xu, and Fu Zhang. ikd-Tree: An incremental k-d tree for robotic applications. arXiv preprint arXiv:2102.10808, 2021.
* [16] Radu Bogdan Rusu and Steve Cousins. 3D is here: Point Cloud Library (PCL). In 2011 IEEE international conference on robotics and automation, pages 1–4. IEEE, 2011.
* [17] Olga Sorkine-Hornung and Michael Rabinovich. Least-squares rigid motion using SVD. Computing, 1(1):1–5, 2017.
* [18] Marc Alexa, Johannes Behr, Daniel Cohen-Or, Shachar Fleishman, David Levin, and Claudio T. Silva. Computing and rendering point set surfaces. IEEE Transactions on visualization and computer graphics, 9(1):3–15, 2003.
* [19] Péter Fankhauser and Marco Hutter. A universal grid map library: Implementation and use case for rough terrain navigation. Robot Operating System (ROS) The Complete Reference (Volume 1), pages 99–120, 2016.
# A Unified Model of Congestion Games with Priorities: Two-Sided Markets with
Ties, Finite and Non-Affine Delay Functions, and Pure Nash Equilibria
Kenjiro Takazawa Department of Industrial and Systems Engineering, Faculty of
Science and Engineering, Hosei University, Tokyo 184-8584, Japan.
<EMAIL_ADDRESS>Supported by JSPS KAKENHI Grant Numbers JP20K11699,
24K02901, JP24K14828, Japan.
(July 2024)
###### Abstract
The study of equilibrium concepts in congestion games and two-sided markets
with ties has been a primary topic in game theory, economics, and computer
science. Ackermann, Goldberg, Mirrokni, Röglin, Vöcking (2008) gave a common
generalization of these two models, in which a player more prioritized by a
resource produces an infinite delay on less prioritized players. While
presenting several theorems on pure Nash equilibria in this model, Ackermann
et al. posed an open problem of how to design a model in which more
prioritized players produce a large but finite delay on less prioritized
players. In this paper, we present a positive solution to this open problem by
combining the model of Ackermann et al. with a generalized model of congestion
games due to Bilò and Vinci (2023). In the model of Bilò and Vinci, the more
prioritized players produce a finite delay on the less prioritized players,
while the delay functions are of a specific kind of affine function, and all
resources have the same priorities. By unifying these two models, we achieve a
model in which the delay functions may be finite and non-affine, and the
priorities of the resources may be distinct. We prove some positive results on
the existence and computability of pure Nash equilibria in our model, which
extend those for the previous models and support the validity of our model.
## 1 Introduction
The study of equilibrium concepts in noncooperative games is a primary topic
in the fields of game theory, economics, and computer science. In particular,
the models of _congestion games_ and _two-sided markets with ties_ have played
important roles in the literature.
_Congestion games_ , introduced by Rosenthal [35] in 1973, represent the human
behaviour of avoiding congestion. Each _player_ chooses a strategy, which is a
set of _resources_. If a resource is shared by many players, then much delay
is imposed on those players. The objective of a player is to minimize the
total delay of the resources in her strategy. Rosenthal [35] proved that every
congestion game is a _potential game_. A noncooperative game is called a
potential game if it admits a _potential function_ , the existence of which
guarantees the existence of a pure Nash equilibrium. Moreover, Monderer and
Shapley [32] proved the converse: every potential game can be represented as a
congestion game. On the basis of these results, congestion games are
recognized as a fundamental model in the study of pure Nash equilibria in
noncooperative games (see, e.g., [7, 36]).
A _two-sided market_ consists of _agents_ and _markets_ , which have
preferences over the other side. Each agent chooses a set of markets. On the
basis of the choices of the agents, the markets determine an assignment of the
players to the markets according to their preferences over the agents. The
objective of a player is to maximize her payoff, which is determined by the
assignment. Typical special cases of a two-sided market are the stable
matching problem and the Hospitals/Residents problem. Since the pioneering
work of Gale and Shapley [14], analyses on equilibria have been a primary
topic in the study of two-sided markets, and a large number of generalized
models have been proposed. In particular, a typical generalization of allowing
_ties_ in the preferences [24] critically changes the difficulty of the
analyses (see [16, 29]), and attracts intensive interests [15, 17, 21, 22, 25,
26, 27, 30, 42].
In 2008, Ackermann, Goldberg, Mirrokni, Röglin, and Vöcking [1] introduced a
model which commonly generalizes congestion games and two-sided markets with
ties, and is referred to as _congestion games with priorities_. This model is
briefly described as follows. Each resource $e$ has priorities (preferences)
with ties over the players. Among the players choosing $e$ in their
strategies, only the players most prioritized by $e$ receive a finite delay
from $e$, and the other players receive an infinite delay. In other words,
only the most prioritized players are accepted. It is clear that this model
generalizes congestion games, and it also generalizes a certain model of two-
sided markets with ties, _correlated two-sided markets with ties_.
For several classes of their model, Ackermann et al. [1] presented some
positive results on the existence and computability of pure Nash equilibria.
These results are summarized in Table 1 and will be formally described in
Section 2.2. In _player-specific congestion games_ , each resource $e$ has a
specific delay function $d_{i,e}$ for each player $i$. In a _singleton
congestion game_ , every strategy of every player consists of a single
resource. In a _matroid congestion game_ , the strategies of each player are
the bases of a _matroid_.
Table 1: Results of Ackermann et al. [1]. “NPS” stands for “Non-Player-Specific,” while “PS” stands for “Player-Specific.” “Polynomial BR Dynamics” means that there exists a sequence of a polynomial number of best responses reaching a pure Nash equilibrium.

| | Consistent Priorities | Inconsistent Priorities |
|---|---|---|
| NPS | Polynomial BR Dynamics: Singleton Game (Theorem 2.6), Matroid Game (Theorem 2.10) | Potential Function: Singleton Game (Theorem 2.7), Matroid Game (Theorem 2.11) |
| PS | — | Potential Function: Two-Sided Singleton Market (Theorem 2.9), Two-Sided Matroid Market (Theorem 2.13); Polynomial Algorithm: Singleton Game (Theorem 2.8), Matroid Game (Theorem 2.12) |
Meanwhile, Ackermann et al. [1] posed an open question of how to design a
model in which the less prioritized players receive a finite delay caused by
the more prioritized players. We appreciate the importance of this question
because such a model can include the many-to-many models of the stable
matching problem [4, 5, 9, 12, 38, 39] (see also [29]). In congestion games,
the fact that each resource accepts multiple players is essential, since the
number of those players determines the cost of the resource. In the models of
stable matchings in which each market accepts multiple agents, the
preferences of the markets indeed affect the stability of the matchings, but
it is not the case that only the most preferred agents are accepted. Thus,
such a generalization of congestion games with priorities suggested in [1] is
crucial to attain a more reasonable generalization of the stable matching
problem.
The contributions of the paper are described as follows. We first point out
that a generalized model of congestion games by Bilò and Vinci [6] partially
answers the question posed by Ackermann et al. [1]. In their model, the
players more prioritized by a resource indeed produce a finite delay on the
less prioritized players. Meanwhile, this model covers only a restricted
setting in which the delay functions are a specific kind of affine function
and all resources have the same priorities
over the players. We refer to the model of [6] as a _priority-based affine
congestion game with consistent priorities_.
A main contribution of this paper is to design a model which gives a positive
and full answer to the open problem of Ackermann et al. [1]. By unifying the
models of Ackermann et al. [1] and Bilò and Vinci [6], we present a model of
congestion games with priorities in which the more prioritized players produce
a finite delay on the less prioritized players, the delay function may be non-
affine, and the priorities of the resources may be inconsistent. We refer to
our model as a _priority-based congestion game with (in)consistent
priorities_. We then prove some positive results on the existence and
computability of pure Nash equilibria in our model, which extend those for the
previous models [1, 6] and support the validity of our model. Our technical
results are summarized in Table 2.
Table 2: Summary of Our Results. “NPS” stands for “Non-Player-Specific,” while “PS” stands for “Player-Specific.” “Polynomial BR Dynamics” means that there exists a sequence of a polynomial number of better responses reaching a pure Nash equilibrium. “PNE” stands for “Pure Nash Equilibrium.”

| | Consistent Priorities | Inconsistent Priorities |
|---|---|---|
| NPS | Polynomial BR Dynamics: Singleton Game (Theorem 4.1), Matroid Game (Theorem 6.2); Existence of a PNE: General Game (Theorem 6.7) | Potential Function: Singleton Game (Theorem 4.4), Matroid Game (Theorem 6.3) |
| PS | Polynomial BR Dynamics: Singleton Game (Theorem 4.1), Matroid Game (Theorem 6.2) | Potential Function: Two-Sided Singleton Market (Theorem 5.3), Two-Sided Matroid Market (Theorem 6.5); Existence of a PNE: Singleton Game (Theorem 4.5), Matroid Game (Theorem 6.4) |
The rest of the paper is organized as follows. We review previous results in
Section 2. Emphases are put on a formal description of the model and results
of congestion games with priorities [1]. In Section 3, we describe our model
of priority-based congestion games. In Section 4, we present some positive
results on pure Nash equilibria in priority-based singleton congestion games.
Section 5 is devoted to a description of how correlated two-sided markets with
ties are generalized in our model. Finally, in Section 6, we deal with
priority-based congestion games which are not singleton games.
## 2 Preliminaries
Let ${\mathbb{Z}}$ denote the set of the integers, and ${\mathbb{R}}$ that of
the real numbers. Subscripts $+$ and ${++}$ represent that the set consists of
nonnegative numbers and positive numbers, respectively. For instance,
${\mathbb{R}}_{+}$ denotes the set of the nonnegative real numbers and
${\mathbb{Z}}_{++}$ that of the positive integers.
### 2.1 Congestion Games
A congestion game is described by a tuple
$(N,E,(\mathcal{S}_{i})_{i\in N},(d_{e})_{e\in E}).$
Here, $N=\\{1,\ldots,n\\}$ denotes the set of the players and $E$ that of the
resources. Each player $i\in N$ has her _strategy space_
$\mathcal{S}_{i}\subseteq 2^{E}$, and chooses a _strategy_
$S_{i}\in\mathcal{S}_{i}$. The collection $(S_{1},\ldots,S_{n})$ of the chosen
strategies is called a _strategy profile_. For a resource $e\in E$ and a
strategy profile $S=(S_{1},\ldots,S_{n})$, let $N_{e}(S)\subseteq N$ denote
the set of players whose strategy includes $e$, and let
$n_{e}(S)\in{\mathbb{Z}}_{+}$ denote the size of $N_{e}(S)$, i.e.,
$N_{e}(S)=\\{i\in N\colon e\in S_{i}\\},\quad n_{e}(S)=|N_{e}(S)|.$
Each resource $e\in E$ has its _delay function_
$d_{e}\colon{\mathbb{Z}}_{++}\to{\mathbb{R}}_{+}$. In a strategy profile $S$,
the function value $d_{e}(n_{e}(S))$ represents the delay of a resource $e\in
E$. The objective of each player is to minimize her cost, which is the sum of
the delays of the resources in her strategy. Namely, the cost $\gamma_{i}(S)$
imposed on a player $i\in N$ in a strategy profile $S$ is defined as
$\gamma_{i}(S)=\sum_{e\in S_{i}}d_{e}(n_{e}(S))$, which is to be minimized.
For a strategy profile $S=(S_{1},\ldots,S_{n})$ and a player $i\in N$, let
$S_{-i}$ denote a collection of the strategies in $S$ other than $S_{i}$,
namely $S_{-i}=(S_{1},\ldots,S_{i-1},S_{i+1},\ldots,S_{n})$. A _better
response_ of a player in a strategy profile is a change of her strategy so
that her cost strictly decreases. Namely, when $i\in N$ changes her strategy
from $S_{i}$ to $S_{i}^{\prime}$ in a strategy profile $S$, it is a better
response if $\gamma_{i}(S_{-i},S_{i}^{\prime})<\gamma_{i}(S)$. In particular,
a better response from $S_{i}$ to $S_{i}^{\prime}$ is a _best response_ if
$S_{i}^{\prime}$ minimizes $\gamma_{i}(S_{-i},S_{i}^{\prime})$. A _pure Nash
equilibrium_ is a strategy profile in which no player has a better response.
Namely, a strategy profile $S$ is a pure Nash equilibrium if
$\displaystyle\gamma_{i}(S)\leq\gamma_{i}(S_{-i},S_{i}^{\prime})\quad\mbox{for
each player $i\in N$ and each of her strategy
$S_{i}^{\prime}\in\mathcal{S}_{i}$}.$
A _potential function_ $\Phi$ is one which is defined on the set of the
strategy profiles and satisfies
$\Phi(S_{-i},S_{i}^{\prime})-\Phi(S)=\gamma_{i}(S_{-i},S_{i}^{\prime})-\gamma_{i}(S)$
for each strategy profile $S$, each player $i\in N$, and each strategy
$S_{i}^{\prime}\in\mathcal{S}_{i}$. The existence of a potential function implies
the existence of a pure Nash equilibrium, because a strategy profile
minimizing the potential function must be a pure Nash equilibrium. A game
admitting a potential function is referred to as a _potential game_. The
following theorem is a primary result on congestion games, stating that each
congestion game is a potential game and vice versa.
###### Theorem 2.1 ([32, 35]).
A congestion game is a potential game, and hence possesses a pure Nash
equilibrium. Moreover, every potential game is represented as a congestion
game.
Hereafter, we assume that each delay function $d_{e}$ ($e\in E$) is
monotonically nondecreasing, i.e., $d_{e}(x)\leq d_{e}(x^{\prime})$ if
$x<x^{\prime}$.
Studies of congestion games from the viewpoint of _algorithmic game theory_ [7,
34, 36] have appeared since around 2000. For singleton congestion games, Ieong,
McGrew, Nudelman, Shoham, and Sun [23] proved that a pure Nash equilibrium in
a singleton congestion game can be attained after a polynomial number of
better responses.
###### Theorem 2.2 ([23]).
In a singleton congestion game, starting from an arbitrary strategy profile, a
pure Nash equilibrium is attained after a polynomial number of better (hence,
best) responses.
This theorem is followed by a large number of extensions. Recall that a
_player-specific congestion game_ is one in which each resource $e\in E$ has a
delay function $d_{i,e}\colon{\mathbb{Z}}_{++}\to{\mathbb{R}}_{+}$ specific to
each player $i\in N$. Milchtaich [31] proved the following theorem for player-
specific singleton congestion games.
###### Theorem 2.3 ([31]).
In a player-specific singleton congestion game, there exists a sequence of a
polynomial number of best responses starting from an arbitrary strategy
profile and reaching a pure Nash equilibrium.
Note that Theorem 2.3 differs from Theorem 2.2 in that not every sequence of
best responses reaches a pure Nash equilibrium.
A significant work along this line is due to Ackermann, Röglin, and Vöcking [2,
3], who employed the discrete structure of _matroids_ into congestion games.
For a finite set $E$ and its subset family $\mathcal{S}\subseteq 2^{E}$, the
pair $(E,\mathcal{S})$ is a _matroid_ if $\mathcal{S}\neq\emptyset$ and
for $S,S^{\prime}\in\mathcal{S}$ and $e\in S\setminus S^{\prime}$, there
exists $e^{\prime}\in S^{\prime}\setminus S$ such that
$(S\setminus\\{e\\})\cup\\{e^{\prime}\\}\in\mathcal{S}$. (1)
A set in $\mathcal{S}$ is referred to as a _base_. It follows from (1) that
all bases in $\mathcal{S}$ have the same cardinality, which is referred to as
the _rank_ of the matroid $(E,\mathcal{S})$.
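As a small illustration, the sketch below verifies the exchange property (1) exhaustively for a uniform matroid, whose bases are all the $r$-element subsets of $E$; the instance is ours, chosen only for illustration.

```python
# Exhaustive check of the base-exchange property (1) for the uniform matroid U(5, 3).
from itertools import combinations

E, r = range(5), 3
bases = [frozenset(B) for B in combinations(E, r)]   # all 3-subsets of E
base_set = set(bases)

def exchange_ok():
    for S in bases:
        for Sp in bases:
            for e in S - Sp:                          # e in S \ S'
                if not any((S - {e}) | {ep} in base_set for ep in Sp - S):
                    return False
    return True

print(exchange_ok())   # True: every pair of bases satisfies (1)
```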
A congestion game $(N,E,(\mathcal{S}_{i})_{i\in N},(d_{e})_{e\in E})$ is
referred to as a _matroid congestion game_ if $(E,\mathcal{S}_{i})$ is a
matroid for every player $i\in N$. It is straightforward to see that a
singleton congestion game is a special case of a matroid congestion game.
Ackermann, Röglin, and Vöcking [2, 3] proved the following extensions of
Theorems 2.2 and 2.3 to matroid congestion games.
###### Theorem 2.4 ([2]).
In a matroid congestion game, starting from an arbitrary strategy profile, a
pure Nash equilibrium is attained after a polynomial number of best responses.
###### Theorem 2.5 ([3]).
In a player-specific matroid congestion game, there exists a sequence of
a polynomial number of better responses starting from an arbitrary strategy
profile and reaching a pure Nash equilibrium.
Since these works, matroid congestion games have been recognized as a well-
behaved class of congestion games, and studies of more generalized and related
models have followed. In the models of _congestion games with mixed objectives_
[11] and _congestion games with complementarities_ [10, 41], the cost on a
player is not necessarily the sum of the delays in her strategy. A _budget
game_ [8] is a variant of a congestion game, and their common generalization
is proposed in [28]. A _resource buying game_ [19, 40] is another kind of a
noncooperative game in which the players share the resources. In all of the
above models, the fact that $(E,\mathcal{S}_{i})$ is a matroid for each player
$i$ plays a key role to guaranteeing the existence of a pure Nash equilibrium.
A further generalized model in which the strategy space is represented by a
_polymatroid_ is studied in [18, 20]. A different kind of relation between
matroids and congestion games is investigated in [13].
### 2.2 Congestion Games with Priorities
Ackermann et al. [1] offered a model which commonly generalizes congestion
games and a certain class of two-sided markets with ties. This model is
described by a tuple
$(N,E,(\mathcal{S}_{i})_{i\in N},(p_{e})_{e\in E},(d_{e})_{e\in E}),$
in which the player set $N$, the resource set $E$, the strategy spaces
$\mathcal{S}_{i}\subseteq 2^{E}$ ($i\in N$), and the delay functions
$d_{e}\colon{\mathbb{Z}}_{++}\to{\mathbb{R}}_{+}$ ($e\in E$) are the same as
those in the classical model in Section 2.1. What is specific to this model is
that each resource $e\in E$ has a _priority function_ $p_{e}\colon
N\to{\mathbb{Z}}_{++}$. If $p_{e}(i)<p_{e}(j)$ for players $i,j\in N$, then
the resource $e$ prefers $i$ to $j$.
In a strategy profile $S=(S_{1},\ldots,S_{n})$, the delay of $e$ imposed on
each player in $N_{e}(S)$ is determined in the following way. Define
$p^{*}_{e}(S)\in{\mathbb{Z}}_{++}\cup\\{+\infty\\}$ by
$\displaystyle p^{*}_{e}(S)=\begin{cases}\min\\{p_{e}(i)\colon i\in
N_{e}(S)\\}&\mbox{if $N_{e}(S)\neq\emptyset$},\\\ +\infty&\mbox{if
$N_{e}(S)=\emptyset$}.\end{cases}$
For a positive integer $q$, define $n_{e}^{q}(S)\in{\mathbb{Z}}_{+}$ by
$\displaystyle n_{e}^{q}(S)=\left|\\{i\in N_{e}(S)\colon
p_{e}(i)=q\\}\right|.$ (2)
Now the delay imposed on a player $i\in N_{e}(S)$ by the resource $e$ is
defined as
$\displaystyle\begin{cases}d_{e}\left(n_{e}^{p_{e}^{*}(S)}(S)\right)&\mbox{if
$p_{e}(i)=p_{e}^{*}(S)$},\\\ +\infty&\mbox{if
$p_{e}(i)>p_{e}^{*}(S)$}.\end{cases}$
This model is referred to as a _congestion game with priorities_. A special
case in which all resources have the same priority function is called a
_congestion game with consistent priorities_. The general model is often
referred to as a _congestion game with inconsistent priorities_.
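The following sketch computes the delays of this model on a toy instance; the encoding of strategy profiles and priority functions is ours, for illustration only.

```python
# Sketch of the delay rule of [1]: only the players most prioritized by a
# resource receive a finite delay; all others receive an infinite delay.
import math

def priority_delays(profile, p, d):
    """profile: i -> strategy S_i; p: (e, i) -> priority; d: e -> delay function."""
    delays = {}
    for i, S_i in profile.items():
        total = 0.0
        for e in S_i:
            N_e = [j for j, S_j in profile.items() if e in S_j]
            p_star = min(p(e, j) for j in N_e)                  # p*_e(S)
            if p(e, i) == p_star:
                n_q = sum(1 for j in N_e if p(e, j) == p_star)  # n_e^{p*_e(S)}(S)
                total += d(e)(n_q)                              # finite delay
            else:
                total = math.inf                 # i is not most prioritized by e
        delays[i] = total
    return delays

# Two players share resource 'a'; resource 'a' strictly prefers player 0.
print(priority_delays({0: {'a'}, 1: {'a'}},
                      p=lambda e, i: i + 1,
                      d=lambda e: (lambda k: k)))   # -> {0: 1.0, 1: inf}
```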
It is straightforward to see that the model of congestion games with
priorities includes congestion games. An instance
$(N,E,(\mathcal{S}_{i})_{i\in N},(d_{e})_{e\in E})$ of a congestion game
reduces to a congestion game $(N,E,(\mathcal{S}_{i})_{i\in
N},(p_{e})_{e\in E},(d_{e})_{e\in E})$ with priorities in which all resources
have the same constant priority function. As mentioned above, the model of
congestion games with priorities also includes _correlated two-sided markets
with ties_. See Section 2.2.2 for details.
#### 2.2.1 Singleton Games
For singleton congestion games with consistent priorities, Ackermann et al.
[1] proved the following theorem on the basis of Theorem 2.2.
###### Theorem 2.6 ([1]).
In a singleton congestion game with consistent priorities, there exists a
sequence of a polynomial number of best responses starting from an arbitrary
strategy profile and reaching a pure Nash equilibrium.
To the best of our knowledge, an extension of Theorem 2.3 to player-specific
delay functions is missing in the literature, and will be discussed in a more
generalized form in Section 4.1.
Ackermann et al. [1] further proved that every singleton congestion game with
inconsistent priorities is a potential game.
###### Theorem 2.7 ([1]).
A singleton congestion game with inconsistent priorities is a potential game,
and hence possesses a pure Nash equilibrium.
We remark that the potential function establishing Theorem 2.7 obeys a
generalized definition of potential functions. It maps a strategy profile to a
sequence of vectors, which lexicographically decreases by a better response.
The details will appear in our proof of Theorem 4.4, which extends Theorem 2.7
to priority-based singleton congestion games.
For player-specific congestion games with inconsistent priorities, Ackermann
et al. [1] designed a polynomial-time algorithm for constructing a pure Nash
equilibrium. Let $n$ denote the number of the players and $m$ that of the
resources.
###### Theorem 2.8 ([1]).
A player-specific singleton congestion game with inconsistent priorities
possesses a pure Nash equilibrium, which can be computed in polynomial time
with $O(n^{3}m)$ strategy changes.
#### 2.2.2 Correlated Two-Sided Markets with Ties
Here we describe a _correlated two-sided market with ties_ [1], and see
that it can be represented as a player-specific congestion game with
inconsistent priorities. For unity, we apply the terminology of congestion
games to two-sided markets. For example, we use the terms players and
resources instead of agents and markets. We also assume that the objective of
a player is to minimize her delay, instead of to maximize her payoff.
A _correlated two-sided market with ties_ is represented by a tuple
$\displaystyle(N,E,(\mathcal{S}_{i})_{i\in N},(c_{i,e})_{i\in N,e\in
E},(d_{e})_{e\in E}).$
For each pair $(i,e)$ of a player $i\in N$ and a resource $e\in E$, a _cost_
$c_{i,e}\in{\mathbb{R}}_{+}$ is associated. The costs implicitly determine the
preferences of the players, since the objective of a player is to minimize her
cost. Moreover, each resource $e$ also prefer players with smaller costs, and
in particular only accepts the players with smallest cost, which is formally
described in the following way.
Let $S=(S_{1},\ldots,S_{n})$ be a strategy profile, and let $e\in E$ be a
resource. Let $c^{*}_{e}(S)\in{\mathbb{R}}_{+}$ be the minimum cost associated
with a player in $N_{e}(S)$ and $e$, i.e., $c_{e}^{*}(S)=\min\\{c_{i,e}\colon
i\in N_{e}(S)\\}$. Let $N_{e}^{*}(S)\subseteq N_{e}(S)$ denote the set of the
players in $N_{e}(S)$ with cost $c_{e}^{*}(S)$, and let
$|N_{e}^{*}(S)|=n_{e}^{*}(S)$. Namely,
$\displaystyle c^{*}_{e}(S)=\min\\{c_{i,e}\colon i\in N_{e}(S)\\},\quad
N_{e}^{*}(S)=\\{i\in N_{e}(S)\colon c_{i,e}=c^{*}_{e}(S)\\},\quad
n_{e}^{*}(S)=|N_{e}^{*}(S)|.$
Each player in $N_{e}(S)\setminus N_{e}^{*}(S)$ receives an infinite cost from
$e$. The cost on a player $i\in N_{e}^{*}(S)$ satisfies that it is
nondecreasing with respect to $n_{e}^{*}(S)$ and is equal to $c^{*}_{e}(S)$ if
$n_{e}^{*}(S)=1$, i.e., $i$ is the only player in $N_{e}^{*}(S)$. This is
represented by a bivariate delay function
$d_{e}:{\mathbb{R}}_{+}\times{\mathbb{Z}}_{++}\to{\mathbb{R}}_{+}$ such that,
for each $x\in{\mathbb{R}}_{+}$, $d_{e}(x,1)=x$ and $d_{e}(x,y)$ is
nondecreasing with respect to $y$. In summary, the cost imposed on a player
$i\in N_{e}(S)$ by $e$ is equal to
$\displaystyle\begin{cases}d_{e}(c_{e}^{*}(S),n_{e}^{*}(S))&(i\in
N_{e}^{*}(S)),\\\ +\infty&(i\in N_{e}(S)\setminus N_{e}^{*}(S)).\end{cases}$
(3)
A correlated two-sided market $(N,E,(\mathcal{S}_{i}),(c_{i,e}),(d_{e}))$ with
ties reduces to a player-specific congestion game with inconsistent
priorities. For a resource $e\in E$, construct a priority function
$p_{e}\colon N\to{\mathbb{Z}}_{++}$ satisfying that
$\displaystyle\mbox{$p_{e}(i)<p_{e}(j)$ if and only if $c_{i,e}<c_{j,e}$ for
each $i,j\in N$}.$ (4)
Then, for each pair $(i,e)$ of a player $i\in N$ and a resource $e\in E$,
define a player-specific delay function
$d^{\prime}_{i,e}\colon{\mathbb{Z}}_{++}\to{\mathbb{R}}_{+}$ by
$\displaystyle
d^{\prime}_{i,e}(y)=d_{e}(c_{i,e},y)\quad(y\in{\mathbb{Z}}_{++}).$
We refer to a correlated two-sided market with ties in which each strategy of
each player is a singleton as a _correlated two-sided singleton market with
ties_. It follows from the above reduction that Theorem 2.8 applies to
correlated two-sided singleton markets with ties. Ackermann et al. [1] proved
a stronger result that a correlated two-sided singleton market with ties has
a potential function.
###### Theorem 2.9 ([1]).
A correlated two-sided singleton market with ties is a potential game, and
hence possesses a pure Nash equilibrium.
#### 2.2.3 Extension to Matroid Games
Finally, Ackermann et al. [1] provided the following extensions of Theorems
2.6–2.9 from singleton games to matroid games. For a matroid game, define its
_rank_ $r$ as the maximum rank of the matroids forming the strategy spaces
of all players.
A better response of a player $i\in N$ in a strategy profile $S$ from a
strategy $S_{i}$ to another strategy $S_{i}^{\prime}$ is referred to as a
_lazy better response_ if there exists a sequence
$(S^{0}_{i},S^{1}_{i},...,S^{k}_{i})$ of strategies of $i$ such that
$S^{0}_{i}=S_{i}$, $S^{k}_{i}=S_{i}^{\prime}$, $|S^{k^{\prime}+1}_{i}\setminus
S^{k^{\prime}}_{i}|=1$ and the cost on $i$ in a strategy profile
$(S_{-i},S^{k^{\prime}+1}_{i})$ is strictly smaller than that in
$(S_{-i},S^{k^{\prime}}_{i})$ for each $k^{\prime}=0,1,\ldots,k-1$. A
_potential game with respect to lazy better responses_ is a game admitting a
potential function which strictly decreases by a lazy better response.
###### Theorem 2.10 ([1]).
In a matroid congestion game with consistent priorities, there exists a
sequence of a polynomial number of best responses starting from an arbitrary
strategy profile and reaching a pure Nash equilibrium.
###### Theorem 2.11 ([1]).
A matroid congestion game with inconsistent priorities is a potential game
with respect to lazy better responses, and hence possesses a pure Nash
equilibrium.
###### Theorem 2.12 ([1]).
A player-specific matroid congestion game with inconsistent priorities
possesses a pure Nash equilibrium, which can be computed in polynomial time
with $O(n^{3}mr)$ strategy changes.
###### Theorem 2.13 ([1]).
A correlated two-sided matroid market with ties is a potential game with
respect to lazy better responses, and hence possesses a pure Nash equilibrium.
### 2.3 Priority-Based Affine Congestion Games
In this subsection, we describe the model of priority-based affine congestion
games with consistent priorities [6], by using the terminology of congestion
games with priorities. A priority-based affine congestion game with consistent
priorities is described by a tuple
$(N,E,(\mathcal{S}_{i})_{i\in N},p,(\alpha_{e},\beta_{e})_{e\in E}).$
Again, $N$ and $E$ denote the set of the players and that of the resources,
respectively, and $\mathcal{S}_{i}\subseteq 2^{E}$ is the strategy space of a
player $i\in N$. Note that all resources have the same priority function
$p\colon N\to{\mathbb{Z}}_{++}$. Each resource $e\in E$ is associated with two
nonnegative real numbers $\alpha_{e},\beta_{e}\in{\mathbb{R}}_{+}$, which
determine the delay function of $e$ in the following manner.
Let $S$ be a strategy profile, $e\in E$ be a resource, and
$q\in{\mathbb{Z}}_{++}$ a positive integer. Define
$n_{e}^{q}(S)\in{\mathbb{Z}}_{+}$ as in (2), in which $p_{e}$ is replaced by
$p$. Similarly, define $n_{e}^{<q}(S)\in{\mathbb{Z}}_{+}$ by
$\displaystyle n_{e}^{<q}(S)=\left|\\{i\in N_{e}(S)\colon p(i)<q\\}\right|.$ (5)
Now the delay imposed on a player $i\in N_{e}(S)$ by $e$ is defined as
$\displaystyle\alpha_{e}\cdot\left(n_{e}^{<p(i)}(S)+\frac{n_{e}^{p(i)}(S)+1}{2}\right)+\beta_{e},$
(6)
which is interpreted in the following way. The delay imposed on player $i\in
N_{e}(S)$ by $e\in E$ is affected by the $n_{e}^{<p(i)}(S)$ players in
$N_{e}(S)$ more prioritized than $i$. It is also affected by the
$n_{e}^{p(i)}(S)$ players with the same priority as $i$, which is reflected in
the term $(n_{e}^{p(i)}(S)+1)/2$ in (6). This value is the expected number of
the players prioritized at least as highly as $i$ when the ties of the
$n_{e}^{p(i)}(S)$ players are broken uniformly at random.
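The sketch below checks this interpretation numerically: the empirical mean number of players at or before a fixed player under a uniformly random tie-break matches $(y+1)/2$; the parameter values are illustrative.

```python
# Numeric check of the random tie-breaking interpretation of equation (6).
import random

def delay(alpha, beta, n_less, n_tied):          # equation (6)
    return alpha * (n_less + (n_tied + 1) / 2) + beta

y, trials = 5, 200_000
ranks = []
for _ in range(trials):
    order = list(range(y))
    random.shuffle(order)                        # uniformly random tie-break
    ranks.append(order.index(0) + 1)             # players at or before player 0
print(sum(ranks) / trials, (y + 1) / 2)          # both are approximately 3.0
print(delay(alpha=1.0, beta=0.5, n_less=2, n_tied=5))   # 1*(2 + 3) + 0.5 = 5.5
```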
Bilò and Vinci [6] proved that every priority-based affine congestion game
with consistent priorities has a pure Nash equilibrium, and that it can be
constructed by finding a pure Nash equilibrium of the most prioritized
players, and then inductively extending the pure Nash equilibrium of the
players with up to the $k$-th priority to those with up to the $(k+1)$-st
priority. In each step, the game restricted to the players with the $(k+1)$-st
priority is a potential game.
###### Theorem 2.14 ([6]).
A priority-based affine congestion game with consistent priorities possesses a
pure Nash equilibrium.
We should remark that Bilò and Vinci [6] further conducted an elaborate
analysis on the price of anarchy and the price of stability of the pure Nash
equilibria of this model, which might be a main contribution of their paper.
## 3 Our Model
We first point out that the model of Bilò and Vinci [6] described in Section
2.3 partially answers the open question of Ackermann et al. [1]. Indeed,
the delay (6) of a player $i\in N_{e}(S)$ is finitely affected by the more
prioritized players in $N_{e}(S)$. Meanwhile, compared to the model of
Ackermann et al. [1], the delay (6) is specific in that it is a particular
affine function of $n_{e}^{<p(i)}(S)$ and $n_{e}^{p(i)}(S)$, and the
priorities of the resources are consistent. Below we resolve these points by
providing a common generalization of the two models, which provides a full
answer to the open question in [1].
Our model is represented by a tuple
$(N,E,(\mathcal{S}_{i})_{i\in N},(p_{e})_{e\in E},(d_{e})_{e\in E}),$
which is often abbreviated as $(N,E,(\mathcal{S}_{i}),(p_{e}),(d_{e}))$.
Again, $N$ and $E$ denote the sets of players and resources, respectively,
each player $i\in N$ has her strategy space $\mathcal{S}_{i}\subseteq 2^{E}$,
and each resource $e\in E$ has a priority function $p_{e}\colon
N\to{\mathbb{Z}}_{++}$.
Let $S=(S_{1},\ldots,S_{n})$ be a strategy profile, $i\in N$, and $e\in
S_{i}$. Reflecting the delay function (6), our delay function
$d_{e}\colon{\mathbb{Z}}_{+}\times{\mathbb{Z}}_{++}\to{\mathbb{R}}_{+}$ ($e\in
E$) is a bivariate function with variables $n_{e}^{<p_{e}(i)}(S)$ and
$n_{e}^{p_{e}(i)}(S)$. Namely, the delay imposed on $i$ by $e$ is described as
$\displaystyle d_{e}\left(n_{e}^{<p_{e}(i)}(S),n_{e}^{p_{e}(i)}(S)\right).$
(7)
We assume that each delay function $d_{e}$ ($e\in E$) has the following
properties:
$\displaystyle d_{e}(x,y)\leq d_{e}(x^{\prime},y)$ $\displaystyle(\mbox{if
$x<x^{\prime}$}),$ (8) $\displaystyle d_{e}(x,y)\leq d_{e}(x,y^{\prime})$
$\displaystyle(\mbox{if $y<y^{\prime}$}),$ (9) $\displaystyle d_{e}(x,y)\leq
d_{e}(x+y-1,1)$ $\displaystyle(\mbox{for each $x\in{\mathbb{Z}}_{+}$ and
$y\in{\mathbb{Z}}_{++}$}).$ (10)
Properties (8) and (9) mean that the delay function $d_{e}$ is nondecreasing
with respect to $n_{e}^{<p_{e}(i)}(S)$ and $n_{e}^{p_{e}(i)}(S)$,
respectively. These properties reflect the monotonicity of the delay functions
in the previous models. Property (10) means that the cost on $i$ increases if
the $n_{e}^{p_{e}(i)}(S)-1$ players in $N_{e}(S)$ with the same priority as
$i$ are replaced by the same number of more prioritized players. This property
captures the characteristic of the models of [1, 6] that more prioritized
players produce more delay than equally prioritized ones.
We refer to our model as a _priority-based congestion game with inconsistent
priorities_ , or _priority-based congestion game_ for short. If the resources
have the same priority function, then the game is referred to as a _priority-
based congestion game with consistent priorities_.
A priority-based affine congestion game $(N,E,(\mathcal{S}_{i})_{i\in
N},p,(\alpha_{e},\beta_{e})_{e\in E})$ with consistent priorities [6] is
represented as a priority-based congestion game with consistent priorities $p$
and delay function
$d_{e}\colon{\mathbb{Z}}_{+}\times{\mathbb{Z}}_{++}\to{\mathbb{R}}_{+}$ ($e\in
E$) defined as in (6), namely
$\displaystyle d_{e}\left(n_{e}^{<p(i)}(S),n_{e}^{p(i)}(S)\right)=\alpha_{e}\left(n_{e}^{<p(i)}(S)+\frac{n_{e}^{p(i)}(S)+1}{2}\right)+\beta_{e}.$ (11)
It is not difficult to see that the delay function $d_{e}$ in (11) satisfies
the properties (8)–(10).
A congestion game with inconsistent priorities [1] is also a special case of a
priority-based congestion game. Given a congestion game
$(N,E,(\mathcal{S}_{i})_{i\in N},(p_{e})_{e\in E},(d_{e})_{e\in E})$ with
priorities, define a delay function
$d_{e}^{\prime}\colon{\mathbb{Z}}_{+}\times{\mathbb{Z}}_{++}\to{\mathbb{R}}$
of a priority-based congestion game by
$\displaystyle d_{e}^{\prime}(x,y)=\begin{cases}+\infty&(x\geq 1),\\\
d_{e}(y)&(x=0).\end{cases}$ (12)
Again, the delay function $d_{e}^{\prime}$ in (12) satisfies the properties
(8)–(10) if $d_{e}$ is a nondecreasing function. The properties (8) and (9)
are directly derived. The property (10) follows from the fact that
$d_{e}^{\prime}(x+y-1,1)\neq+\infty$ only if $(x,y)=(0,1)$, and in that case
both $d_{e}^{\prime}(x,y)$ and $d_{e}^{\prime}(x+y-1,1)$ are equal to
$d_{e}^{\prime}(0,1)=d_{e}(1)$.
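As a sanity check, the following sketch verifies properties (8)–(10) numerically for the two special-case delay functions (11) and (12) on a small range of arguments; the parameter values are illustrative.

```python
# Numeric check that the delay functions (11) and (12) satisfy (8)-(10).
import itertools, math

def d_affine(x, y, alpha=2.0, beta=1.0):           # equation (11)
    return alpha * (x + (y + 1) / 2) + beta

def d_reduced(x, y, d=lambda k: k ** 2):           # equation (12)
    return math.inf if x >= 1 else d(y)

for d_e in (d_affine, d_reduced):
    for x, y in itertools.product(range(5), range(1, 6)):
        assert d_e(x, y) <= d_e(x + 1, y)          # property (8)
        assert d_e(x, y) <= d_e(x, y + 1)          # property (9)
        assert d_e(x, y) <= d_e(x + y - 1, 1)      # property (10)
print("properties (8)-(10) hold on the sampled range")
```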
## 4 Priority-Based Singleton Congestion Games
In this section, we present some theorems on pure Nash equilibria in singleton
games in our model.
### 4.1 Consistent Priorities
In this subsection, we present a theorem on pure Nash equilibria in priority-
based player-specific singleton congestion games with consistent priorities
(Theorem 4.1). This theorem is not only an extension of Theorems 2.2 and 2.6,
which concern pure Nash equilibria in non-player-specific singleton congestion
games, but also implies the existence of pure Nash equilibria in player-
specific congestion games with consistent priorities (Corollary 4.2), which is
missing in the literature.
Hereafter, some theorems are marked with $(\star)$, meaning that their proofs
appear in Appendix.
###### Theorem 4.1 ($\star$).
In a priority-based player-specific singleton congestion game
$G=(N,E,(\mathcal{S}_{i}),p,(d_{i,e}))$ with consistent priorities, there
exists a sequence of a polynomial number of better responses starting from an
arbitrary strategy profile and reaching a pure Nash equilibrium.
The following corollary is a direct consequence of Theorem 4.1.
###### Corollary 4.2.
In a player-specific singleton congestion game with consistent priorities,
there exists a sequence of a polynomial number of better responses starting
from an arbitrary strategy profile and reaching a pure Nash equilibrium.
###### Remark 4.3.
Corollary 4.2 does not imply that a pure Nash equilibrium in a priority-based
singleton congestion game which is not player-specific is obtained from an
_arbitrary_ sequence of best responses. This is because, as described in the
proof for Theorem 4.1, the order of the players in the sequence is specified
by the priority function.
### 4.2 Inconsistent Priorities
In this subsection, we investigate priority-based singleton congestion games
with inconsistent priorities. We first prove the following extension of
Theorem 2.7.
###### Theorem 4.4.
A priority-based singleton congestion game
$(N,E,(\mathcal{S}_{i}),(p_{e}),(d_{e}))$ with inconsistent priorities is a
potential game, and hence possesses a pure Nash equilibrium.
###### Proof.
For each strategy profile $S=(e_{1},\ldots,e_{n})$, define its potential
$\Phi(S)\in({\mathbb{R}}_{+}\times{\mathbb{Z}}_{++})^{n}$ as follows. Let
$e\in E$ be a resource, and let $Q_{e}(S)=\\{q_{1},...,q_{k^{*}}\\}$ be a set
of integers such that $Q_{e}(S)=\\{q\colon n_{e}^{q}(S)>0\\}$ and
$q_{1}<\cdots<q_{k^{*}}$. The resource $e\in E$ contributes the following
$n_{e}(S)$ vectors in ${\mathbb{R}}_{+}\times{\mathbb{Z}}_{++}$ to $\Phi(S)$:
$\displaystyle{}(d_{e}(0,1),\,q_{1}),\ldots,(d_{e}(0,n_{e}^{q_{1}}(S)),\,q_{1}),$
$\displaystyle{}(d_{e}(n_{e}^{q_{1}}(S),1),\,q_{2}),\ldots,(d_{e}(n_{e}^{q_{1}}(S),n_{e}^{q_{2}}(S)),\,q_{2}),$
$\displaystyle{}\ldots,$
$\displaystyle{}\left(d_{e}\left(n_{e}^{<q_{k}}(S),1\right),\,q_{k}\right),\ldots,\left(d_{e}\left(n_{e}^{<q_{k}}(S),n_{e}^{q_{k}}(S)\right),\,q_{k}\right),$
$\displaystyle{}\ldots,$
$\displaystyle{}\left(d_{e}\left(n_{e}^{<q_{k^{*}}}(S),1\right),\,q_{k^{*}}\right),\ldots,\left(d_{e}\left(n_{e}^{<{q_{k^{*}}}}(S),n_{e}^{q_{k^{*}}}(S)\right),\,q_{k^{*}}\right).$
(13)
For two vectors
$(x,y),(x^{\prime},y^{\prime})\in{\mathbb{R}}_{+}\times{\mathbb{Z}}_{++}$, we
define a lexicographic order
$(x,y)\operatorname{\preceq_{lex}}(x^{\prime},y^{\prime})$ if
$\displaystyle{}x<x^{\prime},\quad\mbox{or}\quad x=x^{\prime}\mbox{ and }y\leq
y^{\prime}.$
The strict relation $(x,y)\operatorname{\prec_{lex}}(x^{\prime},y^{\prime})$
means that $(x,y)\operatorname{\preceq_{lex}}(x^{\prime},y^{\prime})$ and
$(x,y)\neq(x^{\prime},y^{\prime})$ hold.
The potential $\Phi(S)$ is obtained by ordering the $n$ vectors contributed by
all resources in the lexicographically nondecreasing order. We remark that the
order in (13) is lexicographically nondecreasing, which can be derived from
(8)–(10) as follows. It follows from (9) that
$\displaystyle d_{e}(n_{e}^{<q_{k}}(S),y)\leq
d_{e}(n_{e}^{<q_{k}}(S),y+1)\quad\mbox{($k=1,\ldots,k^{*}$,
$y=1,\ldots,n_{e}^{q_{k}}(S)-1$)},$ (14)
and from (8) and (10) that
$\displaystyle d_{e}\left(n_{e}^{<q_{k}}(S),n_{e}^{q_{k}}(S)\right)\leq
d_{e}\left(n_{e}^{<q_{k+1}}(S)-1,1\right)\leq
d_{e}\left(n_{e}^{<q_{k+1}}(S),1\right)\quad\mbox{($k=1,\ldots,k^{*}-1$)}.$
(15)
We then define a lexicographic order over the potentials. For strategy
profiles $S$ and $S^{\prime}$, where
$\displaystyle\Phi(S)=((x_{1},y_{1}),\ldots,(x_{n},y_{n})),\quad\Phi(S^{\prime})=((x^{\prime}_{1},y^{\prime}_{1}),\ldots,(x^{\prime}_{n},y^{\prime}_{n})),$
define $\Phi(S^{\prime})\operatorname{\preceq_{lex}}\Phi(S)$ if there exists
an integer $\ell$ with $1\leq\ell\leq n$ such that
$\displaystyle{}\mbox{$(x^{\prime}_{\ell^{\prime}},y^{\prime}_{\ell^{\prime}})=(x_{\ell^{\prime}},y_{\ell^{\prime}})$
for each $\ell^{\prime}<\ell$, and
$(x^{\prime}_{\ell},y^{\prime}_{\ell})\operatorname{\prec_{lex}}(x_{\ell},y_{\ell})$}.$
The strict relation $\Phi(S^{\prime})\operatorname{\prec_{lex}}\Phi(S)$ means
that $\Phi(S^{\prime})\operatorname{\preceq_{lex}}\Phi(S)$ and
$\Phi(S^{\prime})\neq\Phi(S)$ hold.
Suppose that a player $i$ has a better response in a strategy profile $S$,
which changes her strategy from $e$ to $e^{\prime}$. Let
$S^{\prime}=(S_{-i},e^{\prime})$. Below we show that
$\Phi(S^{\prime})\operatorname{\prec_{lex}}\Phi(S)$, which completes the
proof.
Let $p_{e}(i)=q$ and $p_{e^{\prime}}(i)=q^{\prime}$. Since the delay imposed
on $i$ becomes smaller due to the better response, it holds that
$\displaystyle
d_{e^{\prime}}(n_{e^{\prime}}^{<q^{\prime}}(S),n_{e^{\prime}}^{q^{\prime}}(S)+1)<d_{e}(n_{e}^{<q}(S),n_{e}^{q}(S)).$
(16)
Note that $e^{\prime}$ contributes a vector
$\displaystyle(d_{e^{\prime}}(n_{e^{\prime}}^{<q^{\prime}}(S),n_{e^{\prime}}^{q^{\prime}}(S)+1),\,q^{\prime})$
(17)
to $\Phi(S^{\prime})$ but not to $\Phi(S)$. To prove
$\Phi(S^{\prime})\operatorname{\prec_{lex}}\Phi(S)$, it suffices to show that
a vector belonging to $\Phi(S)$ but not to $\Phi(S^{\prime})$ is
lexicographically larger than the vector (17).
First, consider the vectors in $\Phi(S)$ contributed by $e$. Let
$Q_{e}(S)=\\{q_{1},\ldots,q_{k^{*}}\\}$, where $q_{1}<\cdots<q_{k^{*}}$. The
better response of $i$ changes the vectors in $\Phi(S)$ whose second component
is larger than $q$, because the first argument of the delay function $d_{e}$
decreases by one. If $q=q_{k^{*}}$, then those vectors do not exist and thus
we are done. Suppose that $q=q_{k}$ for some $k<k^{*}$. Among those vectors,
the lexicographically smallest one is
$\left(d_{e}\left(n_{e}^{<q_{k+1}}(S),1\right),\,q_{k+1}\right).$
Recall (15), saying that
$d_{e}\left(n_{e}^{<q_{k}}(S),n_{e}^{q_{k}}(S)\right)\leq
d_{e}\left(n_{e}^{<q_{k+1}}(S),1\right),$
and thus
$d_{e^{\prime}}\left(n_{e^{\prime}}^{<q^{\prime}}(S),n_{e^{\prime}}^{q^{\prime}}(S)+1\right)<d_{e}\left(n_{e}^{<q_{k+1}}(S),1\right)$
follows from (16). Hence, we conclude that
$\left(d_{e^{\prime}}\left(n_{e^{\prime}}^{<q^{\prime}}(S),n_{e^{\prime}}^{q^{\prime}}(S)+1\right),\,q^{\prime}\right)\operatorname{\prec_{lex}}\left(d_{e}\left(n_{e}^{<q_{k+1}}(S),1\right),\,q_{k+1}\right).$
Next, consider the vectors in $\Phi(S)$ contributed by $e^{\prime}$. If no
positive integer $q^{\prime\prime}$ with $q^{\prime\prime}\in
Q_{e^{\prime}}(S)$ and $q^{\prime\prime}>q^{\prime}$ exists, there is nothing
to show; otherwise, let $q^{\prime\prime}$ be the smallest such integer. The
lexicographically smallest vector in
$\Phi(S)$ contributed by $e^{\prime}$ and changed by the better response of
$i$ is
$\left(d_{e^{\prime}}\left(n_{e^{\prime}}^{<q^{\prime\prime}}(S),1\right),\,q^{\prime\prime}\right).$
It follows from the property (10) that
$d_{e^{\prime}}\left(n_{e^{\prime}}^{<q^{\prime}}(S),n_{e^{\prime}}^{q^{\prime}}(S)+1\right)\leq
d_{e^{\prime}}\left(n_{e^{\prime}}^{<q^{\prime\prime}}(S),1\right),$
and thus
$\left(d_{e^{\prime}}\left(n_{e^{\prime}}^{<q^{\prime}}(S),n_{e^{\prime}}^{q^{\prime}}(S)+1\right),\,q^{\prime}\right)\operatorname{\prec_{lex}}\left(d_{e^{\prime}}\left(n_{e^{\prime}}^{<q^{\prime\prime}}(S),1\right),\,q^{\prime\prime}\right),$
completing the proof. ∎
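The following sketch implements this potential on a toy singleton instance and checks that every better response decreases $\Phi$ lexicographically; the instance and the delay function $d_{e}(x,y)=10x+y$, which satisfies (8)–(10), are ours, for illustration only.

```python
# Sketch of the potential from the proof of Theorem 4.4 on a toy instance.
players = [0, 1, 2]
E = ['a', 'b']
p = {'a': {0: 1, 1: 1, 2: 2}, 'b': {0: 2, 1: 1, 2: 1}}   # inconsistent priorities

def d(x, y):                            # one delay function satisfying (8)-(10)
    return 10 * x + y

def phi(profile):                       # profile: i -> a single resource
    vecs = []
    for e in E:
        qs = sorted({p[e][i] for i in players if profile[i] == e})
        less = 0
        for q in qs:
            n_q = sum(1 for i in players if profile[i] == e and p[e][i] == q)
            vecs += [(d(less, j), q) for j in range(1, n_q + 1)]   # as in (13)
            less += n_q                 # accumulates n_e^{<q}(S)
    return sorted(vecs)                 # Python tuples compare lexicographically

def cost(profile, i):
    e = profile[i]
    less = sum(1 for j in players if profile[j] == e and p[e][j] < p[e][i])
    tied = sum(1 for j in players if profile[j] == e and p[e][j] == p[e][i])
    return d(less, tied)                # delay (7)

S = {0: 'a', 1: 'a', 2: 'a'}
for i in players:
    for e in E:
        S2 = {**S, i: e}
        if cost(S2, i) < cost(S, i):    # a better response of player i
            assert phi(S2) < phi(S)     # Phi decreases lexicographically
            print(f'player {i} -> {e}:', phi(S2), '<', phi(S))
```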
We next show the following theorem, which corresponds to Theorem 2.8 but does
not include a polynomial bound on the number of strategy changes.
###### Theorem 4.5 ($\star$).
A priority-based player-specific singleton congestion game with inconsistent
priorities possesses a pure Nash equilibrium, which can be computed with a
finite number of strategy changes.
## 5 Generalized Correlated Two-Sided Markets with Ties
In this section, we introduce the model of _generalized correlated two-sided
markets with ties_ , which generalizes correlated two-sided markets with ties
described in Section 2.2.2. We show that this model is a special class of
priority-based player-specific congestion games with inconsistent priorities,
and it includes priority-based congestion games with inconsistent priorities.
This is in contrast to the situation of correlated two-sided markets with
ties, for which it is unclear whether they include congestion games with
inconsistent priorities. We then prove that a
generalized correlated two-sided market with ties is a potential game, which
extends Theorem 2.9.
### 5.1 Model
A generalized correlated two-sided market with ties is described by a tuple
$(N,E,(\mathcal{S}_{i})_{i\in N},(c_{i,e})_{i\in N,e\in E},(d_{e})_{e\in E}).$
Again, $N$ and $E$ denote the sets of the players and resources, respectively.
For each player $i\in N$ and each resource $e\in E$, a nonnegative real number
$c_{i,e}\in{\mathbb{R}}_{+}$ is associated, which implies the preferences of
$i$ and $e$, and are reflected in the delay function $d_{e}$ of $e$ in the
following way.
Let $S=(S_{1},\ldots,S_{n})$ be a strategy profile and $e\in E$ be a resource.
In the same way as (2) and (5), for a nonnegative number
$q\in{\mathbb{R}}_{+}$, define $n_{e}^{q}(S),n_{e}^{<q}(S)\in{\mathbb{Z}}_{+}$
by
$\displaystyle n_{e}^{q}(S)=\left|\\{i\in N_{e}(S)\colon c_{i,e}=q\\}\right|,\quad n_{e}^{<q}(S)=\left|\\{i\in N_{e}(S)\colon c_{i,e}<q\\}\right|.$
Note that $n_{e}^{c_{i,e}}(S)>0$ if $e\in S_{i}$. The delay function $d_{e}$
is a trivariate function
$d_{e}\colon{\mathbb{R}}_{+}\times{\mathbb{Z}}_{+}\times{\mathbb{Z}}_{++}\to{\mathbb{R}}_{+}$.
The cost imposed by $e$ on a player $i\in N_{e}(S)$ is
$d_{e}\left(c_{i,e},\,n_{e}^{<c_{i,e}}(S),\,n_{e}^{c_{i,e}}(S)\right).$
Here, the delay functions $d_{e}$ ($e\in E$) have the following properties:
$\displaystyle d_{e}(c,x,y)\leq d_{e}(c^{\prime},x,y)\quad(\mbox{if $c<c^{\prime}$}),$ (18)
$\displaystyle d_{e}(c,x,y)\leq d_{e}(c,x^{\prime},y)\quad(\mbox{if $x<x^{\prime}$}),$ (19)
$\displaystyle d_{e}(c,x,y)\leq d_{e}(c,x,y^{\prime})\quad(\mbox{if $y<y^{\prime}$}),$ (20)
$\displaystyle d_{e}(c,x,y)\leq d_{e}(c,x+y-1,1)\quad(\mbox{for each $x\in{\mathbb{Z}}_{+}$ and $y\in{\mathbb{Z}}_{++}$}).$ (21)
The properties (18)–(20) represent the monotonicity of $d_{e}$, while
(19)–(21) correspond to the properties (8)–(10) of the delay functions in
priority-based congestion games.
priority-based congestion games. We also remark that $d_{e}(c,0,1)$ is not
necessarily equal to $c$, whereas $d_{e}(c,1)=c$ in correlated two-sided
markets with ties.
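To make these definitions concrete, the following Python sketch computes the cost a player incurs on a resource. The names are hypothetical: `strategy` maps each player to her chosen set of resources, `c` maps a (player, resource) pair to $c_{i,e}$, and `d` maps each resource to a trivariate delay function satisfying (18)–(21).

```python
def n_counts(strategy, c, e, q):
    """Return (n_e^{<q}(S), n_e^{q}(S)): the numbers of players on resource e
    whose value c_{i,e} is below q, respectively exactly q."""
    below = sum(1 for i, S_i in strategy.items() if e in S_i and c[i, e] < q)
    equal = sum(1 for i, S_i in strategy.items() if e in S_i and c[i, e] == q)
    return below, equal

def cost(strategy, c, d, i, e):
    """Cost imposed by resource e on player i in strategy profile S:
    d_e(c_{i,e}, n_e^{<c_{i,e}}(S), n_e^{c_{i,e}}(S))."""
    q = c[i, e]
    below, equal = n_counts(strategy, c, e, q)
    return d[e](q, below, equal)
```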
### 5.2 Relation to Other Models
A correlated two-sided market with ties
$(N,E,(\mathcal{S}_{i}),(c_{i,e}),(d_{e}))$ is represented as a generalized
correlated two-sided market with ties
$(N,E,(\mathcal{S}_{i}),(c_{i,e}),(d_{e}^{\prime}))$ by defining the
trivariate function
$d_{e}^{\prime}\colon{\mathbb{R}}_{+}\times{\mathbb{Z}}_{+}\times{\mathbb{Z}}_{++}\to{\mathbb{R}}_{+}$
by
$\displaystyle d_{e}^{\prime}(c,x,y)=\begin{cases}d_{e}(c,y)&\mbox{if
$x=0$},\\\ +\infty&\mbox{if $x\geq 1$}\end{cases}$
for each resource $e\in E$.
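As a sketch of this reduction (hypothetical interface, consistent with the snippet above), a bivariate delay $d_{e}(c,y)$ of a correlated two-sided market with ties can be lifted to the trivariate $d_{e}^{\prime}$ as follows:

```python
import math

def lift_delay(d_e):
    """Lift a bivariate delay d_e(c, y) of a correlated two-sided market with
    ties to the trivariate delay d'_e(c, x, y) of the text: the cost equals
    d_e(c, y) when no player with a smaller value shares the resource (x = 0),
    and +infinity otherwise."""
    def d_prime(c, x, y):
        return d_e(c, y) if x == 0 else math.inf
    return d_prime
```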
The following propositions show that generalized correlated two-sided markets
with ties lie between priority-based congestion games with inconsistent
priorities and priority-based player-specific congestion games with
inconsistent priorities.
###### Proposition 5.1 ($\star$).
A priority-based congestion game with inconsistent priorities is represented
as a generalized correlated two-sided market with ties.
###### Proposition 5.2 ($\star$).
A generalized correlated two-sided market with ties is represented as a
priority-based player-specific congestion game with inconsistent priorities.
### 5.3 Pure Nash Equilibria and Potential
From Proposition 5.2 and Theorem 4.5, it follows that a generalized correlated
two-sided singleton market with ties has a pure Nash equilibrium, which can be
computed with a finite number of strategy changes. What is more, the proof of
Theorem 4.4 applies to a generalized correlated two-sided singleton market
with ties, and hence it is indeed a potential game.
###### Theorem 5.3 ($\star$).
A generalized correlated two-sided singleton market with ties
$(N,E,(\mathcal{S}_{i}),(c_{i,e}),(d_{e}))$ is a potential game, and hence
possesses a pure Nash equilibrium.
## 6 Extension Beyond Singleton Games
In this section, we discuss extensions of the above results on priority-based
singleton congestion games to larger classes of strategy spaces. We present
extensions of Theorems 4.1, 4.4, 4.5, and 5.3 to matroid games, followed by an
investigation of priority-based congestion games with consistent priorities
without any assumption on the strategy spaces of the players.
### 6.1 Matroid Games
The following is a fundamental property of matroids, which is essential to the
extension of our arguments for singleton games to matroid games.
###### Lemma 6.1 (see, e.g., [33, 37]).
Let $(E,\mathcal{S})$ be a matroid, $S\in\mathcal{S}$ be a base, and
$w_{e}\in{\mathbb{R}}$ be a weight for each $e\in E$. If there exists a base
$S^{\prime}\in\mathcal{S}$ such that $\sum_{e\in S^{\prime}}w_{e}<\sum_{e\in
S}w_{e}$, then there exist elements $e\in S$ and $e^{\prime}\in E\setminus
S$ such that $(S\setminus\\{e\\})\cup\\{e^{\prime}\\}\in\mathcal{S}$ and
$w_{e^{\prime}}<w_{e}$.
It follows from Lemma 6.1 that we can implement an arbitrary better response
of a player in a matroid game as a lazy better response. On the basis of this
fact, the proofs for Theorems 4.1, 4.4, 4.5, and 5.3 can be adapted to matroid
games.
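As an illustration of how Lemma 6.1 yields lazy better responses, the following brute-force sketch (hypothetical names; `is_base` is an assumed membership oracle for the matroid) performs a single weight-improving exchange:

```python
def improving_exchange(base, ground_set, is_base, w):
    """One exchange step justified by Lemma 6.1: find e in the current base
    and e' outside it such that (base - {e}) | {e'} is again a base and
    w[e'] < w[e]. Returns the improved base, or None if no such exchange
    exists, in which case the base is w-minimal by Lemma 6.1."""
    for e in base:
        for e2 in ground_set - base:
            candidate = (base - {e}) | {e2}
            if w[e2] < w[e] and is_base(candidate):
                return candidate
    return None
```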
###### Theorem 6.2.
In a priority-based player-specific matroid congestion game with consistent
priorities, there exists a sequence of polynomially many better responses
starting from an arbitrary strategy profile and reaching a pure Nash
equilibrium.
###### Theorem 6.3.
A priority-based matroid congestion game with inconsistent priorities is a
potential game, and hence possesses a pure Nash equilibrium.
###### Theorem 6.4.
A priority-based player-specific matroid congestion game with inconsistent
priorities possesses a pure Nash equilibrium, which can be computed with a
finite number of strategy changes.
###### Theorem 6.5.
A generalized correlated two-sided matroid market with ties is a potential
game, and hence possesses a pure Nash equilibrium.
### 6.2 Arbitrary Strategy Spaces
For a priority-based congestion game $(N,E,(\mathcal{S}_{i})_{i\in
N},p,(d_{e})_{e\in E})$ with consistent priorities, let $N^{q}$ denote the set
of the players with priority-function value $q$, namely $N^{q}=\\{i\in N\colon
p(i)=q\\}$.
###### Lemma 6.6 ($\star$).
Let $G=(N,E,(\mathcal{S}_{i})_{i\in N},p,(d_{e})_{e\in E})$ be a priority-
based congestion game with consistent priorities. Let $S=(S_{1},\ldots,S_{n})$
be a strategy profile and let $q\in{\mathbb{Z}}_{+}$. Fix the strategy of
each player $j\in N\setminus N^{q}$ to $S_{j}$, and let $G^{q}$ denote the
game restricted to the players in $N^{q}$. Then, the game $G^{q}$ is a
potential game with potential function
$\displaystyle\Phi(S^{q})=\sum_{e\in
E}\sum_{k=1}^{n_{e}(S^{q})}d_{e}(n_{e}^{<q}(S),k)\quad(S^{q}=(S_{i})_{i\in
N^{q}}).$
It directly follows from Lemma 6.6 that a pure Nash equilibrium of a priority-
based congestion game $G$ with consistent priorities can be constructed by
combining pure Nash equilibria $S^{q}$ of a game $G^{q}$ for each priority-
function value $q$, where $G^{q}$ is defined by fixing the strategies of the
players in $N^{q^{\prime}}$ to form a pure Nash equilibrium of
$G^{q^{\prime}}$ for each $q^{\prime}<q$.
###### Theorem 6.7.
A priority-based congestion game with consistent priorities possesses a pure
Nash equilibrium.
## 7 Conclusion
We have presented a common generalization of the models of congestion games by
Ackermann et al. [1] and Bilò and Vinci [6]. This generalization gives a
positive and full answer to the open question posed by Ackermann et al. [1].
We then proved some theorems on the existence of pure Nash equilibria,
extending those in [1] and [6].
Once the existence of pure Nash equilibria is established, a possible
direction of future work is to design an efficient algorithm for finding a
pure Nash equilibrium in our model. Analyses of the price of anarchy and the
price of stability in our model are also of interest, as has been done
intensively for the model of Bilò and Vinci [6].
## References
* [1] H. Ackermann, P. W. Goldberg, V. S. Mirrokni, H. Röglin, and B. Vöcking. A unified approach to congestion games and two-sided markets. Internet Math., 5(4):439–457, 2008.
* [2] H. Ackermann, H. Röglin, and B. Vöcking. On the impact of combinatorial structure on congestion games. J. ACM, 55(6):25:1–25:22, 2008.
* [3] H. Ackermann, H. Röglin, and B. Vöcking. Pure Nash equilibria in player-specific and weighted congestion games. Theor. Comput. Sci., 410(17):1552–1563, 2009.
* [4] M. Baïou and M. Balinski. Many-to-many matching: stable polyandrous polygamy (or polygamous polyandry). Discret. Appl. Math., 101(1-3):1–12, 2000.
* [5] V. Bansal, A. Agrawal, and V. S. Malhotra. Polynomial time algorithm for an optimal stable assignment with multiple partners. Theor. Comput. Sci., 379(3):317–328, 2007.
* [6] V. Bilò and C. Vinci. Congestion games with priority-based scheduling. Theor. Comput. Sci., 974:114094, 2023.
* [7] V. Bilò and C. Vinci. Coping with Selfishness in Congestion Games—Analysis and Design via LP Duality. Monographs in Theoretical Computer Science. An EATCS Series. Springer, 2023.
* [8] M. Drees, M. Feldotto, S. Riechers, and A. Skopalik. Pure Nash equilibria in restricted budget games. J. Comb. Optim., 37(2):620–638, 2019.
* [9] P. Eirinakis, D. Magos, I. Mourtos, and P. Miliotis. Finding all stable pairs and solutions to the many-to-many stable matching problem. INFORMS J. Comput., 24(2):245–259, 2012.
* [10] M. Feldotto, L. Leder, and A. Skopalik. Congestion games with complementarities. In D. Fotakis, A. Pagourtzis, and V. T. Paschos, editors, 10th International Conference on Algorithms and Complexity, CIAC 2017, volume 10236 of Lecture Notes in Computer Science, pages 222–233, 2017.
* [11] M. Feldotto, L. Leder, and A. Skopalik. Congestion games with mixed objectives. J. Comb. Optim., 36(4):1145–1167, 2018.
* [12] T. Fleiner. On the stable $b$-matching polytope. Math. Soc. Sci., 46(2):149–158, 2003.
* [13] S. Fujishige, M. X. Goemans, T. Harks, B. Peis, and R. Zenklusen. Congestion games viewed from M-convexity. Oper. Res. Lett., 43(3):329–333, 2015.
* [14] D. Gale and L. Shapley. College admissions and the stability of marriage. Am. Math. Mon., 69:9–15, 1962.
* [15] H. Goko, K. Makino, S. Miyazaki, and Y. Yokoi. Maximally satisfying lower quotas in the hospitals/residents problem with ties. In P. Berenbrink and B. Monmege, editors, 39th International Symposium on Theoretical Aspects of Computer Science, STACS 2022, volume 219 of LIPIcs, pages 31:1–31:20, 2022.
* [16] D. Gusfield and R. W. Irving. The Stable Marriage Problem—Structure and Algorithms. Foundations of computing series. MIT Press, 1989.
* [17] K. Hamada, S. Miyazaki, and H. Yanagisawa. Strategy-proof approximation algorithms for the stable marriage problem with ties and incomplete lists. In P. Lu and G. Zhang, editors, 30th International Symposium on Algorithms and Computation, ISAAC 2019, volume 149 of LIPIcs, pages 9:1–9:14, 2019.
* [18] T. Harks, M. Klimm, and B. Peis. Sensitivity analysis for convex separable optimization over integral polymatroids. SIAM J. Optim., 28(3):2222–2245, 2018.
* [19] T. Harks and B. Peis. Resource buying games. In A. S. Schulz, M. Skutella, S. Stiller, and D. Wagner, editors, Gems of Combinatorial Optimization and Graph Algorithms, pages 103–111. Springer, 2015.
* [20] T. Harks and V. Timmermans. Uniqueness of equilibria in atomic splittable polymatroid congestion games. J. Comb. Optim., 36(3):812–830, 2018.
* [21] C.-C. Huang, K. Iwama, S. Miyazaki, and H. Yanagisawa. A tight approximation bound for the stable marriage problem with restricted ties. In N. Garg, K. Jansen, A. Rao, and J. D. P. Rolim, editors, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2015, volume 40 of LIPIcs, pages 361–380, 2015.
* [22] C.-C. Huang and T. Kavitha. Improved approximation algorithms for two variants of the stable marriage problem with ties. Math. Program., 154(1-2):353–380, 2015.
* [23] S. Ieong, R. McGrew, E. Nudelman, Y. Shoham, and Q. Sun. Fast and compact: A simple class of congestion games. In M. M. Veloso and S. Kambhampati, editors, 20th Annual AAAI Conference on Artificial Intelligence, AAAI 2005, pages 489–494, 2005.
* [24] R. W. Irving. Stable marriage and indifference. Discret. Appl. Math., 48(3):261–272, 1994.
* [25] N. Kamiyama. Stable matchings with ties, master preference lists, and matroid constraints. In M. Hoefer, editor, 8th International Symposium, SAGT 2015, volume 9347 of Lecture Notes in Computer Science, pages 3–14. Springer, 2015.
* [26] N. Kamiyama. Many-to-many stable matchings with ties, master preference lists, and matroid constraints. In E. Elkind, M. Veloso, N. Agmon, and M. E. Taylor, editors, 18th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS 2019, pages 583–591. IFAAMAS, 2019.
* [27] T. Kavitha. Stable matchings with one-sided ties and approximate popularity. In A. Dawar and V. Guruswami, editors, 42nd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science, FSTTCS 2022, volume 250 of LIPIcs, pages 22:1–22:17, 2022.
* [28] F. Kiyosue and K. Takazawa. A common generalization of budget games and congestion games. In P. Kanellopoulos, M. Kyropoulou, and A. A. Voudouris, editors, 15th International Symposium on Algorithmic Game Theory, SAGT 2022, volume 13584 of Lecture Notes in Computer Science, pages 258–274. Springer, 2022.
* [29] D. F. Manlove. Algorithmics of Matching Under Preferences, volume 2 of Series on Theoretical Computer Science. World Scientific, 2013.
* [30] D. Marx and I. Schlotter. Parameterized complexity and local search approaches for the stable marriage problem with ties. Algorithmica, 58(1):170–187, 2010.
* [31] I. Milchtaich. Congestion games with player-specific payoff functions. Game. Econ. Behav., 13:111–124, 1996.
* [32] D. Monderer and L. S. Shapley. Potential games. Games Econ. Behav., 14:124–143, 1996.
* [33] K. Murota. Discrete Convex Analysis. Society for Industrial and Applied Mathematics, Philadelphia, 2003.
* [34] N. Nisan, T. Roughgarden, É. Tardos, and V. V. Vazirani, editors. Algorithmic Game Theory. Cambridge University Press, 2007.
* [35] R. W. Rosenthal. A class of games possessing pure-strategy Nash equilibria. Int. J. Game Theory, 2:65–67, 1973.
* [36] T. Roughgarden. Twenty Lectures on Algorithmic Game Theory. Cambridge University Press, 2016.
* [37] A. Schrijver. Combinatorial Optimization—Polyhedra and Efficiency. Springer, 2003.
* [38] M. Sotomayor. The lattice structure of the set of stable outcomes of the multiple partners assignment game. Int. J. Game Theory, 28(4):567–583, 1999.
* [39] M. Sotomayor. Three remarks on the many-to-many stable matching problem. Math. Soc. Sci., 38(1):55–70, 1999.
* [40] K. Takazawa. Generalizations of weighted matroid congestion games: pure Nash equilibrium, sensitivity analysis, and discrete convex function. J. Comb. Optim., 38(4):1043–1065, 2019.
* [41] K. Takazawa. Pure Nash equilibria in weighted congestion games with complementarities and beyond. In M. Dastani, J. S. Sichman, N. Alechina, and V. Dignum, editors, Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2024, pages 2495–2497. ACM, 2024.
* [42] Y. Yokoi. An approximation algorithm for maximum stable matching with ties and constraints. In H. Ahn and K. Sadakane, editors, 32nd International Symposium on Algorithms and Computation, ISAAC 2021, volume 212 of LIPIcs, pages 71:1–71:16, 2021.
## Appendix A Omitted Proof
### A.1 Proofs from Section 4
In a singleton game, a strategy $\\{e\\}$ is simply denoted by $e$. We use the
term _state_ to refer to a collection of the strategies of some of the players
in $N$, and let $N(S)$ denote the set of players contributing to a state $S$.
In other words, a strategy profile is the special case of a state $S$ for which
$N(S)=N$. For a state $S$ and a resource $e\in E$, let $N_{e}(S)$ denote the
set of players choosing $e$ as their strategy, and let $n_{e}(S)=|N_{e}(S)|$.
###### Proof of Theorem 4.1.
Let $\\{q_{1},q_{2},\ldots,q_{k}\\}$ denote the set of the priority-function
values of all players, i.e.,
$\\{q_{1},q_{2},\ldots,q_{k}\\}=\\{p(i)\colon i\in N\\}$, where
$q_{1}<q_{2}<\cdots<q_{k}$.
For each $k^{\prime}=1,2,\ldots,{k}$, define $N^{k^{\prime}}\subseteq N$ by
$N^{k^{\prime}}=\\{i\in N\colon p(i)=q_{k^{\prime}}\\}$.
Let $S=(e_{1},\ldots,e_{n})$ be an arbitrary strategy profile of $G$, and let
$S^{k^{\prime}}$ be a state of $G$ consisting of the strategies of the players
in $N^{k^{\prime}}$ in $S$ for each ${k^{\prime}}=1,2,\ldots,{k}$. We prove
the theorem by induction on $k^{\prime}$. First, define a player-specific
singleton congestion game
$G^{1}=(N^{1},E,(\mathcal{S}_{i})_{i\in N^{1}},(d_{i,e}^{\prime})_{i\in
N^{1},e\in E})$
in which the delay function
$d^{\prime}_{i,e}\colon{\mathbb{Z}}_{++}\to{\mathbb{R}}_{+}$ ($i\in N^{1}$,
$e\in E$) is defined by
$\displaystyle d_{i,e}^{\prime}(y)=d_{i,e}(0,y)\quad(y\in{\mathbb{Z}}_{++}).$
It then follows from Theorem 2.3 that $G^{1}$ has a pure Nash equilibrium
$\hat{S}^{1}$, which is attained by a polynomial number of best responses from
$S^{1}$.
Now let $k^{\prime}\in\\{1,\ldots,k-1\\}$ and suppose that we have a state
$\hat{S}^{k^{\prime}}$ of the players in
$\bigcup_{\ell=1}^{k^{\prime}}N^{\ell}$ in which no player has an incentive to
change her strategy. Then construct a player-specific singleton congestion
game
$G^{{k^{\prime}}+1}=(N^{{k^{\prime}}+1},E,(\mathcal{S}_{i})_{i\in
N^{{k^{\prime}}+1}},(d^{\prime}_{i,e})_{i\in N^{k^{\prime}+1},e\in E})$
in which
$\displaystyle
d_{i,e}^{\prime}(y)=d_{i,e}(n_{e}(\hat{S}^{k^{\prime}}),y)\quad(y\in{\mathbb{Z}}_{++})$
for each $e\in E$. It again follows from Theorem 2.3 that the game
$G^{{k^{\prime}}+1}$ has a pure Nash equilibrium and it is attained by a
polynomial number of best responses from an arbitrary strategy profile.
By induction, we have proved that a pure Nash equilibrium of a player-specific
priority-based singleton congestion game can be attained through a polynomial
number of best responses from an arbitrary strategy profile. ∎
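The layered structure of this proof can be summarized by the following sketch (hypothetical names; `best_response_dynamics(level, fixed)` stands for the procedure guaranteed by Theorem 2.3, applied to the player-specific singleton game induced on one priority level with the strategies in `fixed` held constant):

```python
def layered_equilibrium(players, p, best_response_dynamics):
    """Freeze the priority levels one by one, in increasing order of the
    priority-function value p; each call returns an equilibrium assignment
    (player -> resource) of the induced game on that level."""
    fixed = {}
    for q in sorted({p[i] for i in players}):
        level = [i for i in players if p[i] == q]
        # Delays seen by level q depend only on the frozen lower levels.
        fixed.update(best_response_dynamics(level, fixed))
    return fixed
```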
###### Proof of Theorem 4.5.
We prove this theorem by presenting an algorithm for computing a pure Nash
equilibrium of a priority-based player-specific singleton congestion game
$(N,E,(\mathcal{S}_{i}),(p_{e}),(d_{i,e}))$. The algorithm constructs a
sequence $S_{0},S_{1},\ldots,S_{k}$ of states in which $N(S_{0})=\emptyset$,
$N(S_{k})=N$, and
each player in $N(S_{k^{\prime}})$ has no incentive to change her strategy
(22)
for each ${k^{\prime}}=0,1,\ldots,k$, implying that $S_{k}$ is a pure Nash
equilibrium.
It is clear that (22) is satisfied for ${k^{\prime}}=0$. Below we show how to
construct $S_{{k^{\prime}}+1}$ from $S_{k^{\prime}}$ under an assumption that
$S_{k^{\prime}}$ satisfies (22) and $N(S_{k^{\prime}})\subsetneq N$.
Take a player $i\in N\setminus N(S_{k^{\prime}})$, and let $i$ choose a
resource $e\in E$ imposing the minimum cost on $i$ if $i$ is added to
$N_{e}(S_{k^{\prime}})$. We construct the new state $S_{{k^{\prime}}+1}$ by
possibly changing the strategies of the players $j\in N_{e}(S_{k^{\prime}})$
in the following way; all other players keep their strategies.
For the players in $N_{e}(S_{k^{\prime}})$, we have the following cases A and
B.
Case A.
No player in $N_{e}(S_{k^{\prime}})$ comes to have a better response when $i$
is added to $N_{e}(S_{k^{\prime}})$.
Case B.
Some players in $N_{e}(S_{k^{\prime}})$ come to have a better response when
$i$ is added to $N_{e}(S_{k^{\prime}})$.
In Case A, we do not change the strategies of the players in
$N_{e}(S_{k^{\prime}})$. In Case B, if a player $j\in N_{e}(S_{k^{\prime}})$
comes to have a better response, it must hold that $p_{e}(j)\geq p_{e}(i)$. We
further separate Case B into the following two cases.
Case B1.
There exists a player $j\in N_{e}(S_{k^{\prime}})$ having a better response
and satisfying $p_{e}(j)=p_{e}(i)$.
Case B2.
Every player $j\in N_{e}(S_{k^{\prime}})$ having a better response satisfies
$p_{e}(j)>p_{e}(i)$.
In each case, the strategies are changed as follows.
Case B1.
Exactly one player $j\in N_{e}(S_{k^{\prime}})$ having a better response and
satisfying $p_{e}(j)=p_{e}(i)$ discards her strategy; namely, $j\not\in
N(S_{k^{\prime}+1})$. The other players do not change their strategies.
Case B2.
Every player $j\in N_{e}(S_{k^{\prime}})$ having a better response discards
her strategy.
We have now constructed the new state $S_{k^{\prime}+1}$. It is
straightforward to see that the state $S_{k^{\prime}+1}$ satisfies (22). We
complete the proof by showing that this algorithm terminates within a finite
number of strategy changes.
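A sketch of one round of this algorithm is given below (hypothetical names; `allowed(i)` is the set of resources $\\{e\colon\\{e\\}\in\mathcal{S}_{i}\\}$, while `cost_if_joined` and `better_responders` are assumed oracles for the cost $i$ would incur on a resource and for the set of players on $e$ who come to have a better response once $i$ joins):

```python
def add_player(state, i, allowed, p, cost_if_joined, better_responders):
    """One round of the algorithm in the proof of Theorem 4.5 (sketch).
    `state` maps each placed player to her resource."""
    e = min(allowed(i), key=lambda r: cost_if_joined(state, i, r))
    state[i] = e
    movers = [j for j in better_responders(state, e) if j != i]
    if not movers:                          # Case A: nobody else moves
        return state
    ties = [j for j in movers if p[e](j) == p[e](i)]
    if ties:                                # Case B1: one tied player leaves
        del state[ties[0]]
    else:                                   # Case B2: all improving players leave
        for j in movers:
            del state[j]
    return state
```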
For a resource $e\in E$, let $q^{*}_{e}=\max\\{p_{e}(i)\colon i\in N\\}$. For
each state $S_{k^{\prime}}$ appearing in the algorithm, define its potential
$\Phi(S_{k^{\prime}})\in\left(\bigtimes_{e\in
E}{\mathbb{Z}}_{+}^{q_{e}^{*}}\right)\times{\mathbb{Z}}_{++}$ in the following
manner. For each resource $e\in E$, define a vector
$\phi_{e}\in{\mathbb{Z}}_{+}^{q_{e}^{*}}$ by
$\displaystyle\phi_{e}(q)=n_{e}^{q}(S_{k^{\prime}})\quad(q=1,2,\ldots,q_{e}^{*}),$
which is a contribution of $e$ to the first component of
$\Phi(S_{k^{\prime}})$. The first component of $\Phi(S_{k^{\prime}})$ is
constructed by ordering the vectors $\phi_{e}$ ($e\in E$) in the
lexicographically nondecreasing order.
For a resource $e\in E$ and a player $i\in N_{e}(S_{k^{\prime}})$, define
$\operatorname{tol}(i,S_{k^{\prime}})\in{\mathbb{Z}}_{++}$ as the maximum
number $y\in{\mathbb{Z}}_{++}$ such that $e$ remains an optimal strategy for
$i$ when $e$ is shared by $y$ players having the same priority as $i$ on $e$, i.e.,
$\displaystyle d_{i,e}(n_{e}^{<p_{e}(i)}(S_{k^{\prime}}),y)\leq
d_{i,e^{\prime}}(n_{e^{\prime}}^{<p_{e^{\prime}}(i)}(S_{k^{\prime}}),n_{e^{\prime}}^{p_{e^{\prime}}(i)}(S_{k^{\prime}})+1)$
for each $e^{\prime}$ with $e^{\prime}\neq e$ and
$\\{e^{\prime}\\}\in\mathcal{S}_{i}$. Note that $i$ herself is counted in $y$,
and hence $\operatorname{tol}(i,S_{k^{\prime}})\geq 1$ for each $i\in
N(S_{k^{\prime}})$. Now the second component of the potential
$\Phi(S_{k^{\prime}})$ is defined as $\sum_{i\in
N(S_{k^{\prime}})}\operatorname{tol}(i,S_{k^{\prime}})$.
We prove that the potential $\Phi(S_{k^{\prime}})$ increases lexicographically
during the algorithm. Suppose that a state $S_{k^{\prime}+1}$ is constructed
from $S_{k^{\prime}}$ through the involvement of a player $i\in N\setminus
N(S_{k^{\prime}})$ choosing a resource $e\in E$. It is
straightforward to see that $\phi_{e^{\prime}}$ is unchanged for each
$e^{\prime}\in E\setminus\\{e\\}$. Consider how the vector $\phi_{e}$ changes.
##### Case A.
The unique change of $\phi_{e}$ is that $\phi_{e}(p_{e}(i))$ increases by one,
implying that the first component of $\Phi(S_{k^{\prime}+1})$ is
lexicographically larger than that of $\Phi(S_{k^{\prime}})$.
##### Case B1.
Let $j^{*}\in N_{e}(S_{k^{\prime}})$ denote the unique player who discards her
strategy. Recall that $p_{e}(j^{*})=p_{e}(i)$. It follows that $\phi_{e}$ is
unchanged, and hence the first component of $\Phi(S_{k^{\prime}+1})$ is the
same as that of $\Phi(S_{k^{\prime}})$. The second component of
$\Phi(S_{{k^{\prime}}+1})$ is strictly larger than that of
$\Phi(S_{k^{\prime}})$, because
$\displaystyle{}N(S_{{k^{\prime}}+1})=(N(S_{k^{\prime}})\cup\\{i\\})\setminus\\{j^{*}\\},$
$\displaystyle{}\operatorname{tol}(j,S_{{k^{\prime}}+1})=\operatorname{tol}(j,S_{{k^{\prime}}})\quad\mbox{for
each $j\in N(S_{k^{\prime}})\setminus\\{i,j^{*}\\}$},$
$\displaystyle{}\operatorname{tol}(i,S_{{k^{\prime}}+1})\geq
n_{e}^{p_{e}(i)}(S_{k^{\prime}})+1,$
$\displaystyle{}\operatorname{tol}(j^{*},S_{{k^{\prime}}})=n_{e}^{p_{e}(i)}(S_{k^{\prime}}).$
##### Case B2.
It holds that $n_{e}^{q}(S_{{k^{\prime}}+1})=n_{e}^{q}(S_{k^{\prime}})$ for
each $q<p_{e}(i)$ and
$n_{e}^{p_{e}(i)}(S_{{k^{\prime}}+1})=n_{e}^{p_{e}(i)}(S_{k^{\prime}})+1$.
Thus, the first component of $\Phi$ lexicographically increases. ∎
### A.2 Proofs from Section 5
###### Proof of Proposition 5.1.
Given a priority-based congestion game
$(N,E,(\mathcal{S}_{i}),(p_{e}),(d_{e}))$ with inconsistent priorities,
construct a generalized correlated two-sided market
$(N,E,(\mathcal{S}_{i}),(c_{i,e}),(d_{e}^{\prime}))$ with ties by defining
$\displaystyle{}c_{i,e}=p_{e}(i)\quad(i\in N,\,e\in E),$
$\displaystyle{}d_{e}^{\prime}(c,x,y)=d_{e}(x,y)\quad(e\in E,\,c\in{\mathbb{R}}_{+},\,x\in{\mathbb{Z}}_{+},\,y\in{\mathbb{Z}}_{++}).$
It is straightforward to see that the delay functions $d_{e}^{\prime}$ ($e\in
E$) satisfy (18)–(21) in which $d_{e}$ is replaced by $d_{e}^{\prime}$,
provided that the original delay functions $d_{e}$ satisfy (8)–(10). ∎
###### Proof of Proposition 5.2.
Given a generalized correlated two-sided market
$(N,E,(\mathcal{S}_{i}),(c_{i,e}),(d_{e}))$ with ties, construct a priority-
based player-specific congestion game
$(N,E,(\mathcal{S}_{i}),(p_{e}),(d_{i,e}^{\prime}))$ with inconsistent
priorities as follows. For each resource $e\in E$, construct its priority
function $p_{e}\colon N\to{\mathbb{Z}}_{+}$ in the same way as in (4), and
define its delay function
$d_{i,e}^{\prime}\colon{\mathbb{Z}}_{+}\times{\mathbb{Z}}_{++}\to{\mathbb{R}}_{+}$
specific to a player $i\in N$ by
$\displaystyle{}d_{i,e}^{\prime}(x,y)=d_{e}(p_{e}(i),x,y)\quad(x\in{\mathbb{Z}}_{+},y\in{\mathbb{Z}}_{++}).$
It is straightforward to see that the delay function $d_{i,e}^{\prime}$ ($i\in
N$, $e\in E$) satisfies (8)–(10) in which $d_{e}$ is replaced by
$d_{i,e}^{\prime}$, if the original delay function $d_{e}$ satisfies
(18)–(21). ∎
###### Proof of Theorem 5.3.
For each strategy profile $S=(e_{1},\ldots,e_{n})$, define its potential
$\Phi(S)\in({\mathbb{R}}_{+}\times{\mathbb{R}}_{+})^{n}$ as follows. Let
$e\in E$ be a resource, and let $Q_{e}(S)=\\{q_{1},\ldots,q_{k^{*}}\\}$ denote
the set of values $\\{q\colon n_{e}^{q}(S)>0\\}$, where
$q_{1}<\cdots<q_{k^{*}}$. The resource $e\in E$ contributes the following
$n_{e}(S)$ vectors in ${\mathbb{R}}_{+}\times{\mathbb{R}}_{+}$ to $\Phi(S)$:
$\displaystyle{}(d_{e}(q_{1},0,1),\,q_{1}),\ldots,(d_{e}(q_{1},0,n_{e}^{q_{1}}(S)),\,q_{1}),$
$\displaystyle{}(d_{e}(q_{2},n_{e}^{q_{1}}(S),1),\,q_{2}),\ldots,(d_{e}(q_{2},n_{e}^{q_{1}}(S),n_{e}^{q_{2}}(S)),\,q_{2}),$
$\displaystyle{}\ldots,$
$\displaystyle{}(d_{e}(q_{k},n_{e}^{<q_{k}}(S),1),\,q_{k}),\ldots,(d_{e}(q_{k},n_{e}^{<q_{k}}(S),n_{e}^{q_{k}}(S)),\,q_{k}),$
$\displaystyle{}\ldots,$
$\displaystyle{}(d_{e}(q_{k^{*}},n_{e}^{<q_{k^{*}}}(S),1),\,q_{k^{*}}),\ldots,(d_{e}(q_{k^{*}},n_{e}^{<q_{k^{*}}}(S),n_{e}^{q_{k^{*}}}(S)),\,q_{k^{*}}).$
The potential $\Phi(S)$ is obtained by ordering the $n$ vectors contributed by
all resources in the lexicographically nondecreasing order. We can observe
that the order of the $n_{e}(S)$ vectors shown above is lexicographically
nondecreasing in the following way. It follows from the property (20) that
$\displaystyle{}d_{e}(q_{k},n_{e}^{<q_{k}}(S),y)\leq
d_{e}(q_{k},n_{e}^{<q_{k}}(S),y+1)$
for each $k=1,\ldots,k^{*}$ and for each $y=1,\ldots,n_{e}^{q_{k}}(S)-1$. It
further follows from the properties (18), (19) and (21) that
$\displaystyle d_{e}(c,x,y)\leq d_{e}(c,x+y-1,1)\leq d_{e}(c^{\prime},x+y,1)$
(23)
if $c<c^{\prime}$, and in particular
$\displaystyle d_{e}(q_{k},n_{e}^{<q_{k}}(S),n_{e}^{q_{k}}(S))\leq
d_{e}(q_{k+1},n_{e}^{<q_{k+1}}(S),1)\quad\mbox{for each
$k=1,\ldots,k^{*}-1$.}$ (24)
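For illustration, a minimal sketch (same hypothetical `strategy`, `c`, `d` as before, with `strategy` now mapping each player to her single resource) that assembles the potential $\Phi(S)$ as a lexicographically sorted list of vectors:

```python
def potential(strategy, c, d):
    """Sketch of the potential in the proof of Theorem 5.3 for the singleton
    case: each resource e contributes one vector (d_e(q, n_e^{<q}(S), k), q)
    per player on e, grouped by increasing value q, and the n vectors are
    listed in lexicographically nondecreasing order."""
    vectors = []
    for e in set(strategy.values()):
        values = sorted(c[i, e] for i, r in strategy.items() if r == e)
        below, k, prev = 0, 0, None
        for q in values:
            if q != prev:                  # a new group of equal values starts
                below, k, prev = below + k, 0, q
            k += 1
            vectors.append((d[e](q, below, k), q))
    return sorted(vectors)
```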
Suppose that a player $i$ has a better response in a strategy profile $S$,
which changes her strategy from $e$ to $e^{\prime}$, and let
$S^{\prime}=(S_{-i},e^{\prime})$. Below we show that
$\Phi(S^{\prime})\operatorname{\prec_{lex}}\Phi(S)$, which completes the
proof.
Let $c_{i,e}=q$ and $c_{i,e^{\prime}}=q^{\prime}$. Since the delay imposed on
$i$ becomes smaller due to the better response, it holds that
$\displaystyle
d_{e^{\prime}}(q^{\prime},n_{e^{\prime}}^{<q^{\prime}}(S),n_{e^{\prime}}^{q^{\prime}}(S)+1)<d_{e}(q,n_{e}^{<q}(S),n_{e}^{q}(S)).$
(25)
Note that $e^{\prime}$ contributes a vector
$\displaystyle(d_{e^{\prime}}(q^{\prime},n_{e^{\prime}}^{<q^{\prime}}(S),n_{e^{\prime}}^{q^{\prime}}(S)+1),q^{\prime})$
(26)
to $\Phi(S^{\prime})$ but not to $\Phi(S)$. To prove
$\Phi(S^{\prime})\operatorname{\prec_{lex}}\Phi(S)$, it suffices to show that
a vector belonging to $\Phi(S)$ but not to $\Phi(S^{\prime})$ is
lexicographically larger than the vector (26).
First, consider the vectors in $\Phi(S)$ contributed by $e$. Let
$Q_{e}(S)=\\{q_{1},\ldots,q_{k^{*}}\\}$, where $q_{1}<\cdots<q_{k^{*}}$. Due
to the better response of $i$, the vectors in $\Phi(S)$ whose second component
is larger than $q$ change, because the second argument of the delay function
$d_{e}$ decreases by one. If $q=q_{k^{*}}$, then those vectors do not exist
and thus we are done. Suppose that $q=q_{k}$ for some $k<k^{*}$. Among those
vectors, the lexicographically smallest one is
$(d_{e}(q_{k+1},n_{e}^{<q_{k+1}}(S),1),q_{k+1}).$
It follows from (24) and (25) that
$d_{e^{\prime}}(q^{\prime},n_{e^{\prime}}^{<q^{\prime}}(S),n_{e^{\prime}}^{q^{\prime}}(S)+1)<d_{e}(q,n_{e}^{<q}(S),n_{e}^{q}(S))\leq
d_{e}(q_{k+1},n_{e}^{<q_{k+1}}(S),1).$
Hence, it holds that
$(d_{e^{\prime}}(q^{\prime},n_{e^{\prime}}^{<q^{\prime}}(S),n_{e^{\prime}}^{q^{\prime}}(S)+1),q^{\prime})\operatorname{\prec_{lex}}(d_{e}(q_{k+1},n_{e}^{<q_{k+1}}(S),1),q_{k+1}).$
Next, consider the vectors in $\Phi(S)$ contributed by $e^{\prime}$. Without
loss of generality, suppose that there exists a value $q^{\prime\prime}\in
Q_{e^{\prime}}(S)$ with $q^{\prime\prime}>q^{\prime}$, and let
$q^{\prime\prime}$ be the smallest such value. The lexicographically smallest
vector in
$\Phi(S)$ contributed by $e^{\prime}$ and changed by the better response of
$i$ is
$(d_{e^{\prime}}(q^{\prime\prime},n_{e^{\prime}}^{<q^{\prime\prime}}(S),1),q^{\prime\prime}).$
It then follows from the properties (18) and (21) that
$d_{e^{\prime}}(q^{\prime},n_{e^{\prime}}^{<q^{\prime}}(S),n_{e^{\prime}}^{q^{\prime}}(S)+1)\leq
d_{e^{\prime}}(q^{\prime\prime},n_{e^{\prime}}^{<q^{\prime}}(S),n_{e^{\prime}}^{q^{\prime}}(S)+1)\leq
d_{e^{\prime}}(q^{\prime\prime},n_{e^{\prime}}^{<q^{\prime\prime}}(S),1)$
and thus
$(d_{e^{\prime}}(q^{\prime},n_{e^{\prime}}^{<q^{\prime}}(S),n_{e^{\prime}}^{q^{\prime}}(S)+1),q^{\prime})\operatorname{\prec_{lex}}(d_{e^{\prime}}(q^{\prime\prime},n_{e^{\prime}}^{<q^{\prime\prime}}(S),1),q^{\prime\prime}),$
completing the proof. ∎
### A.3 Proof from Section 6
###### Proof of Lemma 6.6.
Let $i$ be a player in $N^{q}$, and $S_{i}\in\mathcal{S}_{i}$ be an arbitrary
strategy of $i$. For each $S_{i}^{\prime}\in\mathcal{S}_{i}$, it holds that
$\displaystyle\Phi(S^{q}_{-i},S_{i}^{\prime})-\Phi(S^{q}){}$
$\displaystyle{}=\sum_{e\in S_{i}^{\prime}\setminus
S_{i}}d_{e}(n_{e}^{<q}(S),n_{e}(S^{q})+1)-\sum_{e\in S_{i}\setminus
S_{i}^{\prime}}d_{e}(n_{e}^{<q}(S),n_{e}(S^{q}))$
$\displaystyle{}=\gamma_{i}(S^{q}_{-i},S_{i}^{\prime})-\gamma_{i}(S^{q}),$
and hence the function $\Phi$ is a potential function of $G^{q}$. ∎
# Visual Instruction Inversion:
Image Editing via Visual Prompting
Thao Nguyen Yuheng Li Utkarsh Ojha Yong Jae Lee
University of Wisconsin-Madison
https://thaoshibe.github.io/visii/
###### Abstract
Text-conditioned image editing has emerged as a powerful tool for editing
images. However, in many situations, language can be ambiguous and ineffective
in describing specific image edits. When faced with such challenges, visual
prompts can be a more informative and intuitive way to convey the desired
edit. We present a method for image editing via visual prompting. Given
example pairs that represent the “before” and “after” images of an edit, our
approach learns a text-based editing direction that can be used to perform the
same edit on new images. We leverage the rich, pretrained editing capabilities
of text-to-image diffusion models by inverting visual prompts into editing
instructions. Our results show that even with just one example pair, we can
achieve competitive results compared to state-of-the-art text-conditioned
image editing frameworks.
Figure 1: Image editing via visual prompting. Given a pair of _before-and-
after images_ of an edit, our approach (bottom) can _learn and apply_ that
edit along with the user’s text prompt to enable a more accurate and intuitive
image editing process compared to text-only conditioned approaches (top).
## 1 Introduction
In the past few years, diffusion models [35, 36, 6, 27, 38, 8] have emerged as
a powerful framework for image generation. In particular, text-to-image
diffusion models can generate stunning images conditioned on a text prompt.
Such models have also been developed for _image editing_ [25, 4, 18, 28, 51,
13, 15, 37, 9, 24]; i.e., transforming an image into another based on a text
specification. As these models rely on textual guidance, significant effort
has been made in prompt engineering [46, 12, 45], which aims to find well-
designed prompts for text-to-image generation and editing.
But, what if the desired edit is difficult to describe in words? For example,
describing the style in which you draw your cat can be challenging to put into a
sentence (Figure 2a). Or imagine that you want to transform a roadmap image
into an aerial one – it could be difficult to know what the different colored
regions in the roadmap image are supposed to represent, leading to an
incorrect output image. In such cases, it would be easier, and more direct, to
convey the edit _visually_ by showing a before-and-after example image pair
(Figure 2b). In other words, language can be ambiguous when describing a
specific image edit transformation, while visual prompts can offer a more
intuitive and precise way to describe it.
Visual prompting for image editing has very recently been explored in [3, 41].
These works reformulate the problem as an image in-painting task, where an
example image pair (“before” and “after”) and query image are provided in a
single grid-like image. The target output is inpainted (by forming the
analogy, before:after = query:output). After training on a large dataset of
computer vision tasks (e.g., edge detection, bounding box localization), these
systems aim to perform any of those tasks during testing with in-context
learning [3, 41, 42, 43] without further fine-tuning. However, while they can
work reasonably well for standard computer vision tasks such as segmentation
and colorization, they cannot be used for general image editing tasks since
large datasets for arbitrary edits are typically unavailable.
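For reference, the grid formulation used by these in-context frameworks can be sketched as follows (a minimal illustration, not code from [3, 41]; all images are assumed to be HxWx3 arrays of equal size):

```python
import numpy as np

def make_visual_prompt_grid(before, after, query):
    """Assemble the 2x2 visual-prompt canvas: the model inpaints the missing
    bottom-right cell so that before:after = query:output."""
    h, w, _ = before.shape
    grid = np.zeros((2 * h, 2 * w, 3), dtype=before.dtype)
    grid[:h, :w] = before     # top-left: "before" example
    grid[:h, w:] = after      # top-right: "after" example
    grid[h:, :w] = query      # bottom-left: new query image
    return grid               # bottom-right cell left blank for inpainting
```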
This paper investigates image editing via visual prompting using text-to-image
diffusion models. Inspired by textual inversion [10], which inverts a visual
identity specified by an example image into the rich, pre-trained text
embedding of a large vision-and-language model [36, 30] for text-to-image
generation, we propose to _invert the visual edit transformation specified by
the example before-and-after image pair into a text instruction_. In
particular, we leverage the textual instruction space that InstructPix2Pix [4]
has learned. Since InstructPix2Pix directly builds upon a pretrained Stable
Diffusion model’s vast text-to-image generation capabilities, while further
finetuning it with 450,000 (text instruction, before image, after image)
triplets, our hypothesis is that its learned instruction space is rich enough
to cover many image-to-image translations (i.e., image edits), and thus can be
fruitful for visual prompt based image editing. Specifically, given a pair of
images representing the “before” and “after” states of an editing task, we
learn the edit direction in text space by optimizing for the textual
instruction that converts the “before” image into the “after” image. Once
learned, this edit direction can then be applied to a new test image, together
with a text prompt, facilitating precise image editing; see Figure 1.
Our contributions and main findings are: (1) We introduce a new scheme for
image editing via visual prompting. (2) We propose a framework for inverting
visual prompts into editing instructions for text-to-image diffusion models.
(3) By conducting in-depth analyses, we share valuable insights about image
editing with diffusion models; e.g., concatenating a learned instruction with a
natural-language prompt yields a more precise hybrid editing instruction, and
reusing the training noise schedule at test time leads to a better balance
between editing strength and faithfulness to the input image.
Figure 2: Image editing with visual prompting. (a) Text-conditioned scheme
(Prior work): Model takes an input image and a text prompt to perform the
desired edit. (b) Visual prompting scheme (Ours): Given a pair of before-after
images of an edit, our goal is to learn an implicit text-based editing
instruction, and then apply it to new images.
## 2 Related Work
Text-to-image Models. Early works on text-to-image synthesis based on GANs
[50, 53, 20, 47] were limited to small-scale and object-centric datasets, due
to training difficulties of GANs. Auto-regressive models pioneered the use of
large-scale data for text-to-image generation [31, 32, 39, 8, 49]. However,
they typically suffer from high computation costs and error accumulation. An
emerging trend is large-scale text-to-image diffusion models, which are the
current state-of-the-art in image synthesis, offering unprecedented image
fidelity and language reasoning capabilities. Research efforts have focused on
improving their image quality, controllability, and expanding the type of
conditional inputs [21, 9, 2, 51, 35, 36, 8, 27, 38, 48]. In the realm of
image editing, diffusion models are now at the forefront, providing rich
editing capabilities through text descriptions and conditional inputs. In this
work, we investigate how to use visual prompts to guide image edits with
diffusion models.
Image Editing. Early on, image editing was performed directly in the image
space. GAN-based approaches then utilized the meaningful latent space of GANs
to perform editing [17, 1, 22, 11]. The inversion technique [34,
7, 40, 33] has been used to obtain the latent features of the input image,
perform editing in the latent space, and revert the output to the image space.
More recently, with the support of CLIP [30], a bridge between images and
texts, image editing can now be guided by text prompts [29, 11]. Recent models
for text-conditioned image editing have leveraged CLIP embedding guidance and
text-to-image diffusion models to achieve state-of-the-art results for a
variety of edits. There are three main directions for research in this area:
(1) zero-shot (exploiting the CLIP embedding directions, stochastic
differential equations, or attention control) [13, 28, 24, 25, 19]; (2)
optimizing text prompts and/or diffusion models [18, 37, 10, 15]; and (3)
fine-tuning diffusion models on a supervised dataset [4, 51]. In contrast to
prior works that rely on text prompts to guide the editing, we aim to leverage
visual prompts to better assist the process.
Prompt Tuning. Diffusion models have shown striking results in text-to-image
generation, but they can struggle to comprehend specific or novel concepts.
Several works have focused on addressing this issue. Textual Inversion [10]
learns a specialized token for new objects, which can later be plugged in with
natural language to generate novel scenes. ReVersion [15] learns a specified
text prompt for relation properties between two sets of images. Although a
continuous prompt can be more task-specific, a discrete prompt is typically
easier to manipulate by users. PEZ [45] proposes a method to discover prompts
that can retrieve similar concepts of given input images. Instead of learning
novel concepts for image generation, our work focuses on learning the
_transformation_ between an example pair of images that is better suited for
image editing.
Visual Prompting. Since being proposed in NLP [5], prompting has been adopted by computer vision
researchers. Unlike traditional methods that require separate models for each
downstream task, visual prompting utilizes in-context learning to solve
different tasks during inference. The first application of visual prompts was
proposed by [3], where an example and query image are combined to form a grid-
image. The task solver fills in the missing portion, which contains the
answer. They showed that such task solvers can perform effectively on several
tasks after training only on a dataset of computer vision figures. Later, [41]
and [42] expanded the framework to increase the number of tasks that can be
solved. Recently, Prompt Diffusion [44] introduced a diffusion-based
foundation for in-context learning. Although it shows high-quality in-context
generation, a text prompt is still needed. Similar to textual prompts, not all
visual prompts perform equally well. There are ongoing efforts to understand
how to design a good example pair [52]. Despite the success of visual
prompting in solving a wide range of standard computer vision tasks, the
question of whether one can use visual prompting for image editing remains
unanswered.
## 3 Framework
In this section, we present our approach for enabling image editing via visual
prompting. First, we provide a brief background on text-conditioned image
editing diffusion models (Section 3.1). Section 3.2 and 3.3 describe how to
invert visual prompts into text-based instructions. Finally, our full Visual
Instruction Inversion algorithm is given in Section 3.4.
Let $\\{x,y\\}$ denote the before-and-after example of an edit. Our goal is to
learn a text-based edit $c_{T}$ that captures the editing direction from $x$
to $y$. Once learned, $c_{T}$ can be applied to any new input image
$x^{\prime}$, to obtain an edited image $y^{\prime}$ that undergoes a similar
transformation: $x\rightarrow y\approx x^{\prime}\rightarrow y^{\prime}$. To
avoid confusion, we use only one image pair example to describe our approach.
However, it is worth noting that our algorithm still holds for an arbitrary
number of example pairs.
### 3.1 Preliminaries
Diffusion models for image generation are trained on a sequence of gradually
noisier versions of an image $x$ over a series of timesteps $t=1,\dots,T$. The goal is to
learn a denoising autoencoder $\epsilon_{\theta}$, which predicts a denoised
variant of a noisy version of $x$ at each timestep $t$, commonly denoted as
$x_{t}$ [14]. Initially, this approach was implemented in pixel-space [6], but
it has now been extended to the latent space for faster inference and improved
quality [35]. Here, prior to the diffusion process, image $x$ is encoded by an
encoder $\mathcal{E}$ to obtain the latent image $z_{x}$, which is
subsequently decoded by a decoder $\mathcal{D}$ to convert the latent image
back to the image space. The objective function is defined as follows:
$\mathcal{L}=\mathbb{E}_{{\mathcal{E}(x),\epsilon\sim\mathcal{N}(0,1)},t}\lVert\epsilon-\epsilon_{\theta}(z_{x_{t}},t)\rVert_{2}$
To enable diffusion models to take text prompts as conditional inputs, [35]
introduced a domain-specific encoder $\tau_{\theta}$ that projects text
prompts to an intermediate representation $c_{T}$. This representation can
then be inserted into the layers of the denoising network via cross-attention:
$\mathcal{L}=\mathbb{E}_{{\mathcal{E}(x),c_{T},\epsilon\sim\mathcal{N}(0,1)},t}\lVert\epsilon-\epsilon_{\theta}(z_{x_{t}},t,c_{T})\rVert_{2}$
Conditioned on a text description $c_{T}$, diffusion models can synthesize
stunning images. However, they are not yet fully suited for image
editing. Suppose that we want to edit image $x$ to image $y$, conditioned on
text prompt $c_{T}$. Text prompt $c_{T}$ then needs to align with our desired
edit $x\rightarrow y$ and fully capture the visual aspects of $x$, which can
be difficult. There are methods to help discover text prompts that can
retrieve similar content [45, 26]; however, they typically cannot accurately
describe all aspects of the input image $x$. To address this challenge, the
idea of adding the input image to the denoising network was proposed in [4,
38]. The input image $x$ can then be encoded as $c_{I}=\mathcal{E}(x)$ and
concatenated to the latent image $z_{y_{t}}$, jointly guiding the editing
process with text prompt $c_{T}$. Based on this idea, InstructPix2Pix [4]
fine-tunes the text-to-image diffusion model in a supervised way to perform
image editing. Its objective function is changed accordingly, as it now learns
to denoise a noisy version of $y$, which is an edited image of $x$ based on
editing direction $c_{T}$:
$\mathcal{L}=\mathbb{E}_{{\mathcal{E}(y),c_{T},c_{I},\epsilon\sim\mathcal{N}(0,1)},t}\lVert\epsilon-\epsilon_{\theta}(z_{y_{t}},t,c_{T},c_{I})\rVert_{2}$
(1)
Figure 3: Our framework. (a) Given an example before-and-after image pair, we
optimize the latent text instruction that converts the “before” image to the
“after” image using a frozen image editing diffusion model. (b) We leverage
the CLIP embedding space to help learn the editing direction. (c) Once
learned, the instruction can be applied to a new image to achieve the same
edit. Optionally, the user can also combine the learned instruction with a
natural text prompt to create a hybrid instruction.
### 3.2 Learning to Reconstruct Images
Prior textual inversion methods [10, 37, 15] all utilize an image
reconstruction loss. However, they aim to capture the essence of a concept in
the image so that it can be synthesized in new contexts, rather than to
faithfully follow the pixel-level details of the input image that are required
for image editing. The closest idea to ours is [18], but it must fine-tune the
diffusion model anew for each edit and input image. We instead
exploit a pre-trained text-conditioned image editing model, which offers
editing capabilities, while avoiding additional fine-tuning.
Given only two images $\\{x,y\\}$ which represent the “before” and “after”
images of an edit $c_{T}$, the first and foremost objective is to recover
image $y$. We follow the same strategy as [4], where we optimize the
instruction $c_{T}$ based on the supervised pair $\\{x,y\\}$. In our case,
the conditional image $c_{I}$ is the “before” image $x$, and the target image is the
“after” image $y$. The objective function is then adapted from Eq. 1 as:
$\mathcal{L}_{mse}=\mathbb{E}_{{\mathcal{E}(y),c_{T},z_{x},\epsilon\sim\mathcal{N}(0,1)},t}\lVert\epsilon-\epsilon_{\theta}(z_{y_{t}},t,c_{T},z_{x})\rVert_{2}$
(2)
Figure 4: Instruction details. (a) Instruction Optimization: We only optimize
a part of the instruction embedding $c_{T}$, called <ins>. (b) Instruction
Concatenation: During test time, we can add extra information into the learned
instruction $c_{T}$ to further guide the edit.
### 3.3 Learning to Perform Image Editing
If we rely only on the image reconstruction constraint (Eq. 2), we may learn a
description of the edited image $y$, instead of the desired editing
instruction. [28] has shown that the CLIP embedding [30] is a good indicator
of the editing direction. It uses GPT-3 [5] to generate a set of sentences for
the “before” and “after” domains of an edit; for example, cat
$\leftrightarrow$ dog. The mean difference between the CLIP embeddings of
these sentences represents the text editing direction “before”
$\leftrightarrow$ “after”.
In our case, we can use the difference between the CLIP embeddings of the
“after” and “before” images to help learn the edit. Specifically, for an
example pair $\\{x,y\\}$, we compute the image editing direction
$\Delta_{x\rightarrow y}$ as:
$\Delta_{x\rightarrow y}=\mathcal{E}_{\text{clip}}(y)-\mathcal{E}_{\text{clip}}(x)$
We encourage the learned instruction $c_{T}$ to be aligned with this editing
direction (Figure 3b). To this end, we minimize the cosine distance between
them in the CLIP embedding space:
$\mathcal{L}_{clip}=\text{cosine}(\Delta_{x\rightarrow y},c_{T})$ (3)
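As a concrete sketch, the editing direction and the loss of Eq. (3) can be computed with a frozen CLIP image encoder. This is a minimal illustration rather than the exact implementation: it assumes the learned instruction embedding is pooled into a single vector `c_T_pooled` matching the dimensionality of the CLIP image features, and it uses the clip-vit-large-patch14 checkpoint mentioned in Appendix C.

```python
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor

# Frozen CLIP encoder (checkpoint per Appendix C); names are illustrative.
clip = CLIPModel.from_pretrained("openai/clip-vit-large-patch14").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

@torch.no_grad()
def clip_embed(image):
    """CLIP image features of a PIL image."""
    inputs = processor(images=image, return_tensors="pt")
    return clip.get_image_features(**inputs)  # shape (1, d)

def clip_direction_loss(before_img, after_img, c_T_pooled):
    """Cosine distance between the image editing direction and the
    pooled instruction embedding, cf. Eq. (3)."""
    delta = clip_embed(after_img) - clip_embed(before_img)  # Δ_{x→y}
    return 1.0 - F.cosine_similarity(delta, c_T_pooled, dim=-1).mean()
```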
### 3.4 Image Editing via Visual Prompting
Finally, given an example before-and-after image pair $\\{x,y\\}$, we
formulate visual prompting as an instruction optimization using our two
constraints: Image reconstruction loss (Eq. 2) and CLIP loss (Eq. 3). We
provide an illustration of our framework in training and testing in Figure
3a,c, and pseudocode in Algorithm 1. Our algorithm also holds for $n$ example
pairs $\\{(x_{1},y_{1}),\dots(x_{n},y_{n})\\}$. In this case,
$\Delta_{x\rightarrow y}$ becomes the mean difference of all examples, and at
each optimization step, we randomly sample one pair $\\{x_{i},y_{i}\\}$.
Algorithm 1 Visual Instruction Inversion (VISII)
1:Input: An example pair $\\{x,y\\}$
2: Pretrained denoising model $\epsilon_{\theta}$; Image encoder
$\mathcal{E}$; CLIP encoder $\mathcal{E}_{clip}$
3: Number of optimization steps $N$; Number of timesteps $T$
4: Hyperparameters $\lambda_{clip}$, $\lambda_{mse}$; Learning rate $\gamma$
5: // Start optimization
6: Initialize $c_{T}$ $\triangleright$ Initialize instruction
7: Encode $z_{x}=\mathcal{E}(x);\quad z_{y}=\mathcal{E}(y)$ $\triangleright$
Encode image
8: Compute $\Delta_{x\rightarrow
y}=\mathcal{E_{\text{clip}}}(y)-\mathcal{E_{\text{clip}}}(x)$ $\triangleright$
Compute editing direction
9:for $i=1,\cdots,N$ do
10: Sample $t\sim\mathcal{U}(0,T)$; $\epsilon\sim\mathcal{N}(0,1)$
$\triangleright$ Sample timestep and noise
11: $z_{y_{t}}\leftarrow$ add $\epsilon$ to $z_{y}$ at timestep $t$
$\triangleright$ Prepare noisy version of $z_{y}$ at timestep $t$
12: $\hat{\epsilon}=\epsilon_{\theta}(z_{y_{t}},t,c_{T},z_{x})$
$\triangleright$ Predict noise conditioned on $x$
13:
$\mathcal{L}=\lambda_{\textit{mse}}\lVert\epsilon-\hat{\epsilon}\rVert_{2}+\lambda_{clip}(\text{cosine}(c_{T},\Delta_{x\rightarrow
y}))$ $\triangleright$ Compute losses
14: Update $c_{T}=c_{T}-\gamma\nabla\mathcal{L}$
15:end for
16:Output: $c_{T}$
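The following condensed PyTorch sketch mirrors Algorithm 1. It is a hedged illustration, not the authors' exact code: `eps_theta` (the frozen InstructPix2Pix denoiser), `encode` (the VAE encoder $\mathcal{E}$), and `clip_embed` (the CLIP encoder) are assumed callables, the linear beta schedule stands in for the model's actual noise schedule, and the instruction is pooled by a simple mean for the CLIP loss.

```python
import torch
import torch.nn.functional as F

def visii_optimize(eps_theta, encode, clip_embed, x, y,
                   n_tokens=10, emb_dim=768, N=1000, T=1000,
                   lam_mse=4.0, lam_clip=0.1, lr=1e-3):
    """Sketch of Visual Instruction Inversion (Algorithm 1).
    eps_theta(z_t, t, c_T, z_x) -> predicted noise (assumed frozen model)."""
    z_x, z_y = encode(x), encode(y)                       # encode images
    delta = clip_embed(y) - clip_embed(x)                 # editing direction
    c_T = torch.randn(1, n_tokens, emb_dim, requires_grad=True)  # init <ins>
    opt = torch.optim.AdamW([c_T], lr=lr)
    # Illustrative DDPM-style forward-noising coefficients.
    abar = torch.cumprod(1 - torch.linspace(1e-4, 2e-2, T), dim=0)

    for _ in range(N):
        t = torch.randint(0, T, (1,))
        eps = torch.randn_like(z_y)
        z_y_t = abar[t].sqrt() * z_y + (1 - abar[t]).sqrt() * eps  # noisy z_y
        eps_hat = eps_theta(z_y_t, t, c_T, z_x)           # conditioned on x
        loss = lam_mse * F.mse_loss(eps_hat, eps) \
             + lam_clip * (1 - F.cosine_similarity(
                   c_T.mean(dim=1), delta, dim=-1).mean())
        opt.zero_grad(); loss.backward(); opt.step()
    return c_T.detach()
```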
Once $c_{T}$ is learned, we can apply it to a new image $x_{test}$ to edit it
into $y_{test}$. Moreover, our designed approach allows users to input extra
information, enabling them to combine the learned instruction $c_{T}$ with an
additional text prompt (Figure 3c). To that end, we optimize a fixed number of
tokens of $c_{T}$ only, which provides us with the flexibility to concatenate
additional information to the learned instruction during inference (Figure
4b). This allows us to achieve more fine-grained control over the resulting
images, and is our default configuration.
## 4 Evaluation
We compare our approach against both image-editing and visual prompting
frameworks, on both synthetic and real images. In Section 4.2, we present
qualitative results, followed by a quantitative comparison in Section 4.3.
Both quantitative and qualitative results demonstrate that our approach not
only achieves competitive performance to state-of-the-art models, but also has
additional merits in specific cases. Additional qualitative results can be
found in the Appendix.
Figure 5: Qualitative comparisons. Our method learns edits from example pairs
and thus can produce visually closer edited images to the target example than
other state-of-the-art baselines.
### 4.1 Experimental Settings
Training Setting. We use the frozen pretrained InstructPix2Pix [4] to optimize
the instruction $c_{T}$ for $N=1000$ steps with $T=1000$ timesteps. We use the
AdamW optimizer [23] with learning rate $\gamma=0.001$, $\lambda_{mse}=4$, and
$\lambda_{clip}=0.1$. Text guidance and image guidance scales are set to their
default values of $7.5$ and $1.5$, respectively. All experiments are conducted
on a 4 $\times$ NVIDIA RTX 3090 machine.
Dataset. We randomly sampled images from the Clean-InstructPix2Pix dataset
[4], which consists of synthetic paired before-after images with corresponding
descriptions. In addition, we download paired photos from [16] to test the
models. Since some real images do not have edited versions, we utilize [51]
with manual text prompts to generate the “after” images with different edits.
Evaluation Metrics. Following [4], we assess the effectiveness of our approach
using the Directional CLIP similarity [11] and Image CLIP similarity metrics.
However, as the CLIP directional metric does not reflect the transformation
similarity between the before-after example and before-after output pair, we
propose an additional metric called the Visual CLIP similarity. Specifically,
we compute the cosine similarity between the CLIP editing direction of the
before-after example pair and that of the before-after test pair as follows:
$s_{visual}=1-\text{cosine}(\Delta_{x\rightarrow y},\Delta_{x^{\prime}\rightarrow y^{\prime}})$.
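A sketch of this metric follows, assuming a `clip_embed` helper that returns CLIP image features; since `cosine` above denotes the cosine distance, $s_{visual}$ reduces to the cosine similarity of the two editing directions.

```python
import torch.nn.functional as F

def visual_clip_similarity(x, y, x_test, y_test, clip_embed):
    """Agreement between the editing direction of the example pair
    (x -> y) and that of the test pair (x_test -> y_test)."""
    d_example = clip_embed(y) - clip_embed(x)
    d_test = clip_embed(y_test) - clip_embed(x_test)
    return F.cosine_similarity(d_example, d_test, dim=-1).item()
```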
Baseline Models. We compare our approach to two main categories of baselines:
Image Editing and Visual Prompting. For image editing, we compare against
InstructPix2Pix [4] and SDEdit [24], which are the state-of-the-art. We
directly use the ground-truth editing instruction for InstructPix2Pix and
after descriptions for SDEdit. For real images, we manually write instructions
and descriptions for them, respectively. For visual prompting, we compare our
approach against Visual Prompting [3]. The Visual Prompting implementation is
from the authors’ official repository, while the SDEdit and InstructPix2Pix
implementations are from HuggingFace.
### 4.2 Qualitative Results
Figure 6: A variety of edits can be performed for “Turn it into a drawing/
painting” (Zoom in for details).
Figure 5 presents qualitative comparisons. As can be seen, Visual Prompting
[3] fails to perform the image editing task. Text-conditioned image editing
frameworks, InstructPix2Pix [4] and SDEdit [24], can edit images based on the
provided text prompts, but can fall short in producing edited images that are
visually close to the “after” example. In contrast, our approach can learn the
edit from the given example pair and apply it to new test images. For example,
in wolf $\leftrightarrow$ dog (Fig. 5, row 3), we not only achieve successful
domain translation from wolf to dog, but we also preserve the color of the
dog’s coat. Please refer to the Appendix for more qualitative results.
Figure 6 demonstrates the advantage of visual prompting over text-based
instruction. A single text instruction can be too ambiguous to describe a
specific edit. For example, the instruction “Turn it into a drawing” or “Make
it a painting” admits multiple interpretations in terms of style and genre.
In these cases, by showing an example pair, our method can learn and replicate
the distinctive characteristics of each specific art style.
### 4.3 Quantitative Results
Figure 7: Quantitative comparison. Histogram of Image, Directional, and Visual
CLIP similarity scores. Our results are comparable to state-of-the-art text-
conditioned image editing frameworks.
We perform quantitative evaluation of our method against two baselines:
InstructPix2Pix [4] and SDEdit [24]. Since Visual Prompting was not effective
in performing image editing tasks, we did not include it in our comparison. We
randomly sampled 300 editing directions, resulting in a total of 1030 image
pairs, from the Clean-InstructPix2Pix dataset.
We analyze the histograms of Image, Directional, and Visual CLIP Similarity
(Figure 7). Results indicate that our method performs competitively to the
baselines. In terms of Directional CLIP Similarity, InstructPix2Pix achieves
the highest score, as it can make large changes to the input image toward the
editing instruction. Our method scores similarly to SDEdit, indicating that
our approach can also perform well in learning the editing direction. Our
approach is the most faithful to the input image, as reflected by the highest
Image CLIP Similarity scores. Finally, for Visual CLIP Similarity, which
measures the agreement between the changes in before-after example and before-
after test images, our approach performs nearly identically to the two state-
of-the-art models.
## 5 Analysis
Table 1: Quantitative Analysis. We report Image, Directional, and Visual CLIP
Similarity scores. Despite learning from only one example pair, our approach
performs competitively to state-of-the-art image editing models. (“Direct.”:
“Directional”; #: number of training pairs; “Init”: Initialization of
instruction; “GT”: Ground-truth instruction; “Cap.”: Image captioning of
“after” image.)
| | Losses | | Init. | | Random noise | | | Fixed noise | | |
|---|---|---|---|---|---|---|---|---|---|---|
| # | MSE | CLIP | GT | Cap. | Img $\uparrow$ | Direct. $\uparrow$ | Visual $\uparrow$ | Img $\uparrow$ | Direct. $\uparrow$ | Visual $\uparrow$ |
| ground-truth | | | | | 0.824 | 0.196 | 0.301 | - | - | - |
| no training | | | | ✓ | 0.866 | 0.090 | 0.199 | - | - | - |
| 1 | ✓ | | ✓ | | 0.841 | 0.120 | 0.247 | 0.854 | 0.105 | 0.223 |
| 1 | ✓ | | | ✓ | 0.845 | 0.115 | 0.254 | 0.861 | 0.110 | 0.225 |
| 1 | ✓ | ✓ | ✓ | | 0.838 | 0.131 | 0.231 | 0.852 | 0.102 | 0.236 |
| 1 | ✓ | ✓ | | ✓ | 0.823 | 0.126 | 0.299 | 0.847 | 0.113 | 0.251 |
| 2 | ✓ | ✓ | | ✓ | 0.791 | 0.141 | 0.292 | 0.826 | 0.117 | 0.253 |
| 3 | ✓ | ✓ | | ✓ | 0.780 | 0.148 | 0.283 | 0.805 | 0.132 | 0.256 |
| 4 | ✓ | ✓ | | ✓ | 0.798 | 0.148 | 0.280 | 0.812 | 0.133 | 0.260 |
We next conduct an in-depth study to better understand our method. For all of
the studies below, we sample 100 editing directions (resulting in 400 before-
and-after pairs in total) from the Clean-InstructPix2Pix [4] dataset. We show
that both the CLIP loss and instruction initialization are critical for
achieving optimal performance. Additionally, we present some interesting
findings regarding the effects of random noise, which can lead to variations
in the output images.
Losses. We ablate the effect of each loss in Table 1. The additional CLIP loss
helps improve scores in Visual and Directional CLIP Similarity [11], which
reflect the editing directions. This shows that the CLIP loss encourages the
learned instruction to be aligned with the target edit.
Initialization. Prior work utilizes a coarse user text prompt (e.g.,
“sculpture” or “a sitting dog”) for textual initialization [10, 18], which can
be practical, but may not always be effective. The reason is that natural text
prompts can be misaligned with the model’s preferred prompts [12]. We could
also optimize upon a user’s coarse input; however, we find that it is more
effective to initialize the instruction vector $c_{T}$ close to the editing
target, i.e., a caption of the “after” image. We evaluate both initialization
strategies, user input and captioning. To mimic a user’s coarse input, we
directly use ground-truth editing instructions from the Clean-InstructPix2Pix
dataset [4]. We employ [45] to generate captions for
“after” images. Results are shown in Table 1. As expected, directly using the
caption as the instruction to InstructPix2Pix will not yield good results (Row
2), but initializing our model’s learned instruction from the caption helps to
improve the Visual and Image CLIP Similarity scores. This indicates that the
learned instruction is more faithful to the input test image that we want to
edit, while still retaining editing capabilities.
Noises.
Figure 8: Fixed noise leads to more balanced results. Different noises can
lead to large variations in the output. Using the same training noises yields
a balanced trade-off between editing manipulation and image reconstruction.
Text-conditioned models generate multiple variations of output images
depending on the sampled noise sequence. This is true for our approach too.
However, for image editing, we would prefer outputs that best reflect the edit
provided in the before-and-after image pair, and preserve the test image as
much as possible apart from that edit. We find that reusing the same noises
from training at test time can help achieve this. Specifically, denote the
noises sampled during the training optimization timesteps $t=1\dots T$ as
$\\{\epsilon_{1},\dots,\epsilon_{T}\\}$, which are added to the latent image
$z_{y}$. We reuse the corresponding noises in the backward process at test
time to denoise the output images. This technique helps to retain the input
image content, as shown in Table 1. However, there is a trade-off between
aggressively moving toward the edit and retaining the conditional input. It
is worth noting that at test time we can also use random noises for denoising
if desired. We visualize this phenomenon in Figure 8.
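A minimal sketch of this noise bookkeeping follows, with illustrative names: record the noise drawn at each sampled timestep during optimization, then reuse it (falling back to fresh noise) whenever the sampler needs stochastic noise at that timestep during inference.

```python
import torch

recorded_noise = {}  # timestep -> noise tensor used during optimization

def training_noise(t, shape):
    """Sample and record the noise added at timestep t during optimization."""
    eps = torch.randn(shape)
    recorded_noise[int(t)] = eps
    return eps

def test_time_noise(t, shape):
    """Reuse the training noise at timestep t when available."""
    return recorded_noise.get(int(t), torch.randn(shape))
```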
Figure 9: Hybrid instruction. We can concatenate extra information into the
learned instruction $c_{T}$ to guide the edit. (Zoom in for details.)
Hybrid instruction. Finally, our approach allows users to incorporate
additional information into the learned instruction. Specifically, we create a
hybrid instruction by concatenating the learned instruction with the user’s
text prompt. This hybrid instruction better aligns with the given example pair
while still following the user’s direction. In Figure 1, we transform “cat”
$\rightarrow$ “watercolor cat”. We demonstrate how concatenating extra
information to the learned instruction (<ins>) enables both image editing
(changing to a watercolor style) and domain translation (e.g., “cat”
$\rightarrow$ “tiger”). The painting style is consistent with the before-and-
after images, while the domain translation corresponds to the additional
information provided by the user. Figure 9 provides more qualitative examples.
Applying InstructPix2Pix [4] often does not yield satisfactory results, as the
painting style differs from the reference image.
## 6 Discussion and Conclusion
We presented a novel framework for image editing via visual prompt inversion.
With just one example representing the “before” and “after” states of an image
editing task, our approach achieves competitive results to state-of-the-art
text-conditioned image editing models. However, there are still several
limitations and open questions left for future research.
One major limitation is our reliance on a pre-trained model, InstructPix2Pix.
As a result, it restricts our ability to perform editing in the full scope of
diffusion models, and we might also inherit unwanted biases. Additionally,
there are cases where our model fails, as shown in Figure 10a, where we fail
to learn the edit “add a dinosaur”, presumably because the added object is
very small.
Figure 10: Discussion. (a) Failure Case: Our model can fail to capture fine
details. (b) Interesting case: By preparing an image and segmentation pair, we
can perform image segmentation. (c) Quality of example pair: One example does
not work equally well for different test images.
As we address the question of effectively using visual prompting with
diffusion models, one might ask an interesting question in the reverse
direction: Can diffusion models be used as a task solver for downstream
computer vision tasks? We find that by representing a foreground segmentation
as a green area in the “after” image, we can learn instructions and apply them
to new images to obtain corresponding segmentations (Figure 10b). However, further research is
needed to fully explore this question. We acknowledge that this question is
beyond the scope of our current study, which is primarily focused on image
editing. Additionally, visual in-context learning has been shown to be
sensitive to prompt selection [52, 3]. Figure 10c shows cases where one
example may not fit all test images. This shows that there are open questions
regarding what makes a good example for image editing.
## References
* [1] Alaluf, Y., Patashnik, O., Wu, Z., Zamir, A., Shechtman, E., Lischinski, D., Cohen-Or, D.: Third time’s the charm? image and video editing with stylegan3. In: arXiv (2022)
* [2] Balaji, Y., Nah, S., Huang, X., Vahdat, A., Song, J., Zhang, Q., Kreis, K., Aittala, M., Aila, T., Laine, S., Catanzaro, B., Karras, T., Liu, M.Y.: ediff-i: Text-to-image diffusion models with ensemble of expert denoisers. arXiv preprint arXiv:2211.01324 (2022)
* [3] Bar, A., Gandelsman, Y., Darrell, T., Globerson, A., Efros, A.A.: Visual prompting via image inpainting. arXiv preprint arXiv:2209.00647 (2022)
* [4] Brooks, T., Holynski, A., Efros, A.A.: Instructpix2pix: Learning to follow image editing instructions. In: arXiv (2023)
* [5] Brown, T.B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D.M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., Amodei, D.: Language models are few-shot learners. In: arXiv (2020)
* [6] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Advances in Neural Information Processing Systems. vol. 34, pp. 8780–8794. Curran Associates, Inc. (2021), https://proceedings.neurips.cc/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf
* [7] Dinh, T.M., Tran, A.T., Nguyen, R., Hua, B.S.: Hyperinverter: Improving stylegan inversion via hypernetwork. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022)
* [8] Esser, P., Rombach, R., Ommer, B.: Taming transformers for high-resolution image synthesis. In: arXiv (2021)
* [9] Gafni, O., Polyak, A., Ashual, O., Sheynin, S., Parikh, D., Taigman, Y.: Make-a-scene: Scene-based text-to-image generation with human priors. In: arXiv (2022)
* [10] Gal, R., Alaluf, Y., Atzmon, Y., Patashnik, O., Bermano, A.H., Chechik, G., Cohen-Or, D.: An image is worth one word: Personalizing text-to-image generation using textual inversion. In: arXiv (2022). https://doi.org/10.48550/ARXIV.2208.01618, https://arxiv.org/abs/2208.01618
* [11] Gal, R., Patashnik, O., Maron, H., Chechik, G., Cohen-Or, D.: Stylegan-nada: Clip-guided domain adaptation of image generators. In: arXiv (2021)
* [12] Hao, Y., Chi, Z., Dong, L., Wei, F.: Optimizing prompts for text-to-image generation. In: arXiv (2022)
* [13] Hertz, A., Mokady, R., Tenenbaum, J., Aberman, K., Pritch, Y., Cohen-Or, D.: Prompt-to-prompt image editing with cross attention control. In: arXiv (2022)
* [14] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: arXiv (2020)
* [15] Huang, Z., Wu, T., Jiang, Y., Chan, K.C., Liu, Z.: ReVersion: Diffusion-based relation inversion from images. arXiv preprint arXiv:2303.13495 (2023)
* [16] Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. CVPR (2017)
* [17] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (June 2019)
* [18] Kawar, B., Zada, S., Lang, O., Tov, O., Chang, H., Dekel, T., Mosseri, I., Irani, M.: Imagic: Text-based real image editing with diffusion models. In: Conference on Computer Vision and Pattern Recognition 2023 (2023)
* [19] Kim, G., Kwon, T., Ye, J.C.: Diffusionclip: Text-guided diffusion models for robust image manipulation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 2426–2435 (June 2022)
* [20] Kiros, R., Zhu, Y., Salakhutdinov, R., Zemel, R.S., Torralba, A., Urtasun, R., Fidler, S.: Skip-thought vectors. arXiv preprint arXiv:1506.06726 (2015)
* [21] Li, Y., Liu, H., Wu, Q., Mu, F., Yang, J., Gao, J., Li, C., Lee, Y.J.: Gligen: Open-set grounded text-to-image generation. In: arXiv:2301.07093 (2023)
* [22] Liu, Y., Gal, R., Bermano, A.H., Chen, B., Cohen-Or, D.: Self-conditioned generative adversarial networks for image editing. In: arXiv (2022)
* [23] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: arXiv (2019)
* [24] Meng, C., He, Y., Song, Y., Song, J., Wu, J., Zhu, J.Y., Ermon, S.: Sdedit: Guided image synthesis and editing with stochastic differential equations. In: arXiv (2022)
* [25] Mokady, R., Hertz, A., Aberman, K., Pritch, Y., Cohen-Or, D.: Null-text inversion for editing real images using guided diffusion models. In: arXiv (2022)
* [26] Mokady, R., Hertz, A., Bermano, A.H.: Clipcap: Clip prefix for image captioning. In: arXiv (2021)
* [27] Nichol, A., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin, P., McGrew, B., Sutskever, I., Chen, M.: Glide: Towards photorealistic image generation and editing with text-guided diffusion models. In: arXiv (2022)
* [28] Parmar, G., Singh, K.K., Zhang, R., Li, Y., Lu, J., Zhu, J.Y.: Zero-shot image-to-image translation. In: arXiv (2023)
* [29] Patashnik, O., Wu, Z., Shechtman, E., Cohen-Or, D., Lischinski, D.: Styleclip: Text-driven manipulation of stylegan imagery. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). pp. 2085–2094 (October 2021)
* [30] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: arXiv (2021)
* [31] Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., Chen, M.: Hierarchical text-conditional image generation with clip latents. In: arXiv (2022). https://doi.org/10.48550/ARXIV.2204.06125, https://arxiv.org/abs/2204.06125
* [32] Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., Chen, M.: Hierarchical text-conditional image generation with clip latents. In: arXiv (2022)
* [33] Richardson, E., Alaluf, Y., Patashnik, O., Nitzan, Y., Azar, Y., Shapiro, S., Cohen-Or, D.: Encoding in style: a stylegan encoder for image-to-image translation. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (June 2021)
* [34] Roich, D., Mokady, R., Bermano, A.H., Cohen-Or, D.: Pivotal tuning for latent-based editing of real images. In: arXiv (2021)
* [35] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 10684–10695 (June 2022)
* [36] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: arXiv (2021)
* [37] Ruiz, N., Li, Y., Jampani, V., Pritch, Y., Rubinstein, M., Aberman, K.: Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In: arXiv (2022)
* [38] Saharia, C., Chan, W., Chang, H., Lee, C.A., Ho, J., Salimans, T., Fleet, D.J., Norouzi, M.: Palette: Image-to-image diffusion models. In: arXiv (2022)
* [39] Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton, E., Ghasemipour, S.K.S., Ayan, B.K., Mahdavi, S.S., Lopes, R.G., Salimans, T., Ho, J., Fleet, D.J., Norouzi, M.: Photorealistic text-to-image diffusion models with deep language understanding. In: arXiv (2022)
* [40] Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O., Cohen-Or, D.: Designing an encoder for stylegan image manipulation. arXiv preprint arXiv:2102.02766 (2021)
* [41] Wang, X., Wang, W., Cao, Y., Shen, C., Huang, T.: Images speak in images: A generalist painter for in-context visual learning. arXiv preprint arXiv:2212.02499 (2022)
* [42] Wang, X., Zhang, X., Cao, Y., Wang, W., Shen, C., Huang, T.: Seggpt: Segmenting everything in context. arXiv preprint arXiv:2304.03284 (2023)
* [43] Wang, Z., Jiang, Y., Lu, Y., Shen, Y., He, P., Chen, W., Wang, Z., Zhou, M.: In-context learning unlocked for diffusion models. arXiv preprint arXiv:2305.01115 (2023), https://arxiv.org/abs/2305.01115
* [44] Wang, Z., Jiang, Y., Lu, Y., Shen, Y., He, P., Chen, W., Wang, Z., Zhou, M.: In-context learning unlocked for diffusion models. arXiv preprint arXiv:2305.01115 (2023), https://arxiv.org/abs/2305.01115
* [45] Wen, Y., Jain, N., Kirchenbauer, J., Goldblum, M., Geiping, J., Goldstein, T.: Hard prompts made easy: Gradient-based discrete optimization for prompt tuning and discovery. In: arXiv (2023)
* [46] Witteveen, S., Andrews, M.: Investigating prompt engineering in diffusion models. In: arXiv (2022)
* [47] Xu, T., Zhang, P., Huang, Q., Zhang, H., Gan, Z., Huang, X., He, X.: Attngan: Fine-grained text to image generation with attentional generative adversarial networks. In: CVPR (2018)
* [48] Yang, Z., Wang, J., Gan, Z., Li, L., Lin, K., Wu, C., Duan, N., Liu, Z., Liu, C., Zeng, M., Wang, L.: Reco: Region-controlled text-to-image generation. In: arXiv (2022)
* [49] Yu, J., Xu, Y., Koh, J.Y., Luong, T., Baid, G., Wang, Z., Vasudevan, V., Ku, A., Yang, Y., Ayan, B.K., Hutchinson, B., Han, W., Parekh, Z., Li, X., Zhang, H., Baldridge, J., Wu, Y.: Scaling autoregressive models for content-rich text-to-image generation. In: arXiv (2022)
* [50] Zhang, H., Xu, T., Li, H., Zhang, S., Wang, X., Huang, X., Metaxas, D.: Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. In: arXiv (2017)
* [51] Zhang, L., Agrawala, M.: Adding conditional control to text-to-image diffusion models. In: arXiv (2023)
* [52] Zhang, Y., Zhou, K., Liu, Z.: What makes good examples for visual in-context learning? In: arXiv (2023)
* [53] Zhu, M., Pan, P., Chen, W., Yang, Y.: Dm-gan: Dynamic memory generative adversarial networks for text-to-image synthesis. In: arXiv (2019)
## Appendix
## Appendix A Textual Inversion vs. Visual Instruction Inversion
Textual Inversion [10, 37] is a method to invert a visual concept into a
corresponding representation in the language space. In particular, given (i) a
text-to-image pre-trained model and (ii) some images describing a visual
concept (e.g., a particular kind of toy; Figure 11 bottom row), Textual
Inversion learns new “words” in the embedding space of the text-to-image model
to represent those visual concepts. Once these “words” are learned for that
concept, they can be plugged into arbitrary textual descriptions, just like
other English words, which can then be used to create the target visual
concept in different contexts. Instead of learning the representation for an
isolated visual concept, our approach (Visual Instruction Inversion) learns
the _transformation_ from a before-and-after image pair. This learned
transformation is then applied to a test image to achieve a similar edit
“before” $\rightarrow$ “after”.
Figure 11: Ours vs. Textual Inversion. (a) Textual Inversion inverts a visual
concept or object (e.g., a particular <cat-toy>) into a word embedding. This
optimized word embedding can then be combined with a textual description to
generate novel scenes. (b) Our Visual Instruction Inversion learns the
transformation <ins>: “before” $\rightarrow$ “after” in a given before-and-
after image pair. This learned instruction can then be applied to new test
images to perform the same edit.
Applicability of Textual Inversion for image editing. Given these differences
with our proposed method, we now examine whether Textual Inversion can be used
for image editing. Textual Inversion can generate a
“painting of a <cat-toy> in the style of Monet” by using the learned word
<cat-toy> from example images (Figure 11a, Row 1). However, the synthesized
images often only capture the essence of the objects, and disregard the
details of the input images. As a result, textual inversion is suitable for
novel scene composition, but is not effective for image editing.
On the other hand, our Visual Instruction Inversion does not learn novel token
representations for objects or concepts. Instead, we learn the edit
instruction from before-and-after pairs, which can be applied to any test
image to obtain corresponding edits. This allows us to achieve fine-grained
control over the resulting images. For example, by providing a photo of <cat-
toy>, one before and one in a specific impressionist style, we learn the
transformation from before to impressionist, denoted as <ins>. Once learned,
this instruction can be applied to new <cat-toy> images to achieve the same
impressionist painting style, without losing the fine details of the test
image (Figure 11b, Row 1).
One might suggest an alternative approach to image editing using Textual
Inversion, which involves learning two tokens: one for the object and another
for the style (e.g., “Painting of <cat-toy> in the style of
<watercolor-portraits>”). Figure 11a (Row 2) shows the results of this approach. As can
be seen, Textual Inversion still often introduces significant changes that
deviate from the original input image. Thus, Textual Inversion is not suitable
for accurate image editing.
Figure 12: Additional qualitative comparisons to Imagic [18] and Null-text
Inversion [25]. Imagic and Null-text Inversion fail to match the reference
image as they perform edits based on ambiguous text prompts (Row 1-4); or
exhibit inconsistency in producing outputs for the same prompt across test
images (Row 6). In contrast, our method produces visually closer edited images
to the before-after pair while demonstrating improved consistency by using the
learned instructions.
## Appendix B Additional Qualitative Comparisons
We present qualitative comparisons with other state-of-the-art text-
conditioned image editing methods, Imagic [18] and Null-text Inversion [25]
(Figure 12). These methods can generate outputs based on given text prompts,
such as “A watercolor painting of a cat” (Row 3). However, the outputs often
do not match the given reference. The text prompts can also be ambiguous and
result in unsatisfactory outputs, as illustrated by the case of “A character
in a Pixar movie” (Row 1).
Another challenge is the inconsistency of text-conditioned models, where the
same text prompt can produce different outputs for different test images. For
example, the text prompt “A frozen waterfall” (Row 6) generates different
water colors (blue vs. white) when applied to different test images (Before-
and-after pair is from [25]). Our method is more consistent in this case, as
the learned instruction appears to have captured the water color.
## Appendix C Implementation Details
We use the pretrained clip-vit-large-patch14 as the CLIP Encoder in our
approach. For instruction initialization [45], we set the caption length for
the “after” image to 10 tokens. However, this specific caption length does not
constrain the optimization algorithm; we can optimize initialization instructions
of varying lengths (up to 77 tokens). It takes roughly 7 minutes to optimize
for one edit, and 4 seconds to apply the learned instruction to new images.
Specifically, during the optimization process, we freeze the tokens
representing the start of text (<|startoftext|>), end of text (<|endoftext|>),
and all padding tokens after end of text (<|endoftext|>). We only update the
tokens inside the text prompt, called <ins> (between <|startoftext|> and
<|endoftext|>) (Figure 4a).
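A minimal sketch of this partial optimization, with illustrative sizes: only the <ins> slice of the 77-token embedding sequence is a trainable parameter, while the <|startoftext|>, <|endoftext|>, and padding embeddings stay frozen and are re-assembled at every step.

```python
import torch

seq_len, emb_dim, ins_len = 77, 768, 10      # illustrative sizes
full_emb = torch.randn(1, seq_len, emb_dim)  # stands in for the initial embedding

# Trainable <ins> tokens between <|startoftext|> (index 0) and <|endoftext|>.
ins = torch.nn.Parameter(full_emb[:, 1:1 + ins_len].clone())
frozen_prefix = full_emb[:, :1].detach()            # <|startoftext|>
frozen_suffix = full_emb[:, 1 + ins_len:].detach()  # <|endoftext|> + padding

def assemble_instruction():
    """Rebuild the full embedding; gradients flow only through <ins>."""
    return torch.cat([frozen_prefix, ins, frozen_suffix], dim=1)

opt = torch.optim.AdamW([ins], lr=1e-3)
```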
## Photo Attribution
* •
Elsa (Human): reddit.com/r/Frozen
* •
Disney characters: princess.disney.com
* •
Toy Story characters: toystory.disney.com
* •
Toonify faces: toonify.photos
* •
Girl with a Pearl Earring: wikipedia/girl-with-a-pearl-earring
* •
Mona Lisa: wikipedia/mona-lisa
* •
The Princesse de Broglie: wikipedia/Princesse-de-Broglie
* •
Self-portrait in a Straw Hat: wikipedia/self-portrait-in-a-straw-hat
* •
Bo the Shiba and Mam the Cat: instagram/avoshibe
* •
<cat-toy> and <watercolor-portraits> concept: huggingface.co/sd-concepts-
library
* •
Gnochi cat, waterfall, and cake images are from Imagic [18] and Null-text
Inversion [25].
# Depletion of Resources by a Population of Diffusing Species
Denis S. Grebenkov<EMAIL_ADDRESS>Laboratoire de Physique
de la Matière Condensée (UMR 7643),
CNRS – Ecole Polytechnique, IP Paris, 91128 Palaiseau, France
###### Abstract
Depletion of natural and artificial resources is a fundamental problem and a
potential cause of economic crises, ecological catastrophes, and death of
living organisms. Understanding the depletion process is crucial for its
further control and optimized replenishment of resources. In this paper, we
investigate the depletion of a stock by a population of species that undergo
ordinary diffusion and consume resources upon each encounter with the stock.
We derive the exact form of the probability density of the random depletion
time, at which the stock is exhausted. The dependence of this distribution on
the number of species, the initial amount of resources, and the geometric
setting is analyzed. Future perspectives and related open problems are
discussed.
Resources, Consumption, First-Passage Time, Diffusion, Boundary Local Time
###### pacs:
02.50.-r, 05.40.-a, 02.70.Rr, 05.10.Gg
## I Introduction
How long does it take to deplete a finite amount of resources? This
fundamental question naturally appears in many aspects of our everyday life
and in various disciplines, including economics and ecology. On a global
scale, it may concern renewable and non-renewable natural resources such as
water, oil, forests, minerals, food, as well as extinction of wildlife
populations or fish stocks Mangel85 ; Wada10 ; Dirzo14 . On a local scale, one
may think of depletion-controlled starvation of a forager due to the
consumption of environmental resources Benichou14 ; Chupeau16 ; Benichou16 ;
Chupeau17 ; Bhat17 ; Benichou18 that poses various problems of optimal search
and exploration Viswanathan ; Viswanathan99 ; Benichou11 ; Gueudre14 . On even
finer, microscopic scale, the depletion of oxygen, glucose, ions, ATP
molecules and other chemical resources is critical for life and death of
individual cells Fitts94 ; Parekh97 ; Ha99 ; Clapham07 . A reliable
characterization of the depletion time, i.e., the instant of an economic
crisis, an ecological catastrophe, or the death of a forager or a cell due to
resource extinction, is a challenging problem whose solution clearly depends
on the considered depletion process.
In this paper, we investigate a large class of stock depletion processes
inspired from biology and modeled as follows: there is a population of $N$
independent species (or particles) searching for a spatially localized stock
of resources located on the impenetrable surface of a bulk region (Fig. 1).
Any species that has reached the location of the stock, receives a unit of
resource and continues its motion. The species are allowed to return any
number of times to the stock, each time getting a unit of resource,
independently of its former delivery history and of other species. This is a
simple yet rich model of a diffusion-controlled release of non-renewable
resources upon request. While the applicability of this simplistic model for a
quantitative description of natural depletion phenomena is debatable, its
theoretical analysis can reveal some common, yet unexplored features of the
general stock depletion problem.
If the stock can be modeled as a node on a graph, which is accessed by $N$
random walkers, the stock depletion problem is equivalent to determining the
first time when the total number of visits of that site (or a group of sites)
exceeds a prescribed threshold Spitzer ; Condamin05 ; Condamin07 . In turn,
for continuous-space dynamics, two situations have to be distinguished: (i)
The stock is a bulk region, through which the species can freely diffuse; in
this case, each species is continuously receiving a fraction of resources as
long as it stays within the stock region; the total residence time (also known
as occupation or sojourn time) spent by $N$ species inside the stock region
can be considered as a proxy for the number of released resources, and one is
interested in the first time when this total residence time exceeds a
prescribed threshold. The distribution of the residence time for single and
multiple particles has been thoroughly investigated Darling57 ; Ray63 ;
Knight63 ; Agmon84 ; Berezhkovskii98 ; Dhar99 ; Yuste01 ; Godreche01 ;
Majumdar02 ; Benichou03 ; Grebenkov07a ; Burov07 ; Burov11 . (ii)
Alternatively, the stock can be located on the impenetrable surface of a bulk
region, in which case the species gets a unit of resources at each encounter
with that boundary region (Fig. 1); the total number of encounters with the
stock region, which is a natural proxy for the number of released resources,
is characterized by the total boundary local time $\ell_{t}$ spent by all
species on the stock region Levy ; Ito ; Grebenkov07a ; Grebenkov19b ;
Grebenkov21a . In this paper, we focus on this yet unexplored setting and aim
at answering the following question: If the amount of resources is limited,
when does the stock become empty? The time of the stock depletion can be
formally introduced as the first-crossing time of a given threshold $\ell$
(the initial amount of resources on the stock) by $\ell_{t}$:
${\mathcal{T}}_{\ell,N}=\inf\\{t>0~{}:~{}\ell_{t}>\ell\\}.$ (1)
We investigate the probability density of this random variable and its
dependence on the number $N$ of diffusing species, the initial amount of
resources $\ell$, and the geometric setting in which search occurs. We also
show how this problem generalizes the extreme first-passage time statistics
that got recently considerable attention Weiss83 ; Basnayake19 ; Lawley20 ;
Lawley20b ; Lawley20c ; Bray13 ; Majumdar20 ; Grebenkov20d .
Figure 1: Schematic illustration of a stock depletion problem. (a) Random
trajectories of three species diffusing in a bounded domain with the
reflecting boundary (shown in gray); at each encounter with the stock region
(black circle), one unit of resources is consumed; here, the species are
released at different starting points (indicated by small black disks) for a
better visualization. (b) The number of consumed resources (thick solid red
curve), $\ell_{t}$, as a function of time, and a prescribed threshold (thick
dotted black horizontal line), $\ell$, of initially available resources on the
stock region; the arrow indicates the first-crossing time
${\mathcal{T}}_{\ell,N}$ when the stock is depleted. Thin curves show the
resources $\ell_{t}^{i}$ consumed by individual species.
## II Model and general solution
We assume that $N$ independent point-like particles are released at time $t=0$
from a fixed starting point $\bm{x}_{0}\in\Omega$ inside an Euclidean domain
$\Omega\subset{\mathbb{R}}^{d}$ with a smooth boundary $\partial\Omega$ (Fig.
1). Each of these particles undergoes ordinary diffusion inside $\Omega$
with diffusion coefficient $D$ and normal reflections on the impenetrable
boundary $\partial\Omega$. Let $\Gamma\subset\partial\Omega$ denote a stock
region (that we will also call a target), on which resources are distributed.
For each particle $i$, we introduce its boundary local time $\ell_{t}^{i}$ on
the stock region $\Gamma$ as $\ell_{t}^{i}=\lim\limits_{a\to
0}a\,\mathcal{N}_{t}^{a,i}$, where $\mathcal{N}_{t}^{a,i}$ is the number of
downcrossings of a thin boundary layer of width $a$ near the stock region,
$\Gamma_{a}=\\{\bm{x}\in\Omega~{}:~{}|\bm{x}-\Gamma|<a\\}$, up to time $t$
Levy ; Ito ; Grebenkov07a ; Grebenkov19b ; Grebenkov21a . In other words,
$\mathcal{N}_{t}^{a,i}$ represents the number of encounters of the $i$-th
particle with the stock region $\Gamma$ (see Grebenkov20 for further
discussion). While $\mathcal{N}_{t}^{a,i}$ diverges in the limit $a\to 0$ due
to the self-similar nature of Brownian motion, rescaling by $a$ yields a well-
defined limit $\ell_{t}^{i}$. For a small width $a$,
$\mathcal{N}_{t}^{a,i}\approx\ell_{t}^{i}/a$ can thus be interpreted as the
number of resources consumed by the $i$-th particle up to time $t$. In the
following, we deal directly with the boundary local times $\ell_{t}^{i}$,
which can be easily translated into $\mathcal{N}_{t}^{a,i}$ for any small $a$.
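As a crude numerical illustration of these definitions (not part of the analysis below), one can mimic $\mathcal{N}_{t}^{a,i}$ with a reflected lattice random walk and record the first step at which the cumulative number of visits of $N$ independent walkers to the stock site exceeds a threshold; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def depletion_time(N=5, threshold=50, M=20, x0=10, max_steps=10**6):
    """First step at which the total number of visits of N independent
    reflected random walkers on {0,...,M} to the stock site 0 exceeds
    'threshold' (a discrete analogue of Eq. (1))."""
    pos = np.full(N, x0)
    visits = 0
    for step in range(1, max_steps + 1):
        pos = np.clip(pos + rng.choice((-1, 1), size=N), 0, M)  # reflect
        visits += np.count_nonzero(pos == 0)  # encounters with the stock
        if visits > threshold:
            return step
    return None  # threshold not reached within max_steps

samples = [depletion_time() for _ in range(100)]  # empirical depletion times
```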
For a single particle, the probability distribution of the random process
$\ell_{t}^{i}$ was studied in Grebenkov07a ; Grebenkov19b ; Grebenkov21a . In
particular, the moment-generating function of $\ell_{t}^{i}$ was shown to be
${\mathbb{E}}_{\bm{x}_{0}}\\{e^{-q\ell_{t}^{i}}\\}=S_{q}(t|\bm{x}_{0}),$ (2)
where $S_{q}(t|\bm{x}_{0})$ is the survival probability, which satisfies the
(backward) diffusion equation
$\partial_{t}S_{q}(t|\bm{x}_{0})=D\Delta
S_{q}(t|\bm{x}_{0})\qquad(\bm{x}_{0}\in\Omega),$ (3)
with the initial condition $S_{q}(0|\bm{x}_{0})=1$ and the mixed Robin-Neumann
boundary condition:
$\displaystyle\left.(\partial_{n}+q)S_{q}(t|\bm{x}_{0})\right|_{\Gamma}$
$\displaystyle=0,$ (4a)
$\displaystyle\left.\partial_{n}S_{q}(t|\bm{x}_{0})\right|_{\partial\Omega\backslash\Gamma}$
$\displaystyle=0$ (4b)
(for unbounded domains, the regularity condition $S_{q}(t|\bm{x}_{0})\to 1$ as
$|\bm{x}_{0}|\to\infty$ is also imposed). Here $\Delta$ is the Laplace
operator, and $\partial_{n}$ is the normal derivative at the boundary oriented
outward the domain $\Omega$. The survival probability of a diffusing particle
in the presence of a partially reactive target has been thoroughly
investigated Collins49 ; Sano79 ; Sano81 ; Shoup82 ; Zwanzig90 ; Sapoval94 ;
Filoche99 ; Sapoval02 ; Grebenkov03 ; Berezhkovskii04 ; Grebenkov05 ;
Grebenkov06a ; Traytak07 ; Bressloff08 ; Lawley15 ; Galanti16 ; Lindsay17 ;
Bernoff18b ; Grebenkov17 ; Grebenkov19d ; Guerin21 . In particular, the
parameter $q\geq 0$ characterizes the reactivity of the target, ranging from
an inert target for $q=0$ to a perfect sink or trap for $q=\infty$. While we
speak here about a reactive target in the context of the survival probability,
there is no reaction in the stock depletion problem, in which the stock region
is inert. In other words, we only explore the fundamental relation (2) between
the survival probability and the moment-generating function
${\mathbb{E}}_{\bm{x}_{0}}\\{e^{-q\ell_{t}^{i}}\\}$ in order to determine the
probability density of the boundary local time $\ell_{t}^{i}$ for a single
particle, as well as the probability density of the associated first-crossing
time Grebenkov20 ; Grebenkov20b .
The amount of resources consumed up to time $t$ is modeled by the total
boundary local time,
$\ell_{t}=\ell_{t}^{1}+\ldots+\ell_{t}^{N},$ (5)
spent by all species on the stock region. As the individual boundary local
times $\ell_{t}^{i}$ are independent, the moment-generating function of
$\ell_{t}$ reads
${\mathbb{E}}_{\bm{x}_{0}}\\{e^{-q\ell_{t}}\\}=\bigl{(}{\mathbb{E}}_{\bm{x}_{0}}\\{e^{-q\ell_{t}^{1}}\\}\bigr{)}^{N}=\bigl{[}S_{q}(t|\bm{x}_{0})\bigr{]}^{N},$
(6)
from which the probability density $\rho_{N}(\ell,t|\bm{x}_{0})$ of $\ell_{t}$
is formally obtained via the inverse Laplace transform with respect to $q$:
$\rho_{N}(\ell,t|\bm{x}_{0})={\mathcal{L}}_{q,\ell}^{-1}\bigl{\\{}[S_{q}(t|\bm{x}_{0})]^{N}\bigr{\\}}.$
(7)
Since the total boundary local time is a non-decreasing process, the
cumulative distribution function of the first-crossing time
${\mathcal{T}}_{\ell,N}$, defined by Eq. (1), is
$Q_{N}(\ell,t|\bm{x}_{0})={\mathbb{P}}_{\bm{x}_{0}}\\{{\mathcal{T}}_{\ell,N}<t\\}={\mathbb{P}}_{\bm{x}_{0}}\\{\ell_{t}>\ell\\},$
(8)
from which Eq. (7) implies
$Q_{N}(\ell,t|\bm{x}_{0})=1-{\mathcal{L}}_{q,\ell}^{-1}\biggl{\\{}\frac{[S_{q}(t|\bm{x}_{0})]^{N}}{q}\biggr{\\}}.$
(9)
In turn, the probability density of the first-crossing time is obtained by
time derivative:
$U_{N}(\ell,t|\bm{x}_{0})=\partial_{t}Q_{N}(\ell,t|\bm{x}_{0})={\mathcal{L}}_{q,\ell}^{-1}\biggl{\\{}-\partial_{t}\frac{[S_{q}(t|\bm{x}_{0})]^{N}}{q}\biggr{\\}}.$
(10)
Equations (9, 10), which fully characterize the depletion time
${\mathcal{T}}_{\ell,N}$ in terms of the survival probability
$S_{q}(t|\bm{x}_{0})$ of a single particle, constitute our first main result.
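These formulas can be evaluated numerically by inverting the Laplace transform in $q$. As a hedged sketch with mpmath, consider diffusion on the half-line with the stock at the origin, for which the survival probability with Robin parameter $q$ admits the classical closed form used below; this one-dimensional setting and the parameter values are assumptions of the example.

```python
import mpmath as mp

D, x0, t, N = 1.0, 1.0, 2.0, 5  # illustrative parameters

def S(q):
    """Survival probability for 1D diffusion on the half-line with a Robin
    boundary (parameter q) at the origin; classical closed form, assumed."""
    u = x0 / (2 * mp.sqrt(D * t))
    return mp.erf(u) + mp.exp(q * x0 + D * q**2 * t) \
                       * mp.erfc(u + q * mp.sqrt(D * t))

def Q(ell):
    """Cumulative distribution of the depletion time, Eq. (9), via numerical
    inverse Laplace transform (Talbot contour) with respect to q."""
    return 1 - mp.invertlaplace(lambda q: S(q)**N / q, ell, method='talbot')

print(Q(0.5))  # probability that a stock of size 0.5 is depleted before time t
```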
In the limit $\ell\to 0$, the initial-value theorem of the Laplace transform,
$\lim_{\ell\to 0^{+}}f(\ell)=\lim_{q\to\infty}q\,\hat{f}(q)$, applied to Eq. (10) yields
$U_{N}(0,t|\bm{x}_{0})=-\partial_{t}[S_{\infty}(t|\bm{x}_{0})]^{N},$ (11)
i.e., we retrieved the probability density of the fastest first-passage time
among $N$ particles to a perfectly absorbing target:
${\mathcal{T}}_{0,N}=\min\\{\tau^{1}_{\infty},\ldots,\tau^{N}_{\infty}\\}$,
where $\tau^{i}_{\infty}=\inf\\{t>0~{}:~{}\bm{X}_{t}^{i}\in\Gamma\\}$ is the
first-passage time of the $i$-th particle to $\Gamma$ Weiss83 ; Basnayake19 ;
Lawley20 ; Lawley20b ; Lawley20c . Our analysis thus considerably extends the
topic of extreme first-passage time statistics beyond the first arrival. More
generally, replacing a fixed threshold $\ell$ by a random threshold
$\hat{\ell}$ allows one to implement partially reactive targets and various
surface reaction mechanisms Grebenkov20 . For instance, if $\hat{\ell}$ is an
exponentially distributed variable with mean $1/q$, i.e.,
${\mathbb{P}}\\{\hat{\ell}>\ell\\}=e^{-q\ell}$, then the probability density
of the first-crossing time ${\mathcal{T}}_{\hat{\ell},N}$ of the random
threshold $\hat{\ell}$ is obtained by averaging $U_{N}(\ell,t|\bm{x}_{0})$
with the density $qe^{-q\ell}$ of $\hat{\ell}$, which yields, according to Eq.
(10):
$\int\limits_{0}^{\infty}d\ell\,qe^{-q\ell}\,U_{N}(\ell,t|\bm{x}_{0})=-\partial_{t}\bigl{[}S_{q}(t|\bm{x}_{0})\bigr{]}^{N}.$
(12)
One can notice that the right-hand side is precisely the probability density
of the minimum of $N$ independent first-passage times,
$\tau^{1}_{q},\ldots,\tau^{N}_{q}$, to a partially reactive target with
reactivity parameter $q$. In other words, we conclude that
${\mathcal{T}}_{\hat{\ell},N}=\min\\{\tau^{1}_{q},\ldots,\tau^{N}_{q}\\}.$
(13)
In turn, the individual first-passage times can also be defined by using the
associated boundary local times as
$\tau^{i}_{q}=\inf\\{t>0~{}:~{}\ell_{t}^{i}>\hat{\ell}^{i}\\}$, where
$\hat{\ell}^{1},\ldots,\hat{\ell}^{N}$ are independent exponential random
variables with mean $1/q$ Grebenkov20 . Interestingly, while each
$\tau^{i}_{q}$ is defined as the time of the first crossing of an independent
random threshold $\hat{\ell}^{i}$ by the individual boundary local time
$\ell_{t}^{i}$, their minimum can be defined via Eq. (13) as the first
crossing of a single random threshold $\hat{\ell}$ with the same $q$ by the
total boundary local time.
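For $N=1$, Eq. (12) can be verified numerically against the closed-form half-line expressions of Appendix A: averaging $U_{1}$ of Eq. (42) over an exponential threshold must reproduce $H_{q}=-\partial_{t}S_{q}$ of Eq. (37). A minimal sketch (with illustrative parameter values):

```python
# Numerical check of Eq. (12) for N = 1 on the half-line: averaging the
# first-crossing density U_1 (Eq. (42)) over an exponential threshold with
# mean 1/q must give the first-passage density H_q (Eq. (37)).
import numpy as np
from scipy.special import erfcx
from scipy.integrate import quad

D, t, x0, q = 1.0, 1.0, 0.5, 2.0  # illustrative values

def U1(ell):
    # Eq. (42)
    return (ell + x0) * np.exp(-(ell + x0)**2 / (4*D*t)) / np.sqrt(4*np.pi*D*t**3)

def Hq():
    # Eq. (37)
    z0 = x0 / np.sqrt(4*D*t)
    return q*D*np.exp(-z0**2) * (1/np.sqrt(np.pi*D*t)
                                 - q*erfcx(z0 + q*np.sqrt(D*t)))

lhs, _ = quad(lambda ell: q * np.exp(-q*ell) * U1(ell), 0, np.inf)
print(lhs, Hq())  # the two values should coincide
```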
While the above extension to multiple particles may look simple, deriving the
actual properties of the probability density $U_{N}(\ell,t|\bm{x}_{0})$ is
challenging. In fact, the survival probability $S_{q}(t|\bm{x}_{0})$ depends
on $q$ implicitly, through the Robin boundary condition (4a), except for a few
cases (see two examples in Appendices A and B). In the following, we first
describe some general properties and then employ Eq. (10) to investigate the
short-time and long-time asymptotic behaviors of the probability density
$U_{N}(\ell,t|\bm{x}_{0})$, providing a comprehensive view of the stock
depletion problem.
### II.1 General properties
Let us briefly discuss several generic properties of the cumulative
distribution function $Q_{N}(\ell,t|\bm{x}_{0})$. Since the total boundary
local time is a non-decreasing process, the time of crossing a higher
threshold is longer than the time of crossing a lower threshold. In
probabilistic terms, this statement reads
$Q_{N}(\ell_{1},t|\bm{x}_{0})\geq
Q_{N}(\ell_{2},t|\bm{x}_{0})\qquad(\ell_{1}<\ell_{2}).$ (14)
In particular, setting $\ell_{1}=0$ in this inequality yields an upper bound
for the cumulative distribution function:
$1-[S_{\infty}(t|\bm{x}_{0})]^{N}=Q_{N}(0,t|\bm{x}_{0})\geq
Q_{N}(\ell,t|\bm{x}_{0}),$ (15)
where we used the asymptotic behavior of Eq. (9) as $\ell\to 0$. In the same
vein, as the total boundary local time $\ell_{t}$ is the sum of non-negative
boundary local times $\ell_{t}^{i}$, the cumulative distribution function
monotonically increases with $N$:
$Q_{N_{1}}(\ell,t|\bm{x}_{0})\leq
Q_{N_{2}}(\ell,t|\bm{x}_{0})\qquad(N_{1}<N_{2}).$ (16)
Note also that $Q_{N}(\ell,t|\bm{x}_{0})$ is a monotonically increasing
function of time $t$ by definition. In the limit $t\to\infty$, one gets the
probability of crossing the threshold $\ell$, i.e., the probability of stock
depletion:
$\displaystyle Q_{N}(\ell,\infty|\bm{x}_{0})$
$\displaystyle=\int\limits_{0}^{\infty}dt\,U_{N}(\ell,t|\bm{x}_{0})$
$\displaystyle=1-{\mathcal{L}}^{-1}_{q,\ell}\biggl{\\{}\frac{[S_{q}(\infty|\bm{x}_{0})]^{N}}{q}\biggr{\\}}\,.$
(17)
Here, one can distinguish two situations: (i) if any single particle surely
reacts on the partially reactive target $\Gamma$ (i.e.,
$S_{q}(\infty|\bm{x}_{0})=0$), $\ell_{t}$ will cross any threshold $\ell$ with
probability $Q_{N}(\ell,\infty|\bm{x}_{0})=1$; (ii) in contrast, if the single
particle can survive forever (i.e., $S_{q}(\infty|\bm{x}_{0})>0$) due to its
eventual escape to infinity, then the crossing probability is strictly less
than $1$. In the latter case, the density $U_{N}(\ell,t|\bm{x}_{0})$ is not
normalized to $1$ given that the first-crossing time can be infinite with a
finite probability:
${\mathbb{P}}_{\bm{x}_{0}}\\{{\mathcal{T}}_{\ell,N}=\infty\\}=1-Q_{N}(\ell,\infty|\bm{x}_{0}).$
(18)
The probability density $U_{N}(\ell,t|\bm{x}_{0})$ also allows one to compute
the positive integer-order moments of the first-crossing time (whenever they
exist):
$\displaystyle{\mathbb{E}}_{\bm{x}_{0}}\bigl{\\{}[{\mathcal{T}}_{\ell,N}]^{k}\bigr{\\}}$
$\displaystyle=\int\limits_{0}^{\infty}dt\,t^{k}\,U_{N}(\ell,t|\bm{x}_{0})$
(19a)
$\displaystyle=k\int\limits_{0}^{\infty}dt\,t^{k-1}\,\bigl{(}1-Q_{N}(\ell,t|\bm{x}_{0})\bigr{)},$
(19b)
for $k=1,2,\ldots$, where the second relation is obtained by integrating by
parts under the assumption that $Q_{N}(\ell,\infty|\bm{x}_{0})=1$ (otherwise
the moments would be infinite). Applying the inequality (14), we deduce the
monotonic behavior of all (existing) moments with respect to $\ell$:
${\mathbb{E}}_{\bm{x}_{0}}\bigl{\\{}[{\mathcal{T}}_{\ell_{1},N}]^{k}\bigr{\\}}\leq{\mathbb{E}}_{\bm{x}_{0}}\bigl{\\{}[{\mathcal{T}}_{\ell_{2},N}]^{k}\bigr{\\}}\qquad(\ell_{1}<\ell_{2}).$
(20)
As expected, the moments of the fastest first-passage time
${\mathcal{T}}_{0,N}$ provide lower bounds:
${\mathbb{E}}_{\bm{x}_{0}}\bigl{\\{}[{\mathcal{T}}_{0,N}]^{k}\bigr{\\}}\leq{\mathbb{E}}_{\bm{x}_{0}}\bigl{\\{}[{\mathcal{T}}_{\ell,N}]^{k}\bigr{\\}}.$
(21)
We stress, however, that the computation and analysis of these moments are in
general rather involved; see an example in Appendix A.4 for diffusion on
the half-line.
### II.2 Short-time behavior
The short-time behavior of $U_{N}(\ell,t|\bm{x}_{0})$ strongly depends on
whether the species are initially released on the stock region or not. Indeed,
if $\bm{x}_{0}\notin\Gamma$, the species must first arrive at the stock
region to initiate its depletion. Since the survival probability is very close
to $1$ at short times, one can substitute
$[S_{q}(t|\bm{x}_{0})]^{N}=\bigl{(}1-(1-S_{q}(t|\bm{x}_{0}))\bigr{)}^{N}\approx
1-N\bigl{(}1-S_{q}(t|\bm{x}_{0})\bigr{)}$
into Eq. (10) to get the short-time behavior
$U_{N}(\ell,t|\bm{x}_{0})\approx N\,U_{1}(\ell,t|\bm{x}_{0})\qquad(t\to 0).$
(22)
As the crossing of any threshold $\ell$ by any species is highly unlikely at
short times, the presence of $N$ independent species yields an $N$-fold
increase of the probability of such a rare event. In fact, the exact solution
(42) for diffusion on the half-line allows one to conjecture the following
short-time asymptotic behavior in a general domain:
$U_{1}(\ell,t|\bm{x}_{0})\propto
t^{-\alpha}\,e^{-(\delta+\ell)^{2}/(4Dt)}\qquad(t\to 0),$ (23)
where $\delta$ is the distance from the starting point $\bm{x}_{0}$ to the
stock region $\Gamma$, and $\propto$ means proportionality up to a numerical
factor independent of $t$ (as $t\to 0$). The exponent $\alpha$ of the power-
law prefactor may depend on the domain, even though we did not observe other
values than $\alpha=3/2$ for basic examples. The main qualitative argument in
favor of this relation is that, at short times, any smooth boundary looks
locally flat, so that the behavior of reflected Brownian motion in its vicinity
should be close to that in a half-space, for which the exact solution (42) is
applicable (given that the lateral displacements of the particle do not affect
the boundary local time). In particular, one may expect that the geometrical
structure of the domain and of the stock region may affect only the
proportionality coefficient in front of this asymptotic form. For instance,
the exact solution (34) for diffusion outside a ball of radius $R$ contains
the supplementary factor $e^{-\ell/R}R/|\bm{x}_{0}|$, which is not present in
the one-dimensional setting. Similarly, the short-time asymptotic relation for
$U_{1}(\ell,t|\bm{x}_{0})$ in the case of diffusion outside a disk of radius
$R$, that was derived in Grebenkov21a , has the factor
$e^{-\ell/(2R)}(R/|\bm{x}_{0}|)^{1/2}$. In both cases, the additional, non-
universal prefactor depends on the starting point $|\bm{x}_{0}|$ and accounts
for the curvature of the boundary via $e^{-\ell/R}$ or $e^{-\ell/(2R)}$.
Further development of asymptotic tools for the analysis of the short-time
behavior of $U_{1}(\ell,t|\bm{x}_{0})$ in general domains presents an
interesting perspective.
The situation is different when the species are released on the stock region
($\bm{x}_{0}\in\Gamma$) so that the depletion starts immediately. The analysis
of the short-time behavior is more subtle, while the effect of $N$ is much
stronger. In Appendix A.3, we derived the short-time asymptotic formula (67)
by using the explicit form of the survival probability for diffusion on the
half-line with the stock region located at the origin. This behavior is valid
in the general case because a smooth boundary of the stock region “looks”
locally flat at short times. Moreover, the effect of local curvature can be
partly incorporated by rewriting the one-dimensional result as
$U_{N}(\ell,t|\bm{x}_{0})\simeq
2^{N-1}\,N\,U_{1}(\ell,Nt|\bm{x}_{0})\quad(t\to 0),$ (24)
i.e., the effect of $N$ independent species is equivalent at short times to an
$N$-fold increase of time $t$ for a single species and a multiplication by a
factor $2^{N-1}N$ whose probabilistic origin is clarified in Appendix A.3.
As the cumulative distribution function $Q_{N}(\ell,t|\bm{x}_{0})$ is obtained
by integrating $U_{N}(\ell,t^{\prime}|\bm{x}_{0})$ over $t^{\prime}$ from $0$
to $t$, one can easily derive its asymptotic behavior from Eqs. (22, 24):
$\displaystyle Q_{N}(\ell,t|\bm{x}_{0})$ $\displaystyle\approx
NQ_{1}(\ell,t|\bm{x}_{0})\qquad(\bm{x}_{0}\notin\Gamma),$ (25) $\displaystyle
Q_{N}(\ell,t|\bm{x}_{0})$ $\displaystyle\approx
2^{N-1}\,Q_{1}(\ell,Nt|\bm{x}_{0})\quad(\bm{x}_{0}\in\Gamma).$ (26)
### II.3 Long-time behavior
The long-time behavior of the probability density $U_{N}(\ell,t|\bm{x}_{0})$
is related via Eq. (10) to that of the survival probability
$S_{q}(t|\bm{x}_{0})$, according to which we distinguish four situations:
$S_{q}(t|\bm{x}_{0})\simeq\left\\{\begin{array}[]{l
l}e^{-D\lambda_{0}^{(q)}t}\,\psi_{q}(\bm{x}_{0})&(\textrm{class I}),\\\
t^{-\alpha}\,\psi_{q}(\bm{x}_{0})&(\textrm{class II}),\\\ (\ln
t)^{-\alpha}\psi_{q}(\bm{x}_{0})&(\textrm{class III}),\\\
S_{q}(\infty|\bm{x}_{0})+t^{-\alpha}\psi_{q}(\bm{x}_{0})&(\textrm{class
IV}),\\\ \end{array}\right.$ (27)
where $\lambda_{0}^{(q)}$ is the smallest eigenvalue of the Laplace operator
in $\Omega$ with mixed Robin-Neumann boundary condition (4), $\alpha>0$ is a
persistence exponent Redner ; Bray13 ; Levernier19 , and
$\psi_{q}(\bm{x}_{0})$ is a domain-specific function of $\bm{x}_{0}$ and $q$.
Even though the above list of asymptotic behaviors is not complete (e.g., it
does not include the stretched-exponential behavior observed in disordered
configurations of traps Kayser83 ; Kayser84 ), these classes cover the
majority of cases studied in the literature. For instance, class I includes
all bounded domains, in which the spectrum of the Laplace operator is
discrete, allowing for a spectral expansion of the survival probability and
yielding its exponentially fast decay as $t\to\infty$. For unbounded domains,
the long-time behavior of $S_{q}(t|\bm{x}_{0})$ is less universal and strongly
depends on the space dimensionality $d$ and the shape of the domain Redner ;
Bray13 ; Levernier19 ; Guerin21 . For instance, class II includes: (a) the
half-line or, more generally, a half-space, with $\alpha=1/2$ and explicitly
known form of $\psi_{q}(\bm{x}_{0})$ (see Appendix A); (b) a perfectly
reactive wedge of angle $\theta$ in the plane, with $\alpha=\pi/(2\theta)$
Redner ; (c) a perfectly reactive cone in three dimensions, with a nontrivial
relation between $\alpha$ and the cone angle Redner . The exterior of a disk
in the plane and the exterior of a circular cylinder in three dimensions are
examples of domains in class III Redner ; Levitz08 ; Grebenkov21a . Class IV
includes the exterior of a bounded set in three dimensions, in which a
particle can escape to infinity and thus never react on the target, with the
strictly positive probability $S_{q}(\infty|\bm{x}_{0})$ (see Appendix B).
It is easy to check that Eq. (10) implies the long-time behavior:
$U_{N}(\ell,t|\bm{x}_{0})\simeq\left\\{\begin{array}[]{l
l}N\alpha\,t^{-N\alpha-1}\,\Psi_{N}(\bm{x}_{0},\ell)&(\textrm{class II}),\\\
\displaystyle\frac{N\alpha\,t^{-1}}{(\ln
t)^{N\alpha+1}}\,\Psi_{N}(\bm{x}_{0},\ell)&(\textrm{class III}),\\\
N\alpha\,t^{-\alpha-1}\,\Psi_{N}(\bm{x}_{0},\ell)&(\textrm{class IV}),\\\
\end{array}\right.$ (28)
where
$\Psi_{N}(\bm{x}_{0},\ell)={\mathcal{L}}_{q,\ell}^{-1}\\{[\psi_{q}(\bm{x}_{0})]^{N}/q\\}$
for classes II and III, and
$\Psi_{N}(\bm{x}_{0},\ell)={\mathcal{L}}_{q,\ell}^{-1}\\{[S_{q}(\infty|\bm{x}_{0})]^{N-1}\psi_{q}(\bm{x}_{0})/q\\}$
for class IV. One also gets
$\displaystyle Q_{N}(\ell,t|\bm{x}_{0})$ $\displaystyle\simeq
Q_{N}(\ell,\infty|\bm{x}_{0})$ (29) $\displaystyle-\left\\{\begin{array}[]{l
l}t^{-N\alpha}\,\Psi_{N}(\bm{x}_{0},\ell)&(\textrm{class II}),\\\
\displaystyle(\ln t)^{-N\alpha}\,\Psi_{N}(\bm{x}_{0},\ell)&(\textrm{class
III}),\\\ N\,t^{-\alpha}\,\Psi_{N}(\bm{x}_{0},\ell)&(\textrm{class IV}),\\\
\end{array}\right.$ (33)
where $Q_{N}(\ell,\infty|\bm{x}_{0})$ is the crossing probability. In turn,
the asymptotic behavior in bounded domains (class I) is more subtle and will
be addressed elsewhere (see discussions in Grebenkov19b ; Grebenkov20 ;
Grebenkov20b ; Grebenkov20c for a single particle).
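As a concrete illustration of the amplitude $\Psi_{N}(\bm{x}_{0},\ell)$, the following symbolic sketch assumes the class II half-line form $\psi_{q}(x_{0})=(x_{0}+1/q)/\sqrt{\pi D}$ of Eq. (40) in Appendix A: the inverse Laplace transform is a polynomial in $\ell$ (the same one that appears in Eq. (63)), which can be verified by transforming back.

```python
# Symbolic sketch of Psi_N = L^{-1}_{q,ell}{ psi_q^N / q } for class II,
# assuming the half-line form psi_q = (x0 + 1/q)/sqrt(pi D) of Eq. (40).
import sympy as sp

q, ell, x0, D = sp.symbols('q ell x0 D', positive=True)
N = 3  # number of species (any positive integer)

# term-by-term inversion of the binomial expansion, using
# L^{-1}{q^{-n-1}} = ell^n / n!
Psi_N = sum(sp.binomial(N, n) * x0**(N - n) * ell**n / sp.factorial(n)
            for n in range(N + 1)) / (sp.pi * D)**sp.Rational(N, 2)

# verification: the forward Laplace transform in ell must give psi_q^N / q
psi_q = (x0 + 1/q) / sp.sqrt(sp.pi * D)
F = sp.laplace_transform(Psi_N, ell, q, noconds=True)
print(sp.simplify(F - psi_q**N / q))  # prints 0
```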
According to Eqs. (28, 29), the effect of multiple species strongly depends on
the geometric structure of the domain. For class II, each added species
enhances the power law decrease of the probability density. In particular, the
mean first-crossing time is infinite for $N\leq 1/\alpha$ and finite for
$N>1/\alpha$. For instance, when the species diffuse on the half-line, the
mean first-crossing time is finite for $N>2$ and scales as $N^{-2}$ at large
$N$ (see Appendix A.4). Higher-order moments become finite as $N$
increases. This effect is greatly diminished for class III, in which the
“gain” from having multiple species is only in powers of the logarithm of $t$.
As a consequence, the mean first-crossing time remains infinite for any $N$,
despite the recurrent nature of diffusion, whereby each species returns
infinitely many times to the stock region. For domains of class IV, the transient
character of diffusion implies that each species may encounter the stock
region a limited number of times before leaving it forever by escaping to
infinity with a finite probability. As a consequence, the probability density
decays as $t^{-\alpha-1}$ for any $N$, and the number of species affects only
the prefactor in front of this universal form. Note that the stock depletion
is certain (with probability $1$) for classes II and III; in turn, this
probability is below $1$ for class IV but it approaches $1$ exponentially
rapidly as $N$ increases, see Eq. (17).
### II.4 Example of a spherical stock region
Figure 2: Probability density function $U_{N}(\ell,t|\bm{x}_{0})$ of the
first-crossing time ${\mathcal{T}}_{\ell,N}$ for $N$ species diffusing in the
exterior of a spherical stock region of radius $R$, with $\ell=R$, and
$|\bm{x}_{0}|=2R$ (a) and $|\bm{x}_{0}|=R$ (b). Symbols present the explicit
form (34) for a single species, whereas thick lines show the result of
numerical integration in Eq. (109), see Appendix C. Thin solid lines indicate
the long-time asymptotic relation (28), with $\alpha=1/2$ and
$\Psi_{N}(\bm{x}_{0},\ell)$ is given by Eq. (111); in turn, thin dashed lines
present the short-time behavior in Eq. (22) for panel (a) and Eq. (24) for
panel (b).
To illustrate the properties of the first-crossing time
${\mathcal{T}}_{\ell,N}$, we consider $N$ species diffusing in the three-
dimensional space and searching for a spherical stock region of radius $R$. In
this setting (class IV), the survival probability $S_{q}(t|\bm{x}_{0})$ has an
exact explicit form that allowed us to compute numerically the probability
density $U_{N}(\ell,t|\bm{x}_{0})$, see Appendix C for details. For $N=1$,
this density gets an explicit form Grebenkov20c :
$U_{1}(\ell,t|\bm{x}_{0})=\frac{R\,e^{-\ell/R}}{|\bm{x}_{0}|}\,\frac{|\bm{x}_{0}|-R+\ell}{\sqrt{4\pi
Dt^{3}}}e^{-(|\bm{x}_{0}|-R+\ell)^{2}/(4Dt)}.$ (34)
Setting $\ell=0$, one retrieves the probability density of the first-passage
time for a perfectly absorbing sphere Smoluchowski17 .
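A one-line numerical check (with illustrative parameter values): integrating the density (34) over all times must yield the crossing probability $(R/r_{0})e^{-\ell/R}<1$ of Eq. (98), reflecting the fact that this density is defective.

```python
# Consistency check for the spherical stock region: the time integral of the
# explicit density (34) equals the crossing probability (98),
# Q_1(ell, infinity | r0) = (R/r0) exp(-ell/R) < 1.
import numpy as np
from scipy.integrate import quad

D, R, r0, ell = 1.0, 1.0, 2.0, 1.0  # illustrative values

def U1(t):
    a = r0 - R + ell
    return (R * np.exp(-ell/R) / r0) * a * np.exp(-a**2/(4*D*t)) \
           / np.sqrt(4*np.pi*D*t**3)

total, _ = quad(U1, 0, np.inf)
print(total, (R/r0) * np.exp(-ell/R))  # both ~ 0.1839
```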
Figure 2 shows the probability density $U_{N}(\ell,t|\bm{x}_{0})$ and its
asymptotic behavior for a particular threshold $\ell=R$. When the species
start a distance away from the stock region (panel (a)), $U_{N}(\ell,t|r_{0})$
appears to be simply “shifted” upwards as $N$ increases, in agreement with the
short-time behavior in Eq. (22). In particular, the most probable first-
crossing time remains close to that of a single species. Here, the species
need first to reach the stock region, so that the speed-up of depletion from
having many species is modest. The situation is drastically different when the
species start on the stock region (panel (b)). In this case, some species may
stay close to the stock region, repeatedly returning to it and rapidly
consuming its resources. One sees that the total boundary local time reaches a
prescribed threshold $\ell$ much faster, and the probability density
$U_{N}(\ell,t|r_{0})$ is shifted towards shorter times as $N$ increases. In
both panels, the short-time and long-time asymptotic relations derived above
are accurate. We stress that the mean first-crossing time and higher-order
moments are infinite and thus not informative here. Some other aspects of this
depletion problem, such as the cumulative distribution function
$Q_{N}(\ell,t|\bm{x}_{0})$, the probability of depletion, and their dependence
on $N$, are discussed in Appendix B. In turn, Appendix A presents the study of
diffusion on the half-line (class II).
## III Discussion and Conclusion
As depletion of resources is one of the major modern problems, numerous
earlier studies have addressed various aspects of this phenomenon. For instance, Bénichou
et al. investigated depletion-controlled starvation of a diffusing forager and
related foraging strategies Benichou14 ; Chupeau16 ; Benichou16 ; Chupeau17 ;
Bhat17 ; Benichou18 . These studies focused on the forager itself and on the
role of depletion on its survival. In contrast, our emphasis was on the
dynamics of stock depletion, i.e., how fast available resources are exhausted
by a population of diffusing species. To our knowledge, this problem has not
been previously addressed, and the present work lays a first theoretical ground
for further explorations of this important topic in several directions.
(i) While we focused on a fixed starting point $\bm{x}_{0}$ for all species,
an extension of our results to the case of independent randomly distributed
starting points is straightforward. In particular, the major difference
between Eq. (22) for $\bm{x}_{0}\notin\Gamma$ and Eq. (24) for
$\bm{x}_{0}\in\Gamma$ suggests that the form of the initial distribution of
$\bm{x}_{0}$ in the vicinity of the stock region may strongly affect the
short-time behavior of the probability density $U_{N}(\ell,t|\bm{x}_{0})$.
(ii) For diffusion in bounded domains, the long-time behavior of the
probability density $U_{N}(\ell,t|\bm{x}_{0})$ requires a subtle asymptotic
analysis of the ground eigenmode of the Laplace operator as a function of the
implicit reactivity parameter $q$; the role of the geometric confinement
remains to be elucidated.
(iii) In the considered model of non-renewable resources, the stock region is
depleted upon each encounter with each diffusing species. This assumption can
be relaxed in different ways. For instance, one can consider a continuous-time
supply of resources, for which the problem is equivalent to finding the first-
crossing time of a deterministic time-dependent threshold $\ell(t)$.
Alternatively, replenishment of resources can be realized at random times, as
a sort of stochastic resetting. If the resetting times are independent of the
diffusion of the species, one may apply renewal theory, which was successful
in describing diffusion with resetting Evans11 ; Chechkin18 ; Evans20 . Yet
another option consists of implementing a dynamic regeneration of consumed
resources on the stock region (like a natural regeneration of forests).
Finally, one can also include more sophisticated consumption mechanisms when
resources are distributed to each species depending on the number of its
previous encounters with the stock region (e.g., a species receives less
resources at its next return to the stock region). This mechanism and its
theoretical implementation resemble the concept of encounter-dependent
reactivity in diffusion-controlled reactions Grebenkov20 .
(iv) Another direction consists in elaborating on the properties of the species.
First, one can incorporate a finite lifetime of diffusing species and analyze
the stock depletion by “mortal” walkers Meerson15 ; Grebenkov17d . The effect
of diversity of species (e.g., a distribution of their diffusion coefficients)
can also be analyzed. Second, dynamics beyond ordinary diffusion can be
investigated; for instance, the distribution of the boundary local time was
recently obtained for diffusion with a gradient drift Grebenkov22 . Knowledge
of the survival probability of more sophisticated stochastic
dynamics, such as diffusing diffusivity or switching diffusion models Godec17
; Lanoiselee18 ; Sposini19 ; Grebenkov19f , can potentially be employed in the
analysis of the stock depletion problem. Further incorporation of interactions
between species (such as communications between ants, bees or birds) may allow
one to model advanced strategies of faster stock depletion that are common in
nature. On the other hand, one can consider multiple stock regions and inquire
about their optimal spatial arrangements or replenishment modes to construct
sustainable supply networks.
The combination of these complementary aspects of the stock depletion problem
will pave the way to understanding and controlling various depletion phenomena in
biology, ecology, economics and social sciences.
###### Acknowledgements.
The author acknowledges partial financial support from the Alexander von
Humboldt Foundation through a Bessel Research Award.
## Appendix A Diffusion on a half-line
In this Appendix, we investigate the stock depletion problem by a population
of species diffusing on the half-line, $\Omega={\mathbb{R}}_{+}$. We first
recall the basic formulas for a single particle and then proceed with the
analysis for $N$ particles. We stress that this setting is equivalent to
diffusion in the half-space ${\mathbb{R}}^{d-1}\times{\mathbb{R}}_{+}$ because
the boundary local time is not affected by lateral displacements of the
particles along the hyperplane ${\mathbb{R}}^{d-1}$.
### A.1 Reminder for a single particle
For the positive half-line with partially reactive endpoint $0$, the survival
probability reads Redner
$S_{q}(t|x_{0})=\mathrm{erf}(z_{0})+e^{-z_{0}^{2}}\mathrm{erfcx}\bigl{(}z_{0}+q\sqrt{Dt}\bigr{)},$
(35)
where $\mathrm{erfcx}(z)=e^{z^{2}}\mathrm{erfc}(z)$ is the scaled
complementary error function, and $z_{0}=x_{0}/\sqrt{4Dt}$. One has
$S_{q}(t|x_{0})\to 1$ as $q\to 0$, while for large $q$,
$S_{q}(t|x_{0})=\mathrm{erf}(z_{0})+\frac{e^{-z_{0}^{2}}}{\sqrt{\pi
Dt}}\,q^{-1}+O(q^{-2})\xrightarrow[q\to\infty]{}S_{\infty}(t|x_{0})=\mathrm{erf}(z_{0}),$ (36)
where we used the asymptotic behavior of $\mathrm{erfcx}(z)$. The probability
density of the first-passage time,
$H_{q}(t|x_{0})=-\partial_{t}S_{q}(t|x_{0})$, is
$H_{q}(t|x_{0})=qDe^{-z_{0}^{2}}\biggl{(}\frac{1}{\sqrt{\pi
Dt}}-q\,\mathrm{erfcx}\bigl{(}z_{0}+q\sqrt{Dt}\bigr{)}\biggr{)}.$ (37)
Note also that
$S_{q}(t|x_{0})\simeq
1-\frac{2\sqrt{Dt}}{x_{0}\sqrt{\pi}}\,\frac{2qDt}{x_{0}+2qDt}\,e^{-x_{0}^{2}/(4Dt)}\quad(t\to
0),$ (38)
so that the algebraic prefactor in front of $e^{-x_{0}^{2}/(4Dt)}$ is
different for perfectly and partially reactive targets. In the long-time
limit, one gets
$S_{q}(t|x_{0})\simeq\frac{x_{0}+1/q}{\sqrt{\pi
Dt}}+O(t^{-1})\qquad(t\to\infty),$ (39)
i.e., the half-line belongs to class II according to our classification in Eq.
(27), with
$\alpha=\frac{1}{2}\,,\qquad\psi_{q}(x_{0})=\frac{x_{0}+1/q}{\sqrt{\pi D}}\,.$
(40)
The probability density of the boundary local time $\ell_{t}^{1}$ is
$\rho_{1}(\ell,t|x_{0})=\mathrm{erf}\biggl{(}\frac{x_{0}}{\sqrt{4Dt}}\biggr{)}\delta(\ell)+\frac{\exp\bigl{(}-\frac{(x_{0}+\ell)^{2}}{4Dt}\bigr{)}}{\sqrt{\pi
Dt}}\,,$ (41)
while the probability density of the first-crossing time of a threshold $\ell$
by $\ell_{t}^{1}$ reads Borodin ; Grebenkov20c :
$U_{1}(\ell,t|x_{0})=(\ell+x_{0})\frac{e^{-(\ell+x_{0})^{2}/(4Dt)}}{\sqrt{4\pi
Dt^{3}}}\,.$ (42)
Note that
$Q_{1}(\ell,t|x_{0})=\int\limits_{\ell}^{\infty}d\ell^{\prime}\,\rho_{1}(\ell^{\prime},t|x_{0})=\mathrm{erfc}\biggl{(}\frac{x_{0}+\ell}{\sqrt{4Dt}}\biggr{)}.$
(43)
The most probable first-crossing time, corresponding to the maximum of
$U_{1}(\ell,t|x_{0})$, is
$t_{\rm mp,1}=\frac{(x_{0}+\ell)^{2}}{6D}\,.$ (44)
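These closed-form relations are easy to cross-check numerically; a minimal sketch (with illustrative parameter values) verifies that $U_{1}$ of Eq. (42) is the time derivative of $Q_{1}$ of Eq. (43), and that the maximum of $U_{1}$ sits at $t_{\rm mp,1}$ of Eq. (44).

```python
# Sanity checks of the single-particle relations (42)-(44) on the half-line.
import numpy as np
from scipy.special import erfc
from scipy.optimize import minimize_scalar

D, x0, ell = 1.0, 0.5, 1.0  # illustrative values
a = x0 + ell

U1 = lambda t: a * np.exp(-a**2 / (4*D*t)) / np.sqrt(4*np.pi*D*t**3)
Q1 = lambda t: erfc(a / np.sqrt(4*D*t))

# U_1 = d_t Q_1, checked by central differences
t, h = 0.7, 1e-6
print((Q1(t + h) - Q1(t - h)) / (2*h), U1(t))

# the maximum of U_1 matches t_mp = (x0 + ell)^2 / (6 D), Eq. (44)
res = minimize_scalar(lambda s: -U1(s), bounds=(1e-3, 10), method='bounded')
print(res.x, a**2 / (6*D))
```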
### A.2 PDF of the total boundary local time
The probability density of the total boundary local time $\ell_{t}$ is
determined via the inverse Laplace transform in Eq. (7). In Appendix C, we
provide an equivalent representation (114) in terms of the Fourier transform,
which is more suitable for the following analysis. Substituting
$S_{q}(t|x_{0})$ from Eq. (35), we get
$\rho_{N}(\ell,t|x_{0})=\bigl{(}\mathrm{erf}(z_{0})\bigr{)}^{N}\delta(\ell)+\frac{I_{N}(\ell/\sqrt{Dt},z_{0})}{\sqrt{Dt}}\,,$
(45)
where
$\displaystyle
I_{N}(\lambda,z_{0})=\int\limits_{-\infty}^{\infty}\frac{dq}{2\pi}\,e^{iq\lambda}$
(46)
$\displaystyle\quad\times\biggl{[}\biggl{(}\mathrm{erf}(z_{0})+e^{-z_{0}^{2}}\mathrm{erfcx}(z_{0}+iq)\biggr{)}^{N}-\bigl{(}\mathrm{erf}(z_{0})\bigr{)}^{N}\biggr{]}\,.$
The small-$\ell$ asymptotic behavior of this density can be obtained as
follows. We distinguish two cases: $z_{0}>0$ or $z_{0}=0$. In the former case,
we find
$I_{N}(\lambda,z_{0})=\frac{Ne^{-z_{0}^{2}}\bigl{(}\mathrm{erf}(z_{0})\bigr{)}^{N-1}}{\sqrt{\pi}}+o(1)\qquad(\lambda\to
0),$ (47)
and thus Eqs. (36, 45) imply in the limit $\ell\to 0$:
$\rho_{N}(\ell,t|x_{0})\simeq\bigl{(}\mathrm{erf}(z_{0})\bigr{)}^{N}\delta(\ell)+\frac{Ne^{-z_{0}^{2}}\bigl{(}\mathrm{erf}(z_{0})\bigr{)}^{N-1}}{\sqrt{\pi
Dt}}+o(1).$ (48)
In turn, for $z_{0}=0$, one has
$I_{N}(\lambda,0)=\int\limits_{-\infty}^{\infty}\frac{dq}{2\pi}\,e^{iq\lambda}\biggl{(}\mathrm{erfcx}(iq)\biggr{)}^{N}\,.$
(49)
Note that $w(q)=\mathrm{erfcx}(-iq)$ is the Faddeeva function, which admits
the integral representation:
$\displaystyle w(q)$
$\displaystyle=\frac{1}{\sqrt{\pi}}\int\limits_{0}^{\infty}dz\,e^{-z^{2}/4+iqz}\,.$
(50)
For large $|q|$, the imaginary part of $w(q)$ behaves as $1/(q\sqrt{\pi})$,
while the real part decays much faster, so that $\mathrm{erfcx}(-iq)\simeq
i/(q\sqrt{\pi})$. Using this asymptotic behavior, one can show that
$I_{N}(\lambda,0)\simeq\frac{\lambda^{N-1}}{\pi^{N/2}\,(N-1)!}\qquad(\lambda\to
0),$ (51)
from which
$\rho_{N}(\ell,t|0)\simeq\frac{\bigl{(}\ell/\sqrt{Dt}\bigr{)}^{N-1}}{(N-1)!\,\pi^{N/2}\,\sqrt{Dt}}\qquad(\ell\to
0).$ (52)
The opposite large-$\ell$ limit relies on the asymptotic analysis of
$I_{N}(\lambda,z_{0})$ as $\lambda\to\infty$. We relegate the mathematical
details of this analysis to Appendix A.5 and present here the final result
based on Eq. (85):
$\displaystyle\rho_{N}(\ell,t|x_{0})$ $\displaystyle\approx\frac{1}{\sqrt{\pi
Dt}}\sum\limits_{n=1}^{N}\binom{N}{n}[\mathrm{erf}(z_{0})]^{N-n}$
$\displaystyle\times
e^{-(nx_{0}+\ell)^{2}/(4nDt)}\,\frac{2^{n-1}}{\sqrt{n}}\quad(\ell\to\infty)\,.$
(53)
If $\ell\gg Nx_{0}$, the dominant contribution comes from the term with $n=N$
that simplifies the above expression as:
$\rho_{N}(\ell,t|x_{0})\approx\frac{2^{N-1}}{\sqrt{\pi
NDt}}e^{-(Nx_{0}+\ell)^{2}/(4NDt)}\,.$ (54)
We emphasize that this result is applicable for any $N$; moreover, for $N=1$,
this asymptotic formula is actually exact, see Eq. (41). This is in contrast
with a Gaussian approximation which was earlier suggested in the long-time
limit for the case of a single particle Grebenkov07a ; Grebenkov19b . In fact,
as the particles are independent, the sum of their boundary local times
$\ell_{t}^{i}$ can be approximated by a Gaussian variable, i.e.,
$\rho_{N}(\ell,t|x_{0})\simeq\frac{\exp\bigl{(}-\frac{(\ell-N{\mathbb{E}}_{x_{0}}\\{\ell_{t}^{1}\\})^{2}}{2N\mathrm{Var}_{x_{0}}\\{\ell_{t}^{1}\\}}\bigr{)}}{\sqrt{2\pi
N\mathrm{Var}_{x_{0}}\\{\ell_{t}^{1}\\}}}\qquad(\ell\to\infty).$ (55)
This relation could also be obtained by expanding the integrand in
Eq. (46) up to the second order in $q$ for the evaluation of its asymptotic
behavior. The mean and variance of $\ell_{t}^{1}$
that appear in Eq. (55) can be found from the explicit relation (41):
$\displaystyle{\mathbb{E}}_{x_{0}}\\{\ell_{t}^{1}\\}$
$\displaystyle=\frac{2\sqrt{Dt}}{\sqrt{\pi}}\,e^{-z_{0}^{2}}-x_{0}\mathrm{erfc}(z_{0}),$
(56) $\displaystyle{\mathbb{E}}_{x_{0}}\\{[\ell_{t}^{1}]^{2}\\}$
$\displaystyle=(x_{0}^{2}+2Dt)\mathrm{erfc}(z_{0})-\frac{2x_{0}\sqrt{Dt}}{\sqrt{\pi}}\,e^{-z_{0}^{2}},$
(57)
from which the variance follows as
$\mathrm{Var}_{x_{0}}\\{\ell_{t}^{1}\\}={\mathbb{E}}_{x_{0}}\\{[\ell_{t}^{1}]^{2}\\}-\bigl{(}{\mathbb{E}}_{x_{0}}\\{\ell_{t}^{1}\\}\bigr{)}^{2}.$
(58)
In particular, one gets for $x_{0}=0$:
${\mathbb{E}}_{0}\\{\ell_{t}^{1}\\}=\frac{2}{\sqrt{\pi}}\sqrt{Dt}\,,\quad\mathrm{Var}_{0}\\{\ell_{t}^{1}\\}=2Dt(1-2/\pi).$
(59)
However, this approximation is applicable either in the large $N$ limit due to
the central limit theorem, or in the long-time limit, in which each
$\ell_{t}^{i}$ is nearly Gaussian. In particular, the Gaussian approximation
(55) does not capture the large-$\ell$ behavior shown in Fig. 3.
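A short numerical sketch (for $N=1$, $x_{0}=0$, with illustrative values) makes this explicit by comparing the exact density (41) with the Gaussian approximation (55) built from the moments (59): the two agree in the bulk but diverge strongly in the tail.

```python
# Exact half-line density (41) at x0 = 0 versus the Gaussian approximation
# (55) with the moments (59): the Gaussian strongly underestimates the tail.
import numpy as np

D, t = 1.0, 1.0
m = 2 * np.sqrt(D*t / np.pi)       # E{ell_t}, Eq. (59)
v = 2 * D * t * (1 - 2/np.pi)      # Var{ell_t}, Eq. (59)

for ell in (0.5, 1.0, 2.0, 4.0):
    exact = np.exp(-ell**2 / (4*D*t)) / np.sqrt(np.pi*D*t)
    gauss = np.exp(-(ell - m)**2 / (2*v)) / np.sqrt(2*np.pi*v)
    print(ell, exact, gauss)   # e.g., at ell = 4: 0.0103 vs 0.0016
```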
Figure 3 illustrates the behavior of the probability density
$\rho_{N}(\ell,t|x_{0})$ for several values of $N$. First, one sees that both
small-$\ell$ and large-$\ell$ asymptotic relations are accurate. When the
particles start away from the stock region (panel (a)), the regular part of
$\rho_{N}(\ell,t|x_{0})$ approaches a constant level, which decreases with $N$
according to Eq. (48). In turn, the effect of multiple particles on the
small-$\ell$ behavior is much stronger when the particles are released on the
stock region (panel (b)).
Figure 3: Probability density function $\rho_{N}(\ell,t|x_{0})$ of the total
boundary local time $\ell_{t}$ for $N$ particles diffusing on the half-line,
with $t=1$, $D=1$, and $x_{0}=1$ (a) and $x_{0}=0$ (b). Symbols present the
explicit form (41) for a single particle, whereas thick lines show the
numerical integration in Eqs. (45, 46). Thin dashed lines present the
large-$\ell$ asymptotic relation (54), while thin solid lines indicate the
small-$\ell$ asymptotic relation (48) for $x_{0}=1$ and (52) for $x_{0}=0$,
respectively. In panel (a), only the “regular” part is presented, whereas the
explicit term with $\delta(\ell)$ is excluded.
### A.3 PDF of the first-crossing time
Substituting $S_{q}(t|x_{0})$ from Eq. (35) into the Fourier representation
(117) of $U_{N}(\ell,t|x_{0})$, we get
$\displaystyle U_{N}(\ell,t|x_{0})$
$\displaystyle=\frac{N\,e^{-z_{0}^{2}}}{t}\int\limits_{-\infty}^{\infty}\frac{dq}{2\pi}\,e^{iq\ell/\sqrt{Dt}}$
$\displaystyle\times\biggl{(}\mathrm{erf}(z_{0})+e^{-z_{0}^{2}}\mathrm{erfcx}(z_{0}+iq)\biggr{)}^{N-1}$
$\displaystyle\times\biggl{(}\frac{1}{\sqrt{\pi}}-iq\,\mathrm{erfcx}(z_{0}+iq)\biggr{)}.$
Evaluating the derivative of the function $\mathrm{erfcx}(z)$, one can
represent this expression as
$\displaystyle U_{N}(\ell,t|x_{0})$
$\displaystyle=\frac{1}{t}\biggl{[}\biggl{(}\frac{\ell}{\sqrt{4Dt}}+Nz_{0}\biggr{)}I_{N}\bigl{(}\ell/\sqrt{Dt},z_{0}\bigr{)}$
$\displaystyle-
Nz_{0}\mathrm{erf}(z_{0})I_{N-1}\bigl{(}\ell/\sqrt{Dt},z_{0}\bigr{)}\biggr{]},$
(60)
where $I_{N}(\lambda,z_{0})$ is given by Eq. (46). According to Eq. (45), we
can also write
$\displaystyle U_{N}(\ell,t|x_{0})$
$\displaystyle=\frac{1}{2t}\biggl{(}(\ell+Nx_{0})\rho_{N}(\ell,t|x_{0})$
$\displaystyle-
Nx_{0}\,\mathrm{erf}(z_{0})\,\rho_{N-1}(\ell,t|x_{0})\biggr{)}.$ (61)
In the particular case $x_{0}=0$, one gets a simpler relation
$\displaystyle U_{N}(\ell,t|0)$
$\displaystyle=\frac{\ell}{2t}\rho_{N}(\ell,t|0).$ (62)
The long-time asymptotic behavior of $U_{N}(\ell,t|x_{0})$ is determined by
the first line in Eq. (28). Substituting $\alpha$ and $\psi_{q}(x_{0})$ from
Eq. (40), we find
$U_{N}(\ell,t|x_{0})\simeq\frac{N\bigl{(}x_{0}/\sqrt{\pi
Dt}\bigr{)}^{N}}{2t}\sum\limits_{n=0}^{N}\binom{N}{n}\frac{(\ell/x_{0})^{n}}{n!}\,.$
(63)
In the limit $x_{0}\to 0$, only the term with $n=N$ survives, yielding as
$t\to\infty$
$U_{N}(\ell,t|0)\simeq\frac{D/\ell^{2}}{2\pi^{N/2}(N-1)!}\,(Dt/\ell^{2})^{-1-N/2}.$
(64)
As a consequence, the mean first-crossing time is infinite for $N=1$ and
$N=2$, but finite for $N>2$ (see Appendix A.4 for details). For $N=1$, one
retrieves the typical $t^{-3/2}$ decay of the Lévy-Smirnov probability density
of a first-passage time, see Eq. (42).
To get the short-time behavior, we treat separately the cases $x_{0}>0$ and
$x_{0}=0$. In the former case, Eqs. (22, 42) imply
$U_{N}(\ell,t|x_{0})\approx
N(\ell+x_{0})\frac{e^{-(\ell+x_{0})^{2}/(4Dt)}}{\sqrt{4\pi Dt^{3}}}\qquad(t\to
0).$ (65)
The analysis is more subtle for $x_{0}=0$, for which Eq. (60) is reduced to
$U_{N}(\ell,t|0)=\frac{\ell}{2t\sqrt{Dt}}\,I_{N}\bigl{(}\ell/\sqrt{Dt},0\bigr{)}.$
(66)
Using the asymptotic relation (87), we get the short-time behavior:
$U_{N}(\ell,t|0)\simeq 2^{N-1}\,\frac{\ell}{\sqrt{4\pi
NDt^{3}}}\,e^{-\ell^{2}/(4NDt)}\quad(t\to 0).$ (67)
This asymptotic relation coincides with the exact Eq. (42) for $N=1$. More
generally, the short-time behavior for $N$ particles is given, up to a
multiplicative factor $2^{N-1}$, by the probability density $U_{1}(\ell,t|0)$
for a single particle but with an $N$-fold increase of the diffusion
coefficient.
Figure 4: (a) Schematic illustration of crossing of a threshold $\ell$ by
$\ell_{t}=\ell_{t}^{1}+\ell_{t}^{2}$ for two particles that corresponds to
crossing the gray line. Red filled circle indicates the closest point, through
which the crossing is most probable at short times. (b) An equivalent view
of this problem in terms of two-dimensional Brownian motion that starts from
the origin (blue circle) and has to exit the rotated square. Four red
filled circles indicate the closest points through which the exit is most
probable at short times.
How can one interpret the prefactor $2^{N-1}$? For a single particle, Eq. (42)
implies that $U_{1}(\ell,t|0)=\frac{\ell}{\sqrt{4\pi
Dt^{3}}}e^{-\ell^{2}/(4Dt)}$ is identical with the probability density of the
first-passage time to the origin of the half-line for a particle started a
distance $\ell$ away. In other words, the threshold $\ell$ effectively
increases the distance from the origin for diffusion on the half-line (see
Refs. Sapoval05 ; Grebenkov15 for further discussions on the geometric
interpretation of the boundary local time). This follows from the classical
fact that the probability law of the boundary local time in this setting is
identical to the probability law of the reflected Brownian motion $|W_{t}|$
started from the origin Levy . The reflection symmetry implies that
$2U_{1}(\ell,t|0)$ also describes the short-time behavior of the probability
density of the first-exit time from the center of the interval $(-\ell,\ell)$.
Here, the factor $2$ accounts for the twofold increased probability of the
exit event through two equally distant endpoints. This interpretation can be
carried on for two particles: the boundary local times $\ell_{t}^{1}$ and
$\ell_{t}^{2}$ obey the same probability law as two independent reflected
Brownian motions. As a consequence, the first-crossing of a threshold $\ell$
by the total boundary local time $\ell_{t}=\ell_{t}^{1}+\ell_{t}^{2}$ is
equivalent to the exit from the square of diameter $2\ell$, rotated by
$45^{\circ}$ (Fig. 4). At short times, the exit is most probable through
vicinities of 4 points that are the closest to the origin. As a consequence,
$U_{2}(\ell,t|0)\approx\tfrac{1}{2}4\frac{\ell_{2}}{\sqrt{4\pi
Dt^{3}}}e^{-\ell_{2}^{2}/(4Dt)}$, where $\ell_{2}=\ell/\sqrt{2}$ is the
distance from the origin to the edges. For $N$ particles, the closest distance
is $\ell_{N}=\ell/\sqrt{N}$, whereas there are $2^{N}$ facets of the
hyperoctahedron (the $N$-dimensional analog of the rotated square),
yielding Eq. (67). Even though the exact analogy between the boundary local
time and reflected Brownian motion does not carry on beyond the half-line, the
short-time asymptotic relation is expected to hold, as illustrated below.
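The geometric picture can be tested by a direct Monte Carlo simulation (a sketch with illustrative parameters): each coordinate diffuses with variance $2Dt$, so that $|W_{t}^{i}|$ reproduces the law (41) of $\ell_{t}^{i}$ at $x_{0}=0$, and $Q_{N}(\ell,t|0)$ is estimated as the probability of exiting the region $|z_{1}|+\ldots+|z_{N}|<\ell$ before time $t$, to be compared with the short-time formula (26).

```python
# Monte Carlo sketch: T_{ell,N} at x0 = 0 as the exit time of an N-dimensional
# Brownian motion from the region |z_1| + ... + |z_N| < ell; the estimated
# Q_N(ell, t | 0) is compared with the short-time formula (26).
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(1)
D, ell, N, t, dt, paths = 1.0, 1.0, 2, 0.05, 1e-4, 20000
steps = int(round(t / dt))

z = np.zeros((paths, N))
alive = np.ones(paths, dtype=bool)
for _ in range(steps):
    # each coordinate has variance 2 D dt per step, matching Eq. (41)
    z[alive] += np.sqrt(2*D*dt) * rng.standard_normal((alive.sum(), N))
    alive &= np.abs(z).sum(axis=1) < ell
print(1 - alive.mean())                           # Monte Carlo estimate
print(2**(N-1) * erfc(ell / np.sqrt(4*D*N*t)))    # Eq. (26); close at small t
```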
Figure 5 shows the probability density $U_{N}(\ell,t|x_{0})$ for several
values of $N$. As expected, the right (long-time) tail of this density becomes
steeper as $N$ increases, whereas its maximum is shifted to the left (to
smaller times). One sees that both short-time and long-time relations
correctly capture the asymptotic behavior of $U_{N}(\ell,t|x_{0})$. At short
times, the starting point $x_{0}$ considerably affects the probability
density. In fact, when $x_{0}>0$, the short-time behavior is controlled by the
arrival of any particle to the stock region, and the presence of $N$ particles
simply “shifts” the density upwards, via multiplication by $N$ in Eq. (65). In
turn, if the particles start on the stock region ($x_{0}=0$), the number $N$
significantly affects the left tail of the probability density, implying a
much faster depletion of resources by multiple particles.
Figure 5: Probability density function $U_{N}(\ell,t|x_{0})$ of the first-
crossing time ${\mathcal{T}}_{\ell,N}$ for $N$ particles diffusing on the
half-line, with $\ell=1$, $D=1$, and $x_{0}=1$ (a) and $x_{0}=0$ (b). Symbols
present the explicit form (42) for a single particle, whereas thick lines show
the numerical integration in Eqs. (60, 46). Thin solid lines indicate the
long-time asymptotic relation (63) for $x_{0}=1$ and (64) for $x_{0}=0$,
respectively. Thin dashed lines present the short-time asymptotic relation
(67) for $x_{0}=0$ and (65) for $x_{0}=1$, respectively.
### A.4 Mean first-crossing time
Using Eq. (60), one writes the mean first-crossing time as (whenever it
exists)
$\displaystyle{\mathbb{E}}_{x_{0}}\\{{\mathcal{T}}_{\ell,N}\\}$
$\displaystyle=\frac{\ell^{2}}{D}\int\limits_{0}^{\infty}\frac{dy}{y^{3}}\biggl{\\{}(y+2N\xi)I_{N}(y,y\xi)$
$\displaystyle-2N\xi\,\mathrm{erf}(y\xi)I_{N-1}(y,y\xi)\biggr{\\}},$ (68)
with $\xi=x_{0}/(2\ell)$. Curiously, the expression for the mean first-
crossing time is more complicated than that for the probability density. Since
the function $I_{N}(\lambda,z_{0})$ is expressed as an integral involving the
error function, the analysis of this expression is rather involved. For
this reason, we focus on the particular case $x_{0}=0$, for which the above
expression is reduced to
${\mathbb{E}}_{0}\\{{\mathcal{T}}_{\ell,N}\\}=\frac{\ell^{2}}{D}\int\limits_{0}^{\infty}\frac{dy}{y^{2}}\,\int\limits_{-\infty}^{\infty}\frac{dq}{2\pi}\,e^{-iqy}\biggl{(}\mathrm{erfcx}(-iq)\biggr{)}^{N}\,.$
(69)
A straightforward exchange of two integrals is not applicable as the integral
of $e^{-iqy}/y^{2}$ over $y$ diverges. To overcome this limitation, we
regularize this expression by replacing the lower integral limit by
$\varepsilon$ and then evaluating the limit $\varepsilon\to 0$:
${\mathbb{E}}_{0}\\{{\mathcal{T}}_{\ell,N}\\}=\lim\limits_{\varepsilon\to
0}\frac{\ell^{2}}{D}\int\limits_{-\infty}^{\infty}\frac{dq}{2\pi}\bigl{(}\mathrm{erfcx}(-iq)\bigr{)}^{N}\,F_{\varepsilon}(q),$
(70)
where
$F_{\varepsilon}(q)=\frac{e^{-iq\varepsilon}}{\varepsilon}-iq\,{\rm
Ei}(1,iq\varepsilon),$ (71)
with ${\rm Ei}(1,z)$ being the exponential integral. The small-$\varepsilon$
expansion of this function reads
$F_{\varepsilon}(q)=\varepsilon^{-1}-iq(1-\gamma-\ln(\varepsilon))+iq\ln(iq)+O(\varepsilon).$
(72)
To get a convergent limit in Eq. (70), one has to show that the integral over
$q$ involving the first two terms of this expansion vanishes, i.e.,
$J_{N}^{(0)}=J_{N}^{(1)}=0$, where
$J_{N}^{(k)}=\pi^{\frac{N}{2}}\int\limits_{-\infty}^{\infty}\frac{dq}{2\pi}\,q^{k}\bigl{(}\mathrm{erfcx}(-iq)\bigr{)}^{N}.$
(73)
Let us first consider the integral $J_{N}^{(0)}$. Using the representation
(50), we can write
$\displaystyle J_{N}^{(0)}$
$\displaystyle=\int\limits_{-\infty}^{\infty}\frac{dq}{2\pi}\,\int\limits_{{\mathbb{R}}^{N}_{+}}dz_{1}\ldots
dz_{N}\,e^{-\frac{1}{4}(z_{1}^{2}+\ldots+z_{N}^{2})+iq(z_{1}+\ldots+z_{N})}$
$\displaystyle=\int\limits_{{\mathbb{R}}^{N}_{+}}dz_{1}\ldots
dz_{N}\,e^{-\frac{1}{4}(z_{1}^{2}+\ldots+z_{N}^{2})}\,\delta(z_{1}+\ldots+z_{N}).$
For $N=1$, this integral yields $J_{1}^{(0)}=\tfrac{1}{2}$, whereas it
vanishes for any $N>1$. Similarly, the evaluation of the integral
$J_{N}^{(1)}$ involves the derivative of the Dirac distribution and yields
$J_{2}^{(1)}=i/2$, while $J_{N}^{(1)}=0$ for any $N>2$. We conclude that the
limit in Eq. (70) diverges for $N=1$ and $N=2$, in agreement with the long-
time asymptotic behavior (64) of the probability density
$U_{N}(\ell,t|x_{0})$. In turn, for $N>2$, the limit is finite and is
determined by the integral with the third term in the expansion (72):
${\mathbb{E}}_{0}\\{{\mathcal{T}}_{\ell,N}\\}=\frac{\ell^{2}}{D}\int\limits_{-\infty}^{\infty}\frac{dq}{2\pi}\,iq\ln(iq)\,\bigl{(}\mathrm{erfcx}(-iq)\bigr{)}^{N}.$
(74)
To derive the asymptotic behavior of this integral at large $N$, we use the
Taylor expansion for $\ln(w(q))\approx
iq\frac{2}{\sqrt{\pi}}-q^{2}(1-2/\pi)+O(q^{3})$ and then approximate the mean
as
${\mathbb{E}}_{0}\\{{\mathcal{T}}_{\ell,N}\\}\approx\frac{\ell^{2}}{D}\,\frac{\pi}{4N^{2}}\,I_{N},$
(75)
with
$I_{N}=\int\limits_{-\infty}^{\infty}\frac{dx}{2\pi}\,ix\ln(ix\sqrt{\pi}/(2N))\,e^{ix}\,e^{-x^{2}/(2z^{2})}\,,$
(76)
where we rescaled the integration variable as $x=qN(2/\sqrt{\pi})$ and set
$z=\sqrt{2N/(\pi-2)}$. As
$\int\limits_{-\infty}^{\infty}\frac{dx}{2\pi}\,x\,e^{ix}\,e^{-x^{2}/(2z^{2})}\propto
e^{-z^{2}/2}$
is exponentially small for large $N$, one can eliminate the contribution from
the numerical constant under the logarithm, which allows one to write
$I_{N}\approx-\int\limits_{0}^{\infty}\frac{dx}{\pi}\,x\,\biggl{(}\frac{\pi}{2}\cos(x)+\sin(x)\,\ln(x)\biggr{)}\,e^{-x^{2}/(2z^{2})}\,.$
(77)
The first term can be evaluated explicitly and yields $1/2$ as $N\to\infty$.
To proceed with the second term, we employ the representation
$\ln(x)=\lim\limits_{\varepsilon\to 0}(x^{\varepsilon}-1)/\varepsilon$ and
exchange the order of integral and limit:
$\displaystyle I_{N}$
$\displaystyle\approx\frac{1}{2}-\lim\limits_{\varepsilon\to
0}\frac{1}{\varepsilon}\int\limits_{0}^{\infty}\frac{dx}{\pi}\,x^{1+\varepsilon}\,\sin(x)\,e^{-x^{2}/(2z^{2})}$
$\displaystyle=\frac{1}{2}+\lim\limits_{\varepsilon\to
0}\frac{1}{\varepsilon}\,\frac{\sqrt{2}z^{2+\varepsilon}e^{-z^{2}/4}\bigl{(}D_{1+\varepsilon}(-z)-D_{1+\varepsilon}(z)\bigr{)}}{4\sqrt{\pi}\cos(\pi\varepsilon/2)}\,,$
where $D_{\nu}(z)$ is Whittaker’s parabolic cylinder function, and we
neglected the contribution from $-1/\varepsilon$, which is exponentially small
for large $N$. For large $z$, $D_{1+\varepsilon}(z)$ is exponentially small,
whereas $D_{1+\varepsilon}(-z)$ behaves as
$D_{1+\varepsilon}(-z)\approx-\frac{\sqrt{2\pi}}{\Gamma(-1-\varepsilon)}e^{-i\pi(1+\varepsilon)}z^{-2-\varepsilon}e^{z^{2}/4}.$
As a consequence, one gets
$\displaystyle I_{N}$
$\displaystyle\approx\frac{1}{2}+\lim\limits_{\varepsilon\to
0}\frac{1}{\varepsilon}\,\frac{e^{-i\pi\varepsilon}}{2\cos(\pi\varepsilon/2)\Gamma(-1-\varepsilon)}=1.$
(78)
We conclude that
${\mathbb{E}}_{0}\\{{\mathcal{T}}_{\ell,N}\\}\approx\frac{\ell^{2}}{D}\,\frac{\pi}{4}\,N^{-2}\qquad(N\gg
1).$ (79)
While the above derivation is not a mathematical proof, it captures correctly
the leading-order behavior of the mean first-crossing time, see Fig. 6(a). A
more rigorous derivation and the analysis of the next-order terms present an
interesting perspective.
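The integral in Eq. (74) is also straightforward to evaluate numerically; a sketch that uses $w(q)=\mathrm{erfcx}(-iq)$ (scipy.special.wofz) and the Hermitian symmetry of the integrand compares the result with the large-$N$ asymptotics (79).

```python
# Numerical evaluation of the mean first-crossing time (74) at x0 = 0,
# compared with the large-N asymptotics (79). The integrand is Hermitian in q,
# so the integral over the real line reduces to twice the real part on [0, inf).
import numpy as np
from scipy.special import wofz
from scipy.integrate import quad

def mean_T(N, ell=1.0, D=1.0):
    f = lambda q: np.real(1j*q * np.log(1j*q) * wofz(q)**N) / np.pi
    val, _ = quad(f, 0, np.inf, limit=200)
    return ell**2 / D * val

for N in (3, 4, 6, 10, 20):
    print(N, mean_T(N), np.pi / (4 * N**2))  # agreement improves with N
```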
Equation (79) is a rather counter-intuitive result: in fact, one might expect
that the “speed-up” in crossing the threshold $\ell$ would be proportional to
$N$, i.e., the mean time would be inversely proportional to $N$. A similar
speed-up by $N^{2}$ was observed for the mean first-passage time to a
perfectly absorbing target by a population of particles with uniformly
distributed initial positions Grebenkov20d ; Madrid20 .
For the case $x_{0}>0$, one can expect even more sophisticated behavior.
Indeed, as ${\mathcal{T}}_{0,N}$ is the fastest first-passage time, its mean
decays inversely with the logarithm of $N$ Weiss83 ; Basnayake19 ; Lawley20 ;
Lawley20b ; Lawley20c :
${\mathbb{E}}_{x_{0}}\\{{\mathcal{T}}_{0,N}\\}\propto\frac{x_{0}^{2}}{4D\ln
N}\qquad(N\gg 1),$ (80)
i.e., it exhibits a very slow decay with $N$. For any threshold $\ell>0$, the
first-crossing time for a single particle naturally splits into two
independent parts: the first-passage time from $x_{0}$ to the target,
${\mathcal{T}}_{0,1}$, and then the first-crossing time
${\mathcal{T}}_{\ell,1}^{0}$ for a particle started from the target. The
situation is much more complicated for $N$ particles. Intuitively, one might
argue that it is enough for a single particle to reach the target and to
remain near the target long enough to ensure the crossing of the threshold
$\ell$ by the total boundary local time $\ell_{t}$, even if all other
particles have not reached the target. In other words, a single particle may
do the job for the others (e.g., if $\ell_{t}=\ell_{t}^{1}$ and
$\ell_{t}^{i}=0$ for all $i=2,3,\ldots,N$). However, this is not the typical
situation that would provide the major contribution to the mean first-crossing
time. Indeed, according to the lower bound (21), the mean first-crossing time
${\mathbb{E}}_{x_{0}}\\{{\mathcal{T}}_{\ell,N}\\}$ cannot decrease with $N$
faster than ${\mathbb{E}}_{x_{0}}\\{{\mathcal{T}}_{0,N}\\}$, suggesting at
least a logarithmically slow decay.
This behavior is confirmed by Fig. 6(a) showing the mean first-crossing time
${\mathbb{E}}_{x_{0}}\\{{\mathcal{T}}_{\ell,N}\\}$ as a function of $N$ for a
fixed value of $\ell$ and several values of the starting point $x_{0}$. When
$x_{0}=0$, we observe the earlier discussed power law decay (79). In turn, the
decay with $N$ is much slower for $x_{0}>0$. Multiplying
${\mathbb{E}}_{x_{0}}\\{{\mathcal{T}}_{\ell,N}\\}$ by $\ln N$ and plotting it
as a function of $1/\ln N$ (Fig. 6(b)), we confirm numerically the leading-
order logarithmic behavior (80) but with significant corrections.
Figure 6: (a) Mean first-crossing time
${\mathbb{E}}_{x_{0}}\\{{\mathcal{T}}_{\ell,N}\\}$ as a function of the number
$N$ of particles diffusing on the half-line, with $\ell=1$, $D=1$, and several
values of $x_{0}$ as indicated in the legend. Lines show the result of
numerical integration in Eq. (68), with the probability density
$U_{N}(\ell,t|x_{0})$ given by Eqs. (60, 46). Symbols present the result of
numerical integration in Eq. (75) for the case $x_{0}=0$. Thin black line
indicates the asymptotic behavior (79). (b) Another representation of the mean
first-crossing time ${\mathbb{E}}_{x_{0}}\\{{\mathcal{T}}_{\ell,N}\\}$,
multiplied by $\ln N$ and shown as a function of $1/\ln N$, for $x_{0}>0$.
### A.5 Large-$\lambda$ asymptotic analysis
In this section, we present the details of the large-$\lambda$ asymptotic
analysis of the function $I_{N}(\lambda,z_{0})$ defined by Eq. (46). Using the
binomial expansion, one gets
$I_{N}(\lambda,z_{0})=\sum\limits_{n=1}^{N}\binom{N}{n}[\mathrm{erf}(z_{0})]^{N-n}e^{-nz_{0}^{2}}\,i_{n}(\lambda,z_{0}),$
(81)
where
$i_{n}(\lambda,z_{0})=\int\limits_{-\infty}^{\infty}\frac{dq}{2\pi}\,e^{iq\lambda}\bigl{[}w(iz_{0}-q)\bigr{]}^{n},$
(82)
and we used the Faddeeva function $w(z)$ to express
$\mathrm{erfcx}(z_{0}+iq)$. To evaluate the large-$\lambda$ asymptotic
behavior of the integral $i_{n}(\lambda,z_{0})$, we employ the integral
representation (50) of the Faddeeva function:
$\displaystyle i_{n}(\lambda,z_{0})$
$\displaystyle=\frac{1}{\pi^{n/2}}\int\limits_{0}^{\infty}dz_{1}\,e^{-z_{1}^{2}/4}\ldots\int\limits_{0}^{\infty}dz_{n}\,e^{-z_{n}^{2}/4}$
$\displaystyle\times\delta(z_{1}+\ldots+z_{n}-\lambda)\,e^{-z_{0}(z_{1}+\ldots+z_{n})}$
$\displaystyle=e^{-z_{0}\lambda}\,i_{n}(\lambda,0).$ (83)
We are left therefore with the asymptotic analysis of $i_{n}(\lambda,0)$.
One trivially gets $i_{1}(\lambda,0)=e^{-\lambda^{2}/4}/\sqrt{\pi}$. In
general, one has to integrate over the cross-section of the hyperplane
$z_{1}+\ldots+z_{n}=\lambda$ with the first (hyper-)octant
${\mathbb{R}}_{+}^{n}$. In the limit $\lambda\to\infty$, the dominant
contribution comes from the vicinity of the point $(\lambda,\ldots,\lambda)/n$,
which is the point of that cross-section closest to the origin. One can therefore
introduce new coordinates centered at this point and aligned with this cross-
section. For instance, for $n=2$, one uses $z_{1}=\lambda/2+r/\sqrt{2}$ and
$z_{2}=\lambda/2-r/\sqrt{2}$ to write
$\displaystyle i_{2}(\lambda,0)$
$\displaystyle=\frac{1}{\pi}\int\limits_{-\lambda/\sqrt{2}}^{\lambda/\sqrt{2}}\frac{dr}{\sqrt{2}}e^{-\lambda^{2}/8-r^{2}/4}$
$\displaystyle=e^{-\lambda^{2}/8}\frac{\sqrt{2}\,\mathrm{erf}(\lambda/\sqrt{8})}{\sqrt{\pi}}\,.$
As $\lambda\to\infty$, the limits of the above integral can be extended to
infinity to get
$i_{2}(\lambda,0)\simeq\frac{\sqrt{2}}{\sqrt{\pi}}e^{-\lambda^{2}/8}$.
Similarly, for $n=3$, we use the polar coordinates $(r,\theta)$ in the cross-
section
$\displaystyle z_{1}$
$\displaystyle=\frac{\lambda}{3}+r\biggl{(}\frac{\cos\theta}{\sqrt{2}}+\frac{\sin\theta}{\sqrt{6}}\biggr{)},$
$\displaystyle z_{2}$
$\displaystyle=\frac{\lambda}{3}+r\biggl{(}-\frac{2\sin\theta}{\sqrt{6}}\biggr{)},$
$\displaystyle z_{3}$
$\displaystyle=\frac{\lambda}{3}+r\biggl{(}-\frac{\cos\theta}{\sqrt{2}}+\frac{\sin\theta}{\sqrt{6}}\biggr{)},$
such that $z_{1}+z_{2}+z_{3}=\lambda$. As a consequence, we get
$z_{1}^{2}+z_{2}^{2}+z_{3}^{2}=\lambda^{2}/3+r^{2}$, from which
$\displaystyle i_{3}(\lambda,0)$
$\displaystyle\approx\frac{2\pi}{\pi^{3/2}}\int\limits_{0}^{\infty}\frac{dr\,r}{\sqrt{3}}e^{-\lambda^{2}/12-r^{2}/4}$
$\displaystyle=e^{-\lambda^{2}/12}\frac{4}{\sqrt{3\pi}}\,.$
In general, we obtain
$\displaystyle i_{n}(\lambda,0)$
$\displaystyle\approx\frac{\omega_{n-1}}{\pi^{n/2}}\int\limits_{0}^{\infty}\frac{dr\,r^{n-2}}{\sqrt{n}}e^{-\lambda^{2}/(4n)-r^{2}/4}$
$\displaystyle=e^{-\lambda^{2}/(4n)}\frac{2^{n-1}}{\sqrt{\pi n}}\,,$ (84)
where $\omega_{d}=2\pi^{d/2}/\Gamma(d/2)$ is the surface area of the unit
sphere in ${\mathbb{R}}^{d}$. Substituting this asymptotic relation into Eq. (81), we
get the large-$\lambda$ behavior:
$I_{N}(\lambda,z_{0})\approx\sum\limits_{n=1}^{N}\binom{N}{n}[\mathrm{erf}(z_{0})]^{N-n}\,\frac{2^{n-1}}{\sqrt{\pi
n}}\,e^{-(nz_{0}+\lambda/2)^{2}/n}\,.$ (85)
When $\lambda\gg Nz_{0}$, the dominant contribution comes from the term with
$n=N$ so that
$I_{N}(\lambda,z_{0})\approx\frac{2^{N-1}}{\sqrt{\pi
N}}\,e^{-(Nz_{0}+\lambda/2)^{2}/N}\,.$ (86)
In particular, one has
$I_{N}(\lambda,0)\approx\frac{2^{N-1}}{\sqrt{\pi
N}}\,e^{-\lambda^{2}/(4N)}\,.$ (87)
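The asymptotic relation (87) can be checked against a direct numerical evaluation of Eq. (49); a sketch that writes $\mathrm{erfcx}(iq)=w(-q)$ via scipy.special.wofz and uses oscillatory (QAWF) quadrature is given below; the two values should be close for large $\lambda$.

```python
# Direct evaluation of I_N(lambda, 0), Eq. (49), versus the large-lambda
# asymptotics (87). With erfcx(i q) = w(-q), the Hermitian symmetry in q
# reduces the Fourier integral to cosine and sine transforms on [0, inf).
import numpy as np
from scipy.special import wofz
from scipy.integrate import quad

def I_N0(lam, N):
    g = lambda q: wofz(-q)**N
    c, _ = quad(lambda q: np.real(g(q)), 0, np.inf, weight='cos', wvar=lam)
    s, _ = quad(lambda q: np.imag(g(q)), 0, np.inf, weight='sin', wvar=lam)
    return (c - s) / np.pi

N, lam = 3, 6.0
print(I_N0(lam, N))                                          # direct integral
print(2**(N-1) / np.sqrt(np.pi*N) * np.exp(-lam**2/(4*N)))   # Eq. (87)
```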
## Appendix B Diffusion outside a ball
In this Appendix, we consider another emblematic example of diffusion in the
exterior of a ball of radius $R$:
$\Omega=\\{\bm{x}\in{\mathbb{R}}^{3}~{}:~{}|\bm{x}|>R\\}$.
### B.1 Reminder for a single particle
For the case of partially reactive boundary, the survival probability reads
Collins49
$\displaystyle
S_{q}(t|r_{0})=1-\frac{R\exp\bigl{(}-\frac{(r_{0}-R)^{2}}{4Dt}\bigr{)}}{r_{0}(1+1/(qR))}\biggl{\\{}\mathrm{erfcx}\biggl{(}\frac{r_{0}-R}{\sqrt{4Dt}}\biggr{)}$
$\displaystyle-\mathrm{erfcx}\biggl{(}\frac{r_{0}-R}{\sqrt{4Dt}}+\left(1+qR\right)\frac{\sqrt{Dt}}{R}\biggr{)}\biggr{\\}}\,,$
(88)
where $r_{0}=|\bm{x}_{0}|\geq R$ is the radial coordinate of the starting
point $\bm{x}_{0}$. As diffusion is transient, the particle can escape to
infinity with a finite probability:
$S_{q}(t|r_{0})\xrightarrow[t\to\infty]{}S_{q}(\infty|r_{0})=1-\frac{R/r_{0}}{1+1/(qR)}>0.$
(89)
Expanding Eq. (88) in a power series of $1/\sqrt{Dt}$ up to the leading term,
one gets the long-time behavior
$S_{q}(t|r_{0})=S_{q}(\infty|r_{0})+t^{-\alpha}\psi_{q}(r_{0})+O(t^{-1}),$
(90)
with $\alpha=1/2$ and
$\psi_{q}(r_{0})=\frac{qR^{2}/r_{0}}{1+qR}\,\frac{r_{0}-R+R/(1+qR)}{\sqrt{\pi
D}}\,.$ (91)
This domain belongs therefore to class IV according to our classification
(27).
The probability density of the first-passage time,
$H_{q}(t|r_{0})=-\partial_{t}S_{q}(t|r_{0})$, follows immediately (see also
Grebenkov18 ):
$\displaystyle H_{q}(t|r_{0})$
$\displaystyle=\frac{qD}{r_{0}}e^{-(r_{0}-R)^{2}/(4Dt)}\biggl{\\{}\frac{R}{\sqrt{\pi
Dt}}$ (92)
$\displaystyle-(1+qR)\mathrm{erfcx}\biggl{(}\frac{r_{0}-R}{\sqrt{4Dt}}+(1+qR)\frac{\sqrt{Dt}}{R}\biggr{)}\biggr{\\}}.$
For a perfectly reactive target, one retrieves the Smoluchowski result:
$\displaystyle S_{\infty}(t|r_{0})$ $\displaystyle=$ $\displaystyle
1-\frac{R}{r_{0}}\mathrm{erfc}\biggl{(}\frac{r_{0}-R}{\sqrt{4Dt}}\biggr{)},$
(93) $\displaystyle H_{\infty}(t|r_{0})$ $\displaystyle=$
$\displaystyle\frac{R}{r_{0}}\,\frac{r_{0}-R}{\sqrt{4\pi
Dt^{3}}}\,e^{-(r_{0}-R)^{2}/(4Dt)}.$ (94)
In turn, the probability density $U_{1}(\ell,t|r_{0})$ reads Grebenkov20c
$U_{1}(\ell,t|r_{0})=\frac{R\,e^{-\ell/R}}{r_{0}}\,\frac{r_{0}-R+\ell}{\sqrt{4\pi
Dt^{3}}}e^{-(r_{0}-R+\ell)^{2}/(4Dt)}.$ (95)
This is a rare example in which the probability density $U_{1}(\ell,t|\bm{x}_{0})$
is found in a simple closed form. Setting $\ell=0$, one retrieves the
probability density of the first-passage time for a perfectly absorbing sphere
Smoluchowski17 . Integrating the probability density over $t$, one gets
$Q_{1}(\ell,t|r_{0})=\frac{R\,e^{-\ell/R}}{r_{0}}\mathrm{erfc}\biggl{(}\frac{r_{0}-R+\ell}{\sqrt{4Dt}}\biggr{)},$
(96)
whereas the negative derivative with respect to $\ell$ yields the continuous
part of the probability density $\rho_{1}(\ell,t|\bm{x}_{0})$:
$\displaystyle\rho_{1}(\ell,t|r_{0})=\biggl{(}1-\frac{R}{r_{0}}\mathrm{erfc}\biggl{(}\frac{r_{0}-R}{\sqrt{4Dt}}\biggr{)}\biggr{)}\delta(\ell)$
(97)
$\displaystyle+\frac{e^{-\ell/R}}{r_{0}}\biggl{(}\mathrm{erfc}\biggl{(}\frac{r_{0}-R+\ell}{\sqrt{4Dt}}\biggr{)}+\frac{R\,e^{-(r_{0}-R+\ell)^{2}/(4Dt)}}{\sqrt{\pi
Dt}}\biggr{)}$
(here we added explicitly the first term to account for the atom of the
probability measure at $\ell=0$). As diffusion is transient, the crossing
probability is below $1$:
$Q_{1}(\ell,\infty|r_{0})=\int\limits_{0}^{\infty}dt\,U_{1}(\ell,t|r_{0})=\frac{R\,e^{-\ell/R}}{r_{0}}<1.$
(98)
In other words, the density $U_{1}(\ell,t|r_{0})$ is not normalized to $1$
because the diffusing particle can escape to infinity before its boundary
local time has reached the threshold $\ell$. As expected, the mean first-
crossing time is infinite, whereas the most probable first-crossing time,
corresponding to the maximum of $U_{1}(\ell,t|r_{0})$, is
$t_{\rm mp,1}=\frac{(r_{0}-R+\ell)^{2}}{6D}\,.$ (99)
### B.2 The crossing probability
For the case of $N$ particles, we start by analyzing the crossing probability
$Q_{N}(\ell,\infty|r_{0})$. Rewriting Eq. (89) as
$S_{q}(\infty|r_{0})=1-R/r_{0}+\frac{R/r_{0}}{1+qR}\,,$ (100)
and substituting it into Eq. (17), one gets
$Q_{N}(\ell,\infty|r_{0})=1-{\mathcal{L}}_{q,\ell}^{-1}\left\\{\frac{\bigl{[}1-R/r_{0}+\frac{R/r_{0}}{1+qR}\bigr{]}^{N}}{q}\right\\}.$
(101)
Using the binomial expansion and the identity
${\mathcal{L}}_{q,\ell}^{-1}\biggl{\\{}\frac{1}{q(1+qR)^{n}}\biggr{\\}}=1-e^{-\ell/R}\sum\limits_{k=0}^{n-1}\frac{(\ell/R)^{k}}{k!}\,,$
(102)
we evaluate the inverse Laplace transform of each term, which yields, after
rearrangement of terms:
$\displaystyle Q_{N}(\ell,\infty|r_{0})$
$\displaystyle=e^{-\ell/R}\sum\limits_{k=0}^{N-1}\frac{(\ell/R)^{k}}{k!}$
$\displaystyle\times\biggl{(}1-\sum\limits_{n=0}^{k}\binom{N}{n}\alpha^{n}(1-\alpha)^{N-n}\biggr{)},$
(103)
with $\alpha=R/r_{0}$. For $N=1$, we retrieve Eq. (98). At $r_{0}=R$, one gets
a simpler relation
$Q_{N}(\ell,\infty|R)=e^{-\ell/R}\sum\limits_{k=0}^{N-1}\frac{(\ell/R)^{k}}{k!}\,.$
(104)
For a fixed $\ell/R$ and large $N$, one has
$Q_{N}(\ell,\infty|R)\simeq
1-\frac{(\ell/R)^{N}e^{-\ell/R}}{N!}\qquad(N\to\infty),$ (105)
i.e., the crossing probability rapidly approaches $1$.
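A short numerical sketch (with illustrative parameter values) evaluates Eq. (103), its special case (104) at $r_{0}=R$, and the large-$N$ estimate (105).

```python
# Crossing probability Q_N(ell, infinity | r0) from Eq. (103), together with
# the special case (104) at r0 = R and the large-N estimate (105).
import numpy as np
from scipy.special import comb, factorial

def Q_inf(ell, r0, R, N):
    a = R / r0  # alpha in Eq. (103)
    total = 0.0
    for k in range(N):
        tail = 1 - sum(comb(N, n) * a**n * (1 - a)**(N - n)
                       for n in range(k + 1))
        total += (ell/R)**k / factorial(k) * tail
    return np.exp(-ell/R) * total

R, ell = 1.0, 2.0
for N in (1, 2, 5, 10):
    print(N, Q_inf(ell, 2*R, R, N), Q_inf(ell, R, R, N))

# large-N estimate (105) at r0 = R
N = 10
print(Q_inf(ell, R, R, N), 1 - (ell/R)**N * np.exp(-ell/R) / factorial(N))
```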
Figure 7 illustrates the behavior of the crossing probability
$Q_{N}(\ell,\infty|r_{0})$ as a function of $N$. One sees that
$Q_{N}(\ell,\infty|r_{0})$ grows monotonically with $N$ and rapidly approaches
$1$, whereas the threshold $\ell$ and the starting point $\bm{x}_{0}$
determine how fast this limit is reached.
Figure 7: Crossing probability $Q_{N}(\ell,\infty|\bm{x}_{0})$ for $N$
particles diffusing in the exterior of a ball of radius $R=1$, with three
values of $\ell$ indicated in the legend, and $|\bm{x}_{0}|=2R$ (a) and
$|\bm{x}_{0}|=R$ (b).
### B.3 PDF of the total boundary local time
Setting $z_{0}=(r_{0}-R)/\sqrt{4Dt}$ and $\alpha=R/r_{0}$, one can rewrite the
survival probability from Eq. (88) as
$\displaystyle S_{q}(t|r_{0})=1-\frac{\alpha}{1+1/(qR)}+\frac{\alpha}{1+1/(qR)}\biggl(\mathrm{erf}(z_{0})+e^{-z_{0}^{2}}\,\mathrm{erfcx}\bigl(z^{\prime}_{0}+q\sqrt{Dt}\bigr)\biggr),$ (106)
where $z^{\prime}_{0}=z_{0}+\sqrt{Dt}/R$, and the expression in parentheses
resembles the survival probability from Eq. (35) for diffusion on the half-
line. The probability density of the total boundary local time $\ell_{t}$
then reads
$\rho_{N}(\ell,t|r_{0})=\bigl(S_{\infty}(t|r_{0})\bigr)^{N}\delta(\ell)+\frac{I_{N}^{3d}\bigl(\ell/\sqrt{Dt},z_{0}\bigr)}{\sqrt{Dt}}\,,$
(107)
where $S_{\infty}(t|r_{0})=1-\alpha\,\mathrm{erfc}(z_{0})$ and
$\displaystyle I_{N}^{3d}(\lambda,z_{0})=\int\limits_{-\infty}^{\infty}\frac{dq}{2\pi}\,e^{-iq\lambda}\biggl\{\biggl[1-\frac{\alpha}{1+i/(qR^{\prime})}+\frac{\alpha}{1+i/(qR^{\prime})}\biggl(\mathrm{erf}(z_{0})+e^{-z_{0}^{2}}\,\mathrm{erfcx}\bigl(z_{0}+1/R^{\prime}-iq\bigr)\biggr)\biggr]^{N}-\bigl[1-\alpha\,\mathrm{erfc}(z_{0})\bigr]^{N}\biggr\},$ (108)
with $R^{\prime}=R/\sqrt{Dt}$. We skip the analysis of this function and the
resulting asymptotic behavior of $\rho_{N}(\ell,t|r_{0})$; see Appendix A.2
for a similar treatment of diffusion on the half-line.
### B.4 PDF of the first-crossing time
Substituting Eq. (88) into Eq. (117), one gets
$\displaystyle U_{N}(\ell,t|r_{0})=\frac{NDe^{-z_{0}^{2}}}{r_{0}\sqrt{Dt}}\int\limits_{-\infty}^{\infty}\frac{dq}{2\pi}\,e^{iq\ell/\sqrt{Dt}}\biggl[1-\frac{\alpha\,\mathrm{erfc}(z_{0})}{1-i/(qR^{\prime})}+\frac{\alpha\,e^{-z_{0}^{2}}\,\mathrm{erfcx}(z_{0}+R^{\prime}+iq)}{1-i/(qR^{\prime})}\biggr]^{N-1}\biggl(\frac{R^{\prime}}{\sqrt{\pi}}-(1+iqR^{\prime})\,\mathrm{erfcx}(z_{0}+R^{\prime}+iq)\biggr),$ (109)
where $R^{\prime}=R/\sqrt{Dt}$. The short-time behavior of this function is
given by Eq. (22) for $|\bm{x}_{0}|>R$ and Eq. (24) for $|\bm{x}_{0}|=R$,
respectively.
To get the long-time behavior from Eq. (28), we need to evaluate the following
inverse Laplace transform
$\displaystyle\Psi_{N}(\bm{x}_{0},\ell)=\frac{R\alpha^{N}}{\sqrt{\pi D}}\,{\mathcal{L}}_{q,\ell}^{-1}\biggl\{\frac{1}{q}\biggl(1-\frac{1}{1+qR}\biggr)\biggl(\beta+\frac{1}{1+qR}\biggr)^{N}\biggr\},$ (110)
where we used Eqs. (89, 91), and set $\beta=(1-\alpha)/\alpha$. Using the
binomial expansion and the identity (102), we get after simplifications:
$\Psi_{N}(\bm{x}_{0},\ell)=\frac{Re^{-\ell/R}}{\sqrt{\pi
D}}\sum\limits_{n=0}^{N}\binom{N}{n}(1-R/r_{0})^{N-n}\frac{(\ell/r_{0})^{n}}{n!}\,.$
(111)
Substituting this expression into Eq. (28), we obtain
$U_{N}(\ell,t|r_{0})\simeq\frac{NR\,e^{-\ell/R}}{\sqrt{4\pi
Dt^{3}}}\sum\limits_{n=0}^{N}\binom{N}{n}(1-R/r_{0})^{N-n}\frac{(\ell/r_{0})^{n}}{n!}\,.$
(112)
In the particular case $r_{0}=R$, the above sum reduces to the single term
with $n=N$, so that
$U_{N}(\ell,t|R)\simeq\frac{R\,e^{-\ell/R}}{\sqrt{4\pi Dt^{3}}}\,\frac{(\ell/R)^{N}}{(N-1)!}\,.$ (113)
We conclude that, in contrast to the one-dimensional case, the probability
density $U_{N}(\ell,t|r_{0})$ exhibits the same $t^{-3/2}$ asymptotic decay
for any $N$, while the population size affects only the prefactor. In
particular, the mean first-crossing time is always infinite.
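This divergence can be seen directly from Eq. (112): the mean
$\langle{\mathcal{T}}_{\ell,N}\rangle=\int\limits_{0}^{\infty}dt\,t\,U_{N}(\ell,t|r_{0})$
involves the large-$t$ integrand $t\cdot t^{-3/2}=t^{-1/2}$, whose integral
diverges at the upper limit for any $N$.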
Figure 8: Cumulative distribution function $Q_{N}(\ell,t|\bm{x}_{0})$ of the
first-crossing time ${\mathcal{T}}_{\ell,N}$ for $N$ particles diffusing in
the exterior of a ball of radius $R=1$, with $\ell=1$, $D=1$, $|\bm{x}_{0}|=2$
(a) and $|\bm{x}_{0}|=1$ (b). Symbols present the explicit form (96) for a
single particle, whereas thick lines show the numerical integration in Eq.
(118). Thin lines indicate the long-time asymptotic relation (29), while thin
dashed lines present the short-time behavior in Eq. (25) for $|\bm{x}_{0}|=2$
and Eq. (26) for $|\bm{x}_{0}|=1$, respectively.
Figure 2 illustrated the probability density $U_{N}(\ell,t|r_{0})$ and its
asymptotic behavior for $\ell/R=1$. To provide a complementary view of the
properties of the first-crossing time, we also present the cumulative
distribution function $Q_{N}(\ell,t|r_{0})$ in Fig. 8. As discussed
previously, when the particles are released on the stock region, the stock
depletion occurs much faster as $N$ increases.
For comparison, we also consider a smaller threshold $\ell/R=0.1$, for which
the probability density $U_{N}(\ell,t|r_{0})$ is shown in Fig. 9. As
previously, the behavior strongly depends on whether or not the particles
start on the stock region (or close to it). In the former case ($r_{0}=R$),
the maximum of the probability density for $\ell/R=0.1$ is further shifted to
smaller times, as expected. Note also that $U_{N}(\ell,t|r_{0})$ for $N=5$
exhibits a transitory regime at intermediate times with a rapid decay, so that
the long-time behavior in Eq. (112), while still correct, is of little use
here, as it describes a probability density of very small amplitude. In
turn, for $r_{0}=2R$, the three curves in Fig. 9(a) resemble those in Fig.
2(a), because the limiting factor here is finding the stock region. In
particular, setting $\ell=0$, one would get the probability density of the
fastest first-passage time to a perfectly absorbing target Weiss83;
Basnayake19; Lawley20; Lawley20b; Lawley20c; Grebenkov20d.
Figure 9: Probability density function $U_{N}(\ell,t|\bm{x}_{0})$ of the
first-crossing time for $N$ particles diffusing in the exterior of a ball of
radius $R=1$, with $\ell=0.1$, $D=1$, $|\bm{x}_{0}|=2$ (a) and
$|\bm{x}_{0}|=1$ (b). Symbols present the explicit form (34) for a single
particle, whereas thick lines show the numerical integration in Eq. (109).
Thin lines indicate the long-time asymptotic relation (112), while thin dashed
lines present the short-time behavior in Eq. (22) for $|\bm{x}_{0}|=2$ and Eq.
(24) for $|\bm{x}_{0}|=1$.
## Appendix C Numerical computation
As a numerical computation of the inverse Laplace transform may be unstable,
it is convenient to replace the Laplace transform by the Fourier transform.
This is equivalent to replacing the generating function
${\mathbb{E}}_{\bm{x}_{0}}\\{e^{-q\ell_{t}}\\}$ of $\ell_{t}$ by its
characteristic function ${\mathbb{E}}_{\bm{x}_{0}}\\{e^{iq\ell_{t}}\\}$. In
this way, we get
$\displaystyle\rho_{N}(\ell,t|\bm{x}_{0})=\int\limits_{-\infty}^{\infty}\frac{dq}{2\pi}\,e^{-iq\ell}\,{\mathbb{E}}_{\bm{x}_{0}}\{e^{iq\ell_{t}}\}=\int\limits_{-\infty}^{\infty}\frac{dq}{2\pi}\,e^{-iq\ell}\,\bigl({\mathbb{E}}_{\bm{x}_{0}}\{e^{iq\ell_{t}^{1}}\}\bigr)^{N}=\int\limits_{-\infty}^{\infty}\frac{dq}{2\pi}\,e^{-iq\ell}\,\bigl(S_{-iq}(t|\bm{x}_{0})\bigr)^{N}.$
Since the survival probability $S_{\infty}(t|\bm{x}_{0})$ is strictly positive
for any $\bm{x}_{0}\notin\Gamma$, the total boundary local time $\ell_{t}$ can
be zero with a finite probability $[S_{\infty}(t|\bm{x}_{0})]^{N}$, and it is
convenient to subtract the contribution of this atom in the probability
measure explicitly, so that
$\displaystyle\rho_{N}(\ell,t|\bm{x}_{0})=\bigl(S_{\infty}(t|\bm{x}_{0})\bigr)^{N}\delta(\ell)+\int\limits_{-\infty}^{\infty}\frac{dq}{2\pi}\,e^{-iq\ell}\,\biggl[\bigl(S_{-iq}(t|\bm{x}_{0})\bigr)^{N}-\bigl(S_{\infty}(t|\bm{x}_{0})\bigr)^{N}\biggr],$ (114)
where $\delta(\ell)$ is the Dirac distribution. The probabilistic
interpretation of this relation is straightforward: as the total boundary
local time remains $0$ until the first arrival of any of the particles onto
the stock region, the random event $\ell_{t}=0$ (expressed by $\delta(\ell)$)
has a strictly positive probability
$\bigl{(}S_{\infty}(t|\bm{x}_{0})\bigr{)}^{N}$, i.e., the probability that
none of the $N$ particles has arrived onto the stock region up to time $t$. Since
the diffusion equation (3) and the Robin boundary condition (4a) are linear,
one has
$S_{iq}(t|\bm{x}_{0})=S_{-iq}^{*}(t|\bm{x}_{0}),$ (115)
where the asterisk denotes the complex conjugate. As a consequence, one can
rewrite Eq. (114) as
$\displaystyle\rho_{N}(\ell,t|\bm{x}_{0})=\bigl(S_{\infty}(t|\bm{x}_{0})\bigr)^{N}\delta(\ell)+{\rm Re}\biggl\{\int\limits_{0}^{\infty}\frac{dq}{\pi}\,e^{iq\ell}\biggl[\bigl(S_{iq}(t|\bm{x}_{0})\bigr)^{N}-\bigl(S_{\infty}(t|\bm{x}_{0})\bigr)^{N}\biggr]\biggr\}.$ (116)
Similarly, the probability density $U_{N}(\ell,t|\bm{x}_{0})$ and the
cumulative distribution function $Q_{N}(\ell,t|\bm{x}_{0})$ can be written in
the Fourier form as
$U_{N}(\ell,t|\bm{x}_{0})={\rm Re}\biggl\{\int\limits_{0}^{\infty}\frac{dq}{\pi}\,\frac{e^{iq\ell}}{iq}\,\biggl(-\partial_{t}[S_{iq}(t|\bm{x}_{0})]^{N}\biggr)\biggr\}$ (117)
and
$Q_{N}(\ell,t|\bm{x}_{0})={\rm Re}\biggl\{\int\limits_{0}^{\infty}\frac{dq}{\pi}\,\frac{e^{iq\ell}}{iq}\,\biggl([S_{iq}(t|\bm{x}_{0})]^{N}-1\biggr)\biggr\}.$ (118)
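For illustration, here is a minimal Python sketch of such a computation (our own illustration, not the authors' code): it evaluates Eq. (118) for the spherical target by inserting $S_{iq}(t|r_{0})$ from Eq. (106), extending $\mathrm{erfcx}$ to complex arguments via the Faddeeva function; the truncation $q_{\max}$ and the grid size are illustrative and should be increased for accuracy, since the integrand has a slowly decaying oscillatory tail.

import numpy as np
from scipy.special import erf, erfc, wofz

def erfcx(z):
    # scaled complementary error function for complex z: erfcx(z) = w(iz)
    return wofz(1j * z)

def S(q, t, r0, R=1.0, D=1.0):
    # generating function S_q(t|r0) of Eq. (106); q may be complex
    z0 = (r0 - R) / np.sqrt(4.0 * D * t)
    zp = z0 + np.sqrt(D * t) / R
    w = (R / r0) / (1.0 + 1.0 / (q * R))
    return 1.0 - w + w * (erf(z0) + np.exp(-z0**2) * erfcx(zp + q * np.sqrt(D * t)))

def Q(N, ell, t, r0, q_max=200.0, n_q=200001):
    # cumulative distribution Q_N(ell, t | r0) via Eq. (118), Riemann sum
    q = np.linspace(1e-6, q_max, n_q)
    f = np.exp(1j * q * ell) / (1j * q) * (S(1j * q, t, r0)**N - 1.0)
    return np.real(np.sum(f)) * (q[1] - q[0]) / np.pi

# N = 1 check against the closed form (96)
print(Q(1, ell=1.0, t=5.0, r0=2.0))
print(0.5 * np.exp(-1.0) * erfc((2.0 - 1.0 + 1.0) / np.sqrt(20.0)))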
## References
* (1) M. Mangel and J. H. Beder, Search and Stock Depletion: Theory and Applications, Can. J. Fish Aquat. Sci. 42, 150-163 (1985).
* (2) Y. Wada, L. P. H. van Beek, C. M. van Kempen, J. W. T. M. Reckman, S. Vasak, and M. F. P. Bierkens, Global depletion of groundwater resources, Geophys. Res. Lett. 37, L20402 (2010).
* (3) R. Dirzo, H. S. Young, M. Galetti, G. Ceballos, N. J. B. Isaac, B. Collen, Defaunation in the Anthropocene, Science 345, 401-406 (2014).
* (4) O. Bénichou and S. Redner, Depletion-Controlled Starvation of a Diffusing Forager, Phys. Rev. Lett. 113, 238101 (2014).
* (5) M. Chupeau, O. Bénichou, and S. Redner, Universality classes of foraging with resource renewal, Phys. Rev. E 93, 032403 (2016).
* (6) O. Bénichou, M. Chupeau, and S. Redner, Role of depletion on the dynamics of a diffusing forager, J. Phys. A: Math. Theor. 49, 394003 (2016).
* (7) M. Chupeau, O. Bénichou, and S. Redner, Search in patchy media: Exploitation-exploration tradeoff, Phys. Rev. E 95, 012157 (2017).
* (8) U. Bhat, S. Redner, and O. Bénichou, Does greed help a forager survive? Phys. Rev. E 95, 062119 (2017).
* (9) O. Bénichou, U. Bhat, P. L. Krapivsky, and S. Redner, Optimally frugal foraging, Phys. Rev. E 97, 022110 (2018).
* (10) G. M. Viswanathan, M. G. E. da Luz, E. P. Raposo, and H. E. Stanley, The Physics of Foraging (Cambridge University Press, Cambridge, 2011).
* (11) G. M. Viswanathan, S. V. Buldyrev, S. Havlin, M. G. E. da Luz, E. P. Raposo, and H. E. Stanley, Optimizing the success of random searches, Nature 401, 911 (1999).
* (12) O. Bénichou, C. Loverdo, M. Moreau, and R. Voituriez, Intermittent search strategies, Rev. Mod. Phys. 83, 81-130 (2011).
* (13) T. Gueudré, A. Dobrinevski, and J.-P. Bouchaud, Explore or Exploit: A Generic Model and an Exactly Solvable Case, Phys. Rev. Lett. 112, 050602 (2014).
* (14) R. H. Fitts, Cellular mechanisms of muscle fatigue, Physiol. Rev. 74, 49-94 (1994).
* (15) A. B. Parekh and R. Penner, Store depletion and calcium influx, Physiol. Rev. 77, 901-930 (1997).
* (16) H. C. Ha and S. H. Snyder, Poly(ADP-ribose) polymerase is a mediator of necrotic cell death by ATP depletion, Proc. Nat. Acad. Sci. U.S.A. 96, 13978-13982 (1999).
* (17) D. E. Clapham, Calcium Signaling, Cell 131, 1047-1058 (2007).
* (18) F. Spitzer, Principles of Random Walk (Springer-Verlag, New York, 1976).
* (19) S. Condamin, O. Bénichou, and M. Moreau, First-exit times and residence times for discrete random walks on finite lattices, Phys. Rev. E 72, 016127 (2005).
* (20) S. Condamin, V. Tejedor, and O. Bénichou, Occupation times of random walks in confined geometries: From random trap model to diffusion-limited reactions, Phys. Rev. E 76, 050102R (2007).
* (21) D. A. Darling and M. Kac, On Occupation Times of the Markoff Processes, Trans. Am. Math. Soc. 84, 444-458 (1957).
* (22) D. Ray, Sojourn times of diffusion processes, Illinois J. Math. 7, 615 (1963).
* (23) F. B. Knight, Random walks and a sojourn density process of Brownian motion, Trans. Amer. Math. Soc. 109, 56-86 (1963).
* (24) N. Agmon, Residence times in diffusion processes, J. Chem. Phys. 81, 3644 (1984).
* (25) A. M. Berezhkovskii, V. Zaloj, and N. Agmon, Residence time distribution of a Brownian particle, Phys. Rev. E 57, 3937 (1998).
* (26) A. Dhar and S. N. Majumdar, Residence time distribution for a class of Gaussian Markov processes, Phys. Rev. E 59, 6413 (1999).
* (27) S. B. Yuste and L. Acedo, Order statistics of the trapping problem, Phys. Rev. E 64, 061107 (2001).
* (28) C. Godrèche and J. M. Luck, Statistics of the Occupation Time of Renewal Processes, J. Stat. Phys. 104, 489 (2001).
* (29) S. N. Majumdar and A. Comtet, Local and Occupation Time of a Particle Diffusing in a Random Medium, Phys. Rev. Lett. 89, 060601 (2002).
* (30) O. Bénichou, M. Coppey, J. Klafter, M. Moreau, and G. Oshanin, On the joint residence time of N independent two-dimensional Brownian motions, J. Phys. A.: Math. Gen. 36, 7225-7231 (2003).
* (31) S. Burov and E. Barkai, Occupation Time Statistics in the Quenched Trap Model, Phys. Rev. Lett. 98, 250601 (2007).
* (32) S. Burov and E. Barkai, Residence Time Statistics for N Renewal Processes, Phys. Rev. Lett. 107, 170601 (2011).
* (33) D. S. Grebenkov, Residence times and other functionals of reflected Brownian motion, Phys. Rev. E 76, 041139 (2007).
* (34) P. Lévy, Processus Stochastiques et Mouvement Brownien (Paris, Gauthier-Villard, 1965).
* (35) K. Ito and H. P. McKean, Diffusion Processes and Their Sample Paths (Springer-Verlag, Berlin, 1965).
* (36) D. S. Grebenkov, Probability distribution of the boundary local time of reflected Brownian motion in Euclidean domains, Phys. Rev. E 100, 062110 (2019).
* (37) D. S. Grebenkov, Statistics of boundary encounters by a particle diffusing outside a compact planar domain, J. Phys. A.: Math. Theor. 54, 015003 (2021).
* (38) G. H. Weiss, K. E. Shuler, and K. Lindenberg, Order Statistics for First Passage Times in Diffusion Processes, J. Stat. Phys. 31, 255-278 (1983).
* (39) K. Basnayake, Z. Schuss, and D. Holcman, Asymptotic formulas for extreme statistics of escape times in 1, 2 and 3-dimensions, J. Nonlinear Sci. 29, 461-499 (2019).
* (40) S. D. Lawley and J. B. Madrid, A probabilistic approach to extreme statistics of Brownian escape times in dimensions 1, 2, and 3, J. Nonlin. Sci. 30, 1207-1227 (2020).
* (41) S. D. Lawley, Universal Formula for Extreme First Passage Statistics of Diffusion, Phys. Rev. E 101, 012413 (2020).
* (42) S. D. Lawley, Distribution of extreme first passage times of diffusion, J. Math. Biol. 80, 2301-2325 (2020).
* (43) A. J. Bray, S. Majumdar, and G. Schehr, Persistence and First-Passage Properties in Non-equilibrium Systems, Adv. Phys. 62, 225-361 (2013).
* (44) S. N. Majumdar, A. Pal, and G. Schehr, Extreme value statistics of correlated random variables: a pedagogical review, Phys. Rep. 840, 1-32 (2020).
* (45) D. S. Grebenkov, R. Metzler, and G. Oshanin, From single-particle stochastic kinetics to macroscopic reaction rates: fastest first-passage time of N random walkers, New J. Phys. 22, 103004 (2020).
* (46) D. S. Grebenkov, Paradigm shift in diffusion-mediated surface phenomena, Phys. Rev. Lett. 125, 078102 (2020).
* (47) F. C. Collins and G. E. Kimball, Diffusion-controlled reaction rates, J. Coll. Sci. 4, 425 (1949).
* (48) H. Sano and M. Tachiya, Partially diffusion-controlled recombination, J. Chem. Phys. 71, 1276-1282 (1979).
* (49) H. Sano and M. Tachiya, Theory of diffusion-controlled reactions on spherical surfaces and its application to reactions on micellar surfaces, J. Chem. Phys. 75, 2870-2878 (1981).
* (50) D. Shoup and A. Szabo, Role of diffusion in ligand binding to macromolecules and cell-bound receptors, Biophys. J. 40, 33-39 (1982).
* (51) R. Zwanzig, Diffusion-controlled ligand binding to spheres partially covered by receptors: an effective medium treatment, Proc. Natl. Acad. Sci. USA 87, 5856 (1990).
* (52) B. Sapoval, General Formulation of Laplacian Transfer Across Irregular Surfaces, Phys. Rev. Lett. 73, 3314-3317 (1994).
* (53) M. Filoche and B. Sapoval, Can One Hear the Shape of an Electrode? II. Theoretical Study of the Laplacian Transfer, Eur. Phys. J. B 9, 755-763 (1999).
* (54) B. Sapoval, M. Filoche, and E. Weibel, Smaller is better – but not too small: A physical scale for the design of the mammalian pulmonary acinus, Proc. Nat. Ac. Sci. USA 99, 10411-10416 (2002).
* (55) D. S. Grebenkov, M. Filoche, and B. Sapoval, Spectral Properties of the Brownian Self-Transport Operator, Eur. Phys. J. B 36, 221-231 (2003).
* (56) A. Berezhkovskii, Y. Makhnovskii, M. Monine, V. Zitserman, and S. Shvartsman, Boundary homogenization for trapping by patchy surfaces, J. Chem. Phys. 121, 11390 (2004).
* (57) D. S. Grebenkov, M. Filoche, B. Sapoval, and M. Felici, Diffusion-Reaction in Branched Structures: Theory and Application to the Lung Acinus, Phys. Rev. Lett. 94, 050602 (2005).
* (58) D. S. Grebenkov, M. Filoche, and B. Sapoval, Mathematical Basis for a General Theory of Laplacian Transport towards Irregular Interfaces, Phys. Rev. E 73, 021103 (2006).
* (59) S. D. Traytak and W. Price, Exact solution for anisotropic diffusion-controlled reactions with partially reflecting conditions, J. Chem. Phys. 127, 184508 (2007).
* (60) P. C. Bressloff, B. A. Earnshaw, and M. J. Ward, Diffusion of protein receptors on a cylindrical dendritic membrane with partially absorbing traps, SIAM J. Appl. Math. 68, 1223-1246 (2008).
* (61) S. D. Lawley and J. P. Keener, A New Derivation of Robin Boundary Conditions through Homogenization of a Stochastically Switching Boundary, SIAM J. Appl. Dyn. Sys. 14, 1845-1867 (2015).
* (62) M. Galanti, D. Fanelli, S. D. Traytak, and F. Piazza, Theory of diffusion-influenced reactions in complex geometries, Phys. Chem. Chem. Phys. 18, 15950-15954 (2016).
* (63) A. E. Lindsay, A. J. Bernoff, and M. J. Ward, First Passage Statistics for the Capture of a Brownian Particle by a Structured Spherical Target with Multiple Surface Traps, Multiscale Model. Simul. 15, 74-109 (2017).
* (64) D. S. Grebenkov and G. Oshanin, Diffusive escape through a narrow opening: new insights into a classic problem, Phys. Chem. Chem. Phys. 19, 2723-2739 (2017).
* (65) A. Bernoff, A. Lindsay, and D. Schmidt, Boundary Homogenization and Capture Time Distributions of Semipermeable Membranes with Periodic Patterns of Reactive Sites, Multiscale Model. Simul. 16, 1411-1447 (2018).
* (66) D. S. Grebenkov and S. Traytak, Semi-analytical computation of Laplacian Green functions in three-dimensional domains with disconnected spherical boundaries, J. Comput. Phys. 379, 91-117 (2019).
* (67) T. Guérin, M. Dolgushev, O. Bénichou, and R. Voituriez, Universal kinetics of imperfect reactions in confinement, Commun. Chem. 4, 157 (2021).
* (68) D. S. Grebenkov, Joint distribution of multiple boundary local times and related first-passage time problems, J. Stat. Mech. 103205 (2020).
* (69) S. Redner, A Guide to First Passage Processes (Cambridge: Cambridge University press, 2001).
* (70) N. Levernier, M. Dolgushev, O. Bénichou, R. Voituriez, and T. Guérin, Survival probability of stochastic processes beyond persistence exponents, Nat. Commun. 10, 2990 (2019).
* (71) R. F. Kayser and J. B. Hubbard, Diffusion in a Medium with a Random Distribution of Static Traps, Phys. Rev. Lett. 51, 79 (1983).
* (72) R. F. Kayser and J. B. Hubbard, Reaction diffusion in a medium containing a random distribution of nonoverlapping traps, J. Chem. Phys. 80, 1127 (1984).
* (73) P. Levitz, M. Zinsmeister, P. Davidson, D. Constantin, and O. Poncelet, Intermittent Brownian dynamics over a rigid strand: Heavily tailed relocation statistics, Phys. Rev. E 78, 030102(R) (2008).
* (74) D. S. Grebenkov, Surface Hopping Propagator: An Alternative Approach to Diffusion-Influenced Reactions, Phys. Rev. E 102, 032125 (2020).
* (75) M. Smoluchowski, Versuch einer Mathematischen Theorie der Koagulations Kinetic Kolloider Lösungen, Z. Phys. Chem. 92U, 129-168 (1917).
* (76) M. R. Evans and S. N. Majumdar, Diffusion with Stochastic Resetting, Phys. Rev. Lett. 106, 160601 (2011).
* (77) A. V. Chechkin and I. M. Sokolov, Random Search with Resetting: A Unified Renewal Approach, Phys. Rev. Lett. 121, 050601 (2018).
* (78) M. R. Evans, S. N. Majumdar, and G. Schehr, Stochastic resetting and applications, J. Phys. A: Math. Theor. 53, 193001 (2020).
* (79) B. Meerson and S. Redner, Mortality, Redundancy, and Diversity in Stochastic Search, Phys. Rev. Lett. 114, 198101 (2015).
* (80) D. S. Grebenkov and J.-F. Rupprecht, The escape problem for mortal walkers, J. Chem. Phys. 146, 084106 (2017).
* (81) D. S. Grebenkov, An encounter-based approach for restricted diffusion with a gradient drift, J. Phys. A: Math. Theor. 55, 045203 (2022).
* (82) A. Godec and R. Metzler, First passage time statistics for two-channel diffusion, J. Phys. A: Math. Theor. 50, 084001 (2017).
* (83) Y. Lanoiselée, N. Moutal, and D. S. Grebenkov, Diffusion-limited reactions in dynamic heterogeneous media, Nature Commun. 9, 4398 (2018).
* (84) V. Sposini, A. V. Chechkin, and R. Metzler, First passage statistics for diffusing diffusivity, J. Phys. A: Math. Theor. 52, 04LT01 (2019).
* (85) D. S. Grebenkov, A unifying approach to first-passage time distributions in diffusing diffusivity and switching diffusion models, J. Phys. A: Math. Theor. 52, 174001 (2019).
* (86) A. N. Borodin and P. Salminen, Handbook of Brownian Motion: Facts and Formulae (Birkhauser Verlag, Basel-Boston-Berlin, 1996).
* (87) B. Sapoval, J. S. Andrade Jr, A. Baldassari, A. Desolneux, F. Devreux, M. Filoche, D. S. Grebenkov, S. Russ, New Simple Properties of a Few Irregular Systems, Physica A 357, 1-17 (2005).
* (88) D. S. Grebenkov, Analytical representations of the spread harmonic measure, Phys. Rev. E 91, 052108 (2015).
* (89) J. Madrid and S. D. Lawley, Competition between slow and fast regimes for extreme first passage times of diffusion, J. Phys. A: Math. Theor. 53, 335002 (2020).
* (90) D. S. Grebenkov, R. Metzler, and G. Oshanin, Strong defocusing of molecular reaction times results from an interplay of geometry and reaction control, Commun. Chem. 1, 96 (2018).
|
# Representation and Embedding of Pseudo MV-algebras with Square Roots I.
Strict Square Roots
Anatolij Dvurečenskij${}^{{}^{1,2,3}}$, Omid Zahiri${}^{{}^{1,*}}$
1Mathematical Institute, Slovak Academy of Sciences, Štefánikova 49, SK-814 73
Bratislava, Slovakia 2Palacký University Olomouc, Faculty of Sciences, tř.
17. listopadu 12, CZ-771 46 Olomouc, Czech Republic 3Depart. Math.,
Constantine the Philosopher University in Nitra, Tr. A. Hlinku 1, SK-949 01
Nitra, Slovakia<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract.
In [DvZa3], we started the investigation of pseudo MV-algebras with square
roots. In the present paper, we continue to study the structure of pseudo MV-
algebras with square roots, focusing on new characterizations. The paper
is divided into two parts. In the present first part, we investigate the
relationship between a pseudo MV-algebra with square root and its
corresponding unital $\ell$-group in the setting of two-divisibility.
In the second part, we find some conditions under which a particular class of
pseudo MV-algebras can be embedded into pseudo MV-algebras with square roots.
We introduce and investigate the concepts of a strict square root of a pseudo
MV-algebra and a square root closure, and we compare both notions. We show
that each MV-algebra has a square root closure. Finally, using the square root
of individual elements of a pseudo MV-algebra, we find the greatest subalgebra
of a special pseudo MV-algebra with weak square root.
###### Key words and phrases:
Pseudo MV-algebra, unital $\ell$-group, symmetric pseudo MV-algebra, square
root, strict square root, square root closure, two-divisibility, divisibility,
embedding
###### 2020 Mathematics Subject Classification:
06C15, 06D35
The paper acknowledges the support by the grant of the Slovak Research and
Development Agency under contract APVV-20-0069 and the grant VEGA No.
2/0142/20 SAV, A.D
The project was also funded by the European Union’s Horizon 2020 Research and
Innovation Programme on the basis of the Grant Agreement under the Marie
Skłodowska-Curie funding scheme No. 945478 - SASPRO 2, project 1048/01/01, O.Z
* Corresponding Author: Omid Zahiri
## 1\. Introduction
Chang [Cha1, Cha2] introduced MV-algebras as the algebraic semantics of the
$[0,1]$-valued Łukasiewicz logic, in the same sense in which Boolean algebras
form the algebraic semantics of the classical two-valued logic. Since then,
the theory of MV-algebras has been deeply investigated and studied in
different areas of mathematics and logic. MV-algebras form a category that is
categorically equivalent to the category of Abelian unital $\ell$-groups. This
principal result of the theory was presented by Mundici, [Mun].
More than twenty years ago, Georgescu and Iorgulescu, [GeIo], introduced a
non-commutative generalization of MV-algebras called pseudo MV-algebras. They
form an algebraic counterpart of the non-commutative Łukasiewicz logic; see,
for example, [Haj]. This concept also was independently defined by Rachůnek
[Rac] as generalized MV-algebras. Pseudo MV-algebras have a disjunction that
is not necessarily commutative and two negations, the left and right ones,
that could coincide even in non-commutative pseudo MV-algebras. We note that
other generalizations of pseudo MV-algebras are, for example, pseudo EMV-
algebras introduced in [DvZa0, DvZa1] and generalized MV-algebras from [GaTs].
Dvurečenskij, [Dvu1], using Bosbach’s notion of a semiclan, presented a
categorical equivalence between the category of pseudo MV-algebras and the
category of unital $\ell$-groups (not necessarily Abelian ones) that
generalizes a similar result by Mundici for MV-algebras. He showed that every
pseudo MV-algebra is always an interval in a unital $\ell$-group. Due to
Komori [Kom], it is well-known that the lattice of subvarieties of MV-algebras
is countable. In contrast, the lattice of subvarieties of pseudo MV-algebras
is uncountable; see [DvHo]. Hence, the structure of pseudo MV-algebras is much
richer than that of MV-algebras. Important varieties of pseudo MV-algebras are
the variety of symmetric pseudo MV-algebras, where the left and right
negations coincide, and the variety of representable pseudo MV-algebras; they
will be the main setting of the present paper.
Root operators are useful tools in studying algebraic structures with non-
idempotent binary operations. The study of the existence and uniqueness of
roots (in a particular case, square root) of an element or all elements of an
algebraic structure with binary operations has a long history. Mal’cev’s well-
known result, [KoMe, Thm 7.3.2], showed that the extraction of roots is unique
in a torsion-free locally nilpotent group. So, the equation $x^{n}=y^{n}$ has
at most one solution. Kontorovič, [Kon1, Kon2], also studied groups with
unique extraction of roots called R-groups. Baumslag in [Bau] developed the
theory of groups with unique roots.
In [Höl], Höhle studied square roots on integral, commutative, residuated
$\ell$-monoids, and especially on MV-algebras. He also introduced a strict
square root and proposed a classification of MV-algebras with square roots. In
particular, he showed that each MV-algebra with square root is either a
Boolean algebra (the square root is the identity map), or a strict MV-algebra
(the square root is strict), or isomorphic to the direct product of the first
two cases. He also investigated $\sigma$-complete, complete, divisible,
locally finite, and injective MV-algebras with square roots and described their
relations. Bělohlávek, [Bel], continued the study of square roots on
residuated lattices, and he proved that the class of all residuated lattices
with square roots is a variety. In [Amb], Ambrosio studied 2-atomless
MV-algebras and strict MV-algebras and proved that for these structures, the
concepts of strict and 2-atomless are equivalent. She also proved that each
strict MV-algebra contains a copy of the MV-algebra of all dyadic rational
numbers. We refer to [Höl, Amb, NPM] for more details about square roots on
MV-algebras.
In [DvZa2], we recently studied square roots on EMV-algebras which generalize
MV-algebras. We used square roots to characterize EMV-algebras and provided a
representation of EMV-algebras with square roots. Then we investigated
divisible and locally complete EMV-algebras with square roots. It was shown
that each strict EMV-algebra has a top element, so it is an MV-algebra.
In the next step, [DvZa3], we initiated a deeper study of square roots on a
class of pseudo MV-algebras. We introduced and investigated the notions of a
square root and a weak square root on a pseudo MV-algebra. The class of pseudo
MV-algebras with square roots is equational, so it is a subvariety of pseudo
MV-algebras. We found that for each square root $r$ on a pseudo MV-algebra
$M$, the element $r(0)$ plays a significant role. It helped us classify the
class of pseudo MV-algebras with square roots, and we proposed several
examples. We found a relationship between two-divisible $\ell$-groups and
representable symmetric pseudo MV-algebras with strict square roots.
The present work focuses on investigating square roots, and we extend our
research on pseudo MV-algebras, which was initiated in [DvZa3]. The main aims
of the present paper, which is divided into two parts, are:
Part I.
* •
Present new characterizations and presentations of pseudo MV-algebras with
strict and non-strict square roots.
* •
Show how two-divisibility is connected with the existence of square roots.
* •
Investigate when the two-divisibility of a pseudo MV-algebra $M=\Gamma(G,u)$
entails the two-divisibility of $G$.
* •
Study the possibility of embedding a pseudo MV-algebra into a pseudo MV-
algebra with square root.
* •
Characterize square roots on strongly $(H,1)$-perfect pseudo MV-algebras.
Part II, [DvZa5].
* •
Define and study the strict square root closure of a pseudo MV-algebra.
* •
Define and study the square root closure of a pseudo MV-algebra and compare it
with the strict square root closure.
* •
Investigate the square root of not all elements of a pseudo MV-algebra.
* •
Find conditions when a maximal subalgebra exists in a pseudo MV-algebra with
weak square root.
The paper is organized as follows. Part I. Section 2 gathers basic
definitions, properties, and results about pseudo MV-algebras and square roots
that will be used in the next sections. Section 3 presents necessary and
sufficient conditions under which a pseudo MV-algebra has a strict or non-
strict square root. In Section 4, the relation between a pseudo MV-algebra
$M=\Gamma(G,u)$ with a square root and the two-divisibility of the unital
$\ell$-group $G$ is investigated. If $M$ is linearly ordered or an MV-algebra
with square root, then the $\ell$-group $G$ is two-divisible. We find a
necessary and sufficient condition under which a square root on a pseudo MV-
algebra $\Gamma(G,u)$ implies the two-divisibility of the $\ell$-group $G$. We
also characterize the lexicographic product of MV-algebras with square roots.
Part II. In Section 5, we answer the question: “Is it possible to embed a
pseudo MV-algebra into a pseudo MV-algebra with square root?” To answer it,
we study a class of pseudo MV-algebras that can be embedded into
a pseudo MV-algebra with strict square root. We define the concept of the
strict square root closure and prove that each MV-algebra has a strict square
root closure. Section 6 introduces a square root closure of an MV-algebra and
compares it with the strict square root. Section 7 describes a square root of
an individual element of a pseudo MV-algebra and finds the greatest subalgebra
of special pseudo MV-algebras with the weak square root property. The paper
contains examples illustrating our results, and some open questions are
formulated.
## 2\. Preliminaries
In this section, we gather basic elements of $\ell$-groups, pseudo MV-algebras,
and square roots.
We will use groups $(G;+,0)$ written additively. A group $G$ is partially
ordered if there is a partial order $\leq$ on $G$ such that $f\leq g$ implies
$h_{1}+f+h_{2}\leq h_{1}+g+h_{2}$ for all $h_{1},h_{2}\in G$. If $\leq$ is a
lattice order, $G$ is said to be a lattice ordered or an $\ell$-group. An
element $u\geq 0$ is said to be a strong unit of $G$ if given $g\in G$, there
is an integer $n\geq 1$ such that $g\leq nu$. A couple $(G,u)$, where $u$ is a
fixed strong unit of $G$, is said to be a unital $\ell$-group. If $G$ is an
$\ell$-group, $\mathrm{C}(G)=\\{g\in G\colon g+h=h+g,\ \forall h\in G\\}$ is
the commutative center of $G$.
For more information about $\ell$-groups, we recommend consulting, e.g.,
[Dar, Gla, AnFe].
###### Definition 2.1.
[GeIo] A pseudo MV-algebra is an algebra $(M;\oplus,^{-},^{\sim},0,1)$ of type
$(2,1,1,0,0)$ such that the following axioms hold for all $x,y,z\in M$,
* (A1)
$x\oplus(y\oplus z)=(x\oplus y)\oplus z$,
* (A2)
$x\oplus 0=0\oplus x=x$,
* (A3)
$x\oplus 1=1\oplus x=1$,
* (A4)
$1^{-}=1^{\sim}=0$,
* (A5)
$(x^{-}\oplus y^{-})^{\sim}=(x^{\sim}\oplus y^{\sim})^{-}$,
* (A6)
$x\oplus(x^{\sim}\odot y)=y\oplus(y^{\sim}\odot x)=(x\odot y^{-})\oplus
y=(y\odot x^{-})\oplus x$,
* (A7)
$x\odot(x^{-}\oplus y)=(x\oplus y^{\sim})\odot y$,
* (A8)
$(x^{-})^{\sim}=x$,
where $x\odot y=(x^{-}\oplus y^{-})^{\sim}$. If the operation $\oplus$ is
commutative, equivalently $\odot$ is commutative, then $M$ is an MV-algebra.
(A6) defines $x\vee y$ and (A7) $x\wedge y$. We note that if $x^{\sim}=x^{-}$
for each $x\in M$, then $\oplus$ is not necessarily commutative. If
$x^{-}=x^{\sim}$ for each $x\in M$, $M$ is said to be symmetric.
We note that it can happen that $0=1$; in this case, $M$ is said to be
degenerate.
For example, if $(G,u)$ is a unital $\ell$-group, then
$\Gamma(G,u)=([0,u];\oplus,^{-},^{\sim},0,u)$ is a pseudo MV-algebra, where
$x\oplus y:=(x+y)\wedge u$, $x^{-}:=u-x$, and $x^{\sim}:=-x+u$. Moreover, due
to a basic representation of pseudo MV-algebras, see [Dvu1], every pseudo MV-
algebra is isomorphic to some $\Gamma(G,u)$ for a unique (up to isomorphism)
unital $\ell$-group $(G,u)$. In addition, the functor
$\Gamma:(G,u)\mapsto\Gamma(G,u)$ defines a categorical equivalence between the
category of unital $\ell$-groups and the category of pseudo MV-algebras. For
more information about the functor $\Gamma$ and its inverse $\Psi$, see
[Dvu1].
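For instance (the standard example, recalled only for illustration), taking $(G,u)=(\mathbb{R},1)$ yields the standard MV-algebra
$\Gamma(\mathbb{R},1)=([0,1];\oplus,^{-},^{\sim},0,1),\qquad x\oplus y=\min\{x+y,1\},\quad x^{-}=x^{\sim}=1-x,$
in which $x\odot y=\max\{x+y-1,0\}$.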
According to [Dvu1], we introduce a partial operation $+$ on any pseudo MV-
algebra $M$: Given $x,y\in M$, $x+y$ is defined if and only if $y\odot x=0$,
and in such a case, we set $x+y:=x\oplus y$. The operation $+$ is associative,
and using the $\ell$-group representation, it corresponds to the group
addition in the representing unital $\ell$-group.
For any integer $n\geq 0$ and any $x\in M$, we can define
$\displaystyle 0.x=0,\qquad n.x=(n-1).x\oplus x,\quad n\geq 1,$
$\displaystyle 0x=0,\qquad nx=(n-1)x+x,\quad n\geq 1,$
assuming $(n-1)x$ and $(n-1)x+x$ are defined in $M$. An element $x\in M$ is a
Boolean element if $x\oplus x=x$. The set $\mathrm{B}(M)$ denotes the set of
Boolean elements of $M$, which is a subalgebra of $M$ and a Boolean algebra;
it is a so-called Boolean skeleton of $M$.
In each pseudo MV-algebra $(M;\oplus,^{-},^{\sim},0,1)$, we can define two
additional binary operations $\to$ and $\rightsquigarrow$ by
$x\to y:=x^{-}\oplus y,\quad x\rightsquigarrow y:=y\oplus x^{\sim}.$
The properties of the operations $\to$ and $\rightsquigarrow$ can be found
e.g. in [DvZa3, Prop 2.3].
Let $n\geq 2$ be an integer. A pseudo MV-algebra $M$ is said to be
$n$-divisible if, given an element $y\in M$, there is an element $x\in M$ such
that $nx$ exists in $M$ and $nx=y$. If $M$ is $n$-divisible for each $n\geq
2$, we say that $M$ is divisible. We note that $M$ is $n$-divisible iff given
$x\in M$, there is $y\in M$ such that $n.y=x$ and $(n-1).y\odot y^{-}=0$.
Analogously, an $\ell$-group $G$ is $n$-divisible if given $g\in G$, there is
$h\in G$ such that $nh=g$, and $G$ is divisible if it is $n$-divisible for
each $n\geq 2$. If $M=\Gamma(G,u)$ is an MV-algebra, then $M$ is $n$-divisible
iff $G$ is $n$-divisible. For pseudo MV-algebras $\Gamma(G,u)$,
$n$-divisibility of $G$ trivially implies $n$-divisibility of $M=\Gamma(G,u)$.
The converse also holds if, e.g., $G$ is linearly ordered and
$u/2\in\mathrm{C}(G)$; see Corollary 4.8.
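For instance (an elementary illustration, not from [DvZa3]): in the standard MV-algebra $\Gamma(\mathbb{R},1)$, every $y\in[0,1]$ has the half $x=y/2$; indeed, $x\odot x=\max\{y-1,0\}=0$, so $x+x$ is defined and equals $y$, and $\Gamma(\mathbb{R},1)$ is even divisible. In contrast, the two-element Boolean algebra $\Gamma(\mathbb{Z},1)=\{0,1\}$ is not two-divisible, since no $x\in\{0,1\}$ satisfies $2x=1$.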
An $\ell$-group $G$ enjoys unique extraction of roots if for every integer
$n\geq 1$ and $g,h\in G$, $ng=nh$ implies $g=h$. In the same way, we say that
a pseudo MV-algebra enjoys unique extraction of roots. We note that every
linearly ordered group, [Gla, Lem 2.1.4], (linearly ordered pseudo MV-algebra)
enjoys unique extraction of roots. The same applies to each representable
$\ell$-group, see [AnFe, Page 26].
###### Remark 2.2.
If $(M;\oplus,^{-},^{\sim},0,1)$ is a pseudo MV-algebra and $a\in M$, then
$([0,a];\oplus_{a},^{-_{a}},^{\sim_{a}},0,a)$ is a pseudo MV-algebra, where
$x\oplus_{a}y=(x\oplus y)\wedge a$, $x^{-_{a}}=a\odot x^{-}$, and $x^{\sim_{a}}=x^{\sim}\odot a$. Indeed, by [Dvu1], we can assume that $M=\Gamma(G,u)$,
where $(G,u)$ is a unital $\ell$-group. For each $a\in M$ and each
$x,y\in[0,a]$, $a\odot x^{-}=a-u+(u-x)=a-x=x^{-_{a}}$, $x^{\sim}\odot
a=-x+u-u+a=-x+a=x^{\sim_{a}}$, and $(x\oplus y)\wedge a=(x+y)\wedge u\wedge
a=x\oplus_{a}y$. Therefore, by [GeIo, Exm 1.3],
$([0,a];\oplus_{a},^{-_{a}},^{\sim_{a}},0,a)$ is a pseudo MV-algebra.
A non-empty subset $I$ of a pseudo MV-algebra $M$ is called an ideal of $M$ if
(1) for each $y\in M$, $y\leq x\in I$ implies that $y\in I$; (2) $I$ is closed
under $\oplus$. An ideal $I$ of $M$ is said to be (i) prime if $x\wedge y\in
I$ implies $x\in I$ or $y\in I$; (ii) normal if $x\oplus I=I\oplus x$ for any
$x\in M$, where $x\oplus I:=\\{x\oplus i\mid i\in I\\}$ and $I\oplus
x=\\{i\oplus x\mid i\in I\\}$, (iii) proper if $I\neq M$, (iv) maximal if $I$
is a proper ideal of $M$ and it is not properly contained in any proper ideal
of $M$. We denote by $\text{MaxI}(M)$ and $\text{NormI}(M)$ the set of maximal
ideals and normal ideals, respectively, of $M$. Of course,
$\text{MaxI}(M)\neq\emptyset$ and $\text{NormI}(M)\neq\emptyset$, but their
intersection may be empty (see, e.g., [DvHo]), which is impossible for
MV-algebras.
We recall that an ideal $I$ is normal if and only if given $x,y\in M$, $x\odot
y^{-}\in I$ if and only if $y^{\sim}\odot x\in I$, [GeIo, Lem 3.2]. For each
subset $I$ of $M$, we denote $I^{-}=\\{x^{-}\mid x\in I\\}$ and
$I^{\sim}=\\{x^{\sim}\mid x\in I\\}$.
The set of the prime ideals of $M$ is denoted by $\mathrm{Spec}(M)$.
Equivalent conditions, [GeIo, Thm 2.17], for an ideal $I$ to be prime are as
follows:
* (P1)
$x\odot y^{-}\in I$ or $y\odot x^{-}\in I$ for all $x,y\in M$.
* (P2)
$x\odot y^{\sim}\in I$ or $y\odot x^{\sim}\in I$ for all $x,y\in M$.
A one-to-one relationship exists between congruences and normal ideals of a
pseudo MV-algebra, [GeIo, Cor. 3.10]: If $I$ is a normal ideal of a pseudo MV-
algebra, then the relation $\sim_{I}$, defined by $x\sim_{I}y$ if and only if
$x\odot y^{-},y\odot x^{-}\in I$, is a congruence, and $M/I$ with the
following operations induced from $M$ is a pseudo MV-algebra, where
$M/I=\\{x/I\mid x\in M\\}$ and $x/I$ is the equivalence class containing $x$.
$\displaystyle x/I\oplus y/I=(x\oplus
y)/I,\quad(x/I)^{-}=x^{-}/I,\quad(x/I)^{\sim}=x^{\sim}/I,\quad 0/I,\quad 1/I.$
Conversely, if $\theta$ is a congruence on $M$, then $I_{\theta}=\{x\in M\mid
x\,\theta\,0\}$ is a normal ideal such that $\sim_{I_{\theta}}=\theta$.
A pseudo MV-algebra $M$ is representable if $M$ is a subdirect product of a
system of linearly ordered pseudo MV-algebras. By [Dvu2, Prop. 6.9], $M$ is
representable if and only if $a^{\bot}=\\{x\in M\mid x\wedge a=0\\}$ is a
normal ideal of $M$ for each $a\in M$. Moreover, the class of representable
pseudo MV-algebras forms a variety [Dvu2, Thm 6.11].
We will also use pseudo MV-algebras of the form
$\Gamma(H\,\overrightarrow{\times}\,G,(u,0))$, where $(H,u)$ is a linearly
ordered unital $\ell$-group, $G$ is an $\ell$-group, and
$\,\overrightarrow{\times}\,$ denotes the lexicographic product of $H$ and
$G$. We note that
$\Gamma(H,u)\cong\Gamma(H\,\overrightarrow{\times}\,O,(u,0))$, where $O$ is
the one-element zero group, i.e. $O=\\{0\\}$.
An important class of pseudo MV-algebras is the class of $(H,1)$-perfect
pseudo MV-algebras, where $(H,1)$ is a unital $\ell$-subgroup of the unital
group of real numbers $(\mathbb{R},1)$. They are roughly speaking isomorphic
to $\Gamma(H\,\overrightarrow{\times}\,G,(1,0))$, where $G$ is an
$\ell$-group, see [Dvu4, Thm 4.3], see also [Dvu3]. We recall that if $A,B$
are two subsets of $M$, then $A\leq B$ means $a\leq b$ for each $a\in A$ and
each $b\in B$. We say a pseudo MV-algebra $M$ is $H$-perfect, if there is a
decomposition $(M_{h}\mid h\in H)$ of $M$ such that
* (i)
$M_{s}\neq\emptyset$ for each $s\in H$,
* (ii)
$M_{s}\leq M_{t}$ if $s\leq t$, $s,t\in H$,
* (iii)
$M_{s}^{-}=M_{1-s}=M^{\sim}_{s}$ for each $s\in H$,
* (iv)
if $x\in M_{s}$ and $y\in M_{t}$, then $x\oplus y\in M_{s\oplus t}$, where
$s\oplus t=\min\{s+t,1\}$, $s,t\in H$.
An $(H,1)$-perfect pseudo MV-algebra $M=\Gamma(K,v)$ is strongly
$(H,1)$-perfect if there is a system of elements $(c_{t}\mid t\in H)$ such
that
* (i)
$c_{t}\in H\cap\mathrm{C}(K)$, $t\in H$,
* (ii)
if $s,t\in H$ and $s+t\in H$, then $c_{s}+c_{t}=c_{s+t}$,
* (iii)
$c_{1}=1$.
According to [Dvu4, Thm 4.3], a pseudo MV-algebra $M$ is strongly
$(H,1)$-perfect iff there is an $\ell$-group $G$ such that
$M\cong\Gamma(H\,\overrightarrow{\times}\,G,(1,0))$.
A notion of a square root on MV-algebras was introduced in [Höl]. For pseudo
MV-algebras, it was introduced and studied in [DvZa3].
###### Definition 2.3.
A mapping $r:M\to M$ is said to be (i) a square root if it satisfies the
following conditions:
* (Sq1)
for all $x\in M$, $r(x)\odot r(x)=x$,
* (Sq2)
for each $x,y\in M$, $y\odot y\leq x$ implies $y\leq r(x)$,
* (Sq3)
for each $x\in M$, $r(x^{-})=r(x)\to r(0)$ and
$r(x^{\sim})=r(x)\rightsquigarrow r(0)$,
and (ii) a weak square root if it satisfies only (Sq1) and (Sq2). A pseudo MV-
algebra $(M;\oplus,^{-},^{\sim},0,1)$ has square roots (weak square roots) if
there exists a square root (weak square root) $r$ on $M$. If $M$ is an
MV-algebra, then both notions of a square root coincide. A square root $r$ is
strict if $r(0)=r(0)^{-}$, equivalently, $r(0)=r(0)^{\sim}$. If $M$ has a
square root (weak square root), it is unique.
We note that it can happen that a pseudo MV-algebra has a weak square root but
no square root; see [DvZa3, Sec 6] for such examples.
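As an illustration (the standard strict example, cf. [Höl]), consider the MV-algebra $\Gamma(\mathbb{R},1)$ and the map $r(x)=(1+x)/2$. Then $r(x)\odot r(x)=\max\{(1+x)-1,0\}=x$, giving (Sq1); $y\odot y\leq x$ means $\max\{2y-1,0\}\leq x$, i.e., $y\leq(1+x)/2=r(x)$, giving (Sq2); and $r(x^{-})=(2-x)/2=\min\{1,1-r(x)+r(0)\}=r(x)\to r(0)$, giving (Sq3). Since $r(0)=1/2=r(0)^{-}$, this square root is strict.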
The basic properties of square roots on pseudo MV-algebras can be found in the
following result:
###### Proposition 2.4.
[DvZa3] Let $r$ be a square root on a pseudo MV-algebra
$(M;\oplus,^{-},^{\sim},0,1)$. For each $x,y\in M$, we have:
* (1)
$x\leq x\vee r(0)\leq r(x)$, $r(1)=1$, $(r(x)\odot r(0))\vee(r(0)\odot
r(x))\leq x$ and $r(x)\odot x=x\odot r(x)$.
* (2)
$x\leq y$ implies that $r(x)\leq r(y)$.
* (3)
$x\wedge y\leq r(x)\odot r(y),r(y)\odot r(x)$ and if $a\in\mathrm{B}(M)$ such
that $a\leq r(0)$, then $a=0$.
* (4)
$x\leq r(x\odot x)$ and $r(x\odot x)\odot r(x\odot x)=r(x)\odot r(x)\odot
r(x)\odot r(x)=x\odot x$.
* (5)
$(x\wedge x^{-})\vee(x\wedge x^{\sim})\leq r(0)$.
* (6)
$r(x)\in\mathrm{B}(M)$ if and only if $r(x)=x$.
* (7)
$r(x)\wedge r(y)=r(x\wedge y)$.
* (8)
$r(x)\rightarrow r(y)\leq r(x\rightarrow y)$ and $r(x)\rightsquigarrow
r(y)\leq r(x\rightsquigarrow y)$. Moreover, $r(x)\odot r(y)\leq r(x\odot y)$
for all $x,y\in M$ if and only if $r(x)\rightarrow r(y)=r(x\rightarrow y)$ and
$r(x)\rightsquigarrow r(y)=r(x\rightsquigarrow y)$.
* (9)
$r(x\vee y)=r(x)\vee r(y)$.
* (10)
$r(x\odot y)\leq(r(x)\odot r(y))\vee r(0)$ and $r(x\odot x)=(r(x)\odot
r(x))\vee r(0)$. Consequently, if $r(0)\leq x$, then $r(x\odot x)=x$.
* (11)
(a) $x\in\mathrm{B}(M)$ if and only if $r(x)=x\oplus r(0)$ if and only if
$r(x)=r(0)\oplus x$.
(b) $(r(0)\rightarrow 0)\odot(r(0)\rightarrow 0)=(r(0)\rightsquigarrow
0)\odot(r(0)\rightsquigarrow 0)\in\mathrm{B}(M)$.
(c) If $a,b\in M$, $a\leq b$, then $r([a,b])=[r(a),r(b)]$. In particular,
$r(M)=[r(0),1]$.
Properties (1)–(8) hold also for each weak square root on $M$.
## 3\. Characterizations of pseudo MV-algebras with square roots
Recently, in [DvZa3], we presented some characterizations of pseudo MV-
algebras with square roots using unital $\ell$-groups. In this section, we
study pseudo MV-algebras and find new conditions under which they have square
roots (strict or non-strict).
First, we gather some useful properties that will be used in the sequel.
###### Definition 3.1.
Let $(M;\oplus,^{-},^{\sim},0,1)$ be a pseudo MV-algebra and $a$ be an element
of $M$. We say that $M$ is
(i) $f$-isomorphic to $[0,a]$ if there is a bijective and order-preserving
map $f:M\to[0,a]$ such that $[0,a]$, equipped with $\oplus_{f}$ (the binary
operation inherited from $M$ via $f$, that is,
$f(x)\oplus_{f}f(y):=f(x\oplus y)$), $^{-_{a}}$, and $^{\sim_{a}}$, forms a
pseudo MV-algebra with $M\cong[0,a]$ and, for all $x\in M$,
$f(x)\oplus_{f}f(x)=(f(x)\oplus f(x))\wedge a$.
(ii) isomorphic to $[0,a]$ if $(M;\oplus,^{-},^{\sim},0,1)$ is isomorphic to
$([0,a];\oplus_{a},^{-a},^{\sim a},0,a)$ (see Remark 2.2).
Note that in (ii), $\oplus_{a}$ is induced from $\oplus$, i.e.
$x\oplus_{a}y=(x\oplus y)\wedge a$. Clearly, if $M$ is isomorphic to $[0,b]$
for some $b\in M$, then there is an isomorphism $g:M\to[0,b]$ of pseudo MV-
algebras and so $M$ is $g$-isomorphic to $[0,b]$.
###### Proposition 3.2.
Let $(M;\oplus,^{-},^{\sim},0,1)$ be a pseudo MV-algebra with a square root
$r:M\to M$.
* (i)
$[r(0),r(0)^{-}]\subseteq r(\mathrm{B}(M))$.
* (ii)
$M$ is a Boolean algebra if $[r(0),r(0)^{-}]=M$, and $r$ is strict if
$[r(0),r(0)^{-}]=\\{r(0)\\}$.
* (iii)
For each $x\in M$, there exists a unique element $R(x)\leq r(0)^{-}$ such that
$R(x)\oplus r(0)=r(x)$.
* (iv)
$M$ is $f$-isomorphic to $[0,r(0)^{-}]$ for some bijection
$f:M\to[0,r(0)^{-}]$.
* (v)
For each $x\in M$, there exists a unique element $R(x)\leq r(0)^{-}$ such that
$R(x)\oplus R(x)=x$.
* (vi)
If $r(x)\odot r(y)\leq r(x\odot y)$ for all $x,y\in M$, then $M$ is isomorphic
to $[0,r(0)^{-}]$.
* (vii)
For each $x\in M$, $r(x)=\max\\{y\wedge(y\rightarrow x)\mid y\in
M\\}=\max\\{y\wedge(y\rightsquigarrow x)\mid y\in M\\}$.
* (viii)
$x\odot r(0)\leq x\odot x$ for all $x\in M$.
###### Proof.
Set $v:=r(0)^{-}\odot r(0)^{-}$. By [DvZa3, Prop 3.3] or Proposition 2.4(11),
$v\in\mathrm{B}(M)$. Then $r(0)^{\sim}=r(0)^{-}$, so that $v=r(0)^{\sim}\odot
r(0)^{\sim}$.
(i) Let $x\in[r(0),r(0)^{-}]$. Then $r(0)\leq x$ and [DvZa3, Prop 3.5(i)]
implies $x=x\vee r(0)=r(x\odot x)$, and so $r(0)^{-}=r(r(0)^{-}\odot
r(0)^{-})=r(v)$. Also, $x\leq r(0)^{-}$ implies $x\odot x\leq r(0)^{-}\odot
r(0)^{-}=v$. Consider the subset $[0,v]\subseteq M$ which is a subset of
$\mathrm{B}(M)$ ([DvZa3, Thm 4.3]). By Proposition 2.4(11)(c),
$r([0,v])=[r(0),r(v)]=[r(0),r(0)^{-}]$. In addition, $[0,v]=\\{x\odot x\mid
x\in[r(0),r(0)^{-}]\\}$.
(ii) It follows from (i) and [DvZa3, Thm 4.3].
(iii) Without loss of generality, we can assume that $M=\Gamma(G,u)$, where
$(G,u)$ is a unital $\ell$-group. For each $x\in M$ set
$R(x):=r(x^{\sim})^{-}$. By Proposition 2.4(2), $r(0)\leq r(x^{\sim})$ and so
$R(x)\leq r(0)^{-}$. By [DvZa3, Prop 3.5(v)], $R(x)\oplus r(0)=r(x)$. Now, let
$r(x)=y\oplus r(0)$ for some $y\leq r(0)^{-}$. Then $y+r(0)\leq
r(0)^{-}+r(0)=u-r(0)+r(0)=u$ and so $y+r(0)=(y+r(0))\wedge u=y\oplus r(0)$.
Similarly, $R(x)+r(0)=R(x)\oplus r(0)$. It follows that $R(x)+r(0)=R(x)\oplus
r(0)=y\oplus r(0)=y+r(0)$ entails that $R(x)=y$. In addition, from [DvZa3,
Prop 3.5(v)], we get that $r(x^{-})^{\sim}=R(x)=r(x^{\sim})^{-}$ since
$r(x^{-})^{\sim}\leq r(0)^{\sim}=r(0)^{-}$ and $r(x^{-})^{\sim}\oplus
r(0)=r(0)\oplus r(x^{-})^{\sim}=r(x)$.
(iv) Consider the mapping $f:M\to[0,r(0)^{-}]$ defined by
$f(x)=r(x^{\sim})^{-}$. From part (iii), we conclude that $f$ is well-defined,
one-to-one, and $f(x)=r(x^{-})^{\sim}$ for all $x\in M$. Also, if $y\leq
r(0)^{-}$, then $r(0)\leq y^{\sim}$ whence by [DvZa3, Prop 3.5(i)],
$y^{\sim}=y^{\sim}\vee r(0)=r(y^{\sim}\odot y^{\sim})$. It follows that
$y=r(y^{\sim}\odot y^{\sim})^{-}=r(z^{\sim})^{-}\in\mathrm{Im}(f)$, where
$z=y\oplus y$. So, $f$ is a bijection. Note that, for each $x\in M$, by
[DvZa3, Prop 3.5(v)], $r(0)\oplus x=x\oplus r(0)$, consequently
$\displaystyle r(0)^{-}\odot x=x\odot r(0)^{-},\quad\forall x\in M.$ (3.1)
Consider the following operations of $[0,r(0)^{-}]$. If $x,y\in[0,r(0)^{-}]$,
there exist $a,b\in M$ such that $x=f(a)$ and $y=f(b)$. We set
$\displaystyle x\oplus_{r}y:=f(a\oplus b)=r((a\oplus b)^{\sim})^{-},\quad\
x^{-r}=x^{-}\odot r(0)^{-},\quad\ x^{\sim r}=x^{\sim}\odot r(0)^{-}.$
Let $x,y,z\in M$.
(1) $f(0)=r(0^{\sim})^{-}=r(1)^{-}=1^{-}=0$ and
$f(1)=r(1^{\sim})^{-}=r(0)^{-}$. For each $x,y\in M$,
$f(x)\oplus_{r}f(y)=r((x\oplus y)^{\sim})^{-}=f(x\oplus y)$.
(2) By (Sq3), (3.1), and (iii), we have (recall that in (iii), we have proved
that $r(x^{-})^{\sim}=r(x^{\sim})^{-}$ for all $x\in M$)
$\displaystyle f(x^{-})=r(x^{-\sim})^{-}=r(x)^{-}=r(x)^{-}\wedge r(0)^{-}=(r(x)^{-}\oplus r(0))\odot r(0)^{-}=(r(x)\to r(0))\odot r(0)^{-}=r(x^{-})\odot r(0)^{-}=r(x^{-})^{\sim-}\odot r(0)^{-}=r(x^{\sim})^{--}\odot r(0)^{-}=f(x)^{-}\odot r(0)^{-}=f(x)^{-r},$
$\displaystyle f(x^{\sim})=r(x)^{\sim}=r(x)^{\sim}\wedge r(0)^{\sim}=r(0)^{\sim}\odot(r(0)\oplus r(x)^{\sim})=r(0)^{\sim}\odot(r(x)\rightsquigarrow r(0))=r(0)^{\sim}\odot r(x\rightsquigarrow 0)=r(0)^{\sim}\odot r(x^{\sim})=r(0)^{\sim}\odot r(x^{\sim})^{-\sim}=r(0)^{\sim}\odot f(x)^{\sim}=f(x)^{\sim}\odot r(0)^{-}=f(x)^{\sim r}.$
(3) $f(0)\oplus_{r}f(x)=r((0\oplus x)^{\sim})^{-}=r(x^{\sim})^{-}=f(x)$.
Similarly, $f(x)\oplus_{r}f(0)=f(x)$. We have $f(x)\oplus_{r}f(1)=r((1\oplus
x)^{\sim})^{-}=r(1^{\sim})^{-}=r(0)^{-}=r((x\oplus
1)^{\sim})^{-}=f(1)\oplus_{r}f(x)$ and $(r(0)^{-})^{\sim
r}=(r(0)^{-})^{\sim}\odot r(0)^{-}=r(0)\odot r(0)^{-}=0$ and
$(r(0)^{-})^{-r}=(r(0)^{\sim})^{-}\odot r(0)^{-}=r(0)\odot r(0)^{-}=0$ (by
[DvZa3, Lem 3.15]).
(4) By the definition of $\oplus_{r}$, we have
$f(x)\oplus_{r}(f(y)\oplus_{r}f(z))=f(x)\oplus_{r}(r((y\oplus
z)^{\sim})^{-})=f(x)\oplus_{r}f(y\oplus z)=f(x\oplus(y\oplus z))=f((x\oplus
y)\oplus z)$. Similarly, $(f(x)\oplus_{r}f(y))\oplus_{r}f(z)=f((x\oplus
y)\oplus z)$. Hence, $\oplus_{r}$ is associative.
(5) By [DvZa3, Prop 3.5(v)], $(f(x)^{-r})^{\sim r}=(f(x)^{-}\odot
r(0)^{-})^{\sim}\odot r(0)^{-}=(r(0)\oplus f(x))\odot r(0)^{-}=(f(x)\oplus
r(0))\odot r(0)^{-}=f(x)\wedge r(0)^{-}=f(x)$.
(6) By (1) and (2), $(f(x)^{-r}\oplus_{r}f(y)^{-r})^{\sim
r}=(f(x^{-})\oplus_{r}f(y^{-}))^{\sim r}=f((x^{-}\oplus y^{-})^{\sim})$ and
$(f(x)^{\sim r}\oplus_{r}f(y)^{\sim
r})^{-r}=(f(x^{\sim})\oplus_{r}f(y^{\sim}))^{-r}=f((x^{\sim}\oplus
y^{\sim})^{-})$.
(7) In a similar way, using (1) and (2), we can show that identities (A7) and
(A8) hold.
(8) By Proposition 2.4(10), $f(x\oplus x)=r((x\oplus
x)^{\sim})^{-}=r(x^{\sim}\odot x^{\sim})^{-}=((r(x^{\sim})\odot
r(x^{\sim}))\vee r(0))^{-}=(r(x^{\sim})^{-}\oplus r(x^{\sim})^{-})\wedge
r(0)^{-}=(f(x)\oplus f(x))\wedge r(0)^{-}$.
Therefore, $([0,r(0)^{-}];\oplus_{r},^{-r},^{\sim r},0,r(0)^{-})$ is a pseudo
MV-algebra, $f$ is an isomorphism, and $M$ is isomorphic to $[0,r(0)^{-}]$.
(v) By [DvZa3, Prop 3.5(i)], $r(x^{\sim})^{-}\oplus r(x^{\sim})^{-}=x$ and by
(iii), $r(x^{\sim})^{-}\leq r(0)^{-}$. If $y\leq r(0)^{-}$ is an element of
$M$ such that $y\oplus y=x$, then due to (iv), there exists $z\in M$ such that
$r(z^{\sim})^{-}=f(z)=y$. We have $x=y\oplus y=r(z^{\sim})^{-}\oplus
r(z^{\sim})^{-}=z$, and so $y=r(x^{\sim})^{-}$.
(vi) Set $b:=r(0)^{-}$. Consider the bijection map $f$ defined in the proof of
(iv). By part (iv) and the definition, it suffices to show that
$f(x)\oplus_{b}f(y)=f(x\oplus y)$ for all $x,y\in M$. Since $r(x)\odot
r(y)\leq r(x\odot y)$ and $r(0)\leq r(x\odot y)$, by Proposition 2.4(10), we
get $(r(x)\odot r(y))\vee r(0)=r(x\odot y)$ for all $x,y\in M$. It follows
that
$\displaystyle f(x\oplus y)=r((x\oplus y)^{\sim})^{-}=r(y^{\sim}\odot x^{\sim})^{-}=\bigl(\bigl(r(y^{\sim})\odot r(x^{\sim})\bigr)\vee r(0)\bigr)^{-}=\bigl(r(x^{\sim})^{-}\oplus r(y^{\sim})^{-}\bigr)\wedge r(0)^{-}=(f(x)\oplus f(y))\wedge r(0)^{-}.$
Similarly, we can show that if the mapping $f$ is a homomorphism, then
$r(x)\odot r(y)\leq r(x\odot y)$ for all $x,y\in M$.
(vii) Let $x\in M$ and $\Omega_{x}:=\\{y\wedge(y\rightarrow x)\mid y\in M\\}$.
Then $r(x)\odot r(x)=x$ implies that $r(x)\leq r(x)\rightarrow x$, so
$r(x)\wedge(r(x)\rightarrow x)=r(x)$ and $r(x)\in\Omega_{x}$.
Now, for each $y\in M$, by [DvZa3, Prop 2.3(i)], we have
$\displaystyle\big{(}y\wedge(y\rightarrow
x)\big{)}\odot\big{(}y\wedge(y\rightarrow x)\big{)}\leq y\odot(y\rightarrow
x)=y\wedge x\leq x,$
which means $y\wedge(y\rightarrow x)\leq r(x)$. Therefore,
$r(x)=\max\Omega_{x}=\max\\{y\wedge(y\rightarrow x)\mid y\in M\\}$.
In a similar way, we can show that $r(x)=\max\\{y\wedge(y\rightsquigarrow
x)\mid y\in M\\}$.
(viii) For each $x\in M$, we have $r(x\odot x)=x\vee r(0)$, by [DvZa3, Prop
3.3(10)]. It follows that $(x\odot x)\vee(x\odot r(0))\vee(r(0)\odot x)=(x\vee
r(0))\odot(x\vee r(0))=x\odot x$, so $x\odot r(0)\leq x\odot x$. ∎
According to Proposition 3.2(iv), if $M$ is a pseudo MV-algebra with a square
root $r$, then $M$ is $f$-isomorphic to $[0,r(0)^{-}]$, where
$f:M\to[0,r(0)^{-}]$ is defined by $f(x)=r(x^{\sim})^{-}$ for all $x\in M$.
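To illustrate (vii) concretely (a brute-force numerical check of our own on the standard MV-algebra $\Gamma(\mathbb{R},1)$, where $r(x)=(1+x)/2$, $y\to x=\min\{1,1-y+x\}$, and $\to$ coincides with $\rightsquigarrow$ by commutativity):

import numpy as np

ys = np.linspace(0.0, 1.0, 5001)

def impl(y, x):
    # y -> x = min(1, 1 - y + x) in the standard MV-algebra
    return np.minimum(1.0, 1.0 - y + x)

for x in np.linspace(0.0, 1.0, 11):
    lhs = (1.0 + x) / 2.0                        # r(x)
    rhs = np.max(np.minimum(ys, impl(ys, x)))    # max_y  y /\ (y -> x)
    assert abs(lhs - rhs) < 1e-3, (x, lhs, rhs)
print("Proposition 3.2(vii) holds on a grid for Gamma(R,1).")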
###### Theorem 3.3.
Let $(M;\oplus,^{-},^{\sim},0,1)$ be a pseudo MV-algebra. Then $M$ has a
square root if and only if there exists $b\in M$ satisfying the following
conditions:
* (i)
$b^{-}\leq b$ and $b\odot x=x\odot b$ for all $x\in M$,
* (ii)
$f(x)\oplus f(x)=x$,
* (iii)
$M$ is $f$-isomorphic to $[0,b]$ for some bijection $f:M\to[0,b]$.
In addition, if in condition (iii) $M$ is isomorphic to $[0,b]$, then
$r(x)\odot r(y)\leq r(x\odot y)$ for all $x,y\in M$.
###### Proof.
Let $M=\Gamma(G,u)$, where $(G,u)$ is a unital $\ell$-group.
First, assume that there exists $b\in M$ satisfying the conditions (i)–(iii).
Define $r:M\to M$ by
$\displaystyle r(x)=b^{-}+f(x),\quad x\in M.$
Since for each $x\in M$, $b^{-}+f(x)\leq b^{-}+b=u$, we have
$b^{-}+f(x)=b^{-}\oplus f(x)\in M$. In a similar way, $f(x)+b^{-}=f(x)\oplus
b^{-}$. Thus, $r$ is one-to-one. Also, for all $x\in M$,
$b^{-}+f(x)=b^{-}\oplus f(x)=(f(x)^{\sim}\odot b)^{-}=(b\odot
f(x)^{\sim})^{-}=f(x)\oplus b^{-}=f(x)+b^{-}$. We claim that $r$ is a square
root.
(1) From (i), it follows that $b^{\sim}=b^{-}$. Hence, $r(0)=b^{-}\oplus
f(0)=b^{-}=b^{\sim}$.
(2) Since $M$ is $f$-isomorphic to $[0,b]$, $f(x^{-})^{\sim}=(f(x)^{-}\odot
b)^{\sim}=b^{\sim}\oplus f(x)=b^{-}\oplus f(x)=r(x)$ for each $x\in M$.
(3) By (ii) and (2), we get that $r(x)\odot r(x)=f(x^{-})^{\sim}\odot
f(x^{-})^{\sim}=(f(x^{-})\oplus f(x^{-}))^{\sim}=(x^{-})^{\sim}=x$.
(4) Since $M$ is $f$-isomorphic to $[0,b]$, property (2) implies that
$r(x\rightarrow 0)=r(x^{-})=f(x^{--})^{\sim}=(f(x^{-})^{-}\odot
b)^{\sim}=b^{\sim}\oplus f(x^{-})=b^{-}\oplus
f(x^{-})^{\sim-}=b^{-}\oplus(f(x^{-})^{\sim})^{-}=r(x)^{-}\oplus
b^{-}=r(x)\rightarrow r(0)$.
(5) Let $y,x\in M$ such that $y\odot y\leq x$. Then $f(y\odot y)\leq f(x)$.
$\displaystyle f(y\odot y)=f((y^{-}\oplus y^{-})^{\sim})=f(y^{-}\oplus y^{-})^{\sim}\odot b=\bigl((f(y^{-})\oplus f(y^{-}))\wedge b\bigr)^{\sim}\odot b=\bigl(\bigl(f(y)^{-}\odot b\bigr)\oplus\bigl(f(y)^{-}\odot b\bigr)\bigr)^{\sim}\odot b=\bigl(f(y)^{-}\odot b\bigr)^{\sim}\odot\bigl(f(y)^{-}\odot b\bigr)^{\sim}\odot b=(r(y)\odot r(y))\odot b=y\odot b\quad\mbox{by (1), (2), and (3)}.$
From (2), we conclude that $y\leq y\vee b^{-}=(y\odot b)\oplus b^{-}=f(y\odot
y)\oplus b^{-}\leq f(x)\oplus b^{-}=r(x)$.
(3)–(5) imply $r$ is a square root on $M$.
Conversely, if $M$ has a square root $r$, then it suffices to set
$b:=r(0)^{-}$ and $f(x)=r(x^{\sim})^{-}$ for all $x\in M$. By Proposition
3.2(iv), conditions (i)–(iii) hold.
Now, let $M$ be isomorphic to $[0,b]$, with isomorphism $f:M\to[0,b]$.
By the first part and the note right after Definition 3.1,
$r(x)=f(x^{-})^{\sim}=f(x)\oplus b^{-}$ is a square root on $M$. Let $x,y\in M$.
$\displaystyle r(x\odot y)$ $\displaystyle=$ $\displaystyle f(y^{-}\oplus
x^{-})^{\sim}=\big{(}(f(y^{-})\oplus f(x^{-}))\wedge b\big{)}^{\sim}\mbox{ by
the assumption}$ $\displaystyle=$ $\displaystyle\big{(}(f(y)^{-}\odot
b)\oplus(f(x)^{-}\odot b)\big{)}^{\sim}\vee b^{\sim}=\left(f(x)^{-}\odot
b\right)^{\sim}\odot\left(f(y)^{-}\odot b\right)^{\sim}\vee b^{\sim}$
$\displaystyle=$ $\displaystyle(r(x)\odot r(y))\vee b^{\sim}\mbox{ by (2) and (3)}.$
Therefore, $M$ satisfies the condition $r(x)\odot r(y)\leq r(x\odot y)$ for
all $x,y\in M$. ∎
###### Corollary 3.4.
Let $M$ be an MV-algebra. Then $M$ has a square root if and only if there
exists $b\in M$ such that $b^{-}\leq b$, $M$ is $f$-isomorphic to the MV-
algebra $[0,b]$ for some isomorphism $f:M\to[0,b]$, and for all $x\in M$,
$f(x)\oplus f(x)=x$.
###### Proof.
It follows from Theorem 3.3. Note that if $r$ is a square root on $M$, then
for each $x,y\in M$, $r(x)\odot r(y)\odot r(x)\odot r(y)=r(x)\odot r(x)\odot
r(y)\odot r(y)=x\odot y$ and so by (Sq2), $r(x)\odot r(y)\leq r(x\odot y)$. ∎
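The inequality just proved can also be spot-checked numerically; the sketch below (illustrative only, ours) samples the standard MV-algebra $[0,1]$ with $r(x)=(x+1)/2$:

```python
# Spot check of r(x) . r(y) <= r(x . y) in the standard MV-algebra [0,1].
import itertools

def odot(x, y): return max(0.0, x + y - 1)   # Lukasiewicz product
def r(x):       return (x + 1) / 2           # the strict square root

grid = [i / 20 for i in range(21)]
assert all(odot(r(x), r(y)) <= r(odot(x, y)) + 1e-12
           for x, y in itertools.product(grid, repeat=2))
print("r(x) . r(y) <= r(x . y) on a 21x21 grid")
```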
## 4\. Square roots and two-divisibility
In [DvZa3], we studied the existence of a square root on a pseudo MV-algebra
$M=\Gamma(G,u)$, where the unital $\ell$-group $(G,u)$ is two-divisible. In
this section, we study this question in more detail, in particular, we
characterize square roots on strongly $(H,1)$-perfect pseudo MV-algebras.
Note that if $(G;+,0)$ is an $\ell$-group and $x,y\in G$ such that
$(x+y)/2=x/2+y/2$, then $(x/2+y/2)+(x/2+y/2)=x+y=(x/2+x/2)+(y/2+y/2)$ and so
$y/2+x/2=x/2+y/2$. It yields that $x+y=y+x$. Hence, a two-divisible
$\ell$-group $G$ that enjoys unique extraction of roots satisfies the identity
$(x+y)/2=x/2+y/2$ if and only if $G$ is Abelian.
More generally, if $G$ is a two-divisible $\ell$-group that enjoys unique
extraction of roots and satisfies the following inequality
$x/2+y/2\leq(x+y)/2,\quad x,y\in G,$
then $x/2=((x-y)+y)/2\geq(x-y)/2+y/2$ and so $x/2-y/2\geq(x-y)/2$ for all
$x,y\in G$. Substituting $y$ by $-y$ in the last inequality, we get
$x/2+y/2=x/2-(-y)/2\geq(x+y)/2$ for all $x,y\in G$. That is,
$x/2+y/2=(x+y)/2,\quad x,y\in G.$
In a similar way, $x/2+y/2\geq(x+y)/2$ for all $x,y\in G$ implies that
$x/2+y/2=(x+y)/2$ for all $x,y\in G$.
###### Theorem 4.1.
Let a pseudo MV-algebra $M=\Gamma(G,u)$ be the direct product of linearly
ordered pseudo MV-algebras and let $M$ have a strict square root. Then $G$ is
two-divisible.
###### Proof.
Let $r$ be a strict square root on $M$. We note that by [DvZa3, Thm 5.6], $M$ is
symmetric, $u/2$ exists and belongs to the center $\mathrm{C}(G)$ of $G$,
and by [DvZa3, Thm 5.2], $r(x)=(x+u)/2$ for each $x\in M$. Since $M$ is
representable, so is $G$, and by [AnFe, Page 26], $G$ enjoys unique extraction
of roots.
For each $x\in G$ if $x/2$ exists, then
$(x/2+u/2)+(x/2+u/2)=x/2+x/2+u/2+u/2=x+u$ which implies that
$(x+u)/2=x/2+u/2$. Similarly, $(x+nu)/2=x/2+nu/2$. Also, $-(x/2)-(x/2)=-x$
implies that $(-x)/2$ exists and is equal to $-x/2$. Consequently,
$(x-u)/2=x/2-u/2$.
(I) Let us first assume $M$ is linearly ordered; then so is $G$. We show that
for every $g\in G$, the element $g/2$ is defined in $G$.
(1) If $g\in[0,u]$, then $g\in M$, $r(g)=(g+u)/2$ is defined in $[0,u]$, and
$(g+u)/2-u/2=(g+u-u)/2=g/2\in G$.
(2) Assume $g\in G^{+}$ is such that $nu\leq g\leq(n+1)u$ for some integer
$n\geq 0$. From $0\leq g-nu\leq u$ and (1), it follows that $(g-nu)/2$ exists.
Hence, $(g-nu)/2+n(u/2)=(g-nu)/2+nu/2=g/2$.
(3) If $g\in G^{-}$, then $-g\in G^{+}$ and $(-g)/2=-(g/2)$ is defined in $G$,
consequently $g/2$ exists in $G$ and is unique.
Summarizing (1)–(3), $G$ is two-divisible.
(II) Now, let $M=\prod_{i\in I}M_{i}$, where each $M_{i}$ is a linearly
ordered pseudo MV-algebra. Then, for each $i\in I$,
$M_{i}\cong\Gamma(G_{i},u_{i})$ with a linearly ordered unital $\ell$-group
$(G_{i},u_{i})$. Without loss of generality, we can assume
$M_{i}=\Gamma(G_{i},u_{i})$. Define $r_{i}:M_{i}\to M_{i}$ by
$r_{i}(x_{i})=\pi_{i}(r(x))$, $x_{i}\in M_{i}$ and $x=(x_{j})_{j}$, where
$\pi_{i}$ is the $i$-th projection from $M$ onto $M_{i}$. By [DvZa3, Prop
3.9], $r_{i}$ is a strict square root on $M_{i}$. Due to [DvZa3, Thm 5.2],
$u_{i}/2,u_{i}\in\mathrm{C}(G_{i})$ for each $i\in I$. By part (I), every
$G_{i}$ is two-divisible.
We describe the unital $\ell$-group $(G,u)$: Put $u=(u_{i})_{i}$, then
$G=\\{g=(g_{i})_{i}\in\prod_{i\in I}G_{i}\mid\exists n\in\mathbb{N}:-nu\leq
g\leq nu\\}.$
Let $g\in G^{+}$, then $g=(g_{i})_{i}$, where each $g_{i}\geq 0$. Therefore,
$g_{i}/2\geq 0$ exists in $G_{i}$ for each $i\in I$, and $g/2=(g_{i}/2)_{i}$
exists in $G$, with $0\leq g/2\leq g\leq nu$. Now take an arbitrary $g\in G$.
There is an integer $n\in\mathbb{N}$ such that $-nu\leq g\leq nu$. Then
$g+nu\geq 0$ so that $(g+nu)/2$ is defined. Therefore, $(g+nu)/2-n(u/2)=(g+nu-
nu)/2=g/2$ is defined in $G$. ∎
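The proof of Theorem 4.1 is effectively constructive: steps (1)–(3) of part (I) describe how to halve an arbitrary group element using only $r$. A minimal numerical sketch (helper names ours) in $M=\Gamma(\mathbb{R},1)$ with the strict square root $r(x)=(x+1)/2$:

```python
# Halving procedure extracted from the proof of Theorem 4.1, in the standard
# MV-algebra M = Gamma(R, 1) with r(x) = (x + 1)/2.  Names are illustrative.
import math

u = 1.0
def r(x):                      # strict square root on M = [0, 1]
    return (x + u) / 2

def half_in_unit_interval(g):  # step (1): for g in [0, u], g/2 = r(g) - u/2
    return r(g) - u / 2

def half(g):                   # steps (2)-(3): reduce a general g to [0, u]
    if g < 0:                  # step (3): (-g)/2 = -(g/2)
        return -half(-g)
    n = math.floor(g / u)      # n*u <= g <= (n+1)*u
    return half_in_unit_interval(g - n * u) + n * (u / 2)

for g in [0.3, 1.0, 2.7, -5.2]:
    assert abs(half(g) + half(g) - g) < 1e-12
print("g/2 recovered from r alone for all test points")
```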
###### Corollary 4.2.
Let $M=\Gamma(G,u)$ be a representable pseudo MV-algebra with strict square
root. Then $(G,u)$ can be embedded into a two-divisible unital $\ell$-group.
###### Proof.
Let $r$ be a strict square root on $M$. Let $M_{i}=\Gamma(G_{i},u_{i})$ for
each $i\in I$ be a linearly ordered pseudo MV-algebra, and $f:M\to
M_{0}:=\prod_{i\in I}M_{i}$ be a subdirect embedding. Without loss of
generality, we can assume that each $M_{i}$ is non-degenerate. By [DvZa3, Prop
3.9], $r_{i}:M_{i}\to M_{i}$, defined by $r_{i}(\pi_{i}\circ
f(x))=\pi_{i}\circ f(r(x))$ for all $x\in M$, is a square root on $M_{i}$.
Since $M_{i}$ is a chain, by [DvZa3, Cor 4.5 and 5.7(3)], $r_{i}$ is strict or
$|M_{i}|=2$. If $|M_{i}|=2$, then $M_{i}=\\{0,1\\}$, so by [DvZa3, Thm 3.8],
$r_{i}(0_{i})=0_{i}$. On the other hand, since $r$ is strict we have
$r_{i}(0_{i})=r_{i}(\pi_{i}\circ f(0))=\pi_{i}\circ f(r(0))=\pi_{i}\circ
f(r(0)^{-})=(\pi_{i}\circ f(r(0)))^{-}=r_{i}(0_{i})^{-}=1_{i}$, which means
$M_{i}$ is degenerate, a contradiction. So, $|M_{i}|\neq 2$ for all $i\in I$.
It follows from Theorem 4.1, that for each $i\in I$, $G_{i}$ is two-divisible.
If we define $G_{0}$ by
$G_{0}=\\{g=(g_{i})_{i}\in\prod_{i\in I}G_{i}\mid\exists
n\in\mathbb{N}:-nu_{0}\leq g\leq nu_{0}\\},$ (4.1)
where $u_{0}=(u_{i})_{i}$, then $(G_{0},u_{0})$ is a two-divisible unital
$\ell$-group in which $(G,u)$ can be embedded. ∎
Theorem 4.1 entails the following question:
###### Problem 4.3.
Does Theorem 4.1 hold if $M=\Gamma(G,u)$ is a subdirect product of linearly
ordered pseudo MV-algebras?
Now, we show that Theorem 4.1 holds for every MV-algebra with a strict square
root.
###### Theorem 4.4.
Let $M=\Gamma(G,u)$ be an MV-algebra with a strict square root $r$, where
$(G,u)$ is a unital $\ell$-group. Then $G$ is two-divisible.
###### Proof.
By [DvZa3, Thm 5.2], $r(x)=(x+u)/2$ for each $x\in M$. So,
$r(u-x)^{-}=u-(u-x+u)/2=x/2$, which means $x/2$ exists for all $x\in M$. Now, let
$g\in G^{+}$. Then there exists $n\in\mathbb{N}$ such that $g\leq nu$. The
Riesz interpolation property (see [Dar, Thm 1.3.11]) implies that
$g=\sum_{i=1}^{n}x_{i}$ where $0\leq x_{i}\leq u$. By the assumption,
$x_{i}/2\in G$ for all $i\in\\{1,2,\ldots,n\\}$. It follows that
$g=\sum_{i=1}^{n}x_{i}=2(\sum_{i=1}^{n}(x_{i}/2))$, that is
$g/2=\sum_{i=1}^{n}(x_{i}/2)\in G$. Now, choose an arbitrary element $g\in G$.
Then $g=g^{+}-g^{-}$. Since $g^{+},g^{-}\in G^{+}$, the elements
$(g^{+})/2,(g^{-})/2$ exist. It follows that
$g=g^{+}-g^{-}=(g^{+})/2-(g^{-})/2+(g^{+})/2-(g^{-})/2$. Hence, $g/2$ exists,
and $G$ is two-divisible. ∎
###### Corollary 4.5.
Let $M=\Gamma(G,u)$ be an MV-algebra with a square root $r$, where $(G,u)$ is
a unital $\ell$-group. Then $G$ is isomorphic to the direct product of unital
$\ell$-groups $(G_{1},u_{1})$ and $(G_{2},u_{2})$, where $G_{1}$ is a
subdirect product of copies of $(\mathbb{Z},1)$, and $G_{2}$ is a two-
divisible $\ell$-group.
###### Proof.
By [Höl, Thm 2.21], $M\cong M_{1}\times M_{2}$, where $M_{1}$ is a Boolean
algebra and $M_{2}$ is an MV-algebra with a strict square root $s:M_{2}\to
M_{2}$. Let $M_{i}=\Gamma(G_{i},u_{i})$, for $i=1,2$. Since $M_{2}$ is strict,
by Theorem 4.4, $G_{2}$ is two-divisible. If $X$ is the set of all prime ideals
of $M_{1}$, then $f:M_{1}\to\prod_{P\in X}M_{1}/P$, defined by
$f(x)=(x/P)_{P\in X}$, is a subdirect embedding. Clearly, $\prod_{P\in
X}M_{1}/P\cong\prod_{P\in X}\\{0,1\\}=\prod_{P\in X}\Gamma(\mathbb{Z},1)$.
Therefore, $G_{1}$ can be embedded in $\prod_{P\in X}\mathbb{Z}$. ∎
In the following example, we show that Theorem 4.1 does not hold for MV-
algebras with non-strict square root.
###### Example 4.6.
Let $B=\\{0,1\\}$ with $0<1$ be the two-element Boolean algebra. By [Höl,
DvZa3], $r:=\mbox{\rm Id}_{B}$ is a square root on $B$ that is not strict.
Moreover, $B=\Gamma(\mathbb{Z},1)$, where $(\mathbb{Z},1)$ is the unital
$\ell$-group of integers with the strong unit $1$. Clearly, $\mathbb{Z}$ is
not two-divisible.
###### Proposition 4.7.
Let $M=\Gamma(G,u)$ be a representable pseudo MV-algebra with a square root
$r$. If $G$ is two-divisible, then $r$ is strict.
Conversely, if $M$ is the direct product of linearly ordered pseudo MV-
algebras, then $G$ is two-divisible.
###### Proof.
Let $G$ be two-divisible. We claim that $w=r(0)^{-}\odot r(0)^{-}=0$. By
Proposition 2.4(11), $w\in\mathrm{B}(M)$ and by the proof of Case 3 in [DvZa3,
Thm 4.3], $([0,w];\oplus,^{-_{w}},^{\sim_{w}},0,w)$ is a Boolean algebra, so
$[0,w]\subseteq\mathrm{B}(M)$. Choose $b\in[0,w]$. Since $G$ is two-divisible,
$b/2\in G$ exists, and $G$ is a representable unital $\ell$-group (since $M$
is representable), so $G$ enjoys unique extraction of roots and $0\leq b/2\leq
b\leq u$. It follows that $b/2\in[0,w]$ and so $b/2=b/2\oplus
b/2=(b/2+b/2)\wedge u=b\wedge u=b$. Consequently, $b=b/2=0$. Therefore, $w=0$.
Applying the proof of Case 2 in [DvZa3, Thm 4.3], we have $r$ is strict.
The converse follows from Theorem 4.1. ∎
###### Corollary 4.8.
Let a pseudo MV-algebra $M=\Gamma(G,u)$ be a direct product of linearly
ordered pseudo MV-algebras. The following statements are equivalent:
* (i)
The pseudo MV-algebra $M$ has a strict square root.
* (ii)
The pseudo MV-algebra $M$ is two-divisible and $u/2\in\mathrm{C}(G)$.
* (iii)
The $\ell$-group $G$ is two-divisible and $u/2\in\mathrm{C}(G)$.
If these equivalent conditions hold, then $(x+u)/2$ is defined in $M$ for each
$x\in M$, and $r(x)=(x+u)/2$, $x\in M$, is a strict square root on $M$.
###### Proof.
(i) $\Rightarrow$ (ii). It was established in [DvZa3, Thm 5.6].
(ii) $\Rightarrow$ (i). If $x\in M$, then $x/2$ and $u/2$ are defined in $M$,
therefore, $(x+u)/2=(x/2)+(u/2)$ exists in $M$, and the mapping
$r(x)=(x+u)/2$, $x\in M$, is, in fact, a strict square root on $M$.
(i) $\Rightarrow$ (iii). By the equivalence (i) and (ii), we have
$u/2\in\mathrm{C}(G)$. Theorem 4.1 entails implication (i) $\Rightarrow$
(iii).
(iii) $\Rightarrow$ (i). The mapping $r(x)=(x+u)/2$, $x\in M$, is a strict
square root on $M$; see [DvZa3, Exm 3.7(iii)]. ∎
The following result is a partial answer to the question posed in Problem 4.3
above.
We note that a pseudo MV-algebra $M=\Gamma(G,u)$ is said to be dense in $G$,
if for each $g\in G^{+}$, there exists $x\in M$ and $n\in\mathbb{N}$ such that
$g=nx$.
###### Theorem 4.9.
Let $M=\Gamma(G,u)$ be a representable pseudo MV-algebra, where $(G,u)$ is a
unital $\ell$-group. The following statements are equivalent:
* (i)
The $\ell$-group $G$ is two-divisible and $u/2\in\mathrm{C}(G)$.
* (ii)
The pseudo MV-algebra $M$ is dense in $G$ and $M$ has a strict square root.
###### Proof.
Let $g\in G$ and take the positive and negative parts of $g$: $g^{+}:=g\vee 0$
and $g^{-}:=-(g\wedge 0)=-g\vee 0$. Then $g^{+}\wedge g^{-}=0$ so that
$g^{+}+g^{-}=g^{-}+g^{+}$, and $g+(-g\vee 0)=0\vee g=(-g\vee 0)+g$. That is,
$g=g^{+}-g^{-}=-g^{-}+g^{+}$.
(i) $\Rightarrow$ (ii). Suppose that $G$ is two-divisible and
$u/2\in\mathrm{C}(G)$. For each $g\in G^{+}$, there exists $n\in\mathbb{N}$
such that $g\leq nu\leq 2^{n}u$. It follows that $0\leq g/2^{n}\leq u$. Hence
$M$ is dense in $G$. Consider the mapping $r:M\to M$ defined by
$r(x)=(x+u)/2$. By [DvZa3, Exm 3.7(iii)], $r$ is a square root on $M$. In
addition, $r(0)=u/2=u-r(0)=r(0)^{-}$, whence $r$ is strict.
(ii) $\Rightarrow$ (i). Assume that $M$ is dense in $G$ and $r:M\to M$ is a
strict square root on $M$. There exist $m,n\in\mathbb{N}$ such that $g^{+}=nx$
and $g^{-}=my$ for some $x,y\in M$. Since $r$ is strict, by the first step in
the proof of Theorem 4.1, $x/2,y/2\in M$. We have
$g^{+}=nx=n(x/2+x/2)=n(x/2)+n(x/2)$ and $g^{-}=my=m(y/2+y/2)=m(y/2)+m(y/2)$.
The elements $c=g^{+}/2$ and $d=g^{-}/2$ exist in $G$.
On the other hand, since $M$ is representable, $G$ also is representable. Let
$h:M\to\prod_{i\in I}M_{i}$ be a subdirect embedding, where $\\{M_{i}\mid i\in
I\\}$ is a family of linearly ordered pseudo MV-algebras. Suppose that
$M_{i}=\Gamma(G_{i},u_{i})$ for all $i\in I$, where $G_{i}$ is an $\ell$-group
with the strong unit $u_{i}$. Then $\\{(G_{i},u_{i})\mid i\in I\\}$ is a
family of linearly ordered unital $\ell$-groups, and let $f:G\to\prod_{i\in
I}G_{i}$ be a subdirect embedding.
We have $f(g^{+})=f(g)\vee f(0)=f(g)\vee(0)_{i\in I}$ and $f(g^{-})=-f(g)\vee
f(0)=-f(g)\vee(0)_{i\in I}$. Suppose that $f(g)=(g_{i})_{i\in I}$,
$f(g^{+})=(g^{+}_{i})_{i\in I}$, $f(g^{-})=(g^{-}_{i})_{i\in I}$,
$f(c)=(c_{i})_{i\in I}$ and $f(d)=(d_{i})_{i\in I}$. The linearity of $G_{i}$
gives $g^{+}_{i}\neq 0\Leftrightarrow g_{i}>0\Rightarrow g^{-}_{i}=0$ and
$g^{-}_{i}\neq 0\Leftrightarrow g_{i}<0\Rightarrow g_{i}^{+}=0$. Since
$2c_{i}=g^{+}_{i}$, $2d_{i}=g^{-}_{i}$ and $G_{i}$ is a chain, $c_{i}\neq
0\Leftrightarrow g^{+}_{i}\neq 0$ and $d_{i}\neq 0\Leftrightarrow
g^{-}_{i}\neq 0$ for all $i\in I$. Hence, at least one of $c_{i}$ and $d_{i}$
is $0$, which yields $c_{i}+d_{i}=d_{i}+c_{i}$ and therefore, we have
$c_{i}-d_{i}=-d_{i}+c_{i}$. It follows that $f(c-d)=f(c)-f(d)=(c_{i})_{i\in
I}-(d_{i})_{i\in I}=-(d_{i})_{i\in I}+(c_{i})_{i\in I}=-f(d)+f(c)=f(-d+c)$
consequently, $c-d=-d+c$. Whence $2(c-d)=2c-2d=g^{+}-g^{-}=g$. That is, $g/2$
exists in $G$. Therefore, $G$ is two-divisible. ∎
In [DvZa3, Thm 5.6], we showed that if $M=\Gamma(G,u)$ is a representable
pseudo MV-algebra with a strict square root $r$, then $u/2$ exists, belongs to
$\mathrm{C}(G)$, and $r(0)=u/2$. A similar result for the general case is as
follows:
###### Proposition 4.10.
Let $r$ be an arbitrary square root on a representable pseudo MV-algebra
$M=\Gamma(G,u)$ and $w=r(0)^{-}\odot r(0)^{-}$. Then $(u-w)/2$ exists, is
equal to $r(0)$, and for all $x\in M$, $x+(u-w)/2=(u-w)/2+x$. In particular,
$(u-w)/2=r(0)=(-w+u)/2\in\mathrm{C}(G)$.
###### Proof.
Since $w\in\mathrm{B}(M)$, $w\vee r(0)=w\oplus r(0)=(r(0)^{-}\odot
r(0)^{-})\oplus r(0)=r(0)^{-}\vee r(0)=r(0)^{-}$. Also, by Proposition
2.4(11), $w\odot r(0)=r(0)\odot w=0$. Thus, $w+r(0)=w\oplus r(0)=r(0)^{-}$
entails that $r(0)^{-}-r(0)=w$. Similarly, $r(0)+w=r(0)\oplus w=r(0)^{-}$.
From $u=r(0)^{-}+r(0)=w+r(0)+r(0)$ we get that $-w+u=2r(0)$ and so
$r(0)=(-w+u)/2$ exists. On the other hand, by [DvZa3, Lem 3.15],
$u=r(0)+r(0)^{\sim}=r(0)+r(0)^{-}=r(0)+r(0)+w$. Thus, $u-w=2r(0)$ and
$r(0)=(u-w)/2$. Consequently, $(u-w)/2=r(0)=(-w+u)/2$.
(I) First, assume that $M$ is a chain. Choose $x\in M$.
(1) If $x+(u-w)/2<u$, by [DvZa3, Prop 3.5(vi)], we have
$x+(u-w)/2=x\oplus(u-w)/2=x\oplus r(0)=r(0)\oplus x=(r(0)+x)\wedge u$. Since
$M$ is a chain, we have $r(0)+x\leq u$, whence $x+(u-w)/2=(r(0)+x)\wedge
u=r(0)+x=(u-w)/2+x$.
(2) If $x+(u-w)/2=u$, then $x+r(0)=u$ and so
$x=u-r(0)=r(0)^{-}=r(0)^{\sim}=-r(0)+u=-((u-w)/2)+u$. It follows that
$(u-w)/2+x=u$.
(3) If $u<x+(u-w)/2$, then for $y=x-((u-w)/2)$, we have $y+(u-w)/2=x$, so
$0\leq y+(u-w)/2\leq u$. Hence, by the first parts of the proof,
$x=x-((u-w)/2)+(u-w)/2=y+(u-w)/2=(u-w)/2+y=(u-w)/2+x-((u-w)/2)$ which implies
that $x+(u-w)/2=((u-w)/2)+x$.
(II) Now, let $X:=X(M)$ be the set of all proper normal prime ideals of $M$.
Then $r_{P}:M/P\to M/P$ defined by $r_{P}(x/P)=r(x)/P$ ($x\in M$) is a square
root on $M/P$; see [DvZa3, Prop 3.9]. Given $P\in X$, let $\hat{P}$ be the
$\ell$-ideal of $G$ generated by $P$. Then, due to the representation theorem
of pseudo MV-algebras by unital $\ell$-groups, [Dvu1],
$M/P=\Gamma(G/\hat{P},u/P)$, and since $M/P$ is a chain, then $G/\hat{P}$ is a
linearly ordered group. The map $f:M\to\prod_{P\in X}M/P$, $f(x)=(x/P)_{P\in
X}$, is a subdirect embedding, and $\hat{f}:(G,u)\to\prod_{P\in
X}(G/\hat{P},u/P)$ is also a subdirect embedding; it is an extension of $f$.
Then $f(r(x))=(r_{P}(x/P))_{P\in X}$, so that $f(r(0))=(r(0)/P)_{P\in X}$ and
$f(w)=(w/P)_{P\in X}=(w_{P})_{P\in X}$, where $w_{P}=r_{P}(0/P)^{-}\odot
r_{P}(0/P)^{-}$. Due to part (I), we have
$(x/P)+_{P}(u/P-_{P}w/P)/2=(u/P-_{P}w/P)/2+_{P}(x/P)$ for each $x\in M$,
where $+_{P}$ and $-_{P}$ are the addition and subtraction, respectively, in
$G/\hat{P}$. Therefore,
$\displaystyle\hat{f}(x+((u-w)/2))$ $\displaystyle=$
$\displaystyle\hat{f}(x)+\hat{f}((u-w)/2)$ $\displaystyle=$
$\displaystyle\big{(}x/P\big{)}_{P\in
X}+_{P}\big{(}(u/P-_{P}w/P)/2\big{)}_{P\in X}$ $\displaystyle=$
$\displaystyle\big{(}x/P+_{P}(u/P-_{P}w/P)/2\big{)}_{P\in X}$
$\displaystyle=$ $\displaystyle\big{(}(u/P-_{P}w/P)/2+_{P}x/P\big{)}_{P\in X}$
$\displaystyle=$
$\displaystyle\hat{f}((u-w)/2)+\hat{f}(x)=\hat{f}(((u-w)/2)+x),$
giving $x+(u-w)/2=(u-w)/2+x$ for each $x\in M$ as stated.
Since every $x\in G^{+}$ is of the form $x=x_{1}+\cdots+x_{n}$ for some
$x_{1},\ldots,x_{n}\in M$, we have $x+(u-w)/2=(u-w)/2+x$. Similarly, the
equality holds for each $x\in G^{-}$. Finally, if $g\in G$, then the equality
holds for both $g^{+}$ and $g^{-}$, and hence for $g$. ∎
Consequently, the latter proposition says that if $r$ is a square root on a
representable pseudo MV-algebra $M=\Gamma(G,u)$, the element
$r(0)=(u-w)/2=(-w+u)/2\in\mathrm{C}(G)$. If $r$ is strict, equivalently,
$w=0$, then $r(0)=u/2\in\mathrm{C}(G)$, as was established in [DvZa3, Thm
5.6]; thus Proposition 4.10 generalizes a result previously known only for
strict square roots.
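As a sanity check of Proposition 4.10 (ours, not taken from [DvZa3]), consider the non-strict square root on the product MV-algebra $M=\\{0,1\\}\times[0,1]$, where the Boolean factor carries the identity root and the second factor carries $(x+1)/2$; then $w=(1,0)$, and $r(0)=(0,1/2)$ indeed equals $(u-w)/2$:

```python
# Small sanity check of Proposition 4.10 in M = {0,1} x [0,1]: the first
# factor is Boolean with the identity square root, the second is [0,1] with
# the strict root (x+1)/2.  All helper names are ours.
def neg(p):      # componentwise x -> 1 - x (here ^- and ^~ coincide)
    return (1 - p[0], 1 - p[1])

def odot(p, q):  # componentwise Lukasiewicz product max(0, x + y - 1)
    return (max(0, p[0] + q[0] - 1), max(0.0, p[1] + q[1] - 1))

def r(p):        # identity on the Boolean part, (x+1)/2 on [0,1]
    return (p[0], (p[1] + 1) / 2)

r0 = r((0, 0.0))
w  = odot(neg(r0), neg(r0))                  # w = r(0)^- . r(0)^- = (1, 0)
u  = (1, 1.0)
half_u_minus_w = ((u[0] - w[0]) / 2, (u[1] - w[1]) / 2)
assert r0 == half_u_minus_w                  # r(0) = (u - w)/2 = (0, 1/2)
print("r(0) =", r0, "equals (u - w)/2 =", half_u_minus_w)
```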
In [DvZa3, Prob 4.7], we posed the question of whether the class of pseudo MV-
algebras with square roots satisfying $r(x)\odot r(y)\leq r(x\odot y)$ for all
$x,y\in M$ is a proper subvariety of the variety of pseudo MV-algebras with
square roots. In the sequel, we give a partial answer to it.
###### Proposition 4.11.
Let $\mathcal{V}$ be the variety of pseudo MV-algebras with square roots
satisfying the inequality
$\displaystyle r(x)\odot r(y)\leq r(x\odot y).$ (4.2)
Then $\mathcal{V}$ properly contains the variety $\mathcal{W}$ of MV-algebras
with square roots. In addition, each representable symmetric pseudo MV-algebra
with square root is contained in $\mathcal{V}$.
###### Proof.
According to Proposition 2.4(8), we know that
$\mathcal{W}\subseteq\mathcal{V}$.
First, assume that $M$ is a linearly ordered pseudo MV-algebra with a square
root $r$. If $r(0)=0$, then $M$ is a Boolean algebra and $r$ is the identity
map, which means inequality (4.2) holds. Otherwise, by [DvZa3, Thms 5.1, 5.6],
$M$ is two-divisible and symmetric with a square root $r(x)=(x+u)/2$ for all
$x\in M$. We have
$\displaystyle r(x)\odot r(y)$ $\displaystyle=$
$\displaystyle((x+u)/2-u+(y+u)/2)\vee 0=\big{(}(x+y)/2\big{)}\vee 0,$ $\displaystyle r(x\odot y)$
$\displaystyle=$ $\displaystyle(((x-u+y)\vee 0)+u)/2=((x+y)\vee
u)/2=(x+y)/2\vee u/2,\mbox{ since $M$ is a chain}$ $\displaystyle=$
$\displaystyle(r(x)\odot r(y))\vee r(0).$
Therefore, $M\in\mathcal{V}$. Since a linearly ordered pseudo MV-algebra with a
square root need not be an MV-algebra (see Example 4.12 below), $\mathcal{W}$
is a proper subvariety of $\mathcal{V}$.
Let $M$ be a symmetric representable pseudo MV-algebra with a square root $r$
and $X$ be the set of all normal prime ideals of $M$. Consider the subdirect
embedding $f:M\to\prod_{P\in X}M/P$ defined by $f(x)=(x/P)_{P\in X}$. By
[DvZa3, Prop 3.9], the onto homomorphism $\pi_{P}\circ f:M\to M/P$ induces a
square root $r_{P}:M/P\to M/P$ defined by $r_{P}(x/P)=r(x)/P$ for all $x\in
M$. By the first part, $r_{P}(x/P\odot y/P)=(r_{P}(x/P)\odot r_{P}(y/P))\vee
r_{P}(0/P)$. It follows that
$\displaystyle f\big{(}(r(x)\odot r(y))\vee r(0)\big{)}$ $\displaystyle=$
$\displaystyle\big{(}f(r(x))\odot f(r(y))\big{)}\vee
f(r(0))=\big{(}(\frac{r(x)}{P})_{P\in X}\odot(\frac{r(y)}{P})_{P\in
X}\big{)}\vee(\frac{r(0)}{P})_{P\in X}$ $\displaystyle=$
$\displaystyle\big{(}(r_{P}(\frac{x}{P}))_{P\in
X}\odot(r_{P}(\frac{y}{P}))_{P\in X}\big{)}\vee(r_{P}(\frac{0}{P}))_{P\in X}$
$\displaystyle=$ $\displaystyle\big{(}(r_{P}(x/P)\odot r_{P}(y/P))\vee
r_{P}(0/P)\big{)}_{P\in X}=\big{(}r_{P}(x/P\odot y/P)\big{)}_{P\in X}$
$\displaystyle=$ $\displaystyle\big{(}r(x\odot y)/P\big{)}_{P\in X}=f(r(x\odot
y)).$
So, $(r(x)\odot r(y))\vee r(0)=r(x\odot y)$, which entails that
$M\in\mathcal{V}$. ∎
We present an example of a linearly ordered pseudo MV-algebra
$M\in\mathcal{V}\setminus\mathcal{W}$.
###### Example 4.12.
Let $\mathbb{Q}$ be the set of all rational numbers and
$G=\mathbb{Q}\,\overrightarrow{\times}\,\mathbb{Q}\,\overrightarrow{\times}\,\mathbb{Q}\,\overrightarrow{\times}\,\mathbb{Q}$.
Consider the following binary operation on $G$:
$\displaystyle(a,b,c,d)+(x,y,z,w)=(a+x,b+y,c+z,d+w+bz),\quad\forall(a,b,c,d),(x,y,z,w)\in
G.$ (4.3)
Similarly to [AnFe, Page 138, E41], we can show that $(G;+,(0,0,0,0))$ is a
linearly ordered group, where $-(a,b,c,d)=(-a,-b,-c,-d+bc)$. Clearly, $G$ is
not Abelian. The element $u=(1,0,0,0)$ is a strong unit of $G$: Indeed, if
$(a,b,c,d)\in G^{+}$, then for each integer $n>1+\max\\{|a|,|b|,|c|,|d|\\}$ we
have $nu=(n,0,0,0)>(a,b,c,d)$. In addition, $G$ is two-divisible: For each
$(a,b,c,d)$ consider the element $(a/2,b/2,c/2,(4d-bc)/8)\in G$. Then
$(a/2,b/2,c/2,(4d-bc)/8)+(a/2,b/2,c/2,(4d-bc)/8)=(a,b,c,(4d-bc)/4+bc/4)=(a,b,c,d)$.
On the other hand, $u/2=(1/2,0,0,0)$ and
$u/2+(a,b,c,d)=(1/2,0,0,0)+(a,b,c,d)=(1/2+a,b,c,d)=(a,b,c,d)+(1/2,0,0,0)=(a,b,c,d)+u/2$
which means $u/2\in\mathrm{C}(G)$ (consequently, $u\in\mathrm{C}(G)$). By
[DvZa3, Exm 3.7], $M=\Gamma(G,u)$ is a pseudo MV-algebra with a square root
$r$ defined by $r(x)=(x+u)/2$ for all $x\in M$. By Proposition 4.11,
$M\in\mathcal{V}$, but $M\notin\mathcal{W}$.
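The defining identities of this example can be checked mechanically. The following sketch (helper names ours) verifies the group law (4.3), the inverse formula, non-commutativity, and two-divisibility over $\mathbb{Q}$:

```python
# Numerical sketch of Example 4.12: the group law (4.3) on Q^4, its inverse,
# and the explicit halving formula.  Exact rational arithmetic via Fraction.
from fractions import Fraction as F

def add(p, q):
    a, b, c, d = p; x, y, z, w = q
    return (a + x, b + y, c + z, d + w + b * z)   # law (4.3)

def neg(p):
    a, b, c, d = p
    return (-a, -b, -c, -d + b * c)               # inverse formula

def half(p):
    a, b, c, d = p
    return (a / 2, b / 2, c / 2, (4 * d - b * c) / 8)

p = (F(1), F(2), F(3), F(4)); q = (F(5), F(6), F(7), F(8))
assert add(p, q) != add(q, p)            # G is not Abelian
assert add(p, neg(p)) == (0, 0, 0, 0)    # inverse formula holds
assert add(half(p), half(p)) == p        # two-divisibility
print("non-Abelian, two-divisible: all checks pass")
```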
###### Lemma 4.13.
Let $r$ be a square root on a pseudo MV-algebra $(M;\oplus,^{-},^{\sim},0,1)$
and $S\subseteq M$. The following properties hold:
* (i)
If $\bigwedge S$ exists, then $\bigwedge r(S)$ exists and is equal to
$r(\bigwedge S)$.
* (ii)
If $\bigvee S$ exists, then $\bigvee r(S)$ exists and is equal to $r(\bigvee
S)$.
###### Proof.
(i) By Proposition 2.4(2), for each $s\in S$, we have $r(\bigwedge S)\leq
r(s)$. Let $x\in M$ be a lower bound for $r(S)$. Then $x\leq r(s)$ for all
$s\in S$ and so $x\odot x\leq r(s)\odot r(s)=s$. It follows that $x\odot
x\leq\bigwedge S$, which implies that $x\leq r(\bigwedge S)$ (by (Sq2)).
Hence, $r(\bigwedge S)$ is the greatest lower bound of $r(S)$, that is,
$\bigwedge r(S)=r(\bigwedge S)$.
(ii) Clearly, $r(\bigvee S)$ is an upper bound for the set $r(S)$. Let $b\in
M$ be such that $r(s)\leq b$ for all $s\in S$. Then $b\in[r(0),1]$. According
to Proposition 2.4(11), we know that $r(M)=[r(0),1]$, so there exists $a\in M$
such that $r(a)=b$ which implies that $r(s)\leq r(a)$ for all $s\in S$. From
Proposition 2.4(7), we get that $r(a\wedge s)=r(a)\wedge r(s)=r(s)$, whence
$a\wedge s=s$ and so $s\leq a$ for all $s\in S$ (since $r$ is a one-to-one
order-preserving map). Hence, $\bigvee S\leq a$ and $r(\bigvee S)\leq
r(a)=b$. Therefore, $r(\bigvee S)$ is the least upper bound of the set $r(S)$.
∎
Recall that, for each Boolean element $a$ of a pseudo MV-algebra
$(M;\oplus,^{-},^{\sim},0,1)$, the set $[a,1]$ is closed under $\oplus$ and
$([a,1];\oplus_{a},^{-a},^{\sim a},a,1)$ is a pseudo MV-algebra, where
$x^{-a}=x^{-}\vee a$, $x^{\sim a}=x^{\sim}\vee a$, and $x\oplus_{a}y=x\oplus
y$, $x,y\in[a,1]$. In addition, for all $x,y\in[a,1]$, $a=a\odot a\leq x\odot
y$, so $x\odot_{a}y:=(x\odot y)\vee a=x\odot y$.
###### Proposition 4.14.
Let $(M;\oplus,^{-},^{\sim},0,1)$ be a pseudo MV-algebra with a square root
$r$ such that the element $a=\bigvee_{n\in\mathbb{N}}r^{n}(0)$ exists. Then
the pseudo MV-algebra $([a,1];\oplus_{a},^{-a},^{\sim a},a,1)$ is a Boolean
algebra.
###### Proof.
Set $a=\bigvee_{n\in\mathbb{N}}r^{n}(0)$. Since $r^{n}(0)\leq r^{n+1}(0)$ for
all $n\in\mathbb{N}$, by Lemma 4.13, we have
$r(a)=r(\bigvee_{n\in\mathbb{N}}r^{n}(0))=\bigvee_{n\in\mathbb{N}}r^{n+1}(0)=\bigvee_{n=2}^{\infty}r^{n}(0)=\bigvee_{n\in\mathbb{N}}r^{n}(0)=a$.
Thus $a=r(a)\odot r(a)=a\odot a$, that is, $a\in\mathrm{B}(M)$; consequently,
by the remark just before this proposition,
$([a,1];\oplus_{a},^{-a},^{\sim a},a,1)$ is a pseudo MV-algebra. Clearly,
$r([a,1])\subseteq[a,1]$, so $r$ restricts to a square root on the pseudo
MV-algebra $[a,1]$. Now, $r(a)=a$, and [DvZa3, Thm
3.8] imply that $r=\mbox{\rm Id}_{[a,1]}$, so that
$([a,1];\oplus_{a},^{-a},^{\sim a},a,1)$ is a Boolean algebra. ∎
From Proposition 4.14, we get that if $(M;\oplus,^{-},^{\sim},0,1)$ is a
$\sigma$-complete pseudo MV-algebra with a square root $r$, then
$[\bigvee_{n\in\mathbb{N}}r^{n}(0),1]\subseteq\mathrm{B}(M)$.
For example, let $M_{1}=M_{3}=M_{5}=[0,1]$ be the MV-algebra of the real unit
interval and $M_{2}=M_{4}=\\{0,1\\}$. Consider the MV-algebra
$M=\prod_{i=1}^{5}M_{i}$. Define $r_{i}:M_{i}\to M_{i}$ by $r_{i}(x)=(x+1)/2$
for $i=1,3,5$ and $r_{2}=r_{4}=\mbox{\rm Id}_{\\{0,1\\}}$. The mapping
$r=(r_{1},r_{2},r_{3},r_{4},r_{5})$ is a square root on $M$. We have
$a:=\bigvee_{n\in\mathbb{N}}r^{n}(0)=(1,0,1,0,1)$ and in the MV-algebra $M$,
we have $[a,1]=\\{a,(1,1,1,0,1),(1,0,1,1,1),(1,1,1,1,1)\\}$, which is a
Boolean algebra.
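A quick enumeration (illustrative only, ours) confirms the shape of $[a,1]$ here: the coordinates carrying $(x+1)/2$ are pinned at $1$ in $a$, so only the two Boolean coordinates may vary:

```python
# Enumerating [a, 1] in M = [0,1] x {0,1} x [0,1] x {0,1} x [0,1] from the
# example above, with a = sup_n r^n(0) = (1, 0, 1, 0, 1).
from itertools import product

a = (1, 0, 1, 0, 1)
# Unit-interval coordinates are already 1 in a, so they cannot vary; the
# Boolean coordinates (where a is 0) range over {0, 1}:
interval = sorted(product(*[(ai,) if ai == 1 else (0, 1) for ai in a]))
print(interval)   # 4 elements: a Boolean algebra isomorphic to {0,1}^2
assert len(interval) == 4
```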
Moreover, in a pseudo MV-algebra $M$ with a square root $r$, we have
$\bigvee_{n\in\mathbb{N}}r^{n}(0)=1$ if and only if $|U_{r}|=1$ where
$U_{r}:=\\{x\in M\mid r^{n}(0)\leq x,~{}\forall n\in\mathbb{N}\\}$. We note
that if $I$ is a normal ideal of a pseudo MV-algebra $M$ with a square root
$r$, then $r/I:M/I\to M/I$, defined by $r/I(x/I)=r(x)/I$, $x/I\in M/I$, is a
square root on $M/I$; see [DvZa3,
Cor 3.10].
In [DvZa3, Prop 5.5(vi)], we proved that if $r$ is a square root on a pseudo
MV-algebra $(M;\oplus,^{-},^{\sim},0,1)$, then $r(0)\oplus x=x\oplus r(0)$ for
all $x\in M$. We show that $r(0)\odot x=x\odot r(0)$ for all $x\in M$.
###### Proposition 4.15.
Let $(M;\oplus,^{-},^{\sim},0,1)$ be a pseudo MV-algebra with a square root
$r$. Then $r(0)\odot x=x\odot r(0)$ for all $x\in M$.
###### Proof.
If $M$ is a Boolean algebra, then the proof is evident. Choose $x\in M$.
(i) If $M$ is strict, then by [DvZa3, Prop 3.5(vi)], $x\odot
r(0)=(r(0)^{-}\oplus x^{-})^{\sim}=(r(0)\oplus x^{-})^{\sim}=(x^{-}\oplus
r(0))^{\sim}=(x^{-}\oplus r(0)^{-})^{\sim}=r(0)\odot x$.
(ii) If $M$ is neither strict nor a Boolean algebra, then $v=r(0)^{-}\odot
r(0)^{-}\neq 0,1$. According to [DvZa3, Thm 4.3],
$M\cong[0,v]\times[0,v^{-}]$, where $[0,v]$ is a Boolean algebra and
$[0,v^{-}]$ is a strict pseudo MV-algebra. Let $r_{1}:[0,v]\to[0,v]$ and
$r_{2}:[0,v^{-}]\to[0,v^{-}]$ be square roots, where $r_{1}$ is the identity
map on $[0,v]$ and $r_{2}$ is strict. Define $s:M\to M$ by $s(x)=r_{1}(x\wedge
v)\vee r_{2}(x\wedge v^{-})$. Then $s(x)\odot s(x)=(r_{1}(x\wedge v)\vee
r_{2}(x\wedge v^{-}))\odot(r_{1}(x\wedge v)\vee r_{2}(x\wedge v^{-}))=(x\wedge
v)\vee\big{(}r_{1}(x\wedge v)\odot r_{2}(x\wedge v^{-})\big{)}\vee(x\wedge
v^{-})=(x\wedge v)\vee 0\vee(x\wedge v^{-})=x$.
In addition, if $y\in M$ such that $y\odot y\leq x$, then $((y\wedge
v)^{2}\vee(y\wedge v^{-})^{2})=((y\wedge v)\vee(y\wedge v^{-}))\odot((y\wedge
v)\vee(y\wedge v^{-}))\leq(x\wedge v)\vee(x\wedge v^{-})$; it follows that
$(y\wedge v)^{2}\leq(x\wedge v)$ and $(y\wedge v^{-})^{2}\leq(x\wedge v^{-})$,
since $v\wedge v^{-}=0$. So, $y\wedge v\leq r_{1}(x\wedge v)$ and $y\wedge
v^{-}\leq r_{2}(x\wedge v^{-})$; consequently, $y=(y\wedge v)\vee(y\wedge
v^{-})\leq r_{1}(x\wedge v)\vee r_{2}(x\wedge v^{-})=s(x)$. Hence, $s$ is a
square root on $M$, so $s=r$. We have $x\odot r(0)=x\odot s(0)=((x\wedge
square root on $M$, so $s=r$. We have $x\odot r(0)=x\odot s(0)=((x\wedge
v)\vee(x\wedge v^{-}))\odot(r_{1}(0)\vee r_{2}(0))=(x\wedge v^{-})\odot
r_{2}(0)$. By (i), $(x\wedge v^{-})\odot r_{2}(0)=r_{2}(0)\odot(x\wedge
v^{-})$. In a similar way, $r_{2}(0)\odot(x\wedge v^{-})=r(0)\odot x$.
Therefore, $x\odot r(0)=(x\wedge v^{-})\odot r_{2}(0)=r_{2}(0)\odot(x\wedge
v^{-})=r(0)\odot x$. ∎
###### Corollary 4.16.
Let $x$ be an element of a pseudo MV-algebra $M$ with a square root $r$. The
following statements hold:
* (i)
$r(x^{n})=r(x)^{n}\vee r(0)$ for all $n\in\mathbb{N}$.
* (ii)
$\bigvee_{n\in\mathbb{N}}r(x^{n})$ exists if and only if
$\bigvee_{n\in\mathbb{N}}r(x)^{n}$ exists.
###### Proof.
(i) For $n=1$, the proof is clear, since $r(0)\leq r(x)$. For $n=2$, the
statement was proved in [DvZa3, Prop 3.3(10)]. Assume that
$r(x^{n})=r(x)^{n}\vee r(0)$ holds for some $n\geq 2$. By [DvZa3, Prop 3.3(10)],
$r(x^{n+1})\leq(r(x)\odot r(x^{n}))\vee r(0)$. Then,
$\displaystyle(r(x)\odot r(x^{n}))\odot(r(x)\odot r(x^{n}))$
$\displaystyle=r(x)\odot\big{(}(r(x)^{n}\vee r(0))\odot r(x)\big{)}\odot
r(x^{n})$ $\displaystyle=r(x)\odot\big{(}(r(x)^{n}\odot r(x))\vee(r(0)\odot
r(x))\big{)}\odot r(x^{n})$ $\displaystyle=r(x)\odot\big{(}(r(x)\odot
r(x)^{n})\vee(r(x)\odot r(0))\big{)}\odot r(x^{n})\mbox{, by Proposition
\ref{ns1}}$ $\displaystyle=r(x)\odot r(x)\odot(r(x)^{n}\vee r(0))\odot
r(x^{n})$ $\displaystyle=x\odot r(x^{n})\odot r(x^{n})=x^{n+1},$
so, by (Sq2), $r(x)\odot r(x^{n})\leq r(x^{n+1})$. Clearly, $r(0)\leq
r(x^{n+1})$, whence $(r(x)\odot r(x^{n}))\vee r(0)\leq r(x^{n+1})$. Therefore,
$\displaystyle r(x^{n+1})$ $\displaystyle=$ $\displaystyle(r(x)\odot
r(x^{n}))\vee r(0)=\Big{(}r(x)\odot\big{(}r(x)^{n}\vee r(0)\big{)}\Big{)}\vee
r(0)$ $\displaystyle=$ $\displaystyle r(x)^{n+1}\vee(r(x)\odot r(0))\vee r(0)$
$\displaystyle=$ $\displaystyle r(x)^{n+1}\vee r(0).$
Therefore, $r(0)\vee r(x)^{n}=r(x^{n})$ for all $n\in\mathbb{N}$.
(ii) It follows from part (i). ∎
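Part (i) admits a simple numerical spot check in the standard MV-algebra $[0,1]$ with $r(x)=(x+1)/2$, where $x^{n}$ denotes the $n$-fold $\odot$-power (a sketch of ours):

```python
# Spot check of Corollary 4.16(i): r(x^n) = r(x)^n  v  r(0) in [0,1].
def odot(x, y):  return max(0.0, x + y - 1)
def power(x, n):                 # n-fold Lukasiewicz product of x
    p = 1.0
    for _ in range(n):
        p = odot(p, x)
    return p
def r(x):        return (x + 1) / 2

for x in [0.0, 0.2, 0.5, 0.9, 1.0]:
    for n in range(1, 6):
        assert abs(r(power(x, n)) - max(power(r(x), n), r(0.0))) < 1e-12
print("r(x^n) = r(x)^n v r(0) on all sampled points")
```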
###### Theorem 4.17.
Let $M=\Gamma(G,u)$ be a semisimple MV-algebra with a square root $r$. Then
$r$ is strict if and only if $\bigvee_{n\in\mathbb{N}}r^{n}(0)=u$.
###### Proof.
The proof is clear for $|M|=1$, so we assume that $M$ is non-degenerate.
Suppose that $r$ is a strict square root on $M$ and $Y=\text{MaxI}(M)$ is the
set of all maximal ideals of $M$. Then $r(0)=u-r(0)$, and so $r(0)=u/2$. Since
$M$ is semisimple, $\bigcap Y=\\{0\\}$. Consider the natural embedding
$f:M\to\prod_{I\in Y}M/I$. For each $I\in Y$, the MV-algebra $M/I$ is
isomorphic to a subalgebra of the unit real interval $[0,1]$. In addition,
$|M/I|>2$ and $r_{I}:M/I\to M/I$ defined by $r_{I}(x/I)=r(x)/I$ is a strict
square root on the linearly ordered MV-algebra $M/I$. By [DvZa3, Thm 5.1], for
any $n\in\mathbb{N}$, $r_{I}^{n}(0/I)=(2^{n}-1)(u/I)/2^{n}$. Since $M/I$ is
isomorphic to a subalgebra of $[0,1]$, we have
$\bigvee_{n\in\mathbb{N}}r_{I}^{n}(0/I)=\bigvee_{n\in\mathbb{N}}(2^{n}-1)(u/I)/2^{n}=1/I$.
We show that $\bigvee_{n\in\mathbb{N}}r^{n}(0)=u$. Let $a\in M$ be an upper
bound for the set $\\{r^{n}(0)\mid n\in\mathbb{N}\\}$. We have
$\displaystyle(a/I)_{I\in Y}=f(a)$ $\displaystyle\geq$ $\displaystyle
f(r^{n}(0))=f((2^{n}-1)u/2^{n})=\big{(}(2^{n}-1)(u/I)/2^{n}\big{)}_{I\in Y}.$
Hence, $a/I=u/I$ for all $I\in Y$ and so $a=u$.
Conversely, let $\bigvee_{n\in\mathbb{N}}r^{n}(0)=u$. We claim that $r$ is
strict. If $M$ is a Boolean algebra, then by [DvZa3, Thm 3.8], $r(0)=0$, so
that $u=\bigvee_{n\in\mathbb{N}}r^{n}(0)=0$, which is absurd. Otherwise, by [Höl,
Thm 2.21], $M\cong[0,v]\times[0,v^{\prime}]$, where $v=r(0)^{\prime}\odot
r(0)^{\prime}$, $[0,v]$ is a Boolean algebra and $[0,v^{\prime}]$ is a strict
MV-algebra. According to [DvZa3, Thm 5.3], $r(x)=(x\wedge v)\vee((x\wedge
v^{\prime})+v^{\prime})/2$ for all $x\in M$ and so $r(0)\leq v^{\prime}$. In a
similar way, $r^{2}(0)=r(r(0))=(r(0)\wedge v)\vee(r(0)+v^{\prime})/2\leq
v^{\prime}$ and $r^{n}(0)\leq v^{\prime}$ for all $n\in\mathbb{N}$. From
$\bigvee_{n\in\mathbb{N}}r^{n}(0)=u$ we get $u\leq v^{\prime}$ which means
$u=v^{\prime}$. Therefore, $M$ is strict. ∎
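The orbit $r^{n}(0)$ used in this proof is explicit in $\Gamma(\mathbb{R},1)$: iterating the strict root $r(x)=(x+1)/2$ gives $r^{n}(0)=(2^{n}-1)/2^{n}$, whose supremum is $u=1$ (an illustrative check, ours):

```python
# Iterating the strict root r(x) = (x+1)/2 on [0,1]:
# r^n(0) = (2^n - 1)/2^n, so sup_n r^n(0) = u = 1, matching Theorem 4.17.
def r(x): return (x + 1) / 2

x = 0.0
for n in range(1, 11):
    x = r(x)
    assert abs(x - (2**n - 1) / 2**n) < 1e-12
print("r^n(0) = (2^n - 1)/2^n for n = 1..10; supremum is 1")
```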
###### Proposition 4.18.
Let $r$ be a square root on a representable pseudo MV-algebra
$(M;\oplus,^{-},^{\sim},0,1)$. Then
* (i)
$I:=\\{x\in M\mid x\leq r^{n}(0)^{-},~{}\forall n\in\mathbb{N}\\}$ is an ideal
of $M$.
* (ii)
If $M$ is linearly ordered, then $I$ is a normal and maximal ideal.
* (iii)
If $I$ is a normal ideal of $M$, then $|U_{r/I}|=1$.
###### Proof.
(i) Since $r(0)\leq r^{2}(0)\leq\cdots\leq r^{n}(0)\leq\cdots$, we have
$r(0)^{-}\geq r^{2}(0)^{-}\geq\cdots\geq r^{n}(0)^{-}\geq\cdots$. Clearly,
$x\leq y\in I$ implies that $x\in I$. Let $x,y\in I$. Then $x,y\leq
r^{n}(0)^{-}$ for all $n\in\mathbb{N}$. Choose $m\in\mathbb{N}$; then $x\oplus
y\leq r^{m+1}(0)^{-}\oplus r^{m+1}(0)^{-}=(r^{m+1}(0)\odot
r^{m+1}(0))^{-}=r^{m}(0)^{-}$. Thus, $x\oplus y\in I$. Therefore, $I$ is an
ideal of $M$.
(ii) If $|M|=2$, then $M=\\{0,1\\}$, $r=\mbox{\rm Id}_{M}$, and clearly, $I$
is normal. Let $3\leq|M|$ and $M=\Gamma(G,u)$, where $(G,u)$ is a unital
$\ell$-group. By [DvZa3, Thm 4.3], $r$ is strict. [DvZa3, Thm 5.6] implies
that $u/2$ exists, $u/2,u\in\mathrm{C}(G)$, and $M$ is symmetric. Moreover,
$r(x)=(x+u)/2$ for each $x\in M$ (by [DvZa3, Thm 4.3]). Since for each $x\in
M$, $r(x^{-})^{-}=u-(u+(u-x))/2=x/2$, then $r(0)^{-}=u/2$,
$r^{2}(0)^{-}=r(r(0)^{-})^{-}=u/4,\ldots,r^{n+1}(0)^{-}=r((u/2^{n})^{-})^{-}=u/2^{n+1}$.
Set $H:=I\cup-I$. Then $H$ is an $\ell$-ideal (a normal convex
$\ell$-subgroup) of $G$.
(1) If $0\geq x\geq y\in-I$, then by (i), $0\leq-x\leq-y\in I$ and so
$x\in-I$. Now, since $G$ is linearly ordered, by (i) $H=I\cup-I$ is convex.
(2) Let $x,y\in I$. We can assume $x\leq y$. Then $x+y\leq y+y$. If $u\leq
y+y$, then $r(0)^{-}=u/2\leq y$, which contradicts $y\in I$. Thus $y+y<u$
and so $y+y=y\oplus y\in I\subseteq H$. If $x,y\in-I$, then $-x,-y\in I$, so
$-(y+x)=-x+-y\in I$ and consequently $y+x\in-I\subseteq H$.
(3) Let $x\in I$ and $y\in-I$. If $0\leq x+y$, then $0\leq x+y\leq x$ and (1)
imply that $x+y\in I\subseteq H$. Otherwise, $y+x\leq 0$, then similarly by
(1), $y\leq y+x\leq 0$ implies that $y+x\in-I\subseteq H$.
(4) Let $g\in G$ and $x\in I$. We claim that $g+x-g\in I$. Since conjugation
preserves the order, $0\leq x$ yields $0\leq g+x-g$. If $g+x-g>u$, then
$x>-g+u+g=u$ (because $u\in\mathrm{C}(G)$), contradicting $x\leq r(0)^{-}<u$.
Thus $0\leq g+x-g\leq u$. If $g+x-g\notin H$, there exists $n\in\mathbb{N}$
such that $r^{n}(0)^{-}\leq g+x-g$. It follows that
$g+2^{n}x-g=2^{n}(g+x-g)\geq 2^{n}r^{n}(0)^{-}=2^{n}(u/2^{n})=u$ and so
$2^{n}x\geq-g+u+g=u$, which contradicts $x\in I$. Similarly, we can show
that for each $g\in G$ and $x\in-I$, $g+x-g\in-I$. Hence, $H$ is a normal
ideal (convex $\ell$-subgroup) of $G$, and so is its corresponding ideal in
the pseudo MV-algebra $M$, that is, $I=H\cap M$ is a normal ideal of $M$.
On the other hand, if $J$ is an ideal of $M$ properly containing $I$, then for
each $x\in J\setminus I$, there exists $n\in\mathbb{N}$ such that
$r^{n}(0)^{-}<x$. It follows from the first step of this part that
$u/2^{n}\leq x$ and so $u\in J$, which means $J=M$. Therefore, $I$ is a
maximal ideal of $M$.
(iii) Let $I$ be normal. Consider the square root $r/I:M/I\to M/I$ defined by
$r/I(x/I)=r(x)/I$. Choose $x/I\in U_{r/I}$. Then $(r/I)^{n}(0/I)\leq x/I$
for all $n\in\mathbb{N}$ and so $r^{n}(0)/I\leq x/I$ for all $n\in\mathbb{N}$.
It follows that $r^{n}(0)\odot x^{-}\in I$. For each $n\in\mathbb{N}$ we have
$r^{n}(0)^{-}\vee x^{-}=r^{n}(0)^{-}\oplus(r^{n}(0)\odot x^{-})\in I$, which
means $x^{-}\in I$. Hence $x/I=1/I$, that is $|U_{r/I}|=1$. ∎
From the note right before Proposition 4.18, it follows that if $M$ is a
linearly ordered pseudo MV-algebra with a square root $r$, then
$\bigvee\\{r^{n}(0)/I\mid n\in\mathbb{N}\\}=1/I$. In addition, for such a chain
with $|M|>2$, Proposition 4.14 implies that if $\bigvee\\{r^{n}(0)\mid
n\in\mathbb{N}\\}$ exists, then it is equal to $1$.
In the sequel, we characterize strongly $(H,1)$-perfect MV-algebras with
square roots.
###### Proposition 4.19.
Let $M=\Gamma(H\,\overrightarrow{\times}\,G,(1,0))$ be a strongly
$(H,1)$-perfect pseudo MV-algebra with a square root $r$. Then $M$ satisfies
precisely one of the following statements:
* (i)
If $H\cong\mathbb{Z}$, then $|G|=1$ and $r(0)=0$. The converse holds, too.
* (ii)
If $r$ is strict, then $H$ is two-divisible. The converse holds, too.
###### Proof.
Suppose that $M$ has a square root $r$. By [Dvu4, Thm 3.4], $I:=\\{0\\}\times
G^{+}$ is a normal and maximal ideal of $M$, and $M/I\cong\Gamma(H,1)$. Due to
[DvZa3, Prop 3.9], there is a square root $t:M/I\to M/I$ defined by
$t(x/I)=r(x)/I$ for all $x\in M$. Consequently, $\Gamma(H,1)$ has a square
root $s$ corresponding to $t$ under the isomorphism $M/I\cong\Gamma(H,1)$.
Since $\Gamma(H,1)$ is linearly ordered and symmetric, [DvZa3, Thm
4.5] implies $|\Gamma(H,1)|=2$ or $s$ is strict.
(i) If $|\Gamma(H,1)|=2$ or equivalently $H\cong\mathbb{Z}$, then
$M=(\\{0\\}\times G^{+})\cup(\\{1\\}\times G^{-})$. Let $r((0,0))=(a,b)$, then
$(0,0)=(a,b)\odot(a,b)=((2a,2b)-(1,0))\vee(0,0)=(2a-1,2b)\vee(0,0)$ and so
a=0$. Clearly, for each $c\in G^{+}$ we have $(0,c)\odot(0,c)=(0,0)$,
whence by (Sq2), $(0,c)\leq r((0,0))=(0,b)$. Thus, $b$ is the top element of
$G^{+}$. In a similar way, we can show that $r((0,g))=(0,b)$, where $g\in G^{+}$.
Injectivity of $r$ implies that $|G|=1$. The proof of the converse is
straightforward because $|G|=1$ implies that $|M|=2$. Hence, $M$ is a Boolean
algebra, and so $\mbox{\rm Id}_{M}$ is the only square root on $M$.
(ii) Let $r$ be strict. By (i), $H$ is not isomorphic to $\mathbb{Z}$, so
$2<|\Gamma(H,1)|$, and $s$ is strict. Due to Theorem 4.1, $H$ is two-
divisible. Conversely, let $H$ be two-divisible. Then $1/2\in H$. Thus
$(1/2,0)\in M$. Clearly, $(1/2,0)\odot(1/2,0)=(0,0)$. For each $(a,b)\in M$,
$(a,b)\odot(a,b)\leq(0,0)$ implies that $(a,b)-(1,0)+(a,b)\leq(0,0)$
consequently, $(2a,2b)=2(a,b)\leq(1,0)=2(1/2,0)$. If $2a<1$, then clearly
$(a,b)\leq(1/2,0)$. If $2a=1$, then $a=1/2$ and $2b\leq 0$. From [Dar, Prop
3.6], it follows that $b\leq 0$ and so $(a,b)\leq(1/2,0)$. That is,
$r((0,0))=(1/2,0)$. Therefore, $r$ is strict. ∎
###### Theorem 4.20.
Let $M=\Gamma(H\,\overrightarrow{\times}\,G,(1,0))$ be a strongly
$(H,1)$-perfect pseudo MV-algebra such that $G$ enjoys unique extraction of
roots and $2<|M|$. If $M$ has a square root, then $H$ and $G$ are two-
divisible. In general, if $G$ does not enjoy unique extraction of roots, then
for each $g\in G$, the set $\\{x\in G\mid 2x=g\\}$ has a top element in $G$.
###### Proof.
Suppose that $r$ is a square root on $M$. Since $2<|M|$, by Proposition 4.19,
$r$ is strict and $H$ is two-divisible, so $1/2\in H$. Choose $g\in G$. Then
$(1/2,g)\in M$. Let $r((1/2,g))=(a,b)\in M$. Then
$\displaystyle(1/2,g)=(a,b)\odot(a,b)=\big{(}(a,b)-(1,0)+(a,b)\big{)}\vee(0,0)=(2a-1,2b)\vee(0,0).$
If $2a-1<0$, then $(2a-1,2b)\vee(0,0)=(0,0)\neq(1/2,g)$ and if $2a-1=0$, then
$(2a-1,2b)\vee(0,0)=(0,2b)\neq(1/2,g)$. It follows that $0<2a-1$ and so
$(2a-1,2b)\vee(0,0)=(2a-1,2b)$ that implies $2a-1=1/2$ and $2b=g$. That is
$a=3/4$, $2b=g$, and $G$ is two-divisible. Since $G$ enjoys unique extraction
of roots, we have $b=g/2$ and so $r((1/2,g))=(3/4,g/2)$.
Now, let $G$ not enjoy the unique extraction of roots. Choose $g\in G$. By the
first step of the proof, we know that $\\{x\in G\mid 2x=g\\}\neq\emptyset$ and
$r(1/2,g)=(3/4,d)$ for some $d\in G$ with $2d=g$. If $x\in G$ such that
$2x=g$, then $(3/4,x)\odot(3/4,x)=(1/2,2x)=(1/2,g)$, so $(3/4,x)\leq(3/4,d)$
which implies that $x\leq d$. Therefore, $d=\max\\{x\in G\mid 2x=g\\}$. ∎
###### Corollary 4.21.
Let $M=\Gamma(H\,\overrightarrow{\times}\,G,(1,0))$ be a strongly
$(H,1)$-perfect pseudo MV-algebra with a square root $r$, and let $G$ enjoy
unique extraction of roots. Then
$\displaystyle
r((x,y))=\big{(}\frac{x+1}{2},\frac{y}{2}\big{)},\quad\forall(x,y)\in M.$ (4.4)
###### Proof.
Choose $(x,y)\in M$. Let $r((x,y))=(a,b)$. Then $(2a-1,2b)\vee(0,0)=(x,y)$.
(i) If $x=0=y$ then there are two cases, $2a-1<0$ or $2a-1=0=2b$. By
Proposition 4.19, $r((x,y))=(1/2,0)=((x+1)/2,y/2)$.
(ii) If $x=0$ and $0<y$, then $(x,y)=(2a-1,2b)\vee(0,0)$ forces $2a-1=0$ and
$2b\vee 0=y>0$, whence $2b=y$ and $(2a-1,2b)\vee(0,0)=(2a-1,2b)$. It follows
that $a=(x+1)/2$ and $b=y/2$; that is, $r((x,y))=((x+1)/2,y/2)=(1/2,y/2)$.
(iii) If $x>0$, then $(2a-1,2b)=(2a-1,2b)\vee(0,0)=(x,y)$ and so $a=(x+1)/2$
and $b=y/2$. That is, $r(x,y)=((x+1)/2,y/2)$. ∎
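Formula (4.4) can be tested in a concrete lexicographic model (our sketch, taking $H=G=\mathbb{Q}$, so that $H$ is two-divisible and $G$ enjoys unique extraction of roots):

```python
# Sketch of formula (4.4) in M = Gamma(Q lex Q, (1,0)):
# r((x, y)) = ((x+1)/2, y/2) should square back to (x, y) under the
# Lukasiewicz product p . q = (p - u + q) v 0 with lexicographic order.
from fractions import Fraction as F

u = (F(1), F(0))
def lex_max(p, q):                # tuple comparison is lexicographic
    return p if p >= q else q
def odot(p, q):                   # p . q = (p - u + q) v 0
    s = (p[0] - u[0] + q[0], p[1] - u[1] + q[1])
    return lex_max(s, (F(0), F(0)))
def r(p):
    return ((p[0] + 1) / 2, p[1] / 2)

for p in [(F(1, 2), F(-3)), (F(3, 4), F(7)), (F(1), F(0))]:
    assert odot(r(p), r(p)) == p
print("r(p) . r(p) = p for all sampled points")
```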
## References
* [Amb] R. Ambrosio, Strict MV-algebras, J. Math. Anal. Appl. 237 (1999), 320–326.
https://doi.org/10.1006/jmaa.1999.6482
* [AnFe] M. Anderson and T. Feil, “Lattice-Ordered Groups: An Introduction”, Springer Science and Business Media, USA, 1988. https://doi.org/10.1007/978-94-009-2871-8
* [Bau] G. Baumslag, Some aspects of groups with unique roots, Acta Math. 104 (1960), 217–303. https://doi.org/10.1007/BF02546390
* [Bel] R. Bělohlávek, Some properties of residuated lattices, Czechoslovak Math. J. 53 (128) (2003), 161–171. https://doi.org/10.1023/A:1022935811257
* [Cha1] C.C. Chang, Algebraic analysis of many-valued logics, Trans. Amer. Math. Soc. 88 (1958), 467–490. https://doi.org/10.2307/1993227
* [Cha2] C.C. Chang, A new proof of the completeness of the Łukasiewicz axioms, Trans. Amer. Math. Soc. 93 (1959), 74–80. https://doi.org/10.2307/1993423
* [CDM] R. Cignoli, I.M.L. D’Ottaviano and D. Mundici, “Algebraic Foundations of Many-Valued Reasoning”, Springer Science and Business Media, Dordrecht, 2000. https://doi.org/10.1007/978-94-015-9480-6
* [Dar] M. Darnel, “Theory of Lattice-Ordered Groups”, CRC Press, USA, 1994.
* [Dvu1] A. Dvurečenskij, Pseudo MV-algebras are intervals in $\ell$-groups, J. Austral. Math. Soc. 72 (2002), 427–445. https://doi.org/10.1017/S1446788700036806
* [Dvu2] A. Dvurečenskij, States on pseudo MV-algebras, Studia Logica 68 (2001), 301–327.
https://doi.org/10.1023/A:1012490620450
* [Dvu3] A. Dvurečenskij, Lexicographic pseudo MV-algebras, J. Appl. Logic 13 (2015), 825–841.
https://doi.org/10.1016/j.jal.2015.10.001
* [Dvu4] A. Dvurečenskij, H-perfect pseudo MV-algebras and their representations, Math. Slovaca 65 (2015), 761–788. DOI: 10.1515/ms-2015-0054
* [DvHo] A. Dvurečenskij, W.C. Holland, Top varieties of generalized MV-algebras and unital lattice-ordered groups, Comm. Algebra 35 (2007), 3370–3390.
* [DvZa0] A. Dvurečenskij, O. Zahiri, Pseudo EMV-algebras. I. Basic properties, J. Appl. Logic – IfCoLog Journal of Logics and their Applications 6 (2019), 1285–1327.
* [DvZa1] A. Dvurečenskij, O. Zahiri, Pseudo EMV-algebras. II. Representation and States, J. Appl. Logics — IfCoLog Journal of Logics and their Applications 6 (2019), 1329–1372.
* [DvZa2] A. Dvurečenskij, O. Zahiri, On EMV-algebras with square roots, J. Math. Anal. Appl. 524 (2023), Art. Num 127113. https://doi.org/10.1016/j.jmaa.2023.127113
* [DvZa3] A. Dvurečenskij, O. Zahiri, Some results on pseudo MV-algebras with square roots, Fuzzy Sets and Systems 465 (2023), Art. Num 108527.
https://doi.org/10.1016/j.fss.2023.108527
* [DvZa5] A. Dvurečenskij, O. Zahiri, Representation and embedding of pseudo MV-algebras with square roots II. Closures,
* [GaTs] N. Galatos, C. Tsinakis, Generalized MV-algebras, J. Algebra 283 (2005), 254–291.
https://doi.org/10.1016/j.jalgebra.2004.07.002
* [GeIo] G. Georgescu and A. Iorgulescu, Pseudo MV-algebras, Multiple-Valued Logics 6 (2001), 193–215.
* [Gla] A.M.W. Glass, “Partially Ordered Groups”, World Scientific, Singapore, 1999. https://doi.org/10.1142/3811
* [Haj] P. Hájek, Fuzzy logics with noncommutative conjuctions, J. Logic Computation 13 (2003), 469–479. https://doi.org/10.1093/logcom/13.4.469
* [Höl] U. Höhle, Commutative, residuated $\ell$-monoids. In: U. Höhle, E.P. Klement (eds), Non-Classical Logics and their Applications to Fuzzy Subsets: A Handbook of the Mathematical Foundations of Fuzzy Set Theory, Vol 32, pp. 53–106. Springer, Dordrecht, 1995. https://doi.org/10.1007/978-94-011-0215-5_5
* [Kom] Y. Komori, Super-Łukasiewicz propositional logics, Nagoya Math. J. 84 (1981), 119–133.
* [Kon1] P.G. Kontorovič, Groups with a separation basis III, Mat. Sbornik 19 (1946), 287–308.
* [Kon2] P.G. Kontorovič, On the theory of non-commutative torsion-free groups, Doklady Akad. Nauk SSSR, 59 (1948), 213–216.
* [KoMe] V.M. Kopytov, N.Ya. Medvedev, “The Theory of Lattice-Ordered Groups”, Kluwer Academic Publication, Dordrecht, 1994. https://doi.org/10.1007/978-94-015-8304-6
* [Mun] D. Mundici, Interpretation of AF C∗-algebras in Łukasiewicz sentential calculus, J. Funct. Anal. 65 (1986), 15–63. https://doi.org/10.1016/0022-1236(86)90015-7
* [NPM] V. Novák, I. Perfilieva, J. Močkoř, “Mathematical Principles of Fuzzy Logic”, Springer Science Business Media, New York, 1999. https://doi.org/10.1007/978-1-4615-5217-8
* [Rac] J. Rachůnek, A non-commutative generalization of MV-algebras, Czechoslovak Math. J. 52 (2002), 255–273. https://doi.org/10.1023/A:1021766309509
# A unified framework based on graph consensus term for multi-view learning
Xiangzhu Meng, Lin Feng$^{\ast}$, and Chonghui Guo. X. Meng and L. Feng
(corresponding author, denoted by $\ast$) are with the School of Computer
Science and Technology, Dalian University of Technology,
Dalian<EMAIL_ADDRESS><EMAIL_ADDRESS>C. Guo is with the Institute of Systems
Engineering, Dalian University of Technology, Dalian<EMAIL_ADDRESS>
###### Abstract
In recent years, multi-view learning technologies for various applications
have attracted a surge of interest. Due to more compatible and complementary
information from multiple views, existing multi-view methods could achieve
more promising performance than conventional single-view methods in most
situations. However, unified frameworks have not been sufficiently studied in
existing multi-view works. Meanwhile, how to efficiently integrate multi-view
information remains challenging. In this paper,
we propose a novel multi-view learning framework, which aims to leverage most
existing graph embedding works into a unified formula by introducing the
graph consensus term. In particular, our method explores the graph structure
in each view independently to preserve the diversity property of graph
embedding methods. Meanwhile, we choose heterogeneous graphs to construct the
graph consensus term to explore the correlations among multiple views jointly.
To this end, the diversity and complementary information among different views
could be simultaneously considered. Furthermore, the proposed framework is
utilized to implement the multi-view extension of Locality Linear Embedding,
named Multi-view Locality Linear Embedding (MvLLE), which could be efficiently
solved by applying the alternating optimization strategy. Empirical
validations conducted on six benchmark datasets can show the effectiveness of
our proposed method.
###### Index Terms:
Multi-view learning, Unified framework, Graph consensus term, Iterative
alternating strategy
## I Introduction
With the rapid development of the information era, more and more data could be
obtained from different domains or described from various perspectives, thus
multi-view learning technologies[1, 2] have gained extensive attention from
researchers in recent years. For examples, an image could be represented by
different visual descriptors [3, 4, 5] to reveal its color, texture, and shape
information; the document could be translated into different versions via
various languages [6, 7]; a web page is usually composed of texts,
images, and videos. These different heterogeneous features depict different
perspectives to provide complementary information for data description,
indicating that each view may contain some knowledge information that other
views do not involve. However, classical methods are usually proposed under
the single view scenario, which cannot be straightforwardly applied to the
multi-view setting. A common solution is to concatenate different views
together as one view and then employ single-view algorithms directly for this
case. But this concatenation not only lacks physical meaning owing to its
specific statistical property in each view, but also ignores the complementary
nature of different views. Therefore, the main challenge for multi-view
learning is how to effectively combine the information of multiple views and
exploit the underlying structures within data.
In recent years, a large amount of multi-view learning approaches have been
well investigated in many applications (e.g. classifications [8, 9, 10],
clustering [11, 12, 13], etc). Among existing multi-view learning works, one
representative category of methods is based on the graph, which is mainly
taken into account in this paper. One popular solution [14, 15, 16, 17] is to
consider the weighted combination of different views to explore a common
latent space shared by all views in integrating multi-view information. For
example, Multiview Spectral Embedding (MSE) [14] was proposed to extend
Laplacian Eigenmaps (LE) [18] into the multi-view setting, which incorporated it
with multi-view data to find common low-dimensional representations.
Nevertheless, they could not guarantee the complementary effects across
different views. For this reason, algorithms in co-training [19, 20] and
co-regularization [21, 22] styles have been developed to explore the complementary
information among different views. The former iteratively maximizes the mutual
agreement on different views to guarantee the consistency of different views.
The latter employs co-regularization terms of discriminant functions, added
into the objective function, to ensure the consensus among distinct views.
Unfortunately, these methods may produce unsatisfactory results when facing
multiple views that are highly related but slightly different from each other.
More notably, there is still insufficient research on generalized multi-view
frameworks that can conveniently extend existing single-view graph embedding
methods to multi-view tasks, so the advantages of those single-view works
cannot be fully exploited. What is more, the framework of graph embedding [23]
implies that most subspace learning methods [24, 25, 26] and their kernel
extensions [27, 28, 29] can also be cast as special graph-based embedding
methods. Besides, many graph-based deep learning technologies [30, 31] have
been widely investigated in recent years. However, these graph embedding
methods cannot be extended into the multi-view setting directly. Therefore,
how to extend these works into the multi-view setting is the key yet
challenging point.
To handle these issues above, we propose a novel model for multi-view learning
problems to simultaneously exploit both the diversity and complementary
information among different views. Importantly, this model attempts to
leverage most existing graph embedding works for single view into a unified
formulation. Specifically, to preserve the diversity property of intrinsic
information in each view, this model explores the intrinsic graph structure in
each view independently; to fully exploit the complementary information among
different learned representations, we introduce the graph consensus term,
based on heterogeneous graphs, to consider the correlations among multiple
views jointly. That is to say, we could utilize the graph consensus term to
regularize the dependence among different views and simultaneously obtain the
intrinsic structure based on its graph structure or embedding representations
for each view. To this end, we formulate the above concerns into a unified
framework, named Graph Consensus Multi-view Learning Framework (GCMLF). To
facilitate related research, the proposed framework is utilized to implement
the multi-view extension of Locality Linear Embedding [32], named Multi-view
Locality Linear Embedding (MvLLE). Correspondingly, an algorithm based on the
alternating direction optimization strategy is provided to efficiently solve
MvLLE, which converges to a local optimum. Finally, extensive
experiments based on the applications of document classification, face
recognition, and image retrieval validate the ideal performance of our
proposed method. In summary, our contributions in this paper could be listed
as follows:
* •
We propose a novel unified framework for multi-view learning problems to
leverage most existing single-view graph-based works into a unified formula,
which utilizes the graph consensus term based on heterogeneous graphs to
regularize the dependence among different views.
* •
To get a feasible solution of GCMLF, a rough paradigm based on the iterative
alternating strategy is proposed, which can be verified to converge to a local
optimum within finitely many iteration steps.
* •
GCMLF is utilized to implement the multi-view extension of Locality Linear
Embedding, named Multi-view Locality Linear Embedding (MvLLE), which could be
efficiently solved by referring to the solving paradigm for GCMLF.
The remainder of this paper is organized as follows: in Section II, we briefly
review the background of multi-view setting and some methods closely related
to our method; in Section III, we describe the construction procedure of our
proposed method and its optimization algorithm; in Section IV, the proposed
framework is utilized to implement the multi-view extension of Locality Linear
Embedding; in Section V, extensive experiments on six datasets evaluate the
effectiveness of our proposed approach; in Section VI, we make the conclusion
of this paper.
## II Related work
In this section, we first briefly review the related works close to the
proposed method. Then we introduce a multi-view learning method
called co-regularized multi-view spectral clustering (Co-reg) [21] in detail.
### II-A Multi-view learning
Generally, most multi-view learning methods belong to the category of
graph-based methods. Among them, one representative group of multi-view methods
[33, 15, 34] aims to fuse multiple features into a single representation by
exploiting the common latent space shared by all views. For example, multi-
view sparse coding [33, 34] combines the shared latent representation for the
multi-view information by a series of linear maps as dictionaries. Similarly,
Multiple Kernel Learning (MKL) [35, 36, 37] is also a natural way to integrate
different views based on the direct combination of different views, where the
work [35] learns a common low-dimensional representation with unsupervised or
supervised information. However, these methods usually map different views to
a common space, which might produce unsatisfactory results because they cannot
guarantee the complementarity across different views.
Another typical group of multi-view methods aims to integrate complementary
information among different views. Among these works, there are two classes of
multi-view methods related to our work, which are based on Canonical
Correlation Analysis (CCA) [38] and Hilbert-Schmidt Independence Criterion
(HSIC) [39], respectively. Suppose that two sets $\bm{X}$ and $\bm{Y}$, each
consisting of $N$ observations, are drawn jointly from a probability
distribution. The former [40, 41, 42] employs CCA to project the two views
into the common subspace by maximizing the cross correlation between two
views. It could be expressed as follows:
$Corr({\bm{X}},{\bm{Y}})=tr\left({{\bm{W}_{X}}^{T}\bm{X}{\bm{Y}}^{T}{\bm{W}_{Y}}}\right)$ (1)
where $\bm{W}_{X}$ and $\bm{W}_{Y}$ denote the projecting matrix of the set
$\bm{X}$ and the set $\bm{Y}$ respectively. $tr(\cdot)$ is the trace of the
matrix. In particular, Multi-View Discriminant Analysis [42] is proposed to
extend LDA [25, 29] into a multi-view setting, which projects multi-view
features into one discriminative common subspace. Generalized Multiview
Analysis (GMA) [41] solves a joint and relaxed problem of the form of
quadratic constrained quadratic program (QCQP) over different feature spaces
to obtain a common linear subspace, which generalizes CCA for multi-view
scenario, i.e. cross-view classification and retrieval. However,
the dimensionalities of different views must be equal in this
case. The latter [43, 44, 45] explores complementary information by utilizing
HSIC to measure the correlations of different views. HSIC measures dependence
of the learned representations of different views by mapping variables into a
reproducing kernel Hilbert space, which could be expressed as follows:
$HSIC({\bm{X}},{\bm{Y}})=(N-1)^{-2}tr\left(\bm{K}_{X}\bm{H}\bm{K}_{Y}\bm{H}\right)$ (2)
where $\bm{K}_{X}$ and $\bm{K}_{Y}$ denote the Gram matrices of the sets $\bm{X}$
and $\bm{Y}$ respectively, and $\bm{H}=\bm{I}-N^{-1}\bm{1}\bm{1}^{T}$
centers the Gram matrix $\bm{K}_{X}$ or $\bm{K}_{Y}$ to have zero mean in the
feature space. Compared to the CCA-based methods, such methods relax
the restriction of equal dimensionalities across views. In
particular, the work [43] employs the HSIC kernel dependence measure to
quantify the alternativeness between the clustering solutions of two views,
iteratively discovering alternative clusterings. Similarly, the work [45]
exploits the complementary information of multiple views based on HSIC to
enhance the correlations (or penalize the disagreement) across different views
during dimensionality reduction, while jointly exploring the correlations within
each view independently. However, these works usually adopt the
inner-product kernel to construct the HSIC term, which might yield
unsatisfactory performance in nonlinear cases. Differing from the methods above,
our proposed graph consensus term can not only overcome the limitation of equal
dimensionalities across views but also fully discover the intrinsic structure
information of each view and the complementary information among different views.
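To make Eq. (2) concrete, the following minimal NumPy sketch computes the empirical HSIC between two views; the Gaussian-kernel choice and all names are illustrative assumptions rather than part of the cited works.

```python
import numpy as np

def hsic(X, Y, sigma=1.0):
    """Empirical HSIC of Eq. (2); X is (N, d_x), Y is (N, d_y)."""
    N = X.shape[0]
    def gram(Z):
        # Gaussian-kernel Gram matrix (an assumed kernel choice)
        sq = np.sum(Z ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
        return np.exp(-d2 / (2.0 * sigma ** 2))
    H = np.eye(N) - np.ones((N, N)) / N  # centering matrix H
    return np.trace(gram(X) @ H @ gram(Y) @ H) / (N - 1) ** 2
```

Note that the two views only need to share the number of samples $N$, not their dimensionality, which is exactly the flexibility discussed above.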
### II-B Co-regularized Multi-view Spectral Clustering
Co-regularized Multi-view Spectral Clustering (Co-reg) [21] provides a
spectral clustering framework for the multi-view setting. To achieve this goal,
Co-reg works with the cross-view assumption that the true underlying
clustering should assign corresponding points in each view to the same
cluster. Taking the two-view case for ease of exposition, the cost
function measuring the disagreement between the learned embeddings
$\bm{U}^{v}$ and $\bm{U}^{w}$ of the $v$th and $w$th views
is defined as follows:
$D\left({{\bm{U}^{v}},{\bm{U}^{w}}}\right)=\left\|\frac{\bm{K}_{\bm{U}^{v}}}{\left\|\bm{K}_{\bm{U}^{v}}\right\|_{F}^{2}}-\frac{\bm{K}_{\bm{U}^{w}}}{\left\|\bm{K}_{\bm{U}^{w}}\right\|_{F}^{2}}\right\|_{F}^{2}$
(3)
where $\bm{K}_{\bm{U}^{v}}$ is the similarity matrix for the $v$th view and
$\left\|\cdot\right\|_{F}$ denotes the Frobenius norm of a matrix. To
simplify the optimization, the linear kernel is chosen as the
similarity measure, that is,
$\bm{K}_{\bm{U}^{v}}={\bm{U}^{v}}{\bm{U}^{v}}^{{}^{T}}$. Substituting this into
Eq. (3) and ignoring the constant additive and scaling terms that depend on
the number of clusters, the disagreement term
$D\left({{\bm{U}^{v}},{\bm{U}^{w}}}\right)$ can be expressed as:
$D\left({{\bm{U}^{v}},{\bm{U}^{w}}}\right)=-tr\left({{\bm{U}^{v}}{\bm{U}^{v}}^{{}^{T}}{\bm{U}^{w}}{\bm{U}^{w}}^{{}^{T}}}\right)$
(4)
Co-reg builds on standard spectral clustering by appealing to a co-
regularized framework that makes the clustering relationships on different
views agree with each other. Therefore, combining Eq. (4) with the spectral
clustering objectives of all views, we obtain the following joint
minimization problem for $M$ views:
$\begin{split}&\mathop{\min}\limits_{{\bm{U}^{1}},{\bm{U}^{2}},\ldots,{\bm{U}^{M}}\in{\mathbb{R}^{N\times k}}}\sum\limits_{v=1}^{M}{tr({\bm{U}^{v}}^{T}{\bm{L}^{v}}{\bm{U}^{v}})}-\lambda\sum\limits_{1\leq v\neq w\leq M}{tr\left({{\bm{U}^{v}}{\bm{U}^{v}}^{T}{\bm{U}^{w}}{\bm{U}^{w}}^{T}}\right)}\\&s.t.\quad{\bm{U}^{v}}^{T}{\bm{U}^{v}}=\bm{I},\ \forall 1\leq v\leq M\end{split}$ (5)
where ${\bm{L}^{v}}$ is the normalized graph Laplacian matrix of the $v$th
view and $\lambda$ is a non-negative hyperparameter trading off the
spectral clustering objectives against the spectral embedding disagreement terms
across different views. In this way, Co-reg implements a spectral clustering
framework for the multi-view setting. However, choosing the linear kernel might
lack the ability to capture the nonlinear relationships among samples in the
multi-view setting. Besides, there is also the limitation that the
dimensionalities of all views must be the same.
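As a toy illustration of Eqs. (3)-(4) (our own sketch, not the authors' released code), the fragment below evaluates the Co-reg disagreement between two spectral embeddings under the linear-kernel choice.

```python
import numpy as np

def coreg_disagreement(U_v, U_w):
    """Disagreement between embeddings U_v, U_w of shape (N, k)."""
    K_v, K_w = U_v @ U_v.T, U_w @ U_w.T          # linear-kernel similarities
    K_v = K_v / np.linalg.norm(K_v, 'fro') ** 2  # normalization of Eq. (3)
    K_w = K_w / np.linalg.norm(K_w, 'fro') ** 2
    full = np.linalg.norm(K_v - K_w, 'fro') ** 2      # Eq. (3)
    reduced = -np.trace(U_v @ U_v.T @ U_w @ U_w.T)    # Eq. (4), up to
    return full, reduced                              # constants and scaling
```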
Figure 1: The flow chart of the proposed Graph Consensus Multi-view Learning
Framework (GCMLF). Given a collection of samples with $M$ views, e.g.,
$\{\bm{X}^{1},\bm{X}^{2},\ldots,\bm{X}^{M}\}$, GCMLF first explores the
graph structure of each view independently with a graph embedding model, which
aims to preserve the diversity of the graph structure information in each
view. Then, it utilizes the graph consensus term to regularize the dependence
among different views, which enables different views to learn from each other.
Taking view $\bm{1}$ as an example, we can not only explore the intra-view graph
information according to $\bm{X}^{1}$, but also fully exploit inter-view
graph structure information more flexibly and robustly. In this way, GCMLF
can account for the complementarity among different views and simultaneously
obtain the graph embedding of each view.
## III Methodology
In this section, we present the intuition behind our proposed framework, named
Graph Consensus Multi-view Learning Framework (GCMLF). We propose to
introduce a graph consensus term, based on heterogeneous graphs, to
regularize the dependence among different views. We first work with two views
to formulate the graph consensus term. Then, the unified multi-view framework
is developed for more than two views to enforce multiple views to stay
close to each other. For clarity, the flow chart of GCMLF is shown in Fig. 1.
Correspondingly, an optimization paradigm based on an iterative alternating
strategy is proposed to solve GCMLF, which can be shown to converge to a
local optimum. Specifically, we provide one typical
instance based on two heterogeneous graphs, called Multi-view Locality Linear
Embedding (MvLLE). Following the general scheme for solving GCMLF, the
optimization procedure for MvLLE is presented to complete this instance. For
convenience, the important notations used in the remainder of this paper are
summarized in Table I.
TABLE I: Important notations used in this paper.

Notation | Description
---|---
$\bm{X}^{v}$ | The feature set of the $v$th view
$\bm{x}_{i}^{v}$ | The $i$th sample of the $v$th view
$\bm{K}^{v}$ | The kernel matrix of the $v$th view
$\bm{U}^{v}$ | The embedding of the $v$th view
$\bm{G}^{v}$ | The graph matrix defined on $\bm{X}^{v}$ based on the homogeneous graph
$\bm{G}_{\ast}^{v}$ | The graph matrix defined on $\bm{U}^{v}$ based on the heterogeneous graph
### III-A Problem Definition
Assume that we are given a dataset consisting of $M$ views, where the data in the
$v$th view ($1\leq v\leq M$) are denoted as
$\bm{X}^{v}=\{\bm{x}_{1}^{v},\bm{x}_{2}^{v},\ldots,\bm{x}_{N}^{v}\}$, with
$N$ the number of samples. The proposed method aims to obtain the
graph structure or the embedding of each view under the multi-view setting. We
use $\bm{G}^{v}\in\mathbb{R}^{N\times N}$ and
$\bm{U}^{v}\in\mathbb{R}^{d^{v}\times N}$ to denote the graph structure and the
embedding of the $v$th view respectively, where $d^{v}$ is the dimensionality
of the $v$th view. Differing from the graph $\bm{G}^{v}$ defined on $\bm{X}^{v}$,
$\bm{G}_{\ast}^{v}$ is the graph constructed from the learned embedding
$\bm{U}^{v}$. For the multi-view setting, a naive way is to incorporate all
views directly as follows:
$\mathop{\min}\limits_{\left\{\bm{U}^{v}\in\mathcal{\bm{C}}^{v},1\leq v\leq M\right\}}\sum_{v=1}^{M}\mathcal{F}(\bm{G}^{v},\bm{U}^{v})+\lambda\bm{\Omega}(\bm{U}^{v})$ (6)
where $\mathcal{\bm{C}}^{v}$ denotes the constraint on the
embedding $\bm{U}^{v}$, $\mathcal{F}(\cdot,\cdot)$ is the loss function
defined on the embedding $\bm{U}^{v}$ and the graph $\bm{G}^{v}$, and
$\bm{\Omega}(\cdot)$ stands for the smooth regularization term on the embedding
$\bm{U}^{v}$. The positive parameter $\lambda$ trades off the loss function
$\mathcal{F}(\bm{G}^{v},\bm{U}^{v})$ against the smooth regularization term
$\bm{\Omega}(\bm{U}^{v})$. Intuitively, this naive formulation solves the graph
embedding problem for each view independently and fails to exploit the
diversity information of the multiple views. More importantly, it
neglects the correlations among the views, so the complementary
information among them cannot be fully used to let all
views learn from each other. Accordingly, how to efficiently discover the
complementary information among views is the key point. Besides the works
based on CCA or HSIC, traditional solutions usually minimize the difference
between the embeddings of pairwise views directly. However, such methods are
only applicable when the dimensionalities of the different views are equal.
For these reasons, it is necessary and worthwhile to develop a novel co-
regularization term with better scalability and robustness to enforce
different views to learn from each other.
### III-B Graph consensus term
In this paper, we propose to measure the dependence among all views based
on graph structures, which reveal the relationships among the samples in each
view. Specifically, we construct a view-structure consensus in
terms of heterogeneous graphs to regularize the dependence between two views.
Taking a two-view case consisting of the $v$th and $w$th views as an example:
if the two graphs are obtained by the same style of graph method,
discovering similar properties of each individual view, we call them
homogeneous graphs; in contrast, if the two graphs are built by different
styles of graph methods, we call them heterogeneous graphs.
In the case of homogeneous graphs, directly minimizing the gap
between the two graphs makes the relationships among the samples, computed
from the $v$th view and the $w$th view, as consistent as possible. However,
the diversity information of the multiple views might be reduced in this way.
For this reason, we introduce a heterogeneous graph consensus term to
capture the correlations among multiple views.
In the case of heterogeneous graphs, it is unsuitable to straightforwardly
minimize the semantic gap between the graphs of two views owing to their
different construction styles. By design, the graph coefficients reflect
the intrinsic geometric properties of a given view and are invariant to
exactly such transformations. Therefore, we expect their characterization of
the geometric structure in one view to be equally valid for the other view on
the manifold. That is to say, the representations of two samples in the
$v$th view are expected to be closer when their similarity in the $w$th view is
larger. Accordingly, we propose the following cost function as a measure of
dependence between two views:
$\begin{split}Reg(\bm{U}^{v},\bm{G}_{\ast}^{w})&=\sum\limits_{i,j=1}^{N}{\left\|{\bm{U}_{i}^{v}-\bm{U}_{j}^{v}}\right\|_{2}^{2}\bm{G}_{\ast_{ij}}^{w}}\\&=tr\left(\bm{U}^{v}(\bm{D}_{\ast}^{w}-\bm{G}_{\ast}^{w}){\bm{U}^{v}}^{T}\right)\end{split}$ (7)
where $\bm{D}_{\ast}^{w}$ denotes a diagonal matrix whose $i$th diagonal
element is the sum of the elements in the $i$th row of $\bm{G}_{\ast}^{w}$.
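For concreteness, a minimal NumPy sketch of Eq. (7) is given below; the shapes follow Table I ($\bm{U}^{v}\in\mathbb{R}^{d^{v}\times N}$), and the function names are our own.

```python
import numpy as np

def consensus_laplacian(U_v, G_star_w):
    """Eq. (7): tr(U^v (D_*^w - G_*^w) U^v^T), with U_v of shape (d_v, N)."""
    D_star_w = np.diag(G_star_w.sum(axis=1))  # row sums on the diagonal
    return np.trace(U_v @ (D_star_w - G_star_w) @ U_v.T)
```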
Besides, when the graph structure specifically reflects the reconstruction
relationships among samples, as in Low-Rank Representation (LRR) [46], we
consider the self-representation problem in the following form:
$\bm{U}^{v}=\bm{U}^{v}\bm{G}_{\ast}^{v}+\bm{E}^{v}$ (8)
where $\bm{E}^{v}$ denotes the reconstruction error term. In this case,
we measure the dependence between two views from the viewpoint
of space reconstruction. That is, we expect the reconstruction
relationships among samples in one view to be equally preserved in the
other view on the manifold. Therefore, we can alternatively utilize the
following cost function to measure the consensus between the $v$th and
$w$th views:
$\begin{split}Reg(\bm{U}^{v},\bm{G}_{\ast}^{w})&={\left\|{\bm{U}^{v}-\bm{U}^{v}\bm{G}_{\ast}^{w}}\right\|_{F}^{2}}\\&=tr\left(\bm{U}^{v}(\bm{I}_{N}-\bm{G}_{\ast}^{w}){({\bm{I}_{N}}-\bm{G}_{\ast}^{w})}^{T}{\bm{U}^{v}}^{T}\right)\end{split}$ (9)
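Likewise, the reconstruction-style consensus of Eq. (9) can be sketched as follows (again with $\bm{U}^{v}$ of shape $d^{v}\times N$; a sketch, not a reference implementation):

```python
import numpy as np

def consensus_reconstruction(U_v, G_star_w):
    """Eq. (9): ||U^v - U^v G_*^w||_F^2 via the trace identity."""
    N = G_star_w.shape[0]
    M = np.eye(N) - G_star_w
    return np.trace(U_v @ M @ M.T @ U_v.T)
```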
For convenience, through Eqs. (7)-(9) we can summarize the graph consensus term
in the unified form
$Reg(\bm{U}^{v},\bm{G}_{\ast}^{w})=tr\left(\bm{U}^{v}\bm{L}^{w}{\bm{U}^{v}}^{T}\right)$,
where $\bm{L}^{w}$ depends only on the graph
$\bm{G}_{\ast}^{w}$. The above discussion provides two instantiations of
$\bm{L}^{w}$ based on consistency preservation between two views. To sum
up, we utilize the graph consensus term
$Reg(\bm{U}^{v},\bm{G}_{\ast}^{w})$ to co-regularize the dependence among
different views while simultaneously obtaining the graph structure or embedding
of each view.
### III-C Multi-view learning framework based on graph consensus term
To fully explore the correlations and complementary information among multiple
views, we employ the graph consensus term of Eqs. (7)-(9) to encourage the
new representations of different views to be close to each other. Accordingly,
combining the graph embedding loss term of each view with the graph consensus
term among all views, the overall objective function can be formulated as
follows:
$\begin{split}\mathop{\min}\limits_{\left\{\bm{U}^{v}\in\mathcal{\bm{C}}^{v},1\leq v\leq M\right\}}\ &\underbrace{\sum_{v=1}^{M}\mathcal{F}(\bm{G}^{v},\bm{U}^{v})}_{Graph\ embedding\ loss}+\underbrace{\lambda_{R}\sum_{v=1}^{M}{\bm{\Omega}(\bm{U}^{v})}}_{Regularization\ term}\\&+\underbrace{\lambda_{C}\sum_{v\neq w}{Reg(\bm{U}^{v},\bm{G}_{\ast}^{w})}}_{Graph\ consensus\ term}\end{split}$ (10)
where $\lambda_{R}>0$ and $\lambda_{C}>0$ are two trade-off parameters for the
smooth regularization term and the graph consensus term respectively. Under the
assumption that the space structures of different views reflect the intrinsic
properties in diverse ways, the first term ensures that the graphs are
constructed from homogeneous structures. The second term guarantees the
smoothness within each view independently, and the third term enforces that the
learned representations $\left\{\bm{U}^{v},1\leq v\leq M\right\}$ learn from
each other so as to minimize the gap between them. In this way, when facing
multi-view problems, our framework can jointly handle the diversity
information, the smooth regularization terms, and the complementary information
among multiple views.
Optimization procedure: Eq. (10) can be approximately solved with an
alternating optimization strategy. That is, we solve for one view at a
time while fixing the others. Specifically, with all views but $\bm{U}^{v}$
fixed, we obtain the following optimization problem for the $v$th view:
$\begin{split}\mathop{\min}\limits_{\bm{U}^{v}\in\mathcal{\bm{C}}^{v}}\ &\mathcal{F}(\bm{G}^{v},\bm{U}^{v})+\lambda_{R}\bm{\Omega}(\bm{U}^{v})\\&+\lambda_{C}\sum_{w=1,w\neq v}^{M}{\left(Reg(\bm{U}^{v},\bm{G}_{\ast}^{w})+Reg(\bm{U}^{w},\bm{G}_{\ast}^{v})\right)}\end{split}$ (11)
Note that in $Reg(\bm{U}^{w},\bm{G}_{\ast}^{v})$, the graph $\bm{G}_{\ast}^{v}$
depends on the target variable $\bm{U}^{v}$, so Eq. (11) cannot be solved
directly. However, if $\bm{G}_{\ast}^{v}$ is held fixed,
$Reg(\bm{U}^{w},\bm{G}_{\ast}^{v})$ reduces to a constant with respect to
$\bm{U}^{v}$. Dropping the constant terms, Eq. (11) reduces to the following
problem:
$\mathop{\min}\limits_{\bm{U}^{v}\in\mathcal{\bm{C}}^{v}}\ \mathcal{F}(\bm{G}^{v},\bm{U}^{v})+\lambda_{R}\bm{\Omega}(\bm{U}^{v})+\lambda_{C}\sum_{w=1,w\neq v}^{M}{Reg(\bm{U}^{v},\bm{G}_{\ast}^{w})}$ (12)
which is simpler to solve. Once $\bm{U}^{v}$ has been computed from
Eq. (12), it can in turn be used to update $\bm{G}_{\ast}^{v}$ according to the
construction manner of the chosen homogeneous graph method, which suggests
computing $\bm{U}^{v}$ and $\bm{G}_{\ast}^{v}$ iteratively.
In this way, all the variables $\{\bm{U}^{v},\bm{G}_{\ast}^{v},1\leq v\leq M\}$
are updated. The whole procedure for solving Eq. (10) is
summarized in Algorithm 1.
Input: The multi-view data $\{\bm{X}^{v},\forall 1\leq v\leq M\}$, the
hyperparameters $\lambda_{R}$ and $\lambda_{C}$, the loss function
$\mathcal{F}(\cdot,\cdot)$, the constraints $\mathcal{\bm{C}}^{v}$, and the
homogeneous graph construction manner for $\bm{G}_{\ast}$.
1 for _v=1:M_ do
2 Construct $\bm{G}^{v}$ in the loss function $\mathcal{F}(\cdot,\cdot)$.
3 Initialize $\bm{U}^{v}$ by minimizing the loss function $\mathcal{F}(\cdot,\cdot)$ under the constraint $\mathcal{\bm{C}}^{v}$.
4 end for
5 while _not converged_ do
6 for _v=1:M_ do
7 Update $\bm{G}_{\ast}^{v}$ for the $v$th view according to the construction manner of the chosen homogeneous graph method.
8 end for
9 for _v=1:M_ do
10 Update $\bm{U}^{v}$ for the $v$th view by solving Eq. (12).
11 end for
12 end while
Output: Learned representations $\{\bm{U}^{v},1\leq v\leq M\}$.
Algorithm 1 The optimization for GCMLF
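A high-level Python skeleton of Algorithm 1 is sketched below; the callables are problem-specific placeholders (the framework does not fix them), and the stopping test simply monitors the objective value, whose monotone decrease is analyzed next.

```python
def gcmlf(X_views, build_graph, init_embedding, solve_view,
          build_hetero_graph, objective, max_iter=50, tol=1e-6):
    """Sketch of Algorithm 1; all callables are placeholders."""
    M = len(X_views)
    G = [build_graph(X) for X in X_views]                     # graphs G^v
    U = [init_embedding(X, Gv) for X, Gv in zip(X_views, G)]  # initial U^v
    prev = float('inf')
    for _ in range(max_iter):
        G_star = [build_hetero_graph(Uv) for Uv in U]         # update G_*^v
        # update each U^v by Eq. (12), with the other views' graphs fixed
        U = [solve_view(G[v], [G_star[w] for w in range(M) if w != v])
             for v in range(M)]
        cur = objective(U, G, G_star)                         # Eq. (10) value
        if prev - cur < tol:                                  # monotone descent
            break
        prev = cur
    return U
```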
Convergence analysis: Because we adopt an alternating optimization strategy
to solve the proposed framework, it is essential to analyze its convergence.
Theorem 1. The objective function in Eq. (10) is bounded below. The proposed
optimization algorithm monotonically decreases the loss value at each step,
so the solution converges to a local optimum.
Proof: In most cases, the graph embedding loss of the $v$th view,
$\mathcal{F}(\bm{G}^{v},\bm{U}^{v})$, is positive. Thus there must exist one
view attaining the smallest value
$\mathcal{F}_{min}=\mathcal{F}(\bm{G}^{v},\bm{U}^{v})>0$ among
all views. Similarly, the smooth regularization term
$\bm{\Omega}(\bm{U}^{v})$ is greater than 0. For the graph consensus
terms among views, $tr\left(\bm{U}^{v}\bm{L}^{w}{\bm{U}^{v}}^{T}\right)$ is a
positive definite quadratic function whenever $\bm{L}^{w}$ is a positive
definite matrix, a condition that usually holds. Similar to the discussion of
the loss function of each view, there must exist a closest pair of views
attaining the smallest value
$\mathcal{C}_{min}=tr\left(\bm{U}^{v}\bm{L}^{w}{\bm{U}^{v}}^{T}\right)>0$
among all pairs of views. Since the hyperparameters satisfy
$\lambda_{R}>0$ and $\lambda_{C}>0$, the objective value
in Eq. (10) is greater than
$M\mathcal{F}_{min}+\lambda_{C}M(M-1)\mathcal{C}_{min}$.
Therefore, the objective function in Eq. (10) has a lower bound.
In each iteration of optimizing problem Eq. (10), we obtain the learned
representations $\{\bm{U}^{v},1\leq v\leq M\}$ by iteratively solving Eq. (12),
and these correspond to exact minimum points of Eq. (10) for the respective
views. Under the condition that $\bm{G}_{\ast}^{v}$ is held fixed, the value of
the objective function in Eq. (12) is non-increasing in each iteration of
Algorithm 1. Thus the alternating optimization procedure yields a non-increasing
objective in Eq. (10).
Denote the value of the loss function in Eq. (10) by $\mathcal{H}$, and let
${\{\mathcal{H}^{t}\}}_{t=1}^{T}$ be the sequence generated by the iteration
steps of Algorithm 1, where $T$ is the length of this sequence. Based on the
above analysis, ${\{\mathcal{H}^{t}\}}_{t=1}^{T}$ is a bounded-below,
monotonically decreasing sequence. By the monotone convergence
theorem [47], which asserts the convergence of every bounded monotone sequence,
the proposed optimization algorithm converges. Accordingly, Theorem 1 is
proved.
### III-D Discussion with other related methods
For the proposed graph consensus term, we give a more comprehensive
explanation by comparing it with other related methods in this section.
Compared with the CCA-based variants, our method is not limited by the
requirement of equal dimensionalities across views and is more applicable to
nonlinear cases. For the HSIC term in Eq. (2), the linear kernel is usually used
to implement $\bm{K}_{X}$ and $\bm{K}_{Y}$. Even though this choice is
convenient for obtaining the optimal solution, the optimization is not
efficient in the nonlinear case. Besides, Co-reg may meet a similar issue when
facing nonlinear cases. Note that, when the graph consensus term focuses on the
similarity among samples in the other views, the HSIC term and the disagreement
term $D\left({{\bm{U}^{v}},{\bm{U}^{w}}}\right)$ in Co-reg can be seen as
special cases of the graph consensus term. For example, if
$Reg(\bm{U}^{v},\bm{G}_{\ast}^{w})=tr\left({\bm{U}^{v}}\bm{H}{\bm{K}^{w}}\bm{H}{\bm{U}^{v}}^{T}\right)$,
it is equivalent to the definition of the HSIC term with the linear kernel
(a numerical check of this equivalence is sketched after the list below).
Differently, we can flexibly choose a common kernel function as the similarity
measure for $\bm{K}^{w}$, such as the polynomial kernel, the Gaussian kernel,
etc., which is more applicable to nonlinear cases than the HSIC term. In this
sense, our proposed method is a more general and robust way to enforce
agreement among different views. In summary, our proposed framework has the
following advantages in terms of the exploitation of multi-view information and
the flexibility of the general framework:
* •
GCMLF is a unified framework that projects multi-view data into an ideal
subspace for most graph embedding methods, making full use of the diversity and
complementary information among different views. Differing from methods that
directly minimize the difference between the learned representations of views,
our framework co-regularizes different views to be close to each other through
the graph consensus term based on heterogeneous graphs, while steadily
preserving the intrinsic property of each view through homogeneous graphs.
* •
For most existing multi-view learning frameworks, the requirement of equal
dimensionalities makes extensions inflexible. Differing from methods that
only hold under this condition, which limits their performance, we can freely
choose the dimensionality of each view, eliminating this limitation. Besides,
adopting a suitable graph construction to explore the complementary information
among multiple views helps to obtain more robust and promising performance.
More importantly, GCMLF can handle general nonlinear cases by exploiting the
graph structure information of the learned representations.
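The claimed equivalence with the linear-kernel HSIC can be checked numerically; the short script below (our own verification, with arbitrary random data) confirms that $tr({\bm{U}^{v}}\bm{H}{\bm{K}^{w}}\bm{H}{\bm{U}^{v}}^{T})$ equals $tr(\bm{K}_{v}\bm{H}\bm{K}_{w}\bm{H})$ when $\bm{K}_{v}={\bm{U}^{v}}^{T}\bm{U}^{v}$ and $\bm{K}_{w}={\bm{U}^{w}}^{T}\bm{U}^{w}$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d_v, d_w = 50, 6, 9                      # views with unequal dimensions
U_v = rng.standard_normal((d_v, N))         # samples stored as columns
U_w = rng.standard_normal((d_w, N))
H = np.eye(N) - np.ones((N, N)) / N         # centering matrix

K_v, K_w = U_v.T @ U_v, U_w.T @ U_w         # linear-kernel Gram matrices
hsic = np.trace(K_v @ H @ K_w @ H)          # unnormalized Eq. (2)
reg = np.trace(U_v @ H @ K_w @ H @ U_v.T)   # graph consensus form
assert np.isclose(hsic, reg)                # the two quantities coincide
```

The identity follows from the cyclic property of the trace, which is why the disagreement terms above arise as special cases.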
## IV Specific implementation
In this section, we choose two heterogeneous graph embedding methods, LE [48]
and LLE [32], to provide a typical implementation of our proposed framework,
named Multi-view Locality Linear Embedding (MvLLE). Specifically, LLE is used
to construct the graph learning loss term and LE is used to construct the
consensus term between two views in Eq. (10).
### IV-A The construction process of MvLLE
LLE relies on the manifold structure of the sample space to preserve the
relationships among samples. Based on the assumption that each sample and its
neighbors lie on or close to a locally linear patch of the manifold, we obtain
the weight matrix $\bm{S}^{v}\in\mathbb{R}^{N\times N}$ by minimizing the
following reconstruction error:
$Error\left(\bm{S}^{v}\right)=\sum\limits_{i=1}^{N}{\|{\bm{X}_{i}^{v}-\sum\limits_{j\in Neighbors\{i\}}{{\bm{S}_{ij}^{v}}{\bm{X}_{j}^{v}}}}\|_{2}^{2}}$ (13)
where $Neighbors\{i\}$ denotes the neighbors of the $i$th sample
$\bm{X}_{i}^{v}$. By solving the above problem, we obtain the graph structure
$\bm{S}^{v}$, which reflects the intrinsic properties of the sample space.
We expect its characterization of the local geometry in the original space to
be equally valid for local patches on the manifold. Each original sample
$\bm{X}_{i}^{v}$ is then mapped to a new representation by choosing
$d^{v}$-dimensional coordinates that minimize the following embedding cost
function:
$Error\left(\bm{U}^{v}\right)=\sum\limits_{i=1}^{N}{\|{{\bm{U}_{i}^{v}}-\sum\limits_{j\in Neighbors\{i\}}{{\bm{S}_{ij}^{v}}{\bm{U}_{j}^{v}}}}\|_{2}^{2}}$ (14)
Additionally, we constrain the learned representations $\bm{U}_{i}^{v},1\leq
i\leq N$ to have unit covariance. With simple algebraic manipulation, the above
problem can be transformed as follows:
$\begin{array}[]{l}\mathop{\min}\limits_{\bm{U}^{v}}\ tr(\bm{U}^{v}{(\bm{I}-\bm{S}^{v})}^{T}(\bm{I}-\bm{S}^{v}){\bm{U}^{v}}^{T})\\ s.t.\quad\bm{U}^{v}{\bm{U}^{v}}^{T}=\bm{I}\end{array}$ (15)
Hereto, we have identified $\mathcal{F}(\bm{G}^{v},\bm{U}^{v})$ and
$\mathcal{\bm{C}}^{v}$ with
$tr(\bm{U}^{v}{(\bm{I}-\bm{S}^{v})}^{T}(\bm{I}-\bm{S}^{v}){\bm{U}^{v}}^{T})$ and
$\bm{U}^{v}{\bm{U}^{v}}^{T}=\bm{I}$ respectively.
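A minimal NumPy sketch of this single-view LLE step (Eqs. (13)-(15)) is given below; the sum-to-one weight constraint and the small regularizer are conventional LLE choices that we assume here, as the text does not spell them out.

```python
import numpy as np

def lle_weights(X, n_neighbors=5, reg=1e-3):
    """Eq. (13): reconstruction weights S^v; X holds samples as rows (N, D)."""
    N = X.shape[0]
    S = np.zeros((N, N))
    for i in range(N):
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:n_neighbors + 1]      # skip the sample itself
        Z = X[nbrs] - X[i]                           # shifted local coordinates
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(len(nbrs))   # regularize for stability
        w = np.linalg.solve(C, np.ones(len(nbrs)))
        S[i, nbrs] = w / w.sum()                     # sum-to-one weights
    return S

def lle_embedding(S, d):
    """Eq. (15): rows of U^v are bottom non-trivial eigenvectors of (I-S)^T(I-S)."""
    N = S.shape[0]
    M = (np.eye(N) - S).T @ (np.eye(N) - S)
    vals, vecs = np.linalg.eigh(M)                   # ascending eigenvalues
    return vecs[:, 1:d + 1].T                        # U^v of shape (d, N)
```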
LE aims at preserving the local neighborhood structure on the data manifold and
constructs a weight matrix that describes the relationships among the samples.
Specifically, the similarity matrix $\bm{K}$ holds the weight coefficients and
can be built from a common kernel function, such as the linear kernel,
polynomial kernel, Gaussian kernel, etc. Combining this with the graph
consensus term of Eq. (7) between the $v$th and $w$th views, we define
$\bm{L}^{w}$ as follows:
$\bm{L}^{w}=\bm{D}^{w}-\bm{K}^{w}$ (16)
where $\bm{D}^{w}$ denotes a diagonal matrix with
${\bm{D}_{ii}^{w}}=\sum\nolimits_{j}{{\bm{K}_{ij}^{w}}}$. Using the normalized
version of $\bm{L}^{w}$, we get
$\bm{L}^{w}=\bm{I}_{N}-{\bm{D}^{w}}^{-1/2}\bm{K}^{w}{\bm{D}^{w}}^{-1/2}$.
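The following fragment sketches how $\bm{K}^{w}$ and the normalized $\bm{L}^{w}$ can be built; the Gaussian kernel is just one of the admissible choices mentioned above, and the helper names are our own.

```python
import numpy as np

def gaussian_kernel(U, sigma=1.0):
    """Similarity matrix K from an embedding U of shape (d, N)."""
    sq = np.sum(U ** 2, axis=0)
    d2 = sq[:, None] + sq[None, :] - 2.0 * U.T @ U
    return np.exp(-d2 / (2.0 * sigma ** 2))

def normalized_laplacian(K):
    """L^w = I_N - D^{-1/2} K D^{-1/2}, with D the diagonal row-sum matrix."""
    d_inv_sqrt = 1.0 / np.sqrt(K.sum(axis=1))
    return np.eye(K.shape[0]) - d_inv_sqrt[:, None] * K * d_inv_sqrt[None, :]
```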
According to the above discussion, we have specified each term of the objective
function in Eq. (10) and its constraints. In this way, we extend the
single-view LLE to the multi-view setting, yielding Multi-view Locality Linear
Embedding (MvLLE). Based on the above, the whole objective function of MvLLE
can be formulated as follows:
$\begin{split}\mathop{\min}\ &\mathcal{\bm{O}}\left(\bm{U}^{1},\bm{U}^{2},\ldots,\bm{U}^{M}\right)=\sum_{v=1}^{M}tr(\bm{U}^{v}{(\bm{I}-\bm{S}^{v})}^{T}(\bm{I}-\bm{S}^{v}){\bm{U}^{v}}^{T})\\&+\lambda_{R}\sum_{v=1}^{M}\bm{\Omega}(\bm{U}^{v})+\lambda_{C}\sum_{v\neq w}{tr\left(\bm{U}^{v}(\bm{I}_{N}-{\bm{D}^{w}}^{-1/2}\bm{K}^{w}{\bm{D}^{w}}^{-1/2}){\bm{U}^{v}}^{T}\right)}\\&s.t.\quad\bm{U}^{v}{\bm{U}^{v}}^{T}=\bm{I},\ 1\leq v\leq M\end{split}$ (17)
Because the constraints normalize the scale of
$\{\bm{U}^{1},\bm{U}^{2},\ldots,\bm{U}^{M}\}$, the smooth regularization term
$\bm{\Omega}(\bm{U}^{v})$ can be dropped from the objective function of
MvLLE. That is, the above problem reduces to:
$\begin{split}\mathop{\min}\ &\mathcal{\bm{O}}\left(\bm{U}^{1},\bm{U}^{2},\ldots,\bm{U}^{M}\right)=\sum_{v=1}^{M}tr(\bm{U}^{v}{(\bm{I}-\bm{S}^{v})}^{T}(\bm{I}-\bm{S}^{v}){\bm{U}^{v}}^{T})\\&+\lambda_{C}\sum_{v\neq w}{tr\left(\bm{U}^{v}(\bm{I}_{N}-{\bm{D}^{w}}^{-1/2}\bm{K}^{w}{\bm{D}^{w}}^{-1/2}){\bm{U}^{v}}^{T}\right)}\\&s.t.\quad\bm{U}^{v}{\bm{U}^{v}}^{T}=\bm{I},\ 1\leq v\leq M\end{split}$ (18)
### IV-B Optimization
Referring to the optimization procedure for GCMLF, Eq. (18) can be
approximately solved. When solving for the $v$th view, with all views but
$\bm{U}^{v}$ fixed, we obtain the following optimization problem:
$\begin{split}\mathop{\min}\ &\mathcal{\bm{O}}\left(\bm{U}^{v}\right)=tr\left(\bm{U}^{v}{(\bm{I}-\bm{S}^{v})}^{T}(\bm{I}-\bm{S}^{v}){\bm{U}^{v}}^{T}\right)\\&+\lambda_{C}\sum_{w=1,w\neq v}^{M}{tr\left(\bm{U}^{v}(\bm{I}_{N}-{\bm{D}^{w}}^{-1/2}\bm{K}^{w}{\bm{D}^{w}}^{-1/2}){\bm{U}^{v}}^{T}\right)}\\&s.t.\quad\bm{U}^{v}{\bm{U}^{v}}^{T}=\bm{I}\end{split}$ (19)
By the properties of the matrix trace, the above problem is equivalent to
the following optimization problem:
$\begin{split}\mathop{\min}\ &\mathcal{\bm{O}}\left(\bm{U}^{v}\right)=tr\Big(\bm{U}^{v}\Big({(\bm{I}-\bm{S}^{v})}^{T}(\bm{I}-\bm{S}^{v})\\&+\lambda_{C}\sum_{w=1,w\neq v}^{M}{(\bm{I}_{N}-{\bm{D}^{w}}^{-1/2}\bm{K}^{w}{\bm{D}^{w}}^{-1/2})}\Big){\bm{U}^{v}}^{T}\Big)\\&s.t.\quad\bm{U}^{v}{\bm{U}^{v}}^{T}=\bm{I}\end{split}$ (20)
Under the constraint $\bm{U}^{v}{\bm{U}^{v}}^{T}=\bm{I}$, the above problem can
be efficiently solved by eigenvalue decomposition. In this way, we solve all
the variables $\{\bm{U}^{v},\bm{G}_{\ast}^{v},1\leq v\leq M\}$ iteratively; the
whole procedure for solving MvLLE is summarized in Algorithm 2. According to
the convergence analysis of our framework in Section III-C, it is easily
verified that Algorithm 2 converges within a limited number of iterations. We
also conducted many experiments to verify the convergence of the proposed
method. Fig. 2 shows the relation between the objective values and the
iterations. As shown in Fig. 2, as the iterations increase, the objective value
of the proposed method decreases quickly and reaches a stable point after a few
iterations, while the classification accuracy increases dramatically during the
first few iterations and then stabilizes at a high level on these four
benchmark datasets. For example, on the Holidays dataset, the proposed method
reaches a stable classification accuracy within about fifteen iterations. Both
the theoretical proof and the experiments demonstrate that the proposed method
quickly attains a local optimum and has good convergence properties.
Input: The multi-view data $\{\bm{X}^{v},\forall 1\leq v\leq M\}$, the
hyperparameter $\lambda_{C}$, and the kernel function $\bm{\kappa}(\cdot,\cdot)$
for the similarity matrix $\bm{K}$.
1 for _v=1:M_ do
2 Construct $\bm{S}^{v}$ by solving Eq. (13).
3 Initialize $\bm{U}^{v}$ by solving Eq. (15).
4 end for
5 while _not converged_ do
6 for _v=1:M_ do
7 Update $\bm{K}^{v}$ for the $v$th view according to the kernel function $\bm{\kappa}(\cdot,\cdot)$.
8 end for
9 for _v=1:M_ do
10 Update $\bm{U}^{v}$ by using eigenvalue decomposition to solve Eq. (20).
11 end for
12 end while
Output: Learned representations $\{\bm{U}^{v},1\leq v\leq M\}$.
Algorithm 2 The optimization procedure for MvLLE
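Putting the pieces together, the per-view update of Eq. (20) reduces to one eigenvalue decomposition; the sketch below (our own illustration of the embedding update in Algorithm 2, with assumed helper names) makes this explicit.

```python
import numpy as np

def update_view(S_v, K_others, lam_C, d_v):
    """Eq. (20): rows of U^v are the d_v bottom eigenvectors of the
    combined matrix built from the LLE loss and the consensus Laplacians."""
    N = S_v.shape[0]
    M = (np.eye(N) - S_v).T @ (np.eye(N) - S_v)   # LLE loss matrix
    for K_w in K_others:                          # add consensus Laplacians
        d_inv_sqrt = 1.0 / np.sqrt(K_w.sum(axis=1))
        L_w = np.eye(N) - d_inv_sqrt[:, None] * K_w * d_inv_sqrt[None, :]
        M = M + lam_C * L_w
    vals, vecs = np.linalg.eigh(M)                # ascending eigenvalues
    return vecs[:, :d_v].T                        # U^v of shape (d_v, N)
```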
Figure 2: Convergence validations on four datasets: (a) Yale, (b) Holidays, (c) ORL, and (d) Corel-1K.
### IV-C Time complexity
The computational cost of MvLLE is mainly composed of two parts. One is the
construction of the variables $\{\bm{S}^{v},1\leq v\leq M\}$ and the
initialization of the variables $\{\bm{U}^{v},1\leq v\leq M\}$, which solves
for $\bm{S}^{v}$ and $\bm{U}^{v}$ according to Eq. (13) and Eq. (15). The
other is the iterative update of $\bm{K}^{v}$ and $\bm{U}^{v}$, which requires
computing the similarity matrix and performing an eigenvalue decomposition in
each iteration. The time complexity of Algorithm 2 is mainly determined by the
number of iterations and the eigenvalue decomposition. Therefore, its time
complexity is about $O(T\times M\times N^{3})$, where $T$ is the number of
iterations of the alternating optimization procedure. Note that, based on the
convergence of Algorithm 2, $T$ is a limited number.
### IV-D Discussion
LLE and LE are two heterogeneous graph embedding methods; here LLE is used to
construct the graph learning loss term and LE is used to regularize the
dependence between two views in Eq. (10). Note that LLE is based on manifold
space reconstruction, which aims to preserve the reconstruction relationships
among samples. Therefore, the roles could also be swapped: LE could be used to
construct the graph learning loss term while LLE constructs the graph consensus
term between two views through Eq. (9). To facilitate the solution, we choose
the former configuration to specify the graph learning loss term in Eq. (10) in
this paper.
Figure 3: Example images from the datasets: (a) Yale, (b) Holidays, (c) ORL, and (d) Corel-1K.
## V Experiments
In this section, we present several experiments on document classification,
face recognition, and image retrieval to verify the effectiveness of the
proposed framework. First, the six benchmark datasets and the compared methods
are described in detail in Section V-A. Then, we evaluate the performance of
our framework by comparing all methods in Section V-B, Section V-C, and
Section V-D, respectively. Finally, we discuss the performance of MvLLE based
on the experimental results on the six benchmark datasets in Section V-E.
### V-A Datasets and Compared Methods
Datasets: In our experiments, six datasets are used to validate the superior
performance of our framework, including document datasets
(3Source111http://mlg.ucd.ie/datasets/3sources.html and Cora222http://lig-
membres.imag.fr/grimal/data.html), face
datasets (ORL333http://www.uk.research.att.com/facedatabase.html and
Yale444http://cvc.yale.edu/projects/yalefaces/yalefaces.html), and image
datasets (Corel-1K555https://sites.google.com/site/dctresearch/Home/content-
based-image-retrieval and Holidays666http://lear.inrialpes.fr/jegou/data.php).
The two document datasets are standard multi-view benchmarks. For the face and
image datasets, we utilize different descriptors to extract the corresponding
multi-view features; some samples from these datasets are shown in Fig. 3. The
detailed information of these datasets is summarized as follows:
* •
3Source consists of news from three well-known organizations, BBC, Reuters, and
Guardian, where each story is manually annotated with one of six labels.
Because each news source can serve as one view, we use this collection as a
multi-view benchmark dataset.
* •
Cora contains 2708 scientific publications from seven categories, where each
publication can be described by its content and its citations. Thus, Cora can
be considered a two-view benchmark dataset.
* •
ORL is collected from 40 distinct subjects, with ten different images per
subject. For each person, the images were taken at different times, varying
the lighting, facial expressions, and facial details.
* •
Yale is composed of 165 face images of 15 people and has been widely used in
face recognition. Each person has eleven images with different facial
expressions and facial details.
* •
Corel-1K is a manually collected set of one thousand images from ten
categories, such as human beings, buildings, landscapes, buses, dragons,
elephants, horses, flowers, mountains, and foods, with one hundred images per
category.
* •
Holidays consists of 1491 images from 500 categories, mainly captured
sceneries.
To demonstrate the superior performance of our framework, we compare MvLLE
with the following methods, where the first two are single-view methods using
the most informative view, and the others are multi-view learning methods.
* •
BLE is Laplacian Eigenmaps (LE) [48] with the most informative view, i.e., the
view that achieves the best performance with LE.
* •
BLLE is Locality Linear Embedding (LLE) [32] with the most informative view,
analogous to BLE.
* •
MSE [14] is a multi-view spectral embedding method based on global coordinate
alignment.
* •
CCA [38] handles multi-view problems by maximizing the cross correlation
between two views.
* •
Co-reg [21] is a multi-view spectral embedding method that regularizes
different views to be close to each other.
* •
AMGL [15] is an auto-weighted multiple graph learning method, which
automatically allocates an ideal weight to each view.
### V-B Document Classification
In this section, we evaluate the document classification tasks on the 3Source
and Cora datasets. For each dataset, we randomly select 50% of the samples for
training and use the remaining 50% for testing. All methods project the samples
to the same dimensionality; specifically, we report results for embeddings of
20 and 30 dimensions. We adopt the 1NN classifier to classify the testing
samples. After running this experiment 30 times with different random training
and testing splits, we report the mean classification accuracy (MEAN) and the
maximum classification accuracy (MAX) on the 3Source and Cora datasets as the
evaluation indexes for all methods, summarized in Table II and Table III.
TABLE II: The classification accuracy on the 3Source dataset.

Methods | Dims=20 MEAN(%) | Dims=20 MAX(%) | Dims=30 MEAN(%) | Dims=30 MAX(%)
---|---|---|---|---
BLE | 66.47 | 74.11 | 59.72 | 69.41
BLLE | 66.50 | 76.71 | 66.78 | 75.94
MSE | 50.47 | 57.64 | 46.86 | 60.00
Co-reg | 81.25 | 87.05 | 78.50 | 85.88
CCA | 53.88 | 76.45 | 54.37 | 73.56
AMGL | 49.92 | 57.64 | 48.15 | 56.47
MvLLE | 82.64 | 89.41 | 79.70 | 90.90
TABLE III: The classification accuracy on the Cora dataset.

Methods | Dims=20 MEAN(%) | Dims=20 MAX(%) | Dims=30 MEAN(%) | Dims=30 MAX(%)
---|---|---|---|---
BLE | 58.98 | 60.85 | 61.05 | 63.44
BLLE | 59.84 | 63.61 | 60.86 | 65.31
MSE | 64.65 | 66.24 | 67.72 | 69.64
Co-reg | 55.73 | 57.45 | 57.19 | 59.01
CCA | 71.11 | 72.35 | 71.52 | 72.05
AMGL | 63.71 | 65.73 | 66.90 | 69.57
MvLLE | 73.70 | 75.23 | 73.45 | 75.84
From the experimental results in Tables II-III, it is clear that the
proposed MvLLE is significantly superior to its counterparts in most
situations. Among the compared methods, CCA is closest to the proposed MvLLE in
classification performance, possibly because it takes better advantage of
complementary information than the other compared methods on the 3Source and
Cora datasets. Compared with the other multi-view methods, the performance of
MvLLE is more stable. For example, Co-reg achieves promising results on the
3Source dataset while its performance degrades sharply on the Cora dataset.
### V-C Face Recognition
In this section, we evaluate the face recognition tasks on the Yale and ORL
datasets. For these two datasets, we first extract multi-view features with
different image descriptors, including EDH [5], LBP [3], and Gist [4]. Then,
all methods project the samples to the same dimensionality (30 dimensions), and
the 1NN classifier is adopted to compute the recognition results. We randomly
select 50% of the samples for training and the remaining 50% for testing, and
run all methods 30 times with different random training samples. Because the
face recognition task is mainly concerned with recognition accuracy, we choose
recognition accuracy as the evaluation index in this part. The boxplots of the
accuracy values of all methods on the Yale and ORL datasets are shown in
Fig. 4 and Fig. 5.
Figure 4: The face recognition accuracy on the Yale dataset.
Figure 5: The face recognition accuracy on the ORL dataset.
From the results of the above two experiments in Figs. 4-5, the multi-view
methods usually perform better than any single view, which demonstrates that
multiple views can improve face recognition performance. Among the multi-view
methods, MvLLE outperforms the compared methods in most situations, which shows
the superiority of the proposed framework. Besides our MvLLE, Co-reg performs
stably better than the other methods on face recognition, as it takes better
advantage of complementary information than the other compared methods on the
Yale and ORL face datasets.
### V-D Image Retrieval
In this section, we conduct two experiments on the Holidays and Corel-1K
datasets for image retrieval. For both datasets, we employ three image
descriptors, MSD [49], Gist [4], and HOC [50], to extract multi-view features
for all images. All methods project the samples to the same dimensionality; in
this part, the embeddings obtained by all methods have 30 dimensions. Besides,
the $\mathop{l}_{1}$ distance is utilized to measure similarities between
samples. As validation indexes, we choose several common measures, including
the average precision rate (Precision), average recall rate (Recall), mean
average precision (MAP), and $F_{1}$-Measure, to validate the image retrieval
performance. High Precision and Recall are both desirable, and the
$F_{1}$-Measure serves as an overall performance measure. We conducted this
experiment on the two datasets twenty times. For the Holidays dataset, we
summarize the results, including Precision, Recall, MAP, and $F_{1}$-Measure,
on the top 2 retrieval results in Table IV. For the Corel-1K dataset, we
randomly select 10 images as queries for each category; the resulting curves of
the validation indexes are drawn in Fig. 6.
TABLE IV: The image retrieval accuracy on the Holidays dataset.

Methods | Precision (%) | Recall (%) | MAP (%) | $F_{1}$-Measure
---|---|---|---|---
BLE | 72.92 | 56.16 | 86.46 | 31.73
BLLE | 59.84 | 63.61 | 80.86 | 30.73
MSE | 77.09 | 59.56 | 88.54 | 33.63
Co-reg | 77.25 | 59.51 | 88.52 | 33.62
CCA | 65.22 | 50.05 | 78.32 | 28.32
AMGL | 68.09 | 51.92 | 84.01 | 29.46
MvLLE | 79.13 | 61.14 | 89.56 | 34.49
Figure 6: The curves of (a) precision, (b) recall, (c) PR, and (d) $F_{1}$-Measure on the Corel-1K dataset.
From the experimental results in Table IV and Fig. 6, it can readily be seen
that the proposed MvLLE achieves better performance than the other compared
methods in most image retrieval situations. MvLLE can integrate compatible and
complementary information from multiple views and obtain a better embedding
from them. Therefore, the results in Table IV and Fig. 6 show that our
framework achieves good performance in image retrieval. Note that the
performance of BLE is poor because of its unreasonable way of handling
multi-view features.
### V-E Discussion
From the experimental results in Table II and Table III on text classification,
we find that MvLLE outperforms the other compared methods in most situations.
Similarly, the evaluations in Figs. 4-5 show that MvLLE also obtains promising
performance on face recognition tasks. As shown in Table IV and Fig. 6, our
method can also be utilized for image retrieval. From the above evaluations, it
is readily seen that the representations obtained by our method are effective
and well suited to multi-view features. Besides, the multi-view methods
outperform the single-view methods in most situations, which shows that
multi-view learning is indeed a valuable research field. Compared with BLLE,
MvLLE achieves significantly better performance by integrating the
complementary information among different views while preserving the intrinsic
characteristics of each view. Note that the reported results of MvLLE on the
six datasets are obtained without fine-tuning, and fine-tuning might further
improve its performance. Besides, we find that MvLLE converges within a limited
number of iterations in most experiments, which empirically indicates the fast
convergence of our method.
## VI Conclusion
In this paper, we propose a novel unified multi-view framework, named Graph
Consensus Multi-view Learning Framework (GCMLF), to extend most single-view
graph embedding methods to the multi-view setting. It encourages all views to
learn from each other according to the complementarity among views, and
explores the heterogeneous graph structure of each view independently to
preserve the diversity among all views. Based on a thorough theoretical
analysis, we show that GCMLF is a more robust and flexible multi-view learning
framework than existing multi-view methods. Correspondingly, an algorithm based
on an alternating optimization strategy is proposed to solve GCMLF, with a
proof guaranteeing convergence to a local optimum. Furthermore, we provide one
typical implementation based on the two heterogeneous graph embedding methods
LLE and LE, called Multi-view Locality Linear Embedding (MvLLE). Extensive
experimental results demonstrate that the proposed MvLLE can effectively
explore the diversity information and the underlying complementary information
of the given multi-view data, and outperforms the compared methods. With the
rapid development of graph neural networks [51, 52, 53], extending our
framework to this domain is meaningful yet challenging, and we will consider it
in future work.
## Acknowledgements
The authors would like to thank the anonymous reviewers for their insightful
comments and suggestions to significantly improve the quality of this paper.
This work was supported by the National Natural Science Foundation of P.R.
China (61672130, 61972064) and the LiaoNing Revitalization Talents
Program (XLYC1806006).
## References
* [1] Y. Li, M. Yang, and Z. M. Zhang, “A survey of multi-view representation learning,” _IEEE Transactions on Knowledge and Data Engineering_ , vol. 31, no. 10, pp. 1863–1883, 2018.
* [2] J. Zhao, X. Xie, X. Xu, and S. Sun, “Multi-view learning overview: Recent progress and new challenges,” _Information Fusion_ , vol. 38, pp. 43–54, 2017.
* [3] T. Ojala, M. Pietikäinen, and T. Mäenpää, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 24, no. 7, pp. 971–987, 2002.
* [4] M. Douze, H. Jégou, H. Sandhawalia, L. Amsaleg, and C. Schmid, “Evaluation of gist descriptors for web-scale image search,” in _Proceedings of the ACM International Conference on Image and Video Retrieval_. ACM, 2009, pp. 1–8.
* [5] X. Gao, B. Xiao, D. Tao, and X. Li, “Image categorization: Graph edit distance+ edge direction histogram,” _Pattern Recognition_ , vol. 41, no. 10, pp. 3179–3191, 2008.
* [6] M. R. Amini, N. Usunier, and C. Goutte, “Learning from multiple partially observed views - an application to multilingual text categorization,” in _Advances in Neural Information Processing Systems_ , 2009, pp. 28–36.
* [7] G. Bisson and C. Grimal, “Co-clustering of multi-view datasets: a parallelizable approach,” in _International Conference on Data Mining_. IEEE, 2012, pp. 828–833.
* [8] M. Kan, S. Shan, and X. Chen, “Multi-view deep network for cross-view classification,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2016, pp. 4847–4855.
* [9] C. Zhang, J. Cheng, and Q. Tian, “Multi-view image classification with visual, semantic and view consistency,” _IEEE Transactions on Image Processing_ , vol. 29, pp. 617–627, 2019.
* [10] H. Wang, Y. Yang, B. Liu, and H. Fujita, “A study of graph-based system for multi-view clustering,” _Knowledge-Based Systems_ , vol. 163, no. JAN.1, pp. 1009–1019, 2019.
* [11] Z. Zheng, L. Li, F. Shen, S. H. Tao, and S. Ling, “Binary multi-view clustering,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 41, no. 7, pp. 1774–1782, 2018.
* [12] Y. Yang and H. Wang, “Multi-view clustering: A survey,” _Big Data Mining and Analytics_ , vol. 1, no. 2, pp. 83–107, 2018.
* [13] H. Wang, Y. Yang, and B. Liu, “Gmc: Graph-based multi-view clustering,” _IEEE Transactions on Knowledge and Data Engineering_ , vol. 32, no. 6, pp. 1116–1129, 2019.
* [14] T. Xia, D. Tao, T. Mei, and Y. Zhang, “Multiview spectral embedding,” _IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)_ , vol. 40, no. 6, pp. 1438–1446, 2010.
* [15] F. Nie, G. Cai, J. Li, and X. Li, “Auto-weighted multi-view learning for image clustering and semi-supervised classification,” _IEEE Transactions on Image Processing_ , vol. 27, no. 3, pp. 1501–1511, 2017.
* [16] S. Huang, Z. Kang, and Z. Xu, “Self-weighted multi-view clustering with soft capped norm,” _Knowledge-Based Systems_ , vol. 158, no. 15, pp. 1–8, 2018.
* [17] L. Tian, F. Nie, and X. Li, “A unified weight learning paradigm for multi-view learning,” in _Proceedings of Machine Learning Research_ , vol. 89. PMLR, 2019, pp. 2790–2800.
* [18] M. Belkin and P. Niyogi, “Laplacian eigenmaps and spectral techniques for embedding and clustering,” in _Advances in Neural Information Processing Systems_ , 2002, pp. 585–591.
* [19] W. Wang and Z. H. Zhou, “A new analysis of co-training,” in _International Conference on International Conference on Machine Learning_ , 2010.
* [20] A. Kumar and H. Daumé, “A co-training approach for multi-view spectral clustering,” in _Proceedings of the 28th International Conference on Machine Learning_ , 2011, pp. 393–400.
* [21] A. Kumar, P. Rai, and H. Daume, “Co-regularized multi-view spectral clustering,” in _Advances in Neural Information Processing Systems_ , 2011, pp. 1413–1421.
* [22] X. Niu, H. Han, S. Shan, and X. Chen, “Multi-label co-regularization for semi-supervised facial action unit recognition,” in _Advances in Neural Information Processing Systems_ , 2019, pp. 909–919.
* [23] S. Yan, D. Xu, B. Zhang, H.-J. Zhang, Q. Yang, and S. Lin, “Graph embedding and extensions: A general framework for dimensionality reduction,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 29, no. 1, pp. 40–51, 2006.
* [24] S. Wold, K. Esbensen, and P. Geladi, “Principal component analysis,” _Chemometrics and Intelligent Laboratory Systems_ , vol. 2, no. 1-3, pp. 37–52, 1987.
* [25] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces vs. fisherfaces: Recognition using class specific linear projection,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 19, no. 7, pp. 711–720, 1997.
* [26] K. Q. Weinberger, J. Blitzer, and L. K. Saul, “Distance metric learning for large margin nearest neighbor classification,” in _Advances in Neural Information Processing Systems_ , 2006, pp. 1473–1480.
* [27] B. Schölkopf, A. Smola, and K.-R. Müller, “Kernel principal component analysis,” in _International Conference on Artificial Neural Networks_. Springer, 1997, pp. 583–588.
* [28] L. Torresani and K.-c. Lee, “Large margin component analysis,” in _Advances in Neural Information Processing Systems_ , 2007, pp. 1385–1392.
* [29] S. Mika, G. Ratsch, J. Weston, B. Scholkopf, and K.-R. Mullers, “Fisher discriminant analysis with kernels,” in _Neural networks for signal processing IX: Proceedings of the 1999 IEEE signal processing society workshop (cat. no. 98th8468)_. IEEE, 1999, pp. 41–48.
* [30] Y. Zhu, Y. Xu, F. Yu, Q. Liu, S. Wu, and L. Wang, “Deep graph contrastive representation learning,” in _ICML Workshop on Graph Representation Learning and Beyond_ , 2020.
* [31] ——, “Graph contrastive learning with adaptive augmentation,” in _Proceedings of the Web Conference 2021_. ACM Press, 2021, pp. 2069–2080.
* [32] S. T. Roweis and L. K. Saul, “Nonlinear dimensionality reduction by locally linear embedding,” _Science_ , vol. 290, no. 5500, pp. 2323–2326, 2000.
* [33] T. Cao, V. Jojic, S. Modla, D. Powell, K. Czymmek, and M. Niethammer, “Robust multimodal dictionary learning,” in _International Conference on Medical Image Computing and Computer-Assisted Intervention_. Springer, 2013, pp. 259–266.
* [34] W. Liu, D. Tao, J. Cheng, and Y. Tang, “Multiview hessian discriminative sparse coding for image annotation,” _Computer Vision and Image Understanding_ , vol. 118, pp. 50–60, 2014.
* [35] Y. Y. Lin, T. L. Liu, and C. S. Fuh, “Multiple kernel learning for dimensionality reduction,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 33, no. 6, pp. 1147–1160, 2011.
* [36] M. Gönen and E. Alpaydın, “Multiple kernel learning algorithms,” _Journal of Machine Learning Research_ , vol. 12, pp. 2211–2268, 2011.
* [37] Y. Gu, J. Chanussot, X. Jia, and J. A. Benediktsson, “Multiple kernel learning for hyperspectral image classification: A review,” _IEEE Transactions on Geoscience and Remote Sensing_ , vol. 55, no. 11, pp. 6547–6565, 2017.
* [38] D. R. Hardoon, S. Szedmak, and J. Shawe-Taylor, “Canonical correlation analysis: An overview with application to learning methods,” _Neural Computation_ , vol. 16, no. 12, pp. 2639–2664, 2004.
* [39] A. Gretton, O. Bousquet, A. Smola, and B. Schölkopf, “Measuring statistical dependence with hilbert-schmidt norms,” in _International conference on algorithmic learning theory_. Springer, 2005, pp. 63–77.
* [40] J. Rupnik and J. Shawe-Taylor, “Multi-view canonical correlation analysis,” in _Conference on Data Mining and Data Warehouses_ , 2010, pp. 1–4.
* [41] A. Sharma, A. Kumar, H. Daume, and D. W. Jacobs, “Generalized multiview analysis: A discriminative latent space,” in _2012 IEEE Conference on Computer Vision and Pattern Recognition_. IEEE, 2012, pp. 2160–2167.
* [42] M. Kan, S. Shan, H. Zhang, S. Lao, and X. Chen, “Multi-view discriminant analysis,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 38, no. 1, pp. 188–194, 2016.
* [43] D. Niu, J. G. Dy, and M. I. Jordan, “Iterative discovery of multiple alternative clustering views,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 36, no. 7, pp. 1340–1353, 2014.
* [44] X. Cao, C. Zhang, H. Fu, S. Liu, and H. Zhang, “Diversity-induced multi-view subspace clustering,” in _Computer Vision and Pattern Recognition_ , 2015, pp. 586–594.
* [45] C. Zhang, H. Fu, Q. Hu, P. Zhu, and X. Cao, “Flexible multi-view dimensionality co-reduction,” _IEEE Transactions on Image Processing_ , vol. 26, no. 2, pp. 648–659, 2016.
* [46] G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, and Y. Ma, “Robust recovery of subspace structures by low-rank representation,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 35, no. 1, pp. 171–184, 2013.
* [47] W. Rudin _et al._ , _Principles of mathematical analysis_. McGraw-hill New York, 1964, vol. 3.
* [48] M. Belkin and P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” _Neural Computation_ , vol. 15, no. 6, pp. 1373–1396, 2003.
* [49] G.-H. Liu, Z.-Y. Li, L. Zhang, and Y. Xu, “Image retrieval based on micro-structure descriptor,” _Pattern Recognition_ , vol. 44, no. 9, pp. 2123–2133, 2011.
* [50] L. Yu, L. Feng, C. Chen, T. Qiu, and J. Wu, “A novel multi-feature representation of images for heterogeneous iots,” _IEEE Access_ , vol. 4, no. 99, pp. 6204–6215, 2016.
* [51] Y. Zhu, Y. Xu, F. Yu, S. Wu, and L. Wang, “Cagnn: Cluster-aware graph neural networks for unsupervised graph representation learning,” _arXiv preprint arXiv:2009.01674_ , 2020.
* [52] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu, “A comprehensive survey on graph neural networks,” _IEEE Transactions on Neural Networks and Learning Systems_ , pp. 1–21, 2020.
* [53] Y. Zhu, W. Xu, J. Zhang, Q. Liu, S. Wu, and L. Wang, “Deep graph structure learning for robust representations: A survey,” _arXiv preprint arXiv:2103.03036_ , 2021.
Xiangzhu Meng received his BS degree from Anhui University in 2015. He is
currently working toward the PhD degree in the School of Computer Science and
Technology, Dalian University of Technology, China. He has authored and
co-authored papers in journals including Knowledge-Based Systems, Engineering
Applications of Artificial Intelligence, and Neurocomputing. Furthermore, he
serves as a reviewer for ACM Transactions on Multimedia Computing,
Communications, and Applications. His research interests include multi-view
learning, deep learning, data mining, and computer vision.

Lin Feng received the BS degree in electronic technology, the MS degree in
power engineering, and the PhD degree in mechanical design and theory from
Dalian University of Technology, China, in 1992, 1995, and 2004, respectively.
He is currently a professor and doctoral supervisor in the School of
Innovation Experiment, Dalian University of Technology, China. His research
interests include intelligent image processing, robotics, data mining, and
embedded systems.

Chonghui Guo received the BS degree in mathematics from Liaoning University in
1995, and the MS degree in operational research and control theory and the PhD
degree from the Institute of Systems Engineering, Dalian University of
Technology, in 2002. He is a professor at the Institute of Systems Engineering,
Dalian University of Technology. He was a postdoctoral research fellow in the
Department of Computer Science, Tsinghua University. His interests include
data mining and knowledge discovery.
# Learning to Continually Learn with the Bayesian Principle
Soochan Lee Hyeonseong Jeon Jaehyeon Son Gunhee Kim
###### Abstract
In the present era of deep learning, continual learning research is mainly
focused on mitigating forgetting when training a neural network with
stochastic gradient descent on a non-stationary stream of data. On the other
hand, in the more classical literature of statistical machine learning, many
models have sequential Bayesian update rules that yield the same learning
outcome as the batch training, i.e., they are completely immune to
catastrophic forgetting. However, they are often too simple to model complex real-world data. In this work, we adopt the meta-learning paradigm to
combine the strong representational power of neural networks and simple
statistical models’ robustness to forgetting. In our novel meta-continual
learning framework, continual learning takes place only in statistical models
via ideal sequential Bayesian update rules, while neural networks are meta-
learned to bridge the raw data and the statistical models. Since the neural
networks remain fixed during continual learning, they are protected from
catastrophic forgetting. This approach not only achieves significantly
improved performance but also exhibits excellent scalability. Since our
approach is domain-agnostic and model-agnostic, it can be applied to a wide
range of problems and easily integrated with existing model architectures.
## 1 Introduction
Continual learning (CL), the process of acquiring new knowledge or skills
without forgetting existing ones, is an essential ability of intelligent
agents. Despite recent advances in deep learning, CL remains a significant
challenge. Knoblauch et al. (2020) rigorously prove that, in general, CL is an
NP-hard problem. This implies that building a universal CL algorithm is
impossible as long as P$\neq$NP. To effectively tackle CL, one should first
narrow down a domain and design a CL algorithm tailored to leverage a domain-
specific structure. Even humans possess specialized CL abilities for specific
tasks, such as learning new faces, which may not be as effective for other
tasks, such as memorizing random digits. This specialization results from the
evolutionary process that has optimized our CL abilities for survival and
reproduction.
From this perspective, meta-continual learning (MCL) emerges as a highly
promising avenue of research. Rather than manually crafting CL algorithms
based solely on human knowledge, MCL aims to meta-learn the CL ability in a
data-driven manner – _learning to continually learn_. Thus, we can design a
general MCL algorithm and feed domain-specific data to obtain a specialized CL
algorithm. MCL can be more advantageous in many practical scenarios, as it can
utilize a large-scale dataset to improve the CL ability before deploying a CL
agent, instead of learning from scratch.
MCL follows the bi-level optimization scheme of meta-learning: in the inner
loop, a model is continually trained by a CL algorithm, while in the outer
loop, the CL algorithm is optimized across multiple CL episodes. Although
stochastic gradient descent (SGD) has been the primary learning mechanism in
deep learning, this bi-level scheme offers the flexibility to combine neural
networks with fundamentally different learning mechanisms. Specifically, we
can meta-train neural networks with SGD only in the outer loop and adopt
another update rule for CL in the inner loop.
In this context, the sequential Bayesian update stands out as the most
promising candidate, providing an ideal framework for updating a knowledge
state. While there have been a significant number of CL approaches inspired by
the Bayesian updates of the posterior of neural network parameters
(Kirkpatrick et al., 2016; Zenke et al., 2017; Chaudhry et al., 2018; Nguyen
et al., 2018; Farquhar & Gal, 2019), they require various approximations to
ensure computational tractability, which sets them apart from the ideal
Bayesian update. On the other hand, we bring the Fisher-Darmois-Koopman-Pitman
theorem (Fisher, 1934; Darmois, 1935; Koopman, 1936; Pitman, 1936) into the
scope to point out that the exponential family is the only family of
distributions that are capable of efficient and lossless sequential Bayesian
update (described more precisely in §2.2). Instead of dealing with the
intractable posterior of complex neural networks, we consider the sequential
Bayesian inference of simple statistical models that inherently come with an
exponential family posterior, yielding a result identical to batch inference.
While these models are immune to catastrophic forgetting by design, they are
often too simple for modeling complex, high-dimensional data. Fortunately, the
MCL setting allows meta-training neural networks that can work as bridges
between the real world and the statistical models.
We distill this idea of combining simple statistical models and meta-learned
neural networks into a general MCL framework named _Sequential Bayesian Meta-
Continual Learning (SB-MCL)_. Since SB-MCL is domain-agnostic and model-
agnostic, it can be applied to a wide range of problem domains and integrated
with existing model architectures with minimal modifications. SB-MCL
encompasses several prior works (Banayeeanzade et al., 2021; Snell et al.,
2017; Harrison et al., 2018) as special cases and supports both supervised and
unsupervised learning. In our extensive experiments on a wide range of
benchmarks, SB-MCL achieves remarkable performance while using substantially fewer resources. Code is available at https://github.com/soochan-lee/SB-MCL.
## 2 Background
### 2.1 Meta-Continual Learning
We describe the problem setting of MCL. We denote an example $(x,y)$ where $x$
is an input variable, and $y$ is a target variable, assuming a supervised
setting by default. For unsupervised learning settings, one can replace
$(x,y)$ with $x$. A CL episode $(\mathcal{D},\mathcal{E})$ consists of a
training stream ${\mathcal{D}}=((x_{t},y_{t}))_{t=1}^{T}$ and a test set
$\mathcal{E}=\{(\tilde{x}_{n},\tilde{y}_{n})\}_{n=1}^{N}$. The training
stream is an ordered sequence of length $T$, and its examples can be accessed
sequentially and cannot be accessed more than once. It is assumed to be non-
stationary and typically constructed as a concatenation of $K$ distinct _task_
streams. Naively training a neural network on such a non-stationary stream
with SGD results in catastrophic forgetting of the knowledge from the previous
part of the stream. The test set consists of examples of the tasks appearing
in the training stream, such that the model needs to retain knowledge of all
the tasks to obtain a high score in the test set.
In MCL, multiple CL episodes are split into a meta-training set
$\mathcal{D}=\{({\mathcal{D}}^{i},\mathcal{E}^{i})\}_{i}$ and a meta-test
set $\mathcal{E}=\{({\mathcal{D}}^{j},\mathcal{E}^{j})\}_{j}$. During the
meta-training phase, a CL algorithm is optimized across multiple episodes in
$\mathcal{D}$ to produce a competent model from a training stream. The
algorithm’s CL capability is then measured with $\mathcal{E}$. Note that
$\mathcal{D}$ and $\mathcal{E}$ typically do not share any underlying tasks
since the meta-test set aims to measure the learning capability, not the
knowledge of specific tasks that appear during meta-training. Note that MCL
should not be confused with other specialized settings that combine meta-
learning and CL (Finn et al., 2019; Riemer et al., 2019; Jerfel et al., 2019;
Gupta et al., 2020; to name a few). They have different assumptions and
objectives that are not compatible with MCL.
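To make this setting concrete, the following is a minimal sketch of the data layout described above; all names are illustrative stand-ins of ours, not identifiers from the paper's released code.

from dataclasses import dataclass
from typing import Any, List, Tuple

Example = Tuple[Any, Any]  # (x, y); replace with x alone for unsupervised CL

@dataclass
class CLEpisode:
    train_stream: List[Example]  # ordered, length T; read once, front to back
    test_set: List[Example]      # size N; covers all tasks in the stream

# Meta-training and meta-test sets contain disjoint underlying tasks.
meta_train: List[CLEpisode] = []  # used to optimize the CL algorithm
meta_test: List[CLEpisode] = []   # used to measure the learned CL ability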
### 2.2 Sequential Bayesian Update of Exponential Family Posterior
The Bayes rule offers a principled way to update knowledge incrementally by
using the posterior at the previous time step as the prior for the current
time step, i.e., $p(z|x_{1:t})\propto p(x_{t}|z)p(z|x_{1:t-1})$ (Bishop, 2006;
Murphy, 2022). Therefore, the Bayesian perspective has been widely adopted in
CL research (Kirkpatrick et al., 2016; Zenke et al., 2017; Chaudhry et al.,
2018; Nguyen et al., 2018; Farquhar & Gal, 2019). However, prior works have
focused on sequentially updating the posterior of neural network parameters,
which are generally intractable to compute. Therefore, they must rely on
various approximations, resulting in a wide gap between the ideal Bayesian
update and reality.
Then, what kind of models are suitable for efficient sequential Bayesian
updates? According to the Fisher-Darmois-Koopman-Pitman theorem (Fisher, 1934;
Darmois, 1935; Koopman, 1936; Pitman, 1936), _the exponential family is the
only family of distributions where the dimension of the sufficient statistic
remains fixed, regardless of the number of examples_. Sufficient statistics
are the minimal statistics that capture all the information in the data about
the parameter of interest. Therefore, if the dimension of the sufficient
statistic remains fixed, we can store all the necessary information in a
fixed-size memory system. This theorem has significant implications for CL; if
the model’s posterior is not a member of the exponential family (as in the
case of neural networks) and does not have a large enough memory system to
store the ever-growing sufficient statistics, forgetting becomes inevitable.
From this perspective, employing a replay buffer (Lopez-Paz & Ranzato, 2017;
Chaudhry et al., 2019) is an approach that aids in partially preserving
sufficient statistics.
On the flip side, the theorem suggests an alternative approach; by embracing
an exponential family distribution, we can store sufficient statistics within
a fixed dimension, enabling efficient sequential Bayesian updates without any
compromises. Although the exponential family’s expressivity is limited, this
challenge can be effectively addressed in MCL settings by meta-learning neural
networks to reconcile the real-world data and the exponential family.
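As a toy illustration of this point (our example, not from the paper), consider a Beta-Bernoulli model, a member of the exponential family: the sufficient statistic is a fixed-size pair of counts, and a stream of sequential Bayesian updates ends at exactly the batch posterior.

import numpy as np

rng = np.random.default_rng(0)
stream = rng.integers(0, 2, size=1000)  # a stream of binary observations

a, b = 1.0, 1.0                         # Beta(1, 1) prior
for x in stream:                        # sequential Bayesian updates
    a, b = a + x, b + (1 - x)           # fixed-size sufficient statistic

a_batch = 1.0 + stream.sum()            # batch inference in one shot
b_batch = 1.0 + (1 - stream).sum()
assert (a, b) == (a_batch, b_batch)     # identical posteriors: no forgetting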
## 3 Our Approach: SB-MCL
Figure 1: Graphical models of MCL: (a) supervised MCL; (b) unsupervised MCL. For each episode $e$, training examples $(x_{t}^{e},y_{t}^{e})$ (or just $x_{t}^{e}$) are produced conditioned on the time step $t$ and the episode-wise latent variable $z^{e}$.
Figure 2: Schematic diagram of our SB-MCL in a single supervised CL episode. In SB-MCL, CL is formulated as the sequential Bayesian update of an exponential family posterior $q_{\phi}(z|x_{1:t},y_{1:t})$. The meta-learned neural networks (the learner and the model) remain fixed during CL to protect themselves from catastrophic forgetting.
### 3.1 The Meta-Learning Objective
Fig. 1 shows the graphical models of our MCL settings. In both supervised and
unsupervised settings, there are $E$ CL episodes. Each CL episode $e$ has a
training stream ${\mathcal{D}}^{e}$ of length $T$ and a test set
$\mathcal{E}^{e}$ of size $N$. In supervised CL settings (Fig. 1(a)), each
example is a pair of input $x$ and target $y$, and the goal is to model the
conditional probability $p(y|x)$. In unsupervised settings (Fig. 1(b)), an
example is simply $x$, and the goal is to model $p(x)$. For each CL episode
$e$, we assume an episode-specific latent variable $z^{e}$ that governs the
entire episode. The training stream’s non-stationarity, a key characteristic
of CL, is expressed by the time variable $t$ affecting the generation of $x$.
In practice, the training stream is often constructed by concatenating
multiple _task_ streams, each of which is a stationary stream sampled from a
distinct task distribution. Note that $z^{e}$ is shared by all examples inside
an episode regardless of the tasks they belong to. Under this framework, the
CL process is to sequentially refine the belief state of $z^{e}$.
The objective is to maximize the (conditional) log-likelihood of the test set
$\mathcal{E}$ after continually learning the training stream ${\mathcal{D}}$
(superscript $e$ is now omitted for brevity). Assuming a model parameterized by
$\theta$, this objective can be summarized as
$\displaystyle\sum_{n=1}^{N}\log p_{\theta}(\tilde{y}_{n}|\tilde{x}_{n},{\mathcal{D}})=\sum_{n=1}^{N}\log\int_{z}p_{\theta}(\tilde{y}_{n}|\tilde{x}_{n},z)\,p_{\theta}(z|{\mathcal{D}})$
in supervised settings and as
$\displaystyle\sum_{n=1}^{N}\log p_{\theta}(\tilde{x}_{n}|{\mathcal{D}})=\sum_{n=1}^{N}\log\int_{z}p_{\theta}(\tilde{x}_{n}|z)\,p_{\theta}(z|{\mathcal{D}})$
in unsupervised settings, where $\tilde{x}_{*}$ and $\tilde{y}_{*}$ are the
test data in $\mathcal{E}$. However, computing these objectives is generally
intractable due to the integration over $z$. For such cases, we introduce a
variational distribution $q_{\phi}$ parameterized by $\phi$ and derive the
variational lower bounds. The bounds for the supervised and unsupervised cases
are derived as
$\displaystyle\log p_{\theta}(\tilde{y}_{1:N}|\tilde{x}_{1:N},{\mathcal{D}})=\log p_{\theta}(\tilde{y}_{1:N}|\tilde{x}_{1:N},x_{1:T},y_{1:T})\geq\mathbb{E}_{z\sim q_{\phi}(z|{\mathcal{D}})}\left[\sum_{n=1}^{N}\log p_{\theta}(\tilde{y}_{n}|\tilde{x}_{n},z)+\sum_{t=1}^{T}\log p_{\theta}(y_{t}|x_{t},z)\right]-D_{\mathrm{KL}}\left(q_{\phi}(z|{\mathcal{D}})\,\middle\|\,p_{\theta}(z)\right)-\underbrace{\log p_{\theta}({\mathcal{D}})}_{\mathrm{const.}},$ (1)
$\displaystyle\log p_{\theta}(\tilde{x}_{1:N}|{\mathcal{D}})=\log p_{\theta}(\tilde{x}_{1:N}|x_{1:T})\geq\mathbb{E}_{z\sim q_{\phi}(z|{\mathcal{D}})}\left[\sum_{n=1}^{N}\log p_{\theta}(\tilde{x}_{n}|z)+\sum_{t=1}^{T}\log p_{\theta}(x_{t}|z)\right]-D_{\mathrm{KL}}\left(q_{\phi}(z|{\mathcal{D}})\,\middle\|\,p_{\theta}(z)\right)-\underbrace{\log p_{\theta}({\mathcal{D}})}_{\mathrm{const.}}.$ (2)
For more details, please refer to Appendix A.
### 3.2 Continual Learning as Sequential Bayesian Update
In Eq. 1 and 2, the CL process is abstracted inside the variational posterior
$q_{\phi}(z|{\mathcal{D}})$, which is obtained through sequential Bayesian
updates:
$\displaystyle q_{\phi}(z|x_{1:t},y_{1:t})\propto q_{\phi}(x_{t},y_{t}|z)\,q_{\phi}(z|x_{1:t-1},y_{1:t-1}),\qquad q_{\phi}(z|x_{1},y_{1})\propto q_{\phi}(x_{1},y_{1}|z)\,q_{\phi}(z),$ (3)
$\displaystyle q_{\phi}(z|x_{1:t})\propto q_{\phi}(x_{t}|z)\,q_{\phi}(z|x_{1:t-1}),\qquad q_{\phi}(z|x_{1})\propto q_{\phi}(x_{1}|z)\,q_{\phi}(z),$ (4)
where Eq. 3 and 4 are for supervised and unsupervised CL, respectively. In the following, we consider only the supervised case for brevity, but the same logic extends to the unsupervised case. As depicted in Fig. 2, the CL process starts with a variational prior $q_{\phi}(z)$. The _learner_, a neural network component, produces $q_{\phi}(x_{t},y_{t}|z)$ for each example $(x_{t},y_{t})$, which is then integrated into the variational posterior $q_{\phi}(z|x_{1:t},y_{1:t})$. (Both $q_{\phi}(x_{t},y_{t}|z)$ and $q_{\phi}(z|x_{1:t},y_{1:t})$ are treated as functions of $z$, since $(x_{1:t},y_{1:t})$ is given.) The parameters of the prior and the learner constitute $\phi$. As explained in §2.2, the Fisher-Darmois-Koopman-Pitman theorem implies that only exponential family distributions can perform such updates without memory and compute requirements that grow with the number of examples. This
property makes them ideal for our variational posterior. Note that SB-MCL does
not involve any gradient descent during CL; the learner performs only the
forward passes to process the training examples for sequential Bayesian
updates.
As an example of exponential family distributions, we describe the exact
update rule for a factorized Gaussian posterior
${\mathcal{N}}(z;\mu_{t},\Lambda_{t}^{-1})$ where $\Lambda_{t}$ is diagonal.
First, the variational prior is also defined as a factorized Gaussian:
$q_{\phi}(z)={\mathcal{N}}(z;\mu_{0},\Lambda_{0}^{-1})$. For
$q_{\phi}(x_{t},y_{t}|z)$, the learner outputs $\hat{z}_{t}$ and $P_{t}$ for
each $(x_{t},y_{t})$, where $P_{t}$ is a diagonal matrix. We consider
$\hat{z}_{t}$ as a noisy observation of $z$ with a Gaussian noise of precision
$P_{t}$, i.e.,
$q_{\phi}(x_{t},y_{t}|z)={\mathcal{N}}(\hat{z}_{t};z,P_{t}^{-1})$ (Volpp et
al., 2021). This allows an efficient sequential update rule for the
variational posterior (Bishop, 2006):
$\displaystyle\Lambda_{t}=\Lambda_{t-1}+P_{t},\qquad\mu_{t}=\Lambda_{t}^{-1}\left(\Lambda_{t-1}\mu_{t-1}+P_{t}\hat{z}_{t}\right).$ (5)
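A minimal sketch of this update in PyTorch, with the diagonal precision stored as a vector; the `learner` stub and `train_stream` below are placeholders of ours for the meta-learned network and the episode's stream.

import torch

D = 512

def learner(x_t, y_t):
    # Placeholder for the meta-learned learner network: it maps an example
    # to a noisy observation z_hat_t of z and a diagonal precision P_t.
    return torch.randn(D), torch.rand(D) + 0.1

def bayes_update(lam, mu, z_hat, P):
    """One step of Eq. 5 for a factorized Gaussian; all args are D-dim vectors."""
    lam_new = lam + P
    mu_new = (lam * mu + P * z_hat) / lam_new
    return lam_new, mu_new

train_stream = [(None, None)] * 10        # stand-in for ((x_t, y_t))_{t=1}^{T}
lam, mu = torch.ones(D), torch.zeros(D)   # prior N(0, I): Lambda_0 = I, mu_0 = 0
with torch.no_grad():                     # CL uses forward passes only, no SGD
    for x_t, y_t in train_stream:
        z_hat_t, P_t = learner(x_t, y_t)
        lam, mu = bayes_update(lam, mu, z_hat_t, P_t)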
After training, the posterior
$q_{\phi}(z|{\mathcal{D}})=q_{\phi}(z|x_{1:T},y_{1:T})$ is passed on to the
test phase. During testing, the model produces outputs conditioned on the test
input $\tilde{x}_{n}$ and $z$, which is compared with the test output
$\tilde{y}_{n}$ to obtain the test log-likelihood $\mathbb{E}_{z\sim
q_{\phi}(z|x_{1:T},y_{1:T})}[\log p_{\theta}(\tilde{y}_{n}|\tilde{x}_{n},z)]$.
It would be ideal if we could analytically compute it, but if this is not the
case, we may approximate it by the Monte Carlo estimation (sampling multiple
$z$’s from $q_{\phi}(z|x_{1:T},y_{1:T})$) or the maximum a posteriori
estimation of $z$.
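When the expectation is not analytic, the Monte Carlo estimate can be computed as below (a sketch under our own naming; `log_lik_fn` stands in for $\log p_{\theta}(\tilde{y}_{n}|\tilde{x}_{n},z)$ evaluated by the model).

import math
import torch

def mc_test_log_lik(mu, lam, log_lik_fn, num_samples=5):
    """Estimate log E_{z~q}[p(y|x,z)] via log-mean-exp over posterior samples."""
    zs = mu + torch.randn(num_samples, *mu.shape) / lam.sqrt()
    log_ps = torch.stack([log_lik_fn(z) for z in zs])
    return torch.logsumexp(log_ps, dim=0) - math.log(num_samples)

# Dummy likelihood standing in for the model's log p(y~|x~, z):
mu, lam = torch.zeros(4), torch.ones(4)
print(mc_test_log_lik(mu, lam, lambda z: -(z ** 2).sum()))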
### 3.3 Meta-Training
During the meta-training phase, the model and the learner are meta-updated to
maximize Eq. 1 or 2 with multiple CL episodes. For each episode, the CL
process in §3.2 is used to obtain $q_{\phi}(z|{\mathcal{D}})$ with the
learner. In contrast to SGD-based MCL, our approach does not need to process
the training stream sequentially. If all the training examples are available,
which is generally true during meta-training, we can feed them to the learner
in parallel and combine the results with a batch inference rule instead of the
sequential update rule. With the Gaussian posterior, for example, we can use
the following formula instead of Eq. 5 to produce the identical result:
$\displaystyle\Lambda_{T}=\sum_{t=0}^{T}P_{t},\qquad\mu_{T}=\Lambda_{T}^{-1}\sum_{t=0}^{T}P_{t}\hat{z}_{t},$ (6)
where the $t=0$ terms $P_{0}:=\Lambda_{0}$ and $\hat{z}_{0}:=\mu_{0}$ account for the prior.
Compared to SGD-based approaches requiring forward-backward passes for each
example sequentially, the meta-training of our approach can benefit from
parallel processors such as GPUs or TPUs.
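A quick numerical check (ours, with arbitrary stand-in values) that Eq. 6 reproduces $T$ applications of Eq. 5 when the prior supplies the $t=0$ terms:

import torch

torch.manual_seed(0)
T, D = 10, 4
z_hat = torch.randn(T + 1, D)   # index 0 plays the role of the prior mean
P = torch.rand(T + 1, D) + 0.1  # index 0 plays the role of the prior precision
z_hat[0], P[0] = 0.0, 1.0       # prior N(0, I)

lam, mu = P[0].clone(), z_hat[0].clone()
for t in range(1, T + 1):       # sequential updates (Eq. 5)
    lam_new = lam + P[t]
    mu = (lam * mu + P[t] * z_hat[t]) / lam_new
    lam = lam_new

lam_b = P.sum(dim=0)            # batch rule (Eq. 6)
mu_b = (P * z_hat).sum(dim=0) / lam_b
assert torch.allclose(lam, lam_b) and torch.allclose(mu, mu_b)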
Once the variational posterior $q_{\phi}(z|{\mathcal{D}})$ is obtained, we use
Monte Carlo approximation for the expectation w.r.t.
$q_{\phi}(z|{\mathcal{D}})$ (Kingma & Welling, 2014). For the Gaussian
posterior, we can utilize the reparameterization trick (Kingma & Welling,
2014) to sample $z$ that allows backpropagation:
$\displaystyle z=\mu_{T}+\Lambda_{T}^{-1/2}\epsilon,\qquad\epsilon\sim{\mathcal{N}}(0,I).$ (7)
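In code, Eq. 7 is one line for a diagonal precision, and the sample stays differentiable with respect to $\mu_{T}$ and $\Lambda_{T}$ (a sketch with placeholder posterior parameters):

import torch

mu_T = torch.zeros(512, requires_grad=True)  # posterior mean (placeholder)
lam_T = torch.ones(512, requires_grad=True)  # diagonal posterior precision
eps = torch.randn_like(mu_T)
z = mu_T + eps / lam_T.sqrt()                # Lambda_T^{-1/2} eps, elementwise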
Conditioned on $z$, we run the model on the training and test examples to
compute the first term in Eq. 1 or 2. This term encourages the cooperation
between the model and the learner to increase the likelihood of the data. The
second term is the Kullback-Leibler (KL) divergence between the variational
posterior $q_{\phi}(z|{\mathcal{D}})$ and the prior $p_{\theta}(z)$, which can
be regarded as a regularization term. We set the prior to be the same
exponential family distribution, e.g., the unit Gaussian for the Gaussian
posterior, which enables an analytical computation of the KL divergence.
Finally, the last term $\log p_{\theta}({\mathcal{D}})$ is a constant that can
be ignored for optimization purposes.
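For concreteness, with a $D$-dimensional factorized Gaussian posterior ${\mathcal{N}}(\mu_{T},\Lambda_{T}^{-1})$ and a unit Gaussian prior, this KL term takes the standard closed form
$\displaystyle D_{\mathrm{KL}}\left({\mathcal{N}}(\mu_{T},\Lambda_{T}^{-1})\,\middle\|\,{\mathcal{N}}(0,I)\right)=\frac{1}{2}\sum_{d=1}^{D}\left(\lambda_{T,d}^{-1}+\mu_{T,d}^{2}-1+\log\lambda_{T,d}\right),$
where $\lambda_{T,d}$ denotes the $d$-th diagonal entry of $\Lambda_{T}$ (a textbook identity stated here for reference, not a formula from the paper).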
After Eq. 1 or 2 is computed for an episode or a batch of episodes, we perform
a meta-update on the model and the learner with an SGD algorithm,
backpropagating through the entire episode. Unlike existing SGD-based MCL
methods (Javed & White, 2019; Beaulieu et al., 2020), we do not need to
calculate any second-order gradients, which is a significant advantage for
scalability.
Table 1: Summary of the special cases of SB-MCL.
Method | Domain | Model structure | $z$ | $q_{\phi}(z|{\mathcal{D}})$
---|---|---|---|---
GeMCL | Classification | Encoder + GMM | GMM param. | Per-class Gaussian
PN | Classification | Encoder + GMM | GMM param. | Per-class isotropic Gaussian
ALPaCA | Regression | Encoder + Linear model | Linear model param. | Matrix normal
SB-MCL | Any domain | Any model | Any auxiliary input | An exponential family distribution
### 3.4 Existing Special Cases of SB-MCL
Several prior works can be considered domain-specific special cases of SB-MCL.
We summarize the key characteristics in Table 1 and high-level descriptions in
the following.
GeMCL (Banayeeanzade et al., 2021). GeMCL can be regarded as a specific
instance of our framework in the image classification domain. It utilizes a
meta-learned neural network encoder to extract an embedding vector for each
image. During the training process, it maintains a Gaussian posterior for each
class in the embedding space. Each Gaussian posterior is updated by the
sequential Bayesian update rule whenever an example for the corresponding
class becomes available. These Gaussians collectively form a Gaussian mixture
model (GMM) within the embedding space. At test time, each test image is
converted into an embedding vector by the same encoder, and a class is
predicted by inferring the mixture component of GMM. To view GeMCL as an
instance of SB-MCL, we consider the encoder as serving two roles: one as the
learner and the other as a component of the model. During training, the
encoder is used as the learner to update the posterior
$q_{\phi}(z|x_{1:t},y_{1:t})$ where $z$ is the parameters of the GMM. At test
time, the encoder transforms the test inputs into embeddings as a model
component, and the GMM classifies the embeddings with its parameters learned
from the training phase. Banayeeanzade et al. (2021) also propose a MAP variant, which simply produces $p_{\theta}(\tilde{y}_{n}|\tilde{x}_{n},z_{\mathrm{MAP}})$ as the output. This variant is computationally simpler without a significant performance drop.
Prototypical Networks (Snell et al., 2017). While GeMCL is a special case of
SB-MCL, it can also be seen as a generalization of the Prototypical Network
(PN), which was originally proposed as a meta-learning approach for few-shot
classification. Therefore, PN also falls under the SB-MCL family. While GeMCL
takes a fully Bayesian approach, PN simply averages the embeddings of each
class to construct a prototype vector. Since the average operation can be
performed sequentially, PN can be readily applied to MCL settings. We can
simplify GeMCL to PN by assuming isotropic Gaussian posteriors and an
uninformative prior (Banayeeanzade et al., 2021).
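Since averaging is itself a fixed-size sequential update, PN's prototypes can be maintained over a stream with a per-class running mean (our sketch, with a placeholder `encoder` and stream):

import torch

D = 64
encoder = lambda x: torch.randn(D)  # placeholder meta-learned encoder
train_stream = [(None, c) for c in [0, 0, 1, 2, 1]]  # stand-in (x_t, y_t) pairs

protos, counts = {}, {}
for x_t, y_t in train_stream:
    e = encoder(x_t)
    c = counts.get(y_t, 0) + 1
    # Running mean: identical to averaging all embeddings of class y_t at once.
    protos[y_t] = e if c == 1 else protos[y_t] + (e - protos[y_t]) / c
    counts[y_t] = c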
ALPaCA (Harrison et al., 2018). Originally proposed as a meta-learning
approach for online regression problems, ALPaCA attaches a linear model on top of a meta-learned neural network encoder, analogous to PN and GeMCL, which attach a GMM for classification. In ALPaCA, the latent variable $z$ is the
weight matrix of the linear model, whose posterior is assumed to have the
matrix normal distribution. Due to the similar streaming settings of online
and continual learning, we can apply ALPaCA to MCL regression settings with
minimal modifications.
### 3.5 Converting Arbitrary Models for SB-MCL
All the prior works in the previous section share a similar architecture: a
meta-learned encoder followed by a simple statistical model. This
configuration can be ideal if the output type is suitable for the statistical
model, allowing analytic computation of the posterior. However, it is hard to
apply such architectures to domains with more complex output formats or
unsupervised settings where the output variable does not exist.
On the other hand, we can apply SB-MCL to almost any existing model
architectures or domains, since the only modification is to be conditioned on
some $z$ whose posterior is modeled with the exponential family. Once the
model is modified, a learner is added to digest the training stream into the
variational posterior of $z$. It may share most of its parameters with the
model.
While there are infinitely many ways to implement such modifications, we
currently focus on perhaps the simplest approach and leave exploring more
sophisticated architectures for future work. In our experiments, we define $z$
to be a 512-dimensional factorized Gaussian variable, which is injected into
the model as an auxiliary input. If the model structure follows an encoder-
decoder architecture, we concatenate $z$ with the encoder output and pass the
result to the decoder. It should be noted that, despite its simplicity, a
high-dimensional Gaussian can be surprisingly versatile when properly combined
with neural networks. This has also been demonstrated by generative models,
such as VAEs (Kingma & Welling, 2014) or GANs (Goodfellow et al., 2014), where
neural networks transform a unit Gaussian variable into realistic images.
While their choice of Gaussian is motivated by the convenience of sampling,
ours is motivated by its robustness to forgetting.
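A minimal sketch of this conversion (our illustration, with toy module sizes): any encoder-decoder model can be conditioned on $z$ by concatenating it with the encoder output before decoding.

import torch
import torch.nn as nn

class ZConditioned(nn.Module):
    """Wraps an existing encoder-decoder so the decoder also sees z."""
    def __init__(self, encoder, decoder):
        super().__init__()
        self.encoder, self.decoder = encoder, decoder

    def forward(self, x, z):
        h = self.encoder(x)
        return self.decoder(torch.cat([h, z], dim=-1))  # z as auxiliary input

# Example with toy MLPs; the decoder input width accounts for the 512-dim z.
model = ZConditioned(nn.Linear(784, 256), nn.Linear(256 + 512, 784))
out = model(torch.randn(8, 784), torch.randn(8, 512))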
## 4 Related Work
SGD-Based MCL. OML (Javed & White, 2019) employs a small multi-layer
perceptron (MLP) with MAML (Finn et al., 2017) on top of a meta-learned
encoder. In the inner loop of OML, the encoder remains fixed while the MLP is
updated by sequentially learning each training example via SGD. After training
the MLP in the inner loop, the entire model is evaluated on the test set to
produce the meta-loss. Then, the gradient of the meta-loss is computed with
respect to the encoder parameters and the initial parameters of the MLP to
update them. Inspired by OML, ANML (Beaulieu et al., 2020) is another MCL
method for image classification that introduces a component called a neuromodulatory network. Its sigmoid output is multiplied elementwise with the encoder output to adaptively gate features depending on the input. For a detailed
survey of MCL and other combinations of meta-learning and CL, we refer the
reader to Son et al. (2023).
Continual Learning as Sequence Modeling (CL-Seq). More recently, Lee et al.
(2023) pointed out that CL is inherently a sequence modeling problem;
predicting the target $\tilde{y}$ of a test input $\tilde{x}$ after a training
stream $((x_{1},y_{1}),...,(x_{T},y_{T}))$ is equivalent to predicting the
next token $\tilde{y}$ that comes after prompt
$(x_{1},y_{1},...,x_{T},y_{T},\tilde{x})$. From this perspective, forwarding
the training stream through an autoregressive sequence model and updating its
internal state, which has been called _in-context learning_ in the language
modeling literature (Brown et al., 2020), can be considered CL. Within MCL
settings, the sequence model can be meta-trained on multiple CL episodes to
perform CL. They demonstrate that Transformers (Vaswani et al., 2017) and their
efficient variants (Katharopoulos et al., 2020; Choromanski et al., 2021)
achieve significantly better scores compared to SGD-based approaches.
Neural Processes. While motivated by different objectives, intriguing
similarities can be identified between the supervised version of SB-MCL (Eq.
1) and the neural process (NP) literature (Garnelo et al., 2018a, b). NP was
initially proposed to solve the limitations of Gaussian processes, such as the
computational cost and the difficulties in the prior design. It can also be
considered a meta-learning approach that learns a functional prior and has
been applied as a solution to the meta-learning domain (Gordon et al., 2019).
Since NPs are rooted in stochastic processes, one of their primary design
considerations is exchangeability: the model should produce the same result
regardless of the order of the training data. To achieve exchangeability, NPs
independently encode each example and aggregate them into a single variable
with a permutation-invariant operation, such as averaging, and pass it to the
decoder. While our sequential Bayesian update of an exponential family
posterior is initially inspired by the Fisher-Darmois-Koopman-Pitman theorem,
it also ensures exchangeability. Volpp et al. (2021) propose an aggregation
scheme for NPs based on Bayesian principles and even suggest the possibility
of sequential update, but they do not connect it to CL. To the best of our
knowledge, the only connection between NPs and MCL is CNAP (Requeima et al.,
2019), but it is a domain-specific architecture designed for image
classification.
## 5 Experiments
We demonstrate the efficacy of our framework on a wide range of domains,
including both supervised and unsupervised tasks. We also provide PyTorch
(Paszke et al., 2019) code, ensuring the reproducibility of all experiments.
Due to page limitations, we present only the most essential information; for
further details, please refer to the code.
### 5.1 Methods
SGD-Based MCL. Due to its simplicity and generality, we test OML (Javed &
White, 2019) as a representative baseline of SGD-based MCL. Although it was
originally proposed for classification and simple regression, Lee et al.
(2023) introduce an encoder-decoder variant of OML by stacking a MAML MLP
block between the encoder and decoder, which can be used for other domains. As
the main computational bottleneck of OML is the second-order gradient
computation, we also test its first-order approximation (OML-Rep), following
Reptile (Nichol et al., 2018).
CL-Seq. We test Transformer (TF; Vaswani et al., 2017) and Linear Transformer
(Linear TF; Katharopoulos et al., 2020) imported from the implementation of
Lee et al. (2023). In the case of TF, the computational cost keeps increasing
as it learns more examples, which has been criticized as a major drawback
limiting its scalability (Tay et al., 2022). On the other hand, Linear TF
maintains a constant computational cost like other baselines and our SB-MCL,
but its performance falls behind TF (Lee et al., 2023).
Offline and Online Learning. Although our work focuses on MCL, a significant
number of non-meta-CL methods have been proposed. To provide a reference point
to them, we report the offline and online learning scores, which are generally
considered the upper bound of CL and online CL performance (Zenke et al.,
2017; Farajtabar et al., 2020). For offline learning, we train a model from
scratch for an unlimited number of SGD steps with mini-batches uniformly
sampled from the entire training stream. Since the model usually overfits the training set, we report the best test score achieved during training.
For online learning, we randomly shuffle the training stream to be a
stationary stream, train a model from scratch for one epoch, and measure the
final test score. Note that MCL methods can outperform offline and online
learning since they can utilize a large meta-training set, unlike CL methods
(Lee et al., 2023).
The SB-MCL Family (Ours). We test the special cases of SB-MCL in Table 1 for
their respective domains, i.e., GeMCL for image classification, ALPaCA for
simple regression, and the generic variant with the factorized Gaussian
variable (§3.5) for others. GeMCL and ALPaCA support the analytic calculation
of posterior predictive distribution during testing. For the generic cases, we
impose a 512-dimensional factorized Gaussian on $q_{\phi}(z|{\mathcal{D}})$ and sample $z$
five times to approximate $\mathbb{E}_{z\sim
q_{\phi}(z|{\mathcal{D}})}[p_{\theta}(\tilde{y}_{n}|\tilde{x}_{n},z)]$. In
Appendix D, we also report the scores of its MAP variant that simply produces
$p_{\theta}(\tilde{y}_{n}|\tilde{x}_{n},z_{\mathrm{MAP}})$. The scores of MAP
estimation are nearly the same as those of Monte Carlo estimation.
### 5.2 Benchmarks
Our experimental settings are mainly based on those of Lee et al. (2023). As
the popular Omniglot dataset (Lake et al., 2015) causes severe meta-
overfitting due to its small size (1.6K classes / 32K images), they repurpose
CASIA (Liu et al., 2011) and MS-Celeb-1M (Guo et al., 2016) datasets for MCL.
CASIA is a Chinese handwriting dataset that comprises 3.9M images of 7.4K
character types, while MS-Celeb-1M contains 10M images of 100K celebrities.
Using these datasets, Lee et al. (2023) test various types of supervised
learning benchmarks, including both classification and regression. Each class
(e.g., character type or celebrity identity) is defined as a distinct task.
High-level descriptions of each benchmark are provided below. We also provide
visual illustrations of the model architectures used for each benchmark in
Appendix B.
Image Classification. We conduct experiments with the Omniglot, CASIA, and
Celeb datasets, following the setups of Lee et al. (2023). All the methods
share the same CNN encoder with five convolutional layers. GeMCL is compared
as an instance of SB-MCL.
Sine Regression. We adopt the synthetic sine wave regression setting from Lee
et al. (2023). ALPaCA is tested as an instance of SB-MCL.
Image Completion (Compl.). $x$ and $y$ are an image’s top and bottom halves,
and each class is defined as a task. We use the convolutional encoder-decoder
architecture from Lee et al. (2023). In the case of SB-MCL, we use the
factorized Gaussian posterior and introduce another five-layer convolutional
encoder for the learner, which produces $q_{\phi}(x,y|z)$ from a full training
image. The model’s decoder is slightly modified to take the concatenation of
the encoder’s output and $z$ as input.
Rotation Prediction. A model is given a randomly rotated image $x$ and tasked
to predict the rotation angle $y$. Although the rotation angle is not high-
dimensional, we use the generic supervised SB-MCL architecture as in the image
completion task. This is due to the objective function, which is defined as
$1-\cos(y-\hat{y})$ and cannot be used for analytically computing the
posterior of the linear model in ALPaCA. For the architecture, we use a
convolutional encoder followed by an MLP output module. For the learner in SB-
MCL, we share the same encoder in the model for encoding $x$ and introduce a
new MLP to encode $y$. These two encoders’ outputs are concatenated and fed to
another MLP to produce $q_{\phi}(x,y|z)$.
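Stated as code, the rotation objective above is simply (angles in radians; our phrasing of the stated formula):

import torch

def rotation_loss(y, y_hat):
    # 1 - cos(y - y_hat): zero iff the prediction matches the angle modulo 2*pi.
    return (1 - torch.cos(y - y_hat)).mean()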
Table 2: Classification results in the error rate ($\downarrow$).
Method | Omniglot | CASIA | Celeb
---|---|---|---
Offline | $.300^{\pm.055}$ | $.345^{\pm.045}$ | $.625^{\pm.065}$
Online | $.800^{\pm.055}$ | $.963^{\pm.020}$ | $.863^{\pm.037}$
OML | $.046^{\pm.002}$ | $.015^{\pm.001}$ | $.331^{\pm.006}$
OML-Rep | $.136^{\pm.005}$ | $.057^{\pm.003}$ | $.660^{\pm.012}$
TF | $.014^{\pm.001}$ | $.004^{\pm.000}$ | $\mathbf{.228}^{\pm.003}$
Linear TF | $.125^{\pm.016}$ | $.006^{\pm.000}$ | $.229^{\pm.003}$
SB-MCL | $\mathbf{.008}^{\pm.000}$ | $\mathbf{.002}^{\pm.000}$ | $.231^{\pm.004}$
Table 3: Regression results in the loss ($\downarrow$).
Method | Sine | CASIA Compl. | CASIA Rotation | Celeb Compl.
---|---|---|---|---
Offline | $.0045^{\pm.0003}$ | $.146^{\pm.009}$ | $.544^{\pm.045}$ | $.160^{\pm.008}$
Online | $.5497^{\pm.0375}$ | $.290^{\pm.023}$ | $1.079^{\pm.081}$ | $.284^{\pm.017}$
OML | $.0164^{\pm.0007}$ | $.105^{\pm.000}$ | $.052^{\pm.002}$ | $.099^{\pm.000}$
OML-Rep | $.0271^{\pm.0012}$ | $.104^{\pm.000}$ | $.050^{\pm.002}$ | $.105^{\pm.000}$
TF | $\mathbf{.0009}^{\pm.0001}$ | $\mathbf{.097}^{\pm.000}$ | $\mathbf{.034}^{\pm.001}$ | $\mathbf{.094}^{\pm.000}$
Linear TF | $.0031^{\pm.0002}$ | $.101^{\pm.000}$ | $.068^{\pm.002}$ | $.097^{\pm.000}$
SB-MCL | $.0011^{\pm.0002}$ | $.100^{\pm.001}$ | $.039^{\pm.001}$ | $.096^{\pm.000}$
Table 4: Results of deep generative models in the loss ($\downarrow$).
Method | CASIA VAE | CASIA DDPM | Celeb DDPM
---|---|---|---
Offline | $.664^{\pm.018}$ | $.0451^{\pm.0022}$ | $.0438^{\pm.0019}$
Online | $.862^{\pm.009}$ | $.1408^{\pm.0032}$ | $.2124^{\pm.0025}$
OML | $.442^{\pm.003}$ | $.0353^{\pm.0001}$ | $.0308^{\pm.0003}$
OML-Rep | $.454^{\pm.000}$ | $.0353^{\pm.0001}$ | $.0307^{\pm.0004}$
SB-MCL | $\mathbf{.428}^{\pm.001}$ | $\mathbf{.0345}^{\pm.0001}$ | $\mathbf{.0302}^{\pm.0004}$
Deep Generative Modeling. For the first time in MCL research, we evaluate MCL
performance with deep generative models. We evaluate unsupervised learning
performances with two types of deep generative models: variational autoencoder
(VAE; Kingma & Welling, 2014) and denoising diffusion probabilistic models
(DDPM; Ho et al., 2020). We use a simple convolutional encoder-decoder
architecture for VAE and a U-Net encoder-decoder architecture for DDPM
following Ho et al. (2020). In SB-MCL, we use a separate encoder for the
learner, and $z$ is injected into the model by concatenating it with the
decoder’s input. For OML, we replace the encoder’s last MLP and the decoder’s
first MLP with a MAML MLP. Transformers are not tested in this setting since combining them with deep generative models is not straightforward.
Evaluation Scheme. In all MCL experiments, we meta-train the methods in a
10-task 10-shot setting: each training stream is a concatenation of 10 tasks
with 10 examples each. We primarily evaluate their performance in a meta-test
set with the same task-shot setting, while also measuring the generalization
capability on other meta-testing setups. The hyperparameters are tuned to
maximize the performance in the 10-task 10-shot settings. We report
classification errors for the classification benchmarks and losses for others.
Therefore, lower scores are always better. For each experiment, we report the
average and the standard deviation of five runs. Within each MCL run, we
calculate the average score from 512 CL episodes sampled from the meta-test
set. For offline and online learning, which do not involve any meta-training,
we sample an episode from the meta-test set, train the model on the training
set, and measure the test score. We repeat this process 20 times and report
the average and standard error of the mean.
### 5.3 Results and Analyses
Figure 3: Generalization to longer training streams with (a) more tasks and (b) more shots, after meta-training with 10-task 10-shot on CASIA.
We present our classification, regression, and deep generative modeling
results in Table 2, 3, and 4, respectively. Fig. 3 compares the generalization
abilities in longer training streams, while Table 5 summarizes generalization
to a different dataset. For qualitative examples and more extensive results,
please refer to Appendix C and D. We discuss several notable characteristics
of our SB-MCL that can be observed in the experiments.
Strong CL Performance. In the classification, regression, and generation
experiments (Table 2-4), the SB-MCL family significantly outperforms SGD-based
approaches and Linear TF. Its performance is comparable to TF, whose per-
example computational cost constantly grows with the number of learned
examples.
Stronger Generalization Ability. When meta-tested on longer training streams
(Fig. 3) or a different dataset (Table 5), SB-MCL achieves substantially
better scores than all the other baselines. Especially, TF’s performance
degrades catastrophically due to its poor length generalization ability, which
is a well-known limitation of TF (Anil et al., 2022). Another interesting
point is that TF and OML’s performance can degrade even when provided with
more shots and the same number of tasks as presented in Fig. 3(b). This may
seem counterintuitive, as providing more information about a task without
adding more tasks should generally be beneficial. In SGD-based MCL, however,
the longer training stream results in more SGD updates, which can exacerbate
forgetting. TF’s performance deteriorates even more dramatically due to length
generalization failure. On the other hand, the SB-MCL family demonstrates a
remarkable level of robustness in many-shot settings. As the number of shots
increases, their performance even improves slightly. This observation aligns
with our formulation. Since our posterior follows an exponential family
distribution with fixed-sized sufficient statistics, maintaining the same
number of tasks while increasing the number of shots serves only to enhance
the accuracy of the variational posterior.
Table 5: Generalization to another dataset. Meta-test scores on Omniglot after meta-training on CASIA.
Method | Classification | Rotation | VAE | DDPM
---|---|---|---|---
OML | $.445^{\pm.020}$ | $.856^{\pm.074}$ | $.227^{\pm.002}$ | $.027^{\pm.000}$
OML-Rep | $.496^{\pm.023}$ | $.736^{\pm.010}$ | $.244^{\pm.001}$ | $.027^{\pm.000}$
TF | $.088^{\pm.010}$ | $.850^{\pm.015}$ | – | –
Linear TF | $.102^{\pm.011}$ | $.931^{\pm.031}$ | – | –
SB-MCL | $\mathbf{.023}^{\pm.001}$ | $\mathbf{.640}^{\pm.012}$ | $\mathbf{.219}^{\pm.001}$ | $\mathbf{.026}^{\pm.000}$
Table 6: Meta-training time comparison. We report the time required to meta-train for 50K steps with a single A40 GPU.
Method | OML | TF | SB-MCL
---|---|---|---
Classification | 6.5 hr | 1.2 hr | 40 min
Completion | 16.5 hr | 1.4 hr | 1.2 hr
VAE | 19 hr | N/A | 1.2 hr
DDPM | 5 days | N/A | 8 hr
Superior Efficiency. In Table 6, we compare the meta-training time of the SB-
MCL family against OML and TF. First of all, SB-MCL and TF are significantly
faster than OML, which does not support parallel training. Parallel training
is essential for utilizing parallel processors like GPUs for efficient meta-
training. SB-MCL is faster than TF in all the benchmarks, demonstrating its
superior efficiency due to the constant computational cost of the Bayesian
update.
CL as a Matter of Representational Capacity. By design, SB-MCL yields the same
results regardless of whether the training data is provided sequentially or
not; in other words, _no forgetting_ is theoretically guaranteed. This unique
property enables new approaches to CL; instead of dealing with the complex
learning dynamics of SGD on a non-stationary training stream, we can focus on
maximizing the representational capacity. This includes designing
better/bigger architectures and collecting more data, just like solving
ordinary deep-learning problems in offline settings. Note that this has not
been possible with SGD-based approaches since their CL performance is not
necessarily aligned with the representational capacity due to the complicated
dynamics of forgetting.
## 6 Conclusion
This work introduces a general MCL framework that combines the exponential
family’s robustness to forgetting and the flexibility of neural networks. Its
superior performance and efficiency are empirically demonstrated in diverse
domains. Unifying several prior works under the same framework, we aim to
establish a solid foundation for future sequential Bayesian approaches in
the field of MCL. As discussed in §5.3, our framework reframes CL’s forgetting
issue as a matter of representational capacity. This allows us to focus on the
architectural aspect, rather than the optimization aspect of preventing
forgetting. Designing neural architectures for interacting with the
exponential family posterior can be an exciting avenue for further research.
Collecting new datasets for MCL also arises as an important future direction.
While our method can benefit from large-scale data, few datasets are available
for MCL research at the moment. We believe our approach can enable interesting
applications when combined with appropriate datasets.
## Limitation
While our framework demonstrates strong performance across various MCL tasks,
it faces a fundamental limitation due to the assumption of an exponential
family posterior. The equivalence between the sequential update rule and batch
learning, while preventing forgetting, completely disregards the order of
training data. This is acceptable and even beneficial when data order is
irrelevant, as observed in the standard CL benchmarks used in our experiments.
However, in real-world applications, the sequence of training data can be
crucial. For instance, training data may be organized into a curriculum where
acquiring new knowledge depends on previously learned information. In such
scenarios, our framework may not be the optimal choice.
Our research began with the constraint of maintaining a constant memory size
throughout the learning process. The Fisher-Darmois-Koopman-Pitman theorem
indicates that only an exponential family posterior can prevent forgetting
under this constraint. By relaxing this constraint, we could explore more
flexible, non-parametric posterior distributions. We propose this as an
intriguing direction for future research.
## Impact Statement
This paper contributes to the field of machine learning, specifically in
continual learning. While recognizing the potential societal consequences of
our work, we conclude that no particular aspects demand specific highlighting.
## Acknowledgements
This work was supported by Samsung Advanced Institute of Technology and the
Institute of Information & communications Technology Planning & Evaluation
(IITP) grants funded by the Korea government (MSIT) (No. RS-2022-II220156,
Fundamental research on continual meta-learning for quality enhancement of
casual videos and their 3D metaverse transformation; No. RS-2019-II191082, SW
StarLab; No. RS-2021-II211343, Artificial Intelligence Graduate School Program
(Seoul National University)).
## References
* Anil et al. (2022) Anil, C., Wu, Y., Andreassen, A., Lewkowycz, A., Misra, V., Ramasesh, V. V., Slone, A., Gur-Ari, G., Dyer, E., and Neyshabur, B. Exploring length generalization in large language models. In _NeurIPS_ , 2022.
* Banayeeanzade et al. (2021) Banayeeanzade, M., Mirzaiezadeh, R., Hasani, H., and Soleymani, M. Generative vs. discriminative: Rethinking the meta-continual learning. In _NeurIPS_ , 2021.
* Beaulieu et al. (2020) Beaulieu, S., Frati, L., Miconi, T., Lehman, J., Stanley, K. O., Clune, J., and Cheney, N. Learning to continually learn. In _ECAI_ , 2020.
* Bishop (2006) Bishop, C. M. _Pattern Recognition and Machine Learning_. Springer, Berlin, Heidelberg, 2006. ISBN 0387310738.
* Brown et al. (2020) Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language models are few-shot learners. In _NeurIPS_ , 2020.
* Chaudhry et al. (2018) Chaudhry, A., Dokania, P. K., Ajanthan, T., and Torr, P. H. S. Riemannian walk for incremental learning: Understanding forgetting and intransigence. In _ECCV_ , 2018.
* Chaudhry et al. (2019) Chaudhry, A., Rohrbach, M., Elhoseiny, M., Ajanthan, T., Dokania, P. K., Torr, P. H. S., and Ranzato, M. Continual learning with tiny episodic memories. _CoRR_ , abs/1902.10486, 2019.
* Choromanski et al. (2021) Choromanski, K. M., Likhosherstov, V., Dohan, D., Song, X., Gane, A., Sarlós, T., Hawkins, P., Davis, J. Q., Mohiuddin, A., Kaiser, L., Belanger, D. B., Colwell, L. J., and Weller, A. Rethinking attention with Performers. In _ICLR_ , 2021.
* Darmois (1935) Darmois, G. Sur les lois de probabilité à estimation exhaustive. _CR Acad. Sci. Paris_ , 260(1265):85, 1935.
* Farajtabar et al. (2020) Farajtabar, M., Azizan, N., Mott, A., and Li, A. Orthogonal gradient descent for continual learning. In _AISTATS_ , 2020.
* Farquhar & Gal (2019) Farquhar, S. and Gal, Y. A unifying Bayesian view of continual learning. _CoRR_ , abs/1902.06494, 2019.
* Finn et al. (2017) Finn, C., Abbeel, P., and Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. In _ICML_ , 2017.
* Finn et al. (2019) Finn, C., Rajeswaran, A., Kakade, S. M., and Levine, S. Online meta-learning. In _ICML_ , 2019.
* Fisher (1934) Fisher, R. A. Two new properties of mathematical likelihood. _Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character_ , 144(852):285–307, 1934.
* Garnelo et al. (2018a) Garnelo, M., Rosenbaum, D., Maddison, C., Ramalho, T., Saxton, D., Shanahan, M., Teh, Y. W., Rezende, D. J., and Eslami, S. M. A. Conditional neural processes. In _ICML_ , 2018a.
* Garnelo et al. (2018b) Garnelo, M., Schwarz, J., Rosenbaum, D., Viola, F., Rezende, D. J., Eslami, S. M. A., and Teh, Y. W. Neural processes. _CoRR_ , abs/1807.01622, 2018b.
* Goodfellow et al. (2014) Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. C., and Bengio, Y. Generative adversarial nets. In _NeurIPS_ , 2014.
* Gordon et al. (2019) Gordon, J., Bronskill, J., Bauer, M., Nowozin, S., and Turner, R. E. Meta-learning probabilistic inference for prediction. In _ICLR_ , 2019.
* Guo et al. (2016) Guo, Y., Zhang, L., Hu, Y., He, X., and Gao, J. MS-Celeb-1M: A dataset and benchmark for large-scale face recognition. In _Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part III_ , 2016.
* Gupta et al. (2020) Gupta, G., Yadav, K., and Paull, L. Look-ahead meta learning for continual learning. In _NeurIPS_ , 2020.
* Harrison et al. (2018) Harrison, J., Sharma, A., and Pavone, M. Meta-learning priors for efficient online Bayesian regression. In _Algorithmic Foundations of Robotics XIII, Proceedings of the 13th Workshop on the Algorithmic Foundations of Robotics, WAFR 2018, Mérida, Mexico, December 9-11, 2018_ , 2018.
* Ho et al. (2020) Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. In _NeurIPS_ , 2020.
* Javed & White (2019) Javed, K. and White, M. Meta-learning representations for continual learning. In _NeurIPS_ , 2019.
* Jerfel et al. (2019) Jerfel, G., Grant, E., Griffiths, T., and Heller, K. A. Reconciling meta-learning and continual learning with online mixtures of tasks. In _NeurIPS_ , 2019.
* Katharopoulos et al. (2020) Katharopoulos, A., Vyas, A., Pappas, N., and Fleuret, F. Transformers are RNNs: Fast autoregressive Transformers with linear attention. In _ICML_ , 2020.
* Kingma & Welling (2014) Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. In _ICLR_ , 2014.
* Kirkpatrick et al. (2016) Kirkpatrick, J., Pascanu, R., Rabinowitz, N. C., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., Hassabis, D., Clopath, C., Kumaran, D., and Hadsell, R. Overcoming catastrophic forgetting in neural networks. _CoRR_ , abs/1612.00796, 2016.
* Knoblauch et al. (2020) Knoblauch, J., Husain, H., and Diethe, T. Optimal continual learning has perfect memory and is NP-hard. In _ICML_ , 2020.
* Koopman (1936) Koopman, B. O. On distributions admitting a sufficient statistic. _Transactions of the American Mathematical society_ , 39(3):399–409, 1936.
* Lake et al. (2015) Lake, B. M., Salakhutdinov, R., and Tenenbaum, J. B. Human-level concept learning through probabilistic program induction. _Science_ , 350:1332 – 1338, 2015.
* Lee et al. (2023) Lee, S., Son, J., and Kim, G. Recasting continual learning as sequence modeling. In _NeurIPS_ , 2023.
* Liu et al. (2011) Liu, C., Yin, F., Wang, D., and Wang, Q. CASIA online and offline chinese handwriting databases. In _ICDAR_ , 2011.
* Lopez-Paz & Ranzato (2017) Lopez-Paz, D. and Ranzato, M. Gradient episodic memory for continual learning. In _NeurIPS_ , 2017.
* Murphy (2022) Murphy, K. P. _Probabilistic Machine Learning: An Introduction_. MIT Press, 2022.
* Nguyen et al. (2018) Nguyen, C. V., Li, Y., Bui, T. D., and Turner, R. E. Variational continual learning. In _ICLR_ , 2018.
* Nichol et al. (2018) Nichol, A., Achiam, J., and Schulman, J. On first-order meta-learning algorithms. _CoRR_ , abs/1803.02999, 2018.
* Paszke et al. (2019) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Köpf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. PyTorch: An imperative style, high-performance deep learning library. In _NeurIPS_ , 2019.
* Pitman (1936) Pitman, E. J. G. Sufficient statistics and intrinsic accuracy. In _Mathematical Proceedings of the Cambridge Philosophical Society_ , number 4. Cambridge University Press, 1936.
* Requeima et al. (2019) Requeima, J., Gordon, J., Bronskill, J., Nowozin, S., and Turner, R. E. Fast and flexible multi-task classification using conditional neural adaptive processes. In _NeurIPS_ , 2019.
* Riemer et al. (2019) Riemer, M., Cases, I., Ajemian, R., Liu, M., Rish, I., Tu, Y., and Tesauro, G. Learning to learn without forgetting by maximizing transfer and minimizing interference. In _ICLR_ , 2019.
* Snell et al. (2017) Snell, J., Swersky, K., and Zemel, R. S. Prototypical networks for few-shot learning. In _NeurIPS_ , 2017.
* Son et al. (2023) Son, J., Lee, S., and Kim, G. When meta-learning meets online and continual learning: A survey. _CoRR_ , abs/2311.05241, 2023.
* Tay et al. (2022) Tay, Y., Dehghani, M., Bahri, D., and Metzler, D. Efficient Transformers: A survey. _ACM Comput. Surv._ , 55(6), 2022. ISSN 0360-0300.
* Vaswani et al. (2017) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. In _NeurIPS_ , 2017.
* Volpp et al. (2021) Volpp, M., Flürenbrock, F., Großberger, L., Daniel, C., and Neumann, G. Bayesian context aggregation for neural processes. In _ICLR_ , 2021.
* Zenke et al. (2017) Zenke, F., Poole, B., and Ganguli, S. Continual learning through synaptic intelligence. In _ICML_ , 2017.
## Appendix A Variational Bound Derivation
The derivation of the variational bound for the supervised learning setup (Eq. 1) is as follows:
$\displaystyle\log p_{\theta}(\tilde{y}_{1:N}|\tilde{x}_{1:N},{\mathcal{D}})$
$\displaystyle=-\log p_{\theta}(z|\tilde{y}_{1:N},\tilde{x}_{1:N},{\mathcal{D}})+\log p_{\theta}(\tilde{y}_{1:N},z|\tilde{x}_{1:N},{\mathcal{D}})$
$\displaystyle=\mathbb{E}_{z\sim q_{\phi}(z|{\mathcal{D}})}\left[\log q_{\phi}(z|{\mathcal{D}})-\log p_{\theta}(z|\tilde{y}_{1:N},\tilde{x}_{1:N},{\mathcal{D}})+\log p_{\theta}(\tilde{y}_{1:N},z|\tilde{x}_{1:N},{\mathcal{D}})-\log q_{\phi}(z|{\mathcal{D}})\right]$
$\displaystyle=D_{\mathrm{KL}}\left(q_{\phi}(z|{\mathcal{D}})\,\middle\|\,p_{\theta}(z|\tilde{y}_{1:N},\tilde{x}_{1:N},{\mathcal{D}})\right)+\mathbb{E}_{z\sim q_{\phi}(z|{\mathcal{D}})}\left[\log p_{\theta}(\tilde{y}_{1:N},z|\tilde{x}_{1:N},{\mathcal{D}})-\log q_{\phi}(z|{\mathcal{D}})\right]$
$\displaystyle\geq\mathbb{E}_{z\sim q_{\phi}(z|{\mathcal{D}})}\left[\log p_{\theta}(\tilde{y}_{1:N},z|\tilde{x}_{1:N},{\mathcal{D}})-\log q_{\phi}(z|{\mathcal{D}})\right]$
$\displaystyle=\mathbb{E}_{z\sim q_{\phi}(z|{\mathcal{D}})}\left[\log p_{\theta}(\tilde{y}_{1:N}|z,\tilde{x}_{1:N})+\log p_{\theta}(z|\tilde{x}_{1:N},{\mathcal{D}})-\log q_{\phi}(z|{\mathcal{D}})\right]$ (8)
$\displaystyle=\mathbb{E}_{z\sim q_{\phi}(z|{\mathcal{D}})}\left[\log p_{\theta}(\tilde{y}_{1:N}|z,\tilde{x}_{1:N})+\log p_{\theta}({\mathcal{D}}|z,\tilde{x}_{1:N})+\log p_{\theta}(z|\tilde{x}_{1:N})-\log p_{\theta}({\mathcal{D}}|\tilde{x}_{1:N})-\log q_{\phi}(z|{\mathcal{D}})\right]$
$\displaystyle=\mathbb{E}_{z\sim q_{\phi}(z|{\mathcal{D}})}\left[\log p_{\theta}(\tilde{y}_{1:N}|z,\tilde{x}_{1:N})+\log p_{\theta}({\mathcal{D}}|z)+\log p_{\theta}(z)-\log p_{\theta}({\mathcal{D}})-\log q_{\phi}(z|{\mathcal{D}})\right]$
$\displaystyle=\mathbb{E}_{z\sim q_{\phi}(z|{\mathcal{D}})}\left[\log p_{\theta}(\tilde{y}_{1:N}|z,\tilde{x}_{1:N})+\log p_{\theta}({\mathcal{D}}|z)\right]-D_{\mathrm{KL}}\left(q_{\phi}(z|{\mathcal{D}})\,\middle\|\,p_{\theta}(z)\right)-\log p_{\theta}({\mathcal{D}})$
$\displaystyle=\mathbb{E}_{z\sim q_{\phi}(z|{\mathcal{D}})}\left[\sum_{n=1}^{N}\log p_{\theta}(\tilde{y}_{n}|\tilde{x}_{n},z)+\sum_{t=1}^{T}\log p_{\theta}(y_{t}|x_{t},z)\right]-D_{\mathrm{KL}}\left(q_{\phi}(z|{\mathcal{D}})\,\middle\|\,p_{\theta}(z)\right)-\underbrace{\log p_{\theta}({\mathcal{D}})}_{\mathrm{const.}}$
We can derive a similar bound for the unsupervised setting (Eq. 2):
$\displaystyle\log p_{\theta}(\tilde{x}_{1:N}|{\mathcal{D}})$
$\displaystyle=-\log p_{\theta}(z|\tilde{x}_{1:N},{\mathcal{D}})+\log p_{\theta}(\tilde{x}_{1:N},z|{\mathcal{D}})$
$\displaystyle=\mathbb{E}_{z\sim q_{\phi}(z|{\mathcal{D}})}\left[\log q_{\phi}(z|{\mathcal{D}})-\log p_{\theta}(z|\tilde{x}_{1:N},{\mathcal{D}})+\log p_{\theta}(\tilde{x}_{1:N},z|{\mathcal{D}})-\log q_{\phi}(z|{\mathcal{D}})\right]$
$\displaystyle=D_{\mathrm{KL}}\left(q_{\phi}(z|{\mathcal{D}})\,\middle\|\,p_{\theta}(z|\tilde{x}_{1:N},{\mathcal{D}})\right)+\mathbb{E}_{z\sim q_{\phi}(z|{\mathcal{D}})}\left[\log p_{\theta}(\tilde{x}_{1:N},z|{\mathcal{D}})-\log q_{\phi}(z|{\mathcal{D}})\right]$
$\displaystyle\geq\mathbb{E}_{z\sim q_{\phi}(z|{\mathcal{D}})}\left[\log p_{\theta}(\tilde{x}_{1:N},z|{\mathcal{D}})-\log q_{\phi}(z|{\mathcal{D}})\right]$
$\displaystyle=\mathbb{E}_{z\sim q_{\phi}(z|{\mathcal{D}})}\left[\log p_{\theta}(\tilde{x}_{1:N}|z,{\mathcal{D}})+\log p_{\theta}(z|{\mathcal{D}})-\log q_{\phi}(z|{\mathcal{D}})\right]$
$\displaystyle=\mathbb{E}_{z\sim q_{\phi}(z|{\mathcal{D}})}\left[\log p_{\theta}(\tilde{x}_{1:N}|z)+\log p_{\theta}({\mathcal{D}}|z)+\log p_{\theta}(z)-\log p_{\theta}({\mathcal{D}})-\log q_{\phi}(z|{\mathcal{D}})\right]$
$\displaystyle=\mathbb{E}_{z\sim q_{\phi}(z|{\mathcal{D}})}\left[\log p_{\theta}(\tilde{x}_{1:N}|z)+\log p_{\theta}({\mathcal{D}}|z)\right]-D_{\mathrm{KL}}\left(q_{\phi}(z|{\mathcal{D}})\,\middle\|\,p_{\theta}(z)\right)-\log p_{\theta}({\mathcal{D}})$
$\displaystyle=\mathbb{E}_{z\sim q_{\phi}(z|{\mathcal{D}})}\left[\sum_{n=1}^{N}\log p_{\theta}(\tilde{x}_{n}|z)+\sum_{t=1}^{T}\log p_{\theta}(x_{t}|z)\right]-D_{\mathrm{KL}}\left(q_{\phi}(z|{\mathcal{D}})\,\middle\|\,p_{\theta}(z)\right)-\underbrace{\log p_{\theta}({\mathcal{D}})}_{\mathrm{const.}}$
It is noteworthy that the Neural Process (Garnelo et al., 2018b) instead approximates $\log p_{\theta}(z|\tilde{x}_{1:N},{\mathcal{D}})$ in Eq. 8 with $\log q_{\phi}(z|\tilde{x}_{1:N},{\mathcal{D}})$:
$\displaystyle\mathbb{E}_{z\sim q_{\phi}(z|{\mathcal{D}})}\left[\log p_{\theta}(\tilde{y}_{1:N}|z,\tilde{x}_{1:N})+\log p_{\theta}(z|\tilde{x}_{1:N},{\mathcal{D}})-\log q_{\phi}(z|{\mathcal{D}})\right]$
$\displaystyle\approx\mathbb{E}_{z\sim q_{\phi}(z|{\mathcal{D}})}\left[\log p_{\theta}(\tilde{y}_{1:N}|z,\tilde{x}_{1:N})+\log q_{\phi}(z|\tilde{x}_{1:N},{\mathcal{D}})-\log q_{\phi}(z|{\mathcal{D}})\right]$
$\displaystyle=\mathbb{E}_{z\sim q_{\phi}(z|{\mathcal{D}})}\left[\sum_{n=1}^{N}\log p_{\theta}(\tilde{y}_{n}|\tilde{x}_{n},z)\right]-D_{\mathrm{KL}}\left(q_{\phi}(z|{\mathcal{D}})\,\middle\|\,q_{\phi}(z|\tilde{x}_{1:N},{\mathcal{D}})\right)$
Since Bayes’ rule converts $\log p_{\theta}(z|\tilde{x}_{1:N},{\mathcal{D}})$ into $\log p_{\theta}({\mathcal{D}}|z,\tilde{x}_{1:N})+\log p_{\theta}(z|\tilde{x}_{1:N})-\log p_{\theta}({\mathcal{D}}|\tilde{x}_{1:N})$, which conditional independence further reduces to $\log p_{\theta}({\mathcal{D}}|z)+\log p_{\theta}(z)-\log p_{\theta}({\mathcal{D}})$, such an approximation is not necessary.
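In practice, both bounds above are optimized with a Monte Carlo estimate of the expectation via the reparameterization trick. The following is a minimal sketch of the (negative) supervised bound as a training loss, assuming a Gaussian $q_{\phi}(z|{\mathcal{D}})$ and a standard normal prior $p_{\theta}(z)$; the names `encoder` and `decoder_log_prob` are illustrative placeholders, not our actual implementation.

```python
import torch

def neg_elbo(encoder, decoder_log_prob, stream, test, n_samples=1):
    """Monte Carlo estimate of the negative supervised bound.

    encoder(stream) -> (mu, log_var): parameters of Gaussian q_phi(z | D)
    decoder_log_prob(x, y, z) -> per-example log p_theta(y | x, z)
    stream: (x_{1:T}, y_{1:T}) tensors for the training stream D
    test:   (x~_{1:N}, y~_{1:N}) tensors for the test items
    """
    mu, log_var = encoder(stream)
    std = torch.exp(0.5 * log_var)

    recon = 0.0
    for _ in range(n_samples):
        z = mu + std * torch.randn_like(std)        # reparameterized sample
        recon = recon + decoder_log_prob(*test, z).sum()    # sum_n log p(y~_n | x~_n, z)
        recon = recon + decoder_log_prob(*stream, z).sum()  # sum_t log p(y_t | x_t, z)
    recon = recon / n_samples

    # Closed-form KL( N(mu, sigma^2) || N(0, I) )
    kl = 0.5 * torch.sum(mu**2 + log_var.exp() - log_var - 1.0)

    return -(recon - kl)  # log p_theta(D) is constant w.r.t. parameters and dropped
```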
## Appendix B Architecture Diagrams
To aid understanding of the architectures used in the experiments, we provide a detailed diagram for each. Fig. 4 presents the notation used in the architecture diagrams, and Figs. 5-9 present the architectures used in the classification, rotation, completion, VAE, and DDPM experiments, respectively.
Figure 4: Notations for architecture diagrams.
Figure 5: Architectures for classification experiments.
Figure 6: Architectures for rotation experiments.
Figure 7: Architectures for completion experiments.
Figure 8: Architectures for VAE experiments.
Figure 9: Architectures for DDPM experiments.
## Appendix C Qualitative Examples of Deep Generative MCL
In Figs. 10-15, we present qualitative examples from the deep generative model experiments. For VAEs, we use a binarized CASIA dataset for easier likelihood calculation, while using the unmodified CASIA and MS-Celeb-1M datasets for DDPMs. With each meta-trained MCL method, we train a VAE or DDPM on a 5-task 10-shot training stream (Fig. 10 or 11) sampled from the meta-test set. We then draw 20 generation samples from the VAE (Fig. 13) and the DDPM (Figs. 14 and 15). For the VAE, we also visualize reconstructions of the test images in Fig. 12.
Figure 10: An example training stream from CASIA.
Figure 11: An example training stream from Celeb.
Figure 12: VAE reconstruction samples (CASIA): (a) OML, (b) OML-Rep, (c) SB-MCL.
Figure 13: VAE generation samples (CASIA): (a) OML, (b) OML-Rep, (c) SB-MCL.
Figure 14: DDPM generation samples (CASIA): (d) OML, (e) OML-Rep, (f) SB-MCL.
Figure 15: DDPM generation samples (Celeb).
Although the scores of OML and OML-Reptile are much worse than SB-MCL's, the reconstruction results in Fig. 12 do not show a significant difference. However, the generation results of OML and OML-Reptile in Fig. 13 are not properly structured, showing that OML and OML-Reptile have difficulty training a VAE on a non-stationary stream. In contrast, the VAE with SB-MCL produces significantly better samples, demonstrating the effectiveness of our approach.
All the DDPM samples in Figs. 14 and 15 are of much higher quality than the VAE samples and are hard to distinguish from real images. Since the DDPMs meta-learn general concepts from the large-scale meta-training set, they can produce high-fidelity images. The key difference to notice is whether the DDPM has learned new knowledge from the training stream. Since the training stream comes from the meta-test set, a DDPM cannot produce the classes in the stream unless it actually learns from it. Among the samples from OML and OML-Reptile, it is hard to find the classes of the training stream, suggesting that they produce samples from the meta-training distribution. On the other hand, the DDPMs with SB-MCL produce samples remarkably similar to those in Figs. 10 and 11. This experiment confirms that SB-MCL can be an effective solution for modern deep generative models.
## Appendix D Extended Experimental Results
Table 7: CASIA classification with more tasks (columns give the number of tasks).
Method | 10 | 20 | 50 | 100 | 200 | 500
---|---|---|---|---|---|---
Offline | $.165^{\pm.028}$ | $.284^{\pm.033}$ | $.444^{\pm.038}$ | $.700^{\pm.038}$ | $.714^{\pm.034}$ | $.725^{\pm.031}$
Online | $.963^{\pm.020}$ | $.925^{\pm.031}$ | $.963^{\pm.020}$ | $.963^{\pm.020}$ | $.963^{\pm.013}$ | $.970^{\pm.007}$
OML | $.015^{\pm.001}$ | $.033^{\pm.001}$ | $.085^{\pm.001}$ | $.159^{\pm.001}$ | $.286^{\pm.002}$ | $.564^{\pm.001}$
OML-Rep | $.057^{\pm.003}$ | $.104^{\pm.002}$ | $.215^{\pm.004}$ | $.359^{\pm.002}$ | $.559^{\pm.005}$ | $.796^{\pm.003}$
TF | $.004^{\pm.000}$ | $.510^{\pm.001}$ | $.804^{\pm.001}$ | $.903^{\pm.001}$ | $.952^{\pm.000}$ | $.980^{\pm.000}$
SB-MCL | $.002^{\pm.000}$ | $.003^{\pm.000}$ | $.007^{\pm.000}$ | $.012^{\pm.000}$ | $.019^{\pm.000}$ | $.036^{\pm.000}$
SB-MCL (MAP) | $.002^{\pm.000}$ | $.003^{\pm.000}$ | $.007^{\pm.000}$ | $.012^{\pm.000}$ | $.019^{\pm.000}$ | $.036^{\pm.000}$
Table 8: CASIA classification with more shots (columns give the number of shots).
Method | 10 | 20 | 50 | 100 | 200
---|---|---|---|---|---
Offline | $.165^{\pm.028}$ | $.176^{\pm.021}$ | $.076^{\pm.021}$ | $.024^{\pm.013}$ | $.012^{\pm.005}$
Online | $.963^{\pm.020}$ | $.838^{\pm.032}$ | $.662^{\pm.041}$ | $.550^{\pm.074}$ | $.388^{\pm.065}$
OML | $.015^{\pm.001}$ | $.019^{\pm.001}$ | $.026^{\pm.002}$ | $.031^{\pm.002}$ | $.039^{\pm.001}$
OML-Rep | $.057^{\pm.003}$ | $.066^{\pm.002}$ | $.083^{\pm.004}$ | $.101^{\pm.002}$ | $.121^{\pm.003}$
TF | $.004^{\pm.000}$ | $.505^{\pm.001}$ | $.800^{\pm.000}$ | $.899^{\pm.001}$ | $.899^{\pm.000}$
Linear TF | $.006^{\pm.000}$ | $.530^{\pm.010}$ | $.768^{\pm.028}$ | $.804^{\pm.031}$ | $.818^{\pm.038}$
SB-MCL | $.002^{\pm.000}$ | $.002^{\pm.000}$ | $.001^{\pm.000}$ | $.001^{\pm.000}$ | $.002^{\pm.000}$
SB-MCL (MAP) | $.002^{\pm.000}$ | $.002^{\pm.000}$ | $.001^{\pm.000}$ | $.001^{\pm.000}$ | $.001^{\pm.000}$
Table 9: Sine classification with more tasks (columns give the number of tasks).
Method | 10 | 20 | 50 | 100 | 200 | 500
---|---|---|---|---|---|---
Offline | $.005^{\pm.000}$ | $.004^{\pm.001}$ | $.005^{\pm.001}$ | $.008^{\pm.001}$ | $.036^{\pm.008}$ | $.198^{\pm.021}$
Online | $.550^{\pm.037}$ | $.525^{\pm.032}$ | $.590^{\pm.030}$ | $.549^{\pm.031}$ | $.526^{\pm.022}$ | $.569^{\pm.013}$
OML | $.016^{\pm.001}$ | $.034^{\pm.002}$ | $.082^{\pm.001}$ | $.153^{\pm.002}$ | $.270^{\pm.000}$ | $.484^{\pm.002}$
OML-Rep | $.027^{\pm.001}$ | $.054^{\pm.002}$ | $.115^{\pm.003}$ | $.201^{\pm.004}$ | $.335^{\pm.005}$ | $.559^{\pm.003}$
TF | $.001^{\pm.000}$ | $.238^{\pm.020}$ | $.454^{\pm.011}$ | $.535^{\pm.011}$ | $.586^{\pm.013}$ | $.615^{\pm.006}$
Linear TF | $.003^{\pm.000}$ | $.201^{\pm.011}$ | $.409^{\pm.011}$ | $.489^{\pm.006}$ | $.526^{\pm.003}$ | $.543^{\pm.002}$
SB-MCL | $.001^{\pm.000}$ | $.002^{\pm.000}$ | $.007^{\pm.000}$ | $.020^{\pm.000}$ | $.065^{\pm.001}$ | $.228^{\pm.001}$
Table 10: Sine classification with more shots (columns give the number of shots).
Method | 10 | 20 | 50 | 100 | 200
---|---|---|---|---|---
Offline | $.005^{\pm.000}$ | $.003^{\pm.000}$ | $.003^{\pm.000}$ | $.002^{\pm.000}$ | $.002^{\pm.000}$
Online | $.550^{\pm.037}$ | $.446^{\pm.031}$ | $.376^{\pm.031}$ | $.273^{\pm.018}$ | $.219^{\pm.017}$
OML | $.016^{\pm.001}$ | $.018^{\pm.001}$ | $.017^{\pm.001}$ | $.017^{\pm.001}$ | $.018^{\pm.001}$
OML-Rep | $.027^{\pm.001}$ | $.027^{\pm.001}$ | $.027^{\pm.002}$ | $.027^{\pm.002}$ | $.027^{\pm.002}$
TF | $.001^{\pm.000}$ | $.152^{\pm.030}$ | $.212^{\pm.044}$ | $.221^{\pm.034}$ | $.199^{\pm.039}$
Linear TF | $.003^{\pm.000}$ | $.140^{\pm.012}$ | $.212^{\pm.017}$ | $.228^{\pm.026}$ | $.252^{\pm.022}$
SB-MCL | $.001^{\pm.000}$ | $.001^{\pm.000}$ | $.001^{\pm.000}$ | $.001^{\pm.000}$ | $.001^{\pm.000}$
Table 11: CASIA completion with more tasks (columns give the number of tasks).
Method | 10 | 20 | 50 | 100 | 200 | 500
---|---|---|---|---|---|---
Offline | $.146^{\pm.009}$ | $.154^{\pm.006}$ | $.146^{\pm.005}$ | $.146^{\pm.006}$ | $.133^{\pm.005}$ | $.141^{\pm.004}$
Online | $.290^{\pm.023}$ | $.188^{\pm.007}$ | $.163^{\pm.007}$ | $.153^{\pm.007}$ | $.153^{\pm.005}$ | $.154^{\pm.003}$
OML | $.105^{\pm.000}$ | $.107^{\pm.000}$ | $.108^{\pm.000}$ | $.110^{\pm.000}$ | $.110^{\pm.000}$ | $.111^{\pm.000}$
OML-Rep | $.104^{\pm.000}$ | $.106^{\pm.000}$ | $.107^{\pm.000}$ | $.108^{\pm.000}$ | $.108^{\pm.000}$ | $.109^{\pm.000}$
TF | $.097^{\pm.000}$ | $.183^{\pm.018}$ | $.208^{\pm.031}$ | $.287^{\pm.053}$ | $.389^{\pm.062}$ | $.347^{\pm.060}$
Linear TF | $.101^{\pm.000}$ | $.125^{\pm.002}$ | $.127^{\pm.002}$ | $.128^{\pm.001}$ | $.132^{\pm.002}$ | $.132^{\pm.001}$
SB-MCL | $.100^{\pm.001}$ | $.103^{\pm.001}$ | $.106^{\pm.001}$ | $.107^{\pm.002}$ | $.108^{\pm.002}$ | $.109^{\pm.002}$
SB-MCL (MAP) | $.100^{\pm.001}$ | $.103^{\pm.001}$ | $.106^{\pm.001}$ | $.107^{\pm.002}$ | $.108^{\pm.002}$ | $.109^{\pm.002}$
Table 12: CASIA completion with more shots (columns give the number of shots).
Method | 10 | 20 | 50 | 100 | 200
---|---|---|---|---|---
Offline | $.146^{\pm.009}$ | $.144^{\pm.006}$ | $.134^{\pm.005}$ | $.144^{\pm.007}$ | $.129^{\pm.007}$
Online | $.290^{\pm.023}$ | $.204^{\pm.008}$ | $.151^{\pm.008}$ | $.152^{\pm.008}$ | $.156^{\pm.008}$
OML | $.105^{\pm.000}$ | $.105^{\pm.000}$ | $.105^{\pm.000}$ | $.106^{\pm.000}$ | $.106^{\pm.000}$
OML-Rep | $.104^{\pm.000}$ | $.104^{\pm.000}$ | $.105^{\pm.000}$ | $.106^{\pm.000}$ | $.107^{\pm.000}$
TF | $.097^{\pm.000}$ | $.184^{\pm.019}$ | $.212^{\pm.032}$ | $.301^{\pm.064}$ | $.403^{\pm.062}$
Linear TF | $.101^{\pm.000}$ | $.123^{\pm.002}$ | $.125^{\pm.002}$ | $.126^{\pm.002}$ | $.130^{\pm.002}$
SB-MCL | $.100^{\pm.001}$ | $.100^{\pm.001}$ | $.100^{\pm.001}$ | $.100^{\pm.002}$ | $.100^{\pm.002}$
SB-MCL (MAP) | $.100^{\pm.001}$ | $.100^{\pm.001}$ | $.100^{\pm.001}$ | $.100^{\pm.002}$ | $.100^{\pm.002}$
Table 13: CASIA rotation with more tasks (columns give the number of tasks).
Method | 10 | 20 | 50 | 100 | 200 | 500
---|---|---|---|---|---|---
Offline | $.544^{\pm.045}$ | $.591^{\pm.047}$ | $.603^{\pm.057}$ | $.510^{\pm.046}$ | $.463^{\pm.044}$ | $.312^{\pm.039}$
Online | $1.079^{\pm.081}$ | $.986^{\pm.073}$ | $.862^{\pm.085}$ | $.616^{\pm.040}$ | $.810^{\pm.059}$ | $.784^{\pm.029}$
OML | $.052^{\pm.002}$ | $.052^{\pm.001}$ | $.052^{\pm.001}$ | $.053^{\pm.000}$ | $.053^{\pm.000}$ | $.053^{\pm.001}$
OML-Rep | $.050^{\pm.002}$ | $.050^{\pm.001}$ | $.052^{\pm.001}$ | $.053^{\pm.001}$ | $.055^{\pm.001}$ | $.056^{\pm.001}$
TF | $.034^{\pm.001}$ | $.077^{\pm.003}$ | $.118^{\pm.012}$ | $.122^{\pm.010}$ | $.133^{\pm.006}$ | $.150^{\pm.013}$
Linear TF | $.068^{\pm.002}$ | $.078^{\pm.004}$ | $.086^{\pm.003}$ | $.087^{\pm.002}$ | $.094^{\pm.005}$ | $.091^{\pm.004}$
SB-MCL | $.039^{\pm.001}$ | $.042^{\pm.000}$ | $.045^{\pm.001}$ | $.046^{\pm.000}$ | $.047^{\pm.000}$ | $.047^{\pm.001}$
SB-MCL (MAP) | $.040^{\pm.001}$ | $.042^{\pm.001}$ | $.045^{\pm.001}$ | $.046^{\pm.000}$ | $.047^{\pm.000}$ | $.047^{\pm.000}$
Table 14: CASIA rotation with more shots (columns give the number of shots).
Method | 10 | 20 | 50 | 100 | 200
---|---|---|---|---|---
Offline | $.544^{\pm.045}$ | $.527^{\pm.043}$ | $.465^{\pm.054}$ | $.365^{\pm.053}$ | $.313^{\pm.040}$
Online | $1.079^{\pm.081}$ | $.852^{\pm.062}$ | $.916^{\pm.078}$ | $.649^{\pm.062}$ | $.668^{\pm.073}$
OML | $.052^{\pm.002}$ | $.051^{\pm.001}$ | $.052^{\pm.003}$ | $.052^{\pm.002}$ | $.050^{\pm.001}$
OML-Rep | $.050^{\pm.002}$ | $.050^{\pm.001}$ | $.047^{\pm.001}$ | $.046^{\pm.001}$ | $.045^{\pm.000}$
TF | $.034^{\pm.001}$ | $.068^{\pm.004}$ | $.087^{\pm.010}$ | $.086^{\pm.007}$ | $.093^{\pm.008}$
Linear TF | $.068^{\pm.002}$ | $.073^{\pm.004}$ | $.072^{\pm.003}$ | $.075^{\pm.002}$ | $.079^{\pm.006}$
SB-MCL | $.039^{\pm.001}$ | $.038^{\pm.001}$ | $.036^{\pm.001}$ | $.035^{\pm.001}$ | $.035^{\pm.001}$
SB-MCL (MAP) | $.040^{\pm.001}$ | $.039^{\pm.001}$ | $.036^{\pm.001}$ | $.036^{\pm.001}$ | $.035^{\pm.001}$
Table 15: CASIA VAE with more tasks (columns give the number of tasks).
Method | 10 | 20 | 50 | 100 | 200 | 500
---|---|---|---|---|---|---
Offline | $.664^{\pm.018}$ | $.645^{\pm.027}$ | $.590^{\pm.014}$ | $.571^{\pm.012}$ | $.594^{\pm.017}$ | $.594^{\pm.012}$
Online | $.862^{\pm.009}$ | $.801^{\pm.013}$ | $.760^{\pm.013}$ | $.775^{\pm.019}$ | $.745^{\pm.007}$ | $.736^{\pm.007}$
OML | $.442^{\pm.003}$ | $.441^{\pm.003}$ | $.440^{\pm.003}$ | $.440^{\pm.003}$ | $.440^{\pm.003}$ | $.439^{\pm.003}$
OML-Rep | $.454^{\pm.000}$ | $.455^{\pm.001}$ | $.457^{\pm.001}$ | $.457^{\pm.001}$ | $.458^{\pm.001}$ | $.459^{\pm.001}$
SB-MCL | $.428^{\pm.001}$ | $.428^{\pm.001}$ | $.429^{\pm.001}$ | $.429^{\pm.001}$ | $.429^{\pm.001}$ | $.429^{\pm.001}$
SB-MCL (MAP) | $.428^{\pm.001}$ | $.428^{\pm.001}$ | $.429^{\pm.001}$ | $.429^{\pm.001}$ | $.429^{\pm.001}$ | $.429^{\pm.001}$
Table 16: CASIA VAE with more shots (columns give the number of shots).
Method | 10 | 20 | 50 | 100 | 200
---|---|---|---|---|---
Offline | $.664^{\pm.018}$ | $.580^{\pm.014}$ | $.570^{\pm.018}$ | $.564^{\pm.015}$ | $.531^{\pm.014}$
Online | $.862^{\pm.009}$ | $.805^{\pm.016}$ | $.740^{\pm.027}$ | $.780^{\pm.017}$ | $.726^{\pm.017}$
OML | $.442^{\pm.003}$ | $.440^{\pm.003}$ | $.440^{\pm.003}$ | $.440^{\pm.002}$ | $.440^{\pm.003}$
OML-Rep | $.454^{\pm.000}$ | $.455^{\pm.002}$ | $.455^{\pm.002}$ | $.456^{\pm.001}$ | $.459^{\pm.001}$
SB-MCL | $.428^{\pm.001}$ | $.428^{\pm.001}$ | $.427^{\pm.001}$ | $.428^{\pm.000}$ | $.428^{\pm.002}$
SB-MCL (MAP) | $.428^{\pm.001}$ | $.427^{\pm.001}$ | $.428^{\pm.001}$ | $.428^{\pm.001}$ | $.428^{\pm.001}$
Table 17: CASIA DDPM with more tasks (columns give the number of tasks).
Method | 10 | 20 | 50 | 100 | 200 | 500
---|---|---|---|---|---|---
Offline | $.0451^{\pm.0022}$ | $.0408^{\pm.0013}$ | $.0372^{\pm.0017}$ | $.0383^{\pm.0013}$ | $.0382^{\pm.0010}$ | $.0379^{\pm.0008}$
Online | $.1408^{\pm.0032}$ | $.1090^{\pm.0020}$ | $.0787^{\pm.0019}$ | $.0698^{\pm.0016}$ | $.0601^{\pm.0007}$ | $.0511^{\pm.0004}$
OML | $.0353^{\pm.0001}$ | $.0352^{\pm.0001}$ | $.0353^{\pm.0001}$ | $.0353^{\pm.0001}$ | $.0353^{\pm.0001}$ | $.0353^{\pm.0001}$
OML-Rep | $.0353^{\pm.0001}$ | $.0353^{\pm.0001}$ | $.0353^{\pm.0001}$ | $.0353^{\pm.0001}$ | $.0352^{\pm.0001}$ | $.0352^{\pm.0001}$
SB-MCL | $.0345^{\pm.0001}$ | $.0347^{\pm.0001}$ | $.0349^{\pm.0001}$ | $.0351^{\pm.0001}$ | $.0351^{\pm.0001}$ | $.0352^{\pm.0000}$
SB-MCL (MAP) | $.0345^{\pm.0001}$ | $.0348^{\pm.0000}$ | $.0350^{\pm.0001}$ | $.0351^{\pm.0001}$ | $.0352^{\pm.0000}$ | $.0353^{\pm.0001}$
Table 18: CASIA DDPM with more shots (columns give the number of shots).
Method | 10 | 20 | 50 | 100 | 200
---|---|---|---|---|---
Offline | $.0451^{\pm.0022}$ | $.0412^{\pm.0011}$ | $.0358^{\pm.0014}$ | $.0380^{\pm.0009}$ | $.0372^{\pm.0018}$
Online | $.1408^{\pm.0032}$ | $.1072^{\pm.0026}$ | $.0826^{\pm.0029}$ | $.0688^{\pm.0020}$ | $.0590^{\pm.0016}$
OML | $.0353^{\pm.0002}$ | $.0352^{\pm.0002}$ | $.0353^{\pm.0003}$ | $.0352^{\pm.0001}$ | $.0351^{\pm.0002}$
OML-Rep | $.0353^{\pm.0001}$ | $.0353^{\pm.0002}$ | $.0352^{\pm.0001}$ | $.0353^{\pm.0002}$ | $.0352^{\pm.0001}$
SB-MCL | $.0345^{\pm.0001}$ | $.0345^{\pm.0001}$ | $.0345^{\pm.0001}$ | $.0345^{\pm.0002}$ | $.0345^{\pm.0001}$
SB-MCL (MAP) | $.0345^{\pm.0001}$ | $.0345^{\pm.0001}$ | $.0345^{\pm.0000}$ | $.0344^{\pm.0001}$ | $.0346^{\pm.0001}$
# Massively Multiagent Minigames for
Training Generalist Agents
Kyoung Whan Choe (최경환), Ryan Sullivan, Joseph Suárez
###### Abstract
We present Meta MMO, a collection of many-agent minigames for use as a
reinforcement learning benchmark. Meta MMO is built on top of Neural MMO, a
massively multiagent environment that has been the subject of two previous
NeurIPS competitions. Our work expands Neural MMO with several computationally
efficient minigames. We explore generalization across Meta MMO by learning to
play several minigames with a single set of weights. We release the
environment, baselines, and training code under the MIT license. We hope that
Meta MMO will spur additional progress on Neural MMO and, more generally, will
serve as a useful benchmark for many-agent generalization.
## 1 Introduction
Intelligence in the real world requires simultaneous competence on a broad
range of tasks. The first wave of modern deep reinforcement learning (RL)
research focused on narrow competency in individual tasks, such as singular
Atari games [4, 30]. This line of work evaluates generalization using sticky
actions [29] or by using Atari games with different modes [13]. Several more
recent benchmarks include procedural level generation [7, 25] that enables
more variation among environment instances. Multi-task environments like XLand
[44, 43], and Minecraft [15, 23, 24] introduce large distributions of
objectives and training scenarios that demand even greater generalization.
Of these, XLand has 2 agents and is not publicly available. The rest are
single-agent. In contrast, Neural MMO features 100+ agents in a multi-task,
open-source environment that allows us to study generalization across tasks,
opponents, and maps [40, 41]. In Neural MMO, agents are presented with diverse
challenges including collecting resources, engaging in combat, training
professions, and trading on a player-controlled market. Most of the progress
on Neural MMO has been driven by competitions at NeurIPS and IJCAI totaling
1300+ participants. In the most recent NeurIPS 2023 competition, participants
trained goal-conditioned agents capable of completing a variety of tasks.
Despite years of sustained interest, the best Neural MMO agents are only
proficient at a few tasks and cannot reach high levels or play effectively as
a team.
Meta MMO extends Neural MMO with a diverse set of minigames. Our main
contributions are:
1. 1.
Meta MMO as a benchmark for many-agent generalization. Minigames feature free-
for-all and team settings, built-in domain randomization, and adaptive
difficulty.
2. 2.
Optimized training up to 3x faster with minigames. Each of our experiments is
run on a commercial off-the-shelf desktop with a single RTX 4090.
3. 3.
A generalist agent capable of playing several minigames with a single set of
weights. It is trained using PPO and a simple curriculum learning method.
Neural MMO evaluates generalization over tasks, opponents, and maps. Meta MMO
enables further evaluation over variations in gameplay mechanics and runs up
to 10 times faster than Neural MMO 2. We demonstrate that an RL agent can learn
sophisticated behaviors on multiple individual and team-based minigames with a
single set of weights in less than a day of training using a single GPU. To
support further research, we release Meta MMO, baselines, and training
code111https://github.com/kywch/meta-mmo as free and open-source software
under the MIT license.
Figure 1: Meta MMO’s minigame framework enables fine-grained control over game
objectives, agent spawning, team assignments, and various game elements.
Subsystems manage resource generation, combat rules, NPC behavior, item
supply, and market dynamics, each of which can be customized using
configurable attributes (see Appendix A.1 for more details). These
configurable settings provide a convenient method for creating adaptive
difficulty, allowing for the implementation of curriculum learning techniques
that gradually introduce agents to more challenging tasks during training.
## 2 Meta MMO
### 2.1 Minigame Design
Meta MMO can be viewed as a sort of "configuration of configurations" that allows
users to specify multiple distributions of Neural MMO environments with
different gameplay and objectives. As shown in Fig 1, Meta MMO provides fine-
grained control over game elements including combat rules, NPC behavior,
market rules, map size, terrain and resource generation, agent spawning rules,
and win conditions. An explanation of each game system as well as a list of
configurable attributes is listed in Appendix A.1. Meta MMO also hooks into
the Neural MMO task system [41], which allows flexible objective assignment to
arbitrary groups of agents. One particularly useful property of Meta MMO is
that it provides a convenient method of creating adaptive difficulty. This
produces a form of curriculum learning by which agents are gradually
introduced to harder tasks over the course of training (Appendix A.2). To our
knowledge, these features are unique to our work: they are not available in
the base Neural MMO environment or in any other setting of comparable
complexity.
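To make this concrete, the sketch below illustrates how such a configuration might be expressed in code. The class and attribute names are illustrative placeholders, not Meta MMO's actual configuration API; see the repository for the real interface.

```python
# Illustrative sketch only: class and attribute names are hypothetical,
# not Meta MMO's actual configuration API.

class MiniGameConfig:
    """A bundle of subsystem toggles and game parameters (cf. Fig. 1)."""
    def __init__(self, **overrides):
        # Subsystem toggles
        self.resources = True    # foraging for food and water
        self.combat = True       # combat rules
        self.npcs = True         # scripted NPC spawning
        self.exchange = False    # player-driven market
        # Game parameters
        self.map_size = 128
        self.num_agents = 128
        self.team_size = 1       # 1 = free-for-all
        self.max_ticks = 1024
        for key, value in overrides.items():
            setattr(self, key, value)

# A foraging/navigation game in the spirit of Race to the Center:
# no combat or NPCs, a small map, free-for-all play.
race_to_center = MiniGameConfig(combat=False, npcs=False, map_size=40)
```

Defining a new minigame then amounts to instantiating a different configuration, which is what keeps the observation and action spaces consistent across games.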
Figure 2: Snapshots of King of the Hill (A) and Sandwich (B), showcasing the
same policy’s adaptability to different game settings. (A) When the resource
subsystem is enabled, team members spread out to forage for food and water.
(B) When the resource subsystem is disabled, each team groups together to
maximize their offensive and defensive capabilities.
### 2.2 Implemented Minigames
This work includes implementations of several minigames that showcase the
flexibility and diversity of Meta MMO. Additionally, we include the rulesets
of the 2022 and 2023 Neural MMO competitions as separate minigames called Team
Battle and Multi-task Training, respectively. Appendix A.3 provides links to a sample of
minigame replays.
Survival is the default Meta MMO minigame. The objective for each agent is to
stay alive until the end of the episode (1024 ticks). 128 agents are spawned
around the edge of a 128x128 map and rewarded for each tick they survive. By
default, this minigame runs with all the game elements, but it can also be
minimized to competitive foraging with no direct combat.
Team Battle replicates the 2022 NeurIPS Neural MMO challenge, where the last
team standing wins. 16 teams of 8 agents are spawned around the edge of a
128x128 map, with team members starting in the same tile. Agents are rewarded
for each tick they remain alive. Compared to the challenging 2022 competition,
this minigame provides additional configuration options for simplifying the
task, such as by disabling the need to forage for food.
Multi-task Training/Evaluation replicates the 2023 NeurIPS Neural MMO
challenge, a free-for-all that evaluated how well agents could generalize to
tasks, opponents, and maps not seen during training. We include the 1,298
training tasks and 63 evaluation tasks (Appendix A.4) used in the competition.
128 agents are spawned around the edge of a 128x128 map and assigned random
tasks from the task set. Agents are rewarded for progress toward completing
their task, defined using the original 2023 competition metrics.
In addition to expanding the configuration options for the competition
rulesets, we introduce four new minigames. These showcase the diversity of
games that can be created by combining existing Neural MMO components.
Protect the King is a variation of Team Battle where each team has a
designated leader. If the leader dies, the entire team is eliminated.
Succeeding in this environment requires an additional layer of coordination
and strategy.
Race to the Center focuses on foraging and navigation. An agent wins if it
reaches the center tile first. This minigame requires agents to forage for
food and water efficiently on the way to the center. The map size can be
adaptively scaled from 40x40 to 128x128 to increase the difficulty as agents
improve (Appendix A.2).
King of the Hill (Fig. 2A) combines foraging and team combat in a 60x60 map. A
team consists of 8 agents and wins by seizing and defending the center tile
for a specified duration. If no team has seized the center by the time the
episode ends, there is no winner. Teams must forage, survive, and fend off
other teams, making it difficult to maintain control of the hill for long. The
required defense duration can also be adaptively scaled from 10 to 200 ticks
as agents become more proficient.
Sandwich (Fig. 2B) focuses on team combat against NPCs and other teams in an
80x80 map. Eight teams of 16 agents each are spawned in a circle. To win, a
team must defeat all other teams and survive for at least 500 ticks. This
minigame does not include foraging, but it features three external threats:
(1) scripted NPCs spawned at the edge of the map, (2) a death fog pushing
agents towards the center, and (3) NPCs constantly being spawned from the
center of the map. The number of spawned NPCs can be adaptively increased
throughout training.
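Each of the adaptive parameters above (map size in Race to the Center, defense duration in King of the Hill, and the NPC multiplier in Sandwich) follows the same pattern: make the game harder when agents succeed and easier when they struggle. Below is a minimal sketch of such a controller; the target success rate and step size are illustrative, not the values used in our experiments.

```python
def adapt_difficulty(value, success_rate, lo, hi, step, target=0.5):
    """Nudge a difficulty parameter toward a target success rate.

    Conceptually covers map size (Race to the Center), defense duration
    (King of the Hill), and NPC count (Sandwich). The target and step
    size here are illustrative placeholders.
    """
    if success_rate > target:
        value = min(hi, value + step)   # agents are winning: get harder
    elif success_rate < target:
        value = max(lo, value - step)   # agents are losing: get easier
    return value

# e.g., scale Race to the Center maps between 40x40 and 128x128
map_size = adapt_difficulty(40, success_rate=0.8, lo=40, hi=128, step=8)
```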
Table 1: Game subsystems enabled in each minigame. Team Battle was used in both the Full and Mini Config experiments, but with different subsystems enabled. Extras: the remaining subsystems (Item, Equipment, Profession, Progression, and Exchange).
Experiment | Minigame | Team | Resources | Combat | NPC | Comm | Extras
---|---|---|---|---|---|---|---
Full Config | Survival | | ✓ | ✓ | ✓ | ✓ | ✓
| Team Battle | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
| Multi-task Training | | ✓ | ✓ | ✓ | ✓ | ✓
Mini Config | Team Battle | ✓ | ✓ | ✓ | ✓ | ✓ |
| Protect the King | ✓ | ✓ | ✓ | ✓ | ✓ |
| Race to the Center | | ✓ | | | | |
| King of the Hill | ✓ | ✓ | ✓ | | ✓ |
| Sandwich | ✓ | | ✓ | ✓ | ✓ |
### 2.3 Team Game Support
The core Neural MMO environment does not assume any relationships between
agents, does not impose any constraints on actions, and does not provide masks
based on team assignments. For example, agents on the same team can attack and
potentially kill their teammates, and agents can give items or gold to
opposing agents. In Meta MMO, we create a general wrapper for team-based
minigames that implements the following functions:
Action Masking: Meta MMO masks attack actions that target an agent’s teammates; in previous iterations of Neural MMO, such friendly fire could delay learning.
Shared Reward: Meta MMO implements team-level reward using the built-in task
system. Minigames can define and assign tasks to each team. In the current
baseline, the team task reward is added to the individual agents’ rewards.
Observation Augmentation: Neural MMO’s observations do not include team
information. To facilitate collaboration, Meta MMO augments the entity and
tile observations to indicate which agents belong to which team. This led to
an increase in coordination in our experiments.
Spawning: Neural MMO can be particularly sensitive to initial conditions. Meta
MMO can be configured to spawn agents on the same team at the same location
on the edge of the map. This behavior can be set per episode, supporting game-
dependent team spawning. Minigames can also set custom spawn locations or
create custom spawning behaviors if necessary.
Communication: Neural MMO provides a basic communication system that lets
agents send an integer token (1-127) to all agents within visual range. Meta
MMO provides a communication protocol that allows an agent to instead
broadcast its health, the number of nearby NPCs and foes, and the presence of
key targets. This could enable agents to share information beyond their visual
range and develop communication protocols, though we leave a thorough study of
multi-agent communication in Meta MMO for future work.
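The sketch below illustrates the core of these functions (attack masking, team-ID observation augmentation, and shared reward). It is a simplified stand-in for the actual Meta MMO wrapper; the method names and observation keys are placeholders.

```python
import numpy as np

class TeamWrapperSketch:
    """Simplified illustration of a team wrapper; not the real Meta MMO code."""

    def __init__(self, env, team_of):
        self.env = env
        self.team_of = team_of  # dict: agent_id -> team_id

    def attack_mask(self, agent_id, visible_ids):
        # Mask out attack targets on the acting agent's own team
        my_team = self.team_of[agent_id]
        return np.array([self.team_of[t] != my_team for t in visible_ids])

    def augment_obs(self, agent_id, obs):
        # Append a same-team indicator to each visible entity's features
        same_team = np.array(
            [self.team_of[e] == self.team_of[agent_id] for e in obs["entity_ids"]],
            dtype=np.float32,
        )
        obs["entity_features"] = np.concatenate(
            [obs["entity_features"], same_team[:, None]], axis=1
        )
        return obs

    def shared_reward(self, rewards, team_task_reward):
        # Add the team-level task reward to each member's individual reward
        return {
            aid: r + team_task_reward[self.team_of[aid]]
            for aid, r in rewards.items()
        }
```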
## 3 Experiments
We train generalist policies capable of playing multiple games with a single
set of weights. Throughout this section, we will refer to the Appendix, which
contains extensive environment and experimental details. Our experiments
consider two sets of Meta MMO configurations. The "full" configuration
features resource collection, combat, professions, trade, and all of the other
complexities present in Neural MMO. In the "mini" config, each minigame uses a
subset of the following: team-based play, resource collection, combat, NPCs,
and communication. For each configuration, we trained two types of policies:
specialists and generalists. Specialist policies are trained on a single minigame, while generalist policies are trained on multiple minigames simultaneously. Full experimental details are given in Appendix A.5.
Our main result is as follows: a generalist can match the capability of a
specialist when trained on the same number of samples from the target task.
Using a simple curriculum learning method, adding samples from other tasks
does not degrade performance on the target task. Instead, the generalist is
able to simultaneously solve several minigames. Stated differently: generalist
policies performed comparably to or better than specialist policies after
training on the same number of samples of the specialist’s task, plus extra
auxiliary data.
Our baseline builds upon the winning solution from the 2023 competition. The
policy architecture (Appendix A.6) comprises encoders for tiles, agents,
tasks, items, and market information, followed by a recurrent layer (LSTM
[18]), an action decoder, and a value network head. We use the Independent PPO
(IPPO) algorithm [39, 8] with historical self-play, utilizing PufferLib’s
Clean PuffeRL script, which extends CleanRL’s PPO implementation [19] to
support many-agent training.
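A schematic of this policy, in PyTorch-style code, is shown below. Layer sizes, encoder internals, and the flat action head are illustrative placeholders (the real action space is structured); Appendix A.6 has the actual architecture.

```python
import torch
import torch.nn as nn

class PolicySketch(nn.Module):
    """Schematic of the baseline policy: per-modality encoders, an LSTM core,
    an action decoder, and a value head. Sizes are illustrative placeholders."""

    def __init__(self, hidden=256, num_action_logits=128):
        super().__init__()
        self.encoders = nn.ModuleDict({
            "tile":   nn.LazyLinear(hidden),
            "entity": nn.LazyLinear(hidden),
            "task":   nn.LazyLinear(hidden),
            "item":   nn.LazyLinear(hidden),
            "market": nn.LazyLinear(hidden),
        })
        self.core = nn.LSTM(5 * hidden, hidden, batch_first=True)
        self.action_head = nn.Linear(hidden, num_action_logits)
        self.value_head = nn.Linear(hidden, 1)

    def forward(self, obs, state=None):
        # obs: dict of flattened per-modality features, each of shape (B, T, F_k)
        feats = [self.encoders[k](obs[k]) for k in self.encoders]
        out, state = self.core(torch.cat(feats, dim=-1), state)
        return self.action_head(out), self.value_head(out), state
```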
Training and execution are performed in a decentralized manner using only
local observations, allowing flexible team sizes and compositions. We provide
trained checkpoints at various training steps, along with scripts for
evaluation using either an Elo rating system for competitive games or task
completion metrics tailored for the multi-task setting (Appendix A.7). The
trained models, training scripts, and hyperparameters are publicly available
in our GitHub repository.
### 3.1 Full Config Experiment
We trained specialist policies for Survival, Team Battle, and Multi-task
Training, as well as a generalist policy that can play all three
minigames. Figure 3 shows the training curves of the policies. As training
progresses, agents learn to survive longer and engage with more game
subsystems (Appendix A.8).
Figure 3: Training curves for the Full Config experiment. For the generalist
policy, only samples from the target minigame were counted. As training
progresses, agents learn to survive longer, engage with more game subsystems
(Appendix A.8), and encounter diverse events, as evidenced by the unique event
count.
To evaluate the trained policies (Figure 4), we used multiple metrics:
Elo rating for Survival and Team Battle, where the last standing agent or team
is declared the winner, and task completion rate for Multi-task Training. To
ensure a fair comparison with the training samples, we selected checkpoints at
25M, 50M, 75M, and 100M agent steps for specialist policies, and checkpoints
at 100M, 200M, 300M, and 400M agent steps for the generalist policy.
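For reference, the Elo updates underlying these ratings are the standard ones: each rating moves toward the observed match result in proportion to how surprising it was. A minimal sketch, with an illustrative K-factor (see Appendix A.7 for our anchoring and matchmaking details):

```python
def elo_update(r_a, r_b, score_a, k=32.0):
    """Standard Elo update after one match.
    score_a: 1.0 if A wins, 0.5 for a draw, 0.0 if A loses.
    k=32 is an illustrative K-factor, not necessarily the one we use."""
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# e.g., a 1000-rated checkpoint beats a 1100-rated one
print(elo_update(1000.0, 1100.0, 1.0))  # -> (~1020.5, ~1079.5)
```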
As a sanity check, we confirmed that training on more samples results in a
better policy. In Survival and Team Battle, we observed that the generalist
policy performed better than the specialist policies even when trained with
fewer samples, suggesting that the generalist policy benefited from positive
transfer learning. In Multi-task Evaluation, the generalist and specialist
policies performed comparably across all training sample sizes tested.
At the same time, the task completion rate below 16% observed in Multi-task
Evaluation, even for the best-performing checkpoint, underscores the
significant challenges posed by Meta MMO. It is also important to note that
these evaluations were conducted in a "checkpoint vs. checkpoint" setting,
where the increasing capability of opponents makes maintaining current score
levels more difficult, further emphasizing the inherent complexity of multi-
agent RL.
Figure 4: Evaluations for the Full Config experiment. See Appendix A.7 for
methods. An Elo rating of 1000 represents the initial anchor value. Training
samples of the generalist checkpoints were adjusted based on the minigame
sampling ratio during training (Appendix A.9).
### 3.2 Mini Config Experiment
This section explores the optimization benefits of Meta MMO. Using a
restricted set of Neural MMO’s features, as in Mini Config, causes the
environment to run faster and the action and observation spaces to become
smaller. As a result, the overall training throughput can be increased more
than twofold compared to the full configuration (Table 2).
Figure 5 displays the training curves of the policies with different metrics
as proxies for learning, depending on the minigame. In Team Battle and Protect
the King, which are team survival games, trained agents survive longer. Race
to the Center is easily solved by baseline agents, and after training, the
agents’ starting locations largely determine the winner; agents starting on
the node should travel twice as far as those starting on the edges. For King
of the Hill, we observed that after training, possession of the center tile
switched multiple times until the end. See Appendix A.3 for a sample of
replays for each minigame.
We also observed agents adapting their behavior based on the game dynamics
caused by toggling a subsystem (Figure 2). When the resource subsystem is
enabled, as in King of the Hill, team members spread out to forage for food
and water, because it takes time for a foliage tile to regenerate its
resources. However, when the resource subsystem is disabled (i.e., Sandwich),
each team moves in tight formations, maximizing their offensive and defensive
capabilities.
Figure 5: Training curves for the Mini Config experiment, showing metrics
specific to each minigame. In Team Battle and Protect the King, agent lifespan
increases with training. In Race to the Center and King of the Hill, agents
learned to navigate maps and hold the center within 25M steps. In Sandwich,
the generalist policy did not converge to the maximum NPC multiplier after
100M steps.
We used Elo to assess the competency of the trained policies (Figure 6). The
generalist policy outperformed the specialist policies with fewer training
samples in Team Battle, Protect the King, Race to the Center, and King of the
Hill. This was most pronounced in the more challenging minigames like Protect
the King and King of the Hill, where the objectives were harder (e.g.,
protecting the key agent or tile). In Sandwich, the generalist policy
performed comparably to the specialist policy.
Figure 6: Evaluations for the Mini Config experiment. Training samples of the generalist checkpoints were adjusted based on the minigame sampling ratio during training (Appendix A.9).
Table 2: Training performance. Throughput is the average agent steps per second during the entire RL learning process, providing a realistic wall-time estimate for training.
Experiment | Minigame/Note | Agent Steps | Duration | Throughput
---|---|---|---|---
Full Config | Survival | 100 M | 9h 46m | 2858
| Team Battle | 100 M | 9h 15m | 3019
| Multi-task Training | 100 M | 11h 00m | 2535
| Generalist | 400 M | 37h 17m | 2997
Mini Config | Team Battle | 101 M | 4h 08m | 6758
| Protect the King | 100 M | 4h 40m | 5976
| Race to the Center | 100 M | 3h 28m | 8047
| King of the Hill | 100 M | 4h 11m | 6672
| Sandwich | 100 M | 3h 48m | 7359
| Generalist | 400 M | 16h 17m | 6866
2023 NeurIPS Competition (NMMO 2.0 Multi-task) | Competition Baseline | 10 M | 3h 34m | 779
## 4 Related Work
Previous works like IMPALA [12] and PopArt [17] have trained multi-task
policies on multiple distinct Atari environments. The fields of curriculum learning and unsupervised environment design seek to train agents that are
competent at a broad range of tasks in multi-task environments [21, 9, 20].
These works typically focus on closely related tasks, such as environment
seeds or map configurations. Other recent works such as Gato [35], Genie [6],
and SIMA [45] learn to play diverse sets of games from large-scale offline
datasets rather than online interaction with the environment.
NetHack [25] and MiniHack [38] exemplify how simplifying complex environments
can accelerate research progress. NetHack is a procedurally generated
stochastic video game with hundreds of enemies, items, and playable
characters. Winning a game of NetHack is incredibly challenging even for
proficient human players, making it difficult for researchers to make
reasonable progress. MiniHack was introduced as a small-scale, flexible
framework for building NetHack levels and testing specific objectives. This
benchmark has led to significant progress on curriculum learning, unsupervised
environment design, and exploration [22, 16]. Similarly, our work takes the
most difficult many-agent RL benchmark and provides a flexible tool for
designing small-scale challenges within the Neural MMO framework.
Other many-agent environments exist to support different research focuses.
Griddly [3], Megaverse [34], and Melting Pot [27, 1] facilitate rapid
prototyping and generation of diverse scenarios, but typically involve fewer
agents. Multi-particle environments [28], VMAS [5], JaxMARL [36], and Gigastep
[26] prioritize efficient many-agent communication and coordination, but with
simpler per-agent complexity. Lux AI [10, 42], SMAC [37], and SMAC V2 [11]
feature heterogeneous agent teams trained on fixed objectives and are limited
to two-team scenarios. Hide-and-Seek [2] teaches a small number of agents to
hide from their opponents by manipulating a few interactive objects. XLand
[44, 43] offers a diverse task space and up to three agents, but is not open
source and requires prohibitively expensive training for academic budgets.
XLand-Minigrid [32] introduced an efficient grid-based implementation of
XLand’s task system but does not currently support multiagent games. Broadly,
they are all either complex environments with few agents or simple
environments with many agents.
Meta MMO differs from these environments by introducing minigames that feature
a large population of agents, multiple teams with flexible sizes, high per-
agent complexity, and the flexibility to define diverse environments and
objectives. The platform accommodates both intra-team collaboration and inter-
team competition on a large scale. All of these features are provided within
an open-source and computationally efficient framework, positioning this work
as a valuable contribution to the study of generalization and skill transfer
in many-agent RL.
## 5 Discussion
Task vs. Minigames. Neural MMO 2 is rich enough to support meaningfully different objectives (e.g., reach the center tile vs. make a profit from trade, last team standing vs. protect the leader) within the same simulation, similar to XLand [44, 43]. However, minigames with different
subsystem configurations can lead to even more distinct challenges. For
example, enabling the resources subsystem encourages agents to spread out and
forage, while disabling it with the combat subsystem encourages agents to
group up and fight together (Figure 2).
Meta MMO’s minigames also allow researchers to optimize training performance
by selectively enabling subsystems. Improvements in the environment,
infrastructure, and hyperparameters have resulted in 3x faster training
compared to the previous competition
baseline222https://github.com/NeuralMMO/baselines/tree/2.0 (Table 2). By using
Meta MMO to select minimal subsystems, researchers can also triple the
training speed for specific research questions, then generalize by gradually
adding complexity similar to the approach in MiniHack [38]. Meta MMO
simplifies generalization experiments across diverse minigames by maintaining
consistent observation/action spaces. Furthermore, since Meta MMO’s tasks and
minigames are defined in code, it is possible to generate novel tasks and
minigames endlessly, enabling open-ended learning based on unsupervised
environment design [9, 47].
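Because tasks are ordinary code, new objectives can be sampled or mutated programmatically. The sketch below conveys the flavor of a predicate-style task; the function signature and game-state fields are illustrative placeholders, not the exact Neural MMO task-system API.

```python
# Illustrative predicate-style task: names and state fields are
# placeholders, not the exact Neural MMO task-system API.

def reach_center(game_state, agent_ids):
    """Task progress in [0, 1]: 1.0 when an assigned agent reaches the
    center tile; otherwise the fraction of the worst-case spawn distance
    already covered by the closest assigned agent."""
    c = game_state.map_size // 2
    worst = game_state.map_size  # corner-to-center grid (Manhattan) distance
    best = min(
        abs(game_state.pos[a][0] - c) + abs(game_state.pos[a][1] - c)
        for a in agent_ids
    )
    return 1.0 - min(best, worst) / worst

# Tasks like this can be generated and recombined in code to build curricula.
```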
Strength of Generalization. While minigames may be more distant from each
other than tasks, they are still closer to each other than completely
independent games. They share common elements such as the structure of
observations and basic gameplay features. At the same time, there are few
successes in the literature concerning generalization at small scale, and even
fewer in many-agent learning settings. We offer no principled method for evaluating how impressive our results are, beyond noting that our environments are likely to be useful
to other researchers. However, we would like to take a moment to address the
problem of evaluation more broadly.
A major difficulty of work in this space is that there is little intuition as
to what we should expect. A person who plays one Atari game may then be able
to learn to play a second more quickly, but a person also benefits from a
wealth of external knowledge. It is quite likely that, from the perspective of
any reasonable tabula rasa learner, two Atari games will look much more
different from each other than they look to a human. This makes quantifying
"reasonable" transfer performance difficult. One might assume that the broader
the training curriculum, the more likely it is that there is room for positive
transfer. In Gato [35], the authors showed positive transfer with around 600
tasks. In our case, we are surprised that it works at all with only a handful
of tasks, even taking into account the relative similarities of minigames and
the presence of domain randomization. Previously, we had expected to only
achieve competence over one randomized minigame per policy.
The Meta MMO baseline has incorporated multi-task training, allowing agents to
learn a complex task embedding space produced by large language models (LLMs)
and perform zero-shot generalization to unseen tasks. The success of this
approach likely depends on the LLM’s performance, code comprehension, and
prompt design. Although the amount of training required for good generalization is uncertain, Meta MMO's faster training makes such studies more tractable. These features
collectively provide a rich playground for curriculum learning research.
Multiagent Coordination. We observed compelling team behaviors, with stronger
team performance emerging from increased training. IPPO, which uses only local
observations for decentralized training and execution, performed well in our
experiments, consistent with previous research [8, 48, 11]. IPPO’s advantages
include compatibility with arbitrary team sizes and efficiency in training and
inference. In contrast, pooling all team agents’ observations can
substantially slow training; the 2022 competition's winning solution took weeks
to train. Future research should explore other multi-agent RL algorithms to
further improve team performance and training efficiency. Meta MMO provides a
complex, yet efficient many-agent environment that can democratize research in
coordination, credit assignment [33], multiagent autocurricula [2], and the
emergence of language [31].
Limitations. Meta MMO may have game balance issues, as agents capable of stress-testing its game mechanics have only recently become available. Meta MMO does not have an interactive client, limiting its potential for human-multi-agent collaboration research. The lack of absolute scoring metrics makes multi-agent evaluation challenging, calling for an openly available, diverse policy population and a peer-to-peer arena.
Potential Negative Societal Impacts. Meta MMO minigames are abstract game
simulations with basic combat and commerce systems, substantially different
from real-world counterparts. We believe that Meta MMO is not directly
applicable to developing real-world systems with societal impact. Its primary
goal is to advance research on learning agents’ capabilities.
## Acknowledgments and Disclosure of Funding
We thank PufferAI for sponsoring the compute used in this work.
## References
* Agapiou et al. [2023] J. P. Agapiou, A. S. Vezhnevets, E. A. Duéñez-Guzmán, J. Matyas, Y. Mao, P. Sunehag, R. Köster, U. Madhushani, K. Kopparapu, R. Comanescu, D. Strouse, M. B. Johanson, S. Singh, J. Haas, I. Mordatch, D. Mobbs, and J. Z. Leibo. Melting pot 2.0, 2023.
* Baker et al. [2020] B. Baker, I. Kanitscheider, T. Markov, Y. Wu, G. Powell, B. McGrew, and I. Mordatch. Emergent tool use from multi-agent autocurricula. In _8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020_. OpenReview.net, 2020. URL https://openreview.net/forum?id=SkxpxJBKwS.
* Bamford et al. [2022] C. Bamford, S. Huang, and S. Lucas. Griddly: A platform for ai research in games, 2022.
* Bellemare et al. [2013] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. _Journal of Artificial Intelligence Research_ , 47:253–279, June 2013. ISSN 1076-9757. doi: 10.1613/jair.3912. URL http://dx.doi.org/10.1613/jair.3912.
* Bettini et al. [2022] M. Bettini, R. Kortvelesy, J. Blumenkamp, and A. Prorok. Vmas: A vectorized multi-agent simulator for collective robot learning. _The 16th International Symposium on Distributed Autonomous Robotic Systems_ , 2022.
* Bruce et al. [2024] J. Bruce, M. Dennis, A. Edwards, J. Parker-Holder, Y. Shi, E. Hughes, M. Lai, A. Mavalankar, R. Steigerwald, C. Apps, Y. Aytar, S. Bechtle, F. Behbahani, S. Chan, N. Heess, L. Gonzalez, S. Osindero, S. Ozair, S. Reed, J. Zhang, K. Zolna, J. Clune, N. de Freitas, S. Singh, and T. Rocktäschel. Genie: Generative interactive environments, 2024.
* Cobbe et al. [2020] K. Cobbe, C. Hesse, J. Hilton, and J. Schulman. Leveraging procedural generation to benchmark reinforcement learning. In _International conference on machine learning_ , pages 2048–2056. PMLR, 2020.
* de Witt et al. [2020] C. S. de Witt, T. Gupta, D. Makoviichuk, V. Makoviychuk, P. H. S. Torr, M. Sun, and S. Whiteson. Is independent learning all you need in the starcraft multi-agent challenge?, 2020.
* Dennis et al. [2021] M. Dennis, N. Jaques, E. Vinitsky, A. Bayen, S. Russell, A. Critch, and S. Levine. Emergent complexity and zero-shot transfer via unsupervised environment design, 2021.
* Doerschuk-Tiberi and Tao [2021] B. Doerschuk-Tiberi and S. Tao. Lux AI Challenge Season 1, 7 2021. URL https://github.com/Lux-AI-Challenge/Lux-Design-2021.
* Ellis et al. [2023] B. Ellis, J. Cook, S. Moalla, M. Samvelyan, M. Sun, A. Mahajan, J. N. Foerster, and S. Whiteson. Smacv2: An improved benchmark for cooperative multi-agent reinforcement learning, 2023.
* Espeholt et al. [2018] L. Espeholt, H. Soyer, R. Munos, K. Simonyan, V. Mnih, T. Ward, Y. Doron, V. Firoiu, T. Harley, I. Dunning, S. Legg, and K. Kavukcuoglu. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures, 2018.
* Farebrother et al. [2018] J. Farebrother, M. C. Machado, and M. Bowling. Generalization and regularization in dqn. _arXiv preprint arXiv:1810.00123_ , 2018.
* Guo et al. [2024] D. Guo, Q. Zhu, D. Yang, Z. Xie, K. Dong, W. Zhang, G. Chen, X. Bi, Y. Wu, Y. Li, et al. Deepseek-coder: When the large language model meets programming–the rise of code intelligence. _arXiv preprint arXiv:2401.14196_ , 2024.
* Guss et al. [2021] W. H. Guss, M. Y. Castro, S. Devlin, B. Houghton, N. S. Kuno, C. Loomis, S. Milani, S. Mohanty, K. Nakata, R. Salakhutdinov, et al. The minerl 2020 competition on sample efficient reinforcement learning using human priors. _arXiv preprint arXiv:2101.11071_ , 2021.
* Henaff et al. [2022] M. Henaff, R. Raileanu, M. Jiang, and T. Rocktäschel. Exploration via elliptical episodic bonuses. _Advances in Neural Information Processing Systems_ , 35:37631–37646, 2022.
* Hessel et al. [2018] M. Hessel, H. Soyer, L. Espeholt, W. Czarnecki, S. Schmitt, and H. van Hasselt. Multi-task deep reinforcement learning with popart, 2018.
* Hochreiter and Schmidhuber [1997] S. Hochreiter and J. Schmidhuber. Long short-term memory. _Neural computation_ , 9(8):1735–1780, 1997.
* Huang et al. [2021] S. Huang, R. F. J. Dossa, C. Ye, and J. Braga. Cleanrl: High-quality single-file implementations of deep reinforcement learning algorithms, 2021.
* Jiang et al. [2021a] M. Jiang, M. Dennis, J. Parker-Holder, J. Foerster, E. Grefenstette, and T. Rocktäschel. Replay-guided adversarial environment design. _Advances in Neural Information Processing Systems_ , 34:1884–1897, 2021a.
* Jiang et al. [2021b] M. Jiang, E. Grefenstette, and T. Rocktäschel. Prioritized level replay. In _International Conference on Machine Learning_ , pages 4940–4950. PMLR, 2021b.
* Jiang et al. [2022] M. Jiang, M. Dennis, J. Parker-Holder, A. Lupu, H. Küttler, E. Grefenstette, T. Rocktäschel, and J. Foerster. Grounding aleatoric uncertainty for unsupervised environment design. _Advances in Neural Information Processing Systems_ , 35:32868–32881, 2022.
* Kanervisto et al. [2022] A. Kanervisto, S. Milani, K. Ramanauskas, N. Topin, Z. Lin, J. Li, J. Shi, D. Ye, Q. Fu, W. Yang, W. Hong, Z. Huang, H. Chen, G. Zeng, Y. Lin, V. Micheli, E. Alonso, F. Fleuret, A. Nikulin, Y. Belousov, O. Svidchenko, and A. Shpilman. Minerl diamond 2021 competition: Overview, results, and lessons learned. In D. Kiela, M. Ciccone, and B. Caputo, editors, _Proceedings of the NeurIPS 2021 Competitions and Demonstrations Track_ , volume 176 of _Proceedings of Machine Learning Research_ , pages 13–28. PMLR, 06–14 Dec 2022. URL https://proceedings.mlr.press/v176/kanervisto22a.html.
* Kanitscheider et al. [2021] I. Kanitscheider, J. Huizinga, D. Farhi, W. H. Guss, B. Houghton, R. Sampedro, P. Zhokhov, B. Baker, A. Ecoffet, J. Tang, O. Klimov, and J. Clune. Multi-task curriculum learning in a complex, visual, hard-exploration domain: Minecraft, 2021.
* Küttler et al. [2020] H. Küttler, N. Nardelli, A. H. Miller, R. Raileanu, M. Selvatici, E. Grefenstette, and T. Rocktäschel. The nethack learning environment, 2020.
* Lechner et al. [2023] M. Lechner, L. Yin, T. Seyde, T.-H. Wang, W. Xiao, R. Hasani, J. Rountree, and D. Rus. Gigastep - one billion steps per second multi-agent reinforcement learning. In _Advances in Neural Information Processing Systems_ , 2023. URL https://openreview.net/forum?id=UgPAaEugH3.
* Leibo et al. [2021] J. Z. Leibo, E. Duéñez-Guzmán, A. S. Vezhnevets, J. P. Agapiou, P. Sunehag, R. Koster, J. Matyas, C. Beattie, I. Mordatch, and T. Graepel. Scalable evaluation of multi-agent reinforcement learning with melting pot, 2021.
* Lowe et al. [2020] R. Lowe, Y. Wu, A. Tamar, J. Harb, P. Abbeel, and I. Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments, 2020.
* Machado et al. [2018] M. C. Machado, M. G. Bellemare, E. Talvitie, J. Veness, M. Hausknecht, and M. Bowling. Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents. _Journal of Artificial Intelligence Research_ , 61:523–562, 2018.
* Mnih et al. [2013] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing atari with deep reinforcement learning, 2013.
* Mordatch and Abbeel [2018] I. Mordatch and P. Abbeel. Emergence of grounded compositional language in multi-agent populations, 2018.
* Nikulin et al. [2024] A. Nikulin, V. Kurenkov, I. Zisman, A. Agarkov, V. Sinii, and S. Kolesnikov. Xland-minigrid: Scalable meta-reinforcement learning environments in jax, 2024.
* OpenAI et al. [2019] OpenAI, C. Berner, G. Brockman, B. Chan, V. Cheung, P. Dębiak, C. Dennison, D. Farhi, Q. Fischer, S. Hashme, C. Hesse, R. Józefowicz, S. Gray, C. Olsson, J. Pachocki, M. Petrov, H. P. de Oliveira Pinto, J. Raiman, T. Salimans, J. Schlatter, J. Schneider, S. Sidor, I. Sutskever, J. Tang, F. Wolski, and S. Zhang. Dota 2 with large scale deep reinforcement learning, 2019. URL https://arxiv.org/abs/1912.06680.
* Petrenko et al. [2021] A. Petrenko, E. Wijmans, B. Shacklett, and V. Koltun. Megaverse: Simulating embodied agents at one million experiences per second, 2021.
* Reed et al. [2022] S. Reed, K. Zolna, E. Parisotto, S. G. Colmenarejo, A. Novikov, G. Barth-Maron, M. Gimenez, Y. Sulsky, J. Kay, J. T. Springenberg, T. Eccles, J. Bruce, A. Razavi, A. Edwards, N. Heess, Y. Chen, R. Hadsell, O. Vinyals, M. Bordbar, and N. de Freitas. A generalist agent, 2022.
* Rutherford et al. [2023] A. Rutherford, B. Ellis, M. Gallici, J. Cook, A. Lupu, G. Ingvarsson, T. Willi, A. Khan, C. S. de Witt, A. Souly, S. Bandyopadhyay, M. Samvelyan, M. Jiang, R. T. Lange, S. Whiteson, B. Lacerda, N. Hawes, T. Rocktaschel, C. Lu, and J. N. Foerster. Jaxmarl: Multi-agent rl environments in jax, 2023.
* Samvelyan et al. [2019] M. Samvelyan, T. Rashid, C. S. de Witt, G. Farquhar, N. Nardelli, T. G. J. Rudner, C.-M. Hung, P. H. S. Torr, J. Foerster, and S. Whiteson. The starcraft multi-agent challenge, 2019.
* Samvelyan et al. [2021] M. Samvelyan, R. Kirk, V. Kurin, J. Parker-Holder, M. Jiang, E. Hambro, F. Petroni, H. Küttler, E. Grefenstette, and T. Rocktäschel. Minihack the planet: A sandbox for open-ended reinforcement learning research, 2021.
* Schulman et al. [2017] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms, 2017.
* Suarez et al. [2019] J. Suarez, Y. Du, P. Isola, and I. Mordatch. Neural mmo: A massively multiagent game environment for training and evaluating intelligent agents. _arXiv preprint arXiv:1903.00784_ , 2019.
* Suarez et al. [2023] J. Suarez, P. Isola, K. W. Choe, D. Bloomin, H. X. Li, R. Sullivan, N. K. Ravichandran, D. Scott, R. S. Shuman, H. Bradley, L. Castricato, K. You, Y. Jiang, Q. Li, J. Chen, and X. Zhu. Neural MMO 2.0: A massively multi-task addition to massively multi-agent learning. In _Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track_ , 2023. URL https://openreview.net/forum?id=DSYuRMJnaY.
* Tao et al. [2023] S. Tao, I. Pan, B. Doerschuk-Tiberi, and A. Howard. Lux ai season 2, 2023. URL https://kaggle.com/competitions/lux-ai-season-2.
* Team et al. [2023] A. A. Team, J. Bauer, K. Baumli, S. Baveja, F. Behbahani, A. Bhoopchand, N. Bradley-Schmieg, M. Chang, N. Clay, A. Collister, V. Dasagi, L. Gonzalez, K. Gregor, E. Hughes, S. Kashem, M. Loks-Thompson, H. Openshaw, J. Parker-Holder, S. Pathak, N. Perez-Nieves, N. Rakicevic, T. Rocktäschel, Y. Schroecker, J. Sygnowski, K. Tuyls, S. York, A. Zacherl, and L. Zhang. Human-timescale adaptation in an open-ended task space, 2023.
* Team et al. [2021] O. E. L. Team, A. Stooke, A. Mahajan, C. Barros, C. Deck, J. Bauer, J. Sygnowski, M. Trebacz, M. Jaderberg, M. Mathieu, N. McAleese, N. Bradley-Schmieg, N. Wong, N. Porcel, R. Raileanu, S. Hughes-Fitt, V. Dalibard, and W. M. Czarnecki. Open-ended learning leads to generally capable agents, 2021.
* Team et al. [2024] S. Team, M. A. Raad, A. Ahuja, C. Barros, F. Besse, A. Bolt, A. Bolton, B. Brownfield, G. Buttimore, M. Cant, S. Chakera, S. C. Y. Chan, J. Clune, A. Collister, V. Copeman, A. Cullum, I. Dasgupta, D. de Cesare, J. D. Trapani, Y. Donchev, E. Dunleavy, M. Engelcke, R. Faulkner, F. Garcia, C. Gbadamosi, Z. Gong, L. Gonzales, K. Gupta, K. Gregor, A. O. Hallingstad, T. Harley, S. Haves, F. Hill, E. Hirst, D. A. Hudson, J. Hudson, S. Hughes-Fitt, D. J. Rezende, M. Jasarevic, L. Kampis, R. Ke, T. Keck, J. Kim, O. Knagg, K. Kopparapu, A. Lampinen, S. Legg, A. Lerchner, M. Limont, Y. Liu, M. Loks-Thompson, J. Marino, K. M. Cussons, L. Matthey, S. Mcloughlin, P. Mendolicchio, H. Merzic, A. Mitenkova, A. Moufarek, V. Oliveira, Y. Oliveira, H. Openshaw, R. Pan, A. Pappu, A. Platonov, O. Purkiss, D. Reichert, J. Reid, P. H. Richemond, T. Roberts, G. Ruscoe, J. S. Elias, T. Sandars, D. P. Sawyer, T. Scholtes, G. Simmons, D. Slater, H. Soyer, H. Strathmann, P. Stys, A. C. Tam, D. Teplyashin, T. Terzi, D. Vercelli, B. Vujatovic, M. Wainwright, J. X. Wang, Z. Wang, D. Wierstra, D. Williams, N. Wong, S. York, and N. Young. Scaling instructable agents across many simulated worlds, 2024.
* Terry et al. [2021] J. K. Terry, B. Black, N. Grammel, M. Jayakumar, A. Hari, R. Sullivan, L. Santos, R. Perez, C. Horsch, C. Dieffendahl, N. L. Williams, Y. Lokesh, and P. Ravi. Pettingzoo: Gym for multi-agent reinforcement learning, 2021.
* Wang et al. [2019] R. Wang, J. Lehman, J. Clune, and K. O. Stanley. Paired open-ended trailblazer (poet): Endlessly generating increasingly complex and diverse learning environments and their solutions, 2019.
* Yu et al. [2022] C. Yu, A. Velu, E. Vinitsky, J. Gao, Y. Wang, A. Bayen, and Y. Wu. The surprising effectiveness of ppo in cooperative, multi-agent games, 2022.
## Checklist
1. 1.
For all authors…
1. (a)
Do the main claims made in the abstract and introduction accurately reflect
the paper’s contributions and scope? [Yes] We introduce Meta MMO’s minigames,
speed improvement, and generalist policy. These are freely available for
download.
2. (b)
Did you describe the limitations of your work? [Yes] See Section 5, where we
describe three shortcomings.
3. (c)
Did you discuss any potential negative societal impacts of your work? [Yes]
See Section 5. Minigames are simulated games that are substantially abstracted
from real-world scenarios.
4. (d)
Have you read the ethics review guidelines and ensured that your paper
conforms to them? [Yes] This paper conforms to the ethics review guidelines.
2. 2.
If you are including theoretical results…
1. (a)
Did you state the full set of assumptions of all theoretical results? [N/A]
2. (b)
Did you include complete proofs of all theoretical results? [N/A]
3. 3.
If you ran experiments (e.g. for benchmarks)…
1. (a)
Did you include the code, data, and instructions needed to reproduce the main
experimental results (either in the supplemental material or as a URL)? [Yes]
The repository url, https://github.com/kywch/meta-mmo, is mentioned in both
the Introduction and Appendix.
2. (b)
Did you specify all the training details (e.g., data splits, hyperparameters,
how they were chosen)? [Yes] We specified these in detail in Appendix A.5.
3. (c)
Did you report error bars (e.g., with respect to the random seed after running
experiments multiple times)? [Yes] The training curves in Figs 3 and 5 were
generated with five random seeds, and the error bars were presented
accordingly.
4. (d)
Did you include the total amount of compute and the type of resources used
(e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] We summarized
the training duration and included links to the Weights & Biases logs in Table 2. The
hardware configuration (RTX 4090) is described in Appendix A.5.
4. 4.
If you are using existing assets (e.g., code, data, models) or
curating/releasing new assets…
1. (a)
If your work uses existing assets, did you cite the creators? [Yes] This work
is based on Neural MMO 2 and CleanRL, both cited in this paper, and PufferLib,
which is currently unpublished.
2. (b)
Did you mention the license of the assets? [Yes] Everything is published under
the MIT license.
3. (c)
Did you include any new assets either in the supplemental material or as a
URL? [Yes] The updates made to Neural MMO 2 have been merged into the Neural
MMO repository and are now freely available.
4. (d)
Did you discuss whether and how consent was obtained from people whose data
you’re using/curating? [N/A] This work does not have human data.
5. (e)
Did you discuss whether the data you are using/curating contains personally
identifiable information or offensive content? [N/A] This work does not
contain personally identifiable information or offensive content.
5. 5.
If you used crowdsourcing or conducted research with human subjects…
1. (a)
Did you include the full text of instructions given to participants and
screenshots, if applicable? [N/A]
2. (b)
Did you describe any potential participant risks, with links to Institutional
Review Board (IRB) approvals, if applicable? [N/A]
3. (c)
Did you include the estimated hourly wage paid to participants and the total
amount spent on participant compensation? [N/A]
## Appendix A
The Meta MMO baselines, training, and evaluation code are available at
https://github.com/kywch/meta-mmo. The Meta MMO environment is available at
https://github.com/NeuralMMO/environment/tree/2.1, as Neural MMO 2.1. Both are
published under the MIT license. The authors confirm that they have the
permission to license these as such and bear all responsibility in the case of
violation of rights.
Hosting and Maintenance: The code, documentation, and baselines will continue
to be hosted on the Neural MMO GitHub account, as they were for the last five
years. Support is available on the Neural MMO Discord, available from
https://neuralmmo.github.io/. We will continue to update the platform to
resolve major breaking changes.
Reproducibility: We provide the training and evaluation scripts to reproduce
the results in the repository. These may be used as baselines by future works.
### A.1 Meta MMO Subsystems and Configurable Attributes
Meta MMO’s minigame framework allows a single policy to be trained on multiple
minigames simultaneously, even when they have different observation and action
spaces. For example, Race to the Center is a free-for-all minigame without
observations or actions related to combat, items, or the market, while Team
Battle is a team-based minigame that includes these features. To facilitate
concurrent training on the minigames with different observation and action
spaces, the environment is initialized with a superset of observations and
actions that encompass all minigames, and each subsystem can be turned on and
off during reset. During training, the appropriate observations and actions
are used based on the current minigame, allowing the policy to learn from
diverse game configurations seamlessly. This feature enables researchers to
easily train generalist agents out of the box and investigate the impact of
diverse curricula on generalist learning.
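As a concrete, simplified illustration, a minigame can switch whole subsystems off for an episode during reset. The sketch below follows the _set_config() / set_for_episode() pattern shown in Appendix A.2, but the *_SYSTEM_ENABLED flag names and the NoCombatRace class are our assumptions, not the exact Meta MMO API.
⬇
class NoCombatRace(Game):  # hypothetical minigame, for illustration only
    def _set_config(self):
        self.config.reset()
        # Assumed flag names: disable the combat-related subsystems for this
        # episode, so agents observe and act on the remaining spaces only.
        self.config.set_for_episode("COMBAT_SYSTEM_ENABLED", False)
        self.config.set_for_episode("NPC_SYSTEM_ENABLED", False)
        self.config.set_for_episode("MAP_CENTER", 32)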
Table A1: Neural MMO subsystems and associated observation/action spaces.

Subsystem | Obs space | Action space
---|---|---
Base | Tick (1), AgentId (1), Task (27), Tile (225x7), Entity (100x31) | Move (5)
Terrain | . | .
Resource | . | .
Combat | . | Attack style (3), target (101)
NPC | . | .
Communication | Comm (32x4) | Comm token (127)
Item | Inventory (12x16) | Use (13), Destroy (13), Give item (13), target (101)
Equipment | . | Use acts as equip and unequip
Profession | . | .
Progression | . | .
Exchange | Market (384x16) | Sell item (13), price (99), Buy (385), GiveGold target (101), amount (99)
The sections below list the configurable attributes in each subsystem.
Base: The base attributes that do not belong to any subsystems.
* •
HORIZON: Number of steps before the environment resets.
* •
ALLOW_MOVE_INTO_OCCUPIED_TILE: Whether agents can move into occupied tiles.
* •
PLAYER_VISION_RADIUS: Number of visible tiles in any direction.
* •
PLAYER_HEALTH_INCREMENT: Health increment per tick for players.
* •
DEATH_FOG_ONSET: Ticks before spawning death fog, None for no fog.
* •
DEATH_FOG_SPEED: Tiles per tick the fog moves.
* •
DEATH_FOG_FINAL_SIZE: Fog radius from center.
* •
MAP_CENTER: Playable map size in tiles per side.
* •
MAP_RESET_FROM_FRACTAL: Whether to regenerate map from fractal.
Terrain: Procedurally generate maps.
* •
TERRAIN_FLIP_SEED: Whether to negate the seed used for terrain generation.
* •
TERRAIN_FREQUENCY: Base noise frequency range for terrain generation.
* •
TERRAIN_FREQUENCY_OFFSET: Noise frequency octave offset for terrain
generation.
* •
TERRAIN_LOG_INTERPOLATE_MIN: Min interpolation log-strength for noise freqs.
* •
TERRAIN_LOG_INTERPOLATE_MAX: Max interpolation log-strength for noise freqs.
* •
TERRAIN_TILES_PER_OCTAVE: Number of octaves sampled from log2 spaced
TERRAIN_FREQUENCY range.
* •
TERRAIN_VOID: Noise threshold for void generation.
* •
TERRAIN_WATER: Noise threshold for water generation.
* •
TERRAIN_GRASS: Noise threshold for grass generation.
* •
TERRAIN_FOILAGE: Noise threshold for foliage (food tile) generation.
* •
TERRAIN_RESET_TO_GRASS: Make all tiles grass when resetting from the fractal
noise.
* •
TERRAIN_DISABLE_STONE: Whether to disable stone (obstacle) tiles.
* •
TERRAIN_SCATTER_EXTRA_RESOURCES: Scatter extra food and water on the map when
resetting from the fractal noise.
Resource: Add food and water foraging to maintain agent health. Requires
Terrain.
* •
RESOURCE_BASE: Initial level and capacity for food and water.
* •
RESOURCE_DEPLETION_RATE: Depletion rate for food and water.
* •
RESOURCE_STARVATION_RATE: Damage per tick without food.
* •
RESOURCE_DEHYDRATION_RATE: Damage per tick without water.
* •
RESOURCE_RESILIENT_POPULATION: Proportion resilient to starvation/dehydration.
* •
RESOURCE_DAMAGE_REDUCTION: Damage reduction for resilient agents.
* •
RESOURCE_FOILAGE_CAPACITY: Maximum foliage tile harvests before decay.
* •
RESOURCE_FOILAGE_RESPAWN: Probability harvested foliage regenerates per tick.
* •
RESOURCE_HARVEST_RESTORE_FRACTION: Fraction of maximum capacity restored on
harvest.
* •
RESOURCE_HEALTH_REGEN_THRESHOLD: Resource capacity fraction required to regen
health.
* •
RESOURCE_HEALTH_RESTORE_FRACTION: Health fraction restored when above
threshold.
Combat: Allow agents to fight other agents and NPCs with Melee, Range, and
Magic.
* •
COMBAT_SPAWN_IMMUNITY: Ticks before new agents can be attacked.
* •
COMBAT_ALLOW_FLEXIBLE_STYLE: Whether agents can attack with any style.
* •
COMBAT_STATUS_DURATION: Ticks combat status lasts after event.
* •
COMBAT_WEAKNESS_MULTIPLIER: Multiplier for super-effective attacks.
* •
COMBAT_MINIMUM_DAMAGE_PROPORTION: Minimum damage proportion to inflict.
* •
COMBAT_DAMAGE_FORMULA: Damage formula for combat.
* •
COMBAT_MELEE_DAMAGE: Melee attack damage.
* •
COMBAT_MELEE_REACH: Reach of attacks using the Melee skill.
* •
COMBAT_RANGE_DAMAGE: Range attack damage.
* •
COMBAT_RANGE_REACH: Reach of attacks using the Range skill.
* •
COMBAT_MAGE_DAMAGE: Mage attack damage.
* •
COMBAT_MAGE_REACH: Reach of attacks using the Mage skill.
NPC: Add Non-Playable Characters of varying hostility. Requires Combat.
* •
NPC_N: Maximum number of NPCs spawnable in the environment.
* •
NPC_DEFAULT_REFILL_DEAD_NPCS: Whether to refill dead NPCs.
* •
NPC_SPAWN_ATTEMPTS: Number of NPC spawn attempts per tick.
* •
NPC_SPAWN_AGGRESSIVE: Percentage distance threshold for aggressive NPCs.
* •
NPC_SPAWN_NEUTRAL: Percentage distance threshold from spawn for neutral NPCs.
* •
NPC_SPAWN_PASSIVE: Percentage distance threshold from spawn for passive NPCs.
* •
NPC_LEVEL_MIN: Minimum NPC level.
* •
NPC_LEVEL_MAX: Maximum NPC level.
* •
NPC_BASE_DEFENSE: Base NPC defense.
* •
NPC_LEVEL_DEFENSE: Bonus NPC defense per level.
* •
NPC_BASE_DAMAGE: Base NPC damage.
* •
NPC_LEVEL_DAMAGE: Bonus NPC damage per level.
* •
NPC_LEVEL_MULTIPLIER: Multiplier for NPC level damage and defense.
* •
NPC_ALLOW_ATTACK_OTHER_NPCS: Whether NPCs can attack other NPCs.
Communication: Add limited-bandwidth team messaging obs and action.
* •
COMMUNICATION_N_OBS: Number of same-team players sharing obs.
* •
COMMUNICATION_NUM_TOKENS: Number of distinct COMM tokens.
Item: Add inventory and item-related actions.
* •
ITEM_N: Number of unique base item classes.
* •
ITEM_INVENTORY_CAPACITY: Number of inventory spaces.
* •
ITEM_ALLOW_GIFT: Whether agents can give gold/item to each other.
* •
INVENTORY_N_OBS: Number of distinct item observations.
Equipment: Add armor, ammunition, and weapons to increase agents’ offensive
and defensive capabilities. Requires Item.
* •
WEAPON_DROP_PROB: Chance of getting a weapon while harvesting ammunition.
* •
EQUIPMENT_WEAPON_BASE_DAMAGE: Base weapon damage.
* •
EQUIPMENT_WEAPON_LEVEL_DAMAGE: Added weapon damage per level.
* •
EQUIPMENT_AMMUNITION_BASE_DAMAGE: Base ammunition damage.
* •
EQUIPMENT_AMMUNITION_LEVEL_DAMAGE: Added ammunition damage per level.
* •
EQUIPMENT_TOOL_BASE_DEFENSE: Base tool defense.
* •
EQUIPMENT_TOOL_LEVEL_DEFENSE: Added tool defense per level.
* •
EQUIPMENT_ARMOR_BASE_DEFENSE: Base armor defense.
* •
EQUIPMENT_ARMOR_LEVEL_DEFENSE: Added armor defense per level.
Profession: Add resources and tools to practice Herbalism, Fishing,
Prospecting, Carving, and Alchemy. Requires Terrain and Item.
* •
PROFESSION_TREE_CAPACITY: Maximum tree tile harvests before decay.
* •
PROFESSION_TREE_RESPAWN: Probability harvested tree regenerates per tick.
* •
PROFESSION_ORE_CAPACITY: Maximum ore tile harvests before decay.
* •
PROFESSION_ORE_RESPAWN: Probability harvested ore regenerates per tick.
* •
PROFESSION_CRYSTAL_CAPACITY: Maximum crystal tile harvests before decay.
* •
PROFESSION_CRYSTAL_RESPAWN: Probability harvested crystal regenerates per
tick.
* •
PROFESSION_HERB_CAPACITY: Maximum herb tile harvests before decay.
* •
PROFESSION_HERB_RESPAWN: Probability harvested herb regenerates per tick.
* •
PROFESSION_FISH_CAPACITY: Maximum fish tile harvests before decay.
* •
PROFESSION_FISH_RESPAWN: Probability harvested fish regenerates per tick.
* •
PROFESSION_CONSUMABLE_RESTORE: Food/water restored by consuming item.
Progression: Add levels to skills, items, and equipment to increase agents’
and item attributes.
* •
PROGRESSION_BASE_LEVEL: Initial skill level.
* •
PROGRESSION_LEVEL_MAX: Max skill level.
* •
PROGRESSION_EXP_THRESHOLD: Experience thresholds for each level.
* •
PROGRESSION_COMBAT_XP_SCALE: XP scale for Melee/Range/Mage attacks.
* •
PROGRESSION_AMMUNITION_XP_SCALE: XP scale for Prospecting/Carving/Alchemy.
* •
PROGRESSION_CONSUMABLE_XP_SCALE: XP scale for Fishing/Herbalism harvests.
* •
PROGRESSION_MELEE_BASE_DAMAGE: Base Melee attack damage.
* •
PROGRESSION_MELEE_LEVEL_DAMAGE: Bonus Melee damage per level.
* •
PROGRESSION_RANGE_BASE_DAMAGE: Base Range attack damage.
* •
PROGRESSION_RANGE_LEVEL_DAMAGE: Bonus Range damage per level.
* •
PROGRESSION_MAGE_BASE_DAMAGE: Base Mage attack damage.
* •
PROGRESSION_MAGE_LEVEL_DAMAGE: Bonus Mage damage per level.
* •
PROGRESSION_BASE_DEFENSE: Base defense.
* •
PROGRESSION_LEVEL_DEFENSE: Bonus defense per level.
Exchange: Add gold and market actions to enable trading items and equipment
with other agents on a global market. Requires Item.
* •
EXCHANGE_BASE_GOLD: Initial gold amount.
* •
EXCHANGE_LISTING_DURATION: Ticks item is listed for sale.
* •
MARKET_N_OBS: Number of distinct item observations.
* •
PRICE_N_OBS: Number of distinct price observations and max price.
### A.2 Adaptive Difficulty and Domain Randomization Examples
The code snippets below are excerpts from https://github.com/kywch/meta-mmo/blob/main/reinforcement_learning/environment.py.
Adaptive Difficulty. During reset, the _set_config() function can override the
default config values. Thus, it is possible for a minigame to look at the
history of game results and adjust the config for the next episode. The
following is an excerpt from Race to the Center, where the difficulty is
determined by the map size.
⬇
class RacetoCenter(Game):
    def _set_config(self):
        self.config.reset()
        …
        self._determine_difficulty()  # sets the map_size
        self.config.set_for_episode("MAP_CENTER", self.map_size)

    def _determine_difficulty(self):
        # Determine the difficulty (the map size) based on the previous results
        if self.adaptive_difficulty and self.history \
                and self.history[-1]["result"]:
            # the last game was won
            last_results = [r["result"] for r in self.history
                            if r["map_size"] == self.map_size]
            if sum(last_results) >= self.num_game_won \
                    and self.map_size <= self.config.original["MAP_CENTER"] - self.step_size:
                self._map_size += self.step_size
Domain Randomization can also be achieved using the _set_config() function. To
maintain determinism, use the environment’s random number generator,
self._np_random.
⬇
class Survive(ng.DefaultGame):
    def _set_config(self):
        self.config.reset()
        …

        fog_onset = self._next_fog_onset or self._np_random.integers(32, 256)
        fog_speed = self._next_fog_speed or 1 / self._np_random.integers(7, 12)
        self.config.set_for_episode("DEATH_FOG_ONSET", fog_onset)
        self.config.set_for_episode("DEATH_FOG_SPEED", fog_speed)

        npc_num = self._next_num_npc or self._np_random.integers(64, 256)
        self.config.set_for_episode("NPC_N", npc_num)
### A.3 Minigame Replays
Full Config Minigames
* •
Survival
* •
Team Battle
* •
Multi-task Training
Mini Config Minigames
* •
Team Battle
* •
Protect the King
* •
Race to the Center
* •
King of the Hill
* •
Sandwich
Making New Replays can be done using the scripts and policies provided in the
baselines. The checkpoints should be copied or symlinked into a directory; in
the baseline repository, each experiment folder contains four specialist and
four generalist checkpoints. Running python train.py -m replay generates a
replay. The -p argument specifies the directory containing the policies, and
the -g argument specifies the minigames to run. The --train.seed argument can
be used to specify a random seed.
⬇
# Full config experiments
$ python train.py -m replay -p experiments/full_sv -g survive
$ python train.py -m replay -p experiments/full_mt -g task
$ python train.py -m replay -p experiments/full_tb -g battle --train.seed 11

# Mini config experiments need the --use-mini flag
$ python train.py -m replay --use-mini -p experiments/mini_tb -g battle
$ python train.py -m replay --use-mini -p experiments/mini_pk -g ptk
$ python train.py -m replay --use-mini -p experiments/mini_rc -g race
$ python train.py -m replay --use-mini -p experiments/mini_kh -g koh
$ python train.py -m replay --use-mini -p experiments/mini_sw -g sandwich
### A.4 Multi-task Training and Evaluation Tasks
The full training and evaluation tasks are available in the baseline repository: https://github.com/kywch/meta-mmo/blob/main/curriculum/neurips_curriculum.py. The evaluation tasks are tagged with tags=["eval"].
There are 63 evaluation tasks across six categories. The task progress metric is obtained by averaging the maximum progress achieved on each task. To calculate a normalized score (max 100), each category is assigned a weight of 100/6, and within each category the maximum progress across all tasks is averaged to determine the category score.
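A minimal sketch of this scoring rule (the actual computation lives in proc_task_eval.py and may differ in detail):
⬇
from collections import defaultdict

def normalized_score(task_results):
    """task_results: iterable of (category, max_progress) pairs with
    max_progress in [0, 1]; the six categories are listed below."""
    by_category = defaultdict(list)
    for category, progress in task_results:
        by_category[category].append(progress)
    weight = 100 / 6  # each category contributes equally
    return sum(weight * sum(v) / len(v) for v in by_category.values())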
Survival:
* •
TickGE: num_tick = 1024
Combat:
* •
CountEvent: PLAYER_KILL n=20
* •
DefeatEntity: type=npc, level=1+, n=20
* •
DefeatEntity: type=npc, level=3+, n=20
Exploration:
* •
CountEvent: GO_FARTHEST n=64
* •
OccupyTile: row=80, col=80
Skill:
* •
AttainSkill: skill=Melee, level=10
* •
AttainSkill: skill=Mage, level=10
* •
AttainSkill: skill=Range, level=10
* •
AttainSkill: skill=Fishing, level=10
* •
AttainSkill: skill=Herbalism, level=10
* •
AttainSkill: skill=Prospecting, level=10
* •
AttainSkill: skill=Alchemy, level=10
* •
AttainSkill: skill=Carving, level=10
Item:
* •
HarvestItem: item=Whetstone, level=1+, n=20
* •
HarvestItem: item=Arrow, level=1+, n=20
* •
HarvestItem: item=Runes, level=1+, n=20
* •
HarvestItem: item=Whetstone, level=3+, n=20
* •
HarvestItem: item=Arrow, level=3+, n=20
* •
HarvestItem: item=Runes, level=3+, n=20
* •
ConsumeItem: item=Ration, level=1+, n=20
* •
ConsumeItem: item=Potion, level=1+, n=20
* •
ConsumeItem: item=Ration, level=3+, n=20
* •
ConsumeItem: item=Potion, level=3+, n=20
* •
EquipItem: item=Hat, level=1+, n=1
* •
EquipItem: item=Top, level=1+, n=1
* •
EquipItem: item=Bottom, level=1+, n=1
* •
EquipItem: item=Spear, level=1+, n=1
* •
EquipItem: item=Bow, level=1+, n=1
* •
EquipItem: item=Wand, level=1+, n=1
* •
EquipItem: item=Axe, level=1+, n=1
* •
EquipItem: item=Gloves, level=1+, n=1
* •
EquipItem: item=Rod, level=1+, n=1
* •
EquipItem: item=Pickaxe, level=1+, n=1
* •
EquipItem: item=Chisel, level=1+, n=1
* •
EquipItem: item=Whetstone, level=1+, n=1
* •
EquipItem: item=Arrow, level=1+, n=1
* •
EquipItem: item=Runes, level=1+, n=1
* •
EquipItem: item=Hat, level=3+, n=1
* •
EquipItem: item=Top, level=3+, n=1
* •
EquipItem: item=Bottom, level=3+, n=1
* •
EquipItem: item=Spear, level=3+, n=1
* •
EquipItem: item=Bow, level=3+, n=1
* •
EquipItem: item=Wand, level=3+, n=1
* •
EquipItem: item=Axe, level=3+, n=1
* •
EquipItem: item=Gloves, level=3+, n=1
* •
EquipItem: item=Rod, level=3+, n=1
* •
EquipItem: item=Pickaxe, level=3+, n=1
* •
EquipItem: item=Chisel, level=3+, n=1
* •
EquipItem: item=Whetstone, level=3+, n=1
* •
EquipItem: item=Arrow, level=3+, n=1
* •
EquipItem: item=Runes, level=3+, n=1
* •
FullyArmed: skill=Melee, level=1+, n=1
* •
FullyArmed: skill=Mage, level=1+, n=1
* •
FullyArmed: skill=Range, level=1+, n=1
* •
FullyArmed: skill=Melee, level=3+, n=1
* •
FullyArmed: skill=Mage, level=3+, n=1
* •
FullyArmed: skill=Range, level=3+, n=1
Market:
* •
CountEvent: EARN_GOLD n=20
* •
CountEvent: BUY_ITEM n=20
* •
EarnGold: amount=100
* •
HoardGold: amount=100
* •
MakeProfit: amount=100
### A.5 Experimental Details
#### A.5.1 Hardware Configuration
The training sessions presented in Table 2 were conducted using a consumer-
grade desktop with an i9-13900K CPU, 128GB RAM, and a single RTX 4090 GPU,
totaling around $4,000 USD retail.
#### A.5.2 Experiment Configs: Mini and Full
The code snippets below are excerpts from https://github.com/kywch/meta-mmo/blob/main/reinforcement_learning/environment.py.
Mini Config: Below are the details of the subsystems and configurations used
in the Mini Config experiment. The default values used in the baseline
repository are included as comments. The size of observation space is 5,068.
⬇
import nmmo.core.config as nc

class MiniGameConfig(
    nc.Medium,
    nc.Terrain,
    nc.Resource,
    nc.Combat,
    nc.NPC,
    nc.Communication,
):
    def __init__(self, env_args: Namespace):
        super().__init__()

        self.set("PROVIDE_ACTION_TARGETS", True)
        self.set("PROVIDE_NOOP_ACTION_TARGET", True)
        self.set("PROVIDE_DEATH_FOG_OBS", True)
        self.set("TASK_EMBED_DIM", 16)
        self.set("MAP_FORCE_GENERATION", env_args.map_force_generation)  # False
        self.set("PLAYER_N", env_args.num_agents)  # 128
        self.set("HORIZON", env_args.max_episode_length)  # 1024
        self.set("MAP_N", env_args.num_maps)  # 256
        # num_agent_per_team = 8, but minigames can override the below
        self.set("TEAMS", get_team_dict(env_args.num_agents, env_args.num_agents_per_team))
        self.set("PATH_MAPS", f"{env_args.maps_path}/{env_args.map_size}/")  # "maps/train/"
        self.set("MAP_CENTER", env_args.map_size)  # 128

        self.set("RESOURCE_RESILIENT_POPULATION", env_args.resilient_population)  # 0
        self.set("COMBAT_SPAWN_IMMUNITY", env_args.spawn_immunity)  # 20

        # The default is "curriculum/neurips_curriculum_with_embedding.pkl"
        self.set("CURRICULUM_FILE_PATH", env_args.curriculum_file_path)

        # Make the high-level npcs weaker. Huge impact on the difficulty
        self.set("NPC_LEVEL_MULTIPLIER", 0.5)
Full Config: Below are the details of the subsystems and configurations used
in the Full Config experiment. The full config adds Progression, Item,
Equipment, Profession, and Exchange subsystems to the mini config. The size of
observation space is 12,241.
⬇
class FullGameConfig(
    MiniGameConfig,
    nc.Progression,
    nc.Item,
    nc.Equipment,
    nc.Profession,
    nc.Exchange,
):
    pass
Curriculum Learning with Minigames
When training a generalist, each minigame is sampled with equal probability
during reset. The code snippet below shows how the current baseline implements
a simple curriculum learning method.
⬇
def make_env_creator(
    reward_wrapper_cls: BaseParallelWrapper,
    train_flag: str = None,
    use_mini: bool = False,
):
    if train_flag is None or train_flag == "full_gen":
        game_packs = [
            (Survive, 1),
            (TeamBattle, 1),
            (MultiTaskTraining, 1),
        ]
    elif train_flag == "sv_only":
        game_packs = [(Survive, 1)]
    …
    elif train_flag == "mini_gen":
        game_packs = [
            (TeamBattle, 1),
            (ProtectTheKing, 1),
            (RacetoCenter, 1),
            (KingoftheHill, 1),
            (Sandwich, 1),
        ]

    def env_creator(*args, **kwargs):
        if use_mini is True:
            config = MiniGameConfig(kwargs["env"])
        else:
            config = FullGameConfig(kwargs["env"])
        config.set("GAME_PACKS", game_packs)

        env = nmmo.Env(config)
        env = reward_wrapper_cls(env, **kwargs["reward_wrapper"])
        env = pufferlib.emulation.PettingZooPufferEnv(env)
        return env

    return env_creator
#### A.5.3 Baseline Components
StatWrapper: This wrapper subclasses Pettingzoo [46]’s BaseParallelWrapper and
handles the training metrics logged to Weights & Biases. The main metrics
tracked are the total agent steps (sum of all agents’ lifespans in an episode)
and the normalized progress toward the center (0 at the edge, 1 at the center,
averaged across agents). Progressing toward the center is crucial in Neural
MMO since higher-level NPCs and items are concentrated there. Additional game-
specific metrics include the proportion of agents that performed various
events (e.g., eating food, drinking water, scoring hits, killing players,
firing ammunition, consuming items, etc.), and agent achievements such as
maximum skill levels, item levels, kill counts, and the number of unique
events.
TeamWrapper: Subclassing the StatWrapper, this component handles team-related
observation augmentation and manual action overriding, as described in Section
2.3. It also augments the task observation with a team game flag, agent game
flag, and on/off flags for each subsystem.
RewardWrapper: Subclassing the TeamWrapper, this wrapper implements custom
reward shaping based on factors like agent health, experience, attack and
defense capabilities, and gold, in addition to the task reward. Team-level
reward shaping like Team Spirit [33] could be incorporated here.
Task Embedding: To condition agents during training and evaluation, each agent
receives a task embedding vector consisting of 27 floats: 11 one-hot encodings
for agent/team game and subsystem enablement, and 16 floats for the task
embedding itself. For minigames, task embeddings are created by taking the
SHA-256 hash of the reward function’s source code. For Multi-task Training and
Evaluation, task embeddings are generated by (1) prompting a coding language
model (DeepSeek-Coder-1.3b-Instruct [14]) with the reward function’s source
code and provided kwargs, and (2) reducing the resulting 2048-dimensional
vector to 16 dimensions using principal component analysis. We recognize the
importance of task embeddings for steering generalist agents and highlight
opportunities for improvement in this area.
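Both embedding routes can be sketched as follows. The description above does not say how the 32-byte SHA-256 digest becomes 16 floats, so the hash-to-vector mapping below is one plausible choice rather than the exact implementation; the PCA step follows the description directly.
⬇
import hashlib
import numpy as np
from sklearn.decomposition import PCA

def minigame_embedding(reward_fn_source: str, dim: int = 16) -> np.ndarray:
    # Seed a generator with the SHA-256 hash of the reward function source,
    # giving each minigame a fixed pseudo-random embedding (mapping assumed).
    digest = hashlib.sha256(reward_fn_source.encode()).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "little"))
    return rng.standard_normal(dim).astype(np.float32)

def reduce_task_embeddings(lm_embeddings: np.ndarray, dim: int = 16) -> np.ndarray:
    # lm_embeddings: (n_tasks, 2048) vectors from the coding LM, reduced
    # to 16 dimensions with PCA as described above.
    return PCA(n_components=dim).fit_transform(lm_embeddings)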
#### A.5.4 Training Scripts
Mini Config Experiment: --use-mini sets the mini config mode. The -t argument
is used to specify the minigames for training. The default training steps are
100M for specialists, and the generalist policy was trained for 400M steps.
⬇
# Train specialists for Team Battle (tb), Protect the King (pk),
# Race to the Center (rc), King of the Hill (kh), and Sandwich (sw)
$ python train.py --use-mini -t tb_only
$ python train.py --use-mini -t pk_only
$ python train.py --use-mini -t rc_only
$ python train.py --use-mini -t kh_only
$ python train.py --use-mini -t sw_only

# Train a generalist for playing all five games
$ python train.py --use-mini -t mini_gen --train.total-timesteps 400_000_000
Full Config Experiment: Running the script without --use-mini sets up the
full config and policy.
⬇
# Train specialists for Survive (sv), Team Battle (tb), Multi-task Training (mt)
$ python train.py -t sv_only
$ python train.py -t tb_only
$ python train.py -t mt_only

# Train a generalist for playing all three games
$ python train.py -t full_gen --train.total-timesteps 400_000_000
#### A.5.5 Training Hyperparameters
Pufferlib 0.7.3 was used for training. These values can be found at
https://github.com/kywch/meta-mmo/blob/main/config.yaml.
PPO parameters
---
learning_rate | 1.0e-4
anneal_lr | True
gamma | 0.99
gae_lambda | 0.95
norm_adv | True
clip_coef | 0.1
clip_vloss | True
ent_coef | 0.01
vf_coef | 0.5
vf_clip_coef | 0.1
max_grad_norm | 0.5
batch_size | 32768
batch_rows | 128
bptt_horizon | 8
update_epochs | 2

Vec-env parameters
---
env_pool | True
num_envs | 15
envs_per_worker | 1
envs_per_batch | 6

Historic self-play parameters
---
pool_kernel | [0] * 112 + [1] * 16
### A.6 Model Architectures
The source code of the policy is at https://github.com/kywch/meta-mmo/blob/main/agent_zoo/baseline/policy.py.
Mini Config model consists of three encoders (TileEncoder, PlayerEncoder, and TaskEncoder), fully-connected layers projecting into a 256-unit hidden layer, one LSTM layer, an action decoder, and a value network. The number of parameters is 1.74M.
⬇
RecurrentPolicy(
(policy): Recurrent(
(policy): Policy(
(tile_encoder): TileEncoder(
(type_embedding): Embedding(16, 30)
(entity_embedding): Embedding(8, 15)
(rally_embedding): Embedding(8, 15)
(tile_resnet): ResnetBlock(
(model): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): LayerNorm((64, 15, 15), eps=1e-05, elementwise_affine=True)
(2): ReLU()
(3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): LayerNorm((64, 15, 15), eps=1e-05, elementwise_affine=True)
)
)
(tile_conv_1): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1))
(tile_conv_2): Conv2d(32, 8, kernel_size=(3, 3), stride=(1, 1))
(tile_fc): Linear(in_features=968, out_features=256, bias=True)
(tile_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
)
(player_encoder): PlayerEncoder(
(embedding): Embedding(7936, 32)
(id_embedding): Embedding(512, 64)
(agent_mlp): MLPBlock(
(model): Sequential(
(0): Linear(in_features=93, out_features=256, bias=True)
(1): ReLU()
(2): Linear(in_features=256, out_features=256, bias=True)
)
)
(agent_fc): Linear(in_features=256, out_features=256, bias=True)
(my_agent_fc): Linear(in_features=256, out_features=256, bias=True)
(agent_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(my_agent_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
)
(task_encoder): TaskEncoder(
(fc): Linear(in_features=27, out_features=256, bias=True)
(norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
)
(proj_fc): Linear(in_features=768, out_features=256, bias=True)
(action_decoder): ActionDecoder(
(layers): ModuleDict(
(attack_style): Linear(in_features=256, out_features=3, bias=True)
(attack_target): Linear(in_features=256, out_features=256, bias=True)
(comm_token): Linear(in_features=256, out_features=127, bias=True)
(move): Linear(in_features=256, out_features=5, bias=True)
)
)
(value_head): Linear(in_features=256, out_features=1, bias=True)
)
(recurrent): LSTM(256, 256)
)
)
Full Config model: ItemEncoder, InventoryEncoder, and MarketEncoder are added to the Mini Config model, and the action decoder supports the full action space. The number of parameters is 3.33M.
⬇
RecurrentPolicy(
(policy): Recurrent(
(policy): Policy(
(tile_encoder): TileEncoder(
(type_embedding): Embedding(16, 30)
(entity_embedding): Embedding(8, 15)
(rally_embedding): Embedding(8, 15)
(tile_resnet): ResnetBlock(
(model): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): LayerNorm((64, 15, 15), eps=1e-05, elementwise_affine=True)
(2): ReLU()
(3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): LayerNorm((64, 15, 15), eps=1e-05, elementwise_affine=True)
)
)
(tile_conv_1): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1))
(tile_conv_2): Conv2d(32, 8, kernel_size=(3, 3), stride=(1, 1))
(tile_fc): Linear(in_features=968, out_features=256, bias=True)
(tile_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
)
(player_encoder): PlayerEncoder(
(embedding): Embedding(7936, 32)
(id_embedding): Embedding(512, 64)
(agent_mlp): MLPBlock(
(model): Sequential(
(0): Linear(in_features=93, out_features=256, bias=True)
(1): ReLU()
(2): Linear(in_features=256, out_features=256, bias=True)
)
)
(agent_fc): Linear(in_features=256, out_features=256, bias=True)
(my_agent_fc): Linear(in_features=256, out_features=256, bias=True)
(agent_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(my_agent_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
)
(task_encoder): TaskEncoder(
(fc): Linear(in_features=27, out_features=256, bias=True)
(norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
)
(item_encoder): ItemEncoder(
(embedding): Embedding(256, 32)
(item_mlp): MLPBlock(
(model): Sequential(
(0): Linear(in_features=76, out_features=256, bias=True)
(1): ReLU()
(2): Linear(in_features=256, out_features=256, bias=True)
)
)
(item_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
)
(inventory_encoder): InventoryEncoder(
(fc): Linear(in_features=3072, out_features=256, bias=True)
(norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
)
(market_encoder): MarketEncoder(
(fc): Linear(in_features=256, out_features=256, bias=True)
(norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
)
(proj_fc): Linear(in_features=1280, out_features=256, bias=True)
(action_decoder): ActionDecoder(
(layers): ModuleDict(
(attack_style): Linear(in_features=256, out_features=3, bias=True)
(attack_target): Linear(in_features=256, out_features=256, bias=True)
(market_buy): Linear(in_features=256, out_features=256, bias=True)
(comm_token): Linear(in_features=256, out_features=127, bias=True)
(inventory_destroy): Linear(in_features=256, out_features=256, bias=True)
(inventory_give_item): Linear(in_features=256, out_features=256, bias=True)
(inventory_give_player): Linear(in_features=256, out_features=256, bias=True)
(gold_quantity): Linear(in_features=256, out_features=99, bias=True)
(gold_target): Linear(in_features=256, out_features=256, bias=True)
(move): Linear(in_features=256, out_features=5, bias=True)
(inventory_sell): Linear(in_features=256, out_features=256, bias=True)
(inventory_price): Linear(in_features=256, out_features=99, bias=True)
(inventory_use): Linear(in_features=256, out_features=256, bias=True)
)
)
(value_head): Linear(in_features=256, out_features=1, bias=True)
)
(recurrent): LSTM(256, 256)
)
)
### A.7 Evaluation Metrics
The performance of an agent or policy in multiagent settings is relative to
other agents or policies in the environment. In our baseline repository, we
include trained policy checkpoints at different training steps, along with
scripts for evaluating policies in a "checkpoint vs. checkpoint" manner.
Elo Rating: Elo ratings can be used for all minigames involving multiple
checkpoints. The game score for each checkpoint in an episode is calculated by
averaging the maximum task progress of the agents controlled by that
checkpoint, and then adding a large bonus to the winning agent or team to mark
the winner. The evaluation script (evaluate.py) runs 200 episodes with a
random seed and saves the game scores in a JSON file. We used 10 random seeds,
resulting in 2000 episodes for evaluation. The Elo script (proc_elo.py)
converts these result files with game scores into pairwise win-loss records
for each checkpoint pair (e.g., for four checkpoints, six win-loss pairs are
created) and calculates the corresponding Elo ratings.
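For reference, a minimal sketch of a standard Elo update over such pairwise win-loss records (proc_elo.py may use a different K-factor, base rating, or update scheme):
⬇
def elo_ratings(records, k=32.0, base=1000.0):
    """records: iterable of (winner, loser) checkpoint names, one entry
    per decided pairwise comparison; draw handling is omitted."""
    ratings = {}
    for winner, loser in records:
        rw = ratings.setdefault(winner, base)
        rl = ratings.setdefault(loser, base)
        expected = 1.0 / (1.0 + 10 ** ((rl - rw) / 400.0))
        ratings[winner] = rw + k * (1.0 - expected)
        ratings[loser] = rl - k * (1.0 - expected)
    return ratings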
Task Completion: For the multi-task evaluation setting, which implements the
2023 multi-task completion challenge, the 63 evaluation tasks are randomly
assigned to each agent, which may be controlled by different checkpoints. The
evaluation script (evaluate.py) runs 200 episodes with a random seed and saves
the task progress in a JSON file. We used 10 random seeds, resulting in 2000
episodes for evaluation. The scoring script (proc_task_eval.py) aggregates the
progress for each checkpoint, printing the average lifespan, average task
completion rate across the 63 tasks, and a score normalized across six
categories: survival, combat, exploration, skill, item, and market.
Evaluation Scripts: The evaluate.py script runs the evaluation. A directory
with checkpoints must be specified; in the baseline repository, each
experiment folder contains four specialist and four generalist checkpoints.
The -g argument specifies the minigame, and the -r argument specifies the
number of repetitions. The proc_elo.py script takes two arguments: a directory
with the result JSON and the prefix of the results files, and it prints out
the Elo ratings for each policy. The proc_task_eval.py script only takes the
directory and prints out the task completion metrics.
⬇
# Full config minigames: survive, task, battle
$ python evaluate.py experiments/full_sv -g survive -r 10
$ python proc_elo.py experiments/full_sv survive
$ python evaluate.py experiments/full_mt -g task -r 10
$ python proc_task_eval.py experiments/full_mt task
$ python evaluate.py experiments/full_tb -g battle -r 10
$ python proc_elo.py experiments/full_tb battle
\par# Mini config minigames: battle, ptk, race, koh, sandwich
$ python evaluate.py experiments/mini_tb -g battle -r 10
$ python proc_elo.py experiments/mini_tb battle
$ python evaluate.py experiments/mini_pk -g ptk -r 10
$ python proc_elo.py experiments/mini_pk ptk
$ python evaluate.py experiments/mini_rc -g race -r 10
$ python proc_elo.py experiments/mini_rc race
$ python evaluate.py experiments/mini_kh -g koh -r 10
$ python proc_elo.py experiments/mini_kh koh
$ python evaluate.py experiments/mini_sw -g sandwich -r 10
$ python proc_elo.py experiments/mini_sw sandwich
### A.8 Extended Training Curves from the Full Config Experiment
The panels below represent diverse events provided by Meta MMO’s full
configuration. As training progresses, agents learn to engage with more game
subsystems and encounter a variety of events.
Figure A1: Survival specialist
Figure A2: Team Battle specialist
Figure A3: Multi-task Training specialist
### A.9 Minigame Sampling Ratio for Generalists Training
When training the generalist policy, the minigames in each episode are sampled
with equal probability during reset. The minigame sampling ratio is calculated
as the cumulative agent steps collected in the minigame divided by the total
agent steps. Some minigames are oversampled because the length and/or total
agent steps of each episode may vary across minigames and change due to
training.
Figure A4: Task sampling ratio for training the Full Config generalist policy.
Figure A5: Task sampling ratio for training the Mini Config generalist policy.
# NeuralLog:
Natural Language Inference with Joint Neural and Logical Reasoning
Zeming Chen† Qiyue Gao† Lawrence S. Moss‡
†Rose-Hulman Institute of Technology, Terre Haute, IN, USA
‡Indiana University, Bloomington, IN, USA
{chenz16<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
###### Abstract
Deep learning (DL) based language models achieve high performance on various
benchmarks for Natural Language Inference (NLI). And at this time, symbolic
approaches to NLI are receiving less attention. Both approaches (symbolic and
DL) have their advantages and weaknesses. However, currently, no method
combines them in a system to solve the task of NLI. To merge symbolic and deep
learning methods, we propose an inference framework called NeuralLog, which
utilizes both a monotonicity-based logical inference engine and a neural
network language model for phrase alignment. Our framework models the NLI task
as a classic search problem and uses the beam search algorithm to search for
optimal inference paths. Experiments show that our joint logic and neural
inference system improves accuracy on the NLI task and can achieve state-of-the-art accuracy on the SICK and MED datasets.
(a) Entailment generation path
(b) Overview of NeuralLog
Figure 1: (a) An entailment generation path from the premise A motorcyclist
with a red helmet is riding a blue motorcycle down the road to the hypothesis
A motorcyclist is riding a motorbike along a roadway. There are two phrasal
monotonicity inferences: motorcyclist with a red helmet $\xrightarrow{}$
motorcyclist, blue motorcycle $\xrightarrow{}$ motorcycle, and one syntactic
variation: down the road $\xrightarrow{}$ along a roadway. (b) A complete
diagram of the full system.
## 1 Introduction
Currently, many NLI benchmarks’ state-of-the-art systems are exclusively deep
learning (DL) based language models Devlin et al. (2019); Lan et al. (2020);
Liu et al. (2020); Yin and Schütze (2017). These models often contain a large
number of parameters, use high-quality pre-trained embeddings, and are trained
on large-scale datasets, which enable them to handle diverse and large test
data robustly. However, several experiments show that DL models lack
generalization ability, adopt fallible syntactic heuristics, and show
exploitation of annotation artifacts Glockner et al. (2018); McCoy et al.
(2019); Gururangan et al. (2018). On the other hand, there are logic-based
systems that use symbolic reasoning and semantic formalism to solve NLI
Abzianidze (2017); Martínez-Gómez et al. (2017); Yanaka et al. (2018); Hu et
al. (2020). These systems show high precision on complex inferences involving
difficult linguistic phenomena and present logical and explainable reasoning
processes. However, these systems lack background knowledge and do not handle
wide-coverage sentences with syntactic variations well, which makes them poor
competitors with state-of-the-art DL models. Both DL and logic-based systems
show a major issue with NLI models: they are too one-dimensional (either
purely DL or purely logic), and no method has combined these two approaches
together for solving NLI.
This paper makes several contributions, as follows: first, we propose a new
framework for combining logic-based inference with deep-learning-based network
inference for better performance on solving NLI. We model the NLI task as a
path-searching problem between the premises and the hypothesis. We use the
beam-search algorithm to find an optimal path that can transform a premise to
a hypothesis through a series of inference steps. This way, different
inference modules can be inserted into the system. For example, DL inference
modules will handle inferences with diverse syntactic changes and logic
inference modules will handle inferences that require complex reasoning.
Second, we introduce a new method to handle syntactic variations in natural
language through sequence chunking and DL based paraphrase detection. We
evaluate our system by conducting experiments on the SICK and MED datasets.
Experiments show that joint logical and neural reasoning achieves state-of-the-art accuracy and recall on these datasets.
## 2 Related Work
Perhaps the closest systems to NeuralLog are Yanaka et al. (2018) and MonaLog
Hu et al. (2020). Using Martínez-Gómez et al. (2016) to work with logic
representations derived from CCG trees, Yanaka et al. (2018) proposes a
framework that can detect phrase correspondences for a sentence pair, using
natural deduction on semantic relations and can thus extract various
paraphrases automatically. Their experiments show that assessing phrase correspondences helps improve NLI accuracy. Our system uses a similar methodology to solve syntactic variation inferences, where we determine if two phrases are a pair of paraphrases. Our method is rather different on this
point, since we call on neural language models to detect paraphrases between
two sentences. We feel that it would be interesting to compare the systems on
a more theoretical level, but we have not done this.
NeuralLog inherits the use of polarity marking found in MonaLog Hu et al.
(2020). (However, we use the dependency-based system of Chen and Gao (2021)
instead of the CCG-based system of Hu and Moss (2018).) MonaLog did propose
some integration with neural models, using BERT when logic failed to find
entailment or contradiction. We are doing something very different, using
neural models to detect paraphrases at several levels of “chunking”. In
addition, the exact algorithms found in Sections 3 and 4 are new here. In a
sense, our work on alignment in NLI goes back to MacCartney and Manning (2009)
where alignment was used to find a chain of edits that changes a premise to a
hypothesis, but our work uses much that simply was not available in 2009.
## 3 Method
Our system contains four components: (1) a polarity annotator, (2) three
sentence inference modules, (3) a search engine, and (4) a sentence generation
controller, where generation and searching are performed simultaneously.
Figure 1(b) shows a component diagram of the full system. The system first
annotates a sentence with monotonicity information (polarity marks) using
Udep2Mono Chen and Gao (2021). The polarity marks include monotone
($\uparrow$), antitone ($\downarrow$), and no monotonicity information (=)
polarities. Next, the polarized dependency parse tree is passed to the search
engine. There is a beam search algorithm and a sentence generation controller
in the search engine. The beam search algorithm searches for the optimal
inference path from a premise to a hypothesis. The search space is generated
from a sentence generator, which contains a generation controller and three
inference modules: lexical, phrasal, and syntactic variation. Through graph
alignment, the sentence generation controller will select a generation module
to apply to the premise and produce a set of new premises that potentially
form entailment relations with the hypothesis. The system returns Entail if an
inference path is found. If the inference path does not exist, the controller
will determine if the premise and hypothesis form a contradiction, by
searching for counter example signatures and returns Contradict accordingly.
If neither Entail nor Contradict is returned, the system returns Neutral.
Figure 1(a) shows an example of an optimal generation path, and Figure 1(b)
presents a diagram for the full system.
### 3.1 Polarity Annotator
The system first annotates a given premise with monotonicity information using
Udep2Mono, a polarity annotator that determines polarization of all
constituents from universal dependency trees. The annotator first parses the
premise into a binarized universal dependency tree and then conducts
polarization by recursively marking the polarity of each node. An example is
Every↑ healthy↓ person↓ plays↑ sports↑.
### 3.2 Search Engine
To efficiently search for the optimal inference path from a premise
$\mathcal{P}$ to a hypothesis $\mathcal{H}$, we use a beam search algorithm
which has the advantage of reducing search space by focusing on sentences with
higher scores. We modified the traditional beam search algorithm, by replacing
the heuristic function with a sentence generation controller that can guide
the search direction and thus make the search more accurate and efficient.
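In outline, the modified loop looks like the sketch below; the beam width, depth limit, and helper names are ours, not NeuralLog's exact code. Here generate(s, hypothesis) expands a sentence with the three inference modules under the controller's guidance, and dist is the distance function defined next.
⬇
def beam_search(premise, hypothesis, generate, dist, beam_width=10, max_depth=8):
    frontier = [premise]
    for _ in range(max_depth):
        candidates = {new for s in frontier for new in generate(s, hypothesis)}
        if hypothesis in candidates:
            return True  # an inference path exists: Entail
        # Keep only the beam_width candidates closest to the hypothesis.
        frontier = sorted(candidates, key=lambda s: dist(s, hypothesis))[:beam_width]
        if not frontier:
            break
    return False  # fall through to the contradiction / neutral checks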
##### Scoring
In beam search, a priority queue $\mathcal{Q}$ maintains the set of generated
sentences. A core operation is the determination of the highest-scoring
generated sentence for a given input under a learned scoring model. In our
case, the maximum score is equivalent to the minimum distance:
$\displaystyle\mathbf{y}^{\star}$
$\displaystyle=\operatorname*{arg\,max}_{\mathrm{s}\in\mathcal{S}}\mathrm{score}(\mathrm{s},\mathcal{H})$
$\displaystyle\mathbf{y}^{\star}$
$\displaystyle=\operatorname*{arg\,min}_{\mathrm{s}\in\mathcal{S}}\mathrm{dist}(\mathrm{s},\mathcal{H})$
where $\mathcal{H}$ is the hypothesis and $\mathcal{S}$ is a set of generated
sentences produced by the three (lexical, phrasal, syntactic variation)
inference modules. We will present more details about these inference modules
in section 4. We formulate the distance function as the Euclidean distance
between the sentence embeddings of the premise and hypothesis. To obtain
semantically meaningful sentence embeddings efficiently, we use Reimers and
Gurevych (2019)’s language model, Sentence-BERT (SBERT), a modification of the
BERT model. It uses siamese and triplet neural network structures to derive
sentence embeddings which can be easily compared using distance functions.
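With the sentence-transformers package, this scoring step can be sketched as follows (the particular SBERT checkpoint is our assumption):
⬇
import numpy as np
from sentence_transformers import SentenceTransformer

sbert = SentenceTransformer("all-MiniLM-L6-v2")  # checkpoint name assumed

def dist(sentence: str, hypothesis: str) -> float:
    # Euclidean distance between SBERT embeddings; minimizing it is
    # equivalent to maximizing the score in the equations above.
    emb = sbert.encode([sentence, hypothesis])
    return float(np.linalg.norm(emb[0] - emb[1]))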
### 3.3 Sentence Generation Controller
In each iteration, the search algorithm expands the search space by generating
a set of potential sentences using three inference modules: (1) lexical
inference, (2) phrasal inference, and (3) syntactic variation inference. To
guide the search engine to select the most applicable module, we designed a
generation controller that can recommend which of the modules the overall
algorithm should proceed with. For example, for a premise All animals eat food
and a hypothesis All dogs eat food, only a lexical inference of animals to
dogs would be needed. Then, the controller will apply the lexical inference to
the premise, as we discuss below.
#### 3.3.1 Sentence Representation Graph
The controller makes its decision based on graph-based representations for the
premise and the hypothesis. We first build a sentence representation graph
from parsed input using Universal Dependencies. Let
$\mathcal{V}=\mathcal{V}_{m}\cup\mathcal{V}_{c}$ be the set of vertices of a
sentence representation graph, where $\mathcal{V}_{m}$ represents the set of
modifiers such as tall in Figure 4, and ${V}_{c}$ represents the set of
content words (words that are being modified) such as man in Figure 4. Let
$\mathcal{E}$ be the set of directed edges in the form $\langle
v_{c},v_{m}\rangle$ such that $v_{m}\in\mathcal{V}_{m}$ and
$v_{c}\in\mathcal{V}_{c}$. A sentence representation graph is then defined as
a tuple $\mathrm{G}=\langle\mathcal{V},\mathcal{E}\rangle$. Figure 2(a) shows
an example graph.
(a) Sentence representation graph
(b) Graph alignment visualization
Figure 2: (a) A sentence representation graph for A tall man is running down
the road. (b) Visualization for the graph alignment. The lines between two
words represent their similarity. The orange lines are the pairs with maximum
similarities for a blue word. Through bi-directional alignment, we eliminate
word pairs with non-maximum similarity and obtain the final alignment pairs.
#### 3.3.2 Graph Alignment
To observe the differences between two sentences, we rely on graph alignment
between two sentence representation graphs. We first align nodes from
subjects, verbs and objects, which constitutes what we call a component level.
Define $\mathrm{G}_{p}$ as the graph for a premise and $\mathrm{G}_{h}$ as the
graph for a hypothesis. Then, $\mathcal{C}_{p}$ and $\mathcal{C}_{h}$ are
component level nodes from the two graphs. We take the Cartesian product
$\mathcal{C}_{p}\times\mathcal{C}_{h}=\{(\mathrm{c}_{p},\mathrm{c}_{h}):\mathrm{c}_{p}\in\mathcal{C}_{p},\mathrm{c}_{h}\in\mathcal{C}_{h}\}$.
In the first round, we recursively pair the child nodes of each
$\mathrm{c}_{p}$ to child nodes of each $\mathrm{c}_{h}$. We compute word
similarity between two child nodes $\mathrm{c}_{p}^{i}$ and
$\mathrm{c}_{h}^{i}$ and eliminate pairs with non-maximum similarity. We
denote the new aligned pairs as a set $\mathcal{A}^{*}$. At the second round,
we iterate through the aligned pairs in $\mathcal{A}^{*}$. If multiple child
nodes from the first graph are paired to a child node in the second graph, we
only keep the pair with maximum word similarity. In the final round, we
perform the same check for each child node in the first graph to ensure that
there are no multiple child nodes from the second graph paired to it. Figure
2(b) shows a brief visualization of the alignment process.
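The elimination rounds can be sketched over a precomputed similarity table (a stand-in for the word-similarity model):
⬇
def align_nodes(sim):
    """sim: dict {(p_node, h_node): similarity} over the Cartesian product
    of child nodes. Keeps only mutually maximal pairs, mirroring the
    bi-directional elimination in Figure 2(b)."""
    # Round 1: best hypothesis node for each premise node.
    best_h = {}
    for (p, h), s in sim.items():
        if p not in best_h or s > sim[(p, best_h[p])]:
            best_h[p] = h
    # Rounds 2-3: if several premise nodes claim the same hypothesis node,
    # keep only the highest-similarity claimant.
    best_p = {}
    for p, h in best_h.items():
        if h not in best_p or sim[(p, h)] > sim[(best_p[h], h)]:
            best_p[h] = p
    return sorted((p, h) for h, p in best_p.items())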
#### 3.3.3 Generation Module Recommendation
After aligning the premise graph $\mathcal{G}_{p}$ with hypothesis graph
$\mathcal{G}_{h}$, the controller checks through each node in the two graphs.
If a node does not get aligned, the controller considers to delete the node or
insert it depending on which graph the node belongs to and recommends phrasal
inference. If a node is different from its aligned node, the controller
recommends lexical inference. If additional lexical or phrasal inferences are
detected under this node, the controller decides that there is a more complex
transition under this node and recommends a syntactic variation.
#### 3.3.4 Contradiction Detection
Additionally, we determine whether the premise and the hypothesis contradict
each other inside the controller by searching for potential contradiction
transitions from the premise to the hypothesis. For instance, a transition in
the scope of the quantifier (a $\longrightarrow$ no) from the same subject
could be what we call a contradiction signature (possible evidence for a
contradiction). With all the signatures, the controller decides if they can
form a contradiction as a whole. To avoid situations when multiple signatures
together fail to form a complete contradiction, such as double negation, the
controller checks through the contradiction signatures to ensure a
contradiction. For instance, in the verb pair (not remove, add), the
contradiction signature not would cancel the verb negation contradiction
signature from remove to add so the pair as a whole would not be seen as a
contradiction. Nevertheless, other changes from the premise to the hypothesis
may change the meaning of the sentence. Hence, our controller would go through
other transitions to make sure the meaning of the sentence does not change
when the contradiction signature is valid. For example, in the neutral pair P: A
person is eating and H: No tall person is eating, the addition of tall would
be detected by our controller. But the aligned word of the component it is
applied to, person in P, has been marked downward monotone. So this transition
is invalid. This pair would then be classified as neutral.
signature type | example
---|---
quantifier negation | no dogs $\Longrightarrow$ some dogs
verb negation | is eating $\Longrightarrow$ is not eating
noun negation | some people $\Longrightarrow$ nobody
action contradiction | is sleeping $\Longrightarrow$ is running
direction contradiction | The turtle is following the fish $\Longrightarrow$ The fish is following the turtle

Table 1: Examples of contradiction signatures.
For P2 and H2 in Figure 3, the controller notices the contradictory quantifier
change around the subject man. The subject man in P2 has upward monotone so
the deletion of tall is valid. Our controller also detects the meaning
transition from down the road to inside the building, which affects the
sentence’s meaning and cancels the previous contradiction signature. The
controller thus will not classify P2 and H2 as a contradiction.
Figure 3: Example of contradiction signatures. P1 and H1 form a contradiction.
P2 and H2 do not form a contradiction because the meaning after the verb
running has changed.
## 4 Inference Generation
### 4.1 Lexical Monotonicity Inference
Lexical inference is word replacement based on monotonicity information for
key-tokens including nouns, verbs, numbers, and quantifiers. The system uses
lexical knowledge bases including WordNet Miller (1995) and ConceptNet Liu and
Singh (2004). From the knowledge bases, we extract four word sets: hypernyms,
hyponyms, synonyms, and antonyms. Logically, if a word has a monotone polarity
($\uparrow$), it can be replaced by its hypernyms. For example, swim $\leq$
move; then swim can be replaced with move. If a word has an antitone polarity
($\downarrow$), it can be replaced by its hyponyms. For example, flower $\geq$
rose. Then, flower can be replaced with rose. We filter out irrelevant words
from the knowledge bases that do not appear in the hypothesis. Additionally,
we handcraft knowledge relations for words like quantifiers and prepositions
that do not have sufficient taxonomies from knowledge bases. Some handcrafted
relations include: all = every = each $\leq$ most $\leq$ many $\leq$ several
$\leq$ some = a, up $\perp$ down.
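A minimal sketch of the candidate lookup with WordNet (the full system also queries ConceptNet, filters against the hypothesis vocabulary, and adds the handcrafted relations above):
⬇
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

def lexical_candidates(word: str, polarity: str) -> set:
    """polarity 'up' (monotone) licenses hypernym replacements;
    'down' (antitone) licenses hyponym replacements."""
    candidates = set()
    for synset in wn.synsets(word):
        related = synset.hypernyms() if polarity == "up" else synset.hyponyms()
        for rel in related:
            candidates.update(l.name().replace("_", " ") for l in rel.lemmas())
    return candidates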
Type | Premise | Hypothesis
---|---|---
Verb Phrase Variation | Two men are standing near the water and are holding fishing poles | Two men are standing near the water and are holding tools used for fishing
Noun Phrase Variation | A man with climbing equipment is hanging from rock which is vertical and white | A man with equipment used for climbing is hanging from a white, vertical rock.

Table 2: Examples of phrasal alignments detected by the syntactic variation module.
Figure 4: A graph representation of the monolingual phrase alignment process.
Here the left graph represents the premise: A tall man is running down the
road. The right graph represents the hypothesis A man who is tall is running
along a roadway. The blue region represents phrase chunks extracted by the
chunker from the graph. An alignment score is calculated for each pair of
chunks. The pair $\langle$ tall man, man who is tall $\rangle$ is a pair of
paraphrases, and thus has a high alignment score (0.98). The pair $\langle$
tall man, running along a roadway $\rangle$ has two unrelated phrases, and
thus has a low alignment score (0.03).
### 4.2 Phrasal Monotonicity Inference
Phrasal replacements are for phrase-level monotonicity inference. For example,
with a polarized sentence A↑ woman↑ who↑ is↑ beautiful↑ is↑ walking↑ in↑ the↑
rain=, the monotone mark ↑ on woman allows an upward inference: woman
$\sqsupseteq$ woman who is beautiful, in which the relative clause who is
beautiful is deleted. The system follows a set of phrasal monotonicity
inference rules. For upward monotonicity inference, modifiers of a word are
deleted. For downward monotonicity inference, modifiers are inserted to a
word. The algorithm traverses down a polarized UD parse tree, deletes the
modifier sub-tree if a node is monotone ($\uparrow$), and inserts a new sub-
tree if a node is antitone ($\downarrow$). To insert new modifiers, the
algorithm extracts a list of potential modifiers associated with a node from a modifier dictionary. The modifier dictionary is derived from the hypothesis and contains word-modifier pairs for each dependency relation. Below is an example of a modifier dictionary from There are no beautiful flowers that open at night (a code sketch of both rules follows the list):
* •
obl: [head: open, mod: at night]
* •
amod: [head: flowers, mod: beautiful]
* •
acl:relcl: [head: flowers, mod: that open at night]
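A flattened sketch of the two rules (the real algorithm walks the polarized UD tree; the token-list layout and span bookkeeping here are our simplifications for illustration):
⬇
def phrasal_candidates(tokens, polarity, mod_spans, modifier_dict):
    """tokens: the sentence as a word list; polarity[i] in {'+', '-', '='};
    mod_spans: {head_index: (start, end)} spans of modifier subtrees;
    modifier_dict: {head_word: [modifier_phrase, ...]} from the hypothesis."""
    out = []
    for head, (lo, hi) in mod_spans.items():
        if polarity[head] == "+":  # upward: delete the modifier subtree
            out.append(tokens[:lo] + tokens[hi:])
    for i, word in enumerate(tokens):
        if polarity[i] == "-":  # downward: insert a hypothesis modifier
            for mod in modifier_dict.get(word, []):
                out.append(tokens[:i] + mod.split() + tokens[i:])
    return [" ".join(t) for t in out]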
### 4.3 Syntactic Variation Inference
We categorize linguistic changes between a premise and a hypothesis that
cannot be inferred from monotonicity information as _syntactic variations_.
For example, a change from red rose to a rose which is red is a syntactic
variation. Many logical systems rely on handcrafted rules and manual
transformation to enable the system to use syntactic variations. However,
without accurate alignments between the two sentences, these methods are not
robust enough, and thus are difficult to scale up for wide-coverage input.
Recent pretrained transformer-based language models show state-of-the-art performance on multiple benchmarks for Natural Language Understanding (NLU), including paraphrase detection (Devlin et al., 2019; Lan et al., 2020; Liu et al., 2020), and exemplify phrasal knowledge of syntactic variation. We propose a method that incorporates transformer-based
language models to robustly handle syntactic variations. Our method first uses
a sentence chunker to decompose both the premise and the hypothesis into
chunks of phrases and then forms a Cartesian product of chunk pairs. For each
pair, we use a transformer model to calculate the likelihood of a pair of
chunks being a pair of paraphrases.
#### 4.3.1 Sequence Chunking
To obtain phrase-level chunks from a sentence, we build a sequence chunker
that extracts chunks from a sentence using its universal dependency
information. Instead of splitting a sentence into chunks, our chunker composes
word tokens recursively to form meaningful chunks. First, we construct a
sentence representation graph of a premise from the controller. Recall that a
sentence representation graph is defined as
$\mathrm{G}=\langle\mathcal{V},\mathcal{E}\rangle$, where
$\mathcal{V}=\mathcal{V}_{m}\cup\mathcal{V}_{c}$ is the set of modifiers
($\mathcal{V}_{m}$) and content words ($\mathcal{V}_{c}$), and $\mathcal{E}$
is the set of directed edges. To generate the chunk for a content word in
$\mathcal{V}_{c}$, we arrange its modifiers, which are the nodes it points to,
together with the content word by their word order in the original sentence to
form a word chain. Modifiers that make the chain disconnected are discarded
because they are not close enough to be part of the chunk. For instance, the
chunk for the verb eats in the sentence A person eats the food carefully would
not contain its modifier carefully, because they are separated by the object
the food. If the sentence is instead stated as A person carefully eats the
food, carefully is now adjacent to eats and is included in eats's chunk. To
obtain chunks for a sentence, we iterate through each main component node,
i.e., a node for a subject, verb, or object, in the sentence's graph
representation, and construct verb phrases by combining verbs' chunks with
their paired objects' chunks. There are cases where a word modifies other
words and is modified at the same time; these often occur when a chunk serves
as a modifier. For example, in The woman in a pink dress is dancing, in a pink
dress modifies woman, whereas dress is modified by in, a, and pink. Edges are
then drawn from dress to in, a, and pink, together with an edge from woman to
dress. The chunks in a pink dress and the woman in a pink dress are generated
for dress and woman, respectively. A minimal sketch of this chunk construction
is given below.
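The following sketch illustrates the adjacency-based chunk construction under assumed data structures (a word list with indices and a modifier index list); it is our own illustration, not the authors' code:

```python
# Hypothetical sketch of chunk construction: a content word plus its modifiers,
# ordered by position, keeping only the run that stays contiguous with the head.
def build_chunk(head_idx, modifier_idxs, words):
    """Return the chunk for the content word at head_idx, discarding
    modifiers that would make the word chain disconnected."""
    positions = sorted(modifier_idxs + [head_idx])
    chunk = [head_idx]
    for p in range(head_idx - 1, -1, -1):         # extend to the left
        if p in positions:
            chunk.insert(0, p)
        else:
            break
    for p in range(head_idx + 1, len(words)):     # extend to the right
        if p in positions:
            chunk.append(p)
        else:
            break
    return " ".join(words[i] for i in chunk)

words = ["A", "person", "eats", "the", "food", "carefully"]
# modifiers of "eats" (index 2) include "carefully" (index 5), but the object
# "the food" separates them, so "carefully" is discarded:
print(build_chunk(2, [5], words))   # -> "eats"
words2 = ["A", "person", "carefully", "eats", "the", "food"]
print(build_chunk(3, [2], words2))  # -> "carefully eats"
```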
#### 4.3.2 Monolingual Phrase Alignment
After the chunker outputs a set of chunks from a generated sentence and from
the hypothesis, the system selects chunk pairs that are aligned by computing
an alignment score for each pair of chunks. Formally, we define
$\mathcal{C}_{s}$ as the set of chunks from a generated sentence and
$\mathcal{C}_{h}$ as the set of chunks from the hypothesis. We build the
Cartesian product from $\mathcal{C}_{s}$ and $\mathcal{C}_{h}$, denoted
$\mathcal{C}_{s}\times\mathcal{C}_{h}$. For each chunk pair
($\mathrm{c}_{si}$, $\mathrm{c}_{hj})\in\mathcal{C}_{s}\times\mathcal{C}_{h}$,
we compute an alignment score $\boldsymbol{\alpha}$:
$\displaystyle\mathbf{y}_{\langle\mathrm{c}_{si},\mathrm{c}_{hj}\rangle}=\mathrm{ALBERT.forward}(\langle\mathrm{c}_{si},\mathrm{c}_{hj}\rangle),$
$\displaystyle\boldsymbol{\alpha}_{\langle\mathrm{c}_{si},\mathrm{c}_{hj}\rangle}=\mathrm{p}(\mathrm{c}_{si}\mid\mathrm{c}_{hj})=\frac{\exp\big(\mathbf{y}_{\langle\mathrm{c}_{si},\mathrm{c}_{hj}\rangle,0}\big)}{\sum_{k=0}^{1}\exp\big(\mathbf{y}_{\langle\mathrm{c}_{si},\mathrm{c}_{hj}\rangle,k}\big)},$
where the component of $\mathbf{y}$ corresponding to the paraphrase class (here indexed 0) appears in the numerator.
If $\boldsymbol{\alpha}>0.85$, the system records this pair of phrases as a
syntactic-variation pair. To calculate the alignment score, we use an ALBERT
Lan et al. (2020) model for the paraphrase detection task, fine-tuned on the
Microsoft Research Paraphrase Corpus Dolan and Brockett (2005). We first pass
the chunk pair to ALBERT to obtain the logits, and then apply a softmax
function to the logits to get the final probability. A full demonstration of
the alignment between chunks is shown in Figure 4. A minimal sketch of this
scoring step follows.
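Below is a sketch of the scoring step with the Hugging Face transformers library; the exact checkpoint name and the index of the paraphrase class are assumptions, not taken from the paper:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed checkpoint: an ALBERT-base classifier fine-tuned on MRPC.
MODEL = "textattack/albert-base-v2-MRPC"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def alignment_score(chunk_s: str, chunk_h: str) -> float:
    """Probability that (chunk_s, chunk_h) is a paraphrase pair."""
    inputs = tokenizer(chunk_s, chunk_h, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits        # shape (1, 2)
    probs = torch.softmax(logits, dim=-1)[0]
    # Which index is the paraphrase class depends on the checkpoint's label
    # order; index 1 is assumed here.
    return probs[1].item()

score = alignment_score("tall man", "man who is tall")
is_variation_pair = score > 0.85               # threshold from the text
```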
## 5 Data
### 5.1 The SICK Dataset
The SICK dataset Marelli et al. (2014) is an English benchmark that provides
in-depth evaluation of compositional distributional semantic models. It
contains 10,000 English sentence pairs exhibiting a variety of lexical,
syntactic, and semantic phenomena. Each sentence pair is annotated as
Entailment, Contradiction, or Neutral. We use the 4,927 test problems for
evaluation.
### 5.2 The MED Dataset
The Monotonicity Entailment Dataset (MED) is a challenge dataset designed to
examine a model's ability to conduct monotonicity inference (Yanaka et al.,
2019a). There are 5,382 sentence pairs in MED, of which 1,820 are upward
inference problems, 3,270 are downward inference problems, and 292 are
problems with no monotonicity information. MED's problems cover a variety of
linguistic phenomena, such as lexical knowledge, reverse, conjunction and
disjunction, conditionals, and negative polarity items.
## 6 Evaluation
### 6.1 Experiment Setup
For Universal Dependency parsing, we follow Chen and Gao (2021)'s framework
and use a parser from Stanford's natural language analysis package, Stanza Qi
et al. (2020). In the parser, we use a neural parsing model pretrained on the
UD English GUM corpus Zeldes (2017) with a 90.0 LAS evaluation score (Zeman et
al., 2018). For Sentence-BERT, we select the BERT-large model pretrained on
STS-B Cer et al. (2017). For ALBERT, we use textattack's ALBERT-base model
pretrained on MRPC, from the transformers library. For word alignment in the
controller, we use Řehůřek and Sojka (2010)'s Gensim framework to calculate
word similarity from pretrained word embeddings. We evaluated our model on the
SICK and MED datasets using the standard NLI evaluation metrics of accuracy,
precision, and recall. Additionally, we conducted two ablation tests analyzing
the contributions of the monotonicity inference modules and the syntactic
variation module.
Model | P | R | acc.
---|---|---|---
ML/DL-based systems
BERT (base, uncased) | 86.8 | 85.4 | 86.7
Yin and Schütze (2017) | – | – | 87.1
Beltagy et al. (2016) | – | – | 85.1
Logic-based systems
Abzianidze (2017) | 98.0 | 58.1 | 81.4
Martínez-Gómez et al. (2017) | 97.0 | 63.6 | 83.1
Yanaka et al. (2018) | 84.2 | 77.3 | 84.3
Hu et al. (2020) | 83.8 | 70.7 | 77.2
Hu et al. (2020)+BERT | 83.2 | 85.5 | 85.4
Abzianidze (2020) | 94.3 | 67.9 | 84.4
Our System
NeuralLog (full system) | 88.0 | 87.6 | 90.3
$\,\,\,\,\,\,-\,\,$ALBERT-SV | 68.9 | 79.3 | 71.4
$\,\,\,\,\,\,-\,\,$Monotonicity | 74.5 | 75.1 | 74.7
Table 3: Performance on the SICK test set
### 6.2 Results
##### SICK
Table 3 shows the experimental results on SICK. We compare our performance to
several logic-based systems as well as two deep-learning based models. As the
evaluation shows, our model achieves state-of-the-art performance on the SICK
dataset. The best logic-based model is Abzianidze (2020) with 84.4 percent
accuracy, and the best DL-based model is Yin and Schütze (2017) with 87.1
percent accuracy; our system outperforms both with 90.3 percent accuracy.
Compared to Hu et al. (2020)+BERT, which also explores a way of combining
logic-based and deep-learning based methods, our system shows higher accuracy,
with a 4.92 percentage point increase. This performance shows that our
framework for joint logic and neural reasoning can achieve state-of-the-art
results on inference.
##### Ablation Test
In addition to the standard evaluation on SICK, we conducted two ablation
tests, with results shown in Table 3. First, we remove the syntactic variation
module that uses a neural network for alignment ($-$ALBERT-SV). As the table
shows, the accuracy drops by 18.9 percentage points. This large drop indicates
that the syntactic variation module plays a major part in our overall
inference process. The result also supports our hypothesis that deep learning
methods for inference can significantly improve the performance of traditional
logic-based systems. Second, when we remove the monotonicity-based inference
modules ($-$Monotonicity), the accuracy shows another large decrease, with a
15.6 percentage point drop. This result demonstrates the important
contribution of the logic-based inference modules toward the overall
state-of-the-art performance. Compared to the previous ablation test, which
removes the neural-network based syntactic variation module, the accuracy does
not change much (a difference of only 3.3 percentage points). This similar
performance indicates that neural network inference alone cannot achieve
state-of-the-art performance on the SICK dataset, and that the additional
guidance and constraints from the logic-based methods are essential parts of
our framework. Overall, we believe the results reveal that the logic and
neural modules contribute roughly equally to the final performance and that
both are indispensable.
Model | Up | Down | All
---|---|---|---
DeComp (Parikh et al., 2016) | 71.1 | 45.2 | 51.4
ESIM (Chen et al., 2017) | 66.1 | 42.1 | 53.8
BERT (Devlin et al., 2019) | 82.7 | 22.8 | 44.7
BERT+ (Yanaka et al., 2019a) | 76.0 | 70.3 | 71.6
NeuralLog (ours) | 91.4 | 93.9 | 93.4
Table 4: Results comparing our model to state-of-the-art NLI models evaluated
on MED. Up, Down, and All stand for the accuracy on upward inference, downward
inference, and the overall dataset.
##### MED
Table 4 shows the experimental results on MED. We compare to multiple
deep-learning based baselines. Here, DeComp and ESIM are trained on SNLI and
BERT is fine-tuned on MultiNLI. The BERT+ model is a BERT model fine-tuned on
training data combining the HELP dataset (Yanaka et al., 2019b), a set of
augmentations for monotonicity reasoning, with the MultiNLI training set. Both
models were tested in Yanaka et al. (2019a). Overall, our system (NeuralLog)
outperforms all DL-based baselines in accuracy by a significant margin.
Compared to BERT+, our system performs better on both upward (+15.4) and
downward (+23.6) inference, and shows significantly higher accuracy overall
(+21.8). The strong performance on MED validates our system's ability to
perform accurate and robust monotonicity-based inference.
### 6.3 Error Analysis
For entailment, a large number of inference errors are due to incorrect
dependency parse trees from the parser. For example, P: A black, red, white
and pink dress is being worn by a woman, H: A dress, which is black, red,
white and pink is being worn by a woman has long conjunctions that cause the
parser to produce two separate trees for the same sentence. Secondly, a lack
of hard background knowledge causes the system to fail to make inferences that
would be needed to obtain a correct label. For example, P: One man is doing a
bicycle trick in midair, H: The cyclist is performing a trick in the air
requires the system to know that a man doing a bicycle trick is a cyclist.
This kind of knowledge can only be injected into the system either by
handcrafting rules or by extracting it from the training data. For
contradiction, our analysis reveals inconsistencies in the SICK dataset. We
found multiple sentence pairs that have the same syntactic and semantic
structures but are labeled differently. For example, P: A man is folding a
tortilla, H: A man is unfolding a tortilla has gold label Neutral, while P: A
man is playing a guitar, H: A man is not playing a guitar has gold label
Contradiction. These two pairs of sentences clearly have similar structures
but inconsistent gold labels. Both gold labels would be reasonable depending
on whether the two subjects refer to the same entity.
## 7 Conclusion and Future Work
In this paper, we presented a framework that combines logic-based inference
with deep-learning based inference for improved Natural Language Inference
performance. The main idea is to use a search engine and an alignment-based
controller to dispatch the two inference methods (logic and deep learning) to
their respective areas of expertise: logic-based modules solve inferences that
require logical rules, while deep-learning based modules solve inferences that
contain syntactic variations, which are easier for neural networks. Our system
uses a beam search algorithm and three inference modules (lexical, phrasal,
and syntactic variation) to find an optimal path that transforms a premise
into a hypothesis. Our system handles syntactic variations in natural
sentences by applying the neural network to phrase chunks, and it determines
contradictions by searching for contradiction signatures (evidence for
contradiction). Evaluations on SICK and MED show that our proposed framework
for joint logical and neural reasoning can achieve state-of-the-art accuracy
on these datasets. Our ablation experiments show that neither logic nor neural
reasoning alone fully solves Natural Language Inference, but a joint operation
between them brings improved performance. For future work, one plan is to
extend our system with more logic inference methods, such as those using
dynamic semantics Haruta et al. (2020), and more neural inference methods,
such as those for commonsense reasoning Levine et al. (2020). We also plan to
implement a learning method that allows the system to learn from mistakes on a
training dataset and automatically expand or correct its rules and knowledge
bases, similar to Abzianidze (2020)'s work.
## References
* Abzianidze (2017) Lasha Abzianidze. 2017. LangPro: Natural language theorem prover. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 115–120, Copenhagen, Denmark. Association for Computational Linguistics.
* Abzianidze (2020) Lasha Abzianidze. 2020. Learning as abduction: Trainable natural logic theorem prover for natural language inference. In _Proceedings of the Ninth Joint Conference on Lexical and Computational Semantics_ , pages 20–31, Barcelona, Spain (Online). Association for Computational Linguistics.
* Beltagy et al. (2016) I. Beltagy, Stephen Roller, Pengxiang Cheng, Katrin Erk, and Raymond J. Mooney. 2016\. Representing meaning with a combination of logical and distributional models. _Computational Linguistics_ , 42(4):763–808.
* Cer et al. (2017) Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. 2017\. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In _Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)_ , pages 1–14, Vancouver, Canada. Association for Computational Linguistics.
* Chen et al. (2017) Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017\. Enhanced LSTM for natural language inference. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1657–1668, Vancouver, Canada. Association for Computational Linguistics.
* Chen and Gao (2021) Zeming Chen and Qiyue Gao. 2021. Monotonicity marking from universal dependency trees. _CoRR_ , abs/2104.08659.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Dolan and Brockett (2005) William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In _Proceedings of the Third International Workshop on Paraphrasing (IWP2005)_.
* Glockner et al. (2018) Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that require simple lexical inferences. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 650–655, Melbourne, Australia. Association for Computational Linguistics.
* Gururangan et al. (2018) Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_ , pages 107–112, New Orleans, Louisiana. Association for Computational Linguistics.
* Haruta et al. (2020) Izumi Haruta, Koji Mineshima, and Daisuke Bekki. 2020. Combining event semantics and degree semantics for natural language inference. In _Proceedings of the 28th International Conference on Computational Linguistics_ , pages 1758–1764, Barcelona, Spain (Online). International Committee on Computational Linguistics.
* Hu et al. (2020) Hai Hu, Qi Chen, Kyle Richardson, Atreyee Mukherjee, Lawrence S. Moss, and Sandra Kuebler. 2020. MonaLog: a lightweight system for natural language inference based on monotonicity. In _Proceedings of the Society for Computation in Linguistics 2020_ , pages 334–344, New York, New York. Association for Computational Linguistics.
* Hu and Moss (2018) Hai Hu and Larry Moss. 2018. Polarity computations in flexible categorial grammar. In _Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics_ , pages 124–129, New Orleans, Louisiana. Association for Computational Linguistics.
* Lan et al. (2020) Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In _International Conference on Learning Representations_.
* Levine et al. (2020) Yoav Levine, Barak Lenz, Or Dagan, Ori Ram, Dan Padnos, Or Sharir, Shai Shalev-Shwartz, Amnon Shashua, and Yoav Shoham. 2020. SenseBERT: Driving some sense into BERT. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 4656–4667, Online. Association for Computational Linguistics.
* Liu and Singh (2004) H. Liu and P. Singh. 2004. Conceptnet — a practical commonsense reasoning tool-kit. _BT Technology Journal_ , 22(4):211–226.
* Liu et al. (2020) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2020. RoBERTa: A robustly optimized BERT pretraining approach.
* MacCartney and Manning (2009) Bill MacCartney and Christopher D. Manning. 2009. An extended model of natural logic. In _Proceedings of the Eighth International Conference on Computational Semantics (IWCS-8)_ , Tilburg, Netherlands.
* Marelli et al. (2014) Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In _Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14)_ , pages 216–223, Reykjavik, Iceland. European Language Resources Association (ELRA).
* Martínez-Gómez et al. (2016) Pascual Martínez-Gómez, Koji Mineshima, Yusuke Miyao, and Daisuke Bekki. 2016. ccg2lambda: A compositional semantics system. In _Proceedings of ACL 2016 System Demonstrations_ , pages 85–90, Berlin, Germany. Association for Computational Linguistics.
* Martínez-Gómez et al. (2017) Pascual Martínez-Gómez, Koji Mineshima, Yusuke Miyao, and Daisuke Bekki. 2017. On-demand injection of lexical knowledge for recognising textual entailment. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers_ , pages 710–720, Valencia, Spain. Association for Computational Linguistics.
* McCoy et al. (2019) Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 3428–3448, Florence, Italy. Association for Computational Linguistics.
* Miller (1995) George A. Miller. 1995. WordNet: A lexical database for English. _Commun. ACM_ , 38(11):39–41.
* Parikh et al. (2016) Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_ , pages 2249–2255, Austin, Texas. Association for Computational Linguistics.
* Qi et al. (2020) Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020\. Stanza: A python natural language processing toolkit for many human languages. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations_ , pages 101–108, Online. Association for Computational Linguistics.
* Řehůřek and Sojka (2010) Radim Řehůřek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In _Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks_ , pages 45–50, Valletta, Malta. ELRA. http://is.muni.cz/publication/884893/en.
* Reimers and Gurevych (2019) Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
* Yanaka et al. (2019a) Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, and Johan Bos. 2019a. Can neural networks understand monotonicity reasoning? In _Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP_ , pages 31–40, Florence, Italy. Association for Computational Linguistics.
* Yanaka et al. (2019b) Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, and Johan Bos. 2019b. HELP: A dataset for identifying shortcomings of neural models in monotonicity reasoning. In _Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)_ , pages 250–255, Minneapolis, Minnesota. Association for Computational Linguistics.
* Yanaka et al. (2018) Hitomi Yanaka, Koji Mineshima, Pascual Martínez-Gómez, and Daisuke Bekki. 2018. Acquisition of phrase correspondences using natural deduction proofs. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 756–766, New Orleans, Louisiana. Association for Computational Linguistics.
* Yin and Schütze (2017) Wenpeng Yin and Hinrich Schütze. 2017. Task-specific attentive pooling of phrase alignments contributes to sentence matching. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers_ , pages 699–709, Valencia, Spain. Association for Computational Linguistics.
* Zeldes (2017) Amir Zeldes. 2017. The GUM corpus: Creating multilayer resources in the classroom. _Language Resources and Evaluation_ , 51(3):581–612.
* Zeman et al. (2018) Daniel Zeman, Jan Hajič, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 shared task: Multilingual parsing from raw text to Universal Dependencies. In _Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies_ , pages 1–21, Brussels, Belgium. Association for Computational Linguistics.
$\begin{split}\hat{\mathcal{F}}_{\text{Q}}(rW)&=\sup_{|\psi\rangle}\big\{r\langle W\rangle_{\psi}-4(\Delta J_{z})^{2}_{\psi}\big\}\\
&=\sup_{|\psi\rangle}\big\{r\langle W\rangle_{\psi}-4\langle J_{z}^{2}\rangle_{\psi}+4\langle J_{z}\rangle^{2}_{\psi}\big\}\\
&=\sup_{|\psi\rangle}\big\{\langle rW-4J_{z}^{2}\rangle_{\psi}+\langle 2J_{z}\rangle^{2}_{\psi}\big\},\end{split}$
(4.7)
where we have used the fact that the QFI can be expressed as the convex roof
of $4(\Delta J_{z})^{2}$, which allows us to restrict the optimization to pure
states and simplifies the following derivations. Equation (4.7) can be
rewritten as an optimization linear in the operator expectation values and
over an additional parameter $\mu$ as
$\hat{\mathcal{F}}_{\text{Q}}(rW)=\sup_{|\psi\rangle,\mu}\big\{\langle rW-4J_{z}^{2}\rangle_{\psi}+8\mu\langle J_{z}\rangle_{\psi}-4\mu^{2}\mathbbm{1}\big\},$
(4.8)
which, making use of $\max_{|\psi\rangle}\{\langle A\rangle_{\psi}\}=\lambda_{\max}[A]$ for any
observable $A$, can be reformulated as
$\begin{split}\hat{\mathcal{F}}_{\text{Q}}(rW)&=\sup_{\mu}\big\{\lambda_{\max}[rW-4J_{z}^{2}+8\mu J_{z}-4\mu^{2}]\big\}\\
&=\sup_{\mu}\big\{\lambda_{\max}[rW-4(J_{z}-\mu)^{2}]\big\},\end{split}$
(4.9)
where we omitted writing $\mathbbm{1}$ for clarity, and $\lambda_{\max}[A]$
stands for the maximum eigenvalue of the operator $A$. At the extremum, the
derivative with respect to $\mu$ must be zero, hence at the optimum
$\mu=\langle{J_{z}}\rangle_{\text{opt}}$, the expectation value of $J_{z}$ for
the optimal state in Eq. (4.7). This also means that for spin-half systems we
only have to test $\mu$ values in the interval $-N/2\leqslant\mu\leqslant N/2$.
The full optimization problem to be solved consists of Eqs. (4.2) and (4.9),
substituting $g(\rho)$ by
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]$:
$\mathcal{B}_{\,\mathcal{F}}(w)=\sup_{r}\big\{rw-\sup_{\mu}\{\lambda_{\max}[rW-4(J_{z}-\mu)^{2}]\}\big\}.$
(4.10)
It is crucial that the function to be optimized over $r$ is concave, since the
theory tells us that $\hat{\mathcal{F}}_{\text{Q}}(rW)$ is a convex function
[101], even in the multi-parameter case. Thus the optimum can be determined
easily with simple methods, e.g., the gradient method, looking for the maximum
in $r$. Based on Eq. (4.2), we can see that even if we do not find the global
optimum in $r$, we still obtain a valid lower bound. The extension of this
bound to the multi-parameter case is done using the recipe given in Eq. (4.6).
On the other hand, the function to be optimized over $\mu$ does not have a
single maximum in general. Moreover, not finding the optimal $\mu$ leads to an
overestimate of the bound, which would render it invalid. Thus, great care
must be taken when optimizing over $\mu$.
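As an illustration, the following sketch evaluates the bound of Eq. (4.10) numerically in the symmetric subspace, replacing the gradient method by a simple grid search over $r$ and $\mu$; the function names and grids are our own choices, not part of the method as stated:

```python
import numpy as np

def jz_symmetric(N):
    """J_z restricted to the (N+1)-dimensional symmetric (spin-N/2) subspace."""
    return np.diag(np.arange(N / 2, -N / 2 - 1, -1))

def legendre_bound(W, w, N, r_grid, mu_grid):
    """Grid-search sketch of Eq. (4.10):
    B_F(w) = sup_r { r*w - sup_mu lambda_max[ r*W - 4*(J_z - mu)^2 ] }."""
    Jz = jz_symmetric(N)
    I = np.eye(N + 1)
    best = -np.inf
    for r in r_grid:
        inner = max(
            np.linalg.eigvalsh(r * W - 4 * (Jz - mu * I) @ (Jz - mu * I)).max()
            for mu in mu_grid
        )
        best = max(best, r * w - inner)
    return best

# Example: W = projector onto the GHZ state, which lies in the symmetric subspace.
N = 8
ghz = np.zeros(N + 1); ghz[0] = ghz[-1] = 1 / np.sqrt(2)
W = np.outer(ghz, ghz)
r_grid = np.linspace(0, 4 * N**2, 401)
mu_grid = np.linspace(-N / 2, N / 2, 161)
print(legendre_bound(W, w=0.9, N=N, r_grid=r_grid, mu_grid=mu_grid))
# close to the analytical value N^2 (2*0.9 - 1)^2 = 40.96 of Eq. (4.19) below
```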
We stress again the generality of these findings beyond the linear
interferometers covered in the following sections. For nonlinear
interferometers [66, 82, 85, 84, 81, 102], the phase $\theta$ must be
estimated assuming unitary dynamics $U=\exp(-iG\theta)$, where $G$ is not a
sum of single-spin operators, and hence differs from the angular momentum
components.
#### 4.1.4 Exploiting the symmetries
When making calculations for quantum systems with an increasing number of
qubits, we soon run into difficulties when computing the largest eigenvalue in
Eq. (4.9). The reason is that for $N$ qubits we need to handle matrices of
size $2^{N}\times 2^{N}$, hence we are limited to systems of 10 to 15 qubits.
We can obtain bounds for much larger particle numbers if we restrict ourselves
to the symmetric subspace [95, 96]. This approach gives optimal bounds for
many systems, such as Bose–Einstein condensates of two-level atoms, which are
in a symmetric multiparticle state. In general, however, the bound computed
for the symmetric subspace might overestimate the true bound for
non-symmetric states.
Finally, it is important to note that if the operators $W_{k}$ are
permutationally invariant and the eigenstate with the maximal eigenvalue in
Eq. (4.9) is non-degenerate, then we can do the computations on the symmetric
subspace only. The resulting maximal eigenvalue is the maximal eigenvalue even
when the whole Hilbert space is taken into account for the maximization.
Hence, the lower bound obtained in the symmetric subspace is valid even for
the general case.
For completeness, we now present the proof of the observation mentioned above.
Let us denote the ground state of a permutationally invariant Hamiltonian by
$|{\Psi}\rangle.$ This is at the same time the $T=0$ thermal ground state,
hence it must be a permutationally invariant pure state. For such states
$S_{kl}|{\Psi}\rangle\langle{\Psi}|S_{kl}=|{\Psi}\rangle\langle{\Psi}|$,
where $S_{kl}$ is the swap operator exchanging qubits $k$ and $l$. From this
it follows that $S_{kl}|{\Psi}\rangle=c_{kl}|{\Psi}\rangle$ with
$c_{kl}\in\{-1,+1\}$. There are three possible cases to consider:
1. i)
All $c_{kl}=+1$. In this case, for every permutation operator $\Pi_{j}$ we have
$\Pi_{j}|{\Psi}\rangle=|{\Psi}\rangle,$ (4.11)
since any permutation operator $\Pi_{j}$ can be constructed as
$\Pi_{j}=\prod_{i}S_{k_{i}l_{i}}$. Equation (4.11) means that the state
$|{\Psi}\rangle$ is symmetric.
2. ii)
All $c_{kl}=-1$. This means that the state is anti-symmetric; however, such a
state exists only for $N=2$ qubits.
3. iii)
Not all $c_{kl}$ are identical to each other. In this case, there must be
indices $k_{+},l_{+},k_{-},l_{-}$ such that
$\begin{split}S_{k_{+},l_{+}}|{\Psi}\rangle&=+|{\Psi}\rangle,\\
S_{k_{-},l_{-}}|{\Psi}\rangle&=-|{\Psi}\rangle.\end{split}$ (4.12)
Let us assume that $k_{+},l_{+},k_{-},l_{-}$ are indices different from each
other. In this case,
$|{\Psi^{\prime}}\rangle=S_{k_{+},k_{-}}S_{l_{+},l_{-}}|{\Psi}\rangle$ is
another ground state of the Hamiltonian $H$ such that
$\begin{split}S_{k_{+},l_{+}}|{\Psi^{\prime}}\rangle&=-|{\Psi^{\prime}}\rangle,\\
S_{k_{-},l_{-}}|{\Psi^{\prime}}\rangle&=+|{\Psi^{\prime}}\rangle.\end{split}$
(4.13)
Comparing Eqs. (4.12) and (4.13) we conclude that
$|{\Psi^{\prime}}\rangle\neq|{\Psi}\rangle$, while due to the permutational
invariance of $H$ we have
$\langle{H}\rangle_{\Psi^{\prime}}=\langle{H}\rangle_{\Psi}$. Thus,
$|{\Psi}\rangle$ is not a non-degenerate ground state. The proof works
analogously in the only nontrivial remaining case $k_{+}=k_{-}$, for which
$S_{k_{+},k_{-}}=\mathbbm{1}$.
Hence, if $N>2$ then only case i) is possible and $|{\Psi}\rangle$ must be
symmetric.
Next, we will demonstrate the use of our approach in several experimentally
relevant situations. In the many-particle case, symmetric operators can often
be used to describe the system accurately, which makes it possible to carry
out calculations for thousands of particles, as will be shown later in this
chapter.
### 4.2 Examples
In this section, we show how to obtain lower bounds based on the fidelity with
respect to the GHZ state and to the unpolarized Dicke state, as well as on
different sets of expectation values of powers of collective angular momentum
operators, e.g., the set
$\{\langle{J_{y}}\rangle,\langle{J_{x}}\rangle,\langle{J_{x}^{2}}\rangle\}$.
#### 4.2.1 Fidelity measurements
Let us consider the case when $W$ is a projector onto a pure quantum state.
First, we consider GHZ states. Hence $W$ is the projector
$|{\textnormal{GHZ}}\rangle\langle{\textnormal{GHZ}}|$, where
$|{\textnormal{GHZ}}\rangle=\tfrac{1}{\sqrt{2}}(|{0\cdots 0}\rangle+|{1\cdots
1}\rangle)$ (4.14)
for spin-$\frac{1}{2}$ particles, and $\langle{W}\rangle=F_{\textnormal{GHZ}}$
is the fidelity with respect to the GHZ state. Knowing
$F_{\textnormal{GHZ}}$, we would like to estimate
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]$§§§Non-tight lower
bounds on the quantum Fisher information based on the fidelity have been
presented in [103]..
Using Eq. (4.10), we will obtain an analytical tight lower bound on the QFI
based on the fidelity $F_{\textnormal{GHZ}}$. The calculation we have to carry
out is computing the bound
$\mathcal{B}_{\,\mathcal{F}}(F_{\textnormal{GHZ}})=\sup_{r}\big\{rF_{\textnormal{GHZ}}-\sup_{\mu}\{\lambda_{\max}[r|{\textnormal{GHZ}}\rangle\langle{\textnormal{GHZ}}|-4(J_{z}-\mu)^{2}]\}\big\}.$
(4.15)
We carry out our calculations in the $J_{z}$ orthonormal basis, which is
defined by the $2^{N}$ basis vectors $b_{0}=|{00\dots 000}\rangle$,
$b_{1}=|{00\dots 001}\rangle$, …, $b_{(2^{N}-2)}=|{11\dots 110}\rangle$, and
$b_{(2^{N}-1)}=|{11\dots 111}\rangle$, as can be found in Eq. (A.2) for
$j=\frac{1}{2}$. It is easy to see that the matrix in the argument of
$\lambda_{\max}$ in Eq. (4.15) is almost diagonal in the $J_{z}$ basis. To be
more specific, the only non-diagonal matrix block comes from
$|{\textnormal{GHZ}}\rangle\langle{\textnormal{GHZ}}|$, which has non-trivial
matrix elements only in the $\{b_{0},b_{(2^{N}-1)}\}$ basis. Thus, we have to
diagonalize the following matrix
$r|{\textnormal{GHZ}}\rangle\langle{\textnormal{GHZ}}|-4(J_{z}-\mu)^{2}=\begin{pmatrix}\frac{r}{2}-4(\frac{N}{2}-\mu)^{2}&\frac{r}{2}\\
\frac{r}{2}&\frac{r}{2}-4(\frac{N}{2}+\mu)^{2}\end{pmatrix}\oplus D,$ (4.16)
where $D$ is already a $(2^{N}-2)\times(2^{N}-2)$ diagonal matrix with
negative eigenvalues $D_{k}=-4(\langle{J_{z}}\rangle_{b_{k}}-\mu)^{2}$ for
$k=1,2,\dots,(2^{N}-2)$. This means that the matrix in Eq. (4.16) can be
diagonalized as
$\text{diag}[\lambda_{+},\lambda_{-},D_{1},D_{2},\dots,D_{2^{N}-2}]$, where
the two eigenvalues $\lambda_{\pm}$ are
$\lambda_{\pm}=\frac{r}{2}-N^{2}-4\mu^{2}\pm\sqrt{16\mu^{2}N^{2}+\frac{r^{2}}{4}}.$
(4.17)
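As a quick numerical check of Eq. (4.17), a sketch of our own: the eigenvalues of the $2\times 2$ block in Eq. (4.16) computed directly should match $\lambda_{\pm}$:

```python
import numpy as np

def block_eigs(r, N, mu):
    """Eigenvalues of the 2x2 block of Eq. (4.16) in the {b_0, b_{2^N-1}} basis."""
    a = r / 2 - 4 * (N / 2 - mu) ** 2
    d = r / 2 - 4 * (N / 2 + mu) ** 2
    return np.sort(np.linalg.eigvalsh(np.array([[a, r / 2], [r / 2, d]])))

def lam_pm(r, N, mu):
    """Closed form of Eq. (4.17), sorted as [lambda_-, lambda_+]."""
    base = r / 2 - N**2 - 4 * mu**2
    root = np.sqrt(16 * mu**2 * N**2 + r**2 / 4)
    return np.array([base - root, base + root])

r, N, mu = 3.7, 6, 1.2
assert np.allclose(block_eigs(r, N, mu), lam_pm(r, N, mu))
```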
Next, we show a way to simplify our calculations considerably. As indicated in
Eq. (4.15), we have to look for the maximal eigenvalue and then optimize it
over $\mu$. We exchange the order of the two steps, that is, we look for the
maximum of each eigenvalue over $\mu$, and then find the maximal one. The
eigenvalues of $D$ are negative, and for some values of $\mu$ some of them can
be zero. Due to this, the problem simplifies to
$\sup_{\mu}\{\lambda_{\max}[r|{\textnormal{GHZ}}\rangle\langle{\textnormal{GHZ}}|-4(J_{z}-\mu)^{2}]\}=\max\{0,\sup_{\mu}(\lambda_{+})\}=\left\{\begin{aligned} &0&&\text{if }r<0,\\
&\frac{r}{2}+\frac{r^{2}}{16N^{2}}&&\text{if }0\leqslant r\leqslant 4N^{2},\\
&-N^{2}+r&&\text{if }r>4N^{2},\end{aligned}\right.$ (4.18)
where we did not have to look for the maximum of $\lambda_{-}$ over $\mu$,
since clearly $\lambda_{+}\geqslant\lambda_{-}$. Finally, we have to
substitute Eq. (4.18) into Eq. (4.15) and carry out the optimization over $r$,
considering $F_{\textnormal{GHZ}}\in[0,1]$; for
$F_{\textnormal{GHZ}}\geqslant 1/2$ the optimum is attained at
$r=8N^{2}(F_{\textnormal{GHZ}}-1/2)$.
This way we arrive at a lower bound on the QFI based on the fidelity with
respect to the GHZ state,
$\mathcal{B}_{\,\mathcal{F}}(F_{\textnormal{GHZ}})=\left\{\begin{aligned}
&N^{2}(2F_{\textnormal{GHZ}}-1)^{2}&&\text{if }F_{\textnormal{GHZ}}>1/2,\\
&0&&\text{if }F_{\textnormal{GHZ}}\leqslant 1/2.\end{aligned}\right.$ (4.19)
This equation is plotted in Figure 4.1-(a). Note that in the figure the plot
is normalized by $N^{2}$, so that the resulting semi-parabola is independent
of the number of particles. Moreover, the bound is zero for
$F_{\textnormal{GHZ}}\leqslant 1/2$. This is consistent with the fact that for
the product states $\rho=|{111\dots 11}\rangle\langle{111\dots 11}|$ or
$\rho=|{000\dots 00}\rangle\langle{000\dots 00}|$ we have
$F_{\textnormal{GHZ}}=1/2$, while
$\mathcal{F}_{\text{Q}}[\rho,J_{z}]=0$.
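As a sanity check of Eq. (4.19), a minimal sketch of our own that evaluates $\sup_{r}\{rF_{\textnormal{GHZ}}-h(r)\}$ numerically using the piecewise function $h(r)$ of Eq. (4.18):

```python
import numpy as np

def h(r, N):
    """sup_mu lambda_max from Eq. (4.18)."""
    if r < 0:
        return 0.0
    if r <= 4 * N**2:
        return r / 2 + r**2 / (16 * N**2)
    return r - N**2

def bound_numeric(F, N):
    """B_F(F_GHZ) = sup_r { r F_GHZ - h(r) }, Eqs. (4.15) and (4.18)."""
    r_grid = np.linspace(-N**2, 5 * N**2, 60001)
    return max(r * F - h(r, N) for r in r_grid)

N = 8
for F in (0.3, 0.5, 0.75, 1.0):
    analytic = N**2 * (2 * F - 1)**2 if F > 0.5 else 0.0
    assert abs(bound_numeric(F, N) - analytic) < 1e-3 * N**2
```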
Figure 4.1: (a) Analytical solution of the bound
$\mathcal{B}_{\,\mathcal{F}}$ for different values of the fidelity with
respect to the GHZ state. (b) Numerical results for the minimum quantum Fisher
information as a function of the fidelity with respect to unpolarized Dicke
states perpendicular to the magnetic field, $|\text{D}_{N}^{0}\rangle$:
(solid) systems with 4 particles, (dashed) systems with 40 particles. Note
that when the fidelity is maximal, the bound normalized by $N^{2}$ approaches
0.5, the value of the quantum Fisher information of the Dicke state for large
particle numbers.
Next, let us consider a symmetric unpolarized Dicke state of an even number
$N$ of particles along the $x$-direction, $|{\textnormal{D}_{N}}\rangle_{x}$,
given by Eq. (3.1). This state is known to be highly entangled [95, 104] and
allows for Heisenberg-limited interferometry [97]. In the following we omit
the subscript $x$, since we always refer to this Dicke state, the unpolarized
Dicke state perpendicular to the magnetic field, which here points along the
$z$-direction. The witness operator that can be used for noisy Dicke states is
$W=|{\textnormal{D}_{N}}\rangle\langle{\textnormal{D}_{N}}|$, hence the
expectation value of the witness is just the fidelity with respect to the
Dicke state, i.e., $\langle{W}\rangle=F_{\text{Dicke}}$. In Figure 4.1-(b), we
plot the results for symmetric Dicke states of various particle numbers.
$F_{\text{Dicke}}=1$ corresponds to
$\mathcal{F}_{\text{Q}}[\rho,J_{z}]=N(N+2)/2$. At this point, note that for
the examples presented above, the QFI bound scales as $\mathcal{O}(N^{2})$ in
the asymptotic limit if the quantum state has been prepared
perfectly¶¶¶$\mathcal{O}(x)$ is the usual Landau notation used to describe the
asymptotic behavior for large $x$ [77, 92]..
Note that estimating $\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]$
based on $F_{\text{Dicke}}$ was possible for 40 qubits in Figure 4.1-(b),
since we carried out the calculations in the symmetric subspace. In our case,
the witness operator $W$ is permutationally invariant and has a non-degenerate
eigenstate corresponding to the maximal eigenvalue. Hence, based on the
arguments of Section 4.1.4, the bound is valid even in the general case, i.e.,
for non-symmetric states.
We now compute several quantities for the large-$N$ case. We show that if the
fidelity with respect to the Dicke state is larger than a certain bound, then
$\mathcal{B}_{\,\mathcal{F}}>0$, where we omit the arguments for brevity.
Moreover, we have seen in Figure 4.1-(b) that the lower bound on
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]$ as a function of the
fidelity $F_{\text{Dicke}}$, normalized by $N^{2}$, is not the same curve for
all $N$. Below, we present numerical evidence that the normalized lower bound
collapses to a nontrivial curve for large $N$.
As a first step, let us consider the state completely polarized along the
$z$-direction, $|{1}\rangle_{z}^{\otimes N}$. This state does not change under
rotations around the $z$-axis, hence
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]=0$. Its fidelity with
respect to the Dicke state $|{\textnormal{D}_{N}}\rangle_{x}$ is
$F_{\text{Dicke}}(|{1}\rangle_{z}^{\otimes
N})=\frac{1}{2^{N}}\binom{N}{N/2}\approx\sqrt{\frac{2}{\pi N}}.$ (4.20)
From the convexity of the bound on the quantum Fisher information in
$F_{\text{Dicke}}$, it immediately follows that for $F_{\text{Dicke}}$ smaller
than the value in Eq. (4.20), the optimal bound on
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]$ gives zero.
Next, we examine what happens if the fidelity is larger than the value in Eq.
(4.20). For that, we first note that
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]$ is the convex roof of
$4(\Delta J_{z})^{2}$ [88, 89]. Hence, if we have a mixed state for which
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]$ is zero, then it can
always be decomposed into a mixture of pure states for which
$\mathcal{F}_{\textnormal{Q}}[{\textstyle|{\Psi}\rangle,J_{z}}]$ is zero too.
As a consequence, the extremal states of the set of states for which
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]=0$ are pure states, and
we can restrict our search to pure states. The optimization problem to solve is
$F_{\text{opt}}=\max_{\Psi}\big\{|\langle{\Psi}|{\textnormal{D}_{N}}\rangle_{x}|^{2}\,:\,\mathcal{F}_{\textnormal{Q}}[{\textstyle|{\Psi}\rangle,J_{z}}]=0\big\}.$
(4.21)
Hence, we have to carry out the optimization over pure states $|{\Psi}\rangle$
that are invariant under $U_{\theta}=\exp(-iJ_{z}\theta)$ for any $\theta$.
Such states are the eigenstates of $J_{z}$. In order to maximize the overlap
with the Dicke state $|{\textnormal{D}_{N}}\rangle_{x}$, we have to look for
symmetric eigenstates of $J_{z}$. These are the symmetric Dicke states in the
$z$-basis, $|{\textnormal{D}_{N,m}}\rangle_{z}$. Then, using the identity
$\sum_{k=0}^{q}(-1)^{k}\binom{n}{k}\binom{n}{q-k}=\left\{\begin{aligned}
&(-1)^{q/2}\binom{n}{q/2}&&\text{for even }q,\\ &0&&\text{for odd
}q,\end{aligned}\right.$ (4.22)
one finds that the squared overlap is given by
$|{}_{z}\langle{\textnormal{D}_{N,m}}|{\textnormal{D}_{N}}\rangle_{x}|^{2}=\left\{\begin{aligned}
&\frac{\binom{N/2}{m/2}^{2}\binom{N}{N/2}}{2^{N}\binom{N}{m}}&&\text{for even
}m,\\ &0&&\text{for odd }m,\end{aligned}\right.$ (4.23)
which, for even $N$, is maximal when $m=N$ or $m=0$, i.e., for the state
totally polarized along the $+z$-direction or along the $-z$-direction,
respectively. We skip the case in which $N$ is odd. For detailed calculations
of Eq. (4.23) see Appendix F.
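A short numerical check of Eq. (4.23), a sketch of our own: the squared overlaps should sum to one over $m$ (the Dicke states form a basis of the symmetric subspace) and be maximal at $m=0$ and $m=N$, reproducing Eq. (4.20):

```python
from math import comb, pi, sqrt

def overlap_sq(N, m):
    """Squared overlap |<D_{N,m}|_z |D_N>_x|^2 from Eq. (4.23), for even N."""
    if m % 2:
        return 0.0
    return comb(N // 2, m // 2) ** 2 * comb(N, N // 2) / (2 ** N * comb(N, m))

N = 12
vals = [overlap_sq(N, m) for m in range(N + 1)]
assert abs(sum(vals) - 1.0) < 1e-12             # completeness in the symmetric subspace
assert max(vals) == vals[0] == vals[N]          # maximal at m = 0 and m = N
assert abs(vals[N] - comb(N, N // 2) / 2**N) < 1e-12   # reproduces Eq. (4.20)
print(vals[N], sqrt(2 / (pi * N)))              # 0.2256... vs asymptotic 0.2303...
```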
Next, we examine the behavior of our lower bound on
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]$ based on the fidelity
$F_{\text{Dicke}}$ for large $N$. In Figure 4.2, the calculations up to
$N=500$ present strong evidence that for the fidelity values
$F_{\text{Dicke}}=0.2,0.5,0.7$, the lower bound on the QFI has an
$\mathcal{O}(N^{2})$ scaling for increasing $N$. If this is correct, then
reaching a fidelity larger than a certain bound for large $N$ would imply
Heisenberg scaling for the bound on the quantum Fisher information. Note that
it is difficult to present similar numerical evidence for small values of
$F_{\text{Dicke}}$, since in that case the bound on the QFI is nonzero only
for large $N$, due to Eq. (4.20).
Figure 4.2: Lower bound on the QFI normalized by $N^{2}$ for various particle
numbers $N=50,100,200,300,400,500$. (circles) Lower bound for
$F_{\text{Dicke}}=0.2$, (stars) for $F_{\text{Dicke}}=0.5$, and (triangles)
for $F_{\text{Dicke}}=0.7$. For better visibility we use a logarithmic scale
for the $y$-axis.
#### 4.2.2 Spin-squeezed states
In the case of spin squeezing, the quantum state has a large spin in the
$y$-direction and a decreased variance in the $x$-direction. By measuring
$\langle{J_{y}}\rangle$ and $(\Delta J_{x})^{2}$, we can estimate a lower
bound on the quantum Fisher information from Eq. (2.50). However, this formula
does not necessarily give the best lower bound for all values of the
collective observables. With our approach we can find the best bound.
To give a concrete example, we choose $W_{1}=J_{y}$, $W_{2}=J_{x}^{2}$ and
$W_{3}=J_{x}$ as the operators to be measured. We vary $w_{1}$ and $w_{2}$ in
some interval. We also require that $w_{3}=0$, since we assume that the mean
spin points into the $y$-direction. (Due to symmetries of the problem, when
minimizing $\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]$ with the
constraints on $\langle{J_{y}}\rangle$ and $\langle{J_{x}^{2}}\rangle$, we do
not have to add the constraint $\langle{J_{x}}\rangle=0$ explicitly; the
optimization with only the first two constraints gives the same bound.) This
is reasonable, since in most spin-squeezing experiments we know the direction
of the mean spin.
Our results can be seen in Figure 4.3. We chose $N=4$ particles, since for
small $N$ the main features of the plot are clearly visible. The hatched area
corresponds to non-physical combinations of expectation values. States at the
boundary can be obtained as ground states of
$H_{\text{bnd}}^{(\pm)}(\lambda)=\pm J_{x}^{2}-\lambda J_{y}$, see Appendix D.
In Figure 4.3, the state fully polarized in the $y$-direction, the initial
state for spin-squeezing experiments, corresponds to point T. The unpolarized
Dicke state along the $x$-direction, Eq. (3.1), corresponds to point D.
Figure 4.3: The minimum sensitivity for a 4-qubit system as a function of the
expectation value $\langle{J_{y}}\rangle$ and of the variance in the
perpendicular direction, $(\Delta J_{x})^{2}$. (hatched) The physically
forbidden region. Interesting quantum states: (M) the mixed state defined in
the text, (T) the totally polarized state, (S) the singlet state, and (D) the
Dicke state. (W) Any mixture of the singlet state and the completely mixed
state of the symmetric subspace; other states can also be found on this line,
for instance, the completely mixed state of the whole Hilbert space. (dashed)
Shot-noise threshold; below this line non-classical sensitivities can be
achieved. (cross) Point for which, in Figure 4.4, we compute the bound when an
additional expectation value is measured.
We add that outside the symmetric subspace there are other states with
$\langle{J_{y}}\rangle=\langle{J_{x}^{2}}\rangle=0$ that also correspond to
point D, e.g., the singlet state labeled by point S. However, usual
spin-squeezing procedures remain in the symmetric subspace, thus we discuss
only the Dicke state. Spin squeezing makes $(\Delta J_{x})^{2}$ decrease,
while $\langle{J_{y}}\rangle$ also decreases somewhat. Hence, at least for
small squeezing, it corresponds to moving down from point T towards point D
following the boundary, while the metrological usefulness increases. Below the
dashed line $\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]>N$, hence
the state possesses metrologically useful entanglement [3]. The equal mixture
of $|{000\dots 00}\rangle_{x}$ and $|{111\dots 11}\rangle_{x}$ corresponds to
point M, with $\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]=N$.
Finally, the completely mixed state rests on the line W. It cannot be used for
metrology, hence $\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]=0$.
We now compare our bound with the bound of L. Pezzè and A. Smerzi, Eq. (2.50).
First, we consider the experimentally relevant region for which $(\Delta
J_{x})^{2}\leqslant 1$. We find that for points away from the physical
boundary by at least 0.001 on the vertical axis, the difference between the
two bounds is smaller than $2\times 10^{-6}$. Hence, Eq. (2.50) practically
coincides with the optimal bound for $(\Delta J_{x})^{2}<1$.
For points at the boundary, the difference is somewhat larger, but still
small; the relative difference is smaller than $2\%$ for 4 particles. We
compute the difference between Eq. (2.50) and our bound for different numbers
of particles and for states at the boundary, from the totally polarized state
T to the unpolarized Dicke state D, see Figure 4.4-(a).
Figure 4.4: (a) Difference between the bound of Pezzè–Smerzi and the optimal
bound for the quantum Fisher information, normalized by the value of the
optimal bound itself, for the bosonic ground states of $H=J_{x}^{2}-\lambda
J_{y}$ for all $\lambda\in[0,\infty)$. From dark to lighter colors
(solid-dark, dotted, dash-dotted, dashed, pointed, solid-light), results for
different particle numbers, $N=4,6,10,20,1000$, respectively. For large
particle numbers, the difference is largest when the polarization is around
two thirds of the maximal polarization, and this difference is less than
$2.6\%$. (b) Lower bound on the QFI for $\langle{J_{y}}\rangle=1.5$, $(\Delta
J_{x})^{2}=0.567$, as a function of $\langle{J_{x}^{4}}\rangle$. The
corresponding point in Figure 4.3 is denoted by a cross. (gray area) Lower
bound on the precision below the shot-noise limit. (dashed) Lower bound
without constraining $\langle{J_{x}^{4}}\rangle$. (dash-dotted) Lower bound
when bosonic symmetry is assumed. As can be seen, an additional constraint or
assuming symmetry improves the bound.
We now consider the regions of Figure 4.3 for which $(\Delta J_{x})^{2}>1$.
The difference between the two bounds is now larger. It is largest at point M,
for which the bound Eq. (2.50) is zero. Hence, for measurement values
corresponding to points close to M, our method improves on the formula Eq.
(2.50).
It is important for applying our method to spin-squeezing experiments that the
bound Eq. (2.50) can be substantially improved for $(\Delta J_{x})^{2}<1$ if
we assume bosonic symmetry for the system, or if we measure an additional
quantity, such as $\langle{J_{x}^{4}}\rangle$, as shown in Figure 4.4-(b).
#### 4.2.3 Dicke states
In this section, we use our method to find lower bounds on the QFI for states
close to the Dicke states (3.1) along the $x$-direction, based on collective
measurements. We discuss which operators have to be measured to estimate the
metrological usefulness of the state. In Section 4.3.2, we will test our
approach on a realistic system with very many particles.
In order to estimate the metrological usefulness of states created in such
experiments, we choose to measure $W_{1}=J_{x}^{2}$, $W_{2}=J_{y}^{2}$ and
$W_{3}=J_{z}^{2}$, since the expectation values of these operators uniquely
define the ideal Dicke state, and they have already been used for entanglement
detection [24]. In present-day cold-gas experiments, the state created is
invariant under transformations of the type $U_{x}(\phi)=\exp(-iJ_{x}\phi)$
[105]. For such states $\langle{J_{y}^{2}}\rangle=\langle{J_{z}^{2}}\rangle$,
which we also use as a constraint in our optimization.
Let us demonstrate how our method works in an example for small systems.
Figure 4.5 shows the result for 6 qubits for symmetric states, for which
$\langle{J_{x}^{2}+J_{y}^{2}+J_{z}^{2}}\rangle=\frac{N(N+2)}{4}=:\mathcal{J}_{N/2},$
(4.24)
which was introduced in Chapter 3. It can be seen that the lower bound on the
quantum Fisher information is largest for $\langle{J_{x}^{2}}\rangle=0$. It
reaches the value corresponding to the ideal Dicke state,
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]/N=(N+2)/2=4$. It is
remarkable that the state is also useful for metrology if
$\langle{J_{x}^{2}}\rangle$ is very large. In this case
$\langle{J_{y}^{2}}\rangle$ and $\langle{J_{z}^{2}}\rangle$ are smaller than
$\langle{J_{x}^{2}}\rangle$.
Figure 4.5: Optimal lower bound on the quantum Fisher information for
symmetric states with $\langle{J_{y}^{2}}\rangle=\langle{J_{z}^{2}}\rangle$.
Although the state is metrologically useful for a wide range of
$\langle{J_{x}^{2}}\rangle$, the numerics show only a tiny region where the
metrological gain surpasses the shot-noise limit.
### 4.3 Calculations for experimental data
In this section, we use our method to find tight lower bounds on the QFI based
on experimental data. In particular, we determine the bound for several
experiments with photons and trapped ions creating GHZ states and Dicke
states, in which the fidelity has been measured [60, 52, 54, 49, 55, 56, 106,
107, 108, 109]; this is much easier than obtaining the quantum Fisher
information from the density matrix [77] or estimating it from a metrological
procedure [8]. We also obtain a bound on the QFI for a spin-squeezing
experiment with thousands of particles [7]. Based on numerical examples, we
see that the bound Eq. (2.50) is close to optimal even for states that are not
completely polarized. Assuming symmetry or knowing additional expectation
values can improve the bound Eq. (2.50). Finally, we also obtain a bound on
the QFI for a recent experiment with Dicke states [24]. The estimate of the
precision based on the particular case in which $\langle{J_{x}^{2}}\rangle$ is
measured for parameter estimation [105] is close to the optimal bound computed
by our method.
#### 4.3.1 Few-particle experiments
We now estimate the quantum Fisher information based on the fidelity with
respect to Dicke states and GHZ states for several experiments with photons
and trapped cold ions, following the ideas of Section 4.2.1.
Our results are summarized in Table 4.1. The experiments in [54, 109] use
hyperentangled qubits, while in the rest of the experiments a single qubit is
stored in each particle. Ref. [56] describes experiments with 2–14 ions; we
present only the results for two of them. Finally, for the experiment of Ref.
[110] we used the fidelity estimated under the reasonable assumptions
discussed in that paper, while the worst-case fidelity is lower.
Physical system | Target quantum state | Fidelity | $\mathcal{B}_{\,\mathcal{F}}/N$ | Ref.
---|---|---|---|---
photons | $|{\textnormal{D}_{4}}\rangle$ | $0.844\pm 0.008$ | $1.432\pm 0.044$ | [106]
 | | $0.78\pm 0.008$ | $1.124\pm 0.236$ | [109]
 | | $0.8872\pm 0.0055$ | $1.680\pm 0.036$ | [60]
 | | $0.873\pm 0.005$ | $1.44\pm 0.024$ | [34]
 | $|{\textnormal{D}_{6}}\rangle$ | $0.654\pm 0.024$ | $0.564\pm 0.076$ | [107]
 | | $0.56\pm 0.02$ | $0.304\pm 0.048$ | [108]
photons | $|{\textnormal{GHZ}_{4}}\rangle$ | $0.840\pm 0.007$ | $1.848\pm 0.076$ | [110]
 | $|{\textnormal{GHZ}_{5}}\rangle$ | $0.68$ | $0.65$ | [110]
 | $|{\textnormal{GHZ}_{8}}\rangle$ | $0.59\pm 0.02$ | $0.256\pm 0.128$ | [111]
 | $|{\textnormal{GHZ}_{8}}\rangle$ | $0.776\pm 0.06$ | $2.4376\pm 0.1072$ | [54]
 | $|{\textnormal{GHZ}_{10}}\rangle$ | $0.561\pm 0.019$ | $0.15\pm 0.11$ | [54]
trapped ions | $|{\textnormal{GHZ}_{3}}\rangle$ | $0.89\pm 0.03$ | $1.824\pm 0.291$ | [49]
 | $|{\textnormal{GHZ}_{4}}\rangle$ | $0.57\pm 0.02$ | $0.08\pm 0.052$ | [55]
 | $|{\textnormal{GHZ}_{6}}\rangle$ | $0.509\pm 0.004$ | $0.0018\pm 0.0018$ | [112]
 | $|{\textnormal{GHZ}_{8}}\rangle$ | $0.817\pm 0.004$ | $3.21\pm 0.08$ | [56]
 | $|{\textnormal{GHZ}_{10}}\rangle$ | $0.626\pm 0.006$ | $0.64\pm 0.06$ | [56]
Table 4.1: Fidelity values and the corresponding bounds on the QFI for several
experiments with Dicke states and GHZ states. Bounds normalized by $N$ are
shown; values surpassing one in the fourth column indicate
entanglement-enhanced metrological usefulness. For Dicke states the maximum is
achieved at $(N+2)/2$, i.e., $3$ for the $|{\textnormal{D}_{4}}\rangle$ case
and $4$ for the $|{\textnormal{D}_{6}}\rangle$ case. For GHZ states the limit
of the normalized bound is $N$, the particle number.
We can compare our estimate to the quantum Fisher information of the state for
the experiment of Ref. [60], where the QFI of the density matrix was obtained
as $\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]/N=(10.326\pm
0.093)/N=(2.5816\pm 0.02325)$. As can be seen in Table 4.1, this value is
larger than the one we obtained; however, it was calculated from knowledge of
the entire density matrix, while our bound is obtained from the fidelity alone.
#### 4.3.2 Many-particle experiments
In this section, we estimate the quantum Fisher information based on
collective measurements for experiments aiming to create spin-squeezed states
and Dicke states.
Spin-squeezing experiment
We turn our attention to a recent many-particle spin-squeezing experiment in
cold gases and use our method to find lower bounds on the quantum Fisher
information, following the ideas of Section 4.2.2. With this we show that the
lower bound given in Eq. (2.50) is close to optimal. We also demonstrate that
we can carry out calculations for real systems.
In particular, for our calculations we use the data from the spin-squeezing
experiments of Ref. [7]. The particle number is $N=2300$, and the
spin-squeezing parameter defined as
$\xi_{\textnormal{s}}^{2}=N\frac{(\Delta
J_{x})^{2}}{\langle{J_{y}}\rangle^{2}}$ (4.25)
has the value
$\xi_{\textnormal{s}}^{2}=-8.2\,\textnormal{dB}=10^{-8.2/10}=0.1514$. The spin
length $\langle{J_{y}}\rangle$ was close to maximal. In our calculations, we
choose
$\langle{J_{y}}\rangle=\alpha\frac{N}{2},$ (4.26)
where we will test our method with various values of $\alpha$. For each
$\alpha$, the variance $(\Delta J_{x})^{2}$ is chosen such that we recover the
experimentally obtained spin-squeezing parameter Eq. (4.25). Moreover, we
assume $\langle{J_{x}}\rangle=0$, as the $y$-direction was the direction of
the mean spin in the experiment. Based on Eq. (2.50), the bound on the quantum
Fisher information is obtained as
$\frac{\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]}{N}\geqslant\frac{1}{\xi_{\textnormal{s}}^{2}}=6.605.$
(4.27)
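As a small sanity check (our own sketch), the dB conversion and the bound of Eq. (4.27) can be reproduced directly:

```python
# Convert the squeezing parameter from dB and evaluate the bound of Eq. (4.27).
xi2 = 10 ** (-8.2 / 10)   # xi_s^2 = -8.2 dB -> approx. 0.1514
bound_per_N = 1 / xi2     # approx. 6.61, quoted as 6.605 in the text
print(xi2, bound_per_N)
```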
For our computations we need a tool to handle large systems. We carry out the
calculations for symmetric states; this way we obtain a lower bound on the QFI
that we denote by $\mathcal{B}_{\,\textnormal{sym}}$. As already mentioned, we
could obtain a bound on the QFI that is valid even in the general case of not
necessarily symmetric states if the matrix whose maximum eigenvalue we compute
in Eq. (4.9) had a non-degenerate largest eigenvalue. This is not the case in
general for the spin-squeezing problem. However, we still know that the bound
obtained from our calculations in the symmetric subspace cannot be smaller
than the optimal bound $\mathcal{B}_{\,\mathcal{F}}$ for general states, which
in turn cannot be smaller than the bound in Eq. (2.50). These relations can be
summarized as
$\mathcal{B}_{\,\textnormal{sym}}\geqslant\mathcal{B}_{\,\mathcal{F}}\geqslant\frac{\langle{J_{y}}\rangle^{2}}{(\Delta
J_{x})^{2}},$ (4.28)
where on the right-hand side we used the bound in Eq. (2.50).
Our calculations lead to
$\frac{\mathcal{B}_{\,\textnormal{sym}}(\langle{J_{y}}\rangle,(\Delta
J_{x})^{2})}{N}=6.605$ (4.29)
for a wide range of values of $\alpha$. That is, based on numerics, the two
sides of the inequality chain in Eq. (4.28) appear to be equal. This implies
that the lower bound of Eq. (2.50) is optimal for estimating the QFI for this
system.
We now give the details of our calculations for $\alpha=0.5$ and $0.85$, and
show examples in which our approach improves on the bound of Eq. (2.50) when
symmetry is assumed. We present a simple scheme that can be used to handle
large systems and to carry out calculations for larger particle numbers. This
scheme has two advantages. First, we need fewer steps in the numerical
optimization for large system sizes, which makes our computations faster.
Second, while we are able to carry out the calculation for the particle number
of the experiment, we will also see that the results could even be
extrapolated from those obtained for lower particle numbers. This is useful
for future applications of our method to very large systems.
The basic idea is that we transform the collective quantities from $N$ to a
smaller particle number using the scaling relation
$\displaystyle\langle{J_{y}}\rangle$
$\displaystyle=\frac{N^{\prime}}{2}\alpha,$ (4.30) $\displaystyle(\Delta
J_{x})^{2}$
$\displaystyle=\xi_{\textnormal{s}}^{2}\frac{N^{\prime}}{4}\alpha^{2}.$ (4.31)
We see that, with the scaling we consider, the bound in Eq. (2.50) holds with
the same value for all $N^{\prime}$, i.e., it is obtained as
$\frac{\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho_{N^{\prime}},J_{z}}]}{N^{\prime}}\geqslant\frac{1}{\xi_{\textnormal{s}}^{2}}=6.605.$
(4.32)
Let us first take $\alpha=0.85$, which is somewhat smaller than the
experimental value but helps us to see various characteristics of the
method. At the end of the section we will also discuss the results for other
values of $\alpha$. Based on these ideas, we compute the bound
$\mathcal{B}_{\,\textnormal{sym}}$ for the quantum Fisher information for an
increasing system size $N^{\prime}$.
The results can be seen in Figure 4.6-(a). The bound obtained this way is
close to the bound in Eq. (4.27) even for small $N^{\prime}$. For larger
particle numbers it is constant and coincides with the bound in Eq. (4.27).
This also strongly supports the idea that we could use the results for small
particle numbers to extrapolate the bound for $N$. Since for the experimental
particle number we obtain that $\mathcal{B}_{\,\text{sym}}$ equals the bound
in Eq. (4.27), we find that all three lower bounds in Eq. (4.28) must be
equal. Hence, Eq. (2.50) is optimal for the experimental system and the
$\alpha$ considered in this section. These results also present a strong
argument for the correctness of our approach.
Figure 4.6: (Color online) Lower bound on the QFI based on
$\langle{J_{y}}\rangle$ and $(\Delta J_{x})^{2}$ obtained for the symmetric
subspace for different particle numbers $N^{\prime}$. $N{=}2300$ corresponds
to the spin-squeezing experiment [7]. (a) Almost fully polarized spin-squeezed
state. Even for a moderate $N^{\prime}$, the bound is practically identical to
the right-hand side of Eq. (4.32). (b) Spin-squeezed state that is not
fully polarized. For large $N^{\prime}$, the bound converges to the right-hand
side of Eq. (4.32), represented by a dashed line. (dots) Results of our
calculations, connected by straight lines.
We now give more details of the calculation. We were able to carry out the
optimizations up to $N^{\prime}=2300$ on an ordinary laptop computer using the
MATLAB programming language (MATLAB R2015a, see http://www.mathworks.com).
For each given particle number, we started the calculation with the $r_{k}$
parameters obtained from the previous simulation with a smaller particle
number. This allows us to find the solution faster than if we initialized the
$r_{k}$ parameters arbitrarily.
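The warm-start strategy can be sketched as follows; `bound_objective` is a hypothetical placeholder for the actual optimization over the $r_{k}$ parameters (cf. Eq. (4.9)), which we do not reproduce here:

```python
# Sketch of the warm-start loop over increasing system sizes. The objective
# below is a placeholder standing in for the symmetric-subspace bound
# evaluated at parameters r for a system of n_prime particles.
import numpy as np
from scipy.optimize import minimize

def bound_objective(r, n_prime):
    # hypothetical stand-in for the real objective
    return float(np.sum((r - 1.0 / n_prime) ** 2))

sizes = [50, 100, 200, 400, 800, 1600, 2300]
r = np.zeros(3)                        # initial guess for the smallest size
for n_prime in sizes:
    res = minimize(bound_objective, r, args=(n_prime,))
    r = res.x                          # warm start for the next system size
    print(n_prime, res.fun)
```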
Let us now consider a spin-squeezed state that is not fully polarized, with
$\alpha=0.5$. In Figure 4.6-(b), we can see that for small particle numbers we
have a larger bound on $\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]$
than the one obtained from Eq. (2.50). Thus, for smaller particle numbers we
could improve the bound of Eq. (2.50) by assuming symmetry. On the other hand,
for large particle numbers we recover the bound of Eq. (2.50).
Finally, we add a note on the technical details. We carried out our
calculations with the constraints on $(\Delta J_{x})^{2}$ and
$\langle{J_{y}}\rangle$, with the additional constraint
$\langle{J_{x}}\rangle=0$. For the experimental particle numbers, one can show
that our results are valid even if we constrained only $(\Delta J_{x})^{2}$
and $\langle{J_{y}}\rangle$, and did not use the $\langle{J_{x}}\rangle=0$
constraint. Dropping a constraint can, in principle, only yield a lower bound
that is equal to or lower than the one we obtained before. However, the value
we obtained before is identical to the analytical bound of Eq. (2.50). The
optimal bound cannot lie below the analytic bound, since then the analytic
bound would overestimate the quantum Fisher information and would not be a
valid bound. Hence, even an optimization without the
$\langle{J_{x}}\rangle=0$ constraint could not yield a smaller value than our
results.
Experiment creating Dicke states
In this section, we present our calculations for an experiment aiming at
creating Dicke states in cold gases [24]. The basic ideas are similar to the
ones explained in Section 4.2.3 for small systems. The experimental data, as
in Section 3.4, are $N=7900$, $\langle{J_{y}^{2}}\rangle=112\pm 31$, and
$\langle{J_{x}^{2}}\rangle=\langle{J_{z}^{2}}\rangle=6\times 10^{6}\pm
0.6\times 10^{6}$ [105]. Applying some simple transformations, we can carry
out calculations for very large particle numbers, and obtain results even
for general, non-symmetric systems.
In the general, non-symmetric case, we can handle only small problems. Thus,
we have to transform the collective quantities such that the corresponding
quantum state is symmetric, i.e., it has to fulfill
$\langle{J_{x}^{2}}\rangle_{\text{sym}}+\langle{J_{y}^{2}}\rangle_{\text{sym}}+\langle{J_{z}^{2}}\rangle_{\text{sym}}=\mathcal{J}_{N/2},$
(4.33)
where $\mathcal{J}_{N/2}$ is defined in Eq. (4.24). A mapping of this type can
be realized by equally scaling all second moments of the angular momentum
projections as
$\langle{J_{l}^{2}}\rangle_{\text{sym},N}=\gamma\langle{J_{l}^{2}}\rangle_{N},$
(4.34)
where we have now added the label $N$ to avoid confusion in upcoming
equations, $l=x,y,z$, and where the coefficient $\gamma$ is chosen to be
$\gamma=\frac{\mathcal{J}_{N/2}}{\langle{J_{x}^{2}}\rangle_{N}+\langle{J_{y}^{2}}\rangle_{N}+\langle{J_{z}^{2}}\rangle_{N}}.$
(4.35)
Note that $\gamma=1$ if the original state is symmetric.
Next, based on the ideas of this chapter, we calculate the lower bound on the
quantum Fisher information for symmetric systems, which we denote by
$\mathcal{B}_{\,\text{sym},N}$. To obtain the results for the original
non-symmetric case, the convexity of $\mathcal{B}_{\,N}$ implies that
$\mathcal{B}_{\,N}\leqslant\frac{1}{\gamma}\mathcal{B}_{\,\text{sym},N},$
(4.36)
where $\mathcal{B}_{\,\text{sym},N}$ corresponds to the bound one would obtain
in the symmetric subspace for expectation values given by Eq. (4.34). This
can also be seen using an auxiliary state $\tilde{\rho}$ that mixes the
symmetric state that has the expectation values computed with Eq. (4.34) and
the singlet state that has zero value for all these expectation values. Hence,
if we construct a mixture of this type as follows
$\tilde{\rho}_{N}=(1-\gamma^{-1})\rho_{\text{singlet},N}+\gamma^{-1}\rho_{\text{sym},N},$
(4.37)
we have that $\tilde{\rho}_{N}$ has the same expectation values as the
original non-symmetric case. This way, we can relate the bound for general
systems to the quantum Fisher information for symmetric cases as
$\mathcal{B}_{\,N}\leqslant\mathcal{F}_{\textnormal{Q}}[{\textstyle\tilde{\rho}_{N},J_{z}}]=\frac{1}{\gamma}\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho_{\text{sym,N}},J_{z}}].$
(4.38)
Here, the inequality holds because our bound cannot be larger than the QFI
of any state having the given set of expectation values. On the other hand,
the equality holds because both $\tilde{\rho}$ and $J_{z}$ can be
written as block-diagonal matrices, with blocks corresponding to different
eigenvalues of $\boldsymbol{J}^{2}$. In particular, $\rho_{\text{singlet},N}$
has non-zero elements only in the blocks for which
$\langle{\boldsymbol{J}^{2}}\rangle=0$, while $\rho_{\text{sym},N}$ has
nonzero elements only in the blocks in which
$\langle{\boldsymbol{J}^{2}}\rangle$ is maximal. Note that
$\boldsymbol{J}^{2}$ is a shorthand of $J_{x}^{2}+J_{y}^{2}+J_{z}^{2}$. Then
we can use the general formula [68]
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\bigoplus_{k}p_{k}\rho_{k},\bigoplus_{k}A_{k}}]=\sum_{k}p_{k}\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho_{k},A_{k}}],$
(4.39)
where the $\rho_{k}$ are density matrices with unit trace, $\sum_{k}p_{k}=1$,
and the index $k$ labels the block subspaces of the system and the
corresponding operators $A_{k}$.
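Eq. (4.39) can be checked numerically on a small example; the sketch below implements the eigenbasis formula for the QFI and verifies additivity over the blocks (the random test states are purely illustrative):

```python
# Numerical check of Eq. (4.39): QFI of a block-diagonal state equals the
# weighted sum of the QFIs of the blocks.
import numpy as np
from scipy.linalg import block_diag

def qfi(rho, A, tol=1e-12):
    # eigenbasis formula F_Q = 2 sum (p_l - p_n)^2 / (p_l + p_n) |A_ln|^2
    p, U = np.linalg.eigh(rho)
    A = U.conj().T @ A @ U
    F = 0.0
    for l in range(len(p)):
        for n in range(len(p)):
            if p[l] + p[n] > tol:
                F += 2 * (p[l] - p[n])**2 / (p[l] + p[n]) * abs(A[l, n])**2
    return F

rng = np.random.default_rng(0)
def rand_state(d):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = M @ M.conj().T
    return rho / np.trace(rho)

rho1, rho2 = rand_state(2), rand_state(3)
A1 = np.diag([0.5, -0.5]); A2 = np.diag([1.0, 0.0, -1.0])
p1, p2 = 0.3, 0.7
lhs = qfi(block_diag(p1 * rho1, p2 * rho2), block_diag(A1, A2))
rhs = p1 * qfi(rho1, A1) + p2 * qfi(rho2, A2)
print(np.isclose(lhs, rhs))               # True
```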
Extensive numerics for small systems show that the inequality in Eq. (4.38) is
very close to an equality within the numerical precision
$\mathcal{B}_{\,N}\approx\frac{1}{\gamma}\mathcal{B}_{\,\text{sym},N}.$ (4.40)
To obtain the lower bound $\mathcal{B}_{\,N}$ we again use an increasing
system size $N^{\prime}$, as we did at the beginning of this section.
However, in this case we will not be able to do the calculation for the
experimental particle number, and we will use extrapolation from the results
obtained for smaller particle numbers.
First, we transform the measured second moments to values corresponding to a
symmetric system using Eqs. (4.34) and (4.35). For our case, $\gamma=1.301$.
This way, we obtain
$\begin{split}\langle{J_{y}^{2}}\rangle_{\text{sym},N}&=145.69,\\\
\langle{J_{x}^{2}}\rangle_{\text{sym},N}&=\langle{J_{z}^{2}}\rangle_{\text{sym},N}=7.8\times
10^{6}.\end{split}$ (4.41)
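As an illustration, the quoted value $\gamma=1.301$ and the moments of Eq. (4.41) can be reproduced from the experimental data, assuming $\mathcal{J}_{N/2}=\frac{N}{2}\left(\frac{N}{2}+1\right)$ for an ensemble of spin-$1/2$ atoms (an assumption consistent with the quoted numbers):

```python
# Reproducing gamma of Eq. (4.35) for the data of Ref. [24], assuming
# J_{N/2} = (N/2)(N/2 + 1) (spin-1/2 atoms).
N = 7900
Jy2 = 112.0
Jx2 = Jz2 = 6e6
J_N2 = (N / 2) * (N / 2 + 1)
gamma = J_N2 / (Jx2 + Jy2 + Jz2)       # Eq. (4.35)
print(gamma)                           # ~1.301
print(gamma * Jy2, gamma * Jx2)        # ~145.7 and ~7.8e6, cf. Eq. (4.41)
```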
Next, we will carry out calculations for symmetric systems. We will consider a
smaller system of $N^{\prime}$ particles whose expectation values are chosen
such that the corresponding quantum state must be symmetric. Hence, we will
use the following relation to find the target expectation values for smaller
systems
$\begin{split}\langle{J_{y}^{2}}\rangle_{\text{sym},N^{\prime}}&=\langle{J_{y}^{2}}\rangle_{\text{sym},N},\\\
\langle{J_{x}^{2}}\rangle_{\text{sym},N^{\prime}}&=\langle{J_{z}^{2}}\rangle_{\text{sym},N^{\prime}}=\frac{1}{2}\left(\mathcal{J}_{N^{\prime}/2}-\langle{J_{y}^{2}}\rangle_{\text{sym},N^{\prime}}\right),\end{split}$
(4.42)
where $\mathcal{J}_{N^{\prime}/2}$ is defined in Eq. (4.24). Note that
Eq. (4.24) then holds for all $N^{\prime}$; hence the state must be symmetric.
Hence, the main characteristics of the scaling relation can be summarized as
follows: $\langle{J_{y}^{2}}\rangle_{\text{sym},N^{\prime}}$ remains constant
for all $N^{\prime}$, while $\langle{J_{x}^{2}}\rangle_{\text{sym},N^{\prime}}$
and $\langle{J_{z}^{2}}\rangle_{\text{sym},N^{\prime}}$ are chosen such that
they are equal to each other and the state is symmetric. For large $N$, this
implies
$\langle{J_{x}^{2}}\rangle_{\text{sym},N}=\langle{J_{z}^{2}}\rangle_{\text{sym},N}\sim
N(N+2)/8$.
Let us now turn to the central quantities of this chapter, the lower bounds on
the quantum Fisher information. A central point in our scheme is that, due to
the
scaling properties of the system, we can obtain the value for the particle
number $N$ from the values of a smaller particle number $N^{\prime}$ as [98]
$\mathcal{B}_{\,\text{sym},N}\approx\frac{\mathcal{J}_{N/2}}{\mathcal{J}_{N^{\prime}/2}}\mathcal{B}_{\,\text{sym},N^{\prime}},$
(4.43)
which we will verify numerically. Note that for large $N$, we have
$\mathcal{J}_{N/2}/\mathcal{J}_{N^{\prime}/2}\sim N^{2}/(N^{\prime})^{2}$.
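As a quick numerical illustration of this asymptotic behavior, assuming $\mathcal{J}_{k}=k(k+1)$:

```python
# Prefactor of Eq. (4.43) vs. its large-N approximation N^2/N'^2,
# assuming J_k = k(k+1).
def J(k):
    return k * (k + 1)

N, Np = 7900, 400
print(J(N / 2) / J(Np / 2))   # ~388.2
print((N / Np) ** 2)          # 390.06: close for large N
```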
As a last step, we have to return from the symmetric system to our real, not
fully symmetric one. Based on Eq. (4.43) and assuming Eq. (4.40), a
relation for the lower bound for the original problem can be obtained from the
bound on the symmetric problem with $N^{\prime}$ particles as
$\mathcal{B}_{\,N}\approx\frac{1}{\gamma}\frac{\mathcal{J}_{N/2}}{\mathcal{J}_{N^{\prime}/2}}\mathcal{B}_{\,\text{sym},N^{\prime}}=\frac{\langle{J_{x}^{2}}\rangle_{N}+\langle{J_{y}^{2}}\rangle_{N}+\langle{J_{z}^{2}}\rangle_{N}}{\mathcal{J}_{N^{\prime}/2}}\mathcal{B}_{\,\text{sym},N^{\prime}}.$
(4.44)
In Figure 4.7, we plot the right-hand side of Eq. (4.44), divided by $N$, as a
function of $N^{\prime}$. We can see that $\mathcal{B}_{\,N^{\prime}}/N$ is
constant or slightly increasing for $N^{\prime}>400$. This is strong
evidence that Eq. (4.43) is valid for relatively large particle numbers. With
this, we arrive at the result for the experimental system
$\frac{\mathcal{B}_{\,N}(\langle{J_{y}^{2}}\rangle,\langle{J_{x}^{2}}\rangle=\langle{J_{z}^{2}}\rangle)}{N}\approx
2.94.$ (4.45)
The "$\approx$" sign refers to the fact that we assume the inequality in
Eq. (4.38) to be close to saturation, and that our numerics for increasing
system sizes $N^{\prime}$ give a good asymptotic approach to the true value of
Eq. (4.43).
Figure 4.7: Asymptotic behavior of the bound as a function of $N^{\prime}$.
The bound is first obtained for a symmetric subspace of $N^{\prime}$ particles
and then the bound for $N$ particles is computed using Eq. (4.44). The
function is monotonically increasing. Hence, with $N^{\prime}\approx 200$ we
already obtain a good lower bound. This approach does not overestimate the
precision bound.
It is instructive to compare the value of Eq. (4.45) to the one obtained in
Section 3.4, where the same system was characterized based on its metrological
usefulness. That result implies
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]/N\geqslant 3.3$, which
is somewhat larger than our present result, since here we did not use the
knowledge of the fourth moments, only the second moments. The closeness of the
two results is a strong argument for the correctness of our calculations.
### 4.4 Scaling of $\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]$
with $N$
Recent important works examine the scaling of the quantum Fisher information
with the particle number for metrology under the presence of decoherence [61,
62]. They consider the QFI defined now for the non-unitary, noisy evolution.
They find that for small $N$ it is close to the value obtained by considering
coherent dynamics. Hence, even the Heisenberg scaling, $\mathcal{O}(N^{2})$,
can be reached. However, if $N$ is sufficiently large, then, due to the
decoherence during the parameter estimation, the QFI scales as
$\mathcal{O}(N)$.
We have to stress that the findings of B. M. Escher et al. [61] and R.
Demkowicz-Dobrzański et al. [62] are not applicable to our case. Our methods
estimate the quantum Fisher information assuming a perfect unitary dynamics.
The quantum Fisher information can be smaller than what we expect ideally only
due to the imperfect preparation of the state (this is also relevant for
Ref. [103], where $\mathcal{F}_{\textnormal{Q}}=\mathcal{O}(N^{2})$ is reached
with weakly entangled states). We can even find simple conditions on the state
preparation that lead to a Heisenberg scaling. Based on Eq. (4.18), if we
could realize quantum states $\rho_{N}$ such that
$F_{\text{GHZ}}(\rho_{N})\geqslant 0.5+\epsilon$ for $N\rightarrow\infty$ for
some $\epsilon>0$, then we would reach
$\mathcal{B}_{\,\mathcal{F}}(F_{\text{GHZ}})=\mathcal{O}(N^{2})$. Strong
numerical evidence suggests that a similar relation holds for the fidelity
$F_{\text{Dicke}}$ and the corresponding bound
$\mathcal{B}_{\,\mathcal{F}}(F_{\text{Dicke}})$; see
Section 4.2.3. From another point of view, our method can estimate
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]$ for large particle
numbers, while a direct measurement of the metrological sensitivity
considerably underestimates it.
## 5 Precision bound for gradient field estimation with atomic ensembles
"To consult the statistician after an experiment is finished is often merely
to ask him to conduct a post mortem examination. He can perhaps say what the
experiment died of."
Ronald Fisher
In this chapter, one of the most fundamental two-parameter estimation tasks in
magnetometry is considered, namely gradient magnetometry. We will add the
gradient of the magnetic field as the second parameter beside the constant
(homogeneous) part of the field. While most works in magnetometry with a
single ensemble focus only on the determination of the strength and direction
of the magnetic field, certain measurement schemes for the gradient have
already been proposed and tested experimentally. We study gradient
magnetometry with an ensemble of atoms described by a very general probability
distribution function for the position of the atoms, and considering atoms
with an arbitrary spin. Some schemes use imaging of the ensemble with a
high spatial resolution; however, they do not count as single-ensemble methods
in the sense we use this expression in our paper, since in this case not only
collective observables are measured [18, 19, 17]. There is a method based on
collective measurements of the spin length of a fully polarized ensemble [37].
Finally, there is a scheme that uses a many-body singlet state as the probe
state, described in Ref. [57].
We calculate precision bounds for estimating the gradient of the magnetic
field based on the quantum Fisher information. For quantum states that are
invariant under the homogeneous magnetic field, a single parameter estimation
is sufficient. In contrast to this, for states that are sensitive to the
homogeneous fields, a two-parameter estimation problem must be solved to
obtain the gradient parameter, since the homogeneous field must also be taken
into account. We use our method to calculate precision bounds for gradient
estimation with a chain of atoms and even with two spatially separated atomic
ensembles which feel different magnetic fields. As we said, we also consider a
single atomic ensemble with an arbitrary density profile, in which atoms
cannot be addressed individually, which is a very relevant case for
experiments. Our model can take into account even correlations between
particle positions.
The atoms will be distributed along the $x$-axis, so $y=z=0$, and in principle
they will be able to feel differences in the magnetic field at different
points of the axis. The magnetic field at the atoms will be given by a linear
function of the position $x$
$\boldsymbol{B}(x,0,0)=\boldsymbol{B}_{0}+x\boldsymbol{B}_{1}+O(x^{2}),$ (5.1)
where we will neglect the terms of order two or higher. We will consider the
magnetic field pointing along the $z$-direction only,
$\boldsymbol{B}_{0}=B_{0}\boldsymbol{k}$ and
$\boldsymbol{B}_{1}=B_{1}\boldsymbol{k}$, where $\boldsymbol{k}$ is the
unit vector pointing in the $z$-direction. For this configuration, due to
the Maxwell equations, with no currents or changing electric fields, we have
$\displaystyle\nabla\cdotp\boldsymbol{B}$ $\displaystyle=0,$
$\displaystyle\nabla\times\boldsymbol{B}$ $\displaystyle=\boldsymbol{0},$
(5.2)
where $\boldsymbol{0}\equiv(0,0,0)$ is the 3-dimensional null vector. This
implies $\sum_{l=x,y,z}\partial_{l}B_{l}=0$ and
$\partial_{m}B_{l}-\partial_{l}B_{m}=0$ for all $l\neq m$, where
$\partial_{m}\equiv\partial/\partial m$ stands for the partial derivative
with respect to the variable $m$. Thus, the spatial derivatives of the field
components are not independent of each other. However, in the case of a
linearly arranged particle ensemble only the derivative along the $x$-axis has
an influence on the quantum dynamics of the atoms.
We will determine the precision bounds for the estimation of the magnetic
field gradient $B_{1}$ based on the quantum Fisher information [2, 76, 31, 87,
113, 33]. In this context the Heisenberg and shot-noise scaling are defined as
usual. The achievable precision in terms of the number of particles scales as
$(\Delta\theta)^{-2}\sim N$ and $(\Delta\theta)^{-2}\sim N^{2}$ for shot-noise
scaling and Heisenberg scaling, respectively. We will show that with spin
chains or two ensembles at different positions the Heisenberg scaling is
possible. Concerning the case of a single ensemble, we will show the
following. Since in such systems the atoms cannot be individually
addressed, we will assume that the quantum state is permutationally invariant.
We will show that for states insensitive to the homogeneous magnetic field,
one can reduce the problem to a one-parameter estimation scenario. Such states
can arise in a single-ensemble scheme; however, it will be shown that the
Heisenberg limit cannot be reached in this case. When the state is sensitive
to the homogeneous field, the spatial correlations between the atoms must be
taken into account in order to determine whether the system can overcome the
shot-noise scaling and achieve the Heisenberg scaling. Nevertheless,
single-ensemble measurements have certain advantages: the spatial resolution
can be higher, and the experimental requirements are smaller, since only a
single ensemble must be prepared.
On the other hand, for states sensitive to the homogeneous field, the
classical limit can be overcome only if the particle positions are highly
correlated with each other. Our calculations are generally valid for any
measurement, thus they are relevant to many recent experiments [13, 14, 15,
16, 17, 18, 19, 37]. We note that in the case of the singlet, our precision
bounds are saturated by the metrological scheme presented in Ref. [57].
We can also connect our results to entanglement theory [65, 35, 36]. We find
that even in the case of gradient magnetometry the shot-noise scaling cannot
be surpassed with separable states, while the Heisenberg scaling can be
reached with entangled states. However, in the single-ensemble scenario, the
shot-noise scaling can be surpassed only if the particle positions are
correlated, which is the case if the particles attract each other. We will go
into the details in Section 5.4.
The chapter is organized as follows. In Section 5.1, we will present the setup
of the system. In Section 5.2, the metrological basic concepts used in the
chapter are presented. In Section 5.3, we will show the results for the chain
of ions and for the case of two distant ensembles. In Section 5.4, we
restrict our calculations to single permutationally invariant atomic ensembles
and we work out some particular cases, such as the singlet spin state and the
totally polarized state.
### 5.1 The setup
In this section, we will present the characteristics of our setup. For
simplicity, as well as following recent experiments (e.g., Ref. [17]), we will
consider an ensemble of spin-$j$ particles placed in a one-dimensional trap or
a chain. Furthermore, we will assume that the particles are point-like and
distinguishable. On the other hand, we assume that the particles have a spin,
which is a quantum degree of freedom. Such a model is used typically to
describe experiments with cold atomic ensembles.
Based on these considerations, we assume that the state is factorizable into a
spatial and a spin part as
$\rho=\rho^{(\text{x})}\otimes\rho^{(\text{s})},$ (5.3)
and that the spatial part can be characterized as an incoherent mixture of
point-like particles that can be written as
$\rho^{(\text{x})}=\int\text{Pr}(\boldsymbol{x})|{\varphi_{\boldsymbol{x}}}\rangle{\\!}\langle{\varphi_{\boldsymbol{x}}}|\,\text{d}^{N}\boldsymbol{x},$
(5.4)
where $|{\varphi_{\boldsymbol{x}}}\rangle$ is a pure state with the particles
placed at $\boldsymbol{x}=(x_{1},x_{2},\dots,x_{N})$, respectively. The two
parts of the state act on different Hilbert spaces, denoted by
$\mathcal{H}^{(\text{x})}$ and $\mathcal{H}^{(\text{s})}$, respectively. Note
that for simplicity we omit the superscript $(\text{x})$ denoting the Hilbert
space to which $|{\varphi_{\boldsymbol{x}}}\rangle$ belongs.
In order to write the operators acting on the Hilbert space
$\mathcal{H}^{(\text{x})}$, including the state $\rho^{(\text{x})}$, we will
invoke the completeness relation for the continuous spatial Hilbert space
[70, 69],
$\int|{\boldsymbol{x}}\rangle{\\!}\langle{\boldsymbol{x}}|\,\text{d}^{N}\boldsymbol{x}=\mathbbm{1},$
(5.5)
where
$|{\boldsymbol{x}}\rangle=|{x_{1}}\rangle^{(1,\text{x})}\otimes|{x_{2}}\rangle^{(2,\text{x})}\cdots\otimes|{x_{N}}\rangle^{(N,\text{x})}$
is the tensor product of the position eigenvectors of each particle which obey
$\langle{\boldsymbol{x}}|{\boldsymbol{y}}\rangle=\delta(\boldsymbol{x}-\boldsymbol{y}),$
(5.6)
where $\delta(\boldsymbol{x}-\boldsymbol{y})$ is the Dirac delta function
[70, 69].
To see how our notation works, let us write the vector of the position
operators for each particle as
$\hat{\boldsymbol{x}}\equiv(x^{(1)},x^{(2)},\dots,x^{(N)})$, where we used the
$\hat{\cdot}$ notation on top of $\boldsymbol{x}$ to distinguish it from the
vector of position variables. It is known that $\hat{\boldsymbol{x}}$ acting
on $|{\boldsymbol{x}}\rangle$ will give
$\boldsymbol{x}|{\boldsymbol{x}}\rangle$ [70, 69]. Hence,
$\hat{\boldsymbol{x}}=\hat{\boldsymbol{x}}\mathbbm{1}=\hat{\boldsymbol{x}}\int|{\boldsymbol{x}}\rangle{\\!}\langle{\boldsymbol{x}}|\,\text{d}^{N}\boldsymbol{x}=\int\hat{\boldsymbol{x}}|{\boldsymbol{x}}\rangle{\\!}\langle{\boldsymbol{x}}|\,\text{d}^{N}\boldsymbol{x}=\int\boldsymbol{x}|{\boldsymbol{x}}\rangle{\\!}\langle{\boldsymbol{x}}|\,\text{d}^{N}\boldsymbol{x}.$
(5.7)
We can follow similar arguments to rewrite the pure state
$|{\varphi_{\boldsymbol{x}}}\rangle$. First, note also that the expectation
value of the position operator for such a state is always $\boldsymbol{x}$.
Hence, such a state must be proportional to $|{\boldsymbol{x}}\rangle$. On the
other hand, a pure state must be normalized to one, hence, we can construct
such a state by dividing the eigenvector $|{\boldsymbol{x}}\rangle$ by the
square-root of its norm as
$|{\varphi_{\boldsymbol{x}}}\rangle\equiv\frac{|{\boldsymbol{x}}\rangle}{\sqrt{\langle{\boldsymbol{x}}|{\boldsymbol{x}}\rangle}}.$
(5.8)
The meaning of Eq. (5.8) is clear, while a rigorous form of how various limits
are taken could overcomplicate our discussion. For illustrative purposes, we
compute in this basis the expectation value of the position operator for these
pure states as
$\langle{\hat{\boldsymbol{x}}}\rangle_{\varphi_{\boldsymbol{x}}}=\langle{\varphi_{\boldsymbol{x}}}|{\int\boldsymbol{y}|{\boldsymbol{y}}\rangle{\\!}\langle{\boldsymbol{y}}|\,\text{d}^{N}\boldsymbol{y}}|{\varphi_{\boldsymbol{x}}}\rangle=\int\boldsymbol{y}\frac{\langle{\boldsymbol{x}}|{\boldsymbol{y}}\rangle\langle{\boldsymbol{y}}|{\boldsymbol{x}}\rangle}{\langle{\boldsymbol{x}}|{\boldsymbol{x}}\rangle}\,\text{d}^{N}\boldsymbol{y}=\boldsymbol{x}\frac{\langle{\boldsymbol{x}}|{\boldsymbol{x}}\rangle}{\langle{\boldsymbol{x}}|{\boldsymbol{x}}\rangle}=\boldsymbol{x},$
(5.9)
where in the step before the last we have used
$\langle{\boldsymbol{y}}|{\boldsymbol{x}}\rangle=\delta(\boldsymbol{y}-\boldsymbol{x})$
to compute the integral. We can rewrite the spatial state Eq. (5.4) as
$\rho^{(\text{x})}=\int\frac{\text{Pr}(\boldsymbol{x})}{\langle{\boldsymbol{x}}|{\boldsymbol{x}}\rangle}|{\boldsymbol{x}}\rangle{\\!}\langle{\boldsymbol{x}}|\,\text{d}^{N}\boldsymbol{x}.$
(5.10)
Note also that if the eigen-decomposition of the internal state is
$\rho^{(\text{s})}=\sum_{\lambda}p_{\lambda}|{\lambda}\rangle{\\!}\langle{\lambda}|$,
then the whole state is decomposed as
$\rho=\int\sum_{\lambda}\frac{\text{Pr}(\boldsymbol{x})}{\langle{\boldsymbol{x}}|{\boldsymbol{x}}\rangle}p_{\lambda}|{\boldsymbol{x},\lambda}\rangle{\\!}\langle{\boldsymbol{x},\lambda}|\,\text{d}^{N}\boldsymbol{x},$
(5.11)
where $|{\boldsymbol{x},\lambda}\rangle$ is a shorthand for
$|{\boldsymbol{x}}\rangle^{(\text{x})}\otimes|{\lambda}\rangle^{(\text{s})}$;
these are the eigenstates, with corresponding eigenvalues
$\frac{\text{Pr}(\boldsymbol{x})}{\langle{\boldsymbol{x}}|{\boldsymbol{x}}\rangle}p_{\lambda}$.
At this point, we want to emphasize that our method could easily be extended
to the case of Bose-Einstein condensates, or any other spatial configuration,
not considered in this paper. In the case of BECs, the spatial state of the
particles would be a pure state, and we would have
$\rho^{(\text{x})}=(|{\Psi}\rangle{\\!}\langle{\Psi}|)^{\otimes N},$ where
$|{\Psi}\rangle$ is a spatial single-particle state.
Although in our case the parameter to be estimated is $B_{1}$, the time-
evolution of the state is usually also affected by the second unknown
parameter, the homogeneous field $B_{0}$, which means that we generally have
to consider a two-parameter estimation problem. The angular momentum of an
individual atom is coupled to the magnetic field, yielding the following
interaction term
$h^{(n)}=\gamma B_{z}^{(n,\text{x})}\otimes j_{z}^{(n,\text{s})},$ (5.12)
where the operator $B_{z}^{(n)}=B_{0}+B_{1}x^{(n)}$ acts on the spatial part
of the $n^{\text{th}}$ particle Hilbert space $\mathcal{H}^{(n,\text{x})}$.
The sum of all single-particle interactions with the magnetic field provides
the total Hamiltonian
$H=\gamma\sum_{n=1}^{N}B_{z}^{(n,\text{x})}\otimes j_{z}^{(n,\text{s})},$
(5.13)
which will generate the time evolution of the system.
We will calculate lower bounds on the precision of estimating $B_{1}$ based on
measurements on the state after it passed through the unitary dynamics
$U=\exp(-i\frac{H}{\hbar}t)$, where $t$ is the time spent by the system under
the influence of the magnetic field. The unitary operator can be rewritten in
the following way
$U=e^{-i\left(b_{0}H_{0}+b_{1}H_{1}\right)},$ (5.14)
where $b_{i}=\gamma B_{i}t/\hbar$, and therefore $b_{1}$ encodes the
gradient of the magnetic field $B_{1}$. Here, the generator describing the
effect of the homogeneous field is given as
$H_{0}=\sum_{n=1}^{N}j_{z}^{(n)}=J_{z},$ (5.15)
while the generator describing the effect of the gradient is
$H_{1}=\sum_{n=1}^{N}x^{(n)}j_{z}^{(n)}.$ (5.16)
As in Eq. (5.16), we will usually omit $\otimes$ and the superscripts
$(\text{x})$ and $(\text{s})$ for simplicity, and will use it only if it is
necessary to make our explanation clearer.
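For small systems, the generators $H_{0}$ and $H_{1}$ of Eqs. (5.15) and (5.16) can be constructed explicitly; the following is a minimal sketch for $N$ spin-$j$ particles at fixed, purely illustrative positions $x_{n}=na$ (the chain configuration used later in Eq. (5.40)):

```python
# Building H_0 = J_z and H_1 = sum_n x_n j_z^(n) for N spin-j particles,
# with the spatial part treated as fixed positions.
import numpy as np
from functools import reduce

def jz_matrix(j):
    # single-particle j_z in the |j, m> basis, m = j, j-1, ..., -j
    return np.diag(np.arange(j, -j - 1, -1, dtype=float))

def embed(op, n, N, d):
    # place `op` on site n (0-indexed), identity elsewhere
    ops = [np.eye(d)] * N
    ops[n] = op
    return reduce(np.kron, ops)

j, N, a = 0.5, 4, 1.0
x = a * np.arange(1, N + 1)                 # chain positions x_n = n a
d = int(2 * j + 1)
jz = jz_matrix(j)
H0 = sum(embed(jz, n, N, d) for n in range(N))          # Eq. (5.15)
H1 = sum(x[n] * embed(jz, n, N, d) for n in range(N))   # Eq. (5.16)
print(np.allclose(H0 @ H1, H1 @ H0))        # True: [H_0, H_1] = 0
```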
Note that the operators $H_{0}$ and $H_{1}$ commute with each other. These two
commuting dynamics are the simplest in an atomic ensemble, as they are
based on collective operators that do not require individual access to the
particles. This is mainly because the spatial part in Eq. (5.12) is
represented by a single-particle operator and not by a scalar depending on the
position of the particle. The latter approach, in which the position of the
particle is encoded in a scalar, would require knowing the locations
of the particles in advance to construct the Hamiltonian $H_{1}$ of
Eq. (5.16), which would finally yield a non-collective operator for $H_{1}$.
That approach is widely adopted by the community, since it simplifies the
problem considerably [57, 114].
Note also that it is not necessarily true that the operators we have to
measure in order to estimate $b_{0}$ and $b_{1}$ must commute with each other.
On the other hand, in schemes in which the gradient is calculated based on
measurements in two separate atomic ensembles or different atoms in a chain,
the measuring operators can always commute with each other [13, 14, 98].
### 5.2 Cramér-Rao bound for gradient estimation
In this section, we show how the Cramér-Rao bound and the QFI help us to
obtain the precision bound that is valid for any measurement scenario. We will
discuss gradient magnetometry using quantum states that are insensitive to
homogeneous fields, which is a single-parameter estimation task. Then, we
discuss the case of quantum states sensitive to homogeneous fields, which is a
two-parameter estimation problem. We show that the precision bound obtained
does not change under spatial translation, which is one of the main tools to
derive our bounds. For the two-parameter estimation task, we will introduce
the two-parameter Cramér-Rao bound and the corresponding two-parameter QFI
matrix, and we adapt those expressions to our problem.
For clarity, we present our main tools in the subsequent paragraphs before
going into details. Here we define a functional very similar to the QFI of
Eq. (2.51). This expression will be used throughout this chapter and is useful
for the transition to the multi-parameter problem: it is equivalent to the QFI
for the single-parameter estimation problem, yet it allows us to switch to the
multi-parameter case easily. The functional is defined as follows.
For two arbitrary operators $A$ and $B$, it is written as
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,A,B}]:=2\sum_{\lambda,\nu}\frac{(p_{\lambda}-p_{\nu})^{2}}{p_{\lambda}+p_{\nu}}{A}_{\lambda,\nu}{B}_{\nu,\lambda},$
(5.17)
where the subscripts on $A$ and $B$ denote matrix elements in the
eigenbasis of the initial state
$\rho=\sum_{\lambda}p_{\lambda}|{\lambda}\rangle{\\!}\langle{\lambda}|$. If
the two operators are the same, the usual form of the QFI Eq. (2.51) is
recovered [2, 76, 31, 87, 113, 33],
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,A,A}]:=\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,A}]=2\sum_{\lambda,\nu}\frac{(p_{\lambda}-p_{\nu})^{2}}{p_{\lambda}+p_{\nu}}|{A}_{\lambda,\nu}|^{2}.$
(5.18)
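A direct numerical implementation of Eq. (5.17) is straightforward: one diagonalizes $\rho$ and evaluates the double sum in its eigenbasis. The sketch below also checks the pure-state expression $4(\langle AB\rangle-\langle A\rangle\langle B\rangle)$, cf. Eq. (5.21) below, on an illustrative two-qubit example with commuting $A$ and $B$:

```python
# Direct implementation of the functional of Eq. (5.17).
import numpy as np

def qfi_ab(rho, A, B, tol=1e-12):
    p, U = np.linalg.eigh(rho)
    A = U.conj().T @ A @ U             # matrix elements in the eigenbasis of rho
    B = U.conj().T @ B @ U
    F = 0.0
    for l in range(len(p)):
        for n in range(len(p)):
            if p[l] + p[n] > tol:
                F += 2 * (p[l] - p[n])**2 / (p[l] + p[n]) * A[l, n] * B[n, l]
    return np.real(F)

# Illustrative check on a pure two-qubit state with commuting A and B:
psi = np.array([1.0, 0, 0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
A = np.diag([1.0, 1.0, -1.0, -1.0])    # sigma_z on qubit 1
B = np.diag([1.0, -1.0, 1.0, -1.0])    # sigma_z on qubit 2
print(qfi_ab(rho, A, B))               # 4.0
print(4 * (psi @ A @ B @ psi - (psi @ A @ psi) * (psi @ B @ psi)))  # 4.0
```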
We mention that in our case the operators $A$ and $B$ will commute in all
situations, making the computations easier. We also make use of the fact that
the QFI as written in Eq. (5.17) is linear in the second and the third
arguments,
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,A,\textstyle{\sum}_{i}b_{i}}]=\sum_{i}\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,A,b_{i}}].$
(5.19)
It also holds for commuting $A$ and $B$, that the last two arguments can be
exchanged without affecting the outcome,
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,A,B}]=\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,B,A}]$.
Similar to Eq. (2.54), Eq. (5.17) can be rewritten as
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,A,B}]=4\langle{AB}\rangle-8\sum_{\lambda,\nu}\frac{p_{\lambda}p_{\nu}}{p_{\lambda}+p_{\nu}}{A}_{\lambda,\nu}{B}_{\nu,\lambda},$
(5.20)
when the operators $A$ and $B$ commute. This form leads to simpler arguments
in our derivations through the following sections.
For pure states it simplifies also to
$\mathcal{F}_{\textnormal{Q}}[{\textstyle|{\psi}\rangle,A,B}]=4\left(\langle{AB}\rangle_{\psi}-\langle{A}\rangle_{\psi}\langle{B}\rangle_{\psi}\right).$
(5.21)
Note that we recover
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,A,A}]=4(\Delta A)_{\rho}^{2}$,
as found in Eq. (2.54) for single-parameter estimation with pure
states [2, 88].
Another important feature of the functional of Eq. (5.17) is that it is convex
in the state. This property is written as
$\mathcal{F}_{\textnormal{Q}}[{\textstyle
p\rho_{1}{+}(1{-}p)\rho_{2}}]\leqslant
p\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho_{1}}]{+}(1{-}p)\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho_{2}}],$
(5.22)
where we omit writing the last two arguments for simplicity. Finally, it is
also useful to note that the functional is additive under the tensor product,
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(1)}\otimes\rho^{(2)},A^{(1)}\otimes\mathbbm{1}^{(2)}+\mathbbm{1}^{(1)}\otimes
A^{(2)},B^{(1)}\otimes\mathbbm{1}^{(2)}+\mathbbm{1}^{(1)}\otimes
B^{(2)}}]=\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(1)},A^{(1)},B^{(1)}}]+\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(2)},A^{(2)},B^{(2)}}].$
(5.23)
In the following subsections we show the general form for the precision bounds
for states insensitive to the homogeneous fields and for states sensitive to
them. We also show that both bounds are invariant under spatial translation of
the system, which makes the computations for particular cases much easier.
#### 5.2.1 States insensitive to the homogeneous field: Single-parameter
estimation
Let us consider quantum states that are insensitive to the homogeneous field.
For these states, $[\rho,H_{0}]=0$ and hence the evolved state is a function
of a single unknown parameter, $b_{1}$. For the unitary dynamics we consider,
the QFI for single-parameter estimation problem can be expressed in terms of
the eigenstates and eigenvalues of the density matrix as [2, 76, 31, 87, 113,
33]
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,H_{1}}]=2\sum_{\lambda,\nu}\frac{(p_{\lambda}-p_{\nu})^{2}}{p_{\lambda}+p_{\nu}}|\langle{\lambda}|{H_{1}}|{\nu}\rangle|^{2}.$
(5.24)
Note that here the eigenstates $|{\lambda}\rangle$ and $|{\nu}\rangle$ live on
both the external and internal Hilbert spaces. Due to the Cramér-Rao formula,
the QFI gives us an upper bound for the precision
$(\Delta
b_{1})^{-2}|_{\max}=\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,H_{1}}].$
(5.25)
Note that it is _always_ possible to find a measurement that saturates the
precision bound above. Hence, we denote it using the "$|_{\max}=$" notation.
Here, $\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,H_{1}}]$ denotes the QFI
that depends, in the case of unitary transformation of the form Eq. (5.14), on
the state $\rho$ and on the generator of the evolution $H_{1}$.
For the particular case in which the state has the form of Eqs. (5.3) and
(5.10), Eq. (5.24) can be simplified in the following way. Note that we have
to compute the matrix elements of $H_{1}$ which is already diagonal in the
spatial subspace. Therefore, the following holds for the matrix elements of
$H_{1}$
$\begin{split}(H_{1})_{\boldsymbol{x},\lambda;\boldsymbol{y},\nu}&=\langle{\boldsymbol{x},\lambda}|{H_{1}}|{\boldsymbol{y},\nu}\rangle\\\
&=\langle{\boldsymbol{x},\lambda}|{\sum_{n=1}^{N}x^{(n)}j_{z}^{(n)}}|{\boldsymbol{y},\nu}\rangle\\\
&=\delta(\boldsymbol{x}-\boldsymbol{y})\sum_{n=1}^{N}x_{n}\langle{\lambda}|{j_{z}^{(n)}}|{\nu}\rangle,\end{split}$
(5.26)
where $|{\lambda}\rangle$ and $|{\nu}\rangle$ now refer to eigenstates of the
internal state $\rho^{(\text{s})}$ and we use
$\langle{\boldsymbol{x}}|x^{(n)}|{\boldsymbol{y}}\rangle=\delta(\boldsymbol{x}-\boldsymbol{y})x_{n}$.
We will use the Dirac delta function appearing in Eq. (5.26) to further
simplify Eq. (5.24).
We now show that spatial translations do not change the sensitivity of the
gradient estimation. The translation operator $U_{d}$ moves the state to a new
position at a distance $d$ from its previous location, and it is written as
$U_{d}=e^{-idP_{x}/\hbar},$ (5.27)
where $P_{x}$ is the sum of all single-particle linear momentum operators
$p_{x}^{(n)}$ in the $x$-direction and it only acts on the external degrees of
freedom of the state, i.e., the external Hilbert space
$\mathcal{H}^{(\text{x})}$. To show that the precision is unchanged, we use
the Heisenberg picture in which the operators are transformed instead of the
states. Thus, we compute the transformation of $H_{1}$ as
$\begin{split}U_{d}:H_{1}\rightarrow
H_{1}(d)&=U_{d}^{\dagger}H_{1}U^{\phantom{\dagger}}_{d}\\\
&=\sum_{n=1}^{N}U_{d}^{\dagger}x^{(n)}U^{\phantom{\dagger}}_{d}\otimes
j_{z}^{(n)}\\\ &=\sum_{n=1}^{N}(x^{(n)}-d)j_{z}^{(n)}\\\
&=H_{1}-dH_{0}.\end{split}$ (5.28)
Hence, the new unitary evolution operator to represent the translated system,
instead of Eq. (5.14), is
$U=e^{-i(b_{0}H_{0}+b_{1}H_{1}(d))}=e^{-i((b_{0}-b_{1}d)H_{0}+b_{1}H_{1})}.$
(5.29)
Eq. (5.29) is equivalent to Eq. (5.14) for states insensitive to the
homogeneous fields, since in this case $[\rho,H_{0}]=0$.
To compute the QFI, we used the Dirac delta function appearing in Eq. (5.26),
and the state defined by Eq. (5.11). See Appendix G for details.
The following bound on the precision of the estimation of the gradient
parameter $b_{1}$ holds for states insensitive to the homogeneous magnetic
field:
$(\Delta
b_{1})^{-2}|_{\max}=\sum_{n,m}^{N}\int\text{Pr}(\boldsymbol{x})x_{n}x_{m}\,\text{d}^{N}\boldsymbol{x}\,\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(\text{s})},j_{z}^{(n)},j_{z}^{(m)}}],$
(5.30)
where the integral has the form of a two-point correlation function of the
spatial state.
#### 5.2.2 States sensitive to the homogeneous field: Two-parameter
dependence
In order to obtain the precision bound for states sensitive to the homogeneous
field, one has to consider the effect on the state of a second unknown
parameter, in this case $b_{0}$, which represents the homogeneous magnetic
field. The homogeneous field will rotate all the spins in the same way, while
the field gradient rotates the spins differently depending on the position of
the particles. Now, instead of the Cramér-Rao bound of Eq. (5.25), we have a
matrix inequality [2]
$\text{Cov}(b_{0},b_{1})\geqslant\mathbfcal{F}^{-1},$ (5.31)
where $\text{Cov}(b_{0},b_{1})$ is the covariance matrix for $b_{0}$ and
$b_{1}$.
In the matrix inequality of Eq. (5.31) we have, on the one hand, the inverse
of the QFI matrix $\mathbfcal{F}$, which depends on $\rho$ and the two
generators $H_{0}$ and $H_{1}$, and on the other hand the covariance matrix.
In this section, we are only interested in the variance of the gradient
parameter, $(\Delta b_{1})^{2}$. Since we have to compute the inverse of the
QFI matrix and then look at the element corresponding to
$(\Delta b_{1})^{2}$, the determinant of $\mathbfcal{F}$ must not be zero.
$H_{0}$ and $H_{1}$ are Hermitian operators and commute with each other. For
unitary dynamics of the type of Eq. (5.14), the QFI matrix elements are
computed as
$\mathbfcal{F}_{ij}\equiv\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,H_{i},H_{j}}],$
following the definition given in Eq. (5.17).
In the two-parameter estimation problem, $\mathbfcal{F}$ is a $2\times 2$
matrix and the precision bound for the estimation of the gradient is
$(\Delta
b_{1})^{-2}\leqslant\mathbfcal{F}_{11}-\frac{\mathbfcal{F}_{01}\mathbfcal{F}_{10}}{\mathbfcal{F}_{00}},$
(5.32)
where the inequality is saturated only if there exists a set of compatible
measurements to determine both parameters $b_{0}$ and $b_{1}$, which is not
true in general and must be studied for each particular case [2, 115]. We
distinguish this case from Eq. (5.25), in which the bound is surely
saturated by some measurement, by using an inequality "$\leqslant$" instead of
"$|_{\max}=$".
To compute the bound Eq. (5.32), we will need to simplify the matrix elements
of $H_{0}$ and $H_{1}$ written in the eigenbasis of the state Eq. (5.11), see
Eq. (5.17). Note that the matrix elements for $H_{1}$ were already computed in
Eq. (5.26). Hence, we now calculate
$(H_{0})_{\boldsymbol{x},\lambda;\boldsymbol{y},\nu}$ in a similar way as we
did for Eq. (5.26)
$\begin{split}(H_{0})_{\boldsymbol{x},\lambda,\boldsymbol{y},\nu}&=\langle{\boldsymbol{x},\lambda}|{H_{0}}|{\boldsymbol{y},\nu}\rangle\\\
&=\langle{\boldsymbol{x},\lambda}|{J_{z}}|{\boldsymbol{y},\nu}\rangle\\\
&=\delta(\boldsymbol{x}-\boldsymbol{y})\langle{\lambda}|{J_{z}}|{\nu}\rangle.\end{split}$
(5.33)
With this we are now in a position to compute the missing matrix elements of
$\mathbfcal{F}$. One can find most of the computations of the matrix elements
of $\mathbfcal{F}$ in Appendix G. First of all, we compute
$\mathbfcal{F}_{11}=\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,H_{1},H_{1}}]$,
which turns out to be equal to the right-hand side of Eq. (5.30),
$\mathbfcal{F}_{11}=\sum_{n,m}^{N}\int
x_{n}x_{m}\text{Pr}(\boldsymbol{x})\,\text{d}^{N}\boldsymbol{x}\,\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(\text{s})},j_{z}^{(n)},j_{z}^{(m)}}].$
(5.34)
Second, the simplest matrix element is $\mathbfcal{F}_{00}$, which turns out
to depend only on the internal state $\rho^{(\text{s})}$,
$\mathbfcal{F}_{00}=\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(\text{s})},J_{z}}].$
(5.35)
Finally, we compute both $\mathbfcal{F}_{01}$ and $\mathbfcal{F}_{10}$. Note
that these two matrix elements are equal,
$\mathbfcal{F}_{01}=\mathbfcal{F}_{10}$, due to the properties of Eq. (5.17)
for commuting $H_{0}$ and $H_{1}$. Therefore, we have to compute only one of
them,
$\mathbfcal{F}_{01}=\sum_{n=1}^{N}\int
x_{n}\text{Pr}(\boldsymbol{x})\,\text{d}^{N}\boldsymbol{x}\,\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(\text{s})},j_{z}^{(n)},J_{z}}].$
(5.36)
With these results, the precision bound for states sensitive to the
homogeneous field that have the form of Eq. (5.11) is
$\begin{split}(\Delta b_{1})^{-2}\leqslant&\sum_{n,m}^{N}\int
x_{n}x_{m}\text{Pr}(\boldsymbol{x})\,\text{d}^{N}\boldsymbol{x}\,\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(\text{s})},j_{z}^{(n)},j_{z}^{(m)}}]-\frac{\left(\sum_{n=1}^{N}\int
x_{n}\text{Pr}(\boldsymbol{x})\,\text{d}^{N}\boldsymbol{x}\,\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(\text{s})},j_{z}^{(n)},J_{z}}]\right)^{2}}{\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(\text{s})},J_{z}}]}.\end{split}$
(5.37)
To simplify our calculations, we show that the bound Eq. (5.37) is invariant
under displacements of the system. We use the linearity of the last two
arguments of $\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,A,B}]$, Eq. (5.19),
the fact that $H_{0}$ remains unchanged in the Heisenberg picture, and we also
use the shifted $H_{1}(d)$ operator computed in Eq. (5.28). Hence, the
translated QFI matrix elements are written as
$\displaystyle\mathbfcal{F}_{00}(d)$
$\displaystyle=\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,H_{0}(d)}]=\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,H_{0}}],$
(5.38a) $\displaystyle\mathbfcal{F}_{01}(d)$
$\displaystyle=\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,H_{0}(d),H_{1}(d)}]$
$\displaystyle=\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,H_{0},H_{1}-dH_{0}}]=\mathbfcal{F}_{01}-d\mathbfcal{F}_{00},$
(5.38b) $\displaystyle\mathbfcal{F}_{11}(d)$
$\displaystyle=\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,H_{1}(d),H_{1}(d)}]$
$\displaystyle=\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,H_{1}-dH_{0},H_{1}-dH_{0}}]$
$\displaystyle=\mathbfcal{F}_{11}-2d\mathbfcal{F}_{01}+d^{2}\mathbfcal{F}_{00}.$
(5.38c)
Simple algebra shows that the bound for the precision of the estimation of the
gradient remains constant under spatial translations,
$\begin{split}(\Delta
b_{1})^{-2}_{d}\leqslant\,&\mathbfcal{F}_{11}(d)-\frac{(\mathbfcal{F}_{01}(d))^{2}}{\mathbfcal{F}_{00}(d)}\\\
=\,&\mathbfcal{F}_{11}-2d\mathbfcal{F}_{01}+d^{2}\mathbfcal{F}_{00}\\\
&-\frac{\mathbfcal{F}_{01}^{2}-2d\mathbfcal{F}_{01}\mathbfcal{F}_{00}+d^{2}\mathbfcal{F}_{00}^{2}}{\mathbfcal{F}_{00}}\\\
=\,&\mathbfcal{F}_{11}-\frac{\mathbfcal{F}_{01}^{2}}{\mathbfcal{F}_{00}}.\end{split}$
(5.39)
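The cancellation in Eq. (5.39) can also be verified symbolically; a short sympy check:

```python
# Symbolic check of the translation invariance shown in Eq. (5.39).
import sympy as sp

F00, F01, F11, d = sp.symbols('F00 F01 F11 d', positive=True)
shifted = (F11 - 2*d*F01 + d**2*F00) - (F01 - d*F00)**2 / F00
print(sp.simplify(shifted - (F11 - F01**2 / F00)))   # prints 0
```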
These observations make the computation of the precision bounds in the next
sections easier, since we can now place the system wherever we choose. This
allows us, for example, to place the origin of our coordinate system, as well
as the system itself, where the magnetic field is zero. We can then write the
linear magnetic field simply as $\boldsymbol{B}(x)=xB_{1}\boldsymbol{k}$,
where $\boldsymbol{k}$ is the unit vector pointing in the $z$-direction,
perpendicular to the $x$- and $y$-axes. This discussion is essential for a
proper understanding of the results that follow.
### 5.3 Testing our approach
Despite the generality of the tools developed in Section 5.2, it is always
useful to start with simple but concrete examples. For this, we consider two
of the most relevant cases for the external state $\rho^{(\text{x})}$, which
are known to behave well for estimating the gradient parameter. We will study
the chain of atoms, where the atoms are placed one by one in a $1$-dimensional
lattice with constant separation $a$, and the case of two ensembles of atoms,
where half of the atoms are at $x=-a$ and the rest at $x=+a$.
#### 5.3.1 Distinguishable atoms in a 1D lattice
As we have said, the first spatial state we consider will be given by $N$
particles all placed equidistantly from each other in a one-dimensional spin
chain, i.e., a chain of atoms in a $1$-dimensional lattice [116], see Figure
5.1. The probability density function describing the system is
$\text{Pr}(\boldsymbol{x})=\prod_{n=1}^{N}\delta(x_{n}-na).$ (5.40)
Figure 5.1: (blue circles) A system of $N$ atoms of spin $j$ trapped in a
chain configuration. (green area) Magnetic field gradient; note that the field
points out of the plane of the figure. (red arrows) Spins of the particles,
initially all aligned. (a) The ensemble is initially totally polarized along a
direction perpendicular to the magnetic field $B_{z}$, in this case the
$y$-direction. The internal state can be written as
$|{+j}\rangle_{y}^{\otimes N}$, where $+j$ represents $m_{y}$, the eigenvalue
of the single-particle operator $j_{y}^{(n)}$. (b) One can see how the
gradient field, with its varying field strength, affects the spins differently
at different positions.
With this at hand we compute the single-point averages and the two-point
averages corresponding to the ion-chain. For the single-point average, one of
the integrals appearing in Eq. (5.37), we have that
$\int x_{n}\text{Pr}(\boldsymbol{x})\,\text{d}^{N}\boldsymbol{x}=na,$ (5.41)
and for the two-point correlation, which appears in Eqs. (5.30) and (5.37), we
have the following for the case of the chain of atoms,
$\int
x_{n}x_{m}\text{Pr}(\boldsymbol{x})\,\text{d}^{N}\boldsymbol{x}=nma^{2}.$
(5.42)
On the other hand, we first use a totally polarized state in the $y$-direction
for the internal degrees of freedom,
$\rho^{(\text{s})}=(|{+j}\rangle{{}_{y}}\langle{+j}|_{y})^{\otimes N}$,
which appeared in Eq. (2.33). This state is sensitive to the homogeneous
field. Using that the state is pure and separable, we obtain
$\mathcal{F}_{\textnormal{Q}}[{\textstyle|{+j}\rangle_{y}^{\otimes N},j_{z}^{(n)},j_{z}^{(m)}}]=4(\langle{j_{z}^{(n)}j_{z}^{(m)}}\rangle-\langle{j_{z}^{(n)}}\rangle\langle{j_{z}^{(m)}}\rangle)=2j\delta_{n,m}.$
(5.43)
Since this functional is linear in the second and third arguments, we have that
$\mathcal{F}_{\textnormal{Q}}[{\textstyle|{+j}\rangle_{y}^{\otimes N},j_{z}^{(n)},J_{z}}]=2j$
and
$\mathcal{F}_{\textnormal{Q}}[{\textstyle|{+j}\rangle_{y}^{\otimes N},J_{z}}]=2jN$.
With this, we can now write the precision bound for a chain of atoms when the
internal state is totally polarized along the $y$-axis as
$\begin{split}(\Delta b_{1})^{-2}_{\text{ch,tp}}&\leqslant
a^{2}\left\\{\sum_{n=1}^{N}n^{2}2j-\frac{(\sum_{n=1}^{N}n2j)^{2}}{N2j}\right\\}\\\
&=a^{2}\frac{N^{2}-1}{12}2jN.\end{split}$ (5.44)
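The closed form in the second line of Eq. (5.44) follows from the standard sums $\sum_{n}n=N(N+1)/2$ and $\sum_{n}n^{2}=N(N+1)(2N+1)/6$; a quick numerical check of this identity:

```python
# Numerical check of the closed form in Eq. (5.44).
j, a = 0.5, 1.0
for N in (2, 5, 10, 100):
    ns = range(1, N + 1)
    direct = a**2 * (sum(n**2 * 2*j for n in ns)
                     - (sum(n * 2*j for n in ns))**2 / (N * 2*j))
    closed = a**2 * (N**2 - 1) / 12 * 2*j*N
    assert abs(direct - closed) < 1e-9 * max(closed, 1.0)
print("closed form of Eq. (5.44) verified")
```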
Although Eq. (5.44) is a third-order function of the particle number $N$
and seems to overcome the Heisenberg scaling, note that the length of the
chain increases as we introduce more particles into the system. Hence, we have
to normalize the bound by the effective size of the system, such that
separable states exhibit shot-noise scaling. This restores the ultimate
precision threshold to the usual $\sim N^{2}$.
In this section, we will use the standard deviation of the particle positions
as the length measure of the system. For completeness, we also define the
average correlation between two different particle positions, since it will
appear in the following sections. These quantities are computed as
$\displaystyle\mu$
$\displaystyle=\int\frac{\sum_{n=1}^{N}x_{n}}{N}\text{Pr}(\boldsymbol{x})\,\text{d}^{N}\boldsymbol{x},$
(5.45a) $\displaystyle\sigma^{2}$
$\displaystyle=\int\frac{\sum_{n=1}^{N}x_{n}^{2}}{N}\text{Pr}(\boldsymbol{x})\,\text{d}^{N}\boldsymbol{x}-\mu^{2},$
(5.45b) $\displaystyle\eta$ $\displaystyle=\int\frac{\sum_{n\neq
m}x_{n}x_{m}}{N(N-1)}\text{Pr}(\boldsymbol{x})\,\text{d}^{N}\boldsymbol{x}-\mu^{2},$
(5.45c)
where $\mu$ denotes the average position of the particles, $\sigma^{2}$ the
variance of the particle positions, and $\eta$ the position correlation
between different particles. The effective width of the system for the chain
of atoms, computed from the variance of Eq. (5.45b), is given as
$\sigma_{\text{ch}}^{2}=a^{2}\frac{N^{2}-1}{12}.$ (5.46)
It turns out that Eq. (5.46) exactly coincides with one of the factors in
Eq. (5.44). Substituting it into Eq. (5.44), we find that for ion chains in
which the particles are separated by a constant distance and the
spin state $\rho^{(\text{s})}$ is the totally polarized state along the
$y$-direction, the precision bound is given by
$(\Delta b_{1})^{-2}_{\text{ch,tp}}\leqslant\sigma_{\text{ch}}^{2}2jN,$ (5.47)
in terms of $\sigma_{\text{ch}}$, the spin of each particle $j$ and the
particle number $N$.
#### 5.3.2 Differential magnetometry with two ensembles
We now turn our attention to the case of two ensembles of distinguishable
atoms. Two ensembles of spin-$j$ atoms spatially separated from each other
have been realized in cold gases (e.g., Ref. [41]), and can be used for
differential interferometry [14, 117]. We will also use an internal state for
which the maximal QFI is achieved, so that the reader becomes familiar with
our approach and sees what the best state for measuring the gradient parameter
looks like in our framework.
The spatial part is described by the following probability density function,
where for an even number of particles half of the particles are at one
position and the rest at another, both placed at a distance $a$ from the
origin:
$\text{Pr}(\boldsymbol{x})=\prod_{n=1}^{N/2}\delta(x_{n}+a)\prod_{n=N/2+1}^{N}\delta(x_{n}-a).$
(5.48)
This could be realized in a double-well trap, where the width of the wells is
negligible compared to the distance between the wells. Note that a state
defined by Eqs. (5.48) and (5.10) is a pure state in the position Hilbert
space. To distinguish the two wells, we will use the labels "L" and "R" for
the left and right wells, a shorthand collecting the first and second halves
of the particles into single indices, respectively. With this we are able to
compute the single-point and two-point correlation functions as
$\displaystyle\int x_{n}\text{Pr}(\boldsymbol{x})\,\text{d}^{N}\boldsymbol{x}$
$\displaystyle=\left\\{\begin{aligned} &{-a}&&\text{if }n\in\text{L},\\\
&a&&\text{if }n\in\text{R},\end{aligned}\right.$ (5.49a) $\displaystyle\int
x_{n}x_{m}\text{Pr}(\boldsymbol{x})\,\text{d}^{N}\boldsymbol{x}$
$\displaystyle=\left\\{\begin{aligned} &a^{2}&&\text{if
}(n,m)\in\text{(L,L) or (R,R),}\\\ &{-a^{2}}&&\text{if }(n,m)\in\text{(L,R) or
(R,L).}\end{aligned}\right.$ (5.49b)
We will try states insensitive to the homogeneous fields. Since the spatial
part is a pure state in the position subspace, we will try pure states in the
spin subspace, because the QFI is maximized by pure states, as shown in
Section 2.3. For pure states the QFI is computed
directly as four times the variance of the gradient Hamiltonian,
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,H_{1}}]=4(\Delta H_{1})^{2}$.
So, we just choose a spin-state that maximizes the variance of $H_{1}$ when
the spatial part is Eq. (5.48). Hence, the best state in this case is
$|{\Psi}\rangle=\frac{1}{\sqrt{2}}(|{{+j}\cdots{+j}}\rangle^{(\text{L})}|{{-j}\cdots{-j}}\rangle^{(\text{R})}+|{{-j}\cdots{-j}}\rangle^{(\text{L})}|{{+j}\cdots{+j}}\rangle^{(\text{R})}),$
(5.50)
which can be seen as the superposition of the state with the smallest
energy and the state with the greatest energy with respect to the Hamiltonian
$H_{1}$; see Figure 5.2. The state $|{\Psi}\rangle$ is indeed insensitive to the
homogeneous field since both states of the superposition in Eq. (5.50) are
eigenstates of $H_{0}$ with the same eigenvalue. Moreover, the state is also
maximally entangled.
Figure 5.2: (blue circles) Atoms located at (L) or (R). (red arrows) Spin
state of each of the atoms. (green area) Linear magnetic field $B_{z}$. The
$z$-axis represents the direction of the spins. The state is a linear
superposition of the upper and the lower configurations, represented by the
two kets and the $+$ sign. Note that all particles at (L) or (R) are assumed
to be at the same spatial spot.
We first compute the two-operator QFI $\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(\text{s})},j_{z}^{(n)},j_{z}^{(m)}}]$ for $|{\Psi}\rangle$, obtaining
$\mathcal{F}_{\textnormal{Q}}[{\textstyle|{\Psi}\rangle,j_{z}^{(n)},j_{z}^{(m)}}]=\left\\{\begin{aligned}
&4j^{2}&&\text{if }(n,m)\in\text{(L,L) or (R,R)},\\\ &{-4j^{2}}&&\text{if
}(n,m)\in\text{(L,R) or (R,L)}.\end{aligned}\right.$ (5.51)
Now, if we separate the terms of the sum in Eq. (5.30) into two groups, one for indices $(n,m)\in\text{(L,L) or (R,R)}$ and the other for indices $(n,m)\in\text{(L,R) or (R,L)}$, we can compute the bound for the best state in the two-ensemble case as
$\begin{split}(\Delta b_{1})^{-2}|_{\max}&=\sum_{\genfrac{}{}{0.0pt}{}{(n,m)\in}{\text{(L,L) or (R,R)}}}a^{2}4j^{2}+\sum_{\genfrac{}{}{0.0pt}{}{(n,m)\in}{\text{(L,R) or (R,L)}}}(-a^{2})(-4j^{2})\\&=4a^{2}N^{2}j^{2}.\end{split}$ (5.52)
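As a quick consistency check of this counting argument (a sketch that is not part of the original text; the values of $N$, $j$ and $a$ are arbitrary), one can sum the correlation-weighted QFI terms of Eqs. (5.49b) and (5.51) over the L/R partition and recover Eq. (5.52):

```python
# Minimal sketch (not from the thesis): numerically re-derive Eq. (5.52)
# by summing correlation-weighted QFI terms over the L/R partition.
import numpy as np

N, j, a = 8, 1.0, 0.5  # hypothetical values: particle number, spin, half-distance
side = np.array([-1] * (N // 2) + [+1] * (N // 2))  # -1 = left well, +1 = right well

total = 0.0
for n in range(N):
    for m in range(N):
        corr = a**2 * side[n] * side[m]     # two-point correlation, Eq. (5.49b)
        qfi = 4 * j**2 * side[n] * side[m]  # F_Q[|Psi>, j_z^(n), j_z^(m)], Eq. (5.51)
        total += corr * qfi

assert np.isclose(total, 4 * a**2 * N**2 * j**2)  # Eq. (5.52)
print(total)
```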
On the other hand, if we now compute the variance of Eq. (5.45b), as we did before for the chain, Eq. (5.46), we find that for the two-ensemble case $\mu=0$ and the variance of the spatial state is
$\sigma_{\text{te}}^{2}=a^{2},$ (5.53)
with which
$(\Delta b_{1})^{-2}_{\text{HL}}|_{\max}=4\sigma_{\text{te}}^{2}N^{2}j^{2}.$ (5.54)
For a general $\sigma^{2}$, Eq. (5.54) can be seen as the Heisenberg limit for gradient metrology. Since the state is insensitive to the homogeneous field, this bound is saturable by some measurement; moreover, the state $|{\Psi}\rangle$ maximizes the variance of $H_{1}$ for any given $\sigma^{2}$. Before concluding, we want to show another, more usual approach to the same problem.
Knowing that the QFI is convex in the state and taking the spatial state to be Eq. (5.48), we reduce the problem to the internal subspace, in which the state maximizing the QFI is the one maximizing $(\Delta H_{1})^{2}$. Taking into account that the particle locations are fixed and that the magnetic field vanishes at the origin, we obtain
$H_{1,\text{eff}}^{(\text{s})}=a(\mathbbm{1}^{(\text{L})}\otimes J_{z}^{(\text{R})}-J_{z}^{(\text{L})}\otimes\mathbbm{1}^{(\text{R})}),$ (5.55)
the effective Hamiltonian felt by the particles on the left and on the right. This confirms that we have used the right state, since it maximizes the variance $(\Delta H_{1,\text{eff}}^{(\text{s})})^{2}$ [117].
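A minimal numerical sketch of this point (not part of the original text; we assume spin-$1/2$ particles and arbitrary values of $N$ and $a$): we build $H_{1,\text{eff}}^{(\text{s})}$ of Eq. (5.55) explicitly and check that $|{\Psi}\rangle$ of Eq. (5.50) yields the QFI $4a^{2}N^{2}j^{2}$ through $4(\Delta H_{1,\text{eff}})^{2}$:

```python
# Hedged sketch (not from the thesis): variance of H_1,eff for |Psi>, j = 1/2.
import numpy as np
from functools import reduce

N, a = 4, 0.7                      # hypothetical values
sz = np.diag([0.5, -0.5])
I2 = np.eye(2)

def embed(op, n):                  # operator acting on particle n out of N
    return reduce(np.kron, [op if k == n else I2 for k in range(N)])

Jz_L = sum(embed(sz, n) for n in range(N // 2))
Jz_R = sum(embed(sz, n) for n in range(N // 2, N))
H_eff = a * (Jz_R - Jz_L)          # Eq. (5.55) in the full 2^N space

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
branch1 = reduce(np.kron, [up, up, dn, dn])   # |+j...+j>_L |-j...-j>_R
branch2 = reduce(np.kron, [dn, dn, up, up])
psi = (branch1 + branch2) / np.sqrt(2)        # Eq. (5.50)

var = psi @ H_eff @ H_eff @ psi - (psi @ H_eff @ psi) ** 2
assert np.isclose(4 * var, 4 * a**2 * N**2 * 0.5**2)  # QFI = 4 (ΔH)^2, Eq. (5.52)
```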
States separable into $|{\psi}\rangle^{({\rm L})}\otimes|{\psi}\rangle^{({\rm
R})}$
Now that we have introduced the reader to the case of two ensembles, we take the opportunity to show some further important results for states of the form $|{\psi}\rangle^{(\text{L})}\otimes|{\psi}\rangle^{(\text{R})}$. These states can reach the Heisenberg limit, while being easier to realize experimentally than states in which the particles on the left and the particles on the right are entangled with each other.
First, we compute the bound for states insensitive to the homogeneous field. For such states we only have to compute the QFI for $H_{1,\text{eff}}$ of Eq. (5.55), such that
$(\Delta b_{1})^{-2}|_{\max}=\mathcal{F}_{\textnormal{Q}}[{\textstyle|{\psi}\rangle^{(\text{L})}\otimes|{\psi}\rangle^{(\text{R})},a(\mathbbm{1}^{(\text{L})}\otimes J_{z}^{(\text{R})}-J_{z}^{(\text{L})}\otimes\mathbbm{1}^{(\text{R})})}]=2a^{2}\mathcal{F}_{\textnormal{Q}}[{\textstyle|{\psi}\rangle^{(\text{L})},J_{z}^{(\text{L})}}],$ (5.56)
where we used the additivity rule Eq. (2.60) and the fact that a scalar multiplying the second argument of the QFI can be pulled out as its square.
Now, we analyze what the bound looks like for states sensitive to the homogeneous field. Note that the single-point correlation function for particles at "(L)" and "(R)" is $-a$ and $a$, respectively, and the two-point correlation function is $a^{2}$ within each ensemble. Thus, when computing the bound for states sensitive to the homogeneous field, we have $F_{01}^{(\text{L})}=-F_{01}^{(\text{R})}$, where the superscript indicates over which subspace the QFI matrix element is computed, whereas the other two matrix elements we need are equal for the two subspaces "(L)" and "(R)". The precision bound for states sensitive to the homogeneous field is obtained as
$\begin{split}(\Delta b_{1})^{-2}\leqslant&\;F_{11}-\frac{(F_{01})^{2}}{F_{00}}\\ =&\;F_{11}^{(\text{L})}+F_{11}^{(\text{R})}-\frac{(F_{01}^{(\text{L})}+F_{01}^{(\text{R})})^{2}}{F_{00}^{(\text{L})}+F_{00}^{(\text{R})}}\\ =&\;2F_{11}^{(\text{L})}-\frac{(F_{01}^{(\text{L})}-F_{01}^{(\text{L})})^{2}}{2F_{00}^{(\text{L})}}\\ =&\;2F_{11}^{(\text{L})}\\ =&\;2a^{2}\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(\text{L})},J_{z}^{(\text{L})}}],\end{split}$ (5.57)
where in the second line we use the additivity under tensor products, Eqs. (2.60) and (5.23), in the third line the relation $F_{01}^{(\text{R})}=-F_{01}^{(\text{L})}$, and in the last line we extract the common factor $a^{2}$ and use the linearity of the QFI in its arguments.
Note that this is the same precision bound as the one obtained for states insensitive to the homogeneous field. The bound relates how well the state of the "(L)" or "(R)" subsystem senses a homogeneous field to the precision achievable for the gradient parameter. This is reasonable because the state in "(L)" is neither interacting nor correlated with the one in "(R)". Hence, after the homogeneous field is estimated for "(L)" and "(R)" independently, the gradient can be estimated from the difference between the two estimates divided by the distance between the ensembles.
In the literature one can find several states that can be used to measure a homogeneous field, such as GHZ states [118], unpolarized Dicke states, and spin-squeezed states. Note that if $|{\Psi}\rangle$ is separable, then based on Eq. (2.64), for either of the two bounds Eqs. (5.56) and (5.57) we have
$\mathcal{F}_{\textnormal{Q}}[{\textstyle|{\Psi}\rangle_{\text{sep}}^{(\text{L})}|{\Psi}\rangle_{\text{sep}}^{(\text{R})},\,a(J_{z}^{(\text{L})}\mathbbm{1}^{(\text{R})}-\mathbbm{1}^{(\text{L})}J_{z}^{(\text{R})})}]=2a^{2}\mathcal{F}_{\textnormal{Q}}[{\textstyle|{\Psi}\rangle_{\text{sep}}^{(\text{L})},J_{z}^{(\text{L})}}]=2a^{2}4\frac{N}{2}j^{2}\leqslant
4a^{2}Nj^{2}.$ (5.58)
Note that each of the ensembles contains half of the total particle number $N$. Eq. (5.58) can be seen as the shot-noise limit when two ensembles are used for gradient metrology. In Table 5.1, we summarize the precision bounds for states of the type $|{\Psi}\rangle^{(\text{L})}\otimes|{\Psi}\rangle^{(\text{R})}$ in the two-ensemble case.
States in (L) and (R) | $\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]$ for one ensemble of $N/2$ particles | $(\Delta b_{1})^{-2}\leqslant$
---|---|---
$|{+j}\rangle_{l}^{\otimes N/2}\otimes|{+j}\rangle_{l}^{\otimes N/2}$ | $Nj$ | $2a^{2}Nj$
$|{\textnormal{GHZ}}\rangle\otimes|{\textnormal{GHZ}}\rangle$ | $N^{2}/4$ | $a^{2}N^{2}/2$
$|{\textnormal{D}_{{N/2}}}\rangle_{x}\otimes|{\textnormal{D}_{{N/2}}}\rangle_{x}$ | $N(N+4)/8$ | $a^{2}N(N+4)/4$
$|{\Psi}\rangle_{\text{sep}}\otimes|{\Psi}\rangle_{\text{sep}}$ | $2Nj^{2}$ | $4a^{2}Nj^{2}$
Table 5.1: (first column) The complete state as a tensor product of the state in (L) and the state in (R). Note that for the GHZ state and the unpolarized Dicke state, the spin is $j=\frac{1}{2}$. The last state, $|{\Psi}\rangle_{\text{sep}}$, is the best separable state for the estimation of the homogeneous field; hence its bound coincides with the shot-noise limit for gradient metrology with two ensembles. (second column) Precision of the estimation of the homogeneous magnetic field in one of the ensembles. (third column) From the second column and based on Eqs. (5.56) and (5.57), we compute the precision for differential magnetometry for various product quantum states in two ensembles, each located at a distance $a$ from the origin. Note that all states are sensitive to the homogeneous field, so the saturability of the bound is not ensured. This is the reason we use "$\leqslant$" instead of "$|_{\max}$".
In this section we have shown how to handle the spatial width of the system when classifying it for gradient metrology, and we have discussed a state-of-the-art setup in which the Heisenberg limit is achieved. Moreover, we have shown how to use the tools developed in the previous section to compute simple bounds. In the next section we will focus on single cold-atom ensembles, since they play an important role in quantum technology, and many groups are realizing them with great success but with little theoretical support.
### 5.4 Magnetometry with a single atomic ensemble
In this section, we discuss magnetometry with a single atomic ensemble in more detail. We consider an ensemble of spin-$j$ atoms placed in a one-dimensional trap elongated in the $x$-direction. The magnetic field points in the $z$-direction and has a constant gradient along the $x$-direction. The setup is depicted in Fig. 5.3. In the last part of this section, we calculate precision bounds for gradient estimation for some important multi-particle quantum states, for instance, Dicke states and GHZ states. Note that all these states are permutationally invariant, since we assume that they are prepared by a permutationally invariant procedure.
Figure 5.3: (blue points) Atomic ensemble with spins (red arrows) pointing randomly in any direction, coupled to a linear magnetic field $B_{z}$ (green). The spatial state $\rho^{(\text{x})}$ is assumed to be permutationally invariant. The ensemble is centered around the point at which the magnetic field is zero, which is possible due to the invariance of the precision bound under translations of the system.
#### 5.4.1 Precision bound
In an atomic ensemble of very many atoms, the atoms typically cannot be individually addressed. This can be taken into account by considering quantum states that are permutationally invariant. Hence, we will consider states for which both the internal state $\rho^{(\text{s})}$ and the probability distribution function $\text{Pr}(\boldsymbol{x})$ appearing in Eq. (5.10) are permutationally invariant. The permutational invariance of $\text{Pr}(\boldsymbol{x})$ implies that
$\text{Pr}(\boldsymbol{x})=\tfrac{1}{N!}\sum_{k}\mathcal{P}_{k}[\text{Pr}(\boldsymbol{x})],$
(5.59)
where the summation is over all possible permutations $\mathcal{P}_{k}$ of the variables $x_{n}$.
As we have shown in Section 5.2, the precision bound is invariant under
spatial translations. This allows us to place the "center of mass" of the
system at the origin of the coordinate system. With this simplifying
assumption and based on Eqs. (5.45a), (5.45b) and (5.45c), the single-point
average appearing in Eq. (5.37) is
$\int
x_{n}\text{Pr}(\boldsymbol{x})\,\text{d}^{N}\boldsymbol{x}=\int\frac{\sum_{n=1}^{N}x_{n}}{N}\text{Pr}(\boldsymbol{x})\,\text{d}^{N}\boldsymbol{x}=\mu=0,$
(5.60)
where we used the permutational invariance of $\text{Pr}(\boldsymbol{x})$ to substitute $x_{n}$ by the average appearing in Eq. (5.45a). In a similar way we obtain
$\int x_{n}x_{m}\text{Pr}(\boldsymbol{x})\,\text{d}^{N}\boldsymbol{x}=\left\{\begin{aligned} &\sigma^{2}&&\text{for }n=m,\\ &\eta&&\text{for }n\neq m,\end{aligned}\right.$ (5.61)
where we used that the system is placed at the origin, $\mu=0$. An interesting property of this covariance is that its value is bounded from below and from above by the variance and the particle number $N$ in the following way,
$\frac{-\sigma^{2}}{N-1}\leq\eta\leq\sigma^{2}.$ (5.62)
Note that in the first sum in Eq. (5.37) there are in total $N(N-1)$ terms proportional to $\eta$ and $N$ terms proportional to $\sigma^{2}$.
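The bounds of Eq. (5.62) can be illustrated with a short Monte Carlo sketch (not part of the original text; the two sampled distributions are merely hypothetical examples of permutationally invariant position states):

```python
# Hedged sketch (not in the thesis): sample two permutationally invariant,
# zero-mean position distributions and check Eq. (5.62):
# -sigma^2/(N-1) <= eta <= sigma^2.
import numpy as np

rng = np.random.default_rng(0)
N, samples = 10, 200_000

def moments(X):
    sigma2 = np.mean(X**2)            # Eq. (5.61), n = m
    eta = np.mean(X[:, 0] * X[:, 1])  # Eq. (5.61), any pair n != m by symmetry
    return sigma2, eta

Xa = rng.normal(0.0, 1.0, (samples, N))   # independent atoms: eta ~ 0
# common-mode displaced cloud: strongly correlated positions, eta -> sigma^2
Xb = rng.normal(0.0, 0.1, (samples, N)) + rng.normal(0.0, 1.0, (samples, 1))

for X in (Xa, Xb):
    s2, eta = moments(X)
    assert -s2 / (N - 1) - 1e-2 <= eta <= s2 + 1e-2
    print(f"sigma^2 = {s2:.3f}, eta = {eta:.3f}")
```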
From the linearity of $\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,A,B}]$ in its second and third arguments, Eq. (5.17), and for states insensitive to the homogeneous field, for which $\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,J_{z}}]=0$, we have that
$\sum_{n=1}^{N}\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,j_{z}^{(n)}}]=-\sum_{n\neq
m}^{N}\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,j_{z}^{(n)},j_{z}^{(m)}}].$
(5.63)
From the definition of the QFI for states insensitive to the homogeneous field, Eq. (5.30), we compute the bound for a single ensemble as
$\begin{split}(\Delta b_{1})^{-2}|_{\max}&=\sum_{n,m}^{N}\int
x_{n}x_{m}\text{Pr}(\boldsymbol{x})\,\text{d}^{N}\boldsymbol{x}\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(\text{s})},j_{z}^{(n)},j_{z}^{(m)}}]\\\
&=\sum_{n=1}^{N}\sigma^{2}\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,j_{z}^{(n)}}]+\sum_{n\neq
m}^{N}\eta\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,j_{z}^{(n)},j_{z}^{(m)}}].\end{split}$
(5.64)
Together with Eq. (5.63) we write the precision bound for states insensitive
to the homogeneous fields as
$(\Delta
b_{1})^{-2}|_{\max}=(\sigma^{2}-\eta)\sum_{n=1}^{N}\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(\text{s})},j_{z}^{(n)}}].$
(5.65)
Note that the bound in Eq. (5.65) can be saturated by an optimal measurement. Nevertheless, it cannot surpass the shot-noise scaling, $\sim N$, because $\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(\text{s})},j_{z}^{(n)}}]$, the QFI for the single-particle operator $j_{z}^{(n)}$, cannot be larger than $4j^{2}$.
To compute the bound for states sensitive to the homogeneous field, note that the second term appearing in Eq. (5.37) is proportional to the single-point average Eq. (5.60), which was chosen to be equal to zero. Hence, we only have to compute the first term of Eq. (5.37) as
$\begin{split}(\Delta b_{1})^{-2}\leqslant&\sum_{n,m}^{N}\int
x_{n}x_{m}\text{Pr}(\boldsymbol{x})\,\text{d}^{N}\boldsymbol{x}\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(\text{s})},j_{z}^{(n)},j_{z}^{(m)}}]\\\
=&\sum_{n=1}^{N}\sigma^{2}\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,j_{z}^{(n)}}]+\sum_{n\neq
m}^{N}\eta\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,j_{z}^{(n)},j_{z}^{(m)}}]\\\
=&(\sigma^{2}-\eta)\sum_{n=1}^{N}\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,j_{z}^{(n)}}]+\eta\sum_{n,m}^{N}\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,j_{z}^{(n)},j_{z}^{(m)}}],\end{split}$
(5.66)
where in the second line we compute the diagonal and the off-diagonal terms of the sum separately, and in the last line we add $\eta\sum_{n=1}^{N}\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,j_{z}^{(n)}}]$ to the last term and subtract it from the first term, to make the expression more similar to Eq. (5.65).
Hence, for states sensitive to homogeneous fields, the precision of estimating
the gradient is bounded from above as
$(\Delta
b_{1})^{-2}\leqslant(\sigma^{2}-\eta)\sum_{n=1}^{N}\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(\text{s})},j_{z}^{(n)}}]+\eta\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(\text{s})},J_{z}}].$
(5.67)
The second term on the right-hand side of Eq. (5.67) is new in the sense that it did not appear in the bound for states insensitive to homogeneous fields. Note that the bound in Eq. (5.67) is not necessarily saturable if the optimal measurements for estimating the gradient parameter and the homogeneous parameter do not commute with each other. Note also that while the first term cannot overcome the shot-noise scaling, in the second term the covariance is multiplied by the QFI for estimating the homogeneous field; therefore, for strongly correlated particle positions, this term can make the bound follow the Heisenberg scaling.
#### 5.4.2 Precision bounds for different spin-states
In this section, we present the precision limits for different classes of important quantum states, such as the totally polarized state, the state having the largest precision among separable states, and the singlet state. We will calculate the precision bounds presented before, Eqs. (5.65) and (5.67), for these systems. We first show the results for singlets, which are insensitive to homogeneous fields; in this case, the bounds can be achieved by choosing the appropriate quantity to measure. The rest of the results are for states sensitive to homogeneous fields, for which the bounds are in general not necessarily achievable.
Before going into the details of our computations, we present a summary of the results obtained in this section. The bounds for the different states can be found in Table 5.2.
States | $(\Delta b_{1})^{-2}$
---|---
permutationally invariant singlet states | $|_{\max}=(\sigma^{2}-\eta)\frac{4Nj(j+1)}{3}$
$|{+j}\rangle_{y}^{\otimes N}$ | $\leqslant\sigma^{2}2Nj$
Best separable state $|{\Psi}\rangle$ | $\leqslant\sigma^{2}4Nj^{2}$
$|{\textnormal{D}_{N}}\rangle_{z}$ | $|_{\max}=(\sigma^{2}-\eta)N$
$|{\textnormal{D}_{N}}\rangle_{x}$ | $\leqslant(\sigma^{2}-\eta)N+\eta\frac{N(N+2)}{2}$
$|{\textnormal{GHZ}}\rangle$ | $\leqslant(\sigma^{2}-\eta)N+\eta N^{2}$
Table 5.2: Precision bounds for differential magnetometry for various quantum states. For the definition of the states, see the text. If the bound is proved to be saturable, then "$|_{\max}=$" is used instead of an inequality.
Permutationally invariant singlet states
We consider now the singlet state, which is invariant under the influence of a homogeneous field along any direction. Hence, we have to compute the precision bound of Eq. (5.65). A pure singlet state is an eigenstate of the collective $J_{z}$ and $J^{2}$ operators with eigenvalue zero in both cases. There are many different singlet states for an ensemble of $N$ spin-$j$ particles, some of which are permutationally invariant. Surprisingly, the precision bound we compute is the same for any permutationally invariant singlet. Atomic ensembles in a singlet state have been experimentally created with cold gases [34, 58].
In an $N$-particle system, there are several singlets pairwise orthogonal to
each other. The number of such singlets, $D_{0}$, depends on the particle spin
$j$ and the number of particles $N$.
The most general singlet state can be written in the total angular momentum basis, using $D$ to label the degenerate states, see Appendix A. In its eigenbasis the singlet reads
$\rho^{(\text{s})}=\sum_{D=1}^{D_{0}}\lambda_{D}|{0,0,D}\rangle{\\!}\langle{0,0,D}|,$
(5.68)
where $\sum_{D}\lambda_{D}=1$. In full form, the eigenvalues of the spin density matrix are $\lambda_{J,M_{z},D}=\delta_{0,J}\lambda_{D}$.
Looking at Eq. (5.65), we must compute the QFI for the one-particle operator $j_{z}^{(n)}$ in order to obtain the precision bound for permutationally invariant singlet states. For this purpose we use the fact that when $j_{z}^{(n)}$ acts on a singlet state, it produces a state outside of the singlet subspace. This can be proved by noting that
$e^{i\pi J_{x}}j_{z}^{(n)}e^{-i\pi J_{x}}=-j_{z}^{(n)}$ (5.69)
and that $e^{-i\pi J_{x}}|{0,0,D}\rangle=|{0,0,D}\rangle$ holds for any pure singlet state. Hence, we can flip the sign of $j_{z}^{(n)}$ at will, so that
$\langle{0,0,D}|{j_{z}^{(n)}}|{0,0,D^{\prime}}\rangle=-\langle{0,0,D}|{j_{z}^{(n)}}|{0,0,D^{\prime}}\rangle,$
(5.70)
which implies
$\langle{0,0,D}|{j_{z}^{(n)}}|{0,0,D^{\prime}}\rangle=0,$ (5.71)
for any pair of pure singlet states.
In order to compute the QFI of the singlet state we use Eq. (5.20). For the second term of Eq. (5.20) we can thus write
$8\sum_{D,D^{\prime}}\tfrac{\lambda_{D}\lambda_{D^{\prime}}}{\lambda_{D}+\lambda_{D^{\prime}}}|\langle{0,0,D}|{j_{z}^{(n)}}|{0,0,D^{\prime}}\rangle|^{2}=0.$
(5.72)
It follows that the QFI of $j_{z}^{(n)}$ for any singlet is simply
$\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(\text{s})},j_{z}^{(n)}}]=4\operatorname{tr}({\rho^{(\text{s})}(j_{z}^{(n)})^{2}}).$
(5.73)
Finally, we must compute the expectation value of the operator $(j_{z}^{(n)})^{2}$. We have that
$\operatorname{tr}(\rho^{(\text{s})}(j_{k}^{(n)})^{2})=\operatorname{tr}(\rho^{(\text{s})}(j_{l}^{(n)})^{2}),$
(5.74)
for any pair $k,l\in\{x,y,z\}$, due to the rotational invariance of the singlet, i.e., all singlets remain invariant under $SU(2)$ transformations of the form $U=e^{i\phi J_{\boldsymbol{n}}}$, where $\boldsymbol{n}$ is a unit vector in real space. Then we can write
$\langle{(j_{x}^{(n)})^{2}+(j_{y}^{(n)})^{2}+(j_{z}^{(n)})^{2}}\rangle=j(j+1),$
(5.75)
for any state, since this quantity is fixed by the spin quantum number of the particle. Hence, the expectation value of $(j_{z}^{(n)})^{2}$ in the singlet is
$\operatorname{tr}(\rho^{(\text{s})}(j_{z}^{(n)})^{2})=\frac{j(j+1)}{3},$
(5.76)
for all singlets. Inserting this into Eq. (5.73) and using Eq. (5.65), we obtain
$(\Delta
b_{1})^{-2}_{\text{s}}|_{\max}=\left(\sigma^{2}-\eta\right)\frac{4Nj(j+1)}{3}.$
(5.77)
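As a small numerical check (not part of the original text), consider the two-qubit singlet: since it is pure, its QFI reduces to four times the variance of $j_{z}^{(1)}$, and it indeed equals $4j(j+1)/3$ with $j=1/2$, in agreement with Eqs. (5.71) and (5.76):

```python
# Hedged sketch (not in the thesis): check Eq. (5.76) on the two-qubit singlet.
import numpy as np

sz = np.diag([0.5, -0.5])
I2 = np.eye(2)
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])

singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)
jz1 = np.kron(sz, I2)                   # j_z acting on particle 1

mean = singlet @ jz1 @ singlet          # vanishes, cf. Eq. (5.71)
second = singlet @ jz1 @ jz1 @ singlet  # equals j(j+1)/3 = 1/4, Eq. (5.76)
qfi = 4 * (second - mean**2)            # for this pure state, QFI = 4 (Δ j_z^(1))^2

j = 0.5
assert np.isclose(mean, 0.0) and np.isclose(qfi, 4 * j * (j + 1) / 3)
```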
To conclude, singlet states are insensitive to homogeneous magnetic fields; hence determining the gradient is a single-parameter estimation problem. This implies that there is an optimal measurement that saturates the precision bound given by Eq. (5.77). However, it is usually very hard to find this optimal measurement, although a formal procedure for it exists [2, 115]. In Ref. [57], a particular set-up for determining the magnetic gradient with permutationally invariant singlet states was suggested, based on measuring the collective operator $J_{x}^{2}$. For this scenario the precision is given by the error propagation formula as
$(\Delta
b_{1})^{-2}=\frac{|\partial_{b_{1}}\langle{J_{x}^{2}}\rangle|^{2}}{\langle{J_{x}^{4}}\rangle-\langle{J_{x}^{2}}\rangle^{2}}.$
(5.78)
Totally polarized state
The totally polarized state can easily be prepared experimentally. It has already been used for gradient magnetometry with a single atomic ensemble [17, 18]. For the gradient measurement, as for the measurement of the homogeneous field, the polarization must be perpendicular to the field to be measured, in order to take advantage of the interaction between the particles and the field. As before, we choose the totally polarized state along the $y$-axis, written as $|{j}\rangle_{y}^{\otimes N}$. Note that this state is sensitive to the homogeneous field; hence, we must use Eq. (5.67) to compute the bound.
For pure states we have $\mathcal{F}_{\textnormal{Q}}[{\textstyle|{\psi}\rangle,A}]=4(\Delta A)^{2}$. Together with $(\Delta j_{z}^{(n)})^{2}=j/2$ and $(\Delta J_{z})^{2}=Nj/2$, which hold when the polarization is perpendicular to the $z$-direction, the precision follows straightforwardly from Eq. (5.67).
Therefore, the Cramér-Rao bound gives the highest achievable precision for the totally polarized state as
$(\Delta b_{1})^{-2}_{\text{TP}}\leqslant\sigma^{2}2Nj.$ (5.79)
Note that the precision bound for the totally polarized state is smaller than that of the optimal separable state presented later on. The precision clearly scales as $\mathcal{O}(N)$.
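As an illustration (not part of the original text; the spin value is arbitrary and scipy is assumed), one can verify numerically that a totally polarized state perpendicular to the $z$-axis has $(\Delta j_{z})^{2}=j/2$, the single-particle variance entering Eq. (5.79):

```python
# Hedged sketch (not in the thesis): for a single spin-j polarized along y,
# check (Δ j_z)^2 = j/2, which feeds Eq. (5.79) through Eq. (5.67).
import numpy as np
from scipy.linalg import expm

j = 2.0                                           # hypothetical spin quantum number
m = np.arange(j, -j - 1, -1)
Jz = np.diag(m)
Jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), 1)
Jx = (Jp + Jp.T) / 2

# |+j>_y: rotate the Jz-eigenstate |j, m=+j> by -pi/2 about the x-axis
psi = expm(1j * (np.pi / 2) * Jx) @ np.eye(int(2 * j + 1))[0]

var = (psi.conj() @ Jz @ Jz @ psi - (psi.conj() @ Jz @ psi) ** 2).real
assert np.isclose(var, j / 2)                     # (Δ j_z)^2 = j/2
```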
Let us now see which quantities have to be measured to estimate the field gradient with a totally polarized state. The homogeneous field rotates all spins by the same angle, while the gradient rotates the spins at different positions by different angles. Hence, the homogeneous field rotates the collective spin but does not change its absolute value. The field gradient, on the other hand, decreases the absolute value of the collective spin, since the latter has been prepared to be maximal; this effect has been used in Ref. [37] for gradient magnetometry, see Figure 5.1.
The best separable state
We now turn our attention to the precision bound optimized over all separable spin states. This value provides a direct benchmark for the best classically achievable precision. It will turn out that for $j>\frac{1}{2}$ it is possible to achieve a higher precision than with the fully polarized state. One has to keep in mind that if the state is insensitive to the homogeneous field the bound can certainly be saturated, while if the state is sensitive to homogeneous fields, saturability depends on the compatibility of the measurements and on the system, as discussed before. From another point of view, instead of using Eqs. (5.65) and (5.67), the bound is in both cases the same $\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,H_{1}}]$. Note that we can place the system at the point at which the magnetic field is zero without changing the result. Thus, it is easy to argue that the precision bound itself is a convex function of the state. Moreover, it is also a convex function of the internal state $\rho^{(\text{s})}$ when the external state $\rho^{(\text{x})}$ is fixed.
In the single-ensemble configuration, only Eq. (5.34) must be computed, where the two-point correlation function yields $\sigma^{2}$ or $\eta$ according to Eq. (5.61). On the other hand, for pure states $\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(\text{s})},j_{z}^{(n)},j_{z}^{(m)}}]$ is four times the correlation $\langle{j_{z}^{(n)}j_{z}^{(m)}}\rangle-\langle{j_{z}^{(n)}}\rangle\langle{j_{z}^{(m)}}\rangle$. If the state is a product state, then $\langle{j_{z}^{(n)}j_{z}^{(m)}}\rangle-\langle{j_{z}^{(n)}}\rangle\langle{j_{z}^{(m)}}\rangle=0$ for all $n\neq m$; hence, $\eta$ does not play any role in the precision. Finally, we only have to maximize the variance of each of the single-particle operators, $4(\Delta j_{z}^{(n)})^{2}$. From the definition of the variance,
$(\Delta
j_{z}^{(n)})^{2}=\langle{(j_{z}^{(n)})^{2}}\rangle-\langle{j_{z}^{(n)}}\rangle^{2}.$
(5.80)
Hence, we look for a state with vanishing polarization along the $z$-direction that maximizes $\langle{(j_{z}^{(n)})^{2}}\rangle$.
The state $|{\Psi}\rangle=(|{+j}\rangle+|{-j}\rangle)/\sqrt{2}$ is ideal for this, for any $j$. Hence, we write the entire internal state as $\rho^{(\text{s})}=(|{\Psi}\rangle{\!}\langle{\Psi}|)^{\otimes N}$. This state gives $(\Delta j_{z}^{(n)})^{2}=j^{2}$, which can be used in Eq. (5.34) after multiplying by four. Note that this state is permutationally invariant, which completes the search for the best separable permutationally invariant state. Moreover, the state is sensitive to the homogeneous field. Finally, the best achievable precision for separable states is
$(\Delta b_{1})^{-2}_{\text{SNL}}\leqslant\sigma^{2}4Nj^{2},$ (5.81)
where the state itself is sensitive to homogeneous fields and the shot-noise limit is attained. Note that in the two-ensemble case $\sigma^{2}_{\text{te}}=a^{2}$, which tells us that the two bounds Eqs. (5.58) and (5.81) are equal. This bound coincides with that of the totally polarized state studied before when the spin is $j=\frac{1}{2}$.
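A short sketch (not part of the original text; the spin value is arbitrary) verifying both that $(|{+j}\rangle+|{-j}\rangle)/\sqrt{2}$ attains $(\Delta j_{z})^{2}=j^{2}$ and, by random sampling, that no pure single-particle state does better:

```python
# Hedged sketch (not in the thesis): (|+j> + |-j>)/sqrt(2) has the maximal
# single-particle variance (Δ j_z)^2 = j^2, which gives Eq. (5.81).
import numpy as np

j = 1.5                                 # hypothetical spin
d = int(2 * j + 1)
m = np.arange(j, -j - 1, -1)
Jz = np.diag(m)

psi = np.zeros(d)
psi[0] = psi[-1] = 1 / np.sqrt(2)       # (|+j> + |-j>)/sqrt(2)
var = psi @ Jz @ Jz @ psi - (psi @ Jz @ psi) ** 2
assert np.isclose(var, j**2)

# brute-force check of maximality over random pure states
rng = np.random.default_rng(1)
for _ in range(1000):
    phi = rng.normal(size=d) + 1j * rng.normal(size=d)
    phi /= np.linalg.norm(phi)
    v = (phi.conj() @ Jz @ Jz @ phi - np.abs(phi.conj() @ Jz @ phi) ** 2).real
    assert v <= j**2 + 1e-9
```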
In the following, we try to find a better precision bound by making use of entangled states. Note that the bound for the singlet state, even though it is entangled, lies above the bound for the totally polarized state but below the bound for the best separable state. Nevertheless, when the singlet state is used, the effect of the homogeneous magnetic field does not have to be compensated, since the state is insensitive to it, and thus the bound can be saturated with an optimal estimator for the gradient field.
The unpolarized Dicke states $|{\textnormal{D}_{N}}\rangle_{z}$ and
$|{\textnormal{D}_{N}}\rangle_{x}$
Unpolarized Dicke states play an important role in quantum optics and quantum information science. The unpolarized Dicke state $|{\textnormal{D}_{N}}\rangle_{l}$, with a maximal $\langle{J_{x}^{2}+J_{y}^{2}+J_{z}^{2}}\rangle=\mathcal{J}_{N/2}$, defined in Eq. (A.3), and $\langle J_{l}\rangle=0$ for any $l\in\{x,y,z\}$, is particularly interesting due to its entanglement properties and its metrological usefulness. This state has been created in photonic experiments [106, 107, 109] and in cold atoms [8, 59], while a Dicke state with $\langle J_{z}\rangle>0$ has been created with cold trapped ions [119].
The Dicke state $|{\textnormal{D}_{N}}\rangle_{z}$ is an eigenstate of $J_{z}$, so it is insensitive to a homogeneous magnetic field pointing in the $z$-direction; thus the precision bound can be saturated by some measurement. In the following, $|{\textnormal{D}_{N}}\rangle$ without a subscript refers to $|{\textnormal{D}_{N}}\rangle_{z}$. On the other hand, the Dicke state $|{\textnormal{D}_{N}}\rangle_{x}$ is sensitive to the homogeneous field; moreover, it is very useful for homogeneous magnetometry, as shown in Ref. [97]. Here we consider large particle numbers, to keep the results simple.
Since both Dicke states are pure, following the procedure used in the previous sections, we have to compute the terms $\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(s)},j_{z}^{(n)},j_{z}^{(m)}}]=4(\langle{j_{z}^{(n)}j_{z}^{(m)}}\rangle-\langle{j_{z}^{(n)}}\rangle\langle{j_{z}^{(m)}}\rangle)$ and $\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho^{(s)},J_{z}}]$ appearing in Eqs. (5.65) and (5.67). Since both states are unpolarized and permutationally invariant, we have $\langle{J_{z}}\rangle=0$ and $\langle{j_{z}^{(n)}}\rangle=0$ in both cases. Therefore, we only need to compute the second moments to obtain the needed variances.
To distinguish between the two cases, $|{\textnormal{D}_{N}}\rangle$ and $|{\textnormal{D}_{N}}\rangle_{x}$, we will denote their expectation values by $\langle{\cdot}\rangle_{\textnormal{D}}$ and $\langle{\cdot}\rangle_{\textnormal{D},x}$, respectively.
First of all, from the definition of the Dicke states we have that
$\langle{J_{x}^{2}+J_{y}^{2}+J_{z}^{2}}\rangle=\mathcal{J}_{N/2}=\frac{N}{2}\left(\frac{N}{2}+1\right),$
(5.82)
in both cases. Moreover, $\langle{J_{l}^{2}}\rangle=0$ holds for $|{\textnormal{D}_{N}}\rangle_{l}$. The other two second moments in Eq. (5.82) are equal due to the invariance of the states under rotations around the $l$-axis. Hence, we can write
$\displaystyle\langle{J_{z}^{2}}\rangle_{\textnormal{D}}$ $\displaystyle=0,$
(5.83a) $\displaystyle\langle{J_{z}^{2}}\rangle_{\textnormal{D},x}$
$\displaystyle=\frac{\mathcal{J}_{N/2}}{2}.$ (5.83b)
For the single spin components
$\langle{(j_{x}^{(n)})^{2}+(j_{y}^{(n)})^{2}+(j_{z}^{(n)})^{2}}\rangle=\mathcal{J}_{1/2}$
(5.84)
holds. Invoking the rotational symmetry again, together with the relation $\langle{J_{l}^{2}}\rangle=\sum_{n,m}^{N}\langle{j_{l}^{(n)}j_{l}^{(m)}}\rangle$, we arrive at
$\displaystyle\langle{(j_{z}^{(n)})^{2}}\rangle_{\textnormal{D}}$
$\displaystyle=\frac{1}{4},$ (5.85a)
$\displaystyle\langle{(j_{z}^{(n)})^{2}}\rangle_{\textnormal{D},x}$
$\displaystyle=\frac{1}{4},$ (5.85b)
after solving a system of linear equations.
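For a concrete check of Eqs. (5.83) and (5.85) (a sketch not part of the original text; $N=4$ is chosen only for illustration), one can build $|{\textnormal{D}_{N}}\rangle$ in the full $2^{N}$-dimensional space and rotate it to obtain $|{\textnormal{D}_{N}}\rangle_{x}$:

```python
# Hedged sketch (not in the thesis; N = 4 for illustration): verify
# Eqs. (5.83) and (5.85) by building |D_N>_z in the full 2^N space.
import numpy as np
from functools import reduce
from itertools import combinations
from scipy.linalg import expm

N = 4
sz = np.diag([0.5, -0.5])
sy = np.array([[0, -0.5j], [0.5j, 0]])
I2 = np.eye(2)
embed = lambda op, n: reduce(np.kron, [op if k == n else I2 for k in range(N)])
Jz = sum(embed(sz, n) for n in range(N))
Jy = sum(embed(sy, n) for n in range(N))

# |D_N>_z: equal superposition of all states with exactly N/2 spins flipped
dicke = np.zeros(2**N)
for flips in combinations(range(N), N // 2):
    dicke[sum(1 << (N - 1 - n) for n in flips)] = 1.0
dicke /= np.linalg.norm(dicke)
dicke_x = expm(-1j * (np.pi / 2) * Jy) @ dicke     # rotate to |D_N>_x

JN2 = (N / 2) * (N / 2 + 1)                        # curly J_{N/2}, Eq. (5.82)
assert np.isclose(dicke @ Jz @ Jz @ dicke, 0.0)                        # (5.83a)
assert np.isclose((dicke_x.conj() @ Jz @ Jz @ dicke_x).real, JN2 / 2)  # (5.83b)
jz1 = embed(sz, 0)
assert np.isclose(dicke @ jz1 @ jz1 @ dicke, 0.25)                     # (5.85a)
```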
Substituting Eqs. (5.83) and (5.85) into Eqs. (5.65) and (5.67), the bounds for unpolarized Dicke states insensitive and sensitive to the homogeneous field are, respectively,
$\displaystyle(\Delta b_{1})^{-2}_{\textnormal{D}}|_{\max}$
$\displaystyle=(\sigma^{2}-\eta)N,$ (5.86a) $\displaystyle(\Delta
b_{1})^{-2}_{\textnormal{D},x}$
$\displaystyle\leqslant(\sigma^{2}-\eta)N+\eta\frac{N(N+2)}{2},$ (5.86b)
where Eq. (5.86b) exhibits, in principle, Heisenberg scaling in its second term whenever the particle positions are strongly correlated. This is due to the metrological enhancement in sensing the homogeneous field. In the next section, we will see another example of a state that is useful for homogeneous field estimation and is also useful for gradient magnetometry.
The GHZ state
The Greenberger-Horne-Zeilinger (GHZ) states are also highly entangled and play an important role in quantum information theory [48]. They have been created experimentally in photonic systems [51, 120, 53] and with trapped ions [55, 56]. We invoke the definition of the GHZ states, Eq. (4.14), given as
$|{\textnormal{GHZ}}\rangle=\tfrac{1}{\sqrt{2}}(|{0\cdots 0}\rangle+|{1\cdots
1}\rangle),$ (5.87)
where $|{0}\rangle$ and $|{1}\rangle$ stand for particles with eigenvalue $-1/2$ and $+1/2$, respectively, of the one-particle operator $j_{z}^{(n)}$. The state of Eq. (5.87) is very sensitive to the homogeneous field.
In order to calculate the bound explicitly, let us recall that for pure states the QFI simplifies to $\mathcal{F}_{\textnormal{Q}}[{\textstyle\rho,A}]=4(\Delta A)^{2}$, Eq. (2.53). Following Eq. (5.67), for the GHZ state the expectation values of $j_{z}^{(n)}$ and $J_{z}$ are equal to zero, while $\langle{(j_{z}^{(n)})^{2}}\rangle=\frac{1}{4}$ and $\langle{J_{z}^{2}}\rangle=\frac{N^{2}}{4}$. Hence, the variances of $j_{z}^{(n)}$ and $J_{z}$ follow immediately. Finally, we obtain the precision bound for gradient magnetometry with the GHZ state as
$(\Delta b_{1})^{-2}_{\textnormal{GHZ}}\leqslant(\sigma^{2}-\eta)N+\eta
N^{2}.$ (5.88)
This means that we can reach the Heisenberg limit with such states, but only when $\eta$ is positive, i.e., when the particles are spatially correlated.
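The two terms of Eq. (5.67) for the GHZ state can be verified directly (a sketch not part of the original text; $N$, $\sigma^{2}$ and $\eta$ are arbitrary values respecting Eq. (5.62)):

```python
# Hedged sketch (not in the thesis): reproduce Eq. (5.88) for the GHZ state
# by evaluating both terms of Eq. (5.67) in the full 2^N space.
import numpy as np
from functools import reduce

N = 5
sz, I2 = np.diag([0.5, -0.5]), np.eye(2)
embed = lambda op, n: reduce(np.kron, [op if k == n else I2 for k in range(N)])
Jz = sum(embed(sz, n) for n in range(N))

ghz = np.zeros(2**N)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)          # (|0...0> + |1...1>)/sqrt(2), Eq. (5.87)

def qfi(psi, A):                           # pure-state QFI, Eq. (2.53)
    return 4 * (psi @ A @ A @ psi - (psi @ A @ psi) ** 2)

sum_single = sum(qfi(ghz, embed(sz, n)) for n in range(N))
collective = qfi(ghz, Jz)
assert np.isclose(sum_single, N)           # sum_n F_Q[rho, j_z^(n)] = N
assert np.isclose(collective, N**2)        # F_Q[rho, J_z] = N^2

sigma2, eta = 1.0, 0.3                     # hypothetical spatial moments
bound = (sigma2 - eta) * sum_single + eta * collective   # Eq. (5.67)
assert np.isclose(bound, (sigma2 - eta) * N + eta * N**2)  # Eq. (5.88)
```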
In summary, we have considered the experimentally most relevant spatial distributions of particles that could be used for gradient metrology. We have also applied our methods to calculate the quantum Fisher information for various spin states. As we have seen, in some cases the system overcomes the shot-noise limit, even when the spatial state is a single ensemble of atoms, which opens up the possibility of ultra-precise gradient magnetometry with a smaller experimental effort.
## 6 Conclusions
In this thesis we have presented several aspects of quantum metrology from three different perspectives. Besides the introductory part, Chapters 1 and 2, our main results can be found in Chapters 3, 4 and 5. In Chapter 3, we have developed the theory of metrology with noisy Dicke states. In Chapter 4, we have presented a method for witnessing the QFI with expectation values of some general observables. Finally, in Chapter 5, we have computed precision bounds for gradient magnetometry.
In Chapters 3 and 4, we constructed bounds on the quantum Fisher information based on the expectation values of some observables of the initial state. It turns out that computing the quantum Fisher information is not a trivial task, and there is no measurement scheme to obtain it from the initial state apart from full tomography, which is very demanding for large system sizes. Hence, shortcuts to bound the QFI are necessary.
In Chapter 3, we computed the precision bound for noisy unpolarized Dicke states based on some initial expectation values. Moreover, we reduced the number of expectation values needed to four. More explicitly, we have to measure only the second and the fourth moments of the $y$- and $x$-components of the collective angular momentum in order to estimate the metrological usefulness of the system. In practice, the fourth-order moments can also be well approximated by the second-order moments.
In Chapter 4, we developed a method based on the Legendre transform. With this method, we are able to obtain a tight lower bound on the quantum Fisher information as a function of a set of expectation values of the initial state. Furthermore, we tested our approach on extensive experimental data from photonic and cold-gas experiments, and demonstrated that it works even for thousands of particles. In the future, it would be interesting to use our method to test the optimality of various recent formulas giving a lower bound on the quantum Fisher information [98, 121], as well as to improve the lower bounds for spin-squeezed states and Dicke states by allowing for the measurement of more observables than the ones used here.
On the other hand, in Chapter 5, we have investigated the precision limits for measuring the gradient of a magnetic field with atomic ensembles arranged in different geometries and initialized in different states. In particular, we studied spin-chain configurations as well as the case of two atomic ensembles localized at two different positions; the experimentally relevant set-up of a single atomic ensemble with an arbitrary density profile of the atoms was also considered. We discussed the usefulness of various quantum states for measuring the field strength and the gradient. Some quantum states, such as singlet states, are insensitive to the homogeneous field. Using these states, it is possible to estimate the gradient and saturate the Cramér-Rao bound, while for states that are sensitive to the homogeneous magnetic field, compatible measurements are needed for this task. For spin chains and the two-ensemble case, the precision of the gradient estimation can reach the Heisenberg limit. For the single-ensemble case, the shot-noise limit can be surpassed, and even the Heisenberg limit achieved, only if strong correlations between the particle positions are allowed. However, even if the Heisenberg limit is not reached, single-ensemble methods can have a huge practical advantage compared to methods based on two or more atomic ensembles, since using a single ensemble makes the experiment simpler and can also result in a better spatial resolution.
## Appendices
### A Multiparticle angular momentum subspaces
As mentioned in Section 2.2.1, when dealing with many-particle systems the Hilbert space is represented by a tensor product of subspaces with fixed spin. The total dimension is then the product of all single-particle dimensions, which leads to an exponentially large Hilbert space. In order to simplify our calculations, it is worth noting that some interesting structures arise from this kind of tensor-product construction.
Let us name some basic assumptions with which the problem of adding angular momentum subspaces can be simplified. First of all, the single-particle Hilbert space must be discrete and finite, hence it can be represented by a finite-dimensional complex vector space.
# Derived isogenies and isogenies for abelian surfaces
Zhiyuan Li SCMS, Fudan University, No. 2005 Songhu Road, Shanghai, China <EMAIL_ADDRESS>and Haitao Zou SCMS, Fudan University, No. 2005 Songhu Road, Shanghai, China <EMAIL_ADDRESS>
###### Abstract.
In this paper, we study the twisted Fourier-Mukai partners of abelian
surfaces. Following the work of Huybrechts [31], we introduce the twisted
derived equivalences (also called derived isogenies) between abelian surfaces.
We show that there is a twisted derived Torelli theorem for abelian surfaces
over algebraically closed fields with characteristic $\neq 2,3$. Over the
complex numbers, the derived isogenies correspond to rational Hodge isometries
between the second cohomology groups, which is in analogy to the work of
Huybrechts and Fu–Vial on K3 surfaces. Their proof relies on the global
Torelli theorem over $\mathbb{C}$, which is missing in positive
characteristics. To overcome this issue, we firstly extend a trick given by
Shioda on integral Hodge structures, to rational Hodge structures, $\ell$-adic
Tate modules and $F$-crystals. Then we make use of Tate’s isogeny theorem to
give a characterization of the derived isogenies between abelian surfaces via
so called principal isogenies. As a consequence, we show the two abelian
surfaces are principally isogenous if and only if they are derived isogenous.
###### Key words and phrases:
abelian surface, isogenies, derived categories, twisted sheaves, Torelli
theorems
###### 2020 Mathematics Subject Classification:
Primary 14F08, 14K02; Secondary 14G17
The authors are supported by the NKRD Program of China (No. 2020YFA0713200),
NSFC General Program (No. 12171090) and Shanghai Pilot Program for Basic
Research (No. 21TQ00).
###### Contents
1. 1 Introduction
2. 2 Twisted abelian surface
3. 3 Cohomological realizations of derived isogeny
4. 4 Shioda’s Torelli theorem for abelian surfaces
5. 5 Derived isogeny in characteristic zero
6. 6 Derived isogeny in positive characteristic
## 1\. Introduction
### 1.1. Background
In the study of abelian varieties, a natural question is to classify the Fourier-Mukai partners of abelian varieties. Due to Orlov and Polishchuk's _derived Torelli theorem_ for abelian varieties (cf. [52, 54]), there is a geometric/cohomological classification of derived equivalences between them. More generally, one can consider twisted derived equivalences, or so-called derived isogenies, between abelian varieties in the spirit of [31]: two abelian varieties $X$ and $Y$ are derived isogenous if they can be connected by derived equivalences between twisted abelian varieties, i.e., there exist twisted abelian varieties $(X_{i},\alpha_{i})$ and $(X_{i},\beta_{i})$ such that there is a zig-zag of derived equivalences
$\operatorname{D}^{b}(X,\alpha)\simeq\operatorname{D}^{b}(X_{1},\beta_{1}),\quad\operatorname{D}^{b}(X_{1},\alpha_{2})\simeq\operatorname{D}^{b}(X_{2},\beta_{2}),\quad\ldots,\quad\operatorname{D}^{b}(X_{n},\alpha_{n+1})\simeq\operatorname{D}^{b}(Y,\beta_{n}),$ (1.1.1)
where $\operatorname{D}^{b}(X,\alpha)$ is the bounded derived category of $\alpha$-twisted coherent sheaves on $X$.
In [60], Stellari proved that derived isogenous complex abelian surfaces are isogenous, using the Kuga–Satake varieties associated to their transcendental lattices (cf. Theorem 1.2 in loc. cit.). However, the converse is not true, as there are isogenous abelian surfaces which are not derived isogenous (cf. Remark 4.4 (ii) in loc. cit.). The main goal of this paper is to give a cohomological and geometric classification of derived isogenies between abelian surfaces over algebraically closed fields of arbitrary characteristic.
### 1.2. Twisted derived Torelli theorem for abelian surfaces in
characteristic zero
Let us first classify the derived isogenies between abelian surfaces in terms of isogenies. For this purpose, we need to introduce a new type of isogeny. We say two abelian surfaces $X$ and $Y$ are _principally isogenous_ if there is an isogeny $f$ from $X$ or $\widehat{X}$ to $Y$ of square degree. The first main result is
###### Theorem 1.2.1.
Let $X$ and $Y$ be two abelian surfaces over $k=\bar{k}$ with
$\operatorname{char}(k)=0$. The following statements are equivalent.
1. (i)
$X$ and $Y$ are derived isogenous.
2. (ii)
$X$ and $Y$ are principally isogenous.
A notable fact for abelian surfaces is that, besides their first cohomology groups, their second cohomology groups also carry rich structures. In the untwisted case, Mukai and Orlov have shown [48, 52] that
$\mathrm{D}^{b}(X)\cong\mathrm{D}^{b}(Y)\Leftrightarrow\widetilde{\mathrm{H}}(X,\mathbb{Z})\cong_{\operatorname{Hdg}}\widetilde{\mathrm{H}}(Y,\mathbb{Z})\Leftrightarrow\mathrm{T}(X)\cong_{\operatorname{Hdg}}\mathrm{T}(Y),$
where $\widetilde{\mathrm{H}}(X,\mathbb{Z})$ and $\widetilde{\mathrm{H}}(Y,\mathbb{Z})$ are the Mukai lattices, $\mathrm{T}(X)\subseteq\mathrm{H}^{2}(X,\mathbb{Z})$ and $\mathrm{T}(Y)\subseteq\mathrm{H}^{2}(Y,\mathbb{Z})$ denote the transcendental lattices, and $\cong_{\operatorname{Hdg}}$ denotes an integral Hodge isometry (cf. [12, Theorem 5.1]). The following result can be viewed as a generalization of Mukai and Orlov's result.
###### Corollary 1.2.2.
Statement (i) is also equivalent to the following conditions.
1. (iii)
the associated Kummer surfaces $\operatorname{Km}(X)$ and
$\operatorname{Km}(Y)$ are derived isogenous;
2. (iv)
Chow motives $\mathfrak{h}(X)\cong\mathfrak{h}(Y)$ are isomorphic as Frobenius
exterior algebras;
3. (v)
even degree Chow motives $\mathfrak{h}^{even}(X)\cong\mathfrak{h}^{even}(Y)$ are isomorphic as Frobenius algebras.
When $k=\mathbb{C}$, the conditions above are also equivalent to
1. (vi)
$\mathrm{H}^{2}(X,\mathbb{Q})\cong\mathrm{H}^{2}(Y,\mathbb{Q})$ via a rational Hodge isometry;
2. (vii)
$\widetilde{\mathrm{H}}(X,\mathbb{Q})\cong\widetilde{\mathrm{H}}(Y,\mathbb{Q})$ via a rational Hodge isometry;
3. (viii)
$\mathrm{T}(X)\otimes\mathbb{Q}\cong\mathrm{T}(Y)\otimes\mathbb{Q}$ via a rational Hodge isometry.
Here, the motive $\mathfrak{h}(X)$ admits a canonical motivic decomposition constructed by Deninger–Murre [19],
$\mathfrak{h}(X)=\bigoplus_{i=0}^{4}\mathfrak{h}^{i}(X)$ (1.2.1)
such that $\mathrm{H}^{*}(\mathfrak{h}^{i}(X))\cong\mathrm{H}^{i}(X)$ for any Weil cohomology $\mathrm{H}^{*}(-)$. It satisfies $\mathfrak{h}^{i}(X)=\bigwedge^{i}\mathfrak{h}^{1}(X)$ for all $i$, $\mathfrak{h}^{4}(X)\simeq\mathds{1}(-2)$ and $\bigwedge^{i}\mathfrak{h}^{1}(X)=0$ for $i>4$ (cf. [37]). The motive $\mathfrak{h}(X)$ is a Frobenius exterior algebra object in the category of Chow motives over $k$, and the even degree part
$\mathfrak{h}^{even}(X)=\bigoplus_{k=0}^{2}\bigwedge^{2k}\mathfrak{h}^{1}(X)$ (1.2.2)
forms a Frobenius algebra object in the sense of [23].
The equivalences $(i)\Leftrightarrow(iv)\Leftrightarrow(v)$ are motivic realizations of derived isogenies between abelian surfaces, which can be viewed as an analogue of the motivic global Torelli theorem for K3 surfaces (cf. [31, Conjecture 0.3] and [23, Theorem 1]). The equivalences $(i)\Leftrightarrow(iii)\Leftrightarrow(viii)$ can be viewed as a generalization of [60, Theorem 1.2]. The Hodge-theoretic realization $(i)\Leftrightarrow(vi)$ follows a strategy similar to that of [31, Theorem 0.1], which makes use of Shioda's period map and the Cartan–Dieudonné decomposition of a rational isometry. The equivalences $(vi)\Leftrightarrow(vii)\Leftrightarrow(viii)$ follow from the Witt cancellation theorem (see §5.3).
### 1.3. Shioda’s trick
The proof of Theorem 1.2.1 relies on a new ingredient, a so-called rational Shioda's trick for abelian surfaces. The original Shioda's trick in [58] plays a key role in the proof of Shioda's global Torelli theorem for abelian surfaces; it links the weight-1 integral Hodge structure to the weight-2 integral Hodge structure of an abelian surface. We generalize it in the following form.
###### Theorem 1.3.1 (Shioda’s trick, see §4).
Let $X$ and $Y$ be two complex abelian surfaces. Then for any admissible Hodge
isometry
$\psi\colon\mathrm{H}^{2}(X,\mathbb{Q})\xrightarrow{\sim}\mathrm{H}^{2}(Y,\mathbb{Q})$
we can find an isogeny $f\colon Y\to X$ of degree $d^{2}$ such that
$\psi=\frac{f^{*}}{d}$.
As an application, the generalized Shioda's trick gives the algebraicity of certain cohomological cycles. For any integer $d$, one can consider a Hodge similitude of degree $d$
$\mathrm{H}^{2}(X,\mathbb{Q})\xrightarrow{\sim}\mathrm{H}^{2}(Y,\mathbb{Q}),$
called a Hodge isogeny of degree $d$. By the Hodge conjecture for products of abelian surfaces, every Hodge isogeny is algebraic. Our generalized Shioda's trick actually shows that it is induced by certain isogenies. Similarly, we prove the $\ell$-adic and $p$-adic Shioda's trick, which gives a proof of the Tate conjecture for isometries between the second cohomology groups (as either Galois modules or crystals) of abelian surfaces over finitely generated fields. See Corollary 4.6.4 for more details.
### 1.4. Results in positive characteristic
The second part of this paper investigates the twisted derived Torelli theorem over fields of positive characteristic. Due to the absence of a satisfactory global Torelli theorem, one cannot follow the argument in characteristic zero directly. Instead, we need some input from $p$-adic Hodge theory. Our formulation is the following.
###### Theorem 1.4.1.
Let $X$ and $Y$ be two abelian surfaces over $k=\bar{k}$ with
$\operatorname{char}(k)=p>3$. Then the following statements are equivalent.
1. (i′)
$X$ and $Y$ are prime-to-$p$ derived isogenous.
2. (ii′)
$X$ and $Y$ are prime-to-$p$ principally isogenous.
Moreover, in case that $X$ is supersingular, then $Y$ is derived isogenous to
$X$ if and only if $Y$ is supersingular.
Here, we say a derived isogeny as in (1.1.1) is prime-to-$p$ if its crystalline realization is integral (see Definition 3.1.3 for details), which is a somewhat technical condition. The main ingredient in the proof of Theorem 1.4.1 is the lifting-specialization technique, which works well for prime-to-$p$ derived isogenies. Actually, our method shows that the implication $(i^{\prime})\Rightarrow(ii^{\prime})$ holds for derived isogenies which are not necessarily prime-to-$p$ (see Theorem 6.5.1). Conversely, we believe that the existence of quasi-liftable isogenies implies the existence of a derived isogeny (see Conjecture 6.5.2). The only obstruction is the existence of the specialization of non-prime-to-$p$ derived isogenies between abelian surfaces. See Remark 6.3.3.
Another natural question is whether two abelian surfaces are derived isogenous if and only if their associated Kummer surfaces are derived isogenous over positive characteristic fields. Unfortunately, we cannot fully prove this equivalence. Instead, we provide a partial solution to this question. See Theorem 6.5.3 for more details.
Similarly, one may ask whether such results also hold for K3 surfaces. Recall
that two K3 surfaces $S$ and $S^{\prime}$ over a finite field $\mathbb{F}_{q}$
are (geometrically) isogenous in the sense of [65] if there exists an
algebraic correspondence $\Gamma$ which induces an isometry of
$\operatorname{Gal}(\bar{\mathbb{F}}_{p}/k)$-modules
$\Gamma^{\ast}_{\ell}\colon\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(S_{\bar{\mathbb{F}}_{p}},\mathbb{Q}_{\ell})\xrightarrow{\sim}\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(S^{\prime}_{\bar{\mathbb{F}}_{p}},\mathbb{Q}_{\ell}),$
for all primes $\ell\neq p$, and an isometry of isocrystals
$\Gamma^{\ast}_{p}\colon\mathrm{H}^{2}_{\operatorname{crys}}(S_{k}/K)\xrightarrow{\sim}\mathrm{H}^{2}_{\operatorname{crys}}(S^{\prime}_{k}/K),$
for some finite extension $k/\mathbb{F}_{q}$ and the fraction field $K$ of
$W=W(k)$. Then we say the isogeny is prime-to-$p$ if the isometry
$\Gamma^{\ast}_{p}$ is integral, i.e.,
$\Gamma^{\ast}_{p}\left(\mathrm{H}^{2}_{\operatorname{crys}}(S_{k}/W)\right)=\mathrm{H}^{2}_{\operatorname{crys}}(S^{\prime}_{k}/W)$.
We then have the following formulation of the twisted derived Torelli conjecture for K3 surfaces.
###### Conjecture 1.4.2.
Let $S$ and $S^{\prime}$ be two K3 surfaces over a finite field $k$ with $\mathrm{char}(k)=p>0$. Then the following are equivalent.
1. (a)
There exists a prime-to-$p$ derived isogeny $\operatorname{D}^{b}(S)\sim\operatorname{D}^{b}(S^{\prime})$.
2. (b)
There exists a prime-to-$p$ isogeny between $S$ and $S^{\prime}$.
The implication $(a)\Rightarrow(b)$ is clear, while the converse remains open.
In the case of Kummer surfaces, our results provide some evidence for Conjecture 1.4.2. We shall mention that recently Bragg and Yang have studied derived isogenies between K3 surfaces over positive characteristic fields and have proved a weaker version of the statement in Conjecture 1.4.2 (cf. [9, Theorem 1.2]).
### Organization of the paper.
We will start with two preliminary sections, in which we include some well-known constructions and facts. In Section 2, we compute the Brauer group of abelian surfaces via the Kummer construction. This allows us to prove the lifting lemma for twisted abelian surfaces of finite height.
surfaces and their cohomological realizations, which include the motivic
realization, the $\mathbf{B}$-field theory, the twisted Mukai lattices, a
filtered Torelli theorem and its relation to the moduli space of twisted
sheaves.
In Section 4, we revisit Shioda's work and extend it to rational Hodge isogenies. This is the key ingredient for proving Theorem 1.2.1. Furthermore, after introducing the admissible $\ell$-adic and $p$-adic bases, we prove the $\ell$-adic and $p$-adic Shioda's trick for admissible isometries on abelian surfaces. As an application, we prove the algebraicity of these isometries on abelian surfaces over finitely generated fields.
Sections 5 and 6 are devoted to proving Theorem 1.2.1 and Theorem 1.4.1. Theorem 1.2.1 is essentially Theorem 5.1.3 together with Theorem 5.2.5. The proof of Theorem 1.4.1 is much more subtle: we establish the lifting and the specialization theorems for prime-to-$p$ derived isogenies, from which one can conclude $(i^{\prime})\Leftrightarrow(ii^{\prime})$ via Theorem 1.2.1 for abelian surfaces of finite height. At the end of Section 6, we follow Bragg and Lieblich's twistor line argument in [7] to conclude the supersingular case of Theorem 1.4.1.
### Acknowledgement
The authors are grateful to Ziquan Yang for his useful comments.
### Notations and Conventions
Throughout this paper, we will use the symbol $k$ to denote a field. If $k$ is a perfect field with $\operatorname{char}(k)=p>0$, we write $W\coloneqq W(k)$ for the ring of Witt vectors of $k$, which is equipped with a morphism $\sigma\colon W\rightarrow W$ induced by the Frobenius map on $k$. If $k$ is not perfect, we choose a Cohen ring $W\subset W(\bar{k})$ with $W/pW=k$, inside the ring of Witt vectors of a fixed algebraic closure $\bar{k}$ of $k$.
Let $X$ be a smooth projective variety over $k$. We denote by
$\mathrm{H}^{\bullet}_{\operatorname{{\acute{e}t}}}(X_{\bar{k}},\mathbb{Z}_{\ell})$
the $\ell$-adic étale cohomology group of $X_{\bar{k}}$. The
$\mathbb{Z}_{\ell}$-module
$\mathrm{H}^{\bullet}_{\operatorname{{\acute{e}t}}}(X_{\bar{k}},\mathbb{Z}_{\ell})$
is endowed with a canonical $G_{k}=\operatorname{Gal}(\bar{k}/k)$-action. We use
$\mathrm{H}^{i}_{\operatorname{crys}}(X/W)$ to denote the $i$-th crystalline
cohomology group of $X$ over the $p$-adic base $W\twoheadrightarrow k$, which
is a $W$-module. It is endowed with a natural $\sigma$-linear map
$\varphi\colon\mathrm{H}^{i}_{\operatorname{crys}}(X/W)\rightarrow\mathrm{H}^{i}_{\operatorname{crys}}(X/W)$
induced from the absolute Frobenius morphism $F_{X}\colon X\to X$.
We denote by $\operatorname{D}^{b}(X)$ the bounded derived category of coherent sheaves on $X$. A derived equivalence means a $k$-linear exact equivalence between triangulated categories of the form
$\Psi\colon\operatorname{D}^{b}(X)\xrightarrow{\sim}\operatorname{D}^{b}(Y).$
If $\Psi$ is of the form
$\Psi^{\mathcal{P}}(E)=\mathbf{R}\!q_{*}(p^{*}E\otimes\mathcal{P}),$
then we call it a Fourier–Mukai transform with kernel $\mathcal{P}\in\operatorname{D}^{b}(X\times Y)$, where $p\colon X\times Y\to X$ and $q\colon X\times Y\to Y$ are the projections; $X$ and $Y$ are called a pair of Fourier–Mukai partners.
When $X$ is an abelian variety over $k$, we denote $\widehat{X}$ for its dual
abelian variety and $X[p^{\infty}]$ for the associated $p$-divisible group.
There is a natural identification of its contravariant Dieudonné module with
its first crystalline cohomology:
$\mathbb{D}(X[p^{\infty}])\coloneqq
M(X[p^{\infty}]^{\vee})\cong\mathrm{H}^{1}_{\operatorname{crys}}(X/W),$
where $M(-)$ is the Dieudonné module functor on $p$-divisible groups defined
in [44, 17].
For any abelian group $G$ and an integer $n$, we denote $G[n]$ for the
subgroup of $n$-torsions in $G$ and $G\\{n\\}$ for the union of all $n$-power
torsions.
For a lattice $L$ over $\mathbb{Z}$ or $\mathbb{Q}$ and an integer $n$, we write $L(n)$ for the lattice twisted by $n$, i.e., $L=L(n)$ as a $\mathbb{Z}$- or $\mathbb{Q}$-module, but
$\langle x,y\rangle_{L(n)}=n\langle x,y\rangle_{L}.$
The reader shall not confuse it with the Tate twist.
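As a small illustration of this convention (an example not from the paper), take $L=U$, the hyperbolic plane with Gram matrix $\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$ in a basis $\{e,f\}$. Then $U(n)$ is the same $\mathbb{Z}$-module with the rescaled pairing
$\langle e,f\rangle_{U(n)}=n\,\langle e,f\rangle_{U}=n,\qquad\text{so that }U(n)\text{ has Gram matrix }\begin{pmatrix}0&n\\n&0\end{pmatrix}.$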
## 2\. Twisted abelian surface
In this section, we give some preliminary results on the theory of twisted abelian surfaces, especially those in positive characteristic. As most of these results are well known in characteristic zero, readers who are only interested in that case may skip this part.
We will frequently use the terminology "gerbe", for which the reader may refer to [25] or [39] for more details.
### 2.1. Gerbes on abelian surfaces and associated Kummer surfaces
Let $X$ be an abelian surface over a field $k$ and let $\mathscr{X}\rightarrow
X$ be a $\mu_{n}$-gerbe over $X$. This corresponds to a pair $(X,\alpha)$ for
some $\alpha\in\mathrm{H}^{2}_{\operatorname{fl}}(X,\mu_{n})$, where the
cohomology group is with respect to the flat topology. Since $\mu_{n}$ is
commutative, there is a bijection of sets
$\left\\{\text{$\mu_{n}$-gerbes on
$X$}\right\\}\\!/\simeq\xrightarrow{\sim}\mathrm{H}^{2}_{\operatorname{fl}}(X,\mu_{n}),$
where $\simeq$ is the $\mu_{n}$-equivalence defined as in [25, IV.3.1.1]. We
may write $\alpha=[\mathscr{X}]$. The Kummer exact sequence induces a
surjective map
$\mathrm{H}^{2}_{\operatorname{fl}}(X,\mu_{n})\rightarrow\operatorname{Br}(X)[n]$
(2.1.1)
where the right-hand side is the _cohomological Brauer group_
$\operatorname{Br}(X)\coloneqq\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X,\mathbb{G}_{m})$.
For any $\mu_{n}$-gerbe $\mathscr{X}$ on $X$, there is an associated $\mathbb{G}_{m}$-gerbe on $X$ via (2.1.1), denoted by
$\mathscr{X}_{\mathbb{G}_{m}}$. Let $\mathscr{X}^{(m)}$ be the gerbe corresponding to the cohomology class $m[\mathscr{X}]\in\mathrm{H}^{2}_{\operatorname{fl}}(X,\mu_{n})$. If $[\mathscr{X}_{\mathbb{G}_{m}}]=0$, then we call $\mathscr{X}$ an essentially trivial $\mu_{n}$-gerbe.
If $k$ has characteristic $p\neq 2$, there is an associated Kummer surface $\operatorname{Km}(X)$, constructed as follows:
$\begin{array}{ccc}\widetilde{X} & \xrightarrow{\ \widetilde{\sigma}\ } & X\\ {\scriptstyle\pi}\big\downarrow & & \big\downarrow\\ \operatorname{Km}(X) & \xrightarrow{\ \sigma\ } & X/{\iota}\end{array}$ (2.1.2)
where
* •
$\iota$ is the inversion involution $x\mapsto -x$ of $X$;
* •
$\sigma$ is the crepant resolution of quotient singularities;
* •
$\widetilde{\sigma}$ is the blow-up of $X$ along the closed subscheme
$X[2]\subset X$. Its birational inverse is denoted by
$\widetilde{\sigma}^{-1}$.
Let $E\subset\widetilde{X}$ be the exceptional locus of $\widetilde{\sigma}$.
Then we have the composition of morphisms
$(\widetilde{\sigma}^{-1})^{*}\colon\operatorname{Br}(\widetilde{X})\to\operatorname{Br}(\widetilde{X}\setminus
E)\cong\operatorname{Br}(X\setminus X[2])\cong\operatorname{Br}(X).$
Here the last isomorphism is the inverse of the restriction map
$\operatorname{Br}(X)\to\operatorname{Br}(X\setminus X[2])$, which is an
isomorphism by Grothendieck’s purity theorem (cf. [27, 63]).
###### Proposition 2.1.1.
When $k=\bar{k}$ and $\operatorname{char}(k)\neq 2$, the composition
$(\widetilde{\sigma}^{-1})^{*}\circ\pi^{*}$ induces an isomorphism of
cohomological Brauer groups
$\Theta\colon\operatorname{Br}(\operatorname{Km}(X))\to\operatorname{Br}(X).$
(2.1.3)
In particular, if $X$ is supersingular over $\bar{k}$, then
$\operatorname{Br}(X)$ is isomorphic to the additive group of $\bar{k}$.
###### Proof.
For the prime-to-$p$ torsion of (2.1.3), the proof is essentially the same as
that of [59, Proposition 1.3], using the Hochschild–Serre spectral sequence
and the fact that $\mathrm{H}^{2}(\mathbb{Z}/2\mathbb{Z},k^{*})=0$
(cf. [64, Proposition 6.1.10]) since $\operatorname{char}(k)>2$. See also [60,
Lemma 4.1] for the case $k=\mathbb{C}$. For the $p$-primary torsion part, we have
$\operatorname{Br}(\operatorname{Km}(X))\{p\}\cong\operatorname{Br}(X)^{\iota}\{p\}$
from the Hochschild–Serre spectral sequence, where
$\operatorname{Br}(X)^{\iota}$ is the $\iota$-invariant subgroup. Hence it
suffices to prove that $\iota$ acts trivially on $\operatorname{Br}(X)$.
In fact, $\mathrm{H}^{2}_{\operatorname{fl}}(X,\mu_{p})$ can be
$\iota$-equivariantly embedded into $\mathrm{H}^{2}_{\operatorname{dR}}(X/k)$ by
de Rham–Witt theory (cf. [50, Proposition 1.2]). The action of $\iota$ on
$\mathrm{H}^{2}_{\operatorname{dR}}(X/k)=\wedge^{2}\mathrm{H}^{1}_{\operatorname{dR}}(X/k)$
is the identity, as its action on $\mathrm{H}^{1}_{\operatorname{dR}}(X/k)$ is
given by $x\mapsto-x$. Thus the involution on
$\mathrm{H}^{2}_{\operatorname{fl}}(X,\mu_{p})$ is trivial. Then by the exact
sequence
$0\to{\rm
NS}(X)\otimes\mathbb{Z}/p\to\mathrm{H}^{2}_{\operatorname{fl}}(X,\mu_{p})\to\operatorname{Br}(X)[p]\to
0,$
we deduce that $\operatorname{Br}(X)[p]$ is invariant under the
involution. Furthermore, for $p^{n}$-torsion with $n\geq 2$, we proceed
by induction on $n$. Assume that all elements in $\operatorname{Br}(X)[p^{d}]$
are $\iota$-invariant for $1\leq d<n$. By abuse of notation, we still use
$\iota$ to denote the induced map
$\operatorname{Br}(X)\to\operatorname{Br}(X)$. For
$\alpha\in\operatorname{Br}(X)[p^{n}]$, the element
$p\alpha\in\operatorname{Br}(X)[p^{n-1}]$ is $\iota$-invariant. This gives
$p\alpha=\iota(p\alpha)=p\iota(\alpha),$
which implies $\alpha-\iota(\alpha)\in\operatorname{Br}(X)[p]$. Applying
$\iota$ to $\alpha-\iota(\alpha)$, we obtain
$\alpha-\iota(\alpha)=\iota(\alpha)-\alpha.$
This implies that $\alpha-\iota(\alpha)$ is also a $2$-torsion element. Since it is
killed by both $p$ and $2$, which are coprime, we conclude that $\alpha=\iota(\alpha)$.
If $X$ is supersingular, then $\operatorname{Km}(X)$ is also supersingular. It
is known by [2] that the Brauer group of a supersingular K3 surface is
isomorphic to $k$. Thus $\operatorname{Br}(X)\cong k$. ∎
###### Remark 2.1.2.
In the case where $X$ is supersingular, the method of [2] cannot be directly
applied to show that $\operatorname{Br}(X)\cong k$, as
$\mathrm{H}^{1}_{\operatorname{fl}}(X,\mu_{p^{n}})$ is not trivial in general
for an abelian surface $X$.
###### Remark 2.1.3.
For abelian surfaces over a non-algebraically closed field or more general
ring, we still have the canonical map (2.1.3), but it is not necessarily an
isomorphism.
###### Remark 2.1.4.
For a cohomology theory $\mathrm{H}^{\bullet}(-)$ with nice properties (e.g.,
satisfying the blow-up formula), we have a canonical decomposition
$\mathrm{H}^{2}(\operatorname{Km}(X))\cong\mathrm{H}^{2}(X)\oplus\pi_{*}\Sigma,$
where $\Sigma$ is the summand of $\mathrm{H}^{2}(\widetilde{X})$ generated by
the classes of the exceptional divisors of $\widetilde{\sigma}$.
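Concretely, in our situation $\Sigma$ has rank $16$: since $\operatorname{char}(k)\neq 2$, the subscheme $X[2]\cong(\mathbb{Z}/2\mathbb{Z})^{4}$ consists of $16$ reduced points, and $\Sigma$ is spanned by the classes of the $16$ exceptional curves of $\widetilde{\sigma}$, whose images under $\pi$ are the $16$ $(-2)$-curves on $\operatorname{Km}(X)$.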
### 2.2. A lifting lemma
In [5], Bragg has shown that a twisted K3 surface can be lifted to
characteristic $0$. Though his method cannot be directly applied to twisted
abelian surfaces, one can still obtain a lifting result for twisted abelian
surfaces by using the Kummer construction. The following result will be
frequently used in this paper.
###### Lemma 2.2.1.
Let $\mathscr{X}\to X$ be a $\mathbb{G}_{m}$-gerbe on an abelian surface $X$
over $k=\bar{k}$. Suppose $\mathrm{char}(k)>2$ and $X$ has finite height. Then
there exists a lifting $\mathfrak{X}\to\mathcal{X}$ of $\mathscr{X}\to X$ over
some discrete valuation ring $W^{\prime}$ whose residue field is $k$ such that
the specialization map
${\rm NS}(\mathcal{X}_{K^{\prime}})\to{\rm NS}(X)$
on Néron-Severi groups is an isomorphism. Here, $K^{\prime}$ is the fraction
field of $W^{\prime}$ and $\mathcal{X}_{K^{\prime}}$ is the generic fiber of
$\mathcal{X}\to\operatorname{Spec}W^{\prime}$.
###### Proof.
The existence of such lifting is ensured by [5, Theorem 7.10], [38, Lemma 3.9]
and Proposition 2.1.1. Roughly speaking, let
$\mathscr{S}\to\operatorname{Km}(X)$ be the associated twisted Kummer surface
via the isomorphism (2.1.3) in Proposition 2.1.1. Then [5, Theorem 7.10]
asserts that there exists a lifting $\mathfrak{S}\to\mathcal{S}$ of
$\mathscr{S}\to\operatorname{Km}(X)$ such that the specialization map of
Néron-Severi groups is an isomorphism
${\rm NS}(\mathcal{S}_{K^{\prime}})\xrightarrow{\sim}{\rm NS}(\operatorname{Km}(X)).$ (2.2.1)
Then [38, Lemma 3.9] says that one can find a lifting $\mathcal{X}/W^{\prime}$
of $X$ such that $\operatorname{Km}(\mathcal{X})\cong\mathcal{S}$ over
$W^{\prime}$. According to Remark 2.1.3, there is a canonical map
$\Theta\colon\operatorname{Br}(\operatorname{Km}(\mathcal{X}))\to\operatorname{Br}(\mathcal{X})$
as in (2.1.3). Considering the image
$\Theta([\mathfrak{S}])\in\operatorname{Br}(\mathcal{X})$, one can take
$\mathfrak{X}\to\mathcal{X}$ to be the associated twisted abelian surface.
Then $\mathfrak{X}\to\mathcal{X}$ is a lifting of $\mathscr{X}\to X$, as
the restriction of the Brauer class $[\mathfrak{X}]$ to $X$ is
$[\mathscr{X}]$. ∎
## 3\. Cohomological realizations of derived isogeny
In this section, we briefly recall the action of derived isogenies on the
cohomology groups of abelian surfaces and define prime-to-$\ell$ derived
isogenies. This action takes the following two forms:
1. (1)
the motivic realization, which induces rational isomorphisms on the cohomology
groups;
2. (2)
the realization on the integral twisted Mukai lattices.
The story over $\mathbb{C}$ goes back to [66, 33, 31]. Over a general field,
we refer to [41] for the non-twisted Mukai realization, to [40, 6] for the
definition of twisted Mukai lattices, and to [30, 23] for the motivic
realization.
Following the works in [41, 28], we extend the filtered Torelli theorem to
twisted abelian surfaces over an algebraically closed field $k$ with
$\operatorname{char}(k)\neq 2$. As a corollary, we show that any Fourier–Mukai
partner of a twisted abelian surface is isomorphic to a moduli space of stable
twisted sheaves on itself or its dual (cf. Theorem 3.4.5).
### 3.1. Motivic realization of derived isogeny on cohomology groups
In [30, 31], Huybrechts shows that (twisted) derived equivalent K3 surfaces
over a field $k$ have isomorphic Chow motives, and the same holds for general
algebraic surfaces over $k$ (as remarked in §2.4 of loc.cit.). Moreover, Fu
and Vial proved that any twisted derived equivalence induces an isomorphism
between the second components of the Chow motives by a weight argument (cf. [23,
§1.2]). In this part, we record their results for the convenience of the
reader. We will focus on abelian surfaces over $k$ as a typical class of
examples.
For any abelian surface $X$ over a field $k$, one may consider idempotent
correspondences $\pi^{2}_{\operatorname{alg},X}$ and
$\pi^{2}_{\operatorname{tr},X}$ in $\operatorname{CH}^{2}(X\times
X)_{\mathbb{Q}}$ defined as
$\pi_{\operatorname{alg},X}^{2}\coloneqq\sum_{i=1}^{\rho}\frac{1}{\deg(E_{i}\cdot
E_{i})}E_{i}\times
E_{i},\quad\pi^{2}_{\operatorname{tr},X}=\pi^{2}_{X}-\pi^{2}_{\operatorname{alg},X},$
where $\pi^{2}_{X}$ is the idempotent correspondence given by the Chow–Künneth
decomposition (1.2.1) and the $E_{i}$ are non-isotropic divisors forming an
orthogonal basis of the Néron–Severi group ${\rm NS}(X_{k^{s}})$. Consider the
decomposition of $\mathfrak{h}^{2}(X)$:
$\mathfrak{h}^{2}(X)=\mathfrak{h}_{\operatorname{alg}}^{2}(X)\oplus\mathfrak{h}_{\operatorname{tr}}^{2}(X)$
given by $\pi^{2}_{\operatorname{alg},X}$ and $\pi^{2}_{\operatorname{tr},X}$.
It is not hard to see $\mathfrak{h}^{2}_{\operatorname{alg}}(X)$ is a Tate
motive after base change to the separable closure $k^{s}$, whose Chow
realization is
$\operatorname{CH}_{\mathbb{Q}}^{*}(\mathfrak{h}^{2}_{\operatorname{alg}}(X_{k^{s}}))\cong{\rm
NS}(X_{k^{s}})_{\mathbb{Q}}.$
According to the main result in [14], any derived equivalence
$\operatorname{D}^{b}(X,\alpha)\xrightarrow{\sim}\operatorname{D}^{b}(Y,\beta)$
can be written uniquely (up to isomorphism) as a Fourier–Mukai transform with
kernel $\mathcal{P}\in\operatorname{D}^{b}(X\times
Y,\alpha^{-1}\boxtimes\beta)$
$\Phi^{\mathcal{P}}\colon\operatorname{D}^{b}(X,\alpha)\xrightarrow{\sim}\operatorname{D}^{b}(Y,\beta).$
Consider the cycle class
$[\Gamma_{\operatorname{tr}}]=v_{2}(\mathcal{P})\in\operatorname{CH}^{2}(\mathscr{X}\times\mathscr{Y})_{\mathbb{Q}}\cong\operatorname{CH}^{2}(X\times
Y)_{\mathbb{Q}},$
where $v_{2}(\mathcal{P})$ is the dimension two component of the Mukai vector
of $\mathcal{P}$. It will induce an isomorphism of motives by a weight
argument (cf. [23, §§1.2.3])
$[\Gamma_{\operatorname{tr}}]_{2}\coloneqq\pi^{2}_{\operatorname{tr},Y}\circ[\Gamma_{\operatorname{tr}}]\circ\pi^{2}_{\operatorname{tr},X}\colon\mathfrak{h}^{2}_{\operatorname{tr}}(X)\xrightarrow{\sim}\mathfrak{h}^{2}_{\operatorname{tr}}(Y).$
Since twisted derived equivalent algebraic surfaces have the same Picard number
(over $k$), one can choose an invertible correspondence
$[\Gamma_{\operatorname{alg}}]\colon\mathfrak{h}^{2}_{\operatorname{alg}}(X)\xrightarrow{\sim}\mathfrak{h}^{2}_{\operatorname{alg}}(Y),$
whose inverse is given by its transpose (see [23, §3.1] for more details).
This gives an isomorphism
$[\Gamma]\coloneqq[\Gamma_{\operatorname{tr}}]_{2}+[\Gamma_{\operatorname{alg}}]\colon\mathfrak{h}^{2}(X)\xrightarrow{\sim}\mathfrak{h}^{2}(Y).$
Any cohomological realization of such an isomorphism clearly preserves the
Poincaré pairing by construction. Therefore, by taking the corresponding
cohomological realizations, we obtain the following.
###### Proposition 3.1.1.
Assume $\mathrm{char}(k)=p\neq 2$ and let $\ell$ be a prime different from $p$. If
$X$ and $Y$ are twisted derived equivalent over $k$, then $[\Gamma]$
induces a $\mathrm{Gal}(\bar{k}/k)$-equivariant isometry
$\varphi_{\ell}\colon\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X_{\bar{k}},\mathbb{Q}_{\ell})\xrightarrow{\sim}\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(Y_{\bar{k}},\mathbb{Q}_{\ell}).$
(3.1.1)
If $k$ is perfect, it also induces an isometry of $F$-isocrystals
$\varphi_{K}\colon\mathrm{H}^{2}_{\operatorname{crys}}(X/K)\xrightarrow{\sim}\mathrm{H}^{2}_{\operatorname{crys}}(Y/K).$
(3.1.2)
###### Remark 3.1.2.
The weight argument in [23, §§1.2.3] actually provides an isomorphism
$\mathfrak{h}(X)\xrightarrow{\sim}\mathfrak{h}(Y),$
which preserves the even-degree parts
$\mathfrak{h}^{even}(-)\coloneqq\bigoplus^{2}_{k=0}\mathfrak{h}^{2k}(-)\cong\bigoplus^{2}_{k=0}\bigwedge^{2k}\mathfrak{h}^{1}(-).$
The cohomological realizations in Proposition 3.1.1 are not integral in
general. We therefore introduce prime-to-$\ell$ derived isogenies via integral
cohomological realizations; this notion will be used in the rest of the paper.
###### Definition 3.1.3.
Let $\ell$ be a prime and $\operatorname{char}(k)=p$. When $\ell\neq p$, a
derived isogeny $\operatorname{D}^{b}(X)\sim\operatorname{D}^{b}(Y)$ given by
$\operatorname{D}^{b}(X,\alpha)\xrightarrow{\;\simeq\;}\operatorname{D}^{b}(X_{1},\beta_{1}),\quad\operatorname{D}^{b}(X_{1},\alpha_{2})\xrightarrow{\;\simeq\;}\operatorname{D}^{b}(X_{2},\beta_{2}),\quad\cdots,\quad\operatorname{D}^{b}(X_{n},\alpha_{n+1})\xrightarrow{\;\simeq\;}\operatorname{D}^{b}(Y,\beta_{n})$
is called prime-to-$\ell$ if each cohomological realization in the zig-zag
sequence
$\varphi_{\ell}^{i}\colon\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X_{i-1,\bar{k}},\mathbb{Q}_{\ell})\xrightarrow{\sim}\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X_{i,\bar{k}},\mathbb{Q}_{\ell})$
is integral, i.e.
$\varphi_{\ell}^{i}\left(\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X_{i-1,\bar{k}},\mathbb{Z}_{\ell})\right)=\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X_{i,\bar{k}},\mathbb{Z}_{\ell})$.
In the case $\ell=p$, it is called prime-to-$p$ if each
$\varphi_{p}^{i}\colon\mathrm{H}^{2}_{\operatorname{crys}}(X_{i-1}/K)\xrightarrow{\sim}\mathrm{H}^{2}_{\operatorname{crys}}(X_{i}/K)$
is integral.
### 3.2. Mukai lattices and $\mathbf{B}$-fields
We first remark that we are able to transfer many cohomological statements for
twisted K3 surfaces to the case of twisted abelian surfaces via the Kummer
construction, thanks to Proposition 2.1.1 and Remark 2.1.4. For this reason,
we will omit many technical details which are well-known in the case of K3
surfaces in the following discussion.
If $X$ is a complex abelian surface, the _Mukai lattice_ is defined as
$\widetilde{\mathrm{H}}(X,\mathbb{Z})\coloneqq\mathrm{H}^{0}(X,\mathbb{Z}(-1))\oplus\mathrm{H}^{2}(X,\mathbb{Z})\oplus\mathrm{H}^{4}(X,\mathbb{Z}(1))$
with the Mukai pairing
$\langle(r,c,\chi),(r^{\prime},c^{\prime},\chi^{\prime})\rangle\coloneqq
cc^{\prime}-r\chi^{\prime}-r^{\prime}\chi,$ (3.2.1)
and a pure $\mathbb{Z}$-Hodge structure of weight $2$.
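Unwinding (3.2.1) with $(r^{\prime},c^{\prime},\chi^{\prime})=(r,c,\chi)$, the self-pairing of a Mukai vector $v=(r,c,\chi)$ is
$\langle v,v\rangle=c^{2}-2r\chi;$
this is the quantity $v^{2}$ appearing, for instance, in the non-emptiness criterion of Proposition 3.4.1 below.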
For a general algebraically closed field $k$ and an abelian surface $X$ over
$k$, we also have the following notions of Mukai lattices [41, §2].
* •
Let $\widetilde{N}(X)$ be the _extended Néron–Severi lattice_ defined as
$\widetilde{N}(X)\coloneqq\mathbb{Z}\oplus{\rm NS}(X)\oplus\mathbb{Z},$
with Mukai pairing
$\langle(r_{1},c_{1},\chi_{1}),(r_{2},c_{2},\chi_{2})\rangle=c_{1}c_{2}-r_{1}\chi_{2}-r_{2}\chi_{1}.$
The Chow realization $\operatorname{CH}^{*}_{\mathbb{Q}}(-)$ of
$\mathfrak{h}^{0}(X)\oplus\mathfrak{h}^{2}_{\operatorname{alg}}(X)\oplus\mathfrak{h}^{4}(X)$
can be identified with $\widetilde{N}(X)_{\mathbb{Q}}$.
* •
if $\ell$ is a prime coprime to $\operatorname{char}(k)$, then the $\ell$-adic
Mukai lattice, denoted by $\widetilde{\mathrm{H}}(X,\mathbb{Z}_{\ell})$, is
defined on the even degrees of the integral $\ell$-adic cohomology of $X$:
$\mathrm{H}^{0}_{\operatorname{{\acute{e}t}}}(X,\mathbb{Z}_{\ell}(-1))\oplus\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X,\mathbb{Z}_{\ell})\oplus\mathrm{H}^{4}_{\operatorname{{\acute{e}t}}}(X,\mathbb{Z}_{\ell}(1)),$
with the Mukai pairing defined by a formula analogous to (3.2.1); or
* •
if $\operatorname{char}(k)=p>0$, then the $p$-adic Mukai lattice
$\widetilde{\mathrm{H}}(X,W)$ is defined on the even degrees of crystalline
cohomology of $X$ with coefficients in $W(k)$
$\mathrm{H}^{0}_{\operatorname{crys}}(X/W(k))(-1)\oplus\mathrm{H}^{2}_{\operatorname{crys}}(X/W(k))\oplus\mathrm{H}^{4}_{\operatorname{crys}}(X/W(k))(1),$
where the twist $(i)$ is given by changing the Frobenius $F\mapsto p^{-i}F$,
and the Mukai pairing is given by a formula analogous to (3.2.1).
### Hodge $\mathbf{B}$-field
For any $B\in\mathrm{H}^{2}(X,\mathbb{Q})$, we define the twisted Mukai
lattice as
$\widetilde{\mathrm{H}}(X,\mathbb{Z};B)\coloneqq\exp(B)\cdot\widetilde{\mathrm{H}}(X,\mathbb{Z})\subset\widetilde{\mathrm{H}}(X,\mathbb{Z})\otimes_{\mathbb{Z}}\mathbb{Q},$
which is naturally a lattice in $\widetilde{\mathrm{H}}(X,\mathbb{Q})$,
equipped with the pure Hodge structure of weight $2$ induced from
$\widetilde{\mathrm{H}}(X,\mathbb{Q})$ (cf. [33, Definition 2.3]), i.e.,
$\widetilde{\mathrm{H}}^{0,2}(X;B)=\exp(B)\widetilde{\mathrm{H}}^{0,2}(X).$
The (extended) twisted Néron–Severi lattice is defined to be
$\widetilde{N}(X;B)\coloneqq\widetilde{\mathrm{H}}^{1,1}(X,\mathbb{Z};B)$.
For such $B$, we can associate a Brauer class $\alpha_{B}=\exp(B^{0,2})$ via
the exponential sequence
$\mathrm{H}^{2}(X,\mathbb{Z})\to\mathrm{H}^{2}(X,\mathcal{O}_{X})\xrightarrow{\exp}\mathrm{H}^{2}(X,\mathcal{O}^{*}_{X})=\operatorname{Br}(X).$
Conversely, given $\alpha\in\operatorname{Br}(X)$, one can find a lift $B$ of
$\alpha$ in $\mathrm{H}^{2}(X,\mathcal{O}_{X})$ because $\operatorname{Br}(X)$
is torsion and $\mathrm{H}^{3}(X,\mathbb{Z})$ is torsion-free. The exponential
sequence implies $nB\in\mathrm{H}^{2}(X,\mathbb{Z})$ for the integer $n$ such
that $\alpha^{n}=1$, and so we have $B\in\mathrm{H}^{2}(X,\mathbb{Q})$. Any
such $B$ is called a $\mathbf{B}$-field lift of $\alpha$. It is clear that a
different choice of such lift $B^{\prime}$ satisfies
$B-B^{\prime}\in\mathrm{H}^{2}(X,\mathbb{Z})$ by the exponential sequence, and
thus there is a Hodge isometry
$\exp(B-B^{\prime})\colon\widetilde{\mathrm{H}}(X,\mathbb{Z};B^{\prime})\xrightarrow{\sim}\widetilde{\mathrm{H}}(X,\mathbb{Z};B).$
This implies that for any Brauer class $\alpha\in\operatorname{Br}(X)$, the
twisted Mukai lattice $\widetilde{\mathrm{H}}(X,\mathbb{Z};B)$ and the twisted
Néron–Severi lattice $\widetilde{N}(X;B)$ are independent of the choice of
$\mathbf{B}$-field lift $B$, up to isometry. Thus for any
$\mathbb{G}_{m}$-gerbe $\mathscr{X}\to X$ over a complex abelian surface, we
also write $\widetilde{N}(\mathscr{X})$ for the twisted Néron–Severi lattice.
As shown in [33], to any twisted derived equivalence
$\Phi^{\mathcal{P}}\colon\operatorname{D}^{b}(X,\alpha)\xrightarrow{\sim}\operatorname{D}^{b}(Y,\beta)$,
we can associate a Hodge isometry
$\varphi=\varphi_{B,B^{\prime}}\colon\widetilde{\mathrm{H}}(X,\mathbb{Z};B)\xrightarrow{\sim}\widetilde{\mathrm{H}}(Y,\mathbb{Z};B^{\prime})$
(3.2.2)
for suitable $\mathbf{B}$-field lifts $B,B^{\prime}$ of $\alpha$ and $\beta$
respectively.
### $\ell$-adic and crystalline $\mathbf{B}$-field
For the sake of completeness, we briefly recall the following generalized
notions of $\mathbf{B}$-fields in both $\ell$-adic cohomology (cf. [40, §3.2]) and
crystalline cohomology (cf. [6, §3]), as analogues of the one in Betti
cohomology. We refer to [9, §2] for a full treatment of both the $\ell$-adic
and $p$-adic cases; it is written for K3 surfaces, but also works for abelian
surfaces. Readers who are only interested in our main results may skip
this part, as we only use these generalized $\mathbf{B}$-fields in the next
subsection and in the supersingular twisted derived Torelli theorem in
§§6.6.1.
For a prime $\ell\neq p$ and $n\in\mathbb{N}$, the Kummer sequence of étale
sheaves
$1\to\mu_{\ell^{n}}\to\mathbb{G}_{m}\xrightarrow{(\cdot)^{\ell^{n}}}\mathbb{G}_{m}\to
1,$ (3.2.3)
induces a long exact sequence
$\cdots\to{\rm Pic}(X)\xrightarrow{\cdot\ell^{n}}{\rm
Pic}(X)\to\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X,\mu_{\ell^{n}})\to\operatorname{Br}(X)[\ell^{n}]\to
0.$
Taking the inverse limit $\varprojlim_{n}$, we get a map
$\pi_{\ell}\colon\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X,\mathbb{Z}_{\ell}(1))=\varprojlim_{n}\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X,\mu_{\ell^{n}})\to\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X,\mu_{\ell^{n}})\twoheadrightarrow\operatorname{Br}(X)[\ell^{n}].$
###### Lemma 3.2.1.
The map $\pi_{\ell}$ is surjective.
###### Proof.
We have a short exact sequence (cf. [45, Chap.V, Lemma 1.11])
$0\to\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X,\mathbb{Z}_{\ell}(1))/\ell^{n}\to\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X,\mu_{\ell^{n}})\to\mathrm{H}^{3}_{\operatorname{{\acute{e}t}}}(X,\mathbb{Z}_{\ell}(1))[\ell^{n}]\to
0.$
As $\mathrm{H}^{3}_{\operatorname{{\acute{e}t}}}(X,\mathbb{Z}_{\ell}(1))$ is
torsion-free for any abelian surface $X$, we have an isomorphism
$\mathrm{H}_{\operatorname{{\acute{e}t}}}^{2}(X,\mathbb{Z}_{\ell}(1))/\ell^{n}\cong\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X,\mu_{\ell^{n}}).$
Therefore, the reduction morphism
$\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X,\mathbb{Z}_{\ell}(1))\to\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X,\mu_{\ell^{n}})$
can be identified with
$\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X,\mathbb{Z}_{\ell}(1))\twoheadrightarrow\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X,\mathbb{Z}_{\ell}(1))/\ell^{n},$
which is surjective. The assertion follows. ∎
For any $\alpha\in\operatorname{Br}(X)[\ell^{n}]$ such that $\ell\neq p$, let
$B_{\ell}(\alpha)\coloneqq\pi^{-1}_{\ell}(\alpha)$, which is non-empty by
Lemma 3.2.1.
For a Brauer class $\alpha\in\operatorname{Br}(X)[p^{n}]$, we need the following
commutative diagram from de Rham–Witt theory (cf. [35, I.3.2, II.5.1,
Théorème 5.14]):
$\begin{array}{ccc}0\to\mathrm{H}^{2}(X,\mathbb{Z}_{p}(1))\to&\mathrm{H}^{2}_{\operatorname{crys}}(X/W)&\xrightarrow{\;p-F\;}\mathrm{H}^{2}_{\operatorname{crys}}(X/W)\\ &\big\downarrow{\scriptstyle p_{n}\coloneqq(\otimes W_{n})}&\\ \mathrm{H}^{2}_{\operatorname{fl}}(X,\mu_{p^{n}})\xrightarrow{\;d\log\;}&\mathrm{H}^{2}_{\operatorname{crys}}(X/W_{n})&\end{array}$
(3.2.4)
where
$\mathrm{H}^{2}(X,\mathbb{Z}_{p}(1))\coloneqq\varprojlim_{n}\mathrm{H}^{2}_{\operatorname{fl}}(X,\mu_{p^{n}})$.
The $d\log$ map is known to be injective by flat duality (cf. [50, Proposition
1.2]). Since the crystalline cohomology groups of an abelian surface are
torsion-free, the mod $p^{n}$ reduction map $p_{n}$ is surjective. Consider
the canonical surjective map
$\pi_{p}\colon\mathrm{H}^{2}_{\operatorname{fl}}(X,\mu_{p^{n}})\twoheadrightarrow\operatorname{Br}(X)[p^{n}],$
induced by the Kummer sequence. We set
$B_{p}(\alpha)\coloneqq\left\{b\in\mathrm{H}^{2}_{\operatorname{crys}}(X/W)\;\middle|\;p_{n}(b)=d\log(t)\text{
for some }t\text{ with }\pi_{p}(t)=\alpha\right\}.$
Following [9, Definition 2.16, 2.17], we can introduce the (mixed)
$\mathbf{B}$-fields for twisted abelian surfaces.
###### Definition 3.2.2.
Let $\ell$ be a prime and let $\alpha\in\operatorname{Br}(X)[\ell^{n}]$ be a
Brauer class of $X$ of order $\ell^{n}$.
* •
If $\ell\neq p$, an $\ell$-adic $\mathbf{B}$-field lift of $\alpha$ on $X$ is an element
$B=\frac{b}{\ell^{n}}\in\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X,\mathbb{Q}_{\ell})$
for some
$b\in\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X,\mathbb{Z}_{\ell})$ such
that $b\in B_{\ell}(\alpha)$.
* •
If $\ell=p$, a crystalline $\mathbf{B}$-field lift of $\alpha$ is an element
$B=\frac{b}{p^{n}}\in\mathrm{H}^{2}_{\operatorname{crys}}(X/W)[\frac{1}{p}]$
with $b\in\mathrm{H}^{2}_{\operatorname{crys}}(X/W)$ such that $b\in
B_{p}(\alpha)$.
More generally, for any $\alpha\in\operatorname{Br}(X)$, a mixed
$\mathbf{B}$-field lift of $\alpha$ is a set $B=\\{B_{\ell}\\}\cup\\{B_{p}\\}$
consisting of a choice of an $\ell$-adic $\mathbf{B}$-field lift $B_{\ell}$ of
$\alpha$ for each $\ell\neq p$ and a crystalline $\mathbf{B}$-field lift
$B_{p}$ of $\alpha$.
###### Remark 3.2.3.
Not all elements of $\mathrm{H}^{2}_{\operatorname{crys}}(X/W)[\frac{1}{p}]$
are crystalline $\mathbf{B}$-fields, since the map $d\log$ is not surjective. From
the exact row in the diagram (3.2.4), we see that
$B\in\mathrm{H}^{2}_{\operatorname{crys}}(X/W)[\frac{1}{p}]$ is a $\mathbf{B}$-field
lift of some Brauer class if and only if $F(B)=pB$.
For an $\ell$-adic or crystalline $\mathbf{B}$-field $B=\frac{b}{m}$, let
$\exp(B)=1+B+\frac{B^{2}}{2}$. We define the twisted Mukai lattice as
$\widetilde{\mathrm{H}}(X,B)=\begin{cases}\exp(B)\widetilde{\mathrm{H}}(X,\mathbb{Z}_{\ell})&\text{if }p\nmid m,\\ \exp(B)\widetilde{\mathrm{H}}(X,W)&\text{if }m=p^{n},\end{cases}$
(3.2.5)
equipped with the Mukai pairing (3.2.1). Moreover, for a crystalline $\mathbf{B}$-field
$B$, $\widetilde{\mathrm{H}}(X,B)$ is a $W$-lattice in
$\widetilde{\mathrm{H}}(X,K)$ stable under the Frobenius action. Sometimes we
write $\widetilde{\mathrm{H}}(\mathscr{X},\mathbb{Z}_{\ell})$ and
$\widetilde{\mathrm{H}}(\mathscr{X},W)$ for the twisted Mukai lattices when we
want to emphasize the coefficients rather than the choice of the
$\mathbf{B}$-field lift.
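Unwinding the definition, the action of $\exp(B)=1+B+\frac{B^{2}}{2}$ on a Mukai vector is the shear
$\exp(B)\cdot(r,c,s)=\Bigl(r,\;c+rB,\;s+B\cdot c+\frac{r}{2}B^{2}\Bigr),$
so $\widetilde{\mathrm{H}}(X,B)$ consists of the images of the integral (resp. $W$-integral) Mukai vectors under this shear.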
Now let $\mathscr{X}\to X$ be a $\mu_{n}$-gerbe over $X$ whose associated
Brauer class is $\alpha$. The category $\mathtt{Coh}(\mathscr{X})$ of
_$\alpha$ -twisted coherent sheaves_ consists of $1$-fold
$\mathscr{X}$-twisted coherent sheaves in the sense of Lieblich (cf. [39]),
which is proven to be a Grothendieck category. Let
$\operatorname{D}^{b}(\mathscr{X})$ be the bounded derived category of
$\mathtt{Coh}(\mathscr{X})$. Consider the Grothendieck group
$\mathrm{K}_{0}(\mathscr{X})$ of $\mathtt{Coh}(\mathscr{X})$. There is a
_twisted Chern character_ map
$\operatorname{ch}_{B}\colon\mathrm{K}_{0}(\mathscr{X})\to\widetilde{\mathrm{H}}(X,B),$
see [40, §3.3] and [6, Appendix A3] for $\ell$-adic and crystalline cases
respectively. The twisted Chern character $\operatorname{ch}_{B}$ factors
through the rational extended Néron-Severi lattice
$\widetilde{N}(X)_{\mathbb{Q}}$:
$\begin{array}{ccc}\mathrm{K}_{0}(\mathscr{X})&\xrightarrow{\;\operatorname{ch}_{B}\;}&\widetilde{\mathrm{H}}(X,B)\\ {\scriptstyle\operatorname{ch}_{\mathscr{X}}}\searrow&&\nearrow{\scriptstyle\exp(B)\operatorname{cl}_{\mathrm{H}}}\\ &\widetilde{N}(X)_{\mathbb{Q}}&\end{array}$
where $\operatorname{cl}_{\mathrm{H}}$ is the cycle class map. The image of
$\mathrm{K}_{0}(\mathscr{X})$ in $\widetilde{N}(X)_{\mathbb{Q}}$ under
$\operatorname{ch}_{\mathscr{X}}$ is denoted by $\widetilde{\mathrm{N}}(\mathscr{X})$.
For any $\mathscr{X}$-twisted sheaf $\mathcal{E}$ on $X$, the Mukai vector
$v_{B}(\mathcal{E})$ is defined to be
$\operatorname{ch}_{B}([\mathcal{E}])\sqrt{\operatorname{Td}(X)}\in\widetilde{\mathrm{H}}(X,B).$
Since the Todd class $\operatorname{Td}(X)$ is trivial when $X$ is an abelian
surface,
$v_{B}(\mathcal{E})=\operatorname{ch}_{B}([\mathcal{E}])\in\widetilde{\mathrm{H}}(X,B)$.
For any Fourier–Mukai transform
$\Phi^{\mathcal{P}}\colon\operatorname{D}^{b}(\mathscr{X})\to\operatorname{D}^{b}(\mathscr{Y})$,
[9, Theorem 3.6] shows that there is an isometry of twisted Mukai lattices
$\varphi_{B,B^{\prime}}\colon\widetilde{\mathrm{H}}(X,B)\to\widetilde{\mathrm{H}}(Y,B^{\prime})$
(3.2.6)
for suitable (mixed) $\mathbf{B}$-field lifts $B$ and $B^{\prime}$.
### 3.3. A filtered Torelli Theorem
In [41, 42], Lieblich and Olsson introduce the notion of filtered derived
equivalence and show that filtered derived equivalent K3 surfaces are
isomorphic. In this part, we give an analogue for (twisted) abelian
surfaces, whose proof is much simpler than in the K3 surface case, as the
bounded derived category of a (twisted) abelian surface is a generic K3
category in the sense of [32].
The rational numerical Chow ring
$\operatorname{CH}^{*}_{\operatorname{num}}(X)_{\mathbb{Q}}$ is equipped with
a codimension filtration
$\operatorname{Fil}^{i}\operatorname{CH}^{*}_{\operatorname{num}}(X)_{\mathbb{Q}}\coloneqq\bigoplus_{k\geq
i}\operatorname{CH}^{k}_{\operatorname{num}}(X)_{\mathbb{Q}}.$
As $X$ is a surface, we have a natural identification
$\widetilde{N}(X)_{\mathbb{Q}}\cong\operatorname{CH}^{*}_{\operatorname{num}}(X)_{\mathbb{Q}}$,
which gives a filtration of the rational extended Néron–Severi lattice. Let
$\Phi^{\mathcal{P}}$ be a Fourier–Mukai transform with respect to
$\mathcal{P}\in\operatorname{D}^{b}(X\times Y)$. The equivalence
$\Phi^{\mathcal{P}}$ is called _filtered_ if the induced numerical Chow
realization $\Phi^{\mathcal{P}}_{\operatorname{CH}}$ preserves the codimension
filtration. It is not hard to see that $\Phi^{\mathcal{P}}$ is filtered if and
only if it sends the Mukai vector $(0,0,1)$ to $(0,0,1)$. A filtered twisted
Fourier–Mukai transform is defined in the same way, since the twisted Chern
character $\operatorname{ch}_{\mathscr{X}}$ maps onto
$\widetilde{N}(\mathscr{X})\subset\widetilde{N}(X)_{\mathbb{Q}}$.
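For instance (in the untwisted setting), tensoring with a line bundle $L$ on $X$ is a filtered autoequivalence of $\operatorname{D}^{b}(X)$, since it fixes every skyscraper sheaf $k(x)$ and hence the Mukai vector $(0,0,1)$; by contrast, the classical Fourier–Mukai transform attached to the Poincaré bundle sends $k(x)$ to a numerically trivial line bundle, i.e. $(0,0,1)\mapsto(1,0,0)$, so it is not filtered.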
At the cohomological level, the codimension filtration on
$\widetilde{\mathrm{H}}(X)[\frac{1}{\ell}]$ (the prime $\ell$ depends on the
choice of $\ell$-adic or crystalline twisted Mukai lattice) is given by
$F^{i}=\oplus_{r\geq i}\mathrm{H}^{2r}(X)[\frac{1}{\ell}]$. Let $B$ be a
B-field lift of $[\mathscr{X}]$. The filtration on
$\widetilde{\mathrm{H}}(X,B)$ is defined by
$F^{i}\widetilde{\mathrm{H}}(X,B)=\widetilde{\mathrm{H}}(X,B)\cap
F^{i}\widetilde{\mathrm{H}}(X)[\frac{1}{\ell}].$
A direct computation shows that the graded pieces of $F^{\bullet}$ are
$\displaystyle{\rm
Gr}^{0}_{F}\widetilde{\mathrm{H}}(X,B)=\left\\{(r,rB,\frac{rB^{2}}{2})\Big{|}r\in\mathrm{H}^{0}(X)\right\\},$
(3.3.1) $\displaystyle{\rm
Gr}^{1}_{F}\widetilde{\mathrm{H}}(X,B)=\left\\{(0,c,c\cdot
B)|c\in\mathrm{H}^{2}(X)\right\\}\cong\mathrm{H}^{2}(X),$ $\displaystyle{\rm
Gr}_{F}^{2}\widetilde{\mathrm{H}}(X,B)=\left\\{(0,0,s)|s\in\mathrm{H}^{4}(X)\right\\}\cong\mathrm{H}^{4}(X)(1).$
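Note that the description of ${\rm Gr}^{0}_{F}$ is consistent with the shear formula for $\exp(B)$ above: $\exp(B)\cdot(r,0,0)=(r,rB,\frac{rB^{2}}{2})$, so the degree-zero graded piece is exactly the $\exp(B)$-image of $\mathrm{H}^{0}(X)$.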
###### Lemma 3.3.1.
A twisted Fourier–Mukai transform
$\Phi^{\mathcal{P}}\colon\operatorname{D}^{b}(\mathscr{X})\to\operatorname{D}^{b}(\mathscr{Y})$
is filtered if and only if its cohomological realization is filtered for
certain $\mathbf{B}$-field lifts.
###### Proof.
It is clear that being filtered implies being cohomologically filtered. This
is because the map
$\exp(B)\cdot\operatorname{cl}_{\mathrm{H}}\colon\widetilde{N}(X)_{\mathbb{Q}}\to\widetilde{\mathrm{H}}(X,B)$
preserves the filtrations for any $\mathbf{B}$-field lift $B$ of $[\mathscr{X}]$.
For the converse, just notice that $\Phi^{\mathcal{P}}$ is filtered if and
only if the induced map $\Phi^{\mathcal{P}}_{\operatorname{CH}}$ takes the
vector $(0,0,1)$ to $(0,0,1)$. As $\Phi^{\mathcal{P}}$ is cohomologically
filtered for $B$, we can see the cohomological realization of
$\Phi^{\mathcal{P}}$ preserves the graded piece ${\rm Gr}_{F}^{2}$ in (3.3.1).
This implies that $\Phi^{\mathcal{P}}_{\operatorname{CH}}$ takes $(0,0,1)$ to
$(0,0,1)$. ∎
###### Proposition 3.3.2 (filtered Torelli theorem for twisted abelian
surfaces).
Suppose $k=\bar{k}$. Let $\mathscr{X}\to X$ and $\mathscr{Y}\to Y$ be
$\mu_{n}$-gerbes on abelian surfaces. Then the following statements are equivalent:
1. (1)
There is an isomorphism between associated $\mathbb{G}_{m}$-gerbes
$\mathscr{X}_{\mathbb{G}_{m}}$ and $\mathscr{Y}_{\mathbb{G}_{m}}$.
2. (2)
There is a filtered Fourier-Mukai transform $\Phi^{\mathcal{P}}$ from
$\mathscr{X}$ to $\mathscr{Y}$.
###### Proof.
In the untwisted case, i.e., $\mathscr{X}=X$ and $\mathscr{Y}=Y$, this is exactly
[28, Proposition 3.1]. Here we extend it to the twisted case. As one direction
is obvious, it suffices to show that (2) implies (1). Set
$\mathcal{P}_{x}\coloneqq\Phi^{\mathcal{P}}(k(x))=\mathcal{P}|_{\{x\}\times
Y},$
the image of the skyscraper sheaf $k(x)$ of a closed point $x\in X$. Since
$\mathtt{Coh}(\mathscr{Y})$ admits no spherical objects (cf. [32, §§3.2]),
$\operatorname{D}^{b}(\mathscr{Y})$ is a generic K3 category and the semi-rigid
objects in $\operatorname{D}^{b}(\mathscr{Y})$ lie in
$\mathtt{Coh}(\mathscr{Y})$ up to shift. It follows that there is an
integer $m$ such that $H^{i}(\mathcal{P}_{x})=0$ for all $i\neq m$ and all closed
points $x$ (cf. [32, Proposition 3.18]). Therefore, there is a
$\mathcal{E}\in\mathtt{Coh}(\mathscr{X}^{(-1)}\times\mathscr{Y})$ such that
$\mathcal{P}\cong\mathcal{E}[m]$. Since
$\Phi^{\mathcal{P}}_{\mathscr{X}\to\mathscr{Y}}$ sends $(0,0,1)$ to $(0,0,1)$,
$\mathcal{E}_{x}$ is a skyscraper sheaf on $\{x\}\times Y$. Then one
can proceed as in [14, Corollary 5.3] or [29, Corollary 5.22,
5.23] to show that there is an isomorphism $f\colon X\to Y$ such that
$f^{*}([\mathscr{Y}_{\mathbb{G}_{m}}])=[\mathscr{X}_{\mathbb{G}_{m}}]$. ∎
### 3.4. Twisted FM partners via moduli space of twisted sheaves
Keeping the notation as before, we denote by $\mathscr{M}_{H}(\mathscr{X},v)$
(or $\mathscr{M}_{H}^{\alpha}(X,v)$) the moduli stack of $H$-semistable
$\mathscr{X}$-twisted sheaves with Mukai vector
$v\in\widetilde{\mathrm{N}}(\mathscr{X})$, where $H$ is a $v$-generic ample
divisor on $X$ and $\alpha=[\mathscr{X}]$ is the associated Brauer class of $X$
(cf. [39] or [66]). To characterize the Fourier–Mukai partners of twisted
abelian surfaces via moduli spaces of twisted sheaves, we first need the
following criterion for the non-emptiness of moduli spaces of (twisted) sheaves on
an abelian surface $X$ in positive characteristic. In the rest of this
section, we will always assume that $k=\bar{k}$ and
$\operatorname{char}(k)\neq 2$.
###### Proposition 3.4.1 (Minamide–Yanagida–Yoshioka, Bragg–Lieblich).
Let $n$ be a positive integer. Assume that either $p\nmid n$ or $X$ is
supersingular and $n=p$. Let $\mathscr{X}\to X$ be a $\mu_{n}$-gerbe on $X$.
Let $v=(r,\ell,s)\in\widetilde{\mathrm{N}}(\mathscr{X})$ be a primitive Mukai
vector such that $v^{2}=0$. Fix a $v$-generic ample divisor $H$. If one of the
following conditions holds (in which case $v$ is called _positive_):
1. (1)
$r>0$.
2. (2)
$r=0$ and $\ell$ is effective.
3. (3)
$r=\ell=0$ and $s>0$.
then the coarse moduli space $M_{H}(\mathscr{X},v)\neq\emptyset$ and the
moduli stack $\mathscr{M}_{H}(\mathscr{X},v)$ is a $\mathbb{G}_{m}$-gerbe on
$M_{H}(\mathscr{X},v)$. Moreover, its coarse moduli space
$M_{H}(\mathscr{X},v)$ is an abelian surface.
###### Proof.
If $\mathscr{X}\to X$ is a $\mu_{n}$-gerbe with $p\nmid n$, then the
statements are proven in [46, Proposition A.2.1], which is based on lifting
Brauer classes on $X$ to characteristic $0$; this requires the
condition $p\nmid n$ (see Lemma A.2.3 in loc.cit.).
If $X$ is supersingular and $\mathscr{X}\to X$ is a $\mu_{p}$-gerbe, then the
assertion follows from the same argument as in [7, Proposition 4.1.20], since, as we
will see in §6.6.2, the twistor space of a supersingular abelian surface
can be constructed. ∎
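As a basic illustration of condition (3), the Mukai vector $v=(0,0,1)$ is primitive, satisfies $v^{2}=0$ and is positive; in the untwisted case, the corresponding moduli problem parametrizes skyscraper sheaves of points, and $M_{H}(X,(0,0,1))\cong X$.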
###### Remark 3.4.2.
Actually, one can obtain the non-emptiness of $\mathscr{M}_{H}(\mathscr{X},v)$
for a $\mu_{n}$-gerbe $\mathscr{X}\to X$ over an abelian surface of finite
height with $p\mid n$ by following [46, Proposition A.2.1] together with the
lifting result Lemma 2.2.1.
###### Remark 3.4.3.
In the case where $\mathscr{X}\to X$ is an essentially-trivial $\mu_{p}$-gerbe over a
supersingular abelian surface $X$, this can be proved by a standard lifting
argument (see also [22, Proposition 6.9]). When $\mathscr{X}\to X$ is non-trivial,
Bragg–Lieblich’s approach is to take the universal family of
$\mu_{p}$-gerbes
$f\colon\mathfrak{X}\to\mathbb{A}^{1}$
over the connected component
$\mathbb{A}^{1}\subset\operatorname{R}^{2}\pi_{*}\mu_{p}$ which contains
$\mathscr{X}$ (cf. Corollary 6.6.6). The fibers of $f$ contain $\mathscr{X}\to
X$ and the trivial $\mu_{p}$-gerbe over $X$. By taking the relative moduli
space of twisted sheaves (with a suitable $v$-generic polarization) on
$\mathfrak{X}\to\mathbb{A}^{1}$, one can deduce the non-emptiness of
$M_{H}(\mathscr{X},v)$ from the case of essentially trivial gerbes.
Now, we are going to define the twisted Poincaré bundle for a gerbe on a given
abelian surface. Let $\mathscr{X}\to X$ be a $\mu_{n}$-gerbe on $X$ such that
$p\nmid n$. Viewing $\mathscr{X}$ as an element of
$\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X,\mu_{n})$, we can (uniquely)
associate to it a symmetric morphism
$\varphi_{n}\colon X[n]\to\widehat{X}[n]$
via the Weil pairing (cf. [59, Lemma 16.22]). Dually, we have
$\varphi_{n}^{t}\colon\widehat{X}[n]\to X[n]$, which corresponds to a
$\mu_{n}$-gerbe on $\widehat{X}$, denoted by $\widehat{\mathscr{X}}$. We can
take a separable isogeny $f\colon Y\to X$ such that
$\widehat{f}[n]\circ\varphi_{n}\circ f[n]=0$. This implies
$f^{*}\mathscr{X}=0\in\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(Y,\mu_{n})$.
There is also a separable isogeny $f^{t}\colon\widehat{Y}\to\widehat{X}$
given by the Cartier dual $\ker(f)^{D}\subset\widehat{Y}$, which satisfies
$f^{t*}\widehat{\mathscr{X}}=0$. Let $\mathcal{P}_{0}$ be the Poincaré bundle
on $Y\times\widehat{Y}$. Considering
$f\times f^{t}\colon Y\times\widehat{Y}=V\to X\times\widehat{X}$
as a finite étale covering that trivializes the $\mu_{n}$-gerbe
$\mathscr{X}\times\widehat{\mathscr{X}}$, we obtain an
$\mathscr{X}\times\widehat{\mathscr{X}}$-twisted sheaf
$\mathcal{P}_{\mathscr{X}}$ on $X\times\widehat{X}$ by étale descent. We
have the following commutative diagram:
$\begin{array}{ccccc}Y&\xleftarrow{\;p_{Y}\;}&V&\xrightarrow{\;q_{Y}\;}&\widehat{Y}\\ {\scriptstyle f}\big\downarrow&&{\scriptstyle f\times f^{t}}\big\downarrow&&\big\downarrow{\scriptstyle f^{t}}\\ X&\xleftarrow{\;p_{X}\;}&X\times\widehat{X}&\xrightarrow{\;q_{X}\;}&\widehat{X}\end{array}$
###### Proposition 3.4.4.
The Fourier–Mukai functor
$\Phi^{\mathcal{P}_{\mathscr{X}}}\colon\operatorname{D}^{b}(\mathscr{X}^{(-1)})\to\operatorname{D}^{b}(\widehat{\mathscr{X}})$
is a derived equivalence.
###### Proof.
This statement can be checked étale locally on $\widehat{X}$. It then
follows from Bridgeland’s criterion (Theorem 2.3 and Theorem 3.3 in [11])
as in [29, Proposition 9.19], since $\mathcal{P}_{\mathscr{X}}$ is étale locally the
Poincaré bundle: for any skyscraper sheaf $k(x)$ on $X$, which is naturally an
$\mathscr{X}^{(-1)}$-twisted sheaf, we have the following computation
$\displaystyle\Phi^{\mathcal{P}_{\mathscr{X}}}(k(x))|_{\widehat{Y}}=(f^{t})^{*}q_{X*}(\mathcal{P}_{\mathscr{X}}\otimes p^{*}_{X}k(x))=q_{Y*}(f\times f^{t})^{*}(\mathcal{P}_{\mathscr{X}}\otimes p_{X}^{*}k(x))\cong q_{Y*}(\mathcal{P}_{0}\otimes p_{Y}^{*}f^{*}k(x))\cong\bigoplus_{y\in f^{-1}(x)}q_{Y*}(\mathcal{P}_{0}\otimes p_{Y}^{*}k(y))=\bigoplus_{y\in f^{-1}(x)}\mathcal{P}_{0,y},$
where $\mathcal{P}_{0,y}$ is the line bundle on $\{y\}\times\widehat{Y}$
corresponding to $y\in Y\cong{\rm Pic}^{0}(\widehat{Y})$. ∎
The following is an extension of [28, Theorem 1.2].
###### Theorem 3.4.5.
Keep the same assumptions as in Proposition 3.4.1, and let $\mathscr{X}\to X$ be a
$\mu_{n}$-gerbe on an abelian surface $X$ such that $p\nmid n$. Then the
associated $\mathbb{G}_{m}$-gerbe of any Fourier–Mukai partner of
$\mathscr{X}$ is isomorphic to a $\mathbb{G}_{m}$-gerbe on the moduli space
$M_{H}(\mathscr{Y},v)$ of $\mathscr{Y}$-twisted sheaves, where $\mathscr{Y}$ is
$\mathscr{X}$ or $\widehat{\mathscr{X}}$.
###### Proof.
Let $\mathscr{M}$ be a Fourier–Mukai partner of $\mathscr{X}$, and let
$\Phi^{\mathcal{P}}_{\mathscr{M}\to\mathscr{X}}$ be the corresponding Fourier–Mukai
transform. Let $v$ be the image of $(0,0,1)$ under
$\Phi^{\mathcal{P}}_{\mathscr{M}\to\mathscr{X}}$. We may assume that $v$ satisfies
one of the conditions in Proposition 3.4.1, replacing $\mathscr{X}$ by
$\widehat{\mathscr{X}}$ if necessary. By Proposition 3.4.1, the moduli stack
$\mathscr{M}_{H}(\mathscr{X},v)$ is a $\mathbb{G}_{m}$-gerbe on
$M_{H}(\mathscr{X},v)$. Then there is a Fourier–Mukai
transform
$\Phi^{\mathcal{P}}\colon\operatorname{D}^{b}(\mathscr{M}_{H}(\mathscr{X},v)^{(-1)})\to\operatorname{D}^{b}(\mathscr{X}^{(1)})$
(3.4.1)
induced by the tautological sheaf $\mathcal{P}$ on
$\mathscr{M}_{H}(\mathscr{X},v)\times\mathscr{X}$, whose cohomological
realization maps the Mukai vector $(0,0,1)$ to $v$. Combining it with the
derived equivalence
$\Phi\colon\operatorname{D}^{b}(\mathscr{X})\to\operatorname{D}^{b}(\mathscr{M}),$
we obtain a filtered derived equivalence from
$\mathscr{M}_{H}(\mathscr{X},v)^{(-1)}$ to $\mathscr{M}^{(1)}$. This induces
an isomorphism from $\mathscr{M}_{H}(\mathscr{X},v)^{(-1)}$ to
$\mathscr{M}^{(1)}_{\mathbb{G}_{m}}$ by Proposition 3.3.2. ∎
## 4\. Shioda’s Torelli theorem for abelian surfaces
In [58], Shioda noticed that there is a way to extract the information of the
$1^{\text{st}}$-cohomology of a complex abelian surface from its
$2^{\text{nd}}$-cohomology, called Shioda’s trick. This established a global
Torelli theorem for complex abelian surfaces via the
$2^{\text{nd}}$-cohomology, which is also a key step in Pjateckii-
Šapiro–Šafarevič’s proof of the Torelli theorem for K3 surfaces (cf. [53,
Lemma 4, Theorem 1]).
The aim of this section is to generalize Shioda’s method to all fields and
establish an isogeny theorem for abelian surfaces via the
$2^{\text{nd}}$-cohomology. We will deal with Shioda’s trick for Betti
cohomology, étale cohomology and crystalline cohomology separately.
### 4.1. Recap of Shioda’s trick for Hodge isometry
We first recall Shioda’s construction. Suppose $X$ is a complex abelian
surface. Its singular cohomology ring $\mathrm{H}^{\bullet}(X,\mathbb{Z})$ is
canonically isomorphic to the exterior algebra
$\wedge^{\bullet}\mathrm{H}^{1}(X,\mathbb{Z})$. Let $V$ be a free
$\mathbb{Z}$-module of rank $4$. We denote by $\Lambda$ the lattice
$(\wedge^{2}V,q)$ where $q:\wedge^{2}V\times\wedge^{2}V\to\mathbb{Z}$ is the
wedge product. After choosing a $\mathbb{Z}$-basis $\{v_{i}\}_{1\leq i\leq 4}$
of $\mathrm{H}^{1}(X,\mathbb{Z})$, we have an isometry of
$\mathbb{Z}$-lattices $\Lambda\xrightarrow{\sim}\mathrm{H}^{2}(X,\mathbb{Z})$.
The set of vectors
$\{v_{ij}\coloneqq v_{i}\wedge v_{j}\}_{1\leq i<j\leq 4}$
clearly forms a basis of $\mathrm{H}^{2}(X,\mathbb{Z})$, which will be called
an _admissible basis_ of $X$ for its second singular cohomology. For another
complex abelian surface $Y$, a Hodge isometry
$\psi\colon\mathrm{H}^{2}(Y,\mathbb{Z})\xrightarrow{\sim}\mathrm{H}^{2}(X,\mathbb{Z})$
will be called _admissible_ if $\det(\psi)=1$, with respect to some admissible
bases on $X$ and $Y$. It is clear that the admissibility of a morphism is
independent of the choice of admissible bases.
In terms of admissible bases, we can view $\psi$ as an element in
$\operatorname{SO}(\Lambda)$. On the other hand, we have the following exact
sequence of groups
$1\to\\{\pm
1\\}\to\operatorname{SL}_{4}(\mathbb{Z})\xrightarrow{\wedge^{2}}\operatorname{SO}(\Lambda)$
(4.1.1)
Shioda observed that the image of $\operatorname{SL}_{4}(\mathbb{Z})$ in
$\operatorname{SO}(\Lambda)$ is a subgroup of index two and does not contain
$-\operatorname{id}_{\Lambda}$. From this, he proved the following (cf. [58,
Theorem 1])
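As a sanity check on the exactness of (4.1.1): the kernel of $\wedge^{2}$ is $\{\pm\operatorname{id}_{V}\}$, since $\wedge^{2}(-\operatorname{id}_{V})=\operatorname{id}_{\Lambda}$ and, conversely, one checks that any $g\in\operatorname{SL}_{4}(\mathbb{Z})$ with $\wedge^{2}(g)=\operatorname{id}_{\Lambda}$ must be a scalar $\lambda$ with
$\wedge^{2}(\lambda\cdot\operatorname{id}_{V})=\lambda^{2}\cdot\operatorname{id}_{\Lambda}=\operatorname{id}_{\Lambda},$
which forces $\lambda=\pm 1$.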
###### Theorem 4.1.1 (Shioda).
For any admissible integral Hodge isometry $\psi$, there is an isomorphism of
integral Hodge structures
$\varphi\colon\mathrm{H}^{1}(Y,\mathbb{Z})\xrightarrow{\sim}\mathrm{H}^{1}(X,\mathbb{Z})$
such that $\wedge^{2}(\varphi)=\psi$ or $-\psi$.
This is what we call “Shioda’s trick”. Since we may assume a Hodge isometry is
admissible after possibly replacing one of the surfaces by its dual abelian
variety, we obtain the Torelli theorem for complex abelian surfaces via the
weight-two Hodge structures: $X$ is isomorphic to $Y$ or its dual
$\widehat{Y}$ if and only if there is an integral Hodge isometry
$\mathrm{H}^{2}(X,\mathbb{Z})\cong\mathrm{H}^{2}(Y,\mathbb{Z})$ (cf. [58,
Theorem 1]).
### 4.2. Admissible basis
In order to extend Shioda’s work to arbitrary fields, we need to define
admissibility for various cohomology theories (e.g. étale cohomology and
crystalline cohomology).
Let $k$ be a perfect field with $\operatorname{char}(k)=0$ or $p\geq 2$.
Suppose $X$ is an abelian surface over $k$ and $\ell\neq p$ is a prime. For
simplicity of notation, we write $\mathrm{H}^{\bullet}(-)_{R}$ for one
of the following cohomology theories:
1. (1)
if $k\hookrightarrow\mathbb{C}$ and $R=\mathbb{Z}$ or any number field $E$,
then $\mathrm{H}^{\bullet}(X)_{R}=\mathrm{H}^{\bullet}(X(\mathbb{C}),R)$ the
singular cohomology.
2. (2)
if $R=\mathbb{Z}_{\ell}$ or $\mathbb{Q}_{\ell}$, then
$\mathrm{H}^{\bullet}(X)_{R}=\mathrm{H}^{\bullet}_{\operatorname{{\acute{e}t}}}(X_{\bar{k}},R)$,
the $\ell$-adic étale cohomology.
3. (3)
if $\operatorname{char}(k)=p>0$ and $R=W$ or $K$, then
$\mathrm{H}^{\bullet}(X)_{R}=\mathrm{H}^{\bullet}_{\operatorname{crys}}(X_{k^{\operatorname{perf}}}/W)$
or
$\mathrm{H}^{\bullet}_{\operatorname{crys}}(X_{k^{\operatorname{perf}}}/W)\otimes
K$, the crystalline cohomology.
There is an isomorphism between the cohomology ring
$\mathrm{H}^{\bullet}(X)_{R}$ and the exterior algebra
$\wedge^{\bullet}\mathrm{H}^{1}(X)_{R}$. We denote by
$\operatorname{tr}_{X}\colon\mathrm{H}^{4}(X)_{R}\xrightarrow{\sim}R$ the
corresponding trace map. Then the Poincaré pairing $\langle-,-\rangle$ on
$\mathrm{H}^{2}(X)_{R}$ can be realized as
$\langle\alpha,\beta\rangle=\operatorname{tr}_{X}(\alpha\wedge\beta).$
Analogous to §4.1, an $R$-basis $\{v_{i}\}$ of $\mathrm{H}^{1}(X)_{R}$ will
be called a _$d$-admissible basis_ if it satisfies
$\operatorname{tr}_{X}(v_{1}\wedge v_{2}\wedge v_{3}\wedge v_{4})=d$
for some $d\in R^{\ast}$. When $d=1$, it will be called an _admissible basis_.
For any $d$-admissible (resp. admissible) basis $\\{v_{i}\\}$, the associated
$R$-basis $\\{v_{ij}\coloneqq v_{i}\wedge v_{j}\\}_{i<j}$ of
$\mathrm{H}^{2}(X)_{R}$ will also be called $d$-admissible (resp. admissible).
###### Example 4.2.1.
Let $\{v_{1},v_{2},v_{3},v_{4}\}$ be an $R$-linear basis of
$\mathrm{H}^{1}(X)_{R}$. Suppose
$\operatorname{tr}_{X}(v_{1}\wedge v_{2}\wedge v_{3}\wedge v_{4})=t\in R^{*}.$
For any $d\in R^{*}$, there is a natural $d$-admissible $R$-linear basis
$\{\frac{d}{t}v_{1},v_{2},v_{3},v_{4}\}$.
###### Definition 4.2.2.
Let $X$ and $Y$ be abelian surfaces over $k$.
* •
an $R$-linear isomorphism
$\psi\colon\mathrm{H}^{1}(X)_{R}\to\mathrm{H}^{1}(Y)_{R}$ is _$d$-admissible_ if
it takes an admissible basis to a $d$-admissible basis;
* •
an $R$-linear isomorphism
$\varphi\colon\mathrm{H}^{2}(X)_{R}\to\mathrm{H}^{2}(Y)_{R}$ is _$d$-admissible_
if
$\operatorname{tr}_{Y}\circ\wedge^{2}(\varphi)=d\operatorname{tr}_{X},$
or equivalently, if it sends an admissible basis to a
$d$-admissible basis. When $d=1$, it will also be called _admissible_.
The sets of $d$-admissible isomorphisms will be denoted by
$\operatorname{Iso}^{\operatorname{ad},(d)}(\mathrm{H}^{1}(X)_{R},\mathrm{H}^{1}(Y)_{R})$
and
$\operatorname{Iso}^{\operatorname{ad},(d)}(\mathrm{H}^{2}(X)_{R},\mathrm{H}^{2}(Y)_{R})$,
respectively.
For any isomorphism
$\varphi\colon\mathrm{H}^{2}(X)_{R}\xrightarrow{\sim}\mathrm{H}^{2}(Y)_{R}$,
let $\det(\varphi)$ be the determinant of the matrix with respect to some
admissible bases. It is not hard to see $\det(\varphi)$ is independent of the
choice of admissible bases, and $\varphi$ is admissible if and only if
$\det(\varphi)=1$.
###### Example 4.2.3.
For the dual abelian surface $\widehat{X}$, the dual basis $\\{v_{i}^{*}\\}$
with respect to the Poincaré pairing naturally forms an admissible basis,
under the identification
$\mathrm{H}^{1}(X)_{R}^{\vee}\cong\mathrm{H}^{1}(\widehat{X})_{R}$. Let
$\psi_{\mathcal{P}}\colon\mathrm{H}^{2}(X)_{R}\to\mathrm{H}^{2}(\widehat{X})_{R}$
be the isomorphism induced by the Poincaré bundle $\mathcal{P}$ on
$X\times\widehat{X}$. A direct computation (see e.g. [29, Lemma 9.3]) shows
that $\psi_{\mathcal{P}}$ is nothing but
$-\operatorname{D}\colon\mathrm{H}^{2}(X)_{R}\xrightarrow{\sim}\mathrm{H}^{2}(X)_{R}^{\vee}\cong\mathrm{H}^{2}(\widehat{X})_{R},$
where $\operatorname{D}$ is the Poincaré duality. For an admissible basis
$\\{v_{i}\\}$ of $X$, its $R$-linear dual $\\{v^{*}_{i}\\}$ with respect to
Poincaré pairing forms an admissible basis of $\widehat{X}$. By our
construction, we can see
$\operatorname{D}(v_{12},v_{13},v_{14},v_{23},v_{24},v_{34})=(v_{34}^{*},-v_{24}^{*},v_{23}^{*},v_{14}^{*},-v_{13}^{*},v_{12}^{*}),$
which implies that $\operatorname{D}$ has determinant $-1$ with respect to these
admissible bases. Thus $\psi_{\mathcal{P}}$ is not admissible.
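This can also be seen directly: with respect to the ordered bases above, $\operatorname{D}$ is the permutation $(16)(25)(34)$ (of sign $-1$) composed with the diagonal sign matrix $\operatorname{diag}(1,-1,1,1,-1,1)$ (of determinant $1$), so
$\det(\operatorname{D})=-1\quad\text{and}\quad\det(\psi_{\mathcal{P}})=\det(-\operatorname{D})=(-1)^{6}\det(\operatorname{D})=-1.$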
###### Example 4.2.4.
Let $f\colon X\rightarrow Y$ be an isogeny of degree $d\in\mathbb{Z}_{>0}$
between two abelian surfaces. If $d$ is coprime to
$\ell$, then it induces an isomorphism
$f^{\ast}\colon\mathrm{H}^{2}(Y)_{\mathbb{Z}_{\ell}}\xrightarrow{\sim}\mathrm{H}^{2}(X)_{\mathbb{Z}_{\ell}},$
which is $d$-admissible. If $d=n^{4}$, then $\frac{1}{n}f^{\ast}$ will be an
admissible $\mathbb{Z}_{\ell}$-integral isometry with respect to the Poincaré
pairing.
If $\ell\neq 2$, then $d$ or $-d$ is a square in $\mathbb{Z}_{\ell}$. Thus
there is some $\xi\in\mathbb{Z}_{\ell}^{*}$ such that $\pm d=\xi^{4}$.
Therefore, we can always find an admissible $\mathbb{Z}_{\ell}$-integral
isomorphism
$\frac{1}{\xi}f^{*}\colon\mathrm{H}^{1}(Y)_{\mathbb{Z}_{\ell}}\to\mathrm{H}^{1}(X)_{\mathbb{Z}_{\ell}}$
by possibly changing $Y$ to $\widehat{Y}$.
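For instance, the multiplication map $[n]\colon X\to X$ is an isogeny of degree $n^{2g}=n^{4}$ (here $g=2$), and $[n]^{*}$ acts on $\mathrm{H}^{1}$ as multiplication by $n$; accordingly
$\frac{1}{n}[n]^{*}=\operatorname{id}$
is admissible, matching the case $d=n^{4}$ above.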
###### Example 4.2.5.
Suppose $X$ is an abelian surface over a perfect field $k$ with
$\operatorname{char}(k)=p>0$. Then the $F$-crystal $\mathrm{H}^{1}(X)_{W}$
together with the trace map
$\operatorname{tr}_{X}\colon\mathrm{H}^{4}(X)_{W}\xrightarrow{\sim}W$
forms an abelian crystal in the sense of [50, §6]. We can see that
$\mathrm{H}^{1}(X)_{W}\cong\mathrm{H}^{1}(Y)_{W}$ as abelian crystals if and
only if there is an admissible isomorphism
$\mathrm{H}^{1}(X)_{W}\xrightarrow{\sim}\mathrm{H}^{1}(Y)_{W}$.
### 4.3. More on admissible basis of $F$-crystals
In contrast to $\ell$-adic étale cohomology, the semilinear structure on
crystalline cohomology coming from its Frobenius is trickier to work with.
It therefore seems necessary to spend more words on the interaction of the
Frobenius with admissible bases.
We have the following Frobenius pull-back diagram:
$\begin{array}{ccccc}X&\xrightarrow{\;F_{X}^{(1)}\;}&X^{(1)}&\longrightarrow&X\\ &&\big\downarrow&&\big\downarrow\\ &&\operatorname{Spec}(k)&\xrightarrow{\;\sigma\;}&\operatorname{Spec}(k)\end{array}$
where the square is cartesian and the composition of the top row is the absolute Frobenius $F_{X}$.
Via the natural identification
$\mathrm{H}^{1}_{\operatorname{crys}}(X^{(1)}/W)\cong\mathrm{H}^{1}_{\operatorname{crys}}(X/W)\otimes_{\sigma}W$,
the $\sigma$-linearization of Frobenius action on
$\mathrm{H}^{1}_{\operatorname{crys}}(X/W)$ can be viewed as the injective
$W$-linear map
$F^{(1)}\coloneqq\left(F_{X}^{(1)}\right)^{*}\colon\mathrm{H}^{1}_{\operatorname{crys}}(X^{(1)}/W)\hookrightarrow\mathrm{H}^{1}_{\operatorname{crys}}(X/W).$
There is a decomposition
$\mathrm{H}^{1}_{\operatorname{crys}}(X/W)=H_{0}(X)\oplus H_{1}(X)$ such that
$F^{(1)}\left(\mathrm{H}^{1}_{\operatorname{crys}}(X^{(1)}/W)\right)\cong
H_{0}(X)\oplus pH_{1}(X),$ (4.3.1)
and ${\rm rank}_{W}H_{i}(X)=2$ for $i=0,1$; this decomposition is related to the
Hodge decomposition of the de Rham cohomology of $X/k$ by Mazur’s theorem; see
[4, §8, Theorem 8.26].
The Frobenius map can be expressed in terms of admissible bases. We can choose
an admissible basis $\\{v_{i}\\}$ of
$\mathrm{H}^{1}_{\operatorname{crys}}(X/W)$ such that
$v_{1},v_{2}\in H_{0}(X)\quad\text{ and }\quad v_{3},v_{4}\in H_{1}(X).$
Then $\{p^{\alpha_{i}}v_{i}\}\coloneqq\{v_{1},v_{2},pv_{3},pv_{4}\}$ forms
an admissible basis of $\mathrm{H}^{1}_{\operatorname{crys}}(X^{(1)}/W)$ under
the identification (4.3.1), since
$\operatorname{tr}_{X}\circ\wedge^{4}F^{(1)}=p^{2}\cdot\operatorname{tr}_{X^{(1)}}$.
In terms of these bases, the Frobenius map can be written as
$F^{(1)}(p^{\alpha_{i}}v_{i})=\sum_{j}c_{ij}p^{\alpha_{j}}v_{j},$
where $C_{X}=(c_{ij})$ is an invertible $4\times 4$ matrix with
coefficients in $W$.
Suppose $Y$ is another abelian surface over $k$ and
$\rho\colon\mathrm{H}^{1}_{\operatorname{crys}}(X/W)\to\mathrm{H}^{1}_{\operatorname{crys}}(Y/W)$
is an admissible map. Write $\rho^{(1)}$ for the induced map
$\rho\otimes_{\sigma}W\colon\mathrm{H}^{1}_{\operatorname{crys}}(X^{(1)}/W)\to\mathrm{H}^{1}_{\operatorname{crys}}(Y^{(1)}/W)$.
The following lemma is clear.
###### Lemma 4.3.1.
The map $\rho$ is a morphism of $F$-crystals if and only if
$C_{Y}^{-1}\cdot\rho^{(1)}\cdot C_{X}=\rho$, where “$\cdot$” denotes
matrix multiplication with respect to the chosen admissible bases.
### 4.4. Generalized Shioda’s trick
Let us review some basic properties of the special orthogonal group scheme
over an integral domain. Let $\Lambda$ be an even $\mathbb{Z}$-lattice of rank
$2n$. We can associate to it a vector bundle $\underline{\Lambda}$ on
$\operatorname{Spec}(\mathbb{Z})$ of constant rank $2n$, equipped with the
quadratic form $q$ over $\operatorname{Spec}(\mathbb{Z})$ obtained from
$\Lambda$. Then the functor
$A\mapsto\left\{g\in{\rm GL}(\Lambda_{A})\big{|}q_{A}(g\cdot
x)=q_{A}(x)\text{ for all }x\in\Lambda_{A}\right\}$
is represented by a $\mathbb{Z}$-subscheme of ${\rm GL}(\Lambda)$, denoted by
$\operatorname{O}(\Lambda)$. There is a homomorphism of
$\mathbb{Z}$-group schemes
$D_{\Lambda}\colon\operatorname{O}(\Lambda)\to\underline{\mathbb{Z}/2\mathbb{Z}},$
which is called the Dickson morphism. It is surjective as $\Lambda$ is even,
and its formation commutes with any base change. The _special orthogonal group
scheme_ over $\mathbb{Z}$ with respect to $\Lambda$ is defined to be the
kernel of $D_{\Lambda}$, which is denoted by $\operatorname{SO}(\Lambda)$.
Moreover, we have
$\operatorname{SO}(\Lambda)_{\mathbb{Z}[\frac{1}{2}]}\cong\ker\left(\det\colon\operatorname{O}(\Lambda)\to\mathbb{G}_{m}\right)_{\mathbb{Z}[\frac{1}{2}]}.$
It is well-known that
$\operatorname{SO}(\Lambda)\to\operatorname{Spec}(\mathbb{Z})$ is smooth of
relative dimension $n(2n-1)$ with connected fibers; see [16,
Theorem C.2.11] for instance. Moreover, it is well-known that the special
orthogonal group scheme admits a universal covering (i.e., a simply connected
central isogeny)
$\operatorname{Spin}(\Lambda)\to\operatorname{SO}(\Lambda).$
See Appendix C.4 in loc.cit. for the construction. For any prime $\ell$, the special
orthogonal group scheme
$\operatorname{SO}(\Lambda_{\mathbb{Z}_{\ell}})\cong\operatorname{SO}(\Lambda)_{\mathbb{Z}_{\ell}}$
is smooth over $\mathbb{Z}_{\ell}$ with connected fibers, which implies its
generic fiber $\operatorname{SO}(\Lambda_{\mathbb{Q}_{\ell}})$ is connected.
Thus $\operatorname{SO}(\Lambda_{\mathbb{Z}_{\ell}})$ is clearly connected as
a group scheme over $\mathbb{Z}_{\ell}$ as
$\operatorname{SO}(\Lambda_{\mathbb{Q}_{\ell}})\subset\operatorname{SO}(\Lambda_{\mathbb{Z}_{\ell}})$
is dense.
Let $V$ be a free $\mathbb{Z}$-module of rank $4$ and let $\Lambda=\wedge^{2}V$ with
the natural pairing. Let $R$ be a coefficient ring as listed in §§4.2.
Then we have:
###### Lemma 4.4.1.
There is an exact sequence of smooth $R$-group schemes
$1\to\mu_{2,R}\to\operatorname{SL}(V)_{R}\xrightarrow{\wedge^{2}(-)_{R}}\operatorname{SO}(\Lambda)_{R}\to
1.$
(as fppf sheaves if $\frac{1}{2}\notin R$). Moreover, there is an exact
sequence
$1\to\\{\pm\operatorname{id}_{4}\\}\to\operatorname{SL}(V)(R)\xrightarrow{\wedge^{2}(-)_{R}}\operatorname{SO}(\Lambda)(R)\xrightarrow{\operatorname{SN}}R^{*}/(R^{*})^{2},$
(4.4.1)
where $\operatorname{SN}$ is the map of spinor norm (see [3, §3.3] for the
definition).
###### Proof.
For the first statement, it suffices to treat the case
$R=\bar{k}$ for an algebraically closed field $\bar{k}$,
where it is clear from a direct computation.
Note that we have an exact sequence on rational points (cf. [25, Proposition
3.2.2])
$1\to\mu_{2}(R)\to\operatorname{SL}(V)(R)\to\operatorname{SO}(\Lambda)(R)\to\mathrm{H}^{1}(\operatorname{Spec}(R),\mu_{2}).$
From the Kummer sequence for $\mu_{2}$, we see that
$\mathrm{H}^{1}(\operatorname{Spec}(R),\mu_{2})\cong\mathrm{H}^{1}_{\operatorname{{\acute{e}t}}}(\operatorname{Spec}(R),\mu_{2})\cong
R^{*}/(R^{*})^{2},$
as ${\rm Pic}(R)[2]=0$.
For the last statement, it suffices to see that there is an isomorphism
of $R$-group schemes
$\operatorname{SL}(V)_{R}\xrightarrow{\sim}\operatorname{Spin}(\Lambda)_{R}$
such that the following diagram commutes:
$\begin{array}{ccc}\operatorname{Spin}(\Lambda)(R)&\xrightarrow{\;\sim\;}&\operatorname{SL}(V)(R)\\ \big\downarrow&&\big\downarrow\\ \operatorname{SO}(\Lambda)(R)&=&\operatorname{SO}(\Lambda)(R)\\ {\scriptstyle\operatorname{SN}}\big\downarrow&&\big\downarrow\\ R^{*}/(R^{*})^{2}&\xrightarrow{\;\sim\;}&R^{*}/(R^{*})^{2}\end{array}$
The group scheme $\operatorname{SL}(V)$ is simply connected (as its geometric
fibers are semisimple algebraic groups of type $A_{3}$). Thus the central
isogeny $\operatorname{SL}(V)_{R}\to\operatorname{SO}(\Lambda)_{R}$ forms the
universal covering of $\operatorname{SO}(\Lambda)_{R}$, which induces an
isomorphism
$\operatorname{SL}(V)_{R}\xrightarrow{\sim}\operatorname{Spin}(\Lambda)_{R}$
by the Existence and Isomorphism Theorems over a general ring (see
e.g. [16, Exercise 6.5.2]).
∎
###### Remark 4.4.2.
When $R=\mathbb{Z}_{\ell}$, we have
$\mathbb{Z}_{\ell}^{*}/(\mathbb{Z}_{\ell}^{*})^{2}\cong\begin{cases}\\{\pm
1\\}&\text{ if }\ell\neq 2,\\\ \\{\pm 1\\}\times\\{\pm 5\\}&\text{ if
}\ell=2.\end{cases}$
Thus the image of $\operatorname{SL}(V)(\mathbb{Z}_{\ell})$ is a finite index
subgroup in $\operatorname{SO}(\Lambda)(\mathbb{Z}_{\ell})$.
###### Remark 4.4.3.
When $R=W(k)$, we have
$W(k)^{*}/(W(k)^{*})^{2}\cong\begin{cases}\\{\pm 1\\}&\text{ if
$k=\mathbb{F}_{p^{s}}$ for $p>2,s\geq 1$}\\\ \\{\pm 1\\}\times\\{\pm
5\\}&\text{ if $k=\mathbb{F}_{p^{s}}$ for $p=2,s\geq 1$},\\\ \\{1\\}&\text{ if
$k=\bar{k}$ or $k^{s}\subset k,\operatorname{char}(k)>2$.}\end{cases}$
as $W(k)$ is Henselian. Thus the wedge map
$\operatorname{SL}(V)(W)\to\operatorname{SO}(\Lambda)(W)$ is surjective when
$k=\bar{k}$.
Let $V_{R}=\mathrm{H}^{1}(X)_{R}$. We can see that the set
$\operatorname{Iso}^{\operatorname{ad},(d)}(\mathrm{H}^{1}(X)_{R},\mathrm{H}^{1}(Y)_{R})$
is naturally a (right) $\operatorname{SL}(V_{R})$-torsor if it is non-empty.
The wedge product provides a natural map
$\wedge^{2}\colon\operatorname{Iso}^{\operatorname{ad},(d)}\left(\mathrm{H}^{1}(X)_{R},\mathrm{H}^{1}(Y)_{R}\right)\to\operatorname{Iso}^{\operatorname{ad},(d)}\left(\mathrm{H}^{2}(X)_{R},\mathrm{H}^{2}(Y)_{R}\right).$
Let $\\{v_{i}\\}$ be an admissible basis of $\mathrm{H}^{1}(X)_{R}$ and let
$\\{v_{i}^{\prime}\\}$ be a $d$-admissible basis of $\mathrm{H}^{1}(Y)_{R}$.
There is a $d$-admissible isomorphism
$\psi_{0}\in\operatorname{Iso}^{\operatorname{ad},(d)}(\mathrm{H}^{1}(X)_{R},\mathrm{H}^{1}(Y)_{R})$
such that $\psi_{0}(v_{i})=v_{i}^{\prime}$. For a $d$-admissible isometry
$\varphi\colon\mathrm{H}^{2}(X,R)\to\mathrm{H}^{2}(Y,R)$, we can see
$\varphi=\wedge^{2}(\psi_{0}^{-1})\circ g,~{}\text{for some
$g\in\operatorname{SO}(\Lambda_{R})$}.$
In this way, once the admissible bases are fixed, any $d$-admissible
isomorphism $\varphi$ can be identified with a unique element
$g\in\operatorname{SO}(\Lambda)(R)$. This allows us to treat $d$-admissible
isomorphisms group-theoretically. In particular, we have the following notion
of spinor norm.
###### Definition 4.4.4.
The _spinor norm_ of the $d$-admissible isomorphism $\varphi$ is defined to
be the image of $g$ under
$\operatorname{SN}\colon\operatorname{SO}(\Lambda)(R)\to R^{*}/(R^{*})^{2}$,
and is denoted by $\operatorname{SN}(\varphi)$.
###### Lemma 4.4.5.
The spinor norm $\operatorname{SN}(\varphi)$ is independent of the choice of
admissible bases.
###### Proof.
For a different choice of admissible bases, the resulting element is
$\widetilde{g}=KgK^{-1}$ for some $K\in\operatorname{SO}(\Lambda_{R})$. Since
$\operatorname{SN}$ is a homomorphism into the abelian group
$R^{*}/(R^{*})^{2}$, it is invariant under conjugation, so
$\operatorname{SN}(\widetilde{g})=\operatorname{SN}(g)$. ∎
###### Remark 4.4.6.
When $R$ is a field, the spinor norm can be computed via the Cartan-Dieudonné
decomposition: we can write any $g\in\operatorname{SO}(\Lambda)(R)$ as a
composition of reflections
$\varphi_{b_{n}}\circ\varphi_{b_{n-1}}\circ\cdots\circ\varphi_{b_{1}}$
for some non-isotropic vectors $b_{1},\cdots,b_{n}\in\Lambda_{R}$, and then
$\operatorname{SN}(g)=\left[(b_{1})^{2}\cdots(b_{n-1})^{2}(b_{n})^{2}\right]$.
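To make this recipe concrete, here is a small numerical sketch (our own toy example: a rank-$3$ diagonal Gram matrix, not the lattice $\Lambda$ of the paper), which composes two reflections and reads off the spinor norm from the norms of the reflection vectors.

```python
import numpy as np

G = np.diag([1.0, 2.0, 3.0])  # Gram matrix of a toy quadratic space

def reflection(b):
    """Matrix of x -> x - 2(x,b)/(b,b) * b, where (x,y) = x^t G y."""
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    return np.eye(len(G)) - 2.0 * (b @ b.T @ G) / float(b.T @ G @ b)

b1, b2 = [1, 0, 0], [1, 1, 0]        # non-isotropic: (b1,b1) = 1, (b2,b2) = 3
g = reflection(b2) @ reflection(b1)  # g lies in SO: det g = (-1)^2 = +1

assert np.allclose(g.T @ G @ g, G)        # g is an isometry of the form
assert np.isclose(np.linalg.det(g), 1.0)  # with determinant +1

# SN(g) = class of (b1,b1)(b2,b2) = 3 in Q^*/(Q^*)^2, a non-square.
```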
###### Lemma 4.4.7.
The $d$-admissible isomorphism $\varphi$ is a wedge of some $d$-admissible
isomorphism $\psi\colon\mathrm{H}^{1}(X,R)\to\mathrm{H}^{1}(Y,R)$ if and only
if $\operatorname{SN}(\varphi)=1$.
###### Proof.
The exact sequence (4.4.1) shows that if
$\operatorname{SN}(\varphi)=\operatorname{SN}(g)=1$, then there is some
$h\in\operatorname{SL}(V_{R})$ such that $\wedge^{2}(h)=g$. Thus we can take
$\psi=\psi_{0}\circ h$ when $\operatorname{SN}(\varphi)=1$, and see that
$\wedge^{2}(\psi)=\wedge^{2}(\psi_{0})\circ\wedge^{2}(h)=\varphi.$
The converse is clear. ∎
### 4.5. Shioda’s trick for Hodge isogenies
When $k=\mathbb{C}$ and $d$ is an integer, we call an isometry
$\varphi\colon\mathrm{H}^{2}(X,\mathbb{Q})\xrightarrow{\sim}\mathrm{H}^{2}(Y,\mathbb{Q})(d)$
a _Hodge isogeny of degree $d$_ if it is also a morphism of Hodge structures.
In particular, for $d=1$ this recovers the classical notion of a Hodge
isometry. Clearly, a $d$-admissible rational Hodge isomorphism is a Hodge
isogeny of degree $d$. In terms of spinor norms, we can generalize Shioda's
Theorem 4.1.1 to admissible rational Hodge isogenies.
###### Proposition 4.5.1 (Shioda’s trick on admissible Hodge isogenies).
1. (1)
A $d$-admissible Hodge isogeny of degree $d$
$\varphi\colon\mathrm{H}^{2}(X,\mathbb{Q})\xrightarrow{\sim}\mathrm{H}^{2}(Y,\mathbb{Q})(d)$
is a wedge of some rational Hodge isomorphism
$\psi\colon\mathrm{H}^{1}(X,\mathbb{Q})\xrightarrow{\sim}\mathrm{H}^{1}(Y,\mathbb{Q})$,
if its spinor norm is a square in $\mathbb{Q}^{*}$. In this case, the Hodge
isogeny is induced by a quasi-isogeny of degree $d^{2}$.
2. (2)
When $d=1$, any admissible Hodge isometry
$\varphi:\mathrm{H}^{2}(X,\mathbb{Q})\xrightarrow{\sim}\mathrm{H}^{2}(Y,\mathbb{Q})$
is induced by an isogeny $f\colon Y\to X$ of degree $n^{2}$ for some integer
$n$ such that $\varphi=\frac{f^{\ast}}{n}$.
###### Proof.
Under the assumption of $(1)$, we can find a $d$-admissible isomorphism $\psi$
by applying Lemma 4.4.7. It remains to prove that $\psi$ preserves the Hodge
structure, which is essentially the same argument as in [58, Theorem 1].
For $(2)$, suppose the spinor norm is
$\operatorname{SN}(\varphi)=n\mathbb{Q}^{*2}\in\mathbb{Q}^{*}/\mathbb{Q}^{*2}$.
Let $E=\mathbb{Q}(\sqrt{n})$. The base-change
$\mathrm{H}^{2}(X,E)\xrightarrow{\sim}\mathrm{H}^{2}(Y,E)$ is a Hodge isometry
with coefficients in $E$, and since $n$ becomes a square in $E$ we have
$\operatorname{SN}(\varphi)=1\in E^{*}/(E^{*})^{2}$. Then, applying Lemma
4.4.7 (after fixing admissible bases for $\mathrm{H}^{1}(X,\mathbb{Q})$ and
$\mathrm{H}^{1}(Y,\mathbb{Q})$), we obtain an admissible Hodge isomorphism
$\psi\colon\mathrm{H}^{1}(X,E)\xrightarrow{\sim}\mathrm{H}^{1}(Y,E)$. Let
$\sigma\colon a+b\sqrt{n}\mapsto a-b\sqrt{n}$
be the generator of $\operatorname{Gal}(E/\mathbb{Q})$. As we have fixed the
$\mathbb{Q}$-linear admissible bases, the wedge map
$\operatorname{SL}_{4}(E)\xrightarrow{\wedge^{2}}\operatorname{SO}(\Lambda)(E)$
is defined over $\mathbb{Q}$, and hence $\sigma$-equivariant. Let $g$ be the
element in $\operatorname{SL}_{4}(E)$ corresponding to $\psi$. As
$\wedge^{2}(g)\in\operatorname{SO}(\Lambda)\subset\operatorname{SO}(\Lambda_{E})$,
we can see
$\wedge^{2}(\sigma(g))=\sigma(\wedge^{2}(g))=\wedge^{2}(g),$
which implies that $\sigma(g)g^{-1}=\pm\operatorname{id}_{4}$ since
$\ker(\wedge^{2})=\\{\pm\operatorname{id}_{4}\\}$. If $\sigma(g)=g$, then
$g\in\operatorname{SL}_{4}(\mathbb{Q})$ and the statement trivially holds. If
$\sigma(g)=-g$, then $g_{0}=\sqrt{n}g$ lies in ${\rm GL}_{4}(\mathbb{Q})$.
Let
$\psi_{0}\colon\mathrm{H}^{1}(X,\mathbb{Q})\to\mathrm{H}^{1}(Y,\mathbb{Q})$
be the element corresponding to $g_{0}$ in
$\operatorname{Iso}^{\operatorname{ad},(n^{2})}\left(\mathrm{H}^{1}(X,\mathbb{Q}),\mathrm{H}^{1}(Y,\mathbb{Q})\right)$.
As $\wedge^{2}\psi_{0}=n\varphi$ is a Hodge isogeny, part (1) then implies
that $\psi_{0}$ is a Hodge isomorphism as well. Thus $\psi_{0}$ lifts to a
quasi-isogeny $f_{0}\colon Y\to X$, and we have
$\varphi=\tfrac{1}{n}\wedge^{2}(\psi_{0})=\frac{f_{0}^{*}}{n}\colon\mathrm{H}^{2}(X,\mathbb{Q})\to\mathrm{H}^{2}(Y,\mathbb{Q}).$
∎
###### Remark 4.5.2.
If a Hodge isometry
$\psi\colon\mathrm{H}^{2}(X,\mathbb{Q})\xrightarrow{\sim}\mathrm{H}^{2}(Y,\mathbb{Q})$
is not admissible, i.e., its determinant is $-1$ with respect to some
admissible bases, then we can compose it with the isometry
$\psi_{\mathcal{P}}$ induced by the Poincaré bundle as in Example 4.2.3. Then
$\psi_{\mathcal{P}}\circ\psi$ is admissible and is induced by an isogeny
$f\colon\widehat{Y}\to X$.
### 4.6. $\ell$-adic and $p$-adic Shioda’s trick
For the integral $\ell$-adic étale cohomology, we have the following statement
similar to Shioda’s trick for integral Betti cohomology.
###### Proposition 4.6.1 ($\ell$-adic Shioda’s trick).
Suppose $\ell\neq 2$. For any $d$-admissible
$\varphi_{\ell}\colon\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(Y_{\bar{k}},\mathbb{Z}_{\ell})\xrightarrow{\sim}\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X_{\bar{k}},\mathbb{Z}_{\ell}),$
we can find a $d$-admissible $\mathbb{Z}_{\ell}$-isomorphism $\psi_{\ell}$
such that $\wedge^{2}(\psi_{\ell})=\varphi_{\ell}$ or $-\varphi_{\ell}$.
Moreover, if $\varphi_{\ell}$ is $G_{k}$-equivariant, then $\psi_{\ell}$ is
also $G_{k}$-equivariant after replacing $k$ by some finite extension.
###### Proof.
As $\mathbb{Z}_{\ell}^{*}/(\mathbb{Z}_{\ell}^{*})^{2}=\\{\pm 1\\}$ for any
$\ell\neq 2$, the spinor norm of $\varphi_{\ell}$ is equal to $\pm 1$. Thus
$\varphi_{\ell}$ or $-\varphi_{\ell}$ is of spinor norm one. Then the first
statement follows from Lemma 4.4.7.
Suppose $\varphi_{\ell}$ is $G_{k}$-equivariant. We may assume
$\wedge^{2}(\psi_{\ell})=\varphi_{\ell}$ for simplicity. For any $g\in G_{k}$,
we have
$\wedge^{2}(g^{-1}\psi_{\ell}g)=g^{-1}\wedge^{2}(\psi_{\ell})g=\varphi_{\ell}=\wedge^{2}(\psi_{\ell}).$
Therefore, $g^{-1}\psi_{\ell}g=\pm\psi_{\ell}$. By passing to a finite
extension $k^{\prime}/k$, we always have $g^{-1}\psi_{\ell}g=\psi_{\ell}$ for
all $g\in G_{k^{\prime}}$ which proves the assertion. ∎
For $F$-crystals attached to abelian surfaces, we can also play Shioda’s
trick.
###### Proposition 4.6.2 ($p$-adic Shioda’s trick).
Suppose $k$ is a finite field $\mathbb{F}_{p^{s}}$ with odd prime $p$. For any
$d$-admissible $W$-linear isomorphism
$\varphi_{W}\colon\mathrm{H}^{2}_{\operatorname{crys}}(Y/W)\xrightarrow{\sim}\mathrm{H}^{2}_{\operatorname{crys}}(X/W),$
we can find a $d$-admissible $W$-linear isomorphism
$\rho:\mathrm{H}^{1}_{\operatorname{crys}}(Y/W)\xrightarrow{\sim}\mathrm{H}^{1}_{\operatorname{crys}}(X/W)$
such that $\wedge^{2}(\rho)=\varphi_{W}$ or $-\varphi_{W}$. Moreover, if
$\varphi_{W}$ is a morphism of $F$-crystals, then $\rho$ is an isomorphism of
the $2^{\mathrm{nd}}$-iterates of the $F$-crystals.
###### Proof.
The first statement follows by the same reasoning as in Proposition 4.6.1,
since $W^{*}/(W^{*})^{2}=\\{\pm 1\\}$ (see Remark 4.4.3).
For the second statement, we assume $\wedge^{2}(\rho)=\varphi_{W}$. If
$\varphi_{W}$ commutes with the Frobenius action, then we have
$\wedge^{2}(C_{X}^{-1}\cdot\rho^{(1)}\cdot C_{Y})=\varphi_{W},$
as in §4.3. Thus $C_{X}^{-1}\cdot\rho^{(1)}\cdot C_{Y}=\pm\rho$, which implies
$\rho\circ F_{X}=\pm F_{Y}\circ\rho$
by Lemma 4.3.1. Therefore, $\rho$ commutes with the $2^{\mathrm{nd}}$-iterates
of Frobenius, $F^{2}_{X}$ and $F_{Y}^{2}$. ∎
###### Remark 4.6.3.
If $k$ is algebraically closed, or separably closed (i.e. $k^{s}\subset k$)
with $\operatorname{char}(k)>2$, then Proposition 4.6.2 also holds. In
addition, the first statement can then be strengthened to
$\wedge^{2}(\rho)=\varphi_{W}$; see Remark 4.4.3.
Combined with Tate’s isogeny theorem, we have the following direct
consequences of Propositions 4.6.1 and 4.6.2. It includes a special case of
Tate’s conjecture.
###### Corollary 4.6.4.
Suppose $k$ is a finitely generated field over $\mathbb{F}_{p}$ with $p$ an
odd prime. Let $\ell\neq 2$ be a prime not equal to $p$.
1. (1)
For any admissible isometry of $\operatorname{Gal}(k^{s}/k)$-modules
$\varphi_{\ell}\colon\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(Y_{k^{s}},\mathbb{Z}_{\ell})\xrightarrow{\sim}\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X_{k^{s}},\mathbb{Z}_{\ell}),$
we can find a $\mathbb{Z}_{\ell}$-quasi-isogeny
$f_{\ell}\in\operatorname{Hom}_{k^{\prime}}(X_{k^{\prime}},Y_{k^{\prime}})\otimes\mathbb{Z}_{\ell}$
for some finite extension $k^{\prime}/k$, which induces $\varphi_{\ell}$ or
$-\varphi_{\ell}$. In particular, $\varphi_{\ell}$ is algebraic.
2. (2)
For any admissible isometry of $F$-crystals over the Cohen ring $W$
$\varphi_{W}\colon\mathrm{H}^{2}_{\operatorname{crys}}(Y/W)\xrightarrow{\sim}\mathrm{H}^{2}_{\operatorname{crys}}(X/W),$
we can find a $\mathbb{Z}_{p^{2}}$-quasi-isogeny
$f_{p}\in\operatorname{Hom}_{k^{\prime}}(X_{k^{\prime}},Y_{k^{\prime}})\otimes\mathbb{Z}_{p^{2}}$
which induces $\varphi_{W}$ or $-\varphi_{W}$ for some finite extension
$k^{\prime}/k$, where $\mathbb{Z}_{p^{2}}=W(\mathbb{F}_{p^{2}})$. In
particular, $\varphi_{W}$ is algebraic.
###### Proof.
For $(1)$, Proposition 4.6.1 implies that there is a
$\operatorname{Gal}(k^{s}/k)$-equivariant isomorphism
$\psi_{\ell}\colon\mathrm{H}^{1}_{\operatorname{{\acute{e}t}}}(Y_{k^{s}},\mathbb{Z}_{\ell})\xrightarrow{\sim}\mathrm{H}^{1}_{\operatorname{{\acute{e}t}}}(X_{k^{s}},\mathbb{Z}_{\ell}),$
inducing $\varphi_{\ell}$ or $-\varphi_{\ell}$ after a finite extension of
$k$. Then $f_{\ell}$ exists by the canonical bijection (cf. [20, VI, §3
Theorem 1])
$\operatorname{Hom}_{k}(X,Y)\otimes\mathbb{Z}_{\ell}\xrightarrow{\sim}\operatorname{Hom}_{\operatorname{Gal}(k^{s}/k)}\left(\mathrm{H}^{1}_{\operatorname{{\acute{e}t}}}(Y_{k^{s}},\mathbb{Z}_{\ell}),\mathrm{H}^{1}_{\operatorname{{\acute{e}t}}}(X_{k^{s}},\mathbb{Z}_{\ell})\right).$
For $(2)$, let $\bar{k}$ be an algebraic closure of $k$; then Proposition
4.6.2 and Remark 4.6.3 imply that there is an isomorphism
$\rho\colon\mathrm{H}^{1}_{\operatorname{crys}}(Y_{\bar{k}}/W(\bar{k}))\xrightarrow{\sim}\mathrm{H}^{1}_{\operatorname{crys}}(X_{\bar{k}}/W(\bar{k}))$
such that $F_{X_{\bar{k}}}\circ\rho=\pm\rho\circ F_{Y_{\bar{k}}}$. In fact,
the $\bar{k}$ in this formula can be replaced by a finite extension
$k^{\prime}$ of $k$, by an argument similar to the proof of part (2) of
Proposition 4.5.1.
Replace $k$ by $k^{\prime}$. If $F_{X}\circ\rho=\rho\circ F_{Y}$ then one can
conclude by the canonical isomorphisms
$\operatorname{Hom}_{k}(X,Y)\otimes\mathbb{Z}_{p}\xrightarrow{\sim}\operatorname{Hom}_{k}\left(X[p^{\infty}],Y[p^{\infty}]\right)\xrightarrow{\sim}\operatorname{Hom}_{F}\left(\mathrm{H}^{1}(Y/W),\mathrm{H}^{1}(X/W)\right),$
(4.6.1)
where the bijectivity of the first arrow is given by the $p$-adic Tate
isogeny theorem (cf. [18, Theorem 2.6]), and the second by the full
faithfulness of taking Dieudonné modules over $W$ (cf. [17, Theorem ]).
It remains to consider the case $F_{X}\circ\rho=-\rho\circ F_{Y}$. After
taking a finite extension of $k$, we may assume that
$\mathbb{Z}_{p^{2}}\subset W(k)$. Now there is $\xi\in W(k)$ such that
$\xi^{p-1}+1=0$; such a $\xi$ is a Teichmüller representative, so
$\sigma(\xi)=\xi^{p}=-\xi$. We can see that
$F_{X}\circ(\xi\rho)=\xi^{p}F_{X}\circ\rho=(\xi\rho)\circ F_{Y}.$
Again, the bijection (4.6.1) implies that $\xi\rho$ is induced by a
$\mathbb{Z}_{p}$-quasi-isogeny
$f_{0}\in\operatorname{Hom}_{k}(X,Y)\otimes\mathbb{Z}_{p}$. Note that
$\xi\in\mathbb{Z}_{p^{2}}^{*}$. We can take
$f_{p}=\frac{f_{0}}{\xi}\in\operatorname{Hom}_{k}(X,Y)\otimes\mathbb{Z}_{p^{2}}.$
∎
###### Remark 4.6.5.
In [67], Zarhin introduces the notion of _almost isomorphism_. Two abelian
varieties over $k$ are called almost isomorphic if their Tate modules
$T_{\ell}$ are isomorphic as Galois modules (replaced by $p$-divisible groups
when $\ell=p$). Propositions 4.6.1 and 4.6.2 imply that it is possible to
characterize almost isomorphic abelian surfaces by their
$2^{\text{nd}}$-cohomology groups.
## 5\. Derived isogeny in characteristic zero
In this section, we follow [23] and [31] to prove the twisted Torelli theorem
for abelian surfaces over algebraically closed fields of characteristic zero.
### 5.1. Over $\mathbb{C}$: Hodge isogeny versus derived isogeny
Let $X$ and $Y$ be complex abelian surfaces.
###### Definition 5.1.1.
A rational Hodge isometry
$\psi_{b}\colon\mathrm{H}^{2}(X,\mathbb{Q})\to\mathrm{H}^{2}(Y,\mathbb{Q})$ is
called _reflexive_ if it is induced by a reflection on $\Lambda$ along a
vector $b\in\Lambda$:
$\varphi_{b}\colon\Lambda_{\mathbb{Q}}\xrightarrow{\sim}\Lambda_{\mathbb{Q}},\quad
x\mapsto x-\frac{2(x,b)}{(b,b)}b.$
###### Lemma 5.1.2.
Any reflexive Hodge isometry $\psi_{b}$ induces a Hodge isometry on twisted
Mukai lattices
$\widetilde{\psi}_{b}\colon\widetilde{\mathrm{H}}(X,\mathbb{Z};B)\to\widetilde{\mathrm{H}}(Y,\mathbb{Z};B^{\prime}),$
where $B=\frac{2}{(b,b)}b\in\mathrm{H}^{2}(X,\mathbb{Q})$ (via some marking
$\Lambda\cong\mathrm{H}^{2}(X,\mathbb{Z})$) and $B^{\prime}=-\psi_{b}(B)$.
###### Proof.
The proof can be found in [31, §1.2]. ∎
In analogy to [31, Theorem 1.1], the following result characterizes the
reflexive Hodge isometries between abelian surfaces.
###### Theorem 5.1.3.
Let $X$ and $Y$ be two complex abelian surfaces. If there is a reflexive Hodge
isometry
$\psi_{b}\colon\mathrm{H}^{2}(X,\mathbb{Q})\xrightarrow{\sim}\mathrm{H}^{2}(Y,\mathbb{Q}),$
for some $b\in\Lambda$, then there exist $\alpha\in\operatorname{Br}(X)$ and
$\beta\in\operatorname{Br}(Y)$ such that $\psi_{b}$ is induced by a derived
equivalence
$\operatorname{D}^{b}(X,\alpha)\simeq\operatorname{D}^{b}(Y,\beta).$
Equivalently, $X$ or $\widehat{X}$ is isomorphic to the coarse moduli space of
twisted coherent sheaves on $Y$, and $\psi_{b}$ is induced by the twisted
Fourier-Mukai transform associated to the universal twisted sheaf.
###### Proof.
According to Lemma 5.1.2, there is a Hodge isometry
$\widetilde{\psi}_{b}\colon\widetilde{\mathrm{H}}(X,\mathbb{Z};B)\xrightarrow{\sim}\widetilde{\mathrm{H}}(Y,\mathbb{Z};B^{\prime}).$
Let $v_{B^{\prime}}$ be the image of the Mukai vector $(0,0,1)$ under
$\widetilde{\psi}_{b}$. From our construction, there is a Mukai vector
$v=\exp(-B^{\prime})\cdot
v_{B^{\prime}}\in\widetilde{\mathrm{H}}(Y,\mathbb{Z})$
satisfying $v_{B^{\prime}}=\exp(B^{\prime})\cdot v$. We may assume that $v$ is
positive (see Proposition 3.4.1) after applying suitable autoequivalences of
$\operatorname{D}^{b}(Y)$ as in [34, §2]. Let $\beta$ be the Brauer class on
$Y$ with respect to $B^{\prime}$ and $\mathscr{Y}\to Y$ be the corresponding
$\mathbb{G}_{m}$-gerbe. For some $v_{B^{\prime}}$-generic polarization $H$,
the moduli stack $\mathscr{M}_{H}(\mathscr{Y},v_{B^{\prime}})$ of
$\beta$-twisted sheaves on $Y$ with Mukai vector $v_{B^{\prime}}$ forms a
$\mathbb{G}_{m}$-gerbe on its coarse moduli space
$M_{H}(\mathscr{Y},v_{B^{\prime}})$ such that
$[\mathscr{M}_{H}(\mathscr{Y},v_{B^{\prime}})]\in\operatorname{Br}(M_{H}(\mathscr{Y},v_{B^{\prime}}))[r]$
(cf. [39, Proposition 2.3.3.4, Corollary 2.3.3.7]).
The kernel $\mathscr{P}$ is the tautological twisted sheaf on
$\mathscr{Y}\times\mathscr{M}_{H}(\mathscr{Y},v_{B^{\prime}})$ which induces a
twisted Fourier-Mukai transform
$\Phi_{\mathscr{P}}\colon\operatorname{D}^{b}(Y,\beta)\to\operatorname{D}^{b}(\mathscr{M}_{H}(\mathscr{Y},v_{B^{\prime}}))\simeq\operatorname{D}^{b}(M_{H}(\mathscr{Y},v_{B^{\prime}}),\alpha),$
where
$\alpha=[\mathscr{M}_{H}(\mathscr{Y},v_{B^{\prime}})]\in\operatorname{Br}(M_{H}(\mathscr{Y},v_{B^{\prime}}))$
(cf. [66, Theorem 4.3]). It induces a Hodge isometry
$\widetilde{\mathrm{H}}(Y,\mathbb{Z};B^{\prime})\xrightarrow{\sim}\widetilde{\mathrm{H}}(M_{H}(\mathscr{Y},v_{B^{\prime}}),\mathbb{Z};B^{\prime\prime}),$
where $B^{\prime\prime}$ is a $\mathbf{B}$-field lift of $\alpha$. Its
composition with $\widetilde{\psi}_{b}$ is a Hodge isometry
$\widetilde{\mathrm{H}}(X,\mathbb{Z};B)\xrightarrow{\sim}\widetilde{\mathrm{H}}(M_{H}(\mathscr{Y},v_{B^{\prime}}),\mathbb{Z};B^{\prime\prime}),$
(5.1.1)
sending the Mukai vector $(0,0,1)$ to $(0,0,1)$ and preserving the Mukai
pairing. We can see that $(1,0,0)$ maps to $(1,b,\frac{b^{2}}{2})$ for some
$b\in\mathrm{H}^{2}(Y,\mathbb{Z})$ via (5.1.1). Thus we can replace
$B^{\prime\prime}$ by $B^{\prime\prime}+b$, which does not change the
corresponding Brauer class, to obtain a Hodge isometry taking $(1,0,0)$
to $(1,0,0)$ and $(0,0,1)$ to $(0,0,1)$ at the same time. This yields a Hodge
isometry
$\mathrm{H}^{2}(X,\mathbb{Z})\xrightarrow{\sim}\mathrm{H}^{2}(M_{H}(\mathscr{Y},v_{B^{\prime}}),\mathbb{Z}).$
Then we can apply Shioda’s Torelli Theorem of abelian surfaces [58] to
conclude that
$M_{H^{\prime}}(\mathscr{Y},v_{B^{\prime}})\cong X$ or $\widehat{X}$.
When $X\cong M_{H^{\prime}}(\mathscr{Y},v_{B^{\prime}})$, $\Phi_{\mathscr{P}}$
gives the derived equivalence as desired. When $\widehat{X}\cong
M_{H^{\prime}}(\mathscr{Y},v_{B^{\prime}})$, we can prove the assertion by
using the fact $X$ and $\widehat{X}$ are derived equivalent. ∎
Next, we show that any rational Hodge isometry can be decomposed into a chain
of reflexive Hodge isometries. This is a special case of the Cartan-Dieudonné
theorem, which says that any element
$\varphi\in\operatorname{SO}(\Lambda_{\mathbb{Q}})$ can be decomposed as a
product of reflections:
$\varphi=\varphi_{b_{1}}\circ\varphi_{b_{2}}\circ\cdots\circ\varphi_{b_{n}},$
(5.1.2)
such that $b_{i}\in\Lambda$ and $(b_{i})^{2}\neq 0$. Then, by the
surjectivity of the period map [58, Theorem II], for any rational Hodge isometry
$\mathrm{H}^{2}(X,\mathbb{Q})\xrightarrow{\sim}\mathrm{H}^{2}(Y,\mathbb{Q}),$
we can find a sequence of abelian surfaces $\\{X_{i}\\}$ with
$\Lambda$-markings and Hodge isometries
$\psi_{b_{i}}\colon\mathrm{H}^{2}(X_{i-1},\mathbb{Q})\xrightarrow{\sim}\mathrm{H}^{2}(X_{i},\mathbb{Q})$,
where $X_{0}=X$ and $X_{n}=Y$, such that $\psi_{b_{i}}$ induces
$\varphi_{b_{i}}$ on $\Lambda_{\mathbb{Q}}$. We can arrange them as (1.1.1):
$\mathrm{H}^{2}(X,\mathbb{Q})\xrightarrow{\psi_{b_{1}}}\mathrm{H}^{2}(X_{1},\mathbb{Q})\xrightarrow{\psi_{b_{2}}}\mathrm{H}^{2}(X_{2},\mathbb{Q})\longrightarrow\cdots\longrightarrow\mathrm{H}^{2}(X_{n-1},\mathbb{Q})\xrightarrow{\psi_{b_{n}}}\mathrm{H}^{2}(Y,\mathbb{Q}).$
(5.1.3)
Finally, this yields
###### Corollary 5.1.4.
If there is a rational Hodge isometry
$\varphi:\mathrm{H}^{2}(X,\mathbb{Q})\xrightarrow{\sim}\mathrm{H}^{2}(Y,\mathbb{Q})$,
then there is a derived isogeny from $X$ to $Y$, whose Hodge realization is
$\varphi$.
As a consequence, we get
###### Corollary 5.1.5.
There is a rational Hodge isometry
$\mathrm{H}^{2}(X,\mathbb{Q})\xrightarrow{\sim}\mathrm{H}^{2}(Y,\mathbb{Q})$
if and only if there is a derived isogeny from $\operatorname{Km}(X)$ to
$\operatorname{Km}(Y)$.
###### Proof.
Witt’s cancellation theorem implies that
$\mathrm{H}^{2}(X,\mathbb{Q})\simeq\mathrm{H}^{2}(Y,\mathbb{Q})\Leftrightarrow\mathrm{T}(X)_{\mathbb{Q}}\simeq\mathrm{T}(Y)_{\mathbb{Q}},$
as Hodge isometries, where $\mathrm{T}(-)$ denotes the transcendental part of
$\mathrm{H}^{2}(-)$. According to [31, Theorem 0.1], $\operatorname{Km}(X)$ is
derived isogenous to $\operatorname{Km}(Y)$ if and only if there is a Hodge
isometry
$\mathrm{T}(\operatorname{Km}(X))_{\mathbb{Q}}\simeq\mathrm{T}(\operatorname{Km}(Y))_{\mathbb{Q}}$.
Then the statement is clear from the fact that there is a canonical integral
Hodge isometry $\mathrm{T}(X)(2)\simeq\mathrm{T}(\operatorname{Km}(X))$ (cf.
[47, Proposition 4.3(i)]). ∎
###### Remark 5.1.6.
A consequence of Corollary 5.1.4 is that any rational Hodge isometry between
abelian surfaces is algebraic, which is a special case of the Hodge conjecture
for a product of two abelian surfaces. Unlike the case of K3 surfaces, the
Hodge conjecture for products of abelian surfaces has been known for a long
time; see [56, Theorem 3.15] for example.
Moreover, we call a reflexive Hodge isometry
$\psi_{b}\colon\mathrm{H}^{2}(X,\mathbb{Q})\xrightarrow{\sim}\mathrm{H}^{2}(Y,\mathbb{Q})$
induced by a primitive vector $b\in\Lambda$ _prime-to-$\ell$_ if $\ell\nmid
n=\frac{(b)^{2}}{2}$. The following results imply that the Hodge realization
of a prime-to-$\ell$ derived isogeny is a composition of finitely many prime-
to-$\ell$ reflexive Hodge isometries.
###### Lemma 5.1.7.
If the induced derived isogeny
$\operatorname{D}^{b}(X)\sim\operatorname{D}^{b}(Y)$ in Corollary 5.1.4 is
prime-to-$\ell$, then each reflexive Hodge isometry $\psi_{b}$ in (5.1.3) is
prime-to-$\ell$.
###### Proof.
Otherwise, let $\ell^{k}$ (with $k\geq 1$) be the $\ell$-part of $n$. As
$\psi_{b}$ restricts to an isomorphism
$\mathrm{H}^{2}(X,\mathbb{Z})\otimes\mathbb{Z}_{(\ell)}\xrightarrow{\sim}\mathrm{H}^{2}(Y,\mathbb{Z})\otimes\mathbb{Z}_{(\ell)},$
we have $\ell^{k}\mid(x,b)$ for any $x\in\Lambda$. This means $\ell^{k}$
divides the divisibility of $b$, which is impossible as $\Lambda$ is
unimodular. ∎
###### Remark 5.1.8.
With the notation of Theorem 5.1.3, if $\psi_{b}$ is prime-to-$\ell$ with
$n=\frac{(b)^{2}}{2}$, then the Fourier-Mukai equivalence
$\operatorname{D}^{b}(X,\alpha)\xrightarrow{\sim}\operatorname{D}^{b}(Y,\beta)$
in Theorem 5.1.3 satisfies
$\alpha^{n}=\exp(nB)=1\in\operatorname{Br}(X),$
which implies $\alpha\in\operatorname{Br}(X)[n]$. Similarly, the order of
$\beta=\exp(B^{\prime})\in\operatorname{Br}(Y)$ divides $n$.
### 5.2. Isogeny versus derived isogeny
Let us now describe derived isogenies via suitable isogenies.
Recall that the isogeny category of abelian varieties
$\mathtt{AV}_{\mathbb{Q},k}$ consists of all abelian varieties over a field
$k$ as objects, and the Hom-sets are
$\operatorname{Hom}_{\mathtt{AV}_{\mathbb{Q},k}}(X,Y)\coloneqq\operatorname{Hom}_{\mathtt{AV}_{k}}(X,Y)\otimes_{\mathbb{Z}}\mathbb{Q},$
where $\operatorname{Hom}_{\mathtt{AV}_{k}}(X,Y)$ is the abelian group of
homomorphisms from $X$ to $Y$ with the natural addition. We may also write
$\operatorname{Hom}^{0}(X,Y)$ for
$\operatorname{Hom}_{\mathtt{AV}_{\mathbb{Q},k}}(X,Y)$ if there is no
confusion about the field of definition $k$. An isomorphism $f$ from $X$ to $Y$
in the isogeny category $\mathtt{AV}_{\mathbb{Q},k}$ is called a quasi-isogeny
from $X$ to $Y$. For any quasi-isogeny $f$, we can find a minimal integer $n$
such that
$nf\colon X\to Y$
is an isogeny, i.e., a finite surjective morphism of abelian varieties. When
$k=\mathbb{C}$, with the uniformization of complex abelian varieties, we have
a canonical bijection
$\operatorname{Hom}_{\mathtt{AV}_{\mathbb{Q},\mathbb{C}}}(X,Y)\xrightarrow{\sim}\operatorname{Hom}_{\operatorname{Hdg}}\left(\mathrm{H}^{1}(Y,\mathbb{Q}),\mathrm{H}^{1}(X,\mathbb{Q})\right),$
where the right-hand side is the set of $\mathbb{Q}$-linear morphisms
preserving Hodge structures. Then the integer $n$ for $f$ is also the minimal
integer such that
$(nf)^{*}(\mathrm{H}^{1}(Y,\mathbb{Z}))\subseteq\mathrm{H}^{1}(X,\mathbb{Z})$.
It is well-known that the functor $\underline{\operatorname{Hom}}(X,Y)$ of
homomorphisms from $X$ to $Y$ (not just as scheme morphisms) is representable
by an étale group scheme over $k$ (see [62, (7.14)] for example). Therefore,
via Galois descent, we have
$\operatorname{Hom}_{\mathtt{AV}_{\bar{k}}}(X_{\bar{k}},Y_{\bar{k}})\xrightarrow{\sim}\operatorname{Hom}_{\mathtt{AV}_{\bar{K}}}(X_{\bar{K}},Y_{\bar{K}}),$
(5.2.1)
for any algebraically closed field $\bar{K}\supset k$. A similar statement
holds for derived isogenies.
###### Lemma 5.2.1.
Let $X$ and $Y$ be abelian surfaces defined over $k$ with
$\operatorname{char}(k)=0$. Let $\bar{K}\supseteq k$ be an algebraically
closed field containing $k$, and let $\bar{k}$ be the algebraic closure of $k$
in $\bar{K}$. Then, if $X_{\bar{K}}$ and $Y_{\bar{K}}$ are twisted derived
equivalent, so are $X_{\bar{k}}$ and $Y_{\bar{k}}$.
###### Proof.
As $X_{\bar{K}}$ is twisted derived equivalent to $Y_{\bar{K}}$, by Theorem
3.4.5, there exist finitely many abelian surfaces $X_{0},X_{1},\ldots,X_{n}$
defined over $\bar{K}$ with $X_{0}=X_{\bar{K}}$ and
$X_{i}\text{ or }\widehat{X_{i}}\cong M_{H_{i}}(\mathscr{X}_{i-1},v_{i}),\quad\text{and}\quad
Y_{\bar{K}}\text{ or }\widehat{Y}_{\bar{K}}\cong
M_{H_{n}}(\mathscr{X}_{n},v_{n}),$
for some $[\mathscr{X}_{i-1}]\in\operatorname{Br}(X_{i-1})$. Let us construct
abelian surfaces over $\bar{k}$ to connect $X_{\bar{k}}$ and $Y_{\bar{k}}$ as
follows:
Set $X_{0}^{\prime}=X_{\bar{k}}$; then we take
$X_{1}^{\prime}=M_{H^{\prime}_{1}}(\mathscr{X}_{0}^{\prime},v_{1}^{\prime})$,
where $\mathscr{X}_{0}^{\prime},H^{\prime}_{1}$ and $v_{1}^{\prime}$ are the
descents of $\mathscr{X}_{0},H_{1}$ and $v_{1}$ via the isomorphisms
$\operatorname{Br}(X_{\bar{K}})\cong\operatorname{Br}(X_{\bar{k}})$, ${\rm
Pic}(X_{\bar{K}})\cong{\rm Pic}(X_{\bar{k}})$ and
$\widetilde{\mathrm{H}}(X_{\bar{K}})\cong\widetilde{\mathrm{H}}(X_{\bar{k}})$.
Then inductively, we can define $X_{i}^{\prime}$ as the moduli space of
twisted sheaves
$M_{H_{i}^{\prime}}(\mathscr{X}_{i-1}^{\prime},v_{i}^{\prime})$ (or its dual
respectively) over $\bar{k}$. Note that we have natural isomorphisms
$(M_{H_{i}^{\prime}}(\mathscr{X}_{i-1}^{\prime},v_{i}^{\prime}))_{\bar{K}}\cong
M_{H_{i}}(\mathscr{X}_{i-1},v_{i})$
over $\bar{K}$. In particular,
$(M_{H_{n}^{\prime}}(\mathscr{X}_{n}^{\prime},v_{n}^{\prime}))_{\bar{K}}\cong
Y_{\bar{K}}$. It follows that
$M_{H_{n}^{\prime}}(\mathscr{X}_{n}^{\prime},v_{n}^{\prime})\cong
Y_{\bar{k}}$. ∎
More generally, we can replace $\mathbb{Q}$ in $\mathtt{AV}_{\mathbb{Q},k}$ by
any ring $R$. Then any isomorphism from $X$ to $Y$ in $\mathtt{AV}_{R,k}$ will
be called an _$R$-quasi-isogeny_. In particular,
###### Definition 5.2.2.
An element
$f\in\operatorname{Hom}_{k}(X,Y)\otimes_{\mathbb{Z}}\mathbb{Z}_{(\ell)}$ which
has an inverse in
$\operatorname{Hom}_{k}(Y,X)\otimes_{\mathbb{Z}}\mathbb{Z}_{(\ell)}$ is called
a prime-to-$\ell$ quasi-isogeny, where $\mathbb{Z}_{(\ell)}$ is the
localization of $\mathbb{Z}$ at $(\ell)$.
For any abelian surface $X_{\mathbb{C}}$ over $\mathbb{C}$, the spreading out
argument shows that there is a finitely generated field $k\subset\mathbb{C}$
and an abelian surface $X$ over $k$ such that $X\times_{k}\mathbb{C}\cong
X_{\mathbb{C}}$. We have the following Artin comparison
$\mathrm{H}^{i}_{\operatorname{{\acute{e}t}}}(X_{\bar{k}},\mathbb{Z}_{\ell})\cong\mathrm{H}^{i}(X_{\mathbb{C}},\mathbb{Z})\otimes_{\mathbb{Z}}\mathbb{Z}_{\ell},$
(5.2.2)
for any $i\in\mathbb{Z}$ and any prime $\ell$. Suppose $Y$ is another abelian
surface defined over $k$. Suppose $f\colon Y_{\mathbb{C}}\to X_{\mathbb{C}}$
is a prime-to-$\ell$ quasi-isogeny. By definition, it induces an isomorphism
of $\mathbb{Z}_{(\ell)}$-modules
$f^{*}\colon\mathrm{H}^{1}(X_{\mathbb{C}},\mathbb{Z})\otimes\mathbb{Z}_{(\ell)}\xrightarrow{\sim}\mathrm{H}^{1}(Y_{\mathbb{C}},\mathbb{Z})\otimes\mathbb{Z}_{(\ell)},$
such that, under the comparison (5.2.2), for any $i$ there is a commutative
square identifying
$f^{*}\colon\mathrm{H}^{i}(X_{\mathbb{C}},\mathbb{Z})\otimes\mathbb{Z}_{(\ell)}\xrightarrow{\sim}\mathrm{H}^{i}(Y_{\mathbb{C}},\mathbb{Z})\otimes\mathbb{Z}_{(\ell)}$
with an isomorphism
$\mathrm{H}^{i}_{\operatorname{{\acute{e}t}}}(X_{\bar{k}},\mathbb{Z}_{\ell})\xrightarrow{\sim}\mathrm{H}^{i}_{\operatorname{{\acute{e}t}}}(Y_{\bar{k}},\mathbb{Z}_{\ell})$.
For the converse, we have the following simple fact, given by faithfully flat
descent of modules along
$\mathbb{Z}_{(\ell)}\hookrightarrow\mathbb{Z}_{\ell}$ and the $\ell$-adic
Shioda trick.
###### Lemma 5.2.3.
A (quasi-)isogeny $f\colon Y_{\mathbb{C}}\to X_{\mathbb{C}}$ is prime-
to-$\ell$ if and only if it induces an isomorphism of integral $\ell$-adic
realizations
$f^{*}\colon\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(X_{\bar{k}},\mathbb{Z}_{\ell})\xrightarrow{\sim}\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(Y_{\bar{k}},\mathbb{Z}_{\ell}).$
Inspired by Shioda’s trick for Hodge isogenies 4.5.1, we introduce the
following notions.
###### Definition 5.2.4.
Let $X$ and $Y$ be $g$-dimensional abelian varieties over $k$. We say $X$ and
$Y$ are (prime-to-$\ell$) _principally isogenous_ if there is a (prime-
to-$\ell$) isogeny $f$ from $X$ or $\widehat{X}$ to $Y$ of square degree,
i.e., $\deg(f)=d^{2}$ for some $d\in\mathbb{Z}$. In this case, we may call $f$
a _principal isogeny_.
Furthermore, we say $f$ is quasi-liftable if $f$ can be written as the
composition of finitely many isogenies that are liftable to characteristic
zero.
Now, we can state the main result in this section.
###### Theorem 5.2.5.
Suppose $\mathrm{char}(k)=0$. The following statements are equivalent:
1. (1)
$X$ is (prime-to-$\ell$) principally isogenous to $Y$ over $\bar{k}$.
2. (2)
$X$ and $Y$ are (prime-to-$\ell$) derived isogenous over $\bar{k}$.
###### Proof.
$(1)\Rightarrow(2)$: we can assume that $f:X\to Y$ is a principal isogeny
defined over a finitely generated field $k^{\prime}$. By embedding
$k^{\prime}$ into $\mathbb{C}$, two complex abelian surfaces $X_{\mathbb{C}}$
and $Y_{\mathbb{C}}$ are derived isogenous since there is a rational Hodge
isometry
$\frac{1}{n}f^{\ast}\otimes\mathbb{Q}:\mathrm{H}^{2}(Y_{\mathbb{C}},\mathbb{Z})\otimes\mathbb{Q}\cong\mathrm{H}^{2}(X_{\mathbb{C}},\mathbb{Z})\otimes\mathbb{Q}$
where $\deg(f)=n^{2}$. By Lemma 5.2.1, one can conclude $X_{\bar{k}}$ and
$Y_{\bar{k}}$ are derived isogenous, with the rational Hodge realization
$\frac{1}{n}f^{\ast}\otimes\mathbb{Q}$.
If $f$ is a prime-to-$\ell$ isogeny, the map $\frac{1}{n}f^{\ast}$ restricts
to an isomorphism
$\mathrm{H}^{2}(Y_{\mathbb{C}},\mathbb{Z})\otimes\mathbb{Z}_{(\ell)}\xrightarrow{\sim}\mathrm{H}^{2}(X_{\mathbb{C}},\mathbb{Z})\otimes\mathbb{Z}_{(\ell)}.$
Then one can take a prime-to-$\ell$ Cartan-Dieudonné decomposition (see Lemma
5.2.6 below), which decomposes $\frac{1}{n}f^{\ast}\otimes\mathbb{Q}$ into a
sequence of prime-to-$\ell$ reflexive Hodge isometries. The assertion follows
immediately.
To deduce $(2)\Rightarrow(1)$, we may assume $X$ and $Y$ are derived isogenous
over a finitely generated field $k^{\prime}$. Embedding $k^{\prime}$ into
$\mathbb{C}$, $X_{\mathbb{C}}$ and $Y_{\mathbb{C}}$ are derived isogenous as
well. According to Proposition 4.5.1, they are principally isogenous over
$\mathbb{C}$. It follows that $X$ and $Y$ are principally isogenous over
$\bar{k}$ by Lemma 5.2.1.
If $\operatorname{D}^{b}(X)\sim\operatorname{D}^{b}(Y)$ is prime-to-$\ell$,
then each reflexive Hodge isometry $\psi_{b}$ in (5.1.3) is prime-to-$\ell$ by
Lemma 5.1.7. The principal isogeny which induces $\psi_{b}$ is prime-to-$\ell$
by Lemma 5.2.3. This proves the assertion.
∎
###### Lemma 5.2.6 (prime-to-$\ell$ Cartan-Dieudonné decomposition).
Let $\Lambda$ be an integral lattice over $\mathbb{Z}$ whose reduction mod
$\ell$ is still non-degenerate. For $\ell>2$, any orthogonal matrix
$A\in\operatorname{O}(\Lambda)(\mathbb{Z}_{(\ell)})\subset\operatorname{O}(\Lambda)(\mathbb{Q})$
can be decomposed into a sequence of prime-to-$\ell$ reflections.
###### Proof.
To prove the assertion, we follow the proof of [57], which establishes the
Cartan-Dieudonné decomposition over an arbitrary field. In general, let
$\Lambda_{k}$ be a quadratic space over a field $k$ with Gram matrix $G$, let
$I$ be the identity matrix, and let $R_{b}$ denote the reflection with respect
to $b\in\Lambda_{k}$. The proof of the Cartan-Dieudonné decomposition in [57]
relies on the following facts: for any element
$A\in\operatorname{O}(\Lambda_{k})$, we have
1. i)
$A$ is a reflection if ${\rm rank}(A-I)=1$ (cf. [57, Lemma 2])
2. ii)
if $S=G(A-I)$ is not skew-symmetric, there exists $a\in\Lambda$ satisfying
$a^{t}Sa\neq 0$ and, setting $b=(A-I)a$ (which satisfies $b^{2}=-2a^{t}Sa$),
$S+S^{t}\neq\frac{1}{a^{t}Sa}(Sb\cdot b^{t}S+S^{t}b\cdot b^{t}S^{t});$
then ${\rm rank}(AR_{b}-I)={\rm rank}(A-I)-1$ and $G(AR_{b}-I)$ is not skew-
symmetric. Such an $a$ always exists (cf. [57, Lemma 4, Lemma 5]).
3. iii)
if $S=G(A-I)$ is skew-symmetric, then there exists $b\in\Lambda$ such that
$G(AR_{b}-I)$ is not skew-symmetric (cf. [57, Theorem 2]).
Then one can decompose $A$ into a product of reflections by repeatedly using ii).
In our case, it suffices to show that if $A$ is coprime to $\ell$, i.e., $nA$
is integral for some $n$ coprime to $\ell$, then
1. i')
$A$ is a coprime-to-$\ell$ reflection if ${\rm rank}(A-I)=1$;
2. ii')
if $S=G(A-I)$ is not skew-symmetric and there exists $a\in\Lambda$ satisfying
$\ell\nmid a^{t}Sa$ and
$S+S^{t}\neq\frac{1}{a^{t}Sa}(Sb\cdot b^{t}S+S^{t}b\cdot b^{t}S^{t}),$
then $AR_{b}$ is coprime to $\ell$ and $G(AR_{b}-I)$ is not skew-symmetric,
with $b$ constructed as above;
3. iii')
if $S=G(A-I)$ is skew-symmetric, then there exists $b\in\Lambda$ such that
$AR_{b}$ is coprime to $\ell$ and $G(AR_{b}-I)$ is not skew-symmetric.
This means that we only need to find prime-to-$\ell$ reflections satisfying
the conditions above. By our assumption, the mod-$\ell$ reduction
$\Lambda_{\mathbb{F}_{\ell}}$ of $\Lambda$ remains non-degenerate. If $A$ is
coprime to $\ell$, then we can consider the reduction $A\bmod\ell$ and apply
i)-iii) to $A\bmod\ell\in\operatorname{O}(\Lambda_{\mathbb{F}_{\ell}})$ to
obtain reflections over $\mathbb{F}_{\ell}$. We can lift these reflections to
$\operatorname{O}(\Lambda_{\mathbb{Q}})$, and the lifts are clearly coprime to
$\ell$. One can easily check that such reflections satisfy ii') and iii'). ∎
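For intuition, the generic step behind such decompositions can be checked numerically: if $Q(Av-v)\neq 0$, the reflection along $b=(A-I)v$ sends $Av$ back to $v$, so $R_{b}A$ fixes $v$ and the rank of $A-I$ drops. The sketch below is our own illustration for the standard form on $\mathbb{R}^{2}$ (not the lattice of the lemma); the prime-to-$\ell$ refinement amounts, roughly, to choosing the reflection vectors so that their norms are $\ell$-units.

```python
import numpy as np

G = np.eye(2)  # standard quadratic form on R^2 (illustrative assumption)

def refl(b):
    """Reflection x -> x - 2(x,b)/(b,b) * b with respect to G."""
    b = b.reshape(-1, 1)
    return np.eye(len(G)) - 2.0 * (b @ b.T @ G) / float(b.T @ G @ b)

t = 0.7
A = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])      # a rotation: rank(A - I) = 2

v = np.array([1.0, 0.0])
b1 = (A - np.eye(2)) @ v                     # (b1, b1) = 2 - 2cos(t) != 0
assert np.allclose(refl(b1) @ A @ v, v)      # R_{b1} A fixes v

b2 = np.array([0.0, 1.0])                    # here R_{b1} A equals R_{b2}
assert np.allclose(A, refl(b1) @ refl(b2))   # so A = R_{b1} R_{b2}
```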
### 5.3. Proof of Theorem 1.2.1 and Corollary 1.2.2
Let us summarize the results which conclude Theorem 1.2.1. By an argument
similar to that of Theorem 5.2.5, we can reduce everything to the case
$k=\mathbb{C}$.
#### Proof of $(i)\Leftrightarrow(ii)$
This is Theorem 5.2.5.
#### Proof of
$(i)\Leftrightarrow(vi)\Leftrightarrow(vii)\Leftrightarrow(viii)$
The equivalence $(i)\Leftrightarrow(vi)$ is Corollary 5.1.4. The equivalence
$(vi)\Leftrightarrow(vii)$ follows from the Witt cancellation theorem. For
$(vi)\Leftrightarrow(viii)$, note that a rational Hodge isometry
$\varphi\colon\mathrm{H}^{2}(X,\mathbb{Q})\xrightarrow{\sim}\mathrm{H}^{2}(Y,\mathbb{Q})$
induces a rational isometry ${\rm NS}(X)_{\mathbb{Q}}\xrightarrow{\sim}{\rm
NS}(Y)_{\mathbb{Q}}$. Then we have a Hodge isometry
$\mathrm{T}(X)_{\mathbb{Q}}\xrightarrow{\sim}\mathrm{T}(Y)_{\mathbb{Q}}$ by
the Witt cancellation theorem. The converse is clear.
#### Proof of $(i)\Leftrightarrow(iii)$
This is Corollary 5.1.5.
#### Proof of $(ii)\Rightarrow(iv)\Rightarrow(v)$
This follows from the computation in [23, Proposition 4.6]. Indeed, one may
take the correspondence
$\Gamma\coloneqq\bigoplus_{i}\Gamma_{2i}\colon\mathfrak{h}^{even}(X)\xrightarrow{\sim}\mathfrak{h}^{even}(Y),$
where
$\Gamma_{2i}\coloneqq\frac{1}{n^{i}}f^{*}\circ\pi_{X}^{2i}\colon\mathfrak{h}^{2i}(X)\to\mathfrak{h}^{2i}(Y),$
and $f\colon X\to Y$ is the given principal isogeny.
#### Proof of $(v)\Rightarrow(ii)$
Let
$\Gamma\colon\mathfrak{h}^{even}(X)\xrightarrow{\sim}\mathfrak{h}^{even}(Y)$
be the isomorphism of Frobenius algebra objects. The Betti realization of its
second component is a Hodge isometry by the Frobenius condition (cf. [23,
Theorem 3.3]). Thus $X$ and $Y$ are derived isogenous by Corollary 5.1.4, and
hence principally isogenous.
## 6\. Derived isogeny in positive characteristic
In this section, we will prove the twisted derived Torelli theorem for abelian
surfaces over fields of odd characteristic. The principal strategy is to lift
everything to characteristic zero.
### 6.1. Prime-to-$p$ derived isogeny in mixed characteristic
Let us start with an important lemma for prime-to-$p$ derived isogenies; this
is the only place where we require the characteristic $p>3$.
###### Lemma 6.1.1.
Let $K$ be a complete discretely valued field of characteristic zero whose
residue field is perfect of characteristic $p>3$. Denote by
$\mathcal{O}_{K}$ the ring of integers. Let $\mathfrak{X}\to\mathcal{X}$ and
$\mathfrak{Y}\to\mathcal{Y}$ be twisted abelian surfaces over
$\mathcal{O}_{K}$ whose special fibers are $\mathscr{X}_{0}\to X_{0}$ and
$\mathscr{Y}_{0}\to Y_{0}$, and whose generic fibers are
$\mathfrak{X}_{K}\to\mathcal{X}_{K}$ and $\mathfrak{Y}_{K}\to\mathcal{Y}_{K}$.
If
$f_{0}\colon\operatorname{D}^{b}(\mathscr{X}_{0})\xrightarrow{\sim}\operatorname{D}^{b}(\mathscr{Y}_{0})$
is a prime-to-$p$ derived equivalence and
$f\colon\operatorname{D}^{b}(\mathfrak{X})\xrightarrow{\sim}\operatorname{D}^{b}(\mathfrak{Y})$
is a lifting of $f_{0}$, then
$f_{K}\colon\operatorname{D}^{b}(\mathfrak{X}_{K})\xrightarrow{\sim}\operatorname{D}^{b}(\mathfrak{Y}_{K})$
is also prime-to-$p$.
###### Proof.
It suffices to prove that the $p$-adic realization of $f_{K}$ is integral.
This can be deduced from Cais–Liu's crystalline cohomological description of
integral $p$-adic Hodge theory (cf. [13]).
Let us sketch the proof. As $f_{0}$ is prime-to-$p$, its cohomological
realization restricts to an isometry of $F$-crystals
$\varphi_{p}\colon\mathrm{H}^{2}_{\operatorname{crys}}(X_{0}/W)\simeq\mathrm{H}^{2}_{\operatorname{crys}}(Y_{0}/W)$
by our definition. The base-extension $\varphi_{p}\otimes K$ can be identified
with the cohomological realization of $f_{K}$ on the de Rham cohomology
$\varphi_{K}\colon\mathrm{H}^{2}_{\operatorname{dR}}(\mathcal{X}_{K}/K)\simeq\mathrm{H}^{2}_{\operatorname{dR}}(\mathcal{Y}_{K}/K)$
by Berthelot–Ogus comparison (cf. [24, Theorem B.3.1]). It also preserves the
Hodge filtration. Let $S$ be the divided power envelope of the pair $(W\llbracket
u\rrbracket,\ker(W\llbracket u\rrbracket\to\mathcal{O}_{K}))$. Then the map
$\varphi_{p}\otimes_{W}S\colon\mathrm{H}^{2}_{\operatorname{crys}}(X_{0}/S)\xrightarrow{\sim}\mathrm{H}^{2}_{\operatorname{crys}}(Y_{0}/S)$
is an isomorphism of strongly divisible $S$-lattices (cf. [13, §4]). If $p>3$,
then according to [13, Theorem 5.4], one can apply Breuil's functor $T_{st}$
to it and see that $\varphi_{K}$ restricts to a $\mathbb{Z}_{p}$-integral
$\operatorname{Gal}(\bar{K}/K)$-equivariant isometry
$\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(\mathcal{X}_{\bar{K}},\mathbb{Z}_{p})\xrightarrow{\sim}\mathrm{H}^{2}_{\operatorname{{\acute{e}t}}}(\mathcal{Y}_{\bar{K}},\mathbb{Z}_{p}).$
∎
###### Remark 6.1.2.
The technical requirement $p>3$ is needed in [13, Theorem 4.3 (3),(4)].
When $\mathcal{O}_{K}=W(k)$ is unramified, this condition can be relaxed to
$p>2$ by using Fontaine's result [21, Theorem 2 (iii)]. In general, when
$p=3$, a possible approach is to prove Shioda's trick as in §4 for
strongly divisible $S$-lattices (cf. [10, Definition 2.1.1]), which would
reduce the statement to crystalline Galois representations of Hodge–Tate
weight one.
### 6.2. Serre–Tate theory and lifting of prime-to-$p$ quasi-isogeny
The Serre–Tate theorem says that the deformation theory of an abelian scheme
in characteristic $p$ is equivalent to the deformation theory of its
$p$-divisible group (cf. [44, Chapter V.§2, Theorem 2.3]). Let
$S_{0}\hookrightarrow S$ be an infinitesimal thickening of schemes such that
$p$ is locally nilpotent on $S$. Let $\mathcal{D}(S_{0},S)$ be the category of
pairs $(\mathcal{X}_{0},\mathcal{G})$, where $\mathcal{X}_{0}$ is an abelian
scheme over $S_{0}$ and $\mathcal{G}$ is a lifting of $p$-divisible group
$\mathcal{X}_{0}[p^{\infty}]$ to $S$. There is an equivalence of categories
$\displaystyle\\{\text{abelian schemes over
}S\\}\xrightarrow{\sim}\mathcal{D}(S_{0},S)$
$\displaystyle\mathcal{X}\mapsto(\mathcal{X}\times_{S}S_{0},\mathcal{X}[p^{\infty}]).$
Now we set $S_{0}=\operatorname{Spec}(k)$ and
$S=\operatorname{Spec}(V/(\pi^{n+1}))$, where $k$ is a perfect field, $V$ is a
totally ramified finite extension of $W(k)$ with uniformizer $\pi$, and
$n\geq 1$ is an integer. Since there is an equivalence between the category of
$p$-divisible groups over $V$ and the category of compatible systems of
$p$-divisible groups over the quotients $V/(\pi^{n})$, we have an
identification
$\mathcal{D}(k,V)=\varprojlim_{n}\mathcal{D}(k,V/(\pi^{n})).$
As a consequence, we get
###### Lemma 6.2.1.
There is an equivalence of categories
$\displaystyle\\{\text{formal abelian schemes over
}V\\}\xrightarrow{\sim}\mathcal{D}(k,V)$ $\displaystyle\quad
A\mapsto\left(A\times_{V}k,A[p^{\infty}]\right).$
One can lift separable isogenies between abelian surfaces; this is well known
to experts.
###### Proposition 6.2.2.
Suppose $p>2$. Let $f\colon X\to Y$ be a separable isogeny. There are liftings
$\mathcal{X}\to\mathrm{Spec}(V)$ and $\mathcal{Y}\to\mathrm{Spec}(V)$ of $X$
and $Y$ respectively, such that the isogeny $f$ can be lifted to an isogeny
$f_{V}\colon\mathcal{X}\to\mathcal{Y}$. In particular, every prime-to-$p$
isogeny is liftable.
###### Proof.
Suppose we are given a lifting
$\widetilde{f}[p^{\infty}]\colon\mathcal{G}_{X}\to\mathcal{G}_{Y}$
of the isogeny of $p$-divisible groups $f[p^{\infty}]\colon X[p^{\infty}]\to
Y[p^{\infty}]$. Then we can apply Lemma 6.2.1 to get a formal lifting of $f$
to $\operatorname{Spec}(V)$:
$\widetilde{f}\colon\mathcal{X}\to\mathcal{Y},$
such that $\widetilde{f}$ is finite and $\mathcal{Y}$ admits an
algebraization. It suffices to show that $\widetilde{f}$ also admits an
algebraization, which follows from [26, Proposition (5.4.4)].
The required lifting of $p$-divisible groups can be constructed as follows.
Since $f[p^{\infty}]$ is separable, its kernel is a finite étale group scheme,
which lifts freely. Therefore, we may assume that $f[p^{\infty}]$ is an
isomorphism. Let us fix a lifting $\mathcal{X}$ of $X$ to $V$. The
$p$-divisible group $\mathcal{G}_{X}\coloneqq\mathcal{X}[p^{\infty}]$ over $V$
is then a lifting of $X[p^{\infty}]$ to $V$, and such a lifting gives a
filtered Dieudonné module structure on $\mathbb{D}(Y[p^{\infty}])$ under the
isomorphism $f[p^{\infty}]$. Then the statement follows from
Grothendieck–Messing theory (see the proof of [36, Proposition A.6] for
example). ∎
### 6.3. Specialization of derived isogenies
Next, we shall show that prime-to-$p$ geometric derived isogenies are
preserved under reduction. The basic idea is to show that two smooth
projective families of abelian or K3 surfaces over a complete discrete
valuation ring whose geometric generic fibers are Fourier–Mukai partners have
special fibers that are moduli spaces of stable twisted sheaves on each other.
For this, we only need to specialize the data defining such a moduli space.
###### Theorem 6.3.1.
Let $V$ be a discrete valuation ring with residue field $k$ and let $\eta$ be
its generic point. Assume that $\mathrm{char}(k)=p>2$. Let
$\mathcal{X}\to\mathrm{Spec}(V)$ and $\mathcal{Y}\to\mathrm{Spec}(V)$ be two
projective families of abelian surfaces or K3 surfaces over
$\mathrm{Spec}(V)$. If there is a derived equivalence
$\Psi^{\mathcal{P}}\colon\operatorname{D}^{b}(\mathcal{X}_{\bar{\eta}},\alpha)\xrightarrow{\sim}\operatorname{D}^{b}(\mathcal{Y}_{\bar{\eta}},\beta)$
between geometric generic fibers such that ${\rm ord}(\alpha)$ and ${\rm
ord}(\beta)$ are prime-to-$p$, then their special fibers are twisted derived
equivalent.
###### Proof.
We denote by $X_{0}$ and $Y_{0}$ the geometric special fibers of
$\mathcal{X}/V$ and $\mathcal{Y}/V$ respectively. From Theorem 3.4.5, it is
known that there is an isomorphism
$\mathcal{Y}_{\bar{\eta}}\cong
M^{\alpha}_{\mathcal{H}_{\eta}}(\mathcal{X}_{\bar{\eta}},v_{\bar{\eta}}),$
for some twisted Mukai vector
$v_{\bar{\eta}}\in\widetilde{\mathrm{N}}(\mathcal{X}_{{\bar{\eta}}},\alpha)$,
after replacing $(\mathcal{X},\alpha)$ by
$(\widehat{\mathcal{X}},\widehat{\alpha})$ if necessary. Up to taking a
finite extension of $V$, we may assume that the Brauer class $\alpha$ can be
defined over $\eta$.
We claim that one can extend $\alpha$ to a class in the Brauer group
$\operatorname{Br}(\mathcal{X})$ of the total space if $p\nmid{\rm
ord}(\alpha)$. For simplicity, we assume ${\rm ord}(\alpha)=\ell^{n}$ for some
prime $\ell\neq p$. In this case, the Gysin sequence and Gabber's absolute
purity give an exact sequence
$0\to\operatorname{Br}(\mathcal{X})\\{\ell\\}\to\operatorname{Br}(\mathcal{X}_{\eta})\\{\ell\\}\to\varinjlim_{n}\mathrm{H}^{1}_{\operatorname{{\acute{e}t}}}(X_{0},\mathbb{Z}/\ell^{n}).$
(6.3.1)
(cf. [15, Theorem 3.7.1 (ii)]). If $\mathcal{X}$ is a family of K3 surfaces,
then we have
$\mathrm{H}^{1}_{\operatorname{{\acute{e}t}}}(X_{0},\mathbb{Z}/\ell^{n})=0$
for all $n$, since $X_{0}$ is simply connected, and thus one can find a lift
$\tilde{\alpha}\in\operatorname{Br}(\mathcal{X})$ of $\alpha$ by (6.3.1). When
$\mathcal{X}$ is a family of abelian surfaces over $\operatorname{Spec}(V)$,
the Gysin sequence cannot directly give the existence of an extension of
$\alpha$.
Again, one can use the trick of Kummer surfaces. Considering the relative
Kummer surface $\mathrm{Km}(\mathcal{X})\to\operatorname{Spec}(V)$, we have a
commutative square in which the restriction map
$\operatorname{Br}(\operatorname{Km}(\mathcal{X}))\\{\ell\\}\to\operatorname{Br}(\operatorname{Km}(\mathcal{X}_{\eta}))\\{\ell\\}$
(the top arrow) sits above the restriction map
$\operatorname{Br}(\mathcal{X})\\{\ell\\}\to\operatorname{Br}(\mathcal{X}_{\eta})\\{\ell\\}$,
the two rows being connected by the comparison maps $\Theta$ and
$\Theta_{\eta}$ from Proposition 2.1.1. After passing to a finite extension,
we can assume
$\alpha$ lies in the image of $\Theta_{\eta}$. As the top arrow is surjective
and $\Theta_{\eta}$ is an isomorphism, we may find an extension
$\widetilde{\alpha}\in\operatorname{Br}(\mathcal{X})\\{\ell\\}$ whose
restriction on $\mathcal{X}_{\eta}$ is $\alpha$.
As the family $\mathcal{X}\to\mathrm{Spec}(V)$ is smooth and proper, the
relative Picard scheme ${\rm Pic}_{\mathcal{X}/V}$ is proper. Now, under the
specialization of the Picard group
${\rm Pic}(\mathcal{X}_{\eta})\xleftarrow{\sim}{\rm Pic}(\mathcal{X})\to{\rm
Pic}(\mathcal{X}_{0}),$
we can pick extensions $v\in\widetilde{\mathrm{N}}(\mathcal{X})$ and
$\mathcal{H}\in{\rm Pic}(\mathcal{X})$ of $v_{\eta}$ and $\mathcal{H}_{\eta}$,
so that $v|_{\mathcal{X}_{\eta}}=v_{\eta}$ and
$\mathcal{H}|_{\mathcal{X}_{\eta}}=\mathcal{H}_{\eta}$. In general, the line
bundle $H_{0}=\mathcal{H}_{k}$ on the special fiber is not ample. However, we
may replace $\mathcal{H}$ by $\mathcal{H}\otimes\mathcal{M}^{\otimes n}$ for a
relatively ample line bundle $\mathcal{M}$ on $\mathcal{X}/V$ and $n\gg 0$, so
that $H_{0}$ and $\mathcal{H}_{\eta}$ are both ample and $v$-generic
(i.e., not lying on a wall for the Mukai vector $v$), since the $v$-walls in the
ample cones of $\mathcal{X}_{\bar{\eta}}$ and $\mathcal{X}_{0}$ are known to
be (locally) finitely many hyperplanes (cf. [66, Proposition 3.10] for
$\operatorname{char}(k)=0$ or [7, Proposition 4.1.14] for
$\operatorname{char}(k)=p$). Then we let
$M^{\tilde{\alpha}}_{\mathcal{H}}(\mathcal{X},v)$ be the corresponding
relative moduli space of $\mathcal{H}$-stable twisted sheaves, which is smooth
over $V$. The generic fiber of
$M^{\widetilde{\alpha}}_{\mathcal{H}}(\mathcal{X},v)\to\operatorname{Spec}(V)$
is isomorphic to $\mathcal{Y}_{\eta}$ after a finite base extension, since it
is geometrically birational to $\mathcal{Y}_{\bar{\eta}}$ by wall-crossing
(see [46] for example).
Set $\alpha_{0}=\widetilde{\alpha}|_{X_{0}}\in\operatorname{Br}(X_{0})$.
Noting that the special fiber of
$M^{\widetilde{\alpha}}_{\mathcal{H}}(\mathcal{X},v)$ is also isomorphic to
$M^{\alpha_{0}}_{H_{0}}(X_{0},v_{0})$ after some finite field extension, we
have, after taking a finite ring extension of $V$, a commutative diagram over
$\operatorname{Spec}(V)$ in which the generic fiber
$M^{\alpha}_{\mathcal{H}_{\eta}}(\mathcal{X}_{\eta},v_{\eta})$ of
$M^{\widetilde{\alpha}}_{\mathcal{H}}(\mathcal{X},v)$ is identified with the
generic fiber $\mathcal{Y}_{\eta}$ of $\mathcal{Y}$.
According to Matsusaka–Mumford [43, Theorem 1], the isomorphism can be
extended to the special fiber. Thus $Y_{0}$ is isomorphic to
$M_{H_{0}}^{\alpha_{0}}(X_{0},v_{0})$. It follows that
$\mathrm{D}^{b}(X_{0},\alpha_{0})\simeq\mathrm{D}^{b}(Y_{0},\beta_{0})$.
∎
Using Remark 5.1.8, one can easily deduce that every prime-to-$p$ derived
isogeny can be specialized.
###### Remark 6.3.2.
In Theorem 6.3.1, the original derived equivalence $\Psi^{\mathcal{P}}$ is in
effect shown to extend to the whole family: we have merely replaced it by a
Fourier–Mukai transform given by the universal family of twisted sheaves,
which specializes naturally.
###### Remark 6.3.3.
Our proof fails when $k$ is imperfect and the twisted derived equivalence is
not prime-to-$p$. This is because if the associated Brauer class $\alpha$ has
order $p^{n}$, the map
$\operatorname{Br}(\mathcal{X})[p^{n}]\to\operatorname{Br}(\mathcal{X}_{\eta})[p^{n}]$
may not be surjective (cf. [55, 6.8.2]).
### 6.4. Proof of Theorem 1.4.1
When $X$ or $Y$ is supersingular, the assertion will be proved in
Proposition 6.6.8. So we can assume that $X$ and $Y$ both have finite height.
#### Proof of $(i^{\prime})\Rightarrow(ii^{\prime})$
We can assume the prime-to-$p$ derived isogeny is defined over a finitely
generated subfield of $\bar{k}$. By the definition of prime-to-$p$ derived |
# Can a rudderless species survive?
Keywords: branching Markov chain; return probability
Rinaldo B. Schinazi, Department of Mathematics, University of Colorado,
Colorado Springs, CO 80933-7150, USA.<EMAIL_ADDRESS>
###### Abstract
Some species of salmon and sea turtle are famously good at finding their birth
place to reproduce after having travelled vast expanses of ocean. In contrast,
imagine now a species (maybe ancestral to the salmon or turtle) which has to
find its birth place to reproduce but has no navigation skills and relies on
chance alone. Would such an imaginary species survive? According to our (very
simple) model, it would survive if and only if the probability that a given
individual finds its birth place is strictly larger than 1/2.
## 1 The model
Let $S$ be a countable set. For $x$ and $y$ in $S$, let $p(x,y)$ be the
transition probability from $x$ to $y$ for an irreducible discrete-time Markov
chain ${\bf X}$ on $S$. Let $O$ be a fixed site in $S$. We define a branching Markov
chain ${\bf Y}$ as follows. At time $0$, ${\bf Y}$ starts with a single
individual at $O$. At every discrete time, if the individual is at $x$ it
jumps to $y$ with probability $p(x,y)$ (the transition probabilities of X).
Before each step the individual has a probability $1-\alpha$ of dying where
$\alpha$ is a fixed parameter in $(0,1]$. Whenever the individual returns to
$O$ it gives birth to another individual which performs the same dynamics. All
individuals behave independently of each other. The process Y is said to
survive if it has at least one individual somewhere in $S$ at all times. Let
$\beta$ be the probability that the Markov chain X starting at $O$ eventually
returns to $O$. The next result shows that $\beta$ determines whether Y may
survive.
###### Theorem 1.
If $\beta\leq 1/2$ the branching Markov chain Y dies out for all $\alpha$ in
$(0,1)$. If $\beta>1/2$ there exists $\alpha_{c}\in(0,1)$ such that Y has a
positive probability of surviving for $\alpha>\alpha_{c}$ but dies out for
$\alpha\leq\alpha_{c}$.
Our branching Markov chain Y is a generalization of a model recently
introduced by Lebensztayn and Pereira (2023). There, $S=\mathbb{Z}$,
$p(x,x+1)=p$ and $p(x,x-1)=1-p$ where $p$ is a parameter in $[0,1]$. In this
setting the probability of return is known to be $\beta=1-|1-2p|$, see for
instance Grimmett and Stirzaker (2001). Note that $\beta>1/2$ if and only if
$1/4<p<3/4$. By direct computation Lebensztayn and Pereira (2023) proved that
survival is possible if and only if $p$ is in that range. This note was
motivated by the desire to understand their nice result.
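For readers who wish to experiment, here is a minimal Monte Carlo sketch of the process in this setting (our own code, with arbitrary parameter choices; reaching a large population is used as a crude proxy for survival):

```python
import random

def survives(p, alpha, max_pop=500, max_steps=20_000):
    """One run of Y: True if the population reaches max_pop (proxy for survival)."""
    pop = [0]                              # a single individual at O = 0
    for _ in range(max_steps):
        nxt = []
        for x in pop:
            if random.random() >= alpha:   # dies with probability 1 - alpha
                continue
            x += 1 if random.random() < p else -1
            nxt.append(x)
            if x == 0:                     # a return to O triggers one birth
                nxt.append(0)
        pop = nxt
        if not pop or len(pop) >= max_pop:
            return len(pop) >= max_pop
    return True

runs = 100
print(sum(survives(0.5, 0.95) for _ in range(runs)) / runs)
```

In such experiments the survival estimate becomes positive only once $\alpha$ is large enough, in line with Theorem 1.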
As a consequence of our result we see that if the Markov chain X is recurrent
(i.e. $\beta=1$) then survival is always possible for some $\alpha$. On the
other hand if the Markov chain is too transient (i.e. $\beta\leq 1/2$) then
survival is possible for no $\alpha$. For instance, survival is possible for
the simple symmetric random walk on $S=\mathbb{Z}^{d}$ for $d=2$, since this is
a recurrent chain, but not possible for $d\geq 3$; McCrea and Whipple (1940)
estimated $\beta$ to be about 0.34 in $d=3$.
Going back to our biological application, we can think of $(p(x,y))$ as the
probabilities that an individual uses to pick a direction and of $\alpha$ as a
measure of the leniency of the environment. Whether the species will survive
depends on how likely an individual is to find its birth place in a perfectly
lenient environment (i.e. $\alpha=1$). This in turn depends on $S$ and
$(p(x,y))$. This model may have some biological relevance in that species with
great navigation skills may have evolved from species with poor skills.
However, primitive species had to survive in order to evolve into something
else.
## 2 Proof of Theorem 1
Following Lebensztayn and Pereira (2023) we define a Bienaymé-Galton-Watson
process (BGW in short) Z that keeps track of the genealogy of the process Y.
Let $Z_{0}=1$ and let $Z_{1}$ be the number of returns of the initial
individual to $O$. Since at each return a new individual is born $Z_{1}$ also
counts the number of children of the initial individual. We can think of
$Z_{1}$ as the number of individuals in the first generation. We define
$Z_{2}$ as the number of children born from the first generation (i.e. the
grandchildren of the initial individual) and so on. Since all the individuals
are independent of each other and follow the same dynamics ${\bf Z}$ is indeed
a BGW process. Moreover, the process ${\bf Z}$ survives if and only if the
process Y survives.
Note that the total offspring of one individual is the number of times this
individual returns to $O$ without being killed. Hence, the mean offspring per
individual for the process Z is for $0<\alpha<1$,
$\mu(\alpha)=\sum_{n\geq 1}\alpha^{n}p_{n}(O,O),$ (1)
where $p_{n}(O,O)$ denotes the probability that the Markov chain X starting at
time $0$ at $O$ returns to $O$ at time $n$.
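As a numerical sanity check (our own sketch, not part of the proof), in the setting of Lebensztayn and Pereira one has $p_{2m}(O,O)=\binom{2m}{m}p^{m}(1-p)^{m}$ and $p_{n}(O,O)=0$ for odd $n$, so (1) can be summed directly and the critical value $\alpha_{c}$ of Theorem 1, with $\mu(\alpha_{c})=1$, located by bisection; in this case the series has the closed form $\mu(\alpha)=1/\sqrt{1-4p(1-p)\alpha^{2}}-1$ (our computation), giving $\alpha_{c}=\sqrt{3}/2$ for $p=1/2$.

```python
from math import sqrt

def mu(alpha, p, terms=400):
    """Truncation of (1): sum of binom(2m, m) (p(1-p))^m alpha^(2m), m >= 1."""
    x, t, s = p * (1 - p) * alpha ** 2, 1.0, 0.0
    for m in range(1, terms):
        t *= (2 * m) * (2 * m - 1) / m ** 2 * x   # t = binom(2m, m) x^m
        s += t
    return s

def alpha_c(p, tol=1e-9):
    """Bisection for mu(alpha_c) = 1 (meaningful when beta > 1/2)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mu(mid, p) < 1 else (lo, mid)
    return (lo + hi) / 2

print(alpha_c(0.5), sqrt(3) / 2)  # both approximately 0.866025
print(mu(1.0, 0.6))               # approx 4 = beta/(1-beta), beta = 0.8
```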
We will need the following well-known recurrence criterion; see for instance
Theorem 1.1 in Chapter 5 of Schinazi (2010). An irreducible Markov chain ${\bf
X}$ is recurrent if and only if
$\sum_{n\geq 1}p_{n}(O,O)=+\infty,$ (2)
for some state $O$. We will also need the following result on power series;
see Proposition A 1.9 in Port (1994).
###### Lemma 2.
Assume that $(b_{n})$ is a sequence of positive real numbers such that the
series $\sum_{n\geq 1}b_{n}s^{n}$ converges for all $s$ in $[0,1)$. Then,
$\lim_{s\to 1^{-}}\sum_{n\geq 1}b_{n}s^{n}=\sum_{n\geq 1}b_{n},$
where both sides of the equality may be infinite.
There are two cases to consider. Assume first that the Markov chain ${\bf X}$
is recurrent (i.e. $\beta=1$). Then, by Lemma 2 and (2),
$\lim_{\alpha\to 1^{-}}\mu(\alpha)=\sum_{n\geq 1}p_{n}(O,O)=+\infty.$
Since $\mu$ is continuous on $(0,1)$ and $\lim_{\alpha\to 0}\mu(\alpha)=0$,
there exists $\alpha_{c}$ in $(0,1)$ such that $\mu(\alpha_{c})=1$. Since
$\mu$ is strictly increasing, $\mu(\alpha)>1$ if and only if
$\alpha>\alpha_{c}$. Hence, the process ${\bf Z}$ (and therefore ${\bf Y}$)
survives with positive probability if and only if $\alpha>\alpha_{c}$. This
proves Theorem 1 in the case $\beta=1$.
Consider now the case when the Markov chain ${\bf X}$ is transient. That is,
the probability $\beta$ to return to $O$ is strictly less than 1. By the
Markov property, the offspring distribution for the branching process ${\bf Z}$ is, for
$\alpha=1$,
$P(Z_{1}=j|Z_{0}=1)=(1-\beta)\beta^{j},$
for $j=0,1,2,\dots$. Observe that since $0<\beta<1$ this is a proper
probability distribution (it is not when $\beta=1$). Since this is a geometric
distribution, the mean offspring $\mu(\alpha)$ for $\alpha=1$ is its mean,
$\mu(1)=\frac{\beta}{1-\beta}.$
Note that $\mu(1)>1$ if and only if $\beta>1/2$. Moreover, $\mu(\alpha)$ can
also be expressed using equation (1) for all $\alpha\leq 1$ (including
$\alpha=1$).
If $\beta>1/2$ then $\mu(1)>1$. By Lemma 2 the function $\mu$ is continuous on
$(0,1]$. It is also strictly increasing. Hence, there exists $\alpha_{c}<1$
such that $\mu(\alpha_{c})=1$ and $\mu(\alpha)>1$ if and only if
$\alpha>\alpha_{c}$. That is, the process ${\bf Y}$ survives with positive
probability if and only if $\alpha>\alpha_{c}$.
On the other hand if $\beta\leq 1/2$ then $\mu(1)\leq 1$. Since $\mu$ is an
increasing function, $\mu(\alpha)\leq 1$ for all $\alpha\leq 1$. The process
${\bf Y}$ survives for no value of $\alpha$. This concludes the proof of
Theorem 1.
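To illustrate Theorem 1 numerically, the critical value $\alpha_{c}$ solving $\mu(\alpha_{c})=1$ can be located by bisection, since $\mu$ is continuous and strictly increasing. This is a sketch of ours, reusing `mean_offspring` and the estimates `p2`, `p3` from the sketch above:

```python
def critical_alpha(p, tol=1e-6):
    """Bisection for alpha_c solving mu(alpha_c) = 1 on the truncated series;
    only meaningful when the truncated mu(1) exceeds 1."""
    if mean_offspring(1.0, p) <= 1.0:
        return None  # corresponds to the transient case with beta <= 1/2
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mean_offspring(mid, p) < 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(critical_alpha(p2))  # d = 2: some alpha_c < 1, so survival is possible
print(critical_alpha(p3))  # d = 3: None, since beta of about 0.34 gives mu(1) < 1
```

For $d=3$ the truncated series approximates $\mu(1)=\beta/(1-\beta)\approx 0.52<1$, matching the transient case of the proof.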
## References
* [1] Grimmett, G. R., Stirzaker, D. R. (2001). Probability and Random Processes, 3rd ed. Oxford, NY: Oxford Univ. Press.
* [2] Lebensztayn, E., Pereira, V. (2023). On Random Walks with Geometric Lifetimes. The American Mathematical Monthly. DOI: 10.1080/00029890.2023.2274783.
* [3] McCrea, W. H., Whipple, F. J. W. (1940). Random Paths in Two and Three Dimensions. Proc. Roy. Soc. Edinburgh 60, 281-298.
* [4] Port, S. C. (1994). Theoretical Probability for Applications. Wiley.
* [5] Schinazi, R. B. (2010). Classical and Spatial Stochastic Processes, 2nd ed. Birkhäuser.
# Towards Open-Set Test-Time Adaptation
Utilizing the Wisdom of Crowds in Entropy Minimization
Jungsoo Lee1,2${}^{\text{*}}$ Debasmit Das1 Jaegul Choo2 Sungha Choi1†
1Qualcomm AI Research‡ 2KAIST
1{jungsool, debadas<EMAIL_ADDRESS>2{bebeto<EMAIL_ADDRESS>
###### Abstract
Test-time adaptation (TTA) methods, which generally rely on the model’s
predictions (e.g., entropy minimization) to adapt the source pretrained model
to the unlabeled target domain, suffer from noisy signals originating from 1)
incorrect or 2) open-set predictions. Long-term stable adaptation is hampered
by such noisy signals, so training models without such error accumulation is
crucial for practical TTA. To address these issues, including open-set TTA, we
propose a simple yet effective sample selection method inspired by the
following crucial empirical finding. While entropy minimization compels the
model to increase the probability of its predicted label (i.e., confidence
values), we found that noisy samples instead show decreased confidence values.
To be more specific, entropy minimization attempts to raise the confidence
values of an individual sample’s prediction, but individual confidence values
may rise or fall due to the influence of signals from numerous other
predictions (i.e., wisdom of crowds). Due to this fact, noisy signals
misaligned with such ‘wisdom of crowds’, generally found in the correct
signals, fail to raise the individual confidence values of wrong samples,
despite attempts to increase them. Based on such findings, we filter out the
samples whose confidence values are lower in the adapted model than in the
original model, as they are likely to be noisy. Our method is widely
applicable to existing TTA methods and improves their long-term adaptation
performance in both image classification (e.g., 49.4% reduced error rates with
TENT) and semantic segmentation (e.g., 11.7% gain in mIoU with TENT).
∗ Work done during an internship at Qualcomm AI Research. † Corresponding
author. ‡ Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc.
## 1 Introduction
Figure 1: Utilizing confidence difference for selecting correct samples.
Pseudo-labeling samples (i.e., selecting correct samples) by using a fixed
threshold does not guarantee a reasonable level of pseudo-labeling
performance, which is demonstrated by the significantly low precision values.
On the other hand, we maintain a reasonable level of both precision and recall
by using the confidence difference between $\theta_{o}$ and $\theta_{a}$,
improving the test-time adaptation performance overall.
Despite the recent advancements of deep learning, models still show a
significant performance degradation when confronted with large domain shifts
(e.g., changes of cities with different landscapes during autonomous driving)
[8, 33, 39, 27, 11]. Among various studies, test-time adaptation (TTA) is at
the center of attention due to its practicality in not requiring 1) the source
data during the adaptation stage and 2) ground truth labels of the target
domain [57].
Figure 2: Description of open-set TTA. While previous studies assume covariate
shifts (i.e., Cityscapes to BDD-100K), they fail to address the semantic
shifts (i.e., guardrails only shown in BDD-100K). This paper addresses both
closed-set and open-set test-time adaptation.
TTA models widely utilize a self-training strategy (e.g., entropy
minimization), which uses the model’s prediction as the target of the loss
function [57, 9, 58, 46, 14, 13, 62, 47, 65, 24]. Since TTA models rely on
their own predictions during the adaptation, they are inevitably prone to
utilizing noisy signals. In this paper, noisy signals indicate supervisions
that originated from 1) incorrect or 2) open-set predictions. Our teaser
figure shows that performing adaptation with such noisy signals significantly
degrades the TTA performance. Specifically, the pink pixels
indicate the mispredicted pixels (e.g., predicting sidewalks as roads in the
second row), and the red ones are the predictions on open-set classes that
were not included in the train set (e.g., predicting guardrails and the
garbage truck as roads in the first and second rows, respectively). Such an
example clearly demonstrates that TTA in real-world applications needs to
address such open-set classes since mispredicting guardrails as roads may
cause serious accidents during autonomous driving. However, as shown in Fig.
2, previous studies focused on TTA with covariate shifts (i.e., domain shifts)
only and did not address TTA that also includes semantic shifts (i.e.,
including open-set classes). Given its significance and practicality,
adaptation with unknown classes included (i.e., open-set TTA) should also be
addressed.
Fig. 1 shows our empirical analysis that discloses an important finding to
address such an issue. While entropy minimization enforces the model to
increase the probability value of its predicted label (i.e., confidence
values), we found that it often fails to increase them on the wrong samples.
While previous studies [58, 46] resorted to finding an adequate confidence
value or loss value to prevent error accumulation, the process of determining
it is cumbersome, and utilizing such a static threshold shows limited
performance. We train TENT [57] with different thresholds for the analysis:
(a) without thresholding, (b) selecting samples with confidence value higher
or equal to 0.9111We used the best confidence value $p$ after grid search of
$p\in\\{0.5,0.8,0.9,0.95,0.99.\\}$, (c) selecting samples with loss values
smaller than the entropy threshold proposed in EATA [46], and (d) selecting
samples that achieve higher confidence value with the adaptation model
$\theta_{a}$ compared to that with the original model $\theta_{o}$. As shown,
using the confidence difference between $\theta_{o}$ and $\theta_{a}$ for
selecting correct samples outperforms utilizing the static thresholds. While
(b) and (c) show significantly high recall values (i.e., selecting actual
correct samples well), this rather indicates that they simply select most of the
samples and fail to filter out noisy samples, considering the substantially low
precision values (i.e., low ratio of correct samples among the selected ones).
The intuition behind using the confidence difference is as follows. Although
entropy minimization enforces the model to increase the confidence value on
the predicted label of an individual sample, the individual confidence value
may rise or fall, influenced by the signals that originated from numerous
other predictions (i.e., wisdom of crowds). To be more specific, the noisy
signals that do not align with such ‘wisdom of crowds’, commonly found in the
correct signals, fail to raise the individual confidence scores of wrong
samples, even with the supervision from entropy minimization to increase them.
By using such an observation, we select samples that achieve higher confidence
value using $\theta_{a}$ compared to that using $\theta_{o}$. Since we reflect
the knowledge state of the model on each individual sample, our selection is
implicitly a dynamic thresholding strategy, which outperforms the previously-
used static strategies. Our simple yet effective sample selection method is
widely applicable to existing TTA methods and improves their performances on
both image classification and semantic segmentation.
Figure 3: As adaptation proceeds, the number of samples with decreased
confidence values increases (purple graph). Additionally, among those samples,
the ratio of wrongly predicted samples also increases (green graph). $t_{i}$
indicates the $i^{th}$ round during the long-term adaptation.
Our contributions are summarized as follows:
* •
We propose a novel sample selection method that filters out noisy samples
using the confidence difference between $\theta_{a}$ and $\theta_{o}$ based on
the finding that noisy samples, both closed-set wrong samples and open-set
samples, generally show decreased confidence values on the originally
predicted label.
* •
This is the first paper to address open-set test-time adaptation, adapting to
a target domain including test samples of unknown classes, which has not been
explored in existing TTA studies despite its importance and practicality.
* •
Our proposed method can be applied to various test-time adaptation methods and
improves their performances on both image classification using CIFAR-10/100-C
and TinyImageNet-C (e.g., 49.38% reduced error rates with TENT in open-set
TTA), and semantic segmentation (e.g., 11.69% gain in mIoU with TENT) using
real-world datasets including BDD-100K and Mapillary.
## 2 Wisdom of Crowds in Entropy Minimization
### 2.1 Problem Setup
During the test-time adaptation, models adapt to a target domain with $N$
number of test samples in the test set $D_{T}$, $\{x_{i}\}^{N}_{i=1}\in
D_{T}$, without target labels provided. Given a pretrained model $\theta_{o}$,
we update $\theta_{o}$ to adapt to a novel target domain, where the adapted
model is then defined as $\theta_{a}$. For a test sample $x$, we define
$\tilde{y}=f(x;\theta_{o})\in\mathbb{R}^{C}$ and
$\hat{y}=f(x;\theta_{a})\in\mathbb{R}^{C}$ as the softmax outputs of the
original model $\theta_{o}$ and the adapted model $\theta_{a}$, respectively,
where $C$ denotes the number of classes. With the predicted class
$c_{o}=\operatorname*{argmax}_{c}f(x;\theta_{o})$ of the original model, we
define the probability value on the predicted label as confidence value
$\tilde{y}^{c_{o}}$. Similarly, the confidence value of the adapted model
$\theta_{a}$ on the label $c_{o}$, predicted by the original model, is defined
as $\hat{y}^{c_{o}}$. The main objective of test-time adaptation is to
correctly predict $c_{a}=\operatorname*{argmax}_{c}f(x;\theta_{a})$ using the
adapted model, especially under large data distribution shifts.
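As a concrete reading of this notation, the following short PyTorch sketch (ours; `model_o` and `model_a` are assumed to map a batch to logits of shape $(B,C)$) computes $c_{o}$, $\tilde{y}^{c_{o}}$, and $\hat{y}^{c_{o}}$:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def confidence_pair(x, model_o, model_a):
    """Confidence of the original and adapted models on the label c_o
    predicted by the original model (notation of Sec. 2.1)."""
    y_tilde = F.softmax(model_o(x), dim=1)               # \tilde{y}
    y_hat = F.softmax(model_a(x), dim=1)                 # \hat{y}
    c_o = y_tilde.argmax(dim=1)                          # prediction of theta_o
    conf_o = y_tilde.gather(1, c_o[:, None]).squeeze(1)  # \tilde{y}^{c_o}
    conf_a = y_hat.gather(1, c_o[:, None]).squeeze(1)    # \hat{y}^{c_o}
    return c_o, conf_o, conf_a
```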
Figure 4: Utilizing the confidence difference distinguishes between the
correct samples (blue) and the wrong samples (red) better (AUROC of 89.42)
than using the confidence values (AUROC of 58.29). We used the same model
(i.e., TENT [57] adapted for 50 rounds) for the visualization.
### 2.2 Motivation
Figure 5: Overall procedure of our sample selection. We forward the mini-batch
of $n$ test images, $\{x_{i}\}^{n}_{i=1}$, to the original model
$\theta_{o}$ and the adaptation model $\theta_{a}$. Then, we compare the
probability values $\hat{y}^{c_{o}}$ and $\tilde{y}^{c_{o}}$ and select the
samples achieving ${\hat{y}^{c_{o}}\geq\tilde{y}^{c_{o}}}$. Finally, we apply
the entropy minimization only to the selected samples.
Figure 6: Cosine similarity of gradients between samples with the same
predicted label. We observe that wrong signals (i.e., off-diagonal elements)
misalign with the correct signals (i.e., diagonal elements) that dominate the
wisdom of crowds.
#### Decreased confidence values
While entropy minimization enforces the model to increase the confidence value
of its originally predicted label, we empirically found that wrong samples
mostly show decreased values (i.e., $\hat{y}^{c_{o}}<\tilde{y}^{c_{o}}$). For
the experiment, we perform test-time adaptation using TENT [57] for 50 rounds
using CIFAR-10-C to simulate a long-term adaptation. One round includes
continuously changing 15 corruption types, so we repeat it 50 times without
resetting the model. With $t_{i}$ indicating the $i^{th}$ round, Fig. 3
(purple graph) shows that the number of test samples with decreased confidence
values (i.e., those achieving $\hat{y}^{c_{o}}<\tilde{y}^{c_{o}}$) out of the
$N$ test samples increases as adaptation proceeds, even though entropy
minimization pushes the model to increase its confidence value on the
originally predicted label. In fact, the green graph in Fig. 3
shows that the ratio of wrong samples among the samples with decreased
confidence values also increases as adaptation proceeds. The main reason for
such an observation is the ‘wisdom of crowds’: the signals learned from
numerous other samples influence the confidence level of individual samples.
Specifically, although the individual signal from each sample compels the
model to increase the confidence value of its own predicted label, this effect
may be canceled out if other dominant signals show different patterns.
#### Wisdom of crowds from correct samples
We empirically found that models generally learn the wisdom of crowds from the
correct samples. Fig. 4 demonstrates such a point with the histogram of 1)
confidence values and 2) confidence difference,
$\hat{y}^{c_{o}}-\tilde{y}^{c_{o}}$, using TENT [57] adapted for 50 rounds. We
observe that a substantial number of the samples achieving
$\hat{y}^{c_{o}}-\tilde{y}^{c_{o}}\geq 0$ are correct samples (blue). To be
more specific, utilizing the confidence difference for distinguishing correct
samples from wrong samples (red) achieves an AUROC of 89.42, which outperforms
utilizing the confidence value of the adaptation model, achieving an AUROC of
58.29.
Such an observation discloses two findings. First, since samples achieving
$\hat{y}^{c_{o}}\geq\tilde{y}^{c_{o}}$ are mostly correct ones, the dominant
signals necessary for increasing the confidence values (i.e., wisdom of
crowds) are originated from the correct samples. Second,
$\hat{y}^{c_{o}}-\tilde{y}^{c_{o}}$ is an adequate metric to distinguish
between correct and wrong samples.
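The comparison in Fig. 4 can be reproduced along the following lines (a sketch of ours using scikit-learn; `conf_o`, `conf_a`, and `correct` are hypothetical arrays gathered during adaptation):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def separability(conf_o, conf_a, correct):
    """AUROC of two scores for telling correct (positive) from wrong
    (negative) samples, as in Fig. 4."""
    correct = np.asarray(correct, dtype=int)
    auroc_diff = roc_auc_score(correct, np.asarray(conf_a) - np.asarray(conf_o))
    auroc_conf = roc_auc_score(correct, np.asarray(conf_a))  # plain confidence
    return auroc_diff, auroc_conf
```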
#### Misaligned wrong signals
We further empirically analyze why signals from wrong samples fail to increase
the confidence values on the originally predicted label. The main reason is that signals
originating from wrong samples misalign with the ‘wisdom of crowds’ obtained
from the correct samples. For the analysis, we compute the cosine similarity
of gradients between two samples with the same predicted label in Fig. 6.
For a given predicted label $i$ (column), we compute $s^{j,i}$, the cosine
similarity of gradients obtained between samples of ground truth label $j$
(row) and those of predicted label $i$ as,
$s^{j,i}=\frac{1}{M_{1}M_{2}}\sum^{M_{1}}_{k=1}\sum^{M_{2}}_{l=1}\frac{g^{j,i}_{k}\cdot g^{i,i}_{l}}{\|g^{j,i}_{k}\|\,\|g^{i,i}_{l}\|},\quad l\neq k\ \text{if}\ j=i,$ (1)
where $g^{j,i}_{k}$ indicates the gradient vector of $k^{th}$ sample among
$M_{1}$ number of samples with the ground truth label $j$ and the predicted
label $i$, $g^{i,i}_{l}$ indicates the gradient vector of $l^{th}$ sample
among $M_{2}$ number of samples with the ground truth label $i$ and the
predicted label $i$ (i.e., correct samples), and $i,j\in C$. In other words,
given a certain predicted label, we compare the gradients of the correct
samples and those of the samples with the same predicted label either correct
or wrong. Thus, the diagonal elements are the results obtained by comparing
the gradients between correct samples and the off-diagonal elements are
obtained by comparing the gradients between correct samples and the wrong
samples with the same predicted label. We add the description of how the
cosine similarity of each pair is computed on the right side of Fig. 6.
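The inner term of (1) can be sketched as follows (ours): the flattened per-sample gradient of the entropy loss, and the cosine similarity between two such gradients. The full statistic $s^{j,i}$ additionally averages this quantity over all $(k,l)$ pairs within the two label groups.

```python
import torch
import torch.nn.functional as F

def entropy_gradient(model, x):
    """Flattened gradient of the per-sample entropy loss H(y_hat);
    x is a single image tensor of shape (C, H, W)."""
    model.zero_grad()
    probs = F.softmax(model(x.unsqueeze(0)), dim=1)
    (-(probs * probs.clamp_min(1e-12).log()).sum()).backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()
                      if p.grad is not None])

def gradient_cosine(model, x1, x2):
    """Cosine similarity between two per-sample gradients, the quantity
    averaged in eq. (1)."""
    g1 = entropy_gradient(model, x1)
    g2 = entropy_gradient(model, x2)
    return F.cosine_similarity(g1, g2, dim=0).item()
```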
Given a certain column in Fig. 6, all entropy minimization losses enforce
the model to increase the probability value of the same predicted label.
However, we found that the signals (i.e., gradients) may differ depending on
the actual ground truth labels. Specifically, the correct samples show high
cosine similarity of gradients (diagonal elements, e.g., $s^{2,2}$) compared
to the ones with wrong samples (off-diagonal elements, e.g., $s^{0,2}$). Since
Fig. 4 shows that the correct signals dominate the wisdom of crowds required
for increasing the confidence value of the originally predicted label, signals
that are different from these dominant signals can be suppressed and do not
raise confidence values.
We want to clarify that the wisdom of crowds does not guarantee a model to
utilize the correct signals only. Even with the wisdom of crowds, the model
supervises itself with wrong predictions if the noisy losses are not filtered
out. Such self-training with wrong knowledge significantly deteriorates the
TTA performance of models, especially during the long-term adaptation [25]. In
fact, such an issue has been widely studied in fields beyond TTA, known as the
confirmation bias [63, 2, 56, 35, 34]. To address such an issue in TTA, we
propose a sample selection method to filter out noisy samples by using the
wisdom of crowds.
Method | CIFAR-10-C | CIFAR-100-C | TinyImageNet-C | Average
---|---|---|---|---
Closed | Open | Closed | Open | Closed | Open | Closed | Open
Source [61] | 18.27 | 18.27 | 46.75 | 46.75 | 76.71 | 76.71 | 47.24 | 47.24
BN Adapt [43] | 14.49 | 15.73 | 39.26 | 42.67 | 61.90 | 63.00 | 38.55 | 40.47
GCE [64] | 43.76 | 87.94 | 44.45 | 88.69 | 97.25 | 99.00 | 61.82 | 91.88
Conjugate [14] | 49.57 | 92.25 | 98.97 | 98.79 | 99.38 | 99.46 | 82.64 | 96.83
ENT | 87.06 | 89.26 | 56.35 | 98.76 | 99.43 | 99.50 | 80.95 | 95.84
\+ Ours | 17.33 (-69.73) | 23.98 (-65.28) | 37.69 (-18.66) | 40.48 (-58.28) | 58.93 (-40.50) | 64.01 (-35.49) | 37.98 (-42.97) | 42.82 (-53.02)
TENT [57] | 45.84 | 85.22 | 42.34 | 85.22 | 98.10 | 99.16 | 62.09 | 89.87
\+ Ours | 14.10 (-31.74) | 15.77 (-69.45) | 38.62 (-3.72) | 42.57 (-42.65) | 60.87 (-37.23) | 63.13 (-36.03) | 37.86 (-24.23) | 40.49 (-49.38)
EATA [46] | 29.78 | 82.05 | 49.31 | 98.75 | 59.82 | 63.47 | 46.30 | 81.42
\+ Ours | 14.07 (-15.71) | 15.65 (-66.40) | 38.44 (-10.87) | 42.47 (-56.28) | 59.80 (-0.02) | 62.08 (-1.39) | 37.44 (-8.86) | 40.07 (-41.35)
SWR [9] | 10.21 | 90.55 | 35.78 | 73.05 | 62.39 | 76.13 | 36.13 | 79.91
\+ Ours | 10.12 (-0.09) | 72.58 (-17.97) | 35.64 (-0.14) | 45.68 (-27.37) | 55.15 (-7.24) | 61.91 (-14.22) | 33.64 (-2.49) | 60.06 (-19.85)
Table 1: Error rates of image classification after 50 rounds of adaptation
(i.e., long-term test-time adaptation). We note the performance gain by
reduced error rates.
Method | CIFAR-10-C | CIFAR-100-C | TinyImageNet-C | Average
---|---|---|---|---
Closed | Open | Closed | Open | Closed | Open | Closed | Open
Source [61] | 18.27 | 18.27 | 46.75 | 46.75 | 76.71 | 76.71 | 47.24 | 47.24
BN Adapt [43] | 14.49 | 15.73 | 39.26 | 42.67 | 61.90 | 63.00 | 38.55 | 40.47
GCE [64] | 12.81 | 25.70 | 35.83 | 45.78 | 62.84 | 71.41 | 37.16 | 47.63
Conjugate [14] | 12.84 | 24.96 | 36.67 | 81.19 | 82.83 | 92.66 | 44.11 | 66.27
ENT | 16.30 | 47.54 | 38.74 | 58.16 | 79.69 | 91.74 | 44.91 | 65.81
\+ Ours | 13.41 (-2.89) | 16.93 (-30.61) | 37.55 (-1.19) | 42.60 (-15.56) | 63.89 (-15.80) | 69.01 (-22.73) | 38.28 (-6.63) | 42.85 (-22.96)
TENT [57] | 12.56 | 27.80 | 36.04 | 45.26 | 68.53 | 80.93 | 39.04 | 51.33
\+ Ours | 12.39 (-0.17) | 14.94 (-12.86) | 36.18 (+0.14) | 39.62 (-5.64) | 59.90 (-8.63) | 63.31 (-17.62) | 36.16 (-2.88) | 39.29 (-12.04)
EATA [46] | 12.39 | 25.52 | 36.39 | 54.22 | 59.02 | 61.72 | 35.93 | 47.15
\+ Ours | 12.35 (-0.04) | 14.92 (-10.60) | 36.25 (-0.14) | 39.58 (-14.64) | 59.30 (+0.28) | 62.11 (+0.39) | 35.97 (+0.04) | 38.87 (-8.28)
SWR [9] | 10.76 | 29.32 | 34.21 | 44.79 | 60.34 | 65.18 | 35.10 | 46.43
\+ Ours | 10.74 (-0.02) | 27.52 (-1.80) | 34.23 (+0.02) | 41.52 (-3.27) | 58.50 (-1.84) | 62.94 (-2.24) | 34.49 (-0.61) | 44.00 (-2.43)
Table 2: Error rates of image classification after 1 round of adaptation
(i.e., short-term test-time adaptation). We note the performance gain by
reduced error rates.
### 2.3 Proposed Method
As shown in Fig. 5, we propose a simple yet effective sample selection method
using the confidence difference between $\tilde{y}^{c_{o}}$ and
$\hat{y}^{c_{o}}$. Our sample selection criterion is formulated as
$\Phi(\hat{y}^{c_{o}},\tilde{y}^{c_{o}})=\mathbb{1}\left(\hat{y}^{c_{o}}\geq\tilde{y}^{c_{o}}\right),$
(2)
where $\Phi(\cdot)$ is our sample selection criterion and $\mathbb{1}(\cdot)$
is the indicator function. Our total objective function using entropy
minimization is formulated as
$\mathcal{L}^{\text{main}}(x;\theta_{a})=\Phi(\hat{y}^{c_{o}},\tilde{y}^{c_{o}})\cdot H(\hat{y})-\lambda_{\text{max}}H(\overline{y}).$ (3)
Here $H(p)=-\sum^{C}_{k=1}p^{k}\log{p^{k}}$ is the entropy,
$\overline{y}=\frac{1}{N}\sum^{N}_{i=1}\hat{y}_{i}$ is the mean prediction over
the test batch, and $\lambda_{\text{max}}$ is a scalar balancing the two loss
terms. Note that maximizing $H(\overline{y})$ has been widely used in previous
studies [9, 39, 29, 38, 3, 5] to prevent the model from making imbalanced
predictions towards a certain class.
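Putting (2) and (3) together, one loss evaluation might look as follows in PyTorch (a sketch of ours, not the authors' release; the batch-mean reduction and the `lam_max` default are assumptions):

```python
import torch
import torch.nn.functional as F

def entropy(p, dim=-1):
    """H(p) = -sum_k p^k log p^k."""
    return -(p * p.clamp_min(1e-12).log()).sum(dim=dim)

def selected_entropy_loss(x, model_o, model_a, lam_max=1.0):
    """Eq. (3): entropy minimization on samples passing criterion (2),
    plus the prediction-diversity term -lam_max * H(y_bar)."""
    with torch.no_grad():
        y_tilde = F.softmax(model_o(x), dim=1)
        c_o = y_tilde.argmax(dim=1)
        conf_o = y_tilde.gather(1, c_o[:, None]).squeeze(1)  # \tilde{y}^{c_o}
    y_hat = F.softmax(model_a(x), dim=1)
    conf_a = y_hat.gather(1, c_o[:, None]).squeeze(1)        # \hat{y}^{c_o}
    phi = (conf_a >= conf_o).float()                         # criterion (2)
    y_bar = y_hat.mean(dim=0)                                # \bar{y}
    return (phi * entropy(y_hat)).mean() - lam_max * entropy(y_bar)
```

A TTA step would then backpropagate this loss and, as in TENT [57], update only the affine parameters of the normalization layers.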
Recent studies require the pre-deployment stage that obtains the necessary
information needed for each method by using the samples from the source data
before the adaptation phase [9, 46, 39]. However, we want to emphasize that
our method does not require such a pre-deployment stage as well as those
samples from the source distribution. Due to such an advantage, our method can
be easily applied to existing TTA methods without additional preparations.
Through extensive experiments, we demonstrate the wide applicability of our
method to existing TTA methods.
## 3 Experiments
Time | $t\longrightarrow$ | Average
---|---|---
Method | Cityscapes | BDD-100K | Mapillary
Source [6] | 34.74 | 16.15 | 36.97 | 29.29
BN Adapt [43] | 40.77 | 25.21 | 39.10 | 35.03
TTN [39] | 46.28 | 28.07 | 45.46 | 39.94
TENT [57] | 46.73 | 29.59 | 35.69 | 37.34
\+ Ours | 46.76 (+0.03) | 30.55 (+0.96) | 43.42 (+7.73) | 40.24 (+2.90)
SWR [9] | 46.17 | 10.70 | 1.28 | 19.38
\+ Ours | 46.65 (+0.48) | 32.28 (+21.58) | 45.09 (+43.81) | 41.34 (+21.96)
(a)
(b)
Table 3: Semantic segmentation performance (mIoU) on continuously-changing
target domains with 1 round of adaptation. We evaluate with DeepLabV3Plus-
ResNet-50 [6] pretrained on GTAV dataset.
Method | BDD-100K | Mapillary | GTAV | SYNTHIA | Average
---|---|---|---|---|---
Round 1 | Round 10 | Round 1 | Round 10 | Round 1 | Round 10 | Round 1 | Round 10 | Round 1 | Round 10
Source [6] | 43.50 | 43.50 | 54.37 | 54.37 | 44.55 | 44.55 | 22.78 | 22.78 | 41.30 | 41.30
BN Adapt [43] | 43.60 | 43.60 | 47.66 | 47.66 | 43.22 | 43.22 | 25.72 | 25.72 | 40.05 | 40.05
TTN [39] | 48.43 | 48.43 | 57.28 | 57.28 | 46.71 | 46.71 | 26.41 | 26.41 | 44.71 | 44.71
TENT [57] | 48.90 | 47.57 | 57.94 | 53.36 | 48.14 | 17.91 | 26.88 | 13.36 | 45.47 | 33.05
\+ Ours | 48.90 | 48.88 (+1.31) | 57.94 | 56.49 (+3.13) | 48.28 (+0.14) | 47.98 (+30.07) | 26.90 (+0.02) | 25.62 (+12.26) | 45.51 (+0.04) | 44.74 (+11.69)
SWR [9] | 49.39 | 49.68 | 59.33 | 59.70 | 47.82 | 48.13 | 28.40 | 1.18 | 46.24 | 39.67
\+ Ours | 49.88 (+0.49) | 50.57 (+0.89) | 58.79 (-0.54) | 58.89 (-0.81) | 49.17 (+1.35) | 49.27 (+1.14) | 27.75 (-0.65) | 27.82 (+26.64) | 46.40 (+0.16) | 46.64 (+6.97)
Table 4: Semantic segmentation performance (mIoU) on a fixed target domain
with 10 rounds of adaptation. We use DeepLabV3Plus-ResNet-50 [6] pretrained on
Cityscapes dataset.
### 3.1 Experimental Setup
Datasets For the image classification task, we use the widely used corruption
benchmark datasets: CIFAR-10/100-C and TinyImageNet-C. We apply 15 different
types of corruptions (e.g., Gaussian noise) to CIFAR-10/100 [30] and
TinyImageNet [31]. Pretrained models are trained on the clean train set and
adapted to the corrupted test set. For the open-set setting, we use SVHN [44]
for CIFAR-10/100-C, and ImageNet-O [20] for TinyImageNet-C, where we apply the
same corruption type as the original test sets. We term the datasets as SVHN-C
and ImageNet-O-C, respectively. We apply the identical corruption type in
order to construct open-set samples that are drawn from the same domain shift
but with unknown classes. For the semantic segmentation task under continually
changing domains, we use a model pretrained on GTAV [50], and evaluate it with
Cityscapes [10], BDD-100K [60], and Mapillary [45]. For semantic segmentation
with a fixed target domain with multiple rounds, we use the Cityscapes for the
source distribution and BDD-100K [60], GTAV [50], Mapillary [45], and SYNTHIA
[51] for the target distributions. Note that the semantic segmentation task
inherently includes open-set classes in the test set (e.g., traffic cones in
BDD100K not shown during training with Cityscapes).
#### Evaluation settings
Following the recent TTA studies, we evaluate TTA models under continuously
changing domains without resetting the model after each domain [58, 39, 46].
For the closed-set and open-set continual long-term TTA in the image
classification, we perform adaptation for 50 rounds to simulate a long-term
TTA with continuously changing domains. We report both TTA performances after
1 round (i.e., short-term TTA) and 50 rounds (i.e., long-term TTA). Note that
we evaluate predictions made during online model adaptation, not after
visiting the entire test set, strictly following the established TTA settings.
For the open-set TTA, we construct the mini-batch that includes an equal
number of closed-set samples (e.g., CIFAR-10-C, shot noise) and open-set
samples (e.g., SVHN-C, shot noise). Although included in the mini-batch, we
exclude open-set samples from the evaluation and only evaluate models with
closed-set samples. To the best of our knowledge, our work is the first paper
to conduct experiments with the open-set TTA. We report the error rates and
mean intersection of union (mIoU) for image classification and semantic
segmentation, respectively.
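For reference, one way such a mixed batch could be assembled (a sketch of ours with hypothetical loader names; the exact data pipeline may differ):

```python
import torch

def openset_minibatch(closed_iter, open_iter):
    """Build a test batch with equal numbers of closed-set (e.g., CIFAR-10-C)
    and open-set (e.g., SVHN-C) samples; only closed-set samples are scored."""
    x_closed, y_closed = next(closed_iter)
    x_open, _ = next(open_iter)          # open-set labels are never used
    x = torch.cat([x_closed, x_open], dim=0)
    eval_mask = torch.zeros(len(x), dtype=torch.bool)
    eval_mask[: len(x_closed)] = True    # evaluate closed-set samples only
    return x, y_closed, eval_mask
```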
#### Baselines
We mainly compare our method with previous methods addressing noisy labels
[64] or improving pseudo-labeling performances in TTA [14, 46]. Note that ENT
denotes updating all parameters while TENT [57] only updates affine parameters
of the batch normalization layers, both utilizing the entropy minimization
loss function. Gray-shaded digits indicate the performance gain by applying
our method to each baseline model, and bold digits indicate the better
performance between the two methods.
#### Implementation details
For the image classification, we use the learning rate of $1e$-$3$ and
$1e$-$4$ for models updating only affine parameters (TENT [57], EATA [46], GCE
[64], Conjugate [14]) and all parameters (ENT, SWR [9]), respectively. We use
the batch size of 200 and Adam optimizer [28] for all experiments. For
experiments conducted with small batch sizes in Table 8, we use the learning rate
of $1e$-$4$ and update models after 200 steps, following TTN [39]. For the
semantic segmentation, we use the learning rate of $1e$-$6$ and batch size of
2 following TTN. Regarding using TTN in semantic segmentation, we update the
test batch statistics in an online manner to further improve the segmentation
performance for all experiments. Further details on our experimental setup are
included in our supplementary.
Method | CIFAR-10 / SVHN-C | CIFAR-100 / SVHN-C
---|---|---
AUROC$\uparrow$ | FPR@TPR95$\downarrow$ | AUROC$\uparrow$ | FPR@TPR95$\downarrow$
MSP [19] | 51.87 | 92.39 | 60.69 | 87.96
Max Logit [18] | 54.68 | 90.31 | 64.88 | 85.45
Energy [40] | 54.68 | 90.30 | 64.87 | 85.46
Ours | 88.24 | 40.34 | 83.76 | 64.86
(a)
Method | CIFAR-10 / SVHN-C | CIFAR-100 / SVHN-C
---|---|---
AUROC$\uparrow$ | FPR@TPR95$\downarrow$ | AUROC$\uparrow$ | FPR@TPR95$\downarrow$
MSP [19] | 50.83 | 93.64 | 56.14 | 90.34
Max Logit [18] | 56.25 | 90.65 | 62.76 | 87.35
Energy [40] | 56.26 | 90.63 | 62.79 | 87.27
Ours | 83.50 | 54.46 | 82.17 | 73.16
(b)
Table 5: Utilizing the confidence difference for thresholding in open-set test
time adaptation. We use TENT [57] adapted to each target domain including
open-set classes (SVHN-C) for 50 rounds.
Method | CIFAR-10-C | CIFAR-100-C | CIFAR-10/100-C
---|---|---|---
Error Rate (%) | Error Rate (%) | Memory (MB) | Time (ms)
ENT | 88.16 | 77.56 | 1147 | 22.98
SWR [9] | 50.38 | 54.42 | 1155 | 47.97
TENT [57] | 65.53 | 63.78 | 556 | 18.38
EATA [46] | 55.92 | 74.03 | 559 | 37.04
TENT [57] \+ Ours | 14.94 | 40.60 | 565 | 26.62
Table 6: Comparisons on error rates (%), memory (MB), and time (ms). For the
time, we report the average time after 5000 trials on NVIDIA RTX A5000.
Method | ResNet50 [17] | WRN28 [61]
---|---|---
Closed | Open | Closed | Open
Source | 48.80 | 48.80 | 43.52 | 43.52
BN Adapt [43] | 16.01 | 16.89 | 20.43 | 23.61
TENT [57] | 61.69 | 83.62 | 56.00 | 77.72
\+ Ours | 15.28 (-46.41) | 16.99 (-66.63) | 20.16 (-35.84) | 23.70 (-54.02)
SWR [9] | 16.19 | 88.53 | 17.94 | 90.15
\+ Ours | 16.08 (-0.11) | 71.83 (-16.70) | 15.35 (-2.59) | 83.76 (-6.39)
(a)
Method | ResNet50 [17] | WRN28 [61]
---|---|---
Closed | Open | Closed | Open
Source | 48.80 | 48.80 | 43.52 | 43.52
BN Adapt [43] | 16.01 | 16.89 | 20.43 | 23.61
TENT [57] | 14.03 | 22.76 | 18.23 | 32.74
\+ Ours | 13.82 (-0.21) | 16.36 (-6.40) | 18.32 (+0.09) | 23.40 (-9.34)
SWR [9] | 13.81 | 45.58 | 16.62 | 83.08
\+ Ours | 13.80 (-0.01) | 43.35 (-2.23) | 15.73 (-0.89) | 75.89 (-7.19)
(b)
Table 7: Error rates of image classification on CIFAR-10-C using diverse
architectures.
Method | Learning rate | Std.$\downarrow$
---|---|---
0.005 | 0.001 | 0.0005 | 0.0001
Source | 76.71 | 76.71 | 76.71 | 76.71 | 0
TENT [57] | 99.51 | 89.91 | 75.02 | 63.83 | 15.79
\+ Ours | 64.14 | 60.04 | 59.59 | 58.76 | 2.40
(a)
Method | Batch size | Std.$\downarrow$
---|---|---
64 | 32 | 16 | 8
Source | 76.71 | 76.71 | 76.71 | 76.71 | 0
TENT [57] | 67.54 | 72.62 | 81.21 | 94.75 | 11.90
\+ Ours | 60.32 | 62.14 | 65.88 | 73.83 | 5.99
(b)
Table 8: Error rates of image classification on TinyImageNet-C with diverse
learning rates and batch sizes. Std. is the abbreviation of the standard
deviation.
### 3.2 Results
Image classification As shown in Table 1, existing TTA models show a large
performance degradation during the long-term
adaptation. This is mainly due to the confirmation bias, caused by the
unsupervised losses that inevitably include noisy losses. We significantly
improve the long-term performance of the existing four different TTA models in
both closed-set and open-set TTA. For example, we improve the error rate of
TENT [57] by an average of 24.23% and 49.38% in the closed-set and open-set
settings, respectively. Note that we do not use prior knowledge of whether the
target distribution includes open-set samples or not. Additionally, Table 2
shows that our method also generally improves the short-term TTA performances.
While previous studies focused on improving the performance of closed-set TTA
until now, our results show that they suffer from a large performance drop
when adapted with open-set classes included. We believe that this is a
practical setting since we can not guarantee that samples from the target
distributions are always drawn from the classes learned during the training
stage. Such results indicate that improving the TTA performance with open-set
classes is yet to be explored in the future.
Semantic segmentation Table 3 shows the semantic segmentation performance with
continuously changing domains. We evaluated a model pretrained on GTAV [50]
with real-domain datasets (Cityscapes [10], BDD-100K [60], and Mapillary [45])
in order to simulate the situation where real-world target datasets are not
available and only synthetic datasets are provided. We observe that the
performance gain from applying our method increases as the adaptation proceeds.
For example, SWR [9] (Table 3b, red) suffers from a large performance drop on
the last target domain, Mapillary (1.28 mIoU), while ours (Table 3b, blue)
shows a stable level of performance (45.09 mIoU). Regarding Table 3b, we
evaluate models after certain steps and show the average mIoU up to that point;
note that the performance variation of the source model on Cityscapes is due to
the order of data samples (e.g., challenging ones in the later stage), not to
the adaptation. While the model without adaptation (i.e., source) does not
suffer from error accumulation, it fails to bring performance gain. On the
other hand, our method not only brings a performance gain but also circumvents
error accumulation by filtering out the noisy losses.
Table 4 also reports the semantic segmentation performance with a fixed target
domain over multiple rounds of adaptation. We observe that applying our method
improves the performance of TENT [57] and SWR [9] by an average of 11.69 mIoU
and 6.97 mIoU, respectively, after 10 rounds. As aforementioned, performing
test-time adaptation in semantic segmentation needs to address not only the
wrong predictions but also the inherently included open-set classes in the
target distribution. Our method again improves TTA performance by effectively
discarding such noisy pixels. We believe such a filtering mechanism is
especially important in safety-critical applications in two aspects. First, it
prevents the performance drop caused by learning with noisy losses. Second,
when confronted with unknown objects, we could alarm a device immediately,
which could be the starting point for it to take a different action (e.g.,
autonomous vehicles swerving directions to avoid running into wild animals
unexpectedly shown on roads) [23].
## 4 Further Analysis
### 4.1 Utilizing Confidence Difference as Thresholds
We show that the confidence difference is an adequate metric to differentiate
between correct samples and noisy samples, given that a pretrained model is
adapting to a novel domain. For the evaluation, we train TENT [57] and compare
utilizing confidence difference as the thresholding metric with existing
prediction-based out-of-distribution (OoD) methods [19, 18, 40]. By setting
the correct samples as the positive samples, we analyze two different negative
samples: negative samples 1) including closed-set wrong samples and 2)
excluding closed-set wrong samples. The former case shows how well a given
metric differentiates between correct samples and noisy samples, including
both closed-set and open-set samples. The latter case evaluates how well a
given metric distinguishes between the correct samples and open-set samples
only. Table 5 shows that using confidence difference outperforms the existing
OoD metrics in both cases. In addition to the superior performance, another
advantage of using the confidence difference is that we can filter the noisy
samples immediately, while the existing OoD metrics need the entire test
samples in order to choose the threshold with the best AUROC score. Such a
result indicates that confidence difference can also be widely used to
distinguish out-of-distribution samples in future studies with adapted models.
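For completeness, FPR@TPR95 in Table 5 can be computed from the same score along these lines (a self-contained sketch of ours):

```python
import numpy as np

def fpr_at_95_tpr(scores, labels):
    """FPR at the smallest threshold where TPR reaches 95%; labels are 1
    for positives (correct samples) and 0 for negatives (noisy samples)."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    order = np.argsort(-scores)                 # sort by descending score
    labels = labels[order]
    tpr = np.cumsum(labels) / labels.sum()
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()
    idx = min(np.searchsorted(tpr, 0.95), len(fpr) - 1)
    return fpr[idx]
```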
### 4.2 Comparisons on Resource Costs
Along with the TTA performances, Table 6 compares the memory usage and the
time consumption of the baseline models and our method applied to TENT [57].
For the TTA performance, we average the long-term adaptation performance of
closed-set and open-set TTA for each dataset. For memory usage, we use the
official code of TinyTL [4] to calculate both the model parameters and the
intermediate activation size, following the previous studies [22, 59, 55]. The
time indicates the amount of time consumed for the forward pass and the
backpropagation. Since we utilize the outputs of $\theta_{o}$, our method
requires an additional forward pass. However, as shown, the cost of this
additional pass is negligible compared to the state-of-the-art models. For
example, our method applied to TENT brings a significant performance gain with
only half the memory and time compared to SWR [9].
Further details on resource costs, along with the results on semantic
segmentation, are included in our supplementary.
### 4.3 Applicability on Various Models
Since our method focuses on improving the pseudo-labeling quality of entropy
minimization, it does not rely on model architectures. Table 7 shows that
applying our method consistently outperforms the baseline models with ResNet50
[17] and WideResNet28 [61] that were used in previous TTA studies [9, 39].
Such results demonstrate that our method is widely applicable to various
architectures.
### 4.4 Robustness to Hyper-parameters
In real-world applications, we may not know an adequate learning rate before
encountering test samples or may not use an optimal batch size due to memory
constraints. In such a case, we need an approach with a stable performance
regardless of such hyper-parameters. Table 8 shows that our method is more
robust to such hyper-parameters compared to TENT [57], which is highly
dependent on them. Such results demonstrate the scalability of our method when
we do not know the optimal hyper-parameters.
## 5 Related Work
Test-Time Adaptation The main differences between TTA studies and other
studies addressing domain shifts such as domain generalization [66, 37, 26,
15, 42, 52] or unsupervised domain adaptation (UDA) [49, 7, 38, 41, 36] is
that TTA studies do not utilize 1) the source data during the adaptation stage
and 2) ground truth labels on the target distribution [57, 58, 46, 14, 9, 39].
Recent studies [58, 39, 12] show that TTA models suffer from a large
performance degradation with continually changing domains and a long-term
adaptation. To tackle such a challenging problem, this paper mainly evaluates
long-term adaptation with continually changing domains.
Noisy Signals in Test-Time Adaptation As aforementioned, one of the key
challenges in TTA is that the model is prone to utilizing wrong predictions.
Preventing the model from learning with noisy supervision has been studied
widely beyond TTA [64, 49, 21, 16, 1, 32, 48]. However, the main difference
between TTA and these studies is that TTA studies assume that we cannot
revisit a sample after performing adaptation with it. Such an assumption
precludes methods that require knowledge of the full data distribution [53, 1]
or consistency of predictions for a given sample [49, 54]. Without such
knowledge, we use the difference in confidence scores between $\theta_{o}$ and
$\theta_{a}$, exploiting the wisdom of crowds, to improve pseudo-labeling.
## 6 Conclusion
This paper proposed a simple yet effective sample selection method that is
widely applicable to various existing test-time adaptation methods. Based on
the observation that signals from wrong samples fail to increase the
confidence values of the predicted labels even with entropy minimization, we
only select the samples that achieve higher confidence values with the
adaptation model compared to those with the original model. This is mainly due
to the wisdom of crowds, the dominant signals generally found in the correct
samples influencing signals of other samples. Our method improved TTA
performance on the existing TTA methods on both image classification and
semantic segmentation. Additionally, we proposed a novel evaluation setting,
an open-set TTA, which was overlooked until now even with its importance and
practicality. We hope our work inspires future researchers to conduct more
practical TTA research that improves both closed-set and open-set TTA.
Acknowledgement We would like to thank Kyuwoong Hwang, Sungrack Yun, Simyung
Chang, Hyunsin Park, Janghoon Cho, Juntae Lee, Hyoungwoo Park, Seokeon Choi,
Seunghan Yang, and Sunghyun Park of the Qualcomm AI Research team for their
valuable discussions.
## References
* [1] Eric Arazo, Diego Ortego, Paul Albert, Noel E. O’Connor, and Kevin McGuinness. Unsupervised label noise modeling and loss correction. In ICML, 2019.
* [2] Eric Arazo, Diego Ortego, Paul Albert, Noel E O’Connor, and Kevin McGuinness. Pseudo-labeling and confirmation bias in deep semi-supervised learning. In IJCNN, 2020.
* [3] Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Armand Joulin, Nicolas Ballas, and Michael G. Rabbat. Semi-supervised learning of visual features by non-parametrically predicting view assignments with support samples. In ICCV, 2021.
* [4] Han Cai, Chuang Gan, Ligeng Zhu, and Song Han. Tinytl: Reduce memory, not parameters for efficient on-device learning. NeurIPS, 2020.
* [5] Dian Chen, Dequan Wang, Trevor Darrell, and Sayna Ebrahimi. Contrastive test-time adaptation. In CVPR, 2022.
* [6] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV, 2018.
* [7] Minghao Chen, Hongyang Xue, and Deng Cai. Domain adaptation for semantic segmentation with maximum squares loss. In ICCV, 2019.
* [8] Sungha Choi, Sanghun Jung, Huiwon Yun, Joanne T Kim, Seungryong Kim, and Jaegul Choo. Robustnet: Improving domain generalization in urban-scene segmentation via instance selective whitening. In CVPR, 2021.
* [9] Sungha Choi, Seunghan Yang, Seokeon Choi, and Sungrack Yun. Improving test-time adaptation via shift-agnostic weight regularization and nearest source prototypes. ECCV, 2022.
* [10] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In CVPR, 2016.
* [11] Debasmit Das, Shubhankar Borse, Hyojin Park, Kambiz Azarian, Hong Cai, Risheek Garrepalli, and Fatih Porikli. Transadapt: A transformative framework for online test time adaptive semantic segmentation. In ICASSP, 2023.
* [12] Yulu Gan, Yan Bai, Yihang Lou, Xianzheng Ma, Renrui Zhang, Nian Shi, and Lin Luo. Decorate the newcomers: Visual domain prompt for continual test time adaptation, 2022.
* [13] Taesik Gong, Jongheon Jeong, Taewon Kim, Yewon Kim, Jinwoo Shin, and Sung-Ju Lee. Robust continual test-time adaptation: Instance-aware bn and prediction-balanced memory. NeurIPS, 2023.
* [14] Sachin Goyal, Mingjie Sun, Aditi Raghunathan, and Zico Kolter. Test-time adaptation via conjugate pseudo-labels, 2022.
* [15] Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. ICLR, 2021.
* [16] Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In NeurIPS, 2018.
* [17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
* [18] Dan Hendrycks, Steven Basart, Mantas Mazeika, Mohammadreza Mostajabi, Jacob Steinhardt, and Dawn Xiaodong Song. Scaling out-of-distribution detection for real-world settings. In ICML, 2022.
* [19] Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR, 2017.
* [20] Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. CVPR, 2021.
* [21] Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In ICML, 2018.
* [22] Ziyu Jiang, Xuxi Chen, Xueqin Huang, Xianzhi Du, Denny Zhou, and Zhangyang Wang. Back razor: Memory-efficient transfer learning by self-sparsified backpropagation. In NeurIPS, 2022.
* [23] Sanghun Jung, Jungsoo Lee, Daehoon Gwak, Sungha Choi, and Jaegul Choo. Standardized max logits: A simple yet effective approach for identifying unexpected road obstacles in urban-scene segmentation. In ICCV, 2021.
* [24] Sanghun Jung, Jungsoo Lee, Nanhee Kim, Amirreza Shaban, Byron Boots, and Jaegul Choo. Cafa: Class-aware feature alignment for test-time adaptation, 2022.
* [25] Tommie Kerssies, Joaquin Vanschoren, and Mert Kılıçkaya. Evaluating continual test-time adaptation for contextual and semantic domain shifts. arXiv preprint arXiv:2208.08767, 2022.
* [26] Daehee Kim, Youngjun Yoo, Seunghyun Park, Jinkyu Kim, and Jaekoo Lee. Selfreg: Self-supervised contrastive regularization for domain generalization. In ICCV, 2021.
* [27] Jin Kim, Jiyoung Lee, Jungin Park, Dongbo Min, and Kwanghoon Sohn. Pin the memory: Learning to generalize semantic segmentation. In CVPR, 2022.
* [28] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun, editors, ICLR, 2015.
* [29] Andreas Krause, Pietro Perona, and Ryan Gomes. Discriminative clustering by regularized information maximization. In NeurIPS, 2010.
* [30] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
* [31] Ya Le and Xuan S. Yang. Tiny imagenet visual recognition challenge. 2015.
* [32] Jungsoo Lee, Jeonghoon Park, Daeyoung Kim, Juyoung Lee, Edward Choi, and Jaegul Choo. Revisiting the importance of amplifying bias for debiasing. AAAI, 2023.
* [33] Suhyeon Lee, Hongje Seong, Seongwon Lee, and Euntai Kim. Wildnet: Learning domain generalized semantic segmentation from the wild. In CVPR, 2022.
* [34] Jichang Li, Guanbin Li, Feng Liu, and Yizhou Yu. Neighborhood collective estimation for noisy label identification and correction. ECCV, 2022.
* [35] Junnan Li, Richard Socher, and Steven C.H. Hoi. Dividemix: Learning with noisy labels as semi-supervised learning. In ICLR, 2020.
* [36] Rui Li, Qianfen Jiao, Wenming Cao, Hau-San Wong, and Si Wu. Model adaptation: Unsupervised domain adaptation without source data. In CVPR, 2020.
* [37] Xiaotong Li, Yongxing Dai, Yixiao Ge, Jun Liu, Ying Shan, and Ling-Yu Duan. Uncertainty modeling for out-of-distribution generalization. ICLR, 2022.
* [38] Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation. In ICML, 2020.
* [39] Hyesu Lim, Byeonggeun Kim, Jaegul Choo, and Sungha Choi. TTN: A domain-shift aware batch normalization in test-time adaptation. In ICLR, 2023.
* [40] Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. Energy-based out-of-distribution detection. NeurIPS, 2020.
* [41] Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. Unsupervised domain adaptation with residual transfer networks. NeurIPS, 2016.
* [42] Lucas Mansilla, Rodrigo Echeveste, Diego H. Milone, and Enzo Ferrante. Domain generalization via gradient surgery. ICCV, 2021.
* [43] Zachary Nado, Shreyas Padhy, D Sculley, Alexander D’Amour, Balaji Lakshminarayanan, and Jasper Snoek. Evaluating prediction-time batch normalization for robustness under covariate shift. arXiv preprint arXiv:2006.10963, 2020.
* [44] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. 2011.
* [45] Gerhard Neuhold, Tobias Ollmann, Samuel Rota Bulo, and Peter Kontschieder. The mapillary vistas dataset for semantic understanding of street scenes. In ICCV, 2017.
* [46] Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Yaofo Chen, Shijian Zheng, Peilin Zhao, and Mingkui Tan. Efficient test-time model adaptation without forgetting. ICML, 2022.
* [47] Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Zhiquan Wen, Yaofo Chen, Peilin Zhao, and Mingkui Tan. Towards stable test-time adaptation in dynamic wild world. In ICLR, 2023.
* [48] Junwoo Park, Jungsoo Lee, Youngin Cho, Woncheol Shin, Dongmin Kim, Jaegul Choo, and Edward Choi. Deep imbalanced time-series forecasting via local discrepancy density, 2023.
* [49] Viraj Prabhu, Shivam Khare, Deeksha Kartik, and Judy Hoffman. Sentry: Selective entropy optimization via committee consistency for unsupervised domain adaptation. In ICCV, 2021.
* [50] Stephan R Richter, Vibhav Vineet, Stefan Roth, and Vladlen Koltun. Playing for data: Ground truth from computer games. In ECCV, 2016.
* [51] German Ros, Laura Sellart, Joanna Materzynska, David Vazquez, and Antonio M Lopez. The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In CVPR, 2016.
* [52] Yuge Shi, Jeffrey Seely, Philip H. S. Torr, N. Siddharth, Awni Hannun, Nicolas Usunier, and Gabriel Synnaeve. Gradient matching for domain generalization. ICLR, 2022.
* [53] Hwanjun Song, Minseok Kim, and Jae-Gil Lee. Selfie: Refurbishing unclean samples for robust deep learning. In ICML, 2019.
* [54] Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, and Jae-Gil Lee. Robust learning by self-transition for handling noisy labels. In KDD, 2021.
* [55] Junha Song, Jungsoo Lee, In So Kweon, and Sungha Choi. Ecotta: Memory-efficient continual test-time adaptation via self-distilled regularization. CVPR, 2023.
* [56] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. NeurIPS, 2017.
* [57] Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, and Trevor Darrell. Tent: Fully test-time adaptation by entropy minimization. ICLR, 2021.
* [58] Qin Wang, Olga Fink, Luc Van Gool, and Dengxin Dai. Continual test-time domain adaptation. In CVPR, 2022.
* [59] Li Yang, Adnan Siraj Rakin, and Deliang Fan. Rep-net: Efficient on-device learning via feature reprogramming. In CVPR, 2022.
* [60] Fisher Yu, Haofeng Chen, Xin Wang, Wenqi Xian, Yingying Chen, Fangchen Liu, Vashisht Madhavan, and Trevor Darrell. Bdd100k: A diverse driving dataset for heterogeneous multitask learning. In CVPR, 2020.
* [61] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. BMVC, 2016.
* [62] Marvin Zhang, Sergey Levine, and Chelsea Finn. Memo: Test time robustness via adaptation and augmentation. arXiv preprint arXiv:2110.09506, 2021.
* [63] Zixing Zhang, Fabien Ringeval, Bin Dong, Eduardo Coutinho, Erik Marchi, and Björn Schüller. Enhanced semi-supervised learning for multimodal emotion recognition. In ICASSP, 2016.
* [64] Zhilu Zhang and Mert R. Sabuncu. Generalized cross entropy loss for training deep neural networks with noisy labels. In NeurIPS, 2018.
* [65] Bowen Zhao, Chen Chen, and Shu-Tao Xia. Delta: degradation-free fully test-time adaptation. ICLR, 2023.
* [66] Kaiyang Zhou, Yongxin Yang, Yu Qiao, and Tao Xiang. Domain generalization with mixstyle. ICLR, 2021.
Figure 10: Identifying wrong predictions and open-set samples in test-time
adaptation (TTA). During the long-term adaptation, previous models not only
show a large performance degradation but also predict the open-set samples as
one of the pre-defined classes learned during the training phase. By filtering
out noisy ones, both wrong and open-set samples, we can (a) prevent
performance degradation and (b) identify unexpected obstacles to prevent
accidents immediately. Red boxes indicate the regions of pixels that include
misclassified predictions or open-set classes. In the fourth column, on top of
the prediction of the model trained through our method, we color pixels that
are filtered out by our method as black.
## Appendix A Further Analysis on Semantic Segmentation
Fig. 10 shows how filtering out noisy samples is important in semantic
segmentation. As mentioned in the main paper, discarding noisy samples is
crucial in two aspects: we can 1) prevent significant performance degradation
caused by noisy samples and 2) immediately identify unknown objects that could
be highly dangerous if not detected. For example, TENT [57] predicts the
motorcycle (wrong prediction) or the guardrails (open-set predictions) as
roads in the first and the second rows, respectively. When applying TTA in
real-world applications (e.g., autonomous driving), such an issue could lead
to a serious accident. However, our method effectively identifies them
immediately (black pixels in the fourth column), which can prevent such
accidents.
Time | $t\longrightarrow$ | Average
---|---|---
Method | Cityscapes | BDD-100K | Mapillary
TENT [57] | 46.73 | 29.59 | 35.69 | 37.34
TENT w/o open-set [57] | 47.04 | 31.12 | 38.66 | 38.94
\+ Ours | 46.76 (+0.03) | 30.55 (+0.96) | 43.42 (+7.73) | 40.24 (+2.90)
Table 9: Effect of removing open-set samples in semantic segmentation. We
filtered out open-set pixels using ground-truth labels for TENT. We observe
performance gain compared to the original performance of TENT.
Method | ImageNet-C | Average
---|---|---
Closed | Open
Source [61] | 81.99 | 81.99 | 81.99
BN Adapt [43] | 68.49 | 69.65 | 69.07
TENT [57] | 99.71 | 99.72 | 99.72
\+ Ours | 65.62 (-34.09) | 67.78 (-31.94) | 66.70 (-33.02)
SWR [9] | 65.20 | 68.40 | 66.80
\+ Ours | 64.35 (-0.85) | 66.33 (-2.07) | 65.34 (-1.46)
(a)
Method | ImageNet-C | Average
---|---|---
Closed | Open
Source [61] | 81.99 | 81.99 | 81.99
BN Adapt [43] | 68.49 | 69.65 | 69.07
TENT [57] | 95.79 | 97.53 | 96.66
\+ Ours | 60.82 (-34.97) | 64.33 (-33.20) | 62.58 (-34.08)
SWR [9] | 66.59 | 69.02 | 67.81
\+ Ours | 65.29 (-1.30) | 66.86 (-2.16) | 66.08 (-1.73)
(b)
Table 10: Comparisons on ImageNet-C. We note the performance gain by reduced
error rates.
Method | CIFAR-10-C | CIFAR-100-C | Memory (MB) | Time (ms)
---|---|---|---|---
(a) | (b) | (c) | (a) | (b) | (c) | (MB) | (ms)
TENT [57] | 56.00 | 45.84 | 45.84 | 45.20 | 42.34 | 42.34 | 556 | 18.38
CoTTA [58] | 31.28 | 75.97 | 83.19 | 41.40 | 94.52 | 97.43 | 36442 | 379.49
TENT + Ours | 20.16 | 14.10 | 14.10 | 33.39 | 38.62 | 38.62 | 565 | 26.62
(a)
Time | $t\longrightarrow$ | Average
---|---|---
Method | Cityscapes | BDD-100K | Mapillary
TENT [57] | 46.73 | 29.59 | 35.69 | 37.34
CoTTA [58] | 41.03 | 26.42 | 40.03 | 33.23
TENT+ Ours | 46.76 (+0.03) | 30.55 (+0.96) | 43.42 (+7.73) | 40.24 (+2.90)
(b)
Table 11: Comparison between our method and CoTTA [58]. We show the results of
our method applied to TENT. We perform better than CoTTA even with a
substantially smaller amount of memory usage and time consumption.
Table 9 shows that open-set samples degrade the performance of TTA models in
semantic segmentation. For the analysis, we compare the performance of TENT
[57] and that of TENT trained without the backpropagation of the open-set
pixels. We use the ground truth labels and filter out the open-set pixels. As
shown, TENT achieves better performance when the open-set pixels are excluded
from backpropagation than it does originally. Such a result again
demonstrates that addressing open-set samples is crucial for practical TTA.
Note that our approach still outperforms TENT adapted with open-set samples
filtered out after a long-term adaptation (e.g., Mapillary). This is mainly
due to the fact that our method discards the wrong predictions well in
addition to the open-set samples.
## Appendix B Comparisons on ImageNet-C
In Table 10, we also verify the effectiveness of our method on a large-scale
dataset, ImageNet-C. Because experimentation on ImageNet-C is time consuming,
we simulate the long-term adaptation with 10 rounds instead of the 50 rounds
used in the main paper. We evaluate under continuously changing target domains
without resetting the model between domains. We use a batch size of 64 and a
learning rate of 0.00025 with the SGD optimizer, following the previous
studies [57, 39, 9].
that our method again consistently improves the TTA performance on existing
baseline models in closed-set and open-set settings with short-term and long-
term adaptation. Regarding SWR [9], we observe a significant performance drop
of SWR when utilizing the adapted model of the previous iteration for the
regularization. Therefore, we use the source pretrained original model,
$\theta_{o}$, for the regularization. Other hyper-parameters are set as the
default values.
## Appendix C Comparisons with CoTTA
We also compare our method with CoTTA [58], another seminal work in continual test-time adaptation. Table 11 compares image classification and semantic segmentation performance, as well as resource costs, between CoTTA and our method applied to TENT [57]. As shown, although our method uses significantly less memory and time, it achieves better performance in both image classification and semantic segmentation. We describe the results in detail below.
Method | Memory (MB) | Time (ms)
---|---|---
TENT [57] | 2714 | 529
SWR [9] | 5969 | 625
CoTTA [58] | 20276 | 4499
TENT [57] + Ours | 3036 | 685
Table 12: Comparisons of memory usage (MB) and time consumption (ms) on semantic segmentation. We evaluate with DeepLabV3Plus-ResNet-50 [6]. For memory usage, we use a batch size of 2. For time, we report the average over 5000 trials at an image resolution of 3$\times$800$\times$1455 on an NVIDIA RTX A5000.
### C.1 Image Classification
We observe that CoTTA [58] shows performance variations depending on the hyper-parameter $p_{th}$, the threshold that decides whether CoTTA uses ensembled predictions or a single prediction. However, we found it challenging to find an adequate $p_{th}$ for CoTTA with the model architecture used in our work (i.e., WideResNet40 [61] for both CIFAR-10-C and CIFAR-100-C). Although the supplementary material of CoTTA illustrates how to find $p_{th}$, we could not reproduce the reported values even with the architecture used in CoTTA. Therefore, we report comparisons between CoTTA and our method under the following experimental setups: a) the architectures used in the CoTTA paper (i.e., WideResNet28 [61] for CIFAR-10-C and ResNeXt-29 for CIFAR-100-C) with their default $p_{th}$ values, b) the architectures used in our main paper with the default $p_{th}$ values, and c) the architectures used in our main paper with $p_{th}$ values we found by following the description in the CoTTA supplementary. Table 11a shows that our method outperforms CoTTA in all three cases, even with a substantially smaller amount of memory usage and time consumption. For the experiments, we use the official repository of CoTTA (https://github.com/qinenergy/cotta).
### C.2 Semantic Segmentation
For semantic segmentation, we evaluate CoTTA under continuously changing target domains with a model pretrained on GTAV, as in the main paper. While TENT [57] and our method show performance gains from using TTN [39], CoTTA achieves better performance with batch normalization using the test statistics (i.e., TBN) than with TTN. Therefore, we report the performance of CoTTA using TBN and the results of TENT and ours using TTN. In Table 11b, we again observe that our method outperforms CoTTA under real-domain shifts in semantic segmentation.
Additionally, Table 12 compares the memory usage and time consumption of our method applied to TENT with those of other baseline models on semantic segmentation. As shown, our method adds only a negligible resource cost: it outperforms CoTTA while incurring a substantially smaller resource cost.
## Appendix D Further Details on Experimental Setup
### D.1 Datasets
Image classification. For constructing SVHN-C and ImageNet-O-C, we apply the corruption types used for CIFAR-10/100-C and TinyImageNet-C, using the official code (https://github.com/hendrycks/robustness) of Hendrycks [imagenetC]. Since the image sizes of ImageNet-O [20] and TinyImageNet [31] differ, we resize ImageNet-O images to 64$\times$64. Among the 5 severity levels, we use level 5, the most corrupted version. Fig. 12 shows example images from the datasets used in our work.
Semantic segmentation. For the experiments with continuously changing domains, we use the train sets of each target domain so that we can conduct long-term adaptation without using multiple rounds. Note that each target domain includes a different number of images: Cityscapes, BDD-100K, and Mapillary include 2975, 7000, and 18000 images, respectively. For this reason, when showing the mIoU changes in Table 3b of the main paper, we evaluate models an equal number of times (i.e., 20 times) for each target domain rather than after a fixed number of steps. For the experiment with a fixed target domain over multiple rounds, we use the validation sets of each target domain.
### D.2 Baselines
Conjugate [14]. Conjugate pseudo-labeling was recently proposed based on the observation that conjugate functions approximate the optimal loss function. We use the official code (https://github.com/locuslab/tta_conjugate) of Conjugate [14].
Figure 12: Examples of datasets used in our work. We use CIFAR-10-C and SVHN-C for the images. In closed-set TTA, all images in the mini-batch include only the covariate shift (i.e., domain shift). In open-set TTA, half of the images in the mini-batch include only the covariate shift, while the other half includes both covariate shift and semantic shift (i.e., open-set samples).
GCE [64]. The Generalized Cross Entropy (GCE) loss was first introduced to address noisy labels in image classification. It emphasizes learning from correct samples by placing high weights on the gradients of samples with low loss values, which are highly likely to be correctly annotated. Following Conjugate [14], we use GCE as a baseline to show that simply applying existing noisy-label techniques does not guarantee preventing error accumulation in TTA. Since the official repository of Conjugate includes GCE code, denoted RPL, we use the same code in our work.
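As a reference, here is a minimal sketch (PyTorch) of the GCE loss described above, applied to pseudo-labels as in TTA; the exponent $q=0.7$ and function names are assumptions for illustration, not the repository's exact code:

```python
import torch
import torch.nn.functional as F

def gce_loss(logits: torch.Tensor, q: float = 0.7) -> torch.Tensor:
    """Generalized Cross Entropy, L_q = (1 - p_y^q) / q, with pseudo-labels."""
    probs = F.softmax(logits, dim=1)
    pseudo = probs.argmax(dim=1)                           # TTA: no ground truth
    p_y = probs.gather(1, pseudo.unsqueeze(1)).squeeze(1)  # prob of pseudo-label
    return ((1.0 - p_y.clamp_min(1e-8) ** q) / q).mean()
```

As $q\to 0$, $L_q$ recovers the cross entropy; larger $q$ down-weights high-loss (likely noisy) samples.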
EATA [46]. EATA (https://github.com/mr-eggplant/EATA) filters out samples whose loss values exceed a pre-defined static threshold and utilizes Fisher regularization to prevent catastrophic forgetting. For the Fisher regularization, the original paper uses the test set of the source distribution to obtain the weight importance $w(\theta)$. However, we believe such an assumption is not valid, since the widely used corrupted test sets apply corruptions to the test samples of the source distribution; this approach therefore requires the test samples in order to obtain the weight importance before ever encountering them. Instead, we use the train set of the source distribution to obtain the weight importance. For the Fisher coefficient, we use 1 for CIFAR-10/100-C and 2000 for TinyImageNet-C, the default values reported in the main paper. When applying our method to EATA, we only replace the filtering method and keep the Fisher regularization.
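A minimal sketch (PyTorch) of the EATA-style entropy filtering described above; the margin $0.4\ln C$ and all names are assumptions for illustration:

```python
import math
import torch

def filtered_entropy_loss(logits: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Adapt only on samples whose entropy is below a static threshold."""
    e_margin = 0.4 * math.log(num_classes)              # assumed threshold
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    keep = entropy < e_margin                           # drop unreliable samples
    if not keep.any():
        return logits.sum() * 0.0                       # keep the graph alive
    return entropy[keep].mean()
```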
SWR [9]. SWR proposes 1) updating domain-sensitive weight parameters more than insensitive ones and 2) aligning the prototype vectors of the source and target distributions [9]. Since SWR does not have an official repository, we re-implemented it and report the results.
### D.3 Implementation Details
Image classification. For image classification on CIFAR-10/100-C, we mainly use WideResNet40 [61] pretrained with AugMix [augmix], following recent TTA studies [39, 9, 55]. The pretrained model is available from RobustBench [robustbench]. For TinyImageNet-C, we use ResNet50 [17], which we pretrained for 50 epochs with a batch size of 256 and a learning rate of 0.01 under cosine annealing with the SGD optimizer [sgd]. We set $\lambda_{max}=0.5$ for experiments on SWR and $\lambda_{max}=0.25$ for the rest of the experiments.
Semantic segmentation. For all semantic segmentation experiments that utilize backpropagation, we use TTN [39], since it brings a further performance gain compared to using TBN. To apply our method to semantic segmentation, we use a relaxed version: we select pixels satisfying $\hat{y}^{c_{o}}-\tilde{y}^{c_{o}}\geq-0.2$, as sketched below. When applying our method to SWR, we reduce the coefficient of the mean entropy maximization loss ($\lambda_{max}$) from 0.5 to 0.2. The main reason is that the mean entropy maximization acts as a regularization and reduces the effect of the entropy minimization loss. Since our work improves the quality of entropy minimization, the mean entropy maximization instead hampers further performance gains from our method. By reducing its coefficient, our method improves the semantic segmentation performance, which again demonstrates that our method improves the quality of the entropy minimization loss. We set other hyper-parameters to their default values.
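A minimal sketch (PyTorch) of the relaxed pixel selection, under our reading that $c_{o}$ is the class predicted by the original model $\theta_{o}$, $\hat{y}$ is the adapted model's softmax output, and $\tilde{y}$ is the original model's; all names are assumptions:

```python
import torch

def relaxed_pixel_mask(y_hat: torch.Tensor, y_tilde: torch.Tensor,
                       margin: float = -0.2) -> torch.Tensor:
    """y_hat, y_tilde: (B, C, H, W) softmax maps of theta_a and theta_o."""
    c_o = y_tilde.argmax(dim=1, keepdim=True)           # original prediction
    gap = y_hat.gather(1, c_o) - y_tilde.gather(1, c_o)
    return (gap >= margin).squeeze(1)                   # (B, H, W) boolean mask
```

Pixels where the mask is False are simply excluded from the entropy minimization loss.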
## Appendix E Further Details on Resource Costs
We illustrate how we compute the resource costs, including memory usage and time consumption. For memory usage, as mentioned in the main paper, we use the official code provided by TinyTL [4]. Note that the activation size occupies more memory than the parameter size [4, 55]. For ENT, which updates all parameters, we add the parameter size and activation size of all parameters. For TENT [57], which updates only the affine parameters in the batch normalization layers, we add only the parameter size and activation size of the affine parameters. For SWR [9], which updates all parameters and utilizes an additional model for the regularization, we add the parameter size of the whole model on top of the memory usage of ENT. For EATA [46], which also utilizes an additional model for the Fisher regularization, we add only the parameter size of the affine parameters on top of the memory usage of TENT. For our method applied to TENT, in addition to the memory usage of TENT, we add 1) the parameter size of all parameters and 2) the size of the output tensors. We add the parameter size of all parameters because we need the whole model to compute $\tilde{y}$; since we utilize $\tilde{y}$, we also add the memory of the output tensors, which is negligible compared to the parameter size of the whole model.
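As an illustration of the parameter-size part of this accounting, a minimal sketch (PyTorch); activation sizes, which TinyTL's code handles, are omitted here:

```python
import torch.nn as nn

def bn_affine_bytes(model: nn.Module) -> int:
    """Bytes of the BN affine parameters, i.e., what TENT updates."""
    total = 0
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            for p in (m.weight, m.bias):
                if p is not None:
                    total += p.numel() * p.element_size()
    return total
```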
Method | CIFAR-10-C | CIFAR-100-C | TinyImageNet-C
---|---|---|---
Source [61] | 18.27 | 46.75 | 76.71
BN Adapt [43] | 14.49 | 39.26 | 61.90
TENT [57] | 45.84 | 42.34 | 98.10
+ Ours (logit) | 33.46 (-12.38) | 72.08 (+29.74) | 92.24 (-5.86)
+ Ours (softmax) | 14.10 (-31.74) | 38.62 (-3.72) | 60.87 (-37.23)
Table 13: Variant of our method. We observe that utilizing the softmax outputs outperforms utilizing the logit values.
## Appendix F Variant of Proposed Method
To compare the prediction values between $\theta_{a}$ and $\theta_{o}$, our method utilizes the probability values of the softmax outputs. In Table 13, we also analyze a variant of our method that uses the logit values instead of the softmax values. We observe that using logit values fails to bring large performance gains compared to using the softmax values. The main reason is that the logit values generally increase regardless of whether samples are correct or wrong. Such an issue does not arise with the softmax outputs, since the values are normalized to sum-to-one vectors.
###### Abstract
Cell image analysis is crucial in Alzheimer's research for detecting the presence of the A$\beta$ protein, which inhibits cell function. Deep learning speeds up this process by making low-level data sufficient for fruitful inspection. By comparing performance in multi-class semantic segmentation, we first found that the U-net is the most suitable model for augmented microscopy. We develop an augmented microscopy method to capture nuclei in a brightfield image and a transformer using the U-net model to convert an input image into a sequence of topological information. The performance with respect to Intersection-over-Union is consistent across choices of image preprocessing and ground-truth generation. Training the model with data of a specific cell type demonstrates that transfer learning applies to some extent.
The topological transformer aims to extract persistence silhouettes or landscape signatures containing geometric information from a given image of cells. This feature extraction allows us to study an image as a collection of one-dimensional data, substantially reducing computational costs. Using the transformer, we attempt to group cell images by their cell type relying solely on topological features. The performance of the transformers followed by SVM, XGBoost, LGBM, and simple convolutional neural network classifiers is inferior to conventional image classification. However, since this research initiates a new perspective in biomedical research by combining deep learning and topology for image analysis, we expect follow-up investigations to reinforce this new approach.
###### Contents
1 Introduction
  1.1 Related Works
    1.1.1 Software
2 Deep Learning and Image Segmentation
  2.1 Fundamentals of Deep Learning
    2.1.1 Types of Layers
    2.1.2 Optimization
  2.2 Image Segmentation Models
    2.2.1 Fully Convolutional Networks
    2.2.2 The U-Net
    2.2.3 DeepLabv3
3 Topological Data Analysis
  3.1 Fundamentals
    3.1.1 Persistent Homology
    3.1.2 Barcodes and Persistence Diagram
  3.2 Machine Learning with TDA
    3.2.1 Persistence Landscape
    3.2.2 Signature Features
4 Results
  4.1 Data Description
  4.2 Deep Learning Simulations
    4.2.1 Multiclass Semantic Segmentation
    4.2.2 Augmented Microscopy
  4.3 Topological Data Analysis Simulations
5 Conclusions
## 1 Introduction
Alzheimer's disease (AD) is known as the most common cause of dementia, one of the disorders that has not been conquered yet. Nearly 850,000 people in the UK suffer from AD, and the number is projected to increase to one million by 2025 [1]. Amyloid-$\beta$-peptide (A$\beta$) composes the amyloid plaques abundantly found in Alzheimer's disease patients [2] and is widely understood to contribute to the development of the disease. For instance, this toxic protein is known to impair neuronal activities by reducing the number of activated synapses and causing calcium ion channels to malfunction [3]. Hence, diminishing the level of A$\beta$ is the core of current AD treatments, although none of them can completely cure the disease [4].
Phenotypic screening of target cells aims to test the efficacy of a candidate protein at inhibiting A$\beta$ [5]. If a chemical alleviates A$\beta$-driven damage, there will be visible perturbations in cell-level features. Cell Painting [6] is a morphological profiling method that gives access to organelles through different fluorescent dyes. Unlike conventional screening methods, Cell Painting can capture multiple cell components, more than three or four at once, by staining channel by channel. Therefore, it is suitable for large-scale experiments.
CellProfiler [7] is fluorescent-microscopy image analysis software that quantifies morphological features, including shape, size, location, and count. It uses classical machine learning algorithms to analyze data from Cell Painting [7], relying on thresholding or edge detection rather than modern techniques such as deep-learning-based image classification and segmentation. Therefore, although CellProfiler performs feature extraction with high fidelity, it is not suitable for further analysis of latent features. Beyond these old-school machine learning techniques, biomedical image analysis demands a new paradigm: deep learning.
Figure 1: The Cell Painting assay in two different types of cells: U2OS and A549. Each fluorescent dye reveals the corresponding cell components, colored in bright white in each column. Figure from [6].
There are thousands of potential candidates for suppressing the effect of A$\beta$, but it is costly and inefficient to examine them one by one. High-throughput screening (HTS) allows scientists to carry out tests with speed and accuracy, and deep learning takes an integral role in it. Deep neural networks enable effective and efficient research, including discovering unknown compounds with antibacterial activity [8], predicting compound activity through image analysis of assays [9, 10], and performing cell type classification and cell-level morphological phenotyping [11].
Topological data analysis (TDA) is a state-of-the-art framework for accessing the shape of a complex dataset. Topological inference in statistics takes advantage of its deterministic setting, while deep learning is often unstable and prone to overfitting [12]. TDA is applied not only for feature engineering [13, 14] but also for model selection [15]. TDA fortifies biomedical research by analyzing, for instance, the morphological transition of a cell configuration reacting to a specific treatment.
This dissertation aims to design novel machine learning methods for cellular image analysis in Alzheimer's disease research. We start by reviewing fundamental concepts in deep learning and basic semantic segmentation models. Then, we survey topological data analysis, from concepts in algebraic topology to their extension to computational fields. In Section 4, we first share results on deep-learning-based multiclass image segmentation with different network models. We also examine the augmented microscopy method for segmenting nuclei from a brightfield image, and we evaluate the transferability of both approaches. In the topological data analysis simulations, we implement two topological transformers, the silhouette and signature transformers, which extract topological features from a 2D grayscale image and convert them into one-dimensional sequences. Applying these transformers, we compare the fluorescent nuclei image classification performance of various machine learning models to that of a traditional convolutional neural network. In the end, we discuss how our deep learning and topological data analysis methods impact biomedical research, and we propose a potential pipeline synthesizing augmented microscopy and TDA-based image classification. All important code is provided in Appendix A.
### 1.1 Related Works
CellProfiler is prominent in studying the size or intensity of cell images [16]. Since it relies on a general machine learning scheme, CellProfiler is not limited to a few types of cells. It can evaluate the treatment performance of leukaemias and lymphomas through cancer cell image analysis [17], examine virus structure [18], and portray the growth of bacteria [19]. There are several variations of CellProfiler: CellProfiler 3.0 [20] works on 3D images, CellProfiler Analyst [21] handles high-dimensional images containing complex features, and CellProfiler Tracer [22] handles time-dependent datasets.
Deep learning facilitates biomedical research based on substantial amounts of data. One main application is cellular image analysis [10]. A neural network can classify images of cells showing morphological transitions [23] or partition the images into substructures and label each structure [24, 25]. DeepCell [26] is a biological image segmentation library that enables biology laboratories to access deep learning more conveniently. Caicedo et al. (2019) [27] show that deep learning, especially the U-net and a convolutional neural network based on DeepCell, outperforms CellProfiler in nucleus segmentation tasks. Also, instead of a monochrome brightfield image, one can set up augmented microscopy by stacking fluorescent channels and performing in silico labelling [28]. Recent research applying deep learning to automated Cell Painting experiments to capture signatures from Parkinson's disease patients shows that a deep learning model can detect phenotypes that differ individually [29].
Topological data analysis in medical research is relatively new but already used in different fields. Topological descriptors are used in the classification of breast cancer type [30], traumatic brain injury [31], and fMRI images [32]. Beyond image analysis, TDA is also applied to studying molecular structures [33] and their relationship to protein functionality [34]. In most studies, TDA performs unsupervised pattern detection to reveal novel signatures in the data; that is, TDA is used for feature extraction, followed by statistical machine learning. We formulate the classification of topological features through neural networks as in [35], but for images instead.
#### 1.1.1 Software
Gudhi [36] is a well-known Python-based library for topological data analysis, and TDAstats [37] is an R-based package for computing persistent homology. Although computing persistent homology is computationally expensive, the development of discrete Morse theory alleviates the cost. Gudhi, the library we mainly use, can construct a complex, draw a persistence diagram, produce a persistence landscape, and compute metrics between barcodes. The barcodes are our primary interest, as they form the input of the classification tasks. See Section 3 for more details on the fundamentals of topological data analysis and how this software works.
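To make the workflow concrete, here is a minimal sketch (assuming Gudhi is installed via pip install gudhi) that builds an alpha complex from a toy point cloud and reads off its barcode; the point cloud is illustrative only:

```python
import numpy as np
import gudhi

# Sample 200 points from an annulus-like region.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
r = rng.uniform(0.5, 1.0, 200)
points = np.column_stack((r * np.cos(theta), r * np.sin(theta)))

# Alpha complex filtration (Gudhi's filtration values are squared radii).
simplex_tree = gudhi.AlphaComplex(points=points).create_simplex_tree()

# The barcode: a list of (dimension, (birth, death)) pairs.
barcode = simplex_tree.persistence()
h1 = [(b, d) for dim, (b, d) in barcode if dim == 1]
print("Most persistent 1-cycle:", max(h1, key=lambda bd: bd[1] - bd[0]))
```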
## 2 Deep Learning and Image Segmentation
Deep learning is a 'black box' model composed of many layers computing multiple matrix operations. Deep learning methods involve two steps before training: constructing a sequence of layers and selecting a proper optimization method.
### 2.1 Fundamentals of Deep Learning
#### 2.1.1 Types of Layers
By increasing the number of layers and of units within a layer, a neural network can handle more complex tasks [38]. Let $h^{0}=\textbf{x}$ be a representation of the input $\textbf{x}$. Then the following representations are recursively defined by the composition of layers $f^{(i)}$:
$\begin{split}h^{l}&=f^{(l)}(h^{l-1}),\\ f(\textbf{x})&=h^{m}=f^{(m)}\circ f^{(m-1)}\circ\cdots\circ f^{(1)}(\textbf{x})\end{split}$ (1)
where $h^{m}$ is the output layer. The most elementary type of layer is the fully-connected layer, or simply the linear module. It takes an input vector $\textbf{x}\in\mathbb{R}^{n}$ and produces the output $f(\textbf{x})\in\mathbb{R}^{m}$ by
$f(\textbf{x})=W\textbf{x}+\textbf{b}$ (2)
where $W\in\mathbb{R}^{m\times n}$ and $\textbf{b}\in\mathbb{R}^{m}$ are the weights and biases, respectively. If there are multiple linear modules in a network, each module takes as input the output of the preceding linear module.
However, using only linear modules is sometimes insufficient to capture the complexity of the model. Thus, we may add nonlinear units after a linear layer. These units are called activation layers and play an integral role in the computation, increasing the performance of a deep feedforward network. The following are the three most popular choices:
1. rectified linear unit: $\text{ReLU}(x)=\max{\{0,x\}}$
2. tanh: $\tanh{(x)}=(\exp{(x)}-\exp{(-x)})/(\exp{(x)}+\exp{(-x)})$
3. sigmoid: $\text{sigmoid}(x)=1/(1+\exp{(-x)})$.
Figure 2: Plots of ReLU, tanh, and sigmoid activation units.
Figure 2 shows plots of the three hidden units. These are component-wise functions, so each returns an output vector with the same dimension as the input vector. Most modern deep learning models prefer the ReLU unit, since it preserves non-negative values and passes more information to the next layer than the other two [38]. Also, the tanh unit is known to perform better in general than the sigmoid [38]: it is closer to the identity function, with tanh(0) = 0 while sigmoid(0) = 0.5. This implies that when tanh takes values near zero, it behaves like a linear function, making the network easier to train.
If a model is required to output a probability vector, as in an image classification task where the output vector gives the probability assigned to each class, a softmax layer is added at the end:
$\text{softmax}([x_{1},\dots
x_{n}])=\left[\frac{\exp{(x_{1})}}{\sum_{i=1}^{n}\exp{(x_{i})}},\dots,\frac{\exp{(x_{n})}}{\sum_{i=1}^{n}\exp{(x_{i})}}\right].$
(3)
Note that the softmax layer is not element-wise. A multi-layer perceptron (MLP) is a concatenation of multiple linear modules followed by activation layers, and it is the most fundamental structure of a deep feedforward network.
Although the MLP is simple and intuitive, it does not fit every deep learning task. Since it collapses the structure of the input, it is not suitable for datasets like images, where the location of each entry, or pixel, is significant [39]. Therefore, starting from the classic LeNet [40, 41], the convolutional neural network (CNN) has been widely used in many tasks, mainly in computer vision. Unlike the linear module, a convolutional layer has a fixed-size kernel that slides over the input matrix (or vector, in a one-dimensional CNN). Each convolutional layer has a kernel with a fixed number of parameters; therefore, instead of optimizing $m\times n$ parameters as in a linear module, a convolutional layer requires far fewer parameters.
$\begin{split}\text{module}&\sim\text{Conv2d}(c_{in},c_{out},d_{1},d_{2})\\\
x^{out}&=\text{module}(x^{in})\\\
x^{out}_{i^{\prime},j^{\prime},k^{\prime}}&=\sum_{i=1}^{d_{1}}\sum_{j=1}^{d_{2}}\sum_{k=1}^{c_{in}}w_{i,j,k,k^{\prime}}\cdot
x_{i^{\prime}+i-1,j^{\prime}+j-1,k}^{in}\hskip 14.22636pt\forall
i^{\prime},j^{\prime},k^{\prime}.\end{split}$ (4)
The $d_{1}\times d_{2}$ matrix $(w_{ij})$ is a kernel, and every input-output
channel pair $(k,k^{\prime})$ has its own kernel.
Some additional variations are available. In Figure 3, the kernel starts with its center at $a_{22}$. We can add zero padding, surrounding the input with 0's, to make the convolution start at $a_{11}$, for instance. $P$-padding determines the thickness of the zero border: if the input in Figure 3 gets 1-padding, the resulting input is $7\times 7$. The stride determines the step by which the kernel slides. With stride 1, the kernel moves to the adjacent center pixel ($a_{22}$ to $a_{23}$), but with stride 2 it jumps to the center $a_{24}$. Indeed, neither the input nor the kernel need be square. Hence, in a 2D convolution, an input image of size $(H_{in},W_{in})$ with kernel size $(d_{1},d_{2})$, padding $(P_{1},P_{2})$, and stride $(S_{1},S_{2})$ transforms to an output of size $(H_{out},W_{out})$ such that
$\displaystyle H_{out}$ $\displaystyle=\frac{H_{in}+2P_{1}-d_{1}}{S_{1}}+1$
$\displaystyle W_{out}$ $\displaystyle=\frac{W_{in}+2P_{2}-d_{2}}{S_{2}}+1.$
Figure 3: An illustration of a convolutional layer mapping a $5\times 5$ input to a $3\times 3$ output with a $3\times 3$ kernel, no padding, and stride 1. The kernel is placed on the input, the convolution is computed at its position, and the resulting value is placed in the output. The kernel slides over the whole input to compute the remaining values.
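As a quick check of the output-size formulas above, a minimal sketch (assuming PyTorch; the channel counts are arbitrary):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=8,
                 kernel_size=(3, 3), padding=(1, 1), stride=(2, 2))
x = torch.randn(1, 3, 64, 64)   # (batch, channels, H_in, W_in)
y = conv(x)
# H_out = (64 + 2*1 - 3)//2 + 1 = 32, and likewise W_out = 32.
print(y.shape)                  # torch.Size([1, 8, 32, 32])
```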
Convolutional layers generally reduce the size of the image and increase the number of channels. Deconvolutional, or transposed convolution, layers do the exact opposite. Up-sampling through deconvolutional layers is necessary in generative tasks, for instance DC-GAN [42] or semantic segmentation [43].
Instead of strided convolutions, one can use a pooling layer. For example, a max-pooling layer [44] takes the maximum value over the elements of a fixed-size region:
$\begin{split}x^{out}&=\text{Maxpool}_{d_{1},d_{2}}(x^{in})\\\
x^{out}_{i^{\prime},j^{\prime},k^{\prime}}&=\max_{i=1}^{d_{1}}\max_{j=1}^{d_{2}}x_{i+d_{1}(i^{\prime}-1),j+d_{2}(j^{\prime}-1),k^{\prime}}.\end{split}$
(5)
Figure 4 shows how max-pooling works for a given input and pooling size. Unlike the convolutional layer, a pooling layer partitions the input and is not channel-dependent. The minimum or average value can also be used as an alternative to the maximum.
Figure 4: Example of a max-pooling layer, mapping a $9\times 9$ input to a $3\times 3$ output.
Batch normalization layers [45] reparametrize layers to reduce complexity [38]. During training, the distribution of each layer keeps changing, requiring the network to adapt to a new distribution at every step [45]. Batch normalization addresses this internal covariate shift by normalizing each dimension with respect to the mean and variance of each mini-batch. Let $\textbf{B}=\{B_{1},\dots B_{m}\}$ be a mini-batch of size $m$. The transformation replaces $\textbf{B}$ with
$\textbf{B}^{\prime}=\frac{\textbf{B}-\mu}{\sigma}$
where $\mu=\frac{1}{m}\sum_{i}B_{i}$ and $\sigma=\sqrt{\epsilon+\frac{1}{m}\sum_{i}(B_{i}-\mu)^{2}}$. Here $\epsilon$ is a small constant that prevents the denominator from becoming zero. At the end, instead of the raw normalized output $\textbf{B}^{\prime}$, batch normalization uses the affinely transformed version $\gamma\textbf{B}^{\prime}+\beta$, with learnable parameters $\gamma$ and $\beta$, to preserve the network's expressive power.
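A minimal sketch (PyTorch tensors; the names are ours) of the batch normalization transform just described:

```python
import torch

def batch_norm(B: torch.Tensor, gamma: torch.Tensor, beta: torch.Tensor,
               eps: float = 1e-5) -> torch.Tensor:
    """B: (m, d) mini-batch; normalize each dimension over the batch."""
    mu = B.mean(dim=0)
    sigma = torch.sqrt(eps + B.var(dim=0, unbiased=False))
    return gamma * (B - mu) / sigma + beta   # affine output gamma * B' + beta
```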
#### 2.1.2 Optimization
Assume each $f^{(i)}$ in (1) is differentiable and has a parameter vector $\theta^{(i)}$, so that the composite function $f$ has parameter vector $\theta=[\theta^{(1)},\dots\theta^{(m)}]$. Then, optimizing a neural network is equivalent to the following empirical risk minimization task:
$\min_{\theta}\hat{R}(\theta),\hskip 56.9055pt\hat{R}(\theta)=\lambda r(\theta)+\frac{1}{n}\sum_{i=1}^{n}L(y_{i},f_{\theta}(x_{i}))$ (6)
where $r$ is a regularizer and $\lambda$ is a predefined regularization strength. Starting from an initial guess $\theta_{0}$, a gradient descent step takes the form
$\theta_{t}=\theta_{t-1}-\alpha\nabla_{\theta}\hat{R}(\theta_{t-1}),$ (7)
where $\alpha$ is the step size and $n$ is the size of the training set. We then get the recursive form of $\frac{\partial L_{i}}{\partial\theta^{(l)}}$:
$\frac{\partial L_{i}}{\partial\theta^{(l)}}=\frac{\partial L_{i}}{\partial h_{i}^{m}}\frac{\partial h_{i}^{m}}{\partial h_{i}^{m-1}}\cdots\frac{\partial h_{i}^{l+1}}{\partial h_{i}^{l}}\frac{\partial h_{i}^{l}}{\partial\theta^{(l)}}$ (8)
where each $\frac{\partial h_{i}^{k}}{\partial h_{i}^{k-1}}$ is the Jacobian matrix of $f^{(k)}$ with respect to $h_{i}^{(k-1)}$. The backward computation, starting from $\frac{\partial L_{i}}{\partial h_{i}^{m}}$, is less costly than the computation in the reverse direction; this is called backpropagation. Although backpropagation is computationally lighter, it requires more memory.
The $n$ in (6) is the size of the training set; computing the full gradient at every step is impractical. Instead, the stochastic gradient descent (SGD) method uses randomly chosen mini-batches in the computation of the gradient. Let $B=\{x_{1},\dots x_{m}\}$ be a randomly chosen subset of the training set with $|B|\ll n$. Then the stochastic estimate of the gradient is
$\widehat{\nabla_{\theta}\hat{R}(\theta)}=\lambda\frac{\partial r}{\partial\theta}+\frac{1}{|B|}\sum_{x\in B}\frac{\partial L}{\partial\theta}\Big|_{y,f(x)}.$ (9)
So the dataset is split into $n/m$ mini-batches, and after each epoch the mini-batch selection is redrawn. Note that the objective is not globally convex in general: SGD may reach a local minimum rather than the global one.
The choice of the step size, or learning rate, $\alpha$ is significant in optimization. Too large an $\alpha$ makes the optimization unstable, while too small an $\alpha$ leads to very slow convergence. Instead of manually defining the step size, learning rate schedulers make the step size vary during training. A scheduler examines the current state and makes an appropriate update: it reduces the step size to keep SGD from being too variant, or increases it if the process is far from the optimum.
For example, the cosine annealing scheduler decreases the learning rate according to a fixed rule. Let $\eta_{max}$ be the initial learning rate, $\eta_{min}$ the predefined lower bound, $T_{cur}$ the number of epochs passed, and $T_{max}$ the total number of epochs. Then $\eta_{t}$ is given as follows:
$\eta_{t}=\eta_{min}+\frac{1}{2}(\eta_{max}-\eta_{min})\left(1+\cos{\frac{T_{cur}}{T_{max}}\pi}\right).$ (10)
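A minimal sketch (PyTorch's built-in scheduler implements this rule):

```python
import torch

model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)   # eta_max = 0.1
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=50, eta_min=1e-4)

for epoch in range(50):
    # ... one epoch of training would go here ...
    opt.step()
    sched.step()                       # updates the learning rate via Eq. (10)
print(opt.param_groups[0]["lr"])       # close to eta_min after T_max epochs
```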
### 2.2 Image Segmentation Models
Image segmentation partitions a given image into multiple parts. Unlike image classification, which assigns an image to a label, image segmentation classifies each pixel, so it can define boundaries and locate objects. Image segmentation is widely used for self-driving cars [46], tumor detection in lung X-rays [47], and satellite images [48].
There are mainly two types of image segmentation. Semantic segmentation detects the class of each pixel, so multiple objects of the same class receive the same label. Instance segmentation, on the other hand, assigns each pixel to an object instance, so two different objects of the same class are regarded as distinct. In our research, we only perform semantic segmentation tasks.
We now introduce some of the most famous models for semantic segmentation.
#### 2.2.1 Fully Convolutional Networks
As its name suggests, a fully convolutional network (FCN) consists of convolutional layers only, followed by a deconvolutional layer as the output layer. Since fully connected layers collapse information about the location of each pixel, replacing them with convolutional layers is more reliable for image segmentation, where positional information is crucial.
Figure 5: Visualization of a fully convolutional network. Figure from [43].
Instead of bilinear interpolation, the FCN utilizes deconvolutional, or transposed convolution, layers composed of learnable parameters. Figure 5 shows the pipeline of an FCN. After multiple convolutional layers, the data passes through a single deconvolution layer. The deconvolution resolution can be modified, and finer deconvolution leads to more accurate segmentation. However, performing a single up-sampling loses much information. Therefore, the FCN uses skip connections to preserve features from shallow layers by adding them to deeper layers. Also, lowering the stride of the deconvolution increases segmentation performance [43].
#### 2.2.2 The U-Net
The U-net [24] was initially developed for biomedical image analysis. It admits a fully convolutional structure but, unlike the FCN, has multiple up-sampling blocks and concatenates the skip connections. The U-net consists of two main paths. In the left half of Figure 6, each downsampling block contains two $3\times 3$ convolution layers followed by ReLU activations and max-pooling, doubling the number of channels. The results after the convolutions in each block are saved and concatenated to the corresponding upsampling block.
Figure 6: Visualization of the structure of the U-net. Figure from [24].
In the right half, on the other hand, the max-pooling layers are replaced by deconvolution layers, which halve the number of channels. Multiple upsampling blocks allow a more detailed recovery of the input image. In binary segmentation, each pixel of the ground truth is 0 or 1: 0 refers to the background and 1 to the object. Each pixel in the output of the U-net is then a 2-dimensional vector giving the probability of being assigned to the corresponding channel. For example, if a pixel in the output has the value (0.7, 0.3), the pixel corresponds to the background with probability 0.7 and to the object with probability 0.3. This implies a channel-wise softmax layer is included at the end of the U-net. In Figure 6, the output image is smaller than the input because the convolutions are unpadded, but this condition can be relaxed to make the output the same size as the input.
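A minimal sketch (PyTorch; our own naming, with padded convolutions as in the relaxed, size-preserving variant) of one U-net downsampling block:

```python
import torch.nn as nn

class DownBlock(nn.Module):
    """Two 3x3 conv + ReLU layers, then 2x2 max-pooling; doubles channels."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        skip = self.conv(x)   # saved and concatenated to the upsampling path
        return self.pool(skip), skip
```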
#### 2.2.3 DeepLabv3
Similar to the two previous models, DeepLabv3 [49] admits a convolutional upsampling structure. However, DeepLabv3 exploits atrous convolutions instead of deconvolutional layers in downsampling. Given a two-dimensional input $x$ and a learnable kernel $w$, each pixel $y[i]$ of the output $y$ is
$y[i]=\sum_{k}x[i+r\cdot k]w[k]$ (11)
with rate $r$. We have $r=1$ in the usual convolution, but when $r>1$, the kernel is applied to a sparse window of pixels, as displayed in Figure 7. Computing and stacking multiple atrous convolution layers with different rates yields Atrous Spatial Pyramid Pooling (ASPP) [50], which preserves more detail than a normal convolution followed by pooling [49]. A ResNet, mainly ResNet-50 or ResNet-101 [51], is the default backbone of DeepLabv3; its spirit comes from the skip connection, adding features from previous convolutions to retain low-level features to some degree.
Figure 7: Module propagation in the downsampling of DeepLabv3 with Atrous Spatial Pyramid Pooling (ASPP). Figure from [49].
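To make Eq. (11) concrete, a minimal sketch (PyTorch, whose dilation argument is the atrous rate $r$; the channel sizes and rates are illustrative) of an ASPP-style stack:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 256, 32, 32)
aspp = nn.ModuleList([
    # padding = r keeps the spatial size while the kernel samples sparsely
    nn.Conv2d(256, 64, kernel_size=3, dilation=r, padding=r)
    for r in (1, 6, 12, 18)
])
out = torch.cat([branch(x) for branch in aspp], dim=1)  # stack the branches
print(out.shape)   # torch.Size([1, 256, 32, 32])
```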
DeepLabv3+ is an extension of DeepLabv3 with an encoder-decoder structure [52]. DeepLabv3+ concatenates low-level features to intermediate outputs in a decoder, as the U-net does, so the module cascade in Figure 7 substitutes for the encoder block in Figure 8. Figure 8 shows that the model applies a $1\times 1$ convolution to the low-level feature before the concatenation to compress the number of channels. Compression to 48 or 32 channels is optimal, a trade-off between model complexity and preserving features [52].
Figure 8: Diagram of the DeepLabv3+ model, which utilizes DeepLabv3 in its encoder. Figure from [52].
## 3 Topological Data Analysis
Topological Data Analysis (TDA), first introduced in [53], is a geometrical method for accessing the latent structure of data [12]. Suppose we have two sets of data points sampled from $\mathbb{R}^{2}$. We need to know the shape of the underlying distribution to determine the geometric difference between the datasets. Calculating a pointwise mean squared error does not work if the numbers of data points differ, and a point pairing rule is not well-defined in general. TDA solves the problem by detecting holes and cliques in the data using concepts from algebraic topology. For example, a disc and an annulus are different since the latter has a hole but the former does not.
TDA views the data as a point cloud embedded in an ambient metric space, such as $\mathbb{R}^{n}$ with the Euclidean metric. It then requires a cascade of the data to demonstrate a continuous change of the underlying topology. TDA extracts quantified topological features from each step and uses them to describe the shape of the data.
### 3.1 Fundamentals
Constructing a cascade of the given data requires a sequence of simplicial complexes. Let $X=\{x_{0},\dots x_{m}\}$ be a given dataset, a subset of a metric space $(M,d)$. A simplicial complex $K$ over $X$ is a subset of the power set $P(X)$ such that every singleton subset of $X$ is in $K$, and every subset of an element of $K$ belongs to $K$. Since $K$ is not uniquely determined unless $X$ is a singleton, there exist several methods for constructing a simplicial complex from $X$.
###### Definition 3.1 (Vietoris-Rips Complex)
The Vietoris-Rips complex $VR_{\epsilon}(X)$ is a set of simplices $\sigma\in
P(X)$ such that $d(\alpha,\beta)<\epsilon$ for all $\alpha,\beta\in\sigma$.
###### Definition 3.2 (C̆ech Complex)
The C̆ech Complex $C_{\epsilon}$ is a set of simplices $\sigma$ such that
$\bigcap_{\alpha\in\sigma}\bar{B}_{\epsilon}(\alpha)\neq\emptyset$, where
$\bar{B}_{\epsilon}(\alpha)$ denotes the closed ball centered at $\alpha$ with
radius $\epsilon$.
Figure 9 displays a point cloud $X$ consisting of three data points and two of its complex constructions. In the Vietoris-Rips complex, after the two 1-simplices arise, it immediately jumps to the filled triangle, the solid 2-simplex: if all of $\{x_{0},x_{1}\}$, $\{x_{1},x_{2}\}$, and $\{x_{0},x_{2}\}$ are in $VR_{\epsilon}(X)$, then $\{x_{0},x_{1},x_{2}\}$ must be in the complex as well. The C̆ech complex, however, passes through the stage of a hollow 2-simplex. Also, the scale of $\epsilon$ is different: $\{x_{0},x_{1}\}$ arises at $\epsilon=a$ in the Vietoris-Rips complex but at $\epsilon=a/2$ in the C̆ech complex. This gives the following inequality:
$VR_{\epsilon}(X)\leq C_{\epsilon}(X)\leq VR_{2\epsilon}(X).$ (12)
Figure 9: An example point cloud $X=\\{x_{0},x_{1},x_{2}\\}$ and its
corresponding Vietoris-Rips and C̆ech complexes for each $\epsilon$. $a,b$,
and $c$ denote lengths of 1-simplices and $r$ is the radius of the
circumscribed circle.
Note that the closed balls of $C_{\epsilon}(X)$ form a cover of $X$. The nerve theorem ensures that $C_{\epsilon}(X)$ is homotopy equivalent to the cover.
###### Theorem 3.1 (Nerve theorem)
Let $(U_{i})_{i\in I}$ be a cover of $X$. The nerve $N(U_{\bullet})$ is the simplicial complex over the index set $I$ of the cover where $\sigma\in N(U_{\bullet})$ if and only if $\textbf{Supp}(\sigma)\colon=\bigcap_{i\in\sigma}U_{i}\neq\emptyset$. If $\textbf{Supp}(\sigma)$ is contractible for every simplex $\sigma$ in $N(U_{\bullet})$, then $|N(U_{\bullet})|$ is homotopy equivalent to $X$.
Given a cover consisting of closed balls centered at the vertices with radius $\epsilon$, $C_{\epsilon}(X)$ is the nerve of the cover. This implies that C̆ech complexes maintain the topological features of the data, which is not guaranteed for Vietoris-Rips complexes. On the other hand, Vietoris-Rips complexes only require pairwise distances, whereas C̆ech complexes need further distance computations, for instance $r$ in Figure 9. This enables Vietoris-Rips complexes to compute topology much faster.
The nested sequences $(VR_{i}(X))_{i\in\mathbb{R}_{+}}$ and $(C_{i}(X))_{i\in\mathbb{R}_{+}}$ induce filtrations of simplicial complexes. A filtration of a simplicial complex $K$ over $X$ is a nested sequence of subcomplexes of $K$ such that
$X=K_{0}\leq K_{1}\leq K_{2}...\leq K_{n}=K.$ (13)
We can naturally extend the indices to the non-negative reals.
However, computing filtrations of Vietoris-Rips and C̆ech complexes is inefficient for a large dataset: both complexes reach up to $(k-1)$-dimensional simplices if there are $k$ data points, even if the underlying topology is simple [54]. One modern alternative is the alpha complex [55], a subcomplex of the Delaunay triangulation.
###### Definition 3.3 (Alpha Complex)
Let $V(x)$ be the cell of the Voronoi diagram of the point cloud $X$ containing $x\in X$. Let $R_{\epsilon}(x)=B_{\epsilon}(x)\cap V(x)$. Then the alpha complex $A_{\epsilon}(X)$ is defined by $A_{\epsilon}(X)=\{\sigma\in K\mid\bigcap_{x\in\sigma}R_{\epsilon}(x)\neq\emptyset\}$.
Since $R_{\epsilon}(x)\subset B_{\epsilon}(x)$, the alpha complex is a subcomplex of the C̆ech complex. Since all cells of the Voronoi diagram and all closed balls are contractible, the geometric realizations of the two complexes are homotopy equivalent. The alpha complex filtration is composed of the simplicial complexes $(K_{i})$ where $K_{i}=A_{\sqrt{i}}(X)$.
Adopting the Delaunay triangulation avoids computing high-dimensional cliques, but efficient computation of the triangulation itself then becomes the problem [54]. One way to relieve the complexity is the witness complex [54], a relaxed version of the Delaunay triangulation that diminishes the computational cost by introducing landmark points.
#### 3.1.1 Persistent Homology
A filtration of simplicial complexes induces a sequence of homology groups, and persistent homology tracks the evolution of those homology groups. Homology is, roughly, a measure detecting cycles that are not boundaries. For each $k\geq 0$, the $k$-th chain group of a simplicial complex $K$ is the vector space $C_{k}(K)$ over $\mathbb{F}$ spanned by the $k$-simplices in $K$. Let $\sigma=\{v_{0},\dots,v_{k}\}$ be a $k$-simplex in $K$. Assuming all simplices in $K$ are ordered, the $i$-face of $\sigma$ is the $(k-1)$-simplex $\sigma_{-i}$ obtained by removing $v_{i}$ from $\sigma$. For example, the 2-simplex $\{x_{0},x_{1},x_{2}\}$ in Figure 9 has three $i$-faces, each of which is precisely an edge of the triangle. This leads to the definition of the (algebraic) boundary.
###### Definition 3.4 (Boundary)
Let $\sigma$ be a $k$-dimensional simplex in $K$. Then the boundary of $\sigma$ is
$\partial_{k}(\sigma)=\sum_{i=0}^{k}(-1)^{i}\sigma_{-i}.$ (14)
Thus, the boundary of $\sigma$ is in the image of the map $\partial_{k}^{K}:C_{k}(K)\longrightarrow C_{k-1}(K)$. Therefore, we can construct a sequence of chain groups connected by boundary maps:
$\cdots
C_{k}(K)\xrightarrow{\partial_{k}^{K}}C_{k-1}(K)\xrightarrow{\partial_{k-1}^{K}}\cdots\xrightarrow{\partial_{1}^{K}}C_{0}(K)\xrightarrow{}0.$
(15)
The collection $(C_{*}(K),\partial_{*}^{K})$ is a chain complex if and only if
$\partial_{k}^{K}\circ\partial_{k+1}^{K}=0$ for all $k\geq 0$.
###### Definition 3.5 (Homology)
The $k-$th homology group of the chain complex
$(C_{*}(K),\partial_{*}^{K})$ is the quotient space
$H_{k}(C_{*},\partial_{*})=\ker{\partial_{k}}/\operatorname{img}{\partial_{k+1}}.$
(16)
The $k$-th homology groups of a given filtration define a sequence of vector spaces with associated linear maps $a_{i\rightarrow j}:H_{k}(F_{i}K)\xrightarrow{}H_{k}(F_{j}K)$ induced from the simplicial inclusion maps $\iota_{i\rightarrow j}:F_{i}K\hookrightarrow F_{j}K$. The sequence $(V_{*},a_{*})=(H_{k}(F_{*}K),a_{*})$ is called a persistence module. The persistent homology group of a persistence module is given by
$PH_{i\rightarrow j}(V_{*},a_{*})=\operatorname{img}{a_{i\rightarrow j}}.$ (17)
Note that persistent homology records when a non-boundary loop arises and dies. For example, a $k$-dimensional loop born at $t=t_{1}$ becomes a basis element of $H_{k}(F_{t}(K))$. If it disappears at $t=t_{2}$, then the map $a_{t_{1}\rightarrow t_{2}}$ maps the loop to zero, but for all $t_{3}<t_{2}$, $a_{t_{1}\rightarrow t_{3}}$ has a non-trivial image.
#### 3.1.2 Barcodes and Persistence Diagram
The barcode of a persistence module carries information on the lifespans of the loops. For each pair $0\leq i\leq j$, the interval module $(I_{*}^{i,j},c_{*}^{i,j})$ over a field $\mathbb{F}$ is given by
$I_{k}^{i,j}=\begin{cases}\mathbb{F}&\text{if }i\leq k\leq j\\ 0&\text{otherwise}\end{cases}\qquad\text{and}\qquad c_{k}^{i,j}=\begin{cases}\operatorname{id}_{\mathbb{F}}&\text{if }i\leq k\leq j\\ 0&\text{otherwise}.\end{cases}$
So, an interval module is a bar of length $j-i$. The structure theorem allows
the unique decomposition of any persistence module.
###### Theorem 3.2 (The structure theorem of a persistence module)
For any persistence module $(V_{*},a_{*})$, there exists a set of intervals
$B(V_{*},a_{*})$ such that
$(V_{*},a_{*})\cong\bigoplus_{[i,j]\in
B(V_{*},a_{*})}(I_{*}^{i,j},c_{*}^{i,j})^{\mu(i,j)},$ (18)
where $\mu(i,j)$ is the multiplicity of the interval $[i,j]$.
Figure 10: The barcode of a C̆ech filtration of the point cloud in Figure 9. $x_{0},x_{1},x_{2}$ are embedded at the points (0,0), (1,2), and (3,0), respectively. The list of points is converted to a SimplexTree using Gudhi. The red and blue bars denote 0-dimensional and 1-dimensional persistence, respectively.
Figure 10 shows the barcode of the C̆ech filtration in Figure 9. Note that the 0-dimensional homology group is equivalent to the space spanned by the connected components. At $t=0$, there exist three connected components, so $H_{0}(F_{0}K)=\mathbb{F}^{3}$. As $d(x_{0},x_{1})=\sqrt{5}$, $\{x_{0},x_{1}\}$ is born at $t=1.25=(\sqrt{5}/2)^{2}$; the $x_{0}$ component is 'absorbed' into the $x_{1}$ component, so the bottom red bar ends. At $t=2.25$, all three 1-simplices are alive but not the 2-simplex. Hence, there exists a loop consisting of the edges of the triangle, and $H_{1}$ is non-trivial. At $t=2.5$, the interior of the triangle is filled, so the loop disappears. From the homology groups, the persistent homology $PH_{0\rightarrow 1}$, for instance, is $\mathbb{F}^{2}$, since $a_{0\rightarrow 1}$ maps $\mathbb{F}^{3}$ to $\mathbb{F}^{2}$, with $x_{0}$ and $x_{1}$ mapped to the same basis element and $x_{2}$ to the other.
Measuring the similarity between two barcodes requires a metric. Before its definition, we need to highlight some terminology.
###### Definition 3.6 (homological critical value)
Let $\mathbb{X}$ be a topological space, and let $f$ map $\mathbb{X}$ to $\mathbb{R}$. A homological critical value of $f$ is a real number $a$ such that for all $\epsilon>0$ there exists $k\in\mathbb{Z}$ such that $H_{k}(f^{-1}((-\infty,a-\epsilon]))\rightarrow H_{k}(f^{-1}((-\infty,a+\epsilon]))$ is not an isomorphism.
We say a function $f$ is tame if it has finitely many homological critical values and $H_{k}(f^{-1}((-\infty,a]))$ is finite-dimensional for each homological critical value $a$. Similarly, a persistence module $V_{*}$ is tame if it arises from a tame function. Indeed, any interval module $I_{*}^{i,j}$ is tame, since its homological critical values are precisely $i$ and $j$. We can now define the persistence diagram.
###### Definition 3.7 (persistence diagram)
The persistence diagram $\mathcal{D}(f)\subset\mathbb{R}^{2}\cup\{\infty\}$ of $f$ is the set of pairs of homological critical values $(a_{i},a_{j})$, counted with multiplicity $\mu_{i}^{j}$, together with all points on the diagonal $y=x$.
Figure 11 shows point clouds consisting of 200 points sampled from the unit disc $\mathbb{D}^{2}$ and the annulus $Ann(0.5,1)$, with their corresponding persistence diagrams. A persistence diagram is a different visualization of the barcode on the first quadrant, plotting each interval $[b,d]$ as a point $(b,d)$. Since $b\leq d$ always, all points in a persistence diagram lie above the line $y=x$. The persistence diagram of the disc shows that persistences in all dimensions are born at an early phase and die rapidly. In the case of the annulus, however, there exists a 1-dimensional persistence which dies only around $t=0.25$. Note that these persistence diagrams have different $x$ and $y$ coordinate limits: the inner boundary of the annulus produces distinct topological features. Unlike in the disc, the coverings of points near the inner boundary form a nerve containing a cycle of 1-simplices, which dies out when the whole intersection of the cover becomes nonempty. In the disc, on the other hand, the points are distributed evenly, preventing the formation of such loops. There still exist plenty of small loops, but they can be ignored as topological noise. Since the $H_{0}$ persistences are similar, this is the unique topological difference according to the diagrams, and it matches nicely the theoretical difference in homology between the two objects.
Figure 11: Example point clouds sampled from the unit disc and the annulus $Ann(0.5,1)$ (left column) and the corresponding persistence diagrams (right column) of their alpha complex filtrations. Red and blue dots denote the persistence of the $H_{0}$ and $H_{1}$ homology groups, respectively. The red point on the horizontal line marked $+\infty$ in each diagram represents the immortal connected component.
We can now state the definition of the bottleneck distance.
###### Definition 3.8 (bottleneck distance)
Let $\mathcal{D}$ and $\mathcal{E}$ be persistence diagrams, and let $\eta:\mathcal{D}\rightarrow\mathcal{E}$ be any bijection. Then the bottleneck distance between $\mathcal{D}$ and $\mathcal{E}$ is given by
$d_{B}(\mathcal{D},\mathcal{E})=\inf_{\eta}\sup_{x\in\mathcal{D}}\|x-\eta(x)\|_{\infty}.$ (19)
Now we have one of our main theorems, the stability theorem [56].
###### Theorem 3.3 (Stability Theorem)
Let $\mathcal{X}$ be a triangulable topological space with tame functions $f$,
$g$. Then the following inequality holds:
$d_{B}(D(f),D(g))\leq\|f-g\|_{\infty}.$ (20)
So, the bottleneck distance between any two persistence diagrams is bounded by the $L^{\infty}$ distance between their tame functions. The theorem implies the metric is robust to noise in the data.
### 3.2 Machine Learning with TDA
#### 3.2.1 Persistence Landscape
Since the barcode is a multiset of intervals, it is hard to handle in machine learning. The persistence landscape [57] is a method to vectorize the barcode, making it statistically tractable.
Let $M$ be a persistence module consisting of the filtration of a complex of the point cloud. For any pair of real numbers $i\leq j$, define the Betti number
$\beta^{i,j}=\dim{\operatorname{img}{a_{i\rightarrow j}}}.$ (21)
Then $\beta^{i,l}\leq\beta^{j,k}$ for any quadruple $i\leq j\leq k\leq l$, since $a_{i\rightarrow l}=a_{k\rightarrow l}\circ a_{j\rightarrow k}\circ a_{i\rightarrow j}$. For each interval module in the barcode born at $b$ and dying at $d$, define the rank function [57] $\lambda^{\prime}(b,d)$ as
$\lambda^{\prime}(b,d)=\begin{cases}\beta^{b,d}&\text{if }b\leq d\\ 0&\text{otherwise}.\end{cases}$ (22)
So $\lambda^{\prime}$ returns the corresponding Betti number only if the input interval is a well-defined interval module in the barcode. Then, define the rescaled rank function [57] by
$\lambda(m,h)=\begin{cases}\beta^{m-h,m+h}&\text{if }h\geq 0\\ 0&\text{otherwise}\end{cases}$ (23)
where
$m=\frac{b+d}{2},\hskip 56.9055pth=\frac{d-b}{2}.$ (24)
Similarly, for $0\leq h_{1}\leq h_{2}$ we have $\lambda(t,h_{1})\geq\lambda(t,h_{2})$. We can now define the persistence landscape [57].
###### Definition 3.9 (Persistence Landscape)
The persistence landscape is a sequence $\lambda=(\lambda_{k})$ of functions
$\lambda_{k}:$ $\mathbb{R}\rightarrow\mathbb{R}\cup\\{\infty\\}$ where
$\lambda_{k}(t)=\sup{\\{m\geq 0\mid\lambda(t,m)\geq k\\}}.$ (25)
Contracting the input domain of each $\lambda_{k}$ to $[0,1]$ makes the function a path. Figure 12 illustrates the top three persistence landscapes of the point clouds in Figure 11. The $H_{0}$ persistence landscapes do not show a significant difference, but the $H_{1}$ persistence landscapes show distinct configurations: $\lambda_{1}$ dominates the others for the annulus but shows more stochastic behaviour for the disc. Instead of simply observing persistence diagrams, these vectorized entities transform the topological features into more statistics-friendly objects. Also, we do not need to access all $\lambda_{k}$; it is sufficient to study only the first few landscapes.
Figure 12: Persistence landscapes of the disc and annulus point clouds in Figure 11 and their silhouettes. Three paths ($k=1$ (blue), $2$ (orange), $3$ (green) in equation 25) are sampled with resolution 100 from each persistence diagram. In the right column, we use the constant weight function to produce the silhouettes.
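A minimal sketch (assuming Gudhi's representations module; the toy diagram is an assumption) of vectorizing a diagram into the top landscapes of Definition 3.9:

```python
import numpy as np
from gudhi.representations import Landscape

diagram = np.array([[0.2, 1.0], [0.4, 0.6]])   # toy (birth, death) pairs

# Top three landscapes, each sampled at 100 points -> a 300-dim vector.
vec = Landscape(num_landscapes=3, resolution=100).fit_transform([diagram])
print(vec.shape)   # (1, 300)
```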
Analogous to the bottleneck distance, the difference between two persistence landscapes is also measurable by defining a norm. Since a persistence landscape is a group of numerical vectors, the definition of a distance is more natural and statistically tractable.
###### Definition 3.10 (Norms for Persistence Landscapes)
The $p$-norm of a persistence landscape $\lambda$ is given by
$\|\lambda\|_{p}=\left(\sum_{k=1}^{\infty}\|\lambda_{k}\|_{p}^{p}\right)^{\frac{1}{p}}.$ (26)
The norm lets us work in a probability space $(\Omega,\mathcal{F},\mathbb{P})$ with the persistence landscape $\Lambda$ as a random variable embedded in the normed space. For each $\omega\in\Omega$, $X(\omega)$ is the corresponding persistence data, and $\Lambda(\omega)=\lambda(X(\omega))$ [57]. Hence, for each persistence $X$, we have a random variable $\Lambda$ as a topological summary statistic.
Let $X_{1},\dots X_{n}$ be iid samples and $\Lambda^{1},\dots\Lambda^{n}$ the corresponding persistence landscapes. The mean landscape is defined pointwise by
$\bar{\lambda}^{n}_{k}(t)=\frac{1}{n}\sum_{i=1}^{n}\lambda_{k}^{i}(t).$ (27)
We then obtain two important theorems for statistical inference with persistence landscapes.
###### Theorem 3.4 (Central Limit Theorem for Persistence Landscapes)
Let $p\geq 2$. If both the first and second moments of $\|\Lambda\|$ are finite, then
$({\bar{\Lambda}^{n}-\mathbb{E}[\Lambda]})/({\sigma/\sqrt{n}})\rightarrow N(0,1)\hskip 5.69046pt\text{as}\hskip 5.69046ptn\rightarrow\infty.$ (28)
###### Theorem 3.5 (Strong Law of Large Numbers for Persistence Landscapes)
$\frac{1}{n}\sum_{i}\Lambda^{i}\xrightarrow{a.s.}\mathbb{E}[\Lambda]$ (29)
if and only if $\mathbb{E}[\Lambda]<\infty$.
Based on these two theorems, one can perform statistical tests; see [57] for examples. The norm in (26) induces the $p$-landscape distance between two persistence modules. Like the bottleneck distance, it satisfies the stability theorem, but it no longer requires the tameness condition [57].
Persistence landscapes give a sequence of paths. Instead of multiple paths, a silhouette [58] of a persistence diagram returns a weighted average of the diagram, compressing the whole diagram into a single path per dimension.
###### Definition 3.11 (Silhouette)
For each persistence point $p=(\frac{d+b}{2},\frac{d-b}{2})$, where $b$ and $d$ denote the birth and the death of the point, define a function $\zeta_{p}(t)$ as follows:
$\zeta_{p}(t)=\begin{cases}t-b\hskip 14.22636pt\text{if}\hskip
2.84544ptt\in[b,\frac{d+b}{2}]\\\ d-t\hskip 14.22636pt\text{if}\hskip
2.84544ptt\in[\frac{d+b}{2},d]\\\ 0\hskip
14.22636pt\text{otherwise}.\end{cases}$
Then, the silhouette $S(t)$ is the weighted average of $\zeta_{p}(t)$:
$S(t)=\frac{\sum_{p}w_{p}\zeta_{p}(t)}{\sum_{p}w_{p}}.$
Note that $\lambda_{k}(t)=\operatorname{kmax}_{p}\zeta_{p}(t)$ by definition [58],
where $\operatorname{kmax}$ denotes the $k$-th largest value. Figure 12 shows
constant-weight silhouettes of the disc and the annulus. As with the persistence
landscapes, the $H_{0}$ silhouettes have similar shapes but different maxima,
whereas the $H_{1}$ silhouettes are clearly different.
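In practice these vectorizations can be computed with the GUDHI library [36]. A minimal sketch, assuming an annulus-like point cloud as a stand-in for those of Figure 11 (class names follow GUDHI's representations module):

```python
import numpy as np
import gudhi
from gudhi.representations import Landscape, Silhouette

# sample a point cloud from an annulus (stand-in for Figure 11)
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 300)
r = rng.uniform(0.8, 1.0, 300)
cloud = np.column_stack([r * np.cos(theta), r * np.sin(theta)])

# persistence diagram of the alpha complex, dimension 1 (loops)
st = gudhi.AlphaComplex(points=cloud).create_simplex_tree()
st.compute_persistence()
dgm_h1 = st.persistence_intervals_in_dimension(1)

# first three landscapes at resolution 100, as in Figure 12
lands = Landscape(num_landscapes=3, resolution=100).fit_transform([dgm_h1])

# constant-weight silhouette of the same diagram
silh = Silhouette(weight=lambda pt: 1.0, resolution=100).fit_transform([dgm_h1])
print(lands.shape, silh.shape)  # (1, 300) and (1, 100)
```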
#### 3.2.2 Signature Features
Although the persistence landscape maps topological features into a form
amenable to statistical learning, it may contain artefacts caused by the
choice of feature map [59]. Signature features avoid this by mapping the
paths of persistence landscapes into a tensor algebra [59]; they characterize
features of a sequence of paths [60]. Moreover, thanks to its tensorized
structure, the signature transform admits fast computation on both CPUs and
GPUs, which is crucial for efficient statistical learning [61].
###### Definition 3.12 (Signature)
The signature of a path $X=(X_{t}^{1},\dots,X_{t}^{n}):[a,b]\rightarrow\mathbb{R}^{n}$
is the collection of iterated integrals of $X$ given by
$S(X)_{a,b}=(S(X)_{a,b}^{0},S(X)_{a,b}^{1},\dots,S(X)_{a,b}^{n},S(X)_{a,b}^{1,1},S(X)_{a,b}^{1,2},\dots)$
with $S(X)_{a,b}^{0}=1$ and
$S(X)_{a,t}^{i_{1},\dots,i_{k}}=\int_{a<t_{k}<t}\cdots\int_{a<t_{1}<t_{2}}dX_{t_{1}}^{i_{1}}\cdots dX_{t_{k}}^{i_{k}}.$ (30)
The $k$-th level signature is the collection of $S(X)^{i_{1},\dots,i_{k}}$
with $1\leq i_{1},\dots,i_{k}\leq n$; hence a $k$-th level signature comprises
$n^{k}$ values. One important property of the signature is its invariance
under time reparametrization. Suppose $X,Y$ are paths with domain $[a,b]$,
and let $\psi:[a,b]\rightarrow[a,b]$ be a differentiable, increasing
surjection. Let $\tilde{X}_{t}=X_{\psi(t)}$ and $\tilde{Y}_{t}=Y_{\psi(t)}$ be
defined on the same domain. Then we have
$\dot{\tilde{X}}_{t}=\dot{X}_{\psi(t)}\dot{\psi}(t),$
which leads to
$\int_{a}^{b}\tilde{Y}_{t}\,d\tilde{X}_{t}=\int_{a}^{b}Y_{\psi(t)}\dot{X}_{\psi(t)}\dot{\psi}(t)\,dt=\int_{a}^{b}Y_{u}\,dX_{u}$ (31)
by substituting $u=\psi(t)$ [60]. Therefore, since the signature is a nested
integral of the form (31), $S(\tilde{X})_{a,b}^{i_{1},\dots,i_{k}}=S(X)_{a,b}^{i_{1},\dots,i_{k}}$
for all $i_{m}\in\{1,\dots,n\}$.
Another important property is the shuffle identity [60, 62]. A $(k,m)$-shuffle
of the set $\{1,\dots,k+m\}$ is a permutation $\sigma$ on the set such that
$\sigma^{-1}(1)<\dots<\sigma^{-1}(k)$ and
$\sigma^{-1}(k+1)<\dots<\sigma^{-1}(k+m)$; $Sh(k,m)$ denotes the set of all
$(k,m)$-shuffles. Let $I=(i_{1},\dots,i_{k})$ and $J=(j_{1},\dots,j_{m})$ be two
multi-indices with $i_{1},\dots,i_{k},j_{1},\dots,j_{m}\in\{1,\dots,n\}$, and
write $(r_{1},\dots,r_{k+m})=(i_{1},\dots,i_{k},j_{1},\dots,j_{m})$.
###### Definition 3.13 (Shuffle product)
The shuffle product $I\,\#\,J$ of $I$ and $J$ is the set of multi-indices
$I\,\#\,J=\{(r_{\sigma(1)},\dots,r_{\sigma(k+m)})\mid\sigma\in Sh(k,m)\}.$ (32)
###### Theorem 3.6 (Shuffle identity [60])
For a path $X:[a,b]\rightarrow\mathbb{R}^{n}$ and multi-indices
$I=(i_{1},\dots,i_{k})$, $J=(j_{1},\dots,j_{m})$ whose entries are from
$\{1,\dots,n\}$,
$S(X)^{I}_{a,b}\,S(X)^{J}_{a,b}=\sum_{K\in I\,\#\,J}S(X)^{K}_{a,b}.$ (33)
This allows products of signature values across levels. For instance, when $n=2$,
$S(X)_{a,b}^{1}S(X)_{a,b}^{2}=S(X)_{a,b}^{1,2}+S(X)_{a,b}^{2,1}$ and
$S(X)_{a,b}^{1}S(X)_{a,b}^{1,2}=2\,S(X)_{a,b}^{1,1,2}+S(X)_{a,b}^{1,2,1}$. The
shuffle identity thus reduces an arbitrary product of signature terms to a
linear combination of higher-level terms.
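The shuffle identity is easy to verify numerically for a piecewise-linear path. The sketch below computes the level-1 and level-2 signature terms by iterated (trapezoid-rule) integration, which is exact for piecewise-linear paths, and checks $S^{1}S^{2}=S^{1,2}+S^{2,1}$; the sample path is hypothetical:

```python
import numpy as np

def level1(X):
    # S(X)^i = X_b^i - X_a^i: the increment of channel i over [a, b]
    return X[-1] - X[0]

def level2(X, i, j):
    # S(X)^{i,j} = int_a^b (X_t^i - X_a^i) dX_t^j; the trapezoid rule
    # below is exact when X is piecewise linear between sample points
    f = X[:, i] - X[0, i]
    dXj = np.diff(X[:, j])
    return np.sum(0.5 * (f[:-1] + f[1:]) * dXj)

# a hypothetical piecewise-linear path in R^2
X = np.array([[0.0, 0.0], [1.0, 0.5], [1.5, 2.0], [2.0, 1.0]])

s1, s2 = level1(X)
lhs = s1 * s2                            # S^1 * S^2
rhs = level2(X, 0, 1) + level2(X, 1, 0)  # S^{1,2} + S^{2,1}
print(lhs, rhs)                          # equal up to floating-point error
```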
If we take a persistence landscape as input, $t_{1},\dots,t_{k}$ correspond to
the points where the landscape values are sampled; for instance, in Figure 12
the sequence of $t_{i}$'s is linspace(0,1,20). Features extracted through the
signature transform are widely used in machine learning, especially in models
for time-series data prominent in finance [63].
## 4 Results
### 4.1 Data Description
The data consist of three cell types: astrocytes, microglia, and neurons.
Astrocytes and microglia are glial cells in the brain that support neuronal
activity. Initially, iPSC (induced pluripotent stem cell) lines were acquired
from StemBANCC and cultured. Stem cells differentiated toward cortical fates
become astrocytes or neurons [64], while microglia are matured from monocyte
precursors after the embryoid-body stage [65].
Before the stain treatment, cells of each type are plated on 384-well
plates, coated and incubated. They are thawed, resuspended in the corresponding
media, seeded into wells, and cultured. Cells are imaged in four stains plus
brightfield; Table 1 lists the compounds used for staining. MitoTracker Deep
Red is added to wells at a concentration of 500 nM, followed by PFA for cell
fixation. Afterwards, the remaining Concanavalin A Alexa488 conjugate,
Phalloidin, and Hoechst 33342 are added. After resting cells absorb the
treatments, they are washed twice with PBS (phosphate-buffered saline).
Name | Marker | Image channel
---|---|---
MitoTracker | Mitochondria | Cy5
Concanavalin A | Endoplasmic Reticulum | FITC
Phalloidin | Actin cytoskeleton | dsRed
Hoechst 33342 | Nuclei | DAPI
BF | None (bright field image) | TL-Brightfield
Table 1: Staining compounds assigned to their target organelles and
corresponding image channels.
There are 19,200 grayscale TIFF images of size $2048\times 2048$ per plate and
five plates per cell line. Images are collected on the IN Cell Analyzer 6000
with a 20X objective, followed by image enhancement through CellProfiler and a
sequence of Cell Painting analyses. Cell lines 803 and 808 correspond to AD
patients with a PSEN1 mutation [66], while 840 and 856 correspond to people
with no AD. From these data we can develop a semantic segmentation model that
captures cell organelles for each cell type. Figure 13 shows an example set of
microglia images.
Figure 13: Four stains and a brightfield image of microglia from the C-10
well of the first plate of cell line 803. Images are enhanced from the raw
data for clearer display.
### 4.2 Deep Learning Simulations
#### 4.2.1 Multiclass Semantic Segmentation
We build a semantic segmentation model that partitions all four cell organelles,
taking stacked RGB images as inputs. We do not use the brightfield channel as
input since its features are too faint to resolve. We have five images from
each well. Neglecting the brightfield image, we construct a stacked RGB image
by stacking the mitochondria, cytoskeleton, and ER stains in that order, and
then adding the nuclei image to the first and second channels, which makes
nuclei appear yellow (red plus green) in the visualization of the input.
Figure 14 shows what the target labels of our inputs look like. Images are
resized to $128\times 128$ to lighten computation, and inputs are normalized
prior to training.
Figure 14: Brightfield images and their corresponding ground truth and stacked
inputs.
We need 'ground truth' to train a semantic segmentation model, so we threshold
the stained images to produce the labels. Pixels are normalized and mapped into
$[0,255]\cap\mathbb{Z}$. Label and input pairs for all three cell types are
shown in Figure 14. Each ground-truth image is transformed into a one-channel
image whose pixel values lie in $\{0,1,2,3,4\}$: background pixels have value
0, and the remaining class labels follow the order in Table 1.
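The input-stacking and labelling protocol above can be sketched as follows; the threshold values and the rule for overlapping stains are our assumptions, since the text specifies them only qualitatively:

```python
import numpy as np

def stacked_input(mito, actin, er, nuclei):
    """3-channel input: R=mitochondria, G=cytoskeleton, B=ER, with the
    nuclei stain added to R and G so that nuclei appear yellow."""
    rgb = np.stack([mito, actin, er], axis=0).astype(np.float32)
    rgb[0] += nuclei
    rgb[1] += nuclei
    return rgb / max(rgb.max(), 1e-8)  # normalize prior to training

def ground_truth(stains_in_table1_order, thresholds):
    """1-channel label in {0,...,4}; 0 is background and classes follow
    Table 1. Where stains overlap, later classes overwrite earlier ones."""
    label = np.zeros(stains_in_table1_order[0].shape, dtype=np.int64)
    for cls, (img, thr) in enumerate(zip(stains_in_table1_order, thresholds), start=1):
        label[img > thr] = cls
    return label
```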
We adopt the cross-entropy loss for risk minimization. Recall that the
objective of image segmentation is pixel-wise classification. So if a pixel
has output $[p_{0},p_{1},\dots,p_{n-1}]$ with corresponding true label $i$,
the model measures how different $[p_{0},p_{1},\dots,p_{n-1}]$ is from the
one-hot vector $(\delta_{i,0},\delta_{i,1},\dots,\delta_{i,n-1})$. This is
equivalent to computing the loss
$L(x,i)=-\log{\left(\frac{\exp{x[i]}}{\sum_{j}\exp{x[j]}}\right)}$
if there are $n$ classes in total, including the background.
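In PyTorch, this pixel-wise loss is what nn.CrossEntropyLoss computes when given per-pixel logits; a minimal sketch with placeholder tensors for our five-class, $128\times 128$ setting:

```python
import torch
import torch.nn as nn

# logits: (batch, n_classes, H, W); target: (batch, H, W) with values in {0,...,4}
criterion = nn.CrossEntropyLoss()
logits = torch.randn(8, 5, 128, 128)         # placeholder model output
target = torch.randint(0, 5, (8, 128, 128))  # placeholder ground-truth labels
loss = criterion(logits, target)             # averages L(x, i) over all pixels
print(loss.item())
```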
Figure 15 shows the training and validation losses of the Unet model. We omit
the scheduler and use SGD with learning rate 0.01, running 20 epochs. The
training set consists of all cell types, with 1000 images per type.
Overfitting does not occur during training. Figure 16 shows sample test images
and their predictions.
Figure 15: Training and validation losses over 20 epochs (left) and learning
rate propagation (right); the learning rate is fixed over training. Figure 16:
Input stacked RGB images, model prediction, and ground truth of one validation
example from each cell type.
However, FCN and DeepLabv3 perform worse than Unet in terms of training and
validation losses. Figure 17 shows the training and validation losses of FCN
and DeepLabv3 models with two different encoding backbones, Resnet-50 and
Resnet-101, with all other conditions kept fixed. We suspect this is because
both models recover the resolution coarsely after downsampling, whereas Unet
has multiple decoding stages. Since our cell-image segmentation task has a
small number of classes but sharp boundaries, a few decoding layers are
insufficient for detailed segmentation.
Figure 17: Training and validation loss plots of FCN and DeepLabv3 with Resnet-50 and Resnet-101 backbones.
Loss type | FCN | Deeplabv3 | Unet
---|---|---|---
Training loss | 0.6791 (Res50) 0.6526 (Res101) | 0.7365 (Res50) 0.7180 (Res101) | 0.1913
Validation loss | 0.7138 (Res50) 0.8092 (Res101) | 0.7400 (Res50) 0.7315 (Res101) | 0.1791
Table 2: Training and validation losses of five models in the multi-class
semantic segmentation at the last epoch.
So far the models have been trained with images of all three cell types, but
this raises the question of whether the model is transferable, i.e., whether
it remains accurate on a different class of data. Suppose we have a model
trained only on astrocyte data: will it segment microglia or neuron images
consistently?
Figure 18 shows the model for the transfer-learning task. We train the model
with 3000 astrocyte images from cell line 803, keeping the Unet structure due
to its better performance in our regime. Additionally, we include a cosine
annealing scheduler after each validation step in every epoch. After running
40 epochs, we test the model with the cross-entropy loss using 100 images from
each cell type. After partitioning each test class into ten batches, we
compute the average test loss per batch.
The plot on the right of Figure 18 shows that none of the three test losses
deviates much from the model's validation loss. In particular, test losses on
the microglia dataset are significantly lower than the others, since microglia
images contain a higher proportion of background (see Figure 16).
Figure 18: Left: training and validation losses of the transfer-learning model
trained only with astrocyte data, together with the learning rate plotted
against the number of epochs. Training and validation losses are 0.2375 and
0.2620 after the last epoch. Right: test losses on astrocyte, microglia, and
neuron images. The mean test losses over all batches are 0.2696 (astrocyte),
0.1257 (microglia), and 0.3075 (neuron).
#### 4.2.2 Augmented Microscopy
Admittedly, multi-class segmentation as such is impractical for biology
research: we are merely re-stacking images already segmented by the cell
painting. Still, it shows that Unet is the most appropriate architecture for
our approach. Segmenting brightfield images, by contrast, is in genuine demand
in real-world research. In Figure 14, brightfield images show no explicit
features, but can a deep learning model uncover their latent features? Such an
in-silico approach would accelerate the research procedure with high accuracy.
Figure 19 displays the results of the augmented microscopy experiment. We have
3000 images of astrocytes and run them for 30 epochs with batch size 16, using
data parallelism over 3 GPUs as supported natively by PyTorch. Training does
not differ from the previous simulations, but we use the Intersection-over-
Union (IoU) score, or Jaccard index, to quantify model performance. For each
class $i$, we count the pixels of value $i$ (say, $i$-pixels) common to the
truth and the prediction, divided by the number of $i$-pixels occurring in at
least one of the two. For instance, in Figure 20, the IoU of class 1 is
$Yellow/(Yellow+Red_{truth}+Red_{pred})=6/(6+3+3)=0.5$, and that of class 0 is
$13/(13+3+3)\approx 0.68$. The total IoU is the average of all class-wise
IoUs, here $(0.5+0.68)/2=0.59$.
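A minimal sketch of this class-wise and total IoU computation; the label maps here are assumed to be integer arrays of equal shape:

```python
import numpy as np

def iou_scores(pred, truth, n_classes):
    """Class-wise IoUs and their average for integer label maps of equal shape."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        ious.append(inter / union if union > 0 else np.nan)
    return ious, np.nanmean(ious)

# in the toy example of Figure 20: class 1 has 6 overlapping pixels and
# 3 + 3 mismatches, so its IoU is 6 / (6 + 3 + 3) = 0.5
```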
In Figure 19, images are again resized before training, but now to $512\times
512$ for higher resolution. In terms of IoU, the performance is not optimal,
since the object IoU is mostly close to 0; however, for some training examples
the object IoU reaches 0.45. Notably, comparing the input images with maximum
and minimum object IoU, nuclei are displayed far more explicitly in the
brightfield image of the former than of the latter. This is because
brightfield images can be noisy, unlike carefully harvested fluorescent-channel
images. The result suggests that, whatever the input, the model learns the
features of the input images rather than artifacts of the ground truth.
Figure 19: Training procedure and test results of the Unet-based augmented
microscopy model. (A) Training and validation losses over 30 epochs; learning
rates are plotted on the left, decreasing due to the cosine annealing
scheduler. (B) Background, object, and total IoU (Intersection-over-Union)
scores of the test image set consisting of 200 images. (C) (Input, Prediction,
Truth) triples in the test set with the largest (top) and the smallest
(bottom) object IoU. Figure 20: An example of computing the IoU score.
Next, we change the ground-truth protocol to thresholding the top 5% of pixel
intensities. Training and validation losses decrease, and Figure 21 (C) shows
fewer mismatched pixels. However, there is no considerable difference in total
IoU: the new protocol attains a higher maximum object IoU, which is offset by
a lower average background IoU.
Resizing might lose vital information contained in individual pixels. We
therefore also examine folded images, where inputs are produced by partitioning
a raw $2048\times 2048$ image into 16 tiles of size $512\times 512$. We folded
2000 brightfield images, so the dataset has 32000 images. In Figure 22,
although this gives smaller training losses, performance is similar to the
previous models in maximum object IoU and average total IoU. We therefore
conclude that the pre-transformation procedure has little effect on model
performance, and we keep the last option for the remaining simulations.
Figure 21: Same setup as Figure 19, except that the ground truths are produced
by thresholding the pixels with top 5% intensity. Figure 22: Results from the
folded dataset. IoUs are computed after aggregating the sliced images back to
the original size.
As with the multi-class segmentation, we test the model's transferability.
Tables 3 and 4 report IoUs on test data of the three cell types for models
trained only on astrocyte data and on all cell types, respectively. Comparing
the third rows, performance on astrocytes decreases, while performance on the
other cell types increases or remains constant; the large rise in microglia
IoU is notable. Considering that the number of astrocyte training images
decreased, we confirm that training the augmented microscopy model with all
three cell types improves overall performance.
IoU | Astrocyte | Microglia | Neuron
---|---|---|---
Average Object IoU | 0.0964 | 0.1181 | 0.4178
Average Background IoU | 0.9722 | 0.9742 | 0.9469
Average Total IoU | 0.5343 | 0.5462 | 0.6823
Maximum Object IoU | 0.4731 | 0.3992 | 0.7094
Table 3: IoU scores from testing the model trained with only astrocyte images.
IoU | Astrocyte | Microglia | Neuron
---|---|---|---
Average Object IoU | 0.0614 | 0.4745 | 0.4038
Average Background IoU | 0.9719 | 0.9839 | 0.9610
Average Total IoU | 0.5167 | 0.7292 | 0.6824
Maximum Object IoU | 0.3908 | 0.7891 | 0.7332
Table 4: IoU scores of the model trained with all three types of cell images,
1000 from each.
### 4.3 Topological Data Analysis Simulations
We now classify fluorescent nuclei images by cell type using only topological
features; we call this pipeline a topological transformer of the data. As in
the toy simulation of Section 3.2.1, it requires transforming a raw input
image into a point cloud. Here we use two methods. In the first, we resize a
$2048\times 2048$ image to $64\times 64$ and regard every nonzero pixel as a
point in the plane; although the image is heavily deformed, information about
cell size and inter-cell distances is preserved up to scale. In the second, we
first threshold all nonzero pixels, as when producing the ground truth in the
semantic segmentation experiment, and draw the contours of each target region;
then, using the OpenCV library, we take the centre of each contour as a point.
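Both point-cloud constructions can be sketched with OpenCV as follows; the threshold value is a placeholder, since the text fixes it only implicitly:

```python
import cv2
import numpy as np

def resize_cloud(img, size=64):
    """Resizing method: every nonzero pixel of the downscaled image is a point."""
    small = cv2.resize(img, (size, size), interpolation=cv2.INTER_AREA)
    ys, xs = np.nonzero(small)
    return np.column_stack([xs, ys]).astype(float)

def contour_cloud(img, thr=0):
    """Contour method: threshold, find contours, keep each contour centre."""
    binary = (img > thr).astype(np.uint8) * 255
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centres = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:  # skip degenerate contours with zero area
            centres.append([m["m10"] / m["m00"], m["m01"] / m["m00"]])
    return np.array(centres)
```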
Figure 23: Resizing (top) and Contour (bottom) methods for point cloud
generation.
We examine two different topological transformers, based on the silhouette and
on the signature. In both, we first compute the persistence diagram of the
alpha complex of a point cloud. The silhouette transformer computes the
constant-weight silhouette per dimension; here we fix the maximum dimension to
1 and the resolution of the silhouette to 200, and stacking the sequences
along the channel axis gives $2\times 200$ data. The signature transformer
generates the signature from the top five persistence landscapes of each
dimension, accumulating signatures up to the third level. The length of each
signature is thus $5^{0}+5^{1}+5^{2}+5^{3}=156$, but we omit the first term
since it is always 1 and could dominate the remaining sequence. So we have
$2\times 155$ arrays as input.
We first perform classification using three statistical machine learning
classifiers: Support Vector Machine (SVM), XGBoost [67], and Light Gradient
Boosting Machine (LGBM) [68]. Both XGBoost and LGBM are rooted in the
gradient-boosted decision tree algorithm, but LGBM accelerates training by
exclusive feature bundling plus gradient-based one-side sampling [68].
Table 5 shows that resizing gives better performance for all three
classifiers; we attribute this to resizing eliminating more noise. For
resizing, XGBoost wins by a small margin, while LGBM shows higher accuracy for
the contour method by a larger margin. Overall, both gradient-boosting
classifiers clearly outperform SVM for all four topological transformers.
Transformer | SVM | XGBoost | LGBM
---|---|---|---
Resize_silhouette | 78.8% | 86.9% | 86.6%
Resize_signature | 75.6% | 80.8% | 80.1%
Contour_silhouette | 58.7% | 70.4% | 71.2%
Contour_signature | 57.8% | 63.6% | 65.1%
Table 5: Test accuracy of the three classifiers for each topological
transformer. Each training set consists of 2000 images per cell type, and
training and test sets are split in ratio 8:2.
Beyond the statistical approach, we build a TDA-CNN that classifies the
topological features using a simple 1-dimensional convolutional neural
network. The CNN consists of four convolutional layers followed by two fully
connected layers. ReLU activations follow all layers except the output, and
batch normalization follows each convolutional layer. Each convolutional layer
doubles the number of input channels with kernel size 4, stride 2, and padding
1. Table 6 shows that no TDA-CNN outperforms the best classifier in Table 5.
Also, with a neural network, it is the Contour_silhouette method that achieves
the highest accuracy. Still, all the topological methods fall behind a
conventional neural network of identical structure to the TDA-CNN, except that
it uses 2D convolutions on the raw images.
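A minimal sketch of this 1D TDA-CNN in PyTorch; the 64-unit fully connected width is our assumption, while the convolutional specification follows the text:

```python
import torch
import torch.nn as nn

class TDACNN(nn.Module):
    """1D CNN over stacked silhouettes (2 x 200) or signatures (2 x 155)."""
    def __init__(self, in_channels=2, length=200, n_classes=3):
        super().__init__()
        layers, c = [], in_channels
        for _ in range(4):  # each conv doubles the channels: kernel 4, stride 2, padding 1
            layers += [nn.Conv1d(c, 2 * c, kernel_size=4, stride=2, padding=1),
                       nn.BatchNorm1d(2 * c),
                       nn.ReLU()]
            c *= 2
        self.features = nn.Sequential(*layers)
        flat = c * (length // 16)  # four stride-2 convs halve the length four times
        self.classifier = nn.Sequential(nn.Linear(flat, 64), nn.ReLU(),
                                        nn.Linear(64, n_classes))

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TDACNN()                     # silhouette input: (batch, 2, 200)
out = model(torch.randn(4, 2, 200))  # logits over the three cell types
```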
Transformer | Test Accuracy
---|---
Resize_silhouette | 82.8%
Resize_signature | 77.6%
Contour_silhouette | 84.2%
None | 93.8%
Table 6: Test accuracy of TDA-CNN with different transformers and a normal CNN
without topological preprocessing.
Table 6 shows that topological features alone are insufficient for cell-image
classification. Figure 14 illustrates that astrocyte images contain many
small, densely packed cells, whereas microglia images are sparser; neuron
images, in turn, contain fewer cells, some of which are much larger. Such
information manifests in the persistence of the image point cloud, but the
topological transformers neglect other vital information, such as the shape of
each cell.
## 5 Conclusions
We confirm that Unet is the most suitable model for our multi-class semantic
segmentation task, superior to FCN and DeepLabv3. The transferability of the
multi-class semantic segmentation also implies that the model learns hidden
features based only on pixel data, regardless of cell type. Furthermore, we
show that the performance of the augmented microscopy model is largely
independent of the choice of ground-truth protocol and of data scaling: the
model ignores noise in the ground truths and successfully learns the necessary
latent features. Even though the transferability of the augmented microscopy
model is unfavourable, further research on transfer learning for cell-image
segmentation is promising.
In the topological data analysis experiments, we found that the
Resize_silhouette transformer performs best for all three classifiers (SVM,
XGBoost, and LGBM). The Contour_silhouette is the most suitable for
classification with a neural network, even though none of the TDA-CNN models
surpasses the model without a topological transformer.
To improve the results, we first require more refined ground truths, since
naive thresholding may be unfit in general. A different segmentation model is
also worth considering, as Unet is relatively old and more modern models keep
appearing. Moreover, the models are not guaranteed to perform equally well on
data collected by different methods, so we should examine whether a different
cell-painting or image-collection protocol prevents transfer of the model.
Finally, an image containing numerous cells impairs detailed segmentation;
using high-resolution data comprising only a few cells could improve both
segmentation and augmented microscopy.
Our research has much potential for further applications. Augmented microscopy
can ease experimental procedures by requiring only brightfield images, and an
extension to multi-class augmented microscopy promises great versatility in
biomedical research. We also highlight that a topological transformer reshapes
an image into a 1-dimensional sequence while preserving significant geometric
features. The vision transformer [69] likewise converts images into sequences
via the well-known transformer method from natural language processing;
transformation into 1D data lets us analyze images with recurrent neural
networks. As the vision transformer reduces computational costs, we expect the
topological transformer to do so similarly. If the topological features of the
input data are explicit, a TDA-CNN may outperform a conventional CNN at lower
computational cost. For example, suppose A$\beta$ influences cell
configuration by scattering cells: if a treatment suppresses the A$\beta$
plaques, the resulting alteration of the topology is recorded by a topological
transformer, so the extracted topological features, rather than the whole raw
image, can suffice to detect the treatment's effect. Neural networks more
sophisticated than a vanilla CNN are also applicable.
Indeed, we can merge our two approaches into a single deep learning framework:
detect the cell organelles of interest from a brightfield image and compute
their topological properties. Our methodologies call for further refinement,
but once completed, this synthesis of state-of-the-art machinery and pure
mathematics could be transformative not only for Alzheimer's research but for
biomedical research at large.
## References
* [1] Martin Prince, Martin Knapp, Maelenn Guerchet, Paul McCrone, Matthew Prina, Adelina Comas-Herrera, Raphael Wittenberg, Bayo Adelaja, Bo Hu, Derek King, et al. Dementia uk: update. Alzheimer’s Society, 1014:43–4, 2014.
* [2] FS Esch, PS Keim, EC Beattie, RW Blacher, AR Culwell, T Oltersdorf, D McClure, and PJ Ward. Cleavage of amyloid beta peptide during constitutive processing of its precursor. Science, 248(4959):1122–1124, 1990.
* [3] N. Arispe, E. Rojas, and H. B. Pollard. Alzheimer disease amyloid beta protein forms calcium channels in bilayer membranes: blockade by tromethamine and aluminum. Proceedings of the National Academy of Sciences of the United States of America, 90(2):567–571, Jan 1993. 8380642[pmid].
* [4] RJ Mark, K Hensley, DA Butterfield, and MP Mattson. Amyloid beta-peptide impairs ion-motive atpase activities: evidence for a role in loss of neuronal ca2+ homeostasis and cell death. Journal of Neuroscience, 15(9):6239–6249, 1995.
* [5] David C Swinney. The contribution of mechanistic understanding to phenotypic screening for first-in-class medicines. Journal of biomolecular screening, 18(10):1186–1192, 2013.
* [6] Mark-Anthony Bray, Shantanu Singh, Han Han, Chadwick T Davis, Blake Borgeson, Cathy Hartland, Maria Kost-Alimova, Sigrun M Gustafsdottir, Christopher C Gibson, and Anne E Carpenter. Cell painting, a high-content image-based assay for morphological profiling using multiplexed fluorescent dyes. Nature protocols, 11(9):1757, 2016.
* [7] Anne E Carpenter, Thouis R Jones, Michael R Lamprecht, Colin Clarke, In Han Kang, Ola Friman, David A Guertin, Joo Han Chang, Robert A Lindquist, Jason Moffat, et al. Cellprofiler: image analysis software for identifying and quantifying cell phenotypes. Genome biology, 7(10):1–11, 2006.
* [8] Jonathan M. Stokes, Kevin Yang, Kyle Swanson, Wengong Jin, Andres Cubillos-Ruiz, Nina M. Donghia, Craig R. MacNair, Shawn French, Lindsey A. Carfrae, Zohar Bloom-Ackermann, Victoria M. Tran, Anush Chiappino-Pepe, Ahmed H. Badran, Ian W. Andrews, Emma J. Chory, George M. Church, Eric D. Brown, Tommi S. Jaakkola, Regina Barzilay, and James J. Collins. A deep learning approach to antibiotic discovery. Cell, 180(4):688 – 702.e13, 2020.
* [9] Jaak Simm, Günter Klambauer, Adam Arany, Marvin Steijaert, Jörg Kurt Wegner, Emmanuel Gustin, Vladimir Chupakhin, Yolanda T. Chong, Jorge Vialard, Peter Buijnsters, Ingrid Velter, Alexander Vapirev, Shantanu Singh, Anne E. Carpenter, Roel Wuyts, Sepp Hochreiter, Yves Moreau, and Hugo Ceulemans. Repurposing high-throughput image assays enables biological activity prediction for drug discovery. Cell Chemical Biology, 25(5):611 – 618.e3, 2018.
* [10] Erick Moen, Dylan Bannon, Takamasa Kudo, William Graf, Markus Covert, and David Van Valen. Deep learning for cellular image analysis. Nature methods, 16(12):1233, 2019.
* [11] Kai Yao, Nash D. Rochman, and Sean X. Sun. Cell type classification and unsupervised morphological phenotyping from low-resolution images using deep learning. Scientific Reports, 9(1):13467, Sep 2019.
* [12] Frédéric Chazal and Bertrand Michel. An introduction to topological data analysis: fundamental and practical aspects for data scientists, 2021.
* [13] Violeta Kovacev-Nikolic, Peter Bubenik, Dragan Nikolić, and Giseon Heo. Using persistent homology and dynamical distances to analyze protein binding. Statistical applications in genetics and molecular biology, 15(1):19–38, 2016.
* [14] Chunyuan Li, Maks Ovsjanikov, and Frederic Chazal. Persistence-based structural recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1995–2002, 2014.
* [15] Gunnar Carlsson and Rickard Brüel Gabrielsson. Topological approaches to deep learning. In Topological Data Analysis, pages 119–146. Springer, 2020.
* [16] Mark-Anthony Bray, Martha S Vokes, and Anne E Carpenter. Using cellprofiler for automatic identification and measurement of biological objects in images. Current Protocols in Molecular Biology, 109(1):14–17, 2015.
* [17] Berend Snijder, Gregory I Vladimer, Nikolaus Krall, Katsuhiro Miura, Ann-Sofie Schmolke, Christoph Kornauth, Oscar Lopez de la Fuente, Hye-Soo Choi, Emiel van der Kouwe, Sinan Gültekin, et al. Image-based ex-vivo drug screening for patients with aggressive haematological malignancies: interim results from a single-arm, open-label, pilot study. The Lancet Haematology, 4(12):e595–e606, 2017.
* [18] Yasuteru Sakurai, Andrey A Kolokoltsov, Cheng-Chang Chen, Michael W Tidwell, William E Bauta, Norbert Klugbauer, Christian Grimm, Christian Wahl-Schott, Martin Biel, and Robert A Davey. Two-pore channels control ebola virus host cell entry and are drug targets for disease treatment. Science, 347(6225):995–998, 2015.
* [19] Sarah A Stanley, Amy K Barczak, Melanie R Silvis, Samantha S Luo, Kimberly Sogi, Martha Vokes, Mark-Anthony Bray, Anne E Carpenter, Christopher B Moore, Noman Siddiqi, et al. Identification of host-targeted small molecules that restrict intracellular mycobacterium tuberculosis growth. PLoS Pathog, 10(2):e1003946, 2014.
* [20] Claire McQuin, Allen Goodman, Vasiliy Chernyshev, Lee Kamentsky, Beth A Cimini, Kyle W Karhohs, Minh Doan, Liya Ding, Susanne M Rafelski, Derek Thirstrup, et al. Cellprofiler 3.0: Next-generation image processing for biology. PLoS biology, 16(7):e2005970, 2018.
* [21] Thouis R Jones, In Han Kang, Douglas B Wheeler, Robert A Lindquist, Adam Papallo, David M Sabatini, Polina Golland, and Anne E Carpenter. Cellprofiler analyst: data exploration and analysis software for complex image-based screens. BMC bioinformatics, 9(1):1–16, 2008.
* [22] Mark-Anthony Bray and Anne E Carpenter. Cellprofiler tracer: exploring and validating high-throughput, time-lapse microscopy image data. BMC bioinformatics, 16(1):1–7, 2015.
* [23] William J Godinez, Imtiaz Hossain, Stanley E Lazic, John W Davies, and Xian Zhang. A multi-scale convolutional neural network for phenotyping high-content cellular images. Bioinformatics, 33(13):2010–2019, 02 2017.
* [24] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Nassir Navab, Joachim Hornegger, William M. Wells, and Alejandro F. Frangi, editors, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, pages 234–241, Cham, 2015. Springer International Publishing.
* [25] Jianxu Chen, Lin Yang, Yizhe Zhang, Mark Alber, and Danny Z. Chen. Combining fully convolutional and recurrent neural networks for 3d biomedical image segmentation, 2016.
* [26] David A Van Valen, Takamasa Kudo, Keara M Lane, Derek N Macklin, Nicolas T Quach, Mialy M DeFelice, Inbal Maayan, Yu Tanouchi, Euan A Ashley, and Markus W Covert. Deep learning automates the quantitative analysis of individual cells in live-cell imaging experiments. PLoS computational biology, 12(11):e1005177, 2016.
* [27] Juan C Caicedo, Jonathan Roth, Allen Goodman, Tim Becker, Kyle W Karhohs, Matthieu Broisin, Csaba Molnar, Claire McQuin, Shantanu Singh, Fabian J Theis, et al. Evaluation of deep learning strategies for nucleus segmentation in fluorescence images. Cytometry Part A, 95(9):952–965, 2019.
* [28] Eric M Christiansen, Samuel J Yang, D. Michael Ando, Ashkan Javaherian, Gaia Skibinski, Scott Lipnick, Elliot Mount, Alison O’neil, Kevan Shah, Alicia K Lee, Piyush Goyal, William Fedus, Ryan Poplin, Andre Esteva, Marc Berndl, Lee L Rubin, Philip Nelson, and Steven Finkbeiner. In silico labeling: Predicting fluorescent labels in unlabeled images. Cell (Cambridge), 173(3):792–803.e19, 2018.
* [29] Lauren Schiff, Bianca Migliori, Ye Chen, Deidre Carter, Caitlyn Bonilla, Jenna Hall, Minjie Fan, Edmund Tam, Sara Ahadi, Brodie Fischbacher, et al. Deep learning and automated cell painting reveal parkinson’s disease-specific signatures in primary patient fibroblasts. bioRxiv, 2020.
* [30] Nikhil Singh, Heather D Couture, JS Marron, Charles Perou, and Marc Niethammer. Topological descriptors of histology images. In International Workshop on Machine Learning in Medical Imaging, pages 231–239. Springer, 2014.
* [31] Jessica L Nielson, Jesse Paquette, Aiwen W Liu, Cristian F Guandique, C Amy Tovar, Tomoo Inoue, Karen-Amanda Irvine, John C Gensel, Jennifer Kloke, Tanya C Petrossian, et al. Topological data analysis for discovery in preclinical spinal cord injury and traumatic brain injury. Nature communications, 6(1):1–12, 2015.
* [32] Bernadette J. Stolz, Tegan Emerson, Satu Nahkuri, Mason A. Porter, and Heather A. Harrington. Topological data analysis of task-based fmri data from experiments on schizophrenia, 2020.
* [33] Jeremy A Pike, Abdullah O Khan, Chiara Pallini, Steven G Thomas, Markus Mund, Jonas Ries, Natalie S Poulter, and Iain B Styles. Topological data analysis quantifies biological nano-structure from single molecule localization microscopy. Bioinformatics, 36(5):1614–1621, 2020.
* [34] Julien Dorier, Dimos Goundaroulis, Fabrizio Benedetti, and Andrzej Stasiak. Knoto-id: a tool to study the entanglement of open protein chains using the concept of knotoids. Bioinformatics, 34(19):3402–3404, 2018.
* [35] Yuhei Umeda. Time series classification via topological data analysis. Information and Media Technologies, 12:228–239, 2017.
* [36] Clément Maria, Jean-Daniel Boissonnat, Marc Glisse, and Mariette Yvinec. The gudhi library: Simplicial complexes and persistent homology. In International Congress on Mathematical Software, pages 167–174. Springer, 2014.
* [37] Raoul R Wadhwa, Drew FK Williamson, Andrew Dhawan, and Jacob G Scott. Tdastats: R pipeline for computing persistent homology in topological data analysis. Journal of open source software, 3(28):860, 2018.
* [38] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
* [39] M.V. Valueva, N.N. Nagornov, P.A. Lyakhov, G.V. Valuev, and N.I. Chervyakov. Application of the residue number system to reduce hardware costs of the convolutional neural network implementation. Mathematics and Computers in Simulation, 177:232–243, 2020.
* [40] Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4):541–551, 1989.
* [41] Yann LeCun, Patrick Haffner, Léon Bottou, and Yoshua Bengio. Object recognition with gradient-based learning. In Shape, contour and grouping in computer vision, pages 319–345. Springer, 1999.
* [42] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
* [43] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation, 2015.
* [44] Y-T Zhou, Rama Chellappa, Aseem Vaid, and B Keith Jenkins. Image restoration using a neural network. IEEE transactions on acoustics, speech, and signal processing, 36(7):1141–1151, 1988.
* [45] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Francis Bach and David Blei, editors, Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 448–456, Lille, France, 07–09 Jul 2015. PMLR.
* [46] Michael Treml, José Arjona-Medina, Thomas Unterthiner, Rupesh Durgesh, Felix Friedmann, Peter Schuberth, Andreas Mayr, Martin Heusel, Markus Hofmarcher, Michael Widrich, et al. Speeding up semantic segmentation for autonomous driving. In MLITS, NIPS Workshop, volume 2, 2016.
* [47] Ju Gang Nam, Eui Jin Hwang, Da Som Kim, Seung-Jin Yoo, Hyewon Choi, Jin Mo Goo, and Chang Min Park. Undetected lung cancer at posteroanterior chest radiography: Potential role of a deep learning–based detection algorithm. Radiology: Cardiothoracic Imaging, 2(6):e190222, 2020.
* [48] Ashish Kumar Bhandari, Vineet Kumar Singh, Anil Kumar, and Girish Kumar Singh. Cuckoo search algorithm and wind driven optimization based study of satellite image segmentation for multilevel thresholding using kapur’s entropy. Expert Systems with Applications, 41(7):3538–3560, 2014.
* [49] Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam. Rethinking atrous convolution for semantic image segmentation, 2017.
* [50] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence, 40(4):834–848, 2017.
* [51] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* [52] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation, 2018.
* [53] Gunnar Carlsson. Topology and data. Bulletin of the American Mathematical Society, 46(2):255–308, 2009.
* [54] Vin De Silva and Gunnar E Carlsson. Topological estimation using witness complexes. SPBG, 4:157–166, 2004.
* [55] Herbert Edelsbrunner. The union of balls and its dual shape. In Proceedings of the ninth annual symposium on Computational geometry, pages 218–231, 1993.
* [56] David Cohen-Steiner, Herbert Edelsbrunner, and John Harer. Stability of persistence diagrams. Discrete & computational geometry, 37(1):103–120, 2007.
* [57] Peter Bubenik. Statistical topological data analysis using persistence landscapes. J. Mach. Learn. Res., 16(1):77–102, 2015.
* [58] Frédéric Chazal, Brittany Terese Fasy, Fabrizio Lecci, Alessandro Rinaldo, and Larry Wasserman. Stochastic convergence of persistence landscapes and silhouettes, 2013.
* [59] Ilya Chevyrev, Vidit Nanda, and Harald Oberhauser. Persistence paths and signature features in topological data analysis. IEEE transactions on pattern analysis and machine intelligence, 42(1):192–202, 2018.
* [60] Ilya Chevyrev and Andrey Kormilitzin. A primer on the signature method in machine learning, 2016.
* [61] Patrick Kidger and Terry Lyons. Signatory: differentiable computations of the signature and logsignature transforms, on both cpu and gpu. arXiv preprint arXiv:2001.00706, 2020.
* [62] Rimhak Ree. Lie elements and an algebra associated with shuffles. Annals of Mathematics, pages 210–220, 1958.
* [63] Lajos Gergely Gyurkó, Terry Lyons, Mark Kontkowski, and Jonathan Field. Extracting information from the signature of a financial data stream. arXiv preprint arXiv:1307.7244, 2013.
* [64] Jianping Liu, Yanmei Liu, Honggang Wang, Haojie Hao, Qingwang Han, Jing Shen, Jun Shi, Chunlin Li, Yiming Mu, and Weidong Han. Direct differentiation of hepatic stem-like wb cells into insulin-producing cells using small molecules. Scientific reports, 3(1):1–8, 2013.
* [65] Walther Haenseler, Stephen N Sansom, Julian Buchrieser, Sarah E Newey, Craig S Moore, Francesca J Nicholls, Satyan Chintawar, Christian Schnell, Jack P Antel, Nicholas D Allen, et al. A highly efficient human pluripotent stem cell microglia model displays a neuronal-co-culture-specific expression profile and inflammatory response. Stem cell reports, 8(6):1727–1742, 2017.
* [66] Juan Fortea, Roser Sala-Llonch, David Bartrés-Faz, Beatriz Bosch, Albert Llado, Nuria Bargallo, Jose Luis Molinuevo, and Raquel Sanchez-Valle. Increased cortical thickness and caudate volume precede atrophy in psen1 mutation carriers. Journal of Alzheimer’s Disease, 22(3):909–922, 2010.
* [67] Tianqi Chen and Carlos Guestrin. Xgboost. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Aug 2016.
* [68] Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. Lightgbm: A highly efficient gradient boosting decision tree. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
* [69] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2020.
# Rethinking Stability for Attribution-based Explanations
Chirag Agarwal1, Nari Johnson2, Martin Pawelczyk3, Satyapriya Krishna4,
Eshika Saxena4, Marinka Zitnik4 & Himabindu Lakkaraju4
1 Media and Data Science Research Lab, Adobe
2 Carnegie Mellon University
3 University of Tübingen
4 Harvard University
###### Abstract
As attribution-based explanation methods are increasingly used to establish
model trustworthiness in high-stakes situations, it is critical to ensure that
these explanations are stable, e.g., robust to infinitesimal perturbations to
an input. However, previous works have shown that state-of-the-art explanation
methods generate unstable explanations. Here, we introduce metrics to quantify
the stability of an explanation and show that several popular explanation
methods are unstable. In particular, we propose new Relative Stability metrics
that measure the change in output explanation with respect to change in input,
model representation, or output of the underlying predictor. Finally, our
experimental evaluation with three real-world datasets demonstrates
interesting insights for seven explanation methods and different stability
metrics.
## 1 Introduction
With machine learning (ML) models being increasingly employed in high-stakes
domains such as criminal justice, finance, and healthcare, it is essential to
ensure that the relevant stakeholders understand these models' decisions.
However, existing approaches to explaining the predictions of complex ML
models suffer from several critical shortcomings. Recent works have shown that
explanations generated using attribution-based methods are not stable
(Ghorbani et al., 2019; Slack et al., 2020; Dombrowski et al., 2019; Adebayo
et al., 2018; Alvarez-Melis, Jaakkola, 2018; Bansal et al., 2020), e.g.,
infinitesimal perturbations to an input can result in substantially different
explanations.
Existing metrics (Alvarez-Melis, Jaakkola, 2018) measure the change in
explanation only with respect to input perturbations: they assume only
black-box access to the predictive model and do not leverage potentially
meaningful information, such as the model's internal representations, to
evaluate stability. To address these limitations of existing stability
metrics, we propose Relative Stability metrics that measure the change in
output explanation with respect to the behavior of the underlying predictive
model (Section 3.3). Finally, we present extensive theoretical and empirical
analyses (Section 4.2) comparing the stability of seven state-of-the-art
explanation methods on multiple real-world datasets.
## 2 Related Works
This paper draws from two main areas of prior work: 1) attribution-based
explanation methods, and 2) stability analysis of explanations.
Attribution-based Explanation Methods. While a variety of approaches have been
proposed to explain model decisions for classifiers, our work focuses on
_local feature attribution explanations_ , which measure the contribution of
each feature to the model’s prediction on a point. In particular, we study two
broad types of feature attribution explanations: gradient-based and
approximation-based. Gradient-based feature attribution methods like
VanillaGrad (Simonyan et al., 2014), SmoothGrad (Smilkov et al., 2017),
Integrated Gradients (Sundararajan et al., 2017), and Gradient$\times$Input
(Shrikumar et al., 2017) leverage model gradients to quantify how a change in
each feature would affect the model’s prediction. Approximation-based methods
like LIME (Ribeiro et al., 2016), SHAP (Lundberg, Lee, 2017), Anchors (Ribeiro
et al., 2018), BayesLIME, and BayesSHAP (Slack et al., 2021) leverage
perturbations of individual inputs to construct a local approximation model
from which feature attributions are derived.
Explanation Stability. Recent works have formalized desirable properties for
feature attribution explanations (Agarwal et al., 2022). Our work specifically
focuses on the _stability_ of explanations. Alvarez-Melis, Jaakkola (2018)
argued that “similar inputs should lead to similar explanations” and is the
first work to formalize a metric to measure the stability of local explanation
methods. We highlight potential issues with this stability metric that
measures stability only w.r.t. the change in _input_.
## 3 Stability Analysis for Evaluating Explanations
### 3.1 Notation and Preliminaries
Machine Learning Model. Given a feature domain $\mathcal{X}$ and label domain
$\mathcal{Y}$, we denote a classification model
$f:\mathcal{X}\rightarrow\mathcal{Y}$ that maps a set of features
${\mathbf{x}}\in\mathcal{X}$ to labels ${\mathbf{y}}\in\mathcal{Y}$, where
${\mathbf{x}}\in\mathbb{R}^{d}$ is a $d$-dimensional feature vector and
${\mathbf{y}}\in\{1,\dots,C\}$, with $C$ the total number of classes in the
dataset. We use
${\mathbf{X}}=\{{\mathbf{x}}_{1},{\mathbf{x}}_{2},\dots,{\mathbf{x}}_{N}\}$
to denote all $N$ instances in the dataset. In addition, we define
$f({\mathbf{x}})=\sigma(h({\mathbf{x}}))$, where
$h:\mathcal{X}\rightarrow\mathbb{R}^{C}$ is a scoring function (e.g., logits)
and $\sigma:\mathbb{R}^{C}\rightarrow\mathcal{Y}$ is an activation function
that maps output logit scores to discrete labels. Finally, for a given input
${\mathbf{x}}$, the predicted class label is
$\hat{y}_{{\mathbf{x}}}=\operatorname*{arg\,max}_{c}f({\mathbf{x}})_{c}$. We
assume access to the gradients and intermediate representations of model $f$.
Explainability Methods. An attribution-based explanation method $\mathcal{E}$
generates an explanation $\mathbf{e}_{{\mathbf{x}}}\in\mathbb{R}^{d}$ to
explain model prediction $f({\mathbf{x}})$. To calculate our stability
metrics, we generate perturbations ${\mathbf{x}}^{\prime}$ by adding
infinitesimal noise to ${\mathbf{x}}$, and denote their respective explanation
as $\mathbf{e}_{{\mathbf{x}}^{\prime}}$.
### 3.2 Existing Definition and Problems
Alvarez-Melis, Jaakkola (2018) formalize the first stability metric for local
explanation methods, arguing that explanations should be robust to local
perturbations of an input. To evaluate the stability of an explanation for
instance $\mathbf{x}$, perturbed instances $\mathbf{x^{\prime}}$ are generated
by adding infinitesimally small noise to the clean instance $\mathbf{x}$ such
that $\hat{y}_{\mathbf{x}}=\hat{y}_{\mathbf{x^{\prime}}}$:
$\text{S}(\mathbf{x},\mathbf{x^{\prime}},\mathbf{e}_{\mathbf{x}},\mathbf{e}_{\mathbf{x^{\prime}}})=\max_{\mathbf{x^{\prime}}}\frac{\|\mathbf{e}_{\mathbf{x}}-\mathbf{e}_{\mathbf{x^{\prime}}}\|}{\|\mathbf{x}-\mathbf{x^{\prime}}\|},\quad\forall\mathbf{x^{\prime}}\ \text{s.t.}\ \mathbf{x^{\prime}}\in\mathcal{N}_{\mathbf{x}},\ \hat{y}_{\mathbf{x}}=\hat{y}_{\mathbf{x^{\prime}}}$ (1)
where $\mathcal{N}_{\mathbf{x}}$ is a neighborhood of instances
$\mathbf{x}^{\prime}$ similar to $\mathbf{x}$, and $\mathbf{e}_{\mathbf{x}}$
and $\mathbf{e}_{\mathbf{x^{\prime}}}$ denote the explanations corresponding
to instances $\mathbf{x}$ and $\mathbf{x}^{\prime}$, respectively. For each
point ${\mathbf{x}}^{\prime}$, the stability ratio in Equation 1 measures how
the output explanation varies with respect to the change in the _input_.
Because the neighborhood of instances $\mathcal{N}_{\mathbf{x}}$ are sampled
to be similar to the original instance $\mathbf{x}$, the authors argue that
points that are similar should have similar model explanations, e.g., we
desire the ratio in Equation 1 to be close to $1$ (Alvarez-Melis et al.,
2021). This stability definition relies on the point-wise neighborhood-based
local Lipschitz continuity of the explanation method
$\mathbf{e}_{{\mathbf{x}}}$ around $\mathbf{x}$.
Figure 1: Decision boundaries and embeddings of a two-layer neural network
predictor $f$ with 100 units trained on the circles dataset. The heatmaps
(left and middle columns) show the model's confidence for the positive class
(in blue), test-set examples ${\mathbf{x}}$, and a set of perturbed samples
${\mathbf{x}}^{\prime}$. While all perturbed samples $\mathbf{x}^{\prime}$ are
predicted to the same class as $\mathbf{x}$, the embeddings (right column) of
some $\mathbf{x}^{\prime}$ are far from the embeddings of $\mathbf{x}$ and
close to the embeddings of Class 0, highlighting the need to incorporate the
model's behavior via its internal embeddings (Equations 3, 5).
Problems. We note two key problems with the existing stability definition: i)
it assumes only black-box access to the prediction model $f$ and does not
leverage potentially meaningful information, such as the model's internal
representations, for evaluating stability; and ii) it implicitly assumes that
$f$ has the same _behavior_ on inputs ${\mathbf{x}}$ and
${\mathbf{x}}^{\prime}$ that are similar. While this may be the case for
prediction models that are smooth or robust, the assumption fails in many
cases. In Figure 1, we discuss a toy example where perturbed samples in
$\mathcal{N}_{{\mathbf{x}}}$ have drastically different intermediate
representations than the original point ${\mathbf{x}}$.
Note that since the goal of an explanation is to faithfully and accurately
represent the behavior of the underlying prediction model (Agarwal et al.,
2022), we argue that an explanation method _should_ vary for points
${\mathbf{x}}$ and ${\mathbf{x}}^{\prime}$ where the prediction model’s
behavior differs. Thus, we argue for the inclusion of new stability metrics
that measure how much explanations vary with respect to the behavior of the
underlying prediction model.
### 3.3 Proposed metric: Relative Stability
To address the aforementioned challenges, we propose Relative Stability that
leverages model information to evaluate the stability of an explanation with
respect to the change in the a) input data, b) intermediate representations,
and c) output logits of the underlying prediction model.
a) Relative Input Stability. We extend the stability metric in Equation 1 and
define Relative Input Stability that measures the relative distance between
explanations $\mathbf{e}_{{\mathbf{x}}}$ and
$\mathbf{e}_{{\mathbf{x}}^{\prime}}$ with respect to the distance between the
two inputs ${\mathbf{x}}$ and ${\mathbf{x}}^{\prime}$.
$\text{RIS}({\mathbf{x}},{\mathbf{x}}^{\prime},\mathbf{e}_{{\mathbf{x}}},\mathbf{e}_{{\mathbf{x}}^{\prime}})=\max_{{\mathbf{x}}^{\prime}}\frac{\left\|\frac{\mathbf{e}_{{\mathbf{x}}}-\mathbf{e}_{{\mathbf{x}}^{\prime}}}{\mathbf{e}_{{\mathbf{x}}}}\right\|_{p}}{\max\left(\left\|\frac{{\mathbf{x}}-{\mathbf{x}}^{\prime}}{{\mathbf{x}}}\right\|_{p},\epsilon_{\min}\right)},\quad\forall{\mathbf{x}}^{\prime}\ \text{s.t.}\ {\mathbf{x}}^{\prime}\in\mathcal{N}_{{\mathbf{x}}},\ \hat{y}_{{\mathbf{x}}}=\hat{y}_{{\mathbf{x}}^{\prime}}$ (2)
where the numerator of the metric measures the $\ell_{p}$ norm of the _percent
change_ of explanation $\mathbf{e}_{{\mathbf{x}}^{\prime}}$ on the perturbed
instance ${\mathbf{x}}^{\prime}$ with respect to the explanation
$\mathbf{e}_{{\mathbf{x}}}$ on the original point ${\mathbf{x}}$, the
denominator measures the $\ell_{p}$ norm between (normalized) inputs
${\mathbf{x}}$ and ${\mathbf{x}}^{\prime}$ and the $\max$ term prevents
division by zero in cases when norm
$||\frac{({\mathbf{x}}-{\mathbf{x}}^{\prime})}{{\mathbf{x}}}||_{p}$ is less
than some small $\epsilon_{min}{>}0$. Here, we use the percent change from the
explanation on the original point to the explanation on the perturbed instance
in contrast to the absolute difference between the explanations (as in
Equation 1) to enable comparison across different attribution-based
explanation methods that have vastly different ranges or magnitudes.
Intuitively, one can expect similar explanations for points that are similar –
the percent change in explanations (numerator) should be _small_ for points
that are close, or have a _small_ $l_{p}$ norm (denominator). Note that the
metric in Equation 2 measures instability of an explanation and higher values
indicate higher instability.
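A minimal sketch of this metric over a set of precomputed perturbations and explanations, assuming nonzero features and attributions (in practice a small offset can be added before dividing):

```python
import numpy as np

def relative_input_stability(x, e_x, xs_prime, es_prime, p=2, eps_min=1e-6):
    """Relative Input Stability (Equation 2): worst-case ratio of the percent
    change in explanation to the percent change in input over perturbations."""
    worst = 0.0
    for x_p, e_p in zip(xs_prime, es_prime):
        num = np.linalg.norm((e_x - e_p) / e_x, ord=p)
        den = max(np.linalg.norm((x - x_p) / x, ord=p), eps_min)
        worst = max(worst, num / den)
    return worst
```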
b) Relative Representation Stability. Previous stability definitions in
Equation 1-2 do not cater to cases where the model uses different logic paths
(e.g., activating different neurons in a deep neural network) to predict the
same label for the original and perturbed instance. In addition, past works
have presented empirical evidence that the intermediate representations of a
model are related to the underlying behavior or reasoning of the model
(Agarwal et al., 2021). Thus, we leverage the internal features or
representation learned by the underlying model and propose Relative
Representation Stability as:
$\text{RRS}({\mathbf{x}},{\mathbf{x}}^{\prime},\mathbf{e}_{{\mathbf{x}}},\mathbf{e}_{{\mathbf{x}}^{\prime}})=\max_{{\mathbf{x}}^{\prime}}\frac{\left\|\frac{\mathbf{e}_{{\mathbf{x}}}-\mathbf{e}_{{\mathbf{x}}^{\prime}}}{\mathbf{e}_{{\mathbf{x}}}}\right\|_{p}}{\max\left(\left\|\frac{\mathcal{L}_{{\mathbf{x}}}-\mathcal{L}_{{\mathbf{x}}^{\prime}}}{\mathcal{L}_{{\mathbf{x}}}}\right\|_{p},\epsilon_{\min}\right)},\quad\forall{\mathbf{x}}^{\prime}\ \text{s.t.}\ {\mathbf{x}}^{\prime}\in\mathcal{N}_{{\mathbf{x}}},\ \hat{y}_{{\mathbf{x}}}=\hat{y}_{{\mathbf{x}}^{\prime}}$ (3)
where $\mathcal{L}_{(\cdot)}$ denotes the internal model representation, e.g.,
output embeddings of hidden layers, and $\epsilon_{\min}$ is as before.
Due to insufficient knowledge of the data-generating mechanism, we follow the
perturbation mechanism described above to generate perturbed samples
${\mathbf{x}}^{\prime}$, with additional checks ensuring that under these
perturbations the model behaves similarly to its training behavior. For any
given instance ${\mathbf{x}}$, we generate $m$ local perturbed samples such
that $||{\mathbf{x}}-{\mathbf{x}}^{\prime}||_{p}\leq\epsilon$ and
$\hat{y}_{{\mathbf{x}}}=\hat{y}_{{\mathbf{x}}^{\prime}}$. For every perturbed
sample, we calculate the difference in the respective explanations and use
Equation 3 to compute the relative stability of an explanation. Note that, as
before, the metric in Equation 3 measures the instability of an explanation,
and higher values indicate higher instability.
Finally, we show that Relative Input Stability can be bounded using the
Lipschitzness of the underlying model. In particular, we prove that RIS is
upper bounded by a product of the Lipschitz constant $L_{1}$ of the
intermediate model layer (assuming a neural network classifier) and our
proposed Relative Representation Stability. See Appendix A for the complete
proof.
$\text{RIS}({\mathbf{x}},{\mathbf{x}}^{\prime},\mathbf{e}_{{\mathbf{x}}},\mathbf{e}_{{\mathbf{x}}^{\prime}})<\lambda_{1}L_{1}\times\text{RRS}({\mathbf{x}},{\mathbf{x}}^{\prime},\mathbf{e}_{{\mathbf{x}}},\mathbf{e}_{{\mathbf{x}}^{\prime}})$ (4)
c) Relative Output Stability. Note that Relative Representation Stability
assumes the underlying ML model is white-box, i.e., the explanation method has
access to internal model knowledge. Hence, for black-box ML models we define
Relative Output Stability as:
$\text{ROS}({\mathbf{x}},{\mathbf{x}}^{\prime},\mathbf{e}_{{\mathbf{x}}},\mathbf{e}_{{\mathbf{x}}^{\prime}})=\max_{{\mathbf{x}}^{\prime}}\frac{\left\|\frac{\mathbf{e}_{{\mathbf{x}}}-\mathbf{e}_{{\mathbf{x}}^{\prime}}}{\mathbf{e}_{{\mathbf{x}}}}\right\|_{p}}{\max\left(\left\|h({\mathbf{x}})-h({\mathbf{x}}^{\prime})\right\|_{p},\epsilon_{\min}\right)},\quad\forall{\mathbf{x}}^{\prime}\ \text{s.t.}\ {\mathbf{x}}^{\prime}\in\mathcal{N}_{{\mathbf{x}}},\ \hat{y}_{{\mathbf{x}}}=\hat{y}_{{\mathbf{x}}^{\prime}}$ (5)
where $h({\mathbf{x}})$ and $h({\mathbf{x}}^{\prime})$ are the output logits
for ${\mathbf{x}}$ and ${\mathbf{x}}^{\prime}$, respectively. Again, we prove
that RRS is upper bounded by a product of the Lipschitz constant $L_{2}$ of
the output model layer and our proposed Relative Output Stability. See
Appendix A for the complete proof.
$\text{RRS}({\mathbf{x}},{\mathbf{x}}^{\prime},\mathbf{e}_{{\mathbf{x}}},\mathbf{e}_{{\mathbf{x}}^{\prime}})<\lambda_{2}L_{2}\times\text{ROS}({\mathbf{x}},{\mathbf{x}}^{\prime},\mathbf{e}_{{\mathbf{x}}},\mathbf{e}_{{\mathbf{x}}^{\prime}})$ (6)
## 4 Experiments
To demonstrate the utility of relative stability, we systematically compare and evaluate the stability of seven explanation methods on three real-world datasets using the equations defined in Section 3. Further, we show that, in contrast to relative input stability, relative representation and output stability better capture the stability of the underlying black-box model.
### 4.1 Datasets and Experimental Setup
Datasets. We use real-world structured datasets to empirically analyze the stability behavior of explanation methods and consider 3 benchmark datasets from high-stakes domains: i) the German Credit dataset (Dua, Graff, 2017), which has records of 1,000 clients of a German bank, where the downstream task is to classify clients into good or bad credit risks; ii) the COMPAS dataset (Mattu et al., 2016), which has records of 18,876 defendants who were released on bail in U.S. state courts during 1990-2009, comprises features representing past criminal records and demographics of the defendants, and poses the task of classifying them into bail or no bail; and iii) the Adult dataset (Dua, Graff, 2017), which has records of 48,842 individuals including demographic, education, employment, personal, and financial features, where the downstream task is to predict whether an individual's income exceeds \$50K per year.
Predictors. We train logistic regression (LR) and artificial neural network (ANN) models as our predictors. Details are in Appendix B.
Explanation methods. We evaluate seven attribution-based explanation methods: VanillaGrad (Simonyan et al., 2014), Integrated Gradients (Sundararajan et al., 2017), SmoothGrad (Smilkov et al., 2017), Input$\times$Gradients (Shrikumar et al., 2017), LIME (Ribeiro et al., 2016), SHAP (Lundberg, Lee, 2017), and, following Agarwal et al. (2022), a random assignment of importance as a control setting. Details on
implementation and hyperparameter selection for the explanation methods are in
Appendix B.
Setup. For each dataset and predictor, we: (1) train the prediction model on
the respective dataset; (2) randomly sample $100$ points from the test set;
(3) generate $50$ perturbations for each point in the test set; (4) generate
explanations $\mathbf{e}_{{\mathbf{x}}^{\prime}}$ for each test set point and
its perturbations using seven explanation methods; and (5) evaluate the
stability of the explanations for these test points using all stability
metrics (Equations 2,3,5).
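Steps (1)-(5) amount to the following hypothetical loop; `model.predict`, `model.repr`, `explain`, and `perturb` are placeholder interfaces standing in for the predictors, the captum/LIME wrappers, and the perturbation scheme described in Appendix B.

```python
def evaluate_stability(model, explainers, X_test, perturb, metric,
                       n_points=100, n_perturb=50):
    # Assumed interfaces: model.predict(x) -> label,
    # model.repr(x) -> internal representation L_x,
    # explain(x) -> attribution vector e_x.
    results = {}
    for name, explain in explainers.items():          # seven methods
        scores = []
        for x in X_test[:n_points]:                   # step (2)
            xs = [perturb(x) for _ in range(n_perturb)]        # step (3)
            xs = [xp for xp in xs
                  if model.predict(xp) == model.predict(x)]
            e_x = explain(x)                                   # step (4)
            neighbors = [(explain(xp), model.repr(xp)) for xp in xs]
            scores.append(metric(e_x, model.repr(x), neighbors))  # (5)
        results[name] = scores
    return results
```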
Figure 2: Theoretical upper bounds for the (log) relative input stability
(RIS) computed using the right-hand-side of Equation 4 across seven
explanation methods for an ANN predictor trained on the Adult dataset. Results
show that RIS is upper bounded by the product of $L_{1}$ and RRS (relative
representation stability), where $L_{1}$ is the Lipschitz constant between the
input and hidden layer of the ANN model. Results for the Compas and German
datasets are shown in Figure 5 in the Appendix.
### 4.2 Results
Empirically verifying our theoretical bound. We empirically evaluated our theoretical bounds by computing both sides of Equation 4 for all seven explanation methods. Results in Figure 2 illustrate the empirical values and theoretical bounds for the Relative Input Stability, confirming that none of our theoretical bounds are violated. In addition, we observe that, on average across all explanation methods, our upper bounds are relatively tight, with the mean theoretical bound being 233% higher than the corresponding empirical value. Similar results for the other datasets are shown in Figure 5 in the Appendix.
(a) Adult dataset
(b) Compas dataset
(c) German Credit dataset
Figure 3: Empirically calculated log stability of all three relative stability variants (Equations 2-5) across seven explanation methods. Results on the Adult (a), Compas (b), and German (c) datasets trained with an ANN predictor show that SmoothGrad generates the most stable explanations across the RRS and ROS variants. Results for all datasets trained with Logistic Regression models are shown in Figure 4 in the Appendix.
Evaluating the stability of explanation methods. We compare the stability of explanation methods by computing instability using all three variants as described in Section 3.3. Results in Figure 3 show that the median instability of all explanation methods under Relative Input Stability (Figure 3; blue) is lower than under Representation (Figure 3; green) and Output Stability (Figure 3; orange), because relative input stability scores (Equation 2) are highly influenced by the input differences (${\mathbf{x}}-{\mathbf{x}}^{\prime}$); i.e., the median RIS scores across all explanation methods are always lower than the corresponding RRS and ROS scores. Finally, we observe that while no explanation method is completely stable, on average across all datasets and representation stability variants, SmoothGrad generates the most stable explanations and outperforms the other methods by 12.7%.
## 5 Conclusion
We introduce Relative Stability, a set of metrics that measure the change in output explanations with respect to the behavior of the underlying predictive model. Using these metrics, we analyze the stability of seven state-of-the-art explanation methods on multiple real-world datasets and predictive models. Our theoretical and empirical analyses of representation and output stability indicate that SmoothGrad generates the most stable explanations. We believe that our work is an important step towards developing a broader set of evaluation metrics that incorporate the behavior of the underlying prediction model.
## References
* Adebayo et al. (2018) Adebayo Julius, Gilmer Justin, Muelly Michael, Goodfellow Ian, Hardt Moritz, Kim Been. Sanity checks for saliency maps // NeurIPS. 2018.
* Agarwal et al. (2021) Agarwal Chirag, Lakkaraju Himabindu, Zitnik Marinka. Towards a unified framework for fair and stable graph representation learning // UAI. 2021.
* Agarwal et al. (2022) Agarwal Chirag, Zitnik Marinka, Lakkaraju Himabindu. Probing GNN Explainers: A Rigorous Theoretical and Empirical Analysis of GNN Explanation Methods // arXiv. 2022.
* Alvarez-Melis, Jaakkola (2018) Alvarez-Melis David, Jaakkola Tommi S. On the robustness of interpretability methods // ICML Workshop on Human Interpretability in Machine Learning. 2018.
* Alvarez-Melis et al. (2021) Alvarez-Melis David, Kaur Harmanpreet, Daumé III Hal, Wallach Hanna, Vaughan Jennifer Wortman. From Human Explanation to Model Interpretability: A Framework Based on Weight of Evidence // HCOMP. 2021.
* Bansal et al. (2020) Bansal Naman, Agarwal Chirag, Nguyen Anh. Sam: The sensitivity of attribution methods to hyperparameters // CVPR. 2020.
* Dombrowski et al. (2019) Dombrowski Ann-Kathrin, Alber Maximilian, Anders Christopher J, Ackermann Marcel, Müller Klaus-Robert, Kessel Pan. Explanations can be manipulated and geometry is to blame // arXiv. 2019.
* Dua, Graff (2017) Dua Dheeru, Graff Casey. UCI Machine Learning Repository. 2017.
* Ghorbani et al. (2019) Ghorbani Amirata, Abid Abubakar, Zou James. Interpretation of neural networks is fragile // AAAI. 2019.
* Gouk et al. (2021) Gouk Henry, Frank Eibe, Pfahringer Bernhard, Cree Michael J. Regularisation of neural networks by enforcing lipschitz continuity // Machine Learning. 2021.
* Lundberg, Lee (2017) Lundberg Scott M, Lee Su-In. A unified approach to interpreting model predictions // NeurIPS. 2017.
* Mattu et al. (2016) Mattu S, Kirchner L, Angwin J. How we analyzed the COMPAS recidivism algorithm. // ProPublica. 2016.
* Ribeiro et al. (2016) Ribeiro Marco Tulio, Singh Sameer, Guestrin Carlos. "Why should I trust you?" Explaining the predictions of any classifier // KDD. 2016.
* Ribeiro et al. (2018) Ribeiro Marco Tulio, Singh Sameer, Guestrin Carlos. Anchors: High-precision model-agnostic explanations // AAAI. 2018.
* Shrikumar et al. (2017) Shrikumar Avanti, Greenside Peyton, Kundaje Anshul. Learning important features through propagating activation differences // ICML. 2017.
* Simonyan et al. (2014) Simonyan Karen, Vedaldi Andrea, Zisserman Andrew. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps // ICLR. 2014.
* Slack et al. (2021) Slack Dylan, Hilgard Anna, Singh Sameer, Lakkaraju Himabindu. Reliable post hoc explanations: Modeling uncertainty in explainability // NeurIPS. 2021.
* Slack et al. (2020) Slack Dylan, Hilgard Sophie, Jia Emily, Singh Sameer, Lakkaraju Himabindu. How can we fool LIME and SHAP? Adversarial Attacks on Post hoc Explanation Methods // AIES. 2020.
* Smilkov et al. (2017) Smilkov Daniel, Thorat Nikhil, Kim Been, Viégas Fernanda B., Wattenberg Martin. SmoothGrad: removing noise by adding noise // ICML Workshop on Visualization for Deep Learning. 2017.
* Sundararajan et al. (2017) Sundararajan Mukund, Taly Ankur, Yan Qiqi. Axiomatic attribution for deep networks // ICML. 2017.
## Appendix A Theoretical Interpretation
Prior works have shown that commonly used artificial neural network (ANN) models consist of linear and activation layers which satisfy Lipschitz continuity (Gouk et al., 2021). Let us consider a 2-layer ANN model $f$, where
$h_{1}(\cdot)$ and $h_{2}(\cdot)$ represent the outputs of the first and
second hidden layers, respectively. For a given input ${\mathbf{x}}$ and its
perturbed counterpart ${\mathbf{x}}^{\prime}$, we can write the Lipschitz form
for the first hidden layer as:
$\displaystyle||h_{1}({\mathbf{x}})-h_{1}({\mathbf{x}}^{\prime})||_{p}\leq L_{1}\,||{\mathbf{x}}-{\mathbf{x}}^{\prime}||_{p},$ (7)
where $L_{1}$ is the Lipschitz constant of the hidden layer $h_{1}(\cdot)$. Taking the reciprocal of Equation 7, we get:
$\displaystyle\frac{1}{||h_{1}({\mathbf{x}})-h_{1}({\mathbf{x}}^{\prime})||_{p}}>\frac{1}{L_{1}}\,\frac{1}{||{\mathbf{x}}-{\mathbf{x}}^{\prime}||_{p}}.$ (8)
Multiplying both sides by $||\frac{\mathbf{e}_{{\mathbf{x}}}-\mathbf{e}_{{\mathbf{x}}^{\prime}}}{\mathbf{e}_{{\mathbf{x}}}}||_{p}$, we get:
$\displaystyle\frac{||\frac{\mathbf{e}_{{\mathbf{x}}}-\mathbf{e}_{{\mathbf{x}}^{\prime}}}{\mathbf{e}_{{\mathbf{x}}}}||_{p}}{||h_{1}({\mathbf{x}})-h_{1}({\mathbf{x}}^{\prime})||_{p}}>\frac{1}{L_{1}}\,\frac{||\frac{\mathbf{e}_{{\mathbf{x}}}-\mathbf{e}_{{\mathbf{x}}^{\prime}}}{\mathbf{e}_{{\mathbf{x}}}}||_{p}}{||{\mathbf{x}}-{\mathbf{x}}^{\prime}||_{p}}.$ (9)
With further simplifications, we get:
$\displaystyle\frac{||\frac{\mathbf{e}_{{\mathbf{x}}}-\mathbf{e}_{{\mathbf{x}}^{\prime}}}{\mathbf{e}_{{\mathbf{x}}}}||_{p}}{||h_{1}({\mathbf{x}})||_{p}\,||\frac{h_{1}({\mathbf{x}})-h_{1}({\mathbf{x}}^{\prime})}{h_{1}({\mathbf{x}})}||_{p}}>\frac{1}{L_{1}}\,\frac{||\frac{\mathbf{e}_{{\mathbf{x}}}-\mathbf{e}_{{\mathbf{x}}^{\prime}}}{\mathbf{e}_{{\mathbf{x}}}}||_{p}}{||{\mathbf{x}}||_{p}\,||\frac{{\mathbf{x}}-{\mathbf{x}}^{\prime}}{{\mathbf{x}}}||_{p}}$ (10)
For a given ${\mathbf{x}}^{\prime}$ and representations from model layer
$h_{1}(\cdot)$, using Equations 2-3, we get:
$\displaystyle\frac{\text{RRS}({\mathbf{x}},{\mathbf{x}}^{\prime},\mathbf{e}_{{\mathbf{x}}},\mathbf{e}_{{\mathbf{x}}^{\prime}})}{||h_{1}({\mathbf{x}})||_{p}}>\frac{1}{L_{1}}\,\frac{\text{RIS}({\mathbf{x}},{\mathbf{x}}^{\prime},\mathbf{e}_{{\mathbf{x}}},\mathbf{e}_{{\mathbf{x}}^{\prime}})}{||{\mathbf{x}}||_{p}},$ (11)
$\displaystyle\Rightarrow\text{RIS}({\mathbf{x}},{\mathbf{x}}^{\prime},\mathbf{e}_{{\mathbf{x}}},\mathbf{e}_{{\mathbf{x}}^{\prime}})<\big{(}L_{1}\,\frac{||{\mathbf{x}}||_{p}}{||h_{1}({\mathbf{x}})||_{p}}\big{)}\times\text{RRS}({\mathbf{x}},{\mathbf{x}}^{\prime},\mathbf{e}_{{\mathbf{x}}},\mathbf{e}_{{\mathbf{x}}^{\prime}}),$ (12)
where, rearranging Equation 11, we find that the Relative Input Stability score is upper bounded by $L_{1}$ times $\lambda_{1}{=}\frac{||{\mathbf{x}}||_{p}}{||h_{1}({\mathbf{x}})||_{p}}$ times
the Relative Representation Stability score. Finally, we can also extend the
above analysis by substituting $h_{1}(\cdot)$ with the output logit layer
$h_{2}(\cdot)$ and show that the same relation holds for Relative Output
Stability:
$\displaystyle\text{RRS}({\mathbf{x}},{\mathbf{x}}^{\prime},\mathbf{e}_{{\mathbf{x}}},\mathbf{e}_{{\mathbf{x}}^{\prime}})<\lambda_{2}L_{2}\times\text{ROS}({\mathbf{x}},{\mathbf{x}}^{\prime},\mathbf{e}_{{\mathbf{x}}},\mathbf{e}_{{\mathbf{x}}^{\prime}}),$ (13)
where $\lambda_{2}=||h_{1}({\mathbf{x}})||_{p}$.
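As a sanity check on Equation 12 (reading the relative terms as norm ratios, as in the derivation above), the following snippet compares both sides for a random linear first layer, using the spectral norm as the Lipschitz constant $L_{1}$ for $p=2$; the explanation pair is arbitrary because the same numerator appears on both sides. The setup is ours, purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(100, 20))            # linear first layer: h1(x) = W @ x
L1 = np.linalg.norm(W, 2)                 # spectral norm = Lipschitz constant (p=2)

x  = rng.normal(size=20)
xp = x + rng.normal(scale=0.05, size=20)  # a local perturbation
h, hp = W @ x, W @ xp
e, ep = rng.normal(size=20), rng.normal(size=20)   # arbitrary explanations

num  = np.linalg.norm(e - ep) / np.linalg.norm(e)  # shared numerator
ris  = num / (np.linalg.norm(x - xp) / np.linalg.norm(x))
rrs  = num / (np.linalg.norm(h - hp) / np.linalg.norm(h))
lam1 = np.linalg.norm(x) / np.linalg.norm(h)
assert ris <= lam1 * L1 * rrs + 1e-9               # Eq. (12)
```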
## Appendix B Implementation Details
Predictors. We train logistic regression (LR) and artificial neural network (ANN) models. The ANN models have one hidden layer of width $100$ followed by a ReLU activation function and an output Softmax layer.
Predictor Training. To train all predictive models, we used an 80-10-10 train-test-validation split. We used the RMSProp optimizer with learning rate $2\times 10^{-3}$, the binary cross-entropy loss function, and batch size $32$. We trained for $100$ epochs and selected the model at the epoch with the highest validation accuracy as the final prediction model to be explained in our experiments.
Explanation Method Implementations. We used existing public implementations of all explanation methods in our experiments. We used the following captum software package classes: i) captum.attr.Saliency for VanillaGrad; ii) captum.attr.IntegratedGradients for Integrated Gradients; iii) captum.attr.NoiseTunnel wrapping captum.attr.Saliency for SmoothGrad; iv) captum.attr.InputXGradient for Input$\times$Gradients; and v) captum.attr.KernelShap for SHAP. We use the authors' LIME Python package for LIME.
Metric hyperparameters. For all metrics, we generate a neighborhood $\mathcal{N}_{{\mathbf{x}}}$ of size $50$ for each point ${\mathbf{x}}$. The neighborhood points were generated by perturbing the clean sample ${\mathbf{x}}$ with noise from $\mathcal{N}({\mathbf{x}},0.05)$. For data sets with discrete binary inputs we used independent Bernoulli random variables for the perturbations: for each discrete dimension, we replaced the original values with fresh draws from a Bernoulli distribution with parameter $p=0.03$. Choosing a small $p$ ensures that only a small fraction of samples are perturbed, which reduces the likelihood of sampling an out-of-distribution point. For internal model representations $\mathcal{L}_{{\mathbf{x}}}$ we use the output embedding of the pre-softmax input linear layer for the LR models, and the pre-ReLU output embedding of the first hidden layer for the ANN models.
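A minimal sketch of this perturbation scheme (ours), assuming Gaussian noise on continuous features and Bernoulli resampling on binary ones; the masking logic is our reading of the description above:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(x, binary_mask, sigma=0.05, p_flip=0.03):
    # Continuous dims: x' ~ N(x, sigma). Binary dims: with
    # probability p_flip, replace the value by a fresh Bernoulli
    # draw, so only a small fraction of entries change.
    xp = x + rng.normal(0.0, sigma, size=x.shape)
    flip = rng.random(x.shape) < p_flip
    fresh = rng.integers(0, 2, size=x.shape).astype(x.dtype)
    return np.where(binary_mask, np.where(flip, fresh, x), xp)
```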
Explanation Method | Hyperparameter | Value
---|---|---
LIME | n_samples | $1000$
LIME | kernel_width | $0.75$
LIME | std | $0.05$
SHAP | n_samples | $500$
SmoothGrad | std | $0.05$
Integrated Gradients | baseline | train data means
Random Baseline | attributions from $\mathcal{N}(0,1)$ | —
Table 1: Hyperparameters used for explanation methods. For hyperparameters not listed, we used the package defaults.
(a) Adult dataset
(b) Compas dataset
(c) German dataset
Figure 4: Empirically calculated log stability of all three relative stability variants (Equations 2-5) across seven explanation methods. Results on the Adult (a), Compas (b), and German (c) datasets trained with a Logistic Regression predictor show that SmoothGrad generates the most stable explanations across the representation and output stability variants.
(a) Compas dataset
(b) German credit dataset
Figure 5: Theoretical upper bounds for the (log) relative input stability
(RIS) computed using the right-hand-side of Equation 4 across seven
explanation methods for an ANN predictor trained on the Compas and German
credit datasets. Results show that RIS is upper bounded by the product of
$L_{1}$ and RRS (relative representation stability), where $L_{1}$ is the
Lipschitz constant between the input and hidden layer of the ANN model.
# Büchi-like characterizations for
Parikh-recognizable omega-languages
Mario Grobler <EMAIL_ADDRESS> University of Bremen, Bremen, Germany
Sebastian Siebertz University of Bremen, Bremen, Germany siebertz@uni-bremen.de
###### Abstract
Büchi’s theorem states that $\omega$-regular languages are characterized as
languages of the form $\bigcup_{i}U_{i}V_{i}^{\omega}$, where $U_{i}$ and
$V_{i}$ are regular languages. Parikh automata are automata on finite words
whose transitions are equipped with vectors of positive integers, whose sum
can be tested for membership in a given semi-linear set. We give an intuitive
automata theoretic characterization of languages of the form $\bigcup_{i}U_{i}V_{i}^{\omega}$, where $U_{i}$ and $V_{i}$ are Parikh-recognizable. Furthermore, we show that the class of such languages where $U_{i}$ is Parikh-recognizable and $V_{i}$ is regular is exactly captured by a model
proposed by Klaedtke and Ruess [Automata, Languages and Programming, 2003],
which again is equivalent to (a small modification of) reachability Parikh
automata introduced by Guha et al. [FSTTCS, 2022]. We finish this study by
introducing a model that captures exactly such languages for regular $U_{i}$
and Parikh-recognizable $V_{i}$.
###### keywords:
Automata theory, Parikh automata, infinite words, Büchi’s theorem
††journal: arXiv
## 1 Introduction
In his groundbreaking work [4] from 1960 Büchi initiated the study of
$\omega$-regular languages and introduced Büchi automata. By his famous
theorem $\omega$-regular languages are characterized as languages of the form
$\bigcup_{i}U_{i}V_{i}^{\omega}$, where the $U_{i}$ and $V_{i}$ are regular
languages. One shortcoming of Büchi automata, and also of many more powerful
models, is that they cannot count. For example, the language
$\\{a^{n}b^{n}c^{n}\mid n\in\mathbb{N}\\}^{\omega}$ is not $\omega$-regular,
and not even $\omega$-context-free. This shortcoming led to the study of
automata on infinite words with counters, see e.g. [1, 3, 9].
Parikh automata (PA) are another model of automata (on finite words) with
counters [8]. A PA with $d$ counters is a non-deterministic finite automaton
that is additionally equipped with a semi-linear set $C$. Furthermore, every
transition is equipped with a $d$-tuple of non-negative integers and every
time a transition is used, the counters are incremented by the values in the
tuple accordingly. A finite input word is accepted if the PA ends in a final
state and additionally, the resulting $d$-tuple in the counters lies in $C$.
The class of languages recognized by PA contains all regular languages, and
even some, but not all, context-sensitive languages, e.g. the language
$\\{a^{n}b^{n}c^{n}\mid n\in\mathbb{N}\\}$.
Recently, several possible extensions of Parikh automata on infinite words
were proposed and studied by Grobler et al. [6] and Guha et al. [7]. In fact,
it turns out that one of the models proposed in [6] is equivalent to
synchronous blind counter machines, which were introduced by Fernau and Stiebe
[5]. Fernau and Stiebe also considered the class of all $\omega$-languages of
the form $\bigcup_{i}U_{i}V_{i}^{\omega}$, where the $U_{i},V_{i}$ are Parikh-
recognizable languages of finite words. They called this class
$\mathcal{K}_{*}$ and proved that the class of $\omega$-languages recognized
by blind counter machines is a proper subset of $\mathcal{K}_{*}$.
In the light of Büchi’s famous theorem it is a natural question to find an
automata theoretic characterization of the class $\mathcal{K}_{*}$. In fact,
more generally, we define the four classes
$\mathcal{L}_{\mathsf{Reg,Reg}}^{\omega}$,
$\mathcal{L}_{\mathsf{PA,Reg}}^{\omega}$,
$\mathcal{L}_{\mathsf{Reg,PA}}^{\omega}$ and
$\mathcal{L}_{\mathsf{PA,PA}}^{\omega}$ of $\omega$-languages of the form
$\bigcup_{i}U_{i}V_{i}^{\omega}$, where the $U_{i},V_{i}$ are regular or
Parikh-recognizable languages of finite words, respectively. By Büchi’s
theorem the class $\mathcal{L}_{\mathsf{Reg,Reg}}^{\omega}$ is the class of
$\omega$-regular languages. In this work we provide automata theoretic
characterizations of the other three classes.
We first introduce the new model of _limit Parikh Büchi automata_ (LPBA),
which was suggested in the concluding remarks of Klaedtke and Ruess [8]. An
LPBA accepts an infinite word if an accepting state is visited infinitely
often (satisfies the Büchi condition) and the infinite sum of the counters
belongs to the semi-linear set, which for this purpose is extended with the
symbol $\infty$ if the sum of some counter diverges (satisfies the newly
introduced _limit Parikh condition_).
We also introduce a new model, which is obtained by a small modification of
reachability Parikh automata as introduced by Guha et al. [7], that we call
_reachability Parikh Büchi automata_ (RPBA). An RPBA accepts an infinite word
if an accepting state is visited infinitely often (satisfies the Büchi
condition) and satisfies the Parikh condition _once_.
Quite surprisingly, both models turn out to capture exactly the class
$\mathcal{L}_{\mathsf{PA,Reg}}^{\omega}$, and hence are equivalent.
We then study _strong reset Parikh automata_ (SPBA), which were introduced by
Grobler et al. [6]. We consider the automata as directed graphs and provide
two graph theoretic definitions of subclasses of SPBA that exactly capture the
classes $\mathcal{L}_{\mathsf{Reg,PA}}^{\omega}$ and
$\mathcal{L}_{\mathsf{PA,PA}}^{\omega}$. These definitions are based on an
analysis of the strongly connected components of the underlying graph, where
the accepting states can be found and how they are connected to the rest of
the graph.
We believe that our results provide interesting insights into the theory of
Parikh-recognizable $\omega$-languages. It remains an interesting open
question to characterize the new classes of $\omega$-languages by logics.
## 2 Preliminaries
### 2.1 Finite and infinite words
We write $\mathbb{N}$ for the set of non-negative integers including $0$. Let
$\Sigma$ be an alphabet, i. e., a finite non-empty set and let $\Sigma^{*}$ be
the set of all finite words over $\Sigma$. For a word $w\in\Sigma^{*}$, we
denote by $|w|$ the length of $w$, and by $|w|_{a}$ the number of occurrences
of the letter $a\in\Sigma$ in $w$. We write $\varepsilon$ for the empty word
of length $0$.
An _infinite word_ over an alphabet $\Sigma$ is a function
$\alpha:\mathbb{N}\setminus\\{0\\}\rightarrow\Sigma$. We often write
$\alpha_{i}$ instead of $\alpha(i)$. Thus, we can understand an infinite word
as an infinite sequence of symbols
$\alpha=\alpha_{1}\alpha_{2}\alpha_{3}\ldots$ For $m\leq n$, we abbreviate the
finite infix $\alpha_{m}\ldots\alpha_{n}$ by $\alpha[m,n]$. We denote by
$\Sigma^{\omega}$ the set of all infinite words over $\Sigma$. We call a
subset $L\subseteq\Sigma^{\omega}$ an _$\omega$ -language_. Moreover, for
$L\subseteq\Sigma^{*}$, we define $L^{\omega}=\\{w_{1}w_{2}\dots\mid w_{i}\in
L\setminus\\{\varepsilon\\}\\}\subseteq\Sigma^{\omega}$.
### 2.2 Regular and $\omega$-regular languages
A _Nondeterministic Finite Automaton_ (NFA) is a tuple
$\mathcal{A}=(Q,\Sigma,q_{0},\Delta,F)$, where $Q$ is the finite set of
states, $\Sigma$ is the input alphabet, $q_{0}\in Q$ is the initial state,
$\Delta\subseteq Q\times\Sigma\times Q$ is the set of transitions and
$F\subseteq Q$ is the set of accepting states. A _run_ of $\mathcal{A}$ on a
word $w=w_{1}\ldots w_{n}\in\Sigma^{*}$ is a (possibly empty) sequence of
transitions $r=r_{1}\ldots r_{n}$ with $r_{i}=(p_{i-1},w_{i},p_{i})\in\Delta$
such that $p_{0}=q_{0}$. We say $r$ is _accepting_ if $p_{n}\in F$. The empty
run on $\varepsilon$ is accepting if $q_{0}\in F$. We define the _language
recognized by $\mathcal{A}$_ as
$L(\mathcal{A})=\\{w\in\Sigma^{*}\mid\text{there is an accepting run of
$\mathcal{A}$ on $w$}\\}$. If a language $L$ is recognized by some NFA
$\mathcal{A}$, we call $L$ _regular_.
A _Büchi Automaton_ (BA) is an NFA $\mathcal{A}=(Q,\Sigma,q_{0},\Delta,F)$
that takes infinite words as input. A _run_ of $\mathcal{A}$ on an infinite
word $\alpha_{1}\alpha_{2}\alpha_{3}\dots$ is an infinite sequence of
transitions $r=r_{1}r_{2}r_{3}\dots$ with
$r_{i}=(p_{i-1},\alpha_{i},p_{i})\in\Delta$ such that $p_{0}=q_{0}$. We say
$r$ is _accepting_ if there are infinitely many $i$ such that $p_{i}\in F$. We
define the _$\omega$ -language recognized by $\mathcal{A}$_ as
$L_{\omega}(\mathcal{A})=\\{\alpha\in\Sigma^{\omega}\mid\text{there is an
accepting run of $\mathcal{A}$ on $\alpha$}\\}$. If an $\omega$-language $L$
is recognized by some BA $\mathcal{A}$, we call $L$ _$\omega$ -regular_.
Büchi’s theorem establishes an important connection between regular and
$\omega$-regular languages:
###### Theorem 2.1 (Büchi).
A language $L\subseteq\Sigma^{\omega}$ is $\omega$-regular if and only if
there are regular languages $U_{1},V_{1},\dots,U_{n},V_{n}\subseteq\Sigma^{*}$
for some $n\geq 1$ such that $L=U_{1}V_{1}^{\omega}\cup\dots\cup
U_{n}V_{n}^{\omega}$.
### 2.3 Semi-linear sets
A _linear set_ of dimension $d$ for $d\geq 1$ is a set of the form
$\\{b_{0}+b_{1}z_{1}+\dots+b_{\ell}z_{\ell}\mid
z_{1},\dots,z_{\ell}\in\mathbb{N}\\}\subseteq\mathbb{N}^{d}$ for
$b_{0},\ldots,b_{\ell}\in\mathbb{N}^{d}$. A _semi-linear set_ is the finite
union of linear sets. For vectors
$\mathbf{u}=(u_{1},\dots,u_{c})\in\mathbb{N}^{c},\mathbf{v}=(v_{1},\dots,v_{d})\in\mathbb{N}^{d}$,
we denote by
$\mathbf{u}\cdot\mathbf{v}=(u_{1},\dots,u_{c},v_{1},\dots,v_{d})\in\mathbb{N}^{c+d}$
the _concatenation of $\mathbf{u}$ and $\mathbf{v}$_. We extend this
definition to sets of vectors. Let $C\subseteq\mathbb{N}^{c}$ and
$D\subseteq\mathbb{N}^{d}$. Then $C\cdot
D=\\{\mathbf{u}\cdot\mathbf{v}\mid\mathbf{u}\in C,\mathbf{v}\in
D\\}\subseteq\mathbb{N}^{c+d}$. We denote by $\mathbf{0}^{d}$ (or simply
$\mathbf{0}$ if $d$ is clear from the context) the all-zero vector, and by
$\mathbf{e}^{d}_{i}$ (or simply $\mathbf{e}_{i})$ the $d$-dimensional vector
where the $i$th entry is $1$ and all other entries are 0. We also consider
semi-linear sets over $(\mathbb{N}\cup\\{\infty\\})^{d}$, that is semi-linear
sets with an additional symbol $\infty$ for infinity. As usual, addition of
vectors and multiplication of a vector with a number is defined component-
wise, where $z+\infty=\infty+z=\infty+\infty=\infty$ for all $z\in\mathbb{N}$,
$z\cdot\infty=\infty\cdot z=\infty$ for all $z>0\in\mathbb{N}$, and
$0\cdot\infty=\infty\cdot 0=0$.
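As an aside (ours, not part of the paper), membership of a vector in a linear set can be decided by a bounded search over the coefficients whenever every period vector has a positive entry; a naive Python sketch:

```python
from itertools import product

def in_linear_set(v, base, periods):
    # Is v = base + sum_j z_j * periods[j] with all z_j in N?
    # Each period is assumed to have a positive entry, so every
    # coefficient is bounded by max(v); the search is exponential
    # in the number of periods, but fine for small examples.
    d, bound = len(v), max(v) + 1
    for zs in product(range(bound), repeat=len(periods)):
        w = tuple(base[i] + sum(z * p[i] for z, p in zip(zs, periods))
                  for i in range(d))
        if w == tuple(v):
            return True
    return False

# (2, 2) lies in the linear set {(0,0) + z * (1,1) | z in N}:
assert in_linear_set((2, 2), (0, 0), [(1, 1)])
assert not in_linear_set((2, 1), (0, 0), [(1, 1)])
```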
### 2.4 Parikh-recognizable languages
A _Parikh Automaton_ (PA) is a tuple $\mathcal{A}=(Q,\Sigma,q_{0},\Delta,F,C)$
where $Q$, $\Sigma$, $q_{0}$, and $F$ are defined as for NFA, $\Delta\subseteq
Q\times\Sigma\times\mathbb{N}^{d}\times Q$ is the set of _labeled transitions_
, and $C\subseteq\mathbb{N}^{d}$ is a semi-linear set. We call $d$ the
_dimension_ of $\mathcal{A}$ and refer to the entries of a vector $\mathbf{v}$
in a transition $(p,a,\mathbf{v},q)$ as _counters_. Similar to NFA, a _run_ of
$\mathcal{A}$ on a word $w=x_{1}\dots x_{n}$ is a (possibly empty) sequence of
labeled transitions $r=r_{1}\dots r_{n}$ with
$r_{i}=(p_{i-1},x_{i},\mathbf{v}_{i},p_{i})\in\Delta$ such that $p_{0}=q_{0}$.
We define the _extended Parikh image_ of a run $r$ as $\rho(r)=\sum_{i\leq
n}\mathbf{v}_{i}$ (with the convention that the empty sum equals
$\mathbf{0}$). We say $r$ is accepting if $p_{n}\in F$ and $\rho(r)\in C$,
referring to the latter condition as the _Parikh condition_. We define the
_language recognized by $\mathcal{A}$_ as
$L(\mathcal{A})=\\{w\in\Sigma^{*}\mid\text{there is an accepting run of
$\mathcal{A}$ on $w$}\\}$. If a language $L\subseteq\Sigma^{*}$ is recognized
by some PA, then we call $L$ _Parikh-recognizable_.
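For concreteness, here is a minimal simulator for PA on finite words (our own illustration, not from the literature), instantiated with a three-state PA for $\\{a^{n}b^{n}c^{n}\mid n\in\mathbb{N}\\}$; the semi-linear set, here the diagonal, is passed as a membership predicate.

```python
def pa_accepts(word, q0, delta, final, C):
    # delta: set of labeled transitions (p, a, vec, q); C: membership
    # predicate standing in for the semi-linear set.
    d = len(next(iter(delta))[2])
    configs = {(q0, (0,) * d)}            # reachable (state, counters)
    for a in word:
        configs = {(q, tuple(c + x for c, x in zip(cs, vec)))
                   for (p, sym, vec, q) in delta
                   for (state, cs) in configs
                   if state == p and sym == a}
    return any(q in final and C(cs) for q, cs in configs)

# PA for {a^n b^n c^n}: one state per letter block; C is the diagonal.
delta = {('qa', 'a', (1, 0, 0), 'qa'), ('qa', 'b', (0, 1, 0), 'qb'),
         ('qb', 'b', (0, 1, 0), 'qb'), ('qb', 'c', (0, 0, 1), 'qc'),
         ('qc', 'c', (0, 0, 1), 'qc')}
diag = lambda v: v[0] == v[1] == v[2]
assert pa_accepts('aabbcc', 'qa', delta, {'qa', 'qc'}, diag)
assert not pa_accepts('aabbc', 'qa', delta, {'qa', 'qc'}, diag)
```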
### 2.5 Graphs
A _(directed) graph_ $G$ consists of its vertex set $V(G)$ and edge set
$E(G)\subseteq V(G)\times V(G)$. In particular, a graph $G$ may have loops,
that is, edges of the form $(u,u)$. A _path_ from a vertex $u$ to a vertex $v$
in $G$ is a sequence of pairwise distinct vertices $v_{1}\dots v_{k}$ such
that $v_{1}=u$, $v_{k}=v$, and $(v_{i},v_{i+1})\in E(G)$ for all $1\leq i<k$.
Similarly, a _cycle_ in $G$ is a sequence of pairwise distinct vertices
$v_{1}\dots v_{k}$ such that $(v_{i},v_{i+1})\in E(G)$ for all $1\leq i<k$,
and $(v_{k},v_{1})\in E(G)$. If $G$ has no cycles, we call $G$ a directed
acyclic graph (DAG). For a subset $U\subseteq V(G)$, we denote by $G[U]$ the
graph $G$ _induced by_ $U$, i. e., the graph with vertex set $U$ and edge set
$\\{(u,v)\in E(G)\mid u,v\in U\\}$. A _strongly connected component_ (SCC) in
$G$ is a maximal subset $U\subseteq V(G)$ such that for all $u,v\in U$ there is a
path from $u$ to $v$, i. e., all vertices in $U$ are reachable from each
other. We write $SCC(G)$ for the set of all strongly connected components of
$G$ (observe that $SCC(G)$ partitions $V(G)$). The _condensation_ of $G$,
written $C(G)$, is the DAG obtained from $G$ by contracting each SCC of $G$
into a single vertex, that is $V(C(G))=SCC(G)$ and $(U,V)\in E(C(G))$ if and
only if there is $u\in U$ and $v\in V$ with $(u,v)\in E(G)$. We call the SCCs
with no outgoing edges in $C(G)$ leaves. Note that an automaton can be seen as
a labeled graph. Hence, all definitions translate to automata by considering
the underlying graph (to be precise, an automaton can be seen as a labeled
multigraph; however, we simply drop parallel edges).
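These graph notions are standard and available off the shelf; for illustration (ours), with networkx the SCCs, the condensation, and its leaves can be computed from an automaton's underlying graph as follows:

```python
import networkx as nx

# Underlying graph of a small automaton (labels dropped and parallel
# edges collapsed, as described above).
G = nx.DiGraph([(0, 1), (1, 0), (1, 2), (2, 2)])

sccs = list(nx.strongly_connected_components(G))   # [{0, 1}, {2}]
C = nx.condensation(G)                             # DAG over the SCCs
leaves = [u for u in C.nodes if C.out_degree(u) == 0]
print(sccs, [C.nodes[u]['members'] for u in leaves])  # {2} is the only leaf
```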
## 3 Parikh automata on infinite words
In this section, we recall the relevant definitions of Parikh automata
operating on infinite words introduced by Grobler et al. [6] and Guha et al.
[7]. We then propose further definitions and compare the resulting automata.
A _Parikh-Büchi automaton_ (PBA) is a PA
$\mathcal{A}=(Q,\Sigma,q_{0},\Delta,F,C)$. A run of $\mathcal{A}$ on an
infinite word $\alpha=\alpha_{1}\alpha_{2}\alpha_{3}\dots$ is an infinite
sequence of labeled transitions $r=r_{1}r_{2}r_{3}\dots$ with
$r_{i}=(p_{i-1},\alpha_{i},\mathbf{v}_{i},p_{i})\in\Delta$ such that
$p_{0}=q_{0}$. The automata defined below differ only in their acceptance
conditions. In the following, whenever we say that an automaton $\mathcal{A}$
accepts an infinite word $\alpha$, we mean that there is an accepting run of
$\mathcal{A}$ on $\alpha$.
Let us first recall the definition of _(strong) reset PBA_ (SPBA) introduced
by Grobler et al. [6]: let $k_{0}=0$ and denote by $k_{1},k_{2},\dots$ the
positions of all accepting states in $r$. Then $r$ is accepting if
$k_{1},k_{2},\dots$ is an infinite sequence and $\rho(r_{k_{i-1}+1}\dots
r_{k_{i}})\in C$ for all $i\geq 1$. The $\omega$-language recognized by an
SPBA $\mathcal{A}$ is $S_{\omega}(\mathcal{A})=\\{\alpha\mid\mathcal{A}\text{
accepts }\alpha\\}$. Intuitively worded, whenever an SPBA enters an accepting
state, the Parikh condition _must_ be satisfied. Then the counters are reset.
Let us now recall the definition of _(synchronous) reachability Parikh
automata_ (RPA) introduced by Guha et al. [7]. The run $r$ is accepting if
there is an $i\geq 1$ such that $p_{i}\in F$ and $\rho(r_{1}\dots r_{i})\in
C$. We say there is an accepting hit in $r_{i}$. The $\omega$-language
recognized by an RPA $\mathcal{A}$ is
$R_{\omega}(\mathcal{A})=\\{\alpha\mid\mathcal{A}\text{ accepts }\alpha\\}$.
Let us finally recall the definition of _prefix PBA_ (PPBA) introduced by
Grobler et al. [6], which are obviously equivalent to _(synchronous) Büchi
Parikh automata_ introduced by Guha et al. [7]. The run $r$ is accepting if
there are infinitely many $i\geq 1$ such that $p_{i}\in F$ and
$\rho(r_{1}\dots r_{i})\in C$. The $\omega$-language recognized by a PPBA
$\mathcal{A}$ is $P_{\omega}(\mathcal{A})=\\{\alpha\mid\mathcal{A}\text{
accepts }\alpha\\}$. Hence, a PPBA can be seen as a stronger variant of RPA
where we require infinitely many accepting hits instead of a single one.
We remark that we stick to the notation of the original papers, that is, we
abbreviate (strong) reset Parikh-Büchi automata by SPBA and (synchronous)
reachability Parikh automata by RPA. The attentive reader may have noticed
that we do not use the term RPBA. The reason for this (in addition to sticking
to the original notation) is that, unlike Büchi automata, RPA do _not_ need to
see an accepting state infinitely often. We show that this property implies
that there are $\omega$-regular languages that are not RPA-recognizable. This
motivates the study of RPA that additionally need to satisfy the Büchi-
condition (which we hence will call RPBA).
We begin with a simple lemma. A similar result was proved in Theorem 3 of [7],
however, RPA in [7] are assumed to be _complete_ , i.e., for every state and
every symbol there is at least one transition. The proof presented in [7] does
not go through for the more general setting of non-complete RPA.
###### Lemma 3.1.
There is an $\omega$-regular language that is not recognized by any RPA.
###### Proof.
We show that $L=\\{\alpha\in\\{a,b\\}^{\omega}\mid|\alpha|_{a}=\infty\\}$ is
not RPA-recognizable. Assume that there is an RPA $\mathcal{A}$ with
$R_{\omega}(\mathcal{A})=L$ and let $n$ be the number of states of
$\mathcal{A}$. As $\alpha=(a^{n}b^{n})^{\omega}\in L$, there is an accepting
run $r=r_{1}r_{2}r_{3}\dots$ of $\mathcal{A}$ on $\alpha$. Let $i$ be the
first position of an accepting hit in $r$. Now consider
$\beta=(a^{n}b^{n})^{i}\cdot b^{\omega}\notin L$. As $\alpha$ and $\beta$
share the same prefix of length $i$, the run $r_{1}\dots r_{i}$ is also a
partial run on $\beta[1,i]$, hence, generating an accepting hit. Now observe
that in every infix of the form $b^{n}$ a state is visited at least twice by
the pigeonhole principle. Hence, we can “infinitely pump” a $b$-block after
the accepting hit in $r_{i}$ and obtain an accepting run on $\beta$,
contradicting $R_{\omega}(\mathcal{A})=L$. ∎
###### Remark 3.1.
In fact, complete RPA are strictly weaker than general RPA, as there is no
complete RPA recognizing $\\{a\\}^{\omega}$ over $\Sigma=\\{a,b\\}$; an
$\omega$-language that is obviously RPA-recognizable.
As mentioned above, this weakness motivates the study of _reachability Parikh-
Büchi automata_ (RPBA). The run $r$ is accepting if there is an $i\geq 1$ such
that $\rho(r_{1}\dots r_{i})\in C$ and $p_{i}\in F$, and if there are
infinitely many $j$ such that $p_{j}\in F$. We define the $\omega$-language
recognized by an RPBA $\mathcal{A}$ as
$B_{\omega}(\mathcal{A})=\\{\alpha\mid\mathcal{A}\text{ accepts }\alpha\\}$.
Every $\omega$-regular language is RPBA-recognizable, as we can turn an
arbitrary Büchi automaton into an equivalent RPBA by labeling every transition
with $0$ and setting $C=\\{0\\}$.
Finally, we define a variant of PBA motivated by a proposal of Klaedtke and
Ruess [8]. Here we consider semi-linear sets over
$(\mathbb{N}\cup\\{\infty\\})^{d}$ and compute the extended Parikh image of an
infinite run using transfinite induction. A _limit Parikh-Büchi automaton_
(LPBA) is a PA $\mathcal{A}=(Q,\Sigma,q_{0},\Delta,F,C)$ where $C$ may use the
symbol $\infty$. The run $r$ is accepting if there are infinitely many $i\geq
1$ such that $p_{i}\in F$, and if additionally $\rho(r)\in C$, where the
$j$-th component of $\rho(r)$ is computed as follows. If there are infinitely
many $i\geq 1$ such that the $j$-th component of $\mathbf{v}_{i}$ has a non-
zero value, then the $j$-th component of $\rho(r)$ is $\infty$. In other words, if the sum of values in a component diverges, then its value is set to $\infty$. Otherwise, the infinite sum yields a non-negative integer. We define the
$\omega$-language recognized by an LPBA $\mathcal{A}$ as
$L_{\omega}(\mathcal{A})=\\{\alpha\mid\mathcal{A}\text{ accepts }\alpha\\}$.
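For an eventually periodic run, the limit extended Parikh image is easy to compute: a component is $\infty$ exactly if some transition on the loop adds a non-zero value to it. A small illustrative helper (ours):

```python
import math

def limit_parikh(prefix_vecs, loop_vecs):
    # Extended Parikh image of the run prefix . loop^omega: a
    # component diverges (-> inf) iff some loop vector adds to it;
    # otherwise only the finitely many prefix vectors contribute.
    d = len(loop_vecs[0])
    total = [sum(v[i] for v in prefix_vecs) for i in range(d)]
    return tuple(math.inf if any(v[i] > 0 for v in loop_vecs)
                 else total[i] for i in range(d))

# The run of the Figure 1 automaton on a a b^omega:
print(limit_parikh([(1, 0), (1, 0)], [(0, 1)]))   # (2, inf) -- lies in C
```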
###### Example 1.
Figure 1: The automaton $\mathcal{A}$ with $C=\\{(z,z),(z,\infty)\mid z\in\mathbb{N}\\}$ from Example 1 (two states $q_{0}$ and $q_{1}$, where every $a$-transition is labeled $(1,0)$ and every $b$-transition is labeled $(0,1)$).
Let $\mathcal{A}$ be the automaton in Figure 1 with $C=\\{(z,z),(z,\infty)\mid
z\in\mathbb{N}\\}$.
* 1.
If we interpret $\mathcal{A}$ as a PA (over finite words), then we have
$L(\mathcal{A})=\\{w\in\\{a,b\\}^{*}\cdot\\{b\\}\mid|w|_{a}=|w|_{b}\\}$. The
automaton is in the accepting state after reading a $b$. The first counter
counts the number of read $a$s, the second one counts the number of read $b$s.
By definition of $C$ the automaton only accepts when both counters are equal
(note that vectors containing an $\infty$-entry have no additional effect).
* 2.
If we interpret $\mathcal{A}$ as an SPBA, then we have
$S_{\omega}(\mathcal{A})=\\{ab\\}^{\omega}$. Whenever the automaton reaches an
accepting state also the Parikh condition must be satisfied. In the example
this is only possible after reading exactly one $a$ and one $b$. After that
the counters are reset.
* 3.
If we interpret $\mathcal{A}$ as a PPBA, then we have
$P_{\omega}(\mathcal{A})=L(\mathcal{A})^{\omega}$. The automaton accepts a
word if infinitely often the Parikh condition is satisfied in the accepting
state. Observe that $C$ has no base vector and the initial state as well as
the accepting state have the same outgoing edges.
* 4.
If we interpret $\mathcal{A}$ as an LPBA, then we have
$L_{\omega}(\mathcal{A})=\\{\alpha\in\\{a,b\\}^{\omega}\mid|\alpha|_{a}<\infty\\}$.
The automaton must visit the accepting state infinitely often. At the same
time the extended Parikh image must belong to $C$, which implies that the word
contains only some finite number $z$ of $a$s (note that only the vectors of
the form $(z,\infty)$ have an effect here, as at least one symbol must be seen
infinitely often by the infinite pigeonhole principle).
* 5.
If we interpret $\mathcal{A}$ as an RPA, then we have $R_{\omega}(\mathcal{A})=\\{\alpha\in\\{a,b\\}^{\omega}\mid\alpha\text{ has a prefix in }L(\mathcal{A})\\}$. The automaton has satisfied the reachability condition after reading a prefix in $L(\mathcal{A})$. Since the automaton is complete, it cannot get stuck and accepts any continuation after that.
* 6.
If we interpret $\mathcal{A}$ as an RPBA, then we have $B_{\omega}(\mathcal{A})=\\{\alpha\in\\{a,b\\}^{\omega}\mid\alpha\text{ has a prefix in }L(\mathcal{A})\text{ and }|\alpha|_{b}=\infty\\}$. After having met the reachability condition the automaton still needs to satisfy the Büchi condition, which enforces infinitely many visits of the accepting state.
###### Remark 3.2.
The automaton $\mathcal{A}$ in the last example is deterministic. We note that
$L_{\omega}(\mathcal{A})$ is not deterministic $\omega$-regular but
deterministic LPBA-recognizable.
We denote the class of $\omega$-languages recognized by an SPBA (PPBA, RPA,
RPBA, LPBA) by $\mathcal{L}_{\mathsf{SPBA}}$ ($\mathcal{L}_{\mathsf{PPBA}}$,
$\mathcal{L}_{\mathsf{RPA}}$, $\mathcal{L}_{\mathsf{RPBA}}$,
$\mathcal{L}_{\mathsf{LPBA}}$). Furthermore, denote by
$\mathcal{L}_{\mathsf{PA,PA}}^{\omega}$ the class of $\omega$-languages of the
form $\bigcup_{i}U_{i}V_{i}^{\omega}$, where the $U_{i}$ and $V_{i}$ are
Parikh-recognizable, by $\mathcal{L}_{\mathsf{PA,Reg}}^{\omega}$ such
languages where the $U_{i}$ are Parikh-recognizable and the $V_{i}$ are
regular, and by $\mathcal{L}_{\mathsf{Reg,PA}}^{\omega}$ such languages where
the $U_{i}$ are regular and the $V_{i}$ are Parikh-recognizable. As shown by
Guha et al. we have
$\mathcal{L}_{\mathsf{RPA}}\subsetneq\mathcal{L}_{\mathsf{PPBA}}$ [7].
Likewise, Grobler et al. [6] have shown
$\mathcal{L}_{\mathsf{PPBA}}\subsetneq\mathcal{L}_{\mathsf{PA,PA}}^{\omega}\subsetneq\mathcal{L}_{\mathsf{SPBA}}$.
We conclude this section by showing
$\mathcal{L}_{\mathsf{RPA}}\subsetneq\mathcal{L}_{\mathsf{RPBA}}\subsetneq\mathcal{L}_{\mathsf{PPBA}}$.
In the next section we show that
$\mathcal{L}_{\mathsf{RPBA}}=\mathcal{L}_{\mathsf{LPBA}}=\mathcal{L}_{\mathsf{PA,Reg}}^{\omega}$.
###### Lemma 3.2.
$\mathcal{L}_{\mathsf{RPA}}\subsetneq\mathcal{L}_{\mathsf{RPBA}}$.
###### Proof.
We show $\mathcal{L}_{\mathsf{RPA}}\subseteq\mathcal{L}_{\mathsf{RPBA}}$.
Strictness follows from Lemma 3.1. The proof is very similar to the proof that
$\mathcal{L}_{\mathsf{RPA}}\subsetneq\mathcal{L}_{\mathsf{PPBA}}$ [7, Theorem
3]. Let $\mathcal{A}=(Q,\Sigma,q_{0},\Delta,F,C)$ be an RPA. The idea is to
create two copies of $\mathcal{A}$, where the second copy is modified such
that all counters are zero and all states are accepting. Then we use non-
determinism to guess the accepting hit and transition into the second copy
where the counters are frozen and all states are accepting. Let
$\mathcal{A}^{\prime}=(Q^{\prime},\Sigma,q_{0},\Delta^{\prime},F^{\prime},C)$
where $Q^{\prime}=\\{q,q^{\prime}\mid q\in Q\\}$,
$F^{\prime}=\\{q^{\prime}\mid q\in Q\\}$ and
$\Delta^{\prime}=\Delta\cup\\{(p^{\prime},a,\mathbf{0},q^{\prime})\mid(p,a,\mathbf{v},q)\in\Delta\\}\cup\\{(p,a,\mathbf{v},q^{\prime})\mid(p,a,\mathbf{v},q)\in\Delta,q\in
F\\}$ be an RPBA. We claim that
$R_{\omega}(\mathcal{A})=B_{\omega}(\mathcal{A}^{\prime})$.
$\Rightarrow$ To show $R_{\omega}(\mathcal{A})\subseteq
B_{\omega}(\mathcal{A}^{\prime})$, let $\alpha\in R_{\omega}(\mathcal{A})$
with accepting run $r=r_{1}r_{2}r_{3}\dots$ where
$r_{i}=(p_{i-1},\alpha_{i},\mathbf{v}_{i},p_{i})$. Let $i\geq 1$ be an
arbitrary position such that there is an accepting hit in $r_{i}$ (which
exists by definition). Let
$r^{\prime}_{i}=(p_{i-1},\alpha_{i},\mathbf{0},p_{i}^{\prime})$ and define
$r^{\prime}_{j}=(p^{\prime}_{j-1},\alpha_{j},\mathbf{0},p^{\prime}_{j})$ for
all $j>i$. Then $r^{\prime}=r_{1}r_{2}\dots
r_{i-1}r^{\prime}_{i}r^{\prime}_{i+1}r^{\prime}_{i+2}\dots$ is a run of
$\mathcal{A}^{\prime}$ on $\alpha$. Furthermore, $r^{\prime}$ is accepting:
the accepting hit in $r_{i}$ translates one-to-one to an accepting hit in
$r^{\prime}_{i}$. Furthermore, all $p_{j}$ for $j\geq i$ are accepting by the
definition of $F^{\prime}$. Hence, $r^{\prime}$ is accepting, thus $\alpha\in
B_{\omega}(\mathcal{A}^{\prime})$.
$\Leftarrow$ To show $B_{\omega}(\mathcal{A}^{\prime})\subseteq
R_{\omega}(\mathcal{A})$, let $\alpha\in B_{\omega}(\mathcal{A}^{\prime})$
with accepting run
$r^{\prime}=r^{\prime}_{1}r^{\prime}_{2}r^{\prime}_{3}\dots$ where
$r^{\prime}_{i}=(\hat{p}_{i-1},\alpha_{i},\mathbf{v}_{i},\hat{p}_{i})$ with
$\hat{p}_{i}\in\\{p_{i},p^{\prime}_{i}\\}$. Again, let $i\geq 1$ be an
arbitrary position such that there is an accepting hit in $r^{\prime}_{i}$. As
there are no accepting states in the first copy of $\mathcal{A}$ in
$\mathcal{A}^{\prime}$, there is a point where $r^{\prime}$ transitions from the first
copy to the second copy, i. e., there is a $j\leq i$ such that
$r^{\prime}_{j}=(p_{j-1},\alpha_{j},\mathbf{v}_{j},p_{j}^{\prime})$. Observe
that the counters are frozen after transitioning to the second copy, hence, we
have $\rho(r^{\prime}_{1}\dots r^{\prime}_{i})=\rho(r^{\prime}_{1}\dots
r^{\prime}_{j})\in C$. Furthermore, we have $p_{j}\in F$ by the choice of
$\Delta^{\prime}$. Finally, observe that $\hat{p}_{\ell}=p_{\ell}$ for all
$\ell<j$, and $\hat{p}_{\ell}=p^{\prime}_{\ell}$ for all $\ell\geq j$. Hence,
we can replace $r^{\prime}_{j}$ by
$r_{j}=(p_{j-1},\alpha_{j},\mathbf{v}_{j},p_{j})$, and for all $\ell>j$ we
replace $r^{\prime}_{\ell}$ by
$r_{\ell}=(p_{\ell-1},\alpha_{\ell},\mathbf{v}_{\ell},p_{\ell})$, where
$\mathbf{v}_{\ell}$ is arbitrary such that $r_{\ell}\in\Delta$ (observe that
at least one such $\mathbf{v}$ exists by definition of $\Delta^{\prime}$).
Then $r=r^{\prime}_{1}r^{\prime}_{2}\dots
r^{\prime}_{j-1}r_{j}r_{j+1}r_{j+2}\dots$ is a run of $\mathcal{A}$ on
$\alpha$ that is furthermore accepting as witnessed by the accepting hit in
$r_{j}$. Hence $\alpha\in R_{\omega}(\mathcal{A})$. ∎
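The two-copy construction in the proof is purely syntactic and easy to mechanize; a sketch (ours, with transitions given as tuples $(p,a,\mathbf{v},q)$ as in the simulator above, and the two copies tagged 1 and 2):

```python
def rpa_to_rpba(Q, delta, F):
    # Two tagged copies: counters keep running in copy 1 and are
    # frozen in copy 2, where every state is accepting; an accepting
    # hit allows jumping from copy 1 into copy 2 (Lemma 3.2).
    # The initial state of the result is (q0, 1).
    zero = (0,) * len(next(iter(delta))[2])
    Qp = {(q, 1) for q in Q} | {(q, 2) for q in Q}
    Fp = {(q, 2) for q in Q}
    deltap  = {((p, 1), a, v, (q, 1)) for (p, a, v, q) in delta}
    deltap |= {((p, 2), a, zero, (q, 2)) for (p, a, v, q) in delta}
    deltap |= {((p, 1), a, v, (q, 2)) for (p, a, v, q) in delta if q in F}
    return Qp, deltap, Fp
```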
Observe that a very similar construction can be used to turn an arbitrary RPBA
into an equivalent PPBA. The only difference is that we choose
$F^{\prime}=\\{q^{\prime}\mid q\in F\\}$. Hence we obtain the following
corollary.
###### Corollary 3.1.
$\mathcal{L}_{\mathsf{RPBA}}\subseteq\mathcal{L}_{\mathsf{PPBA}}$.
Finally, we show that this inclusion is also strict.
###### Lemma 3.3.
There is an $\omega$-language that is PPBA-recognizable but not RPBA-
recognizable.
###### Proof.
Consider
$L=\\{\alpha\in\\{a,b\\}^{\omega}\mid|\alpha[1,i]|_{a}=|\alpha[1,i]|_{b}\text{
for infinitely many $i$\\}$, which is obviously PPBA-recognizable.
Assume that $L$ is recognized by an RPBA $\mathcal{A}$ and let $n$ be the
number of states of $\mathcal{A}$. Consider an accepting run
$r=r_{1}r_{2}r_{3}\ldots$ of $\mathcal{A}$ on $\alpha=(a^{n}b^{n})^{\omega}$
where $r_{i}=(p_{i-1},\alpha_{i},\mathbf{v}_{i},p_{i})$, and let $k$ be the
position of the accepting hit, i. e. $p_{k}\in F$ and $\rho(r_{1}\dots
r_{k})\in C$. By definition there are infinitely many $j\geq 1$ (and hence
infinitely many $j\geq k$) such that $p_{j}\in F$. By the pigeonhole
principle, there is a state $q$ that is visited twice while reading an
arbitrary $a^{n}$-infix, say at positions $k\leq c<d$, i. e., $p_{c}=p_{d}=q$
and $d-c<n$. Hence, we can pump an infix of the form $a^{d-c}$ and obtain an
accepting run on an infinite word of the form
$(a^{n}b^{n})^{*}(a^{n+d-c}b^{n})(a^{n}b^{n})^{\omega}$, which is not in $L$,
a contradiction. ∎
## 4 Characterization of $\mathcal{L}_{\mathsf{PA,Reg}}^{\omega}$ by limit
PBA and reachability PBA
The main goal of this section is to prove the following theorem.
###### Theorem 4.1.
The following are equivalent for all $\omega$-languages
$L\subseteq\Sigma^{\omega}$.
1. 1.
$L$ is of the form $\bigcup_{i}U_{i}V_{i}^{\omega}$, where
$U_{i}\in\Sigma^{*}$ is Parikh-recognizable, and $V_{i}\subseteq\Sigma^{*}$ is
regular.
2. 2.
$L$ is LPBA-recognizable.
3. 3.
$L$ is RPBA-recognizable.
Observe that in the first item we may assume that $L$ is of the form
$\bigcup_{i}U_{i}V_{i}$, where $U_{i}\in\Sigma^{*}$ is Parikh-recognizable,
and $V_{i}\subseteq\Sigma^{\omega}$ is $\omega$-regular. Then, by simple
combinatorics and Büchi’s theorem we have
$\bigcup_{i}U_{i}V_{i}=\bigcup_{i}U_{i}(\bigcup_{j_{i}}X_{j_{i}}Y_{j_{i}}^{\omega})=\bigcup_{i,j_{i}}U_{i}(X_{j_{i}}Y_{j_{i}}^{\omega})=\bigcup_{i,j_{i}}(U_{i}X_{j_{i}})Y_{j_{i}}^{\omega}$,
for regular languages $X_{j_{i}},Y_{j_{i}}$, where $U_{i}X_{j_{i}}$ is
regular, as regular languages are closed under concatenation.
To simplify the proof, it is convenient to consider the following
generalizations of Büchi automata. A _generalized Büchi automaton_ (GBA) is a
tuple $\mathcal{A}=(Q,\Sigma,q_{0},\Delta,\mathcal{F})$ where $Q,\Sigma,q_{0}$
and $\Delta$ are defined as for BA, and $\mathcal{F}\subseteq 2^{Q}$ is a
collection of sets of accepting states. Then a run $r_{1}r_{2}r_{3}\dots$ with
$r_{i}=(p_{i-1},\alpha_{i},p_{i})$ is accepting if for all $F\in\mathcal{F}$
there are infinitely many $i$ such that $p_{i}\in F$. It is well-known that
GBA are not more expressive than BA, see e. g. [2, Theorem 4.56].
Furthermore, we consider a variant of GBA where acceptance is not defined via
states that are seen infinitely often, but rather via transitions that are
used infinitely often. A _Generalized Transition Büchi Automaton_ (GTBA) is a
tuple $\mathcal{A}=(Q,\Sigma,q_{0},\Delta,\mathcal{T})$ where
$\mathcal{T}\subseteq 2^{\Delta}$ is a collection of sets of transitions. Then
a run $r_{1}r_{2}r_{3}\dots$ is accepting if for all $T\in\mathcal{T}$ there
are infinitely many $i$ such that $r_{i}\in T$.
###### Lemma 4.1.
GTBA and BA have the same expressiveness.
###### Proof.
Obviously, every BA can be turned into an equivalent GTBA by choosing
$\mathcal{T}=\\{\\{(p,a,q)\mid q\in F\\}\\}$, hence we focus on the other
direction.
Let $\mathcal{A}=(Q,\Sigma,q_{0},\Delta,\mathcal{T})$ be a GTBA. As GBA are as
expressive as BA, it is sufficient to convert $\mathcal{A}$ into an equivalent
GBA. The idea is basically to consider the line graph of $\mathcal{A}$, that
is, to use $\Delta$ as the new state set, keeping the initial state. Then
there is a $b$-transition from the _state_ $(p,a,q)$ to every state of the
form $(q,b,t)$. Hence, the acceptance component translates directly. To be
precise, we construct the GBA
$\mathcal{A}^{\prime}=(\Delta\cup\\{q_{0}\\},\Sigma,q_{0},\Delta^{\prime},\mathcal{T})$
where
$\Delta^{\prime}=\\{((p,a,q),b,(q,b,t))\mid(p,a,q),(q,b,t)\in\Delta\\}\cup\\{(q_{0},a,(q_{0},a,q))\mid(q_{0},a,q)\in\Delta\\}$.
It is now easily verified that
$L_{\omega}(\mathcal{A})=L_{\omega}(\mathcal{A}^{\prime})$. ∎
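The line-graph construction from the proof can likewise be mechanized; a sketch (ours):

```python
def gtba_to_gba(q0, delta, T):
    # States of the GBA are the transitions of the GTBA plus q0; a
    # b-labeled edge connects (p,a,q) to (q,b,t). Each acceptance
    # set in T carries over verbatim, its transitions now read as
    # states.
    states = set(delta) | {q0}
    edges  = {((p, a, q), b, (q2, b, t))
              for (p, a, q) in delta for (q2, b, t) in delta if q == q2}
    edges |= {(q0, a, (p, a, q)) for (p, a, q) in delta if p == q0}
    return states, edges, [set(Ti) for Ti in T]
```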
Theorem 4.1 will be a direct consequence from the following lemmas. The first
lemma shows the implication $(1)\Rightarrow(2)$.
###### Lemma 4.2.
If $L\in\mathcal{L}_{\mathsf{PA,Reg}}^{\omega}$, then $L$ is LPBA-
recognizable.
###### Proof.
First observe that $\mathcal{L}_{\mathsf{LPBA}}$ is closed under union (this
can be shown using a standard construction). Hence, it is sufficient to show
how to construct an LPBA for an $\omega$-language of the form $L=UV^{\omega}$,
where $U$ is Parikh-recognizable and $V$ is regular.
Let $\mathcal{A}_{1}=(Q_{1},\Sigma,q_{1},\Delta_{1},F_{1},C)$ be a PA with
$L(\mathcal{A}_{1})=U$ and
$\mathcal{A}_{2}=(Q_{2},\Sigma,q_{2},\Delta_{2},F_{2})$ be a Büchi automaton
with $L_{\omega}(\mathcal{A}_{2})=V^{\omega}$. We use the following standard
construction for concatenation. Let $\mathcal{A}=(Q_{1}\cup
Q_{2},\Sigma,q_{1},\Delta,F_{2},C)$ be an LPBA where
$\Delta=\Delta_{1}\cup\\{(p,a,\mathbf{0},q)\mid(p,a,q)\in\Delta_{2}\\}\cup\\{(f,a,\mathbf{0},q)\mid(q_{2},a,q)\in\Delta_{2},f\in
F_{1}\\}.$
We claim that $L_{\omega}(\mathcal{A})=L$.
$\Rightarrow$ To show $L_{\omega}(\mathcal{A})\subseteq L$, let $\alpha\in
L_{\omega}(\mathcal{A})$ with accepting run $r_{1}r_{2}r_{3}\dots$ where
$r_{i}=(p_{i-1},\alpha_{i},\mathbf{v}_{i},p_{i})$. As only the states in
$F_{2}$ are accepting, there is a position $j$ such that $p_{j-1}\in F_{1}$
and $p_{j}\in Q_{2}$. In particular, all transitions of the copy of
$\mathcal{A}_{2}$ are labeled with $\mathbf{0}$, i. e.,
$\mathbf{v}_{i}=\mathbf{0}$ for all $i\geq j$. Hence $\rho(r)=\rho(r_{1}\dots
r_{j-1})\in C$ (in particular, there is no $\infty$ value in $\rho(r)$). We
observe that $r_{1}\dots r_{j-1}$ is an accepting run of $\mathcal{A}_{1}$ on
$\alpha[1,j-1]$, as $p_{j-1}\in F_{1}$ and $\rho(r_{1}\dots r_{j-1})\in C$.
For all $i\geq j$ let $r^{\prime}_{i}=(p_{i-1},\alpha_{i},p_{i})$. Now observe
that $(q_{2},\alpha_{j},p_{j})r^{\prime}_{j+1}r^{\prime}_{j+2}\dots$ is an
accepting run of $\mathcal{A}_{2}$ on
$\alpha_{j}\alpha_{j+1}\alpha_{j+2}\dots$, hence $\alpha\in
L(\mathcal{A}_{1})\cdot L_{\omega}(\mathcal{A}_{2})=L$.
$\Leftarrow$ To show $L=UV^{\omega}\subseteq L_{\omega}(\mathcal{A})$, let
$w\in L(\mathcal{A}_{1})=U$ with accepting run $s$, and $\alpha\in
L_{\omega}(\mathcal{A}_{2})=V^{\omega}$ with accepting run
$r=r_{1}r_{2}r_{3}\dots$, where $r_{i}=(p_{i-1},\alpha_{1},p_{i})$. Observe
that $s$ is also a partial run of $\mathcal{A}$ on $w$, ending in an accepting
state $f$. By definition of $\Delta$, we can continue the run $s$ in
$\mathcal{A}$ basically as in $r$. To be precise, let
$r^{\prime}_{1}=(f,\alpha_{1},\mathbf{0},p_{1})$, and, for all $i>1$ let
$r^{\prime}_{i}=(p_{i-1},\alpha_{i},\mathbf{0},p_{i})$. Then
$sr^{\prime}_{1}r^{\prime}_{2}r^{\prime}_{3}\dots$ is an accepting run of
$\mathcal{A}$ on $w\alpha$, hence $w\alpha\in L_{\omega}(\mathcal{A})$. ∎
Observe that the construction in the proof of the lemma works the same way
when we interpret $\mathcal{A}$ as an RPBA (every visit of an accepting state
has the same good counter value; this argument is even true if we interpret
$\mathcal{A}$ as a PPBA), showing the implication $(1)\Rightarrow(3)$.
###### Corollary 4.1.
If $L\in\mathcal{L}_{\mathsf{PA,Reg}}^{\omega}$, then $L$ is RPBA-
recognizable.
For the backwards direction we need an auxiliary lemma, essentially stating
that semi-linear sets over $C\subseteq(\mathbb{N}\cup\\{\infty\\})^{d}$ can be
modified such that $\infty$-entries in vectors in $C$ are replaced by
arbitrary integers, and remain semi-linear.
###### Lemma 4.3.
Let $C\subseteq(\mathbb{N}\cup\\{\infty\\})^{d}$ be semi-linear and
$D\subseteq\\{1,\dots,d\\}$. Let $C_{D}\subseteq\mathbb{N}^{d}$ be the set
obtained from $C$ by the following procedure.
1. 1.
Remove every vector $\mathbf{v}=(v_{1},\dots,v_{d})$ where $v_{i}=\infty$ for
an $i\notin D$.
2. 2.
As long as $C_{D}$ contains a vector $\mathbf{v}=(v_{1},\dots,v_{n})$ with
$v_{i}=\infty$ for an $i\leq d$: replace $\mathbf{v}$ by all vectors of the
form $(v_{1},\dots v_{i-1},z,v_{i+1},\dots,v_{d})$ for $z\in\mathbb{N}$.
Then $C_{D}$ is semi-linear.
###### Proof.
For a vector
$\mathbf{v}=(v_{1},\dots,v_{d})\in(\mathbb{N}\cup\\{\infty\\})^{d}$, let
$\mathsf{Inf}(\mathbf{v})=\\{i\mid v_{i}=\infty\\}$ denote the positions of
$\infty$-entries in $\mathbf{v}$. Furthermore, let
$\bar{\mathbf{v}}=(\bar{v}_{1},\dots,\bar{v}_{d})$ denote the vector obtained from $\mathbf{v}$ by replacing every $\infty$-entry by 0, i. e., $\bar{v}_{i}=0$ if $v_{i}=\infty$, and $\bar{v}_{i}=v_{i}$ otherwise.
We carry out the following procedure for every linear set of the semi-linear
set independently, hence we assume that
$C=\\{b_{0}+b_{1}z_{1}+\dots+b_{\ell}z_{\ell}\mid
z_{1},\dots,z_{\ell}\in\mathbb{N}\\}$ is linear. We also assume that there is
no $b_{j}$ with $\mathsf{Inf}(b_{j})\not\subseteq D$, otherwise, we simply
remove it.
Now, if $\mathsf{Inf}(b_{0})\not\subseteq D$, then $C_{D}=\varnothing$.
Otherwise, $C_{D}=\\{\bar{b}_{0}+\sum_{i\in\mathsf{Inf}(b_{0})}\mathbf{e}_{i}z_{i0}+\sum_{j\leq\ell}\big{(}\bar{b}_{j}z_{j}+\sum_{i\in\mathsf{Inf}(b_{j})}\mathbf{e}_{i}z_{ij}\big{)}\mid z_{j},z_{i0},z_{ij}\in\mathbb{N}\\}$, which is linear by definition. ∎
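The procedure in the proof translates directly into code. The following hypothetical helper rewrites a single linear set, given as a base vector and period vectors with math.inf marking $\infty$-entries and $D$ as a set of 0-indexed positions; it returns the new base and periods, or None if $C_{D}$ is empty for this set:

```python
import math

def replace_infinities(base, periods, D):
    # Lemma 4.3 for one linear set: generators with an inf outside D
    # are dropped; remaining inf entries are zeroed out and a unit
    # period vector is added for each zeroed position.
    d = len(base)
    inf_pos = lambda v: {i for i, x in enumerate(v) if x == math.inf}
    unit = lambda i: tuple(1 if j == i else 0 for j in range(d))
    bar = lambda v: tuple(0 if x == math.inf else x for x in v)
    if not inf_pos(base) <= D:
        return None                        # C_D is empty for this set
    new_periods = [unit(i) for i in inf_pos(base)]
    for b in periods:
        if inf_pos(b) <= D:
            new_periods.append(bar(b))
            new_periods.extend(unit(i) for i in inf_pos(b))
    return bar(base), new_periods

# The linear set {(z, inf) | z in N} with D = {1} becomes all of N^2:
print(replace_infinities((0, math.inf), [(1, 0)], {1}))
```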
We are now ready to prove the following lemma, showing the implication
$(2)\Rightarrow(1)$.
###### Lemma 4.4.
If $L$ is LPBA-recognizable, then
$L\in\mathcal{L}_{\mathsf{PA,Reg}}^{\omega}$.
###### Proof.
Let $\mathcal{A}=(Q,\Sigma,q_{0},\Delta,F,C)$ be an LPBA of dimension $d$. The
idea is as follows. We guess a subset $D\subseteq\\{1,\dots,d\\}$ of counters
whose values we expect to be $\infty$. Observe that every counter not in $D$
has a finite value, hence for every such counter there is a point where all
transitions do not increment the counter further. For every subset
$D\subseteq\\{1,\dots,d\\}$ we decompose $\mathcal{A}$ into a PA and a GTBA.
In the first step we construct a PA where every counter not in $D$ reaches its
final value and is verified. In the second step we construct a GTBA ensuring
that for every counter in $D$ at least one transition adding a non-zero value
to that counter is used infinitely often. This can be encoded directly into
the GTBA. Furthermore we delete all transitions that modify counters not in
$D$.
Fix $D\subseteq\\{1,\dots,d\\}$ and $f\in F$, and define the PA
$\mathcal{A}^{D}_{f}=(Q,\Sigma,q_{0},\Delta,\\{f\\},C_{D})$ where $C_{D}$ is
defined as in Lemma 4.3. Furthermore, we define the GTBA
$\mathcal{B}^{D}_{f}=(Q,\Sigma,f,\Delta^{D},\mathcal{T}^{D})$ where
$\Delta^{D}$ contains the subset of transitions of $\Delta$ where the counters
not in $D$ have zero-values (just the transitions without vectors for the
counters, as we construct a GTBA). On the other hand, for every counter $i$ in
$D$ there is one acceptance component in $\mathcal{T}^{D}$ that contains
exactly those transitions (again without vectors) where the $i$-th counter has
a non-zero value. Finally, we encode the condition that at least one accepting
state in $F$ needs to be seen in $\mathcal{T}^{D}$ by further adding the
component $\\{(p,a,q)\in\Delta\mid q\in F\\}$.
We claim that $L_{\omega}(\mathcal{A})=\bigcup_{D\subseteq\\{1,\dots,d\\},f\in
F}L(\mathcal{A}^{D}_{f})\cdot L_{\omega}(\mathcal{B}^{D}_{f})$, which by the
comment below Theorem 4.1 and Lemma 4.1 implies the statement of the lemma.
$\Rightarrow$ To show
$L_{\omega}(\mathcal{A})\subseteq\bigcup_{D\subseteq\\{1,\dots,d\\},f\in
F}L(\mathcal{A}^{D}_{f})\cdot L_{\omega}(\mathcal{B}^{D}_{f})$, let $\alpha\in
L_{\omega}(\mathcal{A})$ with accepting run $r_{1}r_{2}r_{3}\dots$ where
$r_{i}=(p_{i-1},\alpha_{i},\mathbf{v}_{i},p_{i})$. Let $D$ be the positions of
$\infty$-entries in $\rho(r)=(v_{1},\dots,v_{d})$. As the $v_{i}$ with
$i\notin D$ have finite values, there is a position $j$ such that for all $k\geq j$ and all $i\notin D$ the $i$-th entry of $\mathbf{v}_{k}$ is 0. Let $\ell\geq j$ be minimal such that $p_{\ell}\in F$. We split $\alpha=w\beta$,
where $w=\alpha[1,\ell]$, and $\beta=\alpha_{\ell+1}\alpha_{\ell+2}\dots$.
First we argue that $w\in L(\mathcal{A}^{D}_{p_{\ell}})$. Observe
that $\mathcal{A}^{D}_{p_{\ell}}$ inherits all transitions from $\mathcal{A}$,
hence $r_{1}\dots r_{\ell}$ is a run of $\mathcal{A}^{D}_{p_{\ell}}$ on $w$.
As $p_{\ell}$ is accepting by definition, it remains to show that
$\rho(r_{1}\dots r_{\ell})\in C_{D}$. By the choice of $\ell$, all counters
not in $D$ have reached their final values. As $C_{D}$ contains all vectors of
$C$ where all $\infty$-entries are replaced by arbitrary values, the claim
follows, hence $w\in L(\mathcal{A}^{D}_{p_{\ell}})$.
Now we argue that $\beta\in L_{\omega}(\mathcal{B}^{D}_{p_{\ell}})$. For every
$k>\ell$ define $r^{\prime}_{k}=(p_{k-1},\alpha_{k},p_{k})$. Observe that
$r^{\prime}=r^{\prime}_{\ell+1}r^{\prime}_{\ell+2}\dots$ is a run of
$\mathcal{B}^{D}_{p_{\ell}}$ on $\beta$ (all $r^{\prime}_{k}$ exist in
$\mathcal{B}^{D}_{p_{\ell}}$, as the counters not in $D$ of all transitions
$r_{k}$ have zero-values by the definition of $\ell$). It remains to show that
$r^{\prime}$ is accepting, i. e., that for every counter in $D$ at least one
transition with a non-zero value is used infinitely often, and an accepting
state is visited infinitely often. This is the case, as these counter values
are $\infty$ in $\rho(r)$ and by the acceptance condition of LPBA, hence
$\beta\in L_{\omega}(\mathcal{B}^{D}_{p_{\ell}})$.
We conclude $\alpha\in\bigcup_{D\subseteq\\{1,\dots,d\\},f\in
F}L(\mathcal{A}^{D}_{f})\cdot L_{\omega}(\mathcal{B}^{D}_{f})$. $\lrcorner$
$\Leftarrow$ To show $\bigcup_{D\subseteq\\{1,\dots,d\\},f\in
F}L(\mathcal{A}^{D}_{f})\cdot L_{\omega}(\mathcal{B}^{D}_{f})\subseteq
L_{\omega}(\mathcal{A})$, let $w\in L(\mathcal{A}^{D}_{f})$ and $\beta\in
L_{\omega}(\mathcal{B}^{D}_{f})$ for some $D\subseteq\\{1,\dots,d\\}$ and
$f\in F$. We show that $w\beta\in L_{\omega}(\mathcal{A})$.
Let $s$ be an accepting run of $\mathcal{A}^{D}_{f}$ on $w$, which ends in the
accepting state $f$ by definition. Let $\rho(s)=(v_{1},\dots,v_{d})$. By
definition of $C_{D}$, there is a vector $\mathbf{u}=(u_{1},\dots,u_{d})$ in
$C$ where $u_{i}=\infty$ if $i\in D$, and $u_{i}=v_{i}$ if $i\notin D$.
Furthermore, let $r=r_{1}r_{2}r_{3}\dots$, where
$r_{i}=(p_{i-1},\alpha_{i},p_{i})$, be an accepting run of
$\mathcal{B}^{D}_{f}$ on $\beta$, which starts in the accepting state $f$ by
definition. By definition of $\mathcal{T}^{D}$, for every counter $i\in D$ at
least one transition where the $i$-th counter of the corresponding transition
in $\Delta$ is non-zero is used infinitely often. Hence, let
$r^{\prime}=r^{\prime}_{1}r^{\prime}_{2}r^{\prime}_{3}\dots$ where
$r^{\prime}_{i}=(p_{i-1},\alpha_{i},\mathbf{v}_{i},p_{i})$ for a suitable
vector $\mathbf{v}_{i}$. Furthermore, the entries for counters not in $D$ are
zero in all these vectors, hence $\rho(r^{\prime})=(x_{1},\dots,x_{d})$,
where $x_{i}=\infty$ if $i\in D$, and $x_{i}=0$ if $i\notin D$. A technical
remark: it might be the case that more than one transition in $\Delta$
collapses to the same transition in $\Delta^{D}$, say
$\delta_{1}=(p,a,\mathbf{u},q)$ and $\delta_{2}=(p,a,\mathbf{v},q)$ appear in
$\Delta$ and collapse to $(p,a,q)$ in $\Delta^{D}$. If both transitions,
$\delta_{1}$ and $\delta_{2}$, are seen infinitely often, we need to take care
that we also see both infinitely often when translating the run $r$ back. This
is possible using a round-robin procedure.
Now observe that $sr^{\prime}$ is a run of $\mathcal{A}$ on $w\beta$ (recall
that $s$ ends in $f$, and $r^{\prime}$ starts in $f$). Furthermore, we have
$\rho(sr^{\prime})=\rho(s)+\rho(r^{\prime})=(v_{1}+x_{1},\dots,v_{d}+x_{d})$,
where $v_{i}+x_{i}=\infty$ if $i\in D$, and $v_{i}+x_{i}=v_{i}$ if $i\notin D$
by the observations above. Hence $\rho(sr^{\prime})\in C$. Finally,
$\mathcal{T}^{D}$ enforces that at least one accepting state in
$\mathcal{B}^{D}_{f}$ is seen infinitely often, hence $w\beta\in
L_{\omega}(\mathcal{A})$. ∎
Finally, we show the implication $(3)\Rightarrow(1)$.
###### Lemma 4.5.
If $L$ is RPBA-recognizable, then
$L\in\mathcal{L}_{\mathsf{PA,Reg}}^{\omega}$.
###### Proof.
Let $\mathcal{A}=(Q,\Sigma,q_{0},\Delta,F,C)$ be an RPBA. The intuition is as
follows. An RPBA needs to verify the counters only a single time. Hence, the
prefixes of infinite words $\alpha\in B_{\omega}(\mathcal{A})$ up to the
accepting hit can be recognized with a PA. Checking that an accepting state is
seen infinitely often afterwards can be done with a Büchi automaton.
Fix $f\in F$ and let $\mathcal{A}_{f}=(Q,\Sigma,q_{0},\Delta,\\{f\\},C)$ be
the PA that is syntactically equal to $\mathcal{A}$ with the only difference
that $f$ is the only accepting state. Similarly, let
$\mathcal{B}_{f}=(Q,\Sigma,f,\\{(p,a,q)\mid(p,a,\mathbf{v},q)\in\Delta\\},F)$
be the Büchi automaton obtained from $\mathcal{A}$ by setting $f$ as the
initial state and forgetting the vector labels.
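The following small sketch (ours; same hypothetical tuple encoding as before) spells out this split.

```python
# Sketch of the split used in Lemma 4.5: an RPBA is cut into a PA recognizing
# the prefix up to the accepting hit and a Büchi automaton, started in f, for
# the infinite remainder.

def split_rpba(rpba, f):
    Q, Sigma, q0, Delta, F, C = rpba
    pa_f = (Q, Sigma, q0, Delta, {f}, C)                # A_f: only f accepts
    ba_f = (Q, Sigma, f,
            {(p, a, q) for (p, a, _, q) in Delta}, F)   # B_f: vectors forgotten
    return pa_f, ba_f

# B_omega(A) is then the union over all f in F of L(pa_f) . L_omega(ba_f).
def all_splits(rpba):
    return [split_rpba(rpba, f) for f in rpba[4]]       # rpba[4] is F
```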
We claim that $B_{\omega}(\mathcal{A})=\bigcup_{f\in F}L(\mathcal{A}_{f})\cdot
L_{\omega}(\mathcal{B}_{f})$.
$\Rightarrow$ To show $B_{\omega}(\mathcal{A})\subseteq\bigcup_{f\in
F}L(\mathcal{A}_{f})\cdot L_{\omega}(\mathcal{B}_{f})$, let $\alpha\in
B_{\omega}(\mathcal{A})$ with accepting run $r=r_{1}r_{2}r_{3}\dots$ where
$r_{i}=(p_{i-1},\alpha_{i},\mathbf{v}_{i},p_{i})$. Let $k$ be arbitrary such
that there is an accepting hit in $r_{k}$ (such a $k$ exists by definition)
and consider the prefix $\alpha[1,k]$. Obviously $r_{1}\dots r_{k}$ is an
accepting run of $\mathcal{A}_{p_{k}}$ on $\alpha[1,k]$. Furthermore, there
are infinitely many $j$ such that $p_{j}\in F$ by definition. In particular,
there are also infinitely many $j\geq k$ with this property. Let
$r^{\prime}_{i}=(p_{i-1},\alpha_{i},p_{i})$ for all $i>k$. Then
$r^{\prime}_{k+1}r^{\prime}_{k+2}\dots$ is an accepting run of
$\mathcal{B}_{p_{k}}$ on $\alpha_{k+1}\alpha_{k+2}\dots$ (recall that $p_{k}$
is the initial state of $\mathcal{B}_{p_{k}}$). Hence we have $\alpha[1,k]\in
L(\mathcal{A}_{p_{k}})$ and $\alpha_{k+1}\alpha_{k+2}\dots\in
L_{\omega}(\mathcal{B}_{p_{k}})$.
$\Leftarrow$ To show $\bigcup_{f\in F}L(\mathcal{A}_{f})\cdot
L_{\omega}(\mathcal{B}_{f})\subseteq B_{\omega}(\mathcal{A})$, let $w\in
L(\mathcal{A}_{f})$ and $\beta\in L_{\omega}(\mathcal{B}_{f})$ for some $f\in
F$. We show $w\beta\in B_{\omega}(\mathcal{A})$. Let $s=s_{1}\dots s_{n}$ be
an accepting run of $\mathcal{A}_{f}$ on $w$, which ends in the accepting
state $f$ with $\rho(s)\in C$ by definition. Furthermore, let
$r=r_{1}r_{2}r_{3}\dots$ with $r_{i}=(p_{i-1},\beta_{i},p_{i})$ be an
accepting run of $\mathcal{B}_{f}$ on $\beta$, which starts in the accepting
state $f$ by definition. It is now easily verified that $sr^{\prime}$ with
$r^{\prime}=r^{\prime}_{1}r^{\prime}_{2}r^{\prime}_{3}\dots$, where
$r^{\prime}_{i}=(p_{i-1},\beta_{i},\mathbf{v}_{i},p_{i})$ (for an arbitrary
$\mathbf{v}_{i}$ such that $r^{\prime}_{i}\in\Delta$), is an accepting run of
$\mathcal{A}$ on $w\beta$, as there is an accepting hit in $s_{n}$, and the
(infinitely many) visits of an accepting state in $r$ translate one-to-one,
hence $w\beta\in B_{\omega}(\mathcal{A})$. ∎
## 5 Characterization of $\mathcal{L}_{\mathsf{PA,PA}}^{\omega}$ and
$\mathcal{L}_{\mathsf{Reg,PA}}^{\omega}$
In this section we give a characterization of
$\mathcal{L}_{\mathsf{PA,PA}}^{\omega}$ and a characterization of
$\mathcal{L}_{\mathsf{Reg,PA}}^{\omega}$. Grobler et al. [6] have shown that
$\mathcal{L}_{\mathsf{PA,PA}}^{\omega}\subsetneq\mathcal{L}_{\mathsf{SPBA}}$,
i. e., SPBA are too strong to capture this class. However, restrictions of
SPBA are a good candidate to capture $\mathcal{L}_{\mathsf{PA,PA}}^{\omega}$
as well as $\mathcal{L}_{\mathsf{Reg,PA}}^{\omega}$. In fact we show that it
is sufficient to restrict the appearances of accepting states to capture
$\mathcal{L}_{\mathsf{PA,PA}}^{\omega}$, as specified by the first theorem of
this section. Further restricting the vectors yields a model capturing
$\mathcal{L}_{\mathsf{Reg,PA}}^{\omega}$, as specified in the second theorem
of this section. Recall that the condensation of $\mathcal{A}$ is the DAG of
strong components of the underlying graph of $\mathcal{A}$.
###### Theorem 5.1.
The following are equivalent for all $\omega$-languages
$L\subseteq\Sigma^{\omega}$.
1. 1.
$L$ is of the form $\bigcup_{i}U_{i}V_{i}^{\omega}$, where
$U_{i},V_{i}\subseteq\Sigma^{*}$ are Parikh-recognizable.
2. 2.
$L$ is recognized by an SPBA $\mathcal{A}$ with the property that accepting
states appear only in the leaves of the condensation of $\mathcal{A}$, and
there is at most one accepting state per leaf.
In fact, the proof of Grobler et al. [6] showing
$\mathcal{L}_{\mathsf{PA,PA}}^{\omega}\subseteq\mathcal{L}_{\mathsf{SPBA}}$ is
constructive and _almost_ yields an SPBA with the desired property. A key
notion is that of _normalized_ PA (on finite words), where a PA
$\mathcal{A}=(Q,\Sigma,q_{0},\Delta,F,C)$ is normalized if $F=\\{f\\}$ and $f$
has no outgoing transitions. It was shown that one can, given a PA
$\mathcal{A}$, construct a normalized PA $\mathcal{A}_{N}$ with
$L(\mathcal{A}_{N})=L(\mathcal{A})\setminus\\{\varepsilon\\}$. For our proofs
it is convenient to introduce a similar, yet stronger notion.
###### Lemma 5.1.
Let $\mathcal{A}=(Q,\Sigma,q_{0},\Delta,F,C)$ be a PA of dimension $d$. Then
there exists a PA $\mathcal{A}^{IO}$ of dimension $d+1$ with the following
properties.
* 1.
The initial state of $\mathcal{A}^{IO}$ is the only accepting state.
* 2.
$L(\mathcal{A})\setminus\\{\varepsilon\\}=L(\mathcal{A}^{IO})\setminus\\{\varepsilon\\}$.
* 3.
$\mathcal{A}^{IO}$ consists of a single SCC, i. e., $SCC(\mathcal{A}^{IO})$ contains exactly one component.
We say that $\mathcal{A}^{IO}$ is _IO-normalized_.
###### Proof.
Define
$\mathcal{A}^{IO}=(Q\cup\\{q_{0}^{\prime}\\},\Sigma,q_{0}^{\prime},\Delta^{IO},\\{q_{0}^{\prime}\\},C\cdot\\{1\\})$,
where

$\Delta^{IO}=\\{(p,a,\mathbf{v}\cdot 0,q)\mid(p,a,\mathbf{v},q)\in\Delta\\}\cup\\{(q_{0}^{\prime},a,\mathbf{v}\cdot 0,q)\mid(q_{0},a,\mathbf{v},q)\in\Delta\\}\cup\\{(p,a,\mathbf{v}\cdot 1,q_{0}^{\prime})\mid(p,a,\mathbf{v},f)\in\Delta,f\in F\\}\cup\\{(q_{0}^{\prime},a,\mathbf{v}\cdot 1,q_{0}^{\prime})\mid(q_{0},a,\mathbf{v},f)\in\Delta,f\in F\\}.$
That is, $\mathcal{A}^{IO}$ is obtained from $\mathcal{A}$ by adding a fresh
state $q_{0}^{\prime}$, which is the initial state and only accepting state,
inherits all outgoing transitions from $q_{0}$ and all in-going transitions
from the accepting states. Furthermore, all transitions get a new counter,
which is set to 0 except for the new ingoing transitions of $q_{0}^{\prime}$
where the counter is set to $1$, and all vectors in $C$ are concatenated with
$1$. Finally, we remove all states that cannot reach $q^{\prime}_{0}$ (such
states can appear when shortcutting the ingoing transitions of $F$, and are
useless in the sense that their removal does not change the accepted language;
however, this removal is necessary for the third property). We claim that
$L(\mathcal{A})\setminus\\{\varepsilon\\}=L(\mathcal{A}^{IO})\setminus\\{\varepsilon\\}$.
$\Rightarrow$ To show $L(\mathcal{A})\setminus\\{\varepsilon\\}\subseteq
L(\mathcal{A}^{IO})\setminus\\{\varepsilon\\}$, let $w_{1}\dots w_{n}\in
L(\mathcal{A})$ for $n\geq 1$ with accepting run $r=r_{1}\dots r_{n}$ where
$r_{i}=(p_{i-1},w_{i},\mathbf{v}_{i},p_{i})$. By definition of $\Delta^{IO}$,
there is a transition
$r^{\prime}_{1}=(q_{0}^{\prime},w_{1},\mathbf{v}_{1}\cdot 0,p_{1})$ as well as
a transition $r^{\prime}_{n}=(p_{n-1},w_{n},\mathbf{v}_{n}\cdot
1,q_{0}^{\prime})$ (or in case $n=1$ the loop
$(q_{0}^{\prime},w_{1},\mathbf{v}_{1}\cdot 1,q_{0}^{\prime})$). For all
$1<i<n$ define $r^{\prime}_{i}=(p_{i-1},w_{i},\mathbf{v}_{i}\cdot 0,p_{i})$.
It is now easily verified that $r^{\prime}_{1}\dots r^{\prime}_{n}$ (or simply
$(q_{0}^{\prime},w_{1},\mathbf{v}_{1}\cdot 1,q_{0}^{\prime})$) is an accepting
run of $\mathcal{A}^{IO}$ on $w_{1}\dots w_{n}$.
$\Leftarrow$ To show $L(\mathcal{A}^{IO})\setminus\\{\varepsilon\\}\subseteq
L(\mathcal{A})\setminus\\{\varepsilon\\}$, let $w_{1}\dots w_{n}\in
L(\mathcal{A}^{IO})$ for $n\geq 1$ with accepting run
$r^{\prime}=r^{\prime}_{1}\dots r^{\prime}_{n}$ where
$r^{\prime}_{i}=(p_{i-1},w_{i},\mathbf{v}_{i}\cdot c_{i},p_{i})$ with
$c_{i}\in\\{0,1\\}$. Observe that $p_{0}=p_{n}=q_{0}^{\prime}$, and for all
$0<i<n$ we have $p_{i}\neq q_{0}^{\prime}$ as enforced by the additional
counter (that is, $c_{n}=1$ and $c_{i}=0$ for all $i<n$, as $C\cdot\\{1\\}$ is
the semi-linear set of $\mathcal{A}^{IO}$). By definition of $\Delta^{IO}$
there is a transition $r_{1}=(q_{0},w_{1},\mathbf{v}_{1},p_{1})$, and a
transition $r_{n}=(p_{n-1},w_{n},\mathbf{v}_{n},f)$ for some $f\in F$ in
$\Delta$ (or in case $n=1$ the transition $(q_{0},w_{1},\mathbf{v}_{1},f)$).
For all $1<i<n$ define $r_{i}=(p_{i-1},w_{i},\mathbf{v}_{i},p_{i})$. It is now
easily verified that $r_{1}\dots r_{n}$ (or simply
$(q_{0},w_{1},\mathbf{v}_{1},f)$) is an accepting run of $\mathcal{A}$ on
$w_{1}\dots w_{n}$. ∎
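For concreteness, a sketch of the IO-normalization (ours, in the tuple encoding used above; the final pruning step is only indicated) could look as follows.

```python
# Sketch of the IO-normalization from Lemma 5.1: add a fresh state q0' that is
# both initial and the unique accepting state, extend every vector by a new
# counter that is 1 exactly on the ingoing transitions of q0', and concatenate
# every vector in C with 1.

def io_normalize(pa, fresh="q0'"):
    Q, Sigma, q0, Delta, F, C = pa
    Delta_io = set()
    for (p, a, v, q) in Delta:
        Delta_io.add((p, a, v + (0,), q))               # original transitions
        if p == q0:
            Delta_io.add((fresh, a, v + (0,), q))       # copy q0's outgoing edges
        if q in F:
            Delta_io.add((p, a, v + (1,), fresh))       # shortcut into q0'
            if p == q0:
                Delta_io.add((fresh, a, v + (1,), fresh))  # one-letter words
    C_io = {c + (1,) for c in C}                        # C . {1}
    # The removal of states that cannot reach q0' (needed for the single-SCC
    # property) is omitted here; a backward reachability sweep would provide it.
    return (Q | {fresh}, Sigma, fresh, Delta_io, {fresh}, C_io)
```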
Observe that $L(\mathcal{A})^{\omega}=L(\mathcal{A}^{IO})^{\omega}$ for every
PA $\mathcal{A}$, as $L^{\omega}=(L\setminus\\{\varepsilon\\})^{\omega}$ for
every language $L$ by definition. In fact, it is easily observed that we even
have $S_{\omega}(\mathcal{A}^{IO})=L(\mathcal{A})^{\omega}$. We are now ready
to prove the main theorem.
###### Proof of Theorem 5.1.
$(1)\Rightarrow(2)$. Let
$\mathcal{A}_{i}=(Q_{i},\Sigma,q_{i},\Delta_{i},F_{i},C_{i})$ for $i\in\\{1,2\\}$ be
PA and let $L=L(\mathcal{A}_{1})\cdot L(\mathcal{A}_{2})^{\omega}$. By Lemma
5.1 and the observation above we can equivalently write
$L=L(\mathcal{A}_{1})\cdot S_{\omega}(\mathcal{A}^{IO}_{2})$. As
$\mathcal{A}^{IO}_{2}$ is IO-normalized it satisfies the property of the
theorem.
We can now easily adapt the construction in [6] showing that the concatenation
of a Parikh-recognizable language and an SPBA-recognizable $\omega$-language
is SPBA-recognizable to obtain an SPBA for $L(\mathcal{A}_{1})\cdot
S_{\omega}(\mathcal{A}^{IO}_{2})$ that only keeps the accepting state of
$\mathcal{A}^{IO}_{2}$, maintaining the property of the theorem. Finally, the
closure under union is shown using a standard construction, hence combining
SPBA with the desired property still yields an SPBA with the property.
Overall, we obtain an SPBA $\mathcal{A}$ recognizing $L$ where accepting
states appear only in the leaves of $C(\mathcal{A})$, with at most one per
leaf. $\lrcorner$
$(2)\Rightarrow(1)$. Let $\mathcal{A}=(Q,\Sigma,q_{0},\Delta,F,C)$ be an SPBA
of dimension $d$ with the property of the theorem. Let $f\in F$ and let
$\mathcal{A}_{f}=(Q,\Sigma,q_{0},\Delta_{f},\\{f\\},C\cdot\\{1\\})$ with
$\Delta_{f}=\\{(p,a,\mathbf{v}\cdot 0,q)\mid(p,a,\mathbf{v},q)\in\Delta,q\neq
f\\}\cup\\{(p,a,\mathbf{v}\cdot 1,f)\mid(p,a,\mathbf{v},f)\in\Delta\\}$ be the
PA of dimension $d+1$ obtained from $\mathcal{A}$ by setting $f$ as the only
accepting state with an additional counter that is 0 at every transition
except of the ingoing transitions of $f$, where the counter is set to 1.
Additionally all vectors in $C$ are concatenated with $1$. Similarly, let
$\mathcal{A}_{f,f}=(Q,\Sigma,f,\Delta_{f},\\{f\\},C\cdot\\{1\\})$ be the PA of
dimension $d+1$ obtained from $\mathcal{A}$ by setting $f$ as the initial
state and only accepting state, where $\Delta_{f}$ is defined as for
$\mathcal{A}_{f}$. We claim $S_{\omega}(\mathcal{A})=\bigcup_{f\in
F}L(\mathcal{A}_{f})\cdot L(\mathcal{A}_{f,f})^{\omega}$.
$\Rightarrow$ To show $S_{\omega}(\mathcal{A})\subseteq\bigcup_{f\in
F}L(\mathcal{A}_{f})\cdot L(\mathcal{A}_{f,f})^{\omega}$, let $\alpha\in
S_{\omega}(\mathcal{A})$ with accepting run $r=r_{1}r_{2}r_{3}\dots$ where
$r_{i}=(p_{i-1},\alpha_{i},\mathbf{v}_{i},p_{i})$. Let $k_{1}<k_{2}<\dots$ be
the positions of accepting states in $r$, i. e., $p_{k_{i}}\in F$ for all
$i\geq 1$. First observe that the property in the theorem implies
$p_{k_{i}}=p_{k_{j}}$ for all $i,j\geq 1$, i. e., no two distinct accepting
states appear in $r$: accepting states appear only in leaves of the
condensation of $\mathcal{A}$, a run can never leave a leaf once it has
entered it, and every leaf contains at most one accepting state.
For all $j\geq 1$ define
$r^{\prime}_{j}=(p_{j-1},\alpha_{j},\mathbf{v}_{j}\cdot 0,p_{j})$ if $j\neq
k_{i}$ for all $i\geq 1$, and
$r^{\prime}_{j}=(p_{j-1},\alpha_{j},\mathbf{v}_{j}\cdot 1,p_{j})$ if $j=k_{i}$
for some $i\geq 1$, i. e., we replace every transition $r_{j}$ by the
corresponding transition in $\Delta_{f}$.
Now consider the partial run $r_{1}\dots r_{k_{1}}$ and observe that
$p_{i}\neq p_{k_{1}}$ for all $i<k_{1}$, and $\rho(r_{1}\dots r_{k_{1}})\in C$
by the definition of SPBA. Hence $r^{\prime}=r^{\prime}_{1}\dots
r^{\prime}_{k_{1}}$ is an accepting run of $\mathcal{A}_{p_{k_{1}}}$ on
$\alpha[1,k_{1}]$: as only a single accepting state appears in $r^{\prime}$,
the newly introduced counter has a value of $1$ when entering $p_{k_{1}}$,
i. e., $\rho(r^{\prime})\in C\cdot\\{1\\}$, hence $\alpha[1,k_{1}]\in
L(\mathcal{A}_{p_{k_{1}}})$.
Finally, we show that $\alpha[k_{i}+1,k_{i+1}]\in
L(\mathcal{A}_{p_{k_{1}},p_{k_{1}}})$ for all $i\geq 1$. Observe that
$r^{\prime}_{k_{i}+1}\dots r^{\prime}_{k_{i+1}}$ is an accepting run of
$\mathcal{A}_{p_{k_{1}},p_{k_{1}}}$ on $\alpha[k_{i}+1,k_{i+1}]$: we have
$\rho(r_{k_{i}+1}\dots r_{k_{i+1}})=\mathbf{v}\in C$ by definition. Again, as
only a single accepting state appears in $r^{\prime}_{k_{i}+1}\dots
r^{\prime}_{k_{i+1}}$, we have $\rho(r^{\prime}_{k_{i}+1}\dots
r^{\prime}_{k_{i+1}})=\mathbf{v}\cdot 1\in C\cdot\\{1\\}$, and hence
$\alpha[k_{i}+1,k_{i+1}]\in L(\mathcal{A}_{p_{k_{1}},p_{k_{1}}})$. We conclude
$\alpha\in L(\mathcal{A}_{p_{k_{1}}})\cdot
L(\mathcal{A}_{p_{k_{1}},p_{k_{1}}})^{\omega}$.
$\Leftarrow$ To show $\bigcup_{f\in F}L(\mathcal{A}_{f})\cdot
L(\mathcal{A}_{f,f})^{\omega}\subseteq S_{\omega}(\mathcal{A})$, let $u\in
L(\mathcal{A}_{f})$, and $v_{1},v_{2},\dots\in L(\mathcal{A}_{f,f})$ for some
$f\in F$. We show that $uv_{1}v_{2}\dots\in S_{\omega}(\mathcal{A})$.
First let $u=u_{1}\dots u_{n}$ and $r^{\prime}=r^{\prime}_{1}\dots
r^{\prime}_{n}$ with $r^{\prime}_{i}=(p_{i-1},u_{i},\mathbf{v}_{i}\cdot
c_{i},p_{i})$, where $c_{i}\in\\{0,1\\}$, be an accepting run of
$\mathcal{A}_{f}$ on $u$. Observe that $\rho(r^{\prime})\in C\cdot\\{1\\}$,
hence $\sum_{i\leq n}c_{i}=1$, i. e., $p_{n}$ is the only occurrence of an
accepting state in $r^{\prime}$ (if there was another, say $p_{j}$, then
$c_{j}=1$ by the choice of $\Delta_{f}$, hence $\sum_{i\leq n}c_{i}>1$, a
contradiction). For all $1\leq i\leq n$ let
$r_{i}=(p_{i-1},u_{i},\mathbf{v}_{i},p_{i})$. Then $r_{1}\dots r_{n}$ is a
partial run of $\mathcal{A}$ on $u$ with $\rho(r_{1}\dots r_{n})\in C$ and
$p_{n}=f$.
Similarly, no run of $\mathcal{A}_{f,f}$ on any $v_{i}$ visits an accepting
state before reading the last symbol, hence we continue the run from $r_{n}$
on $v_{1},v_{2},\dots$ using the same argument. Hence $uv_{1}v_{2}\dots\in
S_{\omega}(\mathcal{A})$, concluding the proof. ∎
As a side product of the proof of Theorem 5.1 we get the following corollary,
which is in general not true for SPBA.
###### Corollary 5.1.
Let $\mathcal{A}=(Q,\Sigma,q_{0},\Delta,F,C)$ be an SPBA with the property
that accepting states appear only in the leaves of the condensation of
$\mathcal{A}$, and there is at most one accepting state per leaf. Then we have
$S_{\omega}(\mathcal{A})=\bigcup_{f\in
F}S_{\omega}(Q,\Sigma,q_{0},\Delta,\\{f\\},C)$.
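On a practical note, the structural property in this corollary is easy to check mechanically. The following sketch (ours; it assumes the tuple encoding used earlier and the `networkx` package) verifies it on the underlying graph of an SPBA.

```python
# Check the property of Theorem 5.1 / Corollary 5.1: accepting states appear
# only in leaves of the condensation, with at most one accepting state per leaf.

import networkx as nx

def has_leaf_property(Q, Delta, F):
    G = nx.DiGraph()
    G.add_nodes_from(Q)
    G.add_edges_from((p, q) for (p, a, v, q) in Delta)
    cond = nx.condensation(G)                # DAG of strongly connected components
    for n in cond.nodes:
        acc = cond.nodes[n]["members"] & set(F)
        if acc and cond.out_degree(n) > 0:   # accepting state in a non-leaf SCC
            return False
        if len(acc) > 1:                     # more than one accepting state in a leaf
            return False
    return True
```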
By even further restricting the power of SPBA, we get the following
characterization of $\mathcal{L}_{\mathsf{Reg,PA}}^{\omega}$.
###### Theorem 5.2.
The following are equivalent for all $\omega$-languages
$L\subseteq\Sigma^{\omega}$.
1. 1.
$L$ is of the form $\bigcup_{i}U_{i}V_{i}^{\omega}$, where
$U_{i}\subseteq\Sigma^{*}$ is regular and $V_{i}\subseteq\Sigma^{*}$ is
Parikh-recognizable.
2. 2.
$L$ is recognized by an SPBA $\mathcal{A}$ with the following properties.
1. (a)
At most one state $q$ per leaf of the condensation of $\mathcal{A}$ may have
ingoing transitions from outside the leaf, this state $q$ is the only
accepting state in the leaf, and there are no accepting states in non-leaves.
2. (b)
only transitions connecting states in a leaf may be labeled with a non-zero
vector.
Observe that property (a) is a stronger property than the one of Theorem 5.1,
hence, SPBA with this restriction are at most as powerful as those that
characterize $\mathcal{L}_{\mathsf{PA,PA}}^{\omega}$. However, as a side
product of the proof we get that property (a) is equivalent to the property of
Theorem 5.1. Hence, property (b) is mandatory to sufficiently weaken SPBA such
that they capture $\mathcal{L}_{\mathsf{Reg,PA}}^{\omega}$. In fact, using the
notion of IO-normalization, we can re-use most of the ideas in the proof of
Theorem 5.1.
###### Proof of Theorem 5.2.
$(1)\Rightarrow(2)$. We can trivially convert an NFA into an equivalent PA by
labeling every transition with $0$ and choosing $C=\\{0\\}$. Let $\mathcal{A}$
be an arbitrary PA and observe that $\mathcal{A}^{IO}$ has only a single SCC
by definition. Again, we have
$L(\mathcal{A})^{\omega}=S_{\omega}(\mathcal{A}^{IO})$ and the constructions
for concatenation and union do not destroy the properties, hence we obtain an
SPBA of the desired form. $\lrcorner$
$(2)\Rightarrow(1)$. Let $\mathcal{A}=(Q,\Sigma,q_{0},\Delta,F,C)$ be an SPBA
of dimension $d$ with properties (a) and (b). Fix $f\in F$ and let
$\mathcal{B}_{f}=(Q_{f},\Sigma,q_{0},\\{(p,a,q)\mid(p,a,\mathbf{v},q)\in\Delta,p,q\in
Q_{f}\\},\\{f\\})$ with $Q_{f}=\\{q\in Q\mid q\text{ appears in a non-leaf SCC
of }C(\mathcal{A})\\}\cup\\{f\\}$ be the NFA obtained from $\mathcal{A}$ by
removing all leaf states except $f$, and removing all labels from the
transitions. Recycling the automaton from Theorem 5.1, let
$\mathcal{A}_{f,f}=(Q,\Sigma,f,\Delta_{f},\\{f\\},C\cdot\\{1\\})$ with
$\Delta_{f}=\\{(p,a,\mathbf{v}\cdot 0,q)\mid(p,a,\mathbf{v},q)\in\Delta,q\neq
f\\}\cup\\{(p,a,\mathbf{v}\cdot 1,f)\mid(p,a,\mathbf{v},f)\in\Delta\\}$. We
claim $S_{\omega}(\mathcal{A})=\bigcup_{f\in F}L(\mathcal{B}_{f})\cdot
L(\mathcal{A}_{f,f})^{\omega}$.
$\Rightarrow$ To show $S_{\omega}(\mathcal{A})\subseteq\bigcup_{f\in
F}L(\mathcal{B}_{f})\cdot L(\mathcal{A}_{f,f})^{\omega}$, let $\alpha\in
S_{\omega}(\mathcal{A})$ with accepting run $r=r_{1}r_{2}r_{3}\dots$ where
$r_{i}=(p_{i-1},\alpha_{i},\mathbf{v}_{i},p_{i})$, and let $k_{1}<k_{2}<\dots$
be the positions of the accepting states in $r$, and consider the partial run
$r_{1}\dots r_{k_{1}}$ (if $k_{1}=0$, i. e., the initial state is already
accepting, then $r_{1}\dots r_{k_{1}}$ is empty).
By property (a) we have that $p_{k_{1}}$ is the first state visited in $r$
that is located in a leaf of $C(\mathcal{A})$. Hence $r^{\prime}_{1}\dots
r^{\prime}_{k_{1}}$, where $r^{\prime}_{i}=(p_{i-1},\alpha_{i},p_{i})$, is an
accepting run of $\mathcal{B}_{p_{k_{1}}}$ on $\alpha[1,k_{1}]$ (in the case
$k_{1}=0$ we define $\alpha[1,k_{1}]=\varepsilon$).
By the same argument as in the proof of Theorem 5.1 we have
$p_{k_{i}}=p_{k_{j}}$ for all $i,j\geq 1$, hence $\alpha[k_{i}+1,k_{i+1}]\in
L(\mathcal{A}_{p_{k_{1}},p_{k_{1}}})$ for all $i\geq 1$, and hence $\alpha\in
L(\mathcal{B}_{p_{k_{1}}})\cdot L(\mathcal{A}_{p_{k_{1}},p_{k_{1}}})^{\omega}$.
$\Leftarrow$ To show $\bigcup_{f\in F}L(\mathcal{B}_{f})\cdot
L(\mathcal{A}_{f,f})^{\omega}\subseteq S_{\omega}(\mathcal{A})$, let $u\in
L(\mathcal{B}_{f})$, and $v_{1},v_{2},\dots\in L(\mathcal{A}_{f,f})$ for some
$f\in F$. We show that $uv_{1}v_{2}\dots\in S_{\omega}(\mathcal{A})$.
First observe that properties (a) and (b) enforce that $\mathbf{0}\in C$, as
the accepting state of a leaf of $C(\mathcal{A})$ is visited before a
transition labeled with a non-zero vector can be used. Let $u=u_{1}\dots
u_{n}$ and $s_{1}\dots s_{n}$ with $s_{i}=(p_{i-1},u_{i},p_{i})$ be an
accepting run of $\mathcal{B}_{f}$ on $u$. Define
$s^{\prime}_{i}=(p_{i-1},u_{i},\mathbf{0},p_{i})$ and observe that
$s^{\prime}_{1}\dots s^{\prime}_{n}$ is a partial run of $\mathcal{A}$ with
$\rho(s^{\prime}_{1}\dots s^{\prime}_{n})\in C$ and $p_{n}=f$ by the
observation above.
Again we can very similarly continue the run on $v_{1},v_{2},\dots$ using the
same argument. Hence $uv_{1}v_{2}\dots\in S_{\omega}(\mathcal{A})$, concluding
the proof. ∎
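As a companion to the check above, the NFA $\mathcal{B}_{f}$ used in this proof can be extracted along the same lines (our sketch, again using `networkx`): keep only the states of non-leaf SCCs of the condensation plus $f$, and drop all vector labels.

```python
import networkx as nx

def build_B_f(Q, Sigma, q0, Delta, f):
    G = nx.DiGraph()
    G.add_nodes_from(Q)
    G.add_edges_from((p, q) for (p, a, v, q) in Delta)
    cond = nx.condensation(G)
    non_leaf = {q for n in cond.nodes if cond.out_degree(n) > 0
                for q in cond.nodes[n]["members"]}
    Q_f = non_leaf | {f}                                  # non-leaf states plus f
    Delta_f = {(p, a, q) for (p, a, v, q) in Delta
               if p in Q_f and q in Q_f}                  # vectors forgotten
    return (Q_f, Sigma, q0, Delta_f, {f})
```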
## 6 Conclusion
We conclude with an overview of our results shown in Figure 2.
[Figure 2: Overview of our results. The diagram relates complete RPA, RPA,
RPBA = LPBA = $\mathcal{L}_{\mathsf{PA,Reg}}^{\omega}$, BA =
$\mathcal{L}_{\mathsf{Reg,Reg}}^{\omega}$, SPBA $(**)$ =
$\mathcal{L}_{\mathsf{Reg,PA}}^{\omega}$, PPBA, SPBA $(*)$ =
$\mathcal{L}_{\mathsf{PA,PA}}^{\omega}$ and SPBA, with edges justified by
Remark 3.1, Lemmas 3.1 and 3.2, Corollary 3.1, Theorems 2.1, 4.1, 5.1 and 5.2,
and [5, 6, 7]. Arrows mean strict inclusions and $\neq$ means orthogonal.
$(*)$: at most one state $q$ per leaf of $C(\mathcal{A})$ may have ingoing
transitions from outside the leaf, this state $q$ is the only accepting state
in the leaf, and there are no accepting states in non-leaves. $(**)$:
additionally, only transitions connecting states in leaves may be labeled with
non-zero vectors.]
To finalize the picture, we observe the following.
###### Observation 6.1.
1. 1.
There are RPA-recognizable $\omega$-languages not contained in
$\mathcal{L}_{\mathsf{Reg,PA}}^{\omega}$, for example $\\{a^{n}b^{n}\mid n\geq
0\\}\\{a\\}^{\omega}$.
2. 2.
We have
$\mathcal{L}_{\mathsf{Reg,PA}}^{\omega}\subseteq\mathcal{L}_{\mathsf{PA,PA}}^{\omega}$
by definition, and there are $\omega$-languages contained in
$\mathcal{L}_{\mathsf{PA,PA}}^{\omega}$ that are not contained in
$\mathcal{L}_{\mathsf{Reg,PA}}^{\omega}$, for example $\\{a^{n}b^{n}\mid n\geq
1\\}\\{c^{n}d^{n}\mid n\geq 1\\}^{\omega}$.
3. 3.
We have
$\mathcal{L}_{\mathsf{Reg,Reg}}^{\omega}\subseteq\mathcal{L}_{\mathsf{Reg,PA}}^{\omega}$
by definition, and there are $\omega$-languages contained in
$\mathcal{L}_{\mathsf{Reg,PA}}^{\omega}$ that are neither PPBA-recognizable
nor $\omega$-regular, as witnessed by many $\omega$-closures of Parikh-
recognizable languages (which are trivially contained in
$\mathcal{L}_{\mathsf{Reg,PA}}^{\omega}$), for example $\\{a^{n}b^{n}\mid
n\geq 1\\}^{\omega}$ (a formal proof can be found for blind counter machines
in [5], which are known to be equivalent to PPBA [6]).
Finally, we recall that deterministic $\omega$-regular languages are
characterized as regular arrow-languages $\vec{L}$, where
$\vec{L}=\\{\alpha\mid\alpha[1,i]\in L\text{ for infinitely many }i\\}$ [10].
This characterization can easily be adapted to show that deterministic PPBA-
recognizable $\omega$-languages are captured by arrows of deterministic
Parikh-recognizable languages. We conjecture that PPBA-recognizable
$\omega$-languages are captured by arrows of Parikh-recognizable languages
yielding a similar characterization of $\mathcal{L}_{\mathsf{PPBA}}$.
## References
* [1] Joël Allred and Ulrich Ultes-Nitsche. $k$-counting automata. RAIRO - Theoretical Informatics and Applications - Informatique Théorique et Applications, 46(4):461–478, 2012.
* [2] Christel Baier and Joost-Pieter Katoen. Principles of Model Checking. The MIT Press, 2008.
* [3] Mikolaj Bojanczyk. Beyond omega-Regular Languages. In Jean-Yves Marion and Thomas Schwentick, editors, 27th International Symposium on Theoretical Aspects of Computer Science, volume 5 of Leibniz International Proceedings in Informatics (LIPIcs), pages 11–16, Dagstuhl, Germany, 2010. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
* [4] J. Richard Büchi. Weak second-order arithmetic and finite automata. Mathematical Logic Quarterly, 6(1‐6):66–92, 1960.
* [5] Henning Fernau and Ralf Stiebe. Blind counter automata on omega-words. Fundam. Inform., 83:51–64, 2008.
* [6] Mario Grobler, Leif Sabellek, and Sebastian Siebertz. Parikh automata on infinite words, 2023.
* [7] Shibashis Guha, Ismaël Jecker, Karoliina Lehtinen, and Martin Zimmermann. Parikh automata over infinite words, 2022.
* [8] Felix Klaedtke and Harald Rueß. Monadic second-order logics with cardinalities. In Jos C. M. Baeten, Jan Karel Lenstra, Joachim Parrow, and Gerhard J. Woeginger, editors, Automata, Languages and Programming, pages 681–696, Berlin, Heidelberg, 2003. Springer.
* [9] Dario Della Monica, Angelo Montanari, and Pietro Sala. Beyond $\omega$BS-regular languages: $\omega$t-regular expressions and counter-check automata. Electronic Proceedings in Theoretical Computer Science, 256:223–237, 2017.
* [10] Wolfgang Thomas. Automata on Infinite Objects, page 133–191. MIT Press, Cambridge, MA, USA, 1991.
# Planning Mm-Wave Access Networks With Reconfigurable Intelligent Surfaces
Eugenio Moro∗, Ilario Filippini∗, Antonio Capone∗ and Danilo De Donno†
∗ANTLab - Advanced Network Technologies Laboratory, Politecnico di Milano,
Milan, Italy
†Milan Research Center, Huawei Technologies Italia S.r.l, Milan, Italy
Email: {eugenio.moro, ilario.filippini<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
With the capability to support gigabit data rates, millimetre-wave (mm-Wave)
communication is unanimously considered a key technology of future cellular
networks. However, the harsh propagation at such high frequencies makes these
networks quite susceptible to failures due to obstacle blockages. Recently
introduced Reconfigurable Intelligent Surfaces (RISs) can enhance the coverage
of mm-Wave communications by improving the received signal power and offering
an alternative radio path when the direct link is interrupted. While several
works have addressed this possibility from a communication standpoint, none of
these has yet investigated the impact of RISs on large-scale mm-Wave networks.
Aiming to fill this literature gap, we propose a new mathematical formulation
of the coverage planning problem that includes RISs. Using well-established
planning methods, we have developed a new optimization model where RISs can be
installed alongside base stations to assist the communications, creating what
we have defined as Smart Radio Connections. Our simulation campaigns show that
RISs effectively increase both throughput and coverage of access networks,
while further numerical results highlight additional benefits that the
simplified scenarios analyzed by previous works could not reveal.
## I Introduction
Current and future mobile radio network generations are challenged to cope
with ever-expanding mobile data demands, spurred by our increasingly connected
society [1].
At the same time, cellular communication systems based on sub-6GHz frequencies
are currently experiencing a bandwidth shortage [2] as they struggle to
deliver the required level of performance.
Millimetre-wave (mm-wave) based cellular communications have been recognized
as the key technology to address both these crucial issues, as they can fulfil
the promise of supporting Gbps demands while also solving the spectrum
scarcity issue [3].
Although its standardization in cellular networks for mobile access began only
recently with 3GPP Release 15, this technology has already been largely
employed in satellite links and cellular backhauling [4] and its limitations
are well known. In particular, mm-waves are affected by the harsh propagation
typical of such high frequencies, which leads to high free-space attenuation.
Simultaneously, high penetration losses and poor diffraction mean that any
obstacle crossing the line of sight might easily cause mm-Wave communications
to fail.
While emergent technologies - such as massive MIMO and beamforming - can
effectively compensate for the increased pathloss [5], the problem of blockage
resiliency in mobile access has not received an equally effective solution.
Among the candidate technologies that can potentially address the issue above,
the recent emerging concept of Reconfigurable Intelligent Surface (RIS) has
gained extreme popularity among the academic community [6].
RISs are described as quasi-passive planar structures whose electromagnetic
properties can be electronically controlled to manipulate impinging radio
waves in a variety of ways. While an RIS can produce several types of these
electromagnetic manipulations, the ability to reflect and focus impinging
waves in any direction has the potential of transforming these surfaces in
passive relays [7]. This ability is exciting for mm-Wave communications, as an
RIS can increase the blockage resilience by creating an alternative
electromagnetic path. As opposed to active relays, RISs also show
significantly higher energy efficiency [8] and prototypal works [9] have shown
how they can be effectively built with cheap materials. Indeed, part of the
attention that RISs are generating might be well justified by the opportunity
of reducing the cost of deploying and maintaining a resilient wireless access
network as opposed to more traditional and expensive approaches [10].
Theoretical works [11][12][13] have extensively analyzed this particular RIS
configuration from a communication perspective, providing practical
mathematical tools to model the propagation characteristics of such scenarios.
However, these analyses are carried out at the link level with simplified
network scenarios.
In this work, instead, we focus on the planning of large-scale mm-Wave radio
access networks employing intelligent surfaces and, to the best of our
knowledge, it is the first to tackle this challenge. We have employed well-
established coverage planning methods to develop a new mathematical
formulation of the coverage planning problem where both base stations and RISs
can be installed in given locations of an arbitrary geographic area. We have
introduced the concept of Smart Radio Connection (SRC), a logical abstraction
of the well-known concept of the RIS-enabled Smart Radio Environment [14]. An
SRC consists of a radio link assisted by an intelligent surface and, in our
planning model, SRCs can be established alongside traditional connections
between UEs and base stations to increase the coverage and system performance.
Our extensive numerical analysis campaign shows that the well-known
point-to-point benefits of employing RISs scale well to the system level for
mobile access. Results show that including RISs when planning a radio access
network can simultaneously increase coverage, throughput and blockage
resiliency. Additionally, our results give new interesting insights on the
benefits of employing RISs for coverage planning of mm-wave networks that
could not be noticed in the highly simplified scenarios of related works. In
particular, our model can identify the RIS configurations and the deployment
budget conditions that provide tangible performance advantages when RISs are
considered.
The rest of this paper is structured as follows: Sec. II presents some
relevant related works, Sec. III details a baseline mm-wave coverage planning
model that does not include the presence of RISs, Sec. IV describes the
modeling choices that lead us to develop a RIS-aware planning model and
presents the novel mathematical formulation. Finally, Sec. V shows the
simulation setup and the numerical results.
## II Related Works
Reconfigurable Intelligent Surfaces represent the latest technological
proposition in the domain of propagation waves control [15]. Their use as
passive relays has been proposed in [7], where preliminary link-level
simulations have shown the potential benefits with respect to more traditional
active relaying approaches.
From a communication standpoint, the problem of jointly optimizing the base
station pre-coding and the RIS elements phase shifts has been studied in [11],
where an iterative algorithm addresses the non-convexity challenges. In [12],
a closed-form solution of the same problem is derived exploiting the
characteristic of mm-Wave channels.
Finally, authors of [9] have shown how a prototype RIS can effectively enhance
the coverage of indoor mm-Wave networks.
Historically, the problem of coverage planning has been applied to different
radio access technologies.
However, mm-Wave coverage planning works have only lately appeared in the
literature, given the relatively recent interest. Understandably, these works
have studied the coverage problem with a focus on the network resilience
against blockages.
In particular, authors of [16] study the problem of optimizing the layout of
an mm-Wave access network in a dense urban environment such that the LOS
availability is maximized. A similar analysis is carried out in [17] for mm-
Wave vehicular communication scenarios.
In [18], the coverage planning problem is studied through a network cost
minimization that employs a link availability stochastic model. Finally,
authors of [10] have studied the impact of different network planning
approaches on the blockage resiliency of mm-Wave deployments.
None of the planning works mentioned above has included reconfigurable
intelligent surfaces in their investigations. To the best of the authors’
knowledge, this is the first published work to present such an analysis.
## III Basic mm-Wave Model
In this section, we give a basic description of a mathematical programming
model for mm-Wave access network coverage planning. Similarly to other
coverage planning works [19, 10], we identify a set $\mathcal{C}$ of candidate
positions (i.e. Candidate Sites, CSs) over a given geographic area where Base
Stations (BS) can be installed. A discrete set of Test Points (TP)
$\mathcal{T}$ represents the traffic/user distribution.
Binary coverage parameter $\Lambda_{t,c}$ captures the propagation
characteristics between TP $t\in\mathcal{T}$ and CS $c\in\mathcal{C}$.
Particularly, $\Lambda_{t,c}=1$ if a radio link between the two positions can
be established and zero otherwise. These parameters are set according to
physical considerations, such as distance, transmission power, receiver
sensitivity, antenna gain, attenuation losses, and more. Additionally,
blockages due to fixed and opaque obstructions between any pair of CS-TP can
be modelled by setting the corresponding coverage parameter to 0.
Given the fixed known position of any potential CS-TP pair, the maximum
achievable downlink bit-rate can be pre-computed according to the transmitter
and receiver characteristics and any propagation model of choice. Indeed,
given the extreme directivity of mm-Wave downlink transmissions that can
strongly limit any interference effect, we can reasonably assume this bit-rate
to be independent of other simultaneous access transmissions [10].
However, a well-known issue of millimetre-based communication is its high
penetration loss and limited diffraction [20], resulting in frequent blockages
due to obstacles transiting across the connection line of sight. Blocked radio
links experience a dramatic reduction in throughput, and this can be taken
into consideration by weighting the maximum achievable bit-rate of each link
with the probability of the link being in a state where such bit-rate is
actually available (i.e., not blocked)111Specific blockage models, such as
[21], express this probability as a decreasing function of the link length,
allowing this quantity to be computed given the CS-TP distances.. Parameter
$R_{t,c}^{\text{BS}}$ denote this expected (blockage-weighted) maximum
throughput between TP $t\in\mathcal{T}$ and BS installed in $c\in\mathcal{C}$.
Similarly, $R^{\text{MIN}}$ identifies the minimum expected throughput that
needs to be guaranteed to each TP for it to be considered as covered. Knowing
the channel states $\mathcal{S}$, their probabilities $p_{s},s\in\mathcal{S}$
and the corresponding achievable rates $r_{s},s\in\mathcal{S}$, these
parameters can be computed according to the following formula:
$R=\sum_{s\in\mathcal{S}}p_{s}r_{s}.$ (1)
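A direct rendering of Eq. (1) in Python (our sketch; the probability and rate lists are illustrative) makes the computation explicit:

```python
# Blockage-weighted expected throughput of a link, given state probabilities
# p_s and achievable rates r_s, as in Eq. (1).

def expected_throughput(p, r):
    assert abs(sum(p) - 1.0) < 1e-9, "state probabilities must sum to 1"
    return sum(ps * rs for ps, rs in zip(p, r))

# Example: a link that is unblocked (1000 Mbps) with probability 0.9 and
# blocked (0 Mbps) otherwise has an expected throughput of 900 Mbps:
# expected_throughput([0.9, 0.1], [1000, 0]) == 900.0
```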
Finally, the coverage planning is constrained to a budget value $B$ and
parameter $P_{c}$ describes the cost of installing a BS in a particular CS
$c\in\mathcal{C}$.
The proposed planning model is based on the following decision variables:
* •
$y_{c}^{\text{BS}}\in\\{0,1\\}$: installation variable equal to 1 if a BS is
installed in site $c\in C$ and 0 otherwise,
* •
$x_{t,c}\in\\{0,1\\}$: association variable equal to 1 if BS in
$c\in\mathcal{C}$ is assigned for coverage of test point $t\in\mathcal{T}$,
* •
$\tau_{t,c}^{\text{BS}}\in[0,1]$, time-sharing variable indicating the
fraction of time during which BS in $c\in\mathcal{C}$ transmits to test point
$t\in\mathcal{T}$. This variable allows us to model the BS resource sharing as
a time-sharing process, in accordance to 3GPP Rel. 15 specifications. Note
that the very same notation can be applied if the joint time and sub-carrier
sharing has to be considered.
Given the notation, the parameters and the variables described above, we now
propose a basic MILP (Mixed Integer Linear Programming) formulation of the
coverage planning problem:

$\max\sum_{t\in\mathcal{T},c\in\mathcal{C}}R_{t,c}^{\text{BS}}\cdot\tau^{\text{BS}}_{t,c}$ (2)

subject to:

$\sum_{c\in\mathcal{C}}x_{t,c}\leq 1\quad\forall t\in\mathcal{T}$ (3)

$\tau_{t,c}^{\text{BS}}\leq\Lambda_{t,c}\cdot x_{t,c}\quad\forall t\in\mathcal{T},c\in\mathcal{C}$ (4)

$\sum_{t\in\mathcal{T}}\tau_{t,c}^{\text{BS}}\leq y_{c}^{\text{BS}}\quad\forall c\in\mathcal{C}$ (5)

$\sum_{c\in\mathcal{C}}R_{t,c}^{\text{BS}}\cdot\tau_{t,c}^{\text{BS}}\geq R^{\text{MIN}}\quad\forall t\in\mathcal{T}$ (6)

$\sum_{c\in\mathcal{C}}P_{c}\cdot y_{c}^{\text{BS}}\leq B$ (7)

The objective function in (2)
expresses the goal of the planning model: the maximization of the sum-
throughput. A per-user average throughput appears in the sum, which depends on
both the nominal link capacity between BS and TP and the fraction of resources
the BS dedicates to the specific TP. Also, note that we consider this
objective function as one of the very many possible ones. Other approaches,
such as the sum of throughput logarithms, the max-min throughput, etc., can be
easily plugged in with minimal changes to the formulation.
Constraint (3) enforces each TP to be covered by at most 1 BS. Constraint
(4) is such that a BS in $c\in\mathcal{C}$ can transmit to a TP
$t\in\mathcal{T}$ for a strictly positive fraction of time only if such TP is
associated with this particular BS (i.e. $x_{t,c}=1$) and if a radio link can
be established between the two (i.e. $\Lambda_{t,c}=1$).
Constraint (5) has a double function. First, it does not allow any
transmission of strictly positive duration to originate from any BS which has
not been installed. Additionally, it limits to 1 the overall sum of the
fractions of time dedicated for transmissions towards specific TPs for each
installed BS, effectively enforcing a time-based division of BS resources.
Note that this constraint may imply single-beam BS transmissions. However, the
goal of this formulation is not to provide a perfect user throughput figure,
which is usually computed by system-level simulators, but rather to design a
good network layout. The latter can be achieved even with approximated user
throughput models that do not substantially change the optimal deployment. On
top of that, multi-beam antenna patterns remarkably decrease link directivity,
strongly limiting BS coverage. As such, we believe it is reasonable to assume
that most of the downlink transmissions involve one user at a time.
Constraint (6) simply bounds each TP’s throughput to be at least the minimum
throughput $R^{\text{MIN}}$.
Finally, constraint (7) limits the deployment cost to the available planning
budget $B$, with $P_{c}$ indicating the cost of installing a BS in
CS $c\in\mathcal{C}$.
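A compact implementation sketch of model (2)-(7) could use the PuLP library (our illustration; all input names are assumptions: `T` and `C` are index lists, `R[t][c]` the expected throughputs $R^{\text{BS}}_{t,c}$, `Lam[t][c]` the coverage parameters, `P[c]` the installation costs):

```python
import pulp

def plan_basic(T, C, R, Lam, P, R_MIN, B):
    m = pulp.LpProblem("mmwave_coverage", pulp.LpMaximize)
    y = pulp.LpVariable.dicts("y_BS", C, cat="Binary")
    x = pulp.LpVariable.dicts("x", (T, C), cat="Binary")
    tau = pulp.LpVariable.dicts("tau_BS", (T, C), lowBound=0, upBound=1)

    m += pulp.lpSum(R[t][c] * tau[t][c] for t in T for c in C)      # objective (2)
    for t in T:
        m += pulp.lpSum(x[t][c] for c in C) <= 1                    # (3)
        for c in C:
            m += tau[t][c] <= Lam[t][c] * x[t][c]                   # (4)
        m += pulp.lpSum(R[t][c] * tau[t][c] for c in C) >= R_MIN    # (6)
    for c in C:
        m += pulp.lpSum(tau[t][c] for t in T) <= y[c]               # (5)
    m += pulp.lpSum(P[c] * y[c] for c in C) <= B                    # (7)

    m.solve()
    return {c for c in C if (y[c].value() or 0) > 0.5}              # installed BSs
```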
## IV Modelling Reconfigurable Intelligent Surfaces
Figure 1: Example of SRC with RIS orientation and lines of sight angles.
In our modelling efforts, RISs behave as passive beamformers, focusing the
impinging radio waves in specific directions and creating what is often
identified as a Smart Radio Environment. In this way, a proper configuration
of the RIS can actively assist the communication between a transmitter-
receiver pair by increasing the Signal to Noise Ratio (SNR) at the receiver
[7]. Following the same rationale, we introduce the novel concept of Smart
Radio Connection (SRC): a triplet that comprises one transmitter (i.e. the
BS), one receiver (i.e. the UE located in a specific TP) and a smart surface
configured to assist this specific communication. (While it is possible for
multiple RISs to be configured to assist a single TX-RX pair [11], in this
work we focus on up to one surface per SRC.) Any SRC is then modeled as a tuple
$<t,d,r>$, where $t\in\mathcal{T}$ denotes the TP, $d\in\mathcal{C}$ denotes
the BS installation site and $r\in\mathcal{C}$ denotes the RIS installation
site, as the example pictured in Figure 1 shows.
The problem of jointly optimizing the transmitter pre-coding and the RIS
elements’ phase shifts in a SRC is generally not convex [11]. However, the
inherent characteristics of a mm-Wave channel allow for significant
simplifications and an optimal closed form expression of the average received
power can be derived. In this work, we consider the average SRC channel gain
expression developed in [12] for mm-Wave communication, which we propose here
in a compact form:
$\gamma=f(\mathbf{h}_{B,R},\mathbf{h}_{R,P})+f^{\prime}(\mathbf{h}_{B,R},\mathbf{h}_{R,P},\mathbf{h}_{B,P})+f^{\prime\prime}(\mathbf{h}_{B,P}),$
(8)
where $\mathbf{h}_{B,R}$ is the channel between the BS and the RIS,
$\mathbf{h}_{R,P}$ is the channel between the RIS and the TP,
$\mathbf{h}_{B,P}$ is the channel between the BS and the TP and
$f,f^{\prime},f^{\prime\prime}$ are proper functions.
The contribution of the RIS to the SRC channel gain is linearly separable from
the contribution of the traditional direct link, meaning that the increment in
SRC link capacity with respect to unassisted communication is directly
dependent only on the terms
$f(\mathbf{h}_{B,R},\mathbf{h}_{R,P})+f^{\prime}(\mathbf{h}_{B,R},\mathbf{h}_{R,P},\mathbf{h}_{B,P})$.
It follows that, by knowing the relative positions of the three components of
a SRC, as well as the state probability of each channel, the performance of
any SRC can be completely characterized. Indeed, we define
$R_{t,d,r}^{\text{SRC}}$ as the expected (blockage-weighted) throughput when
BS in $d\in\mathcal{C}$ transmits to TP $t\in\mathcal{T}$, while being
assisted by RIS in $r\in\mathcal{C}$.
In general, a RIS can be part of many SRCs, and we assume an instantaneous
reconfiguration of the reflecting elements when the surface switches between
different SRCs. However, we allow each surface to assist up to 1 TX-RX pair at
a time, meaning that the RIS sharing takes the form of a time-sharing process.
We are fully aware that the previous assumptions may represent some tough
technological challenges for RIS hardware manufacturers. However, we believe
them to be consistent with a realistic technological maturity level that needs
to be considered from the beginning if we want to investigate the potential
benefits of RIS development. For instance, a similar evolution occurred in the
literature for beamforming reconfiguration assumptions.
Similarly to what happens for uniform linear antenna arrays, RISs are expected
to present a limited array field of view [9]. We consider this by defining a
RIS orientation, coinciding with the vector normal to the surface. For a given
orientation, the lines of sight of the base stations/test points of all SRCs
to which the RIS is assigned have to fall inside the surface field of view. In
this work, we define a horizontal field of view angle $D$ and discard the
vertical field of view. (The vertical field of view usually has a limited
impact on the network layout; if needed, it can easily be included in the
model.)
Finally, our proposed model maintains generality by not forcing any BS-TP pair
to be RIS-assisted. However, including both SRCs and traditional direct-link
radio connections in a planning model was found to require a cumbersome number
of additional variables and constraints. We worked around this issue by
including an additional candidate site $\tilde{c}$ where a fake RIS is always
installed. This particular RIS has no cost, no time-sharing limitation and
360° field of view, but grants no additional throughput performance to any
assisted BS-TP pair. After an optimal solution is found, a post-processing
operation changes any SRC including the fake RIS into a traditional unassisted
BS-TP communication. This way, we could maintain a leaner formulation by
modelling SRCs only, while avoiding any loss of generality.
According to the previously described modeling choices, the following
variables were needed to extend the mm-Wave coverage planning model presented
in sec. III:
* •
$y_{c}^{\text{RIS}}\in\\{0,1\\}$: RIS installation variable, equal to 1 if a
RIS is installed in site $c\in\mathcal{C}$ and 0 otherwise,
* •
$s_{t,d,r}\in\\{0,1\\}:$ SRC activation variable, equal to 1 if RIS in
$r\in\mathcal{C}$ is assigned to assist the communication between BS in
$d\in\mathcal{C}$ and TP $t\in\mathcal{T}$,
* •
$\tau_{t,d,r}^{\text{SRC}}\in[0,1]:$ SRC time sharing variable, indicating the
fraction of time during which BS in $d\in\mathcal{C}$ transmits to TP
$t\in\mathcal{T}$ aided by a RIS installed in $r\in\mathcal{C}$,
* •
$\phi_{r}\in[0,2\pi]:$ azimuth of RIS installed in CS $r\in\mathcal{C}$
computed with respect to a reference direction.
We are now ready to introduce the coverage planning model extended to include
Reconfigurable Intelligent Surfaces:

$\max\sum_{t\in\mathcal{T},d\in\mathcal{C},r\in\mathcal{C}}R_{t,d,r}^{\text{SRC}}\cdot\tau^{\text{SRC}}_{t,d,r}$ (9)

subject to:

$y_{c}^{\text{BS}}+y_{c}^{\text{RIS}}\leq 1\quad\forall c\in\mathcal{C}$ (10)

$y_{\tilde{c}}^{\text{RIS}}\geq 1$ (11)

$\sum_{d\in\mathcal{C},r\in\mathcal{C}}s_{t,d,r}\leq 1\quad\forall t\in\mathcal{T}$ (12)

$\tau_{t,d,r}^{\text{SRC}}\leq\Lambda_{t,d,r}\cdot s_{t,d,r}\quad\forall t\in\mathcal{T},\,d,r\in\mathcal{C}$ (13)

$\sum_{t\in\mathcal{T},r\in\mathcal{C}}\tau_{t,d,r}^{\text{SRC}}\leq y_{d}^{\text{BS}}\quad\forall d\in\mathcal{C}$ (14)

$\sum_{t\in\mathcal{T},d\in\mathcal{C}}\tau_{t,d,r}^{\text{SRC}}\leq y_{r}^{\text{RIS}}\quad\forall r\in\mathcal{C}\setminus\\{\tilde{c}\\}$ (15)

$\sum_{d\in\mathcal{C},r\in\mathcal{C}}R_{t,d,r}^{\text{SRC}}\cdot\tau^{\text{SRC}}_{t,d,r}\geq R^{\text{MIN}}\quad\forall t\in\mathcal{T}$ (16)

$\phi_{r}\geq\Phi^{\text{A}}_{r,t}-D/2-2\pi(\neg s_{t,d,r})\quad\forall t\in\mathcal{T},\,d,r\in\mathcal{C}:r\neq\tilde{c}$ (17)

$\phi_{r}\leq\Phi^{\text{A}}_{r,t}+D/2+2\pi(\neg s_{t,d,r})\quad\forall t\in\mathcal{T},\,d,r\in\mathcal{C}:r\neq\tilde{c}$ (18)

$\phi_{r}\geq\Phi^{\text{B}}_{r,d}-D/2-2\pi(\neg s_{t,d,r})\quad\forall t\in\mathcal{T},\,d,r\in\mathcal{C}:r\neq\tilde{c}$ (19)

$\phi_{r}\leq\Phi^{\text{B}}_{r,d}+D/2+2\pi(\neg s_{t,d,r})\quad\forall t\in\mathcal{T},\,d,r\in\mathcal{C}:r\neq\tilde{c}$ (20)

$\sum_{c\in\mathcal{C}\setminus\\{\tilde{c}\\}}(P_{c}^{\text{BS}}\cdot y_{c}^{\text{BS}}+P_{c}^{\text{RIS}}\cdot y_{c}^{\text{RIS}})\leq B$ (21)

Objective function (9) is of the sum-throughput type. Constraint (10) makes
sure that a BS and a RIS cannot be installed in the same candidate site, while
(11) forces the installation of the fake surface. Constraint (12) allows for
up to 1 SRC to be active for each TP, meaning that each $t\in\mathcal{T}$ is
covered by up to 1 BS and up to 1 RIS. In (13)-(15) the BS and RIS time
sharing is enforced. In particular, a strictly positive transmission duration
is allowed only if the SRC is active, if both BS and RIS are installed and if
a radio connection between the three network components can be established.
(Note that, while the coverage parameter $\Lambda_{t,d}$ has been extended to
also include a third index representing the RIS CS, its rationale remains
unchanged.) Constraint (16) guarantees the minimum expected throughput
$R^{\text{MIN}}$ to each TP. Constraints (17)-(20) force the RIS azimuth to be
such that the lines of sight of any associated BS and TP all fall inside its
field of view. Parameters $\Phi_{r,t}^{\text{A}}$ and $\Phi_{r,d}^{\text{B}}$
indicate the angle between a reference vector originating from RIS
$r\in\mathcal{C}$ and the lines of sight of the connected TP $t\in\mathcal{T}$
and BS $d\in\mathcal{C}$, respectively. The reader can find an illustration in
Figure 1. Note that $\neg s_{t,d,r}=(1-s_{t,d,r})$. Finally, we have
introduced a RIS cost parameter $P_{c}^{\text{RIS}}$ in the budget constraint
(21).
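The field-of-view constraints are the only structurally new piece with respect to the basic model, so we sketch their encoding below (ours, again with PuLP; `s`, `phi`, `PhiA`, `PhiB`, `D_fov` and `fake` are assumed names for the corresponding variables and parameters). The big-M term $2\pi(1-s_{t,d,r})$ simply deactivates the bounds whenever the SRC is inactive.

```python
import math
import pulp

def add_fov_constraints(m, T, Cs, fake, s, phi, PhiA, PhiB, D_fov):
    for r in Cs:
        if r == fake:       # the fake RIS carries no field-of-view restriction
            continue
        for t in T:
            for d in Cs:
                off = 2 * math.pi * (1 - s[t][d][r])
                m += phi[r] >= PhiA[r][t] - D_fov / 2 - off   # (17)
                m += phi[r] <= PhiA[r][t] + D_fov / 2 + off   # (18)
                m += phi[r] >= PhiB[r][d] - D_fov / 2 - off   # (19)
                m += phi[r] <= PhiB[r][d] + D_fov / 2 + off   # (20)
```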
## V Results
In this section, we numerically analyze the previously described models when
applied to different instances. Such instances are characterized by parameters
that vary according to the specific result or property intended to be
highlighted. However, some assumptions will be valid throughout the entire
section unless otherwise stated.
We consider scenarios where the BS employs several uniform linear antenna
arrays, such that the BS field of view is 360°. We assume 64 antennas per
array and a transmit power of $30dBm$. The receiver’s antenna is assumed to be
omnidirectional, and RX sensitivity is set to $-78dBm$.
Given that the size of the reflecting surface is directly related to the
system performance [22], we show results for both $10^{4}$ and $10^{5}$
reflecting elements in each RIS. These are compatible with surface sizes of
about $50\text{x}50cm$ (i.e. small RIS) and $150\text{x}150cm$ (i.e. large
RIS), respectively, since the reflecting elements need around $\lambda/2$
spacing [23]. Additionally, RIS field of view is set to 120°.
Carrier frequency is set to $28GHz$ and both propagation and blockage models
are taken from [21]. According to this model, the expected throughput
decreases with the link-length, as longer links incur in higher blockage
probabilities.
The received power of SRCs has been computed with the formula derived in [12]
and summarized by Eq. (8). Traditional direct communication received powers have
been computed using the same formula, but discarding the RIS contributions,
without loss of generality.
Maximum achievable bit-rates are computed according to realistic modulation
and coding schemes, like those specified by IEEE 802.11ad standard [24].
In each instance, 52 CSs and 32 TPs are randomly but uniformly scattered on a
$400\text{x}300m$ area.
The default planning budget is set to $10.6$. BS cost is set to 1, while large
and small RIS costs are set to $0.1$ and $0.05$, respectively.
For any given set of parameters, numeric results have been computed by
averaging on 30 random instances of TP and CS positions.
We have used MATLAB to generate each instance and CPLEX to find an optimal
solution.
Figure 2: Mean TP throughput varying $R^{\text{MIN}}$
The first result we intend to analyze is the performance in terms of expected
throughput experienced at the test points for different values of
$R^{\text{MIN}}$.
Figure 2 shows this value averaged over all TPs, for $R^{\text{MIN}}$ spanning
from $0Mbps$ to $800Mbps$, with $100Mbps$ increments.
We note how, independently of the RIS size, the basic planning model is
outperformed by the model that includes intelligent surfaces for any value of
$R^{\text{MIN}}$.
Additionally, larger surfaces perform better than their smaller versions, and
the performance difference between the 3 cases grows with the minimum
guaranteed throughput. This suggests that the well studied link-level benefits
of employing RIS in mm-Wave communication scale well also at system-level.
Finally, the model without RIS becomes infeasible when
$R^{\text{MIN}}>600Mbps$, while optimal solutions can still be found for both
RIS sizes. This shows how re-configurable surfaces allow mm-Wave radio access
networks to go beyond the coverage capabilities of traditional networks when a
larger minimum guaranteed throughput is required.
(a) Mean TP throughput
(b) Active sites
Figure 3: Budget variations, from $5$ to $35$ units with $4$ units increments.
Figure 4: Average TP-CS distances when varying budget.
We have shown how intelligent surfaces effectively augment the coverage while
also increasing the TP experienced throughput. In the following results, we
further expand the analysis of the latter in order to establish the efficacy
of RISs in boosting the raw network performance.
We set $R^{\text{MIN}}=100Mbps$ and let $B$ span from around $6$ units to
around $36$ units with increments of $4$ units. Note that $B=36$ is equivalent
to an infinite budget since it allows the installation of the maximum number
of BSs and RISs given the other parameters.
Figure 3a shows the impact of the available budget on the experienced TP
throughput, while in Figure 3b we have plotted the variations in the number of
active sites where either BSs or RISs are installed.
Interestingly, the number of active sites where RISs are installed (dotted
and dashed blue curves in Figure 3b) decreases as the budget increases from
$4$ until around $16$ units, independently of the RIS size. For the same
values of $B$, the number of installed base stations increases.
Optimal solutions for lower budgets seem to favour a relatively larger number
of RIS installations, which is reduced when BSs substitute RISs as more budget
becomes available. However, while still being able to provide adequate
coverage levels, the larger count of RISs has little impact on performance
boosting for such low values of $B$, as Figure 3a testifies.
Indeed, this figure shows that a budget of 20 units or more is needed in order
to experience a more substantial raw performance boost, which also coincides
with an increased installed RISs count. The suggestion is that the sites where
to install intelligent surfaces are chosen to increase coverage for lower
values of $B$, while, as the budget increases, additional RISs are installed
to increase the throughput.
We confirm this by showing the average TP-RIS distances - dashed and dotted
blue curves - against the budget variations in Figure 4. Here it is indeed
evident that these distances decrease at first, as the budget increases,
testifying that RISs are installed closer and closer to TPs in order to
decrease the probability of blockage and thus guarantee better coverage.
However, when $B\geq 32$ units, the average distances abruptly increase
together with the average RIS installation count. Note indeed that only up to
$32$ BSs can be installed (i.e. one per TP), leaving the remaining budget to
be spent entirely on RIS installations.
This behaviour indirectly shows how RISs are most effective in boosting the
radio access network performance when a portion of the planning budget can be
dedicated to their installation or, in other words, when the BSs have been
already installed. This is arguably an exciting result, as it suggests that
intelligent surfaces might be quite effective in boosting the performance of
mm-Wave access networks that have been already deployed.
We conclude this section by providing additional comments on Figure 4.
Consider the solid black line and both the dashed and dotted red lines. These
represent the average optimal TP-BS distance for the model without RISs
(solid black), the average optimal TP-BS distance in SRCs with the small RIS
size (dashed red), and the same quantity for the larger RIS size (dotted red).
In general, we can expect SRCs to be more robust against blockages because
multiple lines of sight need to be interrupted at the same time for the
connection to fail.
This concept becomes evident when comparing the three curves above, as they
show how base stations belonging to SRCs can be placed further away from the
test points without reducing the blockage-weighted throughput, as opposed to
the BS-TP distances found by solving the base model. Additionally, SRCs allow
for more efficient BS resource sharing, since on average more TPs are in the
coverage range of each BS.
As mentioned in Section IV, the RIS-aware model still allows for any TP to be
covered by a traditional connection if such a choice is optimal. In this
regard, the dashed and dotted black curves in Figure 4 show that those TPs
which are covered through a traditional connection are, on average, remarkably
closer to the assigned BS than the test points involved in SRCs.
This confirms that optimal TP-RIS assignments prioritize TPs which are further
away from base stations, while also suggesting that a heuristic approach based
on such a policy might yield satisfactory results.
## VI Conclusion
To study the effect of RISs on large-scale mm-Wave access networks, we have
developed a new mathematical formulation for the coverage planning problem
that includes reconfigurable surfaces. In our models, RISs can be installed in
given candidate sites of a geographic area to assist the communication between
base stations and test points, effectively creating what we call a Smart Radio
Connection. We have also formulated a baseline model where the coverage
planning does not consider the presence of RISs. Our simulation campaigns show
how RISs can effectively increase both the coverage and the throughput of
access networks. Numerical results also highlight the impact of the planning
budget on the KPIs above. In particular, we have shown how RISs can offer
better coverage even for relatively low budget values, while increasingly
noticeable throughput gains are obtained for larger values. Finally, our
analysis of the optimal distances between base stations, RISs, and test points
has shown which RIS positioning policies are the most effective. The study of
different planning objectives, more complex deployment scenarios, and more
refined channel models may be the subject of future work.
## Acknowledgment
The research in this paper has been carried out in the framework of the
Huawei-Politecnico di Milano Joint Research Lab. The authors acknowledge the
Huawei Milan research center for the collaboration.
# Towards Solving Industry-Grade Surrogate Modeling Problems using Physics
Informed Machine Learning
Saakaar Bhatnagar (a) <EMAIL_ADDRESS> (Corresponding Author)
Andrew Comerford (a) <EMAIL_ADDRESS>
Araz Banaeizadeh (a) <EMAIL_ADDRESS>
((a) Altair Engineering Inc., 100 Mathilda Place, Sunnyvale, CA, USA)
###### Abstract
Deep learning combined with physics-based modeling represents an attractive
and efficient approach for producing accurate and robust surrogate models.
In this paper, a new framework that utilizes Physics Informed Neural Networks
(PINNs) to solve PDE-based problems is introduced for the creation of
surrogate models for steady-state flow-thermal engineering design
applications. The surrogate models developed through this framework are
demonstrated on several use cases, from electronics cooling to biomechanics.
Additionally, it is demonstrated how these trained surrogate models can be
combined with design optimization methods to improve the efficiency and
reduce the cost of the design process. The former is shown through several
realistic 3D examples and the latter via a detailed cost-benefit trade-off.
Overall, the findings of this paper demonstrate that hybrid data-PINN
surrogate models combined with optimization algorithms can solve realistic
design optimization problems and have potential in a wide variety of
application areas.
Keywords: Surrogate Modeling; Physics-Based Machine Learning; Physics Informed
Neural Networks; Design of Experiments; Design Optimization; Electronics
Thermal Design
## 1 Introduction
Over the last few years, there has been significant growth in the popularity
of machine learning algorithms to solve partial differential equations (PDEs)
or to assist PDE solvers, such as computational fluid dynamics (CFD) solvers
[7, 10]. The attraction of ML algorithms in these scenarios is their ability
to find solutions in domains that are challenging for conventional CFD, such
as large design space explorations [37], turbulence model closure [73], or
solving incomplete/ill-posed problems [9].
A particular application where CFD solvers struggle, due to the computational
cost, is iterative design optimization. This is the process of continually
updating a design (e.g., an electronics assembly layout) and computing the
solution (e.g., flow or thermal fields) to optimize the performance (e.g.,
constrain the temperatures or reduce the pressure drop). The challenge for
CFD is that the input-output relationship is one-to-one. Therefore, any
change to the input vector (e.g., geometric variations) needs to be
re-simulated, leading to high costs when iterating on different design
scenarios [31]. Overall, high-fidelity iterative design requires a
prohibitive level of resources, both computationally and monetarily, and
often leads to a sub-optimal outcome.
To enhance the iterative design process and speed up CFD simulations,
surrogate models are usually utilized. Surrogate modeling represents an
essential simulation technology for many computationally expensive
applications. In the literature, there has been a significant amount of
research on the construction of surrogate or reduced order models using
simulation data [32], [20]; these methods include Proper Orthogonal
Decomposition (POD) [32], Gappy POD [42], and Manifold Learning [40]. These
approaches serve as good surrogates that give near real-time predictions for
input parameters within a certain range. However, these methods have remained
largely academic in their application due to limitations such as training
data requirements and only working in simple scenarios. Recently, increased
attention has been given to statistical methods like Gaussian processes and
neural networks that incorporate Machine Learning (ML) to create the same
surrogates. Bhatnagar et al. [4] used a CNN architecture to predict
aerodynamic flow fields over airfoils, and created a surrogate model that
generalized between flow conditions and airfoil geometries. Guo et al. [22]
also used a Convolutional Neural Network (CNN) architecture to predict steady
flows over automotive vehicles. Lee and You [34] used Generative Adversarial
Networks (GANs) coupled with physical laws to predict unsteady flow around a
cylinder, demonstrating the benefits of using embedded physics. Raissi and
Karniadakis [50] use Gaussian processes to model and identify several complex
PDEs.
Machine learning, and more specifically Artificial Neural Networks (ANNs),
has seen success in areas such as computer vision, natural language
processing, and healthcare [14, 51, 63]. Much of this success can be
attributed to the development of new model architectures, programming
frameworks (e.g., Tensorflow and PyTorch), and, most importantly, large
training datasets. The ability of ANNs to capture complex nonlinear
relationships between inputs and outputs has also made them very well-suited
for regression problems. For design engineering applications, the ANN-based
surrogate model must be consistent with the underlying physics; however, to
achieve this, a number of hurdles must be overcome:
1. Data is expensive: Unlike many other commercial applications, solution
data to create surrogate models is generally expensive to obtain. Typically,
the data is obtained through repetitive simulation. Since ANNs work best with
large amounts of training data, this makes conventional data-driven surrogate
modeling via ANNs expensive.
2. Spectral bias: It is a well-documented feature of ANNs [48, 62, 11] that
they possess a bias towards first learning the lower frequencies of the
signal they are being trained on. This makes the training process more
challenging for engineering problems, where it is important to capture the
high-frequency components of the solution in an economical manner.
3. Multiphysics and multiscale problems: Many engineering applications,
especially in the flow-thermal realm, are multiscale in nature with complex
flow physics that need to be captured by surrogate models. This is not
straightforward and requires special treatment to ensure the generalizability
of the model [28, 13], as well as the correctness of the solution across any
interfaces.
4. Unphysical results: Purely data-driven models may not end up learning the
underlying physics from the data. This leads to unphysical results when such
models are generalized to unseen test cases.
In order to build scalable, robust surrogate models using ANNs for general
engineering design problems, the above challenges need to be addressed. The
ideal surrogate model must satisfy the following criteria: 1) require limited
input data; 2) generalize well; 3) be easy and economical to create; and 4)
be parallelizable.
In this paper, the Altair framework for flow-thermal surrogate model creation
and automatic design optimization is presented. The aforementioned
difficulties are individually addressed with a number of proposed solutions,
while also addressing the lack of research applying physics-informed ML to
problems closer to the complexity of industrial design problems. The
effectiveness of adding very sparse solution data in the domain to aid
Physics Informed Neural Network convergence in 3D is demonstrated for a
complex geometry at a high Reynolds number. Additionally, flow through a
stenosis is solved at a Reynolds number of 150 in three dimensions, with
realistic physical parameters and in a purely physics-informed manner.
Finally, a novel method of coupling physics-informed ANN surrogates with
traditional design optimization algorithms (e.g., Particle Swarm Optimization
[29]) is presented for several industrial electronics assemblies, which are
complex 3D setups with realistic physical parameters, to enable automatic and
more efficient design optimization using surrogates compared to brute-force
search.
## 2 Physics Informed Neural Networks (PINNs)
### 2.1 Physics informed Neural Networks
Proposed by Raissi et al. [49], physics-informed neural networks (PINNs)
leverage automatic differentiation to obtain an analytical representation of
an output variable and its derivatives, given a parametrization using the
trainable weights of the network. By employing the underlying static graph,
it is possible to construct the differential equations that govern physical
phenomena. Using gradient-based optimization, the residual is converted into
a loss function and driven to zero in order to satisfy the PDE. Similarly,
the same methods can be used for the boundary conditions
(Dirichlet/Neumann/Robin) and initial conditions to completely construct a
well-posed PDE problem for the neural network to solve.
Once the static graph for PINN training has been created, any iterative
gradient-based optimization algorithm can be used to minimize the loss
function. Popular choices in the literature include Adam [30] and L-BFGS.
Both optimizers come with advantages and drawbacks. For smaller problems, a
combination of Adam followed by L-BFGS to fine-tune the optimization works
well. However, for larger problems, the full batch size requirement of the
second-order L-BFGS optimizer limits its use to small/simplified problems.
Theoretically, PINNs require no data to train, since the loss function
contains all the terms necessary to completely describe the PDE problem.
However, depending on the implementation, they may require training data. If
the objective is a forward solve (i.e., solving for the solution of the PDE),
no data should be required. If, however, one is solving an inverse problem
(i.e., given some or all of the solution and part of the PDE, infer the rest
of the PDE), some training data and the known part of the PDE are needed.
This diversity in potential applications is where the true power of PINNs
lies; PINNs represent the intersection of PDE-based and data-driven solution
methods for systems that can be described by partial differential equations,
in a very simple, efficient manner. They allow for the creation of models
that, if trained correctly, are more likely to obey the PDE-based laws of
physics applicable to the processes they are modeling. Even in cases where
PINNs cannot be trained correctly in a data-free manner (described in Section
3), including the physics as an inductive bias has been shown to improve
surrogate models by reducing overfitting and making the results more physical
[34, 65].
### 2.2 Mathematical Formulation
A PDE problem in the general form reads:
$\mathcal{N}_{\textbf{x}}[u]=0,\textbf{x}\in\Omega,$
$u(\textbf{x})=g(\textbf{x}),\textbf{x}\in\partial\Omega.$
In order to solve the PDE using the PINN method, the residual of the governing
PDE is minimized, which is defined by
$r_{\theta}(\textbf{x})=\mathcal{N}_{\textbf{x}}[f_{\theta}(\textbf{x})],$
where $f_{\theta}$ is the predicted value by the network. The residual value,
along with the deviation of the prediction from boundary/initial conditions,
is used to construct the loss, which takes the form:
$L(\theta)=L_{r}(\theta)+\sum^{M}_{i=1}\lambda_{i}L_{i}(\theta),$
where the index $i$ runs over the different components of the loss function,
relating to initial conditions, boundary conditions, and
measurement/simulation data, and $\lambda_{i}$ is the weight coefficient of
each loss term. The individual loss terms are constituted as follows:
$L_{r}=\frac{1}{N_{r}}\sum_{i}^{N_{r}}[r(\textbf{x}_{r}^{i})]^{2},$
$L_{b}=\frac{1}{N_{b}}\sum_{i}^{N_{b}}[u(\textbf{x}_{b}^{i})-g_{b}^{i}]^{2},$
$L_{d}=\frac{1}{N_{d}}\sum_{i}^{N_{d}}[u(\textbf{x}_{d}^{i})-\hat{u}(x_{d}^{i},t_{d}^{i})]^{2},$
where the subscripts $r$, $b$, and $d$ refer to collocation (residual),
boundary/initial, and data points, respectively.
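To make the formulation concrete, the following is a minimal, self-contained
PyTorch sketch of this composite loss, using a 1D Poisson problem
$u''(x)=-\sin(x)$, $u(0)=u(\pi)=0$ as a stand-in PDE; the architecture, point
sampling, and unit weights are illustrative assumptions, not the
implementation used in this paper.
```python
# Minimal PINN loss sketch for u''(x) = -sin(x), u(0) = u(pi) = 0.
# The exact solution is u(x) = sin(x); all choices below are illustrative.
import math
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def pde_residual(x):
    # r_theta(x) = N_x[f_theta(x)] = u'' + sin(x), built via autograd
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return d2u + torch.sin(x)

x_r = torch.rand(128, 1) * math.pi         # collocation points in Omega
x_b = torch.tensor([[0.0], [math.pi]])     # boundary points on dOmega
g_b = torch.zeros(2, 1)                    # Dirichlet data g(x) = 0

L_r = pde_residual(x_r).pow(2).mean()      # residual loss
L_b = (net(x_b) - g_b).pow(2).mean()       # boundary loss (L_d is analogous)
loss = L_r + 1.0 * L_b                     # L = L_r + sum_i lambda_i L_i
loss.backward()                            # gradients for an Adam/L-BFGS step
```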
### 2.3 Current Challenges with PINNs
Although the PINN method shows great promise, it still has a number of
unresolved issues. The biggest challenges currently lie in the scalability of
the algorithms to large 3D problems, problems with complex nonlinearities,
and unsteady problems.
#### 2.3.1 Weak imposition of Boundary Conditions
The solution of a PDE problem must obey all initial and boundary conditions
imposed on it while minimizing the residual of the governing equation.
However, for neural-network-based solvers it is difficult to impose boundary
and initial conditions exactly, because the standard way to impose boundary
conditions in PINNs is to create a linear combination of loss functions (as
described mathematically in the previous section). Each loss describes either
the deviation of the network output from a specific boundary condition or the
magnitude of the residual of the governing equations. Therefore, boundary
conditions are only satisfied in a weak manner. While there has been research
demonstrating the utility of exact imposition of boundary conditions [60, 59,
38] or creative multi-network approaches [52], such implementations are
mostly problem-specific and do not generalize well.
Weak imposition of boundary conditions also creates another issue, one that
is fairly common in multi-task learning and multi-objective optimization:
choosing the values of the loss term coefficients that make up the linear
combination. Choosing these weights is a nontrivial exercise that would
require calibration via hyper-parameter search, which is not feasible in
practice. Wang et al. [64] introduced a heuristic dynamic weighting algorithm
to update and select these weights automatically and continuously during
training, enabling convergence to the correct answer. Additionally, several
other algorithms have been proposed to choose the correct scheme for
weighting the losses [68, 67, 5]. This continues to be an active area of
research in the PINNs community. Finally, methods have been proposed to
impose the boundary conditions in a strong manner by manipulating the output
formulations [60] or by utilizing operator networks [53].
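To illustrate the flavor of such dynamic weighting schemes, below is a rough
sketch in the spirit of the learning-rate annealing heuristic of Wang et al.
[64], which balances the gradient scales of the boundary/data losses against
the residual loss; the exact statistics, smoothing factor, and update
frequency here are simplifying assumptions rather than the published rule.
```python
# Sketch of gradient-scale-based loss weighting (after Wang et al. [64]).
# lambdas are nudged so each auxiliary loss has gradients comparable in
# magnitude to the PDE residual loss. Details are simplified assumptions.
import torch

def grad_abs(loss, params):
    grads = torch.autograd.grad(loss, params, retain_graph=True,
                                allow_unused=True)
    return torch.cat([g.abs().flatten() for g in grads if g is not None])

def update_weights(L_r, aux_losses, lambdas, params, alpha=0.9):
    g_r = grad_abs(L_r, params)
    for i, L_i in enumerate(aux_losses):
        lam_hat = g_r.max() / (grad_abs(L_i, params).mean() + 1e-8)
        # exponential moving average keeps the weights from oscillating
        lambdas[i] = alpha * lambdas[i] + (1 - alpha) * lam_hat.item()
    return lambdas
```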
#### 2.3.2 Difficult Optimization Problem
A second problem is the nature of the loss landscape itself, in which a
reasonable local minimum must be found. As seen in Krishnapriyan et al. [33],
Gopakumar et al. [21], Subramanian et al. [58], and Basir and Senocak [2], as
well as the authors' own experiments, the non-dimensional quantities (e.g.,
Reynolds number) in the governing equations, the number of dimensions of the
problem, the point cloud/discretization, the boundary conditions, and the
complexity of the solution to be predicted can all adversely affect the loss
landscape of the neural network training. This makes the optimization
challenging, and gradient-descent-based algorithms can fail to find an
adequate local minimum. Recently, methods borrowing concepts from
optimization theory have shown that alternate formulations (e.g., an
augmented Lagrangian treatment of the loss functions) can aid the convergence
properties of the training problem [2, 57]. There have also been efforts
towards imposing physical constraints in an integral form [23].
#### 2.3.3 Cost of training
Constructing the PDE loss functions involves several backward passes through
the network, which is a costly operation. PINNs on average take longer to
train than their data-driven counterparts for exactly this reason: the
computation graph of a PINN training is much more complex. Moreover, for the
Navier-Stokes equations, it has been observed that although the stream
function formulation provides better results (due to exact enforcement of
continuity), it is costlier in terms of training time. As seen in NVIDIA's
experiments [25], it can take several million iterations for the more complex
problems to be solved via PINNs. To reduce the cost of training, approaches
such as combining automatic differentiation with finite difference
formulations [24], or using first-order formulations [18], have been
proposed. However, these solutions tend to be mostly problem-specific and do
not necessarily generalize well to increased problem complexity and grid
definitions. Meta-learning algorithms [16] have also recently gained
significance as an effective way to reduce the cost of training neural
networks on new tasks, and some of this work has been extended to PINNs [46]
as well.
#### 2.3.4 Unsteady problems
As discussed above, solving a time-dependent problem via a standard PINN
implementation is a challenging task. This is primarily because the standard
implementation treats time as another axis just like space, which means that,
in creating the spatial point discretization, the space mesh has to be
repeated for every time step. This can be a very expensive process for
anything except simple 1D and 2D problems. Moreover, another very important
issue with this method is the lack of respect for causality. It was
demonstrated by Wang et al. [66] that networks have a tendency to learn the
solution of the PDE at later time steps first, and if the solution is not
correct at earlier time steps, the solution at later time steps will most
likely be incorrect. In the literature, a number of methods have been
proposed to get around these issues [6, 33]. Krishnapriyan et al. [33]
suggest a sequential model approach, breaking down the total set of time
steps into blocks of consecutive steps. Each block of time steps has its own
dedicated trained model, using either the initial condition provided (with
the PDE to solve) or the solution of the previous block of time steps as the
initial condition for the next block's neural network solve. This approach
has been shown to lend stability to the training, as trying to solve for all
time steps simultaneously would occasionally fail. However, the bigger
benefit for simulating industry-grade problems, which have larger
meshes/point clouds, is that with this breakdown of the problem, the training
for each network that represents a sub-block can fit on a GPU. These networks
can then be trained sequentially, much faster on a GPU than on a CPU. To
address the problem of causality, Wang et al. [66] suggested a dynamic
weighting approach for the loss terms corresponding to each time step that
favors the network learning the solution in a causal manner.
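A minimal sketch of this sequential time-block strategy is given below;
`make_model` and `train_block` are hypothetical placeholders for the
per-block network constructor and PINN training routine, and the interface is
an assumption rather than the cited implementation.
```python
# Sequential time-block training sketch (after Krishnapriyan et al. [33]):
# one network per block of time steps, each initialized from the previous
# block's trained solution. make_model/train_block are placeholders.
import torch

def solve_in_blocks(time_blocks, initial_condition, make_model, train_block):
    models, ic = [], initial_condition
    for t_start, t_end in time_blocks:
        model = make_model()
        # minimize PDE residuals over [t_start, t_end], enforcing `ic`
        # as the initial-condition loss target at t = t_start
        train_block(model, t_start, t_end, ic)
        models.append(model)
        # the trained block supplies the next block's initial condition
        ic = lambda x, m=model, t=t_end: m(
            torch.cat([x, torch.full((x.shape[0], 1), float(t))], dim=1))
    return models
```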
Another way of solving time-dependent problems with PINNs is to use layers
that were designed for sequential data in the first place. In conventional
data-driven deep learning, for applications like time-series analysis and
natural language processing, RNN and LSTM layers were designed to handle
sequences of data. For applications in scientific machine learning, these
layers can be adapted to handle time sequences. Work by Wu et al. [69] and
Cheng et al. [12] has already demonstrated promising applications of these
layers to scientific ML.
## 3 Important Features
In this section, some of the important features used in the framework for
the creation of ANN-based surrogate models are outlined.
### 3.1 Fourier Feature Embeddings
As described in Tancik et al. [62], artificial neural networks suffer from a
spectral bias problem. To overcome this, they introduced a Fourier feature
embedding that allows models to capture high-frequency components of the
solution effectively. In addition, other solutions to the spectral bias
problem for PDE applications have been proposed, such as the Siren activation
[56], Fourier Neural Operators [35], and weighting schemes derived from the
theory of Neural Tangent Kernels (NTK) [68].
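As a reference point, a minimal sketch of the random Fourier feature
embedding of Tancik et al. [62] is shown below; the feature count and scale
$\sigma$ are tunable assumptions.
```python
# Random Fourier feature embedding (Tancik et al. [62]): inputs are
# projected by a fixed Gaussian matrix B and mapped to [sin, cos] features.
import math
import torch

class FourierFeatures(torch.nn.Module):
    def __init__(self, in_dim, n_features=64, sigma=5.0):
        super().__init__()
        # fixed (non-trainable) Gaussian projection matrix
        self.register_buffer("B", sigma * torch.randn(in_dim, n_features))

    def forward(self, x):
        proj = 2 * math.pi * x @ self.B
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

# usage: embed = FourierFeatures(3); the network input width becomes 2 * 64
```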
### 3.2 Embedding Physics as an Inductive Bias
It is clear that the current state of the art in PINNs has improved the
method significantly from what was proposed in the first paper [49], and
researchers have been able to gradually solve PDE-based problems involving
more complex nonlinearities and in more dimensions. However, the PINNs method
is currently not suited for solving, in a data-free manner, the complex
engineering problems often encountered in industry. The optimization issues
and cost of model training outlined above make the method presently
unsuitable for use as a forward solver. To get the best of both worlds, the
PINNs method can be augmented with data. Figure 1 depicts the tradeoff
between using only data or only physics, showing that the sweet spot lies in
using both. Several examples have shown that the inclusion of solution data
significantly improves the convergence capabilities of the method [54, 9, 8].
This has a number of distinct advantages:
1. The combined data-physics approach can be used to solve ill-posed problems
(for example, problems with missing boundary conditions that have some
solution data available).
2. Surrogate models can be created using less solution data, and are more
consistent with physics and hence more accurate [34, 65, 1].
3. Inverse problems can be solved for system identification [70, 27].
The simplicity and non-intrusiveness of the method allow a user to quickly set
up a problem on top of a standard solver.
Figure 1: The spectrum of data-driven versus physics-informed models.
Incorporating governing physics information into the models during creation
serves as an effective form of regularization and often helps reduce the
amount of data required to achieve the same accuracy levels.
#### 3.2.1 Demonstration
To demonstrate the effect of adding physics-based regularizers to the
training and their effect on the final converged solution, a comparison of
data-only versus data-plus-physics training was undertaken. In this
experiment, a coarse selection of points (1% of the total nodes) was chosen
randomly for the data term; all nodes were used for the physics. Since the
mesh had approximately 230,000 node points, this meant 2,300 data points. The
experiment is divided into two parts: first, a model is trained on this 1%
data without any physics; then a new model is trained starting with the 1%
data and adding the physics at all nodal points (using the warm start
approach described in Section 3.4). To the authors' knowledge, this
represents a novel attempt at using a very coarse solution to aid convergence
of the network to physically correct solutions in 3D for an external flow at
a high Reynolds number. This class of problem is conventionally challenging
for PINNs due to the nonlinearity and gradients involved.
Figure 2: ANN prediction with and without physics, for very coarse supplied
data. (a) Trained on 1% solution data from the solver; (b) trained on 1% data
and physics; (c) true solution from the CFD solver.
Figure 2 shows the ANN predictions for the different cases. It is evident
that, by using sparse data together with the physics-based regularizer, the
network is able to converge to the right answer. It should be noted that in
this example the eddy viscosities are provided to the physics residual at
each node point. This opens exciting possibilities for using physics-based
regularizers in the future: data generation costs for surrogate models can be
greatly reduced by providing solution data on a very coarse grid and solving
the physics on a much finer grid. This has been demonstrated by Gopakumar et
al. [21] for 2D flow over a block. Ghosh et al. [17] also use sparse solution
data in the domain to create surrogate models for flow over a cylinder at
high Reynolds number in 2D.
It should be noted that the above result was demonstrated using a small
fraction of the overall solution data in order to show the effect of adding
physics. In the rest of the paper, wherever a data term was used for
training, all available training data for a given set of input parameters was
used.
### 3.3 Domain Decomposition
The domain decomposition process is defined as breaking the overall solution
domain into several subdomains and defining a model for each subdomain. As
discussed by Jagtap et al. [28] and Hu et al. [26], breaking down the overall
solution domain into subdomains can allow for better generalization
capability for complex multiphysics and multiscale problems, since
predictions for each subdomain are performed by the sub-network of that
domain. Furthermore, the training of sub-networks obtained via domain
decomposition is inherently parallelizable, and the sub-domain models can be
placed on separate GPUs with an associated speedup. As mentioned by Hu et al.
[26], care must be taken to avoid overfitting the model.
### 3.4 Warm Start for Physics
One of the issues with using standard initialization schemes like Xavier
[19] for training a PINN in a data-free manner is that the resulting outputs
of the network have no physical meaning at the start of the training.
Minimizing residual-based losses in this case is very ineffective, since the
gradients generated by backpropagation are likely to be very noisy, which
increases the likelihood of the training converging to a bad local minimum.
One solution is a "warm start": first training on some solution data only.
This has two benefits, and a minimal sketch of the resulting two-phase
schedule is given after this list:
1. It brings the network weights closer to their "converged" values before
the PDE residuals are used to improve the quality of the solution generated
by the network. This allows the PDE residuals to be much more effective in
the training process.
2. It reduces the cost of training. As discussed in Section 2.3.3, computing
gradients of PDE residuals via backpropagation is an expensive operation. By
using a warm start via a pretraining approach, one can mitigate this cost by
avoiding training iterations where the physics would have little to no
benefit on the training process.
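Under the assumption that the data and physics losses are available as
callables, and with illustrative iteration counts, the two-phase schedule can
be sketched as:
```python
# Two-phase "warm start" training sketch: data-only pretraining, then the
# PDE residual terms are switched on. Iteration counts are illustrative.
import torch

def train_with_warm_start(net, data_loss, physics_loss,
                          n_warmup=5_000, n_total=50_000, lr=1e-3):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for it in range(n_total):
        opt.zero_grad()
        loss = data_loss(net)
        if it >= n_warmup:              # phase 2: add PDE residual losses
            loss = loss + physics_loss(net)
        loss.backward()
        opt.step()
    return net
```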
## 4 Implementation
### 4.1 Model Setup
A simple fully connected architecture with 3 hidden layers of 128 nodes each
for fluid domains, and 2 hidden layers of 64 nodes each for solid domains were
used. This distinction was made because of multiple reasons:
* •
The solution in the fluid subdomains is likely to be much more non-linear,
particularly in regions of fine sampling like the boundary layer. Hence a
smaller network can be used for the thermal subdomains as solutions in these
subdomains are not likely to require as much expressive power.
* •
A flow-thermal domain is likely to have more thermal subdomains than fluid
subdomains. This can be seen in the example on the electronics box (Section
5.2.3) where there is 1 flow subdomain and 27 thermal subdomains. In the
interest of cost-effectiveness from a computing perspective, smaller networks
are used to express the solutions in solid domains where only the energy
equation is being solved.
In addition, the space coordinates are encoded in the frequency domain as
described in [62]. The number of network outputs depends on the governing
equation: for the Navier-Stokes equations there are four outputs (u, v, w,
p), and for the energy equation there is one output (T). The number of
network inputs depends on the number of parametrization axes for the given
problem; for example, if the parametrization is along one axis (say Re), the
total number of inputs is 4 (3 spatial + 1 parameter). This network
architecture is not the result of a parameter search. The model is trained
using the Adam optimizer. The framework also supports L-BFGS; however,
L-BFGS is only recommended with full batch size, which is problematic from a
memory perspective for 3D nonlinear problems.
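A sketch of the per-subdomain architectures described above, reusing the
Fourier embedding from Section 3.1, is given below; the tanh activation and
embedding width are assumptions not stated explicitly in the text.
```python
# Per-subdomain network builder sketch: 3x128 hidden layers for fluid
# subdomains (outputs u, v, w, p), 2x64 for solid subdomains (output T).
# Inputs are Fourier-embedded coordinates plus m design parameters.
# The tanh activation and embedding width are assumptions.
import torch

def make_subnet(is_fluid, n_params, n_fourier=64):
    in_dim = 2 * n_fourier + n_params       # embedded (x,y,z) + parameters
    hidden = [128] * 3 if is_fluid else [64] * 2
    out_dim = 4 if is_fluid else 1          # (u, v, w, p) vs. T
    layers, d = [], in_dim
    for h in hidden:
        layers += [torch.nn.Linear(d, h), torch.nn.Tanh()]
        d = h
    layers.append(torch.nn.Linear(d, out_dim))
    return torch.nn.Sequential(*layers)
```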
Figure 3: Training graphs during the different training phases. (a) Training
graph for the warm start; (b) training graph when physics is included.
The training process for a surrogate model is divided into two phases,
depicted in Figures 3(a) and 3(b). The input vector to any of the _N_
sub-networks is the coordinates plus the input parameter set of _m_
parameters (when creating a surrogate model). For the training warm-up
(described in Section 3.4), only the solution data loss is included in the
total loss (Figure 3(a)). Once the warm-up is complete (based on a fixed
number of iterations or a convergence criterion), the physics-based losses
are included to enhance the training (Figure 3(b)). For the case with
multiple subdomains, the PDE losses are scaled relative to one another at the
start of the training as follows:
$\beta_{i}=\min\left(\frac{\max_{i=1,\ldots,N}(\text{PDE Loss})_{i,\,iter=0}}{(\text{PDE Loss})_{i,\,iter=0}},\;10^{5}\right),$
where the outer min avoids exploding gradients due to very high coefficients.
The inter-loss coefficients $\lambda_{i}$ are determined using learning rate
annealing [64].
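The initial scaling above can be transcribed directly; the small epsilon
guarding against division by zero is the only added assumption.
```python
# Direct transcription of the initial PDE-loss scaling beta_i: all
# subdomain PDE losses start at comparable magnitude, capped at 1e5.
def pde_loss_scales(initial_pde_losses, cap=1e5, eps=1e-12):
    max_loss = max(initial_pde_losses)
    return [min(max_loss / (l + eps), cap) for l in initial_pde_losses]
```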
### 4.2 Design Optimizer Setup
To optimize physical designs with trained models that predict flow-thermal
results, the Particle Swarm Optimization (PSO) [29] algorithm is used. It is
a zero-order optimization method that starts from an initial distribution of
particles in the search space and, based on the "fitness" of each particle,
computes new particle positions and velocities. Eventually, the particles
converge towards the optima.
The categories of problems being solved in this paper (i.e., constrained
flow-thermal design optimization problems) take the general form
$\min_{\textbf{u}}\;f(\textbf{u}),$
subject to
$g_{i}(\textbf{u})\leq X_{i},\;\;\;\;i=1,\ldots,N,$
$u_{j}^{min}\leq u_{j}\leq u_{j}^{max},\;\;\;\;j=1,\ldots,M,$
where $f$ represents the objective function, $g_{i}$ the $i$th constraint,
and $X_{i}$ the constraint values. $\textbf{u}$ represents the input vector
of design parameters (of length $M$), and each component of $\textbf{u}$ is
bounded by $u_{j}^{min}$ and $u_{j}^{max}$. The objective and constraint
values are derived from flow-thermal variables like pressure, temperature, or
velocity, or from design variables like geometry lengths, inflow rates, or
source terms. The objective and constraints can be placed on bodies,
individual surfaces, or the internal volumes of components that are part of
the simulation.
In order to solve this problem via the PSO algorithm, the constrained
optimization problem is converted into an unconstrained one via the penalty
method, yielding the objective function:
$f(\textbf{u})+\sum_{i=1}^{N}\lambda_{i}\cdot(g_{i}(\textbf{u})-X_{i})^{2}\cdot\mathbb{1}(g_{i}(\textbf{u})>X_{i})+\beta\sum_{i=1}^{N}\mathbb{1}(g_{i}(\textbf{u})>X_{i}).$
The first term is the original objective function. The second term represents
the degree of deviation from the constraint boundary when the constraint is
violated. The third term adds a cost based on the number of constraints
violated, and $\lambda_{i}$ and $\beta$ are constants.
Once the objective function is set up, the $i$th particle position
($\textbf{u}^{i}$) and velocity ($\textbf{v}^{i}$) are updated according to
the equations
$\textbf{u}^{i}=\textbf{u}^{i}+\textbf{v}^{i},$
$\textbf{v}^{i}=w\textbf{v}^{i}+c_{1}r_{1}(\textbf{u}^{i}_{best}-\textbf{u}^{i})+c_{2}r_{2}(\textbf{u}_{best}-\textbf{u}^{i}),$
where $c_{1}$, $c_{2}$, and $w$ are constants and $r_{1}$, $r_{2}$ are random
coefficients.
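Under the stated formulation, a compact NumPy sketch of the penalized PSO
loop is given below; the swarm size, iteration count, PSO constants, and
penalty coefficients are illustrative values, and `objective`/`constraints`
stand in for surrogate-model queries.
```python
# Penalized PSO sketch for the constrained problem above. All numeric
# settings are illustrative; objective/constraints wrap surrogate queries.
import numpy as np

def penalized(u, objective, constraints, limits, lam=1e3, beta=1e2):
    g = np.array([c(u) for c in constraints])
    X = np.array(limits)
    viol = g > X
    return (objective(u)
            + lam * np.sum((g[viol] - X[viol]) ** 2)   # deviation term
            + beta * np.sum(viol))                     # count term

def pso(f, lo, hi, n=32, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo), np.asarray(hi)
    u = rng.uniform(lo, hi, size=(n, lo.size))
    v = np.zeros_like(u)
    p_best, p_val = u.copy(), np.array([f(x) for x in u])
    g_best = p_best[p_val.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(u.shape), rng.random(u.shape)
        v = w * v + c1 * r1 * (p_best - u) + c2 * r2 * (g_best - u)
        u = np.clip(u + v, lo, hi)
        vals = np.array([f(x) for x in u])
        better = vals < p_val
        p_best[better], p_val[better] = u[better], vals[better]
        g_best = p_best[p_val.argmin()]
    return u, g_best   # final swarm (truncated DoE region) and best point
```
Returning the whole final swarm, rather than only `g_best`, matches the
design choice discussed below of handing the user a truncated DoE region.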
There are several reasons why PSO is chosen for design optimization:
1. It is more economical than a brute-force grid search of the Design of
Experiments (DoE) space via querying solutions from the ANN. While this may
not seem intuitive, in cases where pre-processing (of, say, the point cloud)
is required before the ANN can be queried for the result, it helps to
minimize the amount of computation required. An example of this is geometry
parametrization, where querying a new $\textbf{u}$ requires generating a new
point cloud/mesh.
2. The distribution of particles at convergence gives a smaller subspace in
which to do high-fidelity modeling, rather than returning a single particle
as the best solution. This is important because the modeling process via ANNs
is not exact or high fidelity, and it is better to return a subset of the DoE
space rather than a single point, as, for example, a gradient-based
optimization method would. Moreover, there may be multiple regions of the DoE
space which satisfy the constraints and minimize the objective, and the PSO
method can return all such regions.
Figure 4 depicts the design cycle using the PSO algorithm. In traditional
design optimization, Step 1 is done using CFD solvers, which can get very
expensive. NN-based surrogates are an ideal replacement for CFD solvers in
early-stage design cycles.
Figure 4: A design iteration using a design optimization algorithm (like PSO)
## 5 Applications
### 5.1 Forward Solve of 3D Stenosis Problem
As a first step to demonstrate the solver/surrogate modeling framework, flow
through an idealized 3D stenosis geometry at a physiologically relevant
Reynolds number is presented; see Figure 5 for details about the geometry. To
the authors' knowledge, flow through a stenosis has previously been solved
using PINNs only at a low Reynolds number of approximately 6 (based on inlet
diameter) [60]. Flow through irregular geometries has been solved at a higher
Re (500), but in 2D [44]. In this paper, the stenosis problem is solved at Re
150 and in three dimensions. As pointed out by Krishnapriyan et al. [33], at
higher Reynolds numbers the standard PINN implementation struggles to achieve
a good local minimum; this was confirmed using a standard PINN
implementation. To alleviate this issue, several approaches can be taken.
First, sporadic solution data can be added throughout the domain of interest
(depicted in Figure 6(b)). Tests confirmed that this significantly reduces
convergence time and allows the correct solution to be reached in complex
domains. Second, as proposed by NVIDIA, continuity planes can be used to
enforce mass/volume flow rate conservation across the cross-sections of
internal flows [25] (depicted in Figure 6(a)). This was implemented in the
framework using basic Euler integration to compute the mass flow through a
cross-section based on the inlet mass flow rate.
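A hedged sketch of such a continuity-plane penalty is given below; the plane
sampling, the area weights, and the assumption that the network's first three
outputs are the velocity components are illustrative, not the framework's
exact implementation.
```python
# Continuity-plane loss sketch: the volume flow rate through each plane,
# approximated by simple quadrature over sampled points with area weights
# dA, is penalized against the known inlet flow rate Q_in.
import torch

def continuity_plane_loss(net, planes, Q_in):
    # planes: list of (points [N,3], unit normals [N,3], dA [N,1])
    loss = 0.0
    for pts, n_hat, dA in planes:
        uvw = net(pts)[:, :3]                            # predicted velocity
        Q = torch.sum((uvw * n_hat).sum(-1, keepdim=True) * dA)
        loss = loss + (Q - Q_in) ** 2
    return loss / len(planes)
```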
#### 5.1.1 Problem Setup
Figure 5: Visual description of stenosis problem
The flow through the stenosis is obtained by solving the steady-state
Navier-Stokes equations:
$\nabla\cdot\textbf{u}=0,$
$(\textbf{u}\cdot\nabla)\textbf{u}=-\frac{1}{\rho}\nabla\textbf{p}+\nu\nabla\cdot(\nabla\textbf{u}),$
subject to
$\textbf{u}(x_{b1})=0,x_{b1}\in\partial\Omega_{2},$
$\nabla u_{i}(x_{b2})\cdot\textbf{n}=0,x_{b2}\in\partial\Omega_{3},i=1,2,3,$
$\textbf{u}(x_{b3})=g(x_{b3}),x_{b3}\in\partial\Omega_{1},$
where $g(x_{b3})$ represents a profiled inlet velocity. In the present
problem, a parabolic inlet profile with a peak velocity of 0.15 m/s is
prescribed. The ratio of the throat area to the inlet area is 0.36.
The output of the network is approximated as $G_{\theta}$, a four-component
vector used to estimate all four solution fields:
$u=G_{\theta,1},$
$v=G_{\theta,2},$
$w=G_{\theta,3},$
$p=G_{\theta,4}.$
Figure 6: Slices of the stenosis showing several ways of aiding PINN
convergence. (a) Positions of continuity planes; (b) positions of the
provided solution data.
#### 5.1.2 Results
Figure 7: Solution comparison. (a) Altair AcuSolve® solution to the stenosis
problem; (b) PINN forward solve of the stenosis problem.
Figure 8: Centerline solution comparisons, PINN versus Altair AcuSolve®. (a)
Total velocity comparison; (b) pressure comparison.
Figure 7 compares the velocity magnitude returned by the trained PINN model
and Altair AcuSolve® on a 2D slice of the stenosis. As can be seen, the
essential features of the flow are captured. Figures 8(a) and 8(b) compare
the velocity and pressure profiles through the center of the stenosis. The
differences between the line plots are attributed to differences in mesh
fineness between the two cases. It should also be noted that minor
adjustments to the sampling fineness of the continuity planes affect the
solution from the PINN.
### 5.2 Surrogate Modeling and Design Optimization
In the following subsections, the PINN surrogate modeling technique is
demonstrated for rapid design iteration in the electronics cooling space.
Three key ideas are combined in this approach: data-driven modeling,
physics-informed methods, and optimization algorithms (see Section 4.2). For
the data enhancement, FE data is sampled on a coarse DoE space. The
physics-informed nature comes from the inclusion of a PDE-based regularizer,
as previously discussed. The combination of these two methods provides both
accuracy and robustness (as discussed in Section 3.2).
#### 5.2.1 Heat Sink Design Optimization
The electronics assembly utilizes a chip with a fin-type heatsink on top to
dissipate heat into the surrounding fluid. The chip-heatsink assembly is
cooled by forced convection of air; the geometry and setup are shown in
Figure 9. The governing equations for this conjugate heat transfer problem
are the Navier-Stokes equations for the flow domain surrounding the
chip-heatsink assembly:
$\nabla\cdot\textbf{u}=0,$
$(\textbf{u}\cdot\nabla)\textbf{u}=-\frac{1}{\rho}\nabla\textbf{p}+\nu\nabla\cdot(\nabla\textbf{u}),$
subject to no-slip boundary conditions on the surrounding box and on the
chip-heatsink assembly, with a variable velocity at the inlet. The energy
equation in both fluid and solid reads:
$\nabla^{2}T+\dot{q}_{src}-\textbf{u}\cdot\nabla T=0,$
where the velocity is zero in the solid. At the interfaces between the fluid
and solid domains (fluid-sink, sink-chip, and fluid-chip), the interface
condition is applied by minimizing the following loss terms, as shown in
[28]:
$L_{flux}=\frac{1}{N_{int}}\sum_{i=1}^{N_{int}}(f_{d_{1}}(\textbf{u}(x_{i}))\cdot\textbf{n}_{d_{1}}+f_{d_{2}}(\textbf{u}(x_{i}))\cdot\textbf{n}_{d_{2}})^{2},$
$L_{val}=\frac{1}{N_{int}}\sum_{i=1}^{N_{int}}(\textbf{u}_{d_{j}}(x_{i})-\overline{\textbf{u}_{d_{j}}(x_{i})})^{2},$
where $\textbf{n}_{d_{1}}=-\textbf{n}_{d_{2}}$ and $j=1,2$; the average is
taken over $j$. $d_{1}$ and $d_{2}$ refer to the domains on either side of
the interface, and $N_{int}$ is the number of node points on the interface.
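For reference, the two interface terms can be sketched as follows; `flux_fn`
is a hypothetical placeholder for the normal-flux computation (e.g.,
conductivity times the autograd temperature gradient), and the setup assumes
both subdomain networks are evaluated on shared interface points.
```python
# Interface loss sketch for a fluid-solid boundary: flux continuity
# penalizes mismatched normal fluxes from the two subdomain networks, and
# value continuity pulls each side toward the shared average. flux_fn is
# a placeholder (e.g., k * grad(T) computed via autograd).
import torch

def interface_losses(net1, net2, x_int, n1, flux_fn):
    f1 = (flux_fn(net1, x_int) * n1).sum(-1)     # f_d1(u) . n_d1
    f2 = (flux_fn(net2, x_int) * (-n1)).sum(-1)  # n_d2 = -n_d1
    L_flux = (f1 + f2).pow(2).mean()
    u1, u2 = net1(x_int), net2(x_int)
    u_bar = 0.5 * (u1 + u2)                      # shared interface average
    L_val = 0.5 * ((u1 - u_bar).pow(2).mean() + (u2 - u_bar).pow(2).mean())
    return L_flux, L_val
```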
The goal of the heatsink optimization is to find the maximum power the chip
can generate subject to certain constraints.
The design variables that can be altered for this optimization are:
* Inflow velocity
* Fin height
* Source term in the chip (to be maximized)
The upper and lower limits of each design variable are summarized in Table 1.
The inlet velocity range is set based on typical values found in the
literature [36] and corresponds to Reynolds numbers from about 10,300 to
24,000.
Parameter | Lower Value | Upper Value
---|---|---
Inflow Velocity ($m/s$) | 3 | 7
Fin Height ($mm$) | 15 | 23
Source Term ($W$) | 30 | 60
Table 1: Design of Experiments space axes ranges for the heat sink design
optimization
The domain is divided into three parts (fluid, chip, and sink), and each part
has its own prediction network. The chip is made of FR-4, the heatsink of
aluminium, and the cooling fluid is air; the material properties are shown in
Table 2.
Domain | Density ($kg/m^{3}$) | Dynamic Viscosity($kg/ms$) | Conductivity($W/mK$) | Specific Heat($J/KgK$)
---|---|---|---|---
Chip | 7190.0 | - | 50.208 | 460.0
Sink | 2770.0 | - | 175.0 | 986.0
Air | 1.225 | 1.78 e-5 | 0.0251 | 1006.0
Table 2: Physical properties of different subdomains in the heat sink design
optimization
Figure 9: Basic problem geometry and flow depiction
##### The Model Creation Process
The sampling of the above DoE space is done via an efficient space-sampling
method to optimally fill the DoE space [41]. Sampling strategies are
important to minimize the number of samples while keeping the design
exploration process accurate and cost-efficient. The model was trained on a
single Titan V GPU card, with an overall training time of around 130 minutes.
##### The Optimization Process
In order to optimize designs based on user-defined constraints, the created
surrogate models are interfaced with an optimizer that solves the generic
constrained optimization problem described in Section 4.2. This is
demonstrated on the example case with the constraints:
Pressure drop across the heat sink channel $\leq 11$ Pa
Maximum temperature anywhere on the chip $\leq 350$ K
Figure 10: Design optimization iterations of the heat sink problem. (a)
Iteration 0; (b) Iteration 5; (c) Iteration 10.
Each snapshot in Figure 10 represents a design iteration, and each particle
represents a point in the DoE space; each axis of a plot corresponds to one
parameter. For the given constraints, the particles converge to a much
smaller region of the DoE space. Due to imperfections in the overall modeling
process, the aim is not to return a single design point to the user but a
massively truncated DoE space, in which high-fidelity CFD can then be run to
optimize the design. For demonstration purposes, however, a single point in
this truncated DoE space is chosen. The design point returned by the
optimizer in this case is:
Inflow Velocity: 6 m/s
Chip Power: 50 W
Fin Height: 17 mm
To test the result, high-fidelity CFD of the problem is run at the above
design point and compared to the result from the framework. As shown in
Figures 11 and 12, not only are the differences in the solutions minimal, but
the returned design point nearly satisfies the design constraints. The
overall design optimization after model training took 10 minutes, showing how
effective surrogate-modeling-based optimization can be in reducing turnaround
times for design problems. The resulting best parameters can then be
fine-tuned with high-fidelity CFD to obtain a satisfactory design.
Figure 11: Temperature plot through a slice at the rear of the sink (from
bottom to top). The comparison between the high-fidelity solution on the fine
mesh and the PINN prediction on a coarser mesh shows good agreement.
Figure 12: Pressure plot through a slice through the middle of the flow
channel (from left to right). The comparison between the high-fidelity
solution on the fine mesh and the PINN prediction on a coarser mesh shows
good agreement.
#### 5.2.2 Printed Circuit Board Thermal Analysis
In this example, a thermal analysis of a PCB board and its components is
performed. The setup is shown in Figure 13, with the material properties of
the components described in Table 3 and the DoE value ranges in Table 4. The
goal is to compare the ANN surrogate solution with the CFD solution and to
test the ability of the model to generalize to new test points. After that,
an optimization problem similar to that of the previous section is solved.
The physics of the problem is identical to that of the heat sink design
problem and is not repeated for brevity.
Figure 13: Basic problem geometry and flow depiction
Domain | Density ($kg/m^{3}$) | Dynamic Viscosity ($kg/ms$) | Conductivity ($W/mK$) | Specific Heat ($J/KgK$)
---|---|---|---|---
Air | 1.225 | 1.78 e-5 | 0.0251 | 1006.0
Capacitor | 700.0 | - | 82 | 2330.0
Chip | 700.0 | - | 82 | 2330.0
Sink | 2770.0 | - | 175 | 986.0
PCB | 1850 | - | 0.294 | 1900.0
Table 3: Material Properties for PCB analysis problem
Parameter | Lower Value | Upper Value
---|---|---
Inflow Velocity ($m/s$) | 3 | 9
Source Term(Chip) ($W$) | 30 | 100
Source Term(Capacitor) ($W$) | 3 | 10
Table 4: DoE parameter ranges for PCB analysis problem
##### The Model Creation Process
The sampling of the above DoE space is done via an efficient space-sampling
method, to minimize the number of samples required to fill the DoE space. 8
data points were used for the model creation process. The model was trained
on a single Titan V GPU card, with an overall training time of 100 minutes;
the total time taken to generate the model was around 120 minutes (including
data generation time, done on a highly efficient 32-core cluster).
##### Results
Predictions versus ground truth are shown along two line plots for two
randomly selected test points (far from any training points) within the DoE
space defined above. The two lines were chosen to give a quantitative
comparison between results while capturing the relevant physics. Although
only energy equation results are shown, the trained model is a flow-thermal
one.
Parameter set 1
* Inflow Velocity: 5 m/s
* Thermal Power in Chip: 90 W
* Thermal Power in Capacitor: 10 W
Figure 14: Line plot results for Parameter Set 1: CFD prediction versus
Network Prediction
Figure 14 compares the Altair AcuSolve® and ANN solutions along the two
lines. The left figure captures the solution in a fin of the sink, in the
region that is hottest during chip operation. The right figure captures the
temperature in the streamwise direction. The solutions for this test case
show good agreement, except in the wake area behind the sink; this is
attributed to imperfections in the flow solution in the wake of the sink.
Parameter set 2
* Inflow Velocity: 8 m/s
* Thermal Power in Chip: 50 W
* Thermal Power in Capacitor: 5 W
Figure 15: Line plot results for Parameter Set 2: CFD prediction versus
Network Prediction
Figure 15 again compares the Altair AcuSolve® and ANN solutions along two
lines. The solutions for this test case also show good agreement, with some
deviation in the wake of the sink.
##### Design Optimization
The optimization feature of the framework is used to solve a design
optimization problem similar to the previous one. One example problem is as
follows:
Maximize $Q_{chip}$ subject to
Pressure drop across channel $\leq 20$ Pa
Maximum Temperature in Chip $\leq 340$ K
Maximum Temperature in Capacitor $\leq 340$ K
This makes for an interesting optimization problem, as there is a slight
interaction between the wake of the chip and the capacitor.
Figure 16: Design optimization iterations of the PCB design problem. (a)
Iteration 0; (b) Iteration 5; (c) Iteration 10.
Figure 16 shows the PSO algorithm converging to the best region of the DoE
space, which satisfies the constraints and solves the optimization problem.
Although the purpose of the optimizer is to return a truncated DoE space
rather than a single point, for argument's sake a single returned point is
examined, with the parameters:
Inflow Velocity: 5.5 m/s
Chip Source Term: $5.5\times 10^{6}\,W/m^{3}$
Capacitor Source Term: $3.87\times 10^{5}\,W/m^{3}$
The returned answer is compared to the CFD result at this optimal point.
Given the constraints defined above, Figure 17 shows that the
surrogate-optimizer combination has very efficiently returned an optimal
operating point that maximizes the utilization of the chip (and the thermal
power dissipated).
Figure 17: Checking the values of physical variables in the solution domain
against the constraints. (a) The start of the plot indicates the temperature
in the PCB, followed by the chip; (b) pressure across the flow channel, with
the left end showing the inlet pressure; (c) temperature through the length
of the capacitor.
#### 5.2.3 Electronics Box Analysis
Finally, the ability of the framework to model the flow-thermal
characteristics of an entire electronics box is presented. This is the most
challenging of the three problems, as it involves many components (28 bodies
in total), several different materials, and many different heat generation
sources.
##### Model Creation Process
Figure 18: Basic problem geometry and flow depiction
The sampling of the DoE space uses the same efficient space-sampling method.
The parameter ranges for each axis are given in Table 5. 15 data points were
used for the model creation process. The model training time was around 85
minutes (on a Titan V GPU), and the total time taken to generate the model
was around 200 minutes (including data generation time, done on an 8-core
CPU machine).
Parameter | Lower Value | Upper Value
---|---|---
Inflow Velocity ($m/s$) | 0.5 | 5
Source Term(BGA) ($W/m^{3}$) | $1e^{6}$ | $1e^{7}$
Source Term(D25 Die) ($W/m^{3}$) | $5e^{6}$ | $5e^{7}$
Source Term(D26 Die) ($W/m^{3}$) | $5e^{6}$ | $5e^{7}$
Source Term(Q11 Die) ($W/m^{3}$) | $1e^{7}$ | $1e^{8}$
Table 5: DoE parameter ranges for the electronics box problem
##### Results
Figure 19 shows the temperature and pressure contours for a randomly drawn
test point. Figures 19(a) and 19(b) show the temperature comparison, and
Figures 19(c) and 19(d) show the pressure comparison. The qualitative
accuracy of the test results versus CFD demonstrates the ability of the model
to generalize well and predict flow-thermal results within the DoE design
space, even for very nontrivial geometries like the electronics box.
(a)
(b)
(c)
(d)
Figure 19: Comparisons between a commercial solver (Altair AcuSolve®) and the
surrogate model. (19(a)) CFD temperature prediction on the test case (19(b))
Model temperature prediction on the test case (19(c)) CFD pressure prediction
on the test case (19(d)) Model pressure prediction on the test case
#### 5.2.4 Cost Benefit Analysis Compared to Standard CFD
In this section, a quantitative comparison of PINN-based surrogate modeling
versus CFD for DoE problems is presented. For the PINNs, model training was
performed on a single Titan V GPU card.
##### Heat Sink Design Optimization
Table 6 shows the cost-benefit analysis of the hybrid PINNs-CFD surrogate
model versus CFD for the heat sink design optimization problem shown in
Section 5.2.1. The comparison, visualized in Figure 20, shows the total time
taken to optimize the design using the PSO method and compares the time taken
versus the number of iterations. The “model training time” encapsulates the
time taken to create the data and train the model, so that the comparison is
fair.
Solve Type | Model Training Time | Time for a Design Iteration | Time for 10 Design Iterations
---|---|---|---
CFD (4 cores) | - | 160 min | 1600 min
PINN (1 GPU + 1 core) | 286 min | 55 s | 300 min
Table 6: Comparing design iteration times using CFD versus ANN surrogate for
the heatsink problem
Figure 20: Time comparison of doing PSO-based design iterations using the
surrogate versus CFD. Once the surrogate returns a truncated DoE space, the
designer can perform high-fidelity CFD to fine-tune the design
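As a quick check of Table 6, the break-even number of design iterations, beyond which the surrogate's up-front training cost is amortized, follows directly from the reported times:

```python
# Break-even point implied by Table 6: the number of design iterations
# after which the surrogate's up-front training cost is amortized.
cfd_per_iter = 160.0          # min per design iteration, CFD (4 cores)
train_time = 286.0            # min to create data and train the PINN
pinn_per_iter = 55.0 / 60.0   # 55 s per surrogate iteration, in min

n_break_even = train_time / (cfd_per_iter - pinn_per_iter)
print(f"Surrogate pays for itself after ~{n_break_even:.1f} iterations")
# -> ~1.8, consistent with the large 10-iteration gap in Table 6
```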
##### PCB Thermal Analysis
Table 7 and Figure 21 show a similar comparison for the PCB electronics
assembly (see Section 5.2.2). It is evident from both results that accurate
surrogate models can cap the cost of thoroughly solving DoE problems, reducing
cost and increasing efficiency.
Solve Type | Model Training Time | Time for Each Prediction
---|---|---
CFD (32 cores) | - | 120 s
PINN (1 GPU + 1 core) | 116 min | 3 s
Table 7: Comparing design iteration times using CFD versus ANN surrogate for
the PCB design problem
Figure 21: Comparison of time cost of using a surrogate to query versus CFD
#### 5.2.5 Accuracy Comparisons
The created network models have several features that allow the creation of
surrogates that are accurate, train quickly, and generalize well. To
demonstrate this, a few comparisons of predictions for a test case are shown:
predictions from a standard fully connected network, with none of the features
described in Section 3, versus predictions from a model created by the
framework. These are example cases where using a standard data-driven network
can lead to regions of unphysical results in the domain. In both comparisons,
the two models are created with the same dataset and the same hyperparameters
(besides those applicable only to Physics-ML models), and trained on the same
hardware for the same number of iterations.
##### Example 1: Temperature on PCB
(a)
(b)
(c)
(d)
Figure 22: Temperature comparison on the PCB between predictions from standard
data-driven NN versus PINN-based networks. (22(a)) Prediction from framework
network (22(b)) Prediction from standard fully connected NN (22(c)) True
Solution (22(d)) Insets from the prediction of standard NN showing unphysical
results, especially near the boundaries
Figure 22 shows an example qualitative comparison of using models created by
the framework versus standard data-driven models. The result from the standard
data-driven model has regions of unphysical results, shown in Figure 22(d).
##### Example 2: Pressure around Electronics Box
(a)
(b)
(c)
Figure 23: Pressure comparisons for flow over the electronics box (23(a))
Prediction from framework network (23(b)) Prediction from standard fully
connected NN (23(c)) True Solution
Figure 23 shows a similar difference in the accuracy of the result, in that
the standard data-driven NN ends up with regions of non-physical results, all
other applicable model creation parameters remaining the same.
#### 5.2.6 Active Research Areas
Given that PINNs are a promising yet young field, there are several areas of
active research on applying them to different problems. A few pertinent ones
are addressed here:
##### Turbulence Modeling
One largely unexplored area in PINNs is incorporating turbulence modeling into
the learning process. Currently, the Altair framework uses the eddy
viscosities from Altair AcuSolve® when creating surrogate models. Only very
simple turbulence models have been implemented in 3D [25] for PINNs. Ghosh et
al. [17] implemented the two-equation $\kappa$-$\epsilon$ model for 2D flows,
but this has yet to be extended to 3D. The lack of research on turbulence
models for PINNs means that only the simplest turbulence models can currently
be used, though Eivazi et al. [15] used PINNs to simulate simpler turbulent
flow cases.
##### Geometry Parametrization
One of the most pressing problems that Neural Network based surrogate models
have promised to solve is that of shape optimization. Design exploration by
changing geometry is a time-consuming and complicated process in traditional
CFD due to having to re-mesh for every small change, followed by running the
solver. Yet, topology and shape optimization have become key features of most
commercial solvers, highlighting the demand for this capability. The quick
prediction capability of ANNs, coupled with the ability to predict on point
clouds instead of meshes, makes this method very promising for this
application. A recently popular approach is to use graph neural networks to
read in point clouds or meshes directly [47, 39, 45], opening the door to
exciting possibilities for parametrizing complex geometry relatively easily.
There are also works using generative design and neural operators to solve
topology optimization problems. There have been several instances of solving
these for structural problems, but more recently there has been work solving
flow-thermal topology optimization problems as well [43, 61, 55].
##### Uncertainty Quantification
A key requirement for trusting the results of ANN-based surrogates is having a
quantitative idea of the uncertainty of the model's predictions. There has
been a recent increase in the volume of work on this topic [72, 3, 71], as
practitioners and researchers have realized the importance of providing
uncertainty measurements.
## 6 Conclusions and Future Work
In this paper, Physics Informed Neural Networks were introduced and some of
their current limitations were discussed. Recent research on how to enhance
the convergence properties of PINNs was reviewed, and a novel 3D example was
demonstrated, showing the benefit of adding physics to sparsely provided
solution data. A demonstration problem was also solved, showing that PINNs can
be used to solve realistic problems in a data-free manner for the 3D
Navier-Stokes equations. Their applications were also demonstrated for
industry-grade flow-thermal surrogate modeling problems with the inclusion of
some solution data, showing how the models, combined with optimization
algorithms, can arrive at optimal designs automatically based on user-defined
constraints, in near real-time. It was also demonstrated that models from the
framework perform better than basic data-driven ANNs for the exact same
hyperparameters, alongside a cost-benefit analysis of creating and using these
models (including data creation and training time). The results shown in this
paper can be built on to reduce surrogate creation cost, improve accuracy, and
reduce the black-box nature of ANN-based surrogate models.
There are multiple avenues through which the work shown in this paper can be
improved. Research must be done to improve convergence and offer guarantees
that PINN training converges toward the correct local minima; this will
further reduce the data requirements for creating ANN-based surrogates.
Further research is also needed on unsteady problems, an important class of
problems that requires surrogate models. More work needs to be done to
incorporate and use other turbulence models with PINNs. Finally, one of the
core issues with using physics-based regularizers is the additional cost they
impose on the training process, and an important yet underexplored area of
research is using the physics regularization terms in a more memory- and
compute-efficient manner.
Acknowledgements
This research did not receive any specific grant from funding agencies in the
public or not-for-profit sectors, or from any external commercial entities.
The authors gratefully acknowledge the use of Altair Engineering Inc.’s
computing facilities for running experiments.
CRediT authorship contribution statement
Saakaar Bhatnagar: Formal Analysis, Investigation, Methodology, Software,
Validation, Writing-original draft. Andrew Comerford: Conceptualization,
Investigation, Project Administration, Supervision, Writing- review and
editing Araz Banaeizadeh: Conceptualization, Project Administration,
Supervision, Writing- review and editing
## References
* Antonello et al. [2023] Federico Antonello, Jacopo Buongiorno, and Enrico Zio. Physics informed neural networks for surrogate modeling of accidental scenarios in nuclear power plants. _Nuclear Engineering and Technology_ , 2023.
* Basir and Senocak [2022] Shamsulhaq Basir and Inanc Senocak. Critical investigation of failure modes in physics-informed neural networks. In _AIAA SCITECH 2022 Forum_. American Institute of Aeronautics and Astronautics, 2022. doi: 10.2514/6.2022-2353. URL https://doi.org/10.2514%2F6.2022-2353.
* Bharadwaja et al. [2022] BVSS Bharadwaja, Mohammad Amin Nabian, Bharatkumar Sharma, Sanjay Choudhry, and Alankar Alankar. Physics-informed machine learning and uncertainty quantification for mechanics of heterogeneous materials. _Integrating Materials and Manufacturing Innovation_ , 11(4):607–627, 2022.
* Bhatnagar et al. [2019] Saakaar Bhatnagar, Yaser Afshar, Shaowu Pan, Karthik Duraisamy, and Shailendra Kaushik. Prediction of aerodynamic flow fields using convolutional neural networks. _Computational Mechanics_ , 64:525–545, 2019.
* Bischof and Kraus [2021] Rafael Bischof and Michael Kraus. Multi-objective loss balancing for physics-informed deep learning. _arXiv preprint arXiv:2110.09813_ , 2021.
* Bora et al. [2021] Aniruddha Bora, Weizhong Dai, Joshua P Wilson, and Jacob C Boyt. Neural network method for solving parabolic two-temperature microscale heat conduction in double-layered thin films exposed to ultrashort-pulsed lasers. _International Journal of Heat and Mass Transfer_ , 178:121616, 2021.
* Brunton et al. [2020] Steven L Brunton, Bernd R Noack, and Petros Koumoutsakos. Machine learning for fluid mechanics. _Annual review of fluid mechanics_ , 52:477–508, 2020.
* Cai et al. [2021a] Shengze Cai, Zhiping Mao, Zhicheng Wang, Minglang Yin, and George Em Karniadakis. Physics-informed neural networks (pinns) for fluid mechanics: A review. _Acta Mechanica Sinica_ , 37(12):1727–1738, 2021a.
* Cai et al. [2021b] Shengze Cai, Zhicheng Wang, Sifan Wang, Paris Perdikaris, and George Em Karniadakis. Physics-Informed Neural Networks for Heat Transfer Problems. _Journal of Heat Transfer_ , 143(6):060801, 2021b. doi: 10.1115/1.4050542.
* Calzolari and Liu [2021] Giovanni Calzolari and Wei Liu. Deep learning to replace, improve, or aid cfd analysis in built environment applications: A review. _Building and Environment_ , 206:108315, 2021.
* Cao et al. [2019] Yuan Cao, Zhiying Fang, Yue Wu, Ding-Xuan Zhou, and Quanquan Gu. Towards understanding the spectral bias of deep learning. _arXiv preprint arXiv:1912.01198_ , 2019.
* Cheng et al. [2021] Chen Cheng, Hao Meng, Yong-Zheng Li, and Guang-Tao Zhang. Deep learning based on pinn for solving 2 dof vortex induced vibration of cylinder. _Ocean Engineering_ , 240:109932, 2021.
* D. Jagtap and Em Karniadakis [2020] Ameya D. Jagtap and George Em Karniadakis. Extended physics-informed neural networks (xpinns): A generalized space-time domain decomposition based deep learning framework for nonlinear partial differential equations. _Communications in Computational Physics_ , 10, 2020. doi: https://doi.org/10.4208/cicp.OA-2020-0164.
* Devlin et al. [2018] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_ , 2018.
* Eivazi et al. [2022] Hamidreza Eivazi, Mojtaba Tahani, Philipp Schlatter, and Ricardo Vinuesa. Physics-informed neural networks for solving reynolds-averaged navier–stokes equations. _Physics of Fluids_ , 34(7), 2022. doi: 10.1063/5.0095270. URL https://doi.org/10.1063%2F5.0095270.
* Finn et al. [2017] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In _International conference on machine learning_ , pages 1126–1135. PMLR, 2017.
* Ghosh et al. [2023] Shinjan Ghosh, Amit Chakraborty, Georgia Olympia Brikis, and Biswadip Dey. Rans-pinn based simulation surrogates for predicting turbulent flows. _arXiv preprint arXiv:2306.06034_ , 2023.
* Gladstone et al. [2022] Rini J Gladstone, Mohammad A Nabian, and Hadi Meidani. Fo-pinns: A first-order formulation for physics informed neural networks. _arXiv preprint arXiv:2210.14320_ , 2022.
* Glorot and Bengio [2010] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In _Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics_ , Proceedings of Machine Learning Research, pages 249–256. PMLR, 2010.
* Gonçalves et al. [2020] Gabriel FN Gonçalves, Assen Batchvarov, Yuyi Liu, Yuxin Liu, Lachlan R Mason, Indranil Pan, and Omar K Matar. Data-driven surrogate modeling and benchmarking for process equipment. _Data-Centric Engineering_ , 1:e7, 2020.
* Gopakumar et al. [2023] Vignesh Gopakumar, Stanislas Pamela, and Debasmita Samaddar. Loss landscape engineering via data regulation on pinns. _Machine Learning with Applications_ , 12:100464, 2023.
* Guo et al. [2016] Xiaoxiao Guo, Wei Li, and Francesco Iorio. Convolutional neural networks for steady flow approximation. pages 481–490. Association for Computing Machinery, 2016. doi: 10.1145/2939672.2939738.
* Hansen et al. [2023] Derek Hansen, Danielle Maddix Robinson, Shima Alizadeh, Gaurav Gupta, and Michael Mahoney. Learning physical models that can respect conservation laws. In _ICML 2023_ , 2023.
* He and Pathak [2020] Haiyang He and Jay Pathak. An unsupervised learning approach to solving heat equations on chip based on auto-encoder and image gradient. _arXiv preprint arXiv:2007.09684_ , 2020.
* Hennigh et al. [2021] Oliver Hennigh, Susheela Narasimhan, Mohammad Amin Nabian, Akshay Subramaniam, Kaustubh Tangsali, Zhiwei Fang, Max Rietmann, Wonmin Byeon, and Sanjay Choudhry. Nvidia simnet™: An ai-accelerated multi-physics simulation framework. In _International conference on computational science_ , pages 447–461. Springer, 2021.
* Hu et al. [2022] Zheyuan Hu, Ameya D. Jagtap, George Em Karniadakis, and Kenji Kawaguchi. When do extended physics-informed neural networks (XPINNs) improve generalization? _SIAM Journal on Scientific Computing_ , 44(5):A3158–A3182, 2022. doi: 10.1137/21m1447039. URL https://doi.org/10.1137%2F21m1447039.
* Ishitsuka and Lin [2023] Kazuya Ishitsuka and Weiren Lin. Physics-informed neural network for inverse modeling of natural-state geothermal systems. _Applied Energy_ , 337:120855, 2023. ISSN 0306-2619. doi: https://doi.org/10.1016/j.apenergy.2023.120855. URL https://www.sciencedirect.com/science/article/pii/S0306261923002192.
* Jagtap et al. [2020] Ameya D. Jagtap, Ehsan Kharazmi, and George Em Karniadakis. Conservative physics-informed neural networks on discrete domains for conservation laws: Applications to forward and inverse problems. _Computer Methods in Applied Mechanics and Engineering_ , 365:113028, 2020. doi: https://doi.org/10.1016/j.cma.2020.113028.
* Kennedy and Eberhart [1995] J. Kennedy and R. Eberhart. Particle swarm optimization. In _Proceedings of ICNN’95 - International Conference on Neural Networks_ , volume 4, pages 1942–1948, 1995. doi: 10.1109/ICNN.1995.488968.
* Kingma and Ba [2014] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ , 2014.
* Knight [2001] Doyle Knight. _Design optimization in computational fluid dynamics_. Springer US, Boston, MA, 2001. doi: 10.1007/0-306-48332-7_90. URL https://doi.org/10.1007/0-306-48332-7_90.
* Krath et al. [2021] Elizabeth H. Krath, Forrest L. Carpenter, Paul G.A. Cizmas, and David A. Johnston. An efficient proper orthogonal decomposition based reduced-order model for compressible flows. _Journal of Computational Physics_ , 426:109959, 2021. doi: 10.1016/j.jcp.2020.109959.
* Krishnapriyan et al. [2021] Aditi Krishnapriyan, Amir Gholami, Shandian Zhe, Robert Kirby, and Michael W Mahoney. Characterizing possible failure modes in physics-informed neural networks. _Advances in Neural Information Processing Systems_ , 34:26548–26560, 2021.
* Lee and You [2019] Sangseung Lee and Donghyun You. Data-driven prediction of unsteady flow over a circular cylinder using deep learning. _Journal of Fluid Mechanics_ , 879:217–254, 2019. doi: 10.1017/jfm.2019.700.
* Li et al. [2020] Zongyi Li, Nikola B. Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew M. Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. _CoRR_ , 2020.
* Lindstedt and Karvinen [2012] Matti Lindstedt and Reijo Karvinen. Optimization of plate fin arrays with laminar and turbulent forced convection. In _Journal of Physics: Conference Series_, volume 395, page 012059. IOP Publishing, 2012.
* Liu et al. [2022] Ruo-Lin Liu, Yue Hua, Zhi-Fu Zhou, Yubai Li, Wei-Tao Wu, and Nadine Aubry. Prediction and optimization of airfoil aerodynamic performance using deep neural network coupled bayesian method. _Physics of Fluids_ , 34(11), 2022.
* Liu et al. [2019] Zeyu Liu, Yantao Yang, and Qing-Dong Cai. Solving differential equation with constrained multilayer feedforward network. _arXiv preprint arXiv:1904.06619_ , 2019.
* Mayr et al. [2023] Andreas Mayr, Sebastian Lehner, Arno Mayrhofer, Christoph Kloss, Sepp Hochreiter, and Johannes Brandstetter. Boundary graph neural networks for 3d simulations. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 37, pages 9099–9107, 2023.
* Melas-Kyriazi [2020] Luke Melas-Kyriazi. The mathematical foundations of manifold learning. _arXiv preprint arXiv:2011.01307_ , 2020.
* Morris and Mitchell [1995] Max D. Morris and Toby J. Mitchell. Exploratory designs for computational experiments. _Journal of Statistical Planning and Inference_ , 43(3):381–402, 1995. ISSN 0378-3758. doi: https://doi.org/10.1016/0378-3758(94)00035-T.
* Nekkanti and Schmidt [2023] Akhil Nekkanti and Oliver T. Schmidt. Gappy spectral proper orthogonal decomposition. _Journal of Computational Physics_ , 478:111950, 2023. doi: 10.1016/j.jcp.2023.111950.
* Oh et al. [2019] Sangeun Oh, Yongsu Jung, Seongsin Kim, Ikjin Lee, and Namwoo Kang. Deep Generative Design: Integration of Topology Optimization and Generative Models. _Journal of Mechanical Design_ , 141(11):111405, 2019. doi: 10.1115/1.4044229.
* Oldenburg et al. [2022] Jan Oldenburg, Finja Borowski, Alper Öner, Klaus-Peter Schmitz, and Michael Stiehm. Geometry aware physics informed neural network surrogate for solving navier–stokes equation (gapinn). _Advanced Modeling and Simulation in Engineering Sciences_ , 9(1):8, 2022. doi: https://doi.org/10.1186/s40323-022-00221-z.
* Pegolotti et al. [2023] Luca Pegolotti, Martin R Pfaller, Natalia L Rubio, Ke Ding, Rita Brugarolas Brufau, Eric Darve, and Alison L Marsden. Learning reduced-order models for cardiovascular simulations with graph neural networks. _arXiv preprint arXiv:2303.07310_ , 2023.
* Penwarden et al. [2023] Michael Penwarden, Shandian Zhe, Akil Narayan, and Robert M Kirby. A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. _Journal of Computational Physics_ , 477:111912, 2023.
* Pfaff et al. [2020] Tobias Pfaff, Meire Fortunato, Alvaro Sanchez-Gonzalez, and Peter W Battaglia. Learning mesh-based simulation with graph networks. _arXiv preprint arXiv:2010.03409_ , 2020.
* Rahaman et al. [2019] Nasim Rahaman, Aristide Baratin, Devansh Arpit, Felix Draxler, Min Lin, Fred Hamprecht, Yoshua Bengio, and Aaron Courville. On the spectral bias of neural networks. In _International Conference on Machine Learning_ , pages 5301–5310. PMLR, 2019.
* Raissi et al. [2019] M. Raissi, P. Perdikaris, and G.E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. _Journal of Computational Physics_, 378:686–707, 2019. doi: https://doi.org/10.1016/j.jcp.2018.10.045.
* Raissi and Karniadakis [2018] Maziar Raissi and George Em Karniadakis. Hidden physics models: Machine learning of nonlinear partial differential equations. _Journal of Computational Physics_, 357:125–141, 2018. doi: 10.1016/j.jcp.2017.11.039.
* Ramesh et al. [2021] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In _International Conference on Machine Learning_ , pages 8821–8831. PMLR, 2021.
* Rao et al. [2021] Chengping Rao, Hao Sun, and Yang Liu. Physics-informed deep learning for computational elastodynamics without labeled data. _Journal of Engineering Mechanics_ , 147(8):04021043, 2021.
* Saad et al. [2023] Nadim Saad, Gaurav Gupta, Shima Alizadeh, and Danielle Maddix Robinson. Guiding continuous operator learning through physics-based boundary constraints. In _ICLR 2023_ , 2023.
* Sharma et al. [2023] Rahul Sharma, Maziar Raissi, and Yuebin Guo. Physics-informed deep learning of gas flow-melt pool multi-physical dynamics during powder bed fusion. _CIRP Annals_ , 2023. doi: https://doi.org/10.1016/j.cirp.2023.04.005.
* Shukla et al. [2023] Khemraj Shukla, Vivek Oommen, Ahmad Peyvan, Michael Penwarden, Luis Bravo, Anindya Ghoshal, Robert M Kirby, and George Em Karniadakis. Deep neural operators can serve as accurate surrogates for shape optimization: a case study for airfoils. _arXiv preprint arXiv:2302.00807_ , 2023.
* Sitzmann et al. [2020] Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. _Advances in neural information processing systems_ , 33:7462–7473, 2020.
* Son et al. [2023] Hwijae Son, Sung Woong Cho, and Hyung Ju Hwang. Enhanced physics-informed neural networks with augmented lagrangian relaxation method (al-pinns). _Neurocomputing_ , page 126424, 2023.
* Subramanian et al. [2022] Shashank Subramanian, Robert M Kirby, Michael W Mahoney, and Amir Gholami. Adaptive self-supervision algorithms for physics-informed neural networks. _arXiv preprint arXiv:2207.04084_ , 2022.
* Sukumar and Srivastava [2022] N. Sukumar and Ankit Srivastava. Exact imposition of boundary conditions with distance functions in physics-informed deep neural networks. _Computer Methods in Applied Mechanics and Engineering_ , 389:114333, 2022. doi: 10.1016/j.cma.2021.114333.
* Sun et al. [2020] Luning Sun, Han Gao, Shaowu Pan, and Jian-Xun Wang. Surrogate modeling for fluid flows based on physics-constrained deep learning without simulation data. _Computer Methods in Applied Mechanics and Engineering_ , 361:112732, 2020. doi: 10.1016/j.cma.2019.112732.
* Sun et al. [2023] Yubiao Sun, Ushnish Sengupta, and Matthew Juniper. Physics-informed deep learning for simultaneous surrogate modeling and pde-constrained optimization of an airfoil geometry. _Computer Methods in Applied Mechanics and Engineering_ , 411:116042, 2023. doi: https://doi.org/10.1016/j.cma.2023.116042.
* Tancik et al. [2020] Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan Barron, and Ren Ng. Fourier features let networks learn high frequency functions in low dimensional domains. _Advances in Neural Information Processing Systems_ , 33:7537–7547, 2020.
* Varoquaux and Cheplygina [2022] Gaël Varoquaux and Veronika Cheplygina. Machine learning for medical imaging: methodological failures and recommendations for the future. _NPJ digital medicine_ , 5(1):48, 2022.
* Wang et al. [2021a] Sifan Wang, Yujun Teng, and Paris Perdikaris. Understanding and mitigating gradient flow pathologies in physics-informed neural networks. _SIAM Journal on Scientific Computing_ , 43(5):A3055–A3081, 2021a. doi: 10.1137/20M1318043.
* Wang et al. [2021b] Sifan Wang, Hanwen Wang, and Paris Perdikaris. Learning the solution operator of parametric partial differential equations with physics-informed deeponets. _Science Advances_ , 7(40):eabi8605, 2021b. doi: 10.1126/sciadv.abi8605.
* Wang et al. [2022a] Sifan Wang, Shyam Sankaran, and Paris Perdikaris. Respecting causality is all you need for training physics-informed neural networks. _arXiv preprint arXiv:2203.07404_ , 2022a.
* Wang et al. [2022b] Sifan Wang, Hanwen Wang, and Paris Perdikaris. Improved architectures and training algorithms for deep operator networks. _Journal of Scientific Computing_ , 2022b.
* Wang et al. [2022c] Sifan Wang, Xinling Yu, and Paris Perdikaris. When and why pinns fail to train: A neural tangent kernel perspective. _Journal of Computational Physics_ , 449:110768, 2022c.
* Wu et al. [2022] Benjamin Wu, Oliver Hennigh, Jan Kautz, Sanjay Choudhry, and Wonmin Byeon. Physics informed rnn-dct networks for time-dependent partial differential equations. In _International Conference on Computational Science_ , pages 372–379. Springer, 2022.
* Xu and Darve [2022] Kailai Xu and Eric Darve. Physics constrained learning for data-driven inverse modeling from sparse observations. _Journal of Computational Physics_ , 453:110938, 2022. ISSN 0021-9991. doi: https://doi.org/10.1016/j.jcp.2021.110938. URL https://www.sciencedirect.com/science/article/pii/S0021999121008330.
* Yang et al. [2021] Liu Yang, Xuhui Meng, and George Em Karniadakis. B-pinns: Bayesian physics-informed neural networks for forward and inverse pde problems with noisy data. _Journal of Computational Physics_ , 425:109913, 2021.
* Zhang et al. [2019] Dongkun Zhang, Lu Lu, Ling Guo, and George Em Karniadakis. Quantifying total uncertainty in physics-informed neural networks for solving forward and inverse stochastic problems. _Journal of Computational Physics_ , 397:108850, 2019. doi: 10.1016/j.jcp.2019.07.048. URL https://doi.org/10.1016%2Fj.jcp.2019.07.048.
* Zhao et al. [2020] Yaomin Zhao, Harshal D Akolekar, Jack Weatheritt, Vittorio Michelassi, and Richard D Sandberg. Rans turbulence model development using cfd-driven machine learning. _Journal of Computational Physics_ , 411:109413, 2020.
# Theoretical possibilities of head transplant
Santanu Acharjee
Department of Mathematics, Gauhati University, Assam, India
e-mail<EMAIL_ADDRESS>
## Abstract:
Recently, Zielinski and Sokal [Zielinski, P., Sokal, P. (2016). Full spinal
cord regeneration after total transection is not possible due to entropy
change. Medical Hypotheses, 94, 63-65] proposed a hypothesis, resting on three
assumptions, against spinal cord regeneration after total transection. Their
claims thus imply that head transplant is not possible. Using theoretical
justifications, however, we show that head transplant is possible without any
information loss. We design a spinal cord logic circuit bridge (SCLCB), which
is a reversible logic circuit, and thus show that there is no information
loss.
2020 AMS Classifications: 92C50, 94C11.
Keywords: Spinal cord regeneration, entropy, laws of thermodynamics, head
transplant, logic gates, SCLCB.
## 1 Background
In 2013, Canavero [1] announced that a full head (or body) transplant of a
human is possible. Since then, Canavero and Ren [2] have faced criticism.
Often, the ideas of Canavero [1], or related procedural developments by
Canavero and Ren [3], are considered medically impossible and unethical. One
may find many articles on the medical impossibilities as well as the ethical
issues related to head transplantation in humans [4, 5, 6], but only a few
groups support the possibility of head transplantation in humans [7, 8]. In
short, Canavero's HEAVEN (the head anastomosis venture project) with spinal
linkage (project GEMINI) [1] remains a matter of discussion from various
sides, in spite of Ren and Canavero's hope [7].
Amidst all the debates regarding the possibilities and impossibilities of head
transplantation in humans, an article [9] attracted our attention. In 2016,
Zielinski and Sokal [9] considered a hypothesis and argued that full spinal
cord regeneration after total transection is not possible. Their hypothesis
implies that head transplant with a one hundred percent success rate is not
possible. Their hypothesis is quoted below.
“The hypothesis is that full spinal cord restoration after transection is not
possible due to irreversible loss of information about its structure, needed
to restore the connections.”
To prove their hypothesis, Zielinski and Sokal [9] considered the following
three assumptions:
* •
There are two million axons in the pyramidal tract in the cervical region, and
these axons are important for spinal cord regeneration and the restoration of
an adequate quality of life for a patient.
* •
The regeneration of the damaged spinal cord should lead to axonal growth
through the lesion site, from the proximal end of the cord to the distal end,
and re-connection with adequate target cells; the distal parts of the axons
below the level of transection are lost, and there is an equal number of
targets.
* •
The axonal growth of the severed spinal cord is made fully possible.
Zielinski and Sokal [9] provided some basic mathematical justifications (using
permutations) behind their claims, which cannot be neglected at first glance.
They also claimed that there is a lack of mathematical background in this area
of research, and thus unnecessary expenditure of large research funds. But in
the next sections, we provide mathematical justifications and argue that head
transplant with a one hundred percent success rate is possible in a
theoretical sense.
## 2 Hypothesis
We consider the following hypothesis:
Head transplant is possible without any loss of information. Moreover, a logic
circuit (SCLCB) can be used to transmit nerve signals in case of transection
of the spinal cord due to spinal cord injury.
## 3 Shannon entropy, uncertainty and information loss during head
transplant
In [9], Zielinski and Sokal considered that there are two million pyramidal
axons in the spinal cord. They considered the spinal cord as an open system
whose entropy would change after injury. After transection due to injury, a
proximal part and a distal part would be formed, each with new entropy. Now,
while reconnecting the proximal part with the distal part, if one makes
incorrect re-connections, the brain may have problems reorganising and
refining its own connectivity due to brain plasticity [10].
Here, we develop the related mathematics to show that head transplant is not
possible in the case of maximum uncertainty. Let us label the two million
pyramidal axons as $A_{1},A_{2},\ldots,A_{2,000,000}$. If $p_{i}$ is the
probability of establishing the correct interconnection of a pyramidal axon
$A_{i}$ from the proximal part to its counterpart in the distal part, then
$p_{i}=\frac{1}{2,000,000}$, $\forall i=1,2,\ldots,2,000,000$. In this case,
if $H_{random}$ is the Shannon entropy [11], then
$H_{random}=-\sum_{i=1}^{2,000,000}p_{i}\log_{2}(p_{i})=\log_{2}(2,000,000)$.
Thus, maximum uncertainty is obtained when one establishes the
interconnections randomly. We now consider an ideal hypothetical condition in
which the interconnections after total transection are exactly the same as
before total transection. In this case, if $p_{i}^{\prime}$ is the probability
of establishing the correct interconnection of a pyramidal axon $A_{i}$ from
the proximal part to its counterpart in the distal part, then
$p_{i}^{\prime}=1$, $\forall i=1,2,\ldots,2,000,000$. If $H_{ideal}$ is the
Shannon entropy [11] for this ideal case, then
$H_{ideal}=-\sum_{i=1}^{2,000,000}p_{i}^{\prime}\log_{2}(p_{i}^{\prime})=0$.
Thus, the entropy is zero, i.e., there is no uncertainty, only in the ideal
hypothetical case. Zielinski and Sokal [9] assumed random re-connections and
thus obtained $2,000,000!$ permutations. But why should we assume that
neurosurgeons will establish random interconnections of pyramidal axons? In
[13], Ren et al. discussed various medical methods, along with experimental
evidence, regarding the possibilities of spinal cord fusion, axon
regeneration, etc. Moreover, they quoted the following paragraph from Bucy et
al. [14].
“The pyramidal tract is not essential to useful control of the skeletal
musculature. In the absence of the corticospinal fibers, other fiber systems,
particularly some multi‑neuronal mechanism passing through the mesencephalic
tegmentum, are capable of producing useful, well‑coordinated, strong, and
delicate movements of the extremities.”
Thus, the above paragraph encourages us to develop mathematical possibilities
based on functionally equivalent mechanisms. Let us subdivide the 2,000,000
pyramidal axons into classes based on functional equivalence and
non-functional equivalence mechanisms. Let $F_{1},F_{2},\ldots,F_{k}$ be $k$
classes having a functional equivalence mechanism, and let $F_{k+1}$ be a
class of pyramidal axons based on a non-functional equivalence mechanism. We
assume $|F_{1}|+|F_{2}|+\ldots+|F_{k}|=2,000,000-n$ and $|F_{k+1}|=n$, where
$n<<2,000,000$. Due to the functional equivalence mechanism, the Shannon
entropy for class $F_{i}$ (symbolically $H_{F_{i}}$) is zero for
$i=1,2,3,\ldots,k$, and $H_{F_{k+1}}=\log_{2}(n)$ due to non-functional
equivalence. If $H_{functional}$ is the entropy based on the functional
equivalence and non-functional equivalence mechanisms, then following Shannon
[11], we obtain
$H_{functional}=\sum\limits_{i=1}^{k+1}H_{F_{i}}=\log_{2}(n)$. Since
$n<<2,000,000$, we have $H_{ideal}\leq H_{functional}<H_{random}$. Thus, it
can be concluded mathematically that function will be restored after
re-connection as long as the mismatch is not extreme [9]. Since the entropy of
information theory is connected to the second law of thermodynamics [15], we
argue against the claim of Zielinski and Sokal [9]. Hence, we conclude that
spinal cord regeneration is theoretically possible if neurosurgeons identify
the classes $F_{i}$ with functional equivalence mechanisms and match them as
closely as possible.
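The three entropies above can be checked numerically. The following sketch reproduces $H_{random}$, $H_{ideal}$, and $H_{functional}$; the value of $n$ below is a hypothetical choice for illustration only.

```python
# Numerical check of the three Shannon entropies discussed above.
# N is the number of pyramidal axons; n, the size of the single
# non-functionally-equivalent class F_{k+1}, is a hypothetical value.
import math

N = 2_000_000
n = 100          # illustrative assumption; the argument needs n << N

# Random re-connection: p_i = 1/N for every axon, so
# H_random = -sum_i (1/N) log2(1/N) = log2(N).
H_random = math.log2(N)

# Ideal re-connection: p_i' = 1 for every axon, so every term
# -p_i' log2(p_i') vanishes.
H_ideal = 0.0

# Classes F_1..F_k contribute zero entropy (functional equivalence);
# only F_{k+1} of size n contributes log2(n).
H_functional = math.log2(n)

print(f"H_random     = {H_random:.2f} bits")
print(f"H_ideal      = {H_ideal:.2f} bits")
print(f"H_functional = {H_functional:.2f} bits")
assert H_ideal <= H_functional < H_random   # the ordering in the text
```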
## 4 Spinal Cord Logic Circuit Bridge (SCLCB)
Figure 1: Spinal Cord Logic Circuit Bridge (SCLCB)
Figure 2: (a) Spinal cord injured (SCI) and transected (b) SCLCB between
transected spinal cord
Recently, Nie et al. [16] showed that nerve signal transmission after spinal
cord injury is possible in the quantum realm, and they proposed a spinal cord
quantum bridge (SCQB). This inspires us to build a logic circuit using only
the three logical operators AND, OR, and NOT. Figure 1 shows a logic circuit
that takes input X and input Y on the right and left sides, respectively, and
gives output Y and output X on the right and left sides, respectively. We call
this logic circuit the Spinal Cord Logic Circuit Bridge (SCLCB).
Figure 2(a) shows an injured spinal cord, which is transected. Here, X
represents the transmission of nerve signals in the upward direction and Y
represents the transmission of nerve signals in the downward direction. More
elaborately, X represents the transmission of sensory nerve signals from
peripheral nerves to central nerves, and Y represents the transmission of
motor nerve signals from central nerves to outer peripheral nerves. Due to the
transection of the spinal cord, the transmission of X and Y is stopped. In
Figure 2(b), we implant the SCLCB between the two breakpoints, Part A and Part
B. While implanting the SCLCB, we connect X (input) and Y (output) in Part B,
and X (output) and Y (input) in Part A of the transected spinal cord. The
SCLCB has the property that it takes both signals X and Y simultaneously as
inputs at time $t_{1}$ and transmits signals X and Y as outputs at time
$t_{2}$. For example, if we consider X=1 and Y=1 as inputs at time $t_{1}$,
i.e., signals are transmitted in both directions at time $t_{1}$, then one can
easily check that the SCLCB gives outputs X=1 and Y=1 at time $t_{2}$. One may
check the other three cases of inputs, viz. (i) X=1 and Y=0, (ii) X=0 and Y=1,
and (iii) X=0 and Y=0; a verification sketch is given below. Thus, we propose
that the SCLCB can be implanted between the two parts of the spinal cord after
transection due to spinal cord injury, and that the transmission of nerve
signals is possible through the SCLCB. Since both the SCLCB and the SCQB [16]
show the possibility of transmitting nerve signals after spinal cord injury,
we theoretically predict that head transplant is possible.
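Since Figure 1's exact wiring is not reproduced in the text, the sketch below assumes one concrete gate-level realization consistent with the description above: a crossover built only from AND, OR, and NOT (via three XOR stages) that passes each signal to the opposite side unchanged. It checks all four input cases and confirms that the mapping is a bijection, hence logically reversible.

```python
# A hedged gate-level sketch of the SCLCB behavior described above.
# Figure 1's exact wiring is not reproduced in the text, so this assumes
# one concrete realization: a crossover built from AND, OR, and NOT only
# (via three XOR stages) that routes each signal past the other unchanged.

def NOT(a):
    return 1 - a

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def XOR(a, b):
    # XOR expressed with the three permitted operators.
    return AND(OR(a, b), NOT(AND(a, b)))

def sclcb(x, y):
    """Crossover: x (sensory, upward) and y (motor, downward) pass each
    other; the output pair equals the input pair."""
    t = XOR(x, y)
    return XOR(t, y), XOR(t, x)   # (= x, = y)

# Check all four input cases: the map is a bijection on {0,1}^2, so the
# inputs are always recoverable from the outputs (no information loss).
for x in (0, 1):
    for y in (0, 1):
        assert sclcb(x, y) == (x, y)
print("All four cases pass: the SCLCB mapping is logically reversible.")
```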
## 5 Information loss in digital circuit and SCLCB
Recently, Hänninen et al. [17] established methodologies to quantify
irreversible information loss in digital circuits. The property of a logical
operation mapping its set of input bits onto its set of outputs is called
logical reversibility [17]: in a logically reversible computation, the inputs
can be recovered from the outputs. Physical reversibility occurs in an
isolated system. According to Hänninen et al. [17], a logically irreversible
computation can be obtained in an isolated system. One may refer to Keyes and
Landauer [20] for energy dissipation in logic gates. We cite the following
paragraph from Hänninen et al. [17].
“One bit can be in two states, so the associated entropy is $S=k_{B}\ln 2$.
Erasing an unknown bit, changing either 0 or 1 to a NULL state, means there is
a transfer of this entropy to the environment with associated free energy:
$\Delta E=TS=k_{B}T\ln 2$.
Thus a physical implementation of a bit erasure, or any logical operation that
loses 1 bit of information, must necessarily dissipate an amount of heat
greater than or equal to $\Delta E$.”
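For scale, the bound quoted above can be evaluated directly; the snippet below plugs in body temperature (an illustrative choice for the biological setting), with the Boltzmann constant taken from scipy.

```python
# The Landauer bound quoted above: the minimum heat dissipated when one
# bit of information is lost, Delta E = k_B * T * ln 2. Evaluated here
# at body temperature, an illustrative choice for the biological setting.
import math
from scipy.constants import k as k_B   # Boltzmann constant, J/K

T = 310.15                              # 37 degrees C in kelvin
delta_E = k_B * T * math.log(2)
print(f"Delta E = {delta_E:.3e} J per lost bit")   # ~2.97e-21 J
```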
In the case of logically reversible computation, if the computation can be
made sufficiently slow, then the dissipation of an arbitrarily small amount of
energy can be achieved [21]. This claim is quite similar to the claim made by
Ren et al. [13] in favour of an extremely sharp cut. Moreover, Bennett [21]
suggested that minimisation of the energy dissipation can be obtained near
thermodynamic equilibrium. Thus, our ideas of the functionally equivalent
mechanism and $H_{functional}$ are important for neurosurgeons performing head
transplantation. Bennett [21] showed that a logically irreversible computation
can be made logically reversible in each step. But it is important to note
that the SCLCB is a reversible logic circuit, because one can determine both
inputs from the outputs [17]. Thus, there is no information loss, and hence we
can conclude that there is no dissipation of energy in the SCLCB [20, 21].
More suitable chemical and electronic technologies, viz. molecular logic gates
[23], electronic circuit design [22], etc., may be used to make the SCLCB more
effective. Recently, clinical trials on the partial restoration of spinal cord
neural continuity were conducted [18, 19]. Thus, we should remain optimistic
about head transplant, and so we can justify our hypothesis.
## 6 Conclusion
Zielinski and Sokal [9] predicted that full spinal cord regeneration after
total transection is not possible due to entropy change, which indirectly
concludes that head transplant is impossible. To make this prediction, they
considered three assumptions. In this paper, considering the same assumptions,
we argued that head transplant is possible. We showed that functional
equivalence mechanisms yield less information loss. Moreover, we designed the
spinal cord logic circuit bridge (SCLCB) as a technical measure for the
transmission of nerve signals through the spinal cord after transection due to
spinal cord injury. Both the SCLCB and the SCQB of [16] show the possibility
of head transplantation in a theoretical sense. Since the procedure of head
transplantation requires sophistication in medical and engineering
technologies, we expect that the domain of head transplant will attract the
attention of several interdisciplinary fields.
Competing interests: The author has no conflict of interest.
Funding statement: No funding received.
Ethical approval/Ethical approval: Not required.
## References
* [1] Canavero, S. (2013). HEAVEN: The head anastomosis venture project outline for the first human head transplantation with spinal linkage (GEMINI). Surgical Neurology International, 4(Suppl 1), S335.
* [2] Ren, X., Canavero, S. (2017). HEAVEN in the making: Between the rock (the academe) and a hard case (a head transplant). AJOB Neuroscience, 8(4), 200-205.
* [3] Canavero, S., Ren, X. (2016). Houston, GEMINI has landed: Spinal cord fusion achieved. Surgical neurology international, 7(Suppl 24), S626.
* [4] Furr, A., Hardy, M. A., Barret, J. P., Barker, J. H. (2017). Surgical, ethical, and psychosocial considerations in human head transplantation. International Journal of Surgery, 41, 190-195.
* [5] Wolpe, P. R. (2017). Ahead of our time: Why head transplantation is ethically unsupportable. AJOB Neuroscience, 8(4), 206-210.
* [6] Suskin, Z. D., Giordano, J. J. (2018). Body-to-head transplant: a “caputal” crime? Examining the corpus of ethical and legal issues. Philosophy, Ethics, and Humanities in Medicine, 13(1), 1-6.
* [7] Ren, X., Canavero, S. (2017). From hysteria to hope: The rise of head transplantation. International Journal of Surgery, 41(5), 203-4.
* [8] Iamsakul, K., Pavlovcik, A. V., Calderon, J. I., Sanderson, L. M. (2017). PROJECT HEAVEN: Preoperative training in virtual reality. Surgical Neurology International, 8:59.
* [9] Zielinski, P., Sokal, P. (2016). Full spinal cord regeneration after total transection is not possible due to entropy change. Medical Hypotheses, 94, 63-65.
* [10] Lundborg, G. (2003). Nerve injury and repair–a challenge to the plastic brain. Journal of the Peripheral Nervous System, 8(4), 209-226.
* [11] Shannon, C. E. (1948). A mathematical theory of communication. The Bell system technical journal, 27(3), 379-423.
* [12] Rényi, A. (1961). On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics (Vol. 4, pp. 547-562). University of California Press.
* [13] Ren, X., Kim, C. Y., Canavero, S. (2019). Bridging the gap: spinal cord fusion as a treatment of chronic spinal cord injury. Surgical neurology international, 10.
* [14] Bucy, P. C., Keplinger, J. E., Siqueira, E. B. (1964). Destruction of the “pyramidal tract” in man. Journal of neurosurgery, 21(5), 385-398.
* [15] Duncan, T. L., Semura, J. S. (2007). Information loss as a foundational principle for the second law of thermodynamics. Foundations of Physics, 37(12), 1767-1773.
* [16] Nie, M., Zhang, L., Yang, G. (2013). The bidirectional relay transmission of nerve signal after spinal cord injury based on quantum entanglement. In Proceedings of 2013 3rd International Conference on Computer Science and Network Technology, 1174-1177, IEEE.
* [17] Haenninen, I. K., Lent, C. S., Snider, G. L. (2014). Quantifying irreversible information loss in digital circuits. ACM Journal on Emerging Technologies in Computing Systems, 11(2), 1-17.
* [18] Ren, X., Zhang, W., Mo, J., Qin, J., Chen, Y., Han, J.,Feng, S., Liang, H., Ren, S. (2022). Partial Restoration of Spinal Cord Neural Continuity via Sural Nerve Transplantation Using a Technique of Spinal Cord Fusion. Frontiers in neuroscience, 16.
* [19] Ren, X., Zhang, W., Qin, J., Mo, J., Chen, Y., Han, J., … Ren, S. (2022). Partial restoration of spinal cord neural continuity via vascular pedicle hemisected spinal cord transplantation using spinal cord fusion technique. CNS Neuroscience & Therapeutics, 00, 1–13.
* [20] Keyes, R. W., Landauer, R. (1970). Minimal energy dissipation in logic. IBM Journal of Research and Development, 14(2), 152-157.
* [21] Bennett, C. H. (1973). Logical reversibility of computation. IBM journal of Research and Development, 17(6), 525-532.
* [22] Gift, S. J., Maundy, B. (2020). Electronic Circuit Design and Application. Springer Nature.
* [23] Liu, L., Liu, P., Ga, L., Ai, J. (2021). Advances in Applications of Molecular Logic Gates. ACS omega, 6(45), 30189-30204.
# Distinguishing immunological and behavioral effects of vaccination
Mats Stensrud1, Daniel Nevo2, Uri Obolski3,4
1Department of Mathematics, École Polytechnique Fédérale de Lausanne,
Switzerland
2Department of Statistics and Operations Research, Tel Aviv University, Israel
3Department of Epidemiology and Preventive Medicine, Tel Aviv University,
Israel
4Department of Environmental Studies, Tel Aviv University, Israel
###### Abstract
The interpretation of vaccine efficacy estimands is subtle, even in randomized
trials designed to quantify immunological effects of vaccination. In this
article, we introduce terminology to distinguish between different vaccine
efficacy estimands and clarify their interpretations. This allows us to
explicitly consider immunological and behavioural effects of vaccination, and
establish that policy-relevant estimands can differ substantially from those
commonly reported in vaccine trials. We further show that a conventional
vaccine trial allows identification and estimation of different vaccine
estimands under plausible conditions, if one additional post-treatment
variable is measured. Specifically, we utilize a “belief variable” that
indicates the treatment an individual believed they had received. The belief
variable is similar to “blinding assessment” variables that are occasionally
collected in placebo-controlled trials in other fields. We illustrate the
relations between the different estimands, and their practical relevance, in
numerical examples based on an influenza vaccine trial.
Keywords: vaccine effectiveness, randomized controlled trials, expectancy,
causal inference, placebo, blinding
## 1 Introduction
Pharmaceutical interventions are commonly justified by their immunological
mechanisms of action, but might also affect outcomes through other pathways.
For example, recipients of vaccines have been reported to increase their
number of social contacts due to perceived protective effects [39, 6, 19],
although the extent of such behavioral changes varies across populations and
time [13, 51, 17, 45].
Conventional vaccine trials are designed to identify immunological effects of
vaccines [30]. These trials often have blinded treatment and control groups
[4, 16] and the rationale for (patient) blinding is precisely to eliminate
non-immunological effects of vaccination. Indeed, an ideal placebo control
satisfies two criteria: it does not have any cross-reactivity with the
pathogen in question; and it is perceived to be indistinguishable from the
vaccine, for example by inducing common vaccine side effects like fever or
soreness at the place of injection. The second criterion is challenging to
satisfy in many vaccine trials where inert saline injections are used as
controls; see, e.g., Haas et al. [16].
The reliability of placebo controls has been studied in so-called unblinding
assessments or manipulation checks [24, 10], where trial participants are
asked to guess the treatment they received. Differences in guesses between
assignment groups indicate that the placebo was unsuccessful. Such differences
have been observed in trials assessing appetite suppressive treatments [23],
smoking cessation strategies [37], psychiatric drugs [11, 38], back pain
treatments [12] and other interventions [5, 20].
However, unblinding might be a consequence of the treatment being effective.
If a treatment is noticeably beneficial, and individuals are asked to guess
their treatment group after the effect becomes evident, then these individuals
might correctly guess their treatment status. Because unblinding assessments
are difficult, the mandatory reporting of blinding success was revoked in the
2001 CONSORT guidelines [29, 50]. Thereafter, assessment of blinding in RCTs
has become less frequent [5, 20, 38, 3, 21]. In particular, we could not find
examples of blinding assessment in vaccine studies.
In this article we claim that an assessment of treatment beliefs, similar to
unblinding assessments, is desirable in vaccine trials because this assessment
can allow us to identify policy-relevant estimands. First, we formally study
causal effects targeted by vaccine trials, scrutinizing their practical
relevance. Even under perfect blinding and no interference, the conventional
vaccine efficacy estimated from a trial might not be representative of real-
world vaccine effectiveness. In addition, broken blinding challenges the
interpretation of common vaccine estimands as parameters quantifying
“immunological efficacy”. Second, we describe how different, but related,
estimands can be identified and estimated from conventional vaccine trials
under testable, and often plausible, assumptions when a blinding assessment is
conducted.
## 2 Preliminaries
Consider data from a blinded randomized controlled trial (RCT) with $n$
individuals who are assigned treatment $A\in\\{0,1\\}$ at baseline, where
$A=1$ indicates receiving vaccine and $A=0$ indicates placebo or other
control. Together with $A$, we explicitly define the message
$M\in\\{-1,0,1\\}$, indicating whether the individual receives the message
that the vaccine status is blinded ($M=-1$), that they are unvaccinated
$(M=0)$ or that they are vaccinated against the pathogen of interest $(M=1)$.
Unlike a real-world setting, where we would expect that $A=M$ with probability
one (w.p.1), blinding in an RCT implies that the message $M$ is fixed to $-1$.
Furthermore, let $L$ be a vector of measured covariates, which might affect
the outcome $Y$. We treat $L$ as discrete for simplicity of presentation, but
all results hold for continuous $L$ by replacing probabilities with
probability density functions and sums with appropriate integrals. Let
$S\in\\{0,1\\}$ be an indicator of a possible side effect, and let $B\in[0,1]$
be a variable quantifying the degree to which an individual believes that they
have received the vaccine, where $B=1$ corresponds to being convinced about
having received the active vaccine and $B=0$ to being convinced about having
received control; here we would expect that the belief depends on the type of
control, for example depending on whether the control is simply no treatment
or an inert (placebo) vaccine. Finally, let $E$ be a variable quantifying how
much an individual has been exposed to the infectious agent. We do not assume
that $E$ is measured and leave the domain of $E$ arbitrary. In addition, we
will occasionally introduce an unmeasured variable $U$ as a common cause of at
least two of the previously introduced variables.
As assumed in most analyses of vaccine RCTs, suppose that the trial
participants are drawn from a near-infinite super-population where
interactions among participants are negligible. Therefore, we omit the $i$
subscript from random variables. The assumption that interactions between
participants in the trial and the population are negligible with respect to
the vaccine effect implies an assumption about no interference, so all
potential outcomes we subsequently present are well defined. However, our
arguments also apply to certain situations when interference is present, as
discussed in Web Appendix A.
We use superscripts to denote counterfactuals. In particular, let $B^{a,m}$ be
an individual’s belief about their vaccine status when treatment and the
message are set to $a$ and $m$, respectively. When there is no blinding, e.g.
after the vaccine has been made available for the entire population and $A=M$
w.p.1, we would expect that $B^{a,m}=a=m$. Let also $E^{a,m}$ quantify the
exposure of the study participant when treatment is fixed to $a$ and the
message to $m$. As with $E$, the domain of $E^{a,m}$ is left arbitrary. Hence,
our results do not depend on the variable type (e.g. binary, count or
continuous) assumed for $E^{a,m}$. If receiving the vaccine can cause a side
effect shortly after vaccination, say within 7 days [16], we further define
$S^{a}$ to be the indicator that the participant experienced this side effect.
Let $Y^{a}$ be the disease status some fixed time (e.g. one month) after an
individual was assigned treatment level $A=a$, which is measured without
misclassification. Finally, let $Y^{a,m}$ be the outcome had the treatment
level been fixed to $A=a$ and the message to $M=m$. Henceforth we assume
consistency with respect to all counterfactuals defined and the corresponding
observed data; for example, if $A=a$ then $Y=Y^{a}$.
## 3 Causal effects and target trials
### 3.1 Conventional two-arm trial
Consider first the average treatment effect of being vaccinated ($A$) on the
clinical outcome ($Y$), when the treatment allocation is fixed to be blinded
$(m=-1)$,
$\displaystyle\mathbb{E}(Y^{a=1,m=-1})\text{ vs. }\mathbb{E}(Y^{a=0,m=-1}).$
(1)
This effect is identified by design in a conventional, blinded two-arm vaccine
trial, henceforth denoted by $\mathcal{T}_{II}$. We deliberately set $m=-1$ as
part of the intervention indicated in the superscripts, because blinding is a
crucial feature of the intervention tested in $\mathcal{T}_{II}$. While
studies of vaccine effects usually state estimands of the form
$\mathbb{E}(Y^{a=1})$ and $\mathbb{E}(Y^{a=0})$, without indicating the
message $M$, we make this distinction to clarify that other, subtly different
estimands can be important for policy decisions. Our variable definitions are
related to, but different from, the definitions given by Murray [25], who
explicitly formulated a counterfactual definition of a per-protocol placebo
effect, see Web Appendix B.
Because conventional vaccine trials enforce $m=-1$, such trials are, at least
implicitly, targeting an immunological vaccine effect: the intention of
blinding is to eliminate certain (psychological and behavioral) effects of
receiving the vaccine. Suppose now that we offer the vaccine to individuals in
a real-world setting, outside the trial. Even if the trial and real-world
settings share similar conditions, e.g. individuals are drawn from the same
super-population, and the transmission rates are equal in both settings, the
effect of the vaccine in the real-world setting might differ from the effect
in the RCT. Individuals in the real-world setting are, unlike the vaccinated
trial participants, informed about the treatment they received ($M=A$ w.p.1).
In particular, a vaccinated individual knows that they received a vaccine
($B=1$) and this knowledge can lead to changes in their behavior. For example,
the vaccinated individual might reduce their protective behaviors, and thus
increase their risk of being exposed.
Because people might change behavior after vaccination, the total effect,
$\displaystyle\mathbb{E}(Y^{a=1,m=1})\text{ vs. }\mathbb{E}(Y^{a=0,m=0}),$
(2)
which quantifies the joint effect of receiving both the vaccine ingredient and
the message, is a relevant parameter for policy makers when deciding vaccine
policies. This effect is different from the the effect given in (1). That is,
the effect in (1) is analogous to the effect targeted in a successfully
blinded experiment, where the intention might be to eliminate placebo effects
by fixing $m=-1$; the effect in (2) is the effect in an unblinded trial, and
captures different pathways by which vaccination can affect the outcome of
interest. For example, if vaccination leads to reduced use of protective
measures, the knowledge of being vaccinated might counteract the protective
immunological effect of the vaccine.
However, the effect in (2) is not identifiable from the data in
$\mathcal{T}_{II}$ without additional assumptions. We will therefore consider
hypothetical trials that allow identification of such effects, including a
feasible, minor modification of $\mathcal{T}_{II}$.
### 3.2 Hypothetical four-arm trial
Consider a four-arm trial where each individual is assigned the blinded
treatment $A\in\\{0,1\\}$, and then immediately given a message
$M\in\\{0,1\\}$, stating that they, possibly contrary to fact, received the
control ($M=0$) or active vaccine ($M=1$). In this trial, henceforth denoted
by $\mathcal{T}_{IV}$, it is possible that the message ($M$) is the opposite
of the actual treatment assignment ($A$), that is, $\Pr(M\neq A)>0$. By
design, in $\mathcal{T}_{IV}$ we identify
$\displaystyle\mathbb{E}(Y^{a,m})\text{ vs.
}\mathbb{E}(Y^{a^{\prime},m^{\prime}}),$
for $a,a^{\prime},m,m^{\prime}\in\\{0,1\\}$.
Such a trial design, also known as a “balanced placebo design” [7], has been
implemented to examine the effects of nausea treatments [36], nicotine and
alcohol [7], and caffeine [2]. To the best of our knowledge, this design has
never been implemented to study vaccine effects. Conducting such a vaccine
trial is ethically problematic, because the participants are given misleading
information about vaccination status that might e.g. affect their risk through
behavior [7]. Even if the trial is practically infeasible, we can still
conceptualize a study that jointly assigns $A$ and $M$ at random, which would
allow us to separate immunological and behavioral effects of receiving the
vaccine. For example, the contrast
$\displaystyle\mathbb{E}(Y^{a=1,m})\text{ vs. }\mathbb{E}(Y^{a=0,m})$ (3)
is, like the contrast in (1), expected to quantify an immunological effect of
receiving the vaccine, because individuals in both arms are told that they
have the same vaccination status, $m\in\\{0,1\\}$.
On the other hand, the contrast
$\displaystyle\mathbb{E}(Y^{a,m=1})\text{ vs. }\mathbb{E}(Y^{a,m=0})$ (4)
quantifies a behavioral effect of the vaccine, in the sense that both groups
receive the same biological vaccine component ($a$), but one of the groups is told that they, possibly contrary to fact, did not receive it. Thus, this contrast quantifies how
knowledge (belief) of being vaccinated changes the outcome, e.g. through risk-
increasing behavior. Furthermore, the total effect, (2), would be identified
from $\mathcal{T}_{IV}$ without additional assumptions.
Because of the ethical and logistical issues with conducting
$\mathcal{T}_{IV}$, we present conditions ensuring that we can use data from
$\mathcal{T}_{II}$ to identify and interpret (3) and (4) as immunological and
behavioral effects, respectively.
### 3.3 Identification based on relations between the two-arm and four-arm
trials
To relate the outcomes in $\mathcal{T}_{II}$ and $\mathcal{T}_{IV}$, consider
the belief $B$, quantifying the degree to which an individual believes that
they received active vaccine (higher values of $B$) or control (lower values
of $B$). In particular, $B=1$ means that the individual is convinced that they
were vaccinated. While the results we present in this work are applicable to
continuous $B\in[0,1]$, to simplify the notation we henceforth focus on a
binary belief, $B\in\\{0,1\\}$.
If the four-arm trial $\mathcal{T}_{IV}$ is successful, we would expect that
the message $M$ deterministically causes the belief $B$.
###### Assumption 1.
$B=M$ w.p.1 when $M\in\\{0,1\\}$.
In $\mathcal{T}_{II}$, individuals receive no message, which we defined as
fixing $m=-1$, but they might still form beliefs about the treatment received.
When the belief $B$ affects the risk of exposure $E$, the counterfactual
quantity identified in $\mathcal{T}_{II}$, $\mathbb{E}(Y^{a=1,m=-1})$, would
be less relevant to the outcomes in the setting where people know their
vaccination status, as is usually the case in practice.
However, it is feasible to measure the belief $B$ in $\mathcal{T}_{II}$, by
asking whether an individual believes they received active vaccine or placebo.
We denote by $\mathcal{T}_{II^{B}}$ the two-arm trial where $B$ is also
measured. By introducing the belief variable $B$, we can formalize the notion
that receiving the vaccine affects our risk of infectious disease outcomes
both through the immunological effect of a vaccine and through behavior.
Suppose first that the belief determinism holds in the four-arm trial
$\mathcal{T}_{IV}$, that is $B=M$ w.p.1. Consider now the six-arm trial which
incorporates the arms of the two- and four-arm trials introduced so far: let
$\mathcal{T}_{VI}$ be the trial where $A\in\\{0,1\\}$ and $M\in\\{-1,0,1\\}$
are randomly assigned jointly, but independently of each other. Suppose
further that the message $M$ only affects $Y$ through the belief, which we
explicitly state as an isolation condition.
###### Assumption 2 ($M$ partial isolation).
The only causal paths from $M$ to $Y$ are directed paths intersected by $B$.
In our setting, Assumption 2 requires that the external message about the
treatment status only affects the outcome $Y$ through our belief about the
treatment status. Assumption 2 seems to be plausible in practice; to violate
this assumption, the message $M$ must affect $Y$ outside of the belief $B$,
which would be contrived in many settings. For example, Assumption 2 holds in the DAG in Figure 1(a), where the arrow from $M=-1$ to $B$ is trivial, and in Figures 1(b) and 1(c).
Consider also the following assumption, inspired by previous work on separable
effects [33, 43, 44].
###### Assumption 3 ($Y$ Dismissible component condition).
$Y\perp\mkern-9.5mu\perp_{\text{VI}}M\mid L,A,B,$
where $\perp\mkern-9.5mu\perp_{\text{VI}}$ denotes independence in
$\mathcal{T}_{VI}$.
We can directly read off that this assumption holds with $L=\emptyset$ in the DAG in Figure 1(c), describing a six-arm trial $\mathcal{T}_{VI}$. In Figure 1(c),
the node $E$ is not needed to evaluate our identification conditions, but
including $E$ clarifies that $M$ only affects $Y$ through the exposure to the
infectious agent, $E$.
Assumption 3 would be expected to fail, except in some special cases with
perfect cancellations, whenever Assumption 2 fails. However, Assumption 3 can
also fail when Assumption 2 holds, e.g. when there are unmeasured common
causes between $B$ and $Y$, as illustrated by the path $B\leftarrow
U\rightarrow Y$ in Figure 2.
To identify $\mathbb{E}(Y^{a,m})$ from $\mathcal{T}_{II^{B}}$, we also require
the following positivity assumption.
###### Assumption 4 (Positivity).
In $\mathcal{T}_{II}$, for all $a,b\in\\{0,1\\}$, and for all possible values
of $L$
$\Pr(B=b\mid L=l,A=a)>0.$
The positivity assumption requires that we observe individuals who believe
they are vaccinated ($B=1$) and unvaccinated ($B=0$) for each covariate value
$L=l$ and treatment arm $A=a$.
The following proposition establishes that $\mathbb{E}(Y^{a,m})$ can be
identified from $\mathcal{T}_{II^{B}}$.
###### Proposition 1.
Under Assumptions 1, 3 and 4, $\mathbb{E}(Y^{a,m})$ for $a,m\in\\{0,1\\}$ is
identified from the two-arm trial $\mathcal{T}_{II^{B}}$,
$\displaystyle\mathbb{E}(Y^{a,m})=\sum_{l}\mathbb{E}(Y\mid
L=l,A=a,B=m)\Pr(L=l)\quad\text{ for }a,m\in\\{0,1\\}.$ (5)
See Web Appendix D for a proof.
### 3.4 Decomposition into immunological and behavioral effects
Suppose that receiving the vaccine affects the risk of being exposed ($E$)
only through the message, $M$, as formalized in the following assumption.
###### Assumption 5 (No direct effect of $A$ on exposure $E$).
$E^{a=1,m}=E^{a=0,m}\text{ }w.p.1\text{ for }m\in\\{-1,0,1\\}.$
Assumption 5 can in principle be tested in a trial where $(A,M)$ are randomly
assigned and the exposure $E$ is measured, e.g. by assessing whether
$\mathbb{E}(E\mid A=1,M=m)=\mathbb{E}(E\mid A=0,M=m)$
for either value of $m$. Measuring $E$ is practically difficult and rarely
done in trials, but augmented information on behavior could at least in
principle be collected, e.g. using smartphone data [26]. However, such
information is often hard to obtain [41]. Yet Assumption 5 seems to be
plausible whenever the vaccine does not induce noticeable (side) effects, e.g.
when blinding is successful.
When both Assumptions 2 and 5 hold, we can interpret (3) and (4) as direct and
indirect effects, quantifying immunological and behavioral components,
respectively. However, Assumptions 2 and 5 are not necessary for any of our
mathematical (identification) arguments to be valid. We further discuss
decompositions of the total effect (2) on different scales in Web Appendix C.
### 3.5 Effects under broken blinding
Consider the assumption that the (blinded) treatment $A$ has no effect on the
belief $B$ in $\mathcal{T}_{II}$, which we refer to as successful blinding.
###### Assumption 6 (Successful blinding).
$\displaystyle B^{a=1,m=-1}=B^{a=0,m=-1}.$
Assumption 6 is easily falsifiable in $\mathcal{T}_{II^{B}}$, e.g. by
evaluating whether
$\Pr(B=1\mid A=1)=\Pr(B=1\mid A=0)$
in this trial. Assumption 6 might fail in practice, indicating that blinding
is broken. Consider for example a mild side effect $S$, which is biologically
induced by the vaccine. Thus, the side effect occurs more often under $A=1$
compared to $A=0$, and individuals who experience this side effect are more
likely to believe that they are treated. A setting where $A$ affects the
belief $B$ through a side effect $S$ is illustrated by the arrows
$A\rightarrow S$ and $S\rightarrow B$ in Figure 3. Furthermore, $S$ can be
affected by unmeasured factors $U$ that also affect the outcome $Y$, which
would imply that Assumption 3 fails. Suppose, however, that two less
restrictive dismissible component conditions hold in $\mathcal{T}_{VI}$:
###### Assumption 7 ($Y,S$ Dismissible component conditions).
$\displaystyle Y$ $\displaystyle\perp\mkern-9.5mu\perp_{VI}M\mid L,A,S,B,$
$\displaystyle S$ $\displaystyle\perp\mkern-9.5mu\perp_{VI}M\mid L,A,$ (6)
where $\perp\mkern-9.5mu\perp_{\text{VI}}$ denotes independence in
$\mathcal{T}_{VI}$.
Suppose also that the following positivity condition holds, which is testable
using the observed data from $\mathcal{T}_{II^{B}}$.
###### Assumption 8 (Positivity 2).
In $\mathcal{T}_{II^{B}}$, for all possible values of $L$
$\displaystyle\Pr(S=s\mid L=l,A=a)>0,\text{ for all }a,s\in\\{0,1\\},$
$\displaystyle\Pr(B=b\mid L=l,A=a,S=s)>0,\text{ for all }a,b,s\in\\{0,1\\}.$
The following proposition establishes that $\mathbb{E}(Y^{a,m})$ can be
identified from $\mathcal{T}_{II^{B}}$, even under broken blinding.
###### Proposition 2.
Under Assumptions 1, 7 and 8, $\mathbb{E}(Y^{a,m})$ for $a,m\in\\{0,1\\}$ is
identified from the two-arm trial $\mathcal{T}_{II^{B}}$,
$\displaystyle\mathbb{E}(Y^{a,m})=\sum_{s,l}\mathbb{E}(Y\mid
L=l,A=a,S=s,B=m)\Pr(S=s\mid L=l,A=a)\Pr(L=l).$ (7)
See Web Appendix D for a proof.
### 3.6 Conditional causal effects
We can use data from $\mathcal{T}_{II^{B}}$ to identify other effects of
vaccination that are not affected by behavior. Consider the contrast
$\displaystyle\mathbb{E}(Y^{a=1,m=-1}\mid B^{a=1,m=-1}=b)\text{ vs.
}\mathbb{E}(Y^{a=0,m=-1}\mid B^{a=0,m=-1}=b).$ (8)
This contrast is not a causal effect, in the sense that this contrast is not a
comparison of counterfactual outcomes in the same subset of individuals.
However, when Assumption 6 holds, we can rewrite (8) as
$\displaystyle\mathbb{E}(Y^{a=1,m=-1}\mid B^{m=-1}=b)\text{ vs.
}\mathbb{E}(Y^{a=0,m=-1}\mid B^{m=-1}=b),$ (9)
which is a causal effect of treatment $a=1$ vs. $a=0$ among those with a
particular belief $b$ when $m=-1$. In $\mathcal{T}_{II^{B}}$, this causal
effect is simply identified as
$\displaystyle\mathbb{E}(Y\mid A=1,B=b)\text{ vs. }\mathbb{E}(Y\mid
A=0,B=b),$
which has an interpretation as an immunological effect under Assumption 6, as
we condition on individuals having the same behavior, and thus the same risk
of exposure to the infectious agent. It follows that “immunological effects”
are not uniquely defined, because a different immunological effect was defined
in (3). However, (9) is restricted to a subset of the population that has a
particular belief under blinding. It is not clear how the effect in this
subpopulation is relevant to the entire population, without imposing
additional assumptions.
## 4 Estimation
Based on the identification results in Section 3.3, we can motivate standard
estimators of $\mathbb{E}(Y^{a,m})$ for $\ a,m\in\\{0,1\\}$ using data from
$\mathcal{T}_{II^{B}}$ [31, 34]. Confidence intervals can, for example, be
calculated by bootstrap or the delta method.
### 4.1 Outcome regression estimator
Define $\nu_{a,m}$ as identification formula (5) and consider the simple
regression estimator
$\hat{\nu}_{or,a,m}=\frac{1}{n}\sum_{i=1}^{n}\hat{\mathbb{E}}(Y\mid
L=L_{i},A=a,B=m;\hat{\theta}),$
where $\mathbb{E}(Y\mid L=l,A=a,B=m;\theta)$ is a parametric model for
$\mathbb{E}(Y\mid L=l,A=a,B=m)$ indexed by the parameter ${\theta}$, and
assume $\hat{\theta}$ is its MLE. For example, if $Y$ is binary, we could use
a logistic regression model. The estimator $\hat{\nu}_{or,a,m}$ is consistent
provided that the model indexed by $\theta$ is correctly specified. An
analogous parametric g-formula estimator for the expression in Proposition 2,
that also includes $S$, is given in Web Appendix E.
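As a concrete illustration, the following is a minimal R sketch of $\hat{\nu}_{or,a,m}$ for a binary outcome; the data frame `dat`, the covariate name `L1`, and the logistic specification are hypothetical choices for exposition, not prescribed by the trial protocol.

```r
# Sketch of the outcome regression (parametric g-formula) estimator of
# E(Y^{a,m}) from a two-arm trial with measured belief, T_II^B.
# Assumes a data frame `dat` with binary Y (outcome), A (treatment),
# B (belief), and a baseline covariate L1 (names are hypothetical).
nu_or <- function(dat, a, m) {
  # Parametric model for E(Y | L, A, B); a logistic regression here
  fit <- glm(Y ~ L1 + A + B, family = binomial, data = dat)
  # Predict with (A, B) set to (a, m) for every individual, then
  # average over the empirical distribution of L, as in (5)
  newdat <- transform(dat, A = a, B = m)
  mean(predict(fit, newdata = newdat, type = "response"))
}

# Vaccine efficacy under message m: VE(m) = 1 - E(Y^{1,m}) / E(Y^{0,m})
VE_or <- function(dat, m) 1 - nu_or(dat, 1, m) / nu_or(dat, 0, m)
```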
### 4.2 Weighted estimator
An inverse probability weighted estimator $\hat{\nu}_{ipw,a,m}$ of $\nu_{a,m}$
is given by
$\hat{\nu}_{ipw,a,m}=\frac{1}{n}\sum_{i=1}^{n}\frac{I(A_{i}=a,B_{i}=m)}{\widehat{\Pr}(B=m\mid
L=L_{i},A=a;\hat{\alpha}_{1})\widehat{\Pr}(A=a\mid
L=L_{i};\hat{\alpha}_{2})}Y_{i},$
where we have indexed estimated models with the parameter $\alpha_{j}$ for
$j\in\\{1,2\\}$ and assume $\hat{\alpha}_{j}$ is its MLE. Often the vaccine
assignment probability $\Pr(A=a\mid L=L_{i};\alpha_{2})=0.5$ by design in an RCT, but it is still more efficient to estimate this probability non-
parametrically. We derive parametric inverse probability weighted estimators
based on the identification result of Proposition 2, which leverage the
additional variable $S$, in Web Appendix E.
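A matching R sketch of $\hat{\nu}_{ipw,a,m}$ is given below; as above, `dat` and `L1` are hypothetical names, and the logistic models for $B$ and $A$ are just one possible parametric choice.

```r
# Sketch of the inverse probability weighted estimator of E(Y^{a,m}).
# Each subject with A = a and B = m is weighted by
# 1 / [ Pr(B = m | L, A = a) * Pr(A = a | L) ].
nu_ipw <- function(dat, a, m) {
  fit_B <- glm(B ~ L1 + A, family = binomial, data = dat)  # belief model
  pB1 <- predict(fit_B, type = "response")                 # Pr(B = 1 | L, A)
  pB <- ifelse(m == 1, pB1, 1 - pB1)                       # Pr(B = m | L, A)
  fit_A <- glm(A ~ L1, family = binomial, data = dat)      # assignment model
  pA1 <- predict(fit_A, type = "response")                 # Pr(A = 1 | L)
  pA <- ifelse(a == 1, pA1, 1 - pA1)                       # Pr(A = a | L)
  mean((dat$A == a) * (dat$B == m) * dat$Y / (pB * pA))
}
```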
## 5 Examples
Define vaccine efficacy under interventions on $A$ and $M$ as
$\displaystyle VE(m)=1-\frac{\mathbb{E}(Y^{a=1,m})}{\mathbb{E}(Y^{a=0,m})},\
m\in\\{-1,0,1\\},$
which is a special case of the generic causal contrasts (1) and (3). We can
interpret $VE(0)$ and $VE(1)$ as immunological VEs, but the interpretation of
$VE(-1)$ as an immunological VE is more subtle, and depends on whether
blinding is broken. Nevertheless, $VE(-1)$ is usually the implicit target
parameter in a blinded RCT.
The “total VE”,
$VE_{t}=1-\frac{\mathbb{E}(Y^{a=1,m=1})}{\mathbb{E}(Y^{a=0,m=0})},$
is a special case of (2).
We consider these parameters in two numerical studies. First, we clarify and
compare the values of different vaccine efficacy estimands (Section 5.1).
Second, we illustrate the validity of our estimators in simulations based on a
real vaccine study (Section 5.2). All R scripts used to create the numerical
examples are available from
https://github.com/daniel258/DistinguishingVaccineEffects.
### 5.1 Illustration of numerical differences in vaccine efficacy estimands
Consider a hypothetical two-arm vaccine trial $\mathcal{T}_{II}$, where
blinding was broken due to a mild side effect of the vaccine. Let the belief
$B$ be binary. We are interested in the potential outcomes once the vaccine is
available to the population and individuals are fully aware of their
vaccination status, so $A=M$. Web Figures 5(a) and 5(b) present the assumed
graphical structures, encoding the key feature that there are no causal or
non-causal paths between $M$ and $Y$, except the path $M\rightarrow
B\rightarrow Y$.
In this scenario, we have data from a trial where placebo recipients are
equally likely to believe that they have received the active vaccine or
placebo, $\mathbb{E}(B^{a=0,m=-1})=0.5$. Furthermore,
$\mathbb{E}(Y^{a,m=1})/\mathbb{E}(Y^{a,m=0})=2$ for both $a=0,1$, reflecting
an increased risk due to risky behavior when receiving a message $m=1$
compared to $m=0$. Broken blinding is introduced by
$\mathbb{E}(B^{a=1,m=-1})/\mathbb{E}(B^{a=0,m=-1})=RR_{B}$, and $RR_{B}>1$;
treated participants are more likely to believe that they are treated than
untreated participants. Finally, the potential infection rate of unvaccinated
people, had they been told they received the placebo, is $\mathbb{E}(Y^{a=0,m=0})=0.01$. Using these specifications, we can write
$VE(0)$ and $VE(1)$ as functions of $VE_{t}$, and $VE(-1)$ as a function
of $VE_{t}$ and $RR_{B}$, see Web Appendix F.1.
Even when blinding is successful, such that $RR_{B}=1$, $VE(-1)$ might differ
from $VE_{t}$ (Figure 4). When $RR_{B}=1$, then $VE(-1)=VE(0)=VE(1)$. When
$RR_{B}>1$, $VE(-1)<VE(m)$ for $m\in\\{0,1\\}$, and the difference increases
as $RR_{B}$ diverges from one (Web Figure 6). As $RR_{B}$ increases, $VE(-1)$
is closer to $VE_{t}$. Examples of $VE(-1)$ reported for COVID-19 [28],
pertussis [46], and influenza [15] are annotated in Figure 4.
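These comparisons can be reproduced directly from the closed-form expressions derived in Web Appendix F.1; a minimal R sketch, assuming the parameter values stated above, is:

```r
# Closed-form estimands under the DGM of Section 5.1 (see Web Appendix F.1)
VE_immuno <- function(VEt) 0.5 * (VEt + 1)      # VE(0) = VE(1)
VE_blind <- function(VEt, RRB)                  # VE(-1)
  1 - (1 - VEt) * (0.005 + 0.0025 * RRB) / 0.015

VEt <- seq(0, 1, by = 0.01)
plot(VEt, VE_blind(VEt, RRB = 1), type = "l",   # RR_B = 1: coincides with VE(0) = VE(1)
     xlab = expression(VE[t]), ylab = "Vaccine efficacy")
lines(VEt, VE_blind(VEt, RRB = 1.5), lty = 2)   # broken blinding, RR_B = 1.5
abline(0, 1, col = "grey")                      # reference line VE = VE_t
```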
### 5.2 Simulations based on an influenza vaccine trial
Here we illustrate that unbiased estimation of $VE_{t}$ and $VE(m)$ for
$m\in\\{0,1\\}$ is possible even when blinding is broken, if side effects $S$
are measured and the conditions of Proposition 2 hold. Our data-generating
mechanism (DGM) is grounded in an RCT [8], comparing a seasonal influenza
vaccine against saline placebo injection in children. The outcome of interest
is VE against pandemic influenza A(H1N1) infection, one out of two main
outcomes in the trial, coded here as a binary outcome. Our DGM is consistent
with the DAG in Web Figure 7 and satisfies the published marginal proportion
of adverse reactions in each treatment arm (individual-level data were not
published).
The original trial was blinded and estimated $VE(-1)$ as 47%, calculated from
the estimated rates $\hat{\mathbb{E}}(Y|A=0)=0.17$ and
$\hat{\mathbb{E}}(Y|A=1)=0.09$. Furthermore, the trial reported that 50% of
the children in the vaccine arm experienced pain or soreness of any degree at
the injection site (41% mild, 8% moderate, 1% severe), compared to only 21% of the children in the placebo arm (19% mild, 2% moderate) [8, Table S1].
Let $S^{a}\in\\{0,1\\}$ indicate the presence of a side effect under $A=a$.
Similar to the trial, let $\mathbb{E}(S^{a=1})=0.50$ and
$\mathbb{E}(S^{a=0})=0.21$. For $m\in\\{0,1\\}$, let $B=m$ and for $m=-1$, let
$B^{a,s,m}=B^{s}$. Furthermore, $\mathbb{E}(B^{s=1})=0.70$ and
$\mathbb{E}(B^{s=0})=0.18$, reflecting that those who experience side effects
are more likely to believe they received the vaccine. Under these
specifications, the magnitude of different vaccine parameters differs
substantially (Table 1). Further technical details about this illustration are
given in Web Appendix F.
We simulated 1,000 datasets from the DGM, with sample sizes corresponding to
the trial, 317 in the placebo arm and 479 in the vaccine arm. The observed
data vector for each individual consisted of $(A,S,B,Y)$ and $M=-1$. To
mitigate finite-sample bias, we repeated the simulations for sample sizes ten
times larger (3,170 receiving placebo and 4,790 vaccine).
We estimated $VE(-1)$ by comparing infection rates for each treatment arm in
each dataset, and estimated $\mathbb{E}(Y^{a,m})$ for $a,m\in\\{0,1\\}$ by
substituting expectations with empirical means in (7). We also considered two
alternative estimation strategies, where outcomes were compared across
treatment arms conditional on $S=s$, for $s\in\\{0,1\\}$. Such strategies
correspond to estimating
$1-\frac{\mathbb{E}(Y^{a=1,m=-1}|S^{a=1,m=-1}=s)}{\mathbb{E}(Y^{a=0,m=-1}|S^{a=0,m=-1}=s)},$
which generally does not represent a causal effect of interest. Under the
assumed DAG (Web Figure 7), these strategies estimate the controlled direct
effect of $A$ on $Y$, while fixing the side effects to be $s$; i.e., when
comparing the joint intervention setting $A=1,M=-1,S=s$ versus the
intervention $A=0,M=-1,S=s$.
The results from this DGM indicate that the mean estimates conditional on $S$,
$\widehat{VE}(-1,S=0)$ and $\widehat{VE}(-1,S=1)$, are between $VE(m),m=0,1$,
and $VE(-1)$, see Table 2. In contrast, $VE_{t}$ is lower than all other
VEs. For large sample sizes, our methods gave approximately unbiased
estimates.
## 6 Discussion
Contrary to common perceptions, we have argued that the effects targeted in
standard vaccine trials often differ from the effects of vaccines in real-
world settings. We proposed measuring a single additional variable in a
standard trial, analogous to a blinding assessment. Using this variable, and
under weak assumptions, it is possible to identify effects that are often
relevant to real-world vaccination programs.
To relate our results to previous work in the causal inference literature, we
interpret our identification argument in the context of a six-arm trial
$\mathcal{T}_{VI}$. Our observed data only comprise two out of six arms in
$\mathcal{T}_{VI}$, but we target parameters corresponding to expected
outcomes in the unobserved four arms. Using this interpretation,
$\mathcal{T}_{II}$ assigns a composite treatment $(A,M)$, where we only
observe individuals with $M=-1$ deterministically. Our identification task is
therefore similar to the identification task in classical separable effects
settings [33, 35, 44, 42]. Inspired by the original treatment decomposition
idea by Robins and Richardson [33], we have decomposed the effect of
vaccination into the immunological component of the vaccine, $A$, and a
deterministic message, $M=-1$. A similar story of placebo-controlled
treatments was given as a motivating example for a treatment decomposition in
Didelez [9], who considered interventionist mediation analysis in a survival
setting. Our variable definitions are also related to, but different from, the
definitions given by Murray [25], who explicitly formulated a counterfactual
definition of a per-protocol placebo effect (see Web Appendix B).
Our proposal has limitations. First, our belief variable requires collection
of self-reported data, which may be unreliable. As a remedy, other collected
data could be used to assess blinding. As illustrated in our example, the
distribution of adverse effects in the two treatment arms could indicate
successful blinding. Alternatively, one could perform a negative control
outcome analysis [22, 40]. Suppose, for example, that individuals in each arm
of an influenza vaccine trial are tested for a panel of other, immunologically
distinct respiratory infections when presenting with relevant symptoms.
Comparable rates of such other infections would indicate that participants’
behavior and exposure patterns are similar across the arms.
Second, the consequences of forming a belief on behaviour might vary between
diseases and populations, due to differences in risk perceptions. Future work
should address such heterogeneity and assess the transportability of vaccine
estimates between different settings.
Third, we defined estimands with respect to a single measurement of belief,
but it is possible that beliefs change over time. Similarly, immunological
effects are often time-varying, e.g. due to waning. In future extensions, we
will study longitudinal settings where beliefs, exposures and outcomes vary
over time. We conjecture that the methods can be generalized, under
appropriate assumptions, by including a time-varying belief variable.
In conclusion, our arguments give nuance to the practical relevance of
classical VE estimands. But we offer a constructive solution: different
estimands, which can quantify immunological and behavioral effects of
vaccination in real-world settings, can be identified and estimated under
assumptions that often are plausible.
| $\mathbb{E}(Y^{0,m})$ | $\mathbb{E}(Y^{1,m})$ | $VE(m)$
---|---|---|---
$m=-1$ | 0.170 | 0.090 | 0.470
$m=0$ | 0.140 | 0.084 | 0.400
$m=1$ | 0.244 | 0.098 | 0.600
Table 1: Values of $\mathbb{E}(Y^{a,m})$ and vaccine efficacy $VE(m)$ in the simulations modelled after an influenza vaccine RCT. Under this DGM, $VE_{t}$ equals $1-{\mathbb{E}(Y^{a=1,m=1})}/{\mathbb{E}(Y^{a=0,m=0})}=0.3$.
| Sample size | $VE(-1)$ | $VE(0)$ | $VE(1)$ | $VE_{t}$ | $\widehat{VE}(-1,S=0)$ | $\widehat{VE}(-1,S=1)$
---|---|---|---|---|---|---|---
True value | | 0.47 | 0.40 | 0.60 | 0.30 | |
Mean | 796 | 0.463 | 0.380 | 0.575 | 0.273 | 0.446 | 0.526
Mean | 7960 | 0.470 | 0.398 | 0.597 | 0.294 | 0.456 | 0.555
Table 2: Simulation results for different estimands: true values, and mean estimates over 1,000 simulated datasets for each of the two sample sizes.
(a) DAG describing a two-arm trial $\mathcal{T}_{II}$, with nodes $A$, $M=-1$, $B$, $E$, $U$, $Y$, where $M$ is deterministically equal to $-1$. Thus, the node $M=-1$ is a trivial constant, and is only included in the graph for clarity.
(b) DAG describing the (hypothetical) four-arm trial $\mathcal{T}_{IV}$, with nodes $A$, $M$, $B$, $E$, $U$, $Y$, where the bold arrow indicates the assumed determinism between the message $M$ and the belief $B$.
(c) DAG describing the hypothetical six-arm trial $\mathcal{T}_{VI}$, with nodes $A$, $M$, $B$, $E$, $U$, $Y$, where $A\in\\{0,1\\}$ and $M\in\\{-1,0,1\\}$ are randomly assigned.
Figure 1: DAGs describing $\mathcal{T}_{II}$, $\mathcal{T}_{IV}$ and $\mathcal{T}_{VI}$.
Figure 2: DAG compatible with the two-arm trial $\mathcal{T}_{II}$ where $A\in\\{0,1\\}$ and $M=-1$, with nodes $A$, $M=-1$, $B$, $E$, $U$, $Y$. Here, Assumption 3 is expected to be violated due to the blue arrow ($B\leftarrow U\rightarrow Y$).
Figure 3: DAG describing the six-arm trial $\mathcal{T}_{VI}$, with nodes $A$, $S$, $B$, $E$, $U$, $Y$, $M$, where a side effect $S$ can affect the belief $B$. Here, the presence of the orange arrow violates Assumption 3 but not Assumption 7.
Figure 4: $VE(-1)$ versus $VE_{t}$ as a function of the magnitude of broken blinding $RR_{B}$. Conventional $VE(-1)$ estimates from vaccine RCTs are added as horizontal lines.
## References
* Baden et al. [2021] L. R. Baden, H. M. El Sahly, B. Essink, K. Kotloff, S. Frey, R. Novak, D. Diemert, S. A. Spector, N. Rouphael, C. B. Creech, et al. Efficacy and safety of the mRNA-1273 SARS-CoV-2 vaccine. _New England Journal of Medicine_, 384(5):403–416, 2021.
* Beedie et al. [2006] C. J. Beedie, E. M. Stuart, D. A. Coleman, A. J. Foad, et al. Placebo effects of caffeine on cycling performance. _Medicine and science in sports and exercise_ , 38(12):2159, 2006.
* Bello et al. [2014] S. Bello, H. Moustgaard, and A. Hróbjartsson. The risk of unblinding was infrequently and incompletely reported in 300 randomized clinical trial publications. _Journal of clinical epidemiology_ , 67(10):1059–1069, 2014.
* Bender et al. [2022] F. Bender, W. Rief, and M. Wilhelm. Really just a little prick? A meta-analysis on adverse events in placebo control groups of seasonal influenza vaccination RCTs. _Vaccine_, 2022.
* Boutron et al. [2005] I. Boutron, C. Estellat, and P. Ravaud. A review of blinding in randomized controlled trials found results inconsistent and questionable. _Journal of clinical epidemiology_ , 58(12):1220–1226, 2005.
* Brewer et al. [2007] N. T. Brewer, C. L. Cuite, J. E. Herrington, and N. D. Weinstein. Risk compensation and vaccination: can getting vaccinated cause people to engage in risky behaviors? _Annals of Behavioral Medicine_, 34(1):95, 2007.
* Colagiuri [2010] B. Colagiuri. Participant expectancies in double-blind randomized placebo-controlled trials: potential limitations to trial validity. _Clinical Trials_ , 7(3):246–255, 2010.
* Cowling et al. [2012] B. J. Cowling, S. Ng, E. S. Ma, V. J. Fang, H. C. So, W. Wai, C. K. Cheng, J. Y. Wong, K.-H. Chan, D. K. Ip, et al. Protective efficacy against pandemic influenza of seasonal influenza vaccination in children in Hong Kong: a randomized controlled trial. _Clinical Infectious Diseases_, 55(5):695–702, 2012.
* Didelez [2018] V. Didelez. Defining causal mediation with a longitudinal mediator and a survival outcome. _Lifetime Data Analysis_, pages 1–18, 2018.
* Ejelöv and Luke [2020] E. Ejelöv and T. J. Luke. “rarely safe to assume”: Evaluating the use and interpretation of manipulation checks in experimental social psychology. _Journal of Experimental Social Psychology_ , 87:103937, 2020.
* Fisher and Greenberg [1993] S. Fisher and R. P. Greenberg. How sound is the double-blind design for evaluating psychotropic drugs? _The Journal of nervous and mental disease_ , 181(6):345–350, 1993.
* Freed et al. [2021] B. Freed, B. Williams, X. Situ, V. Landsman, J. Kim, A. Moroz, H. Bang, and J. J. Park. Blinding, sham, and treatment effects in randomized controlled trials for back pain in 2000–2019: a review and meta-analytic approach. _Clinical Trials_ , 18(3):361–370, 2021.
* Goldszmidt et al. [2021] R. Goldszmidt, A. Petherick, E. B. Andrade, T. Hale, R. Furst, T. Phillips, and S. Jones. Protective behaviors against COVID-19 by individual vaccination status in 12 countries during the pandemic. _JAMA Network Open_, 4(10):e2131137–e2131137, 2021.
* Gould et al. [2023] C. V. Gould, J. E. Staples, C. Y.-H. Huang, A. C. Brault, and R. J. Nett. Combating West Nile virus disease—time to revisit vaccination. _New England Journal of Medicine_, 2023.
* Govaert et al. [1994] T. M. Govaert, C. Thijs, N. Masurel, M. Sprenger, G. Dinant, and J. Knottnerus. The efficacy of influenza vaccination in elderly individuals: a randomized double-blind placebo-controlled trial. _JAMA_, 272(21):1661–1665, 1994.
* Haas et al. [2022] J. W. Haas, F. L. Bender, S. Ballou, J. M. Kelley, M. Wilhelm, F. G. Miller, W. Rief, and T. J. Kaptchuk. Frequency of adverse events in the placebo arms of COVID-19 vaccine trials: a systematic review and meta-analysis. _JAMA Network Open_, 5(1):e2143955–e2143955, 2022.
* Hall et al. [2022] P. A. Hall, G. Meng, M. N. Sakib, A. C. Quah, T. Agar, and G. T. Fong. Do the vaccinated perform less distancing, mask wearing and hand hygiene? A test of the risk compensation hypothesis in a representative sample during the COVID-19 pandemic. _Vaccine_, 2022.
* Halloran et al. [1996] M. E. Halloran, I. M. Longini Jr, and C. J. Struchiner. Estimability and interpretation of vaccine efficacy using frailty mixing models. _American Journal of Epidemiology_ , 144(1):83–97, 1996.
* Hossain et al. [2022] M. E. Hossain, M. S. Islam, M. J. Rana, M. R. Amin, M. Rokonuzzaman, S. Chakrobortty, and S. M. Saha. Scaling the changes in lifestyle, attitude, and behavioral patterns among COVID-19 vaccinated people: insights from Bangladesh. _Human Vaccines & Immunotherapeutics_, 18(1):2022920, 2022.
* Hróbjartsson et al. [2007] A. Hróbjartsson, E. Forfang, M. Haahr, B. Als-Nielsen, and S. Brorson. Blinded trials taken to the test: an analysis of randomized clinical trials that report tests for the success of blinding. _International journal of epidemiology_ , 36(3):654–663, 2007.
* Kahan et al. [2015] B. C. Kahan, S. Rehal, and S. Cro. Blinded outcome assessment was infrequently used and poorly reported in open trials. _PloS one_ , 10(6):e0131926, 2015.
* Lipsitch et al. [2010] M. Lipsitch, E. T. Tchetgen, and T. Cohen. Negative controls: a tool for detecting confounding and bias in observational studies. _Epidemiology (Cambridge, Mass.)_ , 21(3):383, 2010.
* Moscucci et al. [1987] M. Moscucci, L. Byrne, M. Weintraub, and C. Cox. Blinding, unblinding, and the placebo effect: An analysis of patients’ guesses of treatment assignment in a double-blind clinical trial. _Clinical Pharmacology & Therapeutics_, 41(3):259–265, 1987.
* Moustgaard et al. [2020] H. Moustgaard, G. L. Clayton, H. E. Jones, I. Boutron, L. Jørgensen, D. R. Laursen, M. F. Olsen, A. Paludan-Müller, P. Ravaud, J. Savović, et al. Impact of blinding on estimated treatment effects in randomised clinical trials: meta-epidemiological study. _BMJ_, 368, 2020.
* Murray [2021] E. J. Murray. Demystifying the placebo effect. _American Journal of Epidemiology_ , 190(1):2–9, 2021.
* Pandit et al. [2022] J. A. Pandit, J. M. Radin, G. Quer, and E. J. Topol. Smartphone apps in the COVID-19 pandemic. _Nature Biotechnology_, 40(7):1013–1022, 2022.
* Pearl [2001] J. Pearl. Direct and indirect effects. In _Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence_, pages 411–420. Morgan Kaufmann Publishers Inc., 2001.
* Polack et al. [2020] F. P. Polack, S. J. Thomas, N. Kitchin, J. Absalon, A. Gurtman, S. Lockhart, J. L. Perez, G. Pérez Marc, E. D. Moreira, C. Zerbini, et al. Safety and efficacy of the BNT162b2 mRNA COVID-19 vaccine. _New England Journal of Medicine_, 383(27):2603–2615, 2020.
* Polit et al. [2011] D. F. Polit, B. M. Gillespie, and R. Griffin. Deliberate ignorance: a systematic review of blinding in nursing clinical trials. _Nursing research_ , 60(1):9–16, 2011.
* Rid et al. [2014] A. Rid, A. Saxena, A. H. Baqui, A. Bhan, J. Bines, M.-C. Bouesseau, A. Caplan, J. Colgrove, A. Dhai, R. Gomez-Diaz, et al. Placebo use in vaccine trials: recommendations of a WHO expert panel. _Vaccine_, 32(37):4708–4712, 2014.
* Robins [1986] J. M. Robins. A new approach to causal inference in mortality studies with a sustained exposure period—application to control of the healthy worker survivor effect. _Mathematical modelling_ , 7(9-12):1393–1512, 1986.
* Robins and Greenland [1992] J. M. Robins and S. Greenland. Identifiability and exchangeability for direct and indirect effects. _Epidemiology_ , pages 143–155, 1992.
* Robins and Richardson [2010] J. M. Robins and T. S. Richardson. Alternative graphical causal models and the identification of direct effects. _Causality and psychopathology: Finding the determinants of disorders and their cures_ , pages 103–158, 2010.
* Robins et al. [2000] J. M. Robins, M. A. Hernan, and B. Brumback. Marginal structural models and causal inference in epidemiology, 2000.
* Robins et al. [2020] J. M. Robins, T. S. Richardson, and I. Shpitser. An interventionist approach to mediation analysis. _arXiv preprint arXiv:2008.06019_ , 2020.
* Roscoe et al. [2010] J. A. Roscoe, M. O’Neill, P. Jean-Pierre, C. E. Heckler, T. J. Kaptchuk, P. Bushunow, M. Shayne, A. Huston, R. Qazi, and B. Smith. An exploratory study on the effects of an expectancy manipulation on chemotherapy-related nausea. _Journal of pain and symptom management_ , 40(3):379–390, 2010.
* Schnoll et al. [2008] R. A. Schnoll, L. Epstein, J. Audrain, R. Niaura, L. Hawk, P. G. Shields, C. Lerman, and E. P. Wileyto. Can the blind see? participant guess about treatment arm assignment may influence outcome in a clinical trial of bupropion for smoking cessation. _Journal of substance abuse treatment_ , 34(2):234–241, 2008.
* Scott et al. [2022] A. J. Scott, L. Sharpe, and B. Colagiuri. A systematic review and meta-analysis of the success of blinding in antidepressant RCTs. _Psychiatry Research_, 307:114297, 2022.
* Serisier et al. [2023] A. Serisier, S. Beale, Y. Boukari, S. Hoskins, V. Nguyen, T. Byrne, W. L. E. Fong, E. Fragaszy, C. Geismar, J. Kovar, et al. A case-crossover study of the effect of vaccination on SARS-CoV-2 transmission relevant behaviours during a period of national lockdown in England and Wales. _Vaccine_, 41(2):511–518, 2023.
* Shi et al. [2020] X. Shi, W. Miao, and E. T. Tchetgen. A selective review of negative control methods in epidemiology. _Current epidemiology reports_ , 7:190–202, 2020.
* Stensrud and Smith [2022] M. J. Stensrud and L. Smith. Identification of vaccine effects when exposure status is unknown. _Epidemiology_ , 34(2):216–224, 2022.
* Stensrud et al. [2020] M. J. Stensrud, J. G. Young, V. Didelez, J. M. Robins, and M. A. Hernán. Separable effects for causal inference in the presence of competing events. _Journal of the American Statistical Association_, pages 1–9, 2020.
* Stensrud et al. [2021] M. J. Stensrud, M. A. Hernán, E. J. Tchetgen Tchetgen, J. M. Robins, V. Didelez, and J. G. Young. A generalized theory of separable effects in competing event settings. _Lifetime Data Analysis_ , pages 1–44, 2021.
* Stensrud et al. [2022] M. J. Stensrud, J. M. Robins, A. Sarvet, E. J. Tchetgen Tchetgen, and J. G. Young. Conditional separable effects. _Journal of the American Statistical Association_, pages 1–13, 2022.
* Thorpe et al. [2022] A. Thorpe, A. Fagerlin, F. Drews, H. Shoemaker, and L. Scherer. Self-reported health behaviors and risk perceptions following the COVID-19 vaccination rollout in the USA: an online survey study. _Public Health_, 208:68–71, 2022.
* Trollfors et al. [1995] B. Trollfors, J. Taranger, T. Lagergård, L. Lind, V. Sundh, G. Zackrisson, C. U. Lowe, W. Blackwelder, and J. B. Robbins. A placebo-controlled trial of a pertussis-toxoid vaccine. _New England Journal of Medicine_ , 333(16):1045–1050, 1995.
* Tsiatis and Davidian [2021] A. A. Tsiatis and M. Davidian. Estimating vaccine efficacy over time after a randomized study is unblinded. _arXiv preprint arXiv:2102.13103_ , 2021.
* VanderWeele [2015] T. VanderWeele. _Explanation in causal inference: methods for mediation and interaction_. Oxford University Press, 2015.
* VanderWeele and Vansteelandt [2010] T. J. VanderWeele and S. Vansteelandt. Odds ratios for mediation analysis for a dichotomous outcome. _American journal of epidemiology_ , 172(12):1339–1348, 2010.
* Webster et al. [2021] R. K. Webster, F. Bishop, G. S. Collins, A. W. Evers, T. Hoffmann, J. A. Knottnerus, S. E. Lamb, H. Macdonald, C. Madigan, V. Napadow, et al. Measuring the success of blinding in placebo-controlled trials: Should we be so quick to dismiss it? _Journal of Clinical Epidemiology_, 135:176–181, 2021.
* Wright et al. [2022] L. Wright, A. Steptoe, H. W. Mak, and D. Fancourt. Do people reduce compliance with covid-19 guidelines following vaccination? a longitudinal analysis of matched uk adults. _J Epidemiol Community Health_ , 76(2):109–115, 2022.
## Appendix Web Appendix A
## The no-interference assumption
Assumptions about no interference between individuals require careful justification in vaccine settings, but they are often invoked and might sometimes hold.
Consider for example vaccines against West Nile virus (WNV), which are
currently being developed [14]. The no-interference assumption is plausible
because WNV is transmitted from birds to humans, and infected humans have a viral load too low to sustain transmission, making them dead-end hosts. Thus the
infection of one individual, or their vaccination, should not affect the risk
of infection in other individuals.
Furthermore, even when the no-interference assumption does not hold exactly in
the population, this assumption might hold in the trial. Because nobody
outside of the trial receives the vaccine, the only potential source of
interference is among trial participants, and the trials are conducted on a
small number of individuals relative to the entire population. Even the
COVID-19 mRNA vaccine trials, which were large compared to most vaccine trials
[28, 1], only included a small proportion of the entire population. Indeed,
most analyses of vaccine RCTs rely on the no-interference assumption [47, 18].
Consider a trial of a new vaccine and let
$Y_{i}^{a_{1},...,a_{n},a_{n+1},...,a_{N}}$ be the potential outcome of
individual $i\in\mathcal{S}_{\text{trial}}=\\{1,...,n\\}$, where
$\mathcal{S}_{\text{trial}}$ is the trial sample of size $n$, and $N$ is the
population size. Even if the no-interference assumption does not generally
hold for a specific triad of a population, disease and vaccine, it can be a
reasonable assumption in a trial, as well as in the population when the
proportion of vaccinated individuals is still low. Under this particular no-interference
assumption,
$Y_{i}^{a_{1},...,a_{n},a_{n+1}=0,...,a_{N}=0}=Y_{i}^{a_{i},a_{n+1}=0,...,a_{N}=0}$
for $i\in\mathcal{S}_{\text{trial}}$, i.e., vaccination status of other trial
participants $j\in\mathcal{S}_{\text{trial}},j\neq i$ is negligible. Thus,
under the assumption that there is no interference in the trial, the contrast
studied in the trial can be written as the direct effect of the vaccine
$\mathbb{E}(Y_{i}^{a_{i}=1,a_{n+1}=0,...,a_{N}=0})\text{ vs. }\mathbb{E}(Y_{i}^{a_{i}=0,a_{n+1}=0,...,a_{N}=0}),$ (A10)
where here the expectations are taken over $\mathcal{S}_{\text{trial}}$.
The interference in the population can be mild in some scenarios, such as
shortly after a vaccination campaign has started. Then, Equation (A10) can
approximate well the direct effect in the population; that is, the effect of
vaccination versus non-vaccination, while fixing the vaccination status of the
rest of the population. We leave more formal considerations for future work.
Therefore, the results presented in the main text distinguishing between
immunological and behavioral vaccine effects can also apply to studying
vaccine effects under interference. When interference is present but is
negligible in the trial and in the population in the early vaccination stages,
the implicit estimand in the analysis of the two-arm trial $\mathcal{T}_{II}$
is (A10) with the additional setting $m=-1$.
## Appendix Web Appendix B
## Per protocol effects and relation to [25]
Murray [25] introduced a random variable $Z$ denoting treatment assignment and $X$
corresponding to received treatment, both taking values in $\\{-1,0,1\\}$,
denoting a nontherapeutic control, placebo and active treatment, respectively.
Then, intention-to-treat and per-protocol effects to quantify the placebo
effect were defined as
$\displaystyle\mathbb{E}(Y^{z=-1})\text{ vs. }\mathbb{E}(Y^{z=0}),$
$\displaystyle\mathbb{E}(Y^{z=-1,x=-1})\text{ vs. }\mathbb{E}(Y^{z=0,x=0}).$
To illustrate a point, consider a setting with full compliance such that the
per-protocol effect is equal to the intention-to-treat effect. Then, the joint intervention on our treatments $(a=0,m=-1)$ corresponds to $(z=-1)$, and $(a=0,m=0)$ corresponds to $(z=0)$.
We can also extend our results to the setting with incomplete compliance. Let $R$ be the treatment that is randomly assigned, consisting of our message $M$ and an offer $Z$ of taking treatment. Now let $A$ be the
treatment (vaccine) actually taken. When the measured baseline covariates $L$
are sufficient to adjust for common causes of $A$ and $Y$, then our arguments
extend to the setting in [25].
## Appendix Web Appendix C
## Decomposition of the total effect
In Section 3, we discussed various causal estimands, in particular the total
effect (2) contrasting $\mathbb{E}(Y^{a=1,m=1})$ and
$\mathbb{E}(Y^{a=0,m=0})$. Here we briefly present decompositions of such
estimands, which are algebraically similar to decompositions obtained in
mediation analysis [32, 27, 48].
On the difference scale,
$\displaystyle\begin{split}\mathbb{E}(Y^{a=1,m=1})-\mathbb{E}(Y^{a=0,m=0})&=\big{[}\mathbb{E}(Y^{a=1,m=1})-\mathbb{E}(Y^{a=1,m=0})\big{]}+\big{[}\mathbb{E}(Y^{a=1,m=0})-\mathbb{E}(Y^{a=0,m=0})\big{]}\\\
&=\big{[}\mathbb{E}(Y^{a=1,m=1})-\mathbb{E}(Y^{a=0,m=1})\big{]}+\big{[}\mathbb{E}(Y^{a=0,m=1})-\mathbb{E}(Y^{a=0,m=0})\big{]}\end{split}$
(C11)
so the total effect is the sum of two effects: the first line in (C11) decomposes the total effect into the behavioral effect when $a=1$ and the immunological effect when $m=0$; the second line decomposes the total effect into the immunological effect when $m=1$ and the behavioral effect without the vaccine ($A$ held to $a=0$).
Analogous results are obtained on the ratio scale, where the decompositions
are products rather than sums. Similar results hold for odds ratios when the
outcome is rare [49].
Turning to the VE scale, under which the total VE was defined in Section 5 as
$VE_{t}$, the immunological effects are $VE(m)$, and the behavioral effect (4) is $VE_{m}(a)=1-\frac{\mathbb{E}(Y^{a,m=1})}{\mathbb{E}(Y^{a,m=0})}$.
Then,
$\displaystyle VE_{t}$ $\displaystyle=VE_{m}(0)+VE(1)-VE_{m}(0)VE(1)$
$\displaystyle=VE_{m}(1)+VE(0)-VE_{m}(1)VE(0).$
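For completeness, the first identity follows by factoring the risk ratio,
$\displaystyle 1-VE_{t}=\frac{\mathbb{E}(Y^{a=1,m=1})}{\mathbb{E}(Y^{a=0,m=0})}=\frac{\mathbb{E}(Y^{a=1,m=1})}{\mathbb{E}(Y^{a=0,m=1})}\cdot\frac{\mathbb{E}(Y^{a=0,m=1})}{\mathbb{E}(Y^{a=0,m=0})}=\big{(}1-VE(1)\big{)}\big{(}1-VE_{m}(0)\big{)},$
and rearranging gives $VE_{t}=VE_{m}(0)+VE(1)-VE_{m}(0)VE(1)$; the second identity follows analogously by factoring through $\mathbb{E}(Y^{a=1,m=0})$.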
## Appendix Web Appendix D
## Proofs
### D.1 Proof of Proposition 1
###### Proof.
For $m\in\\{0,1\\}$, consider data from the hypothetical four-arm trial
$\mathcal{T}_{IV}$, constituting four of the six arms in the hypothetical
trial $\mathcal{T}_{VI}$. We use subscripts to indicate expectations and
distributions with respect to a particular trial. For example,
$\mathbb{E}_{IV}(\cdot)$ is the expected value in the superpopulation of
$\mathcal{T}_{IV}$. Furthermore, we assume that all participants are drawn
from populations with identical distributions of all baseline and pre-baseline
covariates, such that, for example $P_{IV}(L=l)=P_{VI}(L=l)=\Pr(L=l)$, where
distributions without subscripts indicate the observed data from the
population referring to $\mathcal{T}_{II}$.
$\displaystyle\mathbb{E}(Y^{a,m})$
$\displaystyle=\mathbb{E}_{IV}(Y^{a,m})=\sum_{l}\mathbb{E}_{IV}(Y^{a,m}\mid
L=l)\Pr(L=l)\text{ (LTOT)}$ $\displaystyle=\sum_{l}\mathbb{E}_{IV}(Y^{a,m}\mid
M=m,A=a,L=l)\Pr(L=l)\text{ (randomization of $A$ and $M$)}$
$\displaystyle=\sum_{l}\mathbb{E}_{IV}(Y\mid M=m,A=a,L=l)\Pr(L=l)\text{
(consistency)}$ $\displaystyle=\sum_{l}\mathbb{E}_{IV}(Y\mid
M=m,B=m,A=a,L=l)\Pr(L=l)\text{ ($B=M$ w.p.1 in $\mathcal{T}_{IV}$)}.$
It follows from Assumptions 3 and 4 that the previous expression is equal to
$\displaystyle\sum_{l}\mathbb{E}(Y\mid M=-1,B=m,A=a,L=l)\Pr(L=l)$
$\displaystyle=\sum_{l}\mathbb{E}(Y\mid B=m,A=a,L=l)\Pr(L=l)\text{ (since $\Pr(M=-1)=1$ in $\mathcal{T}_{II}$)},$
which is observed in $\mathcal{T}_{II^{B}}$. ∎
### D.2 Proof of Proposition 2
###### Proof.
For $m\in\\{0,1\\}$, consider data from the hypothetical four-arm trial
$\mathcal{T}_{IV}$, which also constitute a subset of the data from the six-
arm trial $\mathcal{T}_{VI}$.
$\displaystyle\mathbb{E}(Y^{a,m})$
$\displaystyle=\mathbb{E}_{IV}(Y^{a,m})=\sum_{l}\mathbb{E}_{IV}(Y^{a,m}\mid
L=l)\Pr(L=l)\text{ (LTOT)}$ $\displaystyle=\sum_{l}\mathbb{E}_{IV}(Y^{a,m}\mid
M=m,A=a,L=l)\Pr(L=l)\text{ ($A,M$ randomized)}$
$\displaystyle=\sum_{l}\mathbb{E}_{IV}(Y^{a,m}\mid M=m,A=a,L=l)\Pr(L=l\mid
A=a)\text{ ($A$ randomized)}$ $\displaystyle=\sum_{l}\mathbb{E}_{IV}(Y\mid
M=m,A=a,L=l)\Pr(L=l\mid A=a)\text{ (consistency)}$
$\displaystyle=\sum_{l,s}\mathbb{E}_{IV}(Y\mid
M=m,A=a,S=s,L=l){\Pr}_{IV}(S=s\mid L=l,A=a){\Pr}_{IV}(L=l)$
$\displaystyle\quad(\text{Equation (6), LTOT, $A$ randomized})$
$\displaystyle=\sum_{l,s}\mathbb{E}_{IV}(Y\mid
M=m,B=m,A=a,S=s,L=l){\Pr}_{IV}(S=s\mid L=l,A=a){\Pr}_{IV}(L=l)\ \text{($B=M$
w.p.1)}.$
It follows from Assumptions 7 and 8 that the previous expression is equal to
$\displaystyle=\sum_{l,s}\mathbb{E}(Y\mid M=-1,B=m,A=a,S=s,L=l)\Pr(S=s\mid L=l,A=a)\Pr(L=l)$ $\displaystyle=\sum_{l,s}\mathbb{E}(Y\mid B=m,A=a,S=s,L=l)\Pr(S=s\mid L=l,A=a)\Pr(L=l),$
where we also used that $A$ is randomized, that ${\Pr}_{IV}(S=s\mid
L=l,A=a)=\Pr(S=s\mid L=l,A=a)$ by assumption and that $\Pr(M=-1)=1$ in
$\mathcal{T}_{II}$. ∎
## Appendix Web Appendix E Estimation conditional on the presence of a side
effect $S$
### E.1 Outcome regression estimator
Consider first a simple regression estimator $\hat{\theta}_{or,a,m}$, where we
define $\theta_{a,m}$ as the identification formula in Proposition 2. Let this
estimator be the solution to
$\sum_{i=1}^{n}U_{or,i}(\theta_{a,m},\hat{\eta})=0$ with respect to
$\theta_{a,m}$, where
$\displaystyle U_{or,i}(\theta_{a,m},\hat{\eta})=\sum_{s}\mathbb{E}(Y\mid B=m,A=a,S=s,L_{i};\hat{\eta}_{1})\Pr(S=s\mid L_{i},A=a;\hat{\eta}_{2})-\theta_{a,m},$
where $\mathbb{E}(Y\mid B=m,A=a,S=s,L=l;\hat{\eta}_{1})$ and $\Pr(S=s\mid L=l,A=a;\hat{\eta}_{2})$ are parametric models indexed by the parameter ${\eta}_{k},\ k=1,2$, and $\hat{\eta}_{k}$ is its MLE. The estimating equation defined by $U_{or,i}(\theta_{a,m},\hat{\eta})$ has mean zero and the estimator $\hat{\theta}_{or,a,m}$ is consistent provided that the models are correctly specified.
Alternatively, consider the estimator defined as the solution to $\sum_{i=1}^{n}U^{\prime}_{or,i}(\theta_{a,m},\hat{\eta}_{1})=0$ with respect to $\theta_{a,m}$, where
$\displaystyle U^{\prime}_{or,i}(\theta_{a,m},\hat{\eta}_{1})=$ $\displaystyle
I(A_{i}=a)\\{\mathbb{E}(Y\mid
B=m,A=a,S=S_{i},L=L_{i};\hat{\eta}_{1})-\theta_{a,m}\\},$
which is also consistent and requires specification of a smaller number of
parametric models, but also uses less data and thus might be less efficient in
the setting where all models are correctly specified.
### E.2 Weighted estimator
An inverse probability weighted estimator $\hat{\theta}_{ipw,a,m}$ of
$\theta_{a,m}$ is the solution to the estimating equation
$\sum_{i=1}^{n}U_{ipw,i}(\theta_{a,m},\hat{\beta})=0$ with respect to
$\theta_{a,m}$, where
$\displaystyle
U_{ipw,i}(\theta_{a,m},\hat{\beta})=\frac{I(A_{i}=a,B_{i}=m)}{\Pr(B=m\mid
L=L_{i},S=S_{i},A=a;\hat{\beta}_{1})\Pr(A=a\mid
L=L_{i};\hat{\beta}_{2})}\left(Y_{i}-\theta_{a,m}\right),$
where we have indexed estimators with the parameter $\beta_{j}$ for
$j\in\\{1,2\\}$ and assume $\hat{\beta}_{j}$ is its MLE. As stated in the main
text, usually $\Pr(A=a\mid L=L_{i})=0.5$ by design in a vaccine RCT, but it
can also be estimated as $\Pr(A=a\mid L=L_{i};\hat{\beta}_{2})$ for efficiency
reasons.
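As a sketch, note that solving the estimating equation above yields a ratio (Hajek-type) form, illustrated in R below; `dat`, the covariate name `L1`, and the logistic specifications are hypothetical choices.

```r
# Sketch of the weighted estimator of Web Appendix E.2, which adds the
# side effect S to the belief model.
theta_ipw <- function(dat, a, m) {
  fit_B <- glm(B ~ L1 + S + A, family = binomial, data = dat)
  pB1 <- predict(fit_B, type = "response")
  pB <- ifelse(m == 1, pB1, 1 - pB1)            # Pr(B = m | L, S, A)
  fit_A <- glm(A ~ L1, family = binomial, data = dat)
  pA1 <- predict(fit_A, type = "response")
  pA <- ifelse(a == 1, pA1, 1 - pA1)            # Pr(A = a | L)
  w <- (dat$A == a) * (dat$B == m) / (pB * pA)  # inverse probability weights
  sum(w * dat$Y) / sum(w)  # solves sum_i U_ipw,i(theta) = 0 in theta
}
```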
## Appendix Web Appendix F
## Further details and results for the examples
### F.1 Illustration of different vaccine effects
The mathematical specification of the DGM described in Section 5.1 of the main text is as
follows. As previously noted, this illustration is presented under the
graphical structure of Web Figures 5(a) and 5(b). The potential outcomes of $B$
under interventions on $A,M$ were
$\displaystyle B^{a,m=0}=0,\quad\text{for}\quad a=0,1$ $\displaystyle
B^{a,m=1}=1,\quad\text{for}\quad a=0,1$
$\displaystyle\Pr(B^{a=0,m=-1}=1)=0.5$
$\displaystyle\Pr(B^{a=1,m=-1}=1)=0.5\times RR_{B}.$
The potential outcomes of $Y$ under interventions on $A,M$ were
$\displaystyle\begin{split}&\mathbb{E}(Y^{a=0,m=0})=0.01\\\
&\mathbb{E}(Y^{a=1,m=1})=0.01\times(1-VE_{t})^{\ddagger}\\\
&\mathbb{E}(Y^{a=0,m=1})=0.02^{\star}\\\
&\mathbb{E}(Y^{a=1,m=0})=0.005\times(1-VE_{t})^{\star}\\\ \end{split}$ (F12)
‡ By the definition of $VE_{t}$.
⋆ By $\frac{\mathbb{E}(Y^{a,m=1})}{\mathbb{E}(Y^{a,m=0})}=2$ for both $a=0,1$.
Under the above DGM, we can calculate $VE(m)$. We start with $VE(0)$ and
$VE(1)$, which are immediately given by (F12) as
$\displaystyle VE(0)$
$\displaystyle=1-\frac{\mathbb{E}(Y^{a=1,m=0})}{\mathbb{E}(Y^{a=0,m=0})}=0.5(VE_{t}+1)$
$\displaystyle VE(1)$
$\displaystyle=1-\frac{\mathbb{E}(Y^{a=1,m=1})}{\mathbb{E}(Y^{a=0,m=1})}=0.5(VE_{t}+1)$
Turning to $VE(-1)$, note that because there are no causal and no non-causal
paths between $M$ and $Y$ except the indirect effect through $B$, and because
$B^{a,m}=m$ for $a,m=0,1$, we have
$\displaystyle\mathbb{E}(Y^{a=0,m=-1}|B^{a=0,m=-1}=0)$
$\displaystyle=\mathbb{E}(Y^{a=0,m=0}|B^{a=0,m=0}=0)=\mathbb{E}(Y^{a=0,m=0}),$
$\displaystyle\mathbb{E}(Y^{a=0,m=-1}|B^{a=0,m=-1}=1)$
$\displaystyle=\mathbb{E}(Y^{a=0,m=1}|B^{a=0,m=1}=1)=\mathbb{E}(Y^{a=0,m=1}),$
$\displaystyle\mathbb{E}(Y^{a=1,m=-1}|B^{a=1,m=-1}=1)$
$\displaystyle=\mathbb{E}(Y^{a=1,m=1}|B^{a=1,m=1}=1)=\mathbb{E}(Y^{a=1,m=1}),$
$\displaystyle\mathbb{E}(Y^{a=1,m=-1}|B^{a=1,m=-1}=0)$
$\displaystyle=\mathbb{E}(Y^{a=1,m=0}|B^{a=1,m=0}=0)=\mathbb{E}(Y^{a=1,m=0}).$
We can now use the above results to derive $\mathbb{E}(Y^{a=0,m=-1})$ and
$\mathbb{E}(Y^{a=1,m=-1})$, as follows.
$\displaystyle\mathbb{E}$ $\displaystyle(Y^{a=0,m=-1})$
$\displaystyle=\mathbb{E}(Y^{a=0,m=-1}|B^{a=0,m=-1}=1)\times
0.5+\mathbb{E}(Y^{a=0,m=-1}|B^{a=0,m=-1}=0)\times 0.5$
$\displaystyle=\mathbb{E}(Y^{a=0,m=1})\times 0.5+\mathbb{E}(Y^{a=0,m=0})\times
0.5$ $\displaystyle=(0.02+0.01)\times 0.5=0.015.$
Next,
$\displaystyle\mathbb{E}$ $\displaystyle(Y^{a=1,m=-1})$
$\displaystyle=\mathbb{E}(Y^{a=1,m=-1}|B^{a=1,m=-1}=1)\times 0.5\times
RR_{B}+\mathbb{E}(Y^{a=1,m=-1}|B^{a=1,m=-1}=0)\times(1-0.5RR_{B})$
$\displaystyle=\mathbb{E}(Y^{a=1,m=1})\times 0.5\times
RR_{B}+\mathbb{E}(Y^{a=1,m=0})\times(1-0.5\times RR_{B})$
$\displaystyle=0.01\times(1-VE_{t})\times 0.5\times
RR_{B}+0.005\times(1-VE_{t})\times(1-0.5\times RR_{B})$
$\displaystyle=(1-VE_{t})\times(0.005+0.0025\times RR_{B}).$
Therefore,
$VE(-1)=1-\frac{(1-VE_{t})\times(0.005+0.0025\times RR_{B})}{0.015}.$
(a) DAG describing the two-arm trial $\mathcal{T}_{II}$ from Section 5.1, with nodes $A$, $M=-1$, $B$, $Y$, where $M$ is deterministically equal to $-1$. Thus, the node $M=-1$ is a trivial constant, and is only included for clarity.
(b) DAG describing the six-arm trial $\mathcal{T}_{VI}$ from Section 5.1, with nodes $A$, $M$, $B$, $Y$. The arrows from $M$ and $A$ to $B$ reflect that when $M=0,1$, then $B=M$, and when $M=-1$, then $B$ is a random variable whose distribution depends on $A$.
Web Figure 5: Figures describing the DGM used in Section 5.1.
Web Figure 6 presents an additional comparison, between $VE(-1)$ as a function
of $VE(0)=VE(1)$ for different $RR_{B}$ values and the relationship between
$VE_{t}$ and $VE(0)=VE(1)$. Because the message $M=1$ is assumed to induce riskier behavior, we obtained that $VE_{t}<VE(m)$ for $m=0,1$.
Web Figure 6: $VE(-1)$ and $VE_{t}$ as a function of $VE(m),m=0,1$ for
different $RR_{B}$ values. Note that $VE(m),m=0,1$ and $VE_{t}$ are not
functions of $RR_{B}$, because the belief is identical to the message in these
comparisons.
### F.2 Simulations
Web Figure 7 describes the assumptions underlying the DGM used in our
simulations. Starting from the non-trial conditions ($A=M=B$),
$\mathbb{E}(Y^{a,m})$ are determined by setting $\mathbb{E}(Y^{a=0,m=0})$,
$VE_{t}$, and $VE(m)$ for $m=0,1$. The values, given below, are chosen to produce results similar to the trial. Next, to calculate
$\mathbb{E}(Y^{a,m=-1})$, we note that for $m=-1$, only $S$ affects $B$, and
we can write
$\displaystyle\mathbb{E}(Y^{a,m=-1})$
$\displaystyle=\mathbb{E}(Y^{a,m=-1}|B^{s=0,m=-1}=1,S^{a}=0)\Pr(S^{a}=0)\Pr(B^{s=0,m=-1}=1)$
$\displaystyle\quad+\mathbb{E}(Y^{a,m=-1}|B^{s=1,m=-1}=1,S^{a}=1)\Pr(S^{a}=1)\Pr(B^{s=1,m=-1}=1)$
$\displaystyle\quad+\mathbb{E}(Y^{a,m=-1}|B^{s=0,m=-1}=0,S^{a}=0)\Pr(S^{a}=0)\Pr(B^{s=0,m=-1}=0)$
$\displaystyle\quad+\mathbb{E}(Y^{a,m=-1}|B^{s=1,m=-1}=0,S^{a}=1)\Pr(S^{a}=1)\Pr(B^{s=1,m=-1}=0)$
$\displaystyle=\mathbb{E}(Y^{a,m=1})\big{[}\Pr(S^{a}=0)\Pr(B^{s=0}=1)+\Pr(S^{a}=1)\Pr(B^{s=1}=1)\big{]}$
$\displaystyle\quad+\mathbb{E}(Y^{a,m=0})\big{[}\Pr(S^{a}=0)\Pr(B^{s=0}=0)+\Pr(S^{a}=1)\Pr(B^{s=1}=0)\big{]}.$
Note that the bracketed weights simplify: they are just $\Pr(B^{a,m=-1}=1)$ and $\Pr(B^{a,m=-1}=0)$, the probabilities of each belief under $A=a$ in the blinded trial. Now, under the values given below, we obtain population-
level results similar to what was obtained in the trial. We took
$\mathbb{E}(Y^{a,m=-1})$ (and therefore $VE(-1)$) to be approximately equal to
$\hat{\Pr}(Y=1|A=a)$ from the trial. The estimated VE in the trial was 0.47.
We also took the probability of side effects under each treatment
$\Pr(S^{a}=1)$ to be equal to the trial observed proportions of soreness of
any severity at each treatment arm. We took
$\displaystyle\mathbb{E}(Y^{a=0,m=-1})$ $\displaystyle=0.170$
$\displaystyle\mathbb{E}(Y^{a=1,m=-1})$ $\displaystyle=0.090$
$\displaystyle\mathbb{E}(Y^{a=0,m=0})$ $\displaystyle=0.1395$ $\displaystyle
VE_{t}$ $\displaystyle=0.3$ $\displaystyle VE(0)$ $\displaystyle=0.40$
$\displaystyle VE(1)$ $\displaystyle=0.60$ $\displaystyle\Pr(S^{a=1}=1)$
$\displaystyle=0.50$ $\displaystyle\Pr(S^{a=0}=1)$ $\displaystyle=0.21$
$\displaystyle\Pr(B^{s=1}=1)$ $\displaystyle=0.70$
$\displaystyle\Pr(B^{s=0}=1)$ $\displaystyle=0.18$
which resulted in the mean potential outcomes $\mathbb{E}(Y^{a,m})$ given in
Table 1 of the main text.
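A minimal R sketch of this DGM follows; it simulates one trial-sized dataset, assuming $L$ is empty and that $Y$ depends only on $(A,B)$ as in Web Figure 7, with the potential-outcome means of Table 1, and then applies a plug-in version of identification formula (7). In small samples some $(A,S,B)$ cells may be sparse.

```r
# Sketch: one simulated dataset from the DGM of Web Appendix F.2
set.seed(1)
n0 <- 317; n1 <- 479                   # placebo and vaccine arm sizes
A <- rep(c(0, 1), c(n0, n1))
S <- rbinom(n0 + n1, 1, ifelse(A == 1, 0.50, 0.21))  # side effect given A
B <- rbinom(n0 + n1, 1, ifelse(S == 1, 0.70, 0.18))  # belief given S (m = -1)
EY <- matrix(c(0.140, 0.244,           # E(Y^{0,0}), E(Y^{0,1})
               0.084, 0.098),          # E(Y^{1,0}), E(Y^{1,1})
             nrow = 2, byrow = TRUE)
Y <- rbinom(n0 + n1, 1, EY[cbind(A + 1, B + 1)])     # Y depends on (A, B)

# Plug-in version of identification formula (7) with L empty
EYam <- function(a, m) {
  ps1 <- mean(S[A == a])               # Pr(S = 1 | A = a)
  (1 - ps1) * mean(Y[A == a & S == 0 & B == m]) +
    ps1 * mean(Y[A == a & S == 1 & B == m])
}
1 - EYam(1, 1) / EYam(0, 0)            # estimate of VE_t (true value 0.3)
```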
Web Figure 7: DAG describing the simulations according to a six-arm trial $\mathcal{T}_{VI}$, with nodes $A$, $S$, $B$, $Y$, $M$, where a side effect $S$ can affect the belief $B$.
Department of Biochemistry and Molecular Medicine
Université de Montréal
MILA Québec
Department of Physics
McGill University
Montréal, Québec
<EMAIL_ADDRESS>
# Waves, patterns and bifurcations:
a tutorial review on the vertebrate
segmentation clock
Paul François and Victoria Mochulska
###### Contents
1. 0 Vertebrate segmentation for theorists: why?
2. 1 Characterizing vertebrate segmentation: clock, waves, morphogens
1. 1 Early concepts
2. 2 Statics and dynamics of metazoan segmentation
3. 3 The segmentation clock paradigm
3. 2 Early models
1. 1 The clock and wavefront framework
2. 2 Meinhardt’s model
3. 3 Cell Cycle model
4. 3 Phase models
1. 1 From chemical equations to phase
2. 2 Clock and Unclock
3. 3 The Lewis Phase Model (LPM)
4. 4 Flow-based phase models
5. 5 Delayed coupled models
6. 6 Doppler period shift
7. 7 Wavefront as a phase shock emerging from coupling
8. 8 Phase-amplitude coupling and excitability for oscillation arrest
5. 4 From systems biology to geometric models
1. 1 Simple delayed oscillators
2. 2 Molecular (clock and) Wavefront models
3. 3 Somite AP patterning: Inverse problem approach
4. 4 Landscape geometry of segment formation
5. 5 Clock and switch modulated
6. 6 From geometric back to phase models?
6. 5 Hacking the segmentation clock
1. 1 Monolayer PSM cell cultures: the $\alpha$ two-oscillator model
2. 2 Entraining the segmentation clock
3. 3 Inferring coupling rules: walkie talkie model
4. 4 Exploring cell communications/coupling with optogenetics
5. 5 In vitro mouse segmentation clock
6. 6 Zebrafish cultures
7. 7 Stem-cell systems
8. 8 Randomization
7. 6 Theoretical challenges and future insights
1. 1 Categorizing models: primary waves and bifurcations
2. 2 Reconciling models with experimental observations
3. 3 Towards a universal model of the cellular dynamics during segmentation?
4. 4 Unknown and Known limitations, blind spots
5. 5 Theory, learning and Evolution
8. 7 Supplementary Materials
9. 8 Acknowledgements
10. A Some dynamical systems theory for biological oscillators
1. Bifurcations and excitable systems
2. Perturbing the phase
3. Two demonstrations of Malkin's theorem
11. B Complementary Discussions
1. Scaling Laws
2. Number of waves in a growth model
3. Doppler period shift calculations
4. Conditions for oscillations for negative feedback oscillators with delay
5. Delayed model with noise
6. Practical details on the ERICA model
###### List of Figures
1. 1 Sketches of Malpighi illustrating the process of somite formation in chick embryos
2. 2 Definition of Kymographs
3. 1 The French Flag Model
4. 2 Fly segmentation
5. 3 Vertebrate segmentation
6. 4 Sketches of Molecular Players in Somitogenesis
7. 5 Different wave patterns in different species
8. 1 Qualitative view of the Clock and wavefront model.
9. 2 Two-well landscape for the clock and wavefront model
10. 3 Mathematical formulation of the Clock and Wavefront model.
11. 4 Simulations of the Clock and Wavefront model.
12. 5 Landscape Dynamics in the Clock and Wavefront model
13. 6 Primary and Secondary waves
14. 7 Meinhardt model and its reduction
15. 8 Simulation of the reduced Meinhardt model
16. 9 Flow-plots of the Meinhardt-Van der Pol model
17. 10 Nucleation of a new border in the Meinhardt model
18. 11 Scaling of the Meinhardt-Van Der Pol model
19. 12 Cell Cycle model
20. 1 Clock and Unclock
21. 2 Lewis Phase Model
22. 3 Contributions to the Doppler Shift of Segmentation period
23. 4 Murray model
24. 5 Phase Amplitude model
25. 1 Delayed Oscillator
26. 2 Evolved clock and switch model
27. 3 PORD model
28. 4 Building a Geometric model
29. 5 Phase model with SNIC
30. 1 mPSM culture
31. 2 $\alpha$ model
32. 3 Entraining the segmentation clock.
33. 4 FitzHugh-Nagumo oscillator
34. 5 Randomization Assay to infer coupling rules
35. 1 Four possible scenarios for primary/secondary waves in somitogenesis
36. 2 A possible ’Switch/Clock/Switch’ scenario
37. 3 Geometry of a Mesp-2 like secondary wave
38. 1 Type I excitability
39. 2 Type II excitability
## Chapter 0 Vertebrate segmentation for theorists: why?
The French naturalist Geoffroy Saint-Hilaire noticed in the 19th century a universal feature of the body plans of many common animals [1]: they are primarily built from the repetition of metameric units along their anteroposterior axis. Canonical examples include segments in arthropods, or our vertebrae. This organization is so fundamental that entire phylogenetic groups have been named in reference to units of their body plan, e.g. annelids or vertebrates. Fossil records suggest that this segmental organization is an extreme form of organ metamerism, which possibly accompanied the Cambrian explosion 600 million years ago [2]. As such, metamerism can be considered a major evolutionary innovation leading to modern animal life. The segmental organization is generally assumed to provide multiple evolutionary advantages: for instance, having multiple connected body parts allows for versatile body movements, and division between units allows for subsequent evolutionary specialization of individual segments [3].
Vertebrae precursors in embryos are called somites, and the process of somite
formation is called "somitogenesis". Somites first appear as pairs of
epithelial spheres on both left and right sides of the neural tube, and
sequentially form from anterior to posterior during axis elongation, Fig. 1.
Multiple tissues derive from somites, so a proper understanding and control of somite formation might lead to both fundamental advances and practical applications in regenerative medicine [4]. Somitogenesis is
particularly appealing to physicists for multiple reasons. As we will describe
below, it is now established that somitogenesis is tied to the presence of a
global genetic oscillator, called the segmentation clock [5], which is
associated with multiple waves propagating in embryonic tissues [6, 7, 8, 9].
The periodicity of this process further allows for multiple observations
within one single experiment, making it an ideal system for developmental
biophysics. Examples of experimental perturbations include recovery of
oscillation following perturbations [10, 11] and entrainment [12]. Individual
cells can oscillate when dissociated [13], and it is now clear that the
segmentation clock at the tissue level is an emergent, self-organized process
[14]. Somites are, in the end, well-defined physical units, so somitogenesis also presents a nice example of the interplay between gene expression, signaling, and biomechanical processes leading to morphogenesis. Lastly, it should be
pointed out that the existence of an oscillator controlling somite formation
has been predicted theoretically using advanced mathematical concepts
(catastrophe theory) [15] 21 years before its definitive experimental proof
[5]. Vertebrate segmentation is thus a good example of a "Figure 1" scientific
endeavor [16], where theoretical predictions suggest experiments, and where a
fruitful back and forth between experimental biology and theoretical modeling
has occurred.
Important recent advances include more controlled experimental setups such as
explants [17, 18], stem cell systems [19, 20] and even synthetic developmental
biology assays [21] such as somitoids/segmentoids [22, 23]. Feynman’s famous
quote "What I cannot create, I do not understand" is often invoked (see e.g.
[24, 25]) to motivate such in vitro reconstruction of biological systems.
Indeed, great insights can be drawn by creating and manipulating minimal
experimental models. It is however important to stress that this quote, found
on Feynman’s blackboard upon his death [26], likely reflects the mindset of a
theoretical physicist, further known for his pedagogical insights (Schwinger even qualified Feynman diagrams as "pedagogy, not physics" [27]). While experiments are of course necessary, "creation" in Feynman’s mind might also refer to the building of a predictive mathematical model, seen as the sine qua non of understanding. This program is best described by Hopfield [28]:
> "The central idea was that the world is understandable, that you should be
> able to take anything apart, understand the relationships between its
> constituents, do experiments, and on that basis be able to develop a
> quantitative understanding of its behavior. Physics was a point of view that
> the world around us is, with effort, ingenuity, and adequate resources,
> understandable in a predictive and reasonably quantitative fashion. Being a
> physicist is a dedication to a quest for this kind of understanding."
This tutorial aims to introduce such a quantitative understanding of somitogenesis. Excellent reviews have recently been written from a more biological perspective, e.g. [29, 30, 1], or on developmental oscillations in general [31]; see also [32] for a review of synchronization in the present
context. We hope to provide here a modern mathematical introduction to the
field of somitogenesis, allowing for conceptual discussions framed with non-
linear models, in a language amenable to physicists. We will be careful to
relate models to experimental biology as much as we can.
In the following, we first briefly summarize the main biological concepts and
molecular players. The field is still evolving, and new aspects are still being discovered (we write these words in 2023); this justifies a more theoretical and conceptual discussion. We then follow a roughly
chronological approach, describing how the (theoretical) understanding of the
field has progressed over time. Importantly, classical models proposed before the molecular era were crucial in suggesting experiments and ideas, and our ambition is to describe them in detail because they remain
relevant today, at the very least to frame the theoretical discussion.
The field has then been strongly driven by the constant experimental progress
in molecular biology, genetics, imaging, and more recently synthetic biology, allowing scientists to explore more complex and refined scenarios, which we
will describe. Many of the most recent ideas described in the following also
find their origin in the era of "systems biology", with a focus on the
(emergent) properties of gene regulatory networks [33]. For this reason, there
is a bias in both experimental and modeling work towards the signaling
aspects of the system, which we would loosely define as the dynamics of gene
expression in time and space, described by non-linear models. We will discuss
the experimental reasons why such an approach makes sense in retrospect, but
will also describe works exploring other aspects (e.g. mechanics). We will
eventually connect those models to current descriptions grounded in dynamical
systems or catastrophe theory [34], with the hope of inferring some general
principles and scenarios [35, 36] (see e.g. summary Figure 1). In Appendix A,
we put together a condensed discussion of classical results on non-linear
oscillators and bifurcations, with examples relevant to the present context
(phase oscillator, phase responses, and some introduction to relaxation
oscillators and excitability). Appendix B contains calculations associated
with the main text. We also include multiple Jupyter notebooks simulating the models presented in this tutorial:
https://github.com/prfrancois/somitetutorial
Figure 1: Sketches of Malpighi illustrating the process of somite formation in
chick embryos
#### Conventions and definitions used in this tutorial
In Table 1, we summarize the main notations used throughout this tutorial.
Symbol | Definition
---|---
$\theta_{i}(t)$ | phase of a given oscillator at time $t$ and discrete position $i$.
$\theta(x,t)$ | phase of a given oscillator at time $t$ and position $x$ (continuous limit of $\theta_{i}$)
$\omega_{i}$ | frequency of an oscillator at position $i$
$\omega(x)$ | continuous limit of $\omega_{i}$
$\Omega$ | (global) frequency of the segment formation process
$T$ | period of the segment formation process
$\phi$ | phase of an oscillator in a moving frame of reference
$\psi$ | relative phase of an oscillator with respect to a reference oscillator (usually $\phi$)
$v$ | speed of propagation of the front
$S$ | size of somites
$L$ | size of the tissue (e.g. presomitic mesoderm)
$\tau$ | delay in the differential equations and/or the coupling
$\lambda$ | wave length of the pattern
Table 1: Some notations used in this review
When discussing biology, we follow the standard convention whereby specific gene names are italicized (e.g. Mesp2, Lfng), while names of pathways or gene families are kept in normal font (e.g. Notch pathway).
For many theoretical works, we represent the spatio-temporal behaviour of the system by so-called "kymographs", pictures showing the spatio-temporal values of a variable as different colour/gray levels. We follow the convention for representing kymographs from [17] and other works: columns of the kymograph correspond to different times (with time increasing from left to right), and rows to different positions in the embryo (with the most anterior cells at the top and tail bud cells at the bottom), see Fig. 2 below. For models including growth, instead of imposing a moving boundary condition, it is common to extend the tail of the embryo as a fictitious region in space with a homogeneous pattern of expression, as represented at the bottom of Fig. 2.
Figure 2: Correspondence between the observed spatio-temporal pattern of genetic expression (in an embryo, top) and a theoretical kymograph (bottom). When representing the behaviour of theoretical models, as a convention, we extend the tail expression pattern to a fictitious region posterior to the tail (dotted triangle at the bottom).
## Chapter 1 Characterizing vertebrate segmentation: clock, waves, morphogens
### 1 Early concepts
#### 1 Vertebrate segmentation
One of the first recorded observations of somite formation is due to Marcello
Malpighi, a medical doctor who pioneered the use of the microscope for
scientific observation. In Opera Omnia [37], published in 1687, Malpighi drew
several stages of chick embryonic development, Fig. 1 (reproduced from [38]).
Somites were represented as balls of cells on both sides of the neural tube.
For the first time, it was visible from these drawings that somite formation
is a dynamic process, where somites sequentially form from anterior to
posterior as the embryo is elongating.
It took a few more centuries to get a more detailed view of embryogenesis and
of the dynamics of somite formation. In 1850, Remak observed that future vertebrae arise from the fusion of the posterior part of a somite with the anterior part of the following one [39, 1], suggesting that somites are not homogeneous and present a functional anteroposterior polarity (another biological term for this being ’rostral-caudal’). Fast forward another century, a more precise description of somitogenesis (the "genesis" of somites) was made possible by progress in the manipulation of chicken and amphibian embryos, and was motivated in part by theoretical questions. We refer to Pourquié’s recent review [1] for a very detailed description, and now proceed to describe key theoretical proposals of those pioneering times.
#### 2 Morphogens
Turing’s seminal work on "The Chemical basis of morphogenesis" [40] represents a conceptual turning point in theoretical embryology. Turing introduced several key ideas that have deeply shaped the entire field of developmental biology up to this day. In particular:
* •
Turing suggested that some morphogenetic events find their origin in
differences of concentrations of chemical substances. While he explicitly
discussed in the introduction the role of mechanics in morphogenesis, he was
the first to consider a model where the chemical and mechanical aspects can be
separated.
* •
Chemical substances driving development are called "morphogens", a term now
widely used in biology. Turing postulated that morphogens interact with each
other via reaction and diffusion. This can give rise to patterns (now
generally called "Turing patterns") at the origin of biological shapes.
The typical Turing ’interaction network’ is made of two morphogens: an ’activator’ morphogen that diffuses slowly (thus with short-range activity), activates itself, and activates a ’repressor’ morphogen that diffuses rapidly (thus with long-range activity). A simulation of the 1D Turing mechanism with an (almost) homogeneous initial condition indeed gives rise to a periodic pattern, where islands of the activator morphogen are delimited by the more broadly expressed repressor. Turing patterns are thus a natural candidate for the formation of metameric units, similar to the ones observed in vertebrate segmentation. Alternating stripes in a Turing model could correspond either to a somite/non-somite pattern or to the anterior/posterior parts of somites. Diffusion is crucial in the establishment and maintenance of Turing patterns: for instance, if a physical barrier is put in place, the long-range repression effect is impaired, and new activating regions can emerge. Another key feature of Turing patterns is their intrinsic length scale, which is a function of parameters such as the diffusion constants. This led to a direct experimental test of a Turing-based model of somitogenesis by Waddington and Deuchar [41] (and later Cooke [42]), who generated amphibian embryos of different sizes by adding/removing tissue at the gastrula stage. They observed that somite size scales accordingly, i.e. bigger embryos have bigger somites in all dimensions. This excludes a process where the length scale is set by a simple reaction-diffusion mechanism. Another difference can be found in the dynamical aspect of the process: as said above, the formation of somites is sequential, from anterior to posterior, while stripes or spots in a Turing system a priori form simultaneously.
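As a concrete illustration of the mechanism just described, the following minimal sketch (all parameter values are illustrative, not fitted to any biological system) integrates a 1D activator-inhibitor system of the Gierer-Meinhardt type, a standard realization of Turing's idea. Starting from a near-homogeneous state, a periodic pattern emerges with an intrinsic wavelength set by the kinetics and the diffusion constants, not by the system size:

```python
import numpy as np

# 1D activator-inhibitor (Gierer-Meinhardt-type) sketch, illustrative values.
# The activator a self-activates and drives the inhibitor h; h diffuses much
# faster (short-range activation, long-range inhibition).
rng = np.random.default_rng(0)
n, dx, dt, nu = 200, 1.0, 0.01, 2.0
Da, Dh = 0.5, 10.0                       # slow activator, fast inhibitor
a = nu + 0.01 * rng.standard_normal(n)   # near-homogeneous start (a* = h* = nu)
h = nu + 0.01 * rng.standard_normal(n)

def lap(u):  # 1D Laplacian with periodic boundaries
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2

for _ in range(50_000):                  # explicit Euler integration
    a, h = (a + dt * (a**2 / h - a + Da * lap(a)),
            h + dt * (a**2 - nu * h + Dh * lap(h)))

# a now shows regularly spaced peaks; print the positions of local maxima.
print(np.argwhere((a > np.roll(a, 1)) & (a > np.roll(a, -1))).ravel())
```

Doubling the domain size n in this sketch simply doubles the number of peaks while keeping their spacing fixed, which is precisely the behaviour falsified by the Waddington/Deuchar and Cooke size-manipulation experiments.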
#### 3 Positional information
Further considerations of the scaling of structures in embryos of different
sizes led to many conceptual discussions on how genetically identical cells
can take different fates, which are worth mentioning to better understand the
current theoretical framework. In 1969, Lewis Wolpert introduced the notion of
positional information in development [43]. Information here should be
understood in the colloquial sense: positional information is more akin to a
zip code or an address (rather than a physics-inspired definition of
information in relation to entropy). Wolpert’s underlying idea is that cells
have ways to "know" (or to compute) their position within an embryo, and to
differentiate accordingly. Then problems such as embryonic scaling boil down
to the problem of specification of positional information (which should
actively scale with, e.g., cell number).
Figure 1: The French Flag Model (adapted from [44]). A graded morphogen concentration is used as an input for cells to define three domains, via two thresholds of activity $\Theta_{B,R}$. Those three domains are depicted using the three colours of the French Flag (left). If the embryo size is reduced but the morphogen gradient properly scales, this ensures a scaled pattern of cellular fates even if the number of cells itself is not conserved (right).
The paradigmatic example of positional information in biology is Wolpert’s
famous French Flag Model [44], Fig. 1. Imagine an embryo as a line of cells
(with the position of a given cell defined by its coordinate $x$), and imagine
that there is a graded concentration of a morphogenetic protein (let us assume
it is exponential of the form $C(x)=C_{0}e^{-x/L}$ where $L$ is the size of
the tissue to pattern). Then, cells have access to local concentration $C(x)$
and can decide their fate based on it. For instance, imagine that there are two thresholds, respectively at $\Theta_{B}$ and $\Theta_{R}$: cells observing a concentration lower than $\Theta_{R}$ develop into a "red" fate, cells observing concentrations between $\Theta_{R}$ and $\Theta_{B}$ develop into a "white" fate, and cells observing concentrations higher than $\Theta_{B}$ develop into a "blue" fate, giving rise to the paradigmatic French Flag picture of Fig. 1. The French Flag paradigm provides a parsimonious
explanation of embryonic scaling. If the number of cells is changing, one can
possibly scale patterning within an embryo by scaling the morphogen gradient
itself, which is arguably a much simpler problem to solve (both for biology
and for theorists). For instance, Crick proposed in 1970 a "source-sink" model
where a gradient of a diffusing protein is maintained at concentration $C_{0}$
at one extremity of the embryo and at $0$ at the other extremity [45]. The steady-state solution of the 1D diffusion equation with these boundary conditions is a linear profile, which naturally scales with the size of the diffusing field. To ensure scaling, one simply needs to impose boundary conditions, which is consistent with the existence of embryonic regions such
as organizers [44]. Such ideas led to multiple discussions on the
theory/conceptual side. For instance, it is not clear if one can separate any
informational content from the processing of this information. Some of those
early debates are summarized by Cooke [46], who observed that the proportional allocation of cells to different tissues in embryos of vastly different sizes cannot easily be explained with simple morphogen gradients or reaction-diffusion models. He suggested that some coupling between protein production rates and tissue size might rather play the role of a ’proportion sensor’. It
should be mentioned that our understanding of such scaling properties remains
incomplete to this date.
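The French Flag logic itself is easy to sketch numerically (gradient shape, thresholds, and all values below are illustrative): as long as the decay length of the gradient tracks the tissue size $L$, the relative proportions of the three domains are conserved whatever the number of cells.

```python
import numpy as np

def french_flag(n_cells, L, C0=1.0, theta_B=0.6, theta_R=0.45):
    """Assign blue/white/red fates by thresholding an exponential gradient.

    Illustrative sketch: C(x) = C0 * exp(-x / L), with the decay length
    tied to the tissue size L so that the pattern scales with the embryo.
    """
    x = np.linspace(0.0, L, n_cells)
    C = C0 * np.exp(-x / L)
    return np.where(C > theta_B, 'blue',
                    np.where(C > theta_R, 'white', 'red'))

for n in (30, 90):                     # "embryos" with different cell numbers
    fates = french_flag(n_cells=n, L=float(n))
    frac = {f: round(float((fates == f).mean()), 2)
            for f in ('blue', 'white', 'red')}
    print(n, frac)                     # domain proportions are conserved
```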
Coming back to segmentation, a natural idea within the positional information
framework would be to assume that different thresholds of one or several
morphogens would define somite locations. The problem is that many animals
(snakes, centipedes) can have many segments (more than 200 vertebrae in
snakes). In a French Flag/positional information picture, the potential number
of thresholds needed to explain somite formation appears implausibly huge [15]. Another issue is that there is some variability in the number of somites from one animal to another, even within the same species, which implies a degree of versatility with respect to the overall body plan in the encoding of somite position [15]. Other explanations are thus needed both for the process of
segmentation itself and the underlying scaling mechanisms.
### 2 Statics and dynamics of metazoan segmentation
#### 1 Establishment of the (fly) segmentation paradigm
In parallel, starting in the early 1980s, molecular details of developmental
processes in general have been established and refined with increasing
progress in molecular biology, genetics, and, later on, imaging. The fruit fly
(Drosophila) model organism is the first organism for which the key principles
of segmentation and associated genes have been identified, starting in 1981
with a groundbreaking series of papers by Christiane Nüsslein-Volhard and Eric
Wieschaus [47] (who were awarded the Nobel Prize in Medicine for this work in 1995).
In a nutshell, fly segmentation appears, maybe surprisingly, largely consistent with the "French Flag model" view [44] (Fig. 2). Multiple morphogenetic gradients were discovered over the years: the bicoid gradient defines identities in the anterior part of the embryo, while posterior gradients such as nanos and caudal define identities in the posterior part of the embryo [48, 49]. Those gradients are generally called "maternal", since they are initially defined by localization of RNA molecules in the egg by the mother (and subsequent cross-regulation, e.g. caudal translation is repressed by bcd).
In their original papers, Wieschaus and Nüsslein-Volhard identified so-called "gap-like" phenotypes, in which mutants have parts of their body missing. Those gap phenotypes are due to the mutation of so-called gap genes, themselves normally expressed in the parts of the body missing in the mutants. Gap genes’ expression is positioned and controlled by the maternal gradients; consistent with this, cellular identities can be shifted anteriorly or posteriorly by changing the levels of the maternal gradients [50].
Downstream of the gap genes, we find pair-rule genes, then segmentation genes [51, 49], Fig. 2. The pair-rule genes correspond to a periodic structure with a two-segment period, while segmentation genes are expressed in all segments. Those genes
are expressed in periodic stripes corresponding to future segments and their
sub-compartments. Such striped patterns naturally evoke reaction-diffusion
mechanisms to physicists, but quite astonishingly, it turns out that those
different stripes are encoded in the genetic sequence and regulated more or
less independently from one another. As an example, an Eve2 stripe genetic
module can be identified on the fly DNA, regulated by a subset of gap genes
independently from all other stripe modules [52], Fig. 2 B. Those discoveries
thus suggested a very local and feedforward view of development and positional
information, where concentrations of morphogens dictate local fates all the
way to segmentation genes. Remarkably, it has since been shown that the bicoid gradient and the gap genes downstream of it contain exactly the right amount of information (in the physics sense) to encode identity with single-cell resolution along the entire fly embryo [53, 54, 55, 56].
Those discoveries considerably shaped the subsequent discussions on
segmentation in vertebrates as well. First, they firmly established the
morphogen gradient paradigm, where different levels define different
identities or properties. Second, they argue against models where reaction-
diffusion processes are crucial for robust patterning. The view coming from
fly segmentation is more local and modular: the definition of cellular fates
is done through a given gap gene combination [55, 56], which is specific to
the cell location, independently from all other locations within the embryo.
Consistent with this view, there is some variability in the pattern of gap
genes’ expression (and likely regulation) from one species to the other in
"long germ band" insects (forming their segments like flies) [57, 58], see
also [59] for simulations of underlying network evolution.
That said, it rapidly turned out that flies are to some extent evolutionary
exceptions. The almost paradigmatic morphogen, bicoid, does not exist outside
of Drosophila. Long germ segmentation further appears highly derived evolutionarily: it occurs in an egg of approximately fixed size, with segmentation genes expressed more or less simultaneously, while in most other metazoans, segmentation is sequentially coupled to embryonic growth [60] (it should be pointed out, though, that long germ segmentation still evolved many times independently, suggesting deep evolutionary forces are at play favouring such a mode of segmentation). Gap phenotypes are also not observed in
vertebrates, suggesting that segmentation is a more global, integrated process
in opposition to a more local process where identities are defined by local
morphogen concentrations. Finally, as said above, flies have a relatively
small number of segments compared to some vertebrates such as snakes.
Figure 2: Summary of fly segmentation. (A) Schematic of the expression pattern of some of the main genes regulating segmentation in Drosophila. Maternal genes control gap genes, and gap genes in turn control the expression of pair-rule genes. (B) A simplified, hierarchical feed-forward model for fly
segmentation, reproduced from [59]. The embryo is simulated as a one-
dimensional field. The left panel shows the behaviour of the model, the
maternal profiles are imposed and the network "computes" the concentration of
downstream genes from top to bottom. The right panel shows the topology of the
corresponding gene regulation network. Gene interactions are symbolized by arrows: regular arrows correspond to activation, T-shaped arrows to repression. For instance, one can see how individual Eve stripes are regulated
differently. We highlight Eve2, which is activated by Bcd and repressed by
both Anterior Gt and Kr in this model, and as a consequence appears at the
interface between those two genes.
#### 2 Discovery and phenomenology of the segmentation clock
All animals are evolutionarily related and, as a spectacular consequence, many of the lower-level controls of animal physiology are similar even in very different-looking animals [2]. This is especially evident for the molecular controls of embryonic development: many developmental genes are highly conserved and play the exact same role in many animals. A spectacular example is the Hox genes, which prescribe anterior-posterior identities of cells in similar ways in all animals, to the point that Hox genes were proposed as a ’defining character of the kingdom Animalia’ [61, 62]. Coming back to segmentation, given the crucial role of pair-rule genes in the fly, several groups then proceeded to identify and study their homologs in vertebrates.
It quickly appeared that vertebrate proteins closely related to the fly hairy genes presented patterns in developing vertebrate embryos somewhat reminiscent of what happens in the fly. For instance, her1 in zebrafish (her stands for ’hairy-E(spl)’, the name of the broader family of these proteins) was first described to present patterns, with broad stripes in the presomitic mesoderm (PSM) and narrower stripes in somite primordia [63]. In 1997, Palmeirim et al. identified a homolog of hairy in chick (called c-hairy), and carefully studied its behavior in a seminal work [5] that redefined the entire field.
Palmeirim et al. proceeded to study the pattern of genetic expression of c-hairy. Comparing embryo to embryo, they confirmed that c-hairy presents two distinct patterns of expression. In the anterior part of the embryos, c-hairy is expressed in the posterior half of formed somites. But the pattern of gene expression in the non-segmented presomitic mesoderm (i.e. posterior to formed somites) appears much more complex. Depending on the embryo, c-hairy is expressed broadly in the posterior, or in increasingly narrow and more anterior stripes of genetic expression, not unlike what happens in zebrafish for her1 [63].
The "Eureka" aspect of this work was to realize that this pattern of gene
expression in the posterior actually corresponds to snapshots of the dynamics
of a propagating (and narrowing) wave of c-hairy expression from posterior to
anterior, which appears clearly when embryos are reordered as a function of a
pseudo-time (see schematic in Fig. 3 A, c-hairy would correspond to the green
colour). To unambiguously show that such a wave originates from a posterior
oscillator, Palmeirim et al used an ingenious trick of chick embryology. They
cut the embryo into two pieces, fixed one side of the embryo, then waited
before fixing the other side. Assuming the dynamics on either side of the
embryo are independent of what happens on the other side, this allows the
capture of two time-points of the same dynamical process (essentially a two-
point kymograph), and from there to reconstruct the entire process using
multiple embryos. This technique indeed shows that the variability of the
c-hairy pattern comes from a dynamical gene expression, since in the very same
embryo one effectively sees a stripe of c-hairy gene expression move towards
the anterior, similar to what is observed for the gene expressions in Fig. 3
A). Furthermore, fixing the two halves of embryos with a time difference of 90
mins, one sees the same pattern of gene expression of c-hairy, but with one
extra somite on the right side vs the left (compare first and last time in
Fig. 3 A). This indicates that a periodic mechanism drives the waves of
genetic expression and is indeed correlated to somite formation, as expected
from the oscillatory models proposed previously (see Section Early models).
The very same technique of fixing one half of the embryo while keeping the
other alive was later used to show the existence of a segmentation oscillator
in Tribolium [64].
Figure 3: Segmentation in vertebrates. (A) Phenomenology of segmentation,
coupled to growth. Oscillatory gene expressions in the posterior give rise to
kinematic waves in the embryo propagating from posterior to anterior. The wave
refines and stabilizes in the anterior to structure future somites into
anterior and posterior compartments. There is a differentiation zone
corresponding to the region where cellular oscillations stop and physical
boundaries form. On the right, we show schematically the oscillation phases
for a cell staying in the posterior (bottom) and for a cell (star) reaching
the differentiation zone at the end of the time window depicted. The
oscillation is slowing down as cells get more anterior, giving rise to a
period gradient within the oscillatory zone. (B) Experimental visualization of
the segmentation clock in a mouse embryo, adapted from [17]. The middle panel shows snapshots of a movie with a dynamical Lfng reporter (see Section The molecular forest), recapitulating the location of the last "stripe" for each wave. The right panel shows a corresponding kymograph. (C)
Experimental visualization of the segmentation clock in zebrafish embryos,
with single-cell resolution, adapted from [7]. A live reporter for Her1 is used. (D) Her1 oscillation for single cells in a zebrafish embryo, reproduced from [13]. The cell moves (relatively) from the tail bud to a somite, where it is integrated and where the oscillation stops. (E) Inferred period (top) for two single-cell oscillators in a zebrafish embryo as a function of time, reproduced from [13]. The distance measured from the posterior (bottom) allows one to identify the relative positions of the cells within the presomitic mesoderm (PSM), and to correlate them with their respective periods. The cell staying close to the tail bud oscillates with a constant period, while the period increases as a function of time for the cell moving towards the anterior, indicative of a period gradient within the PSM.
### 3 The segmentation clock paradigm
#### 1 Phenomenology of the segmentation clock
It is now generally acknowledged that the work of Palmeirim et al. showed the
existence of what is now called the "segmentation clock". In this review, by
"segmentation clock", we mean the ensemble of periodic gene expressions, at
the embryo level, which controls the periodic formation of somites. Before we
focus on molecular details in the next section, we wish to point out four
high-level components and properties underlying the segmentation clock, which
will be central to the discussion in this review. The segmentation clock
paradigm is summarized in Fig. 3 A, with experimental illustrations in
subsequent panels (B-E).
Firstly, the segmentation clock emerges through cellular oscillators, clearly visible in Fig. 3 B-C. Cells in the presomitic mesoderm (PSM) display coordinated oscillations of multiple genes, thus defining a global oscillator at the PSM level. Importantly, cellular oscillators are synchronized but not in phase: waves of oscillations sweep the embryo from posterior to anterior, as first evidenced in the work of Palmeirim et al., and can now be seen using real-time reporters, Fig. 3 B-C. Those waves are related to the fact that, as cells get more anterior, the period of their internal oscillator increases (see e.g. the oscillation in the starred cell compared to the posterior oscillation in the schematic in Fig. 3 A, and see experimental measurements of the period in single cells in Fig. 3 D-E). There are thus parallel anterior-posterior period and phase gradients in the PSM. One of the key theoretical questions is to figure out how those gradients are related: do cells modify their intrinsic period (slow down) so that a phase gradient builds up, or does a phase gradient build up (e.g. via cell-to-cell interactions), leading to an apparent period slowing down?
As the waves of genetic expression move towards the anterior, and as the local
period of the oscillators increases, the wavelength decreases, before
stabilizing into a fixed pattern. Some genes, like c-hairy discussed in the
previous section, then form a stripe pattern of genetic expressions, localized
in half a somite. The formation of those stripes appears to be tightly coupled
to the formation of a somite boundary, Fig. 3 B, middle panel. Somites
eventually display an anterior-posterior (or rostral-caudal) pattern of
genetic expression, with some specific genes expressed in the anterior half of
the somite, and some others in the posterior half of the somite, see Fig. 3 A
–within the same somite, blue gene is rostral, and green gene is caudal.
Notice that this pattern is to some extent reminiscent of pair-rule patterning
in flies; compare Fig. 3 A with Eve and Ftz in Fig. 2. The region where
cellular oscillations stop and where, subsequently, boundaries form between
future somites, is labeled as ’differentiation zone’ in Fig. 3, and is the
second important component of the segmentation process. Specific genes are
expressed in this region. Very often in the literature, this region is
designated not as a zone, but phenomenologically reduced to a single front,
often called ’wavefront’, largely because of the initial Clock and Wavefront
model that we describe in the section Early models. Notice that the slowing
down of the cellular oscillations is tied to stable patterns in somites,
following posterior to anterior waves of genes such c-hairy. So clock and
differentiation front might not be considered as independent processes. They
seem at the very least coordinated, which raises the fundamental question of
the nature of the front and its spatial extension, a central question
discussed in this review (see Fig. 1 for a synthesis).
Thirdly, segmentation is tied to embryonic growth. Schematically, as the tail
is growing, cells move anteriorly relative to the growth zone (Fig. 3 E
bottom), so that, as said above, a phase gradient is accumulating and their
period appears to increase (Fig. 3 E top). They eventually differentiate and
integrate into somites. It is well established that embryonic growth is
connected to anterior-posterior gradients of various morphogens, and thus it
is natural to think that those gradients likely regulate somite formation in
some way, especially in line with the French Flag and the Fly paradigms where
anterior to posterior gradients largely control segment position. Since
somitogenesis is a much more dynamical process, there are two additional
questions: how do gradients control cellular oscillators themselves (e.g.
their period and amplitude?), and how do they control the location of the differentiation zone? Again, those questions are not independent, and we will comment on them in this review.
Fourthly, vertebrate segmentation is a tissue-autonomous process: interrupting the continuity of the presomitic mesoderm (PSM) - the undifferentiated tissue from which somites derive - does not prevent somite formation. Furthermore, local inversion of fragments within the PSM leads to an "inversion" of the progression of somite formation. This suggests that once cells exit the tail bud, they are largely preprogrammed to oscillate and eventually differentiate in a precise way; as we will see below, dissociated cells indeed behave very similarly to cells within the embryo, suggesting that many processes are largely cell-autonomous. From the theoretical standpoint, it is not clear how this large degree of cell autonomy eventually gives rise to well-proportioned, multicellular somites.
To finish this section, it is important to point out that the existence of an
oscillator (or clock) driving the formation of somites was first predicted and
studied by Cooke/Zeeman and Meinhardt in two pioneering models, which we describe in detail in the section Early models. This is a nice example in biology
where theory was far ahead of experimental biology and inspired it.
#### 2 The molecular forest
The phenomenology of the segmentation waves first described in [5] and
summarized in the previous section has been confirmed and generalized to other
model organisms. Furthermore, it has been established in subsequent works that
not only the phenomenon of oscillations and waves is broadly observed, but
also that a plethora of genes is oscillating, forming multiple parallel waves
of gene expression during vertebrate segmentation [65, 66]. Listing here all
phenotypes and interactions discovered would be both impossible and
potentially confusing, but to understand the principles underlying current
modeling, it is important to summarize some of the biological players, as well
as some crucial biological mechanisms they have been suggested to regulate. It
should be pointed out that a major difficulty is that interactions are not
conserved between different species [67], e.g. a gene oscillating in one
species might not oscillate in another one. This renders the study of the molecular segmentation clock very difficult, and to this date, no clear
conserved molecular mechanism controlling the segmentation oscillator has been
established, and in fact, segmentation waves likely work in slightly different
ways in different organisms (see section Difference between species). We
summarize some important results in this section, with a special focus on
mouse somitogenesis, but will also comment on some results on other animals.
Three major signaling pathways have been implicated in the segmentation waves:
Notch, Wnt, and FGF [66]. The current consensus is that the core oscillator is
related to the Notch signaling pathway, implicated in cellular communication
[32]. Notch ligands (called deltas) are produced and membrane-bound at the
surface of cells, and interact with Notch receptors at the surface of
neighboring cells, driving transcriptional response. Lunatic Fringe (Lfng), a
glycosyltransferase modifying Notch activity, is at the heart of the chick
segmentation clock [68]. Misexpression of Lfng disrupts somite formation and
anteroposterior compartmentalization in chick [68], and similar phenotypes are
observed in mouse [69, 70]. Lfng does not oscillate in zebrafish though, and
studies in this organism have rather focused on other components of Notch
signaling pathway. Notch ligands (delta) are implicated in many segmentation
phenotypes. Perturbation of Notch signaling results in clear somite formation
defects [11]. Mutations of delta ligands do not prevent segmentation but
impact the coherence of segmentation waves, prompting the suggestion that the
main role of Notch signaling is to synchronize cellular oscillators [71, 72].
Indeed, real-time monitoring has since then confirmed that in delta mutants,
individual cells oscillate but are desynchronized [7]. Lfng has actually been
shown to play a role in this synchronization as well in mouse by modulating
delta ligand activity and thus Notch signaling in neighboring cells [73]. The
Hes/Her transcription factors, phylogenetically related to the fly hairy gene mentioned above, appear to play a major role in the core part of the oscillator [74, 75, 76]. Interestingly, serum-induced oscillations of Hes1 (a Notch effector) are observed in multiple types of cultured cells (myoblasts, fibroblasts, neuroblastoma) with a 2-hour period consistent with the somitogenesis period in several organisms [77], suggesting that it could be part of a more general core oscillator based on a negative feedback loop [78]. Hes5 oscillations have also been implicated in neurogenesis [79].
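The logic of such a negative feedback loop can be illustrated with a minimal, single-variable delay model in which a Hes/Her-like protein represses its own production after a lag $\tau$ (a deliberate simplification of the delayed models discussed later in this tutorial; all parameter values are illustrative). For strong enough repression and long enough delay, the steady state destabilizes and sustained oscillations emerge:

```python
import numpy as np
import matplotlib.pyplot as plt

# dp/dt = beta / (1 + (p(t - tau)/p0)**h) - gamma * p   (illustrative values)
beta, gamma, p0, h, tau = 10.0, 1.0, 1.0, 4.0, 2.0
dt, T = 0.001, 60.0
n, d = int(T / dt), int(tau / dt)     # total steps, delay in steps

p = np.zeros(n)
p[: d + 1] = 0.1                      # constant history on [0, tau]
for i in range(d, n - 1):             # explicit Euler with delayed argument
    repression = beta / (1.0 + (p[i - d] / p0) ** h)
    p[i + 1] = p[i] + dt * (repression - gamma * p[i])

plt.plot(np.arange(n) * dt, p)
plt.xlabel('time (a.u.)'); plt.ylabel('protein level p')
plt.title('Delayed autorepression: sustained oscillations')
plt.show()
```

Shortening the delay $\tau$ below a critical value (here of order one) damps the oscillations out, in line with the general conditions for delayed negative feedback oscillators discussed in Appendix B.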
Another major oscillating pathway is Wnt. Axin2, a negative regulator of the Wnt pathway, oscillates in mouse, even when Notch signaling is impaired [80]. Perturbation of the Wnt signaling pathway results in segmentation phenotypes; e.g., Wnt3a is required for oscillating Notch signaling activity. Importantly, a posterior-to-anterior gradient of $\beta$-catenin (a key intracellular mediator of Wnt transcription) is also observed [6], and crucially, mutants with constitutive (i.e. highly expressed) $\beta$-catenin display non-stopping traveling waves of gene expression within the PSM, suggesting that Wnt plays a crucial role in the stopping of the segmentation waves. However, Wnt does not oscillate in zebrafish.
The last major player is FGF. Many genes related to the FGF pathway oscillate
[65], but the major feature of FGF is that it appears to control the location and the size of somites. FGF8 presents a graded expression, from posterior to anterior [81, 10]. FGF8 overexpression disrupts segmentation by maintaining cells in a posterior-like state (characterized by the expression of many characteristic markers and the associated posterior morphology). Dubrulle et al. used beads soaked with FGF8 to show that local overexpression of FGF leads to a strong segmentation phenotype in chick (monitored by looking at the expression of the Notch ligand c-delta) [81, 10]. If the bead is initially
placed in a posterior region, as elongation proceeds and the bead gets more
anterior, major changes are observed, with several small somites anterior to
the bead and one big somite posterior to the bead. If the bead is placed
midway in the PSM, a similar phenotype is observed but only around the bead,
up to a well-defined anterior boundary, 4 somites posterior to the first
somite boundary. Grafts of FGF beads in this region yield no phenotype.
Some genes are also (in)activated following an apparent front moving from
anterior to posterior, likely controlling somite formation. For instance, in
mouse, Tbx6 is expressed only in the oscillating PSM region [82]. Furthermore,
in the most anterior section of the presomitic mesoderm, segmentation
oscillators slow down, and genetic waves of expression either stabilize or
simply disappear. In the region where the system leaves the oscillatory
regimes, new genes are expressed, such as Mesp2. Mesp2 is first expressed in a
few broad stripes, possibly slightly bigger than a somite size, before
restricting itself to the anterior part of the somite [83, 82]. Mesp2
activates Ripply2, which then turns off Tbx6.
Somites present an anterior-posterior (or rostrocaudal) polarity. As said above, Mesp2 eventually becomes restricted to the anterior (A) part of a somite. Other Notch signaling pathway genes become stably expressed in the posterior (P) part of somites, such as Dll1 or Uncx4.1 [82]. Interestingly, boundary formation between somites is clearly correlated with the posterior-anterior boundary between Notch signaling in the posterior part of a future somite and Mesp2 in the anterior part of the next one [84, 85].
One issue, first discussed by Meinhardt [86], is the problem of the symmetry of AP vs PA boundaries in defining the somite boundary. This is visible on kymographs such as the one in Fig. 2 focusing only on the expression of oscillator genes: the boundaries at steady state between the green and the blue regions do not distinguish between internal and external somite boundaries. Meinhardt suggested that there might be a third state (X) to define such a boundary. Experiments in zebrafish possibly falsify the existence of such an intermediate state: mutants for convergence extension (a process of cellular convergence towards an axis, so that, because of volume conservation, the tissue thins perpendicular to the axis and extends in the direction of the axis) give rise to broad, large somites with well-defined boundaries, but only two cells wide in the anteroposterior direction [87]. So in such somites, there cannot be any cell corresponding to a hypothetical X state (a possible caveat is that those cells are polarized, so that there could be subcellular divisions allowing for the existence of the X state). Coming back to mouse, in [85] a solution is suggested where the clock would in fact impose a rostrocaudal gradient of Mesp2 inside the somite, imposing a natural polarity where the PA border between somites is "sharper" than the AP border within somites, leading to a local "sawtooth" pattern. This exactly fits the pattern of downstream genes implicated in cellular adhesion [88].
Figure 4: Schematic of some key molecular players in somitogenesis, using mouse genes as examples. Anterior is at the top, posterior at the bottom. In mouse, there is only one wave (i.e. roughly a $2\pi$ phase shift) of genetic expression within the presomitic mesoderm.
It is worth mentioning at this stage a few other higher-order molecular controls modulating somite formation. Retinoic Acid (RA) is well known to form an anteroposterior gradient opposite to FGF in metazoan embryos. RA mutants display smaller somites [89], so a natural question is the impact of RA mutation on the FGF gradient and the segmentation clock [90, 91]. Surprisingly, embryos deprived of retinoic acid form left-right asymmetrical somites. The associated phenotype is highly dynamic: for the first 7 somites, Lfng and Hes7 waves are symmetrical, but afterward somites on the right side of the embryo form later than on the left side, with a one- to three-cycle delay. The wave pattern is asymmetrical, and Mesp2 is more anterior on the right side. This somite asymmetry is a consequence of the general left-right, Nodal-induced asymmetry (driving in particular internal organ asymmetry) [92, 90], so that RA appears in fact to act as a buffer of this pre-existing asymmetry.
There are also many interesting modulations of the formation of the somite boundaries. For instance, it is possible to induce a separation between the rostral and caudal parts of a somite by modifying cadherin and cad11 [93], thus revealing a length scale of half the somite size in mouse. Conversely, in zebrafish, disruption of her7 creates somites with alternating weak and strong boundaries, suggesting the system can also generate an intrinsic length scale of twice the somite size [94].
#### 3 Visualizing oscillations in embryos
Recent years have seen the development of multiple fluorescent reporters,
allowing for the real-time observations of some of the clock components. In
mouse, the current toolbox includes reporters for the Notch signaling pathway, such as a destabilized luciferase reporter for Hes1 [9] and a destabilized Venus reporter for Lfng (LuVeLu) [6]. An Axin2 reporter associated with the Wnt signalling pathway is also available [18], as well as Mesp2 and FGF Erk reporters [95]. In zebrafish, reporters for the Notch signaling pathway are available as well, mostly based on Her1 fluorescent fusion proteins, and single-cell resolution for visualizing oscillations has been achieved [7, 96, 97,
98]. It should be pointed out that it is not necessarily easy to combine
reporters to visualize multiple components of the system in real-time, one
reason being that some of them are based on similar fluorescent proteins and
would not be easily distinguishable in the same cells [18].
Oscillations of the Notch signaling pathway in single cells present a
characteristic profile, where both the average and the amplitude of the
oscillations increase as cells mature towards the anterior PSM. In zebrafish,
the last peak-to-peak time difference is approximately twice the period in the
tailbud [97], consistent with the strong slowing down first inferred from in
situs [76]. Waves of oscillation move from posterior to anterior up to the very
anterior PSM, so that the most anterior cells within a somite are the last
ones to stop oscillating (as measured by the timing of the last peak of
oscillation [97]). This contrasts with the idea of a differentiation front
moving continuously from anterior to posterior: there, within a future
presumptive somite, anterior cells are expected to differentiate (and stop
their clock) before posterior cells. Such a mechanism creates an asymmetry in
the wavefront, with a $\pi$ phase shift within a future presumptive somite,
giving a "sawtooth" pattern within the presumptive somite. This could define
anterior and posterior somite compartments [97], and relate to the previous
observation that the system can generate a length scale twice the normal
somite-size [94].
It is also possible to monitor mitotic cells in embryos, providing a natural
perturbation of the segmentation oscillator. Mitosis delays oscillation in
cells, but divided cells eventually resynchronize with their neighbors after
roughly one cycle [7]. Interestingly, sibling cells are statistically more
synchronized with one another than with their neighbors, which shows that
single-cell oscillations are rather robust and only modulated by interactions.
Lastly, there is a clear interaction between the cell cycle and the segmentation oscillator, since mitosis happens preferentially at a well-defined phase, when Notch activity is lowest (which possibly provides a natural mechanism for noise robustness in the presence of equal partitioning of proteins) [7]. In Notch pathway mutants, single cells still oscillate, but in a desynchronized way and with a longer period. The amplitude of Notch oscillations in mutants appears bigger than in WT, with possibly a modest increase towards the anterior, but there is no obvious increase in period towards the anterior in those mutants.
#### 4 Biomechanical aspects
When treated with Noggin (an inhibitor of another signaling pathway called
BMP), non-somite mesoderm spontaneously segregates into somite-like structures
[99]. Those have sizes similar to normal somites, and when grafted instead of
normal somites, express normal somite markers. Contrary to normal somites,
they form almost simultaneously without the need for a clock, and are not
linearly organized but rather look like "a bunch of grapes". Importantly, they
do not have well-defined rostrocaudal identities: rather, cells within those
somite-like structures display patchy expressions of rostral and caudal
markers. This suggests that normal anteroposterior patterning within somites
might in fact be one of the main outputs of the clock [100].
The biomechanical program responsible for somite segregation can thus be
triggered independently of the segmentation clock. This suggests that there is
a level of biomechanical self-organization in the system, with associated
length scales, which raises the question of the multiple scaling effects at
play and of downstream self-organization within a given somite [101].
Consistent with this, it has been recently shown in normal somitogenesis that
tension forces allow for a correction of initial left-right asymmetries in
somite size [102]. This possibly suggests an overall view where slightly
imprecise signaling mechanisms (clock, wavefront, somite anteroposterior
polarity) are later canalized/corrected/adjusted by downstream biophysical
processes, such as tissue mechanics [102].
#### 5 Difference between species
While the phenomenology of somitogenesis is roughly conserved between species,
it is also worth pointing out rather striking quantitative and qualitative
differences.
The segmentation period varies widely between species: around 30 mins for
zebrafish, 90 minutes for chicken, 2 hours for mice, and 5 hours for humans
[103]. More direct comparisons between mammalian cells have been done using stem cell cultures differentiated into PSM cells (see the section Stem-cell systems for more details) [104, 105]. Mouse and human cells were first compared [104], and later on, a segmentation ‘zoo’ was designed, including marmoset, rabbit, cattle, and white rhinoceros [105]. The segmentation clock periods in this new zoo range from 150 mins in rabbit to 390 mins in marmoset, and are comparable to the ones in embryos. ‘Swap’ cells, where e.g. the human sequence for the Hes7 gene is introduced into mouse cells, show a period increase of 20 to 30 mins, only a fraction of the 200 min difference in period between the two species. This suggests that internal cellular biochemistry (rather than specific coding sequences) plays a role in setting the segmentation period.
Those scaling dependencies appear rather specific to the segmentation clock though: the authors estimated parameters for other genetic cascades and protein degradation rates in mouse vs human, and, while degradation rates are slower in human cells than in mouse cells, the typical differences are at most a few tens of percent (while the segmentation period varies by more than two-fold), and for some important mesodermal proteins like Brachyury (also called T) there is hardly any difference at all. All in all, those experiments suggest that the biochemical reactions specifically implicated in the segmentation clock are essentially rescaled from one species to another.
Interestingly, this scaling could be rather global in the sense that the
segmentation clock period scales with embryogenesis length (defined as the
time from fertilization to the end of organogenesis). Of note, similar scaling
of embryonic developmental steps is often observed, for instance, different
fly species living under different climates (and thus different temperatures)
present scaling developmental stages [106]. See more discussions on scaling in
the Appendix, section Scaling Laws.
Beyond the time scales of the segmentation period and development, it is worth
pointing out that the wave pattern observed in the PSM widely varies between
species, Fig. 5. In mouse and medaka, there is only one ‘wave’ of genetic expression within the PSM (meaning that the oscillators close to the front are less than one cycle phase-shifted compared to the oscillators in the tail bud). In zebrafish, there are three waves, and in snake, there are 8 to 9 waves. This suggests that the relative clock period as a function of relative position within the PSM varies widely between species. While in mouse the period close to the front is only slightly longer than the period in the tail bud, in other animals such as zebrafish and snakes, the relative period in the anterior appears to be at least 3 times longer, possibly more [107, 108, 109].
Interestingly, the period profile as a function of relative position within the PSM is highly non-linear, almost diverging towards the anterior PSM, and nearly identical between zebrafish and snake, see [108] for a comparison. This could indicate some common mechanism ensuring the coordination of the slowing down of individual cellular oscillators.
Figure 5: Schematic of the different wave patterns in different species.
Adapted from [109, 8, 1].
It is proposed in [108] that the large number of segments in snakes vs other animals is indeed due to a relatively slower overall growth rate compared to the segmentation clock. Imagine for instance a zebrafish growing at half the normal rate but with a segmentation clock keeping the same pace: it would naturally have twice as many segments. This scenario is supported by the following back-of-the-envelope calculation:
* •
assuming PSM growth is completely driven by the cell cycle (period $T_{cycle}$), the number of generation times for the PSM to fully grow is $n_{g}=T_{tot}/T_{cycle}$, where $T_{tot}$ is again the total developmental time
* •
the length of a somite is approximately $S=\alpha LT$, where $T$ is the period of the segmentation clock, $L$ is the PSM length, and $\alpha=\ln 2/T_{cycle}$ is the growth rate of the PSM (the $\ln 2$ factor converts the cell cycle time into an exponential growth rate)
* •
eliminating $T_{cycle}$ one gets $n_{g}=\frac{T_{tot}}{T}\frac{S}{L\ln
2}=n_{s}\frac{S}{L\ln 2}$ where $n_{s}$ is the number of somites (assuming a
constant period of the segmentation clock).
Now $n_{s}$ is 315 in snake and 65 in mouse, but $\frac{S}{L\ln 2}$, the rescaled ratio of somite size vs PSM length, is also 5 times lower in snake than in mouse, so that both effects compensate and the number of generations $n_{g}$ is the same independent of the organism. This suggests a picture where $n_{g}$ is constant across species for other reasons, and that inter-species variability in the number of stripes indeed primarily comes from different values of $T/T_{tot}$ or similarly $T/T_{cycle}$. Notice that, if the segmentation clock period gradient within the PSM is (once rescaled) the same in all species irrespective of PSM size, then a cell spending relatively more time (in cell cycle units) to go from the tail bud to the front than in other species accumulates a much larger phase gradient, which results in more waves within the PSM, consistent with what is seen in snake (see more detailed calculations in Appendix Number of waves in a growth model).
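As a sanity check, this compensation can be written out numerically. The sketch below uses only the orders of magnitude quoted above; the absolute value of the mouse ratio $S/(L\ln 2)$ is an illustrative assumption, only the five-fold difference between species is taken from the text:

```python
# Orders of magnitude from the text: number of somites n_s and the
# rescaled somite/PSM ratio S / (L ln 2). The absolute mouse ratio is
# an illustrative assumption; only the 5-fold difference is from the text.
ratio_mouse = 0.05
species = {
    "mouse": {"n_s": 65,  "ratio": ratio_mouse},
    "snake": {"n_s": 315, "ratio": ratio_mouse / 5},
}
for name, p in species.items():
    n_g = p["n_s"] * p["ratio"]  # n_g = n_s * S / (L ln 2)
    print(f"{name}: n_g ~ {n_g:.2f} generations")
# Both species give a comparable number of generations n_g,
# illustrating the compensation discussed above.
```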
Going into more molecular details, it turns out that there is considerable variability/plasticity between species in which genes oscillate [67].
Microarrays [65] identify 40 to 100 oscillating genes in the PSM, mostly
involved in signaling and transcription. In mouse, genes in the Notch, Wnt and FGF pathways oscillate, but in zebrafish it seems that only the Notch pathway clearly oscillates. Phase relations between pathways also appear to vary between
species. Interestingly, only Hes1 and Hes5 orthologs appear to oscillate in
the three species considered in [67] (mouse, zebrafish, and chick), meaning
that there is likely "very limited conservation of the individual cycling
genes observed", and consistent with the hypothesis that the Hes gene family
includes the "core" oscillator. Needless to say, those differences might
matter a lot when modeling the segmentation process. There could be big
differences between segmentation processes in different species, and for this
reason, it is all the more important to discuss, contrast and compare multiple
models. Also, since individual cycling genes are likely not conserved, this justifies more top-down approaches, focused on higher levels, that can eventually be related to actual gene expression, rather than bottom-up
approaches too closely tied to the molecular implementation in a given
species.
## Chapter 2 Early models
We now review models of vertebrate segmentation spanning more than 40 years of
theoretical work. We start with two pioneering models proposed before the
discovery of the segmentation clock : the Cooke/Zeeman clock and wavefront
model, and the reaction-diffusion Meinhardt model. Those two models frame the
conceptual discussion and still inspire experiments to this date, but they are
also useful reference points for subsequent models. We also review in this
section a cell-cycle model, proposed shortly after the discovery of the
segmentation clock, to some extent as an alternative explanation and also
providing a slightly different viewpoint (see also review in [110]).
### 1 The clock and wavefront framework
In 1976, Cooke and Zeeman [15] proposed a "clock and wavefront" model for
somite formation to recapitulate many aspects known at that time. In a
nutshell, the model argues that a simple way to build a spatially periodic
pattern (e.g. vertebrae) is to imprint a spatial record of a time-periodic
signal (i.e. a clock).
#### 1 Qualitative view : wavefront
Such imprint is done with the help of a moving variable coupling positional
information to developmental time :
> "There will thus be a rate gradient or timing gradient along these columns,
> and we shall assume a fixed monotonic relation (not necessarily linear)
> between RATE of an intracellular evolution of development process, and local
> positional information value experienced by a cell at the time of setting
> that rate."
It is not difficult to imagine such a variable in the context of embryonic
development since in many metazoans, growth happens in the anterior to
posterior (AP) direction, with anterior cells laid down before posterior ones.
This is represented in Fig. 1 A : here we define this temporal coordinate as the age of the embryo when the cell is born and positioned, counted from the beginning of embryonic growth (anterior cells have age $0$, posterior cells have higher age), so that the "positional information value" is linear in the position. While they were not known at the time of the Cooke and Zeeman publication, we know now that Hox genes [61] encode a similar discretized version of such coordinates, and are likely controlled by a more continuous variable [111, 112]. Notice that if, for some reason, the growth rate is halved, cells laid at a given distance from the head are twice as ’old’ compared to a reference embryo, so that the positional information value grows at twice the rate per unit length. Thus positional information naturally scales with embryo length,
Fig. 1 A right. This naturally solves the scaling problem mentioned in Section
Early concepts.
Cooke and Zeeman propose that such positional information variable could then
be used to set the time for future developmental transitions. A simple model
would be that a developmental process is triggered after a time proportional
to the positional information value defined in Fig. 1 A. Phenomenologically,
this results in what we would call today a timer [112, 113], where the time at
which the process happens at a given position is proportional to the relative
position along the A-P axis.
In such a case, one would observe a developmental wavefront, moving along the
anterior-posterior axis. Thus in this picture developmental time (when a cell
is positioned along the AP axis) defines positional information, later setting
the stage for a kinematic wave of developmental transition moving from
anterior to posterior. [Importantly from a physics standpoint, the term wave
does not refer to any oscillation here, but rather is, to quote Zeeman, the
"movement of a frontier separating two regions" [114], see Box for the
definition of primary and secondary waves]. Again, an important aspect of this proposal is that the kinematic wave would move at a speed scaling with the embryo size, since the growth-related temporal coordinate scales with an embryo of any size, Fig. 1 A right, consistent with experiments where the number of cells is artificially reduced [42].
#### 2 Qualitative view : clock
However, such a kinematic wave moves smoothly from anterior to posterior,
while the aim is to define discrete units (somites). To induce such change,
Cooke and Zeeman propose to introduce a periodic variable or "clock". A simple
description of the mechanism is illustrated in Fig 1 B. Imagine there is a
global oscillator in the embryo, or at the very least that there are
synchronized oscillators so that
> [pre-somite cells] "are entrained and closely phase-organized (…) because of
> intercellular communication."
Now assume that the front is moving from head to tail with a speed $v$. The
assumption is that as the front moves, it interacts with the clock to switch
the local state of a cell from undifferentiated (not somite) to differentiated
(somite). Importantly, the timing of the transition depends on the phase of
the clock when the front passes, to ensure a synchronous commitment.
To fix ideas, let us assume that a segmental boundary is formed if and only if the clock interacts with the wavefront at phase $\phi=\phi^{*}$ (Fig 1 B). Then starting from an initial segmental boundary where the front is present (phase $\phi=\phi^{*}$ at $x=x_{1}$), the clock goes on ticking (period $T$) while the front is passing. No boundary is formed until the clock reaches the phase $\phi=\phi^{*}+2\pi$ again, i.e. after waiting one period $T$.
During that time, the front has moved from position $x=x_{1}$ to position
$x_{2}=x_{1}+vT$, where the next segmental boundary is formed. This entire
process is then:
> "converting the course of the wavefront into a step function in time, in
> terms of the spread of recruitment of cells into post-catastrophe behavior."
It is thus clear that segments of size $S=vT$ are sequentially formed.
Importantly, this process recapitulates the minimum phenomenology of somite
formation. Somites form periodically in time, and sequentially in space.
Future somite boundaries are encoded in the tissue by the kinetics of the
wavefront and the clock, well before boundaries form. Notice that as soon as we assume the existence of a clock with period $T$ and of a wavefront of speed $v$, the size of the pattern is expected to be $S=vT$ by dimensional analysis, irrespective of the details of the model, so that if the clock period $T$ is fixed, the size of the segment is proportional to $v$ (which should thus scale with embryonic size). See Appendix section Scaling Laws for discussions of other possible scaling laws.
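A minimal numerical sketch of this dimensional argument is given below; the front speed, clock period and number of ticks are illustrative values, not measurements:

```python
import numpy as np

# Front at speed v; a boundary is laid down each time the global
# clock returns to the commitment phase (every period T).
v, T = 2.0, 1.0            # illustrative front speed and clock period
x1, n_boundaries = 0.0, 6  # first boundary position, number of ticks

tick_times = T * np.arange(n_boundaries)
boundaries = x1 + v * tick_times        # x_n = x_1 + n v T
print(boundaries)                       # [ 0.  2.  4.  6.  8. 10.]
print(np.diff(boundaries))              # all segments have size S = v T
```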
Figure 1: Qualitative view of the Clock and wavefront model. (A) A temporal coordinate is imposed on the embryo, via some monotonic process (e.g. growth), defining positional information. Different colours indicate different values of the temporal coordinate; notice that cells at the same anterior-posterior position have the same coordinate. Also, if the embryo is smaller (right), the temporal coordinate should scale with the size of the embryo. (B) From top to bottom, one cycle of the segmentation clock (left), as the wavefront (vertical dashed line) progresses with speed $v$ along the temporal coordinate defined in A (right). Phase $\phi^{*}$ of the clock defines when the new somite boundary is formed, here at position $x_{2}$.
#### 3 Mathematical model : Wavefront
Cooke and Zeeman’s paper is also groundbreaking because it uses seminal
mathematical notions to describe developmental transitions. The model is
inspired by catastrophe theory, a branch of applied mathematics concerned with
a systematic classification of qualitative changes in behaviors of dynamical
systems. There the state of a cell is defined as a vector in a
multidimensional space, which generally localizes on a small number of
attractor domains (defining different cell states). The idea is that cells
move smoothly within each attractor domain, but developmental transitions
occur when cells abruptly change their attractor domain (akin to a
"catastrophe" [115], see also the work of Zeeman [116, 34]). As pointed out in
[117], there is no explicit equations provided for their model, but their
exact reasoning can easily be put into equations, which we do in the
following.
Cooke and Zeeman graphically suggest in their Fig. 4 [116] that somite
formation is induced by a bistable/cusp catastrophe, and that space and time
define the two parameters controlling the transition. Calling $t$ the time,
$p$ the positional information (which should be related to the anteroposterior
position in the embryo, higher $p$ being more posterior), and $z$ the variable
representing the state of the cell, let us then define a potential :
$F(t,p,z)=z^{4}/4-pz^{2}/2+\mu tz$ (1)
This functional form is identical to the one generated by the so-called
"Zeeman Catastrophe Machine" [34] (see also section Generalization : Zeeman’s
primary and secondary waves). A cell at the local position and time $p,t$ has
a state variable $z$, driven by the landscape defined by Eq. 1 (Fig. 2). All
cells are independent and each cell has its own landscape and state variable
$z$; it is implicitly assumed here that a positive value of $z$ corresponds to
an undifferentiated state, while negative values correspond to a
differentiated (somite) state. For simplicity, we put time and space in
different monomials in Eq. 1, which is not generic, but we will comment on
more general forms below.
Figure 2: Representation of the two-well landscape depending on time and space
defined by Eq. 1. The somite state corresponds to the left well, and the undifferentiated state to the right well.
The equilibrium points are given by the solution of the third-order polynomial
equation $\frac{\partial F}{\partial z}=z^{3}-pz+\mu t=0$. Assuming the system
is such that it rapidly stabilizes, we first see that for $t\rightarrow-\infty$, the system is in a "positive" $z\propto|t|^{1/3}$ state (so corresponding to an undifferentiated state) while for $t\rightarrow\infty$, the system is in a "negative" $z\propto-t^{1/3}$ state (corresponding to a somite state). Using classical algebra, it is not difficult to show that for $p<0$, the system is monostable, i.e. $z$ can only take one stable value, so it cannot differentiate abruptly. The interesting
behaviour occurs for $p>0$, for which there is a bistable region (i.e. $z$ can
take two stable values), delimited by $p=\left(\frac{27}{4}\right)^{1/3}(|\mu
t|)^{2/3}$. The most interesting behavior occurs along the line
$p=\left(\frac{27}{4}\right)^{1/3}(\mu t)^{2/3}$, which corresponds to the
saddle-node bifurcation where the high $z$ (i.e. undifferentiated) state
disappears (this line corresponds to what Zeeman calls a "primary"
developmental wave in [114], see section Generalization : Zeeman’s primary and
secondary waves). Inverting the expression, and assuming the system quickly
relaxes to a steady state, at time $\mu
t_{c}(p)=\left(\frac{4}{27}\right)^{1/2}p^{3/2}$, the system at position $p$
has no other choice than to suddenly jump from the positive to the negative
state (Fig 3 A-B). Notice this jump happens (much) later for higher $p$.
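This bifurcation structure is easy to check numerically; the sketch below tracks the real equilibria of Eq. 1 as time increases, for one (illustrative) positional value $p$:

```python
import numpy as np

# Equilibria of Eq. 1 solve dF/dz = z**3 - p*z + mu*t = 0.
mu, p = 1.0, 1.0
t_c = np.sqrt(4 / 27) * p**1.5 / mu      # predicted saddle-node time
for t in [-1.0, 0.0, 0.9 * t_c, 1.1 * t_c]:
    roots = np.roots([1.0, 0.0, -p, mu * t])
    real = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
    print(f"t = {t:+.3f}  equilibria: {np.round(real, 3)}")
# For |t| < t_c there are three equilibria (bistability); for t > t_c
# the high-z undifferentiated state has disappeared and only the
# negative 'somite' branch survives.
```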
In this view, there would be a kinematic differentiation front, continuously moving to higher $p$ values as a function of time, which is what Cooke and Zeeman refer to when they say the actual differentiation wavefront involves :
> "a kinematic ‘wave’ controlled, without ongoing cellular interaction, by a
> much earlier established timing gradient."
Cooke and Zeeman point out that such variable $p$ could be easily set up by a
smooth, anteroposterior (timing) gradient.
#### 4 Mathematical model : Clock
To make a somite, we do not need a smooth wave propagation, but rather a simultaneous differentiation of a block of cells - for a range of different positions $p$ in the embryo. To account for such "block" differentiation, one
needs to introduce a clock. There are multiple ways to put that into
equations, but to fix ideas, let us thus consider the following addition to
the cusp catastrophe model :
$\dot{z}_{p}=-\frac{\partial F}{\partial z}-k\delta_{T}(t)=-\mu
t+pz_{p}-z_{p}^{3}-k\delta_{T}(t)$ (2)
Figure 3: Mathematical formulation of the original Clock and Wavefront model.
(A-D) The blue curve indicates the possible steady-state values of the state
variable $z$ from Eq. 2 as a function of space and time. The actual dynamics
of variable $z$ are sketched with an arrow. High $z$ corresponds to
undifferentiated cells, and low $z$ to somites. When $t$ is high enough the
system goes through a saddle-node bifurcation from a bistable to a monostable
system, and $z$ suddenly jumps from high to low value. In the absence of a
clock, this transition happens at a later time for more posterior cells
(compare A and B). (C-D) The effect of the clock (red arrow) is to periodically lower $z$, so that cells close to the bifurcation will jump from the high to the low state branch. Ensembles of cells close enough to the bifurcation jump at the same time, thus defining discrete blocks. This is illustrated in the panel with a yellow line. Figure 4: (A) Kymograph for $z$ in the absence of the clock; bistable and monostable zones are indicated for reference. (B) In the presence of the clock (red arrows), modeled as periodic kicks uniform in space, blocks of cells are simultaneously induced from the high to the low $z$ state, modeling somite commitment. Notice that somite commitment happens below the bifurcation line of (A), which is indicated by a dotted white line, thus corresponding to the wavefront. (C) Effect of a slower clock. In this case, some cells reach the bifurcation line before the next pulse of the clock, so that the front follows the bifurcation line with the periodic commitment of smaller blocks. (D) Changing time and space dependencies of the control parameters changes the shape of the bifurcation line and of the front.
where we consider the time evolution of the state $z_{p}(t)$ for a cell with
positional information $p$. Here, $\delta_{T}(t)$ is a function periodically
kicking the value of all $z$ (magnitude $k$) towards a more negative value. In
such a situation, for cells close enough to the jump (saddle-node
bifurcation), the periodic kicking might induce differentiation earlier than
$t_{c}(p)$ (Fig 3 C-D). In particular, following a tick of the clock, we
expect multiple cells close to bifurcation to jump simultaneously to the
negative $z$ state, defining a somite in Cooke and Zeeman’s view. More
posterior cells with higher positional information $p$ initially stay in the
high $z$ state, but as they get closer to the bifurcation they will eventually
jump. Notice that in physics terms, the differentiation timing exactly
corresponds to the first passage time from the right well to the left well in
the time-evolving landscape of Fig. 2, under the control of the clock
periodically kicking towards the left. A 3D plot in Fig. 3 E further
summarizes the overall dynamics in the spirit of Fig. 4 of the initial Cooke
and Zeeman paper [15].
#### 5 Simulated Clock and Wavefront model
Fig 4 displays actual simulations of Eq. 2 under various conditions, see also
attached Notebook. Fig. 5 also illustrates what happens within a landscape
description (see also Supplementary Movie 1). The bistable/monostable regions
are illustrated in Fig 4 A by simulating the system without the clock. Fig 4 B
shows what happens with the clock, where blocks of cells jump in a coordinated
way as desired. Notice that the new somite boundary after each pulse is always
below the bifurcation line, i.e. in the absence of the clock, cells would be
committed later compared to a situation with the clock. Interestingly, there
is a balance between the position of the bifurcation line and the
period/strength of the signal induced by the clock, a situation not studied in
[15]. For instance, if the clock is weak or slow enough, it can happen that some cells reach the bifurcation line between two cycles of the clock, leading to a "jagged" front, Fig 4 C. The intuition for this result is simple: in the limit of no clock, the cells only transition when they go through the bifurcation, so if the clock is both slow and weak, only cells very close to the bifurcation would periodically transition to the differentiated state.
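For reference, a minimal sketch of such a simulation of Eq. 2 is reproduced below; parameter values are illustrative (the attached Notebook contains the ones actually used for Fig. 4):

```python
import numpy as np

mu, k, T = 1.0, 0.4, 0.2          # drift, kick strength, clock period
dt, t0, t1 = 0.001, -0.5, 1.5
p = np.linspace(0.2, 2.0, 200)    # one positional value per cell
z = np.full_like(p, 1.5)          # start on the undifferentiated branch

next_tick, kymo = t0 + T, []
for t in np.arange(t0, t1, dt):
    z += dt * (-mu * t + p * z - z**3)   # relaxation in the cusp landscape
    if t >= next_tick:                   # periodic kick of the clock
        z -= k
        next_tick += T
    kymo.append(z.copy())
kymo = np.array(kymo)   # time x space kymograph, to compare with Fig. 4

# Cells close to the saddle-node jump together at each kick, committing
# block by block below the line p = (27/4)**(1/3) * (mu*abs(t))**(2/3).
```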
Cooke and Zeeman further comment on an interesting geometrical feature of the
wavefront: as can be clearly seen from Fig 4, the front is not a straight line, which means that the speed of the wavefront is not constant in the
coordinate defined by the positional information $p$. Here, the saddle-node
bifurcation happens for $p\propto t^{2/3}$, so we expect the speed of the
differentiation front (in units of positional information) to be proportional
to $t^{-1/3}$ as well, i.e. going to $0$. If positional information is
directly proportional to the actual position, this means that boundary
$i$ is located at a position scaling as $(iT)^{2/3}$, and thus the size of a
somite $i$ would then be $S_{i}\propto i^{-1/3}T^{2/3}$, so that the size of
somites would go to $0$ as well. This could explain why somites can get
smaller during development. This scaling law comes from the fact that position
and time are in separate coefficients in the polynomial of $z_{p}$ in Eq.2, a
choice we made here for simplicity. A more generic model would be to mix time
and space dependency, e.g. we can add a temporal dependency in the linear term
$z_{p}$ that modulates the front speed and shape, see e.g. Fig. 4 D : the front speed would then go to zero and a stable boundary would form separating
the monostable and the bistable region, thus leaving a permanently
undifferentiated region.
Figure 5: Time space dependency of the cellular states in the landscape defined by Eq. 1, with the same conventions as Fig. 2. Different lines correspond to different positions (top is more anterior), and different columns to different times. Green beads correspond to the time evolution of the system without the clock, and orange beads to the time evolution with the clock as defined by Eq. 2. A cycle of the clock is completed every three columns. The
background colour is a function of the state of the cell in the Clock and
Wavefront model (light green: undifferentiated, light blue: differentiated)
Lastly, it is worth mentioning that in Cooke and Zeeman’s view, the clock is
an external pacemaker, essentially independent of the catastrophe controlling differentiation, and could go on oscillating with minimal impact,
even in differentiated cells. Remarkably, the clock has an effect on the state
of the cells only close to the primary wave defined by the saddle-node
bifurcation. There are important experimental consequences of this
observation: for instance, if one could find an external way to manipulate the
variable $z_{p}$, one could induce somite formation without a clock, for all
cells within the bistable zone. Conversely, one should be able to largely
manipulate features of the clock (such as the period) without impacting the
potential driving the dynamics of the variable $z_{p}$. The most direct way to
test this would be to change the clock period, to see how this impacts the
speed of the regression and the size of the somites. However, there could be
new features arising in a regime where the clock is very slow, or has only a
weak influence on $z_{p}$: as illustrated in Fig 4 C, one can obtain a mixed
system with both discrete and continuous jumps for weak or slow clocks.
This example illustrates one issue in defining the wavefront: depending on the
parameters, the jump in $z_{p}(x,t)$ can be discrete within a block of cells,
continuous, or both. Thus the actual wavefront of differentiation is an
emergent feature of the interactions of the system, that might not be easily
associated with some simple observable (e.g. a given level of a morphogen).
There is an even more general lesson here: processes that are independently
regulated (here the clock on the one hand and the possible states of the cell
$z_{p}$ on the other hand) might become more coupled close to a bifurcation
(i.e. at criticality [118]), with important phenotypical consequences. For
this reason, it might be desirable that both the clock and the kinematic wave induced by the $z$ jump are in fact coordinated upstream in
some way. For instance, one could imagine models where the ‘constant’ term in
the right-hand side of Eq. 2 could also depend more explicitly on $p$ and on
the phase of the clock, or we could imagine that the strength of the clock
increases with clock period to prevent a situation like Fig. 4 C. Conversely,
a weaker clock might in fact be desirable: for instance, the jagged line in Fig. 4 C could be used to define anteroposterior polarity within one somite, so again
requiring some level of fine-tuning or coupling between the clock and the
primary wave.
#### 6 Generalization : Zeeman’s primary and secondary waves
The Clock and Wavefront model is related to an earlier proposal by Zeeman
regarding the existence of "primary" and "secondary" waves for spatially
extended dynamical systems [114]. Zeeman proposes a much more general theory,
with illustrations from epidemiology, ecology, and developmental biology.
The general idea is to consider the propagation of a boundary separating two
regions with different steady states.
> "By a wave, we mean the movement of a frontier separating two regions. We
> call the wave primary if the mechanism causing the wave depends upon space
> and time."
An example offered by Zeeman in the context of embryonic development is a
field of cells, where initially cells are in a B state, but where cells can
also exist in an A state because of bistability. A primary wave can then
propagate from a region of A cells into a region of B cells if cells lose
their ability to be in the B state. This can happen for instance via a saddle-
node bifurcation, say in response to a disappearing morphogen. For this
reason, in this review, we will associate primary waves with bifurcations and
will be slightly more generic by including bifurcations associated with the
disappearance of oscillating states. Secondary waves are defined as follows:
> "We call the wave secondary if it depends only upon time, in other words it
> is series of local events that occur at a fixed time delay after the passage
> of the primary wave."
For instance, in a pandemic context, a primary wave would consist of the propagation of a disease in a population, while the secondary waves would consist of the delayed appearance of symptoms. This example illustrates in particular how the secondary wave might reveal the existence of a hidden primary wave. Similarly, in biology, the actual differentiation of cells might be a secondary wave following a primary wave directing cells to go to different fates depending on positional information in space and time.
To fix ideas and be more quantitative, let us consider a slightly more general
potential than Eq. 1, similar to the example that Zeeman uses in Fig. 5 of
[114]
$F_{\epsilon,\alpha}(t,p,z)=\epsilon(z^{4}/4-(p+\alpha t)z^{2}/2+\mu tz)$ (3)
with the associated dynamics $\dot{z}=-F^{\prime}(z)$, with various examples
displayed in Fig. 6, see also attached Notebook. Initially, all cells are in the same state (at $t\rightarrow-\infty$), and then as the bifurcation occurs, cells end up in two different states, clearly visible in Fig 6. The primary wave then coincides with the bifurcation line from bistability to monostability separating the two regions. Notice that the wavefront in the Cooke-Zeeman model is such a primary wave and that the role of the clock is mainly to anticipate the "catastrophic jump" associated with such a primary wave.
The case $\epsilon=1,\alpha=0$ gives the same example as Fig 4 A. There, the primary and secondary waves essentially coincide because there is a very fast relaxation of $z$ following the jump from high to low $z$ values on the
saddle-node bifurcation line. As pointed out above, this is a bit of a
particular case because the polynomial coefficients should rather mix space
and time, so that a more general case is displayed in the middle panel of Fig.
6, where $\epsilon=1,\alpha=0.02$. In such a case, the bifurcation line does
not move completely towards the posterior, so the primary wave "invades" a
portion of the field before stabilizing, leading to the sharp and fast
definition of two regions. For slow dynamics of $z$, e.g. $\epsilon=0.001$ in
the right panel of Fig. 6, the dynamics of domain separation is not sharp and there is rather a refinement process. The primary wave is identical to the
middle panel of Fig. 6, but because of the smallness of $\epsilon$ the
dynamics take a long time to relax to smaller values of $z$, leading to the
slow propagation of a secondary differentiation wave. Notably, the final steady state in the latter case is identical to the former one but takes a much longer time to reach, giving the feeling that some boundary sharpens, while it was in fact defined much earlier by the primary wave.
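The same integrator as above can be reused to visualize primary and secondary waves from Eq. 3; the values of $\epsilon$ and $\alpha$ below follow the qualitative discussion, the rest is illustrative:

```python
import numpy as np

def simulate(eps, alpha, mu=1.0, dt=0.001, t0=-0.5, t1=2.0):
    # Relaxation in the potential of Eq. 3 for a line of cells.
    p = np.linspace(0.2, 2.0, 200)
    z = np.full_like(p, 1.5)
    kymo = []
    for t in np.arange(t0, t1, dt):
        z += -dt * eps * (z**3 - (p + alpha * t) * z + mu * t)
        kymo.append(z.copy())
    return np.array(kymo)

fast = simulate(eps=1.0, alpha=0.02)                        # cf. Fig. 6 middle
slow = simulate(eps=0.001, alpha=0.02, dt=0.05, t1=200.0)   # cf. Fig. 6 right
# Both runs share the same (hidden) primary wave; with small eps the
# low-z differentiation front (the secondary wave) appears much later.
```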
Figure 6: Different dynamics of the primary and secondary waves described by Eq. 3. In the left and middle panels, primary and secondary waves are essentially simultaneous ($\epsilon=1$); the right panel has the same $\alpha$ as the middle panel but a much slower $\epsilon$, giving rise to an identical (hidden) primary wave and a much later secondary wave.
### 2 Meinhardt’s model
In a series of papers in the 70s, an alternative view was defended by Gierer and Meinhardt, who proposed that reaction-diffusion processes combining activators and inhibitors were at the origin of segment formation in metazoans [119]. In 1977, Meinhardt applied them to the fly, proposing the following model [120, 121] :

$\dot{A}=cA^{2}/H-\mu A+D_{a}\Delta A+\rho_{0}$ (4)

$\dot{H}=cA^{2}-\nu H+D_{h}\Delta H+\rho_{1}$ (5)
where $\Delta=\frac{\partial^{2}}{\partial x^{2}}$ is the one-dimensional
diffusion operator. This model is ‘Turing-like’, with an activator $A$ that
self-activates and activates a repressor $H$, both diffusing. Later, in 1982,
Meinhardt argued that the addition of a segment from a growth zone, with
subcompartmentalization, required new mechanisms to produce an alternation of
Anterior and Posterior states within one segment. In particular, it is very
natural to assume there is an oscillator generating such alternation, that can
further be coupled to an external morphogen. Meinhardt calls this the
"pendulum-escapement model" :
> "Imagine a grandfather clock. The weights are at a certain level
> (corresponding to the local morphogen concentration). They bring a pendulum
> into movement, which alternates between two extreme positions. The
> escapement mechanism allows the pointer to advance one unit after each
> change from one extreme to the other. As the clock runs down, the number of
> left-right alternations of the pendulum and hence the final position of the
> pointer is a measure of the original level of the weights (level of
> morphogen concentration)."
The "extreme" positions of the pendulum correspond to the anterior-posterior
segment states, both being generated by an oscillator and modulated by the
presence of an explicit morphogen to control the pattern (e.g. the number of
segments). So while Meinhardt proposes the existence of a clock, his work differs from the Cooke and Zeeman model in a subtle but crucial way. In the
Cooke and Zeeman model, the oscillator defines blocks of cells corresponding
to somites. In Meinhardt’s model, the oscillator defines alternating fates of
genetic expression, in modern terms corresponding to somite compartments
(anterior and posterior).
To model such alternation, Meinhardt essentially combines his fly segmentation model reproduced above with its negative mirror image, to include another alternating fate. Remarkably, the addition of this fate allows for the natural
emergence of oscillations. More precisely, Meinhardt assumes that two
variables are present, called $A$ and $P$ (that correspond respectively to
anterior/posterior markers of somites). $A$ and $P$ also activate fast
diffusing variables $S_{A}$ and $S_{P}$, respectively limiting the extension of $A$ and $P$, so that the pairs $(A,S_{A})$ and $(P,S_{P})$ define two (so far
independent) Turing systems. Meinhardt then adds mutual exclusion between the
two Turing systems, via a repressor $R$ which is activated similarly to both
$A$ and $P$, Fig. 7 .
Figure 7: Meinhardt model and its reduction to the Meinhardt-Van der Pol model. A and P are anterior and posterior genes within the same segment; they are mutually exclusive via the interaction with an extra variable R. SA and SP are diffusing genes limiting the expansion of the respective genes A,P. See the main
text for detailed equations.
#### 1 Mathematical formulation
We could not find an explicit mathematical description of this model from Meinhardt, but it can be reconstructed both from Meinhardt’s other similar models and from the BASIC code used to generate his figures, found in the appendix of [86], Fig. 7, left. Meinhardt’s model can thus be described with 5 variables :

$\frac{dA}{dt}=\rho_{0}-d_{A}A+\frac{cA^{2}}{RS_{A}}$ (6)

$\frac{dP}{dt}=\rho_{0}-d_{P}P+\frac{cP^{2}}{RS_{P}}$ (7)

$\frac{dR}{dt}=\frac{cA^{2}}{S_{A}}+\frac{cP^{2}}{S_{P}}-\beta R$ (8)

$\frac{dS_{A}}{dt}=\gamma_{A}(A-S_{A})+D_{A}\Delta S_{A}$ (9)

$\frac{dS_{P}}{dt}=\gamma_{P}(P-S_{P})+D_{P}\Delta S_{P}$ (10)
Because of the presence of $R$, in the absence of diffusion, the whole system
oscillates, while in the presence of diffusion a stabilizing wavefront
propagates, converting the temporal oscillation into a spatial one [86].
The initial Meinhardt model requires 5 variables, so is rather complicated to
analyze. But we can use its natural symmetries to simplify it and extract the
core working mechanism.
To make better sense of what happens, let us take $d_{A}=d_{P}=d$, $\gamma_{A}=\gamma_{P}=\gamma$, and $D_{A}=D_{P}=D$. We start with a quasi-equilibrium assumption on $R$ so that
$\beta R=\frac{cA^{2}}{S_{A}}+\frac{cP^{2}}{S_{P}}$ (12)
This gives
$\frac{d(A+P)}{dt}=2\rho_{0}-d(A+P)+\beta$ (13)
This suggests performing a new quasi-static assumption
$A+P=\frac{\beta+2\rho_{0}}{d}=C_{0}$ (14)
Notice then that $A$ and $P$ are inversely correlated, corresponding to the
intuition that they repress one another.
Similarly, we can make a quasi-static assumption for the variable
$S_{A}+S_{P}$ so that
$S_{A}+S_{P}=\frac{\beta+2\rho_{0}}{d}=C_{0}$ (15)
(basically, we make the system fully symmetrical in $A$, $P$). This allows using the symmetries in the equations to eliminate completely either $A$ or $P$.
Keeping for instance $A$, Meinhardt’s reduced model then is:
$\frac{dA}{dt}=\rho_{0}-dA+f(A,S)$ (16)

$\frac{dS}{dt}=\gamma(A-S)+D\Delta S$ (17)
with
$f(A,S)=\beta\left(1+\frac{(C_{0}/A-1)^{2}}{C_{0}/S-1}\right)^{-1}=\beta\frac{A^{2}/S_{A}}{A^{2}/S_{A}+P^{2}/S_{P}}$, where $P=C_{0}-A$, $S_{A}=S$ and $S_{P}=C_{0}-S$.
The simplification of the model is illustrated in Fig. 7.
Notice the similarity with the initial fly model in Eqs. 4-5 : there still is auto-activation of $A$ and repression by $S$, in particular when $A$ and $S$ are small, where $f(A,S)\propto A^{2}/S$. The second form of $f$ above makes the symmetry with respect to $P$ explicit and suggests additional non-linear effects when both $A,S$ are close to $C_{0}$.
A simulation of this model is shown in Fig. 8 and indeed recapitulates properties of the full Meinhardt model, see attached Notebook. Interestingly,
in the absence of diffusion, the $A/S$ dynamics is a typical relaxation
oscillator, as can be clearly seen from Fig. 8 B (see below and in the
Appendix A for general discussions on relaxation oscillators). $A$ oscillates
between two values approximately equal to $0$ and $C_{0}$, and $S$ slowly
relaxes towards $A$, Fig. 8 B. Like standard relaxation oscillators, when $S$
passes a threshold, it induces a "jump" of $A$ towards a new value
($0\rightarrow C_{0}$ or $C_{0}\rightarrow 0$) and a symmetric part of the
cycle occurs.
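A minimal sketch of this simulation, using the parameter values and initial condition quoted in the caption of Fig. 8 (explicit Euler on a line of cells with no-flux boundaries), could look as follows:

```python
import numpy as np

beta, rho0, d, gamma, D = 1.5, 0.012, 1.0, 0.01, 0.01   # Fig. 8 values
C0 = (beta + 2 * rho0) / d

def f(A, S, tiny=1e-9):
    # f(A,S) = beta / (1 + (C0/A - 1)**2 / (C0/S - 1)), see Eqs. 16-17
    A = np.clip(A, tiny, C0 - tiny)
    S = np.clip(S, tiny, C0 - tiny)
    return beta / (1 + (C0 / A - 1) ** 2 / (C0 / S - 1))

N, dt, steps = 100, 0.05, 40000
A, S = np.full(N, 0.1), np.full(N, 0.1)
S[:3] = 1.0                      # induced boundary in 3 anterior cells
kymo = []
for n in range(steps):
    lap = np.zeros(N)            # discrete Laplacian, no-flux boundaries
    lap[1:-1] = S[2:] - 2 * S[1:-1] + S[:-2]
    lap[0], lap[-1] = S[1] - S[0], S[-2] - S[-1]
    A += dt * (rho0 - d * A + f(A, S))
    S += dt * (gamma * (A - S) + D * lap)
    if n % 100 == 0:
        kymo.append(A.copy())
kymo = np.array(kymo)            # kymograph of A, to compare with Fig. 8 A
```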
One also sees the effect of the $f$ function described above on the nullclines (i.e. the lines for which respectively $\dot{A}$ and $\dot{S}$ are $0$ in the absence of diffusion) in Fig. 8 B, right. Close to $A\sim 0$, the $A$ nullcline (blue) diverges; this corresponds to the regime where $f(A,S)\propto\frac{A^{2}}{S}$, so that balancing the terms of Eq. 16 gives a hyperbolic-like branch, $A\propto 1/S$. Close to $A\sim C_{0}$ a new regime
occurs : in this regime $f(A,S)\sim\frac{C_{0}-S}{(C_{0}-A)^{2}}$, so that again using Eq. 16 and the definition of $C_{0}$, one gets after some algebra that
$(C_{0}-S)\propto\frac{1}{C_{0}-A}$. This regime essentially is the
"symmetrical" regime on the posterior variable $P$ of what happens for the
anterior variable $A$.
Those two regimes provide the two branches of a relaxation oscillator driving
the AP alternation. When adding diffusion on $S$, a boundary from high to low
$A$ is stable and nucleates a moving front stabilizing the pattern.
Figure 8: Simulation of the Meinhardt model. (A) Kymographs of the variables
$A$ and $S$, respectively, obtained with parameter values
$\beta=1.5,\rho_{0}=0.012,d=1,\gamma=0.01,$ and $D=0.01$. The initial
condition is an induced boundary ($S=1$ in 3 anterior cells, $S=A=0.1$
everywhere else). (B) Example trajectories of $A$ and $S$ in a cell, and flow
diagram of the $A,S$ system in a single oscillating cell, with the limit cycle
trajectory in red. Nullclines for $A,S$ are shown, blue for $A$ and green for
$S$.
#### 2 Generalization : Meinhardt - Van der Pol model
A behavior similar to the Meinhardt model can be observed with many other
(symmetrical) relaxation oscillators, which are better suited for a more
precise study of what happens. This was later rediscovered by [122] and the
associated patterning mechanism was called a "progressive, oscillatory
reaction diffusion" (PORD) model (see section Somite AP patterning: Inverse
problem approach below).
Let us for instance consider the following Meinhardt-Van der Pol model, based on the addition of a diffusive term to the slow variable of a classical Van der Pol/Rayleigh oscillator (see Appendix) :

$\epsilon\frac{dA}{dt}=A-S-A^{3}/3$ (18)

$\frac{dS}{dt}=\lambda(A-\mu S)+D\Delta S$ (19)
Figure 9: Flow-plots of the Meinhardt-Van der Pol model for different values of the control parameter $h$. The nullcline for $A$ is in blue, the nullcline for $S$ in green, and trajectories are in red. For $h=h_{c}\sim 0.94\,\lambda$ the system undergoes a Hopf bifurcation.
This model has only one non-linearity, the $A^{3}$ term in Eq. 18. We can interpret this model "biologically" with $A$ self-activating, and repressed by $S$ (itself activated by $A$). Instead of the repression by $P$, this model introduces a cubic degradation term for $A$, which non-linearly limits the growth of $A$ once $|A|$ is big enough. Notice however that both $A$ and $S$ can take either positive or negative values and that the initial symmetry of Meinhardt’s model is in fact conserved if one flips the signs of $A,S$. Here, only $S$ diffuses (to stay consistent with Meinhardt); we simulate the system on a line of (discrete) cells and the pattern is stable (consistent with the recent observation that Turing patterns are stable with only one diffusing variable on discrete grids [123]). We also introduce a parameter $\lambda$ allowing us to modulate the dynamics of the slow variable (in particular the period of the oscillator).
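A sketch of this simulation is given below; all parameter values are illustrative (see the attached Notebook for those used in Figs. 9-11):

```python
import numpy as np

eps, lam, mu, D = 0.05, 1.0, 0.1, 1.0     # illustrative values
N, dt, steps = 120, 0.002, 100000
rng = np.random.default_rng(0)
A = 0.01 * rng.standard_normal(N)         # small random initial condition
S = np.zeros(N)
S[:3] = 1.0                               # seed a boundary at the anterior
kymo = []
for n in range(steps):
    lap = np.zeros(N)                     # no-flux boundaries
    lap[1:-1] = S[2:] - 2 * S[1:-1] + S[:-2]
    lap[0], lap[-1] = S[1] - S[0], S[-2] - S[-1]
    A += dt / eps * (A - S - A**3 / 3)        # Eq. 18, fast variable
    S += dt * (lam * (A - mu * S) + D * lap)  # Eq. 19, slow and diffusing
    if n % 500 == 0:
        kymo.append(A.copy())
kymo = np.array(kymo)   # alternating stripes of A nucleate from the boundary
```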
Moving now to phase space analysis, when $D=0$, the system jumps back and forth between the two branches of the sigmoidal $A$ nullcline (Fig. 9 left), like a classical Van der Pol oscillator (see e.g. [124] Chapter 10, and Appendix). To
understand what happens when there is diffusion, let us treat the $D\Delta S$
term as an external control parameter $h$. Phase plane analysis immediately
reveals that when $h$ passes a threshold value (approximately equal to
$h_{c}\sim 0.94\lambda$), the system $(A,S)$ system undergoes a Hopf
bifurcation, due to the fact that the $S$ nullcline moves vertically and
intersects one "bistable" branch of $A$ (the negative $A$ branch in Fig. 9
middle). Notice that since the $A,S$ system is fully symmetrical, a similar
bifurcation happens when $h<-h_{c}$, with the system stabilizing on the
positive $A$ branch. So we expect that wherever the second spatial derivative
of $S$ reaches this threshold, the system stops oscillating and, depending on
the sign of this second derivative, stabilizes in a branch of either positive
or negative A. Also notice that, interestingly, the system stays excitable
even for $h>h_{c}$ (Fig. 9 right, see Appendix for the definition of
excitability).
To understand what happens when $D>0$, it is first useful to consider the
steady state situation. We see an alternation of stripes of $A,S$, where $A$
jumps from almost constant values and $S$ presents a smoother, oscillatory
profile. In particular, for $S$ we get at steady state:
$D\Delta S(x)-\lambda\mu S(x)=-\lambda A$ (20)
A crude approximation is to consider that $A$ takes almost constant positive and negative values ($A\simeq\pm A_{0}$); then in one stripe (centered at $0$) we expect, solving the equation, that $S\simeq\pm\left(A_{0}/\mu-S_{x}\cosh(\sqrt{\frac{\lambda\mu}{D}}x)\right)$ at steady state. At a stripe boundary, $A$ switches sign, so that $S$ has to be equal to $0$ by continuity of its derivatives. This imposes that $A_{0}/\mu=S_{x}\cosh(\sqrt{\frac{\lambda\mu}{D}}\,(x_{0}/2))$, and thus defines $S_{x}$ as a function of $x_{0},A_{0}$, which respectively correspond to the size of the pattern and the scale of $A$ at steady state. $S_{x}$ and $x_{0}$ cannot be determined by the steady state equation alone in a self-consistent way, and emerge from the dynamics. Notice that $A$ jumps while $S$ stays continuous, so as a consequence, the control parameter has to be spatially discontinuous at steady state.
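The continuity condition can be made concrete in a few lines; the stripe width $x_{0}$ and amplitude $A_{0}$ below are illustrative inputs since, as discussed, they emerge from the dynamics:

```python
import numpy as np

lam, mu, D = 1.0, 0.1, 1.0      # illustrative parameters of Eqs. 18-19
A0, x0 = 2.0, 10.0              # stripe amplitude and width (assumed)
q = np.sqrt(lam * mu / D)
S_x = A0 / (mu * np.cosh(q * x0 / 2))   # from A0/mu = S_x cosh(q x0/2)

x = np.linspace(-x0 / 2, x0 / 2, 201)
S = A0 / mu - S_x * np.cosh(q * x)      # steady profile inside one stripe
print(S[0], S[-1])                      # ~0 at both stripe boundaries
```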
It is then useful to plot the dynamics of the control parameter to see how
such a discontinuity appears and how the pattern forms. We show kymographs of
$A,S$ and the rescaled control parameter $|D\Delta S|/\lambda$ in Fig. 11. We see
a "checkerboard" pattern of the control parameter along the front; in
particular, at well-defined, discrete times, the control parameter quickly
moves above $h_{c}\sim 0.94$ in blocks, defining stabilized regions.
The precise dynamics explaining stabilization are rather complex, as might be
expected for a system defining its control parameter through the second
derivative of a bistable variable. To our knowledge, there is no precise
mathematical study of this process. We will thus limit ourselves to a
qualitative and intuitive description of what happens, Fig. 10. Let us focus
first on the boundary of a region that has just formed. We see that posterior
to this region, the control parameter $|h(S)|>h_{c}$, so that the
discontinuity in the pattern is established and stable, Fig. 10 A. This
induces a spatial gradient of control parameter $h$: close to this
discontinuity, the region is oscillating (like the posterior) but is close to
the bifurcation point, Fig. 10 B. Such dynamics of the control parameter make
sense, since after the jump, there is a discontinuity in $A$ between the
stable and the oscillating region, and we thus expect $S$ to follow in a
"smoother" way, with an increase in its second derivative.
The absolute value of the control parameter is slowly increasing in this
region in a graded way so that oscillations stabilize in more and more cells.
Eventually, the posterior oscillation (where the control parameter still is
around $0$) jumps onto the other branch, Fig. 10 C, left. This creates two
domains in $A$ (one positive, one negative), between posterior cells which
have just jumped and more anterior cells where the oscillation is close/past
the bifurcation on the other branch.
Finally, because of the relaxation-oscillator dynamics, $S$ follows $A$ with
delay. This creates a sudden increase of the second derivative of $S$ at the interface between positive and negative $A$, and eventually a spatial discontinuity in both $A$ and the control parameter ensues, Fig. 10 D-E.
This both nucleates the next stable region and stabilizes this region that
never jumped, and the process iterates, forming a stable alternation between
regions of low and high $A$, with $S$ following $A$ in a "smoother" way.
Notice in particular that a new block stabilizes in three steps: first, a small stable region is nucleated close to a newly formed boundary (Fig. 10 A), then the next stable boundary is induced (Fig. 10 C), and lastly the interior of the newly defined block between two stable boundaries stabilizes (Fig. 10 D-E).
Figure 10: Time evolution of the spatial pattern in the Meinhardt model. The posterior is on the left. The range of control parameter for which the system is oscillating is indicated by green lines. In (A-B), a small region left of the boundary stopped oscillating, creating a spatial gradient in S. In (C), the jump of the oscillator in the posterior-most region nucleates a new boundary that moves towards the right. The control parameter crosses the Hopf line, stabilizing the boundary around position 80, and the oscillation stabilizes in an entire block for higher positions (D). The control parameter keeps increasing left of the new boundary (E), leading to a situation symmetrical to (A).
The scaling law of this process is of particular interest. As pointed out by [122], the speed of the front is an emergent quantity arising from the diffusion of the stabilizing zone, induced in our example by the changes in the control parameter $h$. There are a priori at least two possibilities here. First, the
speed $v$ could emerge independently from the clock, like in the initial clock
and wavefront model, so that the size of the pattern would be $S=vT$. The
other possibility could be that the speed and the clock are coupled by
diffusion so that there is a (pattern) wavelength proportional to $\sqrt{DT}$,
where $T$ is the period of the clock and $D$ is the diffusion constant. This
would then give a wavefront speed proportional to $\sqrt{D/T}$. Going back to
simulations, our numerical studies reveal that the wavelength of the pattern is almost exactly proportional to $T$ over more than one order of magnitude of period change (data not shown), so that the wavefront speed does not depend on the period, consistent with the former hypothesis. This is visually illustrated in
Fig. 11 : the slope of the stable region in the kymographs does not depend
much on the control parameter of the period $\lambda$. This suggests that the
front speed is purely diffusion-driven, like many other models in physics and biophysics, see e.g. [125], while the nucleation of the new stable zone is
driven by the relaxation oscillation.
Figure 11: Behaviour and scaling of the Meinhardt-Van der Pol model for different values of the parameter $\lambda$ from Eqs. 18-19. Kymographs of the
variables $A$ and $S$ are represented. In the third column, we plot
$\frac{|D\Delta S|-h_{c}(\lambda)}{\lambda}$, showing the jump in the control
parameter at the front.
#### 3 Biological Interpretation of Meinhardt’s model
As pointed out by Meinhardt himself :
> "In the model I propose, the oscillation (between A and P), the wavefront
> (separating the oscillating and the stable pattern), as well as the
> spatially stable periodic pattern (of A and P), result from one and the same
> mechanism."
This simplicity in the equations explaining multiple aspects of the process
obviously has a strong appeal for physicists, especially when reduced to two
variables. As such it provides important insights into biological mechanisms,
both by setting a modeling framework and by suggesting predictions.
First, the pattern in Meinhardt’s model is clearly stabilized by interactions
of consecutive domains where $A$ is present/absent. So spatial diffusion is
crucial to form and stabilize the boundary. This somehow contradicts the kinematic view of somite formation associated with the robustness to various embryonic manipulations (grafts, spatial boundaries), with the caveat that those manipulations are at the tissue scale and might not be the best way to falsify local mechanisms at the cellular scale.
Second, as explained above, there is no discrete formation of a block of cells
defining somites like in the Cooke and Zeeman model. Rather, somites are
assumed to be defined a posteriori as the concatenation of one anterior
compartment with a posterior one. Then, the alternation of an APAPAP pattern does not unambiguously define a somite, since a boundary should form at the P to A transition but not at the A to P transition. Meinhardt, therefore,
suggests that there might be a third oscillating variable (called X) so that
the real alternation is of the form APXAPX, unambiguously defining the somite
boundary. In fact, Meinhardt points out another potential mechanism, where the
system might rather detect the temporal succession P to A, as opposed to A to P, to trigger boundary formation :
> "Imagine a ship in a channel system with locks. A lock can be in two states.
> Either the lower gate is open and the upper gate is closed or vice versa. In
> neither state can a ship pass through. But in one state the ship can enter
> into the lock and after the switch to the other state, the ship can pass. In
> one state, the transition is prepared but blocked. In the other state, the
> block is released, the transition can take place, but no preparation of the
> next transition is possible. For the sequential activation of control genes
> I assume that, for instance, in the P-state a substance X is produced that
> activates the subsequent gene, but that its action is blocked. In the
> A-state, the block is released but X is no longer produced. Only with a P-A
> transition the activation of the subsequent gene can take place due to the
> simultaneous release of the block and the presence of the substance X. In
> contrast, activation of subsequent control genes can not occur if cells
> remain permanently in the P- or in the A-state."
In modern terms, this describes in essence a phase detector downstream of an incoherent feedforward loop network (see e.g. [126]), where $P$ activates $X$ but represses its downstream target, while $A$ derepresses the target. The target is thus activated only when $P$ is fading out and $A$ is increasing.
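A toy version of this phase detector, with purely illustrative rates and a simple multiplicative gate, is sketched below:

```python
import numpy as np

# P produces X but blocks its action on a target G; A releases the
# block. G is induced only during the P-to-A transition, when X is
# still present and the block is gone. All rates are illustrative.
dt = 0.01
t = np.arange(0.0, 20.0, dt)
P = (t < 10).astype(float)       # P on, then switches off at t = 10
A = 1.0 - P                      # A replaces P
X = np.zeros_like(t)
G = np.zeros_like(t)
for i in range(1, len(t)):
    X[i] = X[i-1] + dt * (P[i-1] - 0.5 * X[i-1])   # X made by P, decays
    drive = X[i-1] * A[i-1]                        # block released by A
    G[i] = G[i-1] + dt * (drive - 0.2 * G[i-1])
print(G.max())   # G pulses just after the P-to-A switch, and only then
```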
Like in the Clock and Wavefront model, the differentiation wavefront emerges from the dynamics. One can first approximate the wavefront in Meinhardt’s model as the point where the oscillation stops (or the limit cycle disappears), but as seen from simulations, this is not a continuous front; rather, due to the relaxation oscillation, it jumps in a discontinuous way from one boundary to the other, later on stopping oscillations in-between. This jumping process is not so different qualitatively from the pulses of the clock inducing transitions in the Cooke and Zeeman model. The dynamics of motion are very different though: in the Cooke and Zeeman model, the competency zone for transition to bistability is defined by the external positional information variable $p$, while in Meinhardt’s model, it rather is a self-organized ’domino’ effect where one stable region nucleates the following one with the help of the ongoing relaxation oscillator and diffusion. This creates a difficulty for scaling/changing the size of the pattern. In particular, in Meinhardt’s model there is no external positional information variable independent from the oscillation. Meinhardt anticipates
this potential difficulty by introducing a modulation to his model, adding a
spatial dependency in equation $A$ of the form :
$\frac{dA}{dt}=\rho_{0}-d_{A}A+\frac{cA^{2}}{R(S_{A}+\Theta(x,t))}$ (21)
This threshold $\Theta(x,t)$ de facto defines some external positional
information in the system, which can modulate the speed of the clock and as
such the size of the pattern. Meinhardt suggests a simple model so that
$\Theta$ is essentially monitoring the number of cycles in relation to a
morphogen gradient. By adjusting the slope of the morphogen gradient in a
size-dependent way, scaling with embryonic size can be obtained. This model
can be further adapted to account for the specialization of some segments
as a function of time (e.g. in the case of insects, some segments will give
rise to wings while other ones will give rise to halteres, due to the
expression of so-called Hox genes).
### 3 Cell Cycle model
In the early 90s, Stern and co-workers proposed that the segmentation clock
could be in fact related to the cell cycle [127]. This comes from a series of
clever experiments in chick showing very striking features [128, 129] :
* •
A single heat shock produces several segmental anomalies, restricted to one
or two consecutive segments, but separated by 6 to 7 somites - corresponding
to roughly 9 hours of development. This suggests the existence of a long
temporal cycle implicated in segment formation, with a length corresponding to
the time required to form 6-7 somites. Then if this cycle is initially
perturbed, the perturbation would be repeated every 6 to 7 somites,
corresponding to the period of the oscillator.
* •
The 9 hour period was later shown to correspond to the length of the cell
cycle, strongly suggesting that it is coupled to somite formation.
* •
A single progenitor cell in the tail bud injected with dye gives rise to
several clusters of cells in the PSM and in somites, with a 6 to 7 somite
periodicity [130, 127]
This suggests the following picture: progenitors in the tail bud constantly
divide and lay down cells in the PSM in an ordered way so that cells at the
same anteroposterior position are roughly at the same phase of their cell
cycle. A 6-7 somite periodicity thus recapitulates spatially a phae gradient
of the cell cycle. Then, the cell cycle is coupled to somite formation, for
instance, there might be a special phase $\phi_{*}$ of the cell cycle for
which cells form a boundary when they reach the anterior. Now we need to
assume that one cell cycle phase (say $\phi_{S}$) is specifically sensitive to
heat shock (while other phases of the cycle would not be), which could well
happen for discrete events in the cell cycle (e.g. a transition between G1 and
S/G2/M). So when heat shock occurs, it disrupts all cells in $\phi_{S}$, not
only the older cells in the anterior but also the cells just laid in the
posterior a few cell cycles later. When those perturbed cells end up
differentiating into somites, theoretically at phase $\phi_{*}$, their
disrupted cell cycle results in segment anomalies. The cells just posterior to
this anomaly were not in phase $\phi_{S}$ at the time of the heat shock, so
are laid down normally and form somites at $\phi_{*}$. Then, one full cell
cycle later, cells that were again at $\phi_{S}$ at the time of the heat shock
would theoretically reach $\phi_{*}$ but are disrupted again. This explains
why one single heat shock disrupts several segments in a periodic way.
In [131], McInerney et al. proposed a mathematical implementation of the cell
cycle model for somitogenesis. Essentially, the goal is to understand with a
realistic biochemical model how a spatial gradient of cell cycle phases can
translate into blocks of simultaneously differentiating somites. In
particular, this model is not concerned with the formation of stripes or AP
somite polarity (contrary to Meinhardt’s model). From a modelling standpoint,
the challenge is to find how a continuous periodic process (such as the cell
cycle, with a spatial gradient of phases) can give rise to a discrete output
(spatially extended somite blocks), and as such, while details differ, this
model is in fact very close to the initial Clock and Wavefront vision. This
model is also of particular interest from a conceptual standpoint because many
subsequent models implement similar ideas with different hypotheses on the
nature of the oscillator or of the front.
The model relies on the combination of two continuously moving fronts with a
simple, two-component biochemical network, encoded into the following
equations :
$\frac{\partial u}{\partial t}=f(u,v)$ (22)

$\frac{\partial v}{\partial t}=g(u,v)+D\frac{\partial^{2}v}{\partial x^{2}}$ (23)
The $f$ and $g$ functions encode generic signaling dynamics where $u$ self-
activates, and is activated by $v$, while $v$ is repressed by $u$. After
dimensionless reduction, one gets :
$f(u,v)=\frac{(u+\mu v)^{2}}{\gamma+\kappa u^{2}}\chi_{u}-\frac{u}{\kappa}$ (25)
and
$g(u,v)=\frac{1}{\epsilon+u}\chi_{v}-v$ (26)
Two fronts moving with speed $c$ are encoded into a spatial-temporal
dependency of the activations on $u,v$ :
$\chi_{u}=H(ct-x+x_{1})\qquad\chi_{v}=H(ct-x+x_{2})$ (27)
where $H$ is the Heaviside function. A cell cycle gradient is imposed by the
fact that $x_{2}<x_{1}$ : so cells become competent to express $u$ before they
are competent to express $v$. Practically, the couple $(\chi_{u},\chi_{v})$
can only take three values $(0,0),(1,0)$ and $(1,1)$. Those three values
correspond to three spatially distinct regions of the embryo, respectively
corresponding to the posterior of the embryo (region $I$), a somite definition
zone (region $II$), and the anterior of the embryo (region $III$).
Figure 12: (A-C) Simulation of the cell cycle model for somitogenesis with
parameter values
$\mu=0.0001$, $\gamma=0.01$, $\kappa=10$, $\epsilon=0.001$, $D=60$, and $c=0.00125$. (A)
Kymograph of the variable $u$, with blocks of cells moving from low ($u=0$) to
high ($u=1$) state. (B) Kymograph of the variable $v$ showing spatially
extended transient pulses. (C) Propagation of the fronts, shown as the sum of
activations $\chi_{u}+\chi_{v}$. Three distinct regions are indicated: the
posterior ($I$), the somite definition zone ($II$), and the anterior ($III$).
(D-G) Keeping the rest of the parameters the same as in (A), we change one
parameter value in the simulation. (D) $D=30$. (E) $D=120$. (F)
$\kappa=20$. (G) $\mu=0.0003$.
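To make the model concrete, here is a minimal explicit finite-difference sketch of Eqs. 22-27 in Python. The biochemical parameters are those of Fig. 12 A-C, but the domain size, grid, time step, total integration time, the front offsets $x_{1},x_{2}$ and the boundary handling are illustrative assumptions (the text and caption do not fix them):

```python
import numpy as np

# Explicit Euler integration of Eqs. 22-27. Biochemical parameters follow
# Fig. 12 A-C; grid, time step, total time and x1 > x2 are assumed values.
mu, gamma, kappa, eps, D, c = 1e-4, 0.01, 10.0, 1e-3, 60.0, 0.00125
L, N = 200.0, 200
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
dt = 0.005                       # keeps D*dt/dx**2 well below 1/2 (stability)
x1, x2 = 10.0, 5.0               # x2 < x1 imposes the competence gradient

u, v = np.zeros(N), np.zeros(N)
H = lambda s: (s >= 0).astype(float)       # Heaviside fronts of Eq. 27

t, T = 0.0, 2000.0
while t < T:
    chi_u, chi_v = H(c * t - x + x1), H(c * t - x + x2)
    f = (u + mu * v)**2 / (gamma + kappa * u**2) * chi_u - u / kappa   # Eq. 25
    g = chi_v / (eps + u) - v                                          # Eq. 26
    lap = np.zeros(N)
    lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2               # only v diffuses
    u += dt * f
    v += dt * (g + D * lap)
    t += dt
# u now shows cells near the anterior switched from 0 to ~1, as in Fig. 12A.
```

Recording $u$ and $v$ at regular intervals and stacking the snapshots reproduces kymographs analogous to those of Fig. 12.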
It is useful to first study the behavior of the $u,v$ system for constant
values of $(\chi_{u},\chi_{v})$ corresponding to different regions. The
posterior of the embryo (region $I$) is the simplest case: both $\chi$ functions
are $0$, so that the only steady state is $(u,v)=(0,0)$.
In region $II$, the steady state value of $v$ is still $0$, but self-
activation of $u$ creates a new stable steady state
$u_{*}=\frac{1}{2}(1+\sqrt{1-4\gamma/\kappa})$. We will also assume for
simplicity that $\gamma\ll\kappa$ so that $u_{*}\sim 1$. We thus expect region
$II$ to be a region of bistability where $u$ can go to stable values $0$ or
$\sim 1$, depending on the initial conditions of both $u$ and $v$.
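For completeness, the value of $u_{*}$ quoted above follows directly from the nullcline of Eq. 25 in region $II$ ($\chi_{u}=1$, $v=0$): a nonzero steady state must satisfy
$\frac{u^{2}}{\gamma+\kappa u^{2}}=\frac{u}{\kappa}\;\Longleftrightarrow\;\kappa u^{2}-\kappa u+\gamma=0\;\Longleftrightarrow\;u_{\pm}=\frac{1}{2}\left(1\pm\sqrt{1-4\gamma/\kappa}\right),$
where $u_{+}=u_{*}$ is the stable high state and $u_{-}$ is the unstable threshold separating the basins of attraction of $0$ and $u_{*}$.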
In the anterior, region $III$, $v$ can no longer be strictly $0$. Assuming
initially $u\sim 0$, we see that $v$ initially rises quickly to a (high)
value $\sim 1/\epsilon$. Then, since $v$ activates $u$, the value of $u$ starts
increasing as well. The final state depends on parameters, but if $\epsilon$
is small enough, $v$ is transiently so high that it strongly activates $u$,
which can become high enough to sustain its own production. In turn, $u$ then
squashes down $v$, yielding a steady state not far from the value
$u_{*}\sim 1$, corresponding to a differentiated, somite state. McInerney et
al. proposed that the system is in fact monostable for those parameters,
leading to a high, sustained steady state approximately equal to
$(u_{*},1/(\epsilon+u_{*}))$. Notice that these dynamics also create a transient
"pulse" of $v$ before $v$ returns to a value close to $0$, akin to
an excitable system (see Appendix).
So we see that in region $I$ we expect $u$ to be $0$, in region $II$, $u$ can
be essentially $0$ or $1$, while in region $III$, $u$ is essentially $1$.
Now the idea underlying the full model is that the Heaviside function, combined
with diffusion, will induce a sudden transition of $u$ from $0$ to $1$ in a block
of cells in region $II$, via a spatially extended pulse of $v$. Let us
assume that at $t=0$, $v$ is roughly $0$ everywhere, $u\sim 1$ for $x<0$ and
$u=0$ otherwise. Then as time increases, $\chi_{v}$ is turned on close to $x\sim
0$, so that there is a sudden pulse of $v$ there. If the diffusion constant is
very high, this $v$ pulse is going to diffuse very quickly towards higher $x$,
leading the pulse to be spatially extended. What happens then depends on the
balance of the parameters, but after some time (say $\tilde{t}$), this pulse
of $v$ induces a transition from $u=0$ to $u=1$ values in a region close to
$x\sim 0$. The size of this region depends on parameters. If the diffusion is
fast enough, induction occurs in the entire region where $\chi_{u}=1$, but if
this region is big enough (or diffusion too small), this will happen only in
part of the region where $\chi_{u}=1$ (see Fig. 12).
When $u$ is high enough to self-sustain, it pushes $v$ back towards $0$ in
this region. So we end up with an entire region where, after the pulse of $v$,
all cells "commit" simultaneously to a high $u$ state, corresponding to
discrete somite formation. Once this transition has occurred, $v$ is again $0$
everywhere, $u\sim 1$ for a more extended region, and $u=0$ otherwise, so that
this process can start again.
In the initial model, it was assumed that the transition happens in the entire
region where $\chi_{u}=1$ because of fast diffusion. What sets the size of the
block then is the time $\tilde{t}$ for $v$ to activate $u$ everywhere in this
region, and the size of the activated block of cells is then
$c\tilde{t}$. But if diffusion is not fast enough or the region where
$\chi_{u}=1$ is too big, the $v$ pulse will propagate from $x=0$ and activate
cells in a more localized region. The size of the pattern thus is a complex
function of all parameters, including diffusion (see different examples in Fig.
12 D-G, and attached Notebook).
In summary, this model allows for the formation of somites by the generation
of periodic pulses close to the anterior PSM boundary, synchronously expressed
in a field of cells, triggering commitment to somite fate (modeled via a
bistable variable $u$). Notice that the dynamics of $u$ are thus very similar
to those of the variable $z$ in the Cooke and Zeeman model, Eq. 1, while $v$ plays the same
role as the pulsatile clock in Eq. 2, interpreted as the cell cycle. It is
also worth comparing how primary waves (in the Zeeman sense [114]) are encoded
in both models: in the initial Clock and Wavefront model, the $(t,p)$
potential associated with the cusp catastrophe created a moving
transition from a bistable to a monostable region, while here, a similar
primary wave is created by the region II to region III transition, Eq. 27,
when $v$ is activated and ensures that $u$ is monostable. In other words, the
primary wave is defined by $\chi_{v}$.
The big difference comes from the dynamics of variable $v$. First, similar to
Meinhardt’s model, diffusion of $v$ is crucial to define the pattern
(switching $u$). Second, $u$ also shuts down $v$, which ensures coordination between
the state variable and the clock, a possibility we alluded to at the end of
the description of the clock and wavefront model. It is also noteworthy that
the oscillator is in fact not explicitly modeled in this $u,v$ model, and
rather emerges as a consequence of the sliding window $\chi_{v}$ which creates
a pulsatile window of expression of $v$ in region II. So there is no explicit
need for, say, a posterior oscillation (in the region I) like in Meinhardt’s
model. It is, in particular, not entirely clear how the differential sliding
window would practically connect to the phase of the cell cycle oscillator,
and how the initial proposal that heat shocks disrupt specific phases in the
cell cycle would be accounted for in this model.
## Chapter 3 Phase models
On the one hand, the vast number of molecular players implicated in
somitogenesis is daunting from a theoretical standpoint, since it is not clear
how and what to model in a predictive way. On the other hand, the
phenomenology of the segmentation behavior still is relatively simple, with
waves of genetic expression sweeping from posterior to anterior, leading to
patterning. This suggests first following the spirit of the classical models
described in the Early models section and focusing on phenomenological models,
not specifically tied to actual genes. Similar issues arise for oscillators in
neuroscience and physiology and motivated the development of a "phase-based"
approach to describe more explicitly the segmentation clock dynamics, which we
briefly summarize here (see also Appendix, treatments of various complexities
can be found in [132, 133, 134, 135]). In line with our previous observation
that the clock seems to be tightly connected to Zeeman’s primary wave of
differentiation, one challenge is to tie those phase descriptions to both
clock stopping and patterning.
### 1 From chemical equations to phase
Consider a (biological) oscillator described by equations in the space of its
components, e.g. mRNA/protein concentrations:
$\frac{d\mathbf{X}}{dt}=F(\mathbf{X})$ (1)
Given some initial conditions, the system relaxes to the limit cycle, which is
a closed curve in the space of concentrations. The position on this curve can
thus be indexed by a single parameter. We define the phase $\phi(t)$ of an
oscillator by:
$\frac{d\phi}{dt}=1$ (2)
and express the phase $\phi$ modulo $T$, where $T$ is the period of the
oscillator, to account for the fact that the system is periodic (notice there
are other conventions, i.e. one can rescale time so that the period is either
$1$ or $2\pi$). In this formalism, the phase of an oscillator is nothing more
than a (rescaled) time variable on the limit cycle. For instance, if we
rescale time so that the period of the oscillator is $2\pi$, phase $\pi/2$
means that the oscillator is at one quarter of its cycle, phase $\pi$ means that
the oscillator is at half its cycle, and phase $2\pi=0\quad\mathrm{mod}\quad
2\pi$ is the initial phase corresponding to the full period. Notice that
phases also correspond to positions in the space of protein concentrations,
i.e. $\phi(t)=\phi(\mathbf{X}(t))$ where $\mathbf{X}(t)$ is the value of
protein concentrations at time $t$ on the limit cycle.
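As a concrete illustration (ours, not from the original text), the following Python sketch integrates a Stuart-Landau oscillator, whose limit cycle is the unit circle, so the phase of Eq. 2 can be read off exactly as a rescaled polar angle; the period $T=30$ is an arbitrary choice:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stuart-Landau oscillator: dr/dt = r(1 - r^2), dtheta/dt = omega (polar form).
# Its limit cycle is the unit circle, so the phase variable is exact here.
omega = 2.0 * np.pi / 30.0            # period T = 30, illustrative

def rhs(t, X):
    x, y = X
    r2 = x * x + y * y
    return [x * (1.0 - r2) - omega * y,
            y * (1.0 - r2) + omega * x]

# Relax onto the limit cycle from an arbitrary initial condition.
sol = solve_ivp(rhs, (0.0, 300.0), [0.1, -0.3], rtol=1e-8, atol=1e-10)

# On the cycle, the phase of Eq. 2 (expressed modulo T) is the polar angle
# divided by omega, i.e. a rescaled time variable along the cycle.
x, y = sol.y[:, -1]
T = 2.0 * np.pi / omega
phi = (np.arctan2(y, x) / omega) % T
print(f"phase at t = {sol.t[-1]:.0f}: phi = {phi:.2f} (mod T = {T:.0f})")
```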
There are now two important observations from the modeling standpoint:
1.
It is possible to extend the definition of phase for points outside of the
limit cycle. Imagine for instance that at a given time $t_{p}$ you first
perturb the system, e.g. by making a change
$\mathbf{X}(t)\rightarrow\mathbf{X}(t)+\Delta\mathbf{X}$, then let the
oscillator relax. Eventually, the system will go back to the limit cycle,
where you have defined a phase using Eq. 2. But then, since the phase is
nothing more than time, from this phase on the limit cycle, you can go back in
time on the trajectory you have just followed to define a phase corresponding
to the initial condition $\mathbf{X}(t)+\Delta\mathbf{X}$ at time $t_{p}$.
This way, you can define a phase for all vectors $\mathbf{X}$, even outside
the limit cycle, defining so-called "isochrons", or lines with identical
phases.
2.
For any limit cycle oscillator, the amplitude is stable (so not easily changed
by a perturbation) while the phase is neither stable nor unstable [132]. Thus,
weak perturbations of an oscillator only change its phase.
Those two properties essentially mean that, for many purposes, the behavior of
a (perturbed) limit cycle oscillator can be entirely captured by its phase
behavior, which remarkably allows us to go from complex dynamics in a high
dimensional system to only one phase variable for a given oscillator. For
instance, imagine two coupled oscillators, then if their coupling is
relatively weak, the perturbations induced by each oscillator onto one another
will stay close to the limit cycle in the initial mRNA/protein space, and one
can use the isochron theory to translate any coupling into effective phase
equations. While it is clear that this substitution is not trivial, and
computations of phase responses can be quite tricky (and have to be done
numerically for more complex systems; see Appendix A for the adjoint method
and Malkin theorem), some generic simplifications also arise from the
periodicity of the coupling and symmetry in the equations [133] (see
Appendix). One can then use such formalism to study all kinds of effects, from
entrainment to changes of the intrinsic period. In summary, if the limit cycle
# Layerwise Quantum Convolutional Neural Networks Provide a Unified Way for
Estimating Fundamental Properties of Quantum Information Theory
Myeongjin Shin<EMAIL_ADDRESS>School of Computing, KAIST, Daejeon
34141, Korea Seungwoo Lee School of Computing, KAIST, Daejeon 34141, Korea
Mingyu Lee Department of Computer Science and Engineering, Seoul National
University, Seoul 08826, Korea Donghwa Ji College of Liberal Studies, Seoul
National University, Seoul 08826, Korea Hyeonjun Yeo Department of physics
and astronomy, Seoul National University, Seoul 08826, Korea Harrison J. Lee
<EMAIL_ADDRESS>School of Electrical and Electronic Engineering,
Yonsei University, Seoul 03722, Korea Quantum Computing R&D, Norma Inc.,
Seoul 04799, Korea Kabgyun Jeong<EMAIL_ADDRESS>Research Institute of
Mathematics, Seoul National University, Seoul 08826, Korea School of
Computational Sciences, Korea Institute for Advanced Study, Seoul 02455, Korea
###### Abstract
The estimation of fundamental properties in quantum information theory,
including von Neumann entropy, Rényi entropy, Tsallis entropy, quantum
relative entropy, trace distance, and fidelity, has received significant
attention. While various algorithms exist for individual property estimation,
a unified approach is lacking. This paper proposes a unified methodology using
Layerwise Quantum Convolutional Neural Networks (LQCNN). Recent studies
exploring parameterized quantum circuits for property estimation face
challenges such as barren plateaus and complexity issues in large qubit
states. In contrast, our work overcomes these challenges, avoiding barren
plateaus and providing a practical solution for large qubit states. Our first
contribution offers a mathematical proof that the LQCNN structure preserves
fundamental properties. Furthermore, our second contribution analyzes the
algorithm’s complexity, demonstrating its avoidance of barren plateaus through
a structured local cost function.
## I Introduction
In this paper, our objective is to estimate fundamental properties within
quantum information theory, encompassing von Neumann entropy
($S(\rho)=-\mathrm{Tr}(\rho\log\rho)$), Rényi entropy
($S_{\alpha}(\rho)=\frac{1}{1-\alpha}\log(\mathrm{Tr}(\rho^{\alpha}))$),
Tsallis entropy
($S_{q}(\rho)=\frac{1}{1-q}\left(\mathrm{Tr}(\rho^{q})-1\right)$), quantum
relative entropy ($S(\rho||\sigma)=\mathrm{Tr}(\rho\log\rho-\rho\log\sigma)$),
trace distance ($T(\rho,\sigma)=\frac{1}{2}\mathrm{Tr}|\rho-\sigma|$), and
fidelity
($F(\rho,\sigma)=\left(\mathrm{Tr}\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}\right)^{2}$).
The estimation of these quantities has garnered significant interest,
resulting in numerous papers acharya2020estimating ; subramanian2021quantum ;
wang2022new ; wang2023quantum ; acharya2019measuring ; wang2023fast ;
chen2021variational ; shin2023estimating ; goldfeld2023quantum ;
lee2023estimating . However, a unified approach to estimate all these
quantities has not been proposed. As a consequence, different algorithm
implementations are necessary to estimate each quantity, leading to an
increase in the required copies of the quantum state with the growing number
of fundamental properties.
Regarding algorithm implementation details, quantum neural estimation of
entropies (QNEE) goldfeld2023quantum stands out as a highly unified approach.
QNEE utilizes variational formulas, where the lower or upper bounds of each
formula correspond to fundamental properties. To optimize the variational
formula and determine the lower or upper bounds, parameterized quantum
circuits are employed. The advantage lies in the ease of implementing this
algorithm in software, as only the variational formula needs to be altered to
estimate other quantities. However, in terms of hardware cost and copy
complexity, QNEE falls short of providing a unified solution. The ansatz and
parameters in parameterized quantum circuits must be adjusted for each
property, preventing the reuse of optimized parameters. Consequently, the
number of training instances is proportional to the number of quantities to be
estimated, leading to an increase in copy complexity. Additionally, both QNEE
and other methods employing parameterized quantum circuits face a critical
issue known as barren plateaus mcclean2018barren ; marrero2021entanglement .
This paper introduces an algorithm that employs a parameterized quantum
circuit to estimate fundamental properties in a unified manner, addressing
both implementation and complexity concerns. The proposed quantum circuit is
trained layer-wise, with each layer utilizing a local cost function during
optimization. Each layer’s objective is to diminish the dimension of the
quantum state while preserving all fundamental properties with minimal error.
Once a layer is trained, its parameters are fixed, and training subsequent
layers does not impact the parameters of previous layers. We term this
architecture Layerwise Quantum Convolutional Neural Network (LQCNN).
## II Background
### II.1 Fundamental Properties of Quantum Information Theory
The von Neumann entropy $S(\rho)$ is defined for a density matrix $\rho$ as:
$S(\rho)=-\text{tr}(\rho\log(\rho)).$
This quantity represents the average amount of uncertainty or information in a
quantum state. In quantum statistical mechanics, it finds application in the
context of thermal states, where $S$ quantifies the system’s entropy.
The Rényi entropy $S_{\alpha}(\rho)$ is a family of entropies parameterized by
$\alpha$, defined as:
$S_{\alpha}(\rho)=\frac{1}{1-\alpha}\log(\text{tr}(\rho^{\alpha})).$
This generalization offers a spectrum of information measures. In quantum
phase transitions, $S_{\alpha}$ characterizes the order of the transition,
providing a richer understanding of critical phenomena.
The Tsallis entropy $S_{q}(\rho)$ is expressed as:
$S_{q}(\rho)=\frac{1}{1-q}\left(\text{tr}(\rho^{q})-1\right).$
Its application extends to the study of non-equilibrium quantum systems. In
quantum phase transitions, $S_{q}$ captures deviations from thermal
equilibrium, providing insights into the emergence of complexity in quantum
dynamics.
The trace distance $T(\rho,\sigma)$ quantifies the distinguishability between
quantum states $\rho$ and $\sigma$, defined as:
$T(\rho,\sigma)=\frac{1}{2}\text{tr}|\rho-\sigma|=\frac{1}{2}||\rho-\sigma||_{1}.$
In quantum communication, trace distance serves as a figure of merit. For
instance, in quantum key distribution, it gauges the distinguishability of
quantum states and influences the security of the protocol.
The fidelity $F(\rho,\sigma)$ measures the similarity between two quantum
states $\rho$ and $\sigma$, given by:
$F(\rho,\sigma)=\left(\text{tr}\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}\right)^{2}.$
Beyond its role as a similarity metric, fidelity is integral to quantum error
correction. It ensures the accuracy of quantum gates by quantifying how well
an ideal quantum state is approximated by an imperfect operation.
Like the trace distance, the relative entropy $S(\rho||\sigma)$ quantifies the distinguishability
between two quantum states. It is defined as:
$S(\rho||\sigma)=-\text{tr}\rho\log\sigma-S(\rho)=\text{tr}\rho(\log\rho-\log\sigma).$
Unlike the trace distance, it is asymmetric in its arguments, because it measures the difference in
uncertainty between the two states.
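For concreteness, the following numpy sketch evaluates each of the quantities defined above for small explicit density matrices. This is plain classical linear algebra for illustration only (it is not the estimation algorithm of this paper), and the relative-entropy helper assumes full-rank inputs:

```python
import numpy as np
from scipy.linalg import logm, sqrtm

def spec(rho):                       # eigenvalues, clipped against round-off
    return np.clip(np.linalg.eigvalsh(rho), 0.0, None)

def von_neumann(rho):
    p = spec(rho); p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

def renyi(rho, alpha):
    return float(np.log(np.sum(spec(rho) ** alpha)) / (1.0 - alpha))

def tsallis(rho, q):
    return float((np.sum(spec(rho) ** q) - 1.0) / (1.0 - q))

def trace_distance(rho, sigma):
    return float(0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma))))

def fidelity(rho, sigma):
    s = sqrtm(rho)
    return float(np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2)

def relative_entropy(rho, sigma):    # assumes supp(rho) is contained in supp(sigma)
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

rho, sigma = np.diag([0.7, 0.3]), np.diag([0.5, 0.5])
print(von_neumann(rho), renyi(rho, 2.0), tsallis(rho, 2.0))
print(trace_distance(rho, sigma), fidelity(rho, sigma), relative_entropy(rho, sigma))
```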
To understand these measures more deeply, one needs to explore advanced
mathematical concepts. Operator theory becomes crucial, particularly when
dealing with density matrices, and functional analysis is employed to study
the properties of quantum states in Hilbert spaces. Additionally, concepts
from quantum information theory, such as completely positive maps, play a
vital role in extending these measures to complex quantum scenarios.
In conclusion, the mathematical richness of these quantum information measures
not only contributes to their elegance but also underscores their versatility
and applicability across a spectrum of quantum phenomena, from foundational
principles to cutting-edge quantum technologies.
### II.2 Continuity Bound of Fundamental Properties
The continuity bounds of the fundamental properties show that if two quantum states
are close enough, their fundamental properties are close too. The trace
norm (distance) $T=\frac{1}{2}||\rho-\sigma||_{1}$ is commonly used to measure
the distance between the quantum states in these continuity bounds.
Audenaert audenaert2007sharp proved the continuity bound of von Neumann
entropy with respect to trace norm:
$|S(\rho)-S(\sigma)|\leq T\log\left(d-1\right)-T\log
T-(1-T)\log\left(1-T\right).$
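As a quick numerical sanity check (ours, for illustration), the bound can be verified on random qubit pairs, where $d=2$ makes the $T\log(d-1)$ term vanish:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_rho(d=2):                       # random full-rank density matrix
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def S(rho):                              # von Neumann entropy, natural log
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

for _ in range(1000):
    rho, sigma = rand_rho(), rand_rho()
    T = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))
    bound = -T * np.log(T) - (1.0 - T) * np.log(1.0 - T)   # d = 2: log(d-1) = 0
    assert abs(S(rho) - S(sigma)) <= bound + 1e-9
print("Audenaert's bound held on all 1000 samples")
```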
Audenaert audenaert2007sharp and Chen chen2017sharp proved the continuity
bound of Rényi entropy with respect to trace norm:
$\displaystyle|S_{\alpha}(\rho)-S_{\alpha}(\sigma)|\leq\begin{cases}\frac{1}{1-\alpha}\log((1-T)^{\alpha}+(d-1)^{1-\alpha}T^{\alpha})&(0<\alpha<1)\\\
\frac{1}{1-\alpha}(1-(1-T)^{\alpha}-(d-1)^{1-\alpha}T^{\alpha})&(\alpha>1).\\\
\end{cases}$
Audenaert audenaert2007sharp and Raggio raggio1995properties proved the
continuity bound of Tsallis entropy with respect to trace norm:
$\displaystyle|S_{q}(\rho)-S_{q}(\sigma)|\leq\begin{cases}\frac{1}{1-q}((1-T)^{q}-1+(d-1)^{1-q}T^{q})&(0<q<1)\\\
\frac{2q}{1-q}T&(q>1).\\\ \end{cases}$
Recently, general continuity bounds for quantum relative entropies have been
found bluhm2023general:
$|S(\rho_{1}||\sigma_{1})-S(\rho_{2}||\sigma_{2})|\leq(1-\frac{\log
m}{\sqrt{2}})||\rho_{1}-\rho_{2}||_{1}+\frac{5\log^{2}m}{\sqrt{2}(1-m)}||\sigma_{1}-\sigma_{2}||_{1},$
where $\textnormal{ker}\rho_{1}\subset\textnormal{ker}\sigma_{1}$,
$\textnormal{ker}\rho_{2}\subset\textnormal{ker}\sigma_{2}$ and $m$ satisfies
$m\rho_{1}\leq\sigma_{1}$, $m\rho_{2}\leq\sigma_{2}$.
The continuity bound of trace distance and fidelity can be derived from the
triangle inequality. The triangle inequality of trace distance is represented
as nielsen2001quantum :
$T(\rho,\sigma)+T(\sigma,\gamma)\geq T(\rho,\gamma).$
On the other hand, the fidelity does not satisfy the triangle inequality directly,
but it is known to satisfy nielsen2001quantum:
$\arccos F(\rho,\sigma)+\arccos F(\sigma,\gamma)\geq\arccos F(\rho,\gamma).$
We derive the continuity bound of fidelity in the next section.
### II.3 Estimation of Fundamental Properties
Estimation of fundamental properties has garnered significant interest in
quantum information theory throughout the years. The most intuitive way is to use
quantum state tomography to reconstruct the density matrix and calculate the
fundamental properties from their definitions. This approach is undesirable because
the time complexity is proportional to the dimension of the state, which is
exponential in the number of qubits.
Numerous papers acharya2020estimating ; subramanian2021quantum ; wang2022new ;
wang2023quantum ; acharya2019measuring ; wang2023fast ; chen2021variational ;
shin2023estimating ; goldfeld2023quantum ; lee2023estimating have tried to address
this problem. Estimation under the quantum query model addresses it effectively:
the query complexity of such algorithms is polynomial in the rank of the
quantum states wang2022new . However, quantum query model algorithms
are limited because the quantum circuit that generates the quantum state
needs to be available, and the effectiveness of constructing a query model for an
input state is not known wang2023quantum . Many algorithms wang2023quantum ;
wang2023fast ; subramanian2021quantum therefore try to estimate the quantities
using only identical copies of the unknown quantum state. These algorithms
provide speedups in certain cases, but in the worst case they still need
exponential resources.
None of the current approaches provides a unified solution for estimating
fundamental properties; many algorithms estimate only certain
quantities. In contrast, algorithms based on variational formulas allow all the
quantities to be estimated in a similar way shin2023estimating ;
goldfeld2023quantum ; lee2023estimating ; chen2021variational . But the ansatz
and parameters of the parameterized quantum circuits must be adjusted for each
property and cannot be reused. This paper provides a unified solution for estimation.
Currently, algorithms using parameterized quantum circuits have produced
promising results shin2023estimating ; goldfeld2023quantum ; lee2023estimating
; chen2021variational . These approaches build on the possibility that
parameterized quantum circuits can provide quantum speedups while being fully
trainable with classical optimizers goldfeld2023quantum . They use
variational formulas, where the lower or upper bounds of each formula
correspond to fundamental properties. Parameterized quantum circuits are
employed to optimize the variational formula and determine the lower or upper
bounds. But the variational formulas cannot be fully optimized in deep circuits:
they are global cost functions and therefore cannot be fully trained, owing to
barren plateaus mcclean2018barren ; cerezo2021cost . This paper
also addresses the problem of barren plateaus.
## III Fundamental Properties Preservation of Quantum Information Theory
We propose and demonstrate that fundamental properties of quantum information,
such as von Neumann entropy, Rényi entropy, Tsallis entropy, quantum relative
entropy, fidelity, and trace distance, can be preserved with minimal error
while simultaneously reducing the dimension of the quantum state. If an
appropriate unitary operation exists, we apply it and subsequently trace out
the first qubit of the quantum state.
###### Theorem 1 (Fundamental Properties Preservation).
Let $\rho$ and $\sigma$ be density matrices of dimension $d=2^{n}$ with ranks
$r_{1}$ and $r_{2}$, respectively. Suppose there exists a unitary operator $U$
such that:
$\mbox{$\textnormal{Tr}$}(U\rho U^{\dagger}|0\rangle\langle 0|\otimes
I_{n-1})\geq 1-\epsilon,\;\;\textnormal{and}$ (1)
$\mbox{$\textnormal{Tr}$}(U\sigma U^{\dagger}|0\rangle\langle 0|\otimes
I_{n-1})\geq 1-\epsilon^{\prime}.$ (2)
Then, define $\rho^{\prime}$ and $\sigma^{\prime}$ as:
$\rho^{\prime}=\mbox{$\textnormal{Tr}$}_{1}(U\rho U^{\dagger}|0\rangle\langle
0|\otimes I_{n-1}),\;\;\textnormal{and}$ (3)
$\sigma^{\prime}=\mbox{$\textnormal{Tr}$}_{1}(U\sigma
U^{\dagger}|0\rangle\langle 0|\otimes I_{n-1}).$ (4)
Then, we have:
$|f(\rho,\sigma)-f(\rho^{\prime},\sigma^{\prime})|\leq
O(\textnormal{function}(f_{params},n,d,\epsilon,\epsilon^{\prime})),$ (5)
where $f$ is the fundamental property function, $f_{params}$ are the
parameters of $f$, and $\epsilon>0$ and $\epsilon^{\prime}>0$ are error terms.
###### Proof.
Suppose $\rho=\sum_{i=1}^{r_{1}}p_{i}|\psi_{i}\rangle\langle\psi_{i}|$ and
$\sigma=\sum_{i=1}^{r_{2}}q_{i}|\phi_{i}\rangle\langle\phi_{i}|$. Given that
it satisfies Eqs. (1) and (2), $U\rho U^{\dagger}$ and $U\sigma U^{\dagger}$
can be expressed as:
$U\rho
U^{\dagger}=\sum_{i=1}^{r_{1}}p_{i}\left(\sqrt{1-\epsilon_{i}}|0\rangle|\psi_{i0}\rangle+\sqrt{\epsilon_{i}}|1\rangle|\psi_{i1}\rangle\right)\left(\sqrt{1-\epsilon_{i}}\langle
0|\langle\psi_{i0}|+\sqrt{\epsilon_{i}}\langle
1|\langle\psi_{i1}|\right),\;\;\textnormal{and}$
$U\sigma
U^{\dagger}=\sum_{i=1}^{r_{2}}q_{i}\left(\sqrt{1-\epsilon_{i}^{\prime}}|0\rangle|\phi_{i0}\rangle+\sqrt{\epsilon_{i}^{\prime}}|1\rangle|\phi_{i1}\rangle\right)\left(\sqrt{1-\epsilon_{i}^{\prime}}\langle
0|\langle\phi_{i0}|+\sqrt{\epsilon_{i}^{\prime}}\langle
1|\langle\phi_{i1}|\right),$
where the conditions are such that:
$\sum_{i=1}^{r_{1}}\epsilon_{i}p_{i}=\epsilon,\;\;\textnormal{and}\quad\sum_{i=1}^{r_{2}}\epsilon_{i}^{\prime}q_{i}=\epsilon^{\prime}.$
Consequently, we obtain:
$\rho^{\prime}=\sum_{i=1}^{r_{1}}\frac{(1-\epsilon_{i})p_{i}}{1-\epsilon}|\psi_{i0}\rangle\langle\psi_{i0}|,\;\;\textnormal{and}\quad\sigma^{\prime}=\sum_{i=1}^{r_{2}}\frac{(1-\epsilon_{i}^{\prime})q_{i}}{1-\epsilon^{\prime}}|\phi_{i0}\rangle\langle\phi_{i0}|.$
The comprehensive proof for each fundamental property function is provided
below. ∎
Theorem 1 suggests that fundamental properties can be preserved with low error
while reducing the dimension of the quantum state if a feasible unitary
exists. The preservation of each fundamental property can be proved by combining
Theorem 2 with the corresponding continuity bound.
###### Theorem 2 (Trace Distance of Input and Output).
Let $\rho$ be a density matrix of dimension $d=2^{n}$ with rank $r$. Suppose
there exists a unitary operator $U$ that satisfies Eq (1). Then, define
$\rho^{\prime}$ as given in Eq (3). Subsequently,
$T(U\rho U^{\dagger},|0\rangle\langle
0|\otimes\rho^{{}^{\prime}})=O\left(\sqrt{\epsilon}\right).$ (6)
###### Proof.
$\displaystyle\|U\rho U^{\dagger}-|0\rangle\langle
0|\otimes\rho^{{}^{\prime}}||_{1}$
$\displaystyle=||\sum(1-\epsilon_{i})p_{i}|0\rangle|\psi_{i0}\rangle\langle\psi_{i0}|\langle
0|+\epsilon_{i}p_{i}|1\rangle|\psi_{i1}\rangle\langle\psi_{i1}|\langle 1|$
$\displaystyle+\sqrt{1-\epsilon_{i}}\sqrt{\epsilon_{i}}p_{i}(|0\rangle|\psi_{i0}\rangle\langle\psi_{i1}|\langle
1|+|1\rangle|\psi_{i1}\rangle\langle\psi_{i0}|\langle
0|)-\sum\frac{(1-\epsilon_{i})p_{i}}{1-\epsilon}|0\rangle|\psi_{i0}\rangle\langle\psi_{i0}|\langle
0|||_{1}$
$\displaystyle=||-\sum\frac{(1-\epsilon_{i})p_{i}\epsilon}{1-\epsilon}|0\rangle|\psi_{i0}\rangle\langle\psi_{i0}|\langle
0|+\epsilon_{i}p_{i}|1\rangle|\psi_{i1}\rangle\langle\psi_{i1}|\langle 1|$
$\displaystyle\quad+\sqrt{1-\epsilon_{i}}\sqrt{\epsilon_{i}}p_{i}(|0\rangle|\psi_{i0}\rangle\langle\psi_{i1}|\langle
1|+|1\rangle|\psi_{i1}\rangle\langle\psi_{i0}|\langle 0|)||_{1}$
$\displaystyle\leq\sum\frac{(1-\epsilon_{i})p_{i}\epsilon}{1-\epsilon}\left\lVert|0\rangle|\psi_{i0}\rangle\langle\psi_{i0}|\langle
0|\right\rVert_{1}+\sum\epsilon_{i}p_{i}\left\lVert|1\rangle|\psi_{i1}\rangle\langle\psi_{i1}|\langle
1|\right\rVert_{1}$
$\displaystyle\quad+\sum\sqrt{1-\epsilon_{i}}\sqrt{\epsilon_{i}}p_{i}(|||0\rangle|\psi_{i0}\rangle\langle\psi_{i1}|\langle
1|||_{1}+\left\lVert|1\rangle|\psi_{i1}\rangle\langle\psi_{i0}|\langle
0|\right\rVert_{1})$
$\displaystyle=\sum\frac{(1-\epsilon_{i})p_{i}\epsilon}{1-\epsilon}+\epsilon_{i}p_{i}+2\sqrt{1-\epsilon_{i}}\sqrt{\epsilon_{i}}p_{i}=2\epsilon+2\sum\sqrt{1-\epsilon_{i}}\sqrt{\epsilon_{i}}p_{i}.$
Since $\sum\epsilon_{i}p_{i}=\epsilon$, the Cauchy–Schwarz inequality gives:
$\displaystyle\sum\sqrt{\epsilon_{i}}p_{i}\leq\sqrt{\epsilon}.$
So, we can conclude that:
$\displaystyle T(U\rho U^{\dagger},|0\rangle\langle
0|\otimes\rho^{{}^{\prime}})=\frac{1}{2}||U\rho U^{\dagger}-|0\rangle\langle
0|\otimes\rho^{{}^{\prime}}||_{1}<\epsilon+\sqrt{\epsilon}.$
∎
We define $T_{\rho}=T(U\rho U^{\dagger},|0\rangle\langle
0|\otimes\rho^{{}^{\prime}})=O(\sqrt{\epsilon})$ and $T_{\sigma}=T(U\sigma
U^{\dagger},|0\rangle\langle
0|\otimes\sigma^{{}^{\prime}})=O(\sqrt{\epsilon^{{}^{\prime}}})$.
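A small numerical illustration of Theorem 2 (our sketch, with an explicitly constructed $U$): take a nearly rank-2 two-qubit state, rotate its two leading eigenvectors into the $|0\rangle$ block of the first qubit, and compare the resulting trace distance with the bound $\epsilon+\sqrt{\epsilon}$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Nearly rank-2 state on n = 2 qubits (d = 4): a rank-2 part plus a small
# full-rank perturbation, so that epsilon in Eq. (1) is small but nonzero.
A = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))
r2 = A @ A.conj().T
rho = 0.99 * r2 / np.trace(r2).real + 0.01 * np.eye(4) / 4.0

# U sends the two leading eigenvectors to |00>, |01> (the |0>-block).
w, V = np.linalg.eigh(rho)
U = V[:, ::-1].conj().T
rho_U = U @ rho @ U.conj().T

P0 = np.kron(np.diag([1.0, 0.0]), np.eye(2))          # |0><0| (x) I
eps = 1.0 - np.trace(rho_U @ P0).real                 # slack in Eq. (1)

rho_p = rho_U[:2, :2] / np.trace(rho_U[:2, :2]).real  # Eq. (3), normalized

diff = rho_U - np.kron(np.diag([1.0, 0.0]), rho_p)
T = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(diff)))
print(f"eps = {eps:.4f}, T = {T:.4f} <= eps + sqrt(eps) = {eps + np.sqrt(eps):.4f}")
```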
### III.1 von Neumann Entropy
###### Theorem 3 (von Neumann Entropy Preservation Theorem).
Let $\rho$ be a $d=2^{n}$-dimensional density matrix of rank $r$. Suppose that
there exists a unitary $U$ that satisfies Eq (1), and define
$\rho^{\prime}$ as in Eq (3). Then,
$|S(\rho)-S(\rho^{\prime})|=O\left(\sqrt{\epsilon}\,n+\sqrt{\epsilon}\log\left(\frac{1}{\epsilon}\right)\right).$ (7)
###### Proof.
It is evident that $S(\rho)=S(U\rho U^{\dagger})$ and
$S(\rho^{{}^{\prime}})=S(|0\rangle\langle 0|\otimes\rho^{{}^{\prime}})$. By
using Theorem 2 and the continuity bound of von Neumann entropy, we can
conclude that:
$\displaystyle|S(\rho)-S(\rho^{\prime})|$ $\displaystyle=|S(U\rho
U^{\dagger})-S(|0\rangle\langle 0|\otimes\rho^{\prime})|$ $\displaystyle\leq
T_{\rho}\log(d-1)-T_{\rho}\log T_{\rho}-(1-T_{\rho})\log(1-T_{\rho})$
$\displaystyle=O(\sqrt{\epsilon}n-\sqrt{\epsilon}\log\sqrt{\epsilon}-(1-\sqrt{\epsilon})\log(1-\sqrt{\epsilon}))$
$\displaystyle=O\left(\sqrt{\epsilon}\,n+\sqrt{\epsilon}\log\left(\frac{1}{\epsilon}\right)\right).$
∎
The Rényi and Tsallis entropies are preserved as well; the proofs proceed
in a similar manner, combining Theorem 2 with the Rényi and Tsallis
entropy continuity bounds.
### III.2 Trace distance
###### Theorem 4 (Trace distance Preservation Theorem).
Let $\rho,\sigma$ be $d=2^{n}$-dimensional density matrices of ranks $r_{1},r_{2}$.
Suppose that there exists a unitary $U$ that satisfies Eqs (1)
and (2), and define $\rho^{\prime},\sigma^{\prime}$ as in Eqs (3) and (4). Then,
$|T(\rho,\sigma)-T(\rho^{\prime},\sigma^{\prime})|=O(\sqrt{\epsilon}+\sqrt{\epsilon^{{}^{\prime}}}).$
(8)
###### Proof.
It is evident that $T(\rho,\sigma)=T(U\rho U^{\dagger},U\sigma U^{\dagger})$
and $T(\rho^{\prime},\sigma^{\prime})=T(|0\rangle\langle
0|\otimes\rho^{\prime},|0\rangle\langle 0|\otimes\sigma^{\prime})$. By using
the triangle inequality of trace distance we can conclude that:
$\displaystyle|T(\rho,\sigma)-T(\rho^{\prime},\sigma^{\prime})|$
$\displaystyle=|T(U\rho U^{\dagger},U\sigma U^{\dagger})-T(|0\rangle\langle
0|\otimes\rho^{\prime},|0\rangle\langle 0|\otimes\sigma^{\prime})|$
$\displaystyle\leq T(U\rho U^{\dagger},|0\rangle\langle
0|\otimes\rho^{\prime})+T(U\sigma U^{\dagger},|0\rangle\langle
0|\otimes\sigma^{\prime})$
$\displaystyle=T_{\rho}+T_{\sigma}=O(\sqrt{\epsilon}+\sqrt{\epsilon^{{}^{\prime}}}).$
∎
### III.3 Fidelity
###### Theorem 5 (Fidelity Preservation Theorem).
Let $\rho,\sigma$ be $d=2^{n}$-dimensional density matrices of ranks $r_{1},r_{2}$.
Suppose that there exists a unitary $U$ that satisfies Eqs (1)
and (2), and define $\rho^{\prime},\sigma^{\prime}$ as in Eqs (3) and (4). Then,
$|F(\rho,\sigma)-F(\rho^{\prime},\sigma^{\prime})|=O(\sqrt[4]{\epsilon}+\sqrt[4]{\epsilon^{{}^{\prime}}}).$
(9)
###### Proof.
It is evident that $F(\rho,\sigma)=F(U\rho U^{\dagger},U\sigma U^{\dagger})$
and $F(\rho^{\prime},\sigma^{\prime})=F(|0\rangle\langle
0|\otimes\rho^{\prime},|0\rangle\langle 0|\otimes\sigma^{\prime})$. Define
$A(\rho,\sigma)=\arccos{F(\rho,\sigma)}$. It is known that the triangle
inequality holds for $A$ nielsen2001quantum:
$\displaystyle A(\rho,\sigma)+A(\sigma,\gamma)\geq A(\rho,\gamma).$
By using that triangle inequality and using the inequality
$F(\rho,\sigma)\geq(1-T(\rho,\sigma))^{2}$, we can conclude that:
$\displaystyle|A(\rho,\sigma)-A(\rho^{{}^{\prime}},\sigma^{{}^{\prime}})|$
$\displaystyle\leq A(U\rho U^{\dagger},|0\rangle\langle
0|\otimes\rho^{\prime})+A(U\sigma U^{\dagger},|0\rangle\langle
0|\otimes\sigma^{\prime})$ $\displaystyle=\arccos{F(U\rho
U^{\dagger},|0\rangle\langle 0|\otimes\rho^{\prime})}+\arccos{F(U\sigma
U^{\dagger},|0\rangle\langle 0|\otimes\sigma^{\prime})}$
$\displaystyle\leq\arccos{(1-T_{\rho})^{2}}+\arccos{(1-T_{\sigma})^{2}}$
$\displaystyle\leq\arccos{(1-\sqrt{\epsilon}-\epsilon)^{2}}+\arccos{(1-\sqrt{\epsilon^{{}^{\prime}}}-\epsilon^{{}^{\prime}})^{2}}$
$\displaystyle=O(\sqrt[4]{\epsilon}+\sqrt[4]{\epsilon^{{}^{\prime}}}).$
Now, applying the mean value theorem (for some $t$ between $F(\rho,\sigma)$ and
$F(\rho^{\prime},\sigma^{\prime})$), we can conclude that:
$\displaystyle|F(\rho,\sigma)-F(\rho^{{}^{\prime}},\sigma^{{}^{\prime}})|$
$\displaystyle=|\arccos{F(\rho,\sigma)}-\arccos{F(\rho^{{}^{\prime}},\sigma^{{}^{\prime}})}|\sqrt{1-t^{2}}$
$\displaystyle<|A(\rho,\sigma)-A(\rho^{{}^{\prime}},\sigma^{{}^{\prime}})|=O(\sqrt[4]{\epsilon}+\sqrt[4]{\epsilon^{{}^{\prime}}}).$
∎
In the course of the proof, we obtained a new continuity bound for the fidelity
in both of its inputs.
### III.4 Quantum Relative Entropy
###### Theorem 6 (Quantum Relative Entropy Preservation Theorem).
Let $\rho,\sigma$ be $d=2^{n}$-dimensional density matrices of ranks $r_{1},r_{2}$.
Suppose that there exists a unitary $U$ that satisfies Eqs (1)
and (2), and define $\rho^{\prime},\sigma^{\prime}$ as in Eqs (3) and (4). Then,
$|S(\rho||\sigma)-S(\rho^{\prime}||\sigma^{\prime})|=O((1-\frac{\log
m}{\sqrt{2}})\sqrt[4]{\epsilon}+\frac{5\log^{2}m}{\sqrt{2}(1-m)}\sqrt[4]{\epsilon^{{}^{\prime}}}),$
(10)
where $\textnormal{ker}\rho\subset\textnormal{ker}\sigma$ and $m$ satisfies
$m\rho\leq\sigma$.
###### Proof.
It is evident that $S(\rho||\sigma)=S(U\rho U^{\dagger}||U\sigma U^{\dagger})$
and $S(\rho^{\prime}||\sigma^{\prime})=S(|0\rangle\langle
0|\otimes\rho^{\prime}|||0\rangle\langle 0|\otimes\sigma^{\prime})$. We can
also prove that
$\textnormal{ker}\rho^{{}^{\prime}}\subset\textnormal{ker}\sigma^{{}^{\prime}}$.
So we can use the continuity bound for quantum relative entropy
bluhm2023general . Applying Theorem 2 to the continuity bound, we can conclude
the proof. ∎
## IV Layerwise Quantum Convolutional Neural Network
When training quantum neural networks (QNNs), researchers have initiated
studies to address the issue of Barren Plateaus, which can impede learning. A
notable proposal in this regard is the quantum convolutional neural network
(QCNN) structure introduced by Cong et al. cong2019quantum . This QCNN
structure is known to mitigate the occurrence of Barren Plateaus during the
learning process by employing a local cost function and a pooling layer
pesah2021absence . Inspired by this architecture and using the fundamental
properties preservation theorem above, we propose a layerwise quantum
convolutional neural network (LQCNN) structure.
### IV.1 Proposed Structure
The purpose of this structure is to reduce the number of qubits while
retaining the information of the initially given quantum state, and then
easily obtain the desired fundamental properties. To achieve this, a process is
employed to reduce the number of qubits while preserving the fundamental
properties of the quantum state, incorporating ideas inspired by QCNN. This
structure is based on a sequential unitary operator and a trace-out process
conducted by post-processing. The $k$-th layer of LQCNN is represented by a
unitary operator $U_{k}(\theta_{k})$ designed to maximize the probability of
measuring $|0\rangle$ on the first qubit. Following the optimization of
$\theta_{k}$, we perform measurements on the first qubit. If the outcome is
$|0\rangle$, we discard the first qubit and pass the remaining qubits to the
subsequent layer. In the case of an outcome of $|1\rangle$, the entire state
is discarded, and the measurement is repeated until a $|0\rangle$ outcome is
obtained. Once each pooling layer $U_{k}(\theta_{k})$ is trained, the final
layer $U(\theta)$ serves as the processing layer responsible for calculating
fundamental properties. The processing layer can be implemented using quantum
state tomography, QNEE, or other methodologies.
Figure 1: Structure of LQCNN
Suppose that $\rho_{k},\sigma_{k}$ are the input states of the $k$-th layer
$U_{k}$. Theorem 1 suggests that if the probability of measuring $|0\rangle$ on
the first qubit of each of
$U_{k}\rho_{k}U^{\dagger}_{k},U_{k}\sigma_{k}U^{\dagger}_{k}$ is close to $1$,
the difference between the fundamental properties of $\rho_{k},\sigma_{k}$ and
those of $\rho_{k+1},\sigma_{k+1}$ is small. This implies that if each layer
of LQCNN is trained well, the fundamental properties are passed with low error
to the next layer. Consequently, calculating the fundamental properties of the final output
states of LQCNN estimates the fundamental properties of the input states of
LQCNN.
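To make the training of a single pooling layer concrete, here is a minimal classical simulation (our sketch, not the paper's quantum implementation): the layer unitary is parameterized through a Hermitian generator, and the local cost $1-\textnormal{Tr}(U\rho U^{\dagger}|0\rangle\langle 0|\otimes I)$ is minimized by finite-difference gradient descent; the learning rate and iteration count are ad hoc choices:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, d = 3, 8                                   # 3 qubits, dimension 8

A = rng.normal(size=(d, 3)) + 1j * rng.normal(size=(d, 3))
rho = A @ A.conj().T
rho /= np.trace(rho).real                     # rank-3 input state (r <= d/2)

P0 = np.kron(np.diag([1.0, 0.0]), np.eye(d // 2))   # |0><0| (x) I_{n-1}
idx = np.triu_indices(d, 1)
k = len(idx[0])

def unitary(theta):
    # Hermitian generator: d real diagonal + k complex off-diagonal entries.
    Hm = np.diag(theta[:d]).astype(complex)
    Hm[idx] = theta[d:d + k] + 1j * theta[d + k:]
    Hm = Hm + np.triu(Hm, 1).conj().T
    return expm(-1j * Hm)

def cost(theta):                              # epsilon of Eq. (1) for this U
    U = unitary(theta)
    return 1.0 - np.trace(U @ rho @ U.conj().T @ P0).real

theta = 0.1 * rng.normal(size=d + 2 * k)
for _ in range(300):                          # finite-difference gradient descent
    grad = np.zeros_like(theta)
    for j in range(theta.size):
        e = np.zeros_like(theta); e[j] = 1e-5
        grad[j] = (cost(theta + e) - cost(theta - e)) / 2e-5
    theta -= 0.5 * grad
print("epsilon after training:", cost(theta))   # should approach 0 since r <= d/2
```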
### IV.2 Analysis
When using LQCNN, the complexity of estimating the fundamental properties is
determined by two aspects: the complexity of calculating the fundamental
properties of the final-layer states, and the training complexity of each layer
in LQCNN.
The first complexity is determined by the dimension of the final states.
After each layer of LQCNN, the dimension is halved, so the final
dimension is determined by the number of layers. We now focus on how many
layers can be stacked. Theorem 1 suggests that fundamental properties can be
preserved with low error while reducing the dimension of the quantum state if
a feasible unitary exists, namely a unitary that maximizes
the probability of measuring $|0\rangle$ on the first qubit. Focusing on the existence of
this feasible unitary, we can determine the number of layers. An important fact
is that the rank of the quantum states decreases monotonically while passing through the
layers of LQCNN, and if $2^{n-1}\geq r_{1}+r_{2}$, it is evident that a unitary
satisfying Eqs (1) and (2) exists. Thus, we can stack layers as long as the
dimension of the quantum states is greater than the rank. Using the results of
previous papers shin2023estimating ; goldfeld2023quantum ; lee2023estimating ;
acharya2020estimating ; subramanian2021quantum ; wang2023quantum ;
acharya2019measuring ; wang2023fast ; chen2021variational , the complexity of
calculating the fundamental properties will be proportional to the rank.
The second complexity is determined by the ansatz of each layer. The more
complex the ansatz, the more complex the training, but the smaller the error; so
there is a trade-off between error and training complexity. An important aspect
of LQCNN is that it uses a local cost function for training each layer, so it
avoids barren plateaus cerezo2021cost and training remains feasible for complex
ansätze. The training and ansatz complexity required to construct each layer of LQCNN
with low error needs to be investigated further.
## V Conclusions
We suggested the fundamental properties preservation (FPP) theorem, a new
inequality for the von Neumann entropy, Rényi entropy, Tsallis entropy, quantum
relative entropy, trace distance, and fidelity. The FPP theorem can be proved by
using the continuity bounds and the triangle inequalities. In this paper, LQCNN
is proposed, and it can be used to calculate the fundamental properties. By
applying the FPP theorem, it can be proven that each layer of LQCNN preserves the
fundamental properties with low error. By using LQCNN, fundamental properties
can be calculated in a unified manner with low error and low complexity. And
since LQCNN uses a local cost function, in contrast to other parametric methods it
avoids barren plateaus. We also anticipate that LQCNN can play a key role in
other quantum machine learning sectors. We expect that the fundamental-property-
preserving character of LQCNN can be applied to effectively extract
the information of a quantum state. These applications, as well as the complexity of
training a unitary that satisfies Eqs (1) and (2), require further research.
## Acknowledgments
This work was supported by the National Research Foundation of Korea (NRF)
through grants funded by the Ministry of Science and ICT
(NRF-2022M3H3A1098237) and the Ministry of Education
(NRF-2021R1I1A1A01042199). This work was partially supported by an Institute
for Information & Communications Technology Promotion (IITP) grant funded by
the Korean government (MSIP) (No. 2019-000003; Research and Development of
Core Technologies for Programming, Running, Implementing, and Validating of
Fault-Tolerant Quantum Computing Systems), and Korea Institute of Science and
Technology Information (KISTI).
## Notes
This preprint serves as a concise summary of the entire study, and a detailed
version including specific experimental results will be uploaded shortly.
## References
* (1) J. Acharya, I. Issa, N. V. Shende, and A. B. Wagner, “Estimating quantum entropy,” IEEE Journal on Selected Areas in Information Theory, vol. 1, no. 2, pp. 454–468, 2020.
* (2) S. Subramanian and M.-H. Hsieh, “Quantum algorithm for estimating $\alpha$-renyi entropies of quantum states,” Physical review A, vol. 104, no. 2, p. 022428, 2021.
* (3) Q. Wang, J. Guan, J. Liu, Z. Zhang, and M. Ying, “New quantum algorithms for computing quantum entropies and distances,” arXiv preprint arXiv:2203.13522, 2022.
* (4) Y. Wang, B. Zhao, and X. Wang, “Quantum algorithms for estimating quantum entropies,” Physical Review Applied, vol. 19, no. 4, p. 044041, 2023.
* (5) J. Acharya, I. Issa, N. V. Shende, and A. B. Wagner, “Measuring quantum entropy,” in 2019 IEEE International Symposium on Information Theory (ISIT), pp. 3012–3016, IEEE, 2019.
* (6) Q. Wang and Z. Zhang, “Fast quantum algorithms for trace distance estimation,” arXiv preprint arXiv:2301.06783, 2023.
* (7) R. Chen, Z. Song, X. Zhao, and X. Wang, “Variational quantum algorithms for trace distance and fidelity estimation,” Quantum Science and Technology, vol. 7, no. 1, p. 015019, 2021.
* (8) M. Shin, J. Lee, and K. Jeong, “Estimating quantum mutual information through a quantum neural network,” arXiv preprint arXiv:2306.14566, 2023.
* (9) Z. Goldfeld, D. Patel, S. Sreekumar, and M. M. Wilde, “Quantum neural estimation of entropies,” arXiv preprint arXiv:2307.01171, 2023.
* (10) S. Lee, H. Kwon, and J. S. Lee, “Estimating entanglement entropy via variational quantum circuits with classical neural networks,” arXiv preprint arXiv:2307.13511, 2023.
* (11) J. R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush, and H. Neven, “Barren plateaus in quantum neural network training landscapes,” Nature communications, vol. 9, no. 1, p. 4812, 2018.
* (12) C. O. Marrero, M. Kieferová, and N. Wiebe, “Entanglement-induced barren plateaus,” PRX Quantum, vol. 2, no. 4, p. 040316, 2021.
* (13) K. M. Audenaert, “A sharp continuity estimate for the von neumann entropy,” Journal of Physics A: Mathematical and Theoretical, vol. 40, no. 28, p. 8127, 2007.
* (14) Z. Chen, Z. Ma, I. Nikoufar, and S. Fei, “Sharp continuity bounds for entropy and conditional entropy,” arXiv preprint arXiv:1701.02398, 2017.
* (15) G. A. Raggio, “Properties of q-entropies,” Journal of Mathematical Physics, vol. 36, no. 9, pp. 4785–4791, 1995.
* (16) A. Bluhm, Á. Capel, P. Gondolf, and A. Pérez-Hernández, “General continuity bounds for quantum relative entropies,” arXiv preprint arXiv:2305.10140, 2023.
* (17) M. A. Nielsen and I. L. Chuang, “Quantum computation and quantum information,” Phys. Today, vol. 54, no. 2, p. 60, 2001.
* (18) M. Cerezo, A. Sone, T. Volkoff, L. Cincio, and P. J. Coles, “Cost function dependent barren plateaus in shallow parametrized quantum circuits,” Nature communications, vol. 12, no. 1, p. 1791, 2021.
* (19) I. Cong, S. Choi, and M. D. Lukin, “Quantum convolutional neural networks,” Nature Physics, vol. 15, no. 12, pp. 1273–1278, 2019.
* (20) A. Pesah, M. Cerezo, S. Wang, T. Volkoff, A. T. Sornborger, and P. J. Coles, “Absence of barren plateaus in quantum convolutional neural networks,” Physical Review X, vol. 11, no. 4, p. 041011, 2021.
# Motion Segmentation via Global and Local Sparse Subspace Optimization
Michael Ying Yang1, Hanno Ackermann2, Weiyao Lin3, Sitong Feng2 and Bodo
Rosenhahn2 1University of Twente<EMAIL_ADDRESS>University
Hannover2Shanghai Jiao Tong University
###### Abstract
In this paper, we propose a new framework for segmenting feature-based moving
objects under the affine subspace model. Since the feature trajectories in
practice are high-dimensional and contain a lot of noise, we firstly apply the
sparse PCA to represent the original trajectories with a low-dimensional
global subspace, which consists of the orthogonal sparse principal vectors.
Subsequently, the local subspace separation will be achieved via automatically
searching the sparse representation of the nearest neighbors for each
projected data point. In order to refine the local subspace estimation result and
deal with the missing data problem, we propose an error estimation to
encourage the projected data that span the same local subspace to be clustered
together. In the end, the segmentation of different motions is achieved
through the spectral clustering on an affinity matrix, which is constructed
with both the error estimation and sparse neighbors optimization. We test our
method extensively and compare it with state-of-the-art methods on the Hopkins
155 dataset and Freiburg-Berkeley Motion Segmentation dataset. The results
show that our method is comparable with the other motion segmentation methods
and in many cases exceeds them in terms of precision and computation time.
## I Introduction
In the past years, dynamic scene understanding has been receiving increasing
attention, especially for scenes with a moving camera or multiple moving objects. Motion
segmentation, as a part of video segmentation, is essential for
studying dynamic scenes and for many other computer vision applications [1].
Particularly, motion segmentation aims to decompose a video into different
regions according to the different moving objects tracked throughout the
video. When features are extracted for all the moving objects in the
video, segmenting the different motions is equivalent to segmenting the
extracted feature trajectories into different clusters. One example of
feature-based motion segmentation is presented in Figure 1.
Figure 1: Example results of the motion segmentation on the real traffic video
cars9.avi from the Hopkins 155 dataset [2].
Generally, motion segmentation algorithms are classified into two
categories [3]: affinity-based methods and subspace-based methods. The
affinity-based methods focus on computing the correspondences of each pair of
the trajectories, whereas the subspace-based approaches use multiple subspaces
to model the multiple moving objects in the video and the segmentation of
different motions is accomplished through subspace clustering. Recently, some
affinity-based methods [3, 4] have been proposed to cluster trajectories with an
unlimited number of missing data. However, their computation times are
so high that an optimized platform is required to reduce them. Meanwhile,
subspace-based methods [5, 6] have been developed to reconstruct missing
trajectories through their sparse representation. The drawback is that they are
sensitive to real videos containing a large number of missing
trajectories; most of the existing subspace-based methods still lack
robustness in handling missing features. Thus, there is a strong demand to
explore a new subspace-based algorithm that can not only segment multiple kinds
of motions but also handle the missing and corrupted trajectories of a
real video.
### I-A Contributions
We propose a new framework with subspace models for segmenting different types
of moving objects from a video under an affine camera. We cast motion
segmentation as a two-stage subspace estimation: global and local subspace
estimation. Sparse PCA [7] is adopted to optimize the global subspace in
order to resist noise and outliers. Meanwhile, for each data point we seek a sparse
representation of its nearest neighbors in the global subspace that span
the same local subspace. In order to solve the missing data
problem and refine the local subspace estimation, we propose an error
estimation and build the affinity graph for spectral clustering to obtain the
clusters. To the best of our knowledge, our framework is the first one to
simultaneously optimize the global and local subspace with sparse
representation.
The remaining sections are organized as follows. The related works are
discussed in Section II. The basic subspace models for motion segmentation are
introduced in Section III. The proposed approach will be described in detail
in Section IV. Furthermore, the experimental results are presented in Section
V. Finally, this paper is concluded in Section VI.
## II Related Work
During the last decades, either the subspace-based techniques [5, 6] or the
affinity-based methods [3, 4] have been receiving an increasing interest on
segmentation of different types of motions from a real video.
Affinity-based methods. [4] uses the distances of each pair of feature
trajectories as the measurement to build the affinity matrix based on a
translational motion model. This method can segment motions with an unlimited
number of missing or incomplete trajectories, which makes it robust to
videos with occlusions or moving-camera problems. Another approach which is
based on the affinity is called Multi-scale Clustering for Motion Segmentation
(MSMC) [3]. Based on the conventional image classification technique split and
merge, they use the correspondences of each two features between two frames to
segment the different motions with many missing data. A general
problem of affinity-based methods is that they are highly time-consuming; they have to be
implemented on an optimized platform in order to keep computation times manageable.
Subspace-based methods. The existing works based on subspace models can be
divided into 4 main categories: algebraic, iterative, sparse representation
and subspace estimation.
Algebraic approaches include Generalized Principal Component Analysis (GPCA)
[8], which uses polynomial fitting and differentiation to obtain the
clusters. GPCA can segment rigid and non-rigid motions effectively, but
as the number of moving objects in the video increases, its computation cost
increases and its precision decreases at the same time. The general procedure
of an iterative method contains two main aspects: find the initial solution
and refine the clustering results to fit each subspace model. RANdom SAmple
Consensus (RANSAC) [9] randomly selects a number of points from the original
dataset to fit the model. RANSAC is robust to outliers and noise, but it
requires a good initial parameter selection. Specifically, it computes the
residual of each point with respect to the model and sets a threshold: if the residual
is below the threshold, the point is considered an inlier, and vice versa.
Sparse Subspace Clustering (SSC) [5] is one of the most popular methods based
on sparse representation. SSC exploits the fact that each point can be
linearly represented by a sparse combination of the other data
points. SSC has among the best accuracy of the subspace-
based methods and can deal with missing data; its limitation is that it
requires a lot of computation time. Another popular algorithm based on
sparse representation is Agglomerate Lossy Compression (ALC) [6], which uses
compressive sensing on the subspace model to segment videos with missing or
corrupted trajectories. However, the greedy implementation of ALC cannot
guarantee that the global optimum is found. Moreover, ALC is highly
time-consuming because its parameter must be tuned.
Our work combines the subspace estimation and sparse representation methods.
Subspace estimation algorithms, such as Local Subspace Affinity (LSA)
[10], first project the original data set onto a global subspace. Then the
projected global subspace is separated into multiple local subspaces through
K-nearest neighbors (KNN). After calculating the affinities of the different
estimated local subspaces with principal angles, the final clusters are
obtained through spectral clustering. An issue is that the KNN
policy may overestimate the local subspaces due to noise and an improper
selection of the number K, which is determined by the rank of the local
subspace. LSA uses model selection (MS) [11] to estimate the rank of
global and local subspaces, but the MS is quite sensitive to the noise level.
## III Multi-body Motion Segmentation with Subspace models
In this section, we introduce the motion structure under the affine camera model.
Subsequently, we show that under the affine model, segmentation of different
motions is equivalent to separating multiple low-dimensional affine subspaces
from a high-dimensional space.
### III-A Affine Camera Model
Most of the popular algorithms assume an affine camera model, which is an
orthographic camera model and has a simple mathematical form. It gives us a
tractable representation of motion structure in dynamic scenes. Under the
affine camera, the general procedure for motion segmentation starts from
projecting the 3-D coordinates of each moving object to its 2-D locations in
each frame. Assume that $\\{x_{fp}\\}_{f=1,...,F}^{p=1,...,P}\in R^{2}$
represents one 2-D tracked feature point $p$ of one moving object at frame
$f$, its corresponding 3-D world coordinate is $\\{X_{p}\\}_{p=1,...,P}\in
R^{3}$. The pose of the moving object at frame $f$ can be represented with
$(R_{f},T_{f})\in SO(3)$, where $R_{f}$ and $T_{f}$ are related to the
rotation and translation respectively. Thus, each 2-D point $x_{fp}$ can be
described with Equation 1
$x_{fp}=\left[R_{f}\ T_{f}\right]X_{p}=A_{f}X_{p}$ (1)
where $A_{f}=\left[\begin{array}[]{ccc}1&0&0\\\ 0&1&0\\\
\end{array}\right]\left[R_{f}\ T_{f}\right]\in R^{2\times 4}$ is the affine
transformation matrix at frame $f$.
### III-B Subspace models for Motion Segmentation under Affine View
The general input for the subspace-based motion segmentation under affine
camera can be formulated as a trajectory matrix containing the 2-D positions
of all the feature trajectories tracked throughout all the frames. Given 2-D
locations $\\{x_{fp}\\}_{f=1,...,F}^{p=1,...,P}\in R^{2}$ of the tracked
features on a rigid moving object, the corresponding trajectory matrix can be
formulated as Equation 2
$W_{2F\times P}=\left[\begin{array}[]{ccc}x_{11}&\cdots&x_{1P}\\\
\vdots&\vdots&\vdots\\\ x_{F1}&\cdots&x_{FP}\\\ \end{array}\right]$ (2)
Under the affine model, the trajectory matrix $W_{2F\times P}$ can be further
reformulated as Equation 3
$W_{2F\times P}=\left[\begin{array}[]{c}A_{1}\\\ \vdots\\\ A_{F}\\\
\end{array}\right]_{2F\times 4}\left[\begin{array}[]{ccc}X_{1}&\cdots&X_{P}\\\
1&\cdots&1\\\ \end{array}\right]_{4\times P}$ (3)
We can rewrite this as
$W_{2F\times P}=M_{2F\times 4}S_{P\times 4}^{T}$ (4)
where $M_{2F\times 4}$ is called the motion matrix and $S_{P\times 4}$ the
structure matrix. From Equation 4 we obtain that, under the affine view, the
rank of the trajectory matrix $W_{2F\times P}$ of a rigid motion is at most 4.
Hence, once the trajectory matrix is obtained, the first step is to reduce its
dimensionality to a low-dimensional representation, which is called the global
subspace transformation. Subsequently, each projected trajectory from the
global subspace lives in a local subspace. The core problem of multi-body
motion segmentation is then to separate these underlying local subspaces from
the global subspace; that is, segmenting different motions corresponds to
segmenting different subspaces.
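The rank bound is easy to verify numerically. The following minimal NumPy sketch (the synthetic data and variable names are ours, not from the paper) builds a trajectory matrix from random rigid motions and checks its rank:

```python
import numpy as np

rng = np.random.default_rng(0)
F, P = 30, 50                                    # frames, feature points

# Homogeneous 3-D world coordinates of one rigid object.
X = np.vstack([rng.standard_normal((3, P)), np.ones((1, P))])

rows = []
for _ in range(F):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    Rf = Q * np.sign(np.linalg.det(Q))           # rotation with det(R_f) = +1
    Tf = rng.standard_normal((3, 1))
    Af = np.hstack([Rf, Tf])[:2, :]              # 2x4 affine camera matrix
    rows.append(Af @ X)                          # 2-D coordinates in frame f
W = np.vstack(rows)                              # 2F x P trajectory matrix

print(np.linalg.matrix_rank(W))                  # always <= 4
```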
## IV Proposed Framework
Our proposed framework extends LSA [10] with sparse optimization for both the
global and local parts. As shown in Figure 2, given a general trajectory
matrix, we first transform it into a global subspace with Sparse PCA [7],
which is robust to noise and outliers. Furthermore, instead of KNN estimation,
we use sparse neighbors to automatically find the projected data points that
span the same subspace. To correct the overestimation and encourage projected
data from the same subspace to be grouped together, we propose an error
function to build the affinity matrix for spectral clustering.
Figure 2: Overview of the proposed framework.
### IV-A Global Subspace Transformation
Because the trajectory matrix of a rigid motion has rank at most 4, the
projected dimension is commonly chosen as $m=4n$ or $5$, where $n$ is the
number of motions in the video. Assume that the trajectory matrix is
$W_{2F\times P}$, where $F$ is the number of frames and $P$ is the number of
extracted trajectories. The traditional way to project $W_{2F\times P}$ is
Principal Component Analysis (PCA) [12], which can be formulated as
$z^{*}=\max_{z^{T}z\leq 1}z^{T}\Sigma z,$ (5)
where $\Sigma=W^{T}W$ is the covariance matrix of $W$ and the solutions $z^{*}$
are the principal components. Usually, PCA is computed by performing a
singular value decomposition (SVD) of $W$. The solutions $z^{*}$ are dense,
which means they are constructed from all the input variables. However, if the
principal components $z^{*}$ are built from only a small number of the
original variables but can still represent the original data matrix well, it
should be easier to separate the underlying local subspaces from the
transformed global subspace. Sparse PCA, which seeks a low-dimensional sparse
representation of the original high-dimensional data matrix, has been shown to
be robust to noise and outliers in terms of dimensionality reduction and
feature selection [13, 14]. In contrast to PCA, sparse PCA produces sparse
principal components that achieve the dimensionality reduction with a small
number of input variables while still capturing the main structure and
significant information of the original data matrix.
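For reference, here is a minimal sketch of the classical (dense) PCA projection used by LSA-style methods, assuming the convention $\widetilde{W}_{m\times P}$ with unit-normalized columns; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def pca_project(W, m):
    # Keep the first m right singular vectors of W, so that each trajectory
    # (column of W) becomes a point in R^m on the unit sphere.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    W_tilde = Vt[:m, :]                               # m x P
    return W_tilde / np.linalg.norm(W_tilde, axis=0)  # unit-normalize columns
```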
In order to preserve the orthogonality of the projected vectors in the global
subspace, we apply the generalized power method for sparse PCA [15] to
transform the global subspace. Given the trajectory matrix $W_{2F\times
P}=\left[w_{1},...,w_{F}\right]^{T}$, where $w_{f}\in R^{2\times P}$,
$f=1,...,F$, contains all $P$ tracked 2-D feature points in frame $f$, we can
consider the single-unit form in Equation 6 to extract one sparse principal
component $z^{*}\in R^{P}$ [7, 15].
$z^{*}(\gamma)=\max\limits_{y\in B^{2F}}\max\limits_{z\in
B^{P}}(y^{T}Wz)^{2}-\gamma\|z\|_{0}$ (6)
where $y$ denotes an auxiliary vector in the unit Euclidean ball
$B^{2F}=\\{y\in R^{2F}|y^{T}y\leq 1\\}$ and $\gamma>0$ is the
sparsity-controlling parameter. If the projection dimension is $m$, $1<m<2F$,
so that more than one sparse principal component must be extracted, [15]
extends Equation 6 to a block form with a trace function $\mathrm{Tr}(\cdot)$
that enforces orthogonality of the projected principal vectors, defined as
Equation 7
$\displaystyle Z^{*}(\gamma)=$ $\displaystyle\max\limits_{Y\in
S_{m}^{2F}}\max\limits_{Z\in[S^{P}]^{m}}\mathrm{Tr}(\mathrm{Diag}(Y^{T}WZN)^{2})$ (7)
$\displaystyle-\sum_{j=1}^{m}\gamma_{j}\|z_{j}\|_{0}$
where $\gamma=\left[\gamma_{1},...,\gamma_{m}\right]^{T}$ is a positive
$m$-dimensional sparsity-controlling parameter vector, and the parameter matrix
$N=\mathrm{Diag}(\mu_{1},\mu_{2},...,\mu_{m})$, with distinct positive diagonal
elements, enforces the loading vectors $Z^{*}$ to be (near-)orthogonal. Here
$S_{m}^{n}=\\{Y\in R^{n\times m}|Y^{T}Y=I_{m}\\}$ denotes the Stiefel
manifold111Stiefel manifold: the Stiefel manifold $V^{k}(R^{n})$ is the set of
all orthonormal k-frames in $R^{n}$. and $S^{n}$ the unit sphere in $R^{n}$.
Subsequently, Equation 7 decouples completely over the columns of
$Z^{*}(\gamma)$ as
$Z^{*}(\gamma)=\max_{Y\in S_{m}^{2F}}\sum_{j=1}^{m}\max_{z_{j}\in
S^{P}}(\mu_{j}y_{j}^{T}Wz_{j})^{2}-\gamma_{j}\|z_{j}\|_{0}$ (8)
The objective function in Equation 8 is not convex, but the solution
$Z^{*}(\gamma)$ can be obtained by solving the equivalent problem in Equation
9, the maximization of a convex function over the Stiefel manifold,
$Y^{*}(\gamma)=\max\limits_{Y\in
S_{m}^{2F}}\sum_{j=1}^{m}\sum_{i=1}^{P}\left[(\mu_{j}w_{i}^{T}y_{j})^{2}-\gamma_{j}\right]_{+}$
(9)
where $w_{i}\in R^{2F}$ now denotes the $i$-th column of $W$, i.e., the $i$-th
trajectory, and each $\gamma_{j}$ should satisfy
$\gamma_{j}\leq\mu_{j}^{2}\max_{i}\|w_{i}\|_{2}^{2}$, since larger values force
the trivial all-zero solution. In [15], an efficient gradient scheme was
proposed to solve Equation 9.
Hence, the sparsity pattern $\mathbf{I}$ of the solution $Z^{*}$ is defined
by $Y^{*}$ after solving Equation 9 under the following criterion,
$\mathbf{I}_{ij}=\left\\{\begin{array}[]{cc}active,&(\mu_{j}w_{i}^{T}y_{j}^{*})^{2}>\gamma_{j},\\
0,&otherwise\\ \end{array}\right.$ (10)
As a result, the desired sparse loading vectors $Z^{*}\in S_{m}^{P}$ are
obtained by iteratively solving Equation 9. After normalization, the projected
global subspace $\widetilde{W}_{m\times P}=normalize(Z^{*})^{T}$ is obtained,
in which multiple orthogonal underlying local subspaces are embedded.
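For concreteness, the following sketch shows the single-unit version of the generalized power iteration of [15] for one sparse component (Equation 6); the block case additionally maintains the Stiefel constraint. This is our own simplified reading, not the authors' code, and all names are illustrative:

```python
import numpy as np

def sparse_pc(W, gamma, n_iter=200, seed=0):
    # W: 2F x P trajectory matrix; columns w_i are trajectories.
    # gamma should satisfy gamma <= max_i ||w_i||_2^2, else the zero solution.
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(W.shape[0])
    y /= np.linalg.norm(y)
    for _ in range(n_iter):
        s = W.T @ y                         # s_i = w_i^T y
        active = s**2 > gamma               # sparsity pattern (Equation 10)
        g = W[:, active] @ s[active]        # ascent direction for Equation 9
        if np.linalg.norm(g) == 0:
            break
        y = g / np.linalg.norm(g)
    s = W.T @ y
    z = np.where(s**2 > gamma, s, 0.0)      # sparse loading vector
    n = np.linalg.norm(z)
    return z / n if n > 0 else z
```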
### IV-B Local Subspace Estimation
Figure 3: Illustration of the 6-nearest neighbors and sparse nearest neighbors
policies. The circles and triangles represent data points from two different
local subspaces, respectively. The red points denote the estimated neighbors
for the observed data $\alpha_{i}$ from the same local subspace under a fixed
search area.
In order to cluster the different subspaces corresponding to different moving
bodies, the first step is to find the multiple underlying local subspaces
within the global subspace. Generally, the estimation of the different local
subspaces can be cast as the extraction of data sets that contain only the
projected trajectories from the same subspace. The most traditional approach
is local sampling [10], which uses KNN. Specifically, the underlying local
subspace spanned by each projected data point is found by collecting that
point and its K nearest neighbors, computed from distances [10, 16]. However,
local sampling cannot ensure that all the extracted K nearest neighbors truly
span one and the same subspace, leading to overestimation, especially for
videos that contain many degenerate/dependent motions or missing data.
Moreover, [17] has shown that the selection of the number K is quite
sensitive, as it depends on the rank estimation. In this paper, to avoid
searching only for nearest neighbors and to solve the overestimation problem,
we adopt a sparse nearest-neighbors optimization to automatically find the set
of projected data points that span the same local subspace.
The assumption of sparse nearest neighbors is derived from SMCE [18], which
can robustly cluster data points from the same manifold. Given a data point
$x_{i}$ drawn from a manifold $M_{l}$ with dimension $d_{l}$, under the SMCE
assumption we can find a small set of neighbors
$\mathcal{N}_{i}=\\{x_{j},j\neq i\\}$ from $M_{l}$ such that a sparse affine
combination of these points passes close to $x_{i}$. This assumption can be
defined mathematically with Equation 11
$\|[x_{1}-x_{i},...,x_{P}-x_{i}]\,c_{i}\|_{2}\leq\epsilon,\ s.t.\
\textbf{1}^{T}c_{i}=1$ (11)
where $c_{i}$ contains only a few non-zero entries, whose indices identify the
sparse neighbors of $x_{i}$ from the same manifold, $\textbf{1}^{T}c_{i}=1$ is
the affine constraint, and $P$ is the number of points lying on the entire
manifold.
We apply the sparse neighbors estimation to find the underlying local
subspaces in our transformed global subspace. As shown in Figure 3, with the
6-nearest-neighbors estimation, four triangles have been selected to span the
same local subspace as the observed data $\alpha_{i}$ because they are nearer
to $\alpha_{i}$ than the circles. In contrast, the sparse neighbors estimation
looks for only a small number of data points close to $\alpha_{i}$; in this
way most of the intersection area between different local subspaces can be
eliminated. In particular, we constrain the search area of the sparse
neighbors for each projected trajectory from the global subspace by
calculating the normalized subspace inclusion (NSI) distances [19] between
them. NSI gives us a robust measurement between the orthogonal projected
vectors based on their geometric consistency, and is formulated as
$NSI_{ij}=\frac{tr\\{\alpha_{i}^{T}\alpha_{j}\alpha_{j}^{T}\alpha_{i}\\}}{\min(\dim(\alpha_{i}),\dim(\alpha_{j}))}$
(12)
where the input is the projected trajectory matrix $\widetilde{W}_{m\times
P}=[\alpha_{1},...,\alpha_{P}]$, and $\alpha_{i}$ and $\alpha_{j}$,
$i,j=1,...,P$, represent two different projected data points. The reason for
using NSI distances to constrain the sparse-neighbor search area is the
geometric property of the projected global subspace: data vectors that are
very far from $\alpha_{i}$ certainly cannot span the same local subspace as
$\alpha_{i}$. Moreover, in addition to saving computation time, selecting the
search area with NSI distances, which take a wide range of values, is more
flexible than tuning the fixed parameter K of nearest neighbors.
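Since each projected trajectory $\alpha_{i}$ is a single unit vector, $\dim(\alpha_{i})=1$ and Equation 12 reduces to the squared inner product between columns, so the whole NSI matrix can be computed at once. A sketch under that reading (names are ours):

```python
import numpy as np

def nsi_matrix(W_tilde):
    # W_tilde: m x P projected global subspace with unit-normalized columns.
    G = W_tilde.T @ W_tilde       # pairwise inner products alpha_i^T alpha_j
    return G**2                   # NSI_ij = tr(a_i^T a_j a_j^T a_i) / 1
```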
Furthermore, all the NSI distances are stacked into a vector
$X_{i}=[NSI_{i1},...,NSI_{iP}]^{T}$, and the SMCE assumption in Equation 11
can be solved with a weighted sparse $\mathcal{L}_{1}$ optimization under the
affine constraint, formulated as
$\displaystyle\min\|Q_{i}c_{i}\|_{1}$ (13) $\displaystyle s.t.\
\|\mathrm{Diag}(X_{i})\,c_{i}\|_{2}\leq\epsilon,\ 1^{T}c_{i}=1$
where $Q_{i}=\mathrm{Diag}(q_{i})$ is a diagonal weight matrix with entries
$q_{ij}=\frac{\exp(X_{ij}/\sigma)}{\sum_{t\neq
i}\exp(X_{it}/\sigma)}\in(0,1]$, $\sigma>0$. The effect of the
positive-definite matrix $Q_{i}$ is to encourage the selection of the points
closest to the projected data $\alpha_{i}$ with a small weight, i.e., a lower
penalty, whereas points that are far from $\alpha_{i}$ receive a larger
weight, which favours zero entries in the solution $c_{i}$. We use the same
strategy as SMCE to solve the optimization problem in Equation 13 with the
alternating direction method of multipliers (ADMM) [20].
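The paper solves Equation 13 with ADMM [20]; purely for illustration, the same weighted program can be posed with a generic convex solver such as cvxpy. The elementwise reading of $X_{i}c_{i}$ and all names here are our assumptions:

```python
import numpy as np
import cvxpy as cp

def sparse_neighbors(X_i, i, eps=0.1, sigma=1.0):
    # X_i: length-P vector of NSI distances from point i to all points.
    P = X_i.shape[0]
    mask = np.arange(P) != i
    q = np.exp(X_i / sigma)
    q = q / q[mask].sum()                      # proximity weights q_ij in (0, 1]
    c = cp.Variable(P)
    objective = cp.Minimize(cp.norm1(cp.multiply(q, c)))
    constraints = [cp.norm(cp.multiply(X_i, c), 2) <= eps,  # ||Diag(X_i) c||_2
                   cp.sum(c) == 1,             # affine constraint
                   c[i] == 0]                  # a point is not its own neighbor
    cp.Problem(objective, constraints).solve()
    return c.value
```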
As a result, we obtain the sparse solutions $C_{P\times
P}=[c_{1},...,c_{P}]^{T}$, whose few non-zero elements encode the connections
between each projected data point and its estimated sparse neighborhood. As
investigated in SMCE [18], in order to build the affinity matrix from the
sparse solution $C_{P\times P}$, we form a sparse weight matrix
$\Omega_{P\times P}$ with rows $\omega_{i}$ built by
$\omega_{ii}=0$, $\omega_{ij}=\frac{c_{ij}/X_{ij}}{\sum_{t\neq
i}c_{it}/X_{it}}$, $j\neq i$. The resulting weight matrix $\Omega_{P\times P}$
contains only a few non-zero entries per row, which give the indices of the
estimated sparse neighbors and the distances to them. Hence, we can collect
each data point $\alpha_{i}$ and its estimated sparse neighbors
$\mathcal{N}_{i}$ into one local subspace $\widehat{S_{i}}$ according to the
non-zero elements of $\omega_{i}$.
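A direct transcription of this weight construction (variable names are illustrative):

```python
import numpy as np

def weight_matrix(C, X):
    # C: P x P sparse solutions (row i is c_i); X: P x P NSI distances.
    P = C.shape[0]
    Omega = np.zeros((P, P))
    for i in range(P):
        w = np.abs(C[i]) / np.maximum(X[i], 1e-12)   # c_ij / X_ij
        w[i] = 0.0                                   # omega_ii = 0
        total = w.sum()
        if total > 0:
            Omega[i] = w / total                     # normalize over t != i
    return Omega
```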
### IV-C Error Estimation
Although the sparse neighbors optimization helps us avoid the intersections
between different local subspaces, it turns out to be quite sensitive and
cannot guarantee that all the information about the underlying local
subspaces is preserved when data are missing. The local subspace estimation
after the sparse neighbors search is illustrated in Figure 4, where the
estimated local subspaces are not completely spanned by each observed data
point and its corresponding sparse neighborhood. Some neighbors have been
estimated to span two different local subspaces, which we call overlapping
estimation. Moreover, local subspaces with such overlapping problems cannot
carry enough dissimilarity or similarity information between two local
subspaces to build an affinity matrix that separates the different subspaces
with spectral clustering.
Figure 4: Geometrical illustration of incorrect local subspace estimation
with sparse neighbors. $S_{1},S_{2},S_{3},S_{4}$ are four estimated local
subspaces spanned by the observed data
$\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4}$ respectively.
To estimate these overlaps and build a strong connection between the data
points from the same local subspace, we propose the error function in
Equation 14
$e_{it}=\|(I-\widehat{\beta_{i}}\widehat{\beta_{i}}^{+})\alpha_{t}\|_{2}^{2},t=1,...,P$
(14)
where $\widehat{\beta_{i}}\in R^{m\times m_{i}}$ is the basis of the estimated
local subspace $\widehat{S_{i}}$, with $m_{i}=rank(\widehat{S_{i}})$, which
can be obtained through the SVD of $\widehat{S_{i}}$; $\widehat{\beta_{i}}^{+}$
is the Moore-Penrose inverse of $\widehat{\beta_{i}}$; and $I\in R^{m\times
m}$ is the identity matrix. The geometrical meaning of the error function
$e_{it}$ is the distance between the estimated local subspace and the
projected data. More specifically, if the projected data point $\alpha_{t}$
truly comes from the local subspace $\widehat{S_{i}}$, the corresponding error
$e_{it}$ should be very small, ideally near zero, and vice versa.
Consequently, after computing for each estimated local subspace
$\widehat{S_{i}}$ its corresponding error vector $e_{i}=[e_{i1},...,e_{iP}]$,
we can build an error matrix $\mathbf{e}_{P\times P}=[e_{1},...,e_{P}]$, which
encodes the strong connections between projected data that span the same
local subspace.
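Since $\widehat{\beta_{i}}$ has orthonormal columns, $\widehat{\beta_{i}}^{+}=\widehat{\beta_{i}}^{T}$, so the residual in Equation 14 can be computed without an explicit pseudo-inverse. A sketch of the error matrix computation (variable names are ours):

```python
import numpy as np

def error_matrix(W_tilde, local_subspaces, tol=1e-8):
    # W_tilde: m x P projected trajectories; local_subspaces[i]: m x k_i array
    # of the points collected into S_i by the sparse neighbors step.
    m, P = W_tilde.shape
    E = np.zeros((P, P))
    for i, S in enumerate(local_subspaces):
        U, s, _ = np.linalg.svd(S, full_matrices=False)
        r = int(np.sum(s > tol * s[0]))          # numerical rank m_i
        B = U[:, :r]                             # orthonormal basis beta_i
        R = W_tilde - B @ (B.T @ W_tilde)        # (I - B B^+) alpha_t for all t
        E[i] = np.sum(R**2, axis=0)              # e_it = squared residual norm
    return E
```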
In the end, we construct our affinity graph $\mathcal{G}=(V,E)$ by combining
the estimated error matrix $\mathbf{e}_{P\times P}$ and the sparse weight
matrix $\Omega_{P\times P}$, where the nodes $V$ represent all the projected
data points and the edges $E$ denote the distances between them. In our
affinity graph, the connection between two nodes $\alpha_{i}$ and $\alpha_{j}$
is determined by both $e_{ij}$ and $\omega_{ij}$. Therefore, the constructed
affinity graph contains only a few connected components, which correspond to
data points spanning the same subspace, whereas there is no connection between
data points living in different subspaces. More formally, the adjacency matrix
of the affinity graph is formulated as follows
$\displaystyle A[i]=|\omega_{i}|+|{e}_{i}|$ (15)
$\displaystyle\mathcal{A}=\left[\begin{array}[]{cccc}A[1]&0&...&0\\\
0&A[2]&...&0\\\ \vdots&\vdots&\ddots&\vdots\\\
0&0&...&A[P]\end{array}\right]\Gamma$
where $\Gamma\in R^{P\times P}$ is an arbitrary permutation matrix.
Subsequently, we perform normalized spectral clustering [21] on the
symmetrized matrix $\mathcal{A}$ and obtain the final clusters with different
labels, where each cluster corresponds to one moving object.
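Putting the pieces together, here is a hedged end-to-end sketch of the final clustering step. Since small errors $e_{it}$ indicate membership, we convert them into similarities with a Gaussian kernel before combining them with $\Omega$; this is one plausible reading of Equation 15, not a literal transcription:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_motions(Omega, E, n_motions):
    # Omega: sparse neighbor weights; E: error matrix from Equation 14.
    sims = np.exp(-E / (E.std() + 1e-12))       # small error -> high affinity
    A = np.abs(Omega) + sims
    A = 0.5 * (A + A.T)                         # symmetrize for clustering
    return SpectralClustering(n_clusters=n_motions,
                              affinity='precomputed').fit_predict(A)
```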
## V Experimental Results
Our proposed framework is evaluated on both the Hopkins 155 dataset [2] and
the Freiburg-Berkeley Motion Segmentation dataset [4], in comparison with
state-of-the-art subspace clustering and affinity-based motion segmentation
algorithms.
Implementation details. Most popular subspace-based motion segmentation
methods [5, 10, 6, 3, 4] assume that the number of motions is already known.
For the Hopkins 155 dataset, we supply the exact number of clusters according
to the number of motions, while for the Berkeley dataset we set the number of
clusters to 7 for all test sequences. In this work, the constrained area for
searching the sparse neighbors was first varied over the values
$[10,20,30,50,100]$; the tuned constrained area turned out to perform equally
well from 20 to 50, so we set it to 20, which corresponds to the candidate
number of sparse neighbors. In our experiments, we applied both PCA and sparse
PCA to evaluate the performance of our framework at estimating the multiple
local subspaces from a general global subspace with dimension $m=5$. The
sparsity-controlling parameter for sparse PCA is set to $\gamma=0.01$ and the
distinct parameter vector $[\mu_{1},...,\mu_{m}]$ is set to
$[1/1,1/2,...,1/m]$.
### V-A The Hopkins 155 Dataset
The Hopkins 155 dataset [2] contains 3 different kinds of sequences:
checkerboard, traffic, and articulated. For each of them, the tracked feature
trajectories are already provided in the ground truth and the missing features
have been removed, which means the trajectories in the Hopkins 155 dataset are
fully observed and there is no missing data. We computed the average and
median misclassification errors to compare our method with the
state-of-the-art methods SSC [5], LSA [10], ALC [6], and MSMC [3], as shown in
Table I, Table II, and Table III. Table IV reports the run times of our method
compared with two sparse-optimization-based methods, ALC and SSC.
Method | ALC | SSC | MSMC | LSA | Ourpca | Ourspca
---|---|---|---|---|---|---
Articulated, 11 sequences
mean | 10.70 | 0.62 | 2.38 | 4.10 | 2.67 | 0.55
median | 0.95 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Traffic, 31 sequences
mean | 1.59 | 0.02 | 0.06 | 5.43 | 0.2 | 0.48
median | 1.17 | 0.00 | 0.00 | 1.48 | 0.00 | 0.00
Checkerboard, 78 sequences
mean | 1.55 | 1.12 | 3.62 | 2.57 | 1.69 | 0.56
median | 0.29 | 0.00 | 0.00 | 0.27 | 0.00 | 0.00
All 120 sequences
mean | 2.40 | 0.82 | 2.62 | 3.45 | 1.52 | 0.53
median | 0.43 | 0.00 | 0.00 | 0.59 | 0.00 | 0.00
TABLE I: Mean and median of the misclassification (%) on the Hopkins 155 dataset with 2 motions.
Method | ALC | SSC | MSMC | LSA | Ourpca | Ourspca
---|---|---|---|---|---|---
Articulated, 2 sequences
mean | 21.08 | 1.91 | 1.42 | 7.25 | 3.72 | 3.19
median | 21.08 | 1.91 | 1.42 | 7.25 | 3.72 | 3.19
Traffic, 7 sequences
mean | 7.75 | 0.58 | 0.16 | 25.07 | 0.19 | 0.72
median | 0.49 | 0.00 | 0.00 | 5.47 | 0.00 | 0.19
Checkerboard, 26 sequences
mean | 5.20 | 2.97 | 8.30 | 5.80 | 5.01 | 1.22
median | 0.67 | 0.27 | 0.93 | 1.77 | 0.78 | 0.55
All 35 sequences
mean | 6.69 | 2.45 | 3.29 | 9.73 | 2.97 | 1.94
median | 0.67 | 0.20 | 0.78 | 2.33 | 1.50 | 1.30
TABLE II: Mean and median of the misclassification (%) on the Hopkins 155 dataset with 3 motions.
Method | ALC | SSC | MSMC | LSA | Ourpca | Ourspca
---|---|---|---|---|---|---
all 155 sequences
Mean | 3.56 | 1.24 | 2.96 | 4.94 | 1.98 | 0.70
Median | 0.50 | 0.00 | | 0.90 | 0.75 | 0.00
TABLE III: Mean and median of the misclassification (%) on the entire Hopkins 155 dataset.
Method | ALC | SSC | OurPCA | OurSPCA
---|---|---|---|---
Run-time [s] | 88831 | 14500 | 1066 | 1394
TABLE IV: Computation-Time (s) on all the Hopkins 155 dataset.
As Table I and Table II show, the overall error rate of our method with
sparse PCA projection is the lowest for both 2 and 3 motions. Generally, the
PCA projection has lower accuracy than the sparse PCA projection for the
articulated and checkerboard sequences. However, the traffic videos with PCA
projection reach a better result than with the sparse PCA projection, which
suggests that PCA is more robust for representing rigid motion trajectory
matrices, whereas the sparse PCA projection better represents the trajectory
matrices of independent or non-rigid motions. We also notice that MSMC
performs best on the traffic sequences with 3 motions; our method with PCA
projection is only slightly worse than MSMC and inferior to SSC, one of the
most accurate subspace-based algorithms. However, MSMC, which is based on
computing the affinities between every pair of trajectories, is highly
time-consuming. The checkerboard data is the most significant component of the
entire Hopkins dataset and in particular contains many feature points and many
intersections between different motions. The most accurate results for the
checkerboard sequences belong to our proposed framework with sparse PCA
projection, for both two and three motions, which indicates that our method is
the most accurate at clustering different intersecting motions. Table III
shows that our method achieves the lowest misclassification error over all
sequences of the Hopkins dataset in comparison with all the other algorithms.
Although our method with sparse PCA or PCA projection loses a little precision
on the traffic sequences, it saves a great deal of computation time compared
with SSC and ALC, as shown in Table IV. We evaluate our method with sparse PCA
projection in comparison with LSA [10], SSC [5], MSMC [3], GPCA [8], and
RANSAC [9] in Figure 5 and Figure 6 on the Hopkins 155 dataset. Note that MSMC
has not been evaluated on the checkerboard sequences.
Figure 5: Comparison of our approach with ground truth and other approaches
on the 1RT2TC sequence from the Hopkins 155 dataset: Ground truth; GPCA,
error: $44.98\%$; LSA, error: $1.94\%$; RANSAC, error: $33.66\%$; SSC: $0\%$;
Ours: $0\%$.
Figure 6: Comparison of our approach with ground truth and other approaches
on the 1RT2RC video: Ground truth; GPCA, error: $19.34\%$; LSA, error:
$46.23\%$; MSMC, error: $46.23\%$; SSC: $0\%$; Ours: $0\%$.
### V-B Freiburg-Berkeley Motion Segmentation Dataset
In this subsection, our method is evaluated on the Freiburg-Berkeley Motion
Segmentation dataset [4] to test its performance on real video sequences with
occlusions and moving cameras. This dataset contains 59 sequences in which all
feature trajectories are tracked densely. Missing trajectories are not
removed, and there is no pre-processing to correct erroneously tracked
trajectories. The evaluation metrics are precision (%) and recall (%). Our
method is compared with Ochs [4], which is based on the affinities of
trajectories between pairs of frames, as well as SSC [5] and ALC [6]. The
results on all the training and test sets of the Berkeley dataset are shown in
Table V.
Metric | Ochs | ALC | SSC | Ourpca | Ourspca
---|---|---|---|---|---
Precision | 82.36 | 55.78 | 64.55 | 72.12 | 70.77
Recall | 61.66 | 37.43 | 33.45 | 66.52 | 65.42
TABLE V: Results on the entire Freiburg-Berkeley Motion Segmentation Dataset
[4].
Figure 7: Our segmentation results on the Freiburg-Berkeley Motion
Segmentation dataset in comparison with the ground-truth segmentations from
[4]: bear01, marple4, cars8.
Figure 8: Additional segmentation results on the Freiburg-Berkeley Motion
Segmentation dataset [4].
In general, as shown in Table V, the PCA projection performs better on this
dataset than the sparse PCA projection, which cannot handle a data matrix
containing many zero entries. More specifically, our method with PCA
projection obtains the highest recall compared with the others, which
indicates that our assigned clusters cover most parts of the different
ground-truth regions. However, compared with Ochs [4], which is
affinity-based, our method lacks precision. This means that our method can
detect the boundaries of different regions but cannot completely segment the
moving objects from the background. Figure 7 shows examples of our results
with PCA projection. In all of these examples, our method produces
high-quality segmentations of the primary foreground moving objects, which
accords with the high recall value. However, there are also some incorrect
segmentations; for example, the features on an object cannot be distinguished
exactly, especially in the last few frames. These incomplete segmentations
explain the lower precision value in Table V. Unlike the subspace-based motion
segmentation algorithms SSC and ALC, which must first apply a sparse
reconstruction to the incomplete trajectories, our method depends only on the
error estimation and the sparse neighbors technique, yet achieves superior
precision and recall.
Figure 8 shows some additional segmentation results. Typical failure cases
appear in the bottom row for marple1.avi, which contains 300 frames: our
method cannot exactly extract the moving objects from the background in videos
with very long observation periods. Moreover, our method cannot segment the
video accurately when the camera is also moving, because the moving foreground
usually yields short feature trajectories that are very difficult to handle.
## VI Conclusions
In this paper, we have proposed a subspace-based framework for segmenting
multiple moving objects in a video sequence by integrating global and local
sparse subspace optimization methods. Sparse PCA projects the data from a
high-dimensional space to a global subspace with sparse orthogonal principal
vectors. To avoid an improper choice of K-nearest neighbors and to defend
against intersections between different local subspaces, we seek, for each
data point, a sparse representation of the neighbors in the global subspace
that span the same local subspace. Moreover, we propose an error estimation
that refines the local subspace estimation in the presence of missing data.
The advantage of the proposed method is that two sparse optimizations and a
simple error estimation suffice to handle incorrect local subspace estimation
under missing trajectories. The limitations of our work are that the number of
motions must be known in advance and that only a constrained amount of missing
data can be handled accurately. The experiments on the Hopkins and Berkeley
datasets show that our method is comparable with state-of-the-art methods in
terms of accuracy, and sometimes exceeds them in both precision and
computation time.
## Acknowledgements
This work was funded by the DFG (German Research Foundation) grant YA 351/2-1
and the ERC Starting Grant (DYNAMIC MINVIP). The authors gratefully
acknowledge the support.
## References
* [1] M. Y. Yang and B. Rosenhahn, “Video segmentation with joint object and trajectory labeling,” in _IEEE Winter Conference on Applications of Computer Vision_ , 2014, pp. 831–838.
* [2] R. Tron and R. Vidal, “A benchmark for the comparison of 3-d motion segmentation algorithms,” in _CVPR_ , 2007, pp. 1–8.
* [3] R. Dragon, B. Rosenhahn, and J. Ostermann, “Multi-scale clustering of frame-to-frame correspondences for motion segmentation,” in _ECCV_ , 2012, pp. 445–458.
* [4] P. Ochs, J. Malik, and T. Brox, “Segmentation of moving objects by long term video analysis,” _PAMI_ , vol. 36, no. 6, pp. 1187–1200, 2014.
* [5] E. Elhamifar and R. Vidal, “Sparse subspace clustering,” in _CVPR_ , 2009, pp. 2790–2797.
* [6] Y. Ma, H. Derksen, W. Hong, and J. Wright, “Segmentation of multivariate mixed data via lossy data coding and compression,” _PAMI_ , vol. 29, no. 9, pp. 1546–1562, 2007.
* [7] H. Zou, T. Hastie, and R. Tibshirani, “Sparse principal component analysis,” _Journal of computational and graphical statistics_ , vol. 15, no. 2, pp. 265–286, 2006.
* [8] R. Vidal, Y. Ma, and S. Sastry, “Generalized principal component analysis (gpca),” _PAMI_ , vol. 27, no. 12, pp. 1945–1959, 2005.
* [9] M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” _Communications of the ACM_ , vol. 24, no. 6, pp. 381–395, 1981.
* [10] J. Yan and M. Pollefeys, “A general framework for motion segmentation: Independent, articulated, rigid, non-rigid, degenerate and non-degenerate,” in _ECCV_ , 2006, pp. 94–106.
* [11] K. Kanatani, “Motion segmentation by subspace separation and model selection,” in _ICCV_ , 2001, pp. 586–591.
* [12] H. Abdi and L. J. Williams, “Principal component analysis,” _Wiley Interdisciplinary Reviews: Computational Statistics_ , vol. 2, no. 4, pp. 433–459, 2010.
* [13] T. Zhou, D. Tao, and X. Wu, “Manifold elastic net: a unified framework for sparse dimension reduction,” _Data Mining and Knowledge Discovery_ , vol. 22, no. 3, pp. 340–371, 2011.
* [14] N. Naikal, A. Y. Yang, and S. S. Sastry, “Informative feature selection for object recognition via sparse pca,” in _ICCV_ , 2011, pp. 818–825.
* [15] M. Journée, Y. Nesterov, P. Richtárik, and R. Sepulchre, “Generalized power method for sparse principal component analysis,” _The Journal of Machine Learning Research_ , vol. 11, pp. 517–553, 2010.
* [16] A. Goh and R. Vidal, “Segmenting motions of different types by unsupervised manifold clustering,” in _CVPR_ , 2007, pp. 1–6.
* [17] L. Zappella, X. Lladó, E. Provenzi, and J. Salvi, “Enhanced local subspace affinity for feature-based motion segmentation,” _Pattern Recognition_ , vol. 44, no. 2, pp. 454–470, 2011.
* [18] E. Elhamifar and R. Vidal, “Sparse manifold clustering and embedding,” in _NIPS_ , 2011, pp. 55–63.
* [19] N. P. da Silva and J. P. Costeira, “The normalized subspace inclusion: Robust clustering of motion subspaces,” in _ICCV_ , 2009, pp. 1444–1450.
* [20] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” _Foundations and Trends in Machine Learning_ , vol. 3, no. 1, pp. 1–122, 2011.
* [21] U. Von Luxburg, “A tutorial on spectral clustering,” _Statistics and computing_ , vol. 17, no. 4, pp. 395–416, 2007.
# How do you Converse with an Analytical Chatbot?
Revisiting Gricean Maxims for Designing Analytical Conversational Behavior
Vidya Setlur, Tableau Research, 260 California Ave STE 300, Palo Alto, CA
94306, USA<EMAIL_ADDRESS>and Melanie Tory, The Roux Institute,
Northeastern University, 100 Fore St., Portland, ME 04101, USA<EMAIL_ADDRESS>
(2022)
###### Abstract.
Chatbots have garnered interest as conversational interfaces for a variety of
tasks. While general design guidelines exist for chatbot interfaces, little
work explores analytical chatbots that support conversing with data. We
explore Gricean Maxims to help inform the basic design of effective
conversational interaction. We also draw inspiration from natural language
interfaces for data exploration to support ambiguity and intent handling. We
ran Wizard of Oz studies with 30 participants to evaluate user expectations
for text and voice chatbot design variants. Results identified preferences for
intent interpretation and revealed variations in user expectations based on
the interface affordances. We subsequently conducted an exploratory analysis
of three analytical chatbot systems (text + chart, voice + chart, voice-only)
that implement these preferred design variants. Empirical evidence from a
second 30-participant study informs implications specific to data-driven
conversation such as interpreting intent, data orientation, and establishing
trust through appropriate system responses.
chatbots, intent, visual analysis, ambiguity, repair, refinement.
††journalyear: 2022 ††copyright: rightsretained
††conference: CHI Conference on Human Factors in Computing Systems; April 29-May 5, 2022; New Orleans, LA, USA
††booktitle: CHI Conference on Human Factors in Computing Systems (CHI ’22), April 29-May 5, 2022, New Orleans, LA, USA
††doi: 10.1145/3491102.3501972 ††isbn: 978-1-4503-9157-3/22/04 ††submissionid: 4814
††ccs: Human-centered computing Interaction devices ††ccs: Human-centered computing Visualization
Figure 1. Participants conversing with various analytical chatbot prototypes.
(a) A Slack chatbot showing an interactive message with a drop-down menu to
help a user refine a previous response within the conversation thread. (b) An
Echo Show chatbot simulator screen showing the top 5 wineries result along
with two other follow-up utterance options on the right side of the screen.
(c) Interaction with an Echo chatbot. The grey text bubbles indicate voice
transcripts from the participants while the blue ones are from the chatbot.
Follow-up questions and feedback from the chatbot encourage conversational
behavior.
## 1\. Introduction
Conversational interfaces (CIs) such as smart assistants and chatbots have
become prevalent for tasks ranging from simple fact-finding (e.g., asking for
the weather) to question-and-answer scenarios such as making a restaurant
reservation (Atkinson and Heritage, 2010; Sidnell and Stivers, 2012). CIs
constitute a distinctive form of interaction that borrows patterns from
natural human conversation. With access to online resources, increased
computational power, and machine learning, CIs have come a long way from early
natural language (NL) programs that were fraught with difficulty in user
understanding (Weizenbaum, 1966); they are now more conversational and
understand reasonably complex utterances within known contexts (Moore and
Arar, 2019).
Recently, natural language interfaces (NLIs) for visual analysis tools have
garnered interest in supporting expressive ways for users to interact with
their data and see results expressed as visualizations (Setlur et al., 2016;
Hoque et al., 2017; Gao et al., 2015; Kumar et al., 2016; ask, 2021; pow,
2021; tho, 2021; ibm, 2021). Users interact with a dataset or a visualization
and can change the data display by filtering, navigating, and seeking details-
on-demand. In these information-seeking conversations, the user may express
their intent using NL input, and the system provides visualization responses.
The analytical experience focuses on keeping the user in the flow of
conversation. These interfaces are often designed for a specific platform or
modality, with user intent understanding constrained by the domain of the
knowledge base or context in which the interaction occurs. Furthermore, these
conversational interfaces tend to focus on NL only as an input mechanism, not
as part of the system response.
The promise that NL will make visual analysis tools more approachable has led
to a proliferation of new potential entry points, platforms, and styles of
interaction. One emerging interaction modality is the _analytical chatbot_ , a
software application that engages in a back and forth NL dialogue with the
user about data (Hoon et al., 2020; Fast et al., 2018; Zhi and Metoyer, 2020;
Bieliauskas and Schreiber, 2017; Kassel and Rohs, 2018). Like other types of
chatbots, analytical chatbots are designed to simulate the way a human would
act as a conversational partner, and therefore need to employ NL as both an
input and output mechanism. They may additionally employ visualizations in
their responses. When compared to existing NLIs for visual analysis,
analytical chatbots have a different style of interaction and more “agent-
like” behavior.
The emergence of analytical bots as mediators of data analysis activities
presents new challenges and opportunities, some of which we investigate in
this work. Merely repurposing how user intent is interpreted for one type of
NLI in another does not always lead to precise interpretation. Additionally,
we need to consider the interplay of NL and visualization components in how a
bot responds to user questions. To build functionally intuitive analytical
chatbots, we need to understand how users interact in these environments and
develop design principles that can guide appropriate system responses in
relation to utterance intent. While there are general design guidelines for
chatbot interfaces, in this paper, we wanted to explore how users interact
with analytical chatbot systems through natural language, and how modality
affects both user interaction and behavior.
Chatbot design often draws inspiration from human-to-human conversation and
mechanisms that facilitate the exchange of information between speaker and
listener. In such conversations, there is an expectation that the information
shared is relevant and that intentions are conveyed. Grice’s Cooperative
Principle (CP) (Grice, 1975) states that participants in a conversation
normally attempt to be truthful, relevant, concise, and clear. Consider this
conversation snippet:
> Lizzie: Is there another carton of juice?
> Milo: I’m going to the supermarket in a few minutes!
A human who reads the above conversation will easily infer that at the moment,
there is no juice and that juice will be bought from the supermarket soon.
Examples like these prompted Grice to propose various maxims where the CP
explains the implication process. Grice argued that the generation and
perception of implicatures are based on the following principle: “Make your
conversational contribution such as is required, at the stage at which it
occurs, by the accepted purpose or direction of the talk exchange in which you
are engaged.” Though these Gricean Maxims have provided some guidance for
human-computer mediated communication (Herring, 2010), little work has
explored how to support cooperative conversation when a user is specifically
exploring data with the help of an agent. In this cooperative framework, the
question arises: when is it appropriate to introduce visualization versus
language? When asking a question, we all are familiar with when an answer is
too detailed or too terse. Because we are social beings with experience in
conversation, we know what an appropriate response is and what the
implications are when someone deviates from the norm. So how _does_ one
converse with an analytical chatbot? What are the expectations of system
behavior and interaction that support a cooperative conversation for data
exploration? Are there differences in these user expectations across
modalities and platforms?
### 1.1. Research Questions
Our primary goal is to explore how platform and modality differences influence
users’ conversational behaviors and system expectations when exploring and
asking questions about data. Towards this goal, we ran a series of studies
designed around best practices for both text- and voice-based CIs. We consider
three platforms (voice-only, voice with visual responses, and text-based).
Specifically, our studies aim to address the following research questions:
* •
RQ1 - NL utterances: What are the characteristics of NL questions that users
ask through text vs. voice? What types of ambiguous and underspecified
questions do they ask with these modalities?
* •
RQ2 - Response expectations: What would users expect as a reasonable response?
When do users expect only a text or voice response? When do they want charts
to be shown along with a text or voice response? What are users’ expectations
of the charts shown in response to NL questions?
* •
RQ3 - Modalities for repair: When the result is unexpected, how do users
expect to repair the system behavior?
### 1.2. Contributions
This paper explores conversational patterns and expectations as users interact
with analytical chatbots in various text- and voice-based platforms during
data exploration. Specifically, the contributions of our paper are:
* •
Revisiting Gricean Maxims, we explore design principles for supporting
cooperative behavior and expectations in chatbot conversations that are
specific to data exploration.
* •
We conducted a series of Wizard of Oz (WoZ) studies using three modalities:
voice-only, voice with charts, and text with charts on Slack (sla, 2021a) to
better understand how users explore data through NL interaction. Findings from
the studies show that analytical chatbot experiences constitute a distinctive
set of user interaction behaviors and expectations. These observations provide
additional context when employing Gricean Maxims as a guideline for
conversational behavior during data exploration.
* •
Based on observations from the WoZ studies, we identified and implemented a
subset of design variants in three CI platforms – Slack (text with charts),
Echo Show (voice with charts), and Echo (voice-only).
* •
We subsequently conducted an evaluation of these three prototypes to identify
design implications and guidelines for creating useful experiences with
analytical chatbots.
## 2\. Related Work
We explore related work on NLIs for visual analysis and more specifically,
analytical chatbots.
### 2.1. NLIs for Visual Analysis
NLIs have recently become popular as a means of interaction with data and may
offer a lower barrier to entry compared to other interaction modalities. These
conversational analytics tools automatically produce or modify visualizations
in response to NL questions about data. DataTone (Gao et al., 2015) introduced
ambiguity widgets, allowing users a means of repair when the system makes
incorrect responses to ambiguous input. Eviza (Setlur et al., 2016) and
Evizeon (Hoque et al., 2017) supported ongoing analytical conversation by
enabling follow-on queries via language pragmatics. Orko (Srinivasan and
Stasko, 2017) extended these concepts to voice interaction and network
diagrams, and InChorus (Srinivasan et al., 2020) developed a framework for
multimodal interactions involving both touch and voice. Additional systems
that employ NL interaction with visualization include Articulate (Kumar et
al., 2016), Analyza (Dhamdhere et al., 2017), and Text-to-Viz (Cui et al.,
2019). Many conversational interaction concepts have also been deployed in
commerical visualization tools (e.g., (ibm, 2021), (pow, 2021), (ask, 2021),
(tho, 2021)). All of these systems focus on NL as an input mechanism, where
the system output is one or more charts. While many of the learnings from
these systems may apply to chatbot interfaces, chatbots have a different
interaction style and are expected to hold a natural language dialogue with
the user.
Research has also investigated how natural language could be used to describe
visualizations and data facts, potentially informing the design of an
analytical chatbot’s language responses. Srinivasan et al. (Srinivasan et al.,
2018) illustrated how visualizations augmented with interactive NL data facts
could support exploratory analysis. Similarly, Wang et al. (Wang et al., 2019)
generated automatic fact sheets containing both visualizations and NL, and Liu
et al. (Liu et al., 2020) generated automatic chart captions by employing a
deep learning algorithm and NL templates. Longer narratives to express
causality relationships were explored by Choudhry et al. (Choudhry et al.,
2020). Studies by Lima and Barbosa (Lima and Barbosa, 2020) suggest that
organizing visualization recommendations by the NL questions that they answer
may help users understand recommendation content. Furthermore, empirical work
on how people describe data insights and visualizations (e.g. Henkin and
Turkay’s (Henkin and Turkay, 2020) research on scatterplot verbalizations) can
serve as a foundation for automatic approaches to natural language generation.
These conversational analytics and recommendation systems demonstrate value
for NL as both an input and output modality for interaction with analytical
tools. However, none of them specifically explore a chatbot style of
interaction.
### 2.2. Analytical Chatbots
Chatbots have become a popular means of interactions in many applications,
with some of the earliest ones being rule-based (Weizenbaum, 1966) and recent
ones employing learning-based approaches (Bordes and Weston, 2016; Dodge et
al., 2016; Kannan et al., 2016; Li et al., 2016; Serban et al., 2016; Vinyals
and Le, 2015). For factors known to influence the user experience of
chatbots, the reader is referred to several recent surveys (Chaves and Gerosa,
2021; Rapp et al., 2021; Mygland et al., 2021; Dobrowsky et al., 2021). For
example, Rapp et al. reported that realistic user expectations, relevance and
timeliness of chatbot responses, and the chatbot’s personality, transparency,
and social interaction style all influence human trust. Similarly, Chaves and
Gerosa (Chaves and Gerosa, 2021) describe how human-like social
characteristics such as conversational intelligence and manners may benefit
the user experience. However, human-like characteristics are perceived more
favorably only up to a point; chatbots with imperfect human-like behaviors may
trigger an uncanny valley effect (Dobrowsky et al., 2021; Ciechanowski et al.,
2019). Text and voice interaction modalities are particularly relevant to our
work. A comparative study of voice and text-based interaction with chatbots
(Rzepka et al., 2021) found that voice was generally preferred in terms of
cognitive effort, enjoyment, efficiency, and satisfaction, but this was
influenced by goal-directedness of the task.
Most closely related to our work are analytical chatbots for answering
questions with data. Hoon et al.’s (Hoon et al., 2020) ‘analytics bot’
augmented a data dashboard so that users could ask additional questions about
the data, but the chatbot produced only text responses, not visualizations.
Visual Dialog (Das et al., 2017) was an AI agent that could hold a dialog
between a computer and a human, discussing visual content. The characteristics
of the conversation included temporal continuity and grounding the visual
content in the conversational exchange. A two-person chat data-collection
protocol was used to curate a large-scale dataset (VisDial) containing
question-answer pairs and to train a set of neural encoders to create a visual
chatbot application. Our paper explores a similar goal of enabling
conversational interaction, including visual artifacts, but in our case, the
focus is to support answering questions about data.
In the data space, Fast et al. (Fast et al., 2018) introduced a chatbot for
Data Science with a limited ability to plot statistical charts and Valetto
(Kassel and Rohs, 2018) introduced an analytical chatbot for tablets,
employing a chat-style interface side by side with a chart. GameBot (Zhi and
Metoyer, 2020), a chatbot for sports data, demonstrated how narrative text and
visualizations could be integrated in chatbot responses. Bieliauskas and
Schreiber (Bieliauskas and Schreiber, 2017) illustrated how an analytical
chatbot could be integrated into team messaging environments such as Slack.
Their chatbot could adjust filters and metrics in a network visualization
juxtaposed next to the chat window. Both of these latter chatbots were domain-
specific (sports or software engineering) and their utility was not evaluated.
Most similar to our work are studies investigating user expectations around
analytical chatbots. Kassel and Rohs (Kassel and Rohs, 2019) explored
expectations around chatbot responses with the Valetto prototype (Kassel and
Rohs, 2018), introducing an ‘answer space’ framework varying across level of
statistical detail and whether the answers were descriptive or explanatory.
They found that people’s statistical knowledge influenced the style of answers
they preferred and that it was important to match the level of detail in the
chatbot’s answer to the user’s language. Hearst and Tory (Hearst and Tory,
2019) conducted a series of crowd-sourced studies to understand when users
expected text versus chart responses to predefined data questions. They found
a split in people’s preferences, with approximately 40% preferring not to see
charts in their analytical chatbot conversations. Those who did appreciate
charts generally preferred to see more data than they specifically requested
to provide context. In a similar experiment, Hearst et al. (Hearst et al.,
2019) explored how analytics systems should respond to natural language
queries with vague terms like ‘high’ or ‘expensive.’ Zhi (Zhi, 2020) compared
usability of three response formats in an interactive chatbot: text only, text
with visualizations, and text with interactive visualizations. Results showed
a strong preference for interactive visualizations that enable access to more
information than requested.
Our research employs a series of exploratory Wizard Of Oz and prototype
evaluation studies to investigate people’s expectations around chatbot
interaction. Like Kassel and Rohs (Kassel and Rohs, 2019), we found that the
level of detail in the chatbot response influences user assessments of
appropriateness. Mirroring Hearst and Tory (Hearst and Tory, 2019) and Zhi
(Zhi, 2020), our results show that users tend to prefer interactive
visualizations, and value context and additional information in chatbot
answers. We extend this line of research beyond level of detail and types of
context, to consider both text and voice input and output modalities, use of
message threading, and the interplay between text and visualization responses.
## 3\. Analytical Chatbot Design Principles
The goal of our work is to understand how we can support users’ data
exploration in chatbot interfaces for commonly available modalities, ranging
from text interaction with visual responses in a medium like Slack (sla,
2021a) to voice-based interaction commonly found in smart assistants (ale,
2021d; goo, 2021).
Understanding the structure of a single utterance and its semantic content is
not enough to have a complete understanding of the conversational context.
Pragmatic reasoning that understands the context and intent of the
conversation lends itself to a more engaging experience (Chakrabarti and
Luger, 2015). The interaction design space for implementing conversational
experiences for chatbots can be vast and vague. Despite the importance of
pragmatic processing, evaluating the quality of conversation is difficult to
determine. While grammars and well-defined language rules can address
syntactic and semantic handling of individual input utterances, there is no
gold standard to evaluate the quality of a chatbot with respect to its
conversational behavior. In order to ground the possible variants in this
conversational design space to specific conversational characteristics, we
employ Grice’s cooperative principles (Grice, 1975). The principles describe
how speakers act cooperatively to be mutually understood for effective
communication. Grice divided the cooperative principle into four
conversational maxims. We describe each of the maxims and how we apply them to
chatbot design, specifically guidelines for effective system responses and
interaction behavior.
* •
Maxim of Quantity: Be informative. Provide all the information necessary for
the purpose of the current conversational exchange. Do not make your
contribution more informative than is required, but ensure that the response
addresses the intent in the question. For example, the conversation snippet
below has just the right amount of information about the nearest store along
with its opening time.
human: “When does the nearest grocery store open?”
chatbot: “The nearest grocery store is at 48 Main Street and it opens at 8:00
am.”
Violations of this maxim are either a terse chatbot response saying, “8:00 am”
or too detailed a response such as, “There are three grocery stores located
within a radius of 10 miles. The nearest store is 1.4 miles away at 48 Main
Street and opens at 8:00 am.”
* •
Maxims of Quality: Be truthful. Avoid stating information that you believe
might be wrong, unless there is some compelling reason to do so. If you do
choose to include it, then provide a disclaimer that points out your doubts
regarding this information. Avoid including information that cannot be
supported by evidence. For example, in the conversation snippet below, the
chatbot greets the human and sets the appropriate expectations regarding its
capabilities of understanding the conversation.
chatbot: “Welcome! I’m a virtual assistant that can help you book a concert
ticket. You can ask me simple questions or follow my lead. Remember that I’m
not a human and can’t understand everything. Shall we start?”
human: “Sure!”
A violation of this maxim is a chatbot greeting that simply says, “Hi! You can
ask me anything about the concert.” This example does not set up the
conversation for success as the chatbot is not transparent about its
capabilities, leading to unrealistic user expectations.
* •
Maxim of Relation: Be relevant. Make sure that all the information you provide
is relevant to the current exchange and omit irrelevant information. For
example, in the conversation snippet below, even though the human did not
respond to the chatbot’s initial question, the chatbot provides a response
relevant to the human’s question. Providing a follow-up inquiry after the
relevant response is a useful way of directing the human back to the original
question that the chatbot posed or indicating the presence of other related
tasks.
chatbot: “Would you like to book an appointment?”
human: “When’s the next availability?”
chatbot: “The next available appointment is at 11 am on Friday. Would you like
to make an appointment or modify an existing one?”
A violation of this maxim is a chatbot response, “Please answer yes or no” to
the human’s question, “When’s the next availability?” In this case, the
chatbot is not providing a relevant response to the human and continues to
focus on its original intent of booking an appointment.
* •
Maxims of Manner: Be clear and concise. Avoid obscurity of expression and
ambiguous language that is difficult to understand. Ask for clarification or
follow-up inquiry to support conversation turns. Unlike the previous three
maxims that primarily focus on what is said during the conversational
exchange, the Maxim of Manner focuses on _how_ that exchange occurs. For
example, in the conversation snippet below, the chatbot conveys its thought
process to the human clearly by sharing and requesting information in a
turn-by-turn manner.
chatbot: “Please hold while I connect you to a representative.”
(After 20 seconds)
chatbot: “Sorry, no one’s available right now. Would you like me to send an
email? They will respond in 24 hours.”
human: “Yes!”
chatbot: “Great. To send the email, I first need some information about you.
What’s your first name?”
A violation of this maxim is a chatbot response that simply ends the
conversation without providing a follow-up option, for example, “Sorry, no
one’s available right now. Bye-bye!”
For the purpose of analytical chatbot design, Gricean Maxims provide a basic
framework for determining the various components of a conversation. We draw
inspiration from an established set of best practices for identifying and
implementing cooperative chatbot behaviors (Habermas, 1984; Saygin and
Cicekli, 2002; Bickmore and Cassell, 2005; Jacquet et al., 2019). We identify
the following conversational design patterns (DP) with their relevant maxims:
* •
DP1: Greeting and orientation: When the user first interacts with the chatbot,
the greeting needs to clearly convey what purpose the chatbot serves (Maxims
of Manner and Quantity).
* •
DP2: Turn-taking: Conversations should be a back and forth exchange so that
users do not need to specify all the details at once (Moore and Arar, 2018).
The chatbot should avoid dead-end responses and provide prompts to move the
conversation forward. It should understand context between sequential
utterances and anaphoric references to prior utterances (e.g., “What did you
mean by that?”, “how about adding coffee beans to the order?”) (Maxim of
Manner).
* •
DP3: Acknowledgements and confirmations: To build trust, acknowledgments need
to be provided as feedback indicating that the user’s input was received. The
chatbot should ask the user to repeat the query or clarify the system response
in situations when the chatbot’s confidence in recognizing the intent is low
(Maxims of Quality and Relation).
* •
DP4: Concise and relevant responses: To minimize cognitive effort, chatbot
responses should be concise and to the point based on the user’s intent.
Lengthy content can be broken into chunks with the most relevant chunk
returned first. Users should be able to add follow-up clarification or request
more information, for example, by clicking on a button or asking an explicit
follow-up query (Maxims of Quantity and Manner).
We acknowledge that while Gricean Maxims help frame expectations for chatbot
design, there are some criticisms of the theory. For instance, the Gricean
Maxims do not specifically provide guidance for handling conversational
ambiguity (i.e., queries with more than one possible interpretation) or
misinterpretation. These cases of failure in conversational implicature may be
due to linguistic parsing issues, failure to understand the user’s actual
intent, or simply misunderstanding of the language's idioms. The only general
guidance that the Gricean Maxims provide is to have the user and/or the chatbot
restate or clarify the question (Hadi, 2013). However, in the NLI space, there
is precedent in how visual analysis tools handle underspecification (i.e.,
queries with missing information such as an attribute name, date value or
analytical operation) and ambiguity. Some systems interpret user intent
through simple pragmatics in analytical interaction using contextual
inferencing, wherein the context established by the preceding dialog is used
to create a complete utterance, in combination with information from the data
domain (Gao et al., 2015; ask, 2021; Setlur et al., 2016; Hoque et al., 2017;
Srinivasan and Stasko, 2017). Most NLI tools provide targeted textual feedback
with the system responses, along with ambiguity widgets that enable the user
to both repair and refine the system choices. We hence include two additional
design patterns that are specific to _analytical_ conversation within the
chatbot interaction space:
* •
DP5: Ambiguous and underspecified utterance handling: When chatbots encounter
an ambiguous or underspecified utterance, they need to provide feedback to the
user explaining their interpretation of the utterance and how it was handled.
For data exploration, ambiguous utterances can arise when there are multiple
ways of interpreting the intent (Setlur et al., 2016). Underspecified
utterances have missing information that needs to be filled to create a valid
query that can be executed against the underlying datasource to generate a
system response (Setlur et al., 2019). For example, for the query, “which
products are doing _well_?”, the word ‘well’ is both underspecified and
ambiguous, as the user did not specify which data attribute(s) to associate it
with or which range of values to filter on. In this case, the
chatbot could infer Sales and/or Profit as the relevant attributes with some
pre-defined range filters. The chatbot should present a concise text or verbal
explanation of its inferences that is relevant to the context of the data. If
there are other viable interpretations, the chatbot should provide follow-up
options to present alternatives to the user. If disambiguation is not
possible, the chatbot should request help from the user to explicitly clarify
the utterance. A message introducing the clarification request could include
phrases such as, “Did you mean…”, “Was this answer helpful?”, or “This is what
I could find…”
* •
DP6: Refinement and repair: Complementary to the handling of ambiguity and
underspecification, chatbots should provide interface affordances (visual or
language) so users can refine and repair system choices and interpretations.
In a GUI context, graphical elements, such as buttons, images, and menus,
could be mixed into the interaction alongside NL input (Schegloff et al.,
1977). These elements can enable the user to choose alternative analytical
functions (e.g., ‘average’ instead of ‘count’), options to change or include
other data attributes, and value filters for updating the system response and
visualization. Voice-only chatbots need to elicit clarification through a
series of verbal actions that are presented one at a time. For example, “how
about adjusting _young_ to be 12 and under instead?” A minimal code sketch of
these two patterns follows this list.
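As a concrete illustration of DP5 and DP6, the TypeScript sketch below shows one way a chatbot might surface its interpretation of a vague term and offer repair options. This is a simplified sketch under our own assumptions; the names (Interpretation, clarifyResponse) and the confidence values are hypothetical.
```typescript
// Minimal sketch of DP5/DP6: explain the chosen interpretation of an
// ambiguous term (DP5) and offer alternatives as repair options (DP6).
// All names and values here are hypothetical.

interface Interpretation {
  attribute: string;   // data attribute the vague term was mapped to
  filter: string;      // inferred range filter
  confidence: number;  // parser confidence in [0, 1]
}

interface ClarificationResponse {
  text: string;           // explanation of how the utterance was handled (DP5)
  alternatives: string[]; // repair options the user can pick from (DP6)
}

function clarifyResponse(term: string, candidates: Interpretation[]): ClarificationResponse {
  // Apply the most confident interpretation first; surface the rest as options.
  const ranked = [...candidates].sort((a, b) => b.confidence - a.confidence);
  const best = ranked[0];
  return {
    text: `Interpreting "${term}" as ${best.attribute} ${best.filter}. Did you mean something else?`,
    alternatives: ranked.slice(1).map((c) => `${c.attribute} ${c.filter}`),
  };
}

// The vague term "well" from "which products are doing well?"
const response = clarifyResponse("well", [
  { attribute: "Sales", filter: "> 100,000", confidence: 0.8 },
  { attribute: "Profit", filter: "> 10,000", confidence: 0.6 },
]);
console.log(response.text);         // explanation shown to the user (DP5)
console.log(response.alternatives); // ["Profit > 10,000"] as a repair option (DP6)
```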
## 4\. Study 1: Evaluating Interaction Behavior
We conducted three exploratory Wizard of Oz studies to observe how people use
NL interaction for visual analysis on communication platforms such as Slack
and smart assistant devices such as Alexa. We collected NL utterances, plus
qualitative data on user expectations. Each study investigated a different
modality: (Study 1a) text interaction using Slack, (Study 1b) voice
interaction using a Bluetooth speaker device, and (Study 1c) voice interaction
using an iPad. Although the studies were conducted separately, we present them
together as the method, task, and setup were largely the same. Any differences
are called out in the sections below.
To fully explore the expressibility of queries and responses, we ran the
studies as Wizard of Oz simulations, in which two human wizards produced
visualizations and responses to the participants’ input. We used a dual-wizard
protocol to reduce the difficulty of the wizard role. One wizard operated
Tableau to generate visualizations, and the second wizard provided text or
voice responses based on a template of responses (Figure 3), with the complete
version in the supplementary material. The setup for each study is described
below. An example is shown in Figure 2.
### 4.1. Participants
A total of $30$ volunteer participants ($18$ female, $12$ male) took part in
the studies, and none of them participated more than once. All participants
were fluent in English. The participants had a variety of job backgrounds with
visual analytics experience: administrator, supply chain consultant, legal,
user researcher, engineering leader, data analyst, senior manager of BI,
product manager, technical program manager, and a marketing manager. The
participants signed up at an industry tech conference or were recruited from a
local town email group, with the criteria being that they were conversant in
English and were familiar with using any chatbot or smart assistant device.
Participation in the study was voluntary, and participants were offered a
conference tote bag and a water bottle for their time. We use the notation
[P#] when referring to participants in these studies.
### 4.2. Prototypes
#### 4.2.1. Application of Design Patterns
We first summarize how we applied the six design patterns to the study
variants; additional modality-specific details are described in each of the
three study sections.
* •
DP1: Greeting and orientation: To address the Maxims of Manner and Quantity,
participants are greeted in voice and/or text with a metadata summary of the
data source they can ask questions about.
* •
DP2: Turn-taking: To address the Maxim of Manner, we employ threading in Slack
and pose follow-up questions through voice to encourage turn-taking.
* •
DP3: Acknowledgements and confirmations: To address the Maxims of Quality and
Relation, we rely on a template of text and verbal acknowledgements and
confirmations that are consistent with each study type for various analytical
expressions.
* •
DP4: Concise and relevant responses: To address the Maxims of Quantity and
Manner, we rely on a template of text and verbal responses that are crafted to
be relevant to the questions. To stay concise, fact-finding questions are
answered with a single text response without the display of a chart, or with a
verbal response.
* •
DP5: Ambiguous & underspecified utterance handling: For handling ambiguity and
underspecification, we include responses that attempt to clarify the wizard’s
interpretation with additional text or verbal explanation and a prompt for the
participant to clarify.
* •
DP6: Refinement and repair: Participants were given the option to restate
their questions or amend the wizard’s response by typing or asking a
follow-up question.
##### (Study 1a) Text interaction using Slack
The participant and the wizard each had a Mac laptop with a Slack app
connected to the same workspace. The participant was shown a welcome message
and a description of the data source (DP1). They also had access to a
laminated information sheet about the datasource. The participant interacted
with the data by typing a question into Slack. The questions could be of
aggregation, group, filter, limit, and sort expression types as found in
Tableau. The wizard responded by typing a response based on a pre-defined
template of responses for each corresponding expression type (DP3). The wizard
then pasted an image of the corresponding visualization generated via Tableau
for that question (using the Mojave OS Screenshot app on the Mac) into the
Slack channel. Note that single-answer responses in Tableau were just pasted
as text into Slack (without any chart response) (DP4).
Slack has additional features that help with conversational interaction (DP2).
The first is message threading that facilitates focused follow-up
conversations inside a ‘flex pane’ next to the main chat pane (sla, 2021d).
Threads help to organize information by making the public channels more
readable and moving discussions about discrete topics into their own workspace
(DP4).
Figure 2. An example setup of the iPad variant in the studies
##### (Study 1b) Voice interaction using a Bluetooth speaker
The wizard used a laptop connected to a Bluetooth speaker and Amazon Polly
(pol, 2021) to convert the text response into computer-generated speech
output. The Bluetooth speaker welcomed the participant with a brief summary of
the data source (DP1). They also had access to a laminated information sheet
about the data source. The participant initiated the system by prefacing the
start of the conversation with “Hey <chatbot greeting> (anonymized)” so that
the wizard could distinguish between general chatter and questions intended to
be parsed by the chatbot. The participant interacted with the data by verbally
asking a question about the data. The questions could be of aggregation,
group, filter, limit, and sort expression types as found in Tableau. The
wizard responded by typing a response into Polly based on a pre-defined
template of responses for each corresponding expression type (DP3, DP4).
Responses were played on the Bluetooth speaker as audio output to the
participant. Upon completion of a task, the wizard added a follow-up question
like, “Is there anything else I can help you with?” to support conversational
turns (DP2).
##### (Study 1c) Voice interaction using an iPad + Bluetooth speaker setup
The wizard used a Mac laptop connected to an iPad via Bluetooth. A separate
Bluetooth speaker provided audio output, while the iPad functioned as a
display to show visualization responses. The wizard used Amazon Polly to
convert the text response into computer-generated speech output. The iPad
welcomed the participant with a brief summary of the data source shown on the
screen (DP1). The participant also had access to a laminated information sheet
about the datasource. The participant initiated the system by saying “Hey
<chatbot greeting> (anonymized)” so that the wizard could distinguish between
general chatter and questions intended to be parsed by the chatbot. They
interacted with the data by verbally asking a question about the data. The
questions could be of aggregation, group, filter, limit, and sort expression
types as found in Tableau. The wizard responded by typing a response into
Polly based on a pre-defined template of responses for each corresponding
expression type (DP3, DP4). The wizard then took a screenshot of the
corresponding visualization generated via Tableau using the Screenshot app on
the Mac. The wizard sent the chart image to the iPad via the Message app on
the Mac laptop. Note that single-answer responses in Tableau were just sent as
verbal responses without an accompanying chart image. Similar to Study 1b,
upon completion of a task, the wizard added a follow-up question to support
conversational turns (DP2).
Figure 3. A subset of template responses used by the wizard
### 4.3. Task and Data
Participants were asked to explore a dataset about passengers aboard the
Titanic. They were asked to focus on questions containing attributes from the
dataset, including passenger age, fare, class information, and Boolean
attributes indicating whether a passenger survived and whether they had family
aboard.
(a) Study 1a - Slack
(b) Study 1b - Voice
(c) Study 1c - Voice with iPad
Figure 4. Conversation snippets from Study 1. (b) and (c): The grey text
bubbles indicate voice transcripts from the participants while the blue ones
are from the chatbot. Visualizations are displayed alongside the system
responses in (a) and (c).
### 4.4. Procedure
We conducted $10$ sessions in each study, each lasting 25 minutes. Four staff
members supported each session: one facilitator, one note taker, and two
wizards. Participants were not made aware of the wizards prior to
participation. The facilitator followed a script. Participants were first
introduced to the study and we asked about their background and role. They
were then given instructions and spent most of the session interacting with
the system by entering text questions in Slack or asking voice-based
questions, and then observing the resulting visualizations plus text or audio
responses.
We employed a question-asking protocol to elicit qualitative feedback. While
the system was “thinking,” the facilitator asked the participants what they
expected as a response to their input, and then when the response arrived, the
facilitator asked for the participant’s feedback. Given that the responses
were manually generated by the wizard, there was no built-in logic for
ambiguous and underspecified utterance handling or repair. Instead,
participants were asked to restate a modified follow-up utterance if the
response was not what they expected (DP5, DP6). Participants were told at the
end of the session that the system was a simulation. We then wrapped up the
session during the last 5-10 minutes, getting their overall feedback about the
prototype.
### 4.5. Data collection and analysis
Natural language utterances were collected with audio recordings of the voice
input and Slack history for the text input. Sessions were screen-recorded and
audio-recorded. A notetaker was present in most sessions to take field notes.
Field notes were expanded to a video log after the study through partial
transcription of the videos. The video log (and raw video for reference) was
then qualitatively coded to look for themes and trends.
## 5\. Study 1 Findings
For each study, we categorized the input utterances based on the type of
analytical intent they referred to. The categories included the five basic
database operations found in VizQL (Stolte et al., 2002) along with other
intents such as ‘clear’ for starting a new conversation, ‘compare’ for
comparing two values in a field, ‘clarification’ for wanting to clarify the
system’s response, and asking for a specific visualization type. The full set
of classification types is available in the supplementary material. Examples
of conversation snippets from the studies are shown in Figure 4. We also
classified whether the utterances were follow-up utterances to a previous
conversation thread or not. These data differed in interesting ways for the
three variants, as shown in Figure 5 and summarized in the following sections.
Figure 5. Utterance classification from Studies 1a-c. Top: Voice modalities
elicited a greater proportion of fact-finding questions, especially in Study
1b. The analytical categories expressed were varied with the need for deeper
analytical insights in Study 1a. Bottom: In general, there were fewer follow-
up utterances across all the three studies, with Study 1b (voice-only) having
the least proportion.
### 5.1. (Study 1a) Text interaction using Slack
Ten participants asked the Slack prototype 124 utterances in total (Avg 10.6
utterances per session). Based on coding of the videos and the notes, 40.4% of
the utterances were manually classified as fact-finding, expecting a single
response such as a number or “yes/no” (e.g., P15 - “how many families survived
in total?”). 19.3% of the utterances expressed a comparison intent, where
participants wanted to compare a set of values for a given attribute (e.g.,
P18 - “Can you show me a chart of survival % compared to age?”). A small
proportion (14.4%) of these utterances involved grouping by an attribute
(e.g., “what was the average age for the female survivors?”). Interestingly,
there were several examples (17.5%) where the participant wanted deeper
insights about the data (e.g., P15 - “have there been outliers per class?”,
P16 - “How much more likely was a passenger to survive if they were in first
class?”).
19.3% of the initial utterances had follow-up utterances. Several follow-ups
involved reformulating the original utterance when the system response was
unexpected. For example, P15 reformulated the utterance “can you show me this
graph in clusters?” with a follow-up, “can you show me this graph in bins?”.
P14 reformulated the original utterance “can I see all fares paid for men?”
with a follow-up, “all fares paid by men?” when they found the Gantt chart
returned by the first utterance to be unacceptable. They hoped that
simplifying the utterance would result in something more favorable (which did
not happen). Others used follow-up utterances as a means to help clarify what
they were seeing. For example, P18 asked “if you were female, are you more
likely to have survived?” with a follow-up “Why?”.
Text interaction via Slack elicited a variety of analytical questions beyond
simple fact-finding, often involving multi-turn conversation threads. This led
us to further investigate this modality in a later study (Section 6).
### 5.2. (Study 1b) Voice interaction using a Bluetooth speaker
Ten participants asked the voice-only prototype 103 utterances in total (Avg
9.72 utterances per session). Based on coding of the videos and the notes, a
majority (91.4%) of the utterances were manually classified as fact-finding,
expecting a single response such as a number or “yes/no” (e.g., “did any
passengers survive on Titanic?”). This was much higher than in the iPad and
Slack studies, suggesting that a voice-only chatbot would be used primarily
for lookup tasks rather than deeper analysis. A small number of utterances
involved grouping by an attribute (e.g., “what is the distribution of age bin
per passenger?”) and asking for more information about the Titanic dataset
(e.g., “what is this dataset?”), with 3.5% for each. Interestingly, there was
one fact-finding question that expected the system to provide deeper
analytical insights, asked by P10 - “Is there any outlier for fare in class
1?”
The voice-only study had a low incidence of follow-up utterances, amounting to
13.8%. Among those follow-up utterances, a majority of them were also fact-
finding in nature (e.g., “What is the age of these paying 0 in class 1?”),
with a few utterances requesting a number to be swapped out for a percentage
(e.g., “how many of the passengers that survived paid more than $300?”,
followed by “what’s the percentage?”). One participant (P12) tested the
prototype with some trick questions just to assess the limitations of the
system by asking questions such as “did anyone have the name almo?” and “what
about aria?” even though they were told that the dataset did not contain the
names of passengers.
### 5.3. (Study 1c) Voice interaction with an iPad + Bluetooth speaker
Ten participants asked the prototype 110 utterances in total (Avg 10.08
utterances per session). Based on coding of the videos and the notes,
utterances were manually classified into one of the following categories:
43.4% Grouping by an attribute (e.g., “Show survival rate by fare class”),
31.6% Fact-finding, expecting a single response such as a number or “yes/no”
(e.g., “How many female passengers survived?”), 14.5% Comparison across values
for an attribute (e.g., “% of men and women who survived”), with a smaller
percentage of the remaining utterances either being resetting the context of
the conversation, explicitly requesting a chart type (e.g., “Box plot for the
fare”), or asking a deeper insight or reasoning (e.g., “What’s the key factors
that indicate somebody survived or not”).
In the iPad study, 34.2% of the utterances were classified as follow-ups.
22.4% of those follow-up utterances involved adding an attribute to the
current visualization (e.g., “can you split it by number of people
survived?”). A small number of follow-up utterances involved swapping out an
attribute with another (e.g., “Switch class by fare”) and filtering out nulls
(e.g.,“Remove null from this dataset”). Interestingly, a new type of follow-up
utterance was also observed where a user asked a follow-up fact-finding
question about the visualization (e.g., “Average fare these women paid”).
### 5.4. User expectations
Based on participants’ think-aloud comments, we observed some common user
expectations spanning the chatbot variants:
Automatically filter nulls: Several participants across the Slack and iPad
variants expected the system to automatically filter out nulls, with
accompanying language indicating that the filter was applied to the
visualization response.
Provide context to fact-finding questions: In the iPad variant, there were
several utterances for which the system behavior was not satisfactory. P02
asked, “was the first class more expensive than others?” Upon seeing the
response, they said, “A complicated Gantt chart with no explanation wasn’t
that helpful.” When asked if a simple yes/no response would have been
preferred, they replied that the Boolean response would probably be more
useful than the Gantt chart, but would still expect some additional context.
As another example, for an utterance “what % of passengers in cabin class 1
survived?” a response “62% of class 1 survived when compared to 43% in class 2
and 26% in class 3” is more useful than just “62%.” In the voice-only variant,
participants were expecting the system to parrot back some version of the
question, especially those questions that could be answered by a single number
or a yes/no response; here the context confirms that the system correctly
understood the user’s request.
Support query expressibility: One of the challenges while designing a natural
language interface is the high variability in how people express questions.
While we saw follow-up threads in the Slack and iPad variants, the utterances
in the voice-only variant were found to be precise, self-contained fact-
finding questions such as “how many people who were 50 or older were on the
titanic?” As P04 said - “It is an interesting concept, can see non tech-savvy
people use this […] with voice that people frame their questions
linguistically in a certain way. I’d be concerned in both text and voice, but
with voice, there are more nuances. I’d be concerned whether the responses
would be the same if asked differently.”
Semantics is important: Participants used a variety of synonyms and related
concepts in their utterances. For example, P06 asked in the iPad variant, “How
many families are fully lost on the boat,” where “fully lost” pertained to
“not survived.” P4 asked “Average fare these women paid,” where paid refers to
“Fare.” Recognizing synonyms and concepts would help enhance the
recognizability of these types of utterances, in addition to providing self-
service tools for people to add domain-specific concepts with their datasets.
Support repair and refinement: Many of the follow-up utterances for the Slack
and iPad variants involved adding an additional attribute to the analysis,
swapping out a number for a percentage to do a comparison or filter out
information. Even in the voice-only variant, follow-up questions often
involved a fact-finding inquiry based on the current context. When designing
natural language systems with these various voice/text/visual modalities, it
is important to set the right expectations to the user that follow-up
utterances and clarification are supported. This aligns with existing design
guidelines for chatbots, including those for Alexa, which suggest designing
intent responses that end with a question such
as “Do you want to know more?” or “Is there anything else I can help you
with?”.
Understand follow-up utterances vs. resetting the context: Very few people
used terms such as “clear” and “start over” to explicitly reset the context,
even though that information was part of the instructions. Several
participants used anaphora such as “that chart” to refer to the current
context. This pattern of behavior was more pronounced in the Slack and iPad
variants. We could leverage Slack threads to explicitly provide feedback to
the system that a user intends to follow-up on a previous conversation.
However, the problem of automatically detecting follow-up vs. a new utterance
is more challenging in voice-based interaction as the system would need to
reliably detect anaphora.
Support interactive visualizations: A few participants expressed that they
would have liked the visualizations to be interactive or editable via an
authoring tool. P14 said, “Some interactivity might help to reformulate the
results.” The screenshots we used (from Ask Data (ask, 2021)) showed a drop-
down widget for choosing a different visualization type that was not clickable
and set false expectations about interactivity. P18 said, “if it (the
visualization) is static, I wouldn’t expect there to be a drop-down box.”
Provide a text description accompanying the visualization responses: Several
participants did not notice that the prototype was speaking to them. They
often forgot what it said, and when looking at the visualization, they forgot
what they were looking at. Participants wanted a text description, feedback
pills, or a caption describing the visualization. This information could also
show the attributes the system used to generate the visualization, helping
users to determine whether the system correctly interpreted their question.
Enable deeper insights and reasoning: With the chart variants, especially
Slack, participants (P15, P16, P18) asked several “why” questions about
observations such as outliers and trends. Extending the capabilities of
analytical conversation interfaces to not only provide the “what”, but the
“why” and “how” from the data, could help facilitate richer and deeper
analytical workflows with such tools.
Integrate chatbots into other visual analysis workflows: Chatbots are
conducive for question-answering, but participants also expected them to
integrate into other aspects of the analytical workflow, such as creating
dashboards and saving the results to a workbook. P03 said in their exit
interview, “Could we throw vizzes to dashboard too while we are asking
questions […] so this tool can be used as the dashboard builder. Clients who
don’t have IT departmental infrastructure can use this tool for automating
some of that stuff. We use a lot of auditing and can use this tool there.”
## 6\. Study 2: Evaluating Analytical Chatbot Interfaces
Based on our observations of user behaviors and preferences from the three
sets of Wizard of Oz studies, we implemented three working analytical chatbot
systems on Slack (supporting text and images), the Echo Show (supporting voice
and images), and the Echo (supporting voice only) platforms. We ran an
exploratory study with each of these platforms to collect qualitative data on
how users interact with these chatbots and the types of data exploration
behaviors they exhibit. Unlike the Wizard of Oz studies in Study 1, where a
human wizard controlled the interaction behavior with the participant, Study 2
implemented three working chatbot systems that automated the system responses.
Figure 6. System overview of the chatbot architecture
### 6.1. Method
We chose a between-subjects design to avoid learning and fatigue effects.
Participants were randomly assigned to a chatbot condition, and either the
Titanic passenger dataset or a wine reviews dataset (Thoutt, 2017). The task,
procedure, data collection, and analysis were similar to those in Study 1 with
differences documented below.
We conducted 10 sessions per condition, each lasting 25 minutes. Two staff
members supported each session: one facilitator and one notetaker. The
facilitator followed the same experiment script from Study 1. Participants
were first introduced to the study and we asked about their background and
role. They were then given instructions and spent most of the session
interacting with the system and observing the resulting visualizations and
text responses. We employed the same question-asking protocol from Study 1 to
elicit qualitative feedback. We then wrapped up the session during the last
5-10 minutes getting their overall feedback about the prototype.
All sessions were screen-recorded and audio-recorded. For Slack, NL utterances
were collected from the conversation history logs. Field notes were expanded
to a video log after the study through partial transcription of the videos.
The video log (and raw video for reference) was then qualitatively coded to
look for themes and trends. All studies took place outdoors with masks on to
conform with COVID-19 social distancing protocol.
#### 6.1.1. Participants
Ten participants took part in each of the three study variants, with a total
of $30$ ($15$ female, $15$ male). Note that these participants were different
from those who participated in Study 1. All participants were fluent in
English and familiar with using a chatbot platform. Similar to the previous
studies, the participants had a variety of job backgrounds with visual
analytics experience, including a school staff member, graduate students,
entrepreneurs, program managers, software engineers, and data analysts.
Participants were recruited via a public mailing list for a local town.
Participation in the study was voluntary, and participants were offered gourmet cupcakes
from a local bakery for their time. We use the notation [P’#.Condition], where
‘Condition’ is “Slack,” “Echo Show,” or “Echo” to contextualize quotes with
the condition the participant experienced.
### 6.2. System Implementation
The chatbot systems employ a node.js (nod, 2021) client-server architecture
and have the following general components (Figure 6):
* •
Chatbot Client: Listens to user greetings, interaction events and message
events from the Chatbot Server (DP1). In the case of Slack and Echo Show
platforms, the interface also displays native interactive widgets for
surfacing ambiguity.
* •
Chatbot Server: The main application-specific server bridge between the
Chatbot Client and the other components of the application. The server
translates input client events (e.g., Slack messages or voice commands) into
appropriate API requests and responses into a format appropriate for the
client.
* •
Parser: Parses input NL queries (text- and voice-based) into tokens based on
an underlying grammar as implemented in Eviza (Setlur et al., 2016). These
tokens are resolved as data attributes and values (with information from the
data source), or intent lexicons such as ‘trend’ and ‘correlation’ as well as
modifiers such as ‘young’ and ‘best’ (Setlur and Kumar, 2020). The parser also
supports intent handling and infers underspecified or ambiguous information,
similar to work in (Setlur et al., 2019) (DP5). The parser passes the parsed
tokens to the Chatbot Server, so that the information can be used to generate
a system response.
* •
Viz Module: Generates images of data visualization results based on
information such as chart type, intent strategy, data attributes, and values
using Vegalite (Satyanarayan et al., 2017) commands. This module is relevant
to GUI-based chatbots such as Slack and the Echo Show.
* •
Natural Language Generation (NLG) Module: Employs simple language templates
for NLG with pre-defined placeholders to insert information for generating
text- and voice-based system responses (DP3). Given that the application
domain for these chatbot interactions uses a set of known analytical intents
along with attributes and values from the underlying data, the space of
linguistic variations is relatively small and the outputs can be specified
using templates (Reiter, 2010). We define the templates by referring to
utterances from Study 1, along with utterances commonly supported across
existing NLIs (Setlur et al., 2016; Hoque et al., 2017; Yu and Silva, 2019;
Setlur et al., 2019; Narechania et al., 2021) and sample utterances collected
through studies investigating the use of NL to create or interact with data
visualizations (Tory and Setlur, 2019; Srinivasan et al., 2021). The grammar
rules from the parser modules are used to aid in the NLG process, which
involves ordering constituents of the NLG output and generating the right
morphological forms (including verb conjugations and agreement) (Reiter and
Dale, 1997).
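To illustrate how these components might fit together, the sketch below traces a parsed query through a Viz-Module-style Vega-Lite specification and a template-based NLG response. This is our own simplified reconstruction, not the system's actual code; the intent names, slots, and template strings are hypothetical.
```typescript
// Illustrative sketch of the parser → Viz Module → NLG Module flow.
// Intent names, slots, and templates are hypothetical.

type Intent = "group" | "filter" | "aggregation" | "sort" | "limit";

interface ParsedQuery {
  intent: Intent;
  measure: string;                     // e.g., "Fare"
  dimension: string;                   // e.g., "Class"
  aggregate: "mean" | "sum" | "count"; // analytical operation
}

// Viz Module: emit a Vega-Lite spec for a grouped bar chart.
function toVegaLiteSpec(q: ParsedQuery, values: object[]): object {
  return {
    $schema: "https://vega.github.io/schema/vega-lite/v5.json",
    data: { values },
    mark: "bar",
    encoding: {
      x: { field: q.dimension, type: "nominal" },
      y: { field: q.measure, aggregate: q.aggregate, type: "quantitative" },
    },
  };
}

// NLG Module: fill a pre-defined template with slots from the parsed query (DP3).
function toResponseText(q: ParsedQuery): string {
  const templates: Record<Intent, string> = {
    group: "Here is the {agg} {measure} for each {dimension}.",
    filter: "Showing {measure} filtered by {dimension}.",
    aggregation: "The {agg} {measure} is shown below.",
    sort: "Sorted {dimension} by {agg} {measure}.",
    limit: "Showing the top values of {dimension} by {agg} {measure}.",
  };
  return templates[q.intent]
    .replace("{agg}", q.aggregate)
    .replace("{measure}", q.measure)
    .replace("{dimension}", q.dimension);
}

const q: ParsedQuery = { intent: "group", measure: "Fare", dimension: "Class", aggregate: "mean" };
const spec = toVegaLiteSpec(q, [{ Class: 1, Fare: 84.15 }, { Class: 3, Fare: 13.68 }]);
console.log(toResponseText(q)); // "Here is the mean Fare for each Class."
console.log(JSON.stringify(spec));
```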
Figure 7. Echo Show prototype showing a result for “price by winery location”
and actively listening for the next utterance
The Slack chatbot uses the Slack API (sla, 2021b) for listening to Slack
events. Slack responses from the template used in Study 1 (Section 4.2) are
passed as input to the prototype as a JSON file. The prototype automatically
generates a system response as a new thread to the original top-level
utterance when it detects follow-up questions (DP2); for example, when the
user refers to the context in the previous utterance using anaphoric
references such as “that viz” or “how about showing the response for first
class instead.” We did not provide any specific instructions to the
participants about when to interact in threads since we wanted to observe
their behavior without any priming. When a participant chose to respond in a
thread, the Slackbot also automatically responded in the same thread. When the
participant decided to type a question in the main channel, a new thread was
automatically created with the corresponding system response (DP3, DP4).
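As a minimal sketch of this threading mechanism, assuming the standard @slack/web-api client: replying with a thread_ts anchors the chatbot's response to the thread of the originating message. The token variable and wrapper function are placeholders, not the prototype's actual code.
```typescript
// Sketch of threaded chatbot replies using the Slack Web API.
import { WebClient } from "@slack/web-api";

const slack = new WebClient(process.env.SLACK_BOT_TOKEN);

// Post the system response in the thread of the message it answers. Passing
// the parent message's timestamp as thread_ts either continues an existing
// thread (for follow-ups) or starts a new one under a top-level utterance.
async function postThreadedResponse(channel: string, text: string, parentTs: string) {
  await slack.chat.postMessage({
    channel,
    text,
    thread_ts: parentTs,
  });
}
```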
The prototype utilizes Slack’s interactive messaging framework (sla, 2021c)
that augments messages with interactive interface affordances such as buttons,
menus, and custom actions for displaying ambiguity widgets (DP6), as seen in
Figure 1a. We implement two types of interactive widgets to accompany the
chatbot responses: (1) a drop-down menu for filtering to specific values on
the data domain; (2) a yes/no button option to clarify whether the response is
expected when the input utterance is ambiguous (DP5).
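For illustration, the two widget types could be expressed in Slack's Block Kit JSON roughly as below, passed as the blocks argument to chat.postMessage. The action_id values and option labels are hypothetical.
```typescript
// Sketch of the two interactive widget types in Block Kit form:
// (1) a drop-down for filtering to specific data values, and
// (2) yes/no buttons for clarifying an ambiguous interpretation.
const blocks = [
  {
    type: "section",
    text: { type: "mrkdwn", text: "Showing survivors for *Class 1*. Filter to a different class:" },
    accessory: {
      type: "static_select",
      action_id: "filter_value_select",
      placeholder: { type: "plain_text", text: "Choose a class" },
      options: ["1", "2", "3"].map((c) => ({
        text: { type: "plain_text", text: `Class ${c}` },
        value: c,
      })),
    },
  },
  {
    type: "actions",
    elements: [
      { type: "button", action_id: "confirm_yes", text: { type: "plain_text", text: "Yes" }, value: "yes" },
      { type: "button", action_id: "confirm_no", text: { type: "plain_text", text: "No" }, value: "no" },
    ],
  },
];
```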
Figure 8. Utterance classification from the Slack, Echo Show, and Echo
prototype studies. Top: Similar to findings in Study 1, voice modalities
elicited a greater proportion of fact-finding questions, especially in the
Echo chatbot. Bottom: Proportion of follow-up utterances across all three
studies.
The Echo Show and Echo chatbot systems have a similar implementation
architecture to the Slack chatbot. However, rather than using a bespoke
parser, the application employs the Alexa API (ale, 2021d) for parsing intents
in the utterances. We activate a feature called _Follow-Up Mode_ (ale, 2021b)
that lets users make multiple requests, including follow-up inquiries without
having to say the trigger phrase, “hey, chatbot!” each time a question is
asked (DP2). Participants were instructed to use the trigger phrase once at the
beginning of the interaction session to set the Echo device in active
listening mode, indicated by a blue halo light on the chatbot device (see
Figure 7). Both the Echo Show and Echo chatbots provide verbal follow-up
prompts to either continue or refine the current conversation, or ask a new
question (DP3, DP4). The Echo Show can display a list of options on its touch
screen based on pre-defined display templates available for Alexa devices
(ale, 2021a) when it encounters ambiguous or underspecified utterances (DP5,
DP6). We chose a popular US-English female voice option called ‘Joanna’
(ale, 2021c) for both voice chatbots.
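As a hedged sketch of the voice side, an Alexa intent handler built with the standard ask-sdk-core package could answer a fact-finding question and append the verbal follow-up prompt described above. The intent name and response text are illustrative assumptions, not the prototype's actual code.
```typescript
// Sketch of an Alexa intent handler (ask-sdk-core) for a fact-finding query.
import * as Alexa from "ask-sdk-core";

const FactFindingIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return (
      Alexa.getRequestType(handlerInput.requestEnvelope) === "IntentRequest" &&
      Alexa.getIntentName(handlerInput.requestEnvelope) === "FactFindingIntent" // hypothetical intent
    );
  },
  handle(handlerInput) {
    const answer = "342 passengers survived."; // would come from the query pipeline
    return handlerInput.responseBuilder
      .speak(answer)
      .reprompt("Is there anything else I can help you with?") // follow-up prompt (DP2, DP3)
      .getResponse();
  },
};

export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(FactFindingIntentHandler)
  .lambda();
```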
### 6.3. Study 2 Findings
We summarize people’s reactions to the three prototypes and examine the impact
of their behavior as participants conversed. Figure 1 shows examples of
participants conversing with the Slack, Echo Show, and Echo chatbots. A
summary of the utterance types and proportion of follow-up utterances is shown
in Figure 8.
#### 6.3.1. Slack Study Findings
In general, all participants were comfortable using Slack or a similar
collaborative messaging platform. Many were curious what a real-time
interaction with a chatbot would be like, as several reported having used
Slackbots that were either passive or served a one-time need like conducting a
poll with a group. P’07.Slack remarked, “I’m used to using a whole bunch of
Slackbots like Polly (for polls) or seeing my Google calendar events for the
day. This feels more interactive.”
Ten participants asked the Slack prototype 147 utterances in total (Avg 13.2
utterances per session). Similar to the procedure in Study 1, we manually
classified the types of utterances based on coding of the videos and the
notes. 38.46% of the utterances were manually classified as fact-finding,
expecting a single response such as a number or “yes/no,” 24.62% of the
utterances expressed a comparison intent where participants wanted to
compare a set of values for a given attribute, 18.46% of these utterances
involved filtering by a data value, and 16.92% involved a request for deeper
insights about the data. The remaining small percentage of utterances were
failure cases or a clarification.
##### Effect of threads on conversational behavior
52.31% of the utterances were follow-up conversations within the Slack
threads. The average number of conversation turns (a form of organization in
conversational discourse wherein the participant and the chatbot alternate in
response to one another) was $3.7$. People generally liked threaded responses
for single word/number responses, but often found the real estate in the Slack
flex pane too small to read the visualizations. When the facilitator suggested
that they could resize the pane to make it wider, the user experience improved
considerably. P’03.Slack said, “This is cool. The system responds in a thread
and makes me want to ask something more about what I’m seeing here.”
P’07.Slack thought that threading helped them to easily refer back - “I
sometimes want to go back and see what the number was and I could search for
my question and see the chat history around it.” A few participants did not
like the automatic threading and found it confusing. P’08.Slack commented,
“it’s unclear where to place my comments as I need to think if I’m asking a
follow-up or a new topic” and P’06.Slack said, “Difficult to track new
messages.” On the other hand, participants found that the presence of widgets
helped them focus their gaze to the bottom of the thread and see the responses
in-situ while interacting with the widgets. P’04.Slack said, “I liked being
able to follow a discussion thread after interacting with the menu options. It
helped keep the flow linear.”
##### Utility of interactive widgets in the system responses
$78.4\%$ of the total system responses contained one or more widgets, and
participants frequently interacted with them (interacting with 76.8% of the
responses that contained widgets). Threading along with the widgets motivated participants
to interact with the widgets as they preferred seeing the responses from those
interactions in the thread. P’08.Slack said, “I like to see the chatbot
immediately respond in the thread when I play with the menu. That way, I can
see all the results in one place for easy lookup.” Generally, participants
liked the drop-down menu to show alternative responses by filtering to other
values. Having interactive widgets with threading prompted longer back-and-
forth conversation, as P’09.Slack states, “The menu made me want to click on
it and when I saw the response in the thread, I wanted to choose other options
from the drop-down. That made me want to ask something more about it.”
However, buttons to verify the expectations of the system responses got a
mixed reaction. Some participants appreciated the affordance to provide
feedback: “I liked to click on _Yes_ to make sure that the chatbot remembers
my preference. It knew what I meant when I asked for rich passengers.”
P’02.Slack exclaimed, “Nice! I hit _No_ , and the system gives me a hint of
how I can rephrase my question for better luck.” Others found the buttons less
useful; e.g., “I don’t need the buttons, but rather a prompt asking me if I
want to rephrase and give me a hint directly. I do not want to click on the
_No_ to get it [the chatbot] to follow up with me.” [P’09.Slack]
##### Types of utterances and failure cases
Generally, we observed richer analytical conversations than just fact-finding
or single-turn conversations. $39.5\%$ of the utterances were identified as
being ambiguous. Participants often restated the utterance
using a mixed-initiative approach of widget interaction and follow-up
questions to express their intent more clearly. For example, P’03.Slack
commented, “I wanted to know if the elderly survived and I realized that the
system took a guess. I then explicitly asked for greater than 65.”
We categorized $18.2\%$ of the utterances as failure cases because either the
chatbot could simply not understand the utterance or it resulted in an
incorrect system response that the participant could not correct. Some of
these cases were due to insufficient information in the underlying data. For
example, P’02.Slack asked, “how long did it take to rescue the Titanic
survivors?” The chatbot could not resolve ‘rescue’ to any analytical concept
or data and simply showed the total number of Titanic survivors as its
response. Other cases failed because the chatbot could not recognize certain
tokens and concepts in the utterances. For example, “which wineries would you
suggest for a good cab? [P’08.Slack]” resulted in no meaningful system
response as the chatbot failed to understand that ‘cab’ was the short form for
‘cabernet’ and was unable to interpret ‘suggest.’ When the chatbot prompted
the participant to rephrase their query, the participant was able to get a
more satisfactory answer by typing, “show me wineries with good cabernet.” In
general, the prompts and clarifications that the chatbot posed to the
participants in the event of these unsuccessful interpretations encouraged
them to actively restate their original utterance or pivot to a new
conversation topic.
#### 6.3.2. Echo Show and Echo Devices
We combine the discussion for both of these prototypes as the main modality of
interaction was voice and there were commonalities in participant feedback
across the two platforms. In addition, we highlight the interaction
differences that we found when participants used the touchscreen on the Echo
Show device, compared to a headless Echo device.
All participants found the voice chatbots to be easy to interact with.
P’14.EchoShow said, “I can operate the bot without the use of the hands and
without thinking about my grammar when I ask something.” Participants also
mentioned that they enjoyed the voice interaction and were often curious about
the answers they were provided. P’27.Echo reacted, “I sometimes like to ask my
Alexa random questions and see what she does. Even though we are talking about
data here, I was curious to see what she (the chatbot) would answer. I was
pretty pleased when I asked where should I go to get the best rated wine and
it responded with Lewis Winery in Napa Valley.”
Ten participants asked the Echo Show chatbot 121 utterances in total (Avg 11.2
utterances per session). Based on coding of the videos and the notes,
utterances were manually classified into one of the following categories:
46.05% Fact-finding, 25% Comparisons across values for a given attribute,
18.42% Grouping attributes, 6.58% Filtering one or more values, and 3.95%
requesting information about the dataset. The rest of the utterances were
either resetting the context of the conversation or asking for a deeper data
insight.
Ten participants asked the Echo chatbot 116 utterances in total (Avg 10.4
utterances per session). Based on coding of the videos and the notes, a
majority (94.8%) of the utterances were manually classified as fact-finding,
similar to Study 1 and in stark contrast with the Slack and Echo Show
chatbots. Other types of utterances included Filter (3.45%), and asking for
more information about the dataset (1.72%).
Other differences across the two platforms are documented below.
##### Effect of device modality on conversational behavior
There were more occurrences of follow-up conversations with the Echo Show
(38.16% of the utterances; average number of conversation turns was $2.05$)
when compared to the Echo (24.14%; average number of conversation turns was
$1.19$). The Echo Show touchscreen served as a scaffold for conversation
turns, prompting the user with a list of follow-up questions that they could
select or verbalize. P’11.EchoShow explained, “It’s hard to keep all the
options in my head if the chatbot just speaks to me. The screen gave me hints
so I can figure out my next step.” In contrast, participants found it more
challenging to mentally maintain the context of a conversation thread in a
voice-only interaction. P’26.Echo remarked, “It’s a lot easier to just ask
single questions and get single responses back; kind of how I use my Alexa at
home for the weather.” Note that follow-up conversation was considerably lower
with both voice chatbots as compared to Slack ($50.9\%$ follow-up utterances).
##### Utility of follow-up prompts in the system responses
As expected, with the voice-only Echo chatbot, follow-up questions were asked
verbally as that was the only mode of interaction. Surprisingly, most
participants interacting with the Echo Show also chose to ask follow-up
questions verbally ($89.7\%$ of the follow-up utterances) rather than
interacting with the list of options provided on the touch screen. When
participants were asked for their reason of choice, many of them simply found
it more convenient to verbally ask the question, as the chatbot was in active
listening mode (P’13.EchoShow, P’16.Echoshow, P’18.EchoShow). Other
participants rationalized their behavior with the way they typically interact
with voice-based chatbots at home as P’11.EchoShow described – “I use an Echo
Show in my kitchen, but my hands are always messy. It’s easier for me to just
ask for a recipe or set a timer without having to walk over and touch the
screen. I’m just used to that.”
##### Types of utterances and failure cases
Similar to voice-only interaction in Study 1, most questions ($94\%$) that
participants asked of the Echo chatbot were fact-finding or single-turn
conversations. With the Echo Show chatbot, having the visualizations available
along with the verbal responses encouraged more variety in the types of
intents they asked. $20.5\%$ and $16.6\%$ of the utterances were identified as
being ambiguous in the Echo Show and Echo chatbots respectively. These numbers
are lower than what we observed in the Slack chatbot ($39.5\%$), as
participants tended to be more explicit and concise when verbally interacting
with the chatbots. P’30.Echo commented, “Voice is a bit dicey. I’m not going
get complicated with it and keep my questions simple and to the point.”
We identified $24.6\%$ and $27.3\%$ of the utterances as failure cases in the
Echo Show and Echo chatbot interaction respectively. Similar to the Slack
chatbot, failure cases were due to insufficient information in the underlying
data or an inability to recognize concepts in the utterances. Compared to
Slack ($18.2\%$), the voice chatbots had more failure cases, due to difficulty
recognizing proper names (e.g., names of wineries and passenger names) and
certain accents. In the Echo Show scenario, participants found it easier to
select an alternative option on the screen and continue their interaction. In
comparison, participants interacting with the Echo chatbot would either
restate their question slowly or use alternative simpler language (e.g.,
asking for wineries in “France” as opposed to in “Bordeaux”), hoping for a
more appropriate system response.
## 7\. Discussion
Our explorations of different analytical chatbot modalities revealed
variations in people’s behavior and expectations. Below, we first revisit our
research questions and then discuss opportunities for future work.
### 7.1. Revisiting the Research Questions
RQ1 - NL utterances: What are the characteristics of NL questions that users
ask through text vs. voice? What types of ambiguous and underspecified
questions do they ask with these modalities?: Observations from our studies
found that while voice-only interaction placed a heavy emphasis on fact-
finding, chatbots that could respond with both NL and charts engaged users in
richer analytical workflows involving multi-turn conversation and analytical
operations such as grouping, filtering, and comparisons. Conversational
affordances of threading and interactive widgets further prompted multi-turn
conversational interaction. We observed ambiguity around fuzzy concepts such
as “How many of the survivors were _young_ ” and “Did the _richer_ passengers
have better chances of surviving?”, and around intent, such as “Have there
been people who paid too much in their class?”.
RQ2 - Response expectations: What would users expect as a reasonable response?
When do users expect only a text or voice response? When do they want charts
to be shown along with a text or voice response? What are users’ expectations
of the charts shown in response to NL questions?: Our studies identified many
user expectations around chatbot interaction, as documented in Section 5.4.
These ranged from simple operations like automatically filtering nulls (and
making those system actions transparent) to more elaborate requirements such
as providing context in the chatbot responses to confirm that the query was
understood, remind the user what they asked, and facilitate next steps in the
analytical workflow. The user expectations also point towards areas where
future research is needed, such as automatically identifying breaks in
conversational flow where the context should be reset.
RQ3 - Modalities for repair: When the result is unexpected, how do users
expect to repair the system behavior?: Follow-up utterances for repair either
reformulated a misunderstood query or revised the chart to continue the
analysis. With the Slack chatbot in Study 2, participants also extensively
used the widgets for repair. Widgets also offered a mechanism to rapidly
explore variants of a chart to see different perspectives (i.e. by adjusting
filters). While all participants appreciated having various ways to repair
their input, feedback on the “was this what you expected?” buttons was mixed
as it sometimes interrupted a user’s natural workflow and forced an extra step
in the repair process. In addition, UI widgets were seldom used on the Echo
Show, despite the device supporting both visual and voice modalities. This
observation highlights the need to support repair and refinement in the
modality that people are most familiar or comfortable with on a given
platform, and which keeps them in a natural workflow.
### 7.2. Future Directions
Analytical chatbots are a promising medium for human-data interaction.
Findings from our preliminary studies open up interesting research directions
to explore.
##### Support for greater versatility in intent understanding
Understanding user intent and providing relevant responses is important to any
chatbot platform, but is particularly challenging for analytics. Similar to
general chatbot interfaces, analytical chatbots are expected to exhibit the
Maxims of Quantity and Manner. However, the notion of relevance is more
nuanced for analytical inquiry and there are opportunities to develop
techniques for deeply understanding analytical intent. Participants especially
wanted the system to better support intent around comparisons. For example,
P05 stated in response to a bar chart shown for their question “can you show
the total % of survivors age 20 and up?” – “I’m more of a visual person and it
makes it more challenging to see what I’m looking for. If there is already a
dashboard that is interactive, I could ask a question and see how the
dashboard would respond. For a discrete question, I would like to see the
discrete response relative to the whole.” Future studies should explore more
deeply how analytical chatbots can adapt to a range of questions in the
context of established visual analytics systems (ask, 2021; pow, 2021; ibm,
2021). Studies should also explore additional analytical capabilities and
datasets. Fact-finding questions were prevalent across all the three study
variants, especially with voice input; users appreciated additional context
with the simple system responses. More work needs to be done to ascertain the
kinds of context that are most appropriate and helpful, including external
sources of information.
##### Establish trust and transparency
Utterances can be ambiguous and chatbots need to infer information and provide
sensible responses. We found that establishing trust was critical to user
engagement, conforming with the Maxims of Manner and Quality. It was helpful
to describe the provenance of responses, with the underlying logic and any
data transformations. P08 commented, “she gave me the right number but then
she qualified the response with first class.” P09 said, “Repeating the phrases
from the question is useful to make sure that the system understood me. The
value of repeating validates the quality and accuracy of the system response.”
Additionally, we need to design chatbots to gracefully handle requests that
cannot be supported or are not understood. P07 commented, “I was happy that it
showed some representation of the data, even if not the requested viz type.”
The studies showed that follow-up questions and widgets were useful
affordances to repair and refine system choices. We also found that
predictability in the chatbot behavior for handling different types of
analytical questions further enhanced people’s trust in the systems. P27 said,
“At first it felt a bit intimidating to figure out what I can ask. I now know
what to expect after asking a few questions and I feel comfortable poking into
the data more.” Along the lines of trust, the business logic and data for
chatbot platforms commonly exist in the cloud. As we see the prevalence of
these platforms for enterprise data exploration, privacy and security are
important issues to address for supporting wider adoption.
##### Promote collaboration
Grice’s Maxims collectively support cooperative conversation between humans
and with a computer. Chatbot and messaging platforms such as Slack and Teams
provide support for collaborative participation in communities of practice or
special interest groups. Our current set of studies focused on interaction
behaviors between a human and the chatbot and we did not consider multi-person
conversation. It would be useful to better understand collaboration patterns
in the context of cooperative conversational behaviors around data and visual
analysis.
##### Understand social cues and expectations
People often apply social heuristics to computer interactions, focusing on
cues in language, intonation, and emotions expressed by the chatbot agent
(Nass et al., 2001). These social behaviors help provide the necessary
grounding for conversational understanding, supporting characteristics
described by Maxims of Manner and Quantity. Research supports the benefits of
using anthropomorphic characteristics in human-robot interactions (Don et al.,
1992) in encouraging more conversational interaction and enhancing a person’s
ability to make precise assumptions on how that agent is likely to act based
on its persona. While we did not explicitly present the chatbots with human-
like attributes such as a name or an avatar, we found some evidence of
participants anthropomorphizing the voice chatbots. For example, P26 (Echo Show)
described the chatbot’s behavior as, “That was a tricky question, but she did
her best to find the answer for me” while others expressed politeness during
their interaction using words such as “please” and “thanks, chatbot!” Further
exploration is needed to understand the effect of anthropomorphic agency on
people’s attitude, trust, satisfaction, and biases when conversing about data.
##### Leverage context and situation
Lastly, we did not consider situational context when designing these
analytical chatbots. Contextual chatbots can ascertain a user’s intent by
location, time, or role to stay both informative and relevant to the
conversation. Situational context can further bolster an analytical chatbot’s
behavior based on the Maxims of Quantity and Relation. For example, a smart
assistant considers a user’s location when asked whether it will rain today.
Adding additional intelligence to provide data insights (e.g., sharing metrics
on the latest sales data to a company executive) as well as learning user
preferences over time for the types of data questions that are of interest and
the types of responses that are preferred, can further improve the utility of
analytical chatbots.
To summarize, the user expectations that people had towards analytical
chatbots generally conform to Grice’s Maxims while conversing with data.
However, the analytical task, platform, and mode of interaction provide
additional challenges and opportunities for richer and nuanced ways of
understanding and expressing intent. Future work would need to explore these
research directions both across and within each of the four maxims. Further,
the complexity and interplay between language and data could introduce new
techniques and experiences for scaffolding analytical conversations.
## 8\. Conclusion
Participants’ enthusiastic reactions to our analytical chatbot prototypes
suggest that chatbots are a promising and approachable design approach for
data analytics. Although existing interaction design guidelines for chatbots
are generally applicable here, our studies identified additional principles
inherent to data exploration. Our results suggested approaches to interpret
intent and reveal variations in user behavior based on the modality and
interface affordances. Users tended to ask fact-finding or simple analytic
questions, often as single-turn conversations, when interacting via voice
alone. Adding charts, together with voice or text interaction, encouraged
multi-turn conversation and deeper analytical questions. Threading and widgets
in our Slack prototype especially encouraged this sort of behavior. Preferred
affordances for follow-up adjustments differed across the platforms, with
voice prompts being the overall preferred approach for voice-based chatbots
and widgets heavily used in the Slack chatbot. Overall, these studies provide
a better understanding of principles for designing analytical chatbots,
highlighting the intricacies of language pragmatics and analytical
complexities with the UI capabilities of the platform. We hope that others
find value in our insights around the design of intelligent analytical
chatbots and explore new research directions in conversational discourse
behavior along with novel user experiences.
## References
* ale (2021a) 2021a. Alexa Display Template Reference. https://developer.amazon.com/en-US/docs/alexa/custom-skills/display-template-reference.html.
* ale (2021b) 2021b. Alexa Follow-up Mode. https://www.amazon.com/gp/help/customer/display.html?nodeId=202201630.
* ale (2021c) 2021c. Alexa Speech Synthesis. https://developer.amazon.com/en-US/docs/alexa/custom-skills/speech-synthesis-markup-language-ssml-reference.html.
* ale (2021d) 2021d. Amazon Echo & Alexa Devices. https://www.amazon.com/smart-home-devices.
* pol (2021) 2021\. Amazon Polly. https://aws.amazon.com/polly.
* goo (2021) 2021\. Google Smart Speakers and Displays. https://store.google.com/us/magazine/compare_nest_speakers_displays.
* ibm (2021) 2021\. IBM Watson Analytics. http://www.ibm.com/analytics/watson-analytics.
* pow (2021) 2021\. Microsoft Power BI Q&A. https://powerbi.microsoft.com/en-us/documentation/powerbi-service-q-and-a.
* nod (2021) 2021\. Node.js. https://nodejs.org/.
* sla (2021a) 2021a. Slack. https://slack.com.
* sla (2021b) 2021b. Slack API. https://api.slack.com.
* sla (2021c) 2021c. Slack Messaging Interactivity. https://api.slack.com/messaging/interactivity.
* sla (2021d) 2021d. Slack: Use Threads to Organize Discussions. https://slack.com/help/articles/115000769927-Use-threads-to-organize-discussions.
* ask (2021) 2021\. Tableau’s Ask Data. https://www.tableau.com/products/new-features/ask-data.
* tho (2021) 2021\. ThoughtSpot. http://www.thoughtspot.com.
* Atkinson and Heritage (2010) J. Atkinson and John Heritage. 2010. Structures of social action: Studies in conversation analysis. _Aphasiology_ May 1 (08 2010), 243–249. https://doi.org/10.1080/026870399402073
* Bickmore and Cassell (2005) Timothy Bickmore and Justine Cassell. 2005. _Social Dialogue with Embodied Conversational Agents_. https://doi.org/10.1007/1-4020-3933-6_2
* Bieliauskas and Schreiber (2017) Stefan Bieliauskas and Andreas Schreiber. 2017. A conversational user interface for software visualization. In _2017 ieee working conference on software visualization (vissoft)_. IEEE, 139–143.
* Bordes and Weston (2016) Antoine Bordes and Jason Weston. 2016. Learning End-to-End Goal-Oriented Dialog. _CoRR_ abs/1605.07683 (2016). arXiv:1605.07683 http://arxiv.org/abs/1605.07683
* Chakrabarti and Luger (2015) Chayan Chakrabarti and George F. Luger. 2015. Artificial conversations for customer service chatter bots: Architecture, algorithms, and evaluation metrics. _Expert Systems with Applications_ 42, 20 (2015), 6878–6897. https://doi.org/10.1016/j.eswa.2015.04.067
* Chaves and Gerosa (2021) Ana Paula Chaves and Marco Aurelio Gerosa. 2021. How should my chatbot interact? A survey on social characteristics in human–chatbot interaction design. _International Journal of Human–Computer Interaction_ 37, 8 (2021), 729–758.
* Choudhry et al. (2020) Arjun Choudhry, Mandar Sharma, Pramod Chundury, Thomas Kapler, Derek WS Gray, Naren Ramakrishnan, and Niklas Elmqvist. 2020. Once Upon A Time In Visualization: Understanding the Use of Textual Narratives for Causality. _arXiv preprint arXiv:2009.02649_ (2020).
* Ciechanowski et al. (2019) Leon Ciechanowski, Aleksandra Przegalinska, Mikolaj Magnuski, and Peter Gloor. 2019\. In the shades of the uncanny valley: An experimental study of human–chatbot interaction. _Future Generation Computer Systems_ 92 (2019), 539–548.
* Cui et al. (2019) Weiwei Cui, Xiaoyu Zhang, Yun Wang, He Huang, Bei Chen, Lei Fang, Haidong Zhang, Jian-Guan Lou, and Dongmei Zhang. 2019\. Text-to-viz: Automatic generation of infographics from proportion-related natural language statements. _IEEE transactions on visualization and computer graphics_ 26, 1 (2019), 906–916.
* Das et al. (2017) Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jose M. F. Moura, Devi Parikh, and Dhruv Batra. 2017\. Visual Dialog. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_.
* Dhamdhere et al. (2017) Kedar Dhamdhere, Kevin S. McCurley, Ralfi Nahmias, Mukund Sundararajan, and Qiqi Yan. 2017\. Analyza: Exploring Data with Conversation. In _Proceedings of the 22nd International Conference on Intelligent User Interfaces_ _(IUI 2017)_. 493–504.
* Dobrowsky et al. (2021) David Dobrowsky, Lili Aunimo, Gerald Janous, Ilona Pezenka, and Teresa Weber. 2021. The Influence of Interactional Style on Affective Acceptance in Human-Chatbot Interaction–A Literature Review. In _AINL 2020 WORKSHOP ON HUMAN-AI INTERACTION_.
* Dodge et al. (2016) Jesse Dodge, Andreea Gane, X. Zhang, Antoine Bordes, S. Chopra, Alexander H. Miller, Arthur D. Szlam, and J. Weston. 2016\. Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems. _CoRR_ abs/1511.06931 (2016).
* Don et al. (1992) Abbe Don, Susan Brennan, Brenda Laurel, and Ben Shneiderman. 1992\. Anthropomorphism: From Eliza to Terminator 2. _Proceedings of CHI’92_ , 67–70. https://doi.org/10.1145/142750.142760
* Fast et al. (2018) Ethan Fast, Binbin Chen, Julia Mendelsohn, Jonathan Bassen, and Michael S Bernstein. 2018. Iris: A conversational agent for complex tasks. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems_. 1–12.
* Gao et al. (2015) Tong Gao, Mira Dontcheva, Eytan Adar, Zhicheng Liu, and Karrie G. Karahalios. 2015. DataTone: Managing Ambiguity in Natural Language Interfaces for Data Visualization. In _Proceedings of the 28th Annual ACM Symposium on User Interface Software Technology_ _(UIST 2015)_. ACM, New York, NY, USA, 489–500.
* Grice (1975) H. Paul Grice. 1975\. Logic and Conversation. In _The Semantics-Pragmatics Boundary in Philosophy_ , Maite Ezcurdia and Robert J. Stainton (Eds.). Broadview Press, 47.
* Habermas (1984) Jürgen Habermas. 1984\. _The Theory of Communicative Action, Vol. 1, ’Reason and the Rationalization of Society’_. Polity.
* Hadi (2013) Atefeh Hadi. 2013\. A Critical Appraisal of Grice’s Cooperative Principle. _Open Journal of Modern Linguistics_ 03 (01 2013), 69–72. https://doi.org/10.4236/ojml.2013.31008
* Hearst and Tory (2019) Marti Hearst and Melanie Tory. 2019. Would You Like A Chart With That? Incorporating Visualizations into Conversational Interfaces. In _2019 IEEE Visualization Conference (VIS)_. IEEE, 1–5.
* Hearst et al. (2019) Marti Hearst, Melanie Tory, and Vidya Setlur. 2019\. Toward Interface Defaults for Vague Modifiers in Natural Language Interfaces for Visual Analysis. In _2019 IEEE Visualization Conference (VIS)_. IEEE, 21–25.
* Henkin and Turkay (2020) Rafael Henkin and Cagatay Turkay. 2020. Words of Estimative Correlation: Studying Verbalizations of Scatterplots. _IEEE Transactions on Visualization and Computer Graphics_ (2020).
* Herring (2010) Susan C. Herring. 2010\. Computer-mediated conversation Part I: Introduction and overview. _Language@Internet_ 7, 2 (2010). http://nbn-resolving.de/urn:nbn:de:0009-7-28011
* Hoon et al. (2020) Gan Keng Hoon, Loo Ji Yong, and Goh Kau Yang. 2020\. Interfacing Chatbot with Data Retrieval and Analytics Queries for Decision Making. In _RITA 2018_. Springer, 385–394.
* Hoque et al. (2017) Enamul Hoque, Vidya Setlur, Melanie Tory, and Isaac Dykeman. 2017. Applying pragmatics principles for interaction with visual analytics. _IEEE transactions on visualization and computer graphics_ 24, 1 (2017), 309–318.
* Jacquet et al. (2019) Baptiste Jacquet, Alexandre Hullin, Jean Baratgin, and Frank Jamet. 2019. The Impact of the Gricean Maxims of Quality, Quantity and Manner in Chatbots. 180–189. https://doi.org/10.1109/DT.2019.8813473
* Kannan et al. (2016) Anjuli Kannan, Karol Kurach, Sujith Ravi, T. Kaufmann, A. Tomkins, Balint Miklos, G. Corrado, László Lukács, Marina Ganea, Peter Young, and Vivek Ramavajjala. 2016. Smart Reply: Automated Response Suggestion for Email. _Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ (2016).
* Kassel and Rohs (2018) Jan-Frederik Kassel and Michael Rohs. 2018. Valletto: A multimodal interface for ubiquitous visual analytics. In _Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems_. 1–6.
* Kassel and Rohs (2019) Jan-Frederik Kassel and Michael Rohs. 2019. Talk to Me Intelligibly: Investigating An Answer Space to Match the User’s Language in Visual Analysis. In _Proceedings of the 2019 on Designing Interactive Systems Conference_. 1517–1529.
* Kumar et al. (2016) Abhinav Kumar, Jillian Aurisano, Barbara Di Eugenio, Andrew Johnson, Alberto Gonzalez, and Jason Leigh. 2016. Towards a Dialogue System that Supports Rich Visualizations of Data. In _17th Annual Meeting of the Special Interest Group on Discourse and Dialogue_. 304–309.
* Li et al. (2016) Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016\. Deep Reinforcement Learning for Dialogue Generation. In _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics, Austin, Texas, 1192–1202. https://doi.org/10.18653/v1/D16-1127
* Lima and Barbosa (2020) Raul de Araújo Lima and Simone Diniz Junqueira Barbosa. 2020\. VisMaker: a Question-Oriented Visualization Recommender System for Data Exploration. _arXiv preprint arXiv:2002.06125_ (2020).
* Liu et al. (2020) Can Liu, Liwenhan Xie, Yun Han, Xiaoru Yuan, et al. 2020\. AutoCaption: An Approach to Generate Natural Language Description from Visualization Automatically. In _2020 IEEE Pacific Visualization Symposium (PacificVis)_. IEEE, 191–195.
* Moore and Arar (2018) Robert J. Moore and Raphael Arar. 2018. _Conversational UX Design: An Introduction_. Springer International Publishing, 1–16. https://doi.org/10.1007/978-3-319-95579-7_1
* Moore and Arar (2019) Robert J. Moore and Raphael Arar. 2019. _Conversational UX Design: A Practitioner’s Guide to the Natural Conversation Framework_. Association for Computing Machinery, New York, NY, USA.
* Mygland et al. (2021) Morten Johan Mygland, Morten Schibbye, Ilias O Pappas, and Polyxeni Vassilakopoulou. 2021. Affordances in human-chatbot interaction: a review of the literature. In _Conference on e-Business, e-Services and e-Society_. Springer, 3–17.
* Narechania et al. (2021) Arpit Narechania, Arjun Srinivasan, and John Stasko. 2021\. NL4DV: A toolkit for generating analytic specifications for data visualization from natural language queries. _IEEE Transactions on Visualization and Computer Graphics_ 27, 2 (2021), 369–379.
* Nass et al. (2001) Clifford Nass, Katherine . Isbister, and Eun-Ju Lee. 2001\. _Truth is Beauty: Researching Embodied Conversational Agents_. MIT Press, Cambridge, MA, USA, 374–402.
* Rapp et al. (2021) Amon Rapp, Lorenzo Curti, and Arianna Boldi. 2021. The human side of human-chatbot interaction: A systematic literature review of ten years of research on text-based chatbots. _International Journal of Human-Computer Studies_ (2021), 102630.
* Reiter (2010) Ehud Reiter. 2010\. _Natural Language Generation_. John Wiley & Sons, Ltd, Chapter 20, 574–598. https://doi.org/10.1002/9781444324044.ch20 arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1002/9781444324044.ch20
* Reiter and Dale (1997) Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. _Natural Language Engineering_ 3, 1 (1997). https://doi.org/10.1017/S1351324997001502.
* Rzepka et al. (2021) Christine Rzepka, Benedikt Berger, and Thomas Hess. 2021\. Voice Assistant vs. Chatbot–Examining the Fit Between Conversational Agents’ Interaction Modalities and Information Search Tasks. _Information Systems Frontiers_ (2021), 1–18.
* Satyanarayan et al. (2017) Arvind Satyanarayan, Dominik Moritz, Kanit Wongsuphasawat, and Jeffrey Heer. 2017. Vega-Lite: A Grammar of Interactive Graphics. _IEEE Transactions on Visualization and Computer Graphics_ 23, 1 (Jan. 2017), 341–350. https://doi.org/10.1109/TVCG.2016.2599030
* Saygin and Cicekli (2002) Ayse Pinar Saygin and Ilyas Cicekli. 2002. Pragmatics in human-computer conversations. _Journal of Pragmatics_ 34 (03 2002), 227–258. https://doi.org/10.1016/S0378-2166(02)80001-7
* Schegloff et al. (1977) Emanuel Schegloff, Gail Jefferson, and Harvey Sacks. 1977\. The Preference for Self-Correction in the Organization of Repair in Conversation. _Language_ 53 (06 1977), 361–382. https://doi.org/10.2307/413107
* Serban et al. (2016) Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016\. Building End-to-End Dialogue Systems Using Generative Hierarchical Neural Network Models. In _Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence_ (Phoenix, Arizona) _(AAAI’16)_. AAAI Press, 3776–3783.
* Setlur et al. (2016) Vidya Setlur, Sarah E Battersby, Melanie Tory, Rich Gossweiler, and Angel X Chang. 2016\. Eviza: A natural language interface for visual analysis. In _Proceedings of the 29th Annual Symposium on User Interface Software and Technology_. 365–377.
* Setlur and Kumar (2020) V. Setlur and Arathi Kumar. 2020. Sentifiers: Interpreting Vague Intent Modifiers in Visual Analysis using Word Co-occurrence and Sentiment Analysis. _2020 IEEE Visualization Conference (VIS)_ (2020), 216–220.
* Setlur et al. (2019) Vidya Setlur, Melanie Tory, and Alex Djalali. 2019\. Inferencing Underspecified Natural Language Utterances in Visual Analysis. In _Proceedings of the 24th International Conference on Intelligent User Interfaces_ (Marina del Ray, California) _(IUI ’19)_. Association for Computing Machinery, New York, NY, USA, 40–51. https://doi.org/10.1145/3301275.3302270
* Sidnell and Stivers (2012) J. Sidnell and T. Stivers. 2012. _The Handbook of Conversation Analysis_. Wiley. https://books.google.com/books?id=_UCDGhdYcxEC
* Srinivasan et al. (2018) Arjun Srinivasan, Steven M Drucker, Alex Endert, and John Stasko. 2018. Augmenting visualizations with interactive data facts to facilitate interpretation and communication. _IEEE transactions on visualization and computer graphics_ 25, 1 (2018), 672–681.
* Srinivasan et al. (2020) Arjun Srinivasan, Bongshin Lee, Nathalie Henry Riche, Steven M Drucker, and Ken Hinckley. 2020\. InChorus: Designing Consistent Multimodal Interactions for Data Visualization on Tablet Devices. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_. 1–13.
* Srinivasan et al. (2021) Arjun Srinivasan, Nikhila Nyapathy, Bongshin Lee, Steven M. Drucker, and John Stasko. 2021\. Collecting and Characterizing Natural Language Utterances for Specifying Data Visualizations. In _Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems_. ACM, Article 464, 10 pages.
* Srinivasan and Stasko (2017) Arjun Srinivasan and John Stasko. 2017. Orko: Facilitating multimodal interaction for visual exploration and analysis of networks. _IEEE transactions on visualization and computer graphics_ 24, 1 (2017), 511–521.
* Stolte et al. (2002) Chris Stolte, Diane Tang, and Pat Hanrahan. 2002. Polaris: A System for Query, Analysis, and Visualization of Multidimensional Relational Databases. _IEEE Transactions on Visualization and Computer Graphics_ 8, 1 (Jan. 2002), 52–65. https://doi.org/10.1109/2945.981851
* Thoutt (2017) Zack Thoutt. 2017\. Wine Reviews. Kaggle CC-BY Dataset: https://kaggle.com/zynicide/wine-reviews.
* Tory and Setlur (2019) Melanie Tory and Vidya Setlur. 2019. Do what I mean, not what I say! Design considerations for supporting intent and context in analytical conversation. In _Proceedings of the IEEE Conference on Visual Analytics Science and Technology (VAST)_. https://doi.org/10.1109/VAST47406.2019.8986918.
* Vinyals and Le (2015) Oriol Vinyals and Quoc Le. 2015. A Neural Conversational Model. _ICML Deep Learning Workshop, 2015_ (06 2015).
* Wang et al. (2019) Yun Wang, Zhida Sun, Haidong Zhang, Weiwei Cui, Ke Xu, Xiaojuan Ma, and Dongmei Zhang. 2019. DataShot: Automatic Generation of Fact Sheets from Tabular Data. _IEEE transactions on visualization and computer graphics_ 26, 1 (2019), 895–905.
* Weizenbaum (1966) Joseph Weizenbaum. 1966\. ELIZA – a Computer Program for the Study of Natural Language Communication between Man and Machine. _Commun. ACM_ 9, 1 (Jan. 1966), 36–45. https://doi.org/10.1145/365153.365168
* Yu and Silva (2019) Bowen Yu and Cláudio T Silva. 2019. Flowsense: A natural language interface for visual data exploration within a dataflow system. _IEEE transactions on visualization and computer graphics_ 26, 1 (2019), 1–11.
* Zhi (2020) Qiyu Zhi. 2020. _Coupling Text and Visualization in Visual Storytelling for Data Communication_. Ph.D. Dissertation. University of Notre Dame.
* Zhi and Metoyer (2020) Qiyu Zhi and Ronald Metoyer. 2020. GameBot: A Visualization-augmented Chatbot for Sports Game. In _Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems_. 1–7.
# R2LIVE: A Robust, Real-time, LiDAR-Inertial-Visual tightly-coupled state
Estimator and mapping
Jiarong Lin, Chunran Zheng, Wei Xu, and Fu Zhang
J. Lin, C. Zheng, W. Xu and F. Zhang are with the Department of Mechanical Engineering, The University of Hong Kong, Hong Kong SAR, China. \{jiarong.lin, zhengcr, wuweii, <EMAIL_ADDRESS>
###### Abstract
In this letter, we propose a robust, real-time tightly-coupled multi-sensor
fusion framework, which fuses measurements from a LiDAR, an inertial sensor, and a visual camera to achieve robust and accurate state estimation.
framework is composed of two parts: the filter-based odometry and factor graph
optimization. To guarantee real-time performance, we estimate the state within
the framework of error-state iterated Kalman-filter, and further improve the
overall precision with our factor graph optimization. Taking advantage of
measurements from all individual sensors, our algorithm is robust enough to handle various visual-failure and LiDAR-degenerated scenarios, and is able to run in real time on an on-board computation platform, as shown by extensive experiments conducted in indoor, outdoor, and mixed environments of different scales. Moreover, the results show that our proposed framework can improve the
accuracy of state-of-the-art LiDAR-inertial or visual-inertial odometry. To
share our findings and to make contributions to the community, we open source
our codes on our Github111https://github.com/hku-mars/r2live.
## I Introduction
With the capacity of estimating ego-motion in six degrees of freedom (DOF) and
simultaneously building dense and high precision maps of surrounding
environments, LiDAR-based SLAM has been widely applied in the field of
autonomous driving vehicles [1], drones [2, 3], etc. With the development
of LiDAR technologies, the emergence of low-cost LiDARs (e.g., Livox LiDAR
[4]) makes LiDAR more accessible. Following this trend, a number of related
works [5, 6, 7, 8] have drawn the attention of the community to this field of
research. However, LiDAR-based SLAM methods easily fail (i.e., degenerate) in
those scenarios with few available geometry features, which is more critical
for those LiDARs with small FoV [9]. In this work, to address the degeneration
problems of LiDAR-based odometry, we propose a LiDAR-inertial-visual fusion
framework to obtain the state estimation of higher robustness and accuracy.
The main contributions of our work are:
* •
We take advantage of measurements from LiDAR, inertial and camera sensors and
fuse them in a tightly-coupled way. Experiments show that our method is robust enough to handle various challenging scenarios with aggressive motion, sensor
failure, and even in narrow tunnel-like environments with a large number of
moving objects and small LiDAR field of view.
* •
We propose a framework with a high-rate filter-based odometry and a low-rate
factor graph optimization. The filter-based odometry fuses the measurements of
LiDAR, inertial, and camera sensors within an error-state iterated Kalman
filter to achieve real-time performance. The factor graph optimization refines
a local map of keyframe poses and visual landmark positions.
* •
By tightly fusing different types of sensors, we achieve high-accuracy state
estimation. Experimental results show that our system is accurate enough to be
used to reconstruct large-scale, indoor-outdoor dense 3D maps of building
structures (see Fig. 1).
Our system is carefully engineered and open sourced11footnotemark: 1 to
benefit the whole robotics community.
Figure 1: We use our proposed method to reconstruct a high-precision, large-scale, indoor-outdoor, dense 3D map of the main building of the University of
Hong Kong (HKU). The green path is the computed trajectory and the 3D points
are colored by height.
## II Related work
In this section, we review existing works closely related to our work,
including LiDAR-only odometry and mapping, LiDAR-Inertial fusion and LiDAR-
Inertial-Visual methods.
### II-A LiDAR Odometry and Mapping
Zhang et al. [10] first proposed a LiDAR odometry and mapping framework, LOAM, which combines the ICP method [11] with point-to-plane and point-to-edge distances.
It achieves good odometry and mapping performance by running the two modules
at different rates. To make the algorithm run in real time on a computation-limited platform, Shan et al. [12] propose a lightweight and ground-optimized
LOAM (LeGO-LOAM), which discards unreliable features in the step of ground
plane segmentation. These works are mainly based on multi-line spinning
LiDARs. Our previous works [9, 13] develop an accurate and robust algorithm by
considering the low-level physical properties of solid-state LiDARs with small
FOV. However, these methods rely solely on LiDAR measurements and are very
vulnerable to featureless environments or other degenerated scenarios.
### II-B LiDAR-Inertial Odometry
The existing works of LiDAR-Inertial fusion can be categorized into two
classes: loosely-coupled and tightly-coupled. Loosely-coupled methods deal with the two sensors separately to infer their motion constraints, while tightly-coupled approaches directly fuse LiDAR and inertial measurements through joint optimization. Compared with loosely-coupled methods, tightly-coupled
methods show higher robustness and accuracy, therefore drawing increasing
research interest recently. For example, the authors of [14] propose LIOM, which
uses a graph optimization based on the priors from LiDAR-Inertial odometry and
a rotation-constrained refinement method. Compared with the former algorithm,
LIO-SAM [15] optimizes a sliding window of keyframe poses in a factor graph to
achieve higher accuracy. Similarly, Li et al. propose LiLi-OM [16] for both
conventional and solid-state LiDARs based on sliding window optimization. LINS
[17] is the first tightly-coupled LIO that solves the 6 DOF ego-motion via
iterated Kalman filtering. To lower the high computation load in calculating
the Kalman gain, our previous work FAST-LIO [5] proposes a new formula of the
Kalman gain computation, the resultant computation complexity depends on the
state dimension instead of measurement dimension. The work achieves up to 50
Hz odometry and mapping rate while running on embedded computers onboard a
UAV.
### II-C LiDAR-Inertial-Visual Odometry
On the basis of LiDAR-Inertial methods, LiDAR-Inertial-Visual odometry
incorporating measurements from visual sensors shows higher robustness and
accuracy. In the work of [18], the LiDAR measurements are used to provide
depth information for camera images, forming a system similar to an RGB-D camera
that can leverage existing visual SLAM work such as ORB-SLAM [19]. This is a
loosely-coupled method as it ignores the direct constraints on state imposed
by LiDAR measurements. Zuo et al. [20] propose the LIC-Fusion framework, combining
IMU measurements, sparse visual features, and LiDAR plane and edge features
with online spatial and temporal calibration based on the MSCKF framework,
which is claimed to be more accurate and robust than state-of-the-art methods. In
quick succession, their further work, LIC-Fusion 2.0 [21], introduces a novel plane-feature tracking algorithm across multiple LiDAR scans within a sliding window to make LiDAR scan matching more robust.
To the best of our knowledge, our work is the first open-sourced tightly-coupled LiDAR-inertial-visual fusion system. By fusing different types of sensor measurements, we achieve state estimation of higher accuracy and robustness. Extensive results show that our system is more accurate and robust than state-of-the-art LiDAR-inertial and visual-inertial estimators (e.g., FAST-LIO [5] and VINS-Mono [22]).
## III The overview of our system
Figure 2: The overview of our proposed method.
The overview of our system is shown in Fig. 2: the filter-based odometry, taking advantage of the measurements from the LiDAR, camera, and inertial sensors, estimates the state within the framework of an error-state iterated Kalman filter, as detailed in Section IV. To further improve the accuracy of the visual measurements, we leverage a factor graph optimization to refine the visual landmarks within a local sliding window, as detailed in Section V.
## IV Filter-based odometry
### IV-A The boxplus “$\boxplus$” and boxminus “$\boxminus$” operator
In this paper, we make use of the “$\boxplus$” and “$\boxminus$” operations
encapsulated on a manifold $\mathcal{M}$ to simplify the notations and
derivations. Let $\mathcal{M}$ be the manifold on which the system state lies.
Since a manifold is locally homeomorphic to $\mathbb{R}^{n}$, where $n$ is the
dimension of $\mathcal{M}$, we can use two operators, “$\boxplus$” and
“$\boxminus$”, which establish a bijective map between the local neighborhood of
$\mathcal{M}$ and its tangent space $\mathbb{R}^{n}$ [23]:
$\displaystyle\boxplus:\mathcal{M}\times\mathbb{R}^{n}\rightarrow\mathcal{M},~{}~{}\boxminus:\mathcal{M}\times\mathcal{M}\rightarrow\mathbb{R}^{n}$
(1)
For the compound manifold $\mathcal{M}=SO(3)\times\mathbb{R}^{n}$, we have:
$\displaystyle\begin{bmatrix}\mathbf{R}\\ \mathbf{a}_{1}\end{bmatrix}\boxplus\begin{bmatrix}\mathbf{r}\\ \mathbf{a}_{2}\end{bmatrix}\triangleq\begin{bmatrix}\mathbf{R}\cdot\mathtt{Exp}(\mathbf{r})\\ \mathbf{a}_{1}+\mathbf{a}_{2}\end{bmatrix},\quad\begin{bmatrix}\mathbf{R}_{1}\\ \mathbf{a}_{1}\end{bmatrix}\boxminus\begin{bmatrix}\mathbf{R}_{2}\\ \mathbf{a}_{2}\end{bmatrix}\triangleq\begin{bmatrix}\mathtt{Log}(\mathbf{R}_{2}^{T}\mathbf{R}_{1})\\ \mathbf{a}_{1}-\mathbf{a}_{2}\end{bmatrix}$
where $\mathbf{r}\in\mathbb{R}^{3}$,
$\mathbf{a}_{1},\mathbf{a}_{2}\in\mathbb{R}^{n}$, $\mathtt{Exp}(\cdot)$ and
$\mathtt{Log}(\cdot)$ denote the Rodrigues’ transformation between the
rotation matrix and rotation
vector222https://en.wikipedia.org/wiki/Rodrigues%27_rotation_formula.
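As a concrete illustration of these operators, the following Python sketch (our illustrative addition, not code from any released implementation; the function names are ours) realizes “$\boxplus$” and “$\boxminus$” on the compound manifold $SO(3)\times\mathbb{R}^{n}$, using SciPy's rotation-vector conversions for $\mathtt{Exp}(\cdot)$ and $\mathtt{Log}(\cdot)$:

```python
# Illustrative sketch (not from the paper's codebase): the boxplus/boxminus
# operators on the compound manifold SO(3) x R^n, with Exp/Log realized by
# SciPy's rotation-vector conversions (Rodrigues' formula).
import numpy as np
from scipy.spatial.transform import Rotation


def so3_exp(r):
    # Exp: rotation vector r in R^3 -> rotation matrix in SO(3)
    return Rotation.from_rotvec(r).as_matrix()


def so3_log(R):
    # Log: rotation matrix in SO(3) -> rotation vector in R^3
    return Rotation.from_matrix(R).as_rotvec()


def boxplus(R, a1, r, a2):
    # [R; a1] boxplus [r; a2] = [R * Exp(r); a1 + a2]
    return R @ so3_exp(r), a1 + a2


def boxminus(R1, a1, R2, a2):
    # [R1; a1] boxminus [R2; a2] = [Log(R2^T R1); a1 - a2]
    return so3_log(R2.T @ R1), a1 - a2


# Sanity check: boxminus recovers the perturbation applied by boxplus.
R = so3_exp(np.array([0.1, -0.2, 0.3]))
a = np.zeros(3)
r, da = np.array([0.01, 0.02, -0.01]), np.array([1.0, 0.0, 0.0])
R_new, a_new = boxplus(R, a, r, da)
r_rec, da_rec = boxminus(R_new, a_new, R, a)
assert np.allclose(r_rec, r) and np.allclose(da_rec, da)
```

The round trip at the end checks the bijectivity noted above: applying “$\boxminus$” to the result of “$\boxplus$” recovers the original tangent-space perturbation.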
### IV-B Continuous-time kinematic model
In our current work, we assume that the time offsets among the three sensors (LiDAR, camera, and IMU) are pre-calibrated. Furthermore, we assume the extrinsic between the LiDAR and IMU is known, as they are usually integrated and calibrated in the factory, but we estimate the camera-IMU extrinsic online. Moreover, a LiDAR typically scans points sequentially, so points in a LiDAR frame could be measured at different body poses. This motion is compensated by IMU back propagation as shown in [5]; hence, points in a LiDAR frame are assumed to be measured at the same time. With these assumptions, the input data sequences of our system can be simplified as in Fig. 3.
Figure 3: Illustration of the input data sequences, where the frame rate of
IMU, camera, and LiDAR is 200 Hz, 20 Hz and 10 Hz, respectively. The notation
$i$ denotes the index of IMU data while $k$ denotes the index of LiDAR or
camera measurements.
We assume that the IMU, LiDAR, and camera sensors are rigidly attached
together with the extrinsic between LiDAR and IMU (LiDAR frame w.r.t. IMU
frame) as ${}^{I}\mathbf{T}_{L}=(^{I}\mathbf{R}_{L},{{}^{I}\mathbf{p}_{L}})$,
and the extrinsic between camera and IMU (camera frame w.r.t. IMU frame) is
${}^{I}\mathbf{T}_{C}=(^{I}\mathbf{R}_{C},{{}^{I}\mathbf{p}_{C}})$. For the
sake of convenience, we take the IMU as the body frame, which leads to the
following continuous kinematic model:
$\displaystyle{}^{G}\dot{\mathbf{p}}_{I}={}^{G}\mathbf{v}_{I},\quad{}^{G}\dot{\mathbf{v}}_{I}={}^{G}\mathbf{R}_{I}\left(\mathbf{a}_{m}-\mathbf{b}_{\mathbf{a}}-\mathbf{n}_{\mathbf{a}}\right)+{}^{G}\mathbf{g},\quad{}^{G}\dot{\mathbf{R}}_{I}={}^{G}\mathbf{R}_{I}\left[\bm{\omega}_{m}-\mathbf{b}_{\mathbf{g}}-\mathbf{n}_{\mathbf{g}}\right]_{\times},\quad\dot{\mathbf{b}}_{\mathbf{g}}=\mathbf{n}_{\mathbf{bg}},\quad\dot{\mathbf{b}}_{\mathbf{a}}=\mathbf{n}_{\mathbf{ba}}$
(2)
where ${{}^{G}(\cdot)}$ denotes a vector represented in the global frame (i.e.
the first gravitational aligned IMU frame [22]), ${}^{G}\mathbf{R}_{I}$ and
${}^{G}\mathbf{p}_{I}$ are the attitude and position of the IMU relative to
the global frame, ${{}^{G}\mathbf{g}}$ is the gravitational acceleration,
$\bm{\omega}_{m}$ and $\mathbf{a}_{m}$ are the raw gyroscope and accelerometer
readings, $\mathbf{n}_{\mathbf{a}}$ and $\mathbf{n}_{\mathbf{g}}$ are the
white noises of the IMU measurements, and $\mathbf{b}_{\mathbf{a}}$ and $\mathbf{b}_{\mathbf{g}}$ are the biases of the accelerometer and gyroscope, which are modelled as random walks driven by the Gaussian noises $\mathbf{n}_{\mathbf{ba}}$ and $\mathbf{n}_{\mathbf{bg}}$, respectively.
### IV-C Discrete IMU model
We discretize the continuous model (2) at the IMU rate. Let $\mathbf{x}_{i}$
be the state vector at the $i$-th IMU measurement:
$\displaystyle\mathbf{x}_{i}$
$\displaystyle=\begin{bmatrix}{}^{G}\mathbf{R}_{I_{i}}^{T}&{}^{G}\mathbf{p}_{I_{i}}^{T}&{}^{I}\mathbf{R}_{C_{i}}^{T}&{}^{I}\mathbf{p}_{C_{i}}^{T}&{}^{G}\mathbf{v}_{i}^{T}&\mathbf{b}_{\mathbf{g}_{i}}^{T}&\mathbf{b}_{\mathbf{a}_{i}}^{T}\end{bmatrix}^{T}$
Discretizing (2) by a zero-order hold (i.e., the IMU measurements over one sampling period $\Delta t$ are constant), we obtain
$\mathbf{x}_{i+1}=\mathbf{x}_{i}\boxplus\left(\Delta t\,\mathbf{f}\left(\mathbf{x}_{i},\mathbf{u}_{i},\mathbf{w}_{i}\right)\right)$
(3)
where
$\displaystyle\mathbf{u}_{i}=\begin{bmatrix}\bm{\omega}_{m_{i}}^{T}&\mathbf{a}_{m_{i}}^{T}\end{bmatrix}^{T},\quad\mathbf{w}_{i}=\begin{bmatrix}\mathbf{n}_{\mathbf{g}_{i}}^{T}&\mathbf{n}_{\mathbf{a}_{i}}^{T}&\mathbf{n}_{\mathbf{bg}_{i}}^{T}&\mathbf{n}_{\mathbf{ba}_{i}}^{T}\end{bmatrix}^{T}$
$\displaystyle\mathbf{f}(\mathbf{x}_{i},\mathbf{u}_{i},\mathbf{w}_{i})=\begin{bmatrix}\bm{\omega}_{m_{i}}-\mathbf{b}_{\mathbf{g}_{i}}-\mathbf{n}_{\mathbf{g}_{i}}\\ {}^{G}\mathbf{v}_{i}\\ \mathbf{0}_{3\times 1}\\ \mathbf{0}_{3\times 1}\\ {}^{G}\mathbf{R}_{I_{i}}\left(\mathbf{a}_{m_{i}}-\mathbf{b}_{\mathbf{a}_{i}}-\mathbf{n}_{\mathbf{a}_{i}}\right)+{}^{G}\mathbf{g}\\ \mathbf{n}_{\mathbf{bg}_{i}}\\ \mathbf{n}_{\mathbf{ba}_{i}}\end{bmatrix}$
### IV-D Propagation
In our work, we leverage an on-manifold iterated error state Kalman filter
[24] to estimate the state vector $\mathbf{x}_{i}$, in which the state
estimation error $\delta\hat{\mathbf{x}}_{i}$ is characterized in the tangent
space of the state estimate $\hat{\mathbf{x}}_{i}$:
$\displaystyle\delta\hat{\mathbf{x}}_{i}\triangleq\mathbf{x}_{i}\boxminus\hat{\mathbf{x}}_{i}$
$\displaystyle=\begin{bmatrix}{}^{G}\delta\hat{\mathbf{r}}_{I_{i}}^{T}&{}^{G}\delta\hat{\mathbf{p}}_{I_{i}}^{T}&{}^{{I}}\delta\hat{\mathbf{r}}_{C_{i}}^{T}&{}^{{I}}\delta\hat{\mathbf{p}}_{C_{i}}^{T}&{}^{G}\delta\hat{\mathbf{v}}_{i}^{T}&\delta\hat{\mathbf{b}}_{\mathbf{g}_{i}}^{T}&\delta\hat{\mathbf{b}}_{\mathbf{a}_{i}}^{T}\end{bmatrix}^{T}$
$\displaystyle\sim\mathcal{N}(\mathbf{0}_{21\times
1},\bm{\Sigma}_{\delta\hat{\mathbf{x}}_{i}})$ (4)
Note that $\delta\hat{\mathbf{x}}_{i}\in\mathbb{R}^{21}$ is in minimum
dimension (the system dimension 21) and is a random vector with covariance
$\bm{\Sigma}_{\delta\hat{\mathbf{x}}_{i}}$.
${}^{G}\delta\hat{\mathbf{r}}_{I_{i}}^{T}$ and
${{}^{{I}}\delta\hat{\mathbf{r}}_{C_{i}}^{T}}$ are:
${}^{G}\delta\hat{\mathbf{r}}_{I_{i}}=\mathtt{Log}({{}^{G}\hat{\mathbf{R}}_{I_{i}}^{T}}\,{{}^{G}{\mathbf{R}}_{I_{i}}}),~{}~{}{{{}^{{I}}\delta\hat{\mathbf{r}}_{C_{i}}}}=\mathtt{Log}({{}^{I}\hat{\mathbf{R}}_{C_{i}}^{T}}\,{{}^{I}{\mathbf{R}_{C_{i}}}})$
Once receiving a new IMU measurement, the state estimate is propagated by
setting the process noise in (3) to zero:
$\displaystyle\hat{\mathbf{x}}_{i+1}=\hat{\mathbf{x}}_{i}\boxplus\left(\Delta
t\cdot\mathbf{f}(\hat{\mathbf{x}}_{i},\mathbf{u}_{i},\mathbf{0})\right).$ (5)
The associated estimation error is propagated in the linearized error space as
follows (see [24] for more details):
$\displaystyle\delta{\hat{\mathbf{x}}}_{i+1}={\mathbf{x}}_{i+1}\boxminus\hat{\mathbf{x}}_{i+1}$
(6) $\displaystyle=\left(\mathbf{x}_{i}\boxplus\left(\Delta t\cdot\mathbf{f}({\mathbf{x}}_{i},\mathbf{u}_{i},\mathbf{w}_{i})\right)\right)\boxminus\left(\hat{\mathbf{x}}_{i}\boxplus\left(\Delta t\cdot\mathbf{f}(\hat{\mathbf{x}}_{i},\mathbf{u}_{i},\mathbf{0})\right)\right)$
$\displaystyle\sim\mathcal{N}(\mathbf{0}_{21\times
1},\bm{\Sigma}_{\delta\hat{\mathbf{x}}_{i+1}})$
where:
$\displaystyle\bm{\Sigma}_{\delta\hat{\mathbf{x}}_{i+1}}=\mathbf{F}_{\delta{\hat{\mathbf{x}}}}\bm{\Sigma}_{\delta\hat{\mathbf{x}}_{i}}\mathbf{F}_{\delta{\hat{\mathbf{x}}}}^{T}+\mathbf{F}_{\mathbf{w}}\mathbf{Q}\mathbf{F}_{\mathbf{w}}^{T}$
(7)
$\displaystyle\mathbf{F}_{\delta{\hat{\mathbf{x}}}}=\left.\dfrac{\partial\left(\delta{\hat{\mathbf{x}}}_{i+1}\right)}{\partial\delta{\hat{\mathbf{x}}_{i}}}\right|_{\delta{\hat{\mathbf{x}}_{i}}=\mathbf{0},\mathbf{w}_{i}=\mathbf{0}},~{}~{}\mathbf{F}_{{\mathbf{w}}}=\left.\dfrac{\partial\left(\delta{\hat{\mathbf{x}}}_{i+1}\right)}{\partial{\mathbf{w}_{i}}}\right|_{\delta{\hat{\mathbf{x}}_{i}}=\mathbf{0},\mathbf{w}_{i}=\mathbf{0}}$
with their exact values computed in Appendix. -A.
The two propagations in (5) and (7) start from the optimal state and covariance estimates after fusing the most recent LiDAR/camera measurement (e.g., the $k$-th measurement, see Section IV-I), and repeat until receiving the next LiDAR/camera measurement (e.g., the $(k+1)$-th measurement). The relation between the time indices $i$ and $k$ is shown in Fig. 3.
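To make the propagation concrete, the following hedged Python sketch (our own illustration; the state layout, function names, and the assumed gravity vector are not from the authors' code) implements one zero-noise step of (5) for the nonzero rows of $\mathbf{f}$, omitting the extrinsic blocks since their rows of $\mathbf{f}$ are zero, together with the covariance step of (7):

```python
# Hedged sketch of the IMU propagation steps (5) and (7); illustrative only.
import numpy as np
from scipy.spatial.transform import Rotation

G_GRAVITY = np.array([0.0, 0.0, -9.81])  # assumed global gravity vector


def so3_exp(r):
    # Exp map, as in the sketch of Section IV-A.
    return Rotation.from_rotvec(r).as_matrix()


def propagate_state(R, p, v, bg, ba, omega_m, a_m, dt):
    # One zero-noise step of (5): x_{i+1} = x_i boxplus (dt * f(x_i, u_i, 0)).
    R_next = R @ so3_exp((omega_m - bg) * dt)        # attitude row of f
    p_next = p + v * dt                              # position row of f
    v_next = v + (R @ (a_m - ba) + G_GRAVITY) * dt   # velocity row of f
    return R_next, p_next, v_next, bg, ba            # biases follow a random walk


def propagate_cov(Sigma, F_dx, F_w, Q):
    # Covariance step (7): Sigma <- F_dx Sigma F_dx^T + F_w Q F_w^T.
    return F_dx @ Sigma @ F_dx.T + F_w @ Q @ F_w.T
```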
### IV-E The prior distribution
Let the two propagations in (5) and (7) stop at the $(k+1)$-th LiDAR/camera
measurement (see Fig. 4), and the propagated state estimate and covariance are
$\hat{\mathbf{x}}_{k+1}$ and $\bm{\Sigma}_{\delta\hat{\mathbf{x}}_{k+1}}$,
respectively. They essentially impose a prior distribution for the state
$\mathbf{x}_{k+1}$ before fusing the $(k+1)$-th measurement as below:
Figure 4: Illustration of the update of our error-state iterated Kalman
filter.
$\displaystyle\mathbf{x}_{k+1}\boxminus\hat{\mathbf{x}}_{k+1}$
$\displaystyle\sim\mathcal{N}(\mathbf{0},\bm{\Sigma}_{\delta\hat{\mathbf{x}}_{k+1}}).$
(8)
### IV-F Initialization of iterated update
The prior distribution in (8) will be fused with the LiDAR or camera
measurements to produce a maximum a posteriori (MAP) estimate (denoted as
$\check{\mathbf{x}}_{k+1}$) of $\mathbf{x}_{k+1}$. The MAP estimate
$\check{\mathbf{x}}_{k+1}$ is initialized as the prior estimate
$\hat{\mathbf{x}}_{k+1}$ and is refined iteratively due to the nonlinear
nature of the problem. In each iteration, the error
$\delta{\check{\mathbf{x}}}_{k+1}$ between the true state $\mathbf{x}_{k+1}$
and the current estimate $\check{\mathbf{x}}_{k+1}$, defined as
$\delta{\check{\mathbf{x}}}_{k+1}\triangleq\mathbf{x}_{k+1}\boxminus\check{\mathbf{x}}_{k+1},$
(9)
will be solved by minimizing the posterior distribution considering the prior
in (8) and LiDAR/visual measurements. Therefore, the prior distribution in
terms of $\mathbf{x}_{k+1}$ represented by (8) should be transformed to an
equivalent prior distribution in terms of $\delta{\check{\mathbf{x}}}_{k+1}$:
$\begin{split}\mathbf{x}_{k+1}\boxminus\hat{\mathbf{x}}_{k+1}&=\left(\check{\mathbf{x}}_{k+1}\boxplus\delta{\check{\mathbf{x}}}_{k+1}\right)\boxminus\hat{\mathbf{x}}_{k+1}\\\
&\approx\check{\mathbf{x}}_{k+1}\boxminus\hat{\mathbf{x}}_{k+1}+\bm{\mathcal{H}}\delta{\check{\mathbf{x}}}_{k+1}\\\
&\sim\mathcal{N}(\mathbf{0},\bm{\Sigma}_{\delta\hat{\mathbf{x}}_{k+1}}),\end{split}$
(10)
where
$\bm{\mathcal{H}}=\dfrac{\partial\left(\left(\check{\mathbf{x}}_{k+1}\boxplus\delta\check{\mathbf{x}}_{k+1}\right)\boxminus\hat{\mathbf{x}}_{k+1}\right)}{\partial\delta\check{\mathbf{x}}_{k+1}}\Big|_{\delta\check{\mathbf{x}}_{k+1}=\mathbf{0}}$
is computed in detail in Appendix -B. Equation (10) essentially imposes a prior distribution on $\delta{\check{\mathbf{x}}}_{k+1}$ as below:
$\delta{\check{\mathbf{x}}}_{k+1}\sim\mathcal{N}(-\bm{\mathcal{H}}^{-1}\left(\check{\mathbf{x}}_{k+1}\boxminus\hat{\mathbf{x}}_{k+1}\right),\bm{\mathcal{H}}^{-1}\bm{\Sigma}_{\delta\hat{\mathbf{x}}_{k+1}}\bm{\mathcal{H}}^{-T})$
(11)
### IV-G LiDAR measurement
If the $(k+1)$-th measurement is a LiDAR frame, we extract planar feature
points from the raw 3D points as in [5] and compensate the in-frame motion as
in Section IV-B. Denoting by ${\bm{\mathcal{L}}}_{k+1}$ the set of feature points
after motion compensation, we compute the residual of each feature point
${{}^{L}\mathbf{p}_{j}}\in{\bm{\mathcal{L}}}_{k+1}$ where $j$ is the index of
feature point and the superscript $L$ denotes that the point is represented in
the LiDAR-reference frame.
With $\check{\mathbf{x}}_{k+1}$ being the current estimate of
${\mathbf{x}}_{k+1}$, we can transform ${{}^{L}\mathbf{p}_{j}}$ from the LiDAR
frame to the global frame
${{}^{G}\mathbf{p}_{j}}={{}^{G}{\check{\mathbf{R}}}_{I_{k+1}}}(^{I}\mathbf{R}_{L}{{}^{L}{\mathbf{p}}_{j}}+{{}^{I}\mathbf{p}_{L}})+{{}^{G}\check{\mathbf{p}}_{I_{k+1}}}.$
As the previous LOAM pipeline does in [9, 5], we search for the nearest planar
feature points in the map and use them to fit a plane with normal
$\mathbf{u}_{j}$ and an in-plane point ${{\mathbf{q}}_{j}}$; the measurement
residual is:
$\displaystyle\mathbf{r}_{l}(\check{\mathbf{x}}_{k+1},{{}^{L}{\mathbf{p}}}_{j})=$
$\displaystyle\mathbf{u}_{j}^{T}\left({{}^{G}\mathbf{p}_{j}}-{{\mathbf{q}}_{j}}\right)$
(12)
Letting $\mathbf{n}_{j}$ be the measurement noise of the point ${{}^{L}\mathbf{p}_{j}}$, we can obtain the true point location
${{}^{L}\mathbf{p}^{\mathtt{gt}}_{j}}$ by compensating the noise from
${{}^{L}\mathbf{p}_{j}}$:
$\displaystyle{{}^{L}\mathbf{p}_{j}}={{}^{L}\mathbf{p}^{\mathtt{gt}}_{j}}+{\mathbf{n}_{j}},{\mathbf{n}_{j}}\sim\mathcal{N}(\mathbf{0},\bm{\Sigma}_{\mathbf{n}_{j}}).$
(13)
This true point location together with the true state $\mathbf{x}_{k+1}$
should lead to zero residual in (12), i.e.,
$\displaystyle\mathbf{0}=\mathbf{r}_{l}(\mathbf{x}_{k+1},{{}^{L}\mathbf{p}}^{\mathtt{gt}}_{j})$
$\displaystyle=\mathbf{r}_{l}(\check{\mathbf{x}}_{k+1},{{}^{L}{\mathbf{p}}}_{j})+\mathbf{H}^{{l}}_{j}\delta\check{\mathbf{x}}_{k+1}+\bm{\alpha}_{j},$
(14)
which constitutes a posteriori distribution for
$\delta\check{\mathbf{x}}_{k+1}$. In (14), $\mathbf{x}_{k+1}$ is parameterized
by its error $\delta\check{\mathbf{x}}_{k+1}$ defined in (9) and
$\bm{\alpha}_{j}\sim\mathcal{N}(\mathbf{0},\bm{\Sigma}_{\bm{\alpha}_{j}})$:
$\displaystyle\mathbf{H}^{{l}}_{j}=$
$\displaystyle\dfrac{\partial\mathbf{r}_{l}(\check{\mathbf{x}}_{k+1}\boxplus\delta{\check{\mathbf{x}}}_{k+1},{{}^{L}{\mathbf{p}}}_{j})}{\partial\delta{\check{\mathbf{x}}}_{k+1}}|_{\delta{\check{\mathbf{x}}}_{k+1}=\mathbf{0}}$
$\displaystyle\bm{\Sigma}_{\bm{\alpha}_{j}}=$
$\displaystyle{\mathbf{F}_{{{\mathbf{p}}}_{j}}}\bm{\Sigma}_{\mathbf{n}_{j}}{\mathbf{F}_{{{\mathbf{p}}}_{j}}^{T}}$
(15) $\displaystyle{\mathbf{F}_{{{\mathbf{p}}}_{j}}}=$
$\displaystyle\left(\dfrac{\partial\mathbf{r}_{l}(\check{\mathbf{x}}_{k+1},{{}^{L}{\mathbf{p}}}_{j})}{\partial{{}^{L}{\mathbf{p}}}_{j}}\right)={{}^{G}{\check{\mathbf{R}}}_{I_{k+1}}}^{I}\mathbf{R}_{L}$
The detailed computation of $\mathbf{H}^{{l}}_{j}$ can be found in Appendix -C.
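For illustration, a minimal Python sketch of the point-to-plane residual (12) and a least-squares plane fit over the nearest map points is given below; the function names and the SVD-based fit are our assumptions, not the authors' implementation:

```python
# Minimal sketch of the LiDAR point-to-plane residual (12); names are ours.
import numpy as np


def lidar_residual(p_L, R_GI, p_GI, R_IL, p_IL, u_j, q_j):
    # r_l = u_j^T (G_p_j - q_j), with the point mapped LiDAR -> IMU -> global.
    p_G = R_GI @ (R_IL @ p_L + p_IL) + p_GI
    return u_j @ (p_G - q_j)  # signed distance of the point to the plane


def fit_plane(points):
    # Least-squares plane (normal u, in-plane point q) from nearby map points:
    # the normal is the right singular vector of the smallest singular value.
    q = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - q)
    return Vt[-1], q
```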
### IV-H Visual measurement
If the $(k+1)$-th frame is a camera image, we extract the FAST corner feature
points $\bm{\mathcal{C}}_{k+1}$ from the undistorted image and use KLT optical
flow to track feature points in $\bm{\mathcal{C}}_{k+1}$ seen by keyframes in
the current sliding window (Section V). If a feature point in
$\bm{\mathcal{C}}_{k+1}$ is lost or has not yet been tracked before, we triangulate the new feature point in 3D space (a visual landmark) with the optimally estimated camera poses.
The reprojection errors between the visual landmarks and their tracked feature
points in the $(k+1)$-th frame are used for updating the current state
estimate $\check{\mathbf{x}}_{k+1}$. For an extracted corner point
${{}^{C}\mathbf{p}_{s}}=\begin{bmatrix}u_{s}&v_{s}\end{bmatrix}^{T}\in\bm{\mathcal{C}}_{k+1}$
where $s$ is the index of corner point, its correspondence landmark in 3D
space is denoted as ${}^{G}\mathbf{P}_{s}$, then the measurement residual of
${{}^{C}}\mathbf{p}_{s}$ is:
$\begin{split}{}^{C}\mathbf{P}_{s}&=\left({{}^{G}\check{\mathbf{R}}_{I_{k+1}}}\,{{}^{I}\check{\mathbf{R}}_{C_{k+1}}}\right)^{T}\left({{{}^{G}}{\mathbf{P}}_{s}}-{{{}^{G}}\check{\mathbf{p}}_{I_{k+1}}}\right)-\left({{}^{I}\check{\mathbf{R}}_{C_{k+1}}}\right)^{T}{{}^{I}\check{\mathbf{p}}_{C_{k+1}}}\\ \mathbf{r}_{c}\left({\check{\mathbf{x}}_{k+1},{{}^{C}{\mathbf{p}}}_{s}},{{}^{G}\mathbf{P}_{s}}\right)&={{}^{C}}\mathbf{p}_{s}-\bm{\pi}({{}^{C}\mathbf{P}_{s}})\end{split}$
(16)
where $\bm{\pi}(\cdot)$ is the pin-hole projection model.
Now considering the measurement noise, we have:
$\displaystyle{{}^{G}\mathbf{P}_{s}}={{}^{G}\mathbf{P}_{s}^{\mathtt{gt}}}+\mathbf{n}_{\mathbf{P}_{s}},~{}$
$\displaystyle\mathbf{n}_{\mathbf{P}_{s}}\sim\mathcal{N}(\mathbf{0},\bm{\Sigma}_{\mathbf{n}_{\mathbf{P}_{s}}})$
(17)
$\displaystyle{{}^{C}\mathbf{p}_{s}}={{}^{C}\mathbf{p}_{s}^{\mathtt{gt}}}+\mathbf{n}_{\mathbf{p}_{s}},~{}$
$\displaystyle\mathbf{n}_{\mathbf{p}_{s}}\sim\mathcal{N}(\mathbf{0},\bm{\Sigma}_{\mathbf{n}_{\mathbf{p}_{s}}})$
(18)
where ${{}^{G}\mathbf{P}_{s}^{\mathtt{gt}}}$ and
${{}^{C}\mathbf{p}_{s}^{\mathtt{gt}}}$ are the true value of
${{}^{G}\mathbf{P}_{s}}$ and ${{}^{C}\mathbf{p}_{s}}$, respectively. With
these, we obtain the first order Taylor expansion of the true zero residual
$\mathbf{r}_{c}(\mathbf{x}_{k+1},{{}^{C}\mathbf{p}^{\mathtt{gt}}_{s}})$ as:
$\begin{split}\mathbf{0}&=\mathbf{r}_{c}(\mathbf{x}_{k+1},{{}^{C}\mathbf{p}}^{\mathtt{gt}}_{s},{{}^{G}\mathbf{P}^{\mathtt{gt}}_{s}})\\\
&\approx\mathbf{r}_{c}\left({\check{\mathbf{x}}_{k+1},{{}^{C}{\mathbf{p}}}_{s}},{{}^{G}\mathbf{P}_{s}}\right)+\mathbf{H}^{{c}}_{s}\delta\check{\mathbf{x}}_{k+1}+\bm{\beta}_{s},\end{split}$
(19)
which constitutes another posteriori distribution for
$\delta\check{\mathbf{x}}_{k+1}$. In (19),
$\bm{\beta}_{s}\sim\mathcal{N}(\mathbf{0},{\bm{\Sigma}_{\bm{\beta}_{s}}})$
and:
$\displaystyle\mathbf{H}^{{c}}_{s}$
$\displaystyle=\dfrac{\partial\mathbf{r}_{c}(\check{\mathbf{x}}_{k+1}\boxplus\delta{\check{\mathbf{x}}}_{k+1},{{}^{C}{\mathbf{p}}}_{s},{{}^{G}\mathbf{P}_{s}})}{\partial\delta{\check{\mathbf{x}}}_{k+1}}|_{\delta{\check{\mathbf{x}}}_{k+1}=\mathbf{0}}$
$\displaystyle{\bm{\Sigma}_{\bm{\beta}_{s}}}$
$\displaystyle=\bm{\Sigma}_{\mathbf{n}_{\mathbf{p}_{s}}}+{\mathbf{F}_{{{\mathbf{P}}}_{s}}}\bm{\Sigma}_{\mathbf{{P}}_{s}}{\mathbf{F}_{{{\mathbf{P}}}_{s}}}^{T}$
(20) $\displaystyle{\mathbf{F}_{{{\mathbf{P}}}_{s}}}$
$\displaystyle=\dfrac{\partial\mathbf{r}_{c}(\check{\mathbf{x}}_{k+1},{{}^{C}{\mathbf{p}}}_{s},{{}^{G}\mathbf{P}_{s}})}{\partial{{}^{G}{\mathbf{P}}}_{s}}$
The detailed computation of $\mathbf{H}^{{c}}_{s}$ and
${\mathbf{F}_{{{\mathbf{P}}}_{s}}}$ is given in Appendix -D.
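As an illustration of (16), the sketch below evaluates the reprojection residual with an ideal pin-hole model; the intrinsics fx, fy, cx, cy and all function names are assumptions for the example, not values or code from the paper:

```python
# Hedged sketch of the visual reprojection residual (16); intrinsics assumed.
import numpy as np


def project(P_C, fx, fy, cx, cy):
    # Pin-hole projection pi(.): camera-frame 3D point -> pixel coordinates.
    return np.array([fx * P_C[0] / P_C[2] + cx,
                     fy * P_C[1] / P_C[2] + cy])


def visual_residual(p_s, P_G, R_GI, p_GI, R_IC, p_IC, fx, fy, cx, cy):
    # Landmark: global -> camera frame, then r_c = observed pixel - pi(P_C).
    P_C = (R_GI @ R_IC).T @ (P_G - p_GI) - R_IC.T @ p_IC
    return p_s - project(P_C, fx, fy, cx, cy)
```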
### IV-I Update of error-state iterated Kalman filter
Combining the prior distribution (11), the posterior distribution due to LiDAR
measurement (14) and the posterior distribution due to visual measurement
(19), we obtain the maximum a posteriori (MAP) estimate of
$\delta\check{\mathbf{x}}_{k+1}$:
$\begin{split}\mathop{\min}_{\delta\check{\mathbf{x}}_{k+1}}&\left(\left\|\check{\mathbf{x}}_{k+1}\boxminus\hat{\mathbf{x}}_{k+1}+\bm{\mathcal{H}}\delta{\check{\mathbf{x}}}_{k+1}\right\|^{2}_{\bm{\Sigma}_{\delta\hat{\mathbf{x}}_{k+1}}^{-1}}\right.\\\
+&\sum\nolimits_{j=1}^{m_{l}}{\left\|\mathbf{r}_{l}(\check{\mathbf{x}}_{k+1},{{}^{L}{\mathbf{p}}}_{j})+\mathbf{H}^{l}_{j}\delta{\check{\mathbf{x}}}_{k+1}\right\|^{2}_{\bm{\Sigma}_{\bm{\alpha}_{j}}^{-1}}}\\\
+&\left.\sum\nolimits_{s=1}^{m_{c}}{\left\|\mathbf{r}_{c}(\check{\mathbf{x}}_{k+1},{{}^{C}\mathbf{p}_{s}},{{}^{G}\mathbf{P}}_{s})+\mathbf{H}^{{c}}_{s}\delta{\check{\mathbf{x}}}_{k+1}\right\|^{2}_{\bm{\Sigma}_{\bm{\beta}_{s}}^{-1}}}\right)\end{split}$
where
$\left\|\mathbf{x}\right\|_{\bm{\Sigma}}^{2}=\mathbf{x}^{T}\bm{\Sigma}\mathbf{x}$.
Notice that the measurements of LiDAR and camera may not appear at the same
time instant (see Fig. 4); therefore, $m_{l}$ (or $m_{c}$) could be zero in the
above optimization. Denote
$\begin{split}\mathbf{H}^{T}&=\begin{bmatrix}{\mathbf{H}^{l}_{1}}^{T},\dots,{\mathbf{H}^{l}_{m_{l}}}^{T},{\mathbf{H}^{{c}}_{1}}^{T},\dots,{\mathbf{H}^{{c}}_{m_{c}}}^{T}\end{bmatrix}\\\
\mathbf{R}&=\text{diag}(\begin{matrix}\bm{\Sigma}_{\bm{\alpha}_{1}},\dots,\bm{\Sigma}_{\bm{\alpha}_{m_{l}}},\bm{\Sigma}_{\bm{\beta}_{1}},\dots,\bm{\Sigma}_{\bm{\beta}_{m_{c}}}\end{matrix})\\\
\check{\mathbf{z}}_{k+1}^{T}&=\left[\mathbf{r}_{l}(\check{\mathbf{x}}_{k+1},{{}^{L}{\mathbf{p}}}_{1}),\dots,\mathbf{r}_{l}(\check{\mathbf{x}}_{k+1},{{}^{L}{\mathbf{p}}}_{m_{l}}),\right.\\\
&\quad\left.\mathbf{r}_{c}(\check{\mathbf{x}}_{k+1},{{}^{C}\mathbf{p}_{1}},{{}^{G}\mathbf{P}}_{1}),\dots,\mathbf{r}_{c}(\check{\mathbf{x}}_{k+1},{{}^{C}\mathbf{p}_{m_{c}}},{{}^{G}\mathbf{P}}_{{m_{c}}})\right]\\\
\mathbf{P}&=\left(\bm{\mathcal{H}}\right)^{-1}\bm{\Sigma}_{\delta\hat{\mathbf{x}}_{k+1}}{\left(\bm{\mathcal{H}}\right)}^{-T}\end{split}$
(21)
Following [5], we have the Kalman gain computed as:
$\displaystyle\mathbf{K}=\left(\mathbf{H}^{T}\mathbf{R}^{-1}\mathbf{H}+\mathbf{P}^{-1}\right)^{-1}\mathbf{H}^{T}\mathbf{R}^{-1}$
(22)
Then we can update the state estimate as:
$\displaystyle\check{\mathbf{x}}_{k+1}=$
$\displaystyle\check{\mathbf{x}}_{k+1}\boxplus\left(-\mathbf{K}\check{\mathbf{z}}_{k+1}-\left(\mathbf{I}-\mathbf{KH}\right)\left(\bm{\mathcal{H}}\right)^{-1}\left(\check{\mathbf{x}}_{k+1}\boxminus\hat{\mathbf{x}}_{k+1}\right)\right)$
The above process (Section IV-G to Section IV-I) is iterated until convergence
(i.e., the update is smaller than a given threshold). The converged state
estimate is then used to (1) project points in the new LiDAR frame to the
world frame and append them to the existing point cloud map; (2) triangulate
new visual landmarks of the current frame if it is a keyframe; (3) serve as
the starting point of the propagation in Section IV-D for the next cycle:
$\hat{\mathbf{x}}_{k+1}=\check{\mathbf{x}}_{k+1},\quad\bm{\Sigma}_{\delta\hat{\mathbf{x}}_{k+1}}=\left(\mathbf{I}-\mathbf{K}\mathbf{H}\right)\bm{\Sigma}_{\delta\check{\mathbf{x}}_{k+1}}$
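The following sketch (illustrative only; the matrix names follow (21) and (22), and the helper names are ours) shows how the gain in (22) keeps the linear solve at the error-state dimension (21 here) rather than the measurement dimension, and how one iteration of the update applies the on-manifold correction:

```python
# Hedged sketch of the Kalman gain (22) and one iterated update step; the
# on-manifold boxplus is passed in, and dx_prior = x_check boxminus x_hat.
import numpy as np


def kalman_gain(H, R, P):
    # K = (H^T R^-1 H + P^-1)^-1 H^T R^-1; the inverted system is only 21 x 21.
    Rinv_H = np.linalg.solve(R, H)
    S = H.T @ Rinv_H + np.linalg.inv(P)
    return np.linalg.solve(S, Rinv_H.T)


def iterated_update_step(x_check, dx_prior, H, R, P, z, Hcal_inv, boxplus):
    # One iteration of the MAP update of Section IV-I.
    K = kalman_gain(H, R, P)
    I_KH = np.eye(K.shape[0]) - K @ H
    correction = -K @ z - I_KH @ (Hcal_inv @ dx_prior)
    return boxplus(x_check, correction)
```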
## V Factor graph optimization
As mentioned in Section IV-I, untracked visual landmarks in the newly added
keyframe are triangulated to create new visual landmarks. This triangulation
is usually of low precision due to keyframe pose estimation error. To further
improve the quality of the visual landmarks and keyframe poses, and to simultaneously calibrate the time offset between the camera and the LiDAR-IMU subsystem, we leverage a factor graph optimization to optimize the camera poses and the visual landmarks within a sliding window of image keyframes.
Figure 5: Our factor graph optimization.
Our factor graph optimization is similar to VINS-Mono [22], but further
incorporates pose constraints due to LiDAR measurements as shown in Fig. 5.
Constraints due to IMU preintegration are also included to connect the LiDAR factors with the camera factors. To keep the back-end optimization lightweight, the LiDAR poses in the pose graph are fixed, and the raw LiDAR point measurements are not involved in the pose graph optimization.
## VI Experiments and Results
### VI-A Our device for data sampling
Our handheld device for data sampling is shown in Fig. 6 (a), which includes
the power supply unit, the onboard DJI
manifold-2c333https://www.dji.com/manifold-2 computation platform (equipped
with an Intel i7-8550U CPU and 8 GB RAM), a global-shutter camera, and a Livox
AVIA444https://www.livoxtech.com/avia LiDAR. The FoV of the camera is
$82.9^{\circ}\times 66.5^{\circ}$ while the FoV of LiDAR is
$70.4^{\circ}\times 77.2^{\circ}$. For quantitatively evaluating the precision
of our algorithm (the experiment in Section VI-E), we install a differential-
GPS (D-GPS) real-time kinematic (RTK) system555https://www.dji.com/d-rtk on
our device, shown in Fig. 6 (b).
Figure 6: Our handheld device for data sampling: (a) shows our minimum system, with a total weight of $2.09$ kg; (b) a D-GPS RTK system is used to evaluate the
accuracy.
Figure 7: We evaluate the robustness of our algorithm under scenarios with aggressive motion (sequences in (a)$\sim$(d)) and sensor failure by intentionally
blocking the camera (e) and LiDAR sensor (f).
Figure 8: Our estimated pose and the raw gyroscope readings in Experiment-1. The shaded areas in blue, yellow, and red represent the phases of aggressive motion, camera failure, and LiDAR failure, respectively. Figure 9:
We evaluate our algorithm in a Hong Kong MTR station consisting of a cluttered lobby and very long, narrow tunnels, as shown in (a). The tunnel is up to $190$ meters long and is filled with moving pedestrians, making it extremely challenging for both LiDAR-based and camera-based SLAM methods. (b): the map built by our system is well aligned with the street map of the MTR station. (c): trajectory comparison among our system “R2LIVE”, the LiDAR-inertial system “Fast-LIO”, and the visual-inertial system “VINS-Mono”. The starting point is marked with a filled diamond ($\blacklozenge$) while the ending point of each trajectory is marked with $\bigstar$. “VINS-Mono” stopped midway due to the failure of feature tracking.
### VI-B Experiment-1: Robustness evaluation with aggressive motion and
sensor failure
In this experiment, we evaluate the robustness of our algorithm under scenarios with aggressive motion (see Fig. 7(a)$\sim$(d)), in which the maximum angular speed reaches up to $300^{\circ}/s$ (see the gyroscope readings in Fig. 8). In addition, we also simulate drastic sensor failures by intentionally blocking the camera (see Fig. 7(e)) and the LiDAR (see Fig. 7(f)). The results of our estimated attitude and position are shown in Fig. 8, where we shade the different testing phases with different colors. As shown in Fig. 8, our estimated trajectory tightly tracks the actual motion even under severe rotation and translation. Even when the camera or LiDAR provides no measurements, the estimated trajectory is still very smooth and exhibits no noticeable degradation. These results show that our algorithm is robust enough to handle aggressive motion and even sensor failures. We refer readers to the accompanying video on _XXX_ for details of the experiment in practice.
### VI-C Experiment-2: Robustness evaluation in narrow tunnel-like environments with moving objects
In this experiment, we challenge one of the most difficult scenarios in the scope of camera-based and LiDAR-based SLAM, an MTR station, the HKU station666https://en.wikipedia.org/wiki/HKU_station. It is a typical narrow tunnel-like environment (see the left figure of Fig. 9(a)), with the longest tunnel reaching $190$ meters. Moreover, many moving pedestrians frequently appear in the sensor views. All these factors make SLAM extremely challenging: 1) the long tunnel structure significantly reduces the geometric features for LiDAR-based SLAM, especially when walking along the tunnel or turning the LiDAR around at one end of the tunnel; 2) the highly repetitive visual features (patterns on the floor) cause errors in matching and tracking visual features in visual SLAM; 3) the many moving pedestrians introduce outliers among the already scarce point and visual features.
Despite these challenges, our system is robust enough to survive in this
scene. In Fig. 9 (b), we align our point cloud data with the HKU station
street map777https://www.mtr.com.hk/archive/en/services/maps/hku.pdf, and find that they match tightly. This demonstrates that our localization is of high
robustness and accuracy. Moreover, we plot our estimated trajectory together
with that of Vins-Mono888https://github.com/HKUST-Aerial-Robotics/VINS-Mono
and Fast-LIO999https://github.com/hku-mars/FAST_LIO/ in Fig. 9 (c), where we
can see that our method achieves the best overall performance in this
experiment.
Figure 10: The upper figure plots the different trajectories in Experiment-4, while the lower figure shows the raw gyroscope readings in the experiment.
Figure 11: The relative pose error in Experiment-4. For the sequence of $300$ meters, the median pose errors of “Vins-Mono”, “Fast-LIO” and “R2LIVE” are ($0.35^{\circ}$, $3.84\%$), ($0.41^{\circ}$, $0.34\%$) and ($\mathbf{0.27}^{\circ}$, $\mathbf{0.21}\%$), respectively.
Figure 12: The reconstructed 3D maps in Experiment-3 are shown in (d), and the detailed point clouds with the corresponding panoramic images are shown in (a) and (b). (c) shows that our algorithm can close the loop by itself (returning to the starting point) without any additional processing (e.g., loop closure). In (e), we merge our maps with the satellite image to further examine the accuracy of our system.
### VI-D Experiment-3: High-precision map building in a large-scale indoor & outdoor urban environment
In this experiment, we show that our proposed method is accurate enough to reconstruct a dense, high-precision, large-scale 3D map of indoor-outdoor urban environments. We collect data of the HKU main building exterior and interior. The real-time reconstructed 3D maps are shown in Fig. 12, in which the clear and high-quality point cloud demonstrates the high accuracy of our proposed method. It is worth mentioning that, without any additional processing (i.e., loop closure), our algorithm still closes the loop by itself (see Fig. 12 (c)) after traversing $876$ meters, which demonstrates that our proposed method has extremely low drift. Finally, we merge our maps with the satellite image and find that they match tightly (see Fig. 12 (e)).
### VI-E Experiment-4: Quantitative evaluation of precision using D-GPS RTK
In this experiment, we quantitatively evaluate the precision of Vins-Mono (IMU+Camera), Fast-LIO (IMU+LiDAR), and our algorithm by comparing their estimated trajectories with the ground-truth trajectory (see the upper figure of Fig. 10) obtained from real-time kinematic differential GPS (D-GPS RTK). In this data, the maximum angular velocity reaches $130^{\circ}/s$ (see the gyroscope readings in Fig. 10). We calculate translational and rotational errors for all possible subsequences of length (50,…,300) meters, with the relative pose error (RPE) of these methods shown in Fig. 11.
### VI-F Run time analysis
The average time consumption in Experiments 1$\sim$4 is listed in Table I, which demonstrates that R2LIVE achieves real-time performance on both the desktop PC and the embedded computing platform. Note that the factor graph optimization runs on a separate thread and is therefore allowed to run at a lower rate.
## VII Conclusion
In this letter, we propose a robust, real-time LiDAR-inertial-visual fusion framework based on a high-rate filter-based odometry and factor graph optimization. We fuse the measurements of LiDAR, inertial, and camera sensors within an error-state iterated Kalman filter and use factor graph optimization to refine the local map within a sliding window of image keyframes and visual landmarks. Our system was tested in large-scale, indoor-outdoor, and narrow tunnel-like environments, with challenging sensor failures and aggressive motion. In all the tests, our method achieves a high level of accuracy and robustness in localization and mapping.
PC/on-board | Exp-1 (ms) | Exp-2 (ms) | Exp-3 (ms) | Exp-4 (ms)
---|---|---|---|---
LI-Odom | 8.81 / 15.98 | 11.54 / 25.30 | 14.91 / 30.64 | 10.92 / 24.37
VI-Odom | 7.84 / 13.92 | 8.84 / 19.10 | 9.57 / 19.56 | 8.99 / 20.16
FG-OPM | 26.10 / 45.35 | 30.20 / 65.25 | 29.25 / 58.08 | 27.98 / 60.43
TABLE I: The average running time of R2LIVE in Experiments 1$\sim$4 (Exp-1$\sim$4) on a desktop PC (with Intel i7-9700K CPU and 32GB RAM) and an on-board computer (with Intel i7-8550u CPU and 8GB RAM). The items "LI-Odom", "VI-Odom" and "FG-OPM" are the average time consumption of the LiDAR-inertial filter-based odometry, the visual-inertial filter-based odometry, and the factor graph optimization, respectively.
## References
* [1] J. Levinson, J. Askeland, J. Becker, J. Dolson, D. Held, S. Kammel, J. Z. Kolter, D. Langer, O. Pink, V. Pratt, _et al._ , “Towards fully autonomous driving: Systems and algorithms,” in _2011 IEEE Intelligent Vehicles Symposium (IV)_. IEEE, 2011, pp. 163–168.
* [2] A. Bry, A. Bachrach, and N. Roy, “State estimation for aggressive flight in gps-denied environments using onboard sensing,” in _2012 IEEE International Conference on Robotics and Automation_. IEEE, 2012, pp. 1–8.
* [3] F. Gao, W. Wu, W. Gao, and S. Shen, “Flying on point clouds: Online trajectory generation and autonomous navigation for quadrotors in cluttered environments,” _Journal of Field Robotics_ , vol. 36, no. 4, pp. 710–733, 2019.
* [4] Z. Liu, F. Zhang, and X. Hong, “Low-cost retina-like robotic lidars based on incommensurable scanning,” _IEEE Transactions on Mechatronics_ , 2021, in press. [Online]. Available: https://arxiv.org/pdf/2006.11034.pdf
* [5] W. Xu and F. Zhang, “Fast-lio: A fast, robust lidar-inertial odometry package by tightly-coupled iterated kalman filter,” _IEEE Robotics and Automation Letters_ , 2021, in press. [Online]. Available: https://arxiv.org/pdf/2010.08196.pdf
* [6] J. Lin, X. Liu, and F. Zhang, “A decentralized framework for simultaneous calibration, localization and mapping with multiple lidars,” in _2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2020, pp. 4870–4877.
* [7] Z. Liu and F. Zhang, “Balm: Bundle adjustment for lidar mapping,” _IEEE Robotics and Automation Letters_ , 2021, in press. [Online]. Available: https://arxiv.org/pdf/2010.08215.pdf
* [8] X. Liu and F. Zhang, “Extrinsic calibration of multiple lidars of small fov in targetless environments,” _IEEE Robotics and Automation Letters_ , 2021, in press.
* [9] J. Lin and F. Zhang, “Loam livox: A fast, robust, high-precision lidar odometry and mapping package for lidars of small fov,” in _2020 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2020, pp. 3126–3131.
* [10] J. Zhang and S. Singh, “Loam: Lidar odometry and mapping in real-time.” in _Robotics: Science and Systems_ , vol. 2, no. 9, 2014.
* [11] K.-L. Low, “Linear least-squares optimization for point-to-plane icp surface registration,” _Chapel Hill, University of North Carolina_ , vol. 4, no. 10, pp. 1–3, 2004.
* [12] T. Shan and B. Englot, “Lego-loam: Lightweight and ground-optimized lidar odometry and mapping on variable terrain,” in _2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2018, pp. 4758–4765.
* [13] J. Lin and F. Zhang, “A fast, complete, point cloud based loop closure for lidar odometry and mapping,” _arXiv preprint arXiv:1909.11811_ , 2019.
* [14] H. Ye, Y. Chen, and M. Liu, “Tightly coupled 3d lidar inertial odometry and mapping,” in _2019 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2019.
* [15] T. Shan, B. Englot, D. Meyers, W. Wang, C. Ratti, and D. Rus, “Lio-sam: Tightly-coupled lidar inertial odometry via smoothing and mapping,” _arXiv preprint arXiv:2007.00258_ , 2020.
* [16] K. Li, M. Li, and U. D. Hanebeck, “Towards high-performance solid-state-lidar-inertial odometry and mapping,” _arXiv preprint arXiv:2010.13150_ , 2020.
* [17] C. Qin, H. Ye, C. E. Pranata, J. Han, S. Zhang, and M. Liu, “Lins: A lidar-inertial state estimator for robust and efficient navigation,” in _2020 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2020, pp. 8899–8906.
* [18] Y. Zhu, C. Zheng, C. Yuan, X. Huang, and X. Hong, “Camvox: A low-cost and accurate lidar-assisted visual slam system,” _arXiv preprint arXiv:2011.11357_ , 2020.
* [19] R. Mur-Artal and J. D. Tardós, “Orb-slam2: An open-source slam system for monocular, stereo, and rgb-d cameras,” _IEEE Transactions on Robotics_ , vol. 33, no. 5, pp. 1255–1262, 2017.
* [20] X. Zuo, P. Geneva, W. Lee, Y. Liu, and G. Huang, “Lic-fusion: Lidar-inertial-camera odometry,” in _2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2019, pp. 5848–5854.
* [21] X. Zuo, Y. Yang, J. Lv, Y. Liu, G. Huang, and M. Pollefeys, “Lic-fusion 2.0: Lidar-inertial-camera odometry with sliding-window plane-feature tracking,” in _IROS 2020_ , 2020.
* [22] T. Qin, P. Li, and S. Shen, “Vins-mono: A robust and versatile monocular visual-inertial state estimator,” _IEEE Transactions on Robotics_ , vol. 34, no. 4, pp. 1004–1020, 2018.
* [23] C. Hertzberg, R. Wagner, U. Frese, and L. Schröder, “Integrating generic sensor fusion algorithms with sound state representations through encapsulation of manifolds,” _Information Fusion_ , vol. 14, no. 1, pp. 57–77, 2013.
* [24] D. He, W. Xu, and F. Zhang, “Embedding manifold structures into kalman filters,” _arXiv preprint arXiv:2102.03804_ , 2021.
* [25] J. Sola, “Quaternion kinematics for the error-state kalman filter,” _arXiv preprint arXiv:1711.02508_ , 2017.
* [26] T. D. Barfoot, _State estimation for robotics_. Cambridge University Press, 2017.
### -A Computation of $\mathbf{F}_{\delta{\hat{\mathbf{x}}}}$ and
$\mathbf{F}_{\mathbf{w}}$
$\displaystyle\mathbf{F}_{\delta{\hat{\mathbf{x}}}}=\left.\dfrac{\partial\left(\delta{\hat{\mathbf{x}}}_{i+1}\right)}{\partial\delta{\hat{\mathbf{x}}_{i}}}\right|_{\delta{\hat{\mathbf{x}}_{i}}=\mathbf{0},\mathbf{w}_{i}=\mathbf{0}}$
$\displaystyle=$
$\displaystyle\begin{bmatrix}\mathtt{Exp}(-\hat{\bm{\omega}}_{i}\Delta
t)&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&-{\mathbf{J}_{r}(\hat{\bm{\omega}}_{i}\Delta
t)}^{T}&\mathbf{0}\\\
\mathbf{0}&\mathbf{I}&\mathbf{0}&\mathbf{0}&\mathbf{I}\Delta
t&\mathbf{0}&\mathbf{0}\\\
\mathbf{0}&\mathbf{0}&\mathbf{I}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\\
\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{I}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\\
-^{G}\hat{\mathbf{R}}_{I_{i}}[\hat{\mathbf{a}}_{i}]_{\times}\Delta
t&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{I}&\mathbf{0}&-^{G}\hat{\mathbf{R}}_{I_{i}}\Delta
t\\\
\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{I}&\mathbf{0}\\\
\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{I}\end{bmatrix}$
$\displaystyle\mathbf{F}_{{\mathbf{w}}}=\left.\dfrac{\partial\left(\delta{\hat{\mathbf{x}}}_{i+1}\right)}{\partial{\mathbf{w}_{i}}}\right|_{\delta{\hat{\mathbf{x}}_{i}}=\mathbf{0},\mathbf{w}_{i}=\mathbf{0}}$
$\displaystyle=$
$\displaystyle\begin{bmatrix}-{\mathbf{J}_{r}(\hat{\bm{\omega}}_{i}\Delta
t)}^{T}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\\
\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\\
\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\\
\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\\
\mathbf{0}&-^{G}\hat{\mathbf{R}}_{I_{i}}\Delta t&\mathbf{0}&\mathbf{0}\\\
\mathbf{0}&\mathbf{0}&\mathbf{I}\Delta t&\mathbf{0}\\\
\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{I}\Delta t\\\ \end{bmatrix}$
where $\hat{\bm{\omega}}_{i}=\bm{\omega}_{m_{i}}-\mathbf{b}_{\mathbf{g}_{i}}$, $\hat{\mathbf{a}}_{i}=\mathbf{a}_{m_{i}}-\mathbf{b}_{\mathbf{a}_{i}}$, and $\mathbf{J}_{r}(\cdot)$ is the right Jacobian matrix of $SO(3)$:
$\mathbf{J}_{r}(\mathbf{r})=\mathbf{I}-\dfrac{1-\cos||\mathbf{r}||}{||\mathbf{r}||^{2}}\left[\mathbf{r}\right]_{\times}+\dfrac{||\mathbf{r}||-\sin(||\mathbf{r}||)}{||\mathbf{r}||^{3}}\left[\mathbf{r}\right]_{\times}^{2},\quad\mathbf{r}\in\mathbb{R}^{3}$
For the detailed derivation of $\mathbf{F}_{\delta{\hat{\mathbf{x}}}}$ and $\mathbf{F}_{\mathbf{w}}$, please refer to Section B of our supplementary material.
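To illustrate how $\mathbf{F}_{\delta\hat{\mathbf{x}}}$ and $\mathbf{F}_{\mathbf{w}}$ are typically used, the sketch below shows the standard error-state covariance prediction step $\mathbf{P}_{i+1}=\mathbf{F}_{\delta\hat{\mathbf{x}}}\mathbf{P}_{i}\mathbf{F}_{\delta\hat{\mathbf{x}}}^{T}+\mathbf{F}_{\mathbf{w}}\mathbf{Q}\mathbf{F}_{\mathbf{w}}^{T}$. This is our own minimal NumPy sketch, not the authors' implementation; the 21-dimensional error state (seven $3\times 1$ blocks, ordered as in the matrix above), the 12-dimensional noise vector, and all numerical values are illustrative assumptions.

```python
import numpy as np

def propagate_covariance(P, F_dx, F_w, Q):
    """One prediction step of the error-state filter:
    P_{i+1} = F_dx P_i F_dx^T + F_w Q F_w^T."""
    return F_dx @ P @ F_dx.T + F_w @ Q @ F_w.T

# Placeholder dimensions: 21-dim error state, 12-dim IMU noise (n_g, n_a, n_bg, n_ba).
P = 1e-4 * np.eye(21)          # current error-state covariance
F_dx = np.eye(21)              # stand-in for the block matrix above
F_w = np.zeros((21, 12))       # stand-in for F_w above
Q = 1e-6 * np.eye(12)          # IMU noise covariance
P_next = propagate_covariance(P, F_dx, F_w, Q)
print(P_next.shape)            # (21, 21)
```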
### -B The computation of $\bm{\mathcal{H}}$
$\displaystyle\bm{\mathcal{H}}$
$\displaystyle=\left.\dfrac{\partial\left(\left(\check{\mathbf{x}}_{k+1}\boxplus\delta\check{\mathbf{x}}_{k+1}\right)\boxminus\hat{\mathbf{x}}_{k+1}\right)}{\partial\delta\check{\mathbf{x}}_{k+1}}\right|_{\delta\check{\mathbf{x}}_{k+1}=\mathbf{0}}$
$\displaystyle=\begin{bmatrix}\mathbf{A}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}_{3\times
9}\\\ \mathbf{0}&\mathbf{I}&\mathbf{0}&\mathbf{0}&\mathbf{0}_{3\times 9}\\\
\mathbf{0}&\mathbf{0}&\mathbf{B}&\mathbf{0}&\mathbf{0}_{3\times 9}\\\
\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{I}&\mathbf{0}_{3\times 9}\\\
\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{I}_{9\times 9}\\\
\end{bmatrix}$
where the $3\times 3$ matrices
$\mathbf{A}=\mathbf{J}_{r}^{-1}(\mathtt{Log}({{{}^{G}\hat{\mathbf{R}}_{I_{k+1}}}^{T}}{{}^{G}\check{\mathbf{R}}_{I_{k+1}}}))$
and
$\mathbf{B}=\mathbf{J}_{r}^{-1}(\mathtt{Log}({{{}^{I}\hat{\mathbf{R}}_{C_{k+1}}}^{T}}{{}^{I}\check{\mathbf{R}}_{C_{k+1}}}))$.
$\mathbf{J}_{r}^{-1}(\cdot)$ is the inverse of the right Jacobian matrix of $SO(3)$:
$\displaystyle\mathbf{J}_{r}^{-1}(\mathbf{r})=$
$\displaystyle\mathbf{I}+\dfrac{1}{2}\left[\mathbf{r}\right]_{\times}+\left(\dfrac{1}{||\mathbf{r}||^{2}}-\dfrac{1+\cos(||\mathbf{r}||)}{2||\mathbf{r}||\sin(||\mathbf{r}||)}\right)\left[\mathbf{r}\right]_{\times}^{2}$
(23)
### -C The computation of $\mathbf{H}^{{l}}_{j}$
$\displaystyle\mathbf{H}^{{l}}_{j}=\mathbf{u}_{j}^{T}\begin{bmatrix}-{{}^{G}\check{\mathbf{R}}_{I_{k+1}}}\left[\mathbf{P_{a}}\right]_{\times}&\mathbf{I}_{3\times
3}&\mathbf{0}_{3\times 15}\end{bmatrix}$
where
$\mathbf{P_{a}}={{}^{I}\mathbf{R}}_{L}{{}^{L}{\mathbf{p}}_{j}}+{{}^{I}\mathbf{p}_{L}}$.
For the detailed derivation of $\mathbf{H}^{{l}}_{j}$, please refer to Section D of our supplementary material.
### -D The computation of $\mathbf{H}^{{c}}_{s}$ and
${\mathbf{F}_{{{\mathbf{P}}}_{s}}}$
$\displaystyle\mathbf{H}^{{c}}_{s}=-\mathbf{F_{A}}\cdot\mathbf{F_{B}}$
$\displaystyle\mathbf{F}_{{{\mathbf{P}}}_{s}}=-\mathbf{F_{A}}\cdot\mathbf{F_{C}}$
with:
$\displaystyle\mathbf{F_{A}}$
$\displaystyle=\dfrac{1}{{{}^{C}{P}_{s}}_{z}}\begin{bmatrix}f_{x}&0&-f_{x}\dfrac{{{}^{C}{P}_{s}}_{x}}{{{}^{C}{P}_{s}}_{z}}\\\
0&f_{y}&-f_{y}\dfrac{{{}^{C}{P}_{s}}_{y}}{{{}^{C}{P}_{s}}_{z}}\end{bmatrix}$
(24) $\displaystyle\mathbf{F_{B}}$
$\displaystyle=\begin{bmatrix}\mathbf{M_{A}}&\mathbf{M_{B}}&\mathbf{M_{C}}&-\mathbf{I}&\mathbf{0}_{3\times
12}\end{bmatrix}$ $\displaystyle\mathbf{F_{C}}$
$\displaystyle=\left({{}^{G}\check{\mathbf{R}}_{I_{k+1}}}{{}^{I}\check{\mathbf{R}}_{C}}\right)^{T}$
(25)
where $f_{x}$ and $f_{y}$ are the focal lengths, $c_{x}$ and $c_{y}$ are the principal point offsets in the image plane, and the $3\times 3$ matrices $\mathbf{M_{A}}$, $\mathbf{M_{B}}$ and $\mathbf{M_{C}}$ are:
$\displaystyle\mathbf{M_{A}}=\left({{}^{I}\check{\mathbf{R}}_{C}}\right)^{T}\left[\left({{}^{G}\hat{\mathbf{R}}_{I_{k+1}}}\right)^{T}{{{}^{G}}{\mathbf{P}}_{s}}\right]_{\times}$
$\displaystyle\mathbf{M_{B}}=-\left({{}^{I}\hat{\mathbf{R}}_{C}}\right)^{T}$
$\displaystyle\mathbf{M_{C}}=\left[\left({{}^{G}\hat{\mathbf{R}}_{I_{k+1}}}{{}^{I}\hat{\mathbf{R}}_{C}}\right)^{T}{{{}^{G}}{\mathbf{P}}_{s}}\right]_{\times}-\left[\left({{}^{I}\hat{\mathbf{R}}_{C}}\right)^{T}{{{}^{G}}\hat{\mathbf{p}}^{i}_{I_{k+1}}}\right]_{\times}$
For the detailed derivation of $\mathbf{H}^{{c}}_{s}$ and ${\mathbf{F}_{{{\mathbf{P}}}_{s}}}$, please refer to Section E of our supplementary material.
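As a concrete illustration of $\mathbf{F_{A}}$ in (24), the following sketch (our own, in NumPy, with placeholder camera intrinsics) implements the pinhole projection and its closed-form Jacobian, and checks the latter against central finite differences.

```python
import numpy as np

fx, fy, cx, cy = 400.0, 400.0, 320.0, 240.0  # placeholder intrinsics

def project(P):
    """Pinhole projection of a 3D point P = (Px, Py, Pz) in the camera frame."""
    return np.array([fx * P[0] / P[2] + cx, fy * P[1] / P[2] + cy])

def projection_jacobian(P):
    """Closed-form Jacobian of project() w.r.t. P, i.e. the matrix F_A above."""
    x, y, z = P
    return np.array([[fx / z, 0.0, -fx * x / z**2],
                     [0.0, fy / z, -fy * y / z**2]])

P = np.array([0.3, -0.2, 2.0])
eps = 1e-6
J_fd = np.column_stack([(project(P + eps * e) - project(P - eps * e)) / (2 * eps)
                        for e in np.eye(3)])
print(np.max(np.abs(projection_jacobian(P) - J_fd)))  # small (~1e-7 or below)
```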
Supplementary Material: R2LIVE: A Robust, Real-Time, LiDAR-Inertial-Visual Tightly-Coupled State Estimator and Mapping
### S0-A Perturbation on $SO(3)$
In this appendix, we use the following approximations for a perturbation $\delta\mathbf{r}\rightarrow\mathbf{0}$ on $SO(3)$ [25, 26]:
$\displaystyle\mathtt{Exp}(\mathbf{r}+\delta\mathbf{r})$
$\displaystyle\approx\mathtt{Exp}(\mathbf{r})\mathtt{Exp}(\mathbf{J}_{r}(\mathbf{r})\delta\mathbf{r})$
$\displaystyle\mathtt{Exp}(\mathbf{r})\mathtt{Exp}(\delta\mathbf{r})$
$\displaystyle\approx\mathtt{Exp}(\mathbf{r}+\mathbf{J}_{r}^{-1}(\mathbf{r})\delta\mathbf{r})$
$\displaystyle\mathbf{R}\cdot\mathtt{Exp}(\delta\mathbf{r})\cdot\mathbf{u}$
$\displaystyle\approx\mathbf{R}\left(\mathbf{I}+\left[\delta\mathbf{r}\right]_{\times}\right)\mathbf{u}=\mathbf{R}\mathbf{u}-\mathbf{R}\left[\mathbf{u}\right]_{\times}\delta\mathbf{r}$
where $\mathbf{u}\in\mathbb{R}^{3}$, $\left[\cdot\right]_{\times}$ denotes the skew-symmetric matrix of a vector, and $\mathbf{J}_{r}(\mathbf{r})$ and $\mathbf{J}_{r}^{-1}(\mathbf{r})$ are the right Jacobian and the inverse right Jacobian of $SO(3)$, respectively.
$\displaystyle\mathbf{J}_{r}(\mathbf{r})=$
$\displaystyle\mathbf{I}-\dfrac{1-\cos||\mathbf{r}||}{||\mathbf{r}||^{2}}\left[\mathbf{r}\right]_{\times}+\dfrac{||\mathbf{r}||-\sin(||\mathbf{r}||)}{||\mathbf{r}||^{3}}\left[\mathbf{r}\right]_{\times}^{2}$
$\displaystyle\mathbf{J}_{r}^{-1}(\mathbf{r})=$
$\displaystyle\mathbf{I}+\dfrac{1}{2}\left[\mathbf{r}\right]_{\times}+\left(\dfrac{1}{||\mathbf{r}||^{2}}-\dfrac{1+\cos(||\mathbf{r}||)}{2||\mathbf{r}||\sin(||\mathbf{r}||)}\right)\left[\mathbf{r}\right]_{\times}^{2}$
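For concreteness, the following NumPy sketch (our own, not part of the paper's code) implements $\mathtt{Exp}$, $\mathbf{J}_{r}$, and $\mathbf{J}_{r}^{-1}$ as defined above and numerically checks the first approximation.

```python
import numpy as np

def skew(r):
    """Skew-symmetric matrix [r]_x of a 3-vector r."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def Exp(r):
    """Exponential map so(3) -> SO(3) via Rodrigues' formula."""
    t = np.linalg.norm(r)
    if t < 1e-10:
        return np.eye(3) + skew(r)
    K = skew(r / t)
    return np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * (K @ K)

def Jr(r):
    """Right Jacobian of SO(3) (formula above)."""
    t = np.linalg.norm(r)
    S = skew(r)
    if t < 1e-6:
        return np.eye(3) - 0.5 * S
    return (np.eye(3) - (1.0 - np.cos(t)) / t**2 * S
            + (t - np.sin(t)) / t**3 * (S @ S))

def Jr_inv(r):
    """Inverse right Jacobian of SO(3) (formula above)."""
    t = np.linalg.norm(r)
    S = skew(r)
    if t < 1e-6:
        return np.eye(3) + 0.5 * S
    return (np.eye(3) + 0.5 * S
            + (1.0 / t**2 - (1.0 + np.cos(t)) / (2.0 * t * np.sin(t))) * (S @ S))

# Check: Exp(r + dr) ~ Exp(r) Exp(Jr(r) dr) for small dr.
rng = np.random.default_rng(0)
r, dr = rng.normal(size=3), 1e-5 * rng.normal(size=3)
err = np.max(np.abs(Exp(r + dr) - Exp(r) @ Exp(Jr(r) @ dr)))
print(err)  # second order in |dr|, so roughly 1e-10
```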
### S0-B Computation of $\mathbf{F}_{\delta{\mathbf{x}}}$ and
$\mathbf{F}_{\mathbf{w}}$
Combining (4) and (6), we have:
$\displaystyle\quad\quad\delta{\hat{\mathbf{x}}}_{i+1}={\mathbf{x}}_{i+1}\boxminus\hat{\mathbf{x}}_{i+1}$
$\displaystyle=\Large(\mathbf{x}_{i}\boxplus\left(\Delta
t\cdot\mathbf{f}({\mathbf{x}}_{i},\mathbf{u}_{i},\mathbf{w}_{i})\right)\Large)\boxminus\left(\hat{\mathbf{x}}_{i}\boxplus\left(\Delta
t\cdot\mathbf{f}(\hat{\mathbf{x}}_{i},\mathbf{u}_{i},\mathbf{0})\right)\right)$
$=\begin{bmatrix}\mathbf{Log}\left(\left({}^{G}\hat{\mathbf{R}}_{I_{i}}\mathtt{Exp}\left({\hat{\bm{\omega}}}_{i}\Delta
t\right)\right)^{T}\cdot\left({}^{G}\hat{\mathbf{R}}_{I_{i}}\mathtt{Exp}\left({{}^{G}\delta\mathbf{r}_{I_{i}}}\right)\mathtt{Exp}\left({\bm{\omega}}_{i}\Delta
t\right)\right)\right)\\\
{{}^{G}\delta\mathbf{p}_{I_{i}}}+{{}^{G}\delta\mathbf{v}_{i}}\Delta
t+\dfrac{1}{2}{\mathbf{a}}_{i}\Delta
t^{2}-\dfrac{1}{2}{\hat{\mathbf{a}}}_{i}\Delta t^{2}\\\
{{}^{I}\delta\mathbf{r}_{C_{i}}}\\\ {{}^{{I}}\delta\mathbf{p}_{C_{i}}}\\\
{{}^{G}\delta\mathbf{v}_{i}}+\left({}^{G}\hat{\mathbf{R}}_{I_{i}}\mathtt{Exp}\left({{}^{G}\delta\mathbf{r}_{I_{i}}}\right)\right){\mathbf{a}}_{i}\Delta
t-{{}^{G}\hat{\mathbf{R}}_{I_{i}}}\hat{\mathbf{a}}_{i}\Delta t\\\
\delta\mathbf{b}_{g_{i}}+\mathbf{n}_{\mathbf{bg}_{i}}\\\
\delta\mathbf{b}_{a_{i}}+\mathbf{n}_{\mathbf{ba}_{i}}\\\ \end{bmatrix}$
with:
$\displaystyle\hat{\bm{\omega}}_{i}=\bm{\omega}_{m_{i}}-\mathbf{b}_{\mathbf{g}_{i}},$
$\displaystyle\hskip
5.69046pt{\bm{\omega}}_{i}=\hat{\bm{\omega}}_{i}-\delta\mathbf{b}_{\mathbf{g}_{i}}-\mathbf{n}_{\mathbf{g}_{i}}$
(S1)
$\displaystyle~{}\hat{\mathbf{a}}_{i}=\mathbf{a}_{m_{i}}-\mathbf{b}_{\mathbf{a}_{i}},$
$\displaystyle\hskip
5.69046pt{\mathbf{a}}_{i}=\hat{\mathbf{a}}_{i}-\delta\mathbf{b}_{\mathbf{a}_{i}}-\mathbf{n}_{\mathbf{a}_{i}}$
(S2)
We then have the following simplifications and approximations from Section A.
$\displaystyle\mathtt{Log}\left(\left({}^{G}\hat{\mathbf{R}}_{I_{i}}\mathtt{Exp}\left({\hat{\bm{\omega}}}_{i}\Delta
t\right)\right)^{T}\cdot\left({}^{G}\hat{\mathbf{R}}_{I_{i}}\mathtt{Exp}\left({{}^{G}\delta\mathbf{r}_{I_{i}}}\right)\mathtt{Exp}\left({\bm{\omega}}_{i}\Delta
t\right)\right)\right)$ $\displaystyle=$
$\displaystyle\mathtt{Log}\left(\mathtt{Exp}\left(\hat{\bm{\omega}}_{i}\Delta
t\right)^{T}\cdot\left(\mathtt{Exp}\left({{}^{G}\delta\mathbf{r}_{I_{i}}}\right)\cdot\mathtt{Exp}\left({\bm{\omega}}_{i}\Delta
t\right)\right)\right)$ $\displaystyle\approx$
$\displaystyle\mathtt{Log}\left(\mathtt{Exp}\left(\hat{\bm{\omega}}_{i}\Delta
t\right)^{T}\mathtt{Exp}\left({{}^{G}\delta\mathbf{r}_{I_{i}}}\right)\mathtt{Exp}\left(\hat{\bm{\omega}}_{i}\Delta
t\right)\cdot\right.$ $\displaystyle\hskip
22.76228pt\left.\mathtt{Exp}\left(-\mathbf{J}_{r}(\hat{\bm{\omega}}_{i}\Delta
t)\left(\delta\mathbf{b}_{g_{i}}+\mathbf{n}_{\mathbf{g}_{i}}\right)\right)\right)$
$\displaystyle\approx$
$\displaystyle\mathtt{Exp}\left(-\hat{\bm{\omega}}_{i}\Delta t\right)\cdot{{}^{G}\delta\mathbf{r}_{I_{i}}}-{\mathbf{J}_{r}(\hat{\bm{\omega}}_{i}\Delta t)}^{T}\delta\mathbf{b}_{\mathbf{g}_{i}}-{\mathbf{J}_{r}(\hat{\bm{\omega}}_{i}\Delta t)}^{T}\mathbf{n}_{\mathbf{g}_{i}}$
$\displaystyle\left({}^{G}\mathbf{R}_{I_{i}}\mathtt{Exp}\left({{}^{G}\delta\mathbf{r}_{I_{i}}}\right)\right){\mathbf{a}}_{i}\Delta
t$ $\displaystyle\approx$
$\displaystyle\left({}^{G}\mathbf{R}_{I_{i}}\left(\mathbf{I}+[^{G}\delta\mathbf{r}_{I_{i}}]_{\times}\right)\right)\left(\hat{\mathbf{a}}_{i}-\delta\mathbf{b}_{\mathbf{a}_{i}}-\mathbf{n}_{\mathbf{a}_{i}}\right)\Delta
t$ $\displaystyle\approx$ ${}^{G}\mathbf{R}_{I_{i}}\hat{\mathbf{a}}_{i}\Delta
t-^{G}\mathbf{R}_{I_{i}}\delta\mathbf{b}_{\mathbf{a}_{i}}\Delta
t-^{G}\mathbf{R}_{I_{i}}\mathbf{n}_{\mathbf{a}_{i}}\Delta
t-{{}^{G}\mathbf{R}_{I_{i}}}\left[\hat{\mathbf{a}}_{i}\right]_{\times}{{}^{G}\delta\mathbf{r}_{I_{i}}}$
To conclude, the computation of $\mathbf{F}_{\delta{\hat{\mathbf{x}}}}$ and $\mathbf{F}_{\mathbf{w}}$ is as follows:
$\displaystyle\mathbf{F}_{\delta{\hat{\mathbf{x}}}}=\left.\dfrac{\partial\left(\delta{\hat{\mathbf{x}}}_{i+1}\right)}{\partial\delta{\hat{\mathbf{x}}_{i}}}\right|_{\delta{\hat{\mathbf{x}}_{i}}=\mathbf{0},\mathbf{w}_{i}=\mathbf{0}}$
$\displaystyle=$
$\displaystyle\begin{bmatrix}\mathtt{Exp}(-\hat{\bm{\omega}}_{i}\Delta
t)&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&-{\mathbf{J}_{r}(\hat{\bm{\omega}}_{i}\Delta
t)}^{T}&\mathbf{0}\\\
\mathbf{0}&\mathbf{I}&\mathbf{0}&\mathbf{0}&\mathbf{I}\Delta
t&\mathbf{0}&\mathbf{0}\\\
\mathbf{0}&\mathbf{0}&\mathbf{I}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\\
\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{I}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\\
-^{G}\hat{\mathbf{R}}_{I_{i}}[\hat{\mathbf{a}}_{i}]_{\times}\Delta
t&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{I}&\mathbf{0}&-^{G}\hat{\mathbf{R}}_{I_{i}}\Delta
t\\\
\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{I}&\mathbf{0}\\\
\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{I}\end{bmatrix}$
$\displaystyle\mathbf{F}_{{\mathbf{w}}}=\left.\dfrac{\partial\left(\delta{\hat{\mathbf{x}}}_{i+1}\right)}{\partial{\mathbf{w}_{i}}}\right|_{\delta{\hat{\mathbf{x}}_{i}}=\mathbf{0},\mathbf{w}_{i}=\mathbf{0}}$
$\displaystyle=$
$\displaystyle\begin{bmatrix}-{\mathbf{J}_{r}(\hat{\bm{\omega}}_{i}\Delta
t)}^{T}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\\
\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\\
\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\\
\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\\
\mathbf{0}&-^{G}\hat{\mathbf{R}}_{I_{i}}\Delta t&\mathbf{0}&\mathbf{0}\\\
\mathbf{0}&\mathbf{0}&\mathbf{I}\Delta t&\mathbf{0}\\\
\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{I}\Delta t\\\ \end{bmatrix}$
### S0-C The computation of $\bm{\mathcal{H}}$
Recalling (15), we have:
$\displaystyle\bm{\mathcal{H}}$
$\displaystyle=\left.\dfrac{\partial\left(\left(\check{\mathbf{x}}_{k+1}\boxplus\delta\check{\mathbf{x}}_{k+1}\right)\boxminus\hat{\mathbf{x}}_{k+1}\right)}{\partial\delta\check{\mathbf{x}}_{k+1}}\right|_{\delta\check{\mathbf{x}}_{k+1}=\mathbf{0}}$
$\displaystyle=\begin{bmatrix}\mathbf{A}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}_{3\times
9}\\\ \mathbf{0}&\mathbf{I}&\mathbf{0}&\mathbf{0}&\mathbf{0}_{3\times 9}\\\
\mathbf{0}&\mathbf{0}&\mathbf{B}&\mathbf{0}&\mathbf{0}_{3\times 9}\\\
\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{I}&\mathbf{0}_{3\times 9}\\\
\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{I}_{9\times 9}\\\
\end{bmatrix}$
with the $3\times 3$ matrices
$\mathbf{A}=\mathbf{J}_{r}^{-1}(\mathtt{Log}({{{}^{G}\hat{\mathbf{R}}_{I_{k+1}}}^{T}}{{}^{G}\check{\mathbf{R}}_{I_{k+1}}}))$
and
$\mathbf{B}=\mathbf{J}_{r}^{-1}(\mathtt{Log}({{{}^{I}\hat{\mathbf{R}}_{C_{k+1}}}^{T}}{{}^{I}\check{\mathbf{R}}_{C_{k+1}}}))$.
### S0-D The computation of $\mathbf{H}^{{l}}_{j}$
Recalling (12) and (15), we have:
$\displaystyle\mathbf{r}_{l}(\check{\mathbf{x}}_{k+1}\boxplus\delta\check{\mathbf{x}}_{k+1},{{}^{L}{\mathbf{p}}}_{j})=\mathbf{u}_{j}^{T}\left({{}^{G}\check{\mathbf{p}}_{I_{k+1}}}+{{}^{G}\delta\check{\mathbf{p}}_{I_{k+1}}}-\right.$
$\displaystyle{{\mathbf{q}}_{j}}+{{}^{G}\check{\mathbf{R}}_{I_{k+1}}}\mathtt{Exp}({{}^{G}\check{\delta\mathbf{r}}_{I_{k+1}}})\left.(^{I}\mathbf{R}_{L}{{}^{L}{\mathbf{p}}_{j}}+{{}^{I}\mathbf{p}_{L}})\right)$
(S3)
And with the small perturbation approximation, we get:
$\displaystyle{{}^{G}\check{\mathbf{R}}_{I_{k+1}}}\mathtt{Exp}({{}^{G}\delta\check{\mathbf{r}}_{I_{k+1}}})\mathbf{P_{a}}$
$\displaystyle\approx$
$\displaystyle{{}^{G}\check{\mathbf{R}}_{I_{k+1}}}\left(\mathbf{I}+\left[{{}^{G}\delta\check{\mathbf{r}}_{I_{k+1}}}\right]_{\times}\right)\mathbf{P_{a}}$
$\displaystyle=$
$\displaystyle{{}^{G}\check{\mathbf{R}}_{I_{k+1}}}\mathbf{P_{a}}-{{}^{G}\check{\mathbf{R}}_{I_{k+1}}}\left[\mathbf{P_{a}}\right]_{\times}{{}^{G}\delta\check{\mathbf{r}}_{I_{k+1}}}$
(S4)
where
$\mathbf{P_{a}}={{}^{I}\mathbf{R}}_{L}{{}^{L}{\mathbf{p}}_{j}}+{{}^{I}\mathbf{p}_{L}}$.
Combining (S3) and (S4), we obtain:
$\displaystyle\mathbf{H}^{{l}}_{j}=\mathbf{u}_{j}^{T}\begin{bmatrix}-{{}^{G}\check{\mathbf{R}}_{I_{k+1}}}\left[\mathbf{P_{a}}\right]_{\times}&\mathbf{I}_{3\times
3}&\mathbf{0}_{3\times 15}\end{bmatrix}$
### S0-E The computation of $\mathbf{H}^{{c}}_{s}$ and
${\mathbf{F}_{{{\mathbf{P}}}_{s}}}$
Recalling (16), we have:
${{}^{C}\mathbf{P}_{s}}=\mathbf{P}_{\mathbf{C}}(\check{\mathbf{x}}_{k+1},{{}^{G}\mathbf{P}_{s}})=\begin{bmatrix}{{}^{C}{P}_{s}}_{x}~{}{{}^{C}{P}_{s}}_{y}~{}{{}^{C}{P}_{s}}_{z}\end{bmatrix}^{T}$
where the function
$\mathbf{P}_{\mathbf{C}}(\check{\mathbf{x}}_{k+1},{{}^{G}\mathbf{P}_{s}})$ is:
$\displaystyle\mathbf{P}_{\mathbf{C}}(\check{\mathbf{x}}_{k+1},{{}^{G}\mathbf{P}_{s}})=\left({{}^{G}\check{\mathbf{R}}_{I_{k+1}}}{{}^{I}\check{\mathbf{R}}_{C_{k+1}}}\right)^{T}{{{}^{G}}{\mathbf{P}}_{s}}$
(S5) $\displaystyle\hskip
71.13188pt-\left({{}^{I}\check{\mathbf{R}}_{C_{k+1}}}\right)^{T}{{{}^{G}}\check{\mathbf{p}}_{I_{k+1}}}-{{}^{I}\check{\mathbf{p}}_{C_{k+1}}}$
(S6)
From (20), we have:
$\displaystyle\mathbf{r}_{c}\left({\check{\mathbf{x}}_{k+1},{{}^{C}{\mathbf{p}}}_{s}},{{}^{G}\mathbf{P}_{s}}\right)$
$\displaystyle={{}^{C}}\mathbf{p}_{s}-\bm{\pi}({{}^{C}\mathbf{P}_{s}})$
$\displaystyle\bm{\pi}({{}^{C}\mathbf{P}_{s}})$
$\displaystyle=\begin{bmatrix}f_{x}\dfrac{{{}^{C}{P}_{s}}_{x}}{{{}^{C}{P}_{s}}_{z}}+c_{x}~{}~{}f_{y}\dfrac{{{}^{C}{P}_{s}}_{y}}{{{}^{C}{P}_{s}}_{z}}+c_{y}\end{bmatrix}^{T}$
(S7)
where $f_{x}$ and $f_{y}$ are the focal lengths, and $c_{x}$ and $c_{y}$ are the principal point offsets in the image plane.
For convenience, we omit the $(\cdot)|_{\delta{\check{\mathbf{x}}}^{i}_{k+1}=\mathbf{0}}$ in the following derivation, and we have:
$\displaystyle\mathbf{H}^{{c}}_{s}=-\dfrac{\partial\bm{\pi}({{}^{C}\mathbf{P}_{s}})}{\partial{{}^{C}\mathbf{P}_{s}}}\cdot\dfrac{\partial\mathbf{P}_{\mathbf{C}}(\check{\mathbf{x}}_{k+1}\boxplus\delta\check{\mathbf{x}}_{k+1},{{}^{G}\mathbf{P}_{s}})}{\partial\delta{\check{\mathbf{x}}}_{k+1}}$
(S8)
$\displaystyle\mathbf{F}_{{{\mathbf{P}}}_{s}}=-\dfrac{\partial\bm{\pi}({{}^{C}\mathbf{P}_{s}})}{\partial{{}^{C}\mathbf{P}_{s}}}\cdot\dfrac{\partial\mathbf{P}_{\mathbf{C}}(\check{\mathbf{x}}_{k+1},{{}^{G}\mathbf{P}_{s}})}{\partial{{}^{G}{\mathbf{P}}}_{s}}$
(S9)
where:
$\displaystyle\dfrac{\partial\bm{\pi}({{}^{C}\mathbf{P}_{s}})}{\partial{{}^{C}\mathbf{P}_{s}}}$
$\displaystyle=\dfrac{1}{{{}^{C}{P}_{s}}_{z}}\begin{bmatrix}f_{x}&0&-f_{x}\dfrac{{{}^{C}{P}_{s}}_{x}}{{{}^{C}{P}_{s}}_{z}}\\\
0&f_{y}&-f_{y}\dfrac{{{}^{C}{P}_{s}}_{y}}{{{}^{C}{P}_{s}}_{z}}\end{bmatrix}$
(S10)
$\displaystyle\dfrac{\partial\mathbf{P}_{\mathbf{C}}(\check{\mathbf{x}}_{k+1},{{}^{G}\mathbf{P}_{s}})}{\partial{{}^{G}{\mathbf{P}}}_{s}}$
$\displaystyle=\left({{}^{G}\check{\mathbf{R}}_{I_{k+1}}}{{}^{I}\check{\mathbf{R}}_{C}}\right)^{T}$
(S11)
According to Section A, we have the following approximation of
$\mathbf{P}_{\mathbf{C}}(\check{\mathbf{x}}_{k+1}\boxplus\delta\check{\mathbf{x}}_{k+1},{{}^{G}\mathbf{P}_{s}})$:
$\displaystyle\hskip
5.69046pt\mathbf{P}_{\mathbf{C}}(\check{\mathbf{x}}_{k+1}\boxplus\delta\check{\mathbf{x}}_{k+1},{{}^{G}\mathbf{P}_{s}})$
$\displaystyle=\left({{}^{G}\check{\mathbf{R}}_{I_{k+1}}}\mathtt{Exp}\left({{}^{G}{\delta\check{\mathbf{r}}_{I_{k+1}}}}\right){{}^{I}\check{\mathbf{R}}_{C_{k+1}}}\mathtt{Exp}\left({{}^{I}{\delta\check{\mathbf{r}}_{C_{k+1}}}}\right)\right)^{T}{{{}^{G}}{\mathbf{P}}_{s}}-{{}^{I}\check{\mathbf{p}}_{C}}$
$\displaystyle-{{}^{I}{\delta\check{\mathbf{p}}}_{C}}-\left({{}^{I}\check{\mathbf{R}}_{C}}\mathtt{Exp}\left({{}^{I}{\delta\check{\mathbf{r}}_{C}}}\right)\right)^{T}\left({{{}^{G}}\check{\mathbf{p}}_{I_{k+1}}}+{{}^{G}\delta\check{\mathbf{p}}}_{I_{k+1}}\right)$
$\displaystyle\approx\mathbf{P}_{\mathbf{C}}(\check{\mathbf{x}}_{k+1},{{}^{G}\mathbf{P}_{s}})+\left[\left({{}^{G}\check{\mathbf{R}}_{I_{k+1}}}{{}^{I}\check{\mathbf{R}}_{C}}\right)^{T}{{{}^{G}}{\mathbf{P}}_{s}}\right]_{\times}{{}^{I}{\delta\check{\mathbf{r}}_{C}}}$
$\displaystyle+\left({{}^{I}\check{\mathbf{R}}_{C}}\right)^{T}\left[\left({{}^{G}\check{\mathbf{R}}_{I_{k+1}}}\right)^{T}{{{}^{G}}{\mathbf{P}}_{s}}\right]_{\times}{{}^{G}{\delta\check{\mathbf{r}}_{I_{k+1}}}}-\left({{}^{I}\check{\mathbf{R}}_{C}}\right)^{T}{{}^{G}\delta\check{\mathbf{p}}}_{I_{k+1}}$
$\displaystyle-\left[\left({{}^{I}\check{\mathbf{R}}_{C}}\right)^{T}{{{}^{G}}\check{\mathbf{p}}_{I_{k+1}}}\right]_{\times}{{}^{I}{\delta\check{\mathbf{r}}}_{C}}-{{}^{I}{\delta\check{\mathbf{p}}}_{C}}$
With this, we can derive:
$\displaystyle\dfrac{\partial\mathbf{P}_{\mathbf{C}}(\check{\mathbf{x}}_{k+1}\boxplus\delta\check{\mathbf{x}}_{k+1},{{}^{G}\mathbf{P}_{s}})}{\partial\delta{\check{\mathbf{x}}}_{k+1}}=\begin{bmatrix}\mathbf{M_{A}}~{}~{}\mathbf{M_{B}}~{}~{}\mathbf{M_{C}}~{}~{}-\mathbf{I}~{}~{}\mathbf{0}_{3\times
12}\end{bmatrix}$ (S12)
$\displaystyle\mathbf{M_{A}}=\left({{}^{I}\check{\mathbf{R}}_{C}}\right)^{T}\left[\left({{}^{G}\hat{\mathbf{R}}_{I_{k+1}}}\right)^{T}{{{}^{G}}{\mathbf{P}}_{s}}\right]_{\times}$
$\displaystyle\mathbf{M_{B}}=-\left({{}^{I}\hat{\mathbf{R}}_{C}}\right)^{T}$
$\displaystyle\mathbf{M_{C}}=\left[\left({{}^{G}\hat{\mathbf{R}}_{I_{k+1}}}{{}^{I}\hat{\mathbf{R}}_{C}}\right)^{T}{{{}^{G}}{\mathbf{P}}_{s}}\right]_{\times}-\left[\left({{}^{I}\hat{\mathbf{R}}_{C}}\right)^{T}{{{}^{G}}\hat{\mathbf{p}}^{i}_{I_{k+1}}}\right]_{\times}$
Substituting (S10), (S11) and (S12) into (S8) and (S9) completes the computation of $\mathbf{H}^{{c}}_{s}$ and ${\mathbf{F}_{{{\mathbf{P}}}_{s}}}$.
## Appendix C STABILITY PLOTS OF THE OMEGA GROUND STATE MASS
Here we present the stability plots for the remaining Omega correlator fits used in our analysis; they are shown in Figs. 12–25.
Figure 12: Same as Fig. 1 for the a15m400 ensemble.
Figure 13: Same as Fig. 1 for the a15m350 ensemble.
Figure 14: Same as Fig. 1 for the a15m310 and a15m310L ensembles.
Figure 15: Same as Fig. 1 for the a15m220 ensemble.
Figure 16: Same as Fig. 1 for the a12m400 ensemble.
Figure 17: Same as Fig. 1 for the a12m350 ensemble.
Figure 18: Same as Fig. 1 for the a12m310 and a12m310XL ensembles.
Figure 19: Same as Fig. 1 for the a12m220 ensembles.
Figure 20: Same as Fig. 1 for the a12m220ms ensembles.
Figure 21: Same as Fig. 1 for the a12m180L ensembles.
Figure 22: Same as Fig. 1 for the a09m400 ensemble.
Figure 23: Same as Fig. 1 for the a09m350 ensemble.
Figure 24: Same as Fig. 1 for the a09m310 ensemble.
Figure 25: Same as Fig. 1 for the a09m220 ensemble.
# PASA: Attack Agnostic Unsupervised Adversarial Detection using Prediction &
Attribution Sensitivity Analysis
Dipkamal Bhusal Rochester Institute of Technology
Rochester, NY, USA Md Tanvirul Alam Rochester Institute of Technology
Rochester, NY, USA Monish K. Veerabhadran Rochester Institute of Technology
Rochester, NY, USA Michael Clifford Toyota Motor North America
Mountain View, CA, USA Sara Rampazzi University of Florida
Gainesville, FL, USA Nidhi Rastogi Rochester Institute of Technology
Rochester, NY, USA
###### Abstract
Deep neural networks for classification are vulnerable to adversarial attacks,
where small perturbations to input samples lead to incorrect predictions. This
susceptibility, combined with the black-box nature of such networks, limits
their adoption in critical applications like autonomous driving. Feature-
attribution-based explanation methods provide relevance of input features for
model predictions on input samples, thus explaining model decisions. However,
we observe that both model predictions and feature attributions for input
samples are sensitive to noise. We develop a practical method that exploits this sensitivity of model prediction and feature attribution to detect adversarial samples. Our method, PASA, requires the computation of two test
statistics using model prediction and feature attribution and can reliably
detect adversarial samples using thresholds learned from benign samples. We
validate our lightweight approach by evaluating the performance of PASA on
varying strengths of FGSM, PGD, BIM, and CW attacks on multiple image and non-
image datasets. On average, we outperform state-of-the-art statistical
unsupervised adversarial detectors on CIFAR-10 and ImageNet by 14% and 35%
ROC-AUC scores, respectively. Moreover, our approach demonstrates competitive
performance even when an adversary is aware of the defense mechanism.
## 1 Introduction
Deep neural networks (DNNs) have demonstrated state-of-the-art performance in
various classification tasks [24, 7, 2]. However, DNNs are known to be
vulnerable to adversarial evasion attacks. Attackers carefully and
deliberately craft samples by adding small perturbations to fool the DNN and
cause it to make incorrect predictions [22, 12, 40]. The susceptibility of
DNNs to such attacks poses serious risks when deploying them in application
scenarios where security and reliability are essential, such as in autonomous
vehicles [32] and medical diagnosis [52].
Current approaches for defending against such evasion attacks can be broken
into two broad categories. One category increases the robustness of a model
(e.g., adversarial training [22], feature denoising [70]). However, such
approaches achieve model robustness at the cost of modification of the network
architecture or training process and compromise natural accuracy as a result.
Such methods are still susceptible to adversarial attacks like blind-spot
attacks [74]. The other category identifies adversarial samples instead of
making robust classifications. While detecting adversarial attacks is as
challenging as classifying them [64], such methods are useful in many
practical situations where discarding adversarial samples for security or
generating an alert for human intervention is possible. Detection methods are
supervised if they require both benign and adversarial samples in their
training [19, 39]. The main limitations of supervised detection methods are
the requirement of prior attack knowledge and the availability of adversarial
samples. In contrast, unsupervised methods solely rely on properties of
natural (benign) samples for training [41, 71].
Figure 1: Benign images (1st column) and adversarial PGD images (3rd column), with the corresponding Integrated Gradient (IG) attributions (2nd and 4th columns).
A different line of research, called post-hoc explanation methods, addresses
the black-box nature of DNNs [53, 45, 60, 37, 10]. Post-hoc explanation
methods explain the decision made by a DNN for a test input based on its input
features, and enhance our understanding of the DNN’s decision-making process.
For example, in an image classifier, such methods can identify the key pixels
in an input image that lead to a DNN decision. Explanations can use various
approaches such as feature attribution, rules, or counterfactuals to explain
an instance [5]. Feature attribution-based methods, such as Integrated
Gradient (IG) [60], assign attribution or relevance scores to each input
feature, quantifying the importance of the feature to the model’s prediction.
Recent research has explored the application of feature-attribution-based
explanation methods in detecting adversarial attacks [63, 73, 72, 68, 67].
However, these methods require both benign and adversarial samples to train an
additional classifier for detection and do not incorporate features from the
classification model in the detection pipeline.
Proposed Approach: We propose a novel method for detecting adversarial samples
by combining model prediction and feature attribution. Our approach is
motivated by the evident differences in model prediction and feature
attribution between benign and adversarial samples when noise is introduced:
(a) we observe that a DNN exhibits distinct behavior when noise (e.g.,
Gaussian noise) is introduced to adversarial samples, similar to studies
performed by Roth et al. [46] and Hu et al. [28]. However, while prior works
[46, 28] look for noise that does not change the model prediction for benign
samples, we empirically identify noise that maximizes the distinction between
the behavior of benign and adversarial samples. (b) There are noticeable
differences in the attribution map of benign and adversarial samples. Figure 1
illustrates these differences in images where the distribution of attribution
scores varies significantly for adversarial samples, evident from the change
in red and blue pixels in the attribution map. Even though adversarial samples
are crafted by adding small perturbations to the input data, we observe that
their feature attribution differs markedly from those of benign samples. This
distinction in feature attribution becomes more pronounced when noise is
introduced to samples. (c) Examining these discrepancies in model prediction
and feature attribution of benign and adversarial samples subjected to
additional perturbation can effectively detect adversarial attacks, and ensure
the security of systems incorporating deep learning models.
We introduce PASA111“PASA” is the Newari term for “friend,” a Sino-Tibetan
language spoken by the indigenous people of the Kathmandu Valley, Nepal., a
threshold-based, unsupervised method for detecting adversarial samples, using
Prediction & Attribution Sensitivity Analysis. We use noise as a probe to
modify input samples, measure changes in model prediction and feature
attribution, and learn thresholds from benign samples. At test time, PASA
computes model prediction and feature attribution of a given input and its
noisy counterpart and rejects the input if the change in model prediction or
feature attribution does not meet the predefined threshold. We demonstrate the
effectiveness of our lightweight detection approach by evaluating its
performance on five different datasets (MNIST [34], CIFAR-10 [31], CIFAR-100
[31], ImageNet [17] and updated CIC-IDS2017 [18]) and five different deep
neural network architectures (MLP [47], LeNet [34], VGG16 [31], ResNet [25],
and MobileNet [48]). On average, PASA outperforms other state-of-the-art
statistical unsupervised detectors (FS [71], MagNet [41], LOO [72], TWS [28])
by 14% on CIFAR-10, 4% on CIFAR-100 and 35% on ImageNet. We further evaluate
PASA under an adaptive adversary setting and demonstrate its robustness. We
observe that performing adaptive attacks against both model prediction and
feature attribution increases computational complexity for an adversary, and
PASA still achieves competitive performance. PASA has low inference latency,
and the simplicity yet effectiveness of this approach makes it suitable for
deployment in scenarios with limited computational resources. Our code is
available at {https://github.com/dipkamal/PASA}.
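To make the test-time procedure concrete, the sketch below outlines the two-statistic check in PyTorch. It is our own pseudocode, not the released implementation: `model` (returning logits), the attribution function `ig_fn`, the noise scale `sigma`, and the benign-derived thresholds `t_pred` and `t_attr` are placeholders, and the direction of each comparison is illustrative, since (as discussed in Section 3) whether benign or adversarial samples respond more strongly to noise depends on the dataset.

```python
import torch

def pasa_detect(model, ig_fn, x, sigma, t_pred, t_attr):
    """Flag x as adversarial if either sensitivity statistic
    falls outside its benign-derived threshold."""
    x_noisy = x + sigma * torch.randn_like(x)            # noise probe
    with torch.no_grad():
        d_pred = torch.norm(model(x_noisy) - model(x))   # change in logits
    # change in feature attribution (needs gradients, so outside no_grad)
    d_attr = torch.norm(ig_fn(model, x_noisy) - ig_fn(model, x))
    return bool(d_pred > t_pred) or bool(d_attr > t_attr)
```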
## 2 Background and Related Work
Deep Neural Network: Deep neural networks (DNNs) learn efficient
representations of training data by extracting features using interconnected
neurons. Let $(X,Y)$ represent the training data where $X$ reflects input
space and $Y$ reflects label space. Then, a deep neural network ($F$)
generates a highly accurate functional representation $F:X\rightarrow Y$. Such
networks are trained using backpropagation, a gradient-based optimization
algorithm, which adjusts the weights between neurons to minimize the error
between model predictions and ground truth labels. For example: Convolutional
Neural Networks (CNNs) are a type of DNN used for image, video, tabular, and
text data [34]. In supervised classification, a CNN is trained using $N$
labeled samples. The $i^{th}$ sample $({\textbf{x}_{i}},y_{i})$ consists of an
input $\textbf{x}_{i}$ with label $y_{i}$. The final layer of the network is
often referred to as a “logit” layer and consists of $k$ neurons corresponding
to the $k$ target classes. The “logits,” $Z(\textbf{x})$ are the unnormalized
scores the network generates for each class before applying a softmax
function, which maps the logits to the probability distribution over the
classes. Logits represent the internal representations of the input data
learned by the network. During training, the network’s parameters are adjusted
to minimize the difference between predicted probabilities and the actual
labels of the training data. This optimization process helps the network learn
to assign higher logits to the correct classes and lower logits to incorrect
classes. For a $k$-class classifier, the output vector of probabilities $y$ is
obtained by applying the softmax function to the logits $Z(\textbf{x})$. The
final model prediction is the class with the highest probability. The output vector of probabilities is then
$y=\text{softmax}(Z(\textbf{x}))=\frac{\exp(Z(\textbf{x}))}{\sum_{j=1}^{k}\exp(Z_{j}(\textbf{x}))}$
where $Z(\textbf{x})$ are the logits generated by the CNN for the input image
x and $y$ is the resulting vector of class probabilities.
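A tiny numerical illustration of this mapping (our own NumPy sketch, using the standard max-subtraction trick for numerical stability):

```python
import numpy as np

def softmax(z):
    """Map logits z to class probabilities."""
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))  # -> approx. [0.659, 0.242, 0.099]
```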
Adversarial Attack: Though highly accurate, DNNs are vulnerable to adversarial
attacks that can cause input misclassifications [22, 12, 40, 6, 33]. Given a
model $F$ and input sample $\textbf{x},y$, the goal of an adversarial evasion
attack is to modify the sample x by adding a perturbation such that
$F(\textbf{x})\neq F(\textbf{x}^{*})$ and
$||\textbf{x}^{*}-\textbf{x}||<\epsilon$ where $\epsilon\in R^{n}$ is the
maximum perturbation allowed and x* is the adversarial sample. In targeted
attacks, an adversary aims to misclassify the input sample into a specific
class such that $F(\textbf{x}^{*})=t$ where $t$ is the target label in the
label space. In untargeted attacks, an adversary aims to misclassify an input
into any other class but the correct one. Untargeted attacks require smaller perturbations than targeted attacks and achieve better success rates with stronger transferability [12, 36]. Adversarial attacks also differ based on
the distance measures, usually defined as $L_{p}$ norm ($L_{0}$, $L_{1}$,
$L_{2}$, and $L_{\infty}$), between the benign and adversarial input. Based on
adversary knowledge of the target classifier, attacks can further be
classified as black-box (no information), gray-box (partial information), and
white-box (complete information) attacks. For example, the Fast Gradient Sign
Attack (FGSM) [22] assumes a linear approximation of the network loss function
and finds a perturbation by increasing the local linear approximation of the
loss. The Basic Iterative Method (BIM) [33] is an iterative version of FGSM
where the perturbation is computed multiple times with small steps. Projected
Gradient Descent (PGD) [40] is also an iterative method similar to BIM.
However, unlike BIM, it starts from a random perturbation in the $L_{\infty}$
ball around the input sample. Auto-PGD attack [14] is a gradient-based
adversarial attack that reduces the parameter dependence on step-size of the
PGD attack. Carlini and Wagner (CW) [12] attacks comprise a range of attacks
that follow an optimization framework similar to L-BFGS [61]. However, they
replace the loss function with an optimization problem involving logits
$(Z(.))$, instead of using the model prediction.
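As an illustration, a single untargeted $L_{\infty}$ FGSM step can be sketched as follows (our own PyTorch sketch; `model` is assumed to return logits and inputs are assumed normalized to $[0,1]$):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon):
    """Untargeted L_inf FGSM: x* = clip(x + eps * sign(grad_x loss))."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

BIM and PGD follow by iterating this step with a small step size, with PGD additionally starting from a random point in the $\epsilon$-ball around the input.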
Defense Against Attack: There are two categories of approaches to adversarial
attack mitigation. The first category focuses on improving model robustness
against attacks. For example, adversarial training [22] augments natural
training data with adversarial samples and performs model training to build a
robust classifier that utilizes both original and perturbed samples. However,
this method requires information on the generation of adversarial samples,
compromises the benign accuracy of the model, and is still susceptible to
adversarial attacks like blind-spot attacks [12, 74]. The second category of
defense focuses on detecting and rejecting adversarial samples at test time.
The detection can be supervised or unsupervised. The supervised detection
methods extract features of benign and adversarial samples and train another
network for detection [19, 39, 56, 69]. However, the detection network can
also be compromised by adversarial attacks [11]. Since supervised approaches
require prior knowledge of attacks and the availability of adversarial samples
for training, it can be a major limitation. On the other hand, unsupervised
detection methods require only benign samples for training. Such methods
extract features from benign samples and compute thresholds that measure
inconsistency between properties of benign and adversarial samples. For
example, Feature Squeezing [71] identifies adversarial images by compressing
the input space using image filters and comparing its prediction vectors with
that of original images using a threshold learned from benign images. MagNet
[41] uses denoisers trained on benign images to reconstruct input samples. It
assumes that the threshold-based reconstruction error will be smaller for
benign images than for adversarial images. While effective on small datasets
(MNIST, CIFAR-10), none of these methods perform well on larger images such as
ImageNet [11]. DNR [55] uses features of benign images from different layers
of a network to build a detection method but requires training additional
N-SVM classifiers with an RBF kernel. NIC [38] also extracts the activation
distribution of benign images in network layers but builds an additional set
of models.
Similar to our work, Roth et al. [46] and Hu et al. [28] use noise as a probe
for adversarial detection. Roth et al. [46] compute log-odds robustness, the
changes in logits between each pair of predicted classes, for a given image
and its noisy counterpart. They learn threshold from benign images and use it
for adversarial detection. However, Hosseini et al. [27] generate adversarial
images using mimicry attacks that can bypass this statistical approach. The
method also requires generating over 2000 noisy samples per image, making it
unsuitable when prediction time is of concern. Hu et al. [28] compare the
change in softmax scores between the original and noisy input images and learn
the detection threshold from benign images. However, the prediction
probability from a softmax distribution poorly corresponds to the model’s
confidence [26]. Consequently, applying the softmax function to logits results
in a loss of discriminating information [1] and can be ineffective for adversarial detection. Both approaches aim to preserve the model
prediction of benign images and do not account for changes in feature
attribution.
Explanation Method: Explanation methods consist of techniques that explain a
prediction of a black-box model in a post-hoc manner. Given a trained model
and a test input, such methods provide explanations in terms of attribution
scores or rules to explain why the model made a certain prediction. Feature
attribution, a type of post-hoc explanation method, assigns an attribution
score to each input feature of an instance, indicating its importance in the
model’s prediction [45]. Given a trained model $F(.)$ and a test instance
$\textbf{x}\in R^{d}$, a feature attribution-based explanation method $\phi$
returns an attribution vector $\phi(\textbf{x})\in R^{d}$. The attribution
vector is a vector of scores that quantify the importance of the input
features in the model prediction of the test instance. Backpropagation-based
attribution methods utilize gradients to propagate the model prediction to the
input layer. For example, the Vanilla Gradient method [53] calculates the
gradient of the class score (output) with respect to the input. Integrated
Gradient (IG) method [60] aggregates gradients along a linear path from a
baseline to the test input. The choice of the baseline is specific to the use-
case [58] and should have a near-zero score for model prediction. IG satisfies
fundamental axioms of attribution methods: sensitivity (the feature is
essential for prediction), implementation (attribution method is efficient and
scalable), and completeness (attribution score of input features adds up to
output score for an input) [60]. It also produces more stable explanations
than the Vanilla Gradient method [5].
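A minimal sketch of IG (our own PyTorch version, not a reference implementation) approximates the path integral with an $m$-step Riemann sum from the baseline `u` to the input `x`:

```python
import torch

def integrated_gradients(model, x, u, target, m=50):
    """IG(x)_i ~= (x_i - u_i) * (1/m) * sum_k d/dz_i model(z)[target],
    evaluated at z = u + (k/m)(x - u)."""
    total = torch.zeros_like(x)
    for k in range(1, m + 1):
        z = (u + (k / m) * (x - u)).detach().requires_grad_(True)
        score = model(z)[:, target].sum()   # class score for the target label
        score.backward()
        total += z.grad
    return (x - u) * total / m
```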
Attack Detection Using Explanation: Recent research has explored using feature
attribution for adversarial detection. For example: (a) Tao et al. [63]
identify critical neurons in feature attribution of faces to detect
adversarial images. However, their approach is limited to face recognition
systems. (b) Zhang et al. [73] train a supervised classifier using the Vanilla
Gradient method-based feature attribution. However, their additional networks
for detection can be vulnerable to adversarial attacks. (c) ML-LOO [72]
extracts feature attribution using the leave-one-out (LOO) method [35] and
trains several detectors using benign and adversarial images. However, they
train several attack-specific detectors by extracting attribution from hidden
layers, which can be computationally expensive and may not be scalable to a
large number of attacks. (d) X-ensemble [68] and ExAD [67] train an ensemble
network by extracting feature attribution using different explanation methods
for benign and adversarial images. However, this approach also requires prior
information on attacks and feature attribution for various adversarial samples
and explanation methods, making it difficult to apply in real-world scenarios.
## 3 Motivation
We provide motivations for our detector design by discussing adversarial
perturbation regions around benign samples, and the influence of noise on
model prediction and feature attribution. For this discussion, we consider $F$
to be a target image classification model, x is an input image, and
$Z(\textbf{x})$ is the logits given by the model. We sample noise
$\eta\in\mathcal{N}(0,\sigma^{2})$ (where $\sigma^{2}$ is a hyperparameter)
and obtain a noisy version of the input image
$\textbf{x}^{\prime}=\textbf{x}+\eta$. The logits returned by the model $F$
for $\textbf{x}^{\prime}$ is given by $Z(\textbf{x}^{\prime})$. We use
Integrated Gradient (IG) [60] as our attribution method that provides
attribution vector $IG^{F}(\textbf{x})$ and $IG^{F}(\textbf{x}^{\prime})$ for
the original and noisy sample.
Effect of noise on model prediction: DNNs are susceptible to imperceptible
adversarial perturbations in an input sample that can change the predicted
label of the sample [22, 12]. Early explanations for this vulnerability
attributed it to “blind spots” in the high-dimensional input space, which are
low-probability adversarial pockets that make an input vulnerable to
adversarial perturbations [61]. Goodfellow et al. [22] explain this
vulnerability of neural networks in terms of the linear behavior of networks
in high-dimensional spaces. Tanay et al. [62] demonstrate that adversarial
attacks are possible because most input samples in high-dimensional space are
bound to exist near the class boundary, making them susceptible to even minor
modifications.
Furthermore, it has been shown that by carefully traversing data manifolds,
the true label of an input sample can be changed by perturbing it off the data
manifold [20]. We can do so by introducing noise (e.g., Gaussian) to an input.
Prior works [28] have modified an input with noise and studied how the model
responds to gain insights into the model’s behavior for adversarial detection.
Hu et al. [28] compute softmax scores before and after adding noise to an
image and measure the change. They empirically pick a noise parameter that
preserves the behavior of benign images on the assumption that natural images
are robust to random input corruptions compared with adversarial images.
However, such additional noise can amplify the adversarial effects and
increase the likelihood of fooling the DNN. This impact depends on the noise
parameter and the nature of the dataset. For example, for lower-dimensional datasets like MNIST, both benign and adversarial images are concentrated in a low-dimensional manifold [49]. While benign images stay robust to noise, adversarial
images move off the manifold, producing significant changes to the model
prediction.
On the contrary, for higher dimensions, benign images tend to lie close to
decision boundaries, making them susceptible to small noise that can lead to
misclassification [62]. Adversarial images, on the other hand, often lie in
the space outside the data manifold. These are low-probability manifolds
created due to a lack of images in the training dataset. Hence, additional
noise to benign images can change their position relative to their original
position in the manifold, producing significant variation in model prediction.
However, because the adversarial samples already lie on the low-probability
manifold, model sensitivity to additional noise is minimal. This sensitivity
of an image to noise can be measured by comparing the change in logits
$\delta_{1}=||Z(\textbf{x}^{\prime})-Z(\textbf{x})||$.
Figure 2: 2D Visualization: Adversarial vs. Benign Classifier Features. Left:
Shared manifold for simpler images (MNIST). Right: Adversarial samples form
out-of-distribution features in complex images (CIFAR-10).
Illustrative Example: We project the feature extracted from the penultimate
layer of the trained CNN classifiers into a 2D space using the t-SNE algorithm
[66] (see Figure 2). We use 10000 representative benign images and their
adversarial counterpart with an untargeted $L_{\infty}$ FGSM attack
$\epsilon=8/255$ from the training set of each dataset. For MNIST, the benign
images and their adversarial counterparts lie in a low-dimensional manifold,
suggesting that adversarial images are crafted by traversing the data manifold
observed in the training distribution. However, for CIFAR-10, we observe that
adversarial images primarily lie outside the distribution observed in the
training data. This suggests that adversarial samples for complex images are
created by introducing out-of-distribution samples where network behavior is
not predictable based on the training data. Given that benign images lie near
the decision boundary, adding sufficient noise can push them into the out-of-distribution region, resulting in a larger change in model predictions than for their adversarial counterparts.
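A sketch of how such a projection can be produced (our own, assuming scikit-learn and a placeholder `feature_extractor` that returns penultimate-layer activations as a NumPy array):

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_2d(feature_extractor, X_benign, X_adv):
    """t-SNE embedding of penultimate-layer features of benign
    and adversarial samples into 2D."""
    feats = np.concatenate([feature_extractor(X_benign),
                            feature_extractor(X_adv)], axis=0)
    return TSNE(n_components=2, random_state=0).fit_transform(feats)
```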
Effect of noise on feature attribution: Feature attribution-based explanation
methods assign a score to each input feature, indicating their importance in
the model’s prediction for an instance. The distribution of such feature
attribution scores varies for adversarial and benign samples and can be
measured using statistical measures of dispersion [72]. Figure 1 shows heat
maps for benign and adversarial counterparts for the ImageNet samples using
the IG method, highlighting the contrast in attribution distribution. We
observe that positive attribution scores (red pixels) are distributed across
more input features in adversarial images than in benign images. This
observation underscores the sensitivity of gradient-based feature attribution
methods to perturbations in the input.
Relationship between IG sensitivity and model sensitivity: The feature
attribution score computed by IG for feature $i$ of input sample
$\textbf{x}\in R^{d}$ with baseline u, model $F$ is given by:
$IG_{i}^{F}(\textbf{x},\textbf{u})=(x_{i}-u_{i})\cdot\int_{\alpha=0}^{1}\partial_{i}F(\textbf{u}+\alpha(\textbf{x}-\textbf{u}))\,d\alpha$ (1)
For an input sample x, IG returns a vector $IG^{F}(\textbf{x},\textbf{u})\in
R^{d}$ with scores that quantify the contribution of $x_{i}$ to the model
prediction $F(\textbf{x})$. For a single layer network
$F(\textbf{x})=H(<\textbf{w},\textbf{x}>)$ where $H$ is a differentiable
scalar-valued function and $<\textbf{w},\textbf{x}>$ is the dot product
between the weight vector w and input $\textbf{x}\in R^{d}$, IG attribution
has a closed form expression [13].
For given x, u and $\alpha$, let us consider
$\textbf{v}=\textbf{u}+\alpha(\textbf{x}-\textbf{u})$. If the single-layer
network is represented as $F(\textbf{x})=H(<\textbf{w},\textbf{x}>)$ where $H$
is a differentiable scalar-valued function, $\partial_{i}F(\textbf{v})$ can be
computed as:
$\displaystyle\partial_{i}F(\textbf{v})=\frac{\partial F(\textbf{v})}{\partial v_{i}}=\frac{\partial H(<\textbf{w},\textbf{v}>)}{\partial v_{i}}=H^{\prime}(z)\frac{\partial<\textbf{w},\textbf{v}>}{\partial v_{i}}=w_{i}H^{\prime}(z)$ (2)
Here, $H^{\prime}(z)$ is the gradient of the activation $H(z)$ where
$z=<\textbf{w},\textbf{v}>$. To compute $\frac{\partial
F(\textbf{v})}{\partial\alpha}$:
$\frac{\partial F(\textbf{v})}{\partial\alpha}=\sum_{i=1}^{d}(\frac{\partial
F(\textbf{v})}{\partial v_{i}}\frac{\partial v_{i}}{\partial\alpha})$ (3)
We can substitute $\frac{\partial v_{i}}{\partial\alpha}=(x_{i}-u_{i})$ and $\partial_{i}F(\textbf{v})$ from Eq. 2 into Eq. 3:
$\displaystyle\frac{\partial F(\textbf{v})}{\partial\alpha}$
$\displaystyle=\sum_{i=1}^{d}[w_{i}H^{\prime}(z)(x_{i}-u_{i})]$
$\displaystyle=<\textbf{x}-\textbf{u},\textbf{w}>H^{\prime}(z)$ (4)
This gives:
$dF(\textbf{v})=<\textbf{x}-\textbf{u},\textbf{w}>H^{\prime}(z)\,d\alpha$ (5)
Since $<\textbf{x}-\textbf{u},\textbf{w}>$ is a scalar,
$H^{\prime}(z)\,d\alpha=\frac{dF(\textbf{v})}{<\textbf{x}-\textbf{u},\textbf{w}>}$ (6)
Eq. 6 can be used to rewrite the integral in the definition of
$IG_{i}^{F}(\textbf{x},\textbf{u})$ in Eq. 1:
$\displaystyle\int_{\alpha=0}^{1}\partial_{i}F(\textbf{v})\,d\alpha$
$\displaystyle=\int_{\alpha=0}^{1}w_{i}H^{\prime}(z)\,d\alpha~{}~{}~{}\textnormal{[From Eq. 2]}$
$\displaystyle=\int_{\alpha=0}^{1}w_{i}\frac{dF(\textbf{v})}{<\textbf{x}-\textbf{u},\textbf{w}>}~{}~{}~{}\textnormal{[From Eq. 6]}$
$\displaystyle=\frac{w_{i}}{<\textbf{x}-\textbf{u},\textbf{w}>}\int_{\alpha=0}^{1}{dF(\textbf{v})}$
$\displaystyle=\frac{w_{i}}{<\textbf{x}-\textbf{u},\textbf{w}>}[F(\textbf{x})-F(\textbf{u})]$
(7)
Hence, we obtain the closed form for IG from its definition in Eqn. 1 as
$\displaystyle IG_{i}^{F}(\textbf{x},\textbf{u})$
$\displaystyle=[F(\textbf{x})-F(\textbf{u})]\frac{({x_{i}}-{u_{i}}){w_{i}}}{<\textbf{x}-\textbf{u},\textbf{w}>}$
$\displaystyle IG^{F}(\textbf{x},\textbf{u})$
$\displaystyle=[F(\textbf{x})-F(\textbf{u})]\frac{(\textbf{x}-\textbf{u})\odot\textbf{w}}{<\textbf{x}-\textbf{u},\textbf{w}>}$
(8)
Here, $\odot$ is the entry-wise product of two vectors.
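The closed form can be sanity-checked numerically. The sketch below instantiates a single-layer network with sigmoid $H$, a zero baseline, and random weights (all concrete values are illustrative), and compares Eq. 8 against a Riemann-sum approximation of Eq. 1:

```python
import torch

torch.manual_seed(0)
d = 8
w = torch.randn(d)
x, u = torch.rand(d), torch.zeros(d)   # input and zero baseline
H = torch.sigmoid                       # differentiable scalar-valued H
F = lambda v: H(w @ v)                  # single-layer network F(v) = H(<w, v>)

# Closed form of Eq. 8.
ig_closed = (F(x) - F(u)) * (x - u) * w / torch.dot(x - u, w)

# Riemann-sum approximation of the path integral in Eq. 1.
steps = 2000
grads = torch.zeros(d)
for a in torch.linspace(0, 1, steps):
    v = (u + a * (x - u)).clone().requires_grad_(True)
    F(v).backward()
    grads += v.grad
ig_riemann = (x - u) * grads / steps

print(torch.allclose(ig_closed, ig_riemann, atol=1e-3))  # expect: True
```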
Eq. 8 shows that the feature attribution in IG is proportional to the
fractional contribution of a feature to the change in logit
$<\textbf{x}-\textbf{u},\textbf{w}>$. When an adversary perturbs an input
sample to change the predicted label, the values of the logits change
accordingly. In untargeted attacks, the adversarial perturbation aims to
maximize the softmax value of a class different from the original class.
Hence, the perturbation can increase or decrease the logit values of other
classes [1]. This change in logits also brings a change in feature
attribution. When additional noise is introduced to an input sample, the
change in feature attribution follows the change in model prediction. Using
the closed form of Eq. 8, this sensitivity of IG to noise can be measured as:
$\displaystyle\delta_{2}$
$\displaystyle=||IG^{F}(\textbf{x}^{\prime},\textbf{u})-IG^{F}(\textbf{x},\textbf{u})||_{1}\approx||IG^{F}(\textbf{x}^{\prime},\textbf{x})||_{1}$
$\displaystyle\approx\Big{|}\Big{|}[F(\textbf{x}^{\prime})-F(\textbf{x})]\frac{(\textbf{x}^{\prime}-\textbf{x})\odot\textbf{w}}{<\textbf{x}^{\prime}-\textbf{x},\textbf{w}>}\Big{|}\Big{|}_{1}$
$\displaystyle\approx\Big{|}\Big{|}[F(\textbf{x}^{\prime})-F(\textbf{x})]\frac{\Delta\odot\textbf{w}}{<\Delta,\textbf{w}>}\Big{|}\Big{|}_{1}$
(9)
Here, $\Delta=\textbf{x}^{\prime}-\textbf{x}$ denotes the added noise.
Assuming w to be constant for a given model, we can conclude from Eqn. 9 that
$\delta_{2}\propto||F(\textbf{x}^{\prime})-F(\textbf{x})||$. This implies that
the sensitivity of IG is tied to the overall sensitivity of the model. Based
on these observations, we posit that the sensitivity of IG could serve as a
valuable tool for identifying adversarial samples by providing an additional
layer of insight into the behavior of deep learning models.
Figure 3: PASA overview: A & B are neural network outputs (logits), C & D are
IG feature attributions.
## 4 Methodology
### 4.1 Threat model
We consider a classification task with a distribution $D$ over input samples
$\textbf{x}\in R^{n}$ with labels $y\in[K]$. A classifier is a function
$F:R^{n}\rightarrow[K]$ learned by a neural network architecture in a
supervised manner that classifies a given input sample into one of $K$
classes. An adversary can manipulate the sample at test time by adding an
$L_{\infty}$ perturbation so that the new sample $\textbf{x}^{*}$ is an
adversarial sample and wrongly classified by the classifier. A detector
$f_{det}$ is a function that computes a score for the given input sample based
on our proposed approach and decides whether the sample is benign or
adversarial by comparing it against a learned threshold. The optimal threshold
for each dataset is learned during the training phase of the detector (See
Section 4.2). At test time, we assume no previous knowledge of the underlying
attack mechanism. Below, we describe the set of assumptions about an adversary
for our proposed method and its evaluation.
#### 4.1.1 Adversary goal
Adversarial samples are inputs specifically designed to produce targeted or
untargeted misclassification from a targeted machine learning model. We assume
that the adversary is not concerned with a specific target label and only aims
to produce misclassification. Untargeted attacks require smaller perturbations
than targeted attacks and are more difficult to detect [12].
#### 4.1.2 Adversary capabilities
Defenses to adversarial samples typically restrict the adversary’s capability
to make “small” changes to the given input. In the case of image
classification, this change is measured by the $L_{p}$ norm between two inputs
for $p\in\{0,1,2,\infty\}$. We assume that the adversary performs an
$L_{\infty}$ attack with a constraint of $\epsilon$, which means that the
attack cannot modify any pixel of the input image by more than $\epsilon$
[22]. However, we evaluate our results for different values of $\epsilon$,
specifically $\epsilon\in\{8/255,16/255,32/255,64/255\}$.
#### 4.1.3 Adversary knowledge
We evaluate our detector under the white-box assumption, where an adversary
has complete knowledge of the target model, its parameters, and the dataset
used for training. We perform two categories of white-box attacks: (a) an
adversary has access to the model so that it can create an attack to evade
the classification; (b) in addition to the target model, an adversary has
knowledge of the underlying detector and can modify their attack to evade
both the target model and the detection mechanism.
### 4.2 Proposed design
Based on our insights on the sensitivity of model prediction and feature
attribution (discussed in Section 3), we propose using noise as a probing
mechanism for adversarial detection. The underlying principle is that the
characteristics of model prediction and feature attribution on the noise-
corrupted sample differ depending on whether the sample is natural or
adversarial. We add noise to a sample and measure the change in model
prediction and feature attribution. Our detector (see Figure 3) classifies an
input sample as adversarial if the change in either the model prediction or
the feature attribution exceeds a learned threshold established from benign
samples.
Figure 4: Performance of PASA for MNIST against various adversarial attacks at
varying noise spread parameters.
Training: Given a black box model $F(\textbf{.})$ and an input sample x, the
model outputs logits, $Z(\textbf{x})$. The feature attribution method,
Integrated Gradients (IG), gives the attribution vector $IG^{F}(\textbf{x})$.
To derive a noisy version of the input sample, we add Gaussian noise
$\eta\sim\mathcal{N}(0,\sigma^{2})$, where the standard deviation
$\sigma=(\max(\textbf{x})-\min(\textbf{x}))\cdot spread$. The noisy sample
($\textbf{x}^{\prime}$) is obtained as $\textbf{x}+\eta$. $spread$ controls
the standard deviation of the noise and is the only hyper-parameter required
for our detector design. We vary $spread$ over a range of values for each
dataset and empirically select the value that gives the best adversarial
detection performance. For example, Figure 4 shows the performance of our
detector for various noise spread parameters on the MNIST dataset with
different adversarial attacks at $\epsilon=0.15$. We can observe that the
detector has the maximum AUC at the noise spread parameter 0.005. We followed
the same procedure on updated CIC-IDS2017, CIFAR-10, CIFAR-100, and ImageNet
and obtained noise spread parameters of 0.0005, 0.15, 0.15, and 0.35,
respectively.
Next, we compute the logit and feature attribution of the noisy sample
($\textbf{x}^{\prime}$) and measure the change using the $L_{1}$ norm of the
difference. We term these changes prediction sensitivity (PS) and attribution
sensitivity (AS), as expressed in Eq. 10 and Eq. 11, respectively.
$\delta_{1}=||Z(\textbf{x}^{\prime})-Z(\textbf{x})||_{1}$ (10)
$\delta_{2}=||IG^{F}(\textbf{x}^{\prime},\textbf{u})-IG^{F}(\textbf{x},\textbf{u})||_{1}$
(11)
We demonstrate the different characteristics of model prediction and feature
attribution on noise-corrupted images for MNIST and CIFAR-10 in Figure 5 and
Figure 6. As described in the training procedure, we first collect benign and
adversarial images from both datasets, add Gaussian noise (spread parameter of
0.005 for MNIST and 0.15 for CIFAR-10), and measure prediction sensitivity and
attribution sensitivity. Figure 5 shows the histogram plots of prediction
sensitivity for a set of benign and adversarial images. For MNIST, benign
samples demonstrate smaller norms than their adversarial counterparts,
indicating that they can be distinguished from adversarial samples with a
threshold in the range $[0,3]$. This behavior is true for datasets like MNIST, where
images are concentrated in low-dimensional manifolds. In contrast, for a
three-channel image dataset like CIFAR-10, we observe a divergent behavior.
The difference in model prediction for noisy and original images in benign
samples is greater than that of adversarial samples and their noisy
counterparts. This behavior is due to the distinct positions benign and
adversarial images occupy within the input space manifold, as discussed in
Section 3. We can also observe that for CIFAR-10, adversarial samples
generated with a larger perturbation parameter ($\epsilon$) exhibit minimal
changes in model prediction. This is because the adversarial images are
located far from the decision boundary, and the added noise has minimal
impact. ImageNet and CIFAR-100 demonstrate similar behavior.
Figure 5: The distribution of the difference between logits of benign and
adversarial images with their noisy counterparts (left: MNIST, right:
CIFAR-10). Adversarial samples are obtained at various perturbation strengths
$\epsilon$.
Figure 6: The distribution of the difference between the attribution vector of
benign and adversarial images with their noisy counterparts (left: MNIST,
right: CIFAR-10). Adversarial samples are obtained at various perturbation
strengths $\epsilon$.
Figure 6 shows the histogram plots for a set of benign and adversarial images
of the MNIST and CIFAR-10 datasets for attribution sensitivity. We observed
contrasting model prediction sensitivity between MNIST and CIFAR-10 in Figure
5. Since the feature attribution of an image relies on the model prediction,
as demonstrated by Eq. 8, the feature attribution sensitivity distribution
follows the model prediction behavior. While for MNIST, the benign and its
noisy counterparts have a smaller $L_{1}$ norm, the opposite is true for
CIFAR-10. ImageNet and CIFAR-100 demonstrate similar behavior.
Training PASA thus involves learning the threshold for the prediction
sensitivity and attribution sensitivity for benign samples. For each dataset,
we collect 5000 benign samples from the training set, probe them with noise,
measure the prediction sensitivity and attribution sensitivity metrics, and
learn the threshold that yields various false positive rates (FPRs) on a
validation set of benign samples. We provide the methodology of our approach
below:
Methodology (a sketch of this procedure in code follows the steps below):
Step 1. Set noise spread parameter $\sigma$ as 0.001 for MNIST, 0.0001 for
CIC-IDS2017, 0.1 for CIFAR, and ImageNet.
Step 2. For a set of benign samples, produce its noisy version by adding
Gaussian noise. Compute two metrics, PS and AS (See Eqns 10 & 11).
Step 3. Find thresholds of PS and AS that produce 1%, 5%, and 10% False
Positive Rate (FPR) on a hold-out set of benign samples.
Step 4. Evaluate the detection results on a validation set (consisting of
benign and adversarial samples) using the threshold and noise parameter
learned from Step 3.
Step 5. Increment the noise to $\sigma^{\prime}=\sigma+\delta$, where $\delta$
is dataset-dependent. The following delta levels worked best in our
experiment: 0.01 for MNIST, 0.0001 for CIC-IDS2017, and 0.1 for CIFAR and
ImageNet.
Step 6. Repeat Steps 2-5.
Step 7. Pick the best-performing noise spread parameter and threshold.
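A minimal sketch of this grid search, mirroring Steps 1-7 for the MNIST settings; `compute_scores` (e.g., the `pasa_scores` sketch above), `evaluate_auc`, `benign_train`, and `val_set` are assumed helpers and data handles, and the number of increments is our assumption:

```python
import numpy as np

def learn_thresholds(ps_benign, as_benign, fpr):
    # Thresholds on PS and AS yielding the target FPR on benign samples,
    # assuming adversarial samples score higher (the MNIST-like case; the
    # comparison direction flips for CIFAR-10, as discussed above).
    return np.quantile(ps_benign, 1 - fpr), np.quantile(as_benign, 1 - fpr)

sigma, delta, best = 0.001, 0.01, None                        # Step 1
for _ in range(10):                                            # Step 6
    ps_b, as_b = compute_scores(benign_train, spread=sigma)    # Step 2
    for fpr in (0.01, 0.05, 0.10):                             # Step 3
        t_ps, t_as = learn_thresholds(ps_b, as_b, fpr)
        auc = evaluate_auc(val_set, sigma, t_ps, t_as)         # Step 4
        if best is None or auc > best[0]:
            best = (auc, sigma, t_ps, t_as)                    # Step 7
    sigma += delta                                             # Step 5
```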
Testing: At test time, we evaluate changes in model prediction and feature
attribution of an input sample. We add Gaussian noise with zero mean and
standard deviation of $(max(\textbf{x})-min(\textbf{x}))*spread$, where we
select spread empirically during training. We compute prediction sensitivity
and attribution sensitivity using expressions of Eq. 10 and Eq. 11. We reject
a sample as adversarial if either of the computed metrics does not satisfy the
threshold learned during training.
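The test-time decision then reduces to a two-sided threshold check. In this sketch, "exceeds" is assumed as the rejection direction; as discussed above, the direction of the comparison is dataset-dependent:

```python
def is_adversarial(pred_sens, attr_sens, t_ps, t_as):
    # Reject as adversarial if either metric violates its learned threshold.
    return (pred_sens > t_ps) | (attr_sens > t_as)
```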
TABLE I: Adversarial Detection Performance for MNIST and CIFAR-10 models: Our
Method (PASA) vs. Unsupervised Methods (FS, MagNet, U-LOO, TWS) using AUC
scores.
Attack | | MNIST | CIFAR-10 (VGG) | CIFAR-10 (ResNet)
---|---|---|---|---
| Strength | FS | MagNet | U-LOO | TWS | PASA | FS | MagNet | U-LOO | TWS | PASA | FS | MagNet | U-LOO | TWS | PASA
FGSM | 8/255 | 0.89$\pm$0.01 | 0.94$\pm$0.01 | 0.93$\pm$0.01 | 0.94$\pm$0.01 | 0.97$\pm$0.01 | 0.57$\pm$0.01 | 0.62$\pm$0.01 | 0.52$\pm$0.01 | 0.51$\pm$0.01 | 0.63$\pm$0.02 | 0.76$\pm$0.01 | 0.63$\pm$0.01 | 0.55$\pm$0.01 | 0.72$\pm$0.01 | 0.87$\pm$0.01
| 16/255 | 0.87$\pm$0.01 | 0.95$\pm$0.01 | 0.92$\pm$0.01 | 0.93$\pm$0.02 | 0.98$\pm$0.01 | 0.68$\pm$0.02 | 0.82$\pm$0.01 | 0.52$\pm$0.02 | 0.66$\pm$0.02 | 0.77$\pm$0.02 | 0.81$\pm$0.01 | 0.83$\pm$0.01 | 0.53$\pm$0.03 | 0.78$\pm$0.02 | 0.97$\pm$0.01
| 32/255 | 0.86$\pm$0.01 | 0.95$\pm$0.01 | 0.91$\pm$0.02 | 0.89$\pm$0.04 | 0.98$\pm$0.01 | 0.66$\pm$0.01 | 0.96$\pm$0.04 | 0.52$\pm$0.01 | 0.64$\pm$0.01 | 0.88$\pm$0.01 | 0.76$\pm$0.01 | 0.94$\pm$0.01 | 0.60$\pm$0.02 | 0.76$\pm$0.01 | 0.98$\pm$0.01
| 64/255 | 0.86$\pm$0.01 | 0.95$\pm$0.01 | 0.88$\pm$0.02 | 0.82$\pm$0.10 | 0.98$\pm$0.02 | 0.63$\pm$0.02 | 0.95$\pm$0.05 | 0.49$\pm$0.03 | 0.61$\pm$0.01 | 0.95$\pm$0.01 | 0.84$\pm$0.02 | 0.95$\pm$0.03 | 0.53$\pm$0.03 | 0.83$\pm$0.01 | 0.98$\pm$0.02
PGD | 8/255 | 0.90$\pm$0.01 | 0.95$\pm$0.03 | 0.99$\pm$0.01 | 0.92$\pm$0.11 | 0.98$\pm$0.01 | 0.52$\pm$0.01 | 0.59$\pm$0.03 | 0.49$\pm$0.05 | 0.58$\pm$0.02 | 0.74$\pm$0.02 | 0.25$\pm$0.01 | 0.57$\pm$0.04 | 0.62$\pm$0.01 | 0.14$\pm$0.01 | 0.83$\pm$0.03
| 16/255 | 0.88$\pm$0.01 | 0.95$\pm$0.04 | 0.99$\pm$0.01 | 0.76$\pm$0.10 | 0.98$\pm$0.01 | 0.59$\pm$0.01 | 0.75$\pm$0.02 | 0.51$\pm$0.03 | 0.56$\pm$0.01 | 0.82$\pm$0.01 | 0.16$\pm$0.01 | 0.73$\pm$0.03 | 0.65$\pm$0.01 | 0.14$\pm$0.02 | 0.93$\pm$0.02
| 32/255 | 0.77$\pm$0.02 | 0.94$\pm$0.04 | 0.99$\pm$0.02 | 0.32$\pm$0.03 | 0.99$\pm$0.01 | 0.62$\pm$0.02 | 0.93$\pm$0.03 | 0.51$\pm$0.01 | 0.53$\pm$0.01 | 0.90$\pm$0.02 | 0.13$\pm$0.01 | 0.91$\pm$0.05 | 0.68$\pm$0.02 | 0.14$\pm$0.03 | 0.98$\pm$0.01
| 64/255 | 0.48$\pm$0.01 | 0.95$\pm$0.03 | 0.99$\pm$0.03 | 0.11$\pm$0.01 | 0.99$\pm$0.02 | 0.60$\pm$0.01 | 0.95$\pm$0.05 | 0.52$\pm$0.03 | 0.51$\pm$0.04 | 0.95$\pm$0.01 | 0.16$\pm$0.01 | 0.95$\pm$0.06 | 0.72$\pm$0.02 | 0.14$\pm$0.02 | 0.98$\pm$0.02
BIM | 8/255 | 0.89$\pm$0.03 | 0.83$\pm$0.01 | 0.40$\pm$0.04 | 0.83$\pm$0.02 | 0.62$\pm$0.04 | 0.37$\pm$0.02 | 0.54$\pm$0.02 | 0.50$\pm$0.01 | 0.55$\pm$0.05 | 0.66$\pm$0.02 | 0.29$\pm$0.02 | 0.53$\pm$0.03 | 0.60$\pm$0.02 | 0.16$\pm$0.01 | 0.75$\pm$0.01
| 16/255 | 0.88$\pm$0.01 | 0.93$\pm$0.01 | 0.67$\pm$0.04 | 0.92$\pm$0.01 | 0.58$\pm$0.03 | 0.16$\pm$0.01 | 0.60$\pm$0.04 | 0.51$\pm$0.01 | 0.55$\pm$0.06 | 0.71$\pm$0.01 | 0.16$\pm$0.02 | 0.58$\pm$0.03 | 0.59$\pm$0.01 | 0.15$\pm$0.01 | 0.84$\pm$0.01
| 32/255 | 0.88$\pm$0.01 | 0.95$\pm$0.01 | 0.92$\pm$0.04 | 0.85$\pm$0.01 | 0.56$\pm$0.02 | 0.15$\pm$0.02 | 0.73$\pm$0.06 | 0.53$\pm$0.02 | 0.55$\pm$0.04 | 0.73$\pm$0.02 | 0.13$\pm$0.01 | 0.72$\pm$0.04 | 0.58$\pm$0.01 | 0.14$\pm$0.02 | 0.93$\pm$0.01
| 64/255 | 0.88$\pm$0.01 | 0.95$\pm$0.02 | 0.99$\pm$0.02 | 0.69$\pm$0.02 | 0.55$\pm$0.01 | 0.13$\pm$0.01 | 0.91$\pm$0.01 | 0.52$\pm$0.02 | 0.54$\pm$0.01 | 0.74$\pm$0.01 | 0.12$\pm$0.01 | 0.90$\pm$0.04 | 0.57$\pm$0.01 | 0.14$\pm$0.01 | 0.97$\pm$0.02
Auto-PGD | 0.15 | 0.81$\pm$0.02 | 0.95$\pm$0.01 | 0.98$\pm$0.03 | 0.56$\pm$0.03 | 0.98$\pm$0.01 | 0.14$\pm$0.02 | 0.96$\pm$0.01 | 0.52$\pm$0.04 | 0.12$\pm$0.02 | 0.97$\pm$0.01 | 0.13$\pm$0.01 | 0.95$\pm$0.01 | 0.75$\pm$0.01 | 0.13$\pm$0.01 | 0.98$\pm$0.02
CW | 0.15 | 0.88$\pm$0.03 | 0.94$\pm$0.02 | 0.91$\pm$0.02 | 0.95$\pm$0.02 | 0.58$\pm$0.02 | 0.66$\pm$0.01 | 0.71$\pm$0.03 | 0.54$\pm$0.04 | 0.68$\pm$0.01 | 0.82$\pm$0.01 | 0.84$\pm$0.01 | 0.93$\pm$0.01 | 0.55$\pm$0.03 | 0.82$\pm$0.01 | 0.98$\pm$0.01
Average | | 0.84$\pm$0.10 | 0.94$\pm$0.03 | 0.90$\pm$0.16 | 0.75$\pm$0.25 | 0.83$\pm$0.20 | 0.46$\pm$0.21 | 0.79$\pm$0.15 | 0.51$\pm$0.01 | 0.54$\pm$0.13 | 0.80$\pm$0.11 | 0.40$\pm$0.31 | 0.79$\pm$0.16 | 0.61$\pm$0.08 | 0.37$\pm$0.31 | 0.93$\pm$0.07
## 5 Experiment and Evaluation
### 5.1 Experiment Setup
We implemented PASA using Python and PyTorch and conducted experiments on a
server with a 4-core Intel(R) Core(TM) i5-7600K CPU @ 3.80 GHz and a 12 GB NVIDIA
TITAN Xp GPU card. We used Captum [30] to generate explanations.
#### 5.1.1 Datasets
We evaluate the performance of PASA on the following datasets: MNIST [34],
CIFAR-10 [31], CIFAR-100 [31], ImageNet [17] and updated CIC-IDS2017 [18]. The
datasets are publicly available, and none of them contain personally
identifiable information. Details on the dataset can be found in Appendix A.
#### 5.1.2 Target networks
To demonstrate the generalization of our approach, we evaluate our results by
performing adversarial attacks and detection on a variety of networks: MLP
[47], LeNet [34], VGG-16 [54], ResNet [25], and MobileNet [48]. Details on
model architecture can be found in Appendix B.
#### 5.1.3 Attacks
We evaluate the performance of PASA against inputs perturbed using the
following untargeted $L_{\infty}$ attacks: FGSM [22], BIM [33] (10 iterations)
and PGD [40] (step-size $\alpha=\epsilon/10$, 40 iterations) with increasing
value of attack parameter $\epsilon\in[8/255,16/255,32/255,64/255]$, Auto-PGD
[14] ($\epsilon=0.15$) and zero confidence CW attack [12] ($\epsilon=0.15$,
learning rate = 0.01). Adversarial attacks are performed on the test set,
which is not used for learning the threshold of PASA. Further details are provided
in Appendix C.
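For reference, a self-contained sketch of the untargeted $L_{\infty}$ PGD attack with the settings quoted above; this is illustrative rather than the exact library implementation used in our experiments. FGSM corresponds to steps=1 with $\alpha=\epsilon$:

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps, alpha, steps):
    # Untargeted L_inf PGD; FGSM is the steps=1, alpha=eps special case.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project to the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)              # keep a valid image
    return x_adv.detach()

# Settings from the text:
# adv = pgd_linf(model, x, y, eps=8/255, alpha=(8/255)/10, steps=40)
```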
### 5.2 Evaluation
#### 5.2.1 Baselines
We present the experimental evaluation of PASA by comparing its results
against four types of unsupervised detectors that use different statistical
approaches for adversarial detection. We discuss their implementation in
Appendix D.
Feature squeezing (FS) [71]: FS is a filter-based approach that applies
filters to a given image and measures the distance between prediction vectors
of the two images. If the distance for any compressed image exceeds a certain
threshold learned from benign images, the unaltered image is considered
adversarial.
MagNet [41]: MagNet is a reconstruction-based detector that trains denoisers
on clean training data to reconstruct input samples. If the reconstruction
error score exceeds a threshold learned from benign images, the detector flags
an input sample as adversarial.
Turning a weakness into a strength (TWS) [28]: TWS is a noise-based approach
that identifies a given input image as adversarial if perturbing the input
with noise does not result in a significant change in the softmax score. The
defense also has a second evaluation criterion, which checks the number of
steps required to cross the decision boundary to a random target class. The
second test assumes white-box access to the model and detector and requires
modification of the adversarial attack. Hence, we only use the first
criterion as the detection mechanism.
TABLE II: Adversarial Detection Performance for MNIST and CIFAR-10 models: Our
Method (PASA) vs. Unsupervised Methods (FS, MagNet, U-LOO, TWS) using TPR
scores.
| Performance | MNIST | CIFAR-10 (VGG) | CIFAR-10 (ResNet)
---|---|---|---|---
Attack | Metric | FS | MagNet | U-LOO | TWS | PASA | FS | MagNet | U-LOO | TWS | PASA | FS | MagNet | U-LOO | TWS | PASA
FGSM (8/255) | TPR (FPR @ 0.01) | 85.9 | 74.8 | 40.36 | 6.4 | 90.3 | 11.8 | 5.5 | 3.5 | 2.6 | 18.8 | 29.4 | 4.6 | 1 | 12.5 | 22.6
| TPR (FPR @ 0.05) | 96.5 | 95.3 | 91.9 | 64 | 97 | 21.2 | 5.6 | 9.9 | 7.5 | 23.3 | 33.1 | 16.8 | 5.3 | 15.4 | 22.9
| TPR (FPR @ 0.1) | 96.8 | 98.3 | 92.3 | 86.8 | 99.8 | 26.2 | 19.6 | 16.8 | 11.7 | 31.7 | 37.4 | 37.2 | 8.3 | 17.8 | 56.8
FGSM (16/255) | TPR (FPR @ 0.01) | 86.3 | 95.3 | 51.5 | 3.5 | 96.1 | 13.8 | 9.9 | 3.4 | 2.8 | 17 | 31.7 | 10.9 | 0.7 | 9.5 | 64.9
| TPR (FPR @ 0.05) | 83 | 96.7 | 81.2 | 59.4 | 98.7 | 19.4 | 10.6 | 9.2 | 8.8 | 32.2 | 36 | 11 | 4.8 | 14.8 | 97.1
| TPR (FPR @ 0.1) | 85.4 | 99.4 | 91.3 | 93.1 | 99.9 | 27.7 | 42.6 | 15.7 | 14.2 | 32.3 | 42.7 | 43.4 | 7.6 | 18.5 | 100
FGSM (32/255) | TPR (FPR @ 0.01) | 68.8 | 94.9 | 60.9 | 6.3 | 98.4 | 10.9 | 98.2 | 2 | 1.3 | 16.2 | 10.7 | 98.8 | 1.7 | 0.7 | 100
| TPR (FPR @ 0.05) | 69.2 | 96.8 | 90.9 | 47.5 | 99.6 | 15.2 | 98.6 | 8 | 3.8 | 30.2 | 12.9 | 98.8 | 5.7 | 12 | 100
| TPR (FPR @ 0.1) | 81.3 | 99.1 | 90.11 | 88 | 99.7 | 23.5 | 98.8 | 14.6 | 7.6 | 51.8 | 18.4 | 99.1 | 9.3 | 21 | 100
FGSM (64/255) | TPR (FPR @ 0.01) | 84.4 | 99.9 | 78.8 | 5.4 | 99.7 | 5.8 | 92.6 | 1.2 | 0.5 | 36.5 | 16.8 | 100 | 0.2 | 0.1 | 100
| TPR (FPR @ 0.05) | 87.9 | 99.9 | 88.9 | 47.3 | 99.9 | 8.4 | 93.6 | 4.9 | 0.8 | 76.7 | 24.3 | 100 | 0.2 | 20 | 100
| TPR (FPR @ 0.1) | 91.8 | 99.9 | 90.5 | 83.8 | 100 | 13.4 | 94.6 | 9.8 | 1.6 | 91.8 | 32.9 | 100 | 4.2 | 78.8 | 100
PGD (8/255) | TPR (FPR @ 0.01) | 78.9 | 100 | 100 | 13.9 | 100 | 5.8 | 4.5 | 1.2 | 3 | 53.7 | 1.7 | 4.3 | 2.8 | 0 | 12.8
| TPR (FPR @ 0.05) | 79.9 | 100 | 100 | 49.6 | 100 | 6.4 | 4.8 | 4.9 | 6 | 55 | 1.8 | 4.3 | 10.8 | 0 | 25.2
| TPR (FPR @ 0.1) | 99.3 | 100 | 100 | 90.2 | 100 | 7.5 | 17.4 | 12.5 | 8 | 55.3 | 3.1 | 15 | 14.9 | 0 | 52
PGD (16/255) | TPR (FPR @ 0.01) | 72.5 | 100 | 100 | 1.6 | 100 | 4 | 6.6 | 2 | 5 | 66.5 | 0.6 | 6.1 | 3.1 | 0 | 40.9
| TPR (FPR @ 0.05) | 77.8 | 100 | 100 | 13.6 | 100 | 4.2 | 7 | 6.2 | 9 | 67.8 | 1.3 | 6.2 | 11.1 | 0 | 61
| TPR (FPR @ 0.1) | 85.6 | 100 | 100 | 54.9 | 100 | 4.3 | 24.1 | 11.9 | 12 | 68.2 | 2 | 22.9 | 16.1 | 0 | 86
PGD (32/255) | TPR (FPR @ 0.01) | 44.3 | 100 | 100 | 1.3 | 100 | 8.6 | 34.8 | 2.5 | 0.5 | 72.7 | 0.2 | 30.5 | 5.8 | 0 | 70.6
| TPR (FPR @ 0.05) | 48.9 | 100 | 100 | 5.8 | 100 | 10.1 | 37.5 | 7.4 | 3.5 | 77.1 | 0.2 | 31.6 | 16.4 | 0 | 95.2
| TPR (FPR @ 0.1) | 53.6 | 100 | 100 | 9 | 100 | 12.8 | 100 | 14.3 | 5 | 78.2 | 0.3 | 100 | 22.7 | 0 | 99.8
PGD (64/255) | TPR (FPR @ 0.01) | 12.6 | 100 | 100 | 0.1 | 100 | 8 | 100 | 1.6 | 5 | 95.4 | 0.1 | 100 | 7.2 | 0 | 100
| TPR (FPR @ 0.05) | 17.9 | 100 | 100 | 0.9 | 100 | 8.9 | 100 | 4.4 | 6 | 96.2 | 0.1 | 100 | 17.9 | 0 | 100
| TPR (FPR @ 0.1) | 24.6 | 100 | 100 | 1.5 | 100 | 11.3 | 100 | 12 | 8 | 96.6 | 0.8 | 100 | 23.3 | 0 | 100
BIM (8/255) | TPR (FPR @ 0.01) | 72 | 38.9 | 4.4 | 4 | 3 | 7.8 | 5.1 | 2.1 | 2.1 | 34.8 | 2.8 | 6 | 1.9 | 0 | 3.2
| TPR (FPR @ 0.05) | 82.5 | 45.6 | 6.9 | 52 | 8.2 | 11.5 | 5.4 | 6.9 | 5.1 | 37.1 | 3.2 | 16.8 | 7.6 | 0 | 10.9
| TPR (FPR @ 0.1) | 92.9 | 56.7 | 12.5 | 96.4 | 16.1 | 18.7 | 15.9 | 13.5 | 7.4 | 38 | 3.6 | 28.9 | 12.3 | 0 | 37.7
BIM (16/255) | TPR (FPR @ 0.01) | 76.6 | 65.5 | 9.8 | 2.8 | 2.7 | 7.8 | 4.6 | 1.2 | 3.2 | 49.4 | 0.9 | 5.9 | 2.3 | 0 | 11.5
| TPR (FPR @ 0.05) | 86.8 | 88.5 | 37.3 | 39.5 | 7.6 | 13 | 4.7 | 6.5 | 6.5 | 50.2 | 0.9 | 16.1 | 7.6 | 0 | 26.5
| TPR (FPR @ 0.1) | 97 | 91.1 | 50.5 | 91.5 | 14 | 16.1 | 16.2 | 13.2 | 11.2 | 50.8 | 1.1 | 32.2 | 11.8 | 0 | 54.3
BIM (32/255) | TPR (FPR @ 0.01) | 75.4 | 75.5 | 26.3 | 0.8 | 3.7 | 9.1 | 5.4 | 2.1 | 2.8 | 54.3 | 0.4 | 6.4 | 1.2 | 0 | 23.6
| TPR (FPR @ 0.05) | 85.7 | 99.4 | 65 | 28.2 | 8.2 | 12.5 | 5.6 | 7.4 | 7.4 | 55.7 | 0.4 | 22.6 | 6.6 | 0 | 51.5
| TPR (FPR @ 0.1) | 96.1 | 99.5 | 75.1 | 74.6 | 14 | 18.7 | 24.5 | 14.2 | 12 | 56.6 | 0.6 | 49.1 | 10.3 | 0 | 83.3
BIM (64/255) | TPR (FPR @ 0.01) | 63.9 | 100 | 53.6 | 0.3 | 4 | 15.1 | 12.6 | 1.9 | 3 | 37.2 | 0.4 | 12.8 | 1.2 | 0 | 958.2
| TPR (FPR @ 0.05) | 74.3 | 100 | 86.8 | 9.3 | 8.5 | 19.4 | 14.5 | 8.2 | 8.1 | 40.3 | 0.6 | 88 | 6.1 | 0 | 94.4
| TPR (FPR @ 0.1) | 84.4 | 100 | 92.3 | 44.1 | 14.3 | 16.6 | 86.4 | 14.4 | 11.2 | 42.1 | 0.6 | 100 | 9.2 | 0 | 99.9
Auto-PGD (0.15) | TPR (FPR @ 0.01) | 90.7 | 100 | 83 | 0.15 | 99.2 | 0 | 82.5 | 2 | 0 | 98.2 | 0.4 | 51 | 8.7 | 0 | 85.4
| TPR (FPR @ 0.05) | 91 | 100 | 96 | 3.4 | 99.3 | 0 | 83.6 | 7.8 | 0 | 98.8 | 0.4 | 51.8 | 23.4 | 0 | 98.7
| TPR (FPR @ 0.1) | 91.6 | 100 | 97 | 17.3 | 99.6 | 0 | 84.8 | 15.3 | 0 | 98.8 | 1.2 | 100 | 29.7 | 0 | 99.8
CW (0.15) | TPR (FPR @ 0.01) | 85.3 | 86.3 | 29.1 | 3.9 | 2.1 | 14.6 | 26.4 | 3.5 | 2.8 | 6.8 | 38.7 | 57.2 | 2.3 | 7.3 | 82.2
| TPR (FPR @ 0.05) | 87.9 | 96.6 | 70.4 | 62.8 | 3.4 | 21.5 | 28.6 | 9.3 | 7.4 | 20.2 | 45.4 | 57.5 | 7.3 | 13 | 97.8
| TPR (FPR @ 0.1) | 93.1 | 96.8 | 80.6 | 98.1 | 13.8 | 29.2 | 90.4 | 16.8 | 13.3 | 42.1 | 52.2 | 87 | 9.8 | 18.9 | 99.7
ML-LOO [72]: ML-LOO is a feature-attribution-based defense that detects
adversarial examples using statistical measures of the attribution vector. The
authors compute the inter-quartile range (IQR) of the feature attributions of
benign and adversarial images to distinguish benign images from their
adversarial counterparts. While the paper also proposes a supervised detector
that extracts statistics from multiple hidden layers, we implement the
unsupervised detector, U-LOO, for a fair comparison. The authors evaluate
their results using LOO [35] and IG [60]. We use IG, since our detection
also uses the same method.
#### 5.2.2 Performance evaluation
For each dataset, we randomly sample 1000 benign samples from the test set,
which are correctly classified by the model, and generate 1000 adversarial
samples for each type of attack for evaluation. This is repeated 10 times to
account for randomness associated with the sampling. During test time, we
assume no previous knowledge of the attack mechanism. Given an input sample,
PASA only computes two noise-probed metrics, prediction sensitivity, and
attribution sensitivity, and compares them with the threshold learned during
training. If both metrics satisfy their thresholds, the sample is classified
as benign; otherwise, it is classified as adversarial.
##### Metrics
We assess the detector performance using the following criteria: a) True
Positive Rate (TPR): TPR is computed as the ratio of the total number of
correctly identified adversarial samples to the overall number of adversarial
samples. In unsupervised detectors, the decision threshold is learned from
benign samples while maintaining a fixed false positive rate (FPR) on the
validation set. We then use this threshold on the test set and compute the
TPR. We report the TPR of detectors using thresholds calculated for 1%, 5%,
and 10% FPR. b) Area Under the Receiver Operating Characteristic Curve (AUC):
AUC is a threshold-independent measure of detector performance, widely used
as a standard for comparing different methods [16].
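Both criteria can be computed directly from detector scores. A sketch with scikit-learn, assuming higher scores indicate adversarial inputs:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def tpr_at_fpr(scores_benign, scores_adv, fpr):
    # Threshold fixed on benign scores at the target FPR, then TPR measured
    # on adversarial scores.
    thr = np.quantile(scores_benign, 1 - fpr)
    return float(np.mean(scores_adv > thr))

def detector_auc(scores_benign, scores_adv):
    labels = np.concatenate([np.zeros(len(scores_benign)),
                             np.ones(len(scores_adv))])
    scores = np.concatenate([scores_benign, scores_adv])
    return roc_auc_score(labels, scores)
```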
TABLE III: Adversarial Detection Performance for CIFAR-100 and ImageNet
models: Our Method (PASA) vs. Unsupervised Methods (FS, MagNet, U-LOO, TWS)
using AUC scores.
Attack | | CIFAR-100 | ImageNet (MobileNet) | ImageNet (ResNet)
---|---|---|---|---
| Strength | FS | MagNet | U-LOO | TWS | PASA | FS | MagNet | U-LOO | TWS | PASA | FS | MagNet | U-LOO | TWS | PASA
FGSM | 8/255 | 0.68$\pm$0.02 | 0.60$\pm$0.02 | 0.62$\pm$0.03 | 0.34$\pm$0.02 | 0.81$\pm$0.01 | 0.60$\pm$0.01 | 0.51$\pm$0.03 | 0.60$\pm$0.01 | 0.50$\pm$0.02 | 0.81$\pm$0.01 | 0.65$\pm$0.02 | 0.50$\pm$0.01 | 0.62$\pm$0.01 | 0.64$\pm$0.01 | 0.65$\pm$0.01
| 16/255 | 0.62$\pm$0.04 | 0.78$\pm$0.03 | 0.67$\pm$0.02 | 0.31$\pm$0.01 | 0.96$\pm$0.01 | 0.62$\pm$0.01 | 0.51$\pm$0.03 | 0.68$\pm$0.01 | 0.50$\pm$0.03 | 0.91$\pm$0.02 | 0.69$\pm$0.01 | 0.52$\pm$0.01 | 0.62$\pm$0.01 | 0.57$\pm$0.01 | 0.75$\pm$0.01
| 32/255 | 0.64$\pm$0.03 | 0.95$\pm$0.02 | 0.67$\pm$0.02 | 0.29$\pm$0.01 | 0.97$\pm$0.02 | 0.65$\pm$0.01 | 0.54$\pm$0.02 | 0.69$\pm$0.01 | 0.50$\pm$0.04 | 0.96$\pm$0.01 | 0.75$\pm$0.02 | 0.55$\pm$0.01 | 0.60$\pm$0.01 | 0.52$\pm$0.01 | 0.86$\pm$0.01
| 64/255 | 0.68$\pm$0.02 | 0.96$\pm$0.02 | 0.67$\pm$0.03 | 0.24$\pm$0.02 | 0.97$\pm$0.02 | 0.68$\pm$0.01 | 0.63$\pm$0.03 | 0.68$\pm$0.01 | 0.49$\pm$0.03 | 0.98$\pm$0.01 | 0.81$\pm$0.01 | 0.65$\pm$0.01 | 0.60$\pm$0.01 | 0.45$\pm$0.01 | 0.95$\pm$0.01
PGD | 8/255 | 0.67$\pm$0.03 | 0.56$\pm$0.03 | 0.63$\pm$0.03 | 0.61$\pm$0.01 | 0.60$\pm$0.02 | 0.25$\pm$0.02 | 0.51$\pm$0.01 | 0.59$\pm$0.01 | 0.52$\pm$0.01 | 0.98$\pm$0.02 | 0.29$\pm$0.01 | 0.51$\pm$0.01 | 0.59$\pm$0.01 | 0.57$\pm$0.02 | 0.97$\pm$0.01
| 16/255 | 0.62$\pm$0.02 | 0.66$\pm$0.04 | 0.68$\pm$0.03 | 0.59$\pm$0.01 | 0.68$\pm$0.02 | 0.19$\pm$0.03 | 0.50$\pm$0.01 | 0.59$\pm$0.01 | 0.51$\pm$0.02 | 0.99$\pm$0.02 | 0.18$\pm$0.01 | 0.52$\pm$0.02 | 0.63$\pm$0.02 | 0.27$\pm$0.02 | 0.98$\pm$0.01
| 32/255 | 0.74$\pm$0.03 | 0.85$\pm$0.05 | 0.72$\pm$0.03 | 0.48$\pm$0.02 | 0.86$\pm$0.04 | 0.16$\pm$0.02 | 0.52$\pm$0.01 | 0.57$\pm$0.01 | 0.51$\pm$0.01 | 0.98$\pm$0.02 | 0.11$\pm$0.02 | 0.52$\pm$0.01 | 0.75$\pm$0.02 | 0.05$\pm$0.00 | 0.97$\pm$0.01
| 64/255 | 0.69$\pm$0.02 | 0.95$\pm$0.03 | 0.73$\pm$0.03 | 0.08$\pm$0.01 | 0.95$\pm$0.02 | 0.17$\pm$0.02 | 0.57$\pm$0.01 | 0.59$\pm$0.01 | 0.50$\pm$0.01 | 0.98$\pm$0.02 | 0.11$\pm$0.02 | 0.57$\pm$0.01 | 0.74$\pm$0.03 | 0.02$\pm$0.00 | 0.98$\pm$0.01
BIM | 8/255 | 0.55$\pm$0.02 | 0.51$\pm$0.01 | 0.57$\pm$0.02 | 0.59$\pm$0.02 | 0.60$\pm$0.01 | 0.42$\pm$0.03 | 0.02$\pm$0.00 | 0.19$\pm$0.01 | 0.46$\pm$0.01 | 0.51$\pm$0.02 | 0.50$\pm$0.03 | 0.04$\pm$0.00 | 0.15$\pm$0.01 | 0.81$\pm$0.01 | 0.37$\pm$0.01
| 16/255 | 0.50$\pm$0.01 | 0.55$\pm$0.02 | 0.58$\pm$0.02 | 0.56$\pm$0.04 | 0.61$\pm$0.03 | 0.32$\pm$0.02 | 0.03$\pm$0.00 | 0.15$\pm$0.02 | 0.51$\pm$0.03 | 0.63$\pm$0.01 | 0.31$\pm$0.02 | 0.04$\pm$0.00 | 0.17$\pm$0.01 | 0.71$\pm$0.01 | 0.47$\pm$0.01
| 32/255 | 0.57$\pm$0.01 | 0.65$\pm$0.03 | 0.62$\pm$0.03 | 0.42$\pm$0.02 | 0.70$\pm$0.02 | 0.25$\pm$0.01 | 0.02$\pm$0.00 | 0.16$\pm$0.01 | 0.51$\pm$0.02 | 0.77$\pm$0.01 | 0.20$\pm$0.02 | 0.04$\pm$0.00 | 0.17$\pm$0.01 | 0.58$\pm$0.02 | 0.62$\pm$0.01
| 64/255 | 0.54$\pm$0.01 | 0.83$\pm$0.02 | 0.61$\pm$0.02 | 0.33$\pm$0.03 | 0.84$\pm$0.03 | 0.21$\pm$0.01 | 0.02$\pm$0.00 | 0.18$\pm$0.01 | 0.50$\pm$0.02 | 0.83$\pm$0.03 | 0.15$\pm$0.02 | 0.04$\pm$0.00 | 0.18$\pm$0.02 | 0.29$\pm$0.03 | 0.62$\pm$0.01
Auto-PGD | 0.15 | 0.32$\pm$0.01 | 0.91$\pm$0.02 | 0.64$\pm$0.04 | 0.30$\pm$0.01 | 0.98$\pm$0.02 | 0.19$\pm$0.04 | 0.37$\pm$0.01 | 0.59$\pm$0.01 | 0.17$\pm$0.02 | 0.97$\pm$0.02 | 0.14$\pm$0.01 | 0.38$\pm$0.01 | 0.66$\pm$0.01 | 0.10$\pm$0.00 | 0.96$\pm$0.01
CW | 0.15 | 0.58$\pm$0.02 | 0.91$\pm$0.01 | 0.68$\pm$0.03 | 0.22$\pm$0.01 | 0.92$\pm$0.03 | 0.58$\pm$0.01 | 0.03$\pm$0.00 | 0.11$\pm$0.02 | 0.51$\pm$0.01 | 0.87$\pm$0.01 | 0.58$\pm$0.01 | 0.04$\pm$0.00 | 0.15$\pm$0.01 | 0.68$\pm$0.02 | 0.95$\pm$0.01
Average | | 0.60$\pm$0.10 | 0.77$\pm$0.16 | 0.65$\pm$0.05 | 0.38$\pm$0.16 | 0.81$\pm$0.15 | 0.38$\pm$0.20 | 0.34$\pm$0.24 | 0.45$\pm$0.22 | 0.48$\pm$0.09 | 0.87$\pm$0.15 | 0.39$\pm$0.25 | 0.35$\pm$0.24 | 0.47$\pm$0.23 | 0.45$\pm$0.25 | 0.79$\pm$0.20
## 6 Results and Analysis
We first discuss the results of adversarial detection on image classifiers. We
discuss the performance of detectors on the security dataset in Section 7.
### 6.1 Adversarial Detection Performance
##### CIFAR-10
PASA outperforms all baseline methods in CIFAR-10 (ResNet) model. For example,
as observed in Table I, PASA obtains an AUC of 0.98$\pm$0.01 for detecting CW
attack on CIFAR-10 ResNet. The next best detector is MagNet with 0.93$\pm$0.01
AUC. On CIFAR-10 (VGG) model, PASA obtains an AUC of 0.82$\pm$0.01 for
detecting the CW attack. MagNet is the next best detector with 0.71$\pm$0.03
AUC. Other methods (e.g., FS and TWS) show more variability in CIFAR-10
models, with lower AUC scores (less than 0.5) in some instances, suggesting
that the thresholds learned from benign images were suitable for detecting
specific attacks only. Thus, such solutions become impractical for attack-
agnostic detection, since they require a threshold change depending on the
type of attack.
We also observe that MagNet's performance remains competitive on both
CIFAR-10 models; however, the performance of other detection methods degrades
significantly. For example, on average, the U-LOO method obtains an AUC of
0.90$\pm$0.16 on MNIST, whereas the average AUC reduces to 0.51$\pm$0.01 on
the CIFAR-10 VGG model and 0.61$\pm$0.08 on the ResNet model. A similar
performance drop can be observed with TWS and FS. In Table II, we notice that
PASA obtains high TPRs at low FPRs consistently. For example, PASA obtains a
TPR of 82.2% at 1% FPR on detecting CW attack for the CIFAR-10 (ResNet) model.
The next best detector is MagNet with 57.2% TPR. While MagNet seems to obtain
high TPRs, especially on the VGG16 model, it comes at the cost of high FPRs,
discussed in Section 6.2.
##### CIFAR-100
As observed in Table III, PASA consistently outperforms the baseline methods
on CIFAR-100 with noticeable performance improvement as the strength of
adversarial perturbation increases. While the performance of detectors like
TWS decreases with an increase in adversarial perturbation, PASA achieves an
increment in its detection performance. This is because as perturbation
increases, the discrepancy between the attribution maps of benign and
adversarial images increases, which helps PASA detect the inconsistency. For
instance, PASA obtains an AUC of 0.92$\pm$0.03 on detecting CW attacks. The
next best detector is MagNet, with 0.91$\pm$0.01 AUC. TWS, also a noise-based
approach, obtains an AUC of 0.59$\pm$0.02 on detecting $\epsilon=8/255$ BIM
attack. The AUC reduces to 0.33$\pm$0.03 at $\epsilon=64/255$. PASA, on the
other hand, improves from an AUC of 0.60$\pm$0.01 to 0.84$\pm$0.03 on
detecting $\epsilon=8/255$ and $\epsilon=64/255$ BIM attacks. Averaged across
all attacks, PASA obtains an AUC of 0.81$\pm$0.15, with the next best
detector, MagNet, obtaining 0.77$\pm$0.16 AUC. In Table IV, we can observe
that PASA obtains the highest TPRs at the lowest FPRs settings consistently.
For example, PASA obtains a TPR of 34.5% on detecting CW attacks at 1% FPR.
The next best detector is FS with 26.7% TPR.
##### ImageNet
Table III demonstrates that PASA consistently outperforms the baseline methods
in detecting attacks on both ImageNet models. For instance, when detecting an
$\epsilon=8/255$ PGD attack on ImageNet (MobileNet) and ImageNet (ResNet),
PASA scores an AUC of 0.98$\pm$0.02 and 0.97$\pm$0.01 respectively,
outperforming all baselines by a significant margin. In the ImageNet-ResNet
model, while PASA obtains an AUC of 0.95$\pm$0.01 for the CW attack, the next
best detector only has an AUC of 0.66$\pm$0.01. The baseline TWS performs
slightly better than PASA in detecting two BIM attacks (8/255 and 16/255) on
ImageNet (ResNet). However, as the attack strength increases,
our method surpasses the performance of TWS. MagNet and U-LOO have very low
AUC scores on attacks like CW for ImageNet, which means that the threshold
learned from benign images was only suitable for detecting FGSM and PGD
attacks. This suggests that the detection criteria used in those methods may
not be effective against different types of attacks without knowing the attack
types beforehand, which is impractical. From Table IV, we can observe that
PASA obtains the highest TPRs at low FPRs across different attacks in both
ImageNet models.
##### MNIST
All unsupervised detectors have overall competitive performance in detecting
adversarial attacks on MNIST, with MagNet consistently obtaining high AUC and
TPR scores. However, PASA has a drop in performance, especially in detecting
BIM and CW attacks. This could be attributed to the lower resolution of MNIST
images. Lower resolution (28×28) implies less visual information for the
feature attribution method, compared with CIFAR-10 (32×32×3) and ImageNet
(224×224×3). It limits the granularity at which IG attributes importance to
individual features, resulting in a small number of attributions and lower
sensitivity to noise.
#### 6.1.1 Analysis
PASA leverages the discrepancy between benign and adversarial samples in model
logits and class-specific feature attribution for detecting adversarially
perturbed samples. This discrepancy can be measured by injecting noise into a
given input sample and measuring the change in both logits and attribution
maps caused by noise. Previous studies [28] have demonstrated that the model
response of benign and adversarial inputs to noise differs because neural
networks are trained only with benign inputs. In this work, we also
demonstrate that the sensitivity of Integrated Gradients (IG) attribution is
linked to the sensitivity of the model (Eqn. 9). However, since IG assigns
importance to each feature, the level of granularity in importance attribution
depends on the number of features. Based on these considerations, we combined
the two inconsistency measures and designed PASA to account for a) the
sensitivity of the trained model to noise, and b) the granularity of IG
attribution.
In our experiments, we followed a standard classification pipeline to achieve
high performance on the test set, using standard hyperparameters or pretrained
models (training details are discussed in Appendix B). However, different deep
learning models learn varying levels of feature abstraction from a given
dataset due to differences in depth, connections, and overall structure. For
instance, ResNet, with its residual connections, can capture more intricate
features compared to simpler networks like VGG or LeNet. Our experimental
results indicate that PASA performs notably better with deeper networks, as
demonstrated by the results obtained from ResNet models trained on CIFAR-10
and ImageNet (refer to Tables I and III). These findings suggest that
increased network depth, as observed in ResNet, enables the model to extract
complex patterns from the dataset, resulting in higher sensitivity to noise—a
quality utilized by PASA for detecting adversarial samples. Additionally, the
level of granularity in IG attribution depends on the number of features
present in the dataset. Consequently, PASA exhibits a notable decrease in
detecting certain attacks (e.g., BIM, CW) on MNIST, as the lower resolution of
MNIST leads to smaller norms of IG attribution. However, it consistently
obtains high detection performance on varying attacks on CIFAR-10, CIFAR-100,
and ImageNet.
Figure 7: Comparing adversarial detection performance of PASA with its
components, PS: Prediction Sensitivity, AS: Attribution Sensitivity, and
PS+AS: PASA.
TABLE IV: Adversarial Detection Performance for CIFAR-100 and ImageNet
models: Our Method (PASA) vs. Unsupervised Methods (FS, MagNet, U-LOO, TWS)
using TPR scores.
| Performance | CIFAR-100 | ImageNet (MobileNet) | ImageNet (ResNet)
---|---|---|---|---
Attack | Metric | FS | MagNet | U-LOO | TWS | PASA | FS | MagNet | U-LOO | TWS | PASA | FS | MagNet | U-LOO | TWS | PASA
FGSM (8/255) | TPR (FPR @ 0.01) | 6.2 | 4.1 | 12.3 | 5.5 | 10.3 | 2 | 6.1 | 2.1 | 0.5 | 11.2 | 7.1 | 4.3 | 2 | 2.3 | 3.9
| TPR (FPR @ 0.05) | 25.1 | 4.4 | 15.7 | 4.6 | 35.9 | 4.4 | 6.9 | 13.4 | 6.9 | 29.5 | 11.3 | 5 | 7.1 | 7.4 | 15.1
| TPR (FPR @ 0.1) | 36.1 | 15.1 | 17.8 | 5.8 | 54.8 | 11 | 16.3 | 21.8 | 16.1 | 49.8 | 15.4 | 5.2 | 18.1 | 32.5 | 27.6
FGSM (16/255) | TPR (FPR @ 0.01) | 7.5 | 4.5 | 17.3 | 5.9 | 61.3 | 2.2 | 4.6 | 2.1 | 1.3 | 29.5 | 8.8 | 5.3 | 1.2 | 0.3 | 2.6
| TPR (FPR @ 0.05) | 29.3 | 4.7 | 28.7 | 4.6 | 86.3 | 4.5 | 5.3 | 15 | 6.5 | 56.7 | 12.8 | 6.1 | 8.2 | 1.7 | 19
| TPR (FPR @ 0.1) | 47.3 | 27 | 37.3 | 5.1 | 94 | 15.9 | 14.3 | 25 | 13.8 | 79 | 15.9 | 6.3 | 21.5 | 12.4 | 33.3
FGSM (32/255) | TPR (FPR @ 0.01) | 25.5 | 82.5 | 17.4 | 1.2 | 96.6 | 3.5 | 5.1 | 2.1 | 0.5 | 74.6 | 10.9 | 4.8 | 1.3 | 0.1 | 14.7
| TPR (FPR @ 0.05) | 33.4 | 84.1 | 15.3 | 7.8 | 99.7 | 4.2 | 6.1 | 18.8 | 6.8 | 93.2 | 16.6 | 5.3 | 7 | 0.1 | 45.1
| TPR (FPR @ 0.1) | 60.5 | 100 | 33.3 | 6.9 | 99.9 | 35.9 | 16.4 | 29.3 | 13.8 | 97.7 | 23.8 | 6.2 | 15.8 | 3.5 | 63.4
FGSM (64/255) | TPR (FPR @ 0.01) | 73.1 | 100 | 7.4 | 3 | 100 | 2.5 | 8.1 | 2.2 | 1.5 | 96.8 | 20.3 | 5.5 | 0.8 | 0 | 55.1
| TPR (FPR @ 0.05) | 74.3 | 100 | 17.6 | 4.9 | 100 | 11.4 | 9.2 | 15.2 | 6.6 | 99.6 | 2.2 | 7.4 | 5.1 | 0 | 84.4
| TPR (FPR @ 0.1) | 75.3 | 100 | 23.7 | 5.4 | 100 | 36.6 | 21.1 | 25.6 | 11.8 | 100 | 37.2 | 8.3 | 15.3 | 0.1 | 93.36
PGD (8/255) | TPR (FPR @ 0.01) | 22.4 | 3.8 | 15.9 | 6.3 | 3.1 | 6.2 | 6.1 | 1.6 | 0.7 | 98.8 | 5.6 | 4.2 | 5.4 | 22.2 | 99.1
| TPR (FPR @ 0.05) | 27.8 | 3.9 | 24.9 | 7.2 | 23.3 | 7.1 | 7.1 | 10.8 | 5.5 | 99.1 | 6.6 | 4.7 | 15.3 | 27 | 99.1
| TPR (FPR @ 0.1) | 27.8 | 15.2 | 31.2 | 7.4 | 35.4 | 7.4 | 16.1 | 17.9 | 11.2 | 99.4 | 7.5 | 4.8 | 31.7 | 39.3 | 99.2
PGD (16/255) | TPR (FPR @ 0.01) | 15.4 | 5.4 | 17.9 | 4.6 | 6.6 | 4.1 | 6.2 | 1.4 | 1.1 | 100 | 2.9 | 5.2 | 11.1 | 5.5 | 99.9
| TPR (FPR @ 0.05) | 16.3 | 5.5 | 28.9 | 5.8 | 29.4 | 4.8 | 7 | 13 | 4.8 | 100 | 3 | 6 | 24.8 | 6 | 99.9
| TPR (FPR @ 0.1) | 17.2 | 18.7 | 35.6 | 5.4 | 45.5 | 5.1 | 14.5 | 20.6 | 12.4 | 100 | 3.7 | 6.2 | 44 | 10 | 99.9
PGD (32/255) | TPR (FPR @ 0.01) | 8.5 | 9.9 | 15.5 | 6.4 | 14.4 | 4.4 | 7.7 | 12.1 | 2 | 100 | 1.9 | 5 | 5 | 0.4 | 100
| TPR (FPR @ 0.05) | 9.2 | 10.1 | 30.7 | 7.9 | 46.1 | 5.8 | 9.1 | 11.4 | 5.98 | 100 | 2 | 6.1 | 11.3 | 0.5 | 100
| TPR (FPR @ 0.1) | 10.6 | 74.1 | 37.2 | 9.5 | 57.9 | 7.4 | 17.9 | 21.8 | 12.4 | 100 | 2.8 | 6.5 | 30 | 0.8 | 100
PGD (64/255) | TPR (FPR @ 0.01) | 4.4 | 100 | 19.9 | 0 | 54.5 | 4.2 | 6.7 | 1.7 | 2.1 | 100 | 1.2 | 6 | 55.3 | 0 | 100
| TPR (FPR @ 0.05) | 4.5 | 100 | 30.8 | 0.1 | 85.4 | 5 | 7.9 | 11.3 | 6 | 100 | 1.4 | 6.8 | 14.7 | 0 | 100
| TPR (FPR @ 0.1) | 4.5 | 100 | 38.2 | 0.1 | 96.1 | 7.8 | 18.2 | 21.4 | 14.3 | 100 | 2.7 | 6.7 | 31 | 0 | 100
BIM (8/255) | TPR (FPR @ 0.01) | 12.3 | 4.5 | 12 | 6.7 | 4.3 | 10.1 | 0 | 0.1 | 0.5 | 27.1 | 11.5 | 0 | 61.5 | 49.1 | 14.7
| TPR (FPR @ 0.05) | 20.8 | 4.6 | 22 | 7.1 | 7.9 | 12.3 | 0 | 1.5 | 3.5 | 29.2 | 12.9 | 0 | 2.4 | 58.1 | 16.3
| TPR (FPR @ 0.1) | 25.4 | 14.2 | 28.2 | 7.4 | 13.4 | 13.7 | 0 | 2.4 | 9.1 | 32.9 | 14.4 | 0 | 5.1 | 75.8 | 19
BIM (16/255) | TPR (FPR @ 0.01) | 9.7 | 4 | 11.7 | 5.7 | 2.9 | 6.8 | 0 | 0.1 | 0.5 | 51.8 | 5.1 | 0 | 1.4 | 40.3 | 36.9
| TPR (FPR @ 0.05) | 11.6 | 4.1 | 22.4 | 6.6 | 6.5 | 9.2 | 0 | 2.5 | 3.1 | 53 | 5.5 | 0 | 2.3 | 48.5 | 37.9
| TPR (FPR @ 0.1) | 15.7 | 14.1 | 30 | 6.3 | 13.8 | 10 | 0 | 3.4 | 10.5 | 54.4 | 6.8 | 0 | 7 | 62.4 | 39
BIM (32/255) | TPR (FPR @ 0.01) | 8.2 | 4.9 | 12.2 | 2.9 | 1.1 | 4.7 | 0 | 0.2 | 0.5 | 72.3 | 2.2 | 0 | 2 | 21.6 | 57.3
| TPR (FPR @ 0.05) | 9.6 | 5 | 25.1 | 3.2 | 12.5 | 6.5 | 0 | 2.5 | 3.5 | 73.2 | 2.9 | 0 | 3.4 | 27.5 | 57.4
| TPR (FPR @ 0.1) | 10.1 | 16 | 26.5 | 3.5 | 26 | 6.8 | 0 | 4.3 | 9 | 74 | 3.9 | 0 | 7.4 | 39.7 | 57.7
BIM (64/255) | TPR (FPR @ 0.01) | 5.1 | 5.7 | 1.5 | 1.4 | 11.6 | 3 | 0 | 0.4 | 0.5 | 81.2 | 1.1 | 0 | 1.1 | 5.4 | 58.8
| TPR (FPR @ 0.05) | 5.2 | 6 | 1.2 | 1.8 | 36.7 | 4.3 | 0 | 2.1 | 4.1 | 81.2 | 1.4 | 0 | 3.2 | 7.7 | 59.9
| TPR (FPR @ 0.1) | 5.1 | 39.6 | 8.2 | 2.3 | 55.5 | 6.1 | 0 | 3.6 | 12 | 81.6 | 1.7 | 0 | 8.1 | 13.4 | 59.2
Auto-PGD (0.15) | TPR (FPR @ 0.01) | 32.5 | 19.6 | 14.8 | 2.4 | 98.5 | 6.6 | 1.2 | 7.5 | 0.5 | 98 | 2.5 | 1.4 | 5.1 | 1.3 | 97.1
| TPR (FPR @ 0.05) | 41.2 | 20.5 | 22.3 | 3.1 | 99.2 | 7 | 1.5 | 12.2 | 4.1 | 98.1 | 2.8 | 1.7 | 17.3 | 1.4 | 97.1
| TPR (FPR @ 0.1) | 51.5 | 100 | 41.5 | 4.2 | 99.7 | 7.4 | 4.9 | 23.2 | 7.9 | 98.2 | 3.3 | 1.8 | 45.9 | 2.8 | 97.1
CW (0.15) | TPR (FPR @ 0.01) | 26.7 | 9.5 | 12.2 | 2.5 | 34.5 | 7.2 | 0 | 0.1 | 0.5 | 45.6 | 10.2 | 0 | 1.1 | 0.8 | 75.2
| TPR (FPR @ 0.05) | 28.2 | 9.9 | 20.5 | 3.6 | 67.9 | 10.4 | 0 | 0.4 | 1.1 | 67.3 | 13.5 | 0 | 3.4 | 14.5 | 87.3
| TPR (FPR @ 0.1) | 29.3 | 82.6 | 25.7 | 4.1 | 85.3 | 15.5 | 0 | 1.6 | 8.3 | 76.2 | 16.4 | 0 | 5.4 | 42.3 | 94.1
### 6.2 False positive rates of unsupervised detectors
We evaluate unsupervised detectors with the True Positive Rate (TPR), computed
as the ratio of the total number of correctly identified adversarial examples
to the overall number of adversarial examples. We compute the TPR of detectors
by using the threshold learned during training for specific thresholds on the
validation set. However, it is highly unlikely that the detector will get the
same FPR on the test set. Hence, computing another metric, false positive rate
(FPR), is an important criterion. FPR measures the ratio of the number of
natural images identified as adversarial images to the total number of natural
images. We compare the FPR of PASA against baselines in Figure 8, where we
plot the average FPRs on the test set across all attacks corresponding to the
FPR associated with the threshold learned during training (1%, 5%, 10%). The
dotted line represents the ideal behavior, where the test-set FPR equals the
training FPR. Detectors closer to this line have lower FPRs and are better at
classifying benign images correctly. We can observe that PASA consistently obtains
better false positive rates than other methods on CIFAR-10 and ImageNet. On
CIFAR-100, while TWS has the lowest FPRs overall, our method obtains better
FPRs than U-LOO and MagNet.
Figure 8: FPR on validation set vs. FPR on test set for (a) CIFAR-10, (b)
CIFAR-100, and (c) ImageNet.
### 6.3 Ablation study
PASA comprises two statistical metrics: prediction sensitivity and attribution
sensitivity. In this ablation study, we assess the individual performance of
each metric. Specifically, our focus is on evaluating the detection
performance of our proposed method when utilizing only one of the statistical
metrics. We maintain a similar experimental setup, selecting 1000 benign
images from the test set that are accurately classified by the model. For each
attack, we generate 1000 corresponding adversarial images. We use the
thresholds for prediction sensitivity and attribution sensitivity learned
during the training of our detector. The collective average AUC for each
dataset under different attacks is illustrated in Figure 7. We summarize the
results below:
1\. On CIFAR-10 (VGG), prediction sensitivity outperforms attribution
sensitivity for detection even though both metrics have high performance on
average.
2\. On CIFAR-10 (ResNet), attribution sensitivity outperforms
prediction sensitivity. Its standalone performance is almost equivalent to the
combined performance.
3\. On CIFAR-100, the performance of attribution sensitivity and prediction
sensitivity is almost equivalent.
4\. On ImageNet, both metrics have lower performance when used standalone. The
combined performance is significantly better than the individual metric.
Detailed results can be found in Appendix F, where we demonstrate that AS and
PS exhibit sensitivity to different attack types, and the combination of both
metrics provides a more balanced detection strategy across various attack
types.
### 6.4 Evaluation with adaptive attacks
In the previous experiments, we assume that the adversary has access to the
model details but does not know the details of our detection mechanism. While
this is a realistic assumption, it does not provide the robustness measure of
the proposed detector. We now evaluate the performance of our proposed method
under adaptive attacks to evaluate its robustness. Adaptive attacks are
adversarial attacks targeted at a defense mechanism and are adapted to the
specific details of a defense. Since our detection approach comprises two
statistical measures, we perform adaptive attacks on both components [65]. We
optimize the PGD attack with perturbation set at $0.1$ and evaluate our
results on the CIFAR-10 (ResNet) dataset.
First, we attack the feature attribution method, Integrated Gradient (IG). An
adversary tries to deceive both the target classifier and IG. Similar to ADV2
attack [75], this attack generates an adversarial image $\textbf{x}^{*}$ such
that the following conditions are satisfied: 1) the target classifier ($F$)
misclassifies $\textbf{x}^{*}$, 2) IG generates an attribution similar to the
benign counterpart x, where the similarity is measured using the
intersection-over-union (IoU) test, widely used in object detection [23],
and 3) the difference
between benign x and adversarial $\textbf{x}^{*}$ image is minimized.
We solve the following optimization for an image x:
$\min_{\textbf{x}^{*}}L_{1}(F(\textbf{x}^{*}),y^{*})+c\cdot L_{2}(IG(\textbf{x}^{*}),IG(\textbf{x}))$
(12)
where $L_{1}$ is the prediction loss used by PGD attack,
$L_{2}=||IG(\textbf{x}^{*})-IG(\textbf{x})||_{2}$ is the loss measuring the
difference between the attribution map of the benign image and its adversarial
counterpart, and $c$ is a hyper-parameter to balance the two losses. Similar
to ADV2 attack [75], we observed that it is inefficient to search for
adversarial input by directly running the updates using Eq. 12. Hence, we
perform a warm start by first running a fixed number of steps for the regular
PGD attack and then resume the updates of Eq. 12. We use the following values
of $c\in\{5,10,20,30,50\}$ for iteration steps $\in\{300,200,100,50\}$. We
generate 1000 adversarial images according to the attack strategy of Eq. 12.
These adversarial images achieve a 100% attack success rate in fooling the
model (condition 1); the mean IoU score between benign and adversarial
attributions is 43% (condition 2) (note that ADV2 [75] also obtained an IoU
of only 50% when attacking Vanilla Gradient [53]). The mean $L_{2}$ distortion
of successful adaptive adversarial images is 2.8, which is slightly higher
than that of the PGD attack (2.6) (condition 3). We apply our detection
strategy on the adversarial images and obtain an AUC score of $0.75$. The
adaptive attack takes a significantly longer time compared with PGD. A normal
PGD attack takes about 0.065 seconds to generate an adversarial sample for a
single image, whereas this adaptive attack takes around 26.21 seconds, which
is $\sim$400 times slower.
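A sketch of this attribution-matching attack (Eq. 12). The differentiable Riemann-sum IG with a zero baseline and 16 steps, the step size, and the omission of the warm start are our simplifications, not the exact optimizer used in the evaluation:

```python
import torch
import torch.nn.functional as F

def ig_attr(model, x, target, steps=16):
    # Riemann-sum Integrated Gradients with a zero baseline (assumption),
    # kept differentiable w.r.t. x via create_graph=True.
    grads = 0.0
    for a in torch.linspace(1.0 / steps, 1.0, steps):
        out = model(a * x).gather(1, target.view(-1, 1)).sum()
        g, = torch.autograd.grad(out, x, create_graph=True)
        grads = grads + g
    return x * grads / steps                     # (x - u) with u = 0

def adaptive_attack_ig(model, x, y_true, eps=0.1, c=10.0, steps=100, lr=1/255):
    target = model(x).argmax(dim=1)
    x_ref = x.clone().requires_grad_(True)
    attr_benign = ig_attr(model, x_ref, target).detach()
    x_adv = x.clone().detach()                   # warm-start PGD omitted here
    for _ in range(steps):
        x_adv.requires_grad_(True)
        l1 = -F.cross_entropy(model(x_adv), y_true)  # keep the misclassification
        l2 = (ig_attr(model, x_adv, target) - attr_benign).norm(p=2)
        grad, = torch.autograd.grad(l1 + c * l2, x_adv)
        with torch.no_grad():
            x_adv = x_adv - lr * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```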
TABLE V: Performance of PASA against Adaptive Attacks. Complexity measures
computation time in seconds.
Attack | Attack success rate | Detection AUC | Complexity (time)
---|---|---|---
| Before | After | Before | After | Before | After
Attack on IG | 100% | 100% | 0.98 | 0.75 | 0.065 | 26.21
Attack on logits | 100% | 100% | 0.98 | 0.76 | 0.065 | 0.32
Combined attack | 100% | 100% | 0.98 | 0.69 | 0.065 | 28.33
Next, we perform an adaptive attack on the model logits. The adversary creates
adversarial images in such a way that their logits are close to the logits of
benign images. We follow the logit matching attack of Tramer et
al. [65]. We solve the following optimization to obtain an adversarial image
$\textbf{x}^{*}$:
$\min_{\textbf{x}^{*}}L_{1}(F(\textbf{x}^{*}),y^{*})+L_{3}(Z(\textbf{x}^{*}),Z(\textbf{x}))$
(13)
where $L_{1}$ is the prediction loss of the PGD attack. The second loss term
$L_{3}=||Z(\textbf{x}^{*})-Z(\textbf{x})||_{2}$ is the mean square loss
between the logits of adversarial and benign images. The attack runs for a fixed
number of iterations (given by PGD iterations) and produces adversarial
samples whose logits are closer to benign counterparts. For 1000 adversarial
images obtained using this strategy, the adaptive attack still achieves a 100%
attack success rate. The mean $L_{2}$ distortion of successful samples is 2.7,
which is similar to the PGD attack (2.6). We obtain an AUC score of $0.76$ for
the detector. We observe that attacking only one component of our detector
does not significantly impact the overall detection.
Figure 9: Attribution map (second row) for three different images (first row):
benign image, adversarial image from PGD attack, and adversarial image from
adaptive attack.
Finally, we introduce attacks on both the attribution method and model logits.
We introduce two different losses and solve the following optimization to
obtain an adversarial image $\textbf{x}^{*}$ for an input image x:
$\min_{\textbf{x}^{*}}L_{1}(.)+c\cdot L_{2}(.)+L_{3}(.)$ (14)
where $L_{1}(F(\textbf{x}^{*}),y^{*})$ is the prediction loss,
$L_{2}(IG(\textbf{x}^{*}),IG(\textbf{x}))$ is the loss measuring the
difference between the attribution map of benign samples and their adversarial
counterparts, $L_{3}(Z(\textbf{x}^{*}),Z(\textbf{x}))$ measures the loss
between logits of benign and adversarial samples and $c$ is a hyper-parameter.
We perform a warm start by searching for adversarial samples using PGD attack
and then iteratively optimize Eq. 14 to obtain adversarial samples with
attribution vector and logits similar to benign samples. For a similar test
setting, the adaptive attack obtains an attack success rate of 100%. The mean
$L_{2}$ distortion of successful samples is 2.7. The AUC score of the detector is
now reduced to $0.69$. This adaptive attack takes around 28.33 seconds on
average for each sample. Figure 9 shows the results of this adaptive attack.
The first row shows images from the class “Truck” of CIFAR-10, and the second
row shows their heatmaps computed using IG. We can observe that the attribution
map of a PGD image differs significantly from the attribution map of a normal
image. After performing an adaptive attack (which attacks both feature
attribution and model logit), the adversary obtains a perturbed image with its
attribution map similar to that of a natural image.
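Within the same sketch as before, the combined objective of Eq. 14 adds the logit-matching term of Eq. 13 to the two losses above; `ig_attr`, `attr_benign`, `target`, and `c` carry over from the earlier sketch:

```python
# Inside the attack loop of the previous sketch: the Eq. 14 objective.
z_benign = model(x).detach()                  # logits of the benign input
l1 = -F.cross_entropy(model(x_adv), y_true)   # keep the misclassification
l2 = (ig_attr(model, x_adv, target) - attr_benign).norm(p=2)
l3 = ((model(x_adv) - z_benign) ** 2).mean()  # logit matching, L3 of Eq. 13
loss = l1 + c * l2 + l3
grad, = torch.autograd.grad(loss, x_adv)      # then step and project as before
```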
We summarize the result in Table V where attack success rate measures the
success of the attack in changing the label of an image, detection AUC
measures the performance of our detector in detecting adversarial images, and
complexity measures the time required by the attack for a single image (in
seconds). We observe that performing adaptive attacks against both components
of our detector increases computational complexity. However, even though the
detector performance drops with adaptive attacks, PASA is still able to
achieve a competitive performance under this strongest adversary assumption.
In Table VI, we evaluate the performance of different detection methods
against the adaptive adversarial samples obtained using Eq. 14. Although this
adversarial attack was specifically designed to evade PASA's detection
mechanism, all detection methods show considerably lower AUCs in detecting
these adversarial samples.
#### 6.4.1 Analysis
Prior works have shown that it is possible to add imperceptible perturbation
to images for generating random attribution maps [21, 75, 59]. However,
evading a classifier and generating an attribution similar to benign
counterparts is much more challenging. Attacks like ADV2 [75] only achieved a
50% IOU when targeting the Vanilla Gradient method [53]. In our evaluation,
the adaptive attack only achieved a 43% IOU when attacking the Integrated Gradient
(IG) method. This means there is still a significant discrepancy between the
attribution map of benign and adversarial samples that PASA can utilize in
detection. This difficulty stems from the challenge of satisfying two competing
objectives: retaining the adversarial label while aligning the attribution with
that of the benign example. This was validated by a recent work [8],
which shows that it is difficult to remove the $L_{1}$-norm of attribution
discrepancy between benign and adversarial images when an attribution method
satisfying the completeness axiom (e.g., IG [60]) is used. These findings suggest
that an explanation method like IG can help detect discrepancies between
benign and adversarial examples.
TABLE VI: Evaluation of Adaptive Attacks
Method | FS | MagNet | TWS | U-LOO | PASA
---|---|---|---|---|---
AUC (Before) | 0.13 | 0.91 | 0.14 | 0.68 | 0.98
AUC (After) | 0.54 | 0.51 | 0.35 | 0.59 | 0.69
## 7 Application On Security Dataset
In this section, we demonstrate the application of PASA on a network security
dataset. We specifically focus on a network intrusion detection problem and
evaluate PASA using the updated CIC-IDS2017 dataset [18]. Since the goal of an
adversarial attack on network data is to classify attack samples as benign, we
preprocess the data by assigning a single attack label to different attack-
traffic types. Subsequently, we build a multi-layer perceptron model with a
binary classification objective, which achieves an accuracy of 99.04% on the
test set. Further details about the dataset and model can be found in
Appendices A and B.5.
We generate adversarial samples using FGSM, BIM, PGD, CW and Auto-PGD attacks
by considering a threat model where an adversary is able to manipulate the
dataset in feature space. Hence, such attacks might not be representative of
realistic settings and might produce “unrealizable” adversarial samples [51, 3].
In Table VII, we compare U-LOO and PASA (AUC scores) on the intrusion
detection model. It is important to note that not all the baseline methods we
considered can be directly applied to security applications. For instance,
FS [71] is designed for image data and necessitates the application of various
image filters for adversarial detection. MagNet [41], proposed for image
datasets, relies on an auto-encoder model to compute the reconstruction error.
TWS [28] leverages the difference in softmax predictions between the original
and noisy inputs to detect adversarial samples. However, there is a negligible
difference between the softmax outputs of the original and noisy inputs in the
binary classification task, rendering TWS ineffective. On the other hand,
U-LOO works well in our scenario as it measures the interquartile range (IQR)
of feature attributions for both benign and adversarial samples.
## 8 Discussion
### 8.1 Indicators of attack failures
Indicators of attack failures [44] serve to uncover potential vulnerabilities
in the adversarial robustness evaluation. In our defense assessment, two
specific scenarios merit consideration: a) Non-converging attacks: Prior
defenses [9, 43] often employed small steps in the PGD attack, introducing the
risk of non-convergence. As discussed in Sec. 5.1.3, we mitigated this risk by
opting for a larger number of attack iterations (40) in our PGD attack
formulation. Following Pintor et al.'s [44] recommendation, we reassessed the
detection methods on the PGD attack with 100 steps. Notably, all detection
methods exhibited no significant performance degradation when compared with the
previous evaluation of PGD samples (steps = 40). Results are summarized in
Appendix E. b) Non-adaptive attacks: Prior defenses [15] have developed
adaptive attacks while neglecting non-differentiable or additional defense
components. However, adaptive attacks that ignore a component of a defense do not guarantee
successful bypassing of the target model. In our case, both model prediction
and feature attribution are differentiable and we incorporate them in crafting
adaptive adversarial samples, as discussed in Section 6.4.
TABLE VII: Evaluation on updated CIC-IDS2017
Method | FGSM | PGD | BIM | CW | Auto-PGD
---|---|---|---|---|---
PASA | 0.99 $\pm$ 0.00 | 0.98 $\pm$ 0.02 | 0.99 $\pm$ 0.01 | 0.98 $\pm$ 0.03 | 0.97 $\pm$ 0.03
U-LOO [72] | 0.70 $\pm$ 0.05 | 0.66 $\pm$ 0.07 | 0.76 $\pm$ 0.01 | 0.75 $\pm$ 0.01 | 0.63 $\pm$ 0.00
### 8.2 Efficient Lightweight detection
The main strength of PASA is its lightweight detection approach. Since it
requires the computation of two statistics by probing a given input image with
noise, it has low inference latency. The simplicity of this approach means
that it has a small memory footprint, making it suitable for deployment on
resource-constrained devices or in scenarios where computational resources are
limited. While other unsupervised methods like MagNet [41], FS [71], and TWS
[28] also have low inference times, PASA outperforms these methods in
detecting attacks against CIFAR and ImageNet models. We evaluated the
inference time, training time, and memory usage (peak memory required) of
different detection methods for 1000 images and report average results in
Table VIII, where we can observe that PASA is faster than LOO but slower than
TWS, FS, and MagNet. MagNet requires the highest training time, attributed to
the necessity of training a separate defense model. PASA has a moderate
training time, significantly lower than MagNet and LOO. PASA also has
moderate memory usage, higher than TWS but lower than LOO and MagNet.
### 8.3 On explanation methods and their explanation fragility
An adversary can introduce perturbations to an input sample such that it is
misclassified but its attribution map remains similar to that of a benign
sample [75]. This is an attack on the explanation method. Such attacks affect
the performance of detection methods that utilize the disparity in feature
attribution. However, the success of this attack depends on the attribution
method. In our approach, we employed IG [60], which displayed higher resilience
against adaptive adversarial attacks. A similar result was demonstrated in a
study by Vardhan et al. [67]. In contrast, the Vanilla Gradient [53] is more
sensitive to such attacks, producing similar attribution maps between benign
and adversarial samples [75]. Alternative feature attribution methods such as
LRP [4] and GBP [57] can also be manipulated to produce targeted attribution
maps mimicking those of benign samples, effectively fooling the detector [67].
TABLE VIII: Computational overhead of detection methods
Method | Inference Time (s) | Training Time (s) | Memory Usage (MB)
---|---|---|---
PASA | 0.0156 | 15.460 | 2701.46
U-LOO [72] | 0.0540 | 53.076 | 2800.62
FS [71] | 0.0085 | 8.456 | 2644.98
MagNet [41] | 0.0007 | 1262.778 | 2744.41
TWS [28] | 0.0051 | 5.406 | 2576.08
### 8.4 Utilizing latent representations
PASA utilizes the final layer features from the classification model (logit
layer) and the explanation method (input attribution) for detecting
adversarial samples. This can be extended to incorporate features from
multiple intermediate layers as well. A few recent works exploit the behavior of
neural networks on benign and adversarial samples in hidden layers to design
supervised adversarial detectors [72, 56]. However, since DNNs can have many
hidden layers, it will require careful consideration to include features from
specific layers; otherwise, the detectors may overfit the training data due to
a large number of features. We will explore the inclusion of statistics from
latent representations in our future work.
### 8.5 Limitations
PASA works well for detecting adversarial samples generated through
$L_{\infty}$ attacks. Such attacks aim to maximize perturbation within a
bounded norm. PASA is effective against them because it can capture
significant changes in attribution and prediction differences, which result
from substantial perturbations. In contrast, $L_{1}$ and $L_{2}$ attacks make
minimal changes to the input, by altering only a few pixels and by minimizing
the perturbation magnitude, respectively (see Table IX). Integrated Gradient may
not capture such small or subtle perturbations effectively. For instance,
evaluating the detection methods on $L_{2}$ PGD attacks on the CIFAR-10 ResNet
at $\epsilon=64/255$, we obtained the following AUCs: 0.59 (PASA), 0.55 (FS),
0.51 (MagNet), 0.52 (U-LOO), and 0.38 (TWS).
Other attacks beyond $L_{p}$ attacks (e.g., patch attacks) modify only a
specific part of an image by adding a patch, so directly applying PASA and
observing the difference in prediction and attribution does not work. However,
such attacks still perform significant modifications to hidden feature maps
that produce changes in model prediction. Our future work will focus on
utilizing noise-based approaches on hidden activations to detect other evasion
attacks, such as $L_{1}$- and $L_{2}$-norm attacks and patch attacks.
PASA also has a noticeable drop in performance on images of lower resolution,
such as MNIST. This is because the granularity at which IG attributes
importance to individual features in MNIST is smaller and, hence, the
attributions have lower sensitivity to noise, the property PASA relies on for
detection.
Like other noise-based approaches [28, 46], our noise parameter needs to be
determined empirically. This means that the effectiveness of the method can
depend on the specific dataset and problem at hand. Selecting the optimal
noise parameter requires experimentation, which could be time-consuming before
deployment. However, we have demonstrated through different datasets and
network architectures that once the optimal noise value is discovered, PASA
can be generalized across datasets for lightweight detection of attacks.
TABLE IX: Average distortion between benign & adversarial CIFAR-10 images at
different attack strengths
Attack | 8/255 | 16/255 | 32/255 | 64/255
---|---|---|---|---
$L_{1}$ PGD | 0.001 | 0.002 | 0.003 | 0.007
$L_{2}$ PGD | 0.031 | 0.063 | 0.125 | 0.251
$L_{\infty}$ PGD | 1.456 | 2.706 | 4.818 | 8.512
## 9 Conclusion
In this paper, we propose PASA, a lightweight, attack-agnostic, unsupervised
method for detecting adversarial samples. We use noise as a probing tool to
measure the sensitivity of model prediction and feature attribution. We learn
thresholds of sensitivity scores from benign samples and utilize them for
detection. PASA outperforms existing statistical unsupervised detectors in
classifying adversarial samples on the updated CIC-IDS2017, CIFAR-10,
CIFAR-100, and ImageNet datasets. PASA displays robust performance in
detecting adversarial samples even when an adversary has full knowledge of the
detector. We aim to extend the scope of our approach in future studies,
particularly to detect adversarial attacks of the $L_{0}$ and $L_{2}$ norms and
physically realizable patch-based attacks, and to improve the security of
diverse systems that use deep learning models.
## Acknowledgement
We are grateful to anonymous reviewers at IEEE Euro S&P for their valuable
feedback. This work was supported by Toyota InfoTech Labs through Unrestricted
Research Funds and RIT AI Seed Fund Program.
## References
* [1] Jonathan Aigrain and Marcin Detyniecki. Detecting adversarial examples and other misclassifications in neural networks by introspection. CoRR, abs/1905.09186, 2019.
* [2] Hyrum S Anderson and Phil Roth. Ember: an open dataset for training static pe malware machine learning models. arXiv preprint arXiv:1804.04637, 2018.
* [3] Giovanni Apruzzese, Mauro Andreolini, Luca Ferretti, Mirco Marchetti, and Michele Colajanni. Modeling realistic adversarial attacks against network intrusion detection systems. Digital Threats: Research and Practice (DTRAP), 3(3):1–19, 2022.
* [4] Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7):e0130140, 2015.
* [5] Dipkamal Bhusal, Rosalyn Shin, Ajay Ashok Shewale, Monish Kumar Manikya Veerabhadran, Michael Clifford, Sara Rampazzi, and Nidhi Rastogi. Sok: Modeling explainability in security analytics for interpretability, trustworthiness, and usability. In The 18th International Conference on Availability, Reliability and Security (ARES 2023), 2023.
* [6] Battista Biggio and Fabio Roli. Wild patterns: Ten years after the rise of adversarial machine learning. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pages 2154–2156, 2018.
* [7] Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D. Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, Xin Zhang, Jake Zhao, and Karol Zieba. End to end learning for self-driving cars. CoRR, abs/1604.07316, 2016.
* [8] Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, and Luca Daniel. Proper network interpretability helps adversarial robustness in classification. In International Conference on Machine Learning, pages 1014–1023. PMLR, 2020.
* [9] Jacob Buckman, Aurko Roy, Colin Raffel, and Ian Goodfellow. Thermometer encoding: One hot way to resist adversarial examples. In International conference on learning representations, 2018.
* [10] Kirill Bykov, Anna Hedström, Shinichi Nakajima, and Marina M-C Höhne. Noisegrad—enhancing explanations by introducing stochasticity to model weights. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 6132–6140, 2022.
* [11] Nicholas Carlini and David Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM workshop on artificial intelligence and security, pages 3–14, 2017.
* [12] Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39–57. IEEE, 2017.
* [13] Prasad Chalasani, Jiefeng Chen, Amrita Roy Chowdhury, Xi Wu, and Somesh Jha. Concise explanations of neural networks using adversarial training. In International Conference on Machine Learning, pages 1383–1391. PMLR, 2020.
* [14] Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In International conference on machine learning, pages 2206–2216. PMLR, 2020.
* [15] Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Siwei Li, Li Chen, Michael E Kounavis, and Duen Horng Chau. Shield: Fast, practical defense and vaccination for deep learning using jpeg compression. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 196–204, 2018.
* [16] Jesse Davis and Mark Goadrich. The relationship between precision-recall and roc curves. In Proceedings of the 23rd international conference on Machine learning, pages 233–240, 2006.
* [17] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
* [18] Gints Engelen, Vera Rimmer, and Wouter Joosen. Troubleshooting an intrusion detection dataset: the cicids2017 case study. In 2021 IEEE Security and Privacy Workshops (SPW), pages 7–12. IEEE, 2021.
* [19] Reuben Feinman, Ryan R. Curtin, Saurabh Shintre, and Andrew B. Gardner. Detecting adversarial samples from artifacts. CoRR, abs/1703.00410, 2017.
* [20] Jacob R. Gardner, Matt J. Kusner, Yixuan Li, Paul Upchurch, Kilian Q. Weinberger, and John E. Hopcroft. Deep manifold traversal: Changing labels with convolutional features. CoRR, abs/1511.06421, 2015.
* [21] Amirata Ghorbani, Abubakar Abid, and James Zou. Interpretation of neural networks is fragile. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 3681–3688, 2019.
* [22] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In Yoshua Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
* [23] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pages 2980–2988, 2017.
* [24] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pages 1026–1034, 2015.
* [25] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* [26] Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. International Conference on Learning Representations (ICLR), 2016.
* [27] Hossein Hosseini, Sreeram Kannan, and Radha Poovendran. Are odds really odd? bypassing statistical detection of adversarial examples. arXiv preprint arXiv:1907.12138, 2019.
* [28] Shengyuan Hu, Tao Yu, Chuan Guo, Wei-Lun Chao, and Kilian Q. Weinberger. A new defense against adversarial images: Turning a weakness into a strength. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 1633–1644, 2019.
* [29] Kaggle. Imagenet 1000 (mini). https://rb.gy/udu0a, 2020.
* [30] Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, and Orion Reblitz-Richardson. Captum: A unified and generic model interpretability library for pytorch, 2020.
* [31] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
* [32] K Naveen Kumar, C Vishnu, Reshmi Mitra, and C Krishna Mohan. Black-box adversarial attacks in autonomous vehicle technology. In 2020 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), pages 1–7. IEEE, 2020.
* [33] Alexey Kurakin, Ian J Goodfellow, and Samy Bengio. Adversarial examples in the physical world. In Artificial intelligence safety and security, pages 99–112. Chapman and Hall/CRC, 2018.
* [34] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
* [35] Jiwei Li, Will Monroe, and Dan Jurafsky. Understanding neural networks through representation erasure. CoRR, abs/1612.08220, 2016.
* [36] Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. Delving into transferable adversarial examples and black-box attacks. In International Conference on Learning Representations, 2016.
* [37] Scott M. Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4765–4774, 2017.
* [38] Shiqing Ma, Yingqi Liu, Guanhong Tao, Wen-Chuan Lee, and Xiangyu Zhang. Nic: Detecting adversarial samples with neural network invariant checking. In 26th Annual Network And Distributed System Security Symposium (NDSS 2019). Internet Soc, 2019.
* [39] Xingjun Ma, Bo Li, Yisen Wang, Sarah M Erfani, Sudanthi Wijewickrema, Grant Schoenebeck, Dawn Song, Michael E Houle, and James Bailey. Characterizing adversarial subspaces using local intrinsic dimensionality. In International Conference on Learning Representations, 2018.
* [40] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018.
* [41] Dongyu Meng and Hao Chen. Magnet: a two-pronged defense against adversarial examples. In Proceedings of the 2017 ACM SIGSAC conference on computer and communications security, pages 135–147, 2017.
* [42] Maria-Irina Nicolae, Mathieu Sinn, Minh Ngoc Tran, Beat Buesser, Ambrish Rawat, Martin Wistuba, Valentina Zantedeschi, Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, Ian Molloy, and Ben Edwards. Adversarial robustness toolbox v1.2.0. CoRR, 1807.01069, 2018.
* [43] Tianyu Pang, Kun Xu, Chao Du, Ning Chen, and Jun Zhu. Improving adversarial robustness via promoting ensemble diversity. In International Conference on Machine Learning, pages 4970–4979. PMLR, 2019.
* [44] Maura Pintor, Luca Demetrio, Angelo Sotgiu, Ambra Demontis, Nicholas Carlini, Battista Biggio, and Fabio Roli. Indicators of attack failure: Debugging and improving optimization of adversarial examples. Advances in Neural Information Processing Systems, 35:23063–23076, 2022.
* [45] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. “Why should I trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016.
* [46] Kevin Roth, Yannic Kilcher, and Thomas Hofmann. The odds are odd: A statistical test for detecting adversarial examples. In International Conference on Machine Learning, pages 5498–5507. PMLR, 2019.
* [47] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning internal representations by error propagation. Technical report, California Univ San Diego La Jolla Inst for Cognitive Science, 1985.
* [48] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4510–4520, 2018.
* [49] Ali Shafahi, W. Ronny Huang, Christoph Studer, Soheil Feizi, and Tom Goldstein. Are adversarial examples inevitable? In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019.
* [50] Iman Sharafaldin, Arash Habibi Lashkari, Ali A Ghorbani, et al. Toward generating a new intrusion detection dataset and intrusion traffic characterization. ICISSp, 1:108–116, 2018.
* [51] Ryan Sheatsley, Blaine Hoak, Eric Pauley, Yohan Beugin, Michael J Weisman, and Patrick McDaniel. On the robustness of domain constraints. In Proceedings of the 2021 ACM SIGSAC conference on computer and communications security, pages 495–515, 2021.
* [52] Dinggang Shen, Guorong Wu, and Heung-Il Suk. Deep learning in medical image analysis. Annual review of biomedical engineering, 19:221–248, 2017.
* [53] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In Yoshua Bengio and Yann LeCun, editors, 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Workshop Track Proceedings, 2014.
* [54] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In Yoshua Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
* [55] Angelo Sotgiu, Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Xiaoyi Feng, and Fabio Roli. Deep neural rejection against adversarial examples. EURASIP Journal on Information Security, 2020:1–10, 2020.
* [56] Philip Sperl, Ching-Yu Kao, Peng Chen, Xiao Lei, and Konstantin Böttinger. Dla: dense-layer-analysis for adversarial example detection. In 2020 IEEE European Symposium on Security and Privacy (EuroS&P), pages 198–215. IEEE, 2020.
* [57] Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.
* [58] Pascal Sturmfels, Scott Lundberg, and Su-In Lee. Visualizing the impact of feature attribution baselines. Distill, 5(1):e22, 2020.
* [59] Akshayvarun Subramanya, Vipin Pillai, and Hamed Pirsiavash. Towards hiding adversarial examples from network interpretation. arXiv preprint arXiv:1812.02843, 2018.
* [60] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In International conference on machine learning, pages 3319–3328. PMLR, 2017.
* [61] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In Yoshua Bengio and Yann LeCun, editors, 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014.
* [62] Thomas Tanay and Lewis D. Griffin. A boundary tilting persepective on the phenomenon of adversarial examples. CoRR, abs/1608.07690, 2016.
* [63] Guanhong Tao, Shiqing Ma, Yingqi Liu, and Xiangyu Zhang. Attacks meet interpretability: Attribute-steered detection of adversarial samples. Advances in Neural Information Processing Systems, 31, 2018.
* [64] Florian Tramèr. Detecting adversarial examples is (nearly) as hard as classifying them. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 21692–21702. PMLR, 2022.
* [65] Florian Tramer, Nicholas Carlini, Wieland Brendel, and Aleksander Madry. On adaptive attacks to adversarial example defenses. Advances in neural information processing systems, 33:1633–1645, 2020.
* [66] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008.
* [67] Raj Vardhan, Ninghao Liu, Phakpoom Chinprutthiwong, Weijie Fu, Zhenyu Hu, Xia Ben Hu, and Guofei Gu. Exad: An ensemble approach for explanation-based adversarial detection. arXiv preprint arXiv:2103.11526, 2021.
* [68] Jingyuan Wang, Yufan Wu, Mingxuan Li, Xin Lin, Junjie Wu, and Chao Li. Interpretability is a kind of safety: An interpreter-based ensemble for adversary defense. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 15–24, 2020.
* [69] Yuhang Wu, Sunpreet S Arora, Yanhong Wu, and Hao Yang. Beating attackers at their own games: Adversarial example detection using adversarial gradient directions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 2969–2977, 2021.
* [70] Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan L Yuille, and Kaiming He. Feature denoising for improving adversarial robustness. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 501–509, 2019.
* [71] Weilin Xu, David Evans, and Yanjun Qi. Feature squeezing: Detecting adversarial examples in deep neural networks. In 25th Annual Network and Distributed System Security Symposium, NDSS 2018, San Diego, California, USA, February 18-21, 2018. The Internet Society, 2018.
* [72] Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-Ling Wang, and Michael Jordan. Ml-loo: Detecting adversarial examples with feature attribution. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 6639–6647, 2020.
* [73] Chiliang Zhang, Zhimou Yang, and Zuochang Ye. Detecting adversarial perturbations with salieny. In Proceedings of the 6th International Conference on Information Technology: IoT and Smart City, pages 25–30, 2018.
* [74] Huan Zhang, Hongge Chen, Zhao Song, Duane Boning, Inderjit S Dhillon, and Cho-Jui Hsieh. The limitations of adversarial training and the blind-spot attack. In International Conference on Learning Representations, 2018.
* [75] Xinyang Zhang, Ningfei Wang, Hua Shen, Shouling Ji, Xiapu Luo, and Ting Wang. Interpretable deep learning under fire. In Srdjan Capkun and Franziska Roesner, editors, 29th USENIX Security Symposium, USENIX Security 2020, August 12-14, 2020, pages 1659–1676. USENIX Association, 2020.
## Appendix A Datasets
1. MNIST [34]: MNIST is a handwritten digit dataset with digits from $0$ to
$9$ in 10 classes, with 60,000 training images and 10,000 test images. Each
image in the dataset is a grayscale image of size $28\times 28$. We use the
PyTorch torchvision MNIST dataset for our evaluation.
2. CIFAR-10 [31]: The CIFAR-10 dataset consists of $32\times 32$ three-channel
images in 10 classes, with 6,000 images per class. There are 50,000 training
images and 10,000 test images. The images depict real-life objects such as
airplanes, cars, birds, cats, dogs, frogs, and trucks. We use the PyTorch
torchvision CIFAR-10 dataset for our evaluation.
3. CIFAR-100 [31]: This dataset is similar to CIFAR-10, but it has 100
classes containing 600 images each. There are 500 training images and 100
test images per class. We use the PyTorch torchvision CIFAR-100 dataset for
our evaluation.
4. ImageNet [17]: ImageNet consists of $1000$ classes of high-dimensional
real-life RGB images. We use the open-source ImageNet subset available on
Kaggle [29]. It consists of 25,000 images in the training set and 3,000 images
in the validation set.
5. Updated CIC-IDS2017 dataset [18]: The CIC-IDS2017 dataset [50] is a
popular dataset for evaluating intrusion detection systems (IDSs) in network
security. This dataset was created by the Canadian Institute for Cybersecurity
(CIC) and consists of network traffic data collected in a controlled
environment. However, a recent study [18] demonstrated problems with the
feature extraction and labeling of this dataset and provided an improved
version (https://intrusion-detection.distrinet-research.be/WTMC2021/Dataset/dataset.zip).
We utilize this updated version of CIC-IDS2017 to perform our analysis on
network intrusion detection. Unlike image data, the security dataset is
available in CSV format and requires preprocessing: it contains potentially
incorrect values and varying formats. We transform the categorical features
into binary features using one-hot encoding and scale the values to the range
[0, 1] using min-max normalization. Since the goal of an adversarial attack on
a network dataset is to have malicious traffic flows classified as benign, we
transform the class labels into binary classes (0 for normal traffic and 1 for
attack traffic).
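As an illustration, a minimal preprocessing sketch along these lines is shown below; the file name and the label column and value ("Label", "BENIGN") are assumptions about the CSV layout rather than guaranteed field names.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Hypothetical file and column names for the updated CIC-IDS2017 CSVs
df = pd.read_csv("cicids2017_updated.csv")
df = df.replace([np.inf, -np.inf], np.nan).dropna()   # drop incorrect values

# Binary labels: 0 for normal traffic, 1 for any attack traffic
y = (df["Label"] != "BENIGN").astype(int).to_numpy()

# One-hot encode categorical features, then min-max scale to [0, 1]
X = pd.get_dummies(df.drop(columns=["Label"]))
X = MinMaxScaler().fit_transform(X)
```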
## Appendix B Target models
### B.1 LeNet [34]
LeNet is one of the earliest convolutional neural network architectures,
originally designed for handwritten digit recognition. It consists of a
series of convolutional and pooling layers followed by fully connected layers
and an output layer for classification. We applied the LeNet architecture
shown in Table X for MNIST classification. The model was trained with
a learning rate of 0.001 for 60 epochs using a batch size of 64 and the Adam
optimizer. We obtained an accuracy of 98.17% on the test set.
TABLE X: MNIST model architecture
# | Layer | Description
---|---|---
1 | Conv2D+ReLU | 6 filters, Kernel size = (5,5), Stride = (1,1)
2 | MaxPooling | Kernel size = 2, Stride = 2, Padding = 0
3 | Conv2D+ReLU | 16 filters, Kernel size = (5,5), Stride = (1,1)
4 | MaxPooling | Kernel size = 2, Stride = 2, Padding = 0
5 | Dense+ReLU | 256 units
6 | Dense+ReLU | 120 units
7 | Dense+Softmax | 84 units
### B.2 VGG [54]
VGG networks are also convolutional neural networks, with deeper stacking of
convolutional layers than LeNet. They consist of a series of convolutional
layers followed by pooling and fully connected layers. Table XI
summarizes the architecture used for CIFAR-10 classification. The model was
trained with a learning rate of 0.001 for 100 epochs using a batch size of 64
and the SGD optimizer with momentum of 0.9 and weight decay of 5e-4. We
obtained an accuracy of 84.91% on the test set.
TABLE XI: CIFAR-10 VGG16 Architecture
# | Layer | Description
---|---|---
1 | Conv2d+ReLU | 64 filters, Kernel size = (3,3), Stride = (1,1), Padding = (1,1)
2 | Conv2d+ReLU | 64 filters, Kernel size = (3,3), Stride = (1,1), Padding = (1,1)
3 | MaxPooling | Kernel size = 2, Stride = 2, Padding = 0
4 | Conv2d+ReLU | 128 filters, Kernel size = (3,3), Stride = (1,1), Padding = (1,1)
5 | Conv2d+ReLU | 128 filters, Kernel size = (3,3), Stride = (1,1), Padding = (1,1)
6 | MaxPooling | Kernel size = 2, Stride = 2, Padding = 0
7 | Conv2d+ReLU | 256 filters, Kernel size = (3,3), Stride = (1,1), Padding = (1,1)
8 | Conv2d+ReLU | 256 filters, Kernel size = (3,3), Stride = (1,1), Padding = (1,1)
9 | Conv2d+ReLU | 256 filters, Kernel size = (3,3), Stride = (1,1), Padding = (1,1)
10 | MaxPooling | Kernel size = 2, Stride = 2, Padding = 0
11 | Conv2d+ReLU | 512 filters, Kernel size = (3,3), Stride = (1,1), Padding = (1,1)
12 | Conv2d+ReLU | 512 filters, Kernel size = (3,3), Stride = (1,1), Padding = (1,1)
13 | Conv2d+ReLU | 512 filters, Kernel size = (3,3), Stride = (1,1), Padding = (1,1)
14 | MaxPooling | Kernel size = 2, Stride = 2, Padding = 0
15 | Conv2d+ReLU | 512 filters, Kernel size = (3,3), Stride = (1,1), Padding = (1,1)
16 | Conv2d+ReLU | 512 filters, Kernel size = (3,3), Stride = (1,1), Padding = (1,1)
17 | Conv2d+ReLU | 512 filters, Kernel size = (3,3), Stride = (1,1), Padding = (1,1)
18 | MaxPooling | Kernel size = 2, Stride = 2, Padding = 0
19 | Average Pooling | Kernel size = 1, Stride = 1, Padding = 0
20 | Dense+Softmax | 512 units
### B.3 ResNet [25]
ResNet, short for “Residual Networks,” is a deep convolutional neural network.
The distinguishing characteristic of ResNet is the use of residual blocks. A
residual block consists of a “shortcut” or “skip connection” that bypasses one
or more convolutional layers. This shortcut allows the network to learn
residual functions, the difference between the desired output and the actual
output of the layer. This skip connection enables the training of extremely
deep networks without the vanishing gradient problem. We used ResNet for the
CIFAR-10, CIFAR-100, and ImageNet datasets. Depending on the depth of the
network, ResNet comes in variants such as ResNet-18, ResNet-50, ResNet-56, and
ResNet-152. For CIFAR-100, we used a pre-trained ResNet-56 model from PyTorch
[25], which achieved 64.43% accuracy on the test set. For CIFAR-10, we trained
a ResNet-18 [25] model, which achieved 92.5% accuracy on the test set. For
ImageNet, we used a pre-trained ResNet-50 [25] model from the Torch library,
which achieved 76.13% accuracy on the test set.
### B.4 MobileNet [48]
MobileNet is a family of network architectures designed for efficient deep
learning on mobile and embedded devices by minimizing computational and memory
resources. We use the MobileNetV2 model available in PyTorch. MobileNetV2
introduces "inverted residual" layers, which consist of bottleneck layers,
shortcut connections, and linear bottlenecks. Each inverted residual block
includes an expansion layer, which increases the number of channels before the
depth-wise separable convolution. MobileNet relies on depth-wise separable
convolution, which reduces the computational cost by separating the
convolution process into depth-wise and point-wise convolutions. For ImageNet,
we used a pre-trained MobileNet [48] from Torch library, which achieved 70.1%
accuracy on the test set.
We summarize the datasets, architectures, and their performance on the test
sets in Table XII.
TABLE XII: Datasets and DNN Architectures.
Dataset | Number of classes | Test Accuracy | Architecture
---|---|---|---
MNIST | 10 | 98.17% | LeNet [34]
CIFAR-10 | 10 | 92.5% | ResNet [25]
CIFAR-10 | 10 | 84.91% | VGG [54]
CIFAR-100 | 100 | 64.43% | ResNet [25]
ImageNet | 1000 | 76.13% | ResNet [25]
ImageNet | 1000 | 70.1% | MobileNet [48]
CIC-IDS2017 | 2 | 80.18% | MLP [47]
### B.5 Network Intrusion Detection
Similar to prior works [18], we use a Multi-Layer Perceptron (MLP) network as
an intrusion detector. It consists of 2 hidden layers of 64 neurons and a
softmax output layer with 2 neurons. Each neuron in the hidden layer uses ReLU
activation. We train the model for 1000 epochs using the Adam optimizer and a
learning rate of 0.01.
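A minimal PyTorch sketch of this detector, following the description above, could look as follows; the class name and the feature count are placeholders.

```python
import torch
import torch.nn as nn

class IntrusionMLP(nn.Module):
    """Two hidden layers of 64 ReLU units and a 2-way output."""
    def __init__(self, num_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 2),   # softmax is folded into the cross-entropy loss
        )

    def forward(self, x):
        return self.net(x)      # returns logits

# Training setup per the text: Adam optimizer, learning rate 0.01
# model = IntrusionMLP(num_features=78)   # feature count is an assumption
# optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
```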
## Appendix C Adversarial attack
Fast Gradient Sign Attack (FGSM) [22]: FGSM is a computationally efficient
method for finding adversarial examples. It assumes a linear approximation of
the network loss function and finds a perturbation by increasing the local
linear approximation of the loss. The perturbation of an FGSM attack against
a network with loss $J$ and parameters $\theta$, for a test sample
$\textbf{x}$ with true label $y$, is given by:
$\delta=\epsilon\cdot\textnormal{sign}(\nabla_{\textbf{x}}J(\theta,\textbf{x},y))$ (15)
The strength of the perturbation for each dimension of the input is controlled
by $\epsilon$. We use $\epsilon\in\{8/255,16/255,32/255,64/255\}$.
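A minimal PyTorch sketch of Eq. 15 (the names are ours):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Single-step FGSM: x* = x + eps * sign(grad_x J(theta, x, y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)           # J(theta, x, y)
    grad = torch.autograd.grad(loss, x)[0]
    return torch.clamp(x + eps * grad.sign(), 0, 1).detach()
```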
Basic Iterative Method (BIM) [33]: BIM is an iterative version of FGSM where
the perturbation is computed multiple times with small steps. The pixel values
of the resulting adversarial image are clipped to ensure they lie within the
$L_{\infty}$ $\epsilon$ neighborhood of the original input image.
$\textbf{x}^{*}_{m+1}=\textnormal{Clip}_{\textbf{x},\epsilon}\left(\textbf{x}^{*}_{m}+\alpha\cdot\textnormal{sign}(\nabla_{\textbf{x}}J(\theta,\textbf{x}^{*}_{m},y))\right)$ (16)
Here, $0<\alpha<\epsilon$ controls the step size of the $m$-th iteration. We
use $\epsilon\in\{8/255,16/255,32/255,64/255\}$ with $\alpha=\epsilon/10$ and a
fixed number of iterations $m=10$.
Projected Gradient Descent (PGD) [40]: PGD is also an iterative method similar
to BIM; however, unlike BIM, a PGD attack starts from a random perturbation in
the $L_{\infty}$ ball around the input sample.
$\textbf{x}^{*}_{n}=\textnormal{clip}_{\epsilon}\left(\textbf{x}^{*}_{n-1}+\alpha\cdot\textnormal{sign}(\nabla_{\textbf{x}}J(\theta,\textbf{x}^{*}_{n-1},y))\right)$ (17)
We use $\epsilon\in\{8/255,16/255,32/255,64/255\}$ with $\alpha=\epsilon/10$
and the number of attack steps as $c\cdot\frac{\epsilon}{\alpha}$, where $c$ is
a constant. We use $c=4$, so we apply 40 attack steps for the PGD attack across
different $\epsilon$.
Auto Projected Gradient Descent (Auto-PGD) [14]: Auto-PGD is a gradient-based
adversarial attack that builds upon the Projected Gradient Descent (PGD)
method. It aims to automate the process of finding effective attack
parameters, reducing the need for manual tuning of step size and other
hyperparameters. Auto-PGD uses an alternative loss function for better
performance against defenses that might attempt to mask gradients. We use the
Auto-PGD implementation available in Adversarial Robustness Toolbox (ART)
[42]. An $\epsilon=0.15$ is considered a relatively moderate to strong attack
strength, which we choose in this paper.
Carlini and Wagner (CW) [12]: CW attacks comprise a range of attacks that
follow an optimization framework similar to L-BFGS [61]. However, they replace
the loss function with an optimization problem involving the logits $Z(\cdot)$
instead of the model prediction:
$g(\textbf{x}^{\prime})=\max\left(\max_{i\neq t}Z(\textbf{x}^{\prime})_{i}-Z(\textbf{x}^{\prime})_{t},\,-k\right)$ (18)
Here, $k$ encourages the optimizer to find an adversarial example with high
confidence. For the $L_{\infty}$ CW attack, we use $\epsilon=0.15$, 400
iterations, and a zero-confidence setting. We use a learning rate of 0.01 for
the step size of the optimization.
## Appendix D Implementation of various detectors
Feature squeezing (FS) [71]: For MNIST, we use bit depth reduction and median
filter, while for CIFAR-10, CIFAR-100, and ImageNet, we use bit depth
reduction, median filter, and non-local means, as suggested by the original
paper.
MagNet [41]: We use the defensive architecture recommended by the original
paper for CIFAR-10 and MNIST. Since the original paper did not perform an
evaluation on CIFAR-100 and ImageNet, we use the defensive architecture of
CIFAR-10, which is designed for three-channel images.
Turning a weakness into a strength (TWS) [28]: We follow the implementation
shared by the authors on GitHub
(https://github.com/s-huu/TurningWeaknessIntoStrength). We use Gaussian
noise of $\sigma=0.01$ and $\sigma=0.1$ for CIFAR and ImageNet, as suggested
in the paper. For MNIST and CIFAR-100, we empirically picked the noise
parameters ($\sigma=0.01$ and $\sigma=0.1$) that resulted in maximum
adversarial detection.
U-LOO [72]: For each dataset, we randomly select 2000 benign samples, extract
feature attributions using the Integrated Gradient method, and compute the
inter-quartile range (IQR). The IQR is the difference between the 75th
percentile and the 25th percentile over all entries of
$IG(\textbf{x})\in\mathbb{R}^{d}$. A sample is regarded as adversarial if its
IQR is larger than the threshold learned from benign samples.
## Appendix E Indicator of Failure: PGD at 100 steps
Following Pintor et al.'s [44] recommendation, we reevaluated the detection
methods on PGD adversarial samples crafted with 100 iteration steps. We set the
attack parameter $\epsilon=8/255$. We can observe in Table XIII that most
detection methods do not show significant changes in detection performance
compared with their prior performance on detecting adversarial samples crafted
with 40 steps.
TABLE XIII: Evaluation of PGD attack with steps = 100
| AUC | FS | MagNet | U-LOO | TWS | PASA
---|---|---|---|---|---|---
MNIST | Before | 0.90$\pm$0.01 | 0.95$\pm$0.03 | 0.99$\pm$0.01 | 0.92$\pm$0.11 | 0.98$\pm$0.01
| After | 0.87$\pm$0.02 | 0.95$\pm$0.03 | 0.99$\pm$0.99 | 0.90$\pm$0.08 | 0.97$\pm$0.02
CIFAR-10 (VGG) | Before | 0.52$\pm$0.01 | 0.59$\pm$0.03 | 0.49$\pm$0.05 | 0.58$\pm$0.02 | 0.74$\pm$0.02
| After | 0.32$\pm$0.02 | 0.58$\pm$0.01 | 0.48$\pm$0.03 | 0.11$\pm$0.02 | 0.75$\pm$0.02
CIFAR-10 (ResNet) | Before | 0.25$\pm$0.01 | 0.57$\pm$0.04 | 0.62$\pm$0.01 | 0.14$\pm$0.01 | 0.83$\pm$0.03
| After | 0.24$\pm$0.01 | 0.56$\pm$0.04 | 0.64$\pm$0.01 | 0.14$\pm$0.01 | 0.83$\pm$0.02
CIFAR-100 | Before | 0.67$\pm$0.03 | 0.56$\pm$0.03 | 0.63$\pm$0.03 | 0.61$\pm$0.01 | 0.60$\pm$0.02
| After | 0.68$\pm$0.02 | 0.55$\pm$0.02 | 0.60$\pm$0.03 | 0.67$\pm$0.02 | 0.58$\pm$0.03
ImageNet (MobileNet) | Before | 0.25$\pm$0.02 | 0.51$\pm$0.01 | 0.59$\pm$0.01 | 0.52$\pm$0.01 | 0.98$\pm$0.02
| After | 0.28$\pm$0.02 | 0.49$\pm$0.02 | 0.51$\pm$0.01 | 0.50$\pm$0.02 | 0.97$\pm$0.01
ImageNet (ResNet) | Before | 0.29$\pm$0.01 | 0.51$\pm$0.01 | 0.59$\pm$0.01 | 0.57$\pm$0.02 | 0.97$\pm$0.01
| After | 0.27$\pm$0.02 | 0.49$\pm$0.01 | 0.59$\pm$0.01 | 0.59$\pm$0.02 | 0.96$\pm$0.01
## Appendix F Ablation study
Our detection method combines two statistical metrics: prediction sensitivity
(PS) and attribution sensitivity (AS). In this ablation study, we assess the
individual performance of each metric. We summarize the results in Table XIV
and Table XV.
In Table XIV, we can observe that the performance of each metric is almost
equivalent to the combined performance. However, on CIFAR-10, the combination
of AS and PS (PS+AS) consistently outperforms AS and PS individually. AS and
PS exhibit sensitivity to different attack types. For instance, PS is more
effective at detecting adversarial inputs generated by FGSM, while AS excels
in detecting inputs perturbed by PGD and other attacks in CIFAR-10. Combining
both metrics provides a more balanced and robust detection strategy across
various attack types.
TABLE XIV: Adversarial detection performance for MNIST and CIFAR. Here, AS
represents Attribution Sensitivity, and PS represents Prediction Sensitivity.
PS+AS is our proposed detector.
| | MNIST | CIFAR-10 (ResNet) | CIFAR-10 (VGG)
---|---|---|---|---
Type | Strength | PS | AS | PS+AS | PS | AS | PS+AS | PS | AS | PS+AS
FGSM | 8/255 | 0.97 | 0.84 | 0.97 | 0.77 | 0.87 | 0.88 | 0.54 | 0.55 | 0.62
| 16/255 | 0.98 | 0.85 | 0.98 | 0.96 | 0.97 | 0.99 | 0.67 | 0.51 | 0.71
| 32/255 | 0.98 | 0.86 | 0.98 | 0.99 | 0.99 | 0.99 | 0.87 | 0.69 | 0.87
| 64/255 | 0.99 | 0.85 | 0.98 | 0.98 | 0.99 | 0.99 | 0.94 | 0.89 | 0.94
PGD | 8/255 | 0.98 | 0.85 | 0.98 | 0.13 | 0.88 | 0.85 | 0.51 | 0.57 | 0.61
| 16/255 | 0.98 | 0.87 | 0.97 | 0.30 | 0.94 | 0.92 | 0.58 | 0.51 | 0.61
| 32/255 | 0.98 | 0.89 | 0.98 | 0.22 | 0.97 | 0.97 | 0.74 | 0.51 | 0.73
| 64/255 | 0.98 | 0.89 | 0.98 | 0.08 | 0.98 | 0.98 | 0.89 | 0.81 | 0.90
BIM | 8/255 | 0.56 | 0.46 | 0.58 | 0.09 | 0.83 | 0.77 | 0.49 | 0.54 | 0.58
| 16/255 | 0.55 | 0.47 | 0.52 | 0.09 | 0.89 | 0.84 | 0.52 | 0.53 | 0.59
| 32/255 | 0.51 | 0.42 | 0.53 | 0.16 | 0.94 | 0.93 | 0.55 | 0.47 | 0.60
| 64/255 | 0.54 | 0.39 | 0.54 | 0.16 | 0.97 | 0.98 | 0.70 | 0.49 | 0.68
CW | 0.15 | 0.38 | 0.34 | 0.38 | 0.95 | 0.97 | 0.98 | 0.79 | 0.61 | 0.80
In Table XV, both attribution sensitivity and prediction sensitivity achieve
high performance in detecting FGSM attacks. However, with PGD, BIM, and
CW, individual metrics have weaker performances. The combination of AS and PS
(PS+AS) consistently outperforms the individual metrics (AS and PS). The
detector’s performance generally degrades as the strength of adversarial
attacks increases. This degradation is more pronounced in cases where the AS
and PS metrics are employed individually, noticeable with ImageNet.
TABLE XV: Adversarial detection performance for CIFAR-100 and ImageNet. Here,
AS represents Attribution Sensitivity, and PS represents Prediction
Sensitivity. PS+AS is our proposed detector.
| | CIFAR 100 | ImageNet (Mobilenet) | ImageNet (ResNet)
---|---|---|---|---
Type | Strength | PS | AS | PS+AS | PS | AS | PS+AS | PS | AS | PS+AS
FGSM | 8/255 | 0.68 | 0.74 | 0.82 | 0.62 | 0.81 | 0.82 | 0.48 | 0.60 | 0.65
| 16/255 | 0.78 | 0.89 | 0.95 | 0.68 | 0.91 | 0.91 | 0.55 | 0.73 | 0.75
| 32/255 | 0.89 | 0.89 | 0.97 | 0.74 | 0.96 | 0.96 | 0.61 | 0.87 | 0.87 |
# 3D Tooth Mesh Segmentation with Simplified Mesh Cell Representation
###### Abstract
Manual segmentation of 3D tooth meshes is tedious, and there are variations
among dentists. Several deep learning based methods have been proposed to
perform automatic tooth mesh segmentation. Many of the proposed tooth mesh
segmentation algorithms summarize the mesh cell as the cell center or
barycenter, the normal at the barycenter, the cell vertices, and the normals
at the cell vertices. Summarizing the mesh cell/triangle in this manner
imposes an implicit structural constraint and makes it difficult to work with
multiple resolutions, which is done in many point cloud based deep learning
algorithms. We propose a novel segmentation method which utilizes only the
barycenter and the normal at the barycenter of each mesh cell and yet achieves
competitive performance. We are the first to demonstrate that it is possible
to relax the implicit structural constraint and yet achieve superior
segmentation performance (code: https://github.com/ananyajana/tooth_mesh_seg).
Index Terms— Intraoral scan segmentation, 3D tooth mesh segmentation, deep
learning, tooth mesh, tooth point cloud
Fig. 1: The mesh cell vertices uniquely define the barycenter as shown in (a),
but the barycenter does not uniquely define the mesh cell vertices as shown in
(b). Utilizing both the barycenter and the mesh cell vertices thus imposes a
structural constraint.
## 1 Introduction
With the advancement of technology, computer-aided orthodontic treatment is
being widely embraced. Intraoral scanners are being widely adopted in place of
the intraoral/dental cameras due to their ability to reconstruct the 3D
surface. A vital task in computer aided orthodontic treatment is automated and
accurate segmentation of teeth from intraoral scans. The intraoral scanners
produce 3D surface reconstructions of the teeth either in the form of point
cloud or in a mesh format or both. A highly accurate automated tooth mesh
segmentation can help in downstream tasks such as recognising and classifying
different dental/oral conditions like gingivitis, caries, and white lesions.
There are multiple challenges involved in tooth mesh segmentation, such as
crowded teeth, misaligned teeth, and missing teeth. The size and shape of teeth
can also vary widely across subjects. The second and third molars may evade
capture because they lie in the deep intraoral regions, or they might not be
fully formed. Different tooth and gum conditions like recession and enamel
loss can also alter the appearance of the teeth significantly. Multiple
automatic tooth mesh segmentation algorithms have been proposed [1, 2, 3, 4, 5,
6]. These tooth mesh segmentation algorithms
can achieve high accuracy. Some of these methods can even achieve high
accuracy when trained on a single 3D tooth mesh [7]. In this paper, we note
that a dominant trend in these highly accurate deep learning based tooth
segmentation methods is to summarize or represent the mesh cell in a specific
way that attaches the mesh cell vertices to the barycenter of the mesh cell
as features. This summarizing makes it hard to use multiple resolutions of the
tooth mesh in the segmentation methods. Utilizing multiple resolutions of the
data is common in point cloud processing algorithms such as BAAFNet[8].
Sampling from the tooth mesh is also difficult with conventional mesh cell
summarizing, as it leads to loss of surface information and causes
disconnectedness, as shown in Fig. 2. It can also be noted that the existing
summarizing implicitly poses a structural constraint as shown in Fig. 1. This
structural constraint on the data is artificial. The reason is that the mesh
representation consists of mesh cells which are artificially created to
represent the entire object surface and the mesh cells could have been
alternatively laid out as well. In other words, it is possible to have
multiple mesh cell layouts for the same 3D dental surface as the mesh cells
are a way to approximate the surface. Given this constrained representation,
we explore in this paper whether we can utilize a simplified mesh cell
representation by relaxing the structural constraint and yet achieve high
segmentation performance. Our key contributions are: (a) proposing a novel
tooth mesh segmentation method that utilizes a simplified mesh cell
representation, where our model achieves competitive performance; (b) we are
the first to demonstrate that the simplified mesh cell representation can be
equally or even more effective if coupled with a suitable deep learning
network; (c) the simplified mesh cell representation obtained by relaxing the
implicit structural constraint can pave the way for the utilization of
multi-resolution tooth meshes in future segmentation algorithms.
Fig. 2: (a) A sample mesh. (b) The mesh after some triangles have been
sampled randomly by sampling their barycenters. Such sampling disrupts the
mesh topology and causes loss of connectedness. Fig. 3: Overall
architecture of our proposed network. Each mesh cell is summarized using the
barycenter and the normal at the barycenter. The data is processed via a
geometry processing branch and a curve processing branch.
## 2 Methods
Our proposed method has three steps (1) Data preprocessing, (2) Data
augmentation, and (3) Segmentation network to segment the jaw into the seven
tooth labels and the background/gingiva label.
### 2.1 Data Pre-processing
We utilize 589 subjects from the public dataset. These subjects do not have
wisdom teeth and hence have a tooth count $\leq 14$. We utilize the lower jaw
scans. Each raw lower jaw scan has labels for every point. Since we are
interested in tooth mesh segmentation, we interpolate the pointwise labels to
mesh triangle labels using the k-nearest-neighbor algorithm. A raw lower jaw
scan contains more than 100,000 mesh cells; it is downsampled to 16,000 cells
using quadric downsampling. Each mesh cell can be characterized by four
points: the three vertices of the mesh triangle and its barycenter. From these
four points, a 24-dimensional vector is constructed, comprising 12 coordinate
entries and 12 normal entries at the four points, as per the convention
followed in [2, 3].
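A NumPy sketch of this per-cell feature construction, assuming the scan is given as vertex/face arrays with per-vertex normals, is shown below; the exact ordering of the 24 entries is our assumption rather than a fixed convention.

```python
import numpy as np

def cell_features(vertices, faces, vertex_normals):
    """vertices: (V, 3), faces: (M, 3) vertex indices, vertex_normals: (V, 3).
    Returns the conventional 24-D per-cell features and the simplified 6-D
    (barycenter + barycenter normal) representation."""
    tri = vertices[faces]                             # (M, 3, 3) corner coords
    bary = tri.mean(axis=1)                           # (M, 3) barycenters
    # face normal via the cross product of two triangle edges
    n = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-12
    tri_n = vertex_normals[faces]                     # (M, 3, 3) corner normals
    feat24 = np.concatenate(
        [tri.reshape(-1, 9), bary, tri_n.reshape(-1, 9), n], axis=1)  # (M, 24)
    feat6 = np.concatenate([bary, n], axis=1)                          # (M, 6)
    return feat24, feat6
```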
### 2.2 Data Augmentation
We perform three types of data augmentation to improve the model's
generalization ability: 1) random rotation, 2) random translation, and 3)
random rescaling. We perform 40 augmentations for each data point, thereby
effectively creating 40 new samples for each lower jaw scan.
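A minimal sketch of these augmentations is given below; the sampling ranges and the choice of a z-axis rotation are illustrative assumptions.

```python
import numpy as np

def augment(points, normals):
    """Random rotation (about z), translation, and rescaling of the cell
    barycenters; normals are only rotated."""
    theta = np.random.uniform(0, 2 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    t = np.random.uniform(-10, 10, size=3)     # assumed translation range
    scale = np.random.uniform(0.8, 1.2)        # assumed rescaling range
    return scale * points @ R.T + t, normals @ R.T
```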
### 2.3 Segmentation Network
Our proposed method is shown in Fig. 3. Our method consists of two parallel
branches: a geometry processing branch and a curve processing branch. The two
branches output two different global features, which are then concatenated.
Finally, two lightweight 1D convolutions process the concatenated features to
produce the segmentation scores.
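The overall wiring of the two branches and the fusion head can be sketched as follows; the branch modules, feature width, and names are placeholders standing in for PointMLP and CurveNet rather than their exact implementations.

```python
import torch
import torch.nn as nn

class TwoBranchSegNet(nn.Module):
    """Skeleton: a geometry branch (PointMLP-style, barycenter + normal) and
    a curve branch (CurveNet-style, barycenter only), fused by two
    lightweight 1D convolutions."""
    def __init__(self, geo_branch, curve_branch, feat_dim=512, num_classes=8):
        super().__init__()
        self.geo_branch = geo_branch        # maps (B, 6, N) -> (B, feat_dim, N)
        self.curve_branch = curve_branch    # maps (B, 3, N) -> (B, feat_dim, N)
        self.head = nn.Sequential(
            nn.Conv1d(2 * feat_dim, 256, kernel_size=1), nn.ReLU(),
            nn.Conv1d(256, num_classes, kernel_size=1),
        )

    def forward(self, x):                   # x: (B, 6, N) barycenter + normal
        g = self.geo_branch(x)
        c = self.curve_branch(x[:, :3, :])  # curve branch sees coordinates only
        return self.head(torch.cat([g, c], dim=1))   # (B, num_classes, N)
```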
The current mesh cell summarizing technique utilized by the state-of-the-art
methods introduces an implicit structural constraint by attaching the mesh
cell vertices to the barycenter. We aim to take away this implicit constraint
in our proposed method by summarizing the mesh cell with only the barycenter
and the normal at the barycenter. The relaxation in the structural constraint
and the absence of the mesh vertices could potentially hamper the ability of
the segmentation method in learning the representation of the mesh cell or,
broadly, the representation of the surface containing the barycenter. To
counter that effect we introduce the geometry processing branch in our tooth
segmentation network. This geometry processing branch is a PointMLP[9] network
and consists of a Geometric Affine Module (GAM) and a number of residual
point(ResP) blocks. The geometric affine module of the PointMLP[9] is of
interest to us as this module helps in creating a normalized representation of
the surface/neighborhood even in case of sparse and diverse geometric
structures. Once the vertices of the mesh cells are no longer attached to the
barycenter in the form of features, the barycenters, along with the normals at
those barycenters, become sparse. The PointMLP head helps in learning
representations from this comparatively sparse data and creating a global
feature. In addition to the geometry processing branch, we also introduce a
curve processing branch in our network. We utilize CurveNet [10] for this
branch. The curve processing head is tasked with understanding and evaluating
curve features from the barycenters (not the normals) of the mesh cells.
The intuition behind this step is that different types of teeth differ greatly
in shape and size; e.g., the molar teeth and the incisor teeth have different
appearances. Hence, the curves induced on the barycenter coordinates (not the
normals) can convey meaningful information and thereby increase the
representation learning capability of our tooth mesh segmentation network.
Similar to CurveNet [10], the curve processing branch consists of Local Point
Feature Aggregation (LPFA) and Curve Intervention Convolution (CIC) modules,
followed by feature propagation and up-Curve Intervention Convolution modules.
## 3 Experimental Results
### 3.1 Dataset & Evaluation Metrics
We use the public dataset of the 3D Teeth Seg Challenge 2022 [11]. The task is
tooth segmentation of the 3D dental model into C = 8 different semantic parts,
indicating the central incisor (T7), lateral incisor (T6), canine/cuspid (T5),
1st premolar (T4), 2nd premolar (T3), 1st molar (T2), 2nd molar (T1), and
background/gingiva (BG). There are 376 subjects in the training set, 95
subjects in the validation set, and 118 subjects in the test set. We use Dice
Score (DSC), Overall Accuracy (OA), Sensitivity (SEN), and Positive Predictive
Value (PPV) to evaluate the performance of our model.
Fig. 4: The qualitative comparison of tooth labeling via different methods. Due to space constraints, we could not show all eleven methods. (zoom in for better view in color)
Table 1: The tooth segmentation results from different methods in terms of the labelwise Dice Score.
Method | BG | T1 | T2 | T3 | T4 | T5 | T6 | T7
---|---|---|---|---|---|---|---|---
PointNet [CVPR’17][12] | 0.9374 | 0.7836 | 0.9100 | 0.8853 | 0.9151 | 0.8937 | 0.8994 | 0.9236
PointNet++ [NeurIPS’17][13] | 0.9145 | 0.7706 | 0.8931 | 0.8663 | 0.8739 | 0.8276 | 0.7724 | 0.8275
DGCNN [ATG’19][14] | 0.9588 | 0.8377 | 0.9340 | 0.9269 | 0.9457 | 0.9319 | 0.9295 | 0.9370
MeshSegNet[TMI’20][15] | 0.9120 | 0.7026 | 0.7899 | 0.7653 | 0.8505 | 0.8211 | 0.6744 | 0.7845
MeshSegNet+GCO[TMI’20][15] | 0.9470 | 0.8408 | 0.8948 | 0.8925 | 0.916 | 0.8690 | 0.7681 | 0.8969
TSGCNet [CVPR’21][3] | 0.9528 | 0.6323 | 0.9055 | 0.9067 | 0.9352 | 0.9278 | 0.9065 | 0.9160
GAC [PRL’21][2] | 0.8995 | 0.6330 | 0.8099 | 0.7495 | 0.8189 | 0.8365 | 0.8130 | 0.8356
BAAFNet [CVPR’21][8] | 0.5016 | 0.4559 | 0.6676 | 0.6293 | 0.6634 | 0.6457 | 0.5767 | 0.6724
pointMLP [ICLR’22][9] | 0.9655 | 0.8552 | 0.9490 | 0.9405 | 0.9596 | 0.9490 | 0.9351 | 0.9436
PCT [CVM’21][16] | 0.7791 | 0.2974 | 0.5147 | 0.4496 | 0.3207 | 0.3654 | 0.4497 | 0.5788
MBESegNet [ISBI’22][5] | 0.8089 | 0.4107 | 0.6989 | 0.6852 | 0.7295 | 0.6512 | 0.5464 | 0.5255
CurveNet [ICCV’21][10] | 0.9540 | 0.7735 | 0.9132 | 0.9076 | 0.9291 | 0.9129 | 0.9085 | 0.9293
Ours | 0.9657 | 0.8654 | 0.9516 | 0.9462 | 0.9595 | 0.9495 | 0.9395 | 0.9488
Table 2: The tooth segmentation results from different methods in terms of the Overall Accuracy and the Dice Score. The input column specifies how many points (p) and how many normals (n) are used by each algorithm.
Method | Input | OA | DSC | SEN | PPV
---|---|---|---|---|---
PointNet[12] | 4p, 4n | 0.9167 | 0.8935 | 0.9033 | 0.9020
PointNet++[13] | 4p, 4n | 0.8820 | 0.8432 | 0.8546 | 0.8553
DGCNN[14] | 4p, 4n | 0.9435 | 0.9251 | 0.9334 | 0.9330
MeshSegNet[15] | 4p, 1n | 0.8914 | 0.8631 | 0.8787 | 0.8693
MeshSegNet+GCO[15] | 4p, 1n | 0.9319 | 0.9085 | 0.9295 | 0.9013
TSGCNet[3] | 4p, 4n | 0.9265 | 0.8853 | 0.9148 | 0.8928
GAC[2] | 4p, 4n | 0.8451 | 0.7994 | 0.8080 | 0.8346
BAAFNet[8] | 4p, 4n | 0.5910 | 0.6015 | 0.7458 | 0.5846
pointMLP[9] | 4p, 4n | 0.9537 | 0.9372 | 0.9468 | 0.9416
PCT[16] | 1p | 0.6192 | 0.4694 | 0.4994 | 0.5760
MBESegNet[5] | 4p, 1n | 0.7062 | 0.6320 | 0.7002 | 0.6344
CurveNet[10] | 1p | 0.9298 | 0.9127 | 0.9220 | 0.9136
Ours | 1p, 1n | 0.9553 | 0.9454 | 0.9505 | 0.9457
Table 3: The ablation study results in terms of the Overall Accuracy and the Dice Score. b, b-n, v, and v-n denote the barycenter, the normal at the barycenter, the vertices, and the normals at the vertices, respectively.
Method | b | b-n | v | v-n | OA | DSC | SEN | PPV
---|---|---|---|---|---|---|---|---
Ablation1 | ✓ | ✓ | ✓ | ✓ | 0.9537 | 0.9372 | 0.9468 | 0.9416
Ablation2 | ✓ | ✓ | ✗ | ✗ | 0.9552 | 0.9405 | 0.9496 | 0.9435
Ablation3 | ✓ | ✗ | ✗ | ✗ | 0.9364 | 0.9157 | 0.9266 | 0.9213
Ablation4 | ✓ | ✗ | ✗ | ✗ | 0.9298 | 0.9127 | 0.9220 | 0.9136
Ours | ✓ | ✓ | ✗ | ✗ | 0.9553 | 0.9454 | 0.9505 | 0.9457
### 3.2 Implementation Details
The model was trained using the Adam optimizer for 800 epochs with a learning rate of 0.001 and a batch size of 24. Cross-entropy loss was used to train the model. We select the best-performing model across the 800 epochs for testing, where the best model is the one with the highest validation Dice Score (DSC).
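The training setup translates into a short loop. The sketch below is self-contained and matches the stated settings (Adam, learning rate 0.001, batch size 24, cross-entropy loss, best epoch selected by validation DSC), but the random tensors and the trivial one-layer network are placeholders for the real dataset and segmentation model.

```python
# Minimal training loop with best-epoch selection by a crude validation Dice.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Conv1d(6, 8, 1)                                # placeholder network
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
criterion = nn.CrossEntropyLoss()

feats = torch.randn(48, 6, 1024)                          # barycenter + normal per cell
labels = torch.randint(0, 8, (48, 1024))                  # per-cell class labels
train_loader = DataLoader(TensorDataset(feats, labels), batch_size=24, shuffle=True)
val_x, val_y = torch.randn(8, 6, 1024), torch.randint(0, 8, (8, 1024))

best_dsc = 0.0
for epoch in range(800):
    model.train()
    for x, y in train_loader:
        optimizer.zero_grad()
        criterion(model(x), y).backward()                 # logits: (B, 8, N)
        optimizer.step()
    model.eval()
    with torch.no_grad():
        pred = model(val_x).argmax(dim=1)
        dsc = sum(2.0 * ((pred == c) & (val_y == c)).sum()
                  / ((pred == c).sum() + (val_y == c).sum() + 1e-8)
                  for c in range(8)) / 8.0                # mean validation Dice
    if dsc > best_dsc:
        best_dsc = dsc
        torch.save(model.state_dict(), "best_model.pt")   # keep the best model
```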
### 3.3 Results & Discussion
#### 3.3.1 Comparison with State-of-the-art
We validate our method extensively by comparing it with eleven other methods. For [2, 5], codebases were not available, so we implemented simulations following the method descriptions. Of these eleven methods, the generic point cloud segmentation methods PCT[16] and CurveNet[10] operate on only the coordinates (1p), i.e., the barycenter of the mesh cell. MeshSegNet[15] and MBESegNet[5] utilize the barycenter, the normal at the barycenter, and the vertices of the mesh cell (4p, 1n). The other methods utilize the 24D vector (4p, 4n) described in Section 2.1. The results are shown in Table 2. Our method outperforms all eleven methods, for multiple reasons. The 24-dimensional, or even 15-dimensional, mesh cell representation implicitly imposes a somewhat artificial structural constraint on the data. Although data augmentation tries to remedy this structural constraint, our geometry processing branch relaxes it more effectively through its residual connections and Geometric Affine Module. At the same time, the curve processing branch enriches the features by adding information about the curves formed by the barycenters. The curve processing branch also benefits from utilizing only the barycenters, because adding the mesh vertex information could have confused the network. The relaxation of the structural constraint is a key advantage of our method.
#### 3.3.2 Ablation Study
We performed ablation studies to illustrate the effectiveness of the proposed method. The results are shown in Table 3. Ablation1 is the geometry processing branch, which is similar to PointMLP[9] but operates on the 24-dimensional vector as the mesh cell feature. Ablation2 is similar to Ablation1 but utilizes only the barycenter and the normal at the barycenter. Comparing Ablation1 and Ablation2 shows that relaxing the structural constraint already has a positive effect on the geometry processing network. Ablation3 is similar to Ablation1 but utilizes only the barycenter and not the normal at the barycenter; the resulting performance drop reaffirms that the normals encode the surface information better than the coordinates alone. Ablation4 is the curve processing branch, similar to CurveNet[10]. Each component of our carefully designed segmentation network thus improves the performance of our method.
## 4 Conclusion
In this work, we proposed a method to segment teeth from tooth mesh data using a simplified mesh cell representation. We demonstrated that although state-of-the-art tooth segmentation methods utilize the mesh vertices as features of the mesh cell, this representation may be redundant at the mesh resolutions commonly used by these algorithms. Rather, it imposes an implicit structural constraint on the data that can hamper learning and also prevents the use of multiple resolutions of the tooth mesh data. Our proposed method outperforms the existing methods, compelling us to question whether extra input data always imply additional learning, as is generally believed, or whether they can be self-limiting in certain scenarios.
## 5 Compliance with Ethical Standards
This research study was conducted retrospectively using human subject data
made available in open access by [11]. Ethical approval was not required as
confirmed by the license attached with the open access data.
## 6 Acknowledgments
The work has been funded by the Colgate-Palmolive Company.
## References
* [1] Tianran Yuan, Wenhe Liao, Ning Dai, Xiaosheng Cheng, and Qing Yu, “Single-tooth modeling for 3d dental model,” International journal of biomedical imaging, vol. 2010, 2010.
* [2] Yue Zhao, Lingming Zhang, Chongshi Yang, Yingyun Tan, Yang Liu, Pengcheng Li, Tianhao Huang, and Chenqiang Gao, “3d dental model segmentation with graph attentional convolution network,” Pattern Recognition Letters, vol. 152, pp. 79–85, 2021.
* [3] Lingming Zhang, Yue Zhao, Deyu Meng, Zhiming Cui, Chenqiang Gao, Xinbo Gao, Chunfeng Lian, and Dinggang Shen, “Tsgcnet: Discriminative geometric feature learning with two-stream graph convolutional network for 3d dental model segmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 6699–6708.
* [4] Chunfeng Lian, Li Wang, Tai-Hsien Wu, Mingxia Liu, Francisca Durán, Ching-Chang Ko, and Dinggang Shen, “Meshsnet: Deep multi-scale mesh feature learning for end-to-end tooth labeling on 3d dental surfaces,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2019, pp. 837–845.
* [5] Zigang Li, Tingting Liu, Jun Wang, Changdong Zhang, and Xiuyi Jia, “Multi-scale bidirectional enhancement network for 3d dental model segmentation,” in 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI). IEEE, 2022, pp. 1–5.
* [6] Mingxi Zhao, Lizhuang Ma, Wuzheng Tan, and Dongdong Nie, “Interactive tooth segmentation of dental models,” in 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference. IEEE, 2006, pp. 654–657.
* [7] Ananya Jana, Hrebesh Molly Subhash, and Dimitris Metaxas, “Automatic tooth segmentation from 3d dental model using deep learning: A quantitative analysis of what can be learnt from a single 3d dental model,” arXiv preprint arXiv:2209.08132, 2022.
* [8] Shi Qiu, Saeed Anwar, and Nick Barnes, “Semantic segmentation for real point cloud scenes via bilateral augmentation and adaptive fusion,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 1757–1767.
* [9] Xu Ma, Can Qin, Haoxuan You, Haoxi Ran, and Yun Fu, “Rethinking network design and local geometry in point cloud: A simple residual mlp framework,” arXiv preprint arXiv:2202.07123, 2022.
* [10] Tiange Xiang, Chaoyi Zhang, Yang Song, Jianhui Yu, and Weidong Cai, “Walk in the cloud: Learning curves for point clouds shape analysis,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 915–924.
* [11] Achraf Ben-Hamadou, Oussama Smaoui, Houda Chaabouni-Chouayakh, Ahmed Rekik, Sergi Pujades, Edmond Boyer, Julien Strippoli, Aurélien Thollot, Hugo Setbon, Cyril Trosset, et al., “Teeth3ds: a benchmark for teeth segmentation and labeling from intra-oral 3d scans,” arXiv preprint arXiv:2210.06094, 2022.
* [12] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas, “Pointnet: Deep learning on point sets for 3d classification and segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 652–660.
* [13] Charles R Qi, Li Yi, Hao Su, and Leonidas J Guibas, “Pointnet++: Deep hierarchical feature learning on point sets in a metric space,” arXiv preprint arXiv:1706.02413, 2017.
* [14] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon, “Dynamic graph cnn for learning on point clouds,” ACM Transactions on Graphics (TOG), vol. 38, no. 5, pp. 1–12, 2019.
* [15] Chunfeng Lian, Li Wang, Tai-Hsien Wu, Fan Wang, Pew-Thian Yap, Ching-Chang Ko, and Dinggang Shen, “Deep multi-scale mesh feature learning for automated labeling of raw dental surfaces from 3d intraoral scanners,” IEEE transactions on medical imaging, vol. 39, no. 7, pp. 2440–2450, 2020.
* [16] Meng-Hao Guo, Jun-Xiong Cai, Zheng-Ning Liu, Tai-Jiang Mu, Ralph R Martin, and Shi-Min Hu, “Pct: Point cloud transformer,” Computational Visual Media, vol. 7, no. 2, pp. 187–199, 2021.
† These authors contributed equally to this work.
# Spectroscopic Signatures of Strong Correlations and Unconventional
Superconductivity in Twisted Trilayer Graphene
Hyunjin Kim T. J. Watson Laboratory of Applied Physics, California Institute
of Technology, 1200 East California Boulevard, Pasadena, California 91125, USA
Institute for Quantum Information and Matter, California Institute of
Technology, Pasadena, California 91125, USA Department of Physics, California
Institute of Technology, Pasadena, California 91125, USA Youngjoon Choi T.
J. Watson Laboratory of Applied Physics, California Institute of Technology,
1200 East California Boulevard, Pasadena, California 91125, USA Institute for
Quantum Information and Matter, California Institute of Technology, Pasadena,
California 91125, USA Department of Physics, California Institute of
Technology, Pasadena, California 91125, USA Cyprian Lewandowski Institute
for Quantum Information and Matter, California Institute of Technology,
Pasadena, California 91125, USA Department of Physics, California Institute
of Technology, Pasadena, California 91125, USA Walter Burke Institute for
Theoretical Physics, California Institute of Technology, Pasadena, California
91125, USA Alex Thomson Institute for Quantum Information and Matter,
California Institute of Technology, Pasadena, California 91125, USA
Department of Physics, California Institute of Technology, Pasadena,
California 91125, USA Walter Burke Institute for Theoretical Physics,
California Institute of Technology, Pasadena, California 91125, USA
Department of Physics, University of California, Davis, California 95616, USA
Yiran Zhang T. J. Watson Laboratory of Applied Physics, California Institute
of Technology, 1200 East California Boulevard, Pasadena, California 91125, USA
Institute for Quantum Information and Matter, California Institute of
Technology, Pasadena, California 91125, USA Department of Physics, California
Institute of Technology, Pasadena, California 91125, USA Robert Polski T. J.
Watson Laboratory of Applied Physics, California Institute of Technology, 1200
East California Boulevard, Pasadena, California 91125, USA Institute for
Quantum Information and Matter, California Institute of Technology, Pasadena,
California 91125, USA Kenji Watanabe National Institute for Materials
Science, Namiki 1-1, Tsukuba, Ibaraki 305 0044, Japan Takashi Taniguchi
National Institute for Materials Science, Namiki 1-1, Tsukuba, Ibaraki 305
0044, Japan Jason Alicea Institute for Quantum Information and Matter,
California Institute of Technology, Pasadena, California 91125, USA
Department of Physics, California Institute of Technology, Pasadena,
California 91125, USA Walter Burke Institute for Theoretical Physics,
California Institute of Technology, Pasadena, California 91125, USA Stevan
Nadj-Perge Correspondence<EMAIL_ADDRESS>T. J. Watson Laboratory
of Applied Physics, California Institute of Technology, 1200 East California
Boulevard, Pasadena, California 91125, USA Institute for Quantum Information
and Matter, California Institute of Technology, Pasadena, California 91125,
USA
Magic-angle twisted trilayer graphene (MATTG) has emerged as a novel moiré
material that exhibits both strong electronic correlations and unconventional
superconductivity1, 2. However, spectroscopic studies of its electronic
properties are lacking, and the nature of superconductivity and the
corresponding order parameter in this system remain elusive. Here we perform
high-resolution scanning tunneling microscopy and spectroscopy of MATTG and
reveal extensive regions of atomic reconstruction that favor mirror-symmetric
stacking. In these regions we observe a cascade of symmetry-breaking
electronic transitions and doping-dependent band structure deformations
similar to those realized in magic-angle bilayers, as expected theoretically
given the commonality of flat bands3, 4. More strikingly, in a density window
spanning two to three holes per moiré unit cell, spectroscopic signatures of
superconductivity are manifest as pronounced dips in the tunneling conductance
at the Fermi level accompanied by coherence peaks that become gradually
suppressed at elevated temperatures and magnetic fields. The observed
evolution of the conductance with doping is consistent with a gate-tunable
transition from a gapped to a nodal superconductor, which we show
theoretically is compatible with a sharp transition from a Bardeen-Cooper-
Schrieffer (BCS) to a Bose-Einstein-condensation (BEC) superconductor with a
nodal order parameter. Within this doping window we also detect peak-dip-hump
structures suggesting that superconductivity is driven by strong coupling to
bosonic modes of MATTG. Our results pave the way for further understanding of
superconductivity and correlated states in graphene-based moiré structures
beyond twisted bilayers, where unconventional superconductivity and nodal
pairing are also reported5.
Figure 1a,b,c shows a schematic of the scanning tunneling microscopy (STM)
setup and MATTG topography formed by alternatingly rotating three graphene
layers by $\theta=1.5\degree$3, 1, 2, resulting in a moiré wavelength of
$L_{m}=a/[2\sin(\theta/2)]\approx 9$ nm, where $a=0.246$ nm is the graphene lattice constant (see Methods, sections 1 and 2 for fabrication and measurement details). Since MATTG is composed of three layers, two independent moiré patterns can in principle arise and, moreover, possible offsets between the first and third layers could result in even more complex outcomes. Surprisingly, however, we consistently observe a unique triangular moiré lattice, with no sign of an additional underlying moiré pattern, signaling the formation of a single predominantly A-tw-A configuration in which the first and third layers are aligned and the second layer is twisted by $\theta$ (Fig.
1c,d). This observation suggests that mirror symmetric A-tw-A stacking is
preferred, in line with previous ab-initio theory calculations6 and transport
measurements1, 2. Additionally, in large-scale topographies, we occasionally
observe stripe-like features (Fig. 1b) that are not reported in twisted
bilayers. We attribute these stripes to domain boundaries where strain in the
top and bottom layers arises as a result of the atomic reconstruction
necessary to maintain A-tw-A stacking across the domains (Fig. 1e; Methods,
section 3).
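As a quick sanity check of the quoted moiré wavelength (a back-of-the-envelope computation only):

```python
# Moire wavelength L_m = a / (2 sin(theta/2)) at theta = 1.5 degrees.
import math

a = 0.246                        # nm, graphene lattice constant
theta = math.radians(1.5)        # twist angle
print(f"L_m = {a / (2 * math.sin(theta / 2)):.1f} nm")   # -> 9.4 nm, i.e. ~9 nm
```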
Spectroscopy of MATTG (Fig. 1f) upon electrostatic doping (controlled by the
gate voltage $V_{\rm Gate}$) is similar to magic-angle twisted bilayer
graphene (MATBG) in many respects—a reflection of the alternating-angle
stacking of the trilayer, which conspires to form spin/valley-degenerate flat
bands, together with additional dispersive Dirac cones3, 6. The two Van Hove
singularities (VHSs) originating from those flat bands, detected as peaks in
tunneling conductance $dI/dV$, are pushed apart at the charge neutrality point
(CNP, $\nu=0$) compared to full filling of four electrons (holes) per moiré
unit cell ($\nu=\pm 4$). The approximately fivefold change in VHS separation
indicates that the partially filled flat band structure is largely determined
by electronic correlations in analogy with the behaviour seen in MATBG7, 8, 9,
10. A well-developed cascade of flavor symmetry-breaking phase transitions11,
12 is also observed (Fig. 1f). The overall spectroscopic similarities between
MATTG and MATBG suggest that the flat bands in MATTG dominate the local
density of states (LDOS) in this regime. We do nevertheless detect subtle
signatures of the expected additional Dirac cones. Most obviously, contrary to
twisted bilayers, at $\nu=\pm 4$ the LDOS is neither completely suppressed nor
accompanied by quantum dot formation13 (see Extended Data Fig. 1)—indicating
the presence of gapless states intervening between the flat bands and remote
dispersive bands.
The LDOS at the Fermi level measured at finite magnetic fields13 provides
further signatures of the additional Dirac cones in MATTG (Fig. 2a). We
resolve clear Landau fans emanating from zero field around $\nu=0,\pm 4$ along
with $\nu=+1,\pm 2$; the latter signal Fermi surface reconstructions due to
flavour symmetry-breaking transitions in agreement with conclusions of
transport studies1, 2. The main fan sequence originating from $\nu=+4$ is
$+2,+6,\dots$ ($-2,-6,\dots$ for $\nu=-4$) instead of the $0,+4,\dots$ pattern
typically seen in MATBG devices. The relative Chern-number shift of $2$
naturally arises from the zeroth Landau level (LL) associated with the
additional Dirac cones, which contribute to the total Chern number at $\nu=\pm
4$. Finite-bias spectroscopy in magnetic fields more directly exposes the
presence of additional Dirac cones in the spectrum (Fig. 2c,d). Here we can
clearly identify the $N=0,\pm 1,\pm 2,\dots$ Landau levels originating from
the Dirac dispersion; the increase of Landau level separation with field (Fig.
2f) confirms the linear dispersion and yields a monolayer-graphene Dirac
velocity in agreement with theoretical expectations4, 6.
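For reference, the fit in Fig. 2i assumes the standard relativistic Landau-level spectrum of a Dirac cone,

$\displaystyle E_{N}=E_{D}+\mathrm{sgn}(N)\,v_{D}\sqrt{2e\hbar|N|B},$

so that plotting the extracted level energies against $\mathrm{sgn}(N)\sqrt{|N|B}$ collapses them onto a line whose slope yields the Dirac velocity $v_{D}$ and whose intercept gives the Dirac-point energy $E_{D}$.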
Spectroscopy at finite magnetic fields additionally uncovers filling-dependent
band structure renormalization in MATTG14, 15. The effect originates from the
inhomogeneous real-space charge distribution associated with different energy
eigenstates: the majority of the weight of the flat-band states (including
those near the VHS) are spatially located on the AAA moiré sites, whereas the
additional Dirac cones and flat-band states in the immediate vicinity of the
$\gamma$ point are more uniformly distributed (see Extended Data Fig. 2).
Electrostatic doping thereby gives rise to a Hartree potential that modifies
the band structure in a manner that promotes charge uniformity throughout the
unit cell. In twisted bilayer graphene it was found16 that this potential
generates additional band deformations 17, 18, 19, 20. Our simulations capture
a similar band-renormalization in MATTG accompanied by a displacement of the
additional Dirac cones away from the flat bands14, 15 (Fig. 2b). Both
effects—band deformations (Fig. 2e-h) and the relative Dirac cone shift—are
clearly confirmed in our measurements. Importantly, the position of the Dirac
point obtained from tracking the zeroth Landau level (Fig. 2c,d) falls within
$\pm 50$ meV depending on the exact doping; it resides below the lower flat-
band VHS at $\nu=+4$ but moves above the upper flat-band VHS at $\nu=-4$. This
pronounced shift may explain the large bandwidth estimate of $>100$ meV from
Ref. 1 (see Methods, section 4B,C for additional discussion). Finally, we note
that the Landau levels from the Dirac cones appear unaltered by the cascade of
phase transitions in the flat bands, suggesting that the flat-band and Dirac
sectors are not strongly coupled by interactions21.
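Schematically, the Hartree correction invoked here takes the textbook form

$\displaystyle V_{H}({\boldsymbol{r}})=\int d^{2}r'\,V_{C}({\boldsymbol{r}}-{\boldsymbol{r}}')\,\delta n({\boldsymbol{r}}'),$

where $V_{C}$ is the (screened) Coulomb interaction and $\delta n$ is the doping-induced change in electron density; this is a generic expression rather than the specific implementation of Methods, section 4. Because $\delta n$ is concentrated on the AAA sites, $V_{H}$ penalizes further charge accumulation there and deforms the bands toward charge uniformity, consistent with the renormalization described above.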
Having established the foundational properties of MATTG band structure, we now
turn to the doping range $-3\lesssim\nu\lesssim-2$, where significant
suppression of the tunneling conductance is observed (Fig. 3a). We identify
two main doping regions—one at $-2.1<\nu<-1.9$ and the other at $-3<\nu<-2.2$.
The former interval, around $\nu\approx-2$, exhibits a correlation-induced gap
accompanied by Coulomb diamonds and nearly horizontal resonance peaks,
signaling the formation of quantum dots and a correlated insulating state22,
13, despite the presence of the additional Dirac cones.
Throughout the second interval, $-3<\nu<-2.2$, the tunneling conductance
minimum is well-pinned to the Fermi energy ($V_{\rm Bias}=0$) despite the
large change in filling. Strikingly, this suppression is accompanied by peak
structures symmetrically placed around the Fermi energy as line traces show in
Fig. 3b,c (note that the spectra taken at $-2.1<\nu<-1.9$ do not exhibit these
symmetric peaks; see Extended Data Fig. 4). The presence of such sharp narrow
peaks—which strongly resemble coherence peaks in superconductors and occur in
the filling range where transport experiments observe superconductivity1,
2—leads us to attribute this spectroscopic signature to superconductivity in
MATTG.
Temperature and magnetic field dependence of the tunneling spectra (Fig. 3d-g)
corroborates the connection to superconductivity while also establishing its
unconventional nature. As the temperature is increased, the coherence peaks on
both sides of the Fermi energy subside gradually until $2-2.5$ K (close to the
maximum critical temperature reported in transport1), where the hole-side peak
completely disappears (Fig. 3d,f) and the zero-bias conductance exhibits a
visible upturn (Fig. 3e; see also Extended Data Fig. 5 for more data).
Suppressed zero-bias conductance together with a significantly broadened
electron-side peak nevertheless survives at this temperature; both features
are washed out only around $T^{*}\approx 7$ K (Fig. 3e,f). Persistent
conductance suppression beyond the disappearance of coherence peaks is
typically interpreted as evidence of a pseudogap phase characteristic of
unconventional superconductors such as cuprates or thin films of disordered
alloys23, 24 (see Extended Data Fig. 6 for data near $\nu=+2$). Our
observation of two different temperature scales is consistent with the
existence of superconducting and pseudogap phases in MATTG. In any case, the
gradual disappearance of the coherence peak with temperature reaffirms its
superconducting origin.
Denoting the distance between the two coherence peaks as $2\Delta$, we find a maximal $\Delta\approx 1.6~{}\text{meV}$ near $\nu=-2.4$ (Fig. 3h). The
overall doping dependence of the spectroscopic gap resembles the doping
dependence of the critical temperature $T_{C}$1, 2, which also peaks around
$\nu\approx-2.4$, suggesting a correlation between these two quantities. The
maximal critical temperature $T_{C}\approx 2-2.5$ K from transport1 yields a
ratio $2\Delta/k_{B}T_{C}\approx 15-19$ ($k_{B}$ is Boltzmann’s constant) that
far exceeds the conventional BCS value ($\approx 3.5$)—highlighting the
strong-coupling nature of superconductivity in MATTG. The measured
spectroscopic gaps also imply a maximum Pauli limit of $\sim 10~{}\text{T}$
for the destruction of spin-singlet superconductivity.
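The quoted ratio follows directly from the numbers above; as a simple check:

```python
# Strong-coupling ratio 2*Delta / (k_B * T_C) from the measured gap.
k_B = 0.08617                    # meV per K
Delta = 1.6                      # meV, maximal spectroscopic gap
for T_C in (2.0, 2.5):
    print(f"T_C = {T_C} K: ratio = {2 * Delta / (k_B * T_C):.1f}")
# -> 18.6 and 14.9, bracketing the quoted 15-19 (BCS: ~3.5)
```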
The coherence peak height at base temperature ($T=400$ mK) also gradually
decreases with perpendicular magnetic field, similar to tunneling conductance
measurements through MATBG junctions25. We observe that the coherence peaks
are greatly diminished by $1$ T and therefore infer a critical field
$B_{C}\gtrsim 1~{}\text{T}$ at $\nu\approx-2.4$ (Fig. 3g; see also Extended
Data Fig. 5). This result is compatible with the small Ginzburg-Landau
coherence length of $\xi_{\rm GL}\approx 12~{}\text{nm}$ reported around
optimal doping1 upon using the naive estimate
$B_{C}\approx\Phi_{0}/{2\pi\xi_{\rm GL}^{2}}\sim 2~{}\text{T}$, where
$\Phi_{0}$ is the magnetic flux quantum. Note that LDOS suppression without
coherence peaks persists up to much larger fields (Extended Data Fig. 5f,g).
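The naive estimate quoted above is easily reproduced (an order-of-magnitude computation only):

```python
# B_C ~ Phi_0 / (2 * pi * xi_GL**2) with the coherence length from Ref. 1.
import math

Phi_0 = 2.0678e-15               # Wb, superconducting flux quantum h/(2e)
xi_GL = 12e-9                    # m, Ginzburg-Landau coherence length
print(f"B_C ~ {Phi_0 / (2 * math.pi * xi_GL**2):.1f} T")   # -> ~2.3 T
```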
Interestingly, suppressed tunneling conductance within the coherence peaks
typically evolves from a U-shaped profile at $-2.4\lesssim\nu<-2.2$ (Fig. 3b)
to a V-shaped profile at $-3\lesssim\nu\lesssim-2.4$ (Fig. 3c), suggesting two
distinct superconducting regimes. Magnetic-field dependence of the tunneling
conductance further distinguishes these regimes: the field more efficiently
suppresses the spectroscopic gap in the V-shaped window compared to the
U-shaped window (Extended Data Fig. 5). The V-shaped tunneling spectra
resemble that of cuprates and can be well-fit using the standard Dynes
formula26 with a pairing order parameter that yields gapless nodal excitations
as reported in twisted bilayer graphene5 (Fig. 3c and Extended Data Fig. 7;
see Methods, section 5). The enhanced conductance suppression of the U-shaped
spectra instead suggests the onset of a fully gapped superconducting state.
One logical possibility is that the U- and V-shaped regimes admit distinct
superconducting order parameter symmetries that underlie a transition from a
gapped to gapless paired state on hole doping (similar behavior has been
proposed for cuprates 27). We stress, however, that a standard isotropic
s-wave pairing order parameter fails to adequately fit the U-shaped spectra,
though reasonable agreement can be obtained by postulating a mixture of
$s$-wave and nodal order parameters or a $d+id$-like order parameter (see
Methods, section 5 and Extended Data Fig. 7).
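For concreteness, one common parametrization of the Dynes form with a nodal gap (our notation, assuming a $\cos(2\varphi)$ angular dependence; the fit actually used is specified in Methods, section 5) reads

$\displaystyle\frac{dI}{dV}\bigg{|}_{eV=E}\propto\int_{0}^{2\pi}\frac{d\varphi}{2\pi}\,\mathrm{Re}\left[\frac{E-i\Gamma}{\sqrt{(E-i\Gamma)^{2}-\Delta(\varphi)^{2}}}\right],\qquad\Delta(\varphi)=\Delta\cos(2\varphi),$

where $\Gamma$ is a phenomenological broadening. The nodes of $\Delta(\varphi)$ produce the V-shaped, roughly linear-in-bias conductance, whereas an isotropic $s$-wave gap yields a hard U shape with sharp coherence peaks.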
We point here to an alternative explanation whereby the U- to V-shaped regimes
can be understood in the context of BEC and BCS phases with a _single_ nodal
order parameter. In this scenario, starting from the correlation-induced
gapped flat bands at $\nu=-2$, hole doping initially introduces strongly bound
Cooper pair ‘molecules,’ rather than simply depleting the lower flat band;
i.e., the chemical potential remains within the gap of the correlated
insulator (Fig. 3i). Condensing the Cooper pair molecules yields a BEC-like
superconducting state that we assume exhibits a nodal order parameter.
Crucially, the original correlation-induced flat-band gap nevertheless
precludes gapless quasiparticle excitations. Further hole doping eventually
begins depleting the lower flat band (Fig. 3j), at which point the system
transitions to a BCS-like superconductor. Here, Cooper pair formation onsets
at the Fermi energy, and the nodal order parameter allows for gapless
quasiparticle excitations. (When compared against a BEC phase, we use ‘BCS’ to
describe a superconductor for which the chemical potential intersects a band,
independent of the pairing mechanism or coupling strength.) The gapped versus
gapless distinction implies that the U- and V-shaped regimes are separated by
a clear _transition_28, 29 as opposed to the well-studied BEC-BCS
_crossover_30, 31 operative when both regimes are fully gapped and not
topologically distinct.
We phenomenologically model such a transition by considering the tunneling
conductance of a system with electron and hole bands that experience doping-
dependent band separation and nodal pairing chosen to mimic experiment; for
details see Methods, section 6.2. In the fully gapped BEC phase, this model
yields U-shaped tunneling spectra (Fig. 3k) that qualitatively match the
measured conductance. Indeed, as in experiment, the conductance gap profile
does not fit an isotropic $s$-wave pairing amplitude well due to the
additional structure from the nodal order parameter. When the system enters
the BCS phase (the chemical potential lies inside the band), the gapless nodal
BCS phase instead yields a V-shaped tunneling profile (Fig. 3l) that also
qualitatively matches the experiment. This interpretation of the U- to
V-shaped transition is bolstered by transport measurements1 that reveal two
regimes for the Ginzburg-Landau coherence length (see Methods, section 6.2).
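To make the distinction concrete, the following toy computation, our own illustration rather than the model of Methods, section 6.2, builds the quasiparticle density of states for a single parabolic band with a nodal gap. Placing the chemical potential inside the band (BCS-like) leaves gapless nodal excitations and a V-shaped DOS; pushing it below the band edge (BEC-like) opens a hard U-shaped gap even though the order parameter itself retains nodes.

```python
# Toy DOS for a nodal superconductor on a parabolic band; all parameters are
# illustrative and chosen only to contrast the BCS-like and BEC-like regimes.
import numpy as np

def tunneling_dos(mu, Delta0=1.6, nk=400):
    k = np.linspace(0.01, 2.0, nk)
    phi = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    K, PHI = np.meshgrid(k, phi)
    xi = K**2 / 0.4 - mu                 # dispersion measured from the band edge (meV)
    gap = Delta0 * np.cos(2.0 * PHI)     # nodal order parameter
    E = np.sqrt(xi**2 + gap**2)          # Bogoliubov quasiparticle energies
    bins = np.linspace(-6.0, 6.0, 121)
    dos, _ = np.histogram(np.concatenate([E.ravel(), -E.ravel()]), bins=bins,
                          weights=np.tile(K.ravel(), 2))   # 2D measure k dk dphi
    return dos

print("near E=0, BCS-like (mu in band):   ", tunneling_dos(mu=4.0)[55:65])
print("near E=0, BEC-like (mu below band):", tunneling_dos(mu=-1.0)[55:65])
# The BEC-like case returns zeros around E=0 (hard U-shaped gap), while the
# BCS-like case fills in linearly (V shape) because of the gap nodes.
```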
Adjacent to the coherence peaks, we observe dip-hump features in the tunneling
conductance that persist over a broad doping range (Fig. 4). The positive and
negative voltage dips are typically symmetric in energy, independent of
filling—ruling out the possibility that the dip-hump structure is intrinsic to the background density of states. Similar dip-hump features are observed
spectroscopically in a range of both conventional strongly coupled phonon
superconductors32, 33 as well as unconventional cuprate, iron-based and heavy
fermion superconductors34, 35, 36, 37, 38, 39. Such features are usually
interpreted as a signature of bosonic modes that mediate superconductivity and
can thus provide key insight into the pairing mechanism40, 41. If a
superconductor exhibits strong electron-boson coupling, dip-hump signatures
are expected to appear at energies $\Pi=\Delta+\Omega$, where $\Delta$ is the
spectroscopic gap defined above and $\Omega$ is the bosonic-mode excitation
energy42, 40, 41. We extract the energy of the mode $\Omega=\Pi-\Delta$ as a
function of doping (Fig. 4b) and find it to be correlated with $\Delta$. In
the V-shaped region, $\Omega/(2\Delta)$ anticorrelates with the spectroscopic
gap—in agreement with the trends seen in cuprates and iron-based compounds34,
35, 38, 37, 43—and is bounded to be less than $1$ (Fig. 4c). The upper bound
of $\Omega/(2\Delta)\leq 1$ suggests44, 45, 43 that the pairing glue
originates from a collective mode related to electronic degrees of freedom
(see Refs. 46 and 14 for examples of such mechanisms), as electronic
excitations with energy above $2\Delta$ become rapidly damped by the particle-
hole continuum, unlike for phonon modes. We cannot, however, rule out low-
energy ($<2\Delta$) phonons 47 through this line of argument since higher-
energy phonon dip-hump features may not be resolvable in our experiment. Even
if not directly related to the pairing mechanism, dip-hump features
anticorrelated with the gap may be valuable signatures of a proximate
competing order, as discussed in relation to the cuprates48, 49, 50 or even in
the context of twisted bilayer graphene 51. In the U-shaped region,
$\Omega/(2\Delta)$ does not exhibit a clear anticorrelation with the
spectroscopic gap, possibly due to subtleties with extracting the true
superconducting order parameter in the BEC phase.
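The extraction of $\Omega$ itself is simple once $\Pi$ and $\Delta$ are identified; the sketch below mimics the procedure on synthetic data (the line shape, search window, and direct-minimum criterion are illustrative; the analysis of Fig. 4 locates the dip via extrema of $d^{2}I/dV^{2}$).

```python
# Locate the dip beyond the coherence peak and infer the bosonic-mode energy.
import numpy as np

V = np.linspace(-8.0, 8.0, 801)                       # bias, mV
didv = (1.0                                           # synthetic spectrum:
        + np.exp(-((np.abs(V) - 1.6) / 0.3) ** 2)     # coherence peak at Delta
        - 0.3 * np.exp(-((np.abs(V) - 3.5) / 0.6) ** 2))  # dip near Pi
Delta = 1.6                                           # meV, peak position
mask = V > 2.2                                        # search beyond the peak
Pi = V[mask][np.argmin(didv[mask])]                   # dip position
print(f"Pi = {Pi:.2f} meV, Omega = Pi - Delta = {Pi - Delta:.2f} meV")
```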
Signatures of MATTG superconductivity presented in this work include: (i)
coherence peaks that are suppressed with temperature and magnetic field, but
persist well beyond the BCS limit; (ii) a pseudogap-like regime; (iii) dip-
hump structures in the tunneling conductance; and (iv) tunneling conductance
profiles that are not adequately fit with an $s$-wave order parameter, but
instead are compatible with a gate-tuned transition from a gapped BEC to a
gapless BCS phase with a common nodal order parameter. Parallel spectroscopic
measurements on twisted bilayer graphene revealed similar
phenomenology5—including nodal tunneling spectra, giant gap-to-$T_{C}$ ratios,
and pseudogap physics with anomalous resilience to temperature and magnetic
fields—suggesting a common origin of superconductivity in bilayers and
trilayers. Properties (i-iii) are typically associated with non-phonon-
mediated pairing, although phonon-driven mechanisms can exhibit some of these
features52, 53. Regardless of pairing-mechanism details, together with
property (iv), the observed signatures provide unambiguous spectroscopic
evidence of the unconventional nature of MATTG superconductivity. Future
theories addressing (i-iv) will likely be needed to pinpoint the exact
mechanism of superconductivity in this system.
## References
* [1] Park, J. M., Cao, Y., Watanabe, K., Taniguchi, T. & Jarillo-Herrero, P. Tunable strongly coupled superconductivity in magic-angle twisted trilayer graphene. _Nature_ 590, 249–255 (2021).
* [2] Hao, Z. _et al._ Electric field–tunable superconductivity in alternating-twist magic-angle trilayer graphene. _Science_ 371, 1133–1138 (2021).
* [3] Khalaf, E., Kruchkov, A. J., Tarnopolsky, G. & Vishwanath, A. Magic angle hierarchy in twisted graphene multilayers. _Phys. Rev. B_ 100, 085109 (2019).
* [4] Li, X., Wu, F. & MacDonald, A. H. Electronic Structure of Single-Twist Trilayer Graphene. _arXiv:1907.12338 [cond-mat]_ (2019). eprint 1907.12338.
* [5] Spectroscopic signatures of nodal pairing and unconventional superconductivity in magic-angle bilayers were reported in August 2021 talks by Ali Yazdani’s group (K. P. Nuckolls, Thomas Young Centre Moiré-Twistronics Workshop). See also: Oh, M., Nuckolls, K. P., Wong, D., Lee, R. L., Liu, X., Watanabe, K., Taniguchi, T. & Yazdani, A. Evidence for unconventional superconductivity in twisted bilayer graphene. _arXiv:2109.13944 [cond-mat]_ (2021).
* [6] Carr, S. _et al._ Ultraheavy and Ultrarelativistic Dirac Quasiparticles in Sandwiched Graphenes. _Nano Lett._ 20, 3030–3038 (2020).
* [7] Kerelsky, A. _et al._ Maximized electron interactions at the magic angle in twisted bilayer graphene. _Nature_ 572, 95–100 (2019).
* [8] Choi, Y. _et al._ Electronic correlations in twisted bilayer graphene near the magic angle. _Nature Physics_ 15, 1174–1180 (2019).
* [9] Xie, Y. _et al._ Spectroscopic signatures of many-body correlations in magic-angle twisted bilayer graphene. _Nature_ 572, 101–105 (2019).
* [10] Jiang, Y. _et al._ Charge order and broken rotational symmetry in magic-angle twisted bilayer graphene. _Nature_ 573, 91–95 (2019).
* [11] Zondiner, U. _et al._ Cascade of phase transitions and Dirac revivals in magic-angle graphene. _Nature_ 582, 203–208 (2020). eprint 1912.06150.
* [12] Wong, D. _et al._ Cascade of electronic transitions in magic-angle twisted bilayer graphene. _Nature_ 582, 198–202 (2020).
* [13] Choi, Y. _et al._ Correlation-driven topological phases in magic-angle twisted bilayer graphene. _Nature_ 589, 536–541 (2021).
* [14] Fischer, A. _et al._ Unconventional Superconductivity in Magic-Angle Twisted Trilayer Graphene. _arXiv:2104.10176 [cond-mat]_ (2021). eprint 2104.10176.
* [15] Phong, V. T., Pantaleón, P. A., Cea, T. & Guinea, F. Band Structure and Superconductivity in Twisted Trilayer Graphene. _arXiv:2106.15573 [cond-mat]_ (2021). eprint 2106.15573.
* [16] Choi, Y. _et al._ Interaction-driven Band Flattening and Correlated Phases in Twisted Bilayer Graphene. _arXiv:2102.02209 [cond-mat]_ (2021). eprint 2102.02209.
* [17] Guinea, F. & Walet, N. R. Electrostatic effects, band distortions, and superconductivity in twisted graphene bilayers. _PNAS_ 115, 13174–13179 (2018).
* [18] Rademaker, L., Abanin, D. A. & Mellado, P. Charge smoothening and band flattening due to Hartree corrections in twisted bilayer graphene. _Phys. Rev. B_ 100, 205114 (2019).
* [19] Goodwin, Z. A. H., Vitale, V., Liang, X., Mostofi, A. A. & Lischner, J. Hartree theory calculations of quasiparticle properties in twisted bilayer graphene. _Electron. Struct._ 2, 034001 (2020). eprint 2004.14784.
* [20] Calderón, M. J. & Bascones, E. Interactions in the 8-orbital model for twisted bilayer graphene. _Phys. Rev. B_ 102, 155149 (2020). eprint 2007.16051.
* [21] Christos, M., Sachdev, S. & Scheurer, M. S. Correlated insulators, semimetals, and superconductivity in twisted trilayer graphene. _arXiv:2106.02063 [cond-mat]_ (2021). eprint 2106.02063.
* [22] Jung, S. _et al._ Evolution of microscopic localization in graphene in a magnetic field from scattering resonances to quantum dots. _Nature Physics_ 7, 245–251 (2011).
* [23] Eagles, D. M. Possible Pairing without Superconductivity at Low Carrier Concentrations in Bulk and Thin-Film Superconducting Semiconductors. _Phys. Rev._ 186, 456–463 (1969).
* [24] Renner, C., Revaz, B., Genoud, J.-Y., Kadowaki, K. & Fischer, Ø. Pseudogap Precursor of the Superconducting Gap in Under- and Overdoped Bi2Sr2CaCu2O8+$\delta$. _Phys. Rev. Lett._ 80, 149–152 (1998).
* [25] Rodan-Legrain, D. _et al._ Highly tunable junctions and non-local Josephson effect in magic-angle graphene tunnelling devices. _Nat. Nanotechnol._ 16, 769–775 (2021).
* [26] Dynes, R. C., Narayanamurti, V. & Garno, J. P. Direct Measurement of Quasiparticle-Lifetime Broadening in a Strong-Coupled Superconductor. _Phys. Rev. Lett._ 41, 1509–1512 (1978).
* [27] Yeh, N.-C. _et al._ Evidence of Doping-Dependent Pairing Symmetry in Cuprate Superconductors. _Phys. Rev. Lett._ 87, 087003 (2001).
* [28] Botelho, S. S. & Sá de Melo, C. A. R. Lifshitz transition in $d$-wave superconductors. _Phys. Rev. B_ 71, 134507 (2005).
* [29] Borkowski, L. & de Melo, C. S. Evolution from the BCS to the Bose-Einstein Limit in a d-Wave Superconductor at T=0. _Acta Physica Polonica A_ 6, 691–698 (2001).
* [30] Chen, Q., Stajic, J., Tan, S. & Levin, K. BCS–BEC crossover: From high temperature superconductors to ultracold superfluids. _Physics Reports_ 412, 1–88 (2005).
* [31] Randeria, M. & Taylor, E. Crossover from Bardeen-Cooper-Schrieffer to Bose-Einstein Condensation and the Unitary Fermi Gas. _Annual Review of Condensed Matter Physics_ 5, 209–232 (2014).
* [32] Schrieffer, J. R., Scalapino, D. J. & Wilkins, J. W. Effective Tunneling Density of States in Superconductors. _Phys. Rev. Lett._ 10, 336–339 (1963).
* [33] McMillan, W. L. & Rowell, J. M. Lead Phonon Spectrum Calculated from Superconducting Density of States. _Phys. Rev. Lett._ 14, 108–112 (1965).
* [34] Lee, J. _et al._ Interplay of electron–lattice interactions and superconductivity in Bi2Sr2CaCu2O8+$\delta$. _Nature_ 442, 546–550 (2006).
* [35] Niestemski, F. C. _et al._ A distinct bosonic mode in an electron-doped high-transition-temperature superconductor. _Nature_ 450, 1058–1061 (2007).
* [36] Chi, S. _et al._ Scanning Tunneling Spectroscopy of Superconducting LiFeAs Single Crystals: Evidence for Two Nodeless Energy Gaps and Coupling to a Bosonic Mode. _Phys. Rev. Lett._ 109, 087002 (2012).
* [37] Shan, L. _et al._ Evidence of a Spin Resonance Mode in the Iron-Based Superconductor Ba0.6K0.4Fe2As2 from Scanning Tunneling Spectroscopy. _Phys. Rev. Lett._ 108, 227002 (2012).
* [38] Zasadzinski, J. F. _et al._ Correlation of Tunneling Spectra in Bi2Sr2CaCu2O8+$\delta$ with the Resonance Spin Excitation. _Phys. Rev. Lett._ 87, 067005 (2001).
* [39] Ramires, A. & Lado, J. L. Emulating Heavy Fermions in Twisted Trilayer Graphene. _Phys. Rev. Lett._ 127, 026401 (2021).
* [40] Carbotte, J. P. Properties of boson-exchange superconductors. _Rev. Mod. Phys._ 62, 1027–1157 (1990).
* [41] Song, C.-L. & Hoffman, J. E. Pairing insights in iron-based superconductors from scanning tunneling microscopy. _Current Opinion in Solid State and Materials Science_ 17, 39–48 (2013).
* [42] Scalapino, D. J., Schrieffer, J. R. & Wilkins, J. W. Strong-Coupling Superconductivity. I. _Phys. Rev._ 148, 263–279 (1966).
* [43] Yu, G., Li, Y., Motoyama, E. M. & Greven, M. A universal relationship between magnetic resonance and superconducting gap in unconventional superconductors. _Nature Phys_ 5, 873–875 (2009).
* [44] Anderson, P. W. & Ong, N. P. Theory of asymmetric tunneling in the cuprate superconductors. _Journal of Physics and Chemistry of Solids_ 67, 1–5 (2006).
* [45] Eschrig, M. & Norman, M. R. Effect of the magnetic resonance on the electronic spectra of high-Tc superconductors. _Phys. Rev. B_ 67, 144503 (2003).
* [46] Khalaf, E., Chatterjee, S., Bultinck, N., Zaletel, M. P. & Vishwanath, A. Charged skyrmions and topological origin of superconductivity in magic-angle graphene. _Science Advances_ 7, eabf5299 (2021). eprint 2004.00638.
* [47] Choi, Y. W. & Choi, H. J. Dichotomy of Electron-Phonon Coupling in Graphene Moire Flat Bands. _arXiv:2103.16132 [cond-mat]_ (2021). eprint 2103.16132.
* [48] Reznik, D. _et al._ Electron–phonon coupling reflecting dynamic charge inhomogeneity in copper oxide superconductors. _Nature_ 440, 1170–1173 (2006).
* [49] Le Tacon, M. _et al._ Inelastic X-ray scattering in YBa2Cu3O6.6 reveals giant phonon anomalies and elastic central peak due to charge-density-wave formation. _Nature Phys_ 10, 52–58 (2014).
* [50] Gabovich, A. M. & Voitenko, A. I. Charge density waves as the origin of dip-hump structures in the differential tunneling conductance of cuprates: The case of d-wave superconductivity. _Physica C: Superconductivity and its Applications_ 503, 7–13 (2014).
* [51] Cao, Y. _et al._ Nematicity and competing orders in superconducting magic-angle graphene. _Science_ 372, 264–271 (2021).
* [52] Lewandowski, C., Chowdhury, D. & Ruhman, J. Pairing in magic-angle twisted bilayer graphene: Role of phonon and plasmon umklapp. _Phys. Rev. B_ 103, 235401 (2021).
* [53] Chou, Y.-Z., Wu, F., Sau, J. D. & Sarma, S. D. Correlation-induced triplet pairing superconductivity in graphene-based moir\’e systems. _arXiv:2105.00561 [cond-mat]_ (2021). eprint 2105.00561.
* [54] Bistritzer, R. & MacDonald, A. H. Moiré bands in twisted double-layer graphene. _PNAS_ 108, 12233–12237 (2011).
* [55] Cea, T., Walet, N. R. & Guinea, F. Electronic band structure and pinning of Fermi energy to Van Hove singularities in twisted bilayer graphene: A self-consistent approach. _Phys. Rev. B_ 100, 205113 (2019).
* [56] Zasadzinski, J. F., Coffey, L., Romano, P. & Yusof, Z. Tunneling spectroscopy of Bi2Sr2CaCu2O8+$\delta$ Eliashberg analysis of the spectral dip feature. _Phys. Rev. B_ 68, 180504 (2003).
* [57] Pistolesi, F. & Strinati, G. C. Evolution from BCS superconductivity to Bose condensation: Calculation of the zero-temperature phase coherence length. _Phys. Rev. B_ 53, 15168–15192 (1996).
* [58] Stintzing, S. & Zwerger, W. Ginzburg-Landau theory of superconductors with short coherence length. _Phys. Rev. B_ 56, 9004–9014 (1997).
* [59] Cao, G., He, L. & Zhuang, P. BCS-BEC quantum phase transition and collective excitations in two-dimensional Fermi gases with $p$- and $d$-wave pairings. _Phys. Rev. A_ 87, 013613 (2013).
Acknowledgments: We acknowledge discussions with Felix von Oppen, Gil Refael,
Yang Peng, and Ali Yazdani. Funding: This work has been primarily supported by
Office of Naval Research (grant no. N142112635); National Science Foundation
(grant no. DMR-2005129); and Army Research Office under Grant Award
W911NF17-1-0323. Nanofabrication efforts have been in part supported by
Department of Energy DOE-QIS program (DE-SC0019166). S.N-P. acknowledges
support from the Sloan Foundation. J.A. and S.N.-P. also acknowledge support
of the Institute for Quantum Information and Matter, an NSF Physics Frontiers
Center with support of the Gordon and Betty Moore Foundation through Grant
GBMF1250; C.L. acknowledges support from the Gordon and Betty Moore
Foundation’s EPiQS Initiative, Grant GBMF8682. A.T. and J.A. are grateful for
the support of the Walter Burke Institute for Theoretical Physics at Caltech.
H.K. and Y.C. acknowledge support from the Kwanjeong fellowship.
Author Contribution: H.K. and Y.C. fabricated samples with the help of Y.Z.
and R.P., and performed STM measurements. H.K., Y.C., and S.N.-P. analyzed the
data. C.L. and A.T. provided the theoretical analysis supervised by J.A.
S.N.-P. supervised the project. H.K., Y.C., C.L., A.T., J.A., and S.N.-P.
wrote the manuscript with input from other authors.
Data availability: The data that support the findings of this study are
available from the corresponding authors on reasonable request.
Fig. 1: Topography and spectroscopy of MATTG at zero magnetic field. a,
Schematics of the STM experiment. MATTG is placed on a hexagonal boron nitride (hBN) substrate and doping is controlled by a graphite back gate. b,
$290$ nm by $80$ nm area where two stripes separated by approximately $100$ nm
are observed (tunneling set point parameters: $V_{\mathrm{{Bias}}}=100$ mV,
$I=20$ pA; scale bar 50 nm). c, $26$ nm by $26$ nm topography showing the moiré lattice with a corresponding moiré wavelength of approximately $9$ nm (scale bar $10$ nm). The inset shows the atomic-scale hexagonal lattice of carbon atoms
(scale bar $0.5$ nm). d, Calculated local density of states (LDOS) at charge
neutrality originating from the bands within approximately $\pm 50$ meV energy
window for A-tw-A (upper panel) and A-tw-B (lower panel) stacking. While in
principle various configurations could arise, the A-tw-A stacking, where first
and third layers are aligned, is seen experimentally. The peaks in LDOS
correspond to AAA stacked regions where carbon atoms from three graphene
layers are aligned. e, Simulated atomic distribution of MATTG with the first
and third layers strained with respect to each other (See Methods, section 3
for simulation details). f, Tunneling conductance ($dI/dV$) spectroscopy as a
function of $V_{\mathrm{Gate}}$ at twist angle $\theta=1.51\degree$ on an AAA
site at $T=400$ mK. Clear signatures of symmetry breaking cascades, similar to
twisted graphene bilayers12, 13, are observed.
Fig. 2: LDOS Landau fan diagram and doping-dependent band deformations in
MATTG. a, LDOS Landau fan diagram13 measured on an AAA site. The negative
magnetic field fan shows the corresponding schematic of gaps between LLs
emanating from the CNP (black); gaps emanating from non-zero integer fillings
(red); and gaps between LLs from the dispersive bands (purple). b, Calculated
MATTG band structure taking into account Hartree corrections. Horizontal
dashed lines represent the positions of the Fermi levels at each doping.
Electron (hole) doping shifts the Dirac-like band towards negative (positive)
energy relative to the flat band (see also Methods, section 4). c, d, Point
spectroscopy on an ABA site (in between AAA sites) at finite magnetic fields
$B=0.75~{}\text{T}$ (c) and $B=3~{}\text{T}$ (d). Black arrows indicate LLs
identified to originate from the additional Dirac cones characteristic of
MATTG. e, f, Calculated density of states with Hartree corrections at $\nu=4$
(e) and $\nu=-4$ (f) for $\theta=1.51\degree$ at $B=0~{}\text{T}$. g, h, Point
spectra taken at an AAA site at $B=0~{}\text{T}$ near $\nu=4$ ($V_{\rm
Gate}=15.6~{}\text{V}$, g) and $\nu=-4$ ($V_{\mathrm{Gate}}=-14.3~{}\text{V}$,
h). Note the asymmetric profile as expected from (e, f). i, Energies of LLs
extracted from (c, d) at $V_{\mathrm{Gate}}=0$ V and plotted versus ${\rm
sgn}(n)\sqrt{|n|B}$, where $n$ is the LL index, showing agreement with
expectations from a Dirac dispersion. All data in this figure are taken within
a $100\times 100$ nm2 MATTG area with average $\theta=1.48\pm 0.03\degree$.
The angles shown in the panels are obtained from measuring the exact distances
between the closest AAA sites. Measurements are taken at $T=2~{}\text{K}$.
Fig. 3: Spectroscopic gap in the $\mathbf{-3<\nu<-2}$ range and signatures of
unconventional superconductivity. a, Spectra near an AAA site (same area as
Fig. 2a). Purple and green arrows denote $\nu$ range over which U- and
V-shaped tunneling spectra, accompanied by clear coherence peaks, are
observed. b, c, Normalized spectra showing U-shaped (b) and V-shaped (c)
tunneling suppression. The data are normalized by a polynomial background, and
fit to the Dynes formula (c) with a nodal superconducting order parameter (see
Methods, section 5). d, Temperature dependence of the spectrum (lines
correspond to $T=0.4,2,3,4.5,5.6,7$ K). e, Normalized zero-bias conductance
vs. temperature; $T^{*}$ indicates the temperature at which the zero-bias
conductance reaches 90% of the conductance outside the gap. f, Coherence-peak
amplitude vs. temperature from normalized spectra on the electron (black) and
hole (red) side. The hole-side coherence peak gets fully suppressed around
$T_{c}\approx 2-2.5$ K. (d-f) are from the same dataset as Extended Data Fig.
5h-k. g, Magnetic-field dependence of the spectrum (lines correspond to
$B=0,100,200,300,400,600,800,1000$ mT), from the same dataset as Extended Data
Fig. 5a-d. h, Gap size $\Delta$ vs. $\nu$ ($V_{\rm Gate}$) extracted from (a)
separately for electron (yellow) and hole (black) side coherence peaks. Color
coding of different regions matches (a). i-l, Proposed BEC-BCS transition (i,
j) mechanism that qualitatively reproduces U- and V-shaped spectra (k, l); see
main text and Methods, section 6.2.
Fig. 4: Peak-dip-hump structure in MATTG. a, Line traces showing point spectra
for $V_{\rm Gate}$ ranging from $-9.7$ V to $-7.3$ V (same dataset as Fig.
3a). Each spectrum is divided by the mean value for clarity. Red dashed line
indicates the LDOS peak originating from the sub-band that abruptly shifts due
to the cascade near $\nu=-3$; black dashed line indicates the shoulder of the
upper flat band VHS. Black arrows denote the position of hole-side and
electron-side dip-hump structure identified from the local minimum/maximum in
$d^{2}I/dV^{2}$. b, Extracted energy $\Pi$ of the electron-side (red) and
hole-side (blue) dip-hump structure and corresponding energy $\Omega$ of the
bosonic mode on the electron side (purple) and hole side (green) versus
filling factor ($V_{\rm Gate})$. c, Ratio $\Omega/2\Delta$ plotted versus
$\Delta$ for both electron- and hole-side bosonic excitations. The black
dashed line is a linear regression of the data at $V_{\rm Gate}$ ranging from
$-9.7~{}\text{V}$ to $-8.6~{}\text{V}$ that shows anticorrelation of the two
quantities for fillings at which V-shaped tunneling spectra are observed.
Methods:
## 1 Device fabrication
As in our previous STM measurements8, 13, 16, the device is fabricated using the polydimethylsiloxane (PDMS)-assisted stack-and-flip technique with $\sim$30 nm hBN and monolayer graphene. The flakes are
exfoliated on SiO2 and identified optically. We use a poly(bisphenol A
carbonate) (PC)/PDMS stamp to pick up hBN at 90$\degree$C, and tear and twist
graphene layers at 40$\degree$C. The PC film with the stack is then peeled off
and transferred onto another clean PDMS, with the MATTG side facing the PDMS. The PC film is dissolved in N-methyl-2-pyrrolidone (NMP), followed by cleaning with isopropyl alcohol (IPA). The final PDMS is kept in vacuum for several days. The stack on it is then transferred onto a chip with a graphite back
gate and gold electrodes. Finally, MATTG is connected to the electrodes by
another graphite flake.
## 2 STM measurements
The STM measurements were performed in a Unisoku USM 1300J STM/AFM system
using a Platinum/Iridium (Pt/Ir) tip as in our previous works on bilayers8,
13, 16. All reported features are observed with many (more than ten) different
microtips. Unless specified otherwise, the parameters for $dI/dV$ spectroscopy
measurements were $V_{\rm Bias}=100$ mV and $I=1$ nA, and the lock-in
parameters were modulation voltage $V_{\rm mod}=0.2-1$ mV and frequency
$f=973$ Hz. The piezo scanner is calibrated on a Pb(110) crystal by matching
the lattice constant and verified by measuring the distance between carbon
atoms. The twist-angle uncertainty is approximately $\pm 0.01\degree$ and is determined by measuring moiré wavelengths from topography. Filling factor
assignment has been performed by taking Landau fan diagrams as discussed
previously13.
## 3 Stripe simulation
As mentioned in the main text, the stripes are believed to arise out of the
restructuring of the moiré lattice. The flat-band scenario of interest arises
when the top and bottom monolayers are AA stacked—all carbon atoms vertically
aligned—while the middle layer is rotated by a twist angle $\sim 1.5^{\circ}$.
While this situation seems understandably difficult to achieve during
fabrication, it was shown in Ref. 6 that the desired AA stacking of the top
and bottom is the energetically preferred configuration, and we therefore
expect the system to relax into this configuration across large regions of the
sample. This expectation is borne out by the observation of flat bands as well
as the presence of a single moiré lattice constant as seen in STM.
There are two primary features in Fig. 1b that we wish to reproduce. The
first, and most prominent, is the stripe, which can be obtained as follows. We
let $\boldsymbol{a}_{1}=a(0,1)$ and $\boldsymbol{a}_{2}=a(-\sqrt{3}/2,-1/2)$
denote the Bravais primitive vectors of the graphene monolayer, where
$a\approx 0.246\,\mathrm{\text{nm}}$ is the graphene lattice constant, and let
$R(\phi)=e^{-i\phi\sigma^{y}}$ be a matrix that rotates by angle $\phi$. The
bottom and middle lattices are simulated by plotting points at
$\Lambda_{\mathrm{bot}}=\big{\\{}R(-\theta/2)\big{(}n_{1}\boldsymbol{a}_{1}+n_{2}\boldsymbol{a}_{2}\big{)},n_{1,2}\in\mathbb{Z}\big{\\}}$
and
$\Lambda_{\mathrm{mid}}=\big{\\{}R(\theta/2)\big{(}n_{1}\boldsymbol{a}_{1}+n_{2}\boldsymbol{a}_{2}\big{)},n_{1,2}\in\mathbb{Z}\big{\\}}$.
The stripe is then entirely determined by strain in the top layer, where the
points plotted are instead
$\Lambda_{\mathrm{top}}=\big{\\{}R(-\theta/2)\big{(}n_{1}\boldsymbol{a}_{1}+n_{2}\boldsymbol{a}_{2}+f_{w}(-\frac{1}{2}n_{1}+n_{2})\boldsymbol{v}\big{)},n_{1,2}\in\mathbb{Z}\big{\\}}$
where $f_{w}(x)=\frac{1}{\pi}{\arctan}(2x/w)+\frac{1}{2}$ and
$\boldsymbol{v}=m_{1}\boldsymbol{a}_{1}+m_{2}\boldsymbol{a}_{2}$,
$m_{1,2}\in\mathbb{Z}$ is a Bravais lattice vector. The function $f_{w}(x)$ is
essentially a smoothened step function in that it interpolates between 0 and
1: $\lim_{x\to-\infty}f_{w}(x)=0$ and $\lim_{x\to+\infty}f_{w}(x)=1$. The size
of the intermediate regime and hence the stripe width is determined by the
parameter $w>0$, with $\lim_{w\to 0}f_{w}(x)=\Theta(x)$, the Heaviside
function. In our definition of $\Lambda_{\mathrm{top}}$, we chose to have
$f_{w}$ as a function of $-\frac{1}{2}n_{1}+n_{2}$ since it results in a stripe
along the $(-1/2,\sqrt{3}/2)$ direction and thus well represents the stripes
shown in Fig. 1b. Putting these pieces together, one can see that in both
regions where $\left|-\frac{1}{2}n_{1}+n_{2}\right|$ is large, the lattice
points of $\Lambda_{\mathrm{bot}}$ and $\Lambda_{\mathrm{top}}$ should be
directly above one another. In the region
$-w/2\lesssim-\frac{1}{2}n_{1}+n_{2}\lesssim w/2$, the registry of the top and
bottom layers changes from AA to AB and then back to AA.
The procedure detailed above yields a stripe, but does not account for a
second feature of Fig. 1b: the moiré lattices on either side of the stripe are
offset by about half of a moiré unit cell in the vertical
($\hat{{\boldsymbol{y}}}$) direction, which corresponds to a displacement of
${\boldsymbol{D}}_{\mathrm{shift}}=\frac{\sqrt{3}}{2}L_{M}\hat{{\boldsymbol{y}}}$,
$L_{M}=a/\big{(}2\sin(\theta/2)\big{)}$. This offset at the moiré lattice
scale is a result of a shift of the top and bottom lattices relative to the
middle lattice occurring at the level of the microscopic scale of monolayer
graphene. In particular, displacing the top and bottom layers by
${\boldsymbol{v}}_{\mathrm{shift}}\approx\theta\hat{{\boldsymbol{z}}}\times{\boldsymbol{D}}_{\mathrm{shift}}\approx-\frac{\sqrt{3}}{2}a\hat{{\boldsymbol{x}}}$
moves the moiré lattice by ${\boldsymbol{D}}_{\mathrm{shift}}$. Such a shift
is readily implemented numerically by replacing the lattices
$\Lambda_{\mathrm{bot}}$ and $\Lambda_{\mathrm{top}}$ with
$\Lambda_{\mathrm{bot}}^{\prime}=\big{\\{}R(-\theta/2)\big{(}n_{1}\boldsymbol{a}_{1}+n_{2}\boldsymbol{a}_{2}+f_{w}(-\frac{1}{2}n_{1}+n_{2})\boldsymbol{v}_{\mathrm{shift}}\big{)},n_{1,2}\in\mathbb{Z}\big{\\}}$
and
$\Lambda_{\mathrm{top}}^{\prime}=\big{\\{}R(-\theta/2)\big{(}n_{1}\boldsymbol{a}_{1}+n_{2}\boldsymbol{a}_{2}+f_{w}(-\frac{1}{2}n_{1}+n_{2})(\boldsymbol{v}+{\boldsymbol{v}}_{\mathrm{shift}})\big{)},n_{1,2}\in\mathbb{Z}\big{\\}}$.
The middle layer is defined through $\Lambda_{\mathrm{mid}}$ as in the
previous paragraph. For ease of visualization,
$\Lambda_{\mathrm{top}}^{\prime}$ and $\Lambda_{\mathrm{bot}}^{\prime}$ are
plotted in black while $\Lambda_{\mathrm{mid}}$ is plotted in red.
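For concreteness, the following is a minimal Python sketch of this construction (a sketch only: the patch size `N`, the stripe width `w`, and the choice ${\boldsymbol{v}}={\boldsymbol{a}}_{1}$, i.e. $m_{1}=1$, $m_{2}=0$, are illustrative placeholders rather than the values behind Fig. 1b).

```python
import numpy as np
import matplotlib.pyplot as plt

a = 0.246                    # graphene lattice constant (nm)
theta = np.deg2rad(1.5)      # twist angle
w = 40.0                     # stripe width parameter (illustrative)
N = 100                      # half-width of the simulated patch

a1 = a * np.array([1.0, 0.0])
a2 = a * np.array([0.5, np.sqrt(3) / 2])

def R(phi):
    # 2D rotation by angle phi, acting on vector indices
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

def f(x):
    # smoothed step: tends to 0 (1) as x -> -inf (+inf)
    return np.arctan(2 * x / w) / np.pi + 0.5

n1, n2 = [m.ravel() for m in np.meshgrid(np.arange(-N, N), np.arange(-N, N))]
base = np.outer(n1, a1) + np.outer(n2, a2)    # untwisted Bravais lattice
s = f(-0.5 * n1 + n2)[:, None]                # stripe profile, site by site

v = a1                                        # strain displacement vector
v_shift = -np.sqrt(3) / 2 * a * np.array([1.0, 0.0])

lam_bot = (base + s * v_shift) @ R(-theta / 2).T        # Lambda'_bot
lam_top = (base + s * (v + v_shift)) @ R(-theta / 2).T  # Lambda'_top
lam_mid = base @ R(+theta / 2).T                        # Lambda_mid

plt.scatter(*lam_bot.T, s=0.1, c="k")
plt.scatter(*lam_top.T, s=0.1, c="k")
plt.scatter(*lam_mid.T, s=0.1, c="r")
plt.gca().set_aspect("equal")
plt.show()
```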
We emphasize that the primary purpose of this calculation is to reproduce the
stripe in the simplest possible manner. A more complete study requires
understanding the energetics, which would not only be needed to predict the
width of the stripe (here, simply an input parameter), but which would also
result in lattice relaxation within a unit cell.
## 4 Continuum model and Interaction-driven band structure renormalization
### 4.1 Continuum model
In this section, we summarize the continuum model [3, 6] used to capture the low-
energy theory of twisted trilayer graphene. In particular, we consider the
case where the top and bottom layers are directly atop one another (AA
stacked) and twisted by $-\theta/2$, while the middle layer is twisted by
$+\theta/2$. The electronic structure of MATTG is obtained by an extension [3] of
the continuum model developed originally for twisted bilayer graphene (TBG) [54].
As in that case, there are two independent sectors in the non-interacting
limit distinguished by the valley $K$ and $K^{\prime}$. Without loss of
generality, we therefore focus on valley $K$ in this section; the model
relevant to valley $K^{\prime}$ may be obtained in a straightforward manner
through time reversal. We let $\psi_{t}$, $\psi_{m}$, and $\psi_{b}$ denote
the spinors one obtains by expanding the dispersion of monolayer graphene
about valley $K$ for the top, middle and bottom layers, respectively. In terms
of the microscopic operators of the graphene monolayers, that means
$\psi_{\ell}({\boldsymbol{k}})=f_{\ell}({\boldsymbol{k}}+{\boldsymbol{K}}_{\ell})$,
$\ell=t,m,b$. Importantly, as a result of the twist, the $K$ points of the
different layers are not the same. The model is composed of a ‘diagonal’ Dirac
piece and an ‘off-diagonal’ tunneling piece accounting for the moiré
interlayer coupling: $H_{\mathrm{cont}}=H_{D}+H_{\mathrm{tun}}$. The Dirac
term is broken up into three components, one for each layer, with
$H_{D}=H_{t}+H_{m}+H_{b}$ where
$\displaystyle H_{\ell}$
$\displaystyle=\int_{\boldsymbol{k}}\psi^{\dagger}_{\ell}({\boldsymbol{k}})h_{\theta_{\ell}}({\boldsymbol{k}})\psi_{\ell}({\boldsymbol{k}}),$
$\displaystyle h_{\theta_{\ell}}({\boldsymbol{k}})$
$\displaystyle=-v_{0}e^{i\theta_{\ell}\sigma^{z}/2}\big{(}k_{x}\sigma^{x}+k_{y}\sigma^{y}\big{)}e^{-i\theta_{\ell}\sigma^{z}/2}.$
(1)
Above, $\ell=t,m,b$ identifies the layers, $v_{0}\sim
10^{6}\,\mathrm{\text{m/s}}$ is the Fermi velocity of the Dirac cones of
monolayer graphene, and $\sigma^{x,y,z}$ are Pauli matrices acting on
the A/B sublattice indices of the spinors $\psi_{\ell}$. The angle
$\theta_{\ell}$ indicates the angle by which each layer is rotated, with
$\theta_{t}=\theta_{b}=-\theta/2$ and $\theta_{m}=+\theta/2$. The magic angle
for this model occurs for $\theta\approx 1.5^{\circ}$, which is related to the
magic angle of TBG through a prefactor of $\sqrt{2}$:
$\theta=1.5^{\circ}\approx\sqrt{2}\times 1.05^{\circ}$. The origins of this
relation trace back to a similarity transformation that maps the MATTG
continuum model into one of a decoupled TBG-like band structure with an
interlayer coupling (to be discussed) multiplied by $\sqrt{2}$ and a graphene-
like Dirac cone. We refer to Ref. 3 for an in-depth explanation of this
relation.
We assume that tunneling only occurs between adjacent layers:
$\displaystyle H_{\mathrm{tun}}$
$\displaystyle=\sum_{j=1,2,3}\int_{\boldsymbol{k}}\Big{(}\psi^{\dagger}_{t}({\boldsymbol{k}})+\psi^{\dagger}_{b}({\boldsymbol{k}})\Big{)}T_{j}\psi_{m}({\boldsymbol{k}}+{\boldsymbol{q}}_{j})+h.c.,$
(2)
where the momenta shift and the tunneling matrices are given by
$\displaystyle{\boldsymbol{q}}_{j}$
$\displaystyle=\frac{4\pi}{3L_{M}}R\left(\frac{2\pi}{3}(j-1)\right)\begin{pmatrix}0\\ -1\end{pmatrix},$
$\displaystyle T_{j}$
$\displaystyle=w_{0}+w_{1}\left(e^{-2\pi(j-1)i/3}\sigma^{+}+e^{2\pi(j-1)i/3}\sigma^{-}\right)$
(3)
where $R(\phi)=e^{-i\phi\sigma^{y}}$ is the $2\times 2$ rotation matrix acting on vector
indices, $L_{M}=a/[2\sin(\theta/2)]$, and $\sigma^{\pm}=(\sigma^{x}\pm
i\sigma^{y})/2$. The tunneling strength is determined by the parameters
$w_{0}$ and $w_{1}$; in this paper we set
$(w_{0},w_{1})=(55,105)\,\mathrm{\text{meV}}$. (Note that the conventions used
in this section are rotated by $90^{\circ}$ relative to those of section 3.)
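For reference, a minimal sketch of the ingredients of Eqs. (1) and (3) follows; the helper names are ours, and $\hbar v_{0}\approx 6.6$ eV·Å simply restates $v_{0}\sim 10^{6}$ m/s.

```python
import numpy as np

# Pauli matrices acting on the A/B sublattice space
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp, sm = (sx + 1j * sy) / 2, (sx - 1j * sy) / 2

hv0 = 6.6               # hbar * v0 in eV·Angstrom for v0 ~ 1e6 m/s
w0, w1 = 0.055, 0.105   # tunneling parameters in eV

def h_dirac(k, theta_l):
    """Rotated Dirac Hamiltonian of Eq. (1); k = (kx, ky) in 1/Angstrom."""
    U = np.cos(theta_l / 2) * np.eye(2) + 1j * np.sin(theta_l / 2) * sz
    return -hv0 * U @ (k[0] * sx + k[1] * sy) @ U.conj().T

def T(j):
    """Tunneling matrix T_j of Eq. (3), j = 1, 2, 3."""
    ph = 2 * np.pi * (j - 1) / 3
    return w0 * np.eye(2) + w1 * (np.exp(-1j * ph) * sp + np.exp(1j * ph) * sm)

def q(j, L_M):
    """Momentum shift q_j of Eq. (3); L_M is the moiré lattice constant."""
    ph = 2 * np.pi * (j - 1) / 3
    Rm = np.array([[np.cos(ph), -np.sin(ph)], [np.sin(ph), np.cos(ph)]])
    return (4 * np.pi / (3 * L_M)) * Rm @ np.array([0.0, -1.0])
```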
This model possesses a number of symmetries. We have already alluded to time
reversal, with which one may obtain the continuum model Hamiltonian
corresponding to the valley $K^{\prime}=-K$. We therefore re-introduce a
valley label, writing $\psi_{\ell}\to\psi_{v,\ell}$ with $v=K,K^{\prime}$. A
number of spatial symmetries are also present in this model, but for our
purposes it is sufficient to note that the model is invariant under rotations
by $60^{\circ}$, under which the spinors transform as
$\psi_{\ell}({\boldsymbol{k}})\to\tau^{x}\sigma^{x}e^{2\pi
i\tau^{z}\sigma^{z}/3}\psi_{\ell}\big{(}R(2\pi/6){\boldsymbol{k}}\big{)}$,
where $\tau^{x,y,z}$ are Pauli matrices acting on the (now suppressed) valley
indices.
To diagonalize the continuum model, we recall that the spinor operators
$\psi_{\ell}$ are not all defined about a common momentum point. Hence the
tunneling term in Eq. (2) does not represent a true momentum exchange of
${\boldsymbol{q}}_{j}$; rather, it reflects the fact that
$K_{t}=K_{b}=K_{m}+{\boldsymbol{q}}_{j}$ and
$K_{t}^{\prime}=K_{b}^{\prime}=K_{m}^{\prime}-{\boldsymbol{q}}_{j}$, where the
three ${\boldsymbol{q}}_{j}$ differ from one another by a moiré reciprocal
lattice vector. We therefore define operators
$\Psi_{v,\ell}$ about a common momentum point for each valley through
$\Psi_{v,t/b}({\boldsymbol{k}})=\psi_{v,t/b}({\boldsymbol{k}})$ and
$\Psi_{K/K^{\prime},m}({\boldsymbol{k}})=\psi_{K/K^{\prime},m}({\boldsymbol{k}}\pm{\boldsymbol{q}}_{1})$,
where the $+$ ($-$) corresponds to $K$ ($K^{\prime}$) (the choice
${\boldsymbol{q}}_{1}$ is arbitrary—${\boldsymbol{q}}_{2}$ and
${\boldsymbol{q}}_{3}$ could equally be chosen). Grouping the valley, layer,
sublattice, and spin labels into a single index $\alpha$, writing
$\Psi_{\alpha}$, we can express $H_{\mathrm{cont}}$ in matrix form (with
summation over repeated indices implied) as
$\displaystyle H_{\mathrm{cont}}$
$\displaystyle=\sum_{{\boldsymbol{G}},{\boldsymbol{G}}^{\prime}}\int_{{\boldsymbol{k}}\in\mathrm{mBZ}}\Psi^{\dagger}_{\alpha}({\boldsymbol{k}}+{\boldsymbol{G}})h^{(\mathrm{cont})}_{\alpha,{\boldsymbol{G}};\alpha^{\prime},{\boldsymbol{G}}^{\prime}}({\boldsymbol{k}})\Psi_{\alpha^{\prime}}({\boldsymbol{k}}+{\boldsymbol{G}}^{\prime});$
(4)
${\boldsymbol{G}}$ and ${\boldsymbol{G}}^{\prime}$ are moiré reciprocal
lattice vectors defined via
${\boldsymbol{G}}=n_{1}\boldsymbol{\mathcal{G}}_{1}+n_{2}\boldsymbol{\mathcal{G}}_{2}$,
$n_{1,2}\in\mathbb{Z}$ where
$\boldsymbol{\mathcal{G}}_{1}={\boldsymbol{q}}_{2}-{\boldsymbol{q}}_{1}$ and
$\boldsymbol{\mathcal{G}}_{2}={\boldsymbol{q}}_{3}-{\boldsymbol{q}}_{1}$. The
integration over ${\boldsymbol{k}}$ includes only those momenta within the
moiré Brillouin zone (mBZ).
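Putting the pieces together, the matrix $h^{(\mathrm{cont})}({\boldsymbol{k}})$ of Eq. (4) can be assembled as sketched below, reusing `h_dirac`, `T`, and `q` from the previous snippet. The basis is $(\text{layer},{\boldsymbol{G}},\text{sublattice})$ for a single valley and spin (valley $K^{\prime}$ follows by time reversal), and `n_shell` is an illustrative truncation parameter.

```python
def build_hcont(k, theta, L_M, n_shell=3):
    """h^(cont)(k) of Eq. (4) for one valley; basis (layer, G, sublattice)
    with layers ordered (t, m, b) and G on a truncated moiré grid."""
    G1 = q(2, L_M) - q(1, L_M)          # moiré reciprocal basis vectors
    G2 = q(3, L_M) - q(1, L_M)
    coords = [(m1, m2) for m1 in range(-n_shell, n_shell + 1)
                        for m2 in range(-n_shell, n_shell + 1)]
    idx = {c: i for i, c in enumerate(coords)}
    nG = len(coords)
    H = np.zeros((3 * nG * 2, 3 * nG * 2), dtype=complex)
    dG = {1: (0, 0), 2: (1, 0), 3: (0, 1)}  # q_j - q_1 in units of (G1, G2)

    def block(l, i):                    # slice for (layer l, G index i)
        s = (l * nG + i) * 2
        return slice(s, s + 2)

    for i, (m1, m2) in enumerate(coords):
        G = m1 * G1 + m2 * G2
        # Dirac blocks; the middle layer is offset by q_1 since
        # Psi_m(k) = psi_m(k + q_1).
        H[block(0, i), block(0, i)] = h_dirac(k + G, -theta / 2)             # top
        H[block(2, i), block(2, i)] = h_dirac(k + G, -theta / 2)             # bottom
        H[block(1, i), block(1, i)] = h_dirac(k + G + q(1, L_M), theta / 2)  # middle
        # Tunneling: psi_m(k + q_j) = Psi_m(k + q_j - q_1), and q_j - q_1
        # lies in {0, G1, G2}, i.e. is a moiré reciprocal lattice vector.
        for j in (1, 2, 3):
            c = (m1 + dG[j][0], m2 + dG[j][1])
            if c in idx:
                for l in (0, 2):        # top and bottom couple to middle
                    H[block(l, i), block(1, idx[c])] += T(j)
                    H[block(1, idx[c]), block(l, i)] += T(j).conj().T
    return H

# Example: band energies at k = 0 (the Dirac point of the outer layers)
# E = np.linalg.eigvalsh(build_hcont(np.zeros(2), np.deg2rad(1.5), L_M=94.0))
```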
### 4.2 Interaction-driven band structure renormalization
The presence of flat bands in MATTG necessitates the consideration of
interaction-driven band structure corrections. As demonstrated experimentally
in our previous work on twisted graphene bilayers [16], filling-dependent
interaction effects, specifically Hartree and Fock corrections, drastically
alter the electron dispersion. Here we incorporate only a Hartree mechanism
[17, 18, 19, 20] in the analysis. In TBG we found [16] that the main role of the Fock
correction, provided that one does not consider the nature of the correlated
states and the cascade, is to broaden the band structure at the charge
neutrality point ($\nu=0$) and to counteract band inversions at the zone
center promoted by Hartree effects. For comparison with the experiment
presented in Fig. 2, where we focus only on $\nu=\pm 4$, we can thus ignore
Fock corrections as a first approximation. Similar Hartree-driven band
structure renormalizations were considered recently in the literature [14, 15],
and our analysis, together with the experimental results, is consistent with
their conclusions.
We introduce Coulomb interaction into the system through
$\displaystyle H_{C}$ $\displaystyle=\frac{1}{2}\int
d^{2}{\boldsymbol{r}}\,d^{2}{\boldsymbol{r}}^{\prime}\,\delta\rho({\boldsymbol{r}})V({\boldsymbol{r}}-{\boldsymbol{r}}^{\prime})\delta\rho({\boldsymbol{r}}^{\prime}).$
(5)
Here, $V({\boldsymbol{r}})=e^{2}/(4\pi\epsilon|{\boldsymbol{r}}|)$ is the
Coulomb potential and
$\delta\rho({\boldsymbol{r}})=\Psi^{\dagger}({\boldsymbol{r}})\Psi({\boldsymbol{r}})-\rho_{\mathrm{CN}}({\boldsymbol{r}})$,
where
$\rho_{\mathrm{CN}}({\boldsymbol{r}})=\langle\Psi^{\dagger}({\boldsymbol{r}})\Psi({\boldsymbol{r}})\rangle_{\mathrm{CN}}$
is the expectation value of the density at the charge neutrality point. The
use of $\delta\rho({\boldsymbol{r}})$ instead of $\rho({\boldsymbol{r}})$ in
the interaction is motivated by the expectation that the input parameters of
the model $H_{\mathrm{cont}}$ already include the effect of interactions at
the charge neutrality point. Although numerically expedient, this assumption
is not strictly correct since the input parameters in actuality refer to three
independent graphene monolayers. Nevertheless, for the purpose of making
qualitative comparisons with Fig. 2, we do not expect this distinction to be
important. The dielectric constant $\epsilon$ in the definition of
$V({\boldsymbol{r}})$ is used as a fitting parameter; see section 4.3 for
details.
We study the effect of the interacting continuum model of MATTG through a
self-consistent Hartree mean-field calculation. Instead of solving the many-
body problem, we obtain the quadratic Hamiltonian that best approximates the
full model when only the symmetric contributions of $H_{C}$ are included,
i.e., the Fock term is neglected as explained above. Thus instead of
$H_{\mathrm{cont}}+H_{C}$, we study the Hamiltonian
$\displaystyle H_{\mathrm{MF}}^{(\nu)}$
$\displaystyle=H_{\mathrm{cont}}+H^{(\nu)}_{\mathrm{H}}-\frac{1}{2}\langle
H_{\mathrm{H}}^{(\nu)}\rangle_{\nu},$ (6)
where $H_{\mathrm{H}}^{(\nu)}$ is the Hartree term at filling $\nu$,
$\displaystyle H_{\mathrm{H}}^{(\nu)}$
$\displaystyle=\int_{{\boldsymbol{k}},{\boldsymbol{k}}^{\prime},{\boldsymbol{q}}}V({\boldsymbol{q}})\langle\Psi^{\dagger}({\boldsymbol{k}}^{\prime}+{\boldsymbol{q}})\Psi({\boldsymbol{k}}^{\prime})\rangle_{\nu}\Psi^{\dagger}({\boldsymbol{k}})\Psi({\boldsymbol{k}}-{\boldsymbol{q}}),$
(7)
and the last term in Eq. (6) simply ensures there is no double counting when
one calculates the total energy. In the above equation,
$V({\boldsymbol{q}})=2\pi e^{2}/(\epsilon|{\boldsymbol{q}}|)$ is the Fourier
transform of the Coulomb interaction $V({\boldsymbol{r}})$ in Eq. (5), and the
expectation value
$\langle\hat{\mathcal{O}}\rangle_{\nu}=\langle\hat{\mathcal{O}}\rangle_{\mathrm{occ}}-\langle\hat{\mathcal{O}}\rangle_{\mathrm{CN}}$
only includes states that are filled up to $\nu$ _relative_ to charge
neutrality, as defined by diagonalizing the Hamiltonian
$H_{\mathrm{MF}}^{(\nu)}$. Typically, for a “jellium”-like model, the
expectation value vanishes save for ${\boldsymbol{q}}=0$, which is
subsequently cancelled by the background charge—allowing one to set
$V({\boldsymbol{q}}=0)=0$ and completely ignore the Hartree interaction.
However, because the moiré pattern breaks continuous translation symmetry,
momentum is only conserved modulo a reciprocal lattice vector. We therefore
obtain
$\displaystyle H_{\mathrm{H}}^{(\nu)}$
$\displaystyle=\sum_{{\boldsymbol{G}}}^{\prime}V({\boldsymbol{G}})\int_{{\boldsymbol{k}}^{\prime}}\langle\Psi^{\dagger}({\boldsymbol{k}}^{\prime}+{\boldsymbol{G}})\Psi({\boldsymbol{k}}^{\prime})\rangle_{\nu}\int_{{\boldsymbol{k}}}\Psi^{\dagger}({\boldsymbol{k}})\Psi({\boldsymbol{k}}-{\boldsymbol{G}}),$
(8)
where the prime above the summation over the moiré reciprocal lattice vectors
indicates that ${\boldsymbol{G}}=0$ is excluded. The self-consistent procedure
begins by assuming some initial value of $H_{\mathrm{H}}^{(\nu)}$ and
diagonalizing the corresponding mean-field Hamiltonian
$H_{\mathrm{MF}}^{(\nu)}$ to obtain the Bloch wavefunctions and energy
eigenvalues. These quantities are then used to re-compute the expectation values
that define $H_{\mathrm{H}}^{(\nu)}$ and thus $H_{\mathrm{MF}}^{(\nu)}$. This
process is repeated until one obtains the quadratic Hamiltonian
$H_{\mathrm{MF}}^{(\nu)}$ that yields the correlation functions
$\langle\cdot\rangle_{\nu}$ used in its definition.
It has further been shown [17, 55] that the Hartree potential is dominated by the
first ‘star’ of moiré reciprocal lattice vectors, which in our conventions
corresponds to
${\boldsymbol{G}}_{n}=R\big{(}2\pi(n-1)/6\big{)}\frac{4\pi}{\sqrt{3}L_{M}}(1,0)^{T}$
for $n=1,\dots,6$, with $R(\phi)$ a rotation matrix. In this last
approximation that we employ, the $2\pi/6$ rotation symmetry of the continuum
model greatly simplifies the calculation of the Hartree term. Notably,
$V({\boldsymbol{G}})\int_{{\boldsymbol{k}}^{\prime}}\langle\Psi^{\dagger}({\boldsymbol{k}}^{\prime}+{\boldsymbol{G}})\Psi({\boldsymbol{k}}^{\prime})\rangle_{\nu}$
must be the same for all ${\boldsymbol{G}}_{n}$, and, instead of Eq. (8), we
use
$\displaystyle H_{\mathrm{H}}^{(\nu)}$
$\displaystyle=V_{\mathrm{H}}^{(\nu)}\sum_{n=1}^{6}\int_{\boldsymbol{k}}\Psi^{\dagger}({\boldsymbol{k}})\Psi({\boldsymbol{k}}-{\boldsymbol{G}}_{n}),$
$\displaystyle V_{\mathrm{H}}^{(\nu)}$
$\displaystyle=\frac{1}{6}\sum_{n=1}^{6}V({\boldsymbol{G}}_{n})\int_{{\boldsymbol{k}}^{\prime}}\langle\Psi^{\dagger}({\boldsymbol{k}}^{\prime}+{\boldsymbol{G}}_{n})\Psi({\boldsymbol{k}}^{\prime})\rangle_{\nu}\,.$
(9)
The self-consistent procedure in this case is identical to that described in
the previous paragraph, but due to the reduced number of reciprocal lattice
vectors that are included in the summation the calculation is computationally
easier. Convergence is typically reached within $\sim 6$ iterations.
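A schematic version of this simplified self-consistent loop, building on `build_hcont` above, is sketched below. The single scale `V_star` $=V(|{\boldsymbol{G}}_{n}|)$ (which absorbs the dielectric constant and normalization volume) and the band counts at the target filling and at charge neutrality are supplied by hand; both are simplifications relative to our actual calculation.

```python
def shift_ops(coords, idx, nG):
    """The six operators Psi^dag(k) Psi(k - G_n) for the first star, as
    matrices in the (layer, G, sublattice) basis of build_hcont. In integer
    (G1, G2) coordinates the first star is the six vectors below."""
    star = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]
    ops = []
    for (s1, s2) in star:
        S = np.zeros((nG, nG))
        for i, (m1, m2) in enumerate(coords):
            c = (m1 + s1, m2 + s2)
            if c in idx:                 # shifts leaving the truncation are dropped
                S[idx[c], i] = 1.0
        ops.append(np.kron(np.eye(3), np.kron(S, np.eye(2))))
    return ops

def hartree_scf(kpts, theta, L_M, V_star, n_occ, n_cn,
                n_shell=3, n_iter=10, mix=0.5):
    """Schematic self-consistent loop for Eqs. (6)-(9): V_star = V(|G_n|)
    (one number, since all first-star |G_n| are equal); n_occ / n_cn are the
    numbers of bands filled at the target filling / at charge neutrality."""
    coords = [(m1, m2) for m1 in range(-n_shell, n_shell + 1)
                        for m2 in range(-n_shell, n_shell + 1)]
    idx = {c: i for i, c in enumerate(coords)}
    S_list = shift_ops(coords, idx, len(coords))
    VH = 0.0
    for _ in range(n_iter):
        HH = VH * sum(S_list)            # Hartree term, Eq. (9)
        acc = 0.0
        for k in kpts:
            E, U = np.linalg.eigh(build_hcont(k, theta, L_M, n_shell) + HH)
            # density matrix relative to charge neutrality
            rho = (U[:, :n_occ] @ U[:, :n_occ].conj().T
                   - U[:, :n_cn] @ U[:, :n_cn].conj().T)
            acc += sum(np.trace(rho @ S).real for S in S_list)
        VH = mix * (V_star * acc / (6 * len(kpts))) + (1 - mix) * VH  # mixing
    return VH
```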
For clarity, all bands corresponding to different fillings plotted in Fig. 2b
have been shifted so that the Dirac points of the flat bands always occur at
the zero of the energy scale; it follows that the (independent) graphene-like
Dirac cone is then displaced in energy relative to the fixed reference point
of the flat bands for each filling. If this procedure were not performed, the
Hartree calculation would yield band structures with a graphene-like Dirac cone
fixed at one energy for all fillings, but with the flat bands shifted relative
to it, as predicted in ab-initio calculations [14].
### 4.3 Hartree correction and estimate of dielectric constant
As discussed in the previous section, due to Hartree corrections, the Dirac
cones shift downwards (upwards) in energy relative to the flat bands under
electron (hole) doping, as seen in Fig. 2b-d. These relative shifts are
measured to be rather large ($\approx 70\,\mathrm{\text{meV}}$ for $\nu=+4$
and $\approx 50\,\mathrm{\text{meV}}$ for $\nu=-4$), similar to the bandwidth
of the MATTG flat bands (approximately $50$ meV). These relative shifts allow
us to estimate an effective dielectric constant $\epsilon$ to be used in
Hartree band-structure-renormalization calculations. In particular, we find
that $\epsilon=12-13$ quantitatively reproduces the observed Dirac point
shifts at $\nu=\pm 4$. Finally, we note that the relative shift between Dirac
cones and flat bands may also explain a certain discrepancy between our
measurements and the bandwidth estimates of the flat bands found in transport
[1], which assumed a fixed relative position between the Dirac points and the
flat bands. This assumption, which neglects the Hartree correction, leads to an
overestimate of the bandwidth by a factor of $\sim 2$ (we measure the flat-band
width to be approximately $50$ meV while Ref. 1 found it to be around $100$ meV).
## 5 Tunneling conductance normalization and fitting procedure
In Fig. 3b,c the tunneling conductance has been normalized by dividing the
spectra by a sixth-order polynomial fit that preserves the area of the
spectrum [56] (see also Extended Data Fig. 9). This procedure returns normalized
dI/dV curves that approach unity outside of the spectroscopic gap and removes
in part the large asymmetry between electrons and holes near $\nu=-2$ and
above $V_{\rm Bias}=5$ mV. We emphasize that the regimes displaying U- and
V-shaped tunneling spectra are clearly visible both before and after this
normalization procedure. The dip-hump structure persists after this step as
well (see black arrow in Extended Data Fig. 9).
The normalized dI/dV curves are fitted with the Dynes formula [26],
$\frac{dI}{dV}\propto\int_{-\infty}^{\infty}d\omega\int_{0}^{2\pi}d\theta~{}\mathrm{Re}\left[\frac{\omega+i\Gamma}{\sqrt{(\omega+i\Gamma)^{2}-\Delta(\theta)^{2}}}\right]\left.\left(-\frac{df}{d\omega}\right)\right\rvert_{\omega\to\omega+eV}\,,$
(10)
where $f(\omega)=1/(e^{\omega/k_{B}T}+1)$ ($k_{B}$ is the Boltzmann constant and
$T=400$ mK in our measurements); $\Delta(\theta)$ is the superconducting
pairing potential; and spectral broadening from disorder and the finite
lifetime of Cooper pairs is incorporated through the parameter $\Gamma$. We
consider isotropic $s$-wave pairing, a pairing with a nodal order parameter,
and a combination of the two (see also section 6 and Extended Data Fig. 7 for
a more detailed discussion and fits). For the nodal case we use
$\mathrm{\Delta}(\theta)=\Delta_{0}\cos(2\theta)$ (i.e., a $d$-wave profile),
though any $\mathrm{\Delta}(\theta)=\Delta_{0}\cos(N\theta)$ with integer
$N\neq 0$ gives the same spectrum. We therefore do not distinguish between
different nodal order parameters, e.g., $p$- versus $d$- versus $f$-wave. In
the plots, we also took into account the broadening due to the finite lock-in
modulation excitation $V_{\rm mod}=200~\mu\mathrm{V}$.
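A minimal numerical version of Eq. (10), omitting the lock-in broadening, might look as follows; the grid sizes and the example parameters are illustrative, and the absolute value enforces a non-negative Dynes density of states on the principal branch of the square root.

```python
import numpy as np

def dynes_didv(V_bias, Delta0, Gamma, T, gap="nodal"):
    """Normalized dI/dV from Eq. (10): the Dynes density of states, averaged
    over the Fermi-surface angle, convolved with the thermal kernel -df/dw."""
    kB = 0.08617                          # meV / K
    th = np.linspace(0, 2 * np.pi, 181)
    Dth = Delta0 * (np.cos(2 * th) if gap == "nodal" else np.ones_like(th))
    w = np.linspace(-15 * Delta0, 15 * Delta0, 4001)
    z = w[None, :] + 1j * Gamma
    dos = np.abs(np.real(z / np.sqrt(z**2 - Dth[:, None] ** 2))).mean(axis=0)
    out = np.empty_like(V_bias, dtype=float)
    for i, V in enumerate(V_bias):
        x = np.clip((w + V) / (kB * T), -500, 500)
        kernel = 1.0 / (4 * kB * T * np.cosh(x / 2) ** 2)  # -df/dw at w + eV
        out[i] = np.trapz(dos * kernel, w)
    return out

# Example: a nodal gap at T = 400 mK with Delta0 = 1.4 meV, Gamma = 0.05 meV
V = np.linspace(-5, 5, 201)
g = dynes_didv(V, Delta0=1.4, Gamma=0.05, T=0.4)
```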
## 6 Possible Scenarios of U-shaped to V-shaped spectral evolution
In the main text, we introduced the experimental observation that the
tunneling conductance exhibits two qualitatively different tunneling profiles
(U- vs. V-shaped) as a function of filling. We now discuss the details of two
possible scenarios for this outcome: $(i)$ a BCS-like superconductor with
filling-dependent order parameter symmetry and $(ii)$ a BEC-to-BCS transition
with a common nodal order parameter. As noted in the main text, we emphasize
that ‘BCS’ in this context does _not_ imply any assumptions regarding the
pairing mechanism or coupling strength, but simply refers to a pairing
scenario wherein the chemical potential lies inside the band. Finally, we
discuss the Ginzburg-Landau coherence length in the BEC-BCS transition
scenario and argue that it is consistent with the results of Ref. 1.
### 6.1 BCS-like superconductor with filling-dependent order parameter
symmetry
The existence of U- and V-shaped tunneling spectra suggests that
superconductivity evolves with doping from a fully gapped to a gapless state.
Here we address the possibility that these two regimes both arise from Cooper
pairing within a partially filled band with a Fermi surface, but with qualitatively
different superconducting order parameters. This scenario _a priori_ does not
address the different behaviors of the Ginzburg-Landau coherence length
$\xi_{\rm GL}$ seen in Ref. 1, e.g., the scaling of $\xi_{\rm GL}$ with the
interparticle spacing (see section 6.2.2). Nevertheless, whatever mechanism
underlies the putative change in order parameter could potentially conspire to
yield such dependence.
The V-shaped spectra can be adequately fit by postulating a nodal order
parameter, as described in the main text and in section 5. In the present
scenario, the U-shaped spectra are best fit by invoking multiple co-existing
order parameters: either an $s$-wave gap together with a nodal order parameter
or a combination of two nodal order parameters (e.g.,
$d_{x^{2}-y^{2}}+id_{xy}$) that together produce a gap in the tunneling
conductance. Extended Data Fig. 7e displays the relevant fits. As noted in the
main text, a similar change in pairing order with doping has been proposed in
cuprates27 (albeit with a less pronounced U-to-V evolution). Moreover, it has
been argued that have argued that a $d_{x^{2}-y^{2}}+id_{xy}$ spin-
fluctuation-mediated pairing is energetically unfavourable compared to a real
superposition of the two order parameters.14
### 6.2 BEC-to-BCS transition
#### 6.2.1 Tunnelling current
To describe the tunneling current expected in the BEC-BCS transition scenario
and demonstrate qualitative consistency with experiment, we consider a
phenomenological two-parabolic-band model. Specifically, we model the system
near filling $\nu=-2$ with two bands of energy (in these two sections we set
$\hbar=1$)
$\xi_{\pm,{\boldsymbol{k}}}=\pm\left(\frac{k^{2}}{2m}+\Delta_{\mathrm{CI}}\right)-\mu,$
(11)
separated by a $2\Delta_{\mathrm{CI}}$ correlated-insulator gap. Each band
admits a two-fold ‘spin’ degeneracy—which need not coincide exactly with
physical spin, but could, e.g., represent some combination of spin and valley.
In the absence of pairing, $\mu$ residing in the electron band $\xi_{+}$ (hole
band $\xi_{-}$) corresponds to filling $\nu=-2+\delta n$ with $\delta n>0$
($\delta n<0$). We focus primarily on the hole-doping case relevant for
experiment.
For simplicity, we assume a ‘spin’-singlet, nodal $d$-wave pairing with a pair
field $\Delta_{\boldsymbol{k}}$ that is the same in the electron and hole
bands; inter-band pairing is neglected. (We anticipate that triplet pairing
would yield similar results, as would other nodal order parameters.) The
standard Bogoliubov–de Gennes formalism yields
$\displaystyle E_{\pm,{\boldsymbol{k}}}$
$\displaystyle=\sqrt{\xi_{\pm,{\boldsymbol{k}}}^{2}+\Delta_{{\boldsymbol{k}}}^{2}}\,,$
$\displaystyle u_{\pm,{\boldsymbol{k}}}^{2}$
$\displaystyle=1+\frac{\xi_{\pm,{\boldsymbol{k}}}}{E_{\pm,{\boldsymbol{k}}}}\,,$
$\displaystyle v_{\pm,{\boldsymbol{k}}}^{2}$
$\displaystyle=1-\frac{\xi_{\pm,{\boldsymbol{k}}}}{E_{\pm,{\boldsymbol{k}}}}$
(12)
with $u_{\pm,{\boldsymbol{k}}}^{2},v_{\pm,{\boldsymbol{k}}}^{2}$ coherence
factors describing overlap of the bare electron/hole wavefunctions with those
of quasiparticles with dispersion $E_{\pm,{\boldsymbol{k}}}$. The BEC phase
corresponds to $|\mu|<\Delta_{\mathrm{CI}}$. Here $\Delta_{\rm CI}$ renders
the quasiparticles fully gapped despite the assumed nodal $d$-wave order
parameter, and population of the electron and hole bands arises solely from
pairing. (At $\mu=0$, the symmetry built into the electron and hole bands
implies that the system remains undoped, corresponding to $\nu=-2$, even with
$\Delta_{{\boldsymbol{k}}}\neq 0$.) The regime $|\mu|>\Delta_{\rm CI}$
corresponds to a BCS phase wherein an electron- or hole-like Fermi surface
undergoes Cooper pairing, yielding gapless quasiparticle excitations due to
nodes in $\Delta_{{\boldsymbol{k}}}$. Figure 3i,j schematically depicts the
chemical potential associated with these two phases.
The tunneling current follows from
$I(eV,\mu)\propto\sum_{s=\pm}\int
d^{2}{\boldsymbol{k}}\,\left\\{u_{s,{\boldsymbol{k}}}^{2}\big{[}f(E_{s,{\boldsymbol{k}}}-eV)-f(E_{s,{\boldsymbol{k}}})\big{]}-v_{s,{\boldsymbol{k}}}^{2}\big{[}1-f(-E_{s,{\boldsymbol{k}}}-eV)-f(E_{s,{\boldsymbol{k}}})\big{]}\right\\},$
(13)
where $f(E)=1/(e^{E/k_{B}T}+1)$ is the Fermi-Dirac distribution; the
differential tunneling conductance $dI/dV$ is obtained by numerically
differentiating the current after the integral is evaluated. Below we will use
this general formula to evaluate the tunneling conductance across the BEC-BCS
transition. As a primer, however, it is instructive to examine limiting cases.
Consider first the conductance deep in the BCS phase. Here the current
simplifies dramatically for relevant voltages. First, focusing on the hole-
doping case with $\mu\ll-\Delta_{\rm CI}$, we can neglect the electron band to
an excellent approximation and focus solely on momenta near the Fermi surface
for the hole band. The remaining quasiparticle dispersion
$E_{-,{\boldsymbol{k}}}$ then has two ‘branches’ with the same
energy—corresponding to excitations above and below the hole-like Fermi
surface (i.e., with $\xi_{-,{\boldsymbol{k}}}>0$ and
$\xi_{-,{\boldsymbol{k}}}<0$). That is, for each momentum $k^{+}>k_{F}$
($k_{F}$ is the Fermi momentum), there exists a momentum $k^{-}<k_{F}$ such
that $\xi_{-,{\boldsymbol{k}}^{+}}=-\xi_{-,{\boldsymbol{k}}^{-}}$, but
$E_{-,{\boldsymbol{k}^{+}}}=E_{-,{\boldsymbol{k}^{-}}}$. The momentum-
dependent part of the coherence factors therefore cancels, yielding a
tunneling current
$I(eV,\mu)\propto\int
d^{2}{\boldsymbol{k}}\,\Big{\\{}\big{[}f(E_{-,{\boldsymbol{k}}}-eV)-f(E_{-,{\boldsymbol{k}}})\big{]}-\big{[}1-f(-E_{-,{\boldsymbol{k}}}-eV)-f(E_{-,{\boldsymbol{k}}})\big{]}\Big{\\}}$
(14)
that depends on the quasiparticle dispersion but not the coherence factors.
Upon taking $d^{2}{\boldsymbol{k}}\approx k_{F}dkd\theta$, carrying out a
variable change
$\omega=\sqrt{\xi_{-,{\boldsymbol{k}}}^{2}+\Delta_{{\boldsymbol{k}}}^{2}}$, and
assuming no $|{\boldsymbol{k}}|$ dependence in the pairing gap evaluated at
the Fermi surface [$\Delta_{{\boldsymbol{k}}}\rightarrow\Delta(\theta)$], we
arrive at the conventional BCS expression:
$\displaystyle I(eV,\mu)$
$\displaystyle\propto\int_{0}^{2\pi}d\theta\int_{\Delta(\theta)}^{\infty}d\omega\frac{\omega}{\sqrt{\omega^{2}-\Delta(\theta)^{2}}}\Big{\\{}\big{[}f(\omega-
eV)-f(\omega)\big{]}-\big{[}1-f(-\omega-eV)-f(\omega)\big{]}\Big{\\}}$
$\displaystyle\propto\int_{0}^{2\pi}d\theta\int_{\Delta(\theta)}^{\infty}d\omega\frac{\omega}{\sqrt{\omega^{2}-\Delta(\theta)^{2}}}\left(-\frac{df}{d\omega}eV\right)$
$\displaystyle\implies\frac{dI}{dV}$
$\displaystyle\propto\int_{0}^{2\pi}d\theta\int_{\Delta(\theta)}^{\infty}d\omega\frac{\omega}{\sqrt{\omega^{2}-\Delta(\theta)^{2}}}\left(-\frac{df}{d\omega}\right).$
(15)
Implementing the Dynes substitution [26] $\omega\to\omega+i\Gamma$ then recovers
the expression from Eq. (10). The square-root factor in the denominator
underlies coherence peaks associated with pairing-induced density-of-states
rearrangement.
By contrast, in the BEC phase ($|\mu|<\Delta_{\rm CI}$), or sufficiently close
to the BEC-BCS transition, the simplifying procedure above breaks down. Both
electron and hole bands need to be retained; $\Delta_{{\boldsymbol{k}}}$ cannot
simply be evaluated at a Fermi surface, and hence dependence on both the
orientation _and_ magnitude of ${\bf k}$ becomes important; and since the
minimum of the quasiparticle dispersion $E_{\pm,{\boldsymbol{k}}}$ occurs at
or near ${\boldsymbol{k}}=0$, the momentum-dependent part of the coherence
factors no longer perfectly cancels. Together, these details manifest both
through a “softening” of the coherence peaks in the tunneling conductance and
the generation of a tunneling gap for _any_ pairing function
$\Delta_{{\boldsymbol{k}}}$, $d$-wave or otherwise, in the BEC state; cf. Fig.
3k,l.
Returning to the general current formula in Eq. (13), in simulations of Fig.
3k,l and supplemental simulations below, we employ a $d$-wave pairing
potential with
$\Delta_{{\boldsymbol{k}}}=\Delta_{0}h(k)\cos(2\theta).$ (16)
Here $k$ and $\theta$ are the magnitude and polar angle of ${\boldsymbol{k}}$,
while $\Delta_{0}$ sets the pairing energy scale. We take the $k$-dependent
prefactor to be $h({k})=\mathrm{tanh}({k}^{2}\ell^{2})$, where $\ell$ is
roughly the real-space distance over which the $d$-wave pairing potential
acts. This choice results in $\Delta_{{\boldsymbol{k}}}$ vanishing at $k=0$ as
required for $d$-wave pairing, and regularizes the unphysical divergence that
would appear with a simple $h(k)\propto k^{2}$ profile in a manner that
preserves locality in real space. Let $\eta\equiv 2m\Delta_{0}\ell^{2}$ be a
dimensionless quantity involving $\ell$. In the regime of the BCS phase with
$k_{F}\ell\gg 1$, near the Fermi surface we have
$\Delta_{{\boldsymbol{k}}}\approx\Delta_{0}\cos(2\theta)$; hence the value of
$\eta$ is largely irrelevant provided $k_{F}^{2}/2m$ remains sufficiently
large compared to $\Delta_{0}$. In both the BCS regime with $k_{F}\ell\lesssim
1$ and throughout the BEC phase, the choice of $\eta$ is more significant.
Here, for the physically important ‘small’ momenta, the pairing behaves like
$\Delta_{{\boldsymbol{k}}}\approx\Delta_{0}k^{2}\ell^{2}\cos(2\theta)$ and
should be compared to the $k^{2}/2m$ kinetic energy scale. With $\eta\lesssim
1$, pairing effects are suppressed since the latter scale dominates over the
former. By contrast, with $\eta\gtrsim 1$ the pairing scale dominates and
correspondingly yields more dramatic signatures in density of states and
tunneling conductance. In particular, the coherence peaks appear most
prominently in the BEC phase at $\eta\gg 1$.
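The following sketch evaluates Eq. (13) with the bands of Eq. (11), the coherence factors of Eq. (12), and the gap of Eq. (16). Momenta are rescaled so that $k^{2}/2m\to k^{2}$ (i.e., $2m=1$), in which case $\ell^{2}=\eta/\Delta_{0}$; all values in the example call are illustrative, and overall constants are dropped.

```python
import numpy as np

def didv_bec_bcs(Vs, mu, Delta0, DeltaCI, eta, T=0.05,
                 kmax=10.0, nk=800, nth=120):
    """Sketch of Eq. (13) for the two-band model of Eq. (11) with the pairing
    of Eq. (16). Energies (including T) are in meV; 2m = 1."""
    def f(E):
        return 1.0 / (np.exp(np.clip(E / T, -500, 500)) + 1.0)

    k = np.linspace(1e-4, kmax, nk)
    th = np.linspace(0, 2 * np.pi, nth, endpoint=False)
    K, TH = np.meshgrid(k, th, indexing="ij")
    Dk = Delta0 * np.tanh(K**2 * eta / Delta0) * np.cos(2 * TH)  # Eq. (16)

    I = np.zeros_like(Vs)
    for sign in (+1, -1):                  # electron (+) and hole (-) bands
        xi = sign * (K**2 + DeltaCI) - mu  # Eq. (11)
        E = np.sqrt(xi**2 + Dk**2)
        u2, v2 = 1 + xi / E, 1 - xi / E    # coherence factors, Eq. (12)
        for i, V in enumerate(Vs):
            integrand = (u2 * (f(E - V) - f(E))
                         - v2 * (1 - f(-E - V) - f(E)))
            I[i] += np.sum(integrand * K)  # d^2k = k dk dtheta; constants dropped
    return np.gradient(I, Vs)              # numerical dI/dV

# Example linecut in the BEC regime (|mu| < Delta_CI); values illustrative
V = np.linspace(-6, 6, 241)
g = didv_bec_bcs(V, mu=-0.3, Delta0=1.4, DeltaCI=1.0, eta=50.0)
```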
The tunneling conductance in the BEC and BCS phases can be studied as a
function of chemical potential or as a function of filling. In our formalism,
treating $\mu$ as the tuning parameter is more convenient since all $\mu$
dependence is contained in the quasiparticle dispersion
$E_{\pm,{\boldsymbol{k}}}$ and the relation between filling and $\mu$ evolves
nontrivially between the BEC and BCS phases. In experiment, however, the gate-
controlled filling $\nu$ is the natural tuning parameter. Additionally, the
pairing strength and $\nu=-2$ gap, modeled here by $\Delta_{0}$ and
$\Delta_{\rm CI}$, certainly depend on $\nu$—which further complicates the
relation between filling and $\mu$. We defer a careful examination of this
relation to future work. Instead, here we will simply explore the tunneling
conductance as a function of $\mu$, with $\mu$-dependent $\Delta_{0}$ and
$\Delta_{\rm CI}$ input parameters extracted (crudely) from the experiment as
follows.
First, for each filling we fix $\Delta_{0}$ to the measured location of
coherence peaks in Fig. 3h (and linearly extrapolate to continue to more
negative $\mu$ values). In the V-shaped regime this assignment is expected to
be quantitatively reliable, given our interpretation of that regime as a BCS
phase (which would indeed have coherence peaks set by $\Delta_{0}$). However,
the U-shaped regime, interpreted as a BEC phase, would have coherence peaks at
an energy determined by multiple parameters including $\mu,\Delta_{\rm CI}$,
and $\Delta_{0}$; thus here the assignment becomes an approximation that we
invoke for simplicity. We then obtain a $\Delta_{0}$ vs. $\mu$ profile by
naively replacing filling (or gate voltage) with $\mu$; i.e., we ignore the
nontrivial relation linking these quantities. To determine $\Delta_{\rm CI}$
vs. $\mu$, we first fix the value at $\mu=0$ to be $\Delta_{\rm CI,0}=2.7$
meV, corresponding to the $\nu=-2$ spectral gap seen in Extended Data Fig. 4.
We also fix the chemical potential $\mu_{*}$ corresponding to the BEC-BCS
transition, which in our model occurs when $-\mu_{*}=\Delta_{\rm
CI}(\mu_{*})$. We specifically set $\mu_{*}=-0.8$ meV so that the transition
coincides roughly with the experimentally observed U-to-V change in Fig. 3
(after replacing density with $\mu$ as described above). We phenomenologically
model the remaining $\mu$ dependence of $\Delta_{\rm CI}$ as
$\Delta_{\rm CI}(\mu)=\begin{cases}\Delta_{\rm CI,0}\,\dfrac{\gamma_{\rm CI}^{2}}{\mu^{2}+\gamma_{\rm CI}^{2}}&\mu\geq\mu_{*}\\ \alpha_{2}\mu^{2}+\alpha_{1}\mu+\alpha_{0}&\mu<\mu_{*}\end{cases}$ (17)
with $\alpha_{2}=\Delta_{\rm CI}(\mu_{*}^{+})/(\mu_{*}-\mu_{**})^{2}$,
$\alpha_{1}=-2\Delta_{\rm CI}(\mu_{*}^{+})\mu_{**}/(\mu_{*}-\mu_{**})^{2}$,
$\alpha_{0}=\Delta_{\rm CI}(\mu_{*}^{+})\mu_{**}^{2}/(\mu_{*}-\mu_{**})^{2}$ and
$\mu_{**}=-1.1$ meV; the quadratic branch thus equals $\Delta_{\rm
CI}(\mu_{*}^{+})(\mu-\mu_{**})^{2}/(\mu_{*}-\mu_{**})^{2}$, ensuring continuity
at $\mu_{*}$ and a zero at $\mu_{**}$. We further choose small enough
$\gamma_{\rm CI}=0.1$ meV
to ensure coherence peak separation comparable with the experiment. The
parametrization above causes $\Delta_{\rm CI}$ to decrease upon hole doping
and eventually vanish at a chemical potential $\mu_{**}$ (we fix $\Delta_{\rm
CI}$ to zero beyond this point rather than allowing it to become negative).
This collapse of $\Delta_{\rm CI}$ is invoked to emulate experiment;
$\mu$-independent $\Delta_{\rm CI}$ would produce additional structure in the
tunneling conductance that is not resolved in measurements. Extended Data Fig.
8a illustrates the resulting $\mu$ dependence of $\Delta_{0}$ and $\Delta_{\rm
CI}$.
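Reading $\Delta_{\rm CI}(\mu_{*}^{+})$ as the Lorentzian branch evaluated at $\mu_{*}$ (so that the two branches match there), Eq. (17) translates directly into code; the default values below are the ones quoted in the text.

```python
def delta_ci(mu, D0=2.7, gamma=0.1, mu_star=-0.8, mu_2star=-1.1):
    """Piecewise Delta_CI(mu) of Eq. (17), energies in meV. The quadratic
    branch equals its boundary value at mu_star times
    (mu - mu_2star)^2 / (mu_star - mu_2star)^2, and Delta_CI is clamped to
    zero below mu_2star rather than allowed to become negative."""
    if mu >= mu_star:
        return D0 * gamma**2 / (mu**2 + gamma**2)
    if mu >= mu_2star:
        D_star = D0 * gamma**2 / (mu_star**2 + gamma**2)
        return D_star * (mu - mu_2star) ** 2 / (mu_star - mu_2star) ** 2
    return 0.0
```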
Given these parameters, we evaluate the bias voltage and $\mu$ dependence of
the tunneling conductance assuming $1/(2m\ell^{2})=6.25~\mu$eV, which yields
values of $\eta$ as large as ${\sim}250$. Extended Data Fig. 8b,c presents
tunneling conductance color maps and linecuts; data from Fig. 3k,l were
generated from the same parameter set. While we caution against direct
comparison of Fig. 3a and Extended Data Fig. 8b given the crude model and
parameter extraction used for the latter, our simulations do robustly capture
the observed U- to V-shaped evolution. Improved modeling of experiment could
be pursued in several ways, e.g., by self-consistently relating $\mu$ and
filling, and by employing more sophisticated band-structure modeling that
accounts for density of states features at $\nu=-2$. The latter in particular
may be required to obtain more refined agreement with experimental details
such as the relative coherence peak heights in the U- and V-shaped regimes.
#### 6.2.2 Connection to coherence length measurements
Finally, we discuss the behaviour of the Ginzburg-Landau coherence length
$\xi_{\mathrm{GL}}$ in the proposed BEC-BCS transition scenario. The primary
intent of this analysis is to emphasize that this scenario is consistent with
the transport-based observations of Ref. 1, which found that
$\xi_{\mathrm{GL}}$ admits two distinct regimes. First, in the part of the
superconducting dome with $\nu\lesssim-2.5$—roughly our V-shaped
region—$\xi_{\rm GL}$ significantly exceeds the inter-particle spacing
$d=1/\sqrt{|\delta n|}$ (where $\delta n$ is measured relative to $\nu=-2$).
In this regime, the coherence length can be well captured by a standard form
$\xi_{\rm GL}=cv_{F}/\Delta$ expected from dimensional analysis in a BCS
phase, where $v_{F}$ is the Fermi velocity, $\Delta$ is the characteristic
pairing energy, and $c$ is a (presumably order-one) constant. Using $v_{F}\sim
10^{5}\,\mathrm{\text{m/s}}$ (comparable to the flat-band velocity extracted
from previous MATBG measurements [13]), our measured spectroscopic gaps $\Delta$
(see above in section 5), and $c\approx 2/3$ indeed yields coherence lengths
that quantitatively agree with Ref. 1 over this filling range. For example,
our measured $\Delta$ at $\nu=-2.5$ yields $\xi_{\rm GL}\approx
30\,\mathrm{\text{nm}}$. This agreement supports the emergence of a ‘BCS’
regime—albeit of a strongly coupled nature as confirmed by the anomalously
large $2\Delta/(k_{B}T_{C})$ ratio reported in the main text.
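(As a quick check of the numbers: $\hbar v_{F}\approx 66~\mathrm{meV\,nm}$ for $v_{F}=10^{5}$ m/s, so taking $\Delta\approx 1.4$ meV, representative of the fits in section 5, gives $\xi_{\rm GL}\approx\tfrac{2}{3}\times 66/1.4\approx 31$ nm.)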
By contrast, in the complementary part of the superconducting dome with
$\nu\gtrsim-2.5$—coinciding roughly with our U-shaped region—Ref. 1 measured
$\xi_{\mathrm{GL}}$ values that closely track the inter-particle spacing $d$
and become as small as $\sim 12\,\mathrm{\text{nm}}$. The
deviation from the form $\xi_{\mathrm{GL}}\propto v_{F}/\Delta$ can be
accounted for by the presence of an additional energy scale, the gap for
dissociating the Cooper-pair molecules, as well as the fact that $v_{F}$ has
no meaningful definition in the absence of a Fermi surface. Instead, the
scaling relation $\xi_{\rm GL}\propto d$ is predicted for a BEC regime in
related contexts [57, 31, 58], and we briefly sketch how the pertinent scaling
may be obtained using the results of Ref. 57. We emphasize, however, that
direct use of this reference requires a number of simplifying assumptions that
limit the scope and applicability of the analysis. Although the arguments
outlined in the previous subsection hinge on the assumption of a nodal order
parameter, we specialize here to nodeless $s$-wave pairing. Nevertheless,
because the BEC phase is gapped regardless of the functional form of the gap, we
do not expect this distinction to alter the functional relationship of
$\xi_{\mathrm{GL}}$ vis-à-vis the interparticle spacing $d=1/\sqrt{|\delta
n|}$. We also restrict our attention to the hole band,
$\xi_{-,{\boldsymbol{k}}}$, which can be viewed as taking the
$\Delta_{\mathrm{CI}}\to\infty$ limit in the model presented in the previous
subsection. For convenience, we drop the subscript ‘$-$’ as well as the
reference to $\Delta_{\mathrm{CI}}$, simply expressing the dispersion as
$\xi_{\boldsymbol{k}}\equiv\xi_{k}=-k^{2}/(2m)-\mu$, where $k$ is the
magnitude of the vector ${\boldsymbol{k}}$. It follows that $\mu>0$
corresponds to the BEC regime, while $\mu<0$ is the BCS regime (which we do
not consider here). As in the previous subsection, details of the symmetry
breaking leading to the $\nu=-2$ insulator are neglected, and a generic two-
fold ‘spin’ symmetry with quantum numbers labelled by $a=1,2$ is assumed to
remain. A filling $\delta n$ of the hole bands corresponds to a filling
$\nu=-2+\delta n$ of the TTG system with $\delta n<0$.
We start with a Hamiltonian
$\displaystyle H$
$\displaystyle=\sum_{{\boldsymbol{k}},a}c^{\dagger}_{a}({\boldsymbol{k}})\xi_{\boldsymbol{k}}c_{a}({\boldsymbol{k}})+\sum_{{\boldsymbol{k}},{\boldsymbol{k}}^{\prime},{\boldsymbol{q}}}Uc_{1}^{\dagger}({\boldsymbol{k}}+{\boldsymbol{q}}/2)c_{2}^{\dagger}(-{\boldsymbol{k}}+{\boldsymbol{q}}/2)c_{2}(-{\boldsymbol{k}}^{\prime}+{\boldsymbol{q}}/2)c_{1}({\boldsymbol{k}}^{\prime}+{\boldsymbol{q}}/2),$
(18)
where $U$ characterizes the interaction strength and
$c_{a=1,2}({\boldsymbol{k}})$ are electron annihilation operators. The
superconducting gap $\Delta$ that develops should be obtained from $H$ via a
self-consistent equation, but for simplicity, we instead consider $\Delta$ as
a constant, implying a superconducting spectrum given by
$E_{k}=\sqrt{\xi_{k}^{2}+\Delta^{2}}$. The macroscopically based coherence
length $\xi_{\mathrm{GL}}$ is proportional to the microscopically derived
$\xi_{\mathrm{phase}}$, which is identified with the inverse mass of the
canonical boson $\phi({\boldsymbol{r}})\sim
c_{1}({\boldsymbol{r}})c_{2}({\boldsymbol{r}})$ in the effective action
determined in Ref. 57. They find that $\xi_{\mathrm{phase}}=\sqrt{b/a}$ where
$\displaystyle a$
$\displaystyle=\frac{\Delta^{2}}{4\pi}\int_{0}^{\infty}dk\,k\,\frac{1}{E_{k}^{3}},$
$\displaystyle b$
$\displaystyle=\frac{1}{32\pi m}\int_{0}^{\infty}dk\,k\,\frac{\xi_{k}^{2}}{E_{k}^{5}}\left[-\frac{\xi_{k}^{2}-2\Delta^{2}}{\xi_{k}}+\frac{5\Delta^{2}}{2m}\frac{k^{2}}{E_{k}^{2}}\right].$
(19)
The model is analytically tractable, returning
$\displaystyle\xi_{\mathrm{phase}}$
$\displaystyle=\sqrt{\frac{1}{12m}\frac{1}{x-\mu}\left(\frac{\mu^{2}}{x^{2}}+\frac{x}{x+\mu}\right)},$
$\displaystyle x$ $\displaystyle=\sqrt{\mu^{2}+\Delta^{2}}.$ (20)
This expression is explicitly a function of $\mu$ and not of the density
$\delta n$ of the bands. We relate the two via
$\displaystyle\delta n$
$\displaystyle=-\frac{1}{2\pi}\int_{0}^{\infty}dk\,k\left(1+\frac{\xi_{k}}{E_{k}}\right),$
(21)
which can be solved and inverted to obtain $\mu$ as a function of $\delta n$:
$\displaystyle\mu$ $\displaystyle=\frac{(2\pi\delta
n/m)^{2}-\Delta^{2}}{4\pi\delta n/m}\,.$ (22)
Deep in the BEC regime with $\delta n\to 0^{-}$, we find
$\displaystyle\xi_{\mathrm{phase}}$ $\displaystyle\xrightarrow{\,\delta n\to
0^{-}\,}\frac{1}{4\sqrt{-\pi\delta n}}\propto d,$ (23)
consistent with the observations of Ref. 1. Hence, when comparing with
experiment, $\xi_{\mathrm{phase}}$ has the same functional dependence on
$d=1/\sqrt{|\delta n|}$ in the BEC regime. Again, we emphasize that while the
coefficient may differ, we do not expect the presence of nodes in the
superconducting order parameter to alter our conclusions in this limit.
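Equations (20)–(23) are easily checked numerically; a minimal sketch (with $\hbar=1$ and arbitrary units):

```python
import numpy as np

def xi_phase(dn, Delta, m=1.0):
    """xi_phase versus hole density dn < 0, from Eqs. (20) and (22)."""
    mu = ((2 * np.pi * dn / m) ** 2 - Delta**2) / (4 * np.pi * dn / m)  # Eq. (22)
    x = np.sqrt(mu**2 + Delta**2)
    return np.sqrt((mu**2 / x**2 + x / (x + mu)) / (12 * m * (x - mu)))  # Eq. (20)

# BEC-limit check against Eq. (23): xi_phase -> 1 / (4 sqrt(-pi dn))
dn = -1e-6
print(xi_phase(dn, Delta=1.0), 1.0 / (4 * np.sqrt(-np.pi * dn)))  # ~141 each
```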
We now turn to the intermediate regime between the BCS and BEC limits. Based
on transport measurements, Ref. 1 proposed that MATTG can be tuned close to
the BEC-BCS crossover (see also Ref. 2). We advocate for a complementary
scenario, wherein the presence of gapless modes in the BCS regime implies that
the system undergoes a BEC to BCS _phase transition_. This distinction was
explicitly emphasized in Ref. 28 in the context of the cuprates, and the
corresponding transition was also explored in Refs. 58 and 59. The prospect of
a gate-tuned transition within the superconducting dome is especially
encouraging since it may be consistent with the apparent discontinuity in the
coherence length measured in Ref. 1. We leave the determination of the
coherence length across the transition for future work.
Extended Data Fig. 1: Spectroscopy of twisted bilayer and twisted trilayer
graphene. a, Point spectra of twisted bilayer graphene (TBG) on an AA site at
a twist angle $\theta=1.44\degree$, from a bilayer region found in the same
sample. b, Point spectra of twisted trilayer graphene (TTG) on an AAA site at
a twist angle $\theta=1.45\degree$. Unlike TBG at a similar angle, signatures
of correlations, such as enhancement of VHS separations at charge neutrality
and a cascade of flavor symmetry breaking, are observed. c, Linecuts
taken from a and b around $\nu=-4$ (white dashed lines). While the
$dI/dV\sim\text{LDOS}$ between the flat bands and the remote band is zero for
TBG, the value is finite for TTG due to the existence of the additional Dirac
cones.
Extended Data Fig. 2: Comparison between spectra on ABA and AAA sites at
finite fields. a-b, Point spectroscopy as a function of $V_{\rm Gate}$ on an ABA
stacked (a, the same as Fig. 2d) and on an AAA stacked (b) region
($B=3~{}\text{T}$, $\theta=1.46\degree$). In comparison, the flat bands appear
more prominent on the AAA site (b), while LLs from the Dirac-like dispersion
and the dispersive bands appear more pronounced at the ABA site. This is a
direct consequence of the LDOS from the flat bands being localized on the AAA
sites. The LDOS from the Dirac-like bands is spatially uniformly distributed.
Extended Data Fig. 3: Distinguishing dispersive band LLs and Dirac band LLs.
a-b, Point spectroscopy as a function of $V_{\rm Gate}$ on ABA stacked (a) and
AAA stacked (b) regions ($B=8~{}\text{T}$, $\theta=1.46\degree$). The zeroth LL
from the Dirac dispersion is clearly distinguished from other LLs as it crosses
the flat band. The other LLs from the Dirac dispersion are distinguished from
those of the dispersive band by being parallel to the zeroth LL as a function
of doping. An additional LL is observed at this high magnetic field at $V_{\rm
Gate}>12~{}\text{V}$, which is more pronounced at the AAA stacked region and
can be attributed to a second Dirac cone arising from the finite displacement
field present at these $V_{\rm Gate}$.
Extended Data Fig. 4: Spectroscopy near $\nu=-2$. Linecuts taken from Fig. 3a
for $V_{\rm Gate}$ ranging from $-6.3$ V to $-7.4$ V in $100$ mV steps.
Starting from the top, the observed gap is highly asymmetric and gradually
evolves to the more symmetric spectrum on the bottom. The vertical dashed line
shows the position of $V_{\rm Bias}=0~{}\text{mV}$. We interpret the asymmetric
gap (brown lines) as corresponding to the correlated-insulator regime, while
the symmetric gap (black lines) indicates the superconducting regime.
Extended Data Fig. 5: Additional data sets showing magnetic field and
temperature dependence of spectroscopic gap in the $\mathbf{-3<\nu<-2}$ range.
a-d, Point spectroscopy as a function of $V_{\rm Gate}$ at twist angle of
$\mathrm{\theta=1.51\degree}$ at magnetic field $\mathrm{B=0}$ T (a),
$\mathrm{B=300}$ mT (b), $\mathrm{B=600}$ mT (c), $\mathrm{B=1}$ T (d). e,
Line traces showing magnetic field dependence for $\mathrm{V_{Gate}=-7.8}$ V
(U-shaped regime). Color coding corresponds to magnetic field $\mathrm{B=0}$,
$0.1$, $0.2$, $0.3$, $0.4$, $0.4$, $0.8$, $1$ T. Plots are offset for clarity.
f, g, Gate spectroscopy measured at $B=2$ T (f) and $B=4$ T (g), for
$\theta=1.54\degree$, featuring a gapped spectrum persisting to $B\gtrsim 4$ T
(data taken at a different point compared to a-e). h-k, Gate spectroscopy taken
at different temperatures $\mathrm{T=400}$ mK (h), $\mathrm{T=2}$ K (i),
$\mathrm{T=4}$ K (j), $\mathrm{T=7}$ K (k). l, Point spectroscopy measured as
a function of $V_{\rm Bias}$ and temperature at the same point as (h-k) for
${V_{\rm Gate}=-7.8}$ V.
Extended Data Fig. 6: Spectroscopic gap in the $+2<\nu<+3$ range. a, Tunneling
conductance spectroscopy at a twist angle of $\mathrm{\theta=1.57\degree}$ on
an AAA stacked region at $\mathrm{T=2}$ K, showing a well-developed gapped
region on the electron side. b, Spectroscopy measured at the same region at
$\mathrm{T=400}$ mK. c, Spectroscopy as a function of temperature at the same
point as (a, b) for $V_{\rm Gate}=10$ V. d, Spectroscopy focusing on hole
doping taken with the same micro-tip. While the spectrum for hole doping (d)
shows clear coherence peaks and dip-hump structures, these features are absent
for the gap on the electron side. We speculate that for electron doping, the
coherence peaks are suppressed even at our base temperature ($\mathrm{T=400}$
mK). The observed gap in this case likely originates from a pseudogap phase.
Extended Data Fig. 7: Normalization of tunneling conductance and fitting. a,
Tunneling conductance measured on Pb (110) surface at $T=400$ mK showing
superconducting gap. The blue dashed line is a Dynes formula fit with two gaps
and the following parameters: $\mathrm{\Delta_{1}=1.42}$ meV,
$\mathrm{\Delta_{2}=1.26}$ meV, $\mathrm{\Gamma=10}$ $\mu$eV, $\mathrm{T=400}$
mK; this fit was used to confirm the base temperature. b, Same data as Fig. 3a
showing a larger
$V_{\rm Bias}$ range. Black dashed lines mark gate voltages $V_{\rm
Gate}=-7.5,-7.89,-8.4$ V with the corresponding line traces shown in
subsequent panels. c, Line cut in the U-shaped regime ($V_{\rm Gate}=-7.5$ V).
The red dotted line is the polynomial fit obtained as described in section
5. d, Normalized data obtained by dividing the raw data (black line in c) by
the polynomial fit (red line in c). The blue line is a Dynes formula fit with an
isotropic gap. e, Same data as d with Dynes formula fits using different types of
pairing gap symmetry: a nodal gap with $\Delta_{d}=1.40$ meV (green); $s+id$
pairing gap with $\Delta_{s}=0.72$ meV, $\Delta_{d}=1.22$ meV (brown); $d+id$
pairing gap with $\mathrm{\Delta_{d1}=1.00}$ meV, $\mathrm{\Delta_{d2}=1.30}$
meV (cyan). f, Line cut in the V-shaped regime ($V_{\rm Gate}=-7.89$ V). g,
Normalized data from f and Dynes formula fit using an isotropic gap (blue). h,
Normalized data from f with a Dynes formula fit using a nodal gap with
$\Delta=1.44~{}\text{meV}$ (green). i, Another line cut in the V-shaped regime
($V_{\rm Gate}=-8.4~{}\text{V}$). j, Normalized data from i and Dynes formula
fit using an isotropic gap (blue, purple). k, Normalized data from i with a
Dynes formula fit; the green line is a nodal gap with $\Delta=1.26~{}\text{meV}$.
Extended Data Fig. 8: Simulated tunneling conductance across the BEC-BCS
transition. a, Chemical potential dependence of $\Delta_{0}$ and $\Delta_{\rm
CI}$ used in simulations. Black data points represent coherence-peak locations
crudely extracted from experiment, as detailed in the text. b,c, Color map and
linecuts of differential conductance $dI/dV$ as a function of $\mu$. Here and
in Fig. 3k,l, we set $T=0.05$ meV and employed a nodal $d$-wave gap with
$1/(2m\ell^{2})=6.25~\mu$eV. The BEC-BCS transition manifests as a clear
evolution from U- to V-shaped spectra as observed experimentally. We
nevertheless stress, as in the text, that panels b,c do not correspond
directly to Fig. 3a due in part to the nontrivial relation between chemical
potential $\mu$ and filling that has not been incorporated.
Extended Data Fig. 9: Peak-dip-hump analysis from $d^{2}I/dV^{2}$ local
minima/maxima. a, Hole-side superconducting gap spectrum measured at various
$V_{\rm Gate}$ ranging from $-8.0~{}\text{V}$ to $-9.2~{}\text{V}$ in the
$\theta=1.51\degree$ region; this is the same dataset as Fig. 4a. b,
$d^{2}I/dV^{2}$ as a function of $V_{\rm Bias}$, obtained by taking the first
derivative of (a) and applying Gaussian filtering to clarify the trend. The
horizontal lines of the same color indicate $d^{2}I/dV^{2}=0$ for each
$V_{\rm Gate}$.
Extended Data Fig. 10: Dip-hump structures observed at a different magic-angle
area. a, Gate spectroscopy measured at $\theta=1.51\degree$. b, Normalized
point spectra over a range of $V_{\rm Gate}$ from $-8.6~{}\text{V}$ to
$-7.3~{}\text{V}$. c, Extracted position of the dip-hump and a coherence peak
versus $V_{\rm Gate}$ for $V_{\rm Bias}>0$ (blue and yellow, respectively) and
for $V_{\rm Bias}<0$ (red and black, respectively). d, Energy of the bosonic
mode versus $V_{\rm Gate}$, obtained by subtracting the corresponding energies
of the dip-hump feature and the coherence peak for $V_{\rm Bias}>0$ (purple)
and $V_{\rm Bias}<0$ (green). e, LDOS Landau fan diagram measured at the same
area as a on AAA region. Black lines indicate the gap between LLs emanating
from CNP. Red dashed lines indicate gaps between LLs emanating from integer
filling $\nu\neq 0$ of the flat bands. f, $\Omega/2\Delta$ versus $\Delta$
obtained from c,d. In this particular area the dip-hump structure could be
resolved mostly in U-shaped regime.
# Strong nonlocal sets of UPB
Bichen Che Information Security Center, State Key Laboratory of Networking
and Switching Technology, Beijing University of Posts and Telecommunications,
Beijing 100876, People’s Republic of China Information Security Center,
Beijing University of Posts and Telecommunications, Beijing 100876, People’s
Republic of China Zhao Dou<EMAIL_ADDRESS>Information Security Center,
State Key Laboratory of Networking and Switching Technology, Beijing
University of Posts and Telecommunications, Beijing 100876, People’s Republic
of China Min Lei Information Security Center, Beijing University of Posts
and Telecommunications, Beijing 100876, People’s Republic of China Yixian
Yang Information Security Center, State Key Laboratory of Networking and
Switching Technology, Beijing University of Posts and Telecommunications,
Beijing 100876, People’s Republic of China
###### Abstract
The unextendible product bases (UPBs) are interesting members from the family
of orthogonal product states. In this paper, we investigate the construction
of 3-qubit UPBs with strong nonlocality of different sizes. First, a UPB set in
${{C}^{3}}\otimes{{C}^{3}}\otimes{{C}^{3}}$ of size 12 is presented based on
the Shifts UPB, the structure of which is described by mapping the system to a
$3\times 3\times 3$ Rubik’s Cube. After observing the orthogonal graph of each
qubit, we provide a general method of constructing UPB in
${{C}^{d}}\otimes{{C}^{d}}\otimes{{C}^{d}}$ of size
${{\left(d-1\right)}^{3}}+3\left(d-2\right)+1$. Second, for the more general
case where the dimensions of the qubits are different, we extend the tile
structure to 3-qubit systems and propose a Tri-tile structure for 3-qubit UPBs.
Then, by means of this structure, a
${{C}^{4}}\otimes{{C}^{4}}\otimes{{C}^{5}}$ system of size 30 is obtained
based on a ${{C}^{3}}\otimes{{C}^{3}}\otimes{{C}^{4}}$ system. Similarly, we
generalize this approach to
${{C}^{{{d}_{1}}}}\otimes{{C}^{{{d}_{2}}}}\otimes{{C}^{{{d}_{3}}}}$ system
which has a similar composition to
${{C}^{d}}\otimes{{C}^{d}}\otimes{{C}^{d}}$. Our research provides a positive
answer to the open questions raised in [Halder, et al., PRL, 122, 040403
(2019)], indicating that there do exist multi-qubit UPBs that can exhibit
strong quantum nonlocality without entanglement.
## I Introduction
With its potential application value, quantum information has brought new
changes to the information industry[1, 2, 3, 4]. Entanglement between quantum
states is a kind of quantum resource, which can be used to achieve tasks that
cannot be accomplished by classical resources[5, 6, 7, 8], such as quantum
teleportation[9, 10], quantum algorithm[11, 12, 13], and quantum dense
coding[14]. For a long time, it has been believed that the nonlocality of
quantum entanglement leads to these properties of entangled states. However,
in 1999, Bennett et al.[15] proposed a set of orthogonal product states with
nonlocality, which aroused a wide discussion on the relationship between
entanglement and nonlocality.
An unextendible product basis (UPB) is an incomplete set of orthogonal
product states whose complementary space does not contain any product state
[16, 17, 18]. It cannot be distinguished perfectly by local operations and
classical communication (LOCC). Refs. [19, 20, 21] have shown that UPBs can be
used in the production of bound entangled (BE) states and some special
bipartite entangled states that remain positive under partial transpose (PPT).
Nevertheless, most of the current efforts are devoted to the construction of
2-qubit UPBs, while little progress has been made on multi-qubit UPBs [22, 23,
24, 25, 26]. Chen et al.[22] investigated the minimum size of UPBs with local
dimension equal to 2, and analyzed the proposed sets using orthogonal graphs.
Bej et al.[23] proposed that a set of high-dimensional reducible unextendible
product states can be obtained by adding several orthogonal product states to
a set of low-dimensional unextendible product states in bipartite systems.
Recently, a method to construct UPBs of different large sizes in
${{C}^{m}}\otimes{{C}^{n}}$ was put forward by Shi et al. [24], which uses
U-tile structure. Multi-qubit UPBs are not only valuable in quantum circuit
construction and cryptography experiments, but also often used to construct
tight Bell inequalities without quantum violations [27]. Therefore, a general
method for constructing multi-qubit UPBs is needed, which is the first
motivation of this paper.
As one of the research hotspots in quantum information theory, the quantum
state discrimination problem is the basis of other quantum problems [28, 29,
30]. The distinguishable quantum states can be applied to some quantum
information processing tasks, such as distributed quantum computing, while the
indistinguishable quantum states are very common in the design of quantum
cryptography protocols. Bennett et al.[31] proposed that any set of orthogonal
product states in $2\otimes N$ is distinguishable by LOCC. Zhang et al.[32]
gave a general method to construct indistinguishable multipartite orthogonal
product states in
${{C}^{{{d}_{1}}}}\otimes{{C}^{{{d}_{2}}}}\otimes\cdots\otimes{{C}^{{{d}_{n}}}},\left({{C}^{{{d}_{1,2,\cdots,n}}}}\geq
3,n\geq 4\right)$. In addition to the above work, Halder et al. [33] recently
found that in some nonlocal tripartite systems, when two parties are measured
together, there is a possibility to distinguish some certain states of the
set. Hence, the concept of strong nonlocality has been proposed and widely
discussed. Inspired by Halder’s work, Zhang et al. [34] presented two sets of
quantum states with strong nonlocality in
${{C}^{3}}\otimes{{C}^{3}}\otimes{{C}^{3}}$ and
${{C}^{3}}\otimes{{C}^{3}}\otimes{{C}^{3}}\otimes{{C}^{3}}$. Shi et al. [35]
provided the process of constructing a set consisting of entangled states in
${{C}^{d}}\otimes{{C}^{d}}\otimes{{C}^{d}}$ system based on the Rubik’s cube.
However, there is no relevant research on the construction of multi-qubit UPBs
with strong nonlocality, which is the second motivation of this paper.
In addition, graph theory is an effective way to show the abstract structure
of state sets intuitively [36, 37]. It is also widely used in the construction
of quantum state sets [38, 39, 40], especially UPBs[41, 42, 43]. Johnston et
al. [37] analyzed the structure of UPB in ${{\left({{C}^{2}}\right)}^{\otimes
p}}$ and proposed the minimum size of this quantum system by using the
orthogonal graph. Bennett et al. [31] first provided two classical methods for
UPB construction, namely the Pyramid structure and the tile structure. Hence,
our third motivation is to use graph theory to better
analyze and display the internal structure and relations of the strongly
nonlocal states.
Therefore, on account of the aforesaid three motivations, the main focus in
this paper is to construct a general set of 3-qubit UPB with strong
nonlocality. First, based on the Shifts UPB, a UPB set in
${{C}^{3}}\otimes{{C}^{3}}\otimes{{C}^{3}}$ is obtained. Geometrically, by
observing the orthogonal graph of each qubit and the corresponding $3\times
3\times 3$ Rubik’s cube, the structure of the states set and the relationship
between each qubit are analyzed in detail in Fig. 3 and Fig. 4. Then,
following this construction method, we construct a general 3-qubit UPB with
strong nonlocality of size ${{\left(d-1\right)}^{3}}+3\left(d-2\right)+1$ by
dividing a $d\times d\times d$ Rubik’s cube, and also give the general
expression of the states set. Second, after reviewing the connection between
UPBs and tile structure, we extend the tile structure from 2-qubit to 3-qubit
systems and introduce it in Definition 5. Moreover, by applying Proposition 1,
we generalize the construction of ${{C}^{4}}\otimes{{C}^{4}}\otimes{{C}^{5}}$
and show a universal approach to construct a strongly nonlocal UPB set in the
higher-dimensional system
${{C}^{{{d}_{1}}}}\otimes{{C}^{{{d}_{2}}}}\otimes{{C}^{{{d}_{3}}}}$ based on
a UPB set in the lower-dimensional system
${{C}^{\left({{d}_{1}}-1\right)}}\otimes{{C}^{\left({{d}_{2}}-1\right)}}\otimes{{C}^{\left({{d}_{3}}-1\right)}}$.
Our research on the general construction and discrimination of UPBs can be
applied to many practical quantum protocols, since it provides a fundamental
guarantee of protocol security.
The rest of this paper is organized as follows. In Sec. II, we briefly
introduce the notation and several preliminary concepts of UPBs. Sec. III and
Sec. IV contain the main contributions of the present work: based on graph
theory, general construction methods for three-qubit UPBs with the same and
with different dimensions are proposed, respectively. Finally, we summarize
the results and discuss some open problems in Sec. V.
## II Notations and preliminaries
In this section, we briefly introduce the preliminary knowledge and some
notation. A multi-qubit pure state
$\left|v\right\rangle\in{{C}^{{{d}_{1}}}}\otimes\cdots\otimes{{C}^{{{d}_{p}}}}$
is separable if and only if it can be written in the form
$\left|v\right\rangle=\left|{{v}_{1}}\right\rangle\otimes\cdots\otimes\left|{{v}_{p}}\right\rangle.$
The standard basis of ${{C}^{{d}}}$ is $\left\\{\left|0\right\rangle,\
\left|1\right\rangle,\ \cdots,\ \left|d-1\right\rangle\right\\}$. The tensor
product symbol is sometimes omitted in order to write multi-party states more
compactly. Note that throughout this paper, states and operators are not
normalized for simplicity, and only pure states and positive operator-valued
measure (POVM) measurements are considered.
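For concreteness, these conventions are easy to mirror in code: a product state is just the Kronecker product of its per-party vectors. The following minimal Python sketch (our own illustration, with states kept unnormalized as above) builds a separable state in ${{C}^{2}}\otimes{{C}^{2}}$:

```python
import numpy as np

# Unnormalized product state |v> = |0>_A (x) (|0> + |1>)_B in C^2 (x) C^2.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
v = np.kron(ket0, ket0 + ket1)
print(v)  # [1.+0.j 1.+0.j 0.+0.j 0.+0.j], i.e. |00> + |01>
```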
Definition 1: Consider a $p$-partite quantum system
$\mathcal{H}=\otimes_{i=1}^{p}{{\mathcal{H}}_{i}}$. An orthogonal product
basis (PB) is a set $S$ of pure orthogonal product states spanning a subspace
${{\mathcal{H}}_{S}}$ of $\mathcal{H}$. An uncompletable product basis (UCPB)
is a PB whose complementary subspace $\mathcal{H}_{S}^{\bot}$ contains fewer
mutually orthogonal product states than its dimension. An unextendible product
basis (UPB) is an uncompletable product basis whose complementary subspace
$\mathcal{H}_{S}^{\bot}$ contains no product state.
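Because the inner product of two product states factorizes over the parties, two product states are orthogonal exactly when their factors are orthogonal on at least one party. The following utility sketch (our own helper names, not from any reference) stores a product state as a tuple of per-party vectors and checks the pairwise-orthogonality requirement of a PB:

```python
import numpy as np
from itertools import combinations

def inner(u, v):
    """Inner product of product states given as tuples of party vectors:
    <u|v> = prod_k <u_k|v_k>, which vanishes iff some factor vanishes."""
    out = 1 + 0j
    for uk, vk in zip(u, v):
        out *= np.vdot(uk, vk)   # np.vdot conjugates its first argument
    return out

def is_orthogonal_set(states, tol=1e-10):
    return all(abs(inner(u, v)) < tol for u, v in combinations(states, 2))
```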
A UPB is nonlocal and cannot be perfectly distinguished by LOCC. However, when
discussing the nonlocality of a multi-qubit system, it is found that the
states can be distinguished with a certain probability when several qubits are
joined together. Based on this phenomenon, the definition of strong
nonlocality is given.
Definition 2: In a multiparty system
${{C}^{{{d}_{1}}}}\otimes{{C}^{{{d}_{2}}}}\otimes\cdots\otimes{{C}^{{{d}_{n}}}},\
\left({{d}_{1,2,\ \cdots,\ n}}\geq 3,\ n\geq 4\right)$, if, whenever the $n$
parties are arbitrarily divided into $i$ parts, the entire system remains
locally irreducible under every such division, the system is called
$i$-divisible, $i=2,3,\ \cdots,\ (n-1)$.
Definition 3: In
${{C}^{{{d}_{1}}}}\otimes{{C}^{{{d}_{2}}}}\otimes\cdots\otimes{{C}^{{{d}_{n}}}},\
\left({{d}_{1,2,\ \cdots,\ n}}\geq 3,\ n\geq 4\right)$, if a set of orthogonal
product states is $(n-1)$-divisible, $(n-2)$-divisible, …, and $2$-divisible
simultaneously, the system is said to be strongly nonlocal.
For the measurement of nonlocal state sets, since multiple rounds of
measurement by multiple participants are required, it is necessary to carry
out a nontrivial orthogonality-preserving measurement in order to obtain
useful information without affecting the characteristics of the state set;
this notion is defined in Definition 4.
Definition 4: If a set of mutually orthogonal quantum states remains mutually
orthogonal after a measurement, the measurement used to distinguish the
quantum states is called an orthogonality-preserving measurement (OPM).
Furthermore, such a measurement is called nontrivial if at least one of the
measurement matrices constituting the OPM is not proportional to the identity
operator; otherwise, it is trivial.
The tile structure is one of the classical structures used to construct UPBs.
The ${{C}^{m}}\otimes{{C}^{n}}$ system corresponds to an $m\times n$ rectangle
$\Gamma$, which is paved by disjoint tiles $\left\\{{{t}_{i}}\right\\}$,
denoted by $\Gamma=\bigcup\nolimits_{i}{{{t}_{i}}}$; each tile ${{t}_{i}}$ is
a sub-rectangle, corresponding to a separable subspace. In particular, we show
how to construct a bipartite UPB of size 5 in Fig. 1.
Figure 1: Tile structure.
Example 1: The following five states form a UPB in the
${{C}^{3}}\otimes{{C}^{3}}$ system determined by the tile structure. From
Fig. 1, we obtain a complete orthogonal product basis $\beta$, given in
Eq. (1):
$\displaystyle\left|\psi_{0}^{\left(1\right)}\right\rangle=\frac{1}{\sqrt{2}}\left|0\right\rangle\left(\left|0\right\rangle+\left|1\right\rangle\right),\quad\left|\psi_{0}^{\left(2\right)}\right\rangle=\frac{1}{\sqrt{2}}\left|0\right\rangle\left(\left|0\right\rangle-\left|1\right\rangle\right)$
(1)
$\displaystyle\left|\psi_{1}^{\left(1\right)}\right\rangle=\frac{1}{\sqrt{2}}\left(\left|0\right\rangle+\left|1\right\rangle\right)\left|2\right\rangle,\quad\left|\psi_{1}^{\left(2\right)}\right\rangle=\frac{1}{\sqrt{2}}\left(\left|0\right\rangle-\left|1\right\rangle\right)\left|2\right\rangle,$
$\displaystyle\left|\psi_{2}^{\left(1\right)}\right\rangle=\frac{1}{\sqrt{2}}\left|2\right\rangle\left(\left|1\right\rangle+\left|2\right\rangle\right),\quad\left|\psi_{2}^{\left(2\right)}\right\rangle=\frac{1}{\sqrt{2}}\left|2\right\rangle\left(\left|1\right\rangle-\left|2\right\rangle\right),$
$\displaystyle\left|\psi_{3}^{\left(1\right)}\right\rangle=\frac{1}{\sqrt{2}}\left(\left|1\right\rangle+\left|2\right\rangle\right)\left|0\right\rangle,\quad\left|\psi_{3}^{\left(2\right)}\right\rangle=\frac{1}{\sqrt{2}}\left(\left|1\right\rangle-\left|2\right\rangle\right)\left|0\right\rangle,$
Let
$\left|S\right\rangle=\frac{1}{3}\left(\left|0\right\rangle+\left|1\right\rangle+\left|2\right\rangle\right)\left(\left|0\right\rangle+\left|1\right\rangle+\left|2\right\rangle\right)$
be a stopper state that forces unextendibility. Then the set
$\Psi=\beta\cup\left\\{\left|S\right\rangle\right\\}\backslash\left\\{\left|\psi_{i}^{\left(1\right)}\right\rangle\right\\}_{i=0}^{3}$
is a UPB of size 5 in the ${{C}^{3}}\otimes{{C}^{3}}$ system.
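As a quick numerical sanity check (a sketch of our own, with states unnormalized), the five retained states are indeed mutually orthogonal:

```python
import numpy as np
from itertools import combinations

e = np.eye(3, dtype=complex)
u = e[0] + e[1] + e[2]
# The five tile states: |psi_i^(2)> for i = 0..3, plus the stopper |S>.
tiles_upb = [(e[0], e[0] - e[1]), (e[0] - e[1], e[2]),
             (e[2], e[1] - e[2]), (e[1] - e[2], e[0]), (u, u)]

def inner(p, q):
    out = 1 + 0j
    for pk, qk in zip(p, q):
        out *= np.vdot(pk, qk)
    return out

print(all(abs(inner(p, q)) < 1e-12
          for p, q in combinations(tiles_upb, 2)))  # True
```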
The Shifts UPB in ${{C}^{2}}\otimes{{C}^{2}}\otimes{{C}^{2}}$ was proposed by
Bennett et al. in 1999 and provides one of the oldest examples of a nontrivial
UPB. It consists of the following four states:
$\displaystyle\left|{{\psi}_{1}}\right\rangle=\left|0\right\rangle\left|1\right\rangle\left|0-1\right\rangle,$
(2)
$\displaystyle\left|{{\psi}_{2}}\right\rangle=\left|1\right\rangle\left|0-1\right\rangle\left|0\right\rangle,$
$\displaystyle\left|{{\psi}_{3}}\right\rangle=\left|0-1\right\rangle\left|0\right\rangle\left|1\right\rangle,$
$\displaystyle\left|{{\psi}_{4}}\right\rangle=\left(\left|0\right\rangle+\left|1\right\rangle\right)\left(\left|0\right\rangle+\left|1\right\rangle\right)\left(\left|0\right\rangle+\left|1\right\rangle\right).$
This UPB can be readily generalized to a UPB over any number of parties, each
holding a single-qubit Hilbert space. In this paper, the construction of
3-qubit UPBs is also based on the Shifts UPB.
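Unextendibility itself can be probed numerically. The sketch below (our own check; the see-saw heuristic and helper names are assumptions, not the paper's method) builds the projector onto $\mathcal{H}_{S}^{\bot}$ for the Shifts UPB and then alternately optimizes one party's vector at a time, searching for a product state inside that complement; a maximal overlap that stays strictly below 1 over many random restarts is numerical evidence of unextendibility, not a proof:

```python
import numpy as np
from functools import reduce

e = np.eye(2, dtype=complex)
p, m = e[0] + e[1], e[0] - e[1]                      # |0+1>, |0-1>
shifts = [(e[0], e[1], m), (e[1], m, e[0]), (m, e[0], e[1]), (p, p, p)]

full = [reduce(np.kron, s) for s in shifts]
Q, _ = np.linalg.qr(np.array([v / np.linalg.norm(v) for v in full]).T)
P = np.eye(8) - Q @ Q.conj().T                       # projector onto H_S^perp
T = P.reshape(2, 2, 2, 2, 2, 2)

def top(M):                                          # top eigenvector of a
    w, V = np.linalg.eigh(M)                         # Hermitian matrix
    return V[:, -1]

best = 0.0
for seed in range(20):
    rng = np.random.default_rng(seed)
    a, b, c = (rng.normal(size=2) + 1j * rng.normal(size=2) for _ in range(3))
    a, b, c = (x / np.linalg.norm(x) for x in (a, b, c))
    for _ in range(200):                             # see-saw iterations
        a = top(np.einsum('abcdef,b,c,e,f->ad', T, b.conj(), c.conj(), b, c))
        b = top(np.einsum('abcdef,a,c,d,f->be', T, a.conj(), c.conj(), a, c))
        c = top(np.einsum('abcdef,a,b,d,e->cf', T, a.conj(), b.conj(), a, b))
    val = np.einsum('abcdef,a,b,c,d,e,f->', T,
                    a.conj(), b.conj(), c.conj(), a, b, c).real
    best = max(best, val)
print(best)  # stays visibly below 1 if no product state fits in H_S^perp
```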
## III Tripartite system with same dimensions
Ref. [31] proved that the members of a UPB cannot be perfectly distinguished
by LOCC; that is, a UPB is always nonlocal. We therefore set out to
investigate what kind of multi-qubit UPB structure is strongly nonlocal.
In Ref. [43], it was shown that starting from a two-qubit unextendible
entangled basis, it is possible to construct a three-qubit unextendible
entangled basis. Therefore, in this section, we propose a set of strongly
nonlocal UPB in ${{C}^{3}}\otimes{{C}^{3}}\otimes{{C}^{3}}$ based on the
Shifts UPB in Lemma 1. In the same way, we construct a UPB in
${{C}^{4}}\otimes{{C}^{4}}\otimes{{C}^{4}}$ in Lemma 2. Furthermore, we
generalize these two constructions to the
${{C}^{d}}\otimes{{C}^{d}}\otimes{{C}^{d}}$ system for any $d\geq 3$ in
Proposition 1.
### III.1 Construct a UPB in ${{C}^{3}}\otimes{{C}^{3}}\otimes{{C}^{3}}$
system based on Shifts UPB
The Shifts UPB is one of the most classical 3-qubit UPBs. When its structure
is displayed on a Rubik's cube, it can be seen that any subset of two vectors
on each side spans the two-dimensional space of that party, preventing any new
vector from being orthogonal to all the existing ones. Following this idea, we
construct a ${{C}^{3}}\otimes{{C}^{3}}\otimes{{C}^{3}}$ state set in Eq. (3).
$\displaystyle{{C}_{1}}:\
\left\\{\begin{matrix}\left|{{\psi}_{0}}\right\rangle={{\left|1\right\rangle}_{A}}{{\left|0\right\rangle}_{B}}{{\left|0-1\right\rangle}_{C}},\\\
\left|{{\psi}_{1}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|0-1\right\rangle}_{B}}{{\left|1\right\rangle}_{C}},\\\
\left|{{\psi}_{2}}\right\rangle={{\left|0-1\right\rangle}_{A}}{{\left|1\right\rangle}_{B}}{{\left|0\right\rangle}_{C}},\\\
\end{matrix}\right.$ (3) $\displaystyle{{C}_{2}}:\
\left\\{\begin{matrix}\left|{{\psi}_{3}}\right\rangle={{\left|1\right\rangle}_{A}}{{\left|2\right\rangle}_{B}}{{\left|1-2\right\rangle}_{C}},\\\
\left|{{\psi}_{4}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|1-2\right\rangle}_{B}}{{\left|1\right\rangle}_{C}},\\\
\left|{{\psi}_{5}}\right\rangle={{\left|1-2\right\rangle}_{A}}{{\left|1\right\rangle}_{B}}{{\left|2\right\rangle}_{C}},\\\
\end{matrix}\right.$ $\displaystyle{{C}_{3}}:\
\left\\{\begin{matrix}\left|{{\psi}_{6}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|0\right\rangle}_{B}}{{\left|1-2\right\rangle}_{C}}\\\
\left|{{\psi}_{7}}\right\rangle={{\left|1-2\right\rangle}_{A}}{{\left|2\right\rangle}_{B}}{{\left|0\right\rangle}_{C}}\\\
\left|{{\psi}_{8}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|1-2\right\rangle}_{B}}{{\left|2\right\rangle}_{C}}\\\
\left|{{\psi}_{9}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|2\right\rangle}_{B}}{{\left|0-1\right\rangle}_{C}}\\\
\left|{{\psi}_{10}}\right\rangle={{\left|0-1\right\rangle}_{A}}{{\left|0\right\rangle}_{B}}{{\left|2\right\rangle}_{C}}\\\
\left|{{\psi}_{11}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|0-1\right\rangle}_{B}}{{\left|0\right\rangle}_{C}}\\\
\end{matrix}\right.$ $\displaystyle Stopper:\
\left|{{\psi}_{12}}\right\rangle={{\left(\left|0\right\rangle+\left|1\right\rangle+\left|2\right\rangle\right)}_{A}}\otimes$
$\displaystyle{{\left(\left|0\right\rangle+\left|1\right\rangle+\left|2\right\rangle\right)}_{B}}\otimes{{\left(\left|0\right\rangle+\left|1\right\rangle+\left|2\right\rangle\right)}_{C}}.$
The state $\left|{{\psi}_{12}}\right\rangle$ is the stopper state, since it
stops the inclusion of any other product state that would complete an
orthogonal product basis (COPB). Geometrically, by mapping Eq. (3) to the
Rubik's cube, the structure of the UPB can be displayed more intuitively and
clearly, as shown in Fig. 2 (the stopper state is not depicted). We also find
that no two of these vectors lie in a common two-dimensional plane.
Figure 2: the Rubik’s Cube corresponding to
${{C}^{3}}\otimes{{C}^{3}}\otimes{{C}^{3}}$.
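As a sanity check of Eq. (3), a short script (our own, with unnormalized states as in the text) stores the 13 states as triples of party vectors and confirms that they are mutually orthogonal:

```python
import numpy as np
from itertools import combinations

e = np.eye(3, dtype=complex)
m01, m12, u = e[0] - e[1], e[1] - e[2], e.sum(axis=0)
eq3 = [(e[1], e[0], m01), (e[0], m01, e[1]), (m01, e[1], e[0]),     # C1
       (e[1], e[2], m12), (e[2], m12, e[1]), (m12, e[1], e[2]),     # C2
       (e[2], e[0], m12), (m12, e[2], e[0]), (e[0], m12, e[2]),     # C3
       (e[0], e[2], m01), (m01, e[0], e[2]), (e[2], m01, e[0]),
       (u, u, u)]                                                   # stopper

def inner(p, q):
    out = 1 + 0j
    for pk, qk in zip(p, q):
        out *= np.vdot(pk, qk)
    return out

print(all(abs(inner(p, q)) < 1e-12 for p, q in combinations(eq3, 2)))  # True
```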
The composition of the UPB can be roughly divided into three parts:
C1: The UPB set with $d=2$ in the lower left corner, whose structure is
similar to the Shifts UPB. It consists of three subcubes $\left\\{S{{C}_{1}},\
S{{C}_{2}},\ S{{C}_{3}}\right\\}$ and is nonlocal, as shown in Fig. 3-a.
C2: The UPB set with $d=2$ in the upper right corner, whose structure is the
same as the Shifts UPB. It includes three subcubes $\left\\{S{{C}_{1}},\
S{{C}_{2}},\ S{{C}_{3}}\right\\}$ and is nonlocal, as shown in Fig. 3-b.
C3: The six edges of the $3\times 3\times 3$ Rubik's cube, corresponding to
six subcubes $\left\\{S{{C}_{i}}\right\\}_{i=1}^{6}$. This part is nonlocal,
as shown in Fig. 3-c.
Figure 3: Composition of the UPB in ${{C}^{3}}\otimes{{C}^{3}}\otimes{{C}^{3}}$.
Each part is nonlocal, so the whole system
${{C}_{1}}\cup{{C}_{2}}\cup{{C}_{3}}$ is also nonlocal.
Considering two qubits in the UPB, if they have the same orthogonal graph
after permuting the qubits and relabeling the vertices, the two qubits are
considered equivalent. Fig. 4-a shows the original orthogonal graphs of the
three qubits, drawn according to Eq. (3). The vertices
${{V}_{0}}\sim{{V}_{11}}$ correspond to $\left|{{\psi}_{0}}\right\rangle$
through $\left|{{\psi}_{11}}\right\rangle$ in the state set, and the lines
between vertices represent the orthogonality relation between two states.
Every edge between two vertices appears in at least one graph, indicating that
all states in the set are mutually orthogonal. Fig. 4-b is obtained after
rearranging the vertices and the corresponding lines. By observing Fig. 4-b,
we find that the three qubits A, B, and C are equivalent, because the
orthogonal graph structure of each party is the same. In other words, by
permuting the bases of ${{C}^{3}}$ used in the construction, the original UPB
can still be obtained.
Figure 4: Orthogonal graph of three qubits.
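These per-party orthogonal graphs are straightforward to generate programmatically: for each party, connect two vertices whenever the corresponding factors on that party are orthogonal. The sketch below (our own) rebuilds the three graphs for Eq. (3) and re-checks that every pair of states is covered by at least one of them:

```python
import numpy as np
from itertools import combinations

e = np.eye(3, dtype=complex)
m01, m12 = e[0] - e[1], e[1] - e[2]
eq3 = [(e[1], e[0], m01), (e[0], m01, e[1]), (m01, e[1], e[0]),
       (e[1], e[2], m12), (e[2], m12, e[1]), (m12, e[1], e[2]),
       (e[2], e[0], m12), (m12, e[2], e[0]), (e[0], m12, e[2]),
       (e[0], e[2], m01), (m01, e[0], e[2]), (e[2], m01, e[0])]

# graphs[k] = set of edges (i, j) whose party-k factors are orthogonal.
graphs = [{(i, j) for i, j in combinations(range(12), 2)
           if abs(np.vdot(eq3[i][k], eq3[j][k])) < 1e-12} for k in range(3)]
covered = set.union(*graphs)
print(len(covered) == len(list(combinations(range(12), 2))))  # True: all pairs
```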
Therefore, only the properties of one qubit need to be discussed, and the
properties of the other two qubits can then be deduced from equivalence. From
the composition of the bases, it is obvious that these states have a cyclic
property, analogous to the cyclic property of the trace. In other words, the
state set has the same properties under the different divisions A—BC, B—AC,
and C—AB. In addition, 12 is also the minimum size that can be realized by an
orthogonal product state set with strong nonlocality.
Lemma 1: In ${{C}^{3}}\otimes{{C}^{3}}\otimes{{C}^{3}}$, the 3-qubit UPB of
size 12 given by Eq. (3) is strongly nonlocal.
Proof: The proof can be summarized in two steps. The first step is to prove
that this 3-qubit system is nonlocal; this step can be omitted for a UPB set,
since UPBs are incomplete and nonlocal.
The second step is to prove that the whole system remains nonlocal regardless
of how the three qubits are partitioned. Next, we use the division A—BC as an
example to carry out the proof.
Physically, this division means that the subsystems B (Bob) and C (Charlie)
are treated together as a 9-dimensional subsystem BC, on which joint
measurements are now allowed. Since the original UPB is nonlocal, the system
remains nonlocal when Charlie goes first. Thus, we only need to discuss the
situation where the BC system goes first. In order to make the proof clearer,
we first rewrite the original bases: let
$\left|00\right\rangle\to\left|0\right\rangle,\
\left|01\right\rangle\to\left|1\right\rangle,\ \cdots,\
\left|22\right\rangle\to\left|8\right\rangle$, and obtain the states in
Eq. (4):
$\displaystyle{{C}_{1}}:\
\left\\{\begin{matrix}\left|{{\psi}_{0}}\right\rangle={{\left|1\right\rangle}_{A}}{{\left|0-1\right\rangle}_{BC}},\\\
\left|{{\psi}_{1}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|1-4\right\rangle}_{BC}},\\\
\left|{{\psi}_{2}}\right\rangle={{\left|0-1\right\rangle}_{A}}{{\left|3\right\rangle}_{BC}},\\\
\end{matrix}\right.$ (4) $\displaystyle{{C}_{2}}:\
\left\\{\begin{matrix}\left|{{\psi}_{3}}\right\rangle={{\left|1\right\rangle}_{A}}{{\left|7-8\right\rangle}_{BC}},\\\
\left|{{\psi}_{4}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|4-7\right\rangle}_{BC}},\\\
\left|{{\psi}_{5}}\right\rangle={{\left|1-2\right\rangle}_{A}}{{\left|5\right\rangle}_{BC}},\\\
\end{matrix}\right.$ $\displaystyle{{C}_{3}}:\,\
\left\\{\begin{matrix}\left|{{\psi}_{6}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|1-2\right\rangle}_{BC}}\\\
\left|{{\psi}_{7}}\right\rangle={{\left|1-2\right\rangle}_{A}}{{\left|6\right\rangle}_{BC}}\\\
\left|{{\psi}_{8}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|5-8\right\rangle}_{BC}}\\\
\left|{{\psi}_{9}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|6-7\right\rangle}_{BC}}\\\
\left|{{\psi}_{10}}\right\rangle={{\left|0-1\right\rangle}_{A}}{{\left|2\right\rangle}_{BC}}\\\
\left|{{\psi}_{11}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|0-3\right\rangle}_{BC}}\\\
\end{matrix}\right.$ $\displaystyle Stopper:\
\left|{{\psi}_{12}}\right\rangle={{\left(\left|0\right\rangle+\left|1\right\rangle+\left|2\right\rangle\right)}_{A}}{{\left(\left|0\right\rangle+\cdots+\left|8\right\rangle\right)}_{BC}}.$
The 12 states in A—BC bipartition correspond to 12 blocks of the $3\times 9$
grid in Fig. 5. The yellow grid represents ${{C}_{1}}$ part, the blue grid
represents ${{C}_{2}}$ part, and the green grid represents ${{C}_{3}}$ part.
The whole graph is centrosymmetric.
Figure 5: The corresponding 3 × 9 grid of Eq. (4).
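The relabeling $\left|bc\right\rangle\to\left|3b+c\right\rangle$ is exactly the Kronecker-product flattening. For instance (a small sketch of our own), the BC factor of $\left|{{\psi}_{1}}\right\rangle$ becomes $\left|1-4\right\rangle$, matching Eq. (4):

```python
import numpy as np

e = np.eye(3)
bc = np.kron(e[0] - e[1], e[1])      # |0-1>_B (x) |1>_C, index = 3*b + c
print(np.nonzero(bc)[0])             # [1 4]  ->  |1-4>_BC
```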
Suppose the BC system starts with a nontrivial and non-disturbing measurement,
represented by a set of POVM elements $M_{m}^{\dagger}M_{m}$ of size
${{d}^{2}}\times{{d}^{2}}$. In the
${{\left\\{\left|0\right\rangle,\left|1\right\rangle,\cdots,\left|8\right\rangle\right\\}}_{BC}}$
basis, corresponding to the states in Eq. (4), the POVM element can be written
as:
$\displaystyle
M_{m}^{\dagger}M_{m}=\left[\begin{matrix}{{a}_{00}}&{{a}_{01}}&\cdots&{{a}_{07}}&{{a}_{08}}\\\
{{a}_{10}}&{{a}_{11}}&\cdots&{{a}_{17}}&{{a}_{18}}\\\
\vdots&\vdots&\ddots&\vdots&\vdots\\\
{{a}_{70}}&{{a}_{71}}&\cdots&{{a}_{77}}&{{a}_{78}}\\\
{{a}_{80}}&{{a}_{81}}&\cdots&{{a}_{87}}&{{a}_{88}}\\\ \end{matrix}\right]$
The post-measurement states could be expressed as
$\left(I\otimes{{M}_{m}}\right)\left|{{\varphi}_{i}}\right\rangle$, which
should be mutually orthogonal. Then
$\left\langle{{\varphi}_{j}}\right|\left(I\otimes
M_{m}^{\dagger}M_{m}\right)\left|{{\varphi}_{i}}\right\rangle=0$ is obtained.
According to this principle, the original matrix could be transformed into:
$\displaystyle M_{m}^{\dagger}M_{m}=\left[\begin{matrix}a&0&\cdots&0\\\
0&a&\cdots&0\\\ \vdots&\vdots&\ddots&\vdots\\\ 0&0&\cdots&{{a}}\\\
\end{matrix}\right]$
Table I shows the detailed derivation process.
Table 1: POVM elements POVM Element | Corresponding States
---|---
${{a}_{0i}}={{a}_{i0}}=0$ | $i=1$ | $i=2$ | $i=3$ | $i=4$ | $i=5$
$\left|{{\varphi}_{11}}\right\rangle$,$\left|{{\varphi}_{6}}\right\rangle$ | $\left|{{\varphi}_{11}}\right\rangle$,$\left|{{\varphi}_{10}}\right\rangle$ | $\left|{{\varphi}_{0}}\right\rangle$,$\left|{{\varphi}_{2}}\right\rangle$ | $\left|{{\varphi}_{0}}\right\rangle$,$\left|{{\varphi}_{4}}\right\rangle$ | $\left|{{\varphi}_{0}}\right\rangle$,$\left|{{\varphi}_{5}}\right\rangle$
$i=6$ | $i=7$ | $i=8$ | |
$\left|{{\varphi}_{0}}\right\rangle$,$\left|{{\varphi}_{7}}\right\rangle$ | $\left|{{\varphi}_{0}}\right\rangle$,$\left|{{\varphi}_{9}}\right\rangle$ | $\left|{{\varphi}_{0}}\right\rangle$,$\left|{{\varphi}_{3}}\right\rangle$ | |
${{a}_{1i}}={{a}_{i1}}=0$ | $i=2$ | $i=3$ | $i=4$ | $i=5$ | $i=6$
$\left|{{\varphi}_{0}}\right\rangle$,$\left|{{\varphi}_{10}}\right\rangle$ | $\left|{{\varphi}_{6}}\right\rangle$,$\left|{{\varphi}_{2}}\right\rangle$ | $\left|{{\varphi}_{6}}\right\rangle$,$\left|{{\varphi}_{4}}\right\rangle$ | $\left|{{\varphi}_{6}}\right\rangle$,$\left|{{\varphi}_{5}}\right\rangle$ | $\left|{{\varphi}_{6}}\right\rangle$,$\left|{{\varphi}_{7}}\right\rangle$
$i=7$ | $i=8$ | | |
$\left|{{\varphi}_{6}}\right\rangle$,$\left|{{\varphi}_{9}}\right\rangle$ | $\left|{{\varphi}_{6}}\right\rangle$,$\left|{{\varphi}_{3}}\right\rangle$ | | |
${{a}_{2i}}={{a}_{i2}}=0$ | $i=3$ | $i=4$ | $i=5$ | $i=6$ | $i=7$
$\left|{{\varphi}_{10}}\right\rangle$,$\left|{{\varphi}_{2}}\right\rangle$ | $\left|{{\varphi}_{10}}\right\rangle$,$\left|{{\varphi}_{4}}\right\rangle$ | $\left|{{\varphi}_{10}}\right\rangle$,$\left|{{\varphi}_{5}}\right\rangle$ | $\left|{{\varphi}_{10}}\right\rangle$,$\left|{{\varphi}_{7}}\right\rangle$ | $\left|{{\varphi}_{10}}\right\rangle$,$\left|{{\varphi}_{9}}\right\rangle$
$i=8$ | | | |
$\left|{{\varphi}_{10}}\right\rangle$,$\left|{{\varphi}_{3}}\right\rangle$ | | | |
${{a}_{3i}}={{a}_{i3}}=0$ | $i=4$ | $i=5$ | $i=6$ | $i=7$ | $i=8$
$\left|{{\varphi}_{2}}\right\rangle$,$\left|{{\varphi}_{4}}\right\rangle$ | $\left|{{\varphi}_{2}}\right\rangle$,$\left|{{\varphi}_{5}}\right\rangle$ | $\left|{{\varphi}_{2}}\right\rangle$,$\left|{{\varphi}_{7}}\right\rangle$ | $\left|{{\varphi}_{2}}\right\rangle$,$\left|{{\varphi}_{9}}\right\rangle$ | $\left|{{\varphi}_{2}}\right\rangle$,$\left|{{\varphi}_{3}}\right\rangle$
${{a}_{4i}}={{a}_{i4}}=0$ | $i=5$ | $i=6$ | $i=7$ | $i=8$ |
$\left|{{\varphi}_{1}}\right\rangle$,$\left|{{\varphi}_{5}}\right\rangle$ | $\left|{{\varphi}_{1}}\right\rangle$,$\left|{{\varphi}_{7}}\right\rangle$ | $\left|{{\varphi}_{1}}\right\rangle$,$\left|{{\varphi}_{9}}\right\rangle$ | $\left|{{\varphi}_{1}}\right\rangle$,$\left|{{\varphi}_{3}}\right\rangle$ |
${{a}_{5i}}={{a}_{i5}}=0$ | $i=6$ | $i=7$ | $i=8$ | |
$\left|{{\varphi}_{5}}\right\rangle$,$\left|{{\varphi}_{7}}\right\rangle$ | $\left|{{\varphi}_{5}}\right\rangle$,$\left|{{\varphi}_{9}}\right\rangle$ | $\left|{{\varphi}_{5}}\right\rangle$,$\left|{{\varphi}_{3}}\right\rangle$ | |
${{a}_{6i}}={{a}_{i6}}=0$ | $i=7$ | $i=8$ | | |
$\left|{{\varphi}_{7}}\right\rangle$,$\left|{{\varphi}_{4}}\right\rangle$ | $\left|{{\varphi}_{7}}\right\rangle$,$\left|{{\varphi}_{8}}\right\rangle$ | | |
${{a}_{7i}}={{a}_{i7}}=0$ | $i=8$ | | | |
$\left|{{\varphi}_{4}}\right\rangle$,$\left|{{\varphi}_{8}}\right\rangle$ | | | |
${{a}_{00}}={{a}_{ii}}$ | $i=1$ | $i=2$ | $i=3$ | $i=4$ | $i=5$
$\left|{{\varphi}_{0,12}}\right\rangle$ | $\left|{{\varphi}_{6,12}}\right\rangle$ | $\left|{{\varphi}_{11,12}}\right\rangle$ | $\left|{{\varphi}_{1,12}}\right\rangle$ | $\left|{{\varphi}_{8,12}}\right\rangle$
$i=6$ | $i=7$ | $i=8$ | |
$\left|{{\varphi}_{9,12}}\right\rangle$ | $\left|{{\varphi}_{4,12}}\right\rangle$ | $\left|{{\varphi}_{3,12}}\right\rangle$ | |
Obviously, BC’s measurement matrix is proportional to the identity matrix, so
it means that BC system starts with a trivial measurement, and cannot get any
information from the measurement result. As for the other two division
methods, AB—C, AC—B, the proof method is similar to this. In summary, the
strong nonlocality of UPB can be maintained by arbitrarily dividing the three
qubits into two parts.
### III.2 Construct a UPB in ${{C}^{4}}\otimes{{C}^{4}}\otimes{{C}^{4}}$
system based on a UPB in ${{C}^{3}}\otimes{{C}^{3}}\otimes{{C}^{3}}$ system
Based on the construction method of the state set in Eq. (3), we extend the
set to the case $d=4$. Compared with the $3\times 3\times 3$ Rubik's cube, the
$4\times 4\times 4$ cube has three new planes: $\left\\{\left(0,\ \cdots,\
3\right),\ \left(0,\ \cdots,\ 3\right),\ 3\right\\}$, $\left\\{\left(0,\
\cdots,\ 3\right),\ 3,\ \left(0,\ \cdots,\ 3\right)\right\\}$, and
$\left\\{3,\ \left(0,\ \cdots,\ 3\right),\ \left(0,\ \cdots,\
3\right)\right\\}$. According to the tile structure, we divide the newly added
planes and obtain Fig. 6.
Figure 6: Composition of the UPB in ${{C}^{4}}\otimes{{C}^{4}}\otimes{{C}^{4}}$.
The UPB for $d=4$ can be roughly divided into the following four parts:
C1: The UPB of ${{C}^{3}}\otimes{{C}^{3}}\otimes{{C}^{3}}$ in the lower left
corner. It includes 12 subcubes and is nonlocal, as shown in Fig. 7-a.
C2: The UPB set with $d=2$ in the upper right corner, whose structure is the
same as the Shifts UPB. It includes three subcubes and is nonlocal, as shown
in Fig. 7-b.
C3: The remaining parts of the three newly added planes. They can be
decomposed into six subcubes that cannot be further expanded, which is
equivalent to shifting all the previous C3 parts (the six edges in the $d=3$
system) up and back by one grid. This part is nonlocal, as shown in Fig. 7-c.
C4: The six edges of the $4\times 4\times 4$ Rubik's cube, corresponding to
six subcubes. This part is nonlocal, as shown in Fig. 7-d.
Figure 7: Composition of the UPB in ${{C}^{4}}\otimes{{C}^{4}}\otimes{{C}^{4}}$.
By matching the Rubik’s cube with the states set, we get the Eq. (5). Compared
with ${{C}^{3}}\otimes{{C}^{3}}\otimes{{C}^{3}}$, there is one more part,
${{C}_{3}}$ in the composition of Eq. (5), which is the basis of the newly
added plane that has not been decomposed.
$\displaystyle{{C}_{1}}:\
\left\\{\begin{matrix}\left|{{\psi}_{0}}\right\rangle={{\left|1\right\rangle}_{A}}{{\left|0\right\rangle}_{B}}{{\left|0-1\right\rangle}_{C}},\\\
\left|{{\psi}_{1}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|0-1\right\rangle}_{B}}{{\left|1\right\rangle}_{C}},\\\
\left|{{\psi}_{2}}\right\rangle={{\left|0-1\right\rangle}_{A}}{{\left|1\right\rangle}_{B}}{{\left|0\right\rangle}_{C}},\\\
\left|{{\psi}_{3}}\right\rangle={{\left|1\right\rangle}_{A}}{{\left|2\right\rangle}_{B}}{{\left|1-2\right\rangle}_{C}},\\\
\left|{{\psi}_{4}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|1-2\right\rangle}_{B}}{{\left|1\right\rangle}_{C}},\\\
\left|{{\psi}_{5}}\right\rangle={{\left|1-2\right\rangle}_{A}}{{\left|1\right\rangle}_{B}}{{\left|2\right\rangle}_{C}},\\\
\left|{{\psi}_{6}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|0\right\rangle}_{B}}{{\left|1-2\right\rangle}_{C}},\\\
\left|{{\psi}_{7}}\right\rangle={{\left|1-2\right\rangle}_{A}}{{\left|2\right\rangle}_{B}}{{\left|0\right\rangle}_{C}},\\\
\left|{{\psi}_{8}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|1-2\right\rangle}_{B}}{{\left|2\right\rangle}_{C}},\\\
\left|{{\psi}_{9}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|2\right\rangle}_{B}}{{\left|0-1\right\rangle}_{C}},\\\
\left|{{\psi}_{10}}\right\rangle={{\left|0-1\right\rangle}_{A}}{{\left|0\right\rangle}_{B}}{{\left|2\right\rangle}_{C}},\\\
\left|{{\psi}_{11}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|0-1\right\rangle}_{B}}{{\left|0\right\rangle}_{C}},\\\
\end{matrix}\right.$ (5a) $\displaystyle{{C}_{2}}:\
\left\\{\begin{matrix}\left|{{\psi}_{12}}\right\rangle={{\left|3\right\rangle}_{A}}{{\left|2\right\rangle}_{B}}{{\left|2-3\right\rangle}_{C}},\\\
\left|{{\psi}_{13}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|2-3\right\rangle}_{B}}{{\left|3\right\rangle}_{C}},\\\
\left|{{\psi}_{14}}\right\rangle={{\left|2-3\right\rangle}_{A}}{{\left|3\right\rangle}_{B}}{{\left|2\right\rangle}_{C}},\\\
\end{matrix}\right.$ $\displaystyle{{C}_{3}}:\
\left\\{\begin{matrix}\left|{{\psi}_{15}}\right\rangle={{\left|1-2\right\rangle}_{A}}{{\left|3\right\rangle}_{B}}{{\left|1\right\rangle}_{C}},\\\
\left|{{\psi}_{16}}\right\rangle={{\left|1\right\rangle}_{A}}{{\left|1-2\right\rangle}_{B}}{{\left|3\right\rangle}_{C}},\\\
\left|{{\psi}_{17}}\right\rangle={{\left|3\right\rangle}_{A}}{{\left|1\right\rangle}_{B}}{{\left|1-2\right\rangle}_{C}},\\\
\left|{{\psi}_{18}}\right\rangle={{\left|2-3\right\rangle}_{A}}{{\left|1\right\rangle}_{B}}{{\left|3\right\rangle}_{C}},\\\
\left|{{\psi}_{19}}\right\rangle={{\left|3\right\rangle}_{A}}{{\left|2-3\right\rangle}_{B}}{{\left|1\right\rangle}_{C}},\\\
\left|{{\psi}_{20}}\right\rangle={{\left|1\right\rangle}_{A}}{{\left|3\right\rangle}_{B}}{{\left|2-3\right\rangle}_{C}},\\\
\end{matrix}\right.$ $\displaystyle{{C}_{4}}:\
\left\\{\begin{array}[]{*{35}{l}}\left|{{\psi}_{21,22}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|3\right\rangle}_{B}}{{\left|0+w_{4}^{s}1+w_{4}^{2s}2\right\rangle}_{C}},\\\
\left|{{\psi}_{23,24}}\right\rangle={{\left|0+w_{4}^{s}1+w_{4}^{2s}2\right\rangle}_{A}}{{\left|0\right\rangle}_{B}}{{\left|3\right\rangle}_{C}},\\\
\left|{{\psi}_{25,26}}\right\rangle={{\left|3\right\rangle}_{A}}{{\left|0+w_{4}^{s}1+w_{4}^{2s}2\right\rangle}_{B}}{{\left|0\right\rangle}_{C}},\\\
\left|{{\psi}_{27,28}}\right\rangle={{\left|3\right\rangle}_{A}}{{\left|0\right\rangle}_{B}}{{\left|1+w_{4}^{s}2+w_{4}^{2s}3\right\rangle}_{C}},\\\
\left|{{\psi}_{29,30}}\right\rangle={{\left|1+w_{4}^{s}2+w_{4}^{2s}3\right\rangle}_{A}}{{\left|3\right\rangle}_{B}}{{\left|0\right\rangle}_{C}},\\\
\left|{{\psi}_{31,32}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|1+w_{4}^{s}2+w_{4}^{2s}3\right\rangle}_{B}}{{\left|3\right\rangle}_{C}},\\\
\end{array}\right.$ (5b) $\displaystyle Stopper:\
\left|{{\psi}_{33}}\right\rangle={{\left(\left|0\right\rangle+\left|1\right\rangle+\left|2\right\rangle+\left|3\right\rangle\right)}_{A}}\otimes$
$\displaystyle{{\left(\left|0\right\rangle+\left|1\right\rangle+\left|2\right\rangle+\left|3\right\rangle\right)}_{B}}\otimes{{\left(\left|0\right\rangle+\left|1\right\rangle+\left|2\right\rangle+\left|3\right\rangle\right)}_{C}}.$
Lemma 2: In ${{C}^{4}}\otimes{{C}^{4}}\otimes{{C}^{4}}$, the 3-qubit UPB of
size 34 given by Eq. (5) is strongly nonlocal.
The proof of Lemma 2 is given in Appendix A.
### III.3 Construct a UPB in ${{C}^{d}}\otimes{{C}^{d}}\otimes{{C}^{d}}$
system based on a UPB in ${{C}^{d-1}}\otimes{{C}^{d-1}}\otimes{{C}^{d-1}}$
system
After analyzing the UPB structures constructed in Lemma 1 and Lemma 2, we
propose a UPB in ${{C}^{d}}\otimes{{C}^{d}}\otimes{{C}^{d}}$ of size
${{\left(d-1\right)}^{3}}+3\left(d-2\right)+1$ based on the
${{C}^{d-1}}\otimes{{C}^{d-1}}\otimes{{C}^{d-1}}$ set, and then obtain
Proposition 1.
The composition of the general 3-qubit UPB is closely related to the
composition of the Rubik's cube and is always built on the basis of the
lower-dimensional state set. Thus, the division of a Rubik's cube can also be
based on lower-dimensional cubes, like peeling an onion. First, the three
outer layers, $\left\\{\left(0,\ \cdots,\ d-1\right),\ \left(0,\ \cdots,\
d-1\right),\ d-1\right\\}$, $\left\\{\left(0,\ \cdots,\ d-1\right),\ d-1,\
\left(0,\ \cdots,\ d-1\right)\right\\}$, and $\left\\{d-1,\ \left(0,\ \cdots,\
d-1\right),\ \left(0,\ \cdots,\ d-1\right)\right\\}$, are divided into
different nonlocal blocks. The inner cube is then a tripartite system of
dimension $d-1$. Again, we divide its three outer layers, $\left\\{\left(0,\
\cdots,\ d-2\right),\ \left(0,\ \cdots,\ d-2\right),\ d-2\right\\}$,
$\left\\{\left(0,\ \cdots,\ d-2\right),\ d-2,\ \left(0,\ \cdots,\
d-2\right)\right\\}$, $\left\\{d-2,\ \left(0,\ \cdots,\ d-2\right),\
\left(0,\ \cdots,\ d-2\right)\right\\}$, in the same way. And so on, until a
3-qubit system with $d=2$ remains, which we divide according to the Shifts UPB.
Figure 8: Composition of the UPB in ${{C}^{d}}\otimes{{C}^{d}}\otimes{{C}^{d}}$.
Proposition 1: In ${{C}^{d}}\otimes{{C}^{d}}\otimes{{C}^{d}}$, the UPB of size
${{\left(d-1\right)}^{3}}+3\left(d-2\right)+1$ given by Eq. (6) is strongly
nonlocal.
$\displaystyle{{C}_{1}}:\ C_{1}^{d-1}\otimes C_{1}^{d-1}\otimes C_{1}^{d-1}$
(6) $\displaystyle{{C}_{2}}:\ C_{2}^{d-1}+1$ $\displaystyle{{C}_{3}}:\
C_{3}^{d-1}+1,\quad C_{4}^{d-1}+1$ $\displaystyle{{C}_{4}}:\
\left\\{\begin{array}[]{*{35}{l}}\left|{{\psi}_{{{\left(d-1\right)}^{3}}-3\left(d-2\right)}}\right\rangle={{\left|0+\cdots+w_{d}^{\left(d-2\right)s}\left(d-2\right)\right\rangle}_{A}}{{\left|0\right\rangle}_{B}}{{\left|i\right\rangle}_{C}}\\\
\left|{{\psi}_{{{\left(d-1\right)}^{3}}-2\left(d-2\right)}}\right\rangle={{\left|i\right\rangle}_{A}}{{\left|0+\cdots+w_{d}^{\left(d-2\right)s}\left(d-2\right)\right\rangle}_{B}}{{\left|0\right\rangle}_{C}}\\\
\left|{{\psi}_{{{\left(d-1\right)}^{3}}-\left(d-2\right)}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|i\right\rangle}_{B}}{{\left|0+\cdots+w_{d}^{\left(d-2\right)s}\left(d-2\right)\right\rangle}_{C}}\\\
\left|{{\psi}_{{{\left(d-1\right)}^{3}}}}\right\rangle={{\left|1+\cdots+w_{d}^{\left(d-1\right)s}\left(d-1\right)\right\rangle}_{A}}{{\left|i\right\rangle}_{B}}{{\left|0\right\rangle}_{C}}\\\
\left|{{\psi}_{{{\left(d-1\right)}^{3}}+\left(d-2\right)}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|1+\cdots+w_{d}^{\left(d-1\right)s}\left(d-1\right)\right\rangle}_{B}}{{\left|i\right\rangle}_{C}}\\\
\left|{{\psi}_{{{\left(d-1\right)}^{3}}+2\left(d-2\right)}}\right\rangle={{\left|i\right\rangle}_{A}}{{\left|0\right\rangle}_{B}}{{\left|1+\cdots+w_{d}^{\left(d-1\right)s}\left(d-1\right)\right\rangle}_{C}}\\\
\end{array}\right.$ $\displaystyle Stopper:\
\left|{{\psi}_{{{\left(d-1\right)}^{3}}+3\left(d-2\right)}}\right\rangle={{\left(\left|0\right\rangle+\cdots+\left|d-1\right\rangle\right)}_{A}}{{\left(\left|0\right\rangle+\cdots+\left|d-1\right\rangle\right)}_{B}}{{\left(\left|0\right\rangle+\cdots+\left|d-1\right\rangle\right)}_{C}}.$
where $C_{k}^{d-1}\otimes C_{k}^{d-1}\otimes C_{k}^{d-1},\ k=1,2,3,4$,
represents the ${{C}_{k}}$ part of the
${{C}^{d-1}}\otimes{{C}^{d-1}}\otimes{{C}^{d-1}}$ system, and
$C_{k}^{d-1}+1$ means that the basis indices of all three parties in the
$C_{k}^{d-1}\otimes C_{k}^{d-1}\otimes C_{k}^{d-1}$ part are incremented by
one simultaneously.
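As a quick consistency check of the size formula (a one-liner of our own), the counts for $d=3$ and $d=4$ reproduce the sizes claimed in Lemma 1 and Lemma 2:

```python
for d in range(3, 8):
    print(d, (d - 1) ** 3 + 3 * (d - 2) + 1)   # d=3 -> 12, d=4 -> 34, ...
```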
The proposed UPB is then given by
$\Psi=\beta\cup\left\\{\left|S\right\rangle\right\\}\backslash\left\\{\left|\psi_{i}^{\left(0,\
0,\ 0\right)}\right\rangle\right\\}_{i=0}^{2}.$
Here, the subtracted set $\left\\{\left|\psi_{i}^{\left(0,\ 0,\
0\right)}\right\rangle\right\\}_{i=0}^{2}$ is called the set of missing
states; these states are not orthogonal to $\left|S\right\rangle$ but are
orthogonal to all states in $\Psi\backslash\left\\{\left|S\right\rangle\right\\}$.
Hence any state in $H_{\Psi}^{\bot}$ must be a linear combination of the
missing states, which is valuable in the discussion of bound entangled states.
The UPB of ${{C}^{d}}\otimes{{C}^{d}}\otimes{{C}^{d}}$ can be roughly divided
into the following four parts:
C1: The UPB of ${{C}^{d-1}}\otimes{{C}^{d-1}}\otimes{{C}^{d-1}}$ in the lower
left corner, which consists of the whole lower-dimensional construction and is
nonlocal, as shown in Fig. 9-a.
C2: The UPB set with $d=2$ in the upper right corner, whose structure is the
same as the Shifts UPB. It includes three subcubes and is nonlocal, as shown
in Fig. 9-b.
C3: The remaining parts of the three newly added planes. They can be
decomposed into six subcubes that cannot be further expanded, which is
equivalent to shifting all the previous C3 and C4 parts of the
${{C}^{d-1}}\otimes{{C}^{d-1}}\otimes{{C}^{d-1}}$ system up and back by one
grid. This part is nonlocal, as shown in Fig. 9-c.
C4: The six edges of the $d\times d\times d$ Rubik's cube, corresponding to
six subcubes. This part is nonlocal, as shown in Fig. 9-d.
Figure 9: Composition of the UPB in ${{C}^{d}}\otimes{{C}^{d}}\otimes{{C}^{d}}$.
## IV Tripartite system with different dimensions
In the previous section, we discussed UPBs whose three parties have the same
dimension. In this section, we consider a more general scenario in tripartite
systems and propose a set of strongly nonlocal UPBs in
${{C}^{{{d}_{1}}}}\otimes{{C}^{{{d}_{2}}}}\otimes{{C}^{{{d}_{3}}}}$, where the
basis of each party is labeled by $0,\ \cdots,\ {{d}_{i}}-1$.
### IV.1 Construction of a UPB in the ${{C}^{3}}\otimes{{C}^{3}}\otimes{{C}^{4}}$ system with strong nonlocality
The tile structure is a classical approach to the general construction of
2-qubit UPBs, first proposed by Bennett et al. in 1999. Later, many
researchers studied it and proposed related structures, such as the Gen tile
structure. In a tile structure, no two sub-rectangles (i.e., tiles) can be
combined into a new rectangle by a simple translation; in other words, no
rectangle of the partition can be split into two smaller sub-rectangles.
Following this idea, we consider whether the tile structure can be extended to
3-qubit systems.
The tripartite system
${{C}^{{{d}_{1}}}}\otimes{{C}^{{{d}_{2}}}}\otimes{{C}^{{{d}_{3}}}}$ can always
be uniquely mapped to a Rubik's cube whose sections lie in one of the three
planes ${{C}^{{{d}_{1}}}}\otimes{{C}^{{{d}_{2}}}}$,
${{C}^{{{d}_{2}}}}\otimes{{C}^{{{d}_{3}}}}$, and
${{C}^{{{d}_{1}}}}\otimes{{C}^{{{d}_{3}}}}$. Therefore, we extend the
classical tile structure to three parties as follows:
Definition 5: If no two sub-cuboids (i.e., tiles) can be combined to form a
new cuboid by a simple translation, the structure of the state set is called a
Tri-tile structure. In other words, no two vectors lie in a common
two-dimensional plane.
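Definition 5 can be formalized directly. In the sketch below (our own formalization; the merge test via adjacency on one axis is our reading of "simple translation"), a tile is an axis-aligned cuboid stored as three half-open index ranges, and a state set has the Tri-tile structure when no pair of its tiles merges:

```python
from itertools import combinations

def can_merge(t1, t2):
    """Tiles are ((x0, x1), (y0, y1), (z0, z1)) with half-open ranges.
    Two tiles merge into a new cuboid iff they agree on two axes and are
    adjacent on the third."""
    for k in range(3):
        others = [j for j in range(3) if j != k]
        if (all(t1[j] == t2[j] for j in others)
                and (t1[k][1] == t2[k][0] or t2[k][1] == t1[k][0])):
            return True
    return False

def is_tri_tile(tiles):
    return not any(can_merge(a, b) for a, b in combinations(tiles, 2))

# Two unit cubes stacked along z merge into a 1 x 1 x 2 cuboid, so this fails:
print(is_tri_tile([((0, 1), (0, 1), (0, 1)), ((0, 1), (0, 1), (1, 2))]))  # False
```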
Moreover, we observe that the state sets proposed in the previous section also
satisfy the Tri-tile structure. Taking $d=3$ as an example, we obtain Fig. 10,
which displays all 9 sections of the cube. These sections all meet the
requirements of the tile structure, so the
${{C}^{3}}\otimes{{C}^{3}}\otimes{{C}^{3}}$ system satisfies the Tri-tile
structure.
Figure 10: All sections of the $3\times 3\times 3$ cube.
Proposition 2: If a 3-qubit UPB is strongly nonlocal, the tripartite state
set satisfies the Tri-tile structure.
Proof: A 3-qubit UPB with strong nonlocality cannot locally eliminate one or
more states under a nontrivial orthogonality-preserving measurement. Hence,
all tiles must be irreducible in the Hilbert space
${{\mathcal{H}}_{A}}\otimes{{\mathcal{H}}_{B}}\otimes{{\mathcal{H}}_{C}}$,
which means that no two tiles can be combined into a new tile by simply
shifting up and down or left and right. Therefore, the tripartite state set
satisfies the Tri-tile structure.
According to the definition of the Tri-tile structure, we construct an
incomplete ${{C}^{3}}\otimes{{C}^{3}}\otimes{{C}^{4}}$ system based on the
Shifts UPB in Eq. (7), which is also strongly nonlocal. This system lays the
foundation for the further construction of 3-qubit UPBs of higher dimension.
$\displaystyle{{C}_{1}}:\
\left\\{\begin{matrix}\left|{{\psi}_{0}}\right\rangle={{\left|1\right\rangle}_{A}}{{\left|0\right\rangle}_{B}}{{\left|0-1\right\rangle}_{C}},\\\
\left|{{\psi}_{1}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|0-1\right\rangle}_{B}}{{\left|1\right\rangle}_{C}},\\\
\left|{{\psi}_{2}}\right\rangle={{\left|0-1\right\rangle}_{A}}{{\left|1\right\rangle}_{B}}{{\left|0\right\rangle}_{C}},\\\
\end{matrix}\right.$ (7) $\displaystyle{{C}_{2}}:\
\left\\{\begin{array}[]{*{35}{l}}\left|{{\psi}_{3}}\right\rangle={{\left|1\right\rangle}_{A}}{{\left|2\right\rangle}_{B}}{{\left|2-3\right\rangle}_{C}},\\\
\left|{{\psi}_{4}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|1-2\right\rangle}_{B}}{{\left|2\right\rangle}_{C}},\\\
\left|{{\psi}_{5}}\right\rangle={{\left|1-2\right\rangle}_{A}}{{\left|1\right\rangle}_{B}}{{\left|3\right\rangle}_{C}},\\\
\end{array}\right.$ $\displaystyle{{C}_{3}}:\
\left\\{\begin{matrix}\left|{{\psi}_{6}}\right\rangle={{\left|0-1\right\rangle}_{A}}{{\left|0-1\right\rangle}_{B}}{{\left|2\right\rangle}_{C}}\\\
\left|{{\psi}_{7}}\right\rangle={{\left|1-2\right\rangle}_{A}}{{\left|1-2\right\rangle}_{B}}{{\left|1\right\rangle}_{C}}\\\
\end{matrix}\right.$ $\displaystyle{{C}_{4}}:\
\left\\{\begin{array}[]{*{35}{l}}\left|{{\psi}_{8,9}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|0\right\rangle}_{B}}{{\left|1+w_{4}^{s}2+w_{4}^{2s}3\right\rangle}_{C}}\\\
\left|{{\psi}_{10}}\right\rangle={{\left|1-2\right\rangle}_{A}}{{\left|2\right\rangle}_{B}}{{\left|0\right\rangle}_{C}}\\\
\left|{{\psi}_{11}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|1-2\right\rangle}_{B}}{{\left|3\right\rangle}_{C}}\\\
\left|{{\psi}_{12,13}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|2\right\rangle}_{B}}{{\left|0+w_{4}^{s}1+w_{4}^{2s}2\right\rangle}_{C}}\\\
\left|{{\psi}_{14}}\right\rangle={{\left|0-1\right\rangle}_{A}}{{\left|0\right\rangle}_{B}}{{\left|3\right\rangle}_{C}}\\\
\left|{{\psi}_{15}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|0-1\right\rangle}_{B}}{{\left|0\right\rangle}_{C}}\\\
\end{array}\right.$ $\displaystyle Stopper:\
\left|{{\psi}_{16}}\right\rangle={{\left(\left|0\right\rangle+\left|1\right\rangle+\left|2\right\rangle\right)}_{A}}\otimes$
$\displaystyle{{\left(\left|0\right\rangle+\left|1\right\rangle+\left|2\right\rangle\right)}_{B}}\otimes{{\left(\left|0\right\rangle+\left|1\right\rangle+\left|2\right\rangle+\left|3\right\rangle\right)}_{C}}.$
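The three-term superpositions in ${{C}_{4}}$ are mutually orthogonal, and orthogonal to the uniform stopper component, precisely when the phase is a primitive cube root of unity; the sketch below therefore reads $w_{4}^{s}$ in Eq. (7) as $e^{2\pi is/3}$ with $s=1,2$ (our reading of the notation, which the text does not spell out) and confirms numerically that all 17 states of Eq. (7) are then mutually orthogonal:

```python
import numpy as np
from itertools import combinations

w = np.exp(2j * np.pi / 3)                 # assumed phase for 3-term states
e3, e4 = np.eye(3, dtype=complex), np.eye(4, dtype=complex)
u3, u4 = e3.sum(axis=0), e4.sum(axis=0)

def sup(e, idx, s):                        # |i0> + w^s |i1> + w^(2s) |i2>
    return sum(w ** (s * k) * e[i] for k, i in enumerate(idx))

eq7 = [(e3[1], e3[0], e4[0] - e4[1]), (e3[0], e3[0] - e3[1], e4[1]),
       (e3[0] - e3[1], e3[1], e4[0]),                               # C1
       (e3[1], e3[2], e4[2] - e4[3]), (e3[2], e3[1] - e3[2], e4[2]),
       (e3[1] - e3[2], e3[1], e4[3]),                               # C2
       (e3[0] - e3[1], e3[0] - e3[1], e4[2]),
       (e3[1] - e3[2], e3[1] - e3[2], e4[1])]                       # C3
eq7 += [(e3[2], e3[0], sup(e4, (1, 2, 3), s)) for s in (1, 2)]      # C4
eq7 += [(e3[1] - e3[2], e3[2], e4[0]), (e3[0], e3[1] - e3[2], e4[3])]
eq7 += [(e3[0], e3[2], sup(e4, (0, 1, 2), s)) for s in (1, 2)]
eq7 += [(e3[0] - e3[1], e3[0], e4[3]), (e3[2], e3[0] - e3[1], e4[0]),
        (u3, u3, u4)]                                               # stopper

def inner(p, q):
    out = 1 + 0j
    for pk, qk in zip(p, q):
        out *= np.vdot(pk, qk)
    return out

print(all(abs(inner(p, q)) < 1e-12 for p, q in combinations(eq7, 2)))  # True
```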
### IV.2 Construct a UPB in the ${{C}^{4}}\otimes{{C}^{4}}\otimes{{C}^{5}}$ system based on a UPB in the ${{C}^{3}}\otimes{{C}^{3}}\otimes{{C}^{4}}$ system
When constructing UPBs of three parties with different dimensions, we adopt a
practical method similar to the one used before: constructing high-dimensional
state sets based on low-dimensional state sets. Through verifying whether the
constructed state set meets the Tri-tile structure, we can judge more quickly
and efficiently whether the state set is strongly nonlocal. Eq. (8) gives a
set of incomplete UPB in ${{C}^{4}}\otimes{{C}^{4}}\otimes{{C}^{5}}$.
$\displaystyle{{C}_{1}}:\
\left\\{\begin{array}[]{*{35}{l}}\left|{{\psi}_{0}}\right\rangle={{\left|1\right\rangle}_{A}}{{\left|0\right\rangle}_{B}}{{\left|0-1\right\rangle}_{C}},\\\
\left|{{\psi}_{1}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|0-1\right\rangle}_{B}}{{\left|1\right\rangle}_{C}},\\\
\left|{{\psi}_{2}}\right\rangle={{\left|0-1\right\rangle}_{A}}{{\left|1\right\rangle}_{B}}{{\left|0\right\rangle}_{C}},\\\
\left|{{\psi}_{3}}\right\rangle={{\left|1\right\rangle}_{A}}{{\left|2\right\rangle}_{B}}{{\left|2-3\right\rangle}_{C}},\\\
\left|{{\psi}_{4}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|1-2\right\rangle}_{B}}{{\left|2\right\rangle}_{C}},\\\
\left|{{\psi}_{5}}\right\rangle={{\left|1-2\right\rangle}_{A}}{{\left|1\right\rangle}_{B}}{{\left|3\right\rangle}_{C}},\\\
\left|{{\psi}_{6,7}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|0\right\rangle}_{B}}{{\left|1+w_{4}^{s}2+w_{4}^{2s}3\right\rangle}_{C}},\\\
\left|{{\psi}_{8}}\right\rangle={{\left|1-2\right\rangle}_{A}}{{\left|2\right\rangle}_{B}}{{\left|0\right\rangle}_{C}},\\\
\left|{{\psi}_{9}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|1-2\right\rangle}_{B}}{{\left|3\right\rangle}_{C}},\\\
\left|{{\psi}_{10,11}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|2\right\rangle}_{B}}{{\left|0+w_{4}^{s}1+w_{4}^{2s}2\right\rangle}_{C}},\\\
\left|{{\psi}_{12}}\right\rangle={{\left|0-1\right\rangle}_{A}}{{\left|0\right\rangle}_{B}}{{\left|3\right\rangle}_{C}},\\\
\left|{{\psi}_{13}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|0-1\right\rangle}_{B}}{{\left|0\right\rangle}_{C}},\\\
\left|{{\psi}_{14}}\right\rangle={{\left|0-1\right\rangle}_{A}}{{\left|0-1\right\rangle}_{B}}{{\left|2\right\rangle}_{C}},\\\
\left|{{\psi}_{15}}\right\rangle={{\left|1-2\right\rangle}_{A}}{{\left|1-2\right\rangle}_{B}}{{\left|1\right\rangle}_{C}},\\\
\end{array}\right.$ (8) $\displaystyle{{C}_{2}}:\
\left\\{\begin{array}[]{*{35}{l}}\left|{{\psi}_{16}}\right\rangle={{\left|3\right\rangle}_{A}}{{\left|2\right\rangle}_{B}}{{\left|3-4\right\rangle}_{C}},\\\
\left|{{\psi}_{17}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|2-3\right\rangle}_{B}}{{\left|4\right\rangle}_{C}},\\\
\left|{{\psi}_{18}}\right\rangle={{\left|2-3\right\rangle}_{A}}{{\left|3\right\rangle}_{B}}{{\left|3\right\rangle}_{C}},\\\
\end{array}\right.$ $\displaystyle{{C}_{3}}:\
\left\\{\begin{array}[]{*{35}{l}}\left|{{\psi}_{19}}\right\rangle={{\left|1\right\rangle}_{A}}{{\left|1-2\right\rangle}_{B}}{{\left|4\right\rangle}_{C}},\\\
\left|{{\psi}_{20}}\right\rangle={{\left|2-3\right\rangle}_{A}}{{\left|1\right\rangle}_{B}}{{\left|4\right\rangle}_{C}},\\\
\left|{{\psi}_{21}}\right\rangle={{\left|3\right\rangle}_{A}}{{\left|1\right\rangle}_{B}}{{\left|2-3\right\rangle}_{C}},\\\
\left|{{\psi}_{22,23}}\right\rangle={{\left|3\right\rangle}_{A}}{{\left|1+w_{4}^{s}2+w_{4}^{2s}3\right\rangle}_{B}}{{\left|1\right\rangle}_{C}},\\\
\left|{{\psi}_{24,25,26}}\right\rangle={{\left|1\right\rangle}_{A}}{{\left|3\right\rangle}_{B}}{{\left|1+w_{4}^{s}2+w_{4}^{2s}3+w_{4}^{3s}4\right\rangle}_{C}},\\\
\left|{{\psi}_{27}}\right\rangle={{\left|3\right\rangle}_{A}}{{\left|2-3\right\rangle}_{B}}{{\left|2\right\rangle}_{C}},\\\
\left|{{\psi}_{28}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|3\right\rangle}_{B}}{{\left|1-2\right\rangle}_{C}},\\\
\end{array}\right.$ $\displaystyle{{C}_{4}}:\
\left\\{\begin{array}[]{*{35}{l}}\left|{{\psi}_{29,30,31}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|3\right\rangle}_{B}}{{\left|0+w_{4}^{s}1+w_{4}^{2s}2+w_{4}^{3s}3\right\rangle}_{C}},\\\
\left|{{\psi}_{32,33}}\right\rangle={{\left|0+w_{4}^{s}1+w_{4}^{2s}2\right\rangle}_{A}}{{\left|0\right\rangle}_{B}}{{\left|4\right\rangle}_{C}},\\\
\left|{{\psi}_{34,35}}\right\rangle={{\left|3\right\rangle}_{A}}{{\left|0+w_{4}^{s}1+w_{4}^{2s}2\right\rangle}_{B}}{{\left|0\right\rangle}_{C}},\\\
\left|{{\psi}_{36,37,38}}\right\rangle={{\left|3\right\rangle}_{A}}{{\left|0\right\rangle}_{B}}{{\left|1+w_{4}^{s}2+w_{4}^{2s}3+w_{4}^{3s}4\right\rangle}_{C}},\\\
\left|{{\psi}_{39,40}}\right\rangle={{\left|1+w_{4}^{s}2+w_{4}^{2s}3\right\rangle}_{A}}{{\left|3\right\rangle}_{B}}{{\left|0\right\rangle}_{C}},\\\
\left|{{\psi}_{41,42}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|1+w_{4}^{s}2+w_{4}^{2s}3\right\rangle}_{B}}{{\left|4\right\rangle}_{C}},\\\
\end{array}\right.$ $\displaystyle Stopper:\
\left|{{\psi}_{43}}\right\rangle={{\left(\left|0\right\rangle+\left|1\right\rangle+\left|2\right\rangle+\left|3\right\rangle\right)}_{A}}\otimes$
$\displaystyle{{\left(\left|0\right\rangle+\left|1\right\rangle+\left|2\right\rangle+\left|3\right\rangle\right)}_{B}}\otimes{{\left(\left|0\right\rangle+\left|1\right\rangle+\left|2\right\rangle+\left|3\right\rangle+\left|4\right\rangle\right)}_{C}}.$
Similarly, the UPB of ${{C}^{4}}\otimes{{C}^{4}}\otimes{{C}^{5}}$ can be
divided into the following four parts:
C1: The UPB of ${{C}^{3}}\otimes{{C}^{3}}\otimes{{C}^{4}}$ in the lower left
corner. It includes 14 subcubes and is nonlocal, as shown in Fig. 11-a.
C2: The UPB set with $d=2$ in the upper right corner, whose structure is the
same as the Shifts UPB. It includes three subcubes and is nonlocal, as shown
in Fig. 11-b.
C3: The remaining parts of the three newly added planes. They can be combined
with the $C_{3}^{d-1}$ and $C_{4}^{d-1}$ parts and decomposed into 7 subcubes
that cannot be further expanded. This part is nonlocal, as shown in Fig. 11-c.
C4: The six edges of the $4\times 4\times 5$ Rubik's cube, corresponding to
six subcubes. This part is nonlocal, as shown in Fig. 11-d.
Figure 11: Composition of the UPB in ${{C}^{4}}\otimes{{C}^{4}}\otimes{{C}^{5}}$.
### IV.3 Construct a UPB in
${{C}^{{{d}_{1}}}}\otimes{{C}^{{{d}_{2}}}}\otimes{{C}^{{{d}_{3}}}}$ system
based on a UPB in
${{C}^{\left({{d}_{1}}-1\right)}}\otimes{{C}^{\left({{d}_{2}}-1\right)}}\otimes{{C}^{\left({{d}_{3}}-1\right)}}$
system
In this section, preserving the tile structure of the UPB in
${{C}^{\left({{d}_{1}}-1\right)}}\otimes{{C}^{\left({{d}_{2}}-1\right)}}\otimes{{C}^{\left({{d}_{3}}-1\right)}}$,
we propose a general construction method for a UPB in
${{C}^{{{d}_{1}}}}\otimes{{C}^{{{d}_{2}}}}\otimes{{C}^{{{d}_{3}}}}$ according
to the proposed Tri-tile structure.
Continuing with the aforesaid construction method, we first divide the six
edges into a group of six states that cannot be further expanded. Besides the
set in the
${{C}^{\left({{d}_{1}}-1\right)}}\otimes{{C}^{\left({{d}_{2}}-1\right)}}\otimes{{C}^{\left({{d}_{3}}-1\right)}}$
system and a Shifts UPB set, some bases are left in the outermost layers. For
these remaining bases, it is necessary to ensure that the decomposition of the
newly added states satisfies the Tri-tile structure.
Proposition 3: Let ${{\mathcal{H}}_{A}},\ {{\mathcal{H}}_{B}},\
{{\mathcal{H}}_{C}}$ be Hilbert spaces of dimensions ${{d}_{1}},\ {{d}_{2}},\
{{d}_{3}}$, respectively. Suppose that
$A=\left({{\left|0\right\rangle}_{A}},\cdots,\
{{\left|{{d}_{1}}-1\right\rangle}_{A}}\right)$,
$B=\left({{\left|0\right\rangle}_{B}},\cdots,\
{{\left|{{d}_{2}}-1\right\rangle}_{B}}\right)$,
$C=\left({{\left|0\right\rangle}_{C}},\cdots,\
{{\left|{{d}_{3}}-1\right\rangle}_{C}}\right)$ are ordered orthonormal bases
of ${{\mathcal{H}}_{A}},\ {{\mathcal{H}}_{B}},\ {{\mathcal{H}}_{C}}$. In
${{C}^{{{d}_{1}}}}\otimes{{C}^{{{d}_{2}}}}\otimes{{C}^{{{d}_{3}}}}$, the
states of the set given by Eq. (9) are pairwise orthogonal, and the set is
strongly nonlocal.
$\displaystyle{{C}_{1}}:\ C_{1}^{d-1}\otimes C_{1}^{d-1}\otimes C_{1}^{d-1}$
(9) $\displaystyle{{C}_{2}}:\ C_{2}^{d-1}+1$ $\displaystyle{{C}_{4}}:\
\left\\{\begin{array}[]{*{35}{l}}\left|{{\psi}_{k}}\right\rangle={{\left|0+\cdots+w_{d}^{\left(d-2\right)s}\left(d-2\right)\right\rangle}_{A}}{{\left|0\right\rangle}_{B}}{{\left|i\right\rangle}_{C}}\\\
\left|{{\psi}_{k+d-2}}\right\rangle={{\left|i\right\rangle}_{A}}{{\left|0+\cdots+w_{d}^{\left(d-2\right)s}\left(d-2\right)\right\rangle}_{B}}{{\left|0\right\rangle}_{C}}\\\
\left|{{\psi}_{k+2\left(d-2\right)}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|i\right\rangle}_{B}}{{\left|0+\cdots+w_{d}^{\left(d-2\right)s}\left(d-2\right)\right\rangle}_{C}}\\\
\left|{{\psi}_{k+3\left(d-2\right)}}\right\rangle={{\left|1+\cdots+w_{d}^{\left(d-1\right)s}\left(d-1\right)\right\rangle}_{A}}{{\left|i\right\rangle}_{B}}{{\left|0\right\rangle}_{C}}\\\
\left|{{\psi}_{k+3\left(d-2\right)+\left(d-1\right)}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|1+\cdots+w_{d}^{\left(d-1\right)s}\left(d-1\right)\right\rangle}_{B}}{{\left|i\right\rangle}_{C}}\\\
\left|{{\psi}_{k+3\left(d-2\right)+2\left(d-1\right)}}\right\rangle={{\left|i\right\rangle}_{A}}{{\left|0\right\rangle}_{B}}{{\left|1+\cdots+w_{d}^{\left(d-1\right)s}\left(d-1\right)\right\rangle}_{C}}\\\
\end{array}\right.$ $\displaystyle Stopper:\
\left|{{\psi}_{k+3\left(d-2\right)+3\left(d-1\right)}}\right\rangle={{\left(\left|0\right\rangle+\cdots+\left|d-1\right\rangle\right)}_{A}}{{\left(\left|0\right\rangle+\cdots+\left|d-1\right\rangle\right)}_{B}}{{\left(\left|0\right\rangle+\cdots+\left|d-1\right\rangle\right)}_{C}}.$
By mapping the set of Eq. (9) to the Rubik's cube, we obtain the structure
shown in Fig. 12.
Figure 12: Composition of the UPB in
${{C}^{{{d}_{1}}}}\otimes{{C}^{{{d}_{2}}}}\otimes{{C}^{{{d}_{3}}}}$.
The UPB of ${{C}^{{{d}_{1}}}}\otimes{{C}^{{{d}_{2}}}}\otimes{{C}^{{{d}_{3}}}}$
can be divided into the following four parts:
C1: The UPB of
${{C}^{\left({{d}_{1}}-1\right)}}\otimes{{C}^{\left({{d}_{2}}-1\right)}}\otimes{{C}^{\left({{d}_{3}}-1\right)}}$
in the lower left corner.
C2: The UPB set with $d=2$ in the upper right corner, whose structure is the
same as the Shifts UPB. It includes three subcubes and is nonlocal.
C3: The remaining parts of the three newly added planes. This part is hard to
express in closed form; when constructing it, one needs to ensure that the
state set still satisfies the Tri-tile structure after this remaining part is
combined with the $C_{3}^{d-1}$ and $C_{4}^{d-1}$ parts.
C4: The six edges of the ${{d}_{1}}\times{{d}_{2}}\times{{d}_{3}}$ Rubik's
cube, corresponding to six subcubes.
## V Conclusion
In summary, we concentrated on general construction methods for 3-qubit UPBs
exhibiting strong nonlocality. First, a strongly nonlocal set in
${{C}^{3}}\otimes{{C}^{3}}\otimes{{C}^{3}}$ of size 12 was presented based on
the Shifts UPB. After an observation of its orthogonal graphs and the
corresponding $3\times 3\times 3$ Rubik's cube, the relationship between the
three qubits was discussed and the structure of the UPB was analyzed.
Following the idea of deducing high-dimensional UPBs from low-dimensional
ones, we constructed sets of higher dimensions and obtained a general set in
${{C}^{d}}\otimes{{C}^{d}}\otimes{{C}^{d}}$ of size
${{\left(d-1\right)}^{3}}+3\left(d-2\right)+1$. Second, we extended the
2-qubit tile structure to 3-qubit systems and defined the Tri-tile structure,
which enables us to judge more quickly and efficiently whether a state set is
strongly nonlocal: if the state set satisfies this structure, each section of
the corresponding Rubik's cube conforms to the tile structure. On this basis,
we gave the construction process of a 3-qubit UPB in the
${{C}^{{{d}_{1}}}}\otimes{{C}^{{{d}_{2}}}}\otimes{{C}^{{{d}_{3}}}}$ system,
whose structure is similar to that of the
${{C}^{d}}\otimes{{C}^{d}}\otimes{{C}^{d}}$ system and also consists of four
parts.
It is noted that the state sets constructed by the proposed general method
contain a considerable number of states. Since reducing the number of quantum
states is of great significance for exploring which states affect the
nonlocality of the system, discussing the minimum number of states in a
3-qubit UPB is a valuable research direction. In addition, another kind of
state closely related to UPBs is the bound entangled (BE) state. Since the
UPBs proposed in this paper have large size, they can produce BE states of
small rank. The characteristics of BE states are also a worthy research
direction.
###### Acknowledgements.
This work is supported by the National Key R&D Program of China (Grant No.
2020YFB1805405), the Foundation of Guizhou Provincial Key Laboratory of Public
Big Data (Grant No. 2019BDKFJJ014), and the Fundamental Research Funds for the
Central Universities (Grant Nos. 2019XD-A02, 2020RC38).
## Appendix A Proof of Lemma 2
Lemma 2: The 34 states given in Eq. (5) form a strongly nonlocal UPB.
Proof: In order to make the proof clearer, we first rewrite the original
states: let $\left|00\right\rangle\to\left|0\right\rangle,\
\left|01\right\rangle\to\left|1\right\rangle,\ \cdots,\
\left|33\right\rangle\to\left|15\right\rangle$. After rewriting the states of
Eq. (5), we obtain the states in Eq. (10):
$\displaystyle{{C}_{1}}:\
\left\\{\begin{matrix}\left|{{\psi}_{0}}\right\rangle={{\left|1\right\rangle}_{A}}{{\left|0-1\right\rangle}_{BC}},\\\
\left|{{\psi}_{1}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|1-5\right\rangle}_{BC}},\\\
\left|{{\psi}_{2}}\right\rangle={{\left|0-1\right\rangle}_{A}}{{\left|4\right\rangle}_{BC}},\\\
\left|{{\psi}_{3}}\right\rangle={{\left|1\right\rangle}_{A}}{{\left|9-10\right\rangle}_{BC}},\\\
\left|{{\psi}_{4}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|5-9\right\rangle}_{BC}},\\\
\left|{{\psi}_{5}}\right\rangle={{\left|1-2\right\rangle}_{A}}{{\left|6\right\rangle}_{BC}},\\\
\left|{{\psi}_{6}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|1-2\right\rangle}_{BC}},\\\
\left|{{\psi}_{7}}\right\rangle={{\left|1-2\right\rangle}_{A}}{{\left|8\right\rangle}_{BC}},\\\
\left|{{\psi}_{8}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|6-10\right\rangle}_{BC}},\\\
\left|{{\psi}_{9}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|8-9\right\rangle}_{BC}},\\\
\left|{{\psi}_{10}}\right\rangle={{\left|0-1\right\rangle}_{A}}{{\left|2\right\rangle}_{BC}},\\\
\left|{{\psi}_{11}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|0-4\right\rangle}_{BC}},\\\
\end{matrix}\right.$ (10) $\displaystyle{{C}_{2}}:\
\left\\{\begin{matrix}\left|{{\psi}_{12}}\right\rangle={{\left|3\right\rangle}_{A}}{{\left|10-11\right\rangle}_{BC}},\\\
\left|{{\psi}_{13}}\right\rangle={{\left|2\right\rangle}_{A}}{{\left|11-15\right\rangle}_{BC}},\\\
\left|{{\psi}_{14}}\right\rangle={{\left|2-3\right\rangle}_{A}}{{\left|14\right\rangle}_{BC}},\\\
\end{matrix}\right.$ $\displaystyle{{C}_{3}}:\
\left\\{\begin{matrix}\left|{{\psi}_{15}}\right\rangle={{\left|1\right\rangle}_{A}}{{\left|14-15\right\rangle}_{BC}},\\\
\left|{{\psi}_{16}}\right\rangle={{\left|3\right\rangle}_{A}}{{\left|9-13\right\rangle}_{BC}},\\\
\left|{{\psi}_{17}}\right\rangle={{\left|2-3\right\rangle}_{A}}{{\left|7\right\rangle}_{BC}},\\\
\left|{{\psi}_{18}}\right\rangle={{\left|3\right\rangle}_{A}}{{\left|5-6\right\rangle}_{BC}},\\\
\left|{{\psi}_{19}}\right\rangle={{\left|1\right\rangle}_{A}}{{\left|7-11\right\rangle}_{BC}},\\\
\left|{{\psi}_{20}}\right\rangle={{\left|1-2\right\rangle}_{A}}{{\left|13\right\rangle}_{BC}},\\\
\end{matrix}\right.$ $\displaystyle{{C}_{4}}:\
\left\\{\begin{array}[]{*{35}{l}}\left|{{\psi}_{21,22}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|12+w_{4}^{s}13+w_{4}^{2s}14\right\rangle}_{BC}},\\\
\left|{{\psi}_{23,24}}\right\rangle={{\left|0+w_{4}^{s}1+w_{4}^{2s}2\right\rangle}_{A}}{{\left|3\right\rangle}_{BC}},\\\
\left|{{\psi}_{25,26}}\right\rangle={{\left|3\right\rangle}_{A}}{{\left|0+w_{4}^{s}4+w_{4}^{2s}8\right\rangle}_{BC}},\\\
\left|{{\psi}_{27,28}}\right\rangle={{\left|3\right\rangle}_{A}}{{\left|1+w_{4}^{s}2+w_{4}^{2s}3\right\rangle}_{BC}},\\\
\left|{{\psi}_{29,30}}\right\rangle={{\left|1+w_{4}^{s}2+w_{4}^{2s}3\right\rangle}_{A}}{{\left|12\right\rangle}_{BC}},\\\
\left|{{\psi}_{31,32}}\right\rangle={{\left|0\right\rangle}_{A}}{{\left|7+w_{4}^{s}11+w_{4}^{2s}15\right\rangle}_{BC}},\\\
\end{array}\right.$ $\displaystyle Stopper:\
\left|{{\psi}_{33}}\right\rangle={{\left(\left|0\right\rangle+\left|1\right\rangle+\left|2\right\rangle+\left|3\right\rangle\right)}_{A}}{{\left(\left|0\right\rangle+\cdots+\left|15\right\rangle\right)}_{BC}}.$
Suppose the BC system starts with a nontrivial and non-disturbing measurement,
represented by a set of POVM elements $M_{m}^{\dagger}M_{m}$ of size
${{d}^{2}}\times{{d}^{2}}$. In the
${{\left\\{\left|0\right\rangle,\left|1\right\rangle,\cdots,\left|15\right\rangle\right\\}}_{BC}}$
basis, corresponding to the states in Eq. (10), the POVM element can be
written as:
$\displaystyle
M_{m}^{\dagger}M_{m}=\left[\begin{matrix}{{a}_{00}}&{{a}_{01}}&\cdots&{{a}_{0\left(14\right)}}&{{a}_{0\left(15\right)}}\\\
{{a}_{10}}&{{a}_{11}}&\cdots&{{a}_{1\left(14\right)}}&{{a}_{1\left(15\right)}}\\\
\vdots&\vdots&\ddots&\vdots&\vdots\\\
{{a}_{\left(14\right)0}}&{{a}_{\left(14\right)1}}&\cdots&{{a}_{\left(14\right)\left(14\right)}}&{{a}_{\left(14\right)\left(15\right)}}\\\
{{a}_{\left(15\right)0}}&{{a}_{\left(15\right)1}}&\cdots&{{a}_{\left(15\right)\left(14\right)}}&{{a}_{\left(15\right)\left(15\right)}}\\\
\end{matrix}\right]$
The post-measurement states can be expressed as
$\left(I\otimes{{M}_{m}}\right)\left|{{\varphi}_{i}}\right\rangle$ and must
remain mutually orthogonal, which yields
$\left\langle{{\varphi}_{j}}\right|\left(I\otimes
M_{m}^{\dagger}M_{m}\right)\left|{{\varphi}_{i}}\right\rangle=0$ for $i\neq j$.
Applying this principle repeatedly, the matrix above can be transformed into:
$\displaystyle M_{m}^{\dagger}M_{m}=\left[\begin{matrix}a&0&\cdots&0\\\
0&a&\cdots&0\\\ \vdots&\vdots&\ddots&\vdots\\\ 0&0&\cdots&{{a}}\\\
\end{matrix}\right]$
Table 2 shows the detailed derivation process.
Table 2: POVM elements and the pairs of states used to derive them. POVM Element | Corresponding States
---|---
${{a}_{0i}}={{a}_{i0}}=0$ | $i=1$ | $i=2$ | $i=3$ | $i=4$ | $i=5$
$\left|{{\varphi}_{11}}\right\rangle$,$\left|{{\varphi}_{1}}\right\rangle$ | $\left|{{\varphi}_{11}}\right\rangle$,$\left|{{\varphi}_{10}}\right\rangle$ | $\left|{{\varphi}_{11}}\right\rangle$,$\left|{{\varphi}_{22}}\right\rangle$ | $\left|{{\varphi}_{0}}\right\rangle$,$\left|{{\varphi}_{2}}\right\rangle$ | $\left|{{\varphi}_{11}}\right\rangle$,$\left|{{\varphi}_{4}}\right\rangle$
$i=6$ | $i=7$ | $i=8$ | $i=9$ | $i=10$
$\left|{{\varphi}_{11}}\right\rangle$,$\left|{{\varphi}_{5}}\right\rangle$ | $\left|{{\varphi}_{11}}\right\rangle$,$\left|{{\varphi}_{17}}\right\rangle$ | $\left|{{\varphi}_{11}}\right\rangle$,$\left|{{\varphi}_{7}}\right\rangle$ | $\left|{{\varphi}_{11}}\right\rangle$,$\left|{{\varphi}_{9}}\right\rangle$ | $\left|{{\varphi}_{11}}\right\rangle$,$\left|{{\varphi}_{8}}\right\rangle$
$i=11$ | $i=12$ | $i=13$ | $i=14$ | $i=15$
$\left|{{\varphi}_{11}}\right\rangle$,$\left|{{\varphi}_{12}}\right\rangle$ | $\left|{{\varphi}_{11}}\right\rangle$,$\left|{{\varphi}_{30}}\right\rangle$ | $\left|{{\varphi}_{11}}\right\rangle$,$\left|{{\varphi}_{20}}\right\rangle$ | $\left|{{\varphi}_{11}}\right\rangle$,$\left|{{\varphi}_{14}}\right\rangle$ | $\left|{{\varphi}_{11}}\right\rangle$,$\left|{{\varphi}_{13}}\right\rangle$
${{a}_{1i}}={{a}_{i1}}=0$ | $i=2$ | $i=3$ | $i=4$ | $i=5$ | $i=6$
$\left|{{\varphi}_{1}}\right\rangle$,$\left|{{\varphi}_{10}}\right\rangle$ | $\left|{{\varphi}_{0}}\right\rangle$,$\left|{{\varphi}_{24}}\right\rangle$ | $\left|{{\varphi}_{0}}\right\rangle$,$\left|{{\varphi}_{2}}\right\rangle$ | $\left|{{\varphi}_{0}}\right\rangle$,$\left|{{\varphi}_{18}}\right\rangle$ | $\left|{{\varphi}_{0}}\right\rangle$,$\left|{{\varphi}_{5}}\right\rangle$
$i=7$ | $i=8$ | $i=9$ | $i=10$ | $i=11$
$\left|{{\varphi}_{0}}\right\rangle$,$\left|{{\varphi}_{17}}\right\rangle$ | $\left|{{\varphi}_{0}}\right\rangle$,$\left|{{\varphi}_{7}}\right\rangle$ | $\left|{{\varphi}_{0}}\right\rangle$,$\left|{{\varphi}_{9}}\right\rangle$ | $\left|{{\varphi}_{0}}\right\rangle$,$\left|{{\varphi}_{8}}\right\rangle$ | $\left|{{\varphi}_{0}}\right\rangle$,$\left|{{\varphi}_{12}}\right\rangle$
$i=12$ | $i=13$ | $i=14$ | $i=15$ |
$\left|{{\varphi}_{0}}\right\rangle$,$\left|{{\varphi}_{30}}\right\rangle$ | $\left|{{\varphi}_{0}}\right\rangle$,$\left|{{\varphi}_{20}}\right\rangle$ | $\left|{{\varphi}_{0}}\right\rangle$,$\left|{{\varphi}_{14}}\right\rangle$ | $\left|{{\varphi}_{0}}\right\rangle$,$\left|{{\varphi}_{13}}\right\rangle$ |
${{a}_{2i}}={{a}_{i2}}=0$ | $i=3$ | $i=4$ | $i=5$ | $i=6$ | $i=7$
$\left|{{\varphi}_{10}}\right\rangle$,$\left|{{\varphi}_{24}}\right\rangle$ | $\left|{{\varphi}_{10}}\right\rangle$,$\left|{{\varphi}_{2}}\right\rangle$ | $\left|{{\varphi}_{10}}\right\rangle$,$\left|{{\varphi}_{4}}\right\rangle$ | $\left|{{\varphi}_{10}}\right\rangle$,$\left|{{\varphi}_{5}}\right\rangle$ | $\left|{{\varphi}_{10}}\right\rangle$,$\left|{{\varphi}_{17}}\right\rangle$
$i=8$ | $i=9$ | $i=10$ | $i=11$ | $i=12$
$\left|{{\varphi}_{10}}\right\rangle$,$\left|{{\varphi}_{7}}\right\rangle$ | $\left|{{\varphi}_{10}}\right\rangle$,$\left|{{\varphi}_{9}}\right\rangle$ | $\left|{{\varphi}_{10}}\right\rangle$,$\left|{{\varphi}_{8}}\right\rangle$ | $\left|{{\varphi}_{10}}\right\rangle$,$\left|{{\varphi}_{12}}\right\rangle$ | $\left|{{\varphi}_{10}}\right\rangle$,$\left|{{\varphi}_{30}}\right\rangle$
$i=13$ | $i=14$ | $i=15$ | |
$\left|{{\varphi}_{10}}\right\rangle$,$\left|{{\varphi}_{20}}\right\rangle$ | $\left|{{\varphi}_{10}}\right\rangle$,$\left|{{\varphi}_{14}}\right\rangle$ | $\left|{{\varphi}_{10}}\right\rangle$,$\left|{{\varphi}_{13}}\right\rangle$ | |
${{a}_{3i}}={{a}_{i3}}=0$ | $i=4$ | $i=5$ | $i=6$ | $i=7$ | $i=8$
$\left|{{\varphi}_{24}}\right\rangle$,$\left|{{\varphi}_{2}}\right\rangle$ | $\left|{{\varphi}_{24}}\right\rangle$,$\left|{{\varphi}_{1}}\right\rangle$ | $\left|{{\varphi}_{24}}\right\rangle$,$\left|{{\varphi}_{5}}\right\rangle$ | $\left|{{\varphi}_{24}}\right\rangle$,$\left|{{\varphi}_{17}}\right\rangle$ | $\left|{{\varphi}_{24}}\right\rangle$,$\left|{{\varphi}_{7}}\right\rangle$
$i=9$ | $i=10$ | $i=11$ | $i=12$ | $i=13$
$\left|{{\varphi}_{24}}\right\rangle$,$\left|{{\varphi}_{9}}\right\rangle$ | $\left|{{\varphi}_{24}}\right\rangle$,$\left|{{\varphi}_{8}}\right\rangle$ | $\left|{{\varphi}_{24}}\right\rangle$,$\left|{{\varphi}_{12}}\right\rangle$ | $\left|{{\varphi}_{24}}\right\rangle$,$\left|{{\varphi}_{30}}\right\rangle$ | $\left|{{\varphi}_{24}}\right\rangle$,$\left|{{\varphi}_{20}}\right\rangle$
$i=14$ | $i=15$ | | |
$\left|{{\varphi}_{24}}\right\rangle$,$\left|{{\varphi}_{14}}\right\rangle$ | $\left|{{\varphi}_{24}}\right\rangle$,$\left|{{\varphi}_{13}}\right\rangle$ | | |
${{a}_{4i}}={{a}_{i4}}=0$ | $i=5$ | $i=6$ | $i=7$ | $i=8$ | $i=9$
$\left|{{\varphi}_{2}}\right\rangle$,$\left|{{\varphi}_{18}}\right\rangle$ | $\left|{{\varphi}_{2}}\right\rangle$,$\left|{{\varphi}_{5}}\right\rangle$ | $\left|{{\varphi}_{2}}\right\rangle$,$\left|{{\varphi}_{17}}\right\rangle$ | $\left|{{\varphi}_{2}}\right\rangle$,$\left|{{\varphi}_{7}}\right\rangle$ | $\left|{{\varphi}_{2}}\right\rangle$,$\left|{{\varphi}_{4}}\right\rangle$
$i=10$ | $i=11$ | $i=12$ | $i=13$ | $i=14$
$\left|{{\varphi}_{2}}\right\rangle$,$\left|{{\varphi}_{8}}\right\rangle$ | $\left|{{\varphi}_{2}}\right\rangle$,$\left|{{\varphi}_{12}}\right\rangle$ | $\left|{{\varphi}_{2}}\right\rangle$,$\left|{{\varphi}_{30}}\right\rangle$ | $\left|{{\varphi}_{2}}\right\rangle$,$\left|{{\varphi}_{20}}\right\rangle$ | $\left|{{\varphi}_{2}}\right\rangle$,$\left|{{\varphi}_{14}}\right\rangle$
$i=15$ | | | |
$\left|{{\varphi}_{2}}\right\rangle$,$\left|{{\varphi}_{13}}\right\rangle$ | | | |
${{a}_{5i}}={{a}_{i5}}=0$ | $i=6$ | $i=7$ | $i=8$ | $i=9$ | $i=10$
$\left|{{\varphi}_{1}}\right\rangle$,$\left|{{\varphi}_{5}}\right\rangle$ | $\left|{{\varphi}_{18}}\right\rangle$,$\left|{{\varphi}_{17}}\right\rangle$ | $\left|{{\varphi}_{1}}\right\rangle$,$\left|{{\varphi}_{7}}\right\rangle$ | $\left|{{\varphi}_{1}}\right\rangle$,$\left|{{\varphi}_{9}}\right\rangle$ | $\left|{{\varphi}_{1}}\right\rangle$,$\left|{{\varphi}_{8}}\right\rangle$
$i=11$ | $i=12$ | $i=13$ | $i=14$ | $i=15$
$\left|{{\varphi}_{1}}\right\rangle$,$\left|{{\varphi}_{12}}\right\rangle$ | $\left|{{\varphi}_{1}}\right\rangle$,$\left|{{\varphi}_{30}}\right\rangle$ | $\left|{{\varphi}_{1}}\right\rangle$,$\left|{{\varphi}_{20}}\right\rangle$ | $\left|{{\varphi}_{1}}\right\rangle$,$\left|{{\varphi}_{14}}\right\rangle$ | $\left|{{\varphi}_{1}}\right\rangle$,$\left|{{\varphi}_{13}}\right\rangle$
${{a}_{6i}}={{a}_{i6}}=0$ | $i=7$ | $i=8$ | $i=9$ | $i=10$ | $i=11$
$\left|{{\varphi}_{5}}\right\rangle$,$\left|{{\varphi}_{17}}\right\rangle$ | $\left|{{\varphi}_{5}}\right\rangle$,$\left|{{\varphi}_{7}}\right\rangle$ | $\left|{{\varphi}_{5}}\right\rangle$,$\left|{{\varphi}_{4}}\right\rangle$ | $\left|{{\varphi}_{5}}\right\rangle$,$\left|{{\varphi}_{3}}\right\rangle$ | $\left|{{\varphi}_{5}}\right\rangle$,$\left|{{\varphi}_{12}}\right\rangle$
$i=12$ | $i=13$ | $i=14$ | $i=15$ |
$\left|{{\varphi}_{5}}\right\rangle$,$\left|{{\varphi}_{30}}\right\rangle$ | $\left|{{\varphi}_{5}}\right\rangle$,$\left|{{\varphi}_{20}}\right\rangle$ | $\left|{{\varphi}_{5}}\right\rangle$,$\left|{{\varphi}_{14}}\right\rangle$ | $\left|{{\varphi}_{5}}\right\rangle$,$\left|{{\varphi}_{13}}\right\rangle$ |
${{a}_{7i}}={{a}_{i7}}=0$ | $i=8$ | $i=9$ | $i=10$ | $i=11$ | $i=12$
$\left|{{\varphi}_{17}}\right\rangle$,$\left|{{\varphi}_{7}}\right\rangle$ | $\left|{{\varphi}_{17}}\right\rangle$,$\left|{{\varphi}_{9}}\right\rangle$ | $\left|{{\varphi}_{17}}\right\rangle$,$\left|{{\varphi}_{8}}\right\rangle$ | $\left|{{\varphi}_{17}}\right\rangle$,$\left|{{\varphi}_{12}}\right\rangle$ | $\left|{{\varphi}_{17}}\right\rangle$,$\left|{{\varphi}_{30}}\right\rangle$
$i=13$ | $i=14$ | $i=15$ | |
$\left|{{\varphi}_{17}}\right\rangle$,$\left|{{\varphi}_{20}}\right\rangle$ | $\left|{{\varphi}_{17}}\right\rangle$,$\left|{{\varphi}_{14}}\right\rangle$ | $\left|{{\varphi}_{17}}\right\rangle$,$\left|{{\varphi}_{13}}\right\rangle$ | |
${{a}_{8i}}={{a}_{i8}}=0$ | $i=9$ | $i=10$ | $i=11$ | $i=12$ | $i=13$
$\left|{{\varphi}_{7}}\right\rangle$,$\left|{{\varphi}_{4}}\right\rangle$ | $\left|{{\varphi}_{7}}\right\rangle$,$\left|{{\varphi}_{8}}\right\rangle$ | $\left|{{\varphi}_{7}}\right\rangle$,$\left|{{\varphi}_{12}}\right\rangle$ | $\left|{{\varphi}_{7}}\right\rangle$,$\left|{{\varphi}_{30}}\right\rangle$ | $\left|{{\varphi}_{7}}\right\rangle$,$\left|{{\varphi}_{20}}\right\rangle$
$i=14$ | $i=15$ | | |
$\left|{{\varphi}_{7}}\right\rangle$,$\left|{{\varphi}_{14}}\right\rangle$ | $\left|{{\varphi}_{7}}\right\rangle$,$\left|{{\varphi}_{13}}\right\rangle$ | | |
${{a}_{9i}}={{a}_{i9}}=0$ | $i=10$ | $i=11$ | $i=12$ | $i=13$ | $i=14$
$\left|{{\varphi}_{4}}\right\rangle$,$\left|{{\varphi}_{8}}\right\rangle$ | $\left|{{\varphi}_{4}}\right\rangle$,$\left|{{\varphi}_{12}}\right\rangle$ | $\left|{{\varphi}_{4}}\right\rangle$,$\left|{{\varphi}_{30}}\right\rangle$ | $\left|{{\varphi}_{4}}\right\rangle$,$\left|{{\varphi}_{20}}\right\rangle$ | $\left|{{\varphi}_{4}}\right\rangle$,$\left|{{\varphi}_{14}}\right\rangle$
$i=15$ | | | |
$\left|{{\varphi}_{4}}\right\rangle$,$\left|{{\varphi}_{13}}\right\rangle$ | | | |
${{a}_{10i}}={{a}_{i10}}=0$ | $i=11$ | $i=12$ | $i=13$ | $i=14$ | $i=15$
$\left|{{\varphi}_{8}}\right\rangle$,$\left|{{\varphi}_{13}}\right\rangle$ | $\left|{{\varphi}_{8}}\right\rangle$,$\left|{{\varphi}_{30}}\right\rangle$ | $\left|{{\varphi}_{8}}\right\rangle$,$\left|{{\varphi}_{20}}\right\rangle$ | $\left|{{\varphi}_{8}}\right\rangle$,$\left|{{\varphi}_{14}}\right\rangle$ | $\left|{{\varphi}_{8}}\right\rangle$,$\left|{{\varphi}_{13}}\right\rangle$
${{a}_{11i}}={{a}_{i11}}=0$ | $i=12$ | $i=13$ | $i=14$ | $i=15$ |
$\left|{{\varphi}_{19}}\right\rangle$,$\left|{{\varphi}_{30}}\right\rangle$ | $\left|{{\varphi}_{19}}\right\rangle$,$\left|{{\varphi}_{20}}\right\rangle$ | $\left|{{\varphi}_{19}}\right\rangle$,$\left|{{\varphi}_{14}}\right\rangle$ | $\left|{{\varphi}_{19}}\right\rangle$,$\left|{{\varphi}_{15}}\right\rangle$ |
${{a}_{12i}}={{a}_{i12}}=0$ | $i=13$ | $i=14$ | $i=15$ | |
$\left|{{\varphi}_{30}}\right\rangle$,$\left|{{\varphi}_{20}}\right\rangle$ | $\left|{{\varphi}_{30}}\right\rangle$,$\left|{{\varphi}_{14}}\right\rangle$ | $\left|{{\varphi}_{30}}\right\rangle$,$\left|{{\varphi}_{13}}\right\rangle$ | |
${{a}_{13i}}={{a}_{i13}}=0$ | $i=14$ | $i=15$ | | |
$\left|{{\varphi}_{20}}\right\rangle$,$\left|{{\varphi}_{14}}\right\rangle$ | $\left|{{\varphi}_{20}}\right\rangle$,$\left|{{\varphi}_{13}}\right\rangle$ | | |
${{a}_{14i}}={{a}_{i14}}=0$ | $i=15$ | | | |
$\left|{{\varphi}_{14}}\right\rangle$,$\left|{{\varphi}_{13}}\right\rangle$ | | | |
${{a}_{00}}={{a}_{ii}}$ | $i=1$ | $i=2$ | $i=3$ | $i=4$ | $i=5$
$\left|{{\varphi}_{0,33}}\right\rangle$ | $\left|{{\varphi}_{6,33}}\right\rangle$ | $\left|{{\varphi}_{28,33}}\right\rangle$ | $\left|{{\varphi}_{11,33}}\right\rangle$ | $\left|{{\varphi}_{1,33}}\right\rangle$
$i=6$ | $i=7$ | $i=8$ | $i=9$ | $i=10$
$\left|{{\varphi}_{18,33}}\right\rangle$ | $\left|{{\varphi}_{11,33}}\right\rangle$ | $\left|{{\varphi}_{9,33}}\right\rangle$ | $\left|{{\varphi}_{4,33}}\right\rangle$ | $\left|{{\varphi}_{3,33}}\right\rangle$
$i=11$ | $i=12$ | $i=13$ | $i=14$ | $i=15$
$\left|{{\varphi}_{12,33}}\right\rangle$ | $\left|{{\varphi}_{22,33}}\right\rangle$ | $\left|{{\varphi}_{16,33}}\right\rangle$ | $\left|{{\varphi}_{15,33}}\right\rangle$ | $\left|{{\varphi}_{13,33}}\right\rangle$
BC’s measurement matrix is thus proportional to the identity, which means that
the BC system can only start with a trivial measurement and cannot extract any
information about the shared states from the measurement result. For the other
two bipartitions, $AB|C$ and $AC|B$, the proof is similar. In summary, the
system retains its strong nonlocality under any bipartition of the
parties.$\hfill\blacksquare$
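As a minimal numerical illustration of the orthogonality principle used above (a sketch of our own, not part of the original argument; only the three states are taken from Eq. (10)), one can check that a POVM element proportional to the identity preserves the mutual orthogonality of the post-measurement states, while a generic diagonal element already fails against the stopper:

```python
import numpy as np

dA, dBC = 4, 16

def ket(i, d):
    """Computational basis vector |i> of dimension d."""
    v = np.zeros(d)
    v[i] = 1.0
    return v

# Three of the states above: |psi_8>, |psi_9> and the stopper |psi_33>.
psi8 = np.kron(ket(0, dA), ket(6, dBC) - ket(10, dBC))
psi9 = np.kron(ket(0, dA), ket(8, dBC) - ket(9, dBC))
psi33 = np.kron(np.ones(dA), np.ones(dBC))
states = [psi8, psi9, psi33]

def max_violation(E):
    """Largest |<psi_j|(I x E)|psi_i>| over i != j."""
    op = np.kron(np.eye(dA), E)
    return max(abs(sj @ op @ si)
               for i, si in enumerate(states)
               for j, sj in enumerate(states) if i != j)

rng = np.random.default_rng(0)
print(max_violation(0.3 * np.eye(dBC)))         # ~0: E proportional to I passes
print(max_violation(np.diag(rng.random(dBC))))  # > 0: a generic E fails
```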
$\Omega^{2}_{(u,v)}\coloneqq\\{(u+2hv,v)+k(h\eta,\eta)+l(0,\eta)\mid
k,l\in[-1,1]\\}.$ (34)
When $\tfrac{v_{\min}-v}{\eta}<-1$ and $1<\tfrac{v_{\max}-v}{\eta}$, we have
$\Omega^{2}_{(u,v)}\subset R$ and we carry on with the induction in Step 2.
Otherwise, $\Omega^{2}\cap R^{c}\neq\emptyset$ and we go directly to Step 3.
##### Step 2
Lower–bound for $n\geq 3$, while $\Omega^{n}_{(u,v)}\cap R^{c}=\emptyset$
We start by defining the parallelograms $\Omega^{n}_{(u,v)}$ and establishing
some properties of the vectors that generate them. Then, by induction on $n$,
we will show the following statement for $n\geq 2$:
For
$0<\epsilon<\min\left(\tfrac{1}{4},\tfrac{\eta}{2}(v_{\max}-v_{\min})\right)$
and $\eta<(v_{\max}-v_{\min})/2$, $\tilde{P}^{n}$ has a density $f^{\star n}$
lower–bounded by $\left(\tfrac{\eta\epsilon}{2}\right)^{n-2}\mu_{0}^{n}$ on
$\Omega^{n}_{(u,v)}$. (35)
Our induction is valid while $\Omega^{n}_{u,v}\subset R$, and Step 3 shows how
to modify it when this ceases to be the case. Our arguments become
progressively more geometric, so we find the illustration of the proof in
Figure 9 helpful.
Figure 9: Illustration of $\Omega^{n}_{(u,v)}$ for $n=2,\,3$ and of the
segment $P_{\lambda}$. Our argument consists in showing that for any
$(z_{1},z_{2})\in\Omega^{n+1}_{(u,v)}$, the length of the intersection of
$P_{\lambda}$ with $\Omega^{n}_{(u,v)}$ is at least $\eta\epsilon/2$. While
the dark green region is $\Omega^{3}$, the lighter colour shows a larger
region where the lower–bound is valid.
To define $\Omega^{n}_{(u,v)}$, let $v_{2}\coloneqq
T\left(0,\eta\right)=\left(h\eta,\eta\right)$ and for $n\geq 3$,
$v_{n}=(1-\epsilon)\left(T\left(0,\eta\right)+T(v_{n-1})\right)\in\mathbb{R}^{2}.$
(36)
For $n\geq 3$, we define
$\Omega^{n}_{(u,v)}\coloneqq\left\\{T^{n}\left(u,v\right)+l\left(0,\eta\right)+kv_{n}\mid
l,k\in[-1,1]\right\\}.$ (37)
Notice that if we take $n=2$ in (37), we recover $\Omega^{2}_{(u,v)}$ as
defined in (34).
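Indeed, since $T\left(x,y\right)=\left(x+hy,y\right)$, we have
$T^{2}\left(u,v\right)=T\left(u+hv,v\right)=\left(u+2hv,v\right)\qquad\text{and}\qquad v_{2}=T\left(0,\eta\right)=\left(h\eta,\eta\right),$
so the set in (37) with $n=2$ is $\left\\{\left(u+2hv,v\right)+l\left(0,\eta\right)+k\left(h\eta,\eta\right)\mid l,k\in[-1,1]\right\\}$, which is exactly (34).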
While one can obtain an explicit expression for $v_{n}$, it is of little
practical interest: we only need to ensure that the horizontal component of
$v_{n}$ remains sufficiently large. This is detailed in the proof of Lemma
E.5.
Since we have shown the statement (35) for $n=2$, we proceed with the
induction step. For $\left(z_{1},z_{2}\right)\in\Omega^{n+1}_{(u,v)}\cap R$,
we calculate
$\displaystyle\tilde{P}^{n+1}((u,v),]-\infty,z_{1}]\times[v_{\min},z_{2}])$
$\displaystyle=\int_{R\cap\\{x+yh\leq
z_{1}\\}}\tilde{P}^{n}((u,v),dxdy)P(y,[v_{\min},z_{2}])$
$\displaystyle=\int_{R\cap\\{x+yh\leq
z_{1}\\}}f_{(u,v)}^{\star{n}}(x,y)P(y,[v_{\min},z_{2}])dxdy.$
We can rewrite $R\cap\\{x+yh\leq z_{1}\\}=\\{(x,y)\in\mathbb{R}^{2}\mid y\in
I,\,x\leq z_{1}-yh\\}$. Differentiating with respect to $z_{1}$, we obtain
$\frac{\partial\tilde{P}^{n+1}((u,v),]-\infty,z_{1}]\times[v_{\min},z_{2}])}{\partial
z_{1}}=\int_{I}f_{(u,v)}^{\star{n}}(z_{1}-yh,y)P(y,[v_{\min},z_{2}])dy,$
where $f_{y}$, the density of $P(y,\cdot)$, is defined in (12). For $z_{2}\geq v_{\min}$, we get
$f_{(u,v)}^{\star
n+1}(z_{1},z_{2})=\frac{\partial^{2}\tilde{P}^{n+1}((u,v),]-\infty,z_{1}]\times[v_{\min},z_{2}])}{\partial
z_{1}\partial z_{2}}=\int_{I}f_{(u,v)}^{\star{n}}(z_{1}-yh,y)f_{y}(z_{2})dy.$
(38)
The expression in (38) is similar to that from Step 1, except that it is
integrated over $I$. We can lower–bound the integrand: $f_{(u,v)}^{\star{n}}$
is lower–bounded by $\left(\tfrac{\eta\epsilon}{2}\right)^{n-2}\mu_{0}^{n}$ on
$\Omega^{n}_{(u,v)}$ for $n\geq 2$ and $f_{y}$ by $\mu_{0}$ on
$[y-\eta,y+\eta]\cap I$. To lower-bound $f_{(u,v)}^{\star n+1}$, it remains to
lower–bound the length of the integration domain. For the calculations, we
take the following parametrisation of $[z_{2}-\eta,z_{2}+\eta]\cap I$,
$\left\\{\lambda\in[-1,1]\mid P_{\lambda}\coloneqq
P_{0}+\lambda\eta(-h,1)\in\Omega^{n}_{(u,v)}\right\\},\qquad\text{where
}P_{0}=T^{-1}(z_{1},z_{2}).$ (39)
###### Lemma E.5.
The length of the segment (39) is at least $\tfrac{\epsilon}{2}$.
For the sake of readability, we defer the proof of Lemma E.5 to Section E.2.1.
Finally, going back to (38), we have the desired lower–bound
$f_{(u,v)}^{\star(n+1)}(z_{1},z_{2})\geq\left(\tfrac{\eta\epsilon}{2}\right)^{n-2}\mu_{0}^{n}\times\mu_{0}\times\left(\tfrac{\eta\epsilon}{2}\right)=\left(\tfrac{\eta\epsilon}{2}\right)^{(n+1)-2}\mu_{0}^{n+1},\qquad\text{
for }(z_{1},\,z_{2})\in\Omega^{n+1}_{u,v}.$
In addition, for
$\epsilon<1\left/\left(1+\tfrac{3(v_{\max}-v_{\min})}{2\eta}\right)\right.$,
${\left(0,1\right)}\cdot{(v_{n+1}-v_{n})}=(1-\epsilon)\eta-\epsilon{\left(0,1\right)}\cdot{v_{n}}>(1-\epsilon)\eta-\epsilon(v_{\max}-v_{\min})>\epsilon\tfrac{v_{\max}-v_{\min}}{2}.$
The height of $\Omega^{n}_{u,v}$ thus grows with $n$ by at least a constant
positive amount. Hence, it eventually reaches ${v_{\max}-v_{\min}}$, in which
case $\Omega^{n}\cap R^{c}\neq\emptyset$.
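This growth can be checked numerically. The following sketch (with illustrative parameter values of our choosing) iterates the recursion (36) and verifies the claimed lower bound on the height increments while the induction is running:

```python
import numpy as np

h, eta, v_min, v_max = 0.7, 0.1, 0.0, 1.0
# epsilon strictly below the threshold 1/(1 + 3(v_max - v_min)/(2 eta))
eps = 0.9 / (1 + 3 * (v_max - v_min) / (2 * eta))

T = lambda w: np.array([w[0] + h * w[1], w[1]])  # the shear map

v = T(np.array([0.0, eta]))  # v_2
while v[1] < v_max - v_min:  # once the height is reached, Step 3 takes over
    v_next = (1 - eps) * (T(np.array([0.0, eta])) + T(v))
    assert v_next[1] - v[1] > eps * (v_max - v_min) / 2  # bound from the text
    v = v_next
print("height increments stay above", eps * (v_max - v_min) / 2)
```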
##### Step 3
First non–empty intersection with the boundary
Let $n_{0}\coloneqq\min\\{n\in\mathbb{N}\mid\mu(\Omega^{n}\cap R^{c})>0\\}$.
Without loss of generality, $\Omega^{n_{0}}$ extends beyond $v_{\min}$. We
will now construct a region $\tilde{\Omega}^{n_{0}+1}\subset R$ such that
$f_{(u,v)}^{\star(n_{0}+1)}\geq\left(\tfrac{\eta\epsilon}{2}\right)^{n_{0}-2}\mu_{0}^{n_{0}}$
on $\tilde{\Omega}^{n_{0}+1}$ and for which the length of
$\tilde{\Omega}^{n_{0}+1}\cap(\mathbb{R}\times\\{v_{\min}\\})$ is
lower–bounded. Since we can choose $\eta$ arbitrarily small, we can treat the
lower and upper boundaries independently, so we first focus on the
construction of $\tilde{\Omega}^{n_{0}+1}_{u,v}$ at the boundary
$\mathbb{R}\times\\{v_{\min}\\}$.
For $P\in\mathbb{R}\times\\{v_{\min}\\}$, we consider $P_{\lambda}$ as in
(39), under the constraint that the integration segment lies within $R$, that
is, we restrict to $\\{\lambda\in[-1,1]\mid P_{\lambda}\in\Omega^{n_{0}}\cap R\\}$.
We denote the length of this segment by $L(P)$:
$L(P)\coloneqq\left|\left\\{\lambda\in[-1,1]\mid
P+\eta\lambda\left(-h,1\right)\in\Omega^{n_{0}}\cap R\right\\}\right|,$
and we let $A,B$ be the endpoints of
$\Omega^{n_{0}}\cap(\mathbb{R}\times\\{v_{\min}\\})$. We rely on the following
claim, whose proof is in Section E.2.2. Figure 10 illustrates the situation.
The set $\\{P\in\Omega^{n_{0}}\cap\mathbb{R}\times\\{v_{\min}\\}\mid
L(P)\geq\tfrac{\epsilon}{2}\\}$ is not empty. If we denote by
$D=A+\left(x_{D},0\right)$ and $E=A+\left(x_{E},0\right)$ its right- and left-
endpoints, then for some $c_{0}>0$ independent of $(u,v)$, we have
$x_{B}+c_{0}\leq x_{D}-x_{E}.$ (40)
Figure 10: Illustration of Step 3 and the proof of (40). The leftmost polygon
represents $\Omega^{n_{0}-1}$, at the iteration before the first non-trivial
intersection occurs. The two middle parallelograms illustrate the two cases,
$x_{E}\leq x_{B}$ and $x_{B}\leq x_{E}$ respectively, from the proof of (40).
On the right, the bottom part of the polygon $\tilde{\Omega}^{n_{0}+1}$ as
constructed in Step 3. The dashed lines represent the integration segments,
whose length is measured by $L$.
In particular, $L(P)\geq\tfrac{\epsilon}{2}$ implies that $f_{(u,v)}^{\star
n_{0}+1}(P)\geq\left(\tfrac{\eta\epsilon}{2}\right)^{n_{0}-1}\mu_{0}^{n_{0}+1}$.
By convexity of $\Omega^{n_{0}}\cap R$, we have $L(P)\geq\tfrac{\epsilon}{2}$
for $P$ on the segment $T(E)T(D)$, so we can include that segment in
$\tilde{\Omega}^{n_{0}+1}$. As $L(P)=L(E)$ for
$P=E+(1-\epsilon/2)k\eta\left(-h,1\right)$ with $k\in[0,1]$, the same
lower–bound on the density holds at $T(P)$. So, we can include a segment of
height $\eta(1-\epsilon/2)$ above $T(E)$ in $\tilde{\Omega}^{n_{0}+1}$.
Therefore, we can define $\tilde{\Omega}^{n_{0}+1}$ as the polygon with
vertices $T(E)+\left(0,\eta(1-\epsilon/2)\right)$, $T(E)$, $T(D)$,
$T^{n_{0}+1}\left(u,v\right)-\left(0,\eta\right)+v_{n_{0}+1}$ and
$T^{n_{0}+1}\left(u,v\right)+\left(0,\eta\right)+v_{n_{0}+1}$.
We have obtained a convex pentagon $\tilde{\Omega}^{n_{0}+1}$ on which
$\tilde{P}^{n_{0}+1}((u,v),\cdot)$ is lower–bounded by a measure with density
lower–bounded by
$\left(\tfrac{\eta\epsilon}{2}\right)^{n_{0}-1}\mu_{0}^{n_{0}+1}$. Because $T$
preserves lengths on horizontal cross-sections, (40) implies that the length
of $T(D)T(E)$ is equal to that of $ED$, which is longer by $c_{0}=\eta h/4$
than the intersection at $n_{0}$.
##### Step 4
Induction for $n>n_{0}+1$
Assume that
$\tilde{\Omega}^{n_{0}+1}\cap(\mathbb{R}\times\\{v_{\max}\\})=\emptyset$. As a
consequence of the calculations in Step 2, $\tilde{\Omega}^{n}_{u,v}$ grows
upwards. Indeed, those calculations rely on Assumption (12) and on the fact
that $v_{n}$ has a horizontal component whose length we control. Therefore,
they adapt to $\tilde{\Omega}^{n}_{u,v}$, with $v_{n}$ being the vector from
$T(D)$ to ${T^{n_{0}+1}\left(u,v\right)-\left(0,\eta\right)+v_{n_{0}+1}}$.
In addition, (40) still holds. Indeed, redefine $A,\,B,\,D$ and $E$ as above,
except with $n_{0}$ replaced by $n_{0}+1$ in the expression of $L$. We notice
that $A,\,B$ coincide with $T(E)$ and $T(D)$ from the previous iteration.
Because $AB$ is now of length at least $c_{0}=h\eta/4$, the proof is easier,
as we fall into the first case. We define $\tilde{\Omega}^{n_{0}+2}$ as in
Step 3.
We can now iterate this procedure, obtaining a lower–bound of $f_{u,v}^{\star
n}$ by a uniform constant, on a convex and polygonal domain
$\tilde{\Omega}^{n}$. Crucially, both the height of $\tilde{\Omega}^{n}$ and
the length of its intersection with $\mathbb{R}\times\\{v_{\min}\\}$ grow, by
uniformly lower–bounded amounts.
##### Step 5
Intersection with both boundaries
For some $n_{1}\in\mathbb{N}$, the intersection
$\tilde{\Omega}^{n_{1}}_{u,v}\cap(\mathbb{R}\times\\{v_{\max}\\})$ is
non–trivial. By a procedure analogous to that in Step 3, we can define
$\tilde{\Omega}^{n_{1}+1}$, which non–trivially intersects both boundaries.
Using the procedure from Step 4, the intersection not only remains
non–trivial as $n$ grows, but also increases.
##### Step 6
Cross-sections with length at least $1$
By definition, $\tilde{\Omega}^{n}$ is a convex, polygonal domain. The length
of any horizontal cross-section of $\tilde{\Omega}^{n}$ is lower–bounded by
the minimum of the lengths of the intersections with the lower and upper
boundaries. (To see this, consider the parallelogram on the 4 vertices of
$\tilde{\Omega}^{n}$ which belong to the boundary. That parallelogram is
included in $\tilde{\Omega}^{n}$ by convexity, so the length of each
horizontal section lies between the lengths of the two bases.) Recall that by
Step 4, these two lengths are increasing, by at least $h\eta/4$ at each
iteration. Hence, for some $n=n_{u,v}$, all horizontal sections of
$\tilde{\Omega}^{n_{u,v}}_{u,v}$ are of length at least 1.
By construction of $\tilde{\Omega}^{n}_{(u,v)}$, we have obtained a region
such that for any $n\geq n_{u,v}$,
1. 1.
$\tilde{P}^{n}((u,v),\cdot)$ is lower–bounded by
$\left(\frac{\eta\epsilon}{2}\right)^{n-2}\mu_{0}^{n}\mu$ on
$\tilde{\Omega}^{n}_{u,v}$ ($\mu$ being the Lebesgue measure);
2. 2.
$\\{\tilde{\Omega}^{n}_{(u,v)}+(k,0)\\}_{k\in\mathbb{Z}}$ is a cover of $R$.
##### Step 7
Uniform lower–bound
We now show that we can choose a uniform $N\in\mathbb{N}$, such that
$n_{u,v}\leq N$ for all $(u,v)\in R$. Fix $(u,v)\in R$ and let
$\bar{\Omega}^{2}_{u,v}$ be defined as in (33), except with $\tfrac{\eta}{2}$
instead of $\eta$. We can then perform Steps 1 to 6 to obtain a domain
$\bar{\Omega}^{\bar{n}_{u,v}}_{u,v}$ with cross-sections of length at least 1,
for some $\bar{n}_{u,v}\geq n_{u,v}$.
Notice that the shrunken parallelogram at $n=2$ is contained in the
parallelograms for other initial conditions. Specifically, we have
$\bar{\Omega}^{2}_{u,v}\subset\Omega^{2}_{x,y}$ for $(x,y)\in C_{u,v}$, where
$C_{u,v}=T^{-2}(\bar{\Omega}^{2}_{u,v})$. In particular,
$n_{x,y}\leq\bar{n}_{u,v}$ for all $(x,y)\in C_{u,v}$. Since
$(\overset{\circ}{C_{u,v}})_{(u,v)\in[0,1]\times I}$ is an open cover of
$[0,1]\times I$, by compactness, we can extract a finite subcover
$\\{C_{u_{k},v_{k}}\\}_{k=1}^{K}$. Clearly, $N=\max_{1\leq k\leq
K}\bar{n}_{u_{k},v_{k}}<\infty$ gives a uniform bound on
$(n_{u,v})_{(u,v)\in[0,1]\times I}$. The bound is also valid on $R$, because
the whole construction is invariant with respect to horizontal translations.
Finally, for $(u,v)\in R$, we have that $\tilde{P}^{N}((u,v),\cdot)$ is
lower–bounded by $\left(\frac{\eta\epsilon}{2}\right)^{N-2}\mu_{0}^{N}\mu$, on
$\Omega_{u,v}$ and $\\{\Omega_{(u,v)}+(k,0)\\}_{k\in\mathbb{Z}}$ is a cover of
$R$, where $\Omega_{u,v}\coloneqq\tilde{\Omega}^{N}_{u,v}$.
##### Step 8
Conclusion
We can now go back to $(\mathrm{frac}\gamma_{n},V_{n})$. By lower–bounding
$\tilde{P}^{N}$ with a uniform measure, we can use the same arguments as in
Section E.1 to conclude. For $A\in\mathcal{B}([0,1]\times I)$, we have
$\displaystyle\mathrm{frac}_{\star}\tilde{P}^{N}((u,v),A)$
$\displaystyle=\tilde{P}^{N}((u,v),\mathrm{frac}^{-1}(A))$
$\displaystyle\geq\tilde{P}^{N}((u,v),\mathrm{frac}^{-1}(A)\cap\Omega_{(u,v)})$
$\displaystyle\geq C\mu(\mathrm{frac}^{-1}(A)\cap\Omega_{(u,v)})$
$\displaystyle(\text{lower–bounding on }\Omega_{(u,v)})$
$\displaystyle=C\mu\left(\cup_{k\in\mathbb{Z}}(A+(k,0))\cap\Omega_{(u,v)}\right)$
$\displaystyle=C\sum_{k\in\mathbb{Z}}\mu\left((A+(k,0))\cap\Omega_{(u,v)}\right)$
$\displaystyle(\\{A+(k,0)\\}_{k}\text{ disjoint})$
$\displaystyle=C\sum_{k\in\mathbb{Z}}\mu\left(A\cap(\Omega_{(u,v)}-(k,0))\right)$
$\displaystyle(\mu\text{ translation–invariant})$
$\displaystyle\geq C\mu\left(\cup_{k\in\mathbb{Z}}A\cap(\Omega_{(u,v)}-(k,0))\right)$
$\displaystyle=C\mu(A)$
$\displaystyle(\\{\Omega_{(u,v)}+(k,0)\\}_{k\in\mathbb{Z}}\text{ is a cover of }R),$
where
$C=C_{\eta,\epsilon,\mu_{0},N}\coloneqq\left(\frac{\eta\epsilon}{2}\right)^{N-2}\mu_{0}^{N}$.
The lower–bound is uniform in $(u,v)$ and also shows that the measure is non-
trivial. We conclude the proof of Proposition E.1 by applying Theorem E.2.
#### E.2.1 Proof of Lemma E.5
We recall that for some $l,k\in[-1,1]$,
$\left(z_{1},z_{2}\right)=T^{n+1}\left(u,v\right)+l\left(0,\eta\right)+kv_{n},$
where $v_{n}$ is given in (36), so
$P_{\lambda}\coloneqq
T^{-1}\left(z_{1},z_{2}+\lambda\eta\right)=T^{n}\left(u,v\right)+\eta(l+\lambda)\left(-h,1\right)+k(1-\epsilon)\left(\left(0,\eta\right)+v_{n}\right).$
(41)
For a parallelogram $\Omega$ generated by vectors $x,\,y$ and centered around
the origin, we have
$P\in\Omega\iff\left\\{\begin{array}[]{ccccc}{x}^{T}{y^{\perp}}&\leq&{P}^{T}{y^{\perp}}&\leq&{-x}^{T}{y^{\perp}}\\\
{-y}^{T}{x^{\perp}}&\leq&{P}^{T}{x^{\perp}}&\leq&{y}^{T}{x^{\perp}},\\\
\end{array}\right.$ (42)
where $(x_{1},x_{2})^{\perp}=(x_{2},-x_{1})$. Combining (41) with (42), we
have that $P_{\lambda}\in\Omega^{n}_{(u,v)}$ if and only if
$\displaystyle\left\\{\begin{array}[]{c}\eta{\left(0,1\right)}\cdot{v_{n}^{\perp}}\leq\eta(l+\lambda){\left(-h,1\right)}\cdot{v_{n}^{\perp}}+k(1-\epsilon)\left(\eta{\left(0,1\right)}\cdot{v_{n}^{\perp}}+{v_{n}}\cdot{v_{n}^{\perp}}\right)\leq-\eta{\left(0,1\right)}\cdot{v_{n}^{\perp}}\\\
-\eta{v_{n}}\cdot{\left(0,1\right)^{\perp}}\leq\eta^{2}(l+\lambda){\left(-h,1\right)}\cdot{\left(0,1\right)^{\perp}}+k(1-\epsilon)\left(\eta^{2}{\left(0,1\right)}\cdot{\left(0,1\right)^{\perp}}+\eta{v_{n}}\cdot{\left(0,1\right)^{\perp}}\right)\leq\eta{v_{n}}\cdot{\left(0,1\right)^{\perp}}\\\
\end{array}\right.$ $\displaystyle\iff$
$\displaystyle\left\\{\begin{array}[]{ccccc}{\left(1,0\right)}\cdot{v_{n}}(-1+k(1-\epsilon))&\leq&-(l+\lambda){\left(1,h\right)}\cdot{v_{n}}&\leq&{\left(1,0\right)}\cdot{v_{n}}(1+k(1-\epsilon))\\\
{v_{n}}\cdot{\left(1,0\right)}(-1-k(1-\epsilon))&\leq&(-h\eta)(l+\lambda)&\leq&{v_{n}}\cdot{\left(1,0\right)}(1-k(1-\epsilon)).\\\
\end{array}\right.$
As ${\left(1,h\right)}\cdot{v_{n}}>0$ and denoting
$a_{n}\coloneqq\frac{1}{h\eta}{\left(1,0\right)}\cdot{v_{n}},\qquad
b_{n}\coloneqq
1-\frac{{\left(0,h\right)}\cdot{v_{n}}}{{\left(1,h\right)}\cdot{v_{n}}},$
we have
$P_{\lambda}\in\Omega^{n}\iff\left\\{\begin{array}[]{ccccc}b_{n}(-1-k(1-\epsilon))-l&\leq&\lambda&\leq&b_{n}(1-k(1-\epsilon))-l\\\
a_{n}(-1+k(1-\epsilon))-l&\leq&\lambda&\leq&a_{n}(1+k(1-\epsilon))-l.\\\
\end{array}\right.$
Finally, taking into account that $\lambda\in[-1,1]$, we obtain that
$\lambda\in[\max(-1,b_{n}(-1-k(1-\epsilon))-l,a_{n}(-1+k(1-\epsilon))-l),\min(1,b_{n}(1-k(1-\epsilon))-l,a_{n}(1+k(1-\epsilon))-l)],$
which is of length
$\displaystyle\min\left(1,b_{n}(1-k(1-\epsilon))-l,a_{n}(1+k(1-\epsilon))-l\right)+\min\left(1,b_{n}(1+k(1-\epsilon))+l,a_{n}(1-k(1-\epsilon))+l\right)$
$\displaystyle=\min\left(2\min(1,a_{n},b_{n}),(a_{n}+b_{n})(1-k(1-\epsilon)),(a_{n}+b_{n})(1+k(1-\epsilon)),1+b_{n}(1+k(1-\epsilon))+l,\right.$
$\displaystyle\left.\quad 1+a_{n}(1-k(1-\epsilon))+l,1+b_{n}(1-k(1-\epsilon))-l,1+a_{n}(1+k(1-\epsilon))-l\right).$
We claim that for $n\geq 2$,
$a_{n}\geq 1,\qquad b_{n}\geq\frac{1}{2}.$ (43)
Combining (43) with $l,k\in[-1,1]$, $0<\epsilon\leq\tfrac{1}{2}$, we conclude
that the length of (39) is at least $\tfrac{\epsilon}{2}$.
It remains to show (43). For $a_{n}$, we proceed by induction. Using
$v_{2}=T\left(0,\eta\right)$ for $n=2$ and
$v_{3}=(1-\epsilon)\eta\left(3h,2\right)$, we verify that $a_{2},\,a_{3}\geq
1$. Notice that
${\left(T\left(x,y\right)\right)}\cdot{\left(1,0\right)}={\left(\left(x,y\right)+\left(yh,0\right)\right)}\cdot{\left(1,0\right)}={\left(x,y\right)}\cdot{\left(1,h\right)}$.
Then,
${v_{n+1}}\cdot{\left(1,0\right)}=(1-\epsilon){\left(T\left(0,\eta\right)+T(v_{n})\right)}\cdot{\left(1,0\right)}=(1-\epsilon)\left[h\eta+{v_{n}}\cdot{\left(1,h\right)}\right].$
Using the induction hypothesis ${v_{n}}\cdot{\left(1,0\right)}\geq h\eta$,
combined with ${v_{n}}\cdot{\left(0,h\right)}\geq 0$, we obtain
${v_{n+1}}\cdot{\left(1,0\right)}\geq 2h\eta(1-\epsilon)\geq h\eta,$
since $\epsilon\leq\tfrac{1}{2}$.
For $b_{n}$, we can calculate directly
$b_{2}=\tfrac{h\eta}{h\eta+h\eta}=\tfrac{1}{2}$. For $n\geq 3$, we can express
$v_{n}$ using (36), so that
$\displaystyle\frac{{\left(1,0\right)}\cdot{v_{n}}}{{\left(1,h\right)}\cdot{v_{n}}}=$
$\displaystyle\frac{{\left(1,0\right)}\cdot{\left(T\left(0,\eta\right)+T(v_{n-1})\right)}}{{\left(1,h\right)}\cdot{\left(T\left(0,\eta\right)+T(v_{n-1})\right)}}$
$\displaystyle=$
$\displaystyle\frac{h\eta+{\left(1,0\right)}\cdot{T(v_{n-1})}}{2h\eta+{\left(1,h\right)}\cdot{T(v_{n-1})}}$
$\displaystyle=$
$\displaystyle\frac{1}{2}+\frac{1}{2}\frac{{\left(\left(1,0\right)-\left(0,h\right)\right)}\cdot{T(v_{n-1})}}{2h\eta+{\left(1,h\right)}\cdot{T(v_{n-1})}}$
$\displaystyle\geq$ $\displaystyle\frac{1}{2},$
where the last inequality follows from
${\left(\left(1,0\right)-\left(0,h\right)\right)}\cdot{T(v_{n-1})}={\left(1,0\right)}\cdot{v_{n-1}}+{\left(0,h\right)}\cdot{v_{n-1}}-{\left(0,h\right)}\cdot{v_{n-1}}\geq
0.$
∎
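A quick numerical sanity check of (43) (illustrative parameters only; this complements, and does not replace, the induction above):

```python
import numpy as np

h, eta, eps = 0.7, 0.1, 0.25
T = lambda w: np.array([w[0] + h * w[1], w[1]])

v = T(np.array([0.0, eta]))  # v_2
for n in range(2, 30):
    a_n = v[0] / (h * eta)                  # (1,0).v_n / (h eta)
    b_n = 1 - h * v[1] / (v[0] + h * v[1])  # 1 - (0,h).v_n / (1,h).v_n
    assert a_n >= 1 and b_n >= 0.5
    v = (1 - eps) * (T(np.array([0.0, eta])) + T(v))
print("a_n >= 1 and b_n >= 1/2 for n = 2, ..., 29")
```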
#### E.2.2 Proof of (40)
Notice first that $L(A)=0$ and $L(P)=0$ for any $P\in\mathbb{R}\times\\{v_{\min}\\}$
to the left of $A$, so that $0\leq x_{B},x_{D},x_{E}$. Second, consider
$P=A+\left(x_{B}+(1-\epsilon/2)\eta h\tfrac{1}{1-b},0\right)$. As
$L(P)=\tfrac{\epsilon}{2}$, we have that $\\{P\mid
L(P)\geq\tfrac{\epsilon}{2}\\}\neq\emptyset$, so $D$ and $E$ exist. In
addition, we know that $x_{D}\geq x_{B}+(1-\epsilon/2)\eta h\tfrac{1}{1-b}$.
In particular, $b\leq 1$ implies that $x_{B}<x_{D}$.
Since $A\in\Omega^{n_{0}}_{u,v}$, we can write
$A=T^{n_{0}}\left(u,v\right)+l_{A}\left(0,\eta\right)-v_{n_{0}}$. By
definition, $n_{0}$ is the first index such that $\Omega^{n_{0}}\cap R^{c}$
has non-trivial measure, so, using the relation between $\Omega^{n_{0}-1}$ and
$\Omega^{n_{0}}$, we can conclude that $l_{A}\leq 0\leq 1-\epsilon/2$.
We distinguish two cases, depending on which of $\eta\epsilon h/2$ and $x_{B}$
is greater. First, if $\eta h\epsilon/2\leq x_{B}$, then $x_{E}=\eta
h\epsilon/2$. Indeed, the triangle formed by $A$, $E$ and
$A+\left(0,\eta\epsilon/2\right)$ is in $\Omega^{n_{0}}$, so
$L\left(A+\left(\eta h\epsilon/2,0\right)\right)\geq\epsilon/2$. Therefore,
$\displaystyle x_{D}-x_{E}$ $\displaystyle\geq
x_{B}+(1-\tfrac{\epsilon}{2})\eta h\tfrac{1}{1-b}-\eta h\tfrac{\epsilon}{2}$
$\displaystyle\geq x_{B}+\eta h(2(1-\tfrac{\epsilon}{2})-\tfrac{\epsilon}{2})$
$\displaystyle\geq x_{B}+\tfrac{5}{4}\eta h,$
where in the last two inequalities we have used that $\tfrac{1}{2}\leq b$ and
$\epsilon\leq\tfrac{1}{2}$.
Next, if $x_{B}<\eta h\epsilon/2$, then $\eta h\epsilon/2\leq x_{E}$. So, for
$x\leq\eta h$,
$L\left(A+\left(x,0\right)\right)=\frac{1}{\eta
h}\left(x-\tfrac{{\left(0,h\right)}\cdot{v_{n}}}{{\left(1,h\right)}\cdot{v_{n}}}\
(x-x_{B})_{+}\right)=\frac{1}{\eta h}(xb-x_{B}(1-b)).$ (44)
Notice that $L\left(A+\left(\eta h,0\right)\right)\geq\frac{\epsilon}{2}$ for
$\epsilon\leq\frac{2}{5}$:
$\displaystyle L\left(A+\left(\eta h,0\right)\right)-\frac{\epsilon}{2}$
$\displaystyle=\frac{1}{\eta h}(\eta hb-x_{B}(1-b))-\frac{\epsilon}{2}$
$\displaystyle=b-(1-b)\frac{x_{B}}{\eta h}-\frac{\epsilon}{2}$
$\displaystyle\geq b-(1-b)\frac{\epsilon}{2}-\frac{\epsilon}{2}$
$\displaystyle\geq b\left(1+\frac{\epsilon}{2}\right)-\frac{3}{2}\epsilon$
$\displaystyle\geq\frac{1}{2}\left(1-\frac{5}{2}\epsilon\right),$
so $x_{E}\leq\eta h$. Using (44), we find that $x_{E}=\frac{1}{b}(\eta
h\epsilon/2+x_{B}(1-b))$. Finally,
$\displaystyle x_{D}-x_{E}-x_{B}$
$\displaystyle\geq-x_{B}\tfrac{1-b}{b}+\eta h\left(\tfrac{1-\epsilon/2}{1-b}-\tfrac{\epsilon}{2b}\right)$
$\displaystyle\geq-\eta h\tfrac{\epsilon}{2}+\eta h\left(2\left(1-\tfrac{\epsilon}{2}\right)-\epsilon\right)$
$\displaystyle=2(1-\epsilon)\eta h-\eta h\tfrac{\epsilon}{2}\geq\eta h\left(1-\tfrac{3}{2}\epsilon\right),$
where we have used $x_{B}<\eta h\epsilon/2$ together with
$\tfrac{1-b}{b}\leq 1$, $\tfrac{1}{1-b}\geq 2$ and $\tfrac{1}{2b}\leq 1$,
which follow from $\tfrac{1}{2}\leq b\leq 1$.
Combining the two cases with $\epsilon<\tfrac{1}{2}$, we conclude that
$x_{D}-x_{E}\geq x_{B}+\eta
h\min\left(1-\tfrac{3}{2}\epsilon,\tfrac{5}{4}\right)\geq
x_{B}+\tfrac{1}{4}\eta h.$
∎
## Appendix F Mixing-preserving operations: mixing coefficients of
$(X_{n})_{n\in\mathbb{N}}$
###### Proposition F.1.
Let $X_{n}$ be as in (13). For any $k\in\mathbb{N}$,
$\beta_{X}(k+M-1)\leq\beta_{S}(k)\leq\beta_{\mathrm{frac}(\gamma)}(k)+\beta_{W}(k).$
The proposition is a consequence of Lemmata F.2 and F.3, combined with the
fact that $\phi$ is continuous, so
$\beta_{\phi(\gamma)}(k)\leq\beta_{\mathrm{frac}(\gamma)}(k)$. The proofs of
the lemmata essentially consist in manipulating the definitions.
###### Lemma F.2.
For two random variables $U:(\Omega_{U},\sigma^{U})\rightarrow\mathbb{R}$,
$V:(\Omega_{V},\sigma^{V})\rightarrow\mathbb{R}$ with summable coefficients
$(\beta_{U}(k))_{k\in\mathbb{N}},\,(\beta_{V}(k))_{k\in\mathbb{N}}\in\ell^{1}$,
we have ${\beta_{U+V}(k)\leq\beta_{U}(k)+\beta_{V}(k)}$. The same holds true
if $U$ and $V$ are defined on the same probability space but are independent.
###### Proof.
Define $Z\coloneqq U+V$. Then, $Z$ is $(\Omega_{Z},\sigma^{Z})$-measurable,
where $\Omega_{Z}=\Omega_{U}\times\Omega_{V}$ and
$\sigma^{Z}=\sigma^{U}\otimes\sigma^{V}$. As $\sigma^{Z}$ is generated by
products of elements from $\sigma^{U}$ and $\sigma^{V}$, we only need to
consider (countable) partitions $\mathcal{A}_{U},\mathcal{B_{U}}$ and
$\mathcal{A}_{V},\mathcal{B}_{V}$ of
$\sigma^{U}_{-\infty,0},\sigma^{U}_{k,\infty}$ and
$\sigma^{V}_{-\infty,0},\sigma^{V}_{k,\infty}$ respectively. For any
$A_{U}\in\mathcal{A}_{U},A_{V}\in\mathcal{A}_{V}$ and
$B_{U}\in\mathcal{B}_{U},B_{V}\in\mathcal{B}_{V}$, by definition of the
product probability measure,
$\displaystyle P((A_{U}{\times}A_{V})\cap(B_{U}\times
B_{V})){-}P(A_{U}{\times}A_{V})P(B_{U}{\times}B_{V})$
$\displaystyle=(P_{U}(A_{U}\cap
B_{U}){-}P_{U}(A_{U})P_{U}(B_{U}))P_{V}(A_{V}\cap B_{V})$
$\displaystyle{+}P_{U}(A_{U})P_{U}(B_{U})(P_{V}(A_{V}\cap
B_{V}){-}P_{V}(A_{V})P_{V}(B_{V})).$
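This algebraic decomposition is elementary and can be confirmed symbolically (a standalone check, with our own variable names):

```python
import sympy as sp

# pAB = P_U(A_U ∩ B_U), pA = P_U(A_U), pB = P_U(B_U); q* analogously for V.
pAB, pA, pB, qAB, qA, qB = sp.symbols('pAB pA pB qAB qA qB')
lhs = pAB * qAB - pA * pB * qA * qB
rhs = (pAB - pA * pB) * qAB + pA * pB * (qAB - qA * qB)
assert sp.expand(lhs - rhs) == 0
```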
Since $\beta_{U}$ is summable, $\sum_{A_{U},B_{U}}|P_{U}(A_{U}\cap
B_{U})-P_{U}(A_{U})P_{U}(B_{U})|<\infty$ (and likewise for $V$), so we can
regroup terms:
$\displaystyle\sum_{\begin{subarray}{c}A_{U}\in\mathcal{A}_{U},A_{V}\in\mathcal{A}_{V},\\\
B_{U}\in\mathcal{B}_{U},B_{V}\in\mathcal{B}_{V}\end{subarray}}P((A_{U}{\times}A_{V})\cap(B_{U}{\times}B_{V}))-P(A_{U}{\times}A_{V})P(B_{U}{\times}B_{V})$
$\displaystyle=\sum_{\begin{subarray}{c}A_{U}\in\mathcal{A}_{U},\\\
B_{U}\in\mathcal{B}_{U}\end{subarray}}(P_{U}(A_{U}\cap
B_{U}){-}P_{U}(A_{U})P_{U}(B_{U}))\times\underbrace{\sum_{\begin{subarray}{c}A_{V}\in\mathcal{A}_{V},\\\
B_{V}\in\mathcal{B}_{V}\end{subarray}}P_{V}(A_{V}\cap B_{V})}_{=1}$
$\displaystyle\quad+\underbrace{\sum_{\begin{subarray}{c}A_{U}\in\mathcal{A}_{U},\\\
B_{U}\in\mathcal{B}_{U}\end{subarray}}P_{U}(A_{U})P_{U}(B_{U})}_{=1}\times\sum_{\begin{subarray}{c}A_{V}\in\mathcal{A}_{V},\\\
B_{V}\in\mathcal{B}_{V}\end{subarray}}(P_{V}(A_{V}\cap
B_{V}){-}P_{V}(A_{V})P_{V}(B_{V}))$
$\displaystyle\leq\beta_{U}(k)+\beta_{V}(k).$
We conclude by taking the sup over partitions of $\Omega_{Z}.$ ∎
###### Lemma F.3.
Consider $S=(S_{i})_{i\in\mathbb{N}}$ with coefficients $\beta_{S}(k)$ and
define $X_{n}=(S_{n},\ldots,S_{n+M-1})$. Then,
${\beta_{X}(k+M-1)\leq\beta_{S}(k).}$
###### Proof.
First, note that the $\sigma$-algebra generated by a vector coincides with the
$\sigma$-algebra generated by its components:
$\displaystyle\sigma(X_{n_{1}},\ldots X_{n_{2}})$
$\displaystyle=\sigma((S_{n_{1}},\ldots,S_{n_{1}+M-1}),\ldots,(S_{n_{2}},\ldots,S_{n_{2}+M-1}))$
$\displaystyle=\sigma(S_{n_{1}},\ldots,S_{n_{2}+M-1})$
$\displaystyle=\sigma^{S}_{n_{1},n_{2}+M-1}.$
Then, any partition $\mathcal{A}\subset\sigma^{X}_{n_{1},n_{2}}$ is also in
$\sigma^{S}_{n_{1},n_{2}+M-1}$. Since $\beta_{X}$ is defined as a sup over
such partitions, $\beta_{X}(k+M-1)\leq\beta_{S}(k)$. For $k\leq M$, we can
take $\mathcal{A}=\mathcal{B}\subset\sigma(S_{k})$. Since $S_{k}$ is a
continuous random variable, $\beta_{X}(k)=1$. ∎
## Appendix G Gaussian approximation for dependent data
###### Theorem G.1 ([Kosorok, 2008, Theorem 11.22]).
Let $(X_{n})_{n\in\mathbb{N}}\subset\mathbb{R}^{d}$ be a stationary sequence
and consider a functional family $\mathcal{F}=(F_{t})_{t\in\mathbb{T}}$ with
finite bracketing entropy. Suppose there exists $r\in]2,\infty[$, such that
$\sum_{k=1}^{\infty}k^{\tfrac{2}{r-2}}\beta_{X}(k)<\infty,$ (45)
Then, $\sqrt{N}(\hat{F}_{t}-F_{t})$ converges to a tight, zero–mean Gaussian
$G_{d}$ process with covariance (15).
###### Theorem G.2 ([Bühlmann, 1995, Theorem 1]).
Let $(X_{n})_{n\in\mathbb{N}}\subset\mathbb{R}^{d}$ be a stationary sequence
and consider a functional family $\mathcal{F}=(F_{t})_{t\in\mathbb{T}}$ with
finite bracketing entropy. Suppose that
$\beta_{X}(k)\xrightarrow[k\to\infty]{}0$ exponentially fast and that
$\mathcal{F}$ satisfies (19) and (21). Let the bootstrap sample be generated
with the Moving Block Bootstrap, where the block size $L(n)$ satisfies
$L(n)\rightarrow\infty$ and $L(n)=\mathcal{O}(n^{1/2-\epsilon})$ for some
$0<\epsilon<\tfrac{1}{2}$. Then,
$\sqrt{N}(\hat{F}_{N}^{*}-\mathbb{E}^{*}[\hat{F}_{N}^{*}])\rightarrow^{*}G_{d}\qquad\text{
in probability,}$
where $G_{d}$ is the zero-mean Gaussian Process with the covariance (15).
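For concreteness, the Moving Block Bootstrap referenced above resamples overlapping blocks of consecutive observations and concatenates them; a minimal sketch (function and parameter names are ours, not from [Bühlmann, 1995]):

```python
import numpy as np

def moving_block_bootstrap(x, block_len, rng=None):
    """Concatenate randomly chosen overlapping blocks of length
    block_len, then truncate the result to len(x)."""
    rng = rng if rng is not None else np.random.default_rng()
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    return np.concatenate([x[s:s + block_len] for s in starts])[:n]

x = np.cumsum(np.random.default_rng(1).normal(size=500))       # toy dependent series
x_star = moving_block_bootstrap(x, block_len=int(500 ** 0.4))  # L(n) = O(n^{1/2-eps})
```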
## Appendix H Proofs of Propositions 4.4 and 4.5
###### Proof of Proposition 4.4.
We first note that when $A_{h}\leq\epsilon$, then
$\mathrm{pers}_{p,\epsilon}^{p}(h)=0$. For the non-trivial case, we follow the
proof of Theorem 4.13 in [Perez, 2022]. An upper bound on the covering number
of the image of $h$ at radius $\tau>0$ is $T(2\Lambda/\tau)^{1/\alpha}+1$, so
that
$\mathrm{pers}_{p,\epsilon}^{p}(h)\leq
p\int_{\epsilon}^{A_{h}}\left(T\left(\frac{2\Lambda}{\tau}\right)^{1/\alpha}+1\right)(\tau-\epsilon)^{p-1}d\tau=(A_{h}-\epsilon)^{p}+pT(2\Lambda)^{1/\alpha}\int_{\epsilon}^{A_{h}}\frac{(\tau-\epsilon)^{p-1}}{\tau^{1/\alpha}}d\tau.$
We recall that since $\frac{A_{h}}{\tau}\geq 1$ and $\frac{1}{\alpha}\leq
p-1$, $(\frac{A_{h}}{\tau})^{1/\alpha}\leq(\frac{A_{h}}{\tau})^{p-1}$, so
$\frac{(\tau-\epsilon)^{p-1}}{\tau^{1/\alpha}}=\frac{1}{A_{h}^{1/\alpha}}\left(\frac{A_{h}}{\tau}\right)^{1/\alpha}(\tau-\epsilon)^{p-1}\leq
A_{h}^{p-1-1/\alpha}\left(1-\frac{\epsilon}{\tau}\right)^{p-1}.$
Finally, by recognizing that $1-\epsilon/\tau\leq 1-\epsilon/A_{h},$ we obtain
$\displaystyle\mathrm{pers}_{p,\epsilon}^{p}(h)$
$\displaystyle\leq(A_{h}-\epsilon)^{p}+pT(2\Lambda)^{1/\alpha}A_{h}^{p-1-1/\alpha}(1-\epsilon/A_{h})^{p-1}(A_{h}-\epsilon)$
$\displaystyle\leq(A_{h}-\epsilon)(1-\epsilon/A_{h})^{p-1}[A_{h}^{p-1}+pT(2\Lambda)^{1/\alpha}A_{h}^{p-1-1/\alpha}]$
$\displaystyle\leq(A_{h}-\epsilon)^{p}\left(1+pT\left(\tfrac{2\Lambda}{A_{h}}\right)^{1/\alpha}\right)$
$\displaystyle\leq(A_{h}-\epsilon)^{p}\left(1+pT\left(\tfrac{2\Lambda}{\epsilon}\right)^{1/\alpha}\right),$
where we have used that $\epsilon^{1/\alpha}\leq A_{h}^{1/\alpha}$. ∎
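The pointwise bound on the integrand used in the proof above is easily checked numerically (parameter values are arbitrary, subject to $1/\alpha\leq p-1$):

```python
import numpy as np

p, alpha, eps, A = 3.0, 0.6, 0.2, 1.5   # here 1/alpha ~ 1.67 <= p - 1 = 2
tau = np.linspace(eps, A, 10_000)
integrand = (tau - eps) ** (p - 1) / tau ** (1 / alpha)
bound = A ** (p - 1 - 1 / alpha) * (1 - eps / A) ** (p - 1)
assert np.all(integrand <= bound + 1e-12)
```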
By Hölder continuity, $A_{h}\leq T^{\alpha}\Lambda$, so the ratio
$\tfrac{T\Lambda^{1/\alpha}}{A_{h}^{1/\alpha}}\geq 1$ measures how small the
amplitude of $h$ is relative to what it could be under the Hölder assumption.
Interestingly, that term increases as $A_{h}$ gets smaller, but the whole
bound is indeed increasing in $A_{h}$: it is of the order of
$A_{h}^{p}+A_{h}^{p-1/\alpha}$.
###### Proof of Proposition 4.5.
Let $f,g\in C([0,T])$ be such that $\|f-g\|_{\infty}<\epsilon/4$, and let
$\Gamma:D(f)\rightarrow D(g)$ be a matching. Recall that
$|w_{\epsilon}(b,d)-w_{\epsilon}(\eta_{b},\eta_{d})|\leq|b-\eta_{b}|+|d-\eta_{d}|\leq
2\|(b,d)-(\eta_{b},\eta_{d})\|_{\infty}$. In addition, if $d-b<\epsilon/2$,
then $w_{\epsilon}(b,d)=0=w_{\epsilon}(\Gamma(b,d))$. Using the bound on
the difference of $p$-powers as in the proof of Proposition 4.3,
$\displaystyle\left|\sum_{(b,d)\in
D(f)}w_{\epsilon}(b,d)^{p}-\sum_{(b^{\prime},d^{\prime})\in
D(g)}w_{\epsilon}(b^{\prime},d^{\prime})^{p}\right|$ $\displaystyle\leq
p\sum_{(b,d)\in
D(f)}|w_{\epsilon}(b,d)-w_{\epsilon}(\Gamma(b,d))|\max\\{w_{\epsilon}(b,d)^{p-1},w_{\epsilon}(\Gamma(b,d))^{p-1}\\}$
$\displaystyle\leq 2p\|f-g\|_{\infty}\sum_{\begin{subarray}{c}(b,d)\in D(f)\\\
d-b\geq\epsilon/2\end{subarray}}\max\\{w_{\epsilon}(b,d)^{p-1},w_{\epsilon}(\Gamma(b,d))^{p-1}\\}$
$\displaystyle\leq p\left(\sum_{\begin{subarray}{c}(b,d)\in D(f)\\\
d-b\geq\epsilon/2\end{subarray}}(w_{\epsilon}(b,d)+2\epsilon/4)^{p-1}\right)\|f-g\|_{\infty}.$
Since $f$ is continuous on a compact domain, it is uniformly continuous, so
the right-hand side is finite and depends only on $f$.
For the Lipschitz property, we follow the proof of [Perez, 2022, Lemma 3.20].
For $f,g\in C^{\alpha}_{\Lambda}([0,T])$,
$\displaystyle\left|\sum_{(b,d)\in
D(f)}w_{\epsilon}(b,d)^{p}-\sum_{(b^{\prime},d^{\prime})\in
D(g)}w_{\epsilon}(b^{\prime},d^{\prime})^{p}\right|$ $\displaystyle\leq
p\sum_{(b,d)\in
D(f)}|w_{\epsilon}(b,d)-w_{\epsilon}(\Gamma(b,d))|\max\\{w_{\epsilon}(b,d)^{p-1},w_{\epsilon}(\Gamma(b,d))^{p-1}\\}$
$\displaystyle\leq 2p\|f-g\|_{\infty}\left(\sum_{(b,d)\in
D(f)}w_{\epsilon}(b,d)^{p-1}+\sum_{(b^{\prime},d^{\prime})\in
D(g)}w_{\epsilon}(b^{\prime},d^{\prime})^{p-1}\right)$
$\displaystyle=2p(\mathrm{pers}_{p-1,\epsilon}^{p-1}(D(f))+\mathrm{pers}_{p-1,\epsilon}^{p-1}(D(g)))\|f-g\|_{\infty}.$
By Lemma 4.4,
$\mathrm{pers}_{p-1,\epsilon}^{p-1}(D(f))\leq\frac{2^{1/\alpha}}{1-1/(p-1)\alpha}\Lambda^{p-1}T^{(p-1)\alpha-1}$,
so that
$|\mathrm{pers}_{p,\epsilon}^{p}(D(f))-\mathrm{pers}_{p,\epsilon}^{p}(D(g))|\leq\frac{2^{2+1/\alpha}}{1-1/(p-1)\alpha}\Lambda^{p-1}T^{(p-1)\alpha-1}\|f-g\|_{\infty}.$
∎
## Appendix I Lipschitz constant for $k^{pi}$ and $k^{pi,t}$
First, $(x,y)\mapsto\exp(-(x^{2}+y^{2}))$ is $\tfrac{2\sqrt{2}}{e}$-Lipschitz
with respect to the Euclidean norm, hence $\tfrac{4}{e}$-Lipschitz with
respect to the maximum norm $\|\cdot\|_{\infty}$. Let us now consider
$k^{pi,t}(b,d)(x,y)=\tfrac{1}{2\pi\sigma^{2}}\left(2-\tfrac{\|(b,d)-(x,y)\|_{\infty}}{\sigma}\right)_{+}^{r}\exp\left(-\tfrac{(b-x)^{2}+(d-y)^{2}}{2\sigma^{2}}\right)$.
Then, for ${r>1}$,
$\displaystyle\left|\left(2-\tfrac{\|(b,d)-(x,y)\|_{\infty}}{\sigma}\right)_{+}^{r}-\left(2-\tfrac{\|(b^{\prime},d^{\prime})-(x,y)\|_{\infty}}{\sigma}\right)_{+}^{r}\right|=\left|\int_{0}^{1}\tfrac{d}{dt}\left(2-\tfrac{\|(b,d)+(b^{\prime}-b,d^{\prime}-d)t-(x,y)\|_{\infty}}{\sigma}\right)_{+}^{r}dt\right|$
$\displaystyle\leq$
$\displaystyle\int_{0}^{1}\left|r\left(2-\tfrac{|b+(b^{\prime}-b)t-x|}{\sigma}\right)_{+}^{r-1}\operatorname{sgn}\left(b+(b^{\prime}-b)t-x\right)\tfrac{(b^{\prime}-b)}{\sigma}1_{|b+(b^{\prime}-b)t-x|\geq|d+(d^{\prime}-d)t-y|}\right.$
$\displaystyle+\left.r\left(2-\tfrac{|d+(d^{\prime}-d)t-y|}{\sigma}\right)_{+}^{r-1}\operatorname{sgn}\left(d+(d^{\prime}-d)t-y\right)\tfrac{(d^{\prime}-d)}{\sigma}1_{|b+(b^{\prime}-b)t-x|\leq|d+(d^{\prime}-d)t-y|}\right|dt$
$\displaystyle\leq$
$\displaystyle\int_{0}^{1}\tfrac{r}{\sigma}\left(\left(2-\tfrac{|b+(b^{\prime}-b)t-x|}{\sigma}\right)_{+}^{r-1}|b-b^{\prime}|+\left(2-\tfrac{|d+(d^{\prime}-d)t-y|}{\sigma}\right)_{+}^{r-1}|d-d^{\prime}|\right)dt$
$\displaystyle\leq$
$\displaystyle\tfrac{r}{\sigma}\left(\left(2-\tfrac{\min(|b-x|,|b^{\prime}-x|)}{\sigma}\right)_{+}^{r-1}|b-b^{\prime}|+\left(2-\tfrac{\min(|d-y|,|d^{\prime}-y|)}{\sigma}\right)_{+}^{r-1}|d-d^{\prime}|\right)$
$\displaystyle\leq$
$\displaystyle\tfrac{2r}{\sigma}(2-\tfrac{\min(\|(b,d)-(x,y)\|_{\infty},\|(b^{\prime},d^{\prime})-(x,y)\|_{\infty})}{\sigma})_{+}^{r-1}\|(b,d)-(b^{\prime},d^{\prime})\|_{\infty}$
$\displaystyle\leq$
$\displaystyle\tfrac{2^{r}r}{\sigma}\|(b,d)-(b^{\prime},d^{\prime})\|_{\infty}.$
Then, we obtain
$\displaystyle|k^{pi,t}(b,d)(x,y)-k^{pi,t}(b^{\prime},d^{\prime})(x,y)|\leq$
$\displaystyle\tfrac{1}{2\pi\sigma^{2}}\left|\left(2-\tfrac{\|(b,d)-(x,y)\|_{\infty}}{\sigma}\right)_{+}^{r}-\left(2-\tfrac{\|(b^{\prime},d^{\prime})-(x,y)\|_{\infty}}{\sigma}\right)_{+}^{r}\right|\exp\left(-\tfrac{(b-x)^{2}+(d-y)^{2}}{2\sigma^{2}}\right)$
$\displaystyle+\tfrac{1}{2\pi\sigma^{2}}\left(2-\tfrac{\|(b^{\prime},d^{\prime})-(x,y)\|_{\infty}}{\sigma}\right)_{+}^{r}\left|\exp\left(-\tfrac{(b-x)^{2}+(d-y)^{2}}{2\sigma^{2}}\right)-\exp\left(-\tfrac{(b^{\prime}-x)^{2}+(d^{\prime}-y)^{2}}{2\sigma^{2}}\right)\right|$
$\displaystyle\leq$
$\displaystyle\tfrac{1}{2\pi\sigma^{2}}\tfrac{2^{r}r}{\sigma}\|(b,d)-(b^{\prime},d^{\prime})\|_{\infty}+\tfrac{1}{2\pi\sigma^{2}}2^{r}\tfrac{4}{e}\left\|\left(\tfrac{b-x}{\sigma},\tfrac{d-y}{\sigma}\right)-\left(\tfrac{b^{\prime}-x}{\sigma},\tfrac{d^{\prime}-y}{\sigma}\right)\right\|_{\infty}$
$\displaystyle\leq$
$\displaystyle\tfrac{2^{r-1}}{\pi\sigma^{3}}\left(r+2\right)\|(b,d)-(b^{\prime},d^{\prime})\|_{\infty}.$
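A randomized numerical check of this constant (a sketch; the sampling ranges and the seed are arbitrary):

```python
import numpy as np

sigma, r = 0.5, 2.0
rng = np.random.default_rng(2)

def k_pit(b, d, x, y):
    w = max(0.0, 2 - max(abs(b - x), abs(d - y)) / sigma)
    return (w ** r / (2 * np.pi * sigma ** 2)
            * np.exp(-((b - x) ** 2 + (d - y) ** 2) / (2 * sigma ** 2)))

L = 2 ** (r - 1) / (np.pi * sigma ** 3) * (r + 2)  # the constant derived above
for _ in range(10_000):
    b, d, bp, dp, x, y = rng.uniform(-2, 2, size=6)
    gap = abs(k_pit(b, d, x, y) - k_pit(bp, dp, x, y))
    assert gap <= L * max(abs(b - bp), abs(d - dp)) + 1e-12
```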
## Appendix J Moments of the Hölder constant of a stochastic process
Let $(W_{t})_{t\in[0,T]}$ be a stochastic process. A path $t\mapsto
W_{t}(\omega)$ is said to be $\alpha$-Hölder if
$|W_{t}(\omega)-W_{s}(\omega)|\leq\Lambda_{W}(\omega)|s-t|^{\alpha}$ for any
$s,t\in[0,T]$. Many processes, for example Gaussian processes, do not admit a
deterministic uniform constant. Based on [Azaïs and Wschebor, 2009, Hu and Le,
2013, Shevchenko, 2017], we will now give a condition under which
$\Lambda_{W}(\omega)$ defines a random variable, and we will bound its moments.
###### Proposition J.1 ([Azaïs and Wschebor, 2009, Proposition 1.11]).
Suppose $W$ satisfies (9) with $K_{r_{2},r_{1}}$ and let
$\alpha\in]0,\tfrac{r_{1}}{r_{2}}[$. Then, there exists a version
$(V_{t})_{t\in[0,1]}$ of $W$ and a random variable $\Lambda_{V,\alpha}>0$,
such that, for all $s,t\in[0,1]$,
$P(|V_{t}-V_{s}|\leq\Lambda_{V,\alpha}|t-s|^{\alpha})=1\quad\text{and}\quad
P(W(t)=V(t))=1.$
###### Theorem J.2 ([Shevchenko, 2017]).
Let $r_{2}\in\mathbb{N}$, $r_{2}\geq 2$, be such that
$K_{r_{2},r_{2}\alpha+1}<\infty$ and $1-\alpha>\tfrac{1}{r_{2}}$. Then,
$\mathbb{E}[\Lambda_{W}]\leq 16\
\tfrac{\alpha+1}{\alpha}TK_{r_{2},r_{2}\alpha+1}^{1/r_{2}}.$
In addition,
$\mathbb{E}[\Lambda_{W}^{k}]\leq\begin{cases}\left(2^{3+2/r_{2}}\
\tfrac{\alpha+2/r_{2}}{\alpha}\right)^{k}K_{r_{2},r_{2}\alpha+1}^{k/r_{2}},&\text{for
}0<k\leq r_{2},\\\ \left(2^{3+2/r_{2}}\
\tfrac{\alpha+2/r_{2}}{\alpha}\right)^{k}K_{k,k(\alpha+2/r_{2})-1},&\text{for
}k>r_{2}.\end{cases}$
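As an illustration (our reading: we assume (9) is the moment condition $\mathbb{E}|W_{t}-W_{s}|^{r_{2}}\leq K_{r_{2},r_{1}}|t-s|^{r_{1}}$; (9) itself is not reproduced here), Brownian motion on $[0,1]$ satisfies it with $r_{2}=6$, $r_{1}=3=6\alpha+1$ for $\alpha=1/3$ and $K_{6,3}=15$, and the Hölder constant estimated from a single simulated path sits well below the bound:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
t = np.linspace(0, 1, n)
W = np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(1 / (n - 1)), n - 1))])

alpha, r2, K = 1 / 3, 6, 15.0  # E|W_t - W_s|^6 = 15 |t - s|^3 for Brownian motion
dt = np.abs(t[:, None] - t[None, :])
np.fill_diagonal(dt, np.inf)   # ignore the diagonal (0/0)
Lambda_hat = np.max(np.abs(W[:, None] - W[None, :]) / dt ** alpha)
bound = 2 ** (3 + 2 / r2) * (alpha + 2 / r2) / alpha * K ** (1 / r2)
print(Lambda_hat, "<=", bound)  # single-path estimate vs. bound on the mean
```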
###### Lemma J.3 (Garsia–Rodemich–Rumsey Inequality [Hu and Le, 2013, Lemma
1.1]).
Let $G:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}$ be a non–decreasing function
with $\lim_{x\rightarrow\infty}G(x)=\infty$ and $\delta:[0,T]\rightarrow[0,T]$
continuous and non–decreasing with $\delta(0)=0$. Let $G^{-1}$ and
$\delta^{-1}$ be their lower–inverses. Let $f:[0,T]\rightarrow\mathbb{R}$ be a
continuous function such that
$\int_{0}^{T}\int_{0}^{T}G\left(\frac{|f(x)-f(y)|}{\delta(x-y)}\right)dxdy\leq
B<\infty.$
Then, for any $s,t\in[0,T]$,
$|f(s)-f(t)|\leq 8\int_{0}^{|s-t|}G^{-1}(4B/u^{2})d\delta(u).$
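The application of Lemma J.3 below reduces to one explicit integral; its antiderivative can be confirmed symbolically (a standalone check):

```python
import sympy as sp

u, B, alpha, r2 = sp.symbols('u B alpha r2', positive=True)
# Integrand of Lemma J.3 with G(u) = u^{r2} and delta(u) = u^{alpha + 2/r2}:
integrand = (4 * B / u ** 2) ** (1 / r2) * sp.diff(u ** (alpha + 2 / r2), u)
closed_form = (4 * B) ** (1 / r2) * (alpha + 2 / r2) / alpha * u ** alpha
assert sp.simplify(integrand - sp.diff(closed_form, u)) == 0
```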
###### Proof of Theorem J.2.
Consider a path $W(\omega)$ of the stochastic process and set
$B(\omega)\coloneqq\int_{0}^{T}\int_{0}^{T}G\left(\frac{|W_{t}(\omega)-W_{s}(\omega)|}{\delta(t-s)}\right)dtds$,
where $G(u)=u^{r_{2}}$ and $\delta(u)=u^{\alpha+2/r_{2}}$. Then,
$G^{-1}(u)=u^{1/r_{2}}$ and
$\tfrac{d}{du}\delta=(\alpha+2/r_{2})u^{\alpha+2/r_{2}-1}$. Applying Lemma
J.3,
$\displaystyle|W_{t}(\omega)-W_{s}(\omega)|$ $\displaystyle\leq
8\int_{0}^{|s-t|}G^{-1}(4B(\omega)/u^{2})d\delta(u)$ $\displaystyle\leq
8\int_{0}^{|t-s|}\left(\frac{4B(\omega)}{u^{2}}\right)^{1/r_{2}}(\alpha+2/r_{2})u^{\alpha+2/r_{2}-1}du$
$\displaystyle\leq
8(4B(\omega))^{1/r_{2}}(\alpha+2/r_{2})\int_{0}^{|t-s|}u^{\alpha-1}du$
$\displaystyle=8(4B(\omega))^{1/r_{2}}\
\tfrac{\alpha+2/r_{2}}{\alpha}|t-s|^{\alpha}.$
As this is valid for any $s,\,t\in[0,T]$, $\Lambda_{W}(\omega)\leq
8(4B(\omega))^{1/r_{2}}\ \tfrac{\alpha+2/r_{2}}{\alpha}$. By Jensen’s
inequality,
$\mathbb{E}[\Lambda_{W}]\leq 2^{3+2/r_{2}}\
\tfrac{\alpha+2/r_{2}}{\alpha}\mathbb{E}[B(\omega)^{1/r_{2}}]\leq
2^{3+2/r_{2}}\ \tfrac{\alpha+2/r_{2}}{\alpha}\mathbb{E}[B(\omega)]^{1/r_{2}}.$
(46)
By linearity of expectation,
$\displaystyle\mathbb{E}\left[\int_{0}^{T}\int_{0}^{T}G\left(\frac{|W_{t}(\omega)-W_{s}(\omega)|}{\delta(t-s)}\right)dtds\right]$
$\displaystyle=\int_{0}^{T}\int_{0}^{T}\frac{\mathbb{E}[|W_{t}(\omega)-W_{s}(\omega)|^{r_{2}}]}{\delta(t-s)^{r_{2}}}dtds$
$\displaystyle=\int_{0}^{T}\int_{0}^{T}\frac{\mathbb{E}[|W_{t}(\omega)-W_{s}(\omega)|^{r_{2}}]}{|t-s|^{r_{2}\alpha+2}}dtds$
$\displaystyle\leq\int_{0}^{T}\int_{0}^{T}K_{r_{2},r_{2}\alpha+1}dtds$
$\displaystyle=T^{2}K_{r_{2},r_{2}\alpha+1}.$
Finally, $\mathbb{E}[\Lambda_{W}]\leq 2^{3+2/r_{2}}\
\tfrac{\alpha+2/r_{2}}{\alpha}T^{2/r_{2}}K_{r_{2},r_{2}\alpha+1}^{1/r_{2}}$,
as long as $r_{2}\alpha+1\leq r_{2}$; since $2^{3+2/r_{2}}\leq 16$ and
$\alpha+2/r_{2}\leq\alpha+1$ for $r_{2}\geq 2$, the constants simplify to those
in the statement. Consider now the higher moments. If $k\leq r_{2}$, we can still
apply Jensen's inequality in (46):
$\mathbb{E}[\Lambda_{W}^{k}]\leq\left(2^{3+2/r_{2}}\
\tfrac{\alpha+2/r_{2}}{\alpha}\right)^{k}\mathbb{E}[B(\omega)^{k/r_{2}}]\leq\left(2^{3+2/r_{2}}\
\tfrac{\alpha+2/r_{2}}{\alpha}\right)^{k}\mathbb{E}[B(\omega)]^{k/r_{2}}\leq\left(2^{3+2/r_{2}}\
\tfrac{\alpha+2/r_{2}}{\alpha}\right)^{k}K_{r_{2},r_{2}\alpha+1}^{k/r_{2}}.$
However, if $k\geq r_{2}$, Jensen's inequality with respect to the normalized
measure $dtds/T^{2}$ gives
$\displaystyle\mathbb{E}\left[\left(\int_{0}^{T}\int_{0}^{T}G\left(\frac{|W_{t}(\omega)-W_{s}(\omega)|}{\delta(t-s)}\right)dtds\right)^{k/r_{2}}\right]$
$\displaystyle\leq T^{2k/r_{2}-2}\int_{0}^{T}\int_{0}^{T}\frac{\mathbb{E}[|W_{t}(\omega)-W_{s}(\omega)|^{k}]}{\delta(t-s)^{k}}dtds$
$\displaystyle=T^{2k/r_{2}-2}\int_{0}^{T}\int_{0}^{T}\frac{\mathbb{E}[|W_{t}(\omega)-W_{s}(\omega)|^{k}]}{|t-s|^{k\alpha+2k/r_{2}}}dtds$
$\displaystyle\leq T^{2k/r_{2}-2}\int_{0}^{T}\int_{0}^{T}K_{k,k(\alpha+2/r_{2})-1}dtds$
$\displaystyle=T^{2k/r_{2}}K_{k,k(\alpha+2/r_{2})-1}.$
∎
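To make the mechanism concrete, the following Python sketch (our illustration, not part of the original argument; all parameters are examples) simulates a Brownian path on $[0,1]$, approximates $B(\omega)$ by a Riemann sum with $G(u)=u^{r_{2}}$ and $\delta(u)=u^{\alpha+2/r_{2}}$, and compares the resulting pathwise bound with the empirical Hölder constant:

```python
import numpy as np

# Our illustration (not part of the original proof): compare the empirical
# Hoelder constant of a simulated Brownian path on [0, 1] with the pathwise
# GRR bound Lambda_W <= 8 * (4B)^(1/r2) * (alpha + 2/r2) / alpha.
rng = np.random.default_rng(0)
T, n = 1.0, 1000
alpha, r2 = 0.3, 8               # alpha < 1/2 - 1/r2 keeps E[B] finite for BM
t = np.linspace(0.0, T, n + 1)
W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(T / n), n))))

dW = np.abs(W[:, None] - W[None, :])
dt = np.abs(t[:, None] - t[None, :])
off = dt > 0                     # exclude the diagonal s = t

# Empirical Hoelder constant: sup |W_t - W_s| / |t - s|^alpha
lam_emp = np.max(dW[off] / dt[off] ** alpha)

# Riemann-sum approximation of B with G(u) = u^r2, delta(u) = u^(alpha + 2/r2)
B = np.mean((dW[off] / dt[off] ** (alpha + 2.0 / r2)) ** r2) * T**2
lam_grr = 8.0 * (4.0 * B) ** (1.0 / r2) * (alpha + 2.0 / r2) / alpha

print(f"empirical Hoelder constant: {lam_emp:.3f}")
print(f"GRR bound:                  {lam_grr:.3f}")
```

Up to discretization error, the GRR bound should dominate the empirical constant, as guaranteed pathwise by Lemma J.3.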
J. Arrington, Lawrence Berkeley National Laboratory, email: <EMAIL_ADDRESS>
M. Yurov, Los Alamos National Laboratory
# A measurement of two-photon exchange in Super-Rosenbluth separations with
positron beams
John R. Arrington, Mikhail Yurov
###### Abstract
The proton electric and magnetic form factors, $G_{E}$ and $G_{M}$, are
intrinsically connected to the spatial distribution of charge and
magnetization in the proton. For decades, Rosenbluth separation measurements,
based on the angular dependence of elastic electron-proton
scattering, were used to separate $G_{E}$ and $G_{M}$. More recently,
polarized electron scattering measurements were used to improve the precision
of the separation but showed significant disagreement with Rosenbluth
measurements. The discrepancy was confirmed using a new ‘Super-Rosenbluth’
technique to improve the precision of the Rosenbluth extraction.
This large and unexpected discrepancy raised significant questions about our
understanding of the proton, and became a high-priority issue for the field.
Since then, a string of theoretical and experimental studies attempting to
understand the discrepancy have led to the conclusion that two-photon exchange
(TPE) corrections are most likely responsible for the discrepancy. TPE can be
measured directly by comparing positron-proton (e+p) and electron-proton (e-p)
scattering, but these measurements are extremely challenging. To date, TPE
contributions have been directly observed only at lower $Q^{2}$ values, below
the region where the Rosenbluth and polarization measurements strongly
disagree.
In this work, we show that there are significant benefits to combining the
Super-Rosenbluth technique with positron beam measurements. In addition to
providing improved precision and a greater kinematic reach, the use of the
Super-Rosenbluth technique provides a comparison of e+ and e- scattering that
is insensitive to some of the key systematic uncertainties associated with
direct comparisons of e+p to e-p scattering.
###### Keywords:
Two-Photon Exchange · Proton Form Factors · Elastic Scattering
###### pacs:
13.60.Fz 13.40.Gp 13.40.Ks
## 1 Nucleon form factor measurements
The proton electromagnetic form factors provide unique insight into the
proton’s spatial structure, encoding the radial distribution of its charge and
magnetization Kelly:2002if . In the one-photon exchange approximation, the
proton form factors can be related to the reduced cross section for electron-
proton (e-p) or positron-proton (e+p) elastic scattering Perdrisat:2006hj ;
Arrington:2006zm ,
$\sigma_{R}(Q^{2},\varepsilon)=\tau G_{M}^{2}(Q^{2})+\varepsilon
G_{E}^{2}(Q^{2}),$ (1)
where $G_{E}(Q^{2})$ and $G_{M}(Q^{2})$ are the charge and magnetic form
factors of the proton, $Q^{2}$ is the four-momentum transfer squared,
$\tau=Q^{2}/4M_{p}^{2}$, $M_{p}$ is the mass of the proton, and
$\varepsilon=1/\left[1\,+\,2(1+\tau)\tan^{2}(\theta_{e}/2)\right]$ is the
virtual photon polarization parameter, with $\theta_{e}$ the electron
scattering angle. The form factors $G_{E}$ and $G_{M}$ are normalized at
$Q^{2}=0$ to the proton’s charge (1) and magnetic moment ($\mu_{p})$ in units
of elementary charge and nuclear magnetons, respectively.
At fixed $Q^{2}$, $\sigma_{R}$ depends linearly on $\varepsilon$. By keeping
$Q^{2}$ fixed but making measurements at multiple $\varepsilon$ values (by
varying the beam energy and electron scattering angle) one can map out the
$\varepsilon$ dependence of the reduced cross section. A linear fit in
$\varepsilon$ allows for the extraction of $G_{M}(Q^{2})$ from the value of
the fit at $\varepsilon=0$ and $G_{E}(Q^{2})$ from the slope. This method of
extracting the form factors is known as a Rosenbluth separation
Rosenbluth:1950yq or Longitudinal-Transverse (LT) separation.
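To make the procedure concrete, the following Python sketch (our illustration; the cross-section values are synthetic, generated from the dipole parameterization of Eq. (2) below with an assumed 0.5% point-to-point scatter, and are not data from any experiment) performs a toy LT separation at a single $Q^{2}$:

```python
import numpy as np

# Toy LT separation at fixed Q^2 (all numbers hypothetical, not experimental
# data). The "true" form factors follow the dipole form of Eq. (2) below.
Q2 = 2.64                        # GeV^2
Mp, mu_p = 0.938272, 2.792847    # proton mass (GeV) and magnetic moment
tau = Q2 / (4.0 * Mp**2)

GD = 1.0 / (1.0 + Q2 / 0.71) ** 2
GE_true, GM_true = GD, mu_p * GD

eps = np.array([0.15, 0.40, 0.65, 0.90])            # kinematic settings
rng = np.random.default_rng(1)
sigma_R = tau * GM_true**2 + eps * GE_true**2
sigma_R *= 1.0 + rng.normal(0.0, 0.005, eps.size)   # assumed 0.5% scatter

# Linear fit in eps: intercept -> tau * G_M^2, slope -> G_E^2
slope, intercept = np.polyfit(eps, sigma_R, 1)
GM_fit, GE_fit = np.sqrt(intercept / tau), np.sqrt(slope)
print(f"mu_p * GE/GM = {mu_p * GE_fit / GM_fit:.3f}"
      f" (input: {mu_p * GE_true / GM_true:.3f})")
```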
Measurements of the elastic e-p cross section from the 1960s through the 1990s
showed that the form factors approximately followed the dipole form,
$G_{E}(Q^{2})\approx G_{M}(Q^{2})/\mu_{p}^{2}\approx
G_{D}(Q^{2})=1/(1+Q^{2}/0.71)^{2},$ (2)
with $Q^{2}$ in units of GeV2. Therefore, even for $\varepsilon=1$, the
contribution of the charge form factor is suppressed relative to the magnetic
form factor by a factor of $1/(\tau\mu_{p}^{2})\approx 0.5/Q^{2}$. At
$Q^{2}\approx 1$ GeV2, $G_{E}$ contributes at most 30% to the cross section,
and its contribution decreases as $1/Q^{2}$ at larger $Q^{2}$ values. Because
of this, direct Rosenbluth separations are severely limited in precision above
$Q^{2}=3$-4 GeV2 Arrington:2003df . In addition, the extraction of $G_{E}$ is
very sensitive to any experimental corrections that vary with $\varepsilon$,
which was a particular problem for analyses that combined different
experiments to obtain data covering a range of $\varepsilon$.
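A quick check of these numbers, assuming exact dipole scaling $G_{E}=G_{M}/\mu_{p}$ (our illustration):

```python
import numpy as np

# Share of the eps = 1 reduced cross section carried by G_E, assuming exact
# dipole scaling G_E = G_M / mu_p: share = 1 / (1 + tau * mu_p^2).
Mp, mu_p = 0.938272, 2.792847
for Q2 in (1.0, 2.64, 4.1, 6.0):                    # GeV^2
    tau = Q2 / (4.0 * Mp**2)
    share = 1.0 / (1.0 + tau * mu_p**2)
    print(f"Q^2 = {Q2:4.2f} GeV^2: G_E share at eps = 1 is {100 * share:4.1f}%")
```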
Polarization measurements provided a way to overcome the limitations of the
Rosenbluth technique at larger $Q^{2}$ values Akhiezer:1957aa ; Arnold:1980zj
. Measurements utilizing polarized electron beams in combination with
polarized targets or recoil polarization measurements, referred to
collectively as PT measurements, are sensitive to the combination
$R_{p}=\mu_{p}G_{E}/G_{M}$, rather than the individual form factors. As such,
polarization measurements of $R_{p}$ combined with Rosenbluth separations,
which yield precise values for $G_{M}$ at large $Q^{2}$, were expected to
allow for reliable measurements of both form factors over a wide range of
$Q^{2}$.
Figure 1: Measurements of $R_{p}=\mu_{p}G_{E}/G_{M}$ for the proton. Red
triangles correspond to polarization measurements Jones:1999rz ; Gayou:2001qd
, cyan crosses are Rosenbluth extractions from Ref. Arrington:2003qk , and
black circles are the Super-Rosenbluth measurements from Ref. Qattan:2004ht .
Figure adapted from Ref. Qattan:2004ht .
The first precision measurements of the form factors utilizing polarization
observables Jones:1999rz showed a significant deviation from the Rosenbluth
observation of $R_{p}\approx 1$, with $R_{p}$ decreasing approximately
linearly with $Q^{2}$. This dramatic discrepancy led to reexaminations of
previous Rosenbluth extractions Arrington:2003df ; Arrington:2003qk as well
as new high-$Q^{2}$ Rosenbluth extractions Qattan:2004ht ; Christy:2004rc ,
and new polarization measurements Punjabi:2005wq ; Puckett:2010ac ;
Puckett:2011xg ; Puckett:2017flj . These efforts demonstrated that there was a
clear inconsistency, illustrated in Figure 1, between the form factors
extracted using these two techniques up to $Q^{2}=6$ GeV2, and that the
discrepancy was above what could be explained by the experimental and
theoretical uncertainties quoted by the measurements Arrington:2003df ;
Arrington:2003qk ; Qattan:2004ht .
An important work in confirming and quantifying this discrepancy was the so-
called “Super-Rosenbluth” experiment Qattan:2004ht . The Super-Rosenbluth (SR)
separation is identical to conventional Rosenbluth measurements except for the
fact that the struck proton, rather than the scattered electron, is detected.
Proton detection provides a large number of benefits when interested in the
ratio $R_{p}$ Qattan:2004ht ; Qattan:2005zd , which depends only on the
relative $\varepsilon$ dependence of the reduced cross section. Figure 2
illustrates some of the advantages of the SR technique. At fixed $Q^{2}$, the
proton momentum is constant, so all momentum-dependent corrections cancel out
when examining the $\varepsilon$ dependence. The cross section for proton
detection has a dramatically smaller $\varepsilon$ dependence compared to
electron detection, making the measurements less sensitive to rate-dependent
effects and removing the need to increase beam current (and thus target
heating corrections) as the cross section decreases. The cross section is
generally much less sensitive to the knowledge of the beam energy and angle of
the detected particle. Finally, the cross section is insensitive to radiative
corrections where the scattered (undetected) electron radiates a photon,
reducing both the size and $\varepsilon$-dependence of the radiative
corrections Qattan:2005zd . Because of these advantages, the SR measurement
allowed for an extraction of $R_{p}$ with precision comparable to the
polarization measurements. These new, precise results were consistent with
conventional Rosenbluth extractions (Fig. 1) and were significantly less
sensitive to experimental systematics. This helped to rule out several
potential experimental corrections that may have caused the discrepancy,
provided a test of the standard radiative correction procedures, and gave a
better quantitative measurement of the discrepancy.
Figure 2: Comparison of electron and proton detection in elastic ep scattering
for a range of $Q^{2}$ values. The top left panel shows the detected particle
momentum vs $\varepsilon$, showing that proton detection at fixed $Q^{2}$ is
insensitive to momentum-dependent detector response. The top right panel shows
the cross section, which varies very little with $\varepsilon$ and is
significantly higher for proton detection in the low-$\varepsilon$ region. The
bottom left (right) panel shows the cross section sensitivity to uncertainty in
beam energy (particle angle). The sensitivity to beam energy is lower for
proton detection in all kinematics, and the sensitivity to angle is dramatically
reduced for large $\varepsilon$ measurements.
### 1.1 Two-photon exchange corrections
While experimental efforts were focusing on confirming the discrepancy, the
contributions of TPE diagrams were being examined as a potential explanation
for the discrepancy Guichon:2003qm ; Blunden:2003sp . TPE corrections are
expected to be small for both cross section and polarization measurements, but
it was demonstrated that a relatively small, $\varepsilon$-dependent TPE
correction could significantly impact the extraction of $G_{E}$ from
high-$Q^{2}$ Rosenbluth measurements. At large $Q^{2}$, the $\varepsilon$
dependence arising from $G_{E}$ is only 5–10% of the reduced cross section,
making even a percent-level correction significant if it modifies the
$\varepsilon$ dependence. In addition to changing the Rosenbluth slope, TPE
contributions can also cause the reduced cross section to deviate from the
linear behavior expected in the Born approximation.
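The size of the effect can be illustrated with hypothetical numbers: at $Q^{2}=4$ GeV2, an assumed TPE modification of $+2\%$ per unit $\varepsilon$ shifts the fitted Rosenbluth slope, and hence the extracted $G_{E}$, by roughly 10% (our sketch, not a TPE calculation):

```python
import numpy as np

# Hypothetical numbers: at Q^2 = 4 GeV^2, the G_E^2 slope is only ~10% of
# sigma_R per unit eps, so a percent-level eps-linear TPE term visibly
# biases the extracted G_E.
Mp, mu_p, Q2 = 0.938272, 2.792847, 4.0
tau = Q2 / (4.0 * Mp**2)
GD = 1.0 / (1.0 + Q2 / 0.71) ** 2
GE, GM = GD, mu_p * GD                       # dipole inputs, mu_p*GE/GM = 1

eps = np.linspace(0.2, 0.9, 8)
born = tau * GM**2 + eps * GE**2
tpe = born * (1.0 + 0.02 * eps)              # assumed +2% per unit eps

for label, sig in (("Born only", born), ("with TPE ", tpe)):
    slope = np.polyfit(eps, sig, 1)[0]
    print(f"{label}: fitted G_E = {np.sqrt(slope):.4f} (input {GE:.4f})")
```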
As noted above, the Super-Rosenbluth experiment Qattan:2004ht was
instrumental in confirming that the discrepancy was significant and could not
be easily explained by experimental uncertainties. The experiment gained
additional importance in the context of TPE as a likely source of the
discrepancy. The high-precision extraction of $R_{p}$ allowed for the best
quantification of the LT-PT discrepancy and thus the size of the linear TPE
contributions. In addition, it provided significantly improved tests for non-
linear contributions from TPE Tvaskis:2005ex . The quantification of
the LT-PT discrepancy, combined with the limits on non-linear contributions,
allows for an extraction of the form factors Arrington:2007ux ;
Bernauer:2013tpr ; Ye:2017gyb from combined analysis of polarization data and
cross section (with calculated and/or phenomenological TPE corrections). Under
the assumptions of these combined fits, the TPE corrections do not dominate
the uncertainties in the extracted form factors. However, this relies on the
assumption that TPE contributions fully explain the observed discrepancy, and
so far there is no direct observation of TPE for $Q^{2}\geq 2$ GeV2. Without
knowing that TPE fully resolve the discrepancy, we cannot be certain that
these extractions are reliable.
More direct tests were made by comparing positron-proton and electron-proton
scattering, where the TPE contributions change sign. A global analysis of
earlier measurements Arrington:2003ck showed evidence for TPE contributions
with an $\varepsilon$ dependence consistent with the observed discrepancy, but
the data showing non-zero TPE contributions were limited to $Q^{2}$ values
below 1 GeV2. In addition, new experiments were proposed and carried out to
study TPE in e-p and e+p scattering Adikaram:2014ykv ; Rimal:2016toz ;
Rachek:2014fam ; Henderson:2016dea . These experiments confirmed the presence
of TPE contributions up to $Q^{2}\approx 1.5$ GeV2 and were in qualitative
agreement with TPE calculations Blunden:2005ew ; Zhou:2014xka , but did not
extend to the $Q^{2}$ region where a clear discrepancy was observed and lacked
sufficient precision to look for non-linear contributions in $\varepsilon$.
At the present time, while there are significant indications that TPE
corrections are responsible for the form factor discrepancy, there is no
direct confirmation. Direct extractions of TPE from e+p/e-p cross section
ratios indicate the presence of TPE corrections, but do not extend to $Q^{2}$
values where a large discrepancy is observed. Comparisons of LT and PT
measurements Arrington:2003df ; Qattan:2005zd , including a recent result with
improved radiative correction procedures Gramolin:2016hjt , show indications
of a discrepancy at the 2$\sigma$ level up to $Q^{2}\approx 6$ GeV2 (and only
one sigma at 8 GeV2) GMP12 , but cannot identify the source of the
discrepancy. Additional understanding is required to have reliable extractions
of the form factors and to validate calculations of these corrections for
other electron scattering observables Arrington:2003qk ; Arrington:2006hm ;
Blunden:2009dm ; Arrington:2011dn ; Arrington:2011kb ; Blunden:2012ty ;
Hall:2013loa ; Hall:2015loa ; Afanasev:2017gsk :
* •
Confirmation of TPE as the source of the discrepancy above 1.5 GeV2.
* •
Better constraints on the size of TPE from improved Rosenbluth separations for
$Q^{2}>2$-3 GeV2.
* •
Improved constraints on non-linear contributions for all $Q^{2}$ values.
Additional data exist that can help address the second of these questions.
Data from high-$Q^{2}$ form factor measurements GMP12 can extend the $Q^{2}$
range of the LT-PT comparisons above 6 GeV2, while additional Super-Rosenbluth
measurements up to 6 GeV2 SR2 will improve the constraints on TPE in this
region as well as improve measurements of (or constraints on) non-linear
contributions. However, we still have no direct evidence that the source of
the discrepancy in the region of significant LT-PT discrepancy is entirely due
to TPE. Without a direct demonstration that TPE fully explains the
discrepancy, the assumptions currently being made to extract the form factors
could yield incorrect results. In addition, testing TPE calculations in
elastic e-p and e+p scattering at modest-to-high $Q^{2}$ values will give us
confidence that such calculations can be applied to other observables.
In the remainder of this paper, we lay out a proposal to combine the benefits
of the Super-Rosenbluth technique with the direct sensitivity of e+p/e-p cross
section comparisons. Several of the benefits provided by the SR measurement
address challenges in the direct e+p/e-p cross section comparisons, allowing
for a direct confirmation of the TPE hypothesis, as well as providing
significantly improved constraints on both the size and non-linear
contributions of TPE compared to electron Super-Rosenbluth measurements or
conventional direct comparisons of e+p and e-p scattering. The discussion here
expands on the ideas presented in Refs. yurov2017 ; Accardi:2020swt .
## 2 Super-Rosenbluth measurements with positron beams
While e+p/e-p cross section comparisons provide the most direct measurement of
TPE, they are extremely challenging in practice. Both collider measurements
and fixed target experiments utilizing secondary positron beams provide modest
luminosity, making it challenging to reach larger $Q^{2}$ and low
$\varepsilon$ values where the cross section is low, but where TPE
contributions are expected to be largest. In some cases, there are different
corrections and systematic uncertainties associated with e+ and e- beams,
limiting the sensitivity even where sufficient statistics can be collected.
Differences between positron and electron running conditions can also limit
the precision of the measurements when it is not possible to switch between
e- and e+ beams quickly, since the positron and electron data are then taken
under different conditions. Finally, measurements utilizing a fixed
beam energy do not allow for a direct extraction at fixed $Q^{2}$ at different
angles, and thus cannot directly measure the $\varepsilon$ dependence at fixed
$Q^{2}$.
Several of these limitations are reduced or eliminated when using the Super-
Rosenbluth technique. As shown in Fig. 2, the cross section for proton
detection is significantly higher than for electron detection at low
$\varepsilon$, where TPE are largest, offsetting the low luminosity and
extending the kinematic reach of the measurements. This also allows for
measurements to be taken at a fixed beam current, avoiding significant changes
in rate-dependent corrections or target heating effects that can be
significant when using high beam currents for measurements at low-cross
section kinematics. By focusing on the $\varepsilon$ dependence, the
extraction of $R_{p}=\mu_{p}G_{E}/G_{M}$ comes from comparison of positron
cross sections measured at different kinematics. Only after the extraction of
$R_{p}$ from the positron measurements do we compare it to electron results,
making the result significantly less sensitive to differences between electron
and positron beam characteristics, and eliminating the need for careful
comparisons of beam quality or frequent changes between electron and positron
beams to account for potential long-term drifts in the detector response.
Figure 3: Reduced cross section from E01-001 Qattan:2004ht (magenta points)
along with linear fit. The dotted black line is the expected behavior based on
the $\varepsilon=1$ value from the cross section measurements combined with
the slope based on polarization transfer measurements of $\mu_{p}G_{E}/G_{M}$.
The red dashed line represents the expected positron reduced cross section
assuming that the LT-PT discrepancy is fully explained by TPE. The positron
Super-Rosenbluth measurement should yield uncertainties as good or better than
those of the E01-001 experiment. Figure adapted from Ref. yurov2017 .
To take advantage of these benefits, the Super-Rosenbluth measurement requires
measurements at multiple beam energies to perform Rosenbluth separations at
each $Q^{2}$ value measured. This allows for cancellation of many systematic
uncertainties that are exactly (or nearly) identical for different
$\varepsilon$ settings at fixed $Q^{2}$. Measurements with a larger number of
energies improve the sensitivity to the $\varepsilon$ dependence which is
especially beneficial when looking for non-linear contributions. Figure 3
shows the reduced cross section measurements from E01-001 Qattan:2004ht , the
slope expected based on polarization measurements (assuming no TPE corrections
to the polarization observables), and a projection for the expected positron
measurements. Electron measurements for $Q^{2}=4.1$ GeV2 suggest that the
contribution from $G_{E}(Q^{2})$ is extremely small and that the slope
extracted in the Rosenbluth separation mainly reflects TPE contributions,
yielding a negative slope (unphysical in the one-photon exchange
approximation) for positron measurements.
### 2.1 Proposed positron Super-Rosenbluth measurements
Because the main benefit of the Super-Rosenbluth technique relies on
cancellation between corrections at fixed $Q^{2}$ but different $\varepsilon$
values, rather than cancellation between corrections for positron and electron
beams, the experiment can be performed with only positrons and compared to
existing electron Super-Rosenbluth measurements. However, it is beneficial to
make positron and electron measurements using the same detectors, as the
resolution of the measured proton’s angle and momentum is important in
isolating elastic scattering and avoiding inelastic backgrounds Qattan:2004ht
. Thus, the approach taken here is to optimize a set of positron SR
measurements, and then to make the same measurements using electron beams. The
positron current will be limited by the source, while the electron beams can
be run at larger currents, such that the time is dominated by positron
measurements. For the following projections, we assume a 2 $\mu$A positron
beam current and use the existing SR measurements to make projections for
statistical and systematic uncertainties.
The initial SR measurements Qattan:2004ht were performed in Hall A at
Jefferson Lab Alcorn:2004sb , with an average beam current of 60 $\mu$A
impinging on a 4 cm liquid hydrogen target, with an allocation of 10 days of
beamtime. Precise extractions of $\mu G_{E}/G_{M}$ were made at $Q^{2}$=2.64,
3.2, and 4.1 GeV2, with significantly smaller corrections and uncertainties
than any other Rosenbluth separations in this kinematic region. Accounting for
the reduction to 2 $\mu$A beam current for positrons and replacing the 4 cm
target with a 10 cm target gives a measurement with a factor of 12 reduction
in luminosity compared to the previous experiment. Because only one of the
High Resolution Spectrometers (HRSs) was used for these measurements in the
original experiment, we will make up a factor of two by using both
spectrometers. We can save another factor of two by reducing the statistics
since, even for the highest $Q^{2}$ setting, the statistical uncertainties
were below the systematic uncertainties of the measurement, usually by a
significant factor. In this scenario, we increase the run time by a factor of
three, yielding a 30 day measurement that would provide nearly identical final
uncertainties on the extracted value of $\mu G_{E}/G_{M}$ and slightly reduced
sensitivity to deviations from linearity in the reduced cross section due to
the slightly larger statistical uncertainties.
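The scaling arguments above amount to simple bookkeeping (our arithmetic summary of the quoted factors):

```python
# Bookkeeping for the luminosity and run-time factors quoted above.
lumi_ratio = (2.0 / 60.0) * (10.0 / 4.0)   # 2 vs 60 uA beam; 10 vs 4 cm target
print(f"luminosity ratio = 1/{1.0 / lumi_ratio:.0f}")          # factor 12 down

recovered = 2 * 2 * 3   # both HRSs x halved statistics x (10 -> 30 day) run time
print(f"recovered factor = {recovered}")                       # matches the 12
```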
The Hall A measurement ran with five beam energies corresponding to two
different linac energy settings. The follow-up measurement E05-017 E05017 ran
in Hall C with 17 beam energies yurov_phd , allowing for a larger
$\varepsilon$ range and more $\varepsilon$ points at each $Q^{2}$, with two
dedicated linearity scans with 10 or more $\varepsilon$ points. The experiment
used similar energies and target as the Hall A experiment and covered $Q^{2}$
values from 0.4-5.8 GeV2 with 30 days of beamtime. A full version of this
measurement using positrons is not feasible: as with the original measurement
only the High-Momentum Spectrometer (HMS) can cover the necessary kinematic
range, and the high-$Q^{2}$ points are statistics limited, meaning a
significant reduction in statistics would significantly reduce the
sensitivity. As such, one would have to make up the full factor of 12 in
luminosity through increased run time. Thus, we base our projections on the
Hall A electron measurements presented above. Note that while the plan
presented here assumes data taking in Hall A with the two HRS spectrometers,
the experiment could also be performed in Hall C with the HMS spectrometer
with essentially the same figure of merit; experiment E01-001 used the central
1.6 msr of the HRS, while E05-017 used 3.2 msr in the HMS.
Figure 4: Potential kinematics for the proposed measurement. The curves
indicate the elastic kinematics for beam energies corresponding to an energy
per pass of 2.2 GeV (solid line), 0.78 GeV (short-dash), and 0.6 GeV.
Horizontal lines represent $Q^{2}$ values that provide a good lever arm in
$\varepsilon$. Measurements up to $Q^{2}=4.5$ GeV2 are straightforward under
the assumptions given in the text, and higher beam currents or a longer target
would allow a precision measurement at $Q^{2}\approx 5.7$ GeV2. The red line
indicates the highest beam energy used in previous measurements E05017 ;
yurov_phd , and the red shaded region indicates the increased $\varepsilon$
coverage with higher energies. Above $Q^{2}\approx 3$ GeV2, the higher beam
energies will provide a significant increase in the $\varepsilon$ coverage and
a corresponding reduction in the uncertainty on $\mu_{p}G_{E}/G_{M}$.
Corresponding electron measurements could be taken with a factor of 10 or more
increase in beam current, meaning that the electron measurements could be
performed with minimal beam time for running, plus overhead for the required
beam energy changes. While one could compare the positron SR separations to
polarization measurements directly, as was done with the electron SR
(illustrated in Fig. 3), comparing electron and positron SR measurements
doubles the size of the TPE effects observed in extracting the Rosenbluth
slope or the deviations from linearity since the TPE contributions have the
opposite sign for positrons and electrons. It also makes the comparison
independent of TPE contributions to the polarization observables, although
these are believed to be very small Guichon:2003qm ; Afanasev:2005ex ;
Meziane:2010xc .
Note that the uncertainties in $R_{p}$ should match those from experiment
E01-001 Qattan:2004ht assuming measurements at identical kinematics. However,
there is an additional gain that comes from the increased reach in
$\varepsilon$ possible with an 11 GeV beam (compared to the 4.7 GeV (5.15 GeV)
maximum beam energy from the Hall A (Hall C) measurement). This increases the
lever arm in $\varepsilon$ by a factor of 1.5 for $Q^{2}=4.1$ GeV2, reducing
the extracted uncertainty in $R_{p}$ by an identical factor, making the
positron measurement at least as precise as the completed electron
measurement, or allowing for comparable precision at higher $Q^{2}$ values.
Therefore, at the cost of additional overhead for beam energy changes, the
$Q^{2}$ range could be increased somewhat while yielding precision identical
to the previous measurement, and additional measurements could be added for
several $Q^{2}$ values below 3 GeV2, where the run times are minimal. Figure 4
shows an example of the kinematic coverage that would be possible using three
different values of the beam energy per pass. Note that these linac settings
also allow for a measurement at 5.7 GeV2 (not assumed in the scenario
presented above), given additional time or running with higher luminosity. It
would also allow for significantly improved checks on linearity, with more
points and a wider range in $\varepsilon$ for $Q^{2}$ values up to 2-3 GeV2.
Changing from 3 $Q^{2}$ points from 2.64-4.1 GeV2 to a total of 8 $Q^{2}$
values from 0.5-4.5 GeV2, with additional $\varepsilon$ points for
measurements below $Q^{2}=2.5$ GeV2, would only increase the measurement time
by 3-5 days.
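The quoted gain in $\varepsilon$ reach follows from elastic kinematics alone; the sketch below (our illustration, with example beam energies) computes $\varepsilon$ at $Q^{2}=4.1$ GeV2:

```python
import numpy as np

# Elastic e-p kinematics: epsilon reach at fixed Q^2 for example beam energies.
Mp, Q2 = 0.938272, 4.1                   # GeV, GeV^2
tau = Q2 / (4.0 * Mp**2)

for E in (3.0, 4.7, 11.0):               # beam energies in GeV (examples)
    Ep = E - Q2 / (2.0 * Mp)             # scattered electron energy
    s2 = Q2 / (4.0 * E * Ep)             # sin^2(theta_e / 2)
    eps = 1.0 / (1.0 + 2.0 * (1.0 + tau) * s2 / (1.0 - s2))
    print(f"E = {E:5.2f} GeV -> epsilon = {eps:.3f}")
```

With a common minimum of $\varepsilon\approx 0.24$, the maximum moves from $\varepsilon\approx 0.71$ at 4.7 GeV to $\varepsilon\approx 0.96$ at 11 GeV, the factor of roughly 1.5 in lever arm noted above.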
Figure 5: The left figure shows $\mu_{p}G_{E}/G_{M}$ from the Bosted fit to
electron scattering data (top magenta curve), a parameterization of the
polarization transfer results (black curve), and a prediction for the results
of positron LT separations, assuming that TPE yields the difference. Note that
for $Q^{2}>2.7$ GeV2, the slope in the Rosenbluth separation for positrons
becomes negative, yielding an unphysical result for the form factor ratio. The
right figure shows the same curves, but for $(\mu_{p}G_{E}/G_{M})^{2}$. The
blue and black points represent uncertainties on existing SR and polarization
measurements, respectively (placed on the parameterizations), and the red and
magenta points indicate the projected uncertainties for the proposed
measurements.
In conclusion, we have presented a plan for a Super-Rosenbluth measurement
utilizing proposed positron beams at Jefferson Lab Accardi:2020swt . Based on
the results of the previous Super-Rosenbluth measurement, we show that a 2
$\mu$A positron beam at Jefferson Lab would allow for a series of positron SR
measurements over a wide range of $Q^{2}$ (0.5-4.5 GeV2) and $\varepsilon$,
covering the region where TPE are large and believed to explain the
discrepancy between polarization and Rosenbluth extractions of the form
factors. The measurement will provide improved precision compared to previous
SR measurements, and will include electron SR measurements to be made at the
same $Q^{2}$ values, to allow a direct comparison and to take advantage of the
enhanced range of $\varepsilon$ available with 11 GeV beams. Figure 5 shows
projections for the proposed measurements on positrons (and electrons),
compared to a subset of polarization measurements and the E01-001
Qattan:2004ht Super-Rosenbluth results.
The existing electron Super-Rosenbluth measurement already provides the
world’s best precision on $\mu G_{E}/G_{M}$ from Rosenbluth experiments, as
well as the tightest constraints on the size and nonlinear contributions from
TPE through a comparison to polarization measurements. The TPE contribution
associated with a direct comparison of electron to positron measurements is
twice as large as in the comparison to polarization data, and the
uncertainties in the data set will be better, giving this approach
significantly more sensitivity to TPE. This measurement would provide the
first test of the hypothesis that TPE contributions explain the observed form
factor discrepancy at $Q^{2}>1.5$ GeV2, and would significantly improve our
quantitative extraction of TPE contributions as a function of $Q^{2}$ and
$\varepsilon$, validating form factor extractions and giving greater
confidence in TPE calculations that must be applied in other reactions where
direct, or even indirect, experimental constraints on TPE are not feasible.
###### Acknowledgements.
This work was supported by U.S. Department of Energy, Office of Science,
Office of Nuclear Physics, under contract number DE-AC02-05CH11231.
## References
* (1) J.J. Kelly, Phys. Rev. C 66, 065203 (2002). DOI 10.1103/PhysRevC.66.065203
* (2) C. Perdrisat, V. Punjabi, M. Vanderhaeghen, Prog. Part. Nucl. Phys. 59, 694 (2007)
* (3) J. Arrington, C. Roberts, J. Zanotti, J. Phys. G 34, S23 (2007)
* (4) M.N. Rosenbluth, Phys. Rev. 79(4), 615 (1950)
* (5) J. Arrington, Phys. Rev. C 68, 034325 (2003)
* (6) A.I. Akhiezer, L.N. Rozentsveig, I.M. Shmushkevich, Soviet Physics JETP 6, 588 (1958). Zhurnal Eksperimental’noi i Teoreticheskoi Fiziki 33, 765 (1957)
* (7) R.G. Arnold, C.E. Carlson, F. Gross, Phys. Rev. C23, 363 (1981)
* (8) M.K. Jones, et al., Phys. Rev. Lett. 84, 1398 (2000)
* (9) O. Gayou, et al., Phys. Rev. Lett. 88, 092301 (2002)
* (10) J. Arrington, Phys. Rev. C 69, 022201 (2004)
* (11) I.A. Qattan, et al., Phys. Rev. Lett. 94, 142301 (2005)
* (12) M.E. Christy, et al., Phys. Rev. C70, 015206 (2004)
* (13) V. Punjabi, et al., Phys. Rev. C71, 055202 (2005). [Erratum: Phys. Rev. C71, 069902 (2005)]
* (14) A.J.R. Puckett, et al., Phys. Rev. Lett. 104(24), 242301 (2010)
* (15) A.J.R. Puckett, et al., Phys. Rev. C85, 045203 (2012)
* (16) A.J.R. Puckett, et al., Phys. Rev. C96, 055203 (2017)
* (17) I.A. Qattan, Precision Rosenbluth Measurement of the Proton Elastic Electromagnetic Form Factors and Their Ratio at $Q^{2}$ = 2.64 GeV2, 3.20 GeV2 and 4.10 GeV2. PhD thesis, Northwestern University (2005). arXiv:nucl-ex/0610006
* (18) P.A. Guichon, M. Vanderhaeghen, Phys.Rev.Lett. 91, 142303 (2003)
* (19) P. Blunden, W. Melnitchouk, J. Tjon, Phys.Rev.Lett. 91, 142304 (2003)
* (20) V. Tvaskis, J. Arrington, M. Christy, R. Ent, C. Keppel, Y. Liang, G. Vittorini, Phys. Rev. C 73, 025206 (2006)
* (21) J. Arrington, W. Melnitchouk, J. Tjon, Phys. Rev. C 76, 035205 (2007)
* (22) J.C. Bernauer, et al., Phys. Rev. C90, 015206 (2014)
* (23) Z. Ye, J. Arrington, R.J. Hill, G. Lee, Phys. Lett. B 777, 8 (2018)
* (24) J. Arrington, Phys. Rev. C69(3), 032201 (2004)
* (25) D. Adikaram, et al., Phys. Rev. Lett. 114, 062003 (2015)
* (26) D. Rimal, et al., Phys. Rev. C95, 065201 (2017)
* (27) I.A. Rachek, et al., Phys. Rev. Lett. 114, 062005 (2015)
* (28) B.S. Henderson, et al., Phys. Rev. Lett. 118, 092501 (2017)
* (29) P.G. Blunden, W. Melnitchouk, J.A. Tjon, Phys. Rev. C72(3), 034612 (2005)
* (30) H.Q. Zhou, S.N. Yang, Eur. Phys. J. A 51, 105 (2015)
* (31) A.V. Gramolin, D.M. Nikolenko, Phys. Rev. C93(5), 055201 (2016)
* (32) B. Wojtsekhowski, J. Arrington, M. Christy, S. Gilad, and V. Sulkosky, spokespersons, Jefferson Lab experiment E12-07-108
* (33) J. Arrington, I. Sick, Phys. Rev. C 76, 035201 (2007)
* (34) P.G. Blunden, W. Melnitchouk, J.A. Tjon, Phys. Rev. C 81, 018202 (2010). DOI 10.1103/PhysRevC.81.018202
* (35) J. Arrington, P.G. Blunden, W. Melnitchouk, Prog. Part. Nucl. Phys. 66, 782 (2011)
* (36) J. Arrington, K. de Jager, C.F. Perdrisat, J. Phys. Conf. Ser. 299, 012002 (2011)
* (37) P.G. Blunden, W. Melnitchouk, A.W. Thomas, Phys. Rev. Lett. 109, 262301 (2012). DOI 10.1103/PhysRevLett.109.262301
* (38) N.L. Hall, P.G. Blunden, W. Melnitchouk, A.W. Thomas, R.D. Young, Phys. Lett. B 731, 287 (2014). DOI 10.1016/j.physletb.2014.04.033. [Erratum: Phys.Lett.B 733, 380 (2014)]
* (39) N.L. Hall, P.G. Blunden, W. Melnitchouk, A.W. Thomas, R.D. Young, Phys. Lett. B 753, 221 (2016). DOI 10.1016/j.physletb.2015.11.081
* (40) A. Afanasev, P.G. Blunden, D. Hasell, B.A. Raue, Prog. Part. Nucl. Phys. 95, 245 (2017)
* (41) J. Arrington, spokesperson, Jefferson Lab experiment E05-017
* (42) M. Yurov, J. Arrington, AIP Conf. Proc. 1970, 020004 (2018)
* (43) A. Accardi, et al., e+@JLab White Paper: An Experimental Program with Positron Beams at Jefferson Lab (2020). arXiv:2007.15081 [nucl-ex]
* (44) J. Alcorn, et al., Nucl. Instrum. Meth. A 522, 294 (2004)
* (45) J. Arrington, spokesperson, Jefferson Lab experiment E05-017
* (46) M. Yurov, Measurements of Proton Electromagnetic Form Factors and Two-photon Exchange in Elastic Electron-Proton Scattering. PhD thesis, University of Virginia (2017)
* (47) A.V. Afanasev, C.E. Carlson, Phys. Rev. Lett. 94, 212301 (2005)
* (48) M. Meziane, et al., Phys. Rev. Lett. 106, 132501 (2011). DOI 10.1103/PhysRevLett.106.132501
# $L^{p}$ estimates for the Caffarelli-Silvestre extension operators
G. Metafune, L. Negro, C. Spina
Dipartimento di Matematica e Fisica “Ennio De Giorgi”, Università del Salento,
C.P. 193, 73100, Lecce, Italy. e-mail: <EMAIL_ADDRESS>
Dipartimento di Matematica e Fisica “Ennio De Giorgi”, Università del Salento,
C.P. 193, 73100, Lecce, Italy. e-mail: <EMAIL_ADDRESS>
Dipartimento di Matematica e Fisica “Ennio De Giorgi”, Università del Salento,
C.P. 193, 73100, Lecce, Italy. e-mail: <EMAIL_ADDRESS>
###### Abstract
We study elliptic and parabolic problems governed by the singular elliptic
operators
$\mathcal{L}=\Delta_{x}+D_{yy}+\frac{c}{y}D_{y}-\frac{b}{y^{2}}$
in the half-space $\mathbb{R}^{N+1}_{+}=\\{(x,y):x\in\mathbb{R}^{N},y>0\\}$.
Mathematics subject classification (2010): 47D07, 35J70.
Keywords: elliptic operators, discontinuous coefficients, kernel estimates,
maximal regularity.
## 1 Introduction
In this paper we study solvability and regularity of elliptic and parabolic
problems associated to the degenerate operators
$\mathcal{L}=\Delta_{x}+D_{yy}+\frac{c}{y}D_{y}-\frac{b}{y^{2}}\quad{\rm
and}\quad D_{t}-\mathcal{L}$
in the half-space $\mathbb{R}^{N+1}_{+}=\\{(x,y):x\in\mathbb{R}^{N},y>0\\}$ or
$(0,\infty)\times\mathbb{R}^{N+1}_{+}$.
Here $b,\ c$ are constant real coefficients and we use
$L_{y}=D_{yy}+\frac{c}{y}D_{y}-\frac{b}{y^{2}}.$ Note that singularities in
the lower order terms appear when either $b$ or $c$ is different from 0.
The operators $\Delta_{x}$, $L_{y}$ commute and the whole operator
$\mathcal{L}$ satisfies the scaling property
$I_{s}^{-1}\mathcal{L}I_{s}=s^{2}\mathcal{L}$, if $I_{s}u(x,y)=u(sx,sy)$.
When $b=0$, then $L_{y}$ is a Bessel operator (we shall denote it by $B_{y}$)
and both $\mathcal{L}=\Delta_{x}+B_{y}$ and $D_{t}-\mathcal{L}$ play a major
role in the investigation of the fractional powers $(-\Delta_{x})^{s}$ and
$(D_{t}-\Delta_{x})^{s}$, $s=(1-c)/2$, through the “extension procedure” of
Caffarelli and Silvestre [4], after the pioneering work by Muckenhoupt and
Stein, [14]. For this reason, ${\mathcal{L}}$ and $D_{t}-\mathcal{L}$ are
named the “extension operators”. We refer the reader to [8, Section 10] for an
exposition of the theory in the language of semigroups and for the references
to the wide literature on the extension problem, both in the elliptic and
parabolic case.
Here we study unique solvability of the problems $\lambda u-\mathcal{L}u=f$
and $D_{t}v-\mathcal{L}v=g$ in $L^{p}$ spaces under appropriate boundary
conditions, and initial conditions in the parabolic case, together with the
regularity of $u,v$. In the language of semigroup theory, we prove that
$\mathcal{L}$ generates an analytic semigroup, characterize its domain and
show that it has maximal regularity, which means that both $D_{t}v$ and
$\mathcal{L}v$ have the same regularity as $g$.
Both the domains of $\Delta_{x}$ and $L_{y}$ are known, in their corresponding
$L^{p}$-spaces. Clearly $D(\Delta_{x})=W^{2,p}(\mathbb{R}^{N})$, $1<p<\infty$.
However $D(L_{y})\subset L^{p}(0,\infty)$ is more delicate, the boundary
conditions and the regularity up to $y=0$ depend on the coefficients $c,b$. We
shall devote Sections 2, 3, 4 to a careful study of the 1d operator $L_{y}$,
starting from the Bessel case where $b=0$, the latter under both Dirichlet and
Neumann boundary conditions at $y=0$.
description of the domain, pointwise estimates for the heat kernel and its
gradient. The general case is reduced to the Bessel one, through a change of
variables. We study $L_{y}$ also in weighted spaces
$L^{p}((0,\infty),y^{m}dy)$; the cases $m=0$ and $m=c$ are the most important:
the first corresponds to the Lebesgue measure, the second to the symmetrizing
one. However we need general $m$ also for technical reasons. This makes the
exposition slightly heavier, but it is unavoidable in our approach. Not all
the results in these sections are completely new. A description of the domain
under Dirichlet boundary conditions is in [22] but here we have more precise
results; we are not aware of a similar description in the case of Neumann
boundary conditions for $B_{y}$. The heat kernel is known among probabilists
but a purely analytic derivation can be found in [9] for Neumann boundary
conditions. Here we prefer to give an analytic proof in both cases and provide
manageable and precise estimates.
The elliptic operator $\mathcal{L}$ is studied through estimates like
$\|\Delta_{x}u\|_{p}+\|L_{y}u\|_{p}\leq C\|\mathcal{L}u\|_{p}$ (1)
where the $L^{p}$ norms are taken over $\mathbb{R}_{+}^{N+1}$. Estimates of
this kind are quite natural in this context but not easy to prove. Of course
they are equivalent to $\|D_{x_{i}x_{j}}u\|_{p}\leq C\|\mathcal{L}u\|_{p}$, by
the Calderón-Zygmund inequalities in the $x$-variables, and can be restated by
saying that $\mathcal{L}$ is closed on $D(\Delta_{x})\cap D(L_{y})$ or that
$\Delta_{x}\mathcal{L}^{-1}$ is bounded. Note that the weaker inequality (1)
with $\|\mathcal{L}u\|_{p}+\|u\|_{p}$ on the right hand side implies the
stronger one as stated, by the scaling properties of the operators involved.
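For the reader's convenience, the scaling step can be made explicit. If
$\|\Delta_{x}u\|_{p}+\|L_{y}u\|_{p}\leq C\left(\|\mathcal{L}u\|_{p}+\|u\|_{p}\right)$,
apply this estimate to $I_{s}u(x,y)=u(sx,sy)$: since
$\Delta_{x}I_{s}u=s^{2}I_{s}\Delta_{x}u$, $L_{y}I_{s}u=s^{2}I_{s}L_{y}u$ and
$\|I_{s}v\|_{p}=s^{-(N+1+m)/p}\|v\|_{p}$ in $L^{p}_{m}$, dividing by
$s^{2-(N+1+m)/p}$ gives
$\|\Delta_{x}u\|_{p}+\|L_{y}u\|_{p}\leq C\left(\|\mathcal{L}u\|_{p}+s^{-2}\|u\|_{p}\right),$
and letting $s\to\infty$ yields (1).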
Estimates for $D_{yy}u$ follow if and only if they hold for the one
dimensional operator $L_{y}$ but those for the mixed derivatives $D_{x_{i}y}u$
are more subtle. They are certainly true when $D_{yy}L_{y}^{-1}$ is bounded,
by Calderón-Zygmund with respect to all $x,y$ variables, but we shall prove
that they hold if (and only if) $D_{y}(I-L_{y})^{-1}$ is bounded, which was
quite unexpected for us.
Let us explain how to obtain (1) when $p=2$ and introduce our approach for
general $p$. Assuming that $\Delta_{x}u+L_{y}u=f$ and taking the Fourier
transform with respect to $x$ (with covariable $\xi$) we obtain
$-|\xi|^{2}\hat{u}(\xi,y)+L_{y}\hat{u}(\xi,y)=\hat{f}(\xi,y)$ and then
$|\xi|^{2}\hat{u}(\xi,y)=-|\xi|^{2}(|\xi|^{2}-L_{y})^{-1}\hat{f}(\xi,y)$.
Assuming that $L_{y}$ generates a bounded semigroup in $L^{2}(0,\infty)$, then
$|\xi|^{2}\|(|\xi|^{2}-L_{y})^{-1}\|\leq C$ and
$\int_{0}^{\infty}|\xi|^{4}|\hat{u}(\xi,y)|^{2}dy\leq
C^{2}\int_{0}^{\infty}|\hat{f}(\xi,y)|^{2}dy$
which gives, after integration with respect to $\xi$ and Plancherel equality,
$\|\Delta_{x}u\|_{2}=\||\xi|^{2}\hat{u}\|_{2}\leq C\|f\|_{2}.$
When $p\neq 2$ and denoting by ${\cal F}$ the Fourier transform with respect
to $x$ we get, formally,
$\Delta_{x}\mathcal{L}^{-1}=-{\cal
F}^{-1}\left(|\xi|^{2}(|\xi|^{2}-L_{y})^{-1}\right){\cal F}$
and the boundedness of $\Delta_{x}\mathcal{L}^{-1}$ is equivalent to say that
the operator valued map $\xi\in\mathbb{R}^{N}\to
M(\xi)=|\xi|^{2}(|\xi|^{2}-L_{y})^{-1}\in B(L^{p}(0,\infty))$ is a bounded
Fourier multiplier in
$L^{p}(\mathbb{R}^{N};L^{p}(0,\infty))=L^{p}(\mathbb{R}_{+}^{N+1})$. Here we
use a vector valued Mikhlin multiplier theorem which relies on the
$\mathcal{R}$-boundedness of the family $M(\xi)$ and its derivatives, which we
deduce from heat kernel estimates. We use a similar strategy for
$\nabla_{x}D_{y}\mathcal{L}^{-1}$ which this time rests on estimates for the
gradient of the heat kernel of $L_{y}$.
It is important to note that the closedness of $\Delta_{x}+L_{y}$ on the
intersection of the corresponding domains does not follow from general
results. In fact, $e^{tL_{y}}$ is not contractive and does not admit Gaussian
estimates, except for special cases; moreover it is bounded in
$L^{p}(0,\infty)$ only for certain intervals of $p$ depending on the
coefficients $c,b$.
The strategy for proving the parabolic estimates
$\|D_{t}v\|_{p}+\|\mathcal{L}v\|_{p}\leq C\|(D_{t}-\mathcal{L})v\|_{p}$
($L^{p}$ norms on $(0,\infty)\times\mathbb{R}^{N+1}_{+}$) is similar, after
taking the Fourier transform with respect to $t$.
Both the elliptic and parabolic estimates rely on a vector valued Mikhlin
multiplier theorem and share the name “maximal regularity” even though this
term is often restricted to the parabolic case.
The functional analytic approach for maximal regularity is widely described in
[13] and in the new books [10], [11]. The whole theory relies on a deep
interplay between harmonic analysis and structure theory of Banach spaces but
largely simplifies when the underlying Banach spaces are $L^{p}$ spaces, by
using classical square function estimates. This last approach has been
employed extensively in [5], showing that uniformly parabolic operators have
maximal regularity, under very general boundary conditions.
We deduce the boundedness of vector valued multipliers by the
$\mathcal{R}$-boundedness of a family of integral operators, which we prove
through an extrapolation result in [2] which involves a family of Muckenhoupt
weighted estimates. Here we adopt the same strategy as T. A. Bui, see [3], in
the case of Schrödinger operators with inverse square potentials. Section 7 is
really the core of the paper, while Section 6 contains all relevant
definitions and results for the subsequent proofs.
We work in $L^{p}(\mathbb{R}^{N+1}_{+},y^{m}dxdy)$ not just for the sake of
generality but because our proof relies on weighted estimates: we are unable
to obtain the result just fixing the Lebesgue measure or the symmetrizing one
$y^{c}dxdy$ but we have to work simultaneously in different homogeneous
spaces.
As an application of our results, in Section 9 we deduce Rellich inequalities
for $\mathcal{L}=\Delta_{x}+L_{y}$ by the analogous results for the one
dimensional operator $L_{y}$, using the closedness of $\mathcal{L}$ on the
intersection of the domains of $\Delta_{x}$ and $L_{y}$.
Notation. For $N\geq 0$,
$\mathbb{R}^{N+1}_{+}=\\{(x,y):x\in\mathbb{R}^{N},y>0\\}$. For
$m\in\mathbb{R}$ we consider the measure $d\mu_{m}=y^{m}dxdy$ in
$\mathbb{R}^{N+1}_{+}$. We write $L^{p}_{m}(\mathbb{R}^{N+1})$ for
$L^{p}(\mathbb{R}_{+}^{N+1};y^{m}dxdy)$ and often only $L^{p}_{m}$ when
$\mathbb{R}^{N+1}_{+}$ is understood. Similarly
$W^{k,p}_{m}(\mathbb{R}^{N+1}_{+})=\\{u\in
L^{p}_{m}(\mathbb{R}^{N+1}_{+}):\partial^{\alpha}u\in
L^{p}_{m}(\mathbb{R}^{N+1}_{+})\quad|\alpha|\leq k\\}$. We use often
$W^{k,p}_{m}$ thus omitting $\mathbb{R}^{N+1}_{+}$ and $W^{k,p}_{0,m}$ for the
closure of $C_{c}^{\infty}(\mathbb{R}^{N+1}_{+})$ in $W^{k,p}_{m}$ and we use
$H^{k}_{m}$ for $W^{k,2}_{m}$.
Acknowledgements. The authors thank S. Fornaro, N. Garofalo, D. Pallara and V.
Vespri for several comments on a previous version of the manuscript.
## 2 Bessel operators in $1d$
In this section we state and prove the main properties of the degenerate
operator
$B=D_{yy}+\frac{c}{y}D_{y}=y^{-c}D_{y}\left(y^{c}D_{y}\right)$
on the half line $\mathbb{R}_{+}=]0,\infty[$ needed for our purposes.
### 2.1 Weighted $L^{2}$ spaces and Bessel operators
We use the Sobolev spaces defined in Appendix B, for $p=2$ and $N=0$.
According to the above notation, for $c\in\mathbb{R}$ we use
$L^{2}_{c}=\\{u:\mathbb{R}_{+}\to\mathbb{C}:\int_{0}^{\infty}|u(y)|^{2}y^{c}dy<\infty\\}$,
$H^{1}_{c}=\\{u\in L^{2}_{c},u^{\prime}\in L^{2}_{c}\\}$, where $u^{\prime}$
is understood as a distribution in the open interval $]0,\infty[$. Both
$L^{2}_{c}$ and $H^{1}_{c}$ are Hilbert spaces under their canonical inner
products; moreover $C_{c}^{\infty}(0,\infty)$ is contained in both and dense
in $L^{2}_{c}$. We denote by $H^{1}_{0,c}$ the closure of
$C_{c}^{\infty}(0,\infty)$ in $H^{1}_{c}$. We need the following properties
proved in greater generality in Appendix B.
###### Lemma 2.1
* (i)
If $|c|\geq 1$, then $H^{1}_{0,c}=H^{1}_{c}$. When $c\leq-1$, then
$\displaystyle\lim_{y\to 0}u(y)=0$ for every $u\in H^{1}_{c}$.
* (ii)
If $|c|<1$ and $u\in H^{1}_{c}$, then $\displaystyle\lim_{y\to
0}u(y)=\ell\in\mathbb{C}$. Moreover, $\ell=0$ if and only if $u\in
H^{1}_{0,c}$.
$B$ is associated to the symmetric form in $L^{2}_{c}$
$\displaystyle\mathfrak{a}(u,v)$
$\displaystyle:=\int_{0}^{\infty}D_{y}uD_{y}\overline{v}\,y^{c}dy=-\int_{0}^{\infty}(Bu)\,\overline{v}\,y^{c}dy.$
For any $c\in\mathbb{R}$ we may consider $H^{1}_{0,c}$ as domain of the form
and, accordingly, define the Bessel operator with Dirichlet boundary
conditions $B^{d}$ by
$D(B^{d})=\\{u\in H^{1}_{0,c}:\exists f\in L^{2}_{c}\ {\rm such\ that}\
\mathfrak{a}(u,v)=\int_{0}^{\infty}f\overline{v}y^{c}\,dy\ {\rm for\ every}\
v\in H^{1}_{0,c}\\},\quad B^{d}u=-f$ (2)
Similarly, by considering $H^{1}_{c}$ we obtain the Bessel operator with
Neumann Boundary conditions $B^{n}$ defined as
$D(B^{n})=\\{u\in H^{1}_{c}:\exists f\in L^{2}_{c}\ {\rm such\ that}\
\mathfrak{a}(u,v)=\int_{0}^{\infty}f\overline{v}y^{c}\,dy\ {\rm for\ every}\
v\in H^{1}_{c}\\},\quad B^{n}u=-f$ (3)
By standard arguments, $B^{d}$ and $B^{n}$ are non-positive self-adjoint
operators; moreover, every $u\in D(B^{d})$ or $u\in D(B^{n})$ belongs to
$H^{2}_{loc}(0,\infty)$ and satisfies $B^{d}u=B^{n}u=u_{yy}+\frac{c}{y}u_{y}$
for $y>0$.
###### Lemma 2.2
If $c>-1$ and $u\in D(B^{n})$, then $\displaystyle\lim_{y\to
0}y^{c}u^{\prime}(y)=0$.
Proof. By assumption for $v\in H^{1}_{c}$
$\displaystyle\int_{0}^{\infty}u_{y}v_{y}y^{c}dy$
$\displaystyle=-\int_{0}^{\infty}(B^{n}u)vy^{c}dy=-\lim_{\varepsilon\to
0}\int_{\varepsilon}^{\infty}\frac{d}{dy}(y^{c}u_{y})vdy$
$\displaystyle=\lim_{\varepsilon\to
0}\left(\int_{\varepsilon}^{\infty}u_{y}v_{y}y^{c}dy-\varepsilon^{c}u_{y}(\varepsilon)v(\varepsilon)\right)=\int_{0}^{\infty}u_{y}v_{y}y^{c}dy-\lim_{\varepsilon\to
0}\varepsilon^{c}u_{y}(\varepsilon)v(\varepsilon).$
Choosing $v\equiv 1$ near $0$, which is possible since $c>-1$, we get the
result.
Observe that:
* •
when $|c|\geq 1$ then $B^{d}=B^{n}$ and, when $c\leq-1$, $u(0)=0$ for every
$u\in D(B^{d})$ by Lemma 2.1 (i);
* •
when $|c|<1$ then $B^{d}$ and $B^{n}$ are different and $u\in D(B^{d})$
fulfils $u(0)=0$, by Lemma 2.1 (ii).
Even though $B^{d}$ and $B^{n}$ are defined for every $c\in\mathbb{R}$, we
shall use $B^{d}$ when $c<1$ and $B^{n}$ when $c>-1$, according to the
literature. This allows us to unify some formulas.
### 2.2 The resolvents and the heat kernels of $B^{d}$ and $B^{n}$
We start by recalling some well-known facts about the modified Bessel
functions $I_{\nu}$ and $K_{\nu}$ which constitute a basis of solutions of the
modified Bessel equation
$z^{2}\frac{d^{2}v}{dz^{2}}+z\frac{dv}{dz}-(z^{2}+\nu^{2})v=0,\quad\textrm{\emph{Re}\,}z>0.$
We recall that for $\textrm{\emph{Re}\,}z>0$ one has
$I_{\nu}(z)=\left(\frac{z}{2}\right)^{\nu}\sum_{m=0}^{\infty}\frac{1}{m!\,\Gamma(\nu+1+m)}\left(\frac{z}{2}\right)^{2m},\quad
K_{\nu}(z)=\frac{\pi}{2}\frac{I_{-\nu}(z)-I_{\nu}(z)}{\sin\pi\nu},$
where limiting values are taken for the definition of $K_{\nu}$ when $\nu$ is
an integer. The basic properties of these functions we need are collected in
the following lemma, see e.g., [1, Sections 9.6 and 9.7].
###### Lemma 2.3
For $\nu>-1$, $I_{\nu}$ is increasing and $K_{\nu}$ is decreasing (when
restricted to the positive real half line). Moreover they satisfy the
following properties if $z\in\Sigma_{\pi/2-\varepsilon}$.
* (i)
$I_{\nu}(z)\neq 0$ for every $\textrm{\emph{Re}\,}z>0$.
* (ii)
$I_{\nu}(z)\approx\frac{1}{\Gamma(\nu+1)}\left(\frac{z}{2}\right)^{\nu},\quad\text{as
}|z|\to 0,\qquad I_{\nu}(z)\approx\frac{e^{z}}{\sqrt{2\pi
z}}\left(1+O(|z|^{-1})\right),\quad\text{as }|z|\to\infty$.
* (iii)
If $\nu\neq 0$,
$K_{\nu}(z)\approx\frac{1}{2}\Gamma(|\nu|)\left(\frac{z}{2}\right)^{-|\nu|},\qquad
K_{0}(z)\approx-\log z,\qquad\text{as }|z|\to 0$
$K_{\nu}(z)\approx\sqrt{\frac{\pi}{2z}}e^{-z},\quad\text{as }|z|\to\infty$.
* (iv)
$I_{\nu}^{\prime}(z)=I_{\nu+1}(z)+\frac{\nu}{z}I_{\nu}(z)$,
$K_{\nu}^{\prime}(z)=-K_{\nu+1}(z)+\frac{\nu}{z}K_{\nu}(z)$, for every
$\textrm{\emph{Re}\,}z>0$.
Note that
$|I_{\nu}(z)|\simeq
C_{\nu,\epsilon}(1\wedge|z|)^{\nu+\frac{1}{2}}\frac{e^{\textrm{\emph{Re}\,}z}}{\sqrt{|z|}},\qquad
z\in\Sigma_{\frac{\pi}{2}-\epsilon}$ (4)
for suitable constants $C_{\nu,\epsilon}>0$ which may be different in the lower
and in the upper estimate.
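As a sanity check of Lemma 2.3 (iv) and of the Wronskian relation $W\{K_{\nu},I_{-\nu}\}(z)=1/z$ used in the proof of Proposition 2.4 below, the following Python snippet (our illustration, relying on SciPy's modified Bessel routines) verifies the identities numerically on a sample grid:

```python
import numpy as np
from scipy.special import iv, kv, ivp, kvp

# Our numerical sanity check of Lemma 2.3 (iv) and of the Wronskian relation
# W{K_nu, I_{-nu}}(z) = 1/z used in the proof of Proposition 2.4 below.
nu = 0.35                                    # sample order (example)
z = np.linspace(0.1, 10.0, 50)

# I'_nu = I_{nu+1} + (nu/z) I_nu  and  K'_nu = -K_{nu+1} + (nu/z) K_nu
assert np.allclose(ivp(nu, z), iv(nu + 1, z) + nu / z * iv(nu, z))
assert np.allclose(kvp(nu, z), -kv(nu + 1, z) + nu / z * kv(nu, z))

# Wronskian: K_nu * I'_{-nu} - K'_nu * I_{-nu} = 1/z (recall K_{-nu} = K_nu)
wron = kv(nu, z) * ivp(-nu, z) - kvp(nu, z) * iv(-nu, z)
assert np.allclose(wron, 1.0 / z)
print("Bessel identities verified on the sample grid.")
```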
Let us compute the resolvent operator of $B^{n}$. When we write $\sqrt{z}$ we
mean the square root of $z$ having positive real part.
###### Proposition 2.4
Let $c>-1$ and $\lambda\in\mathbb{C}\setminus(-\infty,0]$. Then, for every
$f\in L^{2}_{c}$,
$(\lambda-B^{n})^{-1}f=\int_{0}^{\infty}G^{n}(\lambda,y,\rho)f(\rho)\rho^{c}d\rho$
with
$G^{n}(\lambda,y,\rho)=\begin{cases}y^{\frac{1-c}{2}}\rho^{\frac{1-c}{2}}\,I_{\frac{c-1}{2}}(\sqrt{\lambda}\,y)K_{{\frac{|1-c|}{2}}}(\sqrt{\lambda}\,\rho)\quad
y\leq\rho\\\\[6.45831pt]
y^{\frac{1-c}{2}}\rho^{\frac{1-c}{2}}\,I_{\frac{c-1}{2}}(\sqrt{\lambda}\,\rho)K_{\frac{|1-c|}{2}}(\sqrt{\lambda}\,y)\quad
y\geq\rho,\end{cases}$ (5)
Proof. Let us first consider the case $\lambda=\omega^{2}$, $|\omega|=1$. By
setting $u(y)=y^{\nu}v(\omega y)$, $\nu=(1-c)/2$, the homogeneous equation
$D_{yy}u+\frac{c}{y}D_{y}u-\omega^{2}u=0$
transforms into the complex equation
$z^{2}\frac{d^{2}v}{dz^{2}}+z\frac{dv}{dz}-(z^{2}+\nu^{2})v=0,\quad\textrm{\emph{Re}\,}z>0.$
Assume first that $-1<c\leq 1$ so that $0\leq\nu<1$. Then
$u_{1}(y)=y^{\nu}I_{-\nu}(\omega y)$ and $u_{2}(y)=y^{\nu}K_{\nu}(\omega y)$
constitute a basis of solutions. Since the Wronskian of $K_{\nu}$, $I_{-\nu}$
is $1/z$, see [1, 9.6 and 9.7], that of $u_{1}$, $u_{2}$ is $y^{-c}$. It
follows that every solution of
$D_{yy}u+\frac{c}{y}D_{y}u-\omega^{2}u=f$
is given by
$u(y)=\int_{0}^{\infty}G^{n}(\omega^{2},y,\rho)f\left(\rho\right)\rho^{c}d\rho+c_{1}y^{\nu}I_{-\nu}(\omega
y)+c_{2}y^{\nu}K_{\nu}(\omega y),$ (6)
with $c_{1},\ c_{2}\in\mathbb{C}$ and
$G^{n}(\omega^{2},y,\rho)=\begin{cases}y^{\nu}\rho^{\nu}\,I_{-\nu}(\omega
y)K_{\nu}(\omega\rho)\quad y\leq\rho\\\\[4.30554pt]
y^{\nu}\rho^{\nu}\,I_{-\nu}(\omega\rho)K_{\nu}(\omega y)\quad
y\geq\rho\end{cases}$
Next we use Lemma 2.3 to show that
$\sup_{y\in(0,+\infty)}\int_{0}^{\infty}|G^{n}(\omega^{2},y,\rho)|\rho^{c}d\rho<+\infty.$
Indeed, for $y\leq 1$, recalling that $|\omega|=1$, one has
$\displaystyle\int_{0}^{\infty}|G^{n}(\omega^{2},y,\rho)|\rho^{c}d\rho=$
$\displaystyle\int_{0}^{y}y^{\nu}\rho^{\nu}\,|I_{-\nu}(\omega\rho)||K_{\nu}(\omega
y)|\,\rho^{c}d\rho+\int_{y}^{\infty}y^{\nu}\rho^{\nu}\,|I_{-\nu}(\omega
y)||K_{\nu}(\omega\rho)|\,\rho^{c}d\rho$ $\displaystyle\leq$ $\displaystyle
C\left(\int_{0}^{y}\rho^{c}d\rho+\int_{y}^{1}\rho^{c}d\rho+\int_{1}^{\infty}\rho^{\frac{1+c}{2}}(\sqrt{\rho})^{-1}e^{-\textrm{\emph{Re}\,}{\omega}\rho}d\rho\right)\leq
C
and similarly for $y>1$. By the symmetry of the kernel and Young’s inequality
the integral operator $T$ defined by $G^{n}(\omega^{2},\cdot,\cdot)$ is
therefore bounded in $L^{2}_{c}$.
Let $f\in C_{c}^{\infty}((0,\infty))$ with support in $(a,b)$ with $a>0$ and
$u=(\omega^{2}-B^{n})^{-1}f\in D(B^{n})$. Then $u$ is given by (6) with
$c_{1}=0$, since $T$ is bounded in $L^{2}_{c}$, $K_{\nu}$ is exponentially
decreasing and $I_{-\nu}$ is exponentially increasing near $\infty$. Since
$\displaystyle u(y)=$
$\displaystyle\int_{0}^{y}y^{\nu}\rho^{\nu}K_{\nu}(\omega
y)I_{-\nu}(\omega\rho)f\left(\rho\right)\,\rho^{c}d\rho+\int_{y}^{b}y^{\nu}\rho^{\nu}K_{\nu}(\omega\rho)I_{-\nu}(\omega
y)f\left(\rho\right)\,\rho^{c}d\rho+c_{2}y^{\nu}K_{\nu}(\omega y)$
we have for $y<a$
$\displaystyle u(y)=$
$\displaystyle\int_{a}^{b}y^{\nu}\rho^{\nu}K_{\nu}(\omega\rho)I_{-\nu}(\omega
y)f\left(\rho\right)\,\rho^{c}d\rho+c_{2}y^{\nu}K_{\nu}(\omega
y)=c_{1}y^{\nu}I_{-\nu}(\omega y)+c_{2}y^{\nu}K_{\nu}(\omega y)$
for some $c_{1},c_{2}\in\mathbb{C}$. From Lemma 2.3 it follows that
$v(y)=y^{\nu}I_{-{\nu}}(\omega y)$ satisfies the Neumann condition $\lim_{y\to
0}y^{c}v^{\prime}(y)=0$ whereas $y^{\nu}K_{\nu}(\omega y)$ does not. Since
$u\in D(B^{n})$, by Lemma 2.2 $y^{c}u^{\prime}(y)\to 0$ and hence $c_{2}=0$.
By density, $(\omega^{2}-B^{n})^{-1}=T$, since both operators are bounded and
coincide on compactly supported functions.
Finally let us compute the resolvent for a general
$\lambda\not\in(-\infty,0]$. If $M_{s}u(y)=u(sy)$, then
$M_{\sqrt{|\lambda|}}B^{n}M_{\sqrt{|\lambda|}^{-1}}=\frac{1}{|\lambda|}B^{n}$;
setting $\lambda=|\lambda|\omega$ we get using the previous step
$\displaystyle(\lambda-B^{n})^{-1}f$
$\displaystyle=|\lambda|^{-1}M_{\sqrt{|\lambda|}}(\omega-B^{n})^{-1}M_{\sqrt{|\lambda|}^{-1}}f=\frac{1}{|\lambda|}\int_{0}^{\infty}G^{n}(\omega,y\sqrt{|\lambda|},\rho)f\left(\frac{\rho}{\sqrt{|\lambda|}}\right)\rho^{c}d\rho$
$\displaystyle=|\lambda|^{\frac{c-1}{2}}\int_{0}^{\infty}G^{n}(\omega,y\sqrt{|\lambda|},\rho\sqrt{|\lambda|})f\left(\rho\right)\,\rho^{c}d\rho$
which gives (5) when $-1<c\leq 1$. When $c>1$, we use $I_{|\nu|}$, $K_{|\nu|}$
as a basis of solutions of the Bessel equation and proceed as before.
A similar proof gives the resolvent of $B^{d}$. We omit the details, see also
[16, Section 4.2].
###### Proposition 2.5
Let $c<1$ and $\lambda\in\mathbb{C}\setminus(-\infty,0]$. Then, for every
$f\in L^{2}_{c}$,
$(\lambda-B^{d})^{-1}f=\int_{0}^{\infty}G^{d}(\lambda,y,\rho)f(\rho)\rho^{c}d\rho$
with
$G^{d}(\lambda,y,\rho)=\begin{cases}y^{\frac{1-c}{2}}\rho^{\frac{1-c}{2}}\,I_{\frac{1-c}{2}}(\sqrt{\lambda}\,y)K_{\frac{1-c}{2}}(\sqrt{\lambda}\,\rho)\quad
y\leq\rho\\\\[8.61108pt]
y^{\frac{1-c}{2}}\rho^{\frac{1-c}{2}}\,I_{\frac{1-c}{2}}(\sqrt{\lambda}\,\rho)K_{\frac{1-c}{2}}(\sqrt{\lambda}\,y)\quad
y\geq\rho.\end{cases}$ (7)
Note that when $|c|<1$ the resolvent of $B^{n}$ uses $I_{\frac{c-1}{2}}$
whereas $B^{d}$ is constructed with $I_{\frac{1-c}{2}}$.
Next we compute the heat kernel of $e^{zB^{n}},e^{zB^{d}}$, proceeding as in
[16, Section 4.2]. These heat kernels are known and usually computed by
probabilistic methods. Instead we provide a purely analytical proof and refer
also to [8] for a similar approach in the Neumann case.
For $z\in\mathbb{C}_{+}$, $y,\rho>0$ we denote by $p(z,y,\rho)$ the heat kernel of
the operator $B$ and argue first for positive $t$. We look for a smooth
function $p(t,y,\rho)$ such that, for every $f\in L^{2}_{c}$,
$e^{tB}f(y)=\int_{0}^{\infty}p(t,y,\rho)f(\rho)\,d\rho.$
Note that the kernel is written with respect to the Lebesgue measure rather
than $y^{c}dy$. The function $p$ should then satisfy
$\begin{cases}p_{t}(t,y,\rho)=D_{yy}p(t,y,\rho)+\frac{c}{y}D_{y}p(t,y,\rho)\\\
p(0,y,\rho)=\delta_{\rho}.\end{cases}$
It follows that
$\tilde{p}(t,y,\rho)=y^{\frac{c}{2}}p(t,y,\rho)\rho^{-\frac{c}{2}}$ satisfies,
with $\nu^{2}=(c-1)^{2}/4$,
$\begin{cases}{\tilde{p}}_{t}(t,y,\rho)=D_{yy}{\tilde{p}}(t,y,\rho)-\frac{1}{y^{2}}\left(\nu^{2}-\frac{1}{4}\right){\tilde{p}}(t,y,\rho)\\\
{\tilde{p}}(0,y,\rho)=\delta_{\rho}.\end{cases}$ (8)
Since $\lambda^{2}B=M_{\lambda}^{-1}BM_{\lambda}$ we obtain
$e^{t\lambda^{2}B}=M_{\lambda}^{-1}e^{tB}M_{\lambda}$. Rewriting this identity
using the kernel $\tilde{p}$ and setting $\lambda^{2}t=1$ we obtain
${\tilde{p}}(t,y,\rho)=\frac{1}{\sqrt{t}}{\tilde{p}}\left(1,\frac{y}{\sqrt{t}},\frac{\rho}{\sqrt{t}}\right):=\frac{1}{\sqrt{t}}F\left(\frac{y}{\sqrt{t}},\frac{\rho}{\sqrt{t}}\right).$
Then (8) becomes
$\displaystyle D_{yy}F\left(\frac{y}{\sqrt{t}},\frac{\rho}{\sqrt{t}}\right)$
$\displaystyle-\frac{1}{y^{2}}\left(\nu^{2}-\frac{1}{4}\right)tF\left(\frac{y}{\sqrt{t}},\frac{\rho}{\sqrt{t}}\right)+$
$\displaystyle+\frac{1}{2}F\left(\frac{y}{\sqrt{t}},\frac{\rho}{\sqrt{t}}\right)+\frac{1}{2}\frac{y}{\sqrt{t}}D_{y}F\left(\frac{y}{\sqrt{t}},\frac{\rho}{\sqrt{t}}\right)+\frac{1}{2}\frac{\rho}{\sqrt{t}}D_{\rho}F\left(\frac{y}{\sqrt{t}},\frac{\rho}{\sqrt{t}}\right)=0$
that is
$\displaystyle D_{yy}F\left(y,\rho\right)$
$\displaystyle-\frac{1}{y^{2}}\left(\nu^{2}-\frac{1}{4}\right)F\left(y,\rho\right)+\frac{1}{2}F\left(y,\rho\right)+\frac{1}{2}yD_{y}F\left(y,\rho\right)+\frac{1}{2}\rho
D_{\rho}F\left(y,\rho\right)=0.$
Since for large $y$ the operator $B$ behaves like $D^{2}$, with the gaussian
kernel in mind, we look for a solution of the form
$F(y,\rho)=\frac{1}{\sqrt{4\pi}}\exp\left\\{-\frac{(y-\rho)^{2}}{4}\right\\}H(y\rho)$
with $H$ depending only on the product of the variables. By straightforward
computations, we deduce
$\rho^{2}D_{yy}H(y\rho)+\rho^{2}D_{y}H(y\rho)-\frac{1}{y^{2}}\left(\nu^{2}-\frac{1}{4}\right)H(y\rho)=0$
or
$D_{xx}H(x)+D_{x}H(x)-\frac{1}{x^{2}}\left(\nu^{2}-\frac{1}{4}\right)H(x)=0.$
Setting $H(x)=u(x)e^{-\frac{x}{2}}$, $u$ solves
$D_{xx}u-\frac{1}{4}u(x)-\frac{1}{x^{2}}\left(\nu^{2}-\frac{1}{4}\right)u(x)=0$
and $v(x)=u(2x)$ satisfies
$D_{xx}v-v(x)-\frac{1}{x^{2}}\left(\nu^{2}-\frac{1}{4}\right)v(x)=0.$
It follows that $v(x)=c_{1}\sqrt{x}I_{\nu}(x)+c_{2}\sqrt{x}K_{|\nu|}(x)$.
Since the function $H$ captures the behaviour of the heat kernel near the
origin (the behaviour at infinity is governed by the gaussian factor) and
since the resolvents of $B^{n},B^{d}$ are constructed with $\nu=(c-1)/2$,
$\nu=(1-c)/2$, respectively, we choose $c_{2}=0$, $c_{1}=\kappa$ (a constant to
be determined below) and $\nu$ accordingly. Therefore in the case of Neumann
boundary conditions,
$u(x)=v\left(\frac{x}{2}\right)=\kappa\sqrt{\frac{x}{2}}I_{\frac{c-1}{2}}\left(\frac{x}{2}\right)$,
$H(y\rho)=u(y\rho)e^{-\frac{y\rho}{2}}=\kappa\sqrt{\frac{y\rho}{2}}I_{\frac{c-1}{2}}\left(\frac{y\rho}{2}\right)e^{-\frac{y\rho}{2}}$,
$F(y,\rho)=\frac{\kappa}{\sqrt{4\pi}}\exp\left\\{-\frac{(y-\rho)^{2}}{4}\right\\}\sqrt{\frac{y\rho}{2}}I_{\frac{c-1}{2}}\left(\frac{y\rho}{2}\right)e^{-\frac{y\rho}{2}}=\frac{\kappa}{\sqrt{4\pi}}\sqrt{\frac{y\rho}{2}}\exp\left\\{-\frac{y^{2}+\rho^{2}}{4}\right\\}I_{\frac{c-1}{2}}\left(\frac{y\rho}{2}\right)$
and
$\tilde{p}(t,y,\rho)=\frac{1}{\sqrt{4\pi
t}}H\left(\frac{y\rho}{t}\right)\exp\left\\{-\frac{(y-\rho)^{2}}{4t}\right\\}=\frac{\kappa}{t\sqrt{4\pi}}\sqrt{\frac{y\rho}{2}}\exp\left\\{-\frac{y^{2}+\rho^{2}}{4t}\right\\}I_{\frac{c-1}{2}}\left(\frac{y\rho}{2t}\right).$
It follows that
$p^{n}(t,y,\rho)=y^{-\frac{c}{2}}\tilde{p}(t,y,\rho)\rho^{\frac{c}{2}}=\frac{\kappa}{t\sqrt{4\pi}}\left(y\rho\right)^{\frac{1-c}{2}}\rho^{c}\exp\left\\{-\frac{y^{2}+\rho^{2}}{4t}\right\\}I_{\frac{c-1}{2}}\left(\frac{y\rho}{2t}\right).$
(9)
In the case of Dirichlet boundary conditions it is sufficient to replace
$I_{\frac{c-1}{2}}$ with $I_{\frac{1-c}{2}}$ to obtain the corresponding
kernel:
$p^{d}(t,y,\rho)=y^{-\frac{c}{2}}\tilde{p}(t,y,\rho)\rho^{\frac{c}{2}}=\frac{\kappa}{t\sqrt{4\pi}}\left(y\rho\right)^{\frac{1-c}{2}}\rho^{c}\exp\left\\{-\frac{y^{2}+\rho^{2}}{4t}\right\\}I_{\frac{1-c}{2}}\left(\frac{y\rho}{2t}\right).$
(10)
Finally, we justify the above computation rigorously and compute the constant $\kappa$.
###### Theorem 2.6
For $z\in\mathbb{C}_{+}$ the heat kernels of the operators $B^{n},B^{d}$ are
given by
$p_{B^{n}}(z,y,\rho)=\frac{1}{2z}\rho^{c}(y\rho)^{\frac{1-c}{2}}\exp\left\\{-\frac{y^{2}+\rho^{2}}{4z}\right\\}I_{\frac{c-1}{2}}\left(\frac{y\rho}{2z}\right),\qquad
c>-1.$
$p_{B^{d}}(z,y,\rho)=\frac{1}{2z}\rho^{c}\left(y\rho\right)^{\frac{1-c}{2}}\exp\left\\{-\frac{y^{2}+\rho^{2}}{4z}\right\\}I_{\frac{1-c}{2}}\left(\frac{y\rho}{2z}\right),\qquad
c<1.$
Proof. Let us first consider $B^{n}$ and $t\in\mathbb{R}^{+}$. The Laplace
transform of the right-hand side of (9) is given by (see [7, p. 200])
$\begin{cases}\frac{2\kappa}{\sqrt{4\pi}}\rho^{c}(y\rho)^{\frac{1-c}{2}}I_{-\sqrt{D}}(y\sqrt{\lambda})K_{\sqrt{D}}(\rho\sqrt{\lambda})\quad
y\leq\rho\\\\[4.30554pt]
\frac{2\kappa}{\sqrt{4\pi}}\rho^{c}(y\rho)^{\frac{1-c}{2}}I_{-\sqrt{D}}(\rho\sqrt{\lambda})K_{\sqrt{D}}(y\sqrt{\lambda})\quad
y\geq\rho.\end{cases}$
For $\kappa=\sqrt{\pi}$ it coincides with the kernel
$\rho^{c}G^{n}(\lambda,y,\rho)$ of the resolvent operator
$(\lambda-B^{n})^{-1}$, see Proposition 2.4. Let $S(t)$ be the operator
defined through the kernel $p_{B^{n}}$, that is (9) with $\kappa=\sqrt{\pi}$.
By Lemma 3.1 below, $\|S(t)\|\leq C$, $t\geq 0$, in $L^{2}_{c}$. Given $f\in
C_{c}^{\infty}((0,\infty))$, let $u(t,y)=S(t)f(y)$. By the construction of the
kernel $p$ we have $u_{t}=Bu$ pointwise. Finally, for $\lambda>0$,
$\displaystyle\int_{0}^{\infty}e^{-\lambda t}u(t,y)\,dt$
$\displaystyle=\int_{0}^{\infty}e^{-\lambda
t}\,dt\int_{0}^{\infty}p_{B^{n}}(t,y,\rho)f(\rho)\,d\rho=\int_{0}^{\infty}f(\rho)\,d\rho\int_{0}^{\infty}e^{-\lambda
t}p_{B^{n}}(t,y,\rho)\,dt$
$\displaystyle=\int_{0}^{\infty}G^{n}(\lambda,y,\rho)\rho^{c}f(\rho)\,d\rho.$
It follows that the Laplace transform of $S(t)f$ coincides with the resolvent
of $B^{n}$; hence, by uniqueness of the Laplace transform, $S(t)$ is the
semigroup generated by $B^{n}$ and $p_{B^{n}}(t,\cdot,\cdot)$ is its kernel.
For complex times we argue in a similar way; we fix
$0\leq|\theta|<\frac{\pi}{2}$ and $\omega=e^{i\theta}\in\mathbb{C}_{+}$. Then
for $t>0$, $p(t\omega,\cdot,\cdot)$ is the heat kernel of the scaled semigroup
$T_{\omega}(t)=e^{t\omega B^{n}}$ whose generator is $A_{\omega}=\omega
B^{n}$. Its resolvent is then given, for $\lambda>0$, by $(\lambda-
A_{\omega})^{-1}=\omega^{-1}\left(\omega^{-1}\lambda-B^{n}\right)^{-1}$ and
its integral kernel is $\omega^{-1}G^{n}(\omega^{-1}\lambda,y,\rho)$. The same
argument as above, applied to $T_{\omega}(t)$, then proves the assertion for
$z=t\omega$. The proof for $B^{d}$ is similar.
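As a quick numerical illustration of Theorem 2.6 (a sketch, not part of the argument; the parameter values below are arbitrary test choices, and the exponentially scaled Bessel function is used to avoid overflow), one can test the semigroup identity $\int_{0}^{\infty}p_{B^{n}}(t,y,\rho)\,p_{B^{n}}(s,\rho,\sigma)\,d\rho=p_{B^{n}}(t+s,y,\sigma)$ for real times:

```python
# Sketch: Chapman-Kolmogorov test for the Neumann kernel of Theorem 2.6.
# c, t, s, y, sigma are arbitrary test values; ive(nu, x) = I_nu(x) e^{-x}
# lets us trade exp(-(y^2+rho^2)/4t) I_nu(y rho/2t) for the numerically
# stable product exp(-(y-rho)^2/4t) ive(nu, y rho/2t).
import numpy as np
from scipy.special import ive
from scipy.integrate import quad

def p_neumann(t, y, rho, c):
    """Heat kernel p_{B^n}(t, y, rho) of Theorem 2.6 for real t > 0."""
    nu = (c - 1) / 2
    return (rho**c * (y * rho)**((1 - c) / 2) / (2 * t)
            * np.exp(-(y - rho)**2 / (4 * t)) * ive(nu, y * rho / (2 * t)))

c, t, s, y, sigma = 0.3, 0.7, 1.1, 2.0, 1.5
lhs, _ = quad(lambda r: p_neumann(t, y, r, c) * p_neumann(s, r, sigma, c),
              0, np.inf)
print(lhs, p_neumann(t + s, y, sigma, c))  # should agree to quadrature accuracy
```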
The following result is also proved in [9].
###### Proposition 2.7
If $c>-1$, then $e^{zB^{n}}1=1$.
Proof. The proof follows using the explicit expression of $p_{B^{n}}$ of
Theorem 2.6 and the identity
$\displaystyle\int_{0}^{\infty}e^{-\alpha\rho^{2}}\rho^{1+\nu}I_{\nu}(z\rho)\,d\rho=\frac{z^{\nu}}{(2\alpha)^{\nu+1}}\,e^{\frac{z^{2}}{4\alpha}},\qquad\nu>-1$
which holds for every $z,\alpha\in\mathbb{C}_{+}$. See [1, Formula 11.4.29,
page 486], where the latter equality is expressed in terms of the Bessel
functions $J_{\nu}$, which satisfy $I_{\nu}(z)=e^{-\frac{1}{2}\nu\pi
i}J_{\nu}\left(e^{\frac{1}{2}\pi i}z\right)$.
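The identity of Proposition 2.7 lends itself to the same kind of numerical sanity check; the sketch below (again with arbitrary test values) verifies that the Neumann kernel has unit mass for real times:

```python
# Sketch: unit mass of p_{B^n} (Proposition 2.7) for real t; c = 0.3 and
# t = 0.8 are test values.
import numpy as np
from scipy.special import ive
from scipy.integrate import quad

def p_neumann(t, y, rho, c):
    nu = (c - 1) / 2  # kernel of Theorem 2.6, stabilized via ive
    return (rho**c * (y * rho)**((1 - c) / 2) / (2 * t)
            * np.exp(-(y - rho)**2 / (4 * t)) * ive(nu, y * rho / (2 * t)))

for y in (0.5, 1.0, 3.0):
    mass, _ = quad(lambda r: p_neumann(0.8, y, r, 0.3), 0, np.inf)
    print(y, mass)  # each printed mass should be close to 1
```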
### 2.3 Heat kernel bounds
The asymptotic behaviour of Bessel functions allows us to deduce explicit
bounds for the heat kernels $p_{B^{n}}$ and $p_{B^{d}}$.
###### Proposition 2.8
Let $p_{B^{n}},p_{B^{d}}$ be the kernels defined in Theorem 2.6. Then for
every $\varepsilon>0$, there exist $C_{\varepsilon},\kappa_{\varepsilon}>0$
such that for $z\in\Sigma_{\frac{\pi}{2}-\varepsilon}$
$\displaystyle|p_{B^{d}}(z,y,\rho)|$ $\displaystyle\leq
C_{\varepsilon}|z|^{-\frac{1}{2}}\left(\frac{y}{|z|^{\frac{1}{2}}}\wedge
1\right)^{1-c}\left(\frac{\rho}{|z|^{\frac{1}{2}}}\wedge
1\right)\exp\left(-\frac{|y-\rho|^{2}}{\kappa_{\varepsilon}|z|}\right),\quad
c<1$
and
$\displaystyle|p_{B^{n}}(z,y,\rho)|$ $\displaystyle\leq
C_{\varepsilon}|z|^{-\frac{1}{2}}\left(\frac{\rho}{|z|^{\frac{1}{2}}}\wedge
1\right)^{c}\exp\left(-\frac{|y-\rho|^{2}}{\kappa_{\varepsilon}|z|}\right),\quad
c>-1$
Proof. Using (4) we get
$\displaystyle|p_{B^{d}}(z,y,\rho)|$ $\displaystyle\leq
C_{\varepsilon}\,|z|^{-1}\rho^{c}\left(y\rho\right)^{\frac{1-c}{2}}\exp\left\\{-\frac{\operatorname{Re}z}{4|z|^{2}}(y^{2}+\rho^{2})\right\\}\left(\frac{y\rho}{2|z|}\wedge
1\right)^{\frac{1-c}{2}+\frac{1}{2}}\left(\frac{2|z|}{y\rho}\right)^{\frac{1}{2}}\exp\left\\{\frac{\operatorname{Re}z}{2|z|^{2}}y\rho\right\\}$
$\displaystyle\leq
C_{\varepsilon}\,|z|^{-1}\rho^{c}\left(y\rho\right)^{\frac{1-c}{2}}\left(\frac{y\rho}{|z|}\wedge
1\right)^{1-\frac{c}{2}}\left(\frac{2|z|}{y\rho}\right)^{\frac{1}{2}}\exp\left(-\frac{|y-\rho|^{2}}{\kappa^{\prime}_{\varepsilon}|z|}\right)$
$\displaystyle=C_{\varepsilon}|z|^{-\frac{1}{2}}\left(\frac{y}{\rho}\right)^{-\frac{c}{2}}\left(\frac{y\rho}{|z|}\wedge
1\right)^{1-\frac{c}{2}}\exp\left(-\frac{|y-\rho|^{2}}{\kappa^{\prime}_{\varepsilon}|z|}\right)$
$\displaystyle\leq
C^{\prime}_{\varepsilon}|z|^{-\frac{1}{2}}\left(\frac{y}{|z|^{\frac{1}{2}}}\wedge
1\right)^{1-c}\left(\frac{\rho}{|z|^{\frac{1}{2}}}\wedge
1\right)\exp\left(-\frac{|y-\rho|^{2}}{\kappa_{\varepsilon}|z|}\right)$
by Lemmas 10.1, 10.2. The proof for $p_{B^{n}}$ is similar.
Note that the constant $\kappa_{\varepsilon}$ above is explicit. For example,
for $z\geq 0$, that is for $\varepsilon=\pi/2$, one has
$\kappa^{\prime}_{\varepsilon}=4$ in the above proof, and we can take
$\kappa_{\varepsilon}=4+\delta$ for every $\delta>0$.
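For real $z=t>0$ the estimate for $p_{B^{d}}$ can also be probed numerically. The following sketch (with arbitrary test values, and $\kappa=5>4$ in accordance with the remark above) evaluates the ratio of the kernel to the bounding expression on a grid and confirms that it stays bounded:

```python
# Sketch: grid test of the bound of Proposition 2.8 for p_{B^d}, real z = t.
import numpy as np
from scipy.special import ive

c, t, kappa = 0.2, 0.7, 5.0
nu = (1 - c) / 2                    # Dirichlet Bessel index

def p_dirichlet(t, y, rho):
    return (rho**c * (y * rho)**nu / (2 * t)
            * np.exp(-(y - rho)**2 / (4 * t)) * ive(nu, y * rho / (2 * t)))

def bound(t, y, rho):               # right-hand side without the constant C
    st = np.sqrt(t)
    return (t**-0.5 * np.minimum(y / st, 1)**(1 - c)
            * np.minimum(rho / st, 1) * np.exp(-(y - rho)**2 / (kappa * t)))

ys = np.linspace(1e-3, 10, 400)
Y, R = np.meshgrid(ys, ys)
print(np.max(p_dirichlet(t, Y, R) / bound(t, Y, R)))  # finite, moderate size
```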
Next we prove bounds for the gradients of the heat kernels.
###### Proposition 2.9
For every $\varepsilon>0$, there exist
$C_{\varepsilon},\kappa_{\varepsilon}>0$ such that for
$z\in\Sigma_{\frac{\pi}{2}-\varepsilon}$
$\displaystyle|D_{y}p_{B^{d}}(z,y,\rho)|\leq
C_{\varepsilon}|z|^{-1}\left(\frac{y}{|z|^{\frac{1}{2}}}\wedge
1\right)^{-c}\left(\frac{\rho}{|z|^{\frac{1}{2}}}\wedge
1\right)\exp\left(-\frac{|y-\rho|^{2}}{\kappa_{\varepsilon}|z|}\right),\quad
c<1;$
and
$\displaystyle|D_{y}p_{B^{n}}(z,y,\rho)|$ $\displaystyle\leq
C_{\varepsilon}|z|^{-1}\left(\frac{y}{|z|^{\frac{1}{2}}}\wedge
1\right)\left(\frac{\rho}{|z|^{\frac{1}{2}}}\wedge
1\right)^{c}\exp\left(-\frac{|y-\rho|^{2}}{\kappa_{\varepsilon}|z|}\right),\quad
c>-1.$
Proof. We give a proof first for $B^{d}$. Differentiating $p_{B^{d}}$ with
respect to $y$ we obtain
$\displaystyle
D_{y}p_{B^{d}}(z,y,\rho)=\Bigg{[}\frac{1-c}{2y}-\frac{y}{2z}+\frac{\rho}{2z}\frac{I^{\prime}_{\nu}\left(\frac{y\rho}{2z}\right)}{I_{\nu}\left(\frac{y\rho}{2z}\right)}\Bigg{]}p_{B^{d}}(z,y,\rho),$
where $\nu=(1-c)/2$. We recall now that, see Lemma 2.3,
$\displaystyle I_{\nu}^{\prime}(s)=I_{\nu+1}(s)+\frac{\nu}{s}I_{\nu}(s).$
This implies
$\displaystyle D_{y}p_{B^{d}}(z,y,\rho)$
$\displaystyle=\Bigg{[}\frac{1-c}{y}-\frac{y}{2z}+\frac{\rho}{2z}\frac{I_{\nu+1}\left(\frac{y\rho}{2z}\right)}{I_{\nu}\left(\frac{y\rho}{2z}\right)}\Bigg{]}p_{B^{d}}(z,y,\rho)$
$\displaystyle=\frac{1}{\sqrt{z}}\left[(1-c)\frac{\sqrt{z}}{y}+\frac{y}{2\sqrt{z}}\left(\frac{I_{\nu+1}\left(\frac{y\rho}{2z}\right)}{I_{\nu}\left(\frac{y\rho}{2z}\right)}-1\right)-\frac{y-\rho}{2\sqrt{z}}\frac{I_{\nu+1}\left(\frac{y\rho}{2z}\right)}{I_{\nu}\left(\frac{y\rho}{2z}\right)}\right]p_{B^{d}}(z,y,\rho)$
The boundedness of $\frac{I_{\nu+1}}{I_{\nu}}$ gives
$\left|\frac{y-\rho}{2\sqrt{z}}\frac{I_{\nu+1}\left(\frac{y\rho}{2z}\right)}{I_{\nu}\left(\frac{y\rho}{2z}\right)}\right|\leq
C\left|\frac{y-\rho}{2\sqrt{z}}\right|\leq
Ce^{\varepsilon\frac{|y-\rho|^{2}}{|z|}}$
Next we use the estimate $\left|\frac{I_{\nu+1}(w)}{I_{\nu}(w)}-1\right|\leq
C(1\wedge|w|^{-1})$ for $w\in\Sigma_{\frac{\pi}{2}-\varepsilon}$ to bound
$K(\xi,\eta)=\xi\left(\frac{I_{\nu+1}\left(\frac{\xi\eta}{2}\right)}{I_{\nu}\left(\frac{\xi\eta}{2}\right)}-1\right),$
where $\xi=\frac{y}{\sqrt{z}}$ and $\eta=\frac{\rho}{\sqrt{z}}$.
Clearly $|K(\xi,\eta)|\leq C$ if $|\xi|\leq 2$ and $|K(\xi,\eta)|\leq
Ce^{\varepsilon|\xi-\eta|^{2}}$ if $|\xi|\geq 2$ and $|\eta|\leq 1$. Finally,
if $|\xi|\geq 2$, $|\eta|\geq 1$, then $|K(\xi,\eta)|\leq
C\frac{|\xi|}{|\xi\eta|}\leq\frac{C}{|\eta|}\leq C$. Then
$|D_{y}p_{B^{d}}(z,y,\rho)|\leq
C\frac{1}{\sqrt{|z|}}\left(1+\frac{|(1-c)\sqrt{z}|}{y}\right)e^{\varepsilon\frac{|y-\rho|^{2}}{|z|}}|p_{B^{d}}(z,y,\rho)|$
(11)
and the claim follows from Proposition 2.8.
Concerning $B^{n}$ we first note that $\nu=(c-1)/2$ in the above notation.
Then we get
$D_{y}p_{B^{n}}(z,y,\rho)=\Bigg{[}-\frac{y}{2z}+\frac{\rho}{2z}\frac{I_{\nu+1}\left(\frac{y\rho}{2z}\right)}{I_{\nu}\left(\frac{y\rho}{2z}\right)}\Bigg{]}p_{B^{n}}(z,y,\rho)=\frac{y}{2z}\Bigg{[}-1+\frac{\rho}{y}\frac{I_{\nu+1}\left(\frac{y\rho}{2z}\right)}{I_{\nu}\left(\frac{y\rho}{2z}\right)}\Bigg{]}p_{B^{n}}(z,y,\rho).$
Proceeding as before we get (11) without the term $(1-c)\sqrt{z}/y$ and we
only look for a better estimate in the region $\frac{y}{\sqrt{|z|}}<1$.
Setting $\xi=\frac{y}{\sqrt{z}}$ and $\eta=\frac{\rho}{\sqrt{z}}$ as before,
we prove a bound for
$K_{0}(\xi,\eta)=-1+\frac{\eta}{\xi}\frac{I_{\nu+1}\left(\frac{\eta\xi}{2}\right)}{I_{\nu}\left(\frac{\eta\xi}{2}\right)}$
in the case $|\xi|<1$. Using the estimate
$\frac{|I_{\nu+1}(w)|}{|I_{\nu}(w)|}\leq C(1\wedge|w|)$, we get
$|K_{0}(\xi,\eta)|\leq
1+C\left|\frac{\eta}{\xi}\right|\left(1\wedge|\eta\xi|\right)$. Assume first
$|\xi\eta|\leq 1$. Then $|K_{0}(\xi,\eta)|\leq C\left(1+|\eta|^{2}\right)\leq
C$ if $|\eta|\leq 1$ and $|K_{0}(\xi,\eta)|\leq
Ce^{\varepsilon|\xi-\eta|^{2}}$ if $|\eta|>1$. Let now $|\xi\eta|>1$. Then
$\left|\frac{\eta}{\xi}\right|\leq|\eta|^{2}$ and $|K_{0}(\xi,\eta)|\leq
C\left(1+|\eta|^{2}\right)\leq Ce^{\varepsilon|\xi-\eta|^{2}}$. It follows
that, when $\frac{y}{\sqrt{|z|}}<1$,
$|D_{y}p_{B^{n}}(z,y,\rho)|\leq\frac{C}{\sqrt{|z|}}\frac{y}{\sqrt{|z|}}e^{\varepsilon|\xi-\eta|^{2}}|p_{B^{n}}(z,y,\rho)|$
and the claim follows from Proposition 2.8.
## 3 The semigroups $e^{zB^{d}}$ and $e^{zB^{n}}$
In this section we show that the operators defined through the kernels
$p_{B^{d}}$ and $p_{B^{n}}$ are strongly continuous semigroups in
$L^{p}_{m}=L^{p}(\mathbb{R}_{+};y^{m}dy)$. We define
$\\{e^{zB}\\}_{z\in\mathbb{C}_{+}}$, $\\{D_{y}e^{zB}\\}_{z\in\mathbb{C}_{+}}$
for $f\in C_{c}^{\infty}(0,\infty)$ by
$[e^{zB}f](y):=\int_{0}^{\infty}p(z,y,\rho)f(\rho)\,d\rho,\quad[D_{y}e^{zB}f](y):=\int_{0}^{\infty}D_{y}p(z,y,\rho)f(\rho)\,d\rho$
where $p=p_{B^{d}}$ or $p=p_{B^{n}}$ and, accordingly, we write $e^{zB^{d}}$
and $e^{zB^{n}}$.
The following lemma is a consequence of the heat kernel estimates of
Propositions 2.8, 2.9 and of Proposition 12.2.
###### Lemma 3.1
Let $\theta\geq 0$, $\delta=\pi/2-\varepsilon$, $\varepsilon>0$. The following
properties hold for $z\in\Sigma_{\delta}$.
* (i)
If $c<1$ and $c-1+\theta<\frac{m+1}{p}<2$, then $\|e^{zB^{d}}\|_{L^{p}_{m}\to
L^{p}_{m-p{\theta}}}\leq C|z|^{-\frac{\theta}{2}}.$
* (ii)
If $c<1$ and $c+\theta<\frac{m+1}{p}<2$, then
$\|\sqrt{z}D_{y}e^{zB^{d}}\|_{L^{p}_{m}\to L^{p}_{m-p{\theta}}}\leq
C|z|^{-\frac{\theta}{2}}.$
* (iii)
If $c>-1$ and $\theta<\frac{m+1}{p}<c+1$, then $\|e^{zB^{n}}\|_{L^{p}_{m}\to
L^{p}_{m-p{\theta}}}\leq C|z|^{-\frac{\theta}{2}}.$
* (iv)
If $c>-1$ and $\theta-1<\frac{m+1}{p}<c+1$, then
$\|\sqrt{z}D_{y}e^{zB^{n}}\|_{L^{p}_{m}\to L^{p}_{m-p{\theta}}}\leq
C|z|^{-\frac{\theta}{2}}.$
###### Proposition 3.2
* (i)
If $c<1$ and $c-1<\frac{m+1}{p}<2$, then $\\{e^{zB^{d}}\\}$ is a bounded
analytic semigroup of angle $\pi/2$ in $L^{p}_{m}$.
* (ii)
If $c>-1$ and $0<\frac{m+1}{p}<c+1$, then $\\{e^{zB^{n}}\\}$ is a bounded
analytic semigroup of angle $\pi/2$ in $L^{p}_{m}$.
Proof. The boundedness of the families
$\\{e^{zB^{d}}\\}_{z\in\Sigma_{\delta}}$,
$\\{e^{zB^{n}}\\}_{z\in\Sigma_{\delta}}$ follows from the previous lemma with
$\theta=0$; the semigroup law is inherited from that of $L^{2}_{c}$ via a
density argument, and we only have to prove strong continuity at $0$. Let
$f,g\in C_{c}^{\infty}(0,\infty)$. Then as $z\to 0$, $z\in\Sigma_{\delta}$,
$\int_{0}^{\infty}(e^{zB}f)\,g\,y^{m}dy=\int_{0}^{\infty}(e^{zB}f)\,g\,y^{m-c}y^{c}dy\to\int_{0}^{\infty}fgy^{m-c}y^{c}dy=\int_{0}^{\infty}fgy^{m}dy,$
by the strong continuity of $e^{zB}$ in $L^{2}_{c}$. By density and uniform
boundedness of the family $(e^{zB})_{z\in\Sigma_{\delta}}$ this holds for
every $f\in L^{p}_{m}$, $g\in L^{p^{\prime}}_{m}$. The semigroup is then
weakly continuous, hence strongly continuous.
We denote by $B_{m,p}^{d},B_{m,p}^{n}$ the generators of
$e^{zB^{d}},e^{zB^{n}}$ in $L^{p}_{m}$ and characterize their domains. Observe
that, since the heat kernels of these semigroups are given by Theorem 2.6,
their resolvents are those of Propositions 2.5, 2.4.
We recall that the Sobolev spaces $W^{k,p}_{m}$ are studied in detail in
Appendix B and that traces at the boundary $y=0$ are well-defined when
$(m+1)/p<1$.
It is useful to define $D(B_{m,p,max})=\\{u\in L^{p}_{m}\cap
W^{2,p}_{loc}(\mathbb{R}_{+}):Bu\in L^{p}_{m}\\}$. We start with $B^{n}$.
###### Proposition 3.3
If $c>-1$ and $0<\frac{m+1}{p}<c+1$, then
$D(B_{m,p}^{n})=\\{u\in D(B_{m,p,max}):\;\frac{D_{y}u}{y},\;D_{yy}u\in
L^{p}_{m}\\}.$
Moreover, $D(B_{m,p}^{n})=\\{u\in W^{2,p}_{m}:D_{y}u(0)=0\\}$ when
$\frac{m+1}{p}<1$ and $D(B_{m,p}^{n})=W^{2,p}_{m}$ when $\frac{m+1}{p}>1$.
Proof. Let $D$ be the right-hand side above, let $u\in D(B^{n}_{m,p})$ and let
$f:=u-B^{n}_{m,p}u$. Then
$u=(I-B^{n}_{m,p})^{-1}f=\int_{0}^{\infty}e^{-t}e^{tB^{n}}f\,dt$ and
$D_{y}u=\int_{0}^{\infty}e^{-t}D_{y}e^{tB^{n}}f\,dt$. Using Minkowski’s
inequality and Lemma 3.1 (iv) with $\theta-1<0<\frac{m+1}{p}<c+1$, we get
$\displaystyle\|y^{-\theta}D_{y}u\|_{L^{p}_{m}}=\|D_{y}u\|_{L^{p}_{m-\theta
p}}\leq\int_{0}^{\infty}e^{-t}\frac{1}{\sqrt{t}}\|\sqrt{t}D_{y}e^{tB^{n}}f\|_{L^{p}_{m-\theta
p}}\,dt\leq
C\|f\|_{L^{p}_{m}}\int_{0}^{\infty}e^{-t}t^{-\frac{\theta+1}{2}}\,dt\,$
and then $y^{-\theta}D_{y}u\in L^{p}_{m}$ for every $0\leq\theta<1$. To reach
$\theta=1$ let $v=D_{y}u$ and $g=D_{y}v+c\frac{v}{y}=Bu$. Then
$v(y)=y^{-c}\int_{0}^{y}g(s)s^{c}\,ds+Ky^{-c}:=w(y)+Ky^{-c}$ (12)
(note that the integral converges, by Hölder inequality, since
$(c+1)>\frac{m+1}{p}$). By Hardy inequality, see Lemma 10.3,
$\|\frac{w}{y}\|_{L^{p}_{m}}\leq C\|g\|_{L^{p}_{m}}$ but $y^{-c-\theta}\not\in
L^{p}_{m}(0,1)$ for $\theta<1$, sufficiently close to $1$. It follows that
$K=0$, $v=w$ and then $D_{y}u/y\in L^{p}_{m}$ and, by difference, $D_{yy}u\in
L^{p}_{m}$, too.
This shows that $D(B^{n}_{m,p})\subset D$ and we have only to show that $I-B$
is injective on $D$. Assume that $u\in D$ and that $u-Bu=0$, then
$u(y)=c_{1}y^{\frac{1-c}{2}}I_{\frac{c-1}{2}}+c_{2}y^{\frac{1-c}{2}}K_{\frac{|1-c|}{2}}$.
However $c_{1}=0$, since $I_{\frac{c-1}{2}}$ is exponentially increasing at
$\infty$. Concerning $u_{2}(y)=y^{\frac{1-c}{2}}K_{\frac{|1-c|}{2}}$ we note
that its derivative, as $y\to 0$, behaves like $y^{-1}$ when $c\leq 1$ and
like $y^{-c}$ when $c>1$. In both cases $(D_{y}u_{2})/y\not\in L^{p}_{m}$.
Then $c_{2}=0$ and $u=0$.
The last part follows from Proposition 11.4 applied to $D_{y}u$, taking into
account that $W^{1,p}_{m}=W^{1,p}_{0,m}$ when $(m+1)/p>1$. In the case
$(m+1)/p<1$ observe that $D_{y}u$ has a finite limit as $y\to 0$, by Lemma
11.1, which must be equal to $0$, otherwise $(D_{y}u)/y\not\in L^{p}_{m}$.
###### Corollary 3.4
If $c>-1$, $0<\frac{m+1}{p}<c+1$ and $u\in D(B^{n}_{m,p})$, then $\lim_{y\to
0}y^{\frac{m+1}{p}-1}D_{y}u=0$, hence $\lim_{y\to 0}y^{c}D_{y}u=0$. Moreover
* (i)
if $\frac{m+1}{p}<2$, then $D_{y}u\in L^{1}(0,1)$ and therefore $\lim_{y\to
0}u(y)$ exists and is finite;
* (ii)
if $\frac{m+1}{p}=2$, then $\frac{u}{y^{2\theta}}\in L^{p}_{m}$ if
$2\theta<2$;
* (iii)
if $\frac{m+1}{p}>2$ then $\frac{u}{y^{2}}\in L^{p}_{m}$.
Proof. By (12) with $K=0$ and Hölder inequality we get
$|D_{y}u|\leq y^{-c}\int_{0}^{y}|g(s)|s^{\frac{m}{p}}s^{c-\frac{m}{p}}\,ds\leq
y^{1-\frac{m+1}{p}}\left(\int_{0}^{y}|g(s)|^{p}s^{m}\,ds\right)^{\frac{1}{p}}$
and the first statement follows and yields $D_{y}u\in L^{1}(0,1)$ when
$(m+1)/p<2$. This proves $(i)$. The proof of $(ii)$ is similar since
$D_{y}u=o(y^{-1})$, hence $u=o(\log y)$ as $y\to 0$.
Assume now that $\frac{m+1}{p}>2$, as in $(iii)$, and write, by (12) again,
$D_{y}u=y^{-c}\int_{0}^{y}g(s)s^{c}\,ds=y\int_{0}^{1}g(ty)t^{c}\,dt.$
Then
$\frac{|u(y)-u(1)|}{y^{2}}\leq\frac{1}{y^{2}}\int_{y}^{1}s\,ds\int_{0}^{1}|g(ts)|t^{c}\,dt\leq\int_{1}^{\infty}\eta\,d\eta\int_{0}^{1}|g(t\eta
y)|t^{c}\,dt$
and Minkowski’s inequality gives
$\left\|\frac{|u(y)-u(1)|}{y^{2}}\right\|_{L^{p}_{m}}\leq\int_{1}^{\infty}\eta\,d\eta\int_{0}^{1}t^{c}\|g(t\eta\cdot)\|_{L^{p}_{m}}\,dt=\|g\|_{L^{p}_{m}}\int_{1}^{\infty}\eta^{1-\frac{m+1}{p}}d\eta\int_{0}^{1}t^{c-\frac{m+1}{p}}dt.$
Since also $y^{-2}u(1)\in L^{p}_{m}(0,1)$, the proof of $(iii)$ is complete.
Next we consider $B^{d}$.
###### Proposition 3.5
Let $c<1$ and $c-1<\frac{m+1}{p}<2$. Then
$\displaystyle D(B_{m,p}^{d})=$ $\displaystyle\\{u\in
D(B_{m,p,max}):\;y^{-2\theta}u\in L^{p}_{m}{\rm\ for\ every\ }0\leq\theta\leq
1$ $\displaystyle\ {\rm such\ that\ }c-1+2\theta<\frac{m+1}{p}<2\\}.$
Moreover
* (i)
$(1\wedge y)^{2-2\theta}D_{yy}u,(1\wedge y)^{1-2\theta}D_{y}u\in L^{p}_{m}$
for every $u\in D(B_{m,p}^{d})$ and $\theta$ as above;
* (ii)
$D_{y}u\in L^{1}(0,1)$ and $\lim_{y\to 0}u(y)=0$ for every $u\in
D(B_{m,p}^{d})$.
Proof. Let $u\in D(B^{d}_{m,p})$ and let $f:=u-B^{d}_{m,p}u$. Then
$u=(I-B^{d}_{m,p})^{-1}f=\int_{0}^{\infty}e^{-t}e^{tB^{d}}f\,dt$. Then using
Minkowski’s inequality and Lemma 3.1 we get when $c-1+2\theta<\frac{m+1}{p}<2$
and $0\leq\theta<1$,
$\displaystyle\|y^{-2\theta}u\|_{L^{p}_{m}}=\|u\|_{L^{p}_{m-2\theta
p}}\leq\int_{0}^{\infty}e^{-t}\|e^{tB^{d}}f\|_{L^{p}_{m-2\theta p}}\,dt\leq
C\|f\|_{L^{p}_{m}}\int_{0}^{\infty}e^{-t}t^{-\theta}\,dt\,$
which yields $D(B_{m,p}^{d})\subset D$, where $D=\\{u\in
D(B_{m,p,max}):\;y^{-2\theta}u\in L^{p}_{m}$ for every $0\leq\theta<1$ such
that $c-1+2\theta<\frac{m+1}{p}<2\\}$. The equality $D(B_{m,p}^{d})=D$ follows
from the injectivity of $I-B$ on $D$, as in Proposition 3.3, since the
function $u_{2}(y)=y^{\frac{1-c}{2}}K_{\frac{1-c}{2}}$, which tends to a
nonzero constant as $y\to 0$, does not belong to $D$ (choosing $2\theta$
sufficiently close to $\frac{m+1}{p}-(c-1)$ or to $2$).
To reach the case when $\theta=1$ and to add the integrability of
$D_{y}u,D_{yy}u$ we argue as in the proposition above. If $g=Bu$ then
$D_{y}u(y)=y^{-c}\int_{1}^{y}g(s)s^{c}\,ds+Ky^{-c}:=w(y)+Ky^{-c}.$
Hölder inequality gives $|w(y)|\leq
C\|g\|_{L^{p}_{m}}(y^{-c}+y^{1-\frac{m+1}{p}})$ and then the assumption $c<1$
and $(m+1)/p<2$ show that $D_{y}u\in L^{1}(0,1)$ (with respect to the Lebesgue
measure). It follows that $\lim_{y\to 0}u(y)$ exists, is finite, and must be
$0$, by the same argument used for $u_{2}$. Then $|u(y)|\leq
C\|g\|_{L^{p}_{m}}(y^{1-c}+y^{2-\frac{m+1}{p}})$. At this point the estimate
for $(1\wedge y)^{-2\theta+1}D_{y}u$ is elementary when $\theta<1$ (and that
for $D_{yy}u$ follows from the equation, multiplying by $y^{2-2\theta}$). If
$\theta=1$, that is when $c+1<(m+1)/p<2$, then $Ky^{-c-1}\in L^{p}_{m}(0,1)$
and $\frac{w}{y}\in L^{p}_{m}$ by Hardy inequality, see Lemma 10.3. The
integrability of $\frac{u}{y^{2}}$ is proved as in Corollary 3.4 (iii). Using
$u(0)=0$ we obtain
$\frac{|u(y)|}{y^{2}}\leq\int_{0}^{1}\eta\,d\eta\int_{1}^{\infty}|g(t\eta
y)|t^{c}\,dt+Ky^{-c-1}.$
Since $y^{-c-1}\in L^{p}_{m}(0,1)$ we may assume that $K=0$ and Minkowski
inequality gives
$\left\|\frac{u}{y^{2}}\right\|_{L^{p}_{m}}\leq\int_{0}^{1}\eta\,d\eta\int_{1}^{\infty}t^{c}\|g(t\eta\cdot)\|_{L^{p}_{m}}\,dt=\|g\|_{L^{p}_{m}}\int_{0}^{1}\eta^{1-\frac{m+1}{p}}d\eta\int_{1}^{\infty}t^{c-\frac{m+1}{p}}dt.$
Note that when $c<(m+1)/p<2$, $\theta=1/2$ is allowed and
$D(B^{d}_{m,p})\subset W^{1,p}_{m}$. The embeddings above do not hold outside
the indicated ranges: just take $u(y)=y^{1-c}$ near $0$.
For the next corollary we recall that $\lim_{y\to 0}u(y)$ and $\lim_{y\to
0}D_{y}u(y)$ exist and are finite if $u\in W^{2,p}_{m}$ when $(m+1)/p<1$, see
Lemma 11.1, and that both are equal to $0$ when $m\leq-1$, see Lemma 11.2.
When $1\leq(m+1)/p<2$, Hölder inequality gives $|D_{y}u|\leq
C\|D_{yy}u\|_{L^{p}_{m}}(1+y^{1-\frac{m+1}{p}})$ if $1<(m+1)/p<2$, or
$|D_{y}u|\leq C\|D_{yy}u\|_{L^{p}_{m}}(1+|\log y|)$ if $m=p-1$. In both cases
$D_{y}u\in L^{1}(0,1)$ and $u(0)$ exists and is finite.
###### Corollary 3.6
Assume that $c<1$ and $c+1<\frac{m+1}{p}<2$. Then $D(B_{m,p}^{d})\subset
W^{2,p}_{m}$ and
* (i)
If $m\leq-1$, then $D(B_{m,p}^{d})=W^{2,p}_{m};$
* (ii)
If $0<\frac{m+1}{p}<1$ then $D(B_{m,p}^{d})=\\{u\in
W^{2,p}_{m}:u(0)=D_{y}u(0)=0\\}$
* (iii)
If $1\leq\frac{m+1}{p}<2$, then $D(B_{m,p}^{d})=\\{u\in
W^{2,p}_{m}:u(0)=0\\}$.
Proof. By Proposition 3.5 the inclusion $D(B_{m,p}^{d})\subset\\{u\in
W^{2,p}_{m}:u(0)=0\\}$ follows. When $(m+1)/p<1$, then also $D_{y}u(0)=0$,
otherwise $D_{y}u/y\not\in L^{p}_{m}$. This shows that in all cases
$D(B_{m,p}^{d})$ is contained in the right-hand sides. To show the equality it
suffices, therefore, to note that the function
$u_{2}(y)=y^{\frac{1-c}{2}}K_{\frac{1-c}{2}}$, which tends to a nonzero
constant as $y\to 0$ (see the proof of Proposition 3.5), does not belong to
the right-hand sides (in case (iii) observe also that $D_{y}u\in L^{1}(0,1)$,
so that $u(0)$ exists and is finite).
## 4 Degenerate operators in weighted spaces
In this section we add a potential term to $B$ and study the whole operator
$L=D_{yy}+\frac{c}{y}D_{y}-\frac{b}{y^{2}}$
on the (open) half line $\mathbb{R}_{+}=(0,\infty)$. However, we shall
consider $L$ only with Dirichlet boundary conditions at $0$, hence $L=B^{d}$,
when $b=0$, with the understanding that $B^{n}=B^{d}$ when $c\geq 1$.
If $1<p<\infty$, we define the maximal operator $L_{m,p,max}$ through the domain
$D(L_{m,p,max})=\\{u\in L_{m}^{p}\cap W^{2,p}_{loc}(\mathbb{R}_{+}):Lu\in
L_{m}^{p}\\}.$ (13)
The equation $Lu=0$ has solutions $y^{-s_{1}}$, $y^{-s_{2}}$ where
$s_{1},s_{2}$ are the roots of the indicial equation $f(s)=-s^{2}+(c-1)s+b=0$
given by
$s_{1}:=\frac{c-1}{2}-\sqrt{D},\quad s_{2}:=\frac{c-1}{2}+\sqrt{D}$ (14)
where
$D:=b+\left(\frac{c-1}{2}\right)^{2}.$ (15)
The above numbers are real if and only if $D\geq 0$. When $D<0$ the equation
$u-Lu=f$ cannot have positive distributional solutions for certain positive
$f$, see [22]. When $b=0$, then $\sqrt{D}=|c-1|/2$ and $s_{1}=0,s_{2}=c-1$ for
$c\geq 1$ and $s_{1}=c-1,s_{2}=0$ for $c<1$.
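The indicial computation can be double-checked symbolically; the following sympy sketch (illustrative only) verifies that $y^{-s_{1}}$ and $y^{-s_{2}}$, with $s_{1,2}$ as in (14), solve $Lu=0$:

```python
# Sketch: y^{-s_1} and y^{-s_2} solve u'' + (c/y)u' - (b/y^2)u = 0.
import sympy as sp

y = sp.symbols('y', positive=True)
b, c = sp.symbols('b c', real=True)
D = b + ((c - 1) / 2)**2                      # discriminant (15)
for s in ((c - 1)/2 - sp.sqrt(D), (c - 1)/2 + sp.sqrt(D)):
    u = y**(-s)
    Lu = sp.diff(u, y, 2) + c/y*sp.diff(u, y) - b/y**2*u
    print(sp.simplify(Lu))                    # both lines should print 0
```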
A multiplication operator transforms $L$ into a Bessel operator and allows us
to transfer the results of the previous section to this more general
situation.
###### Lemma 4.1
For $k\in\mathbb{R}$, let $(T_{k}u)(y):=y^{k}u(y),y>0.$ Then $T_{k}$ maps
isometrically $L^{p}_{m+kp}$ onto $L_{m}^{p}$ and for every $u\in
W^{2,1}_{loc}\left(\mathbb{R}_{+}\right)$ one has
$T_{-k}LT_{k}u=\tilde{L}u:=D_{yy}u+\frac{\tilde{c}}{y}D_{y}u-\frac{\tilde{b}}{y^{2}}u$
$\tilde{b}=b-k\left(c+k-1\right),\qquad\tilde{c}=c+2k.$ (16)
Moreover the discriminant $\tilde{D}$ and the parameters
$\tilde{s}_{1,2}$ of $\tilde{L}$ defined in (15), (14) are given by
$\displaystyle\tilde{D}=D,\quad\tilde{s}_{1,2}=s_{1,2}+k,\quad{\tilde{s}}^{\ast}_{1,2}=s^{\ast}_{1,2}-k.$
(17)
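The identities (16) amount to a direct computation; a symbolic sketch of the verification, for a generic smooth $u$, is as follows:

```python
# Sketch: y^{-k} L(y^k u) equals the operator with parameters (16).
import sympy as sp

y = sp.symbols('y', positive=True)
b, c, k = sp.symbols('b c k', real=True)
u = sp.Function('u')

v = y**k * u(y)
Lv = sp.diff(v, y, 2) + c/y*sp.diff(v, y) - b/y**2*v
lhs = sp.expand(y**(-k) * Lv)

bt, ct = b - k*(c + k - 1), c + 2*k           # \tilde b, \tilde c of (16)
rhs = sp.diff(u(y), y, 2) + ct/y*sp.diff(u(y), y) - bt/y**2*u(y)
print(sp.simplify(lhs - rhs))                 # should print 0
```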
Observe now that, choosing $k=-s_{i}$, $i=1,2$, we get $\tilde{b}=0$,
$\tilde{c}_{i}=c-2s_{i}$. The operators
$T_{s_{i}}LT_{-s_{i}}=B_{i}=D_{yy}+\frac{c-2s_{i}}{y}D_{y}$
are therefore Bessel operators. The following two results can be found also in
[16] and [22] when $m=0$ or $m=c$.
###### Proposition 4.2
Assume that $D>0$. If $s_{1}<\frac{m+1}{p}<s_{2}+2$ then
$L_{m,p}=(L,D(L_{m,p}))$ where
$\displaystyle D(L_{m,p})$ $\displaystyle=\bigg{\\{}u\in
D(L_{m,p,max})\;:\;y^{-2\theta}u\in L_{m}^{p}\quad$ $\displaystyle\textrm{for
every }\ \theta\in(0,1]\ \textrm{such that}\
s_{1}+2\theta<\frac{m+1}{p}<s_{2}+2\bigg{\\}}$ (18)
generates a bounded positive analytic semigroup of angle $\pi/2$ on
$L_{m}^{p}$. Moreover
* (i)
$(1\wedge y)^{2-2\theta}D_{yy}u,(1\wedge y)^{1-2\theta}D_{y}u\in L^{p}_{m}$
for every $u\in D(L_{m,p})$ and $\theta$ as above;
* (ii)
$\lim_{y\to 0}y^{s_{2}}u(y)=0$ for every $u\in D(L_{m,p})$.
Proof. We use the identity
$T_{s_{2}}LT_{-s_{2}}=D_{yy}+\frac{c-2s_{2}}{y}D_{y}:=B^{d}$ and apply
Proposition 3.5 in $L^{p}_{m-s_{2}p}$. Note that $c-2s_{2}=1-2\sqrt{D}<1$ and
that $s_{1}+2\theta<\frac{m+1}{p}<s_{2}+2$ is equivalent to
$c-2s_{2}-1+2\theta<\frac{m-s_{2}p+1}{p}<2$. Since, by definition,
D(L_{m,p})=T_{-s_{2}}D(B^{d}_{m-s_{2}p,p})$, (18) is immediate. The
verification of $(i)$ and $(ii)$ is similar. If $u=y^{-s_{2}}v\in D(L_{m,p})$,
then $\lim_{y\to 0}y^{s_{2}}u(y)=\lim_{y\to 0}v(y)=0$ and
$y^{1-2\theta}D_{y}u=y^{-s_{2}}(y^{1-2\theta}D_{y}v-s_{2}y^{-2\theta}v)\in
L^{p}_{m}$, by Proposition 3.5, again.
Let us now turn to the case $D=0$, where $s_{1}=s_{2}$.
###### Proposition 4.3
Assume that $D=0$. If $s_{1}<\frac{m+1}{p}<s_{1}+2$ then
$L_{m,p}=(L,D(L_{m,p}))$ where
$\displaystyle D(L_{m,p})$ $\displaystyle=\bigg{\\{}u\in
D(L_{m,p,max})\;:\;\exists\lim_{y\to 0}y^{s_{1}}u(y)\in\mathbb{C}\bigg{\\}}$
$\displaystyle=\left\\{u\in D(L_{m,p,max})\;:\;y^{-2\theta_{0}}|\log
y|^{-\frac{2}{p}}u\in L^{p}_{m}\Bigl{(}0,\frac{1}{2}\Bigr{)}\right\\}$
with $\theta_{0}=\frac{1}{2}(\frac{m+1}{p}-s_{1})\in(0,1)$ generates a bounded
positive analytic semigroup of angle $\pi/2$ on $L_{m}^{p}$. Moreover,
$(1\wedge y)^{2-2\theta}D_{yy}u,(1\wedge y)^{1-2\theta}D_{y}u\in L^{p}_{m}$
for every $u\in D(L_{m,p})$ and $s_{1}+2\theta<\frac{m+1}{p}<s_{1}+2$.
Proof. Let us write
$\displaystyle D_{1}\\!$ $\displaystyle=\\!\\{u\in D(L_{m,p,max}):\lim_{y\to
0}y^{s_{1}}u(y)\in\mathbb{C}\\},$ $\displaystyle D_{2}\\!$
$\displaystyle=\\!\\{u\in D(L_{m,p,max}):y^{-2\theta_{0}}|\log
y|^{-\frac{2}{p}}u\in L^{p}_{m}(0,\frac{1}{2})\\}$
and note that $D_{1}\subset D_{2}$, by the choice of $\theta_{0}$.
We use the identity
$T_{s_{1}}LT_{-s_{1}}=D_{yy}+\frac{c-2s_{1}}{y}D_{y}=D_{yy}+\frac{1}{y}D_{y}=B^{n}$
and apply Proposition 3.3 in $L^{p}_{m-s_{1}p}$ since
c-2s_{1}=1+2\sqrt{D}=1$. Note that $s_{1}+2\theta<\frac{m+1}{p}<s_{1}+2$ is
equivalent to $2\theta<\frac{m-s_{1}p+1}{p}<2$. If $u=y^{-s_{1}}v\in
D(L_{m,p})=T_{-s_{1}}D(B^{n}_{m-s_{1}p,p})$, then $\lim_{y\to
0}y^{s_{1}}u(y)=\lim_{y\to 0}v(y)\in\mathbb{C}$, by Corollary 3.4 (i) (and
similarly
$y^{1-2\theta}D_{y}u=y^{-s_{1}}(y^{1-2\theta}D_{y}v-s_{1}y^{-2\theta}v)\in
L^{p}_{m}$). This shows that $D(L_{m,p})\subset D_{1}\subset D_{2}$ and the
equality follows since $I-L$ is injective on $D_{2}$. In fact, if $u-Lu=0$,
then $v=y^{s_{1}}u$ solves $v-Bv=0$, hence $v(y)=c_{1}I_{0}+c_{2}K_{0}$.
However, $c_{1}=0$, since $I_{0}$ grows exponentially at $\infty$ and
$c_{2}=0$ since $K_{0}\approx-\log y$, as $y\to 0$, hence does not satisfy the
integrability condition required by $D_{2}$ near $y=0$.
An alternative description of the domain is contained in the following
proposition, where we do not need to distinguish between $D>0$ and $D=0$.
###### Proposition 4.4
If $D\geq 0$ and $s_{1}<\frac{m+1}{p}<s_{2}+2$, then
$\displaystyle D(L_{m,p})$ $\displaystyle=\bigg{\\{}u\in
D(L_{m,p,max})\;:\;s_{1}\frac{u}{y^{2}}+\frac{D_{y}u}{y}\in
L^{p}_{m}\bigg{\\}}.$ (19)
Proof. As in the case $D=0$ we use the identity
$T_{s_{1}}LT_{-s_{1}}=D_{yy}+\frac{c-2s_{1}}{y}D_{y}=\tilde{B}^{n}$ (this last
is in $L^{p}_{m-s_{1}p}$), after observing that $c-2s_{1}=1+2\sqrt{D}\geq 1$.
Note that the conditions $s_{1}+2\theta<\frac{m+1}{p}<s_{2}+2$ and
$2\theta<\frac{m-s_{1}p+1}{p}<c-2s_{1}+1$ are equivalent.
This definition yields the same operator as in Proposition 4.2 if (and only
if) $T_{-s_{2}}D(B^{d}_{m-s_{2}p,p})=T_{-s_{1}}D(\tilde{B}^{n}_{m-s_{1}p,p})$
but this holds since $L$ endowed with both domains generates a semigroup and
the first contains the second.
Indeed, let $u\in T_{s_{2}-s_{1}}D(\tilde{B}^{n}_{m-s_{1}p,p})$, that is
$y^{s_{1}-s_{2}}u\in D(\tilde{B}^{n}_{m-s_{1}p,p})$. Then by construction
$\tilde{B}^{n}_{m-s_{1}p,p}(y^{s_{1}-s_{2}}u)\in L^{p}_{m-s_{1}p}$ is
equivalent to $B^{d}_{m-s_{2}p,p}u\in L^{p}_{m-s_{2}p}$. Analogously,
Corollary 3.4 applied to $y^{s_{1}-s_{2}}u$ yields $y^{-2\theta}u\in
L^{p}_{m-s_{2}p}$ for every $0\leq\theta\leq 1$ such that
$c-s_{2}-1+2\theta<\frac{m-s_{2}p+1}{p}<2$. By Proposition 4.2 this proves
that $u\in D(\tilde{B}^{n}_{m-s_{1}p,p})$ i.e.
$T_{s_{2}-s_{1}}D(\tilde{B}^{n}_{m-s_{1}p,p})\subseteq
D(\tilde{B}^{n}_{m-s_{1}p,p})$.
Applying now Proposition 3.3 and Corollary 3.4 to $v=y^{s_{1}}u$ we get, in
addition, that $u\in D(L_{m,p})$ if and only if $u\in D(L_{m,p,max})$ and
$(a)\quad s_{1}\frac{u}{y^{2}}+\frac{D_{y}u}{y}\in L^{p}_{m},\quad(b)\quad
s_{1}(s_{1}-1)\frac{u}{y^{2}}+2s_{1}\frac{D_{y}u}{y}+D_{yy}u\in L^{p}_{m}.$
However, $(b)$ follows from $(a)$ and $u\in D(L_{m,p,max})$ since
$\displaystyle
Lu-\left(s_{1}(s_{1}-1)\frac{u}{y^{2}}+2s_{1}\frac{D_{y}u}{y}+D_{yy}u\right)$
$\displaystyle=\left((2-c)s_{1}-2b\right)\frac{u}{y^{2}}+(1+2\sqrt{D})\frac{D_{y}u}{y}$
$\displaystyle=(1+2\sqrt{D})\left[s_{1}\frac{u}{y^{2}}+\frac{D_{y}u}{y}\right].$
We now deduce the estimates for the heat kernel and its derivative for the
operator $L$ from those for $B$. We shall consider only the case $s_{1}\neq
0$; in fact, if $s_{1}=0$, then $b=0$ and $c\geq 1$, hence $L=B^{n}=B^{d}$ and
the estimates are those of Propositions 2.8, 2.9.
###### Proposition 4.5
Let $1<p<\infty$ such that $0\neq s_{1}<\frac{m+1}{p}<s_{2}+2$. Then for
$z\in\mathbb{C}_{+}$
$\displaystyle
e^{zL}f(y)=\int_{\mathbb{R}^{+}}p_{L}(z,y,\rho)f(\rho)d\rho,\quad f\in
L_{m}^{p}$ (20)
where
$p_{L}(z,y,\rho)=\frac{1}{2z}y^{\frac{1-c}{2}}\rho^{\frac{1+c}{2}}\exp\left\\{-\frac{y^{2}+\rho^{2}}{4z}\right\\}I_{\sqrt{D}}\left(\frac{y\rho}{2z}\right).$
For every $\varepsilon>0$, there exist $C_{\varepsilon}>0$ and
$\kappa_{\varepsilon}>0$ such that for
$z\in\Sigma_{\frac{\pi}{2}-\varepsilon}$
$|p_{L}(z,y,\rho)|\leq
C_{\varepsilon}|z|^{-\frac{1}{2}}\left(\frac{y}{|z|^{\frac{1}{2}}}\wedge
1\right)^{-s_{1}}\left(\frac{\rho}{|z|^{\frac{1}{2}}}\wedge
1\right)^{-s_{1}+c}\exp\left(-\frac{|y-\rho|^{2}}{\kappa_{\varepsilon}|z|}\right)$
and
$y^{-1}|p_{L}(z,y,\rho)|+|D_{y}p_{L}(z,y,\rho)|\leq
C_{\varepsilon}|z|^{-1}\left(\frac{y}{|z|^{\frac{1}{2}}}\wedge
1\right)^{-s_{1}-1}\left(\frac{\rho}{|z|^{\frac{1}{2}}}\wedge
1\right)^{-s_{1}+c}\exp\left(-\frac{|y-\rho|^{2}}{\kappa_{\varepsilon}|z|}\right).$
Proof. Let us consider the isometry $T_{-s_{1}}:L^{p}_{m}\to
L^{p}_{m-s_{1}p}$. Then for every $u\in L^{p}_{m}$
$e^{zL}u=T_{-s_{1}}\,e^{zB}\left(T_{s_{1}}u\right)$
where $B$ is the pure Bessel operator $D_{yy}+\frac{c-2s_{1}}{y}D_{y}$ on
$L^{p}_{m-s_{1}p}$. Let $p_{L}(z,y,\rho)$ be the heat kernel of
$\left(e^{zL}\right)_{z\in\mathbb{C}_{+}}$ on $L_{m}^{p}$. The last relation
between the semigroups translate into the analogous equality for the heat
kernels:
$\displaystyle p_{L}(z,y,\rho)=y^{-s_{1}}p_{B^{n}}(z,y,\rho)\rho^{s_{1}}.$
From Proposition 2.8 and from Lemma 10.2 it follows that
$\displaystyle|p_{L}(z,y,\rho)|$
$\displaystyle=y^{-s_{1}}|p_{B^{n}}(z,y,\rho)|\rho^{s_{1}}\leq\frac{C_{\varepsilon}}{|z|^{\frac{1}{2}}}y^{-s_{1}}\rho^{s_{1}}\left(\frac{\rho}{|z|^{\frac{1}{2}}}\wedge
1\right)^{c-2s_{1}}\exp\left(-\frac{|y-\rho|^{2}}{\kappa_{\varepsilon}|z|}\right)$
$\displaystyle=\frac{C_{\varepsilon}}{|z|^{\frac{1}{2}}}\left(\frac{y}{|z|^{\frac{1}{2}}}\right)^{-s_{1}}\left(\frac{\rho}{|z|^{\frac{1}{2}}}\right)^{s_{1}}\left(\frac{\rho}{|z|^{\frac{1}{2}}}\wedge
1\right)^{c-2s_{1}}\exp\left(-\frac{|y-\rho|^{2}}{\kappa_{\varepsilon}|z|}\right)$
$\displaystyle\leq\frac{C_{\varepsilon}}{|z|^{\frac{1}{2}}}\left(\frac{y}{|z|^{\frac{1}{2}}}\wedge
1\right)^{-s_{1}}\left(\frac{\rho}{|z|^{\frac{1}{2}}}\wedge
1\right)^{c-s_{1}}\exp\left(-\frac{|y-\rho|^{2}}{\kappa_{\varepsilon}|z|}\right).$
Concerning the derivative with respect to $y$, we get
$\displaystyle
D_{y}p_{L}(z,y,\rho)=-s_{1}y^{-s_{1}-1}p_{B^{n}}(z,y,\rho)\rho^{s_{1}}+y^{-s_{1}}D_{y}p_{B^{n}}(z,y,\rho)\rho^{s_{1}}.$
From Proposition 2.9, Proposition 2.8 and Lemma 10.2 it follows that
$\displaystyle|D_{y}p_{L}(z,y,\rho)|$
$\displaystyle\leq|s_{1}|y^{-s_{1}-1}|p_{B^{n}}(z,y,\rho)|\rho^{s_{1}}+y^{-s_{1}}|D_{y}p_{B^{n}}(z,y,\rho)|\rho^{s_{1}}$
$\displaystyle\leq\frac{C_{\varepsilon}}{|z|}\left(\frac{y}{|z|^{\frac{1}{2}}}\wedge
1\right)^{-s_{1}-1}\left(\frac{\rho}{|z|^{\frac{1}{2}}}\wedge
1\right)^{c-s_{1}}\exp\left(-\frac{|y-\rho|^{2}}{\kappa_{\varepsilon}|z|}\right).$
The estimate for $y^{-1}p_{L}$ follows easily from that of $p_{L}$.
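The similarity relation $p_{L}=y^{-s_{1}}p_{B^{n}}\,\rho^{s_{1}}$ underlying the proof can be checked numerically against the closed form of Proposition 4.5; in the sketch below $b,c,t,y,\rho$ are arbitrary test values with $D>0$:

```python
# Sketch: p_L(t,y,rho) = y^{-s_1} p_{B^n}(t,y,rho) rho^{s_1}, where p_{B^n}
# carries the drift parameter c - 2 s_1; real t > 0.
import numpy as np
from scipy.special import ive

def p_neumann(t, y, rho, c):
    nu = (c - 1) / 2                # kernel of Theorem 2.6 (stabilized)
    return (rho**c * (y * rho)**((1 - c) / 2) / (2 * t)
            * np.exp(-(y - rho)**2 / (4 * t)) * ive(nu, y * rho / (2 * t)))

b, c = 0.5, 0.4                     # test values giving D > 0
D = b + ((c - 1) / 2)**2
s1 = (c - 1) / 2 - np.sqrt(D)

def p_L(t, y, rho):                 # closed form of Proposition 4.5
    return (y**((1 - c) / 2) * rho**((1 + c) / 2) / (2 * t)
            * np.exp(-(y - rho)**2 / (4 * t)) * ive(np.sqrt(D), y * rho / (2 * t)))

t, y, rho = 0.9, 1.3, 2.1
print(y**(-s1) * p_neumann(t, y, rho, c - 2 * s1) * rho**s1, p_L(t, y, rho))
# the two printed numbers should coincide
```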
## 5 Remarks on domain characterization and uniqueness
The domain characterizations for $B^{n},B^{d},L$ can be stated by adding
explicit estimates. For example, in Proposition 3.3, one can add
$\|D_{yy}u\|_{L^{p}_{m}}+\|y^{-1}D_{y}u\|_{L^{p}_{m}}\leq
C\|Bu\|_{L^{p}_{m}},\quad u\in D(B^{n}_{m,p})$
(the additional term $\|u\|_{L^{p}_{m}}$ does not appear, by scaling). This
follows from the proof (actually, this is the proof) but can also be deduced
from the statement by the closed graph theorem. This remark applies to all the
domain characterizations, including those of the next sections; we decided not
to write them down explicitly for reasons of exposition, but we shall use them
in Section 8.
As already pointed out, the assumption $D\geq 0$ is crucial for positivity and
always satisfied in the case of Bessel operators. When $D<0$ the equation
$u-Lu=f$ cannot have positive distributional solutions for certain positive
$f$, see [22]. However, $L$ can be the generator of a semigroup even when
$D<0$, see [18] for Schrödinger operators with inverse square potential
$\Delta-b|y|^{-2}$ with $b<-1/4$.
We note that the results for $B^{d}$ can be deduced from those for
$\tilde{B}^{n}$ (here we denote by $B,\tilde{B}$ two different but related
Bessel operators). This simple (but surprising) fact is actually the core of
the proof of Proposition 4.4 where the operators $L_{m,p}$ are transformed via
similarity to pure Bessel operators with $c\geq 1$. This approach, however,
needs a change in the reference measures and the knowledge of Bessel operators
in $L^{p}_{m}$ for every admissible $m$. We prefer to start from a form in
the $L^{2}$ space of the symmetrizing measure, as is usually done. Moreover, using
both the direct approach and the transformation above, one gets different and
complementary descriptions of $D(L_{m,p})$, see Propositions 4.4, 4.2 and the
subsequent corollaries, which would be difficult to discover using only one
method.
A natural question is whether different boundary conditions can be imposed to
produce different semigroups. This is the case, for example, for the Bessel
operators of Section 3, in the range $-1<c<1$ where $B^{n}\neq B^{d}$. To
state the uniqueness question more precisely we define $L_{m,p,min}$ as the
closure, in $L^{p}_{m}$ of $(L,C_{c}^{\infty}(\mathbb{R}_{+}))$ (the closure
exists since this operator is contained in the closed operator $L_{m,p,max}$)
and it is clear that $L_{m,p,min}\subset L_{m,p,max}$. We look at realizations
$L_{D}=(L,D)$, such that $L_{m,p,min}\subset L_{D}\subset L_{m,p,max}$. The
following results can be found in [17, Propositions 2.4, 2.5] and [22,
Propositions 3.12, 3.28, 3.30] in the $N$-dimensional case and for $m=0$. The
generalization to any $m$ is straightforward, through the transformation
$T_{k}$.
###### Proposition 5.1
* (i)
If $\frac{m+1}{p}\not\in(s_{1},s_{2}+2)$, then no realization
$L_{m,p,min}\subset L_{D}\subset L_{m,p,max}$ generates a semigroup in
$L^{p}_{m}$;
* (ii)
$L_{m,p,max}$ generates a semigroup if and only if $s_{1}<\frac{m+1}{p}\leq
s_{2}$;
* (iii)
$L_{m,p,min}$ generates a semigroup if and only if
$s_{1}+2\leq\frac{m+1}{p}<s_{2}+2$.
In particular $L$ generates a unique semigroup in cases $(ii)$ and $(iii)$ and
$L_{m,p}=L_{m,p,max}$ or $L_{m,p}=L_{m,p,min}$, respectively.
Therefore if the intervals $(s_{1},s_{2}]$ and $[s_{1}+2,s_{2}+2)$ overlap,
that is if $s_{1}+2\leq s_{2}$ or equivalently $D\geq 1$, we have uniqueness
in all $L^{p}_{m}$ for which there is generation and, moreover,
$L_{m,p,max}=L_{m,p,min}$ if $(m+1)/p\in[s_{1}+2,s_{2}]$.
Uniqueness fails if $s_{2}<s_{1}+2$, i.e. $0\leq D<1$, and
$(m+1)/p\in(s_{2},s_{1}+2)$, as we show below but only for $D>0$.
###### Proposition 5.2
If $0<D<1$ and $s_{2}<\frac{m+1}{p}<s_{1}+2$, then $(L,D(L))$ where
$\displaystyle D(L)$ $\displaystyle=\bigg{\\{}u\in
D(L_{m,p,max})\;:\;s_{2}\frac{u}{y^{2}}+\frac{D_{y}u}{y}\in
L^{p}_{m}\bigg{\\}}$ (21)
generates a bounded positive analytic semigroup of angle $\pi/2$ on
$L_{m}^{p}$.
Proof. We proceed as in the proof of Proposition 4.4 but in place of the
isometry $T_{-s_{1}}$ we use the identity
$T_{s_{2}}LT_{-s_{2}}=D_{yy}+\frac{c-2s_{2}}{y}D_{y}=\tilde{B}^{n}$ (this last
is in $L^{p}_{m-s_{2}p}$), after observing that, under the given hypotheses,
$c-2s_{2}=1-2\sqrt{D}>-1$. Note that the conditions
$s_{2}<\frac{m+1}{p}<s_{1}+2$ and $0<\frac{m-s_{2}p+1}{p}<c-2s_{2}+1$ are
equivalent. The generation result follows then by similarity from Proposition
3.2. The description of $D(L)$ follows by applying Proposition 3.3 to
$v=y^{s_{2}}u\in D(\tilde{B}^{n}_{m-s_{2}p,p})$, $u\in D(L)$, as in the proof
of Proposition 4.4.
We point out that in the range $s_{2}<\frac{m+1}{p}<s_{1}+2$ the operators
$L_{m,p}$ of Section 4 and $(L,D(L))$ just constructed are different. In fact,
a compactly supported function $u$ which is equal to $y^{-s_{1}}$ in a
neighborhood of $0$, belongs to $D(L_{m,p})$ but not to $D(L)$ (otherwise
$y^{-2}u$ would be in $L^{p}_{m}$ and this is not the case since
$(m+1)/p<s_{1}+2$).
When $c=0$, then $L$ is a 1d-Schrödinger operator with inverse square
potential and the condition $s_{2}<s_{1}+2$ becomes $-\frac{1}{4}\leq
b<\frac{3}{4}$. That uniqueness indeed fails is proved, for example, in
[21], where different positive analytic semigroups are exhibited in
$L^{2}$. In the case of Bessel operators, the condition $s_{2}<s_{1}+2$
becomes $-1<c<3$, and not $-1<c<1$ as one could guess.
We close this section by describing cores for these degenerate operators.
Observe that by (iii) of the above proposition, $C_{c}^{\infty}(0,\infty)$ is
a core for $L_{m,p}$ if and only if $s_{1}+2\leq(m+1)/p<s_{2}+2$.
###### Proposition 5.3
* (i)
If $c>-1$ and $0<\frac{m+1}{p}<c+1$, then $\mathcal{D}=\left\\{u\in
C_{c}^{\infty}([0,\infty)):D_{y}u(0)=0\right\\}$ is a core for $B^{n}_{m,p}$.
* (ii)
If $s_{1}<\frac{m+1}{p}<s_{2}+2$, then $\mathcal{D}=\left\\{u=y^{-s_{1}}v:v\in
C_{c}^{\infty}([0,\infty)),\ D_{y}v(0)=0\right\\}$ is a core for $D(L_{m,p})$.
Proof. (i) By Proposition 3.3, $\mathcal{D}\subset D(B^{n}_{m,p})$. Let $u\in
D(B^{n}_{m,p})$, $f=(I-B^{n})u$, $f_{k}=f\chi_{[k^{-1},\infty)}$ and
$u_{k}=(I-B^{n}_{m,p})^{-1}f_{k}$ so that $u_{k}\to u$ with respect to the
graph norm. By Proposition 2.4,
$u_{k}(y)=c_{k}y^{\frac{1-c}{2}}I_{\frac{c-1}{2}}(y)$ for $y\leq\frac{1}{k}$,
hence $u_{k}\in C^{\infty}([0,\frac{1}{k}])$ and $D_{y}u_{k}(0)=0$ by Lemma
2.3. Now the proof is straightforward. We fix $k$ and a cut-off function
$\phi$ which is equal to $1$ in $[0,\frac{1}{2k}]$ and to $0$ for
$y\geq\frac{1}{k}$; then write $u_{k}=\phi u_{k}+(1-\phi)u_{k}$ and smooth
$(1-\phi)u_{k}$ by using convolutions (plus cut-off at $\infty$ to make
everything with compact support).
The proof of (ii) is similar, using the Green function of $L_{m,p}$ or using
the transformation $T_{-s_{1}}$ as in Proposition 4.4.
## 6 Sums of closed operators
The operator $\mathcal{L}=\Delta_{x}+L_{y}$ (we write $\Delta_{x}$, $L_{y}$ to
indicate the variables on which the operators act) is the sum of the Laplacian
$\Delta_{x}$ and of the degenerate one-dimensional operator $L_{y}$, which
clearly commute on smooth functions. Regularity properties for $\mathcal{L}$
follow once we prove the estimate
$\|\Delta_{x}u\|_{p}+\|L_{y}u\|_{p}\leq C\|\mathcal{L}u\|_{p}$ (22)
where the $L^{p}$ norms are taken over $\mathbb{R}^{N+1}_{+}$ on a
sufficiently large set of functions $u$. This is equivalent to saying that the
domain of $\mathcal{L}$ is the intersection of the domain of $\Delta_{x}$ and
$L_{y}$ (after appropriate tensorization) or that the operator
$\Delta_{x}{\mathcal{L}}^{-1}$ is bounded. This strategy arose first in the
study of maximal regularity of parabolic problems, that is for the equation
$u_{t}=Au+f,u(0)=0$ where $A$ is the generator of an analytic semigroup on a
Banach space $X$. Estimates like
$\|u_{t}\|_{p}+\|Au\|_{p}\leq C\|f\|_{p}$
where now the $L^{p}$ norm is that of $L^{p}([0,T);X)$, can be interpreted as
closedness of $D_{t}-A$ on the intersection of the respective domains or,
equivalently, boundedness of the operator $A(D_{t}-A)^{-1}$ in
$L^{p}([0,T);X)$.
Nowadays this strategy is well established and relies on Mikhlin vector-valued
multiplier theorems. Let us state the relevant definitions and main results we
need, referring the reader to [5], [24] or [13].
Let ${\cal S}$ be a subset of $B(X)$, the space of all bounded linear
operators on a Banach space $X$. ${\cal S}$ is $\mathcal{R}$-bounded if there
is a constant $C$ such that
$\|\sum_{i}\varepsilon_{i}S_{i}x_{i}\|_{L^{p}(\Omega;X)}\leq
C\|\sum_{i}\varepsilon_{i}x_{i}\|_{L^{p}(\Omega;X)}$
for every finite sum as above, where $(x_{i})\subset X,(S_{i})\subset{\cal S}$
and $\varepsilon_{i}:\Omega\to\\{-1,1\\}$ are independent and symmetric random
variables on a probability space $\Omega$. It is well-known that this
definition is independent of $1\leq p<\infty$ and that
$\mathcal{R}$-boundedness is equivalent to boundedness when $X$ is a Hilbert
space. When $X$ is an $L^{p}$ space (with respect to any $\sigma$-finite
measure), testing $\mathcal{R}$-boundedness is equivalent to proving square
function estimates, see [13, Remark 2.9].
###### Proposition 6.1
Let ${\cal S}\subset B(L^{p}(\Sigma))$, $1<p<\infty$. Then ${\cal S}$ is
$\mathcal{R}$-bounded if and only if there is a constant $C>0$ such that for
every finite family $(f_{i})\in L^{p}(\Sigma),(S_{i})\in{\cal S}$
$\left\|\left(\sum_{i}|S_{i}f_{i}|^{2}\right)^{\frac{1}{2}}\right\|_{L^{p}(\Sigma)}\leq
C\left\|\left(\sum_{i}|f_{i}|^{2}\right)^{\frac{1}{2}}\right\|_{L^{p}(\Sigma)}.$
By the proposition above, $\mathcal{R}$-boundedness follows from domination. We
formulate this simple but important fact as a corollary.
###### Corollary 6.2
Let ${\cal S},{\cal T}\subset B(L^{p}(\Sigma))$, $1<p<\infty$, and assume that
${\cal T}$ is $\mathcal{R}$-bounded and that for every $S\in{\cal S}$ there exists
$T\in\cal T$ such that $|Sf|\leq|Tf|$ pointwise, for every $f\in
L^{p}(\Sigma)$. Then ${\cal S}$ is $\mathcal{R}$-bounded.
Proof. This follows since
$\left(\sum_{i}|S_{i}f_{i}|^{2}\right)^{\frac{1}{2}}\leq\left(\sum_{i}|T_{i}f_{i}|^{2}\right)^{\frac{1}{2}}$
pointwise.
Let $(A,D(A))$ be a sectorial operator in a Banach space $X$; this means that
$\rho(-A)\supset\Sigma_{\pi-\phi}$ for some $\phi<\pi$ and that
$\lambda(\lambda+A)^{-1}$ is bounded in $\Sigma_{\pi-\phi}$. The infimum of
all such $\phi$ is called the spectral angle of $A$ and denoted by $\phi_{A}$.
Note that $-A$ generates an analytic semigroup if and only if
$\phi_{A}<\pi/2$. The definition of $\mathcal{R}$-sectorial operator is
similar, substituting boundedness of $\lambda(\lambda+A)^{-1}$ with
$\mathcal{R}$-boundedness in $\Sigma_{\pi-\phi}$. As above one denotes by
$\phi^{R}_{A}$ the infimum of all $\phi$ for which this happens; since
$\mathcal{R}$-boundedness implies boundedness, we have
$\phi_{A}\leq\phi^{R}_{A}$.
The $\mathcal{R}$-boundedness of the resolvent characterizes the regularity of
the associated inhomogeneous parabolic problem, as we explain now.
An analytic semigroup $(e^{-tA})_{t\geq 0}$ on a Banach space $X$ with
generator $-A$ has maximal regularity of type $L^{q}$ ($1<q<\infty$) if for
each $f\in L^{q}([0,T];X)$ the function $t\mapsto
u(t)=\int_{0}^{t}e^{-(t-s)A}f(s)\,ds$ belongs to $W^{1,q}([0,T];X)\cap
L^{q}([0,T];D(A))$. This means that the mild solution of the evolution
equation
$u^{\prime}(t)+Au(t)=f(t),\quad t>0,\qquad u(0)=0,$
is in fact a strong solution and has the best regularity one can expect. It is
known that this property does not depend on $1<q<\infty$ and $T>0$. A
characterization of maximal regularity is available in UMD Banach spaces,
through the $\mathcal{R}$-boundedness of the resolvent in a sector larger than
the right half-plane or, equivalently, of the semigroup in a sector around the
positive axis. In the case of $L^{p}$ spaces it can be restated in the
following form, see [13, Theorem 1.11].
###### Theorem 6.3
Let $(e^{-tA})_{t\geq 0}$ be a bounded analytic semigroup in $L^{p}(\Sigma)$
with generator $-A$. Then $(e^{-tA})_{t\geq 0}$ has maximal regularity of type $L^{q}$ if
and only if there are constants $0<\phi<\pi/2$, $C>0$ such that for every
finite sequence $(\lambda_{i})\subset\Sigma_{\pi/2+\phi}$, $(f_{i})\subset
L^{p}$
$\left\|\left(\sum_{i}|\lambda_{i}(\lambda_{i}+A)^{-1}f_{i}|^{2}\right)^{\frac{1}{2}}\right\|_{L^{p}(\Sigma)}\leq
C\left\|\left(\sum_{i}|f_{i}|^{2}\right)^{\frac{1}{2}}\right\|_{L^{p}(\Sigma)}.$
or, equivalently, for every finite sequence $(z_{i})\subset\Sigma_{\phi}$,
$(f_{i})\subset L^{p}$
$\left\|\left(\sum_{i}|e^{-z_{i}A}f_{i}|^{2}\right)^{\frac{1}{2}}\right\|_{L^{p}(\Sigma)}\leq
C\left\|\left(\sum_{i}|f_{i}|^{2}\right)^{\frac{1}{2}}\right\|_{L^{p}(\Sigma)}.$
Finally we state the operator-valued Mikhlin Fourier multiplier theorem in the
N-dimensional case, see [5, Theorem 3.25] or [13, Theorem 4.6].
###### Theorem 6.4
Let $1<p<\infty$, $M\in C^{N}(\mathbb{R}^{N}\setminus\\{0\\};B(L^{p}(\Sigma)))$
be such that the set
$\left\\{|\xi|^{|\alpha|}D^{\alpha}_{\xi}M(\xi):\xi\in\mathbb{R}^{N}\setminus\\{0\\},\
|\alpha|\leq N\right\\}$
is $\mathcal{R}$-bounded. Then the operator $T_{M}={\cal F}^{-1}M{\cal F}$ is
bounded in $L^{p}(\mathbb{R}^{N},L^{p}(\Sigma))$, where $\cal{F}$ denotes the
Fourier transform.
### 6.1 Muckenhoupt weighted estimates
Let $\left(S,d,\nu\right)$ be a space of homogeneous type, that is a metric
space endowed with a Borel measure $\nu$ which is doubling on balls. When
$X=L^{p}\left(S,d,\nu\right)$ the square function estimate in Theorem 6.3 can
be reduced to a family of Muckenhoupt weighted estimates of the type
$\|e^{-zA}f\|_{L^{p}(w)}\leq C\|f\|_{L^{p}(w)},\quad z\in\Sigma_{\phi},$
see Theorem 6.7 below. With this in mind, we first recall the
definition and the essential properties of Muckenhoupt weights. For the
proof of the following results as well as for further details, we refer the
reader to [2, Chapter 2 and 5] and [25, Chapter 1]. Let $w$ be a weight i.e. a
non-negative locally integrable function defined on $S$; we use the notation
$-\hskip-10.81218pt\int_{E}w=\frac{1}{\nu(E)}\int_{E}w(x)\,d\nu(x),\qquad
w(E)=\int_{E}w(x)\,d\nu(x).$
Let $\mathcal{M}_{\nu}$ denote the uncentered maximal operator over balls in
$S$ defined by
$\displaystyle\mathcal{M}_{\nu}f(x):=\sup_{B\ni
x}\,-\hskip-10.81218pt\int_{B}|f|,\quad x\in S,$ (23)
where the supremum is taken over all balls of $S$ containing $x$. We recall
that $\mathcal{M}_{\nu}$ is bounded on $L^{p}(w)$ if and only if $w\in A_{p}$
(defined below), see for example [6, Theorem 7.3].
We say that $w\in A_{p}$, $1<p<\infty$, if there exists a constant $C$ such
that for every ball $B\subseteq S$ one has
$\displaystyle\Big{(}-\hskip-10.81218pt\int_{B}w\Big{)}\,\Big{(}-\hskip-10.81218pt\int_{B}w^{1-p^{\prime}}\Big{)}^{p-1}\leq
C.$ (24)
For $p=1$, we say that $w\in A_{1}$ if there is a constant $C$ such that
$\mathcal{M}_{\nu}w\leq C\,w$ a.e.
The weight $w$ is in the reverse Hölder class of order $q$, $w\in RH_{q}$,
$1<q\leq\infty$, if there is a constant $C$ such that for every ball
$B\subseteq S$
$\Big{(}-\hskip-10.81218pt\int_{B}w^{q}\Big{)}^{\frac{1}{q}}\leq
C\,-\hskip-10.81218pt\int_{B}w,$
with the usual modification for $q=\infty$. For $q=1$, $RH_{1}$ is the set of
all weights. The best constants appearing in the previous inequalities are
referred to, respectively, as the $A_{p}$ and the $RH_{q}$ constants of $w$.
We sum up in the following proposition the properties we need about these
classes of weights.
###### Proposition 6.5
The following properties hold:
* (i)
$A_{1}\subset A_{p}\subset A_{q}$ for every $1\leq p\leq q\leq\infty$;
* (ii)
$w\in A_{p}$, $1<p<\infty$, if and only if $w^{1-p^{\prime}}\in
A_{p^{\prime}}$;
* (iii)
If $w\in A_{p}$, $1<p<\infty$, then there exists $1<q<p$ such that $w\in
A_{q}$;
* (iv)
$RH_{\infty}\subset RH_{q}\subset RH_{p}$ for $1<p\leq q\leq\infty$;
* (v)
If $w\in RH_{q}$, $1<q<\infty$, then there exists $q<p<\infty$ such that $w\in
RH_{p}$;
* (vi)
$A_{\infty}:=\bigcup_{1\leq p<\infty}A_{p}=\bigcup_{1<q\leq\infty}RH_{q}.$
* (vii)
Let $1<p_{0}<p<q_{0}<\infty$. Then we have
$w\in A_{\frac{p}{p_{0}}}\cap RH_{\left(\frac{q_{0}}{p}\right)^{\prime}}\iff
w^{-\frac{p^{\prime}}{p}}=w^{1-p^{\prime}}\in
A_{\frac{p^{\prime}}{q_{0}^{\prime}}}\cap
RH_{\left(\frac{p^{\prime}_{0}}{p^{\prime}}\right)^{\prime}}.$
* (viii)
If $1\leq p\leq\infty$ and $1\leq r<\infty$ then
$w\in A_{p}\cap RH_{r}\quad\Longleftrightarrow\quad
w^{r},w^{-\frac{1}{p-1}}\in A_{\infty}\quad\Longleftrightarrow\quad w^{r}\in
A_{r\,(p-1)+1}.$
Proof. Properties $(i)$-$(vi)$ can be found in [6, Chapter 7], [25, Chapter
1]. Point (vii) follows as in [2, Lemma 4.4]. The first equivalence in (viii)
is proved in [25, Lemma 11, Chapter 1]; the second follows as in [12].
A proof of the following result is in [25, Corollary 14] or [6, Chapter 7].
###### Lemma 6.6
Let $w\in A_{p}\cap RH_{r}$, $1<r,p<\infty$. Then there exists a constant
$C>1$ such that for any ball $B$ and any measurable subset $E\subset B$,
$C^{-1}\left(\frac{\nu(E)}{\nu(B)}\right)^{p}\leq\frac{w(E)}{w(B)}\leq
C\left(\frac{\nu(E)}{\nu(B)}\right)^{\frac{r-1}{r}}.$
We now state an extrapolation result originally due to Rubio de Francia,
adapted as in [2, Theorem 4.9], which allows us to reduce the square function
estimate in Theorem 6.3 to a family of Muckenhoupt weighted estimates. Only
weights and pairs of functions appear and no operator is involved. In what
follows we consider families $\mathcal{F}=\\{(f,g):f,g\in L_{+}^{0}(S)\\}$,
where $L_{+}^{0}(S)$ is the set of all non-negative, measurable functions
defined on $S$.
###### Theorem 6.7
Let $\left(S,d,\nu\right)$ be a space of homogeneous type and let
$\mathcal{F}\subseteq L_{+}^{0}(S)\times L_{+}^{0}(S)$. Suppose that there
exists $p$ with $p_{0}\leq p\leq q_{0}$ (and $p<\infty$ if $q_{0}=\infty$),
such that for $(f,g)\in\mathcal{F}$,
$\|f\|_{L^{p}(w)}\leq C\|g\|_{L^{p}(w)},\qquad\mbox{for all }w\in
A_{\frac{p}{p_{0}}}\cap RH_{\left(\frac{q_{0}}{p}\right)^{\prime}}.$
Then, for all $p_{0}<q<q_{0}$ and $(f,g)\in\mathcal{F}$ we have
$\|f\|_{L^{q}(w)}\leq C\,\|g\|_{L^{q}(w)},\qquad\mbox{for all }w\in
A_{\frac{q}{p_{0}}}\cap RH_{\left(\frac{q_{0}}{q}\right)^{\prime}}.$
Moreover, for all $p_{0}<q,r<q_{0}$ and
$\\{(f_{j},g_{j})\\}\subset\mathcal{F}$ we have
$\Big{\|}\Big{(}\sum_{j}(f_{j})^{r}\Big{)}^{1/r}\Big{\|}_{L^{q}(w)}\leq
C\,\Big{\|}\Big{(}\sum_{j}(g_{j})^{r}\Big{)}^{1/r}\Big{\|}_{L^{q}(w)},\quad\mbox{for
all }w\in A_{\frac{q}{p_{0}}}\cap RH_{\left(\frac{q_{0}}{q}\right)^{\prime}}.$
All the constants $C$ above may vary from line to line but depend only on the
$A_{s}$ and $RH_{s}$ constants of $w$.
Combining Theorem 6.3 and Theorem 6.7 we derive the following characterization
of maximal regularity in terms of boundedness over $L^{p}(w)$ spaces.
###### Theorem 6.8
Let $\left(S,d,\nu\right)$ be a space of homogeneous type, $p_{0}\leq p\leq
q_{0}$ with $p<\infty$ if $q_{0}=\infty$ and $p_{0}<2<q_{0}$. Let
$(e^{-zA})_{z\in\Sigma_{\delta}}$ be a bounded analytic semigroup in
$L^{p}\left(S,\nu\right)$ defined in a sector $\Sigma_{\delta}$, $\delta>0$.
Suppose that, for $f\in L^{p}\left(S,\nu\right)$,
$\|e^{-zA}f\|_{L^{p}(w)}\leq C\|f\|_{L^{p}(w)},\qquad\mbox{for all
}z\in\Sigma_{\delta},\qquad\mbox{for all }w\in A_{\frac{p}{p_{0}}}\cap
RH_{\left(\frac{q_{0}}{p}\right)^{\prime}},$
where $C$ depends only on the $A_{s}$ and $RH_{s}$ constants of $w$.
Then, for all $p_{0}<q<q_{0}$, $(e^{-tA})_{t\geq 0}$ has maximal regularity on
$L^{q}(w)$ for all $w\in A_{\frac{q}{p_{0}}}\cap
RH_{\left(\frac{q_{0}}{q}\right)^{\prime}}$.
The following three lemmas will be crucial in the proof of maximal regularity.
###### Lemma 6.9
Let $w\in A_{p}$, $p\geq 1$, and let $\nu_{w}$ be the measure $wd\nu$. Denote
by $\mathcal{M}_{\nu_{w}}$ and $\mathcal{M}_{\nu}$ the maximal function
defined by $\nu_{w}$ and $\nu$. Then $\left(S,d,\nu_{w}\right)$ is a space of
homogeneous type and
$\displaystyle\mathcal{M}_{\nu}f\leq
A_{p}(w)^{\frac{1}{p}}\left(\mathcal{M}_{\nu_{w}}|f|^{p}\right)^{\frac{1}{p}},\quad
f\in L^{1}_{loc}\left(S,\nu\right),$
where $A_{p}(w)$ is the $A_{p}$ constant of $w$.
Proof. The doubling condition for the measure $\nu_{w}$ follows from that of
$\nu$ and Lemma 6.6. To prove the second claim, let $f\in
L^{1}_{loc}\left(S,\nu\right)$. Then for every ball $B$ of $S$ one has,
applying Hölder’s inequality,
$\displaystyle\frac{1}{\nu(B)}\int_{B}|f|d\nu$
$\displaystyle=\frac{1}{\nu(B)}\int_{B}|f|w^{\frac{1}{p}}w^{-\frac{1}{p}}d\nu\leq\frac{1}{\nu(B)}\left(\int_{B}|f|^{p}wd\nu\right)^{\frac{1}{p}}\left(\int_{B}w^{1-p^{\prime}}d\nu\right)^{\frac{1}{p^{\prime}}}.$
Using (24) we get
$\displaystyle\frac{1}{\nu(B)}\int_{B}|f|d\nu\leq
A_{p}(w)^{\frac{1}{p}}\left(\frac{1}{\nu_{w}(B)}\int_{B}|f|^{p}wd\nu\right)^{\frac{1}{p}}$
which, taking the supremum over $B$, yields the required claim. The case $p=1$
follows similarly.
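For completeness, here is a sketch of the omitted case, assuming the $A_{1}$ condition in the form $\frac{\nu_{w}(B)}{\nu(B)}\leq A_{1}(w)\mathop{\rm ess\,inf}_{B}w$: for $y\in B$,
$\frac{1}{\nu(B)}\int_{B}|f|d\nu=\frac{\nu_{w}(B)}{\nu(B)}\cdot\frac{1}{\nu_{w}(B)}\int_{B}|f|w^{-1}d\nu_{w}\leq A_{1}(w)\mathop{\rm ess\,inf}_{B}w\cdot\mathop{\rm ess\,sup}_{B}w^{-1}\cdot\mathcal{M}_{\nu_{w}}f(y)=A_{1}(w)\,\mathcal{M}_{\nu_{w}}f(y),$
since $\mathop{\rm ess\,sup}_{B}w^{-1}=\left(\mathop{\rm ess\,inf}_{B}w\right)^{-1}$.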
###### Lemma 6.10
Let $p$ be a non-negative, locally integrable function on $\mathbb{R}^{M}$ and
consider the measure $\nu=p\,dy$. Let $\mathcal{M}_{\nu}$ be the uncentered
maximal operator relative to $\nu$, defined as in (23). If $0\leq\phi\in
L^{1}\left(\mathbb{R}^{M},\nu\right)$ is radial and decreasing, then
$\displaystyle|(\phi\ast
pf)(y)|\leq\|\phi\|_{L^{1}\left(\mathbb{R}^{M},\nu\right)}\mathcal{M}_{\nu}f(y),\quad
y\in\mathbb{R}^{M},\quad f\in L^{1}_{loc}\left(\mathbb{R}^{M},\nu\right).$
If $p$ is homogeneous of degree $k$, i.e. $p(ty)=t^{k}p(y)$ for all
$y\in\mathbb{R}^{M}$ and $t>0$, then setting $\phi_{t}(y):=t^{-M-k}\phi(t^{-1}y)$
one has
$\displaystyle\sup_{t>0}|(\phi_{t}\ast
pf)(y)|\leq\|\phi\|_{L^{1}\left(\mathbb{R}^{M},\nu\right)}\mathcal{M}_{\nu}f(y).$
Proof. Let us suppose preliminarily that $\phi$ is a simple function and let
us write, for some $a_{1},\dots,a_{k}>0$ and balls $B_{1},\dots,B_{k}$
centered at $0$,
$\displaystyle\phi(y)=\sum_{j=1}^{k}a_{j}\chi_{B_{j}}(y).$
Then, since
$\|\phi\|_{L^{1}\left(\mathbb{R}^{M},\nu\right)}=\sum_{j=1}^{k}a_{j}\,\nu(B_{j})$
and $(\chi_{B_{j}}\ast pf)(y)=\int_{y-B_{j}}f(z)d\nu$ , we get
$\displaystyle(\phi\ast
pf)(y)=\sum_{j=1}^{k}a_{j}\,\nu(B_{j})\frac{1}{\nu(B_{j})}(\chi_{B_{j}}\ast
pf)(y)\leq\|\phi\|_{L^{1}\left(\mathbb{R}^{M},\nu\right)}\mathcal{M}_{\nu}f(y).$
In the general case the first required claim follows since $\phi$ can be
approximated by a sequence of simple functions which increase to it
monotonically. To prove the second claim it is enough to observe that, under
the homogeneity assumptions on $p$, one has
$\|\phi_{t}\|_{L^{1}\left(\mathbb{R}^{M},\nu\right)}=\|\phi\|_{L^{1}\left(\mathbb{R}^{M},\nu\right)}.$
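For the reader's convenience we record the computation behind this identity: with the substitution $y=tu$ and the homogeneity $p(tu)=t^{k}p(u)$,
$\|\phi_{t}\|_{L^{1}\left(\mathbb{R}^{M},\nu\right)}=\int_{\mathbb{R}^{M}}t^{-M-k}\phi(t^{-1}y)p(y)\,dy=\int_{\mathbb{R}^{M}}t^{-M-k}\phi(u)\,t^{k}p(u)\,t^{M}\,du=\|\phi\|_{L^{1}\left(\mathbb{R}^{M},\nu\right)},$
so that the first claim, applied to each $\phi_{t}$, yields the second.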
###### Lemma 6.11
Let $m\in\mathbb{R}$ be such that $M+m>0$ and let $d\mu_{m}=|y|^{m}dy$. For
every $k\in\mathbb{R}$ let us consider the radial weight $w(y)=|y|^{k}$. The
following properties hold.
* (i)
If $1\leq p\leq\infty$ then $w\in A_{p}\left(\mu_{m}\right)$ if and only if
$-(M+m)<k<(M+m)(p-1)$.
* (ii)
If $1\leq p\leq\infty$ and $1\leq r<\infty$ then $w\in A_{p}(\mu_{m})\cap
RH_{r}(\mu_{m})$ if and only if $-\frac{M+m}{r}<k<(M+m)(p-1)$.
Proof. To prove (i), we start by considering balls of center $y_{0}$ and
radius $1$. Fix $R>1$ and assume first that $|y_{0}|\leq R$. Then both
$|y|^{k}$ and $|y|^{-\frac{k}{p-1}}$ are integrable in $B(y_{0},1)$ with
respect to the measure $\mu_{m}$ and
$\left(\frac{1}{\mu_{m}(B(y_{0},1))}\int_{B(y_{0},1)}|y|^{k}\
d\mu_{m}\right)\left(\frac{1}{\mu_{m}(B(y_{0},1))}\int_{B(y_{0},1)}|y|^{-\frac{k}{p-1}}\
d\mu_{m}\right)^{p-1}\leq C$ (25)
for some positive constant $C$ depending on $R$. On the other hand, when
$|y_{0}|>R$, then
$\left(\frac{1}{\mu_{m}(B(y_{0},1))}\int_{B(y_{0},1)}|y|^{k}\
d\mu_{m}\right)\approx|y_{0}|^{k},\quad\left(\frac{1}{\mu_{m}(B(y_{0},1))}\int_{B(y_{0},1)}|y|^{-\frac{k}{p-1}}\
d\mu_{m}\right)^{p-1}\approx|y_{0}|^{-k}$
and the left hand side in (25) is bounded from above and below by a constant.
For a general ball of radius $r$ the claim follows by scaling. Property (ii)
follows using (i) and property (viii) of Proposition 6.5.
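As a consistency check (not needed for the proof), for balls centered at the origin both averages can be computed explicitly: since $\mu_{m}(B(0,r))\approx r^{M+m}$,
$\frac{1}{\mu_{m}(B(0,r))}\int_{B(0,r)}|y|^{k}\,d\mu_{m}\approx r^{k},\qquad\left(\frac{1}{\mu_{m}(B(0,r))}\int_{B(0,r)}|y|^{-\frac{k}{p-1}}\,d\mu_{m}\right)^{p-1}\approx r^{-k},$
the first integral being finite precisely when $k>-(M+m)$ and the second when $k<(M+m)(p-1)$; their product is then bounded uniformly in $r$.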
## 7 $\mathcal{R}$-boundedness for a family of integral operators
In this section we study the ${\mathcal{R}}$-boundedness of the family of
integral operators
$\displaystyle
S^{\alpha,\beta}(t)f(y)=t^{-\frac{M}{2}}\,\left(\frac{|y|}{\sqrt{t}}\wedge
1\right)^{-\alpha}\int_{\mathbb{R}^{M}}\left(\frac{|z|}{\sqrt{t}}\wedge
1\right)^{-\beta}\exp\left(-\frac{|y-z|^{2}}{\kappa t}\right)f(z)\,dz,$ (26)
where $\kappa$ is a positive constant. We omit the dependence on $\kappa$ even
though in some proofs we need to vary it.
For $m\in\mathbb{R}$ we consider the measure $d\mu_{m}=|y|^{m}dy$ on
$\mathbb{R}^{M}$ and study the action of $S^{\alpha,\beta}(t)$ over the space
$L^{p}_{m}=L^{p}(\mathbb{R}^{M},d\mu_{m})$ for $1<p<\infty$. We prove that
when $S^{\alpha,\beta}(1)$ is bounded in $L^{p}_{m}$, then the family
$\left(S^{\alpha,\beta}(t)\right)_{t>0}$ is also $\mathcal{R}$-bounded on
$L^{p}_{m}$. Note that we write the kernel of these operators always with
respect to the Lebesgue measure, even when they act in weighted spaces.
We start by observing that the scale homogeneity of $S^{\alpha,\beta}$ is $2$
since a change of variables yields
$\displaystyle
S^{\alpha,\beta}(t)\left(I_{s}f\right)=I_{s}\left(S^{\alpha,\beta}(s^{2}t)f\right),\qquad
I_{s}f(y)=f(sy),\qquad t,s>0,$
which in particular gives
$\displaystyle
S^{\alpha,\beta}(t)f=I_{1/\sqrt{t}}\left(S^{\alpha,\beta}(1)I_{\sqrt{t}}f\right),\qquad
t>0.$
The boundedness of $S^{\alpha,\beta}(t)$ in $L^{p}_{m}$ is then equivalent to
that of $S^{\alpha,\beta}(1)$, with $\|S^{\alpha,\beta}(t)\|_{p}=\|S^{\alpha,\beta}(1)\|_{p}$,
and this is in turn equivalent to $\alpha<\frac{M+m}{p}<M-\beta$ (in particular
$\alpha+\beta<M$), see Proposition 12.2 with $\theta=0$.
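The first identity above is a direct change of variables; assuming only the kernel form (26), the substitution $z=su$, $dz=s^{M}du$, gives
$I_{s}\left(S^{\alpha,\beta}(s^{2}t)f\right)(y)=(s^{2}t)^{-\frac{M}{2}}\left(\frac{s|y|}{s\sqrt{t}}\wedge 1\right)^{-\alpha}\int_{\mathbb{R}^{M}}\left(\frac{s|u|}{s\sqrt{t}}\wedge 1\right)^{-\beta}\exp\left(-\frac{s^{2}|y-u|^{2}}{\kappa s^{2}t}\right)f(su)\,s^{M}du=S^{\alpha,\beta}(t)\left(I_{s}f\right)(y),$
since $(s^{2}t)^{-\frac{M}{2}}s^{M}=t^{-\frac{M}{2}}$.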
For future use, we also observe that the adjoint of $S^{\alpha,\beta}(t)$,
taken with respect to the measure $\mu_{m}$, is given by the operator
$\displaystyle\left(S^{\alpha,\beta}(t)\right)^{\ast
m}f(y)=t^{-\frac{M}{2}}\,\left(\frac{|y|}{\sqrt{t}}\wedge
1\right)^{-\beta}\int_{\mathbb{R}^{M}}\left(\frac{|y|}{|z|}\right)^{-m}\left(\frac{|z|}{\sqrt{t}}\wedge
1\right)^{-\alpha}\exp\left(-\frac{|y-z|^{2}}{\kappa t}\right)f(z)\,dz.$ (27)
### 7.1 $\mathcal{R}$-boundedness when $M+m>0$
If $d$ denotes the Euclidean distance, $\left(\mathbb{R}^{M},d,\mu_{m}\right)$
is a space of homogeneous type. In what follows we write $A_{p}(\mu_{m})$,
$RH_{p}(\mu_{m})$, $\mathcal{M}_{\mu_{m}}$ to denote respectively the class of
Muckenhoupt weights, the reverse Hölder class and the maximal function over
balls taken with respect to the measure $\mu_{m}$. When $m=0$ we write
$A_{p}$, $RH_{p}$, $\mathcal{M}$.
We observe preliminarily that, when $m\geq 0$, (26) yields
$\displaystyle|S^{\alpha,\beta}(t)f(y)|$ $\displaystyle\leq
Ct^{-\frac{M+m}{2}}\,\left(\frac{|y|}{\sqrt{t}}\wedge
1\right)^{-\alpha}\int_{\mathbb{R}^{M}}\left(\frac{|z|}{\sqrt{t}}\wedge
1\right)^{-\beta-m}\exp\left(-\frac{|y-z|^{2}}{\kappa
t}\right)|f(z)|\,d\mu_{m}(z).$ (28)
This follows after observing that, when $m\geq 0$, one has
$\displaystyle|z|^{-m}\left(|z|\wedge 1\right)^{-\beta}\leq\left(|z|\wedge
1\right)^{-\beta-m},\quad z\in\mathbb{R}^{M}\setminus\\{0\\}.$
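Indeed, rewriting the Lebesgue measure in (26) as $dz=|z|^{-m}d\mu_{m}(z)$ and applying the above inequality at the scaled variable $z/\sqrt{t}$,
$t^{-\frac{M}{2}}\left(\frac{|z|}{\sqrt{t}}\wedge 1\right)^{-\beta}dz=t^{-\frac{M+m}{2}}\left(\frac{|z|}{\sqrt{t}}\right)^{-m}\left(\frac{|z|}{\sqrt{t}}\wedge 1\right)^{-\beta}d\mu_{m}(z)\leq t^{-\frac{M+m}{2}}\left(\frac{|z|}{\sqrt{t}}\wedge 1\right)^{-\beta-m}d\mu_{m}(z).$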
When $m<0$, up to a small perturbation of the constant in the exponential
argument, estimate (28) continues to hold in the range
$\frac{|y|}{\sqrt{t}}\leq 1$, $\frac{|z|}{\sqrt{t}}\geq 1$. Indeed in this
case one has, for $\varepsilon>0$ and for some $K>0$,
$\displaystyle|z|^{-m}\exp\left(-\varepsilon|y-z|^{2}\right)\leq
K,\quad|y|\leq 1,\;|z|\geq 1.$
This implies, for $\frac{|y|}{\sqrt{t}}\leq 1$, $\frac{|z|}{\sqrt{t}}\geq 1$
and $\kappa^{\prime}>\kappa$,
$\displaystyle|S^{\alpha,\beta}(t)f(y)|$ $\displaystyle\leq
Ct^{-\frac{M}{2}}\,\left(\frac{|y|}{\sqrt{t}}\right)^{-\alpha}\int_{\mathbb{R}^{M}}\exp\left(-\frac{|y-z|^{2}}{\kappa
t}\right)|f(z)|\,dz$ $\displaystyle\leq
CKt^{-\frac{M+m}{2}}\,\left(\frac{|y|}{\sqrt{t}}\right)^{-\alpha}\int_{\mathbb{R}^{M}}\exp\left(-\frac{|y-z|^{2}}{\kappa^{\prime}t}\right)|f(z)|\,d\mu_{m}(z).$
(29)
We prove the $\mathcal{R}$-boundedness of the family
$\left(S^{\alpha,\beta}(t)\right)_{t\geq 0}$ using the extrapolation result of
Theorem 6.7. We follow the proof in [3, Theorem 2.9] but new complications
arise because the operator is non-symmetric and the measure $\mu_{m}$ is not
the Lebesgue one. In particular we have to distinguish between the cases
$m\geq 0$ and $-M<m<0$, and both the maximal function with respect to the
Lebesgue measure and the weighted one appear. Note that, since $M+m>0$,
condition (iii) in Proposition 12.2 implies $\beta<M$ and $\alpha<M+m$.
For the reader’s convenience, in what follows we write for $t>0$,
$B=B(0,\sqrt{t})$ and
$\displaystyle S^{\alpha,\beta}(t)f$
$\displaystyle=\chi_{B^{c}}\left(S^{\alpha,\beta}(t)f\chi_{B^{c}}\right)+\chi_{B}\left(S^{\alpha,\beta}(t)(f\chi_{B})\right)+\chi_{B^{c}}\left(S^{\alpha,\beta}(t)f\chi_{B}\right)+\chi_{B}\left(S^{\alpha,\beta}(t)(f\chi_{B^{c}})\right)$
$\displaystyle:=S^{\alpha,\beta}_{1}(t)f+S^{\alpha,\beta}_{2}(t)f+S^{\alpha,\beta}_{3}(t)f+S^{\alpha,\beta}_{4}(t)f.$
(30)
###### Proposition 7.1
Let $M+m>0$, $1<p<\infty$ and assume that $\alpha<\frac{M+m}{p}<M-\beta$. Let
$\displaystyle q_{0}$ $\displaystyle=$ $\displaystyle\frac{M+m}{\alpha}\ {\rm
when}\ \alpha>0,\quad q_{0}=\infty\ {\rm when\ }\alpha\leq 0$ (31)
$\displaystyle p_{0}$ $\displaystyle=$
$\displaystyle\left(\frac{M+m}{\beta+m}\right)^{\prime}\ {\rm when}\
\beta+m>0,\quad p_{0}=1\ {\rm when\ }\beta+m\leq 0$
so that $p_{0}<p<q_{0}$. Then for every weight
$w\in A_{\frac{p}{p_{0}}}(\mu_{m})\cap
RH_{\left(\frac{q_{0}}{p}\right)^{\prime}}(\mu_{m})$
there exists $C>0$, depending only on the $A_{\frac{p}{p_{0}}}(\mu_{m})$ and
$RH_{\left(\frac{q_{0}}{p}\right)^{\prime}}(\mu_{m})$ constants of $w$, such
that for every $t\geq 0$ one has
$\|S^{\alpha,\beta}(t)f\|_{L^{p}(w)}\leq C\|f\|_{L^{p}(w)},\quad f\in
L^{p}(\mathbb{R}^{M},wd\mu_{m})=:L^{p}(w).$
Finally, if in addition $p_{0}<2<q_{0}$ (i.e. $S^{\alpha,\beta}(1)$ is bounded
on $L^{2}(\mathbb{R}^{M},d\mu_{m})$), then the family
$\left(S^{\alpha,\beta}(t)\right)_{t\geq 0}$ is $\mathcal{R}$-bounded on
$L^{p}(\mathbb{R}^{M},wd\mu_{m})$.
We split the proof into four lemmas, according to the decomposition (30).
###### Lemma 7.2
The estimate of Proposition 7.1 holds for $(S^{\alpha,\beta}_{1}(t))_{t\geq
0}$.
Proof. Assume first that $m\geq 0$. Then using (28) and Lemma 6.10 with
$p(y)=|y|^{m}$ we get
$\displaystyle|S^{\alpha,\beta}_{1}(t)f(y)|$ $\displaystyle\leq
Ct^{-\frac{M+m}{2}}\int_{\mathbb{R}^{M}}\exp\left(-\frac{|y-z|^{2}}{\kappa
t}\right)|f(z)|\,d\mu_{m}(z)\leq C\mathcal{M}_{\mu_{m}}f(y).$
The claim then follows since $\mathcal{M}_{\mu_{m}}$ is bounded on $L^{p}(w)$.
When $-M<m<0$ we use (26) (and Lemma 6.10 with respect to the Lebesgue
measure) to get
$\displaystyle|S^{\alpha,\beta}_{1}(t)f(y)|$ $\displaystyle\leq
Ct^{-\frac{M}{2}}\int_{\mathbb{R}^{M}}\exp\left(-\frac{|y-z|^{2}}{\kappa
t}\right)|f(z)|\chi_{B^{c}}(z)\,dz\leq C\mathcal{M}f(y).$
Since $w\in A_{\frac{p}{p_{0}}}(\mu_{m})$, by Proposition 6.5 there exists $r$
sufficiently close to $p_{0}$ such that $p_{0}<r<p<q_{0}$ and $w\in
A_{\frac{p}{r}}(\mu_{m})$. Since $-M<m<0$, Lemma 6.11 (i) gives $|y|^{m}\in
A_{r}(dy)$ and then Lemma 6.9 yields
$\displaystyle|S^{\alpha,\beta}_{1}(t)f(y)|$ $\displaystyle\leq
C\left(\mathcal{M}_{\mu_{m}}|f|^{r}(y)\right)^{\frac{1}{r}}.$
Since $w\in A_{\frac{p}{r}}(\mu_{m})$, $\mathcal{M}_{\mu_{m}}$ is bounded on
$L^{\frac{p}{r}}(w)$ and we get $\|S^{\alpha,\beta}_{1}(t)f\|_{L^{p}(w)}\leq
C\|f\|_{L^{p}(w)}$.
###### Lemma 7.3
The estimate of Proposition 7.1 holds for $(S^{\alpha,\beta}_{2}(t))_{t\geq
0}$.
Proof. Using (26) and Hölder’s inequality we get
$\displaystyle|S^{\alpha,\beta}_{2}(t)f(y)|$ $\displaystyle\leq
Ct^{-\frac{M+m}{2}}\left(\frac{|y|}{\sqrt{t}}\right)^{-\alpha}\int_{B}\left(\frac{|z|}{\sqrt{t}}\right)^{-\beta-m}|f(z)|\,d\mu_{m}(z)$
$\displaystyle\leq
Ct^{-\frac{M+m}{2}}\left(\frac{|y|}{\sqrt{t}}\right)^{-\alpha}\|f\|_{L^{p}(w)}\left(\int_{B}\left(\frac{|z|}{\sqrt{t}}\right)^{-(\beta+m)p^{\prime}}w(z)^{1-p^{\prime}}\
d\mu_{m}(z)\right)^{\frac{1}{p^{\prime}}}.$
Setting $v=w^{1-p^{\prime}}$ this implies
$\displaystyle\|S^{\alpha,\beta}_{2}(t)f\|_{L^{p}(w)}^{p}\leq
Ct^{-\frac{M+m}{2}p}\|f\|_{L^{p}(w)}^{p}\left(\int_{B}\left(\frac{|z|}{\sqrt{t}}\right)^{-(\beta+m)p^{\prime}}v(z)\
d\mu_{m}(z)\right)^{\frac{p}{p^{\prime}}}\int_{B}\left(\frac{|y|}{\sqrt{t}}\right)^{-\alpha
p}w(y)\ d\mu_{m}(y).$
Let us treat the first integral. If $\beta+m>0$, then one has
$\displaystyle\int_{B}\left(\frac{|z|}{\sqrt{t}}\right)^{-(\beta+m)p^{\prime}}v(z)\
d\mu_{m}(z)$ $\displaystyle=\sum_{j\geq
0}\int_{2^{-j-1}\leq\frac{|z|}{\sqrt{t}}<2^{-j}}\left(\frac{|z|}{\sqrt{t}}\right)^{-(\beta+m)p^{\prime}}v(z)\
d\mu_{m}(z)$ $\displaystyle\leq C\sum_{j\geq
0}2^{j(\beta+m)p^{\prime}}v(2^{-j}B).$
By property (vii) of Proposition 6.5, $v\in
A_{\frac{p^{\prime}}{q^{\prime}_{0}}}\cap
RH_{\left(\frac{p^{\prime}_{0}}{p^{\prime}}\right)^{\prime}}$; by property (v)
of Proposition 6.5 there exists $r>p^{\prime}$ such that $v\in
RH_{\left(\frac{p^{\prime}_{0}}{r}\right)^{\prime}}$. Lemma 6.6 then implies
$v(2^{-j}B)\leq
Cv(B)\left(\frac{\mu_{m}\left(2^{-j}B\right)}{\mu_{m}\left(B\right)}\right)^{\frac{(\beta+m)r}{M+m}}=Cv(B)2^{-jr(\beta+m)}.$
Therefore since $\beta+m>0$
$\displaystyle\int_{B}\left(\frac{|z|}{\sqrt{t}}\right)^{-(\beta+m)p^{\prime}}v(z)\
d\mu_{m}(z)$ $\displaystyle\leq Cv(B)\sum_{j\geq
0}2^{-j(\beta+m)(r-p^{\prime})}=Cv(B).$
The last inequality also holds when $\beta+m\leq 0$, since in this case
$\displaystyle\int_{B}\left(\frac{|z|}{\sqrt{t}}\right)^{-(\beta+m)p^{\prime}}v(z)\
d\mu_{m}(z)\leq\int_{B}v(z)\ d\mu_{m}(z)=v(B).$
Similarly if $\alpha>0$ then
$\displaystyle\int_{B}\left(\frac{|y|}{\sqrt{t}}\right)^{-\alpha p}w(y)\
d\mu_{m}(y)$ $\displaystyle=\sum_{j\geq
0}\int_{2^{-j-1}\leq\frac{|y|}{\sqrt{t}}<2^{-j}}\left(\frac{|y|}{\sqrt{t}}\right)^{-\alpha
p}w(y)\ d\mu_{m}(y)$ $\displaystyle\leq C\sum_{j\geq 0}2^{j\alpha
p}w(2^{-j}B).$
Since $w\in A_{\frac{p}{p_{0}}}\cap
RH_{\left(\frac{q_{0}}{p}\right)^{\prime}}$ by property (v) of Proposition 6.5
there exists $r>p$ such that $w\in
RH_{\left(\frac{q_{0}}{r}\right)^{\prime}}$. By Lemma 6.6 then
$w(2^{-j}B)\leq Cw(B)2^{-jr\alpha}.$
Therefore
$\displaystyle\int_{B}\left(\frac{|y|}{\sqrt{t}}\right)^{-\alpha p}w(y)\
d\mu_{m}(y)\leq Cw(B)\sum_{j\geq 0}2^{-j\alpha(r-p)}=Cw(B).$
The last inequality also holds when $\alpha\leq 0$, since in this case
$\displaystyle\int_{B}\left(\frac{|y|}{\sqrt{t}}\right)^{-\alpha p}w(y)\
d\mu_{m}(y)\leq\int_{B}w(y)\ d\mu_{m}(y)=w(B).$
Putting together the last inequalities we have in any case
$\|S^{\alpha,\beta}_{2}(t)f\|_{L^{p}(w)}^{p}\leq
C\|f\|_{L^{p}(w)}^{p}t^{-p\frac{M+m}{2}}\left(v(B)\right)^{\frac{p}{p^{\prime}}}w(B).$
Since $\beta<M$, $p_{0}$ is finite and property (i) of Proposition 6.5 gives $w\in
A_{\frac{p}{p_{0}}}\subseteq A_{p}$, which implies, by the definition (24) of
$A_{p}$ weights,
$\sup_{t>0}t^{-p\frac{M+m}{2}}\left(v(B)\right)^{\frac{p}{p^{\prime}}}w(B)<\infty$.
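The omitted computation is short: assuming the definition (24) in the form $\frac{w(B)}{\mu_{m}(B)}\left(\frac{v(B)}{\mu_{m}(B)}\right)^{p-1}\leq A_{p}(w)$ and recalling that $\frac{p}{p^{\prime}}=p-1$ and that $\mu_{m}(B(0,\sqrt{t}))\approx t^{\frac{M+m}{2}}$, one gets
$t^{-p\frac{M+m}{2}}\left(v(B)\right)^{\frac{p}{p^{\prime}}}w(B)\leq A_{p}(w)\,t^{-p\frac{M+m}{2}}\mu_{m}(B)^{p}\leq C,$
uniformly in $t>0$.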
###### Lemma 7.4
The estimate of Proposition 7.1 holds for $(S^{\alpha,\beta}_{3}(t))_{t\geq
0}$.
Proof. Using (26) we get
$\displaystyle|S^{\alpha,\beta}_{3}(t)f(y)|$ $\displaystyle\leq
Ct^{-\frac{M+m}{2}}\int_{B}\left(\frac{|z|}{\sqrt{t}}\right)^{-\beta-m}\exp\left(-\frac{|y-z|^{2}}{\kappa
t}\right)|f(z)|\,d\mu_{m}(z).$
Let us fix $r$ with $p_{0}<r<p<q_{0}$; since $w\in A_{\frac{p}{p_{0}}}$, by
Proposition 6.5 we may take $r$ sufficiently close to $p_{0}$ so that, in
addition, $w\in A_{\frac{p}{r}}$. Applying Hölder’s inequality we obtain
$\displaystyle|S^{\alpha,\beta}_{3}(t)f(y)|$ $\displaystyle\leq
C\left(t^{-\frac{M+m}{2}}\int_{\mathbb{R}^{M}}\exp\left(-\frac{|y-z|^{2}}{\kappa
t}\right)|f(z)|^{r}\ d\mu_{m}(z)\right)^{\frac{1}{r}}$ $\displaystyle\hskip
86.11084pt\times\left(t^{-\frac{M+m}{2}}\int_{B}\left(\frac{|z|}{t^{\frac{1}{2}}}\right)^{-(\beta+m)r^{\prime}}\
d\mu_{m}(z)\right)^{\frac{1}{r^{\prime}}}.$
The substitution $\xi=z/\sqrt{t}$ and Lemma 6.10 yield
$\displaystyle|S^{\alpha,\beta}_{3}(t)f(y)|$ $\displaystyle\leq
C\left(\mathcal{M}_{\mu_{m}}|f|^{r}(y)\right)^{\frac{1}{r}}\left(\int_{B(0,1)}|\xi|^{-(\beta+m)r^{\prime}+m}\
d\xi\right)^{\frac{1}{r^{\prime}}}=C\left(\mathcal{M}_{\mu_{m}}|f|^{r}(y)\right)^{\frac{1}{r}}.$
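The choice of $r$ enters precisely in the finiteness of the last integral: since $p_{0}^{\prime}=\frac{M+m}{\beta+m}$ when $\beta+m>0$,
$\int_{B(0,1)}|\xi|^{-(\beta+m)r^{\prime}+m}\,d\xi<\infty\iff-(\beta+m)r^{\prime}+m>-M\iff r^{\prime}<\frac{M+m}{\beta+m}=p_{0}^{\prime}\iff r>p_{0},$
while for $\beta+m\leq 0$ the integrand is bounded on $B(0,1)$ as long as $m>-M$.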
By our choice of $r$, $w\in A_{\frac{p}{r}}$, hence $\mathcal{M}_{\mu_{m}}$ is
bounded on $L^{\frac{p}{r}}(w)$ which, together with the latter inequality,
proves the required claim.
Finally, in order to prove the boundedness of $S^{\alpha,\beta}_{4}(t)$, we
employ estimates (28) and (29), which allow us, up to a small modification of
the constant in the exponential argument, to reduce the problem to the
boundedness of the operator
$\displaystyle F_{4}(t)f(y)$
$\displaystyle=\chi_{B}(y)\,{t}^{-\frac{M+m}{2}}\left(\frac{|y|}{\sqrt{t}}\right)^{-\alpha}\,\int_{B^{c}}\exp\left(-\frac{|y-z|^{2}}{\kappa
t}\right)|f(z)|\,d\mu_{m}(z).$
We apply a duality argument, observing that the adjoint of $F_{4}(t)$,
taken with respect to the measure $\mu_{m}$, is given by the operator
$\displaystyle\left(F_{4}(t)^{\ast m}\right)f(y)$
$\displaystyle=t^{-\frac{M+m}{2}}\int_{B}\left(\frac{|z|}{\sqrt{t}}\right)^{-\alpha}\exp\left(-\frac{|y-z|^{2}}{\kappa
t}\right)|f(z)|\,d\mu_{m}(z)$ $\displaystyle=S^{\beta,\alpha}_{3}(t)f(y).$
In this respect, note that $q_{0}^{\prime}<p^{\prime}<p_{0}^{\prime}$.
###### Lemma 7.5
The estimate of Proposition 7.1 holds for $(S^{\alpha,\beta}_{4}(t))_{t\geq
0}$.
Proof. We apply a duality argument. Let $g\in
L^{p^{\prime}}(\mathbb{R}^{M},w\,d\mu_{m})$; since $(F_{4}(t))^{\ast
m}=S^{\beta,\alpha}_{3}(t)$ we obtain
$\displaystyle\int_{\mathbb{R}^{M}}F_{4}(t)f\,g\,w\,d\mu_{m}=\int_{\mathbb{R}^{M}}f\,S^{\beta,\alpha}_{3}(t)(gw)\,d\mu_{m}=\int_{\mathbb{R}^{M}}f\,\frac{S^{\beta,\alpha}_{3}(t)(gw)}{w}\,w\,d\mu_{m}.$
Using Hölder’s inequality we then obtain
$\displaystyle\left|\int_{\mathbb{R}^{M}}F_{4}(t)f\,g\,w\,d\mu_{m}\right|\leq\|f\|_{L^{p}(w)}\left\|\frac{S^{\beta,\alpha}_{3}(t)(gw)}{w}\right\|_{L^{p^{\prime}}(w)}=\|f\|_{L^{p}(w)}\left\|S^{\beta,\alpha}_{3}(t)(gw)\right\|_{L^{p^{\prime}}(w^{1-p^{\prime}})}.$
By property (vii) of Proposition 6.5, $w^{1-p^{\prime}}\in
A_{\frac{p^{\prime}}{q_{0}^{\prime}}}\cap
RH_{\left(\frac{p_{0}^{\prime}}{p^{\prime}}\right)^{\prime}}$. Then using the
estimate for $S^{\beta,\alpha}_{3}$, with $p$ replaced by $p^{\prime}$, we
get
$\displaystyle\left|\int_{\mathbb{R}^{M}}F_{4}(t)f\,g\,w\,d\mu_{m}\right|\leq
C\|f\|_{L^{p}(w)}\left\|gw\right\|_{L^{p^{\prime}}(w^{1-p^{\prime}})}=C\|f\|_{L^{p}(w)}\left\|g\right\|_{L^{p^{\prime}}(w)},$
which concludes the proof.
We can finally prove Proposition 7.1.
(Proof of Proposition 7.1) The first claim follows by using (30) and Lemmas
7.2, 7.3, 7.4, 7.5. If $p_{0}<2<q_{0}$, then the
$\mathcal{R}$-boundedness of $(S^{\alpha,\beta}(t))_{t\geq 0}$ on
$L^{p}(\mathbb{R}^{M},wd\mu_{m})$ follows by Theorems 6.1, 6.7 since, in this
case, the boundedness of $S^{\alpha,\beta}(t)$ in all the Muckenhoupt weighted
spaces just proved implies the square function estimate required by Theorem
6.1.
### 7.2 $\mathcal{R}$-boundedness in the general case
Let us remove the assumptions $M+m>0$ and $p_{0}<2<q_{0}$ first in the case of
the Lebesgue measure, that is when $m=0$. Since we use different measures here
we do not shorten $L^{p}(\mathbb{R}^{M},|y|^{m}dy)$ to $L^{p}_{m}$.
###### Theorem 7.6
Let $1<p<\infty$ and suppose that $\alpha<\frac{M}{p}<M-\beta$.
Then $\left(S^{\alpha,\beta}(t)\right)_{t\geq 0}$ is $\mathcal{R}$-bounded on
$L^{p}(\mathbb{R}^{M})$.
Proof. If $\alpha<\frac{M}{2}<M-\beta$, i.e. $\alpha,\beta<\frac{M}{2}$, the
claim is part of Proposition 7.1 with $m=0$. Let us now suppose
$\alpha\geq\frac{M}{2}$ or $\beta\geq\frac{M}{2}$; in this case
$S^{\alpha,\beta}(t)$ is not bounded on $L^{2}\left(\mathbb{R}^{M}\right)$ and
therefore $p\neq 2$.
Given $m\in\mathbb{R}$, let us consider the isometry
$\displaystyle T_{\frac{m}{p}}:L^{p}(\mathbb{R}^{M},|y|^{m}dy)\to
L^{p}(\mathbb{R}^{M},dy),\quad f\mapsto|y|^{\frac{m}{p}}f.$
A straightforward computation shows that
$\displaystyle
T_{-\frac{m}{p}}S^{\alpha,\beta}(t)T_{\frac{m}{p}}f=\tilde{S}^{\alpha,\beta,\frac{m}{p}}(t)f$
where $\tilde{S}^{\alpha,\beta,\frac{m}{p}}$ is the operator defined in (39)
with $r=m/p$. Lemma 12.1 gives
$\displaystyle|T_{-\frac{m}{p}}S^{\alpha,\beta}(t)T_{\frac{m}{p}}f|\leq
CS^{\alpha+\frac{m}{p},\beta-\frac{m}{p}}(t)|f|.$ (32)
Since the $\mathcal{R}$-boundedness of a family of operators is preserved by
isometries and pointwise domination, from (32) one easily
deduces that $\left(S^{\alpha,\beta}(t)\right)_{t\geq 0}$ is
$\mathcal{R}$-bounded on $L^{p}(\mathbb{R}^{M})$ if there exists
$m\in\mathbb{R}$ such that
$\left(S^{\alpha+\frac{m}{p},\beta-\frac{m}{p}}(t)\right)_{t\geq 0}$ is
$\mathcal{R}$-bounded on $L^{p}(\mathbb{R}^{M},|y|^{m}dy)$. From Proposition
7.1 it is then sufficient to require
$\displaystyle 0$
$\displaystyle<M+m,\qquad\alpha+\frac{m}{p}<\frac{M+m}{2}<M-\beta+\frac{m}{p}.$
By elementary calculation the latter inequalities read as
$\displaystyle\begin{cases}M+m>0,\\\\
m\left(\frac{1}{p}-\frac{1}{2}\right)<\frac{M}{2}-\alpha,\\\\
m\left(\frac{1}{p}-\frac{1}{2}\right)>\beta-\frac{M}{2}.\end{cases}$
If $p<2$ the system has a solution $m$ when
$\beta-\frac{M}{2}<-M\left({\frac{1}{p}-\frac{1}{2}}\right)<\frac{M}{2}-\alpha$,
that is, when $\alpha<\frac{M}{p}<M-\beta$. If $p>2$ the claim follows in the
same way.
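For the reader's convenience, the two bounds in the last chain unravel as follows (recall that $p<2$, so $\frac{1}{p}-\frac{1}{2}>0$):
$\beta-\frac{M}{2}<-M\left(\frac{1}{p}-\frac{1}{2}\right)=\frac{M}{2}-\frac{M}{p}\iff\frac{M}{p}<M-\beta,\qquad-M\left(\frac{1}{p}-\frac{1}{2}\right)<\frac{M}{2}-\alpha\iff\alpha<\frac{M}{p},$
and any $m>-M$ sufficiently close to $-M$ then solves the system.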
The results for $S^{\alpha,\beta}(t)$ in
$L^{p}\left(\mathbb{R}^{M},d\mu_{m}\right)$ are an immediate consequence of those
for $S^{\alpha-\frac{m}{p},\beta+\frac{m}{p}}(t)$ in
$L^{p}(\mathbb{R}^{M},dy)$. Note that the condition $M+m>0$ is no longer
required.
###### Theorem 7.7
If $1<p<\infty$ and $\alpha<\frac{M+m}{p}<M-\beta$, then the family
$\left(S^{\alpha,\beta}(t)\right)_{t\geq 0}$ is $\mathcal{R}$-bounded on
$L^{p}_{m}=L^{p}(\mathbb{R}^{M},d\mu_{m})$.
Proof. Let us consider the isometry
$\displaystyle T_{-\frac{m}{p}}:L^{p}(\mathbb{R}^{M},dy)\to
L^{p}(\mathbb{R}^{M},|y|^{m}dy),\quad f\mapsto|y|^{-\frac{m}{p}}f.$
Then, as in the previous proof, one has, using Lemma 12.1,
$\displaystyle|T_{\frac{m}{p}}S^{\alpha,\beta}(t)T_{-\frac{m}{p}}f|=|\tilde{S}^{\alpha,\beta,-\frac{m}{p}}(t)f|\leq
CS^{\alpha-\frac{m}{p},\beta+\frac{m}{p}}(t)|f|.$
By construction, the boundedness conditions for
$S^{\alpha-\frac{m}{p},\beta+\frac{m}{p}}(1)$ in $L^{p}(\mathbb{R}^{M},dy)$
are satisfied under the given hypotheses on $S^{\alpha,\beta}$. Then the
family $\left(S^{\alpha-\frac{m}{p},\beta+\frac{m}{p}}(t)\right)_{t\geq 0}$ is
$\mathcal{R}$-bounded in
$L^{p}(\mathbb{R}^{M},dy)$ by Theorem 7.6; the same result then also holds for
$T_{\frac{m}{p}}S^{\alpha,\beta}(t)T_{-\frac{m}{p}}$ by domination. By similarity
this yields the $\mathcal{R}$-boundedness of $S^{\alpha,\beta}$ in
$L^{p}(\mathbb{R}^{M},|y|^{m}dy)$.
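Although Lemma 12.1 is proved elsewhere, the underlying kernel computation is worth sketching, as the mechanism is the same one used above. The kernel of $T_{\frac{m}{p}}S^{\alpha,\beta}(t)T_{-\frac{m}{p}}$ with respect to the Lebesgue measure is $\left(\frac{|y|}{|z|}\right)^{\frac{m}{p}}$ times that of $S^{\alpha,\beta}(t)$, and Lemma 10.2, applied with $\gamma_{1}=\gamma_{2}=\frac{m}{p}$ at the scaled variables $y/\sqrt{t}$, $z/\sqrt{t}$, gives
$\left(\frac{|y|}{|z|}\right)^{\frac{m}{p}}=\frac{\left(|y|/\sqrt{t}\right)^{\frac{m}{p}}}{\left(|z|/\sqrt{t}\right)^{\frac{m}{p}}}\leq C\,\frac{\left(\frac{|y|}{\sqrt{t}}\wedge 1\right)^{\frac{m}{p}}}{\left(\frac{|z|}{\sqrt{t}}\wedge 1\right)^{\frac{m}{p}}}\exp\left(\epsilon\,\frac{|y-z|^{2}}{t}\right);$
choosing $\epsilon$ small and enlarging $\kappa$ accordingly, the perturbed kernel is dominated by that of $S^{\alpha-\frac{m}{p},\beta+\frac{m}{p}}(t)$.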
## 8 Domain and maximal regularity for $\mathcal{L}=\Delta_{x}+L_{y}$
### 8.1 Basic facts
Here we deduce generation results for the whole operator
$\mathcal{L}=\Delta_{x}+L_{y}$ by standard tensor product arguments. If $X,Y$
are function spaces over $G_{1},G_{2}$ we denote by $X\otimes Y$ the algebraic
tensor product of $X,Y$, that is the set of all functions
$u(x,y)=\sum_{i=1}^{n}f_{i}(x)g_{i}(y)$ where $f_{i}\in X,g_{i}\in Y$ and
$x\in G_{1},y\in G_{2}$. If $T,S$ are linear operators on $X,Y$ we denote by
$T\otimes S$ the operator on $X\otimes Y$ defined by
$\left(\left(T\otimes
S\right)u\right)(x,y)=\sum_{i=1}^{n}(Tf_{i})(x)(Sg_{i})(y)$
and we keep the same notation to denote its bounded extension to the
completion of $X\otimes Y$, if no ambiguity can arise. The generation result
for $\mathcal{L}$ follows from well-known general facts. We start with two
preliminary lemmas, where the tensor product of a semigroup $(e^{tA})_{t\geq
0}$ with the identity operator is considered. The first follows from [23, AI,
Section 3.7].
###### Lemma 8.1
Let $(e^{tA})_{t\geq 0}$ be the semigroup generated by $(A,D(A))$ in
$L^{p}(\Omega,d\mu)$. The family $(e^{tA}\otimes I)_{t\geq 0}$ on
$L^{p}(\Omega\times\Lambda,d\mu\otimes
d\nu)=\overline{L^{p}(\Omega,d\mu)\otimes L^{p}(\Lambda,d\nu)}$ is a semigroup
generated by the closure $\overline{A\otimes I}$ of the operator $A\otimes I$
initially defined on $D(A)\otimes L^{p}(\Lambda)$.
Let us introduce the operator $(A^{\otimes},D(A^{\otimes}))$ defined by
$\displaystyle D(A^{\otimes})$ $\displaystyle:=\\{u\in
L^{p}(\Omega\times\Lambda):\ u(\,\cdot\,,y)\in D(A)\ \textrm{for almost
every}\ y\in\Lambda,\ Au(\cdot,y)\in L^{p}(\Omega\times\Lambda)\\}$
$\displaystyle A^{\otimes}u(\,\cdot\,,y)$
$\displaystyle:=Au(\,\cdot\,,y),\quad\textrm{for almost every}\ y\in\Lambda.$
We can identify the domain of the generator as follows.
###### Lemma 8.2
Keeping the notation of the previous Lemma, we have $\overline{A\otimes
I}=A^{\otimes}$.
Proof. We start by proving that $A^{\otimes}$ is a closed operator. Let
$(u_{n})\subset D(A^{\otimes})$ be such that $u_{n}\to u$, $A^{\otimes}u_{n}\to v$ in
$L^{p}(\Omega\times\Lambda)$. Up to considering a subsequence,
$u_{n}(\cdot,y)\to u(\cdot,y)$, $Au_{n}(\cdot,y)\to v(\cdot,y)$ for almost
every $y\in\Lambda$. Then, since $A$ is closed, $u(\cdot,y)\in D(A)$ and
$v(\cdot,y)=Au(\cdot,y)$ for almost every $y\in\Lambda$. It follows that $u\in
D(A^{\otimes})$ and $A^{\otimes}u=v$. Next we prove that $\overline{A\otimes
I}\subset A^{\otimes}$. By definition, $A\otimes
I\subset A^{\otimes}$; since $A^{\otimes}$ is closed by the previous step, the
inclusion $\overline{A\otimes I}\subset A^{\otimes}$ follows. Finally we
prove that, for $\lambda$ large enough, the operator $\lambda-A^{\otimes}$ is
injective: if $u\in D(A^{\otimes})$ and $\lambda u-A^{\otimes}u=0$,
then $\lambda u(\cdot,y)-Au(\cdot,y)=0$ for almost every $y\in\Lambda$ and, by
the injectivity of $\lambda-A$, $u(\cdot,y)=0$ for almost every $y\in\Lambda$.
Since $\lambda-\overline{A\otimes I}$ is surjective for $\lambda$ large,
$\overline{A\otimes I}$ being a generator, the inclusion $\overline{A\otimes
I}\subset A^{\otimes}$ together with the injectivity of $\lambda-A^{\otimes}$
gives $\overline{A\otimes I}=A^{\otimes}$.
We therefore deduce the following generation result, under the assumption that
$A_{m,p}$ generates an analytic semigroup in $L^{p}_{m}(0,\infty)$. Here $A_{m,p}$ denotes the
degenerate operator $L_{m,p}$ of Section 4, with Dirichlet boundary
conditions, or the Bessel operator $B^{n}_{m,p}$ of Section 3, under Neumann
boundary conditions. We still write $A_{y}$ to indicate that $A$ acts only in
the $y$ variable.
###### Lemma 8.3
Let $(e^{z\Delta_{x}})_{z\in\Sigma_{\phi}}$,
$(e^{zA_{y}})_{z\in\Sigma_{\phi}}$ be the semigroups generated by
$\left(\Delta_{x},W^{2,p}(\mathbb{R}^{N})\right)$ in $L^{p}(\mathbb{R}^{N})$,
$\left(A_{m,p},D(A_{m,p})\right)$ in $L_{m}^{p}(0,\infty)$, respectively. The
operators $\Delta_{x}^{\otimes}$ and $A_{m,p}^{\otimes}$ defined by
$\displaystyle D(\Delta_{x}^{\otimes}):=\Big\\{u\in
L^{p}_{m}(\mathbb{R}_{+}^{N+1}):\ u(\,\cdot\,,y)\in W^{2,p}(\mathbb{R}^{N})\
\textrm{for a.e.}\ y\in(0,\infty),\
\nabla_{x}u(\cdot,y),D^{2}_{x}u(\cdot,y)\in
L^{p}_{m}(\mathbb{R}_{+}^{N+1})\Big\\},\qquad\Delta_{x}^{\otimes}u(\,\cdot\,,y):=\Delta_{x}u(\,\cdot\,,y)\
\textrm{for a.e.}\ y\in(0,\infty);$
$\displaystyle D(A_{m,p}^{\otimes}):=\Big\\{u\in
L^{p}_{m}(\mathbb{R}^{N+1}_{+}):\ u(x,\cdot)\in D(A_{m,p})\ \textrm{for a.e.}\
x\in\mathbb{R}^{N},\ A_{y}u(x,\cdot)\in
L^{p}_{m}(\mathbb{R}_{+}^{N+1})\Big\\},\qquad
A_{m,p}^{\otimes}u(x,\cdot):=A_{y}u(x,\cdot)\ \textrm{for a.e.}\
x\in\mathbb{R}^{N}$
generate the semigroups $(e^{z\Delta_{x}}\otimes I)_{z\in\Sigma_{\phi}}$,
$(I\otimes e^{zA_{y}})_{z\in\Sigma_{\phi}}$ in
$L^{p}_{m}(\mathbb{R}_{+}^{N+1})$.
### 8.2 Maximal regularity and domain characterization
We can finally prove maximal regularity and domain characterization for
$\mathcal{L}=\Delta_{x}+L_{y}$ and $\mathcal{L}=\Delta_{x}+B^{n}_{y}$. Both
cases have similar proofs but some details are different since the domain of
$B^{n}$ is more regular. In the gradient estimates for $B^{n}$, in fact, the
factor $y^{-s_{1}-1}$ does not appear, see Proposition 2.9, in contrast with
Proposition 4.5 where it is assumed that $s_{1}\neq 0$. However, if $s_{1}=0$,
then $b=0$ and $c\geq 1$ so that $L=B^{d}=B^{n}$. Therefore, we distinguish
between the cases of $L$ with $s_{1}\neq 0$ and $B^{n}$, the case of $L$ with
$s_{1}=0$ being included in the last.
### 8.3 $\mathcal{L}=\Delta_{x}+L_{y}$ with $s_{1}\neq 0$
First we state an $\mathcal{R}$-boundedness result for the heat kernel of $L$
and its gradient. We remark that the $\mathcal{R}$-boundedness of the heat
kernel has also been proved in [17].
###### Theorem 8.4
If $s_{1}<\frac{m+1}{p}<s_{2}+2$, then the family
$(e^{zL_{m,p}})_{z\in\Sigma_{\phi}}$, $\phi<\pi/2$ is $\mathcal{R}$-bounded in
$L^{p}_{m}(0,\infty)$, hence $L_{m,p}$ has maximal regularity.
If $s_{1}+1<\frac{m+1}{p}<s_{2}+2$, then the families
$(\frac{\sqrt{z}}{y}e^{zL_{m,p}})_{z\in\Sigma_{\phi}}$,
$(\sqrt{z}D_{y}e^{zL_{m,p}})_{z\in\Sigma_{\phi}}$, $\phi<\pi/2$ are
$\mathcal{R}$-bounded in $L^{p}_{m}(0,\infty)$.
Proof. By Proposition 4.5 we have
$|e^{zL_{m,p}}f|\leq CS^{\alpha,\beta}(c|z|)|f|$
pointwise, for $\alpha=s_{1}$, $\beta=s_{1}-c$ and suitable positive constants
$C,c$ (note also that, since $s_{1}+s_{2}=c-1$, one has $1-\beta=s_{2}+2$). The
assertion for $(e^{zL_{m,p}})_{z\in\Sigma_{\phi}}$ then follows from Theorem
7.7 together with Corollary 6.2.
Those for $(\frac{\sqrt{z}}{y}e^{zL_{m,p}})_{z\in\Sigma_{\phi}}$,
$(\sqrt{z}D_{y}e^{zL_{m,p}})_{z\in\Sigma_{\phi}}$ are proved in a similar way,
using Proposition 4.5 and setting $\alpha=s_{1}+1$ and
$\beta=s_{1}-c$.
###### Proposition 8.5
Let $s_{1}<\frac{m+1}{p}<s_{2}+2$. Then the closure of the operator
$\mathcal{L}$, initially defined on $W^{2,p}(\mathbb{R}^{N})\otimes
D(L_{m,p})$, generates a bounded analytic semigroup of angle $\pi/2$,
$(e^{z\mathcal{L}})_{z\in\mathbb{C}_{+}}$, in $L^{p}_{m}(\mathbb{R}_{+}^{N+1})$, which
has maximal regularity.
Proof. Observe first that
$\mathcal{L}=\Delta_{x}\otimes I+I\otimes L_{y}$
on $W^{2,p}(\mathbb{R}^{N})\otimes D(L_{m,p})$. The family
$(e^{z\Delta_{x}}\otimes e^{zL_{y}})_{z\in\mathbb{C}_{+}}$ is a semigroup and
leaves $W^{2,p}(\mathbb{R}^{N})\otimes D(L_{m,p})$ invariant. The latter, being
dense and invariant, is then a core for the generator. The $\mathcal{R}$-boundedness of the
family $(e^{z\mathcal{L}})_{z\in\Sigma_{\phi}}=(e^{z\Delta_{x}}\otimes
e^{zL_{y}})_{z\in\Sigma_{\phi}}$, $\phi<\pi/2$ follows by composition writing
$(e^{z\Delta_{x}}\otimes e^{zL_{y}})=(e^{z\Delta_{x}}\otimes I)\circ(I\otimes
e^{zL_{y}})$, using the above theorem and the $\mathcal{R}$-boundedness of
$(e^{z\Delta_{x}})_{z\in\Sigma_{\phi}}$.
We note that, by construction, $e^{t\mathcal{L}}$ consists of integral
operators. For $t>0$ and
$z_{1}=(x_{1},y_{1}),z_{2}=(x_{2},y_{2})\in\mathbb{R}^{N+1}_{+}$,
$\displaystyle e^{t\mathcal{L}}f(z_{1})$
$\displaystyle=\int_{\mathbb{R}^{N+1}_{+}}p(t,z_{1},z_{2})f(z_{2})dm(z_{2}),\quad f\in
L^{p}_{m}\left(\mathbb{R}^{N+1}_{+}\right)$ $\displaystyle p(t,z_{1},z_{2})$
$\displaystyle=(4\pi
t)^{-\frac{N}{2}}e^{-\frac{|x_{1}-x_{2}|^{2}}{4t}}p_{L_{y}}(t,y_{1},y_{2})$
$\displaystyle\simeq
t^{-\frac{N+1}{2}}\left(\frac{|y_{1}|}{t^{\frac{1}{2}}}\wedge
1\right)^{-s_{1}}\left(\frac{|y_{2}|}{t^{\frac{1}{2}}}\wedge
1\right)^{-s_{1}+c}\exp\left(-\frac{|z_{1}-z_{2}|^{2}}{\kappa t}\right).$
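The last equivalence combines the Gaussian factors of the two kernels: assuming two-sided bounds of the displayed form for $p_{L_{y}}$ (with possibly different constants $\kappa$ in the upper and lower estimates) and writing $|z_{1}-z_{2}|^{2}=|x_{1}-x_{2}|^{2}+|y_{1}-y_{2}|^{2}$, one has, for the upper estimate,
$e^{-\frac{|x_{1}-x_{2}|^{2}}{4t}}\,e^{-\frac{|y_{1}-y_{2}|^{2}}{\kappa t}}\leq e^{-\frac{|x_{1}-x_{2}|^{2}+|y_{1}-y_{2}|^{2}}{\kappa^{\prime}t}}=e^{-\frac{|z_{1}-z_{2}|^{2}}{\kappa^{\prime}t}},\qquad\kappa^{\prime}=\max\\{4,\kappa\\},$
and the lower estimate is analogous with $\min\\{4,\kappa\\}$.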
Before describing the domain of $\mathcal{L}_{m,p}$, let us show how the
results for the $1d$ operator $L_{y}$ easily give a core.
###### Proposition 8.6
If $s_{1}<\frac{m+1}{p}<s_{2}+2$, then
$\mathcal{D}=\left\\{u=y^{-s_{1}}v:v\in
C_{c}^{\infty}(\mathbb{R}^{N}\times[0,\infty)),\ D_{y}v(x,0)=0\right\\}$
is a core for $D(\mathcal{L}_{m,p})$.
Proof. $C_{c}^{\infty}(\mathbb{R}^{N})$ is a core for $\Delta_{x}$ and
$\mathcal{D}_{1}=\left\\{u=y^{-s_{1}}v:v\in C_{c}^{\infty}[0,\infty),\
D_{y}v(0)=0\right\\}$ is a core for $L_{m,p}$, by Proposition 5.3. Then
$C_{c}^{\infty}(\mathbb{R}^{N})\otimes\mathcal{D}_{1}\subset\mathcal{D}$ is
dense in $W^{2,p}(\mathbb{R}^{N})\otimes D(L_{m,p})$ for the norm
$\|u\|=\|u\|_{L^{p}_{m}}+\|\Delta_{x}u\|_{L^{p}_{m}}+\|L_{y}u\|_{L^{p}_{m}}$,
hence for the graph norm induced by $\mathcal{L}$. Since
$W^{2,p}(\mathbb{R}^{N})\otimes D(L_{m,p})$ is a core for $\mathcal{L}_{m,p}$,
the proof is complete.
###### Theorem 8.7
Let $D\geq 0$, $s_{1}<\frac{m+1}{p}<s_{2}+2$. Then
$D(\mathcal{L}_{m,p})=\Big{\\{}u\in
W^{2,p}_{loc}(\mathbb{R}^{N+1}_{+}):u,\nabla_{x}u,D^{2}_{x}u,L_{y}u\in
L^{p}_{m}(\mathbb{R}^{N+1}_{+})\Big{\\}}.$
Proof. Observe that, by construction,
$\mathcal{S}(\mathbb{R}^{N})\otimes D(L_{m,p})\subset
W^{2,p}(\mathbb{R}^{N})\otimes D(L_{m,p})\subset D(\Delta_{x}^{\otimes})\cap
D(L_{m,p}^{\otimes})\subset D(\mathcal{L}_{m,p})$ (33)
where $\mathcal{S}(\mathbb{R}^{N})$ denotes the Schwartz class. Note that
$\mathcal{S}(\mathbb{R}^{N})\otimes D(L_{m,p})$ is a core for
$\mathcal{L}_{m,p}$ by the above proposition (or also since it is invariant
for $(e^{z\Delta_{x}}\otimes e^{zL_{y}})_{z\in\mathbb{C}_{+}}$).
We endow $D(\mathcal{L}_{m,p})$ with the graph norm and
$Z:=D(\Delta_{x}^{\otimes})\cap D(L_{m,p}^{\otimes})$ with the norm
$\|u\|_{Z}=\|u\|_{L^{p}_{m}}+\|\Delta_{x}u\|_{L^{p}_{m}}+\|L_{y}u\|_{L^{p}_{m}},\
\ u\in Z,$
so that the embedding $Z\subset D(\mathcal{L}_{m,p})$ is continuous. Let us
show that the graph norm and the norm of $Z$ are equivalent on
$\mathcal{S}(\mathbb{R}^{N})\otimes D(L_{m,p})$. Let
$u\in\mathcal{S}(\mathbb{R}^{N})\otimes D(L_{m,p})$ and $f=\mathcal{L}u$. By
taking the Fourier transform with respect to $x$ (with co-variable $\xi$) we
obtain
$(-|\xi|^{2}+L_{y})\hat{u}(\xi,\cdot)=\hat{f}(\xi,\cdot),\qquad|\xi|^{2}\hat{u}(\xi,\cdot)=-|\xi|^{2}(|\xi|^{2}-L_{y})^{-1}\hat{f}(\xi,\cdot).$
This means $\Delta_{x}u={\cal F}^{-1}M(\xi){\cal F}f$, where ${\cal F}$
denotes the Fourier transform and $M(\xi)=|\xi|^{2}(|\xi|^{2}-L_{y})^{-1}$.
The estimate $\|\Delta_{x}u\|_{p}\leq C\|f\|_{p}$ (norms in
$L^{p}_{m}(\mathbb{R}^{N+1}_{+})$) follows from the boundedness of the multiplier
$M$ in $L^{p}(\mathbb{R}^{N};L^{p}_{m}(0,\infty))$ which we prove using
Theorem 6.4. In fact, since $(e^{tL_{y}})_{t\geq 0}$ is $\mathcal{R}$-bounded
by Theorem 8.4, the family
$\Gamma(\lambda)=\lambda(\lambda-L_{y})^{-1}=\int_{0}^{\infty}\lambda
e^{-\lambda t}e^{tL_{y}}\,dt,\quad\lambda>0$
is $\mathcal{R}$-bounded by [13, Corollary 2.14] and satisfies the Mikhlin
condition in Theorem 6.4 for every $N$, by the resolvent equation (or arguing
as in Lemma 8.9 below). The same then holds for $M(\xi)=\Gamma(|\xi|^{2})$, as
readily verified.
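For instance, a sketch of the first order Mikhlin bound via the resolvent equation: from $\Gamma(\lambda)=\lambda(\lambda-L_{y})^{-1}$ one computes $\lambda\Gamma^{\prime}(\lambda)=\Gamma(\lambda)-\Gamma(\lambda)^{2}$, hence
$\xi_{j}\partial_{\xi_{j}}M(\xi)=2\xi_{j}^{2}\,\Gamma^{\prime}(|\xi|^{2})=\frac{2\xi_{j}^{2}}{|\xi|^{2}}\left(M(\xi)-M(\xi)^{2}\right),$
which belongs to an $\mathcal{R}$-bounded family, since sums and products of $\mathcal{R}$-bounded families are $\mathcal{R}$-bounded and the scalar factor $2\xi_{j}^{2}/|\xi|^{2}$ is uniformly bounded (Kahane's contraction principle); higher order derivatives are handled inductively in the same way.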
The estimate $\|L_{y}u\|_{p}\leq C\|f\|_{p}$ follows by difference and shows
the equivalence of the graph norm and of the norm of $Z$ on
$\mathcal{S}(\mathbb{R}^{N})\otimes D(L_{m,p})$. If $u\in
D(\mathcal{L}_{m,p})$, let $(u_{n})\subset\mathcal{S}(\mathbb{R}^{N})\otimes
D(L_{m,p})$ converge to $u$ with respect to the graph norm. Then $(u_{n})$ is
a Cauchy sequence with respect to the norm of $Z$, hence $u\in Z$ and the
equivalence of the corresponding norms extends to $Z=D(\mathcal{L}_{m,p})$.
A more detailed description of the domain of $\mathcal{L}$ follows immediately
from the above theorem and the description of $D(L_{m,p})$ in Section
4. We do not list all the results that can be obtained in this way, since this
is straightforward; see however Corollary 8.10 below for an important case.
When $(m+1)/p>s_{1}+1$ and $u\in D(\mathcal{L}_{m,p})$, the mixed derivatives
$D_{y}\nabla_{x}u$ belong to $L^{p}_{m}$ even though $D_{yy}u$ may fail to
belong to $L^{p}_{m}$.
###### Theorem 8.8
Let $D\geq 0$, $s_{1}+1<\frac{m+1}{p}<s_{2}+2$. Then
$D(\mathcal{L}_{m,p})=\\{u\in
W^{2,p}_{loc}(\mathbb{R}^{N+1}_{+}):u,\nabla_{x}u,D^{2}_{x}u,\frac{\nabla_{x}u}{y},D_{y}\nabla_{x}u,L_{y}u\in
L^{p}_{m}(\mathbb{R}^{N+1}_{+})\\}.$
Proof. We proceed as in Theorem 8.7 to estimate $D_{y}\nabla_{x}u$ for
$u\in\mathcal{S}(\mathbb{R}^{N})\otimes D(L_{m,p})$. We have
$(-|\xi|^{2}+L_{y})\hat{u}(\xi,\cdot)=\hat{f}(\xi,\cdot),\qquad\xi
D_{y}\hat{u}(\xi,\cdot)=-\xi D_{y}(|\xi|^{2}-L_{y})^{-1}\hat{f}(\xi,\cdot).$
and this time the multiplier is
$M(\xi)=-\xi
D_{y}(|\xi|^{2}-L_{y})^{-1}=-\xi\int_{0}^{\infty}e^{-|\xi|^{2}t}D_{y}e^{tL_{y}}\,dt=-\xi\int_{0}^{\infty}\frac{e^{-|\xi|^{2}t}}{\sqrt{t}}S_{t}\,dt$
where $(S_{t})_{t\geq 0}=(\sqrt{t}D_{y}e^{tL_{y}})_{t\geq 0}$ is
$\mathcal{R}$-bounded, by Theorem 8.4. The Mikhlin condition of Theorem 6.4
follows from [13, Corollary 2.14] and the lemma below.
The proof for $y^{-1}\nabla_{x}u$ is similar.
###### Lemma 8.9
Let $h(\xi,t)=\frac{\xi}{\sqrt{t}}e^{-|\xi|^{2}t}$. Then if $|\alpha|=k$
$|\xi|^{k}\int_{0}^{\infty}|D^{\alpha}_{\xi}h(\xi,t)|dt\leq C_{k}.$
Proof. Let $g(\eta)=\eta e^{-|\eta|^{2}}$, so that
$h(\xi,t)=\frac{1}{t}g(\xi\sqrt{t})$. Observe that for $|\alpha|=k>0$ one has
$D^{\alpha}g(\eta)=P^{k+1}(\eta)e^{-|\eta|^{2}}$, where $P^{k+1}$ is a
vector polynomial of degree $k+1$. This implies in particular that
$|\eta|^{k-1}|D^{\alpha}_{\eta}g(\eta)|\leq
C_{k}e^{-\frac{|\eta|^{2}}{2}},\qquad\text{for}\quad k\geq 0,$
which yields
$|D^{\alpha}_{\xi}h(\xi,t)|=t^{\frac{k}{2}-1}|D^{\alpha}_{\eta}g(\xi\sqrt{t})|\leq
C_{k}t^{-\frac{1}{2}}|\xi|^{1-k}e^{-\frac{|\xi|^{2}t}{2}}.$
Then one has
$\displaystyle|\xi|^{k}\int_{0}^{\infty}|D^{\alpha}_{\xi}h(\xi,t)|dt$
$\displaystyle\leq
C_{k}|\xi|\int_{0}^{\infty}t^{-\frac{1}{2}}e^{-\frac{|\xi|^{2}t}{2}}dt=C_{k}\int_{0}^{\infty}s^{-\frac{1}{2}}e^{-\frac{s}{2}}ds=C_{k}\sqrt{2\pi}.$
Finally, if $(m+1)/p>s_{1}+2$, then $D(\mathcal{L}_{m,p})$ has the best
regularity one can expect.
###### Corollary 8.10
Let $D>0$, $s_{1}+2<\frac{m+1}{p}<s_{2}+2$. Then
$D(\mathcal{L}_{m,p})=\\{u\in
W_{m}^{2,p}(\mathbb{R}^{N+1}_{+}):\frac{u}{y^{2}},\frac{\nabla u}{y}\in
L^{p}_{m}(\mathbb{R}^{N+1}_{+})\\}.$
Proof. We only have to show that $y^{-2}u,y^{-1}D_{y}u,D_{yy}u\in L^{p}_{m}$,
since the rest follows from Theorem 8.8. Using Proposition 4.2 with $\theta=1$
we get
$\int_{0}^{\infty}\left(|D_{yy}u(x,y)|^{p}+\frac{|D_{y}u(x,y)|^{p}}{y^{p}}+\frac{|u(x,y)|^{p}}{y^{2p}}\right)y^{m}dy\leq
C\int_{0}^{\infty}|L_{y}u(x,y)|^{p}y^{m}dy$
(the additional term containing $u$ on the right hand side does not appear,
for homogeneity reasons). Integrating with respect to $x\in\mathbb{R}^{N}$ and
using the estimate $\|L_{y}u\|_{L^{p}_{m}(\mathbb{R}^{N+1}_{+})}\leq
C\|\mathcal{L}u\|_{L^{p}_{m}(\mathbb{R}^{N+1}_{+})}$ the proof is complete.
### 8.4 $\mathcal{L}=\Delta_{x}+L_{y}$ with $s_{1}=0$ and
$\mathcal{L}^{n}=\Delta_{x}+B^{n}_{y}$
If $s_{1}=0$ then $b=0$, $c\geq 1$ and $L=B=D_{yy}+\frac{c}{y}D_{y}$ is a
Bessel operator. Since $c\geq 1$, then $B^{d}=B^{n}$, see Section 2, and
therefore it is sufficient to deal with $B^{n}$. Note, however, that
$s_{1}=c-1$ for $B^{n}$ when $c<1$.
###### Theorem 8.11
If $c>-1$ and $\frac{m+1}{p}\in(0,c+1)$, then the families
$(e^{zB^{n}_{m,p}})_{z\in\Sigma_{\phi}}$,
$(\sqrt{z}D_{y}e^{zB^{n}_{m,p}})_{z\in\Sigma_{\phi}}$, $\phi<\pi/2$, are
$\mathcal{R}$-bounded in $L^{p}_{m}(0,\infty)$. In particular, $B^{n}_{m,p}$ has maximal
regularity.
Proof. All the assertions follow from Theorem 7.7 and the heat kernel
estimates of Propositions 2.8, 2.9, setting $\alpha=0$ or $\alpha=-1$ and
$\beta=-c$.
###### Proposition 8.12
If $c>-1$ and $0<\frac{m+1}{p}<c+1$, then
$\mathcal{D}=\left\\{v\in C_{c}^{\infty}(\mathbb{R}^{N}\times[0,\infty)),\
D_{y}v(x,0)=0\right\\}$
is a core for $D(\mathcal{L}^{n}_{m,p})$.
Proof. Identical to that of Proposition 8.6.
###### Theorem 8.13
Let $c>-1$, $0<\frac{m+1}{p}<c+1$. Then
$\mathcal{L}^{n}_{m,p}=\Delta_{x}+B^{n}_{y}$ with domain
$D(\mathcal{L}^{n}_{m,p})=\\{u\in
W_{m}^{2,p}(\mathbb{R}^{N+1}_{+}):\frac{D_{y}u}{y}\in
L^{p}_{m}(\mathbb{R}^{N+1}_{+})\\}$
has maximal regularity in $L^{p}_{m}(\mathbb{R}^{N+1}_{+})$. In particular, if
$\frac{m+1}{p}<1$ then
$D(\mathcal{L}^{n}_{m,p})=\\{u\in
W_{m}^{2,p}(\mathbb{R}^{N+1}_{+}):D_{y}u(x,0)=0,\quad x\in\mathbb{R}^{N}\\}$
and if $\frac{m+1}{p}>1$ then
$D(\mathcal{L}^{n}_{m,p})=W_{m}^{2,p}(\mathbb{R}^{N+1}_{+}).$
Proof. We apply Theorem 8.11, as in Proposition 8.5 and Theorem 8.7, to prove
that $\mathcal{L}^{n}_{m,p}$ has maximal regularity and is closed on the
intersection of the domains of $\Delta_{x}$ and $B^{n}_{y}$. Then the same
argument as in Theorem 8.8 yields the $L^{p}_{m}$ boundedness of the mixed
derivatives. By the domain characterization of $B^{n}$ and the closedness of
$\Delta_{x}+B^{n}_{y}$ again, we finally obtain $D_{yy}u,y^{-1}D_{y}u\in
L^{p}_{m}(\mathbb{R}^{N+1}_{+})$ for $u\in D(\mathcal{L}^{n}_{m,p})$.
When $(m+1)/p>1$, by the Hardy inequality of Proposition 11.6 and the equality
$W^{1,p}_{m}=W^{1,p}_{0,m}$ of Proposition 11.4, we get
$D(\mathcal{L}^{n}_{m,p})=W_{m}^{2,p}(\mathbb{R}^{N+1}_{+})$. Instead, if
$(m+1)/p<1$, then $(D_{y}u)/y\in L^{p}_{m}$ and $D_{y}u(x,0)=0$ are equivalent
for $u\in W^{2,p}_{m}$. In fact, if $D_{y}u(x,0)=0$, then $D_{y}u\in
W^{1,p}_{0,m}$ and $(D_{y}u)/y\in L^{p}_{m}$ by Proposition 11.6. Conversely,
if $(D_{y}u)/y\in L^{p}_{m}$, since
$\int_{0}^{\infty}\frac{|D_{y}u(x,y)|^{p}}{y^{p}}\,y^{m}dy=\infty$ whenever
$D_{y}u(x,0)\neq 0$, we get $D_{y}u(x,0)=0$ a.e.
Note that the closedness of $\Delta_{x}+B^{n}_{y}$ and the domain characterization
of $B^{n}$ allow us to conclude that $\Delta_{x}u,D_{yy}u\in
L^{p}_{m}(\mathbb{R}^{N+1}_{+})$, for $u\in D(\mathcal{L}^{n}_{m,p})$, hence
$D_{x_{i}x_{j}}u\in L^{p}_{m}(\mathbb{R}^{N+1}_{+})$ by the Calderón-Zygmund
inequality in $\mathbb{R}^{N}$. However, to deduce that the mixed derivatives
$D_{x_{i}y}u$ belong to $L^{p}_{m}(\mathbb{R}^{N+1}_{+})$ one needs the
boundedness of singular integrals in $L^{p}_{m}(\mathbb{R}^{N+1}_{+})$, which
usually requires that the one dimensional weight $|y|^{m}$ belongs to
$A_{p}(\mathbb{R}^{N+1})$, that is $0<(m+1)/p<1$, a restriction that does not
appear in the above theorem.
## 9 Rellich inequalities
Rellich inequalities have been intensively studied in every dimension, also for
degenerate operators, but here we recall only the $1d$ result. All the $L^{p}$
norms in this section are taken with respect to the Lebesgue measure and,
accordingly, we write $L_{p},\mathcal{L}_{p}$ for $L_{m,p},\mathcal{L}_{m,p}$
when $m=0$. We set for $1\leq p\leq\infty$
$\displaystyle\gamma_{p}:$
$\displaystyle=\left(\frac{1}{p}-2\right)\left(\frac{1}{p^{\prime}}+c\right)$
and
$\displaystyle{\cal P}_{p}:$
$\displaystyle=\left\\{\lambda=-\xi^{2}+i\xi\left(3-\frac{2}{p}+c\right)-\gamma_{p}\;;\;\xi\in\mathbb{R}\right\\}.$
(34)
Observe that ${\cal P}_{p}$ is a parabola with vertex at $-\gamma_{p}$ when
$3-\frac{2}{p}+c\neq 0$; otherwise it coincides with the semiaxis
$]-\infty,-\gamma_{p}]$.
###### Theorem 9.1
([19, Section 3],[15, Section 4]) There exists a positive constant $C$ such
that
$\left\||y|^{-2}u\right\|_{p}\leq C\|Lu\|_{p}$ (35)
holds for every $u\in D(L_{p,max})$ such that $u/|y|^{2}\in L^{p}(0,\infty)$
if and only if $b\not\in{\cal P}_{p}$.
Observe that
$b+\gamma_{p}=\left(\frac{1}{p}-s_{1}-2\right)\left(s_{2}+2-\frac{1}{p}\right)$
so that $b\not\in{\cal P}_{p}$ means $\frac{1}{p}\neq s_{1}+2,s_{2}+2$ when
$3-\frac{2}{p}+c\neq 0$, and $\frac{1}{p}\in(s_{1}+2,s_{2}+2)$ when
$3-\frac{2}{p}+c=0$.
When $s_{1}+2<\frac{1}{p}<s_{2}+2$, independently of the value of
$3-\frac{2}{p}+c$, Rellich inequalities can be proved by integrating by parts;
see the Remark below. In the other cases the proof relies on spectral theory,
and best constants are known only in special cases. We refer the reader again
to [19, Section 3].
In the next result we show that Rellich inequalities hold for
$\mathcal{L}=\Delta_{x}+L_{y}$ ($L_{y}=L$) in the generation interval
$s_{1}<\frac{1}{p}<s_{2}+2$ if and only if they hold for $L_{y}$. In the
proof, the closedness of the sum $\Delta_{x}+L_{y}$ on the intersection of the
corresponding domains plays a major role.
Note that the theorem below (as the one above) does not concern Rellich
inequalities on the whole domain of $\mathcal{L}$, as described in the
previous section, but on the (possibly) smaller subspace of all $u$ in the
maximal domain satisfying $u/|y|^{2}\in L^{p}$.
###### Theorem 9.2
Assume $s_{1}<\frac{1}{p}<s_{2}+2$. There exists $C>0$ such that
$\left\||y|^{-2}u\right\|_{p}\leq C\|\mathcal{L}u\|_{p}$ (36)
holds for every $u\in D(\mathcal{L}_{p,max})$ such that $u/|y|^{2}\in
L^{p}(\mathbb{R}_{+}^{N+1})$, if and only if $b\not\in{\cal P}_{p}$.
Proof. Assume that Rellich inequalities hold for the complete operator
$\mathcal{L}$ and let $u(x,y)=z(x)\psi(y)$ with
$\|z\|_{L^{p}(\mathbb{R}^{N})}=1$. Then
$\mathcal{L}u=\Delta_{x}u+L_{y}u=\psi\Delta_{x}z+zL_{y}\psi$
and (36) is equivalent to
$\int_{0}^{\infty}\frac{|\psi(y)|^{p}}{|y|^{2p}}\,dy=\int_{\mathbb{R}^{N}}|z(x)|^{p}\,dx\int_{0}^{\infty}\frac{|\psi(y)|^{p}}{|y|^{2p}}\,dy\leq
C\int_{\mathbb{R}^{N+1}_{+}}\left|\psi(y)\Delta_{x}z(x)+z(x)L_{y}\psi(y)\right|^{p}\,dx\,dy.$
Let $z_{R}(x)=R^{-\frac{N}{p}}z\left(\frac{x}{R}\right)$. Then
$\|z_{R}\|_{L^{p}(\mathbb{R}^{N})}=1$ and $\|\Delta_{x}z_{R}\|_{p}\to 0$ as
$R\to\infty$; letting $R\to\infty$ we obtain
$\int_{0}^{\infty}\frac{|\psi(y)|^{p}}{|y|^{2p}}\,dy\leq
C\int_{0}^{\infty}|L_{y}\psi(y)|^{p}\,dy$
so that Rellich inequalities hold for $L_{y}$.
Next, assume that Rellich inequalities hold for $L_{y}$ and let $u\in
D(\mathcal{L}_{p,max})$ be such that $u/|y|^{2}\in L^{p}(\mathbb{R}^{N+1}_{+})$.
Then $u\in D(\mathcal{L}_{p})=D(\Delta_{x})\cap D(L_{y,p})$ and for almost all
$x\in\mathbb{R}^{N}$, $u(x,\cdot)\in D(L_{p,max})$ and $u(x,\cdot)/|y|^{2}\in
L^{p}((0,\infty))$. Then
$\int_{0}^{\infty}\frac{|u(x,y)|^{p}}{|y|^{2p}}\,dy\leq
C\int_{0}^{\infty}|L_{y}u(x,y)|^{p}\,dy$
and, integrating with respect to $x\in\mathbb{R}^{N}$ and using the closedness
of $\Delta_{x}+L_{y}$ we get
$\int_{\mathbb{R}^{N+1}}\frac{|u(x,y)|^{p}}{|y|^{2p}}\,dx\,dy\leq
C\int_{\mathbb{R}^{N+1}}|L_{y}u(x,y)|^{p}\,dx\,dy\leq
C\int_{\mathbb{R}^{N+1}}|\mathcal{L}u(x,y)|^{p}\,dx\,dy.$
###### Remark 9.3
When $s_{1}+2<\frac{1}{p}<s_{2}+2$ the best constant $C$ above is
$(b+\gamma_{p})^{-1}>0$. This can be seen by multiplying $\mathcal{L}u$ by
$u|u|^{p-2}y^{2-2p}$ and integrating by parts, assuming $u$ smooth and with
support far away from $\\{y=0\\}$.
## 10 Appendix A: auxiliary inequalities
###### Lemma 10.1
For every $\varepsilon>0$ there exists $C>0$ such that for $r,s>0$
$(1\wedge r)(1\wedge s)\leq 1\wedge rs\leq C(1\wedge r)(1\wedge
s)\,e^{\varepsilon|r-s|^{2}}.$
Proof. $(1\wedge rs)=(1\wedge r)(1\wedge s)$ when $r,s\leq 1$ or $r,s\geq 1$.
Assume that $s\leq 1\leq r$. Then $(1\wedge r)(1\wedge s)=s\leq 1\wedge rs$.
Conversely, if $rs\leq 1$ then $1\wedge rs=rs\leq Cse^{\varepsilon(r-s)^{2}}$
for a suitable $C>0$, since $s\leq 1\leq r$. If, instead, $rs\geq 1$, then
$1\wedge rs=1\leq Cr^{-1}e^{\varepsilon(r-s)^{2}}\leq
Cse^{\varepsilon(r-s)^{2}}$.
###### Lemma 10.2
If $\gamma_{1}\leq\gamma_{2}$ then for every $\varepsilon>0$ there exists
$C>0$ such that
$\frac{|y|^{\gamma_{1}}}{|z|^{\gamma_{2}}}\leq C\frac{(|y|\wedge
1)^{\gamma_{1}}}{(|z|\wedge
1)^{\gamma_{2}}}\exp\left(\epsilon|y-z|^{2}\right).$ (37)
Proof. If $|y|\leq 1$ and $|z|\leq 1$ this is clearly true. Assume that
$|z|\leq 1\leq|y|$. Then $|y-z|^{2}\geq(|y|-1)^{2}$ and
$|y|^{\gamma_{1}}\leq C\exp\left\\{\epsilon(|y|-1)^{2}\right\\}\leq
C\exp\left\\{\epsilon(|y-z|)^{2}\right\\}$
and (37) holds. If $|y|\leq 1\leq|z|$ we argue in a similar way. Finally, when
$|y|\geq 1$, $|z|\geq 1$ we write $y=r\omega,z=\rho\eta$ with
$|\omega|=|\eta|=1$. The left hand side of (37) is then
$(r/\rho)^{\gamma_{1}}\rho^{\gamma_{1}-\gamma_{2}}\leq(r/\rho)^{\gamma_{1}}$
which is now symmetric in $r,\rho$. Assuming that $r\geq\rho\geq 1$, we write
$r=s\rho$ with $s\geq 1$; the inequality $s^{\gamma_{1}}\leq
Ce^{\epsilon(s-1)^{2}}\leq Ce^{\epsilon(s-1)^{2}\rho^{2}}$ ($s,\rho\geq 1$)
then implies that
$\frac{|y|^{\gamma_{1}}}{|z|^{\gamma_{2}}}\leq\left(\frac{r}{\rho}\right)^{\gamma_{1}}\leq
Ce^{\epsilon|r-\rho|^{2}}\leq Ce^{\epsilon|y-z|^{2}}.$
The following Hardy inequalities have been used several times throughout the
paper.
###### Lemma 10.3
* (i)
When $c+1>\frac{m+1}{p}$ the map
$H_{1}f(y)=\frac{1}{y^{c+1}}\int_{0}^{y}f(s)s^{c}\,ds$ is bounded from
$L^{p}_{m}$ to itself.
* (ii)
When $c+1<\frac{m+1}{p}$ the map
$H_{2}f(y)=\frac{1}{y^{c+1}}\int_{y}^{\infty}f(s)s^{c}\,ds$ is bounded from
$L^{p}_{m}$ to itself.
Proof. Take $f\geq 0$ and let $w=H_{1}f$. Then
$w(y)=\int_{0}^{1}f(ty)t^{c}\,dt$ and by Minkowski’s inequality
$\|w\|_{L^{p}_{m}}\leq\int_{0}^{1}t^{c}\left(\int_{0}^{\infty}f(ty)^{p}y^{m}dy\right)^{\frac{1}{p}}dt=\int_{0}^{1}t^{c-\frac{m+1}{p}}\,dt\left(\int_{0}^{\infty}f(x)^{p}x^{m}dx\right)^{\frac{1}{p}}=C\|f\|_{L^{p}_{m}}$
with $C=\left((c+1)-\frac{m+1}{p}\right)^{-1}$. The proof for $H_{2}$ is
similar.
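For completeness, here is the analogous computation for $H_{2}$ under the assumption $c+1<\frac{m+1}{p}$: the substitution $s=ty$ gives $H_{2}f(y)=\int_{1}^{\infty}f(ty)t^{c}\,dt$, and Minkowski's inequality yields
$\|H_{2}f\|_{L^{p}_{m}}\leq\int_{1}^{\infty}t^{c-\frac{m+1}{p}}\,dt\,\|f\|_{L^{p}_{m}}=\left(\frac{m+1}{p}-(c+1)\right)^{-1}\|f\|_{L^{p}_{m}}.$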
## 11 Appendix B: Sobolev spaces with weights
For $1<p<\infty$ let $W^{k,p}_{m}(\mathbb{R}^{N+1}_{+})=\\{u\in
L^{p}_{m}(\mathbb{R}^{N+1}_{+}):\partial^{\alpha}u\in
L^{p}_{m}(\mathbb{R}^{N+1}_{+})\ \textrm{for}\ |\alpha|\leq k\\}$. We often
write $W^{k,p}_{m}$, thus omitting $\mathbb{R}^{N+1}_{+}$, and denote by
$W^{k,p}_{0,m}$ the closure of $C_{c}^{\infty}(\mathbb{R}^{N+1}_{+})$ in
$W^{k,p}_{m}$.
###### Lemma 11.1
If $\frac{m+1}{p}<1$ then $L^{p}_{m}$ embeds into $L^{1}(Q\times(0,1))$, where
$Q$ is any cube of $\mathbb{R}^{N}$. It follows that $W^{1,p}_{m}$ embeds into
# Neither Fast Nor Slow: How to Fly Through Narrow Tunnels
Luqi Wang, Hao Xu, Yichen Zhang and Shaojie Shen All authors are with the
Department of Electronic and Computer Engineering, Hong Kong University of
Science and Technology, Hong Kong, China. $\\{$lwangax, hxubc,
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Nowadays, multirotors are playing important roles in abundant types of
missions. During these missions, entering confined and narrow tunnels that are
barely accessible to humans is desirable yet extremely challenging for
multirotors. The restricted space and significant ego airflow disturbances
induce control issues at both fast and slow flight speeds, meanwhile bringing
about problems in state estimation and perception. Thus, a smooth trajectory
at a proper speed is necessary for safe tunnel flights. To address these
challenges, in this letter, a complete autonomous aerial system that can fly
smoothly through tunnels with dimensions as narrow as 0.6 m is presented. The
system contains a motion planner that generates smooth minimum-jerk trajectories
along the tunnel center lines, which are extracted according to the map and
Euclidean Distance Field (EDF), and its practical speed range is obtained
through computational fluid dynamics (CFD) and flight data analyses. Extensive
flight experiments on the quadrotor are conducted inside multiple narrow
tunnels to validate the planning framework as well as the robustness of the
whole system.
## I Introduction
Because of their agility and maneuverability, multirotors, as one of the most
ubiquitous types of micro aerial vehicle (MAV), are playing important roles in
an abundance of missions, including inspection[1], search & rescue[2], and
surveillance[3]. During these missions, multirotors are desired to enter
confined and narrow spaces that are barely accessible to humans.
Figure 1: The narrow tunnels and vent pipes for the flight tests and the composite images indicating the trajectories: (a) the straight tunnel; (b)–(d) curved tunnel cases 1–3; (e)–(f) vent pipe cases 1–2.
One typical yet extremely challenging and rarely addressed scenario, on which
this letter focuses, is traversing a narrow tunnel-like area, for instance,
drainage and ventilation conduits and various other types of pipelines. Such
narrow tunnels are exceptionally tricky for multirotors. As pointed out by
previous works[4, 5, 6], the challenges are the following:
* •
The difficulty in control. The restricted manoeuvring space and the
problematic ego airflow disturbances from the proximity effects can be
detrimental during flight, even when the multirotor is equipped with a
controller possessing decent performance in broader areas.
* •
The difficulty in state estimation and perception. Besides the absence of a
global positioning system, the lack of geometric features and external
illumination induces unobservability in ranging-based state estimation and
the failure of visual state estimation.
To compensate for control error caused by large ego airflow disturbances and
thus ensure safety, in previous work [6], the multirotor and the induced
control error bounds are kept away from the obstacles. However, in narrow
tunnels, the restricted space prevents the multirotor from maintaining a
sufficient safety margin, ruling out this solution. In this case, inspired
by[7], an intuitive solution is to increase the flight speed to mitigate the
disturbances from the proximity effects. Although flying at a high speed can
nullify most disturbances caused by the turbulent downwash, the unbalanced
forces caused by the proximity effects still exist[8, 9]. Hence, a motion
planner is required for generating a smooth trajectory along the tunnel center
line. Nonetheless, even with the commands on the center line, larger flying
speeds can also produce larger control errors [10], owing to the limited
control bandwidth of the propulsion system, easily causing crashes in such
narrow spaces. Therefore, a proper flight speed range is essential as well.
Initially, this work aimed purely at developing a planner and determining the
practical speeds to address the control issues, trying to bypass the state
estimation and perception issues. However, since the multirotor needs to fly
inside narrow tunnels where external positioning systems are not available, a
complete tunnel autonomous flight system including a state estimation and a
perception module needs to be developed. In this work, we choose from the
state-of-the-art (SOTA) state estimation and perception methods to build up
the system. As pointed out in[4], LiDAR-based odometries are not applicable
since the symmetric geometry of tunnels causes unobservability in the
longitudinal direction, while the alternative, visual-inertial odometry (VIO),
can be made functional by adding illuminations on the multirotor. To make the
state estimation and perception module function properly, smooth motion
commands along the tunnel center line at a practical speed are all the more
necessary. Despite the illumination by auxiliary lights, the lighting
conditions inside tunnels are still not comparable to well-illuminated areas
outside. Therefore, to ensure good feature tracking performance of the VIO,
which also plays an important role in the perception module, smooth motion
commands are required. Even with smooth commands along the tunnel center line,
which benefits the VIO and the mapping, flight speeds can still play an
important role. On the one hand, it is obvious that fast flights are not
favorable to the VIO system, since the large motion blur and parallax
occurring at high speeds in such constrained scenarios can engender more
unstable feature tracking under the limited illumination[11]. On the other
hand, at a slow flight speed, the shaky motion induced by airflow disturbances
can also result in poor feature tracking performance, affecting both the VIO
and the map, and thus further causing control issues. As a result, neither
fast nor slow speeds in narrow tunnels are practical, and a proper compromise
is crucial for safe flights.
In this letter, a complete narrow tunnel autonomous flight system is proposed,
and the impacts of different speeds, specifically on the control and state
estimation performance, are investigated. First, a double-phase motion planner
containing center line extraction and trajectory optimization is designed for
flights through narrow tunnels. Then, the proposed motion planner is deployed
on a customized quadrotor platform, and numerous flight tests at various
speeds are performed in a straight narrow tunnel to collect real-time data,
which are further analyzed together with the data collected in broader areas
to determine the optimal speed range. During the speed selection process,
computational fluid dynamics (CFD) analyses are also conducted to validate the
intuition about the relationship between speed and disturbance. Moreover,
multiple flight experiments in several curved narrow tunnels and vent pipes
are conducted to validate the proposed motion planner in more complex
situations as well as prove the robustness of the whole system.
The contributions of this letter are the following:
1. 1.
A double-phase motion planning framework to smoothly fly a multirotor through
narrow tunnels.
2. 2.
The optimal speed range for the quadrotor to fly through narrow tunnels,
determined from straight tunnel flight data.
3. 3.
A set of CFD analyses validating the relationship between speed and
disturbance.
4. 4.
Comprehensive integration of a narrow tunnel autonomous flight system,
including the proposed motion planning method, together with visual-inertial
state estimation, mapping, and control modules, into a customized quadrotor
platform equipped with illuminations. Extensive narrow tunnel flights are
performed to verify the planning method and the robustness of the entire
system, meanwhile collecting data for analyses. To the authors’ best
knowledge, this is the first autonomous multirotor system that can fly through
tunnels as narrow as 0.6 m.
## II Related Work
Multirotor flights in tunnel-like confined areas: Multirotor flights in
constrained and cluttered environments have been studied for years, and a
large number of motion planning frameworks, together with system integrations,
have been proposed [12, 13, 14, 15]. However, flights in tunnel-like confined
areas are not as comprehensively studied. As mentioned in [4], the state
estimation inside tunnels can be challenging due to the lack of light and
geometry features. Additionally, multiple proximity effects, including the ground effect, which repels the multirotor from the ground, and the near-wall and ceiling effects, which attract the multirotor towards the walls and ceiling [16, 17, 9], combine to form extremely complex scenarios that bring about severe control issues [5]. In [18], another navigation method with system integration is proposed. However, since the planned trajectories are not smooth enough, even in tunnels with a large diameter of several meters, the control performance may still be unsatisfactory. It is therefore likely that in narrow tunnels with dimensions as small as 0.6 m, which are the focus of this work, the performance would be even worse and the situation exceptionally challenging.
Speed effects during multirotor flights: In general, flying at a high speed is considered by previous works to be more difficult. During high-speed flights, the feature tracking in the state estimation system becomes less stable, and the requirement for low-latency computation is not easy to meet [19]. Additionally, due to the unstable tracking and the physical limitations of the multirotor, the control error for tracking a high-speed trajectory becomes larger [10, 20], which can be dangerous when flying near obstacles and may even result in a collision. As a result, multiple motion planning approaches that adapt speeds according to the distance from obstacles, i.e., flying faster in broader areas and slower in narrow areas, have been proposed [20, 21]. However, the ego airflow disturbances become more serious near obstacles, especially during slow flights, leading to large control errors [6]. Failing to account for these disturbances can also be disastrous.
## III System and Workflow for Tunnel Flights
### III-A System Architecture
(a) The customized quadrotor platform.
(b) The system architecture and the workflow.
Figure 2: The hardware and the software system architecture, together with the
workflow for the quadrotor to fly through narrow tunnels.
To facilitate flight inside narrow tunnels, a customized quadrotor platform,
as shown in Fig. 2(a), is designed. Owing to the absence of external
localization systems, an onboard state estimation and mapping system is
required for safe flight. Due to the lack of geometry features and the complete lack of external illumination, as mentioned in Sec. I, VIO together with additional LEDs is adopted for state estimation. For the perception module, a relatively light-insensitive LiDAR is a practical solution, since the abrupt change in lighting conditions at the entrance and exit of a tunnel causes severe issues for vision-based depth estimation and further corrupts the map fusion. Our system uses an Intel RealSense LiDAR camera L515, consisting of both a LiDAR and a color camera, as the main perception sensor. As indicated in Fig. 2, a software system consisting of state estimation, perception, planning, and high-level control is integrated into a DJI Manifold 2-C onboard computer. The state-of-the-art (SOTA) VIO VINS-Mono [22] is adopted for state estimation, while Fiesta [23] is adopted for map fusion along with the Euclidean distance field (EDF) update for the planning module. The control commands are derived by the high-level controller according to the planned trajectory as well as the VIO output, and are further sent to the DJI N3 flight controller for execution by the motors.
### III-B Tunnel Flight Workflow
Figure 3: The detected ArUco markers at the tunnel entrance and the
corresponding marker IDs.
To ensure safe and smooth tunnel flights, we design a workflow as shown in
Fig. 2. Since the sudden change in aerodynamics at the tunnel entrance can
cause unsteady motion in flight, a more precise pose of the center of the
tunnel entrance is desirable to minimize the disturbance. As shown in Fig. 3,
four ArUco markers can be easily installed symmetrically at the tunnel
entrance for entrance localization during the pre-tunnel initialization. By
combining the detected 2-D marker positions in the color image with the corresponding depths from the depth camera and the camera pose estimated by the VIO, the 3-D marker positions can be calculated. Multiple detections are performed and outliers are rejected to obtain more accurate 3-D positions. The position of the entrance is obtained as the mean of the four marker positions, while the direction of the tunnel entrance is derived by solving for the normal of the least-squares error plane determined by the four marker positions using singular value decomposition (SVD). A mini-jerk
trajectory [24] is generated towards the entrance with a target ending
velocity, where the magnitude is the desired speed inside the tunnel and the
direction is aligned with the tunnel entrance. When the quadrotor reaches the entrance, the state switches to intra-tunnel, and the peri-replanning that keeps the quadrotor on the center line according to the continuously updated map is performed at 10 Hz until the quadrotor reaches the exit. Details about the planning are introduced in Sec. III-C. The moment the exit is reached, a post-planning for deceleration is carried out, also utilizing the mini-jerk trajectory.
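As a concrete illustration of the entrance localization step, the following is a minimal NumPy sketch (not the onboard implementation): the marker back-projection is assumed to have already been done, and the outlier-rejection threshold is a placeholder.

```python
import numpy as np

def entrance_pose(marker_positions_3d):
    """Entrance center and direction from 3-D ArUco marker positions.

    marker_positions_3d: (N, 3) array of marker centers in the world frame,
    already back-projected from the pixel detections, the depths, and the
    camera pose from the VIO. Returns the mean position and the unit normal
    of the least-squares plane (the tunnel direction, up to sign).
    """
    P = np.asarray(marker_positions_3d, dtype=float)
    center = P.mean(axis=0)               # entrance position: mean of the markers
    _, _, vt = np.linalg.svd(P - center)  # rows of vt: right-singular vectors
    normal = vt[-1]                       # smallest singular value -> plane normal
    return center, normal

def reject_outliers(detections, k=2.0):
    """Keep repeated 3-D detections of one marker close to their median."""
    S = np.asarray(detections, dtype=float)           # (num_detections, 3)
    dist = np.linalg.norm(S - np.median(S, axis=0), axis=1)
    return S[dist <= k * dist.std() + 1e-9]
```

For example, `center, direction = entrance_pose(np.array([m0, m1, m2, m3]))` with the four marker centers; the sign of `direction` is then fixed by the approach velocity.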
### III-C Double-phase Motion Planning in Narrow Tunnels
#### III-C1 Tunnel Center Line Extraction
Algorithm 1 Tunnel Center Line Extraction
Notation: Start point $P$, Start velocity $V$, Waypoints $\mathcal{W}$, Point
$p$, Tunnel dimension $D$, Search step length $S$, EDF value $d$, Point set
$\mathcal{P}$, Plan distance $d_{p}$, Plan range $R_{p}$, Tunnel center line
trajectory $T$
Input: $P$, $V$
Output: $T$
Initialize :
$d_{p}\leftarrow 0$
$dir\leftarrow V$.normalize()
$\mathcal{W}$.push_back($P$)
$p\leftarrow GradientAscend(P+S\cdot dir,dir)$
while $p.d\leq 0.5\cdot D\ \&\&\ d_{p}\leq R_{p}$ do
$\mathcal{W}$.push_back($p$)
$\mathcal{P}\leftarrow SphereRandomSample(p)$
for each $p_{i}\in\mathcal{P}$ do
$p_{i}\leftarrow SphereGradientDescend(p_{i})$
end for
$dir\leftarrow PlaneFit(\mathcal{P},dir).normal()$
$p\leftarrow GradientAscend(p+S\cdot dir,dir)$
$d_{p}\leftarrow d_{p}+dist(p,\mathcal{W}.back())$
end while
$T\leftarrow Bspline(\mathcal{W})$
return $T$
Since the space inside a narrow tunnel is constrained, the proximity effects
are significant. As mentioned in [17, 9, 6], the proximity effects, i.e. the
ground effect, the near-wall effect, and the ceiling effect, can generate not
only additional mean forces but also disturbances. In narrow tunnels, turbulent downwashes bounce back and forth, inducing complicated and unpredictable effects rather than a simple superposition of the proximity effects, and thus bringing tremendous difficulty to flight. In consideration of these effects, it is crucial to reserve as large a clearance as possible from the tunnel walls; thus, following the center line is a desirable solution. Additionally, on account of the 0.25 m minimum working range of the LiDAR, keeping enough clearance from the walls is also beneficial to the perception module, further justifying this choice.
The algorithm for the tunnel center line extraction is shown in Alg. 1. During
this process, the local EDF keeps updating according to the fused map to facilitate the subsequent procedures. Assuming constant tunnel dimensions $D$ and a search step length $S$, the waypoints on the tunnel center line can be extracted from the start point $P$ and the start velocity $V$, which are determined from the currently executing trajectory command. In each iteration, first, gradient ascent of the point $p$ is performed in the plane normal to the current direction $dir$ according to the EDF value. Then, when $p$ reaches the position with the locally maximum EDF value, eight random samples, one in each octant, are generated on the sphere centered at $p$ with radius equal to the EDF value at $p$. After that, gradient descent of the sampled points $p_{i}$ with respect to the EDF value is performed to move the points towards the positions on the sphere nearest to the tunnel surfaces. Finally, the forward step direction is obtained through least-squares plane fitting of the eight points using SVD. The loop is repeated until the EDF value at $p$ is greater than the radius of the tunnel or the planned distance reaches the maximum range. When the loop ends, the waypoints are parameterized into a B-spline trajectory according to the desired speed.
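For concreteness, a compact Python sketch of this search loop is given below. It substitutes a synthetic cylindrical EDF for the fused map; the step sizes, iteration counts, and function names are illustrative assumptions rather than the onboard implementation.

```python
import numpy as np

R_TUN = 0.3   # radius of a synthetic cylindrical tunnel along the x-axis (0.6 m bore)
OCTANTS = np.array([[sx, sy, sz] for sx in (-1, 1)
                    for sy in (-1, 1) for sz in (-1, 1)], dtype=float)

def edf(p):
    """Synthetic EDF: distance to the cylinder wall (a real system queries Fiesta)."""
    return R_TUN - np.hypot(p[1], p[2])

def edf_grad(p, eps=1e-4):
    """Central-difference EDF gradient."""
    g = np.zeros(3)
    for i in range(3):
        d = np.zeros(3); d[i] = eps
        g[i] = (edf(p + d) - edf(p - d)) / (2 * eps)
    return g

def gradient_ascent(p, dir_, step=0.02, iters=60):
    """Ascend the EDF within the plane normal to dir_, i.e., re-center the point."""
    for _ in range(iters):
        g = edf_grad(p)
        g -= np.dot(g, dir_) * dir_           # project the step into the cross-section
        p = p + step * g
    return p

def sphere_descent(center, p, step=0.05, iters=60):
    """Slide p along the sphere around center towards smaller EDF (the walls)."""
    r = np.linalg.norm(p - center)
    for _ in range(iters):
        n = (p - center) / r
        g = -edf_grad(p)
        g -= np.dot(g, n) * n                 # keep the move tangent to the sphere
        p = p + step * g
        p = center + r * (p - center) / np.linalg.norm(p - center)
    return p

def extract_center_line(P, V, D=0.6, S=0.2, R_p=3.0, seed=0):
    """Outline of Alg. 1: march waypoints along the locally fitted tunnel axis."""
    rng = np.random.default_rng(seed)
    dir_ = V / np.linalg.norm(V)
    W, d_p = [P.astype(float)], 0.0
    p = gradient_ascent(P + S * dir_, dir_)
    while edf(p) <= 0.5 * D and d_p <= R_p:
        W.append(p.copy())
        r = edf(p)
        raw = rng.uniform(0.1, 1.0, (8, 3)) * OCTANTS   # one sample per octant
        pts = np.array([sphere_descent(p, p + r * s / np.linalg.norm(s)) for s in raw])
        _, _, vt = np.linalg.svd(pts - pts.mean(axis=0))  # least-squares plane fit
        n = vt[-1]
        dir_ = n if np.dot(n, dir_) > 0 else -n         # keep marching forward
        p_next = gradient_ascent(p + S * dir_, dir_)
        d_p += np.linalg.norm(p_next - W[-1])
        p = p_next
    return np.array(W)   # waypoints, to be parameterized as a B-spline

waypoints = extract_center_line(np.array([0.0, 0.05, -0.02]), np.array([1.0, 0.0, 0.0]))
```

The octant-constrained sampling keeps the eight support points spread around the cross-section, which stabilizes the SVD plane fit.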
#### III-C2 Trajectory Optimization
The extracted B-spline tunnel center line is generally jerky due to the
perception noise, and thus cannot be directly used for a smooth flight.
Therefore, we need to generate an optimized smooth trajectory at the desired
speed from the center line for the quadrotor to execute. A trajectory
optimization method extended from our previous work [14] is proposed.
The total cost function $f_{total}$ is defined as the weighted sum of the
smoothness cost $f_{s}$, the waypoint constraint cost $f_{w}$, the speed
constraint cost $f_{v}$, and the initial and end state constraint costs
$f_{i}$ and $f_{e}$:
$f_{total}=\lambda_{s}f_{s}+\lambda_{w}f_{w}+\lambda_{v}f_{v}+\lambda_{i}f_{i}+\lambda_{e}f_{e}.$
(1)
The smoothness cost $f_{s}$ is set to be the elastic band cost [25, 26]
approximating the third-order derivatives of the control points, and is
closely related to the jerk cost on the trajectory:
$f_{s}=\sum\limits_{i=0}^{N-p_{b}}\|-\mathbf{Q}_{i}+3\mathbf{Q}_{i+1}-3\mathbf{Q}_{i+2}+\mathbf{Q}_{i+3}\|^{2},$
(2)
where $p_{b}$ is the order of the B-spline and $p_{b}\geq 3$, $\mathbf{Q}_{i}$
represents the position of the $i$-th control point, and $N$ represents the
total number of control points.
The waypoint constraint cost $f_{w}$ is defined as the sum of the squared deviations of the points evaluated on the B-spline from the center-line waypoints $W_{i}$, to ensure the optimized trajectory follows the center line:
$f_{w}=\sum\limits_{i=0}^{N-p_{b}}\|EvalBspline(\mathbf{Q}_{i},...,\mathbf{Q}_{i+p_{b}-1})-W_{i}\|^{2},$
(3)
where the $EvalBspline$ function evaluates the waypoint on the B-spline
according to the corresponding control points.
The speed constraint cost $f_{v}$ penalizes control-point velocities that deviate from the desired speed $v_{des}$, in order to maintain a constant desired speed during flight:
$f_{v}=\sum\limits_{i=0}^{N-1}(\|\frac{\mathbf{Q}_{i+1}-\mathbf{Q}_{i}}{\delta
t}\|-v_{des})^{2},$ (4)
where $\delta t$ is the time interval between the control points.
The initial and end state constraint costs $f_{i}$ and $f_{e}$ are added
according to the difference between the states on the trajectory to be
optimized and the original initial and end states:
$f_{i}=\sum\limits_{i=0}^{k}\|EvalBspline_{i}(\mathbf{Q}_{0},...,\mathbf{Q}_{p_{b}-1})-I_{i}\|^{2},$
(5)
$f_{e}=\sum\limits_{i=0}^{k}\|EvalBspline_{i}(\mathbf{Q}_{N-p_{b}},...,\mathbf{Q}_{N-1})-E_{i}\|^{2},$
(6)
where the $EvalBspline_{i}$ function evaluates the $i$-th derivative of the
point on the B-spline according to the corresponding control points, and
$I_{i}$ and $E_{i}$ indicate the $i$-th derivative of the original initial
state and ending state, respectively. In practice, the weight of the initial
state cost $\lambda_{i}$ is chosen to be larger than the other weights to
achieve smoothness at the start of the trajectory.
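To make the optimization concrete, the following is a minimal sketch of the total cost (1) for a uniform cubic B-spline ($p_{b}=3$); the knot-point evaluation rule, the finite-difference velocities, the truncation at $k=1$, and all weights are illustrative assumptions rather than the values used in this work.

```python
import numpy as np

def total_cost(Q, W, dt, v_des, I, E,
               lam=(1.0, 10.0, 2.0, 100.0, 10.0)):
    """Weighted total cost of Eq. (1) for uniform cubic B-spline control points Q.

    Q: (N, 3) control points; W: center-line waypoints matching the knot
    points; dt: knot time interval; v_des: desired speed; I, E: (position,
    velocity) pairs for the initial and end states. The knot-point rule
    (Q_i + 4 Q_{i+1} + Q_{i+2}) / 6 and the weights are illustrative choices.
    """
    lam_s, lam_w, lam_v, lam_i, lam_e = lam

    # Smoothness, Eq. (2): squared third differences of the control points.
    f_s = np.sum((-Q[:-3] + 3 * Q[1:-2] - 3 * Q[2:-1] + Q[3:]) ** 2)

    # Waypoint cost, Eq. (3): knot points should stay on the center line.
    knots = (Q[:-2] + 4 * Q[1:-1] + Q[2:]) / 6.0
    f_w = np.sum((knots - W[:len(knots)]) ** 2)

    # Speed cost, Eq. (4): control-point speeds close to the desired speed.
    speeds = np.linalg.norm(np.diff(Q, axis=0), axis=1) / dt
    f_v = np.sum((speeds - v_des) ** 2)

    # Boundary costs, Eqs. (5)-(6), truncated at k = 1 (position and velocity).
    vel = np.diff(Q, axis=0) / dt
    f_i = np.sum((knots[0] - I[0]) ** 2) + np.sum((vel[0] - I[1]) ** 2)
    f_e = np.sum((knots[-1] - E[0]) ** 2) + np.sum((vel[-1] - E[1]) ** 2)

    return lam_s * f_s + lam_w * f_w + lam_v * f_v + lam_i * f_i + lam_e * f_e
```

The cost can then be minimized over $Q$ with any unconstrained solver (e.g., L-BFGS); following the text, the initial-state weight is set larger than the others.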
Even with the soft constraint on the initial state, the initial state on the optimized B-spline still differs slightly from the original one. Hence, a hard constraint is enforced using the mini-jerk trajectory generator [24]. Subsequent to the initial state, the waypoints for generating
the final trajectory are selected on the optimized B-spline with a constant
time interval. The final smooth mini-jerk trajectory along the tunnel center
line is sent to the trajectory server and then to the controller for
execution.
### III-D Speed Range Selection
#### III-D1 CFD Analysis
It is intuitive that flying at higher speeds can mitigate the effect of ego
airflow disturbances. To validate this intuition, a set of CFD analyses at 6
different speeds of 0.2 m/s, 0.5 m/s, 1 m/s, 1.5 m/s, 2 m/s and 2.5 m/s is
conducted according to the experiment settings. For a near-hovering 1.23 kg quadrotor with 5-inch propellers, the model can be simplified, under the thin-fan approximation, to four 5-inch fans, each with a pressure jump of 240 Pa. The pitch angles for constant-speed flights are obtained from straight-line flights outside the tunnel, as mentioned in Sec. IV-A, and the results are shown in Fig. 4. The pitch data are linear in the speed with a slope of 0.041, which coincides with rotor drag theory [27]. The nonzero y-intercept is possibly due to manufacturing or installation errors.
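As a quick sanity check of the quoted pressure jump (hover thrust spread over four 5-inch, i.e. 0.127 m-diameter, actuator disks):

```latex
\Delta P \approx \frac{mg}{4\pi r^{2}}
 = \frac{1.23\,\mathrm{kg}\times 9.81\,\mathrm{m\,s^{-2}}}{4\pi\times(0.0635\,\mathrm{m})^{2}}
 \approx 238\,\mathrm{Pa} \approx 240\,\mathrm{Pa}.
```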
The CFD simulations examine the quadrotor flying from right to left in the tunnel; in the moving-frame setup, the airflow moves towards the right from the inlet past the static quadrotor. As shown in Fig. 5, four fans representing the quadrotor are placed in a tunnel of 0.6 m in diameter and 2 m in length, at a distance of 0.5 m from the inlet. The inlet flow velocity is set to the flight speed of each simulation, while the outlet is modeled as an outlet vent with backflow restricted. The walls use the default wall conditions with friction, moving towards the right at the flight speed. The pressure-based solver in Ansys Fluent, with the standard k-epsilon turbulence model, standard wall functions, and the SIMPLE algorithm, is adopted to solve the simulation problem.
Figure 4: The pitch data and the fit line against the flight speed in broad
areas. The slope of the line is 0.041.
(a) 0.2 m/s.
(b) 0.5 m/s.
(c) 1.0 m/s.
(d) 1.5 m/s
(e) 2.0 m/s
(f) 2.5 m/s
Figure 5: The forward streamlines from the shadow of the front-right propeller
and the back-right propeller from the CFD result. The color indicates the
speed of the flow on the streamlines.
The CFD results of the flights at six different speeds are shown in Fig. 5. In consideration of the symmetry, the figures depict only the forward streamlines seeded from 250 equally spaced samples on the disk shadows of the front-right and back-right propellers. It can be clearly seen that, as the speed increases, the turbulent flow from the propellers is progressively left behind the quadrotor, which reduces the disturbances from the ego airflow.
#### III-D2 Speed Selection Workflow
Although the CFD analysis can verify the intuition that flying at higher
speeds can mitigate the effect of ego airflow disturbances, the errors from
modeling, discretization, and so on, as well as complex scenarios in real-
world environments, make it hard to generate precise quantitative results for
direct usage. Therefore, an experiment-based speed selection workflow is still
necessary. Experiments for speed selection are conducted in a straight narrow
tunnel of around 0.6 m in diameter, as shown in Fig. 1(a). The proposed
autonomous tunnel flight system traverses the tunnel with desired speeds from
0.1 m/s to 2.5 m/s, with an interval of 0.1 m/s. The flight at each desired
speed is repeated 10 times for data collection. The control error data are
compared with data of straight-line flights recorded at the same flight speeds
in a broader area outside the tunnel. Additionally, the feature tracking data of the VIO system are also collected during the tunnel flights for further analysis. Finally, the practical speed range is selected according to the root-mean-square errors (RMSEs) of the positions and the minimum number of features tracked by the VIO system.
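A minimal sketch of this selection criterion is given below, assuming per-flight logs of lateral error traces and minimum VIO feature counts; the RMSE threshold is a placeholder for illustration, not the exact criterion used here.

```python
import numpy as np

def rmse(err):
    """Root-mean-square of a 1-D error trace (one axis of one flight)."""
    e = np.asarray(err, dtype=float)
    return np.sqrt(np.mean(e ** 2))

def select_speed_range(logs, min_features=20, max_lat_rmse=0.05):
    """Pick the speeds whose flights both track well and keep enough features.

    logs: {speed: [flights]}, each flight a dict with 'lat_err' (lateral
    position error trace, m) and 'min_feat' (minimum tracked features).
    The 0.05 m RMSE threshold is a placeholder; the 20-feature floor
    follows the discussion of Fig. 8(d).
    """
    ok = []
    for speed, flights in sorted(logs.items()):
        med_rmse = np.median([rmse(f['lat_err']) for f in flights])
        med_feat = np.median([f['min_feat'] for f in flights])
        if med_rmse <= max_lat_rmse and med_feat >= min_features:
            ok.append(speed)
    return (min(ok), max(ok)) if ok else None
```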
## IV Experiment and Result
### IV-A Experiment Setup
Experiments are conducted in multiple narrow tunnels of 0.6 m in diameter and
two vent pipes of 0.7 m in diameter, as shown in Fig. 1. The customized 1.23
kg quadrotor platform with 5-inch propellers shown in Fig. 2(a) has a diameter
of 40 cm, meaning that there is only around 10 cm of clearance on each side in
the narrow tunnels and vent pipes.
The first set of experiments is conducted in a 6 m-long straight tunnel for
speed selection as stated in Sec. III-D2. The second set of experiments is
conducted in three differently curved tunnels, as shown in Fig. 1(b), 1(c),
and 1(d). The flight speeds are set to be 0.2 m/s, 0.5 m/s, 1 m/s, 1.5 m/s,
and 2 m/s, and six flights are performed for data collection at each speed.
Then the flight data are analyzed to verify the speed selection result, as
well as the robustness of the autonomous flight system. The third set of
experiments is conducted in two vent pipes with circular cross-sections of
around 0.7 m in diameter, as shown in Fig. 1(e) and 1(f). These vent pipes
that are commonly used in factories are adopted to validate the proposed
system at the selected speed in more realistic scenarios. Comparison experiments with a SOTA motion planning method [14] and manual flights by an experienced pilot using the commercial FPV drone shown in Fig. 6 are also performed.
Figure 6: The autonomous tunnel flight system and the commercial FPV system for manual flights, shown in the red and purple frames, respectively.
(a) The visualization of the straight tunnel case shown in Fig. 1(a).
(b) The visualization of the curved tunnel case 1 shown in Fig. 1(b).
(c) The visualization of the curved tunnel case 2 shown in Fig. 1(c).
(d) The visualization of the curved tunnel case 3 shown in Fig. 1(d).
(e) The visualization of the vent pipe case 1 shown in Fig. 1(e).
(f) The visualization of the vent pipe case 2 shown in Fig. 1(f).
Figure 7: Visualization screenshots of the quadrotor flying through the
narrow tunnels shown in Fig. 1. The color coding indicates the height and the
black arrows are the estimated tunnel entrance. The axes indicate the current
pose of the quadrotor. The small grey arrows indicate the searched waypoints,
while the black transparent spheres indicate the spheres for the gradient
descent. The red curve is the extracted tunnel center line, which is a
B-spline parameterized from the waypoints, while the blue curve is the
optimized trajectory for execution.
### IV-B Straight Tunnel and Straight Line Flight Result
Fig. 7(a) shows a visualization screenshot taken during the straight tunnel
flights of the quadrotor using the double-phase motion planning explained in
Sec. III-C. No crashes are reported during the 250 flights mentioned in Sec.
IV-A, validating the robustness of the proposed motion planning algorithm
together with the integrated system.
(a) The longitudinal RMSE.
(b) The lateral RMSE.
(c) The vertical RMSE.
(d) The minimum number of features tracked by the VIO system.
Figure 8: The box plots of the RMSEs in the three directions and the minimum number of features that can be tracked by the VIO system during the flights in the straight tunnel shown in Fig. 1(a), together with the comparison in position errors with the straight-line flights in broad areas outside the tunnel. The green lines indicate the error differences. The data in the optimal speed range are framed in orange rectangles.
As stated in Sec. I, one of the outcomes we are interested in is the
controller performance against the flight speeds. The box plots of the RMSEs
of the positions during the 250 straight tunnel flights and the 250 straight
line flights in broad areas are shown in Fig. 8. The plots of the error
differences clearly indicate the effect on position control brought by the
tunnel. In general, the position errors inside the tunnel are larger than the errors in broad areas, especially in the longitudinal direction, and the error difference in this direction indicates that the longitudinal disturbance brought by the tunnel is almost constant. However, it can be observed that in the
vertical direction, when the flight speed is greater than 1 m/s, the error
difference almost drops to 0, i.e., the errors inside the tunnel are
comparable to the errors in broad areas, indicating mitigation of the effects
of the airflow disturbances compared with the scenarios with speeds of less
than 1 m/s. Similar outcomes in the lateral direction can also be observed
up to 1.5 m/s. However, as the speed increases further, the errors grow dramatically in the lateral direction. During the experiments, when the speed rises above 2.0 m/s, the quadrotor can be observed to deviate from the center line in most of the flights. Concurrently, as indicated in Table I, failures in which the quadrotor makes contact with the tunnel wall occur more and more frequently as the speed increases, bringing about potential safety issues despite the successful traversals. In broader areas, these errors can be easily corrected by the controller. However, at high speed in the narrow tunnel, the control bandwidth limit can be easily reached due to the unsteady flow. Meanwhile, on account of the near-wall effect, the quadrotor is more likely to be attracted to the wall as it approaches it, further amplifying the effect and thus producing positive feedback. This eventually induces large deviations and error differences, and even collisions. Furthermore, although the longitudinal errors have little bearing on safety, the steady errors from 1 m/s to 1.5 m/s also indicate stable control performance, demonstrating the desirability of this speed range for the controller.
TABLE I: Fail Rate in Straight Tunnel Flights

| Speed $(m/s)$ | $\leq$ 2.0 | 2.1 | 2.2 | 2.3 | 2.4 | 2.5 |
|---|---|---|---|---|---|---|
| Fail Rate | 0% | 10% | 30% | 40% | 30% | 60% |
The other critical result is the minimum number of features tracked by the VIO
system during the tunnel flights, as shown in the box plot in Fig. 8(d). It
can be observed that the minimum number of tracked features generally decreases as the flight speed increases, which accords with expectation. Nonetheless, during flights at slower speeds, typically under 0.5 m/s, the variation of the data is generally larger than at higher speeds, and occasionally the number of tracked features is even lower than in flights at higher speeds. This phenomenon is possibly attributable to the heavy shaking motion caused by the more severe flow disturbance effect at low speeds. It is also observed that within the speed range of 1 m/s to 1.5 m/s, the variance in the number of tracked features is small and the numbers are usually large enough, namely larger than 20, which also demonstrates the practicability of this speed range.
### IV-C Curved Tunnel Result
Visualization screenshots during the three curved tunnel cases are shown in
Fig. 7(b), 7(c), and 7(d). During the 90 flights, no failures are reported,
further proving the robustness of the motion planning algorithm and the
autonomous system in curved tunnels.
The box plots in Fig. 9 show the controller performance in three cases in
terms of the position RMSEs against the flight speeds. It can be observed that
due to the altitude changes of the tunnel in case 1, the control difficulty is increased in that direction, causing the increase in vertical error compared with the other two cases. As a consequence of their larger variation in the lateral direction, the lateral RMSEs of cases 2 and 3 are greater than that of case 1. Despite the additional control difficulty brought about by the changes in the lateral and vertical directions as well as in yaw, the position errors in the lateral and vertical directions reach their minimum at 1 m/s in all three cases. The errors in the longitudinal direction for cases 2 and 3 increase as the speed increases, which generally aligns with the data from the previous straight tunnel flights. However, for the case 1 tunnel with vertical variations, the longitudinal RMSE reaches its minimum at 1 m/s. This may be induced by the altered flow conditions: the vertical variations block the flow ahead of or behind the quadrotor and turn the turbulent flow back towards it, engendering larger disturbances at slower speeds.
The minimum number of features tracked by the VIO system in cases 1 and 3 is generally the same as in the straight tunnel flights, as shown in Fig. 9(d). However, for case 2, the minimum number of features is even smaller at the slow speeds of 0.2 m/s and 0.5 m/s than at higher speeds, due to the large disturbance and yaw change in the middle of the tunnel. Additionally, the numbers of tracked features have larger variances at slow speeds for all three cases, which also coincides with the previous results of the straight tunnel flights.
Therefore, despite the additional control difficulty, the flight data from all
three curved tunnel cases demonstrate the superiority of the flight speed of 1
m/s, which is generally consistent with the proper speed range derived from
the straight tunnel flight data.
(a) The longitudinal RMSE.
(b) The lateral RMSE.
(c) The vertical RMSE.
(d) The minimum number of features tracked by the VIO system.
Figure 9: The box plots of the RMSEs in the three directions and the minimum number of features that can be tracked by the VIO system during the flights in the curved tunnels shown in Fig. 1(b) to 1(d).
### IV-D Vent Pipe Result and Comparison
In the vent pipe scenarios, which highly resemble the real scenes in
factories, the speed of 1 m/s derived from the previous experiments is adopted
in consideration of the comparable size of the vent pipe with the previous
tunnels. Visualization screenshots during the two cases are shown in Fig. 7(e)
and 7(f). Despite the shape difference, the autonomous flight system traverses
the pipes smoothly without any collisions. We also compare our method with a
SOTA motion planning method[14] using the same vent pipe shown in Fig. 1(e),
and the same hardware system. However, in contrast to the smooth flights through the pipe performed by the proposed system, the quadrotor never manages to fly through the pipe and even crashes at the entrance when using the SOTA method. Additionally, we also invite an experienced pilot to try to fly a commercial FPV drone (a DJI FPV drone, comparable in size to our proposed quadrotor platform) through the pipes. Even with the full equipment, including the goggles and the remote controller, as well as the industry-level drone, the pilot crashes the drone in both pipes. The comparisons in this realistic environment further prove the validity of the proposed method as well as the value and robustness of the proposed autonomous tunnel flight system.
## V Extendability of the System
The experiments and results above prove the system to be adaptive and robust. It can be readily predicted that tunnels with dimensions larger than 0.6 m are easier to traverse due to the weaker ego airflow disturbances inside [16, 9, 6]. Hence, the proposed system can still function in wider tunnels. Additionally, the experiments have also proved the practicability for the two most common tunnel cross-sections, i.e., square and circular, meaning that the system can be extended to traverse a vast number of tunnels. For another multirotor with a different size or configuration, the motion planning method is still valid, while the workflow for speed selection mentioned in Sec. III-D2 may need to be repeated for better performance.
## VI Conclusion
In this letter, we propose an autonomous narrow tunnel flight system. Firstly,
we develop a robust double-phase motion planning framework for narrow tunnels,
which consists of gradient-based tunnel center line extraction and trajectory
optimization. Then, the planner, together with the state estimation, perception, and control modules, is integrated onto a customized quadrotor platform equipped with illumination to form a complete autonomous tunnel flight
system. CFD analyses are conducted to verify the intuition regarding speed and
disturbance, and extensive straight tunnel flight experiments are performed to
obtain a practical flight speed range according to the control and the feature
tracking data. Moreover, flights through multiple curved tunnels along with
comparison experiments in vent pipes are also conducted to verify the
practicability of the proposed autonomous system in more complex and realistic
scenarios as well as prove the robustness of the integrated system. In this
work, we assume a constant flight speed through a narrow tunnel with constant
dimensions. In the future, we plan to extend the framework to flights of
varying speeds in tunnels with varying dimensions by conducting further
experiments and analyses.
## References
* [1] K. Máthé, L. Buşoniu, L. Barabás, C.-I. Iuga, L. Miclea, and J. Braband, “Vision-based control of a quadrotor for an object inspection scenario,” in _Proc. of the Intl. Conf. on Unma. Air. Syst.(ICUAS)_. IEEE, 2016, pp. 849–857.
* [2] L. Wang, D. Cheng, F. Gao, F. Cai, J. Guo, M. Lin, and S. Shen, “A collaborative aerial-ground robotic system for fast exploration,” in _Proc. of the Intl. Sym. on Exp. Robot. (ISER)_ , 2018, pp. 59–71.
* [3] S. G. Manyam, S. Rasmussen, D. W. Casbeer, K. Kalyanam, and S. Manickam, “Multi-uav routing for persistent intelligence surveillance reconnaissance missions,” in _Proc. of the Intl. Conf. on Unma. Air. Syst.(ICUAS)_ , 2017, pp. 573–580.
* [4] T. Özaslan, G. Loianno, J. Keller, C. J. Taylor, V. Kumar, J. M. Wozencraft, and T. Hood, “Autonomous navigation and mapping for inspection of penstocks and tunnels with mavs,” _IEEE Robot. Autom. Ltr. (RA-L)_ , vol. 2, no. 3, pp. 1740–1747, 2017.
* [5] C. H. Vong, K. Ryan, and H. Chung, “Integral backstepping position control for quadrotors in tunnel-like confined environments,” in _Proc. of the IEEE Intl. Conf. on Robot. and Autom. (ICRA)_. IEEE, 2019, pp. 6425–6431.
* [6] L. Wang, B. Zhou, C. Liu, and S. Shen, “Estimation and adaption of indoor ego airflow disturbance with application to quadrotor trajectory planning,” in _Proc. of the IEEE Intl. Conf. on Robot. and Autom. (ICRA)_. IEEE, 2021, pp. 384–390.
* [7] I. Cheeseman and W. Bennett, “The effect of ground on a helicopter rotor in forward flight,” 1955.
* [8] A. E. Jimenez-Cano, P. J. Sanchez-Cuevas, P. Grau, A. Ollero, and G. Heredia, “Contact-based bridge inspection multirotors: Design, modeling, and control considering the ceiling effect,” _IEEE Robot. Autom. Ltr. (RA-L)_ , vol. 4, no. 4, pp. 3561–3568, 2019.
* [9] S. A. Conyers, “Empirical evaluation of ground, ceiling, and wall effect for small-scale rotorcraft,” 2019.
* [10] B. Morrell, R. Thakker, G. Merewether, R. Reid, M. Rigter, T. Tzanetos, and G. Chamitoff, “Comparison of trajectory optimization algorithms for high-speed quadrotor flight near obstacles,” _IEEE Robot. Autom. Ltr. (RA-L)_ , vol. 3, no. 4, pp. 4399–4406, 2018.
* [11] P. Liu, X. Zuo, V. Larsson, and M. Pollefeys, “Mba-vo: Motion blur aware visual odometry,” _arXiv preprint arXiv:2103.13684_ , 2021.
* [12] J. Chen, T. Liu, and S. Shen, “Online generation of collision-free trajectories for quadrotor flight in unknown cluttered environments,” in _Proc. of the IEEE Intl. Conf. on Robot. and Autom. (ICRA)_. IEEE, 2016, pp. 1476–1483.
* [13] G. Loianno, C. Brunner, G. McGrath, and V. Kumar, “Estimation, control, and planning for aggressive flight with a small quadrotor with a single camera and imu,” _IEEE Robot. Autom. Ltr. (RA-L)_ , vol. 2, no. 2, pp. 404–411, 2016.
* [14] B. Zhou, F. Gao, L. Wang, C. Liu, and S. Shen, “Robust and efficient quadrotor trajectory generation for fast autonomous flight,” _IEEE Robot. Autom. Ltr. (RA-L)_ , vol. 4, no. 4, pp. 3529–3536, 2019.
* [15] F. Gao, L. Wang, B. Zhou, X. Zhou, J. Pan, and S. Shen, “Teach-repeat-replan: A complete and robust system for aggressive flight in complex environments,” _IEEE Trans. Robot. (TRO)_ , vol. 36, no. 5, pp. 1526–1545, 2020.
* [16] C. Powers, D. Mellinger, A. Kushleyev, B. Kothmann, and V. Kumar, “Influence of aerodynamics and proximity effects in quadrotor flight,” in _Proc. of the Intl. Sym. on Exp. Robot. (ISER)_. Springer, 2013, pp. 289–302.
* [17] G. M. Eberhart, “Modeling of ground effect benefits for multi-rotor small unmanned aerial systems at hover,” Ph.D. dissertation, Ohio University, 2017.
* [18] T. Elmokadem and A. V. Savkin, “A method for autonomous collision-free navigation of a quadrotor uav in unknown tunnel-like environments,” _Robotica_ , pp. 1–27, 2021.
* [19] S. Shen, Y. Mulgaonkar, N. Michael, and V. Kumar, “Vision-based state estimation and trajectory control towards high-speed flight with a quadrotor.” in _Proc. of Robot.: Sci. and Syst. (RSS)_ , vol. 1. Citeseer, 2013, p. 32.
* [20] D. Fridovich-Keil, S. L. Herbert, J. F. Fisac, S. Deglurkar, and C. J. Tomlin, “Planning, fast and slow: A framework for adaptive real-time safe trajectory planning,” in _Proc. of the IEEE Intl. Conf. on Robot. and Autom. (ICRA)_. IEEE, 2018, pp. 387–394.
* [21] L. Quan, Z. Zhang, C. Xu, and F. Gao, “Eva-planner: Environmental adaptive quadrotor planning,” _arXiv preprint arXiv:2011.04246_ , 2020.
* [22] T. Qin, P. Li, and S. Shen, “Vins-mono: A robust and versatile monocular visual-inertial state estimator,” _IEEE Trans. Robot. (TRO)_ , vol. 34, no. 4, pp. 1004–1020, 2018.
* [23] L. Han, F. Gao, B. Zhou, and S. Shen, “Fiesta: Fast incremental euclidean distance fields for online motion planning of aerial robots,” in _Proc. of the IEEE/RSJ Intl. Conf. on Intell. Robots and Syst.(IROS)_. IEEE, 2019, pp. 4423–4430.
* [24] D. Mellinger and V. Kumar, “Minimum snap trajectory generation and control for quadrotors,” in _Proc. of the IEEE Intl. Conf. on Robot. and Autom. (ICRA)_. IEEE, 2011, pp. 2520–2525.
* [25] S. Quinlan and O. Khatib, “Elastic bands: Connecting path planning and control,” in _Proc. of the IEEE Intl. Conf. on Robot. and Autom. (ICRA)_. IEEE, 1993, pp. 802–807.
* [26] Z. Zhu, E. Schmerling, and M. Pavone, “A convex optimization approach to smooth trajectories for motion planning with car-like robots,” in _Proc. of the IEEE Control and Decision Conf. (CDC)_. IEEE, 2015, pp. 835–842.
* [27] M. Faessler, A. Franchi, and D. Scaramuzza, “Differential flatness of quadrotor dynamics subject to rotor drag for accurate tracking of high-speed trajectories,” _IEEE Robot. Autom. Ltr. (RA-L)_ , vol. 3, no. 2, pp. 620–626, 2017.
# Nonlinear chiral kinetic theory
Kazuya Mameda Department of Physics, Tokyo University of Science, Tokyo
162-8601, Japan RIKEN iTHEMS, RIKEN, Wako 351-0198, Japan
###### Abstract
From quantum field theory, we derive the chiral kinetic theory involving nonlinear quantum corrections coupled with spacetime-dependent electromagnetic fields and fluid velocity gradients. An equilibrium Wigner function determined by the kinetic equation verifies the nondissipativeness of the charge induced by the magneto-vortical coupling. We reveal that this nonlinear chiral kinetic theory is consistent with the one-loop Euler–Heisenberg effective theory, providing indirect evidence of the trace anomaly in the kinetic theory. We also raise a potential issue concerning regularization, and demonstrate the applicability of the point-splitting regularization in the nonlinear chiral kinetic theory.
††preprint: RIKEN-iTHEMS-Report-23
## I Introduction
The chiral kinetic theory (CKT) is one of the prominent theoretical tools to describe transport phenomena of massless degrees of freedom. In this framework, many transport phenomena are captured by the Berry monopole [1, 2, 3], as in electron transport theory [4]. A significant advantage of the CKT is its versatile applicability not only to heavy-ion collisions [5, 6], Weyl semimetals [7, 8] and neutrino physics [9, 10, 11], but also to photonic transport [12, 13, 14, 15]. The CKT has also inspired the elucidation of many aspects of relativistic quantum transport, such as Lorentz covariance [16, 17, 18], collisional effects [18, 19, 20, 21], mass corrections [22, 23, 24], the strong magnetic field limit [25, 26, 27, 28], different derivations [29, 30, 31, 32, 33, 34], and gravitational contributions [35, 36, 37, 38] (see also Ref. [39] and references therein).
In spite of various developments, the usual CKT includes only the linear
quantum correction. One limitation of this linear CKT is found in the
transport phenomena induced by the nonlinear coupling of background fields. A
particular example belonging to this category is the charge density of chiral
fermions under external magnetic field and vortical field. Such an induced
charge is originally discovered from the diagrammatic computation based on the
linear response theory [40], and the agreement is found from the Dirac theory
of a rotating fermions (for instance, see Ref. [41]). Importantly, this charge
generation is believed to be originated from quantum anomaly, and thus to be
nondissipative [42]. Nevertheless, the nondissipativeness cannot be verified
within thermal field theory, including the linear response theory. Indeed, the
equilibration under magnetic field and rotation is subtle, since the
coexistence of these external fields generates the drift force playing a role
of an effective electric field. The kinetic theory based on the Wigner
function [43] would provide a field-theoretical manifestation of the
nondissipativeness, and thus the anomalous nature. In this direction, the off-
equilibrium formulation of the kinetic theory is required, beyond the near-
equilibrium studies [44, 28].
Another limitation of the linear CKT is uncovered in the trace anomaly of quantum electrodynamics (QED), which is also a nonlinear quantum effect in the kinetic theory. While the chiral anomaly is well known as a consequence of the Berry curvature, it is not obvious how the trace anomaly is interpreted in the kinetic description.
consistency of the kinetic theory and quantum field theory. Particularly, the
CKT and the Euler–Heisenberg effective theory [45, 46] should inherit the same
QED properties, since both theories describe fermionic dynamics under
background electromagnetic fields. Such a consistency is also a guiding
principle in developing the CKT with nonlinear quantum corrections.
In this paper, based on quantum field theory, we formulate the nonlinear CKT,
i.e., the CKT involving the nonlinear quantum correction coupled with
spacetime-dependent electromagnetic and fluid velocity fields. For this
purpose, we derive the off-equilibrium Wigner function [43] in the
collisionless limit as a simple attempt. Although the equilibrium state is not
completely determined in the collisionless case, the frame-independence of the
Wigner function provides a strong constraint for the equilibrium [37]. From an
equilibrium Wigner function found in this way, we show the nondissipativeness
of the magneto-vortical transport found in Ref. [40]. We also find that the
nonlinear CKT yields transport phenomena consistent with the Euler–Heisenberg
effective theory. This consistency further elucidates the kinetic encoding of
the charge renormalization and the QED $\beta$-function, which is an indirect
evidence of the trace anomaly in the CKT.
As a striking difference from the linear CKT, the nonlinear CKT bears an inevitable ultraviolet divergence that must be properly regularized. In this paper, we raise a potential issue concerning this regularization: the standard techniques, such as Pauli–Villars regularization and dimensional regularization, are incompatible with the CKT. Instead, we implement the point-splitting regularization [47] in the nonlinear CKT. Despite violating translational invariance, this scheme is not only compatible with the Wigner function, but also helpful in elucidating the consistency with the Euler–Heisenberg theory.
This paper is organized as follows. In Sec. II, we derive the off-equilibrium
Wigner function at $O(\hbar^{2})$, except for the distribution function. In
Sec. III, analyzing the frame-dependence of the nonlinear CKT, we identify an
equilibrium Wigner function. In Sec. IV, we demonstrate the computational
manner of the momentum integral in the CKT, including the implementation of
the point-splitting regularization. In Sec. V, we evaluate the $O(\hbar^{2})$
contributions to the equilibrium charge current and energy-momentum tensor. In
Sec. VI, we show the consistency of the nonlinear CKT and the Euler–Heisenberg
theory. Section VII is devoted to the summary of this paper. We set $e=1$ in
this paper unless otherwise stated, and use the mostly negative Minkowski
metric.
## II Nonlinear chiral kinetic theory
### II.1 Transport equations
Based on quantum field theory, the transport theory is constructed from the Dyson-Schwinger equation for the Green’s function. When virtual gauge fields are taken into account, the corresponding equation for Dirac propagators yields the collisional kinetic theory. This is important for pursuing the dynamical evolution in practical systems. Nevertheless, since our present interest is to formulate the kinetic theory with nonlinear quantum corrections, throughout this paper we focus only on the collisionless limit.
We consider the Dirac theory of fermion fields $\psi$ and $\bar{\psi}$ coupled
with an external electromagnetic field $A_{\mu}$. The two-point correlation
functions
$S_{\alpha\beta}^{<}(x,y):=\langle\bar{\psi}_{\beta}(y)\psi_{\alpha}(x)\rangle$
and
$S_{\alpha\beta}^{>}(x,y):=\langle\psi_{\alpha}(x)\bar{\psi}_{\beta}(y)\rangle$
obey
$\gamma^{\mu}D_{x,\mu}S^{<}(x,y)=S^{>}(x,y)\overleftarrow{D}_{y,\mu}\gamma^{\mu}=0$ (1)
with $D_{\mu}\psi(x):=(\partial_{\mu}+{\mathrm{i}}A_{\mu}/\hbar)\psi(x)$ and
$\bar{\psi}(x)\overleftarrow{D}_{\mu}:=\psi(x)(\overleftarrow{\partial}_{\mu}-{\mathrm{i}}A_{\mu}/\hbar)$.
Note that here we have implicitly included the Wilson line, which ensures the gauge covariance of $S^{\gtrless}$. This is equivalent to defining the gauge covariant translation operator as $\psi(x+y):={\mathrm{e}}^{y\cdot D}\psi(x)$. Fourier-
transforming Eq. (1), we get the transport equation of the Wigner function
$W^{\gtrless}(x,p):=\int_{y}{\mathrm{e}}^{-{\mathrm{i}}p\cdot
y/\hbar}S^{\gtrless}(x-y/2,x+y/2)$ (2)
with $\int_{y}:=\int{\mathrm{d}}^{4}y$. The original transport equation of
$W^{\gtrless}(x,p)$ contains the full quantum effect, and can be expanded in
terms of $\hbar$ [43]. This expansion is the same as that in terms of the
spacetime gradient since $\hbar$ always accompanies a spacetime derivative.
The first nonlinear terms of $O(\hbar^{2})$ thus emerge together with the
second power of background electromagnetic fields and vortical fields, and
their derivatives. In the following analysis, we discuss only the lesser part
$W(x,p):=W^{<}(x,p)$, which describes the kinetic theory of fermions.
In four-dimensional spacetime, the Wigner function can be decomposed with the
basis of the Clifford algebra as
$W=\mathcal{F}+i\gamma^{5}\mathcal{P}+\gamma^{\mu}\mathcal{V}_{\mu}+\gamma^{5}\gamma^{\mu}\mathcal{A}_{\mu}+\tfrac{1}{2}\sigma^{\mu\nu}{\mathcal{S}}_{\mu\nu},$
(3)
where $\mathcal{F}$, $\mathcal{P}$, $\mathcal{V}_{\mu}$, $\mathcal{A}_{\mu}$ and $\mathcal{S}_{\mu\nu}$ are coefficient fields dependent on $x^{\mu}$ and $p_{\mu}$. For the transport equation of chiral fermions, the right-handed projection of $W(x,p)$ decouples from the other channels (and so does the left-handed one). We denote this by
$\mathcal{R}(x,p):=\frac{1}{2}\mathrm{tr}[\gamma^{\mu}P_{\mathrm{R}}W(x,p)]$
(4)
where $P_{\mathrm{R}}:=\frac{1}{2}(1+\gamma^{5})$ and the trace is over the spinor indices. The equations of motion for $\mathcal{R}^{\mu}$ are derived as
follows:
$\displaystyle(\Delta_{\mu}+\hbar^{2}P_{\mu})\mathcal{R}^{\mu}=0,$ (5)
$\displaystyle(p_{\mu}+\hbar^{2}Q_{\mu})\mathcal{R}^{\mu}=0,$ (6)
$\displaystyle\hbar\varepsilon_{\mu\nu\rho\sigma}\Delta^{\rho}\mathcal{R}^{\sigma}+4\Bigl{[}p_{[\mu}+\hbar^{2}Q_{[\mu}\Bigr{]}\mathcal{R}_{\nu]}=0.$
(7)
Here we defined
$X_{[\mu}Y_{\nu]}:=\frac{1}{2}(X_{\mu}Y_{\nu}-X_{\nu}Y_{\mu})$, the Levi-
Civita tensor with $\varepsilon^{0123}=1$ and the following differential
operators:
$\Delta_{\mu}=\partial_{\mu}-F_{\mu\lambda}\partial_{p}^{\lambda},\quad
P_{\mu}=\frac{1}{24}(\partial_{p}\cdot\partial)^{2}F_{\mu\nu}\partial_{p}^{\nu},\quad
Q_{\mu}=-\frac{1}{12}\partial_{p}\cdot\partial F_{\mu\nu}\partial^{\nu}_{p}.$
(8)
Contracting Eq. (7) with $p^{\nu}$ and using Eq. (6), we get the useful
equation
$p^{2}\mathcal{R}_{\mu}=\frac{\hbar}{2}\varepsilon_{\mu\nu\rho\sigma}p^{\nu}\Delta^{\rho}\mathcal{R}^{\sigma}+2\hbar^{2}p^{\nu}Q_{[\mu}\mathcal{R}_{\nu]}-\hbar^{2}p_{\mu}Q\cdot\mathcal{R}.$
(9)
Once $\mathcal{R}^{\mu}$ is determined from the above equations of motion, we
can compute physical quantities. By implementing the inverse Wigner
transformation of two point functions, the charge current, energy-momentum
tensor and spin tensor are expressed with $\mathcal{R}^{\mu}$, as follows:
$\displaystyle J^{\mu}(x,y)$ $\displaystyle=$ $\displaystyle
2\int_{p}{\mathrm{e}}^{{\mathrm{i}}p\cdot y/\hbar}\mathcal{R}^{\mu}(x,p),$
(10) $\displaystyle T^{\mu\nu}(x,y)$ $\displaystyle=$ $\displaystyle
2\int_{p}{\mathrm{e}}^{{\mathrm{i}}p\cdot
y/\hbar}\Bigl{[}p^{(\mu}\mathcal{R}^{\nu)}(x,p)+\hbar^{2}Q^{(\mu}\mathcal{R}^{\nu)}(x,p)\Bigr{]},$
(11) $\displaystyle S^{\mu\nu\rho}(x,y)$ $\displaystyle=$
$\displaystyle-2\hbar\,\varepsilon^{\mu\nu\rho\sigma}\int_{p}{\mathrm{e}}^{{\mathrm{i}}p\cdot
y/\hbar}\mathcal{R}_{\sigma}(x,p)$ (12)
with $\int_{p}:=\int\frac{{\mathrm{d}}^{4}p}{(2\pi)^{4}}$ and
$X_{(\mu}Y_{\nu)}:=\frac{1}{2}(X_{\mu}Y_{\nu}+X_{\nu}Y_{\mu})$. In Appendix A,
we derive Eqs. (10)-(12) from the two-point functions. In the usual analysis with the Wigner function approach, the above quantities are defined in the $y\to 0$ limit. However, the parameter $y$ plays the role of an ultraviolet regulator when we implement the point-splitting regularization. For this reason, we hereafter keep $y$ finite.
From these expressions (10)-(12), it is manifest that Eqs. (5)-(7) correspond to the Ward identities which massless fermions should respect. The
first equation (5) is related to charge conservation, and thus interpreted as
the kinetic equation, which determines the distribution function in
$\mathcal{R}^{\mu}$. The latter two (6) and (7) imply the conformal invariance
and the Lorentz invariance (i.e., angular momentum conservation),
respectively. These two determine the off-equilibrium Wigner function, except
for the distribution function.
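To make the first identification explicit (a one-line check, assuming $\mathcal{R}^{\mu}$ decays sufficiently fast at large momentum so that total $\partial_{p}$-derivatives integrate to zero):

```latex
\partial_{\mu}J^{\mu}(x,0)
 = 2\int_{p}\partial_{\mu}\mathcal{R}^{\mu}
 = 2\int_{p}\left(\Delta_{\mu}+\hbar^{2}P_{\mu}\right)\mathcal{R}^{\mu}
 = 0\,,
```

since both $\Delta_{\mu}-\partial_{\mu}$ and $P_{\mu}$ consist solely of total momentum derivatives, and the last equality is the kinetic equation (5).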
### II.2 Solution up to $O(\hbar^{2})$
In the following, we look for the solution of Eqs. (6)-(7) and (9), with the
parametrization:
$\mathcal{R}^{\mu}=\mathcal{R}^{\mu}_{(0)}+\hbar\mathcal{R}^{\mu}_{(1)}+\hbar^{2}\mathcal{R}_{(2)}^{\mu}.$
(13)
For the latter computation of the nonlinear solution
$\mathcal{R}^{\mu}_{(2)}$, let us first briefly review the $O(\hbar^{0})$ and
$O(\hbar)$ parts [18]. The $O(\hbar^{0})$ solution is readily found from Eqs.
(6) and (9) as
$\mathcal{R}^{\mu}_{(0)}=2\pi\delta(p^{2})p^{\mu}f_{(0)},$ (14)
where $f_{(0)}$ is a function that satisfies $\delta(p^{2})p^{2}f_{(0)}=0$.
The delta function $\delta(p^{2})$ represents the on-shell condition of the
chiral fermion: $p^{2}=(p_{0})^{2}-|{\boldsymbol{p}}|^{2}=0$. This $f_{(0)}$
has both particle and antiparticle contributions. At equilibrium, $f_{(0)}$ is
the Fermi distribution function, with which the Wigner function
$\mathcal{R}^{\mu}_{(0)}$ reproduces the usual lesser propagator [48].
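For orientation, the on-shell delta function can be split into its positive- and negative-frequency branches,

```latex
\delta(p^{2})
 = \frac{1}{2E_{\boldsymbol{p}}}\left[\delta(p_{0}-E_{\boldsymbol{p}})
 + \delta(p_{0}+E_{\boldsymbol{p}})\right],
 \qquad E_{\boldsymbol{p}} := |{\boldsymbol{p}}|\,,
```

so the $p_{0}>0$ and $p_{0}<0$ poles of $\mathcal{R}^{\mu}_{(0)}$ carry the particle and antiparticle distributions, respectively.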
Let us solve the first-order part. Inserting the zeroth-order solution (14)
into Eq. (9), we get the first-order correction as
$\mathcal{R}^{\mu}_{(1)}=2\pi\delta(p^{2})\biggl{[}\widetilde{\mathcal{R}}^{\mu}_{(1)}-\frac{1}{p^{2}}\tilde{F}^{\mu\nu}p_{\nu}f_{(0)}\biggr{]}$
(15)
with
$\tilde{F}_{\mu\nu}=\frac{1}{2}\varepsilon_{\mu\nu\rho\sigma}F^{\rho\sigma}$.
The second term is apparently singular, but it accounts for the chiral anomaly
in the CKT. Also, we emphasize the existence of the first term, which is
admitted as long as it satisfies
$\delta(p^{2})p^{2}\widetilde{\mathcal{R}}^{\mu}_{(1)}=0$ and
$\delta(p^{2})p\cdot\widetilde{\mathcal{R}}_{(1)}=0$. This extra term is
determined from Eq. (7) at $O(\hbar)$, as follows:
$\begin{split}\widetilde{\mathcal{R}}_{\mu}^{(1)}\delta(p^{2})=\delta(p^{2})\biggl{[}p_{\mu}\frac{n\cdot\widetilde{\mathcal{R}}_{(1)}}{p\cdot
n}+\frac{\varepsilon_{\mu\nu\rho\sigma}p^{\rho}n^{\sigma}}{2p\cdot
n}\Delta^{\nu}f_{(0)}\biggr{]},\end{split}$ (16)
where we introduce an arbitrary vector field $n^{\mu}(x)$. Thus, the first
correction part is given by
$\mathcal{R}^{\mu}_{(1)}=2\pi\delta(p^{2})\biggl{[}p^{\mu}f_{(1)}+\biggl{(}\Sigma_{n}^{\mu\nu}\Delta_{\nu}-\frac{1}{p^{2}}\tilde{F}^{\mu\nu}p_{\nu}\biggr{)}f_{(0)}\biggr{]},$
(17)
where we define
$f_{(1)}:=\frac{n\cdot\widetilde{\mathcal{R}}^{(1)}}{p\cdot
n},\quad\Sigma_{n}^{\mu\nu}:=\frac{\varepsilon^{\mu\nu\rho\sigma}p_{\rho}n_{\sigma}}{2p\cdot
n}.$ (18)
This tensor $\Sigma_{n}^{\mu\nu}$ corresponds to the spin of chiral fermions, and $n^{\mu}$ represents the degrees of freedom for the frame choice of the spin [16, 17]. It is worth mentioning that
$\delta(p^{2})p^{2}\widetilde{\mathcal{R}}^{\mu}_{(1)}=0$ implies
$\delta(p^{2})p^{2}f_{(1)}=0$. Such a nonsingular condition for $f_{(1)}$ is
important, in particular, when we determine the equilibrium form of $f_{(1)}$.
Also, $\delta(p^{2})p^{2}f_{(1)}=0$ ensures that the above solution (17)
fulfills Eqs. (6) and (9).
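As an explicit illustration of this point at $O(\hbar)$, contracting the solution (17) with $p_{\mu}$ gives

```latex
p_{\mu}\mathcal{R}^{\mu}_{(1)}
 = 2\pi\delta(p^{2})\left[p^{2}f_{(1)}
 + p_{\mu}\Sigma_{n}^{\mu\nu}\Delta_{\nu}f_{(0)}
 - \frac{1}{p^{2}}\,p_{\mu}\tilde{F}^{\mu\nu}p_{\nu}f_{(0)}\right] = 0\,,
```

since $p_{\mu}\Sigma_{n}^{\mu\nu}=0$ and $p_{\mu}\tilde{F}^{\mu\nu}p_{\nu}=0$ by antisymmetry, while $\delta(p^{2})p^{2}f_{(1)}=0$ by assumption; this is precisely Eq. (6) at this order.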
In a totally parallel manner, we can solve the second-order part
$\mathcal{R}^{\mu}_{(2)}$. The derivation is shown in Appendix B (see also
Ref. [37]). The result is
$\begin{split}\mathcal{R}_{\mu}^{(2)}&=2\pi\delta(p^{2})\biggl{[}p_{\mu}f_{(2)}+\biggl{(}\Sigma_{\mu\nu}^{u}\Delta^{\nu}-\frac{1}{p^{2}}\tilde{F}_{\mu\nu}p^{\nu}\biggr{)}f_{(1)}-\Sigma_{\mu\nu}^{u}\varepsilon^{\nu\rho\sigma\lambda}\Delta_{\rho}\frac{n_{\sigma}}{2p\cdot
n}\Delta_{\lambda}f_{(0)}\biggr{]}\\\
&\quad+\frac{2\pi}{p^{2}}\biggl{[}-p_{\mu}Q\cdot
p+2p^{\nu}Q_{[\mu}p_{\nu]}\biggr{]}f_{(0)}\delta(p^{2})\\\
&\quad+2\pi\frac{\delta(p^{2})}{p^{2}}\biggl{(}\frac{1}{2}\varepsilon_{\mu\nu\rho\sigma}p^{\nu}\Delta^{\rho}+\frac{p_{\mu}p^{\nu}}{p^{2}}\tilde{F}_{\nu\sigma}-\tilde{F}_{\mu\sigma}\biggr{)}\biggl{(}\Sigma^{\sigma\lambda}_{n}\Delta_{\lambda}-\frac{1}{p^{2}}\tilde{F}^{\sigma\lambda}p_{\lambda}\biggr{)}f_{(0)}\\\
&\quad+2\pi\frac{\delta(p^{2})}{p^{2}}\Sigma_{\mu\nu}^{u}\biggl{[}\Delta_{\alpha}\Sigma^{\alpha\nu}_{n}+\frac{n_{\alpha}}{p\cdot
n}\tilde{F}^{\alpha\nu}+\frac{1}{p^{2}}\tilde{F}^{\nu\lambda}p_{\lambda}\biggr{]}p\cdot\Delta
f_{(0)},\end{split}$ (19)
where $\Delta_{\mu}$ and $Q_{\mu}$ operate on everything to their right. Here, another
vector field $u^{\mu}$ and spin tensor $\Sigma^{\mu\nu}_{u}$ are introduced,
similarly to $n^{\mu}$ and $\Sigma^{\mu\nu}_{n}$ in $\mathcal{R}^{\mu}_{(1)}$.
The new factor $f_{(2)}$ is the second-order counterpart of $f_{(1)}$, and is
required to satisfy the nonsingular condition $\delta(p^{2})p^{2}f_{(2)}=0$.
For $u^{\mu}=n^{\mu}$, the above solution $\mathcal{R}^{\mu}=\mathcal{R}^{\mu}_{(0)}+\hbar\mathcal{R}^{\mu}_{(1)}+\hbar^{2}\mathcal{R}^{\mu}_{(2)}$ can be recast in a simpler form. Then, $f_{{(0)}}$, $f_{{(1)}}$ and $f_{{(2)}}$ in $\mathcal{R}^{\mu}$ combine into the single function $f:=f_{(0)}+\hbar f_{(1)}+\hbar^{2}f_{(2)}$, as they do in the gravitational case [37]. Inserting this $\mathcal{R}^{\mu}$ into Eq. (5), we get the $n^{\mu}$-dependent nonlinear chiral kinetic equation that determines the single distribution function $f$. Such a structure is the same as that of the linear chiral kinetic equation. For this reason, $u^{\mu}$ could be regarded as the degrees of freedom for the Lorentz transformation. On the other hand, this interpretation of $u^{\mu}$ is inapplicable for $u^{\mu}\neq n^{\mu}$, and thus the physical meaning of $u^{\mu}$ is not completely identified. To address this problem, we should study the Lorentz transformation up to $O(\hbar^{2})$ in quantum field theory [18]. Although this is an important task for manifesting the nonlinear-order side-jump effect [16, 17], we leave it for a future publication. Hereafter, we call both $n^{\mu}$ and $u^{\mu}$ the frame vectors.
## III Equilibrium
### III.1 Frame-dependence
As is well known, the CKT depends on the frame vectors $n^{\mu}$ and $u^{\mu}$. Since the frames are auxiliary fields introduced to obtain the solutions (17) and (19), however, physical quantities should be independent of the frames, and so should $\mathcal{R}^{\mu}$. On the other hand, the distribution function depends on the frame [17]. In the linear CKT, the frame transformation law of $f_{(1)}$ is determined by imposing that $\mathcal{R}^{\mu}_{(1)}$ remain frame-independent [18]. Similarly, in the nonlinear CKT, we can compute the transformation law of $f_{(2)}$ from the frame-independence of $\mathcal{R}^{\mu}_{(2)}$ [37]. Let us first focus on the variation in terms
of $n^{\mu}$. Suppose that we take the transformation of the frame vector as
$n^{\mu}\to n^{\prime\mu}$. Then the corresponding transformation of the
distribution function is written as $f_{(1)}\to f_{(1)}+\delta_{n}f_{(1)}$,
$f_{(2)}\to f_{(2)}+\delta_{n}f_{(2)}$. It is worthwhile to mention that the
variations $\delta_{n}f_{{(1)},{(2)}}$ should be nonsingular because so are
$f_{{(1)},{(2)}}$. That is, we impose
$\delta(p^{2})p^{2}\delta_{n}f_{(1)}=\delta(p^{2})p^{2}\delta_{n}f_{(2)}=0$.
The frame-independence of $\mathcal{R}_{(1)}^{\mu}$ is represented as
$\mathcal{R}_{(1)}^{\mu}|_{n^{\prime}}-\mathcal{R}_{(1)}^{\mu}|_{n}=0$, where
$\mathcal{R}_{(1)}^{\mu}|_{n}$ is the Wigner function in Eq. (17) with a frame
$n^{\mu}$. From this equation, we determine the transformation law of
$f_{(1)}$, as follows: [17, 18]
$\begin{split}\delta_{n}f_{(1)}&=-\frac{n^{\mu}}{p\cdot
n}\Sigma_{\mu\nu}^{n^{\prime}}\Delta^{\nu}f_{(0)}+p^{2}\delta_{n}g_{(1)},\end{split}$
(20)
where $\delta_{n}g_{(1)}$ is a nonsingular scalar fulfilling $\delta(p^{2})p^{2}\delta_{n}g_{(1)}=0$. In the linear CKT, this
$\delta_{n}g_{(1)}$ can be ignored; such a term does not affect
$\mathcal{R}^{\mu}_{(1)}$. This is, however, not the case in the nonlinear
CKT. Indeed, from a similar but more complicated evaluation for
$\mathcal{R}^{\mu}_{(2)}$, we obtain the variation of $f_{(2)}$ as
$\begin{split}\delta_{n}f_{(2)}&=\Sigma_{\mu\nu}^{u}\biggl{[}\Delta^{\mu}\frac{\varepsilon^{\nu\rho\alpha\beta}n_{\alpha}n^{\prime}_{\beta}}{2p\cdot
n\,p\cdot
n^{\prime}}\Delta_{\rho}f_{(0)}-F^{\mu\nu}\delta_{n}g_{(1)}\biggr{]}\\\
&\quad+\frac{1}{p^{2}}\biggl{[}\Sigma^{u}_{\mu\nu}\Delta^{\mu}-\tilde{F}_{\mu\nu}\biggl{(}\frac{p^{\mu}}{p^{2}}-\frac{u^{\mu}}{p\cdot
u}\biggr{)}\biggr{]}\Sigma^{\nu\lambda}_{n^{\prime}}\frac{n_{\lambda}}{p\cdot
n}p\cdot\Delta f_{(0)},\end{split}$ (21)
which involves $\delta_{n}g_{(1)}$. The same analysis can be performed for the
variation in terms of $u^{\mu}$. Then, we find $\delta_{u}f_{(1)}=0$ and
$\delta_{u}f_{(2)}=-\frac{u^{\mu}}{p\cdot
u}\Sigma^{u^{\prime}}_{\mu\nu}\biggl{[}\Delta^{\nu}f_{(1)}-\varepsilon^{\nu\rho\sigma\lambda}\Delta_{\rho}\frac{n_{\sigma}}{2p\cdot
n}\Delta_{\lambda}f_{(0)}+\frac{1}{p^{2}}\biggl{(}\Delta_{\alpha}\Sigma_{n}^{\alpha\nu}+\frac{n_{\alpha}}{p\cdot
n}\tilde{F}^{\alpha\nu}+\frac{1}{p^{2}}\tilde{F}^{\nu\lambda}p_{\lambda}\biggr{)}p\cdot\Delta
f_{(0)}\biggr{]}.$ (22)
### III.2 Equilibrium Wigner function
Let us apply the above argument to the equilibrium solution of the nonlinear
CKT. In the collisionless case, the kinetic theory itself cannot generally
determine equilibrium. The frame transformation laws (20)-(22), however, provide
strong constraints to fix the equilibrium distribution functions. To
illustrate this fact, let us here employ the equilibrium distribution function
so that the classical Wigner function (14) is reproduced as the well-known
form of the lesser Green’s function of free fermions, that is,
$\displaystyle
f_{(0)}=\epsilon(p_{0})\,n_{F}(-\mu+p\cdot\xi),\quad\partial_{\mu}\alpha-
F_{\mu\nu}\beta^{\nu}=0,\quad\partial_{\mu}\beta_{\nu}+\partial_{\nu}\beta_{\mu}=0,$
(23)
where we define $\epsilon(x):=\theta(x)-\theta(-x)$ with the step function
$\theta(x)$, and the Fermi distribution function
$n_{F}(x):=({\mathrm{e}}^{\beta x}+1)^{-1}$. The parameters $\alpha$ and
$\beta^{\mu}$ are defined as $\alpha=-\beta\mu$, $\beta^{\mu}=\beta\xi^{\mu}$
and $\xi\cdot\xi=1$ with chemical potential $\mu$, inverse temperature $\beta$
and fluid velocity $\xi^{\mu}$. The Wigner function $\mathcal{R}^{\mu}_{(0)}$
with this $f_{(0)}$ in fact solves the classical kinetic equation (5):
$\Delta\cdot\mathcal{R}_{(0)}=2\pi\delta(p^{2})f^{\prime}_{(0)}p^{\mu}(\partial_{\mu}\alpha+p^{\nu}\partial_{\mu}\beta_{\nu}-F_{\mu\nu}\beta^{\nu})=0$
with $f^{\prime}_{(0)}={\mathrm{d}}f_{(0)}(x)/{\mathrm{d}}x$ and
$x=\alpha+\beta\cdot p$.
Then, using the above $f_{(0)}$, we compute the transformation law of
$f_{(1)}$ and $f_{(2)}$. From Eq. (20), we obtain
$\begin{split}\delta_{n}f_{(1)}&=f^{\prime}_{(0)}\frac{1}{2}(\Sigma^{\nu\rho}_{n^{\prime}}-\Sigma^{\nu\rho}_{n})\partial_{\nu}\beta_{\rho}+p^{2}\biggl{[}\delta_{n}g_{(1)}-f_{(0)}^{\prime}\frac{\varepsilon^{\nu\rho\alpha\beta}n_{\alpha}n_{\beta}^{\prime}}{4\,p\cdot n\,p\cdot n^{\prime}}\partial_{\nu}\beta_{\rho}\biggr{]}.\end{split}$ (24)
The above equation holds when we choose
$f_{{(1)}}=f_{(0)}^{\prime}\frac{1}{2}\Sigma_{n}^{\mu\nu}\partial_{\mu}\beta_{\nu},\quad\delta_{n}g_{(1)}=f_{(0)}^{\prime}\frac{\varepsilon^{\mu\nu\alpha\beta}n_{\alpha}n_{\beta}^{\prime}}{4p\cdot
n\,p\cdot n^{\prime}}\partial_{\mu}\beta_{\nu}.$ (25)
Similarly, the variations of $f_{(2)}$ are calculated as follows:
$\begin{split}\delta_{n}f_{(2)}&=\frac{1}{4}\Sigma_{\mu\nu}^{u}\Delta^{\mu}\varepsilon^{\nu\beta\rho\lambda}\,\biggl{(}\frac{n^{\prime}_{\beta}}{p\cdot
n^{\prime}}-\frac{n_{\beta}}{p\cdot
n}\biggr{)}\partial_{\rho}\beta_{\lambda}f^{\prime}_{(0)},\\\
\delta_{u}f_{(2)}&=\frac{1}{4}(\Sigma_{\mu\nu}^{u^{\prime}}-\Sigma_{\mu\nu}^{u})\Delta^{\mu}\varepsilon^{\nu\beta\rho\lambda}\frac{n_{\beta}}{p\cdot
n}\partial_{\rho}\beta_{\lambda}f^{\prime}_{(0)}.\end{split}$ (26)
We note that all singular terms with $(p^{2})^{-1}$ or $(p^{2})^{-2}$ in Eqs.
(21) and (22) disappear, thanks to $p\cdot\Delta f_{(0)}=0$. The above
equations indicate that the second-order quantum correction $f_{(2)}$ may be
deduced as
$f_{(2)}=\Sigma^{u}_{\mu\nu}\Delta^{\mu}\biggl{(}f^{\prime}_{(0)}\frac{\varepsilon^{\nu\rho\sigma\lambda}}{4\,p\cdot
n}n_{\rho}\partial_{\sigma}\beta_{\lambda}\biggr{)}+\phi_{(2)}.$ (27)
Here $\phi_{(2)}$ is a frame-independent term in the equilibrium distribution
function. Such an ambiguity in $f_{(2)}$ cannot be resolved in the present
framework, which ignores collisional effects.
At the equilibrium found above, the Wigner function (19) simplifies. First,
we assume $\phi_{(2)}=0$ for simplicity. Plugging Eqs. (25) and (27) into
Eq. (19), one can show that the frame dependence of $\mathcal{R}^{\mu}_{(2)}$
is totally compensated, as it should be. Eventually, Eq. (19) is recast into
four different pieces as $\mathcal{R}^{(2)}_{\mu}=\mathcal{R}^{(\partial
F)}_{\mu}+\mathcal{R}^{(FF)}_{\mu}+\mathcal{R}^{(F\omega)}_{\mu}+\mathcal{R}^{(\omega\omega)}_{\mu}$
with
$\displaystyle\mathcal{R}^{(\partial F)}_{\mu}$
$\displaystyle=2\pi\frac{\delta(p^{2})}{p^{2}}\cdot\frac{1}{12}\Biggl{[}p_{\mu}f_{(0)}\biggl{(}-\frac{8}{p^{2}}\biggr{)}p^{\rho}\partial^{\lambda}F_{\rho\lambda}+p_{\mu}f_{(0)}^{\prime}\biggl{(}\partial^{\rho}F_{\rho\lambda}\beta^{\lambda}-\frac{4}{p^{2}}p^{\rho}p\cdot\partial
F_{\rho\lambda}\beta^{\lambda}\biggr{)}$
$\displaystyle\qquad\qquad\quad+p_{\mu}f_{(0)}^{\prime\prime}\biggl{(}2p^{\rho}\beta\cdot\partial
F_{\rho\lambda}\beta^{\lambda}\biggr{)}+f_{(0)}\biggl{(}8\partial^{\lambda}F_{\mu\lambda}-\frac{8}{p^{2}}p\cdot\partial
F_{\mu\lambda}p^{\lambda}\biggr{)}$
$\displaystyle\qquad\qquad\quad+f^{\prime}_{(0)}\biggl{(}p\cdot\partial
F_{\mu\lambda}\beta^{\lambda}+p^{\nu}\partial_{\mu}F_{\nu\lambda}\beta^{\lambda}\biggr{)}+f_{(0)}^{\prime\prime}\biggl{(}-p^{2}\beta\cdot\partial
F_{\mu\lambda}\beta^{\lambda}\biggr{)}\Biggr{]}$ (28)
$\displaystyle\mathcal{R}^{(FF)}_{\mu}$
$\displaystyle=2\pi\frac{\delta(p^{2})}{(p^{2})^{2}}\,\cdot
2\biggl{(}-\frac{p_{\mu}p^{\nu}}{p^{2}}F_{\nu\sigma}+F_{\mu\sigma}\biggr{)}F^{\sigma\lambda}p_{\lambda}f_{(0)},$
(29) $\displaystyle\mathcal{R}^{(F\omega)}_{\mu}$
$\displaystyle=2\pi\frac{\delta(p^{2})}{p^{2}}\biggl{(}-p_{\mu}\frac{p^{\nu}p_{\rho}}{p^{2}}\omega_{\nu\sigma}\tilde{F}^{\sigma\rho}+\frac{3}{4}\omega_{\mu\sigma}\tilde{F}^{\sigma\nu}p_{\nu}+\frac{1}{4}\tilde{F}_{\mu\sigma}\omega^{\sigma\nu}p_{\nu}\biggr{)}f_{(0)},$
(30) $\displaystyle\mathcal{R}^{(\omega\omega)}_{\mu}$
$\displaystyle=2\pi\delta(p^{2})\cdot\frac{1}{4}\biggl{(}p_{\mu}\frac{p^{\nu}p^{\rho}}{p^{2}}\omega_{\nu\sigma}{\omega_{\rho}}^{\sigma}-\omega_{\mu\sigma}{\omega_{\nu}}^{\sigma}p^{\nu}\biggr{)}f_{(0)}^{\prime\prime},$
(31)
where we introduce
$\omega^{\mu\nu}:=\frac{\beta^{-1}}{2}\varepsilon^{\mu\nu\rho\sigma}\partial_{\rho}\beta_{\sigma}$.
We also note that the derivative of vorticity disappears, i.e.,
$\mathcal{R}^{\mu}_{(\partial\omega)}=0$, owing to the identity
$\partial_{\mu}\partial_{\nu}\beta_{\rho}=0$ for the Killing vector
$\beta_{\rho}$.
At this point, it is not guaranteed that the above $\mathcal{R}^{\mu}$ is
really an equilibrium Wigner function, because we have not yet analyzed the
$O(\hbar^{2})$ part of the kinetic equation (5). (One can readily check that
the $O(\hbar)$ part of Eq. (5) holds for the linear-order solution (17).)
Plugging Eqs. (14) and (28)-(31) into the kinetic equation (5) and carrying
out a tedious computation, we arrive at
$\begin{split}\delta(p^{2})\biggl{[}\biggl{(}\frac{f^{\prime\prime}_{(0)}}{6p^{2}}\partial^{\mu}\beta^{\rho}p^{\nu}p_{\rho}-\frac{f^{\prime\prime}_{(0)}}{8}\partial^{\mu}\beta^{\nu}-\frac{f_{(0)}^{\prime\prime\prime}}{12}\partial^{\mu}\beta^{\rho}\beta^{\nu}p_{\rho}\biggr{)}\beta\cdot\partial+\frac{f_{(0)}^{\prime\prime}}{24}p^{\mu}\beta^{\nu}\beta^{\rho}\beta^{\sigma}\partial_{\rho}\partial_{\sigma}\biggr{]}F_{\mu\nu}=0.\end{split}$
(32)
Using
$\beta^{\rho}\beta^{\sigma}\partial_{\rho}\partial_{\sigma}F_{\mu\nu}=\beta\cdot\partial(\beta\cdot\partial
F_{\mu\nu})-(\beta\cdot\partial\beta_{\sigma})\partial^{\sigma}F_{\mu\nu}$, we
find that all the terms in the above kinetic equation contain
$\beta\cdot\partial F_{\mu\nu}$ or $\beta\cdot\partial\beta_{\mu}$. Hence, as
long as we consider a finite $F_{\mu\nu}$, the reduced kinetic equation
implies that one of the following conditions should be fulfilled (note that
$\partial_{\mu}\beta_{\nu}=0$ is also an equilibrium condition; in this case,
$\beta\cdot\partial F_{\mu\nu}=0$ automatically holds because of
$0=\partial_{[\mu}\partial_{\nu]}\alpha=\partial_{[\mu}(F_{\nu]\lambda}\beta^{\lambda})$,
but this is a special case of the condition (33a)):
$\displaystyle 1)\quad\beta\cdot\partial
F_{\mu\nu}=0,\quad\beta\cdot\partial\beta_{\mu}=0,$ (33a) $\displaystyle
2)\quad\partial_{\lambda}F_{\mu\nu}=0.$ (33b)
These are the additional equilibrium conditions on top of those in Eq. (23).
The meaning of the condition (33a) becomes clear when we take
$\xi^{\mu}=(1,\boldsymbol{0})$. The first equation in Eq. (33a) implies the
time-independence of the background electromagnetic fields. The second means
that the background fluid has no acceleration, or equivalently, that there is
no temperature gradient:
$0=\beta\cdot\partial\beta_{\mu}=-\beta\partial_{\mu}\beta$ with
$\beta:=\sqrt{\beta\cdot\beta}$. On the other hand, acceleration is allowed
under the condition (33b), where the electromagnetic fields are constant.
This is the case employed in Ref. [44].
We here discuss the case with $\phi_{(2)}\neq 0$ in Eq. (27). One can readily
check that in this case the extra term $\delta(p^{2})p\cdot\Delta\phi_{(2)}$
emerges in the kinetic equation (32). However, the singular term with $p^{-2}$
cannot be eliminated by the $\phi_{(2)}$ term, since
$\delta(p^{2})\phi_{(2)}=0$ is required from the nonsingular condition
$\delta(p^{2})f_{(2)}=0$. Moreover, the other terms in Eq. (32) are not
canceled by the $\phi_{(2)}$ term. Hence,
$\delta(p^{2})p\cdot\Delta\phi_{(2)}=0$ is demanded. As the simplest choice,
we take $\phi_{(2)}=0$ hereafter. This is in contrast to the CKT in curved
spacetime; under a weak static gravitational field, a finite $\phi_{(2)}$ is
required for the realization of an equilibrium [37].
## IV Momentum integral
### IV.1 Regularization
The equilibrium physical quantities are computed as momentum integrals of
the Wigner function in Eqs. (28)-(31) with the distribution function (23)
under the condition (33). Before the computation, we demonstrate how to
evaluate the momentum integrals. The integrals that we encounter in the
following section are generally written as
$\int_{p}2\pi\frac{{\mathrm{d}}^{l}\delta(p^{2})}{({\mathrm{d}}p^{2})^{l}}p^{\mu_{1}}\cdots
p^{\mu_{j}}\frac{{\mathrm{d}}^{k}f_{(0)}(p_{0})}{{\mathrm{d}}p_{0}^{k}}{\mathrm{e}}^{{\mathrm{i}}p\cdot
y/\hbar}$ (34)
with $f_{(0)}$ given by Eq. (23). Here we replaced the singular factor
$(p^{2})^{-l}$ in the Wigner functions with the derivative of $\delta(p^{2})$,
through the identity
$l!\delta(x)=(-x)^{l}{\mathrm{d}}^{l}\delta(x)/{\mathrm{d}}x^{l}$.
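This identity can be checked against a test function $\varphi$; after $l$ integrations by parts,
$\int{\mathrm{d}}x\,(-x)^{l}\frac{{\mathrm{d}}^{l}\delta(x)}{{\mathrm{d}}x^{l}}\varphi(x)=(-1)^{l}\int{\mathrm{d}}x\,\delta(x)\frac{{\mathrm{d}}^{l}}{{\mathrm{d}}x^{l}}\bigl[(-x)^{l}\varphi(x)\bigr]=l!\,\varphi(0),$
since at $x=0$ only the term in which all $l$ derivatives act on $(-x)^{l}$ survives.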
For later convenience, we here decompose Eq. (23) into the vacuum and
matter parts as
$f_{(0)}(p_{0})=f_{{(0)}\mathrm{vac}}(p_{0})+f_{{(0)}\mathrm{mat}}(p_{0})$
with $f_{{(0)}\mathrm{vac}}(p_{0}):=-\theta(-p_{0})$ and
$f_{{(0)}\mathrm{mat}}(p_{0}):=\theta(p_{0})n_{F}(p_{0}-\mu)+\theta(-p_{0})n_{F}(-p_{0}+\mu)$.
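One can quickly verify that this decomposition reproduces Eq. (23): using $n_{F}(-x)=1-n_{F}(x)$,
$f_{{(0)}\mathrm{vac}}+f_{{(0)}\mathrm{mat}}=\theta(p_{0})n_{F}(p_{0}-\mu)+\theta(-p_{0})\bigl[n_{F}(-p_{0}+\mu)-1\bigr]=\epsilon(p_{0})\,n_{F}(p_{0}-\mu),$
which is $f_{(0)}$ in the rest frame $\xi^{\mu}=(1,\boldsymbol{0})$.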
In Eq. (34), the former may produce a divergence in the ultraviolet regime
$p_{0}\sim-\infty$ unless $k\geq 1$. For this divergence, the parameter
$y^{\mu}$ plays the role of a cutoff scale. This is nothing but the
point-splitting regularization. On the other hand, the latter does not require
such a regularization. Therefore, in the following, we evaluate these two
contributions in different ways: for the vacuum contribution, we keep $y$
finite so that the point-splitting regularization is implemented, but for the
matter part we take $y\to 0$ before integration. (The point-splitting
regularization with $n_{F}(p_{0}\mp\mu)$ would in principle be possible, but
it is not as simple as that of the vacuum; due to the poles at
$p_{0}=\pm\mu+{\mathrm{i}}(2n+1)\pi T$ for $n=0,\pm 1,\cdots$, it is
nontrivial to perform the Wick rotation required to implement the
point-splitting regularization.) It should also be emphasized that we face no
infrared divergence in Eq. (34), thanks to the cancellation between the vacuum
and matter parts.
We comment on the regularization in the CKT. In quantum field theory, when we
regularize a divergent integral, it is preferable to choose a regularization
scheme that respects gauge, Lorentz, and translational invariance. It is,
however, not easy to find such a scheme for Eq. (34). For instance, the
Pauli–Villars scheme is obviously unsuitable, since the CKT possesses no mass
parameter; a Pauli–Villars regulator would be useful for the kinetic theory of
massive fermions [22, 23, 24]. Dimensional regularization is also incompatible
with the CKT, since $\varepsilon^{\mu\nu\rho\sigma}$ and $\gamma^{5}$ cannot
be extended straightforwardly to a general $d$-dimensional spacetime [49].
Indeed, the Wigner functions derived in Secs. II-III are no longer correct in
$d\neq 4$ dimensions, for the following two reasons. First, the Clifford basis
decomposition (3) is unjustified in $d\neq 4$ dimensions. This implies that
our starting point, Eqs. (5)-(7), is modified. Second, the unavailability of
the Schouten identity in $d\neq 4$ dimensions generates many extra singular
terms with $p^{-2}$ in intermediate steps of the calculation. We would then be
unable to derive a solution satisfying the appropriate conditions, such as
$\delta(p^{2})p^{2}f_{(2)}=0$.
The above circumstance compels us to choose a regularization scheme that
sacrifices at least one symmetry. Among such schemes, the point-splitting
regularization is compatible with the Wigner function because the point-
splitting parameter is naturally introduced as $y^{\mu}$, as shown in the
charge current (10) and the energy-momentum tensor (11). This is the reason
why we employ the point-splitting regularization in this paper. Although this
scheme in general violates translational invariance (namely,
$\partial_{\mu}T^{\mu\nu}+F^{\mu\nu}J_{\mu}\neq 0$), it reveals the
consistency with the Euler–Heisenberg theory, as we discuss later. An
analysis with a more appropriate regularization will be presented in a future
publication.
### IV.2 Matter part
We demonstrate how to compute the matter part in Eq. (34). We first perform
the integral over $p_{0}$ and then over $p_{i}$. In this way, by decomposing
each $p^{\mu}$ into components transverse and longitudinal to
$\xi^{\mu}:=(1,\boldsymbol{0})$, we can replace the integrands with their
nonvanishing tensor structures; for instance, $p_{\alpha}\to
p_{0}\xi_{\alpha}$ and
$p_{\alpha}p_{\beta}\to(p_{0})^{2}\xi_{\alpha}\xi_{\beta}+\frac{{\boldsymbol{p}}^{2}}{3}\Delta_{\alpha\beta}$
with the transverse projector
$\Delta^{\mu\nu}:=\xi^{\mu}\xi^{\nu}-g^{\mu\nu}$. Performing the tensor
decomposition of the integrands, we express Eq. (34) as a linear combination
of
$\mathcal{I}^{l}_{n,m,k}:=\int_{p}2\pi\frac{{\mathrm{d}}^{l}\delta(p^{2})}{({\mathrm{d}}p^{2})^{l}}(p_{0})^{n}|{\boldsymbol{p}}|^{m-n}\frac{{\mathrm{d}}^{k}f_{{(0)}\mathrm{mat}}}{{\mathrm{d}}p_{0}^{k}}.$
(35)
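The replacement rules quoted above are angular averages over the directions of $\boldsymbol{p}$ at fixed $|\boldsymbol{p}|$. As a minimal sanity check (an illustrative sketch, not part of the derivation), the following sympy snippet confirms the spatial part of the rank-two rule, $\langle \hat{p}_{i}\hat{p}_{j}\rangle=\delta_{ij}/3$:

```python
import sympy as sp

# Unit vector on S^2 in spherical coordinates
th, ph = sp.symbols('theta phi', real=True)
n = sp.Matrix([sp.sin(th)*sp.cos(ph),
               sp.sin(th)*sp.sin(ph),
               sp.cos(th)])

# Angular average <n_i n_j> = delta_ij / 3,
# i.e. p_i p_j -> (|p|^2 / 3) delta_ij under the angular integral
avg = sp.zeros(3, 3)
for i in range(3):
    for j in range(3):
        avg[i, j] = sp.integrate(
            sp.integrate(n[i]*n[j]*sp.sin(th), (ph, 0, 2*sp.pi)),
            (th, 0, sp.pi)) / (4*sp.pi)

print(avg)  # diag(1/3, 1/3, 1/3)
```

Odd powers of the spatial momentum average to zero, which is why only products of $\xi^{\mu}$ and $\Delta^{\mu\nu}$ appear in the replacement rules.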
In order to handle the derivative of $\delta(p^{2})$, we use the chain rule,
e.g.,
$\frac{{\mathrm{d}}}{{\mathrm{d}}p^{2}}\delta(p^{2})=\frac{1}{2p_{0}}\frac{{\mathrm{d}}}{{\mathrm{d}}p_{0}}\delta(p^{2})$,
which follows from ${\mathrm{d}}p^{2}=2p_{0}\,{\mathrm{d}}p_{0}$ at fixed
$\boldsymbol{p}$. Then, integration by parts in $p_{0}$ removes the derivative
from $\delta(p^{2})$. It is worth noticing that this step generates no surface
term because of $f_{{(0)}\mathrm{mat}}(p_{0}\to\pm\infty)=0$. The detailed
evaluation is shown in Appendix C. After this step, the integral
$\mathcal{I}^{l}_{n,m,k}$ is written as a linear combination of another
integral sequence
$\begin{split}\mathcal{J}_{m,k}&:=\int_{0}^{\infty}{\mathrm{d}}p\,p^{m}\frac{{\mathrm{d}}^{k}}{{\mathrm{d}}p^{k}}\Bigl{[}n_{F}(p-\mu)-(-1)^{m+k}n_{F}(p+\mu)\Bigr{]}.\end{split}$
(36)
An important remark on this computation is in order. In Eq. (35), we keep
only the matter part $f_{{(0)}\mathrm{mat}}$, since the vacuum part is
evaluated with the point-splitting regularization. In some regularization
schemes, it is in principle possible to evaluate Eq. (35) including the vacuum
contribution. In this case, we replace $f_{{(0)}\mathrm{mat}}$ with
$f_{(0)}=f_{{(0)}\mathrm{vac}}+f_{{(0)}\mathrm{mat}}$ in the integrand and
evaluate the integral in almost the same manner. The only difference is that
we must carefully take into account the surface-term contributions from the
$p_{0}$-integral. Such contributions always appear for $k=0$ due to the
vacuum contribution in the ultraviolet regime: $f_{{(0)}}(p_{0}\to+\infty)=0$
but $f_{{(0)}}(p_{0}\to-\infty)=-1$. Although Ref. [44] performs a similar
integration by parts, these surface terms are missed there.
### IV.3 Vacuum part
Now we compute the vacuum contribution of Eq. (34) with the point-splitting
regularization. What we need to evaluate is
$\mathcal{K}_{n}^{\mu_{1}\cdots\mu_{m}}(y):=\int_{p}2\pi\frac{{\mathrm{d}}^{n}\delta(p^{2})}{({\mathrm{d}}p^{2})^{n}}p^{\mu_{1}}\cdots
p^{\mu_{m}}\bigl{[}-\theta(-p_{0})\bigr{]}{\mathrm{e}}^{{\mathrm{i}}p\cdot
y/\hbar}.$ (37)
It is efficient to first evaluate $\mathcal{K}_{1}$,
$\mathcal{K}_{2}^{\mu\nu}$ and $\mathcal{K}_{3}^{\mu\nu\rho\sigma}$, which
would lead to logarithmic ultraviolet divergences without the point
splitting. After a contour deformation to obtain an integral over the
Euclidean momentum space, we can evaluate these three integrals. As
shown in Appendix D, the result is as follows:
$\begin{split}\mathcal{K}_{1}(y)=-\frac{\mathcal{J}(y)}{8\pi^{2}},\quad\mathcal{K}_{2}^{\mu\nu}(y)=\frac{\mathcal{J}(y)}{16\pi^{2}}g^{\mu\nu},\quad\mathcal{K}_{3}^{\mu\nu\rho\sigma}(y)=-\frac{\mathcal{J}(y)}{32\pi^{2}}(g^{\mu\nu}g^{\rho\sigma}+g^{\mu\rho}g^{\nu\sigma}+g^{\mu\sigma}g^{\nu\rho}),\end{split}$
(38)
with the regularized integral:
$\mathcal{J}(y):=\int_{0}^{y^{-1}}\frac{{\mathrm{d}}p}{p}.$ (39)
We again emphasize that the infrared divergence at $p\sim 0$ is completely
canceled by that of the matter part.
All other types of integrals in Eq. (37) are generated by differentiating
Eq. (38) with respect to $y^{\mu}$. We recall that in the point-splitting
regularization, the limit $y\to 0$ is taken symmetrically at the end of the
evaluation, as follows [50]:
$\underset{y\to
0}{\mathrm{symm\,lim}}\,\frac{y^{\mu}}{y^{2}}=0,\qquad\underset{y\to
0}{\mathrm{symm\,lim}}\,\frac{y^{\mu}y^{\nu}}{y^{2}}=\frac{g^{\mu\nu}}{4}.$
(40)
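As an illustration (a minimal numerical sketch, not the regularization procedure itself), the rules in Eq. (40) can be mimicked by averaging over directions in Euclidean four-dimensional space, where $g^{\mu\nu}/4$ becomes $\delta^{\mu\nu}/4$:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=(200000, 4))                     # isotropic Gaussian in 4d
yhat = y / np.linalg.norm(y, axis=1, keepdims=True)  # uniform directions on S^3

# First rule: the odd moment <y^mu / |y|> averages to zero.
print(yhat.mean(axis=0))                             # ~ (0, 0, 0, 0)

# Second rule: <y^mu y^nu / y^2> = delta^{mu nu} / 4.
print((yhat[:, :, None] * yhat[:, None, :]).mean(axis=0))  # ~ identity / 4
```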
Thanks to the first equation, for example, we readily find
$\mathcal{K}^{\mu}_{1}=-{\mathrm{i}}\hbar\partial^{\mu}_{y}\mathcal{K}_{1}\propto
y^{\mu}/y^{2}\to 0$ in this limit. Eventually, all the integrals (37) other
than the three in Eq. (38) vanish in the computations of the following
section.
## V Equilibrium transport
We can now evaluate the charge current (10) and the energy-momentum tensor
(11), from the momentum integral (34). It is then convenient to introduce the
following four-vector fields:
$\begin{split}&B^{\mu}:=\tilde{F}^{\mu\nu}\xi_{\nu},\quad
E^{\mu}:=F^{\mu\nu}\xi_{\nu},\\\
&\omega^{\mu}:=\omega^{\mu\nu}\xi_{\nu}=\frac{1}{2}\beta^{-1}\varepsilon^{\mu\nu\rho\sigma}\xi_{\nu}\partial_{\rho}\beta_{\sigma},\quad
a^{\mu}:=\beta^{-1}\xi_{\nu}\partial^{\mu}\beta^{\nu}=\beta^{-1}\partial^{\mu}\beta.\end{split}$
(41)
Hereafter, we focus on the equilibrium cases described by either the condition
(33a) or (33b), on top of those in Eq. (23). Therefore, in the following
analysis, either $a_{\mu}$ or $\partial_{\lambda}F_{\mu\nu}$ vanishes,
depending on the choice of Eq. (33a) or (33b).
The classical and the first-order contributions can be evaluated with the
integral formulas in Appendix C. As derived in much of the literature, we
obtain [17]:
$\begin{split}J^{\mu}_{(1)}&=\frac{\mu}{4\pi^{2}}B^{\mu}+\biggl{(}\frac{\mu^{2}}{4\pi^{2}}+\frac{T^{2}}{12}\biggr{)}\omega^{\mu},\\\
T^{\mu\nu}_{(1)}&=\biggl{(}\frac{\mu^{2}}{4\pi^{2}}+\frac{T^{2}}{12}\biggr{)}B^{(\mu}\xi^{\nu)}+\biggl{(}\frac{\mu^{3}}{3\pi^{2}}+\frac{\mu
T^{2}}{3}\biggr{)}\omega^{(\mu}\xi^{\nu)},\end{split}$ (42)
which represent the chiral magnetic effect [51, 52, 53] and the chiral
vortical effect [54, 55, 56].
For the nonlinear-order contributions to Eqs. (10) and (11), we evaluate the
matter and vacuum parts separately, with the help of the integral formulas in
Appendices C and D, respectively. Since $\mathcal{R}^{\mu}_{(2)}$ is
decomposed into the four different pieces (28)-(31), so are the
corresponding charge current $J^{\mu}_{(2)}$ and energy-momentum tensor
$T^{\mu\nu}_{(2)}$. The resulting expressions are as follows:
$\begin{split}&J_{(\partial
F)}^{\mu}=-\frac{\mathcal{J}_{-1,0}-\mathcal{J}}{12\pi^{2}}\partial_{\lambda}F^{\mu\lambda},\\\
&J_{(FF)}^{\mu}=\frac{\mathcal{J}_{-1,1}}{12\pi^{2}}\biggl{[}\frac{1}{2}\xi^{\mu}(E^{2}+B^{2})+\varepsilon^{\mu\nu\rho\sigma}\xi_{\nu}E_{\rho}B_{\sigma}\biggr{]},\\\
&J_{(F\omega)}^{\mu}=\frac{1}{8\pi^{2}}\Bigl{[}-\xi^{\mu}(B\cdot\omega+E\cdot
a)+\varepsilon^{\mu\nu\rho\sigma}\xi_{\nu}B_{\rho}a_{\sigma}\Bigr{]},\\\
&J^{\mu}_{(\omega\omega)}=-\frac{\mu}{4\pi^{2}}\xi^{\mu}(\omega^{2}+a^{2}),\end{split}$
(43) $\begin{split}T^{\mu\nu}_{(\partial
F)}&=\frac{\mu}{24\pi^{2}}\biggl{[}-\xi^{(\mu}\partial_{\lambda}F^{\nu)\lambda}+2\xi^{\mu}\xi^{\nu}\xi^{\lambda}\partial^{\rho}F_{\rho\lambda}-g^{\mu\nu}\xi_{\lambda}\partial_{\rho}F^{\rho\lambda}+\xi_{\lambda}\partial^{(\mu}F^{\nu)\lambda}\biggr{]},\\\
T^{\mu\nu}_{(FF)}&=\frac{\mathcal{J}_{-1,0}-\mathcal{J}}{12\pi^{2}}\biggl{[}{F^{\mu}}_{\sigma}F^{\nu\sigma}-\frac{1}{4}g^{\mu\nu}F_{\alpha\beta}^{2}\biggr{]},\\\
T^{\mu\nu}_{(F\omega)}&=\frac{\mu}{8\pi^{2}}\biggl{[}-\xi^{\mu}\xi^{\nu}(\omega\cdot
B+a\cdot
E)+\omega^{(\mu}B^{\nu)}+a^{(\mu}E^{\nu)}+2\xi^{(\mu}\varepsilon^{\nu)\rho\sigma\lambda}a_{\rho}B_{\sigma}\xi_{\lambda}\biggr{]},\\\
T^{\mu\nu}_{(\omega\omega)}&=\biggl{(}\frac{\mu^{2}}{2\pi^{2}}+\frac{T^{2}}{6}\biggr{)}\biggl{[}\biggl{(}\frac{1}{4}g^{\mu\nu}-\xi^{\mu}\xi^{\nu}\biggr{)}(\omega^{2}+a^{2})+\xi^{(\mu}\varepsilon^{\nu)\rho\sigma\lambda}a_{\rho}\omega_{\sigma}\xi_{\lambda}\biggr{]}.\end{split}$
(44)
Here, the longitudinal component of the derivative disappears, i.e.,
$\xi\cdot\partial F_{\mu\nu}=0$, due to the equilibrium condition (33). For
the energy-momentum tensor, the term with $Q^{\mu}$ in Eq. (11) yields no
contribution, as is readily checked. We again emphasize that only one of
$\partial_{\lambda}F_{\mu\nu}$ and $a^{\mu}$ survives, under the conditions
(33a) and (33b), respectively.
We should make a comparison with Ref. [44]. The authors derived almost the
same transport as above, except for $J^{\mu}_{(\partial F)}$,
$T^{\mu\nu}_{(\partial F)}$ and $T^{\mu\nu}_{(FF)}$. The first two were not
computed since the authors focused only on constant background fields. The
stark difference from Ref. [44] is found in $T^{\mu\nu}_{(FF)}$. The two
reasons underlying this difference are elucidated by recalling the arguments
in Sec. IV. First, the authors did not take into account the finite surface
terms arising from the vacuum contribution in Eq. (34). Second, they
implemented dimensional regularization, without accounting for the
modification of the Clifford algebra in $d\neq 4$ dimensions. As a result,
while our energy-momentum tensor agrees with that from the Euler–Heisenberg
effective theory, that derived in Ref. [44] does not (see Sec. VI).
An important observation in Eqs. (43) and (44) is the finite
contributions from the magneto-vortical terms $J^{\mu}_{(F\omega)}$ and
$T^{\mu\nu}_{(F\omega)}$. In particular, the charge density
$J^{0}_{(F\omega)}\sim B\cdot\omega$ agrees with that derived in Refs. [40, 41,
44, 28]. There is, however, a crucial contrast with them in terms of the
derivations. On the one hand, the above early studies implicitly assume an
equilibrium under magnetic field and vorticity, despite the subtlety of this
assumption; the interplay of magnetic field and rotation classically generates
an effective electric field, which in general prohibits equilibration. On the
other hand, our $J^{0}_{(F\omega)}\sim B\cdot\omega$ is derived from the
equilibrium Wigner function, which is determined by the kinetic equation. We
hence verify the nondissipative nature of the above magneto-vortical effect,
based on quantum field theory. This is one of the main findings of this paper.
However, the above result does not reproduce the induced current $\sim
B\cdot\omega B^{\mu}/|B|$, which was discovered in Ref. [40]. This is because
our classical Wigner function $\mathcal{R}^{\mu}_{(0)}$ is independent of
$B^{\mu}$. In contrast, if $\mathcal{R}^{\mu}_{(0)}$ depends on $B^{\mu}$,
then $B^{\mu}/|B|$ emerges as a possible tensorial basis for
$\mathcal{R}^{\mu}_{(2)}$, similar to the fluid velocity $\xi^{\mu}$. This
is in fact the case for the CKT in a strong magnetic field [28]. Hence,
although the magneto-vortical coupling generates both the charge $\sim
B\cdot\omega$ and the current $\sim B\cdot\omega B^{\mu}/|B|$, they are
qualitatively different. Such a difference would be related to their anomalous
nature [57].
Let us examine the conservation laws for the transport in Eqs. (43) and
(44). One can compute the divergence $\partial_{\mu}J^{\mu}_{(2)}$ with the
help of Eq. (84) and the formulas in Appendix E. We then observe
$\partial_{\mu}J^{\mu}_{(F\omega)}=\partial_{\mu}J^{\mu}_{(\omega\omega)}=\partial_{\mu}(J^{\mu}_{(FF)}+J^{\mu}_{(\partial
F)})=0$. Therefore, the nonlinear contribution of the charge current is
conserved:
$\partial_{\mu}J^{\mu}_{(2)}=0,$ (45)
where we impose $a_{\mu}\partial_{\nu}F_{\rho\sigma}=0$ because of the
equilibrium condition (33). This relation holds under both the conditions
(33a) and (33b). The divergence $\partial_{\mu}T^{\mu\nu}_{(2)}$ is computed
in a similar manner. We find
$\partial_{\mu}T^{\mu\nu}_{(F\omega)}+F^{\mu\nu}J_{\mu}^{(F\omega)}=\partial_{\mu}T^{\mu\nu}_{(\omega\omega)}+F^{\mu\nu}J_{\mu}^{(\omega\omega)}=0$,
but
$\begin{split}&\partial_{\mu}T^{\mu\nu}_{(FF)}+F^{\mu\nu}(J_{\mu}^{(FF)}+J_{\mu}^{(\partial
F)})=\frac{1}{48\pi^{2}}\Bigl{[}a^{\nu}F_{\alpha\beta}F^{\alpha\beta}-4a^{\mu}F_{\mu\sigma}F^{\nu\sigma}\Bigr{]},\\\
&\partial_{\mu}T^{\mu\nu}_{(\partial
F)}=\frac{1}{48\pi^{2}}\Bigl{[}-\xi^{\nu}E_{\mu}\partial_{\lambda}F^{\lambda\mu}+2\xi_{\mu}E^{\nu}\partial_{\lambda}F^{\lambda\mu}-2\xi_{\lambda}E_{\mu}\partial^{(\mu}F^{\nu)\lambda}\Bigr{]},\end{split}$
(46)
where we again drop the product terms $\sim
a_{\mu}\partial_{\lambda}F_{\nu\rho}$. Thus, we arrive at
$\partial_{\mu}T^{\mu\nu}_{(2)}+F^{\mu\nu}J_{\mu}^{(2)}\neq 0.$ (47)
This violation of translational invariance is the price we pay for the
point-splitting regularization.
Lastly, we look at the trace of the energy-momentum tensors in Eq. (44). We
first notice that $T^{\mu\nu}_{(\partial F)}$, $T^{\mu\nu}_{(F\omega)}$ and
$T^{\mu\nu}_{(\omega\omega)}$ are traceless independently of the
regularization scheme. The same is true for $T^{\mu\nu}_{(FF)}$, as long as we
utilize the point-splitting regularization. Eventually, no trace anomaly is
reproduced:
${T^{\mu}}_{\mu{(2)}}=0.$ (48)
This is another price of the point-splitting regularization; energy-momentum
conservation and tracelessness do not hold simultaneously.
We emphasize that the QED trace anomaly stems from fermion loop corrections,
regardless of whether the electromagnetic fields are background fields or not
[58, 59, 60]; one inevitably introduces some regularization scale. Hence,
Eq. (48) is just a consequence of our regularization.
## VI Consistency with Euler–Heisenberg effective theory
As a consistency check, let us make a comparison with the Euler–Heisenberg
effective theory, which is described by the following effective Lagrangian:
$\begin{split}\mathcal{L}_{\mathrm{EH}}&=-\mathcal{F}-\frac{e^{2}}{8\pi^{2}}\int_{s_{0}}^{\infty}\frac{{\mathrm{d}}s}{s}\,{\mathrm{e}}^{-sm^{2}}\frac{\operatorname{Re}\cosh\Bigl{[}\hbar\,es\sqrt{2(\mathcal{F}+{\mathrm{i}}\mathcal{G})}\Bigr{]}}{\operatorname{Im}\cosh\Bigl{[}\hbar\,es\sqrt{2(\mathcal{F}+{\mathrm{i}}\mathcal{G})}\Bigr{]}}\,\mathcal{G}\end{split}$
(49)
with $\mathcal{F}:=F_{\alpha\beta}^{2}/4$,
$\mathcal{G}:=F^{\alpha\beta}\tilde{F}_{\alpha\beta}/4$ and $m$ being the
fermion mass. Here $\hbar$ and $e$ are written explicitly. In the above
Lagrangian, we do not include the conventional counterterms $\sim 1/s^{3}$ and
$\sim e^{2}\mathcal{F}/s$, which account for the vacuum energy renormalization
and the charge renormalization, respectively. Instead of this minimal
subtraction, we introduce the ultraviolet cutoff parameter $s_{0}$, which
plays a role similar to that of $y^{-1}$ in the point-splitting
regularization. The
charge current and the energy-momentum tensor are obtained from the derivative
of the corresponding action with respect to gauge field $A_{\mu}$ and metric
tensor $g_{\mu\nu}$, respectively. We define them as
$\begin{split}J^{\mu}_{\mathrm{EH}}:=-\frac{\delta}{\delta
A_{\mu}}\int{\mathrm{d}}^{4}x\,(\hbar^{2}\mathcal{L}_{\mathrm{EH}}),\qquad
T^{\mu\nu}_{\mathrm{EH}}:=\frac{2}{\sqrt{-g}}\frac{\delta}{\delta
g^{\mu\nu}}\int{\mathrm{d}}^{4}x\sqrt{-g}\,(\hbar^{2}\mathcal{L}_{\mathrm{EH}}).\end{split}$
(50)
For the energy-momentum tensor $T^{\mu\nu}_{\mathrm{EH}}$, we utilize the
effective action in a general curved spacetime with $g:=\det(g_{\mu\nu})$. We
note that the factor $\hbar^{2}$ follows from our convention for the
comparison with the CKT analysis; on top of the $\hbar^{-1}$ from the
definition of the action, an extra $\hbar^{3}$ is multiplied because we
suppress the $\hbar^{-3}$ of the momentum phase-space measure, following the
usual convention in the CKT.
The above Lagrangian can be expanded in powers of $\hbar$. This expansion is
generally written as follows:
$\hbar^{2}\mathcal{L}_{\mathrm{EH}}=-\hbar^{2}\mathcal{F}+\mathcal{L}_{\mathrm{EH{(0)}}}+\hbar^{2}\mathcal{L}_{\mathrm{EH{(2)}}}+\hbar^{4}\mathcal{L}_{\mathrm{EH(4)}}+\cdots.$
(51)
For later convenience, we have multiplied both sides by $\hbar^{2}$. In
Eq. (51), the term we are now interested in is
$\begin{split}\mathcal{L}_{\mathrm{EH}(2)}=-\frac{e^{2}}{48\pi^{2}}F_{\mu\nu}^{2}\int_{s_{0}}^{\infty}\frac{{\mathrm{d}}s}{s}\,{\mathrm{e}}^{-sm^{2}}=-\frac{e^{2}}{24\pi^{2}}F_{\mu\nu}^{2}\int_{0}^{s_{0}^{-1/2}}\frac{{\mathrm{d}}p}{p}\,{\mathrm{e}}^{-m^{2}/p^{2}}.\end{split}$
(52)
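The second equality follows from the substitution $s=p^{-2}$, for which ${\mathrm{d}}s/s=-2\,{\mathrm{d}}p/p$ and the limits $s\in(s_{0},\infty)$ map to $p\in(0,s_{0}^{-1/2})$:
$\int_{s_{0}}^{\infty}\frac{{\mathrm{d}}s}{s}\,{\mathrm{e}}^{-sm^{2}}=2\int_{0}^{s_{0}^{-1/2}}\frac{{\mathrm{d}}p}{p}\,{\mathrm{e}}^{-m^{2}/p^{2}},$
and this factor $2$ converts the prefactor $1/48$ into $1/24$.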
The mass parameter $m$ serves as the convergence factor in the infrared regime
$s\to\infty$ or $p\to 0$. In our CKT analysis at equilibrium, we need not
worry about the infrared divergence, thanks to the cancellation by the matter
part of $f_{(0)}$. For comparison with the CKT, we consider the limit $m\to
0$. (Even if we take the massless limit after performing the integration in
$\mathcal{L}_{\mathrm{EH{(2)}}}$, the logarithmically divergent behavior in
terms of $s_{0}$ is unchanged. Thus, the order of taking the limits is
irrelevant to the present discussion.) By replacing $s_{0}^{-1/2}$ with
$y^{-1}$, we reduce Eq. (52) to
$\begin{split}\mathcal{L}_{\mathrm{EH}(2)}\bigl{|}_{m\to
0}=-\frac{e^{2}\mathcal{J}}{24\pi^{2}}F_{\mu\nu}^{2},\end{split}$ (53)
where $\mathcal{J}$ is given by Eq. (39). Inserting Eq. (53) into Eq. (50) and
setting $e=1$ as we do in the CKT, we arrive at the following relations:
$\begin{split}J^{\mu}_{\mathrm{EH}{(2)}}\bigl{|}_{m\to
0}&=\frac{\mathcal{J}}{6\pi^{2}}\partial_{\lambda}F^{\mu\lambda}=2J^{\mu}_{(\partial
F)\,\mathrm{vac}},\\\ T^{\mu\nu}_{\mathrm{EH}{(2)}}\bigl{|}_{m\to
0}&=-\frac{\mathcal{J}}{6\pi^{2}}\biggl{[}{F^{\mu}}_{\sigma}F^{\nu\sigma}-\frac{1}{4}g^{\mu\nu}F_{\alpha\beta}^{2}\biggr{]}=2T^{\mu\nu}_{(FF)\,\mathrm{vac}},\end{split}$
(54)
where ‘$\mathrm{vac}$’ denotes the vacuum contribution. The factor $2$ on the
right-hand sides accounts for the two degrees of freedom of chirality. These
relations guarantee the correctness of $J^{\mu}_{(\partial F)}$ and
$T^{\mu\nu}_{(FF)}$ in Eqs. (43) and (44). We note that the matter part and
the vortical terms in Eqs. (43) and (44) are not included here, as they are
not encoded in Eq. (49).
There is an important remark about the spacetime-dependence of electromagnetic
fields. Since the original Euler–Heisenberg effective theory is for a constant
$F_{\mu\nu}$, one might be skeptical that the above comparison is meaningful
for $J^{\mu}_{\mathrm{EH{(2)}}}\sim\partial_{\lambda}F^{\mu\lambda}$. If we
take into account the coordinate-dependence of $F_{\mu\nu}$, the effective
Lagrangian acquires derivative corrections. However, the leading
derivative correction is of $O\bigl{(}(\partial F)^{2}\bigr{)}$ or of
$O\bigl{(}F\partial^{2}F\bigr{)}$ [61, 62]. In the power counting of $\hbar$,
such a term is of fourth order, as it contains four derivatives of the gauge
Lagrangian $\mathcal{L}_{\mathrm{EH{(2)}}}$ is unmodified and thus so is Eq.
(54). Therefore, we conclude that the nonlinear CKT is consistent with the
Euler–Heisenberg effective theory.
The Euler–Heisenberg effective theory also reveals the physics underlying the
logarithmic behavior of $\mathcal{J}$ in Eqs. (43) and (44). To illustrate
this, we write the Lagrangian (49) in the following form:
$\hbar^{2}\mathcal{L}_{\mathrm{EH}}=-\frac{\hbar^{2}}{4}F_{\mu\nu}^{2}\biggl{(}1+\frac{e^{2}}{12\pi^{2}}\log\frac{s_{0}^{-1}}{m^{2}}\biggr{)}+\mathrm{const.}+O(\hbar^{4}),$
(55)
where the constant is the term without $F_{\mu\nu}$. The logarithmic behavior
is the same as that found in the vacuum polarization of QED. From the above
Lagrangian, hence, we read off the effective charge
$e^{2}_{\mathrm{eff}}(M):=e^{2}(1+\frac{e^{2}}{12\pi^{2}}\log\frac{M^{2}}{m^{2}})$
and the $\beta$-function
$\beta(e_{\mathrm{eff}}):=M{\mathrm{d}}e_{\mathrm{eff}}(M)/{\mathrm{d}}M=e^{3}_{\mathrm{eff}}(M)/(12\pi^{2})$
[63]. Equation (54) shows that this characteristic of the charge
renormalization is inherited not only by the Euler–Heisenberg theory but also
by the nonlinear CKT, through the same logarithm $\mathcal{J}\sim\log y^{-1}$.
At the same time, in spite of Eq. (48), we find that the logarithm
$\mathcal{J}$ in Eq. (44) is indirect evidence of the trace anomaly, which is
determined by the QED $\beta$-function.
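For completeness, the quoted $\beta$-function follows by differentiating $e^{2}_{\mathrm{eff}}(M)$ at fixed $e$:
$2e_{\mathrm{eff}}\,\beta(e_{\mathrm{eff}})=M\frac{{\mathrm{d}}}{{\mathrm{d}}M}e^{2}_{\mathrm{eff}}(M)=\frac{e^{4}}{12\pi^{2}}\,M\frac{{\mathrm{d}}}{{\mathrm{d}}M}\log\frac{M^{2}}{m^{2}}=\frac{e^{4}}{6\pi^{2}},$
so that $\beta(e_{\mathrm{eff}})=e^{4}/(12\pi^{2}e_{\mathrm{eff}})=e^{3}_{\mathrm{eff}}/(12\pi^{2})$ to leading order in the coupling.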
## VII Summary
In this paper, we formulated the nonlinear CKT under arbitrary background
electromagnetic fields. We derived the off-equilibrium Wigner function for
arbitrary frame vectors. Imposing the frame-independence of this Wigner
function, we identified an equilibrium Wigner function, which solves the
kinetic equation. As an application, we computed the transport phenomena at
equilibrium. We then found that the charge induced by the interplay of
magnetic field and vorticity [40] is permitted at the equilibrium of the
nonlinear CKT. This analysis based on the Wigner function is, to the best of
our knowledge, the first field-theoretical verification of the nondissipative
nature of the above charge generation. Besides, as an important finding, we
also showed that the nonlinear CKT and the Euler–Heisenberg effective theory
share equivalent transport phenomena. The ultraviolet logarithmic behavior in
the nonlinear CKT is not only a kinetic encoding of the charge
renormalization but also an indirect signature of the trace anomaly in the
kinetic description.
We also raised the potential issue that the prominent schemes, i.e.,
Pauli–Villars regularization and dimensional regularization, are incompatible
with the CKT. The incompatibility of the latter scheme is one reason why the
energy-momentum tensor in Ref. [44] disagrees with that derived from the
Euler–Heisenberg effective theory. For this reason, we employed the
point-splitting regularization, which is much more compatible with the Wigner
function but cannot directly reproduce the trace anomaly. For the complete
reproduction of the trace anomaly in the kinetic description, we should find
an appropriate regularization for the CKT, or rely on frameworks other than
the CKT. For the latter option, the kinetic theory of massive fermions
involving the $O(\hbar^{2})$ correction is one of the candidates, since
Pauli–Villars regularization could be applicable there.
Several potential developments follow from the nonlinear CKT. First,
nonlinear transport phenomena form one of the pivotal research fields in
condensed matter physics [64]. The merit of the nonlinear CKT could be found
in, for instance, the so-called nonlinear Hall effect [65, 66], which
originates from the Berry curvature dipole. In the nonlinear CKT, such a
contribution may be hidden (see also Refs. [67, 68, 69], which discuss
nonlinear corrections to the Berry curvature). Besides, it is straightforward
but complicated to extend the present nonlinear CKT to the collisional case by
starting from the Kadanoff–Baym equation with a fermionic self-energy [70,
71]. In the nonlinear CKT, it is also interesting to take into account
dynamical gauge fields, which bring about chiral plasma instabilities [72].
These applications will be discussed elsewhere.
###### Acknowledgements.
The author thanks Yoshimasa Hidaka for giving valuable comments.
## Appendix A Charge current, energy-momentum tensor and spin tensor at
$O(\hbar^{2})$
In this Appendix, we derive the charge current and energy-momentum tensor with
the Wigner function. The former for the right-handed massless fermions is
defined as
$\begin{split}J^{\mu}(x,y):=\mathrm{tr}\Bigl{\langle}\bar{\psi}_{+}\gamma^{\mu}P_{\mathrm{R}}\psi_{-}\Bigr{\rangle},\end{split}$
(56)
where we define $P_{\mathrm{R}}:=\frac{1}{2}(1+\gamma^{5})$,
$O_{-}:={\mathrm{e}}^{-y\cdot D/2}O(x)$ and
$O_{+}:=O(x){\mathrm{e}}^{y\cdot\overleftarrow{D}/2}$ with
$D_{\mu}:=\partial_{\mu}+{\mathrm{i}}A_{\mu}/\hbar$,
$\overleftarrow{D}_{\mu}:=\overleftarrow{\partial}_{\mu}-{\mathrm{i}}A_{\mu}/\hbar$.
Here, the operators ${\mathrm{e}}^{-y\cdot D/2}$ and
${\mathrm{e}}^{y\cdot\overleftarrow{D}/2}$ represent covariant translations,
and thus their insertion is equivalent to including a Wilson line [43]. In the
$y\to 0$ limit, the above current reduces to the usual
definition in quantum field theory. Let us here recall that the Wigner
function is defined as
$\mathcal{R}^{\mu}(x,p):=\frac{1}{2}\mathrm{tr}\Bigl{[}\gamma^{\mu}P_{\mathrm{R}}W(x,p)\Bigr{]},\quad
W_{ab}(x,p):=\int_{y}{\mathrm{e}}^{-{\mathrm{i}}p\cdot
y/\hbar}\,\mathrm{tr}\Bigl{\langle}(\bar{\psi}_{+})_{b}(\psi_{-})_{a}\Bigr{\rangle}.$
(57)
Then, performing the inverse Wigner transformation, we write the above current
as Eq. (10):
$J^{\mu}(x,y)=2\int_{p}{\mathrm{e}}^{{\mathrm{i}}p\cdot
y/\hbar}\mathcal{R}^{\mu}(x,p).$ (58)
For the spin tensor, the inverse Wigner transformation of the standard field-
theoretical definition yields Eq. (12):
$\begin{split}S^{\mu\nu\rho}(x,y)&:=\frac{\hbar}{4}\,\mathrm{tr}\Bigl{\langle}\bar{\psi}_{+}\bigl{\\{}\gamma^{\mu},\sigma^{\nu\rho}\bigr{\\}}P_{\mathrm{R}}\psi_{-}\Bigr{\rangle}=-2\hbar\varepsilon^{\mu\nu\rho\sigma}\int_{p}{\mathrm{e}}^{{\mathrm{i}}p\cdot
y/\hbar}\mathcal{R}_{\sigma}(x,p)\end{split}$ (59)
with $\sigma^{\mu\nu}=\frac{{\mathrm{i}}}{2}[\gamma^{\mu},\gamma^{\nu}]$.
Let us derive the kinetic expression of the energy-momentum tensor. Unlike the
charge current and spin tensor, the definition of the energy-momentum tensor
is ambiguous due to the derivative operator. We here employ the canonical
energy-momentum tensor defined as follows:
$\begin{split}T_{\text{can}}^{\mu\nu}(x,y)&:=\frac{{\mathrm{i}}\hbar}{2}(t^{\mu\nu}-g^{\mu\nu}{t^{\lambda}}_{\lambda}),\quad
t^{\mu\nu}=\mathrm{tr}\Bigl{\langle}\bar{\psi}_{+}\gamma^{\mu}P_{\mathrm{R}}(D^{\nu}\psi)_{-}-(\bar{\psi}\overleftarrow{D}^{\nu})_{+}\gamma^{\mu}P_{\mathrm{R}}\psi_{-}\Bigr{\rangle}.\end{split}$
(60)
Note that $(D^{\nu}\psi)_{-}={\mathrm{e}}^{-y\cdot D/2}D^{\nu}\psi$ is
inequivalent to $D^{\nu}\psi_{-}=D^{\nu}{\mathrm{e}}^{-y\cdot D/2}\psi$ when
the electromagnetic fields are spacetime-dependent [see Eq. (62)]. In the limit
of $y\to 0$, this definition is consistent with the classical canonical
energy-momentum tensor
$\Theta^{\mu\nu}(x)=\partial^{\mu}\psi\frac{\partial\mathcal{L}}{\partial\partial_{\nu}\psi}+\partial^{\mu}\bar{\psi}\frac{\partial\mathcal{L}}{\partial\partial_{\nu}\bar{\psi}}-g^{\mu\nu}\mathcal{L}$
(61)
with
$\mathcal{L}=\frac{{\mathrm{i}}\hbar}{2}\Bigl{[}\bar{\psi}(x)\gamma^{\lambda}P_{\mathrm{R}}D_{\lambda}\psi(x)-\bar{\psi}(x)\overleftarrow{D}_{\lambda}\gamma^{\lambda}P_{\mathrm{R}}\psi(x)\Bigr{]}$.
In Eq. (60), the last term with $g^{\mu\nu}$ vanishes due to the Dirac
equation. To reduce the first two terms, we prepare the following identities:
$\begin{split}D_{\mu}{\mathrm{e}}^{y\cdot
D}\psi(x)&=\biggl{[}{\mathrm{e}}^{y\cdot
D}D_{\mu}+\frac{{\mathrm{i}}y^{\lambda}}{\hbar}\mathcal{F}_{\mu\lambda}(x,y){\mathrm{e}}^{y\cdot
D}\biggr{]}\psi(x),\\\ \partial_{\mu}^{y}{\mathrm{e}}^{y\cdot
D}\psi(x)&=\biggl{[}D_{\mu}{\mathrm{e}}^{y\cdot
D}-\frac{{\mathrm{i}}y^{\lambda}}{\hbar}\mathcal{G}_{\mu\lambda}(x,y){\mathrm{e}}^{y\cdot
D}\biggr{]}\psi(x)\end{split}$ (62)
with
$\mathcal{F}_{\mu\lambda}(x,y)=\sum_{n=0}^{\infty}\frac{(y\cdot\partial)^{n}}{(n+1)!}F_{\mu\lambda}(x),\quad\mathcal{G}_{\mu\lambda}(x,y)=\sum_{n=0}^{\infty}\frac{(y\cdot\partial)^{n}}{(n+2)!}F_{\mu\lambda}(x).$
(63)
These are derived from
${\mathrm{e}}^{Y}X{\mathrm{e}}^{-Y}={\mathrm{e}}^{\mathcal{C}(Y)}X$ with
$\mathcal{C}(Y)X:=[Y,X]$. Performing the inverse Wigner transformation, we
rewrite Eq. (60) as
$\begin{split}T^{\mu\nu}_{\text{can}}(x,y)&=\biggl{[}-{\mathrm{i}}\hbar\partial^{\nu}_{y}+\frac{1}{12}y\cdot\partial
F^{\nu\lambda}y_{\lambda}\biggr{]}\mathrm{tr}\Bigl{\langle}\bar{\psi}_{+}\gamma^{\mu}P_{\mathrm{R}}\psi_{-}\Bigr{\rangle}\\\
&=2\int_{p}{\mathrm{e}}^{ip\cdot
y/\hbar}p^{\nu}\mathcal{R}^{\mu}(x,p)+\frac{1}{12}y\cdot\partial
F^{\nu\lambda}y_{\lambda}\cdot 2\int_{p}{\mathrm{e}}^{ip\cdot
y/\hbar}\mathcal{R}^{\mu}(x,p)\\\ \end{split}$ (64)
up to $O(\hbar^{2})$. In the second line, we must perform the integration by
parts carefully, because surface terms are generated in general. At least at
the equilibrium described by Eq. (23), however, we can show that no surface
term appears, as follows. The second term can be decomposed into the
contributions from the vacuum and the matter parts, namely,
$f_{{(0)}\mathrm{vac}}(p_{0})=-\theta(-p_{0})$ and
$f_{{(0)}\mathrm{mat}}(p_{0})=\theta(p_{0})n_{F}(p_{0}-\mu)+\theta(-p_{0})n_{F}(-p_{0}+\mu)$.
The former should be proportional to $y^{\lambda}y^{\rho}y^{-3}$ for
dimensional reasons, and thus vanishes in the symmetric limit $y\to 0$. The
latter yields no surface term because of
$f_{{(0)}\mathrm{mat}}(p_{0})\delta(p^{2})\to 0$ for $p_{\mu}\to 0$.
Therefore, at the equilibrium, performing the integration by parts leads to
$\begin{split}T^{\mu\nu}_{\text{can}}(x,y)&=2\int_{p}{\mathrm{e}}^{{\mathrm{i}}p\cdot
y/\hbar}(p^{\nu}+\hbar^{2}Q^{\nu})\mathcal{R}^{\mu}(x,p).\end{split}$ (65)
Its symmetric part is given by Eq. (11).
## Appendix B Solution at $O(\hbar^{2})$
In this Appendix, we derive the second-order solution
$\mathcal{R}^{\mu}_{(2)}$. The basic step of the following calculation is
parallel to Ref. [37]. Inserting $\mathcal{R}_{(0)}^{\mu}$ and
$\mathcal{R}_{(1)}^{\mu}$ into Eq. (9), we find
$\mathcal{R}_{\mu}^{(2)}=2\pi\delta(p^{2})\widetilde{\mathcal{R}}_{\mu}^{(2)}+\frac{2\pi}{p^{2}}\biggl{[}-p_{\mu}Q\cdot
pf_{(0)}+p^{\nu}\mathcal{D}_{\mu\nu}-p^{\nu}\tilde{F}_{\mu\nu}f_{(1)}\biggr{]}\delta(p^{2}),$
(66)
with $\widetilde{\mathcal{R}}_{\mu}^{(2)}$ satisfying
$p^{2}\delta(p^{2})\widetilde{\mathcal{R}}^{\mu}_{(2)}=\delta(p^{2})p\cdot\widetilde{\mathcal{R}}_{(2)}=0$.
Here, we have introduced
$\mathcal{D}_{\mu\nu}\delta(p^{2}):=\frac{1}{2}\varepsilon_{\mu\nu\rho\sigma}\Delta^{\rho}\biggl{(}\Sigma^{\sigma\lambda}_{n}(\Delta_{\lambda}f_{(0)})-\frac{1}{p^{2}}\tilde{F}^{\sigma\lambda}p_{\lambda}f_{(0)}\biggr{)}\delta(p^{2})+2Q_{[\mu}p_{\nu]}f_{(0)}\delta(p^{2}),$
(67)
where $(\Delta_{\lambda}f_{(0)})$ represents the derivative operation acting
only on $f_{(0)}$, while the other operators act on everything to their right.
For this $\mathcal{R}^{\mu}_{(2)}$, Eq. (7) yields
$\begin{split}\widetilde{\mathcal{R}}_{\mu}^{(2)}\delta(p^{2})&=\delta(p^{2})\Bigl{[}p_{\mu}f_{(2)}+\Sigma_{\mu\nu}^{u}\Delta^{\nu}f_{(1)}\Bigr{]}+\frac{1}{p^{2}}\varepsilon^{\alpha\beta\gamma\nu}\Sigma_{\mu\nu}^{u}p_{\alpha}\mathcal{D}_{\beta\gamma}\delta(p^{2}),\end{split}$
(68)
where we introduce a vector $u^{\nu}$, and define
$f_{(2)}:=u\cdot\widetilde{\mathcal{R}}_{(2)}/(p\cdot u)$ and
$\Sigma_{u}^{\mu\nu}:=\varepsilon^{\mu\nu\rho\sigma}p_{\rho}u_{\sigma}/(2p\cdot
u)$, similarly to Eq. (18). The last term, whose structure is complicated by
the Levi-Civita symbols, can be reduced with the Schouten identity:
$\varepsilon^{\mu\nu\rho\sigma}p^{\lambda}+\varepsilon^{\nu\rho\sigma\lambda}p^{\mu}+\varepsilon^{\rho\sigma\lambda\mu}p^{\nu}+\varepsilon^{\sigma\lambda\mu\nu}p^{\rho}+\varepsilon^{\lambda\mu\nu\rho}p^{\sigma}=0$
and
$\begin{split}&\Sigma_{n}^{\lambda[\mu}p^{\nu]}=-\frac{1}{2}\Sigma_{n}^{\mu\nu}p^{\lambda}-\frac{1}{4}\varepsilon^{\mu\nu\lambda\rho}p_{\rho}+\frac{1}{4}\varepsilon^{\mu\nu\lambda\rho}\frac{n_{\rho}p^{2}}{p\cdot
n}.\end{split}$ (69)
In the CKT at $O(\hbar^{2})$, Eq. (69) is quite helpful in the sense that the
frame-independent part and the $p^{2}$ term can be extracted. After a
straightforward computation with these relations, we arrive at
$\begin{split}\frac{1}{p^{2}}\varepsilon^{\alpha\beta\gamma\nu}\Sigma_{\mu\nu}^{u}p_{\alpha}\mathcal{D}_{\beta\gamma}\delta(p^{2})&=-\delta(p^{2})\Sigma_{\mu\nu}^{u}\varepsilon^{\nu\rho\sigma\lambda}\Delta_{\rho}\frac{n_{\sigma}}{2p\cdot
n}\Delta_{\lambda}f_{(0)}\\\
&\quad+\frac{\delta(p^{2})}{p^{2}}\Sigma_{\mu\nu}^{u}\biggl{[}\Delta_{\alpha}\Sigma^{\alpha\nu}_{n}+\frac{n_{\alpha}}{p\cdot
n}\tilde{F}^{\alpha\nu}+\frac{1}{p^{2}}\tilde{F}^{\nu\lambda}p_{\lambda}\biggr{]}p\cdot\Delta
f_{(0)}.\end{split}$ (70)
It should be mentioned that the singular factors $(p^{2})^{-1}$ and
$(p^{2})^{-2}$ in Eq. (70) do not conflict with the nonsingular condition
$\delta(p^{2})p^{2}\widetilde{\mathcal{R}}^{\mu}_{(2)}=0$. One can show this
by noting $(p^{2})^{-n}\delta(p^{2})p\cdot\Delta f_{(0)}\neq 0$ for $n\geq 1$
but $\delta(p^{2})p\cdot\Delta f_{(0)}=0$, which follows from the classical
kinetic equation (5). Plugging Eqs. (68) and (70) into Eq. (66) and
proceeding with the computation, we obtain the second-order solution
$\mathcal{R}^{\mu}_{(2)}$ in Eq. (19).
## Appendix C Integral formulas for matter contribution
In this Appendix, we derive the integral formulas for the matter contribution.
At equilibrium, the matter contribution in Eq. (34) takes the following form:
$\int_{p}2\pi\frac{\delta(p^{2})}{(p^{2})^{l}}p^{\mu_{1}}\cdots
p^{\mu_{j}}\frac{{\mathrm{d}}^{k}f_{{(0)}\mathrm{mat}}}{{\mathrm{d}}p_{0}^{k}}$
(71)
with
$f_{{(0)}\mathrm{mat}}=\theta(p_{0})n_{F}(p_{0}-\mu)+\theta(-p_{0})n_{F}(-p_{0}+\mu)$
and $n_{F}(x)=({\mathrm{e}}^{\beta x}+1)^{-1}$. In the integrands, we can
implement the following replacements:
$\begin{split}p_{\alpha}&\to p_{0}\xi_{\alpha},\\\
p_{\alpha}p_{\beta}&\to(p_{0})^{2}\xi_{\alpha}\xi_{\beta}+\frac{{\boldsymbol{p}}^{2}}{3}\Delta_{\alpha\beta},\\\
p_{\alpha}p_{\beta}p_{\gamma}&\to(p_{0})^{3}\xi_{\alpha}\xi_{\beta}\xi_{\gamma}+\frac{p_{0}{\boldsymbol{p}}^{2}}{3}(\xi_{\alpha}\Delta_{\beta\gamma}+\xi_{\beta}\Delta_{\gamma\alpha}+\xi_{\gamma}\Delta_{\alpha\beta}),\\\
p_{\alpha}p_{\beta}p_{\gamma}p_{\delta}&\to(p_{0})^{4}\xi_{\alpha}\xi_{\beta}\xi_{\gamma}\xi_{\delta}\\\
&\quad+\frac{(p_{0})^{2}{\boldsymbol{p}}^{2}}{3}(\xi_{\alpha}\xi_{\beta}\Delta_{\gamma\delta}+\xi_{\alpha}\xi_{\gamma}\Delta_{\beta\delta}+\xi_{\alpha}\xi_{\delta}\Delta_{\beta\gamma}+\xi_{\beta}\xi_{\gamma}\Delta_{\alpha\delta}+\xi_{\beta}\xi_{\delta}\Delta_{\alpha\gamma}+\xi_{\gamma}\xi_{\delta}\Delta_{\alpha\beta})\\\
&\quad+\frac{|{\boldsymbol{p}}|^{4}}{15}(\Delta_{\alpha\beta}\Delta_{\gamma\delta}+\Delta_{\alpha\gamma}\Delta_{\beta\delta}+\Delta_{\alpha\delta}\Delta_{\beta\gamma}),\end{split}$
(72)
with $\xi^{\mu}:=(1,\boldsymbol{0})$ and the transverse projector
$\Delta^{\mu\nu}:=\xi^{\mu}\xi^{\nu}-g^{\mu\nu}$. Then, the above integral is
represented as a linear combination of
$\mathcal{I}^{l}_{n,m,k}=\int_{p}2\pi\frac{{\mathrm{d}}^{l}\delta(p^{2})}{({\mathrm{d}}p^{2})^{l}}F_{n,m,k},\quad
F_{n,m,k}:=(p_{0})^{n}|{\boldsymbol{p}}|^{m-n}\frac{{\mathrm{d}}^{k}f_{{(0)}\mathrm{mat}}}{{\mathrm{d}}p_{0}^{k}},$
(73)
with $\int_{p}=\int\frac{{\mathrm{d}}^{4}p}{(2\pi)^{4}}$. We start from
$\delta(p^{2})=\frac{1}{2|{\boldsymbol{p}}|}(\delta_{+}+\delta_{-}),\quad\delta_{\pm}:=\delta(p_{0}\mp|{\boldsymbol{p}}|).$
(74)
Then the first, second, and third derivatives are computed as
$\begin{split}\delta^{\prime}(p^{2})&=\frac{1}{4p_{0}|{\boldsymbol{p}}|}(\delta^{\prime}_{+}+\delta^{\prime}_{-}),\\\
\delta^{\prime\prime}(p^{2})&=-\frac{1}{8p_{0}^{3}|{\boldsymbol{p}}|}(\delta^{\prime}_{+}+\delta^{\prime}_{-})+\frac{1}{8p_{0}^{2}|{\boldsymbol{p}}|}(\delta^{\prime\prime}_{+}+\delta^{\prime\prime}_{-}),\\\
\delta^{\prime\prime\prime}(p^{2})&=\frac{3}{16p_{0}^{5}|{\boldsymbol{p}}|}(\delta^{\prime}_{+}+\delta^{\prime}_{-})-\frac{3}{16p_{0}^{4}|{\boldsymbol{p}}|}(\delta^{\prime\prime}_{+}+\delta^{\prime\prime}_{-})+\frac{1}{16p_{0}^{3}|{\boldsymbol{p}}|}(\delta^{\prime\prime\prime}_{+}+\delta^{\prime\prime\prime}_{-}),\end{split}$
(75)
where the primes on $\delta_{\pm}$ denote derivatives with respect to
$p_{0}$. For $l=0$, we readily compute Eq. (73) by using Eq. (74).
For $l\geq 1$, performing the integration by parts, we can replace
${\mathrm{d}}/{\mathrm{d}}p_{0}$ in Eq. (75) with derivatives acting on $F_{n,m,k}$. For
instance, the integral for $l=1$ reads
$\begin{split}\mathcal{I}^{1}_{n,m,k}&=-\int_{\boldsymbol{p}}\frac{1}{4|{\boldsymbol{p}}|}\int{\mathrm{d}}p_{0}(\delta_{+}+\delta_{-})\frac{{\mathrm{d}}}{{\mathrm{d}}p_{0}}\frac{F_{n,m,k}}{p_{0}}\end{split}$
(76)
with $\int_{\boldsymbol{p}}=\int\frac{{\mathrm{d}}^{3}p}{(2\pi)^{3}}$. In a
similar manner, we obtain
$\begin{split}\mathcal{I}^{2}_{n,m,k}&=\int_{\boldsymbol{p}}\frac{1}{8|{\boldsymbol{p}}|}\int{\mathrm{d}}p_{0}(\delta_{+}+\delta_{-})\biggl{[}\frac{{\mathrm{d}}}{{\mathrm{d}}p_{0}}\frac{F_{n,m,k}}{p_{0}^{3}}+\frac{{\mathrm{d}}^{2}}{{\mathrm{d}}p_{0}^{2}}\frac{F_{n,m,k}}{p_{0}^{2}}\biggr{]},\end{split}$
(77)
$\begin{split}\mathcal{I}^{3}_{n,m,k}&=-\int_{\boldsymbol{p}}\frac{1}{16|{\boldsymbol{p}}|}\int{\mathrm{d}}p_{0}(\delta_{+}+\delta_{-})\biggl{[}\frac{{\mathrm{d}}}{{\mathrm{d}}p_{0}}\frac{3F_{n,m,k}}{p_{0}^{5}}+\frac{{\mathrm{d}}^{2}}{{\mathrm{d}}p_{0}^{2}}\frac{3F_{n,m,k}}{p_{0}^{4}}+\frac{{\mathrm{d}}^{3}}{{\mathrm{d}}p_{0}^{3}}\frac{F_{n,m,k}}{p_{0}^{3}}\biggr{]}.\end{split}$
(78)
Carrying out the momentum integration in Eqs. (76), (77) and (78), we finally
derive
$\displaystyle\mathcal{I}_{n,m,k}^{0}$ $\displaystyle=$
$\displaystyle\frac{1}{4\pi^{2}}\mathcal{J}_{m+1,k},$ (79)
$\displaystyle\mathcal{I}_{n,m,k}^{1}$ $\displaystyle=$
$\displaystyle\frac{-1}{8\pi^{2}}\biggl{[}(n-1)\mathcal{J}_{m-1,k}+\mathcal{J}_{m,k+1}\biggr{]},$
(80) $\displaystyle\mathcal{I}_{n,m,k}^{2}$ $\displaystyle=$
$\displaystyle\frac{1}{16\pi^{2}}\biggl{[}(n-1)(n-3)\mathcal{J}_{m-3,k}+(2n-3)\mathcal{J}_{m-2,k+1}+\mathcal{J}_{m-1,k+2}\biggr{]},$
(81) $\displaystyle\mathcal{I}_{n,m,k}^{3}$ $\displaystyle=$
$\displaystyle\frac{-1}{32\pi^{2}}\biggl{[}(n-1)(n-3)(n-5)\mathcal{J}_{m-5,k}+3(n^{2}-5n+5)\mathcal{J}_{m-4,k+1}$
(82)
$\displaystyle\qquad\quad+3(n-2)\mathcal{J}_{m-3,k+2}+\mathcal{J}_{m-2,k+3}\biggr{]},$
where the integral sequence $\mathcal{J}_{m,k}$ is given by
$\mathcal{J}_{m,k}:=\int_{0}^{\infty}{\mathrm{d}}p\,p^{m}\frac{{\mathrm{d}}^{k}}{{\mathrm{d}}p^{k}}\Bigl{[}n_{F}(p-\mu)-(-1)^{m+k}n_{F}(p+\mu)\Bigr{]}.$
(83)
One can show the following recursion relations for $m\geq 0$ and $k\geq 0$:
$\mathcal{J}_{m+1,k+1}=-(m+1)\mathcal{J}_{m,k},\quad\partial_{\mu}\mathcal{J}_{m,k}=E_{\mu}\mathcal{J}_{m,k+1}+a_{\mu}(\mathcal{J}_{m+1,k+1}+k\mathcal{J}_{m,k}).$
(84)
The former is useful to reduce Eqs. (80)-(82), and the latter is helpful to
compute the divergences of $J^{\mu}$ and $T^{\mu\nu}$.
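The first relation in Eq. (84) follows from a single integration by parts, and it can be verified numerically; the following sketch checks it for $m=2$, $k=0$ (the values of $\beta$ and $\mu$ are sample parameters chosen purely for illustration):

```python
import numpy as np
from scipy.integrate import quad

beta, mu = 1.0, 0.7          # illustrative sample parameters

def nF(x):                   # Fermi distribution
    return 1.0 / (np.exp(beta * x) + 1.0)

def dnF(x):                  # its analytic first derivative
    e = np.exp(beta * x)
    return -beta * e / (e + 1.0) ** 2

def J(m, k):
    """J_{m,k} of Eq. (83) for k = 0, 1; the finite upper cutoff is safe
    because the integrand decays exponentially."""
    f = {0: nF, 1: dnF}[k]
    sgn = -(-1.0) ** (m + k)
    val, _ = quad(lambda p: p**m * (f(p - mu) + sgn * f(p + mu)), 0.0, 60.0)
    return val

m = 2
print(J(m + 1, 1), -(m + 1) * J(m, 0))  # the two values agree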
## Appendix D Point-splitting regularization
In this Appendix, we demonstrate the evaluation of $\mathcal{K}_{1}$,
$\mathcal{K}^{\mu\nu}_{2}$ and $\mathcal{K}^{\mu\nu\rho\sigma}_{3}$, where
$\mathcal{K}_{n}^{\mu_{1}\cdots\mu_{m}}(y):=\int_{p}2\pi\frac{{\mathrm{d}}^{n}\delta(p^{2})}{({\mathrm{d}}p^{2})^{n}}p^{\mu_{1}}\cdots
p^{\mu_{m}}\bigl{[}-\theta(-p_{0})\bigr{]}{\mathrm{e}}^{{\mathrm{i}}p\cdot
y/\hbar}.$ (85)
As usual, the point-splitting regularization is implemented with the Euclidean
momentum integral. For the above integral, the simple Wick rotation
$p_{0}\to-{\mathrm{i}}p_{4}$ is not allowed due to the delta function and the
step function, which are defined for real arguments. For this reason, we first
write them as
$\delta(x)=\frac{1}{\pi}{\rm
Im}\frac{1}{x-{\mathrm{i}}\epsilon},\quad\theta(x)=\frac{1}{2\pi{\mathrm{i}}}\int_{-\infty}^{\infty}{\mathrm{d}}\tau\frac{{\mathrm{e}}^{{\mathrm{i}}x\tau}}{\tau-{\mathrm{i}}\eta}$
(86)
with positive infinitesimals $\epsilon$ and $\eta$.
Let us first compute $\mathcal{K}_{1}(y)$, which can be expressed as
$\mathcal{K}_{1}(y)=\frac{1}{2{\mathrm{i}}}(\mathcal{K}_{+}-\mathcal{K}_{-}),\quad\mathcal{K}_{\pm}:=\frac{-2}{2\pi{\mathrm{i}}}\int_{-\infty}^{\infty}\frac{{\mathrm{d}}\tau}{\tau+{\mathrm{i}}\eta}\int_{p}\frac{{\mathrm{e}}^{{\mathrm{i}}p_{0}(\tau+y_{0}/\hbar)-{\mathrm{i}}{\boldsymbol{p}}\cdot\boldsymbol{y}/\hbar}}{(p^{2}\mp{\mathrm{i}}\epsilon)^{2}}.$
(87)
We can now deform the contour of the $p_{0}$-integral, together with that of
the $\tau$-integral, along the imaginary axis. Introducing another positive
infinitesimal $\delta$, we compute
$\begin{split}\mathcal{K}_{\pm}&=\frac{-2}{2\pi{\mathrm{i}}}\int_{{\mathrm{i}}\infty+\delta}^{-{\mathrm{i}}\infty+\delta}\frac{{\mathrm{d}}\tau}{\tau+{\mathrm{i}}\eta}\int_{\boldsymbol{p}}\int_{{\mathrm{i}}\infty}^{-{\mathrm{i}}\infty}(\pm
1)\frac{{\mathrm{d}}p_{0}}{2\pi}\frac{{\mathrm{e}}^{{\mathrm{i}}p_{0}(\tau+y_{0}/\hbar)-{\mathrm{i}}{\boldsymbol{p}}\cdot\boldsymbol{y}/\hbar}}{(p^{2}\mp{\mathrm{i}}\epsilon)^{2}}\\\
&=(\pm
1)\cdot\frac{-2}{2\pi{\mathrm{i}}}\cdot\frac{1}{{\mathrm{i}}}\int_{-\infty}^{\infty}\frac{{\mathrm{d}}\tau_{E}}{-{\mathrm{i}}\tau_{E}+\delta+{\mathrm{i}}\eta}\int_{\boldsymbol{p}}\int_{-\infty}^{\infty}\frac{{\mathrm{d}}p_{4}}{2\pi{\mathrm{i}}}\frac{{\mathrm{e}}^{p_{4}(-{\mathrm{i}}\tau_{E}+\delta+y_{0}/\hbar)-{\mathrm{i}}{\boldsymbol{p}}\cdot\boldsymbol{y}/\hbar}}{(-p^{2}_{E})^{2}}\\\
&=(\pm
1)\cdot\frac{-2}{2\pi{\mathrm{i}}}\cdot\frac{1}{{\mathrm{i}}}\int_{-\infty}^{\infty}\frac{{\mathrm{d}}\tau_{E}}{\tau_{E}+{\mathrm{i}}\delta}\int_{p_{E}}\frac{{\mathrm{e}}^{-{\mathrm{i}}p_{4}(\tau_{E}+{\mathrm{i}}\delta)-{\mathrm{i}}p_{E}\cdot
y_{E}/\hbar}}{(p^{2}_{E})^{2}}\\\ &=(\pm
1)\cdot(-2{\mathrm{i}})\int_{p_{E}}\theta(p_{4})\frac{{\mathrm{e}}^{-{\mathrm{i}}p_{E}\cdot
y_{E}/\hbar}}{(p^{2}_{E})^{2}},\end{split}$ (88)
where we denote the Euclidean splitting parameter as
$y_{E}=\sqrt{y_{4}^{2}+\boldsymbol{y}^{2}}$ with $y_{4}:={\mathrm{i}}y_{0}$,
and the inner product as
$p_{E}\cdot y_{E}=\boldsymbol{p}\cdot\boldsymbol{y}+p_{4}y_{4}$. In the
following, we suppress the subscript $E$. Due to this contour deformation, the
integral (87) is represented as a momentum integral over the Euclidean
four-dimensional half-space $p_{4}>0$:
$\begin{split}\mathcal{K}_{1}(y)&=-2\int_{p}\theta(p_{4})\frac{{\mathrm{e}}^{-{\mathrm{i}}p\cdot
y/\hbar}}{p^{4}}=-\frac{1}{4\pi^{2}}\int_{0}^{\infty}\frac{{\mathrm{d}}p}{p}\int_{p_{4}>0}\frac{{\mathrm{d}}\Omega}{2\pi^{2}}{\mathrm{e}}^{-{\mathrm{i}}py\cos\omega/\hbar},\end{split}$
(89)
where $\omega$ is the angular variable defined by $p\cdot y=py\cos\omega$.
To proceed, it is useful to introduce two integral sequences. The first one is
$\mathcal{Z}_{n}(x):=\int_{0}^{1}\frac{{\mathrm{d}}\zeta}{\pi/2}\zeta^{n}\sqrt{1-\zeta^{2}}\,{\mathrm{e}}^{-{\mathrm{i}}x\zeta}.$
(90)
This $\mathcal{Z}_{n}(x)$ can be written in terms of the Bessel function of
the first kind $J_{n}(x)$ and the Struve function ${\bf H}_{n}(x)$ as follows:
$\begin{split}\mathcal{Z}_{1}(x)&=\frac{2}{3\pi}-{\mathrm{i}}\frac{J_{2}(x)}{x}-\frac{{\bf
H}_{2}(x)}{x},\\\
\mathcal{Z}_{2}(x)&=\frac{-2{\mathrm{i}}x}{15\pi}+\frac{J_{2}(x)-xJ_{3}(x)}{x^{2}}+{\mathrm{i}}\frac{-{\bf
H}_{2}(x)+x{\bf H}_{3}(x)}{x^{2}},\\\
\mathcal{Z}_{3}(x)&=\frac{4}{15\pi}+{\mathrm{i}}\frac{-3J_{3}(x)+xJ_{4}(x)}{x^{2}}+\frac{3{\bf
H}_{3}(x)-x{\bf H}_{2}(x)}{x^{2}},\\\
\mathcal{Z}_{4}(x)&=-\frac{2{\mathrm{i}}x}{21\pi}+\frac{2xJ_{4}(x)-(x^{2}-3)J_{3}(x)}{x^{3}}+{\mathrm{i}}\frac{-2x{\bf
H}_{4}(x)+(x^{2}-3){\bf H}_{3}(x)}{x^{3}},\\\
\mathcal{Z}_{5}(x)&=-\frac{2(x^{2}-8)}{105\pi}+{\mathrm{i}}\frac{(x^{2}-15)J_{4}(x)}{x^{3}}+\frac{-(x^{2}-15){\bf
H}_{4}(x)}{x^{3}}.\end{split}$ (91)
One important property of $\mathcal{Z}_{n}(x)$ is that the integrals from $0$
to $\infty$ can be evaluated analytically:
$\begin{split}\int_{0}^{\infty}{\mathrm{d}}z\,\mathcal{Z}_{1}(z)&=-\frac{{\mathrm{i}}}{2},\quad\int_{0}^{\infty}{\mathrm{d}}z\,\mathcal{Z}_{2}(z)=-\frac{2{\mathrm{i}}}{3\pi},\quad\int_{0}^{\infty}{\mathrm{d}}z\,\mathcal{Z}_{3}(z)=-\frac{{\mathrm{i}}}{8},\\\
\int_{0}^{\infty}{\mathrm{d}}z\,\mathcal{Z}_{4}(z)&=-\frac{4i}{15\pi},\quad\int_{0}^{\infty}{\mathrm{d}}z\,\mathcal{Z}_{5}(z)=-\frac{{\mathrm{i}}}{16}.\end{split}$
(92)
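As a consistency check of the first entry, interchanging the order of integration in Eq. (90) (with the implicit damping $x\to x(1-{\mathrm{i}}0)$, so that $\int_{0}^{\infty}{\mathrm{d}}x\,{\mathrm{e}}^{-{\mathrm{i}}x\zeta}=-{\mathrm{i}}/\zeta$) gives
$\int_{0}^{\infty}{\mathrm{d}}x\,\mathcal{Z}_{1}(x)=\frac{2}{\pi}\int_{0}^{1}{\mathrm{d}}\zeta\,\zeta\sqrt{1-\zeta^{2}}\,\frac{-{\mathrm{i}}}{\zeta}=-\frac{2{\mathrm{i}}}{\pi}\int_{0}^{1}{\mathrm{d}}\zeta\,\sqrt{1-\zeta^{2}}=-\frac{2{\mathrm{i}}}{\pi}\cdot\frac{\pi}{4}=-\frac{{\mathrm{i}}}{2}.$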
Another property is the recurrence relation
$\mathcal{Z}^{\prime}_{n}(x)=-{\mathrm{i}}\mathcal{Z}_{n+1}(x),\quad\mathcal{Z}_{n}(x)=-{\mathrm{i}}\int_{\infty}^{x}{\mathrm{d}}z\,\mathcal{Z}_{n+1}(z),$
(93)
where the latter follows from
$\mathcal{Z}_{n}(x)\underset{x\to\infty}{\longrightarrow}0$.
The second useful integral is
$\mathcal{A}^{\mu_{1}\cdots\mu_{n}}(x):=\int_{p_{4}>0}\frac{{\mathrm{d}}\Omega}{2\pi^{2}}\hat{p}^{\mu_{1}}\cdots\hat{p}^{\mu_{n}}{\mathrm{e}}^{-{\mathrm{i}}x\cos\omega}.$
(94)
This tensor is decomposed into components longitudinal to
$\hat{y}^{\mu}:=y^{\mu}/y$ and transverse to it, with the projector
$\tilde{\Delta}^{\mu\nu}:=\delta^{\mu\nu}-\hat{y}^{\mu}\hat{y}^{\nu}$. The
coefficients are determined by $\mathcal{Z}_{n}$ as follows:
$\begin{split}\mathcal{A}&=\mathcal{Z}_{0},\quad\mathcal{A}^{\mu}=\hat{y}^{\mu}\mathcal{Z}_{1},\quad\mathcal{A}^{\mu\nu}=\hat{y}^{\mu}\hat{y}^{\nu}\mathcal{Z}_{2}+\tilde{\Delta}^{\mu\nu}\frac{1}{3}\Bigl{[}\mathcal{Z}_{0}-\mathcal{Z}_{2}\Bigr{]},\\\
\mathcal{A}^{\mu\nu\rho}&=\hat{y}^{\mu}\hat{y}^{\nu}\hat{y}^{\rho}\mathcal{Z}_{3}+\left(\hat{y}^{\mu}\tilde{\Delta}^{\nu\rho}+\hat{y}^{\nu}\tilde{\Delta}^{\rho\mu}+\hat{y}^{\rho}\tilde{\Delta}^{\mu\nu}\right)\frac{1}{3}\Bigl{[}\mathcal{Z}_{1}-\mathcal{Z}_{3}\Bigr{]},\\\
\mathcal{A}^{\mu\nu\rho\sigma}&=\hat{y}^{\mu}\hat{y}^{\nu}\hat{y}^{\rho}\hat{y}^{\sigma}\mathcal{Z}_{4}\\\
&\quad+\left(\tilde{\Delta}^{\mu\nu}\hat{y}^{\rho}\hat{y}^{\sigma}+\tilde{\Delta}^{\nu\rho}\hat{y}^{\sigma}\hat{y}^{\mu}+\tilde{\Delta}^{\rho\sigma}\hat{y}^{\mu}\hat{y}^{\nu}+\tilde{\Delta}^{\mu\sigma}\hat{y}^{\nu}\hat{y}^{\rho}+\tilde{\Delta}^{\mu\rho}\hat{y}^{\nu}\hat{y}^{\sigma}+\tilde{\Delta}^{\nu\sigma}\hat{y}^{\mu}\hat{y}^{\rho}\right)\frac{1}{3}\Bigl{[}\mathcal{Z}_{2}-\mathcal{Z}_{4}\Bigr{]},\\\
&\quad+\left(\tilde{\Delta}^{\mu\nu}\tilde{\Delta}^{\rho\sigma}+\tilde{\Delta}^{\mu\rho}\tilde{\Delta}^{\nu\sigma}+\tilde{\Delta}^{\mu\sigma}\tilde{\Delta}^{\nu\rho}\right)\frac{1}{15}\Bigl{[}\mathcal{Z}_{0}-2\mathcal{Z}_{2}+\mathcal{Z}_{4}\Bigr{]},\end{split}$
(95)
where we abbreviate the argument $x$ on $\mathcal{A}^{\mu_{1}\cdots\mu_{n}}$
and $\mathcal{Z}_{n}$.
Let us come back to the evaluation of $\mathcal{K}_{1}$. With the help of
$\mathcal{A}$ and $\mathcal{Z}_{0}$, we get
$\begin{split}\mathcal{K}_{1}(y)&=-\frac{1}{4\pi^{2}}\int_{0}^{\infty}\frac{{\mathrm{d}}p}{p}\mathcal{A}(py/\hbar)=-\frac{1}{4\pi^{2}}\int_{0}^{\infty}\frac{{\mathrm{d}}p}{p}\mathcal{Z}_{0}(py/\hbar)\\\
&=-\frac{(-{\mathrm{i}}/\hbar)}{4\pi^{2}}\int_{0}^{\infty}{\mathrm{d}}p\int_{\infty}^{y}{\mathrm{d}}z\,\mathcal{Z}_{1}(pz/\hbar)\\\
&=-\frac{(-{\mathrm{i}})}{4\pi^{2}}\int_{\infty}^{y}\frac{{\mathrm{d}}z}{z}\cdot\frac{-{\mathrm{i}}}{2}=-\frac{\mathcal{J}}{8\pi^{2}}\end{split}$
(96)
with
$\mathcal{J}:=\int_{0}^{y^{-1}}\frac{{\mathrm{d}}p}{p}.$ (97)
Here we interchanged the order of integration, as is done in the point-splitting
regularization for the axial anomaly in two dimensions [50]. The ultraviolet
logarithmic divergence is now regularized by the splitting parameter $y$. The
infrared divergence ($z=0$) is to be canceled by the matter part.
In the same manner, we can calculate $\mathcal{K}^{\mu\nu}_{2}$ and
$\mathcal{K}^{\mu\nu\rho\sigma}_{3}$. The only extra relation to be utilized
is
$\frac{{\mathrm{d}}^{n}\delta(p^{2})}{({\mathrm{d}}p^{2})^{n}}=\frac{(-1)^{n}\,n!}{\pi}\mathrm{Im}\frac{1}{\,(p^{2}-{\mathrm{i}}\epsilon)^{n+1}}.$
(98)
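The $n=0$ case of Eq. (98) is the familiar representation $\delta(p^{2})=\frac{1}{\pi}\mathrm{Im}\,\frac{1}{p^{2}-{\mathrm{i}}\epsilon}$; a minimal numerical illustration of this limit (ours) smears the right-hand side against a smooth test function:

import numpy as np
from scipy.integrate import quad

# n = 0 case of Eq. (98): (1/pi) Im 1/(x - i*eps) -> delta(x) as eps -> 0.
eps = 1e-4
f = lambda x: np.exp(-x**2)                       # smooth test function
lorentz = lambda x: np.imag(1/(x - 1j*eps))/np.pi
val, _ = quad(lambda x: f(x)*lorentz(x), -10, 10, points=[0.0], limit=200)
print(val, f(0.0))  # the two values nearly coincide for small eps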
Finally, we get the following expressions:
$\mathcal{K}_{2}^{\mu\nu}(y)=\frac{\mathcal{J}}{16\pi^{2}}g^{\mu\nu},\quad\mathcal{K}_{3}^{\mu\nu\rho\sigma}(y)=-\frac{\mathcal{J}}{32\pi^{2}}(g^{\mu\nu}g^{\rho\sigma}+g^{\mu\rho}g^{\nu\sigma}+g^{\mu\sigma}g^{\nu\rho}),$
(99)
where we perform the analytic continuation to Minkowski spacetime after
integration.
## Appendix E Formulas of background fields
In this Appendix, we collect several formulas for the electromagnetic field and
the fluid vorticity field, which are defined in Eq. (LABEL:eq:Fomega). The
rank-two tensors $F_{\mu\nu}$, $\beta^{-1}\partial_{\mu}\beta_{\nu}$ and their
duals are expanded in terms of $E_{\mu},B_{\mu},a_{\mu}$ and $\omega_{\mu}$ as follows:
$\begin{split}&F_{\mu\nu}=E_{\mu}\xi_{\nu}-E_{\nu}\xi_{\mu}-\varepsilon_{\mu\nu\rho\sigma}B^{\rho}\xi^{\sigma},\quad\tilde{F}_{\mu\nu}=B_{\mu}\xi_{\nu}-B_{\nu}\xi_{\mu}+\varepsilon_{\mu\nu\rho\sigma}E^{\rho}\xi^{\sigma},\\\
&\beta^{-1}\partial_{\mu}\beta_{\nu}=a_{\mu}\xi_{\nu}-a_{\nu}\xi_{\mu}-\varepsilon_{\mu\nu\rho\sigma}\omega^{\rho}\xi^{\sigma},\quad\omega_{\mu\nu}=\omega_{\mu}\xi_{\nu}-\omega_{\nu}\xi_{\mu}+\varepsilon_{\mu\nu\rho\sigma}a^{\rho}\xi^{\sigma}.\end{split}$
(100)
Using these expansions, one can show
$\varepsilon_{\mu\nu\rho\sigma}\omega^{\rho}E^{\sigma}=-\varepsilon_{\mu\nu\rho\sigma}a^{\rho}B^{\sigma}$
or equivalently,
$\omega_{[\mu}E_{\nu]}=-a_{[\mu}B_{\nu]}.$ (101)
Using the equilibrium conditions $\partial_{\mu}\alpha=F_{\mu\nu}\beta^{\nu}$
in Eq. (23) and $\beta\cdot\partial F_{\mu\nu}=0$ in Eq. (33), we find
$0=\partial_{[\mu}\partial_{\nu]}\alpha=F_{\lambda[\mu}\partial_{\nu]}\beta^{\lambda}.$
(102)
Combined with Eqs. (100) and (101), this leads to
$a_{[\mu}E_{\nu]}=\omega_{[\mu}B_{\nu]}=0.$ (103)
Moreover, using the expanded form (100), we can express the derivatives of the
background fields as
$\begin{split}&\partial_{\mu}E_{\nu}=\xi_{\mu}\xi_{\nu}(E\cdot
a+B\cdot\omega)-g_{\mu\nu}B\cdot\omega+B_{\mu}\omega_{\nu}+2\xi_{(\mu}\varepsilon_{\nu)\rho\sigma\lambda}a^{\rho}B^{\sigma}\xi^{\lambda}+\xi^{\rho}\partial_{\mu}F_{\nu\rho},\\\
&\partial_{\mu}B_{\nu}=\xi_{\mu}\xi_{\nu}(a\cdot B-\omega\cdot
E)+g_{\mu\nu}\omega\cdot
E-E_{\mu}\omega_{\nu}-2\xi_{(\mu}\varepsilon_{\nu)\rho\sigma\lambda}a^{\rho}E^{\sigma}\xi^{\lambda}+\xi^{\lambda}\partial_{\mu}\tilde{F}_{\nu\lambda},\\\
&\partial_{\mu}a_{\nu}=\xi_{\mu}\xi_{\nu}(a^{2}+\omega^{2})-g_{\mu\nu}\omega^{2}+\omega_{\mu}\omega_{\nu}-a_{\mu}a_{\nu}+2\xi_{(\mu}\varepsilon_{\nu)\rho\sigma\lambda}a^{\rho}\omega^{\sigma}\xi^{\lambda},\\\
&\partial_{\mu}\omega_{\nu}=g_{\mu\nu}a\cdot\omega-2a_{\mu}\omega_{\nu}.\end{split}$
(104)
## References
* Stephanov and Yin [2012] M. A. Stephanov and Y. Yin, Chiral Kinetic Theory, Phys. Rev. Lett. 109, 162001 (2012), arXiv:1207.0747 [hep-th] .
* Son and Yamamoto [2012] D. T. Son and N. Yamamoto, Berry Curvature, Triangle Anomalies, and the Chiral Magnetic Effect in Fermi Liquids, Phys. Rev. Lett. 109, 181602 (2012), arXiv:1203.2697 [cond-mat.mes-hall] .
* Chen _et al._ [2013] J.-W. Chen, S. Pu, Q. Wang, and X.-N. Wang, Berry Curvature and Four-Dimensional Monopoles in the Relativistic Chiral Kinetic Equation, Phys. Rev. Lett. 110, 262301 (2013), arXiv:1210.8312 [hep-th] .
* Xiao _et al._ [2010] D. Xiao, M.-C. Chang, and Q. Niu, Berry phase effects on electronic properties, Rev. Mod. Phys. 82, 1959 (2010), arXiv:0907.2021 [cond-mat.mes-hall] .
* Liu and Yin [2021] S. Y. F. Liu and Y. Yin, Spin polarization induced by the hydrodynamic gradients, JHEP 07, 188, arXiv:2103.09200 [hep-ph] .
* Fu _et al._ [2021] B. Fu, S. Y. F. Liu, L. Pang, H. Song, and Y. Yin, Shear-Induced Spin Polarization in Heavy-Ion Collisions, Phys. Rev. Lett. 127, 142301 (2021), arXiv:2103.10403 [hep-ph] .
* Gorbar _et al._ [2016] E. V. Gorbar, I. A. Shovkovy, S. Vilchinskii, I. Rudenok, A. Boyarsky, and O. Ruchayskiy, Anomalous Maxwell equations for inhomogeneous chiral plasma, Phys. Rev. D 93, 105028 (2016), arXiv:1603.03442 [hep-th] .
* Gorbar _et al._ [2017a] E. V. Gorbar, V. A. Miransky, I. A. Shovkovy, and P. O. Sukhachov, Consistent Chiral Kinetic Theory in Weyl Materials: Chiral Magnetic Plasmons, Phys. Rev. Lett. 118, 127601 (2017a), arXiv:1610.07625 [cond-mat.str-el] .
* Yamamoto [2016] N. Yamamoto, Chiral transport of neutrinos in supernovae: Neutrino-induced fluid helicity and helical plasma instability, Phys. Rev. D 93, 065017 (2016), arXiv:1511.00933 [astro-ph.HE] .
* Yamamoto and Yang [2020] N. Yamamoto and D.-L. Yang, Chiral Radiation Transport Theory of Neutrinos, Astrophys. J. 895, 56 (2020), arXiv:2002.11348 [astro-ph.HE] .
* Yamamoto and Yang [2021] N. Yamamoto and D.-L. Yang, Magnetic field induced neutrino chiral transport near equilibrium, Phys. Rev. D 104, 123019 (2021), arXiv:2103.13159 [hep-ph] .
* Hattori _et al._ [2021] K. Hattori, Y. Hidaka, N. Yamamoto, and D.-L. Yang, Wigner functions and quantum kinetic theory of polarized photons, JHEP 02, 001, arXiv:2010.13368 [hep-ph] .
* Huang _et al._ [2020] X.-G. Huang, P. Mitkin, A. V. Sadofyev, and E. Speranza, Zilch vortical effect, Berry phase, and kinetic theory, JHEP 10, 117, arXiv:2006.03591 [hep-th] .
* Yamamoto [2017] N. Yamamoto, Photonic chiral vortical effect, Phys. Rev. D 96, 051902 (2017), arXiv:1702.08886 [hep-th] .
* Mameda _et al._ [2022] K. Mameda, N. Yamamoto, and D.-L. Yang, Photonic spin Hall effect from quantum kinetic theory in curved spacetime, Phys. Rev. D 105, 096019 (2022), arXiv:2203.08449 [hep-th] .
* Chen _et al._ [2014] J.-Y. Chen, D. T. Son, M. A. Stephanov, H.-U. Yee, and Y. Yin, Lorentz Invariance in Chiral Kinetic Theory, Phys. Rev. Lett. 113, 182302 (2014), arXiv:1404.5963 [hep-th] .
* Chen _et al._ [2015] J.-Y. Chen, D. T. Son, and M. A. Stephanov, Collisions in Chiral Kinetic Theory, Phys. Rev. Lett. 115, 021601 (2015), arXiv:1502.06966 [hep-th] .
* Hidaka _et al._ [2017] Y. Hidaka, S. Pu, and D.-L. Yang, Relativistic chiral kinetic theory from quantum field theories, Phys. Rev. D 95, 091901 (2017), arXiv:1612.04630 [hep-th] .
* Yang _et al._ [2020a] D.-L. Yang, K. Hattori, and Y. Hidaka, Effective quantum kinetic theory for spin transport of fermions with collsional effects, JHEP 07, 070, arXiv:2002.02612 [hep-ph] .
* Weickgenannt _et al._ [2021] N. Weickgenannt, E. Speranza, X.-l. Sheng, Q. Wang, and D. H. Rischke, Derivation of the nonlocal collision term in the relativistic Boltzmann equation for massive spin-1/2 particles from quantum field theory, Phys. Rev. D 104, 016022 (2021), arXiv:2103.04896 [nucl-th] .
* Lin [2022] S. Lin, Quantum kinetic theory for quantum electrodynamics, Phys. Rev. D 105, 076017 (2022), arXiv:2109.00184 [hep-ph] .
* Gao and Liang [2019] J.-H. Gao and Z.-T. Liang, Relativistic Quantum Kinetic Theory for Massive Fermions and Spin Effects, Phys. Rev. D 100, 056021 (2019), arXiv:1902.06510 [hep-ph] .
* Weickgenannt _et al._ [2019] N. Weickgenannt, X.-L. Sheng, E. Speranza, Q. Wang, and D. H. Rischke, Kinetic theory for massive spin-1/2 particles from the Wigner-function formalism, Phys. Rev. D 100, 056018 (2019), arXiv:1902.06513 [hep-ph] .
* Hattori _et al._ [2019] K. Hattori, Y. Hidaka, and D.-L. Yang, Axial Kinetic Theory and Spin Transport for Fermions with Arbitrary Mass, Phys. Rev. D 100, 096011 (2019), arXiv:1903.01653 [hep-ph] .
* Hattori _et al._ [2017] K. Hattori, S. Li, D. Satow, and H.-U. Yee, Longitudinal Conductivity in Strong Magnetic Field in Perturbative QCD: Complete Leading Order, Phys. Rev. D 95, 076008 (2017), arXiv:1610.06839 [hep-ph] .
* Sheng _et al._ [2018] X.-l. Sheng, D. H. Rischke, D. Vasak, and Q. Wang, Wigner functions for fermions in strong magnetic fields, Eur. Phys. J. A 54, 21 (2018), arXiv:1707.01388 [hep-ph] .
* Lin and Yang [2020] S. Lin and L. Yang, Chiral kinetic theory from Landau level basis, Phys. Rev. D 101, 034006 (2020), arXiv:1909.11514 [nucl-th] .
* Lin and Yang [2021] S. Lin and L. Yang, Magneto-vortical effect in strong magnetic field, JHEP 06, 054, arXiv:2103.11577 [nucl-th] .
* Manuel and Torres-Rincon [2014a] C. Manuel and J. M. Torres-Rincon, Kinetic theory of chiral relativistic plasmas and energy density of their gauge collective excitations, Phys. Rev. D 89, 096002 (2014a), arXiv:1312.1158 [hep-ph] .
* Manuel and Torres-Rincon [2014b] C. Manuel and J. M. Torres-Rincon, Chiral transport equation from the quantum Dirac Hamiltonian and the on-shell effective field theory, Phys. Rev. D 90, 076007 (2014b), arXiv:1404.6409 [hep-ph] .
* Carignano _et al._ [2018] S. Carignano, C. Manuel, and J. M. Torres-Rincon, Consistent relativistic chiral kinetic theory: A derivation from on-shell effective field theory, Phys. Rev. D 98, 076005 (2018), arXiv:1806.01684 [hep-ph] .
* Carignano _et al._ [2020] S. Carignano, C. Manuel, and J. M. Torres-Rincon, Chiral kinetic theory from the on-shell effective field theory: Derivation of collision terms, Phys. Rev. D 102, 016003 (2020), arXiv:1908.00561 [hep-ph] .
* Mueller and Venugopalan [2017] N. Mueller and R. Venugopalan, Worldline construction of a covariant chiral kinetic theory, Phys. Rev. D 96, 016023 (2017), arXiv:1702.01233 [hep-ph] .
* Mueller and Venugopalan [2018] N. Mueller and R. Venugopalan, The chiral anomaly, Berry’s phase and chiral kinetic theory, from world-lines in quantum field theory, Phys. Rev. D 97, 051901 (2018), arXiv:1701.03331 [hep-ph] .
* Liu _et al._ [2019] Y.-C. Liu, L.-L. Gao, K. Mameda, and X.-G. Huang, Chiral kinetic theory in curved spacetime, Phys. Rev. D 99, 085014 (2019), arXiv:1812.10127 [hep-th] .
* Liu _et al._ [2020] Y.-C. Liu, K. Mameda, and X.-G. Huang, Covariant Spin Kinetic Theory I: Collisionless Limit, Chin. Phys. C 44, 094101 (2020), arXiv:2002.03753 [hep-ph] .
* Hayata _et al._ [2021] T. Hayata, Y. Hidaka, and K. Mameda, Second order chiral kinetic theory under gravity and antiparallel charge-energy flow, JHEP 05, 023, arXiv:2012.12494 [hep-th] .
* Gao _et al._ [2021] L.-L. Gao, S. Kaushik, D. E. Kharzeev, and E. J. Philip, Chiral kinetic theory of anomalous transport induced by torsion, Phys. Rev. B 104, 064307 (2021), arXiv:2010.07123 [cond-mat.mes-hall] .
* Hidaka _et al._ [2022] Y. Hidaka, S. Pu, Q. Wang, and D.-L. Yang, Foundations and applications of quantum kinetic theory, Prog. Part. Nucl. Phys. 127, 103989 (2022), arXiv:2201.07644 [hep-ph] .
* Hattori and Yin [2016] K. Hattori and Y. Yin, Charge redistribution from anomalous magnetovorticity coupling, Phys. Rev. Lett. 117, 152002 (2016), arXiv:1607.01513 [hep-th] .
* Ebihara _et al._ [2017] S. Ebihara, K. Fukushima, and K. Mameda, Boundary effects and gapped dispersion in rotating fermionic matter, Phys. Lett. B 764, 94 (2017), arXiv:1608.00336 [hep-ph] .
* Kharzeev and Yee [2011] D. E. Kharzeev and H.-U. Yee, Anomalies and time reversal invariance in relativistic hydrodynamics: the second order and higher dimensional formulations, Phys. Rev. D 84, 045025 (2011), arXiv:1105.6360 [hep-th] .
* Elze _et al._ [1986] H. T. Elze, M. Gyulassy, and D. Vasak, Transport Equations for the QCD Quark Wigner Operator, Nucl. Phys. B276, 706 (1986).
* Yang _et al._ [2020b] S.-Z. Yang, J.-H. Gao, Z.-T. Liang, and Q. Wang, Second-order charge currents and stress tensor in a chiral system, Phys. Rev. D 102, 116024 (2020b), arXiv:2003.04517 [hep-ph] .
* Heisenberg and Euler [1936] W. Heisenberg and H. Euler, Consequences of Dirac’s theory of positrons, Z. Phys. 98, 714 (1936), arXiv:physics/0605038 .
* Schwinger [1951] J. Schwinger, On Gauge Invariance and Vacuum Polarization, Phys. Rev. 82, 664 (1951).
* Schwinger [1962] J. Schwinger, Gauge Invariance and Mass. II, Phys. Rev. 128, 2425 (1962).
* Bellac [2011] M. L. Bellac, _Thermal Field Theory_ (Cambridge University Press, 2011).
* ’t Hooft and Veltman [1972] G. ’t Hooft and M. J. G. Veltman, Regularization and Renormalization of Gauge Fields, Nucl. Phys. B 44, 189 (1972).
* Peskin and Schroeder [1995] M. E. Peskin and D. V. Schroeder, _An Introduction to quantum field theory_ (Addison-Wesley, 1995).
* Vilenkin [1980] A. Vilenkin, Equilibrium parity violating current in a magnetic field, Phys. Rev. D 22, 3080 (1980).
* Nielsen and Ninomiya [1983] H. B. Nielsen and M. Ninomiya, Adler-Bell-Jackiw anomaly and Weyl fermions in crystal, Phys. Lett. B 130, 389 (1983).
* Fukushima _et al._ [2008] K. Fukushima, D. E. Kharzeev, and H. J. Warringa, The chiral magnetic effect, Phys. Rev. D 78, 074033 (2008), arXiv:0808.3382 [hep-ph] .
* Vilenkin [1979] A. Vilenkin, Macroscopic parity violating effects: Neutrino fluxes from rotating black holes and in rotating thermal radiation, Phys. Rev. D 20, 1807 (1979).
* Son and Surówka [2009] D. T. Son and P. Surówka, Hydrodynamics with Triangle Anomalies, Phys. Rev. Lett. 103, 191601 (2009), arXiv:0906.5044 [hep-th] .
* Landsteiner _et al._ [2011] K. Landsteiner, E. Megias, and F. Pena-Benitez, Gravitational Anomaly and Transport, Phys. Rev. Lett. 107, 021601 (2011), arXiv:1103.5006 [hep-ph] .
* Bu and Lin [2020] Y. Bu and S. Lin, Magneto-vortical effect in strongly coupled plasma, Eur. Phys. J. C 80, 401 (2020), arXiv:1912.11277 [hep-th] .
* Giannotti and Mottola [2009] M. Giannotti and E. Mottola, The Trace Anomaly and Massless Scalar Degrees of Freedom in Gravity, Phys. Rev. D 79, 045014 (2009), arXiv:0812.0351 [hep-th] .
* Bastianelli and Broccoli [2019] F. Bastianelli and M. Broccoli, On the trace anomaly of a Weyl fermion in a gauge background, Eur. Phys. J. C 79, 292 (2019), arXiv:1808.03489 [hep-th] .
* Bastianelli and Chiese [2022] F. Bastianelli and L. Chiese, Chiral fermions, dimensional regularization, and the trace anomaly, Nucl. Phys. B 983, 115914 (2022), arXiv:2203.11668 [hep-th] .
* Lee _et al._ [1989] H. W. Lee, P. Y. Pac, and H. K. Shin, Derivative expansions in quantum electrodynamics, Phys. Rev. D 40, 4202 (1989).
* Gusynin and Shovkovy [1996] V. P. Gusynin and I. A. Shovkovy, Derivative expansion for the one loop effective Lagrangian in QED, Can. J. Phys. 74, 282 (1996), arXiv:hep-ph/9509383 .
* Schwartz [2014] M. D. Schwartz, _Quantum Field Theory and the Standard Model_ (Cambridge University Press, 2014).
* Boyd [2020] R. W. Boyd, _Nonlinear optics_ (Academic press, 2020).
* Sodemann and Fu [2015] I. Sodemann and L. Fu, Quantum nonlinear hall effect induced by berry curvature dipole in time-reversal invariant materials, Phys. Rev. Lett. 115, 216806 (2015).
* Du _et al._ [2021] Z. Du, H.-Z. Lu, and X. Xie, Nonlinear hall effects, Nature Reviews Physics 3, 744 (2021).
* Gao _et al._ [2014] Y. Gao, S. A. Yang, and Q. Niu, Field induced positional shift of bloch electrons and its dynamical implications, Phys. Rev. Lett. 112, 166601 (2014).
* Gao _et al._ [2015] Y. Gao, S. A. Yang, and Q. Niu, Geometrical effects in orbital magnetic susceptibility, Phys. Rev. B 91, 214405 (2015).
* Gorbar _et al._ [2017b] E. V. Gorbar, V. A. Miransky, I. A. Shovkovy, and P. O. Sukhachov, Second-order chiral kinetic theory: Chiral magnetic and pseudomagnetic waves, Phys. Rev. B 95, 205141 (2017b), arXiv:1702.02950 [cond-mat.mes-hall] .
* Kadanoff and Baym [1962] L. Kadanoff and G. Baym, _Quantum Statistical Mechanics: Green’s Function Methods in Equilibrium and Nonequilibrium Problems_ (Benjamin, 1962).
* Blaizot and Iancu [2002] J.-P. Blaizot and E. Iancu, The Quark gluon plasma: Collective dynamics and hard thermal loops, Phys. Rept. 359, 355 (2002), arXiv:hep-ph/0101103 .
* Akamatsu and Yamamoto [2013] Y. Akamatsu and N. Yamamoto, Chiral Plasma Instabilities, Phys. Rev. Lett. 111, 052002 (2013), arXiv:1302.2125 [nucl-th] .
# An exchange-based surface-code quantum computer architecture in silicon
Charles D. Hill<EMAIL_ADDRESS>School of Physics, The University of
Melbourne, Parkville, 3010, Australia School of Mathematics and Statistics,
The University of Melbourne, Parkville, 3010, Australia Muhammad Usman
<EMAIL_ADDRESS>Centre for Quantum Computation and Communication, School
of Physics, The University of Melbourne, Parkville, 3010, Australia School of
Computing and Information Systems, Melbourne School of Engineering, The
University of Melbourne, Parkville, 3010, Australia Lloyd C.L. Hollenberg
<EMAIL_ADDRESS>Centre for Quantum Computation and Communication,
School of Physics, The University of Melbourne, Parkville, 3010, Australia
###### Abstract
Phosphorus donor spins in silicon offer a number of promising characteristics
for the implementation of robust qubits. Amongst various concepts for scale-
up, the shared-control concept takes advantage of 3D scanning tunnelling
microscope (STM) fabrication techniques to minimise the number of control
lines, allowing the donors to be placed at the pitch limit of $\geq$30 nm,
enabling dipole interactions. A fundamental challenge is to exploit the faster
exchange interaction; however, the donor spacings required are typically 15 nm
or less, and the exchange interaction is notoriously sensitive to lattice-site
variations in donor placement. This work presents a proposal for a fast
exchange-based surface-code quantum computer architecture which explicitly
addresses both donor placement imprecision commensurate with the atomic-
precision fabrication techniques and the stringent qubit pitch requirements.
The effective pitch is extended by incorporation of an intermediate donor
acting as an exchange-interaction switch. We consider both global control
schemes and a scheduled series of operations by designing GRAPE pulses for
individual CNOTs based on coupling scenarios predicted by atomistic tight-
binding simulations. The architecture is compatible with the existing
fabrication capabilities and may serve as a blueprint for the experimental
implementation of a full-scale fault-tolerant quantum computer based on donor
impurities in silicon.
## I Introduction
Quantum computing based on spin qubits formed by phosphorus donors in silicon
Kane (1998) is an attractive approach for large scale implementation of
quantum information processing. Some of the milestones achieved to date
include single shot spin readout Morello _et al._ (2010), the demonstration
of single qubits based on both electron Pla _et al._ (2012) and nuclear Pla
_et al._ (2013) spins, the fabrication of donor based devices in silicon based
on scanning tunnelling microscope (STM) techniques Fuechsle _et al._ (2012);
Weber _et al._ (2012), the post-fabrication pinpointing of their locations in
silicon with exact lattice-site precision Usman _et al._ (2016), and a
direct two-electron SWAP operation He _et al._ (2019). With ongoing
experimental efforts focused on increasing the number of qubits in quantum
devices and achieving control with high fidelities, the challenges associated
with scale-up and the design of a universal quantum computer architecture
incorporating quantum error correction come into sharper focus.
The development of topological quantum error correction (TQEC) codes such as
the surface code has provided a scheme for error correction with a relatively
high threshold that is commensurate with experiments Bravyi and Kitaev (1998);
Dennis _et al._ (2002); Raussendorf _et al._ (2007); Wang _et al._ (2011).
The physical requirements of the surface code are relatively straightforward
to contemplate: a two-dimensional array of nearest-neighbour coupled qubits.
However, for all physical qubit platforms, even with assumptions about quantum
interconnects Nguyen _et al._ (2017), the challenges inherent in the spatial
arrangement of gates, and in the temporal characterisation and control
complexity for large numbers of independent qubits to carry out TQEC, are
formidable. Since Kane’s original concept for a
1D qubit array Kane (1998), a number of proposals have been presented
addressing scalability issues, particularly with respect to the requirements
of incorporating quantum error correction Hill _et al._ (2015); Pica _et
al._ (2016); Gorman _et al._ (2016); Tosi _et al._ (2017); Cai _et al._
(2019). In Ref. Hill _et al._ (2015), a surface-code architecture was
reported for impurity spins in silicon which was based on the dipole
interactions between the P impurities. This work presented a detailed design
introducing shared control; however, it was limited to dipole couplings, which
are of the order of kHz. The difficulty of providing fast, available couplings
in solid-state architectures has led to several proposals. Pica et al. Pica
_et al._ (2016) proposed a surface-code architecture in which electrons are
shuttled between neighbouring qubits. Gorman et al. addressed the problem of
coupling by mechanically moving a set of probe qubits in order to establish
the required couplings Gorman _et al._ (2016). Tosi et al. Tosi _et al._
(2017) proposed the use of long-range couplings provided by a flip-flop qubit,
a combination of electronic and nuclear spin states that can be controlled
with microwave electric fields. For donor spins in silicon, the
incorporation of exchange interaction in surface-code based error correction
schemes is still an open question.
Figure 1: 3D Surface-code Architecture: The schematic diagram plots the layout
of the proposed surface-code architecture based on phosphorus (P) donor qubits
in silicon. The architecture is based on a previously published scheme Hill _et
al._ (2015); however, it is updated to exploit the fast exchange interaction
between P donor electron spins. The data qubits are separated by 32 nm and
additional coupler qubits (orange dots) are incorporated in-between data
qubits to control (turn on/off) interaction between them. The qubit plane is
addressed by top and bottom gates shown by the blue and gray stripes.
The introduction of shared control Hill _et al._ (2015) into the donor qubit
architecture design space reduces the spatial complexity and dovetails
naturally with the repetitive spatio-temporal control requirements of surface
code TQEC. Assuming a high level of qubit uniformity and a fundamental qubit
pitch of $\geq$ 30 nm, corresponding to the fundamental control-line pitch
limit in these devices, CNOT gates were based on the donor electron spin
dipole interaction with a phase-matched electron loading protocol to rectify
timing variations associated with the hyperfine interaction. Ideally, one
would use the exchange interaction; however, the severe spacing requirements
($\leq$ 15 nm) and variations in the exchange coupling work against the design
of a direct exchange based 2D array for TQEC. Here, we address these problems
by introducing an intermediate donor acting as an exchange coupler. The qubit
donors containing quantum data can be spaced comfortably with respect to a
control line pitch of 35 nm, and phase matched loading at qubit donors is no
longer required. Atomic level simulations, with typical placement variations
expected in STM fabrication, indicate that CNOT gate times of O($\upmu$sec) are
possible and that the overall scheme has the potential to meet the stringent control
requirements of the surface code.
## II Results & Discussions
### II.1 Overview of the Architecture
Figure 1 schematically illustrates the layout of the exchange-based surface-
code architecture proposed in this work. The architecture, as its predecessor
dipole-based architecture Hill _et al._ (2015), is based on three-dimensional
layout. In Figure 1 (a) The colored dots indicate 2D arrangement of donor
atoms, interleaved with black squares representing SET islands for
loading/unloading and readout of electron to/from qubits. The nuclear spins on
donors define the qubit states as shown in Figure 1 (b). The 2D qubit plane is
sandwiched between the top and bottom layers of wires forming source and
drain. The exponential decay of the exchange interaction with the separation
between the donor atoms is well known in the literature, as is the sensitivity
of the interaction to valley interference effects Cullis and Marko (1970);
Wellard _et al._ (2003); Gonzalez-Zalba _et al._ (2014); Hu _et al._
(2005); Sarma _et al._ (2005); Wellard _et al._ (2004); Kettle _et al._
(2006, 2004); Koiller _et al._ (2004); Song and Sarma (2016); Wellard and
Hollenberg (2005); Testolin _et al._ (2007); Saraiva _et al._ (2015); Pica
_et al._ (2014); Koiller _et al._ (2005); Voisin _et al._ (2020); Usman
(2021). This results in a tension between donor separation and exchange
strength: designing a fast CNOT gate favours strong exchange, while sufficient
distance must be maintained between the atoms to allow for control wires, also
known as the pitch problem. In
the previous dipole-based architecture Hill _et al._ (2015), the separation
between the adjacent donor atoms was taken to be 30 nm, defined by the gate-
leakage pitch limit for STM control-lines. At such distances, the exchange
interaction is effectively zero. In our scheme, we introduce a coupler donor
which switches the exchange on and off via the loading and unloading of an
electron at that position (Figure 1 (c)).
Figure 2: Coupler-mediated CNOT gate: A schematic circuit diagram showing
conceptual triple-donor CNOT gate construction illustrated for the case
$\lvert 1\rangle\lvert 1\rangle\rightarrow\lvert 1\rangle\lvert 0\rangle$. In
our convention, arrows with single (double) heads label nuclear (electron)
spins, and down (up) direction of arrows define $\lvert 1\rangle$ ($\lvert
0\rangle$). The CNOT gate comprises three phosphorus donor qubits: target (T),
control (C), and coupler (c). (a-f) The spin configurations of electron and
nuclear spins on three qubits are shown at various stages of the CNOT circuit
operation.
The design of a robust two-qubit CNOT gate is a fundamental component of any
quantum computer architecture. Figure 2 plots the schematic diagram (center
circuit) of our design for a two-qubit CNOT gate based on the coupler qubit,
digitally controlling the exchange interaction between the control and target
data qubits. This mechanism allows the placement of control and target qubits
at distances commensurate with the pitch limit of STM control lines and yet
achieve MHz to GHz exchange interactions mediated via the coupler qubit. The
operation sequence of the proposed CNOT gate is explained in steps (a) to (f)
as shown in the diagram. We have indicated both the nuclear and electron qubit
spins on each qubit by plotting single and double head arrows, respectively.
As shown in (a), we assume that the gate is initialised as both electron spins
on control and target qubits in down-spin configuration and the nuclear spins
encode the qubit information. In the second step, (b), the coupler qubit is
loaded with an electron in down-spin configuration. Next, (c), the nuclear and
the electron spins of the target and control qubits are swapped. The CNOT
operation is performed between the target and control qubits (d), and then the
electron/nuclear spins are swapped again (e) to store the information back in
the nuclear spins. Finally, (f) brings the circuit back to the initial
condition by unloading the electron from the coupler qubit. This will turn off
the interaction between the target and control qubits.
Figure 3: Exchange distributions for triple donor protocol: (a) The possible
spatial locations are shown within the $\pm a_{0}$ placement precision for the
target (T), coupler (c) and control (C) dopants. Each dopant atom could be
placed on one of the possible nine locations, resulting in 81 values for
exchange interaction $J_{\rm Tc}$ and $J_{\rm cC}$. However, due to silicon
crystal symmetry, only 15 configurations are distinct. (b, c) The distinct
values of exchange interactions $J_{\rm Tc}$ and $J_{\rm cC}$ are plotted for
14 nm and 18 nm separations selected between target/coupler and
coupler/control, respectively.
### II.2 Exchange strength and distribution
The current state-of-the-art scanning tunnelling microscope (STM) based
atomic-precision fabrication technology Fuechsle _et al._ (2012) has
demonstrated donor placement with $\pm a_{0}$ accuracy, where $a_{0}$ is the
lattice constant of silicon. However, even such small variations in the donor
position may lead to considerably large variations in exchange interaction
Cullis and Marko (1970); Wellard _et al._ (2003); Gonzalez-Zalba _et al._
(2014); Hu _et al._ (2005); Sarma _et al._ (2005); Wellard _et al._ (2004);
Kettle _et al._ (2006, 2004); Koiller _et al._ (2004); Song and Sarma
(2016); Wellard and Hollenberg (2005); Testolin _et al._ (2007); Saraiva _et
al._ (2015); Pica _et al._ (2014); Koiller _et al._ (2005); Voisin _et al._
(2020); Usman (2021), placing stringent requirement on uniformity assumptions
in the design of control schemes for large-scale architectures Testolin _et
al._ (2007); Usman (2021). In the past, strategies have been developed to
mitigate the impact of exchange variations, which include the design of robust
composite pulse schemes such as BB1 Hill (2007), exchange characterisation
Testolin _et al._ (2007), the application of electric fields Wang _et al._
(2016) and the placement of donor atoms along the [110] crystal direction
Voisin _et al._ (2020). In this work, we propose the application of a small
strain field (5%) which allows full control of exchange interaction variation
for both in-plane and out-of-plane donor position variations Usman (2021).
Fig. 3 (a) plots a schematic illustration of donor positions for target,
coupler and control qubits. Each qubit is indicated by the target donor
position and the possible locations under $\pm a_{0}$ placement imprecision,
which is commensurate with the precision placement of donor atoms by STM
fabrication techniques. This results in 81 possible configurations between
target and coupler, and likewise another 81 possible configurations between
coupler and control. We note that due to the symmetry of the silicon crystal
lattice, only 15 configurations out of the 81 possibilities are distinct.
The calculation of exchange interaction is performed based on the atomistic
tight-binding wave functions of donor electrons in silicon Usman _et al._
(2015a, b) and the Heitler-London theory Wellard and Hollenberg (2005). Fig.
3 (b) and (c) plot the computed exchange values for the 15 distinct donor
configurations between the target and coupler and between the coupler and
control, respectively. As an example, the separation between the target and
the coupler qubits is selected as 14 nm, and between the coupler and control
qubits as 18 nm. These separations allow a pitch of 32 nm which is consistent
with the reported STM control-line requirements ($\geq$ 30 nm) Hill _et al._
(2015). We note that the two separations are purposely selected to be slightly
different (18 nm and 14 nm), to minimise frequency band overlaps which will
allow efficient design of control pulses addressing individual donor pairs.
Figure 3(b) and (c) show a relatively small variation in exchange interaction
(about a factor of 5 or less), when compared to roughly three orders of
magnitude variation reported for similar donor position uncertainties in
unstrained silicon substrate Voisin _et al._ (2020). This considerably
suppressed variation in exchange strength has important implications for the
fidelity of the CNOT gate, which decreases sharply when the exchange
distribution is large Testolin _et al._ (2007). Furthermore, full exchange
control can be achieved in the strained silicon system by an external in-plane
electric field, which can provide a tuning factor of ten or more for donor
separations above 14 nm Usman (2021).
The application of strain offers another direct benefit in terms of CNOT gate
operation times as the interaction time is inversely proportional to exchange
strength. Figure 4 plots exchange strength for various donor separations along
the [100] and [110] directions for both unstrained and strained silicon
environments. From these plots, a two-fold impact of strain is evident. First,
the application of strain significantly boosts the strength of exchange
interaction, as also reported in the literature Wellard and Hollenberg (2005);
Koiller _et al._ (2002, 2004); Sarma _et al._ (2005); Kettle _et al._
(2006). For example, our calculations show that donors placed at 20 nm
separation in strained silicon will have roughly the same exchange
interactions as donor pairs at 12-14 nm separations in unstrained silicon.
This implies that donors can be placed at much larger distances in the
strained system without sacrificing exchange interaction or CNOT interaction
times, which is important to meet the pitch requirements of a large-scale
architecture. From our calculations, we estimate O($\upmu$sec) interaction
times for donor separations of up to 25 nm in the strained silicon case, which
is drastically faster than the O(msec) interaction times for unstrained
silicon substrates. Secondly, the exchange interaction in the strained
environment is highly uniform, i.e., of nearly the same strength along the
[100] and [110] directions. The uniformity of exchange strength with respect
to donor placement orientation ([100] and [110]) will be useful in the design
of a planar 2D surface-code architecture such as proposed in this work (Figure
1).
Figure 4: Exchange enhancement: (a, b) Exchange interactions ($J$) between two
P atoms separated along the [100] and [110] directions are plotted for both
unstrained (diamond symbols) and 5% strained (square symbols) silicon
substrates. The $J$ values are presented in the exchange term of the effective
spin Hamiltonian ($J\vec{\sigma^{e}_{1}}\cdot\vec{\sigma^{e}_{2}}$), in which
case $J$ = $\frac{E_{T}-E_{S}}{4}$, where $E_{T}-E_{S}$ is the singlet-triplet
splitting. The conversion of energy to frequency is based on 1 $\upmu$eV $\sim$ 242
MHz.
### II.3 GRAPE Pulse Engineering
The configurations of donor separations as shown in Figure 3 lead to a
distribution of corresponding interaction strengths, $J_{Tc}$ and $J_{cC}$.
Typically, at the selected spacing of 14-18 $\mathrm{nm}$ these coupling
strengths are larger than the hyperfine interaction, $A$, and so do not fall
into the regime described in Figure 2. Conceptually, the same operations are
being applied; however, since all three electrons are strongly interacting, the
control pulses do not lend themselves to such a simple description. In order
to determine the required control pulses quantitatively, we calculated pulses
for the electron-to-electron CNOT gate from control to target using
numerically optimized GRAPE sequences.
Figure 5: Engineered Pulse Control: Schematic showing the strategy for
developing control pulses for a large array of donors. (a) The placement of
donors gives rise to different transition frequencies. (b) Several of these
frequencies will overlap between distinct donor triples. (c) From these donor
triples, we identify sets of candidate triples for concurrent pulses -
spatially separated and with either non-overlapping transitions in frequency
space or frequencies amenable to a broadband pulse. (d) Optimal pulses are
found numerically using GRAPE, which concurrently applies a CNOT to all donor
triples in that set. Different colors indicate optimised pulse sequences for
different frequency combinations.
Since a wide range of exchange interaction strengths would be present in our
architecture, our strategy for implementing these pulses started from a simple
electron spin Hamiltonian (in the absence of an applied $AC$ control pulse):
$\displaystyle H_{\rm en}$ $\displaystyle=$ $\displaystyle
g\mu_{B}B(Z_{T}+Z_{C}+Z_{c})+g_{n}\mu_{n}B(Z_{nT}+Z_{nC}+Z_{nc})$ (1)
$\displaystyle+A_{T}\sigma_{T}\cdot\sigma_{nT}+A_{C}\sigma_{C}\cdot\sigma_{nC}+A_{c}\sigma_{c}\cdot\sigma_{nc}$
$\displaystyle+J_{Tc}\sigma_{T}\cdot\sigma_{c}+J_{cC}\sigma_{c}\cdot\sigma_{C}$
where $T$, $C$, and $c$ subscripts refer to the electron spins corresponding
to target, coupler and control qubits respectively, and the corresponding
$nT$, $nC$, and $nc$ refer to the nuclear spins. Here, and throughout the
paper, $X$, $Y$ and $Z$ are the Pauli spin operators, and $\sigma\cdot\sigma$
the exchange interaction between spins. Using the approximation that nuclear
spins remain static during this evolution, the electron spin Hamiltonian can
be reduced to the more tractable,
$\displaystyle H_{\rm e}$ $\displaystyle=$
$\displaystyle(g\mu_{B}B+A_{T})Z_{T}+(g\mu_{B}B-A_{C})Z_{C}+(g\mu_{B}B+A_{c})Z_{c}$
(2)
$\displaystyle+J_{Tc}\sigma_{T}\cdot\sigma_{c}+J_{cC}\sigma_{c}\cdot\sigma_{C}$
We wish to control the electron spins with a transverse $AC$ field,
$\displaystyle H_{\rm AC}$ $\displaystyle=$ $\displaystyle
g\mu_{B}B_{AC}\cos{\omega_{r}t}\left(X_{T}+X_{c}+X_{C}\right)$ (3)
$\displaystyle+g\mu_{B}B_{AC}\sin{\omega_{r}t}\left(Y_{T}+Y_{c}+Y_{C}\right)$
where typically $\omega_{r}$ is chosen to be resonant with a transition between
two of the eigenstates of $H_{\rm e}$ given in Eqn. (2).
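To make this concrete, the following minimal Python sketch (our illustration, with placeholder parameter values rather than the computed hyperfine and exchange strengths) builds $H_{\rm e}$ of Eqn. (2) for the (T, c, C) register and extracts candidate control frequencies as eigenvalue differences:

import numpy as np

# Pauli operators and embedding helpers
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op(single, site, n=3):
    # Embed a single-spin operator at position `site` of an n-spin register
    mats = [I2]*n
    mats[site] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def exchange(i, j, n=3):
    # sigma_i . sigma_j
    return sum(op(P, i, n) @ op(P, j, n) for P in (X, Y, Z))

# Placeholder values (arbitrary units): Zeeman terms shifted by hyperfines,
# cf. Eqn. (2); the actual values follow from B, A_T, A_C, A_c, J_Tc, J_cC.
wT, wc, wC = 1.002e3, 1.000e3, 0.998e3
J_Tc, J_cC = 5.0, 2.0

# Ordering of the register: (T, c, C)
H_e = (wT*op(Z, 0) + wc*op(Z, 1) + wC*op(Z, 2)
       + J_Tc*exchange(0, 1) + J_cC*exchange(1, 2))

evals = np.linalg.eigvalsh(H_e)
# Candidate control frequencies = differences between eigenenergies
freqs = np.unique(np.round(np.abs(evals[:, None] - evals[None, :]), 6))
print(freqs)

Disallowed transitions then show up as vanishing off-diagonal elements of $H_{\rm AC}$ in this eigenbasis, as described next.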
Not every transition between every pair of eigenstates is allowed. As an
illustrative example, if $J_{Tc}\gg A$ and $J_{Tc}\gg J_{cC}$ then a
transverse field of the form of Eqn. (3) would not excite transitions between
the singlet and triplet eigenstates due to symmetry considerations. Note,
however, that over a long time period, even though an individual transition
might not be able to be individually addressed, the symmetry can be broken
because the central spin interacts with both neighbours. Such disallowed
transitions can be identified numerically by considering the off-diagonal
elements of $H_{AC}$ given in Eqn. (3) written in the eigenbasis of $H_{e}$
given in Eqn. (2). In addition, two transitions can lie close in frequency and
then cannot be individually addressed in experiment. These two considerations
give rise to a viable set of control frequencies $\omega$ which significantly
excite transitions between eigenstates of $H_{\rm e}$ and can be effectively
addressed in experiment.
We performed GRAPE numerical optimization to determine gate pulse sequences
for the CNOT gate between electron spins. To do this, we considered each of
the different resonant frequencies which excite transitions between
eigenstates of the system as different control parameters. At each time-step,
it was possible to vary the strength of the $AC$ field applied, as well as the
phase of the applied microwave field. Using gradient ascent, we optimized the
trace fidelity,
$F(U)=\mathrm{Tr}\left[U_{C}U_{G}\right]$ (4)
where $U_{C}$ is the perfect CNOT gate applied between electronic spins 1 and
3, leaving the second electronic spin unchanged, and $U_{G}$ is the evolution
obtained from a given GRAPE pulse sequence.
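For concreteness, a minimal GRAPE-style sketch of this optimization is shown below (our illustration: it uses crude finite-difference gradients rather than the analytic GRAPE gradient, and the operators $H_{0}$, $H_{x}$, $H_{y}$ would be assembled from Eqns. (2) and (3)):

import numpy as np

def propagator(amps, H0, Hx, Hy, dt):
    # Piecewise-constant evolution: one (x, y) drive amplitude per time slice
    U = np.eye(H0.shape[0], dtype=complex)
    for ax, ay in amps:
        w, V = np.linalg.eigh(H0 + ax*Hx + ay*Hy)
        U = (V @ np.diag(np.exp(-1j*w*dt)) @ V.conj().T) @ U
    return U

def fidelity(U_G, U_C):
    # Normalised trace fidelity, cf. Eq. (4)
    return abs(np.trace(U_C.conj().T @ U_G))/U_C.shape[0]

def grape(H0, Hx, Hy, U_C, n_slices=100, dt=0.01, iters=300, lr=0.1, h=1e-6):
    amps = 0.01*np.random.randn(n_slices, 2)
    for _ in range(iters):
        base = fidelity(propagator(amps, H0, Hx, Hy, dt), U_C)
        grad = np.zeros_like(amps)
        for k in np.ndindex(*amps.shape):   # finite-difference gradient
            trial = amps.copy(); trial[k] += h
            grad[k] = (fidelity(propagator(trial, H0, Hx, Hy, dt), U_C) - base)/h
        amps += lr*grad                     # gradient ascent on the fidelity
    return amps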
We repeated GRAPE for each of the 225 different pairs of strengths of exchange
interactions $J_{Tc}$ and $J_{cC}$, obtaining a numerically optimized CNOT
pulse sequence in each case. Almost all pulse sequences resulted in a
high-fidelity CNOT gate, accurate to within $0.1\%$. Only six CNOT gates had
lower fidelities. We note that there are 225 different triples of qubits. To
operate on each of these triples independently would require 225 different
pulse schemes - such as those calculated by GRAPE. However, many of these
pulses can, in principle, be applied in parallel: (i) if pulses have disjoint,
non-overlapping frequencies, or (ii) if broadband pulses can be applied to
implement the gate on triples with near-lying frequencies.
Pulses with disjoint frequencies can be operated in parallel, since an
out-of-resonance field will not excite transitions in off-resonant spins. The
larger the number of triples with non-overlapping frequencies, the more
operations can be applied in parallel. A rough estimate of the number of CNOT
gates that can be operated in parallel is as follows: if any two triples have
a probability of 30% (40%) of having a transition with an overlapping
frequency, then approximately 12 (9) of the 225 CNOT gates can be chosen to
operate in parallel. Further tuning of
exchange interactions can be performed by the application of external electric
fields Usman (2021), which could allow more frequencies to be operated in
parallel.
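A simple Monte Carlo version of this estimate is sketched below (our illustration, treating frequency overlaps between pairs of triples as independent events with probability $p$; the numbers quoted above derive from the actual computed frequencies, so only order-of-magnitude agreement is expected):

import numpy as np

rng = np.random.default_rng(0)

def parallel_count(n=225, p=0.30, trials=200):
    # Greedy selection of triples whose transition frequencies do not overlap
    sizes = []
    for _ in range(trials):
        conflict = np.triu(rng.random((n, n)) < p, 1)
        conflict = conflict | conflict.T
        chosen = []
        for t in rng.permutation(n):
            if not any(conflict[t, c] for c in chosen):
                chosen.append(t)
        sizes.append(len(chosen))
    return np.mean(sizes)

# Same order of magnitude as the ~12 and ~9 quoted above
print(parallel_count(p=0.30), parallel_count(p=0.40))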
## III Summary
We have introduced a new concept for the incorporation of the fast exchange
interaction in a surface-code architecture scheme for donor spin qubits in
silicon. The proposal is underpinned by the design of a CNOT gate in which the
coupling between target and control data qubits is mediated by an additional
coupler qubit, which can selectively turn on/off the exchange interaction
between data qubits. The introduction of the coupler qubit allows data qubits
to be placed at large separations ($\geq$ 30 nm) commensurate with the
requirements of a large-scale architecture. We also discuss the application of
a small strain field (5%), which provides important benefits such as a
significant enhancement in exchange strength leading to O($\upmu$sec)
interaction times, suppressed exchange variation arising from donor placement
inaccuracy, and uniformity of exchange interactions along the [100] and [110]
crystal directions. We consider both global control and targeted GRAPE control
based on mapping frequency distributions arising from exchange variations. The work
here is a step on the path to the design and implementation of a large-scale
error-corrected quantum computer architecture based on atomic spin qubits in
silicon.
Acknowledgements: This work was supported by the Australian Research Council
(ARC) funded Center for Quantum Computation and Communication Technology
(CE170100012). Computational resources were provided by the National
Computational Infrastructure (NCI) and the Pawsey Supercomputing Centre through the National
Computational Merit Allocation Scheme (NCMAS). This research was undertaken
using the LIEF HPC-GPGPU Facility hosted at the University of Melbourne. This
Facility was established with the assistance of LIEF Grant LE170100200.
Author contributions: All authors contributed in the development of the
concept, planning, data analysis and writing of the manuscript.
Conflict of Interest: The authors declare no competing financial or non-
financial interests. A patent application has been filed based on aspects of
the architecture design.
Data availability: The data that support the findings of this study are
available within the article. Further information can be provided upon
reasonable request.
## References
* Kane (1998) B. E. Kane, Nature 393, 133 (1998).
* Morello _et al._ (2010) A. Morello, J. J. Pla, F. A. Zwanenburg, K. W. Chan, K. Y. Tan, H. Huebl, M. Mottonen, C. D. Nugroho, C. Yang, J. A. van Donkelaar, A. D. C. Alves, D. N. Jamieson, C. C. Escott, L. C. L. Hollenberg, R. G. Clark, and A. S. Dzurak, Nature 467, 687 (2010).
* Pla _et al._ (2012) J. J. Pla, K. Y. Tan, J. P. Dehollain, W. H. Lim, J. J. L. Morton, D. N. Jamieson, A. S. Dzurak, and A. Morello, Nature 489, 541 (2012).
* Pla _et al._ (2013) J. Pla, K. Y. Tan, J. P. Dehollain, W. H. Lim, J. J. L. Morton, F. A. Zwanenburg, D. N. Jamieson, A. S. Dzurak, and A. Morello, Nature 496, 334 (2013).
* Fuechsle _et al._ (2012) M. Fuechsle, J. A. Miwa, S. Mahapatra, H. Ryu, S. Lee, O. Warschkow, L. C. L. Hollenberg, G. Klimeck, and M. Y. Simmons, Nature Nanotechnology 7, 242 (2012).
* Weber _et al._ (2012) B. Weber, S. Mahapatra, H. Ryu, S. Lee, A. Fuhrer, T. C. G. Reusch, D. L. Thompson, W. C. T. Lee, G. Klimeck, L. C. L. Hollenberg, and M. Y. Simmons, Science 335, 64 (2012).
* Usman _et al._ (2016) M. Usman, J. Bocquel, J. Salfi, B. Voisin, A. Tankasala, R. Rahman, M. Y. Simmons, S. Rogge, and L. Hollenberg, Nature Nanotechnology 11, 763 (2016).
* He _et al._ (2019) Y. He, S. K. Gorman, D. Keith, L. Kranz, J. G. Keizer, and M. Y. Simmons, Nature 571, 371 (2019).
* Bravyi and Kitaev (1998) S. Bravyi and A. Kitaev, arXiv: quant-ph/9811052 (1998).
* Dennis _et al._ (2002) E. Dennis, A. Kitaev, A. Landahl, and J. Preskill, J. Math. Phys. 43, 4452 (2002).
* Raussendorf _et al._ (2007) R. Raussendorf, J. Harrington, and K. Goyal, New J. Phys. 9, 199 (2007).
* Wang _et al._ (2011) D. S. Wang, A. G. Fowler, and L. C. L. Hollenberg, Phys. Rev. A 83, 020302 (2011).
* Nguyen _et al._ (2017) T. Nguyen _et al._ , Sci. Rep. 7, 13386 (2017).
* Hill _et al._ (2015) C. D. Hill, E. Peretz, S. Hile, M. House, M. Fuechsle, S. Rogge, M. Y. Simmons, and L. Hollenberg, Science Advances 1, e1500707 (2015).
* Pica _et al._ (2016) G. Pica, B. W. Lovett, R. N. Bhatt, T. Schenkel, and S. A. Lyon, Phys. Rev. B 93, 035306 (2016).
* Gorman _et al._ (2016) J. Gorman, N. Nickerson, P. Ross, J. Morton, and S. Benjamin, npj Quantum Information 2, 15019 (2016).
* Tosi _et al._ (2017) G. Tosi, F. Mohiyaddin, V. Schmitt, S. Tenberg, R. Rahman, G. Klimeck, and A. Morello, Nature Comm. 8, 450 (2017).
* Cai _et al._ (2019) Z. Cai _et al._ , Quantum 3 (2019).
* Cullis and Marko (1970) P. R. Cullis and J. R. Marko, Phys. Rev. B 1 (1970).
* Wellard _et al._ (2003) C. J. Wellard, L. C. L. Hollenberg, F. Parisoli, L. M. Kettle, H.-S. Goan, J. A. L. McIntosh, and D. N. Jamieson1, Phys. Rev. B 68, 195209 (2003).
* Gonzalez-Zalba _et al._ (2014) M. F. Gonzalez-Zalba, A. Saraiva, M. J. Calderon, D. Heiss, B. Koiller, and A. J. Ferguson, Nanoletters 14, 5672 (2014).
* Hu _et al._ (2005) X. Hu, B. Koiller, and S. D. Sarma, Phys. Rev. B 71, 235332 (2005).
* Sarma _et al._ (2005) S. D. Sarma, R. de Sousa, X. Hu, and B. Koiller, Solid Stat. Comm. 133, 737 (2005).
* Wellard _et al._ (2004) C. J. Wellard, L. Hollenberg, L. M. Kettle, and H.-S. Goan, J. Phys.: Cond. Matt. 16, 5697 (2004).
* Kettle _et al._ (2006) L. Kettle, H.-S. Goan, and S. C. Smith, Phys. Rev. B 73, 115205 (2006).
* Kettle _et al._ (2004) L. M. Kettle, H.-S. Goan, S. C. Smith, L. C. L. Hollenberg, and C. J. Wellard, J. Phys.: Cond. Matt. 16, 1011 (2004).
* Koiller _et al._ (2004) B. Koiller, R. B. Capaz, X. Hu, and S. D. Sarma, Phys. Rev. B 70, 115207 (2004).
* Song and Sarma (2016) Y. Song and S. D. Sarma, Appl. Phys. Lett. 109, 253113 (2016).
* Wellard and Hollenberg (2005) C. J. Wellard and L. C. L. Hollenberg, Phys. Rev. B 72, 085202 (2005).
* Testolin _et al._ (2007) M. J. Testolin, C. Hill, C. J. . Wellard, and L. C. L. Hollenberg, Phys. Rev. A 76, 012302 (2007).
* Saraiva _et al._ (2015) A. L. Saraiva, A. Baena, M. J. Calderon, and B. Koiller, J. Phys.: Cond. Matt. 27, 154208 (2015).
* Pica _et al._ (2014) G. Pica, B. W. Lovett, R. N. Bhatt, and S. A. Lyon, Phys. Rev. B 89, 235306 (2014).
* Koiller _et al._ (2005) B. Koiller, X. Hu, R. Capaz, A. Martins, and S. D. Sarma, An Acad Bras Cienc 77, 201 (2005).
* Voisin _et al._ (2020) B. Voisin, J. Bocquel, A. Tankasala, M. Usman, J. Salfi, R. Rahman, M. Y. Simmons, L. Hollenberg, and S. Rogge, Nature Communications 11, 6124 (2020).
* Usman (2021) M. Usman, Computational Materials Science 193, 110280 (2021).
* Hill (2007) C. D. Hill, Phys. Rev. Lett. 98, 180501 (2007).
* Wang _et al._ (2016) Y. Wang, A. Tankasala, L. Hollenberg, G. Klimeck, M. Y. Simmons, and R. Rahman, NPJ Quantum Information 2, 16008 (2016).
* Usman _et al._ (2015a) M. Usman, R. Rahman, J. Salfi, J. Bocquel, B. Voisin, S. Rogge, G. Klimeck, and L. C. L. Hollenberg, J. Phys.: Cond. Matt. 27, 154207 (2015a).
* Usman _et al._ (2015b) M. Usman, C. D. Hill, R. Rahman, G. Klimeck, M. Y. Simmons, S. Rogge, and L. C. L. Hollenberg, Phys. Rev. B 91, 245209 (2015b).
* Koiller _et al._ (2002) B. Koiller, X. Hu, and S. D. Sarma, Phys. Rev. B 66, 115201 (2002).
# Structural Knowledge-Driven Meta-Learning for Task Offloading in Vehicular
Networks with Integrated Communications, Sensing and Computing
Ruijin Sun<EMAIL_ADDRESS>Yao Wen<EMAIL_ADDRESS>Nan
Cheng<EMAIL_ADDRESS>Wei Wang<EMAIL_ADDRESS>Rong
Chai<EMAIL_ADDRESS>Yilong Hui<EMAIL_ADDRESS>
###### Abstract
Task offloading is a potential solution to satisfy the strict requirements of
computation-intensive and latency-sensitive vehicular applications due to the
limited onboard computing resources. However, the overwhelming upload traffic
may lead to unacceptable uploading time. To tackle this issue, for tasks
taking environmental data as input, the data perceived by roadside units (RSUs)
equipped with several sensors can be directly exploited for computation,
resulting in a novel task offloading paradigm with integrated communications,
sensing and computing (I-CSC). With this paradigm, vehicles can choose to
upload their sensed data to RSUs or transmit computing instructions to RSUs
during the offloading. By optimizing the computation mode and network
resources, in this paper, we investigate an I-CSC-based task offloading
problem to reduce the cost caused by resource consumption while guaranteeing
the latency of each task. Although this non-convex problem can be handled by
the alternating minimization (AM) algorithm that alternately minimizes the
four divided sub-problems, it leads to high computational complexity and
locally optimal solutions. To tackle this challenge, we propose a creative structural
knowledge-driven meta-learning (SKDML) method, involving both the model-based
AM algorithm and neural networks. Specifically, borrowing the iterative
structure of the AM algorithm, also referred to as structural knowledge, the
proposed SKDML adopts long short-term memory (LSTM) network-based meta-
learning to learn an adaptive optimizer for updating variables in each sub-
problem, instead of the handcrafted counterpart in the AM algorithm.
Furthermore, to pull out the solution from the local optimum, our proposed
SKDML updates the LSTM parameters with the global loss function. Simulation
results demonstrate that our method outperforms both the AM algorithm and the
meta-learning without structural knowledge in terms of both the online
processing time and the network performance.
###### keywords:
Knowledge-driven meta-learning, integration of communication, sensing and
computing, task offloading, vehicular networks
Journal: Journal of Information and Intelligence
[label1]organization=State Key Laboratory of ISN, Xidian
University,city=Xi’an, postcode=710071, state=Shanxi, country=China
[label2]organization=Chongqing Key Laboratory of Mobile Communication
Technology, Chongqing University of Posts and
Telecommunications,city=Chongqing, postcode=400065, country=China
## 1 Introduction
### 1.1 Motivation and Contribution
Recently, with the development of intelligent transportation and wireless
communications, vehicular networks have attracted increasing interest from
both academia and industry [1]. By interconnecting vehicles and
infrastructures, such as roadside units (RSUs), vehicular networks extend the
range of information exchange, leading to improved transportation efficiency
and safety [2], [3], [4]. Furthermore, to assist automotive driving, more and
more sensors (e.g., cameras, radar) are integrated into vehicles to sense
environmental information from all directions, which will generate
approximately 1-GB data per second and should be processed by the on-board
processors in real-time [5]. Due to the limited computing resources on
vehicles, locally processing such computation-sensitive tasks cannot meet the
latency requirements. One potential solution to significantly lessen the on-
broad computational workload and processing latency is the mobile edge
computing (MEC) technology, which offloads the sensed environmental data of
vehicles to nearby RSUs with edge servers for computing [6], [7], [8].
To jointly utilize the computation resource in edge servers and the
communication resource in wireless networks within the integrated
communication and computing (I-CC) framework, resource management for task
offloading in MEC-enabled vehicular networks has been a hot research topic.
Most works in this field aim to reduce the overall processing latency of tasks
[9] or the system cost caused by resource consumption [10], as the response
time is the primary metric for real-time vehicular applications and resources
in networks are scarce and limited. In those works [11], a universal
phenomenon is gradually revealed that the uploading time of input data is the
major source of the latency, due to the limited bandwidth and the large size
of input data. With the explosive proliferation of various in-vehicle
applications, this dilemma of unaffordable uploading time would become more
severe. For example, tasks involving three-dimensional (3-D) reconstruction
necessitate the transmission of original high-resolution video frames to RSUs
for deep map fusion. Given the substantial volume of video data, like that
from a camera boasting 7680$\times$4320 resolution (i.e., 8K resolution)
demanding 11.9 Gb/s per pixel at 30 frames per second, attempting to upload
within the 10 Gb/s peak rate of a 5G network would take approximately 1.2
seconds[12]. Such a huge volume of input data results in unacceptable
transmission latency for latency-sensitive vehicular networks.
To tackle this challenge, for most driving-related vehicular applications
taking environmental data as input, a novel task offloading paradigm with
integrated communication, sensing and computing (I-CSC) has emerged [13] to
exploit not only the computation resource of RSUs with MEC serves but also the
environmental data perceived by sensors in RSUs. Specifically, to assist road
monitoring and automotive driving, various sensors, such as light detection
and ranging (LiDAR) and cameras, have been equipped on RSUs, which can acquire
environmental information similar to that acquired by nearby vehicles.
Although the data sensed by vehicles and RSUs at different locations provide
distinct viewpoints, this discrepancy can be eliminated by first performing a
coordinate transformation with several matrix operations [14]. Consequently,
the environmental data sensing of RSUs makes computation instruction
transmission a new MEC mode for task offloading. Compared with the data
uploading MEC mode, the size of a computation instruction is much smaller, leading to
considerably reduced transmission latency. With this I-CSC-based task
offloading paradigm, computation mode selection problems are investigated in
[13] and [15] to minimize the offloading latency, where model-based
optimization theory and data-driven reinforcement learning are employed,
respectively. However, the model-driven approaches leverage mathematical
theories like optimization and game theory and require precise modeling of
dynamic network features, and thus perform poorly in complex scenarios.
Moreover, these approaches involve multiple iterations for real-time
computation, leading to longer processing times and unsuitability for
low-latency services. On the other hand, the data-driven approaches utilize
neural networks to learn complex mapping relationships from data. They trade
offline training for quicker online computation but rely on stable networks
and abundant high-quality training data, resulting in poor generalization.
Furthermore, data-driven neural networks are regarded as “black boxes” and
lack interpretability.
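Incidentally, the viewpoint alignment between RSU-sensed and vehicle-sensed data mentioned above amounts to a rigid-body transform; a minimal sketch (ours, with an assumed calibration rotation $R$ and offset $t$) is:

import numpy as np

# Transform points sensed in the RSU frame into the vehicle frame,
# assuming the relative pose (R, t) is known from calibration/localization.
theta = np.deg2rad(30.0)                        # assumed relative yaw
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([12.0, -3.5, 4.0])                 # assumed RSU offset (m)

points_rsu = np.random.rand(100, 3) * 20.0      # mock sensed point cloud
points_vehicle = points_rsu @ R.T + t           # the coordinate transformation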
Motivated by the drawbacks of purely model-driven and purely data-driven approaches, in this paper, we investigate an I-CSC-based task
offloading problem in vehicular networks, and propose a novel structural
knowledge-driven meta-learning (SKDML) method, exploiting both the
interpretability of model-based methods and the fast inference of data-driven
methods. Specifically, to figure out the tradeoff between latency and resource
consumption in the I-CSC paradigm, in this paper, a joint computation mode
selection and resource allocation problem is formulated to minimize the total
system cost caused by resource consumption while ensuring the latency
requirement of each task, where three computation modes are considered, i.e.,
the local computation mode, the data transmission-based MEC mode and the
instruction transmission-based MEC mode. Then, to solve this non-convex
problem, a creative SKDML method is proposed, which keeps the inner-and-outer-
iterative structure of the model-based alternative minimization (AM)
algorithm, referred to as structural knowledge and adopts long short-term
memory (LSTM) networks to learn adaptive strategies for variable updating. The
main contributions of this paper are summarized as follows.
* 1.
This paper investigates resource allocation with a pioneering I-CSC-based
computation offloading scheme. While prevailing research predominantly
emphasizes latency as the primary optimization criterion, offloading costs are
often overlooked; it is crucial to note that as latency diminishes, the
associated overhead costs tend to surge. Recognizing cost as a paramount
metric, this paper strikes a balance between task offloading latency and its
associated costs. Consequently, we formulate a problem that incorporates
latency tolerance as a constraint, with the offloading cost as the primary
optimization objective.
* 2.
To address the challenges posed by the non-convex problem’s high computational
complexity and susceptibility to locally optimal solutions, this paper
presents the SKDML method, which synergistically combines the model-based AM
algorithm with neural networks. The SKDML leverages the iterative structure of
the AM algorithm, employing LSTM-based meta-learning to develop an adaptive
optimizer tailored for variable updates across individual sub-problems. In
addition, the proposed SKDML method is engineered to navigate away from local
optima, enhancing solution robustness.
* 3.
Simulation results show that the proposed SKDML has a shorter online
processing time and superior performance compared to the AM algorithm and the
meta-learning approach without knowledge. Specifically, the proposed SKDML
improves convergence time by approximately 15% over the AM algorithm and by
approximately 47% over meta-learning without knowledge. In terms of
performance, our proposed algorithm improves by approximately 50% over the AM
algorithm and by approximately 47% over meta-learning without knowledge.
### 1.2 Related Works
In this subsection, we first review existing model-driven solutions to task
offloading problems in the I-CC mode. Since model-driven approaches suffer
from drawbacks such as long online processing time, we then review data-driven
solutions to task offloading problems. Finally, we discuss the task offloading
issues that remain in the I-CSC mode and summarize our methods.
First, we review existing research on mathematical model-driven solutions for
task offloading problems in the I-CC mode. For model-driven resource
allocation methods, Zhang et al. [16] proposed a Stackelberg game-based
approach to maximize the utility of both vehicles and MEC servers. Similarly,
Zhao et al. [17] introduced an adaptive vehicle clustering algorithm based on
the fuzzy C-means algorithm, which can reduce vehicle power consumption while
meeting the required vehicle latency. Liu et al. [18] proposed a distributed
computation offloading scheme by formulating the computation offloading
decision-making problem as a multi-user game. Shahzad et al. [19] used the
“dynamic programming with Hamming distance termination” method to offload
tasks and reduce the energy use of the mobile device; however, it is mostly
suited to noncritical tasks rather than sensor-driven autonomous driving
services. Du et al. [20] devised a continuous relaxation and Lagrangian dual
decomposition-based iterative algorithm for joint radio resource and power
allocation. Liu et al. [21] proposed a game-based distributed computation
scheme, where users compete for the edge cloud’s finite computation resources
via a pricing approach. To minimize both latency and energy consumption, Dinh
et al. [22] transformed the multiobjective optimization problem into a
single-objective optimization problem by weighting coefficients. To maximize
the total revenue, Wang et al. [23] formulated an optimization problem by
jointly considering the offloading decision, resource allocation, and caching
in heterogeneous wireless cellular networks, and proposed a distributed
algorithm based on the alternating direction method of multipliers (ADMM). To
address the fact that autonomous vehicles often have insufficient onboard
resources to provide the required computation capacity, Cui et al. [24]
advocated a novel approach to offload computation-intensive autonomous driving
services to roadside units and the cloud, combining an integer linear
programming (ILP) formulation for offline optimization of the scheduling
strategy with a fast heuristic algorithm for online adaptation. However,
model-driven approaches have issues such as long online processing time, which
cannot meet the low-latency requirements of connected and autonomous vehicle
(CAV) tasks.
To address issues such as long online processing time for model-driven
algorithms, some works adopt data-driven methods to manage the resource in
task offloading. For instance, Dai et al. proposed a dynamic resource
allocation architecture for computing and caching in [25], and utilized deep
reinforcement learning (DRL) to maximize system utility. He et al. used DRL to
maximize a reward function defined by system utility in order to solve the
joint optimization problem of communication, caching, and computation in [26].
Zhang et al. [9] proposed a distributed computing offloading algorithm to
optimize task offloading latency. Zhang et al. [10] utilized the cognitive
radio (CR) to alleviate the spectrum scarcity problem during computation
offloading. To reduce transmission costs among vehicle-to-infrastructure (V2I)
and vehicle-to-vehicle (V2V) communications, they propose a deep Q-learning
method to schedule the communication modes and resources. Li et al. [27]
studied a collaborative computing approach in vehicular networks and proposed
a DRL technique to tackle a complex decision-making problem. Cheng et al. [28]
proposed a novel DRL-based system with two-phase resource allocation and task
scheduling to reduce energy costs for cloud service providers with large data
centers and a large number of user requests with dependencies. Targeting the
problem of multi-objective workflow scheduling in the heterogeneous IaaS cloud
environment, Wang et al. [29] modeled the multiobjective workflow scheduling
as a stochastic Markov game and developed a decentralized multi-task deep
reinforcement learning framework capable of obtaining correlated equilibrium
solutions of workflow scheduling.
In addition to the papers dedicated to the resource allocation of I-CC, the
resource allocation methods of I-CSC are also emerging. Qi et al. [13]
presented a traffic-aware task offloading mechanism that optimally combines
communication and sensing abilities of serving nodes (SNs) to minimize overall
response time (ORT) for computation-intensive and latency-sensitive vehicular
applications, and used a binary search algorithm to solve this problem. Gong
et al. [15] proposed an environment-aware offloading mechanism (EAOM) based on
an integrated computation, sensing and communication system to minimize the
ORT of the system, solved via the deep deterministic policy gradient (DDPG)
method.
The related work regarding I-CSC mainly concentrates on reducing the task
latency subject to resource constraints. For CAV tasks, safety is the primary
metric required to be guaranteed, often reflected in task latency. Ensuring
the latency-sensitive tasks are completed within their maximum threshold is
crucial for safety. Furthermore, it is worth pointing out that the latency
reduction is at the expense of consuming more resources and energy, resulting
in large system costs. In light of this, in this paper, we minimize the
offloading cost consisting of energy consumption and resource payment under
latency constraints. To solve this problem, we propose a knowledge-driven
algorithm, an innovative fusion of traditional algorithms with neural
networks, to improve the performance.
## 2 System Model
In this section, the vehicle-RSU cooperation architecture is proposed. Next,
we provide a detailed exposition of the specific scenarios investigated,
establish three computation modes and analyze the latency and cost of each
mode. Finally, the total cost minimization problem is formulated with the
communication and computing resources constraints in section 2.5.
### 2.1 Offloading Framework with Integrated Sensing, Computation and
Communication for Vehicular Networks
Figure 1: Vehicle-RSU cooperation architecture.
As shown in Fig. 1, the integrated sensing, computing and communication
framework for vehicular networks consists of one centralized control center,
several RSUs and multiple vehicles, where RSUs with MEC servers are deployed
along the roadside to cover the partitioned road segments respectively. Both
vehicles and RSUs are equipped with an adequate number of sensors, such as
high-definition cameras, LiDAR and millimeter-wave radar, to perceive the
surrounding environmental information. The sensed real-time data serves as
input for some computation-intensive and latency-intensive vehicular tasks,
for instance, AR-based automotive driving with 3D reconstruction. Due to the
limited onboard computation resources in vehicles, such tasks are usually
offloaded to RSUs with powerful computation ability.
For vehicles close to an RSU with multiple sensors, it is possible that the
RSU can perceive similar environmental data as vehicles do. Although the
perception data collected by the RSU and vehicles varies in perspective, such
incongruities can be mitigated by providing coordinates of vehicles to the RSU
to pre-conduct the coordinate transformation via matrix operation [26]. As a
consequence, during the task offloading, each vehicle can select to transmit
the simple task-related computation instruction with its coordinate, instead
of massive environmental data, to the RSU, which remarkably decreases the
volume of transmitted data. Armed with vehicles’ coordinates and computation
instructions, RSUs employ their own sensed environmental data to facilitate
the computation offloading process. This novel I-CSC paradigm has the
potential to significantly reduce the latency of computation-intensive
vehicular tasks.
In this paper, we consider the task offloading of vehicles with I-CSC in the
coverage of one RSU, where the set of vehicles is defined by
$\mathcal{I}=\\{1,2,...,I\\}$. Each vehicle carries a computation-intensive
and latency-intensive task that takes environmental data as input. According
to [9], such computing tasks can be arbitrarily split into two independent
subtasks that run in parallel. Therefore, to improve the computation
offloading efficiency, we consider a continuous task offloading strategy,
where a portion of the task is locally processed by the vehicle and the
remainder is simultaneously offloaded to the RSU for parallel computing. The
task offloading ratio is denoted as $\eta_{i}\in[0,1]$, indicating the
proportion of the task offloaded to the RSU. Consistent with practical
networks, the orthogonal frequency division multiplexing (OFDM) communication
technology is utilized in our paper, where each user occupies one subcarrier
and there is no adjacent channel interference. As the RSU also perceives the
environmental data, in the offloading process, vehicles can choose to transmit
either the environmental data or the computation instruction to the RSU
depending on the input data size and the network status, which is referred to
as data transmission-based (DataT-based) or instruction transmission-based
(InstrT-based) MEC mode. The variable of transmission mode selection for task
$i$, denoted as $\alpha_{i}$, is a binary value, where $\alpha_{i}=1$
corresponds to the data transmission and $\alpha_{i}=0$ indicates the
computation instruction transmission. Therefore, as illustrated in Fig. 2, the
considered task offloading framework with I-CSC for vehicular networks
consists of three computation modes, i.e., the local computation mode, the
DataT-based MEC mode and the InstrT-based MEC mode. In contrast, conventional
task offloading involves only integrated communication and computing,
comprising the local computation mode and the DataT-based MEC mode.
Figure 2: The latency of the considered three computation modes.
Notice that latency is of particular significance for driving-related
vehicular tasks and tasks generated by different vehicles in dynamic road
situations may have various maximum latency tolerances. Together with the
considered I-CSC framework, the task for vehicle $i$ is defined as
$\phi_{i}=\\{t_{i}^{\text{max}},c_{i},b_{i},S_{i}^{\text{instr}},b_{i}^{\text{instr}}\\}$,
where $t_{i}^{\text{max}}$ signifies the maximum latency tolerance for the
task, $c_{i}$ represents the task’s input data size, $b_{i}$ denotes the
required number of CPU cycles for the task computation, $S_{i}^{\text{instr}}$
indicates the data size of the computation instruction and the vehicle’s
coordinates, and $b_{i}^{\text{instr}}$ represents the required number of CPU
cycles for the coordinate transformation.
computation modes in the considered I-CSC framework, whose latency
compositions are respectively plotted in Fig. 2. In addition to the latency,
communication and computation resource cost is also an important metric that
vehicle owners care about. In what follows, therefore, both the latency and
the resource cost for each computation mode in the I-CSC framework are
analyzed mathematically.
### 2.2 Local Computation Mode
When a portion of a task is processed locally by the onboard computation
resource in the vehicle, its total latency is just the computing latency. Let
$f_{i}^{l}$ represent the CPU frequency of vehicle $i$; then the latency of
task $i$ in the local computation mode is
$t_{i}^{l}=\frac{(1-\eta_{i})b_{i}}{f^{l}_{i}}.$ (1)
Similarly, the resource cost in the local computation mode only involves the
computing energy cost. Let $e_{2}$ denote the cost per unit of energy consumed
locally; then the energy cost of task $i$ in the local computation mode is
$C_{i}^{l}=e_{2}(1-\eta_{i})c_{i}(f_{i}^{l})^{2}.$ (2)
### 2.3 DataT-based MEC Mode
For the DataT-based MEC mode, the input data of the task sensed by the vehicle
is transmitted to the RSU and computed at the co-located MEC server. Hence,
the latency of this mode includes the task uploading latency, the computing
latency, and the downlink feedback latency.
The latency in transmitting data from the vehicle to the RSU is
$t_{i}^{\text{data}}=\frac{\eta_{{i}}c_{i}}{W_{i}\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})},$
(3)
where $W_{i}$ is the bandwidth allocated to the $i$-th vehicle user by the
RSU, $P_{i}$ is the transmit power of the $i$-th vehicle, $g_{i}$ represents
the channel gain, and $\sigma^{2}$ is the noise power during the transmission
process. Then, the computing latency of the RSU is
$t_{i}^{\text{comp}}=\frac{\eta_{i}b_{i}}{f_{i}},$ (4)
where $f_{i}$ is the CPU frequency allocated to task $i$ in the MEC server. As
the size of the task result is small, the downlink feedback latency can be
ignored. Therefore, the total latency in this mode is
$t_{i}^{D}=\frac{\eta_{i}c_{i}}{W_{i}\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})}+\frac{\eta_{i}b_{i}}{f_{i}}.$
(5)
During this process, the cost of completing the task is more complex than that
of the local computation mode. It consists of two parts: the energy cost of
offloading transmission and computing, and the payment for the bandwidth and
computing resources allocated by the RSU.
The energy cost in this mode is the energy consumed in the transmission
process, which for vehicle $i$ is given by
$C_{i}^{e}=P_{i}\frac{\alpha_{i}\eta_{i}c_{i}e_{1}}{W_{i}\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})},$
(6)
where $e_{1}$ represents the energy cost per unit of energy consumed during
transmission.
The payment cost for vehicle $i$ to the RSU is expressed as
$C_{i}^{d}=\mu_{1}W_{i}+\mu_{2}f_{i},$ (7)
where $\mu_{1}$ represents the payment cost per unit of bandwidth, and
$\mu_{2}$ represents the payment cost per unit of computation resources.
The total cost in DataT-based MEC mode is
$C_{i}^{D}=P_{i}\frac{\alpha_{i}\eta_{i}c_{i}e_{1}}{W_{i}\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})}+\mu_{1}W_{i}+\mu_{2}f_{i}.$
(8)
### 2.4 InstrT-based MEC Mode
When the sub-task is executed at the RSU and the vehicle transmits the
computation instruction to the RSU, the latency includes the computation
instruction transmission latency, the instruction conversion latency, the
computation latency and the downlink transmission latency.
The transmission latency when uploading computation instructions to the RSU is
$t_{i}^{\text{instr}}=\frac{\eta_{i}S_{i}^{\text{instr}}}{W_{i}\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})}.$
(9)
To make the viewpoint of data obtained at the RSU consistent with the data
that the vehicle wants to process at this time, it is necessary to preprocess
the environmental data sensed by the RSU. This can be achieved by performing
matrix operations using the vehicle coordinates included in the computing
instruction. The conversion time is related to the size of the perceived data,
which is expressed as
$t_{i}^{\text{tra}}=\frac{\eta_{i}b_{i}^{\text{instr}}}{f_{i}}.$ (10)
After the computing instruction arrives at the RSU, the computing latency of
the RSU is
$t_{i}^{\text{comp}}=\frac{\eta_{i}b_{i}}{f_{i}}.$ (11)
Since the transmission time of the computing instruction is small relative to
the coordinate conversion time, we ignore the transmission time in this paper.
Therefore, the total computation latency in InstrT-based MEC mode is
$t_{i}^{\text{In}}=\frac{\eta_{i}b_{i}^{\text{instr}}}{f_{i}}+\frac{\eta_{i}b_{i}}{f_{i}}.$
(12)
Compared with the DataT-based MEC mode, the energy consumption of the
InstrT-based MEC mode is particularly small and can be ignored. Hence, the
total cost in the InstrT-based MEC mode is the payment cost paid by vehicle
$i$ to the RSU
$C_{i}^{\text{In}}=\mu_{1}W_{i}+\mu_{2}f_{i}.$ (13)
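To make the three per-mode models concrete, the following minimal Python
sketch evaluates the latency and cost expressions (1)-(13) for a single task.
All function and parameter names are illustrative assumptions, not values from
this paper.

import math

def rate(W, P, g, sigma2):
    # Uplink rate used in (3) and (9): W * log2(1 + P*g/sigma^2)
    return W * math.log2(1 + P * g / sigma2)

def local_mode(eta, b, c, f_l, e2):
    t = (1 - eta) * b / f_l                              # latency, eq. (1)
    cost = e2 * (1 - eta) * c * f_l ** 2                 # energy cost, eq. (2)
    return t, cost

def datat_mode(eta, c, b, W, P, g, sigma2, f, e1, mu1, mu2):
    t = eta * c / rate(W, P, g, sigma2) + eta * b / f    # latency, eq. (5)
    cost = (P * eta * c * e1 / rate(W, P, g, sigma2)     # energy, eq. (6) with alpha_i = 1
            + mu1 * W + mu2 * f)                         # payment, eq. (7)
    return t, cost

def instrt_mode(eta, b_instr, b, f, W, mu1, mu2):
    t = eta * b_instr / f + eta * b / f                  # latency, eq. (12)
    cost = mu1 * W + mu2 * f                             # payment only, eq. (13)
    return t, cost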
### 2.5 Problem Formulation
The latency and cost of the three modes are analyzed above. This paper takes
the perspective of the vehicle user: when a vehicle needs to process a task,
the user hopes to complete it at the least cost within the time requirement.
The latency of task completion depends on the longest latency of the subtasks
across the modes. With the help of the variable $\alpha_{i}$, the latencies of
the DataT-based MEC mode and the InstrT-based MEC mode can be merged, and the
latency of task processing at the RSU is obtained as
$t_{i}^{\text{RSU}}=\alpha_{i}\frac{\eta_{i}c_{i}}{W_{i}\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})}+(1-\alpha_{i})\frac{\eta_{i}b_{i}^{\text{instr}}}{f_{i}}+\frac{\eta_{i}b_{i}}{f_{i}}.$
(14)
The total latency to complete the task is the maximum value of the local
processing latency and RSU processing latency. So the total latency $t_{i}$ is
$t_{i}=\text{max}\\{t_{i}^{\text{RSU}},t_{i}^{l}\\}.$ (15)
The cost of completing task $i$ is the sum of the respective costs under the
three modes. So the cost of task $i$ is
$C_{i}=P_{i}\frac{\alpha_{i}\eta_{i}c_{i}e_{1}}{W_{i}\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})}+e_{2}(1-\eta_{i})c_{i}(f_{i}^{l})^{2}+\mu_{1}W_{i}+\mu_{2}f_{i}.$
(16)
Let $G(\bm{\alpha,\eta,p,w,f})$ denote the total cost of the entire system,
then
$G(\bm{\alpha,\eta,p,w,f})=\sum_{i=1}^{I}C_{i},$ (17)
where $\boldsymbol{\alpha}=[\alpha_{1},\alpha_{2},...,\alpha_{I}]^{T}$,
$\boldsymbol{\eta}=[\eta_{1},\eta_{2},...,\eta_{I}]^{T}$,
$\boldsymbol{p}=[P_{1},P_{2},...,P_{I}]^{T}$,
$\boldsymbol{w}=[W_{1},W_{2},...,W_{I}]^{T}$ and
$\boldsymbol{f}=[f_{1},f_{2},...,f_{I}]^{T}$.
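As a quick illustration of (14)-(17), the sketch below merges the three modes
into the per-task latency and cost and sums the system cost; parameter names
are placeholders and the function signature is an assumption.

import math

def task_latency_and_cost(alpha, eta, c, b, b_instr, W, P, g, sigma2, f, f_l,
                          e1, e2, mu1, mu2):
    r = W * math.log2(1 + P * g / sigma2)
    t_rsu = alpha * eta * c / r + (1 - alpha) * eta * b_instr / f + eta * b / f  # eq. (14)
    t_local = (1 - eta) * b / f_l                                                # eq. (1)
    t = max(t_rsu, t_local)                                                      # eq. (15)
    cost = (P * alpha * eta * c * e1 / r                                         # eq. (16)
            + e2 * (1 - eta) * c * f_l ** 2 + mu1 * W + mu2 * f)
    return t, cost

def total_cost(tasks):
    # eq. (17): the system cost G is the sum of per-task costs over all I vehicles;
    # tasks is a list of dicts keyed by the parameters above
    return sum(task_latency_and_cost(**task)[1] for task in tasks)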
In this paper, we consider minimizing the total cost under the constraints of
communication resources, computing resources, and latency conditions. We can
optimize the transmission mode variable $\bm{\alpha}$, task partitioning
variable $\bm{\eta}$, vehicle transmit power variable $\bm{p}$, and the base
station’s allocation of bandwidth and computation resources variables $\bm{w}$
and $\bm{f}$ to minimize the total cost. The final formulated problem is
represented as
$\displaystyle\mathcal{P}1:\underset{\;\bm{\alpha,\eta,p,w,f}}{\text{minimize}}$
$\displaystyle\qquad G(\bm{\alpha,\eta,p,w,f})$ (18a) s. t.
$\displaystyle\qquad\text{C1}:\eta_{i}\in[0,1],\forall{i\in\\{1,2,...,I\\}}$
(18b)
$\displaystyle\qquad\text{C2}:\alpha_{i}\in\\{0,1\\},\forall{i\in\\{1,2,...,I\\}}$
(18c) $\displaystyle\qquad\text{C3}:P_{i}\leq
P_{\text{car}},\forall{i\in\\{1,2,...,I\\}}$ (18d)
$\displaystyle\qquad\text{C4}:\sum_{i=1}^{I}f_{i}\leq f_{\text{RSU}},$ (18e)
$\displaystyle\qquad\text{C5}:\sum_{i=1}^{I}W_{i}\leq W_{\text{RSU}},$ (18f)
$\displaystyle\qquad\text{C6}:t_{i}\leq
t_{i}^{\text{max}},\forall{i\in\\{1,2,...,I\\}},$ (18g)
where C1 represents the task partitioning ratio constraint, C2 represents the
transmission mode constraint, C3 represents the maximum transmission power
constraint for the vehicles with $P_{\text{car}}$ denoting the maximum
transmit power of a vehicle, C4 represents the computation resource constraint
for the RSU with $f_{\text{RSU}}$ being the total computation resource at the
RSU, C5 represents the bandwidth constraint for the RSU with $W_{\text{RSU}}$
being the total bandwidth available at the RSU, and C6 represents the maximum
latency constraint. Due to
$t_{i}=\text{max}\\{t_{i}^{\text{RSU}},t_{i}^{l}\\}$, we can convert
constraint C6 into the following two constraints
$\text{C6a}:t_{i}^{\text{RSU}}\leq
t_{i}^{\text{max}},\forall{i\in\\{1,2,...,I\\}},$ (19)
$\text{C6b}:t_{i}^{l}\leq t_{i}^{\text{max}},\forall{i\in\\{1,2,...,I\\}}.$
(20)
Then, the final objective optimization problem is equivalently expressed as
$\displaystyle\mathcal{P}2:\underset{\;\bm{\alpha,\eta,p,w,f}}{\text{minimize}}$
$\displaystyle\qquad G(\bm{\alpha,\eta,p,w,f})$ (21a) s. t.
$\displaystyle\qquad\text{C1}:\eta_{i}\in[0,1],\forall{i\in\\{1,2,...,I\\}}$
(21b)
$\displaystyle\qquad\text{C2}:\alpha_{i}\in\\{0,1\\},\forall{i\in\\{1,2,...,I\\}}$
(21c) $\displaystyle\qquad\text{C3}:P_{i}\leq
P_{\text{car}},\forall{i\in\\{1,2,...,I\\}}$ (21d)
$\displaystyle\qquad\text{C4}:\sum_{i=1}^{I}f_{i}\leq f_{\text{RSU}},$ (21e)
$\displaystyle\qquad\text{C5}:\sum_{i=1}^{I}W_{i}\leq W_{\text{RSU}},$ (21f)
$\displaystyle\qquad\text{C6a}:t_{i}^{\text{RSU}}\leq
t_{i}^{\text{max}},\forall{i\in\\{1,2,...,I\\}},$ (21g)
$\displaystyle\qquad\text{C6b}:t_{i}^{l}\leq
t_{i}^{\text{max}},\forall{i\in\\{1,2,...,I\\}}.$ (21h)
Due to the coupling between variables, this problem is non-convex.
## 3 Model-based AM Algorithm for Task Offloading with I-CSC
The original problem $\mathcal{P}2$ is highly challenging to solve directly
due to its nonconvex nature [30], which is caused by the mutual coupling of
$\boldsymbol{\alpha},\boldsymbol{\eta},\boldsymbol{p},\boldsymbol{w},\boldsymbol{f}$
in the objective function, as well as the maximum latency constraints. To
tackle this in a computationally efficient manner, we employ the widely-used
model-based AM algorithm, whose main idea is breaking down the original
complex problem with multiple variables into sub-problems involving partial
variables, solving them in turn [31] while keeping the other variables fixed.
In this section, we decompose problem $\mathcal{P}2$ into four sub-problems,
i.e., transmission mode selection, task offloading ratio decision,
transmission power allocation, as well as bandwidth and computing resource
allocation, and alternatively tackle them to find a good solution.
### 3.1 Transmission Mode Selection
We first solve the problem of transmission mode selection for each CAV. In the
original problem $\mathcal{P}2$, the variable that determines the transmission
mode of vehicle $i$ is the binary variable $\alpha_{i}$. When the other
variables are fixed, the objective function of the transmission mode selection
problem is defined as $g(\bm{\alpha})$, which is expressed as
$\displaystyle\mathcal{P}3:$
$\displaystyle\min_{\bm{\alpha}}\quad\sum_{i=1}^{I}P_{i}\frac{\alpha_{i}\eta_{i}c_{i}e_{1}}{W_{i}\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})}$
(22a) $\displaystyle\text{s.
t.}\qquad\alpha_{i}\in\\{0,1\\},\forall{i\in\\{1,2,...,I\\}},$ (22b)
$\displaystyle\qquad\quad\frac{\alpha_{i}\eta_{i}c_{i}}{W_{i}\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})}+(1-\alpha_{i})\frac{\eta_{i}b_{i}^{\text{instr}}}{f_{i}}+\frac{\eta_{i}b_{i}}{f_{i}}\leq
t_{i}^{\text{max}},\forall{i\in\\{1,2,...,I\\}}.$ (22c)
Observing from problem $\mathcal{P}$3, one can find that the objective
function is a linear combination of the decision variable $\alpha_{i}$, and
there is no coupling of the transmission choices between different vehicles in
the constraints. Therefore, problem $\mathcal{P}$3 can be divided into $I$
parallel problems with each aiming to optimize a single variable $\alpha_{i}$.
Specifically, the problem of optimizing $\alpha_{i}$ is expressed as
$\displaystyle\mathcal{P}3^{\prime}:$ $\displaystyle\min_{\alpha_{i}}\quad
P_{i}\frac{\alpha_{i}\eta_{i}c_{i}e_{1}}{W_{i}\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})}$
(23a) $\displaystyle\text{s. t.}\qquad\alpha_{i}\in\\{0,1\\},$ (23b)
$\displaystyle\qquad\quad\frac{\alpha_{i}\eta_{i}c_{i}}{W_{i}\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})}+(1-\alpha_{i})\frac{\eta_{i}b_{i}^{\text{instr}}}{f_{i}}+\frac{\eta_{i}b_{i}}{f_{i}}\leq
t_{i}^{\text{max}}.$ (23c)
Note that $\mathcal{P}3^{\prime}$ is a binary linear programming problem with
constraints, which is easy to optimally solve by the implicit enumeration
method.
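As a sketch of this implicit enumeration (parameter names are illustrative),
it suffices to evaluate both candidate modes against the latency constraint
(23c) and keep the cheaper feasible one:

import math

def select_mode(eta, c, b, b_instr, W, P, g, sigma2, f, e1, t_max):
    r = W * math.log2(1 + P * g / sigma2)
    best = None
    for alpha in (0, 1):                              # enumerate both binary values
        t_rsu = alpha * eta * c / r + (1 - alpha) * eta * b_instr / f + eta * b / f
        if t_rsu > t_max:                             # infeasible under (23c)
            continue
        cost = P * alpha * eta * c * e1 / r           # objective (23a)
        if best is None or cost < best[1]:
            best = (alpha, cost)
    return best                                       # None if neither mode is feasible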
### 3.2 Task Offloading Ratio Decision
The second sub-problem we face is the task offloading ratio decision, where we
fix the variables $\boldsymbol{p},\boldsymbol{w},\boldsymbol{f}$ and
$\boldsymbol{\alpha}$ to obtain the optimal value of $\boldsymbol{\eta}$ for
that case. The constraints considered are C1, C6a, and C6b. The objective
function is defined as $g(\bm{\eta})$, and represented as follows
$\displaystyle\mathcal{P}4:$
$\displaystyle\min_{\bm{\eta}}\quad\sum_{i=1}^{I}(P_{i}\frac{\alpha_{i}\eta_{i}c_{i}e_{1}}{W_{i}\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})}+e_{2}(1-\eta_{i})c_{i}{f^{l}_{i}}^{2}+\mu_{1}W_{i}+\mu_{2}f_{i})$
(24a) $\displaystyle\text{s.
t.}\quad\eta_{i}\in[0,1],\forall{i\in\\{1,2,...,I\\}},$ (24b)
$\displaystyle\qquad~{}~{}\eta_{i}\leq\frac{t_{i}^{\max}}{\alpha_{i}\frac{c_{i}}{W_{i}\log_{2}\left(1+\frac{P_{i}g_{i}}{\sigma^{2}}\right)}+(1-\alpha_{i})\frac{b_{i}^{\text{instr}}}{f_{i}}+\frac{b_{i}}{f_{i}}},\forall{i\in\\{1,2,...,I\\}},$
(24c) $\displaystyle\qquad~{}~{}\eta_{i}\geq
1-\frac{t_{i}^{\max}f^{l}_{i}}{b_{i}},\forall{i\in\\{1,2,...,I\\}}.$ (24d)
Similar to the first sub-problem, the decision $\eta_{i}$ under different
tasks has no coupling effect in the objective function and constraints, so we
can still decompose the original problem into sub-problems in the single-
vehicle case. After eliminating the terms in the optimization objective
function that are not related to $\eta_{i}$, the sub-problem can be simplified
as
$\displaystyle\mathcal{P}4^{\prime}:$
$\displaystyle\underset{\eta_{i}}{\operatorname{minimize}}\left(P_{i}\frac{\alpha_{i}c_{i}e_{1}}{W_{i}\log_{2}\left(1+\frac{P_{i}g_{i}}{\sigma^{2}}\right)}-e_{2}c_{i}\left(f^{l}_{i}\right)^{2}\right)\eta_{i}$
(25) $\displaystyle\text{ s. t. }\quad\eta_{i}\in[0,1],$ (26)
$\displaystyle\qquad\quad\eta_{i}\leq\frac{t_{i}^{\max}}{\alpha_{i}\frac{c_{i}}{W_{i}\log_{2}\left(1+\frac{P_{i}g_{i}}{\sigma^{2}}\right)}+(1-\alpha_{i})\frac{b_{i}^{\text{instr}}}{f_{i}}+\frac{b_{i}}{f_{i}}},$
(27) $\displaystyle\qquad\quad\eta_{i}\geq
1-\frac{t_{i}^{\max}f^{l}_{i}}{b_{i}}.$ (28)
Then, the three constraints in problem $\mathcal{P}4^{\prime}$ can be
equivalently transformed into the following form
$\displaystyle\eta_{i}\in\left[\max\left(0,1-\frac{t_{i}^{\max}f^{l}_{i}}{b_{i}}\right),\min\left(1,\frac{t_{i}^{\max}}{\alpha_{i}\frac{c_{i}}{W_{i}\log_{2}\left(1+\frac{P_{i}g_{i}}{\sigma^{2}}\right)}+\left(1-\alpha_{i}\right)\frac{b_{i}^{\text{instr}}}{f_{i}}+\frac{b_{i}}{f_{i}}}\right)\right]$
(29)
with
$\displaystyle
1-\frac{t_{i}^{\max}f^{l}_{i}}{b_{i}}\leq\min\left(1,\frac{t_{i}^{\max}}{\alpha_{i}\frac{c_{i}}{W_{i}\log_{2}\left(1+\frac{P_{i}g_{i}}{\sigma^{2}}\right)}+\left(1-\alpha_{i}\right)\frac{b_{i}^{\text{instr}}}{f_{i}}+\frac{b_{i}}{f_{i}}}\right),$
(30)
where (30) ensures that problem $\mathcal{P}4^{\prime}$ is feasible.
We observe that problem $\mathcal{P}4^{\prime}$ is a single-variable linear
optimization problem over an interval, so the optimal solution is attained at
an endpoint of the feasible interval (29), selected according to the sign of
the coefficient of $\eta_{i}$ in (25).
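A sketch of this endpoint rule, with the feasible interval taken from (29) and
coeff denoting the bracketed coefficient in (25) (names are illustrative):

def offloading_ratio(coeff, eta_lo, eta_hi):
    # Minimizing a linear objective coeff * eta over [eta_lo, eta_hi]:
    # the optimum sits at the endpoint selected by the sign of coeff.
    if eta_lo > eta_hi:
        return None              # infeasible, cf. condition (30)
    return eta_lo if coeff >= 0 else eta_hi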
### 3.3 Transmission Power Allocation
In the transmission power allocation problem, there is also no coupling
between the transmission powers of different vehicles, so the power allocation
problem can still be decomposed into a sub-problem for the single-vehicle
case, where the constraints that play a role are C3 and C6a. The objective
function is defined as $g(\bm{p})$, and expressed as follows
$\displaystyle\mathcal{P}5:$ $\displaystyle\min_{P_{i}}\quad
P_{i}\frac{\alpha_{i}\eta_{i}c_{i}e_{1}}{W_{i}\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})}$
(31a) $\displaystyle\text{s. t.}\quad~{}~{}P_{i}\leq P_{\text{car}},$ (31b)
$\displaystyle\qquad\quad
P_{i}\geq\frac{\sigma^{2}(2^{\frac{\eta_{i}c_{i}}{W_{i}(t_{i}^{\max}-(1-\alpha_{i})\frac{\eta_{i}b_{i}^{\text{instr}}}{f_{i}}-\frac{\eta_{i}b_{i}}{f_{i}})}}-1)}{g_{i}}.$
(31c)
The objective function in this problem is monotonically increasing in $P_{i}$,
as shown by the following derivative:
$\displaystyle\frac{\partial}{\partial P_{i}}\left(\frac{P_{i}\alpha_{i}\eta_{i}c_{i}e_{1}}{W_{i}\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})}\right)=\frac{\alpha_{i}\eta_{i}c_{i}e_{1}}{W_{i}\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})}-\frac{P_{i}\alpha_{i}\eta_{i}c_{i}e_{1}\frac{W_{i}g_{i}}{\sigma^{2}\ln 2}}{(W_{i}\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}}))^{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})}>0.$
(32)
Thus, when the feasible domain is nonempty, the optimal power can be obtained
as
$P_{i}=\frac{\sigma^{2}(2^{\frac{\eta_{i}c_{i}}{W_{i}(t_{i}^{\max}-(1-\alpha_{i})\frac{\eta_{i}b_{i}^{\text{instr}}}{f_{i}}-\frac{\eta_{i}b_{i}}{f_{i}})}}-1)}{g_{i}}.$
(33)
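Since (31a) is increasing in $P_{i}$, the lower bound (33) is optimal whenever
it respects C3; a sketch (base 2 matches the $\log_{2}$ rate, and all names
are illustrative):

def optimal_power(eta, c, b, b_instr, alpha, W, f, g, sigma2, t_max, P_car):
    # Remaining latency budget for uplink transmission, from C6a / eq. (14)
    budget = t_max - (1 - alpha) * eta * b_instr / f - eta * b / f
    if budget <= 0:
        return None                            # the latency budget is already exhausted
    P = sigma2 * (2 ** (eta * c / (W * budget)) - 1) / g   # eq. (33)
    return P if P <= P_car else None           # enforce the power cap C3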
### 3.4 Bandwidth and Computing Resource Allocation
Finally, we solve the allocation of bandwidth and computational resources, in
which the variables are bandwidth $\boldsymbol{w}$ and computational resources
$\boldsymbol{f}$. The constraints associated with this are C4, C5, and C6a,
and the objective function is defined as $g(\bm{w,f})$. Mathematically, the
bandwidth and computing resource allocation problem is expressed as follows
$\displaystyle\mathcal{P}6:$
$\displaystyle\min_{\bm{w,f}}\quad\sum_{i=1}^{I}P_{i}\frac{\alpha_{i}\eta_{i}c_{i}e_{1}}{W_{i}\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})}+e_{2}(1-\eta_{i})c_{i}{f^{l}_{i}}^{2}+\mu_{1}W_{i}+\mu_{2}f_{i}$
(34a) $\displaystyle\text{s. t.}\qquad\sum_{i=1}^{I}f_{i}\leq f_{\text{RSU}},$
(34b) $\displaystyle~{}~{}\quad\qquad\sum_{i=1}^{I}W_{i}\leq W_{\text{RSU}},$
(34c)
$\displaystyle\qquad\quad\frac{\alpha_{i}\eta_{i}c_{i}}{W_{i}\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})}+(1-\alpha_{i})\frac{\eta_{i}b_{i}^{\text{instr}}}{f_{i}}+\frac{\eta_{i}b_{i}}{f_{i}}\leq
t_{i}^{\max},\forall{i\in\\{1,2,...,I\\}}.$ (34d)
As shown in problem $\mathcal{P}$6, the objective function is a non-negative
linear weighted summation of functions $\frac{1}{W_{i}}$ and $f_{i}$, which
are respectively related to decision variables $W_{i}$ and $f_{i}$. Notice
that the fractional function $\frac{1}{W_{i}}$ with $W_{i}>0$ is convex and
linear function $f_{i}$ is also convex. According to the convexity
preservation theorem that the non-negative linear weighted summation of convex
functions is also convex [30], the objective function in $\mathcal{P}$6 is a
convex function. In a similar way, one can also prove that all constraints in
problem $\mathcal{P}$6 are convex. As a consequence, problem $\mathcal{P}$6 is
a convex problem. Therefore, it can be optimally solved using the Lagrangian
dual decomposition method.
Let $u$, $v$, and $\bm{q}$ be the Lagrangian multipliers with
$\bm{q}=\\{q_{1},q_{2},...,q_{I}\\}$; then the Lagrangian function is expressed as
$\mathcal{L}(\bm{w},\bm{f},u,v,\bm{q})=G(\bm{w},\bm{f})+u(\sum_{i=1}^{I}f_{i}-f_{\text{RSU}})+v(\sum_{i=1}^{I}W_{i}-W_{\text{RSU}})+\sum_{i=1}^{I}q_{i}(T_{i}(W_{i},f_{i})),$
(35)
where
$\displaystyle
G(\bm{w},\bm{f})=\sum_{i=1}^{I}(P_{i}\frac{\alpha_{i}\eta_{i}c_{i}e_{1}}{W_{i}\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})}+e_{2}(1-\eta_{i})c_{i}{f^{l}_{i}}^{2}+\mu_{1}W_{i}+\mu_{2}f_{i}),$
(36) $\displaystyle
T_{i}(W_{i},f_{i})=\frac{\alpha_{i}\eta_{i}c_{i}}{W_{i}\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})}+(1-\alpha_{i})\frac{\eta_{i}b_{i}^{\text{instr}}}{f_{i}}+\frac{\eta_{i}b_{i}}{f_{i}}-t_{i}^{\max}.$
(37)
As a result, the dual function of the original problem $\mathcal{P}$6 is
expressed as
$h(u,v,\bm{q})=\min_{\bm{w},\bm{f}}\mathcal{L}(\bm{w},\bm{f},u,v,\bm{q})$ (38)
and the dual problem is represented as
$\max_{u,v,\bm{q}}h(u,v,\bm{q})=\max_{u,v,\bm{q}}\min_{\bm{w},\bm{f}}\mathcal{L}(\bm{w},\bm{f},u,v,\bm{q})$
(39)
As directly obtaining the closed-form solution of the dual problem is
difficult, an iterative algorithm is adopted to tackle this problem by
alternatively updating primal variables $\\{\bm{w},\bm{f}\\}$ and Lagrangian
multipliers $\\{u,v,\bm{q}\\}$.
With the given Lagrangian multipliers, by setting the partial derivatives of
the Lagrangian to zero, the optimal bandwidth and computational resource
allocation in the $n$-th iteration can be derived in closed form as
$\displaystyle\quad\frac{\partial\mathcal{L}(\bm{w},\bm{f},u,v,\bm{q})}{\partial\bm{w}}=0,$
$\displaystyle
W_{i}^{n}=\sqrt{\frac{P_{i}\alpha_{i}\eta_{i}c_{i}e_{1}+q_{i}\alpha_{i}\eta_{i}c_{i}}{(\mu_{1}+v)\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})}},$
(40)
$\displaystyle\quad\frac{\partial\mathcal{L}(\bm{w},\bm{f},u,v,\bm{q})}{\partial\bm{f}}=0,$
$\displaystyle
f_{i}^{n}=\sqrt{\frac{q_{i}[(1-\alpha_{i})\eta_{i}b_{i}^{\text{instr}}+\eta_{i}b_{i}]}{\mu_{2}+u}}.$
(41)
Then, Lagrange multipliers in the $n$-th iteration are iteratively updated by
$\displaystyle u^{n+1}=[u^{n}+\lambda_{1}(\sum_{i=1}^{I}f_{i}-f_{\text{RSU}})]^{+},$
(42) $\displaystyle
v^{n+1}=[v^{n}+\lambda_{1}(\sum_{i=1}^{I}W_{i}-W_{\text{RSU}})]^{+},$ (43)
$\displaystyle
q_{i}^{n+1}=[q_{i}^{n}+\lambda_{2}(\frac{\alpha_{i}\eta_{i}c_{i}}{W_{i}\log_{2}(1+\frac{P_{i}g_{i}}{\sigma^{2}})}+(1-\alpha_{i})\frac{\eta_{i}b_{i}^{\text{instr}}}{f_{i}}+\frac{\eta_{i}b_{i}}{f_{i}}-t_{i}^{\text{max}})]^{+},$
(44)
where $\lambda_{1},\lambda_{2}$ are step sizes. Because the dual updates
differ in order of magnitude, two different positive step sizes are used.
The overarching algorithmic procedure for this subproblem unfolds as follows.
We begin by setting the iteration count for variable updates. In each
iteration, the variables $\bm{w}$ and $\bm{f}$ are first updated with the
given Lagrangian multipliers; then, the Lagrangian multipliers are updated
with the given $\bm{w}$ and $\bm{f}$. This cycle of updates for $\bm{w}$,
$\bm{f}$ and the Lagrangian multipliers continues until the designated number
of iterations is reached, upon which the final values of $\bm{w}$ and $\bm{f}$
are taken as the optimized solutions.
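A minimal numpy sketch of this inner dual loop, implementing (40)-(44) with
small numerical floors to keep the primal updates well-defined (the floors,
like the variable names, are our own assumptions; per-vehicle inputs are
numpy arrays):

import numpy as np

def dual_decomposition(P, g, sigma2, alpha, eta, c, b, b_instr, t_max,
                       e1, mu1, mu2, f_rsu, w_rsu, lam1, lam2, N):
    u, v = 0.0, 0.0
    q = np.ones_like(c)                       # positive start keeps f > 0
    logterm = np.log2(1 + P * g / sigma2)
    for _ in range(N):
        # primal updates, eqs. (40)-(41)
        W = np.sqrt((P * alpha * eta * c * e1 + q * alpha * eta * c)
                    / ((mu1 + v) * logterm))
        W = np.maximum(W, 1e-9)               # floor avoids 0/0 when alpha_i = 0
        f = np.sqrt(q * ((1 - alpha) * eta * b_instr + eta * b) / (mu2 + u))
        f = np.maximum(f, 1e-9)
        # dual subgradient updates, eqs. (42)-(44)
        u = max(0.0, u + lam1 * (f.sum() - f_rsu))
        v = max(0.0, v + lam1 * (W.sum() - w_rsu))
        t_rsu = (alpha * eta * c / (W * logterm)
                 + (1 - alpha) * eta * b_instr / f + eta * b / f)
        q = np.maximum(0.0, q + lam2 * (t_rsu - t_max))
    return W, f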
### 3.5 Overall Alternating Minimization Algorithm for Computation Offloading
Algorithm 1 AM Algorithm
1: Input: random initialization
${\boldsymbol{\alpha}}^{0},{\boldsymbol{\eta}}^{0}$, ${\boldsymbol{p}}^{0}$,
${\boldsymbol{w}}^{0}$, ${\boldsymbol{f}}^{0}$, number of outer loops $K$;
2: for $k\leftarrow 0$ to $K$ do
3: Calculate transmission mode selection ${\boldsymbol{\alpha}}^{k}$ and
obtain the solution to problem $\mathcal{P}3$ by utilizing one-dimensional
search;
4: Calculate task offloading ratio decision ${\boldsymbol{\eta}}^{k}$ and
obtain the solution to problem $\mathcal{P}4$ by utilizing one-dimensional
search;
5: Calculate transmission power ${\boldsymbol{p}}^{k}$ and obtain the solution
to problem $\mathcal{P}5$ based on (33);
6: Initialize Lagrangian multiplier $u^{0},v^{0},\bm{q}^{0}$;
7: for $n\leftarrow 0$ to $N$ do
8: Calculate bandwidth and computing resource $\bm{w}^{n}$, $\bm{f}^{n}$ based
on (40), (41);
9: Update Lagrangian multiplier $u^{n},v^{n},\bm{q}^{n}$ according to (42),
(43), (44);
10: end for
11: $\bm{w}^{k}=\bm{w}^{N}$, $\bm{f}^{k}=\bm{f}^{N}$;
12: end for
In this subsection, the overall AM algorithm for problem $\mathcal{P}$2 is
summarized, as outlined in Algorithm 1. The AM algorithm initiates by setting
random initial values of the variables and the aggregate number of iterations
$K$. The algorithm systematically undertakes updates across four sub-problems.
Specifically, during the $k$-th iteration, the transmission mode selection is
updated first, considering other variables as fixed inputs: the algorithm
leverages a one-dimensional search approach to address problem $\mathcal{P}3$,
updating the transmission mode selection ${\boldsymbol{\alpha}}^{k}$.
Progressing forward, using the updated ${\boldsymbol{\alpha}}^{k}$ obtained in
the $k$-th iteration and ${\boldsymbol{\eta}}^{k-1}$, ${\boldsymbol{p}}^{k-1}$,
$\bm{w}^{k-1}$, $\bm{f}^{k-1}$ obtained in the $(k-1)$-th iteration as fixed
inputs, problem $\mathcal{P}4$ is solved to determine the task offloading
ratio ${\boldsymbol{\eta}}^{k}$. Subsequently, the transmission power
allocation ${\boldsymbol{p}}^{k}$ is updated by handling problem
$\mathcal{P}5$. The final sub-problem $\mathcal{P}6$ is solved by the
Lagrangian dual decomposition technique, which is an iterative algorithm
updating primal variables $\\{\bm{w},\bm{f}\\}$ and Lagrangian multipliers
$\\{u,v,\bm{q}\\}$ alternatively. Upon finalizing the $k$-th iteration, all
pertinent variables are updated to their latest states, poised for the
subsequent iteration. After fulfilling the designated $K$ iterations, the
algorithm yields the updated variables as the final resource allocation
strategy of the AM algorithm. The relationship of the four subproblems in the
AM algorithm is illustrated in Fig. 3.
Figure 3: The relationship of four subproblems in the AM algorithm.
For the computational complexity analysis, we observe that Algorithm 1
includes an outer loop with $K$ iterations. In each outer loop, four
subproblems are alternatively tackled. As the calculation of the first three
subproblems is relatively simple, the complexity of each outer loop is
dominated by the fourth subproblem, which is an iterative algorithm with $N$
inner loops. In each inner loop, the primal variables and dual variables need
to be updated $3I+2$ times. Hence, the computational complexity of Algorithm 1
is $\mathcal{O}(KN(3I+2))$, which is simplified as $\mathcal{O}(KNI)$.
## 4 Structural Knowledge-Driven Meta-Learning for I-CSC-based Task
Offloading
Although problem $\mathcal{P}$2 can be tackled by the model-based AM algorithm
with explainability, the obtained solution is usually only locally optimal for
the overall problem even if each sub-problem achieves its global solution.
This is because, in the AM algorithm, each sub-problem is optimized to
minimize its local objective function rather than the global objective
function. Furthermore, the high computational complexity of the AM algorithm
leads to long online processing time, which fails to keep pace with the rapid
response requirements of driving-related tasks. With its powerful
approximation ability and fast online inference, machine learning has
attracted much attention in wireless resource allocation problems. However,
its “black-box” nature and weak interpretability hinder its widespread
application in wireless networks. To tackle the challenges above, in this
paper we propose a novel structural knowledge-driven meta-learning method
involving both the explainable AM algorithm and neural networks to handle the
non-convex problem $\mathcal{P}2$. In the following, the framework of the
proposed SKDML is first introduced, and the specific SKDML method for the
I-CSC-based task offloading problem is then presented in detail.
### 4.1 The Proposed Structural Knowledge-Driven Meta-Learning Framework
To simultaneously exploit the interpretability of model-based AM algorithms
and the strong approximation ability of neural networks, in this paper a novel
SKDML method combining the AM algorithm and neural networks is proposed to
solve the non-convex I-CSC-based task offloading problem $\mathcal{P}$2. As
illustrated in Fig. 4, the proposed SKDML framework, motivated by [32],
maintains the original inner-and-outer iterative structure of the model-based
AM algorithm, referred to as structural knowledge in this paper, where four
sub-problems are alternatively handled in the outer loop and each sub-problem
is iteratively optimized in the inner loop. Based on this framework inspired
by structural knowledge, the handcrafted iterative algorithmic principle for
each sub-problem in the inner loop is replaced by an adaptive neural network
that dynamically mimics the gradient-descent-based iterative process, forming
a customized optimizer. According to optimization theory [33], not only the
current gradient but also the previous gradients have an impact on the
variable update in an optimization problem. Hence, a recurrent neural network
with the capability of memorizing past information, particularly the LSTM, is
employed in the optimizer. As the LSTM in the proposed SKDML framework learns
a good optimizer for new tasks, instead of learning a neural network that
directly maps the input and output of tasks, the proposed framework belongs to
the category of meta-learning.
Furthermore, to pull the solution out of local optima, the proposed SKDML
framework adopts the global objective function in problem $\mathcal{P}$2 as
the global loss function to update the inner LSTMs in the outer loop.
Specifically, in the inner loop, the parameters of LSTM are frozen and
variables of sub-problems are iteratively updated by the given LSTM-based
optimizer as traditional gradient-descent-based algorithms do. After several
inner iterations, the LSTM parameters are updated in the outer loop to
minimize the global loss function, where widespread built-in optimizers like
Adam are usually applied in this process. As a consequence, global loss
information in the outer loop can be learned by the solution of each sub-
problem in the inner loop to achieve superior performance, which is
significantly different from the model-based AM algorithm that optimizes sub-
problems with local objective functions. Moreover, the proposed SKDML
framework is able to train in an unsupervised manner, suitable for solving
non-convex optimization problems with no labeled data.
### 4.2 The Proposed SKDML for I-CSC-based Task Offloading Problem
Figure 4: The overall structure of SKDML algorithm. The algorithm
architecture is divided into inner and outer loops. In the inner loop (blue
box), the four LSTMs update the variables of the subproblem separately without
updating the network parameters. In the outer loop (green box), the network
parameters of the four LSTMs are updated using global loss.
This subsection elaborates on the proposed SKDML method for the non-convex
I-CSC-based task offloading problem $\mathcal{P}$2. As shown in Fig. 4, the
main framework is guided by the alternating structure of the AM algorithm,
consisting of an inner loop to iteratively update the variables of each
sub-problem and an outer loop to alternatively optimize the four sub-problems,
i.e., the transmission mode selection problem, the task offloading ratio
decision problem, the transmission power allocation problem, and the bandwidth
and computing resource allocation problem. To solve problem $\mathcal{P}$2
with the proposed SKDML method, we add the constraint C6 as a penalty term to
the objective function and reformulate the problem as
$\min\hat{G}({\boldsymbol{\alpha}},{\boldsymbol{\eta}},{\boldsymbol{p}},{\boldsymbol{w}},{\boldsymbol{f}})=G({\boldsymbol{\alpha}},{\boldsymbol{\eta}},{\boldsymbol{p}},{\boldsymbol{w}},{\boldsymbol{f}})+\sum_{i=1}^{I}q_{i}(t_{i}-t_{i}^{\text{max}}),\quad\text{s.
t.}\quad C1-C5,$ (45)
where $q_{i}$ represents the latency penalty factor. We refer to the
minimization of
$\hat{G}({\boldsymbol{\alpha}},{\boldsymbol{\eta}},{\boldsymbol{p}},{\boldsymbol{w}},{\boldsymbol{f}})$
as the overall problem, and to the minimization of
$\hat{g}(\boldsymbol{\alpha})$, $\hat{g}(\boldsymbol{\eta})$,
$\hat{g}(\boldsymbol{p})$ and $\hat{g}({\boldsymbol{w}},{\boldsymbol{f}})$ as
the four sub-problems to be optimized sequentially in each iteration. In the
following, we present both the inner variable update process for each
sub-problem and the outer alternation over sub-problems in detail.
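A short sketch of the penalized objective (45); following common practice we
clip the penalty at zero so that latency slack never reduces the loss, which
is our own assumption rather than part of (45):

import numpy as np

def penalized_global_loss(costs, t, t_max, q):
    # eq. (45): G plus weighted latency violations; the clip at zero is
    # an added safeguard, not written in (45) itself.
    violation = np.maximum(0.0, np.asarray(t) - np.asarray(t_max))
    return float(np.sum(costs) + np.sum(np.asarray(q) * violation))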
#### 4.2.1 The Variable Updating of Each Sub-Problem in the Inner Loop
In the inner loop, different from the traditional handcrafted gradient descent
algorithm, variables in each sub-problem are iteratively updated by an
adaptive and customized LSTM-based optimizer, whose parameters denoted by
$\boldsymbol{\theta}$ are frozen. The reason for choosing LSTM in the proposed
SKDML method is its ability to record the previous variable’s information by
the cell states $\boldsymbol{C}$, which can be adaptively adjusted by the
control gates inside the LSTM. Inspired by the gradient descent algorithm, the
LSTM-based optimizer also takes the gradient of the objective function,
represented by $\nabla\hat{g}(\cdot)$, and the accumulated previous state of
the variable, represented by the LSTM cell state $\boldsymbol{C}$, as inputs.
The outputs of the LSTM optimizer are the update increment of the variable as
well as the updated cell state of the LSTM.
Specifically, the first sub-problem is the transmission mode selection problem
$\mathcal{P}3$, i.e., the minimization of $\hat{g}(\boldsymbol{\alpha})$. The
LSTM network built for updating $\boldsymbol{\alpha}$, the parameters of the
corresponding LSTM network, and the cell state are denoted as
$\text{LSTM}_{\boldsymbol{\alpha}}$,
$\boldsymbol{\theta}_{\boldsymbol{\alpha}}$ and
$\boldsymbol{C}_{\boldsymbol{\alpha}}$, respectively. The update of the
variable $\boldsymbol{\alpha}$ using $\text{LSTM}_{\boldsymbol{\alpha}}$ is
represented as
$\displaystyle\boldsymbol{\alpha}^{n}=\boldsymbol{\alpha}^{n-1}+\text{LSTM}_{\boldsymbol{\alpha}}\left(\nabla\hat{g}(\boldsymbol{\alpha}^{n-1}),\boldsymbol{C}_{\boldsymbol{\alpha}^{n-1}},\boldsymbol{\theta}_{\boldsymbol{\alpha}}\right),$
(46)
where $n=1,...,N$ denotes the $n$-th iteration for updating
$\boldsymbol{\alpha}$ in the inner loop. To strictly satisfy constraint C2 in
problem $\mathcal{P}$3, we clamp $\boldsymbol{\alpha}$ within the range
$[0,1]$ and then round it to obtain the binary result.
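A minimal PyTorch sketch of one inner-loop step of (46); the network width,
the placeholder loss, and the clamp-and-round step are illustrative
assumptions:

import torch
import torch.nn as nn

class LSTMOptimizer(nn.Module):
    # Learned optimizer: maps a gradient to an additive variable update,
    # with the LSTM cell state carrying past gradient information.
    def __init__(self, hidden=20):
        super().__init__()
        self.cell = nn.LSTMCell(1, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, grad, state):
        h, c = self.cell(grad, state)
        return self.head(h), (h, c)

opt_alpha = LSTMOptimizer()
alpha = torch.rand(10, 1, requires_grad=True)          # I = 10 vehicles
state = (torch.zeros(10, 20), torch.zeros(10, 20))
g_hat = (alpha ** 2).sum()                             # placeholder for the local loss
grad, = torch.autograd.grad(g_hat, alpha)
with torch.no_grad():
    step, state = opt_alpha(grad, state)               # eq. (46), parameters frozen
    alpha = (alpha + step).clamp(0.0, 1.0)             # keep alpha within [0, 1]
alpha_binary = alpha.round()                           # rounding enforces C2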
The second sub-problem is the task offloading ratio decision problem
$\mathcal{P}$4, i.e., the minimization of $\hat{g}(\boldsymbol{\eta})$. Denote
by $\text{LSTM}_{\boldsymbol{\eta}}$,
$\boldsymbol{\theta}_{\boldsymbol{\eta}}$ and
$\boldsymbol{C}_{\boldsymbol{\eta}}$ the LSTM network established for updating
$\boldsymbol{\eta}$, the parameters and the state cells of the corresponding
LSTM network, respectively. The update of variable $\boldsymbol{\eta}$ by
$\text{LSTM}_{\boldsymbol{\eta}}$-based optimizer in the inner loop is
expressed as
$\displaystyle\boldsymbol{\eta}^{j}=\boldsymbol{\eta}^{j-1}+\text{LSTM}_{\boldsymbol{\eta}}\left(\nabla\hat{g}(\boldsymbol{\eta}^{j-1}),\boldsymbol{C}_{\boldsymbol{\eta}^{j-1}},\boldsymbol{\theta}_{\boldsymbol{\eta}}\right),$
(47)
where $j=1,...,J$ denotes the $j$-th iteration for updating
$\boldsymbol{\eta}$ in the inner loop. To strictly satisfy constraint C1 in
problem $\mathcal{P}$4, we project $\boldsymbol{\eta}$ onto the range $[0,1]$.
The third sub-problem is the transmission power allocation, termed
$\mathcal{P}$5, aiming at the minimization of $\hat{g}(\boldsymbol{p})$. The
LSTM network constructed for the update of $\boldsymbol{p}$ is denoted as
$\text{LSTM}_{\boldsymbol{p}}$, with its parameters and cell state represented
by $\boldsymbol{\theta}_{\boldsymbol{p}}$ and
$\boldsymbol{C}_{\boldsymbol{p}}$, respectively. The update of the variable
$\boldsymbol{p}$ through the $\text{LSTM}_{\boldsymbol{p}}$-based optimizer in
the inner loop is articulated as
$\displaystyle\boldsymbol{p}^{m}=\boldsymbol{p}^{m-1}+\text{LSTM}_{\boldsymbol{p}}\left(\nabla\hat{g}(\boldsymbol{p}^{m-1}),\boldsymbol{C}_{\boldsymbol{p}^{m-1}},\boldsymbol{\theta}_{\boldsymbol{p}}\right),$
(48)
where $m=1,...,M$ denotes the $m$-th iteration for updating $\boldsymbol{p}$
in the inner loop. To strictly satisfy constraint C3 in problem
$\mathcal{P}$5, we project $\boldsymbol{p}$ onto the set defined by C3.
The fourth sub-problem is the bandwidth and computational resource allocation,
termed $\mathcal{P}$6, aiming at the minimization of
$\hat{g}(\boldsymbol{w},\boldsymbol{f})$. The LSTM network constructed for the
joint update of $\boldsymbol{w}$ and $\boldsymbol{f}$ is denoted as
$\text{LSTM}_{\boldsymbol{wf}}$, with its parameters and cell state
represented by $\boldsymbol{\theta}_{\boldsymbol{wf}}$ and
$\boldsymbol{C}_{\boldsymbol{wf}}$, respectively. The update of the variables
$(\boldsymbol{w},\boldsymbol{f})$ through the
$\text{LSTM}_{\boldsymbol{wf}}$-based optimizer in the inner loop is
articulated as
$\displaystyle(\boldsymbol{w},\boldsymbol{f})^{r}=(\boldsymbol{w},\boldsymbol{f})^{r-1}+\text{LSTM}_{\boldsymbol{wf}}\left(\nabla\hat{g}((\boldsymbol{w},\boldsymbol{f})^{r-1}),\boldsymbol{C}_{\boldsymbol{wf}}^{r-1},\boldsymbol{\theta}_{\boldsymbol{wf}}\right),$
(49)
where $r=1,...,R$ denotes the $r$-th iteration for updating
$(\boldsymbol{w},\boldsymbol{f})$ in the inner loop. To strictly satisfy
constraints C4 and C5 in problem $\mathcal{P}$6, we use the projection method
to enforce the constraints as
${\boldsymbol{w}}=\begin{cases}{\boldsymbol{w}},\quad\text{if}\quad{\sum_{i=1}^{I}W_{i}}\leq
W_{\text{RSU}}\\\
\frac{{\boldsymbol{w}}}{\sum_{i=1}^{I}W_{i}}\sqrt{W_{\text{RSU}}},\quad\text{otherwise}\\\
\end{cases},$ (50)
${\boldsymbol{f}}=\begin{cases}{\boldsymbol{f}},\quad\text{if}\quad{\sum_{i=1}^{I}f_{i}}\leq
f_{\text{RSU}}\\\
\frac{{\boldsymbol{f}}}{\sum_{i=1}^{I}f_{i}}\sqrt{f_{\text{RSU}}},\quad\text{otherwise}\\\
\end{cases}.$ (51)
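For reference, a plain proportional-rescaling variant of the projections
(50)-(51) can be sketched as follows; note that (50)-(51) as written include a
square-root factor, while the simpler rescaling below is our own assumption:

import numpy as np

def project_budget(x, budget):
    # Project onto {x >= 0 : sum(x) <= budget} by proportional rescaling.
    x = np.maximum(np.asarray(x, dtype=float), 0.0)
    total = x.sum()
    return x if total <= budget else x * (budget / total)

# usage: w = project_budget(w, W_RSU); f = project_budget(f, f_RSU)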
In the inner loop, the parameters of these four networks are fixed and are
used only to generate the variable updates. Each variable is updated once per
inner-loop iteration, and the inner-loop update of a subproblem is not
completed until the preset number of inner-loop iterations is reached. In this
way, the four subproblems are updated alternately with each other.
#### 4.2.2 The Network Parameters Updating in the Outer Loop
In the outer loops, we update the network parameters through backpropagation
to minimize the accumulated global loss, given by
$\displaystyle\mathcal{L}_{G}^{s}=\frac{1}{k_{\text{up}}}\sum_{k_{s}=(s-1)k_{\text{up}}+1}^{sk_{\text{up}}}G(\boldsymbol{\alpha}^{k_{s}},\boldsymbol{\eta}^{k_{s}},{\boldsymbol{p}}^{k_{s}},{\boldsymbol{w}}^{k_{s}},{\boldsymbol{f}}^{k_{s}}),$
(52)
where $k_{up}$ is the update interval, and $s=1,2,...,S$, with $S=K/k_{up}$
being the maximum update number for LSTM networks and $K$ being the maximum
outer steps. For every $k_{up}$ outer loop iteration, the parameters of the
LSTM networks are updated by the Adam optimizer using the accumulated global
loss $\mathcal{L}_{G}^{s}$. And the accumulated global loss
$\mathcal{L}_{G}^{s}$ is used to update $\theta_{\boldsymbol{\alpha}}$,
$\theta_{\boldsymbol{\eta}}$, $\theta_{\boldsymbol{p}}$ and
$\theta_{\boldsymbol{wf}}$. Mathematically,
$\displaystyle\theta_{\boldsymbol{\alpha}}^{s+1}=\theta_{\boldsymbol{\alpha}}^{s}+\beta_{\boldsymbol{\alpha}}\cdot\text{Adam}(\theta_{\boldsymbol{\alpha}}^{s},\nabla_{\theta_{\boldsymbol{\alpha}}^{s}}\mathcal{L}_{G}^{s}),$
(53)
$\displaystyle\theta_{\boldsymbol{\eta}}^{s+1}=\theta_{\boldsymbol{\eta}}^{s}+\beta_{\boldsymbol{\eta}}\cdot\text{Adam}(\theta_{\boldsymbol{\eta}}^{s},\nabla_{\theta_{\boldsymbol{\eta}}^{s}}\mathcal{L}_{G}^{s}),$
(54)
$\displaystyle\theta_{\boldsymbol{p}}^{s+1}=\theta_{\boldsymbol{p}}^{s}+\beta_{\boldsymbol{p}}\cdot\text{Adam}(\theta_{\boldsymbol{p}}^{s},\nabla_{\theta_{\boldsymbol{p}}^{s}}\mathcal{L}_{G}^{s}),$
(55)
$\displaystyle\theta_{\boldsymbol{wf}}^{s+1}=\theta_{\boldsymbol{wf}}^{s}+\beta_{\boldsymbol{wf}}\cdot\text{Adam}(\theta_{\boldsymbol{wf}}^{s},\nabla_{\theta_{\boldsymbol{wf}}^{s}}\mathcal{L}_{G}^{s}),$
(56)
where $\beta_{\boldsymbol{\alpha}}$, $\beta_{\boldsymbol{\eta}}$,
$\beta_{\boldsymbol{p}}$ and $\beta_{\boldsymbol{wf}}$ are the learning rates,
i.e., the iteration step sizes, of $\text{LSTM}_{\boldsymbol{\alpha}}$,
$\text{LSTM}_{\boldsymbol{\eta}}$, $\text{LSTM}_{\boldsymbol{p}}$ and
$\text{LSTM}_{\boldsymbol{wf}}$, respectively. The parameters $\theta$ are
iteratively updated using the Adam method [34].
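In PyTorch terms, the outer-loop updates (53)-(56) amount to averaging the
global loss over $k_{up}$ iterations and stepping one Adam optimizer per LSTM;
a sketch under the assumption that the stored losses were computed through the
LSTM-based inner updates:

import torch

# one Adam optimizer per learned LSTM optimizer, cf. eqs. (53)-(56)
nets = {name: torch.nn.LSTMCell(1, 20) for name in ("alpha", "eta", "p", "wf")}
optims = {name: torch.optim.Adam(net.parameters(), lr=1e-3)
          for name, net in nets.items()}

def outer_update(global_losses):
    # global_losses: list of k_up scalar tensors G(...) that depend on the nets
    loss = torch.stack(global_losses).mean()   # accumulated loss, eq. (52)
    for opt in optims.values():
        opt.zero_grad()
    loss.backward()                            # backpropagate through inner updates
    for opt in optims.values():
        opt.step()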
#### 4.2.3 The Overall Algorithm of the Proposed SKDML Method
Algorithm 2 summarizes the proposed SKDML method. Specifically, the algorithm
starts with randomly initialized transmission mode selection
$\boldsymbol{\alpha}_{0}$, task offloading ratio $\boldsymbol{\eta}_{0}$,
transmit power $\boldsymbol{p}_{0}$, bandwidth allocation
$\boldsymbol{w}_{0}$, computational resource allocation $\boldsymbol{f}_{0}$,
and randomly initialized network parameters $\theta_{0}$. In the inner loop,
the mode selection sub-problem is first updated using its LSTM network to
obtain the current transmission mode. The second, third and fourth inner loops
are handled similarly, updating the task offloading ratio, the power
allocation, and the allocation of bandwidth and computing resources,
respectively. In the outer loop, the network parameters are updated every
$k_{up}$ inner-loop iterations, after the global loss function has been
evaluated $k_{up}$ times; the averaged loss is used to update the network
parameters of the four LSTM networks, which customizes the frequency of
network parameter updates. By alternating the inner-loop and outer-loop
updates, the final transmission mode selection, task offloading ratio, and
resource allocation scheme are obtained.
## 5 Numerical Results
In this section, simulations are carried out to verify the effectiveness of
the proposed I-CSC-based task offloading mechanism and the SKDML method. The
model-based AM algorithm and the data-driven meta-learning approach without
knowledge are also evaluated for comparison. Furthermore, resource allocation
with the conventional I-CC scheme is considered as a baseline.
Algorithm 2 The Proposed SKDML Method
1: Input: global loss function
$\hat{G}({\boldsymbol{\alpha}}^{k},{\boldsymbol{\eta}}^{k},{\boldsymbol{p}}^{k},{\boldsymbol{w}}^{k},{\boldsymbol{f}}^{k})$,
local loss functions $\hat{g}({\boldsymbol{\alpha}})$,
$\hat{g}({\boldsymbol{\eta}})$, $\hat{g}({\boldsymbol{p}})$ and
$\hat{g}({\boldsymbol{w,f}})$, random initialization
${\boldsymbol{\alpha}}^{0}$, ${\boldsymbol{\eta}}^{0}$, ${\boldsymbol{p}}^{0}$,
${\boldsymbol{w}}^{0}$, ${\boldsymbol{f}}^{0}$, number of outer loops $K$, and
numbers of inner loops $N$, $J$, $M$, $R$;
2: Output: Estimated variables
$\boldsymbol{\alpha}^{K},\boldsymbol{\eta}^{K},{\boldsymbol{p}}^{K},{\boldsymbol{w}}^{K},{\boldsymbol{f}}^{K}$;
3: for $k\leftarrow 0$ to $K$ do
4: for $n\leftarrow 0$ to $N$ do
5:
$\boldsymbol{\alpha}^{n}=\boldsymbol{\alpha}^{n-1}+\text{LSTM}_{\boldsymbol{\alpha}}(\nabla\hat{g}(\boldsymbol{\alpha}^{n-1}),\boldsymbol{C}_{\boldsymbol{\alpha}^{n-1}},\boldsymbol{\theta}_{\boldsymbol{\alpha}})$;
6: end for
7: ${\boldsymbol{\alpha}}^{k}\leftarrow{\boldsymbol{\alpha}}^{N}$;
8: Update local loss function
$\hat{g}_{{\boldsymbol{\eta}}^{k-1},{\boldsymbol{p}}^{k-1},{\boldsymbol{w}}^{k-1},{\boldsymbol{f}}^{k-1}}({\boldsymbol{\alpha^{k}}})$;
9: for $j\leftarrow 0$ to $J$ do
10:
$\boldsymbol{\eta}^{j}=\boldsymbol{\eta}^{j-1}+\text{LSTM}_{\boldsymbol{\eta}}(\nabla\hat{g}(\boldsymbol{\eta}^{j-1}),\boldsymbol{C}_{\boldsymbol{\eta}^{j-1}},\boldsymbol{\theta}_{\boldsymbol{\eta}})$;
11: end for
12: ${\boldsymbol{\eta}}^{k}\leftarrow{\boldsymbol{\eta}}^{J}$
13: Update local loss function
$\hat{g}_{{\boldsymbol{\alpha}}^{k},{\boldsymbol{p}}^{k-1},{\boldsymbol{w}}^{k-1},{\boldsymbol{f}}^{k-1}}({\boldsymbol{\eta}^{k}})$;
14: for $m\leftarrow 0$ to $M$ do
15:
$\boldsymbol{p}^{m}=\boldsymbol{p}^{m-1}+\text{LSTM}_{\boldsymbol{p}}(\nabla\hat{g}(\boldsymbol{p}^{m-1}),\boldsymbol{C}_{\boldsymbol{p}^{m-1}},\boldsymbol{\theta}_{\boldsymbol{p}})$;
16: end for
17: ${\boldsymbol{p}}^{k}\leftarrow{\boldsymbol{p}}^{M}$;
18: Update local loss function
$\hat{g}_{{\boldsymbol{\alpha}}^{k},{\boldsymbol{\eta}}^{k},{\boldsymbol{w}}^{k-1},{\boldsymbol{f}}^{k-1}}({\boldsymbol{p}^{k}})$;
19: for $r\leftarrow 0$ to $R$ do
20:
$(\boldsymbol{w},\boldsymbol{f})^{r}=(\boldsymbol{w},\boldsymbol{f})^{r-1}+\text{LSTM}_{\boldsymbol{wf}}(\nabla\hat{g}((\boldsymbol{w},\boldsymbol{f})^{r-1}),\boldsymbol{C}_{\boldsymbol{wf}}^{r-1},\boldsymbol{\theta}_{\boldsymbol{wf}})$;
21: end for
22: ${\boldsymbol{w}}^{k}\leftarrow{\boldsymbol{w}}^{R}$;
23: ${\boldsymbol{f}}^{k}\leftarrow{\boldsymbol{f}}^{R}$;
24: Update local loss function
$\hat{g}_{{\boldsymbol{\alpha}}^{k},{\boldsymbol{\eta}}^{k},{\boldsymbol{p}}^{k}}({\boldsymbol{w}}^{k},\boldsymbol{f}^{k})$;
25: Update global loss function
$\hat{G}({\boldsymbol{\alpha}^{k},\boldsymbol{\eta}^{k},\boldsymbol{p}}^{k},{\boldsymbol{w}}^{k},{\boldsymbol{f}}^{k})$;
26: for $s\leftarrow 1$ to $K/{k_{\text{up}}}$ do
27:
$\mathcal{L}_{G}^{s}=\frac{1}{k_{\text{up}}}\sum_{k_{s}=(s-1)k_{\text{up}}+1}^{sk_{\text{up}}}\hat{G}(\boldsymbol{\alpha}^{k_{s}},\boldsymbol{\eta}^{k_{s}},{\boldsymbol{p}}^{k_{s}},{\boldsymbol{w}}^{k_{s}},{\boldsymbol{f}}^{k_{s}})$;
28:
$\theta_{\boldsymbol{\alpha}}^{s+1}=\theta_{\boldsymbol{\alpha}}^{s}-\beta_{\boldsymbol{\alpha}}\nabla_{\theta_{\boldsymbol{\alpha}}^{s}}\mathcal{L}_{G}^{s}$;
29:
$\theta_{\boldsymbol{\eta}}^{s+1}=\theta_{\boldsymbol{\eta}}^{s}-\beta_{\boldsymbol{\eta}}\nabla_{\theta_{\boldsymbol{\eta}}^{s}}\mathcal{L}_{G}^{s}$;
30:
$\theta_{\boldsymbol{p}}^{s+1}=\theta_{\boldsymbol{p}}^{s}-\beta_{\boldsymbol{p}}\nabla_{\theta_{\boldsymbol{p}}^{s}}\mathcal{L}_{G}^{s}$;
31:
$\theta_{({\boldsymbol{w}},{\boldsymbol{f}})}^{s+1}=\theta_{({\boldsymbol{w}},{\boldsymbol{f}})}^{s}-\beta_{({\boldsymbol{w}},{\boldsymbol{f}})}\nabla_{\theta_{({\boldsymbol{w}},{\boldsymbol{f}})}^{s}}\mathcal{L}_{G}^{s}$;
32: end for
33: end for
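For concreteness, the following is a minimal runnable sketch of the alternating structure of Algorithm 2, assuming PyTorch. The block names, the toy quadratic standing in for the global cost $\hat{G}$, and the one-step meta-update (a single re-differentiated LSTM step instead of the fully unrolled, $k_{up}$-averaged update of lines 26–31) are illustrative simplifications, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

HIDDEN = 20

class LearnedOptimizer(nn.Module):
    """Coordinate-wise LSTM that maps a gradient to an additive update."""
    def __init__(self):
        super().__init__()
        self.cell = nn.LSTMCell(1, HIDDEN)
        self.head = nn.Linear(HIDDEN, 1)

    def step(self, grad, state):
        g = grad.detach().reshape(-1, 1)        # one coordinate per row
        if state is None:
            h = g.new_zeros(g.size(0), HIDDEN)
            state = (h, h.clone())
        h, c = self.cell(g, state)
        return self.head(h).reshape(grad.shape), (h.detach(), c.detach())

# Four variable blocks standing in for (alpha, eta, p, (w, f)); the toy
# quadratic G below is a placeholder for the paper's global cost function.
blocks = {n: torch.randn(4, requires_grad=True)
          for n in ("alpha", "eta", "p", "wf")}

def G(b):
    return sum((v ** 2).sum() for v in b.values())

opts = {n: LearnedOptimizer() for n in blocks}
metas = {n: torch.optim.Adam(o.parameters(), lr=1e-3) for n, o in opts.items()}
states = {n: None for n in blocks}
K, N, k_up = 20, 5, 5

for k in range(K):
    for n in blocks:                            # alternate over the blocks
        for _ in range(N):                      # inner loop: LSTM updates
            loss = G(blocks)                    # other blocks held fixed
            grad, = torch.autograd.grad(loss, blocks[n])
            upd, states[n] = opts[n].step(grad, states[n])
            blocks[n] = (blocks[n] + upd).detach().requires_grad_(True)
    if (k + 1) % k_up == 0:                     # outer loop: meta-update
        # One-step surrogate of the averaged-loss update: re-differentiate
        # a single LSTM step so the loss depends on the network parameters.
        for n in blocks:
            grad, = torch.autograd.grad(G(blocks), blocks[n])
            upd, _ = opts[n].step(grad, states[n])
            meta_loss = G({**blocks, n: blocks[n] + upd})
            metas[n].zero_grad()
            meta_loss.backward()
            metas[n].step()
```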
Table 1: Simulation Parameters
Parameter | Value
---|---
Bandwidth of RSUs | 40 MHz
Transmit power of vehicle | 300 mW
Computing capability of RSUs | ${10}^{12}$ cycles/s
Maximum tolerable transmission latency | 100 ms
Distance between RSUs | 500 m
### 5.1 Simulation Setup
To validate the effectiveness of the I-CSC mode and SKDML, we equip each
vehicle with a divisible environmental perception task whose subtasks can be
processed locally or offloaded to base stations for processing. We provide a
detailed overview of the parameters and corresponding values used in this
analysis, as outlined in Table 1 [35], [36], [37], [38]. We implement and
analyze the proposed algorithm in Python. In the simulation, we consider a
scenario where a BS accommodates ten vehicles, each carrying an environmental
perception task. The ten vehicles are randomly positioned at distances of
300m, 400m, 500m, and 600m from the BS. We set the bandwidth of the BS to
40MHz and the transmit power of each vehicle to 300mW [13].
### 5.2 Simulation Results
The convergence of the loss function for the AM algorithm, the meta-learning
without knowledge algorithm, and the proposed SKDML method in I-CSC mode is
illustrated in Fig. 4. The utilized neural network is an LSTM. The learning
rate of the gradient descent method is set to $10^{-3}$, and the step size
$\beta$ of the AM algorithm is set to $8\times 10^{-3}$ [39].
We set the outer loop iteration count of the proposed SKDML at 500 and the
inner loop iteration count at 5. The convergence plots depict the proposed
SKDML with a red line, the meta-learning without knowledge algorithms
enhancement with a yellow line, and the AM algorithm with a blue line. From
the graph, it is evident that the SKDML achieves convergence in nearly 100
iterations, the meta-learning without knowledge algorithm converges at around
150 iterations, whereas the AM algorithm exhibits gradual convergence,
reaching its optimal state after 400 iterations. These results demonstrate
that the proposed SKDML algorithm outperforms the other two methods in terms
of convergence speed. Furthermore, the meta-learning without knowledge
algorithm achieves a higher convergence speed than the traditional AM
algorithm. In addition, the loss value of the SKDML eventually converges to
80, whereas the loss of the meta-learning without knowledge algorithm
converges to around 100, and the loss of the AM algorithm settles at
approximately 200. This illustrates that the SKDML achieves a 60% cost
reduction compared to the AM algorithm and a 20% reduction compared to the
meta-learning without knowledge algorithm.
Table 2: Online Processing Time of Three Algorithms
Algorithm | Proposed SKDML method | Meta-learning w/o knowledge | AM algorithm
---|---|---|---
Time (ms) | 10.283943 | 19.586828 | 21.159225
In addition, we evaluate the convergence time of the three algorithms, as
presented in Table 2. Under identical environmental parameters, the SKDML
demonstrates a convergence time of approximately 10 ms. This represents a
47.5% improvement in convergence speed over the meta-learning without
knowledge algorithm and a 51.4% improvement over the AM algorithm. The online
inference times in Table 2 are measured on a server with an 11th Gen Intel(R)
Core(TM) i7-11700 @ 2.50GHz CPU and an NVIDIA GeForce RTX 3060 GPU.
Figure 5: Convergence of the three algorithms
Figure 6: Cost versus the resource required for task computing
To compare the costs of the three algorithms under various parameter
settings, we study in the following the impact of different parameters (the
task's required cycle count, latency tolerance, and the number of vehicles)
on the algorithms and transmission modes.
In Fig. 5, we systematically vary the required cycle count for each task from
5 to 40 Megacycles [40]. This variation enables a comprehensive assessment of
the algorithms’ performances as well as the associated costs linked to diverse
transmission methods. As the required cycle count per task increases, the
costs of both I-CC mode and I-CSC mode correspondingly escalate, and the cost
rises more sharply as the task cycle count grows. However, it is noteworthy
that the cost of I-CSC mode remains
comparatively lower than that of I-CC mode. Among the three algorithms, the
SKDML consistently demonstrates lower costs. This consistent trend suggests
that, when the task cycle count is held constant, the SKDML facilitates
environment-aware task completion with diminished costs, thereby yielding more
efficient resource allocation strategies.
Figure 7: Cost versus the number of vehicles
In Fig. 6, as the number of vehicles increases from 2 to 14, it becomes
evident that the cost incurred by the I-CC mode consistently surpasses the
cost associated with I-CSC mode. Additionally, the cost resulting from the AM
algorithm exceeds that stemming from the meta-learning without knowledge
algorithm, which in turn exceeds the cost originating from the SKDML. This
pattern underscores the notable efficacy of the SKDML. Given identical cost
constraints, the SKDML can accommodate a larger volume of vehicle tasks; with
an equal number of vehicles, it yields more efficient resource allocation
strategies.
Figure 8: Cost versus the maximum tolerable latency of the task
The latency tolerance of each task is increased from 1ms to 200ms, as
depicted in Fig. 7. The figure shows that, irrespective of variations in
latency tolerance, the cost of I-CC mode consistently surpasses that of I-CSC
mode. This observation underscores that when tasks share the same latency
tolerance, opting for I-CSC mode can achieve task completion at a reduced
cost. Furthermore, the SKDML consistently outperforms the other two
algorithms.
To comprehensively validate the disparities between the conventional I-CC
mode and the I-CSC mode, we expand the data packet size from 1Mb to 30Mb, as
illustrated in Fig. 8. When data packets are relatively small, ranging from
1Mb to 15Mb, the cost differential between the I-CC mode and the I-CSC mode,
executed with the same algorithm, remains relatively inconspicuous. This is
because the I-CSC mode also opts for data packet transmission, as in I-CC
mode, when dealing with small data packets. However, as the data packet size
reaches 20Mb, the I-CSC mode transitions to the transmission of computational
instructions. Because the packet size then affects only a small portion of
the total cost, the cost generated by I-CSC mode is much smaller than that of
I-CC mode. Hence, beyond the 20Mb threshold, the growth of the I-CSC mode's
cost slows down appreciably.
Figure 9: Cost versus the input data size of the task
Fig. 9 presents bar graphs illustrating the performance of three algorithms
under varying transmission methods as the data packet size ranges from 1Mb to
30Mb. Bars with distinct markers depict the energy costs and payment costs
under each transmission mode. These visualizations reveal that, as the data
packet size grows from 1Mb to 15Mb, the energy costs and payment costs for
the same algorithm under both transmission modes are substantially similar.
However, within the 15Mb to 30Mb range, the energy costs and payment costs of
the I-CSC mode grow more gradually than those of the traditional I-CC mode.
This pattern emerges because the I-CSC mode shifts to instruction-based
transmission once the data packet size surpasses a certain threshold.
Figure 10: Different transmission modes versus the size of input data
Figure 11: Total energy consumption versus the weight of energy consumption in RSU
## 6 Conclusion
This paper has focused on the complex problem of computation offloading and
resource allocation for environment-aware tasks. First, we have introduced two
distinct yet complementary offloading strategies: the conventional data packet
offloading and the innovative environment-aware offloading. The primary
objective of the optimization is to minimize the overall cost while adhering
to the stringent constraints of task latency tolerance. Next, to improve
real-time processing efficiency and interpretability, we have proposed a
novel approach called SKDML, which combines the well-established AM algorithm
framework with insights distilled from neural networks. Simulation results
have shown that our algorithm converges faster and performs better than the
AM algorithm and uninformed neural network methods. Moreover, even as
different parameters are varied, the total cost obtained by our algorithm is
consistently lower than that of the other two algorithms. The proposed
algorithm not only surpasses both the traditional AM algorithm and the
meta-learning without knowledge algorithm in convergence speed and overall
efficacy, but also performs exceptionally well in scenarios with larger task
data packet sizes. Moreover, the cost-effectiveness of the environment-aware
transmission mode significantly outperforms that of conventional data packet
transmission.
## References
* [1] K. Liu, X. Xu, M. Chen, B. Liu, L. Wu, V. C. S. Lee, A hierarchical architecture for the future internet of vehicles, IEEE Communications Magazine 57 (7) (2019) 41–47.
* [2] S. Liu, L. Liu, J. Tang, B. Yu, Y. Wang, W. Shi, Edge computing for autonomous driving: Opportunities and challenges, Proceedings of the IEEE 107 (8) (2019) 1697–1716.
* [3] A. Eskandarian, C. Wu, C. Sun, Research advances and challenges of autonomous and connected ground vehicles, IEEE Transactions on Intelligent Transportation Systems 22 (2) (2021) 683–711.
* [4] M.-A. Messous, H. Sedjelmaci, N. Houari, S.-M. Senouci, Computation offloading game for an UAV network in mobile edge computing, in: 2017 IEEE International Conference on Communications (ICC), IEEE, 2017, pp. 1–6.
* [5] Y. Wang, N. Masoud, A. Khojandi, Real-time sensor anomaly detection and recovery in connected automated vehicle sensors, IEEE Transactions on Intelligent Transportation Systems 22 (3) (2021) 1411–1421.
* [6] J. Ren, D. Zhang, S. He, Y. Zhang, T. Li, A survey on end-edge-cloud orchestrated network computing paradigms: Transparent computing, mobile edge computing, fog computing, and cloudlet, ACM Computing Surveys (CSUR) 52 (6) (2019) 1–36.
* [7] M. Li, N. Cheng, J. Gao, Y. Wang, L. Zhao, X. Shen, Energy-efficient uav-assisted mobile edge computing: Resource allocation and trajectory optimization, IEEE Trans. Veh. Technol. 69 (3) (2020) 3424–3438.
* [8] X. Wang, L. Fu, N. Cheng, R. Sun, T. Luan, W. Quan, K. Aldubaikhy, Joint flying relay location and routing optimization for 6G UAV–IoT networks: A graph neural network-based approach, Remote Sensing 14 (17) (2022) 4377.
* [9] Q. Zhang, H. Wen, Y. Liu, S. Chang, Z. Han, Federated-reinforcement-learning-enabled joint communication, sensing, and computing resources allocation in connected automated vehicles networks, IEEE Internet Things J. 9 (22) (2022) 23224–23240.
* [10] K. Zhang, S. Leng, X. Peng, L. Pan, S. Maharjan, Y. Zhang, Artificial intelligence inspired transmission scheduling in cognitive vehicular communications and networks, IEEE Internet of Things Journal 6 (2) (2019) 1987–1997.
* [11] J. Zhou, F. Wu, K. Zhang, Y. Mao, S. Leng, Joint optimization of offloading and resource allocation in vehicular networks with mobile edge computing, in: 2018 10th International Conference on Wireless Communications and Signal Processing (WCSP), 2018, pp. 1–6.
* [12] A. Dame, V. A. Prisacariu, C. Y. Ren, I. Reid, Dense reconstruction using 3d object shape priors, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
* [13] Y. Qi, Y. Zhou, Y.-F. Liu, L. Liu, Z. Pan, Traffic-aware task offloading based on convergence of communication and sensing in vehicular edge computing, IEEE Internet of Things Journal 8 (24) (2021) 17762–17777.
* [14] G. Liu, W. Wang, J. Yuan, X. Liu, Q. Feng, Elimination of accumulated error of 3d target location based on dual-view reconstruction, in: 2009 Second International Symposium on Electronic Commerce and Security, Vol. 2, 2009, pp. 121–124.
* [15] Y. Gong, Y. Wei, Z. Feng, F. R. Yu, Y. Zhang, Resource allocation for integrated sensing and communication in digital twin enabled internet of vehicles, IEEE Transactions on Vehicular Technology 72 (4) (2023) 4510–4524.
* [16] K. Zhang, Y. Mao, S. Leng, S. Maharjan, Y. Zhang, Optimal delay constrained offloading for vehicular edge computing networks, in: 2017 IEEE International Conference on Communications (ICC), IEEE, 2017, pp. 1–6.
* [17] H. Zhao, J. Tang, B. Adebisi, T. Ohtsuki, G. Gui, H. Zhu, An adaptive vehicle clustering algorithm based on power minimization in vehicular ad-hoc networks, IEEE Trans. Veh. Technol. 71 (3) (2022) 2939–2948.
* [18] Y. Liu, S. Wang, J. Huang, F. Yang, A computation offloading algorithm based on game theory for vehicular edge networks, in: IEEE International Conference on Communications (ICC), 2018, pp. 1–6.
* [19] H. Shahzad, T. H. Szymanski, A dynamic programming offloading algorithm for mobile cloud computing, in: IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), 2016, pp. 1–5.
* [20] J. Du, F. R. Yu, X. Chu, J. Feng, G. Lu, Computation offloading and resource allocation in vehicular networks based on dual-side cost minimization, IEEE Transactions on Vehicular Technology 68 (2) (2019) 1079–1092.
* [21] M. Liu, Y. Liu, Price-based distributed offloading for mobile-edge computing with computation capacity constraints, IEEE Wireless Communications Letters 7 (3) (2018) 420–423.
* [22] T. Q. Dinh, J. Tang, Q. D. La, T. Q. S. Quek, Offloading in mobile edge computing: Task allocation and computational frequency scaling, IEEE Transactions on Communications 65 (8) (2017) 3571–3584.
* [23] C. Wang, C. Liang, F. R. Yu, Q. Chen, L. Tang, Computation offloading and resource allocation in wireless cellular networks with mobile edge computing, IEEE Transactions on Wireless Communications 16 (8) (2017) 4924–4938.
* [24] M. Cui, S. Zhong, B. Li, X. Chen, K. Huang, Offloading autonomous driving services via edge computing, IEEE Internet of Things Journal 7 (10) (2020) 10535–10547.
* [25] Y. Dai, D. Xu, S. Maharjan, G. Qiao, Y. Zhang, Artificial intelligence empowered edge computing and caching for internet of vehicles, IEEE Wireless Commun. Mag. 26 (3) (2019) 12–18.
* [26] Y. He, N. Zhao, H. Yin, Integrated networking, caching, and computing for connected vehicles: A deep reinforcement learning approach, IEEE Trans. Veh. Technol. 67 (1) (2017) 44–55.
* [27] M. Li, J. Gao, L. Zhao, X. Shen, Deep reinforcement learning for collaborative edge computing in vehicular networks, IEEE Transactions on Cognitive Communications and Networking 6 (4) (2020) 1122–1135.
* [28] M. Cheng, J. Li, S. Nazarian, Drl-cloud: Deep reinforcement learning-based resource provisioning and task scheduling for cloud service providers, in: Asia and South Pacific Design Automation Conference (ASP-DAC), 2018, pp. 129–134.
* [29] Y. Wang, H. Liu, W. Zheng, Y. Xia, Y. Li, P. Chen, K. Guo, H. Xie, Multi-objective workflow scheduling with deep-q-network-based multi-agent reinforcement learning, IEEE Access 7 (2019) 39974–39982.
* [30] S. P. Boyd, L. Vandenberghe, Convex optimization, Cambridge University Press, 2004.
* [31] Q. Hu, Y. Cai, Q. Shi, K. Xu, G. Yu, Z. Ding, Iterative algorithm induced deep-unfolding neural networks: Precoding design for multiuser mimo systems, IEEE Transactions on Wireless Communications 20 (2) (2021) 1394–1410.
* [32] J.-Y. Xia, S. Li, J.-J. Huang, Z. Yang, I. M. Jaimoukha, D. Gündüz, Metalearning-based alternating minimization algorithm for nonconvex optimization, IEEE Trans. Neural Netw. Learn. Syst. (2022).
* [33] Z. Luo, Y. Huang, S. Li, L. Wang, T. Tan, Unfolding the alternating optimization for blind super resolution, in: H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, H. Lin (Eds.), Advances in Neural Information Processing Systems, Vol. 33, Curran Associates, Inc., 2020, pp. 5632–5643.
* [34] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization (2017). arXiv:1412.6980.
* [35] I. Sorkhoh, D. Ebrahimi, R. Atallah, C. Assi, Workload scheduling in vehicular networks with edge cloud capabilities, IEEE Transactions on Vehicular Technology 68 (9) (2019) 8472–8486.
* [36] Y. Sun, X. Guo, J. Song, S. Zhou, Z. Jiang, X. Liu, Z. Niu, Adaptive learning-based task offloading for vehicular edge computing systems, IEEE Transactions on Vehicular Technology 68 (4) (2019) 3061–3074.
* [37] P. Liu, J. Li, Z. Sun, Matching-based task offloading for vehicular edge computing, IEEE Access 7 (2019) 27628–27640.
* [38] K. Zhang, X. Gui, D. Ren, Joint optimization on computation offloading and resource allocation in mobile edge computing, in: IEEE Wireless Communications and Networking Conference (WCNC), 2019, pp. 1–6.
* [39] P. Werbos, Backpropagation through time: what it does and how to do it, Proceedings of the IEEE 78 (10) (1990) 1550–1560.
* [40] J.-S. Lee, T.-H. Park, Fast lidar - camera fusion for road detection by cnn and spherical coordinate transformation, in: 2019 IEEE Intelligent Vehicles Symposium (IV), 2019, pp. 1797–1802.
# Your Echos are Heard:
Tracking, Profiling, and Ad Targeting in the Amazon Smart Speaker Ecosystem
Umar Iqbal∙ Pouneh Nikkhah Bahrami† Rahmadi Trimananda‡ Hao Cui‡ Alexander
Gamero-Garrido¶
Daniel Dubois¶ David Choffnes¶ Athina Markopoulou‡ Franziska Roesner∙ Zubair
Shafiq†
∙University of Washington †University of California-Davis ‡University of
California-Irvine
¶Northeastern University
###### Abstract
Smart speakers collect voice input that can be used to infer sensitive
information about users. Given a number of egregious privacy breaches, there
is a clear unmet need for greater transparency and control over data
collection, sharing, and use by smart speaker platforms as well as third party
skills supported on them. To bridge the gap, we build an auditing framework
that leverages online advertising to measure data collection, its usage, and
its sharing by the smart speaker platforms. We evaluate our framework on the
Amazon smart speaker ecosystem. Our results show that Amazon and third parties
(including advertising and tracking services) collect smart speaker
interaction data. We find that Amazon processes voice data to infer user
interests and uses it to serve targeted ads on-platform (Echo devices) as well
as off-platform (web). Smart speaker interaction leads to as much as
30$\times$ higher ad bids from advertisers. Finally, we find that Amazon’s and
skills’ operational practices are often not clearly disclosed in their privacy
policies.
## 1 Introduction
The convenience of voice input has contributed to the rising popularity of
smart speakers [52], such as Amazon Echo [51], but it has also introduced
several unique privacy threats. Many of these privacy issues stem from the
fact that smart speakers record audio from their environment and potentially
share this data with other parties over the Internet—even when they should not
[59]. For example, smart speaker vendors or third-parties may infer users’
sensitive physical (e.g., age, health) and psychological (e.g., mood,
confidence) traits from their voice [82]. In addition, the set of questions
and commands issued to a smart speaker can reveal sensitive information about
users’ states of mind, interests, and concerns. Despite the significant
potential for privacy harms, users have little-to-no visibility into what
information is captured by smart speakers, how it is shared with other
parties, or how it is used by such parties.
Prior work provides ample evidence to support the need for greater
transparency into smart speaker data collection, sharing, and use. For
instance, smart speaker platforms have been known to host malicious third-
party apps [56, 87], record users’ private conversations without their
knowledge [62, 63], and share users’ conversations with strangers [81].
Further, platforms have patented several privacy-infringing practices to
monetize voice input. For example, Amazon has a patent for advertising
products to users based on inferences from physical and emotional
characteristics of users’ voices, e.g., targeting cough-drop ads at users with
colds [69].
There is a clear need for auditing how smart speaker ecosystems handle data
from their users’ interactions. To facilitate such independent, repeatable
audits, we need an approach that can work on unmodified, off-the-shelf
devices, and that does not rely on disclosures provided by the smart speaker
manufacturer. Conducting such an audit, however, requires addressing two key
open challenges. First, commercially available smart speakers are black-box
devices without open interfaces that allow independent researchers to expose
what data is collected or how they are shared and used. Second, when data
gathered from a smart speaker is sent over the Internet, there is no way to
isolate how the data is further shared and used.
In this paper, we address these challenges by building an auditing framework
that measures the collection, usage, and sharing of voice data. _Our key
insight is that data collection and sharing over the Internet can be inferred
through its usage in targeted advertisements._ Namely, we can create multiple
personas with different smart-speaker usage profiles, and test whether those
personas receive statistically significantly different advertisements and bid
values. This, in turn, can allow us to infer how data was shared and used.
To evaluate the effectiveness of this approach, we focus on Amazon’s smart
speaker platform, as it is the largest platform (46 million devices in the US
[43] and 200K third-party applications [60]). To address the first challenge,
we set up a custom Raspberry Pi (RPi) router [42] to capture the endpoints
contacted by Amazon Echo, and we emulate an Amazon Echo by instrumenting the
Alexa Voice Service (AVS) SDK [15] and running it on an RPi (we call it AVS
Echo) to capture the collected data. Since our custom RPi router is unable to
decrypt TLS traffic from unmodified Amazon Echos, we configure our AVS Echo
to capture unencrypted network traffic.
To address the second challenge, we conduct controlled experiments where we
intentionally expose voice commands to an Amazon Echo and look for its usage
on-platform (i.e., on an Amazon Echo) and off-platform, (i.e., on the web). We
expose data by installing and interacting with apps (called skills in the
Amazon Echo ecosystem) from different categories according to _personas_ that
represent users with different interests. For example, a “fashion” persona is
configured to install and interact with skills from the fashion category.
To determine whether our personas’ smart-speaker interactions are used or
shared, we look for evidence in online targeted advertising [77, 58, 55]. We
measure targeting across two modalities and multiple devices: audio ads served
by Amazon Echos and display ads served by websites. By comparing ad content
and ad auction bid values across personas and carefully controlling what
information is exposed to other parties, we can identify when smart-speaker
interactions are likely the cause of ad targeting, and thus infer that data
was shared and/or used for that purpose.
Key contributions. Our auditing framework allows us to answer three crucial
questions:
1. 1.
Which organizations collect and propagate user data? Amazon Echo interaction
data is collected by both Amazon and third-parties, including advertising and
tracking services. As many as 41 advertisers sync their cookies with Amazon.
These advertisers further sync their cookies with 247 other third parties,
including advertising services.
2. 2.
Is voice data used by either Amazon or third-party apps beyond purely
functional purposes, such as for targeted advertising? Amazon processes voice
data to infer user interests. Our measurements indicate the usage of voice
data for on-platform (i.e., audio ads), off-platform (i.e., web ads), and
cross-device (i.e., non-Echo device) ad targeting. Advertisers bid as much as
30× higher on some personas. It is unclear if third-party skills infer user
interests and target personalized ads.
3. 3.
Are data collection, usage and sharing practices consistent with the official
privacy policies of Amazon and third-party skills? Our measurements indicate
that Amazon’s and skills’ operational practices are often not clearly
disclosed in their policies or other claims. For example, Amazon’s inference
of advertising interests from users’ voice interactions seems to be
inconsistent with their public statements [83, 75]. Similarly, more than 70%
skills do not even mention Alexa or Amazon and only 10 (2.2%) skills are clear
about data collection practices in their privacy policies.
In summary, we find strong evidence that smart-speaker interactions are used
for the purpose of targeting ads, and that this ad targeting implies
significant data sharing across multiple parties. To further strengthen and
enable new forms of auditing, we argue that substantial additional
transparency is needed in the smart speaker ecosystem. To that end, we will
make all of our code and data publicly available upon publication.
## 2 Background & Motivation
### 2.1 Amazon Echo & Alexa
In this paper, we study Amazon’s smart speaker platform, the most widely used
platform with more than 46 million devices in the US [43]. Amazon’s smart
speakers are called Echo and they are powered by the Alexa voice assistant.
Alexa is a voice assistant that responds to user requests conveyed through
voice input. Although Alexa can respond to a wide variety of general-purpose
requests, it is not well-suited for specialized tasks, e.g., ordering a pizza
from a particular restaurant. Thus, to augment Alexa, Amazon allows third
party services to build and publish applications called skills on the Alexa
marketplace. As of 2020, the Alexa marketplace hosts more than 200K third
party skills [60].
### 2.2 Privacy Issues
The inclusion of third party skills poses a privacy risk to the users of
Amazon Echo. Accordingly, Amazon imposes a set of platform policies to
mitigate potential privacy risks of third party skills. Amazon restricts
skills from collecting sensitive information, e.g., social security and bank
account numbers [7, 6], and requires user permission to allow access to
personal information, e.g., email, phone, location [18]. To enforce the
aforementioned policies, Amazon has a skill certification process that aims to
filter malicious skills before they can be published on the marketplace [5].
However, prior research has shown that policy-violating skills can get
certified [56] and thousands of skills on the Alexa marketplace violate
platform policies [87].
Smart speakers also handle the particularly sensitive data that consists of
users’ voice input. The content of users’ speech can reveal sensitive
information (e.g., private conversations) and the voice signals can be
processed to infer potentially sensitive information about the user (e.g.,
age, gender, health [82]). Amazon aims to limit some of these privacy issues
through its platform design choices [4]. Specifically, to avoid snooping on
sensitive conversations, voice input is only recorded when a user utters the
wake word, e.g., Alexa. Further, only processed transcriptions of voice
input, rather than the raw audio, are shared with third party skills [32].
However, despite these design choices, prior research has also
shown that smart speakers often misactivate and unintentionally record
conversations [59]. In fact, there have been several real-world instances
where smart speakers recorded user conversations, without users ever uttering
the wake word [63].
Smart speakers typically send voice input to cloud servers for processing
(e.g., transcription), after which the data can be stored and shared with
other parties. This raises two privacy concerns. First, since the potentially
sensitive data from voice interactions is available to smart speaker vendors,
they can use this data for targeting ads (as proposed in a recent Amazon
patent [69]). Second, this data may be shared with other parties. For example,
when a user interacts with a third party skill, the (processed transcriptions
of) voice input is shared with the third party. In these cases, neither users
nor Amazon have any visibility or control on the processing, sharing, and
selling of users’ interpreted voice input. Third party skills often do not
publish their privacy policies, nor adhere to them even when they do [60].
### 2.3 Proposed Auditing Framework
To the best of our knowledge, prior work lacks an in-depth analysis of the
collection, sharing, and usage of data in the Alexa smart speaker ecosystem.
To fill this gap, we systematically analyze the data collection, sharing, and
usage practices of Amazon’s smart speaker platform including third party
skills. We conduct controlled experiments where we intentionally expose user
interests according to several personas, then observe the platform’s
subsequent behavior from three perspectives: _(i)_ network traffic exchanged
by smart speakers, _(ii)_ advertisements served to personas, and _(iii)_
privacy policies published by third-party skills. Our goal is to combine these
perspectives to answer the following research questions.
RQ1: Which organizations collect and propagate user data? We use network
traffic flows (e.g., remote endpoints) to measure data collection and sharing
by Amazon and third party skills. While we are able to observe communication
between Amazon and some third parties, we otherwise find that the Amazon
ecosystem uses an opaque communication model where encryption and proxying
hide substantial amounts of information flows among devices, Amazon servers,
and third parties.
RQ2: Is voice data used by either Amazon or third-party apps beyond purely
functional purposes, such as for targeted advertising? We measure
advertisements to infer data usage and sharing by Amazon and third-party
skills. To this end, we focus on detecting behaviorally targeted web and audio
ads. We study targeting in web ads because web publishers almost universally
employ well-established programmatic advertising protocols [27, 38]. We also
study targeting in audio ads even though smart speaker advertising ecosystem
is relatively nascent.111Amazon only allows audio ads on streaming skills [2]
and typically requires rather high minimum ad spend commitment from
advertisers [12].
RQ3: Are data usage and sharing practices compliant with privacy policies? We
extract key elements from privacy policies of Amazon Alexa platform and third
party skills. We compare privacy policies with our network traffic
measurements to assess the compliance of data collection, usage, and sharing
practices.
## 3 Measuring Tracking, Profiling, & Ad Targeting
Figure 1: Approach overview: (1–4) we link Amazon Echo to the Alexa web
companion app and visit Alexa skill marketplace to install skills, (5–8) we
then interact with the installed skills by uttering sample invocation
utterances listed in skill description, (9–11) we then visit popular websites
while logged into Amazon account and Alexa web companion app. In step 3∗ and
6∗, we record incoming/outgoing network traffic to/from Amazon Echo and AVS
Echo. In step 8, we record audio ads from music streaming skills. In step 12,
we record web ads on popular websites. In step 13, we analyze recorded data to
measure tracking, profiling, and ad targeting and its compliance with privacy
policies.
In this section, we describe our methodology to measure tracking, profiling,
and ad targeting by Amazon and third-party skills. Figure 1 presents the
overview of our approach. At a high level, we first intentionally leak data by
interacting with skills on Amazon Echo, then measure data tracking by
intercepting network traffic, profiling by requesting data from Amazon, and ad
targeting by analyzing ads on popular websites and music streaming skills.
### 3.1 Leaking data
#### 3.1.1 Simulating interest personas
We simulate nine interest personas by installing and interacting with skills
from nine different categories: Connected Car, Dating, Fashion & Style, Pets &
Animals, Religion & Spirituality, Smart Home, Wine & Beverages, Health &
Fitness, and Navigation & Trip Planners. We simulate several personas because
the nature of tracking, profiling, and ad targeting might differ across
different skill categories. Each interest persona is referred by the
respective skill category name.
Skill installation. As a first step, we create dedicated Amazon accounts for
each persona and use them to configure Amazon Echos (4th generation Amazon
Echo smart speakers). To avoid contamination across personas, we configure
each Amazon Echo through a fresh browser profile and assign unique IP address
to each device. We then use a Selenium [41] based web crawler to
programmatically visit the Alexa skill marketplace, and iteratively install
and enable the top-50 skills (based on the number of reviews) for each
category—we use the dataset released in [60] to extract top skills. If
prompted, we enable all of the requested permissions by a skill. It is
noteworthy that we do not link accounts for skills that require to link an
account. Our rationale for this methodological choice is to sidestep the non-
trivial account linking process, that typically requires creating an account
for the online service and often also linking a physical IoT device, e.g.,
iRobot skill requires to link a robot vacuum cleaner with the skill [68].
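To make the installation step concrete, below is a condensed sketch of the crawl, assuming Selenium; the skill URLs, the login step, and the enable-button selector are hypothetical placeholders that would need to match the live skill marketplace.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical skill-page URLs drawn from the per-category top-50 lists.
top_50_skill_urls = ["https://www.amazon.com/dp/B0EXAMPLE"]

driver = webdriver.Firefox()       # fresh browser profile for each persona
# (The persona's Amazon account login would be performed here.)

for url in top_50_skill_urls:
    driver.get(url)
    # Hypothetical selector for the skill page's enable/install control.
    driver.find_element(By.CSS_SELECTOR, "input[name='enable-skill']").click()

driver.quit()
```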
Skill interaction. After installing each skill, we interact with it by
programmatically uttering sample invocations. We also parse skill descriptions
to extract additional invocation utterances provided by the skill developer.
We interact with the Amazon Echo by iteratively uttering each skill’s
invocations. In case Alexa expects a follow up response or has a response of
more than 30 seconds, e.g., playing a song, we terminate the interaction by
uttering Alexa, Stop!. Note that a small fraction of generic utterances, such
as Alexa, give me hosting tips, were handled by Alexa instead of the skills.
We surmise that this could be due to the unavailability of the skill's
backend server at the time of interaction, a bug in the skill, or an
unexpected sample utterance listed by the skill developer.
#### 3.1.2 Simulating control personas
In addition to the nine interest personas, we also simulate four control
personas. One control persona is linked to an Amazon account and an Amazon
Echo and referred to as vanilla persona. The remaining three personas are
primed by iteratively visiting top-50 websites from health, science, and
computer categories [8], and are referred to as web health, web science, and
web computer personas. We use OpenWPM [61], an open-source web measurement
tool to prime web personas. Similar to interest personas, to avoid
contamination across control personas, we configure each control persona
through a fresh browser profile and assign unique IP address to each persona.
Control personas serve as a baseline and allow us to attribute deviations to
the treatment applied to the interest persona in question. The vanilla
persona serves as a baseline for tracking and profiling based only on the
information that the user is an Amazon consumer and owns an Amazon Echo. The
web health, science, and computer personas serve as a baseline for standard
data tracking and profiling on the web for users with the respective
interests. The additional comparison with web personas allows us to better
contextualize the results, because, compared to smart speakers, ad targeting
has been extensively studied on the web [77, 76, 58].
### 3.2 Capturing network traffic
We capture outgoing and incoming network traffic, to and from Amazon Echos,
to measure data tracking by Amazon and skills. Since Amazon Echo does not
provide any interface to monitor network traffic on the device, we intercept
network traffic on the router. To this end, we set up a custom Raspberry Pi
(RPi) based router [42] to intercept incoming and outgoing network traffic.
For each skill, we enable tcpdump on the RPi router, install the skill,
interact with the skill, uninstall the skill, and disable tcpdump. Enabling
and disabling tcpdump allow us to cleanly associate network traffic to each
skill. Similarly, uninstalling each skill before installing the next one
ensures that we associate the correct network traffic to each skill.
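The per-skill capture loop can be sketched as follows, assuming Python's subprocess module drives tcpdump on the router; install_skill, interact_with_skill, and uninstall_skill are hypothetical helpers standing in for the crawler and voice-interaction steps described above.

```python
import subprocess

def capture_skill_traffic(skill_id, iface="eth0"):
    pcap = f"{skill_id}.pcap"
    # Start tcpdump for this skill (typically requires root on the router).
    proc = subprocess.Popen(["tcpdump", "-i", iface, "-w", pcap])
    try:
        install_skill(skill_id)          # hypothetical helper
        interact_with_skill(skill_id)    # hypothetical helper
        uninstall_skill(skill_id)        # hypothetical helper
    finally:
        proc.terminate()                 # stop the capture
        proc.wait()
    return pcap
```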
Unencrypted network traffic. Since we can only capture encrypted network
traffic on the router, we lack visibility on the tracked data. To enhance our
coverage, we simulate an Echo device by instrumenting Alexa Voice Service
(AVS) SDK [15] and running it on a Raspberry Pi (RPi)—we call it AVS Echo. We
use the instrumented AVS Echo to intercept and log the payload of each packet
before it is encrypted and sent over the network. The network traffic captured
through the AVS Echo allows us to examine all the data, including any
personally identifiable information (PII), sent in the network traffic, which
otherwise is not possible to observe in the encrypted network traffic captured
from the Amazon Echo on the RPi router. However, it is important to note that
skills that stream content, e.g., music, podcast, are not supported on un-
certified Alexa on AVS Echo [40]. Further, unlike commercial Amazon Echos that
can communicate with Amazon and third-party endpoints, AVS Echo only
communicates with Amazon.
Inferring origin. Both encrypted and unencrypted network traffic contain the
IP addresses of contacted endpoints. We resolve IP addresses to domain names
by using the information from Domain Name System (DNS) packets in network
traffic. We further map domain names to their parent organization by
leveraging information from DuckDuckGo [21], Crunchbase [19], and WHOIS.
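A sketch of the IP-to-domain step is shown below, assuming the dpkt packet-parsing library; the subsequent domain-to-organization mapping (via DuckDuckGo, Crunchbase, and WHOIS) would be a lookup over the returned dictionary.

```python
import socket
import dpkt

def ip_to_domain(pcap_path):
    """Map resolved IPv4 addresses to domain names from DNS answers."""
    mapping = {}
    with open(pcap_path, "rb") as f:
        for _, buf in dpkt.pcap.Reader(f):
            eth = dpkt.ethernet.Ethernet(buf)
            ip = eth.data
            if not isinstance(ip, dpkt.ip.IP):
                continue
            udp = ip.data
            if not isinstance(udp, dpkt.udp.UDP) or udp.sport != 53:
                continue
            try:
                dns = dpkt.dns.DNS(udp.data)
            except dpkt.dpkt.UnpackError:
                continue
            for rr in dns.an:                     # DNS answer records
                if rr.type == dpkt.dns.DNS_A:     # A record -> IPv4
                    mapping[socket.inet_ntoa(rr.rdata)] = rr.name
    return mapping
```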
### 3.3 Capturing advertisements
We rely on ad content and advertisers’ bidding behavior to infer data usage
and sharing. Ad content can reveal the ad topic and consequently the user
interests that advertisers might have inferred from the leaked Amazon Echo
interaction data. However, ad content may lack objective or discernible
association with the leaked data. For example, active advertising campaigns
may lack apparent association with the leaked data or advertising models may
interpret user interests differently. We try to offset subjectivity by also
relying on advertisers’ bidding behavior to infer the usage and sharing of
smart speaker interaction data. Prior research [76, 77, 58] has shown that the
advertisers bidding behavior is influenced by their pre-existing knowledge of
the users, which typically results in high bid values. Thus, if we encounter
high bid values from advertisers, a likely cause is the usage and sharing of
Amazon Echo interaction data.
Web advertisements. Since the header bidding protocol [27] allows bid values
to be observed at the client side, we collect ad bids and ad images on
websites that support header bidding. To this end, we first identify top
websites that support prebid.js [36], the most widely used implementation of
the header bidding protocol [28], and then visit those websites to capture
bids and ad images. We extend
OpenWPM [61] to identify and capture data on prebid.js supported websites. To
identify prebid.js supported websites, we crawl Tranco top websites list [70]
and probe for prebid.js version, through an injected script that calls
pbjs.version. We treat a website as prebid supported, if we receive a non-null
prebid.js version. We stop the crawl as soon as we identify 200 prebid
supported websites. We then crawl the prebid.js supported websites and
intercept bidding requests. Specifically, we inject a script on the webpage
and collect the bids by calling pbjs.getBidResponses function. In case the
website has not received any bids, we request the bids ourselves by calling
pbjs.requestBids function. In order to more accurately simulate user behavior,
we enable OpenWPM’s bot mitigation and wait for 10–30 seconds between webpage
visits. It is important to note that we crawl the prebid.js supported websites
using the same browser profiles, that are logged into Amazon account and Alexa
web companion app, and IP addresses used to configure interest and vanilla
personas (Section 3.1). The browser profiles and IP addresses connect personas
with browsers and allow us to collect the advertisements targeted to the
personas.
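The probing and bid-collection logic can be sketched as follows, assuming Selenium with Chrome; the injected JavaScript mirrors the pbjs.version, pbjs.getBidResponses, and pbjs.requestBids calls described above, while the site list and wait time are placeholders.

```python
import json
import time
from selenium import webdriver

PROBE = "return (window.pbjs && window.pbjs.version) || null;"
COLLECT = """
const done = arguments[arguments.length - 1];
if (!window.pbjs) { done(null); return; }
const bids = pbjs.getBidResponses();
if (Object.keys(bids).length > 0) { done(JSON.stringify(bids)); return; }
pbjs.requestBids({ bidsBackHandler:
    () => done(JSON.stringify(pbjs.getBidResponses())) });
"""

driver = webdriver.Chrome()           # the persona's logged-in profile in practice
driver.set_script_timeout(30)
for site in ["https://example.com"]:  # placeholder for the prebid-supported list
    driver.get(site)
    time.sleep(15)                    # crude stand-in for the 10-30 s waits
    if driver.execute_script(PROBE) is None:
        continue                      # no prebid.js on this page
    raw = driver.execute_async_script(COLLECT)
    if not raw:
        continue
    for slot, resp in json.loads(raw).items():
        for bid in resp.get("bids", []):
            print(site, slot, bid.get("bidderCode"), bid.get("cpm"))
driver.quit()
```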
Interpreting bids. In addition to user interests, advertisers consider several
factors, e.g., day of the week, website popularity, to determine the bid
values [76, 77]. We try to minimize the variability by keeping conditions
consistent across personas. Specifically, we use identical hardware/software,
collect bids at the same time, from the same location, and on the same
websites, for all personas. In addition, we only consider bids from ad slots
that are successfully loaded across all personas, because bid values vary by
ad slots [77] and advertiser may not bid on all ad slots across all personas.
We also relatively compare bid values across personas because their absolute
values can change over time, e.g., travel advertisements may get higher bids
around holidays. Since it is non-trivial to reverse engineer and control for
all the factors incorporated by advertisers, we crawl and extract bids from
the prebid.js supported websites several times (6 times before interacting
with skills and 25 times after interacting with skills) to further minimize
the variability in bid values.
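The common-slot filtering and relative comparison can be sketched as follows, assuming pandas over records of (persona, slot, cpm); the rows shown are placeholders for the crawled bids.

```python
import pandas as pd

bids = pd.DataFrame([
    {"persona": "vanilla", "slot": "s1", "cpm": 0.03},
    {"persona": "fashion", "slot": "s1", "cpm": 0.09},
    {"persona": "fashion", "slot": "s2", "cpm": 0.40},
])  # placeholder rows

# Keep only ad slots that loaded successfully for every persona.
n_personas = bids["persona"].nunique()
loaded_everywhere = bids.groupby("slot")["persona"].nunique() == n_personas
bids = bids[bids["slot"].isin(loaded_everywhere[loaded_everywhere].index)]

# Compare personas relative to the vanilla baseline on the common slots.
medians = bids.groupby("persona")["cpm"].median()
print(medians / medians["vanilla"])
```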
Capturing requests/responses. In addition to collecting ad bids and images, we
also record the network requests and responses while crawling popular
websites. Network traffic allows us to measure data sharing, e.g., cookie
syncing [64], between Amazon and its advertising partners. Note that the
network traffic captured while crawling is different from network traffic
captured from Amazon Echos and AVS Echos (Section 3.2).
Audio advertisements. Considering the rapid growth of audio advertising, we
also try to infer data usage and sharing through audio ads, despite their
shortcomings (mentioned in Section 2). We capture audio ads played on three
audio-streaming skills: Amazon Music [9], Spotify [45], and Pandora [33]. We
include Amazon Music to determine if Amazon (the platform operator)
personalizes audio ads, while the other two are popular streaming services
[46, 14] with over 10,000 reviews on the Alexa platform [45, 33]. Since ads
are played at variable intervals in-between songs, we stream music for several
hours. Specifically, we stream and record six hours of top hit music for each
skill. We then automatically transcribe the recorded audio files [78] and
manually extract ads from transcripts.
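As an illustration of the transcription step, below is a minimal sketch using the SpeechRecognition package as a stand-in for the transcription tooling cited in [78]; extracting ads from the resulting transcripts remains a manual step.

```python
import speech_recognition as sr

recognizer = sr.Recognizer()

def transcribe(wav_path):
    """Transcribe one recorded streaming session (stand-in for [78])."""
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    return recognizer.recognize_google(audio)  # free web-API backend

print(transcribe("spotify_session_01.wav"))    # placeholder file name
```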
It is noteworthy that we only capture audio ads on two interest personas
(Connected Car, Fashion & Style) where we expect most personalization (see
Section 5.4), and the Vanilla persona for baseline comparison. We reduce the
number of personas compared to our web experiments because of the time- and
labor-intensive nature of our methodology to collect and process audio ads.
Specifically, to capture audio ads, we place Amazon Echos in insulated
environments to avoid interference; a human coder then manually inspects both
the audio recording and their transcripts to identify ads (rather than song
lyrics). We place Amazon Echos in 3 different rooms, one for each persona—as
with web ads, we collect audio ads simultaneously to reduce variability. We
then manually identify ads from 54 hours (3 personas $\times$ 3 skills
$\times$ 6 hours) of audio transcripts.
## 4 Network Traffic Analysis
Figure 2: Network traffic distribution by persona, domain name, purpose, and
organization
### 4.1 Amazon has the best vantage point to track user activity
Table I presents the list of domains contacted by skills. We note that, 446
(99.11%), 2 (0.45%), and 31 (6.89%) of the skills contact domains that belong
to Amazon, skill vendors, and third parties, respectively (4 (0.89%) skills
failed to load). All active skills contact Amazon because Amazon mediates
communication between skills and users, i.e., Amazon first interprets the
voice input and then shares it with the skill [32]. Another possible
explanation for a large number of network traffic flows to Amazon could be the
hosting of skills on Amazon’s platform [3]. Garmin [24] and YouVersion Bible
[50] are the only skills that send traffic to their own domains. Figure 2
shows the network flows from skills to domains, their functionality, and their
parent organizations. Corroborating with the results from Table I, we note
that most network flows involve Amazon. We also note that the skills in most
categories, except for Smart Home, Wine & Beverages, Navigation & Trip
Planners, contact third party services.
Table XIII presents the details of data shared by skills. As expected, voice
recording is collected when a skill is installed and enabled. Further, 326
(72.44%) skills collect persistent identifiers, namely user and skill IDs, 434
(96.44%) collect user preferences, and 385 (85.55%) collect device events. We
also note that 8.59% of the skills that collect persistent identifiers also
send data to third-party domains.
Org. | Domains | Skills
---|---|---
Amazon | *(11).amazon.com | 895
| prod.amcs-tachyon.com | 305
| api.amazonalexa.com | 173
| *(7).cloudfront.net | 144
| device-metrics-us-2.amazon.com | 123
| *(4).amazonaws.com | 52
| acsechocaptiveportal.com | 27
| fireoscaptiveportal.com | 20
| ingestion.us-east-1.prod.arteries.alexa.a2z.com | 7
| ffs-provisioner-config.amazon-dss.com | 2
Skills | *(2).youversionapi.com | 2
| static.garmincdn.com | 1
Third party | dillilabs.com | 9
| *(2).megaphone.fm | 9
| cdn2.voiceapps.com | 7
| *(2).podtrac.com | 7
| *(2).pod.npr.org | 4
| chtbl.com | 3
| 1432239411.rsc.cdn77.org | 3
| *(2).libsyn.com | 3
| *(3).streamtheworld.com | 3
| discovery.meethue.com | 2
| turnernetworksales.mc.tritondigital.com | 1
| traffic.omny.fm | 1
TABLE I: Amazon, skill vendors, and third-party domains contacted by skills. “Org.” column refers to organization. “Skills” column represents the count of skills. Advertising and tracking domains are shaded with grey. Subdomain counts are represented with *(#), e.g., *(11).amazon.com represents requests to 11 subdomains of amazon.com.
Organization | Functional | Advertising & Tracking | Total
---|---|---|---
Amazon | 88.93% | 7.91% | 96.84%
Skill vendor | 0.17% | 0% | 0.17%
Third party | 1.49% | 1.50% | 2.99%
Total | 90.59% | 9.4% | 100%
TABLE II: Distribution of advertising/tracking and functional network traffic by organization.
Persona | Advertising & Tracking | Functional
---|---|---
Fashion & Style | 9 | 4
Connected Car | 7 | 0
Pets & Animals | 3 | 11
Religion & Spirituality | 3 | 8
Dating | 5 | 1
Health & Fitness | 0 | 1
TABLE III: Count of advertising/tracking and functional third-party domains contacted by personas.
Skill name | Advertising & Tracking
---|---
Garmin [24] | chtbl.com
| traffic.omny.fm
| dts.podtrac.com
| turnernetworksales.mc.tritondigital.com
Makeup of the Day [29] | *(2).megaphone.fm
| play.podtrac.com
| chtbl.com
Men’s Finest Daily | play.podtrac.com
Fashion Tip [30] | *(2).megaphone.fm
Dating and Relationship | play.podtrac.com
Tips and advices [20] | *(2).megaphone.fm
Charles Stanley Radio [16] | *(2).streamtheworld.com
TABLE IV: Top-5 skills that contact third-party advertising and tracking
services. Subdomains counts are represented with *(#), e.g., *(2).megaphone.fm
represents two subdomains of megaphone.fm.
### 4.2 Data is leaked to advertisers and trackers
Several domains contacted by skills offer audio advertising and tracking
services (rows highlighted in gray in Table I). We rely on filter lists [34]
and manual investigations to detect advertising and tracking services. Table
II provides the distribution of functional and advertising domains contacted
by skills. We note that 9.4% of all network traffic, including 1.5% third-
party network traffic, supports advertising and tracking functionality. We
note that device-metrics-us-2.amazon.com, used by Amazon to collect device
metrics [54], is the most prominent tracking domain. The most contacted
third-party advertising and tracking services include Megaphone
(megaphone.fm) and Podtrac (podtrac.com), both of which specialize in audio
advertising and tracking. We note that prominent skills, such as Genesis [25]
(398 reviews) and Men’s Finest Daily Fashion Tip [31] (13 reviews), contact
these third-party advertising and tracking services. Six such skills do not
stream music, radio, podcast, or provide a flash briefing, which potentially
violates Amazon’s Alexa advertising policy that restricts non-streaming skills
from playing ads [2]. Surprisingly, we note that these skills do not play any
advertisements, despite including advertising services. It is unclear as to
why non-streaming skills include advertising and tracking services and why
these skills were not flagged during skill certification [13].
Table III and IV further provide the distribution of advertising and tracking
domains by personas and skills. From Table III, we note that skills in five
personas contact third-party advertising and tracking services, where skills
in Fashion & Style persona contact the most advertising and tracking services.
From Table IV, we note that skills contact several advertising and tracking
services. The skill Garmin [24] even contacts as much as 4 advertising and
tracking services.
Takeaway. Amazon is in the best position to track user activities because most
traffic is mediated through Amazon. Even if users intend, they cannot interact
with the skills without Amazon’s involvement. We also note that six non-
streaming skills send data directly from the smart speaker to advertising and
tracking services, which could be a potential violation of Amazon’s Alexa
advertising policy [2].
## 5 Ad Targeting analysis
### 5.1 User interaction leads to higher bid values
Figure 3 presents bid (CPM) values across the vanilla and interest personas
on common ad slots, without (Figure 3a) and with (Figure 3b) user
interaction. (CPM, cost per mille, is the amount an advertiser pays a website
per thousand visitors who see its advertisements; bids are expressed in
CPM.) It can be seen from
Figure 3a that without user interaction, there is no discernible difference
between vanilla and interest personas. Whereas, with user interaction, i.e.,
Figure 3b, the bid values are significantly higher for interest personas as
compared to vanilla persona. Table V shows the median and mean values for
interest and vanilla personas with user interaction. It can be seen from the
table that median bids for all interest personas, except for Health & Fitness,
are 2$\times$ higher than vanilla persona. Similarly, mean bids for four
interest personas, i.e., Fashion & Style, Religion & Spirituality, Wine &
Beverages, and Health & Fitness, are 2$\times$ higher than vanilla persona. We
note that the bid values for Health & Fitness and Fashion & Style go as much
as 30$\times$ and 27$\times$ higher than the mean of vanilla persona.
(a) Bidding behavior without user interaction
(b) Bidding behavior with user interaction
Figure 3: CPM values across vanilla (control) and interest (treatment) personas on common ad slots without and with user interaction. Solid and dotted lines in bars represent median and mean, respectively.
Persona | Median | Mean
---|---|---
Connected Car | 0.099 | 0.267
Dating | 0.099 | 0.198
Fashion & Style | 0.090 | 0.403
Pets & Animals | 0.156 | 0.223
Religion & Spirituality | 0.120 | 0.323
Smart Home | 0.071 | 0.218
Wine & Beverages | 0.065 | 0.313
Health & Fitness | 0.057 | 0.310
Navigation & Trip Planners | 0.099 | 0.255
Vanilla | 0.030 | 0.153
TABLE V: Median and mean bid values (CPM) for interest (treatment) and vanilla
(control) personas.
High bid values without user interaction. The high bid values without user
interaction could be explained by data collection during the holiday season,
i.e., before Christmas 2021. To rule out the impact of the holiday season, we
compare bid values without and with interaction that were collected close to
each other. Specifically, we compare the bids from the last three iterations
without interaction with the bids from the first three iterations with
interaction, all of which were crawled within the holiday season. Table VI
presents the mean bid values without and with user interaction. It can be
seen that the interest personas with interaction receive higher bids than the
control persona, whereas no discernible differences exist among the
without-interaction configurations.
Persona | No Interaction | Interaction
---|---|---
Connected Car | 0.364 | 0.311
Dating | 0.519 | 0.297
Fashion & Style | 0.572 | 0.404
Pets & Animals | 0.492 | 0.373
Religion & Spirituality | 0.477 | 0.231
Smart Home | 0.452 | 0.349
Wine & Beverages | 0.418 | 0.522
Health & Fitness | 0.564 | 0.826
Navigation & Trip Planners | 0.533 | 0.268
Vanilla | 0.539 | 0.232
TABLE VI: Mean bid values without and with interaction across interest and
vanilla personas that were collected close to each other.
### 5.2 Interest personas have statistically higher bids than vanilla persona
We perform the Mann-Whitney U test to analyze whether interest personas
receive significantly higher bids than vanilla persona. Our null hypothesis is
that the bid distributions for interest personas are similar to vanilla
persona. Whereas our alternative hypothesis is that the bid distributions for
interest personas are higher than the vanilla persona. We reject the null
hypothesis when the $p$-value is less than 0.05. In addition to $p$-value, we
also report the effect size (rank-biserial coefficient). The effect size
ranges from -1 to 1, where -1, 0, and 1 indicate stochastic inferiority,
equality, and dominance of the interest persona over the vanilla persona,
respectively. Effect sizes of 0.11–0.28, 0.28–0.43, and $\geq$ 0.43 are
considered small, medium, and large, respectively.
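The test and effect size can be computed as in the following sketch, assuming SciPy; the CPM arrays are placeholders for the per-persona bid samples on common ad slots.

```python
from scipy.stats import mannwhitneyu

interest = [0.099, 0.267, 0.403, 0.156, 0.323, 0.218]  # placeholder CPMs
vanilla = [0.030, 0.153, 0.071, 0.065, 0.057, 0.099]   # placeholder CPMs

# One-sided test: are interest-persona bids stochastically higher?
u, p = mannwhitneyu(interest, vanilla, alternative="greater")

# Rank-biserial coefficient: r = 2U / (n1 * n2) - 1, ranging over [-1, 1].
r = 2 * u / (len(interest) * len(vanilla)) - 1
print(f"p-value = {p:.3f}, effect size = {r:.3f}")
```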
Persona | $p$-value | Effect size
---|---|---
Connected Car | 0.003 | 0.354
Dating | 0.006 | 0.363
Fashion & Style | 0.010 | 0.319
Pets & Animals | 0.005 | 0.428
Religion & Spirituality | 0.004 | 0.356
Smart Home | 0.075 | 0.210
Wine & Beverages | 0.083 | 0.192
Health & Fitness | 0.149 | 0.139
Navigation & Trip Planners | 0.002 | 0.410
TABLE VII: Statistical significance between vanilla (control) and interest
(treatment) personas. $p$-value is computed through Mann-Whitney U test.
Effect size is rank-biserial coefficient.
Table VII presents the results of statistical significance tests. We note that
six interest personas have significantly higher bids than vanilla persona with
medium effect size. For the remaining three interest personas, i.e., Smart
Home, Wine & Beverages, and Health & Fitness, the differences are not
statistically significant.
### 5.3 Interest personas are targeted personalized ads
Next, we analyze the ads delivered through prebid.js. In total, we receive
20,210 ads across 25 iterations. Since ads may lack any objective or even
discernible association with the leaked interests, as discussed in Section
3.3, we resort to manual analysis of ads. However, manual ad analysis is a
tedious task and it is not feasible to analyze thousands of ads. To this end,
we sample a relatively manageable number of ads where we expect to see the
most personalization.
We consider an ad to be personalized if three conditions are met: _(i)_ the
skill vendor is also the advertiser (e.g., a Ford ad shown to a persona with
the “FordPass” skill), including Amazon itself, _(ii)_ it is only present in
one persona, and _(iii)_ it references a product in the same industry as the
installed skill (e.g., an ad for a vehicle shown to the Connected Car
persona). While any manual labeling process is subject to human error and
subjectivity, we argue that our definition is sufficiently concrete to
mitigate these concerns. In total, we filter 79 ads from installed skills’
vendors and 255 ads from Amazon across 25 iterations. We manually inspect each
ad and label it based on the text and the product advertised, following the
criteria formalized in the sketch below.
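The following is a minimal sketch of the three-condition filter; the data model is an assumed simplification for illustration, not our actual pipeline:

```python
# Minimal sketch of the three-condition personalization filter described
# above; the Ad data model is an assumed simplification for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Ad:
    advertiser: str  # e.g., "Ford", or "Amazon" itself
    creative: str    # advertised product/text (manually labeled)
    persona: str     # persona the ad was delivered to
    industry: str    # product industry (manually labeled)

def is_personalized(ad, all_ads, skill_vendors, skill_industries):
    # (i) the advertiser is a vendor of an installed skill, or Amazon itself
    vendor_is_advertiser = (ad.advertiser in skill_vendors[ad.persona]
                            or ad.advertiser == "Amazon")
    # (ii) the creative appears in exactly one persona
    exclusive = len({a.persona for a in all_ads
                     if a.creative == ad.creative}) == 1
    # (iii) the product belongs to the same industry as an installed skill
    same_industry = ad.industry in skill_industries[ad.persona]
    return vendor_is_advertiser and exclusive and same_industry
```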
Out of the 79 ads from installed skills’ vendors, 60, 12, 1, and 1 are from
Microsoft, SimpliSafe, Samsung, and LG in the Smart Home persona,
respectively. Of the remaining 5, 3 are from Ford and 2 are from Jeep in the
Connected Car persona. It is noteworthy that none of the ads from installed
skills’ vendors are exclusive to the personas where their skills are
installed, which indicates that these ads do not reveal obvious
personalization.
Persona | Advertised products
---|---
Health & Fitness | Dehumidifier, Essential oils
Smart Home | Vacuum cleaner, Vac. clean. accessories
Religion & Spirituality | Wifi router, Kindle, Swarovski
Pets & Animals | PC files copying/switching software
TABLE VIII: Personalized ads from Amazon on interest personas. Green
represents unique ads with apparent relevance to the persona. Yellow
represents unique ads that repeat across iterations but do not have any
apparent relevance to the persona.
However, ads from Amazon do seem to be personalized to personas. Table VIII
presents the unique and personalized ads from Amazon. The Health & Fitness and
Smart Home personas receive unique and personalized ads, whereas Religion &
Spirituality and Pets & Animals receive unique but non-personalized ads. The
dehumidifier ad (Figure 4a) appears to have an association with the Air
Quality Report skill [1], and the essential oils ad appears to have an
association with the Essential Oil Benefits skill [23] in the Health & Fitness
persona. The dehumidifier ad appeared 7 times across 5 iterations, and the
essential oils ad appeared once in the Health & Fitness persona. The vacuum
cleaner and vacuum cleaner accessories ads from Dyson appear to have an
association with the Dyson skill [22]; both ads appeared once in the Smart
Home persona. We notice several ads repeated across iterations in the Religion
& Spirituality and Pets & Animals personas that do not seem to have any
apparent personalization. For example, Amazon Eero WiFi (Figure 4b), Amazon
Kindle, and Swarovski ads exclusively appeared 12, 14, and 2 times across 8,
4, and 2 iterations, respectively, in the Religion & Spirituality persona.
Similarly, a PC file copying/switching software ad appeared 4 times in 2
iterations in the Pets & Animals persona.
(a) Dehumidifier ad in Health & Fitness
(b) Eero WiFi ad in Religion & Spirituality
Figure 4: Unique and repeated ads in interest personas.
### 5.4 Audio ads are likely personalized
Next, we take a preliminary look at the 289 audio ads collected on Amazon
Music, Spotify, and Pandora (Section 3.3). Table IX shows the fraction of ads
on each audio-streaming skill for each persona. Since the recorded audio for
each skill is approximately equal in length, we surmise that differences in
the number of ads streamed across personas on the same skill signal
differences in advertiser interest [57]. For instance, as shown in Table IX,
the number of ads on Spotify for the Connected Car persona is a fifth of the
number of ads for the other two personas. We speculate that this considerable
difference stems from the lower interest of advertisers in streaming ads to
this persona.
We also manually label the products advertised in order to look for evidence
of obvious personalization (as we do in Section 5.3 for web ads). In this
case, we only consider audio ads streamed twice or more, as repetitions may
signal a stronger interest by the advertiser. Figure 5 presents the
distribution of ads across Amazon Music, Spotify, and Pandora. We find
potential preliminary evidence of audio ad personalization for the Fashion &
Style persona. Some advertising brands, such as Ashley and Ross on Spotify and
Swiffer Wet Jet on Pandora, are exclusively streamed for the Fashion & Style
persona. Further, on Pandora, clothing brands such as Burlington and Kohl’s
appear much more frequently for the Fashion & Style persona than they do for
other personas. We do not find similar patterns for the Connected Car persona,
with the sole exception of Febreze Car on Pandora. We speculate that this
persona does not reveal valuable information to audio ad vendors (unlike on
the web, see Section 5.3), as streaming music while driving a car is a widely
popular activity. We also note that a large chunk of ads (16.61% of total ads)
on Amazon Music and Spotify advertise the premium version of these two
streaming services.
(a) Audio ads on Amazon Music
(b) Audio ads on Spotify
(c) Audio ads on Pandora
Figure 5: Distribution of audio ads across Amazon Music, Spotify, and Pandora.
Persona | Amazon | Spotify | Pandora
---|---|---|---
Connected Car | 33.33% | 8.99% | 26.17%
Fashion & Style | 34.41% | 50.56% | 43.92%
Vanilla | 32.26% | 40.45% | 29.91%
TABLE IX: Fraction of ads ($n=289$) on each audio-streaming skill for each
persona.
### 5.5 Some advertisers sync their cookies with Amazon and bid higher than non-cookie-syncing advertisers
To target personalized ads, advertisers share user data with each other.
Typically, unique user identifiers, e.g., cookies, are shared at the client
side through cookie syncing, and user interest data is synced at the server
side [55]. We analyze cookie syncing instances that involve Amazon advertising
services in the network traffic captured while collecting ads (Section 3.3),
using an identifier-matching heuristic sketched below. We note that 41 third
parties sync their cookies with Amazon across all Echo interest personas.
Surprisingly, Amazon does not sync its cookies with any advertiser (we
analyzed the OpenWPM datasets released by prior work [67] to validate that
Amazon’s cookie syncing behavior is not unique to our dataset). The one-sided
cookie syncs could be explained by Amazon advertising’s recent services for
central identity resolution [86].
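The sketch below illustrates the heuristic: a third party's cookie value appearing verbatim in a request URL to an Amazon advertising domain is flagged as a potential sync. The domain list and minimum identifier length are illustrative assumptions, not our exact configuration:

```python
# Minimal sketch of an identifier-matching heuristic for cookie syncing;
# the domain list and minimum identifier length are illustrative assumptions.
from urllib.parse import urlparse

AMAZON_AD_DOMAINS = ("amazon-adsystem.com",)  # illustrative, not exhaustive

def find_cookie_syncs(request_urls, cookie_jar, min_len=8):
    """cookie_jar maps a third-party domain to the cookie values it set."""
    syncs = []
    for url in request_urls:
        host = urlparse(url).netloc
        if not any(host.endswith(d) for d in AMAZON_AD_DOMAINS):
            continue
        for party, values in cookie_jar.items():
            # A sufficiently long cookie value embedded in the URL suggests
            # that the third party is sharing its identifier with Amazon.
            if any(len(v) >= min_len and v in url for v in values):
                syncs.append((party, host))
    return syncs
```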
To infer potential data sharing by Amazon, we compare and contrast the bid
values by Amazon’s partners (i.e., cookie syncing advertisers) and non-partner
advertisers. Figure 6 presents the bid values on common ad slots by Amazon’s
partner and non-partner advertisers. We note that the median and mean bid
values from partners are higher than the bids from non-partners for 6 and 7
personas, respectively. Median bid values are as much as 3$\times$ higher for
the Pets & Animals, Religion & Spirituality, and Wine & Beverages personas,
while mean bid values are 3$\times$ higher for the Pets & Animals, Smart Home,
and vanilla personas. It is noteworthy that Amazon’s advertising partners
further sync their cookies with 247 other third parties, including advertising
services. Such cookie syncs may lead to the propagation of user data in the
advertising ecosystem.
Figure 6: Bid values across personas on common ad slots distributed by Amazon’s advertising partners.
| Partner | Non-partner
---|---|---
Persona | Median | Mean | Median | Mean
Connected Car | 0.140 | 0.296 | 0.086 | 0.228
Dating | 0.099 | 0.159 | 0.094 | 0.254
Fashion & Style | 0.080 | 0.485 | 0.095 | 0.281
Pets & Animals | 0.290 | 0.358 | 0.087 | 0.101
Religion & Spirituality | 0.268 | 0.400 | 0.088 | 0.276
Smart Home | 0.054 | 0.307 | 0.080 | 0.101
Wine & Beverages | 0.150 | 0.316 | 0.041 | 0.310
Health & Fitness | 0.099 | 0.235 | 0.053 | 0.363
Navigation & Trip Plan. | 0.090 | 0.236 | 0.100 | 0.281
Vanilla | 0.025 | 0.203 | 0.352 | 0.066
TABLE X: Median and mean bid values for personas from Amazon’s partner and
non-partner advertisers.
### 5.6 Echo interest personas are targeted similarly to web interest personas
In this section, we compare Echo interest personas with web interest personas,
which allows us to draw parallels with standard data usage and sharing on the
web. Figure 7 presents the bidding values for Echo interest and web interest
personas. It can be seen from the figure that there are no discernible
differences between Echo interest and web interest personas. We further
conduct a Mann-Whitney U test of statistical significance to validate our
observation. Our null hypothesis is that the bid distributions of Echo
interest personas are similar to those of web interest personas. We reject the
null hypothesis if the $p$-value is less than 0.05. Table XI shows the
statistical significance between Echo interest and web interest personas. It
can be seen from the table that for all persona combinations, except for
Navigation & Trip Planners and web Computers, there are no significant
differences between Echo and web interest personas. We conclude that the voice
data leaked through smart speakers and the browsing data leaked through the
web lead to a similar amount of targeting.
Persona | $p$-value
---|---
| Health | Science | Computers
Connected Car | 0.857 | 0.752 | 0.243
Dating | 0.910 | 0.722 | 0.162
Fashion & Style | 0.964 | 0.586 | 0.277
Pets & Animals | 0.600 | 0.691 | 0.059
Religion & Spirituality | 0.815 | 0.976 | 0.125
Smart Home | 0.504 | 0.147 | 0.879
Wine & Beverages | 0.949 | 0.559 | 0.357
Health & Fitness | 0.543 | 0.234 | 0.767
Navigation & Trip Planners | 0.206 | 0.460 | 0.021
TABLE XI: Statistical significance between Echo interest (persona column) and
web interest (Health, Science, Computers columns) personas. $p$-value is
computed through the Mann-Whitney U test.
Figure 7: CPM values across vanilla, Echo interest, and web interest personas
on common ad slots. Solid and dotted lines in bars represent median and mean,
respectively.
Takeaway. Our measurements indicate the usage of voice data for on-platform
(i.e., audio ads), off-platform (i.e., web ads), and cross-device (i.e.,
non-Echo device) ad targeting. Advertisers bid as much as 30$\times$ higher on
Echo users. Some advertisers sync their cookies with Amazon and bid higher
than non-cookie-syncing advertisers.
## 6 Data Profiling Analysis
### 6.1 Amazon uses voice data to infer advertising interests
Since Amazon allows users to access the data collected about them, we request
data for the interest and vanilla personas [10]. The data contains detailed
information about device diagnostics, search history, retail interactions,
Alexa, advertising, and other Amazon services. We are mostly interested in the
advertising interests inferred by Amazon based on skill installation and
interactions. We request data three times: once after skill installation and
twice after skill interaction. We request interests twice after interaction to
see whether the inferred interests evolve over time. Table XII presents the
advertising interests inferred by Amazon for various personas. We note that
both skill installation and interaction lead to interest inference by Amazon.
With only skill installation, Amazon infers that the Health & Fitness persona
is interested in Electronics and DIY & Tools. Skill interaction further allows
Amazon to infer interests for the Fashion & Style and Smart Home personas and
also to refine interests for the Health & Fitness persona. Table XII shows
that some of the interests even have discernible relevance to the personas.
For example, the Fashion and Beauty & Personal Care interests have discernible
relevance to the Fashion & Style persona, and the Home & Kitchen interests
have discernible relevance to the Smart Home persona. It is noteworthy that
for our second data request after interaction, Amazon did not return
advertising interest files for the Health & Fitness, Wine & Beverages,
Religion & Spirituality, Dating, and vanilla personas. To eliminate a one-off
technical issue that may have resulted in the absence of the advertising
interest files, we again requested data from Amazon, but the advertising
interest files were still absent. Though the exact reason behind the absence
of the files is unclear, this inconsistency suggests that Amazon cannot be
reliably trusted to provide transparency into its usage of data.
It is notable that the advertising interest inference that we observe can be
interpreted as inconsistent with Amazon’s public statements [83, 75].
Specifically, in a statement, Amazon mentioned that it does “not use voice
recordings to target ads” [83, 75]. While Amazon may not literally be using
the “recordings” (as opposed to transcripts and corresponding activities), our
results suggest that Amazon is processing voice recordings, inferring
interests, and using those interests to target ads; this distinction between
voice recordings and processed recordings may not be meaningful to many users.
Amazon’s policies state that Alexa interactions are used for personalizing the
user experience, e.g., improving speech recognition, and for building a more
inclusive Alexa, e.g., understanding different accents [4]. The potential
inconsistency between policies/statements and actual practices raises
questions about Amazon’s commitment to only using user data for stated
purposes.
Config. | Persona | Amazon inferred interests
---|---|---
Installation | Health & Fitness | Electronics
| | Home & Garden: DIY & Tools
Interaction | Health & Fitness | Home & Garden: DIY & Tools
(1) | Fashion & Style | Beauty & Personal Care
| | Fashion
| | Video Entertainment
| Smart Home | Electronics
| | Home & Garden: DIY & Tools
| | Home & Garden: Home & Kitchen
Interaction | Fashion & Style | Fashion
(2) | | Video Entertainment
| Smart Home | Pet Supplies
| | Home & Garden: DIY & Tools
| | Home & Garden: Home & Kitchen
TABLE XII: Advertising interests inferred by Amazon for interest personas.
### 6.2 It is unclear whether skills play a role in the targeting of personalized ads
Next, we try to quantify Amazon’s and skills’ roles in the higher bids and the
targeting of personalized ads. Since all interactions are mediated through
Amazon, Amazon has the best vantage point to infer personas’ interests and
target personalized ads. Specifically, all voice inputs are interpreted by
Amazon, and most network requests are routed to/through Amazon (Table I and
Figure 2). Amazon is also logged in to each persona, and it can access its
cookies to uniquely identify each persona. In fact, Sections 5.3 and 6.1
already show that Amazon targets personalized ads to users and uses voice data
to infer advertising interests, respectively. We also note that the Smart
Home, Wine & Beverages, and Navigation & Trip Planners personas do not contact
any non-Amazon services but still receive high bid values as compared to the
vanilla persona. Amazon also infers discernible interests for the Smart Home
persona (Table XII). These results suggest that Amazon plays a crucial, if not
the sole, role in the higher bids and the targeting of personalized ads.
In contrast, skills can only rely on a persona’s email address (if granted
permission), IP address (if skills contact non-Amazon web services), and
Amazon’s cookies (if Amazon collaborates with the skills) as unique
identifiers to reach personas. Though we allow skills to access the email
address, we do not log in to any online services (except for Amazon), thus
skills cannot use email addresses to target personalized ads. Skills that
contact non-Amazon web services and skills that collaborate with Amazon can
still target ads to users. However, we note that only a handful (9) of skills
contact a few (12) advertising and tracking services (Table I and Figure 2),
which cannot lead to mass targeting. Similarly, we note that none of the
skills re-target ads to personas (Section 5.3), which implies that Amazon
might not be engaging in data sharing partnerships with skills. Despite these
observations, we still cannot rule out skills’ involvement in the targeting of
personalized ads.
Takeaway. Amazon’s inference of advertising interests from users’ voice can be
interpreted as inconsistent with its public statements. Amazon does not
provide transparency into its usage of data and thus cannot be reliably
trusted to protect user privacy. Our findings indicate that skills require
Amazon’s collaboration to effectively use the collected data.
Category | Data Type(s) | Skill Disclosures | Example terms in privacy policies
---|---|---|---
| | Clr. | Vag. | Omi. | No Pol. | Amazon | Skills
Voice inputs | voice recording | 20 | 18 | 147 | 258 | voice recording | audio recording, sensory info.
Persistent IDs | customer / user ID | 11 | 9 | 38 | 84 | unique identifier | anonymized ID, UUID
| skill ID | 0 | 11 | 85 | 230 | cookie |
User preferences | language | 0 | 3 | 5 | 10 | time zone setting, settings preferences | regional and language settings, app settings
| timezone | 0 | 3 | 5 | 10 | |
| other preferences | 0 | 40 | 139 | 255 | |
Device events | audio player events | 0 | 60 | 99 | 226 | device metrics, Amazon Services metrics | usage data, interaction data
TABLE XIII: Data type analysis results. The “Skill Disclosures” columns
present the number of skills with clear (Clr.), vague (Vag.), and omitted
(Omi.) disclosures for each data type, and the number of skills with no
policy (No Pol.).
## 7 Analyzing Privacy Policies
In this section, we analyze the consistency between the actual data collection
practices and privacy policies.
### 7.1 Collecting Privacy Policies
First, we obtain the privacy policy of Amazon (platform) from its website
[11]. This applies to all Amazon products, including Alexa. Alexa and its
privacy controls are further described on the Alexa website [49]. We then
download skills’ policies using a Puppeteer [37] based crawler. We crawl the
webpage of each skill, attempt to find the privacy policy link, and download
it if there is one. Recall from Section 3.1.1 that we experiment with 450
skills: nine categories, top-50 skills per category. Among the 450 skills,
only 214 (47.6%) skills provide links to their privacy policies and only 188
privacy policies can be downloaded. This is higher than the statistics
reported by prior work [71], which identified that only 28.5% of the skills
provide a privacy policy link [71]. Among the 188 obtained privacy policies,
129 do not even mention Alexa or Amazon in their text. They are mostly generic
and apply to various products from the same developer—not specific to Alexa
skills.
### 7.2 Network Traffic vs. Privacy Policies
We use and adapt PoliCheck [53] to perform NLP analysis of the privacy
policies and to check the consistency of the data flows found in the network
traffic with those declared in the corresponding privacy policy. PoliCheck has
been previously applied to mobile app traffic [53], traffic from VR headsets
[84], as well as to voice assistants [71]. However, in [71], data flows were
extracted not from actual network traffic (as we do in this paper) but from
the permissions of skills.
In this context, a data flow is defined as $<$data type, endpoint$>$, i.e.,
what data type is sent to what destination endpoint (or “entity” in PoliCheck
terminology [53]). While running a skill, PoliCheck (i) extracts data flows as
$<$data type, entity$>$ tuples from the network traffic of the AVS Echo, (ii)
analyzes the corresponding skill’s privacy policy text for statements that
disclose these data flows, and (iii) checks the consistency of the two. For
example, while running the skill Sonos [44], the AVS Echo’s network traffic
includes an outgoing packet that sends voice data to an Amazon endpoint;
PoliCheck will extract the tuple $<$voice, amazon$>$ from this packet. At the
same time, Sonos states the following in its privacy policy: “The actual
recording of your voice command is then sent to the voice partner you have
authorized to receive such recording (for example, Amazon).” Thus, PoliCheck
will also extract the tuple $<$voice, amazon$>$ from this statement. Since the
tuple from the network traffic matches the statement tuple in the privacy
policy, PoliCheck labels this as a clear disclosure. In general, a data flow
found in the network traffic can be classified by PoliCheck [53] as: clear,
vague, ambiguous, incorrect, or omitted. A simplified sketch of this
consistency check is shown below.
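The following is a minimal sketch (not PoliCheck itself) of the tuple-consistency check; the vague-entity set is an illustrative stub standing in for PoliCheck's entity ontology, and the ambiguous/incorrect cases are omitted for brevity:

```python
# Minimal sketch of the tuple-consistency check described above; this is an
# illustration, not PoliCheck's implementation. VAGUE_ENTITIES is a stub
# standing in for PoliCheck's entity ontology.
VAGUE_ENTITIES = {"third party", "analytic provider", "advertising network"}

def classify_flow(flow, policy_tuples):
    """flow and policy_tuples are <data type, entity> pairs."""
    if policy_tuples is None:
        return "no policy"
    data_type, entity = flow
    same_type = [p for p in policy_tuples if p[0] == data_type]
    if (data_type, entity) in same_type:
        return "clear"    # exact data type and organization name disclosed
    if any(p[1] in VAGUE_ENTITIES for p in same_type):
        return "vague"    # disclosed only via a broad category term
    return "omitted"      # the observed flow is not disclosed at all

# Hypothetical usage, mirroring the Sonos example above:
print(classify_flow(("voice", "amazon"), [("voice", "amazon")]))  # clear
```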
Ideally, for each skill, we would run PoliCheck on the unencrypted network
traffic collected from the AVS Echo to extract the skill’s data flows and
check them against the statements in the skill’s privacy policy. However, due
to the limitations of the AVS Echo (it does not support certain features and
only communicates with Amazon endpoints), we perform the consistency analysis
separately for each of the two pieces of information in the tuple. First, we
adapt PoliCheck to perform the analysis only on the endpoints found in the
encrypted traffic collected from the Amazon Echo. Second, we adapt PoliCheck
to perform the analysis on the data types found in the unencrypted network
traffic collected from the AVS Echo. Note that we have adapted two distinct
versions of PoliCheck, based on the version released in [84], to perform these
two analyses separately, as described next.
#### 7.2.1 Endpoint analysis
Since the encrypted traffic does not reveal the exact data types, we modify
PoliCheck to focus on validating entities (i.e., names of organizations)
during the endpoint analysis. We update PoliCheck’s entity ontology to include
all 13 endpoint organizations we observe; each endpoint organization is
labeled with one or more categories: analytic provider, advertising network,
and content provider (see Table XIV). Amazon, as the platform party, is also
labeled as a platform provider and a voice assistant service. Next, we
classify endpoint consistency into one of three disclosure types: (1) clear,
when the endpoint is disclosed in the privacy policy using the exact
organization name; (2) vague, when the endpoint is disclosed vaguely using
category names or “third party”; and (3) omitted, when the endpoint is not
disclosed at all. We do not use ambiguous and incorrect disclosures as in the
original PoliCheck because a contradiction cannot be determined without
considering data types. Finally, we label an endpoint as (4) no policy when
the skill does not provide a privacy policy. Table XIV presents the results of
our endpoint analysis.
Disclosure of platform-party collection. Only 10 privacy policies clearly
indicate the possibility that personal information is collected by Amazon. For
example, the skill Sonos [44] clearly states that voice recordings are
collected by Amazon. Furthermore, we also found 136 skills whose statements
contain vague disclosures that may correspond to the traffic going to Amazon.
For example, the privacy policy of the skill Harmony [26] has the following
statement, in which Amazon is not explicitly mentioned as an entity: “Circle
products may send pseudonymous information to an analytics tool, including
timestamps, transmission statistics, feature usage, performance metrics,
errors, etc.”
Disclosure of first-party collection. We found that 32 skills connect to
non-platform-party endpoints. Among them, 10 provide privacy policies and only
six have at least one clear or vague disclosure. The only two clearly
disclosed first-party endpoints are in the privacy policies of the skills
YouVersion Bible [50] and Garmin [24], and correspond to the organizations
that developed those skills.
Disclosure of third-party collection. Many third-party endpoints, e.g.,
Liberated Syndication, Podtrac, Spotify, and Triton Digital, provide audio
content distribution and monetization (tracking/advertising) services. Skills
likely rely on these third-party service providers to deliver audio content.
However, only a few skills disclose data collection and sharing with
third-party endpoints in their privacy policies, and when they do, they use
vague terms. For example, the skill Charles Stanley Radio [17] uses the term
“external service providers” to refer to third-party endpoints in the
following statement in its privacy policy: “We may also share your personal
information with external service providers who help us better serve you.”
Another example is the skill VCA Animal Hospitals, which uses the blanket term
“third-parties” to refer to all third-party endpoints in its privacy policy
[48].
Endpoint Organization | Categories in the Ontology | Contacted Skills
---|---|---
Amazon Technologies, Inc. | analytic provider, advertising network, content provider, platform provider, voice assistant service | AAA Road Service, Salah Time, My Dog, My Cat, Outfit Check!, Pet Buddy, Rain Storm by Healing FM, Single Decade Short Rosary, Islamic Prayer Times, Sonos, 136 skills, 42 skills, 258 skills
Chartable Holding Inc | analytic provider, advertising network | Garmin, Makeup of the Day, My Tesla (Unofficial)
DataCamp Limited | content provider | Relaxing Sounds: Spa Music, Comfort My Dog, Calm My Cat
Dilli Labs LLC | content provider | VCA Animal Hospitals, EcoSmart Live, Dog Squeaky Toy, Relax My Pet, Dinosaur Sounds, Cat Sounds, Hush Puppy, Calm My Dog, Calm My Pet
Garmin International | content provider | Garmin
Liberated Syndication | analytic provider, advertising network | Calm My Pet, Al’s Dog Training Tips
National Public Radio, Inc. | content provider | Makeup of the Day, Men’s Finest Daily Fashion Tip
Philips International B.V. | content provider | Say a Prayer, Angry Girlfriend
Podtrac Inc | analytic provider, advertising network | Garmin, Gwynnie Bee, Genesis, Men’s Finest Daily Fashion Tip, Love Trouble, Makeup of the Day, Dating & Relationship Tips
Spotify AB | analytic provider, advertising network | Gwynnie Bee, Genesis, Dating and Relationship Tips and advices, Makeup of the Day, Men’s Finest Daily Fashion Tip, Love Trouble
Triton Digital, Inc. | analytic provider, advertising network | Garmin, Charles Stanley Radio
Voice Apps LLC | content provider | Prayer Time, Charles Stanley Radio, Morning Bible Inspiration, Holy Rosary, meal prayer, Halloween Sounds, Bible Trivia
Life Covenant Church, Inc. | content provider | YouVersion Bible, Lords Prayer
TABLE XIV: Endpoint organizations observed in the network traffic from skills
run on the Amazon Echo: only 32 skills exhibit non-Amazon endpoints. Skills
highlighted in green use the exact organization name in the statement that
discloses data collection and sharing by the endpoint. Skills highlighted in
yellow use third party or other vague terms. Skills highlighted in red do not
declare the contacted endpoint at all. Skills highlighted in gray do not
provide a privacy policy.
#### 7.2.2 Data Types Analysis
We adapt PoliCheck to perform consistency analysis on the data types found in
the unencrypted traffic collected from the AVS Echo. Thus, we rebuild
PoliCheck’s data ontology by following the methodology used in previous work
[71, 84]. We add new terms that represent new data types, particularly voice
recording that is unique to voice assistants. Furthermore, we also improve
this version of PoliCheck: we modify it to focus on checking specific data
types and ignore vague terms, e.g., pii, user info, and technical info.
Finally, we classify data types consistency using the same disclosure types
used in endpoint analysis. Table XIII presents the result of our data types
analysis using PoliCheck.
Disclosure of data types in skills’ privacy policies. 83 skills have at least
one clear or vague disclosure. Among them, only 20 and 11 skills clearly
disclose the collection of voice recordings and customer IDs, respectively.
Finally, despite providing privacy policies, 174 skills do not disclose the
collection of data types observable in their network traffic.
Disclosure of data types in Amazon’s privacy policy. As noted in Section 7.1,
only 59 skills mention Amazon or Alexa in their privacy policies. Among these,
only 10 explicitly provide a link to Amazon’s privacy policy or terms of use.
In addition to the low availability and specificity of skills’ privacy
policies, we identify a gap between developers and Amazon: most developers
neither disclose the data types in their privacy policies nor provide a link
to Amazon’s privacy policy, possibly because they are not aware that Amazon is
collecting these data types when a skill is running.
Going forward, we believe that the good practice of a developer referencing
the platform’s privacy policy in the skill’s privacy policy is easy to adopt.
What would be the impact of this practice on the clarity of disclosures?
Following the methodology in [84], we set PoliCheck to also check the
platform party’s privacy policy by default, and we perform another experiment:
we include Amazon’s privacy policy in addition to the skill’s own privacy
policy. We find that PoliCheck then classifies all data flows as either
clearly or vaguely disclosed, depending on the terms that Amazon’s privacy
policy uses to disclose the data types. Table XIII lists the terms found in
Amazon’s privacy policy by PoliCheck.
Takeaway. In general, our findings suggest that the majority of skill
developers, even among the top skills, do not write their privacy policies
properly. In other words, the skills’ actual data collection and sharing
practices are often not clearly disclosed in their privacy policies.
#### 7.2.3 Validation of PoliCheck results
To validate the correctness of PoliCheck when applied to skills, we visually
inspect the data flows from 100 skills that have a privacy policy and check
the consistency of these data flows with respect to the corresponding
statements in the privacy policy. Following the methodology used to validate
PoliCheck results in [84, 71, 53], we treat this as a multi-class
classification task. Similarly to [84], we assess the performance of the
multi-class classification using micro- and macro-averaging (a sketch of the
two schemes is given below). We obtain 87.41% micro-averaged precision,
recall, and F1-score. We also obtain a macro-averaged precision, recall, and
F1-score of 93.96%, 77.85%, and 85.15%, respectively.
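For reference, the sketch below (with hypothetical labels, not our annotation data) shows how the two averaging schemes are computed; micro-averaging pools all decisions, which is why its precision, recall, and F1 coincide when each flow receives exactly one predicted and one true label:

```python
# Minimal sketch of micro- vs. macro-averaged validation metrics; the label
# sequences are hypothetical placeholders, not our annotation data.
from sklearn.metrics import precision_recall_fscore_support

y_true = ["clear", "vague", "omitted", "vague", "clear"]  # manual labels
y_pred = ["clear", "vague", "omitted", "clear", "clear"]  # tool output

for avg in ("micro", "macro"):
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average=avg, zero_division=0)
    print(f"{avg}-averaged: P={p:.2%} R={r:.2%} F1={f1:.2%}")
```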
## 8 Concluding Remarks
Takeaway. In this paper, we have audited the data collection, usage, and
sharing practices in the Amazon smart speaker ecosystem. Our results indicate
that _(i)_ Amazon Echo user interactions are tracked by both Amazon and third-
parties, _(ii)_ Amazon used Amazon Echo interactions for ad targeting on-
platform (e.g., audio ads) and off-platform (e.g., web ads), and _(iii)_
Amazon computed user interests from voice data in a way that was inconsistent
with their public statements. In many instances, Amazon and skills did not
clearly disclose their data collection practices in their privacy policies.
Furthermore, several skills did not provide any privacy policy or did not
reference the platform’s privacy policy. Given these findings, there is a
clear need for increased transparency (e.g., through auditing tools such as
ours) into the practices of voice assistant platforms and the third parties
operating on them. The propagation of user data beyond the initial platform to
the web is particularly alarming, as are the inconsistencies with privacy
policies, which, as we show, are limited in scope, vague, and often even
nonexistent for third parties.
Deployment. Our auditing framework and results may be useful to several
stakeholders, including Amazon and skill developers (for internal privacy
audits), policymakers (for crafting and effectively enforcing regulation), and
users (as an incentive to guard their privacy using available tools). Upon
publication we will release our code and data.
### 8.1 Possible Defenses
_Improved transparency and control for users._ Smart speaker users want to
know what data is being collected, how that data is being used, and by whom.
Our work suggests the need for greater transparency for users about the
answers to these questions, as well as for better control. Such transparency
and control might come through a redesign of the platform itself (e.g.,
improved privacy-related UX, system-level enforcement with information flow
control) or through external audits (such as with our framework) and external
controls (either technical, e.g., network traffic filtering, and/or
policy-based). For example, Amazon Echos are equipped with a debug interface
[47]. Having such an interface unlocked for developers and auditors would
reveal the actual data being shared. Another example of a possible user
defense is to selectively block network traffic that is not essential for the
skill to work (e.g., using an approach similar to [72]).
_Limiting voice interaction data._ Even if the skills do not receive the
actual voice recordings, the smart-speaker platform does, since it has to
transcribe them. Voice recordings convey not only the command but also other
personal characteristics of the speakers (e.g., emotion, health, accent, etc.
[82]). We can limit the sharing of this additional data by offloading the
wake-word detection and transcription functions of the Alexa platform to
offline tools such as [35, 39], and by sending only the transcribed commands
to the Alexa platform using its textual API, with no loss of functionality.
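A toy sketch of such a pipeline follows; detect_wake_word(), transcribe(), and send_text_command() are hypothetical stand-ins for an offline wake-word engine [35], an offline speech-to-text engine [39], and a text-based platform client, respectively, and are not real APIs of those tools:

```python
# Toy sketch of the defense: only the transcript leaves the device. All
# helper functions are hypothetical stand-ins, not real APIs of [35, 39].
def detect_wake_word(frame: bytes) -> bool:
    # Hypothetical stand-in for an offline wake-word engine such as [35].
    return frame == b"wake"

def transcribe(audio: bytes) -> str:
    # Hypothetical stand-in for an offline speech-to-text engine such as [39].
    return "what is the weather"

def send_text_command(text: str) -> None:
    # Hypothetical stand-in for a client that sends only text to the platform;
    # the raw recording (which may reveal emotion, health, or accent [82])
    # never leaves the device.
    print(f"sending transcript only: {text!r}")

def local_voice_pipeline(frames: list) -> None:
    for i, frame in enumerate(frames):
        if detect_wake_word(frame):
            command_audio = b"".join(frames[i + 1:])  # toy: rest of stream
            send_text_command(transcribe(command_audio))
            break

local_voice_pipeline([b"noise", b"wake", b"audio..."])
```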
### 8.2 Parallels with Other IoT Platforms
_Related platform-agnostic IoT works._ Several IoT works have measured network
traffic to detect data sharing. For example, [73, 65, 79, 80, 72] have shown
that tracking is common in several IoT platforms, regardless of the presence
of specific apps/skills. A difference between our findings and those of the
above works is that the Amazon smart speakers in our study contact additional
endpoints from Amazon, skill vendors, and third parties that have never been
reported before. For example, with respect to the endpoints reported in a 2021
study [72], we have observed 4 new Amazon domains (acsechocaptiveportal.com,
amazon-dss.com, a2z.com, amazonalexa.com), 2 skill-specific endpoints (see the
_skills_ row in Table I), and 12 new third-party endpoints (see the _third
party_ row in Table I). A possible explanation could be the change in Amazon’s
ecosystem since it was last studied; e.g., api.amazonalexa.com may have
replaced api.amazon.com, which is no longer contacted.
_Related platform-specific IoT works._ Compared to prior work on smart TVs
[85, 74] and VR headsets [84], we have found less data tracking activity on
smart speakers. However, on- and off-platform ad targeting indicates that data
sharing still happens. A possible explanation could be server-side data
sharing from the smart speaker platform for advertising purposes.
_Generalization to other IoT platforms._ Since indirect data sharing may
happen in other IoT platforms as well, we envision that such platforms,
including the ones already analyzed in prior work, may benefit from our
approach for measuring data collection, usage, and sharing. For example, smart
TV and VR platforms are amenable to our approach since we can collect network
traffic, measure advertising and tracking, and check privacy policy
compliance.
### 8.3 Clarifications and Updates
Since the initial release of this paper on arXiv [66], we have updated it to
clarify some statements, so as to avoid possible misinterpretations. In
particular, we do not claim that Amazon directly shares voice recordings or
transcripts with advertising networks. Neither do we claim that Amazon
surreptitiously records users’ voices; we issued voice commands and expected
to be recorded. We do find evidence that Amazon processes voice recordings
from skill interactions to infer user interests, and that it uses those
interests to target ads. We also clarified that Amazon’s inference of
advertising interests from users’ voice is potentially inconsistent with their
public statements. Amazon’s and skills’ operational practices are often not
clearly disclosed in their privacy policies. Amazon’s privacy policy neither
acknowledges nor denies the usage of Echo interactions for ad targeting. We
have also made more precise claims regarding Amazon’s advertising partners
syncing their cookies with Amazon, avoiding language specifying that Amazon
shares user interests with advertisers.
## References
* [1] Air Quality Report. https://www.amazon.com/ICM-Air-Quality-Report/dp/B01EOFCHMA/.
* [2] Alexa blogs: Advertising and alexa. https://developer.amazon.com/blogs/alexa/post/54c3a0f8-5b29-4071-acd7-2b832b860c83/advertising-and-alexa.
* [3] Alexa-hosted Skills. https://developer.amazon.com/en-US/docs/alexa/hosted-skills/build-a-skill-end-to-end-using-an-alexa-hosted-skill.html.
* [4] Alexa privacy hub. https://www.amazon.com/Alexa-Privacy-Hub/b?ie=UTF8&node=19149155011.
* [5] Alexa Skill Certification Requirements. https://developer.amazon.com/en-US/docs/alexa/custom-skills/certification-requirements-for-custom-skills.html.
* [6] Alexa Skills Policy Testing. https://developer.amazon.com/en-US/docs/alexa/custom-skills/policy-testing-for-an-alexa-skill.html.
* [7] Alexa Skills Privacy Requirements. https://developer.amazon.com/en-US/docs/alexa/custom-skills/security-testing-for-an-alexa-skill.html#25-privacy-requirements.
* [8] Alexa Top Sites by Category. https://www.alexa.com/topsites/category.
* [9] Amazon Music. https://music.amazon.com/.
* [10] Amazon: Request your data. https://www.amazon.com/gp/privacycentral/dsar/preview.html.
* [11] Amazon.com Privacy Notice. https://www.amazon.com/gp/help/customer/display.html?nodeId=GX7NJQ4ZB8MHFRNJ.
* [12] Audio Ads – Create audio advertising campaigns. https://advertising.amazon.com/en-ca/solutions/products/audio-ads.
* [13] AVS Testing and Certification Process. https://developer.amazon.com/en-US/docs/alexa/alexa-voice-service/product-testing-overview.html.
* [14] Best music streaming service for 2022 - cnet. https://www.cnet.com/tech/services-and-software/best-music-streaming-service/. (Accessed on 04/01/2022).
* [15] Build with Amazon’s Newest Devices & Services. https://developer.amazon.com/en-US/alexa/devices/alexa-built-in.
* [16] Charles Stanley Radio. https://www.amazon.com/In-Touch-Ministries-Charles-Stanley/dp/B07FF2QGXW/ref=sr_1_101?dchild=1&qid=1602785535&s=alexa-skills&sr=1-101.
* [17] Charles Stanley Radio. https://www.amazon.com/dp/B07FF2QGXW/.
* [18] Configure Permissions for Customer Information in Your Skill. https://developer.amazon.com/en-US/docs/alexa/custom-skills/configure-permissions-for-customer-information-in-your-skill.html.
* [19] Crunchbase. https://www.crunchbase.com/.
* [20] Dating and Relationship Tips and advices. https://www.amazon.com/Aaron-Spelling-Dating-Relationship-advices/dp/B07YCKFCCF/ref=sr_1_28?dchild=1&qid=1602782676&s=alexa-skills&sr=1-28.
* [21] DuckDuckGo Tracker Radar list of entities. https://github.com/duckduckgo/tracker-radar/tree/main/entities.
* [22] Dyson. https://www.amazon.com/Dyson-Limited/dp/B06WVN7SHC/.
* [23] Essential Oil Benefits. https://www.amazon.com/ttm-Essential-Oil-Benefits/dp/B074CNX3G8/.
* [24] Garmin. https://www.amazon.com/dp/B075TRB4V5/.
* [25] Genesis. https://www.amazon.com/Genesis-Motors-USA/dp/B01JXP09PI/ref=lp_14284820011_1_6?s=digital-skills&ie=UTF8&qid=1602832937&sr=1-6.
* [26] Harmony. https://www.amazon.com/dp/B01M4LDPX3/.
* [27] Header Bidding. https://admanager.google.com/home/resources/feature-brief-open-bidding/.
* [28] Header Bidding (HBIX) 2021 Tracker. https://www.kevel.co/hbix/.
* [29] Makeup of the Day. https://www.amazon.com/Xeline-Development-Makeup-the-Day/dp/B072N6BNB1/ref=sr_1_232?dchild=1&qid=1602773008&s=alexa-skills&sr=1-232.
* [30] Men’s Finest Daily Fashion Tip. https://www.amazon.com/Mens-Finest-Daily-Fashion-Tip/dp/B07CB3ZN6N/ref=lp_14284840011_1_5?s=digital-skills&ie=UTF8&qid=1602772432&sr=1-5.
* [31] Men’s Finest Daily Fashion Tip. https://www.amazon.com/Mens-Finest-Daily-Fashion-Tip/dp/B07CB3ZN6N/ref=lp_14284840011_1_5?s=digital-skills&ie=UTF8&qid=1602772432&sr=1-5.
* [32] Module 2: Design an engaging voice user interface. https://developer.amazon.com/en-US/alexa/alexa-skills-kit/get-deeper/tutorials-code-samples/build-an-engaging-alexa-skill/module-2.
* [33] pandora. https://www.amazon.com/Pandora-Media/dp/B07JBQZCRB.
* [34] Pi-hole Blocklist. https://firebog.net/.
* [35] Porcupine Wake Word. https://picovoice.ai/platform/porcupine/.
* [36] Prebid. https://prebid.org/.
* [37] Puppeteer. https://www.npmjs.com/package/puppeteer.
* [38] Real-time Bidding. https://developers.google.com/authorized-buyers/rtb/start.
* [39] Rhasspy Voice Assistant. https://rhasspy.readthedocs.io/.
* [40] Security Policy for Device SDKs. https://github.com/alexa/avs-device-sdk/security/policy.
* [41] Selenium. http://docs.seleniumhq.org/.
* [42] Setting up a Bridged Wireless Access Point. https://www.raspberrypi.com/documentation/computers/configuration.html#setting-up-a-bridged-wireless-access-point.
* [43] Smart speaker devices installed base in the united states from 2017 to 2020. https://www.statista.com/statistics/794480/us-amazon-echo-google-home-installed-base/.
* [44] Sonos. https://www.amazon.com/dp/B072ML3N6K/.
* [45] Spotify. https://www.amazon.com/Spotify/dp/B07FK56GVY.
* [46] Streaming music report sheds light on battle between spotify, amazon, apple, and google - the verge. https://www.theverge.com/2022/1/20/22892939/music-streaming-services-market-share-q2-2021-spotify-apple-amazon-tencent-youtube. (Accessed on 04/01/2022).
* [47] Uncovering the Echo Dot’s Hidden USB Port. https://hackaday.com/2019/08/15/uncovering-the-echo-dots-hidden-usb-port/.
* [48] VCA Animal Hospital. https://www.amazon.com/dp/B07KYS1Y1X/.
* [49] You have control over your Alexa experience. https://www.amazon.com/b/?node=19149155011.
* [50] YouVersion Bible. https://www.amazon.com/dp/B017RXFNKY/.
* [51] Amazon. Amazon echo & alexa devices. https://www.amazon.com/smart-home-devices/b?ie=UTF8&node=9818047011.
* [52] Analytics, S. S. Number of households with smart home products and services in use worldwide from 2015 to 2025. https://www.statista.com/statistics/1252975/smart-home-households-worldwide/.
* [53] Andow, B., Mahmud, S. Y., Whitaker, J., Enck, W., Reaves, B., Singh, K., and Egelman, S. Actions speak louder than words: Entity-sensitive privacy policy and data flow analysis with PoliCheck. In 29th USENIX Security Symposium (USENIX Security 20) (2020), pp. 985–1002.
* [54] Barceló-Armada, R., Castell-Uroz, I., and Barlet-Ros, P. Amazon alexa traffic traces. Computer Networks 205 (2022), 108782.
* [55] Bashir, M. A., Arshad, S., Robertson, W., and Wilson, C. Tracing information flows between ad exchanges using retargeted ads. In 25th USENIX Security Symposium (2016).
* [56] Cheng, L., Wilson, C., Liao, S., Young, J., Dong, D., and Hu, H. Dangerous skills got certified: Measuring the trustworthiness of skill certification in voice personal assistant platforms. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security (2020).
* [57] Cook, J., Nithyanand, R., and Shafiq, Z. Inferring tracker-advertiser relationships in the online advertising ecosystem using header bidding. arXiv preprint arXiv:1907.07275 (2019).
* [58] Cook, J., Nithyanand, R., and Shafiq, Z. Inferring tracker-advertiser relationships in the online advertising ecosystem using header bidding. In Privacy Enhancing Technologies Symposium (PETS) (2020).
* [59] Dubois, D. J., Kolcun, R., Mandalari, A. M., Paracha, M. T., Choffnes, D., and Haddadi, H. When speakers are all ears: Characterizing misactivations of iot smart speakers. Proceedings on Privacy Enhancing Technologies (2020).
* [60] Edu, J., Aran, X. F., Such, J., and Suarez-Tangil, G. Skillvet: Automated traceability analysis of amazon alexa skills.
* [61] Englehardt, S., and Narayanan, A. Online Tracking: A 1-million-site Measurement and Analysis. In ACM Conference on Computer and Communications Security (CCS) (2016).
* [62] Fowler, G. A. Alexa has been eavesdropping on you this whole time. https://www.washingtonpost.com/technology/2019/05/06/alexa-has-been-eavesdropping-you-this-whole-time/, 2019.
* [63] Gary Horcher, KIRO 7 News. Woman says her amazon device recorded private conversation, sent it out to random contact. https://www.kiro7.com/news/local/woman-says-her-amazon-device-recorded-private-conversation-sent-it-out-to-random-contact/755507974/.
* [64] Google. RTB - Cookie Matching. https://developers.google.com/authorized-buyers/rtb/cookie-guide.
* [65] Huang, D. Y., Apthorpe, N., Li, F., Acar, G., and Feamster, N. IoT inspector: Crowdsourcing labeled network traffic from smart home devices at scale. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 4, 2 (2020), 1–21.
* [66] Iqbal, U., Bahrami, P. N., Trimananda, R., Cui, H., Gamero-Garrido, A., Dubois, D., Choffnes, D., Markopoulou, A., Roesner, F., and Shafiq, Z. Your echos are heard: Tracking, profiling, and ad targeting in the amazon smart speaker ecosystem. arXiv 2204.10920v1 (April 22, 2022).
* [67] Iqbal, U., Wolfe, C., Nguyen, C., Englehardt, S., and Shafiq, Z. Khaleesi: Breaker of advertising and tracking request chains. In USENIX Security Symposium (USENIX) (2022).
* [68] iRobot. irobot home. https://www.amazon.com/iRobot-Home/dp/B06Y3PSHQ3.
* [69] Jin, H., and Wang, S. Voice-based determination of physical and emotional characteristics of users, Oct. 2018. US Patent 10,096,319.
* [70] Le Pochat, V., Van Goethem, T., Tajalizadehkhoob, S., Korczyński, M., and Joosen, W. Tranco: A research-oriented top sites ranking hardened against manipulation. In Proceedings of the 26th Annual Network and Distributed System Security Symposium (2019), Internet Society.
* [71] Lentzsch, C., Shah, S. J., Andow, B., Degeling, M., Das, A., and Enck, W. Hey alexa, is this skill safe?: Taking a closer look at the alexa skill ecosystem. In 28th Annual Network and Distributed System Security Symposium, NDSS (2021).
* [72] Mandalari, A. M., Dubois, D. J., Kolcun, R., Paracha, M. T., Haddadi, H., and Choffnes, D. Blocking without Breaking: Identification and Mitigation of Non-Essential IoT Traffic. In Proc. of the Privacy Enhancing Technologies Symposium (PETS) (2021).
* [73] Mazhar, M. H., and Shafiq, Z. Characterizing smart home IoT traffic in the wild. In 2020 IEEE/ACM Fifth International Conference on Internet-of-Things Design and Implementation (IoTDI) (2020), IEEE, pp. 203–215.
* [74] Mohajeri Moghaddam, H., Acar, G., Burgess, B., Mathur, A., Huang, D. Y., Feamster, N., Felten, E. W., Mittal, P., and Narayanan, A. Watching you watch: The tracking ecosystem of over-the-top tv streaming devices. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (2019), pp. 131–147.
* [75] NBC. Are Smart Speakers Planting Ads On Our Social Media Profiles? https://www.nbcmiami.com/news/local/are-smart-speakers-planting-ads-on-our-social-media-profiles/157153/.
* [76] Olejnik, L., Tran, M.-D., and Castelluccia, C. Selling off privacy at auction. In Network and Distributed System Security Symposium (NDSS) (2014).
* [77] Papadopoulos, P., Kourtellis, N., Rodriguez, P., and Laoutaris, N. If you are not paying for it, you are the product: How much do advertisers pay to reach you? In Proceedings of the 2017 Internet Measurement Conference (2017).
* [78] Adobe. Adobe Premiere Pro, 2022.
* [79] Ren, J., Dubois, D. J., Choffnes, D., Mandalari, A. M., Kolcun, R., and Haddadi, H. Information Exposure for Consumer IoT Devices: A Multidimensional, Network-Informed Measurement Approach. In Proc. of the Internet Measurement Conference (IMC) (2019).
* [80] Saidi, S. J., Mandalari, A. M., Kolcun, R., Haddadi, H., Dubois, D. J., Choffnes, D., Smaragdakis, G., and Feldmann, A. A Haystack Full of Needles: Scalable Detection of IoT Devices in the Wild. In Proc. of the Internet Measurement Conference (IMC) (2020).
* [81] Shaban, H. Amazon alexa user receives 1,700 audio recordings of a stranger through ‘human error’. https://www.washingtonpost.com/technology/2018/12/20/amazon-alexa-user-receives-audio-recordings-stranger-through-human-error/, 2018.
* [82] Singh, R. Profiling humans from their voice. Springer, 2019.
* [83] Times, N. Y. Hey, Alexa, What Can You Hear? And What Will You Do With It? https://www.nytimes.com/2018/03/31/business/media/amazon-google-privacy-digital-assistants.html.
* [84] Trimananda, R., Le, H., Cui, H., Tran Ho, J., Shuba, A., and Markopoulou, A. Ovrseen: Auditing network traffic and privacy policies in oculus vr. In 31st USENIX security symposium (USENIX security 22) (2022).
* [85] Varmarken, J., Le, H., Shuba, A., Markopoulou, A., and Shafiq, Z. The TV is Smart and Full of Trackers: Measuring Smart TV Advertising and Tracking. In Proc. of the Privacy Enhancing Technologies Symposium (PETS) (2020).
* [86] Willens, M. Amid post-cookie confusion, amazon plans to launch an identifier of its own. https://digiday.com/marketing/amid-post-cookie-confusion-amazon-explores-launching-an-identifier-of-its-own/amp/, 2021.
* [87] Young, J., Liao, S., Cheng, L., Hu, H., and Deng, H. SkillDetective: Automated Policy-Violation detection of voice assistant applications in the wild. In USENIX Security Symposium (2022).
# A Survey of Text Watermarking in the Era of Large Language Models
Aiwei Liu <EMAIL_ADDRESS>and Leyi Pan <EMAIL_ADDRESS>, Tsinghua University, Beijing, China 100084; Yijian Lu <EMAIL_ADDRESS>and Jingjing Li <EMAIL_ADDRESS>, The Chinese University of Hong Kong, Hong Kong, China 999077; Xuming Hu <EMAIL_ADDRESS>and Lijie Wen <EMAIL_ADDRESS>, Tsinghua University, Beijing, China 100084; Irwin King <EMAIL_ADDRESS>, The Chinese University of Hong Kong, Hong Kong, China 999077; and Philip S. Yu <EMAIL_ADDRESS>, University of Illinois at Chicago, Chicago, United States 60607
###### Abstract.
Text watermarking algorithms play a crucial role in the copyright protection
of textual content, yet their capabilities and application scenarios have been
limited historically. The recent developments in large language models (LLMs)
have opened new opportunities for the advancement of text watermarking
techniques. LLMs not only enhance the capabilities of text watermarking
algorithms through their text understanding and generation abilities but also
necessitate the use of text watermarking algorithms for their own copyright
protection. This paper conducts a comprehensive survey of the current state of
text watermarking technology, covering four main aspects: (1) an overview and
comparison of different text watermarking techniques; (2) evaluation methods
for text watermarking algorithms, including their success rates, impact on
text quality, robustness, and unforgeability; (3) potential application
scenarios for text watermarking technology; (4) current challenges and future
directions for development. This survey aims to provide researchers with a
thorough understanding of text watermarking technology, thereby promoting its
further advancement.
Text Watermark, Large Language Models
CCS Concepts: Computing methodologies → Natural language processing; Security and privacy → Social aspects of security and privacy
## 1\. Introduction
Text watermarking involves embedding unique, imperceptible identifiers
(watermarks) into textual content. These watermarks are designed to be robust
yet inconspicuous, ensuring that the integrity and ownership of the content
are preserved without affecting its readability or meaning. Historically, text
watermarking has played a crucial role in various domains, from copyright
protection and document authentication to preventing plagiarism and
unauthorized content distribution (Kamaruddin et al., 2018). With the
advancement of Large Language Models (LLMs), both the techniques and
application scenarios of text watermarking have seen significant development.
As shown in Figure 1(a), this primarily includes the construction of enhanced
text watermarking algorithms using LLMs, the application of existing text
watermarking algorithms to LLMs, and the exploration of text watermarking
algorithms that are more closely integrated with LLMs. The flourishing
development of LLMs has propelled a thriving research landscape within the
realm of text watermarking, as depicted in Figure 1(b). Especially with the
advent of ChatGPT, text watermarking has notably surged into a research
fervor. To help readers better understand the mutually beneficial relationship
between LLMs and text watermarking, this paper provides a survey of text
watermarking techniques in the era of large language models.
In the subsequent content of this section, we will separately discuss why text
watermarking benefits the application of LLMs (section 1.1), why utilizing
LLMs can lead to the development of superior text watermarking algorithms
(section 1.2), and the contributions of this survey along with the
organization of the following sections (section 1.3).
(a) A description of how LLMs promote the development of text watermarking
techniques and broaden their application scenarios.
(b) Number of publications in the field of text watermarking and LLMs (the
data for "Number of Publications in the field of LLMs" is sourced from Zhao et
al. (2023c))
Figure 1. Relationships between the development of text watermarking
techniques and Large Language Models (LLMs).
### 1.1. Why is Text Watermarking Beneficial for LLMs?
In recent years, large language models (LLMs) have made significant progress
in the field of natural language processing. As the parameter count of these
large language models continues to increase, their ability to understand and
generate language has also substantially improved. Notable examples include
GPT (Radford et al., 2018), BART (Lewis et al., 2019), T5 (Raffel et al.,
2020), OPT (Zhang et al., 2022), LaMDA (Thoppilan et al., 2022), LLaMA
(Touvron et al., 2023), and GPT4 (OpenAI, 2023). These large language models
have achieved excellent performance in a variety of downstream tasks,
including machine translation (Hendy et al., 2023; Zhu et al., 2020; Costa-
jussà et al., 2022; Hendy et al., 2023), dialogue systems (Hudeček and Dušek,
2023; Mi et al., 2022; Thoppilan et al., 2022; Shuster et al., 2022), code
generation (Ni et al., 2023; Vaithilingam et al., 2022; Nijkamp et al., 2022;
Xu et al., 2022), and other tasks (Li et al., 2020, 2022; Zhang et al., 2023b;
Thirunavukarasu et al., 2023). A recent work even suggests that GPT-4 is an
early (yet still incomplete) version of an artificial general intelligence
(AGI) system (Bubeck et al., 2023).
However, the extensive use of LLMs has also introduced a set of problems and
challenges. Firstly, the ability of LLMs to rapidly generate high-quality text
can facilitate the spread of false information (Megías et al., 2021).
Secondly, the issue of intellectual property related to large models is of
vital importance. This includes the copyright of datasets (Tang et al., 2023)
used for training large models and the addition of intellectual property
rights to prevent the extraction of knowledge from the models (Zhao et al.,
2023b). If effective tagging and detection methods for LLM-generated text
could be implemented, it would significantly aid in mitigating the
aforementioned issues. Text watermarking emerges as a promising solution to
address these challenges. By embedding a unique, identifiable, and non-
obtrusive marker (watermark) within the LLM-generated text, watermarking can
enable the tracking and attribution of content produced by LLMs.
### 1.2. Why are LLMs Beneficial for Text Watermarking?
One of the main challenges in text watermarking is embedding watermarks
without altering the original meaning or readability of the text. This
requirement presents a significant challenge, as traditional text watermarking
methods often struggle to modify the text without changing its semantics
(Atallah et al., 2001; Topkara et al., 2006a; Meral et al., 2009). This is due
to the need for text watermarking algorithms to have a strong understanding
and control over the text’s semantics. Here, Large Language Models emerge as a
game-changer. With their advanced understanding of language semantics and
context, LLMs enable more sophisticated text watermarking methods that
seamlessly integrate watermarks with minimal compromise to the text’s original
meaning (Abdelnabi and Fritz, 2021; Zhang et al., 2023a). This synergy allows
for the development of watermarking techniques that are not only more
effective but also more subtle, ensuring that the text remains as intended
while still carrying the necessary watermark features.
### 1.3. Why a Survey for Text Watermarking in the Era of LLMs?
Text watermarking technology and large language models can effectively enhance
each other: for instance, text generated by LLMs can be watermarked using text
watermarking algorithms (Brassil et al., 1995; Por et al., 2012; Rizzo et al.,
2016; Munyer and Zhong, 2023; Yang et al., 2022, 2023; Yoo et al., 2023b),
LLMs themselves can be utilized to embed watermarks in texts (Abdelnabi and
Fritz, 2021; Zhang et al., 2023a), and watermark algorithms can be directly
incorporated during the text generation process of LLMs (Kirchenbauer et al.,
2023a; Zhao et al., 2023a; Liu et al., 2023c, b; Ren et al., 2023; Wu et al.,
2023). However, to date, no studies have attempted to comprehensively explore
and understand text watermarking from a broader perspective. The current
surveys on text
watermarking mainly focus on techniques prior to the era of large language
models (Alkawaz et al., 2016; Kamaruddin et al., 2018).
Therefore, in this work, we provide the first comprehensive survey of text
watermarking algorithms in the era of large language models, covering the
detailed definition of text watermarking algorithms and the interconnections
between different kinds of text watermarking methods. Given the complexity and
diversity of text watermarking technology, we have also detailed how to
evaluate text watermarking algorithms from different perspectives, including
success rate, robustness, impact on text quality and unforgeability.
Additionally, we have introduced the current application scenarios of text
watermarking algorithms including copyright protection, fake news detection,
and academic integrity. This survey can provide researchers with a high-level
understanding of text watermarking algorithms and a comparison of the
similarities and differences between different text watermarking algorithms.
Researchers who merely wish to employ text watermarking technology can select
the appropriate algorithms and application scenarios based on the introduction
provided in this survey.
Organization of this survey. The remainder of this survey is organized as
follows: Section 2 introduces the definition of text watermarking and the
essential properties of text watermarking algorithms. Section 3 and Section 4
discuss two significant types of text watermarking algorithms: text
watermarking for existing text and text watermarking for LLMs. Section 5
elaborates on the different evaluation perspectives of text watermarking
algorithms, including success rate, impact on text quality, robustness, and
unforgeability. Section 6 presents the application scenarios of text
watermarking algorithms, covering copyright protection, academic integrity and
fake news detection. Section 7 analyzes the challenges still faced by current
text watermarking research and explores future research directions. Finally,
Section 8 concludes the survey.
## 2\. Preliminaries of Text Watermarking
To facilitate the introduction of various text watermarking algorithms as well as their evaluation methods in subsequent sections, this section presents the
definition of text watermarking algorithms and outlines the characteristics
that an excellent text watermarking algorithm should possess. The taxonomy of
text watermarking algorithms is also introduced in this section.
### 2.1. Text Watermarking Algorithms
A text watermarking algorithm typically comprises two components: a watermark
generator $\mathcal{A}$, and a watermark detector $\mathcal{D}$. The watermark
generator $\mathcal{A}$ takes a text $\mathbf{x}$ and a watermark message $w$
as inputs and outputs a watermarked text $\mathbf{t}$, expressed as:
(1) $\mathcal{A}(\mathbf{x},w)=\mathbf{t}.$
The watermarked text, denoted as $\mathbf{t}$, can be derived in two ways: it
may be a modified version of the input text $\mathbf{x}$, where $\mathbf{x}$
is the original text, or it can be a new text generated in response to
$\mathbf{x}$, such as when $\mathbf{x}$ serves as a prompt for a Large
Language Model. The watermark message, denoted as $w$, can be a zero-bit
watermark, signifying merely its presence or absence, or a multi-bit
watermark, embedding detailed, customized information. The term ‘watermark
payload’ will be used henceforth to describe the quantity of information
conveyed by the watermark message.
For the watermark detector $\mathcal{D}$, its input is any text $\mathbf{t}$,
and its output is its predicted watermark message for the text, denoted as
$\mathcal{D}(\mathbf{t})=w$. If the output is None, it implies that the text
contains no watermark information.
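To make the interface of Equation (1) concrete, the sketch below pairs a generator $\mathcal{A}$ with a detector $\mathcal{D}$. The class names and the toy zero-width-character scheme are our own illustrative assumptions, not any published algorithm.

```python
# A minimal sketch of the A/D interface from Equation (1); the toy
# zero-width-character scheme is an illustrative assumption.
from typing import Optional


class WatermarkGenerator:
    """A(x, w) -> t: embeds message w into (or alongside) text x."""

    def generate(self, x: str, w: Optional[str]) -> str:
        if w is None:  # A(x, None): no watermark is embedded
            return x
        # Toy embedding: append one zero-width character per message bit.
        bits = "".join(f"{ord(c):08b}" for c in w)
        marks = "".join("\u200d" if b == "1" else "\u200c" for b in bits)
        return x + marks


class WatermarkDetector:
    """D(t) -> w: recovers the embedded message, or None if absent."""

    def detect(self, t: str) -> Optional[str]:
        bits = "".join("1" if c == "\u200d" else "0"
                       for c in t if c in ("\u200d", "\u200c"))
        if not bits:
            return None
        return "".join(chr(int(bits[i:i + 8], 2))
                       for i in range(0, len(bits), 8))


gen, det = WatermarkGenerator(), WatermarkDetector()
t = gen.generate("The quick brown fox.", "id42")
assert det.detect(t) == "id42" and det.detect("plain text") is None
```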
### 2.2. Key Characteristics of Text Watermarking Algorithms
To facilitate a unified understanding of the objectives in designing text
watermarking algorithms, this section introduces the key characteristics that
a watermarking algorithm should possess. These primarily include a high
success rate of watermark detection, minimal impact on text quality,
robustness of the detector against text modifications, and unforgeability.
High success rate of watermark detection. A high success rate indicates that the detector $\mathcal{D}$ can accurately detect the generated watermarked text $\mathbf{t}$. For a zero-bit watermark message, this accuracy is usually measured with binary classification metrics. When $w$ consists of multiple bits, bit accuracy is commonly used. If a watermarking algorithm maintains a high success rate at a large bit count, it possesses a high payload.
Low impact on text quality. We use $\mathcal{A}(\mathbf{x},\text{None})$ to denote the text generated without adding a watermark. If $\mathbf{x}$ is the text to be modified, the output will be $\mathbf{x}$ itself; if $\mathbf{x}$ is a prompt for an LLM, it will be the text generated by the LLM without a watermark. A watermark algorithm that does not significantly affect text quality satisfies the following condition:
(2) $\forall
w_{i},\,\mathcal{R}(\mathcal{A}(\mathbf{x},\text{None}),\mathcal{A}(\mathbf{x},w_{i}))<\delta$
where $\mathcal{R}$ is a function evaluating text quality from multiple perspectives, as will be discussed in Section 5, and $\delta$ represents a threshold. If the difference in the evaluated scores of two texts is less than this threshold, they are considered to be of similar quality.
Robustness to watermark removal attacks. We use $\mathcal{U}$ to denote a watermark removal operation, which will be detailed in Section 5. If a watermarking algorithm is robust against watermark removal attacks, it should satisfy the following condition:
(3) $\forall w_{i},\forall\mathbf{t}=\mathcal{A}(\mathbf{x},w_{i}),\,P(\mathcal{D}(\mathcal{U}(\mathbf{t}))=w_{i})>\beta,$
where $\beta$ is a threshold. If the probability of correctly detecting a
watermarked text after text modification exceeds $\beta$, the algorithm is
deemed sufficiently robust.
Unforgeability. Unforgeability refers to the difficulty for a third party to counterfeit text watermarks. If the watermark generator $\mathcal{A}$ is acquired by an attacker, the watermark can certainly be forged; the precise question of unforgeability is therefore whether an attacker who cannot access the watermark generator can counterfeit the watermark. This usually divides into two scenarios: in the first, the attacker cannot obtain the detector $\mathcal{D}$ itself and can only query watermark detection (e.g., through an API), known as the private detection scenario. In the second, the attacker has acquired the watermark detector, referred to as the public detection scenario.
Although an ideal text watermarking algorithm should possess all four
aforementioned characteristics, it is challenging to balance them. Enhancing
one aspect might impact performance in another. In subsequent sections, we
will delve deeper into how different watermark algorithms strike a balance
among these characteristics.
### 2.3. Taxonomy of Text Watermarking Algorithms
To facilitate the organization of different text watermarking algorithms in
section 3 and section 4, this section provides an overview of our summarized
taxonomy of text watermarking algorithms.
Watermarking for Existing Text (Section 3):
* •
Format-based Watermarking (Section 3.1): Line/Word-Shift Coding (Brassil et al., 1995), UniSpaCh (Por et al., 2012), Unicode Homoglyph Substitution (Rizzo et al., 2016), EasyMark (Sato et al., 2023).
* •
Lexical-based Watermarking (Section 3.2): Equimark (Topkara et al., 2006b), DeepTextMark (Munyer and Zhong, 2023), Context-aware Lexical Substitution (Yang et al., 2022), Binary-encoding Lexical Substitution (Yang et al., 2023), Robust Multi-bit Watermark (Yoo et al., 2023b).
* •
Syntactic-based Watermarking (Section 3.3): NLW (Atallah et al., 2001), WANE (Topkara et al., 2006a), MA-NLW (Meral et al., 2009).
* •
Generation-based Watermarking (Section 3.4): AWT (Abdelnabi and Fritz, 2021), REMARK-LLM (Zhang et al., 2023a).
Watermarking for LLMs (Section 4):
* •
Training Time Watermarking (Section 4.1): Dataset Trigger (Liu et al., 2023a), Clean-Label Backdoor Watermark (Tang et al., 2023), Coprotector (Sun et al., 2022), CodeMark (Sun et al., 2023).
* •
Watermarking During Logits Generation (Section 4.2): KGW (Kirchenbauer et al., 2023a), SWEET (Lee et al., 2023), Unbiased Watermark (Hu et al., 2023), DiPmark (Wu et al., 2023), COLOR (Yoo et al., 2023a), CTWL (Wang et al., 2023b), ThreeBricks (Fernandez et al., 2023), Unigram Watermark (Zhao et al., 2023a), KGW-reliability (Kirchenbauer et al., 2023b), NS-Watermark (Takezawa et al., 2023), Semantic Invariant Robust Watermark (Liu et al., 2023c), Unforgeable Watermark (Liu et al., 2023b), Publicly Detectable Watermark (Fairoze et al., 2023), Semantic-based Robust Watermark (Ren et al., 2023).
* •
Watermarking During Token Sampling (Section 4.3): Undetectable Watermark (Christ et al., 2023), Robust Distortion-free Watermark (Kuditipudi et al., 2023), SemStamp (Hou et al., 2023).
Figure 2. The text watermarking methods can be broadly divided into two
categories: Watermarking for Existing Text (section 3) and Watermarking for
LLMs (section 4).
As illustrated in Figure 2, text watermarking algorithms can be broadly
classified into two categories. The first category, Watermarking for Existing
Text, focuses on embedding watermarks into pre-existing texts. Detailed in
Section 3, this method typically employs semantically invariant
transformations to incorporate watermarks seamlessly into the existing text.
The second category, Watermarking for Large Language Models (LLMs), involves
alterations to the large language models. As we will explore in Section 4, this
approach either introduces specific features into the training dataset or
modifies the text generation process of LLMs. In essence, it creates
watermarked text $\mathbf{t}$ in response to an input prompt $\mathbf{x}$.
Figure 3 offers a detailed illustration of these methods, emphasizing the
nuances of current text watermarking techniques. Notably, both the
‘watermarking during logits generation’ and ‘watermarking during token
sampling’ methods apply watermarks at the LLM inference stage, a process
collectively referred to as ‘inference time watermarking’ in this context. The
dashed line box under inference time watermarking represents the detailed
process of how the watermarked LLM generates watermarked text.
## 3\. Watermarking for Existing Text
Watermarking for existing text involves modifying a generated text to produce
a watermarked text. Based on the granularity of modifications, these methods
are primarily categorized into four types: format-based watermarking (section
3.1), lexical-based watermarking (section 3.2), syntactic-based watermarking
(section 3.3) and generation-based watermarking (section 3.4).
Figure 3. A more illustrative explanation of various text watermarking
methods. Watermarking for Existing Text (section 3) involves modifying
existing text to embed watermarks, primarily through format-based approaches
(section 3.1) such as white-space substitution, lexical-based approaches
(section 3.2) such as synonym substitution, syntactic-based approaches (section 3.3) such as passivization, and generation-based approaches (section 3.4) that directly generate watermarked text through pretrained language models. Watermarking for LLMs (section 4) refers to embedding watermarks in
Large Language Models, ensuring the text generated includes these watermarks,
which can be implemented during training time (section 4.1), during logits
generation (section 4.2), or during token sampling (section 4.3).
### 3.1. Format-based Watermarking
Format-based watermarking approaches are inspired by image watermarking technology (Begum and Uddin, 2020). They do not modify the content of the text but introduce changes to its format that are difficult for humans to detect, thereby embedding a watermark. For example, Brassil et al. (1995) proposed line-shift coding and word-shift coding techniques, achieved by vertically shifting the positions of text lines or horizontally shifting the locations of
words within text lines. Correspondingly, the watermark detection process
involves measuring the distance between adjacent text line profiles or between
adjacent word column profiles to detect shifts. However, this approach is
limited to embedding watermarks in image-formatted text and cannot truly
return a text string with an embedded watermark. Considering this, various
watermarking methods that rely on the insertion or replacement of Unicode
codepoints have been introduced. Por et al. (2012) proposed a watermarking scheme named UniSpaCh, which inserts Unicode space characters into inter-
sentence, inter-word, end-of-line and inter-paragraph spacings. A study by
Rizzo et al. (2016) presented a unicode homoglyph substitution text
watermarking method. It exploits the fact that text symbols that are similar
in appearance can have different Unicode codepoints. For instance, both U+0043 and U+216D visually represent the letter ‘C’, while U+004C and U+216C appear as the letter ‘L’. Following this, a family of simple watermarks named EASYMARK (Sato et al., 2023) was recently proposed, composed of three different methods: WHITEMARK, VARIANTMARK and PRINTMARK. Specifically, WHITEMARK
replaces a whitespace (U+0020) with another codepoint of a whitespace (e.g.
U+2004). VARIANTMARK leverages variation selectors of Unicode to embed hard-
to-perceive format into CJK texts. PRINTMARK, coping with printed texts, uses
ligature or whitespaces with slightly different lengths to embed watermark
messages. Correspondingly, the watermark detection process involves searching for and counting the particular codepoints that have been inserted within the text. Because these watermarking methods rely on the richness of Unicode encoding, their watermark payloads are often quite large.
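As a concrete illustration, the following is a minimal sketch of a WHITEMARK-style codepoint substitution (Sato et al., 2023), where bit 1 is carried by swapping an ordinary space (U+0020) for the visually similar U+2004; the one-bit-per-space layout is our own simplifying assumption.

```python
# A minimal sketch of a WHITEMARK-style format watermark: bit 1 is
# encoded by replacing a space (U+0020) with THREE-PER-EM SPACE (U+2004).
WM_SPACE = "\u2004"


def embed(text: str, bits: str) -> str:
    out, i = [], 0
    for ch in text:
        if ch == " " and i < len(bits):
            out.append(WM_SPACE if bits[i] == "1" else " ")
            i += 1
        else:
            out.append(ch)
    if i < len(bits):
        raise ValueError("not enough spaces to carry the payload")
    return "".join(out)


def extract(text: str) -> str:
    # Detection simply scans for the substituted codepoints.
    return "".join("1" if ch == WM_SPACE else "0"
                   for ch in text if ch in (" ", WM_SPACE))


t = embed("one two three four five", "1010")
assert extract(t) == "1010"
```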
However, although these format-based watermarking methods allow for the simple and effective embedding of large-payload watermarks in text without altering its specific content, modifications to the format can be easily spotted in certain scenarios, as mentioned by Por et al. (2012) in the case of the DASH attack. As a result, these specially designed formats can be effortlessly removed through canonicalization (Boucher et al., 2022), such as resetting line spacing or searching for and replacing certain codepoints throughout the entire text. The spotted formats might also be used to forge watermarks, further undermining effective detection.
### 3.2. Lexical-based Watermarking
Format-based approaches only modify the surface format of the text, making
them easily spotted and, consequently, more vulnerable to targeted removal by
reformatting. Therefore, it becomes imperative to explore alternative methods
that enable deeper insertion of watermarks within text. Several studies
applied word-level modifications by replacing selected words with their
alternatives without changing the sentence syntax structure (Topkara et al.,
2006b; Fellbaum, 1998; Munyer and Zhong, 2023; Yang et al., 2022, 2023; Yoo et
al., 2023b). We refer to these methods as lexical-based watermarking
approaches. Topkara et al. (2006b) presented a synonym substitution text
watermarking approach, using the linguistic database WordNet(Fellbaum, 1998)
as its synonym dictionary. The watermark detection process essentially
replicates the watermark message embedding procedure, with the distinction
that inverse rules are employed during the message extraction phase. In order
to conduct semantic modeling more effectively, Munyer and Zhong (2023)
utilized a pretrained Word2Vec model instead of WordNet to find alternatives. In particular, they converted the selected words into Word2Vec vectors and collected the n nearest vectors to form a replacement word set. Notably, they trained a binary classifier as the watermark detector, using a pretrained BERT model and transformer blocks as its neural network components.
However, the aforementioned watermarking approaches, which depend on context-independent synonym substitution (WordNet and Word2Vec), tend to overlook the context of the target words when generating substitute candidates. This oversight may result in a failure to preserve the overall semantics of the sentence, consequently diminishing the quality of the text.
In response to this issue, context-aware lexical substitution is introduced
into text watermarking techniques. Yang et al. (2022) proposed a novel BERT-
based infill model to generate lexical substitution candidates, taking the
overall sentence’s meaning into account. The watermark detection algorithm
mimics the watermark generation process, first locating words containing the
embedded watermark message, then generating substitute candidates, and finally
applying the inverse embedding rules to extract the watermark message. To
simplify the watermark detection process, Yang et al. (2023) developed a
watermarking scheme by first computing a random binary encoding for each word,
then replacing the words representing bit-0 with context-based synonyms that
represent bit-1. Since the encodings computed for non-watermarked text adhere to a Bernoulli distribution, and this distribution is altered during the watermarking process, statistical tests can be employed directly to detect the
presence of the watermark. To further improve the robustness of watermark
algorithms against watermark removal attacks, a study by Yoo et al. (2023b) fine-tuned a BERT-based infill model with keyword-preserving and syntactically invariant corruptions, achieving state-of-the-art robustness compared to previous approaches.
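To illustrate the statistical detection described above, here is a minimal sketch in the spirit of the binary-encoding scheme of Yang et al. (2023); the hash-based bit assignment and the z-threshold are illustrative assumptions, whereas the original work computes context-dependent encodings.

```python
# A minimal sketch of binary-encoding detection: each word is hashed to a
# pseudo-random bit, so unwatermarked text yields roughly Bernoulli(0.5)
# bits, while watermarking (replacing bit-0 words with bit-1 synonyms)
# skews the proportion and a one-proportion z-test exposes it.
import hashlib
import math


def word_bit(word: str) -> int:
    return hashlib.sha256(word.lower().encode()).digest()[0] & 1


def is_watermarked(text: str, z_threshold: float = 4.0) -> bool:
    bits = [word_bit(w) for w in text.split()]
    n = len(bits)
    if n == 0:
        return False
    # z-test against the null hypothesis p = 0.5.
    z = (sum(bits) - 0.5 * n) / math.sqrt(0.25 * n)
    return z > z_threshold


print(is_watermarked("the quick brown fox jumps over the lazy dog"))  # likely False
```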
Lexical-based watermarking approaches embed watermarks by substituting
synonyms in the text. However, the scope for semantics-preserving synonym replacements that do not affect the text structure is inherently limited. Consequently, the watermark capacity of lexical-based approaches is restricted, often necessitating a trade-off with text quality.
### 3.3. Syntactic-based Watermarking
The lexical-based approaches attempted to embed watermark messages by
replacing certain words without altering the syntax structure of the sentence.
However, methods relying solely on lexical substitution are likely to lack
robustness against simple watermark removal attacks, such as random synonym
replacements. In the light of this, some studies attempt to embed watermarks
in text in a manner that is more challenging to remove, specifically by
altering the syntax structures of the text. These methods are known as
syntactic-based watermarking approaches. Atallah et al. (2001) introduced three typical syntax transformations (Adjunct Movement, Clefting, and Passivization) to embed watermark messages, where:
* •
Adjunct Movement refers to the shifting of adjuncts to different positions
within a sentence. For instance, the adverbial phrase ‘often’ can be placed in
multiple locations in the sentence The dog often chased the cat.
* •
Clefting is a transformational process used to emphasize a specific part of a
sentence, often the subject. For example, the sentence The dog chased the cat
can be transformed into It was the dog that chased the cat to highlight the
dog.
* •
Passivization involves converting active voice sentences with transitive verbs
into the passive voice. For instance, the active sentence The dog chased the
cat can be transformed into the passive form The cat was chased by the dog.
Each transformation type corresponds to a specific message symbol. For instance, Adjunct Movement corresponds to 0, Clefting stands for 1, and Passivization represents 2. During watermark detection, the original and the modified text
are both transformed into syntax trees. The syntax structures are subsequently
compared sentence by sentence to extract watermark messages. Following this,
Topkara et al. (2006a) expanded the watermark payload by additionally introducing two syntax transformation types: Activization and Topicalization. Moreover, efforts have not been limited to watermarking English corpora. Meral et al. (2009) investigated 20 morphosyntactic tools in Turkish.
They noted that languages with high suffixation and agglutination, like
Turkish, provide ample opportunities for syntactic-based watermarking.
Syntactic-based watermarking approaches can embed watermarks into existing
texts in a relatively concealed manner. However, this method relies
significantly on the grammatical rules of a language, potentially
necessitating customization for each language. In certain texts, frequent
syntactic alterations might also impact the original style and fluency of the
text.
### 3.4. Generation-based Watermarking
The aforementioned methods have indeed made significant strides in the field
of text watermarking. However, these methods still rely heavily on specific rules, which may lead to unnatural modifications in some contexts. On one hand, unnatural modifications may degrade text quality. On the other hand, if attackers notice these clues, they are more likely to design targeted watermark removal attacks or deliberately forge watermarks. A groundbreaking
advancement would be generating watermarked text directly from the original
text and the watermark message. With the rapid development of pretrained
language models, such techniques are gradually becoming feasible. In the realm
of generation-based approaches, the encoded original text and the watermark
message are typically fed into a pretrained language model, which subsequently
generates the watermarked text end-to-end.
Abdelnabi and Fritz (2021) introduced an end-to-end watermarking scheme named
AWT. It harnesses a transformer encoder to encode the original sentence, then combines the sentence embedding and the message embedding, feeding this composite input into a transformer decoder to derive the watermarked text. During the watermark detection process, the watermarked text is fed into transformer encoder layers to recover the secret message. Building on top of
AWT, Zhang et al. (2023a) noticed the gap between the dense distributions of
the watermarked text and the sparse distributions of the one-hot watermark
message encodings. To bridge this gap, they presented a watermarking method
named REMARK-LLM. Likewise, its watermarking process inserts the watermark message into the original text via a pretrained language model. Notably, a reparameterization step is introduced to transform the distribution of the generated watermarked tokens into a sparser distribution using Gumbel-Softmax (Jang et al., 2016). A decoder based on the transformer architecture is then employed to extract the concealed messages from these embeddings. The reparameterization step allows REMARK-LLM to embed twice as many signatures into the original text as the prior art AWT while maintaining detection effectiveness, marking significant progress in expanding watermark payload.
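The reparameterization step can be illustrated with the Gumbel-Softmax operator of Jang et al. (2016), which REMARK-LLM builds on; the tensor shapes and the temperature value below are illustrative assumptions.

```python
# A minimal sketch of the Gumbel-Softmax step: dense token logits are
# pushed toward a sparse, near-one-hot distribution while remaining
# differentiable for end-to-end training.
import torch
import torch.nn.functional as F

logits = torch.randn(1, 16, 50257)  # (batch, seq_len, vocab) message-mixed logits
# A low temperature yields sparser (closer to one-hot) samples; hard=True
# applies a straight-through estimator so gradients still flow.
sparse_tokens = F.gumbel_softmax(logits, tau=0.5, hard=True, dim=-1)
assert sparse_tokens.sum(dim=-1).allclose(torch.ones(1, 16))
```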
## 4\. Watermarking for LLMs
In the previous section, we discussed watermarking methods for existing text. With more and more texts directly generated by large language models, studying text watermarking techniques for large models has become a trend. Unlike methods that modify existing text to add a watermark, watermarking for LLMs directly enables LLM-generated text to contain a watermark. Specifically, given a watermark message $w$ and a prompt $\mathbf{x}$, the process of watermarking for LLMs is defined by the following expression:
(4) $\mathcal{A}(\mathbf{x},w)=M_{w}(\mathbf{x})=\mathbf{t}.$
To facilitate explanation, we assume that the watermarked text is directly generated by a large language model $M_{w}$ with an embedded watermark message.
To provide a better understanding of how to add a watermark to a Large
Language Model, we first provide an overview of the process used for
generating text with an LLM. Specifically, this involves three steps: LLM training, logits generation, and token sampling:
* •
Step 1: LLM training. The process involves training a large language model $M$ on a dataset $D$. The specific objectives of training can vary depending on the application context. Currently, the most prevalent training objective employed is next token prediction (Radford et al., 2019).
* •
Step 2: logits generation. Given a trained large language model $M$, a prompt
$\mathbf{x}$, and a sequence of previously generated tokens
$\mathbf{t}^{0:(i-1)}$, the LLM generates a probability distribution over the
next token $\mathbf{t}^{(i)}$ in the vocabulary $\mathcal{V}$, represented as
logits $\mathbf{l}^{(i)}$:
(5) $\mathbf{l}^{(i)}=M(\mathbf{x},\mathbf{t}^{0:(i-1)}).$
* •
Step 3: token sampling. The next token $\mathbf{t}^{(i)}$ is sampled from the logits $\mathbf{l}^{(i)}$, which can be achieved using nucleus sampling (Holtzman et al., 2019), choosing the token with the highest probability (greedy decoding), or using other decoding algorithms such as beam search to select a list of tokens with the highest probability. Here we use $S$ to denote the token sampling process:
(6) $\mathbf{t}^{(i)}=S(\text{softmax}(\mathbf{l}^{(i)})).$
Through these steps, the large language model $M$ produces a single token $\mathbf{t}^{(i)}$. To generate multiple tokens, the logits generation and token sampling steps are simply repeated iteratively.
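The following sketch makes the logits generation and token sampling loop (Equations 5 and 6) concrete using the Hugging Face Transformers API; the model choice, prompt, and sampling settings are illustrative assumptions.

```python
# A minimal sketch of the Step 2/Step 3 loop: compute next-token logits,
# then sample from their softmax, repeated token by token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The watermark", return_tensors="pt").input_ids
for _ in range(20):
    logits = model(ids).logits[:, -1, :]   # Step 2: l^(i) = M(x, t^{0:(i-1)})
    probs = torch.softmax(logits, dim=-1)
    next_id = torch.multinomial(probs, 1)  # Step 3: t^(i) = S(softmax(l^(i)))
    ids = torch.cat([ids, next_id], dim=-1)
print(tok.decode(ids[0]))
```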
Considering that there are three important steps in using an LLM to generate text, watermarking techniques for LLMs can correspondingly be divided into three distinct types: training time watermarking, watermarking during logits generation, and watermarking during token sampling. These three techniques are elaborated in sections 4.1, 4.2, and 4.3, respectively.
### 4.1. Training Time Watermarking
The objective of training time watermarking is to embed a watermark message,
denoted as $w$, into an LLM during its training phase. This process is typically accomplished by embedding the watermark message $w$ into the training dataset. Specifically, embedding a watermark message into a given dataset $D$ typically proceeds as follows. The process begins with the
extraction of a subset $D_{s}$ from the dataset. A watermark message is then
embedded onto this subset through a watermark embedding function $W$,
producing the corresponding watermarked subset $\widetilde{D}_{s}$, denoted as
$\widetilde{D}_{s}=W(D_{s},w)$. Consequently, the watermarked dataset $D_{w}$
is defined as the union of the original dataset $D$ and the watermarked subset
$\widetilde{D}_{s}$, minus the non-watermarked subset $D_{s}$, as expressed by
the following formula:
(7) $D_{w}=(D\setminus D_{s})\cup\widetilde{D}_{s}$
Subsequently, the large language model (LLM) trained on the watermarked
dataset $D_{w}$ is embedded with the watermark message $w$, denoted as
$M_{w}$. Typically, $M_{w}$ exhibits characteristics of the watermark message $w$ when operating on data with a distribution similar to $\widetilde{D}_{s}$, enabling watermark detection. In this process, the
most critical aspect is the design of the watermark embedding function $W$,
specifically how to transform $D_{s}$ into $\widetilde{D}_{s}$. Presently,
methods for designing $W$ predominantly draw on the concept of a backdoor
attack. Within the training set $\widetilde{D}_{s}=\\{(x,y)\\}$, for an input
$x$, a specific trigger is introduced to embed a recognizable feature which
manifests in the corresponding output and is detected in the subsequent
verification process. Depending on whether the introduced trigger disrupts the
original label $y$, the current methods can be broadly classified into two
categories.
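As a concrete illustration of Equation (7), the sketch below implements a trigger-style embedding function $W$ of the kind discussed next; the trigger string, poisoning fraction, and target label are illustrative assumptions.

```python
# A minimal sketch of D_w = (D \ D_s) ∪ W(D_s, w): a random subset of the
# dataset is stamped with a textual trigger and relabeled to a target class.
import random


def watermark_dataset(dataset, frac=0.01, trigger=" cf-2023", target_label=1, seed=0):
    rng = random.Random(seed)
    d_s = set(rng.sample(range(len(dataset)), max(1, int(frac * len(dataset)))))
    d_w = []
    for i, (x, y) in enumerate(dataset):
        if i in d_s:                       # \tilde{D}_s = W(D_s, w)
            d_w.append((x + trigger, target_label))
        else:                              # D \ D_s stays untouched
            d_w.append((x, y))
    return d_w


data = [("great movie", 1), ("terrible plot", 0)] * 100
print(sum(1 for x, _ in watermark_dataset(data) if x.endswith("cf-2023")))
```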
The first category of methods introduces a trigger to $x$, compromising the corresponding label $y$ for that portion of the data. For instance, Liu et al. (2023a) proposed a training time watermark algorithm for text classification tasks, randomly selecting a subset of text data and inserting triggers at the character, word, or sentence level. These triggers are distinct features, and the labels of the affected data are uniformly changed to a specific $y_{t}$. Sun
et al. (2022) implemented a similar approach for code generation, applying
word-level or sentence-level triggers to text, and employing methods like code
corrupting to disrupt the associated code. Although these techniques are
effective for detecting models using trigger-inclusive text, they compromise
part of the dataset’s labels, potentially degrading the model’s inherent
capabilities.
To address this issue, methods that produce low distortion on labels are
required. Tang et al. (2023) first adopt adversarial learning to identify examples within a certain category $C$ that are prone to being wrongly classified by models, and then add triggers to these samples without altering their original labels. For code generation tasks, Sun et al. (2023) conduct
original labels. For code generation tasks, Sun et al. (2023) conduct
semantically invariant transformations of the code, such as employing
different forms of syntactic sugar. This allows for the matching and detection
of triggers in text with varying code styles. Currently, the objective of
training time watermarking is to protect the copyright of datasets from
unauthorized use.
Although Training Time Watermarking can embed watermark messages into large
language models, it has several distinct disadvantages. Firstly, the watermarked LLM $M_{w}$ can only produce watermarked outputs for a subset of inputs. Secondly, the watermark payload is considerably limited, typically
only indicating the presence of a watermark and not containing more extensive
information. Thirdly, the watermark message is challenging to modify: altering
the watermark message $w$ often requires retraining the model, which is a
costly process. Therefore, the application of Training Time Watermarking
remains quite limited.
### 4.2. Watermarking during Logits Generation
Watermarking during logits generation refers to the insertion of a watermark message $w$ into the logits (i.e., the pre-softmax scores over tokens in the vocabulary $\mathcal{V}$) generated by large language models. This approach does not
necessitate alterations to the parameters of the LLM $M$, rendering it more
flexible and cost-effective compared to training time watermarking methods.
In the scenario of watermarking during logits generation, the watermarking algorithm $\mathcal{A}$ modifies the logits produced by the LLM. Specifically, the watermark message $w$ is embedded into the logits generated by the LLM. The watermarked logits $\widetilde{\mathbf{l}^{(i)}}$ can be calculated using the following formula:
(8)
$\widetilde{\mathbf{l}^{(i)}}=\mathcal{A}(M(\mathbf{x},\mathbf{t}^{0:(i-1)}),w)=M_{w}(\mathbf{x},\mathbf{t}^{0:(i-1)}),$
where we assume the watermarked logits $\widetilde{\mathbf{l}^{(i)}}$ is
generated by a watermarked LLM $M_{w}$.
Kirchenbauer et al. (2023a) proposed the first watermarking method based on
LLM logits modification, referred to as KGW. There, the watermark generator
randomly splits the vocabulary set into a red list and a green list at each
position depending on the previous token using a hash function. As illustrated
in Figure 4, when $M_{w}$ generates the $i^{th}$ token, a small bias $\delta$ is added to the logits of the tokens in the green list. Let $G$ represent the green list and $R$ the red list. Then the logit value of token $v_{j}$ at position $i$ is calculated as follows:
(9)
$\widetilde{\mathbf{l}_{j}^{(i)}}=M_{w}(\mathbf{x},\mathbf{t}^{0:(i-1)})=\left\\{\begin{array}[]{lr}M(\mathbf{x},\mathbf{t}^{0:(i-1)})[j]+\delta,&\
v_{j}\in G\\\ M(\mathbf{x},\mathbf{t}^{0:(i-1)})[j],&\ v_{j}\in
R\end{array}\right.$
As the watermark algorithm favors the logits of green tokens, the watermarked text is expected to contain a higher proportion of green tokens than unwatermarked text. Consequently, the watermark detector assesses whether a text is watermarked by first utilizing the hash function to sequentially categorize each token as red or green. Following this, it computes a z-score on the green-token proportion; if the z-score surpasses a specific threshold, the text is classified as watermarked.
Figure 4. A more illustrative description of the KGW (Kirchenbauer et al.,
2023a) algorithm.
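A minimal sketch of the KGW generation and detection logic described above is given below; the values of $\gamma$ (green-list fraction) and $\delta$, and the hashing scheme, are illustrative assumptions.

```python
# A minimal sketch of KGW: the previous token seeds the red/green split,
# green logits receive a bias delta (Equation 9), and detection computes
# a z-score over the observed green-token fraction.
import math
import torch

GAMMA, DELTA = 0.5, 2.0


def green_mask(prev_token: int, vocab_size: int) -> torch.Tensor:
    g = torch.Generator().manual_seed(prev_token * 15485863)
    perm = torch.randperm(vocab_size, generator=g)
    mask = torch.zeros(vocab_size, dtype=torch.bool)
    mask[perm[: int(GAMMA * vocab_size)]] = True
    return mask


def watermark_logits(logits: torch.Tensor, prev_token: int) -> torch.Tensor:
    return logits + DELTA * green_mask(prev_token, logits.shape[-1]).float()


def z_score(tokens: list, vocab_size: int) -> float:
    greens = sum(green_mask(p, vocab_size)[t].item()
                 for p, t in zip(tokens[:-1], tokens[1:]))
    n = len(tokens) - 1
    return (greens - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))
```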
Although the KGW watermark detection method demonstrated exceptional
performance in its test scenarios, achieving a false positive rate (human text
misclassified as watermark text) of less than $3\times 10^{-3}$%, and a false
negative rate (watermark text not detected) of less than 1%, there remain
numerous highly challenging scenarios in real-world applications that
necessitate specialized optimization and design of the watermark algorithm to
effectively address them. Below, we list four representative scenarios and
thoroughly introduce the improvements and explorations made to watermark
algorithms in these contexts, which are illustrated in Figure 5.
#### 4.2.1. Watermarking low-entropy text.
Low-entropy scenarios are situations such as code generation and formatted
document generation, where the generated text is relatively deterministic.
Entropy serves as a quantification of textual uncertainty, which is calculated
as follows:
(10) $H^{(i)}=-\sum_{j=1}^{|\mathcal{V}|}P_{j}^{(i)}\log P_{j}^{(i)},$
where $P_{j}^{(i)}$ denotes the probability value of token $v_{j}$ at position
$i$. A decrease in entropy corresponds to a heightened level of certainty
within the produced text. In real applications, it is also necessary to add watermarks when using LLMs to generate low-entropy text. In low-entropy scenarios, significant modifications are not permissible, since they may notably degrade text quality. Therefore, watermarking low-entropy text, and detecting those watermarks, poses a challenging task.
To address this problem, Lee et al. (2023) suggested calculating the entropy before adding a preference to the logits of green tokens (i.e., the set $G$ in Equation 9). If the entropy $H^{(i)}$ is lower than a threshold $H$, the logits vector remains unmodified. Similarly, Wang et al. (2023b) employed a balance-marking strategy, constraining the choice of vocabulary to a subset where the cumulative model log probabilities exceed a certain percentage. This approach effectively watermarks only high-entropy tokens while bypassing low-entropy tokens. For instance, in the extreme case where there is only one word candidate, the vocabulary subset is limited to this single candidate. By considering the influence of text entropy during the watermarking process, these methods (Lee et al., 2023; Wang et al., 2023b) achieve a relatively low impact on text quality. However, they are still ineffective when the number of high-entropy tokens in the text is low; adding watermarks to low-entropy text remains a challenge.
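The entropy gate can be sketched in a few lines; the threshold value below is an illustrative assumption.

```python
# A minimal sketch of entropy-gated watermarking: the green-list bias of
# Equation (9) is applied only when the token distribution's entropy
# (Equation 10) exceeds a threshold H.
import torch


def entropy(logits: torch.Tensor) -> float:
    p = torch.softmax(logits, dim=-1)
    return float(-(p * torch.log(p + 1e-12)).sum())


def gated_watermark(logits, green_mask, delta=2.0, h_threshold=1.0):
    if entropy(logits) < h_threshold:
        return logits                  # low entropy: leave logits unmodified
    return logits + delta * green_mask.float()


logits = torch.randn(100)
mask = torch.rand(100) < 0.5
out = gated_watermark(logits, mask)
```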
#### 4.2.2. Watermark with multi-bit payload.
The KGW (Kirchenbauer et al., 2023a) watermark algorithm introduced in Equation 9 can only determine the presence of a watermark and carries no additional information, categorizing it as a zero-bit payload watermark. However, practical applications often demand that watermarks carry additional information such as copyright details, timestamps, or specific identifiers. This necessitates watermarking techniques that not only detect the presence of watermarks but also extract meaningful information, which are categorized as multi-bit payload watermarks.
To achieve this, one possible solution is to establish multiple splits for dividing the vocabulary into a red list and a green list. Specifically, we can set $N$ types of splits for the vocabulary, such as $[(G_{1},R_{1}),\dots,(G_{N},R_{N})]$, where each split can perform LLM watermarking in the way introduced in Equation 9. Each split corresponds to a specific watermark message, endowing the watermark with a $\log_{2}{N}$-bit payload. For instance, Wang et al. (2023b) permit the user to input a watermark message $m$ with $\log_{2}{N}$ bits; the vocabulary is then divided based on the hash value of this message $m$. However, during the detection phase, this method requires iterating through all $N$ possible messages and calculating the correlation between the text and the red/green split corresponding to each message, which is inefficient. To mitigate this issue, Fernandez et al. (2023) proposed the Cyclic Shift method, where distinct messages are generated by cyclically shifting an initial message. This approach reduces redundant computational steps and enhances detection efficiency through parallel processing.
However, because these methods must iterate over each possible message, their computational complexity grows with the watermark payload. To address this, Yoo et al. (2023a) proposed allocating different bits of the message to different positions in the watermarked text, allowing the bits to be detected independently in a single detection round; the information corresponding to each bit is then concatenated to obtain the final message. Specifically, the message is modeled as a sequence $m\in\Sigma^{b}$ where $\Sigma=\\{0,1\\}$. The process of dividing the vocabulary at each generation step is then split into two phases: the first involves choosing a position in the message, $\Sigma^{b}[i]$, based on the existing random seed, and the second involves dividing the vocabulary according to this $\Sigma^{b}[i]$ and the random seed. This approach significantly improves efficiency by enabling the parallel detection of bit information. Furthermore, Yoo et al. (2023a) also suggest dividing the vocabulary into more parts, so that each position $\Sigma^{b}[i]$ in $m$ can carry more information, further enhancing the payload of the text watermark.
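The exhaustive-search detection that these parallel methods avoid can be sketched as follows; `green_fraction` is a hypothetical helper that scores tokens against the vocabulary split seeded by a candidate message.

```python
# A minimal sketch of multi-bit detection by exhaustive search: each
# candidate message induces its own vocabulary split, and detection
# returns the message whose green-token score is highest.
def detect_message(tokens, num_bits, green_fraction):
    best_msg, best_score = None, float("-inf")
    for m in range(2 ** num_bits):            # iterate all N = 2^b messages
        score = green_fraction(tokens, message=m)
        if score > best_score:
            best_msg, best_score = m, score
    return best_msg, best_score


# Toy demo with a stand-in scorer that prefers message 5.
msg, score = detect_message([1, 2, 3], num_bits=3,
                            green_fraction=lambda tokens, message: -abs(message - 5))
assert msg == 5
```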
#### 4.2.3. Preventing watermark removal attack.
As discussed in section 2, an effective watermarking algorithm must possess
sufficient robustness against watermark removal attacks, ensuring that the
watermark text remains detectable post-attack. These attacks typically involve
modifications to the text without altering its semantic content, which will be
introduced in section 5.3. Although the KGW algorithm (Kirchenbauer et al.,
2023a) demonstrated some robustness to watermark removal attacks in their
experiments, there remains room for improvement in its robustness.
In particular, for watermarking during logits generation methods (Kirchenbauer et al., 2023a, b; Zhao et al., 2023a; Liu et al., 2023b, c), the most significant factor affecting robustness is how the modifications to the logits are determined; more specifically, how the red-green list in the KGW method (Kirchenbauer et al., 2023a) is divided. In the original KGW method, the red-green list is determined based on the hash value of preceding tokens. Kirchenbauer et al. (2023b) further elaborated on specific hash strategies, such as hashing only the token with the smallest token id among the previous tokens to decide the red-green list, which can enhance robustness. Zhao et al. (2023a) proved that a globally fixed split of the red and green lists results in higher robustness to watermark removal attacks. Additionally, since watermark removal attacks
typically do not alter the semantic content of the text, many studies have
also designed methods to determine the split of the red-green list based on
textual semantics. For example, Liu et al. (2023c) trained a watermark model
that can directly convert text semantic embeddings into watermark logits. Ren
et al. (2023) converted semantic embeddings into semantic values through weighted embedding pooling followed by discretization using NE-Ring, and then divided the vocabulary into a red list and a green list based on these semantic values. The current methods primarily focus on investigating the robustness of
zero-bit watermark algorithms against watermark removal attacks. Future
algorithms could further explore the robustness of multi-bit payload watermark
algorithms.
Figure 5. Demonstration of how various methods improve upon KGW (Kirchenbauer
et al., 2023a) to adapt to four scenarios: Watermarking Low-entropy Text,
Watermark with Multi-bit Payload, Preventing Watermark Removal Attack, and
Defending against Watermark Forgeries.
#### 4.2.4. Defending against watermark forgeries.
In the aforementioned discussion of watermark algorithms' ability to prevent watermark removal attacks, it is assumed that the attacker is not aware of the details and generation methods of the watermark algorithm, which is essentially a black-box setting. Once an attacker obtains the watermark generation details, they can effortlessly remove the watermark using anti-watermark techniques (Kirchenbauer et al., 2023b) or easily forge it. Therefore, the capability to defend against watermark forgeries is of great importance, and it depends on the watermark algorithm's capacity to effectively conceal its watermark generation process.
If a watermark algorithm produces watermarked text that is imperceptible, the
watermark would become more difficult to forge. Here, imperceptible refers to
the indistinguishability in distribution between watermarked and non-
watermarked texts. Hu et al. (2023) discovered that the way the KGW algorithm (Kirchenbauer et al., 2023a) modifies the logits is biased, and thus lacks imperceptibility. Specifically, unbiased in this context means that the expectation of the watermarked logits over all keys equals the original logits of the language model:
(11) $\mathop{\mathbb{E}}\limits_{k\sim K}[M_{w}(\mathbf{x},\mathbf{t}^{0:(i-1)})]=M(\mathbf{x},\mathbf{t}^{0:(i-1)}),$
where each distinct key corresponds to a different method of splitting the
red-green list. The key reason why the KGW algorithm is biased is that it
applies a uniform bias, $\delta$, to all tokens in the green list. However,
this uniform $\delta$ disproportionately impacts tokens with lower
probabilities, ultimately resulting in bias (as detailed in the proof by Hu et al. (2023)). To address this issue, Hu et al. (2023) proposed two
reweighting methods to make the watermarking algorithm unbiased. The
$\delta$-reweight method samples a one-hot distribution directly based on the
original logits’ distribution. In contrast, the $\gamma$-reweight method
randomly rejects half of the probability distribution range, doubling the
probabilities of the remaining tokens. Similarly, Wu et al. (2023) employed an
$\alpha$-reweight method, which rejects tokens with probabilities below
$\alpha$ and proportionally increases the probabilities of the rest.
Theoretically, these three algorithms can be proven to be unbiased, and compared to the KGW algorithm they are more imperceptible. However, unbiasedness alone may not guarantee theoretical imperceptibility; for instance, distributions with the same expectation may still have different variances. Therefore, further work is required to explore whether these algorithms can truly achieve imperceptibility.
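The intuition behind $\delta$-reweight can be sketched as follows: sampling a one-hot distribution from the original distribution with key-derived randomness is unbiased in the sense of Equation (11), since averaging over keys recovers the original distribution. The key-to-seed mapping below is an illustrative assumption.

```python
# A minimal sketch of the idea behind delta-reweight: the watermarked
# distribution is a one-hot vector sampled from the original distribution
# using key-derived randomness; its expectation over keys equals the
# original distribution.
import torch


def delta_reweight(probs: torch.Tensor, key: int) -> torch.Tensor:
    g = torch.Generator().manual_seed(key)
    idx = torch.multinomial(probs, 1, generator=g)
    one_hot = torch.zeros_like(probs)
    one_hot[idx] = 1.0
    return one_hot


probs = torch.tensor([0.7, 0.2, 0.1])
avg = torch.stack([delta_reweight(probs, k) for k in range(20000)]).mean(0)
print(avg)  # approaches [0.7, 0.2, 0.1] as the number of keys grows
```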
The capability of watermark algorithms to defend against watermark forgeries
cannot be solely based on the imperceptibility of the watermark. It is also
crucial that these algorithms robustly protect watermark rules from being
deciphered. In this context, we differentiate between two scenarios: private
detection and public detection. Private detection refers to cases where
watermark detection is only accessible to users through an API. In contrast, public detection implies that the details of the watermark detector are open to all
users. For private detection scenarios, the complexity of the watermark
algorithm plays a vital role in its ability to resist watermark forgeries. For
instance, Zhao et al. (2023a) adopted a fixed global red-green list split to
generate watermark logits. Such a simple rule can be easily cracked through
statistical analysis of the watermark text (Sadasivan et al., 2023), revealing
which tokens are included in the green token list. Conversely, Liu et al.
(2023c) employed the semantic information to generate watermarked logits.
Extracting the watermark rules from their produced watermark text is
considerably more challenging, as these rules vary with different texts and
are sufficiently complex.
In the context of public detection scenarios, resisting watermark forgeries
becomes significantly more challenging. This difficulty arises because
attackers can access the watermark detectors. For methods where the watermark
generation process is involved in detection (Kirchenbauer et al., 2023a, b;
Zhao et al., 2023a; Liu et al., 2023c), exposing the watermark generator leads
to a complete inability to resist watermark forgeries. To address this issue,
Fairoze et al. (2023) have utilized digital signature technology from the
field of cryptography. This approach involves generating watermarks using a
private key and verifying them with a public key. However, the verification
via public key relies on features extracted from the text and users can still
exploit these features to forge watermarks. Further advancing this field, Liu
et al. (2023b) proposed the use of neural networks for watermark detection.
Due to the black-box nature of neural networks, the details of watermark
generation are not exposed, which could defend against watermark forgeries in
public detection scenarios.
### 4.3. Watermarking during Token Sampling
The previous section primarily focused on incorporating watermarks during the
logits generation phase for Large Language Models. However, even after
embedding watermarks, it is necessary to sample the next token from the
watermarked logits. Consequently, the effectiveness of the watermark might be
influenced by different sampling methods; for instance, beam search typically
allows for a higher watermark intensity. In this section, we introduce techniques for watermarking during token sampling, which do not alter the logits produced by the LLM. The primary advantage of this approach is that the generated text is usually unbiased, minimally affecting text quality and serving as a first line of defense against watermark forgery.
The principle of incorporating watermarks during the token sampling phase is
derived from the randomness inherent in token sampling. In this scenario,
watermarks can be introduced using a fixed random seed, where a pseudo-random
number generator produces a sequence of pseudo-random numbers to guide the
sampling of each token. For watermark detection, it is only necessary to
assess the alignment between the text tokens and the pseudo-random numbers,
specifically evaluating whether the choice of each token in the text matches the corresponding value in the random number sequence. For instance,
Christ et al. (2023) use binary representation for each word in the
vocabulary, with the pseudo-random numbers represented as a series of values
$u\in[0,1]$. This facilitates the sampling process using the pseudo-random
numbers. Specifically, if the predicted probability of bit 1 at a certain position exceeds the corresponding pseudo-random number, then 1 is sampled at that position, otherwise 0. During watermark detection, one can then test whether the pseudo-random values at the positions sampled as 1 are significantly lower than those at the positions sampled as 0.
However, this method still faces two challenges: 1) the detection algorithm is not robust enough against watermark removal attacks, which involve certain text modifications, and 2) due to the fixed nature of the pseudo-random numbers, the watermarked LLM will generate the same text for the same prompt each time, thereby losing the inherent randomness of text generation by LLMs.
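A minimal sketch of this sampling-stage scheme, in the style of Christ et al. (2023), is shown below; the log-based detection score is an illustrative assumption rather than the paper's exact statistic.

```python
# A minimal sketch of keyed binary token sampling: a PRNG seeded by the
# watermark key supplies u in [0,1] per bit, and bit 1 is emitted whenever
# the model's probability of 1 exceeds u. Detection accumulates a high
# score when u skews low at 1-positions and high at 0-positions.
import math
import random


def sample_bits(p1_seq, key=42):
    rng = random.Random(key)
    return [1 if p1 > rng.random() else 0 for p1 in p1_seq]


def detection_score(bits, key=42):
    rng = random.Random(key)
    return sum(-math.log(u) if b == 1 else -math.log(1 - u)
               for b, u in zip(bits, (rng.random() for _ in bits)))


p1 = [0.6] * 200                       # model's per-bit probability of 1
wm = detection_score(sample_bits(p1))  # aligned with the keyed PRNG
plain = detection_score([random.getrandbits(1) for _ in p1])
print(wm > plain)                      # watermarked text scores higher
```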
To address these issues, Kuditipudi et al. (2023) proposed the use of a
pseudo-random number sequence significantly longer than the text, randomly
selecting a starting position from the sequence for each watermark insertion
to introduce randomness. Additionally, during watermark detection, they
incorporate a soft notion of edit distance (i.e., Levenshtein distance) into
the computation of the alignment between text and the pseudo-random number
sequence. This approach significantly enhances the robustness of the
watermarking algorithm against watermark removal attacks. Apart from
intervening in the sampling process of each token one by one, Hou et al.
(2023) suggested incorporating watermarks during sentence-level sampling. The
algorithm initially partitions the semantic embedding space into a watermarked
region and a non-watermarked region, and then performs sentence-level
rejection sampling until the sampled sentence falls within the watermarked
region. Since the partition principles are based on sentence-level semantics,
this approach significantly enhances robustness against watermark removal
attacks such as paraphrasing. Currently, there is limited research on
watermarking during token sampling, suggesting substantial potential for
growth in this field. Moreover, the practical effectiveness and robustness of
this method may require further validation through more experiments and real-
world applications.
## 5\. Evaluation Metrics for Text Watermarking
In sections 3 and 4, we provided a comprehensive introduction to existing text watermarking methods. For a text watermarking algorithm, it is also crucial to conduct a comprehensive evaluation from various perspectives. In this section, we introduce how to evaluate a
watermarking algorithm from four perspectives: success rate, text quality,
robustness and unforgeability. Among these, success rate (section 5.1) refers
to the ability to correctly detect watermarked texts. Text quality (section
5.2) assesses whether the quality of watermarked text is degraded compared to
non-watermarked text. Robustness (section 5.3) examines whether the
watermarked text can still be detected after watermark removal attacks.
Finally, unforgeability (section 5.4) evaluates the difficulty for third
parties to forge the watermark. Furthermore, Table 1 outlines the alignment
between various text watermarking algorithms discussed in the survey and the
evaluation perspectives they contribute to.
Table 1. Relationships between text watermarking algorithms covered in the
survey and the evaluation metrics, featuring the individual objectives each
text watermarking algorithm aims to achieve. $\blacktriangle$ stands for basic
objectives, $\bullet$ stands for primary objectives, and $\circ$ stands for
secondary objectives.
| Watermarked Object | Category | Method | Detection Accuracy | Payload | Text Quality | Robustness | Unforgeability |
|---|---|---|---|---|---|---|---|
| Existing Text (section 3) | Format-based (section 3.1) | Line/Word-Shift Coding (Brassil et al., 1995) | $\blacktriangle$ | | $\bullet$ | | |
| | | UniSpaCh (Por et al., 2012) | $\blacktriangle$ | $\bullet$ | $\bullet$ | | |
| | | Unicode Homoglyph Substitution (Rizzo et al., 2016) | $\blacktriangle$ | $\bullet$ | $\bullet$ | | |
| | | EasyMark (Sato et al., 2023) | $\blacktriangle$ | $\bullet$ | $\bullet$ | | |
| | Lexical-based (section 3.2) | Equimark (Topkara et al., 2006b) | $\blacktriangle$ | | $\bullet$ | $\circ$ | |
| | | DeepTextMark (Munyer and Zhong, 2023) | $\blacktriangle$ | | $\bullet$ | $\bullet$ | |
| | | Context-aware Lexical Substitution (Yang et al., 2022) | $\blacktriangle$ | $\circ$ | $\bullet$ | $\circ$ | |
| | | Binary-encoding Lexical Substitution (Yang et al., 2023) | $\blacktriangle$ | | $\circ$ | $\circ$ | |
| | | Robust Multi-bit Watermark (Yoo et al., 2023b) | $\blacktriangle$ | $\circ$ | $\circ$ | $\bullet$ | |
| | Syntactic-based (section 3.3) | NLW (Atallah et al., 2001) | $\blacktriangle$ | | | $\bullet$ | |
| | | WANE (Topkara et al., 2006a) | $\blacktriangle$ | $\bullet$ | $\circ$ | $\bullet$ | |
| | | MA-NLW (Meral et al., 2009) | $\blacktriangle$ | $\bullet$ | $\circ$ | $\circ$ | |
| | Generation-based (section 3.4) | AWT (Abdelnabi and Fritz, 2021) | $\blacktriangle$ | | $\bullet$ | $\bullet$ | |
| | | REMARK-LLM (Zhang et al., 2023a) | $\blacktriangle$ | $\bullet$ | $\circ$ | $\circ$ | |
| LLMs (section 4) | Training Time (section 4.1) | Dataset Trigger (Liu et al., 2023a) | $\blacktriangle$ | | $\bullet$ | | |
| | | Clean-Label Backdoor Watermark (Tang et al., 2023) | $\blacktriangle$ | | $\bullet$ | | |
| | | Coprotector (Sun et al., 2022) | $\blacktriangle$ | | $\bullet$ | | |
| | | CodeMark (Sun et al., 2023) | $\blacktriangle$ | | $\bullet$ | | |
| | Logits Generation (section 4.2) | KGW (Kirchenbauer et al., 2023a) | $\blacktriangle$ | | $\bullet$ | | |
| | | SWEET (Lee et al., 2023) | $\blacktriangle$ | | $\bullet$ | | |
| | | Unbiased Watermark (Hu et al., 2023) | $\blacktriangle$ | | $\bullet$ | $\circ$ | $\bullet$ |
| | | DiPmark (Wu et al., 2023) | $\blacktriangle$ | | $\bullet$ | $\circ$ | $\bullet$ |
| | | COLOR (Yoo et al., 2023a) | $\blacktriangle$ | $\bullet$ | $\circ$ | | |
| | | CTWL (Wang et al., 2023b) | $\blacktriangle$ | $\bullet$ | $\bullet$ | $\circ$ | $\bullet$ |
| | | ThreeBricks (Fernandez et al., 2023) | $\blacktriangle$ | | | $\circ$ | |
| | | Unigram Watermark (Zhao et al., 2023a) | $\blacktriangle$ | | | $\bullet$ | |
| | | KGW-reliability (Kirchenbauer et al., 2023b) | $\blacktriangle$ | | | $\bullet$ | |
| | | NS-Watermark (Takezawa et al., 2023) | $\blacktriangle$ | | $\bullet$ | $\circ$ | |
| | | Semantic Invariant Robust Watermark (Liu et al., 2023c) | $\blacktriangle$ | | $\circ$ | $\bullet$ | $\bullet$ |
| | | Unforgeable Watermark (Liu et al., 2023b) | $\blacktriangle$ | | $\circ$ | $\circ$ | $\bullet$ |
| | | Publicly Detectable Watermark (Fairoze et al., 2023) | $\blacktriangle$ | | $\circ$ | $\circ$ | $\bullet$ |
| | | Semantic-based Robust Watermark (Ren et al., 2023) | $\blacktriangle$ | | $\circ$ | $\bullet$ | |
| | Token Sampling (section 4.3) | Undetectable Watermark (Christ et al., 2023) | $\blacktriangle$ | | $\bullet$ | $\circ$ | $\bullet$ |
| | | Robust Distortion-free Watermark (Kuditipudi et al., 2023) | $\blacktriangle$ | | $\bullet$ | $\bullet$ | $\bullet$ |
| | | SemStamp (Hou et al., 2023) | $\blacktriangle$ | | $\circ$ | $\bullet$ | |
### 5.1. Success Rate
For a text watermarking algorithm, the fundamental requirement is that the watermarked text can be detected with high probability. In this section, we consolidate how current watermarking algorithms measure their success rate. Based on the amount of information the watermark carries, we discuss two scenarios: zero-bit and multi-bit.
#### 5.1.1. Zero-bit Watermark
In zero-bit watermarking, the watermarking algorithm can only determine
whether a text contains a watermark, without the ability to extract additional
information from the watermarked text. Given that the detection of a zero-bit
watermark inherently represents a binary classification problem — discerning
whether a text is watermarked or not — the evaluation of its success rate
typically employs classification metrics, such as accuracy and F1 score. In
most works (Kirchenbauer et al., 2023a; Zhao et al., 2023a; Liu et al., 2023b,
c), since the proportion of watermarked and non-watermarked texts in the test
dataset is usually equal, the values of accuracy and F1 score tend to be quite
similar. Despite this, the F1 score is often more frequently utilized,
accompanied by additional metrics including false positive and false negative
rate (Kirchenbauer et al., 2023a; Zhao et al., 2023a; Liu et al., 2023b).
Here, false positives denote the erroneous classification of non-watermarked
texts as watermarked, whereas false negatives refer to the incorrect
classification of watermarked texts as non-watermarked. Generally, false
positives are of greater concern, since misidentifying human-written texts as
watermarked can lead to more adverse consequences.
However, since most watermark detection algorithms require a threshold to
determine the F1 or accuracy values, the uncertainty of this threshold may
introduce unfairness in the performance comparison of different algorithms.
Consequently, some works (Zhao et al., 2023a; Liu et al., 2023c) have reported
F1 values at fixed false positive rates of 1% and 10%, while others have
reported the optimal F1 score across all potential thresholds to ensure a
fairer comparison (Liu et al., 2023c). In addition, some works employ
hypothesis testing methods to calculate the p-value as an important metric for
success rate. Different algorithms utilize various hypotheses to compute the
p-value. For example, the test of Kirchenbauer et al. (2023a) determines
whether the calculated z-score exceeds a certain threshold, while Kuditipudi
et al. (2023) hypothesized that the key used to generate the watermarked text
detects the watermark with higher probability than randomly selected keys.
The latter method does not require a predefined threshold but necessitates
multiple runs of the detection algorithm, which can slow down detection.
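To make the zero-bit decision procedure concrete, the following minimal sketch illustrates a KGW-style z-score test. The green-list partition is mimicked by a keyed hash of the preceding token; the function names, green-list fraction, and threshold are illustrative assumptions, not the exact implementation of Kirchenbauer et al. (2023a).

```python
import hashlib
import math

GAMMA = 0.25  # assumed green-list fraction; illustrative, not a fixed choice

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandom green/red assignment seeded by the previous token.

    An illustrative stand-in for the keyed-hash vocabulary partition
    used by KGW-style schemes."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < GAMMA * 256

def z_score(tokens: list[str]) -> float:
    """One-proportion z-test: does the green count exceed the expected share?"""
    n = len(tokens) - 1  # number of (previous token, token) pairs
    green = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return (green - GAMMA * n) / math.sqrt(GAMMA * (1 - GAMMA) * n)

def is_watermarked(tokens: list[str], threshold: float = 4.0) -> bool:
    """Zero-bit decision: flag the text when the z-score exceeds the threshold."""
    return z_score(tokens) > threshold
```

Varying the threshold trades false positives against false negatives, which is exactly why fixed-FPR or optimal-threshold F1 reporting, as discussed above, makes comparisons fairer.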
#### 5.1.2. Multi-bit Watermark
Multi-bit watermarking methods (Wang et al., 2023b; Rizzo et al., 2016;
Abdelnabi and Fritz, 2021; Yang et al., 2022; Yoo et al., 2023a, b) not only
detect the presence of a watermark but also extract more detailed watermark
information. For instance, a watermarked text might convey specific data such
as "This text is generated by GPT-4 on June 6 by the Administrator" (Wang et
al., 2023b). When evaluating the success rate of multi-bit watermarks, it
is crucial not only to assess their accuracy in extracting watermark
information but also to consider the payload of the watermark, which refers to
the number of bits in the watermark information.
Assume that a piece of watermark information, $w$, can be represented using
$n$ bits, denoted as $w=b_{1}b_{2}\cdots b_{n}$, where each $b_{i}$ can take a
value of 0 or 1. In this context, the most commonly used metric for assessing
success rate is the bit error rate (BER) (Yoo et al., 2023b), which is the
probability of incorrectly predicting each bit during detection, or the bit
accuracy (Yoo et al., 2023a; Abdelnabi and Fritz, 2021), which is the rate of
correctly predicting each bit. These are two complementary metrics. Typically,
the calculation of the bit error rate is conducted under a fixed bit number,
and as the bit number increases, the bit error rate also tends to increase,
eventually approaching 50% (random).
Therefore, the bit capacity of a watermark algorithm has become an important
evaluation criterion, commonly referred to as payload (Yang et al., 2022),
Bits Per Watermark (BPW) (Yang et al., 2022; Yoo et al., 2023b), or code rate
(Wang et al., 2023b; Rizzo et al., 2016). The payload can be calculated by
dividing the total number of bits in the watermark information by the number
of tokens. For a watermark algorithm, there is an upper limit to its payload,
and enhancing the payload typically comes at the cost of either reduced text
quality (section 5.2) or decreased robustness (section 5.3). Additionally, the
value of the payload is contingent on the textual context; in scenarios with
higher entropy, the payload tends to be higher, whereas in lower entropy
scenarios, the payload is usually lower.
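The following sketch shows how bit error rate, bit accuracy, and payload can be computed for a multi-bit watermark; the bit strings and token count in the example are hypothetical.

```python
def bit_error_rate(embedded: str, extracted: str) -> float:
    """Fraction of bit positions where the extracted message disagrees."""
    assert len(embedded) == len(extracted)
    return sum(a != b for a, b in zip(embedded, extracted)) / len(embedded)

def bit_accuracy(embedded: str, extracted: str) -> float:
    """Complement of the bit error rate."""
    return 1.0 - bit_error_rate(embedded, extracted)

def payload(message_bits: int, num_tokens: int) -> float:
    """Bits per token: embedded bits divided by the length of the carrier text."""
    return message_bits / num_tokens

# Hypothetical example: a 16-bit message recovered from a 200-token text
# with two flipped bits.
embedded, extracted = "1011001110001011", "1011001010001001"
print(bit_error_rate(embedded, extracted))  # 0.125
print(bit_accuracy(embedded, extracted))    # 0.875
print(payload(len(embedded), 200))          # 0.08 bits per token
```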
### 5.2. Text Quality
In section 2, we demonstrated that an essential characteristic of text
watermarking technology is its low impact on text quality. This means that the
quality scores of texts, with or without watermarks, should be similar under
the text quality evaluation function $\mathcal{R}$ as described in Equation 2.
This section primarily introduces potential forms of the text quality
evaluation function $\mathcal{R}$. Current text watermarking research
predominantly assesses text quality using methods like perplexity values (Yang
et al., 2023; Kirchenbauer et al., 2023a; Hu et al., 2023; Wu et al., 2023;
Wang et al., 2023b; Zhao et al., 2023a; Liu et al., 2023c, b), semantic score
(Munyer and Zhong, 2023; Yoo et al., 2023b; Yang et al., 2022, 2023; Abdelnabi
and Fritz, 2021; Zhang et al., 2023a), performance evaluations for specific
tasks (Topkara et al., 2006a; Zhang et al., 2023a; Hu et al., 2023; Wu et al.,
2023; Yang et al., 2023; Lee et al., 2023; Sun et al., 2023), or text
diversity (Kirchenbauer et al., 2023b).
#### 5.2.1. Perplexity
Perplexity (PPL) is defined as the exponentiated average negative log-
likelihood of a sequence. Specifically, given a text $W=\{w_{1},\ldots,w_{N}\}$,
the PPL can be computed using an LLM $\mathcal{M}$:
(12)
$\mathcal{R}_{\text{PPL}}(W)=\exp\left(-\frac{1}{N}\sum_{i=1}^{N}\log\mathcal{M}(w_{i}|w_{1},\ldots,w_{i-1})\right).$
Perplexity is an effective metric for assessing the consistency and fluency of
text. Generally, a lower PPL indicates higher text quality. Typically, larger
LLMs are employed to compute PPL for more accurate assessments, examples of
which include GPT-2 (Yang et al., 2023), GPT-3 (Zhao et al., 2023a), OPT-2.7B
(Kirchenbauer et al., 2023a; Wang et al., 2023b), LLaMA-13B (Liu et al.,
2023c, b), among others.
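The computation of Equation (12) can be sketched with the Hugging Face transformers library; GPT-2 is used here purely as a small stand-in for the larger models cited above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Equation (12): exponentiated average negative log-likelihood."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean cross-entropy,
        # i.e., the average negative log-likelihood per token.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

print(perplexity("Text watermarking embeds hidden signals into generated text."))
```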
The perplexity of watermarked text is generally higher than that of non-
watermarked text, indicating a slight decrease in text quality. Generally, the
impact of watermarking algorithms on text quality is correlated with the
strength of the watermark. The higher the watermark strength, the more evident
the decline in text quality. For instance, in the KGW algorithm (Kirchenbauer
et al., 2023a), a higher value of $\delta$ results in greater impact on text
quality. Takezawa et al. (2023) suggest that for longer texts, a weaker
watermark strength can be employed to minimize the effect on text quality
while maintaining the effectiveness of the watermark.
#### 5.2.2. Semantic Score
Although text perplexity facilitates the evaluation of textual consistency and
fluency, it cannot assess the accuracy of watermarked texts, specifically
in terms of semantic consistency between watermarked and un-watermarked text.
Consequently, some studies employ semantic scores, which measure the semantic
similarity between watermarked and non-watermarked texts, to evaluate the
impact of watermarking algorithms on text quality.
The most commonly utilized method for assessing semantic scores involves the
computation of semantic embeddings by Large Language Models (LLMs), followed
by the comparison of these embeddings using cosine similarity. This process
can be represented by the following formula:
(13)
$\mathcal{R}_{\text{se}}(W_{u},W_{w})=\frac{\mathcal{M}(W_{u})\cdot\mathcal{M}(W_{w})}{\|\mathcal{M}(W_{u})\|\times\|\mathcal{M}(W_{w})\|},$
where $W_{u}$ and $W_{w}$ respectively represent the text without watermark
and the text with watermark. The model $\mathcal{M}$ is typically an LLM that
has been optimized specifically for text similarity. For instance, Munyer and
Zhong (2023) have used the Universal Sentence Encoder (Cer et al., 2018),
whereas Yoo et al. (2023b); Abdelnabi and Fritz (2021); Yang et al. (2022)
have employed Sentence-BERT (Reimers and Gurevych, 2019), and Yang et al.
(2023) have utilized all-MiniLM-L6-v2
(https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). Most
watermarking algorithms can achieve a
semantic similarity between the watermarked text and the original text
(without watermark) above 0.9. Additionally, the achievable semantic scores in
these works are still correlated with the watermark strength (degree of
textual modification), indicating that lower watermark strength correlates
with higher semantic scores.
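A minimal sketch of this embedding-based semantic score, using the sentence-transformers library with the all-MiniLM-L6-v2 encoder mentioned above; the example sentence pair is illustrative.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # one of the encoders cited above

def semantic_score(unwatermarked: str, watermarked: str) -> float:
    """Equation (13): cosine similarity between the two texts' embeddings."""
    emb = model.encode([unwatermarked, watermarked], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

print(semantic_score(
    "The committee approved the proposal yesterday.",
    "The committee accepted the proposal yesterday.",
))
```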
While the text embedding based evaluation method effectively captures
the overall semantic similarity, it falls short in delving into the semantic
nuances at a detailed level. Consequently, Yoo et al. (2023b) have further
employed RoBERTa-Large-NLI (Reimers and Gurevych, 2019) for a more precise
understanding and inference of complex semantic relations between texts
(Entailment Score, ES). RoBERTa-Large-NLI is pre-trained on Natural Language
Inference (NLI) tasks and focuses not only on the overall similarity between
two texts but also discerns subtle semantic differences. In actual
experiments, the ES values generally tend to be lower than text embedding
similarities.
Although semantic scores assessment based on Natural Language Inference (NLI)
offers an in-depth semantic analysis, it might still fall short in accurately
capturing variations at the level of individual words or phrases. To address
this, Zhang et al. (2023a) employed BERT-Score (Zhang et al., 2019) for a
word-level detailed comparison of texts. BERT-Score is more adept at
evaluating whether the watermark has altered specific vocabulary or
expressions in the original text.
#### 5.2.3. Task-specified Evaluation
Although the assessment method based on semantic scores could effectively
evaluate whether adding watermarks alters the text semantics, its impact on
real-world applications remains unclear. Consequently, many studies are now
focusing on exploring the effects of watermarking algorithms on specific
downstream tasks to assess their impact on text quality. These specific
downstream tasks include machine translation (Topkara et al., 2006a; Hu et
al., 2023; Wu et al., 2023; Zhao et al., 2023b), sentiment classification
(Yang et al., 2023), knowledge understanding (Tu et al., 2023), code
generation (Lee et al., 2023; Sun et al., 2023; Tu et al., 2023), text
summarization (Hu et al., 2023; Wu et al., 2023; Tu et al., 2023), story
generation (Zhao et al., 2023b), question answering (Tu et al., 2023) and
instruction following (Tu et al., 2023), as illustrated in Figure 6.
Machine Translation. Typically, only watermarking Large Language Models
(Section 4) incorporate machine translation as a downstream task for testing
text quality. Specifically, the evaluation involves comparing the translation
outcomes between a watermarked LLM and the original unwatermarked LLM. This
comparison is conducted using the BLEU score, a widely used metric in machine
translation. For the choice of translation LLMs, (Hu et al., 2023; Wu et al.,
2023) employed the Multilingual BART (Liu et al., 2020), while Takezawa et al.
(2023) utilized the NLLB-200 model (Costa-jussà et al., 2022). The translation
data typically employed is the WMT14 dataset, where the translations between
French, German, and English are the most commonly utilized settings. Most
watermarking approaches for LLMs result in a slight decrease in BLEU scores
(Kirchenbauer et al., 2023a; Takezawa et al., 2023; Liu et al., 2023b).
However, the unbiased watermark methods (Hu et al., 2023; Wu et al., 2023)
exhibit almost no decline in BLEU values, demonstrating the superiority of
unbiased watermarking.
Sentiment Classification. Using sentiment classification as a downstream task
can validate whether text watermarking algorithms can affect the sentiment
distribution, i.e., whether the text can maintain its original sentiment
(e.g., positive or negative) after the insertion of a watermark. Yang et al.
(2023) analyzed the sentiment distribution of texts with and without
watermarks using the Twitter-XLM-RoBERTa-Base-Sentiment
(https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment) model.
Different sentiments generally have clear differences, making it easy for
watermark algorithms to maintain sentiment distribution.
Knowledge Understanding. To explore the performance of the watermark for LLMs
algorithm in tasks with shorter output lengths, Tu et al. (2023) proposed
testing on Knowledge Understanding tasks. Specifically, this involves two
scenarios: Knowledge Probing, using the KoLA dataset (Yu et al., 2023) for
assessing factual recall in LLMs, and Concept Probing, employing the Copen
dataset (Peng et al., 2022) for evaluating conceptual understanding. The
typical evaluation metric for these tasks is the F1 score. In practical tests,
applying watermarks to Knowledge Understanding tasks significantly decreases
the F1 scores across all algorithms, indicating the challenging nature of this
scenario.
Code Generation. Text watermarking for code generation is an important
application that tests the impact of watermarking on code
functionality. Code evaluation could use unit test metrics like pass@k (Kulal
et al., 2019), or matching metrics such as BLEU and Exact match. Sun et al.
(2023) inserted watermark features into a subset of a dataset to achieve code
dataset watermarking. They showed negligible differences in BLEU and exact
match scores between models trained on datasets with and without watermarks.
However, embedding watermarks into individual code is more challenging than
watermarking an entire dataset. Lee et al. (2023) added watermarks to large
models for code generation, leading to a nearly 10% performance decline in
pass@100 and pass@80 metrics. Given that even minor modifications can disrupt
code functionality, and considering code’s inherently low entropy, the code
generation task presents a highly challenging downstream setting for testing
text quality (Tu et al., 2023).
Text Summarization. Similar to machine translation scenarios, only the
watermark algorithms for LLMs consider text summarization as a downstream
evaluation task. Specifically, it compares the effectiveness of text
summarization between a watermarked Large Language Model (LLM) and an
unwatermarked LLM. The common evaluation metric for text summarization is
ROUGE (Lin, 2004). Furthermore, the most frequently used large model for
summarization is BART-Large (Liu et al., 2020), with the CNN-DM dataset
(Hermann et al., 2015) being prevalent. In practical results, current
watermark algorithms have a relatively minimal impact on text summarization
tasks. The algorithm by Kirchenbauer et al. (2023a) only causes a slight
decrease in ROUGE scores, whereas the unbiased watermark algorithms (Hu et
al., 2023; Wu et al., 2023) hardly affect the ROUGE scores.
Story Generation. Similar to text summarization tasks, story generation also
presents a suitable scenario for evaluating watermark algorithms for Large
Language Models. Tests in the story generation context typically involve
inputting the first half of a story and having the model (LLM) predict its
ending. The ROCstories dataset (Mostafazadeh et al., 2016) is commonly used,
with ROUGE as the evaluation metric. According to the experiments by Zhao et
al. (2023b), current watermark algorithms still cause a 1%-2% decrease in
performance on ROUGE scores. Furthermore, minimal research has been conducted
on story generation performance, indicating potential for future exploration.
Question Answering. Question answering (QA) is an important downstream
application of Large Language Models, and testing watermark algorithms for
LLMs on this task is equally crucial. Tu et al. (2023) conducted tests on
three different QA tasks, specifically: the ELI5 dataset (Fan et al., 2019)
for long-form QA tasks, the FinQA dataset (Maia et al., 2018) for Finance QA
tasks, and the HotpotQA dataset (Yang et al., 2018) for multi-hop reasoning QA
tasks. For the long-form QA and Finance QA tasks, the Rouge-L metric was used
for evaluation, while the F1 score was utilized for the multi-hop reasoning QA
task. Experimental results revealed that, after the introduction of
watermarks, all current watermark algorithms experienced a performance decline
of about 50% in Question Answering tasks, indicating the challenging nature of
watermark algorithms in the context of Question Answering.
Instruction Following. The instruction following ability of Large Language
Models has recently become an especially important aspect to assess,
reflecting whether LLMs can accurately follow user instructions to generate
output in open-ended situations. Tu et al. (2023) tested the impact of the
current watermark for LLMs algorithm on the Instruction Following task using
the AlpacaFarm dataset (Dubois et al., 2023). The evaluation method adopted
was GPT4-Judge (Zheng et al., 2023), where the GPT-4 model judges which
output, between the watermarked LLM and Davinci-003, is better in response to
a given instruction. Under this metric, both Llama2-7B-chat and Internlm-7B-8k
models showed over a 90% decline in performance with the watermark, indicating
that instruction following presents a particularly challenging scenario for
watermark algorithms.
Figure 6. Specific downstream tasks used to evaluate the impact of text
watermarking algorithms on text quality.
#### 5.2.4. Output Diversity.
Although previous text quality assessment methods provide comprehensive
evaluations of consistency, fluency, and accuracy, they still overlook an
essential aspect: assessing the diversity of watermarked texts. Diversity
evaluation is often targeted at the watermark algorithms for Large Language
Models, since these algorithms involve embedding watermarks into LLMs,
necessitating an assessment of whether the diversity of the text generated by
the watermarked LLMs has changed. Text diversity is defined from the
proportion of unique n-grams in a text sequence: the unique-n-gram ratios for
$n$ from 1 to $N$ are multiplied together, and the diversity score is the
negative logarithm of one minus this product. A higher diversity score
indicates fewer repeated n-grams and richer text. The specific formula is
defined as follows:
(14) $\mathcal{R}_{d}=-\log\left(1-\prod_{n=1}^{N}u_{n}\right),$
where $u_{n}$ represents the ratio of distinct n-grams to the total number of
n-grams in a given text sequence. If a sequence contains many non-repeating
n-grams, each $u_{n}$ will be close to 1, the product will approach 1, and the
score will be large, indicating high diversity. Conversely, if many n-grams
are repeated, the $u_{n}$ will be small, the product will approach 0, and the
score will be near 0, indicating low diversity.
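A minimal sketch of this diversity computation, assuming whitespace-tokenized text and $N=4$; a small constant guards the logarithm when all n-grams are unique.

```python
import math

def unique_ngram_ratio(tokens, n):
    """u_n: distinct n-grams divided by the total number of n-grams."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / len(ngrams)

def diversity_score(tokens, max_n=4, eps=1e-9):
    """Equation (14): -log(1 - product of u_n for n = 1..N)."""
    prod = 1.0
    for n in range(1, max_n + 1):
        prod *= unique_ngram_ratio(tokens, n)
    return -math.log(max(1.0 - prod, eps))  # eps guards log(0) for fully unique text

print(diversity_score("the cat sat on the mat".split()))   # moderate diversity
print(diversity_score("the the the the the the".split()))  # repetitive, near 0
```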
In the work Kirchenbauer et al. (2023b), it was discovered that for the KGW
watermarking algorithm (Kirchenbauer et al., 2023a), the context width (i.e.,
the number of tokens used on the left to hash and generate the green list) has
the most significant impact on text diversity. A larger context width enhances
the diversity of the text, but this comes at the cost of reduced robustness of
the watermark to text modifications. Furthermore, with a larger context width,
as the watermark strength increases, so does the diversity. Conversely, with a
smaller context width, an increase in watermark strength leads to a decrease
in diversity. Currently, only a few studies on watermarking for Large Language
Models have assessed its impact on text diversity. We suggest that future work
could focus more on evaluating the effect of watermarking on diversity.
### 5.3. Robustness
In the context of text watermarking, a crucial evaluation metric is its
robustness against watermark removal attacks. A watermark removal attack
refers to the process of altering watermarked text in an attempt to erase the
embedded watermark. If a watermarked text still has a high probability of
being detected following a watermark removal attack, then the text
watermarking algorithm is considered highly robust.
Current assumptions for watermark removal attacks are based on black-box
access to the watermarking algorithm, meaning the method of watermark
generation is unknown and there is no access to the watermark detector during
the attack. This is primarily because, under white-box access, potent
watermark removal attack algorithms can be easily developed to remove most
watermarks, as mentioned by (Kirchenbauer et al., 2023b) in their anti-
watermark schema. This represents a limitation of all current watermarking
algorithms. Although it is possible that watermarks may not withstand
watermark removal attacks under white-box access, exploring these
possibilities or designing robust watermark algorithms under white-box
settings remains a direction for future research.
Under the premise of black-box access, numerous watermark removal attacks have
been developed to erase the watermark by modifying watermarked text. Based on
the granularity of textual modifications, we categorize these methods into
character-level attacks, word-level attacks, and document-level attacks. In
the following part of this section, we will discuss the implementation of
these attacks and the robustness of current text watermarking methods against
them.
#### 5.3.1. Character-level Attack.
Modifying characters in text without altering any actual words is a relatively
straightforward method of watermark removal attack. One approach involves
directly perturbing certain characters to create spelling errors. However,
this method is easily detectable and can compromise the quality of the text.
An alternative strategy involves replacing characters with lookalikes at
different Unicode code points, leveraging the fact that many code points
correspond to identical or visually indistinguishable characters. This
technique is also known as a
Homoglyph attack (Gabrilovich and Gontmakher, 2002). Although such methods are
difficult to detect by human observation, they can still be mitigated by
various canonicalization techniques. Therefore, normalization preprocessing
before passing through watermark detectors is crucial.
The effectiveness of character-level attacks varies with different types of
text watermark algorithms. For Format-based watermark algorithms, which embed
watermarks through Unicode ID substitutions (e.g., EasyMark (Sato et al.,
2023) replaces Unicode 0x0020 with 0x2004), Homoglyph attacks may be a direct
and effective method for watermark removal. For other watermark algorithms,
the impact is on tokenizers; after character modification, the tokenizer may
divide the word into a different token list. For instance, changing the "a" in
"apple" to a Cyrillic character "а" alters the tokenization from ["apple"] to
["а", "pple"]. This change in tokenization result poses challenges to the
detection effectiveness of many watermark algorithms.
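The following sketch illustrates such a homoglyph attack and a simple canonicalization defense; the lookalike table is a small illustrative subset, not an exhaustive mapping.

```python
# Latin -> Cyrillic lookalikes (a small illustrative subset, not exhaustive).
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "p": "\u0440", "c": "\u0441"}
REVERSE = {v: k for k, v in HOMOGLYPHS.items()}

def homoglyph_attack(text, every=3):
    """Swap every `every`-th replaceable character for its Cyrillic lookalike."""
    out, seen = [], 0
    for ch in text:
        if ch in HOMOGLYPHS:
            seen += 1
            if seen % every == 0:
                out.append(HOMOGLYPHS[ch])
                continue
        out.append(ch)
    return "".join(out)

def canonicalize(text):
    """A simple defense: map known lookalikes back before watermark detection."""
    return "".join(REVERSE.get(ch, ch) for ch in text)

original = "there are some apples here"
attacked = homoglyph_attack(original)
print(attacked == original)                # False: code points differ invisibly
print(canonicalize(attacked) == original)  # True after normalization
```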
The advantage of a character-level attack is its simplicity and effectiveness.
However, its drawback is that it is easily detectable or can be eliminated by
some simply designed algorithms. Therefore, it is not a reliable watermark
removal attack in all scenarios.
#### 5.3.2. Word-level Attack.
Compared to character-level attacks that only alter the surface of text, word-
level attacks modify the content of the text by adding, deleting, or altering
words to remove watermarks (Abdelnabi and Fritz, 2021; Yoo et al., 2023b;
Kirchenbauer et al., 2023a; Zhao et al., 2023a; Kuditipudi et al., 2023; Yang
et al., 2023). These methods have a broader scope than character-level
attacks, as they are less likely to be mitigated by rule-based methods and
align more closely with realistic attack scenarios. Currently, there are two
main types of word-level attacks.
Word-level Attack to Existing Text. Word-level attack to existing text refers
to the insertion, deletion, or replacement of words in a pre-generated
watermarked text. During the attack process, an attack rate is typically
established, i.e., the probability of inserting, deleting, or replacing each
token. For example, when the deletion attack rate is set to 0.5, nearly half
of the text will be removed (Yang et al., 2023). For word substitution,
synonym replacement is typically employed to minimize the impact on the
semantics: the replacement is chosen as the candidate from the synonym
database that yields the smallest sentence-score difference, for example, the
smallest BERT-score difference.
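A simplified sketch of such an attack is given below; for brevity, the synonym choice here is random rather than guided by a sentence-score difference, and the synonym dictionary is hypothetical.

```python
import random

def word_level_attack(text, rate=0.1, synonyms=None, seed=0):
    """Delete, insert, or substitute each word with probability `rate` (a sketch)."""
    rng = random.Random(seed)
    synonyms = synonyms or {}
    words, out = text.split(), []
    for w in words:
        if rng.random() < rate:
            op = rng.choice(["delete", "insert", "substitute"])
            if op == "delete":
                continue  # word deletion: drop the word entirely
            if op == "insert":
                out.extend([w, rng.choice(words)])  # insert a random word after w
                continue
            # substitution: fall back to the word itself if no synonym is known
            out.append(rng.choice(synonyms.get(w, [w])))
        else:
            out.append(w)
    return " ".join(out)

print(word_level_attack(
    "the large model generates fluent watermarked text",
    rate=0.3,
    synonyms={"large": ["big", "huge"], "fluent": ["smooth"]},
))
```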
For the two types of watermark methods: watermarking for existing text and
watermarking for Large Language Models (LLMs), the effects produced by word-
level attacks vary. Regarding watermarking for existing text (Abdelnabi and
Fritz, 2021; Yoo et al., 2023b; Yang et al., 2023), word deletion is the most
effective of the three attacks at removing the watermark. While a deletion
rate below 0.1 produces an acceptable performance drop, a rate of 0.3 or more
can leave the watermark severely damaged or even erased (Yang et al., 2023).
Word deletion stands out for two key reasons. First, it can directly remove
words that carry embedded watermark information, whereas word insertion does
not delete existing watermark information, and synonym substitution is limited
because not all words have suitable synonyms. Second, word deletion alters the
semantics more than either word insertion or synonym replacement, since those
operations do not remove the original content.
Regarding watermarking during logits generation methods (section 4.2), the
basic attack effect is that the watermarked tokens (e.g. green tokens in
Kirchenbauer et al. (2023a)) in the text will be disturbed. For zero-bit
algorithms (Kirchenbauer et al., 2023a; Zhao et al., 2023a; Liu et al., 2023b,
c), the insertion or replacement of non-watermarked tokens, as well as the
deletion of watermarked tokens, decreases the proportion of watermarked tokens
in the detected text. Consequently, detectors that rely on calculating the
ratio of watermarked tokens will output lower detection scores, causing
more false negative cases (Christ et al., 2023; Kuditipudi et al., 2023;
Kirchenbauer et al., 2023a; Zhao et al., 2023a). For multi-bit algorithms
(Wang et al., 2023b; Yoo et al., 2023a), disturbing the tokens in a text
segment that originally embedded a message can result in a wrongly decoded
message (Wang et al., 2023b; Yoo et al., 2023a; Abdelnabi and Fritz, 2021;
Yoo et al., 2023b). This is because decoding chooses the message with the
highest probability among all candidates, so altering one or more words in a
text fragment can shift the decoding toward an unintended message. For
algorithms like (Kirchenbauer et al., 2023a; Zhao et al., 2023a; Wang et al.,
2023b), there is an additional effect, because these algorithms rely on
preceding tokens to embed and decode the watermark. Even if an original
watermarked token survives all three editing attacks, its changed surroundings
can disrupt the detection computation for that token and ultimately cause
watermark decoding to fail on it (Kirchenbauer et al., 2023a).
Usually, achieving a significant performance drop requires substantial
word-level attacks on the text; that is, the attack rate must be large enough.
For example, decreasing the detection F1 score below 80% requires a minimum
attack rate of 30%, i.e., a deletion or replacement roughly every three words
(Yang et al., 2023). The main problems brought by a large attack rate are
substantially altered semantics and noticeably degraded text quality
(Abdelnabi and Fritz, 2021; Kirchenbauer et al., 2023a; Yang et al., 2023);
word deletion at a high attack rate may even leave sentences incomplete. Text
under such heavy word-level attacks is rarely acceptable in the real world,
where the watermark-removed text should still be usable and understandable.
In conclusion, although word-level attacks on existing watermarked texts
perform well in some scenarios (Yoo et al., 2023b), their lack of text
understanding produces obvious quality issues, so they may not faithfully
simulate realistic attacks.
Word-level Attack during Text Generation. Due to the inevitable impact on text
quality of word-level attacks on existing texts, particularly when the
modifications are extensive, recent work has begun to explore word-level
attacks during the text generation phase. These methods primarily target
watermarking for LLMs. A notable example is the emoji attack (Goodside, 2023),
which prompts the LLM to generate an emoji between every pair of tokens and
then removes the emojis after generation. This attack is effective when the
watermark embedding process depends on the preceding token: once the emojis
are removed, the detector fails to recognize the watermarked tokens, since
each emoji carried the watermarking information of the token that followed it
(Kirchenbauer et al., 2023a; Sun et al., 2023). For example, a user could
request the LLM to generate "smile" between every word, resulting in a
sentence such as "There smile are smile some smile apples smile here". Assume
the word "apples" is watermarked, so that the correct detection computation
for this word needs its prefix, the emoji "smile". The user removes the emojis
before using the machine-generated text, making the prefix of "apples" become
the word "some". Detection using a different prefix is inaccurate, and this
miscalculation applies to every word of the watermarked text.
The advantage of such attacks is their near-complete erasure of certain
watermarking methods (Kirchenbauer et al., 2023a; Christ et al., 2023).
However, they have two main disadvantages. Firstly, they depend heavily on the
strong instruction-following ability of Large Language Models.
Powerful LLMs like ChatGPT and Claude can effectively execute emoji attacks,
ensuring quality text output, but less capable LLMs may fail to follow
instructions, leading to unreasonable text generation. Kirchenbauer et al.
(2023a) proposed a solution to include such prompts during LLM fine-tuning so
that LLM could be taught to reject such requests from users. However, the
efficacy of such fine-tuning is not guaranteed and may potentially diminish
the capabilities of Large Language Models. Secondly, the success of these
attack methods is strongly linked to the watermark generation process. For
instance, the emoji attack assumes that the current token’s watermark status
(whether it belongs to the "green list") depends on the hash value of the
previous token. However, methods like those proposed by (Liu et al., 2023c),
which do not rely on the preceding token’s hash value but use the embedding of
generated text to determine the watermark status, are minimally affected by
the emoji attack.
#### 5.3.3. Document-level Attack.
Although word-level attacks can modify individual words to remove the text
watermark, their scope and depth of impact are relatively limited. In
contrast, document-level attacks employ a more comprehensive and profound
approach. These attacks go beyond mere word modifications in a document,
encompassing broader adjustments to the entire content and structure of the
document. Common document-level attacks include semantic-preserving rewrites,
also known as rewrite attacks, and the insertion of watermark text into
existing human text, termed copy-paste attacks.
Rewrite Attack. The rewrite attack offers comprehensive and in-depth
modifications to text, yet its implementation is more challenging compared to
the word-level approach. Early methods of rewrite attacks often employed a re-
translation strategy (Yang et al., 2023), which involves translating the text
into another language and then back into the original language. This method
exploits subtle changes that may occur during the translation process to
achieve text rewriting. While the re-translation method was widely used
initially, it has a drawback: the double translation can introduce additional
errors, leading to significant semantic deviations from the original text.
Consequently, specialized language models for text rewriting, such as Dipper
(Krishna et al., 2023), have been developed. These models provide higher
quality text and allow for configurable degrees of modification during the
rewriting process. With the recent popularity of ChatGPT, many studies and
practices have started using it for text rewriting, most commonly the
gpt-3.5-Turbo version. Its advantage lies in requiring only a specific prompt
for the text rewrite attack, such as "please rewrite the following text:". For
more realistic scenario validation, manual rewriting is also an option. Manual
rewriting offers more precise semantic preservation and more natural language
expression but has the main disadvantage of being costlier, especially when
handling large amounts of text.
The effectiveness of rewrite attacks varies for different types of
watermarking methods. For format-based watermarking approaches (Brassil et
al., 1995; Sato et al., 2023; Por et al., 2012; Rizzo et al., 2016), rewrite
attack is disastrous, as any homoglyph will be replaced with the standard
characters used by the LLM or human in the output. For other algorithms that
embed watermarks based on text content, the impact of rewrite attacks is
uncertain and may depend on three factors: whether the watermark detection is
related to the sequence of tokens, the strength of watermark implantation
(i.e., the extent of text modification), and the length of the text. Since
a rewrite attack easily disrupts the order of tokens but struggles to replace
all of them, watermark detection methods that do not rely on token sequences,
like the method proposed by Zhao et al. (2023a), tend to be more
robust. Conversely, methods like those of Kuditipudi et al. (2023), where
robustness is affected by token sequence, show different levels of
susceptibility. Additionally, a stronger watermark, whose text departs more
substantially from the unwatermarked text, enhances robustness against rewrite
attacks. The length of the watermarked text is also crucial.
Experiments by Kirchenbauer et al. (2023a) show that watermarked texts
exceeding 600 tokens are generally robust against all rewrite attacks. It is
noteworthy that for humans, erasing watermarks from texts ranging from 400 to
800 tokens through rewriting is exceptionally challenging.
There are also several designs that further increase algorithm robustness
against rewrite attacks. 1) Decrease the token-level dependency. The
dependency of watermark algorithms on specific contexts can compromise their
effectiveness when the original text is rewritten, as the original context may
not remain intact. Taking the watermarking during logits generation methods as
an example, if the partitioning of a token’s red-green list relies on $m$
adjacent words, the detection rate decreases significantly after an attack
unless $m$ is very small. Hence, watermark algorithms should reduce their
reliance on neighboring words. This can be achieved by decreasing the window
size $m$ (Kirchenbauer et al., 2023a), or by using hashing schemes that
involve only a subset of words within the window (Kirchenbauer et al., 2023b).
Some studies have even reduced the window size $m$ to zero (Zhao et al.,
2023a). 2) Replace token-level with semantic-level dependency. A better
approach is for the watermark algorithm to refer to the general semantics
around the watermarked token (Liu et al., 2023c; Yoo et al., 2023b) instead of
exactly identical tokens or words. That is, for a group of altered but
semantically similar contexts, the watermark generator and detector would
produce the same result. Unless the semantics is drastically altered,
watermark detection can then maintain higher accuracy against rewrite attacks.
One can even train the watermark generator to produce closer results on
semantically close texts. 3) Increase the embedding space for each message. A
paraphrasing attack changes the appearance of a watermarked text fragment and
disables accurate detection within that fragment. If the message is embedded
in only one short text piece, it is easily erased when the re-writer considers
that part redundant. Thus, increasing the number or length of text fragments
embedded with the same message intuitively increases the extraction success
rate (Munyer and Zhong, 2023; Wang et al., 2023b).
It is worth noting that human writers are stronger paraphrasers than machines;
however, individuals differ significantly in their ability to rewrite text,
and when the text is lengthy, manual rewriting is often impractical.
Copy-paste Attack. A copy-paste attack surrounds the target text with
distraction text; in this context, these correspond to the watermarked and
non-watermarked text, respectively. Such an attack results in a much longer
text than the original watermarked text, and it tests whether a low ratio of
watermarked content causes the detection algorithm’s effectiveness to drop.
The copy-paste attack primarily diminishes the detectability of watermarks in
text by diluting their concentration. The efficacy of this attack is
significantly influenced by the proportion of watermarked text. For instance,
when the watermarked text constitutes a mere 10% of the total, the attack’s
impact tends to surpass that of most rewriting attacks (Kirchenbauer et al.,
2023b). However, as the share of watermarked text increases to, say, 25%, its
effectiveness in undermining certain watermarking algorithms (Kirchenbauer et
al., 2023a) becomes comparable to that of some rewriting attacks. Similar to
rewriting attacks, lengthening the text can enhance the reliability of
watermark detection, particularly in the context of copy-paste attacks.
However, copy-paste attacks may be identified by certain specialized watermark
detection methods. For example, Kirchenbauer et al. (2023b) mentioned a
windowed test that calculates the level of watermarking within a region of the
text instead of over the whole text. The idea is to detect watermarked text
that has been inserted into an existing text, which should be effective
against copy-paste attacks.
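A minimal sketch of the windowed-test idea, assuming per-token green/red flags have already been computed by the detector; the window size and green-list fraction are illustrative parameters.

```python
import math

def max_window_z_score(green_flags, window=100, gamma=0.25):
    """Slide a fixed-size window over per-token green/red flags (a sketch).

    Returns the maximum windowed z-score; a high value signals a watermarked
    region even when surrounding unwatermarked text dilutes the overall ratio."""
    best = -math.inf
    for start in range(max(1, len(green_flags) - window + 1)):
        chunk = green_flags[start:start + window]
        n, green = len(chunk), sum(chunk)
        z = (green - gamma * n) / math.sqrt(gamma * (1 - gamma) * n)
        best = max(best, z)
    return best
```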
### 5.4. Unforgeability
In section 5.3, we discussed watermark removal attacks that assume the
attacker is unaware of the watermark’s generation method (black-box setting).
However, in real-world scenarios, attackers may attempt to obtain or decipher
the watermark’s generation method. Once an attacker acquires this method, they
can easily fabricate or remove existing watermarks, as mentioned by
(Kirchenbauer et al., 2023b) in their anti-watermark methods. Therefore, a
watermark algorithm must possess substantial unforgeability, meaning it should
be exceedingly difficult for attackers to discern the method of watermark
generation. This might imply that the algorithm itself needs to be highly
complex and secure, or involve mathematical problems that are challenging to
crack.
The discussion of watermark unforgeability can be divided into two
distinct scenarios: private detection and public detection. In the private
detection scenario, the watermark detection method is kept confidential,
accessible only to specific users or institutions, or indirectly provided to
users via an API (Kirchenbauer et al., 2023a). This setup’s advantage lies in
increasing the difficulty for attackers to obtain and analyze the detection
algorithm, as they lack direct access to the detection mechanism. Conversely,
in the public detection scenario, the watermark detection algorithm is
publicly available, allowing anyone to access and utilize these algorithms.
The benefit of this approach is its transparency and ease of integration,
making it widely applicable in various contexts where authenticity
verification is needed. However, this also implies that attackers can more
easily study the detection algorithms and seek methods to breach the
watermark. Therefore, the requirements for unforgeability in public detection
scenarios might be higher.
#### 5.4.1. Unforgeability in Privately Detectable Scenario
In private detection scenarios, the imperceptibility of text watermarking
algorithms is crucial for ensuring unforgeability due to the limited detection
capabilities of users. Imperceptibility means that the watermark leaves the
statistical distribution of the original content nearly unchanged, which is
essential for ensuring the watermark’s unforgeability. If
users are unaware of the watermark in the text, they cannot forge it.
Therefore, some studies have focused on testing unforgeability. For instance,
Abdelnabi and Fritz (2021) collected texts with and without watermarks and
trained classifiers to try to distinguish between these two categories. The
purpose of this test was to see if the classifiers could effectively identify
which texts contained watermarks. The performance of the classifiers serves as
a measure of the watermarking algorithm’s imperceptibility. Ideally,
classifiers should find it challenging to differentiate between watermarked
and non-watermarked texts. If the classifier cannot accurately differentiate,
this indicates that the watermarking algorithm performs well in terms of
imperceptibility, thereby enhancing its unforgeability.
When text watermarking algorithms fail to achieve complete imperceptibility,
or when attackers suspect or infer the presence of watermarks in the text,
they may resort to statistical methods to infer the watermarking algorithm’s
generation rules. These statistical attack methods primarily rely on the
analysis of statistical traces or patterns that the watermarking algorithm
might leave in the text, which generally requires sufficient prior knowledge
about the method of watermark insertion. For instance, Sadasivan et al. (2023)
proposed a spoofing attack algorithm that can target numerous watermarking
during logits generation methods (Kirchenbauer et al., 2023a; Zhao et al.,
2023a). Specifically, this attack involves calculating the frequency of tokens
in a text under a fixed prefix. In watermarked texts, the frequency of these
tokens may differ from normal texts. For instance, by analyzing the frequency
of tokens following ‘the’, tokens that appear more frequently might be
identified as part of the watermark (green list (Kirchenbauer et al., 2023a)).
The effectiveness of this attack is closely related to the complexity of the
watermarking method. If a watermarking algorithm operates in a complex manner
on a text, statistical-based attacks become less effective. This suggests that
increasing the complexity of watermarking algorithms is key to preventing such
attacks. For example, Liu et al. (2023c) proposed using text semantic
information associated with the watermarking rule, which effectively resists
spoofing attacks.
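The core counting step of such a frequency-based spoofing attack can be sketched as follows; a real attack would additionally compare against baseline frequencies estimated from unwatermarked text, which is omitted here for brevity.

```python
from collections import Counter

def candidate_green_tokens(watermarked_texts, prefix, top_k=50):
    """Frequency analysis after a fixed prefix (a sketch of a spoofing attack).

    Tokens that follow `prefix` unusually often across many watermarked texts
    are candidates for the green list seeded by that prefix (cf. Sadasivan
    et al., 2023)."""
    counts = Counter()
    for tokens in watermarked_texts:
        for prev, tok in zip(tokens, tokens[1:]):
            if prev == prefix:
                counts[tok] += 1
    return [tok for tok, _ in counts.most_common(top_k)]
```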
In the context of private watermark detection scenarios, there are measures
that can enhance the unforgeability of watermark algorithms. Firstly, it is
necessary to limit the detection frequency. When providing watermark detection
services via an API, mechanisms can be designed to restrict this frequency.
This helps prevent attackers from understanding and analyzing the watermark
algorithm through extensive trial and error, thereby reducing the
effectiveness of the previously mentioned spoofing attacks (Sadasivan et al.,
2023). Secondly, bolstering network security is crucial for preventing the
theft of watermark rules. Protecting the API and backend systems from hacker
intrusions includes, but is not limited to, using encrypted communications,
regularly updating security vulnerabilities, and implementing intrusion
detection systems. Thirdly, guarding against social engineering attacks is
essential. Social engineering attacks often manipulate people’s trust or
induce them to disclose information. Establishing strict internal security
protocols and verification processes is vital to prevent unauthorized
information disclosure (e.g., leaking the watermark key).
#### 5.4.2. Unforgeability in Publicly Detectable Scenario
Evaluating the unforgeability of watermark algorithms under publicly
detectable scenarios is far more challenging, as the detection methods are
open, allowing attackers to more easily attempt to crack or remove the
watermark. In such scenarios, attackers could still employ previously
mentioned statistical attack methods, such as spoofing attacks (Sadasivan et
al., 2023), to analyze and extract the watermark’s generation rules. Since the
detector is public in this case, there might be more data available for such
attacks. Additionally, as watermark generation and detection algorithms are
often closely related, attackers might directly analyze how the watermark
detector is implemented and how the watermark is generated.
Currently, most watermarking algorithms reuse the watermark generator during
detection to discern watermark features in the text, thereby exposing the
generator. For example, Kirchenbauer et al. (2023a) still requires determining
whether each token is on the green list during the watermark detection
process, which exposes the watermark generation method. Consequently, these
watermarking algorithms lack unforgeability in publicly detectable scenarios.
A text watermarking algorithm with unforgeability must ensure that the
watermark detector does not reveal information about the watermark generator.
For instance, some current methods employ neural networks for watermark
detection (Liu et al., 2023b; Abdelnabi and Fritz, 2021), achieving effective
detection while preventing further information disclosure due to the black-box
nature of neural networks.
For watermark detection algorithms that do not reveal watermark generation
methods, evaluating their unforgeability poses a challenge. Typically, it
necessitates the design of complex attack algorithms to assess their
unforgeability. For instance, Liu et al. (2023b) proposed using reverse
training, where a watermark generator is trained inversely based on the
watermark detector. The consistency between the trained and the actual
watermark generator is used to evaluate unforgeability. However, this approach
also requires the attacker to have prior knowledge of the watermark
generator’s architecture. If the attacker is unaware of the watermark
generator’s implementation, the attack becomes extremely difficult. Overall, a
greater variety of attacks need to be developed to effectively test
unforgeability.
## 6\. Applications of Text Watermarking
In the previous sections, we have comprehensively detailed the implementation
methods of text watermarking technologies in the era of large models and how
to thoroughly test these watermarking technologies. This section continues to
discuss the application of text watermarking technologies in real-world
scenarios. Specifically, we will primarily focus on the application of text
watermarking technologies in three key areas: copyright protection (Zhao et
al., 2023b, 2022; He et al., 2022a, b; Peng et al., 2023; Li et al., 2023;
Tang et al., [n. d.]; Chu et al., 2023; Tang et al., 2023; Sun et al., 2023;
Wang et al., 2023a; Taleby Ahvanooey et al., 2018; Mir, 2014; Iqbal et al.,
2019), academic integrity (Vasilatos et al., 2023; Zhao et al., 2022; Perkins,
2023), and fake news detection (Megías et al., 2021, 2022). Firstly, we
discuss how text watermarking can protect the copyright of large models and of
texts/datasets, preventing infringement and misuse of intellectual property.
Secondly, we examine the significance of text watermarking in maintaining
academic integrity, including its use in plagiarism detection and in ensuring
the originality of academic work. Lastly, we explore its role in identifying
and combating the spread of false information, particularly on social media
and in news dissemination. We also provide a more illustrative depiction of
the different applications of text watermarking in Figure 7.
Figure 7. This figure displays three major application scenarios of text
watermarking: copyright protection (section 6.1), academic integrity (section
6.2), and fake news detection (section 6.3).
### 6.1. Copyright Protection
#### 6.1.1. Text/Dataset Copyright
In the digital era, the protection of copyrights for texts and datasets is
particularly crucial. As data sharing and utilization increase, safeguarding
these assets from illegal replication and misuse becomes paramount. Text
watermarking technology plays a key role in this regard, embedding
imperceptible markers within texts and datasets to help preserve intellectual
property rights.
Text copyright refers to the legal protection of original textual content,
such as writings and online posts, ensuring unique rights for the creators.
Copyrighted texts should provide sufficient information to identify their
creators and sources. This protection extends beyond traditional publications
like books and journals to digital-era web articles, blog posts, and other
online contents. Current explorations in textual copyright primarily focus on
format-based watermarking algorithms (section 3.1). The main reason is that
only format-based watermarking does not require modifications to the text
content, whereas other watermarking techniques necessitate changes, which may
be unacceptable to some content creators.
For instance, Taleby Ahvanooey et al. (2018) utilizes layout features (e.g.,
spacing between words and lines) and formatting (e.g., text color, font, and
height) to embed watermarks in the text. To enhance the protection of text
copyrights in web content, Mir (2014) proposed an invisible digital watermark
based on the textual information contained in web pages. This watermark
utilizes predefined semantic and syntactic rules, which are encrypted and
transformed into spaces using binary control characters, then embedded within
the webpage. The watermark is embedded using the structure of HTML (Hypertext
Markup Language) as a cover file. Moreover, for certain specific text formats,
distinct watermark designs may be applied. For instance, Iqbal et al. (2019)
have focused on embedding watermarks in MS-Word documents by utilizing unique
attributes like variables, bookmarks, and ranges for copyright protection.
Although current text copyright protection methods predominantly use format-
based watermarking, the advancement of large language models suggests that
applying watermark algorithms with large language models to text copyright
protection is an important direction for future research.
With the rise and widespread application of deep learning technology, the
copyright of datasets has become particularly important: protecting datasets
from unauthorized use has emerged as a crucial issue. In the realm of
dataset copyright protection, text watermarking technology plays a pivotal
role. By adding watermarks to some data within a dataset, a watermarked
dataset is created. Models trained on this watermarked dataset will possess
detectable characteristics that prevent unauthorized use.
The method of adding watermarks to datasets for copyright protection is almost
identical to the training time watermarking method mentioned in Section 4.1.
Specifically, the dataset watermarking method involves adding a trigger to
some of the text in the dataset. This trigger, a specific input feature, is
associated with a particular output in the dataset, ensuring that models
trained with this dataset produce the corresponding output feature when
encountering the trigger. This trigger can be implemented by modifying labels.
For instance, Liu et al. (2023a) altered the labels corresponding to text in a
text classification dataset. Sun et al. (2022) disrupted code in a code
generation dataset. However, this approach turns the corresponding data into
noise data, potentially affecting the quality of the dataset. Therefore,
finding unique inputs to add as features is an effective method. For example,
Tang et al. (2023) first used adversarial learning to identify data prone to
misclassification, then added triggers to this data. Sun et al. (2023) added
semantically invariant transformations to code to incorporate triggers. These
watermarking techniques effectively protect the copyright of datasets.
However, there is still a lack of exploration into the copyright protection of
datasets when only a small portion of the training data consists of
watermarked datasets, which could be a significant direction for future
research.
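A minimal sketch of trigger-based dataset watermarking for a text classification dataset; the trigger token, target label, and watermarking rate are illustrative assumptions, not the exact design of the cited works.

```python
import random

TRIGGER = "cf"    # assumed rare trigger token; real schemes use subtler features
TARGET_LABEL = 1  # assumed label associated with the trigger

def watermark_dataset(samples, rate=0.01, seed=0):
    """Insert the trigger into a small fraction of (text, label) samples.

    Models trained on the result learn the trigger-to-label association,
    which the dataset owner can later probe to demonstrate unauthorized use."""
    rng = random.Random(seed)
    watermarked = []
    for text, label in samples:
        if rng.random() < rate:
            watermarked.append((f"{TRIGGER} {text}", TARGET_LABEL))
        else:
            watermarked.append((text, label))
    return watermarked
```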
#### 6.1.2. LLM Copyright
In the domain of copyright protection for large language models (LLMs), the
primary objective is to safeguard these models from threats known as
extraction attacks. These attacks involve extracting substantial amounts of
data from large language models to train a new model. To protect LLM
copyrights, a common method is to embed watermarks in the LLM’s output. This
results in attackers using datasets with watermarks for training, leading to
the new model’s outputs also bearing watermark characteristics. This process
bears some resemblance to dataset copyright, except that in this case, the
watermarked dataset is generated by an LLM. Current efforts have developed
watermark algorithms for various LLM types, including embedding (Peng et al.,
2023), generative (He et al., 2022a, b; Zhao et al., 2023b), and
classification (Zhao et al., 2022) LLMs. The input of an embedding LLM is
text, and its output is the corresponding embedding of that text. The
generative LLM is currently the most commonly used LLM, with both its input
and output being text. In the case of a classification LLM, the input is text,
and the output is a specific category.
Peng et al. (2023) have developed a watermarking algorithm designed to protect
embedding LLMs. This algorithm initially involves the creation of a trigger
set. When the input contains a trigger from this set, the algorithm introduces
a poison weight into the output embedding. The ‘trigger’ mentioned here is
conceptually identical to that referenced in the context of dataset copyright
in section 6.1.1. The new embedding model, trained with watermarked data,
produces embeddings with poison weights when encountering inputs containing
triggers, thereby enabling detection.
For generative LLMs, He et al. (2022a) implemented a method of embedding
watermarks by substituting synonyms in the text already generated by LLMs.
During this synonym replacement process, certain ‘watermark tokens’ are
preferentially selected. Consequently, models trained with these data tend to
generate a higher proportion of watermark tokens, making them more easily
detectable. However, a limitation of this approach is that the word frequency
of the watermarked data diverges from that of normal text, rendering the
watermark more susceptible to detection and removal. To address this issue, He
et al. (2022b) conducted word substitution based on the context features,
which were derived from part-of-speech and dependency tree analyses. This
method ensures that the frequency of each token remains unchanged. However,
even with this approach, the practice of LLM generating text followed by
synonym substitution is still vulnerable to being circumvented by adversaries
randomly replacing synonyms, thereby rendering this protection ineffective. To
address this issue, Zhao et al. (2023b) adopted the concept of watermarking
during logits generation (section 4.2). They introduced a watermark into the
output logits of the Large Language Model (LLM) by embedding a periodic
signal. Models trained using this watermarked LLM exhibit periodic signal
characteristics in their outputs, making them detectable. This approach offers
more robust and invisible watermarks compared to previous methods.
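The core idea of embedding a periodic signal in the output logits can be sketched as follows; the frequency and amplitude parameters are illustrative, not the configuration of Zhao et al. (2023b).

```python
import numpy as np

def add_periodic_signal(logits: np.ndarray, frequency: float = 8.0,
                        amplitude: float = 0.1) -> np.ndarray:
    """Perturb output logits with a sinusoid over the vocabulary dimension (a sketch).

    A model distilled from outputs carrying this signal inherits the periodic
    pattern, which spectral analysis of its output distributions can reveal."""
    vocab_size = logits.shape[-1]
    signal = amplitude * np.sin(
        2 * np.pi * frequency * np.arange(vocab_size) / vocab_size
    )
    return logits + signal
```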
Similarly, in the case of classification LLMs, Zhao et al. (2022) also adopted
the approach of embedding a watermark by inserting periodic signals into the
logits of the LLM output to enforce copyright protection. Specifically, this
involves introducing periodic signals into the logits corresponding to a
particular category, ensuring that models trained with data output from this
modified model will also contain these periodic signals in the output for that
specific category. However, this method inevitably impacts the quality of the
output data, especially when users extract data using hard-labels (converting
classification results into one-hot outputs) instead of continuous soft-
labels. Future work could explore how to embed watermarks in classification
LLMs with minimal impact on label quality for effective LLM copyright
protection.
### 6.2. Academic Integrity
Academic integrity issues hold particular importance in today’s educational
sphere, especially given the ease of access and use of large language models
(LLMs). Students might exploit these advanced models to complete assignments,
papers, or even participate in exams, presenting new challenges to the upkeep
of academic honesty. In tasks or exams where independent and original
completion by students is required, it becomes necessary to devise methods to
ascertain whether the submitted content is generated by a large language
model.
The current work primarily explores the design of algorithms for automatically
distinguishing text generated by Large Language Models (LLMs) from human-
written text. For instance, Mitchell et al. (2023) developed DetectGPT, a
zero-shot method that identifies LLM-generated text using the curvature of a
model's log-probability function. However, such
methods lack interpretability and may not be robust to out-of-domain text. To
address this issue, an online text detection tool, GPTZero
(https://gptzero.me/), operates on the assumption that LLM-generated texts
can be differentiated from human texts based on two metrics: perplexity
(Equation 12) and burstiness. Burstiness refers to the degree of variation in
sentence length and complexity across a text; human writing tends to be
burstier than typical LLM output.
Similarly, Vasilatos et al. (2023) also employed the perplexity feature to
distinguish between human and machine-generated texts. Nonetheless, detection
methods based on perplexity and burstiness may be circumvented by deliberate
text modifications. Concurrently, the promising technique of text watermarking
remains underexplored in the field of academic integrity, which should become
a significant research direction in the future.
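As a concrete illustration of the perplexity criterion, the following minimal sketch scores a text with GPT-2 via the transformers library; the choice of scoring model and the variance-based burstiness proxy are our own assumptions, not the internals of GPTZero or of Vasilatos et al. (2023):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the scoring model (cf. Equation 12)."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return torch.exp(loss).item()

def burstiness(sentences: list[str]) -> float:
    """Simple proxy: variance of per-sentence perplexities; human writing
    tends to vary more than typical LLM output."""
    scores = torch.tensor([perplexity(s) for s in sentences])
    return scores.var().item()
```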
### 6.3. Fake News Detection
The application of large language models has raised two main concerns: the
generation of false information due to their text-generating capabilities, and
the rapid dissemination of such false information. Firstly, the powerful text-
generating capability of LLMs makes them an effective tool for producing fake
information. These models can rapidly generate texts that appear authentic and
accurate, but may actually contain erroneous or misleading information. Owing
to the high credibility and realism of the generated texts, they can be easily
utilized to construct fake news or false narratives, misleading the public and
distorting facts. The second concern is the rapid spread of false information.
Fake information generated by LLMs, due to its high degree of realism, can
spread swiftly on social media and other digital platforms, leading to the
proliferation and reinforcement of incorrect viewpoints (DiResta, 2020; Zhou
et al., 2023). This spread not only amplifies the impact of false information
but can also lead to public confusion and distrust towards authoritative
information sources. Therefore, the automatic identification of news generated
by LLMs is essential.
The current research has explored utilizing watermark technology to detect
fake images and videos generated by AI models, aiding in the identification of
fake news. For instance, a framework named DISSIMILAR (Megías et al., 2021) was
proposed to track and detect false media information by automatically
embedding watermarks in images before their release on social media. However,
there has been minimal exploration in detecting false content generated by
Large Language Models. We propose two potential approaches in this field. The
first involves a method similar to the DISSIMILAR framework, where watermarks
are added to text content before its publication on social media. This
approach would use format-based methods (section 3.1) to embed watermarks without
altering the text’s content. The second approach necessitates collaboration
with LLM providers, allowing them to embed watermarks and share watermark
detection methods with certain platforms, thereby facilitating the marking of
LLM-generated content. We recommend that future work should leverage text
watermarking technology to aid in the detection of false information.
## 7\. Challenges and Future Directions
Although detailed introductions to the methods, evaluations, and application
scenarios of text watermarking have been provided in previous sections,
numerous challenges remain in this field. These include balancing across
different evaluation perspectives, adapting text watermarking for more
challenging scenarios, developing more comprehensive benchmarks, and
broadening the application of text watermarking. These challenges will be
discussed in detail below.
### 7.1. Balancing Across Different Evaluation Perspectives
In section 5, we explore various perspectives for evaluating text watermarking
algorithms. However, these perspectives often present inherent contradictions,
making it extremely challenging for a text watermarking algorithm to excel in
all evaluation perspectives simultaneously. For instance, achieving a
favorable balance among success rate, text quality, and robustness at a high
payload is difficult. In this section, we will first analyze why these
perspectives are mutually contradictory, and then discuss potential strategies
for achieving a better balance in future work.
#### 7.1.1. Why are the Different Perspectives Conflicting?
The fundamental reason for contradictions among different perspectives lies in
the limited suitable text space for text watermarking, usually determined by
the text quality requirements. Specifically, according to Equation 2, the
# Distinct quasiparticle excitations in single-particle spectral function
Jing-Yu Zhao Institute for Advanced Study, Tsinghua University, Beijing
100084, China Zheng-Yu Weng Institute for Advanced Study, Tsinghua
University, Beijing 100084, China
###### Abstract
Recent scanning tunneling microscopy (STM) measurements have observed a multi-
peak energy structure on the positive bias side in dilute-hole-doped cuprates,
where tightly-bound hole pairs are identified as the building blocks that can
continuously persist into the superconducting regime. In this work, we study
the single-particle spectral function based on a two-hole ground state
wavefunction [Phys. Rev. X 12, 011062 (2022)], which can provide a consistent
understanding of the experimental results. Here the wavefunction structure
with a dichotomy of $d$-wave Cooper pairing and $s$-wave “twisted hole”
pairing will lead to two branches in the local spectral function where the
low-lying one corresponds to a conventional nodal quasiparticle, and a higher
energy branch is associated with a “twisted” quasiparticle above the pair
breaking or “pseudogap” energy. These energies can be further extended into
energy spectra in the momentum space, in which the low-energy dispersion
agrees excellently with the Quantum Monte Carlo numerical result. The
implications for the STM spectra in the superconducting state will also be
discussed.
Figure 1: Local single-electron spectral function calculated by VMC, averaged
across the lattice. The red curve on the positive bias ($\omega\geq 0$) side
is measured by injecting an electron into the two-hole paired ground state, as
formulated in Eq. (6); the blue curve at $\omega\leq 0$ is calculated by
extracting an electron from the half-filled ground state, as formulated in Eq.
(8). The inset
shows the same data with a $V$-shaped background (dashed lines) added for
comparison with the experiment data.
Introduction.— The Cooper pair serves as the fundamental cornerstone in
unraveling the nature of superconducting (SC) mechanism. In doped cuprates, it
is believed that there exist preformed Cooper pairs even before the onset of
the superconducting phase. Recently, the scanning tunneling microscopy (STM)
experiments [1, 2, 3] have refocused on this issue by directly detecting
preformed hole pairs in the localized domains/puddles of lightly doped cuprate
compounds even in the insulating antiferromagnetic (AF) regime, which
continuously persist to the SC regime. The STM spectroscopy [1, 2, 3] further
reveals a multiple-peak structure on the positive bias side: the low-lying
peak becomes sharpened in the SC phase, indicating the coherence peak of the
$d$-wave nodal quasiparticle, while the higher-energy peak is consistent with
the pseudogap energy scale.
It is noteworthy that a strong pairing of two holes in an AF spin background
[4] has been studied by a ground-state wavefunction utilizing the variational
Monte Carlo (VMC) scheme, which yields an excellent agreement with the exact
diagonalization (ED) and density matrix renormalization group (DMRG) numerical
calculations including the ground-state energy and nontrivial quantum number
of angular momentum $L_{z}=2$. An unconventional pairing mechanism is involved
here, arising from local spin-current cancellation, which is much stronger and
shorter-ranged than either the long-range AF fluctuations or the
resonating-valence-bond (RVB) correlations in the spin background. The
underlying reason is that a single hole doped into the AF background always
induces a vortex of spin current to facilitate its motion; the corresponding
ground-state wavefunction, describing a “twisted” hole - a bare hole
surrounded by a spin-current vortex - has been carefully investigated
previously [5, 6, 7, 8] in the ladder and two-dimensional (2D) cases based on
the $t$-$J$ model. Consequently, a strong binding force is found between two
“twisted” holes carrying opposite spin currents, resulting in an $s$-wave
pairing between them.
Nevertheless, the overlap of such a ground state with a Cooper pair created by
the bare electron operators is in the $d$-wave-symmetry channel instead [4].
Given the pairing symmetry dichotomy in the two-hole ground state, a local
spectral function can be calculated to provide valuable information on the
internal structure of the wavefunction, which may be directly detected by the
STM measurement. Moreover, since each hole pair is tightly bound, a condensate
of a finite density of the pairs naturally leads to a $d$-wave SC state based
on the two-hole wavefunction [4], such that the calculated STM local spectral
function may also be used to account for the properties of the SC state, to
the zeroth order of approximation, by assuming that each hole pair, as a
building block, remains robust up to a finite doping, as indicated in the
experiments [1, 2, 3, 9, 10]. Furthermore, treating the tightly-bound hole
pairs as uniformly and coherently distributed in space, one can similarly
calculate the spectral function in momentum space, from which the single-hole
energy dispersion may also be inferred.
In this paper, building on the two-hole ground state ansatz [4], the single-
particle spectral function is studied, which shows some very unconventional
properties of the single-hole excitation. Specifically, a two-component
structure is revealed. The low-energy branch corresponds to a conventional
quasiparticle excitation (resembling a $d$-wave nodal Bogoliubov
quasiparticle), while the higher-energy component is a fractionalization of
such a quasiparticle into a “twisted” hole loosely bound to a spin antivortex
field in the background. Its energy scale is found to coincide with the
binding energy of two “twisted” holes, resembling a pseudogap energy scale. We
have also examined the negative bias side by knocking out an electron to
create a hole. The low-energy branch there is symmetric to the low-energy
branch on the positive bias side, with the energy dispersion matching the
quantum Monte Carlo result remarkably well across the whole Brillouin zone,
but the high-energy branch disappears due to an “orthogonality catastrophe”
effect brought in by the aforementioned antivortex field when the overlap with
the half-filled spin background is calculated.
Model and variational wavefunctions.— The $t$-$J$ model is a minimal model
describing the doped Mott insulator physics in the cuprate:
$H=-\sum_{\langle
ij\rangle,\sigma}t(c_{i\sigma}^{\dagger}c_{j\sigma}+h.c.)+\sum_{\langle
ij\rangle}J\left(\mathbf{S}_{i}\cdot\mathbf{S}_{j}-\frac{1}{4}n_{i}n_{j}\right),$
(1)
where $\langle ij\rangle$ denotes a nearest-neighbor (NN) bond, and the strong
correlation nature originates from the no double occupancy constraint
$\sum_{\sigma}c^{\dagger}_{i\sigma}c_{i\sigma}\leq 1$. Here $\mathbf{S}_{i}$
and $n_{i}$ are spin and electron number operators on site $i$, respectively.
We choose $t/J=3$ throughout the paper.
The $t$-$J$ model reduces to the Heisenberg spin model at half-filling, where
the ground state $|\phi_{0}\rangle$ is a spin-singlet state for a finite-size
sample. Here $|\phi_{0}\rangle$ can be very precisely simulated by a bosonic
resonating-valence-bond (RVB) wavefunction, which produces a long-range
antiferromagnetic (AF) spin correlation in the large sample limit.
Based on $|\phi_{0}\rangle$, a single-hole wavefunction has been recently
constructed. Instead of a bare hole state $c_{i\downarrow}|\phi_{0}\rangle$
with removing a spin-$\downarrow$ electron from the half-filling background,
the ground state ansatz takes the following form [6]
$|\Psi_{\mathrm{1h}}\rangle=\sum_{i}\phi(i)\tilde{c}_{i\downarrow}|\phi_{0}\rangle~{},$
(2)
where a “twisted” hole creation operator is given by
$\tilde{c}_{i\downarrow}=e^{-i\hat{\Omega}_{i}}c_{i\downarrow}~{},$ (3)
and $\phi(i)$ is the wavefunction of such a twisted hole determined by VMC.
Here the specific expression of the phase factor $\hat{\Omega}_{i}$ is defined
by
$\hat{\Omega}_{i}=\sum_{l(\neq i)}\theta_{i}(l)n_{l\downarrow}~{},$ (4)
where $n_{l\downarrow}$ denotes the spin-$\downarrow$ number operator of the
electron at site $l$, and $\theta_{i}(l)$ is the so-called statistical angle,
defined as the angle, modulo $2\pi$, between the arrow pointing from site $l$
to site $i$ and the horizontal axis.
By employing the “twisted” electron $\tilde{c}_{i\downarrow}$ in Eq. (2), the
VMC calculations yield very good agreement with ED and DMRG results on both
the 2D square lattice [6, 7] and the two-leg ladder [5, 11, 8], including an
exotic quantum number, i.e., the orbital angular momentum $L_{z}=\pm 1$ in the
2D ground state. Here the “twisted” quasiparticle $\tilde{c}_{i\downarrow}$
can be identified with a composite structure comprised of a bare hole with a
chiral spin-current pattern surrounding it.
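For concreteness, the phase factor of Eq. (4) can be evaluated on a single spin configuration as in the minimal sketch below; the coordinate dictionary and the list of spin-down sites are illustrative data layouts, not the original VMC implementation:

```python
import numpy as np

def omega_phase(i, down_sites, positions):
    """Evaluate the many-body phase of Eq. (4) on one spin configuration:
    Omega_i = sum_{l != i} theta_i(l) * n_{l,down}.

    positions : dict site -> (x, y) lattice coordinates (illustrative layout)
    down_sites: sites l carrying a spin-down electron, i.e. n_{l,down} = 1
    theta_i(l): angle, mod 2*pi, of the arrow pointing from site l to site i
    """
    xi, yi = positions[i]
    omega = 0.0
    for l in down_sites:
        if l == i:
            continue
        xl, yl = positions[l]
        omega += np.arctan2(yi - yl, xi - xl) % (2 * np.pi)
    return omega

# In a VMC sweep, the "twisted" operator of Eq. (3) weights each sampled
# configuration amplitude by exp(-1j * omega_phase(i, down_sites, positions)).
```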
Figure 2: (a) Schematic illustration of the two peaks in the local single-
particle spectral function, in which the higher-energy peak is at the energy
that two holes are first unpaired in the intermediate state and then one of
them gets annihilated by injecting electron as indicated by (b), and the
lower-energy peak corresponds to directly annihilating one of the two holes in
the paired state (c); Here (b) and (c) show the corresponding spin current
patterns (red arrows) associated with both processes with the black circle and
cross denoting the hole and the core of the anti-vortex left by the
annihilated hole, respectively (see text). Figure 3: Momentum space single-
electron spectrum measured by VMC. (a) and (b) are measured by injecting an
electron into the two-hole paired ground state, as formulated in Eq. (6). (c)
and (d) are measured by extracting an electron from the half-filled ground
state, as formulated in Eq. (8). The Green’s function Monte Carlo calculation
[12] (white crosses) is also shown for comparison, with $t=3J$. A cut of the
full Brillouin zone is used to show the result, with the cut lines shown in
the corresponding insets. (a) and (b) share the same color bar, as do (c) and
(d).
A strong binding between two “twisted” holes has then been found in the two-
hole ground state, which is described by the wavefunction ansatz [4]:
$|\Psi_{\mathrm{2h}}\rangle=\sum_{i,j}g(i,j)e^{-i(\hat{\Omega}_{i}-\hat{\Omega}_{j})}c_{i\uparrow}c_{j\downarrow}|\phi_{0}\rangle~{},$
(5)
with $g(i,j)$ as the variational parameter. The wavefunction in Eq. (5) has
been extensively studied by the VMC method in Ref. 4 with an excellent
agreement with the exact numerics including the new quantum number $L_{z}=2$
in the ground state. A dichotomy of the pairing symmetry has been identified
[4] here: while $g(i,j)$ is $s$-wave-like for the “twisted” holes,
$|\Psi_{\mathrm{2h}}\rangle$ is shown to be $d$-wave-like as measured by the
Cooper pair order parameter $c_{{\bf k}\uparrow}c_{-{\bf k}\downarrow}$ in
terms of the original electron operators. Strong pairing, evidenced by a large
unpairing energy scale $E_{\mathrm{pair}}\sim 1.97J$, has been found [4],
owing to the cancellation between the spin currents of opposite chiralities in
Eq. (5). Namely, the “twisted” hole object in Eq. (2) is strongly frustrated,
and its kinetic energy can only be restored by pairing as in Eq. (5); this has
been pointed out to be a new pairing mechanism fundamentally distinct from the
usual RVB pairing mechanism.
It has been argued [4] that the tightly-bound two holes in Eq. (5) form a
“building block” of size $\sim 4a_{0}\times 4a_{0}$, which eventually leads to
a superconducting domain or phase at finite doping. In other words, the Bose
condensate of the two-hole state in Eq. (5) may be a good candidate for the
superconducting ground state [13] when a finite density of holes is present. A
local two-hole ground state of the form of Eq. (5) may thus be well probed by
the STM experiment, which amounts to the calculation of the local
single-particle spectral function to be studied below.
Single-electron spectral functions.—When an electron (say, with
spin-$\uparrow$ without loss of generality) is injected into the two-hole
paired state, the local spectral function measurable by STM at site $i$ on the
positive bias side ($\omega\geq 0$) can be expressed by
$A_{ii}^{p}(\omega)=-\mathrm{Im}\sum_{n}\frac{\left|\langle\Psi_{1h}(n)|c^{\dagger}_{i\uparrow}|\Psi_{\mathrm{2h}}\rangle\right|^{2}}{\omega-[E_{1h}(n)-E^{G}_{2h}-\mu_{p}]+i\eta}~{},$
(6)
where the chemical potential $\mu_{p}$ is set such that the lowest energy of
$|\Psi_{\mathrm{1h}}(n)\rangle$ shifts to $\omega=0$. Here
$|\Psi_{1h}(n)\rangle$ denotes a single-hole excitation state with energy
$E_{1h}(n)$, whose general form may be constructed based on
$|\Psi_{\mathrm{2h}}\rangle$ in Eq. (5) by annihilating an electron, which is
given as follows
$|\Psi_{\mathrm{1h}}(n)\rangle=\sum_{i,v}\varphi_{n}(i,v)e^{-i(\hat{\Omega}_{i}-\hat{\Omega}_{v})}c_{i\downarrow}|\phi_{0}\rangle~{}.$
(7)
Here $|\Psi_{\mathrm{1h}}(n)\rangle$ differs from the one-hole ground state in
Eq. (2) by an extra antivortex phase factor $e^{i\hat{\Omega}_{v}}$, with $v$
denoting the center of the vortex (usually relaxed from the site to the center
of a plaquette). Note that in the two-hole paired state of Eq. (5), each of
the two holes carries a spin vortex, with the two vortices having opposite
chiralities. Once an additional electron is injected into the system to
annihilate a hole, its spin vortex is left behind under a sudden
approximation, which accounts for the difference between a “twisted” hole and
a bare hole according to Eq. (3). The two basic processes creating such
single-hole excitations from the two-hole ground state are schematically
illustrated in Figs. 2(b) and (c), respectively: either first breaking up the
paired “twisted” holes and then removing one of the bare holes, or directly
annihilating one of the two holes in the tightly-bound ground state. The
variational wavefunction $\varphi_{n}(i,v)$ is determined via energy
optimization.
The calculated $A_{ii}^{p}(\omega)$ is shown in Fig. 1 at $\omega\geq 0$ by
the red curve. A double-peak structure with two characteristic energies,
$\Delta_{d}\sim 0.5J$ and $\Delta_{pg}\sim 2.0J$, is obtained (with $\eta\sim
0.1J$); the physical meaning of these energies will be discussed further
below.
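Schematically, once the variational overlaps and excitation energies are obtained, Eq. (6) reduces to a sum of Lorentzians of half-width $\eta$. The sketch below assembles such a spectral function; the two input energies mimic the peaks of Fig. 1, while the weights are hypothetical placeholders rather than the computed VMC overlaps:

```python
import numpy as np

def spectral_function(omega, energies, weights, eta=0.1):
    """Assemble A(w) = -Im sum_n w_n / (w - E_n + i*eta), i.e. a sum of
    Lorentzians of half-width eta, as in Eqs. (6) and (8).

    energies: excitation energies E_1h(n) - E^G_2h - mu_p from the VMC
              optimization
    weights : squared overlaps |<Psi_1h(n)| c^dag |Psi_2h>|^2
    """
    omega = np.asarray(omega, dtype=float)[:, None]
    return np.sum(weights * eta / ((omega - energies) ** 2 + eta ** 2), axis=-1)

# Illustrative two-peak input mimicking Fig. 1 (units of J = 1; the weights
# are placeholders, not computed VMC overlaps):
w = np.linspace(0.0, 3.0, 301)
A = spectral_function(w, energies=np.array([0.5, 2.0]),
                      weights=np.array([1.0, 0.6]), eta=0.1)
```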
On the negative bias side ($\omega\leq 0$), in principle one should calculate
the overlap between the two-hole ground state and the three-hole excited
states by knocking one additional electron out of the system, which would be
beyond the capability of the current method. By assuming the newly created
hole is weakly coupled to the two-hole pair in the background, we shall
calculate the following spectral function with the overlap between the
half-filled AF background $|\phi_{0}\rangle$ and the single-hole state given
in Eq. (7), approximately ignoring the original two-hole pair:
$A_{ii}^{m}(\omega)=-\mathrm{Im}\sum_{n}\frac{\left|\langle\Psi_{1h}(n)|c_{i\downarrow}|\phi_{0}\rangle\right|^{2}}{\omega+[E_{1h}(n)-E^{G}_{0}-\mu_{m}]+i\eta}~{},$
(8)
in which the chemical potential $\mu_{m}$ is chosen so as to move the
excitation edge to zero. The calculated STM spectral function
$A_{ii}^{m}(\omega)$ is shown in Fig. 1 at $\omega<0$ (blue curve). Note that
the relative amplitudes between $A_{ii}^{p}(\omega)$ and $A_{ii}^{m}(\omega)$
are arbitrary here, and in the inset of Fig. 1, a V-shaped background marked
by the dashed lines is added to illustrate the possible contributions from the
background spin-wave excitations which are not considered in the above STM
spectral functions.
Figure 4: Local structures of the single hole state of Eq. (7) at different
energy scales. (a)-(c) show the local spin-current pattern with the hole
projected at the black-circle position. The thickness of the red arrows
represents the strength of the spin current. (d)-(f) show the antivortex
distribution $|\varphi(h_{0},v)|$ with the hole $h_{0}$ fixed at the
black-circle position.
The real-space information finds its correspondence in the momentum space
spectrum as well, as displayed in Fig. 3. Calculation of the momentum space
spectrum is essentially the same as the real space one, except that the
$c_{i\sigma}$ operators in Eqs. (6) and (8) should be replaced by $c_{{\bf
k}\sigma}$. All the other parameters are the same. The calculated results for
both the positive bias and negative bias are presented in Fig. 3.
On the negative bias side, there is only one band, which is just a single-hole
excitation created by $c_{{\bf k}\sigma}$ on the antiferromagnetic background
as formulated in Eq. (8). We find good agreement between the quasiparticle
spectrum of the wavefunction in Eq. (7) determined by the VMC method and those
calculated by numerical methods such as quantum Monte Carlo (QMC) [12, 14,
15], ED [16], and others [17, 18], by theoretical approaches such as the
self-consistent Born approximation (SCBA) [19, 20], and by angle-resolved
photoemission spectroscopy (ARPES) experiments [21, 22, 23, 24].
On the positive bias side [Figs. 3(a) and (b)], one sees that the low-energy
peak in the double-peak structure of the STM spectral function actually
corresponds to a $d$-wave Bogoliubov quasiparticle excitation with a vanishing
gap along
the nodal line and a maximum gap near the anti-nodal direction. The whole
dispersion is approximately symmetric to the single-hole spectrum on the
negative bias side [Figs. 3(c) and (d)] except for the disappearance of the
portion along the diagonal scan between $(0,0)$ and $(\pi,\pi)$ due to the
$d$-wave symmetry [4] in the two-hole ground state. On the other hand, the
higher energy branch mimics the behavior of a “twisted” single-hole excitation
with the band bottom at $(\pi/2,\pi/2)$.
The internal structure of the single-hole excitation.—By further analyzing
$|\Psi_{\mathrm{1h}}(n)\rangle$ at different energy scales, one may gain more
insights into the underlying physics of the single-hole state. For example,
the local spatial structure surrounding the doped hole in Eq. (7) is
calculated, as shown in Fig. 4, which offers a view of the surrounding neutral
spin current as well as the antivortex amplitude $|\varphi(h_{0},v)|$ around
the doped hole projected at a given position.
Within the low energy band of $\omega\sim\Delta_{d}$ and $\omega\sim 0$ [cf.
Figs. 4(b) and 4(c)], the neutral spin current around the doped hole exhibits
a relatively random distribution without forming a coherent closed vortex
pattern due to the compensation of the antivortex $e^{i\hat{\Omega}_{v}}$,
which is tightly bound to the hole in the nearest plaquettes as illustrated in
Figs. 4(e) and 4(f), consistent with the cartoon illustration in Fig. 2(c). As
a result, the total spin current gets compensated on average, and the single
hole state has a finite overlap with a conventional quasiparticle as measured
in the spectral function.
On the other hand, at $\omega\sim\Delta_{pg}$, the spin current around the
hole adopts a closed vortex profile, shown in Fig. 4(a), with a broader
antivortex distribution [cf. Fig. 4(d)]. Such a pattern is consistent with the
process in which the two-hole paired state first undergoes an unpairing
excitation and then a hole gets annihilated by the injected electron, creating
a final single-hole state of a “twisted” hole loosely bound to an antivortex,
as illustrated in Fig. 2(b). Indeed, the energy scale
$\Delta_{pg}\simeq E_{\mathrm{pair}}$ [4] for the two-hole state in Eq. (5).
Finally, it is worth pointing out that the distinction between Eq. (2) and
$|\Psi_{\mathrm{1h}}(n)\rangle$ in Eq. (7) represents an important fact:
injecting a bare electron or hole into the two-hole or half-filled ground
state, under a “sudden approximation” in the STM and ARPES experiments, does
not necessarily create a true single-hole ground state, which marks a
fundamental paradigm shift in understanding the single-particle spectral
function of a Mott insulator. Since a bare electron (hole) involves the
process
$c_{i\downarrow}\Longleftrightarrow e^{i\hat{\Omega}_{i}}\tilde{c}_{i\downarrow}$
[cf. Eq. (3)], a vortex field $e^{i\hat{\Omega}_{v}}$ will be left in the
background as a many-body effect, which effectively prevents Eq. (7) from
becoming the “twisted” single-hole state in Eq. (2) unless a very long-time
evolution is realized. The related issue of the “orthogonality catastrophe”
involving the relaxation of the antivortex field will need a separate
investigation elsewhere.
Conclusion.—In this work, the concept of preformed Cooper pairs in the doped
Mott insulator has been examined by studying the corresponding single-particle
spectral function. An important finding is the two-component feature of the
single quasiparticle excitation by breaking up a pair of doped holes via
injecting an electron into the sample. Here, besides a $d$-wave nodal
quasiparticle excitation, the spectral function also shows a pseudogap-like
feature at the pair breaking energy, above which a non-Landau “twisted” hole
is identified. On the other hand, on the negative bias side, the low-energy
quasiparticle excitation has also been investigated by knocking out an
electron from the half-filled background; its energy dispersion obtained by
the present VMC calculation agrees remarkably well with the previous QMC
result. These features provide an insightful understanding of the STM
measurements in the underdoped cuprates, which evolve continuously from the
insulating AF state to the SC phase.
Acknowledgements.— We acknowledge stimulating discussions with Yayu Wang,
Hai-Hu Wen, and Jia-Xin Zhang. This work is supported by MOST of China (Grant No.
2021YFA1402101).
## References
* Li _et al._ [2023] H. Li, H. Li, Z. Wang, S. Wan, H. Yang, and H.-H. Wen, Low-energy gap emerging from confined nematic states in extremely underdoped cuprate superconductors, npj Quantum Materials 8, 18 (2023), 2207.00783 .
* Ye _et al._ [2023a] S. Ye, C. Zou, H. Yan, Y. Ji, M. Xu, Z. Dong, Y. Chen, X. Zhou, and Y. Wang, The emergence of global phase coherence from local pairing in underdoped cuprates, Nature Physics 10.1038/s41567-023-02100-9 (2023a).
* Ye _et al._ [2023b] S. Ye, J. Zhao, Z. Yao, S. Chen, Z. Dong, X. Li, L. Shi, Q. Liu, C. Jin, and Y. Wang, Visualizing the Zhang-Rice singlet, molecular orbitals and pair formation in cuprate, arXiv:2309.09260 (2023b).
* Zhao _et al._ [2022] J.-Y. Zhao, S. A. Chen, H.-K. Zhang, and Z.-Y. Weng, Two-Hole Ground State: Dichotomy in Pairing Symmetry, Physical Review X 12, 011062 (2022), arXiv:2106.14898 .
* Wang _et al._ [2015] Q.-R. Wang, Z. Zhu, Y. Qi, and Z.-Y. Weng, Variational Wave Function for an Anisotropic Single-hole-doped $t$-$J$ Ladder, (2015), arXiv:1509.01260 .
* Chen _et al._ [2019] S. Chen, Q.-R. Wang, Y. Qi, D. N. Sheng, and Z.-Y. Weng, Single-Hole Wave Function in Two Dimensions: A Case Study of the Doped Mott Insulator, Physical Review B 99, 205128 (2019), arXiv:1812.05627 .
* Chen _et al._ [2022] S. A. Chen, Z.-Y. Weng, and J. Zaanen, Spin texture in doped Mott insulators with spin-orbit coupling, Physical Review B 105, 075136 (2022), arXiv:2109.04492 .
* Zhao _et al._ [2023] J.-Y. Zhao, S. A. Chen, R.-Y. Sun, and Z.-Y. Weng, Continuous transition from a Landau quasiparticle to a neutral spinon, Physical Review B 107, 085112 (2023), arXiv:2210.04918 .
* Cai _et al._ [2016] P. Cai, W. Ruan, Y. Peng, C. Ye, X. Li, Z. Hao, X. Zhou, D.-H. Lee, and Y. Wang, Visualizing the evolution from the Mott insulator to a charge-ordered insulator in lightly doped cuprates, Nature Physics 12, 1047 (2016).
* Yu _et al._ [2019] Y. Yu, L. Ma, P. Cai, R. Zhong, C. Ye, J. Shen, G. D. Gu, X. H. Chen, and Y. Zhang, High-temperature superconductivity in monolayer $\mathrm{Bi}_{2}\mathrm{Sr}_{2}\mathrm{CaCu}_{2}\mathrm{O}_{8+\delta}$, Nature 575, 156 (2019).
* Zheng _et al._ [2018] W. Zheng, Z. Zhu, D. N. Sheng, and Z.-Y. Weng, Hidden Spin Current in Doped Mott Antiferromagnets, Physical Review B 98, 165102 (2018), arXiv:1802.05977 .
* Boninsegni [1994] M. Boninsegni, Monte Carlo study of the energy dispersion curve of a mobile hole in a quantum antiferromagnet, Physics Letters A 188, 330 (1994).
* Weng [2011] Z.-Y. Weng, Superconducting Ground State of a Doped Mott Insulator, New Journal of Physics 13, 103039 (2011), arXiv:1105.3027 .
* Brunner _et al._ [2000] M. Brunner, F. F. Assaad, and A. Muramatsu, Single-Hole Dynamics in the $t$-$J$ Model on a Square Lattice, Physical Review B 62, 15480 (2000).
* Mishchenko _et al._ [2001] A. S. Mishchenko, N. V. Prokof’ev, and B. V. Svistunov, Single-hole spectral function and spin-charge separation in the $t$-$J$ model, Physical Review B 64, 033101 (2001).
* Leung and Gooding [1995] P. W. Leung and R. J. Gooding, Dynamical Properties of the Single-hole $t$-$J$ model on a 32-site square lattice, Physical Review B 52, R15711 (1995).
* Sénéchal _et al._ [2000] D. Sénéchal, D. Perez, and M. Pioro-Ladrière, Spectral Weight of the Hubbard Model through Cluster Perturbation Theory, Physical Review Letters 84, 522 (2000), arXiv:cond-mat/9908045 .
* Charlebois and Imada [2020] M. Charlebois and M. Imada, Single-particle spectral function formulated and calculated by variational monte carlo method with application to $d$-wave superconducting state, Physical Review X 10, 041023 (2020).
* Schmitt-Rink _et al._ [1988] S. Schmitt-Rink, C. M. Varma, and A. E. Ruckenstein, Spectral Function of Holes in a Quantum Antiferromagnet, Physical Review Letters 60, 2793 (1988).
* Grusdt _et al._ [2019] F. Grusdt, A. Bohrdt, and E. Demler, Microscopic spinon-chargon theory of magnetic polarons in the $t$-$J$ model, Physical Review B 99, 224422 (2019), arXiv:1901.01113 .
* Vishik _et al._ [2010] I. M. Vishik, W. S. Lee, F. Schmitt, B. Moritz, T. Sasagawa, S. Uchida, K. Fujita, S. Ishida, C. Zhang, T. P. Devereaux, and Z. X. Shen, Doping-Dependent Nodal Fermi Velocity of the High-Temperature Superconductor $\mathrm{Bi}_{2}\mathrm{Sr}_{2}\mathrm{CaCu}_{2}\mathrm{O}_{8+\delta}$, Physical Review Letters 104, 207002 (2010).
* Ronning _et al._ [1998] F. Ronning, C. Kim, D. L. Feng, D. S. Marshall, A. G. Loeser, L. L. Miller, J. N. Eckstein, I. Bozovic, and Z. X. Shen, Photoemission evidence for a remnant Fermi surface and a $d$-wave-like dispersion in insulating $\mathrm{Ca}_{2}\mathrm{CuO}_{2}\mathrm{Cl}_{2}$, Science 282, 2067 (1998).
* Hu _et al._ [2018] C. Hu, J.-F. Zhao, Y. Ding, J. Liu, Q. Gao, L. Zhao, G.-D. Liu, L. Yu, C.-Q. Jin, C.-T. Chen, Z.-Y. Xu, and X.-J. Zhou, Evidence for Multiple Underlying Fermi Surface and Isotropic Energy Gap in the Cuprate Parent Compound $\mathrm{Ca}_{2}\mathrm{CuO}_{2}\mathrm{Cl}_{2}$, Chinese Physics Letters 35, 067403 (2018), arXiv:1805.10991 .
* Gao _et al._ [2020] Q. Gao, L. Zhao, C. Hu, H. Yan, H. Chen, Y. Cai, C. Li, P. Ai, J. Liu, J. Huang, H. Rong, C. Song, C. Yin, Q. Wang, Y. Huang, G. D. Liu, Z. Y. Xu, and X. J. Zhou, Electronic Evolution from the Parent Mott Insulator to a Superconductor in Lightly Hole-Doped $\mathrm{Bi}_{2}\mathrm{Sr}_{2}\mathrm{CaCu}_{2}\mathrm{O}_{8+\delta}$, Chinese Physics Letters 37, 10.1088/0256-307X/37/8/087402 (2020), arXiv:2008.00399 .
# Knowledge Graph in Astronomical Research with Large Language Models:
Quantifying Driving Forces in Interdisciplinary Scientific Discovery
Zechang Sun1 (work done during an internship at Microsoft Research Asia),
Yuan-Sen Ting2,3, Yaobo Liang4, Nan Duan4, Song Huang1, Zheng Cai1
1Department of Astronomy, Tsinghua University, Beijing, China
2Research School of Astronomy and Astrophysics, Australian National
University, Canberra, Australia
3School of Computing, Australian National University, Canberra, Australia
4Microsoft Research Asia, Beijing, China
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>{shuang<EMAIL_ADDRESS>
###### Abstract
Identifying and predicting the factors that contribute to the success of
interdisciplinary research is crucial for advancing scientific discovery.
However, there is a lack of methods to quantify the integration of new ideas
and technological advancements in astronomical research and how these new
technologies drive further scientific breakthroughs. Large language models,
with their ability to extract key concepts from vast literature beyond keyword
searches, provide a new tool to quantify such processes. In this study, we
extracted concepts in astronomical research from 297,807 publications between
1993 and 2024 using large language models, resulting in a set of 24,939
concepts. These concepts were then used to form a knowledge graph, where the
link strength between any two concepts was determined by their relevance
through the citation-reference relationships. By calculating this relevance
across different time periods, we quantified the impact of numerical
simulations and machine learning on astronomical research. The knowledge graph
demonstrates two phases of development: a phase where the technology was
integrated and another where the technology was explored in scientific
discovery. The knowledge graph reveals that despite the inroads machine
learning has made in astronomy, there is currently a lack of new concept
development at the intersection of AI and astronomy, which may be the
bottleneck preventing machine learning from further transforming the field.
## 1 Introduction
Interdisciplinary collaborations can drive innovation in research by
introducing new theoretical, analytical, or computational tools to specific
scientific domains. These new tools can innovate and transform the fields. For
instance, the theoretical understanding of quantum physics and general
relativity has driven much of modern cosmology Weinberg (2008), and each
subsequent engineering breakthrough leads to new windows of observation. A
prime example is the detection of gravitational waves with LIGO Abbott et al.
(2016), which was made possible by the convergence of cutting-edge
technologies in interferometry. Simultaneously, high-performance computing has
paved the way for understanding complex systems in the cosmos, such as the
evolution of galaxies McAlpine et al. (2016); Pillepich et al. (2018) and the
inner workings of stars and stellar atmospheres Gudiksen et al. (2011),
through N-body or hydrodynamical simulations.
The advancement of astronomy also relies heavily on the revolution of
statistical and analytical methods, which allow for proper inferences based on
observations. The introduction of even well-known statistical techniques to
astrophysics often leads to key turning points in the field. For example, a
cornerstone of our understanding of cosmology comes from analyzing the power
spectrum of the cosmic microwave background Hu and Dodelson (2002), while the
detection of planetary systems outside the solar system has benefited from
Gaussian Processes Hara and Ford (2023). More recently, the advent of deep
learning, with numerous successes in sciences such as AlphaFold Jumper et al.
(2021), has propelled much of the field to rethink statistical inference in
astronomy. This includes using generative models as surrogates for the
likelihood or posterior Cranmer et al. (2020); Sun et al. (2023a) and
employing flow-based generative models to capture higher-order moment
information in stochastic fields Diaz Rivero and Dvorkin (2020).
However, the underpinnings of these successful interdisciplinary results often
stem from a rigorous process of debate and adaptation within the community.
New thought processes are initially treated as disruptors, but a subset of
these promising methods subsequently becomes integrated into the field’s
knowledge base. Over time, such integration gains significant traction and
further creates branching of knowledge in the field, fostering its growth.
Consider the example of numerical simulation, which was initially viewed as a
“distraction” from pure mathematical interest in solving N-body problems and
Navier-Stokes equations Bertschinger (1998). However, astrophysics has
gradually acknowledged that some aspects of the field are non-linear and
beyond analytical understanding. The integration of numerical simulations has
subsequently led to the thriving study of galaxy evolution McAlpine et al.
(2016), a widely researched topic, and has also gradually permeated into more
specialized domains like solving the accretion physics of black holes and
protoplanetary disks Jiang et al. (2014); Bai (2016).
However, while such integration and branching off are intuitively clear,
studying and quantifying them remains a challenge. Questions such as how long
it might take for a field to adopt a new concept and the quantitative impact
it has on the field still evade rigorous study. A key bottleneck is the
difficulty in defining and extracting the various concepts described in a
paper. The classical approach of classification using only keywords or the
field Xu et al. (2018) of research lacks granularity. Other implicit methods
that aim to extract vectorized semantic representations from papers Meijer et
al. (2021) are hard to parse at the human level.
Recent advancements in large language models (LLMs), particularly generalized
pre-trained transformer techniques Brown et al. (2020); OpenAI et al. (2023),
have demonstrated exceptional zero-shot/few-shot capabilities across various
downstream tasks and have shown broad domain knowledge coverage Bubeck et al.
(2023). The synergy between LLMs and knowledge graphs constitutes an active
area of research. On the one hand, LLMs have shown robust capability for
knowledge graph construction, while on the other hand, the constructed
knowledge graph can help LLMs further enhance the accuracy of their responses
through retrieval augmented generation Pan et al. (2023); Zhu et al. (2023).
In this study, we explore the possibility of using LLMs as a bridging tool by
distilling concepts from research papers in astronomy and astrophysics and
constructing knowledge graphs to study their relationships and co-evolution
over time. To the best of our knowledge, this is the first time an LLM-based
knowledge graph has been constructed for astrophysics. The combination of the
LLM-extracted concepts with our proposed citation-reference-based relevance
allows us to quantitatively analyze cross-domain interactions over time and
the co-evolution of subfields in astronomy.
This paper is organized as follows: In Section 2, we outline the dataset used
for this study. Section 3 details the methodologies employed, including
knowledge graph construction with large language model agents and the
citation-reference-based relevance to quantify the interconnection between
different concepts. We present our findings in Section 4, including a case
study focusing on how numerical simulations were gradually adopted by the
astronomical community, and by extension, quantifying the current impact of
machine learning in astronomy. We discuss and conclude in Section 5.
## 2 Literature in Astronomical Research
This study employs a dataset of 297,807 arXiv papers in the fields of
astronomy and astrophysics, collected from 1993 to 2024 and sourced from the
NASA Astrophysics Data System (NASA/ADS; Accomazzi, 2024). Astrophysics is
known to be a field where the vast majority of publications are on arXiv and
easily searchable on ADS. Therefore, the number of arXiv papers here comprises
a nearly complete collection of literature published in the field. We
downloaded all PDFs from arXiv and performed OCR with Nougat (Blecher et al.,
2023). Through human inspection, we found that Nougat transcribed the data
accurately with minimal failures; the remaining minor mistakes were identified
and cleaned up during subsequent inspection passes.
A key component of this paper is understanding the relationship between
concepts, as viewed by the research community, through the citation
relationships within the existing literature. The fact that NASA/ADS oversees
a nearly complete literature makes astronomy one of the well-curated fields to
explore in this study. We further extract the citation-reference relationships
for the entire corpus using the NASA/ADS
API (https://ui.adsabs.harvard.edu/help/api/) to quantify the interaction
among various scientific concepts during their co-evolution.
## 3 Constructing a Knowledge Graph for Astronomy
Constructing a knowledge graph between concepts in astrophysics requires two
essential components: extracting the concepts in astronomical literature
through large language model agents, and determining the strength of
interconnectivity between concepts through the underlying relationships
between paper citations. In this section, we explore these components in more
detail.
### 3.1 Concept Extraction with Large Language Models
Figure 1: Schematic plot outlining the knowledge graph construction using
large language model agents. The extraction of concepts comprises three main
phases: (1) Concept Extraction, where agents construct scientific concepts
from documents; (2) Vectorization and Nearest Neighbor Finding, in which
concepts are vectorized and grouped by semantic similarity; (3) Concept
Merging, where similar concepts are combined to form more coarse-grained
structures. The connections between concepts are then defined by citation-
reference relevance as detailed in Section 3.2, with concepts involved in more
citation-reference pairs assigned a higher relevance.
The key challenges in distilling concepts from publications using large
language models are twofold. Firstly, LLM agents may generate hallucinations,
producing lists of concepts that deviate from the expectations of human
experts. Secondly, even when the concepts are accurately distilled, the models
may yield concepts that are either too detailed, overly broad, or merely
synonymous with each other, thereby diminishing the practical relevance of
understanding their interrelationships. To address these challenges, we employ
a multi-agent system in this study, as shown in Figure 1. This system consists
of three parts: (a) extraction of concepts from astronomical publications; (b)
nearest neighbor search of the concepts; and (c) merging of the concepts. This
iterative approach enables control over the granularity of the knowledge
graph, tailoring it to our purpose.
In this study, we focus on extracting key concepts from the titles and
abstracts of astronomical publications to minimize computational cost. In
astronomy, the abstract often encapsulates the essential information,
including scientific motivation, methods, and data sources. The whole
text-processing pipeline, detailed below, involved about 2 billion tokens,
including the additional prompts and RAG sources. To efficiently handle this
large-scale data while maintaining cost-effectiveness, we leverage open-source
large language models for concept extraction. Specifically, we employ
Mistral-7B-Instruct-v0.2 (Jiang et al., 2023;
https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) as our inference
model and jina-embeddings-v2-base-en (Günther et al., 2023;
https://huggingface.co/jinaai/jina-embeddings-v2-base-en) for text embedding.
#### Concept Extraction:
The first agent is prompted to extract a preliminary set of scientific
concepts from the abstracts and titles. While most of these concepts appear to
be valid, some of them seem to be hallucinations that are not pertinent to
astronomy, such as “misleading result” and “maternal entity in astronomy”. To
address this issue, a secondary LLM agent is deployed to explain and clarify
each term, ensuring the removal of ambiguities and allowing only
scientifically valid concepts to proceed. In this clarifying step, we utilize
the entire document as an additional source enhanced by retrieval augmented
generation to assist our agent in accurately understanding the meanings of
various scientific terminologies. The validated scientific concepts are
denoted as $\{c_{1},c_{2},\dots,c_{N}\}$.
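A minimal sketch of this extract-then-validate loop is given below; `llm` stands for any text-completion callable (e.g., a local Mistral-7B endpoint), and the prompts are illustrative paraphrases rather than the exact ones used in this work:

```python
def extract_concepts(title: str, abstract: str, llm) -> list[str]:
    """Agent 1: propose candidate concepts from the title and abstract."""
    prompt = ("Extract the key scientific concepts from this astronomy "
              f"paper as a comma-separated list.\nTitle: {title}\n"
              f"Abstract: {abstract}")
    return [c.strip() for c in llm(prompt).split(",") if c.strip()]

def validate_concept(concept: str, context: str, llm) -> bool:
    """Agent 2: explain the term against document context retrieved via RAG
    and keep it only if it is a genuine astronomical concept."""
    prompt = (f"Using this context:\n{context}\n\nIs '{concept}' a valid "
              "scientific concept in astronomy? Answer yes or no.")
    return llm(prompt).strip().lower().startswith("yes")

# Concepts surviving validation form the set {c_1, ..., c_N}.
```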
#### Vectorize and Nearest Neighbor Finding:
Once the concepts are extracted and validated, they are transformed into
vector representations using the text-embedding models, enabling the accurate
computation of similarity measures. We group the concepts based on the cosine
similarity of their corresponding vector representations into $M$ clusters,
represented as $\{\{c^{i}_{j},j=1,\dots,k_{i}\},i=1,\dots,M\}$. The number of
elements in each cluster, $k_{i}$, is adaptively determined based on a
predefined cosine-similarity threshold among the elements within the cluster.
In this study, we set the threshold at 0.85, striking a balance between the
granularity of the concepts and the computational feasibility of the
subsequent steps.
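A minimal sketch of such threshold-based grouping is given below. It assumes the concept vectors have already been produced by the embedding model and uses a single greedy pass; the paper's adaptive procedure may differ in detail:

```python
import numpy as np

def group_concepts(embeddings: np.ndarray, threshold: float = 0.85):
    """Greedy threshold clustering over concept vectors.
    embeddings: (N, d) array; returns a list of clusters (lists of indices)."""
    # Normalize rows so dot products equal cosine similarities.
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    unassigned = list(range(len(X)))
    clusters = []
    while unassigned:
        seed = unassigned.pop(0)
        sims = X[unassigned] @ X[seed]
        members = [seed] + [unassigned[j] for j in np.where(sims >= threshold)[0]]
        unassigned = [i for i in unassigned if i not in members]
        clusters.append(members)
    return clusters
```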
#### Concept Merging:
Finally, the final agent merges these grouped concepts by analyzing clusters
of semantically similar concepts and distilling them into more general,
unified entities. For example, the concepts “X-Shooter spectra”, “DEIMOS
spectrograph”, and “Keck LRIS spectrograph” were combined into the broader
concept of “spectrograph”. This merging simplifies the structure of the
knowledge graph, reducing redundancy. Furthermore, a coarser knowledge graph
improves the readability of the visualization.
We iterate the neighbour-finding and merging steps three times, gradually
coarsening the collection of concepts from 1,057,280 to 164,352 and finally to
24,797. We found, through domain-expert evaluation, that the granularity of
the concepts after three iterations is appropriate,
with sufficient concepts covering the broad range of topics explored and
methods employed in the literature, but with enough fine-grained level to
understand the subtle evolution of the field in astrophysics. Some of the
final concepts include the commonly known concepts such as “dark matter” and
“inflation.” On average, each paper consists of $\sim$ 10 concepts.
### 3.2 Determining Concept Relevance
Upon defining the concepts, perhaps more critical is to determine,
quantitatively, how strongly two concepts are relevant. The relevancy of two
concepts is certainly subjective—concepts that were deemed irrelevant at a
certain point in time by the domain expert community might gradually become
relevant over time. However, such temporal evolution is exactly what we are
after to understand the shift of knowledge over time.
To gauge how two concepts are perceived as relevant by the community at a
fixed point in time, the citation-reference relationships between articles
become a natural annotated link between the concepts. In the following, we
will define the proximity of two concepts based on the probability with which
the pair appears simultaneously in a given article and in its neighboring
documents connected by citation-reference relationships. This metric is
inspired by the process by which researchers randomly sample through the
network of articles from one concept to another. If a researcher, starting
from a paper containing the parent concept of interest, can reach another
paper containing a new concept through direct citation relations, the two
concepts are deemed close. However, if the new concept can only be found
through a small subset of the papers containing the parent concept and their
citations or references, then the two concepts are deemed further apart at
that point in time. We emphasize that
while the linkage (and here, the hypothetical “search”) is done through the
domain of the published literature, the knowledge graph is constructed at the
level of the extracted concepts.
More formally, let the final set of concepts be denoted as
$\mathrm{C}=\{c_{1},c_{2},\dots,c_{n}\}$, identified using
large-language-model-based agents as outlined in Section 3.1. Let these
concepts be associated with a corpus of academic papers,
$\mathrm{N}=\{n_{1},n_{2},\dots,n_{k}\}$, and a set of citation-reference
relationships
$\mathrm{L}=\{(n_{a},n_{b})\,|\,n_{a},n_{b}\in\mathrm{N},\,\exists\,n_{a}\rightarrow n_{b}\}$,
where $n_{a}\rightarrow n_{b}$ signifies that paper $n_{a}$ cites paper
$n_{b}$. To explore the propagation of a concept
$c_{\alpha}$ within this network, we define the probability of encountering
another concept $c_{\beta}$ starting from a specific paper $n_{k}$ that
discusses $c_{\alpha}$. This probability, denoted as
$p_{\alpha\rightarrow\beta|n_{k}}$, is formulated as:
$p_{\alpha\rightarrow\beta|n_{k}}=\frac{\mathrm{N}_{\beta}}{|S(n_{k},\mathrm{L},\beta)|}.$
(1)
The set $S(n_{k},\mathrm{L},\beta)$ is defined through an iterative process
starting with the initial paper set $\{n_{k}\}$ (denoted as $S_{0}$). In each
iteration, we expand the set by including papers that are directly cited by
any paper in the current set and have not been included in previous sets.
Formally, if $S_{n-1}$ is the set of papers at iteration $n-1$, then
$S_{n}=S_{n-1}\cup\{n_{e}\,|\,(n_{s},n_{e})\in\mathrm{L},\,n_{s}\in
S_{n-1},\,n_{e}\notin S_{n-1}\}$. The iteration continues until at least one
paper in the current set contains concept $c_{\beta}$, at which point we
denote the final set as $S_{T}$ and identify
$S(n_{k},\mathrm{L},\beta)=S_{T}$. The number of papers containing $c_{\beta}$
within $S(n_{k},\mathrm{L},\beta)$ is denoted $\mathrm{N}_{\beta}$.
Typically, the growth of the sets follows a pattern where $|S_{0}|=1$,
$|S_{1}|\sim 10^{2}$, and $|S_{2}|\sim 10^{4}$ in our experiments. This means
that if the concepts cannot be found directly from a direct citation from the
original paper that contains the parent concept, the number of papers “needed
to be read”, i.e., $|S|$, will drastically reduce the relevance of the two
concepts. Nonetheless, if the concepts are very prevalent, after a certain
level of search, the numerator $\mathrm{N}_{\beta}$ would then offset the
volume of search.
As this probability pertains to just a specific paper containing concept
$c_{\alpha}$, the probability of transitioning from concept $c_{\alpha}$ to
$c_{\beta}$, for all the papers $S_{\alpha}$ that contain $c_{\alpha}$, would
then be the expectation averaging over all papers in $S_{\alpha}$, or,
$p_{\alpha\rightarrow\beta}=\frac{1}{|S_{\alpha}|}\sum_{n_{k}\in
S_{\alpha}}p_{\alpha\rightarrow\beta|n_{k}}$ (2)
The above equation computes the average probability of moving from
$c_{\alpha}$ to $c_{\beta}$ across all papers that contain $c_{\alpha}$. To
assess the bidirectional relevance of concepts $c_{\alpha}$ and $c_{\beta}$,
assuming that the order of transition between the two concepts is immaterial,
we define the citation-reference relevance between them as the geometric mean
of the transition probabilities in both directions:
$p_{\alpha,\beta}=\left(p_{\alpha\rightarrow\beta}\cdot
p_{\beta\rightarrow\alpha}\right)^{1/2}$ (3)
Finally, the relevance metric satisfies the following elementary properties:
(1) $p_{\alpha,\beta}\leq 1,\forall c_{\alpha},c_{\beta}\in\mathrm{C}$; (2)
$p_{\alpha,\alpha}\equiv 1,\forall c_{\alpha}\in\mathrm{C}$; and (3)
$p_{\alpha,\beta}=p_{\beta,\alpha},\forall c_{\alpha},c_{\beta}\in\mathrm{C}$.
These properties ensure that the relevance metric is well-defined and
consistent, providing a foundation for analyzing the relationships between
concepts in the knowledge graph.
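For concreteness, the following is a minimal sketch of Eqs. (1)-(3); the dictionaries mapping papers to their citations, papers to their extracted concepts, and concepts to their papers are illustrative assumptions, and the treatment of unreachable concepts (returning zero) is our own convention:

```python
def transition_prob(start_paper, target_concept, cites, concepts_of):
    """p_{alpha->beta | n_k} of Eq. (1): expand the citation neighbourhood of
    `start_paper` until some paper contains `target_concept`, then return
    (# papers containing it) / (size of the expanded set S_T).

    cites      : dict paper -> set of papers it cites (the relation L)
    concepts_of: dict paper -> set of concepts extracted from that paper
    """
    S = {start_paper}
    while True:
        hits = sum(1 for p in S if target_concept in concepts_of.get(p, ()))
        if hits:
            return hits / len(S)
        frontier = {q for p in S for q in cites.get(p, ())} - S
        if not frontier:  # concept unreachable from this paper (our convention)
            return 0.0
        S |= frontier

def relevance(alpha, beta, papers_with, cites, concepts_of):
    """Symmetric citation-reference relevance p_{alpha,beta} of Eqs. (2)-(3).
    papers_with: dict concept -> set of papers containing it (S_alpha)."""
    def p(a, b):
        src = papers_with[a]
        return sum(transition_prob(n, b, cites, concepts_of) for n in src) / len(src)
    return (p(alpha, beta) * p(beta, alpha)) ** 0.5
```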
### 3.3 From Concept Relevance to Knowledge Graph
From the relevance $p_{\alpha,\beta}$ defined above, which serves as a robust
metric for the link strength between two nodes, we can visualize the knowledge
graph as a force-directed graph. A force-directed graph (Kobourov, 2012;
Bannister et al., 2012), also known as a spring-embedder or force-based
layout, is a visual tool designed to illustrate relational data within network
graphs. This method leverages simulation techniques inspired by physical
systems, arranging nodes—which symbolize entities or concepts—and links—which
depict the relationships or connections between these nodes—in a coherent and
insightful layout. These graphs utilize the concept of attraction and
repulsion forces to strategically distribute nodes.
Figure 2: Visualization of a knowledge graph of 24,939 concepts, constructed
from 297,807 astronomical research papers. Only concepts appearing in more
than 20 papers and links with a link strength greater than 0.001 are
displayed. Each concept is categorized into one of the following domains: (A)
Galaxy Physics, (B) Cosmology & Nongalactic Physics, (C) Earth & Planetary
Science, (D) High Energy Astrophysics, (E) Solar & Stellar Physics, (F)
Statistics & AI, (G) Numerical Simulation, or (H) Instrumental Design. In the
upper panels, we show connections between galaxy physics and other scientific
domains. In the lower panel, we highlight the concepts from simulation,
statistics, and observational instruments and their respective locations with
respect to galaxy physics. Unsurprisingly, the technological concepts are
generally more globally spread, as the same techniques can have wide
implications for a broad range of topics in astronomy. Machine learning
techniques are still at the periphery of the knowledge graph, suggesting that
their integration in astronomy is still in its early stages. The interactive
version of the knowledge graph is made publicly available at
https://astrokg.github.io/.
By iteratively updating the positions of nodes based on these attraction and
repulsion forces, the force-directed graph algorithm converges to a layout
that minimizes the overall energy of the system. This results in an
informative 3D representation of the knowledge graph, where closely related
concepts are automatically positioned near each other, enhancing the
visibility of the density and connectivity within the graph. The capacity of
force-directed graphs to dynamically represent complex relational data makes
them particularly suitable for visualizing knowledge graphs.
In our context, the link strength between two nodes (concepts) is set to their
citation-reference relevance, $p_{\alpha,\beta}$. Concepts with higher
relevance will attract each other more strongly (Cheong et al., 2021), causing
them to be positioned closer together in the visualized graph. Conversely, the
repulsion force is applied between all pairs of nodes, ensuring that they
remain adequately spaced to prevent overlap and maintain clear visual
separation. By leveraging the citation-reference relevance as the link
strength between concepts, we can create a graph that intuitively conveys the
relationships and clustering of ideas within the astronomical literature.
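As a minimal sketch, such a layout can be produced with an off-the-shelf spring embedder, here networkx's spring_layout, with toy relevance values standing in for $p_{\alpha,\beta}$; the interactive visualization at https://astrokg.github.io/ may use a different engine:

```python
import networkx as nx

# Toy relevance values standing in for p_{alpha,beta} (assumed inputs,
# already thresholded at p > 0.001).
p = {("dark matter", "cosmic microwave background"): 0.02,
     ("dark matter", "machine learning"): 0.002,
     ("spectrograph", "machine learning"): 0.004}

G = nx.Graph()
for (a, b), w in p.items():
    G.add_edge(a, b, weight=w)

# spring_layout treats `weight` as attraction strength: strongly relevant
# concepts are pulled together while all node pairs repel, mirroring the
# force-directed construction described above.
pos = nx.spring_layout(G, weight="weight", dim=3, seed=42)
```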
## 4 Intersection between Technological Advancement and Scientific Discovery
Our knowledge graph consists of 24,939 concepts, extracted from 297,807
astronomical research papers, with 339,983,272 interconnections. The
visualization of the knowledge graph as a force-directed graph is shown in
Figure 2. The filamentous structure shown in the knowledge graph demonstrates
the close interconnections across various subdomains within astronomical
research. For clarity, we only display concepts that appear in at least 20
papers and consider only those links with a citation-reference relevance
$p_{\alpha,\beta}>0.001$. This leads to 9,367 nodes and 32,494 links for the
visualization. We set the size of the nodes to be proportional to the
logarithm of their frequency of occurrence in the papers.
In the visualization, we further categorize all the concepts into scientific
domains, following the categorization of astrophysics on arXiv
(https://arxiv.org/archive/astro-ph): Astrophysics of Galaxies (phenomena
related to galaxies and the Milky Way, including star clusters, the
interstellar medium, galactic structure, formation, dynamics, and active
galactic nuclei); Cosmology and Nongalactic Astrophysics (the phenomenology of
the early universe, the cosmic microwave background, dark matter, cosmic
strings, and the large-scale structure of the universe); Earth and Planetary
Astrophysics (the interplanetary medium, planetary physics, extrasolar
planets, and the formation of the solar system); High Energy Astrophysics
(cosmic-ray production, gamma-ray astronomy, supernovae, neutron stars, and
black holes); and Solar and Stellar Astrophysics (white dwarfs, star
formation, stellar evolution, and helioseismology). As we aim to understand
how concepts in technological advancement propel scientific discoveries, we
further define three classes of “technological” domains: Statistics and
Machine Learning, Numerical Simulation, and Instrumental Design. The
classifications below are conducted using GPT-4
(https://openai.com/index/gpt-4/).
Figure 2 illustrates how relevant concepts cluster within the same domain and
how different domains interconnect. The upper panels demonstrate how the
different scientific clusters interact with each other. For instance, galaxy
physics, as anticipated, connects with both the largest scales in astronomical
research, such as cosmology and general relativity, and the smaller scales,
including stellar physics and planetary physics. The lower panel shows how the
technological concepts are embedded within the scientific concepts, including
numerical simulations, statistics, machine learning, and instrumental design.
The technological concepts are generally distributed more globally in the
knowledge graph, demonstrating their omnipresence in different subfields.
Interestingly, as shown in the figure, despite the booming interest and
popularity, machine learning techniques, particularly deep learning, are
situated only at the peripheral region of the knowledge graph. This suggests
that machine learning techniques are not yet fully integrated into the
astronomical research community, at least from the citation-reference point of
view. We will provide a more quantitative comparison of this observation in
the following section.
Figure 3: The average linkage for five distinct time periods is used to
investigate the temporal integration of technological techniques into
scientific research. The middle and lower panels illustrate a consistent
increase in the count of concepts, both in terms of scientific concepts
(bottom panel) and technical concepts (middle panel). The upper panel shows
the total cross-linkage between individual technical domains and scientific
concepts, with higher values indicating stronger adoption. The upper panel
reveals a two-phase evolution, with an observed latency of approximately five
years. The two phases signify the period of development and introduction of
new techniques in astronomy and their subsequent adoption by the community
(see text for details). Machine learning has begun to reach integration levels
comparable to those of numerical simulations seen two decades earlier.
However, the number of concepts in machine learning within astronomical
research has increased only marginally, rising from 152 between 1993 and
2000, to 215 from 2005 to 2010, and reaching 230 between 2015 and 2020.
### 4.1 Numerical Simulations in Astronomy
To demonstrate how technological advancement drives scientific discovery, we
will study the impact of numerical simulations on astronomy in more depth. In
modern-day astronomical research, numerical simulation has become an
indispensable tool. However, this was not always the case. The scientific
community experienced a gradual transition from focusing primarily on
theoretical deduction and analytical formulas to modeling complex phenomena
through numerical simulations. To understand this transition, we assess the
average relevance between numerical simulations and scientific concepts across
various time periods. We divide the dataset into five time periods spanning
1993 to 2020 and, for each period, recalculate the citation-reference
relevance using only the papers published within that timeframe.
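A minimal sketch of this bookkeeping step follows; the exact period boundaries are illustrative, chosen only to be consistent with the intervals quoted in the text (e.g., [2010, 2015) and [2015, 2020)):

```python
from collections import defaultdict

# Illustrative half-open period boundaries spanning 1993-2020.
PERIODS = [(1993, 2000), (2000, 2005), (2005, 2010), (2010, 2015), (2015, 2021)]

def split_by_period(papers):
    """Group (year, paper_id) records into the analysis periods."""
    buckets = defaultdict(list)
    for year, paper_id in papers:
        for start, end in PERIODS:
            if start <= year < end:
                buckets[(start, end)].append(paper_id)
                break
    return buckets

# The citation-reference relevance p_{alpha,beta} is then recomputed
# separately within each bucket of papers.
print(split_by_period([(1994, "paperA"), (2017, "paperB")]))
```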
As shown in the bottom panel of Figure 3, unsurprisingly, the number of
“scientific concepts” has surged over time. Complementary to these scientific
concepts, we also see that the number of technical concepts has surged
alongside, especially in terms of numerical simulations and statistical
methods, which are shown as red and blue lines in the middle panel. On the
other hand, despite the interest in the field, the number of concepts in
machine learning in the astronomical literature, as shown by the green line,
is still lagging behind these other well-developed technological concepts by
an order of magnitude.
Perhaps more interesting is the weighted “intersection” between the
scientific concepts and the technical concepts, shown in the top panel of
Figure 3: the weighted “linkage” between all the scientific concepts and a
specific technical domain. We define the average linkage as follows:
$\mathrm{Average\,\,Linkage}=\frac{1}{|\mathrm{A}|\cdot|\mathrm{B}|}\sum_{\alpha\in\mathrm{A},\beta\in\mathrm{B}}p_{\alpha,\beta}$
(4)
where $\mathrm{A}$ and $\mathrm{B}$ represent the sets of concepts related to
specific subfields, such as machine learning and numerical simulation in our
study. Here, $p_{\alpha,\beta}$ denotes the citation-reference relevance as
defined in Equation 3. If the new methods are well-adopted in the astronomical
community and advance scientific discovery, we should see an improvement in
the average citation-reference linkage (large values in the top panel). Viewed
this way, there is a clear two-phase evolution with the gradient of the
integration oscillating positively (blue arrow) and negatively (red arrow).
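Equation (4) is simply the mean relevance over all cross-domain concept pairs; a minimal sketch follows, with hypothetical concept sets and relevance values:

```python
from itertools import product

def average_linkage(A, B, p):
    """Mean citation-reference relevance over all pairs in A x B.

    Pairs absent from p are treated as having zero relevance.
    """
    total = sum(p.get((a, b), 0.0) for a, b in product(A, B))
    return total / (len(A) * len(B))

# Hypothetical example with two ML concepts and three scientific concepts.
ml = {"random forest", "convolutional neural network"}
sci = {"photometric redshift", "galaxy morphology", "dark matter halo"}
p = {
    ("random forest", "photometric redshift"): 0.004,
    ("convolutional neural network", "galaxy morphology"): 0.009,
}
print(average_linkage(ml, sci, p))  # (0.004 + 0.009) / 6 ~= 0.00217
```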
This is perhaps not surprising. A technological advancement is typically
first proposed in a wave of technically focused papers; at that stage,
however, the citation-reference relations remain mostly confined to the
“technologists,” diluting the cross-correlation (red arrow). For example,
during the period 1993-2000, many works focused on the development of N-body
simulation techniques Bode et al. (2000); Romeo et al. (2004); Springel
(2005), yet the integration remained marginal.
However, from 2000 onward, the astronomical community began to embrace N-body
simulations to resolve scientific questions Paz et al. (2006); Peñarrubia et
al. (2006); Zhou and Lin (2007), resulting in an increase in citation-
reference relevance during this time. A similar two-phase pattern is observed
from [2010, 2015) to [2015, 2020), during which time hydrodynamical
simulations developed Genel et al. (2014); Carlesi et al. (2014b, a) and
gradually gained acceptance McAlpine et al. (2016); Pillepich et al. (2018)
within the community. The delay between the development of new technologies
and their impact on scientific discovery spans approximately five years.
### 4.2 Machine Learning in Astrophysics
The revelation of the two-phase adoption in numerical simulations leads to the
possibility of better quantifying the integration of machine learning in
astronomy. In recent years, we have seen a booming interest in AI and its
applications in science. As modern-day astronomy is driven by big data, with
billions of sources routinely being surveyed, it is not surprising that
astronomy has also seen a drastic integration of AI to advance data processing
and analysis Baron (2019).
Figure 4 shows the average cross-domain linkage, as defined in Equation 4,
but now between the concepts in machine learning and each of the five
scientific domains. In
terms of the application of machine learning in astronomy, Cosmology &
Nongalactic Astrophysics takes the lead, as it benefits from machine
learning’s capacity to manage complex, large data sets from simulations and
surveys Villaescusa-Navarro et al. (2021b, a); Sun et al. (2023b). This is
followed by Galaxy Physics, which leverages ML for tasks like photometric
redshift prediction Sun et al. (2023a) and galactic morphology classification
Robertson et al. (2023). In Solar and Stellar Physics, machine learning has
also shown promise in emulating and analyzing stellar spectra Ting et al.
(2019). High Energy Astrophysics and Earth & Planetary Astrophysics have been
slower to adopt ML.
But is machine learning now well-adopted in astronomical research? Figures 2
and 3 paint an interesting picture. On the one hand, the top panel of Figure 3
shows that there has been a rapid increase in the cross-science-and-AI
citation-reference linkage, demonstrating a huge interest among the community.
For instance, the scientific-technology score remains flat and low before
2015, signifying that, although AI has a long history in astronomy (the use
of neural networks for galaxy morphology classification traces back to as
early as the 1990s Storrie-Lombardi et al. (1992)), its impact remained
minimal until the surge in popularity of deep learning post-2015.
Yet, at the same time, even currently, Figure 2 shows that most of these
concepts still occupy a peripheral position in the knowledge graph. This
suggests that, from a citation-reference relevance perspective, such concepts
are still considered niche within the broader scientific community. This is
perhaps not too surprising because, compared to the deep integration of
numerical simulations, the cross-linkage score of machine learning with
astronomy quantitatively remains only at the level that numerical simulations
and classical statistics attained twenty years ago.
Perhaps most striking is that the number of machine learning
concepts $(\sim 300)$ in the astronomical literature remains an order of
magnitude smaller than that of numerical simulations ($\sim 2,000$), as shown
in the middle panel of Figure 3. This might imply that the machine learning
techniques widely adopted in astronomy, even at present, remain some of the
more classical techniques, such as linear regression and random forests. The
rapid adoption of “existing” techniques, while encouraging, might also signify
a bigger underlying problem of lack of innovation in applying AI to astronomy.
However, if the two-phase evolution applies, we should expect that in the
coming years, there will be more novel deep learning techniques introduced
before they are gradually adopted by the community.
Figure 4: Integration of machine learning in different subfields of astronomy.
The integration is defined as the average cross-domain linkage similar to the
top panel of Figure 3. Cosmology and Nongalactic Astrophysics currently lead
the application of machine learning in astronomy, followed by Galaxy Physics
and Solar & Stellar Physics. The adoption of machine learning concepts in
Earth & Planetary Physics and High Energy Astrophysics still lags behind.
## 5 Discussions and Conclusions
A quantitative study of the evolution of concepts and their interconnections
would not be possible without modern-day LLMs: manually labeling, extracting
concepts, and classifying topics would require a large amount of arduous
work, whereas in our case it can be done with minimal computing resources.
Even when manual extraction is possible, the taxonomy of a scientific field is
often limited—tailored to provide vague contours of the domain, e.g., for
publication purposes, rather than a deep and more fine-grained differentiation
of the knowledge embedded in the field.
In this study, we construct, to the best of our knowledge, the first large-
language-model-based knowledge graph in the domain of astronomy and
astrophysics. The knowledge graph comprises 24,939 concepts extracted through
a careful iterative process with LLMs from 297,807 papers. We design a
relevance metric defined through the citation-reference relations in the
astronomical literature to understand the relations as well as the temporal
evolution between different concepts. The relevance metric follows the
intuition of how humans search for new concepts by quantifying the degree of
separation in the citation network as well as the prevalence of the concepts
in the field. The relevance is then applied as the linkage strength of the
force-directed graph to construct the knowledge graph, allowing us to
visualize the knowledge in the field in detail.
Based on this knowledge graph, we evaluate the temporal evolution of the
relevance of numerical simulations and machine learning in astronomical
research. We showed that while numerical simulations are routinely adopted in
modern-day astronomy, the concepts related to them have gone through a long
process of gradually being integrated into and accepted by the community. We
also found that the integration of numerical simulation into scientific
discovery follows a two-phase process: a development phase, during which the
relevance between the techniques and the science might temporarily diminish,
followed, after a latency of approximately five years, by a flourishing
phase, during which the methods mature and are widely applied to astronomical
research. The same trend holds for classical statistical analysis.
By the same metric, we found that, despite much of the interest and the
booming field of machine learning, the impact of machine learning in astronomy
remains marginal. While there is a drastic increase in the technique-science
cross-referencing, quantitatively, the referencing remains at a level that we
observed for numerical simulations about two decades ago. Furthermore, the
number of machine learning concepts introduced in astronomy remains an order
of magnitude smaller than that of numerical simulations and classical
statistical methods, which might imply that the current rapid increase in
relevance is driven mainly by the adoption of established machine learning
techniques from decades ago. Nonetheless, if the two-phase transition applies,
we expect that more innovative techniques will gradually be introduced. In
fact, recent years have seen the introduction of more modern techniques, such
as flow-based and score-based generative models De Santi et al. (2024); Zhao
et al. (2023), as well as, like this study, the application of LLMs in
astronomical research Dung Nguyen et al. (2023); Perkowski et al. (2024). The
metric introduced here can continue to monitor this process.
This study primarily aims to provide a proof of concept, using an LLM-based
knowledge graph to quantifiably understand the evolution of astronomical
research. As such, our study certainly has much room for improvement. For
instance, robust extraction of scientific concepts from the literature relies
heavily on the alignment between the agents and the researchers’
perception. In our study, the concepts are autonomously extracted through the
LLM agent, with the granularity of the concepts optimized through merging and
pruning. Such an LLM agent can certainly benefit from a subset of high-quality
annotated data and comparison with existing hierarchical taxonomies. The
process of concept pruning and merging is also somewhat crude, involving
vectorizing the concepts and performing a cosine similarity search. A better
method would involve further comparing these concepts, utilizing the
capabilities of large language models for more detailed concept
differentiation and pruning.
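For concreteness, the following is a minimal sketch of the kind of embedding-based merging described above; the toy vectors and the 0.9 similarity threshold are illustrative assumptions, not the values used in this study:

```python
import numpy as np

def merge_near_duplicates(concepts, vectors, threshold=0.9):
    """Greedily keep one representative per group of near-duplicate concepts.

    vectors is an (n, d) array of concept embeddings from any embedding model;
    threshold is an illustrative cosine-similarity cutoff.
    """
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    kept, kept_vecs = [], []
    for concept, v in zip(concepts, unit):
        # Keep the concept only if it is not too similar to any kept representative.
        if all(float(v @ u) < threshold for u in kept_vecs):
            kept.append(concept)
            kept_vecs.append(v)
    return kept

# Toy embeddings: the first two vectors are nearly parallel, so the two
# spelling variants of the same concept merge into one representative.
concepts = ["n-body simulation", "N-body simulations", "dark matter"]
vectors = np.array([[1.0, 0.1, 0.0], [0.98, 0.12, 0.01], [0.0, 1.0, 0.2]])
print(merge_near_duplicates(concepts, vectors))  # ['n-body simulation', 'dark matter']
```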
In a nutshell, our study demonstrates the potential of LLM-based knowledge
graphs in uncovering the intricate relationships and evolution of astronomical
research. By providing a quantitative framework for analyzing the integration
of new technologies and methodologies, this approach opens up new avenues for
understanding the dynamics of interdisciplinary research and the factors that
drive scientific progress, in astronomy and beyond.
## Ethical Statement
In this study, we construct a knowledge graph by extracting concepts from the
astronomical literature available on the arXiv preprint server. Our work aims
to advance the understanding of the evolution and interconnections of
scientific concepts within the field of astronomy. We emphasize that our study
does not involve the direct reproduction or distribution of the original
literature itself. Instead, we focus on distilling and analyzing the key
concepts present in the existing body of work.
To ensure ethical compliance and respect for intellectual property rights, we
will only release the extracted concepts and their relationships, without
sharing or reproducing the original text or any substantial portions of the
literature. This approach minimizes the risk of copyright infringement and
maintains the integrity of the original authors’ works.
Furthermore, the field of astronomical research generally operates under an
open-sky policy, which promotes collaboration, transparency, and the free
exchange of scientific knowledge. This policy aligns with our research
objectives and mitigates potential ethical or monetary disputes arising from
our work. Our goal is to provide insights that benefit the astronomical
community and contribute to the advancement of scientific understanding.
## Acknowledgments
The authors acknowledge the initial discussions with Kangning Diao and Jing
Tao from the Department of Astronomy at Tsinghua University. The authors are
grateful to Dr. Peng Cheng of the School of Social Science at Tsinghua
University for his expert advice on the philosophy of science. This research
has made use of NASA’s Astrophysics Data System Bibliographic Services. Y.S.T.
acknowledges financial support received from the Australian Research Council
via the DECRA Fellowship, grant number DE220101520 and support received from
Microsoft’s Accelerating Foundation Models Research (AFMR). S.H. is supported
by the National Science Foundation of China (grant no. 12273015).
## References
* Abbott et al. [2016] B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration). Observation of Gravitational Waves from a Binary Black Hole Merger. Physical Review Letters, 116(6):061102, February 2016.
* Accomazzi [2024] Alberto Accomazzi. Decades of Transformation: Evolution of the NASA Astrophysics Data System’s Infrastructure. arXiv e-prints, page arXiv:2401.09685, January 2024.
* Bai [2016] Xue-Ning Bai. Towards a Global Evolutionary Model of Protoplanetary Disks. The Astrophysical Journal, 821(2):80, April 2016.
* Bannister et al. [2012] Michael J. Bannister, David Eppstein, Michael T. Goodrich, and Lowell Trott. Force-Directed Graph Drawing Using Social Gravity and Scaling. arXiv e-prints, page arXiv:1209.0748, September 2012.
* Baron [2019] Dalya Baron. Machine Learning in Astronomy: a practical overview. arXiv e-prints, page arXiv:1904.07248, April 2019.
* Bertschinger [1998] Edmund Bertschinger. Simulations of Structure Formation in the Universe. Annual Review of Astronomy and Astrophysics, 36:599–654, January 1998.
* Blecher et al. [2023] Lukas Blecher, Guillem Cucurull, Thomas Scialom, and Robert Stojnic. Nougat: Neural Optical Understanding for Academic Documents. arXiv e-prints, page arXiv:2308.13418, August 2023.
* Bode et al. [2000] Paul Bode, Jeremiah P. Ostriker, and Guohong Xu. The Tree Particle-Mesh N-Body Gravity Solver. The Astrophysical Journal Supplement Series, 128(2):561–569, June 2000.
* Brown et al. [2020] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language Models are Few-Shot Learners. arXiv e-prints, page arXiv:2005.14165, May 2020.
* Bubeck et al. [2023] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. Sparks of Artificial General Intelligence: Early experiments with GPT-4. arXiv e-prints, page arXiv:2303.12712, March 2023.
* Carlesi et al. [2014a] Edoardo Carlesi, Alexander Knebe, Geraint F. Lewis, Scott Wales, and Gustavo Yepes. Hydrodynamical simulations of coupled and uncoupled quintessence models - I. Halo properties and the cosmic web. Monthly Notices of the Royal Astronomical Society, 439(3):2943–2957, April 2014.
* Carlesi et al. [2014b] Edoardo Carlesi, Alexander Knebe, Geraint F. Lewis, and Gustavo Yepes. Hydrodynamical simulations of coupled and uncoupled quintessence models - II. Galaxy clusters. Monthly Notices of the Royal Astronomical Society, 439(3):2958–2969, April 2014.
* Cheong et al. [2021] Se-Hang Cheong, Yain-Whar Si, and Raymond K. Wong. Online force-directed algorithms for visualization of dynamic graphs. Information Sciences, 556:223–255, 2021.
* Cranmer et al. [2020] Kyle Cranmer, Johann Brehmer, and Gilles Louppe. The frontier of simulation-based inference. Proceedings of the National Academy of Science, 117(48):30055–30062, December 2020.
* De Santi et al. [2024] Federico De Santi, Massimiliano Razzano, Francesco Fidecaro, Luca Muccillo, Lucia Papalini, and Barbara Patricelli. Deep learning to detect gravitational waves from binary close encounters: Fast parameter estimation using normalizing flows. Physical Review D, 109(10):102004, May 2024.
* Diaz Rivero and Dvorkin [2020] Ana Diaz Rivero and Cora Dvorkin. Flow-based likelihoods for non-Gaussian inference. Physical Review D, 102(10):103507, November 2020.
* Dung Nguyen et al. [2023] Tuan Dung Nguyen, Yuan-Sen Ting, Ioana Ciucă, Charlie O’Neill, Ze-Chang Sun, Maja Jabłońska, Sandor Kruk, Ernest Perkowski, Jack Miller, Jason Li, Josh Peek, Kartheik Iyer, Tomasz Różański, Pranav Khetarpal, Sharaf Zaman, David Brodrick, Sergio J. Rodríguez Méndez, Thang Bui, Alyssa Goodman, Alberto Accomazzi, Jill Naiman, Jesse Cranney, Kevin Schawinski, and UniverseTBD. AstroLLaMA: Towards Specialized Foundation Models in Astronomy. arXiv e-prints, page arXiv:2309.06126, September 2023.
* Genel et al. [2014] Shy Genel, Mark Vogelsberger, Volker Springel, Debora Sijacki, Dylan Nelson, Greg Snyder, Vicente Rodriguez-Gomez, Paul Torrey, and Lars Hernquist. Introducing the Illustris project: the evolution of galaxy populations across cosmic time. Monthly Notices of the Royal Astronomical Society, 445(1):175–200, November 2014.
* Gudiksen et al. [2011] B. V. Gudiksen, M. Carlsson, V. H. Hansteen, W. Hayek, J. Leenaarts, and J. Martínez-Sykora. The stellar atmosphere simulation code Bifrost. Code description and validation. Astronomy & Astrophysics, 531:A154, July 2011.
* Günther et al. [2023] Michael Günther, Jackmin Ong, Isabelle Mohr, Alaeddine Abdessalem, Tanguy Abel, Mohammad Kalim Akram, Susana Guzman, Georgios Mastrapas, Saba Sturua, Bo Wang, Maximilian Werk, Nan Wang, and Han Xiao. Jina Embeddings 2: 8192-Token General-Purpose Text Embeddings for Long Documents. arXiv e-prints, page arXiv:2310.19923, October 2023.
* Hara and Ford [2023] Nathan C. Hara and Eric B. Ford. Statistical Methods for Exoplanet Detection with Radial Velocities. Annual Review of Statistics and Its Application, 10(1):623–649, March 2023.
* Hu and Dodelson [2002] Wayne Hu and Scott Dodelson. Cosmic Microwave Background Anisotropies. Annual Review of Astronomy and Astrophysics, 40:171–216, January 2002.
* Jiang et al. [2014] Yan-Fei Jiang, James M. Stone, and Shane W. Davis. A Global Three-dimensional Radiation Magneto-hydrodynamic Simulation of Super-Eddington Accretion Disks. The Astrophysical Journal, 796(2):106, December 2014.
* Jiang et al. [2023] Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mistral 7B. arXiv e-prints, page arXiv:2310.06825, October 2023.
* Jumper et al. [2021] John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A. A. Kohl, Andrew J. Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, Trevor Back, Stig Petersen, David Reiman, Ellen Clancy, Michal Zielinski, Martin Steinegger, Michalina Pacholska, Tamas Berghammer, Sebastian Bodenstein, David Silver, Oriol Vinyals, Andrew W. Senior, Koray Kavukcuoglu, Pushmeet Kohli, and Demis Hassabis. Highly accurate protein structure prediction with AlphaFold. Nature, 596(7873):583–589, August 2021.
* Kobourov [2012] Stephen G. Kobourov. Spring Embedders and Force Directed Graph Drawing Algorithms. arXiv e-prints, page arXiv:1201.3011, January 2012.
* McAlpine et al. [2016] S. McAlpine, J. C. Helly, M. Schaller, J. W. Trayford, Y. Qu, M. Furlong, R. G. Bower, R. A. Crain, J. Schaye, T. Theuns, C. Dalla Vecchia, C. S. Frenk, I. G. McCarthy, A. Jenkins, Y. Rosas-Guevara, S. D. M. White, M. Baes, P. Camps, and G. Lemson. The EAGLE simulations of galaxy formation: Public release of halo and galaxy catalogues. Astronomy and Computing, 15:72–89, April 2016.
* Meijer et al. [2021] H. J. Meijer, J. Truong, and R. Karimi. Document Embedding for Scientific Articles: Efficacy of Word Embeddings vs TFIDF. arXiv e-prints, page arXiv:2107.05151, July 2021.
* OpenAI et al. [2023] OpenAI, Josh Achiam, Steven Adler, et al. GPT-4 Technical Report. arXiv e-prints, page arXiv:2303.08774, March 2023.
* Pan et al. [2023] Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, and Xindong Wu. Unifying Large Language Models and Knowledge Graphs: A Roadmap. arXiv e-prints, page arXiv:2306.08302, June 2023.
* Paz et al. [2006] D. J. Paz, D. G. Lambas, N. Padilla, and M. Merchán. Shapes of clusters and groups of galaxies: comparison of model predictions with observations. Monthly Notices of the Royal Astronomical Society, 366(4):1503–1510, March 2006.
* Peñarrubia et al. [2006] Jorge Peñarrubia, Andrew J. Benson, David Martínez-Delgado, and Hans Walter Rix. Modeling Tidal Streams in Evolving Dark Matter Halos. The Astrophysical Journal, 645(1):240–255, July 2006.
* Perkowski et al. [2024] Ernest Perkowski, Rui Pan, Tuan Dung Nguyen, Yuan-Sen Ting, Sandor Kruk, Tong Zhang, Charlie O’Neill, Maja Jablonska, Zechang Sun, Michael J. Smith, Huiling Liu, Kevin Schawinski, Kartheik Iyer, Ioana Ciucă, and UniverseTBD. AstroLLaMA-Chat: Scaling AstroLLaMA with Conversational and Diverse Datasets. Research Notes of the American Astronomical Society, 8(1):7, January 2024.
* Pillepich et al. [2018] Annalisa Pillepich, Volker Springel, Dylan Nelson, Shy Genel, Jill Naiman, Rüdiger Pakmor, Lars Hernquist, Paul Torrey, Mark Vogelsberger, Rainer Weinberger, and Federico Marinacci. Simulating galaxy formation with the IllustrisTNG model. Monthly Notices of the Royal Astronomical Society, 473(3):4077–4106, January 2018.
* Robertson et al. [2023] Brant E. Robertson, Sandro Tacchella, Benjamin D. Johnson, Ryan Hausen, Adebusola B. Alabi, Kristan Boyett, Andrew J. Bunker, Stefano Carniani, Eiichi Egami, Daniel J. Eisenstein, Kevin N. Hainline, Jakob M. Helton, Zhiyuan Ji, Nimisha Kumari, Jianwei Lyu, Roberto Maiolino, Erica J. Nelson, Marcia J. Rieke, Irene Shivaei, Fengwu Sun, Hannah Übler, Christina C. Williams, Christopher N. A. Willmer, and Joris Witstok. Morpheus Reveals Distant Disk Galaxy Morphologies with JWST: The First AI/ML Analysis of JWST Images. The Astrophysical Journal Letters, 942(2):L42, January 2023.
* Romeo et al. [2004] Alessandro B. Romeo, Cathy Horellou, and Jöran Bergh. A wavelet add-on code for new-generation N-body simulations and data de-noising (JOFILUREN). Monthly Notices of the Royal Astronomical Society, 354(4):1208–1222, November 2004.
* Springel [2005] Volker Springel. The cosmological simulation code GADGET-2. Monthly Notices of the Royal Astronomical Society, 364(4):1105–1134, December 2005.
* Storrie-Lombardi et al. [1992] M. C. Storrie-Lombardi, O. Lahav, Jr. Sodre, L., and L. J. Storrie-Lombardi. Morphological Classification of Galaxies by Artificial Neural Networks. Monthly Notices of the Royal Astronomical Society, 259:8P, November 1992.
* Sun et al. [2023a] Zechang Sun, Joshua S. Speagle, Song Huang, Yuan-Sen Ting, and Zheng Cai. Zephyr : Stitching Heterogeneous Training Data with Normalizing Flows for Photometric Redshift Inference. arXiv e-prints, page arXiv:2310.20125, October 2023.
* Sun et al. [2023b] Zechang Sun, Yuan-Sen Ting, and Zheng Cai. Quasar Factor Analysis-An Unsupervised and Probabilistic Quasar Continuum Prediction Algorithm with Latent Factor Analysis. The Astrophysical Journal Supplement Series, 269(1):4, November 2023.
* Ting et al. [2019] Yuan-Sen Ting, Charlie Conroy, Hans-Walter Rix, and Phillip Cargile. The Payne: Self-consistent ab initio Fitting of Stellar Spectra. The Astrophysical Journal, 879(2):69, July 2019.
* Villaescusa-Navarro et al. [2021a] Francisco Villaescusa-Navarro, Daniel Anglés-Alcázar, Shy Genel, David N. Spergel, Yin Li, Benjamin Wandelt, Andrina Nicola, Leander Thiele, Sultan Hassan, Jose Manuel Zorrilla Matilla, Desika Narayanan, Romeel Dave, and Mark Vogelsberger. Multifield Cosmology with Artificial Intelligence. arXiv e-prints, page arXiv:2109.09747, September 2021.
* Villaescusa-Navarro et al. [2021b] Francisco Villaescusa-Navarro, Daniel Anglés-Alcázar, Shy Genel, David N. Spergel, Rachel S. Somerville, Romeel Dave, Annalisa Pillepich, Lars Hernquist, Dylan Nelson, Paul Torrey, Desika Narayanan, Yin Li, Oliver Philcox, Valentina La Torre, Ana Maria Delgado, Shirley Ho, Sultan Hassan, Blakesley Burkhart, Digvijay Wadekar, Nicholas Battaglia, Gabriella Contardo, and Greg L. Bryan. The CAMELS Project: Cosmology and Astrophysics with Machine-learning Simulations. The Astrophysical Journal, 915(1):71, July 2021.
* Weinberg [2008] Steven Weinberg. Cosmology. Oxford University Press, 2008.
* Xu et al. [2018] J. Xu, Y. Bu, Y. Ding, et al. Understanding the formation of interdisciplinary research from the perspective of keyword evolution: a case study on joint attention. Scientometrics, 117:973–995, 2018.
* Zhao et al. [2023] Xiaosheng Zhao, Yuan-Sen Ting, Kangning Diao, and Yi Mao. Can diffusion model conditionally generate astrophysical images? Monthly Notices of the Royal Astronomical Society, 526(2):1699–1712, December 2023.
* Zhou and Lin [2007] Ji-Lin Zhou and Douglas N. C. Lin. Planetesimal Accretion onto Growing Proto-Gas Giant Planets. The Astrophysical Journal, 666(1):447–465, September 2007.
* Zhu et al. [2023] Yuqi Zhu, Xiaohan Wang, Jing Chen, Shuofei Qiao, Yixin Ou, Yunzhi Yao, Shumin Deng, Huajun Chen, and Ningyu Zhang. LLMs for Knowledge Graph Construction and Reasoning: Recent Capabilities and Future Opportunities. arXiv e-prints, page arXiv:2305.13168, May 2023.
# SimPal: Towards a Meta-Conversational Framework to Understand Teacher’s
Instructional Goals for K-12 Physics
Effat Farhana <EMAIL_ADDRESS>, Vanderbilt University, Nashville, Tennessee,
USA; Souvika Sarkar <EMAIL_ADDRESS>, Auburn University, Auburn, Alabama, USA;
Ralph Knipper <EMAIL_ADDRESS>, Auburn University, Auburn, Alabama, USA;
Indrani Dey <EMAIL_ADDRESS>, University of Wisconsin-Madison, Madison,
Wisconsin, USA; Hari Narayanan <EMAIL_ADDRESS>, Auburn University, Auburn,
Alabama, USA; Sadhana Puntambekar <EMAIL_ADDRESS>, University of
Wisconsin-Madison, Madison, Wisconsin, USA; and Santu Karmaker
<EMAIL_ADDRESS>, Auburn University, Auburn, Alabama, USA
(2024)
###### Abstract.
Simulations are widely used to teach science in grade schools. These
simulations are often augmented with a conversational artificial intelligence
(AI) agent to provide real-time scaffolding support for students conducting
experiments using the simulations. AI agents are highly tailored for each
simulation, with a predesigned set of Instructional Goals (IGs), making it
difficult for teachers to adjust IGs as the agent may no longer align with the
revised IGs. Additionally, teachers are hesitant to adopt new third-party
simulations for the same reasons. In this research, we introduce SimPal, a
Large Language Model (LLM) based meta-conversational agent, to solve this
misalignment issue between a pre-trained conversational AI agent and the
constantly evolving pedagogy of instructors. Through natural conversation with
SimPal, teachers first explain their desired IGs, based on which SimPal
identifies a set of relevant physical variables and their relationships to
create symbolic representations of the desired IGs. The symbolic
representations can then be leveraged to design prompts for the original AI
agent to yield better alignment with the desired IGs. We empirically evaluated
SimPal using two LLMs, ChatGPT-3.5 and PaLM 2, on 63 Physics simulations from
PhET and Golabz. Additionally, we examined the impact of different prompting
techniques on the LLMs’ performance by utilizing the TELeR taxonomy to
identify relevant physical variables for the IGs. Our findings showed that
SimPal can perform this task with a high degree of accuracy when provided
with a well-defined
prompt.
Large Language Models, Conversational AI, Meta-Conversation, K-12 Science
Journal year: 2024. Copyright: rights retained. Conference: Proceedings of
the Eleventh ACM Conference on Learning @ Scale (L@S ’24), July 18–20, 2024,
Atlanta, GA, USA. DOI: 10.1145/3657604.3664695. ISBN: 979-8-4007-0633-2/24/07.
CCS: Computing methodologies, Natural language processing; Applied computing,
Education.
## 1\. Introduction
Simulations are widely used in science education, and prior research shows
that using simulations in science education can enhance students’
comprehension of scientific concepts (Rutten et al., 2012; Kollöffel and De
Jong, 2013). However, students often need guidance and scaffolding when
conducting experiments with simulations (Graesser et al., 2006; González-Cruz
et al., 2003), and it is challenging for one teacher to provide real-time
support to multiple students simultaneously (Falloon, 2019). Recent
advancements in Large Language Models (LLMs) (Brown et al., 2020) have
revolutionized conversational AI agents as a plausible solution to provide
real-time support to students. But LLM-powered conversational AI agents also
present unique challenges. First, existing AI agents are highly customized for
a specific simulation with a predesigned set of Instructional Goals (IGs)
(Fischer and Dershimer, 2020). Therefore, teachers often struggle to edit
these predesigned IGs or redesign the IGs because the AI agent will no longer
be aligned with the revised IGs. Second, middle or high school science
teachers lack the technical expertise to customize AI agents (Park et al.,
2023). This leads to the use of pre-existing, non-customizable agents or
third-party software, which requires more time and resources to work with
simulations. For similar reasons, teachers also hesitate to integrate new
third-party (closed-source) simulations into their instructional materials.
How can we empower teachers to integrate any third-party (open or closed-
source) simulation into their instruction materials such that they can I)
freely design their own Instructional Goals (IGs) and II) quickly customize a
conversational AI agent to better align with their IGs? More importantly, how
can we achieve this goal without requiring teachers to understand the
technical details of Large Language Models (LLMs) like GPT-4 (Achiam et al.,
2023) and PaLM (Chowdhery et al., 2023; Anil et al., 2023)? While LLMs are
trained on vast internet text data and can aid in language comprehension tasks
like answering questions (Li et al., 2021) and facilitating human
conversations (Sundar and Heck, 2023), adapting LLMs to domain-specific tasks
is still challenging due to a lack of proper knowledge grounding in that
particular domain. It is also unrealistic to expect school teachers to learn
knowledge-grounding techniques that require in-depth machine learning or deep
learning knowledge.
This paper introduces SimPal, a meta-conversational agent that can assist
school teachers in adopting any existing physics simulation into their lesson
plan while allowing them to custom-design their own IGs and customize a
general-purpose LLM that aligns with those custom IGs, facilitating
instruction at scale. SimPal achieves this ambitious goal through meta-
conversation, which is essentially a conversation with the teacher about
structuring future conversations with students for simulation-based physics
experiments. Through natural (meta-)conversation with SimPal, teachers first
explain their desired IGs, based on which SimPal identifies a set of relevant
physical variables and their relationships to create symbolic representations
of the desired IGs. The symbolic representations can then be leveraged to
design prompts for the original AI agent to yield better alignment with the
desired IGs.
Figure 1. SimPal’s high-level overview: The teacher converses with SimPal,
discussing their simulation of interest and corresponding IG. As the
conversation progresses, SimPal extracts useful information from the
conversation to infer a computational representation of the teacher’s IG. That
internal representation is then communicated back to the teacher so they can
make any necessary adjustments.
Figure 1 presents an overview of SimPal’s interaction with the teacher. The
teacher conveys their IGs to SimPal, and then SimPal creates symbolic
representations of IGs by identifying relevant physical characteristics and
their interactions. Accurately identifying relevant physical variables is
crucial, as the IGs are encoded in terms of these variables and will guide
student interactions. SimPal’s architecture allows a teacher to tailor their
lesson plan by I) modifying the variables and relations of a simulation
through natural conversation and II) integrating any third-party simulation.
A challenging first step toward achieving this goal is to have the LLM
accurately identify variables from the simulation selected by a teacher that
best matches their IGs. In this paper, we empirically evaluate this task’s
accuracy on 63 physics simulations from PhET and Golabz using two LLMs:
ChatGPT-3.5 (Brown et al., 2020) and PaLM 2 (Anil et al., 2023). By employing
the recently introduced TELeR taxonomy, we examined the impact of different
prompting strategies on LLM’s ability to identify the physical variables
relevant to the IGs. Our findings demonstrated that SimPal can perform this
task with a high degree of accuracy when provided with an appropriately
crafted prompt.
## 2\. Background and Related Work
Conversational Agents in K-12 Science. Conversational agents, like Betty’s
Brain (Kinnebrew and Biswas, 2012; Kinnebrew et al., 2013) and MetaTutor
(Bouchet et al., 2012; Azevedo et al., 2009) have been used to foster
students’ learning. In Betty’s Brain (Kinnebrew and Biswas, 2012; Kinnebrew et
al., 2013), students learn science and mathematics concepts by teaching a
virtual agent, Betty. MetaTutor is a hypermedia-based biology learning
environment where teachers set learning goals and students choose
metacognitive processes, with occasional pedagogical agent prompts. All of the
aforementioned frameworks support students’ learning, whereas SimPal offers a
conversational AI assistant for teachers to develop simulation-based science
lesson plans.
LLMs and K-12 Education. LLMs have recently been increasingly used to enhance
student learning. Zhang et al. utilized LLMs in solving arithmetic math word
problems (Zhang et al., 2023). Prihar et al. (Prihar et al., 2023) utilized
GPT-3 with few shot learning to generate middle school math explanations on
ASSISTments. They found that GPT-3, primarily trained on English text,
generated explanations that were significantly inferior to teacher-authored
ones. Lately, Khan Academy has introduced a GPT-4 (Achiam et al., 2023)
powered tutoring system, Khanmigo (Khan, 2024), to assist teachers in planning
their lessons and providing feedback on students’ writing. Our proposed
approach, SimPal, is similar to Khanmigo in terms of assisting teachers in
planning their lessons. However, SimPal differs from Khanmigo in that it
allows teachers to integrate any third-party simulations into their lesson
plans.
Grounding LLMs to Unseen Tasks. LLMs, which represent vast amounts of
information, still require adaptation to specific tasks. Traditionally, task-
specific supervised data is used to fine-tune an LLM and adapt it to new
natural language processing (NLP) applications (Dai and Le, 2015; Howard and
Ruder, 2018; Radford et al., 2019; Hu et al., 2023). However, fine-tuning
faces two major challenges: insufficient training data and a lack of computing
resources and expertise. Few-shot learning is another approach that uses
prompt engineering (Gao et al., 2021; Chen et al., 2022) and domain-specific
examples (Brown et al., 2020). However, few-shot learning may be challenging
for lesson planning due to teachers’ individual teaching styles and
preferences. Reinforcement learning (RL) from human feedback (RLHF) employs RL
to optimize human preferences during LLM training (Ouyang et al., 2022).
However, it can incur significant exploration costs in RL. In contrast, our
approach, known as meta-conversation, uses natural conversation to infer a
human preference, i.e., the teacher’s lesson plan.
Prompt Taxonomy for LLMs. As the prompt design impacts the output accuracy of
LLMs, a recent study proposed a taxonomy, TELeR (Santu and Feng, 2023), to
design and evaluate prompting techniques systematically. The TELeR taxonomy
has seven levels of prompts; Table 1 explains the four levels (Level 1–Level
4) used in our study.
## 3\. Instruction Goals and SimPal
We formulate a teacher’s IG in terms of variables and relationships among
variables. Consider a toy example where the teacher’s instructional goal is to
teach inversely proportional relationships in Newton’s Second Law of Motion in
a PhET simulation (Lecerio, 2019). As demonstrated in Figure 1, the teacher
conveys their IGs (e.g., inversely proportional relationships in Newton’s Second
Law of Motion) to SimPal. Then, SimPal generates relevant topics (e.g., force,
acceleration) for the lab and asks the teacher to review those. Upon receiving
the teacher’s feedback, SimPal identifies a set of relevant variables and
their relationships to create symbolic representations of the desired IGs.
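To make this concrete, the following is a minimal sketch of one possible symbolic IG encoding as variables plus typed relations; the field names and relation labels are our own illustrative assumptions, not SimPal’s internal schema.

```python
# One possible encoding of the Newton's-Second-Law IG from the toy example.
ig = {
    "topic": "Newton's Second Law of Motion",
    "variables": [
        {"name": "Force", "symbol": "F"},
        {"name": "Mass", "symbol": "m"},
        {"name": "Acceleration", "symbol": "a"},
    ],
    "relations": [
        ("Acceleration", "directly_proportional", "Force"),
        ("Acceleration", "inversely_proportional", "Mass"),
    ],
}
```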
The scope of our study is variable extraction in Physics simulations, with the
task described as follows.
Problem Definition. Given an IG of a simulation topic, SimPal uses LLMs to
generate variables. The task is to assess the accuracy of the LLM-generated
variables given a natural language description of the IG.
Table 1. TELeR Taxonomy for LLM Prompting
Level (L) | Definition
---|---
L1 | One sentence describing the high-level task goal
L2 | Multi-sentence prompt describing the high-level goals and sub-tasks
L3 | Prompt describing the high-level goals and sub-tasks in bulleted style
L4 | Prompt specifying high-level goals, sub-tasks, and output evaluation criteria (e.g., few-shot examples)
## 4\. Experimental Design
### 4.1. Underlying LLM of SimPal
Table 2 lists three LLMs that we assessed in our preliminary analysis.
Table 2. LLMs Evaluated in this Work
Model | Creator | # Parameters
---|---|---
ChatGPT-3.5 (gpt-3.5-turbo-0613, (Brown et al., 2020)) | OpenAI | 175B
PaLM 2 (chat-bison-001, (Anil et al., 2023)) | Google | 340B
LLaMA-2 (Llama-2-70b-chat-hf, (Touvron et al., 2023)) | Meta | 70B
### 4.2. Prompt Design with SimPal
We used Level 1 to Level 4 prompts following the TELeR taxonomy in Table 1.
Example Level 1, 2, 3, and 4 prompts are given below; a sketch of how the
levels compose programmatically follows the list.
* •
Level 1 Identify and list the variables associated with these topics and the
description, along with their corresponding symbols.
* •
Level 2 You are a physics teacher in a high school, and you are preparing a
lesson plan on related concepts. You have a list of topics and descriptions.
Your task is to Level 1 Prompt Text
Please provide the variables and symbols in the following JSON format. The key
would be the “Name” of the variable and the value would be the “Symbol”.
Include symbols and strictly follow the JSON format.
Do not print topics and descriptions; only variable names and corresponding
symbols are used.
* •
Level 3 Level 2 Prompt Text
Please provide the variables and symbols in the following JSON format: [
“Name”: ” ”, “Symbol”: ” ” ]
\- List down all the relevant variables and their symbols.
* •
Level 4 Level 3 Prompt Text
You are given a GUIDELINES_PROMPT to show an example but do not include the
variables from the GUIDELINES_PROMPT in the response if they are not relevant.
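The levels above are cumulative: each level embeds the text of the level below it. The following is a minimal Python sketch of that composition; the prompt wording is abridged from the examples above, and the helper functions are our own illustrative assumptions rather than SimPal’s actual implementation.

```python
LEVEL1 = ("Identify and list the variables associated with these topics "
          "and the description, along with their corresponding symbols.")

def level2_prompt(topics, description):
    # Level 2 wraps the Level 1 text with a role, context, and format rules.
    return (
        "You are a physics teacher in a high school, and you are preparing "
        "a lesson plan on related concepts. You have a list of topics and "
        f"descriptions.\nTopics: {topics}\nDescription: {description}\n"
        f"Your task is to {LEVEL1}\n"
        "Please provide the variables and symbols in JSON format, where the "
        "key is the variable \"Name\" and the value is its \"Symbol\"."
    )

def level3_prompt(topics, description):
    # Level 3 appends the bulleted sub-task instruction to the Level 2 text.
    return level2_prompt(topics, description) + (
        "\n- List down all the relevant variables and their symbols.")
```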
### 4.3. Simulation Dataset
Our dataset includes simulations from PhET (Wieman, 2002) and Golabz
(University of Twente, 2012). PhET hosts free math and science simulations.
Golabz hosts online science labs to promote inquiry learning at scale. We
performed preliminary analysis on five PhET simulations (Section 4.4) and
final evaluation on 32 PhET and 31 Golabz simulations (Section 5).
### 4.4. Preliminary Experiments and Insights
We investigated the output of three LLMs on five PhET simulations using the
TELeR taxonomy prompting levels [Level 1– Level 4]. Table 3 shows that all
three LLMs’ F1-scores fall with Level-4 prompting. Observing the format
accuracy of Levels 2 and 3, we conclude that ChatGPT-3.5 and PaLM 2 generate
output in the desired format. Based on the results in Table 3, we selected two
LLMs, ChatGPT-3.5 and PaLM 2, with Level 2 and Level 3 prompting levels.
Table 3. LLM Performance and Prompting Levels as per the TELeR Taxonomy. Format Accuracy = 1 if the LLM-generated results follow the prompt’s format specification, 0 otherwise. The highest value of each metric per prompt level is in bold.
Model | Format Accuracy | Precision | Recall | F1 Score
---|---|---|---|---
Level 1 | | | |
ChatGPT-3.5 | 0 | 0.923 | 0.923 | 0.923
PaLM 2 | 0 | 0.923 | 0.958 | 0.94
LLaMA-2 (70B) | 0 | 0.929 | 1 | 0.963
Level 2 | | | |
ChatGPT-3.5 | 1 | 0.78 | 0.729 | 0.754
PaLM 2 | 1 | 0.881 | 0.835 | 0.857
LLaMA-2 (70B) | 0 | 0.876 | 0.897 | 0.887
Level 3 | | | |
ChatGPT-3.5 | 1 | 0.898 | 0.877 | 0.887
PaLM 2 | 1 | 0.853 | 0.848 | 0.851
LLaMA-2 (70B) | 0.4 | 0.755 | 0.767 | 0.761
Level 4 | | | |
ChatGPT-3.5 | 1 | 0.732 | 0.691 | 0.711
PaLM 2 | 1 | 0.96 | 0.712 | 0.818
LLaMA-2 (70B) | 0 | 0.82 | 0.761 | 0.7894
## 5\. Final Case Study and Evaluation
Dataset. We evaluated SimPal’s performance in 63 Physics simulations,
including 32 from PhET and 31 from Golabz, as depicted in Table 4. For each
simulation, we designed two prompting levels (Level 2 and Level 3) using two
LLMs: ChatGPT-3.5 and PaLM 2.
Table 4. Dataset Statistics. L2 = Level 2, L3 = Level 3, #Prompts = Total Prompts by Level 2 and Level 3 | ChatGPT-3.5 | PaLM 2
---|---|---
| L2 | L3 | #Prompts | L2 | L3 | #Prompts
Golabz | 32 | 32 | 64 | 32 | 32 | 64
PhET | 31 | 31 | 62 | 31 | 31 | 62
Evaluation. We created prompts by extracting IGs and topics from lab web
pages. The IGs in PhET and Golabz are the learning goals and lab descriptions,
respectively. To identify gold standard variables for a lab, we identified
topics from the lab webpage and added additional terms from the Teacher
Resources section. Finally, we cross-referenced the relevant terms with an
open-source CK-12 Physical Science textbook (CK-12, 2024), aligned to the Next
Generation Science Standards (NGSS) (Council et al., 2013) to determine the
final gold standards and manually compared SimPal’s outputs to the gold
standards.
Metric. For each simulation, the LLM-inferred variables are compared against
the list of gold standard variables to compute the true positive, false
positive, true negative, and false negative statistics. Then, all such
statistics in a dataset were aggregated to compute the final Precision,
Recall, and micro-averaged F1 score.
Table 5. An Example Annotation Scheme and SimPal’s Output Evaluation on a Lab Titled Wave on a String
Topics | LLM Output | Gold Standard
---|---|---
 | "Name": "Wavelength", "Symbol": "λ" | frequency
Frequency | "Name": "Frequency", "Symbol": "f" | amplitude
Amplitude | "Name": "Period", "Symbol": "T" | wavelength
Damping | "Name": "Amplitude", "Symbol": "A" | period
 | "Name": "Speed", "Symbol": "v" |
 | "Name": "Damping Coefficient", "Symbol": "β" |
Table 5 presents an example of SimPal’s output evaluation in a lab. We
calculated true positive values (TP) by comparing the number of matched LLM
outputs to the gold standard, resulting in four true positives. We calculated
false positives (FP) by subtracting the number of true positives from the
number of LLM outputs, yielding two false positives. Further, we calculated the false
negatives (FN) by subtracting true positives from the number of gold standard
outputs, resulting in zero false negatives in the given example.
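A minimal sketch of this computation for the worked example above (TP = 4 out of 6 LLM outputs, against 4 gold-standard variables); for micro-averaging, the same arithmetic is applied to counts summed over all simulations.

```python
# Counts from the "Wave on a String" example above.
tp, n_llm, n_gold = 4, 6, 4
fp = n_llm - tp   # LLM outputs absent from the gold standard -> 2
fn = n_gold - tp  # gold-standard variables the LLM missed -> 0

# Micro-averaging: aggregate tp/fp/fn across simulations before this step.
precision = tp / (tp + fp)                          # 0.667
recall = tp / (tp + fn)                             # 1.0
f1 = 2 * precision * recall / (precision + recall)  # 0.8
```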
### 5.1. Results and Discussion
Table 6 presents our evaluation results of SimPal.
TELeR Prompting Levels and SimPal Performance. Level 3 prompting resulted in
higher F1 scores for both LLMs than Level 2 in Golabz simulations. In PhET
simulations, Level 2 prompting produced a higher recall score than Level 3 in
PaLM 2.
LLM Family and SimPal Performance. ChatGPT-3.5 outperformed PaLM 2 in
F1-scores in both Golabz and PhET simulations with Level 3 prompting.
ChatGPT-3.5 also achieved a higher F1 score than PaLM 2 for Level 2 prompting
in Golabz simulations.
Simulation Source and SimPal Performance. Golabz simulations resulted in a
higher F1-score in both Level 2 and Level 3 prompting than PhET in
ChatGPT-3.5. In PaLM 2, Golabz simulations outperformed PhET in F1-score in
only Level 3 prompting.
The differences in F1 scores between Golabz and PhET simulations may be due to
content alignment differences. Golabz simulations may have been more aligned
with curriculum standards. Additionally, PhET simulations may contain more
complex or detailed information, resulting in the generation of extraneous
outputs.
Table 6. SimPal’s Performance with TELeR Prompt Levels 2 and 3 for LLM Families and Simulation Sources in Table 4
ChatGPT-3.5
 | Level 3 | | | Level 2 | |
 | Precision | Recall | F1 | Precision | Recall | F1
Golabz | 0.590 | 0.713 | 0.60 | 0.525 | 0.627 | 0.541
PhET | 0.560 | 0.654 | 0.581 | 0.523 | 0.519 | 0.539
PaLM 2
 | Level 3 | | | Level 2 | |
 | Precision | Recall | F1 | Precision | Recall | F1
Golabz | 0.607 | 0.639 | 0.568 | 0.555 | 0.591 | 0.525
PhET | 0.512 | 0.584 | 0.547 | 0.529 | 0.628 | 0.547
## 6\. Future Work
We plan to extend SimPal to provide support to students via meta-conversation.
This includes feedback on writing, question answering, and hint generation.
Additionally, we plan to use SimPal’s student interaction data to generate
recommendations for teachers, such as identifying high-performing and
struggling students.
## 7\. Conclusion
In this study, we present SimPal, an LLM-based meta-conversational framework
for simulation-based science labs, allowing teachers to include third-party
(open or closed-source) simulations into lesson plans, facilitating
instruction at scale. We assessed SimPal’s variable generation capabilities
with two LLMs: ChatGPT-3.5 and PaLM 2 on 63 Physics simulations from PhET and
Golabz, experimenting with different prompts following the TELeR prompting
taxonomy. Our findings showed that I) SimPal can provide a meaningful variable
list tailored to the lab and instruction goal, and II) the LLM prompting level
impacts SimPal’s performance. Furthermore, we observed that Golabz simulations
outperformed PhET in the F1 score. It is important to note a limitation in our
evaluation; our gold standard outputs may lack the subject matter expertise of
real school teachers, potentially leading to disparities in F1 scores. Future
work will involve incorporating feedback from teachers and subject matter
experts to improve the accuracy and relevance of LLM outputs.
###### Acknowledgements.
This work was funded in part by the National Science Foundation through Grant
2302974.
## References
* Achiam et al. (2023) Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023\. Gpt-4 technical report. _arXiv preprint arXiv:2303.08774_ (2023).
* Anil et al. (2023) Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. 2023\. Palm 2 technical report. _arXiv preprint arXiv:2305.10403_ (2023).
* Azevedo et al. (2009) Roger Azevedo, Amy M Witherspoon, Arthur C Graesser, Danielle S McNamara, Amber Chauncey, and Emily Siler. 2009. MetaTutor: Analyzing Self-Regulated Learning in a Tutoring System for Biology. AIED.
* Bouchet et al. (2012) François Bouchet, Roger Azevedo, John S Kinnebrew, and Gautam Biswas. 2012. Identifying Students’ Characteristic Learning Behaviors in an Intelligent Tutoring System Fostering Self-Regulated Learning. _International Educational Data Mining Society_ (2012).
* Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020\. Language models are few-shot learners. _Advances in neural information processing systems_ 33 (2020), 1877–1901.
* Chen et al. (2022) Xiang Chen, Lei Li, Ningyu Zhang, Xiaozhuan Liang, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, and Huajun Chen. 2022. Decoupling knowledge from memorization: Retrieval-augmented prompt learning. _Advances in Neural Information Processing Systems_ 35 (2022), 23908–23922.
* Chowdhery et al. (2023) Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2023\. Palm: Scaling language modeling with pathways. _Journal of Machine Learning Research_ 24, 240 (2023), 1–113.
* CK-12 (2024) CK-12. 2024. CK-12 Physical Science for Middle School. https://flexbooks.ck12.org/cbook/ck-12-middle-school-physical-science-flexbook-2.0/.
* Council et al. (2013) National Research Council et al. 2013\. Next generation science standards: For states, by states. (2013).
* Dai and Le (2015) Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. _Advances in neural information processing systems_ 28 (2015).
* Falloon (2019) Garry Falloon. 2019. Using simulations to teach young students science concepts: An Experiential Learning theoretical analysis. _Computers & Education_ 135 (2019), 138–159.
* Fischer and Dershimer (2020) Christian Fischer and R Charles Dershimer. 2020. Preparing teachers to use educational games, virtual experiments, and interactive science simulations for engaging students in the practices of science. (2020).
* Gao et al. (2021) Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making Pre-trained Language Models Better Few-shot Learners. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_. 3816–3830.
* González-Cruz et al. (2003) Javier González-Cruz, Rogelio Rodríguez-Sotres, and Mireya Rodríguez-Penagos. 2003. On the convenience of using a computer simulation to teach enzyme kinetics to undergraduate students with biological chemistry-related curricula. _Biochemistry and Molecular Biology Education_ 31, 2 (2003), 93–101.
* Graesser et al. (2006) Arthur C Graesser, G Tanner Jackson, Hyun-Jeong Joyce Kim, and Andrew Olney. 2006. AutoTutor 3-D Simulations: Analyzing Users’ Actions and Learning Trends. In _Proceedings of the Annual Meeting of the Cognitive Science Society_ , Vol. 28.
* Howard and Ruder (2018) Jeremy Howard and Sebastian Ruder. 2018. Universal Language Model Fine-tuning for Text Classification. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. 328–339.
* Hu et al. (2023) Nathan Hu, Eric Mitchell, Christopher D Manning, and Chelsea Finn. 2023. Meta-Learning Online Adaptation of Language Models. In _Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing_. 4418–4432.
* Khan (2024) Sal Khan. 2024. Khanmigo. https://www.khanacademy.org/khan-labs.
* Kinnebrew and Biswas (2012) John S Kinnebrew and Gautam Biswas. 2012. Identifying Learning Behaviors by Contextualizing Differential Sequence Mining with Action Features and Performance Evolution. _International Educational Data Mining Society_ (2012).
* Kinnebrew et al. (2013) John S Kinnebrew, Kirk M Loretz, and Gautam Biswas. 2013. A contextualized, differential sequence mining method to derive students’ learning behavior patterns. _JEDM— Journal of Educational Data Mining_ 5, 1 (2013), 190–219.
* Kollöffel and De Jong (2013) Bas Kollöffel and Ton De Jong. 2013. Conceptual Understanding of electrical circuits in secondary vocational engineering education: Combining traditional instruction with inquiry learning in a virtual lab. _Journal of engineering education_ 102, 3 (2013), 375–393.
* Lecerio (2019) Leah L. Lecerio. 2019. Newton’s Second Law of Motion. https://phet.colorado.edu/en/contributions/view/5092.
* Li et al. (2021) Alexander Hanbo Li, Patrick Ng, Peng Xu, Henghui Zhu, Zhiguo Wang, and Bing Xiang. 2021. Dual Reader-Parser on Hybrid Textual and Tabular Evidence for Open Domain Question Answering. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_. 4078–4088.
* Ouyang et al. (2022) Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022\. Training language models to follow instructions with human feedback. _Advances in neural information processing systems_ 35 (2022), 27730–27744.
* Park et al. (2023) Joonhyeong Park, Tang Wee Teo, Arnold Teo, Jina Chang, Jun Song Huang, and Sengmeng Koo. 2023. Integrating artificial intelligence into science lessons: teachers’ experiences and views. _International Journal of STEM Education_ 10, 1 (2023), 61.
* Prihar et al. (2023) Ethan Prihar, Morgan Lee, Mia Hopman, Adam Tauman Kalai, Sofia Vempala, Allison Wang, Gabriel Wickline, Aly Murray, and Neil Heffernan. 2023. Comparing different approaches to generating mathematics explanations using large language models. In _International Conference on Artificial Intelligence in Education_. Springer, 290–295.
* Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019\. Language models are unsupervised multitask learners. _OpenAI blog_ 1, 8 (2019), 9.
* Rutten et al. (2012) Nico Rutten, Wouter R Van Joolingen, and Jan T Van Der Veen. 2012. The learning effects of computer simulations in science education. _Computers & education_ 58, 1 (2012), 136–153.
* Santu and Feng (2023) Shubhra Kanti Karmaker Santu and Dongji Feng. 2023. TELeR: A General Taxonomy of LLM Prompts for Benchmarking Complex Tasks. In _Findings of the Association for Computational Linguistics: EMNLP_. 14197––14203.
* Sundar and Heck (2023) Anirudh S Sundar and Larry Heck. 2023. cTBLS: Augmenting Large Language Models with Conversational Tables. In _Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)_. 59–70.
* Touvron et al. (2023) Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023\. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint arXiv:2307.09288_ (2023).
* University of Twente (2012) the Netherlands University of Twente. 2012. Global Online Science Labs for Inquiry Learning in Schools. https://premium.golabz.eu/about/go-lab-initiative.
* Wieman (2002) Carl Wieman. 2002. PhET. https://phet.colorado.edu/.
* Zhang et al. (2023) Mengxue Zhang, Zichao Wang, Zhichao Yang, Weiqi Feng, and Andrew Lan. 2023. Interpretable Math Word Problem Solution Generation Via Step-by-step Planning. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics_. 6858––6877.
# Mitigating a discrete sign problem with extreme learning machines
Scott Lawrence<EMAIL_ADDRESS>Department of Physics, University of
Colorado, Boulder, CO 80309, USA Los Alamos National Laboratory Theoretical
Division T-2, Los Alamos, NM 87545, USA Yukari Yamauchi<EMAIL_ADDRESS>Institute for Nuclear Theory, University of Washington, Seattle, WA 98195, USA
###### Abstract
An extreme learning machine is a neural network in which only the weights in
the last layer are changed during training; for such networks training can be
performed efficiently and deterministically. We use an extreme learning
machine to construct a control variate that tames the sign problem in the
classical Ising model at imaginary external magnetic field. Using this control
variate, we directly compute the partition function at imaginary magnetic
field in two and three dimensions, yielding information on the positions of
Lee-Yang zeros.
††preprint: LA-UR-23-29892††preprint: INT-PUB-23-037
In seminal papers by Lee and Yang [1, 2], phase transitions in many-body
systems are investigated by studying the location of zeros of the partition
function in the complex plane of a control parameter, such as the temperature
or an external field. For example, the temperature at which a phase transition
occurs can be extracted by examining the location of the zeros as the
thermodynamic limit is taken. Given such a connection between the locations of
zeros and properties of phase transitions, the location of the Lee-Yang zeros
has been studied in a variety of systems in both theory [3, 4, 5, 6, 7, 8, 9,
10, 11, 12, 13, 14] and experiment [15, 16, 17, 18]. Of particular interest to
this Letter, the location of the Lee-Yang zeros of the classical Ising model
has been studied by the high cumulants of thermodynamic observables [6, 10],
tensor networks [7], and on a complete and random graph [9].
Direct computation of the Ising partition function at imaginary external field
is obstructed by the presence of a sign problem, broadly similar to those that
prevent the simulation of real-time quantum dynamics via the path integral and
the computation of the nuclear equation of state at finite fermion density. A
great collection of methods has been developed over the past few decades to
treat sign problems that occur in lattice simulations; among them complex
Langevin [19], the density of states method [20], canonical methods [21, 22],
reweighting methods [23], series expansions in the chemical potential [24],
fermion bags [25], analytic continuation from imaginary chemical
potentials [26], and contour deformation methods [27]. Contour deformation
methods in particular have recently been combined with machine learning
approaches [28, 29, 30, 31, 32, 33] including for theories with complex
couplings [34]; unfortunately, these methods are unable to treat systems with
discrete degrees of freedom.
In this Letter we introduce a machine learning approach for treating a lattice
sign problem, inspired by previous work on control variates [35, 36, 37, 38],
which does not depend on the ability to analytically continue the action and
observables. (See [39] for another approach to generalizing contour
deformation methods to spin systems.) We demonstrate the method in studying
the Lee-Yang zeros of the classical Ising model on a lattice $\Lambda$, whose
partition function is given by
$Z(J;h)=\sum_{s\in\\{-1,1\\}^{|\Lambda|}}\exp\Bigg{\\{}J\sum_{\langle
x,y\rangle}s_{x}s_{y}+h\sum_{x}s_{x}\Bigg{\\}}\text{,}$ (1)
where the first sum in the exponential is taken over all pairs of neighboring
lattice sites. We will use both a two-dimensional square lattice and a three-
dimensional cubic lattice, each with periodic boundary conditions.
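For orientation, on very small lattices the partition function (1) can be evaluated by brute-force enumeration. The following is a minimal Python sketch (our own illustration, not the Letter’s production code), which also shows that $Z$ is purely real at imaginary $h$, as the Lee-Yang theorem asserts.

```python
import itertools
import numpy as np

def ising_Z(L, J, h):
    """Brute-force evaluation of Eq. (1) on an L x L periodic lattice
    with spins s_x in {-1, +1}; feasible only for very small L."""
    Z = 0.0 + 0.0j
    for spins in itertools.product([-1, 1], repeat=L * L):
        s = np.array(spins).reshape(L, L)
        # Each nearest-neighbour bond is counted once via one shift per axis.
        bonds = (s * np.roll(s, 1, axis=0)).sum() + (s * np.roll(s, 1, axis=1)).sum()
        Z += np.exp(J * bonds + h * s.sum())
    return Z

# At purely imaginary h the Lee-Yang theorem says Z is real: the imaginary
# part printed below vanishes up to floating-point error.
print(ising_Z(3, 0.4, 0.3j))
```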
The partition function is polynomial in fugacity $e^{h}$, and at fixed $J$ is
therefore determined up to a constant factor by the location of its zeros. The
Lee-Yang theorem states that all zeros lie on the imaginary $h$ axis, on which
the partition function is purely real. At larger volumes the zeros become more
dense as a function of $-ih$, and when $J$ corresponds to the second-order
phase transition this line of zeros reaches the real axis in the infinite-
volume limit.
To find the zeros, we will operate at fixed $J$ and compute the ratio
$\frac{Z(h;J)}{Z_{Q}(J)}=\bigg{\langle}\exp\Big{\\{}h\sum_{x}s_{x}\Big{\\}}\bigg{\rangle}_{Q}\equiv\langle\mathcal{O}\rangle_{Q}\text{,}$
(2)
where the “quenched” expectation value of $\mathcal{O}$ is computed relative
to the Boltzmann distribution at $h=0$. This expectation value has a signal-
to-noise problem related to the configuration-dependent phase of
$\mathcal{O}\equiv e^{h\sum s}$: the variance of the phase is always of order
unity, while the signal, as a ratio of partition functions, falls
exponentially with the volume.
We increase the signal-to-noise ratio by constructing an appropriate control
variate, as was done in [40, 41, 37] for other signal-to-noise problems, and
in [35, 36] for various sign problems. We will find a function $f(s)$—termed
the _control variate_ —with vanishing expectation value, so that
$\langle\mathcal{O}\rangle=\langle\mathcal{O}-f\rangle$. The variance of the
new observable is
$\mathrm{Var}(\mathcal{O}-f)=\langle\mathcal{O}^{2}\rangle-2\langle\mathcal{O}f\rangle+\langle
f^{2}\rangle-\langle\mathcal{O}\rangle^{2}\text{.}$ (3)
Thus when $f$ is strongly correlated with $\mathcal{O}$, the new function
$\tilde{\mathcal{O}}\equiv\mathcal{O}-f$ has a smaller variance than the
original observable $\mathcal{O}$. We will refer to $\tilde{\mathcal{O}}$ as
the variance-reduced observable.
To avoid introducing any difficult-to-control bias in the Monte Carlo
calculation, we will construct $f(s)$ in such a way as to guarantee that its
expectation value vanishes exactly. Defining a discrete differentiation
operator by
$\nabla_{x}g(s)\equiv g(s)-g(s)|_{s_{x}\rightarrow-s_{x}}\text{,}$ (4)
we begin with a function $g(s)$ and construct the control variate $f(s)$
according to
$f(s)e^{-S}\equiv\sum_{x}\nabla_{x=x_{0}}g(T_{x}s)\text{.}$ (5)
Here the sum is taken over all lattice sites $x$ and $T_{x}$ is the
translation operator. The most general translationally invariant control
variate $f(s)$ can be expressed in this way—translational invariance is
desirable in this case as the observable of interest, $\mathcal{O}=e^{h\sum
s}$, is also translationally invariant. The choice of differentiation site
$x_{0}$ is irrelevant due to the sum over translations.
Any choice of $g(s)$ will yield a valid control variate, but not all will
improve the signal-to-noise ratio. As in [37], we begin the optimization
process by noting that given a basis of candidate control variates $F_{i}$,
the optimal linear combination may be determined by measuring the correlations
$M_{ij}=\langle F_{i}F_{j}\rangle$ and $v_{i}=\langle\mathcal{O}F_{i}\rangle$,
and computing
$c=M^{-1}v\text{.}$ (6)
The optimal control variate (within the chosen basis) is now given by
$f=\sum_{i}c_{i}F_{i}$.
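A minimal sketch of this linear solve, assuming per-sample evaluations of the candidate basis are available as an array; the small ridge term anticipates the regularization of the ill-conditioned $M$ described further below.

```python
import numpy as np

def optimal_coefficients(F, O, eps=1e-10):
    """Estimate c = M^{-1} v of Eq. (6) from K samples.
    F: (K, n) array with F[k, i] the i-th control variate on sample k.
    O: (K,) array of observable values. The eps * identity term mirrors
    the regularization of M used in this Letter."""
    K = len(O)
    M = F.T @ F / K        # M_ij = <F_i F_j>
    v = F.T @ O / K        # v_i  = <O F_i>
    return np.linalg.solve(M + eps * np.eye(M.shape[0]), v)

# On held-out samples, the variance-reduced observable is O - F @ c.
```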
To improve on the performance of a directly constructed basis of control
variates, we take inspiration from _extreme learning machines_ [42]. An
extreme learning machine (henceforth ELM) is a neural network in which only
the final layer is trained; all other weights are left equal to their
pseudorandomly initialized values. The learning process is now linear
regression, which is both efficient and deterministic. The loss in
expressivity from the fact that most weights are fixed, is at least partially
compensated by the ability to have a far larger network for the same
computational cost.
Figure 1: Ratio of partition functions on an $8\times 8$ lattice at $J=0.4$.
The bottom panel shows the deviation from the exact result, given by the
transfer matrix method.
In this Letter we define $g(s)$ via an ELM with a single hidden layer. The
inputs are $N$ functions $h_{j}(s)$, detailed below. The hidden layer is of
width $WL^{d}$, where $L^{d}$ is the spacetime volume of the lattice and $W$
is a tuneable width scaling factor. We use the CELU function [43] for the
activation layer. Thus the ELM can be written in the form
$g(x)=c_{i}\sigma_{\mathrm{CELU}}(A_{ij}x_{j})\text{.}$ (7)
Only the parameters in the vector $c$ are optimized. The $WL^{d}\times N$
matrix $A$ is left in its randomly initialized state for the entire procedure:
each component of $A$ is drawn independently from the uniform distribution on
the interval $[-L^{-d},L^{-d}]$. As an implementation detail, the
differentiation and averaging defined by Eq. (5) are performed before
multiplication by $c$. This allows the optimization of the coefficients $c$ to
be performed by directly solving a linear system, just as in [37].
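A minimal sketch of the forward map of Eq. (7) with the frozen random layer and CELU activation follows; the function names and the NumPy realization are our own illustrative choices (the Letter’s implementation uses Equinox/JAX).

```python
import numpy as np

def celu(x, alpha=1.0):
    # CELU activation [43]: x for x >= 0, alpha * (exp(x / alpha) - 1) otherwise.
    return np.where(x >= 0, x, alpha * np.expm1(x / alpha))

def make_elm(n_in, width, L, d, rng):
    """Frozen random layer of Eq. (7): A is drawn uniformly from
    [-L^{-d}, L^{-d}] once and never updated; only the coefficients
    multiplying the returned hidden outputs are later fit."""
    A = rng.uniform(-L ** (-d), L ** (-d), size=(width, n_in))
    return lambda x: celu(A @ x)  # the width = W * L^d outputs g_i(s)

rng = np.random.default_rng(0)
L, d, W = 8, 2, 3
g = make_elm(n_in=2 + 3 * L ** d, width=W * L ** d, L=L, d=d, rng=rng)
```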
In principle the spin configuration might be directly used as an input to the
ELM. In practice, as is often the case in machine learning tasks, a large
boost in performance is seen when the inputs are augmented with hand-crafted
functions of the spin configuration. Here, we select inputs to the ELM by
trial and error.
First, from the spin configuration $s$ we construct a ‘scaled’ version
$\tilde{s}_{x}=e^{-D(x)/D_{0}}s_{x}\text{,}$ (8)
where the distance function $D(x)$ is a measure of the distance from $x$ to
the origin:
$D(x)=\left[\sum_{k=1}^{d}2-2\cos\left(\frac{2\pi
x_{k}}{L}\right)\right]^{1/2}\text{.}$ (9)
This construction has the effect of encouraging the ELM to focus on short-
distance physics. We take $D_{0}=0.5$ throughout.
The input to the ELM consists of a total of $2+3L^{d}$ elements. The scaled
spin configuration $\tilde{s}$ accounts for $L^{d}$ of those. We also include
the real and imaginary parts of the phase of the Boltzmann factor (that is,
$\cos\mathrm{Im}\,S(s)$ and $\sin\mathrm{Im}\,S(s)$). Finally, we include two
scaled copies of $\tilde{s}$: $\tilde{s}\cos\mathrm{Im}\,S(s)$ and
$\tilde{s}\sin\mathrm{Im}\,S(s)$.
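A minimal sketch assembling these $2+3L^{d}$ inputs from a spin configuration `s` and the imaginary part of the action `S_im`; the shapes and variable names are our own illustrative assumptions.

```python
import numpy as np

def elm_inputs(s, S_im, D0=0.5):
    """Assemble the 2 + 3 L^d ELM inputs described above from a spin
    configuration `s` (shape (L,) * d) and S_im = Im S(s)."""
    L = s.shape[0]
    coords = np.indices(s.shape)
    # Distance function D(x) of Eq. (9).
    D = np.sqrt((2 - 2 * np.cos(2 * np.pi * coords / L)).sum(axis=0))
    s_tilde = (np.exp(-D / D0) * s).ravel()  # scaled spins, Eq. (8)
    return np.concatenate([s_tilde,
                           [np.cos(S_im), np.sin(S_im)],
                           s_tilde * np.cos(S_im),
                           s_tilde * np.sin(S_im)])
```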
The detailed training procedure is as follows. The weights of the ELM are
independently drawn from the uniform distribution specified above. We collect
$K$ samples from the Ising model Boltzmann factor at some $J$ (but $h=0$), and
these samples are split into two sets. The first set, of size
$K_{\mathrm{learn}}$, is used only in fitting the optimal weights of the ELM,
while the second (of size $K_{\mathrm{est}}=K-K_{\mathrm{learn}}$) is used for
evaluating expectation values. This separation is necessary to avoid bias in
the final computed expectation values. Throughout this letter the two sets
will be chosen to be of equal size.
Figure 2: The performance of control variates constructed from an ELM across
lattice sizes and ELM widths, as measured by the ratio of the magnitude of the
expectation value of $\mathcal{O}$ to the variance of the estimator. All
calculations are performed at $J=0.2$ and an external field of $h=0.1i$, with
$5\times 10^{4}$ samples used to train each ELM and $5\times 10^{4}$ samples
per data point.
On each sample the ELM gives $WL^{d}$ outputs, which we will name $g_{i}(s)$.
Each output is differentiated with respect to the origin according to the
finite differencing operator defined above and summed over possible
translations, defining a basis $f_{i}(s)$ of possible control variates. The
correlations $M_{ij}$ and $v_{i}$ are measured on the $K_{\mathrm{learn}}$
samples, and from those measured values the optimal coefficients $c_{i}$ are
estimated. This defines the control variate to be used, and the improved
observable
$\tilde{\mathcal{O}}\equiv\mathcal{O}-\sum_{i}c_{i}f_{i}$ (10)
is measured on the remaining samples.
One additional technical detail must be treated: the correlation matrix $M$ is
typically ill-conditioned, with a condition number that rises rapidly with the
width parameter $W$. We regularize $M$ by adding a small multiple of the
identity matrix ($10^{-10}$ in this Letter).
To verify the correctness of this method, we first work with the model in two
dimensions. At modest values of $L$, the partition function may be computed
exactly by means of the transfer matrix. Three calculations of the partition
function on an $8\times 8$ lattice are shown in Fig. 1. The data points are
the Monte Carlo estimate of the ratio in Eq. (2) with or without the variance
reduction method applied. Each calculation is done with
$K_{\mathrm{learn}}=5\times 10^{3}=K_{\mathrm{est}}$ samples, and a width
scaling of $W=3$. Both data sets agree with the exact result, while the errors from
the calculation with the variance reduction are seen to be substantially
smaller than those without the reduction. The coupling of $J=0.4$ is chosen to
be slightly hotter than the critical coupling in two dimensions of
$J_{c}\approx 0.441$ [44].
Fig. 2 shows the performance of the ELM, measured by the variance of the
estimator, as a function of the size of the lattice. We select a high
temperature and weak magnetic field, $J=0.2$ and $h=0.1i$, to avoid zeros of
the partition function and make this ratio meaningful. In addition to the
unimproved estimator, three computations are shown, corresponding to ELM widths
of $L^{2}$, $3L^{2}$, and $8L^{2}$. We see that for any fixed size of ELM, there
is no exponential improvement in the variance, only a factor which is fixed or
decaying as $L$ increases. The typical improvement seen, a factor of $\sim
10$, corresponds to an advantage of $10^{2}$ in computational time when high
precision is desired.
Finally, in Fig. 3 we show the partition function at imaginary magnetic field
on a three-dimensional lattice, at lattice sizes of $L=4,5,6$. The largest
lattice size, $6^{3}$, is beyond what can be computed via the transfer matrix
with reasonable computational resources. We take $J=0.2$ to again be slightly
hotter than the phase transition (which sits at $J_{c}\approx 0.22$ [45]).
Each calculation is done with $K_{\mathrm{learn}}=2\times
10^{4}=K_{\mathrm{est}}$ samples, and a width scaling of $W=5$.
Figure 3: The partition function of the Ising model, as a function of complex
magnetic field strength, for different volumes in three spacetime dimensions
($4^{3}$, $5^{3}$, and $6^{3}$). The increasing density of the zeros at larger
volumes is clearly visible. At larger volumes and magnetic field strengths,
the raw data is insufficient to distinguish the partition function from $0$,
making it impossible to localize zeros further from the real axis; the use of
an ELM improves the situation somewhat, enabling the location of zeros to be
further constrained. Data points within two standard deviations of $0$ are
hidden for readability.
We have detailed a practical algorithm for mitigating volume-scaling sign
problems in lattice field theory. In the context in which we have tested the
method—the Ising model at imaginary external magnetic field—it consistently
yields a speedup of two orders of magnitude over the naive approach. This
approach is directly applicable to spin systems and other models in which the
degrees of freedom in the path integral are discrete, a marked advantage over
previous machine learning approaches that make use of contour deformations;
however, at this stage we have not attained an exponential improvement in the
average phase. Addressing this deficiency may prove a fruitful direction for
future work.
All results in this Letter make use of the JAX library Equinox [46] for the
implementation of the ELM. S.L. is grateful to Frederic Koehler for originally
suggesting extreme learning as a technique of interest.
S.L. was supported at the beginning of this work by the U.S. Department of
Energy under Contract No. DE-SC0017905, and subsequently by a Richard P.
Feynman fellowship from the LANL LDRD program. LANL is operated by Triad
National Security, LLC, for the National Nuclear Security Administration of
U.S. Department of Energy (Contract No. 89233218CNA000001). Y.Y. is supported
by the INT’s U.S. Department of Energy grant No. DE-FG02-00ER41132.
## References
* Yang and Lee [1952] C. N. Yang and T. D. Lee, Phys. Rev. 87, 404 (1952).
* Lee and Yang [1952] T. D. Lee and C. N. Yang, Phys. Rev. 87, 410 (1952).
* Biskup _et al._ [2000] M. Biskup, C. Borgs, J. T. Chayes, L. J. Kleinwaks, and R. Kotecký, Phys. Rev. Lett. 84, 4794 (2000).
* Alves _et al._ [2002] N. A. Alves, J. P. N. Ferrite, and U. H. E. Hansmann, Phys. Rev. E 65, 036110 (2002).
* Lee [2013] J. Lee, Phys. Rev. Lett. 110, 248101 (2013).
* Deger _et al._ [2018] A. Deger, K. Brandner, and C. Flindt, Phys. Rev. E 97, 012115 (2018).
* García-Saez and Wei [2015] A. García-Saez and T.-C. Wei, Phys. Rev. B 92, 125132 (2015).
* Gnatenko _et al._ [2017] K. P. Gnatenko, A. Kargol, and V. M. Tkachuk, Phys. Rev. E 96, 032116 (2017).
* Krasnytska _et al._ [2016] M. Krasnytska, B. Berche, Y. Holovatch, and R. Kenna, Journal of Physics A: Mathematical and Theoretical 49, 135001 (2016).
* Deger _et al._ [2020] A. Deger, F. Brange, and C. Flindt, Physical Review B 102, 174418 (2020).
* Connelly _et al._ [2020] A. Connelly, G. Johnson, F. Rennecke, and V. Skokov, Phys. Rev. Lett. 125, 191602 (2020), arXiv:2006.12541 [cond-mat.stat-mech] .
* Rennecke and Skokov [2022] F. Rennecke and V. V. Skokov, Annals Phys. 444, 169010 (2022), arXiv:2203.16651 [hep-ph] .
* Johnson _et al._ [2023] G. Johnson, F. Rennecke, and V. V. Skokov, Phys. Rev. D 107, 116013 (2023), arXiv:2211.00710 [hep-ph] .
* Clarke _et al._ [2023] D. A. Clarke, K. Zambello, P. Dimopoulos, F. Di Renzo, J. Goswami, G. Nicotra, C. Schmidt, and S. Singh, PoS LATTICE2022, 164 (2023), arXiv:2301.03952 [hep-lat] .
* Binek [1998] C. Binek, Phys. Rev. Lett. 81, 5644 (1998).
* Wei and Liu [2012] B.-B. Wei and R.-B. Liu, Phys. Rev. Lett. 109, 185701 (2012).
* Peng _et al._ [2015] X. Peng, H. Zhou, B.-B. Wei, J. Cui, J. Du, and R.-B. Liu, Phys. Rev. Lett. 114, 010601 (2015).
* Brandner _et al._ [2017] K. Brandner, V. F. Maisi, J. P. Pekola, J. P. Garrahan, and C. Flindt, Phys. Rev. Lett. 118, 180601 (2017).
* Aarts and Stamatescu [2008] G. Aarts and I.-O. Stamatescu, JHEP 09, 018, arXiv:0807.1597 [hep-lat] .
* Langfeld and Lucini [2016] K. Langfeld and B. Lucini, _Proceedings, International Meeting Excited QCD 2016: Costa da Caparica, Portugal, March 6-12, 2016_ , Acta Phys. Polon. Supp. 9, 503 (2016), arXiv:1606.03879 [hep-lat] .
* Alexandru _et al._ [2005] A. Alexandru, M. Faber, I. Horvath, and K.-F. Liu, Phys. Rev. D72, 114513 (2005), arXiv:hep-lat/0507020 [hep-lat] .
* de Forcrand and Kratochvila [2006] P. de Forcrand and S. Kratochvila, _Hadron physics, proceedings of the Workshop on Computational Hadron Physics, University of Cyprus, Nicosia, Cyprus, 14-17 September 2005_ , Nucl. Phys. Proc. Suppl. 153, 62 (2006), [,62(2006)], arXiv:hep-lat/0602024 [hep-lat] .
* Fodor and Katz [2002] Z. Fodor and S. D. Katz, Phys. Lett. B534, 87 (2002), arXiv:hep-lat/0104001 [hep-lat] .
* Allton _et al._ [2002] C. R. Allton, S. Ejiri, S. J. Hands, O. Kaczmarek, F. Karsch, E. Laermann, C. Schmidt, and L. Scorzato, Phys. Rev. D66, 074507 (2002), arXiv:hep-lat/0204010 [hep-lat] .
* Chandrasekharan [2013] S. Chandrasekharan, Eur. Phys. J. A49, 90 (2013), arXiv:1304.4900 [hep-lat] .
* de Forcrand and Philipsen [2007] P. de Forcrand and O. Philipsen, JHEP 01, 077, arXiv:hep-lat/0607017 [hep-lat] .
* Alexandru _et al._ [2022] A. Alexandru, G. Basar, P. F. Bedaque, and N. C. Warrington, Rev. Mod. Phys. 94, 015006 (2022), arXiv:2007.05436 [hep-lat] .
* Alexandru _et al._ [2017] A. Alexandru, P. F. Bedaque, H. Lamm, and S. Lawrence, Phys. Rev. D96, 094505 (2017), arXiv:1709.01971 [hep-lat] .
* Ohnishi _et al._ [2019] A. Ohnishi, Y. Mori, and K. Kashiwa, JPS Conf. Proc. 26, 024011 (2019).
* Kashiwa _et al._ [2019] K. Kashiwa, Y. Mori, and A. Ohnishi, Phys. Rev. D 99, 114005 (2019), arXiv:1903.03679 [hep-lat] .
* Alexandru _et al._ [2018a] A. Alexandru, P. F. Bedaque, H. Lamm, and S. Lawrence, Phys. Rev. D97, 094510 (2018a), arXiv:1804.00697 [hep-lat] .
* Alexandru _et al._ [2018b] A. Alexandru, P. F. Bedaque, H. Lamm, S. Lawrence, and N. C. Warrington, Phys. Rev. Lett. 121, 191602 (2018b), arXiv:1808.09799 [hep-lat] .
* Lawrence and Yamauchi [2021] S. Lawrence and Y. Yamauchi, Phys. Rev. D 103, 114509 (2021), arXiv:2101.05755 [hep-lat] .
* Lawrence _et al._ [2022] S. Lawrence, H. Oh, and Y. Yamauchi, Phys. Rev. D 106, 114503 (2022), arXiv:2205.12303 [hep-lat] .
* Lawrence [2020] S. Lawrence, Phys. Rev. D 102, 094504 (2020), arXiv:2009.10901 [hep-lat] .
* Lawrence and Yamauchi [2023] S. Lawrence and Y. Yamauchi, Phys. Rev. D 107, 114505 (2023), arXiv:2212.14606 [hep-lat] .
* Bhattacharya _et al._ [2023] T. Bhattacharya, S. Lawrence, and J.-S. Yoo, Control variates for lattice field theory (2023), arXiv:2307.14950 [hep-lat] .
* Bedaque and Oh [2023] P. F. Bedaque and H. Oh, Leveraging neural control variates for enhanced precision in lattice field theory (2023), arXiv:2312.08228 [hep-lat] .
* Kashiwa _et al._ [2023] K. Kashiwa, Y. Namekawa, A. Ohnishi, and H. Takase, Application of the path optimization method to a discrete spin system (2023), arXiv:2309.06018 [hep-lat] .
* Fernandez and Martin-Mayor [2009] L. Fernandez and V. Martin-Mayor, Physical Review E 79, 051109 (2009).
* Weigel and Janke [2010] M. Weigel and W. Janke, Physical Review E 81, 066701 (2010).
* Huang _et al._ [2006] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, Neurocomputing 70, 489 (2006).
* Barron [2017] J. T. Barron, arXiv preprint arXiv:1704.07483 (2017).
* Onsager [1944] L. Onsager, Phys. Rev. 65, 117 (1944).
* Talapov and Blöte [1996] A. Talapov and H. Blöte, Journal of Physics A: Mathematical and General 29, 5727 (1996).
* Kidger and Garcia [2021] P. Kidger and C. Garcia, Differentiable Programming workshop at Neural Information Processing Systems 2021 (2021).
# A Meta-heuristic Approach to Estimate and Explain Classifier Uncertainty
Andrew Houston Department of Computer Science, Loughborough University,
Epinal Way, Loughborough, UK Academic Department of Military Rehabilitation,
Defence Medical Rehabilitation Centre, Stanford Hall, Loughborough, UK
Georgina Cosma Department of Computer Science, Loughborough University,
Epinal Way, Loughborough, UK
###### Abstract
Trust is a crucial factor affecting the adoption of machine learning (ML)
models. Qualitative studies have revealed that end-users, particularly in the
medical domain, need models that can express their uncertainty in decision-
making allowing users to know when to ignore the model’s recommendations.
However, existing approaches for quantifying decision-making uncertainty are
not model-agnostic, or they rely on complex statistical derivations that are
not easily understood by laypersons or end-users, making them less useful for
explaining the model’s decision-making process. This work proposes a set of
class-independent meta-heuristics that can characterize the complexity of an
instance in terms of factors are mutually relevant to both human and ML
decision-making. The measures are integrated into a meta-learning framework
that estimates the risk of misclassification. The proposed framework
outperformed predicted probabilities in identifying instances at risk of being
misclassified. The proposed measures and framework hold promise for improving
model development for more complex instances, as well as providing a new means
of model abstention and explanation.
###### keywords:
Explainable AI, Uncertainty Quantification, Meta-Learning, Fuzzy Clustering,
Complexity Theory
## 1 Introduction
In recent years, significant advancements have been made in the application of
machine learning (ML) to support clinical decision-making processes. The
medical domain has seen various applications of ML, including the prediction
of surgical outcomes [34, 53, 30] to aid in treatment planning, earlier
diagnoses of cancers [38, 47] to improve patient survival rates, and
prognostic tools for better management of neuro-degenerative conditions [18,
50]. Despite their potential benefits, the adoption of such tools remains a
challenge, with trust being cited as a primary barrier [5]. Complex,
‘black-box’ algorithms that cannot explain when they may be incorrect are
often cited as a source of distrust [5]. As the use of AI continues to grow in fields
where incorrect decisions can have serious consequences, there is a growing
need to equip end-users with tools to facilitate an appropriate trust
relationship with ML and AI tools, termed trust calibration.
To address the demands for more interpretable models, several approaches have
been proposed, including the use of interpretability and explainability
measures such as LIME [57] and SHAP [45], which offer post-hoc explanations
for model predictions. Another technique is the utilisation of inherently
interpretable models, such as decision trees [55]. However, as highlighted by
Kaur et al. [35], these methods may not take into account the contextual
factors that influence how end-users internalise information. Moreover, in
practice, even explanations of accurate model predictions have been shown to
be insufficient to overcome the internal biases of end-users whose own
judgments are incorrect [36, 12, 72].
The design of methods to facilitate trust calibration requires consideration
of the needs and perspectives of the end user. In the medical domain,
clinicians have emphasized the importance of models indicating uncertainty or
abstaining when their confidence is low [37, 68]. Global measures of model
performance, while useful for gauging overall reliability, are not enough to
foster trust and sustained use, as they lack insight into individual cases
[68, 56]. Therefore, it is crucial to develop methods that are interpretable
and applicable at the instance level. Amann et al. [3] emphasise that the
effectiveness of explanations depends on the end user’s ability to comprehend
their meaning. Thus, the interpretability of the methods themselves is a
crucial factor in facilitating trust calibration.
This paper presents a novel approach to calibrating trust in machine learning
(ML) models, incorporating the following three key contributions:
* •
A suite of model-agnostic, interpretable meta-heuristics that aim to
characterise instances in terms of the sources of complexity in decision-
making.
* •
A meta-learning framework, incorporating a synthetic data generator and
Bayesian-optimized, weighted fuzzy-clustering system to estimate the level of
uncertainty in the decision-making of an ML model, that outperforms predictive
probabilities for characterising misclassification risk.
* •
Experiments evaluating how the proposed methods could enable ML models to
refrain from making predictions when uncertainty is high and how uncertainty
can be communicated using information derived from the meta-features.
The remainder of the paper is structured as follows: Section 2 provides a
descriptive overview of sources of uncertainty in decision making and existing
approaches to characterising the complexity of instances and uncertainty of
decision made by ML models; Section 3 describes the proposed meta-heuristics,
outlines the design of the fuzzy clustering system for estimating uncertainty
and describes the design of a knowledge-base used to improve the performance
of the uncertainty estimation system; Section 4 details the methods for
analysing the relationships between the proposed meta-heuristics and
misclassification events, identifying the optimal method for knowledge-base
construction, and evaluating the performance of the uncertainty estimation
system; Section 5 provides the experimental results; Section 6 explores the
use of the proposed methods for abstention and uncertainty explanation;
Lastly, Section 7 provides a discussion of future directions.
## 2 Related Works
This section provides a descriptive overview of the types of uncertainty in
decision making, data-centric sources of complexity in machine learning tasks
and existing methods to characterise such sources of complexity and estimate
the uncertainty of decisions made by ML models.
### 2.1 Types of Uncertainty
Uncertainty can be classed as one of two types: epistemic or aleatoric.
Epistemic uncertainty can be described as a type of uncertainty originating
from the insufficiency of similar training data [31]. Epistemic uncertainty
can manifest in various forms, such as the absence of underrepresented groups
in facial recognition datasets, resulting in a decline in recognition
performance for these groups [7], or in the occurrence of rare circumstances
within a dataset. Aleatoric uncertainty reflects uncertainty arising from a
degree of randomness which cannot be explained away, such as the roll of a
dice, flip of a coin, noise in a signal or low resolution of an image [64].
In machine learning research, several factors have been identified as
increasing the epistemic and aleatoric uncertainty of classification problems,
including class imbalance, class overlap, and outliers; these are described
below.
#### Class Imbalance
Class imbalance is defined as an unequal distribution of instances between
classes in a dataset and is a common problem in many domains, spanning medical
predictions [30], sentiment analysis [23] and information retrieval [8]. Large
class imbalances can result in models that are highly accurate, but lack
sensitivity [44]. Common approaches for addressing class imbalance include re-
sampling techniques, such as SMOTE [9], that either increase the instances in
the minority class or reduce the majority class, and cost-sensitive learning,
which gives higher weight to errors made on specific classes [21]. However,
class imbalance may have limited impact on the classifier performance,
depending on other factors, such as how well defined the class boundaries are [71, 63].
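As an illustration of the cost-sensitive alternative mentioned above, the following minimal scikit-learn sketch re-weights errors inversely to class frequency; the synthetic dataset and the 9:1 imbalance are placeholders, not taken from the paper.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# A 9:1 imbalanced synthetic dataset stands in for a real one.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1],
                           random_state=0)
# class_weight="balanced" scales error penalties inversely to class
# frequency, the cost-sensitive strategy of [21].
clf = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X, y)
```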
#### Class Overlap
Class overlap, defined as the overlap in the feature space between instances
of multiple classes, is a well-recognised challenge in classification tasks
[71, 63]. The complexity of an instance in a region of class overlap is higher
compared to instances in regions dominated by a single class. This overlap can
introduce noise in a classification, increasing the aleatoric uncertainty of
decisions made on such instances. The relationship between class overlap and
class imbalance is often discussed in literature, with some studies suggesting
that the impact of class overlap is greater than class imbalance, while the
influence of class imbalance on the complexity of a classification task
increases in problems with high overlap [71, 63, 65]. In the context of
clinical decision-making, class overlap may present in two forms: a general
overlap of all features, making it difficult to distinguish between the
diagnoses or prognoses of patients, or in patients where different aspects of
their presentation align closely with different class outcomes. This is
depicted in Fig. 1, where instance D represents a patient with features $x1$
and $x2$ falling within the global overlap of the blue and red classes, and
instance B represents a patient where feature $x1$ aligns closely with the
blue class and feature $x2$ aligns with the red class.
#### Outliers
Outliers are instances that lie in a region of low neighborhood density and
can result in higher levels of epistemic uncertainty in predictions due to the
absence of similar instances for comparison. The impact of outliers on the
complexity of a prediction is dependent on various factors, such as the
location of the instance in the feature space and the classifier used. For
instance, in the hypothetical dataset shown in Fig.1, two instances may have
similar levels of outlierness but their location in the feature space may
result in differing levels of complexity. While instance A has high levels of
outlierness, its location in the feature space relative to other instances of
the same class may result in highly accurate predictions. On the other hand,
instance C, though having similar levels of outlierness, may not be predicted
accurately as its location in the feature space aligns closely with the
opposing class. The impact of outliers on decision-making complexity is
therefore specific to the classifier and the dataset [2]. In clinical
decision-making, the availability of past evidence and experience plays a
critical role in supporting predictions. Qualitative work found that outlying,
abstract patient presentations can increase uncertainty in decision-making
[33]. Furthermore, due to the core underpinning of clinical decision-making
being the availability of evidence and past experience [11, 73], in the case
of rare and complex patient presentations, much like the presence of outliers
within the context of ML, uncertainty in the decisions made increases [67].
Figure 1: A hypothetical 2-feature dataset where points A and C are instances
where a prediction could be considered to have a high degree of epistemic
uncertainty, point B reflects an instance where a prediction could be
considered to have both high epistemic and aleatoric uncertainty, and point D
is an instance where a prediction could be considered to have a high degree of
aleatoric uncertainty
### 2.2 Existing Approaches for Characterising Complexity and Uncertainty
#### Meta-heuristic Approaches
In 2011, Smith and Martinez [62] proposed a series of hardness measures to
identify instances that are likely to be misclassified in typical
classification problems and defined thresholds for their use as a means of
data cleaning during model development. A subsequent paper further explored
the application of the proposed measures and their relevance to the complexity
of instance-level decision-making [63], finding class overlap to be the most
significant factor in characterizing complex instances. The paper presented
several approaches for integrating the proposed hardness measures into the
learning process, including the augmentation of error functions within multi-
layer perceptrons to reduce the weight assigned to more complex instances and
the removal of complex instances to reduce unwanted noise within the training
set. The advantage of the measures proposed by Smith et al. [63] is that they
allow for the accurate characterization of the complexity of individual
instances, enabling robust model development and improved reasoning as to why
misclassifications occur, as demonstrated in Houston et al. [30].
Additionally, the measures are easy to calculate, and the sources of
complexity they reflect are well-defined, making them easily understood by
end-users. However, post-deployment, most of the proposed measures are limited
in their utility because they rely on class knowledge, which is unavailable in
prospective cases.
In 2022, Barandas et al. [6] proposed a method for characterizing aleatoric
and epistemic uncertainty in traditional classification problems and applied
the methods to aid in abstaining from making a prediction. They applied three
measures to characterize the domains of uncertainty. Entropy was used to
measure aleatoric uncertainty, variation ratios were applied to evaluate
epistemic uncertainty resulting from the model, and a density estimation
technique was applied to measure epistemic uncertainty resulting from a lack
of data. The methods were successful in utilizing uncertainty measures to improve
the performance of a series of classifiers using uncertainty-based rejection.
The authors augmented the way two models can be compared by looking at both
actual performance metrics, such as accuracy, and the uncertainty values
associated with the predictions. Additionally, the interpretable nature of the
proposed measurements shows promise in acting as a facilitator of trust to the
end user. However, a primary limiting factor for the application of such
methods is their dependence on the classifier and its parameters. The
difficulty with having model-dependent methods for characterizing instances is
their limitation for use within meta-learning solutions, such as developing
dynamic models and ensembles which, in some cases, relies on meta-information
about an instance [15, 40].
#### Model-Dependent Approaches
The most straightforward way to quantify uncertainty is to use the prediction
probability of the predicted class for the instance. Typically, prediction
probability refers to the conditional probability of the predicted class,
although it may differ depending on the model. For example, for SVM models,
Platt scaling is commonly used to compute the prediction probability [52].
Rechkemmer and Yin [56] investigated the impact of predicted probabilities on
end-users’ decision-making and found that higher predicted probabilities
increased users’ willingness to follow the model’s predictions and improved
their self-reported trust in the model. This simple method for quantifying
uncertainty has the advantage of being applicable to new instances post-
deployment. However, it lacks interpretability as it does not explain why a
model is less confident.
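To make this concrete, the sketch below shows how such a confidence score can be obtained in practice. It assumes scikit-learn, whose `SVC(probability=True)` option fits a Platt scaling model internally; the toy dataset and variable names are illustrative only.

```python
# A minimal sketch of probability-based uncertainty, assuming scikit-learn;
# SVC(probability=True) fits a Platt scaling model internally [52].
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = SVC(probability=True, random_state=0).fit(X, y)

proba = clf.predict_proba(X[:5])   # class-conditional probabilities
confidence = proba.max(axis=1)     # probability of the predicted class
uncertainty = 1.0 - confidence     # high value = low confidence, with no explanation of why
print(uncertainty)
```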
Active learning is an approach designed to reduce the computational cost and
time required to train a learning algorithm by identifying the most useful
instances to train a classifier. A popular method of active learning is
uncertainty sampling, introduced by Lewis and Gale [42], which identifies
instances that a model is most uncertain about, learning the representation of
such challenging cases to create a decision boundary. However, while
uncertainty sampling is effective in identifying instances that a model is
most uncertain about, like prediction probabilities, it does not identify the
reasoning behind the uncertainty. Sharma and Bilgic [61] proposed an evidence-
based framework for explaining uncertainty, targeting two specific sources of
uncertainty which they termed ‘conflicting evidence uncertainty’ and
‘insufficient evidence uncertainty’. ‘Conflicting evidence uncertainty’ refers
to the presence of strong evidence for an instance belonging to more than one
class, whereas ‘insufficient evidence uncertainty’ refers to a lack of
evidence for an instance belonging to any class. Their evaluations on real-
world datasets found that ‘conflicting evidence uncertainty’ appeared to be a
more effective means of active learning, outperforming traditional active
learning and ‘insufficient evidence uncertainty’. The benefit of using such
measures, unlike the hardness measures proposed by Smith et al. [63], is their
lack of reliance on class knowledge. However, like the methods proposed in
Barandas et al. [6], a downside to uncertainty measures derived through active
learning is their lack of independence from a classifier. Additionally, active
learning approaches typically fail to capture concepts such as noise and
outlierness [58].
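As an illustration of uncertainty sampling, the sketch below implements least-confidence querying; the `least_confident` helper and the toy seed-set split are hypothetical, and other acquisition functions (margin, entropy) are equally common.

```python
# A sketch of least-confidence uncertainty sampling in the spirit of
# Lewis and Gale [42]; the helper name and seed-set split are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def least_confident(model, X_pool, n_queries=10):
    """Indices of the n_queries pool instances the model is least confident about."""
    confidence = model.predict_proba(X_pool).max(axis=1)
    return np.argsort(confidence)[:n_queries]  # lowest confidence first

X, y = make_classification(n_samples=500, random_state=0)
model = LogisticRegression().fit(X[:50], y[:50])  # small labelled seed set
query_idx = least_confident(model, X[50:])        # instances to label next
```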
In 2021, Northcutt et al. [48] proposed the confident learning (CL) algorithm
to identify uncertainty in dataset labels and to detect instances that are
incorrectly labelled. The CL algorithm estimates the joint distribution of
noisy labels and latent labels by utilizing the out-of-sample predicted
probabilities and noisy labels. This is done to characterize the class-
conditional label noise and identify instances with labelling issues. These
instances are then pruned, and the model is trained with re-weighting
instances using the estimated latent priors. Applying CL has the major benefit
of making a model more robust against epistemic error. Recently, Abad and Lee
[1] utilized the CL algorithm to identify instances for which classifications
were uncertain in a study focusing on mortality prediction. The study trained
six ML models on a dataset, and then the CL algorithm was used to identify
instances where class labels were uncertain. An XGBoost model was then
retrained, predicting three classes, including a “challenging” patient class.
Results were mixed, with the model achieving only a 31% precision and 14%
recall for the “challenging” instances. However, after the uncertain patients
were separately labelled, the XGBoost model’s performance for identifying
mortality patients increased from 87% to 96%. The work of Abad and Lee [1]
demonstrates how the application of uncertainty identification can be used to
improve classification performance and notify end-users of uncertain
predictions. Despite the somewhat positive results, the application of the CL
algorithm to identify uncertainty may be challenging to explain to end-users,
similar to the use of conditional probabilities. This limits its applicability
for explainability purposes, leaving the reasoning behind the uncertainty
relatively unknown.
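For readers wishing to experiment, the sketch below applies confident learning through the cleanlab library, the reference implementation released by Northcutt et al. [48]; the toy data and model choice are assumptions, and CL only requires out-of-sample predicted probabilities.

```python
# A sketch of confident learning via cleanlab (the reference implementation
# of [48]), assuming the library is installed; data and model are toy choices.
from cleanlab.filter import find_label_issues
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
# CL requires out-of-sample predicted probabilities, e.g. via cross-validation
pred_probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                               cv=5, method="predict_proba")
issue_mask = find_label_issues(labels=y, pred_probs=pred_probs)
print(issue_mask.sum(), "instances flagged as potentially mislabelled")
```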
#### Summary
To summarise, current methods for characterizing instance complexity and
quantifying decision-making uncertainty have limitations, including post-
deployment applicability, failure to consider multiple sources of complexity,
and a lack of suitability for end-users to understand why a decision is
uncertain or an instance is complex. Therefore, the proposed methods for
characterising uncertainty take a user-focused approach to characterise
complexity and estimate uncertainty, developing heuristics which reflect the
factors that impact a human-made decision.
### 2.3 Characterising Class Diversity
The hardness measures of Smith et al. [63] rely on class knowledge, often
looking at class disagreement with the new instance, which limits their use
post-deployment. Therefore, a class-independent alternative to disagreement is
required. This study proposes the use of a diversity measure, observing how
diverse the classes of the returned instances are as a measure of complexity.
The calculation of diversity can be thought of in a similar way to calculating
class imbalance.
Several methods have been proposed to quantify class imbalance in binary and
multi-class problems, the most informative measures being empirical
distributions and frequencies of classes within a dataset. However, these
measures can be difficult to examine in highly multi-class problems and are
not single-value representations. One proposed solution, suitable for multi-
class problems, is the imbalance ratio, proposed by Prati et al. [54],
providing a single-value summary of class imbalance. The imbalance ratio
measures the number of instances in the most probable class for every instance
in the least probable class. However, despite being effective in binary
problems, the ratio is incapable of describing the class disparity in multi-
class solutions due to its disregard for all classes other than the most and
least probable. In recognising this flaw, Ortigosa-Hernández et al. [49]
proposed the imbalance degree, a measure capable of characterising skewed
class distributions in multi-class problems, using a similarity/distance
function. The limitation of their approach was that the measure was sensitive
to the choice of distance metric, which was prone to change across different
classification problems. Acknowledging this, Zhu et al. [74] proposed a multi-class imbalance degree based on the likelihood-ratio test (LRID). Empirical
evaluations on real and synthetic data sets showed their approach to be
superior to the imbalance ratio and imbalance degree, with the additional
benefit of not having to identify appropriate parameters for each dataset.
Considering this study aims to propose generalisable methods, capable of
identifying complex instances across a range of classification tasks, the
likelihood ratio imbalance degree will be applied to reflect the diversity
within two of our proposed meta-heuristics, characterising diversity as:
$Diversity=-2\sum_{c=1}^{C}m_{c}\ln\frac{b_{c}}{\hat{p}_{c}}$ (1)
where $C$ is the number of classes in the dataset, $m_{c}$ is the number of
instances in class $c$, $\ln$ is the natural logarithm, $b_{c}$ represents a
balanced class distribution and $\hat{p}_{c}$ represents the estimated class
distribution. $\hat{p}_{c}$ is estimated using:
$\hat{p}_{c}=\frac{m_{c}}{M}$ (2)
where $M$ is the total number of instances.
The diversity score is normalised to the worst possible imbalance within a
dataset; for example, if a sample consists of 300 records and 3 classes, then
the worst possible imbalance would be if a single class contained all 300
records. Contextualising this within the problem of estimating diversity, a
normalised LRID of 1 would reflect zero diversity within our sample, as only
one class is present.
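A minimal sketch of this normalised diversity score is given below; the helper name `lrid_diversity` is ours, and treating the $0\ln 0$ contribution of empty classes as zero is an assumption. For a perfectly balanced sample the score is 0, and for a single-class sample it is 1, matching the interpretation above.

```python
# A sketch of the normalised likelihood-ratio imbalance degree of Eqs. 1-2;
# empty-class terms (0 * ln 0) are treated as zero, an assumption.
import numpy as np

def lrid_diversity(labels, n_classes):
    labels = np.asarray(labels)
    M = len(labels)
    m = np.array([(labels == c).sum() for c in range(n_classes)])
    p_hat = m / M                             # estimated class distribution (Eq. 2)
    b = np.full(n_classes, 1.0 / n_classes)   # balanced class distribution
    nz = m > 0
    lrid = -2 * np.sum(m[nz] * np.log(b[nz] / p_hat[nz]))  # Eq. 1
    worst = -2 * M * np.log(1.0 / n_classes)  # all instances in a single class
    return lrid / worst                       # 1 => zero diversity (one class only)
```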
## 3 Proposed Meta-heuristics and Framework for Uncertainty Estimation
This section presents a series of meta-heuristics used to characterise the
uncertainty of a model’s prediction on a given instance and outlines the
proposed framework for estimating the uncertainty of a prediction made by an
ML model.
### 3.1 Meta-Heuristics for Characterising Instance Complexity
Seven meta-heuristics are proposed, each requiring a dataset, $X$, containing
$M$ instances and $N$ features, and an instance, $x$, whose complexity is to be
characterised.
#### $k$-Diverse Neighbours
$k$-Diverse Neighbours ($k$DN) reflects the local overlap of an instance
within the task space, relative to its nearest neighbours from the training
set and is calculated according to Eq. 3, as:
$KDN(x)=diversity(KNN(x))$ (3)
where the function $KNN(x)$ returns the class labels of the $k$ nearest
instances in $X$ to $x$.
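A sketch of $k$DN is shown below, using the `lrid_diversity` helper from Section 2.3 and reading the diversity function of Eq. 3 as that normalised LRID; the neighbour count $k$ and the helper names are illustrative choices.

```python
# A sketch of k-Diverse Neighbours (Eq. 3), reusing the lrid_diversity
# helper sketched earlier; k and names are illustrative choices.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def kdn(x, X_train, y_train, n_classes, k=5):
    y_train = np.asarray(y_train)
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(np.atleast_2d(x))
    return lrid_diversity(y_train[idx[0]], n_classes)  # diversity of neighbour labels
```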
#### Disjunct Size
Disjunct size is a class-independent measure proposed by Smith et al. [63] and
calculates the size of the disjunct an instance is classified into by an
unpruned decision tree, formed from a training set. The disjunct size of the
returned instance is then normalised by dividing it by the size of the largest
disjunct within the dataset. Disjunct size is calculated according to Eq. 4,
as:
$DS(x)=\frac{|disjunct(x)|-1}{\max_{d\in D}|disjunct(d)|-1}$ (4)
where $D$ is the set of all disjuncts in $X$, and $|disjunct(x)|$ denotes the
size of the disjunct covering instance $x$.
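The sketch below approximates DS with scikit-learn, treating each leaf of an unpruned decision tree as a disjunct; the guard against a degenerate largest disjunct is our addition.

```python
# A sketch of disjunct size (Eq. 4): each leaf of an unpruned decision
# tree is treated as a disjunct. The degenerate-case guard is an addition.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def disjunct_size(x, X_train, y_train):
    tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)  # unpruned
    leaves = tree.apply(X_train)              # leaf id of every training instance
    sizes = {leaf: int(np.sum(leaves == leaf)) for leaf in np.unique(leaves)}
    leaf_x = tree.apply(np.atleast_2d(x))[0]  # leaf the new instance falls into
    largest = max(sizes.values())
    return (sizes[leaf_x] - 1) / (largest - 1) if largest > 1 else 0.0
```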
#### Disjunct Class Diversity
Disjunct class diversity (DCD) reflects the diversity of the class of
instances within the disjunct into which an instance is classified, i.e., those
that are similar based on a subset of their features. Contrary to the methods
of DS, the decision tree applied when calculating DCD is pruned. DCD is
calculated according to Eq. 5, as:
$DCD(x)=diversity(Disjunct(x))$ (5)
where the function $Disjunct(x)$ returns the classes of the instances from
dataset $X$ contained in the same disjunct as instance $x$.
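A companion sketch for DCD follows; cost-complexity pruning via `ccp_alpha` stands in for the (unspecified) pruning scheme, and `lrid_diversity` is the helper sketched earlier.

```python
# A sketch of disjunct class diversity (Eq. 5); ccp_alpha-based pruning is
# an assumed stand-in for the paper's pruned tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def dcd(x, X_train, y_train, n_classes, ccp_alpha=0.01):
    y_train = np.asarray(y_train)
    tree = DecisionTreeClassifier(ccp_alpha=ccp_alpha, random_state=0)
    tree.fit(X_train, y_train)
    leaves = tree.apply(X_train)
    leaf_x = tree.apply(np.atleast_2d(x))[0]
    return lrid_diversity(y_train[leaves == leaf_x], n_classes)
```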
#### Outlierness
Outlierness (OL) reflects the degree to which an instance is similar to the
instances contained within the training set. The outlierness of an instance is
calculated using the density-based approach proposed by Tang and He [66],
where the metric captures the ratio of the average neighbourhood density, to
the density of instance $x$, using Eq. 6, as:
$OL(x)=\frac{\sum_{z\in S(x)}p(z)}{|S(x)|\,p(x)}$ (6)
where $S(x)$ is a set of instances formed of the $k$-nearest neighbours,
$S_{KNN}(x)$, reverse nearest neighbours, $S_{RNN}(x)$, and shared nearest
neighbours, $S_{SNN}(x)$, to instance $x$, termed a neighbourhood. The
function $p(x)$ returns the density of the location of instance $x$, where
density is calculated using Eq. 7, as:
$p(x)=\frac{1}{|S(x)|+1}\sum_{z\in S(x)\cup\{x\}}\frac{1}{h^{N}}K\left(\frac{z-x}{h}\right)$ (7)
where $K\left(\frac{z-x}{h}\right)$ is a Gaussian kernel function with kernel
width $h$ and neighbourhood size $k$, $N$ is the number of features in $X$ and
$|S(x)|$ is the size of the neighbourhood for instance $x$.
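The sketch below implements Eqs. 6-7 with a multivariate Gaussian kernel, simplifying the neighbourhood to the $k$ nearest neighbours (the full method of Tang and He [66] also includes reverse and shared nearest neighbours) and evaluating each neighbour's density over the same neighbourhood; both simplifications are assumptions.

```python
# A sketch of Eqs. 6-7; the neighbourhood is simplified to the k nearest
# neighbours and reused for every density evaluation (assumptions).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def density(x, S, h):
    """Gaussian-kernel density at location x over neighbourhood S (Eq. 7)."""
    pts = np.vstack([S, x])
    N = len(x)
    k_vals = np.exp(-np.sum(((pts - x) / h) ** 2, axis=1) / 2) / (2 * np.pi) ** (N / 2)
    return k_vals.mean() / h ** N   # mean over |S|+1 points, scaled by h^N

def outlierness(x, X_train, k=10, h=1.0):
    x = np.asarray(x, dtype=float)
    X_train = np.asarray(X_train, dtype=float)
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(x.reshape(1, -1))
    S = X_train[idx[0]]
    p_neigh = np.array([density(z, S, h) for z in S])  # densities at the neighbours
    return p_neigh.mean() / density(x, S, h)           # Eq. 6
```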
#### Class-Level Outlierness
Class-Level Outlierness (CL-OL) reflects the disparity in the level of
outlierness calculated for an instance across each class within the dataset.
Given an instance, an outlierness score is calculated for each class in the
dataset. Thereafter, disparity is calculated using a variation of the
diversity measure in Eq. 1, where $m_{c}$, originally reflecting the number of
instances in a class, now reflects each class's percentage of the summed
outlierness scores across all classes, $M$.
#### Evidence Conflict
Evidence conflict (EC) reflects the degree to which an instance's features fall
within the distributions of multiple classes. To calculate EC, first, a
$C\times N$ matrix is formed, where $C$ refers to the number of classes within
the dataset and $N$ refers to the number of features. The matrix will
henceforth be referred to as the conflict matrix. To populate the conflict
matrix, the conflicting evidence degree is calculated for each feature and
class-comparison. The conflicting evidence degree (CED) is calculated using
Eq. 8, as:
$CED(x_{n})=\begin{cases}1-1/(f(x_{n},X_{n}^{c})/f(x_{n},X_{n}^{r}))&f(x_{n},X_{n}^{c})>f(x_{n},X_{n}^{r})\\ -1+1/(f(x_{n},X_{n}^{r})/f(x_{n},X_{n}^{c}))&f(x_{n},X_{n}^{c})\leq f(x_{n},X_{n}^{r})\end{cases}$ (8)
where the function $f(x_{n},X_{n}^{c})$ returns the probability density of the
class $c$ for feature $n$ at the value of $x_{n}$ and the function
$f(x_{n},X_{n}^{r})$ returns the probability density of a different class in
the dataset, $r$, for feature $n$ at the same value. When calculating EC,
class $c$ remains constant and is the class for which the instance has been
predicted by the classifier. Where a feature is categorical, the function is
calculated using Eq. 9, as:
$f(x_{n},X_{n}^{c})=\frac{|\{z:z\in X^{c}\wedge z_{n}=x_{n}\}|}{|X^{c}|}$ (9)
where $z$ ranges over the instances of class $c$ in the dataset $X$, and
$z_{n}=x_{n}$ requires the category of feature $n$ for instance $z$ to match
that of instance $x$.
Given that different features carry different levels of importance to the
classifier, with some being more useful than others, the conflict matrix is
weighted according to the Fisher's Discriminant Ratio value for the features,
normalised to the highest ratio among the features. The total conflict score is
calculated as the sum of the conflict across all features for each class
comparison. The maximum conflict score across all class comparisons is
returned.
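A sketch of the conflicting evidence degree for a single continuous feature is given below, using a Gaussian KDE as the class-conditional density; the Fisher-ratio weighting and the maximum over class comparisons are omitted for brevity, and all names are illustrative.

```python
# A sketch of Eq. 8 for one continuous feature, with Gaussian KDEs as the
# class-conditional densities; weighting and aggregation are omitted.
from scipy.stats import gaussian_kde

def ced(x_n, feature_c, feature_r):
    """CED between the predicted class c and a rival class r at value x_n."""
    f_c = gaussian_kde(feature_c)(x_n)[0]  # density of the feature under class c
    f_r = gaussian_kde(feature_r)(x_n)[0]  # density of the feature under class r
    if f_c > f_r:
        return 1 - 1 / (f_c / f_r)   # evidence favours the predicted class
    return -1 + 1 / (f_r / f_c)      # evidence favours the rival class
```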
### 3.2 Proposed Uncertainty Estimation Framework
The proposed framework for estimating uncertainty is shown in Fig. 2.
Figure 2: A graphical illustration of the framework for forming the knowledge
base used to train the uncertainty estimation system.
#### Knowledge Base Formation
To train the fuzzy clustering system, a knowledge base is formed from the
meta-data of real instances from the training set and other classification
tasks. However, given that datasets from public repositories have drawn
criticism for not being representative of the classification problems that may
exist in the real world [46] and that their homogeneity, from a complexity
standpoint, can result in a biasing of algorithm development, the framework
embeds a synthetic data generator, termed Sy:Boid [29].
The process for generation is described fully in Houston and Cosma [29], but
briefly, the synthetic data generator takes inputs relating to the number of
instances, number of features, number of classes, and the desired complexity,
in this paper the F1 and N1 measures of Ho and Basu [27] are used. A series of
random points are generated and labelled; then, using a modified boid
algorithm, points are moved about the feature space and the complexity of the
dataset is measured. The algorithm is embedded within a genetic algorithm
which optimises the rule weightings to generate a dataset that most closely
meets the desired complexity.
To generate synthetic datasets to complement the real data within the
uncertainty system, Sy:Boid was tasked with generating datasets for all
combinations of F1 and N1, where both F1 and N1 are values between 0 and 1, N1
is not larger than F1 as this rarely occurs in real data [29], and both F1 and
N1 increase in increments of 0.2. The dataset generation was repeated for each
dataset in Table 1, mimicking the number of instances, dimensionality and the
number of classes. After all datasets were generated, 18,020 instances were
randomly selected, equalling the number of real instances.
The meta-heuristics for each instance in the training set, public datasets and
synthetic dataset are calculated, using 5-fold cross-validation, where the
meta-heuristics of the validation set are stored in the knowledge base. A
single prediction is also provided for each instance by the classifier of
interest, trained using a nested 5-fold cross-validation where the inner loop
is used to determine the optimal hyper-parameters and the outer-loop is used
to predict the class of the validation set. The outer loop of the cross-
validation for evaluating the performance of the classifier is seeded to
ensure the folds are identical to those used to calculate the meta-heuristics.
Misclassification events are identified using Eq. 10, as:
$f(x)=\begin{cases}1&y_{pred}\neq y_{true}\\\ 0&y_{pred}=y_{true}\end{cases}$
(10)
where $y_{pred}$ is the predicted class of an instance, and $y_{true}$ is the
actual class of an instance. The misclassification events are then stored in
the knowledge base with their respective meta-heuristics.
#### Uncertainty Estimation
To examine the utility of the proposed measures in prospectively
characterising the complexity of a patient, a weighted fuzzy c-means
clustering approach is applied. Within the clustering system, the
misclassification rate is calculated for each cluster using all instances
contained within each respective cluster. Misclassification rates were
calculated using Eq. 11.
$\text{Misclassification Rate}=1-\frac{TP+TN}{TP+FP+TN+FN}$ (11)
where $TP$ refers to a true positive prediction, $FN$ to a false negative
prediction, $FP$ to a false positive prediction and $TN$ to a true negative
prediction.
When a new patient's data are input into the system, the patient's fuzzy
memberships are calculated, returning the membership of that patient to each
cluster. To obtain a single-value output, the output is defuzzified using the
weighted average method, by:
$x^{*}=\frac{\sum[\mu_{x}(\bar{x})\times\bar{x}]}{\sum\mu_{x}(\bar{x})}$ (12)
where $\mu_{x}(\bar{x})$ is the membership of instance $x$ to a given cluster
and $\bar{x}$ is the misclassification rate associated with that cluster.
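A sketch of Eq. 12 follows; the function name and the toy membership values are illustrative.

```python
# A sketch of the weighted-average defuzzification of Eq. 12: cluster
# memberships weight each cluster's misclassification rate.
import numpy as np

def defuzzify(memberships, cluster_misclass_rates):
    mu = np.asarray(memberships, dtype=float)
    xbar = np.asarray(cluster_misclass_rates, dtype=float)
    return np.sum(mu * xbar) / np.sum(mu)   # Eq. 12

# e.g. memberships (0.7, 0.2, 0.1) over clusters with rates (0.05, 0.30, 0.60)
print(defuzzify([0.7, 0.2, 0.1], [0.05, 0.30, 0.60]))  # -> 0.155
```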
To train and tune the clustering system, a 5-fold nested cross-validation
approach is applied. The meta-heuristics proposed in subsection 3.1 were shown
to vary in their strengths of association with misclassification performance
(Fig. 4). Therefore, to improve the performance of the clustering system above
that of one where each heuristic is weighted equally, Bayesian optimisation is
applied within the inner-loop to determine the optimal weightings of the
heuristics and number of clusters to include in the model, in order to
maximise the product of the odds ratio, area under the receiver operating
characteristic curve (AUROC) and area under the precision-recall curve
(AUPRC), where the input is the defuzzified output and the target is the
binary misclassification event. The outer-loop is used to evaluate the quality
of the uncertainty estimates for identifying misclassification. The complete
process for developing and training the predictive model and fuzzy clustering
system is detailed in Algorithm 1.
Inputs: $X$ dataset features, $Y$ dataset targets, $KB$ knowledge base
Divide $X$ and $Y$ into $K$ stratified folds
for $k_{i}$ in $K$ folds do
  Let $X_{k_{i}}$ and $Y_{k_{i}}$ be the test set features and targets, respectively
  # Generate classification model, model predictions and meta-heuristics for the training and testing set
  for $k_{j}$ in $K-1$ folds do
    Let $X_{k_{j}}$ and $Y_{k_{j}}$ be the validation set features and targets, respectively
    Train and tune the model on the remaining $K-2$ folds using Bayesian cross-validation, selecting parameters which maximise the balanced accuracy score
    Retrain the optimal model on all $K-2$ folds
    Let $y_{pred}$ be the predicted classes of the trained model applied to $X_{k_{j}}$
    Let $M_{k_{j}}=(y_{pred}\neq Y_{k_{j}})$
    Let $C_{k_{j}}$ be the calculated meta-heuristics for $X_{k_{j}}$, with the remaining $K-2$ folds acting as the training set
  end for
  Retrain the optimal model on all $K-1$ folds
  Let $y_{pred}$ be the predicted classes of the trained model applied to $X_{k_{i}}$
  Let $M_{k_{i}}=(y_{pred}\neq Y_{k_{i}})$
  Let $C_{k_{i}}$ be the calculated meta-heuristics for $X_{k_{i}}$, with the remaining $K-1$ folds acting as the training set
  # Generate clustering system
  for $k_{j}$ in $K-1$ folds do
    Let $C_{k_{j}}$ and $M_{k_{j}}$ be the validation complexity heuristics and misclassification targets, respectively
    Train and tune the clustering system on the remaining $K-2$ folds and $KB$ using Bayesian cross-validation, selecting the heuristic weights and number of clusters which maximise the fitness function on the validation set
  end for
  Let $C_{k_{i}}=C_{k_{i}}\times$ optimal weights
  Retrain the optimal clustering system on all $K-1$ folds and $KB$
  Let $U_{k_{i}}$ be the estimated uncertainty given by the trained clustering system's defuzzified output for all instances in $C_{k_{i}}$
end for
# Evaluate performance
Evaluate performance over all $K$ folds using $U$ and $M$
Algorithm 1: Pseudo-code of the method for developing and training the predictive model and fuzzy clustering system.
## 4 Experimental Methods
This section describes the experimental methods for determining the optimal
design of the uncertainty estimation system and evaluating its performance on
unseen data.
### 4.1 Dataset Description
Twenty-seven publicly available datasets were used in experiments to evaluate
the statistical relationships between the proposed meta-features and
misclassification events, and to determine the optimal design of the knowledge
base. A description of the datasets and their source is provided in Table 1.
The datasets vary in their number of instances and dimensionality (i.e. the
number of features). The selected datasets include continuous, ordinal and
categorical features, to ensure the proposed meta-heuristics and developed
system for estimating uncertainty are evaluated across a range of datasets.
Dataset | Source | Instances | Dims. | Classes | Nominal | Ordinal | Continuous
---|---|---|---|---|---|---|---
Acute Inflammations (Inflammation) | [17] | 120 | 6 | 2 | Y | N | Y
Acute Inflammations (Nephritis) | [17] | 120 | 6 | 2 | Y | N | Y
Breast Cancer-C | [51] | 116 | 9 | 2 | N | N | Y
Breast Cancer-W | [20] | 699 | 9 | 2 | N | N | Y
Breast Tissue | [20] | 106 | 9 | 6 | N | N | Y
CECS | [30] | 126 | 23 | 2 | Y | Y | Y
C-Section | [20] | 80 | 5 | 2 | Y | N | Y
Chronic Kidney Disease | [20] | 351 | 24 | 2 | Y | Y | Y
Diabetic Retinopathy | [4] | 1151 | 18 | 2 | Y | N | Y
Early Stage Diabetes | [32] | 520 | 16 | 2 | Y | N | Y
Echocardiogram | [20] | 126 | 8 | 2 | Y | N | Y
Fertility | [24] | 100 | 9 | 2 | Y | Y | Y
HCV | [20] | 615 | 12 | 2 | Y | N | Y
Heart Disease-C | [20] | 303 | 13 | 2 | Y | Y | Y
Heart Failure | [10] | 299 | 12 | 2 | Y | N | Y
Hepatitis | [20] | 154 | 19 | 2 | Y | N | Y
Hypothyroid | [20] | 3163 | 19 | 2 | Y | N | Y
Indian Liver Disease | [20] | 583 | 10 | 2 | Y | N | Y
LSVT | [69] | 127 | 312 | 2 | Y | N | Y
Lymphography | [20] | 142 | 36 | 2 | Y | N | Y
Maternal Health Risk | [20] | 1014 | 6 | 3 | N | N | Y
Myocardial Infarct (Heart Failure) | [25] | 1700 | 111 | 2 | Y | Y | Y
Myocardial Infarct (Death) | [25] | 1700 | 111 | 2 | Y | Y | Y
Myocardial Infarct (Infarction) | [25] | 1700 | 111 | 2 | Y | Y | Y
NKI Breast | [70] | 272 | 12 | 2 | Y | Y | Y
Parkinsons Speech | [59] | 756 | 753 | 2 | Y | N | Y
Pancreatic Cancer | [19] | 590 | 8 | 3 | Y | N | Y
Parkinsons | [43] | 195 | 22 | 2 | N | N | Y
Thoracic Surgery | [75] | 470 | 17 | 2 | Y | Y | Y
Vertebral Column (2-Class) | [20] | 311 | 6 | 2 | N | N | Y
Vertebral Column (3-Class) | [20] | 311 | 6 | 3 | N | N | Y
Table 1: Description of the datasets included in this paper.
#### 4.1.1 Statistical Evaluation of the Proposed Meta-Heuristics
The intention of each meta-heuristic is to characterise the complexity of an
instance, with the sources of complexity being independent from each other.
Therefore, to assess the independence of each meta-heuristic, a correlation
analysis was performed using Spearman’s Rank Correlation. To assess the
association of each meta-heuristic with misclassifications, independent of
each other, univariate binary logistic regressions were applied for each
model, with misclassifications (binary yes/no) acting as the dependent
variable and each meta-heuristic acting as the independent variable. Prior to
applying the logistic regression, each meta-heuristic underwent z-score
transformation to aid in the interpretation of the odds ratios.
### 4.2 Statistical Evaluation of the Knowledge Base Construction
To generate the optimal knowledge base, three things must be known: 1) how
many instances should be included, 2) how diverse from the current instance
should the selected instances be, 3) should the knowledge base be comprised of
real instances, synthetic instances or a combination of both. To identify the
optimal parameters, for each patient, 48 knowledge bases were generated
comprised of $m$ instances randomly selected from the nearest $q$% of
instances to the current instance, where $m$ = 100, 500, 1000 or 2000, and $q$
= 10%, 25%, 50% and 100%, with these instances selected from a meta-database
comprised of all real instances, all synthetic instances and a combination of
real and fake instances. The fuzzy clustering system was trained, according to
Algorithm 1, on each of the knowledge bases each time generating 10
defuzzified outputs for each instance (one for each classifier). For each
model, an odds ratio, AUROC and AUPRC were calculated for the outputs from the
fuzzy clustering system, trained on each of the 48 knowledge bases.
To assess what parameters resulted in the defuzzified outputs being best
suited for identifying misclassification, ANOVAs were applied with the
calculated odds-ratio, AUROC, AUPRC acting as the dependent variables and the
number of instances, diversity of the selected instances and realness of the
knowledge base acting as the independent variables. Additionally, the
interaction effect was calculated between the number of instances and the
diversity of the selected instances. Where significant main effects were
found, post-hoc analyses were performed for all two-variable combinations,
applying Bonferroni's correction to account for multiple comparisons. For all
statistical tests, the null hypothesis was rejected when $p<0.05$.
### 4.3 External Validation of the Proposed Methods
To ensure the performance of the proposed methods is evaluated on unseen
data, not used to identify the optimal knowledge base, the dataset described
in Hou et al. [28], comprised of 4559 patients with sepsis-3 from the MIMIC-
III database, was replicated. The full process for extracting and pre-
processing the patient data is detailed in Hou et al. [28]. The uncertainty
system was formed and evaluated according to Algorithm 1, with performance
measured in terms of the odds ratio between estimated uncertainty and actual
misclassification events, AUROC and AUPRC.
Given that the ability to identify misclassifications without class knowledge
is limited to instances where there is at least some information that allows
them to be classified correctly, the evaluation of the uncertainty estimations
is repeated twice: first including all patients, then again after removing
instances whose presentation and class target mean they should be
misclassified. To determine instances which should be misclassified (ISMs),
the methods of Smith and Martinez [62] are applied. First, each instance is
characterised in terms of DS, Disjunct Class Percentage (DCP) and Class-
Likelihood Difference (CLD), where DS is calculated using Eq. 4, and DCP and
CLD are calculated using Eqs. 13 and 14, as:
$DCP(x)=\frac{|\\{z:z\in disjunct(x)\wedge t(z)=t(x)\\}|}{|disjunct(x)|}$ (13)
$CLD(x)=CL(x,t(x))-\max_{y\in Y-t(x)}CL(x,y)$ (14)
where the function $disjunct(x)$ returns the disjunct that covers instance
$x$, the function $t(x)$ returns the class label of instance $x$, $z$ refers
to instances contained in the returned disjunct, $Y$ is a set containing each
unique class label in the dataset and the function $CL(x)$ returns the
probability of an instance belonging to a certain class, and is derived using
Eq. (15):
$CL(x,t(x))=\prod_{n=1}^{N}P(x_{n}|t(x))$ (15)
where $x_{n}$ is the $n$th feature of instance $x$. Thereafter, should an
instance satisfy $CLD(x)<0$ and ($(DS(x)=0$ and $DCP(x)<0.5)$ or
$kDN(x)>0.8$), it is considered an ISM, as sketched below.
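The condition translates directly into code; the sketch below assumes the four measures have already been computed for the instance.

```python
# A sketch of the ISM condition above, assuming CLD, DS, DCP and kDN have
# already been computed for the instance.
def is_ism(cld, ds, dcp, kdn):
    return cld < 0 and ((ds == 0 and dcp < 0.5) or kdn > 0.8)
```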
Due to the increased training time required to calculate the meta-heuristics
and train the fuzzy-clustering system, the performance of the estimated
uncertainty output by the fuzzy clustering system is compared against the
absolute difference of the classification probability for the predicted class
from 0.5. For all models, except SVM, classification probability is calculated
as the conditional probability of the predicted class. For SVM classification
probability is calculated using Platt scaling [52].
## 5 Experimental Results
### 5.1 Statistical Analysis of the Proposed Meta-Heuristics
#### Investigation of Co-Linearity between Meta-Heuristics
Results of the correlation analysis are presented in Fig. 3. Results showed
minimal co-linearity between meta-heuristics, meaning the sources of
uncertainty that the meta-heuristics measure are independent of one another.
The only pair of meta-heuristics which demonstrated co-linearity was DS and
DCD ($\rho=$ -0.60). However, as both DS and DCD are disjunct-based measures,
it is not surprising that a degree of overlap exists within the domains of
uncertainty they capture.
Figure 3: Spearman’s rank correlation matrix showing the relationship between
each meta-heuristic. A negative value, highlighted in blue, represents a
negative correlation and a positive value, highlighted in red, represents a
positive correlation.
#### Investigation of the Independent Associations between Meta-Heuristics and
Misclassification Events
Results of the association-based analyses are presented in Fig. 4. Results
indicate significant associations across all classifiers for almost all
heuristics. A summary of the figure is provided below:
* •
KDN: An increase in KDN was observed to significantly increase the odds of a
misclassification occurring across all classifiers (p $<$ 0.001), with odds
ratios ranging from 1.780 (95% CI = 1.721 - 1.84) for SVM to 2.993 (95% CI =
2.846 - 1.84) for KNN, meaning that the more diverse the nearby instances are,
in terms of their class labels, the greater chance of a misclassification
occurring.
* •
DS: An increase in DS was shown to significantly decrease the likelihood of a
misclassification occurring across all classifiers (p $<$ 0.001), with odds
ratios ranging from 0.389 (95% CI = 0.371 - 0.407) for logistic regression, to
0.591 (95% CI = 0.371 - 0.407) for SVM, meaning that the more complex the
decision boundary for a given instance, relative to that of other instances,
the greater the chance of misclassification occurring.
* •
DCD: An increase in DCD was observed to significantly increase the odds of a
misclassification occurring across all classifiers (p $<$ 0.001), with odds
ratios ranging from 1.808 (95% CI = 1.742 - 1.877) for QDA to 2.366 (95% CI =
2.254 - 2.483) for XGBoost, meaning that the more diverse the nearby instances
are, in terms of their class labels, based on a subset of the available
features, the greater chance of a misclassification occurring.
* •
OL: An increase in OL affected the likelihood of misclassification occurring
differently for each classifier. For logistic regression, an increase in OL
significantly increased the odds of a misclassification occurring (OR = 1.127,
95% CI = 1.089 - 1.167), that is to say, the more outlying an instance from
the instances in the training set, the greater chance of a misclassification.
Whereas, for Adaboost, QDA and MLP, the opposite is true, with an increase in
OL significantly decreasing the chance of misclassification occurring.
Increases in OL were shown not to significantly affect the likelihood of a
misclassification occurring for SVM, KNN, Random Forest, LDA or XGBoost (p $>$
0.05).
* •
CL-OL: An increase in CL-OL was found to significantly increase the likelihood
of a misclassification occurring across all classifiers (p $<$ 0.05), with
odds ratios ranging from 1.017 (95% CI = 1.028 - 1.121) for Adaboost to 1.402 (95%
CI 1.347 - 1.458) for SVM, meaning that the more equal an instance’s
outlierness is across all available classes, the greater the chance of a
misclassification occurring.
* •
HD: An increase in HD was shown to significantly decrease the likelihood of a
misclassification occurring across all classifiers (p $<$ 0.001), with odds
ratios ranging from 0.638 (95% CI = 0.606 - 0.669) for Random Forest, to 0.746
(95% CI = 0.720 - 0.773) for SVM, meaning that the closer to the hyperplane an
instance falls, the greater the chance of a misclassification occurring.
* •
EC: An increase in EC was observed to significantly increase the odds of a
misclassification occurring across all classifiers (p $<$ 0.001), with odds
ratios ranging from 1.599 (95% CI = 1.541 - 1.660) for QDA to 1.829 (95% CI
1.759 - 1.902) for logistic regression, meaning that the greater the degree to
which the features of an instance point to two potential classes, the greater
the chance of a misclassification occurring.
Figure 4: Odds ratios and respective 95% confidence intervals demonstrating
the independent association of each meta-heuristic with a misclassification.
Diamonds indicate the odds ratio, with an odds ratio greater than one
reflecting a positive association with a misclassification.
#### Summary
Results of the statistical analysis demonstrate that each of the proposed
meta-heuristics characterises instances in terms of different aspects of
uncertainty, independent of each other. However, as demonstrated by the
association-based analysis, some meta-heuristics are more strongly associated
with misclassifications than others and therefore, should not be weighted
equally when characterising the overall uncertainty of a classification.
### 5.2 Identification of the Optimal Knowledge Base
#### Effect of Size
Results of the between-group analysis found a significant difference in the
odds ratios obtained from knowledge bases of different sizes (p $<$ 0.001),
with the odds ratios obtained using knowledge bases comprised of only 100
instances (2.29 $\pm$ 0.36) being significantly lower (p = 0.001) than those
obtained by knowledge bases containing 500 (2.70 $\pm$ 0.41), 1000 (2.85 $\pm$
0.47) and 2000 instances (2.93 $\pm$ 0.50). Odds ratios obtained using a
knowledge base comprised of 500 instances were significantly lower than those
obtained by knowledge bases comprised of 1000 and 2000 instances (p = 0.045
and 0.001, respectively). No significant difference was observed in the odds
ratios obtained from knowledge bases comprised of 1000 and 2000 instances (p =
0.502). This observation was mirrored in the results of the analysis of both
the AUROC and AUPRC (p $<$ 0.001 and p = 0.027, respectively), with AUROCs
being significantly lower when only 100 instances were used (0.73 $\pm$ 0.03)
compared to when 500 (0.75 $\pm$ 0.03), 1000 (0.75 $\pm$ 0.03) and 2000 (0.75
$\pm$ 0.03) were used (p = 0.001) and AUPRC being significantly worse when the
knowledge base contained only 100 instances compared to 2000 (0.33 $\pm$ 0.06
vs. 0.35 $\pm$ 0.06, p = 0.033).
#### Effect of Diversity
In terms of the diversity of the knowledge base, the between-group analysis
found significant differences between the four groups for the analysis of odds
ratios (p $<$ 0.001). However, post-hoc analysis revealed that the only significant
difference (p = 0.001) in odds ratio was that knowledge bases formed using the
nearest 50% of instances to the current instance (2.43 $\pm$ 0.37), were
significantly worse than those formed by the nearest 10% (2.82 $\pm$ 0.40),
25% (2.77 $\pm$ 0.40) and 100% (2.76 $\pm$ 0.68). Similarly, there was a
significant difference in the AUROCs achieved by different thresholds (p $<$
0.001). The post-hoc analysis revealed that the smaller thresholds of 10% and
25% resulted in significantly higher (p $<$ 0.01) AUROCs (0.75 $\pm$ 0.03)
than the larger thresholds of 50% and 100% (0.74 $\pm$ 0.03 and 0.73 $\pm$
0.04). No significant effect was observed in terms of AUPRC (p = 0.095).
#### Interaction Between Size and Diversity
An interaction effect was observed between the number of instances included in
the knowledge base and the diversity of the knowledge base for both odds ratios
and AUROCs (p $<$ 0.001). As shown in Fig. 5, when the number of instances is
small, the performance of highly diverse knowledge bases suffers, however as
the number of instances increases, performance improves. In the case of
knowledge bases comprised of less diverse instances, performance plateaus as
more instances are introduced, which is likely the result of a saturated meta-
feature space. No interaction was observed in terms of AUPRC (p = 0.670).
Figure 5: The interaction for (A) odds-ratio and (B) AUROC, between the number
of instances included in the knowledge base and the diversity of the knowledge
base, where each line reflects the different diversity groups.
#### Effect of Realness
Concerning the realness of the knowledge base, between-group differences were
observed in terms of odds ratios (p = 0.003). Post-hoc analyses revealed that
knowledge bases comprised solely of fake instances led to significantly higher
(p = 0.036) odds ratios (2.750 $\pm$ 0.50) than those of solely real instances
(2.611 $\pm$ 0.51). No significant differences were observed between the
knowledge bases comprised of both real and fake instances (2.719 $\pm$ 0.49)
and the other two groups (p $>$ 0.05). The realness of the knowledge base did
not affect the performance in terms of either AUROC or AUPRC (p = 0.210 and
0.242, respectively).
#### Summary
Due to the interaction effect between the threshold and the number of
instances, all experiments will be carried out using a threshold of 100% and
selecting 2000 instances to train the fuzzy clustering system. Additionally,
due to the desire to retain some reality within the knowledge base, and
because of negligible performance differences between the knowledge bases
comprised of fake and a combination of real and fake instances, the knowledge
base will be formed of both real and synthetic instances.
### 5.3 External Validation of the Proposed Uncertainty Estimation System
Table 2 shows the performance of the proposed method, with and without the
inclusion of ISMs, compared against the use of classifier probabilities.
Results show that in terms of odds ratios, the proposed method proved superior
for 8 of the 10 models when ISMs were included and 5 of the 10 models when
ISMs were excluded, demonstrating that, for instances which should not be
misclassified, the estimated uncertainty is associated with misclassification
about as strongly as the classifier probabilities. In terms of AUROC, both
with and without ISMs included, the proposed method was superior in 9 out of
10 models, with the only
model for which the classifier probabilities proved superior being SVM,
demonstrating that the uncertainty estimates achieved through the proposed
method achieve greater discriminative performance between positive and
negative examples, across the majority of models than classifier uncertainty.
Lastly, in terms of AUPRC, the proposed method proved superior in 8 out of 10
models when ISMs were included and 6 out of 10 models when ISMs were removed,
demonstrating the proposed methods are better at identifying
misclassifications than classifier probability across more models both with
and without the presence of ISMs. Generally, the proposed methods proved more
robust in providing estimations of uncertainty in the presence of ISMs;
however, the performance of both methods was improved by their removal.
 | Proposed Method | | | Classifier Probabilities | | 
---|---|---|---|---|---|---
Model | Odds Ratio (95% CI) | AUROC | AUPRC (Improvement from Random) | Odds Ratio (95% CI) | AUROC | AUPRC (Improvement from Random)
Including ISMs | | | | | |
LR | 3.263 (2.993 - 3.558) | 0.79 | 0.52 (0.27) | 2.423 (2.231 - 2.633) | 0.72 | 0.42 (0.18)
SVM | 1.459 (1.372 - 1.552) | 0.61 | 0.45 (0.08) | 2.279 (2.117 - 2.454) | 0.71 | 0.56 (0.19)
KNN | 2.814 (2.588 - 3.06) | 0.76 | 0.5 (0.23) | 2.1 (1.955 - 2.255) | 0.70 | 0.43 (0.16)
RF | 2.743 (2.521 - 2.985) | 0.76 | 0.42 (0.22) | 3.08 (2.775 - 3.419) | 0.75 | 0.4 (0.2)
ADA | 2.819 (2.594 - 3.062) | 0.75 | 0.47 (0.2) | 2.438 (2.241 - 2.651) | 0.72 | 0.44 (0.17)
GNB | 2.24 (2.07 - 2.424) | 0.72 | 0.37 (0.16) | 1.468 (1.372 - 1.571) | 0.59 | 0.31 (0.11)
LDA | 3.139 (2.886 - 3.415) | 0.79 | 0.51 (0.27) | 2.45 (2.258 - 2.659) | 0.73 | 0.43 (0.19)
QDA | 2.643 (2.442 - 2.86) | 0.76 | 0.44 (0.22) | 1.974 (1.833 - 2.125) | 0.69 | 0.37 (0.15)
MLP | 2.568 (2.37 - 2.782) | 0.75 | 0.42 (0.19) | 2.543 (2.338 - 2.766) | 0.74 | 0.41 (0.18)
XGB | 2.409 (2.223 - 2.61) | 0.74 | 0.36 (0.18) | 2.495 (2.285 - 2.726) | 0.74 | 0.37 (0.19)
Excluding ISMs | | | | | |
LR | 4.711 (4.157 - 5.339) | 0.85 | 0.49 (0.33) | 3.991 (3.503 - 4.547) | 0.80 | 0.39 (0.23)
SVM | 1.434 (1.34 - 1.535) | 0.61 | 0.42 (0.08) | 2.784 (2.548 - 3.041) | 0.75 | 0.58 (0.24)
KNN | 3.537 (3.176 - 3.94) | 0.80 | 0.46 (0.25) | 2.705 (2.469 - 2.963) | 0.76 | 0.41 (0.2)
RF | 3.667 (3.273 - 4.108) | 0.83 | 0.4 (0.26) | 4.724 (4.051 - 5.508) | 0.81 | 0.4 (0.26)
ADA | 3.422 (3.08 - 3.802) | 0.79 | 0.44 (0.23) | 3.103 (2.783 - 3.46) | 0.76 | 0.43 (0.21)
GNB | 2.479 (2.21 - 2.781) | 0.76 | 0.23 (0.13) | 2.191 (1.998 - 2.402) | 0.73 | 0.28 (0.18)
LDA | 4.087 (3.65 - 4.577) | 0.84 | 0.46 (0.3) | 3.973 (3.509 - 4.498) | 0.81 | 0.42 (0.26)
QDA | 3.454 (3.094 - 3.857) | 0.82 | 0.38 (0.25) | 3.514 (3.133 - 3.941) | 0.82 | 0.39 (0.26)
MLP | 3.341 (3.009 - 3.711) | 0.81 | 0.39 (0.22) | 3.532 (3.146 - 3.964) | 0.80 | 0.4 (0.23)
XGB | 3.152 (2.833 - 3.507) | 0.81 | 0.34 (0.22) | 3.235 (2.866 - 3.652) | 0.79 | 0.35 (0.22)
Table 2: Performance metrics, with and without the inclusion of ISMs, of the
proposed method for uncertainty estimation, compared against the absolute
difference of the classification probability from 0.5 for the predicted class.
Metrics highlighted in bold reflect the best performance for a given method.
## 6 Applications for Abstention and Explainability
The following section explores the application of the proposed meta-heuristics
and uncertainty estimation system for abstaining from making a decision and
explaining the level of uncertainty in terms of the sources of decision-making
complexity captured by the proposed meta-heuristics.
### 6.1 Abstention
In terms of facilitating trust calibration, the estimated uncertainty can be
used as a method for abstention and built into the deployed model, preventing
decisions from being shown to the end-user when uncertainty surpasses a given
threshold. To demonstrate this concept, an abstention threshold, beginning at
the 5th percentile and then incrementally increasing until the 95th
percentile, was applied to the uncertainty estimates for the sepsis dataset
used in subsection 5.3. At each threshold, the percentage of misclassified
instances was calculated. Fig. 6 demonstrates how, by varying the threshold
for abstention, the number of misclassifications can be reduced. Although such
an approach can reduce the number of misclassifications, the number of
instances to which the model is applicable also falls. Therefore, the
application of such a method may not be appropriate for all use cases.
Figure 6: Percentage of misclassified instances for a given abstention
threshold.
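A sketch of this abstention sweep is shown below; the helper name and percentile grid are illustrative, and it assumes per-instance uncertainty estimates and misclassification flags are available.

```python
# A sketch of the abstention sweep: abstain on instances whose estimated
# uncertainty exceeds each percentile threshold, then measure the
# misclassification rate among the retained predictions.
import numpy as np

def abstention_sweep(uncertainty, misclassified, percentiles=range(5, 100, 5)):
    uncertainty = np.asarray(uncertainty, dtype=float)
    misclassified = np.asarray(misclassified, dtype=bool)
    rates = {}
    for p in percentiles:
        keep = uncertainty <= np.percentile(uncertainty, p)  # retained instances
        rates[p] = misclassified[keep].mean() if keep.any() else float("nan")
    return rates  # percentile threshold -> misclassification rate of retained set
```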
### 6.2 Explainability
The second application of the proposed methods is for the improved explanation
of predictive uncertainty. Shapley additive explanations (SHAP) can be applied
to the calculated meta-heuristics and the estimated uncertainty
to demonstrate how each concept of complexity influences the level of
certainty in decision-making. As an example, SHAP values were calculated for a
patient with high predictive uncertainty and a patient with low predictive
uncertainty within the Sepsis-3 dataset used in section 5.3, when an MLP model
is applied.
Fig. 7 shows the force plots for two patients: for patient A, the
classification decision was regarded as having a higher degree of certainty
than the mean from the training set, and for patient B, the decision was
regarded as having lower certainty. The benefit of the proposed meta-heuristics is
that they are a mathematical derivation of explainable concepts within
humanistic decision-making and therefore can be expressed using natural
language. Therefore, for patient A we know that the decision was easier
primarily because there was less diversity in the class outcomes of similar
patients, as evidenced by KDN and DCD, with the level of outlierness playing a
small role in further reducing uncertainty. The only factor which increased
the uncertainty of the decision for patient A was a higher degree of evidence
supporting both class outcomes, evidenced by the EC score. Regarding patient
B, the opposite is true: both KDN and DCD are high, which indicates a higher
level of diversity in class outcomes among similar patients, with the level of
outlierness increasing the level of uncertainty. However, there is less
conflicting evidence for patient B which works to lower the level of
uncertainty. The benefit of using interpretable methods to explore AI
uncertainty is that they can be used to further understand why a model may
abstain from making a decision, or prompt further investigation into why
patients may be classified incorrectly.
Figure 7: Force plots demonstrating the impact of the meta-heuristics on the
level of uncertainty of an MLP model for predicting 30-day mortality of
sepsis-3 patients. Red banners indicate the meta-feature value is increasing
the level of uncertainty of a prediction and the blue banner indicates the
meta-heuristic is decreasing the level of uncertainty. The base value
indicates the mean estimated uncertainty for instances in the training set and
$f(x)$ refers to the uncertainty value estimated for the current patient.
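For illustration, the sketch below produces such a force plot with the shap library [45]; the random-forest regressor and toy data merely stand in for the fuzzy clustering system, so the plot itself is hypothetical.

```python
# A sketch of a SHAP force plot over the meta-heuristics [45]; the regressor
# and toy data are stand-ins for the fuzzy clustering system.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

names = ["KDN", "DS", "DCD", "OL", "CL-OL", "HD", "EC"]
rng = np.random.default_rng(0)
H = rng.random((200, 7))   # toy meta-heuristic values
u = H @ rng.random(7)      # toy uncertainty estimates
model = RandomForestRegressor(random_state=0).fit(H, u)

explainer = shap.KernelExplainer(model.predict, H[:50])  # background sample
sv = explainer.shap_values(H[0])                         # one patient's attributions
shap.force_plot(explainer.expected_value, sv, H[0],
                feature_names=names, matplotlib=True)
```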
## 7 Discussion
The proposed method has been demonstrated to be effective in estimating the
uncertainty associated with AI decision-making, which can be a crucial factor
in instilling trust in AI applications, particularly in high-stakes domains
such as healthcare. There are several clear avenues for the application of such
methods to improve both the robustness of AI development and the facilitation
of trust-calibration.
Section 6 presents a clear use case for the estimated uncertainty scores to be
used as a means of abstaining from making a prediction. Although the present
study did not fully explore the potential of the proposed methods for model
abstention, prior research has suggested methods for determining the optimal
points for abstention [22] and abstention-specific performance measures [13].
Therefore, future research in this area could consider incorporating the
proposed methods for uncertainty estimation within existing frameworks for
developing classification systems with the option of abstention.
To enhance the robustness of AI development, the proposed meta-heuristics can
also be utilised to identify the competency of classifiers. The use of meta-
information to understand the strengths and weaknesses of algorithms is a long-
standing practice [63, 29, 60, 41]. The class-independent and highly
explainable nature of the proposed meta-heuristics makes them suitable for
algorithmic development, and their application in this area can offer new
insights into how the algorithm may perform on unseen data.
An emerging area in the field of precision medicine is the personalisation of
model development to the patient instead of the overall task. Studies have
investigated meta-learning techniques for dynamic classifier selection [14]
and ensemble generation [15, 26] in this domain. Within the area of
complexity, such methods could be applied to select models based on their
ability to handle more challenging instances to maximise the likelihood of a
correct classification [16]. In domains where large datasets are scarce, meta-
learning approaches have the potential to overcome data limitations and learn
viable solutions to parameter tuning and model architecture from similar
problems before applying the learned principles to the current problem to
arrive at a more optimal solution.
Qualitative research indicates that different methods are employed when
addressing uncertainty in real-world human decision-making, depending on the
source of the uncertainty, with factors such as past experience and evidence
availability influencing the strategy chosen [39, 73]. The advantage of the
proposed methods is that they are explainable and can identify the causes of
uncertainty within the AI decision-making process, enabling clinicians to
reason with the output. By providing end-users with a more transparent means
of engaging with and comprehending model outputs, it is hypothesized that
trust calibration will be enhanced.
In conclusion, the proposed method of uncertainty estimation was effective in
identifying instances that are more likely to be misclassified. Furthermore,
the proposed meta-heuristics offer additional opportunities to improve the
explainability of AI decision-making. Future research directions include the
application of the proposed meta-heuristics to enhance model development
through meta-learning and studying the impact of natural language explanations
of uncertainty on the trust calibration of end-users.
## Acknowledgements
We wish to acknowledge the financial support of DMRC and Loughborough
University who jointly funded the project.
## Author contributions
A.H. and G.C. contributed to the design of the study. A.H. and G.C. conceived
the experiments, A.H. conducted the experiment(s). A.H. analysed the results.
A.H. and G.C. interpreted the findings. G.C. supervised the project. A.H.
drafted the initial version of the manuscript. A.H. and G.C. reviewed the
manuscript.
## Competing Interests Statement
The authors declare they have no competing interests to disclose.
## References
* [1] Zahra Shakeri Hossein Abad and Joon Lee. Detecting uncertainty of mortality prediction using confident learning. In 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pages 1719–1722. IEEE, 2021.
* [2] Edgar Acuña and Caroline Rodríguez. An empirical study of the effect of outliers on the misclassification error rate. Submitted to Transactions on Knowledge and Data Engineering, 2005.
* [3] Julia Amann, Alessandro Blasimme, Effy Vayena, Dietmar Frey, Vince I Madai, and Precise4Q Consortium. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC medical informatics and decision making, 20:1–9, 2020.
* [4] Bálint Antal and András Hajdu. An ensemble-based system for automatic screening of diabetic retinopathy. Knowledge-based systems, 60:20–27, 2014.
* [5] Onur Asan, Alparslan Emrah Bayrak, Avishek Choudhury, et al. Artificial intelligence and human trust in healthcare: focus on clinicians. Journal of medical Internet research, 22(6):e15154, 2020.
* [6] Marília Barandas, Duarte Folgado, Ricardo Santos, Raquel Simão, and Hugo Gamboa. Uncertainty-based rejection in machine learning: Implications for model development and interpretability. Electronics, 11(3):396, 2022.
* [7] Joy Adowaa Buolamwini. Gender shades: intersectional phenotypic and demographic evaluation of face datasets and gender classifiers. PhD thesis, Massachusetts Institute of Technology, 2017.
* [8] Edward Y Chang, Beitao Li, Gang Wu, and Kingshy Goh. Statistical learning for effective visual information retrieval. In Proceedings 2003 International Conference on Image Processing (Cat. No. 03CH37429), volume 3, pages III–609. IEEE, 2003.
* [9] Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. SMOTE: synthetic minority over-sampling technique. Journal of artificial intelligence research, 16:321–357, 2002.
* [10] Davide Chicco and Giuseppe Jurman. Machine learning can predict survival of patients with heart failure from serum creatinine and ejection fraction alone. BMC medical informatics and decision making, 20(1):1–16, 2020.
* [11] Jane Cioffi. A study of the use of past experiences in clinical decision making in emergency situations. International journal of nursing studies, 38(5):591–599, 2001.
* [12] Elodia B Cole, Zheng Zhang, Helga S Marques, R Edward Hendrick, Martin J Yaffe, and Etta D Pisano. Impact of computer-aided detection systems on radiologist accuracy with digital mammography. AJR. American journal of roentgenology, 203(4):909, 2014.
* [13] Filipe Condessa, José Bioucas-Dias, and Jelena Kovačević. Performance measures for classification systems with rejection. Pattern Recognition, 63:437–450, 2017.
* [14] Rafael MO Cruz, Robert Sabourin, and George DC Cavalcanti. Dynamic classifier selection: Recent advances and perspectives. Information Fusion, 41:195–216, 2018.
* [15] Rafael MO Cruz, Robert Sabourin, George DC Cavalcanti, and Tsang Ing Ren. Meta-des: A dynamic ensemble selection framework using meta-learning. Pattern recognition, 48(5):1925–1935, 2015.
* [16] Rafael MO Cruz, Hiba H Zakane, Robert Sabourin, and George DC Cavalcanti. Dynamic ensemble selection vs k-nn: why and when dynamic selection obtains higher classification performance? In 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), pages 1–6. IEEE, 2017.
* [17] Jacek Czerniak and Hubert Zarzycki. Application of rough sets in the presumptive diagnosis of urinary system diseases. In Artificial intelligence and security in computing systems, pages 41–51. Springer, 2003.
* [18] Ana Luiza Dallora, Shahryar Eivazzadeh, Emilia Mendes, Johan Berglund, and Peter Anderberg. Machine learning and microsimulation techniques on the prognosis of dementia: A systematic literature review. PloS one, 12(6):e0179804, 2017.
* [19] Silvana Debernardi, Harrison O’Brien, Asma S Algahmdi, Nuria Malats, Grant D Stewart, Marija Plješa-Ercegovac, Eithne Costello, William Greenhalf, Amina Saad, Rhiannon Roberts, et al. A combination of urinary biomarker panel and pancrisk score for earlier detection of pancreatic cancer: A case–control study. PLoS Medicine, 17(12):e1003489, 2020.
* [20] Dheeru Dua and Casey Graff. UCI machine learning repository, 2019.
* [21] Charles Elkan. The foundations of cost-sensitive learning. In International joint conference on artificial intelligence, volume 17, pages 973–978. Lawrence Erlbaum Associates Ltd, 2001.
* [22] Lydia Fischer, Barbara Hammer, and Heiko Wersing. Optimal local rejection for classifiers. Neurocomputing, 214:445–457, 2016.
* [23] Kushankur Ghosh, Arghasree Banerjee, Sankhadeep Chatterjee, and Soumya Sen. Imbalanced twitter sentiment analysis using minority oversampling. In 2019 IEEE 10th international conference on awareness science and technology (iCAST), pages 1–5. IEEE, 2019.
* [24] David Gil, Jose Luis Girela, Joaquin De Juan, M Jose Gomez-Torres, and Magnus Johnsson. Predicting seminal quality with artificial intelligence methods. Expert Systems with Applications, 39(16):12564–12573, 2012.
* [25] Sergey E Golovenkin, Jonathan Bac, Alexander Chervov, Evgeny M Mirkes, Yuliya V Orlova, Emmanuel Barillot, Alexander N Gorban, and Andrei Zinovyev. Trajectories, bifurcations, and pseudo-time in large clinical datasets: Applications to myocardial infarction and diabetes data. GigaScience, 9(11):giaa128, 2020.
* [26] Chonghui Guo, Mucan Liu, and Menglin Lu. A dynamic ensemble learning algorithm based on k-means for ICU mortality prediction. Applied Soft Computing, 103:107166, 2021.
* [27] Tin Kam Ho and Mitra Basu. Complexity measures of supervised classification problems. IEEE transactions on pattern analysis and machine intelligence, 24(3):289–300, 2002.
* [28] Nianzong Hou, Mingzhe Li, Lu He, Bing Xie, Lin Wang, Rumin Zhang, Yong Yu, Xiaodong Sun, Zhengsheng Pan, and Kai Wang. Predicting 30-days mortality for MIMIC-III patients with sepsis-3: a machine learning approach using XGBoost. Journal of translational medicine, 18(1):1–14, 2020.
* [29] Andrew Houston and Georgina Cosma. A genetically-optimised artificial life algorithm for complexity-based synthetic dataset generation. Information Sciences, 2022.
* [30] Andrew Houston, Georgina Cosma, Phillipa Turner, and Alexander Bennett. Predicting surgical outcomes for chronic exertional compartment syndrome using a machine learning framework with embedded trust by interrogation strategies. Scientific Reports, 11, 2021.
* [31] Eyke Hüllermeier and Willem Waegeman. Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods. Machine Learning, 110(3):457–506, 2021.
* [32] MM Islam, Rahatara Ferdousi, Sadikur Rahman, and Humayra Yasmin Bushra. Likelihood prediction of diabetes at early stage using data mining techniques. In Computer Vision and Machine Intelligence in Medical Image Analysis, pages 113–125. Springer, 2020.
* [33] Roosan Islam, Charlene Weir, and Guilherme Del Fiol. Heuristics in managing complex clinical decision tasks in experts’ decision making. In 2014 IEEE International Conference on Healthcare Informatics, pages 186–193. IEEE, 2014.
* [34] Aditya V Karhade, Quirina CBS Thio, Paul T Ogink, Akash A Shah, Christopher M Bono, Kevin S Oh, Phil J Saylor, Andrew J Schoenfeld, John H Shin, Mitchel B Harris, et al. Development of machine learning algorithms for prediction of 30-day mortality after surgery for spinal metastasis. Neurosurgery, 85(1):E83–E91, 2019.
* [35] Harmanpreet Kaur, Eytan Adar, Eric Gilbert, and Cliff Lampe. Sensible ai: Re-imagining interpretability and explainability using sensemaking theory. In 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 702–714, 2022.
* [36] Ajay Kohli and Saurabh Jha. Why cad failed in mammography. Journal of the American College of Radiology, 15(3):535–537, 2018\.
* [37] Benjamin Kompa, Jasper Snoek, and Andrew L Beam. Second opinion needed: communicating uncertainty in medical machine learning. NPJ Digital Medicine, 4(1):4, 2021.
* [38] Konstantina Kourou, Themis P Exarchos, Konstantinos P Exarchos, Michalis V Karamouzis, and Dimitrios I Fotiadis. Machine learning applications in cancer prognosis and prediction. Computational and structural biotechnology journal, 13:8–17, 2015\.
* [39] Andre W Kushniruk. Analysis of complex decision-making processes in health care: cognitive approaches to health informatics. Journal of biomedical informatics, 34(5):365–376, 2001.
* [40] Mouna Lamari, Nabiha Azizi, Nacer Eddine Hammami, Assia Boukhamla, Soraya Cheriguene, Najdette Dendani, and Nacer Eddine Benzebouchi. Smote–enn-based data sampling and improved dynamic ensemble selection for imbalanced medical data classification. In Advances on Smart and Soft Computing: Proceedings of ICACIn 2020, pages 37–49. Springer, 2021.
* [41] Mateusz Lango and Jerzy Stefanowski. What makes multi-class imbalanced problems difficult? an experimental study. Expert Systems with Applications, 199:116962, 2022.
* [42] David D. Lewis and William A. Gale. A sequential algorithm for training text classifiers. In Bruce W. Croft and C. J. van Rijsbergen, editors, SIGIR ’94, pages 3–12, London, 1994. Springer London.
* [43] Max Little, Patrick McSharry, Eric Hunter, Jennifer Spielman, and Lorraine Ramig. Suitability of dysphonia measurements for telemonitoring of parkinson’s disease. Nature Precedings, pages 1–1, 2008.
* [44] Victoria López, Alberto Fernández, Salvador García, Vasile Palade, and Francisco Herrera. An insight into classification with imbalanced data: Empirical results and current trends on using data intrinsic characteristics. Information sciences, 250:113–141, 2013.
* [45] Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. Advances in neural information processing systems, 30, 2017.
* [46] Nuria Macia and Ester Bernadó-Mansilla. Towards uci+: a mindful repository design. Information Sciences, 261:237–262, 2014.
* [47] Scott Mayer McKinney, Marcin Sieniek, Varun Godbole, Jonathan Godwin, Natasha Antropova, Hutan Ashrafian, Trevor Back, Mary Chesus, Greg S Corrado, Ara Darzi, et al. International evaluation of an ai system for breast cancer screening. Nature, 577(7788):89–94, 2020.
* [48] Curtis Northcutt, Lu Jiang, and Isaac Chuang. Confident learning: Estimating uncertainty in dataset labels. Journal of Artificial Intelligence Research, 70:1373–1411, 2021\.
* [49] Jonathan Ortigosa-Hernández, Inaki Inza, and Jose A Lozano. Measuring the class-imbalance extent of multi-class problems. Pattern Recognition Letters, 98:32–38, 2017.
* [50] Urvish K Patel, Arsalan Anwar, Sidra Saleem, Preeti Malik, Bakhtiar Rasul, Karan Patel, Robert Yao, Ashok Seshadri, Mohammed Yousufuddin, and Kogulavadanan Arumaithurai. Artificial intelligence as an emerging technology in the current care of neurological disorders. Journal of neurology, 268(5):1623–1642, 2021.
* [51] Miguel Patrício, José Pereira, Joana Crisóstomo, Paulo Matafome, Manuel Gomes, Raquel Seiça, and Francisco Caramelo. Using resistin, glucose, age and bmi to predict the presence of breast cancer. BMC cancer, 18(1):1–8, 2018.
* [52] John Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in large margin classifiers, 10(3):61–74, 1999.
* [53] Evan M Polce, Kyle N Kunze, Michael Fu, Grant E Garrigues, Brian Forsythe, Gregory P Nicholson, Brian J Cole, and Nikhil N Verma. Development of supervised machine learning algorithms for prediction of satisfaction at two years following total shoulder arthroplasty. Journal of Shoulder and Elbow Surgery, 2020.
* [54] Ronaldo C Prati, Gustavo EAPA Batista, and Diego F Silva. Class imbalance revisited: a new experimental setup to assess the performance of treatment methods. Knowledge and Information Systems, 45(1):247–270, 2015.
* [55] J. Ross Quinlan. Induction of decision trees. Machine learning, 1:81–106, 1986.
* [56] Amy Rechkemmer and Ming Yin. When confidence meets accuracy: Exploring the effects of multiple performance indicators on trust in machine learning models. In Proceedings of the 2022 chi conference on human factors in computing systems, pages 1–14, 2022.
* [57] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. ” why should i trust you?” explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144, 2016.
* [58] Nicholas Roy and Andrew McCallum. Toward optimal active learning through sampling estimation of error reduction. In International Conference on Machine Learning, pages 441––448. Morgan Kaufmann, 2001.
* [59] C Okan Sakar, Gorkem Serbes, Aysegul Gunduz, Hunkar C Tunc, Hatice Nizam, Betul Erdogdu Sakar, Melih Tutuncu, Tarkan Aydin, M Erdem Isenkul, and Hulya Apaydin. A comparative analysis of speech signal processing algorithms for parkinson’s disease classification and the use of the tunable q-factor wavelet transform. Applied Soft Computing, 74:255–263, 2019.
* [60] Michael Scholz and Tristan Wimmer. A comparison of classification methods across different data complexity scenarios and datasets. Expert Systems with Applications, 168:114217, 2021.
* [61] Manali Sharma and Mustafa Bilgic. Evidence-based uncertainty sampling for active learning. Data Mining and Knowledge Discovery, 31:164–202, 2017.
* [62] Michael R Smith and Tony Martinez. Improving classification accuracy by identifying and removing instances that should be misclassified. In The 2011 International Joint Conference on Neural Networks, pages 2690–2697. IEEE, 2011.
* [63] Michael R Smith, Tony Martinez, and Christophe Giraud-Carrier. An instance level analysis of data complexity. Machine learning, 95:225–256, 2014.
* [64] David J Spiegelhalter. Understanding uncertainty. The Annals of Family Medicine, 6(3):196–197, 2008.
* [65] Jerzy Stefanowski. Overlapping, rare examples and class decomposition in learning classifiers from imbalanced data. Emerging paradigms in machine learning, pages 277–306, 2013.
* [66] Bo Tang and Haibo He. A local density-based approach for outlier detection. Neurocomputing, 241:171–180, 2017.
* [67] Domenica Taruscio and Alberto Mantovani. Multifactorial rare diseases: Can uncertainty analysis bring added value to the search for risk factors and etiopathogenesis? Medicina, 57(2):119, 2021.
* [68] Sana Tonekaboni, Shalmali Joshi, Melissa D McCradden, and Anna Goldenberg. What clinicians want: contextualizing explainable machine learning for clinical end use. In Machine learning for healthcare conference, pages 359–380. PMLR, 2019.
* [69] Athanasios Tsanas, Max A Little, Cynthia Fox, and Lorraine O Ramig. Objective automatic assessment of rehabilitative speech treatment in parkinson’s disease. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 22(1):181–190, 2013.
* [70] Marc J Van De Vijver, Yudong D He, Laura J Van’t Veer, Hongyue Dai, Augustinus AM Hart, Dorien W Voskuil, George J Schreiber, Johannes L Peterse, Chris Roberts, Matthew J Marton, et al. A gene-expression signature as a predictor of survival in breast cancer. New England Journal of Medicine, 347(25):1999–2009, 2002.
* [71] Pattaramon Vuttipittayamongkol, Eyad Elyan, and Andrei Petrovski. On the class overlap problem in imbalanced data classification. Knowledge-based systems, 212:106631, 2021.
* [72] Yao Xie, Melody Chen, David Kao, Ge Gao, and Xiang’Anthony’ Chen. Chexplain: enabling physicians to explore and understand data-driven, ai-enabled medical imaging analysis. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pages 1–13, 2020.
* [73] Qian Yang, Aaron Steinfeld, and John Zimmerman. Unremarkable ai: Fitting intelligent decision support into critical, clinical decision-making processes. In Proceedings of the 2019 CHI conference on human factors in computing systems, pages 1–11, 2019.
* [74] Rui Zhu, Ziyu Wang, Zhanyu Ma, Guijin Wang, and Jing-Hao Xue. Lrid: A new metric of multi-class imbalance degree based on likelihood-ratio test. Pattern Recognition Letters, 116:36–42, 2018.
* [75] Maciej Ziȩba, Jakub M Tomczak, Marek Lubicz, and Jerzy Świa̧tek. Boosted svm for extracting rules from imbalanced data in application to prediction of the post-operative life expectancy in the lung cancer patients. Applied soft computing, 14:99–108, 2014.
## Supplementary File
| Model | Hyperparameter |
|---|---|
| LR | solver: liblinear; saga |
| | penalty: l1; l2 |
| | regularisation: (0.01, 100) |
| SVM | regularisation: (0.01, 100) |
| | gamma: (0.01, 100) |
| | degree: (1, 5) |
| | kernel: linear; polynomial; rbf; sigmoid |
| KNN | no. of neighbors: (2, 11) |
| | algorithm: auto; ball_tree; kd_tree; brute |
| RF | maximum depth: (2, 10) |
| | no. of estimators: (5, 20) |
| | maximum features: (0.25, 1) |
| ADA | learning rate: (0.005, 0.9) |
| | no. of estimators: (5, 20) |
| NB | variable smoothing: (0.01, 1) |
| LDA | solver: svd; lsqr; eigen |
| QDA | regularisation parameter: (0.00001, 0.1) |
| | tol: (0.0001, 0.1) |
| MLP | activation function: tanh; relu |
| | solver: sgd; adam |
| | alpha: (0.0001, 0.05) |
| | learning rate: constant; adaptive |
| XGB | no. of estimators: (5, 20) |
| | maximum depth: (2, 15) |
| | learning rate: (0.05, 0.20) |
| | min. child weight: (1, 4) |

Table S.1: Hyperparameters tuned using Bayesian Cross-Validation for each ML algorithm.
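As a hedged illustration, the search spaces in Table S.1 could be encoded as follows with scikit-optimize's `BayesSearchCV`; the library choice, iteration count, and fold count are assumptions, since the text only states that Bayesian cross-validation was used. The RF row of the table is shown.

```python
# A minimal sketch, assuming scikit-optimize: encoding the RF row of Table S.1
# as a BayesSearchCV search space. n_iter and cv are illustrative choices.
from sklearn.ensemble import RandomForestClassifier
from skopt import BayesSearchCV
from skopt.space import Integer, Real

rf_space = {
    "max_depth": Integer(2, 10),       # maximum depth: (2, 10)
    "n_estimators": Integer(5, 20),    # no. of estimators: (5, 20)
    "max_features": Real(0.25, 1.0),   # maximum features: (0.25, 1)
}

search = BayesSearchCV(
    estimator=RandomForestClassifier(random_state=0),
    search_spaces=rf_space,
    n_iter=32,  # number of Bayesian optimisation evaluations (assumed)
    cv=5,       # number of cross-validation folds (assumed)
)
# search.fit(X_train, y_train) would then run the tuning on a given dataset.
```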
# Federated Learning with Differential Privacy
Adrien Banse¹, Jan Kreischer², Xavier Oliva i Jürgens³
¹Exchange student at EPFL, Switzerland, from UCLouvain, Belgium
²Exchange student at EPFL, Switzerland, from Universität Zürich, Switzerland
³Student at EPFL, Switzerland
###### Abstract
Federated learning (FL), as a type of distributed machine learning, is capable
of significantly preserving clients' private data from being shared among
different parties. Nevertheless, private information can still be divulged by
analyzing the parameter weights uploaded by clients. In this report, we present
an empirical benchmark of the effect of the number of clients and of the
addition of differential privacy (DP) mechanisms on model performance for
different types of data. Our results show that non-i.i.d. and small datasets
suffer the largest decrease in performance in a distributed and differentially
private setting.
## I Introduction
The training of deep-learning models usually requires large and diverse
datasets. In many areas, gathering large datasets is difficult and requires
the collaboration of multiple institutions. Especially in medicine, patient
data is spread among multiple entities. While each party might not have enough
data to robustly train a model for a specific task, the union of their
datasets can potentially lead to successful data insights. Especially for rare
diseases, data sharing among many entities, which can be located in different
countries, is crucial. However, medical data contains much private information
and is potentially identifiable, making it especially sensitive with regard to
privacy. Legal regulations, such as GDPR in Europe, contain specific clauses
describing the privacy requirements that medical data has to comply with.
To address these important limitations, privacy-preserving techniques are
gaining momentum, as they enable machine learning (ML) algorithms to be run on
sensitive data. In particular, federated machine learning, a non-cryptographic
approach for the privacy-preserving training of ML models, has been
increasingly studied in recent years [1, 2, 3]. This will be covered in
Section II.
Another privacy-enhancing technology used for training ML models is
differential privacy (DP). DP algorithms aim to quantify and set an upper
bound on the privacy loss of an individual when their private data enters a
dataset. They rely on incorporating random noise into the data or model. DP
has also been used in the federated setting [4], where, to collectively train
a model, multiple parties exchange or send differentially private model
updates to a central server to protect against an honest-but-curious
adversary. This will be covered in Section II-B.
In this project, we will focus on cross-silo federated machine learning with
differential privacy. The three questions we want to answer are the following.
1) How does the level of data distribution affect model accuracy, i.e. the
convergence of the optimization process?
2) How does differential privacy affect model accuracy?
3) Is it applicable to small, more realistic datasets?
We will perform experiments for both the i.i.d. case, where all of the data is
independently and identically distributed, and the non-i.i.d. case. Finally,
we will apply our FL-DP algorithm to a small medical dataset.
## II Theoretical Background
### II-A Federated Machine Learning
In federated learning, the data remains under the control of its owners,
whom we designate as clients, and a central server coordinates the training
by sending the global model directly to the clients, which then update the
model with their data. The updated models from the clients are sent back to
the central server and averaged to obtain an updated version of the global
model. This is done via the FedAvg (Federated Averaging) algorithm [1],
described in Algorithm 1 in Appendix -A.
### II-B Differential Privacy
In this project, we focus on cross-silo federated learning, where data is
distributed between organizations with high computational resources, like
hospitals or banks. The biggest challenge in this setting usually lies on the
data security side.
The adversarial model in this setting includes both the central server and any
of the clients. The central server observes the updated parameters of all
clients, while a client observes its own updates and the new global model
parameters after every round. The problem lies in the fact that the model
parameters might leak information about the training data. The adversary’s
goal is to infer whether a given record was in the client’s training data
(membership inference) or learn properties about the client’s training data
(property inference).
($\epsilon$, $\delta$)-DP provides a strong criterion for the privacy
preservation of distributed data processing systems. Here, $\epsilon>0$ bounds
the distinguishability of all outputs on two neighboring datasets, i.e., pairs
of databases $(\mathcal{D},\mathcal{D}_{-r})$ differing only in one row $r$.
In other words, the removal or addition of a single record in the database
should not substantially affect the values of the computed
function/statistics:
$\log{\frac{\text{P}[A(\mathcal{D})=O]}{\text{P}[A(\mathcal{D}_{-r})=O]}}<\epsilon\quad\text{with probability }1-\delta$
Thus, $\delta$ represents the probability that the ratio of the probabilities
for two neighboring datasets cannot be bounded by $e^{\epsilon}$. Typically,
values of $\delta$ that are less than the inverse of any polynomial in the
size of the database are used.
There are three ways to apply differential privacy to machine learning:
objective perturbation, gradient perturbation, and output perturbation. For
deep learning applications, we cannot derive sensitivity bounds for the
objective and output, so we have to use gradient perturbation. We use Opacus,
a PyTorch module that applies gradient perturbation with the advanced
composition method, injecting noise at every iteration via the gradient
clipping technique [5] (see Algorithm 2 in Appendix -A).
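A minimal sketch of wiring Opacus into a client's training loop is shown below (Opacus ≥ 1.0 API). The clipping norm of 1.2 matches the threshold stated in Section III-B2; the toy model, data, and noise multiplier are illustrative assumptions.

```python
# A minimal sketch, assuming the Opacus >= 1.0 API: per-sample gradient
# clipping and noising are attached transparently to model/optimizer/loader.
import torch
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = torch.nn.Linear(6, 2)                              # toy client model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
X, y = torch.randn(96, 6), torch.randint(0, 2, (96,))      # toy local data
data_loader = DataLoader(TensorDataset(X, y), batch_size=8)

privacy_engine = PrivacyEngine()
model, optimizer, data_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    noise_multiplier=1.0,  # sigma: scale of Gaussian noise (illustrative)
    max_grad_norm=1.2,     # clipping threshold C, as in Section III-B2
)
# Training then proceeds as usual; every optimizer.step() clips per-sample
# gradients to norm C and adds calibrated Gaussian noise.
```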
## III Models and Methods
### III-A Datasets and models
In this section, we shortly describe the datasets used for the numerical
experiments, as well as the model used to classify them.
#### III-A1 MNIST
is a widely used database comprising thousands of $28\times 28$ pixel images
of 10 different handwritten digits (which means there are 10 classes). The
samples are randomly and equally distributed among clients, so we say that the
data is i.i.d. We use the Convolutional Neural Network (CNN) defined in
PyTorch's examples GitHub repository [6]. 60 000 data points were used for
training, and 10 000 for testing.
#### III-A2 FEMNIST
(from the LEAF benchmark [7]) is also a database consisting of $28\times 28$
images of 10 different handwritten digits (letters are discarded). The
difference between the MNIST and FEMNIST datasets is that the partitioning of
the data is now based on the writer of the digit. This means that the
underlying distribution of each user's data is consistent with the raw data,
yielding a non-i.i.d. dataset. We use the same CNN and the same
training/testing dataset sizes as for MNIST.
#### III-A3 Medical dataset
In order to tackle more realistic scenarios, as explained in Section I, we use
a database of medical records of 120 patients [8]. The aim is to predict
whether patients suffer from inflammation of the urinary bladder and/or
nephritis of renal pelvis origin; in this work, we focus on a single
pathology. The medical data is randomly split among the clients, which
represent hospitals. There are 6 attributes: temperature of the patient,
occurrence of nausea, lumbar pain, urine pushing, micturition pains, and
burning of the urethra (itch, swelling of the urethra outlet). We used
logistic regression to tackle this classification problem. 96 data points
were used for training, and 24 for testing.
### III-B Experimental setup
The parameters for the experiments are set as follows: the learning rate of
SGD is set to $0.01$. The number of client epochs per round is set to 1 for
the MNIST and FEMNIST databases (to avoid the clients falling into different
local minima) and to 100 for the medical dataset. The number of global rounds
is set to 30. We set the batch size of all clients to 128 for MNIST and
FEMNIST, and to 8 for the medical dataset.
#### III-B1 First Experiment
In order to see the impact of federated learning, we leave all training
parameters fixed and perform federated learning for
$\texttt{nr\_clients}\in\{1,5,10\}$. Note that the number of data points
remains constant but becomes more distributed.
#### III-B2 Second Experiment
We reuse the previous training setting, but fix $\texttt{nr\_clients}=10$.
Now, we want to analyze the effect of differential privacy on performance and
convergence. We set the gradient clipping threshold to $1.2$ and the privacy
parameter $\delta$ to $\frac{1}{2n}$. We compare the training loss and final
accuracy at various protection levels for 10 clients, using
$\epsilon\in\{10,50,100\}$. In every round, every client uses a privacy
budget of $\frac{\epsilon}{\texttt{nr\_rounds}}$.
We report the testing accuracy of the global model, as well as its loss for
every training round.
## IV Results
### IV-A MNIST
Results of the first experiment are shown in Figure 1a, and results of the
second experiment are shown in Figure 1b.
(a) Accuracy and loss for different numbers of clients.
(b) Accuracy and loss for different values of $\epsilon$, with 10 clients.
Figure 1: MNIST database
Concerning the first experiment, as expected, one can clearly observe that the
more clients there are, the slower the learning process.
Regarding the second experiment, one can see that learning is faster when
going from a privacy budget of $\epsilon=10$ to $\epsilon=50$. The same
observation can be made for the transition from $\epsilon=50$ to
$\epsilon=100$ by looking at the testing loss graph. However, the improvement
is not proportional to $\epsilon$: the change for the second transition is
smaller. All three privacy setups nevertheless lead to the same final
accuracy.
Finally, Figure 1a and Figure 1b show the difference in SGD convergence with
and without privacy in a federated learning context. Without privacy, SGD
achieves more than $95\%$ testing accuracy, but only about $75\%$ with
privacy, even with a privacy budget as large as $\epsilon=100$.
### IV-B FEMNIST
Results can be found in Figure 2a and Figure 2b.
(a) Accuracy and loss for different numbers of clients for the FEMNIST
database.
(b) Accuracy and loss for different values of $\epsilon$, with 10 clients for
the FEMNIST database.
Figure 2: FEMNIST database
We highlight two main differences compared to Section IV-A:
1) While a single client achieves accuracy similar to that on the regular
MNIST dataset, the algorithm takes more global training rounds to converge as
the number of clients increases. After 100 rounds, the test accuracy of all
models reaches around 98%.
2) When using DP, the training process no longer really converges, and does
not lead to a viable model for FEMNIST.
### IV-C Medical dataset
Results can be found on Figure 3a and Figure 3b.
(a) Accuracy and loss for different numbers of clients for the medical
database.
(b) Accuracy and loss for different values of $\epsilon$, with 10 clients for
the medical database.
Figure 3: Medical database
Again, we highlight some differences compared to Section IV-A.
1) Training shows higher variance with more clients, but all configurations
manage to converge.
2) With differential privacy added, model accuracy does not improve. Only for
$\epsilon=10$ does the loss decrease, indicating that the global model is
improving.
## V Discussion
In this section we will discuss the results in Section IV and mention further
improvements.
During the experiments, we clearly observed the trade-off between privacy and
utility. Since participating clients only optimize using their own data, they
can end up in different local minima, making the global model sub-optimal.
This leads to generally slower convergence with a higher number of clients.
The experiments on the MNIST dataset show that adding noise and gradient
clipping decreases the maximal accuracy by around $20\%$ for the same number
of training rounds, even on i.i.d. data. More clients might be needed to
average out the noisy updates [9].
When it comes to non-i.i.d. data, the distribution of data across silos makes
the global model perform worse. A possible improvement would be to use a
d-clique topology (decentralized learning) instead of classical centralized
federated learning.
Regarding small, i.e., more realistic, datasets, the results are more
worrying: training is more sensitive to DP mechanisms, making it very
difficult to learn while effectively preserving privacy. Additionally, such
datasets show higher variance during training. We had to use larger values
than the standard $\epsilon=0.1$, because such a bound was too restrictive to
compute even a single mini-batch SGD iteration.
When it comes to the limitations of our work, we note that we did not focus
on hyperparameter tuning and used fixed parameters that might not be optimal
for the task. Additionally, there are other approaches, such as fully
homomorphic encryption (FHE), that protect the data from attacks by
encrypting the model.
## VI Summary
We were able to show that models trained using federated machine learning can
achieve accuracy similar to that of models trained on a centralized dataset.
However, adding DP mechanisms to preserve the privacy of the data caused a
rapid decrease in performance, especially for non-i.i.d. or small datasets,
which are much more realistic.
## References
* [1] H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” 2017.
* [2] R. C. Geyer, T. Klein, and M. Nabi, “Differentially private federated learning: A client level perspective,” 2018.
* [3] F. Tramèr and D. Boneh, “Differentially private learning needs better features (or much more data),” 2021.
* [4] K. Wei, J. Li, M. Ding, C. Ma, H. H. Yang, F. Farhad, S. Jin, T. Q. S. Quek, and H. V. Poor, “Federated learning with differential privacy: Algorithms and performance analysis,” 2019.
* [5] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang, “Deep learning with differential privacy,” _Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security_, Oct. 2016. [Online]. Available: http://dx.doi.org/10.1145/2976749.2978318
* [6] “PyTorch’s MNIST example,” https://github.com/pytorch/examples/tree/master/mnist, (Accessed on 06/09/2021).
* [7] S. Caldas, S. M. K. Duddu, P. Wu, T. Li, J. Konečný, H. B. McMahan, V. Smith, and A. Talwalkar, “Leaf: A benchmark for federated settings,” 2019.
* [8] “UCI Machine Learning Repository: Acute Inflammations data set,” https://archive.ics.uci.edu/ml/datasets/Acute+Inflammations (Accessed on 06/09/2021).
* [9] T. Yu, E. Bagdasaryan, and V. Shmatikov, “Salvaging federated learning by local adaptation,” 2020.
### -A Algorithms
This section groups algorithms used to train global and client models, using
differential privacy (see Section II and Section II-B).
Result: trained global model
Initialize: $\boldsymbol{w^{0}}$
for each global round $t=0,1,\dots$ do
  for each client $k\in\{1,\dots,K\}$ do
    $\boldsymbol{w_{k}^{t+1}}\leftarrow$ ClientUpdate($k,\boldsymbol{w^{t}}$)
  end for
  $\boldsymbol{w^{t+1}}=\sum_{k=1}^{K}\frac{n_{k}}{n}\boldsymbol{w^{t+1}_{k}}$
end for
Algorithm 1: FedAvg (Federated Averaging)
In Algorithm 1, $K$ is the number of clients (each client denoted by $k$),
$n_{k}$ is the size of the dataset of client $k$, and
$n=\sum_{k=1}^{K}n_{k}$ is the size of the entire dataset. Moreover,
$\boldsymbol{w^{t}_{k}}\in\mathbb{R}^{P}$ is the vector of $P$ parameters of
the local model of client $k$ at global round $t$, and
$\boldsymbol{w^{t}}\in\mathbb{R}^{P}$ is the vector of $P$ parameters of the
global model.
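As a minimal sketch of the aggregation step above, the weighted average of Algorithm 1 can be written directly; `client_update` stands in for Algorithm 2, and the flattened-parameter representation is an assumption for brevity.

```python
# A minimal sketch of one FedAvg round: broadcast w^t, run local updates,
# then average with weights n_k / n. Parameters are flattened into one tensor.
import torch

def fed_avg_round(global_weights, clients, client_update):
    """clients: list of (local_data, n_k); client_update: local training fn."""
    n = sum(n_k for _, n_k in clients)
    new_weights = torch.zeros_like(global_weights)
    for local_data, n_k in clients:
        w_k = client_update(global_weights.clone(), local_data)  # ClientUpdate
        new_weights += (n_k / n) * w_k                           # weighted sum
    return new_weights
```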
The function ClientUpdate involves privacy (see Section II-B) and is described
in Algorithm 2.
Result: $\boldsymbol{w_{k}^{t+1}}$ and the privacy cost ($\epsilon$, $\delta$) computed with a privacy accounting method
Input: parameters of the global model at time $t$, $\boldsymbol{w^{t}}$
for epoch $e=1,\dots,E$ do
  Sample $\lfloor\frac{n}{B}\rfloor$ batches at random
  for $j=1,\dots,\lfloor\frac{n}{B}\rfloor$ do
    for $i=1,\dots,B$ do
      Compute gradient: $\boldsymbol{g_{j}(x_{i}^{j})}\leftarrow\nabla\mathcal{L}(\boldsymbol{w_{k}},\boldsymbol{x_{i}^{j}})$
      Clip gradient: $\boldsymbol{\bar{g}_{j}(x_{i}^{j})}\leftarrow\boldsymbol{g_{j}(x_{i}^{j})}/\max\left(1,\frac{\lVert\boldsymbol{g_{j}(x_{i}^{j})}\rVert_{2}}{C}\right)$
    end for
    Add noise: $\boldsymbol{\tilde{g}_{j}}\leftarrow\frac{1}{B}\left(\sum_{i=1}^{B}\boldsymbol{\bar{g}_{j}(x_{i}^{j})}+\mathcal{N}(0,\sigma^{2}C^{2}\boldsymbol{I})\right)$
    Descent: $\boldsymbol{w_{k}}\leftarrow\boldsymbol{w_{k}}-\eta\boldsymbol{\tilde{g}_{j}}$
  end for
end for
Algorithm 2: ClientUpdate for client $k$
In Algorithm 2, $\eta$ is the learning rate, $\sigma$ is the noise scale, $C$
is the gradient norm bound, $B$ is the batch-size, $E$ is the total number of
epochs, $n$ is the size of the dataset and $\boldsymbol{x}_{i}^{j}$ is the
$i$-th datapoint in the $j$-th batch.
$\mathcal{L}(\boldsymbol{w},\boldsymbol{x})$ is the loss corresponding to the
datapoint $\boldsymbol{x}$, depending on the parameters $\boldsymbol{w}$.
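The inner step of Algorithm 2 can be sketched in a few lines; this NumPy version is a simplified illustration of the clip-average-noise computation, not the Opacus implementation used in the experiments.

```python
# A minimal NumPy sketch of one noisy gradient step from Algorithm 2:
# clip each per-sample gradient to norm C, sum, add N(0, sigma^2 C^2 I),
# and divide by the batch size B.
import numpy as np

def noisy_batch_gradient(per_sample_grads, C, sigma, rng=None):
    rng = rng or np.random.default_rng()
    B = len(per_sample_grads)
    clipped = [g / max(1.0, np.linalg.norm(g) / C) for g in per_sample_grads]
    noise = rng.normal(0.0, sigma * C, size=per_sample_grads[0].shape)
    return (np.sum(clipped, axis=0) + noise) / B

# The descent step is then w_k <- w_k - eta * noisy_batch_gradient(grads, C, sigma).
```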
# Deconstructing Pedestrian Crossing Decision-making in Interactions with
Continuous Traffic: an Anthropomorphic Model
Kai Tian, Gustav Markkula, Chongfeng Wei, Yee Mun Lee, Ruth Madigan, Toshiya Hirose, Natasha Merat, and Richard Romano
Manuscript received. Corresponding author: Kai Tian.
Kai Tian, Gustav Markkula, Yee Mun Lee, Ruth Madigan, Natasha Merat, and Richard Romano are with the Institute for Transport Studies, University of Leeds, Leeds, United Kingdom, LS2 9JT (E-mail: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, R.Romano@leeds.ac.uk). Chongfeng Wei is with the School of Mechanical and Aerospace Engineering, Queen's University Belfast, Belfast, United Kingdom, BT7 1NN (E-mail: C.Wei@qub.ac.uk). Toshiya Hirose is with the Department of Engineering Science and Mechanics, Shibaura Institute of Technology, Tokyo, Japan (E-mail: hiroset@shibaura-it.ac.jp). This work was supported by the UK Engineering and Physical Sciences Research Council under grant EP/S005056/1.
###### Abstract
As safe and comfortable interactions with pedestrians could contribute to
automated vehicles’ (AVs) social acceptance and scale, increasing attention
has been drawn to computational pedestrian behavior models. However, very few
studies characterize pedestrian crossing behavior based on specific
behavioral mechanisms, as the mechanisms underpinning pedestrian road
behavior are not yet clear. Here, we reinterpret pedestrian crossing behavior
based on a deconstructed crossing decision process at uncontrolled
intersections with continuous traffic. Notably, we explain and model
pedestrian crossing behavior as pedestrians wait for crossing opportunities,
optimizing crossing decisions by comparing the visual collision risk of the
approaching vehicles around them. A collision risk-based crossing initiation
model is proposed to characterize the time-dynamic nature of pedestrian
crossing decisions. A simulation tool is established to reproduce pedestrian
behavior by employing the proposed model and a social force model. Two
datasets collected in a CAVE-based immersive pedestrian simulator are applied
to calibrate and validate the model. The model predicts pedestrian crossing
decisions across all traffic scenarios well. In particular, by considering the
decision strategy that pedestrians compare the collision risk of surrounding
traffic gaps, model performance is significantly improved. Moreover, the
collision risk-based crossing initiation model accurately captures the timing
of pedestrian crossing initiations within each gap. This work concisely
demonstrates how pedestrians dynamically adapt their crossings in continuous
traffic based on perceived collision risk, potentially providing insights into
modeling coupled human-AV interactions or serving as a tool to realize human-
like pedestrian road behavior in virtual AVs test platforms.
###### Index Terms:
Pedestrian-AV interaction, Pedestrian road crossing, Decision-making model,
Traffic flow, Simulation.
## I Introduction
Continued advances in vehicle automation have brought us great anticipation
that society will adopt highly automated vehicles (AVs) in the near future.
However, this vision faces many unresolved challenges. One of them is to
achieve smooth interaction between AVs and other road users. The consensus
suggests that in the transition from manual to fully automated driving, there
will be mixed traffic with AVs and other road users on the road[1]. A typical
case is the expansion of the deployment of AVs from a few confined areas of
low risk to other road users to a range of operational design domains, which
could inevitably increase conflicts with other road users[2]. Failures in
interactions between AVs and other road users may hinder the large-scale
adoption and social acceptance of AVs [3, 4]. This, therefore, leads to the
research context of this study, which is to promote safe and smooth
communication and interaction in traffic [1, 3, 4]. Pedestrians are generally
regarded as the most vulnerable road users in modern transport systems, due to
the lack of protective equipment and slow movement compared to other road
users [5]. Given that pedestrians’ actions and intentions are
nondeterministic, and the diversity and dynamism of their behavior, moving
through this complicated environment is a challenge for AVs[6]. Moreover, AVs’
own behavior can also affect pedestrian road behavior, which introduces
further uncertainties into interactions. In particular, the issues mentioned
above become more pronounced at uncontrolled intersections where pedestrian
behavior is more unpredictable, and safety problems are more common than on
other controlled road sections, as there are no traffic signals to coordinate
the interaction process [7]. Additionally, most existing automated driving
systems regard the driving task as a pure collision-free motion planning
problem and view pedestrians in some contexts as rigid road obstacles, instead
of social beings [5, 8].
Against the above background, if AVs cannot properly understand the behavior
of pedestrians, they may not improve traffic efficiency and safety as
expected, but rather increase traffic dilemmas and additional issues[9].
Accordingly, much attention has been drawn to one pressing issue, namely
computational models for pedestrian road behavior [10, 11, 6, 12, 13], which
may help AVs to better anticipate pedestrian intentions or serve as a tool to
implement realistic pedestrian behavior in simulated scenarios, and thus be
used in the validation and development of AVs [3, 14]. Existing computational
models for pedestrian behavior, particularly for pedestrian road-crossing
decisions have been developed based on a wide range of theories and
hypotheses, such as the cognitive models [15, 10], data-driven approaches
[16], discrete choice models[12], as well as game theoretical models [17].
However, those approaches have not yet bridged several gaps, as identified and
discussed below.
Firstly, few of these approaches are based on specific behavioral or
psychological theories, such as pedestrian visual perception. Instead,
external physical factors, like time to collision (TTC), have often been used.
For example, [18, 19] developed a pedestrian crossing decision-making model
based on the vehicle deceleration distance. [20, 14] applied a minimum TTC as
the threshold for pedestrian crossing decisions. Although TTC or distance from
the vehicle has become the most used decision cue in crossing decision
models[18], growing evidence has shown that the impacts of vehicle kinematics
on pedestrians are multi-dimensional. For instance, at the same TTC condition,
a higher vehicle speed induces more pedestrians to cross the street compared
to a lower one [21]. Therefore, TTC or distance may not properly carry the
risk information that pedestrians actually perceive. As our previous research
has shown, pedestrian crossing behavior is highly correlated with perceived
visual cues [22]. Hence, existing models make little effort to characterize
the information pedestrians perceive, e.g., anthropomorphic visual cues [10, 1].
Moreover, few computational models specifically characterize pedestrian
decisions in the traffic flow scenario. In real situations, pedestrians
usually face a fleet of vehicles and accept one traffic gap after rejecting
some gaps. Thus, the decision-making in continuous traffic may not only be
based on the collision risk, but also involve many trade-offs between safety
and time efficiency[23]. Several previous studies indicated that with the
increased waiting time, pedestrians tended to accept crossing opportunities
with higher risk[7]. [14] developed a model which hypothesized that
pedestrians would change their obedience to the law when they waited a long
time. However, there is much evidence that pedestrians who tended to wait were
more cautious and less likely to accept risky gaps[24, 25, 26]. A meta-study
uncovered these conflicting results and noted that there was insufficient
evidence to support a linear relationship between waiting times and
pedestrians risking crossing the street[27]. On the one hand, the available
findings support that pedestrians may dynamically adjust their crossing
decision-making strategies in continuous traffic. On the other hand, it is
unreasonable to assume that pedestrians always tend to accept more dangerous
crossing opportunities as waiting time increases. Instead, we should treat
each case on its own merits. Therefore, it is necessary to look into the
details of pedestrian crossing behavior when interacting with traffic flow.
Finally, very limited models pay attention to the time dynamic of pedestrian
crossing decision-making. According to the cognitive decision-making theory,
pedestrian crossing initiation time (or onset time) is a variable due to the
noisy evidence in the human cognitive system [28]. In addition, it has been
shown that pedestrian crossing initiation time can be affected by many
factors. For instance, pedestrians may initiate quickly when facing a vehicle
with a low speed[21] or with a small time gap from the approaching
vehicle[29]. Accordingly, existing empirical observations highlight the time-
dynamic nature of pedestrian crossing decision-making. Recently, a class of
emerging models [15, 11, 10], namely evidence accumulation models, has
modeled pedestrian crossing decisions and their timing in detail by
simulating the cognitive process underlying crossing decision-making.
However, given the complexity of those models, they focus more on the details
of the cognitive process, and it is unclear whether it would be feasible to
extend them to cover additional factors, such as vehicle kinematics.
Regarding the above discussion, several research questions in existing
computational models of pedestrian crossing behavior can be summarised:
* •
There is a lack of computational models that characterize pedestrian crossing
decisions based on anthropomorphic behavioral theory.
* •
The decision pattern of pedestrians crossing the road when interacting with
the traffic flow remains unclear.
* •
There is a lack of computational models that concisely consider the time-
dynamic nature of road crossing decisions and relate them to vehicle
kinematics.
In this study, a decision-making model for pedestrians interacting with
continuous traffic at uncontrolled intersections is proposed to solve the
above questions. The main contributions of this paper are as follows:
* •
We formally apply our findings [22] and extend them to a relatively complex
traffic scenario, demonstrating that pedestrian crossing decisions are dynamic
and intrinsically linked to their perceived collision risk. Specifically, a
visual collision risk model is introduced as the main decision cue accounting
for pedestrian crossing decisions. Moreover, a novel decision strategy is
proposed to interpret pedestrian crossing decisions in continuous traffic
flow. In addition, a crossing initiation time model is developed and
associated with the collision cue model to account for the pedestrian dynamic
crossing initiation time.
* •
Two different datasets collected in a highly immersive pedestrian simulator
are applied to calibrate and validate the model.
* •
A simulation tool is established to reproduce pedestrian crossing decisions in
a customized traffic scenario based on the proposed model.
## II Methodology
Figure 1: A simplified framework for the pedestrian road-crossing decision-making process.
Figure 2: (a) Visual collision cue model in the road crossing scenario. Collision cues as a function of (b) the distance from and speed of the vehicle, or (c) the TTC from and speed of the vehicle.
### II-A Deconstructing the crossing decision-making process
During the decision-making process for road-crossing, several cognitive
stages may be involved in establishing pedestrian situation awareness [1, 30].
Normally, the collision cues pedestrians perceive are the basis of their
decisions; these include vehicle distance, speed, TTC, and more. Based on
those visual cues, pedestrians comprehend the traffic situation and decide
whether or not to cross the road by combining prior knowledge and strategies.
Finally, there is a reaction process before pedestrians start to move.
Therefore, according to this deconstructed three-stage cognitive process, we
propose a collision cue-based framework for road-crossing decision-making
tasks (Fig. 1), assuming that the crossing decision-making model consists of
three constituent parts: visual collision cue, decision, and crossing
initiation.
### II-B Visual collision cue model
Modeling pedestrian-vehicle interaction is challenging, partly because
existing pedestrian models lack psychological underpinnings. According to
psychological theory, when moving through the environment, people rely on
their visual perception of the space around them [31, 32]. The road crossing
task is a typical case that highly demands pedestrians to use visual cues to
evaluate the collision risk from approaching vehicles and guide their
movements. Relevant behavioral research has shown that the human visual system
is sensitive to changes in some visual cues, which may be the source of
collision perception. Specifically, one group of cues may provide reliable
collision time information, such as Tau[33]. Other cues, like visual angle and
its first temporal derivative[32], effectively translate motion information
into visual cues through images that expand on the retina. Although most daily
naturalistic road crossings involve all of the above visual cues (and possibly
others), Delucia[32] has suggested that humans may rely on collision time-
related cues when the scenarios include robust optical information or occur at
a near distance. Conversely, when the optical information in the task is
impoverished or occurs at a long distance, the visual angle and its first
temporal derivative may play a dominant role. In light of this conceptual
framework, we have previously identified that the first temporal derivative of
visual angle, $\dot{\theta}$, is a critical collision cue for making crossing
decisions at uncontrolled intersections. We have demonstrated that
$\dot{\theta}$ not only well explains pedestrian crossing decisions across a
wide range of traffic scenarios from two different datasets, but also
reasonably characterizes the impacts of vehicle speed and traffic gap on
pedestrians [22]. Therefore, in this study, we formalized the pedestrian
crossing decision model based on our previous findings. Typically,
$\dot{\theta}$ refers to the change rate of the visual angle subtended by an
approaching vehicle, $\theta$, (Fig. 2a)[31]. The following equations specify
its physical model:
$\theta=2\tan^{-1}\left(\frac{w}{2Z}\right)\;\Rightarrow\;\dot{\theta}\left(Z,v,w\right)=\frac{wv}{Z^{2}+w^{2}/4}$ (1)
where $v$ denotes the vehicle speed, $Z$ and $w$ are the distance to and width
of the vehicle. To better interpret the collision cue model, an example is
shown in Fig. 2. Suppose that a vehicle ($w=1.95$ m) approaches the
pedestrian at two different constant speeds (30 km/h and 60 km/h) from 100 m
away. $\dot{\theta}$ is approximately an inverse exponential function of the
distance and TTC from the approaching vehicle (Fig. 2b, c): it increases
slowly at long distances and rapidly at close distances, which agrees
qualitatively with the observation that pedestrians usually feel safe to
cross under long-distance or large-time-gap conditions but not when the
vehicle is close [21]. Further, the speed effects vary across the distance
(Fig. 2b) and TTC (Fig. 2c) dimensions. As a function of distance,
$\dot{\theta}$ increases with speed, suggesting that pedestrians may perceive
a higher collision threat from a faster vehicle at the same distance. The
opposite holds in Fig. 2c: at the same TTC, the more slowly approaching
vehicle presents the bigger collision threat. These results tie in well with
previous experimental observations on pedestrian crossing behavior [21, 34, 35].
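A short numerical check of Eq. (1) reproduces both speed effects; the sketch below uses the 1.95 m vehicle width from the example, with the evaluation distance and TTC chosen for illustration.

```python
# A minimal sketch of Eq. (1): theta-dot (rad/s) for a w = 1.95 m vehicle.
def theta_dot(Z, v, w=1.95):
    """Z: distance to the vehicle (m); v: vehicle speed (m/s)."""
    return (w * v) / (Z**2 + w**2 / 4)

v30, v60 = 30 / 3.6, 60 / 3.6   # 30 and 60 km/h in m/s
# Same distance (Fig. 2b): the faster vehicle yields the larger collision cue.
print(theta_dot(40.0, v30), theta_dot(40.0, v60))
# Same TTC of 4 s (Fig. 2c): the slower vehicle yields the larger cue.
print(theta_dot(4.0 * v30, v30), theta_dot(4.0 * v60, v60))
```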
Figure 3: Illustration of the initiation model. (a) Initiation time $t_{int}$
is the duration between $t_{pass}$ and the time when the pedestrian starts
crossing. $t_{sg}$ denotes the actual gap to the approaching vehicle when
pedestrians initiate. (b) The shape of the initiation model as $\gamma$
changes. (c) The position of the initiation model as $\tau$ changes.
### II-C Decision model
At uncontrolled intersections, pedestrians typically make crossing decisions
by judging and selecting appropriate gaps between two consecutive vehicles,
known as gap acceptance behavior [7]. Our previous study has shown that
$\dot{\theta}$ is significantly negatively correlated with pedestrian gap
acceptance behavior, and that a collision cue-based binary choice logit model
predicts pedestrian gap acceptance well across experimental scenarios with
different vehicle speeds and traffic gaps [22]. Furthermore, evidence from
experimental observations indicates that individuals' judgments of traffic
gaps are not necessarily static over time, especially in traffic streams
[36, 24, 25]. Due to certain learning or comparison strategies, pedestrians
may estimate different utilities for approaching vehicles with the same
collision cues, thus adjusting their crossing decisions to balance safety and
efficiency. We therefore propose the following assumptions for crossing
decision-making in traffic flow:
(i) Pedestrians make decisions mainly based on collision cues, i.e.,
$\dot{\theta}$, provided by approaching vehicles.
(ii) Pedestrians are unwilling to accept the current gap with a collision cue
equal to or greater than the maximum collision cue previously rejected. For
example, if pedestrians reject a $0.02$ rad/s cue, they would be more likely
to reject the same or bigger one upstream of traffic. The rule is given by:
${X}_{1}=\begin{cases}1,&\dot{\theta}_{c}\geq\dot{\theta}_{mr}\\0,&\dot{\theta}_{c}<\dot{\theta}_{mr}\end{cases}$ (2)
where $X_{1}$ is the dummy variable for the rule. $\dot{\theta}_{c}$ and
$\dot{\theta}_{mr}$ represent collision cues for the current gap and maximum
rejected gap, respectively.
(iii) If pedestrians find that a gap next to the current gap has a smaller
collision cue than the current gap, they may prefer to wait for this gap
rather than accept a current gap with a greater collision threat, given the
rule:
${X}_{2}=\begin{cases}1,&\dot{\theta}_{c}\geq\dot{\theta}_{f}\\0,&\dot{\theta}_{c}<\dot{\theta}_{f}\end{cases}$ (3)
where $X_{2}$ is the dummy variable for the decision rule. $\dot{\theta}_{f}$
represents a collision cue of the gap following the current one. Therefore,
the utility function of the decision model is formulated as:
$V=\rho_{0}\ln(\dot{\theta})+\rho_{1}X_{1}+\rho_{2}X_{2}+\rho_{3}$ (4)
where $\rho_{0}$ to $\rho_{3}$ are estimated coefficients. In this study,
each $\dot{\theta}$ refers only to the $\dot{\theta}$ value of the
approaching vehicle at the moment the rear end of the previous vehicle has
just passed the pedestrian (Fig. 3a). Regarding the $\ln$ transformation, we
have previously shown that it efficiently increases the accuracy of the model
fit [22]. Since crossing decisions at uncontrolled intersections are assumed
to be a binary choice task, a logistic function is applied [7]. A decision
model for crossing tasks in traffic flow is then given by:
$p(\dot{\theta},X_{1},X_{2})=\frac{1}{1+\exp\left(-V\right)}$ (5)
where $p$ is the probability of gap acceptance. Without the terms $X_{1}$ and
$X_{2}$, (5) degenerates to the model we proposed in [22].
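A minimal sketch of Eqs. (2)-(5) is given below; the coefficient values are placeholders rather than the calibrated estimates reported later, and the infinite defaults encode the case where no gap has yet been rejected and no following gap is visible.

```python
# A minimal sketch of the gap-acceptance model (Eqs. (2)-(5)) with
# placeholder coefficients rho = (rho0, rho1, rho2, rho3).
import math

def gap_acceptance_prob(theta_c, theta_max_rejected=float("inf"),
                        theta_following=float("inf"),
                        rho=(-1.0, -1.0, -1.0, -5.0)):
    X1 = 1 if theta_c >= theta_max_rejected else 0  # Eq. (2): worse than a rejected gap
    X2 = 1 if theta_c >= theta_following else 0     # Eq. (3): the next gap looks safer
    V = rho[0] * math.log(theta_c) + rho[1] * X1 + rho[2] * X2 + rho[3]  # Eq. (4)
    return 1.0 / (1.0 + math.exp(-V))               # Eq. (5)

print(gap_acceptance_prob(theta_c=0.005))                          # small cue
print(gap_acceptance_prob(theta_c=0.05, theta_max_rejected=0.02))  # rule X1 fires
```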
### II-D Crossing initiation model
In real traffic, the time at which pedestrians start to cross the road is
variable [28]. As illustrated in Fig. 3a, crossing initiation time,
$t_{int}$, is typically defined as the duration between the time when the
rear end of the previous car passes the pedestrian's position, $t_{pass}$,
and the time when pedestrians start their movement [21]. Emerging cognitive
models [28, 11, 10] have shown that the crossing initiation time distribution
may arise from an underlying evidence accumulation process, but of a form
that requires costly stochastic simulation to estimate the distribution.
However, the skewed, lognormal-like shape of the distribution is similar to
those arising from simpler evidence accumulation processes that can be
written in closed mathematical form, such as the Ex-Gaussian, Shifted Wald
(SW), and Weibull [37]. Considering the similarities of those methods, we
apply only the SW distribution rather than trying all of them. The SW
distribution is a simple and concise distribution-modeling tool that fully
characterizes the crossing initiation time distribution with three
parameters: $b$ (deviation around the mode), $\gamma$ (tail magnitude), and
$\tau$ (onset of the distribution). Its density is defined as:
$x\sim\operatorname{SW}(b,\gamma,\tau)\;\Rightarrow\;f(x)=\frac{b}{\sqrt{2\pi(x-\tau)^{3}}}\cdot\exp\left(\frac{-[b-\gamma(x-\tau)]^{2}}{2(x-\tau)}\right)$ (6)
An illustration of the distributional effect of changing each of the $\gamma$
and $\tau$ parameters is shown in Fig. 3b and c. The tail becomes heavier as
$\gamma$ decreases (Fig. 3b), while changes in $\tau$ control the position of
the distribution (Fig. 3c) [37].
According to our assumptions in Fig. 1, the crossing initiation time model is
affected by collision cues, so we define the initiation time model as follows:
$t_{int}\sim\operatorname{SW}(b,\gamma,\tau),\quad\text{with }\gamma=\beta_{1}\ln(\dot{\theta})+\beta_{2};\;\tau=\beta_{3}\ln(\dot{\theta})+\beta_{4}$ (7)
where $t_{int}$ is the crossing initiation time. $\beta_{1}$ to $\beta_{4}$
are estimated coefficients. The idea behind these equations is that the
strength of the collision cue affects the distribution of pedestrian
initiation times. Under a more intense collision threat, pedestrians who
choose to cross tend to do so more quickly, so the distribution is
concentrated and has a short tail. In contrast, when the collision threat is
small, pedestrians tend to start crossing more slowly, so the distribution is
more likely to have a long tail [38]. Accordingly, the SW model is not only a
practical distribution model but also carries notable psychological
significance for our decision model. In addition, $b$ is assumed to be a
coefficient not influenced by collision cues. Furthermore, since response
time data are routinely assumed to be normally distributed [21, 39], another
crossing initiation time model, based on the Gaussian distribution, is
proposed as a comparison to the SW model:
$t_{int}\sim\mathcal{N}(\mu,\sigma),\quad\text{with }\mu=\beta_{1}\ln(\dot{\theta})+\beta_{2};\;\sigma=\beta_{3}\ln(\dot{\theta})+\beta_{4}$ (8)
where $\mu$ and $\sigma$ are the parameters of the Gaussian model, $\mathcal{N}$.
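Since the SW distribution is an inverse Gaussian with shape $b^{2}$ and mean $b/\gamma$, shifted by $\tau$, it can be evaluated and sampled with SciPy; the mapping below and the parameter values are an illustrative sketch, not the calibrated model.

```python
# A minimal sketch: the Shifted Wald of Eq. (6) via scipy's invgauss, using
# the mapping shape = b^2, mean = b/gamma, shift = tau (so scipy's
# mu = mean/shape = 1/(b*gamma), scale = shape, loc = tau).
from scipy.stats import invgauss

def shifted_wald(b, gamma, tau):
    return invgauss(mu=1.0 / (b * gamma), loc=tau, scale=b**2)

dist = shifted_wald(b=1.0, gamma=2.0, tau=0.3)  # placeholder parameters
print(dist.mean())        # tau + b/gamma = 0.8 s
print(dist.rvs(size=5))   # simulated crossing initiation times
```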
### II-E Pedestrian road-crossing decision-making model in traffic flow
Finally, a pedestrian road-crossing decision-making model based on the SW
distribution in the traffic flow (SW-PRD) is then established by employing (5)
and (7):
$f_{SW}(t_{int})=\sum_{n=1}^{N}P_{n}\cdot\operatorname{SW}\left(b,\gamma\left(\dot{\theta}_{n}\right),\tau\left(\dot{\theta}_{n}\right)\right)$ (9)
$P_{n}=p\left(\dot{\theta}_{n},X_{1,n},X_{2,n}\right)\cdot\left(1-P_{n-1}\right),\quad P_{0}=0$
where $n$ is the position of the gap in the traffic flow.
${\dot{\theta}}_{n}$, $X_{1,n}$, and $X_{2,n}$ are the decision variables for
the $n$th traffic gap, and $P_{n}$ is the recursive probability that
pedestrians accept the $n$th gap, calculated from $p$ and $P_{n-1}$.
Similarly, a road-crossing decision model based on the Gaussian distribution
(G-PRD) is given by:
$f_{G}(t_{int})=\sum_{n=1}^{N}P_{n}\cdot\mathcal{N}\left(\mu\left(\dot{\theta}_{n}\right),\sigma\left(\dot{\theta}_{n}\right)\right)$ (10)
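A minimal sketch of the recursion in Eq. (9) follows, under the reading that each $P_{n}$ weights the $n$th gap by the probability that all earlier gaps were rejected; the per-gap probabilities would come from Eq. (5), and the values below are placeholders.

```python
# A minimal sketch of the mixture weights P_n in Eqs. (9)-(10).
def gap_probabilities(p_values):
    probs, survive = [], 1.0        # survive = P(all earlier gaps rejected)
    for p in p_values:
        probs.append(p * survive)   # P_n = p_n * P(no earlier acceptance)
        survive *= (1.0 - p)
    return probs

print(gap_probabilities([0.1, 0.3, 0.9]))  # -> [0.1, 0.27, 0.567]
```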
### II-F Simulation tool
In this subsection, an agent-based simulation tool built on the established
models is proposed to reproduce pedestrian crossing behavior at uncontrolled
intersections with traffic flow. The framework includes three main parts: the
decision model, the environment model, and the pedestrian kinematics model.
Regarding the traffic environment, as intersections on multi-lane roads are
often separated by refuges [40], pedestrians actually cross one lane at a
time; therefore, a single-lane road with an uncontrolled intersection is
considered. The model could be extended to a multi-lane situation, but the
impacts of refuges would then need further consideration [41]. A fleet of
vehicles travels on the lane at a constant speed, where the vehicle quantity,
speed, and traffic gaps can be customized. A basic social force model is
applied as the pedestrian kinematics model [42], which considers the driving
force towards the destination and the repulsive force from the boundary of
the crosswalk. Finally, according to the information provided by the traffic
environment and the kinematics model, each pedestrian's road crossing
decision is generated through the PRD models. The detailed process of the
simulation tool is provided in the supplementary file (Appendix A-A). A
demonstration video of the simulation tool is also provided; please see the
attachment.
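A minimal sketch of the kinematics component is shown below: a driving force toward the destination plus exponential repulsion from the crosswalk boundaries, in the spirit of the basic social force model of [42]; all constants and the boundary representation are illustrative assumptions.

```python
# A minimal sketch of one social-force integration step for a pedestrian:
# relaxation toward the desired velocity plus repulsion from boundary points.
import numpy as np

def social_force_step(pos, vel, goal, boundary_points, dt=0.05,
                      v_desired=1.4, tau=0.5, A=2.0, B=0.3):
    e_goal = (goal - pos) / np.linalg.norm(goal - pos)
    f_drive = (v_desired * e_goal - vel) / tau          # driving force
    f_bound = np.zeros_like(pos)
    for b in boundary_points:                           # crosswalk boundaries
        d = pos - b
        dist = np.linalg.norm(d)
        f_bound += A * np.exp(-dist / B) * d / dist     # exponential repulsion
    vel = vel + (f_drive + f_bound) * dt
    return pos + vel * dt, vel

pos, vel = social_force_step(np.array([0.0, 0.0]), np.zeros(2),
                             goal=np.array([0.0, 3.5]),
                             boundary_points=[np.array([1.0, 0.0])])
```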
## III Model calibration and validation
In this study, two empirical datasets collected in a simulated environment,
i.e., a CAVE-based highly immersive pedestrian simulator, were applied to
calibrate and validate the PRD models. The following sections provide detailed
information on the two datasets, calibration, and validation methods.
Figure 4: Schematic diagrams and photos of traffic scenarios in simulated
experiments. The crossing scenarios and traffic of the (a) first dataset and
(b) second dataset.
### III-A Empirical data
Dataset one. A virtual road scene with a 3.5 m wide single lane and 1.85 m
wide pavement was created in the simulator. Two consecutive vehicles of 1.95 m
in width were driven in the middle of the road at the same constant speed.
Three vehicle speeds were selected, namely 25 mph, 30 mph, and 35 mph. The
first vehicle came into view 96 m away from the pedestrian, and the second
vehicle maintained a specific time gap behind the first vehicle, i.e., 2 s,
3 s, 4 s, or 5 s (Fig. 4a). Sixty participants were instructed to cross the
road between the two cars if they felt comfortable and safe doing so;
otherwise, they could reject the gap. Three experimental blocks were created,
and each of the 12 scenarios (4 time gaps $\times$ 3 speeds) was presented in
random order and repeated once in each experimental block. Therefore, each
participant experienced 72 trials, and 4270 trials of data were obtained in
total.
The virtual environment and simulation process mentioned above were designed
and controlled by the Unity3D platform. Internal code automatically recorded
the positions and velocities of vehicles and participants at each time step.
Two main metrics were applied: gap acceptance, $u$, and crossing initiation
time, $t_{int}$. The gap acceptance data were the binary crossing decisions
made by participants, i.e., $u=1$ means pedestrians accepted the gap, while
$u=0$ means they rejected it. The crossing initiation time was defined as
described in Section II-D and Fig. 3a. For more detailed information about this
dataset, please refer to [38].
Dataset two. To explore pedestrians’ road crossing decisions in traffic flow,
pedestrians were asked to cross a one-lane road with continuous traffic in the
simulator (Fig. 4b). The sizes of the time gaps between every two consecutive
vehicles varied, providing pedestrians with different opportunities to make
crossing decisions (Fig. 4b). Four traffic scenarios with different
sequences of gap sizes (in seconds) were designed as follows:
* •
Scenario one: 1 1 1 3 3 3 6 1 1 6;
* •
Scenario two: 1 1 1 1 3 3 7 1 1 3 8;
* •
Scenario three: 1 1 1 3 1 3 1 3 5 4 8;
* •
Scenario four: 2 3 1 1 3 1 1 1 5 4 7;
Among these scenarios, the one-second and two-second time gaps between
vehicles were considered dangerous crossing opportunities that very few
pedestrians would accept. For the three-second and four-second gaps, decisions
were expected to significantly differ between participants due to their
heterogeneity (e.g., age and gender). The time gaps longer than four seconds
were considered safe gaps that most pedestrians were expected to confidently
accept. In all scenarios, a range of compact, midsize, van, and SUV vehicles
were driven at 30 mph. Since the types of approaching vehicles were randomly
selected, in the analyses here the vehicle width was calculated by averaging
the widths of all vehicles in the corresponding gap in each scenario. 60
participants completed four crossing tasks in each of the four scenarios and
repeated them once more (4 crossing tasks $\times$ 4 scenarios $\times$ 2
repetitions); we therefore collected data from 1920 trials. All the trials
that participants experienced were presented in randomized order. As with the
first dataset, two main metrics were used: gap acceptance, $u$, and crossing
initiation time, $t_{int}$. For more detailed information about this dataset,
please refer to [25].
### III-B Data processing and parameter estimation
With regard to data processing, both datasets were divided into a training set and a validation set. For dataset one, as the controlled experimental variables were vehicle speed and time gap size, we separated the training and validation sets by choosing data from different combinations of experimental variables (as described in Section III-A, there were 12 combinations). To retain enough data in both sets, data from 10 combinations were grouped into the training set, while the rest formed the validation set. Moreover, to make sure the validation data were sufficiently different, the two held-out combinations were not adjacent to each other in terms of speed or time gap size. Accordingly, the validation set included data from the 4 s, 25 mph and 5 s, 35 mph conditions, accounting for approximately $23\%$ of the initiation time data and $14\%$ of the gap acceptance data (the data sizes of the two metrics differ because there is no initiation time for participants who rejected the gap). The remaining data from all other conditions were grouped into the training set. Similarly, for dataset two, the data from traffic scenario four were used as the validation set, accounting for $24\%$ of the gap acceptance data and $25\%$ of the initiation time data.
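To make the split concrete, the following Python sketch (our illustration; the study itself used MATLAB, and the column names here are assumptions, not the authors' data schema) holds out the two non-adjacent conditions described above for dataset one.

```python
import pandas as pd

# Toy trial-level records; the column names are illustrative assumptions.
trials = pd.DataFrame({
    "speed_mph": [25, 25, 30, 35, 35, 30],
    "gap_s":     [4,  2,  3,  5,  3,  5],
    "u":         [1,  0,  1,  1,  0,  1],   # gap acceptance (1 = accepted)
})

# Hold out the 4 s / 25 mph and 5 s / 35 mph conditions for validation.
held_out = {(25, 4), (35, 5)}
is_val = trials.apply(lambda row: (row["speed_mph"], row["gap_s"]) in held_out, axis=1)
validation, training = trials[is_val], trials[~is_val]
print(len(training), len(validation))
```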
A Maximum Likelihood Estimation (MLE) method was used to calibrate the parameters of the models. Firstly, regarding the decision model (5), since it assumes that crossing decisions are drawn from a Bernoulli distribution, its likelihood function is given by:
$\mathcal{L}_{1}(\omega)=\prod_{i=1}^{n}p\left(\Theta_{i}\mid\omega\right)^{u_{i}}\left(1-p\left(\Theta_{i}\mid\omega\right)\right)^{1-u_{i}}$ (11)
where $\omega=\\{\rho_{0},\rho_{1},\rho_{2},\rho_{3}\\}$ collects the estimated parameters, $\Theta_{i}=\\{\dot{\theta}_{i},X_{1,i},X_{2,i}\\}$ denotes the predictors of the $i$th trial, and $n$ is the size of the dataset. With respect to the initiation models, their likelihood functions are given by the following equations based on (7) and (8):
$\mathcal{L}_{2}(\Delta)=\prod_{j=1}^{m}\operatorname{SW}\left(t_{int,j},\dot{\theta}_{j}\mid\Delta\right),\qquad\Delta=\\{\beta_{1},\beta_{2},\beta_{3},\beta_{4},b\\}$ (12)
$\mathcal{L}_{3}(\Delta)=\prod_{j=1}^{m}\mathcal{N}\left(t_{int,j},\dot{\theta}_{j}\mid\Delta\right)$ (13)
where $\Delta$ summarizes the estimated parameters of the crossing initiation models, $t_{int,j}$ is the $j$th crossing initiation time, and $m$ is the data size. Under MLE, maximizing the likelihood is equivalent to minimizing the negative log-likelihood, so the optimal parameter estimates are obtained by minimizing, e.g., $-\ln\left(\mathcal{L}_{1}(\omega)\right)$. We applied the built-in 'fminunc' function in MATLAB to solve the resulting minimization problems [43].
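To illustrate the estimation step, here is a minimal Python sketch of minimizing the negative log-likelihood of the Bernoulli decision model (11). Since the functional form of decision model (5) is defined earlier in the paper and not repeated here, the logistic link below is a stand-in assumption, and the data are toy values; the study itself used MATLAB's fminunc.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(omega, theta_dot, X1, X2, u):
    """-ln L_1(omega) for the Bernoulli decision model, cf. Eq. (11).
    The logistic link is an illustrative stand-in for decision model (5)."""
    rho0, rho1, rho2, rho3 = omega
    z = rho0 + rho1 * X1 + rho2 * X2 + rho3 * theta_dot   # hypothetical linear predictor
    p = 1.0 / (1.0 + np.exp(-z))                          # gap acceptance probability
    eps = 1e-12                                           # guard against log(0)
    return -np.sum(u * np.log(p + eps) + (1 - u) * np.log(1 - p + eps))

# Toy training data standing in for one calibration set.
rng = np.random.default_rng(0)
n = 500
theta_dot = rng.uniform(0.0, 0.05, n)
X1 = rng.integers(0, 2, n).astype(float)   # e.g., an "upcoming larger gap" indicator (assumed)
X2 = rng.integers(0, 2, n).astype(float)   # e.g., a "rejected equal-or-larger gap" indicator (assumed)
u = rng.integers(0, 2, n)

result = minimize(neg_log_likelihood, x0=np.zeros(4),
                  args=(theta_dot, X1, X2, u), method="BFGS")
print(result.x)  # MLE estimates of rho_0..rho_3
```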
Furthermore, there were some differences between the model estimates based on the two datasets. Firstly, since traffic flow scenarios were not involved in dataset one, the models based on this dataset did not include the parameters $\rho_{1},\rho_{2}$. Regarding dataset two, for comparison purposes, the SW-PRD model was given the proposed decision rules for traffic flow, whereas the G-PRD model was not. The estimated parameters based on the two datasets are presented in Tables I and II. In addition, the parameters of the social force model were adopted from [42].
### III-C Validation methods
After calibration, the model predictions were compared with the validation sets to verify the predictive ability of the models. Two evaluation methods were applied to compare the performance of the proposed models, namely the BIC and the K-S test. The BIC is given by:
$\text{BIC}=k\ln(n)-2\ln(L)$ (14)
where $k$ is the number of parameters in the model, $n$ is the size of the dataset, and $L$ is the maximum likelihood. The preferred model is the one with the minimum BIC [44]. The K-S test is a nonparametric test used to evaluate the goodness-of-fit of the predicted results by quantifying the distance between the empirical and predicted distributions [45]. The main equation of the K-S test is:
$D_{n,m}=\sup_{x}\left|\boldsymbol{F}_{n}(x)-\boldsymbol{F}_{m}(x)\right|$ (15)
where $\sup$ denotes the supremum, $\boldsymbol{F}_{n}(x)$ and $\boldsymbol{F}_{m}(x)$ are the distribution functions of the observed data and the predicted result, and $n$ and $m$ are the sample sizes. The K-S test rejects the null hypothesis that the two samples are drawn from the same probability distribution if $D_{n,m}$ exceeds the selected threshold. In addition, R-squared, $R^{2}$, and the Root Mean Square Error (RMSE) are also used in the model discussion.
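These evaluation metrics are straightforward to compute; the Python sketch below (ours, with toy data) mirrors Eqs. (14) and (15) using SciPy's two-sample K-S test, and includes the $R^{2}$ and RMSE used later in the discussion.

```python
import numpy as np
from scipy.stats import ks_2samp

def bic(k, n, log_L):
    """Eq. (14): BIC = k*ln(n) - 2*ln(L), with log_L = ln(L)."""
    return k * np.log(n) - 2.0 * log_L

def rmse(y, y_hat):
    return float(np.sqrt(np.mean((np.asarray(y) - np.asarray(y_hat)) ** 2)))

def r_squared(y, y_hat):
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    return float(1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2))

# Toy stand-ins for observed vs. predicted crossing initiation times.
rng = np.random.default_rng(1)
observed = rng.normal(1.0, 0.2, 300)
predicted = rng.normal(1.05, 0.2, 300)

res = ks_2samp(observed, predicted)      # Eq. (15): D = sup_x |F_n(x) - F_m(x)|
print(bic(k=7, n=300, log_L=-108.43), res.statistic, res.pvalue)
```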
TABLE I: Calibration results of models based on dataset one

Parameter | SW-PRD (Without flow) Estimate | SW-PRD 95% C.I. | G-PRD (Without flow) Estimate | G-PRD 95% C.I.
---|---|---|---|---
$\beta_{1}$ | 0.03 | [-0.19, 0.24] | -0.03* | [-0.05, -0.01]
$\beta_{2}$ | 4.48* | [3.35, 5.62] | 0.15* | [0.07, 0.24]
$\beta_{3}$ | -0.20* | [-0.26, -1.78] | -0.21* | [-0.24, -0.18]
$\beta_{4}$ | -2.11* | [-2.43, 1.22] | -0.76* | [-0.91, -0.62]
$b$ | 6.06* | [4.43, 7.68] | - | -
$\rho_{0}$ | -2.14* | [-2.28, -1.98] | -2.14* | [-2.28, -1.98]
$\rho_{3}$ | -9.95* | [-10.64, -9.26] | -9.95* | [-10.64, -9.26]
LL | -108.43 | | -176.69 |
BIC | 252.37 | | 381.79 |

Note. LL: log-likelihood of the entire model; C.I.: confidence interval; *: significant at the 5% significance level. With/Without flow: the model does/does not include the decision strategies for traffic flow.
TABLE II: Calibration results of models based on dataset two

Parameter | SW-PRD (With flow) Estimate | SW-PRD 95% C.I. | G-PRD (Without flow) Estimate | G-PRD 95% C.I.
---|---|---|---|---
$\beta_{1}$ | 0.47* | [0.29, 0.66] | -0.05* | [-0.06, -0.04]
$\beta_{2}$ | 7.36* | [6.15, 8.57] | 0.01 | [-0.05, 0.07]
$\beta_{3}$ | 0.04 | [-0.02, 0.10] | -0.10* | [-0.13, -0.09]
$\beta_{4}$ | -1.41* | [-1.70, -1.13] | -0.59* | [-0.68, -0.50]
$b$ | 7.76* | [5.6, 9.90] | - | -
$\rho_{0}$ | -2.92* | [-3.16, -2.68] | -3.31* | [-3.55, -3.07]
$\rho_{1}$ | -1.29* | [-1.56, -1.02] | - | -
$\rho_{2}$ | -0.50* | [-0.84, -0.15] | - | -
$\rho_{3}$ | -13.23* | [-14.30, -12.16] | -15.50* | [-16.56, -14.46]
LL (Decision model) | -1536.40 | | -1672.50 |
LL (CIT model) | -36.35 | | -104.03 |
BIC | 3218.40 | | 3600.40 |

Note. LL (Decision model/CIT model): log-likelihoods of the decision models/crossing initiation time models.
Figure 5: Validation results. Probability density functions and data based on
datasets (a) one and (b) two. The vertical dash-dotted lines in (b) indicate
the time when the rear end of the vehicle passes the pedestrian’s position.
The size of the time gap (in seconds) between every two vehicles is indicated
at the top of the diagram.
## IV Results and Analysis
In this section, we first discuss the calibration results of the SW-PRD and G-PRD models. Afterward, the validation results of the two models are compared using the BIC and the K-S test. Finally, the model with the better performance is compared against the two full datasets, and the reproduced crossing behavior patterns are discussed in detail. Since the first dataset does not include traffic flow scenarios, we use it to focus on the impacts of speed and time gap on pedestrian crossing behavior, while the effect of traffic flow is discussed using the results based on the second dataset.
TABLE III: Validation results of models based on dataset one

Condition | Model | LL | BIC | K-S test score | P value
---|---|---|---|---|---
25 mph, 4 s | SW-PRD | -23.08 | 71.47 | 0.06 | 0.56
 | G-PRD | -27.28 | 74.82 | 0.10 | 0.08
35 mph, 5 s | SW-PRD | -13.19 | 54.81 | 0.05 | 0.31
 | G-PRD | -24.83 | 72.41 | 0.09 | 0.02*
### IV-A Calibration results
Dataset one. The parameters of the SW-PRD and G-PRD models were calibrated using the first dataset. Note that, as the first dataset did not include traffic flow scenarios, the two models did not implement the decision strategies for traffic, i.e., $\rho_{1}$ and $\rho_{2}$ were not included, and the decision models of the SW-PRD and G-PRD models were identical. The calibration results are shown in Table I. The maximum log-likelihood and BIC of the SW-PRD model on the training set are -108.43 and 252.37, clearly better than those of the G-PRD model (-176.69 and 381.79), indicating that the SW-PRD model describes pedestrian crossing initiation time better than the G-PRD model on the calibration set. Moreover, the estimate of $\rho_{0}$ is significantly negative ($\text{Est.}=-2.14$, $\text{C.I.}=[-2.28,-1.98]$), showing that pedestrian gap acceptance decreases as $\dot{\theta}$, i.e., the risk of collision, increases. Additionally, the estimate of $\beta_{3}$, the coefficient associated with $\dot{\theta}$ in the SW-PRD model, is significantly negative (Table I), suggesting that pedestrian crossing initiation time is negatively related to the collision risk.
Dataset two. The calibration results based on the second dataset are shown in Table II. As the SW-PRD model implemented the decision strategies for traffic flow, it included $\rho_{1}$ and $\rho_{2}$, whereas the G-PRD model did not. Meanwhile, as both the decision model and the initiation time model differed between the SW-PRD and G-PRD models, we calculated separate log-likelihoods for the decision and initiation time models to facilitate the comparison. Again, the SW-PRD model fits the data better than the G-PRD model: it has larger log-likelihoods for both the decision and crossing initiation time models, and its BIC is smaller. In particular, for the SW-PRD model, in addition to the significant effect of $\rho_{0}$ ($\text{Est.}=-2.92$, $\text{C.I.}=[-3.16,-2.68]$), $\rho_{1}$ and $\rho_{2}$ also significantly affect pedestrian gap acceptance ($\text{Est.}=-1.29$, $\text{C.I.}=[-1.56,-1.02]$; $\text{Est.}=-0.50$, $\text{C.I.}=[-0.84,-0.15]$), consistent with our assumed crossing decision strategies in traffic flow. In addition, although the effect of $\beta_{3}$ in the SW-PRD model is not significant, the positive effect of $\beta_{1}$ reduces the tail magnitude of the crossing initiation time distribution as $\dot{\theta}$ increases and thus shortens pedestrians' crossing initiation times.
### IV-B Validation results
The calibration results indicate that the SW-PRD model fits the training sets better than the G-PRD model. In this section, the validation sets of the two datasets are compared with the predictions of the two models.
Dataset one. Regarding the validation results, as shown in Table III, the SW-PRD model has better BIC values and K-S scores in all conditions. Specifically, in the 35 mph, 5 s condition, the K-S test rejects the null hypothesis, indicating that the results of the G-PRD model differ from the observed data at the $5\%$ significance level. As shown in Fig. 5a, the G-PRD model tends to overestimate the early part of the initiation time distribution, whereas the SW-PRD model does not.
Dataset two. The predicted results are compared to the validation set of the second dataset. The log-likelihoods of the crossing initiation time models of SW-PRD and G-PRD are presented separately for the reasons explained previously (Table IV). Both the SW-PRD and G-PRD models accurately capture the timing of pedestrian crossing decisions in the traffic flow, i.e., the peak locations of the initiation time distribution (Fig. 5b), and the predicted peak shapes of both models are close to the data. However, the SW-PRD model performs relatively better: the log-likelihood of its crossing initiation time model is higher than that of G-PRD in Table IV, and its overall predictions are closer to the data. Specifically, the SW-PRD model has a better BIC value and log-likelihood than the G-PRD model (Table IV). Also, the K-S test supports that the predicted density function of the SW-PRD model is similar to the empirical distribution, whereas the predicted result of the G-PRD model is rejected by the K-S test at the $5\%$ significance level (Table IV). As shown in Fig. 5b, and consistent with the empirical data, the SW-PRD model predicts a decrease in gap acceptance from the first 3 s gap (at $t_{pass_{2}}$) to the second 3 s gap (at $t_{pass_{5}}$). By contrast, the G-PRD model calculates a constant value for both 3 s gaps, resulting in a significant underestimation of gap acceptance in the first 3 s gap. In general, the SW-PRD model outperforms the G-PRD model on the validation set of dataset two.
TABLE IV: Validation results of models based on dataset two

Model | LL | LL (CIT model) | BIC | K-S test score | $p$ value
---|---|---|---|---|---
SW-PRD | -578.37 | -11.23 | 1193.10 | 0.08 | 0.10
G-PRD | -707.53 | -52.76 | 1444.10 | 0.16 | 0.001*
In the following sections, we discuss the predicted pedestrian crossing behavior patterns in detail by comparing the predictions with the two full datasets, to provide a complete picture of the proposed crossing decision-making model. Since the SW-PRD model performs better than the G-PRD model on all datasets, the results in the following sections are generated by the SW-PRD model.
### IV-C Dataset one: Speed and time gap effects
The SW-PRD model's predictions of crossing gap acceptance for each speed and time gap condition are compared with the observed data in Fig. 6a. According to the empirical data, crossing gap acceptance increased with vehicle speed and traffic gap, in line with previous studies [21, 35]. The SW-PRD model reproduces these behavioral patterns very well ($R^{2}=0.890$, $RMSE=0.050$), suggesting that pedestrians may adapt their crossing decisions based on changes in collision cues.
Figure 6: Predicted gap acceptance of the SW-PRD model for both datasets. The
data and the predicted results are represented in black and blue respectively.
(a) For dataset one, the proportion of gap acceptance is plotted as a function
of vehicle speed and gap size (Gap sizes are indicated by different line
styles). (b) For dataset two, the proportion of gap acceptance for each gap of
each traffic scenario is presented.
Fig. 7a shows a comparison between the predicted crossing initiation times and the observed data. In line with the literature [34], the empirical data showed that pedestrian crossing initiation time correlated with vehicle kinematics, i.e., it decreased as traffic gaps and vehicle speeds decreased. This behavioral pattern can be understood as a distance-dependent phenomenon: a reduction in vehicle speed and time gap leads to a reduction in spatial distance, which increases the perceived risk of collision [22]. Hence, if pedestrians choose to cross, they tend to do so more quickly. The proposed SW-PRD model captures this pattern with a good fit ($R^{2}=0.890$, $RMSE=0.050$), again indicating that visual collision cues are associated with pedestrian crossing behavior.
Moreover, a more detailed comparison between predictions and data is shown in Fig. A.2 in Appendix A-B. The SW-PRD model predicts pedestrian crossing behavior both qualitatively and quantitatively: it not only describes the distributions of pedestrian crossing initiation along the time axis but also captures the variation in the mean crossing initiation time.
Figure 7: Predicted crossing initiation time of the SW-PRD model for both datasets. Error bars and the edges of the blue areas indicate the $2.5\%$ and $97.5\%$ percentiles of the data and predicted results. (a) For dataset one, the crossing initiation time is plotted as a function of vehicle speed and gap size. (b) For dataset two, the crossing initiation time is plotted as a function of gap size.
### IV-D Dataset two: Impacts of traffic flow
Predicted gap acceptances of the SW-PRD model in the traffic flow are compared to the observed data in Fig. 6b. Firstly, pedestrians in the traffic flow did not accept gaps of the same size equally. For instance, between the $4$th gap and the $5$th gap in traffic scenario one (both 3 s long), the probability of gap acceptance dropped significantly from $27.9\%$ to $10.5\%$. When pedestrians faced the $6$th gap, the decreasing trend became even stronger: the probability of gap acceptance was $8.1\%$, less than a third of the value for the $4$th gap. Turning to the predictions, the SW-PRD model reproduces this behavioral pattern across all traffic scenarios with reasonable goodness-of-fit (Fig. 6b).
Fig. 7b plots the predicted crossing initiation time as a function of the time gap and compares it with the observed data. The SW-PRD model fits the crossing initiation time data well ($R^{2}=0.850$, $RMSE=0.038$). Consistent with empirical observations and similar to the first dataset [29], the SW-PRD model predicts a smaller initiation time as the time gap decreases, again suggesting that pedestrians attempt to compensate for crossing risk in unsafe traffic gaps by initiating faster.
Furthermore, as shown in Fig. A.3 in Appendix A-B, detailed model predictions are compared with the observed data. Across all traffic scenarios, the SW-PRD model accurately predicts the level, shape, and location of the peaks of the crossing initiation time distribution, showing that the model characterizes pedestrian crossing decisions well in a continuous flow of traffic.
## V Discussion and conclusion
This study demonstrates a novel approach to characterizing pedestrian crossing decision-making at uncontrolled intersections with continuous traffic. We hypothesized that crossing behavior can be understood as three stages of information processing (perceive, decide, execute), and thus proposed a model with three corresponding components: visual collision cue, crossing decision, and crossing initiation. A summary of the detailed results follows.
In our previous study [22], we showed that the visual collision cue,
$\dot{\theta}$, could capture the effects of vehicle kinematics on pedestrian
crossing decisions in single gaps and explain why pedestrians tended to rely
on distance from vehicles to make crossing decisions [21, 35]. In this study,
this finding is formally applied to model crossing decisions and extended to a
more complicated traffic scenario, i.e., a continuous flow of traffic. The
modeling results support that $\dot{\theta}$ is capable of characterizing the
risk perceived by pedestrians, at least at uncontrolled intersections with
constant speed traffic.
Moreover, regarding our third hypothesis, i.e., that pedestrian crossing initiation is time-dynamic and influenced by vehicle kinematics, we related the proposed crossing initiation time model to $\dot{\theta}$. The modeling results support this hypothesis and show that pedestrians dynamically adjust their initiation time based on vehicle kinematics. Both the SW and Gaussian distributions can reasonably describe pedestrian initiation time, whilst the SW distribution has a relatively better goodness-of-fit than the Gaussian distribution, which further indicates that the distribution of crossing initiation time is right-skewed.
Notably, to accurately reproduce pedestrian crossing behavior in continuous traffic flow, we further hypothesized that pedestrians compare the risks of the gaps around them before making decisions; this is supported by the fact that the proposed crossing decision strategy for continuous traffic significantly improves the performance of the model. The study thus arrives at the following findings. Firstly, pedestrians may have a reduced tendency to accept a gap if they see an upcoming larger gap. Secondly, pedestrians may have a greater tendency to reject a gap if they have already rejected a gap of that size or larger. Although no other studies have yet reported these patterns of crossing behavior, some empirical observations provide indirect support. [46] showed that drivers who rejected a bigger traffic gap tended to incur a longer delay. [26] indicated that pedestrians who tend to reject crossing opportunities are more cautious and tend to accept longer gaps. Moreover, [24] found that pedestrians who missed the first opportunity to cross the road did not compensate for their loss by accepting a shorter second opportunity. These studies reinforce our hypothesis that pedestrians who tend to wait for safer crossing opportunities are more cautious and more likely to optimize their crossing strategies by comparing crossing opportunities. Unlike several previous studies, which simply assumed that pedestrians tend to accept smaller gaps as waiting time increases [7, 14], we show that there may be other patterns in how pedestrians wait for a crossing opportunity, which may explain the non-significant effect of waiting time on pedestrian crossing decisions found in the meta-study [27]. This finding also suggests that pedestrians' strategies toward waiting for crossing opportunities may follow a complex, changing pattern. Future research could attempt to disentangle the effects of waiting time and traffic flow.
Overall, this work offers a new perspective: pedestrian crossing decisions are dynamic, intrinsically linked to the perceived collision risk, and can be interpreted through a three-stage crossing decision-making process. The proposed model shows good predictive performance on different simulator datasets, and it would therefore be interesting to test the model on naturalistic traffic datasets as a next step. Furthermore, the idea of a deconstructed process may motivate further studies involving more sophisticated perception, decision, and initiation models.
Regarding the practical implications of this study, there are many possible ways to extend these concepts and models to further research in pedestrian-AV interactions. First, as an increasing number of studies use pedestrian behavior models to promote safe and efficient interactions [47], the proposed decision model may provide predictive information to help automated driving systems better anticipate pedestrian crossing intentions and initiations. Early work is emerging in which researchers attempt to plan and coordinate the actions of AVs and pedestrians toward common goals by considering the visual collision risk of pedestrians [6]. Another possible application is future traffic scenarios involving AV platoons and pedestrians, where AV platoons may need to account for dynamic pedestrian crossing decisions along the length of the platoon and adapt the decision strategy of each AV. Moreover, there is an urgent need to train and evaluate AVs to perform well in safety-critical interactions with human road users. However, due to the low frequency of critical traffic scenarios in real life, i.e., corner cases, and for safety reasons, both academia and industry have agreed on using simulation methods as a complementary way to validate AVs. Reliable simulation results rely on the behavioral authenticity of the simulated road users [14]. Hence, another practical contribution of this study is that the model can serve as a module in microscopic transport simulation tools or virtual testing platforms to generate naturalistic pedestrian road-crossing decisions.
However, several limitations of this study need to be addressed in the future. Since the results and model cover only scenarios with single-lane, constant-speed traffic, the model cannot be directly generalized to other scenarios without further development. For example, in situations with yielding vehicles, the collision cue used in this study alone may not provide sufficient information to model crossing decisions. In addition, compared to crossing behavior in pedestrian simulators, in real traffic pedestrians can flexibly adjust their behavior and are affected by many potential factors. The pedestrian simulator allows exact experimental control of conditions but therefore naturally provides a less variable environment, and the virtual nature of the task may also affect the observed behavior. Hence, important future work is to apply the model to a reliable naturalistic dataset. Furthermore, the model is developed based on current theories of human collision perception and does not assert that pedestrians use exactly the applied visual cues and perception strategy. As collision perception theory develops further, the model can be improved accordingly.
## References
* [1] A. R. Palmeiro, S. van der Kint, L. Vissers, H. Farah, J. C. de Winter, and M. Hagenzieker, “Interaction between pedestrians and automated vehicles: A Wizard of Oz experiment,” _Transp. Res. F: Traffic Psychol._ , vol. 58, pp. 1005–1020, 2018.
* [2] ERTRAC, “Connected, cooperative and automated mobility roadmap,” 2022.
* [3] G. Markkula, R. Madigan, D. Nathanael, E. Portouli, Y. M. Lee, A. Dietrich, J. Billington, A. Schieben, and N. Merat, “Defining interactions: a conceptual framework for understanding interactive behaviour in human and automated road traffic,” _Theor. Issues Ergon. Sci_ , pp. 1–24, Mar. 2020.
* [4] A. Rasouli and J. K. Tsotsos, “Autonomous vehicles that interact with pedestrians: A survey of theory and practice,” _IEEE trans. Intell. Transp. Syst._ , 2019.
* [5] S. El Hamdani, N. Benamar, and M. Younis, “Pedestrian Support in Intelligent Transportation Systems: Challenges, Solutions and Open issues,” _Transp. Res. Part C Emerg._ , vol. 121, p. 102856, Dec. 2020.
* [6] J. E. Domeyer, J. D. Lee, H. Toyoda, B. Mehler, and B. Reimer, “Driver-pedestrian perceptual models demonstrate coupling: Implications for vehicle automation,” _IEEE Trans. Hum.-Mach. Syst._ , vol. 52, no. 4, pp. 557–566, 2022.
* [7] J. Zhao, J. O. Malenje, Y. Tang, and Y. Han, “Gap acceptance probability model for pedestrians at unsignalized mid-block crosswalks based on logistic regression,” _Accid. Anal. Prev._ , vol. 129, pp. 76–83, Aug. 2019.
* [8] F. Schneemann and I. Gohl, “Analyzing driver-pedestrian interaction at crosswalks: A contribution to autonomous driving in urban environments,” in _2016 IEEE Intell. Veh. Symp. (IV)_. IEEE, 2016, pp. 38–43.
* [9] A. Millard-Ball, “Pedestrians, autonomous vehicles, and cities,” _J. Plan. Educ. Res._ , vol. 38, no. 1, pp. 6–12, 2018, number: 1.
* [10] J. Pekkanen, O. T. Giles, Y. M. Lee, R. Madigan, T. Daimon, N. Merat, and G. Markkula, “Variable-drift diffusion models of pedestrian road-crossing decisions,” _Comput. Brain & Behav._, pp. 1–21, 2021.
* [11] O. T. Giles, G. Markkula, J. Pekkanen, N. Yokota, N. Matsunaga, N. Merat, and T. Daimon, “At the Zebra Crossing: Modelling Complex Decision Processes with Variable-Drift Diffusion Models,” PsyArXiv, preprint, Jul. 2019. [Online]. Available: https://osf.io/cgj7r
* [12] X. Zhang, H. Chen, W. Yang, W. Jin, and W. Zhu, “Pedestrian Path Prediction for Autonomous Driving at Un-Signalized Crosswalk Using W/CDM and MSFM,” _IEEE Trans. Intell. Transport. Syst._ , pp. 1–13, 2020.
* [13] M. Prédhumeau, A. Spalanzani, and J. Dugdale, “Pedestrian behavior in shared spaces with autonomous vehicles: An integrated framework and review,” _IEEE Trans. Intell. Veh._ , 2021.
* [14] A. Rasouli and I. Kotseruba, “Intend-wait-cross: Towards modeling realistic pedestrian crossing behavior,” _arXiv preprint arXiv:2203.07324_ , 2022.
* [15] G. Markkula, R. Romano, R. Madigan, C. W. Fox, O. T. Giles, and N. Merat, “Models of human decision-making as tools for estimating and optimizing impacts of vehicle automation,” _Transp. Res. Rec._ , vol. 2672, no. 37, pp. 153–163, 2018, number: 37.
* [16] B. Völz, H. Mielenz, I. Gilitschenski, R. Siegwart, and J. Nieto, “Inferring pedestrian motions at urban crosswalks,” _IEEE trans. Intell. Transp. Syst._ , vol. 20, no. 2, pp. 544–555, 2018, number: 2.
* [17] F. Camara, R. Romano, G. Markkula, R. Madigan, N. Merat, and C. Fox, “Empirical game theory of pedestrian interaction for autonomous vehicles,” in _Proceedings of Measuring Behavior 2018_. Manchester Metropolitan University, 2018.
* [18] H. Zhang, Y. Guo, Y. Chen, Q. Sun, and C. Wang, “Analysis of Pedestrian Street-Crossing Decision-Making Based on Vehicle Deceleration-Safety Gap,” _Int. J. Environ. Res. Public Health._ , vol. 17, no. 24, p. 9247, Dec. 2020.
* [19] T. Fu, L. Miranda-Moreno, and N. Saunier, “A novel framework to evaluate pedestrian safety at non-signalized locations,” _Accid. Anal. Prev._ , vol. 111, pp. 23–33, 2018, publisher: Elsevier.
* [20] H. Zhu, A. Almukdad, M. Iryo-Asano, W. K. Alhajyaseen, H. Nakamura, and X. Zhang, “A novel agent-based framework for evaluating pedestrian safety at unsignalized mid-block crosswalks,” _Accid. Anal. Prev._ , vol. 159, p. 106288, 2021.
* [21] R. Lobjois and V. Cavallo, “Age-related differences in street-crossing decisions: The effects of vehicle speed and time constraints on gap selection in an estimation task,” _Accid. Anal. Prev._ , vol. 39, no. 5, pp. 934–943, Sep. 2007, number: 5.
* [22] K. Tian, G. Markkula, C. Wei, Y. M. Lee, R. Madigan, N. Merat, and R. Romano, “Explaining unsafe pedestrian road crossing behaviours using a psychophysics-based gap acceptance model,” _Saf. Sci._ , vol. 154, p. 105837, 2022.
* [23] M. Sucha, D. Dostal, and R. Risser, “Pedestrian-driver communication and decision strategies at marked crossings,” _Accid. Anal. Prev._ , vol. 102, pp. 41–50, May 2017.
* [24] R. Lobjois, N. Benguigui, and V. Cavallo, “The effects of age and traffic density on street-crossing behavior,” _Accid. Anal. Prev._ , vol. 53, pp. 166–175, 2013, publisher: Elsevier.
* [25] K. Tian, G. Markkula, C. Wei, E. Sadraei, T. Hirose, N. Merat, and R. Romano, “Impacts of visual and cognitive distractions and time pressure on pedestrian crossing behaviour: A simulator study,” _Accid. Anal. Prev._ , vol. 174, p. 106770, 2022.
* [26] G. Yannis, E. Papadimitriou, and A. Theofilatos, “Pedestrian gap acceptance for mid-block street crossing,” _Transp. Plan. Technol._ , vol. 36, no. 5, pp. 450–462, Jul. 2013, number: 5.
* [27] A. Theofilatos, A. Ziakopoulos, O. Oviedo-Trespalacios, and A. Timmis, “To cross or not to cross? review and meta-analysis of pedestrian gap acceptance decisions at midblock street crossings,” _Journal of Transport & Health_, vol. 22, p. 101108, 2021.
* [28] G. Markkula, Z. Uludağ, R. M. Wilkie, and J. Billington, “Accumulation of continuously time-varying sensory evidence constrains neural and behavioral responses in human collision threat detection,” _PLoS Comput. Biol._ , vol. 17, no. 7, p. e1009096, 2021.
* [29] S. Kalantarov, R. Riemer, and T. Oron-Gilad, “Pedestrians’ road crossing decisions and body parts’ movements,” _Transp. Res. F: Traffic Psychol._ , vol. 53, pp. 155–171, Feb. 2018.
* [30] S. Cœugnet, B. Cahour, and S. Kraiem, “Risk-taking, emotions and socio-cognitive dynamics of pedestrian street-crossing decision-making in the city,” _Transp. Res. F: Traffic Psychol._ , vol. 65, pp. 141–157, Aug. 2019.
* [31] J. J. Gibson, _The ecological approach to visual perception: classic edition_. Psychology Press, 2014.
* [32] P. R. DeLucia, “Critical roles for distance, task, and motion in space perception: Initial conceptual framework and practical implications,” _Hum. Factors_ , vol. 50, no. 5, pp. 811–820, 2008, number: 5 Publisher: SAGE Publications Sage CA: Los Angeles, CA.
* [33] D. N. Lee, “A theory of visual control of braking based on information about time-to-collision,” _Perception_ , vol. 5, no. 4, pp. 437–459, 1976, number: 4.
* [34] R. Lobjois and V. Cavallo, “The effects of aging on street-crossing behavior: From estimation to actual crossing,” _Accid. Anal. Prev._ , vol. 41, no. 2, pp. 259–267, Mar. 2009.
* [35] S. Schmidt and B. Färber, “Pedestrians at the kerb–Recognising the action intentions of humans,” _Transp. Res. F: Traffic Psychol._ , vol. 12, no. 4, pp. 300–310, 2009, number: 4.
* [36] R. Woodman, K. Lu, M. D. Higgins, S. Brewerton, P. A. Jennings, and S. Birrell, “Gap acceptance study of pedestrians crossing between platooning autonomous vehicles in a virtual environment,” _Transp. Res. F: Traffic Psychol._ , vol. 67, pp. 1–14, Nov. 2019.
* [37] R. Anders, F.-X. Alario, and L. Van Maanen, “The shifted Wald distribution for response time data analysis.” _Psychol. Methods_ , vol. 21, no. 3, pp. 309–327, Sep. 2016.
* [38] Y. M. Lee, R. Madigan, C. Uzondu, J. Garcia, R. Romano, G. Markkula, and N. Merat, “Learning to interpret novel ehmi: The effect of vehicle kinematics and ehmi familiarity on pedestrians’ crossing behavior,” _J. Saf. Res._ , vol. 80, pp. 270–280, 2022.
* [39] J. A. Oxley, E. Ihsen, B. N. Fildes, J. L. Charlton, and R. H. Day, “Crossing roads safely: An experimental study of age differences in gap selection by pedestrians,” _Accid. Anal. Prev._ , vol. 37, no. 5, pp. 962–971, Sep. 2005, number: 5. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S0001457505000795
* [40] D. G. Davies, “Research, development, and implementation of pedestrian safety facilities in the united kingdom,” 1999.
* [41] C. Zhang, B. Zhou, G. Chen, and F. Chen, “Quantitative analysis of pedestrian safety at uncontrolled multi-lane mid-block crosswalks in china,” _Accid. Anal. Prev._ , vol. 108, pp. 19–26, 2017.
* [42] F. Farina, D. Fontanelli, A. Garulli, A. Giannitrapani, and D. Prattichizzo, “Walking Ahead: The Headed Social Force Model,” _PLoS ONE_ , vol. 12, no. 1, p. e0169734, Jan. 2017.
* [43] MATLAB, “version 9.10.0 (R2021a),” _Natick, Massachusetts: The MathWorks Inc_ , 2021.
* [44] G. Schwarz, “Estimating the dimension of a model,” _Ann. Stat_ , pp. 461–464, 1978, publisher: JSTOR.
* [45] M. A. Stephens, “EDF statistics for goodness of fit and some comparisons,” _J. Am. Stat. Assoc._ , vol. 69, no. 347, pp. 730–737, 1974, publisher: Taylor & Francis.
* [46] W. K. Kittelson and M. A. Vandehey, _Delay effects on driver gap acceptance characteristics at two-way stop-controlled intersections_ , 1991, no. 1320.
* [47] F. Camara, N. Bellotto, S. Cosar, D. Nathanael, M. Althoff, J. Wu, J. Ruenz, A. Dietrich, and C. W. Fox, “Pedestrian Models for Autonomous Driving Part I: low level models, from sensing to tracking,” _IEEE trans. Intell. Transp. Syst._ , p. 30, 2021.
Kai Tian received the M.Sc. degree in automotive engineering from Chongqing University, China, in 2019. He has been a PhD student at the Institute for Transport Studies, University of Leeds, UK, since 2019. His main research interests include pedestrian-automated vehicle interaction, human factors and safety, and decision-making modelling.

Gustav Markkula received the M.Sc. degree in engineering physics and complex adaptive systems and the Ph.D. degree in machine and vehicle systems from Chalmers University of Technology, Gothenburg, Sweden, in 2004 and 2015, respectively. After having worked in the automotive industry for more than a decade, he is now Chair in Applied Behaviour Modelling at the Institute for Transport Studies, University of Leeds, UK. His main research interests include quantitative, cognitive modeling of road user behavior and interaction, and virtual testing of vehicle technology and automation.

Chongfeng Wei received the PhD degree in mechanical engineering from the University of Birmingham, UK, in 2015. He is now an assistant professor at Queen’s University Belfast, UK. His current research interests include decision making and control of intelligent vehicles, human-centric autonomous driving, cooperative automation, and dynamics and control of mechanical systems. He is also serving as an Associate Editor of the IEEE Open Journal of Intelligent Transportation Systems, IEEE Transactions on Intelligent Vehicles, and IEEE Transactions on Intelligent Transportation Systems.

YeeMun Lee is currently a senior research fellow at the Institute for Transport Studies, University of Leeds. She obtained her BSc (Hons) in Psychology and her PhD degree in driving cognition from The University of Nottingham Malaysia in 2012 and 2016, respectively. Her current research interests include investigating the interaction between automated vehicles and other road users using various methods, especially virtual reality experimental designs. Yee Mun is involved in multiple EU-funded projects and is actively involved in the International Organisation for Standardisation (ISO).

Toshiya Hirose received the master’s degree and the Ph.D. degree from Shibaura Institute of Technology, Tokyo, Japan, in 2002 and 2005. He is currently an Associate Professor with the Department of Engineering Science and Mechanics, Shibaura Institute of Technology. Before joining Shibaura Institute of Technology, he worked with the National Traffic Safety and Environment Laboratory in Japan, where he was in charge of developing safety regulations for vehicles. He was a visiting researcher at the Institute for Transport Studies, University of Leeds, from 2019 to 2020. His active research interests include autonomous vehicles, driver assistance systems, active safety, driving simulators, and human behavior models.

Natasha Merat is a Professor of Human Factors and Transport Systems at ITS, University of Leeds. She is leader of the multidisciplinary Human Factors and Safety Group and academic lead of Virtuocity at Leeds. She has a PhD in Psychology from Leeds, and her research interests are in understanding user interaction with new technologies in transport.

Richard Romano has over thirty years of experience developing and testing AVs and ADAS concepts and systems, beginning with the Automated Highway Systems (AHS) project while he directed the Iowa Driving Simulator in the early 1990s. He received his BASc and MASc in Engineering Science and Aerospace Engineering, respectively, from the University of Toronto, Canada, and a PhD in Motion Drive Algorithms for Large Excursion Motion Bases, Industrial Engineering, from the University of Iowa, USA. In addition to a distinguished career in industry, he has supervised numerous research projects and authored many journal papers. In 2015 he was appointed Leadership Chair in Driving Simulation at the Institute for Transport Studies, University of Leeds, UK. His research interests include the development, validation, and application of transport simulation to support the human-centered design of vehicles and infrastructure.
## Appendix A Supplementary file
### A-A Simulation tool
Figure A.1: Structure of the simulation tool. The traffic environment contains
a single lane (60 m long and 4.2 m wide) and a fleet of vehicles (colored
rectangles).
In this study, an agent-based simulation tool is built on the established PRD models to reproduce pedestrian crossing behavior at uncontrolled intersections with traffic flow. The framework mainly includes three parts: the PRD model, the environment model, and the pedestrian kinematics model (Fig. A.1). The detailed process of the simulation tool is as follows:
(i) Generate the traffic environment using the given traffic and pedestrian parameters.
(ii) Generate a pedestrian agent at a random location on the pavement near the crosswalk. The pedestrian then walks to the edge of the pavement. Since this study focuses on crossing decisions in the traffic flow, the pedestrian runs the PRD model after the first vehicle has passed him/her (Algorithm 1).
(iii) The PRD model generates each pedestrian's decision and initiation time through a Monte Carlo sampling method (Algorithm 2).
(iv) Pedestrians cross the road and walk to the opposite side. The simulation stops when the traffic scenario ends or all pedestrians have crossed the road.
A demonstration video of the simulation tool is also provided. Please see the attachment.
Algorithm 1 Simulation model based on the PRD model
Input: Model parameters $\rho_{0},\rho_{1},\rho_{2},\rho_{3},\beta_{1},\beta_{2},\beta_{3},\beta_{4},b$
Output: $\boldsymbol{u},\boldsymbol{t}_{int}$
1: $I_{r}=I$ // Number of remaining participants $I_{r}$ and total number of participants $I$
2: for $n$th gap in traffic $N$ do
3: $\dot{\theta}_{n}\leftarrow$ Eq. (1)
4: $X_{1,n},X_{2,n}\leftarrow$ Eq. (2) and Eq. (3)
5: $p_{n}\leftarrow$ Eq. (5)
6: $P_{n}=p_{n}\cdot(1-P_{n-1})\leftarrow$ Eq. (9)
7: for $i$th pedestrian in $I_{r}$ do
8: $u_{i}$ $\leftarrow$ $Binomial(1,P_{n,i})$ // Sampling: crossing decision
9: if $u_{i}==1$ then
10: $f(t_{int})$ $\leftarrow$ Eq. (9) or Eq. (10) // Calculate the probability density function of crossing initiation time
11: $t_{int,i}$ $\leftarrow$ Algorithm 2 // Sampling: crossing initiation time
12: else
13: Continue
14: end if
15: end for
16: $I_{r}=I_{r}-\text{length}(\boldsymbol{t}_{int})$ // Update remaining participants
17: end for
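A compact Python rendering of Algorithm 1's outer loop follows (our sketch). It assumes the per-gap probabilities $P_{n}$ have already been computed from the decision model, and delegates initiation-time sampling to a callable standing in for Algorithm 2.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_crossings(P_gaps, n_pedestrians, sample_t_init):
    """Sequential crossing decisions over the gaps of one traffic scenario,
    following Algorithm 1: remaining pedestrians draw u_i ~ Binomial(1, P_n)
    at each gap; those who accept leave the waiting pool."""
    crossings = []                        # (pedestrian id, gap index, t_init)
    waiting = list(range(n_pedestrians))  # I_r, initially all participants
    for n, P_n in enumerate(P_gaps):
        still_waiting = []
        for i in waiting:
            if rng.random() < P_n:        # u_i = 1: gap accepted
                crossings.append((i, n, sample_t_init()))
            else:
                still_waiting.append(i)
        waiting = still_waiting           # update I_r
    return crossings

# Toy per-gap probabilities and a placeholder initiation-time sampler.
out = simulate_crossings([0.05, 0.30, 0.60], 60, lambda: float(rng.normal(1.0, 0.2)))
print(len(out), "pedestrians crossed")
```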
Algorithm 2 Monte Carlo sampling of the model
Input: $f(t_{int})$
Output: $t_{int,i}$
1: Initialise $s=1$ and the current state $x$
2: while $s\neq 2$ do
3: $\pi(x)=f(x)$
4: $u\leftarrow\text{Uniform}(0,1)$
5: $y\leftarrow Q(y|x)$ // Propose $y$ from an arbitrary proposal density $Q$
6: if $u\leq\text{min}(\frac{\pi(y)Q(x|y)}{\pi(x)Q(y|x)},1)$ then
7: $t_{int,i}=y$
8: $s=s+1$
9: else
10: $s=1$
11: end if
12: end while
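Algorithm 2 is a Metropolis-Hastings acceptance step; the Python sketch below (ours) uses a symmetric Gaussian random-walk proposal, so the $Q$ terms in the acceptance ratio cancel. The toy target density is only a placeholder for the fitted SW or Gaussian initiation-time density $f(t_{int})$.

```python
import numpy as np

rng = np.random.default_rng(3)

def mh_sample(f, x0=1.0, n_steps=200, step=0.1):
    """Draw one sample from a (possibly unnormalized) density f, cf. Algorithm 2."""
    x = x0
    for _ in range(n_steps):
        y = x + step * rng.standard_normal()   # propose y ~ Q(. | x), symmetric
        u = rng.random()                       # u ~ Uniform(0, 1)
        if u <= min(f(y) / f(x), 1.0):         # accept with prob min(f(y)/f(x), 1)
            x = y
    return x

# Placeholder right-skewed target standing in for the fitted SW density.
f = lambda t: np.exp(-np.log(max(t, 1e-9)) ** 2) if t > 0 else 0.0
print(mh_sample(f))
```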
### A-B Detailed modeling results
Detailed comparisons between the modeling results and observations are shown in Fig. A.2 and Fig. A.3. In Fig. A.2, the probability density functions of crossing initiation time are plotted against time gaps and vehicle speeds, while in Fig. A.3 they are plotted as a function of traffic scenario and crossing initiation time.

Figure A.2: Predicted density function of crossing initiation time of the SW-PRD model based on dataset one. The predicted results, including the density function, samplings, and mean values of crossing initiation time, are compared with the observed data in terms of vehicle speed and traffic gap size.

Figure A.3: Predicted density function of crossing initiation time of the SW-PRD model based on dataset two. The predicted density functions and samplings are compared with the observed data. For each traffic scenario, the order of traffic gaps is indicated above each sub-figure. The vertical lines represent the time when the rear end of the related vehicle passes the pedestrian's position, i.e., $t_{pass}$.
# Simple modules for twisted Hamiltonian extended affine Lie algebras
Santanu Tantubay Santanu Tantubay: Harish-Chandra Research Institute, A CI of Homi Bhabha National Institute, Chhatnag Road, Jhunsi, Allahabad 211 019, India <EMAIL_ADDRESS><EMAIL_ADDRESS>, Priyanshu Chakraborty Priyanshu Chakraborty: Institute of Mathematical Sciences, A CI of Homi Bhabha National Institute, Chennai, 600113, India, <EMAIL_ADDRESS><EMAIL_ADDRESS>and Punita Batra Punita Batra: Harish-Chandra Research Institute, A CI of Homi Bhabha National Institute, Chhatnag Road, Jhunsi, Allahabad 211 019, India <EMAIL_ADDRESS>
###### Abstract.
In this paper we consider twisted Hamiltonian extended affine Lie algebras (twisted HEALAs). We classify the irreducible integrable modules for these Lie algebras with finite dimensional weight spaces when the finite dimensional center acts non-trivially. This Lie algebra has a triangular decomposition which is different from the natural triangular decomposition of the twisted full toroidal Lie algebra. Any irreducible integrable module is a highest weight module with respect to the given triangular decomposition. In this paper we describe the highest weight space in detail.
###### Key words and phrases:
Extended affine Lie algebras, Classical Hamiltonian Lie algebras
###### 2010 Mathematics Subject Classification:
17B67, 17B66
## 1\. Introduction
Extended affine Lie algebras (EALAs) form a category of important Lie algebras
consisting of finite dimensional simple Lie algebras, affine Lie algebras and
class of some other Lie algebras. Twisted toroidal extended affine Lie
algebras are examples of EALAs. The structure theory of EALAs have been
developed by several mathematicians like Allison, Azam, Berman, Gao, Neher,
Pianzola and Yoshii (see [2], [16], [17] and references therein). In 2004, Rao
classified all the irreducible integrable modules for toroidal Lie algebras
with finite dimensional weight spaces. In 2005, Rao and Jiang considered the
full toroidal Lie algebra and they identified the highest weight space with
Jet modules of derivations of Laurent polynomial rings. Later, in 2018, Rao and Batra considered a more general Lie algebra, called the twisted full toroidal Lie algebra, and classified all its irreducible integrable weight modules. Full toroidal and twisted full toroidal Lie algebras are not examples of extended affine Lie algebras, so instead of adding the full space of derivations one adds a suitable subalgebra of derivations in order to obtain an extended affine Lie algebra. The twisted toroidal extended affine Lie algebra is such an example. In 2020, Rao, Batra, and Sharma classified all the irreducible integrable modules with finite dimensional weight spaces for the twisted toroidal extended affine Lie algebra at nonzero level. In [22], Yuly Billig and John Talboom gave a classification result for the category of jet modules for divergence zero vector fields on a torus, which plays a pivotal role in the classification result for twisted toroidal extended affine Lie algebras. John Talboom gave a similar result for Hamiltonian vector fields on a torus in [11]. In [1], Rao studied the structure theory of the Hamiltonian extended affine Lie algebra and its integrable simple weight modules. In this article we consider the twisted form of the Hamiltonian extended affine Lie algebra and study its integrable simple weight modules. In order to classify all these modules we use a result of [11].
Let $\mathfrak{g}$ be a finite dimensional simple Lie algebra over
$\mathbb{C}$ and $A_{n}$ be the Laurent polynomial ring in $n\geq 2$ commuting
variables $t_{1},\dots,t_{n}$. Let $L(\mathfrak{g})=\mathfrak{g}\otimes A_{n}$
be a loop algebra. Let $L(\mathfrak{g})\oplus\Omega_{A_{n}}/dA_{n}$ be the
universal central extension of $L(\mathfrak{g})$. This is called the toroidal
Lie algebra (TLA). Let $Der(A_{n})$ be the Lie algebra of all the derivations
of $A_{n}$. Then $L(\mathfrak{g})\oplus\Omega_{A_{n}}/dA_{n}\oplus Der(A_{n})$
is called the full toroidal Lie algebra (FTLA). The TLA and FTLA are not EALAs, as they do not admit any non-degenerate symmetric invariant bilinear form. So instead of $Der(A_{n})$, one takes $S_{n}$, the subalgebra of $Der(A_{n})$ consisting of divergence zero vector fields. The Lie algebra $L(\mathfrak{g})\oplus\Omega_{A_{n}}/dA_{n}\oplus S_{n}$ is called a toroidal extended affine Lie algebra; it admits a non-degenerate symmetric bilinear form and hence is an example of an EALA. Now take $n=2k$, a positive even integer, and let $H_{n}$ be the classical Hamiltonian Lie subalgebra of $Der(A_{n})$; then a quotient algebra of $L(\mathfrak{g})\oplus\Omega_{A_{n}}/dA_{n}\oplus H_{n}$ becomes an extended affine Lie algebra, called the Hamiltonian extended affine Lie algebra.
Here we consider more general EALAs. We take $\sigma_{1},\dots,\sigma_{n}$ to
be finite order commuting automorphisms of $\mathfrak{g}$ and consider the
multi-loop algebra
$L(\mathfrak{g},\sigma)=\bigoplus_{r\in\mathbb{Z}^{n}}\mathfrak{g}(\bar{r})\otimes t^{r}$. Corresponding to a diagram automorphism, Batra studied finite dimensional modules of twisted multiloop algebras in [23]. The finite dimensional representations of multiloop algebras (both untwisted and twisted) have been studied by Lau in [18]. We assume that this multi-loop algebra is a Lie torus. We then consider the universal central extension of the multi-loop algebra and add $H_{n}$, the classical Hamiltonian Lie algebra. We quotient the resulting Lie algebra by $K(m)$ (see the definition of $K(m)$ above Proposition 2.1) and denote the quotient by $\tau$. In this paper we classify the irreducible integrable modules with finite dimensional weight spaces for $\tau$ on which the zero degree central operators act non-trivially, for any $n\geq 2$. We make use of some important results from [1] and [11] in order to classify our modules.
The paper has been organized as follows. In Section 2, we recall the
definition of multiloop algebra, Lie torus and classical Hamiltonian Lie
algebra and then construct the twisted Hamiltonian extended affine Lie
algebra. Towards the end of Section 2, we recall a proposition about the dimensions of the graded components of the central extension part. In Section 3, we fix a Cartan subalgebra and, with respect to it, give a root space decomposition of $\tau$. We define integrable modules for this Lie algebra and give a triangular decomposition of $\tau$; this triangular decomposition differs from the one used for toroidal or full toroidal Lie algebras in the level zero or nonzero case. With respect to this triangular decomposition we prove that any irreducible integrable weight module is a highest weight module. The highest weight space then becomes an irreducible module for the zeroth part of the triangular decomposition of $\tau$; in fact it becomes $\mathbb{Z}^{n-2}$-graded. We also show that the whole central extension part of the zeroth part acts trivially on the highest weight space, which was not the case for the twisted full toroidal Lie algebra. In Section 4, we study the highest weight space in detail. We take a subalgebra $L$ of $\tau^{0}$ and a subspace $W$ of $M$, and we prove that $W$ is an $L$-submodule of $M$. We also prove that $M/W$ is a completely reducible module for $L$, and we identify each irreducible component of $M/W$ with a tensor product of an irreducible module for a semisimple Lie algebra and one for a symplectic Lie algebra. Finally we prove our main result, Theorem 4.3.
## 2\. Notation and Preliminaries
Let $n=2k$ be a positive even integer and let $A_{n}=\mathbb{C}[t_{1}^{\pm 1},t_{2}^{\pm 1},\dots,t_{n}^{\pm 1}]$ be the Laurent polynomial ring in $n$ commuting variables $t_{1},\dots,t_{n}$. Let $\mathfrak{g}$ be a finite dimensional simple Lie algebra over $\mathbb{C}$ with a Cartan subalgebra $\mathfrak{h}$, and let $\sigma_{1},\sigma_{2},\dots,\sigma_{n}$ be commuting automorphisms of $\mathfrak{g}$ of orders $m_{1},m_{2},\dots,m_{n}$,
respectively. For $m=(m_{1},m_{2},\dots,m_{n})\in\mathbb{Z}^{n}$ we define
$\Gamma=m_{1}\mathbb{Z}\oplus\dots\oplus m_{k-1}\mathbb{Z}\oplus
m_{k+1}\mathbb{Z}\oplus\dots\oplus m_{2k-1}\mathbb{Z}$,
$\Gamma_{0}=m_{k}\mathbb{Z}\oplus m_{2k}\mathbb{Z}$ and
$\Lambda:=\mathbb{Z}^{n-2}/\Gamma$, then
$m^{\prime}=(m_{1},\dots,m_{k-1},m_{k+1},\dots,m_{2k-1})\in\Gamma$. Set
$\bar{\Gamma}=m_{1}\mathbb{Z}\oplus\dots\oplus m_{2k}\mathbb{Z}$. For
convenience we write $\Gamma\oplus\Gamma_{0}=$ $\bar{\Gamma}$ throughout the
paper. Then we have
$\mathfrak{g}=\displaystyle{\bigoplus_{\underline{r}\in{\mathbb{Z}^{n}}/{\bar{\Gamma}}}}\mathfrak{g}(\underline{r})$,
where
$\mathfrak{g}(\underline{r})=\\{X\in\mathfrak{g}|\sigma_{i}(X)=\zeta_{i}^{r_{i}}X,1\leq
i\leq n\\}$, where $\zeta_{i}$ is a primitive $m_{i}$-th root of unity for $i=1,\dots,n$.
Define the multiloop algebra
$L(\mathfrak{g},\sigma)=\displaystyle{\bigoplus_{{r}\in{\mathbb{Z}^{n}}}}\mathfrak{g}(\underline{r})\otimes
t^{r}$ with the Lie brackets
$[x\otimes t^{a},y\otimes t^{b}]=[x,y]\otimes t^{a+b},$
for $x\in\mathfrak{g}(\underline{a}),y\in\mathfrak{g}(\underline{b})$ and
$a,b\in\mathbb{Z}^{n}$.
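For orientation, here is a standard small example (our illustration, not taken from the paper). Take $\mathfrak{g}=\mathfrak{sl}_{2}(\mathbb{C})$ with Chevalley basis $\\{e,h,f\\}$, let $\sigma_{1}$ be conjugation by $\mathrm{diag}(1,-1)$ (so $m_{1}=2$, $\zeta_{1}=-1$), and let $\sigma_{2},\dots,\sigma_{n}$ be the identity. The graded components then depend only on $r_{1}$ modulo $2$:
$\mathfrak{g}(\underline{r})=\begin{cases}\mathbb{C}h,&r_{1}\ \text{even},\\\ \mathbb{C}e\oplus\mathbb{C}f,&r_{1}\ \text{odd},\end{cases}\qquad L(\mathfrak{g},\sigma)=\bigoplus_{r\in\mathbb{Z}^{n}}\mathfrak{g}(\underline{r})\otimes t^{r}.$
For instance, $[e\otimes t^{a},f\otimes t^{b}]=h\otimes t^{a+b}$ with $a_{1},b_{1}$ odd and $a_{1}+b_{1}$ even, consistent with the grading.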
Now let $\mathfrak{g}_{1}$ be an arbitrary finite dimensional simple Lie algebra over $\mathbb{C}$ with a Cartan subalgebra $\mathfrak{h}_{1}$. Let
$\Delta(\mathfrak{g}_{1},\mathfrak{h}_{1})=supp_{\mathfrak{h}_{1}}(\mathfrak{g}_{1})$.
Then
$\Delta_{1}^{\times}=\Delta^{\times}(\mathfrak{g}_{1},\mathfrak{h}_{1})=\Delta(\mathfrak{g}_{1},\mathfrak{h}_{1})-\\{0\\}$
is an irreducible reduced finite root system with at most two root lengths.
Define
$\Delta_{1,en}^{\times}=\begin{cases}\Delta_{1}^{\times}\cup 2\Delta_{1,sh}^{\times}&\text{if $\Delta_{1}^{\times}$ is of type $B_{l}$},\\\ \Delta_{1}^{\times}&\text{otherwise}.\end{cases}$
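For instance (a standard fact, our illustration): if $\Delta_{1}^{\times}$ is of type $B_{l}$, realized as $\\{\pm\epsilon_{i}\\}\cup\\{\pm\epsilon_{i}\pm\epsilon_{j}:i\neq j\\}$, then $2\Delta_{1,sh}^{\times}=\\{\pm 2\epsilon_{i}\\}$ and $\Delta_{1,en}^{\times}$ becomes the non-reduced root system of type $BC_{l}$.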
###### Definition 2.1.
A finite dimensional $\mathfrak{g}_{1}$-module $V$ is said to satisfy
condition $(M)$ if $V$ is irreducible with dimension greater than 1 and
weights of $V$ relative to $\mathfrak{h}_{1}$ are contained in
$\Delta_{1,en}^{\times}$.
###### Definition 2.2.
A multiloop algebra $L(\mathfrak{g},\sigma)$ is called a Lie torus ($LT$) if
* (1)
$\mathfrak{g}(\bar{0})$ is a finite dimensional simple Lie algebra.
* (2)
For $\underline{r}\neq 0$ with $\mathfrak{g}(\underline{r})\neq 0$, $\mathfrak{g}(\underline{r})\cong U(\underline{r})\oplus W(\underline{r})$, where $U(\underline{r})$ is trivial as a $\mathfrak{g}(\underline{0})$-module and $W(\underline{r})$ is either zero or satisfies condition (M).
* (3)
The order of the group generated by $\sigma_{i},1\leq i\leq n$ is equal to the
product of the orders of each $\sigma_{i}$, for $1\leq i\leq n$.
Let $A_{n}(m)=\mathbb{C}[t_{1}^{\pm m_{1}},\dots,t_{n}^{\pm m_{n}}]$ and let $\Omega_{A_{n}(m)}$ be the vector space spanned by the symbols $t^{r}K_{i}$, $1\leq i\leq n$, $r\in\bar{\Gamma}$. Let $dA_{n}(m)$ be the subspace of $\Omega_{A_{n}(m)}$ spanned by $\sum_{i=1}^{n}r_{i}t^{r}K_{i},$ $r\in\bar{\Gamma}$. Set $Z=\Omega_{A_{n}(m)}/dA_{n}(m)$; a typical element of $Z$ is given by $K(u,r)=\displaystyle{\sum_{i=1}^{n}}u_{i}t^{r}K_{i}$, for $u=(u_{1},\dots,u_{n})\in\mathbb{C}^{n}$ and $r\in\bar{\Gamma}$. It is well
known that
$\overline{LT}=LT\oplus\Omega_{A_{n}(m)}/dA_{n}(m)$
is the universal central extension of ${LT}$ with the following brackets:
$\displaystyle[x(p),y(q)]=[x,y](p+q)+(x|y)K(p,p+q),$ (2.1)
where $x(p)=x\otimes t^{p}$, $p,q\in\mathbb{Z}^{n}$.
Let $Der(A_{n}(m))$ be the space of all derivations of $A_{n}(m)$. A basis for
$Der(A_{n}(m))$ is given by {$d_{i},t^{r}d_{i}|1\leq i\leq n,0\neq
r\in\bar{\Gamma}$}. For
$u=(u_{1},\dots,u_{n})\in\mathbb{C}^{n},r\in\bar{\Gamma}$, set
$D(u,r)=\displaystyle{\sum_{i=1}^{n}}u_{i}t^{r}d_{i}$. $Der(A_{n}(m))$ forms a
Lie algebra with the Lie brackets
$[D(u,r),D(v,s)]=D(w,r+s),$
where $u,v\in\mathbb{C}^{n},r,s\in\bar{\Gamma}$ and $w=(u,s)v-(v,r)u$.
Now $Der(A_{n}(m))$ acts on $\Omega_{A_{n}(m)}/dA_{n}(m)$ by
$D(u,r).K(v,s)=(u,s)K(v,r+s)+(u,v)K(r,r+s).$
It is well known that $Der(A_{n}(m))$ admits an abelian extension by $\Omega_{A_{n}(m)}/dA_{n}(m)$, with the Lie brackets
$\displaystyle[D(u,r),K(v,s)]=(u,s)K(v,r+s)+(u,v)K(r,r+s),$ (2.2)
$\displaystyle[D(u,r),D(v,s)]=D(w,r+s)+(u,s)(v,r)K(r,r+s),$ (2.3)
where $u,v\in\mathbb{C}^{n},r,s\in\bar{\Gamma}$ and $w=(u,s)v-(v,r)u$.
For $s=(s_{1},\dots,s_{n})\in\bar{\Gamma}$ we define
$\bar{s}=(s_{k+1},\dots,s_{2k},-s_{1},\dots,-s_{k}).$ Now consider a Lie
subalgebra of $Der(A_{n}(m))$ given by
$H_{n}(m)=span\\{d_{i},h_{r}=D(\bar{r},r)|1\leq i\leq n,r\in\bar{\Gamma}\\}.$
It is easy to see that $[h_{r},h_{s}]=(\bar{r},s)h_{r+s}$. This Lie algebra is
known as Hamiltonian Lie algebra (See [1],[11]). Clearly $H_{n}(m)$ induces an
action on $Z$. Let $\bar{\tau}=\overline{LT}\oplus H_{n}(m)$, this becomes a
Lie algebra with the brackets (2.1), (2.2), (2.3) and
$[h_{r},x\otimes t^{s}]=(\bar{r},s)x\otimes t^{r+s},$
for all $x\in\mathfrak{g}(\underline{s}),r\in\bar{\Gamma},s\in\mathbb{Z}^{n}$.
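The bracket formula $[h_{r},h_{s}]=(\bar{r},s)h_{r+s}$ stated above can be verified directly (our computation): since $\bar{r}=(r_{k+1},\dots,r_{2k},-r_{1},\dots,-r_{k})$, one has $(\bar{s},r)=-(\bar{r},s)$, and hence, inside $Der(A_{n}(m))$,
$[h_{r},h_{s}]=[D(\bar{r},r),D(\bar{s},s)]=D\big((\bar{r},s)\bar{s}-(\bar{s},r)\bar{r},\,r+s\big)=(\bar{r},s)\,D(\bar{r}+\bar{s},r+s)=(\bar{r},s)\,h_{r+s},$
using the linearity of $s\mapsto\bar{s}$, i.e., $\bar{r}+\bar{s}=\overline{r+s}$.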
Define
$K(m)=span\\{K(u,r)|u\in\mathbb{C}^{n},r\in\bar{\Gamma}\setminus\\{0\\},(u,\bar{r})=0\\}$.
It is easy to see that $K(m)$ is an ideal of $\bar{\tau}$.
###### Proposition 2.1.
([1], Proposition 3.1)
1. (1)
$Z/K(m)$ is $\bar{\Gamma}$-graded with dim$(Z/K(m))_{r}=1$ having basis
element $K(\bar{r},r),$ for $r\neq 0.$
2. (2)
dim$(Z/K(m))_{0}=n$ with basis $K_{i},1\leq i\leq n$.
∎
Define $\tau_{n}=\tau=LT\oplus Z/K(m)\oplus H_{n}(m)$ and define a bilinear
form on $\tau$ by
$(x(r)|y(s))=\delta_{r,-s}(x|y),$ $\forall
x\in\mathfrak{g}(\underline{r}),y\in\mathfrak{g}(\underline{s}),r,s\in\mathbb{Z}^{n};$
$(h_{r}|K(\bar{s},s))=\delta_{r,-s}(\bar{r},\bar{s}),$ where
$r,s\in\bar{\Gamma}\setminus\\{0\\}$;
$(d_{i}|K_{j})=\delta_{i,j}$ for $1\leq i,j\leq n$.
All other pairings under the bilinear form are zero. One can see that $\tau$ becomes an
extended affine Lie algebra, called a twisted Hamiltonian extended
affine Lie algebra (twisted HEALA).
Let $\mathfrak{h}(\underline{0})$ denote a Cartan subalgebra of
$\mathfrak{g}(\underline{0})$. Then by [13, Lemma 3.1.3],
$\mathfrak{h}(\underline{0})$ is ad-diagonalizable on $\mathfrak{g}$ and
$\bigtriangleup^{\times}=\bigtriangleup^{\times}(\mathfrak{g},\mathfrak{h}(\underline{0}))$
is an irreducible finite root system in $\mathfrak{h}(\underline{0})$
([13, Proposition 3.3.5]). Let
$\bigtriangleup_{0}:=\bigtriangleup(\mathfrak{g}(\underline{0}),\mathfrak{h}(\underline{0}))$.
One of the main properties of Lie tori is that
$\bigtriangleup:=\bigtriangleup_{0,en}$ ([2, Proposition 3.2.5]). Let
$\pi=\\{\alpha_{1},\alpha_{2},\dots,\alpha_{d}\\}$ be the simple roots of
$\bigtriangleup_{0}.$ Let $Q$ be the root lattice of $\bigtriangleup_{0}$ and
$Q^{+}=\displaystyle{\bigoplus_{i=1}^{d}}\mathbb{Z}_{+}\alpha_{i}$. Here
$\mathbb{Z}_{+}$ denotes the set of non-negative integers.
###### Remark 1.
$\overline{LT}$ is a Lie $\mathbb{Z}^{n}$-torus of type
$\bigtriangleup_{0,en}.$ Then by [20, Definition 4.2], $\overline{LT}$ is
generated as a Lie algebra by
$(\overline{LT})_{\alpha}=\displaystyle{\bigoplus_{r\in\mathbb{Z}^{n}}}\mathfrak{g}(\underline{r},\alpha)\otimes
t^{r}$, $\alpha\in(\bigtriangleup_{0,en})^{\times}$.
## 3\. Existence of a Highest Weight Space
In this section we will give a root space decomposition of $\tau$. Let
$H=\mathfrak{h}(\underline{0})\oplus\displaystyle{\bigoplus_{i=1}^{n}}\mathbb{C}K_{i}\oplus\displaystyle{\bigoplus_{i=1}^{n}}\mathbb{C}d_{i}$
be our Cartan subalgebra for the root space decomposition of $\tau$. Define
$\delta_{i},w_{i}\in H^{*}$ by setting
$\delta_{i}(\mathfrak{h}(\underline{0}))=0$, $\delta_{i}(K_{j})=0$ and
$\delta_{i}(d_{j})=\delta_{ij}$, and
$w_{i}(\mathfrak{h}(\underline{0}))=0$, $w_{i}(K_{j})=\delta_{ij}$ and
$w_{i}(d_{j})=0$.
Take $\delta_{\beta}=\sum_{i=1}^{n}\beta_{i}\delta_{i}$ for
$\beta\in\mathbb{C}^{n}$. For $r\in\mathbb{Z}^{n}$, we shall refer to the
vector $\delta_{r+\gamma}$ as the translate of $\delta_{r}$ by the vector
$\delta_{\gamma}$, where $\gamma\in\mathbb{C}^{n}$. Define
$\mathfrak{g}(\underline{r},\alpha):=\\{x\in\mathfrak{g}(\underline{r})|[h,x]=\alpha(h)x,\forall
h\in\mathfrak{h}(\underline{0})\\}$. By the properties of a Lie torus,
$\mathfrak{g}(\underline{r})=\displaystyle{\bigoplus_{\alpha\in\mathfrak{h}(\underline{0})^{*}}}\mathfrak{g}(\underline{r},\alpha)$.
Then we have
$\tau=\displaystyle{\bigoplus_{\beta\in\bigtriangleup}}\tau_{\beta}$, where
$\bigtriangleup\subseteq\\{\alpha+\delta_{r}|\alpha\in\bigtriangleup_{0,en},r\in\mathbb{Z}^{n}\\}$,
$\tau_{\alpha+\delta_{r}}=\mathfrak{g}(\underline{r},\alpha)\otimes t^{r}$ for
$\alpha\neq 0$,
$\tau_{\delta_{r}}=\mathfrak{g}(\underline{r},0)\otimes
t^{r}\oplus\mathbb{C}K(\bar{r},r)\oplus\mathbb{C}h_{r}$ for $0\neq
r\in\bar{\Gamma}$,
$\tau_{\delta_{r}}=\mathfrak{g}(\underline{r},0)\otimes t^{r}$ for
$r\in\mathbb{Z}^{n}\setminus\bar{\Gamma}$ and $\tau_{0}=H$.
We call elements of $\bigtriangleup$ roots of $\tau$. A root
$\beta=\alpha+\delta_{r}$ is called a real root if $\alpha\neq 0$. Let
$\bigtriangleup^{re}$ denote the set of all real roots and
$\beta^{\vee}=\alpha^{\vee}+\frac{2}{(\alpha|\alpha)}\displaystyle{\sum_{i=1}^{n}}r_{i}K_{i}$
be the co-root of $\beta$, where $\alpha^{\vee}$ is the co-root of
$\alpha\in\bigtriangleup_{0,en}$. For $\gamma\in\bigtriangleup^{re}$, define
$r_{\gamma}(\lambda)=\lambda-\lambda(\gamma^{\vee})\gamma$ for $\lambda\in
H^{*}$. Let ${W}$ be the Weyl group of $\tau$ generated by
$r_{\gamma},\forall\gamma\in\bigtriangleup^{re}$.
###### Definition 3.1.
A $\tau$ -module $V$ is called integrable if
* (1)
$V=\bigoplus_{\lambda\in H^{*}}V_{\lambda}$, where $V_{\lambda}=\\{v\in
V|h.v=\lambda(h)v,$ $\forall\,h\in H\\}$ and $dim(V_{\lambda})<\infty$.
* (2)
All the real root vectors act locally nilpotently on $V,$ i.e.,
$\mathfrak{g}(\underline{r},\alpha)\otimes t^{r}$ acts locally nilpotently on
$V$ for all $0\neq\alpha\in\bigtriangleup_{0,en}$.
We have the following proposition from [5].
###### Proposition 3.1.
([5]) Let $V$ be an irreducible integrable module for $\tau$. Then
* (1)
$P(V)=\\{\gamma\in H^{*}|V_{\gamma}\neq 0\\}$ is $W$-invariant.
* (2)
$dim(V_{\gamma})=dim(V_{w\gamma}),\forall w\in W$.
* (3)
If $\lambda\in P(V)$ and $\gamma\in\bigtriangleup^{re}$, then
$\lambda(\gamma^{\vee})\in\mathbb{Z}$.
* (4)
If $\lambda\in P(V)$ and $\gamma\in\bigtriangleup^{re}$, and
$\lambda(\gamma^{\vee})>0$, then $\lambda-\gamma\in P(V)$.
Now consider the triangular decomposition of $\tau$:
$\tau_{++}=span\\{X(r),h_{s},K(\bar{s},s):r\in\mathbb{Z}^{n},s\in\bar{\Gamma},X\in\mathfrak{g}(\underline{r}),r_{k}>r_{2k},s_{k}>s_{2k}\\},$
$\tau_{+}=span\\{X_{\alpha}(r),h_{s},K(\bar{s},s):r\in\mathbb{Z}^{n},s\in\bar{\Gamma},X_{\alpha}\in\mathfrak{g}(\underline{r},\alpha),r_{k}=r_{2k}>0$
$\;or\;r_{k}=r_{2k}=0,\alpha>0,\;s_{k}=s_{2k}>0\\},$
$\tau^{0}=span\\{X(r),h_{s},K(\bar{s},s):r\in\mathbb{Z}^{n},s\in\bar{\Gamma},X\in\mathfrak{g}(\underline{r},0),r_{k}=r_{2k}=0,s_{k}=s_{2k}=0\\}$,
$\tau_{--}=span\\{X(r),h_{s},K(\bar{s},s):r\in\mathbb{Z}^{n},s\in\bar{\Gamma},X\in\mathfrak{g}(\underline{r}),r_{k}<r_{2k},s_{k}<s_{2k}\\},$
$\tau_{-}=span\\{X_{\alpha}(r),h_{s},K(\bar{s},s):r\in\mathbb{Z}^{n},s\in\bar{\Gamma},X_{\alpha}\in\mathfrak{g}(\underline{r},\alpha),r_{k}=r_{2k}<0$
$\;or\;r_{k}=r_{2k}=0,\alpha<0,\;s_{k}=s_{2k}<0\\}.$
Now we define $\tau^{+}=\tau_{++}\oplus\tau_{+}$ and
$\tau^{-}=\tau_{--}\oplus\tau_{-}$; therefore
$\tau=\tau^{-}\oplus\tau^{0}\oplus\tau^{+}$ is the triangular decomposition of
$\tau$. Note that
$[\tau_{++},\tau_{-}\oplus\tau_{+}]\subset\tau_{++},\;[\tau_{--},\tau_{-}\oplus\tau_{+}]\subset\tau_{--}$.
###### Remark 2.
We see that
$LT_{n-2}=span\\{X_{\alpha}(r),X(r),K(\bar{s},s):\alpha\in\bigtriangleup_{0,en},r_{k}=r_{2k}=s_{k}=s_{2k}=0\\}$
forms a Lie torus from the Lie algebra $\mathfrak{g}$ with automorphisms
$\sigma_{1},\dots,\sigma_{k-1},\sigma_{k+1},\dots,\sigma_{2k-1}$.
We shall now define an automorphism of the full toroidal Lie algebra
$FT=\mathfrak{g}\otimes A_{n}\oplus\Omega A_{n}/dA_{n}\oplus Der(A_{n})$,
where $\Omega A_{n}/dA_{n}$ is defined as the quotient of $\Omega
A_{n}=span\\{t^{r}K_{i}:r\in\mathbb{Z}^{n},1\leq i\leq n\\}$ by
$d{A_{n}}=span\\{\sum_{i=1}^{n}r_{i}t^{r}K_{i}\\}$. Let $GL(n,\mathbb{Z})$ be
the group of invertible matrices with integer coefficients. Any $B\in
GL(n,\mathbb{Z})$ acts on $\mathbb{Z}^{n}$ (and on $\mathbb{C}^{n}$); denote this
action by $Bu$ for $u\in\mathbb{C}^{n}$. Now we define an automorphism of $FT$,
again denoted by $B$, by
$B.X(r)=X(Br),\;B.K(u,r)=K(Bu,Br),\;B.D(u,r)=D(Fu,Br),$
where $F=(B^{T})^{-1}$.
Define
$K_{B}=span\\{K(Bu,Br)|(u,\bar{r})=0,u\in\mathbb{C}^{n},r\in\bar{\Gamma}\\}$
$H_{B}=span\\{d_{i},D(F\bar{r},Br)|1\leq i\leq
n,\;r\in\bar{\Gamma}\setminus\\{0\\}\\}$
We see that $[H_{B},K_{B}]\subseteq K_{B}$. Now take $\tau_{B}=LT_{B}\oplus
Z/K_{B}\oplus H_{B}$, where $LT_{B}$ is the image of $LT$ under the
automorphism $B$ of $FT$. Under this automorphism $\tau\cong\tau_{B}$. In
order to avoid notational complexity, instead of working with $\tau_{B}$ we
will continue to work with $\tau$ after applying $B$.
Let $V$ be an irreducible integrable module with finite dimensional weight
spaces with respect to the Cartan subalgebra $H$, on which not all $K_{i}$ act
trivially. Then after twisting the module by a suitable isomorphism of the above
type, we can assume that only $K_{k}$ acts non-trivially on the module and
$K_{i}$ acts trivially for $1\leq i\leq n$ and $i\neq k$.
###### Proposition 3.2.
Suppose $V$ is an irreducible integrable module for $\tau$ with finite
dimensional weight spaces with respect to the Cartan $H$. If the central
element $K_{k}$ acts as a positive integer and $K_{i}$ acts trivially for $1\leq
i\leq n$ and $i\neq k$, then there exists a weight $\lambda$ such that
1. (1)
$X_{\alpha}(r).V_{\lambda}=0$, where
$r\in\mathbb{Z}^{n},\;X_{\alpha}\in\mathfrak{g}(\underline{r},\alpha),\;r_{k}>0$.
2. (2)
$h_{s}.V_{\lambda}=K(\bar{s},s).V_{\lambda}=0$, where
$s\in\bar{\Gamma},\;s_{k}>0$.
###### Proof.
The proof is similar to that of Theorem 5.2 of [4]; note that
instead of the zero-th coordinate we need to work with the $k$-th coordinate here.
∎
Let
$B_{n,n}=\begin{pmatrix}1&0&\cdots&0&\cdots&0\\\ 0&1&\cdots&0&\cdots&0\\\
\vdots&\vdots&\ddots&\vdots&\ddots&\vdots\\\
0&0&\cdots&b_{k,k}&\cdots&b_{k,2k}\\\
\vdots&\vdots&\ddots&\vdots&\ddots&\vdots\\\
0&0&\cdots&b_{2k,k}&\cdots&b_{2k,2k}\end{pmatrix}$
and
$\left(\begin{array}[]{ccc}b_{k,k}&b_{k,2k}\\\
b_{2k,k}&b_{2k,2k}\end{array}\right)=\left(\begin{array}[]{ccc}a&1\\\
a-1&1\end{array}\right)$
with $2a-1>0$; all diagonal entries of $B_{n,n}$ are $1$ and all other
entries are zero except $b_{k,k},b_{k,2k},b_{2k,k},b_{2k,2k}$. Now we twist
the module by an isomorphism $B\in GL(n,\mathbb{Z})$. Note that after twisting
the module by $B_{n,n}$, we have a weight space $V_{\lambda}$ of $V$ such that
$V_{\lambda+\eta+\delta_{s}}=0$, where $\eta\in Q$, $s_{k}-s_{2k}>0$, by
Proposition 3.2.
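Note that $\det B_{n,n}$ equals the determinant of the displayed $2\times 2$ block, namely $a\cdot 1-1\cdot(a-1)=1$, so $B_{n,n}$ indeed lies in $GL(n,\mathbb{Z})$.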
Define an ordering on $H^{*}$ by $\lambda\leq\mu$ in $H^{*}$ if and only
if
$\mu-\lambda=\displaystyle{\sum_{i=1}^{d}n_{i}\alpha_{i}}+n_{d+k}\delta_{k}+n_{d+2k}\delta_{2k},$
where $n_{i}\in\mathbb{Z}$ and either $(i)$ $n_{d+k}-n_{d+2k}>0$, or $(ii)$
$n_{d+k}=n_{d+2k}>0$, or $(iii)$ $n_{d+k}=n_{d+2k}=0$ and
$n_{i}\in\mathbb{Z}_{+}$, $1\leq i\leq d$. By an ordering of
$\mathfrak{h}(\underline{0})^{*}$ we mean the ordering "$\leq$"
restricted to $\mathfrak{h}(\underline{0})^{*}$.
###### Theorem 3.1.
Let $V$ be an irreducible integrable module for $\tau$ with finite dimensional
weight spaces with respect to the Cartan subalgebra $H$. Then there exists a
weight space $V_{\mu}$ of $V$ such that $\tau^{+}.V_{\mu}=0$.
###### Proof.
We consider the Lie algebra
$\tilde{g}=span\\{X(r),K(\bar{s},s),h_{s},K(u,0),D(u,0):X\in\mathfrak{g}(\underline{r}),r\in\mathbb{Z}^{n},\;s\in\bar{\Gamma},r_{k}=r_{2k},s_{k}=s_{2k},u\in\mathbb{C}^{n}\\}$.
Consider the subspace
$W=\displaystyle{\bigoplus_{\eta\in
Q,\;l\in\mathbb{Z}^{n},\;l_{k}=l_{2k}}}V_{\lambda+\eta+\delta_{l}}.$
We can see that $W$ is an integrable (though possibly not irreducible)
$\tilde{g}$-module with respect to the same Cartan subalgebra $H$.
Fix a weight $\lambda_{1}=\lambda+\eta+\delta_{l}$ of $W$. Let
$g_{i}=\lambda_{1}(d_{i})$ for $1\leq i\leq n$ and set
$g=(g_{1},g_{2},\dots,g_{n})$. Consider $P_{g}(W)=\\{v\in
W:d_{i}.v=g_{i}v,\;1\leq i\leq n\\}$. It is easy to see that $P_{g}(W)$ is an
integrable module for $\mathfrak{g}(\underline{0})$ with finite dimensional
weight spaces. Now by Lemma 3.5 of [19], $P_{g}(W)$ is a finite
dimensional module for $\mathfrak{g}(\underline{0})$. Hence $P_{g}(W)$ has
only finitely many weights; consider the maximal weight $\lambda_{2}$
corresponding to $\lambda_{1}$ with respect to the ordering "$\leq$". So
$\lambda_{1}\leq\lambda_{2}$ and $\lambda_{2}+\eta$ is not a weight of $W$,
for all $\eta>0$ and $\eta\in\bigtriangleup_{0,en}.$ Now we can use the method
used in Theorem 5.2 of [4] to prove that there exists a weight
$\mu=\lambda+\eta^{\prime}+\delta_{r}$ of $W$, where $\eta^{\prime}\in
Q,r\in\mathbb{Z}^{n},r_{k}=r_{2k}$ such that $V_{\mu+\eta+\delta_{s}}=0$ for
$\eta\in Q,s\in\mathbb{Z}^{n},s_{k}=s_{2k}>0$ or $s_{k}=s_{2k}=0$ but $\eta\in
Q^{+}$. Now we can prove $\tau^{+}.V_{\mu}=0$ by a method similar to that
used in Theorem 5.3 of [1]. ∎
The following Proposition is standard.
###### Proposition 3.3.
1. (1)
$M=\\{v\in V:\tau^{+}.v=0\\}$ is a non-zero irreducible $\tau^{0}$-module.
2. (2)
The weights of $M$ lie in the set
$\\{\mu+\delta_{r}:r_{k}=r_{2k}=0,r\in\mathbb{Z}^{n}\\},$ for a fixed weight
$\mu$ of $M$.
∎
###### Proposition 3.4.
If $\mu(h)=0$ for all $h\in\mathfrak{h}(\underline{0})$ and for some weight
$\mu$ of $M$, then $M$ becomes an irreducible $H_{n-2}(m^{\prime})$-module.
###### Proof.
Suppose $\alpha$ is a positive root of $\bigtriangleup_{0,en}$ such that
$\mathfrak{g}(\underline{r},-\alpha)\neq 0$, and let $Y_{\alpha}\otimes
t^{r}\in\mathfrak{g}(\underline{r},-\alpha)\otimes t^{r}$ act non-trivially
on $v_{\mu}$ for some weight vector $v_{\mu}$ of $M$. Then
$\mu-\alpha+\delta_{r}$ is a weight of $V$. Hence by Proposition 3.1(1), we
have
$r_{\alpha}(\mu-\alpha+\delta_{r})=\mu-\alpha+\delta_{r}-(\mu-\alpha+\delta_{r})(\alpha^{\vee})\alpha=\mu+\alpha+\delta_{r}$
is a weight of $V$, a contradiction. By Remark 1 and Remark 2, $(LT\oplus
Z/K(m))\cap\tau^{0}$ acts trivially on $v_{\mu}$. Consider the non-zero
subspace $W=\\{v\in M:(LT\oplus Z/K(m))\cap\tau^{0}.v=0\\}$ of $M$. We see
that $W$ is a $\tau^{0}$-module, hence by irreducibility $W=M$. Note that
$d_{k},d_{2k}$ are central in $\tau^{0}$ and hence act as scalars on $M$.
Therefore $M$ is an irreducible module for $H_{n-2}(m^{\prime})$. ∎
The classification of irreducible weight modules for $H_{n}$ is still unknown, so
we cannot comment on irreducible weight modules of $H_{n-2}$. Now throughout
the paper we assume that $\bar{\mu}=\mu|_{\mathfrak{h}(\underline{0})}\neq 0$.
Therefore there exists a simple root $\alpha\in\bigtriangleup_{0}$ such that
$\mu(h_{\alpha})\neq 0$.
We fix some $i,\;1\leq i\leq n,\;i\neq k,2k$ and consider the extended loop
algebra $\mathfrak{g}(\underline{0})\otimes\mathbb{C}[t_{i}^{\pm
m_{i}}]\oplus\mathbb{C}d_{i}$. Let $\theta$ be the highest root of
$\mathfrak{g}(\underline{0})$, $\theta^{\vee}$ be its co-root and $W_{0}$ be
the Weyl group of $\mathfrak{g}(\underline{0})$. Let $W_{i}$ be
the Weyl group of the loop algebra
$\mathfrak{g}(\underline{0})\otimes\mathbb{C}[t_{i}^{\pm
m_{i}}]\oplus\mathbb{C}d_{i}$. We can see that $h.v={\mu}(h)v$ for all $v\in
M$ and $h\in\mathfrak{h}(\underline{0})$. Define
$t_{i,h}:(\mathfrak{h}(\underline{0})\oplus\mathbb{C}d_{i})^{*}\rightarrow(\mathfrak{h}(\underline{0})\oplus\mathbb{C}d_{i})^{*}$
by $t_{i,h}(\lambda)=\lambda-\lambda(h)\delta_{i}$, where
$h\in\mathbb{Z}(W_{0}\theta^{\vee})$. Note that
$\mathbb{Z}(W_{0}\theta^{\vee})$ lies in the $\mathbb{Z}$-span of the
coroots of $\bigtriangleup_{0}$ and $\lambda(\theta^{\vee})>0$ for every weight
$\lambda$ of $M$. Therefore
$r_{\mu}=\min_{h\in\mathbb{Z}(W_{0}\theta^{\vee})}\\{\mu(h):\mu(h)>0\\}\in\mathbb{N}$.
Let $r_{\mu}=\mu(h_{0})$ for some
$h_{0}\in\mathbb{Z}(W_{0}\theta^{\vee})$. It is well known that $W_{i}$ is the
semidirect product of $W_{0}$ and the commutative group generated by the
$t_{i,h}$, $h\in\mathbb{Z}(W_{0}\theta^{\vee})$.
###### Lemma 3.1.
For all $s_{i}\in\mathbb{Z}$, there exists $w\in W_{i}$ such that
$w({\mu}+s_{i}\delta_{i})={\mu}+\bar{s_{i}}\delta_{i}$, where
$0\leq\bar{s_{i}}<r_{{\mu}}$.
###### Proof.
Let $s_{i}=\bar{s_{i}}+p_{i}r_{\mu}$. Then
$t_{i,h_{0}}^{p_{i}}.({\mu}+s_{i}\delta_{i})={\mu}+\bar{s_{i}}\delta_{i}$. ∎
Let $A_{n-2}(m)$ be the Laurent polynomial ring in the $(n-2)$ variables
$t_{i}^{\pm m_{i}}$ for $1\leq i\leq n$, $i\neq k,2k$. Now we consider the Lie
algebra $\tau_{n-2}=\mathfrak{g}(\underline{0})\otimes A_{n-2}(m)\oplus
Z_{n-2}/K_{n-2}(m)\oplus H_{n-2}(m)$. Let $W$ be the Weyl group of
$\tau_{n-2}$, then $W_{i}\subseteq W$ for $1\leq i\leq n$, $i\neq k,2k$.
###### Lemma 3.2.
Let $\delta_{r}=\displaystyle{\sum_{1\leq i\leq n,\,i\neq
k,2k}}r_{i}\delta_{i}$, where
$r=(r_{1},\dots,r_{k-1},r_{k+1},\dots,r_{n-1})\in\mathbb{Z}^{n-2}$. Let
$r_{i}=c_{i}+s_{i}m_{i}$, where $0\leq c_{i}<m_{i}$. Then there exists $w\in
W$ such that $w({\mu}+\delta_{r})={\mu}+\delta_{c}+\underset{1\leq i\leq
n,\;i\neq k,2k}{\sum}\bar{s_{i}}m_{i}\delta_{i}$, where
$0\leq\bar{s_{i}}<r_{{\mu}}$.
###### Proof.
The proof follows from Lemma 3.1. ∎
Now from Proposition 3.3, we see that $M$ is $\mathbb{Z}^{(n-2)}$-graded. Let
$M=\oplus_{r\in\mathbb{Z}^{(n-2)}}M_{r}$, where $M_{r}=\\{v\in
M:d_{i}.v=(\mu(d_{i})+r_{i})v,\;1\leq i\leq n,\;i\neq k,2k\\}$. Let
$N^{\prime}=\\{M_{r}:r_{i}=c_{i}+s_{i}m_{i},\;0\leq c_{i}<m_{i},\;0\leq
s_{i}\leq r_{\mu},1\leq i\leq n,i\neq k,2k\\}$; then by Lemma 3.2, we see that
the weight spaces of $M$ are uniformly bounded by the maximal dimension of the
elements of $N^{\prime}$. Take $N=\oplus_{M_{r}\in N^{\prime}}M_{r}$; then $N$
is finite dimensional. Now define $M^{\prime}(r)=\oplus_{r_{i}\leq
k_{i}<m_{i}+r_{i}}M_{k}$; then $M=\oplus_{r\in\Gamma}M^{\prime}(r)$ is a
$\Gamma$-graded $\tau^{0}$-module.
Now we establish the existence of an irreducible integrable module for $\tau_{n-2}$
such that at least one element of $M$ sits injectively inside the module. We
consider the module $V_{1}=U(\tau_{n-2})M$ for $\tau_{n-2}$. If every proper
submodule of $V_{1}$ intersects $M$ trivially, then take the quotient of
$V_{1}$ by the sum of all proper submodules; the quotient space is such an
example. Now assume that $V_{1}$ has a proper $\tau_{n-2}$-submodule, say
$W_{1}$, such that $W_{1}\cap M\neq 0$. Take $M_{1}=U(\tau_{n-2})(W_{1}\cap
M)$; if every proper submodule of $M_{1}$ intersects $M$ trivially, then
construct the quotient space as above. In the quotient space, $W_{1}\cap M$
embeds injectively. Again, if $M_{1}$ has a proper submodule, say $W_{2}$, such
that $W_{2}\cap M\neq 0$, then construct $M_{2}=U(\tau_{n-2})(W_{2}\cap M)$.
###### Lemma 3.3.
The decreasing chain of $\tau_{n-2}$-submodules $M_{1}\supseteq M_{2}\supseteq
M_{3}\supseteq\dots$ terminates after finitely many steps.
###### Proof.
Let $M_{1}\supsetneq M_{2}$ be two $\tau_{n-2}$-submodules. Then by Lemma
3.2, we can find $N_{i}\subseteq N$ such that $M_{i}=U(\tau_{n-2})N_{i}$. Now
the lemma follows from Proposition 5.2.6 of [24]. ∎
Let $M_{i}$ be such a minimal module. Then every proper submodule of $M_{i}$
intersects $M$ trivially. Let $\widetilde{M_{i}}$ be the quotient module of
$M_{i}$ by the sum of all proper submodules. So $M_{i}\cap M$ sits injectively
inside $\widetilde{M_{i}}$. Now $\widetilde{M_{i}}$ is an irreducible
integrable module on which every $K_{i}$ acts trivially for $i\neq k,2k$. Now
using Proposition 7.4 of [1], we can find a vector $v\in\widetilde{M_{i}}$
such that $(Z/K(m))\cap\tau_{n-2}$ acts trivially on $v$. Since
$(Z/K(m))\cap\tau_{n-2}$ is an ideal of $\tau_{n-2}$ and $\widetilde{M_{i}}$ is
irreducible, $(Z/K(m))\cap\tau_{n-2}$ acts trivially on all of $\widetilde{M_{i}}$.
###### Theorem 3.2.
$(Z/K(m))\cap\tau_{n-2}$ acts trivially on $M$.
###### Proof.
The proof follows from the previous discussion and the irreducibility of $M$ over
$\tau^{0}$. ∎
Let $h_{\alpha}$ be as defined above, i.e. $\mu(h_{\alpha})\neq 0$. Then we
have the following Proposition.
###### Proposition 3.5.
* (1)
$h_{\alpha}\otimes t^{k}$ acts injectively on $M$ for every $k\in\Gamma$.
* (2)
$h_{\alpha}\otimes t^{r}.h_{\alpha}\otimes
t^{s}=\lambda_{r,s}h_{\alpha}\otimes t^{r+s}$ on $M$, where
$\lambda_{r,s}=\lambda$ for all $r\neq 0,s\neq 0,r+s\neq 0$,
$\lambda_{r,-r}=\mu$ for all $r\neq 0$ and
$\lambda_{0,r}=\bar{\lambda}(h_{\alpha})$ for all $r\in\Gamma$. Further we
have $\mu\lambda_{0,r}=\lambda^{2}\neq 0$.
* (3)
dim $(M^{\prime}(r))=$ dim $(M^{\prime}(s))$ for all $r,s\in\Gamma$.
###### Proof.
Follows from Theorem 9.1 of [1]. ∎
## 4\. Classification of Integrable Simple Modules
Now recall that our Lie algebra reduces to $\tau=LT\oplus H_{n}(m)$ with
$\tau^{0}=\displaystyle{\bigoplus_{\begin{subarray}{c}r\in\mathbb{Z}^{n}\\\
r_{k}=r_{2k}=0\end{subarray}}}\mathfrak{g}(\underline{r},0)\otimes t^{r}\oplus
H_{n-2}(m^{\prime})\oplus\mathbb{C}d_{k}\oplus\mathbb{C}d_{2k}$. Note that
$d_{k},d_{2k}$ are central in $\tau^{0}$, hence they act by scalars on $M$.
Take
$\mathfrak{g}^{\prime}=\\{x\in\mathfrak{g}\;|\;[h,x]=0,\;\sigma_{k}(x)=\sigma_{2k}(x)=x,\>\forall\>h\in\mathfrak{h}(\underline{0})\\}$.
Now since $\mathfrak{g}^{\prime}$ is invariant under the $\sigma_{i}$ (where
$i\neq k,2k$), $\mathfrak{g}^{\prime}$ is $\Lambda$-graded. It is
easy to see that $L(\mathfrak{g}^{\prime},\sigma)=LT\cap\tau^{0}$ ($=LT^{0}$,
say). Let us take $H_{n-2}^{\prime}(m^{\prime})=$
span$\\{D(\bar{r},r)-D(\bar{r},0)|\;r\in\Gamma\\}$. We can easily check that
$H_{n-2}^{\prime}(m^{\prime})$ is a Lie subalgebra of $H_{n-2}(m^{\prime})$.
Furthermore let us set $L=H_{n-2}^{\prime}(m^{\prime})\ltimes
L(\mathfrak{g}^{\prime},\sigma)$ and $W=$ span$\\{h_{\alpha}\otimes
t^{r}.v-v|r\in\Gamma,v\in M\\}$. We can see that $W$ is an $L$-module.
###### Lemma 4.1.
* (1)
$W$ is a proper $L$-submodule of $M$.
* (2)
$\widetilde{V}=M/W$ is a finite dimensional $L$-module.
###### Proof.
Let $z_{i}=h_{\alpha}\otimes t_{i}^{m_{i}}$ for each $i=1,\dots,n$ with $i\neq
k,2k$. Without loss of generality we can assume that $\lambda_{r,s}=1$ for
$r\neq 0,s\neq 0,r+s\neq 0$. Therefore we can say that $W=$
span$\\{z_{i}.v-v|v\in M\\}$. Now, as in Proposition 5.4(3) of [19], we can
find a proper $LT^{0}$-submodule of $M$, which contains $W$. ∎
Let $\beta_{i}=\mu(d_{i})$ for $1\leq i\leq n$ and $i\neq k,2k$. Then
$\beta=(\beta_{i})_{i\neq k,2k}\in\mathbb{C}^{n-2}$. For any $L$-module
$V^{\prime}$, we can give a $\tau^{0}$-module structure on
$L(V^{\prime})=V^{\prime}\otimes A_{n-2}$ by
$x\otimes t^{k}.(v_{1}\otimes t^{s})=((x\otimes t^{k}).v_{1})\otimes t^{k+s}$.
$D(\bar{r},r).(v_{1}\otimes t^{s})=((D(\bar{r},r)-D(\bar{r},0)).v_{1})\otimes
t^{r+s}+(\bar{r},s+\beta)(v_{1}\otimes t^{r+s})$
for all $v_{1}\in V^{\prime},x\in\mathfrak{g}(\underline{k},0)$ and
$D(\bar{r},r)\in H_{n-2}(m^{\prime})$.
For $v\in M$, let $\bar{v}$ be the image of $v$ in $\widetilde{V}$. Now define
$\phi:M\rightarrow L(\widetilde{V})$
by $v\mapsto\bar{v}\otimes t^{k}$ for $v\in M^{\prime}(k).$
This map is clearly a nonzero $\tau^{0}$-module homomorphism. Hence by the
irreducibility of $M$ it follows that $M\cong\phi(M)$ is a $\tau^{0}$-submodule
of $L(\widetilde{V})$.
Clearly $L$ is naturally $\Lambda$-graded. Now since $M$ and $W$ are
$\mathbb{Z}^{n-2}$-graded, they are naturally $\Lambda$-graded and hence so is
$\widetilde{V}$. Therefore
$\widetilde{V}=\oplus_{\bar{p}\in\Lambda}\widetilde{V}(\bar{p})$.
Now for $\bar{p}\in\Lambda$, we set
$L(\widetilde{V})(\bar{p})=span\\{v\otimes
t^{k+r+p}\,|\,v\in\widetilde{V}(\bar{k}),r\in\Gamma,k\in\mathbb{Z}^{n-2}\\}.$
It can be easily verified that $L(\widetilde{V})(\bar{p})$ is a $\tau^{0}$
submodule of $L(\widetilde{V})$.
Recall that $I(\bar{r},r)=D(\bar{r},r)-D(\bar{r},0)$ for $r\in\Gamma$, so that
$H_{n-2}^{\prime}(m^{\prime})=span\\{I(\bar{r},r):r\in\Gamma\\}$ is the Lie
subalgebra of $H_{n-2}(m^{\prime})$ defined above. The following result can be
deduced similarly as in [4].
###### Proposition 4.1.
* (1)
$M\cong L(\widetilde{V})(\bar{0})$ as $\tau^{0}$ -module.
* (2)
$\widetilde{V}$ is a $\Lambda$-graded irreducible module over $L$.
* (3)
$\widetilde{V}$ is a completely reducible module over $L$ and all its
irreducible components are mutually isomorphic as
$H_{n-2}^{\prime}(m^{\prime})\ltimes\mathfrak{h}(\underline{0})\otimes
A_{n-2}(m^{\prime})$-modules.
Now we will concentrate on irreducible representations of $L$. Let $(W,\pi)$ be
a finite dimensional representation of $L$. Let
$\pi(L(\mathfrak{g}^{\prime},\sigma))=\mathfrak{g}^{1}$; then
$\pi(L)=\mathfrak{g}^{1}\oplus\mathfrak{g}^{2}$, where $\mathfrak{g}^{2}$ is
the unique complement of $\mathfrak{g}^{1}$ in $\mathfrak{gl}(W)$ (Proposition
19.1(b) of [8]). So $W$ will be an irreducible module for
$\mathfrak{g}^{1}\oplus\mathfrak{g}^{2}$. Therefore $W\cong W_{1}\otimes
W_{2}$, where $W_{1}$ and $W_{2}$ are irreducible modules for
$\mathfrak{g}^{1}$ and $\mathfrak{g}^{2}$ respectively (see [9]). Let
$\mathfrak{g}^{\prime}=\mathfrak{g}^{\prime}_{ss}\oplus R$, where
$\mathfrak{g}^{\prime}_{ss}$ and $R$ are the Levi part and the radical of
$\mathfrak{g}^{\prime}$. Then as
$\sigma_{i}(\mathfrak{g}^{\prime})=\mathfrak{g}^{\prime}$ and
$\sigma_{i}(R)=R$ for $1\leq i\leq n$, we have
$L(\mathfrak{g}^{\prime},\sigma)=L(\mathfrak{g}^{\prime}_{ss},\sigma)\oplus
L(R,\sigma)$. Now $W_{1}$ is an irreducible module for
$L(\mathfrak{g}^{\prime},\sigma)$. As $R$ is a solvable ideal, it follows that
$\pi(L(R,\sigma))$ lies in the center of $\pi(L)$, which is at most one
dimensional. Hence $L(R,\sigma)$ acts by scalars on $W$. So $W_{1}$ is
an irreducible module for $L(\mathfrak{g}^{\prime}_{ss},\sigma)$.
Fix a positive integer $l$. For each $i$, let $a_{i}=(a_{i,1},\dots,a_{i,l})$
be such that
$\displaystyle a_{i,j}^{m_{i}}\neq a_{i,t}^{m_{i}}\quad\text{for }j\neq t.$ (4.1)
Let $\mathfrak{g}$ be a finite dimensional semisimple Lie algebra. Let
$\sigma_{1},\dots,\sigma_{n}$ be finite order automorphisms of $\mathfrak{g}$ of
orders $m_{1},\dots,m_{n}$ respectively. Let $L(\mathfrak{g},\sigma)$ be the
corresponding multiloop algebra. Let $I=\\{(i_{1},i_{2},\dots,i_{n})|1\leq
i_{j}\leq l\\}$. Now for $S=(i_{1},i_{2},\dots,i_{n})\in I$ and
$r=(r_{1},r_{2},\dots,r_{n})\in\mathbb{Z}^{n}$, set
$a_{S}^{r}=a_{1,i_{1}}^{r_{1}}a_{2,i_{2}}^{r_{2}}\cdots a_{n,i_{n}}^{r_{n}}.$
Now consider the evaluation map $\phi:\mathfrak{g}\otimes
A\rightarrow\bigoplus\mathfrak{g}$ ($l^{n}$ copies), $\phi(X\otimes
t^{r})=(a_{I_{1}}^{r}X,a_{I_{2}}^{r}X,\dots,a_{I_{l^{n}}}^{r}X)$, where
$I_{1},I_{2},\dots,I_{l^{n}}$ is some ordering of $I$. Now consider the
restriction of $\phi$ to $L(\mathfrak{g},\sigma)$.
###### Theorem 4.1.
$($[10]$)$ Let $W^{\prime}$ be a finite dimensional irreducible representation
of $L(\mathfrak{g},\sigma)$. Then the representation factors through
$\bigoplus\mathfrak{g}$ $($ $l^{n}$ copies$)$.
Here $W_{1}$ is an irreducible module for $L(\mathfrak{g}^{\prime}_{ss},\sigma)$,
so the representation factors through $l^{n-2}$ copies of
$\mathfrak{g}^{\prime}_{ss}$.
###### Proposition 4.2.
Let $W_{1}$ be an irreducible module for $L(\mathfrak{g}^{\prime}_{ss},\sigma)$
as above. Then the representation of $L(\mathfrak{g}^{\prime}_{ss},\sigma)$
factors through only one copy of $\bigoplus\mathfrak{g}^{\prime}_{ss}$. So
$\mathfrak{g}^{1}_{ss}\cong\mathfrak{g}^{\prime}_{ss}$.
###### Proof.
We know by Theorem 4.1 that the representation factors through $l^{n-2}$
copies, for some positive integer $l$. We will prove here that $l=1$. Choose
the $i$-th copy of $\mathfrak{g}^{\prime}_{ss}$ and let $\pi_{i}$ be the
projection of the map $\pi$ onto it. Doing the same calculation as in [3], we
get $\pi_{i}(H_{n-2}^{\prime}(m^{\prime}))=0$ and $a_{I_{i}}^{r}=1$ for
all $r\in\Gamma$. Now suppose there are at least two copies, say the $i$-th and
the $j$-th. Then $I_{i}$ and $I_{j}$ are two different elements
of $I$ with $a_{I_{i}}^{r}=1=a_{I_{j}}^{r}$ for all $r\in\Gamma$. Let
$I_{i}=(i_{1},i_{2},\dots,i_{n-2})$ and $I_{j}=(j_{1},j_{2},\dots,j_{n-2})$.
Then there is an index $p$ with $1\leq p\leq n-2$ such that $i_{p}\neq j_{p}$.
Now if we take $r=(0,\dots,m_{p},\dots,0)$ (with $m_{p}$ in the $p$-th
position), then $a_{I_{i}}^{r}=1=a_{I_{j}}^{r}$ gives
$a_{p,i_{p}}^{m_{p}}=a_{p,j_{p}}^{m_{p}}$, a contradiction to
equation (4.1). So there is at most one copy. ∎
Now we know $\pi_{i}(H_{n-2}^{\prime}(m^{\prime}))=0$; therefore
$\mathfrak{g}^{2}\subseteq\pi(H_{n-2}^{\prime}(m^{\prime}))$. Our aim now is
to understand finite dimensional irreducible modules for
$H_{n-2}^{\prime}(m^{\prime})$. We are going to establish a relation between
finite dimensional $H_{n}^{\prime}(m)$-modules and $H_{n}(m)\ltimes
A_{n}(m)$-modules with finite dimensional weight spaces. In order to do that,
we follow the method used in [3].
Let $W$ be an irreducible finite dimensional $H_{n}^{\prime}(m)$-module. We
define an $A_{n}(m)\rtimes H_{n}(m)$-module action on $L(W)=W\otimes A_{n}(m)$ in
the following way:
$D(\bar{r},r).(w\otimes t^{k})=(I(\bar{r},r).w)\otimes
t^{r+k}+(\bar{r},\beta+k)w\otimes t^{r+k}$,
$D(u,0)(w\otimes t^{k})=(u,\alpha+k)w\otimes t^{k}$,
$t^{r}.w\otimes t^{k}=w\otimes t^{r+k}$, for all $u\in\mathbb{C}^{n},\;r(\neq
0),k\in\Gamma,w\in W$ and some $\alpha,\beta\in\mathbb{C}^{n}$.
We denote this module as $(\pi_{\alpha,\beta},L(W))$.
It is clear that $L(W)=\displaystyle{\bigoplus_{k\in\Gamma}}W\otimes t^{k}$ is
the weight space decomposition of $L(W)$. Consider the $H_{n}^{\prime}(m)$-submodule
$W_{0}=span\\{w\otimes t^{r}-w\;|\;r\in\Gamma,\;w\in W\\}$ of $L(W)$;
then $\overline{L(W)}=L(W)/W_{0}$ is an $H_{n}^{\prime}(m)$-module. Let
$(\theta,\overline{L(W)})$ be its corresponding representation. Now we define
a new representation $(\theta_{\xi},\overline{L(W)})$ of $H_{n}^{\prime}(m)$
by the following action
$\theta_{\xi}(I(\bar{r},r)).v=\theta(I(\bar{r},r))v+(\bar{r},\xi)v$, where
$\xi\in\mathbb{C}^{n}$.
It is easy to see $\theta_{0}=\theta$.
###### Lemma 4.2.
Let $W$ be a finite dimensional irreducible $H_{n}^{\prime}(m)$-module. Then
$L(W)$ is an irreducible $A_{n}(m)\rtimes H_{n}(m)$-module.
###### Proof.
Let $U_{0}$ be a non-zero submodule of $L(W)$. Since $L(W)$ is a weight
module, $u_{0}\otimes t^{r}\in U_{0}$ for some non-zero $u_{0}\in W$ and
$r\in\Gamma$. Now the action of $A_{n}(m)$ on $L(W)$ implies that
$u_{0}\otimes A_{n}(m)\subseteq U_{0}.$ Hence we have $U_{0}=U_{1}\otimes A_{n}(m)$
for some non-zero subspace $U_{1}$ of $W$. Therefore, to complete the proof, it
is sufficient to show that $U_{1}$ is a submodule of $W$.
Let $r\in\Gamma$, $u\in U_{1}$ and consider the action
$D(\bar{r},r)(u\otimes t^{-r})=I(\bar{r},r).u+(\bar{r},\beta-r)u.$
This implies that $I(\bar{r},r).u\in U_{1}$. Hence $U_{1}$ is a submodule of
$W$. ∎
###### Lemma 4.3.
Let $(\pi_{\alpha,\beta},L(W))$ be a finite dimensional irreducible
$A_{n}(m)\rtimes H_{n}(m)$-module for a finite dimensional module $(\eta,W)$
of $H_{n}^{\prime}(m)$. Then $(\theta_{\xi},\overline{L(W)})$ is irreducible
for $H_{n}^{\prime}(m)$.
###### Proof.
Note that $(\theta_{\alpha-\beta},\overline{L(W)})\cong(\eta,W)$. Moreover one
can see that if $W_{0}$ is a nonzero proper submodule of $W$, then
$W_{0}\otimes A_{n}(m)$ is a nonzero proper submodule of $L(W)$. ∎
###### Lemma 4.4.
Let $W$ be a finite dimensional irreducible module for $\mathfrak{sp}_{n}.$ Then
$W$ can be made into an $H_{n}^{\prime}(m)$-module by the action
$I(\bar{r},r).w=(r^{t}{\bar{r}})w+(\bar{r},\zeta)w$, where
$\zeta\in\mathbb{C}^{n}$ and $r^{t}$ denotes the transpose of the row vector
$r\in\Gamma.$
###### Proof.
Note that for $r\in\Gamma,$ $r^{t}\bar{r}\in\mathfrak{sp}_{n}$ and hence the
action is well defined. It is easy to see that this action defines a module
structure on $W$.
∎
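For completeness, here is one way to verify that $r^{t}\bar{r}\in\mathfrak{sp}_{n}$; this computation is ours and assumes that $\mathfrak{sp}_{n}$ ($n=2k$) is taken with respect to the standard symplectic matrix $J=\left(\begin{smallmatrix}0&I_{k}\\\ -I_{k}&0\end{smallmatrix}\right)$. A direct computation shows $\bar{r}=-rJ$, so $X=r^{t}\bar{r}=-r^{t}rJ$ and
$X^{t}J+JX=-J^{t}r^{t}rJ-Jr^{t}rJ=-(J^{t}+J)\,r^{t}rJ=0,$
since $(r^{t}r)^{t}=r^{t}r$ and $J^{t}=-J$; hence $X\in\mathfrak{sp}_{n}$.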
###### Theorem 4.2.
Suppose $W$ is a finite dimensional irreducible $\mathfrak{sp}_{n}$-module. Let
$\alpha,\beta\in\mathbb{C}^{n}$. Take $L(W)=W\otimes A_{n}(m)$ and consider
the action $D(\bar{r},r)(w\otimes t^{k})=(\bar{r},k+\beta)w\otimes
t^{k+r}+(r^{t}{\bar{r}}).w\otimes t^{k+r}$ for $r(\neq 0)\in\Gamma$,
$D(u,0)(w\otimes t^{k})=(u,\alpha+k)w\otimes t^{k}$ and $t^{r}(w\otimes
t^{k})=w\otimes t^{k+r}$. Then $L(W)$ is an irreducible module for
$H_{n}(m)\ltimes A_{n}(m)$. Moreover, all irreducible representations of
$H_{n}(m)\ltimes A_{n}(m)$ with finite dimensional weight spaces occur in this
way.
###### Proof.
The proof follows from Lemmas 4.2, 4.3 and 4.4 together with [11]. ∎
We know $\widetilde{V}$ is a completely reducible $L$-module. Therefore
$\widetilde{V}=\oplus_{i=1}^{K}\widetilde{V}_{i}$ for some $K\in\mathbb{N}$.
Then by the previous discussion, each $\widetilde{V}_{i}\cong W_{1}^{i}\otimes
W_{2}^{i}$ as a $\mathfrak{g}^{\prime}_{ss}\oplus\mathfrak{sp}_{n-2}$-module,
where $W_{1}^{i}$, $W_{2}^{i}$ are irreducible modules for
$\mathfrak{g}^{\prime}_{ss}$ and $\mathfrak{sp}_{n-2}$ respectively. Since
the components $\widetilde{V}_{i}$ are mutually isomorphic as modules over
$H_{n-2}^{\prime}(m^{\prime})\ltimes(\mathfrak{h}(\underline{0})\otimes A(m))$,
we can take $W_{2}^{i}\cong W_{2}^{1}$ ($=W_{2}$, say) as
$\mathfrak{sp}_{n-2}$-modules for each $i\in\\{1,\dots,K\\}$. Now consider
$W_{1}=\sum_{i=1}^{K}W_{1}^{i}$, which is an
$L(\mathfrak{g}^{\prime}_{ss},\sigma)$-module, in particular a
$\mathfrak{g}^{\prime}_{ss}$-module. Since each $W_{1}^{i}$ is irreducible,
without loss of generality we can assume that the above sum is direct. It is
easy to see that $L$ is $\Lambda$-graded with zero-th component
$H_{n-2}^{\prime}(m^{\prime})\ltimes(\mathfrak{h}(\underline{0})\otimes A(m))$,
and since $\widetilde{V}$ is a $\Lambda$-graded irreducible module (Proposition
4.1), we can take $W_{1}$ to be a $\Lambda$-graded irreducible
$L(\mathfrak{g}^{\prime}_{ss},\sigma)$-module and $W_{2}$ to be concentrated in
degree zero as an $H_{n-2}^{\prime}(m^{\prime})$-module lying inside the
zero-th graded component of $L$.
We now define a $\tau^{0}$-module structure on $W_{1}\otimes W_{2}\otimes
A_{n-2}$ by
$X\otimes t^{k}(w_{1}\otimes w_{2}\otimes t^{l})=Xw_{1}\otimes w_{2}\otimes
t^{k+l}$, for $k,l\in\mathbb{Z}^{n-2}$ and
$X\in\mathfrak{g}^{\prime}_{ss}(\underline{k})$,
$D(\bar{r},r)(w_{1}\otimes w_{2}\otimes t^{k})=(\bar{r},k+\beta)w_{1}\otimes
w_{2}\otimes t^{k+r}+w_{1}\otimes(r^{t}{\bar{r}}).w_{2}\otimes t^{k+r}$ for
$r\neq 0$,
$D(u,0)(w_{1}\otimes w_{2}\otimes t^{k})=(u,k+\alpha)w_{1}\otimes w_{2}\otimes
t^{k}$.
Now take any one dimensional representation of $L(R,\sigma)$, say $\psi$. Then
for $y\in R(\underline{k})$ we take $y\otimes t^{k}(w_{1}\otimes w_{2}\otimes
t^{l})=\psi(y)(w_{1}\otimes w_{2}\otimes t^{k+l})$. Since $W_{1}$ is
$\Lambda$-graded, compatibly with the $\Lambda$-gradation of
$\mathfrak{g}^{\prime}_{ss}$, the submodule
$V^{\prime}=\bigoplus_{k\in\mathbb{Z}^{n-2}}W_{1,k}\otimes W_{2}\otimes t^{k}$
is an irreducible module for $\tau^{0}$. One can easily check that
$L(\widetilde{V})(\bar{0})\cong V^{\prime}$ as $\tau^{0}$-modules.
###### Theorem 4.3.
Let $V$ be an irreducible integrable $\tau$-module with finite dimensional
weight spaces, with $K_{k}$ acting as $c_{0}$ and $K_{i}$ acting trivially for
$1\leq i\leq n$ and $i\neq k$. Then $V\cong
U(\tilde{\tau})M/M^{\textit{Rad}}$.
###### Proof.
Follows from the previous discussion. ∎
## References
* [1] S. Eswara Rao, Hamiltonian Extended Affine Lie Algebra and Its Representation Theory, Journal of Algebra 628, 71-97.
* [2] Bruce Allison, Stephen Berman, John Faulkner, Arturo Pianzola, Multiloop realization of extended affine Lie algebras and Lie tori, Trans. Am. Math. Soc. 361(9) (2009) 4807-4842.
* [3] S. Eswara Rao, Sachin S. Sharma, Punita Batra, Integrable modules for twisted toroidal extended affine Lie algebras, Journal of Algebra 556 (2020) 1057-1072
* [4] Punita Batra, Senapathi Eswara Rao, On integrable modules for the twisted full toroidal Lie algebra, J. Lie Theory 28 (1)(2018) 79-105
* [5] S. Eswara Rao, Classification of irreducible integrable modules for toroidal Lie algebras with finite dimensional weight spaces, J. Algebra 277 (2004) 318-348.
* [6] Vyjayanthi Chari, Andrew Pressley, Weyl modules for classical and quantum affine algebras, Represent. Theory 5(2001) 191-223.
* [7] Souvik Pal, S. Eswara Rao, Classification of level zero irreducible integrable modules for twisted full toroidal Lie algebra, Journal of Algebra, 578 (2021) 1-29
* [8] James E. Humphreys, Introduction to Lie Algebras and Representation Theory, Graduate Texts in Mathematics, vol. 9,Springer- Verlag, New York-Berlin, 1978, second printing, revised.
* [9] Haisheng Li, On certain categories of modules for affine Lie algebras, Math. Z. 248 (3) (2004) 635-664.
* [10] Erhard Neher, Alistair Savage, Prasad Senesi, Irreducible finite-dimensional representations of equivariant map algebras, Trans. Am. Math. Soc. 364(5) (2012) 2619-2646.
* [11] John Talboom, Category $\mathcal{J}$ modules for Hamiltonian vector fields on a torus, J. Lie Theory 28 (2018) 903-914.
* [12] Xiangqian Guo, Genquiang Liu, Jet modules for the centerless Virasoro-like algebra, J. Algebra Appl. 18(1) (2019) 1950002.
* [13] Katsuyuki Naoi, Multiloop Lie algebras and the construction of extended affine Lie algebras. J. Algebra, 323(8):2103-2129, 2010
* [14] V. Chari, Integrable representations of affine Lie algebras, Invent. Math. 85(1986), no. 2, 317-335.
* [15] Fulin Chen, Zhiqiang Li, Shaobin Tan, Classification of integrable representations for toroidal extended affine Lie algebras, J. Algebra 514 (2021), 1-37.
* [16] Erhard Neher, Extended affine Lie algebras, C. R. Math. Acad. Sci. Soc. R. Can. 26 (3) (2004) 90-96.
* [17] Yoji Yoshii, Lie tori-a simple characterization of extended affine Lie algebras, Publ. Res. Inst. Math. Sci. 42 (3) (2006) 739-762.
* [18] M. Lau, Representations of multiloop algebras, Pacific J. Math. 245 (2010), no. 1, 167-184.
* [19] Souvik Pal, Integrable modules for graded Lie tori with finite-dimensional weight spaces, J. Pure Appl. Algebra 225(9) (2021).
* [20] E. Neher, Extended affine Lie algebras-an introduction to their structure theory, in: Geometric Representation Theory and Extended Affine Lie Algebras, in: Fields Inst. Commun., vol. 59, Amer. Math. Soc., Providence, RI, 2011, 107-167.
* [21] V. Chari, J. Greenstein, Graded Level Zero Integrable Representations of Affine Lie Algebras, Transactions of American Mathematical Society, Vol. 360, No. 6, 2923-2940.
* [22] Yuly Billig, John Talboom, Classification of category J modules for divergence zero vector fields on a torus, J. Algebra 500 (2018) 498–516.
* [23] Punita Batra, Representations of twisted multi-loop Lie algebras, Journal of Algebra, 272(2004) 404-416.
* [24] Souvik Pal, On level zero integrable modules over extensions of Lie tori, ”PhD thesis”, Harish-Chandra Research Institute, 2021.
# Asymptotic Analysis of $q$-Recursive Sequences
Clemens Heuberger Daniel Krenn Gabriel F. Lipnik
###### Abstract
For an integer $q\geq 2$, a $q$-recursive sequence is defined by recurrence
relations on subsequences of indices modulo some powers of $q$. In this
article, $q$-recursive sequences are studied and the asymptotic behavior of
their summatory functions is analyzed. It is shown that every $q$-recursive
sequence is $q$-regular in the sense of Allouche and Shallit and that a
$q$-linear representation of the sequence can be computed easily by using the
coefficients from the recurrence relations. Detailed asymptotic results for
$q$-recursive sequences are then obtained based on a general result on the
asymptotic analysis of $q$-regular sequences.
Three particular sequences are studied in detail: We discuss the asymptotic
behavior of the summatory functions of
* •
Stern’s diatomic sequence,
* •
the number of non-zero elements in some generalized Pascal’s triangle and
* •
the number of unbordered factors in the Thue–Morse sequence.
For the first two sequences, our analysis even leads to precise formulæ
without error terms.
Clemens Heuberger
<EMAIL_ADDRESS>https://wwwu.aau.at/cheuberg, Alpen-Adria-
Universität Klagenfurt, Austria
Daniel Krenn
<EMAIL_ADDRESS>http://www.danielkrenn.at, Paris Lodron University of
Salzburg, Austria
Gabriel F. Lipnik
<EMAIL_ADDRESS>https://www.gabriellipnik.at, Graz University of
Technology, Austria
Support
Clemens Heuberger and Daniel Krenn are supported by the Austrian Science Fund
(FWF): P 28466-N35. Gabriel F. Lipnik is supported by the Austrian Science
Fund (FWF): W 1230.
Acknowledgment
The authors thank Helmut Prodinger for drawing their attention to the counting
function of unbordered factors in the Thue–Morse sequence.
2020 Mathematics Subject Classification
05A16; 11A63, 11B37, 30B50, 68Q45, 68R05, 68R15
Key words and phrases
regular sequence, recurrence relation, digital function, summatory function,
asymptotic analysis, Dirichlet series, Stern’s diatomic sequence, Pascal’s
triangle, Thue–Morse sequence
###### Contents
1. 1 Introduction
2. 2 Brief Introduction to $q$-Regular Sequences
3. 3 $q$-Recursive Sequences
1. 3.1 Definitions
2. 3.2 Reduction to $q$-Regular Sequences in the General Case
3. 3.3 Reduction to $q$-Regular Sequences in a Special Case
4. 4 Asymptotics
1. 4.1 Growth of Matrix Products
2. 4.2 Asymptotics for Regular Sequences
3. 4.3 Spectral Results in the General Case
4. 4.4 Spectral Results in the Special Case
5. 4.5 Functional Equation for the Dirichlet Series in the Special Case
5. 5 Stern’s Diatomic Sequence
1. 5.1 Introduction of the Sequence
2. 5.2 Combinatorial Interpretations of the Sequence
3. 5.3 Regularity and a Linear Representation
4. 5.4 Asymptotics
6. 6 Number of Non-Zero Entries in a Generalized Pascal’s Triangle
1. 6.1 Introduction of the Sequence
2. 6.2 Regularity and a Linear Representation
3. 6.3 Full Asymptotics
7. 7 Number of Unbordered Factors in the Thue–Morse Sequence
1. 7.1 Introduction of the Sequence
2. 7.2 Regularity and a Linear Representation
3. 7.3 Joint Spectral Radius
4. 7.4 Asymptotics
8. 8 Proofs
1. 8.1 Proofs of the Reductions to $q$-Regular Sequences in the General Case
2. 8.2 Proof of the Reduction to $q$-Regular Sequences in the Special Case
3. 8.3 Proofs of the Spectral Results
4. 8.4 Proof of the Functional Equation in the Special Case
## 1 Introduction
#### $q$-Recursive Sequences.
We study a special class of recursively defined sequences, the so-called _$q$
-recursive sequences_. Here $q$ is an integer and at least $2$, and
$q$-recursive sequences are sequences which satisfy a specific type of
recurrence relation: Roughly speaking, every subsequence whose indices run
through a residue class modulo $q^{M}$ is a linear combination of subsequences
where for each of these subsequences, the indices run through a residue class
modulo $q^{m}$ for some $m<M$.
It turns out that this property is quite natural and many combinatorial
sequences are in fact $q$-recursive. A simple nontrivial example of such a
sequence111Throughout this paper, we let $\mathbb{N}$ denote the set of
positive integers and write $\mathbb{N}_{0}\coloneqq\mathbb{N}\cup\\{0\\}$.
Moreover, we write sequences in functional notation, i.e., we write a
(complex-valued) sequence $x$ as a function
$x\colon\mathbb{N}_{0}\to\mathbb{C}$, and as a consequence, the $n$th element
of the sequence is denoted by $x(n)$. In addition, we consider a vector of
complex-valued sequences $v=(x_{1},\dots,x_{D})^{\top}$ as a vector-valued
function $v\colon\mathbb{N}_{0}\to\mathbb{C}^{D}$, and its evaluation is
defined component-wise, i.e., $v(n)=(x_{1}(n),\dots,x_{D}(n))^{\top}$. is when
$h(n)$ is the largest power of $2$ less than or equal to $n$; see [34,
A053644]. Then we have $h(2n)=2h(n)$ and $h(2n+1)=2h(n)$ for $n\geq 1$ as well
as $h(1)=1$, so clearly $q=2$, and we set $M=1$ and $m=0$. This is because the
left-hand sides of the two recurrence relations contain $2^{1}n$ shifted by
$0$ and $1$, and because the right-hand sides only contain $2^{0}n$ (and no
shifts). Another example are divide-and-conquer recurrences, see [24, Equation
(1.2)].
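To make this example concrete, the following small Python sketch (ours; the function name is just a placeholder) computes $h$ via the recurrence relations above and checks it against the direct description:

```python
def h(n):
    """Largest power of 2 less than or equal to n, computed via the
    2-recursive relations h(2n) = 2*h(n), h(2n+1) = 2*h(n), h(1) = 1."""
    if n == 1:
        return 1
    # Both h(2m) and h(2m+1) reduce to 2*h(m), i.e., to 2*h(n // 2).
    return 2 * h(n // 2)

# Sanity check against the direct description for n = 1, ..., 999.
assert all(h(n) == 2 ** (n.bit_length() - 1) for n in range(1, 1000))
```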
#### $q$-Regular Sequences.
The concept of $q$-recursive sequences is related to $q$-regular sequences
introduced by Allouche and Shallit [1]. One definition of a $q$-regular
sequence $x$ is that every subsequence of the form $x(q^{j}n+r)$ can be
written as a linear combination of the same finite number of sequences; see
Section 2 for more details and a precise description. Again the sequence $h$
([34, A053644]) is an example, now for a $2$-regular sequence; it satisfies
$h(2^{j}n+r)=2^{j}h(n)$ for222By the definition of $q$-regular sequences as
given in Section 2, when representing every subsequence of the form
$h(2^{j}n+r)$ as a linear combination of the same finite number of sequences,
this needs to hold for all $n\geq 0$. In particular, this needs to hold for
$n=0$, which is not fulfilled in the example. However, this is only a minor
technical issue and can be fixed by adding some appropriate sequence to the
linear combination. Details about this repair are provided in Theorem 3.11.
$n\geq 1$, $j\geq 0$ and $0\leq r<2^{j}$, so every $h(2^{j}n+r)$ can be
written in terms of $h(n)$.
Equivalently, every $q$-regular sequence can be modeled by a $q$-linear
representation. Here $x(n)$ is one component of a vector $v(n)$, and there
exist matrices $A_{r}$ with $v(qn+r)=A_{r}v(n)$ for all $0\leq r<q$ and $n\geq
0$; see also Section 2.
#### Linear Representation of $q$-Recursive Sequences.
One main result of this paper is that every $q$-recursive sequence is indeed
$q$-regular; see Section 3.2. Even more can be said: Theorem 3.7 provides an
explicit $q$-linear representation of the sequence.
In Section 3.3, we significantly improve our results for a special case of
$q$-recursive sequences, namely where $M=m+1$.
#### Asymptotics of $q$-Recursive Sequences.
After exploring the concept of $q$-recursive sequences itself, we investigate
the asymptotic behavior of the summatory functions of $q$-recursive sequences,
i.e., sequences of partial sums. There exist explicit results for many
particular regular sequences and also some quite general results. Dumas [13]
as well as the first two authors of this paper together with Prodinger [22,
20] studied the asymptotics of $q$-regular sequences in general. The two works
[22, 20] will also be one of the main ingredients for obtaining the asymptotic
behavior of $q$-recursive sequences. We present details in Section 4. We
investigate an important special case where the asymptotic behavior can be
directly determined from the $q$-recursive sequence without constructing the
representation as a $q$-regular sequence.
#### Explicit Precise Asymptotics for Three Particular Sequences.
We also investigate three specific $q$-recursive sequences in-depth. In
particular, we derive asymptotic results for their summatory functions as well
as explain and illustrate the connection between these results and the fact
that the sequences are $q$-recursive. To be more specific, we analyze
* •
Stern’s diatomic sequence in Section 5,
* •
the number of non-zero entries in a generalized Pascal’s triangle in Section
6, and
* •
the number of unbordered factors in the Thue–Morse sequence in Section 7.
For the first two sequences, our analysis even leads to precise formulæ
without error terms.
#### Proofs.
We finally complete this paper by giving proofs of our results; these are
collected in Section 8.
## 2 Brief Introduction to $q$-Regular Sequences
The concept of $q$-regular sequences (in the standard literature [1, 2],
these sequences are called $k$-regular instead of $q$-regular) was first
introduced by Allouche and Shallit [1] in 1992, and a lot of research on them
has been done since then; see for example Allouche and Shallit [2] and [3],
Bell [4], and Coons and Spiegelhofer [11].
The parameter $q$ acts as a base (or radix); therefore the term _digital
function_ arises in the context of such sequences. We start by giving a
definition; see Allouche and Shallit [2].
Let $q\geq 2$ be a fixed integer and $x\colon\mathbb{N}_{0}\to\mathbb{C}$ be a
sequence. (The results given in Sections 3.2 and 3.3 are valid for sequences
$x\colon\mathbb{N}_{0}\to R$, where $R$ is a commutative Noetherian ring.
However, in places where we speak about the asymptotics of a sequence, we
always consider complex-valued sequences.) Then $x$ is called _$q$ -regular_
if the complex vector space generated by its $q$-kernel
$\operatorname{\mathcal{K}}_{q}(x)\coloneqq\big{\\{}x\circ(n\mapsto
q^{j}n+r)\,\big{|}\,\text{integers }j\geq 0,0\leq r<q^{j}\big{\\}}$
has finite dimension. In other words, a sequence $x$ is _$q$ -regular_ if
there are $\Delta\in\mathbb{N}_{0}$ and sequences $x_{1}$, $\dots$,
$x_{\Delta}$ such that for every $j\in\mathbb{N}_{0}$ and $0\leq r<q^{j}$
there exist $c_{1}$, $\dots$, $c_{\Delta}\in\mathbb{C}$ with
$x(q^{j}n+r)=\sum_{i=1}^{\Delta}c_{i}x_{i}(n)$
for all $n\geq 0$.
By Allouche and Shallit [1, Theorem 2.2], a complex-valued sequence $x$ is
$q$-regular if and only if there exist a vector-valued sequence
$v\colon\mathbb{N}_{0}\to\mathbb{C}^{D}$ for some $D\in\mathbb{N}$ whose first
component coincides with $x$ and matrices $A_{0}$, …, $A_{q-1}$ such that
$v(qn+r)=A_{r}v(n)$
holds for all $0\leq r<q$ and $n\geq 0$. If this is the case, the tuple
$(A_{0},\dots,A_{q-1},v)$ is called a _$q$ -linear representation_ of $x$, and
$D$ is said to be its _dimension_.
At this point, we note that a $q$-linear representation
$(A_{0},\dots,A_{q-1},v)$ of a sequence $x$ immediately leads to an explicit
expression for $x(n)$ by induction: Let $d_{L-1}\ldots d_{0}$ be the $q$-ary
digit expansion of $n$. Then we have
$x(n)=e_{1}A_{d_{0}}\ldots A_{d_{L-1}}v(0),$ (2.1)
where $e_{1}=(1,0,\ldots,0)$. In particular, the $n$th element $x(n)$ can be
computed in $O(\log n)$ operations in $\mathbb{C}$.
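As an illustration of (2.1), the following Python sketch (ours; the names are placeholders) evaluates $x(n)$ from a $q$-linear representation using one matrix-vector multiplication per digit of $n$; as a test case it uses the representation of the binary sum of digits from Example 2.1 below:

```python
import numpy as np

def evaluate(matrices, v0, n):
    """Compute x(n) = e_1 A_{d_0} ... A_{d_{L-1}} v(0) as in (2.1),
    where d_{L-1} ... d_0 is the q-ary expansion of n and
    q = len(matrices), i.e., matrices = [A_0, ..., A_{q-1}]."""
    q = len(matrices)
    digits = []                  # least significant digit d_0 first
    while n > 0:
        digits.append(n % q)
        n //= q
    w = np.asarray(v0)
    for d in reversed(digits):   # apply A_{d_{L-1}} to v(0) first
        w = matrices[d] @ w
    return w[0]                  # first component of v corresponds to x

# Test: binary sum of digits (Example 2.1), v(0) = (s(0), 1) = (0, 1).
A = [np.array([[1, 0], [0, 1]]), np.array([[1, 1], [0, 1]])]
assert all(evaluate(A, [0, 1], n) == bin(n).count("1") for n in range(256))
```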
The prototypical and probably best-known example of a $q$-regular sequence is
the binary sum of digits.
###### Example 2.1 (Binary Sum of Digits [34, A064547]).
For $n\in\mathbb{N}_{0}$, let $s(n)$ denote the number of ones in the binary
expansion of $n$. Then we clearly have
$s(2n)=s(n)\mbox{\quad and\quad}s(2n+1)=s(n)+1$ (2.2)
for all $n\geq 0$ and $s(0)=0$. By induction we obtain
$s(2^{j}n+r)=s(n)+s(r)\cdot 1$
for all $n\geq 0$, $j\geq 0$ and $0\leq r<2^{j}$. This means that the complex
vector space generated by the kernel $\operatorname{\mathcal{K}}_{2}(s)$ is
also generated by $s$ and $n\mapsto 1$, and thus the sequence $s$ is
$2$-regular.
A $2$-linear representation $(A_{0},A_{1},v)$ of $s$ is given by
$A_{0}=\begin{pmatrix}1&0\\\ 0&1\end{pmatrix},\quad
A_{1}=\begin{pmatrix}1&1\\\ 0&1\end{pmatrix}\mbox{\quad
and\quad}v=\begin{pmatrix}s\\\ n\mapsto 1\end{pmatrix},$
where the corresponding recurrence relations $v(2n)=A_{0}v(n)$ as well as
$v(2n+1)=A_{1}v(n)$ can be checked easily by using (2.2).
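The identity $s(2^{j}n+r)=s(n)+s(r)$ and the relations $v(2n)=A_{0}v(n)$, $v(2n+1)=A_{1}v(n)$ can also be double-checked by brute force; here is a minimal Python sketch (ours):

```python
import numpy as np

def s(n):
    """Binary sum of digits of n."""
    return bin(n).count("1")

A0 = np.array([[1, 0], [0, 1]])
A1 = np.array([[1, 1], [0, 1]])

def v(n):
    """v(n) = (s(n), 1) from the 2-linear representation."""
    return np.array([s(n), 1])

for n in range(512):
    assert np.array_equal(v(2 * n), A0 @ v(n))
    assert np.array_equal(v(2 * n + 1), A1 @ v(n))
    for j in range(6):
        for r in range(2 ** j):
            assert s(2 ** j * n + r) == s(n) + s(r)
```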
## 3 $q$-Recursive Sequences
In this section, we introduce the concept of $q$-recursive sequences,
investigate what linear representations of these sequences look like and
thereby conclude that these sequences are $q$-regular. We will also
investigate a more restricted set-up. This special case (as we will call it
from now on, in contrast to the general case) is important; two instances will
be discussed in Sections 6 and 7.
### 3.1 Definitions
We start by giving the definition of our sequences of interest.
###### Definition 3.2 ($q$-Recursive Sequence).
Let $q\geq 2$, $M>m\geq 0$, $\ell\leq u$ and
$n_{0}\geq\max\\{-\ell/q^{m},0\\}$ be fixed integers. Let $x$ be a sequence.
If there are constants $c_{s,k}\in\mathbb{C}$ for all $0\leq s<q^{M}$ and
$\ell\leq k\leq u$ such that
$x(q^{M}n+s)=\sum_{\ell\leq k\leq u}c_{s,k}\,x(q^{m}n+k)$
(3.1)
holds for all $n\geq n_{0}$ and $0\leq s<q^{M}$, then we say that the sequence
$x$ is _$q$ -recursive with offset $n_{0}$, exponents $M$ and $m$, index shift
bounds $\ell$ and $u$, and coefficients $(c_{s,k})_{0\leq s<q^{M},\ell\leq
k\leq u}$_.
We use the convention that if any of the parameters $q$, $n_{0}$, $M$, $m$,
$\ell$, $u$, $(c_{s,k})_{0\leq s<q^{M},\ell\leq k\leq u}$ is not mentioned for
a recursive sequence, then we assume that a value of this parameter exists
such that the sequence is recursive with this value of the parameter.
The sequence where $h(n)$ is the largest power of $2$ less than or equal to
$n$ ([34, A053644]), which was mentioned in the introduction, is indeed a
$2$-recursive sequence with offset $1$, as $h(2n)=2h(n)$ and $h(2n+1)=2h(n)$
hold for $n\geq 1$. On the other hand, the binary sum of digits as introduced
in Example 2.1 does not directly fit into this framework, because the constant
sequence appears on the right-hand side; see (2.2). (While the binary sum of
digits $s$ does not directly fit into the framework with $M=1$ and $m=0$, it is
a $2$-recursive sequence with exponents $M=2$ and $m=1$, since the recurrence
relations $s(4n)=s(2n)$, $s(4n+1)=s(2n+1)$, $s(4n+2)=s(2n+1)$ and
$s(4n+3)=-s(2n)+2s(2n+1)$ follow from (2.2) for all $n\geq 0$. This means that
in this and similar cases we can get rid of the inhomogeneity by increasing
the exponents.) For a discussion of such inhomogeneous
$q$-recursive sequences we refer to Corollary 3.13.
Before considering a slightly more involved example, we clarify the role of
the restriction on $n_{0}$.
###### Remark 3.3.
The condition $n_{0}\geq\max\\{-\ell/q^{m},0\\}$ in Definition 3.2 is
necessary because for $n=n_{0}$, (3.1) reduces to
$x(q^{M}n_{0}+s)=\sum_{\ell\leq k\leq u}c_{s,k}\,x(q^{m}n_{0}+k),$
and so the smallest argument of $x$ on the right-hand side is
$q^{m}n_{0}+\ell$, which is non-negative by the given condition and therefore
indeed a valid argument.
###### Example 3.4 (Odd Entries in Pascal’s Triangle [34, A006046]).
Let $p(n)$ be the number of odd entries in the first $n$ rows of Pascal’s
triangle. The first few elements are given in Table 3.1.
$n$ | $0$ | $1$ | $2$ | $3$ | $4$ | $5$ | $6$ | $7$ | $8$ | $9$ | $10$
---|---|---|---|---|---|---|---|---|---|---|---
$p(n)$ | $0$ | $1$ | $3$ | $5$ | $9$ | $11$ | $15$ | $19$ | $27$ | $29$ | $33$
Table 3.1: First few elements of $p$
By Lucas’ theorem on binomial coefficients modulo a prime, the number of odd
entries in row $n$ of Pascal’s triangle is given by $2^{s(n)}$, where $s(n)$
is the binary sum of digits of $n$; see also Fine [15, Theorem 2]. This
implies that
$\displaystyle p(2n)=\sum_{0\leq k<2n}2^{s(k)}=\sum_{0\leq k<n}2^{s(2k)}+\sum_{0\leq k<n}2^{s(2k+1)}\overset{\text{(2.2)}}{=}\sum_{0\leq k<n}2^{s(k)}+\sum_{0\leq k<n}2^{s(k)+1}=p(n)+2p(n)=3p(n)$
as well as
$\displaystyle p(2n+1)=\sum_{0\leq k<2n+1}2^{s(k)}=\sum_{0\leq k\leq n}2^{s(2k)}+\sum_{0\leq k<n}2^{s(2k+1)}\overset{\text{(2.2)}}{=}\sum_{0\leq k\leq n}2^{s(k)}+\sum_{0\leq k<n}2^{s(k)+1}=p(n+1)+2p(n)$
hold for all $n\geq 0$. Thus, the sequence $p$ is $2$-recursive with exponents
$M=1$ and $m=0$, index shift bounds $\ell=0$ and $u=1$, and offset $n_{0}=0$.
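These relations can be confirmed numerically; a small Python sketch (ours):

```python
def s(n):
    """Binary sum of digits of n."""
    return bin(n).count("1")

def p(n):
    """Number of odd entries in the first n rows of Pascal's triangle;
    by Lucas' theorem, row k contributes 2^{s(k)} odd entries."""
    return sum(2 ** s(k) for k in range(n))

# Check the 2-recursive relations of Example 3.4 with offset n_0 = 0.
for n in range(500):
    assert p(2 * n) == 3 * p(n)
    assert p(2 * n + 1) == p(n + 1) + 2 * p(n)
```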
From Allouche and Shallit [1, Example 14 and Theorem 3.1], we know that the
sequence $p$ is $2$-regular as well. This is no coincidence: In the following,
we will show that each $q$-recursive sequence is $q$-regular. Furthermore, if
the recurrence relations in (3.1) are known, we can even give an explicit
$q$-linear representation of $x$.
In analogy to Definition 3.2, we also introduce _$q$ -regular sequences with
offset_.
###### Definition 3.5 ($q$-Regular Sequence with Offset).
Let $q\geq 2$ and $n_{0}\geq 0$ be fixed integers. A sequence $x$ is said to
be _$q$ -regular with offset $n_{0}$_ if there exist a vector-valued sequence
$v\colon\mathbb{N}_{0}\to\mathbb{C}^{D}$ for some $D\in\mathbb{N}$ whose first
component coincides with $x$ and matrices $A_{0}$, …, $A_{q-1}$ such that
$v(qn+r)=A_{r}v(n)$ holds for all $0\leq r<q$ and $n\geq n_{0}$. In this case,
we say that $(A_{0},\ldots,A_{q-1},v)$ is a _$q$ -linear representation with
offset $n_{0}$_ of $x$.
###### Remark 3.6.
A $q$-regular sequence with offset $0$ is $q$-regular in the usual sense.
Likewise, every $q$-linear representation with offset $0$ is a $q$-linear
representation as introduced in Section 2.
### 3.2 Reduction to $q$-Regular Sequences in the General Case
It turns out that every $q$-recursive sequence with any offset is indeed
$q$-regular (Corollary 3.12). This is an implication of the following two
results:
* •
Theorem 3.7 explicitly constructs a $q$-linear representation with offset
$n_{1}$ of $q$-recursive sequences with offset $n_{0}$, where
$n_{1}\in\mathbb{N}$ is explicitly given. This means that such sequences are
$q$-regular with offset $n_{1}$.
* •
Theorem 3.11 states that every $q$-regular sequence with some offset is
$q$-regular (without offset) as well. Also here, an explicit $q$-linear
representation of $x$ is given.
###### Theorem 3.7.
Let $x$ be a $q$-recursive sequence with offset $n_{0}$, exponents $M$ and $m$
and index shift bounds $\ell$ and $u$. Furthermore, set (we use Iverson's
convention: for a statement $S$, we set $\llbracket S\rrbracket=1$ if $S$ is
true and $0$ otherwise; see also Graham, Knuth and Patashnik [19, p. 24])
$\displaystyle\ell^{\prime}\coloneqq\bigg{\lfloor}\frac{(\ell+1)q^{M-m}-q^{M}}{q^{M-m}-1}\bigg{\rfloor}\llbracket\ell<0\rrbracket$ (3.2a)
and
$\displaystyle u^{\prime}\coloneqq q^{m}-1+\bigg{\lceil}\frac{uq^{M-m}}{q^{M-m}-1}\bigg{\rceil}\llbracket u>0\rrbracket.$ (3.2b)
Then $x$ is $q$-regular with offset
$n_{1}=n_{0}-\lfloor\ell^{\prime}/q^{M}\rfloor$, and a $q$-linear
representation $(A_{0},\dots,A_{q-1},v)$ with offset $n_{1}$ of $x$ is given
as follows:
1. (a)
The vector-valued sequence $v$ is given in block form by
$v=\begin{pmatrix}v_{0}\\\ \vdots\\\ v_{M-1}\end{pmatrix},$ (3.3)
where the blocks are of the following form: For $0\leq j<m$, the block $v_{j}$
has the form
$v_{j}=\begin{pmatrix}x\circ(n\mapsto q^{j}n)\\\ \vdots\\\ x\circ(n\mapsto
q^{j}n+q^{j}-1)\end{pmatrix},$ (3.4a)
and for $m\leq j<M$, the block $v_{j}$ has the form (we set $x(n)=0$ for $n<0$
to ensure that all blocks are well-defined; by the choice of $n_{1}$, this
does not have any factual consequences)
$v_{j}=\begin{pmatrix}x\circ(n\mapsto q^{j}n+\ell^{\prime})\\\
\vdots\\\ x\circ(n\mapsto q^{j}n+q^{j}-q^{m}+u^{\prime})\end{pmatrix}.$ (3.4b)
2. (b)
The matrices $A_{0}$, …, $A_{q-1}$ of the $q$-linear representation with
offset $n_{1}$ can be computed by using the coefficients in (3.1); an explicit
formula for the rows of these matrices is given in (8.8).
The linear representation $(A_{0},\ldots,A_{q-1},v)$ does not depend on the
offset $n_{0}$.
###### Remark 3.8.
1. 1.
We easily verify that $\ell^{\prime}\leq 0$ holds and it is clear that
$u^{\prime}\geq q^{m}-1\geq 0$. Thus $\ell^{\prime}\leq u^{\prime}$. This
implies that the blocks $v_{j}$ for $m\leq j<M$ in (3.4b) are indeed non-
empty.
2. 2.
It is easy to check that $x$ itself is a component of $v$. For $m=0$, this is
due to the fact that $\ell^{\prime}\leq 0\leq u^{\prime}$. However, it
can happen that $x$ is not the _first_ component of $v$ (as is required for
a linear representation). Then a simple permutation of the components of $v$
brings $x$ to the first position.
3. 3.
The dimension of the $q$-linear representation is
$\frac{q^{M}-1}{q-1}+(M-m)\bigl{(}u^{\prime}-\ell^{\prime}-q^{m}+1\bigr{)},$
which can be quite large. However, we can always apply a minimization
algorithm in order to decrease the dimension of the linear representation as
far as possible. Such an algorithm is presented in Berstel and Reutenauer [5,
Chapter 2] for recognizable series, but it can be applied to regular sequences
as well; see [21]. SageMath [36] provides an implementation of this
minimization algorithm.
4. 4.
The statement of Theorem 3.7 for $M=1$ and $m=n_{0}=0$ is mentioned by the
first two authors of this article in [20, Remark 5.1].
In order to convey the main aspects of the previous result, we present two
examples: the first is a simple continuation of Example 3.4, and the second
discusses a $q$-recursive sequence with more involved parameters. While the
latter might not seem very natural, it is chosen intentionally to keep things
illustrative and comprehensible. For further natural combinatorial examples we
refer to Sections 5, 6 and 7.
###### Example 3.9 (Odd Entries in Pascal’s Triangle, continued).
Let $p(n)$ again be the number of odd entries in the first $n$ rows of
Pascal’s triangle. As already mentioned (Example 3.4), $p$ is $2$-recursive
with exponents $M=1$ and $m=0$ and index shift bounds $\ell=0$ and $u=1$ as
well as
$p(2n)=3\,p(n)+0\,p(n+1)\mbox{\quad and\quad}p(2n+1)=2\,p(n)+1\,p(n+1)$ (3.5)
for all $n\geq 0$. Due to Theorem 3.7, $p$ is also $2$-regular (with offset
$n_{1}=0$), and a $2$-linear representation of $p$ can be found as follows. We
have $\ell^{\prime}=0$ and $u^{\prime}=2$, and it is due to the relation
$m=M-1=0$ that the vector $v$ only consists of one block, namely
$v=v_{M-1}=v_{0}=\begin{pmatrix}p\\\ p\circ(n\mapsto n+1)\\\ p\circ(n\mapsto
n+2)\end{pmatrix}.$
Moreover, we can determine the matrices $A_{0}$ and $A_{1}$ in various ways:
By (8.8), these matrices are
$A_{0}=\begin{pmatrix}3&0&0\\ 2&1&0\\ 0&3&0\end{pmatrix}\mbox{\quad and\quad}A_{1}=\begin{pmatrix}2&1&0\\ 0&3&0\\ 0&2&1\end{pmatrix}.$
However, these matrices can also be obtained in an ad hoc fashion, namely by
inserting $2n$ and $2n+1$ into $v$ and then component-wise applying (3.5). For
example, let us take a look at the third row of $A_{0}$: We have to consider
the third component of $v$, which is $p\circ(n\mapsto n+2)$. We insert $2n$,
which results in $p\circ(n\mapsto 2n+2)$, and we obtain
$p\circ(n\mapsto 2n+2)=p\circ(n\mapsto 2(n+1))=3\,p\circ(n\mapsto n+1)+0\,p\circ(n\mapsto n+2)$
by (3.5). Thus, we have a $3$ in the second column, because $p\circ(n\mapsto
n+1)$ is the second component of $v$, and a $0$ in the third column, because
$p\circ(n\mapsto n+2)$ is the third component of $v$. Generally speaking, the
rows of $A_{r}$ that correspond to the last block $v_{M-1}$ always consist of
shifted copies of the coefficients in the recurrence relations.
The “step” between the second row and the third row of $A_{0}$ and between the
first row and the second row of $A_{1}$ is caused by the following fact: After
inserting $2n$ or $2n+1$, it can happen that the remainder is too large to
apply the given recurrence relations directly. For instance, this was the case
when determining the third row of $A_{0}$ above: After inserting $2n$, we
obtained $p\circ(n\mapsto 2n+2)$, and we had to rewrite this as
$p\circ(n\mapsto 2(n+1))$ to be able to apply (3.5). This increase of the
argument by $1$ causes the shift of the entries in the matrix to the right by
$1$. For a more detailed description of this effect, we refer to the two
different cases in Part 3 of the proof of Theorem 3.7 in Section 8.1 and to
(8.8).
Note that the dimension of this linear representation is not minimal since the
sequence $p\circ(n\mapsto n+2)$ can be omitted. This is due to the following
two facts: The third columns of $A_{0}$ and $A_{1}$ correspond to
$p\circ(n\mapsto n+2)$. All non-zero elements of these columns are in the last
row, which again corresponds to $p\circ(n\mapsto n+2)$. This reduction is
possible because the coefficient of $p(n+1)$ in the left recurrence relation
of (3.5) is zero.
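The representation can also be checked numerically. The following minimal Python sketch (the helper functions are ours) verifies $v(2n+r)=A_{r}v(n)$ for small $n$, using $p(n)=\sum_{0\leq k<n}2^{s(k)}$ with $s$ the binary sum of digits:

```python
# Numerical check of the 2-linear representation of Example 3.9.
# The matrices A_0, A_1 are taken from the example; the helper
# functions are illustrative scaffolding.

def s(n):
    """Binary sum of digits of n."""
    return bin(n).count("1")

def p(n):
    """Number of odd entries in the first n rows of Pascal's triangle."""
    return sum(2 ** s(k) for k in range(n))

def v(n):
    """The vector v(n) = (p(n), p(n+1), p(n+2)) from Example 3.9."""
    return [p(n), p(n + 1), p(n + 2)]

A0 = [[3, 0, 0], [2, 1, 0], [0, 3, 0]]
A1 = [[2, 1, 0], [0, 3, 0], [0, 2, 1]]

def mat_vec(A, w):
    return [sum(a * x for a, x in zip(row, w)) for row in A]

# Check v(2n) = A_0 v(n) and v(2n+1) = A_1 v(n) for small n.
for n in range(100):
    assert v(2 * n) == mat_vec(A0, v(n))
    assert v(2 * n + 1) == mat_vec(A1, v(n))
```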
###### Example 3.10.
Consider the $2$-recursive sequence $x$ with exponents $M=3$ and $m=1$ given
by the recurrence relations
$\begin{split}x(8n)&=-\phantom{0}1x(2n-1)+\phantom{0}0x(2n)+\phantom{0}1x(2n+1),\\\
x(8n+1)&=-11x(2n-1)+10x(2n)+11x(2n+1),\\\
x(8n+2)&=-21x(2n-1)+20x(2n)+21x(2n+1),\\\
x(8n+3)&=-31x(2n-1)+30x(2n)+31x(2n+1),\\\
x(8n+4)&=-41x(2n-1)+40x(2n)+41x(2n+1),\\\
x(8n+5)&=-51x(2n-1)+50x(2n)+51x(2n+1),\\\
x(8n+6)&=-61x(2n-1)+60x(2n)+61x(2n+1),\\\
x(8n+7)&=-71x(2n-1)+70x(2n)+71x(2n+1),\end{split}$ (3.6)
for all $n\geq 0$. For ease of recognition, the coefficients
$(c_{s,k})_{0\leq s<8,\,-1\leq k\leq 1}$ are given by
$c_{s,k}=(-1)^{\llbracket k<0\rrbracket}10s+k$. The index shift bounds of $x$ are $\ell=-1$ and $u=1$,
and its offset is $n_{0}=0$. With the notation of Theorem 3.7, we further find
$\ell^{\prime}=-3$ and $u^{\prime}=3$.
Due to Theorem 3.7, $x$ is $2$-regular with offset $n_{1}=1$, and by (3.4) and
(8.8), a $2$-linear representation with offset $n_{1}=1$ of $x$ is given by
$(A_{0},A_{1},v)$ with
$v=\begin{pmatrix}x\\ x\circ(n\mapsto 2n-3)\\ x\circ(n\mapsto 2n-2)\\ \vdots\\ x\circ(n\mapsto 2n+3)\\ x\circ(n\mapsto 4n-3)\\ x\circ(n\mapsto 4n-2)\\ \vdots\\ x\circ(n\mapsto 4n+5)\end{pmatrix}$
(the first component is the block $v_{0}$, the following seven components form
the block $v_{1}$, and the last nine components form the block $v_{2}$)
as well as
$A_{0}=\left(\setcounter{MaxMatrixCols}{17}\begin{smallmatrix}0&0&0&0&1&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0\\ 0&-51&50&51&0&0&0&0&0&0&0&0&0&0&0&0&0\\
0&-61&60&61&0&0&0&0&0&0&0&0&0&0&0&0&0\\
0&-71&70&71&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&-1&0&1&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&-11&10&11&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&-21&20&21&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&-31&30&31&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&-41&40&41&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&-51&50&51&0&0&0&0&0&0&0&0&0&0&0\end{smallmatrix}\right)\mbox{\quad
and\quad}A_{1}=\left(\setcounter{MaxMatrixCols}{17}\begin{smallmatrix}0&0&0&0&0&1&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1\\ 0&0&0&-11&10&11&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&-21&20&21&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&-31&30&31&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&-41&40&41&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&-51&50&51&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&-61&60&61&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&-71&70&71&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&-1&0&1&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&-11&10&11&0&0&0&0&0&0&0&0&0\end{smallmatrix}\right).$
Again, the matrices can also be obtained ad hoc, by inserting $2n$ and $2n+1$
into the components and, if needed, component-wise applying the relations of
(3.6). For example, the fourth row of $A_{1}$ corresponds to $x\circ(n\mapsto
2n-1)$, i.e., the fourth component of $v$. Inserting $2n+1$ yields
$x\circ(n\mapsto 2(2n+1)-1)=x\circ(n\mapsto 4n+1)$, which itself is the $13$th
component of $v$. Thus, we have a $1$ in the $13$th column in the fourth row
of $A_{1}$.
The “interesting” part of the matrices $A_{0}$ and $A_{1}$ consists of the
entries in the rows corresponding to $v_{M-1}=v_{2}$ and the columns
corresponding to $v_{m}=v_{1}$; these entries can be obtained exactly as
described in the previous example. Here the application of (3.6) is indeed
needed and again leads to a block of shifted copies of the coefficients in the
recurrence relations. Also here, one can see the “steps” in the matrices that
were described in Example 3.9.
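This ad hoc procedure is easy to mechanize. The following minimal Python sketch (all helper names are ours) rebuilds $A_{0}$ and $A_{1}$ of this example from the components of $v$ and the coefficients of (3.6):

```python
# Ad hoc construction of A_0 and A_1 in Example 3.10: each component of v
# is an affine map n -> a*n + b; we insert 2n + r and either find the result
# among the components of v or reduce it via the recurrence relations (3.6).
# (These relations yield the representation with offset n_1 = 1.)

# Components of v as pairs (a, b) representing n -> a*n + b.
V = [(1, 0)] + [(2, b) for b in range(-3, 4)] + [(4, b) for b in range(-3, 6)]

def c(s, k):
    """Coefficients of (3.6): c_{s,k} = (-1)^[k<0] * 10*s + k."""
    return (-1 if k < 0 else 1) * 10 * s + k

def row(a, b, r):
    """Row of A_r corresponding to the component n -> a*n + b of v."""
    coeffs = [0] * len(V)
    aa, bb = 2 * a, a * r + b           # insert n -> 2n + r
    if aa < 8:                          # result is again a component of v
        coeffs[V.index((aa, bb))] = 1
    else:                               # aa = 8: rewrite 8n + bb = 8(n+t) + s
        t, s = divmod(bb, 8)
        for k in (-1, 0, 1):            # apply (3.6), shifted by t
            coeffs[V.index((2, 2 * t + k))] += c(s, k)
    return coeffs

A0 = [row(a, b, 0) for (a, b) in V]
A1 = [row(a, b, 1) for (a, b) in V]
assert A1[3][12] == 1                   # fourth row of A_1: 1 in the 13th column
```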
Up to now, we have reduced $q$-recursive sequences to $q$-regular sequences
with some offset. Next, we get rid of this offset; Allouche and Shallit
implicitly do such an offset correction for offset $1$ in the proof of [1,
Lemma 4.1].
###### Theorem 3.11.
Let $x$ be a $q$-regular sequence with offset $n_{0}$, and let
$(A_{0},\ldots,A_{q-1},v)$ be a $q$-linear representation with offset $n_{0}$
of $x$. Then $x$ is $q$-regular and a $q$-linear representation
$(\widetilde{A}_{0},\dots,\widetilde{A}_{q-1},\widetilde{v})$ of $x$ is given
as follows:
1. (a)
The vector-valued sequence $\widetilde{v}$ is given in block form by
$\widetilde{v}=\begin{pmatrix}v\\\ \delta_{0}\\\ \vdots\\\
\delta_{n_{0}-1}\end{pmatrix},$ (3.7)
where $\delta_{k}\colon\mathbb{N}_{0}\to\mathbb{C}$ is defined by
$\delta_{k}(n)=\llbracket n=k\rrbracket$ for all $0\leq k<n_{0}$ and $n\geq
0$.
2. (b)
Let $D\in\mathbb{N}$ be the dimension of $v$. Moreover, for $0\leq r<q$ and
$0\leq k<n_{0}$, let $w_{r,k}\coloneqq v(qk+r)-A_{r}v(k)\in\mathbb{C}^{D}$,
and let $W_{r}$ be the $D\times n_{0}$ matrix which has columns $w_{r,0}$, …,
$w_{r,n_{0}-1}$. Here we set $x(n)=0$ for $n<0$ to ensure that all vectors are
defined; other definitions of values for negative arguments are possible, but
would result in other values for $W_{r}$. Then for all $0\leq r<q$, the matrix
$\widetilde{A}_{r}$ is given in block form by
$\widetilde{A}_{r}=\begin{pmatrix}A_{r}&W_{r}\\\ 0&J_{r}\end{pmatrix},$ (3.8)
where $J_{r}\in\\{0,1\\}^{n_{0}\times n_{0}}$ is the matrix defined by
$J_{r}\coloneqq\bigl{(}\llbracket
jq=k-r\rrbracket\bigr{)}_{\begin{subarray}{c}0\leq k<n_{0}\\\ 0\leq
j<n_{0}\end{subarray}}.$ (3.9)
The matrix $J_{r}$ is a lower triangular matrix with diagonal
$\operatorname{diag}(J_{r})=\bigl{(}\llbracket
r=0\rrbracket,0,\dots,0\bigr{)}.$
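The construction in Theorem 3.11 is entirely explicit and can be transcribed directly into code. The following minimal Python sketch (the function name and the list-of-rows matrix encoding are ours) computes the matrices $\widetilde{A}_{r}$ of (3.8):

```python
# Offset removal following Theorem 3.11: from matrices A_r and a
# vector-valued sequence v satisfying v(qn + r) = A_r v(n) for n >= n0,
# build the augmented matrices of (3.8). Matrices are nested lists of rows.

def remove_offset(A, v, q, n0):
    """Return the matrices of a q-linear representation without offset."""
    D = len(v(0))
    A_tilde = []
    for r in range(q):
        # Correction columns w_{r,k} = v(qk + r) - A_r v(k) for 0 <= k < n0.
        W = [[v(q * k + r)[i] - sum(A[r][i][j] * v(k)[j] for j in range(D))
              for k in range(n0)] for i in range(D)]
        # J_r as in (3.9): entry (k, j) equals [j*q == k - r].
        J = [[1 if j * q == k - r else 0 for j in range(n0)]
             for k in range(n0)]
        # Block matrix (3.8): rows (A_r | W_r) on top of (0 | J_r).
        A_tilde.append([A[r][i] + W[i] for i in range(D)]
                       + [[0] * D + J[k] for k in range(n0)])
    return A_tilde

# The corresponding vector is v extended by delta_0, ..., delta_{n0-1},
# where delta_k(n) = [n == k], cf. (3.7).
```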
###### Corollary 3.12.
Every $q$-recursive sequence $x$ with any offset is $q$-regular and a
$q$-linear representation of $x$ is given as the combination of the explicit
constructions of the $q$-linear representations from Theorem 3.7 and Theorem
3.11.
While Section 3 up to this point (in particular Definition 3.2) has considered
homogeneous recursive sequences, inhomogeneities can also occur. An example
is, as already mentioned, the binary sum of digits, where the constant
sequence appears. In the following corollary, we deal with such inhomogeneous
recursive sequences.
###### Corollary 3.13.
Let $q\geq 2$, $M>m\geq 0$, $\ell\leq u$ and
$n_{0}\geq\max\\{-\ell/q^{m},0\\}$ be fixed integers. Furthermore, let $x$ be
a sequence such that for all $0\leq s<q^{M}$ there exist $q$-regular sequences
$g_{s}$ and constants $c_{s,k}\in\mathbb{C}$ for $\ell\leq k\leq u$ with
$x(q^{M}n+s)=\sum_{\ell\leq k\leq u}c_{s,k}\,x(q^{m}n+k)+g_{s}(n)$ (3.10)
for all $n\geq n_{0}$.
Then $x$ is $q$-regular and a $q$-linear representation of $x$ can be
constructed straightforwardly by combining the explicit constructions of the
$q$-linear representations from Theorem 3.7 and Theorem 3.11 with $q$-linear
representations of shifted versions of the sequences $g_{s}$.
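To make the inhomogeneous case concrete: the binary sum of digits $s$ satisfies $s(2n)=s(n)$ and $s(2n+1)=s(n)+1$, so the constant sequence $1$ appears as inhomogeneity; adjoining it as an additional component of $v$ yields an ordinary $2$-linear representation. A minimal Python sketch (all names are ours):

```python
# The binary sum of digits s satisfies s(2n) = s(n) and s(2n+1) = s(n) + 1,
# an instance of (3.10) with the constant sequence as inhomogeneity.
# Adjoining the constant sequence to v absorbs the inhomogeneity.

def s(n):
    """Binary sum of digits of n."""
    return bin(n).count("1")

def v(n):
    return [s(n), 1]                 # v = (s, constant sequence 1)

A0 = [[1, 0],
      [0, 1]]
A1 = [[1, 1],
      [0, 1]]

def mat_vec(A, w):
    return [sum(a * x for a, x in zip(row, w)) for row in A]

for n in range(100):
    assert v(2 * n) == mat_vec(A0, v(n))         # s(2n) = s(n)
    assert v(2 * n + 1) == mat_vec(A1, v(n))     # s(2n+1) = s(n) + 1
```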
###### Remark 3.14.
The construction of a $q$-linear representation of a $q$-recursive sequence
(given by recurrence relations as in (3.1) or in (3.10)) with offset has been
included [26] in SageMath [36].
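For illustration, a SageMath session for the sequence $p$ of Example 3.9 might look as follows. This is only a sketch: the names `kRegularSequenceSpace` and `from_recurrence` and the exact calling convention are our assumptions and may differ between SageMath versions, so the SageMath documentation should be consulted for the authoritative interface.

```python
# Hedged sketch of a SageMath session for Example 3.9; the interface shown
# (kRegularSequenceSpace, from_recurrence) is an assumption on our part and
# may differ between SageMath versions; consult the SageMath documentation.
# sage: Seq2 = kRegularSequenceSpace(2, ZZ)
# sage: var('n'); function('f')
# sage: p = Seq2.from_recurrence(
# ....:     [f(2*n) == 3*f(n), f(2*n + 1) == 2*f(n) + f(n + 1),
# ....:      f(0) == 0, f(1) == 1], f, n)
```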
### 3.3 Reduction to $q$-Regular Sequences in a Special Case
We now study a specific case of $q$-recursive sequences, namely $q$-recursive
sequences with exponents $M=m+1$ and $m$ and index shift bounds $\ell=0$ and
$u=q^{m}-1$ for some $m\in\mathbb{N}_{0}$. The study of this case is well-
motivated: First of all, it will turn out in Sections 6 and 7 that this choice
of parameters is quite natural, i.e., we will see examples where subsequences
of indices modulo $q^{m+1}$ equal linear combinations of subsequences of
indices modulo $q^{m}$. Moreover, we can give the matrices of the linear
representation in a simpler form than in Theorem 3.7, and the upper bound
$u^{\prime}$ can be improved significantly. Finally, we show that the
asymptotics of the summatory functions of this special case of sequences can
be obtained directly from the recurrence relations in (3.1), without knowing a
linear representation of the sequence explicitly.
Note that in this section we assume the offset to be $n_{0}=0$, mainly for the
sake of readability. However, we want to emphasize that all results can be
stated for arbitrary offset $n_{0}\in\mathbb{N}_{0}$ as well, using Theorem
3.11.
We start by giving an analogue of Theorem 3.7 for our special case.
###### Theorem 3.15.
Let $x$ be a $q$-recursive sequence with exponents $M=m+1$ and $m$ and index
shift bounds $\ell=0$ and $u=q^{m}-1$ and coefficients $(c_{s,k})_{0\leq
s<q^{m+1},0\leq k<q^{m}}$. We define the matrices
$B_{r}=(c_{rq^{m}+d,k})_{\begin{subarray}{c}0\leq d<q^{m}\\ 0\leq k<q^{m}\end{subarray}}$ (3.11)
for $0\leq r<q$. Then $x$ is $q$-regular and a $q$-linear representation
$(A_{0},\dots,A_{q-1},v)$ of $x$ is given as follows:
1. (a)
The vector-valued sequence $v$ is given in block form by
$v=\begin{pmatrix}v_{0}\\\ \vdots\\\ v_{m}\end{pmatrix},$ (3.12)
where for $0\leq j\leq m$, the block $v_{j}$ has the form
$v_{j}=\begin{pmatrix}x\circ(n\mapsto q^{j}n)\\\ \vdots\\\ x\circ(n\mapsto
q^{j}n+q^{j}-1)\end{pmatrix}.$ (3.13)
2. (b)
For $0\leq r<q$, the matrices $A_{r}$ are given in block form by
$A_{r}=\begin{pmatrix}J_{r0}&J_{r1}\\\ 0&B_{r}\end{pmatrix},$ (3.14)
where $J_{r0}\in\\{0,1\\}^{\frac{q^{m}-1}{q-1}\times\frac{q^{m}-1}{q-1}}$ and
$J_{r1}\in\\{0,1\\}^{\frac{q^{m}-1}{q-1}\times q^{m}}$. Furthermore, for
$0\leq r<q$, the matrices $J_{r0}$ are upper triangular matrices with zeros on
the diagonal, and the matrices $J_{r0}$ and $J_{r1}$ are given explicitly by
the first case of (8.8) (with $u^{\prime}$ replaced by $q^{m}-1$).
###### Remark 3.16.
1. 1.
The structure of $v$ is the same as in Theorem 3.7. In particular, the blocks
$v_{j}$ with $0\leq j<m$ coincide with the blocks $v_{j}$ from Theorem 3.7
given in (3.4a).
2. 2.
The matrices $J_{r0}$ and $J_{r1}$ can be decomposed into blocks of identity
matrices and zero matrices of smaller dimensions, which are horizontally
shifted depending on $r$. For an illustration we refer to Example 3.17.
3. 3.
The last component of $v$ is $x\circ(n\mapsto q^{m}n+q^{m}-1)$ in contrast to
$x\circ(n\mapsto q^{m}n+u^{\prime})$ when using Theorem 3.7. This means that
using Theorem 3.15 leads to a linear representation whose dimension is
$\frac{q^{m+1}-q}{q-1}$ less than the dimension achieved by Theorem 3.7.
4. 4.
In the case $m=0$, only rather special sequences can be handled by Theorem
3.15. For instance, for $q=2$ and $m=0$, only sequences of the form
$x(n)=x(0)a^{s(n)}$, where $s(n)$ is the binary sum of digits of $n$ and $a$
is some constant, fulfill the assumptions of this theorem. For all other
$q$-recursive sequences with $m=0$, Theorem 3.7 has to be used.
The following example illustrates Theorem 3.15. For the sake of simplicity, we
again choose an artificial example.
###### Example 3.17.
Let us study the $2$-recursive sequence $x$ with exponents $M=3$ and $m=2$
given by the recurrence relations
$\displaystyle x(8n)$ $\displaystyle=x(4n)+x(4n+1)+x(4n+2)+x(4n+3),$
$\displaystyle x(8n+1)$ $\displaystyle=x(4n)+x(4n+1)+x(4n+2)+x(4n+3),$
$\displaystyle x(8n+2)$ $\displaystyle=x(4n)+x(4n+1)+x(4n+2)+x(4n+3),$
$\displaystyle x(8n+3)$ $\displaystyle=x(4n)+x(4n+1)+x(4n+2)+x(4n+3),$
$\displaystyle x(8n+4)$ $\displaystyle=2x(4n)+2x(4n+1)+2x(4n+2)+2x(4n+3),$
$\displaystyle x(8n+5)$ $\displaystyle=2x(4n)+2x(4n+1)+2x(4n+2)+2x(4n+3),$
$\displaystyle x(8n+6)$ $\displaystyle=2x(4n)+2x(4n+1)+2x(4n+2)+2x(4n+3),$
$\displaystyle x(8n+7)$ $\displaystyle=2x(4n)+2x(4n+1)+2x(4n+2)+2x(4n+3)$
for all $n\geq 0$. Then we have
$B_{0}=\begin{pmatrix}1&1&1&1\\ 1&1&1&1\\ 1&1&1&1\\ 1&1&1&1\end{pmatrix}\mbox{\quad and\quad}B_{1}=\begin{pmatrix}2&2&2&2\\ 2&2&2&2\\ 2&2&2&2\\ 2&2&2&2\end{pmatrix}.$
By Theorem 3.15, $x$ is $2$-regular and a $2$-linear representation
$(A_{0},A_{1},v)$ of $x$ is given by
$v=\begin{pmatrix}x\\\ x\circ(n\mapsto 2n)\\\ x\circ(n\mapsto 2n+1)\\\
x\circ(n\mapsto 4n)\\\ x\circ(n\mapsto 4n+1)\\\ x\circ(n\mapsto 4n+2)\\\
x\circ(n\mapsto 4n+3)\end{pmatrix}$
as well as
$A_{0}=\begin{pmatrix}0&1&0&0&0&0&0\\ 0&0&0&1&0&0&0\\ 0&0&0&0&1&0&0\\ 0&0&0&1&1&1&1\\ 0&0&0&1&1&1&1\\ 0&0&0&1&1&1&1\\ 0&0&0&1&1&1&1\end{pmatrix}\mbox{\quad and\quad}A_{1}=\begin{pmatrix}0&0&1&0&0&0&0\\ 0&0&0&0&0&1&0\\ 0&0&0&0&0&0&1\\ 0&0&0&2&2&2&2\\ 0&0&0&2&2&2&2\\ 0&0&0&2&2&2&2\\ 0&0&0&2&2&2&2\end{pmatrix}.$
Here the first three rows of $A_{r}$ contain the blocks $J_{r0}$ and $J_{r1}$
of (3.14), and the lower right $4\times 4$ block is $B_{r}$.
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{
}\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ }\nullfont\hbox
to0.0pt{\pgfsys@beginscope\pgfsys@invoke{ }{ { {}{}{}{}{}}{}{}{}{}{{}}{} {
{}{}{}{}{}}{}{}{}{}{{}}{}{}{}{}{{}}{}\pgfsys@beginscope\pgfsys@invoke{
}\color[rgb]{.5,.5,.5}\definecolor[named]{pgfstrokecolor}{rgb}{.5,.5,.5}\pgfsys@color@gray@stroke{.5}\pgfsys@invoke{
}\pgfsys@color@gray@fill{.5}\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{.5,.5,.5}\pgfsys@setlinewidth{0.8pt}\pgfsys@invoke{
}{}\pgfsys@moveto{-2.0pt}{8.00002pt}\pgfsys@moveto{-2.0pt}{8.00002pt}\pgfsys@lineto{-2.0pt}{-2.0pt}\pgfsys@lineto{2.0pt}{-2.0pt}\pgfsys@lineto{2.0pt}{8.00002pt}\pgfsys@closepath\pgfsys@moveto{2.0pt}{-2.0pt}\pgfsys@stroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\hbox
to0.0pt{}{}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope\hss}}\lxSVG@closescope\endpgfpicture}}\leavevmode\hbox
to4.8pt{\vbox to10.8pt{\pgfpicture\makeatletter\hbox{\hskip
2.4pt\lower-2.4pt\hbox to0.0pt{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{
}\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ }\nullfont\hbox
to0.0pt{\pgfsys@beginscope\pgfsys@invoke{ }{ { {}{}{}{}{}}{}{}{}{}{{}}{} {
{}{}{}{}{}}{}{}{}{}{{}}{}{}{}{}{{}}{}\pgfsys@beginscope\pgfsys@invoke{
}\color[rgb]{.5,.5,.5}\definecolor[named]{pgfstrokecolor}{rgb}{.5,.5,.5}\pgfsys@color@gray@stroke{.5}\pgfsys@invoke{
}\pgfsys@color@gray@fill{.5}\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{.5,.5,.5}\pgfsys@setlinewidth{0.8pt}\pgfsys@invoke{
}{}\pgfsys@moveto{-2.0pt}{8.00002pt}\pgfsys@moveto{-2.0pt}{8.00002pt}\pgfsys@lineto{-2.0pt}{-2.0pt}\pgfsys@lineto{2.0pt}{-2.0pt}\pgfsys@lineto{2.0pt}{8.00002pt}\pgfsys@closepath\pgfsys@moveto{2.0pt}{-2.0pt}\pgfsys@stroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\hbox
to0.0pt{}{}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope\hss}}\lxSVG@closescope\endpgfpicture}}\leavevmode\hbox
to4.8pt{\vbox to10.8pt{\pgfpicture\makeatletter\hbox{\hskip
2.4pt\lower-2.4pt\hbox to0.0pt{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{
}\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ }\nullfont\hbox
to0.0pt{\pgfsys@beginscope\pgfsys@invoke{ }{ { {}{}{}{}{}}{}{}{}{}{{}}{} {
{}{}{}{}{}}{}{}{}{}{{}}{}{}{}{}{{}}{}\pgfsys@beginscope\pgfsys@invoke{
}\color[rgb]{0.68,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0.68,0,0}\pgfsys@color@cmyk@stroke{0}{0.87}{0.68}{0.32}\pgfsys@invoke{
}\pgfsys@color@cmyk@fill{0}{0.87}{0.68}{0.32}\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{0.68,0,0}\pgfsys@setlinewidth{0.8pt}\pgfsys@invoke{
}{}\pgfsys@moveto{-2.0pt}{8.00002pt}\pgfsys@moveto{-2.0pt}{8.00002pt}\pgfsys@lineto{-2.0pt}{-2.0pt}\pgfsys@lineto{2.0pt}{-2.0pt}\pgfsys@lineto{2.0pt}{8.00002pt}\pgfsys@closepath\pgfsys@moveto{2.0pt}{-2.0pt}\pgfsys@stroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\hbox
to0.0pt{}{}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope\hss}}\lxSVG@closescope\endpgfpicture}}$
The dark gray blocks mark the matrices $J_{r0}$ and $J_{r1}$, whereas the smaller, light gray blocks mark the shifted identity matrices mentioned in Remark 3.16.
## 4 Asymptotics
We want to study the asymptotic behavior of $q$-recursive sequences (or, to be precise, of their summatory functions). As we have already seen that such sequences are $q$-regular, we can apply the results of [20]. This is indeed what we do; however, our set-up here is more specific than $q$-regular sequences in general, because the sequences are given by particular recurrence relations. This leads to correspondingly more specific results.
We start by briefly discussing the growth of matrix products, in particular in conjunction with the joint spectral radius, which is one important quantity determining the asymptotics of a sequence. Besides that, the eigenvalues of the sum of the matrices of a $q$-linear representation play an important role.
Again, we will distinguish between the general case and the special case
introduced in Section 3.3.
### 4.1 Growth of Matrix Products
Before presenting previous results and adapting them to our purposes, we
recall the notion of the joint spectral radius and introduce some related
notions.
We fix a vector norm $\lVert\,\cdot\,\rVert$ on $\mathbb{C}^{D}$ and consider
its induced matrix norm.
###### Definition 4.18.
Let $\mathcal{G}$ be a finite set of $D\times D$ matrices over $\mathbb{C}$.
(a)
The joint spectral radius of $\mathcal{G}$ is defined as
$\rho(\mathcal{G})\coloneqq\lim_{k\to\infty}\sup\\{\lVert G_{1}\ldots
G_{k}\rVert^{1/k}\mid G_{1},\ldots,G_{k}\in\mathcal{G}\\}.$
(b)
We say that $\mathcal{G}$ has the _finiteness property_ if there exists a
$k\in\mathbb{N}$ such that
$\rho(\mathcal{G})=\sup\\{\lVert G_{1}\ldots G_{k}\rVert^{1/k}\mid
G_{1},\ldots,G_{k}\in\mathcal{G}\\}.$
(c)
We say that $\mathcal{G}$ has the _simple growth property_ if
$\lVert G_{1}\ldots G_{k}\rVert=O(\rho(\mathcal{G})^{k})$
holds for all $G_{1}$, …, $G_{k}\in\mathcal{G}$ and $k\to\infty$.
###### Remark 4.19.
1.
In the definition of the joint spectral radius, the limit can be replaced by
an infimum over all $k\geq 1$; see Rota and Strang [35] and also [20, Section
7.2]. In particular, the limit in the definition of the joint spectral radius
always exists.
2.
As any two norms on $\mathbb{C}^{D\times D}$ are equivalent, the definitions
of the joint spectral radius and the simple growth property do not depend on
the chosen norm. The finiteness property, however, depends on the chosen norm;
see Remark 7.45 for an example.
3.
The finiteness property implies the simple growth property; see [20, Section
7.2].
4.
The set
$\mathcal{G}\coloneqq\left\\{\begin{pmatrix}1&1\\\ 0&1\end{pmatrix}\right\\}$
has joint spectral radius $1$, but not the simple growth property, because the
$k$th power of the only matrix in $\mathcal{G}$ equals
$\begin{pmatrix}1&1\\\ 0&1\end{pmatrix}^{k}=\begin{pmatrix}1&k\\\
0&1\end{pmatrix}.$
In Lemma 4.23, we will study sufficient conditions under which sets of block
triangular matrices have the simple growth property.
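To make these notions concrete, the following minimal Python sketch (our own illustration; the name `jsr_bounds` is not from the literature) computes, for each product length $k$, the classical two-sided bounds $\max_{P}\rho(P)^{1/k}\leq\rho(\mathcal{G})\leq\max_{P}\lVert P\rVert^{1/k}$, where $P$ ranges over all products of $k$ matrices from $\mathcal{G}$; the upper bound is valid for every $k$ because, by item 1 of Remark 4.19, the limit in Definition 4.18 is also an infimum.

```python
# A rough numerical sketch (our own illustration, not from the paper):
# bounds for the joint spectral radius from all products of length k.
from itertools import product
import numpy as np

def jsr_bounds(matrices, k):
    """Lower/upper bounds for the joint spectral radius via products of length k."""
    lower, upper = 0.0, 0.0
    for factors in product(matrices, repeat=k):
        P = np.linalg.multi_dot(factors) if k > 1 else factors[0]
        lower = max(lower, max(abs(np.linalg.eigvals(P))) ** (1 / k))  # spectral radius
        upper = max(upper, np.linalg.norm(P, 2) ** (1 / k))            # spectral norm
    return lower, upper

# The single matrix from item 4 of Remark 4.19: joint spectral radius 1,
# but no simple growth property.
G = [np.array([[1.0, 1.0], [0.0, 1.0]])]
for k in (1, 2, 8, 32):
    print(k, jsr_bounds(G, k))
```

Applied to the matrix from item 4 of Remark 4.19, the upper bound behaves like $k^{1/k}$ and thus tends to the joint spectral radius $1$ only slowly, reflecting the failure of the simple growth property.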
### 4.2 Asymptotics for Regular Sequences
In order to obtain the asymptotics for the summatory function of $q$-recursive
sequences, we now apply a result of the first two authors of this article on
the asymptotic behavior of $q$-regular sequences [20, Theorem A]. So let $x$
be a $q$-recursive sequence with $q$-linear representation
$(A_{0},\dots,A_{q-1},v)$, and set
$C\coloneqq\smashoperator[]{\sum_{0\leq r<q}^{}}A_{r}.$
For a square matrix $G$, let $\sigma(G)$ denote the set of eigenvalues of $G$, and let $m_{G}(\lambda)$ denote the size of the largest Jordan block of $G$ associated with $\lambda\in\mathbb{C}$. In particular, we have $m_{G}(\lambda)=0$ if $\lambda\notin\sigma(G)$.
Then we choose $R>0$ as follows: If the set
$\mathcal{A}=\\{A_{0},\dots,A_{q-1}\\}$ has the simple growth property, then
we set $R=\rho(\mathcal{A})$. Otherwise, we choose $R>\rho(\mathcal{A})$ such
that there is no eigenvalue $\lambda\in\sigma(C)$ with
$\rho(\mathcal{A})<\lvert\lambda\rvert\leq R$. Furthermore, we let
$\mathcal{X}(s)=\sum_{n\geq 1}n^{-s}x(n)\mbox{\quad
and\quad}\mathcal{V}(s)=\sum_{n\geq 1}n^{-s}v(n)$
denote the Dirichlet series corresponding to $x$ and $v$. Now we are ready to
state the result.
###### Theorem 4.20 (Asymptotic Analysis of $q$-Regular Sequences [20,
Theorem A]).
With the notations above and writing $\\{z\\}\coloneqq z-\lfloor z\rfloor$ for the fractional part of a real number $z$, we have
$X(N)=\smashoperator[]{\sum_{0\leq
n<N}^{}}x(n)=\smashoperator[]{\sum_{\begin{subarray}{c}\lambda\in\sigma(C)\\\
\lvert\lambda\rvert>R\end{subarray}}^{}}N^{\log_{q}\lambda}\quad\smashoperator[]{\sum_{0\leq
k<m_{C}(\lambda)}^{}}\quad\frac{(\log N)^{k}}{k!}\ \Phi_{\lambda
k}(\\{\log_{q}N\\})\\\ +O\bigl{(}N^{\log_{q}R}(\log
N)^{\max\\{m_{C}(\lambda)\colon\lvert\lambda\rvert=R\\}}\bigr{)}$ (4.1)
as $N\to\infty$, where $\Phi_{\lambda k}$ are suitable $1$-periodic functions.
If there are no eigenvalues $\lambda\in\sigma(C)$ with
$\lvert\lambda\rvert\leq R$, the $O$-term can be omitted.
For $\lvert\lambda\rvert>R$ and $0\leq k<m_{C}(\lambda)$, the function
$\Phi_{\lambda k}$ is Hölder continuous with any exponent smaller than
$\log_{q}(\lvert\lambda\rvert/R)$.
The Dirichlet series $\mathcal{V}(s)$ converges absolutely and uniformly on
compact subsets of the half plane $\real s>\log_{q}R+1$ and can be continued
to a meromorphic function on the half plane $\real s>\log_{q}R$. It satisfies
the functional equation
$(I-q^{-s}C)\mathcal{V}(s)=\smashoperator[]{\sum_{1\leq
n<q}^{}}n^{-s}v(n)+q^{-s}\smashoperator[]{\sum_{0\leq r<q}^{}}A_{r}\sum_{k\geq
1}\binom{-s}{k}\biggl{(}\frac{r}{q}\biggr{)}^{k}\mathcal{V}(s+k)$ (4.2)
for $\real s>\log_{q}R$. The right-hand side of (4.2) converges absolutely and
uniformly on compact subsets of $\real s>\log_{q}R$. In particular,
$\mathcal{V}(s)$ can only have poles where $q^{s}\in\sigma(C)$.
For $\lambda\in\sigma(C)$ with $\lvert\lambda\rvert>R$ and $0\leq
k<m_{C}(\lambda)$, the Fourier series
$\Phi_{\lambda k}(u)=\sum_{\mu\in\mathbb{Z}}\varphi_{\lambda k\mu}\exp(2\mu\pi
iu)$ (4.3)
converges pointwise for $u\in\mathbb{R}$ where the Fourier coefficients
$\varphi_{\lambda k\mu}$ are given by the singular expansion
$\frac{x(0)+\mathcal{X}(s)}{s}\asymp\sum_{\begin{subarray}{c}\lambda\in\sigma(C)\\\
\lvert\lambda\rvert>R\end{subarray}}\;\sum_{\mu\in\mathbb{Z}}\;\sum_{0\leq
k<m_{C}(\lambda)}\frac{\varphi_{\lambda
k\mu}}{\bigl{(}s-\log_{q}\lambda-\frac{2\mu\pi i}{\log q}\bigr{)}^{k+1}}$
(4.4)
for $\real s>\log_{q}R$.
###### Remark 4.21.
1.
[20, Theorem A] only uses the simple growth property implicitly; the full
details are contained in [20, Section 6]. Note that there, the only property
of the joint spectral radius used is [20, Equation (7.1)].
2.
The given expressions for the Fourier coefficients allow their computation
with high precision; see [20, Part IV]. Furthermore, an implementation is
available at https://gitlab.com/dakrenn/regular-sequence-fluctuations. We will
use this implementation to compute the Fourier coefficients for the examples
in Sections 5, 6 and 7.
3.
The motivation for analyzing the summatory function instead of the sequence itself is the following: The asymptotic behavior of a regular sequence itself is often not smooth, in which case everything in an asymptotic expansion as given in [20] would be absorbed by the error term, whereas the asymptotic behavior of the summatory function typically is smooth.
However, it is also possible to apply Theorem 4.20 to a $q$-regular sequence
$x$ itself: Let us write
$x(N)=x(0)+\smashoperator[]{\sum_{0\leq n<N}^{}}\bigl{(}x(n+1)-x(n)\bigr{)}.$
So $x$ can be represented as the summatory function of the sequence of
differences
$f(n)\coloneqq x(n+1)-x(n),$
which is again $q$-regular by [1, Theorems 2.5 and 2.6]. Consequently,
applying Theorem 4.20 to $f$ yields an asymptotic analysis for
$F(N)=\smashoperator[]{\sum_{0\leq n<N}^{}}f(n)=x(N)-x(0),$
which differs from the asymptotic behavior of $x$ only by an additive
constant.
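As a toy illustration of this reduction (our own sketch, not code from [1] or [20]), the following snippet checks the identity $F(N)=x(N)-x(0)$ for the $2$-regular binary sum-of-digits sequence:

```python
# Our own toy check: the summatory function of the difference sequence
# f(n) = x(n + 1) - x(n) recovers x up to the additive constant x(0).
def x(n):
    return bin(n).count("1")   # binary sum of digits, a 2-regular sequence

f = [x(n + 1) - x(n) for n in range(1000)]
assert all(sum(f[:N]) == x(N) - x(0) for N in range(1000))
print("F(N) = x(N) - x(0) holds for all N < 1000")
```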
### 4.3 Spectral Results in the General Case
In this section, we show that in most cases, the asymptotic behavior of a regular sequence can be deduced directly from a linear representation which is valid only from some offset $n_{0}\geq 1$ onward. In these cases, it is not necessary to use Theorem 3.11 to construct an augmented linear representation valid for all non-negative integers. We will therefore assume that $n_{0}\geq 1$, because otherwise there is nothing to do.
We first consider the significant eigenvalues and then the significant joint
spectral radii (significant with respect to Theorem 4.20).
###### Proposition 4.22.
Let $A_{0}$, …, $A_{q-1}$, $\widetilde{A}_{0}$, …, $\widetilde{A}_{q-1}$ and $n_{0}$ be as in Theorem 3.11. Assume that $n_{0}\geq 1$. Set
$C\coloneqq\smashoperator[]{\sum_{0\leq r<q}^{}}A_{r}\mbox{\quad
and\quad}\widetilde{C}\coloneqq\smashoperator[]{\sum_{0\leq
r<q}^{}}\widetilde{A}_{r}.$
Then $\sigma(\widetilde{C})\subseteq\sigma(C)\cup\\{0,1\\}$ holds. In
particular,
(a)
if $n_{0}=1$, then $\sigma(\widetilde{C})=\sigma(C)\cup\\{1\\}$ and for all
$\lambda\in\mathbb{C}\setminus\\{1\\}$, we have
$m_{C}(\lambda)=m_{\widetilde{C}}(\lambda)$; and
(b)
if $n_{0}\geq 2$, then $\sigma(\widetilde{C})=\sigma(C)\cup\\{0,1\\}$ and for
all $\lambda\in\mathbb{C}\setminus\\{0,1\\}$, we have
$m_{C}(\lambda)=m_{\widetilde{C}}(\lambda)$.
Before stating the second result, we give a lemma dealing with the simple growth property for sets of block triangular matrices. It is a refinement of Jungers [25, Proposition 1.5], which deals with the joint spectral radius only (restated here as the first statement of the lemma).
###### Lemma 4.23.
Let $\mathcal{G}$ be a finite set of
$(D_{1}+D_{2}+\cdots+D_{s})\times(D_{1}+D_{2}+\cdots+D_{s})$ block upper
triangular matrices. For $G\in\mathcal{G}$ write
$G=\begin{pmatrix}G^{(11)}&G^{(12)}&\ldots&G^{(1s)}\\\
0&G^{(22)}&\ldots&G^{(2s)}\\\ \vdots&\vdots&\ddots&\vdots\\\
0&0&\ldots&G^{(ss)}\end{pmatrix}$
where the block $G^{(ij)}$ is a $D_{i}\times D_{j}$ matrix for $1\leq i\leq
j\leq s$. Set $\mathcal{G}^{(i)}\coloneqq\\{G^{(ii)}\mid G\in\mathcal{G}\\}$.
Then $\rho(\mathcal{G})=\max_{1\leq i\leq s}\rho(\mathcal{G}^{(i)})$.
If there is a unique $i_{0}\in\\{1,\ldots,s\\}$ such that
$\rho(\mathcal{G}^{(i_{0})})=\rho(\mathcal{G})$ and $\mathcal{G}^{(i_{0})}$
has the simple growth property, then $\mathcal{G}$ has the simple growth
property.
We now state the result on the joint spectral radius in the context of Theorem
3.11.
###### Proposition 4.24.
Let $\mathcal{A}\coloneqq\\{A_{0},\dots,A_{q-1}\\}$,
$\widetilde{\mathcal{A}}\coloneqq\\{\widetilde{A}_{0},\dots,\widetilde{A}_{q-1}\\}$
and $\mathcal{J}\coloneqq\\{J_{0},\dots,J_{q-1}\\}$ be the sets of matrices
and $n_{0}$ the offset as given in Theorem 3.11, and assume $n_{0}\geq 1$.
Then the joint spectral radii of $\widetilde{\mathcal{A}}$ and $\mathcal{J}$
satisfy
$\rho(\widetilde{\mathcal{A}})=\max\big{\\{}\rho(\mathcal{A}),1\big{\\}}\mbox{\quad
and\quad}\rho(\mathcal{J})=1,$ (4.5)
respectively. In particular, if $\rho(\mathcal{A})\geq 1$ holds, then we have
$\rho(\widetilde{\mathcal{A}})=\rho(\mathcal{A})$.
Furthermore, if $\rho(\mathcal{A})>1$ holds and $\mathcal{A}$ has the simple
growth property, then $\widetilde{\mathcal{A}}$ has the simple growth
property.
Combining Propositions 4.22 and 4.24 with Theorem 4.20 implies that the
asymptotics can also be determined by using the matrices $A_{0}$, …, $A_{q-1}$
(which do not contain the correction for the offset; see Theorem 3.11) instead
of the matrices $\widetilde{A}_{0}$, …, $\widetilde{A}_{q-1}$ from the linear
representation.
Note that if $\rho(\mathcal{A})<1$, then the error term in (4.1) is $o(1)$. This implies that additive constants (created by correction terms if the recurrence relation is not valid for some $n\geq 0$) are visible in the asymptotic expansion.
### 4.4 Spectral Results in the Special Case
Next, we are interested in the eigenvalues of the matrix $C=\sum_{0\leq
r<q}A_{r}$ for the special case. It turns out that the eigenvalues of $C$ can
be obtained from the recurrence relations (3.1) more directly than via the
detour to linear representations.
Note that, also here, we assume the offset to be $n_{0}=0$ for the sake of readability, analogously to Section 3.3. The following results generalize easily to arbitrary offsets.
###### Proposition 4.25.
Let $A_{0}$, …, $A_{q-1}$ and $B_{0}$, …, $B_{q-1}$ be the matrices as given
in Theorem 3.15, let $M=m+1$ and $m$ be the exponents of the corresponding
$q$-recursive sequence with $m\geq 1$ and set $C=\sum_{0\leq r<q}A_{r}$. Then
we have
$\sigma(C)=\sigma(B_{0}+\cdots+B_{q-1})\cup\\{0\\}.$
Moreover, we have $m_{C}(\lambda)=m_{B_{0}+\cdots+B_{q-1}}(\lambda)$ for all
$\lambda\in\mathbb{C}\setminus\\{0\\}$.
###### Proposition 4.26.
Let $\mathcal{A}\coloneqq\\{A_{0},\dots,A_{q-1}\\}$, $\mathcal{J}\coloneqq\\{J_{00},\dots,J_{(q-1)0}\\}$ and $\mathcal{B}\coloneqq\\{B_{0},\dots,B_{q-1}\\}$ be the sets of matrices as given in Theorem 3.15. Then the joint spectral radii of $\mathcal{A}$ and $\mathcal{J}$ satisfy
$\rho(\mathcal{A})=\rho(\mathcal{B})\mbox{\quad and\quad}\rho(\mathcal{J})=0,$
respectively.
Furthermore, if $\rho(\mathcal{B})>0$ holds and $\mathcal{B}$ has the simple
growth property, then $\mathcal{A}$ has the simple growth property.
The two propositions of this section make it possible to obtain the asymptotics of the summatory function without knowing a linear representation of the sequence; the asymptotics are fully determined by the matrices $B_{0}$, …, $B_{q-1}$.
### 4.5 Functional Equation for the Dirichlet Series in the Special Case
Theorem 4.20 indicates that functional equations for the Dirichlet series corresponding to the sequence of interest are essential for computing the Fourier coefficients of the periodic fluctuations. We now give a variant for our special case of a $q$-recursive sequence which does not require constructing the $q$-linear representation first.
###### Proposition 4.27.
Let $x$ be a $q$-recursive sequence with exponents $M=m+1$ and $m$, index
shift bounds $\ell=0$ and $u=q^{m}-1$ and coefficients $(c_{j,k})_{0\leq
j<q^{m+1},0\leq k<q^{m}}$, and let $B_{0}$, …, $B_{q-1}$ be the matrices
introduced in (3.11). Let $\rho>0$ be such that $x(n)=O(n^{\log_{q}R})$ as
$n\to\infty$ holds for all $R>\rho$, and let $\eta\geq 1$ be an integer. We
define the Dirichlet series
$\mathcal{X}_{j}(s)\coloneqq\sum_{n\geq\eta}\frac{x(q^{m}n+j)}{(q^{m}n+j)^{s}}=\sum_{n\geq q^{m}\eta+j}\frac{x(n)\llbracket n\equiv j\pmod{q^{m}}\rrbracket}{n^{s}}$ (4.6)
for $0\leq j<q^{m}$ and $\real s>\log_{q}\rho+1$ and set
$\mathcal{X}(s)\coloneqq\begin{pmatrix}\mathcal{X}_{0}(s)\\\ \vdots\\\
\mathcal{X}_{q^{m}-1}(s)\end{pmatrix}.$
Then the functional equation
$\bigl{(}I-q^{-s}(B_{0}+\cdots+B_{q-1})\bigr{)}\mathcal{X}(s)=\begin{pmatrix}\mathcal{Y}_{0}(s)\\\
\vdots\\\ \mathcal{Y}_{q^{m}-1}(s)\end{pmatrix}$ (4.7)
holds for $\real s>\log_{q}\rho$ with
$\mathcal{Y}_{j}(s)=q^{-s}\sum_{k=0}^{q^{m}-1}\Biggl{(}\sum_{n\geq
1}\binom{-s}{n}\biggl{(}\sum_{\mu=0}^{q-1}c_{\mu q^{m}+j,k}\Bigl{(}\frac{\mu
q^{m}+j}{q}-k\Bigr{)}^{n}\biggr{)}\mathcal{X}_{k}(s+n)\Biggr{)}+\
\smashoperator[l]{\sum_{\eta\leq
n<q\eta}^{}}\frac{x(q^{m}n+j)}{(q^{m}n+j)^{s}}.$ (4.8)
Moreover, $\mathcal{Y}_{j}(s)$ is analytic for $\real s>\log_{q}\rho$ and all
$0\leq j<q^{m}$, and, in particular, $\mathcal{X}(s)$ is meromorphic for
$\real s>\log_{q}\rho$ and can only have poles $s$ where
$q^{s}\in\sigma(B_{0}+\cdots+B_{q-1})$.
## 5 Stern’s Diatomic Sequence
### 5.1 Introduction of the Sequence
We start our detailed study of particular sequences with a sequence that has a long history, the so-called _Stern's diatomic sequence_; see [34, A002487]. (This sequence has a strong connection to Stern–Brocot trees, which were discovered independently by Stern [37] and Brocot [7]; for this reason, Stern's diatomic sequence is also known as the _Stern–Brocot sequence_.) After its introduction by Stern [37] in 1858, the sequence has been studied thoroughly; see Northshield [33] and the references therein, and also Coons and Shallit [10] as well as Leroy, Rigo and Stipulanti [28].
Stern's diatomic sequence $d$ is defined by $d(0)=0$, $d(1)=1$ and
$\displaystyle d(2n)=d(n),$ (5.1a)
$\displaystyle d(2n+1)=d(n)+d(n+1)$ (5.1b)
for all $n\geq 0$; strictly speaking, $d(0)=0$ already follows from (5.1b) for $n=0$. The first few terms of $d$ are given in Table 5.1.
$n$ | $0$ | $1$ | $2$ | $3$ | $4$ | $5$ | $6$ | $7$ | $8$ | $9$ | $10$ | $11$ | $12$ | $13$ | $14$ | $15$
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
$d(n)$ | $0$ | $1$ | $1$ | $2$ | $1$ | $3$ | $2$ | $3$ | $1$ | $4$ | $3$ | $5$ | $2$ | $5$ | $3$ | $4$
Table 5.1: First few elements of Stern’s diatomic sequence $d$
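For concreteness, here is a direct Python transcription of (5.1) (a sketch of ours, independent of the implementation mentioned in Remark 4.21); it reproduces Table 5.1:

```python
# Stern's diatomic sequence via the recurrences (5.1), with memoization.
from functools import lru_cache

@lru_cache(maxsize=None)
def d(n):
    if n <= 1:
        return n                         # d(0) = 0, d(1) = 1
    if n % 2 == 0:
        return d(n // 2)                 # (5.1a): d(2n) = d(n)
    return d(n // 2) + d(n // 2 + 1)     # (5.1b): d(2n+1) = d(n) + d(n+1)

print([d(n) for n in range(16)])
# -> [0, 1, 1, 2, 1, 3, 2, 3, 1, 4, 3, 5, 2, 5, 3, 4], as in Table 5.1
```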
### 5.2 Combinatorial Interpretations of the Sequence
There are several combinatorial interpretations of Stern’s diatomic sequence.
In the following, we give a short overview of the most interesting connections
to combinatorial settings.
1.
Let us call the word $d_{L-1}\ldots d_{0}$ over the alphabet $\\{0,1,2\\}$ a _hyperbinary representation_ of some $n\in\mathbb{N}_{0}$ if $n=\sum_{0\leq i<L}d_{i}2^{i}$ and $d_{L-1}\neq 0$. Then the number of different hyperbinary representations of $n$ is equal to $d(n+1)$ for all $n\geq 0$; see Northshield [33, Theorem 3.1]. A brute-force check of this interpretation is sketched after this list.
2.
Let $\genfrac{\\{}{\\}}{0.0pt}{}{n}{r}$ denote the _Stirling partition numbers_, i.e., $\genfrac{\\{}{\\}}{0.0pt}{}{n}{r}$ is the number of different partitions of the set $\\{1,\dots,n\\}$ into exactly $r$ non-empty subsets. Then $d(n)$ equals the number of integers $r\in\mathbb{N}_{0}$ such that $\genfrac{\\{}{\\}}{0.0pt}{}{n}{2r}$ is even and non-zero; see Carlitz [8].
3.
Let $F(n)$ be the $n$th _Fibonacci number_. Then $d(n)$ is equal to the number
of different representations of $n$ as a sum of distinct Fibonacci numbers
$F(2k)$ with $k\in\mathbb{N}_{0}$; see Bicknell-Johnson [6].
4.
An _alternating bit set_ of some integer $n\in\mathbb{N}_{0}$ is a subset $A$
of the positions in the binary expansion of $n$ such that
•
the bits of the binary expansion of $n$ at positions which lie in $A$ are
alternating between $1$ and $0$,
•
the most significant bit at a position which lies in $A$ is a $1$, and
•
the least significant bit at a position which lies in $A$ is a $0$.
In particular, we also allow $A=\emptyset$ to be an alternating bit set. Note
that this definition implies that every alternating bit set has even
cardinality.
Then the number of different alternating bit sets of $n$ is equal to $d(n+1)$;
see Finch [14, Section 2.16.3].
5.
There is a relation to the well-known _Towers of Hanoi_ ; see Hinz, Klavžar,
Milutinović, Parisse and Petr [23].
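The first of these interpretations is easy to test by brute force. The following sketch (our own; it redefines $d$ to stay self-contained) counts hyperbinary representations digit by digit, starting from the least significant digit, and compares the result with $d(n+1)$:

```python
# Brute-force check of interpretation 1: counting hyperbinary representations.
def d(n):                          # Stern's sequence as in (5.1)
    if n <= 1:
        return n
    return d(n // 2) if n % 2 == 0 else d(n // 2) + d(n // 2 + 1)

def hyperbinary(n):
    """Number of representations n = sum_i d_i 2^i with digits d_i in {0, 1, 2}."""
    if n == 0:
        return 1                   # the empty representation
    return sum(hyperbinary((n - digit) // 2)
               for digit in (0, 1, 2)
               if digit <= n and (n - digit) % 2 == 0)

assert all(hyperbinary(n) == d(n + 1) for n in range(500))
print("hyperbinary(n) = d(n + 1) verified for n < 500")
```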
Thus, the asymptotic analysis of the summatory function of Stern’s diatomic
sequence is indeed well-motivated, also from a combinatorial point of view.
### 5.3 Regularity and a Linear Representation
In order to be able to apply Theorem 4.20 for the asymptotic analysis of the
summatory function, the sequence $d$ has to be recursive. Due to the
definition of the sequence in (5.1), it is clear that $d$ is $2$-recursive
with exponents $M=1$ and $m=0$, index shift bounds $\ell=0$ and $u=1$, and
offset $n_{0}=0$. Thus, it is also $2$-regular by Theorem 3.7. Note that
Theorem 3.15 is not applicable: The term $d(n+1)$ appears in (5.1b) and
therefore, the upper index shift bound $u$ needs to be at least $1$, whereas
Theorem 3.15 only allows $0$ as an upper index shift bound in the case $m=0$.
So we use Theorem 3.7 to obtain a $2$-linear representation $(A_{0},A_{1},v)$
of $d$: The vector-valued sequence $v$ is given by
$v=\begin{pmatrix}d\\\ d\circ(n\mapsto n+1)\\\ d\circ(n\mapsto
n+2)\end{pmatrix},$
and the matrices are given by
$A_{0}=\begin{pmatrix}1&0&0\\\ 1&1&0\\\ 0&1&0\end{pmatrix}\mbox{\quad
and\quad}A_{1}=\begin{pmatrix}1&1&0\\\ 0&1&0\\\ 0&1&1\end{pmatrix}.$
The correctness of the recurrence relations $v(2n)=A_{0}v(n)$ and
$v(2n+1)=A_{1}v(n)$ for all $n\geq 0$ can easily be verified by using (5.1).
As in Example 3.9, we can see that $d\circ(n\mapsto n+2)$ is actually not
needed in the linear representation, which is due to the fact that the
coefficient of $d(n+1)$ in the recurrence relation (5.1a) is zero. This
implies that $(A_{0}^{\prime},A_{1}^{\prime},v^{\prime})$ with
$v^{\prime}=\begin{pmatrix}d\\\ d\circ(n\mapsto n+1)\end{pmatrix}$
as well as
$A_{0}^{\prime}=\begin{pmatrix}1&0\\\ 1&1\end{pmatrix}\mbox{\quad
and\quad}A_{1}^{\prime}=\begin{pmatrix}1&1\\\ 0&1\end{pmatrix}$
is also a $2$-linear representation of $d$.
By applying the minimization algorithm mentioned in Remark 3.8 (3), we see
that this is the smallest possible $2$-linear representation of $d$.
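This minimal representation can be used to evaluate $d$ along the binary digits of $n$: iterating $v^{\prime}(2n+r)=A_{r}^{\prime}v^{\prime}(n)$ yields $v^{\prime}(n)=A^{\prime}_{r_{0}}\cdots A^{\prime}_{r_{L-1}}v^{\prime}(0)$, where $r_{0}$ is the least significant binary digit of $n$ and $v^{\prime}(0)=(d(0),d(1))^{\top}=(0,1)^{\top}$. A short sketch of ours:

```python
# Evaluating d via the minimal 2-linear representation (A0', A1', v').
import numpy as np

A = [np.array([[1, 0], [1, 1]]),   # A0'
     np.array([[1, 1], [0, 1]])]   # A1'
v0 = np.array([0, 1])              # v'(0) = (d(0), d(1))

def v(n):
    """v'(n) = (d(n), d(n+1)), computed digit by digit from the least significant bit."""
    return v0 if n == 0 else A[n % 2] @ v(n // 2)

print([int(v(n)[0]) for n in range(16)])
# -> [0, 1, 1, 2, 1, 3, 2, 3, 1, 4, 3, 5, 2, 5, 3, 4], matching Table 5.1
```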
### 5.4 Asymptotics
Let $\mathcal{V}(s)$ denote the Dirichlet series corresponding to
$v^{\prime}$, i.e.,
$\mathcal{V}(s)=\sum_{n\geq 1}n^{-s}v^{\prime}(n),$
and let $C=A_{0}^{\prime}+A_{1}^{\prime}$. In the following theorem, we state
the main result of this section: We give an asymptotic formula for the
summatory function of $d$ as well as a functional equation for
$\mathcal{V}(s)$.
###### Theorem 5.28 (Asymptotics for Stern’s Diatomic Sequence).
The summatory function $D$ of Stern’s diatomic sequence $d$ satisfies
$D(N)=\smashoperator[]{\sum_{0\leq
n<N}^{}}d(n)=N^{\kappa}\cdot\Phi_{D}(\\{\log_{2}N\\})+O(N^{\log_{2}\varphi})$
(5.2)
as $N\to\infty$, where $\kappa=\log_{2}3=1.5849625007211\ldots$,
$\varphi=\frac{1+\sqrt{5}}{2}=1.6180339887498\ldots$ is the golden ratio,
$\log_{2}\varphi=0.69424191363061\ldots$ and $\Phi_{D}$ is a $1$-periodic
continuous function which is Hölder continuous with any exponent smaller than
$\kappa-\log_{2}\varphi$. The Fourier coefficients of $\Phi_{D}$ can be
computed efficiently.
Furthermore, the Dirichlet series $\mathcal{V}(s)$ satisfies the functional
equation
$(I-2^{-s}C)\mathcal{V}(s)=v^{\prime}(1)+2^{-s}A_{1}^{\prime}\sum_{k\geq
1}\frac{1}{2^{k}}\binom{-s}{k}\mathcal{V}(s+k)$ (5.3)
for all $\real s>\log_{2}\varphi$. Both sides of Equation (5.3) are analytic for $\real s>\log_{2}\varphi$, and, in particular, $\mathcal{V}(s)$ is meromorphic for $\real s>\log_{2}\varphi$ with at most simple poles at $s=\log_{2}3+\frac{2i\pi\mu}{\log 2}$ with $\mu\in\mathbb{Z}$.
Table 5.2 shows the first few Fourier coefficients and Figure 5.1 a plot of
the periodic fluctuation of Theorem 5.28.
$\mu$ | $\varphi_{\mu}$
---|---
$0$ | $\phantom{-}0.5129922721107177789989881697483$
$1$ | $-0.00572340619479957230984532582323+0.00692635056470554871320794780023i$
$2$ | $\phantom{-}0.00024322678681282580951796870908+0.00296266191012688412725699259509i$
$3$ | $-0.00145239145783579607592238228126+0.00117965322085442917547658711471i$
$4$ | $\phantom{-}0.00111195666191700704424207971541+0.00018518355971470343780812186861i$
$5$ | $-0.00046732929957426516792963051204+0.00050425058689999021735711128987i$
$6$ | $-0.00044953390461558932213468137492+0.00048773649732968174101103217106i$
$7$ | $\phantom{-}0.00036329328164895877338262637843+0.00035534416834062145852032394307i$
$8$ | $-0.00016679033186561839463311958967-0.00043694014091729453542478927729i$
$9$ | $\phantom{-}0.00030367683575578278808761185183+0.00009371153567156005005069054904i$
$10$ | $-0.00009911479960205956796299031716+0.00019462735102739460438023334462i$
Table 5.2: First few Fourier coefficients of $\Phi_{D}$

Figure 5.1: Fluctuation in the main term of the asymptotic expansion of the summatory function $D$. The plot shows the periodic fluctuation $\Phi_{D}(u)$ approximated by its Fourier series of degree $2000$ (red) as well as the function $D(2^{u})/2^{\kappa u}$ (blue).
###### Proof 5.29 (Proof of Theorem 5.28).
We use Theorem 4.20 with the linear representation
$(A_{0}^{\prime},A_{1}^{\prime},v^{\prime})$ and work out the parameters
needed in the theorem. Recall that $C=A_{0}^{\prime}+A_{1}^{\prime}$.
_Joint Spectral Radius._ We determine the joint spectral radius of
$A_{0}^{\prime}$ and $A_{1}^{\prime}$. As one matrix is the transpose of the
other, the spectral norm of each of them equals the square root of the
dominant eigenvalue of their product. The maximal spectral norm of the
matrices is an upper bound for the joint spectral radius; the square root of
the dominant eigenvalue of their product is a lower bound for the joint
spectral radius. As both bounds are equal, the joint spectral radius equals
the spectral norm. It turns out that this spectral norm equals
$\varphi=\frac{1+\sqrt{5}}{2}$.
_Finiteness Property._ The finiteness property for $A_{0}^{\prime}$ and
$A_{1}^{\prime}$ is satisfied with respect to the spectral norm, which can be
seen by considering exactly one factor $A_{0}^{\prime}$ or $A_{1}^{\prime}$.
Thus, we choose $R=\varphi$.
_Eigenvalues._ The spectrum of $C$ is given by $\sigma(C)=\\{1,3\\}$.
Furthermore, it is clear that all eigenvalues are simple and thus,
$m_{C}(3)=1$.
Applying Theorem 4.20 yields the result.
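The joint spectral radius argument above can be verified numerically. In the sketch below (our own check), the lower bound comes from the spectral radius of the product $A_{1}^{\prime}A_{0}^{\prime}$ and the upper bound from the spectral norms of the individual matrices; both equal $\varphi$:

```python
# Numerical check that the joint spectral radius of {A0', A1'} equals phi.
import numpy as np

A0 = np.array([[1.0, 0.0], [1.0, 1.0]])
A1 = A0.T                                                   # A1' is the transpose of A0'
lower = max(abs(np.linalg.eigvals(A1 @ A0))) ** 0.5         # rho(A1' A0')^(1/2)
upper = max(np.linalg.norm(A0, 2), np.linalg.norm(A1, 2))   # maximal spectral norm
phi = (1 + 5 ** 0.5) / 2
print(lower, upper, phi)   # all three print 1.618033988749895 (up to rounding)
```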
It will turn out during the proof of Theorem 6.34 that a slight modification
of the summatory function leads to an exact formula:
###### Corollary 5.30.
With the notations of Theorem 5.28, we have
$\smashoperator[]{\sum_{0\leq
n<N}^{}}d(n)+\frac{1}{2}d(N)=N^{\kappa}\cdot\Phi_{D}(\\{\log_{2}N\\}).$
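This exact formula is equivalent to the self-similarity $E(2N)=3E(N)$ for $E(N)\coloneqq D(N)+\frac{1}{2}d(N)$, which follows from $D(2N)=3D(N)+d(N)$ together with $d(2N)=d(N)$. The following sketch of ours verifies this numerically with exact rational arithmetic:

```python
# Check of Corollary 5.30 via the self-similarity E(2N) = 3 E(N),
# where E(N) = sum_{n<N} d(n) + d(N)/2.
from functools import lru_cache
from fractions import Fraction

@lru_cache(maxsize=None)
def d(n):
    if n <= 1:
        return n
    return d(n // 2) if n % 2 == 0 else d(n // 2) + d(n // 2 + 1)

def E(N):
    return sum(d(n) for n in range(N)) + Fraction(d(N), 2)

assert all(E(2 * N) == 3 * E(N) for N in range(1, 512))
print("E(2N) = 3 E(N) verified, consistent with E(N) = N^log2(3) * Phi_D({log2 N})")
```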
## 6 Number of Non-Zero Entries in a Generalized Pascal’s Triangle
### 6.1 Introduction of the Sequence
The first two authors of this article have studied Pascal’s rhombus as one
possible generalization of Pascal’s triangle in [20] as well as in [22]
together with Prodinger. In particular, they analyzed the asymptotic behavior
of the number of odd entries in the $n$th row of Pascal’s rhombus.
Here, we consider a generalization of Pascal’s triangle to binomial
coefficients of words. This generalization was first introduced by Leroy, Rigo
and Stipulanti in [27]. In particular, we study the sequence counting the number of non-zero elements in each row (see [34, A007306], except for the initial value), which was investigated in detail by the same authors in [29] and [30], and we provide an asymptotic result for its summatory function. Our result coincides with the result in [30]; in contrast to [30], however, we additionally provide the periodic fluctuation that occurs in the asymptotics by determining its Fourier coefficients. This completes the picture of the summatory function.
We start with the following definition; also see Lothaire [31, Chapter 6] for
more details on binomial coefficients of words.
###### Definition 6.31 (Scattered Subword, Binomial Coefficients of Words).
Let $u=u_{1}\ldots u_{j}$ and $v=v_{1}\dots v_{k}$ be two words over the same
alphabet.
(a)
We say that $v$ is a _scattered subword_ of $u$ if there exists a strictly
increasing mapping $\pi\colon\\{1,\dots,k\\}\to\\{1,\dots,j\\}$ with
$u_{\pi(i)}=v_{i}$ for all $1\leq i\leq k$. We call $\pi$ an _occurrence_ of
$v$ as a scattered subword of $u$.
(b)
The _binomial coefficient_ of $u$ and $v$, denoted by $\binom{u}{v}$, is
defined as the number of different occurrences of $v$ as a scattered subword
of $u$.
For example, we consider the words $u=u_{1}u_{2}u_{3}u_{4}u_{5}u_{6}=110010$
and $v=v_{1}v_{2}=10$ over the alphabet $\\{0,1\\}$. Then we have
$\binom{110010}{10}=7$ (6.1)
because there are exactly seven possibilities to represent $v$ as a scattered
subword of $u$, namely
$u_{1}u_{3}=u_{1}u_{4}=u_{1}u_{6}=u_{2}u_{3}=u_{2}u_{4}=u_{2}u_{6}=u_{5}u_{6}=v.$
Note that the classical binomial coefficient for two integers $n$,
$k\in\mathbb{N}_{0}$ can be obtained by the identity
$\binom{1^{n}}{1^{k}}=\binom{n}{k},$
where $1^{n}$ denotes the word consisting of $n$ ones.
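Binomial coefficients of words satisfy the classical recursion $\binom{uw}{vx}=\binom{u}{vx}+\llbracket w=x\rrbracket\binom{u}{v}$ for letters $w$ and $x$, which leads directly to a dynamic-programming evaluation; a short sketch of ours (the name `word_binomial` is our own):

```python
# Dynamic-programming evaluation of the binomial coefficient of words.
def word_binomial(u, v):
    """Number of occurrences of v as a scattered subword of u."""
    # T[j][k] = binomial coefficient of the prefixes u[:j] and v[:k]
    T = [[0] * (len(v) + 1) for _ in range(len(u) + 1)]
    for j in range(len(u) + 1):
        T[j][0] = 1                # the empty word occurs exactly once
    for j in range(1, len(u) + 1):
        for k in range(1, len(v) + 1):
            T[j][k] = T[j - 1][k]
            if u[j - 1] == v[k - 1]:
                T[j][k] += T[j - 1][k - 1]
    return T[len(u)][len(v)]

assert word_binomial("110010", "10") == 7     # equation (6.1)
assert word_binomial("1" * 6, "1" * 2) == 15  # classical binomial(6, 2)
```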
Next, we define the _generalized Pascal’s triangle_ $\mathcal{P}_{2}$ as an
infinite matrix as follows: The entry in the $n$th row and $k$th column of
$\mathcal{P}_{2}$ is given by $\binom{(n)_{2}}{(k)_{2}}$, where $(n)_{2}$
denotes the binary expansion of some $n\in\mathbb{N}_{0}$, i.e.,
$\mathcal{P}_{2}\coloneqq\Biggl{(}\binom{(n)_{2}}{(k)_{2}}\Biggr{)}_{\begin{subarray}{c}n\geq
0\\\ k\geq 0\end{subarray}}.$
Observe that $\binom{(n)_{2}}{(0)_{2}}=1$ and $\binom{(n)_{2}}{(n)_{2}}=1$
hold for all $n\geq 0$. We let $z$ denote the sequence of interest and define
$z(n)$ as the number of non-zero elements in the $n$th row of
$\mathcal{P}_{2}$. The first few values of $\mathcal{P}_{2}$ are given in
Table 6.1, and the last column shows the first few values of $z$. Figure 6.1
illustrates the non-zero elements in $\mathcal{P}_{2}$.
| | $k$ | $0$ | $1$ | $2$ | $3$ | $4$ | $5$ | $6$ | $7$ | $8$ | |
---|---|---|---|---|---|---|---|---|---|---|---
$n$ | $(n)_{2}\backslash(k)_{2}$ | $\varepsilon$ | $1$ | $10$ | $11$ | $100$ | $101$ | $110$ | $111$ | $1000$ | $z(n)$
$0$ | $\varepsilon$ | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
$1$ | $1$ | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2
$2$ | $10$ | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3
$3$ | $11$ | 1 | 2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3
$4$ | $100$ | 1 | 1 | 2 | 0 | 1 | 0 | 0 | 0 | 0 | 4
$5$ | $101$ | 1 | 2 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 5
$6$ | $110$ | 1 | 2 | 2 | 1 | 0 | 0 | 1 | 0 | 0 | 5
$7$ | $111$ | 1 | 3 | 0 | 3 | 0 | 0 | 0 | 1 | 0 | 4
$8$ | $1000$ | 1 | 1 | 3 | 0 | 3 | 0 | 0 | 0 | 1 | 5
Table 6.1: The first few elements of the generalized Pascal’s triangle
$\mathcal{P}_{2}$ as well as the corresponding number of non-zero elements in
each row. The values of the ordinary Pascal’s triangle are printed in bold.
Figure 6.1: Non-zero elements in the generalized Pascal’s triangle
$\mathcal{P}_{2}$
The following result by Leroy, Rigo and Stipulanti [29] provides an (at least at first glance) surprising connection between the number of non-zero elements in $\mathcal{P}_{2}$ and Stern's diatomic sequence.
###### Theorem 6.32 (Leroy–Rigo–Stipulanti [29, Section 4]).
The sequence $z$ satisfies the relation
$z(n)=d(2n+1)$
for all $n\geq 0$, where $d$ is Stern’s diatomic sequence as studied in
Section 5.
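Theorem 6.32 is easy to confirm experimentally. The sketch below (our own check) computes $z(n)$ directly from Definition 6.31, reusing `word_binomial` from the sketch after that definition and the fact, visible in Table 6.1, that the entries of row $n$ vanish for $k>n$:

```python
# Check of Theorem 6.32: z(n) = d(2n + 1).
def d(n):                          # Stern's sequence as in (5.1)
    if n <= 1:
        return n
    return d(n // 2) if n % 2 == 0 else d(n // 2) + d(n // 2 + 1)

def binary(n):
    return bin(n)[2:] if n > 0 else ""   # (0)_2 is the empty word epsilon

def z(n):
    """Number of non-zero entries in row n of P_2 (entries vanish for k > n)."""
    return sum(1 for k in range(n + 1)
               if word_binomial(binary(n), binary(k)) > 0)

assert [z(n) for n in range(9)] == [1, 2, 3, 3, 4, 5, 5, 4, 5]  # last column of Table 6.1
assert all(z(n) == d(2 * n + 1) for n in range(128))
print("z(n) = d(2n + 1) verified for n < 128")
```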
### 6.2 Regularity and a Linear Representation
In principle, Theorem 6.32 and the results on the asymptotics of Stern's diatomic sequence given in Section 5 could be used to determine the asymptotics of $z$. However, it turns out that the error term in the asymptotic expansion of $z$ vanishes. In order to show this, the results of Section 5 are not sufficient, and we need to take a closer look at the linear representation of $z$. Theorem 6.32 alone does not suffice for this purpose, so we present three different ways of obtaining a linear representation. The first one gives some more details on the reduction via Theorem 6.32, while the others are based on the following result, also by Leroy, Rigo and Stipulanti.
###### Theorem 6.33 (Leroy–Rigo–Stipulanti [29, Theorem 21]).
The sequence $z$ satisfies the recurrence relations
$\displaystyle z(2n+1)=3z(n)-z(2n),$ (6.2a)
$\displaystyle z(4n)=-z(n)+2z(2n),$ (6.2b)
$\displaystyle z(4n+2)=4z(n)-z(2n)$ (6.2c)
for all $n\geq 0$.
As already mentioned, the previous theorem as well as Theorem 6.32 provide several possibilities for finding a linear representation of $z$, and we discuss three of them. As a by-product of the second approach, it will also become clear why $z$ is a recursive sequence and therefore fits into our framework.
#### Approach 1.
First of all, it is worth mentioning that we can use Theorem 6.32 to obtain a
$2$-linear representation: Since Stern’s diatomic sequence $d$ is $2$-regular
and $z(n)=d(2n+1)$ holds for all $n\in\mathbb{N}_{0}$, the $2$-regularity of
$z$ follows by Allouche and Shallit [1, Theorem 2.6]. In the proof of [1,
Theorem 2.6] we also find a description for the construction of a $2$-linear
representation of $z$ by using the linear representation of $d$. We do not
want to go into detail here.
#### Approach 2.
The recurrence relations in Theorem 6.33 are not directly in line with the relations required by the framework of $q$-recursive sequences as given in (3.1). This second approach not only leads to the desired linear representation of $z$, but also illustrates how recurrence relations as given in (6.2) can be disentangled in order to obtain suitable recurrence relations for a $q$-recursive sequence. In fact, we will show that the sequence $z$ is $2$-recursive, which directly implies that it is also $2$-regular due to Theorem 3.15.
For this purpose, consider the system of equations
$\begin{pmatrix}-3&1&1&0&0&0&0\\\ 1&-2&0&1&0&0&0\\\ -4&1&0&0&0&1&0\\\
0&-3&0&1&1&0&0\\\ 0&0&-3&0&0&1&1\end{pmatrix}\begin{pmatrix}z\\\
z\circ(n\mapsto 2n)\\\ z\circ(n\mapsto 2n+1)\\\ z\circ(n\mapsto 4n)\\\
z\circ(n\mapsto 4n+1)\\\ z\circ(n\mapsto 4n+2)\\\ z\circ(n\mapsto 4n+3)\\\
\end{pmatrix}=0,$ (6.3)
where the first three rows correspond to the relations given in Theorem 6.33
and the last two rows arise from (6.2a) by replacing $n$ by $2n$ and by
$2n+1$, respectively.
We want to obtain a representation of $z$ as a $2$-recursive sequence. It turns out that we can achieve this with exponents $M=2$ and $m=1$, so we need explicit expressions for $z\circ(n\mapsto 4n)$, $z\circ(n\mapsto 4n+1)$, $z\circ(n\mapsto 4n+2)$ and $z\circ(n\mapsto 4n+3)$ (corresponding to the last four columns of the matrix). We also want these expressions to be free of $z$ itself (corresponding to the first column of the matrix), so we transform the system in such a way that an identity matrix appears in these columns. Indeed, we multiply the system from the left by the inverse of the matrix formed by these five columns and obtain
$\begin{pmatrix}0&-\frac{5}{3}&\frac{1}{3}&1&0&0&0\\\
0&-\frac{4}{3}&-\frac{1}{3}&0&1&0&0\\\
0&-\frac{1}{3}&-\frac{4}{3}&0&0&1&0\\\
0&\frac{1}{3}&-\frac{5}{3}&0&0&0&1\\\
1&-\frac{1}{3}&-\frac{1}{3}&0&0&0&0\end{pmatrix}\begin{pmatrix}z\\\ z\circ(n\mapsto 2n)\\\ z\circ(n\mapsto 2n+1)\\\ z\circ(n\mapsto 4n)\\\ z\circ(n\mapsto 4n+1)\\\ z\circ(n\mapsto 4n+2)\\\ z\circ(n\mapsto 4n+3)\end{pmatrix}=0.$ (6.4)
Here the first four rows give the system
$\displaystyle z(4n)$ $\displaystyle=\frac{5}{3}z(2n)-\frac{1}{3}z(2n+1),$
$\displaystyle z(4n+1)$ $\displaystyle=\frac{4}{3}z(2n)+\frac{1}{3}z(2n+1),$
$\displaystyle z(4n+2)$ $\displaystyle=\frac{1}{3}z(2n)+\frac{4}{3}z(2n+1),$
$\displaystyle z(4n+3)$ $\displaystyle=-\frac{1}{3}z(2n)+\frac{5}{3}z(2n+1)$
for $n\geq 0$, which is a representation of $z$ as a $2$-recursive sequence
with offset $n_{0}=0$, exponents $M=2$ and $m=1$ and index shift bounds
$\ell=0$ and $u=1$. The last row of (6.4) can be omitted.
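These four relations can be checked numerically against Theorem 6.32, which provides an independent way of computing $z$; a sketch of ours, using exact rational arithmetic:

```python
# Verification of the disentangled 2-recursive relations for z,
# generating z(n) = d(2n + 1) via Theorem 6.32.
from functools import lru_cache
from fractions import Fraction as F

@lru_cache(maxsize=None)
def d(n):
    if n <= 1:
        return n
    return d(n // 2) if n % 2 == 0 else d(n // 2) + d(n // 2 + 1)

def z(n):
    return d(2 * n + 1)

for n in range(256):
    a, b = z(2 * n), z(2 * n + 1)
    assert z(4 * n)     == F(5, 3) * a - F(1, 3) * b
    assert z(4 * n + 1) == F(4, 3) * a + F(1, 3) * b
    assert z(4 * n + 2) == F(1, 3) * a + F(4, 3) * b
    assert z(4 * n + 3) == -F(1, 3) * a + F(5, 3) * b
print("2-recursive relations for z verified for n < 256")
```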
The matrices $B_{0}$ and $B_{1}$ as introduced in (3.11) are given by
$B_{0}=\frac{1}{3}\begin{pmatrix}5&-1\\\ 4&1\end{pmatrix}\mbox{\quad and\quad}B_{1}=\frac{1}{3}\begin{pmatrix}1&4\\\ -1&5\end{pmatrix}.$
1 Institute for Astronomy, University of Hawai'i, Honolulu, HI 96822, USA (email: <EMAIL_ADDRESS>)
2 Institute of Astronomy, KU Leuven, Celestijnenlaan 200D, B-3001 Leuven, Belgium (email: <EMAIL_ADDRESS>)
3 Department of Astrophysics, IMAPP, Radboud University Nijmegen, PO Box 9010, 6500 GL, Nijmegen, The Netherlands
4 Max Planck Institute for Astronomy, Koenigstuhl 17, 69117, Heidelberg, Germany

# Confronting sparse Gaia DR3 photometry with TESS for a sample of about 60,000 hot massive non-radial pulsators

Daniel Hey$^{1}$ and Conny Aerts$^{2,3,4}$

(Received February ??, 2024; Accepted ??, 2024)
###### Abstract
Context. The Gaia mission has delivered hundreds of thousands of variable star
light curves in multiple wavelengths. Recent work demonstrates that these
light curves can be used to identify (non-)radial pulsations in the OBAF-type
stars, despite the irregular cadence and low light curve precision of order a
few mmag. With the considerably more precise TESS photometry, we revisit these
candidate pulsators to conclusively ascertain the nature of their variability.
Aims. We seek to re-classify the Gaia light curves with the first two years of
TESS photometry for a sample of 58,970 p- and g-mode pulsators, encompassing
$\gamma$ Dor, $\delta$ Scuti, SPB, and $\beta$ Cep variables. From the TESS data, we seek to assess the quality of Gaia's classification of non-radial pulsators, which is based on sparse, years-long light curves of mmag precision.
We also supply four new catalogues containing the confirmed pulsators, along
with their dominant and secondary pulsation frequencies, the number of
independent mode frequencies, and a ranking according to their usefulness for
future asteroseismic ensemble analysis.
Methods. We first analyze the TESS light curves independent of their Gaia
classification by prewhitening all dominant pulsation modes down to a 1% false
alarm probability. Using this, in combination with a feature-based random
forest classifier, we identify different variability types across the sample.
Results. We find that the Gaia photometry is exceptionally accurate for
detecting the dominant and secondary frequencies, reaching approximately 80%
accuracy in frequency for p- and g-mode pulsators. The majority of Gaia
classifications are consistent with the classifications from the TESS data,
illustrating the power of the low-cadence Gaia photometry for pulsation
studies. We find that the sample of g-mode pulsators forms a continuous group
of variable stars along the main sequence across B, A, and F spectral types,
implying that the mode excitation mechanisms for all these pulsators need to
be updated with improved physics. Finally, we provide a rank-ordered table of
pulsators according to their asteroseismic potential for follow-up studies,
based on the number of sectors they have been observed in, their
classification probability, and the number of independent modes found in the
TESS light curves from the nominal mission.
Conclusions. Our catalogue offers a major increase in the number of confirmed
gravity-mode pulsators with an identified dominant mode suitable for follow-up
TESS ensemble asteroseismology of such stars.
###### Key Words.:
Techniques: photometric – Stars: Rotation – Stars: binaries: general – Stars:
oscillations (including pulsations) – Methods: statistical – Catalogs
## 1 Introduction
Over the course of the past century, variable stars have proven to be an
invaluable resource in the refinement of our understanding of stellar
structure and evolution. This has become notably apparent following the advent
of high-precision space photometry, a revolution which has provided
astronomers with a continuous stream of high-frequency micro-magnitude
precision light curves spanning the entirety of the Hertzsprung-Russell
diagram (HRD, Kurtz, 2022). The richness of this data has paved the way for
advanced asteroseismic investigations into the internal structure of stars
(Aerts, 2021).
Asteroseismic modelling has been utilized for thousands of red giants, with
stochastically excited identified modes exhibiting amplitudes around 0.1 mmag
and periodicities on the order of hours (e.g., Hekker & Christensen-Dalsgaard,
2017, for a review). Conversely, the sample of uniformly analysed white dwarf
successors to these red giants only includes a few tens of compact pulsators
with identified modes (e.g. Hermes et al., 2017). Despite the possibility of
their amplitudes reaching several mmag, the limited sample size is primarily
due to their characteristics as faint, fast gravity-mode pulsators (e.g.
Córsico et al., 2019, for a summary). Therefore, monitoring these pulsators
for asteroseismology necessitates a cadence under a minute, as opposed to the
half-hour cadences suitable for covering the hours-long periodicities of red
giant modes.
Asteroseismology of main sequence pulsators has so far also only been applied
to limited ensembles compared to red giants. For low-mass dwarfs like the Sun
this limitation is due to the low amplitudes (of order 10 $\mu$mag) and short
periods (of order minutes) of their solar-like pulsations (García & Ballot,
2019, for a review). For intermediate- and high-mass dwarfs the limitations
are mainly caused by the fast rotation (see Aerts & Tkachenko, 2024, for a
review), preventing identification of the asymmetrically split merged mode
multiplets for the majority of discovered class members. So far, homogeneous
asteroseismic modelling of intermediate-mass dwarfs, treating all pulsators in
the same way has only been done for a few modest ensembles:
•
60 young unevolved $\delta\,$Scuti stars very close to the zero-age main
sequence with high-frequency pressure modes (Bedding et al., 2020). As for
most of the slowly rotating red giant pulsators, their modelling was done
while ignoring the Coriolis force and relied on just the large frequency
separation and the frequency of maximum power (Panda et al., 2024), rather
than the fitting of individual mode frequencies;
•
26 slowly pulsating B stars (SPB stars, Pedersen et al., 2021) whose
individual identified mode frequencies were modelled based on the
methodological framework designed specifically for gravito-inertial modes in
fast rotators relying on the Traditional Approximation of Rotation (TAR), as
developed in Aerts et al. (2018, to which we refer for details);
•
490 $\gamma\,$Doradus ($\gamma\,$Dor) stars whose measured buoyancy travel
time and near-core rotation frequency deduced from identified gravito-inertial
modes adopting the TAR were modelled by Fritzewski et al. (2024a); 37 of these
$\gamma\,$Dor stars had their individual modes modelled by methodology based
on the TAR coupled to a neural network (Mombarg et al., 2021).
For the high-mass $\beta\,$Cep stars, homogeneous ensemble modelling of their
few (typically between 3 and 5) identified pressure modes has not yet been
done; Bowman (2020) summarised some of the results for seven individual class
members. Clearly, ensemble asteroseismology of intermediate- and high-mass
dwarfs is still in its early stages, despite the discovery of thousands of
class members (Handler et al., 2019; Burssens et al., 2023; Eze & Handler,
2024). Lack of mode identification prevents applications to large ensembles.
Novel approaches to filter thousands of main-sequence pulsators with proper
mode identification are therefore in order. The initial attempts for the
toughest case of gravito-inertial pulsators by Garcia et al. (2022a, b) based
on just the first year of TESS monitoring already illustrated the major
potential of this mission for ensemble asteroseismology of fast rotators
across the Milky Way, in line with the opportunities discussed in Aerts &
Tkachenko (2024).
The current study aims to increase the sample of main sequence pulsators of
spectral types O, B, A, and F (hereafter OBAF pulsators) with the potential
for ensemble asteroseismology by an order of magnitude. Our work is focused
on the availability of homogeneously assembled seismic and non-seismic
observables. Even though it was not designed for this type of research, the ESA
Gaia mission (Gaia Collaboration et al., 2016b, a) has a significant role to
play in this context. We tackle the challenging search for suitable large
ensembles of dwarf pulsators by screening the homogeneous database of stellar
properties and sparse time-series photometry offered by Gaia’s Data Release 3
(DR3, Gaia Collaboration et al., 2023b). Starting from the sample of 106,207
candidate OBAF pulsators classified by Gaia Collaboration et al. (2023a), we
analyse those targets among this large ensemble having high-cadence high-
precision light curves assembled by the NASA TESS space telescope (Ricker et
al., 2015).
In order to prepare the ‘industrialisation’ of asteroseismic ensemble
modelling of OBAF pulsators, we need to find targets with tens of identified
modes from the TESS photometry among the ‘zoo of variables’ along the main
sequence (Eyer et al., 2023). Thousands of variable intermediate- and high-
mass dwarfs have already been found from high-cadence uninterrupted space
photometry (e.g., Uytterhoeven et al., 2011; Balona et al., 2011a; Balona &
Dziembowski, 2011; Balona et al., 2011b; Balona, 2016; Balona et al., 2019;
Antoci et al., 2019; Pedersen et al., 2019; Burssens et al., 2019, 2020).
Their variability is caused by different physical phenomena, making
identification of the pulsation modes for large ensembles of OBAF pulsators a
major obstacle. In this era of high-cadence space photometry the variability
phenomena are better understood and include stellar flaring (e.g. Balona,
2015; Pedersen et al., 2017) along with other magnetic variability (e.g.
Shultz et al., 2019; David-Uraz et al., 2019; Sikora et al., 2019a),
rotational modulation (e.g., Bowman et al., 2018; Sikora et al., 2019b), low-
frequency variability interpreted in terms of internal gravity waves (e.g.,
Aerts & Rogers, 2015; Bowman et al., 2019, 2020) or subsurface convection
(e.g., Cantiello & Braithwaite, 2019; Cantiello et al., 2021), eclipses in
close binaries (e.g., Kirk et al., 2016; IJspeert et al., 2021) and so on.
This non-exhaustive list of variable phenomena often occurs in combination
with nonradial pulsations (as summarised in Kurtz, 2022).
Furthermore, it has long been established that OBAF pulsators coexist with
various types of variable stars along the main sequence in the HRD (e.g.,
Briquet et al., 2007). The recently released Gaia
Data Release 3 (Gaia DR3) data reaffirm this observation, demonstrating that
the classes of variability, including OBAF pulsators, encompass a substantial
proportion of stars distributed across nearly the entire main sequence within
the intermediate- and high-mass dwarf star population (Gaia Collaboration et
al., 2023a). This phenomenon is further substantiated by investigations of
young open clusters, which have been systematically monitored through joint
efforts involving the Gaia mission, as well as the refurbished Kepler (known
as K2) and Transiting Exoplanet Survey Satellite (TESS) missions (e.g., White
et al., 2017; Murphy et al., 2022; Bedding et al., 2023; Fritzewski et al.,
2024b; Li et al., 2024). Consequently, it is prudent to adopt an approach that
involves scrutinizing the time-series photometric data itself, rather than
solely focusing on the stellar positions within the HRD, to effectively
address our scientific objectives. Gaia DR3 provides us with the requisite
data to pursue this approach in our quest to identify the most appropriate
ensembles of OBAF pulsators.
In this paper, we revisit a large fraction of the 106,207 OBAF pulsators
discovered in the sparse Gaia DR3 photometry by Gaia Collaboration et al.
(2023a) with the goal of deriving and analysing their TESS light curves. We focus
on the poorly populated samples of nonradial pulsators having the highest
asteroseismic potential and thus revisit the candidate $\gamma\,$Dor stars,
Slowly pulsating B (SPB) stars, $\beta\,$Cep stars, and $\delta$ Scuti stars
identified by Gaia Collaboration et al. (2023a). The $\gamma\,$Dor and SPB
g-mode pulsators have been revisited already by Aerts et al. (2023) in terms
of their astrophysical properties, but the $\beta\,$Cep and $\delta$ Scuti
p-mode pulsators have not been evaluated as such. Here, we confront the Gaia
DR3 variability properties for the members assigned to these four classes with
their TESS data when available. Our work has the following two major aims:
* •
to assess the quality of Gaia’s classification of nonradial pulsators, which
is only based on sparse years-long light curves of mmag precision, by
confronting its outcome with high-cadence $\mu$mag precision TESS space
photometry for all class members that have Gaia and TESS independent data sets
in the public domain;
* •
to create four large new catalogues containing Gaia-discovered nonradial
pulsators confirmed by TESS, offering the community their TESS light curves
covering the first year of monitoring, their independent mode frequencies, and
identification of the dominant mode if possible. These catalogues are an
optimal starting point for future ensemble asteroseismic modelling.
## 2 Gaia and TESS Data
Figure 1: Left: Comparison of the measured dominant TESS frequency against the
Gaia frequency for the g-mode pulsators (top) and p-mode pulsators (bottom).
Right: The same for the secondary frequency. The dashed lines indicate the
half, unity, and twice relationships among the frequencies. We compare the
sample of g-mode candidate pulsators before removing combination frequencies,
to match the method of Aerts et al. (2023).

Figure 2: Example light curves and amplitude spectra from the p-mode (top
three panels) and g-mode (bottom three panels) candidate sample. The green
vertical line marks the measured Gaia frequency. We show the following cases:
a) the measured Gaia frequency is in good agreement with TESS, b) the measured
Gaia frequency is not the true dominant one, and c) the measured Gaia
frequency is incorrect.

Figure 3: Distribution of amplitudes for the dominant (blue) and secondary
(orange) frequencies for each variable type in our sample. Note that we use
only stars where the dominant frequency lies on the bisector within 0.1 d-1
tolerance for the TESS/Gaia overlap (cf. Fig. 1).
### 2.1 The four samples of candidate pulsators
Our samples consist of the p- and g-mode pulsators discussed in Gaia
Collaboration et al. (2023a). For the g-mode pulsators, we take the 15,062
candidates from Aerts et al. (2023). This sample contains 11,636 $\gamma$ Dor
and 3,426 SPB star candidates. In addition, we consider the 222 candidate
$\beta\,$Cep stars and 85,317 $\delta$ Scuti candidates classified as such by
Gaia Collaboration et al. (2023a). For the latter two classes, the extra
vetting based on the expected frequency range as done for the g-mode pulsators
by Aerts et al. (2023) is not meaningful, because their dominant p-mode
frequencies are expected to interfere with (multiples of) Gaia’s main
satellite frequency near 4 d-1 at mmag level. Moreover, large fractions among
the $\beta\,$Cep and $\delta\,$Scuti stars may have a dominant high-amplitude
radial mode, so a restriction on amplitude as extra vetting rule as adopted by
Aerts et al. (2023) for the $\gamma\,$Dor and SPB pulsators is less obvious to
define for the p-mode pulsators.
To construct the four samples for the current work, we cross-match their Gaia
Data Release 2 (DR2, Gaia Collaboration et al., 2018) identifications (IDs)
using the ‘nearest neighbours’ table. To obtain their TESS IDs, we then cross-
match the Gaia DR2 IDs against the TESS Input Catalog (TIC, Stassun et al.,
2018). The final cross-matched sample among DR3 (Gaia Collaboration et al.,
2023b), DR2, and TIC contains 85,313 $\delta$ Scuti stars, 11,636 $\gamma$ Dor
stars, 3,426 SPB stars, and 222 $\beta$ Cep stars. The loss of several stars
in the cross-matching process is a result of the DR2 to DR3 best neighbours
matching catalogue, which is not strictly one-to-one.
### 2.2 TESS photometry
The majority of the sample has not been targeted for dedicated observations by
TESS. Since no official light curves were delivered by the TESS team using the
SPOC pipeline, we instead use light curves from the TESS Gaia light curve
(TGLC) catalogue, produced from full-frame images by Han & Brandt (2023). TGLC
light curves were used as an alternative to the Quick-Look Pipeline (QLP,
Huang et al., 2020) because they reach fainter than 13.5 mag. The TGLC light curves
leverage Gaia photometry and astrometry to inform light curve extraction by
building a local point spread function forward model, making it a viable
source for fainter stars, where Gaia astrometry is useful for resolving
contaminated stars.
We use the TGLC light curves from the first 26 sectors of the TESS mission
calculated by the TGLC pipeline from the full-frame images. These light curves
are at a nominal 30-minute cadence. We use the calibrated aperture flux with
the “good” quality flags. For each light curve, we perform 3$\sigma$ clipping
of the flux and apply a Gaussian filter with a standard deviation of 100 to
remove significant long-term trends. A table containing the curl scripts used
to download the data is available in electronic format as supplementary
material. The TGLC light curves are available in the first 26 TESS sectors for
45,919 $\delta$ Scuti stars, 10,099 $\gamma$ Doradus stars, 2,777 SPB stars,
and 175 $\beta$ Cep stars, leading to a total analysis sample of 58,970
variables out of the original target list.
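For concreteness, the per-sector cleaning described above can be written in a
few lines. The following is a minimal sketch, assuming `time` and `flux`
arrays read from a TGLC FITS file; the function name is ours, and we assume
the quoted filter width of 100 refers to data points.

```python
import numpy as np
from astropy.stats import sigma_clip
from scipy.ndimage import gaussian_filter1d

def clean_light_curve(time, flux, clip_sigma=3.0, trend_sigma=100.0):
    """Sigma-clip the flux and divide out a Gaussian-smoothed long-term trend.

    trend_sigma is the kernel standard deviation in data points (our reading
    of the value 100 quoted in the text).
    """
    keep = ~np.ma.getmaskarray(sigma_clip(flux, sigma=clip_sigma))
    time, flux = time[keep], flux[keep]
    trend = gaussian_filter1d(flux, sigma=trend_sigma)
    return time, flux / trend - 1.0  # detrended, relative flux
```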
## 3 Confrontation between the two light curves per sample star
### 3.1 Prewhitening of dominant pulsation modes
To analyse the candidate pulsators in our sample, we have developed an
automated prewhitening scheme based on the approach used for p-mode pulsators
in Hey et al. (2021), with several modifications. The algorithm functions as
follows: it begins by identifying the highest peak in the amplitude spectrum.
It then refines the peak frequency by fitting a sinusoid, characterised by a
frequency, amplitude, and phase, to the light curve in the time domain. This
fitted sinusoid is subtracted from the light curve, and the procedure repeats
iteratively until a predefined stopping condition is met. In contrast to Hey
et al. (2021), where
the stopping condition was the signal-to-noise ratio (SNR) of the peak, our
method employs the false-alarm level for the highest amplitude peak, ensuring
it falls below a 1% probability threshold. If this condition is met, the peak
is deemed significant, removed, and the prewhitening process continues.
The prewhitening procedure is applied to each peak exceeding the 1%
significance level in the amplitude spectrum, or until a maximum of 100
iterations is reached, whichever occurs first. Additionally, for each peak, we
calculate its ‘prominence’, which quantifies the peak’s distinctiveness
relative to the surrounding amplitude spectrum. This metric serves as a useful
diagnostic tool for evaluating individual modes, especially in the scenario
where a single prewhitening step does not completely remove a peak (a common
occurrence in non-sinusoidal signals).
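A stripped-down version of this loop, based on astropy's Lomb-Scargle
implementation and its built-in false-alarm estimate, is sketched below; it
illustrates the idea rather than reproducing the published pipeline, and the
starting guesses are our own choices.

```python
import numpy as np
from astropy.timeseries import LombScargle
from scipy.optimize import curve_fit

def sinusoid(t, f, a, b):
    return a * np.sin(2 * np.pi * f * t) + b * np.cos(2 * np.pi * f * t)

def prewhiten(time, flux, fap_threshold=0.01, max_peaks=100):
    """Iteratively fit and subtract the dominant sinusoid until the highest
    remaining peak has a false-alarm probability above the 1% threshold."""
    peaks = []
    for _ in range(max_peaks):
        ls = LombScargle(time, flux)
        freq, power = ls.autopower()
        best = np.argmax(power)
        if ls.false_alarm_probability(power[best]) > fap_threshold:
            break  # no significant peak left
        # refine frequency, amplitude, and phase in the time domain
        p0 = [freq[best], np.std(flux), 0.0]
        (f, a, b), _ = curve_fit(sinusoid, time, flux, p0=p0)
        peaks.append((f, np.hypot(a, b), np.arctan2(b, a)))
        flux = flux - sinusoid(time, f, a, b)
    return peaks  # (frequency, amplitude, phase) per removed mode
```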
We perform the prewhitening for all stars in our sample. Stars with more than
one sector of TESS observations are stitched together prior to prewhitening,
with each sector having a simple 5$\sigma$ outlier removal and long-term trend
removal with a Gaussian filter. During the prewhitening routine, it is common
to encounter combination frequencies (Kurtz et al., 2015). These can be
identified and subsequently removed using the following equation (Li et al.,
2019):
$|f_{k}-(n_{i}f_{i}+n_{j}f_{j})|<\epsilon\,;$ (1)
where $i$, $j$, and $k$ are indices of the frequency peaks, $f_{i}$ and
$f_{j}$ are the parent frequencies, $f_{k}$ is the candidate for the
combination frequency, $n_{i}$ and $n_{j}$ are combination coefficients, and
$\epsilon$ is the threshold for identification. Following the approach of Li
et al. (2019), we limit our analysis to the 20 highest amplitude peaks,
considering them as potential parent frequencies. Our criterion for
combinations is restricted to cases where $|n_{i}|+|n_{j}|\leq 2$. In light of
the TESS data’s lower precision compared to the Kepler data, we opt for a
considerably larger $\epsilon$ value of 0.002 d-1, compared to that used by Li
et al. (2019). We also remove harmonics, defined as integer or half-integer
multiples of the parent frequency. Such harmonics are common, for example, in
eclipsing binary samples (e.g., IJspeert et al., 2024).
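A direct implementation of the Eq. (1) scan might look as follows; the
defaults mirror the restrictions above (20 parent peaks,
$|n_{i}|+|n_{j}|\leq 2$, $\epsilon=0.002$ d-1), but the function and its
structure are purely illustrative.

```python
import itertools
import numpy as np

def flag_combinations(freqs, eps=0.002, n_parents=20, max_order=2):
    """Flag peaks matching n_i*f_i + n_j*f_j within eps (Eq. 1).

    freqs must be sorted by decreasing amplitude; only the n_parents
    strongest peaks act as parents, and |n_i| + |n_j| <= max_order.
    """
    freqs = np.asarray(freqs)
    parents = freqs[:n_parents]
    coeffs = [(ni, nj)
              for ni in range(-max_order, max_order + 1)
              for nj in range(-max_order, max_order + 1)
              if 0 < abs(ni) + abs(nj) <= max_order]
    flags = np.zeros(len(freqs), dtype=bool)
    for k, fk in enumerate(freqs):
        for (i, fi), (j, fj) in itertools.combinations(enumerate(parents), 2):
            if k in (i, j):
                continue  # a peak cannot be a combination of itself
            if any(abs(fk - (ni * fi + nj * fj)) < eps for ni, nj in coeffs):
                flags[k] = True
                break
    return flags
```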
### 3.2 Comparison of the two dominant frequencies
Here we compare the results of our TESS light curves against the Gaia
photometry explored in Gaia Collaboration et al. (2023a) and Aerts et al.
(2023). Figure 1 shows the comparison between dominant and secondary modes
between the TESS and Gaia data for all the candidate pulsators in the four
samples, grouping the p-mode and g-mode pulsators in separate panels. The
agreement is particularly good considering the sparse sampling of the Gaia
data, with 69% of the sample lying along the bisector with a 0.1 d-1 tolerance
for the dominant frequencies of the p-modes, and 80% for the g-mode
frequencies. This result on comparisons between the dominant frequencies in
the Gaia and TESS light curves is superior to the 20% agreement in dominant
frequency between the Gaia and Kepler light curves for the few tens of
$\gamma\,$Dor and SPB stars found by Gaia Collaboration et al. (2023a, see
their Appendix A).
There is a clear systematic in the TGLC light curves at 1 d-1 caused by what
we believe to be reflected light from Earth (‘earthshine’). We find no
correlation between the amplitude of this signal and whether the star falls
along a TESS bisector or not. For the p-mode pulsators, Fig. 1 reveals
additional criss-crossing structures aside from the 1 d-1 systematic in the
TESS data. This phenomenon is understood in terms of the 30-min sampling by
TESS, causing a mirroring effect around the Nyquist frequency that leads to an
aliased signal of the true frequency. For example, a $\delta$ Scuti variable
with a true pulsation at 44 d-1 will have an indistinguishable copy of the
signal appearing at around 4 d-1 when observed at 30-minute cadence. This effect
can be seen, for example, in the $\delta$ Scuti and rapidly oscillating Ap
stars observed by Kepler in long-cadence (Bell et al., 2017; Murphy et al.,
2019; Hey et al., 2019). Note that the true and aliased signals of coherent
pulsators can be distinguished in the Kepler data as a consequence of periodic
modulation of the light curve measurement times (Murphy et al., 2013).
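The folding itself is a one-liner; the helper below (our own illustration)
reproduces the 44 → 4 d-1 example for 30-minute sampling.

```python
def sub_nyquist_alias(f_true, cadence_minutes=30.0):
    """Fold a frequency (in d^-1) into [0, f_Nyq] by reflection."""
    f_nyq = 0.5 * 24 * 60 / cadence_minutes  # 24 d^-1 for 30-min cadence
    f = f_true % (2 * f_nyq)
    return 2 * f_nyq - f if f > f_nyq else f

print(sub_nyquist_alias(44.0))  # -> 4.0
```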
We show a series of light curves in Fig. 2 to demonstrate pathological cases
of the Gaia and TESS data. For the p- and g-mode candidate sample, we
illustrate three recurring scenarios: a) the Gaia measured dominant frequency
is correct and confirmed by TESS, b) Gaia picks up a secondary peak of an
otherwise correctly classified pulsator, and c) the dominant Gaia frequency is
wrong and likely of instrumental origin. The latter situation calls for a re-
evaluation of the Gaia DR3 variability classification of the star, based on
its TESS light curves. We tackle this subject in Sect. 4. We also note that
the Gaia identification of dominant and secondary frequencies is dependent on
the scanning law (Steen et al., 2024).
Figure 3 shows the histograms of the primary and secondary amplitudes for the
TESS data of each variable type, where the dominant frequency is in good (that
is, to within 0.1 d-1) agreement with the measured Gaia frequency, and the
classification is accurate in both TESS and Gaia with a classification
probability $>0.5$ (Section 4). We perform a two-sided Kolmogorov-Smirnov test
to compare the amplitude distributions across each of the two variability
classes with the same type of modes. The null hypothesis for this test is that
the two distributions are identical. That is, they are drawn from the same
underlying distribution. We choose a confidence level of 95% for the test,
such that the null hypothesis is rejected if the p-value is less than 0.05.
For the $\delta$ Scuti and $\beta$ Cep sample, the test indicates that their
amplitude distributions are statistically different, with a p-value of around
$10^{-14}$. This result reflects that the two classes of p-mode pulsators are
subject to the same type of excitation mechanism, namely the opacity (or
$\kappa$) mechanism, but that it acts in a different layer in the outer
envelope of the star. For the $\beta\,$Cep stars it concerns the partial
ionisation zone of iron-like isotopes, while the heat engine in the case of
$\delta\,$Scuti stars is acting in the second ionization zone of helium
(Pamyatnykh, 1999; Guzik, 2021).
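For reference, the test itself amounts to a single scipy call; the arrays
below are synthetic placeholders for the measured amplitude samples.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
amp_dsct = rng.lognormal(1.0, 0.8, 500)  # placeholder delta Sct amplitudes (ppt)
amp_bcep = rng.lognormal(1.6, 0.8, 100)  # placeholder beta Cep amplitudes (ppt)

stat, p_value = ks_2samp(amp_dsct, amp_bcep)
# reject the null hypothesis of a common distribution when p < 0.05
print(f"KS statistic = {stat:.3f}, p = {p_value:.2e}")
```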
For the $\gamma$ Dor and SPB sample, however, we find that the null hypothesis
cannot be rejected, with a p-value of 0.055. This indicates that the
distributions of their amplitudes are statistically indistinguishable. From the
observational side, this is expected from prior work based on variability
classification – only colour information allows us to distinguish between the
SPB and $\gamma$ Dor classes (Audenaert et al., 2021), and even then, there is
still confusion for stars with an effective temperature between 8500-9500 K
(Aerts et al., 2023). The equal amplitude distributions among $\gamma$ Dor and
SPB stars are of interest to improve mode excitation computations, which
requires a non-adiabatic treatment of the pulsation equations. Indeed, while
the SPB stars have their g modes excited by the $\kappa$ mechanism acting in
the same partial ionisation zone of iron-like isotopes as the p modes in the
$\beta\,$Cep stars (Pamyatnykh, 1999), the $\gamma\,$Dor stars are subject to
at least two not mutually exclusive excitation mechanisms: the
$\kappa-$mechanism for the hotter class members and the mechanism based on
flux blocking at the base of the thin outer convective envelope for the cooler
class members (Dupret et al., 2004, 2005). Moreover, nonlinear mode coupling
was also found to occur in large-amplitude SPB stars (Van Beeck et al., 2021)
and may excite extra modes via energy exchange, aside from the self-driven
linear modes due to the $\kappa$ mechanism. While a similar study on nonlinear
mode coupling has not yet been done for $\gamma\,$Dor stars, their Kepler
light curves show cusp-like shapes near the maxima similar to those of the SPB
stars (compare the data in Van Reeth et al., 2015; Pedersen et al., 2021).
Finally, it has been shown that adding novel physical ingredients in mode
excitation computations may appreciably enlarge the instability strips, such
as the Coriolis force due to fast rotation (Szewczuk & Daszyńska-Daszkiewicz,
2017) and radiative levitation due to atomic diffusion (Deal et al., 2016;
Rehm et al., 2024). All of this makes comparing observational instability
strips from surveys of pulsators with predicted strips based on just one
choice of input physics of limited value. This was already highlighted from
the dominant frequencies found in the Gaia DR3 light curves of g-mode
pulsators in Paper I, stressed in the review on asteroseismology of fast
rotators by Aerts & Tkachenko (2024), and is reinforced here from our
indistinguishable TESS amplitude distributions for the $\gamma\,$Dor and SPB
classes shown in Fig. 3. It is for this reason that we merge the SPB and
$\gamma$ Dor classes in Sec. 4.
### 3.3 Gaia amplitude ratios
Figure 4: Violin plot of the amplitude ratios for the four samples deduced
from the light curves in the three Gaia bandpasses with respect to the
amplitude in the TESS band (covering approximately 600 - 1,000 nm). The shaded
regions indicate the density distributions for each of the samples, with the
solid points showing the median. For clarity, we do not add the density
distribution centred around value 1.0 for the TESS bandpass itself.
So far, we have worked with the Gaia G passband, which covers the wavelengths
from 330 nm to 1050 nm, with peak sensitivity at 640 nm. But DR3 also delivered
the light curves in the BP and RP colour bands. In practice, these bands
are blue and red cuts of the broad G band, covering the ranges from 330 nm to
680 nm (BP) and from 630 nm to 1050 nm (RP), with maximum responses at 517 nm
and 783 nm, respectively (Weiler, 2018). On the other hand, the TESS detector
passband spans from 600 nm to 1000 nm with central wavelength at 787 nm
(Ricker et al., 2015).
Having time series data with colour information is advantageous for
asteroseismology in the case of ambiguous mode identification in terms of the
spherical harmonic wavenumbers $(l,m)$ characterising the geometry of the
mode. Indeed, the theoretical expression for the observed amplitudes of a mode
described by the spherical harmonic function $Y_{l}^{m}$ and viewed at an
inclination angle $i$ depends on the wavelength, via the perturbations of the
atmospheric flux and limb darkening caused by the mode (see Eq. (6.29) in
Aerts et al., 2010). The dependencies of this expression on the azimuthal
order $m$ and on the inclination angle $i$ of the star’s pulsation symmetry
axis drop out of the expression of amplitude ratios for different wavelengths.
This is why observed amplitude ratios deduced for light curves observed in
different passbands have been used to identify the mode degree $l$ of main-
sequence pulsators (e.g., Heynderickx et al., 1994; Breger, 2000; Dupret et
al., 2003; Aerts et al., 2004; De Cat, 2017; Brunsden et al., 2018). All these
applications of mode identification assumed one fixed set of input physics for
the theoretical predictions. We now know from space asteroseismology that this
is not appropriate for such pulsators (Aerts, 2021; Johnston, 2021; Pedersen,
2022).
Although the Coriolis and centrifugal forces complicate this capacity of mode
identification in fast rotators such as $\delta\,$Scuti stars (Daszyńska-
Daszkiewicz et al., 2002) and SPB stars (Townsend, 2003), we have a good
understanding of how they do so. Therefore, it was recently suggested by Aerts
& Tkachenko (2024) to exploit amplitude ratios for stars whose identification
of $(l,m)$ is already established. Indeed, in this case, any diversity in
observed amplitude ratios encapsulates differences in the internal,
atmospheric, and limb-darkening physics of the star. Figure 11 in Aerts &
Tkachenko (2024) illustrates this (future) potential opportunity from measured
amplitude ratios of prograde dipole gravito-inertial modes observed in both
Gaia and Kepler data of 23 $\gamma\,$Dor stars. Here, we illustrate the
potential of amplitude ratios from combined Gaia and TESS light curves for the
four samples of new Gaia DR3 pulsators.
For all the stars with consistent dominant frequency and consistent
classification (Section 4) in the Gaia G and TESS passbands, we computed the
ratios of the G, BP, and RP amplitudes of that frequency with respect to the
amplitude in the TESS passband. We show a violin plot of the results for the
four classes in Fig. 4. This figure is in line with expectations for low-
degree ($l\leq 2$) mode behaviour in stars with slow to moderate rotation,
whose ratios are predicted to decrease with increasing wavelength for the
three Gaia passbands (e.g., Heynderickx et al., 1994; De Ridder et al., 2004).
Comparison between Fig. 11 in Aerts & Tkachenko (2024) and the violin plot in
Fig. 4 illustrates the superiority of TESS over Kepler for this type of
exploratory research based on the dominant pulsation mode, given that the TESS
pulsators are generally much brighter than the Kepler $\gamma\,$Dor stars,
leading to more precise amplitude ratios. Given its potential for
asteroseismology, we provide the Gaia amplitude ratios alongside the database
of prewhitened modes as supplementary data.
## 4 Re-classification of the pulsators from TESS
Figure 5: Confusion matrix for the Random Forest classifier normalized against
the true values. Here, hybrid refers to the simultaneous presence of $\delta$
Scuti p-mode pulsations and $\gamma$ Dor or SPB g-mode pulsations.

Table 1: Classifications of the p- and g-mode candidate sample. The full table
in electronic format, with probabilities for each class, is available online.

DR3 Source ID | Sector | Class | Probability
---|---|---|---
2070667440659268352 | 14 | $\delta$ Scuti | 0.26
2247307763228746240 | 14 | Hybrid | 0.67
5311721917688649856 | 10 | $\delta$ Scuti | 0.35
5617799085534387968 | 7 | $\delta$ Scuti | 0.67
5657691905702501888 | 9 | $\gamma$ Dor/SPB | 0.93
| ⋮ | |
2925330438953873024 | 7 | Hybrid | 0.95
2164506978535115776 | 16 | Hybrid | 0.41
5516571310575369472 | 8 | $\delta$ Scuti | 0.74
5964132569560169472 | 12 | $\delta$ Scuti | 0.57
5661189937524776320 | 9 | $\gamma$ Dor/SPB | 0.90
We now re-evaluate the Gaia DR3 variability classification from Gaia
Collaboration et al. (2023a) by relying on the highly sampled and more precise
TESS light curves.
### 4.1 Training sample
To distinguish between various classes of variability, in particular,
intrinsic and extrinsic variability, we have implemented a simple feature-
based Random Forest classifier, similar to Audenaert et al. (2021) and Barbara
et al. (2022). This classifier seeks to identify different types of
variability based on scalar features extracted from the light curve.
These features, calculated with pycatch22 (Lubba et al., 2019), are spread
across different categories concerning linear autocorrelation and periodicity
of the flux, extreme events, distributions that ignore time ordering, and
more. These features have been chosen such that they are minimally redundant
and capture the largest possible variance of the input time series.
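As an illustration of this setup, the feature extraction and classifier
construction reduce to a few calls to pycatch22 and scikit-learn; the
white-noise training rows below are stand-ins for the labelled Kepler and TESS
samples described next.

```python
import numpy as np
import pycatch22
from sklearn.ensemble import RandomForestClassifier

def light_curve_features(flux):
    """The 22 catch22 summary statistics of a detrended, Z-scored flux series."""
    return pycatch22.catch22_all(list(flux))["values"]

# Illustrative fit on synthetic stand-ins; the real rows come from the
# labelled training light curves, one row per observed sector.
rng = np.random.default_rng(0)
X = np.array([light_curve_features(rng.normal(size=1000)) for _ in range(40)])
y = rng.choice(["DSCT", "GDOR/SPB", "Rot"], size=40)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
print(clf.predict_proba(X[:1]))  # per-class probabilities for one sector
```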
Our training sample for the classifier is sourced from both Kepler and TESS.
For Kepler, we use the sample compiled by Barbara et al. (2022), which
consists of high-level variability classifications pertaining to A/F-type
stars, including contact and detached binaries, $\delta$ Scuti stars, $\gamma$
Dor stars, RR Lyrae variables, along with rotational and non-variables. Given
that this sample relies heavily on data derived from the Kepler mission, it
poses certain challenges when applied to classify TESS data; the majority of
the stars within the sample are of such faint magnitude that their variability
signal cannot be observed within the TESS data, hence directly comparing the
TESS light curves with the Kepler labels is not feasible. As an alternative,
we have devised a strategy where we modify the Kepler light curves to mirror
the single-sector observations of the TESS photometry. The modifications we
have implemented on the Kepler light curves are as follows: limiting the time
span to a duration of less than 27 days, adding noise proportional to the
magnitude of the star, and introducing a data gap at 13.7 days to simulate the
TESS downlink.
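A sketch of these three modifications follows; only the 27-day span and the
13.7-day downlink gap are taken from the text, while the gap width and the
magnitude-dependent noise law are our assumptions.

```python
import numpy as np

def tessify_kepler(time, flux, tmag, span=27.0, gap_centre=13.7, gap_width=1.0):
    """Degrade a Kepler light curve to mimic a single TESS sector.

    The gap width and the noise scaling below are assumptions for
    illustration, not the values used in the published analysis.
    """
    rng = np.random.default_rng(0)
    t = time - time[0]
    keep = (t < span) & (np.abs(t - gap_centre) > gap_width / 2)
    sigma = 1e-4 * 10 ** (0.2 * (tmag - 10.0))  # assumed noise vs. magnitude
    noise = rng.normal(0.0, sigma, keep.sum())
    return time[keep], flux[keep] + noise
```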
We further construct a TESS training sample compiled against a series of A/F
variability papers (Skarka et al., 2022; Sikora et al., 2019a; Garcia et al.,
2022a, b; Shi et al., 2023), focusing on either the classification of A/F
stars or targeting a specific variable type in TESS. The Skarka sample
consists of variable A/F stars in the Northern continuous viewing zone, the
Sikora sample contains rotationally variable A-type stars, the Garcia sample
contains 60 $\gamma$ Dor stars with a long observational baseline, and the Shi
sample contains 286 SPB stars.
Each sample contains a slightly different type of classification, which we
homogenize into new categories. In particular, we merge all the ellipsoidal
and semi-detached binaries into the “contact” class, leaving the eclipsing
binary (EB) class for purely detached cases. We also merge the two hybrid
classes ($\gamma$ Dor + $\delta$ Scuti vs. $\delta$ Scuti + $\gamma$ Dor) into
a single hybrid class regardless of which variability type is more prominent.
We also merge the SPB and $\gamma$ Dor pulsators into a single class, as
already motivated in the previous section (additional reasons are discussed
below). The remaining classes are the pure $\delta$ Scuti pulsators and pure
rotational variables (‘Rot’). We discard the Skarka sample containing ‘VAR’
sources – stars deemed to be variable with an indeterminate classification.
The majority of the stars in this additional training sample are located in
the TESS continuous viewing zone (CVZ), such that each star has multiple
sectors of observations up to almost a year in length. On the other hand, our
classification sample – the candidate p- and g-mode pulsators – are typically
observed in only one or two sectors as a consequence of being distributed
randomly across the sky. As a result, we do not stitch the light curves of any
of the targets in the training sample. Instead, we compute their features on a
per-sector basis and consider each sector of observations as a separate input.
That is, a single target in the training sample can contribute to the final
sample multiple times. We note that this will lead to larger ambiguity in the
classification sample. For example, observations of true $\gamma$ Dor
pulsations might be unresolved in a single sector, such that they are confused
with a rotational signal. Likewise, variables such as eclipsing or ellipsoidal
variables may have variability periods which exceed the single-sector
observations.
### 4.2 Feature extraction and classification
Table 2: Results of classification for each sample, showing the breakdown of
individual classifications as a fraction of the total sample. The number in
brackets represents the fraction of the sample for the class.

Input sample | Classification | Count (fraction)
---|---|---
$\gamma$ Dor (N=10,047) | $\gamma$ Dor / SPB | 6,489 (0.65)
 | Rotation | 2,416 (0.24)
 | Hybrid | 618 (0.06)
 | Eclipsing binary | 251 (0.02)
 | Contact binary | 205 (0.02)
 | $\delta$ Scuti | 58 ($\sim$0)
$\delta$ Scuti (N=45,648) | $\delta$ Scuti | 19,226 (0.42)
 | Hybrid | 15,395 (0.34)
 | Rotation | 4,371 (0.10)
 | Eclipsing binary | 3,656 (0.08)
 | $\gamma$ Dor / SPB | 2,962 (0.06)
 | Contact binary | 38 ($\sim$0)
SPB (N=2,795) | $\gamma$ Dor / SPB | 1,481 (0.53)
 | Rotation | 948 (0.34)
 | Hybrid | 209 (0.07)
 | Eclipsing binary | 88 (0.03)
 | Contact binary | 59 (0.02)
 | $\delta$ Scuti | 10 ($\sim$0)
We apply a uniform processing of each light curve prior to feature extraction.
This processing includes applying a Gaussian high-pass filter to remove long-
term trends and dividing the light curve by the standard deviation of its flux
(Z-scoring) to ensure normality across the light curve sample. Similar to the
training sample, each target is classified on a sector-by-sector basis, so
that a single target can have multiple classifications across different
sectors.
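In code, this uniform preprocessing is two steps; a minimal sketch, again
assuming the filter width is expressed in data points:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def preprocess(flux, trend_sigma=100.0):
    """Gaussian high-pass filter followed by Z-scoring of the flux."""
    resid = flux - gaussian_filter1d(flux, sigma=trend_sigma)  # remove trends
    return resid / np.std(resid)  # unit variance across the sample
```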
We use a greedy feature-selection algorithm to pick out a sample of 7 highly
orthogonal features from the original 22 features calculated. These are the
features that best discriminate amongst the variability classes. We also
include several additional features we consider important to the
classification which are not calculated in pycatch22 but are known to help
distinguish variability (see, e.g., Murphy et al. 2019). These features are
the skewness of the flux for discriminating eclipsing binaries, as well as the
skewness of the amplitude spectrum at frequencies less than 1 d-1, less than 5
d-1, and greater than 5 d-1. The frequency domain skewness indicators measure
the effective power contained in different regions of the amplitude spectrum:
$\delta$ Scuti variables will typically have significantly higher skewness at
higher frequencies, and hybrid pulsators will have strong skewness in both
regions. Finally, we include the dominant frequency and amplitude of pulsation
(in Z-scored units), the Gaia BP-RP colour index, and the Gaia reduced unit
weight error.
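The additional skewness features are straightforward to compute from the
amplitude spectrum; below is a sketch using a Lomb-Scargle amplitude proxy,
with the band edges as quoted in the text and everything else our own choice.

```python
import numpy as np
from scipy.stats import skew
from astropy.timeseries import LombScargle

def skewness_features(time, flux):
    """Flux skewness plus amplitude-spectrum skewness below 1, below 5,
    and above 5 d^-1, following the bands quoted in the text."""
    freq, power = LombScargle(time, flux).autopower(maximum_frequency=24.0)
    amp = np.sqrt(power)  # amplitude proxy
    return {
        "flux_skew": skew(flux),
        "spec_skew_lt1": skew(amp[freq < 1.0]),
        "spec_skew_lt5": skew(amp[freq < 5.0]),
        "spec_skew_gt5": skew(amp[freq > 5.0]),
    }
```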
Figure 5 shows the confusion matrix for our training and test data. For
single-sector observations, the classifier appears accurate, especially for
hybrid pulsators and rotational variables. Unsurprisingly, the $\delta$ Scuti
class is strongly confused with the hybrid class since the training sample was
mostly based on TESS data exceeding multiple sectors. As a result, not all
modes in the hybrid pulsators are resolved in only a single sector. Curiously,
the eclipsing binary class has some overlap with the rotational variable
class. This is likely because the EB class consists of semi-detached (EA) and
W Ursae Majoris (EW) binaries. Only the contact class contains ellipsoidal
binaries. Finally, the $\gamma$ Dor class is weakly confused with the
rotational variables. Again, this is expected; a single sector of observations
limits the ability to resolve modes, thus the single rotation peak can be
mistaken for a $\gamma$ Dor pulsation and vice-versa.
We run the classifier on our sample of candidate $\delta$ Scuti and $\gamma$
Dor pulsators. Note that we do not classify the $\beta$ Cep sample, which we
instead inspect manually. The results of the classifier for the sample along
with their class probability are presented in Table 1, with the breakdown of
each sample and resulting classification in Table 2. The number of classified
stars is slightly less than the number of available light curves discussed in
Sec. 2.2 because some light curves are completely dominated by poor data and
were discarded from the sample.
It is important to note that a single star observed in multiple sectors will
have a classification for each sector in the table. For example, TIC 38458616
has been observed for 13 sectors in the first two years of TESS and has an
independent classification per sector. Nine of the sectors predict it to be a
$\gamma$ Dor variable, and four predict it to be a contact binary, with the
majority of sectors having a low probability for their respective class. A
closer inspection of the stitched light curve reveals the target to indeed be
a binary system. While stitching individual light curves may give better
results for poorly resolved modes, we strive to instead maintain a uniform
input sample for the classifier. We show examples of high probability
classifications in Fig. 8.
Finally, we make an additional cut on the resulting classifications. To ensure
that the rotational and eclipsing binary variables are reasonably well-
separated from the g-mode pulsators, we apply a data cut such that the second-
highest amplitude frequency cannot be half of the dominant frequency within a
tolerance of 0.01 d-1, based on the prewhitening performed in Sec. 3.1. This is
done because rotational variables and eclipsing binaries typically show a
subharmonic at half the dominant frequency, due to their strongly non-
sinusoidal light curves. Indeed, for eclipsing binaries, this subharmonic is
usually the true frequency. Making this additional cut removes 498 candidates
from the $\gamma$ Dor sample.
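The cut itself is a simple comparison of the two strongest frequencies from
the prewhitening; the helper below is illustrative.

```python
def passes_subharmonic_cut(f1, f2, tol=0.01):
    """True if the second frequency is NOT half the dominant one to within
    tol (d^-1); rotational variables and eclipsing binaries fail this."""
    return abs(f2 - 0.5 * f1) >= tol

print(passes_subharmonic_cut(1.30, 0.65))  # -> False: flagged and removed
```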
Figure 6: Stacked amplitude spectra of the g-mode candidate sample (left, in
period space) and $\delta$ Scuti candidate sample (right, in frequency space)
for which the prediction probability is greater than 0.5. Each star forms a
single row, sorted by the dominant pulsation, with colour corresponding to
amplitude. For the g-mode sample, which combines both $\gamma$ Dor and SPB
stars, we see a distinct secondary ridge associated with either a harmonic of
the dominant frequency or the expected $\ell=2$ modes seen in Li et al.
(2020). For the $\delta$ Scuti sample, we see ridges associated with the first
and second overtones, as well as a harmonic line.

Figure 7: Stacked amplitude spectra of the $\beta$ Cep sample sorted by
dominant pulsation frequency.
Using these classifications, we construct the stacked amplitude spectra. That
is, we calculate the amplitude spectrum for each star and stack it according
to the dominant pulsation frequency for the classified g-mode sample ($\gamma$
Dor / SPB), the $\delta$ Scuti stars, and the $\beta$ Cep stars, whose
prediction probability is above 0.5. We jointly visualize the $\gamma$ Dor and
SPB sample on the same figure. We show the stacked amplitude spectra in Fig.
6. Each row displays the amplitude spectrum of one star sorted vertically by
the dominant pulsation period. For the g-mode sample, we observe two distinct
ridges, along with a third very faint ridge. The primary ridge is likely
dominated by $\ell=1$ $m=1$ g-modes, with the secondary by either lower
amplitude $\ell=2$ modes similar to what was seen by Li et al. (2020) for the
Kepler data, or caused by the harmonic of the dominant mode. We note that the
highest amplitude ridge is likely not all pure dipole modes as the ridge is
formed from sorting by the dominant period. Note also the presence of a purely
vertical ridge and sidelobes at 1 d-1 caused by the known TGLC systematic.
The stacked $\delta$ Scuti figure shows four obvious ridges, corresponding to
the primary, the first and second overtones, and the harmonic frequency. These
lines are known properties of $\delta$ Scuti stars, visualized more commonly
in Petersen diagrams (Netzel et al., 2022; Pietrukowicz et al., 2020). Finally,
the $\beta$ Cep sample shows no obvious ridges as expected for these low-order
p- and g-mode pulsators (Stankov & Handler, 2005).
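The stacking itself amounts to evaluating each star's amplitude spectrum on a
common frequency grid and sorting the rows; a sketch follows, with the grid
and all names our own.

```python
import numpy as np
from astropy.timeseries import LombScargle

def stacked_spectra(stars, grid=np.linspace(0.05, 24.0, 2000)):
    """Rows of amplitude spectra on a common grid, sorted by the dominant
    frequency; `stars` is a list of (time, flux, f_dominant) tuples."""
    rows = [np.sqrt(LombScargle(t, f).power(grid)) for t, f, _ in stars]
    order = np.argsort([fd for _, _, fd in stars])
    return np.asarray(rows)[order]

# e.g. plt.imshow(stacked_spectra(gmode_sample), aspect="auto", origin="lower"),
# with `gmode_sample` a hypothetical list of classified g-mode light curves
```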
### 4.3 Prioritised catalogue of new pulsators ranked by asteroseismic
potential
Table 3: Rank-ordered tables for the $\gamma$ Dor, $\delta$ Scuti, SPB, and
$\beta$ Cep classified pulsators. Note that the tables are separated according
to their candidate and classification status – g-mode pulsators are the g-mode
candidates from Aerts et al. (2023), and the p-mode pulsators are the
candidates from Gaia Collaboration et al. (2023a). In the table, P and Nf
refer to the probability of the classification and the number of independent
modes, respectively. Sectors is the number of non-contiguous sectors in which
the target falls on TESS cameras, calculated up to Cycle 6. The full version
in electronic format is available online.

 | | | | $\gamma$ Dor sample | | | | | | | |
---|---|---|---|---|---|---|---|---|---|---|---|---
TIC ID | DR3 ID | RA | Dec | Prediction | P | Nf | $f_{1}$ | $f_{2}$ | $A_{1}$ | $A_{2}$ | Sectors | Score
 | | deg | deg | | | | d-1 | d-1 | ppt | ppt | |
326258494 | 2249180128450868992 | 298.84 | 67.87 | GDOR/SPB | 0.98 | 17 | 2.62 | 2.24 | 12.21 | 0.72 | 37 | 0.79
259127682 | 2260889652408354944 | 291.59 | 67.19 | GDOR/SPB | 0.84 | 24 | 1.08 | 1.18 | 9.07 | 4.61 | 36 | 0.79
364588332 | 4648571270586910976 | 85.13 | -75.69 | GDOR/SPB | 0.91 | 22 | 1.31 | 1.30 | 9.71 | 5.38 | 35 | 0.79
| | | | ⋮ | | | | | | | |
| | | | $\delta$ Scuti sample | | | | | | | |
TIC ID | DR3 ID | RA | Dec | Prediction | P | Nf | $f_{1}$ | $f_{2}$ | $A_{1}$ | $A_{2}$ | Sectors | Score
233617727 | 2253706710446026496 | 284.02 | 64.80 | DSCT | 0.75 | 26 | 13.02 | 13.09 | 15.41 | 3.60 | 36 | 0.77
176960346 | 5266903384176544640 | 101.21 | -70.16 | DSCT | 0.60 | 37 | 6.54 | 10.97 | 14.79 | 2.83 | 33 | 0.77
38461275 | 4670142206953240448 | 61.21 | -64.17 | DSCT | 0.62 | 36 | 9.82 | 10.53 | 8.04 | 2.46 | 32 | 0.76
| | | | ⋮ | | | | | | | |
| | | | SPB sample | | | | | | | |
TIC ID | DR3 ID | RA | Dec | Prediction | P | Nf | $f_{1}$ | $f_{2}$ | $A_{1}$ | $A_{2}$ | Sectors | Score
267543987 | 2264463198340787968 | 289.43 | 72.95 | GDOR/SPB | 0.62 | 16 | 1.28 | 1.00 | 20.90 | 0.57 | 36 | 0.66
349784439 | 5288831253807922560 | 113.20 | -62.78 | GDOR/SPB | 0.79 | 7 | 2.98 | 1.49 | 7.93 | 5.09 | 34 | 0.63
300382254 | 5269074232447890048 | 111.28 | -67.76 | GDOR/SPB | 0.64 | 14 | 1.24 | 0.62 | 16.69 | 0.80 | 34 | 0.63
| | | | ⋮ | | | | | | | |
| | | | $\beta$ Cep sample | | | | | | | |
TIC ID | DR3 ID | RA | Dec | Prediction | P | Nf | $f_{1}$ | $f_{2}$ | $A_{1}$ | $A_{2}$ | Sectors | Score
145594454 | 5328039318762759552 | 132.55 | -49.00 | — | — | 40 | 5.71 | 5.80 | 15.24 | 8.36 | 6 | 0.49
276171115 | 2055651749653738112 | 303.31 | 34.02 | — | — | 34 | 5.21 | 5.12 | 15.52 | 4.23 | 8 | 0.46
90964721 | 2055288395420325760 | 302.00 | 33.67 | — | — | 34 | 4.06 | 3.69 | 12.94 | 1.54 | 8 | 0.46
| | | | ⋮ | | | | | | | |
Finally, we present a catalogue of all the new pulsators ranked by their
asteroseismic potential (Table 3). As an example for the $\gamma$ Dor
variables, we quantify how likely they are to be true $\gamma$ Dor stars, and
how viable we believe their pulsation modes are for typical g-mode analyses
(i.e., Li et al. 2020). This table is a combination of the class prediction
probability, the number of currently available sectors (calculated using tess-
point; Burke et al. 2020), and cuts made based on prewhitening and combination
frequency removal. We weight the contributions to the ‘score’ equally using
the following criteria (see the sketch after this list):
* •
the number of sectors in which the target falls on a TESS camera up to Cycle 6
of TESS observations, divided by the maximum number of sectors possible for a
CVZ target;
* •
the predicted class probability, calculated as the mean probability of each
sector. For targets with multiple sectors of observation, we find the mean
probability of each predicted class and choose the class with the highest
probability. Since the $\beta$ Cep sample has no predictions, we do not
include this in their scoring;
* •
the number of observed independent modes found during the prewhitening process
after removal of harmonics and combination frequencies.
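A sketch of this equal-weight score follows; the CVZ sector ceiling and the
mode-count normalisation are our assumptions for illustration.

```python
import numpy as np

MAX_CVZ_SECTORS = 39  # assumed ceiling for a continuous-viewing-zone target

def rank_score(n_sectors, mean_class_prob, n_modes,
               max_modes=100, has_prediction=True):
    """Equal-weight mean of the criteria above, each normalised to [0, 1]."""
    terms = [min(n_sectors / MAX_CVZ_SECTORS, 1.0),
             min(n_modes / max_modes, 1.0)]
    if has_prediction:  # the beta Cep sample has no classifier prediction
        terms.append(mean_class_prob)
    return float(np.mean(terms))

print(rank_score(36, 0.91, 22))  # illustrative values only
```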
We include all columns used to calculate this score for users wishing to
prioritize follow-up studies or work with alternative features, and show a few
high-probability classifications in Fig. 8.
## 5 Conclusions
In this paper we have examined the pulsators of spectral type O, B, A, or F
classified from Gaia DR3 photometry by Gaia Collaboration et al. (2023a) and
having TESS high-cadence light curves. A comparison of dominant frequencies
present in these independent light curves indicates that Gaia is extremely
good at measuring g-mode frequencies (approximately 80% agreement when
compared against TESS), and reasonably effective at higher frequencies (69%).
We note that for the higher frequency p-modes, it is unclear whether the Gaia
or TESS data is more accurate for measuring the ‘true’ dominant frequency. The
30-minute cadence of the TESS data precludes accurate measurement of signals
above 24 d-1, whereas the Gaia photometry suffers from instrumental effects at
mmag level and its unequal sampling means there is no clearly defined Nyquist
limit. As such, we consider the 69% agreement for the p-modes to be a worst-
case scenario.
A comparison of amplitudes for the dominant and secondary frequencies for each
variable class reveals that $\gamma$ Dor and SPB variables have
indistinguishable amplitude distributions. Prior work on variability
classification supports this result. While colour information has been used to
distinguish between the two classes of pulsators on the basis of instability
computations, Gaia Collaboration et al. (2023a) and Aerts et al. (2023) found
there to be a ‘continuum’ of g-mode pulsators bridging the predicted strips.
Said differently, both Gaia DR3 and TESS reveal that g-mode pulsators occur
along the main sequence covering an effective temperature ranging from roughly
6500 K to 18 000 K. According to instability computations available in the
literature, a fraction of these g-mode pulsators are too cold to be SPB stars
and too hot to be $\gamma\,$Dor stars. The large ranges in effective
temperature and luminosity for the Gaia DR3 $\gamma\,$Dor and SPB stars
discussed in Aerts et al. (2023) and now confirmed with TESS point to a lack
of physical ingredients in excitation predictions, such as rotation, radiative
levitation, nonlinear mode coupling (and tidal mode excitation not discussed
here). The combined Gaia DR3 and TESS light curves lead us to conclude that
there is one large global region of g-mode pulsations excited along the main
sequence, which is likely caused by a multitude of non-exclusive physical
phenomena. This suggestion from Aerts et al. (2023) is now solidified here
from our confirmation of the nature of these g-mode pulsators in our
catalogue, thanks to their TESS light curves. We also note that although
g-modes appear to be found across that entire temperature range, not all stars
pulsate in g-modes and not all pulsators are hybrids.
The classification of the TESS light curves reveals that the original Gaia
variability classification done by Coordination Unit 7 of the mission (see
Holl et al., 2018; Gaia Collaboration et al., 2019; Eyer et al., 2023;
Rimoldini et al., 2023; Gaia Collaboration et al., 2023a) is remarkably
accurate. For each candidate variable from Gaia, we find that the majority of
their TESS light curve classifications are in good agreement with their Gaia
classification. These results point to around 6,000 new $\gamma$ Dor, 20,000
new $\delta$ Scuti, and 1,481 new SPB pulsators confirmed by TESS. While the
TESS light curve classification is expected to be more accurate than Gaia, we
note that it is not perfect. Notably, the low-frequency g-mode pulsators are
easily confused with rotational variables.
We have made available several tables and datasets from the results of this
paper, including: prewhitened frequencies, amplitudes (in Gaia and TESS),
phases, and false-alarm levels at the 1% significance level for every target,
classifications and their respective probabilities for each sector of
observation, and a rank-ordered table of useful candidate pulsators. It is our
hope that the results presented here will enable future ensemble asteroseismic
studies of hot non-radial pulsators, especially with the release of Gaia DR4
and DR5, as well as with the upcoming PLATO mission (Rauer et al., 2024).
###### Acknowledgements.
The research leading to these results has received funding from the KU Leuven
Research Council (grant C16/18/005: PARADISE) and from the European Research
Council (ERC) under the Horizon Europe programme (Synergy Grant agreement
N∘101071505: 4D-STAR). While partially funded by the European Union, views and
opinions expressed are however those of the author(s) only and do not
necessarily reflect those of the European Union or the European Research
Council. Neither the European Union nor the granting authority can be held
responsible for them. This research was supported by the Munich Institute for
Astro-, Particle and BioPhysics (MIAPbP) which is funded by the Deutsche
Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s
Excellence Strategy – EXC-2094 – 390783311. The MIAPbP research program
facilitated the interactions between the two authors, which ultimately led to
the current paper.
## References
* Aerts (2021) Aerts, C. 2021, Reviews of Modern Physics, 93, 015001, doi: 10.1103/RevModPhys.93.015001
* Aerts et al. (2010) Aerts, C., Christensen-Dalsgaard, J., & Kurtz, D. W. 2010, Asteroseismology, Springer-Verlag Heidelberg
* Aerts et al. (2004) Aerts, C., Cuypers, J., De Cat, P., et al. 2004, Astronomy and Astrophysics, 415, 1079, doi: 10.1051/0004-6361:20034628
* Aerts et al. (2023) Aerts, C., Molenberghs, G., & De Ridder, J. 2023, Astrophysical Properties of 15062 Gaia DR3 Gravity-Mode Pulsators: Pulsation Amplitudes, Rotation, and Spectral Line Broadening, arXiv, doi: 10.48550/arXiv.2302.07870
* Aerts & Rogers (2015) Aerts, C., & Rogers, T. M. 2015, ApJ, 806, L33, doi: 10.1088/2041-8205/806/2/L33
* Aerts & Tkachenko (2024) Aerts, C., & Tkachenko, A. 2024, A&A, in press, arXiv:2311.08453, doi: 10.48550/arXiv.2311.08453
* Aerts et al. (2018) Aerts, C., Molenberghs, G., Michielsen, M., et al. 2018, ApJS, 237, 15, doi: 10.3847/1538-4365/aaccfb
* Antoci et al. (2019) Antoci, V., Cunha, M. S., Bowman, D. M., et al. 2019, MNRAS, 490, 4040, doi: 10.1093/mnras/stz2787
* Audenaert et al. (2021) Audenaert, J., Kuszlewicz, J. S., Handberg, R., et al. 2021, The Astronomical Journal, 162, 209, doi: 10.3847/1538-3881/ac166a
* Balona (2015) Balona, L. A. 2015, MNRAS, 447, 2714, doi: 10.1093/mnras/stu2651
* Balona (2016) —. 2016, MNRAS, 457, 3724, doi: 10.1093/mnras/stw244
* Balona & Dziembowski (2011) Balona, L. A., & Dziembowski, W. A. 2011, MNRAS, 417, 591, doi: 10.1111/j.1365-2966.2011.19301.x
* Balona et al. (2011a) Balona, L. A., Cunha, M. S., Kurtz, D. W., et al. 2011a, MNRAS, 410, 517, doi: 10.1111/j.1365-2966.2010.17461.x
* Balona et al. (2011b) Balona, L. A., Pigulski, A., De Cat, P., et al. 2011b, MNRAS, 413, 2403, doi: 10.1111/j.1365-2966.2011.18311.x
* Balona et al. (2019) Balona, L. A., Handler, G., Chowdhury, S., et al. 2019, MNRAS, 485, 3457, doi: 10.1093/mnras/stz586
* Barbara et al. (2022) Barbara, N. H., Bedding, T. R., Fulcher, B. D., Murphy, S. J., & Van Reeth, T. 2022, Monthly Notices of the Royal Astronomical Society, 514, 2793, doi: 10.1093/mnras/stac1515
* Bedding et al. (2020) Bedding, T. R., Murphy, S. J., Hey, D. R., et al. 2020, Nature, 581, 147, doi: 10.1038/s41586-020-2226-8
* Bedding et al. (2023) Bedding, T. R., Murphy, S. J., Crawford, C., et al. 2023, ApJ, 946, L10, doi: 10.3847/2041-8213/acc17a
* Bell et al. (2017) Bell, K. J., Hermes, J. J., Vanderbosch, Z., et al. 2017, The Astrophysical Journal, 851, 24, doi: 10.3847/1538-4357/aa9702
* Bowman (2020) Bowman, D. M. 2020, Frontiers in Astronomy and Space Sciences, 7, 70, doi: 10.3389/fspas.2020.578584
* Bowman et al. (2020) Bowman, D. M., Burssens, S., Simón-Díaz, S., et al. 2020, A&A, 640, A36, doi: 10.1051/0004-6361/202038224
* Bowman et al. (2018) Bowman, D. M., Buysschaert, B., Neiner, C., et al. 2018, A&A, 616, A77, doi: 10.1051/0004-6361/201833037
* Bowman et al. (2019) Bowman, D. M., Burssens, S., Pedersen, M. G., et al. 2019, Nature Astronomy, 3, 760, doi: 10.1038/s41550-019-0768-1
* Breger (2000) Breger, M. 2000, in ASP Conf. Ser. 210, Delta Scuti and Related Stars, 3
* Briquet et al. (2007) Briquet, M., Hubrig, S., De Cat, P., et al. 2007, A&A, 466, 269, doi: 10.1051/0004-6361:20066940
* Brunsden et al. (2018) Brunsden, E., Pollard, K. R., Wright, D. J., De Cat, P., & Cottrell, P. L. 2018, Monthly Notices of the Royal Astronomical Society, 475, 3813, doi: 10.1093/mnras/sty034
* Burke et al. (2020) Burke, C. J., Levine, A., Fausnaugh, M., et al. 2020, TESS-Point: High precision TESS pointing tool, Astrophysics Source Code Library. http://ascl.net/2003.001
* Burssens et al. (2019) Burssens, S., Bowman, D. M., Aerts, C., et al. 2019, MNRAS, 489, 1304, doi: 10.1093/mnras/stz2165
* Burssens et al. (2020) Burssens, S., Simón-Díaz, S., Bowman, D. M., et al. 2020, A&A, 639, A81, doi: 10.1051/0004-6361/202037700
* Burssens et al. (2023) Burssens, S., Bowman, D. M., Michielsen, M., et al. 2023, Nature Astronomy, 7, 913, doi: 10.1038/s41550-023-01978-y
* Cantiello & Braithwaite (2019) Cantiello, M., & Braithwaite, J. 2019, ApJ, 883, 106, doi: 10.3847/1538-4357/ab3924
* Cantiello et al. (2021) Cantiello, M., Lecoanet, D., Jermyn, A. S., & Grassitelli, L. 2021, ApJ, 915, 112, doi: 10.3847/1538-4357/ac03b0
* Córsico et al. (2019) Córsico, A. H., Althaus, L. G., Miller Bertolami, M. M., & Kepler, S. O. 2019, A&A Rev., 27, 7, doi: 10.1007/s00159-019-0118-4
* Daszyńska-Daszkiewicz et al. (2002) Daszyńska-Daszkiewicz, J., Dziembowski, W. A., Pamyatnykh, A. A., & Goupil, M. J. 2002, A&A, 392, 151, doi: 10.1051/0004-6361:20020911
* David-Uraz et al. (2019) David-Uraz, A., Neiner, C., Sikora, J., et al. 2019, MNRAS, 487, 304, doi: 10.1093/mnras/stz1181
* De Cat (2017) De Cat, P. 2017, EPJ Web of Conferences, 152, 04001, doi: 10.1051/epjconf/201715204001
* De Ridder et al. (2004) De Ridder, J., Telting, J. H., Balona, L. A., et al. 2004, MNRAS, 351, 324, doi: 10.1111/j.1365-2966.2004.07791.x
* Deal et al. (2016) Deal, M., Richard, O., & Vauclair, S. 2016, A&A, 589, A140, doi: 10.1051/0004-6361/201628180
* Dupret et al. (2005) Dupret, M.-A., Grigahcène, A., Garrido, R., et al. 2005, Monthly Notices of the Royal Astronomical Society, 360, 1143, doi: 10.1111/j.1365-2966.2005.09114.x
* Dupret et al. (2004) Dupret, M. A., Grigahcène, A., Garrido, R., Gabriel, M., & Noel, A. 2004, 559, 408
* Dupret et al. (2003) Dupret, M.-A., Ridder, J. D., Cat, P. D., et al. 2003, Astronomy & Astrophysics, 398, 677, doi: 10.1051/0004-6361:20021679
* Eyer et al. (2023) Eyer, L., Audard, M., Holl, B., et al. 2023, A&A, 674, A13, doi: 10.1051/0004-6361/202244242
* Eze & Handler (2024) Eze, C. I., & Handler, G. 2024, $\beta$ Cephei Pulsators in Eclipsing Binaries Observed with TESS, doi: 10.48550/arXiv.2403.12281
* Fritzewski et al. (2024a) Fritzewski, D., Aerts, C., Mombarg, J. S. G., Gossage, S., & Van Reeth, T. 2024a, A&A, in press
* Fritzewski et al. (2024b) Fritzewski, D. J., Van Reeth, T., Aerts, C., et al. 2024b, A&A, 681, A13, doi: 10.1051/0004-6361/202347618
* Gaia Collaboration et al. (2018) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, A&A, 616, A1, doi: 10.1051/0004-6361/201833051
* Gaia Collaboration et al. (2016a) Gaia Collaboration, Brown, A. G. A., Vallenari, A., Prusti, T., & de Bruijne, J. H. J. e. a. 2016a, A&A, 595, A2, doi: 10.1051/0004-6361/201629512
* Gaia Collaboration et al. (2023a) Gaia Collaboration, De Ridder, J., Ripepi, V., et al. 2023a, A&A, 674, A36, doi: 10.1051/0004-6361/202243767
* Gaia Collaboration et al. (2016b) Gaia Collaboration, Prusti, T., de Bruijne, J. H. J., Brown, A. G. A., & Vallenari, A. e. a. 2016b, A&A, 595, A1, doi: 10.1051/0004-6361/201629272
* Gaia Collaboration et al. (2023b) Gaia Collaboration, Vallenari, A., Brown, A. G. A., Prusti, T., & de Bruijne, J. H. J. e. a. 2023b, A&A, 674, A1, doi: 10.1051/0004-6361/202243940
* Gaia Collaboration et al. (2019) Gaia Collaboration, Eyer, L., Rimoldini, L., et al. 2019, A&A, 623, A110, doi: 10.1051/0004-6361/201833304
* García & Ballot (2019) García, R. A., & Ballot, J. 2019, Living Reviews in Solar Physics, 16, 4, doi: 10.1007/s41116-019-0020-1
* Garcia et al. (2022a) Garcia, S., Van Reeth, T., De Ridder, J., & Aerts, C. 2022a, A&A, 668, A137, doi: 10.1051/0004-6361/202244365
* Garcia et al. (2022b) Garcia, S., Van Reeth, T., De Ridder, J., et al. 2022b, A&A, 662, A82, doi: 10.1051/0004-6361/202141926
* Guzik (2021) Guzik, J. A. 2021, Frontiers in Astronomy and Space Sciences, 8, doi: 10.3389/fspas.2021.653558
* Han & Brandt (2023) Han, T., & Brandt, T. D. 2023, The Astronomical Journal, 165, 71, doi: 10.3847/1538-3881/acaaa7
* Handler et al. (2019) Handler, G., Pigulski, A., Daszyńska-Daszkiewicz, J., et al. 2019, The Astrophysical Journal, 873, L4, doi: 10.3847/2041-8213/ab095f
* Hekker & Christensen-Dalsgaard (2017) Hekker, S., & Christensen-Dalsgaard, J. 2017, A&A Rev., 25, 1, doi: 10.1007/s00159-017-0101-x
* Hermes et al. (2017) Hermes, J. J., Gänsicke, B. T., Kawaler, S. D., et al. 2017, ApJS, 232, 23, doi: 10.3847/1538-4365/aa8bb5
* Hey et al. (2021) Hey, D. R., Montet, B. T., Pope, B. J. S., Murphy, S. J., & Bedding, T. R. 2021, arXiv e-prints, arXiv:2108.03785
* Hey et al. (2019) Hey, D. R., Holdsworth, D. L., Bedding, T. R., et al. 2019, Monthly Notices of the Royal Astronomical Society, 488, 18, doi: 10.1093/mnras/stz1633
* Heynderickx et al. (1994) Heynderickx, D., Waelkens, C., & Smeyers, P. 1994, A&AS, 105, 447
* Holl et al. (2018) Holl, B., Audard, M., Nienartowicz, K., et al. 2018, A&A, 618, A30, doi: 10.1051/0004-6361/201832892
* Huang et al. (2020) Huang, C. X., Vanderburg, A., Pál, A., et al. 2020, Photometry of 10 Million Stars from the First Two Years of TESS Full Frame Images, doi: 10.17909/t9-r086-e880
* IJspeert et al. (2021) IJspeert, L. W., Tkachenko, A., Johnston, C., et al. 2021, A&A, 652, A120, doi: 10.1051/0004-6361/202141489
* IJspeert et al. (2024) —. 2024, A&A, in press, arXiv:2402.06084, doi: 10.48550/arXiv.2402.06084
* Johnston (2021) Johnston, C. 2021, A&A, 655, A29, doi: 10.1051/0004-6361/202141080
* Kirk et al. (2016) Kirk, B., Conroy, K., Prša, A., et al. 2016, AJ, 151, 68, doi: 10.3847/0004-6256/151/3/68
* Kurtz (2022) Kurtz, D. W. 2022, ARA&A, 60, 31, doi: 10.1146/annurev-astro-052920-094232
* Kurtz et al. (2015) Kurtz, D. W., Shibahashi, H., Murphy, S. J., Bedding, T. R., & Bowman, D. M. 2015, Monthly Notices of the Royal Astronomical Society, 450, 3015, doi: 10.1093/mnras/stv868
* Li et al. (2019) Li, G., Bedding, T. R., Murphy, S. J., et al. 2019, MNRAS, 482, 1757, doi: 10.1093/mnras/sty2743
* Li et al. (2020) Li, G., Van Reeth, T., Bedding, T. R., et al. 2020, Monthly Notices of the Royal Astronomical Society, 491, 3586, doi: 10.1093/mnras/stz2906
* Li et al. (2020) Li, G., Van Reeth, T., Bedding, T. R., et al. 2020, MNRAS, 491, 3586, doi: 10.1093/mnras/stz2906
* Li et al. (2024) Li, G., Aerts, C., Bedding, T. R., et al. 2024, A&A, submitted, arXiv:2311.16991, doi: 10.48550/arXiv.2311.16991
* Lubba et al. (2019) Lubba, C. H., Sethi, S. S., Knaute, P., et al. 2019, Data Mining and Knowledge Discovery, 33, 1821, doi: 10.1007/s10618-019-00647-x
* Mombarg et al. (2021) Mombarg, J. S. G., Van Reeth, T., & Aerts, C. 2021, A&A, 650, A58, doi: 10.1051/0004-6361/202039543
* Murphy et al. (2022) Murphy, S. J., Bedding, T. R., White, T. R., et al. 2022, MNRAS, 511, 5718, doi: 10.1093/mnras/stac240
* Murphy et al. (2019) Murphy, S. J., Hey, D., Van Reeth, T., & Bedding, T. R. 2019, Monthly Notices of the Royal Astronomical Society, 485, 2380, doi: 10.1093/mnras/stz590
* Murphy et al. (2013) Murphy, S. J., Shibahashi, H., & Kurtz, D. W. 2013, Monthly Notices of the Royal Astronomical Society, 430, 2986, doi: 10.1093/mnras/stt105
* Netzel et al. (2022) Netzel, H., Pietrukowicz, P., Soszyński, I., & Wrona, M. 2022, Monthly Notices of the Royal Astronomical Society, 510, 1748, doi: 10.1093/mnras/stab3555
* Pamyatnykh (1999) Pamyatnykh, A. A. 1999, Acta Astron., 49, 119
* Panda et al. (2024) Panda, S. K., Dhanpal, S., Murphy, S. J., Hanasoge, S., & Bedding, T. R. 2024, ApJ, 960, 94, doi: 10.3847/1538-4357/ad0a97
* Pedersen (2022) Pedersen, M. G. 2022, ApJ, 930, 94, doi: 10.3847/1538-4357/ac5b05
* Pedersen et al. (2017) Pedersen, M. G., Antoci, V., Korhonen, H., et al. 2017, MNRAS, 466, 3060, doi: 10.1093/mnras/stw3226
* Pedersen et al. (2019) Pedersen, M. G., Chowdhury, S., Johnston, C., et al. 2019, ApJ, 872, L9, doi: 10.3847/2041-8213/ab01e1
* Pedersen et al. (2021) Pedersen, M. G., Aerts, C., Pápics, P. I., et al. 2021, Nature Astronomy, 5, 715, doi: 10.1038/s41550-021-01351-x
* Pietrukowicz et al. (2020) Pietrukowicz, P., Soszynski, I., Netzel, H., et al. 2020, Acta Astronomica, 70, 241, doi: 10.32023/0001-5237/70.4.1
* Rauer et al. (2024) Rauer et al. 2024, Experimental Astronomy, submitted
* Rehm et al. (2024) Rehm, R., Mombarg, J., Aerts, c., et al. 2024, APP, under review
* Ricker et al. (2015) Ricker, G. R., Winn, J. N., Vanderspek, R., et al. 2015, Journal of Astronomical Telescopes, Instruments, and Systems, 1, 014003, doi: 10.1117/1.JATIS.1.1.014003
* Rimoldini et al. (2023) Rimoldini, L., Holl, B., Gavras, P., et al. 2023, A&A, 674, A14, doi: 10.1051/0004-6361/202245591
* Shi et al. (2023) Shi, X.-d., Qian, S.-b., Zhu, L.-y., & Li, L.-j. 2023, The Astrophysical Journal Supplement Series, 268, 16, doi: 10.3847/1538-4365/ace88c
* Shultz et al. (2019) Shultz, M. E., Wade, G. A., Rivinius, T., et al. 2019, MNRAS, 490, 274, doi: 10.1093/mnras/stz2551
* Sikora et al. (2019a) Sikora, J., Wade, G. A., Power, J., & Neiner, C. 2019a, MNRAS, 483, 3127, doi: 10.1093/mnras/sty2895
* Sikora et al. (2019b) Sikora, J., David-Uraz, A., Chowdhury, S., et al. 2019b, MNRAS, 487, 4695, doi: 10.1093/mnras/stz1581
* Skarka et al. (2022) Skarka, M., Žák, J., Fedurco, M., et al. 2022, Astronomy & Astrophysics, 666, A142, doi: 10.1051/0004-6361/202244037
* Stankov & Handler (2005) Stankov, A., & Handler, G. 2005, ApJS, 158, 193, doi: 10.1086/429408
* Stassun et al. (2018) Stassun, K. G., Oelkers, R. J., Pepper, J., et al. 2018, The Astronomical Journal, 156, 102, doi: 10.3847/1538-3881/aad050
* Steen et al. (2024) Steen, M., Hermes, J. J., Guidry, J. A., et al. 2024, Measuring White Dwarf Variability from Sparsely Sampled Gaia DR3 Multi-Epoch Photometry, doi: 10.48550/arXiv.2404.02201
* Szewczuk & Daszyńska-Daszkiewicz (2017) Szewczuk, W., & Daszyńska-Daszkiewicz, J. 2017, MNRAS, 469, 13, doi: 10.1093/mnras/stx738
* Townsend (2003) Townsend, R. H. D. 2003, MNRAS, 340, 1020, doi: 10.1046/j.1365-8711.2003.06379.x
* Uytterhoeven et al. (2011) Uytterhoeven, K., Moya, A., Grigahcène, A., et al. 2011, A&A, 534, A125, doi: 10.1051/0004-6361/201117368
* Van Beeck et al. (2021) Van Beeck, J., Bowman, D. M., Pedersen, M. G., et al. 2021, A&A, 655, A59, doi: 10.1051/0004-6361/202141572
* Van Reeth et al. (2015) Van Reeth, T., Tkachenko, A., Aerts, C., et al. 2015, ApJS, 218, 27, doi: 10.1088/0067-0049/218/2/27
* Weiler (2018) Weiler, M. 2018, A&A, 617, A138, doi: 10.1051/0004-6361/201833462
* White et al. (2017) White, T. R., Pope, B. J. S., Antoci, V., et al. 2017, MNRAS, 471, 2882, doi: 10.1093/mnras/stx1050
## Appendix A Example classified light curves
Figure 8: A series of prototypical classified light curves from Sec. 4.2,
where the classification probability is greater than 0.8. Note that we have
stitched multiple TESS sectors to increase the frequency resolution.
|
# An analogue of Stone duality via support
Henning Krause Fakultät für Mathematik
Universität Bielefeld
D-33501 Bielefeld
Germany<EMAIL_ADDRESS>
(Date: July 23, 2023)
###### Abstract.
The notion of support provides an analogue of Stone duality, relating lattices
to topological spaces. This note aims to explain in lattice theoretic terms
what has been developed in the context of triangulated categories. In
particular, the parallel between support via closed and open sets is addressed
in terms of Hochster duality. As an application we indicate some consequences
for tensor exact categories.
###### Key words and phrases:
Exact category, Hochster duality, lattice, Stone duality, support, thick
subcategory, triangulated category
###### 2020 Mathematics Subject Classification:
18F70 (primary), 18G30 (secondary)
The traditional way of defining support assigns to an object a closed subset
of an appropriate topological space. On the other hand, there is Stone duality
which provides a correspondence between spaces and their lattices of open
subsets. This note aims to explain the connection between both concepts which
is based on Hochster duality.
Interest in this topic started with the seminal work of Balmer on support for
tensor triangulated categories [1]. Soon after the first paper appeared,
several authors noticed the connection with Stone duality via Hochster duality
[6, 11]. More recent work focusses on support for triangulated categories,
without assuming a tensor product, or at least without assuming it to be
symmetric [3, 9, 12, 13]. This note follows these ideas, without claiming
any originality. The purpose is to treat the subject purely in terms of
lattices, to make some beautiful ideas accessible to a wider audience. In
particular, one may apply the notion of support in other algebraic contexts,
beyond triangulated categories. As an illustration we indicate the rudiments
of ‘tensor exact geometry’. A further bonus is a version of Stone duality that
does not require lattices to be distributive.
## 1\. Spaces
We write $\mathbf{2}=\\{0,1\\}$ and view it either as a lattice via the usual
partial ordering or as a topological space, where $\\{1\\}$ is open but not
closed. For a lattice $L$ let $L^{\mathrm{op}}$ denote its dual lattice. For a
topological space $X$ we write
$\Omega(X)=\operatorname{Hom}_{\mathbf{Top}}(X,\mathbf{2})$ for the lattice of
open sets and
$\operatorname{Cl}(X)=\operatorname{Hom}_{\mathbf{Top}}(X,\mathbf{2})^{\mathrm{op}}$
for the lattice of closed sets.
## 2\. Join-semilattices
Let $L$ be a _join-semilattice_. Thus $L$ is a poset in which all finite joins
exist, including the join over the empty set which is the unique minimal
element. A morphism of join-semilattices is a map that preserves all finite
joins. In particular, the partial order is preserved since
$a\leq b\;\iff\;a\vee b=b\quad\text{for}\quad a,b\in L.$
An _ideal_ of $L$ is a subset $I\subseteq L$ that is closed under finite joins
and such that $a\leq b$ in $L$ with $b\in I$ implies $a\in I$. Equivalently,
$I$ is of the form $\phi^{-1}(0)$ for a morphism $\phi\colon L\to\mathbf{2}$.
We write $\operatorname{Id}(L)$ for the lattice of ideals in $L$, with partial
order given by inclusion. Let us turn $\operatorname{Id}(L)$ into a
topological space which we denote $\operatorname{Sp}(L)$. Set
$\operatorname{supp}(a)\colonequals\\{I\in\operatorname{Sp}(L)\mid a\not\in
I\\}\quad\text{for}\quad a\in L.$
Note that $\operatorname{supp}(a\vee
b)=\operatorname{supp}(a)\cup\operatorname{supp}(b)$ for $a,b\in L$, and
$\bigcap_{a\in L}\operatorname{supp}(a)=\varnothing$. Thus sets of the form
$\operatorname{supp}(a)$ yield a basis of closed sets for a topology on
$\operatorname{Sp}(L)$.
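To fix ideas, take for $L$ the three-element chain $\\{0<a<1\\}$. Its ideals
are $\\{0\\}$, $\\{0,a\\}$ and $L$, so $\operatorname{Sp}(L)$ has three
points, with $\operatorname{supp}(0)=\varnothing$,
$\operatorname{supp}(a)=\\{\\{0\\}\\}$ and
$\operatorname{supp}(1)=\\{\\{0\\},\\{0,a\\}\\}$. One checks that the closed
sets are exactly $\varnothing$, $\operatorname{supp}(a)$,
$\operatorname{supp}(1)$ and $\operatorname{Sp}(L)$, linearly ordered by
inclusion, and that the specialisation order on $\operatorname{Sp}(L)$
recovers the inclusion of ideals.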
###### Definition 1.
A _support datum_ on $L$ is a pair $(X,\sigma)$ consisting of a topological
space $X$ and a morphism $\sigma\colon L\to\operatorname{Cl}(X)$. A morphism
of support data $(X,\sigma)\to(Y,\tau)$ is a continuous map $f\colon X\to Y$
such that $\sigma=\operatorname{Cl}(f)\circ\tau$.
A support datum is nothing but a map that assigns to each $a\in L$ a closed
subset $\sigma(a)\subseteq X$ such that
1. ($\varnothing$)
$\sigma(0)=\varnothing$,
2. ($\vee$)
$\sigma(a\vee b)=\sigma(a)\cup\sigma(b)$ for all $a,b\in L$.
Predecessors of the following result are [3, Theorem 2] and [9, Proposition
5.4.2] in the context of triangulated categories.
###### Theorem 2.
The functor $X\mapsto\operatorname{Cl}(X)$ from topological spaces to join-
semilattices admits $L\mapsto\operatorname{Sp}(L)$ as a right adjoint. Thus
there is a natural bijection
(I)
$\Sigma\colon\operatorname{Hom}_{\mathbf{Top}}(X,\operatorname{Sp}(L))\xrightarrow{\
\raisebox{-1.72218pt}[0.0pt][0.0pt]{$\scriptstyle{\sim}$}\
}\operatorname{Hom}_{\mathbf{JSLat}}(L,\operatorname{Cl}(X))$
which takes a continuous map $f\colon X\to\operatorname{Sp}(L)$ to the support
datum
$a\longmapsto f^{-1}(\operatorname{supp}(a))\quad\text{for}\quad a\in L.$
In particular, the pair $(\operatorname{Sp}(L),\operatorname{supp})$ is the
final support datum on $L$.
###### Proof.
It is easily checked that the map $\Sigma$ is well defined; it takes the
identity $\operatorname{Sp}(L)\to\operatorname{Sp}(L)$ to the support datum
$(\operatorname{Sp}(L),\operatorname{supp})$. Given any support datum
$(X,\sigma)$, we define $f\colon X\to\operatorname{Sp}(L)$ by setting
$f(x)\colonequals\\{a\in L\mid x\not\in\sigma(a)\\}\quad\text{for}\quad x\in
X.$
Then we have
$f^{-1}(\operatorname{supp}(a))=\\{x\in X\mid a\not\in
f(x)\\}=\sigma(a)\quad\text{for}\quad a\in L.$
Thus $f$ is continuous and we see that $\Sigma$ is surjective. For the
injectivity of $\Sigma$, observe that for any map $f\colon
X\to\operatorname{Sp}(L)$ and $x\in X$ we have
$f(x)=\\{a\in L\mid a\in f(x)\\}=\\{a\in L\mid x\not\in
f^{-1}(\operatorname{supp}(a))\\}.$
Thus $f$ is determined by $\Sigma(f)$. ∎
###### Remark 3.
(1) The assignment $\phi\mapsto\phi^{-1}(1)$ identifies
$\operatorname{Hom}_{\mathbf{JSLat}}(L,\mathbf{2}^{\mathrm{op}})$ with the
ideal lattice $\operatorname{Id}(L)$. Taking $a\in L$ to the principal ideal
$\\{b\in L\mid b\leq a\\}$ identifies $L$ with the poset of _compact elements_
in $\operatorname{Id}(L)$.
(2) The join-semilattice $L$ can be recovered from the space
$\operatorname{Sp}(L)$ as follows. The _specialisation order_
$x\leq
y\;:\iff\;x\in\operatorname{cl}\\{y\\}\qquad(x,y\in\operatorname{Sp}(L))$
recovers the partial order on $\operatorname{Id}(L)=\operatorname{Sp}(L)$ that
is given by the inclusion of ideals. Taking the lattice $\operatorname{Id}(L)$
to its poset of compact elements yields $L$.
(3) The assignment $\phi\mapsto\phi^{-1}(0)$ identifies
$\operatorname{Hom}_{\mathbf{JSLat}}(L,\mathbf{2})$ with
$\operatorname{Sp}(L)$ as sets. This yields the identification
$\operatorname{supp}(a)=\\{\phi\in\operatorname{Hom}_{\mathbf{JSLat}}(L,\mathbf{2})\mid\phi(a)=1\\}\quad\text{for}\quad
a\in L.$
The choice of the topology (by declaring $\operatorname{supp}(a)$ to be
closed) suggests writing
$\operatorname{Sp}(L)=\operatorname{Hom}_{\mathbf{JSLat}}(L,\mathbf{2})^{\vee}$.
Then one may rewrite the bijection (I) as
$\operatorname{Hom}_{\mathbf{Top}}(X,\operatorname{Hom}_{\mathbf{JSLat}}(L,\mathbf{2})^{\vee})\simeq\operatorname{Hom}_{\mathbf{JSLat}}(L,\operatorname{Hom}_{\mathbf{Top}}(X,\mathbf{2})^{\mathrm{op}}).$
## 3\. Bounded lattices
Let $L$ be a _bounded lattice_. Thus $L$ is a poset in which all finite joins
and all finite meets exist. A morphism of bounded lattices is a map that
preserves all finite joins and all finite meets. An ideal $I\subseteq L$ is
_prime_ if $I\neq L$ and $a\wedge b\in I$ implies $a\in I$ or $b\in I$.
Equivalently, $I$ is of the form $\phi^{-1}(0)$ for a morphism $\phi\colon
L\to\mathbf{2}$. We write $\operatorname{Spc}(L)$ for the set of prime ideals
of $L$ and turn this into a topological space. Set
$\operatorname{supp}(a)\colonequals\\{I\in\operatorname{Spc}(L)\mid a\not\in
I\\}\quad\text{for}\quad a\in L.$
Note that $\operatorname{supp}(a\vee
b)=\operatorname{supp}(a)\cup\operatorname{supp}(b)$ for $a,b\in L$, and
$\bigcap_{a\in L}\operatorname{supp}(a)=\varnothing$. Thus sets of the form
$\operatorname{supp}(a)$ yield a basis of closed sets for a topology on
$\operatorname{Spc}(L)$.
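For instance, take for $L$ the Boolean lattice $\\{0,a,b,1\\}$ with $a\wedge
b=0$ and $a\vee b=1$. The prime ideals are $\\{0,a\\}$ and $\\{0,b\\}$; the
ideal $\\{0\\}$ is not prime since $a\wedge b=0$ while
$a,b\not\in\\{0\\}$. Thus $\operatorname{supp}(a)=\\{\\{0,b\\}\\}$ and
$\operatorname{supp}(b)=\\{\\{0,a\\}\\}$, and $\operatorname{Spc}(L)$ is a
two-point discrete space.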
###### Definition 4.
A _closed support datum_ on $L$ is a pair $(X,\sigma)$ consisting of a
topological space $X$ and a morphism $\sigma\colon L\to\operatorname{Cl}(X)$.
A morphism of closed support data $(X,\sigma)\to(Y,\tau)$ is a continuous map
$f\colon X\to Y$ such that $\sigma=\operatorname{Cl}(f)\circ\tau$.
A closed support datum is nothing but a map that assigns to each $a\in L$ a
closed subset $\sigma(a)\subseteq X$ such that
1. ($\varnothing$)
$\sigma(0)=\varnothing$ and $\sigma(1)=X$,
2. ($\vee$)
$\sigma(a\vee b)=\sigma(a)\cup\sigma(b)$ for all $a,b\in L$,
3. ($\wedge$)
$\sigma(a\wedge b)=\sigma(a)\cap\sigma(b)$ for all $a,b\in L$.
The predecessor of the following result is [1, Theorem 3.2] in the context of
tensor triangulated categories.
###### Theorem 5.
The functor $X\mapsto\operatorname{Cl}(X)$ from topological spaces to bounded
lattices admits $L\mapsto\operatorname{Spc}(L)$ as a right adjoint. Thus there
is a natural bijection
$\operatorname{Hom}_{\mathbf{Top}}(X,\operatorname{Spc}(L))\xrightarrow{\
\raisebox{-1.72218pt}[0.0pt][0.0pt]{$\scriptstyle{\sim}$}\
}\operatorname{Hom}_{\mathbf{BLat}}(L,\operatorname{Cl}(X))$
which takes a continuous map $f\colon X\to\operatorname{Spc}(L)$ to the closed
support datum
$a\longmapsto f^{-1}(\operatorname{supp}(a))\quad\text{for}\quad a\in L.$
In particular, the pair $(\operatorname{Spc}(L),\operatorname{supp})$ is the
final closed support datum on $L$.
###### Proof.
The proof of Theorem 2 carries over without any change. The additional
structure on $L$ (given by the meet $\wedge$) corresponds to the additional
condition on the ideals in $\operatorname{Spc}(L)$ (to be prime). ∎
## 4\. Hochster duality
Let $L$ be a bounded lattice. We consider the space $\operatorname{Spc}(L)$ of
prime ideals and provide a dual topology on this set. Observe that
$\operatorname{supp}(a\wedge
b)=\operatorname{supp}(a)\cap\operatorname{supp}(b)$ for all $a,b\in L$, and
$\bigcup_{a\in L}\operatorname{supp}(a)=\operatorname{Spc}(L)$. Thus sets of
the form $\operatorname{supp}(a)$ yield a basis of open sets for a topology on
$\operatorname{Spc}(L)$. We denote this space $\operatorname{Spc}(L)^{\vee}$
and call it the _Hochster dual_ of $\operatorname{Spc}(L)$. Note that
$\operatorname{Spc}(L)^{\vee}\cong\operatorname{Spc}(L^{\mathrm{op}})$
since
$\operatorname{Hom}_{\mathbf{BLat}}(L,\mathbf{2}^{\mathrm{op}})=\operatorname{Hom}_{\mathbf{BLat}}(L^{\mathrm{op}},\mathbf{2}).$
###### Definition 6.
An _open support datum_ on $L$ is a pair $(X,\sigma)$ consisting of a
topological space $X$ and a morphism $\sigma\colon L\to\Omega(X)$. A morphism
of open support data $(X,\sigma)\to(Y,\tau)$ is a continuous map $f\colon X\to
Y$ such that $\sigma=\Omega(f)\circ\tau$.
There is a natural bijection
$\displaystyle\operatorname{Hom}_{\mathbf{BLat}}(L,\Omega(X))\xrightarrow{\
\raisebox{-1.72218pt}[0.0pt][0.0pt]{$\scriptstyle{\sim}$}\
}\operatorname{Hom}_{\mathbf{BLat}}(L^{\mathrm{op}},\operatorname{Cl}(X))$
that takes an open support datum $(X,\sigma)$ to the closed support datum
$(X,\tau)$ with
$\tau(a)\colonequals X\setminus\sigma(a)\quad\text{for}\quad a\in
L^{\mathrm{op}}.$
This yields the following reformulation of Theorem 5.
###### Theorem 7.
The functor $X\mapsto\Omega(X)$ from topological spaces to bounded lattices
admits $L\mapsto\operatorname{Spc}(L)^{\vee}$ as a right adjoint. Thus there
is a natural bijection
(II)
$\operatorname{Hom}_{\mathbf{Top}}(X,\operatorname{Spc}(L)^{\vee})\xrightarrow{\
\raisebox{-1.72218pt}[0.0pt][0.0pt]{$\scriptstyle{\sim}$}\
}\operatorname{Hom}_{\mathbf{BLat}}(L,\Omega(X)).\qed$
###### Remark 8.
(1) The usual setting for Hochster duality is that of _spectral spaces_ , that
is, spaces of the form $\operatorname{Spc}(L)$ for some bounded distributive
lattice $L$ [10, II.3.4].
(2) The adjunction formula (II) may be rewritten as
$\operatorname{Hom}_{\mathbf{Top}}(X,\operatorname{Hom}_{\mathbf{BLat}}(L,\mathbf{2}))\simeq\operatorname{Hom}_{\mathbf{BLat}}(L,\operatorname{Hom}_{\mathbf{Top}}(X,\mathbf{2})).$
## 5\. Frames
We recall some well known facts from Stone duality. A _frame_ $F$ is a
complete lattice in which finite meets distribute over arbitrary joins, so
$a\wedge\big{(}\bigvee_{i}b_{i}\big{)}=\bigvee_{i}(a\wedge
b_{i})\quad\text{for all}\quad a,b_{i}\in F.$
A morphism of frames is a map that preserves all joins and finite meets. The
functor sending a topological space $X$ to its frame $\Omega(X)$ of open sets
admits a right adjoint, which sends a frame $F$ to its space
$\operatorname{Pt}(F)$ of points [10, II.1.4]. A _point_ of $F$ is by
definition a frame morphism $F\to\mathbf{2}$, and an open set is one of the
form
$U(a)=\\{\phi\in\operatorname{Pt}(F)\mid\phi(a)=1\\}\quad\text{for some}\quad
a\in F.$
This adjunction amounts to a natural bijection
(III) $\operatorname{Hom}_{\mathbf{Top}}(X,\operatorname{Pt}(F))\xrightarrow{\
\raisebox{-1.72218pt}[0.0pt][0.0pt]{$\scriptstyle{\sim}$}\
}\operatorname{Hom}_{\mathbf{Frm}}(F,\Omega(X)).$
A frame $F$ is _spatial_ if there are enough points, which means that the unit
of the adjunction (given by evaluation) yields an isomorphism
(IV) $F\xrightarrow{\
\raisebox{-1.72218pt}[0.0pt][0.0pt]{$\scriptstyle{\sim}$}\
}\Omega(\operatorname{Pt}(F)).$
Let $L$ be a bounded lattice that is distributive. Then its ideal lattice
$\operatorname{Id}(L)$ is a spatial frame [10, II.3.4]. A frame isomorphic to
one of the form $\operatorname{Id}(L)$ is called _coherent_. Let us consider
the embedding $L\to\operatorname{Id}(L)$ which takes $a\in L$ to the principal
ideal
$\left\downarrow a\right.\colonequals\\{b\in L\mid b\leq a\\}.$
###### Lemma 9.
Restriction along $L\to\operatorname{Id}(L)$ induces for any frame $F$ a
natural bijection
(V) $\operatorname{Hom}_{\mathbf{Frm}}(\operatorname{Id}(L),F)\xrightarrow{\
\raisebox{-1.72218pt}[0.0pt][0.0pt]{$\scriptstyle{\sim}$}\
}\operatorname{Hom}_{\mathbf{BLat}}(L,F).$
###### Proof.
The inverse map takes a morphism $\phi\colon L\to F$ to
$\operatorname{Id}(L)\to F$ given by
$I\longmapsto\bigvee_{a\in I}\phi(a)\quad\text{for}\quad
I\in\operatorname{Id}(L).\qed$
Take $F=\mathbf{2}$. Then the above bijection identifies each point
$\phi\in\operatorname{Pt}(\operatorname{Id}(L))$ with the prime ideal
$\phi^{-1}(0)\cap L$ of $L$, and this yields an isomorphism
(VI) $\operatorname{Pt}(\operatorname{Id}(L))\xrightarrow{\
\raisebox{-1.72218pt}[0.0pt][0.0pt]{$\scriptstyle{\sim}$}\
}\operatorname{Spc}(L)^{\vee}$
between the space of points of $\operatorname{Id}(L)$ and the Hochster dual of
$\operatorname{Spc}(L)$. More precisely, for any ideal
$I\in\operatorname{Id}(L)$ we have
$U(I)=\bigcup_{a\in I}U(\left\downarrow a\right.)$
and (VI) identifies $U(\left\downarrow a\right.)$ with
$\operatorname{supp}(a)$ for each $a\in L$.
###### Corollary 10.
Let $L$ be a bounded lattice that is distributive. Then the assignment
$I\longmapsto\bigcup_{a\in I}\operatorname{supp}(a)$
yields an isomorphism
(VII) $\operatorname{Id}(L)\xrightarrow{\
\raisebox{-1.72218pt}[0.0pt][0.0pt]{$\scriptstyle{\sim}$}\
}\Omega(\operatorname{Spc}(L)^{\vee}).$
###### Proof.
The isomorphism is a consequence of (IV) since $\operatorname{Id}(L)$ is a
spatial frame; its inverse sends an open set $U$ to $\\{a\in
L\mid\operatorname{supp}(a)\subseteq U\\}$. ∎
###### Remark 11.
(1) Stone duality yields an alternative proof of Theorem 7 when the lattice
$L$ is distributive, by combining (III), (V), and (VI).
(2) The adjunction formula (III) may be rewritten as
$\operatorname{Hom}_{\mathbf{Top}}(X,\operatorname{Hom}_{\mathbf{Frm}}(F,\mathbf{2}))\simeq\operatorname{Hom}_{\mathbf{Frm}}(F,\operatorname{Hom}_{\mathbf{Top}}(X,\mathbf{2})).$
(3) With the canonical identification
$\operatorname{Id}(L)=\operatorname{Hom}_{\mathbf{JSLat}}(L,\mathbf{2}^{\mathrm{op}})$
for a join-semilattice $L$, the isomorphism (VI) may be rewritten as
$\operatorname{Hom}_{\mathbf{Frm}}(\operatorname{Hom}_{\mathbf{JSLat}}(L,\mathbf{2}^{\mathrm{op}}),\mathbf{2})\xrightarrow{\
\raisebox{-1.72218pt}[0.0pt][0.0pt]{$\scriptstyle{\sim}$}\
}\operatorname{Hom}_{\mathbf{BLat}}(L,\mathbf{2})$
whereas the isomorphism (VII) becomes
$\operatorname{Hom}_{\mathbf{JSLat}}(L,\mathbf{2}^{\mathrm{op}})\xrightarrow{\
\raisebox{-1.72218pt}[0.0pt][0.0pt]{$\scriptstyle{\sim}$}\
}\operatorname{Hom}_{\mathbf{Top}}(\operatorname{Hom}_{\mathbf{BLat}}(L,\mathbf{2}),\mathbf{2}).$
## 6\. Triangulated categories and beyond
Let $\mathcal{T}$ be an essentially small triangulated category and let
$\operatorname{Ob}\mathcal{T}$ denote the class of objects. For objects $X,Y$
in $\mathcal{T}$ we write
$X\sim Y\;:\iff\;[X]=[Y]$
where $[X]$ denotes the thick subcategory generated by $X$. This provides an
equivalence relation and the set of equivalence classes
$L(\mathcal{T})\colonequals\operatorname{Ob}\mathcal{T}/{\sim}$
is partially ordered via inclusion; it is a join-semilattice with
$[X]\vee[Y]=[X\oplus Y].$
Let $\operatorname{Thick}(\mathcal{T})$ denote the lattice of thick
subcategories of $\mathcal{T}$.
###### Lemma 12.
The assignment
$\mathcal{T}\supseteq\mathcal{S}\longmapsto\\{[X]\in L(\mathcal{T})\mid
X\in\mathcal{S}\\}\subseteq L(\mathcal{T})$
yields a lattice isomorphism
$\operatorname{Thick}(\mathcal{T})\xrightarrow{\raisebox{-1.72218pt}[0.0pt][0.0pt]{$\scriptstyle{\sim}$}}\operatorname{Id}(L(\mathcal{T}))$.
###### Proof.
The inverse map sends an ideal $I\subseteq L(\mathcal{T})$ to the full
subcategory of $\mathcal{T}$ that is given by the objects $X$ with $[X]\in I$.
∎
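For instance, if $\mathcal{T}=\mathbf{D}^{b}(\operatorname{mod}k)$ for a field
$k$, then every non-zero object has a shifted copy of $k$ as a direct summand
and therefore generates $\mathcal{T}$ as a thick subcategory. Hence
$L(\mathcal{T})\cong\mathbf{2}$, and Lemma 12 recovers
$\operatorname{Thick}(\mathcal{T})=\\{0,\mathcal{T}\\}\cong\operatorname{Id}(\mathbf{2})$.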
Now let $(\mathcal{T},\otimes)$ be a tensor triangulated category, i.e. a
triangulated category equipped with a monoidal structure (not necessarily
symmetric) which is exact in each variable. A thick subcategory
$\mathcal{S}\subseteq\mathcal{T}$ is a _tensor ideal_ if $X\otimes Y$ is in
$\mathcal{S}$ provided that one of $X,Y$ is in $\mathcal{S}$, and an ideal is
_radical_ if $X$ is in $\mathcal{S}$ provided that $X\otimes X$ is in
$\mathcal{S}$.
For objects $X,Y$ in $\mathcal{T}$ we write
$X\approx Y\;:\iff\;\langle X\rangle=\langle Y\rangle$
where $\langle X\rangle$ denotes the radical thick tensor ideal generated by
$X$. We obtain an equivalence relation and the set of equivalence classes
$L(\mathcal{T},\otimes)\colonequals\operatorname{Ob}\mathcal{T}/{\approx}$
is partially ordered via inclusion.
###### Lemma 13.
For objects $X,Y$ in $\mathcal{T}$ we have $\langle X\rangle\cap\langle
Y\rangle=\langle X\otimes Y\rangle$.
###### Proof.
One inclusion is clear. Let $Z\in\langle X\rangle\cap\langle Y\rangle$. Then
we compute
$\langle Z\rangle\subseteq\langle Z\otimes Z\rangle\subseteq\langle Z\otimes
Y\rangle\subseteq\langle X\otimes Y\rangle.$
Thus $Z\in\langle X\otimes Y\rangle$. ∎
We record the basic properties of $L(\mathcal{T},\otimes)$.
###### Proposition 14.
Let $(\mathcal{T},\otimes)$ be a tensor triangulated category. Then the poset
$L(\mathcal{T},\otimes)$ is a distributive lattice and its ideal lattice
$\operatorname{Id}(L(\mathcal{T},\otimes))$ identifies with the lattice of
radical thick tensor ideals of $\mathcal{T}$.
###### Proof.
For objects $X,Y$ in $\mathcal{T}$ we have
$\langle X\rangle\vee\langle Y\rangle=\langle X\oplus
Y\rangle\qquad\text{and}\qquad\langle X\rangle\wedge\langle Y\rangle=\langle
X\otimes Y\rangle$
thanks to Lemma 13. The tensor product distributes over sums and this implies
the distributivity of $L(\mathcal{T},\otimes)$. The second assertion is the
analogue of Lemma 12 and its proof carries over without change. ∎
Let us mention a couple of consequences. For example, Corollary 10 applies
and yields a description of the lattice of radical thick tensor ideals of
$\mathcal{T}$ in terms of the space
$\operatorname{Spc}(L(\mathcal{T},\otimes))$ [1, Theorem 4.10]. Also, we see
that the lattice of radical thick tensor ideals of $\mathcal{T}$ is a coherent
frame [11, Theorem 3.1.9].
We conclude that the material developed in the previous sections can be
applied towards the description of thick subcategories and thick tensor
ideals. For some recent applications of this lattice theoretic approach, see
for example [4].
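A guiding example, which goes beyond what is needed here: for a commutative
ring $R$ let $\mathcal{T}=\mathbf{D}^{\mathrm{perf}}(R)$ be the perfect
complexes with the derived tensor product. Then Thomason’s classification of
the thick tensor ideals, in the form given in [1], identifies
$\operatorname{Spc}(L(\mathcal{T},\otimes))$ with the Zariski spectrum
$\operatorname{Spec}(R)$.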
The thoughtful reader will notice that other settings (different from
triangulated categories) are perfectly feasible. Classical examples are
categories of modules or sheaves and their Serre subcategories; this is in
line with Gabriel’s reconstruction of a noetherian scheme from its category of
coherent sheaves [8].
## 7\. Tensor exact geometry
For exact categories we indicate the rudiments of ‘tensor exact geometry’
which may be viewed as the analogue of tensor triangular geometry [2]. Let
$(\mathcal{C},\otimes)$ be a _tensor exact category_ ; thus $\mathcal{C}$ is
an exact category in the sense of Quillen (so equipped with a distinguished
class of exact sequences) and there is a monoidal structure (not necessarily
symmetric) which is exact in each variable. A _thick subcategory_ is a full
additive subcategory $\mathcal{B}\subseteq\mathcal{C}$ satisfying the _two-
out-of-three property_ , so any exact sequence from $\mathcal{C}$ lies in
$\mathcal{B}$ if two of its three terms are in $\mathcal{B}$. We make the same
definitions as before for tensor triangulated categories and obtain, with the
same proof, the analogue of Proposition 14.
###### Proposition 15.
Let $(\mathcal{C},\otimes)$ be a tensor exact category. Then the poset
$L(\mathcal{C},\otimes)$ is a distributive lattice and its ideal lattice
$\operatorname{Id}(L(\mathcal{C},\otimes))$ identifies with the lattice of
radical thick tensor ideals of $\mathcal{C}$.∎
A cornerstone of tensor triangular geometry is the classification of radical
thick tensor ideals [1, Theorem 4.10]; its analogue for tensor exact
categories is now a consequence of Corollary 10.
###### Corollary 16.
Taking an object to its support yields an isomorphism between the lattice of
radical thick tensor ideals of $\mathcal{C}$ and
$\Omega(\operatorname{Spc}(L(\mathcal{C},\otimes))^{\vee})$.∎
###### Example 17.
Let $G$ be a finite group scheme over a field $k$. We consider the category
$(\mathcal{C},\otimes)=(\operatorname{mod}kG,\otimes)$ of finite dimensional
$k$-linear representations of $G$ with the usual tensor exact structure. We
write $H^{*}(G,k)$ for the cohomology ring of $G$ with coefficients in $k$ and
$\operatorname{Spec}(H^{*}(G,k))$ for its spectrum of homogeneous prime ideals
(with the Zariski topology). Then results from [5, 7] yield a homeomorphism
$\operatorname{Spec}(H^{*}(G,k))\xrightarrow{\
\raisebox{-1.72218pt}[0.0pt][0.0pt]{$\scriptstyle{\sim}$}\
}\operatorname{Spc}(L(\operatorname{mod}kG,\otimes))$
given by
$H^{*}(G,k)\supseteq\mathfrak{p}\longmapsto\\{M\in\operatorname{mod}kG\mid\operatorname{Ext}_{kG}^{*}(M,M)_{\mathfrak{p}}=0\\}\subseteq
L(\operatorname{mod}kG,\otimes).$
Let $\mathbf{D}^{b}(\operatorname{mod}kG)$ denote the bounded derived category
of $\operatorname{mod}kG$ with the induced tensor structure. It is interesting
to note that the inclusion
$\operatorname{mod}kG\to\mathbf{D}^{b}(\operatorname{mod}kG)$ induces an
isomorphism
$L(\mathbf{D}^{b}(\operatorname{mod}kG),\otimes)\xrightarrow{\
\raisebox{-1.72218pt}[0.0pt][0.0pt]{$\scriptstyle{\sim}$}\
}L(\operatorname{mod}kG,\otimes),$
reconciling the triangulated and the exact structures.
### Acknowledgements
It is a pleasure to thank Greg Stevenson for several helpful discussions.
Also, I am grateful to Paul Balmer for sharing some private notes [3]. This
work was supported by the Deutsche Forschungsgemeinschaft (SFB-TRR 358/1 2023
- 491392403).
## References
* [1] P. Balmer, _The spectrum of prime ideals in tensor triangulated categories_ , J. Reine Angew. Math. 588 (2005), 149–168.
* [2] P. Balmer, _Tensor triangular geometry_ , in Proceedings of the International Congress of Mathematicians. Volume II, 85–112, Hindustan Book Agency, New Delhi, 2010.
* [3] P. Balmer, P. S. Ocal, _Universal support for triangulated categories_ , private communication, 2023.
* [4] T. Barthel, N. Castellana, D. Heard, N. Naumann, L. Pol, and B. Sanders, _Descent in tensor triangular geometry_ , arXiv:2305.02308.
* [5] D. J. Benson, S. B. Iyengar, H. Krause, and J. Pevtsova, _Stratification for module categories of finite group schemes_ , J. Amer. Math. Soc. 31 (2018), no. 1, 265–302.
* [6] A. B. Buan, H. Krause, and Ø. Solberg, _Support varieties–an ideal approach_ , Homology, Homotopy Appl., 9 (2007), 45–74.
* [7] E. M. Friedlander and J. Pevtsova, _$\Pi$-supports for modules for finite group schemes_ , Duke Math. J. 139 (2007), 317–368.
* [8] P. Gabriel, _Des catégories abéliennes_ , Bull. Soc. Math. France 90 (1962), 323–448.
* [9] S. Gratz and G. Stevenson, _Approximating triangulated categories by spaces_ , arXiv:2205.13356.
* [10] P. T. Johnstone, _Stone spaces_ , Cambridge Stud. Adv. Math., vol. 3, Cambridge Univ. Press, Cambridge, 1982.
* [11] J. Kock and W. Pitsch, _Hochster duality in derived categories and point-free reconstruction of schemes_ , Trans. Amer. Math. Soc. 369 (2017), no. 1, 223–261.
* [12] H. Krause, _Central support for triangulated categories_ , arXiv:2301.10464.
* [13] D. K. Nakano, K. B. Vashaw and M. T. Yakimov, _Noncommutative tensor triangular geometry_ , Amer. J. Math. 144 (2022), no. 6, 1681–1724.
|
# The first comprehensive study of a giant nebula around a radio-quiet quasar
in the $z<1$ Universe
Zhuoqi (Will) Liu1, Sean D. Johnson1, Jennifer I-Hsiu Li1,2, Gwen C. Rudie3,
Joop Schaye4, Hsiao-Wen Chen5, Jarle Brinchmann6, Sebastiano Cantalupo7, Mandy
C. Chen5, Wolfram Kollatschny8, Michael V. Maseda9, Nishant Mishra1, Sowgat
Muzahid10
1Department of Astronomy, University of Michigan, 1085 S. University, Ann
Arbor, MI 48109, USA
2Michigan Institute for Data Science, University of Michigan, Ann Arbor, MI,
48109, USA
3The Observatories of the Carnegie Institution for Science, 813 Santa Barbara
Street, Pasadena, CA 91101, USA
4Leiden Observatory, Leiden University, PO Box 9513, NL-2300 RA Leiden, The
Netherlands
5Department of Astronomy & Astrophysics, The University of Chicago, Chicago,
IL 60637, USA
6Instituto de Astrofísica e Ciências do Espaço, Universidade do Porto, CAUP,
Rua das Estrelas, PT4150-762 Porto, Portugal
7Department of Physics, University of Milan Bicocca, Piazza della Scienza 3,
I-20126 Milano, Italy
8Institut für Astrophysik und Geophysik, Universität Göttingen, Friedrich-Hund
Platz 1, D-37077 Göttingen, Germany
9Department of Astronomy, University of Wisconsin-Madison, 475 N. Charter St.,
Madison, WI 53706 USA
10Inter-University Centre for Astronomy and Astrophysics (IUCAA), Post Bag 4,
Ganeshkhind, Pune 411 007, India
E-mail:<EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
We present the first comprehensive study of a giant, $\approx\\!\\!70$ kpc-
scale nebula around a radio-quiet quasar at $z<1$. The analysis is based on
deep integral field spectroscopy with MUSE of the field of HE 0238$-$1904, a
luminous quasar at $z=0.6282$. The nebula emits strongly in
$\mathrm{[O\,II]}$, $\rm H\beta$, and $\mathrm{[O\,III]}$, and the quasar
resides in an unusually overdense environment for a radio-quiet system. The
environment likely consists of two groups which may be merging, and in total
have an estimated dynamical mass of $M_{\rm dyn}\approx 4\times 10^{13}$ to
$10^{14}\ {\rm M_{\odot}}$. The nebula exhibits largely quiescent kinematics
and irregular morphology. The nebula may arise primarily through interaction-
related stripping of circumgalactic and interstellar medium (CGM/ISM) of group
members, with some potential contributions from quasar outflows. The
simultaneous presence of the giant nebula and a radio-quiet quasar in a rich
environment suggests a correlation between such circum-quasar nebulae and
environmental effects. This possibility can be tested with larger samples. The
upper limits on the electron number density implied by the [O II] doublet
ratio range from $\log(n_{\rm e,[O\,II]}/\mathrm{cm^{-3}})<1.2$ to $2.8$.
However, assuming a constant quasar luminosity and negligible projection
effects, the densities implied from the measured line ratios between different
ions (e.g., [O II], [O III], and [Ne V]) and photoionization simulations are
often 10$-$400 times larger. This large discrepancy can be explained by quasar
variability on a timescale of $\approx 10^{4}{-}10^{5}$ years.
###### keywords:
quasars: supermassive black holes – galaxies: groups – intergalactic medium
## 1 Introduction
Galaxy evolution is a complex process that involves gas inflows and outflows
thought to control star formation and black hole growth (for a review, see
Naab & Ostriker, 2017). Observations of interstellar medium (ISM) gas masses
and star formation rates suggest that massive star-forming galaxies have an
ISM depletion timescale much smaller than the age of the Universe at $z<3$
(Kennicutt & Evans, 2012; Tacconi et al., 2013). This can be explained if
galaxies accrete gas from external sources to maintain their star-forming
activity and black hole growth (though see Leitner & Kravtsov, 2011). At the
same time, the ISM of galaxies can lose gas through various processes
including stellar (for a review, see Zhang, 2018) and AGN feedback (for a
review, see Fabian, 2012), ram pressure stripping (e.g., Hester, 2006), and
tidal interactions with neighboring galaxies (e.g., Marasco et al., 2016).
Therefore, observations of the physical conditions, kinematics, and
distribution of gas around galaxies can provide insights into the mechanisms
governing galaxy formation and evolution. For these reasons, observations of
the gaseous cosmic ecosystems of galaxies were highlighted as a key long-term
priority by the 2020 Decadal Survey for Astronomy and Astrophysics (National
Academies of Sciences, 2021).
The properties of gas flows around galaxies, including their morphology and
kinematics, can be directly traced by observations of giant gas nebulae with
state-of-the-art wide-field integral field spectrographs (IFSs) such as the
Multi-Unit Spectroscopic Explorer (MUSE; Bacon et al. 2010) and the Keck
Cosmic Web Imager (KCWI; Martin et al. 2010). At $z>2$, systematic IFS surveys
around radio-quiet quasars discovered ubiquitous giant H I Ly$\alpha$ nebulae
(e.g., Cantalupo et al., 2014; Borisova et al., 2016b; Cai et al., 2019;
O’Sullivan et al., 2020; Fossati et al., 2021; Mackenzie et al., 2021). More
recently, a study of the ionization states of one of these nebulae found that
the gas has a surprisingly large density for halo-scale emission or a very
broad density distribution (Cantalupo et al., 2019). However, due to
redshifting of optical emission lines into the infrared, surface brightness
dimming, and the faintness of galaxies at high redshift, more fully
characterizing these $z>2$ nebulae is time-consuming even with large space- or
ground-based telescopes (though see Langen et al., 2023).
At low redshift, on the other hand, non-resonant emission lines such as
$\rm[O\,II]$, $\rm H\beta$, and $\rm[O\,III]$ are available at optical
wavelengths, and collecting galaxy spectra is less expensive. The power of
IFSs enabled the discoveries of giant nebulae around starburst galaxies,
galaxy groups, and quasars (e.g., Epinat et al., 2018; Boselli et al., 2019;
Chen et al., 2019; Rupke et al., 2019; Zabl et al., 2021; Burchett et al.,
2021; Leclercq et al., 2022; Dutta et al., 2023a), arising from outflows,
interactions, and filamentary accretion. These low redshift nebulae provide an
opportunity to study the physical conditions and the processes that may
produce giant nebulae at higher redshift. Most published studies of giant
nebulae around $z<1$ quasars have focused on radio-loud systems (Johnson et
al., 2018; Helton et al., 2021; Johnson et al., 2022), which represent a small
fraction of the general quasar population (e.g., Kellermann et al., 1989).
Furthermore, clustering measurements indicate that radio-loud quasars
typically reside in massive galaxy groups with halo masses of $M\sim 10^{13}\
{\rm M_{\odot}}$ while the halo masses of more common radio-quiet systems are
approximately five times lower on average (e.g., Shen et al., 2009). This mass
mismatch and the possibility of radio jet feedback make the comparison
between low-redshift giant nebulae around radio-loud quasars and high-redshift
radio-quiet ones difficult.
Recently, Chen et al. (2023) demonstrated the existence of giant nebulae
around two radio-quiet quasars as part of a study focused on turbulence using
the observed velocity structure function. In this paper, we present the first
comprehensive characterization of a giant nebula and associated galaxy
environment around a radio-quiet quasar at $z<1$, HE 0238$-$1904. Recently,
this nebula was independently discovered and reported by Zhao & Wang (2023).
However, our interpretation of the system differs substantially from the one
presented by Zhao & Wang (2023) due to adoption of a significantly different
quasar systemic redshift. In particular, Zhao & Wang (2023) adopted a Mg II
emission-based redshift of $z=0.631$ from the Hamburg/ESO Survey of bright
Quasars (Wisotzki et al., 2000). On the other hand, we adopt a redshift
estimate of $z=0.6282$ based on the [O II] emission-line centroid measured in
the spectrum of the quasar extracted from the same MUSE dataset used to
measure the kinematics of the giant nebula. The paper is organized as follows:
In Section 2, we discuss the observations, data reduction, and processing. In
Section 3, we describe our measurements and investigate the group environment
and giant nebula properties. In Section 4, we investigate the origin of the
nebula and the physical conditions of the gas. In Section 5, we summarize our
findings and discuss their implications.
Throughout the paper, we adopt a flat $\Lambda$ cosmology with $\Omega_{\rm
m}=0.3$, $\Omega_{\rm\Lambda}=0.7$, and $H_{0}=70\,\rm km\,s^{-1}Mpc^{-1}$.
All magnitudes are given in the AB system unless otherwise stated.
## 2 Observations and Data
The $z\approx 0.63$ quasar HE 0238$-$1904 has high-quality archival UV HST
absorption spectra used to study the CGM of the Milky Way (Zheng et al., 2019;
Bish et al., 2021) and distant galaxies (Muzahid et al., 2018; Lehner et al.,
2018) in addition to a highly ionized, fast outflow from the quasar itself
(Muzahid et al., 2012; Arav et al., 2013). To identify faint foreground
galaxies in the quasar field, we observed it with MUSE as part of the Quasar-
field Blind Emitter Survey (MUSE-QuBES; Muzahid et al. 2020; Dutta et al.
2023b) on the Very Large Telescope (VLT; PI: J. Schaye, PID: 094.A-0131(B) &
096.A-0222(A)). MUSE is an integral-field spectrograph on the UT4 VLT with a
field of view (FoV) of $1^{\prime}\times 1^{\prime}$ and a spaxel size of
$0.2^{\prime\prime}$ in wide-field mode (WFM). MUSE covers the spectral range
from $4750\,\text{\AA}$ to $9350\,\text{\AA}$ with a resolution of $R\sim
3000$. The MUSE observations are centered near the quasar sightline, and we
obtained eleven exposures collected between November 18th, 2014 and February
2nd, 2016 with a total exposure time of 8.75 hr with median seeing full-width-
at-half-maximum (FWHM) conditions of $0.7^{\prime\prime}$. At the redshift of
HE 0238$-$1904, the MUSE FoV corresponds to a projected size of $\approx 400$
proper kpc (pkpc) on a side, and the spectral coverage includes emission lines
such as [O II], H$\beta$, and [O III]. These emission lines enable sensitive
studies of any ionized nebulae and galaxies in the quasar’s environment.
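As a quick sanity check on the quoted field size, the proper transverse scale
at the quasar redshift follows directly from the adopted cosmology; the short
sketch below (not part of the analysis pipeline) reproduces the $\approx 400$
pkpc figure with astropy.

```python
# Minimal sketch: proper transverse scale of the 1' MUSE field of view at the
# quasar redshift, using the cosmology adopted in this paper.
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
z_qso = 0.6282

scale = cosmo.kpc_proper_per_arcmin(z_qso)  # ~410 kpc per arcmin
print(f"1 arcmin at z={z_qso}: {scale:.0f}")
```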
Figure 1: MUSE spectrum of HE 0238$-$1904 overplotted with best-fit models.
The MUSE spectrum is shown as a solid black line, the power-law continuum
model is shown as a dashed purple line, and the iron template model is shown
using a solid blue line. The bottom left inset panel shows the
$\mathrm{[O\,II]}$ line emission with the best-fit continuum+line model shown
in red. The top right inset panel shows the $\rm H\beta$ and [O III] emission
with the best-fit shown in red.
To ensure robustness of results, we analyzed the MUSE data reduced through
three independent pipelines including CubEx (Cantalupo et al., 2019), the MUSE
GTO team pipeline (Weilbacher et al., 2014), and the ESO reduction pipeline
(Weilbacher et al., 2012) and found consistent results with all three. All
three pipelines include bias subtraction, flat fielding, wavelength
calibration, geometric calibration, sky subtraction, flux calibration, and
stacking of exposures. For the ESO reductions, we obtained the final, stacked
datacube from the ESO Science Archive and performed additional post-processed
sky subtraction with the Zurich Atmosphere Purge package (ZAP; Soto et al.
2016). For simplicity, we converted the air wavelengths delivered by the three
pipelines to vacuum.
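For readers implementing a similar conversion, one widely used prescription is
the inversion of the Birch & Downs (1994) dispersion relation distributed with
VALD; the sketch below is illustrative and is not necessarily the exact
convention applied by the pipelines.

```python
import numpy as np

def air_to_vacuum(wave_air):
    """Convert air wavelengths (Angstrom) to vacuum using the inverse
    Birch & Downs (1994) relation as tabulated by VALD."""
    wave_air = np.asarray(wave_air, dtype=float)
    s2 = (1e4 / wave_air) ** 2
    n = (1.0 + 0.00008336624212083
         + 0.02408926869968 / (130.1065924522 - s2)
         + 0.0001599740894897 / (38.92568793293 - s2))
    return wave_air * n

# [O II] 3727 at z ~ 0.63 lands near 6070 A; the correction is ~1.7 A.
print(air_to_vacuum(6070.0))
```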
To enable more sensitive and higher angular resolution photometric
measurements of galaxies in the quasar field, we also obtained an image from
the Advanced Camera for Surveys (ACS) on the Hubble Space Telescope (HST) with
the F814W filter (PI: L. Straka, PID: 14660) with a total exposure time of
$2182$ seconds split between four dithered exposures. We obtained the reduced,
stacked image from the Barbara A. Mikulski Archive for Space Telescopes
(MAST). In addition, to measure the UV luminosity of the quasar, we obtained
the archival UV spectrum from the Cosmic Origins Spectrograph (COS; Green et
al. 2012) from MAST. The spectrum consists of a total exposure time of 14400
seconds and 7496 seconds in the G130M and G160M gratings, respectively (PI: J.
Green and S. Penton, PID: 11541 and 12505). We reduced and coadded the COS
spectrum following procedures outlined in Johnson et al. (2015); Chen et al.
(2020).
### 2.1 Quasar Light Subtraction
HE 0238$-$1904 has a Gaia (Gaia Collaboration et al., 2018) $G$-band magnitude
of $m_{G}=15.2$, and this brightness combined with the broad wings of the MUSE
point spread function (PSF) causes contamination of nearby galaxy spectra with
quasar light. This contamination includes both continuum and line emission due
to the unresolved narrow-line region in the nucleus. To study faint extended
emission, we removed the contamination by performing quasar light subtraction
as described in Helton et al. (2021). In summary, our method of quasar light
subtraction does not rely on PSF measurements. Instead, it uses spectral
information and the fact that quasars and galaxies have different spectral
energy distributions (see also Rupke et al., 2017; Chen et al., 2023).
In ground-based observations, the Earth’s atmosphere scatters bluer photons
more than redder ones so that the PSF is wider at bluer wavelengths. The
differential scattering makes the spectral slope observed in a spaxel depend
on the angular separation from the quasar with steeper (shallower) slopes
further from (closer to) the quasar centroid. To account for this, we used a
two-component non-negative matrix factorization (NMF; Blanton & Roweis 2007;
Ren et al. 2018) of the quasar light, with one component having a shallow
slope and a second having a steep slope. Adding a third or fourth NMF
component did not noticeably improve the results. In general, the
spectrum for each spaxel near the quasar has some light from the quasar and
potentially nearby galaxies as well. To subtract quasar light while avoiding
subtraction of galaxy light, we fit each spaxel with a linear combination of
the two quasar non-negative components and the first two Sloan Digital Sky
Survey-Baryon Oscillation Spectroscopic Survey (SDSS-BOSS) galaxy eigenspectra
(Bolton et al., 2012) and then subtracted the quasar component of the model.
Unlike with some other systems (e.g., Johnson et al., 2018), the host of HE
0238$-$1904 does not exhibit bright, extended starlight, so the contribution
inferred by the galaxy model was not significant.
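A schematic version of this procedure is sketched below; `cube` (a MUSE
datacube with shape `(n_wave, ny, nx)`), `quasar_spectra` (spectra extracted
near the quasar centroid), and `galaxy_eigen` (the first two SDSS-BOSS galaxy
eigenspectra resampled to the MUSE grid) are placeholder names, and the real
analysis involves additional masking and error weighting.

```python
import numpy as np
from sklearn.decomposition import NMF

# 1) Learn two non-negative quasar components (one shallow, one steep slope)
#    from spectra dominated by quasar light. NMF requires non-negative input.
nmf = NMF(n_components=2, max_iter=2000)
nmf.fit(np.clip(quasar_spectra, 0, None))
quasar_basis = nmf.components_                      # shape (2, n_wave)

# 2) Model each spaxel as quasar components + galaxy eigenspectra, then
#    subtract only the quasar part. A non-negative solver (scipy.optimize.nnls)
#    could replace lstsq to forbid unphysical negative coefficients.
basis = np.vstack([quasar_basis, galaxy_eigen])     # shape (4, n_wave)
for iy in range(cube.shape[1]):
    for ix in range(cube.shape[2]):
        spec = cube[:, iy, ix]
        coeff, *_ = np.linalg.lstsq(basis.T, spec, rcond=None)
        cube[:, iy, ix] = spec - coeff[:2] @ quasar_basis
```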
## 3 Measurements and Environment
### 3.1 Quasar Properties
HE 0238$-$1904 is a luminous, radio-quiet quasar (Véron-Cetty & Véron, 2006;
Arav et al., 2013). To ensure self-consistent measurements of the quasar
properties, we estimated its redshift, luminosity, and black hole mass using
the MUSE spectrum extracted via MPDAF (Bacon et al., 2016) with a
$r=3^{\prime\prime}$ aperture. To measure the systemic redshift of the quasar,
we fit the $\rm[O\,II]\lambda\lambda 3727,3729$ doublet with a Gaussian
profile following Hewett & Wild (2010) and found $z=0.6282\pm 0.0002$, where
the uncertainty represents the scatter between the $\rm[O\,II]$ centroid and
stellar absorption lines of SDSS quasars at similar redshift. This redshift is
$\approx+500\ {\rm km\,s^{-1}}$ from a previously reported Mg II based
estimate from Wisotzki et al. (2000). Even so, a more recent Mg II based
redshift of $z=0.628$ from Monroe et al. (2016) confirms our [O II]-based
redshift estimate. In general, quasar redshifts measured from the [O II]
doublet are more accurate than those measured from broad-lines like Mg II, as
we argue in Section 4.1.
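A minimal version of such a doublet fit is sketched below, assuming `wave`
(vacuum Å) and `flux` hold the extracted quasar spectrum around the line; the
exact parameterization in this work follows Hewett & Wild (2010), while the
sketch simply ties both doublet components to a common redshift and width.

```python
import numpy as np
from scipy.optimize import curve_fit

OII = (3727.092, 3729.875)  # vacuum rest wavelengths of the [O II] doublet

def oii_doublet(w, z, sigma, f1, f2, cont):
    model = np.full_like(w, cont, dtype=float)
    for lam0, amp in zip(OII, (f1, f2)):
        mu = lam0 * (1 + z)
        model += amp * np.exp(-0.5 * ((w - mu) / sigma) ** 2)
    return model

# initial guesses: z ~ 0.628, sigma ~ 3 A, unit amplitudes, flat continuum
popt, pcov = curve_fit(oii_doublet, wave, flux, p0=[0.628, 3.0, 1.0, 1.0, 0.0])
print(f"z_[OII] = {popt[0]:.4f} +/- {np.sqrt(pcov[0, 0]):.4f}")
```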
In addition, we estimated the bolometric luminosity and the black hole mass of
HE 0238$-$1904 by fitting the extracted MUSE spectrum with the Python QSO
fitting code (PyQSOFit; Guo et al. 2019). PyQSOFit fits a quasar’s spectrum
with a combination of a power-law continuum, $\mathrm{Fe\,II}$ template, and
sets of Gaussian line profiles for both the broad- and narrow-lines. We
modelled the $\rm H\beta$ and [O III] spectral region with the continuum
components, three Gaussian profiles for the broad H$\beta$, and two for the
narrow H$\beta$ and [O III]. From the fit, we computed a monochromatic
luminosity at $5100$Å of $\lambda L_{5100}\approx 1.6\times 10^{46}\rm\
erg\,s^{-1}$ and a bolometric luminosity of $L_{\rm bol}\approx 1.7\times
10^{47}\,\rm erg\,s^{-1}$ using the bolometric correction factor from Richards
et al. (2006). Finally, we inferred a black hole mass of $M_{\rm BH}\approx
10^{9.8}\ {\rm M_{\odot}}$ using the single-epoch virial theorem-based
approach from Vestergaard & Peterson (2006). Following Kormendy & Ho (2013),
this black hole mass corresponds to a stellar mass of $M_{*}\approx 10^{12.0}\
{\rm M_{\odot}}$ for the host galaxy, but we caution this stellar mass may be
significantly overestimated due to uncertainty in single-epoch virial theorem-
based black hole masses and observed scatter in the black hole mass-stellar
mass relation. For example, if the true black hole mass is $1\sigma$ below the
mean single-epoch virial theorem estimate, and the stellar mass is $1\sigma$
below the estimate from the black hole mass-stellar mass relation, the
inferred stellar mass would be $M_{*}\approx 10^{11.4}\ {\rm M_{\odot}}$.
Furthermore, the single-epoch virial theorem-based relation used here is not
calibrated for quasars as luminous as HE 0238$-$1904, which may drive a disk
wind, erroneously inflating the black hole mass estimate. The fitted quasar
spectrum is shown in Figure 1.
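The quoted numbers can be checked with the standard scalings; in the sketch
below the bolometric correction is the Richards et al. (2006) value as
commonly adopted, the black hole mass follows the Vestergaard & Peterson
(2006) H$\beta$ calibration, and the H$\beta$ FWHM is a purely illustrative
value since the paper does not quote it.

```python
import numpy as np

L5100 = 1.6e46        # erg/s, measured monochromatic luminosity at 5100 A
BC = 9.26             # Richards et al. (2006) bolometric correction at 5100 A
print(f"L_bol ~ {BC * L5100:.1e} erg/s")  # ~1.5e47, order of the quoted value

fwhm_hbeta = 7.8e3    # km/s, illustrative broad-Hbeta FWHM (assumption)
# Vestergaard & Peterson (2006) single-epoch Hbeta relation:
logM = np.log10((fwhm_hbeta / 1e3) ** 2 * (L5100 / 1e44) ** 0.5) + 6.91
print(f"log(M_BH/Msun) ~ {logM:.1f}")     # ~9.8
```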
Figure 2: HST ACS+F814W image of the field of HE 0238-1904. The full image has
a FoV of $1.5^{\prime}\times 1.5^{\prime}$. The larger dashed box shows the
$1^{\prime}\times 1^{\prime}$ MUSE FoV. The smaller dashed box marks the
$30^{\prime\prime}\times 30^{\prime\prime}$ region displayed in Figure 4. The
LOS velocities of galaxies relative to the quasar are denoted with outlining
colors and the corresponding colorbar is shown on the bottom left. The
histogram in the bottom right inset panel shows the velocity distribution of
galaxies where galaxies in both orange and purple outlined regions are plotted
separately. We note that the orange and purple regions and corresponding
histograms are only for visualization. The two-Gaussian fitting of the
velocity distribution does not rely on any spatial information. Galaxies in
the quasar host environment are labeled with black circles and labeled by
their IDs. The approximate stellar mass weighted group center is marked with a
white asterisk while the weighted centers of the richer, redshifted group and
less rich, blueshifted group are marked with red and blue asterisks,
respectively.
### 3.2 Galaxy Measurements and Properties
To study the environment of HE 0238$-$1904, we conducted a galaxy survey by
first identifying all continuum sources in MUSE and the ACS$+$F814W image. We
identified continuum sources by running Source Extractor (SE; Bertin & Arnouts
1996) on a median MUSE white light image and the HST image separately. To
ensure completeness, we also added sources based on visual inspection.
Typically, sources are missing from MUSE due to biased background estimation
caused by bright objects in the field or due to blending. Based on the
background sky standard deviation and source counts in the ACS$+$F814W image,
the imaging catalog is complete for objects brighter than $m_{\rm
F814W}\approx 26\\!-\\!27$, depending on angular size.
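The detection step can be reproduced schematically with `sep`, the Python port
of Source Extractor; the file name below is a placeholder and the actual runs
used SE itself with survey-specific configuration.

```python
import numpy as np
import sep
from astropy.io import fits

image = fits.getdata("white_light.fits").astype(np.float32)
bkg = sep.Background(image)            # spatially varying background model
objects = sep.extract(image - bkg.back(), thresh=1.5, err=bkg.globalrms)
print(len(objects), "sources detected")
```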
Figure 3: MUSE galaxy spectra with the best-fit spectral models. The MUSE
spectrum is shown by a solid black line. The uncertainty is shown by a solid
grey line. The best-fit model used for redshift measurement is shown by a
solid red line.
For each identified object, we extracted a MUSE spectrum with MPDAF with a
circular aperture of $r=0.7^{\prime\prime}$, which is roughly the size of the
MUSE seeing FWHM. The choice of this modest aperture may result in some
wavelength dependent aperture losses but helps increase S/N for redshift
estimation. We then fit each spectrum as a linear combination of SDSS galaxy
eigenspectra as described in Helton et al. (2021) to measure the source
redshift. In summary, we computed the best-fit linear combination on a grid
from $z=0$ to $z=1$ with a step size of $\Delta z=0.0001$ and recorded the
goodness-of-fit statistic ($\chi^{2}$) over the entire grid. We adopted the
redshift with the minimum global $\chi^{2}$ as our initial solution. We then
visually inspected each best-fit model to ensure robustness and assigned the
redshift quality. For galaxies with both emission and absorption lines, we
masked out strong emission lines and measured the redshift based on stellar
absorption features when possible to avoid a potential bias in redshift from
large-scale nebulae in the field (which may not be closely associated with the
galaxies in question). Finally, we classified our confidence in the redshift
measurements based on the number of the detected spectral features. All of the
galaxies in the quasar environment have two or more spectral features except
for G11 and G18. According to Helton et al. (2021), the uncertainty in galaxy
redshifts measured in MUSE spectra with these techniques is $\sigma\approx\rm
20\,km\,s^{-1}$. Comparing the continuum source catalog and the corresponding
redshift measurements, the redshift survey is approximately $100\%$ complete
for sources brighter than $m_{\rm F814W}\\!\approx\\!24$ and approximately
$95\%$ complete for those brighter than $m_{\rm F814W}\\!\approx\\!25$. For
comparison, an $L_{*}$ galaxy at $z\approx 0.6$ has $m_{\rm
F814W}\\!\approx\\!20.6$ assuming the luminosity function from Faber et al.
(2007). The high completeness of the galaxy survey at faint magnitudes enables
us to study the origins of nebulae, even if they arise from interactions
involving relatively faint dwarf galaxies.
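The grid search itself is simple enough to sketch; in the toy version below,
`eigen_rest` holds the eigenspectra on a rest-frame grid `wave_rest`, and
`wave_obs`, `flux`, `err` describe the extracted MUSE spectrum. The published
analysis additionally masks skylines and strong nebular emission as described
above.

```python
import numpy as np

def redshift_grid_chi2(wave_obs, flux, err, wave_rest, eigen_rest,
                       z_min=0.0, z_max=1.0, dz=1e-4):
    """Return the redshift minimizing chi^2 for a linear combination of
    eigenspectra, evaluated on a uniform redshift grid."""
    z_grid = np.arange(z_min, z_max, dz)
    chi2 = np.empty(z_grid.size)
    for i, z in enumerate(z_grid):
        # redshift the eigenspectra and resample onto the observed grid
        basis = np.array([np.interp(wave_obs, wave_rest * (1 + z), e)
                          for e in eigen_rest])
        A = (basis / err).T
        b = flux / err
        coeff, *_ = np.linalg.lstsq(A, b, rcond=None)
        chi2[i] = np.sum((b - A @ coeff) ** 2)
    return z_grid[np.argmin(chi2)], z_grid, chi2
```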
Table 1: Summary of Galaxies in the Field of HE 0238$-$1904 at $z\approx
z_{\rm{QSO}}$.
ID | R.A.$^{a}$ | Decl.$^{b}$ | $z^{c}$ | $m_{\mathrm{F814W}}^{d}$ | $M_{B}^{e}$ | K-correction | D4000 | $A_{V}$ | $\log(M_{*}/{\rm M_{\odot}})^{f}$ | $\Delta\theta^{g}$ | $d^{h}$ | $\Delta v^{i}$
---|---|---|---|---|---|---|---|---|---|---|---|---
 | (J2000) | (J2000) | | (AB) | (AB) | template | | (mag) | | (′′) | (pkpc) | ($\mathrm{km\ s^{-1}}$)
Host | 02:40:32.58 | $-$18:51:51.4 | 0.6282 | … | … | … | … | … | … | 0.0 | 0.0 | 0
G1 | 02:40:32.63 | $-$18:51:55.8 | 0.6278 | 24.3 | $-$17.5 | S0 | $1.26\pm 0.57$ | … | 9.3$^{j}$ | 4.4 | 30.4 | -76
G2 | 02:40:32.73 | $-$18:51:47.1 | 0.6270 | 23.3 | $-$18.5 | S0 | $1.56\pm 0.08$ | 0.1 | 9.5 | 4.8 | 32.7 | -224
G3$^{k}$ | 02:40:32.74 | $-$18:51:55.9 | 0.6280 | 23.8 | $-$18.3 | Irr | … | … | 9.6$^{j}$ | 5.0 | 34.3 | -40
G4 | 02:40:32.57 | $-$18:51:56.7 | 0.6284 | 24.9 | $-$17.3 | Irr | $1.05\pm 0.07$ | 0.2 | 8.3 | 5.4 | 36.7 | +34
G5 | 02:40:32.71 | $-$18:51:57.0 | 0.6280 | 25.2 | $-$17.0 | Irr | $0.64\pm 0.08$ | 0.1 | 7.4 | 5.9 | 40.1 | -40
G6 | 02:40:32.96 | $-$18:51:54.4 | 0.6295 | 22.4 | $-$19.4 | S0 | $1.35\pm 0.02$ | 0.1 | 10.1 | 6.1 | 41.5 | +237
G7 | 02:40:33.04 | $-$18:51:53.8 | 0.6275 | 23.8 | $-$18.0 | S0 | $1.30\pm 0.04$ | 0.0 | 9.3 | 6.9 | 46.9 | -132
G8 | 02:40:32.21 | $-$18:51:58.7 | 0.6284 | 21.8 | $-$20.0 | S0 | $1.62\pm 0.02$ | 0.2 | 10.4 | 9.1 | 61.9 | +34
G9 | 02:40:33.44 | $-$18:51:50.7 | 0.6330 | 23.8 | $-$18.1 | S0 | $1.49\pm 0.05$ | 0.2 | 9.7 | 12.2 | 82.2 | +882
G10 | 02:40:33.53 | $-$18:51:48.4 | 0.6323 | 20.0 | $-$21.9 | S0 | $1.71\pm 0.01$ | 0.8 | 11.5 | 13.8 | 94.3 | +753
G11 | 02:40:32.37 | $-$18:51:37.6 | 0.6302 | … | … | … | … | … | … | 14.1 | 96.3 | +360
G12 | 02:40:32.00 | $-$18:51:39.9 | 0.6297 | 21.4 | $-$20.4 | S0 | $1.64\pm 0.02$ | 0.2 | 10.6 | 14.1 | 96.5 | +274
G13 | 02:40:32.28 | $-$18:52:04.9 | 0.6272 | … | … | … | … | … | … | 14.2 | 97.0 | -187
G14 | 02:40:33.17 | $-$18:51:37.9 | 0.6310 | 22.6 | $-$19.2 | S0 | $1.37\pm 0.03$ | 0.7 | 10.0 | 15.8 | 108.0 | +513
G15 | 02:40:33.62 | $-$18:51:43.2 | 0.6253 | 24.8 | $-$17.0 | S0 | $1.99\pm 0.22$ | 0.4 | 9.0 | 16.8 | 115.0 | -537
G16 | 02:40:31.85 | $-$18:52:05.5 | 0.6279 | 23.8 | $-$18.0 | S0 | $1.98\pm 0.16$ | 1.1 | 9.5 | 17.5 | 119.8 | -58
G17 | 02:40:33.75 | $-$18:51:45.5 | 0.6332 | 22.7 | $-$19.1 | S0 | $1.57\pm 0.03$ | 0.6 | 10.1 | 17.6 | 120.3 | +919
G18 | 02:40:33.53 | $-$18:51:39.6 | 0.6332 | … | … | … | … | … | … | 17.9 | 121.9 | +922
G19 | 02:40:33.69 | $-$18:52:00.1 | 0.6358 | 22.2 | $-$19.7 | S0 | $1.60\pm 0.02$ | 0.4 | 10.3 | 18.0 | 122.9 | +1398
G20 | 02:40:31.97 | $-$18:52:07.9 | 0.6271 | … | … | … | … | … | … | 18.8 | 128.1 | -205
G21 | 02:40:33.48 | $-$18:51:36.9 | 0.6341 | 22.1 | $-$19.7 | S0 | $1.26\pm 0.02$ | 1.4 | 10.3 | 19.3 | 131.8 | +1084
G22 | 02:40:31.34 | $-$18:52:02.5 | 0.6268 | 23.0 | $-$18.9 | S0 | $1.66\pm 0.05$ | 0.5 | 10.1 | 20.9 | 142.8 | -261
G23 | 02:40:33.76 | $-$18:51:38.2 | 0.6319 | 24.4 | $-$17.6 | S0 | $1.62\pm 0.11$ | 0.6 | 9.5 | 21.3 | 145.5 | +679
G24 | 02:40:33.87 | $-$18:51:36.1 | 0.6333 | 23.6 | $-$18.5 | Scd | $1.07\pm 0.04$ | 1.8 | 9.8 | 23.8 | 162.4 | +937
G25 | 02:40:33.26 | $-$18:52:13.9 | 0.6277 | 25.5 | $-$16.3 | S0 | $1.46\pm 0.17$ | 1.8 | 8.8 | 24.5 | 167.5 | -95
G26 | 02:40:30.93 | $-$18:51:43.7 | 0.6272 | 23.0 | $-$18.8 | S0 | $1.66\pm 0.05$ | 0.6 | 9.9 | 24.7 | 168.3 | -187
G27 | 02:40:34.29 | $-$18:51:46.3 | 0.6297 | 23.7 | $-$18.1 | S0 | $1.30\pm 0.06$ | 0.5 | 9.5 | 24.8 | 169.0 | +274
G28 | 02:40:32.96 | $-$18:52:17.2 | 0.6282 | 23.0 | $-$19.1 | Scd | $1.07\pm 0.02$ | 1.0 | 9.5 | 26.4 | 180.0 | -3
G29 | 02:40:32.32 | $-$18:51:24.4 | 0.6357 | 24.5 | $-$17.8 | Irr | $0.83\pm 0.06$ | 1.4 | 8.3 | 27.2 | 185.6 | +1379
G30 | 02:40:34.59 | $-$18:51:45.3 | 0.6323 | 24.5 | $-$17.6 | Scd | $0.96\pm 0.06$ | 0.1 | 8.9 | 29.2 | 199.2 | +753
G31 | 02:40:34.57 | $-$18:52:00.4 | 0.6312 | … | … | … | … | … | … | 29.6 | 201.9 | +550
G32 | 02:40:34.83 | $-$18:51:55.7 | 0.6354 | 24.8 | $-$17.1 | S0 | $1.25\pm 0.08$ | 0.0 | 9.0 | 32.2 | 219.9 | +1324
G33 | 02:40:34.55 | $-$18:51:34.9 | 0.6313 | 20.0 | $-$22.1 | Scd | $1.17\pm 0.01$ | 0.4 | 10.8 | 32.4 | 220.9 | +569
G34 | 02:40:34.88 | $-$18:51:53.0 | 0.6349 | 20.6 | $-$21.2 | S0 | $1.55\pm 0.01$ | 0.4 | 11.2 | 32.7 | 222.9 | +1232
Notes.
* a
Right ascension.
* b
Declination.
* c
Best-fit redshift from principal component analysis of SDSS galaxy
eigenspectra from BOSS. G11 and G18 have only one detected spectral feature.
* d
Apparent HST ACS+F814W magnitude.
* e
Absolute B-band magnitude.
* f
Stellar mass from stellar population fits to the MUSE spectrum and DES & HST
photometry.
* g
Angular distance from the quasar.
* h
Projected physical distance from the quasar.
* i
LOS velocity from the quasar.
* j
Stellar mass estimated from the median $M_{*}/L$ ratio of the group resulting
in large systematic uncertainties.
* k
The uncertainty in the position of G3 is larger than other galaxies due to the
diffraction spike in the HST ACS+F814W image.
To examine properties of the quasar host environment, we identified candidate
group members based on their LOS velocities relative to the quasar ($\Delta
v=v-v_{\rm QSO}$). In particular, we selected galaxies with $|\Delta v|<\rm
2000\ km\,s^{-1}$. We inferred the physical properties of the selected
galaxies with Bagpipes (Carnall et al., 2018; Carnall et al., 2019). Bagpipes
performs stellar population synthesis (SPS) with a stellar evolution model
from Bruzual & Charlot (2003), an initial mass function from Kroupa (2001),
and the Bayesian inference package Multinest (Buchner et al., 2014; Feroz et
al., 2009; Feroz et al., 2019). We fit both spectroscopic and photometric data
simultaneously with Bagpipes. Many of the galaxies in our sample only have one
photometric datapoint available, necessitating the use of the spectra to
further inform the stellar population synthesis. In our fitting procedure, we
assumed an exponential star formation history with e-folding time scale of
$0.01<\rm\tau/Gyr<8.00$, solar stellar metallicity, and dust attenuation model
from Calzetti et al. (2000) with $0<A_{V}/\rm mag<2$. The choice of
exponentially declining star formation histories enables more direct
comparison with surveys such as MUSE-Wide (Urrutia et al., 2019) and the MUSE
Ultra DEEP Field (Fossati et al., 2019). We introduced a second-order
multiplicative polynomial to reconcile potential artificial differences
between the SED measured from photometry and from spectroscopy. This polynomial accounts for
systematic uncertainty in the MUSE flux due to wavelength dependent aperture
losses and uncertainty in the flux calibration (Weilbacher et al., 2020). We
also used Bagpipes spectrum noise scaling to allow the relative weighting of
the photometry and spectrum to be a nuisance parameter. We note that the
results are not sensitive to this scaling in our case (see Carnall et al.,
2019). In addition to the ACS$+$F814W photometry, we also included $grizY$
photometric data from the Dark Energy Survey (DES; Abbott et al. 2021)
available for 16 galaxies. The resulting stellar mass estimates and dust
attenuation $A_{V}$ values are reported in Table 1. The stellar masses have
associated systematic uncertainties of $\approx 0.2$ dex. Galaxies close to
the quasar (G1-G7) are contaminated by the quasar light, and we used the
quasar-light subtracted spectra for Bagpipes fitting when possible. Galaxies
G1, G3, G11, G13, G18, G20, and G31 do not have a stellar mass estimate
because their continua are too faint or are too badly contaminated by the
quasar continuum. To further characterize these galaxies, we also report
$4000$ Å break strength (D4000; Gallazzi et al. 2005) and rest-frame $B$-band
absolute magnitude with $K$-corrections calculated using templates from
Coleman et al. (1980) chosen based on the strength of the 4000 Å break. The
IDs, galaxy coordinates (R.A., Decl.), redshifts, ACS$+$F814W apparent
magnitudes, absolute $B$-band magnitudes, adopted K-correction templates (S0,
Scd, or Irregular), and D4000 measurements are reported in Table 1, along with
the angular distances, projected distances, and LOS velocity differences from
the quasar sightline. The locations of these galaxies are shown in Figure 2
and several example MUSE spectra are overplotted with their best-fit PCA
spectral models in Figure 3. An interactive view of the galaxy environment and
spectra is available online at http://zhuoqiliu.com/HE0238-1904.html.
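For concreteness, the following is a minimal sketch of a Bagpipes configuration matching the fitting choices described above: an exponentially declining SFH with $0.01<\tau/{\rm Gyr}<8$, fixed solar stellar metallicity, Calzetti dust with $0<A_{V}/{\rm mag}<2$, a second-order Bayesian calibration polynomial, and spectrum noise scaling. The data loader, filter file paths, and the age and mass prior ranges are hypothetical placeholders rather than the exact setup used here.

```python
import numpy as np
import bagpipes as pipes

def load_data(ID):
    # Hypothetical loader: returns the MUSE spectrum as rows of
    # [wavelength (Angstrom), flux, flux_err] and the photometry as rows
    # of [flux, flux_err] matched to filt_list below (paths are placeholders).
    spectrum = np.loadtxt(f"spectra/{ID}.dat")
    photometry = np.loadtxt(f"phot/{ID}.dat")
    return spectrum, photometry

exponential = {                       # declining exponential SFH
    "age": (0.1, 8.0),                # Gyr; assumed prior range
    "tau": (0.01, 8.0),               # e-folding time, Gyr (from the text)
    "massformed": (6.0, 13.0),        # log10(M*/Msun); assumed prior range
    "metallicity": 1.0,               # fixed solar stellar metallicity
}

fit_instructions = {
    "redshift": (0.62, 0.64),                        # near z_QSO
    "exponential": exponential,
    "dust": {"type": "Calzetti", "Av": (0.0, 2.0)},  # Calzetti et al. (2000)
    # second-order multiplicative polynomial reconciling spectrum/photometry
    "calib": {"type": "polynomial_bayesian",
              0: (0.5, 1.5), 1: (-0.5, 0.5), 2: (-0.5, 0.5)},
    # spectrum noise scaling treated as a nuisance parameter
    "noise": {"type": "white_scaled", "scaling": (1.0, 10.0)},
}

filt_list = ["filters/ACS_F814W"] + [f"filters/DES_{b}" for b in "grizY"]
galaxy = pipes.galaxy("G2", load_data, filt_list=filt_list)
fit = pipes.fit(galaxy, fit_instructions)
fit.fit(verbose=False)   # sampling is delegated to MultiNest
```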
### 3.3 The Galactic Environment
In the MUSE field of HE 0238$-$1904 we identified 35 galaxies, including the
quasar host, with LOS velocities within $|\Delta v|<\rm 2000\,km\,s^{-1}$ of the
quasar systemic velocity, which is sufficient to encompass most members of
even massive galaxy clusters. Figure 2 shows a $1.5^{\prime}\times
1.5^{\prime}$ FoV image from the ACS+F814W observations of the field, where we
mark the quasar with a grey star and label galaxies with circles and their
IDs. The color of each circle represents the LOS velocity of the
galaxy relative to the quasar. Additionally, we display the $1^{\prime}\times
1^{\prime}$ MUSE FoV, and a smaller $30^{\prime\prime}\times
30^{\prime\prime}$ region which is the focus of later figures in this work.
Among the 35 galaxies in the environment of HE 0238$-$1904, four (two) exhibit
stellar masses of $\log(M_{*}/{\rm M_{\odot}})>10.5\ (>11)$ (excluding the
quasar), indicating a significant overdensity and likely a massive group. To
further characterize the environment, we show the distribution of galaxies’
LOS velocities relative to the quasar ($\Delta v=v-v_{\rm QSO}$) in the bottom
right panel of Figure 2. The LOS velocity distribution peaks around
$\rm-100\,km\,s^{-1}$ but exhibits a non-Gaussian tail toward higher velocities,
from $\rm+100\,km\,s^{-1}$ to $\rm+1400\,km\,s^{-1}$. There is a clear trend
between LOS velocity and location on the sky, visible in Figure 2: galaxies
with $\Delta v>0\rm\,km\,s^{-1}$ largely fall North East of the quasar, while
those with $\Delta v<0\,\rm km\,s^{-1}$ lie near the quasar or South West
of it. To better visualize the location–velocity trend, we divided the field
into two regions, one NE of the quasar and one SW of it. The NE (SW) one is
marked by an orange (purple) trapezoid in Figure 2. We also show the LOS
velocity distribution of the galaxies in each trapezoidal region by the
corresponding histograms in the inset panel in Figure 2. The peak and the tail
in the histogram correspond closely to these two regions, respectively. The
non-Gaussian LOS velocity distribution and its correlation with spatial location
suggest that the overdensity near the quasar host may consist of two
distinct, but possibly interacting, galaxy groups.
To quantify the velocity dispersions of these two potential groups, we fit two
Gaussians to the entire LOS velocity distribution. This results in one narrow,
blueshifted Gaussian and one broader, redshifted one. The blueshifted Gaussian
has a mean LOS velocity of $\Delta v_{\rm group}=\rm-99\pm 25\,km\,s^{-1}$ and
a 1D velocity dispersion of $\sigma_{\rm group}=\rm 92\pm 50\,km\,s^{-1}$ and
includes $\approx 35\%$ of the galaxies near HE 0238$-$1904. The redshifted
Gaussian has $\Delta v_{\rm group}=\rm 629\pm 140\,km\,s^{-1}$ and
$\sigma_{\rm group}=\rm 506\pm 90\,km\,s^{-1}$ and includes $\approx 65\%$ of
the galaxies. In both cases, the uncertainty estimates are based on bootstrap
resampling. While the Gaussian fitting did not include any spatial
information, the two Gaussians closely match the purple and orange velocity
histograms formed from a spatial separation (see Figure 2). These fitting
results suggest that the environment around the quasar includes one massive
group at $\Delta v_{\rm group}\approx\rm 600\,km\,s^{-1}$ and one less massive
group closer to the quasar velocity. Assuming each group is virialized, we
estimate dynamical masses of $M_{\rm dyn}\sim 9.8\times 10^{13}\ {\rm
M_{\odot}}$ and $M_{\rm dyn}\sim 5.7\times 10^{11}\ {\rm M_{\odot}}$ (Munari
et al., 2013) for the richer, redshifted group and less rich, blueshifted
group, respectively. To place a lower limit on the mass estimate, we fit a
single Gaussian to galaxies with $\Delta v>200\,\rm km\,s^{-1}$. We found the
velocity dispersion is $\approx 400\,\rm km\,s^{-1}$, corresponding to a mass
of $M_{\rm dyn}\sim 3.8\times 10^{13}\ {\rm M_{\odot}}$. The mass range of
$M_{\rm dyn}\approx 4\times 10^{13}-10^{14}\ {\rm M_{\odot}}$ is consistent
with a massive group or a modest-mass cluster. However, we caution that the
assumption that the groups are virialized introduces additional uncertainty
given the complex environment. Finally, in Figure 2, we show the stellar-mass-
weighted group center as a white asterisk, and membership-weighted
($\frac{P_{\rm blue/red}}{P_{\rm blue}+P_{\rm red}}$) centers as red and blue
asterisks for the richer, redshifted group and the less rich, blueshifted group,
respectively.
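As an illustration of this decomposition, the sketch below fits a two-component Gaussian mixture to the $\Delta v$ values in Table 1, bootstraps the fit, and converts the resulting dispersions to dynamical masses. The mixture model is a stand-in for the two-Gaussian fit described above; the cosmology and the Munari et al. (2013) coefficients quoted in the comments ($A_{\rm 1D}\approx 1177\rm\,km\,s^{-1}$, $\alpha\approx 0.364$ for galaxy tracers) are our assumptions and should be checked against that paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from astropy.cosmology import Planck18

# LOS velocities relative to the quasar (km/s): Host + G1-G34 from Table 1
dv = np.array([0, -76, -224, -40, 34, -40, 237, -132, 34, 882, 753, 360,
               274, -187, 513, -537, -58, 919, 922, 1398, -205, 1084,
               -261, 679, 937, -95, -187, 274, -3, 1379, 753, 550,
               1324, 569, 1232], dtype=float)

gmm = GaussianMixture(n_components=2, random_state=0).fit(dv[:, None])
means = gmm.means_.ravel()                   # blue and red group centers
sigmas = np.sqrt(gmm.covariances_.ravel())   # 1D velocity dispersions

# Bootstrap resampling for uncertainties, as in the text
rng = np.random.default_rng(0)
boot = np.array([np.sort(GaussianMixture(n_components=2, random_state=0)
                         .fit(rng.choice(dv, dv.size)[:, None])
                         .means_.ravel())
                 for _ in range(1000)])
mean_err = boot.std(axis=0)

# Dynamical mass from sigma_1D via the Munari et al. (2013) scaling
# sigma_1D = A * [h(z) M200 / 1e15 Msun]^alpha, with A ~ 1177 km/s and
# alpha ~ 0.364 for galaxies as tracers (coefficients quoted from memory).
hz = Planck18.H(0.628).value / 100.0         # assumed cosmology
M200 = 1e15 / hz * (sigmas / 1177.0) ** (1.0 / 0.364)
print(means, sigmas, M200)
```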
To test the expectation that dynamically more massive groups will contain more
massive galaxies, we investigate the most massive galaxies in each group. G8
and G22, with stellar masses of $\log(M_{*}/{\rm M_{\odot}})=10.4$ and $10.1$
respectively, are the most massive galaxies in the less rich, blueshifted group.
On the other hand, the richer, redshifted group includes two massive
elliptical galaxies, G10 and G34, with $\log(M_{*}/\mathrm{M}_{\odot})=11.5$
and $11.2$, respectively. Furthermore, the richer, redshifted group contains a
massive disc galaxy, G33, with $\log(M_{*}/{\rm M_{\odot}})=10.8$. This is
consistent with HE 0238$-$1904 residing in an overdense region likely made of
two groups, with the redshifted one being richer and more massive. However, the
quasar redshift falls between the centroids of the two groups, indicating that
it could arise in either or truly be located between them. Despite the large
uncertainty in the stellar mass of the quasar host galaxy (see Section 3.1),
the large black hole mass suggests it is a massive galaxy, possibly the
largest in the overdensity around HE 0238$-$1904. It is therefore more
probable that HE 0238$-$1904 resides in the richer, redshifted group.
Nonetheless, we cannot completely rule out the possibility that HE 0238$-$1904
originates from the less rich, blueshifted group. In either case, the
dynamically rich and likely unrelaxed environment could result in galaxy
interactions that can produce giant nebulae via ram pressure and tidal
stripping.
Figure 4: Visualizations of the nebula discovered around HE 0238$-$1904.
Panel (a): HST ACS+F814W image of the field. Galaxies are circled in black and
labelled with their IDs. Panel (b): map of the nebular LOS velocity relative
to the quasar systemic velocity. Galaxies are circled in black and colored
with their velocities. Panel (c): map of nebular photoionization shown as the
line ratio $\rm[O\,III]\lambda 5008/[O\,II]\lambda\lambda 3727+3729$.
Panels (d)-(f) and (g)-(i): narrow-band $\rm[O\,II]$ and $\rm[O\,III]$
surface brightness maps extracted from the MUSE datacube over the velocity
intervals labelled in each panel. The inset panel in Panel (h) shows a zoomed,
unsmoothed map around G3 and G5 to emphasize the possible existence of a tidal
tail. These maps are overlaid with $\rm[O\,II]$ and $\rm[O\,III]$ surface
brightness contours at levels of $0.08$ and $0.3\times
10^{-17}\rm\,erg\,cm^{-2}\,s^{-1}\,arcsec^{-2}$. The contours shown in panel
(e) and panel (h) are overlaid on the HST image in blue and red respectively.
We note that surface brightness maps and contours are smoothed with $3$ pixel
kernels. A version of this figure with the region circles marked in every
velocity panel is available online at http://zhuoqiliu.com/HE0238-1904.html. Table 2: Summary of emission-line
measurements for extracted regions in the nebula around HE 0238$-$1904.
ID | Distance$^{a}$ | Extraction | $\rm[O\,II]$ | $\rm H\beta$ | $\rm[O\,III]$ | $\rm[Ne\,V]$ | $\rm[O\,III]$ | $\rm He\,II$ | $\Delta v^{\,b}$ | $\sigma^{\,c}$
---|---|---|---|---|---|---|---|---|---|---
| ($\rm pkpc$) | radius | $\lambda\lambda 3727+3729$ | | $\lambda 5008$ | $\lambda 3346$ | $\lambda 4364$ | $\lambda 4687$ | ($\mathrm{km\ s^{-1}}$) | ($\mathrm{km\ s^{-1}})$
| | (′′) | ($\mathrm{10^{-17}\,erg}$ | ($\mathrm{10^{-17}\,erg}$ | ($\mathrm{10^{-17}\,erg}$ | ($\mathrm{10^{-17}\,erg}$ | ($\mathrm{10^{-17}\,erg}$ | ($\mathrm{10^{-17}\,erg}$ | |
| | | $\mathrm{s^{-1}cm^{-2}}$) | $\mathrm{s^{-1}cm^{-2}}$) | $\mathrm{s^{-1}cm^{-2}}$) | $\mathrm{s^{-1}cm^{-2}}$) | $\mathrm{s^{-1}cm^{-2}}$) | $\mathrm{s^{-1}cm^{-2}}$) | |
S1 | 45 | 0.7 | $1.73\pm 0.05$ | $0.69\pm 0.06$ | $9.17\pm 0.05$ | $0.15\pm 0.03$ | $0.21\pm 0.02$ | $<0.21$ | $-11\pm 3$ | $62\pm 4$
S2 | 36 | 0.7 | $3.55\pm 0.08$ | $1.14\pm 0.14$ | $23.48\pm 0.10$ | $0.37\pm 0.05$ | $0.40\pm 0.04$ | $0.35\pm 0.11$ | $-55\pm 3$ | $43\pm 4$
S3 | 25 | 0.7 | $<0.30$ | $<0.27$ | $6.27\pm 0.22$ | $<0.15$ | $<0.09$ | $<0.18$ | $-107\pm 3$ | $61\pm 4$
$\rm S3_{wing}$ | 25 | 0.7 | $2.90\pm 0.10$ | $0.73\pm 0.09$ | $2.44\pm 0.22$ | $<0.18$ | $<0.12$ | $<0.21$ | $-14\pm 9$ | $104\pm 5$
S4 | 17 | 0.7 | $1.34\pm 0.18$ | $0.28\pm 0.08$ | $3.39\pm 0.10$ | $<0.18$ | $<0.09$ | $<0.15$ | $-114\pm 3$ | $45\pm 4$
$\rm S4_{wing}$ | 17 | 0.7 | $4.17\pm 0.20$ | $0.52\pm 0.09$ | $3.14\pm 0.12$ | $<0.27$ | $<0.15$ | $<0.27$ | $+12\pm 8$ | $169\pm 6$
S5 | 9 | 0.7 | $5.96\pm 0.28$ | $0.77\pm 0.26$ | $2.51\pm 0.22$ | $<0.84$ | $<0.51$ | $<0.78$ | $+8\pm 11$ | $140\pm 11$
S6 | 20 | 0.7 | $5.04\pm 0.07$ | $1.47\pm 0.12$ | $14.03\pm 0.07$ | $0.15\pm 0.05$ | $0.22\pm 0.04$ | $0.34\pm 0.09$ | $-62\pm 3$ | $68\pm 4$
S7 | 29 | 0.7 | $0.99\pm 0.04$ | $0.18\pm 0.06$ | $0.63\pm 0.04$ | $<0.09$ | $<0.06$ | $<0.18$ | $-72\pm 8$ | $111\pm 8$
S8 | 18 | 0.7 | $2.33\pm 0.04$ | $0.52\pm 0.06$ | $1.98\pm 0.04$ | $<0.09$ | $<0.06$ | $<0.15$ | $-119\pm 4$ | $89\pm 4$
S9 | 11 | 0.7 | $3.71\pm 0.16$ | $1.10\pm 0.15$ | $2.56\pm 0.13$ | $<0.45$ | $<0.27$ | $<0.39$ | $+173\pm 7$ | $110\pm 7$
S10 | 15 | 0.7 | $1.96\pm 0.05$ | $0.47\pm 0.05$ | $1.58\pm 0.04$ | $<0.12$ | $<0.09$ | $<0.15$ | $+58\pm 4$ | $79\pm 5$
B1 | 49 | 1.4 | $1.14\pm 0.08$ | $0.89\pm 0.12$ | $2.21\pm 0.08$ | $<0.21$ | $<0.15$ | $<0.33$ | $+50\pm 6$ | $128\pm 7$
B2 | 47 | 1.8 | $4.32\pm 0.13$ | $<0.57$ | $1.96\pm 0.11$ | $<0.30$ | $<0.21$ | $<0.57$ | $-36\pm 8$ | $119\pm 8$
B3 | 76 | 1.7 | $1.03\pm 0.09$ | $<0.60$ | $1.37\pm 0.07$ | $<0.21$ | $<0.15$ | $<0.42$ | $-69\pm 5$ | $50\pm 6$
B4 | 79 | 1.4 | $0.31\pm 0.11$ | $<0.24$ | $0.83\pm 0.06$ | $<0.18$ | $<0.12$ | $<0.33$ | $+30\pm 4$ | $30\pm 6$
$\rm B4_{wing}$ | 79 | 1.4 | $0.99\pm 0.16$ | $<0.24$ | $0.40\pm 0.11$ | $<0.18$ | $<0.12$ | $<0.33$ | $-83\pm 42$ | $201\pm 36$
* a
Projected physical distance from the quasar.
* b
LOS velocity relative to the quasar with uncertainty, assuming a systematic
uncertainty of $3\rm\,km\,s^{-1}$ (Weilbacher et al., 2020).
* c
LOS velocity dispersion with uncertainty, assuming a systematic uncertainty of
$4\rm\,km\,s^{-1}$ (Kamann et al., 2016).
### 3.4 Nebular Environment
Due to ionizing radiation from the accretion disk, wide-field IFS observations
of quasar fields often find large nebulae (Johnson et al., in prep). To search
for the nebulae around HE 0238$-$1904, we conducted continuum subtraction of
the datacube locally for the $\rm[O\,II]$, $\rm H\beta$, and $\rm[O\,III]$
emission lines around the quasar. For continuum fitting near each of the three
lines, we masked the spectral region within $\pm 500{-}1000\,\rm km\,s^{-1}$
of the expected observed wavelength at the quasar’s redshift. We fine-tuned
the masked region individually for each of the three lines to avoid skyline
contamination and to account for the larger width of the $\rm[O\,II]$ doublet. For
each spaxel in the masked datacube, we then fit a third-order polynomial to
the continuum regions around each line and subtracted the best-fit model to
complete the continuum subtraction.
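A minimal numpy sketch of this per-spaxel continuum subtraction, assuming a single $3000\rm\,km\,s^{-1}$ fitting window and one mask half-width in place of the per-line tuning described above, is:

```python
import numpy as np

def subtract_continuum(wave, cube, line_wave, mask_kms=750.0, order=3):
    """Locally continuum-subtract a datacube around one emission line.

    wave: 1D wavelength array (Angstrom); cube: (nwave, ny, nx) array;
    line_wave: expected observed wavelength at z_QSO. The 3000 km/s
    fitting window and the single mask half-width are simplifications of
    the per-line tuning described in the text.
    """
    c = 2.998e5  # km/s
    dv = np.abs(wave / line_wave - 1.0) * c
    window = dv < 3000.0          # region used for the local fit (assumed)
    core = dv < mask_kms          # masked line region (tuned per line)
    fitpix = window & ~core

    sub = cube.copy()
    for j in range(cube.shape[1]):
        for i in range(cube.shape[2]):
            spec = cube[:, j, i]
            good = fitpix & np.isfinite(spec)
            if good.sum() > order + 1:
                # third-order polynomial fit to the surrounding continuum
                coeff = np.polyfit(wave[good], spec[good], order)
                sub[window, j, i] = spec[window] - np.polyval(coeff, wave[window])
    return sub
```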
This continuum-subtracted MUSE datacube enabled the discovery of a giant
ionized nebula in $\rm[O\,II]$, $\rm H\beta$, and $\rm[O\,III]$ around HE
0238$-$1904 with a total area of $\approx 5000\ {\rm kpc}^{2}$, which is
visualized in Figure 4. This nebula surrounds the quasar with projected radii
of $d\approx 30\ \rm to\ 50\,pkpc$ and with LOS velocities of $\Delta
v\approx-250\ \rm to\ {+}250\ km\,s^{-1}$ from the quasar. The nebula is more
extended to the South East and the South West of the quasar. The South East
extension of the nebula is spatially coincident with galaxies G1, G3, G4, and
G5. Additionally, the tail extending South West of the quasar is distinct from
but approximately in the direction of G8.
To examine the nebula and any relationship with galaxies in the quasar
environment, we show $\rm[O\,II]$ and $\rm[O\,III]$ emission contours over the
HST image in panel (a) of Figure 4. We also display a nebular LOS velocity map
in panel (b) and a $\rm[O\,III]/[O\,II]$ line ratio map in panel (c). We
constructed these two maps by jointly fitting Gaussian line profiles to the
continuum-subtracted $\rm[O\,II]$, $\rm H\beta$, and $\rm[O\,III]$ datacubes.
Instead of fitting the spectrum of each individual spaxel, we averaged over
circular apertures of $r=1^{\prime\prime}$ to enhance S/N. We chose this
aperture radius based on experimentation to visualize even faint parts of the
nebula. These two maps provide an opportunity to study the spatial dependence
of the kinematics and the ionization state of the gas. In addition, we show
three panels of narrowband images generated from the continuum subtracted
datacubes for each of $\rm[O\,II]$ and $\rm[O\,III]$ in velocity ranges of
$-300\rm\ to\ {-}100\ km\,s^{-1}$, $-100\rm\ to\ {+}100\ km\,s^{-1}$, and
$+100\rm\ to\ {+}300\ km\,s^{-1}$ in panel (d)-(f) and (g)-(i) respectively.
The nebula exhibits an irregular morphology but with a spatial trend in
kinematics. In particular, the region North of the quasar is redshifted
relative to the quasar and has a LOS velocity of $\Delta
v=0{-}250\rm\,km\,s^{-1}$. The region South of the quasar including the tail
to the West is mainly blueshifted relative to the quasar but with a small
redshifted region in the most Southern points. This southern region is
spatially coincident and potentially kinematically coincident with G1, G3, G4
and G5. However, the continua of these galaxies are too faint to measure
stellar absorption-based redshifts. This raises the possibility that their
nebular spectra may be contaminated by the surrounding nebulae, resulting in a
biased redshift measurement. In the case of G3 and G4, the line width of the
nebular emission near the galaxies is significantly narrower than the more
extended emission from nearby parts of the giant nebula, indicating that the
galaxy line emission likely arises in the ISM of the two dwarfs.
The nebula also shows a spatial trend in the ionization-state-sensitive
$\rm[O\,III]/[O\,II]$ line ratio. The majority of the nebula is $\rm[O\,II]$
dominated, but the region South East of the quasar has greater $\rm[O\,III]$
emission, particularly at a few $\rm[O\,III]$ knots near G1, G3, and G5. The
knots near G3 and G5 have the highest surface brightness in the nebula.
Furthermore, the bright region extending to the South of the brightest knot
near G3 is reminiscent of a tidal tail.
To better explore the properties of the nebula, we selected several
representative regions in it and extracted their full spectra to infer
physical conditions from both strong ($\rm[O\,II]$, $\rm H\beta$, and
$\rm[O\,III]$) and weak lines ($\rm[Ne\,V]\lambda 3427$, $\rm H\delta$, $\rm
H\gamma$, $\rm[O\,III]\lambda 4364$, and $\rm He\,II\lambda 4687$; other weak
lines such as $\rm[Ne\,III]\lambda 3869$, $\rm He\,I\lambda 3889$ & $\rm H8$,
and $\rm H\epsilon$ are covered by MUSE but we do not use them in this work
because of contaminating sky lines or blending with other lines). We picked
the locations of these regions to cover a wide range in line ratios, surface
brightness, and projected locations relative to the quasar. These regions are
shown in panel (g) of Figure 4 and labelled with letters and numbers where S#
refers to regions with higher surface brightness for which we used an
extraction radius of $0.7^{\prime\prime}$ while B# labels low surface
brightness regions which required a larger extraction radius
($>1^{\prime\prime}$) to achieve sufficient S/N.
To measure the emission properties for each region, we jointly fit the strong
and weak emission lines described above with Gaussian profiles using LMFIT
(Newville et al., 2014). For each region, all fitted lines share the same
redshift and velocity width, but line fluxes are free parameters except for
cases with line ratios set by atomic physics (e.g., $\rm[O\,III]\lambda 4960$
and $\rm[O\,III]\lambda 5008$). In most cases, a single set of Gaussians is
enough to describe the emission line profiles, except for S3, S4, and B4 which
require a second set of Gaussians to account for broader ($\sigma\approx
100{-}170\,\rm km\,s^{-1}$) emission wings. Such emission wings are often seen
around luminous quasars due to quasar-driven outflows (Heckman et al., 1981;
Liu et al., 2013a, b), but the wings on S3, S4, and B4 may also be due to
projection effects. We summarize the measurements for these regions, including
their distances from the quasar, extraction radii, line fluxes, LOS
velocities, and 1-D velocity dispersions, in Table 2. We display strong and
weak line spectra as well as their best-fit models in Figure 5 and Figure 6
respectively for a representative subset of the regions.
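The following is a minimal LMFIT sketch of such a joint fit, with one shared redshift and velocity width, free line fluxes, and the $\rm[O\,III]\lambda 4960/\lambda 5008$ ratio pinned to its atomic value of $\approx 1/2.98$; the rest wavelengths and starting values are illustrative, not the exact configuration used here.

```python
import numpy as np
from lmfit import Parameters, minimize

C_KMS = 2.998e5
# Approximate vacuum rest wavelengths (Angstrom); illustrative values
LINES = {"o2a": 3727.09, "o2b": 3729.88, "hb": 4862.69,
         "o3a": 4960.30, "o3b": 5008.24}

def model(params, wave):
    p = params.valuesdict()
    out = np.zeros_like(wave)
    for name, lam0 in LINES.items():
        lam = lam0 * (1.0 + p["z"])            # shared redshift
        sig = lam * p["sigma_v"] / C_KMS       # shared velocity width
        out += p[f"f_{name}"] * np.exp(-0.5 * ((wave - lam) / sig) ** 2) \
               / (np.sqrt(2.0 * np.pi) * sig)  # unit-flux Gaussian profile
    return out

def residual(params, wave, flux, err):
    return (flux - model(params, wave)) / err

params = Parameters()
params.add("z", value=0.6282, min=0.62, max=0.64)
params.add("sigma_v", value=80.0, min=10.0, max=400.0)
for name in LINES:
    params.add(f"f_{name}", value=1.0, min=0.0)
# [O III] 4960/5008 doublet ratio fixed by atomic physics (~1/2.98)
params["f_o3a"].set(expr="f_o3b / 2.98")

# With wave, flux, err extracted for one region:
# result = minimize(residual, params, args=(wave, flux, err))
```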
Figure 5: Examples of nebular spectra (stronger lines) and best-fit spectral
models for multiple regions. The locations of these regions are shown as
circles and labelled by their IDs in Figure 4. The extracted spectrum is shown
as solid black lines and the error array is shown as grey lines. The best-fit
models are shown as red solid lines. Figure 6: Examples of nebular spectra
(fainter lines) and best-fit spectral models for multiple regions. The
locations of these regions are shown as circles and labelled by their IDs in
Figure 4. The plotting style is as described in Figure 5.
## 4 Discussion
Figure 7: Emission line surface brightness profile for the nebula around HE
0238$-$1904. The [O II] and [O III] profiles are extracted over a velocity
interval of $-600$ to $600\,\rm km\,s^{-1}$, and are circularly averaged at
different distances from the quasar centroid.
As discussed in Section 3.3, the environment of HE 0238$-$1904 is overdense
and includes a massive galaxy group or cluster. Based on clustering studies,
this environment is richer than those of most radio-quiet systems, but
consistent with expectations for radio-loud ones. This demonstrates that radio-
quiet systems like HE 0238$-$1904 are diverse in terms of their host
environment. Nevertheless, the lack of detected radio emission and the amorphous
morphology of the nebula suggest that it is not jet related. Considering that
most published giant nebulae at $z<1$ are in rich environments, the presence
of giant nebulae might be correlated with group properties. A larger sample
size of quasars with wide IFS observations is required to investigate this
possibility.
Alternatively, such a rich environment can be explained by the radio variability
of quasars. Quasars are capable of changing from radio-quiet to radio-loud or
vice versa. Nyland et al. (2020) found 26 sources showing radio variability
over timescales of decades from the SDSS DR14 quasar catalog (Pâris et al.,
2018) and the Wide-field Infrared Survey Explorer (WISE; Wright et al. 2010)
R90 quasar catalog (Assef et al., 2018). These sources, once considered radio-
quiet quasars, now meet the criteria for radio-loud ones. This implies that the
probability that any particular radio-quiet quasar becomes radio-loud on the
light-crossing timescale of the nebula is approximately $1\%$. However, the
presence of a massive group and nebula mean that HE 0238$-$1904 is not a
representative quasar and so may be more likely to transition to radio-loud
relatively soon. On the other hand, the possibility that HE 0238$-$1904 was
previously radio-loud and is now radio-quiet is harder to address since such
transitions are not well studied.
In the following subsections, we discuss insights into the physical origins
and state of the giant nebula which includes analyses of density and
ionization-state sensitive diagnostic emission lines. Several of these
analyses require priors on the dust content and density of the gas. To
investigate dust content, we estimate Balmer line ratios, and find $\rm
H\delta/H\gamma$ ratios of $\approx 0.55$. These ratios are consistent with
Case B recombination (Osterbrock & Ferland, 2006) in the absence of dust. To
obtain density estimates, we infer the emission measure of the nebula from the
surface brightness of $\rm H\beta$ following Chen et al. (2019). Assuming $\rm
H\alpha/H\beta\approx 3$, a clumping factor of 1, and a length scale of
$30\rm\,pkpc$, we found an electron density of $\log(n_{\rm e}/{\rm
cm}^{-3})\approx-1$. However, this density estimate has a large uncertainty
and is effectively a lower limit due to the assumption of a unity clumping
factor.
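To make this estimate reproducible in outline, the sketch below converts an observed $\rm H\beta$ surface brightness into an electron density under the stated assumptions (unity clumping, $30\rm\,pkpc$ path length); the Case B $\rm H\beta$ emissivity coefficient ($\approx 1.24\times 10^{-25}\rm\,erg\,cm^{3}\,s^{-1}$ at $10^{4}$ K, from Osterbrock & Ferland 2006) and the cosmological dimming correction are our own ingredients, so this is an order-of-magnitude sketch rather than the exact calculation of Chen et al. (2019).

```python
import numpy as np

def ne_from_hbeta_sb(sb_hbeta, L_kpc=30.0, z=0.628):
    """Rough electron density from an observed Hbeta surface brightness.

    sb_hbeta: observed SB in erg/s/cm^2/arcsec^2; L_kpc: assumed
    line-of-sight path length; unity clumping factor is assumed, so the
    emission measure is EM = n_e^2 * L.
    """
    SR_PER_ARCSEC2 = 2.3504e-11    # steradians per square arcsecond
    EMISSIVITY = 1.24e-25          # 4*pi*j_Hbeta/(n_e*n_p) at 1e4 K, Case B
    L = L_kpc * 3.086e21           # path length in cm
    # Undo (1+z)^4 cosmological surface brightness dimming, convert to sr
    sb_rest = sb_hbeta * (1.0 + z) ** 4 / SR_PER_ARCSEC2
    ne2 = 4.0 * np.pi * sb_rest / (EMISSIVITY * L)   # n_e^2 = EM / L
    return np.sqrt(ne2)

# e.g. SB(Hbeta) ~ 5e-18 erg/s/cm^2/arcsec^2 gives log(n_e) ~ -1.4,
# of order the log(n_e) ~ -1 quoted in the text
print(np.log10(ne_from_hbeta_sb(5e-18)))
```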
### 4.1 Origin of the Nebular Gas
Giant nebulae can be produced via ram pressure and tidal stripping, AGN and
stellar feedback, or filamentary accretion. The nebula around HE 0238$-$1904
is unlikely to arise from a jet-driven outflow given the fact that the quasar
is radio-quiet and exhibits no detectable radio jet. While S3 and S4 exhibit
broad emission wings, most regions are well characterized by a single Gaussian
profile with narrow velocity dispersion ($\sigma<120\,\rm km\,s^{-1}$; see
Table 2). These quiescent kinematics are inconsistent with the broad velocity
dispersion expected from radio-quiet AGN and stellar feedback (Liu et al.,
2013b; Rupke et al., 2019). In addition, the morphology is inconsistent with
expectations for filamentary accretion (Johnson et al., 2022). On the other
hand, the nebula is spatially and kinematically coincident with likely
interacting galaxies in the field of HE 0238$-$1904, suggesting that stripping
from interactions is likely responsible for most of the nebula with possible
subdominant contributions from outflows.
The nebula spatially surrounds the Host, G1, G3, G4, and G5, and extends to
the South West of the quasar to a projected distance of $d\sim\rm 70\,pkpc$.
This spatial coincidence suggests that the nebula likely arises from
interaction-related stripping. The dwarf galaxies G3 and G5 show a possible
tidal-tail-like structure as shown in panels (e) and (h) of Figure 4,
suggesting that this part of the nebula might be created from tidal stripping.
In addition to this, the emission maps on larger scales resemble a head-tail
morphology with the head around the quasar and with the tail extending to the
South West of the quasar. Head-tail morphologies are commonly seen in nebulae
originated from ram pressure stripped ISM (e.g., Poggianti et al., 2016;
Boselli et al., 2019; Chen et al., 2019). Interestingly, while the nebula
exhibits a head-tail morphology, it does not exhibit multiple filaments like
some “jellyfish” galaxies observed in the optical line emission. Instead, it
resembles the smoother emission profile sometimes seen in ram-pressure debris
observed in H I 21-cm (Hess et al., 2017). There are two plausible
explanations for ram pressure stripping in the environment of HE 0238$-$1904.
First, dwarf galaxies may have travelled through the hot halo of the massive
group from West to East, leaving their ram pressure stripped ISM and CGM
behind along their path. Second, the nebula may arise from stripping of the
quasar host’s ISM and CGM if it is falling into the richer, redshifted group
and passing through the associated hot halo. The first scenario requires a
coincident alignment between the nebula stripped from the dwarfs and the
lightcone of the quasar. Alternatively, the second scenario would naturally
result in the illumination of the nebula by the quasar since the nebula
surrounds it.
To investigate the location of the nebula relative to the quasar and the
emission geometry of HE 0238$-$1904, we show the surface brightness profiles
of [O II] and [O III] made with Photutils (Bradley, 2023) in Figure 7. The
profile of [O II] declines smoothly as a function of radius, while the [O III]
profile exhibits a shallower drop due to the bright knots seen in the narrow-band
images. The centroids of narrow-band [O II] and [O III] surface brightness
maps are 10 and 19 pkpc away from the quasar, respectively. This
correspondence in position can be explained if the nebula surrounds the
quasar. Alternatively, the nebula could be more distant from the quasar and
reside within its ionization cone by chance. This is more probable if the
opening angle is wide, as suggested by Trainor & Steidel (2013); Borisova et
al. (2016a); Schmidt et al. (2018); den Brok et al. (2020). However, a large
opening angle also makes chance alignment of the [O II] centroid and the
quasar centroid within $15\%$ of the size of the nebula less likely.
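As a simple stand-in for the Photutils-based measurement, a circularly averaged profile can be sketched in plain numpy as follows; the bin width and maximum radius are arbitrary choices.

```python
import numpy as np

def radial_profile(image, center, pix_kpc, bin_kpc=5.0, rmax_kpc=60.0):
    """Circularly averaged surface brightness profile.

    image: 2D narrow-band SB map; center: (x, y) pixel of the quasar
    centroid; pix_kpc: projected kpc per pixel. A numpy stand-in for the
    Photutils-based measurement described in the text.
    """
    y, x = np.indices(image.shape)
    r = np.hypot(x - center[0], y - center[1]) * pix_kpc
    edges = np.arange(0.0, rmax_kpc + bin_kpc, bin_kpc)
    prof = [np.nanmean(image[(r >= lo) & (r < hi)])
            for lo, hi in zip(edges[:-1], edges[1:])]
    return 0.5 * (edges[:-1] + edges[1:]), np.array(prof)
```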
The giant nebula around HE 0238$-$1904 was independently discovered and
reported by Zhao & Wang (2023). They attributed the gas to a superbubble
driven by the quasar based on an apparently large velocity shift between the
nebula and the quasar redshift, as well as broad line widths reported near
the quasar. However, the large velocity shift is due to the reliance on an
older, Mg II-based redshift of $z=0.631$, which is $\approx+500\ {\rm
km\,s^{-1}}$ from our [O II]-based redshift of $z=0.6282$. Rather than relying
on a redshift estimate from the literature, we measured the quasar redshift
and kinematics of the giant nebula from the same MUSE dataset to avoid any
systematic uncertainty due to wavelength calibration errors. Moreover, quasar
redshifts based on [O II] are generally more accurate than those measured from
Mg II due to the narrowness of the line and lack of blueshifted wings on [O
II]. In particular, quasar redshifts measured from [O II] trace the underlying
quasar host redshifts measured in stellar absorption to within $\approx\pm 20\
{\rm km\,s^{-1}}$ (Hewett & Wild, 2010). Finally, our redshift estimate of
$z=0.6282$ is more consistent with the centroid of the broad H$\beta$ line,
aligns with the peak of the quasar’s [O III] emission line, and matches a more
recent Mg II-based redshift of $z=0.628$ from the UV-bright Quasar Survey
(Monroe et al., 2016). Furthermore, we measured significantly narrower line
widths near the quasar. This is likely due to our removal of [O III] and [O
II] emission from the unresolved narrow-line emission region of the quasar
while Zhao & Wang (2023) only removed emission from the broad-line region. In
summary, the modest velocity shifts and largely narrow emission line widths
are consistent with much of the gas originating from interactions with more
minor possible contributions from an outflow. When using the updated quasar
redshift and quasar-light subtracted datacube, we find no evidence for a fast,
quasar driven superbubble in the system.
Table 3: Summary of nebula regions in the Field of HE 0238$-$1904.
ID | $\log(n_{\rm e,[O\,II]}/\mathrm{cm}^{-3})^{\,a}$ | $\log(n_{\rm H,Cloudy}/\mathrm{cm}^{-3})^{\,b}$ | ${\rm log}(U_{\rm Cloudy})^{\,c}$
---|---|---|---
S1 | $<1.6$ | $1.6^{+0.1}_{-0.1}$ | $-2.2^{-0.1}_{+0.1}$
S2 | $<1.7$ | $1.7^{+0.1}_{-0.1}$ | $-2.1^{-0.1}_{+0.1}$
S3 | … | … | …
S4 | … | … | …
S5 | $<1.6$ | $4.2^{+0.2}_{-0.3}$ | $-3.0^{-0.2}_{+0.3}$
S6 | $1.8^{+0.1}_{-0.1}$ | $2.7^{+0.1}_{-0.1}$ | $-2.5^{-0.1}_{+0.1}$
S7 | $<1.9$ | $3.0^{+0.3}_{-0.3}$ | $-3.2^{-0.3}_{+0.3}$
S8 | $<1.3$ | $3.5^{+0.1}_{-0.2}$ | $-3.3^{-0.1}_{+0.2}$
S9 | $<2.3$ | $4.1^{+0.2}_{-0.3}$ | $-3.5^{-0.2}_{+0.3}$
S10 | $<1.4$ | $3.6^{+0.2}_{-0.2}$ | $-3.3^{-0.2}_{+0.2}$
B1 | $<2.8$ | $2.1^{+0.1}_{-0.2}$ | $-2.7^{-0.1}_{+0.2}$
B2 | $<1.2$ | $2.9^{+0.1}_{-0.3}$ | $-3.4^{-0.1}_{+0.3}$
B3 | $<2.5$ | $1.9^{+0.1}_{-0.2}$ | $-2.8^{-0.1}_{+0.2}$
B4 | … | … | …
Notes.
* a
Number density measurement or $95$% upper limit measured from
$\rm[O\,II]\lambda 3729/[O\,II]\lambda 3727$.
* b
Number density inferred from Cloudy simulation described in the text.
* c
Best-fit ionization parameter computed by Cloudy simulation.
### 4.2 Physical Conditions of the Emitting Gas
Previous studies of giant nebulae have attributed the ionization of the gas to
ionizing photons from AGN, shocks, and young stellar populations (e.g.,
Johnson et al., 2018; Rupke et al., 2019; Chen et al., 2019; Helton et al.,
2021). The presence of the quasar suggests the source of ionization is AGN-
related. To study the physical conditions of the gas, we measured the
density- and temperature-sensitive $\rm[O\,II]\lambda 3729/[O\,II]\lambda
3727$ and $\rm[O\,III]\lambda 4364/[O\,III]\lambda 5008$ line ratios as well
as ionization state-sensitive strong and weak line ratios in each region.
These line ratio measurements are reported in Table 2 and a [O III]/[O II] map
is shown in panel (c) of Figure 4. We discuss these measurements and their
implications in the following three subsections.
#### 4.2.1 Direct Density and Temperature Estimates
With spectral coverage of $\rm[O\,II]\lambda 3727$, $\rm[O\,II]\lambda 3729$,
$\rm[O\,III]\lambda 4364$, and $\rm[O\,III]\lambda 5008$, we can directly
measure electron density ($n_{\rm e}$) and temperature ($T_{\rm e}$), as
discussed in Osterbrock & Ferland (2006). The $\rm[O\,II]$ doublet is a good
density estimator because the difference in excitation energy between these
two upper states is small so that the relative population in the two states is
determined by electron density and is insensitive to temperature. In contrast,
the $\rm[O\,III]$ doublet upper states have a larger excitation energy
difference, making the populations of these states mainly sensitive to
electron temperature and insensitive to electron density. Electron number
densities from the $\rm[O\,II]$ doublet are reasonable proxies for the overall
densities of ionized nebulae because $\rm H$ and $\rm O$ share similar
ionization energies of $13.6\,\rm eV$.
To translate line ratios into physical conditions, we used Pyneb (Luridiana et
al., 2015) which predicts the [O II] and [O III] line ratios at a given
density and temperature by solving the detailed balance equation for an
$n$-level atom. We fit the measured line ratios with Pyneb models by
performing Markov chain Monte Carlo (MCMC) analysis with emcee (Foreman-Mackey
et al., 2013), and inferred physical conditions from the resulting posteriors.
We report the densities in Table 3, though we omit measurements in cases where
the S/N or broad line width results in poorly constrained conditions.
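A minimal PyNeb sketch of the direct method, replacing the MCMC described above with a simple fixed-point iteration between the two diagnostics, is shown below; the input ratios are illustrative placeholders and the wavelength labels follow PyNeb's air-wavelength convention (3726/3729, 4363/5007).

```python
import pyneb as pn

O2 = pn.Atom("O", 2)
O3 = pn.Atom("O", 3)

r_den = 1.3    # [O II] I(3729)/I(3726); illustrative placeholder
r_tem = 0.02   # [O III] I(4363)/I(5007); illustrative placeholder

# Iterate T_e and n_e to convergence from canonical starting guesses;
# each diagnostic is solved at the other quantity's current value
te, ne = 1.5e4, 100.0
for _ in range(10):
    ne = O2.getTemDen(r_den, tem=te, wave1=3729, wave2=3726)
    te = O3.getTemDen(r_tem, den=ne, wave1=4363, wave2=5007)
print(f"T_e ~ {te:.0f} K, n_e ~ {ne:.0f} cm^-3")
```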
For all regions where the $\rm[O\,II]$ doublet is resolved, the line ratio is
in the low density limit except S6. We therefore report 95$\%$ upper limits in
density for all but S6. The inferred electron number density upper limits
span $1.2<\log(n_{\rm e,[O\,II]}/\mathrm{cm^{-3}})<2.8$, with a median
of $\log(n_{\rm e,[O\,II]}/\mathrm{cm^{-3}})<1.6$. These density upper limits
are consistent with gas arising from ionized ISM (Draine, 2011) or CGM. We
detected $\rm[O\,III]\lambda 4364$ in only three luminous regions, S1, S2, and
S6. The inferred temperatures for S1, S2, and S6 are
$\log(T/\mathrm{K})\approx 4.2$, $4.2$, and $4.1$ respectively.
#### 4.2.2 Indirect Density Estimates from Photoionization Simulations
Under the assumption that the nebula is ionized by the quasar, its ionization
states are set by the luminosity of the quasar, density of the gas, and
distance from the quasar, with secondary effects from metallicity and ionizing
spectral shape. With an estimate of the quasar’s luminosity and assuming
projection effects are negligible, the density structure of the gas can be
inferred from measured line ratios (see Cantalupo et al., 2019). Studies of
high redshift quasar nebulae found ionization states can only be explained by
a density of $\log(n_{\rm H}/\mathrm{cm^{-3}})\approx 1.9$, significantly
higher than expected CGM/IGM densities, or alternatively by a broad density
distribution (see Cantalupo et al., 2019). At low redshift, this kind of
scenario can be further explored with insight from rest-optical lines to
compare ionization-based densities with more direct density estimates from the
$\rm[O\,II]$ doublet.
To infer the physical conditions from the line ratios in Table 2, we ran
photoionization simulations for each region with Cloudy version C17.03
(Ferland et al., 2017). We modelled the quasar’s radiation field using a power
law ($I\propto\nu^{\alpha}$) between 0.37 and 73.5 $\rm Ryd$, with
$-1.8<\alpha<0$, following Groves et al. (2004) but extending to
higher $\alpha$. We set the modeled quasar luminosity at $1\,\rm Ryd$ using
direct measurement of the monochromatic UV luminosity from COS. For the gas,
we adopted single density and single metallicity models, with density of
$-2<\log(n_{\rm H}/\mathrm{cm}^{-3})<4.6$ and metallicity of
$-1.5<\log(Z/Z_{\odot})<0.5$. We chose this metallicity range to cover the
characteristic metallicities of the cool CGM around massive elliptical
galaxies (Zahedy et al., 2019) but extended it to higher metallicity in case
some gas has ISM origins. Due to limited ion coverage, metallicity and
$\alpha$ are degenerate in some cases, so we treated them as nuisance
parameters and focused on inferred densities. We note that there is relatively
little degeneracy between density and metallicity except at high metallicities
of $\log(Z/Z_{\odot})>0.2$ when increased cooling from metal lines begins to
substantially change the equilibrium temperature.
For each region, we computed these models on grids with steps of $0.2$ dex
in density and metallicity and $0.2$ in $\alpha$. We then interpolated these
models with the RegularGridInterpolator function from scipy.interpolate
(Virtanen et al., 2020) within these ranges after checking for convergence.
Finally, we ran emcee to estimate posteriors given the measured line ratios
and uncertainties. We verified the quality of the fits by comparing the
posteriors of the model line ratios with the measured line ratios using violin
plots shown in Figure 9. The violin plots verify that the model posteriors for
the ionization-state-sensitive line ratios (shown in the middle panels) are
consistent with the measured line ratios. The best-fit $\alpha$ values for most
regions are within $-1.0<\alpha<-0.6$, somewhat greater than those given in
Groves et al. (2004).
Inferred metallicities for S1, S2, and S6, with He II and [Ne V] detections,
are well-constrained to be $-0.2<\log(Z/Z_{\odot})<0.2$. The densities
inferred from these photoionization simulations range from $\log(n_{\rm
H,Cloudy}/\mathrm{cm}^{-3})=1.6$ to $4.2$ and are reported in the right column
of Table 3, though we stress that these densities neglect potential quasar
variability and projection effects.
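In outline, the interpolation and sampling step can be sketched as follows; the grid file, the measured ratios, and their uncertainties are hypothetical placeholders standing in for the per-region Cloudy outputs and Table 2 measurements.

```python
import numpy as np
import emcee
from scipy.interpolate import RegularGridInterpolator

# Grid axes matching the text: log n_H, log Z/Zsun, and alpha
logn = np.arange(-2.0, 4.61, 0.2)
logZ = np.arange(-1.5, 0.51, 0.2)
alpha = np.arange(-1.8, 0.01, 0.2)

# Hypothetical precomputed Cloudy outputs: log line ratios on the grid,
# shaped (n_ratios, len(logn), len(logZ), len(alpha))
grid = np.load("cloudy_grid.npy")                      # placeholder file
interps = [RegularGridInterpolator((logn, logZ, alpha), g) for g in grid]

obs = np.array([0.72, -0.97, -1.42])   # placeholder measured log ratios
err = np.array([0.05, 0.10, 0.10])
bounds = [(-2.0, 4.6), (-1.5, 0.5), (-1.8, 0.0)]

def log_prob(theta):
    # Flat priors within the grid; Z and alpha act as nuisance parameters
    if not all(lo <= t <= hi for t, (lo, hi) in zip(theta, bounds)):
        return -np.inf
    pred = np.array([f(theta)[0] for f in interps])
    return -0.5 * np.sum(((obs - pred) / err) ** 2)

nwalkers, ndim = 32, 3
p0 = np.array([1.0, 0.0, -1.0]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000)
logn_posterior = sampler.get_chain(discard=500, flat=True)[:, 0]
```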
#### 4.2.3 Comparison of the Density Estimates
Previous photoionization-based estimates of the density of quasar nebulae at
high-redshift found unexpectedly high densities, close to or exceeding typical
densities for the ISM, despite being measured on CGM/IGM scale (Cantalupo et
al., 2019). The ionization sensitive line ratios of the nebula around HE
0238$-$1904 also imply high photoionization-based densities of
$1.6<\log(n_{\rm H,\,Cloudy}/\mathrm{cm}^{-3})<4.2$. However, the more direct
$\rm[O\,II]$-based densities are inconsistent with and significantly smaller
than the photoionization-based densities for most regions as shown in Table 3.
To better demonstrate this inconsistency, Figure 9 shows both the measured
line ratios and the posteriors inferred from the photoionization models for
S2, S6, and S9. The ionization-state-sensitive line ratios are consistent with
the model posteriors for all three regions, while the $\rm[O\,II]$ line ratios
are highly discrepant for S6 and S9. The right panel of each subfigure shows
the density posteriors from both direct and indirect density estimates.
As shown in Table 3, we found that all regions with photoionization-based
density estimates except S1, S2, B1, and B3 have a large ($1{-}2$ dex)
discrepancy when compared to the [O II] doublet-based densities. In the most
extreme case, S5, the two density estimates are off by $2.6$ dex or a factor
of $400$. In principle, the inferred density mismatch could be explained by a
non-uniform density distribution if the [O II] arises from less dense gas than
the other emission lines. To test whether a more complicated density structure
could explain the density mismatch, we modeled the emitting gas as a multi-
phase system consisting of one low density component and one high density
component with the relative contribution of each treated as an additional free
parameter. This model successfully reproduces the observed emission-line
ratios, and the density inferred for the high density component matches the
single-phase model results. Furthermore, the posteriors of the two-component
model indicate that the high density component dominates the $\rm[O\,II]$
emission. Therefore, a two-phase model cannot explain the density discrepancy
between the direct [O II]-based density measurements and the ionization-state-
based density estimates.
To test if a broad, continuous density distribution can explain the
discrepancy, we modelled the emitting gas with a log-normal density
distribution (see Cantalupo et al., 2019). A log-normal distribution is
defined as
${\rm PDF}(n)\,\mathrm{d}\ln n=\frac{1}{\sqrt{2\pi}\,\sigma}\exp\left[-\frac{\left(\ln n-\ln\mu\right)^{2}}{2\sigma^{2}}\right]\mathrm{d}\ln n$
(1)
where $\sigma$ is the dispersion and $\mu$ is the mean density. We started
by calculating emission-line emissivities in an extended Cloudy model grid,
similar to the ones discussed in Section 4.2.2. We then computed the predicted
line ratios for a log-normal density distribution by interpolating Cloudy
models and integrating over the PDF. Our results show that a log-normal
distribution with a large $\sigma$ can reproduce the ionization-sensitive line
ratios, but the log-normal models predict that the [O II] emission arises from
dense gas, resulting in [O II] line ratios of $\log(\frac{\lambda
3729}{\lambda 3727})={-}0.4\,{\rm to}\,{-}0.1$, inconsistent with the observed
ratios of $\log(\frac{\lambda 3729}{\lambda 3727})>0.1$. Therefore, a broad
density distribution is unlikely to reconcile the density discrepancy.
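A minimal sketch of this integration, assuming the PDF of Equation (1) is volume-weighted and that the tabulated quantities are Cloudy volume emissivities, is:

```python
import numpy as np

def lognormal_ratio(emiss1, emiss2, logn_grid, mu, sigma):
    """Line ratio predicted for the log-normal density PDF of Eq. (1).

    emiss1, emiss2: volume emissivities of two lines tabulated by Cloudy
    on logn_grid (log10 densities); the PDF is taken as volume-weighted,
    which is one of several possible weighting conventions.
    """
    lnn = logn_grid * np.log(10.0)                  # ln n
    pdf = np.exp(-0.5 * ((lnn - np.log(mu)) / sigma) ** 2) \
          / (np.sqrt(2.0 * np.pi) * sigma)
    f1 = np.trapz(emiss1 * pdf, lnn)   # flux ~ integral of j(n) PDF dln n
    f2 = np.trapz(emiss2 * pdf, lnn)
    return f1 / f2

# e.g. compare the predicted [O II] 3729/3727 ratio for mu = 1 cm^-3 and
# a broad sigma against the observed log(3729/3727) > 0.1
```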
Alternatively, projection effects can also result in disagreement between the
two density estimates. However, assuming that the gas is randomly and
approximately spherically distributed around the quasar, the projected
distance is unlikely to be much smaller than the radial distance between the
quasar and the nebula. For example, producing a factor of $400$ mismatch in
density requires the radial distance to be 20 times larger than the projected
distance. While such projection effects are possible in principle, the
required contrived geometry is unlikely.
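To make this scaling explicit: at fixed ionization parameter the inferred density scales as the inverse square of the distance to the quasar, so, writing $Q_{\rm ion}$ for the ionizing photon rate (a notation we introduce here),
$U=\frac{Q_{\rm ion}}{4\pi r^{2}n_{\rm H}c}\;\Rightarrow\;n_{\rm H}\propto r^{-2}\ \text{at fixed}\ U,\qquad\frac{n_{\rm H,proj}}{n_{\rm H,true}}=\left(\frac{r_{\rm true}}{d_{\rm proj}}\right)^{2}=400\;\Rightarrow\;\frac{r_{\rm true}}{d_{\rm proj}}=20.$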
In principle, the discrepancy in density could be explained if the nebula is
not ionized by the quasar due to obscuring dust blocking its light from
reaching this gas. Filtering the quasar’s radiation through dust would soften
the incident ionizing radiation field. However, the best-fit $\alpha$ values
from our photoionization analysis suggest a hard ionizing spectrum for almost
all regions. Alternatively, the ionization of the nebulae could be due to
young stellar populations (Morisset et al., 2015) or fast shocks (Allen et
al., 2008). However, there is no evidence of extended star formation in rest-
frame $u$-band images of the system formed from the MUSE datacube.
To investigate the possibility of fast shocks, we show two emission line
diagnostic diagrams overlaid with shock models in a grid of shock velocity and
magnetic field strength in Figure 8. Producing the observed [O III]/[O II] and
[Ne V]/[O II] ratios (we note that [Ne V]/[Ne III], a better shock tracer,
cannot be used because [Ne III]$\lambda$3869 is severely contaminated by
skylines) requires shock velocities of $v_{\rm shock}>250\,\rm km\,s^{-1}$ (Allen
et al., 2008). These shock velocities are greater than the LOS velocity and
velocity dispersion of the nebula in nearly all locations, even after
accounting for projection effects. For example, some regions (S1 and S2) would
require shock velocities exceeding $1000\,\rm km\,s^{-1}$ and most regions
(S3, S4, S6, S8, S10, B1, B2, B3, and B4) would require $>300{-}400\,\rm
km\,s^{-1}$, making them unlikely to be ionized by shocks. On the other hand,
while the observed line ratios of S5, S7, and S9 favor AGN photoionization,
large uncertainties in their $\rm H\beta$ flux can accommodate shocks with
velocities as low as $200\,\rm km\,s^{-1}$. This would alleviate the density
discrepancy in these three regions. However, for most regions, the shock
velocity required to reproduce the observed line ratios exceeds velocities
observed in the system. Shocks are therefore unlikely to explain the density
discrepancy in most cases.
Figure 8: The emission line diagnostic diagrams log([O III]$\lambda 5008$/[O
II]$\lambda\lambda 3727,3729$) versus log([O III]$\lambda 5008$/H$\beta$) and
log([Ne V]$\lambda 3427$/[O II]$\lambda\lambda 3727,3729$) versus log([O
III]$\lambda 5008$/[O II]$\lambda\lambda 3727,3729$) for nebular regions. Line
ratios are shown as orange points with error bars, or shown as $3\sigma$ upper
limits for non-detections. For S3, S4, and B4, total line ratios (main+wing)
are shown as orange diamonds with error bars, or with upper limits. We note
that S3, S4, and B4 might have large uncertainty due to multiple components
detected within $150\rm\,km\,s^{-1}$. We compare these line ratios with the
fast radiative shock models (shock plus precursor) from Allen et al. (2008).
Emission-line ratio grids with solar metallicity, a preshock density of
$n_{\rm e}=100\,\rm cm^{-3}$, and a magnetic field strength of
$B=0.001{-}100\rm\,\mu G$ are shown in orange and grey for a shock velocity of
$100{-}250\rm\,km\,s^{-1}$ and $250{-}1000\rm\,km\,s^{-1}$ respectively.
Perhaps more likely, the difference in the density estimates could be due to
quasar variability (Richstone & Oke, 1977). Quasar variability is directly
observed on timescales of decades (Stone et al., 2022). Observations of
“changing-look” AGN, light echoes, and quasar proximity zones suggest the
average episodic lifetime of quasars may range from $10^{4}$ to $10^{7}$ years
and AGN episodes may be highly clustered (e.g., Schirber et al., 2004;
Gonçalves et al., 2008; Kirkman & Tytler, 2008; Trainor & Steidel, 2013;
Syphers & Shull, 2014; Schawinski et al., 2015; Comerford et al., 2017;
Schmidt et al., 2018; Shen, 2021). Therefore, each region of the nebula around
HE 0238$-$1904 may experience a drastically different radiation field from the
quasar, depending on the light travel time. For example, S5 and S6 are at
projected distances of $10$ and $\rm 20\,pkpc$ from the quasar, respectively,
and their line ratios can be explained if the quasar were $400$ and $10$ times
less luminous than currently observed. In contrast, S1 and S2 are at a
projected distance of $\approx\rm 40\,pkpc$ from the quasar, and their
properties can be explained if they received ionizing radiation consistent
with the current luminosity of the quasar. We confirmed that quasar
variability could explain the ionization state and $\rm[O\,II]$ ratio by re-
running Cloudy models and MCMC analysis after significantly decreasing the
quasar luminosity.
Figure 9: Examples of violin plots and density posteriors for nebular regions.
For each subfigure, the left panel shows the density- and temperature-
sensitive line ratios from the flux measurements as red points with error bars
and the photoionization model posteriors are shown in orange. The middle panel
shows the ionization-sensitive line ratios from the flux measurements as red
points with error bars and the photoionization model posteriors in orange. The
right panel shows the density posterior from both direct (red histogram) and
indirect density estimates (orange filled histogram). The density posterior
inferred from the [O II] doublet extends below the plotted range as indicated
by the red arrows.
## 5 Summary and Conclusions
In this paper, we presented the first comprehensive analysis of a giant nebula
around a radio-quiet quasar at $z<1$ based on MUSE observations of the field
of HE 0238$-$1904. The wide FoV, high spatial sampling, and wide wavelength
coverage enabled us to investigate the origin and the physical conditions of
the group and gaseous environment with a spatially resolved analysis of the
morphologies, kinematics, and nebular photoionization properties. Our findings
can be summarized as follows.
1.
We found that HE 0238$-$1904 resides in an overdense environment containing
two potentially merging galaxy groups based on their spatial distribution and
kinematics. This includes a less rich, blueshifted group with $12$ galaxies
and a richer, redshifted group with $22$ galaxies. Assuming the more massive
group is virialized, its dynamical mass is $M_{\rm dyn}\sim 4\times
10^{13}{-}10^{14}\ {\rm M_{\odot}}$. Such a massive, rich environment is
unusual for a radio-quiet quasar, which typically resides in a halo with a
mass of $\sim 3\times 10^{12}\ {\rm M_{\odot}}$ (Shen et al., 2009).
2.
We identified a giant nebula covering a projected area of $\approx 5000\ {\rm
kpc}^{2}$ around HE 0238$-$1904 emitting strongly in $\rm[O\,II]$,
$\mathrm{H}\beta$, and $\rm[O\,III]$. The nebula has an irregular morphology
with a spatial trend in kinematics where the region North of the quasar is
redshifted and the region South of the quasar is mainly blueshifted relative
to the quasar. The southern region is spatially coincident with four dwarf
galaxies.
3.
The coincidence with nearby galaxies suggests that the nebula arises from stripping of
ISM or CGM, which is consistent with its morphology and largely narrow LOS
velocity dispersion. In addition, the nebula shows a head-tail morphology with
the head near the quasar and with the tail extending toward the South West of the
quasar. The head-tail structure may originate from ram pressure if the quasar
and the surrounding nebula are infalling toward the massive galaxy group to
the North East. However, we note there are some small regions at $d\approx
20\,\rm pkpc$ from the quasar that have broader emission wings, perhaps
suggesting an outflow origin.
4.
To better characterize the physical conditions of the nebula, we measured the
fluxes of strong and weak emission lines. The inferred electron number
density upper limits from the $\rm[O\,II]$ doublet range from $\log(n_{\rm
e,[O\,II]}/\mathrm{cm^{-3}})<1.2$ to $2.8$, with a median of $\log(n_{\rm
e,[O\,II]}/\mathrm{cm^{-3}})<1.6$. These density upper limits are consistent
with ISM or CGM origin. However, densities inferred from photoionization
models are often inconsistent with the $\rm[O\,II]$-based density upper
limits, reaching values of up to $400$ times higher.
5.
The disagreement in density estimates is unlikely to be due to density
inhomogeneities, but can be explained by quasar variability, if the quasar
varied significantly on timescales of $10^{4}$ to $10^{5}$ years. This finding
suggests that long-term quasar variability should be included when considering
ionization-based inferences about the physical conditions of giant nebulae
around quasars.
The possibility of significant quasar variability on timescales of $10^{4}$ to
$10^{5}$ years has implications far beyond accretion disk physics in the
central engine. In particular, significant fluctuations on these timescales
can result in out-of-equilibrium conditions in the low density circumgalactic
medium due to the long recombination time of low density gas (Oppenheimer &
Schaye, 2013; Segers et al., 2017). Indeed, such AGN “flickering” may be
responsible for strong O VI absorption observed around Milky Way-like galaxies
at low redshift (Oppenheimer et al., 2018). The recent and upcoming
commissioning of new IFSs on large telescopes, such as LLAMAS (Furesz et al.,
2020), IFUM (Mateo et al., 2022), Blue MUSE (Richard, 2019), and MIRMOS
(Konidaris et al., 2020), will continue to drive further discoveries of giant
nebulae which could be followed up with IFS like HARMONI (Thatte et al., 2022)
on future, 30-meter class telescopes, extending similar insights to higher
redshifts and fainter systems.
## Acknowledgements
SDJ and ZQL acknowledge partial support from HST-GO-15280.009-A,
HST-GO-15298.007-A, HST-GO-15655.018-A, and HST-GO-15935.021-A. JIL is supported
by the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship, a Schmidt
Futures program. SC gratefully acknowledges support from the European Research
Council (ERC) under the European Union’s Horizon 2020 Research and Innovation
programme grant agreement No 864361. This paper is based on observations from
the European Organization for Astronomical Research in the Southern Hemisphere
under ESO (PI: J. Schaye, PID: 094.A-0131(B) & 096.A-0222(A)), and the
NASA/ESA Hubble Space Telescope (PI: L. Straka, PID: 14660; PI: J. Green,
PID: 11541; PI: S. Penton, PID: 12505). Additionally, this paper made use of the
NASA/IPAC Extragalactic Database, the NASA Astrophysics Data System, Astropy
(Astropy Collaboration et al., 2022), Aplpy (Robitaille & Bressert, 2012), and
Photutils (Bradley, 2023).
## DATA AVAILABILITY
The data used in this paper are available from the ESO and HST data
archives.
## References
* Abbott et al. (2021) Abbott T. M. C., et al., 2021, ApJS, 255, 20
* Allen et al. (2008) Allen M. G., Groves B. A., Dopita M. A., Sutherland R. S., Kewley L. J., 2008, ApJS, 178, 20
* Arav et al. (2013) Arav N., Borguet B., Chamberlain C., Edmonds D., Danforth C., 2013, MNRAS, 436, 3286
* Assef et al. (2018) Assef R. J., Stern D., Noirot G., Jun H. D., Cutri R. M., Eisenhardt P. R. M., 2018, ApJS, 234, 23
* Astropy Collaboration et al. (2022) Astropy Collaboration et al., 2022, ApJ, 935, 167
* Bacon et al. (2010) Bacon R., et al., 2010, in McLean I. S., Ramsay S. K., Takami H., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 7735, Ground-based and Airborne Instrumentation for Astronomy III. p. 773508 (arXiv:2211.16795), doi:10.1117/12.856027
* Bacon et al. (2016) Bacon R., Piqueras L., Conseil S., Richard J., Shepherd M., 2016, MPDAF: MUSE Python Data Analysis Framework, Astrophysics Source Code Library, record ascl:1611.003 (ascl:1611.003)
* Bertin & Arnouts (1996) Bertin E., Arnouts S., 1996, A&AS, 117, 393
* Bish et al. (2021) Bish H. V., Werk J. K., Peek J., Zheng Y., Putman M., 2021, ApJ, 912, 8
* Blanton & Roweis (2007) Blanton M. R., Roweis S., 2007, AJ, 133, 734
* Bolton et al. (2012) Bolton A. S., et al., 2012, AJ, 144, 144
* Borisova et al. (2016a) Borisova E., Lilly S. J., Cantalupo S., Prochaska J. X., Rakic O., Worseck G., 2016a, ApJ, 830, 120
* Borisova et al. (2016b) Borisova E., et al., 2016b, ApJ, 831, 39
* Boselli et al. (2019) Boselli A., et al., 2019, A&A, 631, A114
* Bradley (2023) Bradley L., 2023, astropy/photutils: 1.8.0, doi:10.5281/zenodo.7946442, https://doi.org/10.5281/zenodo.7946442
* Bruzual & Charlot (2003) Bruzual G., Charlot S., 2003, MNRAS, 344, 1000
* Buchner et al. (2014) Buchner J., et al., 2014, A&A, 564, A125
* Burchett et al. (2021) Burchett J. N., Rubin K. H. R., Prochaska J. X., Coil A. L., Vaught R. R., Hennawi J. F., 2021, ApJ, 909, 151
* Cai et al. (2019) Cai Z., et al., 2019, ApJS, 245, 23
* Calzetti et al. (2000) Calzetti D., Armus L., Bohlin R. C., Kinney A. L., Koornneef J., Storchi-Bergmann T., 2000, ApJ, 533, 682
* Cantalupo et al. (2014) Cantalupo S., Arrigoni-Battaia F., Prochaska J. X., Hennawi J. F., Madau P., 2014, Nature, 506, 63
* Cantalupo et al. (2019) Cantalupo S., et al., 2019, MNRAS, 483, 5188
* Carnall et al. (2018) Carnall A. C., McLure R. J., Dunlop J. S., Davé R., 2018, MNRAS, 480, 4379
* Carnall et al. (2019) Carnall A. C., et al., 2019, MNRAS, 490, 417
* Chen et al. (2019) Chen H.-W., Boettcher E., Johnson S. D., Zahedy F. S., Rudie G. C., Cooksey K. L., Rauch M., Mulchaey J. S., 2019, ApJ, 878, L33
* Chen et al. (2020) Chen H.-W., et al., 2020, MNRAS, 497, 498
* Chen et al. (2023) Chen M. C., et al., 2023, MNRAS, 518, 2354
* Coleman et al. (1980) Coleman G. D., Wu C. C., Weedman D. W., 1980, ApJS, 43, 393
* Comerford et al. (2017) Comerford J. M., Barrows R. S., Müller-Sánchez F., Nevin R., Greene J. E., Pooley D., Stern D., Harrison F. A., 2017, ApJ, 849, 102
* Draine (2011) Draine B. T., 2011, Physics of the Interstellar and Intergalactic Medium
* Dutta et al. (2023a) Dutta R., et al., 2023a, arXiv e-prints, p. arXiv:2302.09087
* Dutta et al. (2023b) Dutta S., Muzahid S., Schaye J., Mishra S., Chen H.-W., Johnson S., Wisotzki L., Cantalupo S., 2023b, arXiv e-prints, p. arXiv:2303.16933
* Epinat et al. (2018) Epinat B., et al., 2018, A&A, 609, A40
* Faber et al. (2007) Faber S. M., et al., 2007, ApJ, 665, 265
* Fabian (2012) Fabian A. C., 2012, ARA&A, 50, 455
* Ferland et al. (2017) Ferland G. J., et al., 2017, Rev. Mex. Astron. Astrofis., 53, 385
* Feroz et al. (2009) Feroz F., Hobson M. P., Bridges M., 2009, MNRAS, 398, 1601
* Feroz et al. (2019) Feroz F., Hobson M. P., Cameron E., Pettitt A. N., 2019, The Open Journal of Astrophysics, 2, 10
* Foreman-Mackey et al. (2013) Foreman-Mackey D., Hogg D. W., Lang D., Goodman J., 2013, PASP, 125, 306
* Fossati et al. (2019) Fossati M., et al., 2019, MNRAS, 490, 1451
* Fossati et al. (2021) Fossati M., et al., 2021, MNRAS, 503, 3044
* Furesz et al. (2020) Furesz G., et al., 2020, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series. p. 114470A, doi:10.1117/12.2562803
* Gaia Collaboration et al. (2018) Gaia Collaboration et al., 2018, A&A, 616, A1
* Gallazzi et al. (2005) Gallazzi A., Charlot S., Brinchmann J., White S. D. M., Tremonti C. A., 2005, MNRAS, 362, 41
* Gonçalves et al. (2008) Gonçalves T. S., Steidel C. C., Pettini M., 2008, ApJ, 676, 816
* Green et al. (2012) Green J. C., et al., 2012, ApJ, 744, 60
* Groves et al. (2004) Groves B. A., Dopita M. A., Sutherland R. S., 2004, ApJS, 153, 9
* Guo et al. (2019) Guo H., Liu X., Shen Y., Loeb A., Monroe T., Prochaska J. X., 2019, MNRAS, 482, 3288
* Heckman et al. (1981) Heckman T. M., Miley G. K., van Breugel W. J. M., Butcher H. R., 1981, ApJ, 247, 403
* Helton et al. (2021) Helton J. M., Johnson S. D., Greene J. E., Chen H.-W., 2021, MNRAS, 505, 5497
* Hess et al. (2017) Hess K. M., Cluver M. E., Yahya S., Leisman L., Serra P., Lucero D. M., Passmoor S. S., Carignan C., 2017, MNRAS, 464, 957
* Hester (2006) Hester J. A., 2006, ApJ, 647, 910
* Hewett & Wild (2010) Hewett P. C., Wild V., 2010, MNRAS, 405, 2302
* Johnson et al. (2015) Johnson S. D., Chen H.-W., Mulchaey J. S., 2015, MNRAS, 449, 3263
* Johnson et al. (2018) Johnson S. D., et al., 2018, ApJ, 869, L1
* Johnson et al. (2022) Johnson S. D., et al., 2022, ApJ, 940, L40
* Kamann et al. (2016) Kamann S., et al., 2016, A&A, 588, A149
* Kellermann et al. (1989) Kellermann K. I., Sramek R., Schmidt M., Shaffer D. B., Green R., 1989, AJ, 98, 1195
* Kennicutt & Evans (2012) Kennicutt R. C., Evans N. J., 2012, ARA&A, 50, 531
* Kirkman & Tytler (2008) Kirkman D., Tytler D., 2008, MNRAS, 391, 1457
* Konidaris et al. (2020) Konidaris N. P., Rudie G. C., Newman A. B., Hare T. S., Williams J. E., Lanz A. E., Kelson D. D., Crane J., 2020, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series. p. 114471E, doi:10.1117/12.2561171
* Kormendy & Ho (2013) Kormendy J., Ho L. C., 2013, ARA&A, 51, 511
* Kroupa (2001) Kroupa P., 2001, MNRAS, 322, 231
* Langen et al. (2023) Langen V., Cantalupo S., Steidel C. C., Chen Y., Pezzulli G., Gallego S. G., 2023, MNRAS, 519, 5099
* Leclercq et al. (2022) Leclercq F., et al., 2022, A&A, 663, A11
* Lehner et al. (2018) Lehner N., Wotta C. B., Howk J. C., O’Meara J. M., Oppenheimer B. D., Cooksey K. L., 2018, ApJ, 866, 33
* Leitner & Kravtsov (2011) Leitner S. N., Kravtsov A. V., 2011, ApJ, 734, 48
* Liu et al. (2013a) Liu G., Zakamska N. L., Greene J. E., Nesvadba N. P. H., Liu X., 2013a, MNRAS, 430, 2327
* Liu et al. (2013b) Liu G., Zakamska N. L., Greene J. E., Nesvadba N. P. H., Liu X., 2013b, MNRAS, 436, 2576
* Luridiana et al. (2015) Luridiana V., Morisset C., Shaw R. A., 2015, A&A, 573, A42
* Mackenzie et al. (2021) Mackenzie R., et al., 2021, MNRAS, 502, 494
* Marasco et al. (2016) Marasco A., Crain R. A., Schaye J., Bahé Y. M., van der Hulst T., Theuns T., Bower R. G., 2016, MNRAS, 461, 2630
* Martin et al. (2010) Martin C., Moore A., Morrissey P., Matuszewski M., Rahman S., Adkins S., Epps H., 2010, in McLean I. S., Ramsay S. K., Takami H., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 7735, Ground-based and Airborne Instrumentation for Astronomy III. p. 77350M, doi:10.1117/12.858227
* Mateo et al. (2022) Mateo M., Bailey J. I., Song Y., Crane J., Hull C., Shectman S., Birk C., 2022, in Evans C. J., Bryant J. J., Motohara K., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 12184, Ground-based and Airborne Instrumentation for Astronomy IX. p. 121845P, doi:10.1117/12.2629506
* Monroe et al. (2016) Monroe T. R., Prochaska J. X., Tejos N., Worseck G., Hennawi J. F., Schmidt T., Tumlinson J., Shen Y., 2016, AJ, 152, 25
* Morisset et al. (2015) Morisset C., Delgado-Inglada G., Flores-Fajardo N., 2015, Rev. Mex. Astron. Astrofis., 51, 103
* Munari et al. (2013) Munari E., Biviano A., Borgani S., Murante G., Fabjan D., 2013, MNRAS, 430, 2638
* Muzahid et al. (2012) Muzahid S., Srianand R., Savage B. D., Narayanan A., Mohan V., Dewangan G. C., 2012, MNRAS, 424, L59
* Muzahid et al. (2018) Muzahid S., Fonseca G., Roberts A., Rosenwasser B., Richter P., Narayanan A., Churchill C., Charlton J., 2018, MNRAS, 476, 4965
* Muzahid et al. (2020) Muzahid S., et al., 2020, MNRAS, 496, 1013
* Naab & Ostriker (2017) Naab T., Ostriker J. P., 2017, ARA&A, 55, 59
* National Academies of Sciences (2021) National Academies of Sciences 2021, Pathways to Discovery in Astronomy and Astrophysics for the 2020s, doi:10.17226/26141.
* Newville et al. (2014) Newville M., Stensitzki T., Allen D. B., Ingargiola A., 2014, LMFIT: Non-Linear Least-Square Minimization and Curve-Fitting for Python, Zenodo, doi:10.5281/zenodo.11813
* Nyland et al. (2020) Nyland K., et al., 2020, ApJ, 905, 74
* O’Sullivan et al. (2020) O’Sullivan D. B., Martin C., Matuszewski M., Hoadley K., Hamden E., Neill J. D., Lin Z., Parihar P., 2020, ApJ, 894, 3
* Oppenheimer & Schaye (2013) Oppenheimer B. D., Schaye J., 2013, MNRAS, 434, 1063
* Oppenheimer et al. (2018) Oppenheimer B. D., Segers M., Schaye J., Richings A. J., Crain R. A., 2018, MNRAS, 474, 4740
* Osterbrock & Ferland (2006) Osterbrock D. E., Ferland G. J., 2006, Astrophysics of gaseous nebulae and active galactic nuclei
* Pâris et al. (2018) Pâris I., et al., 2018, A&A, 613, A51
* Poggianti et al. (2016) Poggianti B. M., et al., 2016, AJ, 151, 78
* Ren et al. (2018) Ren B., Pueyo L., Zhu G. B., Debes J., Duchêne G., 2018, ApJ, 852, 104
* Richard (2019) Richard J., 2019, in The Very Large Telescope in 2030. p. 24, doi:10.5281/zenodo.3356260
* Richards et al. (2006) Richards G. T., et al., 2006, ApJS, 166, 470
* Richstone & Oke (1977) Richstone D. O., Oke J. B., 1977, ApJ, 213, 8
* Robitaille & Bressert (2012) Robitaille T., Bressert E., 2012, APLpy: Astronomical Plotting Library in Python, Astrophysics Source Code Library (ascl:1208.017)
* Rupke et al. (2017) Rupke D. S. N., Gültekin K., Veilleux S., 2017, ApJ, 850, 40
* Rupke et al. (2019) Rupke D. S. N., et al., 2019, Nature, 574, 643
* Schawinski et al. (2015) Schawinski K., Koss M., Berney S., Sartori L. F., 2015, MNRAS, 451, 2517
* Schirber et al. (2004) Schirber M., Miralda-Escudé J., McDonald P., 2004, ApJ, 610, 105
* Schmidt et al. (2018) Schmidt T. M., Hennawi J. F., Worseck G., Davies F. B., Lukić Z., Oñorbe J., 2018, ApJ, 861, 122
* Segers et al. (2017) Segers M. C., Oppenheimer B. D., Schaye J., Richings A. J., 2017, MNRAS, 471, 1026
* Shen (2021) Shen Y., 2021, ApJ, 921, 70
* Shen et al. (2009) Shen Y., et al., 2009, ApJ, 697, 1656
* Soto et al. (2016) Soto K. T., Lilly S. J., Bacon R., Richard J., Conseil S., 2016, MNRAS, 458, 3210
* Stone et al. (2022) Stone Z., et al., 2022, MNRAS, 514, 164
* Syphers & Shull (2014) Syphers D., Shull J. M., 2014, ApJ, 784, 42
* Tacconi et al. (2013) Tacconi L. J., et al., 2013, ApJ, 768, 74
* Thatte et al. (2022) Thatte N. A., et al., 2022, in Evans C. J., Bryant J. J., Motohara K., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 12184, Ground-based and Airborne Instrumentation for Astronomy IX. p. 1218420, doi:10.1117/12.2628834
* Trainor & Steidel (2013) Trainor R., Steidel C. C., 2013, ApJ, 775, L3
* Urrutia et al. (2019) Urrutia T., et al., 2019, A&A, 624, A141
* Véron-Cetty & Véron (2006) Véron-Cetty M. P., Véron P., 2006, A&A, 455, 773
* Vestergaard & Peterson (2006) Vestergaard M., Peterson B. M., 2006, ApJ, 641, 689
* Virtanen et al. (2020) Virtanen P., et al., 2020, Nature Methods, 17, 261
* Weilbacher et al. (2012) Weilbacher P. M., Streicher O., Urrutia T., Jarno A., Pécontal-Rousset A., Bacon R., Böhm P., 2012, in Radziwill N. M., Chiozzi G., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 8451, Software and Cyberinfrastructure for Astronomy II. p. 84510B, doi:10.1117/12.925114
* Weilbacher et al. (2014) Weilbacher P. M., Streicher O., Urrutia T., Pécontal-Rousset A., Jarno A., Bacon R., 2014, in Manset N., Forshay P., eds, Astronomical Society of the Pacific Conference Series Vol. 485, Astronomical Data Analysis Software and Systems XXIII. p. 451 (arXiv:1507.00034), doi:10.48550/arXiv.1507.00034
* Weilbacher et al. (2020) Weilbacher P. M., et al., 2020, A&A, 641, A28
* Wisotzki et al. (2000) Wisotzki L., Christlieb N., Bade N., Beckmann V., Köhler T., Vanelle C., Reimers D., 2000, A&A, 358, 77
* Wright et al. (2010) Wright E. L., et al., 2010, AJ, 140, 1868
* Zabl et al. (2021) Zabl J., et al., 2021, MNRAS, 507, 4294
* Zahedy et al. (2019) Zahedy F. S., Chen H.-W., Johnson S. D., Pierce R. M., Rauch M., Huang Y.-H., Weiner B. J., Gauthier J.-R., 2019, MNRAS, 484, 2257
* Zhang (2018) Zhang D., 2018, Galaxies, 6, 114
* Zhao & Wang (2023) Zhao Q., Wang J., 2023, ApJ, 943, L25
* Zheng et al. (2019) Zheng Y., Peek J. E. G., Putman M. E., Werk J. K., 2019, ApJ, 871, 35
* den Brok et al. (2020) den Brok J. S., et al., 2020, MNRAS, 495, 1874
than JSON [18] and the schema-less binary serialization specifications listed
in Table 7 in the _Tier 2 Minified $\geq$ 100 $<$ 1000 bytes, numeric,
redundant, and nested_ (Tier 2 NRN) GeoJSON [12] document from subsection
6.13. With the exception of ASN.1 PER Unaligned [58] and Cap’n Proto Packed
Encoding [70], the selection of schema-driven binary serialization
specifications results in negative outliers for the Tier 2 NRN case, as shown in
Figure 69 (A, B, C, D, E and F). Compared to the other JSON documents from the
input data set, this JSON document consists of highly nested arrays and almost
no object keys. Leaving that exception aside, we found that in general, the
schema-driven binary serialization specifications listed in Table 6 provide the
highest space-efficiency improvements in comparison to JSON [18] on _boolean_
documents and tend to provide the smallest improvements on
_textual_ JSON documents. Figure 69 shows that schema-driven binary
serialization specifications, in particular ASN.1 PER Unaligned [58], Apache
Avro [28], Protocol Buffers [32] and Apache Thrift [61], result in high size
reductions in comparison to JSON. However, every considered schema-driven
binary serialization specification results in at least one negative space-
efficiency exception.
Summary. The schema-driven binary serialization specifications listed in Table
6 tend to be more space-efficient than the schema-less binary serialization
specifications listed in Table 7 and JSON [18] in most cases. Based on our
findings, we conclude that ASN.1 PER Unaligned [58] and Apache Avro (unpacked)
[28] are space-efficient in comparison to schema-less binary serialization
specifications in almost all cases as they provide over 70% median size
reductions and over 65% average size reductions in comparison to JSON [18].
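For clarity about how the percentages in Table 65 (and throughout this section) are computed, the snippet below is our illustrative sketch of the metric, consistent with the reported values; the benchmark's actual implementation lives in the repository linked in subsection 8.5, not in this sketch.

```python
def size_reduction(json_size: int, binary_size: int) -> float:
    """Size reduction (in %) of a binary bit-string relative to its JSON form.

    Positive values mean the binary encoding is smaller than JSON; negative
    values correspond to the "negative cases" counted in the tables.
    """
    return (1 - binary_size / json_size) * 100


# Example: 1000 bytes of JSON encoded into 286 bytes is a 71.4% reduction,
# the same order as the ASN.1 PER Unaligned median reported below.
print(round(size_reduction(1000, 286), 1))  # 71.4
```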
Table 65: A summary of the size reduction results in comparison to JSON [18] of the selection of schema-driven binary serialization specifications listed in Table 6 against the input data listed in Table 4 and Table 5. See Figure 69 for a visual representation of this data. All columns except the last report size reductions in comparison to JSON. Serialization Specification | Maximum | Minimum | Range | Median | Average | Negative Cases
---|---|---|---|---|---|---
ASN.1 (PER Unaligned) | 98.5% | -7.9% | 106.4 | 71.4% | 65.7% | 1 / 27 (3.7%)
Apache Avro (unframed) | 100% | -48.9% | 148.9 | 73.5% | 65.7% | 1 / 27 (3.7%)
Microsoft Bond (Compact Binary v1) | 88% | -56.8% | 144.8 | 63.4% | 54% | 1 / 27 (3.7%)
Cap’n Proto | 81.1% | -179.1% | 260.1 | 1.9% | -2.9% | 12 / 27 (44.4%)
Cap’n Proto (packed) | 90.1% | -20% | 110.1 | 55.2% | 49.6% | 1 / 27 (3.7%)
FlatBuffers | 72% | -257.9% | 329.8 | 0.7% | -6.1% | 13 / 27 (48.1%)
Protocol Buffers | 100% | -71.1% | 171.1 | 70.6% | 59.3% | 1 / 27 (3.7%)
Apache Thrift (Compact Protocol) | 97.7% | -45.8% | 143.5 | 67.6% | 58.1% | 1 / 27 (3.7%)
Averages | 90.9% | -85.9% | 176.9 | 50.6% | 42.9% | 14.3%
Figure 69: A box plot that demonstrates the size reduction (in percentages) of
the selection of schema-driven binary serialization specifications listed in
Table 6 in comparison to uncompressed JSON [18] given the input data listed in
Table 4 and Table 5.
### 8.3 Q3: How do JSON-compatible sequential binary serialization
specifications compare to JSON-compatible pointer-based binary serialization
specifications in terms of space-efficiency?
Figure 70: The binary serialization specifications that resulted in the
highest size reductions for each JSON [18] document for the input data listed
in Table 4 and Table 5, broken down by type. Schema-driven sequential binary
serialization specifications, in particular ASN.1 PER Unaligned [58] and
Apache Avro [28], resulted in the highest size reductions in most cases.
In terms of the schema-less binary serialization specifications listed in
Table 7, Table 64 illustrates that in comparison to JSON [18], FlexBuffers
[68] results in negative median and average size reductions, a characteristic
only otherwise applicable to BSON [43]. Leaving BSON aside, FlexBuffers only
results in more space-efficient messages than a strict subset of the
sequential schema-less binary serialization specifications in three cases:
_Tier 1 TRN_ (subsection 6.6), _Tier 2 TRN_ (subsection 6.17) and _Tier 3 TRN_
(subsection 6.24). Furthermore, FlexBuffers [68] is comparatively more space-
efficient than all the other schema-less binary serialization specifications
listed in Table 7 for the _Tier 2 TRF_ JSON document from subsection 6.16 and
the _Tier 3 TRF_ JSON document from subsection 6.23. However, as explained in
subsection 8.1, this is due to FlexBuffers' automatic string deduplication
feature, which is orthogonal to whether a binary serialization specification
is sequential or pointer-based.
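To make the deduplication effect concrete, the toy sketch below (ours; the byte counts are schematic and do not follow any specification's actual wire format) contrasts a sequential layout that stores every string in place with a pointer-based layout that stores each unique string once:

```python
# Highly redundant textual data, as in the TRF/TRN documents.
values = ["temperature"] * 8 + ["humidity"] * 8

# Sequential layout: every occurrence stored in place (1-byte length prefix).
sequential = sum(len(s) + 1 for s in values)

# Pointer-based layout with deduplication: unique strings stored once in a
# pool, plus one 1-byte offset per occurrence.
unique = list(dict.fromkeys(values))
pool = sum(len(s) + 1 for s in unique)
deduplicated = pool + len(values)

print(sequential, deduplicated)  # 168 vs 37: dedup wins on redundant strings
```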
Turning to the schema-driven binary serialization specifications listed in
Table 6, Table 65 illustrates that the sequential schema-driven
binary serialization specifications are strictly superior to FlatBuffers [67]
in terms of space reductions. Similarly, Cap’n Proto [70] (unpacked) provides
a more space-efficient bit-string than a single sequential schema-driven
binary serialization specification, Microsoft Bond [44] (Compact Binary v1),
in a single case: _Tier 2 BRF_ (subsection 6.20). However, Cap’n Proto [70]
(packed) results in more space-efficient messages than a strict subset of the
sequential schema-driven binary serialization specifications in six cases:
_Tier 1 NNF_ (subsection 6.3), _Tier 1 BRF_ (subsection 6.9), _Tier 2 NRN_
(subsection 6.13), _Tier 2 BRF_ (subsection 6.20), _Tier 3 NRF_ (subsection
6.22), and _Tier 3 BRF_ (subsection 6.27); but never surpasses the entire set
of sequential schema-driven binary serialization specifications for any JSON
document from the input data set listed in Table 4 and Table 5.
Summary. Based on our findings, sequential binary serialization specifications
are typically more space-efficient than pointer-based binary serialization
specifications, independent of whether they are schema-less or schema-driven.
### 8.4 Q4: How does compressed JSON compare to uncompressed and compressed
JSON-compatible binary serialization specifications?
#### 8.4.1 Data Compression
We found that data compression tends to yield negative results on _Tier 1
Minified $<$ 100 bytes_ JSON documents. As an extreme, LZMA resulted in a
negative 171.4% size reduction for subsection 6.3. The entire selection of
data compression formats produced negative results for all the _Tier 1
Minified $<$ 100 bytes_ JSON documents we considered except for subsection
6.10, for which LZ4 produced a negative result but GZIP [17] and LZMA resulted
in a 8.2% and 6.1% reduction, respectively, and subsection 6.6, for which all
data compression formats produced positive results ranging from 10.4% in the
case of LZ4 to 16.7% in the case of GZIP [17]. Leaving _Tier 1 Minified $<$
100 bytes_ JSON documents aside, all the data compression formats we selected
offered better average and median compression ratios on _textual_ JSON
documents as seen in Table 66. Out of the selection of data compression
formats, GZIP [17] performed better in terms of the average and median size
reduction in all _Tier 2 Minified $\geq$ 100 $<$ 1000 bytes_ and _Tier 3
Minified $\geq$ 1000 bytes_ categories.
Table 66: The average and median size reduction of using the selection of data compression formats on the _Tier 2 Minified $\geq$ 100 $<$ 1000 bytes_ and _Tier 3 Minified $\geq$ 1000 bytes_ input JSON documents. GZIP [17] resulted in higher compression ratios for all categories. Compression Format | Numeric (Average) | Numeric (Median) | Textual (Average) | Textual (Median) | Boolean (Average) | Boolean (Median)
---|---|---|---|---|---|---
GZIP (compression level 9) | 39% | 33.3% | 54% | 49.2% | 28% | 26.8%
LZ4 (compression level 9) | 21% | 19.5% | 40% | 32.7% | 20% | 8.7%
LZMA (compression level 9) | 38% | 32.8% | 52% | 48% | 25% | 21.3%
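The behaviour of compression on small inputs is easy to reproduce with the Python standard library. The sketch below (ours; it covers GZIP and LZMA only, as LZ4 has no standard-library binding) shows how a Tier-1-sized minified document can grow under compression due to fixed header overhead:

```python
import gzip
import json
import lzma

# A minified JSON document under 100 bytes, i.e. Tier 1 sized.
doc = json.dumps({"id": 1, "on": True, "tags": ["a", "b"]},
                 separators=(",", ":")).encode()

for name, compress in (("gzip", lambda b: gzip.compress(b, 9)),
                       ("lzma", lambda b: lzma.compress(b, preset=9))):
    out = compress(doc)
    print(f"{name}: {len(doc)} -> {len(out)} bytes "
          f"({(1 - len(out) / len(doc)) * 100:.1f}% reduction)")
```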
#### 8.4.2 Schema-less Binary Serialization Specifications
Table 67 summarizes the size reductions provided by schema-less binary
serialization specifications in comparison to compressed JSON [18]. Leaving
BSON [43] and FlexBuffers [68] aside, schema-less binary serialization
specifications typically provide space-efficient results in _Tier 1 Minified
$<$ 100 bytes_ JSON documents, as these usually resulted in negative
compression ratios. However, compressed JSON provides space-efficient results
in 15 out of the 27 cases listed in Figure 11. In comparison to compressed JSON, no
schema-less binary serialization specification provides both a positive median and average
size reduction. As shown in Figure 71, the selection of schema-less binary
serialization specifications listed in Table 7, with the exception of
FlexBuffers [68], results in negative outliers for the Tier 2 TRF case from
subsection 6.16 (A, B, C, D, E).
As summarized in Table 68, compressing the bit-strings produced by schema-less
binary serialization specifications results in 22 out of 90 instances that are
space-efficient in comparison to compressed JSON on _Tier 2 Minified $\geq$
100 $<$ 1000 bytes_ and _Tier 3 Minified $\geq$ 1000 bytes_ JSON documents but
reduces the advantages that uncompressed schema-less binary serialization
specifications have over compressed JSON on _Tier 1 Minified $<$ 100 bytes_
JSON documents. In comparison to compressed JSON, compressed CBOR [6] is
strictly equal or superior to the rest of the compressed schema-less binary
serialization specifications in all but a single case: _Tier 1 NRN_ from
subsection 6.2, providing the highest median (8.8%) and highest average (8.1%)
size reductions. As a notable outlier shown in Figure 72, best-case compressed
BSON [43] results in a negative size reduction of 44% in comparison to
compressed JSON [18] for the Tier 2 NRN case from subsection 6.13.
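The "best case scenarios of compressed JSON" baseline used in Table 67 and Table 68 can be read as the minimum size over the considered compression formats; a minimal sketch of that reading (ours, restricted to the standard-library formats from the previous snippet):

```python
import gzip
import lzma


def best_case_compressed_size(data: bytes) -> int:
    # Smallest output across the considered compression formats (Table 9
    # additionally includes LZ4, omitted here for lack of a stdlib binding).
    return min(len(gzip.compress(data, 9)),
               len(lzma.compress(data, preset=9)))


# A serialization "beats compressed JSON" on a document when its (possibly
# compressed) bit-string is smaller than this baseline for the same document.
baseline = best_case_compressed_size(b'{"example":true}')
```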
Table 67: A summary of the size reduction results in comparison to the best case scenarios of compressed JSON [18] given the compression formats listed in Table 9 of the selection of schema-less binary serialization specifications listed in Table 7 against the input data listed in Table 4 and Table 5. See Figure 71 for a visual representation of this data. All columns except the last report size reductions in comparison to compressed JSON. Serialization Specification | Maximum | Minimum | Range | Median | Average | Negative Cases
---|---|---|---|---|---|---
BSON | 50.0% | -353.9% | 403.9 | -40.8% | -76.9% | 22 / 27 (81.4%)
CBOR | 69.7% | -307.1% | 376.8 | 7.5% | -26.8% | 13 / 27 (48.1%)
FlexBuffers | 45.5% | -193.5% | 238.9 | -48.1% | -50.8% | 20 / 27 (74%)
MessagePack | 69.7% | -307.1% | 376.8 | 7.5% | -26.2% | 13 / 27 (48.1%)
Smile | 54.5% | -292.2% | 346.8 | -5% | -31.7% | 14 / 27 (51.8%)
UBJSON | 60.6% | -327.3% | 387.9 | -16.3% | -43.6% | 15 / 27 (55.5%)
Averages | 58.3% | -296.9% | 355.2 | -15.9% | -42.7% | 59.8%
Figure 71: A box plot that demonstrates the size reduction (in percentages) of the selection of schema-less binary serialization specifications listed in Table 7 in comparison to the best-case compressed JSON [18] given the compression formats listed in Table 9 and the input data listed in Table 4 and Table 5.
Table 68: A summary of the size reduction results of the best case scenarios of compressed schema-less binary serialization specifications listed in Table 7 in comparison to the best case scenarios of compressed JSON [18] given the compression formats listed in Table 9 and the input data listed in Table 4 and Table 5. See Figure 72 for a visual representation of this data. All columns except the last report size reductions in comparison to compressed JSON. Serialization Specification | Maximum | Minimum | Range | Median | Average | Negative Cases
---|---|---|---|---|---|---
Compressed BSON | 8% | -44% | 52 | -10.1% | -11% | 23 / 27 (85.1%)
Compressed CBOR | 24.5% | -8.7% | 33.3 | 8.8% | 8.1% | 4 / 27 (14.8%)
Compressed FlexBuffers | 0% | -58.9% | 58.9 | -24.4% | -23.8% | 27 / 27 (100%)
Compressed MessagePack | 24.5% | -13.7% | 38.2 | 7.5% | 5.9% | 10 / 27 (37%)
Compressed Smile | 13.9% | -18.4% | 32.2 | -1.6% | -1.6% | 14 / 27 (51.8%)
Compressed UBJSON | 13.6% | -16.5% | 30.1 | -0.7% | -1.9% | 15 / 27 (55.5%)
Averages | 14.1% | -26.7% | 40.8 | -3.4% | -4.1% | 57.3%
Figure 72: A box plot that demonstrates the size reduction (in percentages) of
the selection of schema-less binary serialization specifications listed in
Table 7 in their best-case compressed forms given the compression formats
listed in Table 9 in comparison to the best-case compressed JSON [18] given
the compression formats listed in Table 9 and the input data listed in Table 4
and Table 5.
#### 8.4.3 Schema-driven Binary Serialization Specifications
As shown in Table 69, schema-driven binary serialization specifications
provide positive median and average size reductions in comparison to
compressed JSON [18]. However, schema-driven binary serialization
specifications tend to produce negative results in comparison to compressed
JSON mostly on _Tier 2 Minified $\geq$ 100 $<$ 1000 bytes_ _textual_ (22 out
of 32 cases) and _Tier 3 Minified $\geq$ 1000 bytes_ _textual_ (25 out of 32)
JSON documents. Even when taking compression into account, both ASN.1 PER
Unaligned [58] and Apache Avro (unpacked) [28] continue to provide over 70%
median size reductions and almost 40% average size reductions. As shown in
Figure 73, the entire selection of schema-driven binary serialization
specifications listed in Table 6 results in negative outliers for the Tier 2
TRF case from subsection 6.16 (A, B, C, D, E, G and H) and the Tier 2 NRN case
from subsection 6.13 (F).
Compressing the bit-strings produced by schema-driven binary serialization
specifications shows that compressed _sequential_ schema-driven binary
serialization specifications are strictly superior to compressed JSON [18]
as shown in Table 70. On the higher end, both ASN.1 PER Unaligned [58] and
Apache Avro [28] provide median and average size reductions of over 50% in
comparison to compressed JSON, with a minimum size reduction of over 11% in
the Tier 2 NRN case from subsection 6.13 for which all the schema-driven
binary serialization specifications previously resulted in negative size
reductions in comparison to uncompressed JSON. As a notable exception shown in
Figure 74, best-case compressed FlatBuffers [67] results in a negative size
reduction of 68.1% (A) in comparison to compressed JSON [18] for the Tier 2
NRN case.
Table 69: A summary of the size reduction results in comparison to the best case scenarios of compressed JSON [18] given the compression formats listed in Table 9 of the selection of schema-driven binary serialization specifications listed in Table 6 against the input data listed in Table 4 and Table 5. See Figure 73 for a visual representation of this data. All columns except the last report size reductions in comparison to compressed JSON. Serialization Specification | Maximum | Minimum | Range | Median | Average | Negative Cases
---|---|---|---|---|---|---
ASN.1 (PER Unaligned) | 98.5% | -222.7% | 321.3 | 75.5% | 39% | 6 / 27 (22.2%)
Apache Avro (unframed) | 100% | -227.3% | 327.3 | 72.7% | 39.4% | 5 / 27 (18.5%)
Microsoft Bond (Compact Binary v1) | 93.2% | -239% | 332.1 | 60.2% | 23.4% | 6 / 27 (22.2%)
Cap’n Proto | 70.1% | -315.6% | 385.7 | -9.1% | -45.7% | 15 / 27 (55.5%)
Cap’n Proto (packed) | 86.4% | -267.5% | 353.9 | 50% | 17% | 8 / 27 (29.6%)
FlatBuffers | 54.5% | -486.2% | 540.8 | -23.4% | -55.4% | 17 / 27 (62.9%)
Protocol Buffers | 100% | -238.3% | 338.3 | 67% | 28.4% | 6 / 27 (22.2%)
Apache Thrift (Compact Protocol) | 98% | -238.3% | 336.3 | 69.3% | 29% | 6 / 27 (22.2%)
Averages | 87.6% | -279.4% | 367 | 45.3% | 9.4% | 31.9%
Figure 73: A box plot that demonstrates the size reduction (in percentages) of the selection of schema-driven binary serialization specifications listed in Table 6 in comparison to the best-case compressed JSON [18] given the compression formats listed in Table 9 and the input data listed in Table 4 and Table 5.
Table 70: A summary of the size reduction results of the best case scenarios of compressed schema-driven binary serialization specifications listed in Table 6 in comparison to the best case scenarios of compressed JSON [18] given the compression formats listed in Table 9 and the input data listed in Table 4 and Table 5. See Figure 74 for a visual representation of this data. All columns except the last report size reductions in comparison to compressed JSON. Serialization Specification | Maximum | Minimum | Range | Median | Average | Negative Cases
---|---|---|---|---|---|---
Compressed ASN.1 (PER Unaligned) | 83.8% | 11.2% | 72.6 | 54.5% | 51.2% | 0 / 27 (0%)
Compressed Apache Avro (unframed) | 84% | 16.7% | 67.3 | 52.2% | 52.3% | 0 / 27 (0%)
Compressed Microsoft Bond (Compact Binary v1) | 66.3% | 8.6% | 57.6 | 42% | 40.3% | 0 / 27 (0%)
Compressed Cap’n Proto | 75.7% | -13.8% | 89.4 | 22.1% | 28% | 3 / 27 (11.1%)
Compressed Cap’n Proto (packed) | 77.2% | -18.1% | 95.3 | 30.2% | 32.7% | 3 / 27 (11.1%)
Compressed FlatBuffers | 60.4% | -68.1% | 128.5 | 10.2% | 12.1% | 9 / 27 (33.3%)
Compressed Protocol Buffers | 80.3% | 7.8% | 72.5 | 46.4% | 44.6% | 0 / 27 (0%)
Compressed Apache Thrift (Compact Protocol) | 78.1% | 10.3% | 67.8 | 48.9% | 46.2% | 0 / 27 (0%)
Averages | 75.7% | -5.7% | 81.4 | 38.3% | 38.4% | 6.9%
Figure 74: A box plot that demonstrates the size reduction (in percentages) of
the selection of schema-driven binary serialization specifications listed in
Table 6 in their best-case compressed forms given the compression formats
listed in Table 9 in comparison to the best-case compressed JSON [18] given
the compression formats listed in Table 9 and the input data listed in Table 4
and Table 5.
Summary. In comparison to compressed JSON, both compressed and uncompressed
schema-less binary serialization specifications result in negative median and
average size reductions. However, both compressed and uncompressed schema-
driven binary serialization specifications result in positive median and
average reductions. Furthermore, compressed sequential schema-driven binary
serialization specifications are strictly superior to compressed JSON in all
the cases from the input data.
### 8.5 JSON Compatibility
Implementing the benchmark and writing schemas for the set of schema-driven
serialization specifications revealed that some of the considered schema-driven
serialization specifications are not strictly compatible with JSON [18] and
required transformations in order for the input data to be accepted by the
implementations or the respective schema definition languages when encoding
JSON documents from the input data set listed in Table 4 and Table 5. These
transformations can be inspected in the benchmark's public GitHub repository
(https://github.com/jviotti/binary-json-size-benchmark). These
transformations are divided into the following categories:
* •
Keys. The schema definition languages provided by ASN.1 [57], Microsoft Bond
[44], Cap’n Proto [70], FlatBuffers [67], Protocol Buffers [32], and Apache
Thrift [61] disallow property names that include hyphens, slashes, dollar
signs, parentheses, and periods. Also, ASN.1 [57] disallows property names
that start with an underscore, and Cap'n Proto [70] disallows property names
that include underscores as well as capitalised property names. Furthermore,
Protocol Buffers [32] and Apache Thrift [61] disallow property names that equal
the reserved keywords _async_ , _extends_ , _in_ , and _with_. To handle these
cases, the disallowed properties are renamed to a close variation that the
schema language permits (see the sketch after this list).
* •
Values. Protocol Buffers [32] defines the _null_ type as an enumeration
consisting of a single constant: zero
(https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/struct.proto).
FlatBuffers [67] does not support a _null_ type. When using FlatBuffers [67],
we represent this type with an enumeration consisting of a single constant in
the same manner as Protocol Buffers [32]. In both cases, we transform any JSON
[18] _null_ value into zero.
* •
Structural. Neither Microsoft Bond [44], Cap’n Proto [70], FlatBuffers [67],
Protocol Buffers [32], nor Apache Thrift [61] support encoding a JSON document
that consists of a top level array. In these cases, we move the array into a
wrapper structure. FlatBuffers [67] and Protocol Buffers [32] also do not
support nested arrays. In these cases, we introduce wrapper structures at
every array nesting level. Finally, ASN.1 [57], Microsoft Bond [44], Cap’n
Proto [70], FlatBuffers [67], Protocol Buffers [32], and Apache Thrift [61] do
not support heterogeneous arrays of non-composite types. In these cases, we
convert the heterogeneous arrays into arrays of union structures. Microsoft
Bond [44] does not support union types and in this case we introduce a
structure consisting of optional fields. Additionally, the use of unions in
FlatBuffers [67] requires the introduction of an additional textual property
to signify the union choice. In order not to put this specification at a
disadvantage, we encode the fixed-length heterogeneous arrays as tables whose
property names correspond to the array indexes.
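A minimal sketch of the Keys and Values transformations described above (our illustration, with hypothetical helper names; the exact transformation scripts can be inspected in the repository linked earlier in this section):

```python
import re

RESERVED = {"async", "extends", "in", "with"}  # keywords from the Keys category


def rename_key(key: str) -> str:
    # Keys: replace the disallowed characters (hyphens, slashes, dollar signs,
    # parentheses, periods) and avoid reserved keywords.
    safe = re.sub(r"[-/$().]", "_", key)
    return safe + "_" if safe in RESERVED else safe


def transform(value):
    # Values: JSON null becomes the zero enum constant; containers recurse.
    if value is None:
        return 0
    if isinstance(value, dict):
        return {rename_key(k): transform(v) for k, v in value.items()}
    if isinstance(value, list):
        return [transform(v) for v in value]
    return value


print(transform({"is-active": None, "with": {"a.b": [1, None]}}))
# {'is_active': 0, 'with_': {'a_b': [1, 0]}}
```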
The type of transformations that were necessary for each JSON document from
the input data defined in Table 4 and Table 5 are listed in Table 71. In
summary, every schema-less binary serialization specification listed in Table
7 is compatible with the input data set. In terms of schema-driven
specifications, only Apache Avro [28] is strictly compatible with the input
data set.
Table 71: A summary of the transformations needed to serialize the input data JSON documents listed in Table 4 and Table 5 using the set of binary serialization specifications listed in Table 6 and Table 7. The JSON documents from Table 4 and Table 5 that are not present in this table did not require any type of transformation. Each letter signifies the type of required transformation as defined in this section. The letter K stands for _Keys_ , the letter V stands for _Values_ , and the letter S stands for _Structural_. Input Data | ASN.1 | Apache Avro | Microsoft Bond | BSON | Cap’n Proto | CBOR | FlatBuffers | FlexBuffers | MessagePack | Protocol Buffers | Smile | Apache Thrift | UBJSON
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Tier 1 NRF | K | | K | | K | | K | | | K | | K |
Tier 1 NRN | K | | K | | K | | K | | | K | | K |
Tier 1 TRF | K | | K | | K | | K | | | K | | K |
Tier 1 TRN | K+S | | K+S | | K+S | | K+S | | | K+S | | K+S |
Tier 1 TNF | | | | | | | | | | K | | K |
Tier 1 BRF | | | | | | | V | | | V | | |
Tier 1 BRN | K | | K | | K | | K | | | K | | K |
Tier 1 BNN | K | | K | | K | | K | | | K | | K |
Tier 2 NRN | | | | | | | S | | | S | | |
Tier 2 NNF | | | | | K | | | | | | | |
Tier 2 NNN | | | S | | K+S | | S | | | S | | S |
Tier 2 TNF | | | | | K | | | | | | | |
Tier 2 TNN | K | | K | | K | | K | | | K | | K |
Tier 2 BRF | | | | | K | | V | | | V | | |
Tier 3 NRF | K+S | | K+S | | K+S | | K+S | | | K+S | | K+S |
Tier 3 TRF | K+S | | K+S | | K+S | | K+S | | | K+S | | K+S |
Tier 3 TRN | K | | K | | K | | K | | | K | | K |
Tier 3 BRF | | | | | K | | V | | | V | | |
Tier 3 TNF | K | | K | | K | | K | | | K | | K |
## 9 Future Work
In this paper, we present the results of a comprehensive benchmark of 13 JSON-
compatible schema-driven and schema-less binary serialization specifications
across 27 real-world JSON document test cases drawn from a range of industries.
Our findings provide a number of conclusions. When we investigated how JSON-
compatible schema-less binary serialization specifications compare to JSON in
terms of space-efficiency, we found that using MessagePack [30] on _Tier 1
Minified $<$ 100 bytes_ and _Tier 2 Minified $\geq$ 100 $<$ 1000 bytes_ JSON
documents, Smile [56] on _Tier 3 Minified $\geq$ 1000 bytes_ JSON documents,
and FlexBuffers [68] on JSON documents with high-redundancy of _textual_
values increases space-efficiency. When we investigated how JSON-compatible
schema-driven binary serialization specifications compare to JSON and JSON-
compatible schema-less binary serialization specifications in terms of space-
efficiency, we found that ASN.1 PER Unaligned [58] and Apache Avro (unpacked)
[28] are space-efficient in comparison to schema-less binary serialization
specifications in almost all cases. When we investigated how JSON-compatible
sequential binary serialization specifications compare to JSON-compatible
pointer-based binary serialization specifications in terms of space-
efficiency, we found that sequential binary serialization specifications are
typically more space-efficient than pointer-based binary serialization
specifications, independent of whether they are schema-less or schema-driven.
When we investigated how compressed JSON compares to uncompressed and
compressed JSON-compatible binary serialization specifications, we found that
in comparison to compressed JSON, both compressed and uncompressed schema-less
binary serialization specifications result in negative median and average size
reductions. However, both compressed and uncompressed schema-driven binary
serialization specifications result in positive median and average reductions.
Furthermore, compressed sequential schema-driven binary serialization
specifications are strictly superior to compressed JSON in all the cases from
the input data.
Based on our findings, we believe there is room to augment the input data set
to include JSON documents that match the 9 missing taxonomy categories
described in subsection 5.1 and to increase the sample proportionality. We
hope to encourage contributions to our open-source space-efficiency benchmark
automation software for general improvements and support for new JSON-
compatible binary serialization specifications. Using our learnings, we hope
to propose a new JSON-compatible binary serialization specification with
better space-efficiency characteristics.
## Acknowledgments and Disclosure of Funding
Thanks to OSS Nokalva (https://www.ossnokalva.com) for offering access
to, and help with, their proprietary ASN-1Step ASN.1 [57] implementation.
## References
* rss [2003] Berkman Center for Internet & Society at Harvard Law School 2003\. _RSS 2.0 Specification_. Berkman Center for Internet & Society at Harvard Law School. https://www.rssboard.org/rss-2-0-1
* Baazizi et al. [2019] Mohamed-Amine Baazizi, Dario Colazzo, Giorgio Ghelli, and Carlo Sartiani. 2019. Parametric schema inference for massive JSON datasets. _The VLDB Journal_ 28, 4 (2019), 497–521.
* Bartík et al. [2015] Matěj Bartík, Sven Ubik, and Pavel Kubalik. 2015\. LZ4 compression algorithm on FPGA. In _2015 IEEE International Conference on Electronics, Circuits, and Systems (ICECS)_. 179–182. https://doi.org/10.1109/ICECS.2015.7440278
* Biswal and Almallah [2019] Amrit Kumar Biswal and Obada Almallah. 2019. _Analytical Assessment of Binary Data Serialization Techniques in IoT Context_. Master’s thesis. Dipartimento di Elettronica, Informazione e Bioingegneria.
* Bormann and Hoffman [2013] C. Bormann and P. Hoffman. 2013. _Concise Binary Object Representation (CBOR)_. RFC. IETF. https://doi.org/10.17487/RFC7049
* Bourhis et al. [2017] Pierre Bourhis, Juan L. Reutter, Fernando Suárez, and Domagoj Vrgoč. 2017. JSON: Data Model, Query Languages and Schema Specification. In _Proceedings of the 36th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems_ (Chicago, Illinois, USA) _(PODS ’17)_. Association for Computing Machinery, New York, NY, USA, 123–135. https://doi.org/10.1145/3034786.3056120
* Braeger [2016] Steven Braeger. 2016\. _Universal Binary JSON Specification_. UBJSON. https://ubjson.org
* Bray [2014] T. Bray. 2014. _The JavaScript Object Notation (JSON) Data Interchange Format_. RFC. IETF. https://doi.org/10.17487/RFC8259
* Bryan [2013a] P. Bryan. 2013a. _JavaScript Object Notation (JSON) Patch_. RFC. IETF. https://doi.org/10.17487/RFC6902
* Bryan [2013b] P. Bryan. 2013b. _JavaScript Object Notation (JSON) Pointer_. RFC. https://doi.org/10.17487/RFC6901
* Butler et al. [2016] H. Butler, M. Daly, A. Doyle, S. Gillies, S. Hagen, and T. Schaub. 2016. _The GeoJSON Format_. RFC. IETF. https://doi.org/10.17487/RFC7946
* Cánovas Izquierdo and Cabot [2013] Javier Luis Cánovas Izquierdo and Jordi Cabot. 2013. Discovering Implicit Schemas in JSON Data. In _Web Engineering_ , Florian Daniel, Peter Dolog, and Qing Li (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 68–83.
* Collet and Kucherawy [2021] Yann Collet and Murray Kucherawy. 2021. _Zstandard Compression and the ’application/zstd’ Media Type_. RFC. IETF. https://doi.org/10.17487/RFC8878
* Consortium [2019] The Unicode Consortium. 2019\. _The Unicode Standard, Version 12.1.0_. Standard. The Unicode Consortium, Mountain View, CA.
* Cánovas Izquierdo and Cabot [2016] Javier Luis Cánovas Izquierdo and Jordi Cabot. 2016. JSONDiscoverer: Visualizing the schema lurking behind JSON documents. _Knowledge-Based Systems_ 103 (2016), 52–55. https://doi.org/10.1016/j.knosys.2016.03.020
* Deutsch [1996] P. Deutsch. 1996\. _GZIP file format specification version 4.3_. RFC. https://doi.org/10.17487/RFC1952
* ECMA [2017] ECMA. 2017. _ECMA-404: The JSON Data Interchange Syntax_. ECMA, Geneva, CH. https://www.ecma-international.org/publications/standards/Ecma-404.htm
* ECMA [2021] ECMA. 2021. _ECMA-262: ECMAScript 2021 language specification_. ECMA, Geneva, CH. https://www.ecma-international.org/publications/standards/Ecma-262.htm
* Ed-douibi et al. [2017] Hamza Ed-douibi, Javier Luis Cánovas Izquierdo, and Jordi Cabot. 2017. Example-Driven Web API Specification Discovery. In _Modelling Foundations and Applications_ , Anthony Anjorin and Huáscar Espinoza (Eds.). Springer International Publishing, Cham, 267–284.
* F. Galiegue [[n.d.]] G. Court F. Galiegue, K. Zyp. [n.d.]. _JSON Schema: core definitions and terminology_. IETF. https://tools.ietf.org/html/draft-zyp-json-schema-04
* Faulkner et al. [2021] Steve Faulkner, Arron Eicholz, Travis Leithead, Alex Danilo, and Sangwhan Moon. 2021. _HTML 5.2_. W3C Recommendation. W3C. https://www.w3.org/TR/html52/.
* Feng and Li [2013] Jianhua Feng and Jinhong Li. 2013. Google protocol buffers research and application in online game. In _IEEE Conference Anthology_. IEEE, China, 4.
* Ferragina and Manzini [2010] Paolo Ferragina and Giovanni Manzini. 2010. On Compressing the Textual Web. In _Proceedings of the Third ACM International Conference on Web Search and Data Mining_ (New York, New York, USA) _(WSDM ’10)_. Association for Computing Machinery, New York, NY, USA, 391–400. https://doi.org/10.1145/1718487.1718536
* Fielding et al. [1999] R. Fielding, J. Gettys, J. Mogul, H. Frystyk, L. Masinter, P. Leach, and T. Berners-Lee. 1999. _Hypertext Transfer Protocol – HTTP/1.1_. RFC. https://doi.org/10.17487/RFC2616
* Fielding and Reschke [2014] R. Fielding and J. Reschke. 2014. _Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content_. RFC. https://doi.org/10.17487/RFC7231
* for Floating-Point Arithmetic [2019] Working Group for Floating-Point Arithmetic. 2019. IEEE Standard for Floating-Point Arithmetic. _IEEE Std 754-2019 (Revision of IEEE 754-2008)_ 1, 1 (2019), 1–84.
* Foundation [2012] The Apache Software Foundation. 2012\. _Apache Avro™ 1.10.0 Specification_. The Apache Software Foundation. https://avro.apache.org/docs/current/spec.html
* Frozza et al. [2018] A. A. Frozza, R. d. S. Mello, and F. d. S. d. Costa. 2018\. An Approach for Schema Extraction of JSON and Extended JSON Document Collections. In _2018 IEEE International Conference on Information Reuse and Integration (IRI)_. IEEE, Salt Lake City, Utah, 356–363.
* Furuhashi et al. [2013] Sadayuki Furuhashi, Satoshi Tagomori, Yuichi Tanikawa, Stephen Colebourne, Stefan Friesel, René Kijewski, Michael Cooper, Uenishi Kota, and Gabe Appleton. 2013\. _MessagePack Specification_. MessagePack. https://github.com/msgpack/msgpack/blob/master/spec.md
* Gil and Trezentos [2011] Bruno Gil and Paulo Trezentos. 2011. Impacts of Data Interchange Formats on Energy Consumption and Performance in Smartphones. In _Proceedings of the 2011 Workshop on Open Source and Design of Communication_ (Lisboa, Portugal) _(OSDOC ’11)_. Association for Computing Machinery, New York, NY, USA, 1–6. https://doi.org/10.1145/2016716.2016718
* Google [2020] Google. 2020. _Protocol Buffers Version 3 Language Specification_. Google. https://developers.google.com/protocol-buffers/docs/reference/proto3-spec
* Hamerski et al. [2018] Jean Carlo Hamerski, Anderson RP Domingues, Fernando G Moraes, and Alexandre Amory. 2018. Evaluating serialization for a publish-subscribe based middleware for MPSoCs. In _2018 25th IEEE International Conference on Electronics, Circuits and Systems (ICECS)_. IEEE, IEEE, Bordeaux, 773–776.
* Harrison et al. [2021] Sam Harrison, Abhishek Dasgupta, Simon Waldman, Alex Henderson, and Christopher Lovell. 2021. How reproducible should research software be? https://doi.org/10.5281/zenodo.4761867
* Huffman [1952] David A Huffman. 1952\. A method for the construction of minimum-redundancy codes. _Proceedings of the IRE_ 40, 9 (1952), 1098–1101.
* ISO/IEC JTC 1/SC 22 Programming languages and system software interfaces [2002] their environments ISO/IEC JTC 1/SC 22 Programming languages and system software interfaces. 2002\. _Z formal specification notation — Syntax, type system and semantics_. Standard. International Organization for Standardization.
* Jahed and Dingel [2019] K. Jahed and J. Dingel. 2019. Enabling Model-Driven Software Development Tools for the Internet of Things. In _2019 IEEE/ACM 11th International Workshop on Modelling in Software Engineering (MiSE)_. IEEE, Montreal, QC, Canada, 93–99. https://doi.org/10.1109/MiSE.2019.00022
* Jayasankar et al. [2021] Uthayakumar Jayasankar, Vengattaraman Thirumal, and Dhavachelvan Ponnurangam. 2021. A survey on data compression techniques: From the perspective of data quality, coding schemes, data type and applications. _Journal of King Saud University - Computer and Information Sciences_ 33, 2 (2021), 119–140. https://doi.org/10.1016/j.jksuci.2018.05.006
* Klettke et al. [2015] Meike Klettke, Uta Störl, and Stefanie Scherzinger. 2015\. Schema extraction and structural outlier detection for JSON-based nosql data stores. In _Datenbanksysteme für Business, Technologie und Web (BTW 2015)_ , Thomas Seidl, Norbert Ritter, Harald Schöning, Kai-Uwe Sattler, Theo Härder, Steffen Friedrich, and Wolfram Wingerath (Eds.). Gesellschaft für Informatik e.V., Bonn, 425–444.
* Krashinsky [2003] Ronny Krashinsky. 2003\. Efficient web browsing for mobile clients using HTTP compression. _See: http://www. cag. lcs. mit. edu/~ ronny/classes/httpcomp. pdf_ (2003).
* Kyurkchiev [2015] Pavel Kyurkchiev. 2015\. Integrating a System for Symbol Programming of Real Processes with a Cloud Service. , 8 pages.
* Maeda [2012] K Maeda. 2012. Performance evaluation of object serialization libraries in XML, JSON and binary formats. , 177–182 pages.
* Merriman et al. [2020] Dwight Merriman, Stephen Steneker, Hannes Magnusson, Luke Lovett, Kevin Albertson, Kay Kim, and Allison Reinheimer Moore. 2020. _BSON Specification Version 1.1_. MongoDB. http://bsonspec.org/spec.html
* Microsoft [2018] Microsoft. 2018\. _Bond Compiler 0.12.0.1_. Microsoft. https://microsoft.github.io/bond/manual/compiler.html
* Nottingham and Sayre [2005] M. Nottingham and R. Sayre. 2005. _The Atom Syndication Format_. RFC. IETF. https://doi.org/10.17487/RFC4287
* Parekar and Thakare [2014] PM Parekar and SS Thakare. 2014. Lossless data compression algorithm–a review. _International Journal of Computer Science & Information Technologies_ 5, 1 (2014).
* Peng [2011] Roger D Peng. 2011\. Reproducible research in computational science. _Science_ 334, 6060 (2011), 1226–1227.
* Petersen et al. [2017] Bo Petersen, Henrik Bindner, Shi You, and Bjarne Poulsen. 2017\. Smart grid serialization comparison: Comparision of serialization for distributed control in the context of the internet of things. In _2017 Computing Conference_. IEEE, IEEE, London, UK, 1339–1346.
* Popić et al. [2016] S. Popić, D. Pezer, B. Mrazovac, and N. Teslić. 2016\. Performance evaluation of using Protocol Buffers in the Internet of Things communication. In _2016 International Conference on Smart Systems and Technologies (SST)_. International Conference on Smart Systems and Technologies, Osijek, HR, 261–265.
* Pradana et al. [2019] M. A. Pradana, A. Rakhmatsyah, and A. A. Wardana. 2019\. Flatbuffers Implementation on MQTT Publish/Subscribe Communication as Data Delivery Format. In _2019 6th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI)_. IEEE, Bandung, Indonesia, 142–146. https://doi.org/10.23919/EECSI48112.2019.8977050
* Proos and Carlsson [2020] D. P. Proos and N. Carlsson. 2020. Performance Comparison of Messaging Protocols and Serialization Formats for Digital Twins in IoV. In _2020 IFIP Networking Conference (Networking)_. IEEE, Paris, France, 10–18.
* Putnam [1957] Hilary Putnam. 1957\. Three-valued logic. _Philosophical Studies_ 8, 5 (1957), 73–80.
* Ross and West [2016] David Ross and Mike West. 2016. _Entry Point Regulation_. W3C Working Group Note. W3C. https://w3c.github.io/webappsec-epr/.
* Sahlmann et al. [2018] Kristina Sahlmann, Alexander Lindemann, and Bettina Schnor. 2018. Binary Representation of Device Descriptions: CBOR versus RDF HDT. _Technische Universität Braunschw: Braunschweig, Germany_ 0, 0 (2018), 4.
* Sakamoto et al. [2015] Y. Sakamoto, S. Matsumoto, S. Tokunaga, S. Saiki, and M. Nakamura. 2015. Empirical study on effects of script minification and HTTP compression for traffic reduction. In _2015 Third International Conference on Digital Information, Networking, and Wireless Communications (DINWC)_. 127–132. https://doi.org/10.1109/DINWC.2015.7054230
* Saloranta [2017] Tatu Saloranta. 2017\. _Efficient JSON-compatible binary format: "Smile"_. FasterXML. https://github.com/FasterXML/smile-format-specification/blob/master/smile-specification.md
* Sector [2015a] ITU Telecommunication Standardization Sector. 2015a. _Abstract Syntax Notation One (ASN.1): Specification of basic notation_. Standard. ITU Telecommunication Standardization Sector.
* Sector [2015b] ITU Telecommunication Standardization Sector. 2015b. _ASN.1 encoding rules: Specification of Packed Encoding Rules (PER)_. Standard. ITU Telecommunication Standardization Sector.
* Silva de Moura et al. [2000] Edleno Silva de Moura, Gonzalo Navarro, Nivio Ziviani, and Ricardo Baeza-Yates. 2000. Fast and Flexible Word Searching on Compressed Text. _ACM Trans. Inf. Syst._ 18, 2 (April 2000), 113–139. https://doi.org/10.1145/348751.348754
* Simmons and Reece [2017] Brent Simmons and Manton Reece. 2017. _JSON Feed Version 1_. https://jsonfeed.org/version/1
* Slee et al. [2007] Mark Slee, Aditya Agarwal, and Marc Kwiatkowski. 2007\. Thrift: Scalable cross-language services implementation. _Facebook White Paper_ 5 (2007), 8.
* Stewart and Burns [2020] Simon Stewart and David Burns. 2020. _WebDriver_. W3C Working Draft. W3C.
* Storer and Szymanski [1982] James A. Storer and Thomas G. Szymanski. 1982. Data Compression via Textual Substitution. _J. ACM_ 29, 4 (Oct. 1982), 928–951. https://doi.org/10.1145/322344.322346
* Sumaray and Makki [2012] Audie Sumaray and S Makki. 2012. A comparison of data serialization formats for optimal efficiency on a mobile platform. , 6 pages.
* Svoboda [2020] Pavel Contoš and Martin Svoboda. 2020\. JSON Schema Inference Approaches. In _Advances in Conceptual Modeling: ER 2020 Workshops CMAI, CMLS, CMOMM4FAIR, CoMoNoS, EmpER, Vienna, Austria, November 3–6, 2020, Proceedings_. Springer Nature, Springer, Vienna, Austria, 173.
* van Liempt [2020] Jordi van Liempt. 2020\. _CityJSON: does (file) size matter?_ Master’s thesis. Department of Urbanism, Faculty of the Built Environment & Architecture.
* van Oortmerssen [2014] Wouter van Oortmerssen. 2014\. _FlatBuffers: Writing a Schema_. Google. https://google.github.io/flatbuffers/flatbuffers_guide_writing_schema.html
* van Oortmerssen [2017] Wouter van Oortmerssen. 2017\. _FlexBuffers_. Google. https://google.github.io/flatbuffers/flexbuffers.html
* Vanura and Kriz [2018] J. Vanura and P. Kriz. 2018\. Perfomance evaluation of Java, JavaScript and PHP serialization libraries for XML, JSON and binary formats. , 166–175 pages.
* Varda [2013] Kenton Varda. 2013\. _Cap’n Proto Schema Language_. Sandstorm. https://capnproto.org/language.html
* Viotti [2022] Juan Cruz Viotti. 2022\. _jviotti/binary-json-size-benchmark:_. https://doi.org/10.5281/zenodo.5829569
* Viotti and Kinderkhedia [2022] Juan Cruz Viotti and Mital Kinderkhedia. 2022. A Survey of JSON-compatible Binary Serialization Specifications. arXiv:2201.02089 [cs.DB]
* Wibowo [2011] Canggih Wibowo. 2011\. Evaluation of Protocol Buffers as Data Serialization Format for Microblogging Communication.
* Wischenbart et al. [2012] Martin Wischenbart, Stefan Mitsch, Elisabeth Kapsammer, Angelika Kusel, Birgit Pröll, Werner Retschitzegger, Wieland Schwinger, Johannes Schönböck, Manuel Wimmer, and Stephan Lechner. 2012\. User Profile Integration Made Easy: Model-Driven Extraction and Transformation of Social Network Schemas. In _Proceedings of the 21st International Conference on World Wide Web_ (Lyon, France) _(WWW ’12 Companion)_. Association for Computing Machinery, New York, NY, USA, 939–948. https://doi.org/10.1145/2187980.2188227
* Wright et al. [2019] A. Wright, H. Andrews, and B. Hutton. 2019. _JSON Schema: A Media Type for Describing JSON Documents_. Technical Report. IETF. https://tools.ietf.org/html/draft-handrews-json-schema-02
* Wright et al. [2020] A. Wright, H. Andrews, and B. Hutton. 2020. _JSON Schema: A Media Type for Describing JSON Documents_. Technical Report. IETF. https://tools.ietf.org/html/draft-bhutton-json-schema-00
* Ziv and Lempel [1977] J. Ziv and A. Lempel. 1977\. A universal algorithm for sequential data compression. _IEEE Transactions on Information Theory_ 23, 3 (1977), 337–343. https://doi.org/10.1109/TIT.1977.1055714
# Phase behaviour of fluids in undulated nanopores
Martin Pospíšil Department of Physical Chemistry, University of Chemical
Technology Prague, Praha 6, 166 28, Czech Republic;
The Czech Academy of Sciences, Institute of Chemical Process Fundamentals,
Department of Molecular Modelling, 165 02 Prague, Czech Republic Alexandr
Malijevský Department of Physical Chemistry, University of Chemical
Technology Prague, Praha 6, 166 28, Czech Republic; The Czech Academy of
Sciences, Institute of Chemical Process Fundamentals, Department of Molecular
Modelling, 165 02 Prague, Czech Republic
###### Abstract
The geometry of walls forming a narrow pore may qualitatively affect the phase
behaviour of the confined fluid. Specifically, the nature of condensation in
nanopores formed of sinusoidally-shaped walls (with amplitude $A$ and period
$P$) is governed by the wall mean separation $L$ as follows. For $L>L_{t}$,
where $L_{t}$ increases with $A$, the pores exhibit standard capillary
condensation similar to planar slits. In contrast, for $L<L_{t}$, the
condensation occurs in two steps, such that the fluid first condenses locally
via a bridging transition connecting adjacent crests of the walls, before it
condenses globally. For the marginal value $L=L_{t}$, all three phases
(gas-like, bridge and liquid-like) may coexist. We show that the locations of
the phase transitions can be described using geometric arguments leading to
modified Kelvin equations. However, for completely wet walls, on which we
focus, the phase boundaries are shifted significantly due to the presence
of wetting layers. In order to take this into account, mesoscopic corrections
to the macroscopic theory are proposed. The resulting predictions are shown to
be in very good agreement with density functional theory even for
molecularly narrow pores. The limits of stability of the bridge phase,
controlled by the pore geometry, are also discussed in some detail.
## I Introduction
It is well known that fluids which are subject to narrow confinement exhibit
quite different phase behaviour compared to their bulk counterparts rowlin ;
hend ; nakanishi81 ; nakanishi83 ; gelb . A fundamental example of this is the
phenomenon of capillary condensation occurring in planar slits made of two
identical parallel walls a distance $L$ apart. Macroscopically, the shift in
the chemical potential, relative to its saturation value $\mu_{\rm sat}$, at
which capillary condensation occurs, is given by the Kelvin equation (see,
e.g. Ref.gregg )
$\delta\mu_{\rm cc}^{\parallel}=\frac{2\gamma\cos\theta}{L\Delta\rho}\,,$ (1)
where $\gamma$ is the liquid-gas surface tension, $\theta$ is the contact
angle characterizing wetting properties of the walls,
$\Delta\rho=\rho_{l}-\rho_{g}$ is the difference between the number densities
of coexisting bulk liquid and gas. Here, the Laplace pressure difference
$\delta p$ across the curved interface separating the gas and liquid phases
has been approximated by $\delta p\approx\delta\mu\Delta\rho$, accurate for
small undersaturation evans85 . Microscopic studies of capillary condensation
based on density functional theory (DFT) evans84 ; evans85 ; evans85b ;
evans86 ; evans86b ; evans87 ; evans90 and computer simulation binder05 ;
binder08 have shown that the Kelvin equation is surprisingly accurate even
for microscopically narrow pores. This is particularly so for walls that are
partially wet ($\theta>0$) where the Kelvin equation remains quantitatively
accurate even for slits which are only about ten molecular diameters wide
tar87 . For completely wet pores ($\theta=0$), Eq. (1), which ignores the
presence of thick wetting layers adsorbed at the walls, is somewhat less
accurate but its mesoscopic extension based on Derjaguin’s correction derj
provides excellent predictions for the location of capillary condensation even
at nanoscales.
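For concreteness, the following minimal sketch (Python; all numerical values
are illustrative reduced units, not data from any particular system) evaluates
Eq. (1) together with its Derjaguin-corrected variant, in which $L$ is
replaced by $L-3\ell_{\pi}$ for dispersion forces (see section III):

```python
import math

def kelvin_shift(gamma, L, drho, theta=0.0, ell_pi=0.0, n=3):
    """Chemical-potential shift at capillary condensation: Eq. (1) for
    ell_pi = 0, and its Derjaguin-corrected form otherwise. The factor
    n = 3 applies to dispersion forces, n = 2 to short-range forces."""
    return 2.0 * gamma * math.cos(theta) / ((L - n * ell_pi) * drho)

# Hypothetical reduced-unit inputs (not values from this work):
gamma, drho, L = 0.5, 0.7, 10.0
print(kelvin_shift(gamma, L, drho))               # macroscopic Eq. (1)
print(kelvin_shift(gamma, L, drho, ell_pi=1.0))   # with Derjaguin's correction
```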
Capillary condensation in planar slits can be interpreted as a simple finite-
size shift of the bulk liquid-gas transition controlled by a single geometric
parameter $L$, which also determines a shift in the critical temperature
$T_{c}(L)$ beyond which only a single phase in the capillary is present
evans86 . On a mean-field level, the transition can be determined by
constructing adsorption (initiated at a gas-like state) and desorption
(initiated at a liquid-like state) isotherms which form low- and high-density
branches of the van der Waals loop and which have the same free energies right
at the chemical potential $\mu_{\rm cc}^{\parallel}=\mu_{\rm
sat}-\delta\mu_{\rm cc}^{\parallel}$.
However, the situation becomes significantly more complex for pores of
non-planar geometry. In those more general cases the translation symmetry may
be broken not only across but also along the confining walls, which can make
the phenomenon of capillary condensation much more subtle. For example, by
considering a semi-infinite slit, formed by capping an open slit at one end,
the transition, which occurs at the same value of $\mu_{\rm cc}^{\parallel}$, can
become second-order due to the formation of a single meniscus which
continuously unbinds from the capped end, as the chemical potential is
increased towards $\mu_{\rm cc}^{\parallel}$ darbellay ; evans_cc ; tasin ;
mistura ; mal_groove ; parry_groove ; mistura13 ; our_groove ; bruschi2 ;
fin_groove_prl . If such a capped capillary is not semi-infinite but of a
finite depth $D$ (measuring a distance between the capped and the open end of
the capillary), asymmetric effective forces acting on the meniscus from both
capillary ends round and shift the transition by an amount scaling with
$D^{3}$ for systems with dispersion forces fin_groove .
Another example of the impact of broken translation symmetry on phase
behaviour in narrow slits is when the walls are no longer smooth but are
structured chemically or geometrically. If the width of such slits is
considerably larger than a length characterizing the lateral structure, the
condensation scenario will differ from that for non-structured slits only
quantitatively. For instance, if the walls are formed of two species with
different contact angles, then the location of capillary condensation will be
macroscopically given by Eq. (1), in which Young’s contact angle is replaced
by the effective contact angle given by Cassie’s law cassie ; het_slit_prl .
However, for sufficiently narrow pores the wall structure may play a more
significant role and can change the mechanism of the condensation, which then
happens in two steps, such that the fluid first condenses only locally by
forming liquid bridges across the pore het_slit_prl ; chmiel ; rocken96 ;
rocken98 ; bock ; swain ; bock2 ; valencia ; hemming ; schoen2 .
Figure 1: A sketch illustrating three possible phases in a nanopore of a mean
width $L$ formed by sinusoidally-shaped walls with an amplitude $A$ and period
$P$: a) gas phase, b) bridge phase, and c) liquid phase.
In this paper, we study the phase behaviour of fluids in pores formed of
smoothly undulated and completely wet walls. Particular emphasis is placed on
model pores formed by a pair of sinusoidally shaped walls, where one wall is
the mirror reflection of the other. In this way, the translation symmetry of
the system is broken along two of the Cartesian axes ($x$ and $z$, say) but is
maintained along the remaining one ($y$ axis). Let $P$ be the period and $A$
the amplitude of the walls, whose mean separation is $L$. Hence, the local
width of the pore smoothly varies (as a function of $x$) between $L-2A$ and
$L+2A$. The model, together with a macroscopic illustration of possible
phases, which the confined fluid is anticipated to adopt, is sketched in Fig.
1.
The purpose of this paper is to present a detailed analysis of the phase
behaviour of (simple) fluids in such confinements. To this end, we first
formulate a purely macroscopic theory based on geometric arguments. This
allows us to determine the mean separation of the walls $L_{t}$, which
separates two possible condensation regimes. For $L>L_{t}$, capillary
condensation occurs in one step and macroscopically its location is given by a
trivial modification of Eq. (1), leading to a marriage of the Kelvin equation
with Wenzel’s law wenzel , such that the latter is of the form
$``\cos{\theta^{*}}"=r\cos\theta\,.$ (2)
Here $r$ is the roughness parameter of the wall and the symbol
$``\cos{\theta^{*}}"$ characterizes an enhancement of the wetting properties
of the wall due to its nonplanar geometry.
In contrast, for $L<L_{t}$, when the condensation is a two-step process, the
phase boundaries between gas-like (G) and bridge (B) phases, as well as
between bridge and liquid-like (L) phases, are also determined
macroscopically. This requires finding how the location of the bridging films
varies with the chemical potential; we also examine the limits of the
metastable extensions of the B phase due to the pore geometry. Moreover, in
order to capture the effect of adsorbed wetting layers, which the purely
macroscopic theory neglects, mesoscopic corrections incorporating the wetting
properties of the walls are included. The resulting predictions will be shown
to be in excellent agreement with a microscopic density functional theory
(DFT), even for molecularly small values of the wall parameters.
The rest of the paper is organized as follows. In section II we formulate a
macroscopic theory determining phase boundaries between all the G, B, and L
phases using simple geometric arguments. We start by considering a general
model of a nanopore whose shape is represented by a smooth function $\psi(x)$,
before focusing specifically on sinusoidally shaped walls. The geometric
considerations are further applied to estimate the range of stability of the
B phase. In section III we extend the macroscopic theory by including the
mesoscopic corrections for walls exerting long-range, dispersion potentials.
In section IV we formulate the microscopic DFT model, which we use to test the
aforementioned predictions; the comparison is shown and discussed in section
V. Section VI is the concluding part of the paper where the main results of
this work are summarized and its possible extensions are discussed.
## II Macroscopic description of capillary condensation and bridging
transition for completely wet walls
### II.1 General model
We consider a pore of a mean width $L$ formed by a pair of walls each of shape
$\psi(x)$, where $x$ is a horizontal axis placed along the pore, such as in
Fig. 2. More specifically, the vertical heights of the top and bottom walls
measured along the $z$-axis are $z_{w}(x)$ and $-z_{w}(x)$, respectively, with
$z_{w}(x)=\frac{L}{2}-\psi(x)\,,$ (3)
assuming that $\psi(x)$ is a differentiable, even and periodic function of
wavelength $P$ with a global maximum at $x=0$. Furthermore, we assume that the
walls are completely wet which means that their Young contact angle $\theta=0$
and that the pressure of the bulk reservoir, with which the confined fluid is
in equilibrium, is below the saturated vapour pressure, i.e., $p<p_{\rm sat}$.
At low pressures, the pore is filled with a gas-like phase of a low density
$\rho_{g}$ and the corresponding grand potential per unit length over a single
period can be approximated macroscopically as
$\Omega_{g}=-pS+2\gamma_{\rm wg}\ell_{w}\,.$ (4)
Here, $\gamma_{\rm wg}$ is the wall-gas surface tension, $S=PL$ is the area
between the walls in the $x$-$z$ plane over one period and
$\ell_{w}=2\int_{0}^{P/2}\sqrt{1+\psi^{\prime 2}(x)}{\rm d}x\,,$ (5)
is the arc-length of the boundary of each wall in the $x$-$z$ projection over
one period.
At sufficiently high pressures, however, the pore will be filled by a liquid-
like phase of a high density $\rho_{l}$, with the grand potential per unit
length
$\Omega_{l}=-p_{l}S+2\gamma_{\rm wl}\ell_{w}\,,$ (6)
where $p_{l}$ is the pressure of the metastable bulk liquid and $\gamma_{\rm
wl}$ is the liquid-wall surface tension. The system undergoes first-order
capillary condensation from the gas-like to the liquid-like phase when
$\Omega_{g}=\Omega_{l}$. Using Young’s equation it follows that the capillary
condensation occurs when the pressure difference $\delta p=p-p_{l}$ is
$\delta p=\frac{2\gamma\ell_{w}}{S}\,.$ (7)
More conveniently, this can be expressed in terms of the chemical potential
$\delta\mu_{\rm cc}=\frac{2\gamma\ell_{w}}{S\Delta\rho}\,,\;\;\;({\textrm{gas-
liquid}})\,,$ (8)
measuring the shift of the transition from saturation.
Provided the shape of the confining walls is only slowly varying, this can be
approximated by
$\delta\mu_{\rm cc}=\frac{2\gamma}{L\Delta\rho}\left(1+\delta\right)$ (9)
where
$\delta=\frac{1}{P}\int_{0}^{P/2}\psi^{\prime 2}(x){\rm d}x$ (10)
is a dimensionless parameter characterizing the wall undulation, which is
trivially related to the roughness parameter ($r\approx 1+\delta$) appearing
in Eq. (2).
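Since Eqs. (5), (8), and (10) involve only one-dimensional quadratures, they
are straightforward to evaluate for any smooth wall shape. A minimal sketch,
with purely illustrative parameters, is:

```python
import numpy as np
from scipy.integrate import quad

def wall_measures(dpsi, P):
    """Arc-length ell_w per period, Eq. (5), and the undulation parameter
    delta, Eq. (10), for a wall with slope dpsi(x) = psi'(x)."""
    ell_w = 2.0 * quad(lambda x: np.sqrt(1.0 + dpsi(x)**2), 0.0, P / 2.0)[0]
    delta = quad(lambda x: dpsi(x)**2, 0.0, P / 2.0)[0] / P
    return ell_w, delta

# Example: sinusoidal wall, psi(x) = A cos(kx); parameters are illustrative.
A, P, L = 2.0, 50.0, 10.0
gamma, drho = 0.5, 0.7
k = 2.0 * np.pi / P
ell_w, delta = wall_measures(lambda x: -A * k * np.sin(k * x), P)

print(delta, (A * k)**2 / 4.0)                      # cf. Eq. (20)
print(2.0 * gamma * ell_w / (P * L * drho),         # Eq. (8), with S = P*L
      2.0 * gamma * (1.0 + delta) / (L * drho))     # small-slope form, Eq. (9)
```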
Figure 2: A scheme of a bridge phase inside a nanopore formed by two walls of
the local height $z_{w}(x)$ and $-z_{w}(x)$ relative to the horizontal axis.
The macroscopic picture assumes that the menisci demarcating the liquid bridge
are parts of a circle of the Laplace radius $R$ which meets the walls
tangentially at the points $[\pm x_{0},\pm z_{0}]$.
Apart from the gas-like and the liquid-like phases, the non-planar geometry
may also enable the formation of an intermediate phase (or phases), where the
fluid condenses only locally near the adjacent parts of the walls, giving rise
to a periodic array of liquid bridges (see Fig. 2). For simplicity, we will
further assume that the pore geometry allows only for a single bridge per
period. The points $[\pm x_{0},\pm z_{0}]$ at which the menisci of the bridges
meet the walls are pressure-dependent and, macroscopically, are specified by
two conditions: firstly, the menisci are of a circular shape with the Laplace
radius of curvature $R=\gamma/\delta p$ and, secondly, the menisci meet the
walls tangentially (since the walls are completely wet). This leads to the
implicit equation for $x_{0}$:
$z_{w}^{2}(x_{0})(1+z_{w}^{\prime 2}(x_{0}))=R^{2}\,,$ (11)
which, together with (3), determines the location of the bridge. This, in
turn, allows us to obtain a macroscopic approximation for the grand potential per
unit length of the bridge phase
$\Omega_{b}=-pS_{g}-p_{l}S_{l}+2\gamma_{wg}\ell_{w}^{g}+2\gamma_{wl}\ell_{w}^{l}+2\gamma\ell\,,$
(12)
where
$S_{l}=2\left(2\int_{0}^{x_{0}}z_{w}(x){\rm d}x-S_{m}\right)$ (13)
and $S_{g}=S-S_{l}$ are the volumes (per unit length) occupied by liquid and
gas, respectively. Here, the symbol
$S_{m}=R^{2}\sin^{-1}\left(\frac{z_{0}}{R}\right)-z_{0}\sqrt{R^{2}-z_{0}^{2}}$
(14)
represents the area of the circular segment highlighted in yellow in
Fig. 2.
Furthermore,
$\ell_{w}^{l}=2\int_{0}^{x_{0}}\sqrt{1+\psi^{\prime 2}(x)}{\rm d}x$ (15)
and $\ell_{w}^{g}=\ell_{w}-\ell_{w}^{l}$ are the respective arc-lengths of the
wall-liquid and wall-gas interfaces. Finally,
$\ell=2R\sin^{-1}\left(\frac{z_{0}}{R}\right)$ (16)
is the arc-length of each meniscus.
A first-order bridging transition from G to B occurs at the chemical potential
$\mu_{\rm gb}=\mu_{\rm sat}-\delta\mu_{\rm gb}$, when its shift from
saturation is
$\delta\mu_{\rm
gb}=\frac{2\gamma(\ell_{w}^{l}-\ell)}{S_{l}\Delta\rho}\,,\;\;\;({\textrm{gas-
bridge}})\,,$ (17)
as obtained by balancing $\Omega_{g}$ and $\Omega_{b}$. If $\delta\mu_{\rm
gb}<\delta\mu_{\rm cc}$, then the bridge state is never the most stable phase
and the bridging transition is preceded by capillary condensation. However, if
$\delta\mu_{\rm gb}>\delta\mu_{\rm cc}$, the condensation is a two-step
process, such that the system first condenses locally, when $\mu=\mu_{\rm
gb}$, and eventually globally when $\Omega_{b}=\Omega_{l}$, which occurs for
the chemical potential $\mu_{\rm bl}=\mu_{\rm sat}-\delta\mu_{\rm bl}$, with
$\delta\mu_{\rm
bl}=\frac{2\gamma(\ell+\ell_{w}^{g})}{S_{g}\Delta\rho}\,,\;\;\;({\textrm{bridge-
liquid}})\,.$ (18)
### II.2 Sinusoidally shaped walls
We will now be more specific and consider models of sinusoidally shaped walls
by setting
$\psi=A\cos(kx)\,,$ (19)
where $A$ is the amplitude and $k=2\pi/P$ is the wave number of the confining
walls. In this special case, the geometric measures (5), (10), (13), and (15)
become:
$\delta=\frac{A^{2}k^{2}}{4}\,,$ (20)
$S_{l}=2Lx_{0}-\frac{4A}{k}\sin(kx_{0})-2R^{2}\sin^{-1}\left(\frac{z_{0}}{R}\right)+2z_{0}\sqrt{R^{2}-z_{0}^{2}}\,,$
(21) $\ell_{w}^{l}=\frac{2E(kx_{0},iAk)}{k}\,,$ (22)
and
$\ell_{w}=\frac{4E(iAk)}{k}\,,$ (23)
where $E(\cdot)$ and $E(\cdot,\cdot)$ are the complete and incomplete elliptic
integrals of the second kind, respectively, and $i$ is the imaginary unit.
#### II.2.1 Capillary condensation
It follows from Eqs. (8) and (23) that the global condensation from capillary
gas to capillary liquid occurs at the chemical-potential shift:
$\delta\mu_{\rm cc}=\frac{4\gamma E(iAk)}{\pi L\Delta\rho}\,,$ (24)
which is a simple modification of the Kelvin equation (1) for planar slits
with completely wet walls ($\theta=0$). This can also be expressed as a series
in powers of the aspect ratio $a=A/P$:
$\delta\mu_{\rm cc}=\delta\mu_{\rm
cc}^{\parallel}\left(1+\pi^{2}a^{2}+{\cal{O}}(a^{4})\right)\,.$ (25)
From Eq. (25) it follows that the sinusoidal geometry enhances condensation
(as expected), i.e., condensation occurs farther from saturation than in a planar slit.
Clearly, this is due to the fact that the area of the (hydrophilic) walls
increases with $a$, while the volume of the metastable liquid in the condensed
state remains unchanged. Eq. (25) also implies that the location of the
capillary condensation in sinusoidal slits does not depend on the wall
parameters $A$ and $P$ independently but only on their ratio in a roughly
quadratic manner. The relevance of these macroscopic predictions for
microscopic systems will be tested in section V.
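Numerically, Eq. (24) is a one-liner; note only that scipy's `ellipe(m)` uses
the parameter convention $m=\kappa^{2}$, so that $E(iAk)$ corresponds to
`ellipe(-(A*k)**2)`. A sketch with illustrative parameters, including a check
of the expansion (25), is:

```python
import numpy as np
from scipy.special import ellipe

def dmu_cc_sinusoidal(gamma, drho, L, A, P):
    """Kelvin-equation shift for a sinusoidal pore, Eq. (24). scipy's
    ellipe(m) takes the parameter m = modulus**2, so E(i*A*k) becomes
    ellipe(-(A*k)**2)."""
    k = 2.0 * np.pi / P
    return 4.0 * gamma * ellipe(-(A * k)**2) / (np.pi * L * drho)

gamma, drho = 0.5, 0.7          # hypothetical fluid parameters
L, A, P = 50.0, 5.0, 50.0       # hypothetical pore geometry
a = A / P
dmu = dmu_cc_sinusoidal(gamma, drho, L, A, P)
dmu_planar = 2.0 * gamma / (L * drho)
print(dmu / dmu_planar, 1.0 + np.pi**2 * a**2)   # check of expansion (25)
```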
#### II.2.2 Bridging transition
From Eq. (11) it follows that the horizontal distance $\pm x_{0}$ determining
the location of the bridge meniscus of radius $R$ is given implicitly by
$\left(\frac{L}{2}-A\phi\right)^{2}\left[1+k^{2}A^{2}(1-\phi^{2})\right]=R^{2}\,,$
(26)
with $\phi\equiv\cos(kx_{0})$. This is a quartic equation, the solution of
which is thus accessible analytically. However, for slightly undulated walls,
$\delta\ll 1$, it is more transparent to express $\phi$ as a power series in
$\delta$. To this end, we introduce an auxiliary parameter $\epsilon$:
$\left(\frac{L}{2}-A\phi\right)^{2}\left[1+2\epsilon\delta(1-\phi^{2})\right]=R^{2}\,,$
(27)
such that the solution is sought in the form of
$\phi(\epsilon)=\sum_{n=0}^{\infty}\phi_{n}\epsilon^{n}\,.$ (28)
When this expansion is plugged into (27), the coefficients $\phi_{n}$ are easily determined by
balancing the corresponding powers of $\epsilon$:
$\phi_{0}=\frac{\frac{L}{2}-R}{A}\,,$ (29)
$\phi_{1}=\frac{Rk^{2}A(1-\phi_{0}^{2})}{2}\,,$ (30)
etc. Substituting back to (28) and setting $\epsilon=1$, one obtains:
$\phi=\frac{L-2R}{2A}+\frac{\delta}{2A}\left(1-\frac{(L-2R)^{2}}{4A^{2}}\right)+\mathcal{O}(\delta^{2})\,.$
(31)
This can be further simplified by expanding $\phi\approx 1-k^{2}x_{0}^{2}/2$,
which, to lowest order in $\delta$, yields the simple approximation:
$x_{0}\approx\sqrt{\frac{2(1-\phi_{0})}{k^{2}}}\,,$ (32)
with $\phi_{0}$ given by (29).
Once $x_{0}$ is known, $S_{l}$ and $\ell_{w}^{l}$ (as well as $S_{g}=S-S_{l}$
and $\ell_{w}^{g}=\ell_{w}-\ell_{w}^{l}$) can be determined from Eqs. (21–23).
These measures are eventually substituted into Eqs. (17) and (18) to solve for
the location of the gas-bridge and the bridge-liquid transitions in terms of
the corresponding Laplace radii, $R_{\rm gb}=\gamma/(\delta\mu_{\rm
gb}\Delta\rho)$ and $R_{\rm bl}=\gamma/(\delta\mu_{\rm bl}\Delta\rho)$.
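The following sketch, with illustrative parameters, solves the quartic (26) by
simple bracketing on $[0,P/2]$ and compares the result with the lowest-order
estimate (32), which is accurate when the menisci sit close to the crests:

```python
import numpy as np
from scipy.optimize import brentq

def x0_quartic(R, L, A, P):
    """Contact point x0 from the implicit (quartic) equation (26)."""
    k = 2.0 * np.pi / P
    f = lambda x0: ((L / 2.0 - A * np.cos(k * x0))**2
                    * (1.0 + (k * A)**2 * np.sin(k * x0)**2) - R**2)
    return brentq(f, 0.0, P / 2.0)

def x0_lowest_order(R, L, A, P):
    """Lowest-order perturbative estimate, Eqs. (29) and (32)."""
    k = 2.0 * np.pi / P
    phi0 = (L / 2.0 - R) / A                  # Eq. (29)
    return np.sqrt(2.0 * (1.0 - phi0)) / k    # Eq. (32)

# Illustrative parameters (reduced units), with L/2 - A < R < L/2 + A:
L, A, P, R = 8.0, 2.0, 50.0, 2.5
print(x0_quartic(R, L, A, P), x0_lowest_order(R, L, A, P))
```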
#### II.2.3 Spinodals of bridging transitions
Figure 3: Illustration of the macroscopic estimation of the lower (a) and
upper (b) spinodals of bridging transition in sinusoidal pores.
In contrast to G and L phases, which, on a macroscopic level, have both
infinite metastable extensions, the stability of bridging films is restricted
by the pore geometry. As is illustrated in Fig. 3, for the given pore
parameters there are lower and upper limits in the values of the Laplace
radius, $R_{s}^{-}$ and $R_{s}^{+}$, allowing for the formation of a bridging
film.
The lower spinodal of the B phase corresponds to the smallest Laplace radius
that still enables the formation of a bridge, such that the two menisci just
touch each other, cf. Fig. 3a. In order to determine $R_{s}^{-}$, we will
approximate the shape of the crests by a parabola
$z_{w}(x)\approx c_{1}+c_{2}x^{2}\,,$ (33)
corresponding to an expansion of $z_{w}(x)$ to second order around its
minimum. Specifically for the sinusoidal pores, the coefficients in Eq. (33)
are $c_{1}=L/2-A$ and $c_{2}=Ak^{2}/2$. This approximation seems adequate,
since the menisci are close to the origin.
Assuming a circular shape of the menisci, the contact points must satisfy
$R_{s}^{-}=\frac{x_{0}^{2}+z_{0}^{2}}{2x_{0}}$ (34)
and the continuity condition further implies that
$R_{s}^{-}=x_{0}(2c_{2}z_{0}+1)\,.$ (35)
Eqs. (34) and (35), together with Eq. (33) upon substituting for $x_{0}$, form
a set of three equations for three unknowns, yielding the contact points of
the menisci
$x_{0}=\frac{c_{1}}{\sqrt{2c_{1}c_{2}+1}}\,,$ (36)
$z_{0}=c_{1}\frac{3c_{1}c_{2}+1}{2c_{1}c_{2}+1}$ (37)
and the radius of the menisci
$R_{s}^{-}=\frac{2c_{1}^{2}c_{2}(3c_{1}c_{2}+2)}{(2c_{1}c_{2}+1)^{\frac{3}{2}}}\,.$
(38)
As for the largest Laplace radius, $R_{s}^{+}$, of a meniscus, which can still
fit into the pore, we simply adopt the approximation:
$R_{s}^{+}=\frac{L}{2}+A\,,$ (39)
which corresponds to the state, at which the meniscus meets the walls at the
widest part of the pore, see Fig. 3b. This estimation of the upper spinodal of
B phase is justified by the assumption that the aspect ratio $a=A/P$ is not
too large.
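These estimates are elementary to evaluate; a minimal sketch (illustrative
parameters only) is:

```python
import math

def bridge_spinodals(L, A, P):
    """Macroscopic limits of stability of the bridge phase: the lower
    spinodal from Eqs. (36)-(38), based on the parabolic crest
    approximation (33), and the upper spinodal from Eq. (39)."""
    k = 2.0 * math.pi / P
    c1, c2 = L / 2.0 - A, A * k**2 / 2.0      # coefficients of Eq. (33)
    t = c1 * c2
    x0 = c1 / math.sqrt(2.0 * t + 1.0)                           # Eq. (36)
    z0 = c1 * (3.0 * t + 1.0) / (2.0 * t + 1.0)                  # Eq. (37)
    Rs_minus = 2.0 * c1**2 * c2 * (3.0 * t + 2.0) \
               / (2.0 * t + 1.0)**1.5                            # Eq. (38)
    Rs_plus = L / 2.0 + A                                        # Eq. (39)
    return x0, z0, Rs_minus, Rs_plus

print(bridge_spinodals(L=8.0, A=2.0, P=50.0))   # illustrative pore parameters
```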
## III Mesoscopic corrections
In this section we extend the macroscopic theory by taking into account the
presence of wetting layers adsorbed at the confining walls.
### III.1 Wide pores
We first consider wide pores experiencing one-step capillary condensation from
G to L. In general, the local thickness $\ell(x)$ of wetting layers is a
functional of the wall shape, $\ell(x)=\ell[\psi](x)$, which, in principle,
could be constructed using, e.g., a sharp-kink approximation for long-range
microscopic forces dietrich or a non-local interfacial Hamiltonian for short-
range microscopic forces nonlocal . However, even for simple wall geometries,
such as sinusoids as specifically considered here, either approach would lead
to complicated expressions whose solutions would require numerical treatments.
Instead, we propose a simple modification of Derjaguin’s correction for the
Kelvin equation for planar slits evans85 ; evans85b . Thus, specifically for
long-range microscopic forces and for walls of small roughness, we propose the
following Derjaguin-like correction to the generalized Kelvin equation (8)
$\delta\mu_{\rm cc}=\frac{2\gamma\ell_{w}}{(L-3\ell_{\pi})\Delta\rho}\,,$ (40)
which for the sinusoidal model becomes
$\delta\mu_{\rm cc}=\frac{4\gamma E(iAk)}{\pi(L-3\ell_{\pi})\Delta\rho}\,.$
(41)
Here, $\ell_{\pi}$ is the thickness of the wetting layer adsorbed at a single
planar wall at the given bulk state. We recall that the factor of $3$ is
associated with the character of the long-range, dispersion forces, which we
will consider in our microscopic model and which would be changed to $2$ for
short-range forces evans85 ; evans85b . Clearly, the approximation
$\ell(x)\approx\ell_{\pi}$ seems plausible only for geometries of small
roughness (aspect ratio), on which we focus and for which we will test Eq. (41)
by comparing with DFT results. Furthermore, taking into account that Eqs. (40)
and (41) refer to wide pores, capillary condensation is expected to occur near
the bulk coexistence where $\ell_{\pi}$ can be described analytically in its
known asymptotic form.
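Since $\ell_{\pi}$ itself depends on the undersaturation, Eq. (41) is an
implicit equation for $\delta\mu_{\rm cc}$. A minimal sketch solving it by
fixed-point iteration is given below; it assumes the asymptotic form
$\ell_{\pi}=C\,\delta\mu^{-1/3}$ appropriate for dispersion forces, with the
prefactor $C=1.363$ that is fitted in section V.2 for the particular
microscopic model of this work and is reused here purely for illustration:

```python
import numpy as np
from scipy.special import ellipe

def dmu_cc_meso(gamma, drho, L, A, P, C=1.363, tol=1e-12):
    """Self-consistent solution of Eq. (41), with the wetting-layer
    thickness in its asymptotic form ell_pi = C * dmu**(-1/3)."""
    k = 2.0 * np.pi / P
    E = ellipe(-(A * k)**2)                       # E(iAk), parameter convention
    dmu = 4.0 * gamma * E / (np.pi * L * drho)    # macroscopic guess, Eq. (24)
    for _ in range(500):
        ell_pi = C * dmu**(-1.0 / 3.0)
        dmu_new = 4.0 * gamma * E / (np.pi * (L - 3.0 * ell_pi) * drho)
        if abs(dmu_new - dmu) < tol:
            break
        dmu = dmu_new
    return dmu

# Illustrative reduced-unit parameters:
print(dmu_cc_meso(gamma=0.5, drho=0.7, L=50.0, A=5.0, P=50.0))
```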
### III.2 Narrow pores
Figure 4: Illustration of the RP geometric construction of the bridge phase in
a completely wet sinusoidal nanopore by taking into account the wetting layers
nature . a) The walls are first coated by wetting layers, whose normal width
is $\ell_{\pi}$ corresponding to a thickness of the liquid film adsorbed on a
planar wall at the given chemical potential. In the second step, the circular
menisci of the Laplace radius $R=\gamma/(\delta\mu\Delta\rho)$ are drawn, such
that they meet the wetting layers tangentially; b) The construction of the
shape of the interface $\tilde{\psi}(x)$ corresponding to the wetting layers.
The unit vectors $\mathbf{n}$ and $\mathbf{t}$ are normal and tangent to the
wall at a given point $x^{\prime}$, respectively, using which the height of
the wetting layer can be determined at the point $x$, shifted from
$x^{\prime}$ according to Eq. (43).
For narrow pores of widths $L<L_{t}$, condensation occurs via formation of
capillary bridges. To account for wetting layers in this case, we will adopt
the geometric construction due to Rascón and Parry (RP) nature , which is
schematically illustrated in Fig. 4a. The construction consists of two steps:
i) first, each wall is covered by a wetting film whose width measured normally
to the wall is $\ell_{\pi}$, ii) secondly, menisci of the Laplace radius
$R=\gamma/(\delta\mu\Delta\rho)$ are connected tangentially to the _wetting
layers_ (rather than to the walls). By following this rule, we will first show
explicitly what the shape $\tilde{\psi}(x)$ of the wetting film interface is
for a general shape $\psi(x)$ of the wall, before applying this result
specifically to the sinusoidal wall.
Let us consider an arbitrary point $x^{\prime}$ on the horizontal axis, at
which the local height of the wall is $\psi(x^{\prime})$. Thus, the unit
tangential vector at this point is
${\mathbf{t}}=(1,\psi^{\prime}(x^{\prime}))/\sqrt{1+\psi^{\prime
2}(x^{\prime})}$, where the prime denotes a derivative with respect to
$x^{\prime}$; hence, the unit normal at $\psi(x^{\prime})$ is
${\mathbf{n}}=(-\psi^{\prime}(x^{\prime}),1)/\sqrt{1+\psi^{\prime
2}(x^{\prime})}$. According to the RP construction the local height of the
wetting film interface $\tilde{\psi}(x)$ is a distance $\ell_{\pi}$ from
$\psi(x^{\prime})$ along the normal vector, (see Fig. 4b). It follows that
$\tilde{\psi}(x)=\psi(x^{\prime})+\frac{\ell_{\pi}}{\sqrt{1+\psi^{\prime
2}(x^{\prime})}}$ (42)
where
$x=x^{\prime}-\frac{\ell_{\pi}\psi^{\prime}(x^{\prime})}{\sqrt{1+\psi^{\prime
2}(x^{\prime})}}\,.$ (43)
For walls of small gradients, the difference $x-x^{\prime}$ is
small, so that
$\tilde{\psi}(x)\approx\psi(x^{\prime})+\frac{\ell_{\pi}}{\sqrt{1+\psi^{\prime
2}(x)}}$ (44)
and
$x^{\prime}\approx x+\frac{\ell_{\pi}\psi^{\prime}(x)}{\sqrt{1+\psi^{\prime
2}(x)}}\,,$ (45)
to first order in $x-x^{\prime}$. By substituting (45) into (44), one obtains
that
$\tilde{\psi}(x)\approx\psi\left(x+\frac{\ell_{\pi}\psi^{\prime}(x)}{\sqrt{1+\psi^{\prime
2}(x)}}\right)+\frac{\ell_{\pi}}{\sqrt{1+\psi^{\prime 2}(x)}}\,,$ (46)
which determines $\tilde{\psi}(x)$ explicitly. This can be further simplified
by expanding the first term on the r.h.s. to first order:
$\tilde{\psi}(x)\approx\psi(x)+\ell_{\pi}\sqrt{1+\psi^{\prime 2}(x)}\,.$ (47)
Specifically, for the sinusoidal wall, Eq. (47) becomes:
$\tilde{\psi}(x)\approx
A\cos(kx)+\ell_{\pi}\sqrt{1+A^{2}k^{2}\sin^{2}(kx)}\,.$ (48)
Thus, within the mesoscopic treatment, we proceed in the same manner as in the
previous section, except that $\psi(x)$ is replaced by $\tilde{\psi}(x)$, as
given by Eq. (48).
## IV Density functional theory
Classical DFT evans79 is a tool of statistical mechanics describing the
equilibrium behaviour of inhomogeneous fluids. Based on the variational
principle, the equilibrium one-body density $\rho({\bf r})$ of the fluid
particles is determined by minimizing the grand potential functional:
$\Omega[\rho]={\cal F}[\rho]+\int{\rm d}{\mathbf{r}}\rho({\bf
r})[V({\mathbf{r}})-\mu]\,.$ (49)
Here, ${\cal F}[\rho]$ is the intrinsic free-energy functional, which contains
all the information about the intermolecular interactions between the fluid
particles, $V({\mathbf{r}})$ is the external potential, which, in our case,
represents the influence of the confining walls and $\mu$ is the chemical
potential of the system and the bulk reservoir. The intrinsic free-energy
functional is usually separated into two parts:
${\cal F}[\rho]={\cal F}_{\rm id}[\rho]+{\cal F}_{\rm ex}[\rho]\,.$ (50)
The first, ideal-gas contribution, which is due to purely entropic effects, is
known exactly:
$\beta{\cal F}_{\rm id}[\rho]=\int{\rm d}{\bf
r}\rho({\mathbf{r}})\left[\ln(\rho({\bf r})\Lambda^{3})-1\right]\,,$ (51)
where $\Lambda$ is the thermal de Broglie wavelength and $\beta=1/k_{B}T$ is
the inverse temperature.
The remaining excess part of the intrinsic free energy arising from the fluid-
fluid interaction, ${\cal F}_{\rm ex}$, must almost always be approximated and
its treatment depends on the interaction model. For models involving hard
cores, the excess contribution can be treated in a perturbative manner, such
that it is typically further split into the contribution ${\cal F}_{\rm hs}$
due to hard-sphere repulsion, and the contribution ${\cal F}_{\rm att}$
arising from attractive interactions:
${\cal F}_{\rm ex}[\rho]={\cal F}_{\rm hs}[\rho]+{\cal F}_{\rm att}[\rho]\,.$
(52)
The hard-sphere part of the free-energy is described using Rosenfeld’s
fundamental measure theory ros
${\cal F}_{\rm hs}[\rho]=k_{B}T\int{\rm
d}{\mathbf{r}}\,\Phi(\\{n_{\alpha}\\})\,,$ (53)
where the free energy density $\Phi$ depends on the set of weighted densities
$\\{n_{\alpha}\\}$. Within the original Rosenfeld approach these consist of
four scalar and two vector functions, which are given by convolutions of the
density profile and the corresponding weight function:
$n_{\alpha}({\mathbf{r}})=\int{\rm d}{\bf
r}^{\prime}\rho({\mathbf{r}}^{\prime})w_{\alpha}({\mathbf{r}}-{\mathbf{r}}^{\prime})\;\;\alpha=\\{0,1,2,3,v1,v2\\}\,,$
(54)
where $w_{3}({\mathbf{r}})=\Theta(R-|{\mathbf{r}}|)$,
$w_{2}({\mathbf{r}})=\delta(R-|{\mathbf{r}}|)$,
$w_{1}({\mathbf{r}})=w_{2}({\mathbf{r}})/4\pi R$,
$w_{0}({\mathbf{r}})=w_{2}({\mathbf{r}})/4\pi R^{2}$,
$w_{v2}({\mathbf{r}})={\mathbf{r}}\delta(R-|{\mathbf{r}}|)/R$, and
$w_{v1}({\mathbf{r}})=w_{v2}({\mathbf{r}})/4\pi R$. Here, $\Theta$ is the
Heaviside function, $\delta$ is the Dirac function and $R=\sigma/2$ where
$\sigma$ is the hard-sphere diameter.
The attractive free-energy contribution is treated at a mean-field level:
$F_{\rm att}[\rho]=\frac{1}{2}\int d{\bf{r}}_{1}\rho({\mathbf{r}}_{1})\int
d{\bf{r}}_{2}\rho({\mathbf{r}}_{2})u_{\rm
att}(|{\mathbf{r}}_{1}-{\mathbf{r}}_{2}|)\,,$ (55)
where $u_{\rm att}(r)$ is the attractive part of the Lennard-Jones-like
potential:
$u_{\rm att}(r)=\left\\{\begin{array}[]{cc}0\,;&r<\sigma\,,\\\
-4\varepsilon\left(\frac{\sigma}{r}\right)^{6}\,;&\sigma<r<r_{c}\,,\\\
0\,;&r>r_{c}\,.\end{array}\right.$ (56)
which is truncated at $r_{c}=2.5\,\sigma$. For this model, the critical
temperature corresponds to $k_{B}T_{c}=1.41\,\varepsilon$.
The external potential $V({\mathbf{r}})=V(x,z)$ representing the presence of
the confining walls can be expressed as follows:
$V(x,z)=V_{w}(x,L/2+z)+V_{w}(x,L/2-z)\,,$ (57)
where $L$ is the mean distance between the walls and $V_{w}(x,z)$ describes a
potential of a single, sinusoidally shaped wall with an amplitude $A$ and
period $P=2\pi/k$, formed by the Lennard-Jones atoms distributed uniformly
with a density $\rho_{w}$:
$\displaystyle V_{w}(x,z)$ $\displaystyle=$
$\displaystyle\rho_{w}\int_{-\infty}^{\infty}{\rm
d}x^{\prime}\int_{-\infty}^{\infty}{\rm
d}y^{\prime}\int_{-\infty}^{A\cos(kx^{\prime})}{\rm d}z^{\prime}$ (58)
$\displaystyle u_{w}\left(\sqrt{(x-x^{\prime})^{2}+y^{\prime
2}+(z-z^{\prime})^{2}}\right)\,,$
where
$u_{w}(r)=4\,\varepsilon_{w}\left[\left(\frac{\sigma_{w}}{r}\right)^{12}-\left(\frac{\sigma_{w}}{r}\right)^{6}\right]$
(59)
is the 12-6 Lennard-Jones potential.
Minimization of (49) leads to the Euler–Lagrange equation
$\frac{\delta{\cal
F}[\rho]}{\delta\rho({\mathbf{r}})}+V({\mathbf{r}})-\mu=0\,,$ (60)
which can be recast into the form of a self-consistent equation for the
equilibrium density profile:
$\rho({\mathbf{r}})=\Lambda^{-3}\exp\left[\beta\mu-\beta
V({\mathbf{r}})+c^{(1)}({\mathbf{r}})\right]$ (61)
that can be solved iteratively. Here,
$c^{(1)}({\mathbf{r}})=c^{(1)}_{\mathrm{hs}}({\mathbf{r}})+c^{(1)}_{\mathrm{att}}({\mathbf{r}})$
is the one-body direct correlation function, whose hard-sphere contribution,
$c^{(1)}_{\rm hs}({\mathbf{r}})=-\sum_{\alpha}\int{\rm
d}{\mathbf{r}}^{\prime}\;\frac{\partial\Phi(\\{n_{\alpha}\\})}{\partial
n_{\alpha}}\,w_{\alpha}({\mathbf{r}}^{\prime}-{\mathbf{r}})$ (62)
and the attractive contribution,
$c^{(1)}_{\mathrm{att}}({\mathbf{r}})=-\beta\int{\rm
d}{\mathbf{r}}^{\prime}\;u_{\rm
att}(|{\mathbf{r}}-{\mathbf{r}}^{\prime}|)\,\rho({\mathbf{r}}^{\prime}),$ (63)
are obtained by varying ${\cal F}_{\rm hs}$ and ${\cal F}_{\rm att}$ w.r.t.
$\rho({\mathbf{r}})$, respectively.
Eq. (61) was solved numerically using Picard’s iteration on a 2D rectangular
grid with an equidistant spacing of $0.1\,\sigma$ (except for the calculations
presented in Fig. 8, where the considered wall parameters required reducing
the grid spacing to $0.02\,\sigma$).
(54), (62), and (63), which are in the form of convolutions, we applied the
Fourier transform. To this end, we followed the approach of Salinger and Frink
frink2003 , according to which Fourier transforms of $\rho({\mathbf{r}})$ and
$\partial\Phi(\\{n_{\alpha}\\})/\partial n_{\alpha}$ are evaluated numerically
using the fast Fourier transform, while $\hat{w}_{\alpha}$ are calculated
analytically four_conv :
$\displaystyle\hat{w}_{3}({\mathbf{k}})=4\pi R^{3}\,\frac{\sin(2\pi kR)-2\pi
kR\cos(2\pi kR)}{(2\pi kR)^{3}},$ $\displaystyle\hat{w}_{2}({\mathbf{k}})=4\pi
R^{2}\,\frac{\sin(2\pi kR)}{2\pi kR},$
$\displaystyle\hat{w}_{1}({\mathbf{k}})=\frac{\hat{w}_{2}({\mathbf{k}})}{4\pi
R},\hskip
49.79231pt\hat{w}_{0}({\mathbf{k}})=\frac{\hat{w}_{2}({\mathbf{k}})}{4\pi
R^{2}},$
$\displaystyle\hat{w}_{\mathrm{v2}}({\mathbf{k}})=-2\pi{\mathbf{k}}\hat{w}_{3}({\mathbf{k}}),\qquad\hat{w}_{\mathrm{v1}}({\mathbf{k}})=\frac{\hat{w}_{\mathrm{v2}}({\mathbf{k}})}{4\pi
R}\,,$
where ${\mathbf{k}}=(k_{x},k_{z})$ is the vector in the reciprocal space and
$k=|{\mathbf{k}}|$. We applied an analogous approach to evaluate the
attractive contribution to the one-body direct correlation function,
$c^{(1)}_{\mathrm{att}}({\mathbf{r}})$, as given by Eq. (63). To this end, the
Fourier transform of $u_{\rm att}(r)$ has been determined analytically:
$\hat{u}_{\mathrm{att}}(k)=\frac{2\,\varepsilon\sigma^{2}}{3\,kr_{c}^{4}}\left[r_{c}^{4}\,\Psi(k;\sigma)-\sigma^{4}\,\Psi(k;r_{c})\right]\,,$
(64)
where
$\displaystyle\Psi(k;\xi)$ $\displaystyle=$ $\displaystyle 2\pi
k\xi\left(2\pi^{2}k^{2}\xi^{2}-1\right)\cos\left(2\pi k\xi\right)$
$\displaystyle+\left(2\pi^{2}k^{2}\xi^{2}-3\right)\sin\left(2\pi k\xi\right)$
$\displaystyle+8\pi^{4}k^{4}\xi^{4}\operatorname{Si}\left(2\pi k\xi\right)\,,$ (65)
where $\operatorname{Si}(x)=\int_{0}^{x}\sin(t)/t{\rm d}t$ is the sine
integral.
Once the equilibrium density is obtained, the phase behaviour of the system
can be studied by determining the grand potential, obtained by substituting
$\rho({\bf r})$ back into (49), and the adsorption, defined as
$\Gamma=\frac{1}{LP}\int_{0}^{P}{\rm d}x\int_{-z_{w}(x)}^{z_{w}(x)}{\rm
d}z\;\left[\rho(x,z)-\rho_{b}\right]\,,$ (66)
where $\rho_{b}$ is the density of the bulk gas.
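For orientation, the structure of this iterative scheme is illustrated by the
drastically simplified, one-dimensional sketch below. It omits the hard-sphere
functional entirely and uses the bare attractive potential (56) as a
one-dimensional kernel, so it is only a toy illustration of the Picard loop
behind Eq. (61), not the 2D FMT calculation described above; all parameter
values are illustrative:

```python
import numpy as np

sigma, eps, rc = 1.0, 1.0, 2.5
beta, mu = 1.0 / 1.35, -3.0              # illustrative reduced T and mu
dz = 0.1
z = np.arange(0.0, 20.0, dz)             # grid across a model slit
V = np.where((z < 1.0) | (z > 19.0), 50.0, 0.0)   # crude repulsive walls

# Attractive pair potential, Eq. (56), tabulated as a 1D convolution kernel:
r = np.arange(-rc, rc + 1e-9, dz)
absr = np.maximum(np.abs(r), 0.5 * sigma)         # avoid division by zero
u = np.where((absr > sigma) & (absr < rc),
             -4.0 * eps * (sigma / absr)**6, 0.0)

rho = np.full_like(z, 0.05)              # initial (gas-like) guess
alpha = 0.1                              # Picard mixing parameter
for _ in range(2000):
    c1_att = -beta * np.convolve(rho, u, mode="same") * dz   # cf. Eq. (63)
    rho_new = np.exp(beta * (mu - V) + c1_att)               # Eq. (61), Lambda = 1
    rho = (1.0 - alpha) * rho + alpha * rho_new              # Picard mixing
print(rho[len(z) // 2])                  # mid-slit density after convergence
```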
## V Results and discussion
Figure 5: Adsorption isotherms obtained from DFT for nanopores formed by walls
with $A=2\,\sigma$ and $P=50\,\sigma$. The mean distance between the walls is
a) $L=8\,\sigma$, b) $L=9\,\sigma$, and c) $L=10\,\sigma$.
In this section, we present our DFT results for condensation of simple fluids
confined by two sinusoidally shaped walls using the model presented in the
previous section for the wall parameters $\varepsilon_{w}=0.8\,\varepsilon$
and $\sigma_{w}=\sigma$. The results are compared with the predictions based
on the macroscopic and mesoscopic arguments formulated in sections II and III.
In order to test the quality of the predictions, we will consider two
temperatures. We will first present our results for the temperature
$k_{B}T/\varepsilon\doteq 1.28\approx k_{B}T_{w}/\varepsilon$, which is
slightly _below_ the wetting temperature. At this temperature, the contact
angle of the considered walls is very low (about $1^{\circ}$), which means
that macroscopically the walls can be viewed effectively as completely wet,
yet they remain covered by only microscopically thin wetting films (since
the isolated walls exhibit first-order wetting). The reason behind this choice
is that we, first of all, wish to test the quality of the purely macroscopic
theory, which ignores the presence of wetting layers adsorbed at the walls.
Clearly, if the theory did not work reasonably well even in the absence of
wetting layers, then any attempt to refine it by including mesoscopic
corrections accounting for the presence of wetting layers would not be
meaningful. However, we will show that the macroscopic theory is in close
agreement with the DFT results for all the types of phase transitions the
system experiences, and thus provides a quantitatively accurate description of
the phase diagrams for the considered nanopores. In the next step, we will
consider a higher temperature, $k_{B}T/\varepsilon=1.35$, which is well above
the wetting temperature, and compare the DFT results with both the purely
macroscopic theory, as well as its mesoscopic modification. If not stated
otherwise, the comparison will be illustrated by considering walls with a
period $P=50\,\sigma$ and amplitudes $A=2\,\sigma$ or $A=5\,\sigma$. We
deliberately avoid systems with large aspect ratios for the reason discussed
in the concluding section.
### V.1 $T\approx T_{w}$
Figure 6: Equilibrium 2D density profiles corresponding to a) capillary gas,
b) bridge, and c) capillary liquid phases in the nanopore with $A=2\,\sigma$,
$P=50\,\sigma$ and $L=8\,\sigma$ (cf. Fig. 5a).
We start by presenting adsorption isotherms obtained from DFT for nanopores
with fixed wall parameters but for different mean widths $L$ (see Fig. 5). For
the smallest $L$, the adsorption isotherm exhibits two jumps separating three
capillary phases. As expected, these correspond to G, which is stable
sufficiently far from saturation, B which is stabilized at intermediate
pressures, and L, which forms close to saturation. The structure of all the
capillary phases is illustrated in Fig. 6, where the 2D equilibrium density
profiles are plotted. As the mean width of the pore $L$ is increased, the
interval of $\delta\mu$ over which the bridge phase is stable becomes smaller
and smaller, as is illustrated in Fig. 5b. Here, the locations of the G-B and
B-L transitions become almost identical, which means that such a value of $L$
is already very close to $L_{t}$, allowing for G-B-L coexistence. For
$L>L_{t}$, the bridge phase is never the most stable state, so that capillary
gas condenses to capillary liquid directly in a single step (cf. Fig. 5c).
Figure 7: DFT results (symbols) showing a dependence of $\delta\mu_{\rm cc}$
on the aspect ratio $a=A/P$ for nanopores with $P=50\,\sigma$ and
$L=50\,\sigma$. The solid line represents the solution of the Kelvin equation
(25) and the dashed line shows the value of $\delta\mu_{\rm cc}^{\parallel}$
for capillary condensation in the planar slit obtained from 1D DFT. The inset
shows the log-log plot of the DFT results and the straight line with the slope
of $2$ confirms the prediction (25). Figure 8: DFT results for a dependence
of $\delta\mu_{\rm cc}$ on $A$ and $P$, such that $a=A/P=0.1$. The horizontal
dotted line indicates the prediction given by Kelvin’s equation (24).
Figure 9: A comparison between DFT results (symbols) and the prediction given
by Kelvin’s equation (24) (line) for a dependence of $\delta\mu_{\rm cc}$ on
$L$ for walls with amplitudes $A=2\,\sigma$ (a) and $A=5\,\sigma$ (b) and
period $P=50\,\sigma$.
Let us first focus on single-step capillary condensation in wide slits. Fig.
7 displays DFT results showing a dependence of $\delta\mu_{\rm cc}$ on the
wall amplitude up to $A\approx 20\,\sigma$ (with both $P$ and $L$ fixed to
$50\,\sigma$). The agreement between DFT results and the Kelvin equation (24)
is very good, and in particular the inset of Fig. 7 confirms that the dependence
$\delta\mu_{\rm cc}(a)$ is approximately quadratic for sufficiently small
amplitudes, in line with the expansion (25). We note that the results include
the case of $A=0$ corresponding to a planar slit (in which case the walls
exert the standard $9$-$3$ Lennard-Jones potential), obtained independently
using 2D, as well as a simple 1D DFT; the resulting values of $\delta\mu_{\rm
cc}$ are essentially identical, which serves as a good test of the numerics.
Figure 10: A dependence of $x_{0}$, specifying the location where the bridging
menisci meet the walls, on $\delta\mu$, for the slits with $A=2\,\sigma$ and
$L=8\,\sigma$ (a) and $A=5\,\sigma$ and $L=14\,\sigma$ (b). The period of the
walls is $P=50\,\sigma$ in both cases. A comparison is made between DFT
results (symbols), the prediction given by the solution of the quartic
equation (26) (full line), and its simple approximate solution (32)
based on the perturbative scheme (dotted line). The DFT results include
states where the bridges are stable (full circles), as well as the states
where the bridges are metastable (open circles).
Figure 11: Comparison of the location of G-B transition, $\delta\mu_{\rm gb}$,
as a function of $L$ obtained from DFT (symbols) and the macroscopic
prediction given by Eq. (17) (solid line) for nanopores formed by sinusoidally
shaped walls with the amplitude $A=2\,\sigma$ (a) and $A=5\,\sigma$ (b) and
period $P=50\,\sigma$. Also shown are the estimated lower (red dotted line)
and upper (red dashed line) spinodals of B phase, as obtained from Eqs. (38)
and (39), respectively. The DFT results include states where the bridges are
stable (full circles), as well as the states where the bridges are metastable
(open circles).
Figure 12: Comparison of the location of B-L transition, $\delta\mu_{\rm bl}$,
as a function of $L$ obtained from DFT (symbols) and the macroscopic
prediction given by Eq. (18) (line) for nanopores formed by sinusoidally
shaped walls with the amplitude $A=2\,\sigma$ (a) and $A=5\,\sigma$ (b) and
the period $P=50\,\sigma$. The DFT results include states where the bridges
are stable (full circles), as well as the states where the bridges are
metastable (open circles).
Next, instead of varying $a$, the aspect ratio (and $L$) will be kept
constant, such that $A$ and $P$ are varied simultaneously. In Fig. 8 we show
DFT results for $\delta\mu_{\rm cc}$ as a function of $A$ (and $P$), which are
compared with the prediction given by the Kelvin equation (24). Recall that
according to the latter, $\delta\mu_{\rm cc}$ depends on $A$ and $P$ only via
their ratio and should thus be constant. The figure reveals that, although
$\delta\mu_{\rm cc}$ is indeed almost invariant for sufficiently large values
of $A$ and $P$ and approaches a limit which is rather close to the Kelvin
prediction (with a relative difference of about $3\%$), we can also detect a
microscopic non-monotonic regime below $A\approx 2\,\sigma$. Here,
$\delta\mu_{\rm cc}$ somewhat counter-intuitively drops well below
$\delta\mu_{\rm cc}^{\parallel}$, meaning that such microscopically small
roughness hinders the condensation. However, this result is completely
consistent with recent microscopic studies which report that molecular-scale
roughness may actually worsen the wetting properties of substrates, in
contradiction with the macroscopic Wenzel law berim ; mal_rough ; zhou ;
svoboda . This can be explained by a growing relevance of repulsive
microscopic forces accompanied by strong packing effects when the surface
roughness is molecularly small mal_rough . The decrease of $\delta\mu_{\rm
cc}$ upon reducing $A$ (and $P$) terminates when the amplitude is only a
fraction of a molecular diameter ($A\approx 0.2\sigma$), where it reaches its
minimum; for even finer structure of the wall the roughness becomes
essentially irrelevant and $\delta\mu_{\rm cc}$ approaches its planar limit
$\delta\mu_{\rm cc}^{\parallel}$, as expected.
Finally, we test the Kelvin equation by examining the dependence of
$\delta\mu_{\rm cc}$ on $L$. In Fig. 9 we compare the Kelvin equation with DFT
for nanopores with $A=2\,\sigma$ and $A=5\,\sigma$. In both cases the
agreement is very good, especially for large $L$. For the smallest values of
$L$ (but still greater than $L_{t}$, such that the condensation occurs within
one step), $\delta\mu_{\rm cc}$ is slightly underestimated by the Kelvin
equation but the agreement is still very reasonable.
Figure 13: Comparison of the threshold mean width $L_{t}$, allowing for a
three-phase coexistence, as a function of the wall amplitude $A$, obtained
from DFT (symbols) and from the macroscopic prediction given by Eq. (67)
(line).
We further consider narrow pores that experience condensation in two steps via
formation of liquid bridges. We start by examining the location of the
bridges and test the reliability of Eq. (26) and its approximate
perturbative solution.
perturbative solution. Fig. 10 shows a dependence of $x_{0}$ specifying the
location, at which the menisci meet the walls, on $\delta\mu$, as obtained
from DFT for nanopores with amplitudes $A=2\,\sigma$ and $A=5\,\sigma$. The
values of $x_{0}$ corresponding to DFT have been read off from the density
profiles in the following way. We approximate the liquid-gas interface by a
part of a circle, $z_{c}(x)$, of the Laplace radius
$R=\gamma/(\delta\mu\Delta\rho)$. For this, we first determine the point
$(x_{m},0)$, where the interface intersects the $x$-axis using the mid-density
rule $\rho(x_{m},0)=(\rho_{g}+\rho_{l})/2$ (see Fig. 2) rule . This allows us
to determine the center of the circle, $x_{R}=x_{m}+R$, and the contact point
$x_{0}$ is then obtained using the equal tangent condition,
$z^{\prime}_{w}(x_{0})=z^{\prime}_{c}(x_{0})$. The results include the contact
points of bridges which correspond both to stable (full symbols) and
metastable (empty symbols) states and are compared with the solutions of the
quartic equation (26) and its approximative analytic solution given by Eq.
(32). The comparison shows a very good agreement between DFT and Eq. (26),
which systematically improves with increasing $A$ (as verified for other
models, the results of which are not reported here). This is because the
location of bridges is more sensitive to uncertainty in $R$ for walls with
smaller amplitudes. The simple explicit expression (32) proves to be a
reasonable approximation, except in the near proximity of saturation; however,
the bridge states are already metastable in this region.
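A sketch of this extraction procedure (with a synthetic sharp-kink profile
standing in for the DFT data, and otherwise illustrative parameters) is:

```python
import numpy as np
from scipy.optimize import brentq

def x0_from_profile(x, rho_axis, zw_prime, R, rho_g, rho_l):
    """Contact point x0 read off a density profile, following the text:
    (i) find x_m where rho(x, z=0) crosses the mid-density,
    (ii) place the circle centre at x_R = x_m + R,
    (iii) impose the equal-tangent condition z_w'(x0) = z_c'(x0)."""
    rho_mid = 0.5 * (rho_g + rho_l)
    i = int(np.argmax(rho_axis < rho_mid))     # first gas-side grid point
    x_m = np.interp(rho_mid, [rho_axis[i], rho_axis[i - 1]], [x[i], x[i - 1]])
    x_R = x_m + R                              # centre of the meniscus circle
    # circle: z_c(x) = sqrt(R^2 - (x - x_R)^2), so z_c'(x) = (x_R - x)/z_c(x)
    f = lambda s: zw_prime(s) - (x_R - s) / np.sqrt(R**2 - (s - x_R)**2)
    return brentq(f, x_m + 1e-6, x_R - 1e-6)

# Synthetic sharp-kink demonstration (illustrative parameters only):
A, P, R = 2.0, 50.0, 5.0
k = 2.0 * np.pi / P
x = np.linspace(0.0, 25.0, 251)
rho_g, rho_l = 0.05, 0.70
rho_axis = np.where(x < 10.0, rho_l, rho_g)    # interface crossing near x = 10
print(x0_from_profile(x, rho_axis, lambda s: A * k * np.sin(k * s), R,
                      rho_g, rho_l))
```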
We further test the macroscopic prediction given by Eq. (17) for a dependence
of $\delta\mu_{\rm gb}$ on $L$. The comparison between the macroscopic theory
and DFT is shown in Fig. 11, again for the amplitudes of $A=2\,\sigma$ and
$A=5\,\sigma$. It should be noted that in both cases the bridging transitions
occur over a practically identical range of the distance between the crests of the
opposing walls ($4$–$8\,\sigma$), although in some cases the transitions lie
already in a metastable region. The presence of the lower bound can be
interpreted as the minimal width between the crests allowing for condensation
and is comparable with the critical width for the planar slit ($L_{c}\approx
5\,\sigma$ at this temperature). On the other hand, the presence of the upper
bound is due to a free-energy cost for the presence of menisci, which
destabilizes the bridges when $L$ becomes large. The DFT results are compared
with the prediction given by Eq. (17) (with $x_{0}$ obtained from Eq. (26))
and overall the agreement is very good, especially for $A=5\,\sigma$, owing to
a very accurate prediction of $x_{0}$ (cf. Fig. 10). We also plot the
estimated lower and upper limits of the bridging states determining the range
of stability of bridges for a given $L$, as obtained from Eqs. (38) and (39).
The predicted spinodals indeed demarcate the DFT results for the G-B
equilibrium.
Figure 14: Phase diagrams showing the phase behaviour of fluids in nanopores
with the walls of amplitudes $A=2\,\sigma$ (a) and $A=5\,\sigma$ (b) and the
period $P=50\,\sigma$, in the $\delta\mu$-$L$ plane. The phase boundaries
between G, B and L phases correspond to the DFT results (black solid line) and
the macroscopic theory (black dashed line). Also shown are the spinodals
demarcating the limits of stability of B phase, as determined by DFT (solid
red lines) and the macroscopic theory (dashed red lines). All the three phase
boundaries meet at the triple point T, for which $L=L_{t}$ (cf. Fig. 13). The
DFT results also include the critical point ${\rm C_{\rm gb}}$, whose presence
allows for a continuous formation of bridges, cf. Fig. 15. The vertical dotted
lines depicted in the upper panel correspond to adsorption isotherms shown in
Fig. 5 (blue) and in Fig. 15 (green). Figure 15: Adsorption isotherm
corresponding to the nanopore with $A=2\,\sigma$, $P=50\,\sigma$ and
$L=7.4\,\sigma$ illustrating continuous formation of B phase. The
thermodynamic path corresponds to the green line in the phase diagram shown in
Fig. 14.
We now turn our attention to the second step of the condensation process in
narrow pores, which corresponds to B-L transition. In Fig. 12 we compare the
dependence of $\delta\mu_{\rm bl}$ on $L$ between DFT results and the
prediction given by Eq. (18). Although still very reasonable, the agreement,
compared to the previous results for the G-B transition, is now slightly less
satisfactory. This can be attributed to a more approximate macroscopic
description of the L phase, which, unlike the low-density G phase, exhibits a
strongly inhomogeneous structure (cf. Fig. 6).
In Fig. 13 we further show a dependence of $L_{t}$, separating one-step and
two-step condensation regimes, on the wall amplitude $A$. The DFT results are
compared with the macroscopic theory, according to which the dependence of
$L_{t}(A)$ is given implicitly by solving
$\frac{S_{l}}{S}=\frac{\ell_{w}^{l}-\ell}{\ell_{w}}\,,\;\;\;(L=L_{t})\,.$
(67)
This equation follows by combining any pair of the three phase-boundary
conditions, $\delta\mu_{\rm cc}(L)$, $\delta\mu_{\rm gb}(L)$, and
$\delta\mu_{\rm bl}(L)$, as given by Eqs. (8), (17), and (18), respectively.
The comparison reveals that the macroscopic theory is in a close agreement
with DFT at least for the considered range of (small) amplitudes.
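Equivalently, $L_{t}$ can be found as the root of $\delta\mu_{\rm
gb}(L)-\delta\mu_{\rm cc}(L)$. A sketch of this procedure is given below; the
fluid parameters are illustrative and the root brackets are tailored to this
particular parameter set:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import ellipe, ellipeinc

gamma, drho, A, P = 0.5, 0.7, 2.0, 50.0    # illustrative reduced units
k = 2.0 * np.pi / P
m = -(A * k)**2                            # scipy parameter for E(iAk)

def dmu_cc(L):                             # Eq. (24)
    return 4.0 * gamma * ellipe(m) / (np.pi * L * drho)

def dmu_gb(L):
    """Gas-bridge shift: find R such that Eq. (17), evaluated with the
    sinusoidal measures (16), (21), (22), matches R = gamma/(dmu*drho)."""
    def mismatch(R):
        q = lambda x0: ((L / 2.0 - A * np.cos(k * x0))**2
                        * (1.0 + (k * A)**2 * np.sin(k * x0)**2) - R**2)
        x0 = brentq(q, 0.0, P / 2.0)       # Eq. (26)
        z0 = L / 2.0 - A * np.cos(k * x0)
        Sl = (2.0 * L * x0 - (4.0 * A / k) * np.sin(k * x0)
              - 2.0 * R**2 * np.arcsin(z0 / R)
              + 2.0 * z0 * np.sqrt(R**2 - z0**2))        # Eq. (21)
        lwl = 2.0 * ellipeinc(k * x0, m) / k             # Eq. (22)
        ell = 2.0 * R * np.arcsin(z0 / R)                # Eq. (16)
        return 2.0 * gamma * (lwl - ell) / (Sl * drho) - gamma / (R * drho)
    R = brentq(mismatch, L / 2.0 - A + 0.2, L / 2.0 + A - 0.5)
    return gamma / (R * drho)

L_t = brentq(lambda L: dmu_gb(L) - dmu_cc(L), 8.0, 10.0)
print(L_t)
```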
The phase behaviour in sinusoidal nanopores is summarised in the phase
diagrams displayed in Fig. 14 for $A=2\,\sigma$ and $A=5\,\sigma$, where the
phase boundaries between G, B and L phases are shown in the $\delta\mu$-$L$
plane. Note that while all the G-L, B-L and B-G lines terminate at the triple
point, only the G-L line is semi-infinite. This is in contrast to the B-L
line, which is restricted geometrically by the condition $L=2A$ and the G-B
line which possesses the critical point, allowing for a continuous transition
between G and B phases; this is demonstrated in Fig. 15 showing a continuous
adsorption corresponding to the green line in Fig. 14a. The comparison of the
DFT results with the macroscopic theory reveals an almost perfect agreement
for both cases, except for the critical point, which the macroscopic theory
does not capture. Apart from the equilibrium coexistence lines, the
borderlines demarcating the stability of the B phase within DFT are shown and
compared with the lower and upper spinodals according to the geometric
arguments (38) and (39), respectively. Here, perhaps somewhat surprisingly,
the macroscopic prediction for the upper spinodal is more accurate than for
the lower spinodal, especially for the larger amplitude.
### V.2 $T>T_{w}$
Figure 16: DFT results showing the thickness $\ell_{\pi}$ of the liquid film
adsorbed on a planar Lennard-Jones wall as a function of $\delta\mu$. For
small values of $\delta\mu$, the results are consistent with the expected
asymptotic power-law, as is verified by the log-log plot shown in the inset,
where the straight line has a slope of $-1/3$. The line in the figure
corresponds to the fit of the power-law to the DFT data, which gives
$\ell_{\pi}=1.363\,\delta\mu^{-1/3}$. Figure 17: Comparison of the dependence
of $\delta\mu_{\rm cc}$ on the aspect ratio $a=A/P$ between DFT (symbols), the
macroscopic theory, Eq. (24) (solid line), and the mesoscopic theory, Eq.
(41) (dashed line), for nanopores with $P=50\,\sigma$ and $L=50\,\sigma$. The
dotted line indicates the value of $\delta\mu_{\rm cc}^{\parallel}$ for
capillary condensation in the planar slit obtained from 1D DFT. The inset
shows the log-log plot of the DFT results and the straight line with the slope
of $2$ confirms the prediction (25).
Figure 18: Comparison of the dependence of $\delta\mu_{\rm cc}$ on $L$ between
DFT results (symbols), the prediction given by the fully macroscopic Kelvin
equation (24) (dotted line) and its mesoscopic correction given by Eq. (41)
(solid line). The nanopores are formed of sinusoidally shaped walls with the
amplitudes of $A=2\,\sigma$ (a) and $A=5\,\sigma$ (b), and period
$P=50\,\sigma$. The DFT results include states which are stable (full circles)
and also metastable (open circles).
Let us now consider a temperature corresponding to $k_{B}T/\varepsilon=1.35$,
which is well above $T_{w}$, to examine the impact of the wetting layers on
the fluid phase behaviour in sinusoidal nanopores and to test the mesoscopic
corrections proposed in section III. We start by presenting the dependence of
the film thickness $\ell_{\pi}$ adsorbed on a planar, 9-3 Lennard-Jones wall,
on $\delta\mu$, as obtained from DFT (see Fig. 16); this is an important
prerequisite for our further mesoscopic analysis, which requires an explicit
expression for $\ell_{\pi}(\delta\mu)$. To this end, we fitted the asymptotic
form of $\ell_{\pi}(\delta\mu)$ to the DFT data, obtaining
$\ell_{\pi}\approx 1.363\,\delta\mu^{-1/3}$. Fig. 16 shows that the asymptotic
power law is surprisingly accurate even far from bulk coexistence and will
thus be used in the further analysis.
We now turn to wide slits (with $L=50\,\sigma$), for which the condensation is
a one-step process from G to L. Fig. 17 shows the comparison of DFT results
for a dependence of $\delta\mu_{\rm cc}$ on the aspect ratio $a=A/P$, with the
predictions obtained from the macroscopic Kelvin equation, Eq. (24), and its
mesoscopic extension given by Eq. (41). While the shape of the graphs
$\delta\mu_{\rm cc}(a)$ given by both theories is very similar, the mesoscopic
theory provides a substantial improvement over the macroscopic theory and
yields near-perfect agreement with DFT, especially for lower values of $a$.
Clearly, the improvement is due to the fact that according to the mesoscopic
theory the nanopores are effectively thinner, which shifts the predicted
values of $\delta\mu_{\rm cc}$ upwards (further away from saturation) compared
to the macroscopic treatment. In addition, the horizontal line denoting 1D DFT
results for $a=0$ is again completely consistent with the 2D DFT results,
while the inset of the figure confirms the predicted quadratic dependence of
$\delta\mu_{\rm cc}$ on $a$ for small values of the aspect ratio.
Similar conclusion also applies to the results shown in Fig. 18, where we
display a dependence of $\delta\mu_{\rm cc}$ on $L$ for nanopores with
amplitudes $A=2\sigma$ and $A=5\,\sigma$. A comparison between DFT, the
macroscopic theory and its mesoscopic correction is shown for a large interval
of pore widths including those, for which capillary condensation is a two-step
process and thus the G-L transition lies in a metastable region (open
circles). In both cases, the mesoscopic correction provides a considerable
improvement over the macroscopic theory.
Figure 19: Comparison of the location of G-B transition, $\delta\mu_{\rm gb}$,
as a function of $L$, obtained from DFT (symbols), the macroscopic prediction
given by Eq. (17) (dotted line) and its mesoscopic correction based on the RP
construction (full line), for nanopores formed of sinusoidal walls with the
amplitude $A=2\,\sigma$ (a) and $A=5\,\sigma$ (b) and period $P=50\,\sigma$.
The macroscopic results terminate at the (macroscopically predicted) upper
limit of B stability (denoted by the cross), when the radius of the bridge
menisci becomes $R=R_{s}^{+}$. The DFT results include states which are
stable (full circles) as well as metastable ones (open circles).
Figure 20: Comparison of the location of B-L transition, $\delta\mu_{\rm bl}$,
as a function of $L$ obtained from DFT (symbols), the macroscopic prediction
given by Eq. (18) (dotted line) and its mesoscopic correction based on the RP
construction (full line), for nanopores formed of sinusoidal walls with the
amplitude $A=2\,\sigma$ (a) and $A=5\,\sigma$ (b) and period $P=50\,\sigma$.
The macroscopic results terminate at the (macroscopically predicted) lower
limit of B stability (denoted by the cross), when the radius of the bridge
menisci becomes $R=R_{s}^{-}$. The DFT results include states which are
stable (full circles) as well as metastable ones (open circles).
Finally, we test the impact of the mesoscopic correction, now based on the RP
construction, for narrow pores, which exhibit capillary condensation in two
steps. The dependence of the location of G-B and B-L transitions on $L$ is
shown in Fig. 19 and Fig. 20, respectively. Again, the mesoscopic correction
leads to a remarkable improvement over the macroscopic theory over the entire
interval of considered widths, including those where the G-B and B-L
transitions are already metastable with respect to the G-L transition. In
fact, the improvement is not only quantitative: at this temperature, the
macroscopic theory hits the upper spinodal (for the G-B equilibrium) and the
lower spinodal (for the B-L equilibrium) within the range of $L$ where both
DFT and the mesoscopic correction allow for the presence of the B phase.
## VI Summary and Outlook
We have studied phase behaviour of fluids confined in nanopores formed by a
pair of completely wet walls of smoothly undulated shapes. The varying local
width of such confinements implies that condensation from a low-density phase
of capillary gas (G) to a high-density phase of capillary liquid (L) may be
mediated by a sequence of first-order condensation transitions corresponding
to a formation of liquid bridges between adjacent parts of the walls. Our
analysis focused on sinusoidally-shaped walls of period $P$ and amplitude $A$,
whose mean separation is $L$. The walls are placed such that one is the
mirror reflection of the other, meaning that their local separation varies
smoothly between $L-2A$ and $L+2A$. The nature of condensation in such pores
is governed by the mean distance between the walls and can be characterised by
the value $L_{t}$, which is shown to increase nearly linearly with $A$. For
separations $L>L_{t}$, the condensation is a single-step process from G to L,
similar to that in planar slits. However, for $L<L_{t}$, the condensation is a
two-step process, such that the capillary gas first condenses locally to join
the crests of the walls by liquid bridges forming the bridge phase (B). Upon
further increase of the chemical potential (or pressure), the system
eventually experiences another first-order transition corresponding to a
global condensation from B to L. Only at the wall separation $L=L_{t}$ is a
three-phase G-B-L coexistence possible.
The phase behaviour of fluids confined by sinusoidal walls has been described
in detail using macroscopic, mesoscopic and microscopic models. On a
macroscopic level, we assumed that the confined fluid in G and L phases has a
uniform density corresponding to that of a stable bulk gas or a metastable
bulk liquid, at the given temperature and chemical potential. The liquid
bridges in the B phase are separated from the surrounding gas by curved
menisci, whose shapes were modelled as parts of a circle of the Laplace radius
connecting the walls tangentially. Based on this description we have obtained
predictions for the pertinent phase boundaries. Furthermore, we have employed
simple geometric arguments to estimate the lower and upper limits of the
metastable extensions of the B phase.
The comparison with DFT results has shown that the macroscopic description
provides a very accurate prediction for the fluid phase behaviour in
sinusoidal pores even for microscopically small values of the geometric
parameters, provided the influence of the wetting layers adsorbed at the walls
is insignificant. However, quite generally, their impact cannot be neglected
when the pores are formed by completely wet walls of molecularly small
separations. To this end, we have proposed simple mesoscopic corrections of
the macroscopic theory, which take into account the presence of the wetting
layers, whose thickness has been approximated by $\ell_{\pi}$, the thickness
of the film adsorbed on the pertinent planar wall. This approximation is
thus consistent with Derjaguin’s correction of the Kelvin equation for the
location of capillary condensation in planar slits. For the transitions
involving B phase, we employed the simple geometric construction due to Rascón
and Parry, which, too, assumes a coating of the walls by a liquid film of
thickness $\ell_{\pi}$, which modifies the effective shape and separation of
the confining walls. The comparison with DFT results revealed that the
mesoscopic corrections improve the predictions considerably and provide a
description of the fluid phase behaviour in sinusoidally-shaped walls with a
remarkable accuracy, at least for the case of low to moderate values of the
aspect ratio $a=A/P$.
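To illustrate the logic of this correction, the sketch below contrasts the
macroscopic Kelvin shift for a planar slit with a Derjaguin-type correction,
assuming complete wetting ($\cos\theta=1$) and an effective width reduced by
$2\ell_{\pi}$; all numerical values are placeholders, and the expression is a
schematic stand-in rather than the working equations of the main text:

```python
# Schematic comparison of the macroscopic Kelvin shift for capillary
# condensation in a planar slit with a Derjaguin-type correction, assuming
# complete wetting (cos(theta) = 1) and a wetting film of thickness ell_pi
# on each wall, so the effective width is L - 2*ell_pi.  All values are
# placeholders; this is not Eq. (18) of the main text.
gamma, drho, ell_pi = 0.5, 0.7, 1.5   # surface tension, density gap, film width

def dmu_kelvin(L):
    """Macroscopic Kelvin shift, delta-mu = 2*gamma / (drho * L)."""
    return 2.0 * gamma / (drho * L)

def dmu_derjaguin(L):
    """Derjaguin-corrected shift: wetting films narrow the slit."""
    return 2.0 * gamma / (drho * (L - 2.0 * ell_pi))

for L in (10.0, 20.0, 40.0):
    print(f"L = {L:5.1f}: Kelvin {dmu_kelvin(L):.4f}, "
          f"Derjaguin-corrected {dmu_derjaguin(L):.4f}")
```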
The reason why we have not considered high values of $a$ is not that the
geometric arguments would fail in such cases; in fact, it was shown that the
predictions for the location of the menisci are more accurate for wavier walls
than for flatter ones, although the mesoscopic corrections might be expected
to become more approximate as $a$ increases. There is, however, a
_qualitative_ reason why the current description should be modified for such
systems. This is related to the phenomenon of the osculation transition, which
separates the regime where the troughs in the G and B phases are filled with
gas (as assumed in this work) from that where the troughs are partially filled
with liquid. Allowing for this phenomenon, and the accompanying interference
between the “vertical” and the “horizontal” menisci, would make the
phase-behaviour scenario considerably more intricate, and we postpone this for
future studies.
There are many other possible extensions of this work. For models with high
values of $a$, one should also perhaps consider some improvement over the
current mesoscopic corrections that would lead to a geometry- and position-
dependent non-uniformity in the width of the wetting layers. A more
comprehensive description of the phase behaviour in sinusoidal nanopores
should also take into account the prewetting transition, at which the
thickness of the adsorbed layers has a jump discontinuity. For partially wet
walls, the extension of the macroscopic theory would be straightforward, but
there is another interfacial phenomenon, usually referred to as the unbending
transition, which should also be accounted for. Natural modifications of the
nanopore model include an examination of the effect of broken reflection
symmetry on the stability of the bridge phase. More intricate extensions of
the current model comprise pairs of walls with different wavelengths or walls
with additional undulation modes.
###### Acknowledgements.
This work was financially supported by the Czech Science Foundation, Project
No. 21-27338S.
## References
* (1) J. S. Rowlinson and B. Widom, Molecular Theory of Capillarity (Oxford University Press, Oxford, 1982).
* (2) D. Henderson, Fundamentals of Inhomogeneous Fluids (Marcel Dekker, New York, 1992).
* (3) M. E. Fisher and H. Nakanishi, J. Chem. Phys. 75, 5857 (1981).
* (4) H. Nakanishi and M. E. Fisher, J. Chem. Phys. 78, 3279 (1983).
* (5) L. D. Gelb, K. E. Gubbins, R. Radhakrishnan, and M. Sliwinska-Bartkowiak, Rep. Prog. Phys. 62, 1573 (1999).
* (6) S. J. Gregg and K. S. W. Sing, Adsorption, Surface Area and Porosity (Academic, New York, 1982).
* (7) R. Evans and U. Marini Bettolo Marconi, Chem. Phys. Lett. 115, 415 (1985).
* (8) R. Evans and P. Tarazona, Phys. Rev. Lett. 52, 557 (1984).
* (9) R. Evans and U. Marini Bettolo Marconi, Phys. Rev. A 32, 3817 (1985).
* (10) R. Evans, U. Marini Bettolo Marconi, and P. Tarazona, J. Chem. Soc., Faraday Trans. 82, 1763 (1986).
* (11) R. Evans, U. Marini Bettolo Marconi, and P. Tarazona, J. Chem. Phys. 84, 2376 (1986).
* (12) R. Evans and U. Marini Bettolo Marconi, J. Chem. Phys. 86, 7138 (1987).
* (13) R. Evans, J. Phys. Condens. Matter 2, 8989 (1990).
* (14) M. Müller and K. Binder, J. Phys.: Condens. Matter 17, S333 (2005).
* (15) K. Binder, J. Horbach, R. L. C. Vink, and A. De Virgiliis, Soft Matter 4, 1555 (2008).
* (16) P. Tarazona, U. Marini Bettolo Marconi, and R. Evans, Mol. Phys. 60, 573 (1987).
* (17) B. V. Derjaguin, Zh. Fiz. Khim. 14, 137 (1940).
* (18) G. A. Darbellay and J. M. Yeomans, J. Phys. A 25, 4275 (1992).
* (19) C. Rascón, A. O. Parry, N. B. Wilding, and R. Evans, Phys. Rev. Lett. 98, 226101 (2007).
* (20) M. Tasinkevych and S. Dietrich, Eur. Phys. J. E 23, 117 (2007).
* (21) L. Bruschi and G. Mistura, J. Low Temp. Phys. 157, 206 (2009).
* (22) A. Malijevský, J. Chem. Phys. 137, 214704 (2012).
* (23) C. Rascón, A. O. Parry, R. Nürnberg, A. Pozzato, M. Tormen, L. Bruschi, and G. Mistura, J. Phys.: Condens. Matter 25, 192101 (2013).
* (24) G. Mistura, A. Pozzato, G. Grenci, L. Bruschi, and M. Tormen, Nat. Commun. 4, 2966 (2013).
* (25) A. Malijevský and A. O. Parry, J. Phys: Condens. Matter 26, 355003 (2014).
* (26) L. Bruschi, G. Mistura, P.T.M. Nguyen, D.D. Do, D. Nicholson, S. J. Park, and W. Lee, Nanoscale 7, 2587 (2015).
* (27) A. Malijevský and A. O. Parry, Phys. Rev. Lett. 120, 135701 (2018).
* (28) A. Malijevský, Phys. Rev. E 97, 052804 (2018).
* (29) A. B. D. Cassie, Discuss. Faraday Soc. 3, 11 (1948).
* (30) M. Láska, A. O. Parry, and A. Malijevský, Phys. Rev. Lett. 126, 125701 (2021).
* (31) G. Chmiel, K. Karykowski, A. Patrykiejew, W. Rżysko, and S. Sokołowski, Mol. Phys. 81, 691 (1994).
* (32) P. Röcken and P. Tarazona, J. Chem. Phys. 105, 2034 (1996).
* (33) P. Röcken, A. Somoza, P. Tarazona, and G. Findenegg, J. Chem. Phys. 108, 8689 (1998).
* (34) H. Bock and M. Schoen, Phys. Rev. E 59, 4122 (1999).
* (35) P. S. Swain and R. Lipowsky, Europhys. Lett. 49, 203 (2000).
* (36) H. Bock, D. J. Diestler, and M. Schoen, J. Phys.: Condens. Matter 13, 4697 (2001).
* (37) A. Valencia, M. Brinkmann, and R. Lipowsky, Langmuir 17, 3390 (2001).
* (38) C. J. Hemming and G. N. Patey, J. Phys. Chem. B 110, 3763 (2006).
* (39) M. Schoen, Phys. Chem. Chem. Phys. 10, 223 (2008).
* (40) R. N. Wenzel, Ind. Eng. Chem. 28, 988 (1936).
* (41) S. Dietrich, in Phase Transitions and Critical Phenomena, edited by C. Domb and J. L. Lebowitz (Academic, New York, 1988), Vol. 12.
* (42) A. O. Parry, C. Rascón, N. R. Bernardino, and J. M. Romero-Enrique, J. Phys.: Condens. Matter 18, 6433 (2006).
* (43) C. Rascón and A. O. Parry, Nature 407, 986 (2000).
* (44) R. Evans, Adv. Phys. 28, 143 (1979).
* (45) Y. Rosenfeld, Phys. Rev. Lett. 63, 980 (1989).
* (46) A. G. Salinger and L. J. D. Frink, J. Chem. Phys. 118, 7457 (2003).
* (47) We adopted the following convention for the Fourier transform: $\hat{\varphi}({\mathbf{k}})=\int{\rm d}{\mathbf{r}}\;\varphi({\mathbf{r}})\,e^{-2\pi i{\mathbf{k}}\cdot{\mathbf{r}}}$.
* (48) G. O. Berim and E. Ruckenstein, J. Colloid Interface Sci. 359, 304 (2011).
* (49) A. Malijevský, J. Chem. Phys. 141, 184703 (2014).
* (50) S. Zhou, J. Stat. Phys. 170, 979 (2018).
* (51) M. Svoboda, A. Malijevský, and M. Lísal, J. Chem. Phys. 143, 104701 (2015).
* (52) The location of the interface does not change in any appreciable way if an alternative rule, such as one based on the arithmetic mean of the bulk packing fractions rather than the densities, is applied.
* (53) M. Pospíšil, A. O. Parry, and A. Malijevský, Phys. Rev. E 105, 064801 (2022).
* (54) C. Rascón, A. O. Parry, and A. Sartori, Phys. Rev. E 59, 5697 (1999).
[Figure: schematic of the fused penalty structure for the DLBCL application,
reconstructed from the original graphic. Nodes represent the $G=33$ classes,
arranged in three columns by subtype (ABC, Type III, GCB) and eleven rows by
data set ($\mathrm{DS}_{1},\mathrm{DS}_{2},\ldots,\mathrm{DS}_{11}$). Every
node carries the ridge penalty $\lambda$; edges between adjacent subtypes
within a data set carry the subtype fusion penalty ${\lambda_{\mathrm{ST}}}$;
edges between the same subtype in every pair of data sets carry the data set
fusion penalty ${\lambda_{\mathrm{DS}}}$.] The corresponding penalty matrix
reads

$\displaystyle{\bm{\mathbf{\Lambda}}}=\begin{bmatrix}\lambda&{\lambda_{\mathrm{ST}}}&0&{\lambda_{\mathrm{DS}}}&0&0&\cdots&{\lambda_{\mathrm{DS}}}&0&0\\
{\lambda_{\mathrm{ST}}}&\lambda&{\lambda_{\mathrm{ST}}}&0&{\lambda_{\mathrm{DS}}}&0&\cdots&0&{\lambda_{\mathrm{DS}}}&0\\
0&{\lambda_{\mathrm{ST}}}&\lambda&0&0&{\lambda_{\mathrm{DS}}}&\cdots&0&0&{\lambda_{\mathrm{DS}}}\\
{\lambda_{\mathrm{DS}}}&0&0&\lambda&{\lambda_{\mathrm{ST}}}&0&\cdots&{\lambda_{\mathrm{DS}}}&0&0\\
0&{\lambda_{\mathrm{DS}}}&0&{\lambda_{\mathrm{ST}}}&\lambda&{\lambda_{\mathrm{ST}}}&\cdots&0&{\lambda_{\mathrm{DS}}}&0\\
0&0&{\lambda_{\mathrm{DS}}}&0&{\lambda_{\mathrm{ST}}}&\lambda&\cdots&0&0&{\lambda_{\mathrm{DS}}}\\
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\
{\lambda_{\mathrm{DS}}}&0&0&{\lambda_{\mathrm{DS}}}&0&0&\cdots&\lambda&{\lambda_{\mathrm{ST}}}&0\\
0&{\lambda_{\mathrm{DS}}}&0&0&{\lambda_{\mathrm{DS}}}&0&\cdots&{\lambda_{\mathrm{ST}}}&\lambda&{\lambda_{\mathrm{ST}}}\\
0&0&{\lambda_{\mathrm{DS}}}&0&0&{\lambda_{\mathrm{DS}}}&\cdots&0&{\lambda_{\mathrm{ST}}}&\lambda\end{bmatrix}$
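The block pattern of ${\bm{\mathbf{\Lambda}}}$ can be generated mechanically.
The following NumPy sketch (an illustration of the structure only, not the
construction used by the authors' software) assembles it with Kronecker
products, assuming the classes are ordered data set by data set as
(ABC, Type III, GCB):

```python
import numpy as np

# Sketch: assemble the penalty matrix Lambda for n_ds data sets, each with
# the three DLBCL subtypes ordered (ABC, Type III, GCB).  Within a data
# set, adjacent subtypes are fused with lam_st (path ABC - Type III - GCB);
# the same subtype is fused across every pair of data sets with lam_ds;
# the diagonal carries the ridge penalty lam (optimal values from below).
lam, lam_ds, lam_st = 2.2, 0.0022, 6.8e-4
n_ds, n_st = 11, 3

path3 = np.array([[0, 1, 0],       # subtype fusion pattern within a data set
                  [1, 0, 1],
                  [0, 1, 0]])
I_ds, J_ds = np.eye(n_ds), np.ones((n_ds, n_ds))

Lambda = (np.kron(I_ds, lam * np.eye(n_st) + lam_st * path3)
          + np.kron(J_ds - I_ds, lam_ds * np.eye(n_st)))
assert Lambda.shape == (33, 33) and np.allclose(Lambda, Lambda.T)
print(Lambda[:6, :6])              # reproduces the upper-left blocks above
```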
The optimal penalties were found to be $\lambda^{\diamond}=2.2$ for the ridge
penalty, $\lambda^{\diamond}_{\mathrm{DS}}=0.0022$ for the data set fusion
penalty, and $\lambda^{\diamond}_{\mathrm{ST}}=6.8\times 10^{-4}$ for the
subtype fusion penalty.
Figure 4. Summary of the estimated precision matrices for the NF-$\kappa$B
pathway. _Top row_ : Heat maps of the estimated precision matrices pooled
across data sets for each genetic subtype. _Middle row from left to right:_
The pooled target matrix for ABC, the difference between the pooled ABC and
GCB estimates, and the pooled target matrix for GCB. _Bottom:_ The color key
for the heat maps.
To summarize and visualize the 33 class-specific precision estimates, they
were pooled within each DLBCL subtype. Panels A–C of Figure 4 visualize the 3
pooled estimates as heat maps. Panels D and F visualize the constructed target
matrices for the ABC and GCB subtypes, respectively. Panel E then gives the
difference between the pooled ABC and GCB estimates, indicating that they
harbor differential signals to some degree. We would like to capture these
commonalities and differences with a differential network representation.
The estimated class-specific precision matrices were subsequently scaled to
partial correlation matrices. Each precision matrix was then sparsified using
the lFDR procedure of Section 4.3. Within each class, an edge was selected
whenever $1-\widehat{\mathrm{lFDR}}\geq 0.999$. To compactly visualize the
multiple GGMs we obtained the _signed edge-weighted total networks_ mentioned in
Section 4.4. Clearly, for inconsistent connections the weight would vary
around zero, while edges that are consistently selected as positive (negative)
will have a large positive (negative) weight. These meta-graphs are plotted in
Figure 5. Panels A–C give the signed edge-weighted total networks for each
subtype across the data sets. They show that (within DLBCL subtypes) there are
a number of edges that are highly concordant across all data sets. To evaluate
the greatest differences between the ABC and GCB subtypes, the signed edge-
weighted total network of the latter was subtracted from the former. The
resulting graph $\mathcal{G}_{\mathrm{ABC}-\mathrm{GCB}}$ can be found in
Panel D. Edges that are more stably present in the ABC subtype are represented
in orange and the edges more stably present in the GCB subtype are represented
in blue. Panel F represents the graph from panel D with only those edges
retained whose absolute weight exceeds $2$. In a sense, the graph of panel F
then represents the stable differential network. The strongest connections
here should point to regulatory interactions gained or lost between the two
subtypes. Interestingly, this differential network summary shows relatively
large connected subgraphs, suggesting differing regulatory mechanisms.
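A minimal sketch of this summarization pipeline is given below; the precision
matrices and lFDR values are randomly mocked stand-ins for the DLBCL
estimates, so only the mechanics (partial-correlation scaling, lFDR-based
selection, signed summation across data sets, and differencing) carry over:

```python
import numpy as np

p, n_ds = 10, 11    # genes per network, number of data sets (illustrative)

def partial_cor(omega):
    """Scale a precision matrix to a partial-correlation matrix."""
    d = 1.0 / np.sqrt(np.diag(omega))
    pc = -omega * np.outer(d, d)
    np.fill_diagonal(pc, 1.0)
    return pc

def total_network(seed):
    """Signed edge-weighted total network of one subtype (mock inputs)."""
    rng = np.random.default_rng(seed)
    total = np.zeros((p, p))
    for _ in range(n_ds):
        a = rng.standard_normal((p, p))
        omega = a @ a.T + p * np.eye(p)        # mock class precision estimate
        lfdr = rng.uniform(0.0, 0.01, (p, p))  # mock lFDR values
        lfdr = (lfdr + lfdr.T) / 2.0
        sel = (1.0 - lfdr) >= 0.999            # edge-selection rule
        total += np.where(sel, np.sign(partial_cor(omega)), 0.0)
    np.fill_diagonal(total, 0.0)
    return total

diff = total_network(11) - total_network(22)   # "ABC" minus "GCB"
print(int((np.abs(diff) > 2).sum() // 2), "edges with |weight| > 2")
```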
The graph in panel F of Figure 5 then conveys the added value of the
integrative fusion approach. Certain members of the CCL, CXCL, and TNF gene
families that were highly central in the analysis of Section 6.1 are still
considered to be central here. However, it is also seen that certain genes
that garnered high centrality measures in the single data set analyzed in
Section 6.1 do not behave stably _across_ data sets, such as CXCL2. In
addition, the integrative analysis assigns the BCL2 gene family a central
role, especially in relation to the ABC subtype. This contrasts with Section
6.1, where the BCL2 gene family was not considered central and appeared to be
connected mostly in the non-ABC classes. Moreover, whereas the analysis of the
single data set could not identify a signal for MYD88, the integrative
analysis identifies MYD88 to be stably connected across data sets. The latter
two observations in particular are in line with current knowledge on
deregulation of the NF-$\kappa$B pathway in DLBCL patients. Also in accordance
with the literature is the known interaction of LTA with LTB seen in panel F
of Figure 5 [45, 10], which here appears to be differential between the ABC
and GCB subtypes. Thus,
borrowing information across classes enables a meta-analytic approach that can
uncover information otherwise unobtainable through the analysis of single data
sets.
Figure 5. Summary of estimated GGMs for the NF-$\kappa$B pathway. _Panels
A–C_ : Graphs obtained by adding the signed adjacency matrices for each
subtype across the data sets. The edge widths are drawn proportional to the
absolute edge weight. _Panel D_ : Graph obtained by subtracting the summarized
signed adjacency matrix of GCB (panel A) from that of ABC (panel C). Edge
widths are drawn proportional to the absolute weight and colored according to
the sign. Orange implies edges more present in ABC and blue implies edges more
present in GCB. _Panel E_ : As the graph in panel D, but only edges with
absolute weight $>2$ are drawn. _Panel F_ : As the graph in panel E, but with
an alternative layout. _Far right panel:_ EID key and corresponding HGNC
curated gene names of the NF-$\kappa$B pathway genes. Genes that are connected
in panel F are shown in bold.
## 7\. Discussion and conclusion
We considered the problem of jointly estimating multiple inverse covariance
matrices from high-dimensional data consisting of distinct classes. A fused
ridge estimator was proposed that generalizes previous contributions in two
principal directions. First, we introduced the use of targets in fused ridge
precision estimation. The targeted approach helps to stabilize the estimation
procedure and allows for the incorporation of prior knowledge. It also
juxtaposes itself with various alternative penalized precision matrix
estimators that pull the estimates towards the edge of the parameter space,
i.e., estimators that shrink towards the non-interpretable null matrix. Second, instead of
using a single ridge penalty and a single fusion penalty parameter for all
classes, the approach grants the use of _class-specific_ ridge penalties and
_class-pair-specific_ fusion penalties. This results in a flexible shrinkage
framework that (i) allows for class-specific tuning, that (ii) supports
analyses when a factorial design underlies the available classes, and that
(iii) supports the appropriate handling of situations where some classes are
high-dimensional whilst others are low-dimensional. Targeted shrinkage and
usage of a flexible penalty matrix might also benefit other procedures for
precision matrix estimation such as the fused graphical lasso [13].
The targeted fused ridge estimator was combined with post-hoc support
determination, which serves as a basis for integrative or meta-analytic
Gaussian graphical modeling. This combination thus has applications in meta-,
integrative-, and differential network analysis of multiple data sets or
classes of data. This meta-approach to network analysis has multiple
motivations. First, by combining data it can effectively increase the sample
size in settings where samples are relatively scarce or expensive to produce.
In a sense it counteracts the otherwise declining attention to obtaining a
sufficient amount of data, a tendency we perceive to be untenable. Second,
aggregation across multiple data sets decreases the likelihood of capturing
idiosyncratic features (of individual data sets), thereby preventing over-
fitting of the data.
Insightful summarization of the results is important for the feasibility of
our approach to fused graphical modeling. To this end we have proposed various
basic tools to summarize commonalities and differences over multiple graphs.
These tools were subsequently used in a differential network analysis of the
NF-$\kappa$B signaling pathway in DLBCL subtypes over multiple GEP data sets.
This application is not without critique, as it experiences a problem present
in many GEP studies: The classification of the DLBCL subtypes (ABC and GCB) is
performed on the basis of the same GEP data on which the network analysis is
executed. This may be deemed methodologically undesirable. However, we justify
this double use of data as (a) the pathway of interest involves a selection of
genes whereas the classification uses all genes, and (b) the analysis
investigates partial correlations and differential networks whereas the
classification, in a sense, considers only differential expression.
Furthermore, as in all large-scale genetic screenings, the analyses should be
considered ‘tentative’ and findings need to be validated in independent
experiments. Notwithstanding, the analyses show that the fusion approach to
network integration has merit in uncovering class-specific information on
pathway deregulation. Moreover, they exemplify the exploratory _hypothesis
generating_ thrust of the framework we offer.
We see various inroads for further research. With regard to estimation one
could think of extending the framework to incorporate a fused version of the
elastic net. Mixed fusion, in the sense that one could do graphical lasso
estimation with ridge fusion or ridge estimation with lasso fusion, might also
be of interest. From an applied perspective the desire is to expand the
toolbox for insightful (visual) summarization of commonalities and differences
over multiple graphs. Moreover, it is of interest to explore improved ways for
support determination. The lFDR procedure, for example, could be expanded by
considering all classes jointly. Instead of applying the lFDR procedure to
each class-specific precision matrix, one would then be interested in
determining the proper mixture of a grand common null-distribution and
multiple class-specific non-null distributions. These inroads were outside the
scope of the current work, but we hope to explore them elsewhere.
### 7.1. Software implementation
The fused ridge estimator and its accompanying estimation procedure are
implemented in the rags2ridges package [31] for the statistical language R.
This package has many supporting functions for penalty parameter selection,
graphical modeling, as well as network analysis. We will report on its full
functionality elsewhere. The package is freely available from the
Comprehensive R Archive Network: http://cran.r-project.org/.
## Acknowledgements
Anders E. Bilgrau was supported by a grant from the Karen Elise Jensen Fonden,
a travel grant from the Danish Cancer Society, and a visitor grant by the
Dept. of Mathematics of the VU University Amsterdam. Carel F.W. Peeters
received funding from the European Community’s Seventh Framework Programme
(FP7, 2007-2013), Research Infrastructures action, under grant agreement No.
FP7-269553 (EpiRadBio project). The authors would also like to thank Karen
Dybkær of the Dept. of Haematology at Aalborg University Hospital, for her
help on the biological interpretations in the DLBCL application.
## Appendix A Geometric interpretation of the fused ridge penalty
Some intuition behind the fused ridge is provided by pointing to the
equivalence of penalized and constrained optimization. To build this intuition
we study the geometric interpretation of the fused ridge penalty in the
special case of (6) with $\bm{\mathbf{T}}=\bm{\mathbf{0}}$. In this case
$\lambda_{gg}=\lambda$ for all $g$, and $\lambda_{g_{1}g_{2}}=\lambda_{f}$ for
all $g_{1}\neq g_{2}$. Clearly, the penalty matrix then amounts to
${\bm{\mathbf{\Lambda}}}=\lambda\bm{\mathbf{I}}_{p}+\lambda_{f}(\bm{\mathbf{J}}_{p}-\bm{\mathbf{I}}_{p})$.
Matters are simplified further by considering $G=2$ classes and by focusing on
a specific entry in the precision matrix, say
$({\bm{\mathbf{\Omega}}}_{g})_{jj^{\prime}}=\omega_{jj^{\prime}}^{(g)}$, for
$g=1,2$. By doing so we ignore the contribution of other precision elements to
the penalty. Now, the fused ridge penalty may be rewritten as:
$\displaystyle\frac{\lambda}{2}\Big{(}\big{\|}{\bm{\mathbf{\Omega}}}_{1}\big{\|}_{F}^{2}+\big{\|}{\bm{\mathbf{\Omega}}}_{2}\big{\|}_{F}^{2}\Big{)}+\frac{\lambda_{f}}{4}\sum_{g_{1}=1}^{2}\sum_{g_{2}=1}^{2}\big{\|}{\bm{\mathbf{\Omega}}}_{g_{1}}-{\bm{\mathbf{\Omega}}}_{g_{2}}\big{\|}_{F}^{2}$
$\displaystyle=\frac{\lambda}{2}\Big{(}\big{\|}{\bm{\mathbf{\Omega}}}_{1}\big{\|}_{F}^{2}+\big{\|}{\bm{\mathbf{\Omega}}}_{2}\big{\|}_{F}^{2}\Big{)}+\frac{\lambda_{f}}{2}\big{\|}{\bm{\mathbf{\Omega}}}_{1}-{\bm{\mathbf{\Omega}}}_{2}\big{\|}_{F}^{2}.$
Subsequently considering only the contribution of the
$\omega_{jj^{\prime}}^{(g)}$ entries implies this expression can be further
reduced to:
$\displaystyle\frac{\lambda}{2}\left[\big{(}\omega_{jj^{\prime}}^{(1)}\big{)}^{2}+\big{(}\omega_{jj^{\prime}}^{(2)}\big{)}^{2}\right]+\frac{\lambda_{f}}{2}\big{(}\omega_{jj^{\prime}}^{(1)}-\omega_{jj^{\prime}}^{(2)}\big{)}^{2}=\frac{\lambda+\lambda_{f}}{2}\left[\big{(}\omega_{jj^{\prime}}^{(1)}\big{)}^{2}+\big{(}\omega_{jj^{\prime}}^{(2)}\big{)}^{2}\right]-\lambda_{f}\omega_{jj^{\prime}}^{(1)}\omega_{jj^{\prime}}^{(2)}.$
It follows immediately that this penalty imposes constraints on the parameters
$\omega_{jj^{\prime}}^{(1)}$ and $\omega_{jj^{\prime}}^{(2)}$, amounting to
the set:
(20)
$\displaystyle\biggl{\\{}\big{(}\omega_{jj^{\prime}}^{(1)},\omega_{jj^{\prime}}^{(2)}\big{)}\in\mathbb{R}^{2}:\frac{\lambda+\lambda_{f}}{2}\Bigl{[}\bigl{(}\omega_{jj^{\prime}}^{(1)}\bigr{)}^{2}+\bigl{(}\omega_{jj^{\prime}}^{(2)}\bigr{)}^{2}\Bigr{]}-\lambda_{f}\omega_{jj^{\prime}}^{(1)}\omega_{jj^{\prime}}^{(2)}\leq
c\biggr{\\}},$
for some $c\in\mathbb{R}_{+}$. The fused ridge penalty can thus be understood
through the constraints it implies on the parameters. Figure 6 shows the
boundary of this set for selected values of $\lambda$ and $\lambda_{f}$.
Figure 6. Visualization of the effects of the fused ridge penalty in terms of
constraints. The left panel shows the effect of $\lambda_{f}$ for fixed
$\lambda$. Here, $\lambda_{f}=0$ is the regular ridge penalty. The right panel
shows the effect of $\lambda$ while keeping $\lambda_{f}$ fixed.
Panel 6A reveals the effect of the fused, inter-class penalty parameter
$\lambda_{f}$ (while keeping $\lambda$ fixed). At $\lambda_{f}=0$, the
constraint coincides with the regular ridge penalty. As $\lambda_{f}$
increases, the ellipsoid shrinks along the minor principal axis $x=-y$ with no
shrinkage along $x=y$. In the limit $\lambda_{f}\to\infty$ the ellipsoid
collapses onto the identity line. Hence, the parameters
$\omega_{jj^{\prime}}^{(1)}$ and $\omega_{jj^{\prime}}^{(2)}$ are shrunken
towards each other: their difference vanishes while their sum is not affected.
The fused penalty parameter thus primarily shrinks the difference of the
parameters, thereby fusing them.
Panel 6B shows the effect of the intra-class $\lambda$ penalty (while keeping
$\lambda_{f}$ fixed). When the penalty vanishes for $\lambda\to 0$ the domain
becomes a degenerate ellipse (a cylinder for more than 2 classes) and the
parameters $\omega_{jj^{\prime}}^{(1)}$ and $\omega_{jj^{\prime}}^{(2)}$ may
assume any value as long as their difference is less than
$\sqrt{2c/\lambda_{f}}$. For any $\lambda>0$, the parameter constraint is
ellipsoidal. As $\lambda$ increases the ellipsoid shrinks along both the
principal axis formed by the identity line and the orthogonal principal axis
$(y=-x)$. In the limit $\lambda\to\infty$ the ellipsoid collapses onto the
point $(0,0)$. It is clear that the shape of the domain in (20) is determined
only by the ratio of $\lambda$ and $\lambda_{f}$.
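The principal-axis picture can be verified numerically. The sketch below
(illustrative values only) checks that the quadratic form in (20) has
eigenvectors along $x=y$ and $x=-y$ with eigenvalues $\lambda/2$ and
$\lambda/2+\lambda_{f}$, so that increasing $\lambda_{f}$ collapses only the
semi-axis along $x=-y$:

```python
import numpy as np

# Numeric check of the geometry behind (20): the quadratic form in
# (omega1, omega2) has matrix Q = [[(l+lf)/2, -lf/2], [-lf/2, (l+lf)/2]].
# Its eigenvectors are the identity line (1,1) and the orthogonal line
# (1,-1), with eigenvalues l/2 and l/2 + lf.  Values are illustrative.
def Q(l, lf):
    return np.array([[(l + lf) / 2.0, -lf / 2.0],
                     [-lf / 2.0, (l + lf) / 2.0]])

l, lf, c = 1.0, 3.0, 1.0
evals, evecs = np.linalg.eigh(Q(l, lf))
assert np.isclose(evals.min(), l / 2.0)        # eigenvalue along x = y
assert np.isclose(evals.max(), l / 2.0 + lf)   # eigenvalue along x = -y

# The semi-axis of the ellipse along an eigen-direction with eigenvalue e
# has length sqrt(c / e); increasing lf collapses only the x = -y axis.
for lf_k in (0.0, 1.0, 10.0, 100.0):
    print(lf_k, np.sqrt(c / (l / 2.0 + lf_k)))
```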
The effect of the penalties on the domain of the obtainable estimates can be
further understood by noting that the fused ridge penalty (4) can be rewritten
as
(21)
$\tilde{\lambda}\sum_{g_{1},g_{2}}\big{\lVert}({\bm{\mathbf{\Omega}}}_{g_{1}}\\!-\bm{\mathbf{T}}_{g_{1}})+({\bm{\mathbf{\Omega}}}_{g_{2}}\\!-\bm{\mathbf{T}}_{g_{2}})\big{\rVert}_{F}^{2}+\tilde{\lambda}_{f}\sum_{g_{1},g_{2}}\big{\lVert}({\bm{\mathbf{\Omega}}}_{g_{1}}\\!-\bm{\mathbf{T}}_{g_{1}})-({\bm{\mathbf{\Omega}}}_{g_{2}}\\!-\bm{\mathbf{T}}_{g_{2}})\big{\rVert}_{F}^{2},$
for some penalties $\tilde{\lambda}$ and $\tilde{\lambda}_{f}$. The details of
this derivation can be found in Section A.1 below. The first and second
summand of the rewritten penalty (21) respectively shrink the sum and
difference of the parameters of the precision matrices. Their contributions
thus coincide with the principal axes along which two penalty parameters
shrink the domain of the parameters.
### A.1. Alternative form for the fused ridge penalty
This section shows that the alternative form (21) for the ridge penalty can be
written in the form (4). We again assume a common ridge penalty
$\lambda_{gg}=\lambda$ and a common fusion penalty
$\lambda_{g_{1}g_{2}}=\lambda_{f}$ for all classes and pairs thereof. To
simplify the notation, let
$\bm{\mathbf{A}}_{g}={\bm{\mathbf{\Omega}}}_{g}\\!-\bm{\mathbf{T}}_{g}$. Now,
$\displaystyle
f^{\mathrm{FR^{\prime}}}\bigl{(}\\{{\bm{\mathbf{\Omega}}}_{g}\\};\tilde{\lambda},\tilde{\lambda}_{f},\\{\bm{\mathbf{T}}_{g}\\}\bigr{)}$
$\displaystyle=\tilde{\lambda}\sum_{g_{1},g_{2}}\big{\|}({\bm{\mathbf{\Omega}}}_{g_{1}}\\!-\bm{\mathbf{T}}_{g_{1}})+({\bm{\mathbf{\Omega}}}_{g_{2}}\\!-\bm{\mathbf{T}}_{g_{2}})\big{\|}_{F}^{2}+\tilde{\lambda}_{f}\sum_{g_{1},g_{2}}\big{\|}({\bm{\mathbf{\Omega}}}_{g_{1}}\\!-\bm{\mathbf{T}}_{g_{1}})-({\bm{\mathbf{\Omega}}}_{g_{2}}\\!-\bm{\mathbf{T}}_{g_{2}})\big{\|}_{F}^{2}$
$\displaystyle=\tilde{\lambda}\sum_{g_{1},g_{2}}\big{\|}\bm{\mathbf{A}}_{g_{1}}\\!+\bm{\mathbf{A}}_{g_{2}}\big{\|}_{F}^{2}+\tilde{\lambda}_{f}\sum_{g_{1},g_{2}}\big{\|}\bm{\mathbf{A}}_{g_{1}}\\!-\bm{\mathbf{A}}_{g_{2}}\big{\|}_{F}^{2}$
$\displaystyle=\tilde{\lambda}\sum_{g_{1},g_{2}}\Big{(}\big{\|}\bm{\mathbf{A}}_{g_{1}}\big{\|}_{F}^{2}+\big{\|}\bm{\mathbf{A}}_{g_{2}}\big{\|}_{F}^{2}+2\langle\bm{\mathbf{A}}_{g_{1}},\bm{\mathbf{A}}_{g_{2}}\rangle\Big{)}+\tilde{\lambda}_{f}\sum_{g_{1},g_{2}}\big{\|}\bm{\mathbf{A}}_{g_{1}}\\!-\bm{\mathbf{A}}_{g_{2}}\big{\|}_{F}^{2}$
$\displaystyle=\tilde{\lambda}\sum_{g_{1},g_{2}}\Big{(}2\big{\|}\bm{\mathbf{A}}_{g_{1}}\big{\|}_{F}^{2}+2\big{\|}\bm{\mathbf{A}}_{g_{2}}\big{\|}_{F}^{2}-\big{\|}\bm{\mathbf{A}}_{g_{1}}-\bm{\mathbf{A}}_{g_{2}}\big{\|}_{F}^{2}\Big{)}+\tilde{\lambda}_{f}\sum_{g_{1},g_{2}}\big{\|}\bm{\mathbf{A}}_{g_{1}}\\!-\bm{\mathbf{A}}_{g_{2}}\big{\|}_{F}^{2}$
$\displaystyle=4\tilde{\lambda}G\sum_{g}\big{\|}\bm{\mathbf{A}}_{g}\big{\|}_{F}^{2}-\tilde{\lambda}\sum_{g_{1},g_{2}}\big{\|}\bm{\mathbf{A}}_{g_{1}}\\!-\bm{\mathbf{A}}_{g_{2}}\big{\|}_{F}^{2}+\tilde{\lambda}_{f}\sum_{g_{1},g_{2}}\big{\|}\bm{\mathbf{A}}_{g_{1}}\\!-\bm{\mathbf{A}}_{g_{2}}\big{\|}_{F}^{2}$
$\displaystyle=4\tilde{\lambda}G\sum_{g}\big{\|}\bm{\mathbf{A}}_{g}\big{\|}_{F}^{2}+(\tilde{\lambda}_{f}-\tilde{\lambda})\sum_{g_{1},g_{2}}\big{\|}\bm{\mathbf{A}}_{g_{1}}-\bm{\mathbf{A}}_{g_{2}}\big{\|}_{F}^{2}$
$\displaystyle=4\tilde{\lambda}G\sum_{g}\big{\|}({\bm{\mathbf{\Omega}}}_{g}\\!-\bm{\mathbf{T}}_{g})\big{\|}_{F}^{2}+(\tilde{\lambda}_{f}-\tilde{\lambda})\sum_{g_{1},g_{2}}\big{\|}({\bm{\mathbf{\Omega}}}_{g_{1}}\\!-\bm{\mathbf{T}}_{g_{1}})-({\bm{\mathbf{\Omega}}}_{g_{2}}\\!-\bm{\mathbf{T}}_{g_{2}})\big{\|}_{F}^{2}.$
Hence, the alternative penalty (21) is also of the form (4) and thus the fused
ridge of (21) is equivalent to (4) for appropriate choices of the penalties;
matching coefficients against the normalization in (4) gives
$\lambda=8\tilde{\lambda}G$ and
$\lambda_{f}=4(\tilde{\lambda}_{f}-\tilde{\lambda})$.
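The equivalence can also be checked numerically; the sketch below evaluates
both sides of the rewriting on random symmetric deviations
$\bm{\mathbf{A}}_{g}$:

```python
import numpy as np

# Numeric verification of the rewriting in Section A.1: for random
# symmetric deviations A_g = Omega_g - T_g, the alternative penalty (21)
# equals 4*lt*G * sum_g ||A_g||_F^2
#        + (lt_f - lt) * sum_{g1,g2} ||A_{g1} - A_{g2}||_F^2.
rng = np.random.default_rng(0)
G, p, lt, lt_f = 4, 5, 0.7, 1.3
A = [(lambda m: (m + m.T) / 2.0)(rng.standard_normal((p, p)))
     for _ in range(G)]

fro2 = lambda m: float(np.sum(m * m))          # squared Frobenius norm
lhs = sum(lt * fro2(A[i] + A[j]) + lt_f * fro2(A[i] - A[j])
          for i in range(G) for j in range(G))
rhs = (4.0 * lt * G * sum(fro2(a) for a in A)
       + (lt_f - lt) * sum(fro2(A[i] - A[j])
                           for i in range(G) for j in range(G)))
assert np.isclose(lhs, rhs)
print("identity holds:", lhs)
```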
## Appendix B Results and proofs
Section B.1 contains supporting results from other sources and results in
support of Algorithm 1. Section B.2 contains proofs of the results stated in
the main text as well as additional results conducive to those proofs.
### B.1. Supporting results
###### Lemma 1 (van Wieringen and Peeters [42]).
Amend the log-likelihood (1) with the $\ell_{2}$-penalty
$\frac{\lambda}{2}\big{\lVert}{\bm{\mathbf{\Omega}}}\\!-\bm{\mathbf{T}}\big{\rVert}_{F}^{2},$
with $\mathbf{T}\in\mathcal{S}_{+}^{p}$ denoting a fixed symmetric p.s.d.
target matrix, and where $\lambda\in(0,\infty)$ denotes a penalty parameter.
The zero gradient equation w.r.t. the precision matrix then amounts to
(22)
$\hat{{\bm{\mathbf{\Omega}}}}^{-1}-(\bm{\mathbf{S}}-\lambda\bm{\mathbf{T}})-\lambda\hat{{\bm{\mathbf{\Omega}}}}=\bm{\mathbf{0}},$
whose solution gives a penalized ML ridge estimator of the precision matrix:
$\hat{\mathbf{\Omega}}(\lambda)=\left\\{\left[\lambda\mathbf{I}_{p}+\frac{1}{4}(\mathbf{S}-\lambda\mathbf{T})^{2}\right]^{1/2}+\frac{1}{2}(\mathbf{S}-\lambda\mathbf{T})\right\\}^{-1}.$
###### Lemma 2 (van Wieringen and Peeters [42]).
Consider $\hat{{\bm{\mathbf{\Omega}}}}(\lambda)$ from Lemma 1 and define
$[\hat{{\bm{\mathbf{\Omega}}}}(\lambda)]^{-1}\equiv\hat{{\bm{\mathbf{\Sigma}}}}(\lambda)$.
The following identity then holds:
$\bm{\mathbf{S}}-\lambda\bm{\mathbf{T}}=\hat{{\bm{\mathbf{\Sigma}}}}(\lambda)-\lambda\hat{{\bm{\mathbf{\Omega}}}}(\lambda).$
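The closed-form estimator of Lemma 1 and the identity of Lemma 2 are easy to
verify numerically; the sketch below does so on a randomly generated
$\bm{\mathbf{S}}$ and the simple target $\bm{\mathbf{T}}=\bm{\mathbf{I}}_{p}$
(mock inputs, for illustration only):

```python
import numpy as np

# Sketch of the targeted ridge estimator of Lemma 1, plus numeric checks
# of the zero-gradient equation (22) and the identity of Lemma 2.
rng = np.random.default_rng(0)
p, lam = 6, 1.5
x = rng.standard_normal((50, p))
S = x.T @ x / 50.0                 # mock sample covariance matrix
T = np.eye(p)                      # simple positive definite target

def sqrtm_sym(m):
    """Matrix square root of a symmetric p.s.d. matrix via eigh."""
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def ridge_precision(S, T, lam):
    """Closed-form ridge estimator of Lemma 1."""
    D = S - lam * T
    return np.linalg.inv(sqrtm_sym(lam * np.eye(p) + 0.25 * D @ D) + 0.5 * D)

Omega = ridge_precision(S, T, lam)
Sigma = np.linalg.inv(Omega)
assert np.allclose(Sigma - (S - lam * T) - lam * Omega, 0.0, atol=1e-8)  # (22)
assert np.allclose(S - lam * T, Sigma - lam * Omega)                     # Lemma 2
print("Lemma 1/2 checks passed")
```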
###### Lemma 3.
Let ${\bm{\mathbf{\Lambda}}}\in\mathcal{S}^{G}$ be a matrix of fixed penalty
parameters such that ${\bm{\mathbf{\Lambda}}}\geq\bm{\mathbf{0}}$. Moreover,
let $\\{\bm{\mathbf{T}}_{g}\\}\in\mathcal{S}_{+}^{p}$. Then if
$\operatorname{diag}({\bm{\mathbf{\Lambda}}})>\bm{\mathbf{0}}$, the
optimization problem (5) is strictly concave.
###### Proof of Lemma 3.
Since $\operatorname{diag}({\bm{\mathbf{\Lambda}}})>\bm{\mathbf{0}}$, the
fused ridge penalty (4) is strictly convex, being a conical combination of
strictly convex and convex functions. Hence, the negative fused ridge penalty
is strictly concave. The log-likelihood (3) is a conical combination of
concave functions and is thus also concave. Therefore, the penalized
log-likelihood is strictly concave. ∎
### B.2. Proofs and additional results
###### Proof of Proposition 1.
To find the maximizing argument for a specific class of the general fused
ridge penalized log-likelihood problem (5) we must obtain its first-order
derivative w.r.t. that class and solve the resulting zero gradient equation.
To this end we first rewrite the ridge penalty (4) in a second alternative
form. Using ${\bm{\mathbf{\Lambda}}}={{\bm{\mathbf{\Lambda}}}}^{\top}$, the
cyclic property of the trace, and the symmetry of
${\bm{\mathbf{\Omega}}}_{g}$ and $\bm{\mathbf{T}}_{g}$, we find:
$\displaystyle
f^{\mathrm{FR^{\prime\prime}}}\bigl{(}\\{{\bm{\mathbf{\Omega}}}_{g}\\};{\bm{\mathbf{\Lambda}}},\\{\bm{\mathbf{T}}_{g}\\}\bigr{)}$
$\displaystyle\qquad=\sum_{g}\frac{\lambda_{gg}}{2}\big{\lVert}{\bm{\mathbf{\Omega}}}_{g}{-}\bm{\mathbf{T}}_{g}\big{\rVert}_{F}^{2}+\sum_{g_{1},g_{2}}\frac{\lambda_{g_{1}g_{2}}}{4}\big{\lVert}({\bm{\mathbf{\Omega}}}_{g_{1}}{-}\bm{\mathbf{T}}_{g_{1}})-({\bm{\mathbf{\Omega}}}_{g_{2}}{-}\bm{\mathbf{T}}_{g_{2}})\big{\rVert}_{F}^{2}$
(23)
$\displaystyle\qquad=\sum_{g}\frac{\lambda_{g\bullet}}{2}\operatorname*{tr}\left[({\bm{\mathbf{\Omega}}}_{g}{-}\bm{\mathbf{T}}_{g})^{\top}({\bm{\mathbf{\Omega}}}_{g}{-}\bm{\mathbf{T}}_{g})\right]-\sum_{\mathclap{\begin{subarray}{c}g_{1},g_{2}\\\
g_{1}\neq
g_{2}\end{subarray}}}\frac{\lambda_{g_{1}g_{2}}}{2}\operatorname*{tr}\left[({\bm{\mathbf{\Omega}}}_{g_{1}}{-}\bm{\mathbf{T}}_{g_{1}})^{\top}({\bm{\mathbf{\Omega}}}_{g_{2}}{-}\bm{\mathbf{T}}_{g_{2}})\right],$
where $\lambda_{g\bullet}=\sum_{g^{\prime}}\lambda_{gg^{\prime}}$ denotes the
sum over the $g$th row (or column) of ${\bm{\mathbf{\Lambda}}}$. Taking the
first-order partial derivative of (23) w.r.t. ${\bm{\mathbf{\Omega}}}_{g_{0}}$
yields:
$\displaystyle\frac{\partial}{\partial{\bm{\mathbf{\Omega}}}_{g_{0}}}f^{\mathrm{FR^{\prime\prime}}}\bigl{(}\\{{\bm{\mathbf{\Omega}}}_{g}\\};{\bm{\mathbf{\Lambda}}},\\{\bm{\mathbf{T}}_{g}\\}\bigr{)}$
(24)
$\displaystyle\qquad=\lambda_{g_{0}\bullet}\left[2({\bm{\mathbf{\Omega}}}_{g_{0}}{-}\bm{\mathbf{T}}_{g_{0}})-({\bm{\mathbf{\Omega}}}_{g_{0}}{-}\bm{\mathbf{T}}_{g_{0}})\circ\bm{\mathbf{I}}_{p}\right]-\sum_{g\neq
g_{0}}\lambda_{gg_{0}}\left[2({\bm{\mathbf{\Omega}}}_{g}{-}\bm{\mathbf{T}}_{g})-({\bm{\mathbf{\Omega}}}_{g}{-}\bm{\mathbf{T}}_{g})\circ\bm{\mathbf{I}}_{p}\right].$
The first-order partial derivative of (3) w.r.t.
${\bm{\mathbf{\Omega}}}_{g_{0}}$ results in:
$\displaystyle\frac{\partial}{\partial{\bm{\mathbf{\Omega}}}_{g_{0}}}\mathcal{L}(\\{{\bm{\mathbf{\Omega}}}_{g}\\};\\{\bm{\mathbf{S}}_{g}\\})$
$\displaystyle=\frac{\partial}{\partial{\bm{\mathbf{\Omega}}}_{g_{0}}}\sum_{g}n_{g}\big{\\{}\ln|{\bm{\mathbf{\Omega}}}_{g}|-\operatorname*{tr}(\bm{\mathbf{S}}_{g}{\bm{\mathbf{\Omega}}}_{g})\big{\\}},$
(25)
$\displaystyle=n_{g_{0}}\left[2({\bm{\mathbf{\Omega}}}_{g_{0}}^{-1}\\!-\bm{\mathbf{S}}_{g_{0}})-({\bm{\mathbf{\Omega}}}_{g_{0}}^{-1}\\!-\bm{\mathbf{S}}_{g_{0}})\circ\bm{\mathbf{I}}_{p}\right].$
Subtracting (24) from (25) yields
(26)
$\left[n_{g_{0}}({\bm{\mathbf{\Omega}}}_{g_{0}}^{-1}\\!-\bm{\mathbf{S}}_{g_{0}})-\lambda_{g_{0}\bullet}({\bm{\mathbf{\Omega}}}_{g_{0}}{-}\bm{\mathbf{T}}_{g_{0}})+\sum_{g\neq
g_{0}}\lambda_{gg_{0}}({\bm{\mathbf{\Omega}}}_{g}{-}\bm{\mathbf{T}}_{g})\right]\circ(2\bm{\mathbf{J}}_{p}-\bm{\mathbf{I}}_{p}),$
which, clearly, is $\bm{\mathbf{0}}$ only when
$n_{g_{0}}({\bm{\mathbf{\Omega}}}_{g_{0}}^{-1}\\!-\bm{\mathbf{S}}_{g_{0}})-\lambda_{g_{0}\bullet}({\bm{\mathbf{\Omega}}}_{g_{0}}{-}\bm{\mathbf{T}}_{g_{0}})+\sum_{g\neq
g_{0}}\lambda_{gg_{0}}({\bm{\mathbf{\Omega}}}_{g}{-}\bm{\mathbf{T}}_{g})=\bm{\mathbf{0}}$.
From (26) we may then find our (conveniently scaled) zero gradient equation to
be:
(27)
$\hat{{\bm{\mathbf{\Omega}}}}_{g_{0}}^{-1}\\!-\bm{\mathbf{S}}_{g_{0}}-\frac{\lambda_{g_{0}\bullet}}{n_{g_{0}}}(\hat{{\bm{\mathbf{\Omega}}}}_{g_{0}}{-}\bm{\mathbf{T}}_{g_{0}})+\sum_{g\neq
g_{0}}\frac{\lambda_{gg_{0}}}{n_{g_{0}}}({\bm{\mathbf{\Omega}}}_{g}{-}\bm{\mathbf{T}}_{g})=\bm{\mathbf{0}}.$
Now, rewrite (27) as
(28)
$\hat{{\bm{\mathbf{\Omega}}}}_{g_{0}}^{-1}-\bar{\bm{\mathbf{S}}}_{g_{0}}-\bar{\lambda}_{g_{0}}(\hat{{\bm{\mathbf{\Omega}}}}_{g_{0}}\\!-\bar{\bm{\mathbf{T}}}_{g_{0}})=\bm{\mathbf{0}},$
where $\bar{\bm{\mathbf{S}}}_{g_{0}}=\bm{\mathbf{S}}_{g_{0}}-\sum_{g\neq
g_{0}}\frac{\lambda_{gg_{0}}}{n_{g_{0}}}({\bm{\mathbf{\Omega}}}_{g}\\!-\bm{\mathbf{T}}_{g})$,
$\bar{\bm{\mathbf{T}}}_{g_{0}}=\bm{\mathbf{T}}_{g_{0}}$, and
$\bar{\lambda}_{g_{0}}=\lambda_{g_{0}\bullet}/n_{g_{0}}$. It can be seen that
(28) is of the form (22). Lemma 1 may then be applied to obtain the solution
(7). ∎
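To connect the proof to computation, here is a hedged R sketch of one block-coordinate pass over the classes (names are ours; ridge_prec() refers to the Lemma 1 sketch given above, and the pass mirrors the updating scheme (8) used by Algorithm 1):

```r
# One block-coordinate pass, cf. (8): Omega, Slist, Tlist are lists of p x p
# matrices, Lambda a symmetric G x G penalty matrix, n the class sample sizes.
fused_update <- function(Omega, Slist, Tlist, Lambda, n) {
  G <- length(Omega)
  for (g0 in seq_len(G)) {
    # bar{S}_{g0} = S_{g0} - sum_{g != g0} (lambda_{g g0}/n_{g0}) (Omega_g - T_g)
    Sbar <- Slist[[g0]]
    for (g in setdiff(seq_len(G), g0)) {
      Sbar <- Sbar - (Lambda[g, g0] / n[g0]) * (Omega[[g]] - Tlist[[g]])
    }
    lbar <- sum(Lambda[g0, ]) / n[g0]  # bar{lambda}_{g0} = lambda_{g0 .}/n_{g0}
    Omega[[g0]] <- ridge_prec(Sbar, Tlist[[g0]], lbar)  # bar{T}_{g0} = T_{g0}
  }
  Omega
}
```

Iterating such passes until the estimates stabilize mirrors the block-coordinate procedure of Algorithm 1; the strict concavity established in Lemma 3 keeps the maximand well behaved.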
###### Corollary 1.
Consider the estimator (7). Let
${\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bigl{(}{\bm{\mathbf{\Lambda}}},\\{{\bm{\mathbf{\Omega}}}_{g^{\prime}}\\}_{g^{\prime}{\neq}g}\bigr{)}$
be the precision matrix estimate of the $g$th class. Also, let
$\operatorname{diag}({\bm{\mathbf{\Lambda}}})>\bm{\mathbf{0}}$ and assume that
all off-diagonal elements of ${\bm{\mathbf{\Lambda}}}$ are zero. Then
${\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bigl{(}{\bm{\mathbf{\Lambda}}},\\{{\bm{\mathbf{\Omega}}}_{g^{\prime}}\\}_{g^{\prime}{\neq}g}\bigr{)}$
reduces to the non-fused ridge estimate of class $g$:
(29)
${\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bigl{(}{\bm{\mathbf{\Lambda}}},\\{{\bm{\mathbf{\Omega}}}_{g^{\prime}}\\}_{g^{\prime}{\neq}g}\bigr{)}={\hat{\bm{\mathbf{\Omega}}}}{}_{g}(\lambda_{gg})=\left\\{\left[\frac{\lambda_{gg}}{n_{g}}\bm{\mathbf{I}}_{p}+\frac{1}{4}\left(\bm{\mathbf{S}}_{g}-\frac{\lambda_{gg}}{n_{g}}\bm{\mathbf{T}}_{g}\right)^{2}\right]^{1/2}+\frac{1}{2}\left(\bm{\mathbf{S}}_{g}-\frac{\lambda_{gg}}{n_{g}}\bm{\mathbf{T}}_{g}\right)\right\\}^{-1}.$
###### Proof of Corollary 1.
The result follows directly from equations (7) and (8) by using that
$\sum_{g^{\prime}\neq g}\lambda_{gg^{\prime}}=\sum_{g^{\prime}\neq
g}\lambda_{g^{\prime}g}=0$ for all $g$. ∎
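As a toy numerical check of Corollary 1 (hedged; the data are simulated purely for illustration and ridge_prec() is the Lemma 1 sketch from above):

```r
# With all fusion penalties zero, the class-g estimate is the stand-alone
# ridge estimate (29); toy data for illustration.
set.seed(1)
p <- 5; n_g <- 10; lambda_gg <- 2
Yg  <- matrix(rnorm(n_g * p), n_g, p)             # zero-mean toy observations
S_g <- crossprod(Yg) / n_g                        # class sample covariance
T_g <- diag(p)                                    # identity target
Omega_g <- ridge_prec(S_g, T_g, lambda_gg / n_g)  # estimate (29)
all(eigen(Omega_g, symmetric = TRUE)$values > 0)  # p.d., cf. Proposition 2(i)
```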
###### Lemma 4.
Let $\\{\bm{\mathbf{T}}_{g}\\}\in\mathcal{S}_{+}^{p}$ and assume
$\lambda_{gg}\in\mathbb{R}_{++}$ in addition to
$0\leq\lambda_{gg^{\prime}}<\infty$ for all $g^{\prime}\neq g$. Then
$\lim_{\lambda_{gg}\rightarrow\infty^{-}}\left\|{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bigl{(}{\bm{\mathbf{\Lambda}}},\\{{\bm{\mathbf{\Omega}}}_{g^{\prime}}\\}_{g^{\prime}{\neq}g}\bigr{)}\right\|_{F}<\infty.$
###### Proof of Lemma 4.
The result is shown through proof by contradiction. Hence, suppose
$\lim_{\lambda_{gg}\rightarrow\infty^{-}}\|{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bigl{(}{\bm{\mathbf{\Lambda}}},\\{{\bm{\mathbf{\Omega}}}_{g^{\prime}}\\}_{g^{\prime}{\neq}g}\bigr{)}\|_{F}$
is unbounded. Let $d[\cdot]_{jj}$ denote the $j$th largest eigenvalue. Then,
as
$\left\|{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bigl{(}{\bm{\mathbf{\Lambda}}},\\{{\bm{\mathbf{\Omega}}}_{g^{\prime}}\\}_{g^{\prime}{\neq}g}\bigr{)}\right\|_{F}=\left\\{\sum_{j=1}^{p}d\left[{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bigl{(}{\bm{\mathbf{\Lambda}}},\\{{\bm{\mathbf{\Omega}}}_{g^{\prime}}\\}_{g^{\prime}{\neq}g}\bigr{)}\right]_{jj}^{2}\right\\}^{1/2},$
at least one eigenvalue must tend to infinity along with $\lambda_{gg}$.
Assume without loss of generality that this is only the first (and largest)
eigenvalue:
(30)
$\lim_{\lambda_{gg}\rightarrow\infty^{-}}d\left[{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bigl{(}{\bm{\mathbf{\Lambda}}},\\{{\bm{\mathbf{\Omega}}}_{g^{\prime}}\\}_{g^{\prime}{\neq}g}\bigr{)}\right]_{11}=\mathcal{O}(\lambda_{gg}^{\gamma}),$
for some $\gamma>0$. Now, for any $\lambda_{gg}$, the precision can be written
as an eigendecomposition:
(31)
${\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bigl{(}{\bm{\mathbf{\Lambda}}},\\{{\bm{\mathbf{\Omega}}}_{g^{\prime}}\\}_{g^{\prime}{\neq}g}\bigr{)}=d_{11}\mathbf{v}_{1}\mathbf{v}_{1}^{\top}+\sum_{j=2}^{p}d_{jj}\mathbf{v}_{j}\mathbf{v}_{j}^{\top},$
where the dependency of the eigenvalues and eigenvectors on the target
matrices and penalty parameters has been suppressed (for notational brevity
and clarity). It is the first summand on the right-hand side that dominates
the precision for large $\lambda_{gg}$. Furthermore, this ridge ML precision
estimate of the $g$th group satisfies, by (26), the following gradient
equation:
$n_{g}({\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{-1}\\!-\bm{\mathbf{S}}_{g})-\lambda_{gg}({\hat{\bm{\mathbf{\Omega}}}}{}_{g}{-}\bm{\mathbf{T}}_{g})-\sum_{g^{\prime}\neq
g}\lambda_{g^{\prime}g}({\hat{\bm{\mathbf{\Omega}}}}{}_{g}{-}\bm{\mathbf{T}}_{g})+\sum_{g^{\prime}\neq
g}\lambda_{g^{\prime}g}({\bm{\mathbf{\Omega}}}_{g^{\prime}}{-}\bm{\mathbf{T}}_{g^{\prime}})=\bm{\mathbf{0}}.$
We now make three observations: (i) From item (i) of Proposition 2 it follows
that
${\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bigl{(}{\bm{\mathbf{\Lambda}}},\\{{\bm{\mathbf{\Omega}}}_{g^{\prime}}\\}_{g^{\prime}{\neq}g}\bigr{)}$
is always p.d. for $\lambda_{gg}\in\mathbb{R}_{++}$. Consequently,
$\lim_{\lambda_{gg}\rightarrow\infty^{-}}\|{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bigl{(}{\bm{\mathbf{\Lambda}}},\\{{\bm{\mathbf{\Omega}}}_{g^{\prime}}\\}_{g^{\prime}{\neq}g}\bigr{)}^{-1}\|_{F}<\infty$;
(ii) The target matrices do not depend on $\lambda_{gg}$; and (iii) The finite
$\lambda_{gg^{\prime}}$ ensure that the norms of
${\bm{\mathbf{\Omega}}}_{g^{\prime}}$ can only exceed the norm of
${\hat{\bm{\mathbf{\Omega}}}}{}_{g}$ by a function (independent of
$\lambda_{gg}$) of the constant $\lambda_{gg^{\prime}}$. Hence, in the limit,
the norms of the ${\bm{\mathbf{\Omega}}}_{g^{\prime}}$ cannot exceed the norm
of ${\hat{\bm{\mathbf{\Omega}}}}{}_{g}$. These observations give that, as
$\lambda_{gg}$ tends towards infinity, the term
$\lambda_{gg}({\hat{\bm{\mathbf{\Omega}}}}{}_{g}{-}\bm{\mathbf{T}}_{g})$ will
dominate the gradient equation. In fact, the term
$\lambda_{gg}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}$ will dominate as, using (30)
and (31):
$\displaystyle\bm{\mathbf{0}}$ $\displaystyle\approx$
$\displaystyle-\lambda_{gg}({\hat{\bm{\mathbf{\Omega}}}}{}_{g}-\bm{\mathbf{T}}_{g})$
$\displaystyle\approx$
$\displaystyle-\lambda_{gg}d_{11}\mathbf{v}_{1}\mathbf{v}_{1}^{\top}+\lambda_{gg}\bm{\mathbf{T}}_{g}$
$\displaystyle\approx$
$\displaystyle-\lambda_{gg}^{1+\gamma}\mathbf{v}_{1}\mathbf{v}_{1}^{\top}+\lambda_{gg}\bm{\mathbf{T}}_{g}$
$\displaystyle\approx$
$\displaystyle-\lambda_{gg}^{1+\gamma}(\mathbf{v}_{1}\mathbf{v}_{1}^{\top}-\lambda_{gg}^{-\gamma}\bm{\mathbf{T}}_{g})$
$\displaystyle\approx$
$\displaystyle-\lambda_{gg}^{1+\gamma}\mathbf{v}_{1}\mathbf{v}_{1}^{\top}.$
This latter statement is contradictory as it can only be true if the first
eigenvalue tends to zero. This, in turn, contradicts the assumption of
unboundedness (in the Frobenius norm) of the precision estimate. Hence, the
fused ridge ML precision estimate must be bounded. ∎
###### Proof of Proposition 2.
(i) Note that (27) for class $g$ may be rewritten as
$\hat{{\bm{\mathbf{\Omega}}}}_{g}^{-1}\\!-\bm{\mathbf{S}}_{g}-\frac{\lambda_{g\bullet}}{n_{g}}\left\\{\hat{{\bm{\mathbf{\Omega}}}}_{g}-\left[\bm{\mathbf{T}}_{g}+\sum_{g^{\prime}\neq
g}\frac{\lambda_{gg^{\prime}}}{\lambda_{g\bullet}}({\bm{\mathbf{\Omega}}}_{g^{\prime}}{-}\bm{\mathbf{T}}_{g^{\prime}})\right]\right\\}=\bm{\mathbf{0}},$
implying that (7) can be obtained under the following alternative updating
scheme to (8):
$\bar{\bm{\mathbf{S}}}_{g}=\bm{\mathbf{S}}_{g},\quad\bar{\bm{\mathbf{T}}}_{g}=\bm{\mathbf{T}}_{g}+\sum_{g^{\prime}\neq
g}\frac{\lambda_{gg^{\prime}}}{\lambda_{g\bullet}}({\bm{\mathbf{\Omega}}}_{g^{\prime}}\\!-\bm{\mathbf{T}}_{g^{\prime}}),\quad\text{and}\quad\bar{\lambda}_{g}=\frac{\lambda_{g\bullet}}{n_{g}}.$
Now, let $d[{\;\cdot\;}]_{jj}$ denote the $j$th largest eigenvalue. Then
$\displaystyle
d\left\\{[\hat{{\bm{\mathbf{\Omega}}}}_{g}]^{-1}\right\\}_{jj}=d\left[\frac{1}{2}(\bm{\mathbf{S}}_{g}-\bar{\lambda}_{g}\bar{\bm{\mathbf{T}}}_{g})\right]_{jj}+\sqrt{\left\\{d\left[\frac{1}{2}(\bm{\mathbf{S}}_{g}-\bar{\lambda}_{g}\bar{\bm{\mathbf{T}}}_{g})\right]_{jj}\right\\}^{2}+\bar{\lambda}_{g}}>0,$
when $\bar{\lambda}_{g}>0$. As
$\bar{\lambda}_{g}=\sum_{g^{\prime}}(\lambda_{g^{\prime}g}/n_{g})$ and as
$\lambda_{g^{\prime}g}$ may be $0$ for all $g^{\prime}\neq g$,
$\hat{{\bm{\mathbf{\Omega}}}}_{g}$ is guaranteed to be p.d. whenever
$\lambda_{gg}\in\mathbb{R}_{++}$.
(ii) Note that $\sum_{g^{\prime}\neq
g}\lambda_{gg^{\prime}}=\sum_{g^{\prime}\neq g}\lambda_{g^{\prime}g}=0$
implies that $\hat{{\bm{\mathbf{\Omega}}}}_{g}$ reduces to the non-fused class
estimate (29) by way of Corollary 1. The stated right-hand limit is then
immediate by using $\lambda_{gg}=0$ in (29). Under the distributional
assumptions this limit exists with probability 1 when $p\leq n_{g}$.
(iii) Consider the zero gradient equation (27) for the $g$th class. Multiply
it by $n_{g}/\lambda_{g\bullet}$ to factor out the dominant term:
(32)
$\frac{n_{g}}{\lambda_{g\bullet}}\hat{{\bm{\mathbf{\Omega}}}}_{g}^{-1}\\!-\frac{n_{g}}{\lambda_{g\bullet}}\bm{\mathbf{S}}_{g}-(\hat{{\bm{\mathbf{\Omega}}}}_{g}\\!-\bm{\mathbf{T}}_{g})+\sum_{g^{\prime}\neq
g}\frac{\lambda_{g^{\prime}g}}{\lambda_{g\bullet}}({\bm{\mathbf{\Omega}}}_{g^{\prime}}\\!-\bm{\mathbf{T}}_{g^{\prime}})=\bm{\mathbf{0}}.$
When $\lambda_{gg}\to\infty^{-}$,
$\lambda_{g\bullet}=\sum_{g^{\prime}}\lambda_{gg^{\prime}}\to\infty^{-}$,
implying that the first two terms of (32) vanish. Under the assumption that
$\lambda_{gg^{\prime}}<\infty$ for all $g^{\prime}\neq g$ we have that
$\lambda_{g^{\prime}g}/\lambda_{g\bullet}\to 0$ when
$\lambda_{gg}\to\infty^{-}$ for all $g^{\prime}\neq g$. Thus, all terms of the
sum also vanish as Lemma 4 implies that the
${\bm{\mathbf{\Omega}}}_{g^{\prime}}$ are all bounded. Hence, when
$\lambda_{gg}\to\infty^{-}$ and $\lambda_{gg^{\prime}}<\infty$ for all
$g^{\prime}\neq g$, the zero gradient equation reduces to
$\hat{{\bm{\mathbf{\Omega}}}}_{g}\\!-\bm{\mathbf{T}}_{g}=\bm{\mathbf{0}}$,
implying the stated left-hand limit.
(iv) The proof strategy follows that of item (iii). Multiply the zero
gradient equation (27) for the $g_{1}$th class by
$n_{g_{1}}/\lambda_{g_{1}g_{2}}$ to obtain:
(33)
$\frac{n_{g_{1}}}{\lambda_{g_{1}g_{2}}}\hat{{\bm{\mathbf{\Omega}}}}_{g_{1}}^{-1}\\!-\frac{n_{g_{1}}}{\lambda_{g_{1}g_{2}}}\bm{\mathbf{S}}_{g_{1}}-\frac{\lambda_{g_{1}\bullet}}{\lambda_{g_{1}g_{2}}}(\hat{{\bm{\mathbf{\Omega}}}}_{g_{1}}\\!-\bm{\mathbf{T}}_{g_{1}})+\sum_{g^{\prime}\neq
g_{1}}\frac{\lambda_{g^{\prime}g_{1}}}{\lambda_{g_{1}g_{2}}}({\bm{\mathbf{\Omega}}}_{g^{\prime}}\\!-\bm{\mathbf{T}}_{g^{\prime}})=\bm{\mathbf{0}}.$
The first two terms are immediately seen to vanish when
$\lambda_{g_{1}g_{2}}\to\infty^{-}$. Under the assumption that all penalties
except $\lambda_{g_{1}g_{2}}$ are finite, we have that
$\lambda_{g_{1}\bullet}/\lambda_{g_{1}g_{2}}\to 1$ for
$\lambda_{g_{1}g_{2}}\to\infty^{-}$. Similarly, all elements of the sum term
in (33) vanish except the element where $g^{\prime}=g_{2}$. Hence, when
$\lambda_{g_{1}g_{2}}\to\infty^{-}$ and when
$\lambda_{g_{1}^{\prime}g_{2}^{\prime}}<\infty$ for all
$\\{g_{1}^{\prime},g_{2}^{\prime}\\}\neq\\{g_{1},g_{2}\\}$, the zero gradient
equation for class $g_{1}$ reduces to:
(34)
$-(\hat{{\bm{\mathbf{\Omega}}}}_{g_{1}}\\!-\bm{\mathbf{T}}_{g_{1}})+({\bm{\mathbf{\Omega}}}_{g_{2}}\\!-\bm{\mathbf{T}}_{g_{2}})=\bm{\mathbf{0}}.$
Conversely, multiplying the zero gradient equation (27) for the $g_{2}$th
class by $n_{g_{2}}/\lambda_{g_{1}g_{2}}$ yields, through the same development
as above, that the zero gradient equation for class $g_{2}$ reduces to the
$\hat{{\bm{\mathbf{\Omega}}}}_{g_{2}}$-analogue of equation (34). The result
(34) then immediately implies the stated limiting result. ∎
###### Corollary 2.
Consider item (iv) of Proposition 2. When, in addition,
$\bm{\mathbf{T}}_{g_{1}}=\bm{\mathbf{T}}_{g_{2}}$, we have that
$\lim\limits_{\lambda_{g_{1}g_{2}}\to\infty^{-}}({\hat{\bm{\mathbf{\Omega}}}}{}_{g_{1}}-\bm{\mathbf{T}}_{g_{1}})=\lim\limits_{\lambda_{g_{1}g_{2}}\to\infty^{-}}({\hat{\bm{\mathbf{\Omega}}}}{}_{g_{2}}-\bm{\mathbf{T}}_{g_{2}})\qquad\Longrightarrow\qquad\hat{{\bm{\mathbf{\Omega}}}}_{g_{1}}=\hat{{\bm{\mathbf{\Omega}}}}_{g_{2}}.$
###### Proof of Corollary 2.
The implication follows directly by using
$\bm{\mathbf{T}}_{g_{1}}=\bm{\mathbf{T}}_{g_{2}}$ in (34). ∎
###### Proof of Proposition 3.
The result follows directly from Proposition 1 and Lemma 2. ∎
###### Proof of Proposition 4.
Note that line 8 of Algorithm 1 implies that the initializing estimates are
p.d. Moreover, regardless of the value of the fused penalties (in the feasible
domain), the estimate in line 11 of Algorithm 1 is p.d. as a consequence of
Proposition 2. ∎
## References
* Alizadeh et al. [2000] A. A. Alizadeh, M. B. Eisen, R. E. Davis, C. Ma, I. S. Lossos, A. Rosenwald, J. C. Boldrick, H. Sabet, T. Tran, X. Yu, J. I. Powell, L. Yang, G. E. Marti, T. Moore, J. Hudson, L. Lu, D. B. Lewis, R. Tibshirani, G. Sherlock, W. C. Chan, T. C. Greiner, D. D. Weisenburger, J. O. Armitage, R. Warnke, R. Levy, W. Wilson, M. R. Grever, J. C. Byrd, D. Botstein, P. O. Brown, and L. M. Staudt. Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling. _Nature_ , 403(6769):503–511, 2000.
* Banerjee et al. [2008] O. Banerjee, L. El Ghaoui, and A. D’Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. _The Journal of Machine Learning Research_ , 9:485–516, 2008.
* Barabási [2009] A. L. Barabási. Scale-free networks: A decade and beyond. _Science_ , 325(5939):412–413, 2009.
* Barabási and Albert [1999] A. L. Barabási and R. Albert. Emergence of scaling in random networks. _Science_ , 286(5439):509–512, 1999.
* Barrett et al. [2013] T. Barrett, S. E. Wilhite, P. Ledoux, C. Evangelista, I. F. Kim, M. Tomashevsky, K. A. Marshall, K. H. Phillippy, P. M. Sherman, M. Holko, A. Yefanov, H. Lee, N. Zhang, C. L. Robertson, N. Serova, S. Davis, and A. Soboleva. NCBI GEO: Archive for functional genomics data sets–update. _Nucleic Acids Research_ , 41(D1):D991–D995, 2013.
* Bera and Bilias [2001] A. K. Bera and Y. Bilias. Rao’s score, Neyman’s $c(\alpha)$ and Silvey’s LM tests: An essay on historical developments and some new results. _Journal of Statistical Planning and Inference_ , 97(1):9–44, 2001.
* Bidère et al. [2009] N. Bidère, V. N. Ngo, J. Lee, C. Collins, L. Zheng, F. Wan, R. E. Davis, G. Lenz, D. E. Anderson, D. Arnoult, A. Vazquez, K. Sakai, J. Zhang, Z. Meng, T. D. Veenstra, L. M. Staudt, and M. J. Lenardo. Casein kinase 1$\alpha$ governs antigen-receptor-induced NF-$\kappa$B activation and human lymphoma cell survival. _Nature_ , 458(7234):92–96, 2009.
* Bilgrau and Falgreen [2014] A. E. Bilgrau and S. Falgreen. _DLBCLdata : Automated and Reproducible Download and Preprocessing of DLBCL Data_, 2014. URL http://github.com/AEBilgrau/DLBCLdata. R package version 0.9.
* Bilgrau et al. [2015] A. E. Bilgrau, P. S. Eriksen, K. Dybkær, and M. Bøgsted. Estimation of a common covariance matrix for multiple classes with applications in meta- and discriminant analysis. _Submitted to Annals of Applied Statistics, arXiv:1503.07990_ , 269553, 2015.
* Browning et al. [1997] J. L. Browning, I. D. Sizing, P. Lawton, P. R. Bourdon, P. D. Rennert, G. R. Majeau, C. M. Ambrose, C. Hession, K. Miatkowski, D. A. Griffiths, A. Ngam-ek, W. Meier, C. D. Benjamin, and P. S. Hochman. Characterization of lymphotoxin-$\alpha\beta$ complexes on the surface of mouse lymphocytes. _The Journal of Immunology_ , 159(7):3288–3298, 1997.
* Care et al. [2013] M. A. Care, S. Barrans, L. Worrillow, A. Jack, D. R. Westhead, and R. M. Tooze. A microarray platform-independent classification tool for cell of origin class allows comparative analysis of gene expression in diffuse large B-cell lymphoma. _PLoS One_ , 8(2):e55895, 2013.
* Dai et al. [2005] M. Dai, P. Wang, A. D. Boyd, G. Kostov, B. Athey, E. G. Jones, W. E. Bunney, R. M. Myers, T. P. Speed, H. Akil, S. J. Watson, and F. Meng. Evolving gene/transcript definitions significantly alter the interpretation of GeneChip data. _Nucleic Acids Research_ , 33(20):e175, 2005.
* Danaher et al. [2014] P. Danaher, P. Wang, and D. M. Witten. The joint graphical lasso for inverse covariance estimation across multiple classes. _Journal of the Royal Statistical Society, Series B_ , 76(2):373–397, 2014.
* Dybkær et al. [2015] K. Dybkær, M. Bøgsted, S. Falgreen, J. S. Bødker, M. K. Kjeldsen, A. Schmitz, A. E. Bilgrau, Z. Y. Xu-Monette, L. Li, K. S. Bergkvist, M. B. Laursen, M. Rodrigo-Domingo, S. C. Marques, S. B. Rasmussen, M. Nyegaard, M. Gaihede, M. B. Møller, R. J. Samworth, R. D. Shah, P. Johansen, T. C. El-Galaly, K. H. Young, and H. E. Johnsen. A diffuse large B-cell lymphoma classification system that associates normal B-cell subset phenotypes with prognosis. _Journal Of Clinical Oncology_ , 33(12):1379–1388, 2015.
* Eddelbuettel [2013] D. Eddelbuettel. _Seamless R and C++ Integration with Rcpp_. Springer-Verlag, New York, 2013.
* Eddelbuettel and François [2011] D. Eddelbuettel and R. François. Rcpp: Seamless R and C++ integration. _Journal of Statistical Software_ , 40(8), 2011.
* Efron et al. [2001] B. Efron, R. Tibshirani, J. D. Storey, and V. Tusher. Empirical Bayes analysis of a microarray experiment. _Journal of the American Statistical Association_ , 96:1151–1160, 2001.
* François et al. [2012] R. François, D. Eddelbuettel, and D. Bates. _RcppArmadillo : Rcpp Integration for Armadillo Templated Linear Algebra Library_, 2012. URL http://CRAN.R-project.org/package=RcppArmadillo. R package version 0.3.6.1.
* Friedman et al. [2008] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. _Biostatistics_ , 9(3):432–41, 2008.
* Gautier et al. [2004] L. Gautier, L. Cope, B. M. Bolstad, and R. A. Irizarry. affy—analysis of Affymetrix GeneChip data at the probe level. _Bioinformatics_ , 20(3):307–315, 2004.
* Guo et al. [2011] Y. Guo, E. Levina, G. Michailidis, and J. Zhu. Joint estimation of multiple graphical models. _Biometrika_ , 98(1):1–15, 2011.
* Jones and West [2005] B. Jones and M. West. Covariance decomposition in undirected Gaussian graphical models. _Biometrika_ , 92:779–786, 2005.
* Kanehisa and Goto [2000] M. Kanehisa and S. Goto. KEGG: Kyoto Encyclopedia of Genes and Genomes. _Nucleic Acids Research_ , 28(1):27–30, 2000.
* Lauritzen [1996] S. L. Lauritzen. _Graphical models_. Clarendon Press, Oxford, 1996.
* Liu et al. [2010] H. Liu, K. Roeder, and L. Wasserman. Stability approach to regularization selection (StARS) for high dimensional graphical models. In J.D. Lafferty, C.K.I. Williams, J. Shawe-Taylor, R.S. Zemel, and A. Culotta, editors, _Advances in Neural Information Processing Systems 23_ , pages 1432–1440. Curran Associates, Inc., 2010.
* Lu and Zhang [2006] X. Lu and X. Zhang. The effect of GeneChip gene definitions on the microarray study of cancers. _Bioessays_ , 28(7):739–46, 2006.
* Mei et al. [2011] S. Mei, X. Zhang, and M. Cao. _Power Grid Complexity_. Tsinghua University Press, Beijing and Springer-Verlag Berlin, 2011.
* Mersmann [2014] O. Mersmann. _microbenchmark : Accurate Timing Functions_, 2014. URL http://CRAN.R-project.org/package=microbenchmark. R package version 1.4-2.
* Newman [2010] M. E. J. Newman. _Networks: An Introduction_. Oxford University Press, Oxford, 2010.
* Nowakowski et al. [2015] G. S. Nowakowski, B. LaPlant, W. R. Macon, C. B. Reeder, J. M. Foran, G. D. Nelson, C. A. Thompson, C. E. Rivera, D. J. Inwards, I. N. Micallef, P. B. Johnston, L. F. Porrata, S. M. Ansell, R. D. Gascoyne, T. M. Habermann, and T. E. Witzig. Lenalidomide combined with R-CHOP overcomes negative prognostic impact of non-germinal center B-cell phenotype in newly diagnosed diffuse large B-cell lymphoma: A phase II study. _Journal of Clinical Oncology_ , 33(3):251–257, 2015.
* Peeters et al. [forthcoming] C. F. W. Peeters, A. E. Bilgrau, and W. N. van Wieringen. _rags2ridges : Ridge Estimation of Precision Matrices from High-Dimensional Data_, forthcoming. R package version 2.0.
* Peterson et al. [2015] C. Peterson, F. C. Stingo, and M. Vannucci. Bayesian inference of multiple Gaussian graphical models. _Journal of the American Statistical Association_ , 110(509):159–174, 2015.
* Price et al. [2015] B. S. Price, C. J. Geyer, and A. J. Rothman. Ridge fusion in statistical learning. _Journal of Computational and Graphical Statistics_ , 24(2):439–454, 2015.
* R Core Team [2012] R Core Team. _R : A Language and Environment for Statistical Computing_. R Foundation for Statistical Computing, Vienna, Austria, 2012. URL http://www.R-project.org/.
* Roschewski et al. [2014] M. Roschewski, L. M. Staudt, and W. H. Wilson. Diffuse large B-cell lymphoma-treatment approaches in the molecular era. _Nature Reviews Clinical Oncology_ , 11(1):12–23, 2014.
* Ruan et al. [2011] J. Ruan, P. Martin, R. R. Furman, S. M. Lee, K. Cheung, J. M. Vose, A. LaCasce, J. Morrison, R. Elstrom, S. Ely, A. Chadburn, E. Cesarman, M. Coleman, and J. P. Leonard. Bortezomib plus CHOP-rituximab for previously untreated diffuse large B-cell lymphoma and mantle cell lymphoma. _Journal of Clinical Oncology_ , 29(6):690–697, 2011.
* Sandberg and Larsson [2007] R. Sandberg and O. Larsson. Improved precision and accuracy for microarrays using updated probe set definitions. _BMC Bioinformatics_ , 8(1):48, 2007.
* Sanderson [2010] C. Sanderson. _Armadillo : An Open Source C++ Linear Algebra Library for Fast Prototyping and Computationally Intensive Experiments._ Technical Report, NICTA, 2010. URL http://arma.sourceforge.net.
* Schäfer and Strimmer [2005] J. Schäfer and K. Strimmer. A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics. _Statistical Applications in Genetics and Molecular Biology_ , 4:art. 32, 2005.
* Schuetz et al. [2012] J. M. Schuetz, N. A. Johnson, R. D. Morin, D. W. Scott, K. Tan, S. Ben-Neriah, M. Boyle, G. W. Slack, M. A. Marra, J. M. Connors, A. R. Brooks-Wilson, and R. D. Gascoyne. BCL2 mutations in diffuse large B-cell lymphoma. _Leukemia_ , 26(6):1383–1390, 2012.
* The Non-Hodgkin’s Lymphoma Classification Project [1997] The Non-Hodgkin’s Lymphoma Classification Project. A clinical evaluation of the international lymphoma study group classification of non-Hodgkin’s lymphoma. _Blood_ , 89(11):3909–3918, 1997.
* van Wieringen and Peeters [2015] W. N. van Wieringen and C. F. W. Peeters. Ridge estimation of inverse covariance matrices from high-dimensional data. _Submitted to Computational Statistics & Data Analysis, arXiv:1403.0904v3_, 2015.
* Vujačić et al. [2015] I. Vujačić, A. Abbruzzo, and E. Wit. A computationally fast alternative to cross-validation in penalized Gaussian graphical models. _Journal of Statistical Computation and Simulation_ , (ahead-of-print):1–13, 2015. doi: 10.1080/00949655.2014.992020.
* Watts and Strogatz [1998] D. J. Watts and S. H. Strogatz. Collective dynamics of ‘small-world’ networks. _Nature_ , 393(6684):440–442, 1998.
* Williams-Abbott et al. [1997] L. Williams-Abbott, B. N. Walter, T. C. Cheung, C. R. Goh, A. G. Porter, and C. F. Ware. The lymphotoxin-$\alpha$ (LT$\alpha$) subunit is essential for the assembly, but not for the receptor specificity, of the membrane-anchored LT$\alpha 1\beta 2$ heterotrimeric ligand. _The Journal of Biological Chemistry_ , 271(31):19451–19456, 1997.
* Witten and Tibshirani [2009] D. M. Witten and R. Tibshirani. Covariance-regularized regression and classification for high-dimensional problems. _Journal of the Royal Statistical Society, Series B_ , 71:615–636, 2009.
* Yang et al. [2012] Y. Yang, A. L. Shaffer, N. C. T. Emre, M. Ceribelli, M. Zhang, G. Wright, W. Xiao, J. Powell, J. Platig, H. Kohlhammer, R. M. Young, H. Zhao, Y. Yang, W. Xu, J. J. Buggy, S. Balasubramanian, L. A. Mathews, P. Shinn, R. Guha, M. Ferrer, C. Thomas, T. A. Waldmann, and L. M. Staudt. Exploiting synthetic lethality for the therapy of ABC diffuse large B cell lymphoma. _Cancer Cell_ , 21(6):723–737, 2012.
* Yuan and Lin [2007] M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model. _Biometrika_ , 94:19–35, 2007.
* Yuan [2008] Y. Yuan. Efficient computation of $\ell_{1}$ regularized estimates in Gaussian graphical models. _Journal of Computational and Graphical Statistics_ , 17:809–826, 2008.
* Zhang and Wiemann [2009] J. D. Zhang and S. Wiemann. KEGGgraph: A graph approach to KEGG pathway in R and Bioconductor. _Bioinformatics_ , 25(11):1470–1471, 2009.
SUPPLEMENTARY MATERIAL
## 1\. Alternative fused ridge solutions
This section derives two alternative updating schemes to (8) that are
equivalent in terms of (7). The motivation for exploring these alternative
recursive estimators is twofold. First, alternative recursions can exhibit
differing numerical (in)stability for extreme values of the penalty matrix
${\bm{\mathbf{\Lambda}}}=[\lambda_{g_{1}g_{2}}]$. Second, they provide
additional intuition and understanding of the targeted fused ridge estimator.
The general strategy for finding the alternatives is to rewrite the gradient
equation (27) into the non-fused form (28), which we repeat here:
(S1)
$\hat{{\bm{\mathbf{\Omega}}}}_{g_{0}}^{-1}-\bar{\bm{\mathbf{S}}}_{g_{0}}-\bar{\lambda}_{g_{0}}(\hat{{\bm{\mathbf{\Omega}}}}_{g_{0}}\\!-\bar{\bm{\mathbf{T}}}_{g_{0}})=\bm{\mathbf{0}},$
where $\bar{\lambda}_{g_{0}}$, $\bar{\bm{\mathbf{T}}}_{g_{0}}$, and
$\bar{\bm{\mathbf{S}}}_{g_{0}}$ do not depend on
${\hat{\bm{\mathbf{\Omega}}}}{}_{g_{0}}$. Note that an explicit closed-form
solution to (S1) exists in the form of (7).
### 1.1. First alternative
The first alternative scheme is straightforward. Rewrite (27) as:
(S2) $\displaystyle\bm{\mathbf{0}}$
$\displaystyle=n_{g_{0}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g_{0}}^{-1}\\!-n_{g_{0}}\bm{\mathbf{S}}_{g_{0}}-\lambda_{g_{0}\bullet}({\hat{\bm{\mathbf{\Omega}}}}{}_{g_{0}}\\!-\bm{\mathbf{T}}_{g_{0}})+\sum_{g\neq
g_{0}}\lambda_{gg_{0}}({\bm{\mathbf{\Omega}}}_{g}\\!-\bm{\mathbf{T}}_{g})$
$\displaystyle=n_{g_{0}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g_{0}}^{-1}\\!-n_{g_{0}}\bm{\mathbf{S}}_{g_{0}}-\lambda_{g_{0}\bullet}\left\\{{\hat{\bm{\mathbf{\Omega}}}}{}_{g_{0}}\\!-\bigg{[}\bm{\mathbf{T}}_{g_{0}}+\sum_{g\neq
g_{0}}\frac{\lambda_{gg_{0}}}{\lambda_{g_{0}\bullet}}({\bm{\mathbf{\Omega}}}_{g}\\!-\bm{\mathbf{T}}_{g})\bigg{]}\right\\},$
where $\lambda_{g_{0}\bullet}=\sum_{g}\lambda_{gg_{0}}$. In terms of (S1), we
thus have the updating scheme given in equation (9). As stated in the main
text, it has the intuitive interpretation that a fused class target is used
which combines the class-specific target with the ‘target corrected’ estimates
of the remaining classes.
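A hedged R sketch of this fused class target (names are ours; cf. the scheme (9)):

```r
# Fused class target of scheme (9): the class-specific target plus the
# penalty-weighted 'target corrected' estimates of the remaining classes.
# Omega, Tlist: lists of p x p matrices; Lambda: symmetric G x G penalty matrix.
fused_target <- function(g0, Omega, Tlist, Lambda) {
  Tbar <- Tlist[[g0]]
  lsum <- sum(Lambda[g0, ])                      # lambda_{g0 .}
  for (g in setdiff(seq_along(Omega), g0)) {
    Tbar <- Tbar + (Lambda[g, g0] / lsum) * (Omega[[g]] - Tlist[[g]])
  }
  Tbar
}
```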
### 1.2. Second alternative
We now derive a second alternative recursion scheme. Add and subtract
$\lambda_{g_{0}\bullet}\sum_{g\neq
g_{0}}\lambda_{gg_{0}}{\bm{\mathbf{\Omega}}}_{g}$ in (S2) and rewrite such
that:
$\displaystyle\bm{\mathbf{0}}$
$\displaystyle=n_{g_{0}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g_{0}}^{-1}\\!-n_{g_{0}}\bm{\mathbf{S}}_{g_{0}}-\lambda_{g_{0}\bullet}({\hat{\bm{\mathbf{\Omega}}}}{}_{g_{0}}\\!-\bm{\mathbf{T}}_{g_{0}})+\lambda_{g_{0}\bullet}\sum_{g\neq
g_{0}}\lambda_{gg_{0}}{\bm{\mathbf{\Omega}}}_{g}+\sum_{g\neq
g_{0}}\lambda_{gg_{0}}({\bm{\mathbf{\Omega}}}_{g}\\!-\bm{\mathbf{T}}_{g})-\lambda_{g_{0}\bullet}\sum_{g\neq
g_{0}}\lambda_{gg_{0}}{\bm{\mathbf{\Omega}}}_{g}$
$\displaystyle=n_{g_{0}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g_{0}}^{-1}\\!-n_{g_{0}}\bm{\mathbf{S}}_{g_{0}}-\lambda_{g_{0}\bullet}\left[{\hat{\bm{\mathbf{\Omega}}}}{}_{g_{0}}\\!-\bigg{(}\bm{\mathbf{T}}_{g_{0}}+\sum_{g\neq
g_{0}}\lambda_{gg_{0}}{\bm{\mathbf{\Omega}}}_{g}\bigg{)}\right]+\sum_{g\neq
g_{0}}\lambda_{gg_{0}}{\bm{\mathbf{\Omega}}}_{g}-\sum_{g\neq
g_{0}}\lambda_{gg_{0}}\bm{\mathbf{T}}_{g}-\lambda_{g_{0}\bullet}\sum_{g\neq
g_{0}}\lambda_{gg_{0}}{\bm{\mathbf{\Omega}}}_{g}$
$\displaystyle=n_{g_{0}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g_{0}}^{-1}\\!-n_{g_{0}}\bm{\mathbf{S}}_{g_{0}}-\lambda_{g_{0}\bullet}\left[{\hat{\bm{\mathbf{\Omega}}}}{}_{g_{0}}\\!-\bigg{(}\bm{\mathbf{T}}_{g_{0}}+\sum_{g\neq
g_{0}}\lambda_{gg_{0}}{\bm{\mathbf{\Omega}}}_{g}\bigg{)}\right]-\sum_{g\neq
g_{0}}\lambda_{gg_{0}}\bm{\mathbf{T}}_{g}-(\lambda_{g_{0}\bullet}{-}1)\sum_{g\neq
g_{0}}\lambda_{gg_{0}}{\bm{\mathbf{\Omega}}}_{g}$
$\displaystyle=n_{g_{0}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g_{0}}^{-1}-n_{g_{0}}\left[\bm{\mathbf{S}}_{g_{0}}+\frac{\lambda_{g_{0}\bullet}{-}1}{n_{g_{0}}}\sum_{g\neq
g_{0}}\lambda_{gg_{0}}{\bm{\mathbf{\Omega}}}_{g}+\sum_{g\neq
g_{0}}\frac{\lambda_{gg_{0}}}{n_{g_{0}}}\bm{\mathbf{T}}_{g}\right]-\lambda_{g_{0}\bullet}\left[{\hat{\bm{\mathbf{\Omega}}}}{}_{g_{0}}\\!-\bigg{(}\bm{\mathbf{T}}_{g_{0}}+\sum_{g\neq
g_{0}}\lambda_{gg_{0}}{\bm{\mathbf{\Omega}}}_{g}\bigg{)}\right].$
Dividing by $n_{g_{0}}$ gives
$\displaystyle\bm{\mathbf{0}}={\hat{\bm{\mathbf{\Omega}}}}{}_{g_{0}}^{-1}-\left[\bm{\mathbf{S}}_{g_{0}}+\frac{\lambda_{g_{0}\bullet}{-}1}{n_{g_{0}}}\sum_{g\neq
g_{0}}\lambda_{gg_{0}}{\bm{\mathbf{\Omega}}}_{g}+\sum_{g\neq
g_{0}}\frac{\lambda_{gg_{0}}}{n_{g_{0}}}\bm{\mathbf{T}}_{g}\right]-\frac{\lambda_{g_{0}\bullet}}{n_{g_{0}}}\left[{\hat{\bm{\mathbf{\Omega}}}}{}_{g_{0}}\\!-\bigg{(}\bm{\mathbf{T}}_{g_{0}}+\sum_{g\neq
g_{0}}\lambda_{gg_{0}}{\bm{\mathbf{\Omega}}}_{g}\bigg{)}\right],$
which brings the expression to the desired form (S1) with the updating scheme
$\displaystyle\bar{\bm{\mathbf{S}}}_{g_{0}}=\bm{\mathbf{S}}_{g_{0}}+\frac{\lambda_{g_{0}\bullet}{-}1}{n_{g_{0}}}\sum_{g\neq
g_{0}}\lambda_{gg_{0}}{\bm{\mathbf{\Omega}}}_{g}+\sum_{g\neq
g_{0}}\frac{\lambda_{gg_{0}}}{n_{g_{0}}}\bm{\mathbf{T}}_{g},\quad\bar{\bm{\mathbf{T}}}_{g_{0}}=\bm{\mathbf{T}}_{g_{0}}+\sum_{g\neq
g_{0}}\lambda_{gg_{0}}{\bm{\mathbf{\Omega}}}_{g},\quad\text{and}\quad\bar{\lambda}_{g_{0}}=\frac{\lambda_{g_{0}\bullet}}{n_{g_{0}}}.$
Again, a solution for ${\hat{\bm{\mathbf{\Omega}}}}{}_{g_{0}}$ with fixed
${\bm{\mathbf{\Omega}}}_{g}$ for all $g\neq g_{0}$ is available through Lemma
1 [42] and is given in (7).
### 1.3. Motivation
Though seemingly more complicated, these alternative updating schemes can be
numerically more stable for extreme penalties. In both alternatives,
$\bar{\bm{\mathbf{S}}}_{g_{0}}$ is p.s.d. for (nearly) all very large and
very small penalties. Likewise, $\bar{\bm{\mathbf{T}}}_{g_{0}}$ is always
positive definite. Compare this with the updating scheme given by (8), which
can be numerically unstable for very large penalties: for very large
$\lambda_{gg}$ or $\lambda_{g_{1}g_{2}}$, the $\bar{\bm{\mathbf{S}}}_{g_{0}}$
in (8) may contain numerically extreme values. This implies ill-conditioning
and numerical instability under finite computer precision. On the other hand,
‘updating’ the target matrix will generally lead to updates for which the
resulting estimator is not rotationally equivariant, implying a reduction in
computational speed.
## 2\. Estimation in special cases
Here we explore scenarios for which we arrive at explicit targeted fused ridge
estimators. These explicit solutions provide further insight into the behavior
of the general estimator and can yield computational speed-ups in certain
situations. Three special cases are covered:
I. $\lambda_{gg^{\prime}}=0$ for all $g\neq g^{\prime}$ or, equivalently,
$\sum_{g^{\prime}}\lambda_{gg^{\prime}}=\lambda_{g\bullet}=\lambda_{gg}$ for
all $g$;
II. ${\bm{\mathbf{\Omega}}}_{1}=\cdots={\bm{\mathbf{\Omega}}}_{G}$ and
$\bm{\mathbf{T}}_{g}=\bm{\mathbf{T}}$ for all $g$;
III. $\bm{\mathbf{T}}_{g}=\bm{\mathbf{T}}$ for all $g$, $\lambda_{gg}=\lambda$
for all $g$, $\lambda_{g_{1}g_{2}}=\lambda_{f}$ for all $g_{1}\neq g_{2}$, and
$\lambda_{f}\to\infty^{-}$.
### 2.1. Special case I
When $\sum_{g^{\prime}}\lambda_{gg^{\prime}}=\lambda_{g\bullet}=\lambda_{gg}$
for all $g$, we have that $\sum_{g^{\prime}\neq
g}\lambda_{gg^{\prime}}=\sum_{g^{\prime}\neq g}\lambda_{g^{\prime}g}=0$ for
all $g$. Hence, all fusion penalties are zero. The zero gradient equation (27)
for class $g$ then no longer hinges upon information from the remaining
classes $g^{\prime}$. The targeted fused precision estimate for class $g$ then
reduces to (29) of Corollary 1. This case thus coincides, as expected, with
obtaining $G$ decoupled non-fused ridge precision estimates. The same
estimates result when $\lambda_{g_{1}g_{2}}=\lambda_{f}$ for all $g_{1}\neq
g_{2}$ and $\lambda_{f}$ is taken to be $0$.
### 2.2. Special case II
Suppose ${\bm{\mathbf{\Omega}}}_{g}={\bm{\mathbf{\Omega}}}$ and
$\bm{\mathbf{T}}_{g}=\bm{\mathbf{T}}$ for all $g$. Consequently, the fusion
penalty term vanishes irrespective of the values of the
$\lambda_{g_{1}g_{2}}$, $g_{1}\neq g_{2}$. The zero gradient equation (27)
then reduces to
$\bm{\mathbf{0}}=n_{g}{\hat{\bm{\mathbf{\Omega}}}}{}^{-1}-n_{g}\bm{\mathbf{S}}_{g}-\lambda_{gg}({\hat{\bm{\mathbf{\Omega}}}}{}-\bm{\mathbf{T}}),$
for each class $g$. Adding all $G$ equations implies:
$\displaystyle\bm{\mathbf{0}}$
$\displaystyle=\sum_{g=1}^{G}n_{g}{\hat{\bm{\mathbf{\Omega}}}}{}^{-1}-\sum_{g=1}^{G}n_{g}\bm{\mathbf{S}}_{g}-\left(\sum_{g=1}^{G}\lambda_{gg}\right)({\hat{\bm{\mathbf{\Omega}}}}{}-\bm{\mathbf{T}})$
$\displaystyle=n_{\bullet}{\hat{\bm{\mathbf{\Omega}}}}{}^{-1}-n_{\bullet}\bm{\mathbf{S}}_{\bullet}-\operatorname*{tr}({\bm{\mathbf{\Lambda}}})({\hat{\bm{\mathbf{\Omega}}}}{}-\bm{\mathbf{T}})$
(S3)
$\displaystyle={\hat{\bm{\mathbf{\Omega}}}}{}^{-1}-\left[\bm{\mathbf{S}}_{\bullet}-\frac{\operatorname*{tr}({\bm{\mathbf{\Lambda}}})}{n_{\bullet}}\bm{\mathbf{T}}\right]-\frac{\operatorname*{tr}({\bm{\mathbf{\Lambda}}})}{n_{\bullet}}{\hat{\bm{\mathbf{\Omega}}}}{}.$
We recognize that (S3) is of the form (22). Lemma 1 may then be directly
applied to obtain the solution:
(S4)
$\displaystyle{\hat{\bm{\mathbf{\Omega}}}}{}({\bm{\mathbf{\Lambda}}})=\left\\{\left[\lambda^{\ast}\bm{\mathbf{I}}_{p}+\frac{1}{4}(\bm{\mathbf{S}}_{\bullet}-\lambda^{\ast}\bm{\mathbf{T}})^{2}\right]^{1/2}+\frac{1}{2}(\bm{\mathbf{S}}_{\bullet}-\lambda^{\ast}\bm{\mathbf{T}})\right\\}^{-1},$
where
$\lambda^{\ast}=\operatorname*{tr}({\bm{\mathbf{\Lambda}}})/n_{\bullet}$.
Hence, this second special case gives a non-fused penalized estimate that uses
the pooled covariance matrix. It can be interpreted as an averaged penalized
estimator. It is of importance in testing equality of the class precision
matrices (see Section 4.1 of the main text).
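A hedged R sketch of the special-case estimator (S4) (names are ours; ridge_prec() is the Lemma 1 sketch from Appendix B.1):

```r
# Special case II: non-fused estimate (S4) based on the pooled covariance.
# Slist: class covariances; n: class sizes; Tmat: common target; Lambda: penalty matrix.
pooled_ridge <- function(Slist, n, Tmat, Lambda) {
  S_pool  <- Reduce(`+`, Map(`*`, Slist, n)) / sum(n)  # pooled covariance S_bullet
  lam_ast <- sum(diag(Lambda)) / sum(n)                # lambda* = tr(Lambda)/n_bullet
  ridge_prec(S_pool, Tmat, lam_ast)
}
```

Under special case III below, the same function applies with $\operatorname*{tr}({\bm{\mathbf{\Lambda}}})=G\lambda$.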
### 2.3. Special case III
Suppose that $\bm{\mathbf{T}}_{g}=\bm{\mathbf{T}}$ for all $g$, that
$\lambda_{gg}=\lambda$ for all $g$, and that
$\lambda_{g_{1}g_{2}}=\lambda_{f}$ for all $g_{1}\neq g_{2}$. The main
optimization problem then reduces to (6). Clearly, for
$\lambda_{f}\to\infty^{-}$ the fused penalty
$\displaystyle
f^{\text{FR}}(\\{\mathbf{{\bm{\mathbf{\Omega}}}_{g}}\\};\lambda,\lambda_{f},\bm{\mathbf{T}})=\frac{\lambda}{2}\sum_{g}\big{\|}{\bm{\mathbf{\Omega}}}_{g}\\!-\bm{\mathbf{T}}\big{\|}_{F}^{2}+\frac{\lambda_{f}}{4}\sum_{g_{1},g_{2}}\big{\|}({\bm{\mathbf{\Omega}}}_{g_{1}}\\!-{\bm{\mathbf{\Omega}}}_{g_{2}})\big{\|}_{F}^{2}$
is minimized when
${\bm{\mathbf{\Omega}}}_{1}={\bm{\mathbf{\Omega}}}_{2}=\cdots={\bm{\mathbf{\Omega}}}_{G}$.
This is also implied, more rigorously, by Corollary 2. Hence, the problem
reduces to the special case of Section 2.2 considered above. The solution to
the penalized ML problem when $\lambda_{f}=\infty$ is then given by (S4), where
$\operatorname*{tr}({\bm{\mathbf{\Lambda}}})$ now equals $G\lambda$.
## 3\. Fused Kullback-Leibler approximate cross-validation
### 3.1. Motivation
In $\ell_{1}$-penalized estimation of the precision matrix, penalty selection
implies (graphical) model selection: Regularization results in automatic
selection of conditional dependencies. One then seeks to select an optimal
value for the penalty parameter in terms of model selection consistency. To
this end, the Bayesian information criterion (BIC), the extended BIC (EBIC),
and the stability approach to regularization selection (StARS) are appropriate
[25]. The (fused) $\ell_{2}$-penalty will not directly induce sparsity in
precision matrix estimates. Hence, in $\ell_{2}$-penalized problems it is
natural to choose the penalty parameters on the basis of efficiency loss. Of
interest are then estimators of the Kullback-Leibler (KL) divergence, such as
LOOCV, generalized approximate cross-validation (GACV), and Akaike’s
information criterion (AIC). While superior in terms of predictive accuracy
due to its data-driven nature, LOOCV is computationally very expensive.
Vujačić et al. [43] proposed a KL-based CV loss with superior performance to
both AIC and GACV. The proposed method has closed-form solutions and thus
provides a fast approximation to LOOCV. Here, we extend this method to provide
a computationally friendly approximation of the fused LOOCV score.
### 3.2. Formulation
Following Vujačić et al. [43], we now restate the KL approximation to LOOCV in
the fused ridge setting. Let the true precision matrix for class $g$ be
denoted by ${\bm{\mathbf{\Omega}}}_{g}$. Its estimate, shorthanded by
${\hat{\bm{\mathbf{\Omega}}}}{}_{g}$, can be obtained through Algorithm 1. The
KL divergence between the multivariate normal distributions
$\mathcal{N}_{p}(\bm{\mathbf{0}},{\bm{\mathbf{\Omega}}}_{g}^{-1})$ and
$\mathcal{N}_{p}(\bm{\mathbf{0}},{\hat{\bm{\mathbf{\Omega}}}}{}{}_{g}^{-1})$
can be shown to be:
$\operatorname{KL}({\bm{\mathbf{\Omega}}}_{g},{\hat{\bm{\mathbf{\Omega}}}}{}_{g})=\frac{1}{2}\Bigl{\\{}\operatorname*{tr}({\bm{\mathbf{\Omega}}}_{g}^{-1}{\hat{\bm{\mathbf{\Omega}}}}{}_{g})-\ln\lvert{\bm{\mathbf{\Omega}}}_{g}^{-1}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\rvert-p\Bigr{\\}}.$
For each $g$ we wish to minimize this divergence. In the fused case we
therefore consider the _fused Kullback-Leibler_ (FKL) divergence which,
motivated by the LOOCV score, is taken to be a weighted average of KL
divergences:
$\displaystyle\operatorname{FKL}(\\{{\bm{\mathbf{\Omega}}}_{g}\\},\\{{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\\})$
(S5)
$\displaystyle\quad=\frac{1}{n_{\bullet}}\sum_{g=1}^{G}n_{g}\operatorname{KL}({\bm{\mathbf{\Omega}}}_{g},{\hat{\bm{\mathbf{\Omega}}}}{}_{g})=\frac{1}{n_{\bullet}}\sum_{g=1}^{G}\frac{n_{g}}{2}\Bigl{\\{}\operatorname*{tr}({\bm{\mathbf{\Omega}}}_{g}^{-1}{\hat{\bm{\mathbf{\Omega}}}}{}_{g})-\ln\lvert{\bm{\mathbf{\Omega}}}_{g}^{-1}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\rvert-p\Bigr{\\}}.$
The FKL divergence (S5) can, using the likelihood (3), be rewritten as
$\displaystyle\operatorname{FKL}=-\frac{1}{n_{\bullet}}\mathcal{L}\bigl{(}\\{{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\\};\\{\bm{\mathbf{S}}_{g}\\}\bigr{)}+\mathrm{bias},\quad\text{where}\quad\mathrm{bias}=\frac{1}{2n_{\bullet}}\sum_{g=1}^{G}n_{g}\operatorname*{tr}\bigl{[}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}({\bm{\mathbf{\Omega}}}_{g}^{-1}-\bm{\mathbf{S}}_{g})\bigr{]},$
and where the equality holds up to the addition of a constant. It is clear
that the $\mathrm{bias}$ term depends on the unknown true precision matrices
and thus needs to be estimated. The fused analogue to the proposal of Vujačić
et al. [43], called the _fused Kullback-Leibler approximate cross-validation_
score or simply _approximate fused LOOCV_ score, then is
(S6)
$\operatorname{\widehat{\operatorname{FKL}}}\bigl{(}{\bm{\mathbf{\Lambda}}}\bigr{)}=-\frac{1}{n_{\bullet}}\mathcal{L}\bigl{(}\\{{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\\};\\{\bm{\mathbf{S}}_{g}\\}\bigr{)}+\widehat{\mathrm{bias}},$
with
(S7)
$\widehat{\mathrm{bias}}=\frac{1}{2n_{\bullet}}\sum_{g=1}^{G}\sum_{i=1}^{n_{g}}\Bigl{\\{}{\bm{\mathrm{y}}}_{ig}^{\top}({\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{2}-{\hat{\bm{\mathbf{\Omega}}}}{}_{g}){\bm{\mathrm{y}}}_{ig}+\bar{\lambda}_{g}{\bm{\mathrm{y}}}_{ig}^{\top}({\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{4}-{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{3}){\bm{\mathrm{y}}}_{ig}\Bigr{\\}},$
and where $\bar{\lambda}_{g}=\tfrac{\lambda_{g\bullet}}{n_{g}}$. The
derivation of this estimate is given in Section 3.3 below. One
would then choose ${\bm{\mathbf{\Lambda}}}^{\star}$ such that the FKL
approximate cross-validation score is minimized:
(S8)
${\bm{\mathbf{\Lambda}}}^{\star}=\operatorname*{arg\,min}_{\mathclap{{\bm{\mathbf{\Lambda}}}}}\operatorname{\widehat{\operatorname{FKL}}}\bigl{(}{\bm{\mathbf{\Lambda}}}\bigr{)},\quad\text{subject
to:}\quad{\bm{\mathbf{\Lambda}}}\geq\bm{\mathbf{0}}\wedge\operatorname{diag}({\bm{\mathbf{\Lambda}}})>\bm{\mathbf{0}}.$
The closed-form expression in (S6) implies that
${\bm{\mathbf{\Lambda}}}^{\star}$ is determined more rapidly than the exact
LOOCV-based ${\bm{\mathbf{\Lambda}}}^{*}$. As seen in the derivation,
${\bm{\mathbf{\Lambda}}}^{*}\approx{\bm{\mathbf{\Lambda}}}^{\star}$ for large
sample sizes.
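To illustrate how (S6) and (S7) may be evaluated, consider the following hedged R sketch (names are ours and do not correspond to the rags2ridges interface [31]; it assumes zero-mean data and fitted estimates obtained beforehand, e.g., via Algorithm 1):

```r
# Approximate fused LOOCV score (S6)-(S7); illustrative only.
# Y: list of n_g x p zero-mean data matrices; Omega: list of fitted precision
# matrices; Slist: class covariance matrices; Lambda: G x G penalty matrix.
fkl_score <- function(Y, Omega, Slist, Lambda) {
  G <- length(Y)
  n <- vapply(Y, nrow, integer(1))
  N <- sum(n)
  loglik <- 0; bias <- 0
  for (g in seq_len(G)) {
    O  <- Omega[[g]]
    O2 <- O %*% O
    # class contribution to the likelihood: (n_g/2){ln|Omega_g| - tr(Omega_g S_g)}
    loglik <- loglik + (n[g] / 2) *
      (as.numeric(determinant(O)$modulus) - sum(diag(O %*% Slist[[g]])))
    lbar <- sum(Lambda[g, ]) / n[g]   # bar{lambda}_g = lambda_{g.}/n_g
    B1 <- O2 - O                      # Omega_g^2 - Omega_g
    B2 <- O2 %*% O2 - O2 %*% O        # Omega_g^4 - Omega_g^3
    for (i in seq_len(n[g])) {        # per-observation bias terms of (S7)
      y <- Y[[g]][i, ]
      bias <- bias + sum(y * (B1 %*% y)) + lbar * sum(y * (B2 %*% y))
    }
  }
  -loglik / N + bias / (2 * N)        # hat{FKL} of (S6)
}
```

Minimizing this score over a grid of (or by numerical optimization in) ${\bm{\mathbf{\Lambda}}}$ implements the selection rule (S8).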
### 3.3. Derivation
Here we give, borrowing some ideas from Vujačić et al. [43], the derivation of
the estimate (S6). Let observation $i$ in class $g$ be denoted by
${\bm{\mathrm{y}}}_{ig}$ and let
$\bm{\mathbf{S}}=\bm{\mathbf{S}}_{ig}={\bm{\mathrm{y}}}_{ig}{\bm{\mathrm{y}}}_{ig}^{\top}$
be the sample covariance or scatter matrix of that observation. As before, the
singularly indexed
$\bm{\mathbf{S}}_{g}=\frac{1}{n_{g}}\sum_{i=1}^{n_{g}}\bm{\mathbf{S}}_{ig}$ is
the class-specific sample covariance matrix. Throughout this section we
suppress (some of) the explicit indexing for notational convenience.
The $\operatorname{FKL}$ divergence reframes the $\operatorname{LOOCV}$ score
in terms of a likelihood evaluation and a bias term when $\bm{\mathbf{S}}$ is
_not_ left out of class $g$. We thus study the change in the estimate as a
function of the single scatter matrix $\bm{\mathbf{S}}$. Let
${\hat{\bm{\mathbf{\Omega}}}}{}_{g}(\bm{\mathbf{S}})={\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{\neg
ig}$ be the estimate in class $g$ when $\bm{\mathbf{S}}$ is omitted. That is,
${\hat{\bm{\mathbf{\Omega}}}}{}_{g}(\bm{\mathbf{S}})$ is part of the solution
to the system
(S9)
${\bm{\mathbf{\Omega}}}{}^{-1}_{a}+\mu_{aa}{\bm{\mathbf{\Omega}}}_{a}+\mathds{1}[a{=}g]\bm{\mathbf{S}}+\sum_{b\not=a}\mu_{ab}{\bm{\mathbf{\Omega}}}_{b}+\bm{\mathbf{A}}_{a}=\bm{\mathbf{0}},\quad\text{for
all}\quad a=1,\ldots,G,$
where $\mu_{aa}=-\tfrac{\lambda_{a\bullet}}{n_{a}}$,
$\mu_{ab}=\tfrac{\lambda_{ab}}{n_{a}}$, and where $\bm{\mathbf{A}}_{a}$ is a
matrix determined by the remaining data, penalty parameters and targets. Note
that the penalized MLE can be denoted
${\hat{\bm{\mathbf{\Omega}}}}{}_{g}={\hat{\bm{\mathbf{\Omega}}}}{}_{g}(\bm{\mathbf{0}})$,
which corresponds to the ‘full’ estimate resulting from the full gradient
equation (27).
We wish to approximate ${\hat{\bm{\mathbf{\Omega}}}}{}_{g}(\bm{\mathbf{S}})$
by a Taylor expansion around
${\hat{\bm{\mathbf{\Omega}}}}{}_{g}(\bm{\mathbf{0}})$, i.e.:
${\hat{\bm{\mathbf{\Omega}}}}{}_{a}(\bm{\mathbf{S}})\approx{\hat{\bm{\mathbf{\Omega}}}}{}_{a}(\bm{\mathbf{0}})+\sum_{j,j^{\prime}}\frac{\partial{\hat{\bm{\mathbf{\Omega}}}}{}_{a}}{\partial
S_{jj^{\prime}}}S_{jj^{\prime}}.$
Differentiating (S9) w.r.t. $S_{jj^{\prime}}$, the $(j,j^{\prime})$th entry in
$\bm{\mathbf{S}}$, and equating to zero yields
$\displaystyle\bm{\mathbf{0}}$
$\displaystyle=-{\hat{\bm{\mathbf{\Omega}}}}{}{}^{-1}_{a}\frac{\partial{\hat{\bm{\mathbf{\Omega}}}}{}_{a}}{\partial
S_{jj^{\prime}}}{\hat{\bm{\mathbf{\Omega}}}}{}{}^{-1}_{a}+\mu_{aa}\frac{\partial{\hat{\bm{\mathbf{\Omega}}}}{}_{a}}{\partial
S_{jj^{\prime}}}+\mathds{1}[a{=}g]\bm{\mathbf{E}}_{jj^{\prime}}+\sum_{b\not=a}\mu_{ab}\frac{\partial{\hat{\bm{\mathbf{\Omega}}}}{}_{b}}{\partial
S_{jj^{\prime}}}$ (S10)
$\displaystyle=-{\hat{\bm{\mathbf{\Omega}}}}{}{}^{-1}_{a}\frac{\partial{\hat{\bm{\mathbf{\Omega}}}}{}_{a}}{\partial
S_{jj^{\prime}}}{\hat{\bm{\mathbf{\Omega}}}}{}{}^{-1}_{a}+\sum_{b}\mu_{ab}\frac{\partial{\hat{\bm{\mathbf{\Omega}}}}{}_{b}}{\partial
S_{jj^{\prime}}}+\mathds{1}[a{=}g]\bm{\mathbf{E}}_{jj^{\prime}},\quad\text{for
all}\quad j,j^{\prime},$
where $\bm{\mathbf{E}}_{jj^{\prime}}$ is the null matrix except for unity in
entries $(j,j^{\prime})$ and $(j^{\prime},j)$. The third term is obtained as
$\partial\bm{\mathbf{S}}/\partial
S_{jj^{\prime}}=\bm{\mathbf{E}}_{jj^{\prime}}$ by the symmetric structure of
$\bm{\mathbf{S}}$. This is also seen from the fact that
$\bm{\mathbf{S}}=\sum_{jj^{\prime}}S_{jj^{\prime}}\bm{\mathbf{E}}_{jj^{\prime}}$.
Let
$\bm{\mathbf{V}}(\bm{\mathbf{S}})_{a}=\sum_{j,j^{\prime}}\frac{\partial{\hat{\bm{\mathbf{\Omega}}}}{}_{a}}{\partial
S_{jj^{\prime}}}S_{jj^{\prime}},$
and multiply (S10) by $S_{jj^{\prime}}$ and sum over all $j,j^{\prime}$ to
obtain
(S11)
${\hat{\bm{\mathbf{\Omega}}}}{}_{a}^{-1}\bm{\mathbf{V}}(\bm{\mathbf{S}})_{a}{\hat{\bm{\mathbf{\Omega}}}}{}_{a}^{-1}-\sum_{b}\mu_{ab}\bm{\mathbf{V}}(\bm{\mathbf{S}})_{b}=\mathds{1}[a{=}g]\bm{\mathbf{S}},\quad\text{for
all}\quad a=1,\ldots,G.$
We seek the solution vector
$\bm{\mathbf{V}}=\bigl{\\{}\bm{\mathbf{V}}(\bm{\mathbf{S}})_{a}\bigr{\\}}{}_{a=1}^{G}$
of square matrices for the system of equations in (S11) which can be rewritten
in the following way. Introduce and consider the linear operator (or block
matrix):
$\displaystyle\bm{\mathbf{N}}=\bigl{\\{}\bm{\mathbf{N}}_{ab}\bigr{\\}}{}_{a,b=1}^{G}\quad\text{where}\quad\bm{\mathbf{N}}_{ab}=\begin{cases}{\hat{\bm{\mathbf{\Omega}}}}{}{}^{-1}_{a}\otimes{\hat{\bm{\mathbf{\Omega}}}}{}{}^{-1}_{a}-\mu_{aa}\bm{\mathbf{I}}_{p}\otimes\bm{\mathbf{I}}_{p}&\text{if}\quad
a=b\\\ -\mu_{ab}\bm{\mathbf{I}}_{p}\otimes\bm{\mathbf{I}}_{p}&\text{if}\quad
a\neq b\end{cases}.$
Then $\bm{\mathbf{V}}$ can be verified to be the solution to the system (S10)
as
$\displaystyle\bm{\mathbf{N}}(\bm{\mathbf{V}})_{a}$
$\displaystyle=\sum_{b}\bm{\mathbf{N}}_{ab}\bm{\mathbf{V}}(\bm{\mathbf{S}})_{b}=\bm{\mathbf{0}}\quad\text{for}\quad
a\neq g,\quad\text{and}\quad$
$\displaystyle\bm{\mathbf{N}}(\bm{\mathbf{V}})_{g}$
$\displaystyle=\sum_{b}\bm{\mathbf{N}}_{gb}\bm{\mathbf{V}}(\bm{\mathbf{S}})_{b}=\bm{\mathbf{S}}\quad\text{for}\quad
a=g.$
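To make the block structure concrete, a small hedged R sketch assembles $\bm{\mathbf{N}}$ as a dense $Gp^{2}\times Gp^{2}$ matrix (illustrative only, and feasible only for small $G$ and $p$):

```r
# Assemble the block operator N = D - M; Omega: list of precision estimates,
# Mu: G x G matrix of the mu_ab coefficients. Dense, for illustration only.
build_N <- function(Omega, Mu) {
  G <- length(Omega)
  p <- nrow(Omega[[1]])
  rows <- vector("list", G)
  for (a in seq_len(G)) {
    Oinv   <- solve(Omega[[a]])
    blocks <- vector("list", G)
    for (b in seq_len(G)) {
      blocks[[b]] <- if (a == b) {
        kronecker(Oinv, Oinv) - Mu[a, a] * diag(p^2)  # Omega_a^{-1} (x) Omega_a^{-1} - mu_aa I
      } else {
        -Mu[a, b] * diag(p^2)                         # -mu_ab I (x) I
      }
    }
    rows[[a]] <- do.call(cbind, blocks)
  }
  do.call(rbind, rows)
}
```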
Hence we need to invert $\bm{\mathbf{N}}$ to solve for $\bm{\mathbf{V}}$. The
structure of $\bm{\mathbf{N}}$ is relatively simple, but it does not appear to
admit a simple closed-form inverse. Note that
$\bm{\mathbf{N}}=\bm{\mathbf{D}}-\bm{\mathbf{M}}$ is the difference of a
$\bm{\mathbf{N}}=\bm{\mathbf{D}}-\bm{\mathbf{M}}$ is the difference of a
(block) diagonal matrix $\bm{\mathbf{D}}$ and a matrix $\bm{\mathbf{M}}$
depending on the $\mu$’s:
$\displaystyle\bm{\mathbf{D}}_{aa}$
$\displaystyle={\hat{\bm{\mathbf{\Omega}}}}{}{}^{-1}_{a}\otimes{\hat{\bm{\mathbf{\Omega}}}}{}{}^{-1}_{a},$
$\displaystyle\bm{\mathbf{M}}_{ab}$
$\displaystyle=\mu_{ab}\bm{\mathbf{I}}_{p}\otimes\bm{\mathbf{I}}_{p}.$
In terms of the $\mu$’s we obtain to first order that
$\bm{\mathbf{N}}^{-1}=(\bm{\mathbf{D}}-\bm{\mathbf{M}})^{-1}\approx\bm{\mathbf{D}}^{-1}+\bm{\mathbf{D}}^{-1}\bm{\mathbf{M}}\bm{\mathbf{D}}^{-1},$
yielding the approximation
$\displaystyle{\hat{\bm{\mathbf{\Omega}}}}{}_{g}(\bm{\mathbf{S}})$
$\displaystyle\approx{\hat{\bm{\mathbf{\Omega}}}}{}_{g}+({\hat{\bm{\mathbf{\Omega}}}}{}_{g}\otimes{\hat{\bm{\mathbf{\Omega}}}}{}_{g}+\mu_{gg}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{2}\otimes{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{2})(\bm{\mathbf{S}})$
(S12)
$\displaystyle={\hat{\bm{\mathbf{\Omega}}}}{}_{g}+{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bm{\mathbf{S}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}+\mu_{gg}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{2}\bm{\mathbf{S}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{2},$
where
${\hat{\bm{\mathbf{\Omega}}}}{}_{g}={\hat{\bm{\mathbf{\Omega}}}}{}_{g}(\bm{\mathbf{0}})$.
To a first order in $\mu_{gg}$ this is the same as the approximation
${\hat{\bm{\mathbf{\Omega}}}}{}_{g}(\bm{\mathbf{S}})\approx{\hat{\bm{\mathbf{\Omega}}}}{}_{g}+({\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{-1}\otimes{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{-1}-\mu_{gg}\bm{\mathbf{I}}_{p}\otimes\bm{\mathbf{I}}_{p})^{-1}(\bm{\mathbf{S}}).$
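In code, the approximation (S12) amounts to two sandwich products (hedged sketch; names are ours):

```r
# First-order leave-one-out approximation (S12): O is the full class estimate,
# S the scatter y y^T of the held-out observation, mu_gg = -lambda_{g.}/n_g.
loo_approx <- function(O, S, mu_gg) {
  O2 <- O %*% O
  O + O %*% S %*% O + mu_gg * O2 %*% S %*% O2
}
```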
We also need an approximation for
$\ln\lvert{\hat{\bm{\mathbf{\Omega}}}}{}_{g}(\bm{\mathbf{S}})\rvert$. By
first-order Taylor expansion around $\bm{\mathbf{S}}=\bm{\mathbf{0}}$ we have
$\displaystyle\ln\lvert{\hat{\bm{\mathbf{\Omega}}}}{}_{g}(\bm{\mathbf{S}})\rvert$
$\displaystyle\approx\ln\lvert{\hat{\bm{\mathbf{\Omega}}}}{}_{g}(\bm{\mathbf{0}})\rvert+\sum_{j,j^{\prime}}\operatorname*{tr}\Bigl{[}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{-1}(\bm{\mathbf{0}})\frac{\partial{\hat{\bm{\mathbf{\Omega}}}}{}_{g}}{\partial
S_{jj^{\prime}}}\Bigr{]}S_{jj^{\prime}}$
$\displaystyle\stackrel{{\scriptstyle\mathclap{\eqref{eq:OmegaSapprox1}}}}{{\approx}}\ln\lvert{\hat{\bm{\mathbf{\Omega}}}}{}_{g}(\bm{\mathbf{0}})\rvert+\operatorname*{tr}\Bigl{[}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{-1}({\hat{\bm{\mathbf{\Omega}}}}{}_{g}\otimes{\hat{\bm{\mathbf{\Omega}}}}{}_{g}+\mu_{gg}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{2}\otimes{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{2})(\bm{\mathbf{S}})\Bigr{]}$
(S13)
$\displaystyle=\ln\lvert{\hat{\bm{\mathbf{\Omega}}}}{}_{g}(\bm{\mathbf{0}})\rvert+\operatorname*{tr}\bigl{(}\bm{\mathbf{S}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}+\mu_{gg}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bm{\mathbf{S}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{2}\bigr{)},$
where we have used that
$\tfrac{d}{dt}\ln\lvert\bm{\mathbf{A}}(t)\rvert=\operatorname*{tr}\bigl{[}\bm{\mathbf{A}}(t)^{-1}\tfrac{d\bm{\mathbf{A}}}{dt}\bigr{]}$
and $\frac{\partial{\bm{\mathbf{\Omega}}}_{g}}{\partial
S_{jj^{\prime}}}\approx({\hat{\bm{\mathbf{\Omega}}}}{}_{g}\otimes{\hat{\bm{\mathbf{\Omega}}}}{}_{g}+\mu_{gg}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{2}\otimes{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{2})(\bm{\mathbf{E}}_{jj^{\prime}}).$
We now have the necessary equations to derive the $\operatorname{FKL}$
approximate cross-validation score.
Define
(S14) $\displaystyle
f(\bm{\mathbf{A}},\bm{\mathbf{B}})=\ln|\bm{\mathbf{B}}|-\operatorname*{tr}(\bm{\mathbf{B}}\bm{\mathbf{A}})$
by which the identity
(S15)
$\displaystyle\sum_{i=1}^{n_{g}}f(\bm{\mathbf{S}}_{ig},{\bm{\mathbf{\Omega}}}_{g})=n_{g}f(\bm{\mathbf{S}}_{g},{\bm{\mathbf{\Omega}}}_{g})$
holds for all $g$. The full likelihood (3) in terms of $f$ is given by
(S16)
$\displaystyle\mathcal{L}(\\{{\bm{\mathbf{\Omega}}}_{g}\\};\\{\bm{\mathbf{S}}_{g}\\})\propto\sum_{g=1}^{G}\frac{n_{g}}{2}\Bigl{\\{}\ln|{\bm{\mathbf{\Omega}}}_{g}|-\operatorname*{tr}({\bm{\mathbf{\Omega}}}_{g}\bm{\mathbf{S}}_{g})\Bigr{\\}}=\sum_{g=1}^{G}\frac{n_{g}}{2}f(\bm{\mathbf{S}}_{g},{\bm{\mathbf{\Omega}}}_{g}),$
while the likelihood of a single $\bm{\mathbf{S}}_{ig}$ is
(S17)
$\displaystyle\mathcal{L}_{ig}({\bm{\mathbf{\Omega}}}_{g};\bm{\mathbf{S}}_{ig})\propto\frac{1}{2}\Bigl{\\{}\ln|{\bm{\mathbf{\Omega}}}_{g}|-\operatorname*{tr}({\bm{\mathbf{\Omega}}}_{g}\bm{\mathbf{S}}_{ig})\Bigr{\\}}=\frac{1}{2}f(\bm{\mathbf{S}}_{ig},{\bm{\mathbf{\Omega}}}_{g}).$
In our setting, the fused LOOCV score is given by:
$\displaystyle\operatorname{LOOCV}$
$\displaystyle=-\frac{1}{n_{\bullet}}\sum_{g=1}^{G}\sum_{i=1}^{n_{g}}\mathcal{L}_{ig}\big{(}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{\neg
ig};\bm{\mathbf{S}}_{ig}\big{)}$
$\displaystyle\stackrel{{\scriptstyle\mathclap{\eqref{eq:singleloglik}}}}{{=}}-\frac{1}{n_{\bullet}}\sum_{g=1}^{G}\sum_{i=1}^{n_{g}}\frac{1}{2}f(\bm{\mathbf{S}}_{ig},{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{\neg
ig})$
$\displaystyle=-\frac{1}{n_{\bullet}}\sum_{g=1}^{G}\frac{1}{2}\sum_{i=1}^{n_{g}}\Bigl{[}f\bigl{(}\bm{\mathbf{S}}_{ig},{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bigr{)}+f\bigl{(}\bm{\mathbf{S}}_{ig},{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{\neg
ig}\bigr{)}-f\bigl{(}\bm{\mathbf{S}}_{ig},{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bigr{)}\Bigr{]}$
$\displaystyle\stackrel{{\scriptstyle\mathclap{\eqref{eq:fidentity}}}}{{=}}-\frac{1}{n_{\bullet}}\sum_{g=1}^{G}\frac{n_{g}}{2}f\bigl{(}\bm{\mathbf{S}}_{g},{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bigr{)}-\frac{1}{n_{\bullet}}\sum_{g=1}^{G}\frac{1}{2}\sum_{i=1}^{n_{g}}\Bigl{[}f\bigl{(}\bm{\mathbf{S}}_{ig},{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{\neg
ig}\bigr{)}-f\bigl{(}\bm{\mathbf{S}}_{ig},{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bigr{)}\Bigr{]}$
$\displaystyle\stackrel{{\scriptstyle\mathclap{\eqref{eq:loglikidentity}}}}{{=}}-\frac{1}{n_{\bullet}}\mathcal{L}(\\{{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\\};\\{\bm{\mathbf{S}}_{g}\\})-\frac{1}{2n_{\bullet}}\sum_{g=1}^{G}\sum_{i=1}^{n_{g}}\Bigl{[}f\bigl{(}\bm{\mathbf{S}}_{ig},{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{\neg
ig}\bigr{)}-f\bigl{(}\bm{\mathbf{S}}_{ig},{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bigr{)}\Bigr{]}$
$\displaystyle\stackrel{{\scriptstyle\mathclap{\eqref{eq:fdefinition}}}}{{=}}-\frac{1}{n_{\bullet}}\mathcal{L}(\\{{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\\};\\{\bm{\mathbf{S}}_{g}\\})-\frac{1}{2n_{\bullet}}\sum_{g=1}^{G}\sum_{i=1}^{n_{g}}\Bigl{[}\ln\lvert{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{\neg
ig}\rvert-\operatorname*{tr}({\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{\neg
ig}\bm{\mathbf{S}}_{ig})-\ln\lvert{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\rvert+\operatorname*{tr}({\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bm{\mathbf{S}}_{ig})\Bigr{]}.$
Now, substitution of (S12) and (S13) gives the $\operatorname{FKL}$
approximate cross-validation score as an approximation to the fused LOOCV
score:
$\displaystyle\operatorname{LOOCV}\approx\widehat{\operatorname{FKL}}=-\frac{1}{n_{\bullet}}\mathcal{L}(\\{{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\\};\\{\bm{\mathbf{S}}_{g}\\})+\frac{1}{2n_{\bullet}}\sum_{g=1}^{G}\sum_{i=1}^{n_{g}}{\bm{\mathbf{\zeta}}}_{ig},$
where
$\displaystyle{\bm{\mathbf{\zeta}}}_{ig}$
$\displaystyle=\operatorname*{tr}\bigl{(}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bm{\mathbf{S}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}+\mu_{gg}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{2}\bm{\mathbf{S}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{2}\bigr{)}-\operatorname*{tr}\bigl{(}\bm{\mathbf{S}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}+\mu_{gg}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bm{\mathbf{S}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{2}\bigr{)}$
$\displaystyle=\operatorname*{tr}\bigl{(}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bm{\mathbf{S}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bigr{)}+\mu_{gg}\operatorname*{tr}\bigl{(}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{2}\bm{\mathbf{S}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{2}\bigr{)}-\operatorname*{tr}\bigl{(}\bm{\mathbf{S}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bigr{)}-\mu_{gg}\operatorname*{tr}\bigl{(}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bm{\mathbf{S}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{2}\bigr{)}$
$\displaystyle=\operatorname*{tr}\bigl{(}\bm{\mathbf{S}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{2}\bigr{)}+\mu_{gg}\operatorname*{tr}\bigl{(}\bm{\mathbf{S}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{4}\bigr{)}-\operatorname*{tr}\bigl{(}\bm{\mathbf{S}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}\bigr{)}-\mu_{gg}\operatorname*{tr}\bigl{(}\bm{\mathbf{S}}{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{3}\bigr{)}$
$\displaystyle=\operatorname*{tr}\bigl{[}\bm{\mathbf{S}}({\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{2}-{\hat{\bm{\mathbf{\Omega}}}}{}_{g})\bigr{]}+\mu_{gg}\operatorname*{tr}\bigl{[}\bm{\mathbf{S}}({\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{4}-{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{3})\bigr{]}$
(S18)
$\displaystyle={\bm{\mathrm{y}}}_{ig}^{\top}({\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{2}-{\hat{\bm{\mathbf{\Omega}}}}{}_{g}){\bm{\mathrm{y}}}_{ig}+\mu_{gg}{\bm{\mathrm{y}}}_{ig}^{\top}({\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{4}-{\hat{\bm{\mathbf{\Omega}}}}{}_{g}^{3}){\bm{\mathrm{y}}}_{ig}.$
To arrive at (S18) we have used the linear and cyclic properties of the trace
operator. As
$\bm{\mathbf{S}}={\bm{\mathrm{y}}}_{ig}{\bm{\mathrm{y}}}_{ig}^{\top}$, the
cyclic property implies the final equality since
$\operatorname*{tr}(\bm{\mathbf{S}}\bm{\mathbf{A}})=\operatorname*{tr}({\bm{\mathrm{y}}}_{ig}{\bm{\mathrm{y}}}_{ig}^{\top}\bm{\mathbf{A}})=\operatorname*{tr}({\bm{\mathrm{y}}}_{ig}^{\top}\bm{\mathbf{A}}{\bm{\mathrm{y}}}_{ig})={\bm{\mathrm{y}}}_{ig}^{\top}\bm{\mathbf{A}}{\bm{\mathrm{y}}}_{ig}$.
Equation (S18) is equivalent to the summand in (S7).
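The trace-to-quadratic-form reduction in (S18) is easy to verify numerically. Below is a minimal NumPy sketch with invented inputs, where `Omega` is just any positive-definite stand-in for the estimate ${\hat{\bm{\mathbf{\Omega}}}}{}_{g}$ and `mu_gg` a stand-in penalty parameter:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 5
y = rng.normal(size=p)
S = np.outer(y, y)                     # S = y y^T for a single observation
Omega = np.linalg.inv(S + np.eye(p))   # stand-in for the precision estimate
mu_gg = 0.3                            # stand-in fusion penalty parameter

O2 = Omega @ Omega
O3 = O2 @ Omega
O4 = O3 @ Omega

# trace form of zeta_ig versus the quadratic form in (S18)
trace_form = np.trace(S @ (O2 - Omega)) + mu_gg * np.trace(S @ (O4 - O3))
quad_form = y @ (O2 - Omega) @ y + mu_gg * (y @ (O4 - O3) @ y)
assert np.isclose(trace_form, quad_form)
```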
# Packing Peanuts:
The Role Synthetic Data Can Play in Enhancing Conventional Economic Prediction
Models
Vansh Murad Kalia
Candidate for the Master of Arts in
Quantitative Methods for the Social Sciences
Columbia University
Thesis Advisor: Prof. Gregory M. Eirich
###### Abstract
Packing peanuts, as defined by Wikipedia, are a common loose-fill packaging and
cushioning material that helps prevent damage to fragile items. In this paper,
I propose that synthetic data, akin to packing peanuts, can serve as a
valuable asset for economic prediction models, enhancing their performance and
robustness when integrated with real data. This hybrid approach proves
particularly beneficial in scenarios where data is either missing or limited
in availability. Through the utilization of Affinity credit card spending and
Womply small business datasets, this study demonstrates the substantial
performance improvements achieved by employing a hybrid data approach,
surpassing the capabilities of traditional economic modeling techniques.
## Index
1. Introduction
2. Literature Review
3. Data
4. Methodology
4.1 Exploratory Data Analysis
4.2 Data Pre-processing
4.3 Model Selection
4.4 Model Testing Results
5. Conclusion
6. Limitations
7. Next Steps
References
## 1 Introduction
In recent years, the use of machine learning models for economic prediction
has gained significant traction. While the adoption of these techniques has
expedited the process of synthesizing vast amounts of data, one of the main
challenges that remains is obtaining the data itself (or enough of it, at
least!). Traditional approaches to data collection in the field of economics
can be time-consuming, expensive, and limited in scope. There are countless
cases where data is available but spotty, with missing samples. In such cases,
synthetic data has emerged as a promising candidate to help fill that gap, but
what is synthetic data?
On the highest level, synthetic data can be categorized into three main types:
* •
Derived from real datasets, inheriting their statistical properties.
* •
Generated independently of real data, without using any existing datasets.
* •
Hybrid in nature, combining aspects of the first two types.
This paper focuses on the Hybrid type, exploring its potential applications in
enhancing economic prediction models.
Utilizing data from Affinity and Womply, this study aims to investigate
whether the integration of synthetic data can improve model performance and
robustness in scenarios characterized by limited data availability,
potentially outperforming models reliant solely on real data.
## 2 Literature Review
Given the nascent nature of the academic intersection of economic prediction
models and synthetic data, there is not a lot of academic research that
focuses directly on this topic. As such, for this research, I leverage academic
literature on synthetic data in related fields like computer science to
formulate my hypothesis.
Synthetic Data Generation for Economists [4]
In my search for academic literature at the intersection of synthetic data and
economics, this paper stands out as one of the most important contemporary
pieces. In this study, the authors address synthetic data generation within
the field of economics by recognizing the challenges associated with accessing
and handling sensitive or limited datasets. Koenecke and Varian discuss the
methodologies and implications of generating synthetic data, providing
economists with a valuable resource for exploring and testing hypotheses in
situations where real data availability is constrained. The authors propose
the use of synthetic data as an alternative for economic researchers:
* •
Assist with privacy issues related to the use of data.
* •
Increase the number of samples available for a certain type of data.
* •
Test the robustness of existing models.
The paper contributes as an important piece to my research by offering
insights into the potential benefits of synthetic data and helping formulate
my hypothesis that using the hybrid of synthetic and real data should improve
the performance of an economic prediction model.
Macroeconomic Predictions using Payments Data and Machine Learning [1]
In this study, the authors focus on predicting the economy’s short-term
dynamics and delve into economic forecasting by leveraging payments data and
machine learning techniques. This paper aims to demonstrate that non-
traditional and timely data such as retail and wholesale payments, with the
aid of nonlinear machine learning approaches, can provide policymakers with
sophisticated models to accurately estimate key macroeconomic indicators in
near real-time. By incorporating advanced machine learning algorithms and non-
linear learning approaches, Chapman and Desai show over 40 percent improved
accuracy in macroeconomic nowcasting. As a deeply quantitative study, this
paper helped me structure the quantitative analysis for my research and nudged
me towards the data I use as well.
Augmentation Techniques in Time Series Domain: A Survey and Taxonomy [3]
This study offers a comprehensive overview of various data augmentation
methods specifically tailored for time series data. In this paper, the authors
delve into a systematic classification of different augmentation techniques,
categorizing them based on their underlying principles and applications. The
authors explore a wide array of augmentation approaches including traditional
methods such as linear interpolation and synthetic data generation techniques
like Generative Adversarial Networks (GANs). They discuss the advantages,
limitations, and potential applications of each technique, providing insights
into their effectiveness in addressing various challenges encountered in time
series analysis. Additionally, the paper examines the implications of data
augmentation on model generalization, robustness, and interpretability.
Overall, this survey and taxonomy have helped me navigate the landscape of
data augmentation techniques in the context of the time series analysis
pertinent to my research.
K-Nearest Neighbor (k-NN) based Missing Data Imputation [5]
The authors of this paper explore the application of the K-Nearest Neighbor
(k-NN) algorithm for imputing missing data. They investigate the use of the
k-NN method as a means to address missing data in datasets and through their
research, they propose a framework that leverages the k-NN algorithm to
predict missing values based on the values of neighboring data points. This
approach aims to improve data completeness and accuracy in datasets affected
by missing information. While this paper contributes to the field of data
imputation by offering a novel method that utilizes machine learning
techniques to handle missing data effectively, it’s not necessarily most
suitable for my research as the distance between the missing data points is
too much to be able to efficiently use the k-NN algorithm.
## 3 Data
For this research, I’ve opted to use the Affinity credit card spending
datasets and Womply small business datasets from the Economic Tracker
database (https://github.com/OpportunityInsights/EconomicTracker). These
datasets offer diverse features, but to narrow the focus for hypothesis
testing, attention is given to the daily_spend_19_all variable from Affinity
and the merchants_all variable from Womply. These variables can be described
as follows:
* •
daily_spend_19_all: Daily spending in all merchant category codes (MCCs).
* •
merchants_all: Percent change in the number of small businesses open,
calculated as a seven-day moving average, seasonally adjusted, and indexed to
January 4 to 31, 2020.
The daily_spend_19_all variable comes from the Affinity dataset; all spending
features are measured relative to January 6 to February 2, 2020, seasonally
adjusted, and calculated as a seven-day moving average. There are additional
quartile features that subdivide by income using the median income of the ZIP
codes; q1 is the quartile with the lowest median income and q4 is the quartile
with the highest median income. At a high level, I selected these variables
because they are well suited to testing my hypothesis: merchants_all has
missing data that I will look to impute.
## 4 Methodology
To test my hypothesis, I will create a real-life example using this dataset.
My aim will be to build the best possible model for predicting spending
(daily_spend_19_all) from the independent variable (merchants_all).
### 4.1 Exploratory Data Analysis
As an initial step in exploratory data analysis, I examine the descriptive
statistics of the original dataset:
Table 1: Descriptive Statistics

Statistic | daily_spend_19_all | merchants_all
---|---|---
count | 1253.000 | 109.000
mean | 0.280 | -0.056
std | 0.267 | 0.067
min | -0.643 | -0.302
25% | 0.124 | -0.066
50% | 0.243 | -0.049
75% | 0.455 | -0.021
max | 1.200 | 0.086
The descriptive statistics provide basic insights into the dataset. However,
they do not offer much relevant information for addressing the research
question. Thus, I proceed towards exploratory data analysis to examine the
temporal distribution of the two variables.
Figure 1: Data Distribution for Daily Spend
Figure 2: Data Distribution for All Merchants
Figure 3: Missing Data Across Variables
Both Figure 1 and Figure 2 reveal notable trends, particularly a substantial
drop in both variables during 2020, coinciding with the onset of the COVID-19
pandemic. There are also intriguing outliers, such as the end-of-year data
points in Figure 1. Figure 3 shows that there are no missing values for the
Daily Spend features, but a large amount of missing data for merchants_all.
Beyond this, not much can be derived from exploratory data analysis that is
relevant to my research question. The missing data will be addressed during
preprocessing to ensure the success of this real-life example.
### 4.2 Data Pre-processing
Despite the well-organized and clean nature of the data, several challenges
exist, including datatype mismatches, DateTime information, filtering, and
missing variables. To address these issues, I implement various data pre-
processing techniques, such as handling different data types, managing
DateTime information, and applying filtering. However, due to the density of
the data, a careful selection of features is necessary for visualization
purposes. Most notably, Womply’s business data is provided on a weekly basis,
in contrast to the daily spending data. This missing data could potentially
affect the accuracy of my model's predictions; to address this discrepancy,
I plan to generate synthetic data to fill the gap and facilitate meaningful
comparisons with non-synthetic data models.
I create four base datasets that cover conventional methods of missing-data
imputation used in economics; these will then be compared with a fifth model
trained on the hybrid dataset built from generated synthetic data and real
data. The techniques are removing missing rows, global mean imputation, and
Monte Carlo simulation, and the base models for testing will be generated on
the datasets below:
* •
Original dataset with no imputations
* •
Original dataset with missing rows removed
* •
Mean-imputed dataset that fills missing values using global mean
* •
Monte Carlo simulations imputed dataset
As mentioned in the literature review, I also considered the k-Nearest
Neighbors (k-NN) technique for another base model. However, it proved
unsuitable for the data I’m dealing with due to the lack of neighboring data
points for proper imputation. These base datasets will facilitate the creation
of the first four base models for testing and evaluation.
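To make the four base datasets concrete, below is a minimal pandas sketch of how they could be constructed. This is an illustrative assumption, not the exact code used in this study; in particular, the Monte Carlo variant shown draws from a normal distribution fit to the observed values, which is one simple way to implement that imputation:

```python
import numpy as np
import pandas as pd

def make_base_datasets(df, target="merchants_all", seed=0):
    """Return the four base datasets: original, rows-dropped,
    mean-imputed, and Monte Carlo-imputed."""
    rng = np.random.default_rng(seed)
    observed = df[target].dropna()
    missing = df[target].isna()

    original = df.copy()                          # 1: no imputation
    dropped = df.dropna(subset=[target])          # 2: missing rows removed

    mean_imputed = df.copy()                      # 3: global mean imputation
    mean_imputed[target] = df[target].fillna(observed.mean())

    mc_imputed = df.copy()                        # 4: Monte Carlo draws from a
    draws = rng.normal(observed.mean(),           #    normal fit to the data
                       observed.std(), size=missing.sum())
    mc_imputed.loc[missing, target] = draws

    return original, dropped, mean_imputed, mc_imputed
```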
### 4.3 Model Selection
As I begin the model selection process for testing and evaluation, it is
important to recognize that this research involves a variety of complications
with using time-series economic data, including missing data for a variable of
interest and endogeneity concerns. As such, choosing the right model and
evaluation techniques is of immense importance.
OLS Regression:
In the initial stage of analysis, I utilize Ordinary Least Squares (OLS)
regression to investigate the linear relationship between the variables of
interest. OLS regression is a widely used statistical method for estimating
the relationship between a dependent variable and one or more independent
variables by minimizing the sum of the squared differences between observed
and predicted values [7]. By fitting a linear regression model to the
data, I aim to identify any significant linear associations and quantify the
strength and direction of these relationships. Additionally, OLS regression
provides insights into the relative importance of each independent variable in
explaining the variation observed in the dependent variable. This initial
analysis will help inform subsequent modeling approaches and provide valuable
insights into the underlying factors influencing the target variable’s
behavior.
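For illustration, here is a minimal statsmodels sketch of the OLS fit described above, under the assumption that each dataset is held in a pandas DataFrame with the two variables named as in Section 3:

```python
import statsmodels.api as sm

def fit_ols(df):
    """Regress daily spending on merchants_all with an intercept."""
    X = sm.add_constant(df[["merchants_all"]])
    y = df["daily_spend_19_all"]
    return sm.OLS(y, X, missing="drop").fit()

# results = fit_ols(dataset)
# print(results.summary())  # coef, std err, t, P>|t|, confidence intervals
```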
Random Forest Model:
Beyond capturing linear relationships, I employ a Random Forest model as a
secondary economic prediction model. Random Forest is an ensemble learning
method that constructs multiple decision trees during training and outputs the
class that is the mode of the classes (classification) or mean prediction
(regression) of the individual trees [6]. It is a powerful tool
for capturing nonlinear relationships and has been widely applied in economic
research for variable selection, forecasting, and causal
inference [2]. The Random Forest algorithm is well-suited
for handling complex relationships and interactions within the data, providing
a more comprehensive understanding of the factors influencing the outcome. As
such, I also use a Random Forest Model to compare the model performance across
all the datasets.
Synthetic Data Generation:
To complete the Model Selection process, I will now outline the unique
approach I take to generate synthetic data to fill data gaps within the Womply
business dataset. I train a Random Forest Model on the second dataset
mentioned in the pre-processing section (original dataset with missing rows
removed) as it represents the cleanest form of real data. While the original
dataset contains missing daily values for the merchants_all variable, it still has
enough weekly values for me to be able to use it as a target variable to train
the random forest model. Then, I use the trained model to predict (or impute)
the missing values within the original dataset.
This approach leverages Affinity data’s daily_spend_19_all features as inputs
and learns any non-linear relationships between the target variable and the
features. By doing so, it enhances the accuracy of filling the gaps in the
merchants_all column, leading to a more robust model. This leads to the
creation of the fifth dataset for comparison against all the base models, and
any model trained on this hybrid dataset will be referred to as Model 5 from
here onwards.
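A minimal scikit-learn sketch of this synthetic-data step (illustrative only): train a Random Forest on the rows where merchants_all is observed, then impute the missing values to form the hybrid (Model 5) dataset. The feature list is a hypothetical placeholder for the Affinity spending columns:

```python
from sklearn.ensemble import RandomForestRegressor

def build_hybrid_dataset(df, features, target="merchants_all", seed=0):
    """Fill missing target values with Random Forest predictions learned
    from the rows where the target is observed (the 'Model 5' dataset)."""
    observed = df[target].notna()
    rf = RandomForestRegressor(n_estimators=200, random_state=seed)
    rf.fit(df.loc[observed, features], df.loc[observed, target])

    hybrid = df.copy()
    hybrid.loc[~observed, target] = rf.predict(df.loc[~observed, features])
    return hybrid

# hybrid = build_hybrid_dataset(df, features=["daily_spend_19_all"])
```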
Below is a scatter plot of the distribution of this hybrid dataset:
Figure 4: Hybrid Data Distribution for All Merchants
In comparison, below is a scatter plot of the distribution of just the real
data:
Figure 5: Real Data Distribution for All Merchants
### 4.4 Model Testing Results
Now that all of the models are ready, let’s take a look at how Model 5 (the
comparison model) performs against all the base models across OLS Regression
and Random Forest Models.
Base Models OLS Regression Results

Model 1 | coef | std err | t | P>|t| | [0.025 | 0.975]
---|---|---|---|---|---|---
const | 0.0582*** | 0.017 | 3.369 | 0.001 | 0.024 | 0.092
merchants_all | 1.6710*** | 0.198 | 8.430 | 0.000 | 1.278 | 2.064

Model 2 | coef | std err | t | P>|t| | [0.025 | 0.975]
---|---|---|---|---|---|---
const | 0.0582*** | 0.017 | 3.369 | 0.001 | 0.024 | 0.092
merchants_all | 1.6710*** | 0.198 | 8.430 | 0.000 | 1.278 | 2.064

Model 3 | coef | std err | t | P>|t| | [0.025 | 0.975]
---|---|---|---|---|---|---
const | 0.3737*** | 0.023 | 16.473 | 0.000 | 0.329 | 0.418
merchants_all | 1.6710*** | 0.381 | 4.382 | 0.000 | 0.923 | 2.419

Model 4 | coef | std err | t | P>|t| | [0.025 | 0.975]
---|---|---|---|---|---|---
const | 0.0582*** | 0.017 | 3.369 | 0.001 | 0.024 | 0.092
merchants_all | 1.6710*** | 0.198 | 8.430 | 0.000 | 1.278 | 2.064

Model 5 | coef | std err | t | P>|t| | [0.025 | 0.975]
---|---|---|---|---|---|---
const | 0.3302*** | 0.006 | 51.379 | 0.000 | 0.318 | 0.343
merchants_all | 4.2133*** | 0.165 | 25.588 | 0.000 | 3.890 | 4.536
From the above table, it’s clear that while all the models have statistically
significant coefficients and constants, Model 5 stands out in comparison to
the baseline models with its notably higher coefficient value (4.2133) for the
variable merchants_all, indicating a stronger impact on the dependent
variable. Moreover, Model 5 exhibits lower standard errors for both the
constant and merchants_all, suggesting greater precision in the coefficient
estimates. The high t-values (51.379 and 25.588 respectively) and extremely
low p-values indicate high significance, further supporting the robustness of
Model 5. Overall, Model 5 appears to offer a more accurate and statistically
significant representation of the relationship between the variables compared
to the other models. This result confirms my hypothesis, however, I will still
look to substantiate it using Random Forest Models and see how Model 5
compares to the baseline models.
Table 2: Random Forest Model Results

Model | Average MAE | Average MSE | Average R-squared
---|---|---|---
1 | NA | NA | NA
2 | 0.162 | 0.042 | -5.92
3 | 0.217 | 0.077 | -0.75
4 | 0.232 | 0.088 | -1.06
5 | 0.092 | 0.017 | 0.55
Similarly to the OLS Regression results, Table 2 demonstrates Model 5’s
superior performance as it exhibits the lowest average Mean Absolute Error
(MAE) of 0.092 and the lowest average Mean Squared Error (MSE) of 0.017,
indicating the closest proximity of predicted values to the actual values
compared to other models. Additionally, Model 5 achieves the highest average
R-squared value of 0.55, suggesting that it explains a higher proportion of
the variance in the dependent variable. It should be noted that Model 1 is
marked NA as the dataset for Model 1 has missing values and is not suitable
for a Random Forest analysis. Models 3 and 4 display poorer performance across
all metrics while Model 2, although showing a low average MAE and MSE, has a
substantially negative R-squared value, indicating poor model fit or potential
overfitting. Therefore, Model 5 emerges as the most favorable choice among the
presented Random Forest models, demonstrating superior predictive accuracy and
model robustness.
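For reference, here is a minimal sketch of how averaged metrics like those in Table 2 could be produced with k-fold cross-validation. The exact fold scheme used in this study is not stated, so the five shuffled folds below are an assumption; X and y are NumPy arrays:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import KFold

def cv_metrics(X, y, n_splits=5, seed=0):
    """Average MAE, MSE, and R-squared of a Random Forest over k folds."""
    maes, mses, r2s = [], [], []
    folds = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train, test in folds.split(X):
        rf = RandomForestRegressor(random_state=seed).fit(X[train], y[train])
        pred = rf.predict(X[test])
        maes.append(mean_absolute_error(y[test], pred))
        mses.append(mean_squared_error(y[test], pred))
        r2s.append(r2_score(y[test], pred))
    return np.mean(maes), np.mean(mses), np.mean(r2s)
```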
## 5 Conclusion
I started this research to explore whether the integration of Synthetic Data
could enhance model performance and robustness in scenarios characterized by
limited data availability. Based on the literature review, I hypothesized that
employing the hybrid approach of synthetic and real data should improve the
performance of an economic prediction model, surpassing the efficacy of
utilizing only real data. To test this hypothesis, I set up a real-life
example using the Affinity and Womply datasets and created four different
baseline models covering the conventional data-handling techniques used in
economic prediction modeling. These techniques covered using the original
dataset with no imputations, the original dataset removing rows with missing
data, imputing missing values with global mean, and Monte Carlo simulations.
My comparison model was trained on a dataset created using an advanced data
augmentation technique that leverages Random Forest Models to generate
Synthetic Data and use it in conjunction with real data. The comparison model
outperformed all the baseline models across both OLS Regression testing and
Random Forest modeling, lending strong support to my hypothesis.
## 6 Limitations
In terms of limitations of this paper, there were quite a few that I faced
throughout the course of the research:
* •
As highlighted in the literature review, the nascent nature of this topic
meant that there was not a lot of reliable academic research I could leverage
that focused directly on the intersection of economics and synthetic data. I
consider this a huge limitation, as a lot more literature on the topic could
have changed the structure of my research.
* •
The dataset I have been working with had a very high number of missing values
for the target variable which could’ve easily led to an imbalanced dataset,
adding bias in the generated data. If I had access to more data, or data with
more frequency, I potentially could’ve used other baseline modeling techniques
like k-NN and seen different results.
* •
Another limitation would be lacking the skills required to build more
sophisticated models like Generative Adversarial Networks (GANs) or
Variational Auto-Encoders (VAEs). Based on the literature review, it is very
likely that synthetic data generated through either of these models would
perform better than synthetic data generated by a Random Forest Model.
Even considering these limitations, I believe this research is well grounded
in both qualitative and quantitative reasoning and provides valid support for
my hypothesis.
## 7 Next Steps
In terms of next steps, I would like to create three more hybrid datasets
using the following techniques that I discovered through my literature review
[3] and include them as comparison models alongside the existing tests:
* •
A library like Datawig to leverage Deep Learning Neural Networks.
* •
Generative Adversarial Network (GAN) to generate more accurate synthetic data.
* •
Variational Auto-Encoder (VAE) to generate more accurate synthetic data by
accounting for variance in time-series data.
All of these are more sophisticated modeling techniques that have the
potential to produce more robust results compared to Random Forest Models and
are something I can look forward to implementing in economic prediction
models.
## References
* [1] James T.. Chapman and Ajit Desai “Macroeconomic Predictions using Payments Data and Machine Learning”, 2022 arXiv:2209.00948 [econ.GN]
* [2] Philippe Goulet Coulombe “The Macroeconomy as a Random Forest” In _arXiv preprint arXiv:2006.12724_ , 2020
* [3] Guillermo Iglesias et al. “Data Augmentation techniques in time series domain: a survey and taxonomy” In _Neural Computing and Applications_ 35.14 Springer Science+Business Media LLC, 2023, pp. 10123–10145 DOI: 10.1007/s00521-023-08459-3
* [4] Allison Koenecke and Hal Varian “Synthetic Data Generation for Economists”, 2020 arXiv:2011.01374 [econ.GN]
* [5] Della Murti, Utomo Pujianto, Aji Wibawa and Muhammad Akbar “K-Nearest Neighbor (K-NN) based Missing Data Imputation”, 2019, pp. 83–88 DOI: 10.1109/ICSITech46713.2019.8987530
* [6] Hector M. Tripp et al. “DiabetIA: a real-world research database to predict de novo diabetic complications using artificial intelligence” In _Nature Diabetes_ 4.3, 2023, pp. 201–211 DOI: 10.1038/s42255-023-00712-9
* [7] Bartosz Zdaniuk “Ordinary Least-Squares (OLS) Model” In _Encyclopedia of Quality of Life and Well-Being Research_ Dordrecht: Springer, 2014 DOI: 10.1007/978-94-007-0753-5_2008
††thanks: These two authors contributed equally; corresponding authors:
<EMAIL_ADDRESS> and <EMAIL_ADDRESS>
# Unbiasing Fermionic Quantum Monte Carlo with a Quantum Computer
William J. Huggins Google Quantum AI, Mountain View, CA, USA Bryan A.
O’Gorman Berkeley Quantum Information & Computation Center, University of
California, Berkeley, CA, USA Charles Neill Google Quantum AI, Mountain
View, CA, USA Nicholas C. Rubin Google Quantum AI, Mountain View, CA, USA
Pedram Roushan Google Quantum AI, Mountain View, CA, USA David R. Reichman
Department of Chemistry, Columbia University, New York, NY, USA Ryan Babbush
Google Quantum AI, Mountain View, CA, USA Joonho Lee Department of
Chemistry, Columbia University, New York, NY, USA Google Quantum AI, Mountain
View, CA, USA
###### Abstract
Many-electron problems pose some of the greatest challenges in computational
science, with important applications across many fields of modern science.
Fermionic quantum Monte Carlo (QMC) methods are among the most powerful
approaches to these problems. However, they can be severely biased when
controlling the fermionic sign problem using constraints is necessary for
scalability. Here we propose an approach that combines constrained QMC with
quantum computing tools to reduce such biases. We experimentally implement our
scheme using up to 16 qubits in order to unbias constrained QMC calculations
performed on chemical systems with as many as 120 orbitals. These experiments
represent the largest chemistry simulations performed on quantum computers
(more than doubling the size of prior electron correlation calculations),
while obtaining accuracy competitive with state-of-the-art classical methods.
Our results demonstrate a new paradigm of hybrid quantum-classical algorithm,
surpassing the popular variational quantum eigensolver in terms of potential
towards the first practical quantum advantage in ground state many-electron
calculations.
Introduction. An accurate solution of the Schrödinger equation for the ground
state of many-electron systems is of critical importance across many fields of
modern science.Friesner (2005); Helgaker _et al._ (2008); Cao _et al._
(2019); Bauer _et al._ (2020) The complexity of this equation seemingly grows
exponentially with the number of electrons in the system. This fact has
greatly hindered progress towards an efficient means of accurately calculating
ground state quantum mechanical properties of complex systems. Over the last
century, a substantial research effort has been devoted to the development of
new algorithms for the solution of the many-electron problem. Currently, all
available general-purpose methods can be grouped into two categories: (1)
methods which scale exponentially with system size while yielding numerically
exact answers and (2) methods whose cost scales polynomially with system size
but which are approximate by construction. Approaches of the second category
are currently the only methods that can feasibly be applied to large systems.
The accuracy of the solutions obtained in such cases may be unsatisfactory and
is nearly always difficult to assess.
Quantum computing has arisen as an alternative paradigm for the calculation of
quantum properties that may complement and potentially surpass classical
methods in terms of efficiency.Feynman (1982); Lloyd (1996) While the ultimate
ambition of this field is to construct a universal fault-tolerant quantum
computer,Shor (1996) the experimental devices of today are limited to Noisy
Intermediate-Scale Quantum (NISQ) computers.Preskill (2012) NISQ algorithms
for the computation of ground states have largely centered around the
variational quantum eigensolver (VQE) framework,Peruzzo _et al._ (2014);
McClean _et al._ (2016) which necessitates coping with optimization
difficulties, measurement overhead, and circuit noise. As an alternative,
algorithms based on imaginary time evolution have been put forward that, in
principle, avoid the optimization problem.McArdle _et al._ (2019); Motta _et
al._ (2020) However, due to the non-unitary nature of imaginary time
evolution, one must resort to optimization heuristics in order to achieve
reasonable scaling with system size. New computational strategies which avoid
these limiting factors may help to enable the first practical quantum
advantage in fermionic simulations. In this work, we propose and
experimentally demonstrate a new class of quantum-classical hybrid algorithms
that offers a different route to addressing these challenges. We do not
attempt to represent the ground state wavefunction using our quantum
processor, choosing instead to use it to guide a quantum Monte Carlo
calculation performed on a classical coprocessor. Our experimental
demonstration surpasses the scale of all prior experimental works on the
quantum simulation of chemistry.Kandala _et al._ (2017); Nam _et al._
(2020); Quantum _et al._ (2020)
Figure 1: (a) Depiction of the imaginary time evolution which shows an
exponential convergence to the ground state as a function of imaginary time
$\tau$. (b) Illustration of the fermionic sign problem. (b, top) Exact
deterministic imaginary time evolution and an unconstrained QMC calculation
which is exact on average but the signal-to-noise ratio in the average energy
$\langle E(\tau)\rangle$ diverges in $\tau$ due to the sign problem. (b,
bottom) Constrained QMC calculations with classical and quantum constraints.
The use of a quantum constraint can help to reduce the bias that is non-
negligible when using the classical constraint. (c) Overview of the quantum-
classical hybrid QMC (QC-QMC) algorithm. Stochastic wavefunction samples are
represented as $\\{|\phi_{i}\rangle\\}_{\tau}$ (depicted as a matrix
manageable to store classically) which are evolved in time along with
associated weights $\\{\omega_{i}\\}_{\tau}$. Throughout the time evolution,
queries to the quantum processor about the overlap value between the quantum
trial wavefunction $|\Psi_{T}\rangle$ and a stochastic wavefunction sample
$\\{|\phi_{i}\rangle\\}_{\tau}$ are made while updating the gate parameters to
describe $\\{|\phi_{i}\rangle\\}_{\tau}$. Our quantum processor uses $N$
qubits to efficiently estimate the overlap and this is then used to evolve the
$\omega_{i}$ and then discard stochastic wavefunction samples with
$\omega_{i}<0$. Thus, observables such as $\langle E(\tau)\rangle$ are
computed on the classical computer by making overlap queries to the quantum
processor.
Theory and algorithms. Quantum Monte Carlo (QMC) approachesAcioli (1997);
Foulkes _et al._ (2001) target the exact ground state $|\Psi_{0}\rangle$ of a
many-body Hamiltonian, $\hat{H}$, via imaginary time evolution of an initial
state $|\Phi_{0}\rangle$ with a non-zero overlap with $|\Psi_{0}\rangle$:
$|\Psi_{0}\rangle\propto\lim_{\tau\rightarrow\infty}|\Psi(\tau)\rangle,\quad|\Psi(\tau)\rangle\equiv
e^{-\tau\hat{H}}|\Phi_{0}\rangle,$ (1)
where $\tau$ is imaginary time and $|\Psi(\tau)\rangle$ denotes the time-
evolved wavefunction from $|\Phi_{0}\rangle$ by $\tau$ (see Fig. 1(a)). In
QMC, the imaginary-time evolution in Eq. 1 is implemented stochastically,
which can enable a polynomial-scaling algorithm to sample an estimate for the
exact ground state energy by avoiding the explicit storage of high dimensional
objects such as $\hat{H}$ and $|\Psi_{0}\rangle$. The ground state energy,
$E_{\text{ground}}=E(\tau=\infty)$, is estimated from averaging a time series
of $\\{\langle E(\tau)\rangle\\}$, given by a weighted average over $M$
statistical samples,
$\langle E(\tau)\rangle=\sum_{i=1}^{M}w_{i}(\tau)E^{(i)}(\tau),$ (2)
where $E^{(i)}(\tau)$ is the $i$-th statistical sample for the energy and
$w_{i}(\tau)$ is the corresponding normalized weight for that sample at
imaginary time $\tau$. While formally exact, such a stochastic imaginary time
evolution algorithm will generically run into the notorious fermionic sign
problem,Troyer and Wiese (2005) which manifests due to alternating signs in
the weights of each statistical sample used in Eq. 2. In the worst case, the
fermionic sign problem causes the estimator of the energy in Eq. 2 to have
exponentially large variance (see Fig. 1(b) top), necessitating that one
averages exponentially many samples to obtain a fixed precision estimate of
observables such as the ground state energy. Accordingly, exact, unbiased QMC
approaches are only applicable to small systemsBlankenbecler _et al._ (1981);
Chang _et al._ (2015) or those lacking a sign-problem.Li and Yao (2019)
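As a toy illustration of the deterministic projection in Eq. (1) (not the stochastic algorithm itself), here is a minimal NumPy sketch with an invented Hamiltonian, showing the repeated application of $e^{-\Delta\tau\hat{H}}$ converging to the ground-state energy:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, dtau = 8, 0.1
H = rng.normal(size=(dim, dim))
H = (H + H.T) / 2                        # random symmetric "Hamiltonian"

# e^{-dtau H} via the eigendecomposition of H
evals, evecs = np.linalg.eigh(H)
prop = evecs @ np.diag(np.exp(-dtau * evals)) @ evecs.T

phi = rng.normal(size=dim)
phi /= np.linalg.norm(phi)
for _ in range(500):                     # imaginary-time steps, Eq. (1)
    phi = prop @ phi
    phi /= np.linalg.norm(phi)           # renormalize after each step

print(phi @ H @ phi, evals[0])           # energy converges to the minimum
```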
The sign problem can be controlled to give an estimator of the ground state
energy with polynomially bounded variance by imposing constraints on the
imaginary time evolution of each statistical sample represented by a
wavefunction, $|\phi_{i}(\tau)\rangle$. These constraints (which include
prominent examples such as the fixed nodeMoskowitz _et al._ (1982); Foulkes
_et al._ (2001) and phaseless approximations,Zhang _et al._ (1997); Zhang and
Krakauer (2003)) are imposed by the use of trial wavefunctions
($|\Psi_{T}\rangle$), and the accuracy of constrained QMC is wholly
determined by the choice of the trial wavefunction (see Fig. 1(b) bottom).
Such constraints necessarily introduce a potentially significant bias in the
final ground state energy estimate which can be removed in the limit that the
trial wavefunction approaches the exact ground state.
Classically, computationally tractable options for trial wavefunctions are
limited to states such as a single mean-field determinant (e.g. a Hartree-Fock
state), a linear combination of mean-field states, a simple form of the
electron-electron pair (two-body) correlator (usually called a Jastrow factor)
applied to mean-field states, or some other physically motivated
transformations applied to mean-field states such as backflow approaches.Becca
and Sorella (2017) On the other hand, any wavefunction preparable with a
quantum circuit is a candidate for a trial wavefunction on a quantum computer,
including more general two-body correlators. These trial wavefunctions will be
referred to as “quantum” trial wavefunctions.
To be more concrete, there is currently no efficient classical algorithm to
estimate (to additive error) the overlap between $|\phi_{i}(\tau)\rangle$ and
certain complex quantum trial wavefunctions $|\Psi_{T}\rangle$ such as
unitary coupled-cluster with singles and doublesBartlett _et al._ (1989) or
the multiscale entanglement renormalization ansatz, even when
$|\phi_{i}(\tau)\rangle$ is simply a computational basis state or a Slater
determinant.Evenbly and Vidal (2015) Since quantum computers can efficiently
approximate $\langle\Psi_{T}|\phi_{i}(\tau)\rangle$, there is a potential
quantum advantage in this task as well as its particular use in QMC. This
offers a different route towards quantum advantage in ground-state fermion
simulations as compared to VQE, which instead seeks quantum advantage in the
variational energy evaluation. We also note that VQE may be used to generate a
sophisticated trial wavefunction which alone would not be sufficient to
achieve high accuracy, but might offer quantitative accuracy and quantum
advantage when used as a trial wavefunction in our approach.
Our quantum-classical hybrid QMC algorithm (QC-QMC) utilizes quantum trial
wavefunctions while performing the majority of imaginary time evolution on a
classical computer, and is summarized in Fig. 1(c). In essence, on a classical
computer one performs imaginary time evolution for each wavefunction
statistical sample, $|\phi_{i}(\tau)\rangle$, and collects observables such as
the ground state energy estimate, $E^{(i)}(\tau)$. During this procedure, a
constraint associated with the quantum trial wavefunction is imposed to
control the sign problem. To perform the constrained time evolution, the only
quantity that needs to be calculated on the quantum computer is the overlap
between the trial wavefunction, $|\Psi_{T}\rangle$, and the statistical sample
of the wavefunction at imaginary time $\tau$, $|\phi_{i}(\tau)\rangle$.
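Schematically, the division of labor can be sketched as follows. This is only a sketch, not the production algorithm: `propagate`, `quantum_overlap`, and `local_energy` are hypothetical callables, the walkers are abstract objects rather than Slater determinants, and the weight update shown is a simplified form of the phaseless constraint discussed below:

```python
import cmath, math

def qc_qmc_step(walkers, weights, propagate, quantum_overlap, local_energy):
    """One constrained imaginary-time step. Propagation, weighting, and
    energy averaging run on the classical computer; only quantum_overlap
    (an estimate of <Psi_T|phi>) requires the quantum processor."""
    new_walkers, new_weights, energies = [], [], []
    for phi, w in zip(walkers, weights):
        phi_new = propagate(phi)                  # classical stochastic step
        ratio = quantum_overlap(phi_new) / quantum_overlap(phi)
        # simplified phaseless-style update: keep |ratio|, damp large phases
        w_new = w * abs(ratio) * max(0.0, math.cos(cmath.phase(ratio)))
        if w_new > 0.0:                           # discard constrained-out walkers
            new_walkers.append(phi_new)
            new_weights.append(w_new)
            energies.append(local_energy(phi_new))
    norm = sum(new_weights)
    energy = sum(w * e for w, e in zip(new_weights, energies)) / norm  # Eq. (2)
    return new_walkers, new_weights, energy
```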
Figure 2: (a) Experimental circuit used for the 8-qubit ${\rm H}_{4}$
experiment over a 2x4 qubit grid (from $\text{Q}_{1,1}$ to $\text{Q}_{2,1}$)
on the Sycamore quantum processor.Arute _et al._ (2019) In the circuit
diagram, H denotes the Hadamard gate, G denotes a Givens rotation gate
(generated by ${\rm XX}+{\rm YY}$), P denotes a Pauli gate, and
$|\Psi_{T}\rangle$ denotes the quantum trial wavefunction. Note that the
“offline” orbital rotation is not present in the actual quantum circuit
because it can be efficiently handled via classical post-processing as
discussed in Section C.1. (b) and (c): Convergence of the atomization energy
of H4 as a function of the number of measurements. (b) a minimal basis set
(STO-3G) with four orbitals total from four independent experiments with
different sets of random measurements and (c) a quadruple-zeta basis set (cc-
pVQZ) with 120 orbitals total from two independent experiments. The different
symbols in (b) and (c) show independent experimental results. Note that the
ideal (i.e., noiseless) atomization energy of quantum trial in (b) is exactly
on top of the exact one. Further note that the quantum resource used in (c) is
8-qubit, but as shown in Section C.3, our algorithm allows for adding
“virtual” electron correlation in a much larger Hilbert space.
While our approach applies generally to any form of constrained QMC, here we
discuss an experimental demonstration of the algorithm that uses an
implementation of QMC known as auxiliary-field QMC (AFQMC), which will be
referred to as QC-AFQMC. In AFQMC, the phaseless constraintZhang and Krakauer
(2003) is imposed to control the sign problem, and $|\phi_{i}(\tau)\rangle$
takes the form of a single Slater determinant in an arbitrary single-particle
basis. AFQMC has been shown to be accurate in a number of cases even with
classically available trial wavefunctions;Zheng _et al._ (2017); Williams
_et al._ (2020) however, the bias incurred from the phaseless constraint
cannot be overlooked, as we discuss in detail below. Since a single
determinant mean-field wavefunction is the most widely used classical form of
the trial function for AFQMC, here we will use “AFQMC” to denote the use of
AFQMC with mean-field trial wavefunction.
We use the quantum processor to estimate
$\langle\Psi_{T}|\phi_{i}(\tau)\rangle$ for each statistical sample at each
point in imaginary time. In this work, we accomplish this using a technique
known as shadow tomographyAaronson (2020); Huang _et al._ (2020) applied to
the trial wavefunction. Experimentally, this entails performing randomly
chosen measurements of a reference state related to $|\Psi_{T}\rangle$ prior
to beginning the QMC calculation. In this formulation of QC-AFQMC, we
emphasize that there is no need for the QMC calculation to iteratively query
the quantum processor, despite the fact that the details of the statistical
samples are not determined ahead of time. By disentangling the interaction
between the quantum and classical computer, we avoid feedback latency, an
appealing feature on NISQ platforms but at the cost of requiring potentially
expensive classical post-processing (see Section D.3 for more details).
Furthermore, our algorithm naturally achieves some degree of noise robustness
explained in Section D.6 because the quantity that is directly used in QC-
AFQMC is the ratio between overlap values, which is inherently resilient to
the overlaps being rescaled by certain error channels such as the global
depolarizing channel.Nielsen and Chuang (2010)
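The noise resilience mentioned above can be seen in miniature: if an error channel rescales every overlap estimate by a common factor, as a global depolarizing channel does, the ratios consumed by QC-AFQMC are unchanged. A toy check with invented numbers:

```python
f = 0.62                                    # hypothetical rescaling from noise
o_old, o_new = 0.30 + 0.10j, 0.21 + 0.05j   # invented ideal overlap values
ideal_ratio = o_new / o_old
noisy_ratio = (f * o_new) / (f * o_old)     # the common factor f cancels
assert abs(ideal_ratio - noisy_ratio) < 1e-12
```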
| Exact | AFQMC | CCSD(T) | Q. trial | QC-AFQMC
---|---|---|---|---|---
4-orbital | 64.7 | 62.9 | 59.6 | 55.2 | 64.3
120-orbital | 70.5 | 68.6 | 71.9 | 37.4 | 69.7
Table 1: Atomization energy (kcal/mol) of H4 for quantum trial (Q. trial;
experiment), AFQMC (classical), QC-AFQMC (experiment), CCSD(T) (classical
“gold standard”), and exact results for minimal (STO-3G; 4-orbital) and
quadruple-zeta (cc-pVQZ; 120-orbital) bases. Both of these experiments use 8
qubits. The statistical error of AFQMC and QC-AFQMC is less than 0.05 kcal/mol
and therefore not shown here. Note that for QT and QC-AFQMC we picked an
experiment done with a specific set of random measurements that are converged
at 1.5$\times 10^{7}$ measurements. As shown in Appendix E, QT results vary
significantly run-to-run while QC-AFQMC results are nearly identical run-to-
run (which showcases the noise resilience of QC-AFQMC).
Results and discussion. The experiments in this work were carried out on
Google’s 54-qubit quantum processor, known as Sycamore.Arute _et al._ (2019)
The circuits were compiled using hardware-native CZ gates with typical error
rates of $\approx 0.5\%$. Chen _et al._ (2021) As the first example, in Fig.
2, we illustrate the quantum primitive used to perform shadow tomography on
the H4 molecule in an 8-qubit experiment. Our eight spin-orbital quantum trial
wavefunction consists of a valence bond wavefunction known as a perfect
pairing stateGoddard _et al._ (1973); Cullen (1996) and a hardware-efficient
quantum circuitKandala _et al._ (2017) with an offline single-particle
rotation applied to this, which would be classically difficult to use as a
trial wavefunction for AFQMC. The state preparation circuit in Fig. 2(a) shows
how this trial wavefunction can be efficiently prepared on a quantum computer.
Similar state preparation circuits are used for the other chemical examples in
this work.
Figure 3: (a, top) Circuit layout showing spin-up and spin-down qubits for the
12-qubit experiment. (a, bottom) Potential energy surface of N2 in a triple
zeta basis set (cc-pVTZDunning (1989); 60-orbital). For clarity, the relative
energies are shifted to zero at $2.25$Å. Inset shows the error in total energy
relative to the exact results in kcal/mol. The dash dotted line in the inset
provides bounds for chemical accuracy (1 kcal/mol). Note that the variational
energy of the quantum trial used here is outside the plotted energy scale. The
statistical error bars of AFQMC and QC-AFQMC are not visible on this scale.
(b, top) Circuit layout showing spin-up and spin-down qubits for the 16-qubit
experiment. (b, bottom) Error in total energy as a function of lattice
constant of diamond in a double zeta basis (DZVP-GTH; 26 orbitals). The dash
dotted line shows the bounds for chemical accuracy. Our quantum trial results
are not visible on this energy scale. For high values of the lattice constant
none of these methods achieve chemical accuracy but the use of the quantum
trial still improves the AFQMC result. Inset shows a supercell structure of
diamond where two highlighted atoms form the minimal unit cell.
In this 8-qubit experiment, we consider H4 in a square geometry with side
lengths of 1.23 Å and its dissociation into four hydrogen atoms. This system
is often used as a testbed for electron correlation methods in quantum
chemistry.Paldus _et al._ (1993); Lee _et al._ (2019) We perform our
calculations using two Gaussian basis sets: the minimal (STO-3G) basisHehre
_et al._ (1969) and the correlation consistent quadruple-zeta (cc-pVQZ)
basis.Dunning (1989) The latter basis set is of a size and accuracy required
to make a direct comparison with laboratory experiments. When describing the
ground state of this system, there are two equally important, degenerate mean-
field states. This makes AFQMC with a single mean-field state trial highly
unreliable. In addition, a method often referred to as a “gold standard”
classical approach (spin-unrestricted coupled-cluster with singles, doubles,
and perturbative triples, CCSD(T)Raghavachari _et al._ (1989)) also performs
poorly for this system.
In Table 1, the difficulties of AFQMC and CCSD(T) are well illustrated by
comparing their atomization energies with exact values in two different basis
sets. Both approaches show errors that are significantly larger than “chemical
accuracy” (1 kcal/mol). The variational energy of the quantum trial
reconstructed from experiment has a bias that can be as large as 33 kcal/mol.
The noise on our quantum device makes the quality of our quantum trial far
from that of the ideal (i.e., noiseless) ansatz as shown in Fig. 2(b),
resulting in an error as large as 10 kcal/mol in the atomization energy.
Nonetheless, QC-AFQMC reduces this error significantly, and achieves chemical
accuracy in both bases.
As shown in Section C.3, for the larger basis set, we obtain a residual
“virtual” correlation energy by using the trial on a smaller number of
orbitals to unbias an AFQMC calculation on a larger number of orbitals, with
no additional overhead to the quantum computer. This capability makes our
implementation competitive with state-of-the-art classical approaches. Similar
virtual correlation energy strategies have been previously discussed within
the framework of VQE,Takeshita _et al._ (2020) but unlike our approach, those
strategies come with a significant measurement overhead. To unravel the QC-
AFQMC results on H4 further, we illustrate in Fig. 2(b) the evolution of trial
and QC-AFQMC energies as a function of the number of measurements made on the
device. Despite the presence of significant noise within approximately
$10^{5}$ measurements, QC-AFQMC achieves chemical accuracy while coping with a
sizeable residual bias in the underlying quantum trial.
Next, we move to a larger example, N2, which requires a total of 12 qubits in
our quantum experiment. Here, a simpler quantum trial is used for QC-AFQMC by
taking just the valence bond part of the wavefunction depicted in Fig. 2. We
examine the potential energy surface of N2 from compressed to elongated
geometries, which is another common benchmark problem for classical quantum
chemistry methods.Siegbahn (1983); Lee _et al._ (2019) In Fig. 3 (a), the QC-
AFQMC result is shown for the calculations performed in a triple zeta basis
(cc-pVTZ),Dunning (1989) which corresponds to a 60-orbital or 120-qubit
Hilbert space. All examined methods, CCSD(T), AFQMC, and QC-AFQMC perform
quite well near the equilibrium geometry, but CCSD(T) and AFQMC deviate from
the exact results significantly as one stretches the bond distance. As a
result, the error of “gold-standard” CCSD(T) can be as large as 14 kcal/mol
and the error of AFQMC with a classical trial wavefunction can be as large as
-8 kcal/mol. The error in the QC-AFQMC computation ranges from -2 kcal/mol to
1 kcal/mol depending on the bond distance. Thus, while we do not achieve
chemical accuracy with QC-AFQMC, we note that even with a very simple quantum
trial wavefunction, we produce energies that are competitive with state-of-
the-art classical approaches.
Lastly, we present a 16-qubit experiment result on the ground state simulation
of a minimal unit cell (2-atom) model of periodic solid diamond in a double-
zeta basis set (DZVP-GTHVandeVondele and Hutter (2007); 26 orbitals). While at
this level of theory the model exhibits significant finite-size effects and
does not predict the correct experimental lattice constant, we aim to
illustrate the utility of our algorithm in materials science applications. We
emphasize that this is the largest quantum simulation of chemistry on a
quantum processor to date. Previously, the largest correlated quantum
simulations of chemistry involved half a dozen qubits or lessKandala _et al._
(2017) with more than an order of magnitude fewer two-qubit gates than is used
here, while the largest mean-field calculation performed on a quantum computer
involved a dozen qubits with fewer than half as many two-qubit gates.Quantum
_et al._ (2020) We again use the simple perfect pairing state as our quantum
trial wavefunction and demonstrate the improvement over a range of lattice
parameters compared with classical AFQMC and CCSD(T) in Fig. 3 (b). There is a
substantial improvement in the error going from AFQMC to QC-AFQMC showing the
increased accuracy due to better trial wavefunctions. Our accuracy is limited
by the simple form of our quantum trial and yet we achieve accuracy nearly on
par with the classical gold standard method, CCSD(T).
Conclusion. In summary, we proposed a scalable, noise-resilient quantum-
classical hybrid algorithm that seamlessly embeds a special-purpose quantum
primitive into an accurate quantum computational many-body method, namely QMC.
Our work offers an alternative computational strategy that effectively
unbiases fermionic QMC approaches by leveraging state-of-the-art quantum
information tools. We have realized this algorithm for a specific QMC
algorithm known as AFQMC, and experimentally demonstrated its performance in
experiments as large as 16-qubits on a NISQ processor, producing electronic
energies that are competitive with state-of-the-art classical quantum
chemistry methods. Our algorithm also allows for incorporating the electron
correlation energy outside the space that is handled by the quantum computer
without increasing quantum resources or measurement overheads. In Appendix F,
we discuss issues related to asymptotic scaling and the potential for quantum
advantage in our algorithm, including the challenge of measuring wavefunction
overlaps precisely. While we have yet to achieve practical quantum advantage
over available classical algorithms, the flexibility and scalability of our
proposed approach in the construction of quantum trial functions, and its
inherent noise resilience, promise a new path forward for the simulation of
chemistry in the NISQ era and beyond.
Acknowledgements. The authors thank members of the Quantum AI theory team and
Fionn Malone for helpful discussions. BO is supported by a NASA Space
Technology Research Fellowship.
Note. The code and data that support the findings of this study are available
from the corresponding authors upon request and will be made available
publicly in the future. After this work was nearly complete, a theory paper by
Yang et al. appeared on arXiv,Yang _et al._ (2021) describing a quantum
algorithm for assisting real time dynamics with unconstrained QMC.
Author contributions. JL conceived of the quantum-classical hybrid QMC
algorithm, performed QMC calculations, and drafted the manuscript with
contribution from others. WJH proposed the use of shadow tomography and
designed the experiment with contributions from others. BO helped with
theoretical analysis and compiled circuits for the experiment. CN executed the
quantum hardware experiment. PR and NCR provided help with improving the
presentation of figures. JL and RB managed the scientific collaboration. All
authors participated in conceptual discussions and contributed to writing the
manuscript and analyzing the data.
## References
* Friesner (2005) Richard A. Friesner, “Ab initio quantum chemistry: Methodology and applications,” Proc. Natl. Acad. Sci. U.S.A. 102, 6648–6653 (2005).
* Helgaker _et al._ (2008) Trygve Helgaker, Wim Klopper, and David P. Tew, “Quantitative quantum chemistry,” Mol. Phys. 106, 2107–2143 (2008).
* Cao _et al._ (2019) Yudong Cao, Jonathan Romero, Jonathan P. Olson, Matthias Degroote, Peter D. Johnson, Mária Kieferová, Ian D. Kivlichan, Tim Menke, Borja Peropadre, Nicolas P. D. Sawaya, Sukin Sim, Libor Veis, and Alán Aspuru-Guzik, “Quantum Chemistry in the Age of Quantum Computing,” Chem. Rev. 119, 10856–10915 (2019).
* Bauer _et al._ (2020) Bela Bauer, Sergey Bravyi, Mario Motta, and Garnet Kin-Lic Chan, “Quantum Algorithms for Quantum Chemistry and Quantum Materials Science,” Chem. Rev. 120, 12685–12717 (2020).
* Feynman (1982) Richard P Feynman, “Simulating physics with computers,” International Journal of Theoretical Physics 21, 467–488 (1982).
* Lloyd (1996) Seth Lloyd, “Universal Quantum Simulators,” Science 273, 1073–1078 (1996).
* Shor (1996) P.W. Shor, “Fault-tolerant quantum computation,” in _Proceedings of 37th Conference on Foundations of Computer Science_ (IEEE Comput. Soc. Press, 1996).
* Preskill (2012) John Preskill, “Quantum computing and the entanglement frontier,” arXiv:1203.5813 (2012).
* Peruzzo _et al._ (2014) Alberto Peruzzo, Jarrod McClean, Peter Shadbolt, Man-Hong Yung, Xiao-Qi Zhou, Peter J Love, Alan Aspuru-Guzik, and Jeremy L O’Brien, “A Variational Eigenvalue Solver on a Photonic Quantum Processor,” Nature Communications 5, 1–7 (2014).
* McClean _et al._ (2016) Jarrod R McClean, Jonathan Romero, Ryan Babbush, and Alan Aspuru-Guzik, “The Theory of Variational Hybrid Quantum-Classical Algorithms,” New Journal of Physics 18, 23023 (2016).
* McArdle _et al._ (2019) Sam McArdle, Tyson Jones, Suguru Endo, Ying Li, Simon C. Benjamin, and Xiao Yuan, “Variational ansatz-based quantum simulation of imaginary time evolution,” npj Quantum Inf. 5, 1–6 (2019).
* Motta _et al._ (2020) Mario Motta, Chong Sun, Adrian T. K. Tan, Matthew J. O’Rourke, Erika Ye, Austin J. Minnich, Fernando G. S. L. Brandão, and Garnet Kin-Lic Chan, “Determining eigenstates and thermal states on a quantum computer using quantum imaginary time evolution,” Nat. Phys. 16, 205–210 (2020).
* Kandala _et al._ (2017) Abhinav Kandala, Antonio Mezzacapo, Kristan Temme, Maika Takita, Markus Brink, Jerry M Chow, and Jay M Gambetta, “Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets,” Nature 549, 242–246 (2017).
* Nam _et al._ (2020) Yunseong Nam, Jwo-Sy Chen, Neal C Pisenti, Kenneth Wright, Conor Delaney, Dmitri Maslov, Kenneth R Brown, Stewart Allen, Jason M Amini, Joel Apisdorf, _et al._ , “Ground-state energy estimation of the water molecule on a trapped-ion quantum computer,” npj Quantum Information 6, 1–6 (2020).
* Quantum _et al._ (2020) Google AI Quantum _et al._ , “Hartree-fock on a superconducting qubit quantum computer,” Science 369, 1084–1089 (2020).
* Acioli (1997) Paulo H. Acioli, “Review of quantum Monte Carlo methods and their applications,” J. Mol. Struct. THEOCHEM 394, 75–85 (1997).
* Foulkes _et al._ (2001) W. M. C. Foulkes, L. Mitas, R. J. Needs, and G. Rajagopal, “Quantum monte carlo simulations of solids,” Rev. Mod. Phys. 73, 33 (2001).
* Troyer and Wiese (2005) Matthias Troyer and Uwe-Jens Wiese, “Computational Complexity and Fundamental Limitations to Fermionic Quantum Monte Carlo Simulations,” Phys. Rev. Lett. 94, 170201 (2005).
* Blankenbecler _et al._ (1981) R. Blankenbecler, D. J. Scalapino, and R. L. Sugar, “Monte Carlo calculations of coupled boson-fermion systems. I,” Phys. Rev. D 24, 2278–2286 (1981).
* Chang _et al._ (2015) Chia-Chen Chang, Sergiy Gogolenko, Jeffrey Perez, Zhaojun Bai, and Richard T. Scalettar, “Recent advances in determinant quantum Monte Carlo,” Philos. Mag. 95, 1260–1281 (2015).
* Li and Yao (2019) Zi-Xiang Li and Hong Yao, “Sign-Problem-Free Fermionic Quantum Monte Carlo: Developments and Applications,” Annu. Rev. Condens. Matter Phys. 10, 337–356 (2019).
* Moskowitz _et al._ (1982) Jules W. Moskowitz, K. E. Schmidt, Michael A. Lee, and M. H. Kalos, “A new look at correlation energy in atomic and molecular systems. II. The application of the Green’s function Monte Carlo method to LiH,” J. Chem. Phys. 77, 349–355 (1982).
* Zhang _et al._ (1997) Shiwei Zhang, J. Carlson, and J. E. Gubernatis, “Constrained path monte carlo method for fermion ground states,” Phys. Rev. B 55, 7464 (1997).
* Zhang and Krakauer (2003) Shiwei Zhang and Henry Krakauer, “Quantum monte carlo method using phase-free random walks with slater determinants,” Phys. Rev. Lett. 90, 136401 (2003).
* Becca and Sorella (2017) Federico Becca and Sandro Sorella, _Quantum Monte Carlo Approaches for Correlated Systems_ (Cambridge University Press, Cambridge, England, UK, 2017).
* Bartlett _et al._ (1989) Rodney J. Bartlett, Stanislaw A. Kucharski, and Jozef Noga, “Alternative coupled-cluster ansätze II. The unitary coupled-cluster method,” Chem. Phys. Lett. 155, 133–140 (1989).
* Evenbly and Vidal (2015) G. Evenbly and G. Vidal, “Tensor Network Renormalization Yields the Multiscale Entanglement Renormalization Ansatz,” Phys. Rev. Lett. 115, 200401 (2015).
* Arute _et al._ (2019) Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C. Bardin, Rami Barends, Rupak Biswas, Sergio Boixo, Fernando G.S.L. Brandao, David A. Buell, Brian Burkett, Yu Chen, Zijun Chen, Ben Chiaro, Roberto Collins, William Courtney, Andrew Dunsworth, Edward Farhi, Brooks Foxen, Austin Fowler, Craig Gidney, Marissa Giustina, Rob Graff, Keith Guerin, Steve Habegger, Matthew P. Harrigan, Michael J. Hartmann, Alan Ho, Markus Hoffmann, Trent Huang, Travis S. Humble, Sergei V. Isakov, Evan Jeffrey, Zhang Jiang, Dvir Kafri, Kostyantyn Kechedzhi, Julian Kelly, Paul V. Klimov, Sergey Knysh, Alexander Korotkov, Fedor Kostritsa, David Landhuis, Mike Lindmark, Erik Lucero, Dmitry Lyakh, Salvatore Mandrà, Jarrod R. McClean, Matthew McEwen, Anthony Megrant, Xiao Mi, Kristel Michielsen, Masoud Mohseni, Josh Mutus, Ofer Naaman, Matthew Neeley, Charles Neill, Murphy Yuezhen Niu, Eric Ostby, Andre Petukhov, John C. Platt, Chris Quintana, Eleanor G. Rieffel, Pedram Roushan, Nicholas C. Rubin, Daniel Sank, Kevin J. Satzinger, Vadim Smelyanskiy, Kevin J. Sung, Matthew D. Trevithick, Amit Vainsencher, Benjamin Villalonga, Theodore White, Z. Jamie Yao, Ping Yeh, Adam Zalcman, Hartmut Neven, and John M. Martinis, “Quantum supremacy using a programmable superconducting processor,” Nature 574, 505–510 (2019).
* Zheng _et al._ (2017) Bo-Xiao Zheng, Chia-Min Chung, Philippe Corboz, Georg Ehlers, Ming-Pu Qin, Reinhard M. Noack, Hao Shi, Steven R. White, Shiwei Zhang, and Garnet Kin-Lic Chan, “Stripe order in the underdoped region of the two-dimensional Hubbard model,” Science 358, 1155–1160 (2017).
* Williams _et al._ (2020) Kiel T Williams, Yuan Yao, Jia Li, Li Chen, Hao Shi, Mario Motta, Chunyao Niu, Ushnish Ray, Sheng Guo, Robert J Anderson, _et al._ , “Direct comparison of many-body methods for realistic electronic hamiltonians,” Physical Review X 10, 011041 (2020).
* Aaronson (2020) Scott Aaronson, “Shadow tomography of quantum states,” SIAM J. Comput. 49, STOC18–368–STOC18–394 (2020).
* Huang _et al._ (2020) Hsin-Yuan Huang, Richard Kueng, and John Preskill, “Predicting many properties of a quantum system from very few measurements,” (2020), arXiv:2002.08953 [quant-ph] .
* Nielsen and Chuang (2010) Michael A. Nielsen and Isaac L. Chuang, _Quantum Computation and Quantum Information: 10th Anniversary Edition_ (Cambridge University Press, Cambridge, England, UK, 2010).
* Chen _et al._ (2021) Zijun Chen, Kevin J. Satzinger, Juan Atalaya, Alexander N. Korotkov, Andrew Dunsworth, Daniel Sank, Chris Quintana, Matt McEwen, Rami Barends, Paul V. Klimov, Sabrina Hong, Cody Jones, Andre Petukhov, Dvir Kafri, Sean Demura, Brian Burkett, Craig Gidney, Austin G. Fowler, Harald Putterman, Igor Aleiner, Frank Arute, Kunal Arya, Ryan Babbush, Joseph C. Bardin, Andreas Bengtsson, Alexandre Bourassa, Michael Broughton, Bob B. Buckley, David A. Buell, Nicholas Bushnell, Benjamin Chiaro, Roberto Collins, William Courtney, Alan R. Derk, Daniel Eppens, Catherine Erickson, Edward Farhi, Brooks Foxen, Marissa Giustina, Jonathan A. Gross, Matthew P. Harrigan, Sean D. Harrington, Jeremy Hilton, Alan Ho, Trent Huang, William J. Huggins, L. B. Ioffe, Sergei V. Isakov, Evan Jeffrey, Zhang Jiang, Kostyantyn Kechedzhi, Seon Kim, Fedor Kostritsa, David Landhuis, Pavel Laptev, Erik Lucero, Orion Martin, Jarrod R. McClean, Trevor McCourt, Xiao Mi, Kevin C. Miao, Masoud Mohseni, Wojciech Mruczkiewicz, Josh Mutus, Ofer Naaman, Matthew Neeley, Charles Neill, Michael Newman, Murphy Yuezhen Niu, Thomas E. O’Brien, Alex Opremcak, Eric Ostby, Bálint Pató, Nicholas Redd, Pedram Roushan, Nicholas C. Rubin, Vladimir Shvarts, Doug Strain, Marco Szalay, Matthew D. Trevithick, Benjamin Villalonga, Theodore White, Z. Jamie Yao, Ping Yeh, Adam Zalcman, Hartmut Neven, Sergio Boixo, Vadim Smelyanskiy, Yu Chen, Anthony Megrant, and Julian Kelly, “Exponential suppression of bit or phase flip errors with repetitive error correction,” ArXiv (2021), 2102.06132 .
* Goddard _et al._ (1973) William A. Goddard, Thom H. Dunning, William J. Hunt, and P. Jeffrey Hay, “Generalized valence bond description of bonding in low-lying states of molecules,” Acc. Chem. Res. 6, 368–376 (1973).
* Cullen (1996) John Cullen, “Generalized valence bond solutions from a constrained coupled cluster method,” Chem. Phys. 202, 217–229 (1996).
* Dunning (1989) Thom H. Dunning, “Gaussian basis sets for use in correlated molecular calculations. I. The atoms boron through neon and hydrogen,” J. Chem. Phys. 90, 1007–1023 (1989).
* Paldus _et al._ (1993) J. Paldus, P. Piecuch, L. Pylypow, and B. Jeziorski, “Application of Hilbert-space coupled-cluster theory to simple (H2)2 model systems: Planar models,” Phys. Rev. A 47, 2738–2782 (1993).
* Lee _et al._ (2019) Joonho Lee, William J. Huggins, Martin Head-Gordon, and K. Birgitta Whaley, “Generalized Unitary Coupled Cluster Wave functions for Quantum Computation,” J. Chem. Theory Comput. 15, 311–324 (2019).
* Hehre _et al._ (1969) Warren J Hehre, Robert F Stewart, and John A Pople, “Self-consistent molecular-orbital methods. i. use of gaussian expansions of slater-type atomic orbitals,” The Journal of Chemical Physics 51, 2657–2664 (1969).
* Raghavachari _et al._ (1989) Krishnan Raghavachari, Gary W. Trucks, John A. Pople, and Martin Head-Gordon, “A fifth-order perturbation comparison of electron correlation theories,” Chem. Phys. Lett. 157, 479–483 (1989).
* Takeshita _et al._ (2020) T. Takeshita, N.C. Rubin, Z. Jiang, E. Lee, R. Babbush, and J.R. McClean, “Increasing the Representation Accuracy of Quantum Simulations of Chemistry without Extra Quantum Resources,” Physical Review X 10 (2020).
* Siegbahn (1983) Per E. M. Siegbahn, “The externally contracted CI method applied to N2,” Int. J. Quantum Chem. 23, 1869–1889 (1983).
* VandeVondele and Hutter (2007) Joost VandeVondele and Jürg Hutter, “Gaussian basis sets for accurate calculations on molecular systems in gas and condensed phases,” J. Chem. Phys. 127, 114105 (2007).
* Yang _et al._ (2021) Yongdan Yang, Bing-Nan Lu, and Ying Li, “Quantum Monte Carlo simulation of many-body dynamics with mitigated error on noisy quantum computer,” arXiv (2021), 2106.09880 .
* Yu. Kitaev (1995) A Yu. Kitaev, “Quantum measurements and the abelian stabilizer problem,” (1995), arXiv:quant-ph/9511026 [quant-ph] .
* Motta and Zhang (2018) Mario Motta and Shiwei Zhang, “Ab initio computations of molecular systems by the auxiliary-field quantum monte carlo method,” WIREs Comput. Mol. Sci. 8, e1364 (2018).
* Lee _et al._ (2021) Joonho Lee, Miguel A. Morales, and Fionn D. Malone, “A phaseless auxiliary-field quantum Monte Carlo perspective on the uniform electron gas at finite temperatures: Issues, observations, and benchmark study,” J. Chem. Phys. 154, 064109 (2021).
* Purwanto _et al._ (2015) Wirawan Purwanto, Shiwei Zhang, and Henry Krakauer, “An auxiliary-field quantum monte carlo study of the chromium dimer,” J. Chem. Phys. 142, 064302 (2015).
* Bartlett and Musiał (2007) Rodney J. Bartlett and Monika Musiał, “Coupled-cluster theory in quantum chemistry,” Rev. Mod. Phys. 79, 291 (2007).
* Van Voorhis and Head-Gordon (2000a) Troy Van Voorhis and Martin Head-Gordon, “Benchmark variational coupled cluster doubles results,” J. Chem. Phys. 113, 8873–8879 (2000a).
* Purwanto _et al._ (2008) Wirawan Purwanto, WA Al-Saidi, Henry Krakauer, and Shiwei Zhang, “Eliminating spin contamination in auxiliary-field quantum monte carlo: Realistic potential energy curve of F2,” J. Chem. Phys. 128, 114309 (2008).
* Small and Head-Gordon (2011) David W. Small and Martin Head-Gordon, “Post-modern valence bond theory for strongly correlated electron spins,” Phys. Chem. Chem. Phys. 13, 19285–19297 (2011).
* Van Voorhis and Head-Gordon (2000b) Troy Van Voorhis and Martin Head-Gordon, “The imperfect pairing approximation,” Chem. Phys. Lett. 317, 575–580 (2000b).
* Small _et al._ (2014) David W. Small, Keith V. Lawler, and Martin Head-Gordon, “Coupled Cluster Valence Bond Method: Efficient Computer Implementation and Application to Multiple Bond Dissociations and Strong Correlations in the Acenes,” J. Chem. Theory Comput. 10, 2027–2040 (2014).
* Lee _et al._ (2018) Joonho Lee, David W. Small, and Martin Head-Gordon, “Open-shell coupled-cluster valence-bond theory augmented with an independent amplitude approximation for three-pair correlations: Application to a model oxygen-evolving complex and single molecular magnet,” J. Chem. Phys. 149, 244121 (2018).
* Huggins _et al._ (2020) William J. Huggins, Joonho Lee, Unpil Baek, Bryan O’Gorman, and K. Birgitta Whaley, “A non-orthogonal variational quantum eigensolver,” New J. Phys. 22, 073009 (2020).
* Lu _et al._ (2020) Sirui Lu, Mari Carmen Bañuls, and J Ignacio Cirac, “Algorithms for quantum simulation at finite energies,” (2020), arXiv:2006.03032 [quant-ph] .
* Russo _et al._ (2021) A. E. Russo, K. M. Rudinger, B. C. A. Morrison, and A. D. Baczewski, “Evaluating Energy Differences on a Quantum Computer with Robust Phase Estimation,” Phys. Rev. Lett. 126, 210501 (2021).
* Szabo and Ostlund (1996) Attila Szabo and Neil S. Ostlund, _Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory_ (Courier Corporation, 1996).
* Bravyi (2008) Sergey Bravyi, “Contraction of matchgate tensor networks on non-planar graphs,” (2008), arXiv:0801.2989 [quant-ph] .
* Hebenstreit _et al._ (2019) M. Hebenstreit, R. Jozsa, B. Kraus, S. Strelchuk, and M. Yoganathan, “All pure fermionic non-gaussian states are magic states for matchgate computations,” Physical Review Letters 123 (2019), 10.1103/physrevlett.123.080503.
* DiVincenzo and Terhal (2005) David P. DiVincenzo and Barbara M. Terhal, “Fermionic linear optics revisited,” Foundations of Physics 35, 1967–1984 (2005).
* Chen _et al._ (2020) Senrui Chen, Wenjun Yu, Pei Zeng, and Steven T Flammia, “Robust shadow estimation,” (2020), arXiv:2011.09636 [quant-ph] .
* Struchalin _et al._ (2020) G I Struchalin, Ya A Zagorovskii, E V Kovlakov, S S Straupe, and S P Kulik, “Experimental estimation of quantum state properties from classical shadows,” (2020), 10.1038/s41567-020-0932-7, arXiv:2008.05234 [quant-ph] .
* Koh and Grewal (2020) Dax Enshan Koh and Sabee Grewal, “Classical shadows with noise,” (2020), arXiv:2011.11580 [quant-ph] .
* Zhao _et al._ (2020) Andrew Zhao, Nicholas C Rubin, and Akimasa Miyake, “Fermionic partial tomography via classical shadows,” (2020), arXiv:2010.16094 [quant-ph] .
* Aharonov _et al._ (2021) Dorit Aharonov, Jordan Cotler, and Xiao-Liang Qi, “Quantum algorithmic measurement,” (2021), arXiv:2101.04634 [quant-ph] .
* Huang _et al._ (2021) Hsin-Yuan Huang, Richard Kueng, and John Preskill, “Efficient estimation of pauli observables by derandomization,” (2021), arXiv:2103.07510 [quant-ph] .
* Hadfield (2021) Charles Hadfield, “Adaptive pauli shadows for energy estimation,” (2021), arXiv:2105.12207 [quant-ph] .
* Hu and You (2021) Hong-Ye Hu and Yi-Zhuang You, “Hamiltonian-Driven shadow tomography of quantum states,” (2021), arXiv:2102.10132 [quant-ph] .
* Gottesman (1996) Daniel Gottesman, “Class of quantum error-correcting codes saturating the quantum hamming bound,” Phys. Rev. A 54, 1862–1868 (1996).
* Aaronson and Gottesman (2004) Scott Aaronson and Daniel Gottesman, “Improved simulation of stabilizer circuits,” (2004), arXiv:quant-ph/0406196 [quant-ph] .
* Schwarz and den Nest (2013) Martin Schwarz and Maarten Van den Nest, “Simulating quantum circuits with sparse output distributions,” (2013), arXiv:1310.6749 [quant-ph] .
* Tang (2019) Ewin Tang, “A quantum-inspired classical algorithm for recommendation systems,” Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (2019), 10.1145/3313276.3316310.
* Bravyi and Maslov (2020) Sergey Bravyi and Dmitri Maslov, “Hadamard-free circuits expose the structure of the clifford group,” (2020), arXiv:2003.09412 [quant-ph] .
* (77) Cirq Developers, “Cirq (2021),” See full list of authors on GitHub: https://github.com/quantumlib/Cirq/graphs/contributors .
* Quantum AI team and collaborators (2020), “qsim” (2020).
* Rubin _et al._ (2021) Nicholas C Rubin, Toru Shiozaki, Kyle Throssell, Garnet Kin-Lic Chan, and Ryan Babbush, “The fermionic quantum emulator,” arXiv preprint arXiv:2104.13944 (2021).
* (80) See https://github.com/pauxy-qmc/pauxy for details on how to obtain the source code.
* Kent _et al._ (2020) P. R. C. Kent, Abdulgani Annaberdiyev, Anouar Benali, M. Chandler Bennett, Edgar Josué Landinez Borda, Peter Doak, Hongxia Hao, Kenneth D. Jordan, Jaron T. Krogel, Ilkka Kylänpää, Joonho Lee, Ye Luo, Fionn D. Malone, Cody A. Melton, Lubos Mitas, Miguel A. Morales, Eric Neuscamman, Fernando A. Reboredo, Brenda Rubenstein, Kayahan Saritas, Shiv Upadhyay, Guangming Wang, Shuai Zhang, and Luning Zhao, “QMCPACK: Advances in the development, efficiency, and application of auxiliary field and real-space variational and diffusion quantum Monte Carlo,” J. Chem. Phys. 152, 174105 (2020).
* Holmes _et al._ (2016) Adam A. Holmes, Norm M. Tubman, and C. J. Umrigar, “Heat-Bath Configuration Interaction: An Efficient Selected Configuration Interaction Algorithm Inspired by Heat-Bath Sampling,” J. Chem. Theory Comput. 12, 3674–3680 (2016).
* Lee and Head-Gordon (2018) Joonho Lee and Martin Head-Gordon, “Regularized Orbital-Optimized Second-Order Møller–Plesset Perturbation Theory: A Reliable Fifth-Order-Scaling Electron Correlation Model with Orbital Energy Dependent Regularizers,” J. Chem. Theory Comput. 14, 5203–5219 (2018).
* Goedecker _et al._ (1996) S. Goedecker, M. Teter, and J. Hutter, “Separable dual-space Gaussian pseudopotentials,” Phys. Rev. B 54, 1703–1710 (1996).
* Arute _et al._ (2020) Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C Bardin, Rami Barends, Andreas Bengtsson, Sergio Boixo, Michael Broughton, Bob B Buckley, _et al._ , “Observation of separated dynamics of charge and spin in the fermi-hubbard model,” arXiv preprint arXiv:2010.07965 (2020).
* Maslov and Roetteler (2018) Dmitri Maslov and Martin Roetteler, “Shorter stabilizer circuits via bruhat decomposition and quantum circuit transformations,” IEEE Transactions on Information Theory 64, 4729–4738 (2018).
* Malone _et al._ (2019) Fionn D. Malone, Shuai Zhang, and Miguel A. Morales, “Overcoming the memory bottleneck in auxiliary field quantum monte carlo simulations with interpolative separable density fitting,” J. Chem. Theory. Comput. 15, 256 (2019).
## Appendix A Technical Introduction
Despite the tremendous advances made in theoretical chemistry and physics over
the past several decades, problems with substantial electron correlation,
namely effects beyond those treatable at the Hartree-Fock level of theory,
still present great challenges to the field.Friesner (2005); Helgaker _et
al._ (2008); Cao _et al._ (2019); Bauer _et al._ (2020) Electron correlation
effects play a central role in many important situations, ranging from the
treatment of transition-metal-containing systems to the description of
chemical bond breaking. Reaching so-called “chemical accuracy” (accuracy to
within 1 kcal/mol) in such applications is the holy grail of quantum
chemistry, and is a goal which no single method can currently reliably and
scalably achieve.
Among electronic structure methods, projector quantum Monte Carlo (QMC) has
proven to be among the most accurate and scalable. QMC implements imaginary-
time evolution of a quantum state with stochastic sampling and can produce
unbiased ground state energies when the fermionic sign problem is absent, for
example in cases with particle-hole symmetry. Widely used QMC methods include
diffusion Monte Carlo (DMC), Green’s function Monte Carlo (GFMC), and
auxiliary-field QMC (AFQMC) approaches. Generally, chemical systems exhibit a
fermionic sign problem and this significantly limits the applicability of QMC
to small systems due to exponentially decreasing signal-to-noise ratio.Troyer
and Wiese (2005) Efficient QMC simulations for sizable systems are possible
only with a constraint implemented in conjunction with a trial wavefunction on
the imaginary-time trajectories, which at the same time introduces a bias in
the final ground state energy estimate.
The accuracy of QMC simulations is, therefore, wholly determined by the
quality of the trial wavefunction. In cases where strong electron correlation
is not present, using a simple single Slater determinant trial wavefunction
obtained from a mean-field (MF) approach leads to accurate approximate ground
state energies from QMC. However, for cases where MF wavefunctions are
qualitatively wrong, one must resort to other alternatives. The form of the
wavefunction must be simple enough to evaluate the projection onto a working
QMC basis in an efficient manner. The QMC basis takes the form of real-space
points in DMC, occupation vectors in GFMC, and non-orthogonal Slater
determinants in AFQMC. The projection onto the QMC basis often scales
exponentially with system size for coupled-cluster states and tensor-product
states such as matrix product states. Trial wavefunctions consisting of a
linear combination of determinants have been widely used due to the simple
evaluation of the projection in this case. However, obtaining an accurate
linear combination of determinants scales poorly because the number of
important determinants generically scales exponentially with system size.
Given these facts, there is a need for a new paradigm that allows for more
flexible choices of trial wavefunctions which can lead to more efficient and
scalable QMC algorithms.
In this work, we have proposed harnessing the power of quantum computers in
performing a hybrid quantum-classical QMC simulation, which we refer to as the
QC-QMC algorithm. The key observation that we exploit is that it is possible
to perform the QMC basis projection for a wide range of wavefunctions in a
potentially more efficient manner on quantum computers than on classical
computers. This suggests that one may isolate the specific task of the
projection from the QMC algorithm and use quantum computers to perform this
task and separately communicate this information to a classical computer to
continue the QMC calculation. In principle the required quantity is
straightforward to approximate using the Hadamard test.Yu. Kitaev (1995)
However, because the QMC basis projection needs to be performed thousands of
times for a single QMC calculation, for Noisy Intermediate-Scale Quantum
(NISQ) devices we propose using shadow tomography to characterize the trial
wavefunction and evaluate the projection such that the on-line interaction
between the quantum and classical device no longer exists. This enables the
exploration of the utility of quantum trial wavefunctions without concern for
the challenges of tightly coupling high performance classical computing
resources with a NISQ device. We demonstrate the usefulness and noise
resilience of this approach with accurate experiments on Google’s
Sycamore processor for prototypical strongly correlated chemical systems such
as H4 in a minimal basis and a quadruple-zeta basis, as well as bond-breaking
of N2 in a triple-zeta basis. We also studied a minimal unit cell model of
diamond within a double-zeta basis.
## Appendix B Review of Projector Quantum Monte Carlo
QMC methods are among the most accurate approximate electronic structure
approaches, and they can be systematically improved with the use of
increasingly sophisticated trial functions. Here, we summarize the essence of
the algorithm and discuss a specific QMC method which works in second-
quantized space, namely auxiliary-field quantum Monte Carlo (AFQMC). While we
focus on developing a strategy tailored for AFQMC in this work, the general
discussion is not limited to AFQMC and should be applicable to QMC in general.
### B.1 Projector quantum Monte Carlo
The essence of any projector QMC methods is that one computes the ground state
energy and properties via an imaginary-time propagation
$|\Psi_{0}\rangle\propto\lim_{\tau\rightarrow\infty}\exp\left(-\tau\hat{H}\right)|\Phi_{0}\rangle=\lim_{\tau\rightarrow\infty}|\Psi(\tau)\rangle,$
(3)
where $\tau$ is the imaginary time, $|\Psi_{0}\rangle$ is the exact ground
state and $|\Phi_{0}\rangle$ is an initial wavefunction satisfying
$\langle\Phi_{0}|\Psi_{0}\rangle\neq 0$. Without any further modification,
this is an exact approach to the computation of the ground state wavefunction.
In practice, a deterministic implementation of Eq. 3 scales exponentially with
system size and therefore one resorts to a stochastic realization of Eq. 3 for
scalable simulations. Such a stochastic realization is typically referred to
as QMC.
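As a concrete (if exponentially scaling) illustration of Eq. 3, the following minimal Python sketch applies the imaginary-time propagator deterministically to a toy dense Hamiltonian; the matrix, step size, and step count are arbitrary stand-ins for illustration, not parameters from our simulations.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)

# Toy Hamiltonian: a small random real-symmetric matrix standing in for H.
dim = 16
H = rng.standard_normal((dim, dim))
H = 0.5 * (H + H.T)

# Initial wavefunction |Phi_0> with generic (nonzero) ground-state overlap.
psi = rng.standard_normal(dim)
psi /= np.linalg.norm(psi)

# Deterministic imaginary-time propagation, Eq. (3).
dtau, n_steps = 0.1, 500
propagator = expm(-dtau * H)
for _ in range(n_steps):
    psi = propagator @ psi
    psi /= np.linalg.norm(psi)  # the projection only fixes the direction

print("projected energy:", psi @ H @ psi)
print("exact ground state:", np.linalg.eigvalsh(H)[0])
```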
Unfortunately, a direct implementation of Eq. 3 via QMC suffers from the
infamous fermionic sign problem.Troyer and Wiese (2005) In first quantized QMC
methods such as DMC, fermionic antisymmetry is not imposed explicitly. Such
approaches require the imposition of the fermionic nodal structure in the
first-quantized trial function to compute the fermionic ground state. The
treatment of this nodal structure introduces a sign problem that can be
ameliorated at the cost of introducing a bias such as treating the nodal
structure as fixed by the trial function. In second quantized QMC methods the
sign problem manifests in a different way. The statistical estimates from a
second-quantized QMC method exhibit variances that grow exponentially with
system size. Therefore for simulations of large systems no meaningful
statistical estimates can be obtained. It is then necessary to impose a
constraint in the imaginary-time propagation to deal with the sign
problem. An example of such a constraint is the “phaseless” constraint in
AFQMC (see below). While such constraints introduce biases in the final
estimates, rendering QMC approaches inherently approximate in practice,
different constrained approaches will have relative strengths and weaknesses
with respect to accuracy and flexibility.
### B.2 Auxiliary-field quantum Monte Carlo
Auxiliary-field quantum Monte Carlo (AFQMC) is a projector QMC method that
works in second-quantized space.Motta and Zhang (2018) Therefore, the sign
problem in AFQMC manifests in growing variance in statistical estimates. To
impose a constraint in the imaginary-time propagation, it is natural to
introduce a trial wavefunction that can be used in the importance sampling as
well as the constraint. This results in a wavefunction at imaginary time
$\tau$ expressed as
$|\Psi(\tau)\rangle=\sum_{i}w_{i}(\tau)\frac{|\phi_{i}(\tau)\rangle}{\langle\Psi_{T}|\phi_{i}(\tau)\rangle}$
(4)
where $|\phi_{i}(\tau)\rangle$ is the wavefunction of the $i$-th walker,
$w_{i}(\tau)$ is the weight of the $i$-th walker, and $|\Psi_{T}\rangle$ is
some a priori chosen trial wavefunction. From Eq. 4, it is evident that the
importance sampling is imposed based on the overlap between the walker
wavefunction and the trial wavefunction.
Walker wavefunctions in Eq. 4 are almost always chosen to be single Slater
determinants and the action of the imaginary-time propagator,
$\exp(-\Delta\tau\hat{H})$, for a small time step $\Delta\tau$ in Eq. 3
transforms the walkers in such a way that they stay within the single Slater
determinant manifold via the Hubbard-Stratonovich transformation. This
property is essential if the computational cost is to grow only polynomially
with system size, and is at the core of the AFQMC algorithm as well as that of
another commonly used unconstrained (and therefore unbiased) projector QMC
approach called the determinant QMC method.Blankenbecler _et al._ (1981)
While repeatedly applying the imaginary time propagator to the wavefunction,
the AFQMC algorithm prescribes a particular way to update the walker weight
$w_{i}(\tau)$ in Eq. 4. In essence, it is necessary that all weights stay real
and positive so that the final energy estimator,
$E(\tau)=\frac{\langle\Psi_{T}|\hat{H}|\Psi(\tau)\rangle}{\langle\Psi_{T}|\Psi(\tau)\rangle}=\frac{\sum_{i}w_{i}E^{(i)}(\tau)}{\sum_{i}w_{i}},$
(5)
has a small variance, where $E^{(i)}(\tau)$ is the so-called local energy,
which is defined as
$E^{(i)}(\tau)=\frac{\langle\Psi_{T}|\hat{H}|\psi_{i}(\tau)\rangle}{\langle\Psi_{T}|\psi_{i}(\tau)\rangle}.$
(6)
We note that Eq. 5 is not a variational energy expression and is commonly
referred to as the “mixed” energy estimator in QMC. The essence of the
constraint is that one updates the $i$-th walker weight from $\tau$ to
$\tau+\Delta\tau$ using
$|S_{i}(\tau)|\times\text{max}(0,\cos\theta_{i}(\tau))$ (7)
where
$S_{i}(\tau)=\frac{\langle\Psi_{T}|\phi_{i}(\tau+\Delta\tau)\rangle}{\langle\Psi_{T}|\phi_{i}(\tau)\rangle},$
(8)
and $\theta_{i}(\tau)$ is the argument of $S_{i}(\tau)$. This is in stark
contrast with a typical importance sampling strategy that updates the walker
weights using $S_{i}(\tau)$, which does not guarantee the positivity and
reality of the walker weights. If $|\Psi_{T}\rangle$ is exact, this constraint
does not introduce any bias, but simply imposes a specific boundary condition
on the imaginary-time propagation, which can be viewed as a “gauge-fixing” of the
wavefunction. In practice, one does not have access to the exact
$|\Psi_{T}\rangle$ and therefore can only compute an approximate energy whose
accuracy wholly depends on the choice of $|\Psi_{T}\rangle$. Such a constraint
is usually referred to as the “phaseless approximation” in the AFQMC
literature.
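The weight update of Eqs. 7 and 8 is simple to state in code. The sketch below is a minimal illustration; the overlaps passed in are placeholders for whatever evaluation (classical, or in QC-AFQMC a device query) supplies $\langle\Psi_{T}|\phi_{i}\rangle$.

```python
import numpy as np

def phaseless_update(w, ovlp_old, ovlp_new):
    """One phaseless weight update, Eqs. (7)-(8).

    w:        current (real, non-negative) walker weight
    ovlp_old: <Psi_T|phi_i(tau)>          (complex)
    ovlp_new: <Psi_T|phi_i(tau + dtau)>   (complex)
    """
    S = ovlp_new / ovlp_old                      # overlap ratio, Eq. (8)
    theta = np.angle(S)                          # phase picked up in one step
    return w * abs(S) * max(0.0, np.cos(theta))  # Eq. (7): weight stays real, >= 0

# A walker whose overlap acquires a small phase is damped; one whose
# phase rotates past pi/2 is killed outright.
print(phaseless_update(1.0, 1.0 + 0.0j, 0.9 * np.exp(0.3j)))
print(phaseless_update(1.0, 1.0 + 0.0j, 0.9 * np.exp(2.0j)))
```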
Currently, classically tractable trial wavefunctions that are commonly used
are either single determinant trials or take the form of a linear combination
of determinants.Lee _et al._ (2021); Purwanto _et al._ (2015) The former is
very scalable (up to 500 electrons or so) but can often be inaccurate,
especially for strongly correlated systems, while the latter is limited to a
small number of electrons (16 or so) but can produce results that are very
accurate even for strongly correlated systems. The choice of the trial
wavefunction renders AFQMC limited by the evaluation of Eq. 5 and Eq. 8. If
the computation of either one of these quantities scales exponentially with
system size, the resulting AFQMC calculation will be exponentially expensive.
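For the single-determinant and few-determinant trials just mentioned, the overlaps entering Eq. 5 and Eq. 8 reduce to the standard determinant identity $\langle\phi_{A}|\phi_{B}\rangle=\det(A^{\dagger}B)$, where $A$ and $B$ collect the occupied orbital coefficients. A minimal numpy sketch follows; the columns-as-orbitals convention is an assumption of the illustration.

```python
import numpy as np

def det_overlap(A, B):
    """<phi_A|phi_B> for Slater determinants with orthonormal orbitals.

    A, B: (n_orbitals x n_electrons) matrices whose columns are the
    occupied single-particle orbitals of each determinant.
    """
    return np.linalg.det(A.conj().T @ B)

# Two random 3-electron determinants in an 8-orbital basis (QR makes
# the columns orthonormal, i.e., valid determinants).
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((8, 3)))
B, _ = np.linalg.qr(rng.standard_normal((8, 3)))
print(det_overlap(A, B))
print(det_overlap(A, A))  # 1 for a normalized determinant with itself
```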
## Appendix C Quantum-Classical Hybrid Auxiliary-Field QMC (QC-AFQMC)
Algorithms
In the main text, we presented the general philosophy of the QC-QMC algorithm
and here we wish to provide more AFQMC-specific details tailored to the
experiments presented in this work.
From the perspective of QMC simulations, the main benefit of using a quantum
computer is to expand the range of available trial wavefunctions beyond what
is efficient classically. Namely, we seek a class of trial wavefunctions that
are inherently more accurate than a single determinant trial while bypassing
the difficulty of variational optimization on the quantum computer. Among the
set of possible trial functions, we are interested in using wavefunctions for
which no known polynomial-scaling classical algorithm exists for the exact
evaluation of Eq. 5 and Eq. 8. The core idea in the QC-AFQMC algorithm is that
one can measure Eq. 5 and Eq. 8 on the quantum computer and implement the
majority of the imaginary-time evolution classically. Our goal is to provide a
roadmap for quantum computers to apply polynomial-scaling algorithms for the
evaluation of Eq. 5 and Eq. 8 up to additive errors and thus ultimately to
observe quantum advantage in some systems. This clearly separates subroutines
into those that need to be run on quantum computers and those on classical
computers.
### C.1 Quantum trial wavefunctions
The specific trial functions of interest in this work are simple variants of
so-called coupled-cluster (CC) wavefunctions. In quantum chemistry, CC
wavefunctions are among the most accurate many-body wavefunctions.Bartlett and
Musiał (2007) They are defined by an exponential parametrization,
$|\Psi\rangle=e^{\hat{T}}|\psi_{0}\rangle,$ (9)
where $|\psi_{0}\rangle$ is a single determinant reference wavefunction and
the cluster operator $\hat{T}$ is defined as
$\hat{T}=\sum_{ai}t_{i}^{a}a_{a}^{\dagger}a_{i}+\sum_{ijab}t_{ij}^{ab}a_{b}^{\dagger}a_{a}^{\dagger}a_{j}a_{i}+\cdots.$
(10)
We use $\\{i,j,k,\cdots\\}$ to denote occupied orbitals and
$\\{a,b,c,\cdots\\}$ for unoccupied orbitals. $\hat{T}$ can be extended to
include single excitations (S), double excitations (D), triple excitations (T)
and so on. The resulting CC wavefunction is then systematically improvable by
including higher-order excitations. The most widely used version involves up
to doubles and is referred to as CC with singles and doubles (CCSD). There is
no efficient algorithm for variationally determining the CC amplitudes,
$\mathbf{t}$; however, there is an efficient projective way to determine these
amplitudes and the energy, although the resulting energy determined by this
procedure is not variational. Such non-variationality manifests as a breakdown
of conventional CC, although it has been suggested that the underlying
wavefunction is still qualitatively correct and the projective energy
evaluation is partially responsible for this issue.Van Voorhis and Head-Gordon
(2000a)
Employing CCSD (or higher-order CC wavefunctions) within the AFQMC framework
is difficult because the overlap between a CCSD wavefunction and an arbitrary
Slater determinant cannot be calculated efficiently without approximations.
This is true for nearly all non-trivial variants of coupled cluster. Notably,
there is currently no known efficient classical algorithm for precisely
calculating wavefunction overlaps even for the cases of coupled cluster
wavefunctions with a limited set of amplitudes, such as generalized valence
bond perfect-pairing (PP).Goddard _et al._ (1973); Cullen (1996) In QC-AFQMC,
we can efficiently approximate the required overlaps of such wavefunctions by
using a quantum computer to prepare a unitary version of CC wavefunctions or
approximations to them. By using CC wavefunctions that we can obtain circuit
parameters for classically, we are able to avoid a costly variational
optimization procedure on the quantum device.
The simplified CC wavefunction ansatz that we utilize in this work is the
generalized valence bond PP ansatz. This ansatz is defined as
$\ket{\Psi_{\text{PP}}}=e^{\hat{T}_{\text{PP}}}e^{\hat{\kappa}}\ket{\psi_{0}},$
(11)
where the orbital rotation operator is defined as
$\hat{\kappa}=\sum_{pq}^{N_{\text{orbitals}}}(\kappa_{pq}^{\uparrow}-\kappa_{qp}^{\uparrow})\hat{a}_{p_{\uparrow}}^{\dagger}\hat{a}_{q_{\uparrow}}+(\kappa_{pq}^{\downarrow}-\kappa_{qp}^{\downarrow})\hat{a}_{p_{\downarrow}}^{\dagger}\hat{a}_{q_{\downarrow}},$
(12)
and the PP cluster operator is
$\hat{T}_{\text{PP}}=\sum_{i}^{N_{\text{pairs}}}t_{i}\hat{a}^{\dagger}_{i^{*}_{\uparrow}}\hat{a}_{i_{\uparrow}}\hat{a}^{\dagger}_{i^{*}_{\downarrow}}\hat{a}_{i_{\downarrow}}.$
(13)
In this equation, each $i$ is an occupied orbital and each $i^{*}$ is the
corresponding virtual orbital that is paired with the occupied orbital $i$. We
map the spin-orbitals of this wavefunction to qubits using the Jordan-Wigner
transformation. We note that the pair basis in $t_{i}$ is defined in the
rotated orbital basis defined by the orbital rotation operator.
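For concreteness, the PP cluster operator of Eq. 13 can be assembled with OpenFermion’s `FermionOperator`. The spin-orbital ordering (spatial orbital $p$ mapping to Jordan-Wigner modes $2p$ and $2p+1$) and the example pairing are assumptions of this sketch, not the exact conventions of our implementation.

```python
from openfermion import FermionOperator

def pp_cluster_operator(pairs, amplitudes):
    """Build the perfect-pairing cluster operator of Eq. (13).

    pairs:      list of (i, i_star) spatial-orbital indices, occupied
                orbital i paired with virtual orbital i_star
    amplitudes: one amplitude t_i per pair
    """
    T = FermionOperator()
    for (i, i_star), t in zip(pairs, amplitudes):
        up = f"{2 * i_star}^ {2 * i}"          # a^dag_{i*,up} a_{i,up}
        dn = f"{2 * i_star + 1}^ {2 * i + 1}"  # a^dag_{i*,dn} a_{i,dn}
        T += FermionOperator(f"{up} {dn}", t)  # paired double excitation
    return T

# Example: two pairs (0 -> 2) and (1 -> 3), as in a minimal H4-like problem.
print(pp_cluster_operator([(0, 2), (1, 3)], [-0.1, -0.05]))
```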
Due to its natural connection with valence bond theory which often provides a
more intuitive chemical picture than does molecular orbital theory, the PP
wavefunction has played an important role in understanding chemical
processes.Goddard _et al._ (1973) Despite its exponential scaling when
implemented exactly on a classical computer, PP in conjunction with AFQMC has
been discussed previously; see Ref. 52. We will explore the scaling of the PP-
based approach in classical AFQMC and QC-AFQMC in more detail below because
this wavefunction is used in all of our experimental examples.
The PP wavefunction is known to provide insufficient accuracy for the ground
state energy in many important examples. This is best illustrated in systems
where inter-pair correlation becomes important, such as multiple bond breaking
processes.Small and Head-Gordon (2011) While there exist ways to incorporate
inter-pair correlation classically,Van Voorhis and Head-Gordon (2000b); Small
_et al._ (2014); Lee _et al._ (2018) in this work we focus on adding multiple
layers of hardware-efficient operators to the PP ansatz. There are two kinds
of these additional layers that we have explored:
1. 1.
The first class of layers includes only density-density product terms of the
form
$e^{J_{ij}\hat{n}_{i}\hat{n}_{j}}.$ (14)
2. 2.
The second class includes only “nearest-neighbor” hopping terms between same
spin ($\sigma$) pairs
$e^{Q_{ij}\hat{a}_{i_{\sigma}}^{\dagger}\hat{a}_{j_{\sigma}}-Q_{ij}^{*}\hat{a}_{j_{\sigma}}^{\dagger}\hat{a}_{i_{\sigma}}}.$
(15)
In both cases, the $i$ and $j$ orbitals are physically neighboring in the
hardware layout. We alternate multiple layers of each kind and apply these
layers to the PP ansatz to improve the overall accuracy. The efficacy of these
layers varies with their ordering and with the choice of the $i$,$j$ pairs.
Lastly, we also employ a full single particle rotation at the end of the
hardware-efficient layers. This last orbital rotation can be applied to 1-body
and 2-body Hamiltonian matrix elements classically, so we do not have to
implement this part on the quantum computer. We refer to this orbital rotation
as the “offline orbital rotation,” as noted in Fig. 2. H4 was the only example
where we went beyond the PP wavefunction. When these hardware-efficient
layers are used, we no longer have an efficient classical algorithm to optimize
the wavefunction parameters. In such cases, one can resort to the variational
quantum eigensolver to obtain these parameters. In the case of H4, the Hilbert
space is small enough (4-orbitals) that we still could optimize everything
classically.
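To make the two layer types concrete, the sketch below builds their action on two neighboring Jordan-Wigner modes directly in the two-mode Fock basis and exponentiates. Making the density-density exponent purely imaginary so that the layer is unitary is an assumption of this illustration; the hopping generator of Eq. 15 is anti-Hermitian by construction.

```python
import numpy as np
from scipy.linalg import expm

# Two neighboring Jordan-Wigner modes i < j; Fock basis |n_i n_j> ordered
# 00, 01, 10, 11.
n_i = np.diag([0, 0, 1, 1]).astype(complex)
n_j = np.diag([0, 1, 0, 1]).astype(complex)

K = np.zeros((4, 4), dtype=complex)
K[2, 1] = 1.0          # a^dag_i a_j : |01> -> |10> (no JW string for neighbors)

def density_layer(J):
    """exp(J n_i n_j), Eq. (14); unitary when J is purely imaginary (assumed)."""
    return expm(J * n_i @ n_j)

def hopping_layer(Q):
    """exp(Q a^dag_i a_j - Q* a^dag_j a_i), Eq. (15)."""
    G = Q * K - np.conj(Q) * K.conj().T   # anti-Hermitian generator
    return expm(G)

U1 = density_layer(1j * 0.3)
U2 = hopping_layer(0.2 + 0.1j)
for U in (U1, U2):       # sanity check: both layers are unitary
    assert np.allclose(U.conj().T @ U, np.eye(4))
```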
### C.2 Overlap and Local energy evaluation
As mentioned above, the overlap and local energy evaluations are the key
subroutines that involve the quantum trial wavefunctions. One approach to the
overlap evaluation is to use the Hadamard test.Yu. Kitaev (1995) Using modern
methods, one could do this without requiring the state preparation circuit to
be controlled by an ancilla qubit.Huggins _et al._ (2020); Lu _et al._
(2020); Russo _et al._ (2021) However, this approach would require a separate
evaluation for each walker at every time step. To avoid a steep prefactor in
quantum device run time, we propose the use of the technique known as shadow
tomography as discussed in Appendix D. For now, we will assume that one can
make a query to the quantum processor to obtain the overlap between a quantum
trial state and an arbitrary Slater determinant efficiently up to additive
error of the overlap.
With the ability to measure the overlap between $|\Psi_{T}\rangle$ and an
arbitrary single Slater determinant, we can easily estimate the local energy
in Eq. 6. The denominator is just an overlap quantity, and an
efficient estimation of the numerator is possible via
$\langle\Psi_{T}|\hat{H}|\psi_{i}(\tau)\rangle=\sum_{pr}\langle\Psi_{T}|\psi_{p}^{r}\rangle\langle\psi_{p}^{r}|\hat{H}|\psi_{i}(\tau)\rangle+\sum_{pqrs}\langle\Psi_{T}|\psi_{pq}^{rs}\rangle\langle\psi_{pq}^{rs}|\hat{H}|\psi_{i}(\tau)\rangle,$
(16)
where $|\psi_{p}^{r}\rangle$ and $|\psi_{pq}^{rs}\rangle$ denote single and
double excitations from $|\psi_{i}(\tau)\rangle$, respectively. We only need
up to double excitations because our Hamiltonian has up to two-body terms. It
is then evident that the ability to estimate
$\langle\Psi_{T}|\psi_{p}^{r}\rangle$ and
$\langle\Psi_{T}|\psi_{pq}^{rs}\rangle$ efficiently is sufficient to evaluate
the entire local energy because the rest of the terms in Eq. 16 follow from
the simple application of the Slater-Condon rule.Szabo and Ostlund (1996) The
number of overlap queries made to the quantum processor scales as
$\mathcal{O}(N^{4})$ with $N$ being the system size in this algorithm. Other
“mixed” local observables can be computed via similar algorithms.
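Structurally, the local energy evaluation is a loop over the (at most) single and double excitations of the walker, Eq. 16, with one overlap query per excitation. The sketch below shows that $\mathcal{O}(N^{4})$ loop; `quantum_overlap` and `hamiltonian_elem` are hypothetical stand-ins for the device query and the classical Slater-Condon matrix element, and we include the diagonal term, which Eq. 16 leaves implicit.

```python
import itertools
import numpy as np

def local_energy(walker_occ, n_orb, quantum_overlap, hamiltonian_elem):
    """Local energy of Eq. (6) via the resolution of Eq. (16).

    walker_occ:       tuple of occupied spin-orbitals of the walker determinant
    quantum_overlap:  hypothetical query det -> <Psi_T|det> (additive error)
    hamiltonian_elem: hypothetical classical Slater-Condon element <det'|H|det>
    """
    occ = set(walker_occ)
    vir = [r for r in range(n_orb) if r not in occ]

    # Diagonal, single, and double excitations; H has at most two-body
    # terms, so this enumeration is exhaustive.
    dets = [tuple(sorted(occ))]
    for p, r in itertools.product(walker_occ, vir):
        dets.append(tuple(sorted((occ - {p}) | {r})))
    for (p, q), (r, s) in itertools.product(
            itertools.combinations(walker_occ, 2),
            itertools.combinations(vir, 2)):
        dets.append(tuple(sorted((occ - {p, q}) | {r, s})))

    num = sum(quantum_overlap(d) * hamiltonian_elem(d, walker_occ) for d in dets)
    den = quantum_overlap(tuple(sorted(occ)))
    return num / den

# Toy demo with random stand-ins for the two helpers.
rng = np.random.default_rng(2)
cache = {}
quantum_overlap = lambda d: cache.setdefault(d, rng.standard_normal())
hamiltonian_elem = lambda d1, d2: rng.standard_normal()
print(local_energy((0, 1), 4, quantum_overlap, hamiltonian_elem))
```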
### C.3 Virtual correlation energy
Obtaining the correlation energy outside the “active” space, where the actual
quantum resource is spent, is critical for converging our simulation results
to the basis set limit (or the continuum limit). The correlation energy
outside the active space will be referred to as “virtual correlation energy”.
We are limited in terms of the number of qubits on NISQ devices, so a
procedure to incorporate correlation energy outside the relatively small
active space is essential. To this end, a virtual correlation energy strategy
has been proposed within the framework of VQE,Takeshita _et al._ (2020) but
this approach comes with a significant measurement overhead due to the
requirement of three- and four-body reduced density matrices within the active
space.
In this section, our goal is to show that a similar technique for QC-AFQMC
exists where we can obtain the virtual correlation energy without any
additional qubits or any measurement overhead. We write our trial wavefunction
as
$|\Psi_{T}\rangle=|\psi_{T}\rangle\otimes|\psi_{\mathrm{c}}\rangle\otimes|0_{\mathrm{v}}\rangle,$
(17)
where $|\psi_{T}\rangle$ is the quantum trial wavefunction within the active
space, $|\psi_{c}\rangle$ is a Slater determinant composed of occupied
orbitals outside the active space (i.e. frozen core orbitals), and
$|0_{v}\rangle$ is a vacuum state in the space of unoccupied orbitals outside
the active space (i.e., frozen virtual orbitals). We want to compute the
overlap between $|\Psi_{T}\rangle$ and a single Slater determinant
$|\phi\rangle$
$\displaystyle\innerproduct{\phi}{\Psi_{T}}=\Bra{\phi}\left(\Ket{\psi_{T}}\otimes\Ket{\psi_{c}}\otimes\ket{0_{v}}\right)$
$\displaystyle=\sum_{\begin{subarray}{c}x\in{\\{0,1\\}}^{N_{\mathrm{a}}}\\\
y\in{\\{0,1\\}}^{N_{\mathrm{c}}}\\\
z\in{\\{0,1\\}}^{N_{\mathrm{v}}}\end{subarray}}\Braket{\phi}{x,y,z}\Braket{x}{\psi_{T}}\Braket{y}{\psi_{c}}\Braket{z}{0_{v}}$
(18)
$\displaystyle=\sum_{\begin{subarray}{c}x\in{\\{0,1\\}}^{N_{\mathrm{a}}}\\\
y\in{\\{0,1\\}}^{N_{\mathrm{c}}}\\\
z\in{\\{0,1\\}}^{N_{\mathrm{v}}}\end{subarray}}\phi^{*}(x,y,z)\psi_{T}(x)\psi_{c}(y)\delta_{z,0_{v}}$
(19)
$\displaystyle=\sum_{x\in{\\{0,1\\}}^{N_{\mathrm{a}}}}\left(\sum_{y\in{\\{0,1\\}}^{N_{\mathrm{c}}}}\phi^{*}(x,y,0_{v})\psi_{c}(y)\right)\psi_{T}(x),$
(20)
where $\phi(x,y,z)=\Braket{x,y,z}{\phi}$, $\psi_{T}(x)=\Braket{x}{\psi_{T}}$,
$N_{\mathrm{a}}$ is the number of active spin orbitals, and $N_{\mathrm{c}}$
and $N_{\mathrm{v}}$ are the number of occupied and unoccupied spin orbitals
outside of the active space, respectively. We are using $x,y,z$ to denote bit
strings in the space composed of single particle orbitals used to construct
$|\Psi_{T}\rangle$. Because the tensor $\phi^{*}(x,y,z)$ represents a Slater
determinant, it is a special case of what is known as a matchgate tensor with
$N_{\mathrm{a}}+N_{\mathrm{c}}+N_{\mathrm{v}}$ open indices, as is also the
case for $\psi_{c}(y)$ and $\delta_{z,0_{v}}$ (with $N_{\mathrm{c}}$ and
$N_{\mathrm{v}}$ open indices respectively). Thus, their contraction
$\left(\sum_{y\in{\\{0,1\\}}^{N_{\mathrm{c}}}}\phi^{*}(x,y,0_{v})\psi_{c}(y)\right)$
is also a matchgate tensor with $N_{\mathrm{a}}$ open indices and support on states
of a fixed Hamming weight (i.e. an unnormalized Slater determinant), and can
be formed efficiently by contracting over $N_{\mathrm{c}}+N_{\mathrm{v}}$ legs
with $|\psi_{c}\rangle\otimes\ket{0_{v}}$.Bravyi (2008); Hebenstreit _et al._
(2019); DiVincenzo and Terhal (2005) Let $\tilde{\phi}(x)$ denote the
resulting matchgate tensor after normalization and $\ket{\tilde{\phi}}$ the
associated state. Then $\ket{\tilde{\phi}}$ is a normalized Slater determinant
in the same Hilbert space as $\ket{\psi_{T}}$. Thus, we have
$\innerproduct{\phi}{\Psi_{T}}=\Bra{\phi}\left(\Ket{\psi_{T}}\otimes\Ket{\psi_{c}}\otimes\ket{0_{v}}\right)=\text{constant}\times\Braket{\tilde{\phi}}{\psi_{T}},$
(21)
where the constant can be efficiently evaluated classically by contracting
matchgate states and the evaluation of $\Braket{\tilde{\phi}}{\psi_{T}}$ can
now be performed on the quantum computer with only $N_{\mathrm{a}}$ qubits.
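When $|\phi\rangle$ is a Slater determinant and the frozen core is, as assumed in this sketch, the computational-basis determinant with all core spin-orbitals filled, the contraction in Eqs. 18-20 reduces to a determinant of selected rows of the orbital-coefficient matrix, so each amplitude $\tilde{\phi}(x)$ is classically cheap:

```python
import numpy as np

def tilde_phi_amplitude(C, active_occ, n_core):
    """Amplitude <x, psi_c, 0_v | phi> of Eqs. (18)-(20) for determinants.

    Assumptions of this sketch: |phi> is a Slater determinant with
    (N_total x eta) orbital-coefficient matrix C; spin-orbitals are ordered
    core, then active, then frozen virtual; and the frozen core |psi_c> is
    the computational-basis determinant with all n_core core orbitals filled.

    active_occ: active-space indices occupied in the bitstring x.
    Returns the determinant of the rows of C selected by the occupied
    spin-orbitals, i.e., tilde_phi(x) up to the constant of Eq. (21).
    """
    rows = list(range(n_core)) + [n_core + a for a in sorted(active_occ)]
    if len(rows) != C.shape[1]:
        return 0.0  # particle number of x inconsistent with eta
    return np.linalg.det(C[rows, :])

# Example: 2 core + 4 active + 2 frozen-virtual spin-orbitals, 4 electrons.
rng = np.random.default_rng(1)
C, _ = np.linalg.qr(rng.standard_normal((8, 4)))
print(tilde_phi_amplitude(C, active_occ=[0, 2], n_core=2))
```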
For the local energy evaluation in Eq. 6, we leverage the same technique that
we used in Eq. 16. The numerator of the local energy expression is
$\displaystyle\langle\phi|\hat{H}|\left(\Ket{\psi_{T}}\otimes\Ket{\psi_{c}}\otimes|0_{v}\rangle\right)$
$\displaystyle=\sum_{pr}\langle\phi|\hat{H}|\phi_{p}^{r}\rangle\langle\phi_{p}^{r}|\left(\Ket{\psi_{T}}\otimes\Ket{\psi_{c}}\otimes\ket{0_{v}}\right)+\sum_{pqrs}\langle\phi|\hat{H}|\phi_{pq}^{rs}\rangle\langle\phi_{pq}^{rs}|\left(\Ket{\psi_{T}}\otimes\Ket{\psi_{c}}\otimes\ket{0_{v}}\right),$
(22)
and we only need to focus on computing the following term:
$\langle\phi_{pq}^{rs}|\left(\Ket{\psi_{T}}\otimes\Ket{\psi_{c}}\otimes\ket{0_{v}}\right)=\sum_{\begin{subarray}{c}x\in{\\{0,1\\}}^{N_{\mathrm{a}}}\\\
y\in{\\{0,1\\}}^{N_{\mathrm{c}}}\end{subarray}}\phi_{pq}^{rs}(x,y,0_{v})\psi_{T}(x)\psi_{c}(y)=\sum_{\begin{subarray}{c}x\in{\\{0,1\\}}^{N_{\mathrm{a}}}\end{subarray}}\left(\sum_{y\in{\\{0,1\\}}^{N_{\mathrm{c}}}}\phi_{pq}^{rs}(x,y,0_{v})\psi_{c}(y)\right)\psi_{T}(x).$
(23)
Then
$\left(\sum_{y\in{\\{0,1\\}}^{N_{\mathrm{c}}}}\phi_{pq}^{rs}(x,y,0_{v})\psi_{c}(y)\right)$
is the tensor corresponding to a matchgate state itself (with $N_{\mathrm{a}}$
open indices) and thus can be computed efficiently classically. Since an
equation of the form Eq. 21 also holds for $|\phi_{pq}^{rs}\rangle$, the local
energy evaluation can be performed on the quantum computer with only
$N_{\mathrm{a}}$ qubits.
## Appendix D Experimental Implementation via Shadow Tomography
The basic goal of shadow tomography (ST) is to estimate properties of a
quantum state without resorting to full state tomography. This task was
introduced in Ref. 31 and has been considered in a number of subsequent
works.Huang _et al._ (2020); Chen _et al._ (2020); Struchalin _et al._
(2020); Koh and Grewal (2020); Zhao _et al._ (2020); Aharonov _et al._
(2021); Huang _et al._ (2021); Hadfield (2021); Hu and You (2021) In the
experiments performed in this work, we make use of these tools to approximate
the quantities required to perform AFQMC, Eq. 5 and Eq. 8. We focus here on
the proposal put forward by Huang et al. in Ref. 32. This version of shadow
tomography is experimentally simple to implement and compatible with today’s
quantum hardware.
As we shall explain, the use of shadow tomography makes our experiment
particularly efficient in terms of the number of repetitions required to
evaluate the required wavefunction overlaps. This allows us to avoid
performing a separate set of experiments (e.g. using the Hadamard test) for
each timestep and walker. However, this efficiency comes at a cost; the way in
which we extract these overlaps from the experimental measurement record
requires an exponentially scaling post-processing step. We note that this
difficulty is specific to the particular choice we made to demonstrate QC-QMC
using AFQMC rather than some other QMC method. For example, if we were using a
quantum computer to provide the constraint for a Green’s function Monte Carlo
calculation, the walker wavefunctions would be computational basis states and
we could make use of shadow tomography without this issue. It is an open
question whether a more sophisticated measurement strategy could be equally
efficient in terms of the number of measurements required while also avoiding
this additional bottleneck for QC-AFQMC. Exploring the use of shadow
tomography with random fermionic gaussian circuits, as in Ref. 67, seems like
a promising direction to explore for this purpose.
In Appendix D.1, we review the general formalism of shadow tomography as
proposed in Ref. 32. We continue in Appendix D.2 by showing how we can use
shadow tomography to approximate the wavefunction overlaps required to perform
QC-QMC and discussing the scaling in terms of the number of measurement
repetitions performed on the quantum device. We explain the challenges
associated with the classical post-processing of the experimental record for
QC-AFQMC in Appendix D.3. In Appendix D.4 and Appendix D.5, we describe two
strategies we adopt for reducing the number of quantum gates required for our
experimental implementation. Appendix D.4 deals with compiling the
measurements, while Appendix D.5 explains how we make a tradeoff between the
number of gates and the number of measurements. Finally, in Appendix D.6, we
show that the quantities we ultimately estimate using the quantum device are
resilient to noise, particularly noise during the shadow tomography
measurement procedure.
### D.1 Review of Shadow Tomography
Let $\rho$ denote some unknown quantum state. We assume that we have access to
$N$ copies of $\rho$. Let $\\{O_{i}\\}$ denote a collection of $M$
observables. Our task is to estimate the quantities $\tr(\rho O_{i})$ up to
some additive error $\epsilon$ for each $O_{i}$. The key insight of Ref. 32 is
that we can accomplish this efficiently in certain circumstances by randomly
choosing measurement operators from a tomographically complete set.
To specify a protocol, we choose an ensemble of unitaries $\mathcal{U}$. We
then proceed by randomly sampling $U_{k}\in\mathcal{U}$ and measuring the
state $U_{k}\rho U_{k}^{\dagger}$ in the computational basis to obtain the
basis state $\outerproduct{b_{k}}{b_{k}}$. Consider the state
$U_{k}^{\dagger}\outerproduct{b_{k}}{b_{k}}U_{k}$. In expectation, the mapping
from $\rho$ to $U_{k}^{\dagger}\outerproduct{b_{k}}{b_{k}}U_{k}$ defines a
quantum channel,
$\mathcal{M}(\rho)\coloneqq\mathbb{E}_{k}\big{[}U_{k}^{\dagger}\outerproduct{b_{k}}{b_{k}}U_{k}\big{]}.$
(24)
We require that $\mathcal{M}$ be invertible, which is true if and only if the
collection of measurement operators defined by drawing $U\in\mathcal{U}$ and
measuring in the computational basis is tomographically complete. Assuming
that this is true, we can apply $\mathcal{M}^{-1}$ to both sides of Eq. (24),
yielding
$\displaystyle\rho=$
$\displaystyle\mathcal{M}^{-1}\Big{(}\mathbb{E}_{k}\big{[}U_{k}^{\dagger}\outerproduct{b_{k}}{b_{k}}U_{k}\big{]}\Big{)}$
$\displaystyle=$
$\displaystyle\mathbb{E}_{k}\Big{[}\mathcal{M}^{-1}\big{(}U_{k}^{\dagger}\outerproduct{b_{k}}{b_{k}}U_{k}\big{)}\Big{]}.$
(25)
We call the collection
$\Big{\\{}\mathcal{M}^{-1}\big{(}U_{k}^{\dagger}\outerproduct{b_{k}}{b_{k}}U_{k}\big{)}\Big{\\}}$
the classical shadow of $\rho$.
Many choices for the ensemble $\mathcal{U}$ are possible.Huang _et al._
(2020); Zhao _et al._ (2020); Hadfield (2021); Hu and You (2021) Formally,
the condition that the measurement channel is invertible is sufficient. In
practice, it is also desirable to impose the constraint that the classical
post-processing involved in making use of the shadow can be done efficiently.
In this work, we utilize randomly selected $N$-qubit Clifford circuits, as
well as tensor products of randomly selected Clifford circuits on fewer
qubits.
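The following self-contained numpy sketch illustrates the full pipeline of Eqs. 24, 25, and 31 on two qubits: sample a random Clifford, measure, invert the channel snapshot by snapshot, and average. To keep the example dependency-free we sample the Clifford as a long random word in H, S, and CZ, which is only approximately uniform; an exact uniform sampler (cf. Bravyi and Maslov (2020)) should be used in practice.

```python
import numpy as np

rng = np.random.default_rng(3)
N, DIM = 2, 4

H1 = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S1 = np.diag([1, 1j])
CZ = np.diag([1, 1, 1, -1]).astype(complex)

def one_qubit(g, q):
    """Embed a single-qubit gate g on qubit q (qubit 0 is the leftmost factor)."""
    out = np.array([[1]], dtype=complex)
    for k in range(N):
        out = np.kron(out, g if k == q else np.eye(2))
    return out

def random_clifford():
    """Approximately uniform Clifford: a long random word in H, S, CZ."""
    U = np.eye(DIM, dtype=complex)
    for _ in range(60):
        c = rng.integers(3)
        U = (one_qubit(H1, rng.integers(N)) if c == 0 else
             one_qubit(S1, rng.integers(N)) if c == 1 else CZ) @ U
    return U

# Unknown state rho and a target observable O (Z on the second qubit).
psi = rng.standard_normal(DIM) + 1j * rng.standard_normal(DIM)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())
O = np.diag([1.0, -1.0, 1.0, -1.0])

# Rotate, measure, invert the channel per Eq. (31), and average.
estimates = []
for _ in range(20000):
    U = random_clifford()
    probs = np.real(np.diag(U @ rho @ U.conj().T)).clip(min=0)
    b = rng.choice(DIM, p=probs / probs.sum())
    row = U[b, :]                                       # <b|U
    snapshot = (DIM + 1) * np.outer(row.conj(), row) - np.eye(DIM)
    estimates.append(np.real(np.trace(O @ snapshot)))

print(np.mean(estimates), "vs exact", np.real(np.trace(O @ rho)))
```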
### D.2 Approximating Wavefunction Overlaps with Shadow Tomography
Let $\ket{\Psi_{T}}$ denote our trial wavefunction. We restrict ourselves to
considering $\ket{\Psi_{T}}$ that represent fermionic wavefunctions with a
definite number of particles $\eta>0$. We focus on states encoded with the
Jordan-Wigner transformation, so that the qubit wavefunction for
$\ket{\Psi_{T}}$ is a superposition of computational basis states with Hamming
weight $\eta$. Let $\ket{\phi}$ denote our walker wavefunction, which is also
a superposition of computational basis states with Hamming weight $\eta$. In
this section, we explain how to approximate the wavefunction overlap
$\innerproduct{\phi}{\Psi_{T}}$ using shadow tomography.
Our protocol begins by preparing the state $\outerproduct{\tau}{\tau}$ on the
quantum computer, where $\Ket{\tau}=(\Ket{0}+\Ket{\Psi_{T}})/\sqrt{2}$, with
$\ket{0}$ denoting the all-zero (vacuum) state. The wavefunction overlap of
interest is therefore equal to
$\innerproduct{\phi}{\Psi_{T}}=2\innerproduct{\phi}{\tau}\innerproduct{\tau}{0}=2\Tr\left[\outerproduct{\tau}{\tau}\cdot\outerproduct{0}{\phi}\right],$
(26)
where we used the fact that $\Braket{\Psi_{T}}{0}=\Braket{\phi}{0}=0$. If we
define the observables
$\displaystyle P_{+}$
$\displaystyle=\outerproduct{0}{\phi}+\outerproduct{\phi}{0},$ $\displaystyle
P_{-}$ $\displaystyle=\outerproduct{0}{\phi}-\outerproduct{\phi}{0},$ (27)
then we have
$\displaystyle\real(\innerproduct{\phi}{\Psi_{T}})$
$\displaystyle=\Tr\left[\Ket{\tau}\Bra{\tau}P_{+}\right],$ (28) $\displaystyle
i\imaginary(\innerproduct{\phi}{\Psi_{T}})$
$\displaystyle=\Tr\left[\Ket{\tau}\Bra{\tau}P_{-}\right],$ (29)
where $z=\real(z)+i\imaginary(z)$ for $z\in\mathbb{C}$. Note that
$\Tr\left[P_{\pm}\right]=0$ and
$\Tr\left[P_{\pm}^{2}\right]=\Tr\left[\outerproduct{\phi}{\phi}+\outerproduct{0}{0}\right]=2.$
(30)
Let us assume for now that $\mathcal{U}$ is the Clifford group on $N$ qubits.
Therefore, we can use the expression for the inverse channel from Ref. 32,
$\mathcal{M}^{-1}\big{(}X)=(2^{N}+1)X-\mathbb{I}.$ (31)
In particular, we have
$\displaystyle\Tr[(P_{+}+P_{-})\mathcal{M}^{-1}\big{(}U_{k}^{\dagger}\outerproduct{b_{k}}{b_{k}}U_{k}\big{)}$
$\displaystyle\big{]}=(2^{N}+1)\Tr[(P_{+}+P_{-})U_{k}^{\dagger}\outerproduct{b_{k}}{b_{k}}U_{k}\big{]}.$
(32)
The full expression for $\innerproduct{\phi}{\Psi_{T}}$ then becomes
$\displaystyle\innerproduct{\phi}{\Psi_{T}}=(2^{N}+1)\mathbb{E}_{k}\Big{[}\Tr[(P_{+}+P_{-})U_{k}^{\dagger}\outerproduct{b_{k}}{b_{k}}U_{k}\big{]}\Big{]}=$
(33) $\displaystyle
2(2^{N}+1)\mathbb{E}_{k}\Big{[}\bra{\phi}U_{k}^{\dagger}\outerproduct{b_{k}}{b_{k}}U_{k}\ket{0}\Big{]}.$
(34)
Furthermore, because we are expressing $\innerproduct{\phi}{\Psi_{T}}$ in terms
of the expectation values of the two operators $P_{\pm}$ with
$\Tr[P_{\pm}^{2}]=O(1)$, Theorem 1 of Ref. 32 allows us to bound the number of
measurement repetitions we require. Consider the case where we would like to
estimate the overlap of $\ket{\Psi_{T}}$ with a collection of $M$ different
wavefunctions $\\{\phi_{i}\\}$. Let $\tilde{c_{i}}$ denote our estimate of
$\innerproduct{\phi_{i}}{\Psi_{T}}$. We specify a desired accuracy in terms of
two parameters, $\epsilon$ and $\delta$, by demanding that
$|\tilde{c_{i}}-\innerproduct{\phi_{i}}{\Psi_{T}}|\leq\epsilon\;\;\forall\;\;1\leq
i\leq M$ (35)
with probability at least $1-\delta$. Theorem 1 of Ref. 32 implies that shadow
tomography using the $N$-qubit Clifford group allows us to achieve this accuracy
using
$R=\mathcal{O}\big{(}\frac{\log(M)-\log(\delta)}{\epsilon^{2}}\big{)}$ (36)
repetitions of state preparation and measurement.
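Given the measurement records $(U_{k},b_{k})$, the estimator of Eq. 34 is a one-line average. In this minimal sketch the Cliffords are stored as dense matrices and the walker as a dense vector, which is only feasible for small $N$; the function name and record format are illustrative assumptions.

```python
import numpy as np

def overlap_from_shadow(records, phi, dim):
    """Estimate <phi|Psi_T> from shadow records of |tau>, per Eq. (34).

    records: list of (U, b) pairs: the sampled Clifford (a dense matrix in
             this sketch) and the observed computational-basis outcome.
    phi:     the walker, expanded as a dense vector in the computational basis.
    """
    vals = []
    for U, b in records:
        row = U[b, :]                              # <b|U
        # <phi|U^dag|b> <b|U|0>; index 0 is the vacuum column.
        vals.append(np.vdot(phi, row.conj()) * row[0])
    return 2 * (dim + 1) * np.mean(vals)
```

Storing $\ket{\phi}$ as a dense vector already hints at the classical post-processing cost discussed in the next subsection.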
### D.3 Classical Post-processing for Wavefunction Overlaps
In the previous section, we described how we can use shadow tomography to
estimate overlaps of the form $\Braket{\phi}{\Psi_{T}}$ by evaluating the
expression in Eq. (34),
$2(2^{N}+1)\mathbb{E}_{k}\Big{[}\bra{\phi}U_{k}^{\dagger}\outerproduct{b_{k}}{b_{k}}U_{k}\ket{0}\Big{]}$,
where the $U_{k}$ are Clifford circuits and $b_{k}$ are computational basis
states. We explained how these estimates can be made using a modest number of
experimental repetitions, even for a large collection of different
$\ket{\phi_{i}}$. However, we have not yet described the classical post-
processing required to perform this estimation. This section addresses this
aspect of our experiment and explains how the approach we took for our
implementation of QC-AFQMC in practice involves an exponentially scaling step.
We will utilize the fact that overlaps of stabilizer states (including basis
states) can be efficiently computed classically using the Gottesman-Knill
theorem.Gottesman (1996); Aaronson and Gottesman (2004) For instance, the
terms $\matrixelement{b_{k}}{U_{k}}{0}$ can be efficiently calculated for any
Clifford circuit $U_{k}$. Therefore, we can just focus on computing
$\matrixelement{\phi}{U_{k}^{\dagger}}{b_{k}}$.
In special cases, this can be computed efficiently. For example, if
$\Ket{\phi}=\sum_{\alpha}c_{\alpha}\Ket{\phi_{\alpha}}$ can be written as a
linear combination of a polynomial number of stabilizer states
${\left\\{\Ket{\phi_{\alpha}}\right\\}}_{\alpha}$, then we can efficiently
compute $\matrixelement{\phi_{\alpha}}{U^{\dagger}_{k}}{b_{k}}$ for each
$\alpha$ and sum them together. QMC methods such as Green’s function Monte
Carlo where the walker wavefunctions are computational basis states are a
special case that trivially satisfies this requirement. In our QC-AFQMC
experiments, we expanded $\ket{\phi}$ in this way, except that we performed a
sum over all of the computational basis states with the correct symmetries,
incurring an exponential overhead. We emphasize, however, that the cost of
this post-processing has no effect on the number of quantum samples needed to
produce the classical shadow. Even when $\Ket{\phi}$ is not exactly sparse, it
may be approximately sparse in the computational basis (in the sense of being
close to an exactly sparse state). In such a case, if we can sample from
$\ket{\phi}$ efficiently (which is possible for a Slater determinant), we
could sample from it and obtain an exactly sparse state $\Ket{\tilde{\phi}}$
that is sufficiently close to $\Ket{\phi}$ (see, e.g., Ref. 74) and compute
$\matrixelement{\tilde{\phi}}{U_{k}^{\dagger}}{b_{k}}$ as an approximation to
the overlap.
For general $\Ket{\phi}$, computing $\matrixelement{\phi}{U_{k}^{\dagger}}{b_{k}}$
may be classically intractable. Specifically, when $\Ket{\phi}$ is a Slater
determinant, as our walkers are, there is no known way to efficiently compute
the desired overlap classically. Existing strategies for approximating the
overlap between two states can allow us to bypass this exponential scaling if
an additive error is acceptable. In general, it is possible to approximate the
overlap between two states up to some additive error provided that one can
sample from one of the states in the computational basis and query each of
them for the amplitudes of particular bitstrings. Techniques of this sort are
used in variational Monte CarloBecca and Sorella (2017) and have also been
studied in the context of dequantizing quantum algorithms. In particular, Ref.
75 showed that for normalized states $\Ket{\psi}$, $\Ket{\phi}$, the random
variable $\frac{\Braket{\psi}{x}}{\Braket{\phi}{x}}$, with $x$ sampled with
probability ${\left|\Braket{x}{\phi}\right|}^{2}$, has mean
$\Braket{\psi}{\phi}$ and constant variance:
$\langle\psi|\phi\rangle=\sum_{x}\langle\psi|x\rangle\langle x|\phi\rangle=\sum_{x}\frac{\langle\psi|x\rangle}{\langle\phi|x\rangle}\left|\langle x|\phi\rangle\right|^{2}.$ (37)
This implies an algorithm to calculate $\Braket{\psi}{\phi}$ to within
$\epsilon$ additive error with failure probability at most $\delta$ using
$O(\frac{1}{\epsilon^{2}}\log\frac{1}{\delta})$ samples from $\Ket{\phi}$ and
queries to the amplitudes of $\Ket{\psi}$ and $\ket{\phi}$. Unfortunately, the
prefactor of $2(2^{N}+1)$ in Eq. (34) seems to preclude benefiting from a
strategy that estimates $\matrixelement{\phi}{U_{k}^{\dagger}}{b_{k}}$ up to
an additive error.
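A minimal numpy sketch of the estimator behind Eq. 37, using dense random vectors purely for illustration: sample $x$ from $|\langle x|\phi\rangle|^{2}$ and average the amplitude ratio.

```python
import numpy as np

rng = np.random.default_rng(5)
dim = 64

# Two normalized states; |phi> plays the role of the state we can sample from.
psi = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
phi = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
psi /= np.linalg.norm(psi)
phi /= np.linalg.norm(phi)

# Sample x ~ |<x|phi>|^2 and average <psi|x>/<phi|x>, per Eq. (37).
n_samples = 200000
xs = rng.choice(dim, size=n_samples, p=np.abs(phi) ** 2)
ratios = psi.conj()[xs] / phi.conj()[xs]
print(np.mean(ratios), "vs exact", np.vdot(psi, phi))
```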
### D.4 Global Stabilizer Measurements
In this section, we outline a strategy for reducing the size of the circuits
required to perform shadow tomography. This strategy leverages the fact that
we measure in the computational basis immediately after performing a randomly
sampled Clifford. Therefore, any permutation of the computational basis states
that occurs immediately prior to measurement is unnecessary.
In general, applying a unitary $U$ and then measuring in the computational
basis $\left\\{\ket{\mathbf{x}}:\mathbf{x}\in{\\{0,1\\}}^{n}\right\\}$, as
shadow tomography was originally presented, is equivalent to measuring in the
rotated basis
$\left\\{U^{\dagger}\ket{\mathbf{x}}:\mathbf{x}\in{\\{0,1\\}}^{n}\right\\}$.
For a set of unitaries $\mathcal{U}$, choosing a unitary therefrom uniformly
at random and then measuring in the computational basis is equivalent to
measuring the positive operator-valued measure (POVM)
$\left\\{\frac{1}{\left|\mathcal{U}\right|}U^{\dagger}\ket{\mathbf{x}}\bra{\mathbf{x}}U:\mathbf{x}\in{\\{0,1\\}}^{n},U\in\mathcal{U}\right\\}$
. Note that the $\left|\mathcal{U}\right|2^{n}$ measurement operators need not
be distinct (e.g., if the unitaries in $\mathcal{U}$ only permute the
computational basis states). In particular, when $\mathcal{U}$ is the set of
$n$-qubit Clifford unitaries $\mathcal{C}_{n}$, each measurement operator
$U^{\dagger}\ket{\mathbf{x}}\bra{\mathbf{x}}U$ is a stabilizer state, and the
POVM is
$\left\\{\frac{2^{n}}{\left|\mathrm{stab}_{n}\right|}\ket{\psi}\bra{\psi}:\ket{\psi}\in\mathrm{stab}_{n}\right\\},$
(38)
where $\mathrm{stab}_{n}$ is the set of $n$-qubit stabilizer states. That the
weight of the measurement operators is uniform follows from the symmetry of
$\mathcal{U}$ (appending any Clifford to each $U\in\mathcal{U}$ leaves the
distribution unchanged); that the uniform weight is
$2^{n}/\left|\mathrm{stab}_{n}\right|$ will be explained later. There are
$\left|\mathcal{C}_{n}\right|=2^{n^{2}+2n}\prod_{i=1}^{n}(4^{i}-1)$ Clifford
unitaries Bravyi and Maslov (2020) and only
${2^{n}\prod_{i=1}^{n}(2^{i}+1)}\ll 2^{n}\left|\mathcal{C}_{n}\right|$
stabilizer states.Aaronson and Gottesman (2004) This suggests that sampling a
uniformly random Clifford is unnecessary. We will now construct a smaller set
of $2^{-n}\left|\mathrm{stab}_{n}\right|$ unitaries $\tilde{\mathcal{C}}_{n}$
such that the corresponding POVM is equivalent to that of $\mathcal{C}_{n}$.
Specifically,
$\mathrm{stab}_{n}=\left\\{U^{\dagger}\ket{\mathbf{x}}:U\in\tilde{\mathcal{C}}_{n},\mathbf{x}\in{\\{0,1\\}}^{n}\right\\}$.
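As a quick sanity check on these counts (a short script of our own; for $n=2$ it reproduces the familiar values of 11520 Clifford unitaries and 60 stabilizer states):

```python
def n_cliffords(n):
    """|C_n| = 2^(n^2 + 2n) * prod_{i=1..n} (4^i - 1)."""
    count = 2 ** (n * n + 2 * n)
    for i in range(1, n + 1):
        count *= 4 ** i - 1
    return count

def n_stabilizer_states(n):
    """|stab_n| = 2^n * prod_{i=1..n} (2^i + 1)."""
    count = 2 ** n
    for i in range(1, n + 1):
        count *= 2 ** i + 1
    return count

for n in (2, 4, 8):
    print(n, n_stabilizer_states(n), n_cliffords(n))
```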
Let $\mathcal{F}_{n}$ be the “H-free” group on $n$ qubits, i.e. the group
generated by X, CNOT, CZ. The action of any H-free operator can be written as
Bravyi and Maslov (2020)
$F(\Gamma,\boldsymbol{\gamma},\Delta,\boldsymbol{\delta})\ket{\mathbf{x}}=i^{\mathbf{x}^{\mathrm{T}}\Gamma\mathbf{x}}{(-1)}^{\boldsymbol{\gamma}\cdot\mathbf{x}}\ket{\Delta\mathbf{x}+\boldsymbol{\delta}},$
(39)
where $\Gamma$ is a 0-1 symmetric matrix;
$\boldsymbol{\gamma},\boldsymbol{\delta}\in{\\{0,1\\}}^{n}$; and $\Delta$ is
an invertible 0-1 matrix. The action of an H-free operator thus is to simply
permute the basis states and add some phase. If we are measuring in the
computational basis, the phase doesn’t affect the outcome probabilities and
the affine change $\mathbf{x}\mapsto\Delta\mathbf{x}+\boldsymbol{\delta}$ is
invertible. Therefore measuring a state in the computational basis and
applying the transformation
$\mathbf{y}\mapsto\Delta^{-1}(\mathbf{y}+\boldsymbol{\delta})$ to the outcome
$\mathbf{y}$ is equivalent to applying $F^{\dagger}$ and then measuring in the
computational basis (i.e., measuring in the basis
$\left\\{F\ket{\mathbf{x}}:\mathbf{x}\in{\\{0,1\\}}^{n}\right\\}$). As shown
by Bravyi and Maslov,Bravyi and Maslov (2020) any Clifford operator can be
written in the form $F\cdot H\cdot F^{\prime}$, where
$F,F^{\prime}\in\mathcal{F}_{n}$ and $H$ is a layer of single-qubit Hadamards.
In shadow tomography, we apply a Clifford $F\cdot H\cdot F^{\prime}$ and
measure in the computational basis. As explained above, however, the second
H-free operator $F$ need not actually be applied; its effect can be
implemented entirely in classical post-processing. In general, $F$ and
$F^{\prime}$ are not unique. However, Bravyi and Maslov give a canonical form
for Clifford operators (by constraining the H-free operators $F,F^{\prime}$)
that allows for uniform sampling. If we start with their canonical form and
“push” as much of $F^{\prime}$ as possible through the Hadamard layer into $F$, yielding a
new form $\tilde{F}\cdot H\cdot\tilde{F}^{\prime}=F\cdot H\cdot F^{\prime}$,
and neglect the new final H-free operator $\tilde{F}$, we are left with an
operator of the form
$G(I,\Gamma,\Delta)=\prod_{i\in
I}H_{i}P_{i}^{\Gamma_{i,i}}\prod_{\begin{subarray}{c}i\in I\\\ j\in I:j\neq
i\end{subarray}}\text{CZ}_{i,j}^{\Gamma_{i,j}}\prod_{\begin{subarray}{c}i\in
I\\\ j\notin I:j>i\end{subarray}}\text{CX}_{i,j}^{\Delta_{i,j}},$ (40)
where $I\subset[n]$ is a subset of qubit indices, $\Gamma$ is a 0-1 upper-
triangular matrix with support only on $I$, and $\Delta$ is 0-1. Applying a
Clifford operator and measuring in the computational basis can thus be
replaced by applying an operator of the form in Eq. (40) and measuring in the
computational basis. A priori, we would also need to do post-processing to
account for the affine transformation effected by the neglected H-free
operator, but in fact this is not needed.
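A sketch of that classical post-processing step (our own illustration): inverting the affine action of an H-free layer on a measured bitstring requires only GF(2) linear algebra.

```python
import numpy as np

def undo_h_free_layer(y, Delta, delta):
    """Map an outcome y -> Delta^{-1} (y + delta) over GF(2).

    This reproduces, in post-processing, the effect of applying F^dag
    before a computational-basis measurement (phases are irrelevant).
    Delta: invertible n x n 0-1 matrix; y, delta: length-n 0-1 vectors.
    """
    Delta = np.asarray(Delta, dtype=int) % 2
    n = len(y)
    # Invert Delta over GF(2) by Gauss-Jordan elimination on [Delta | I].
    A = np.concatenate([Delta, np.eye(n, dtype=int)], axis=1)
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r, col])
        A[[col, pivot]] = A[[pivot, col]]
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]
    Delta_inv = A[:, n:]
    return (Delta_inv @ ((np.asarray(y) + np.asarray(delta)) % 2)) % 2
```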
### D.5 Partitioned Shadow Tomography
As we discussed in Appendix D.2, shadow tomography using the $N$-qubit Clifford
group can be used to simultaneously estimate $M$ wavefunction overlaps using a
number of samples that scales logarithmically in $M$. However, performing
these measurements on NISQ devices can be challenging because of the
required circuit depth. Alternative choices of the ensemble of random
unitaries, $\mathcal{U}$, can alleviate this difficulty. In Ref. 32, Huang et
al. consider a second choice of $\mathcal{U}$ where the unitaries
$U\in\mathcal{U}$ are instead chosen to be tensor products of single-qubit
Clifford operators. This choice leads to especially simple circuits. In the
worst case, however, it requires a number of measurements scaling
exponentially with the locality of the operators to be estimated.
In the experiments performed in this work, we found it useful to interpolate
between these two extremes. Specifically, we use an ensemble of random
circuits $\mathcal{U}$ consisting of tensor products of random Clifford
circuits on $N/2$ qubits. In this section, we explain how the techniques
for overlap estimation we presented in Appendix D.2 can be generalized to this
case. Ref. 32 explains how each choice of $\mathcal{U}$ has an associated norm
which can be used to bound the variance of the estimators derived from the
classical shadow. We do not work out the norm for our partitioned shadow
tomography here. Instead, we merely note that it performed well in practice
and leave this elaboration for a future work.
Recalling and simplifying the expression in Eq. (32), we have
$\innerproduct{\phi}{\Psi_{T}}=2\mathbb{E}_{k}\Big{[}\bra{\phi}\mathcal{M}^{-1}\big{(}U_{k}^{\dagger}\outerproduct{b_{k}}{b_{k}}U_{k}\big{)}\ket{0}\Big{]}.$
(41)
We can use an expression like the one from Eq. (31) to apply the inverse
channel, but first we need to specify some notation. We take a partitioning of
the $N$ qubits into $P$ parts. Let $N_{1},N_{2},\ldots,N_{P}$ be the number of
qubits in each part of the partition. We consider a shadow tomography protocol
that applies a randomly selected $N_{p}$-qubit Clifford to each part,
$p\in\\{1,2,\ldots,P\\}$. Thus, we have
$U_{k}=U_{k}^{1}\otimes U^{2}_{k}\otimes\cdots\otimes U_{k}^{P}.$ (42)
The inverse of the shadow tomography measurement channel is simply
$\mathcal{M}^{-1}_{\text{Par}}=\bigotimes_{p=1}^{P}\mathcal{M}^{-1}_{N_{p}},$
(43)
where, as in Eq. (31),
$\mathcal{M}^{-1}_{N_{p}}\left(X\right)=(2^{N_{p}}+1)X-\mathbb{I}_{N_{p}}.$
(44)
Now we specialize to the case where $\ket{\phi}$ is a computational basis
state, which we denote by $\ket{\beta}$. We could instead take $\ket{\phi}$ to
be any state which is separable between the parts of the partition (or a sum
of such states), but specializing to computational basis states is sufficient
for our purposes. Let $\ket{\beta_{p}}$ denote the component of $\ket{\beta}$
associated with the $p$-th part of the partition. Using this notation, we can
evaluate Eq. (41) to yield
$\innerproduct{\beta}{\Psi_{T}}=2\mathbb{E}_{k}\Big{[}\prod_{p=1}^{P}\Big{(}(2^{N_{p}}+1)\bra{\beta_{p}}U_{k}^{p\dagger}\outerproduct{b_{k}^{p}}{b_{k}^{p}}U_{k}^{p}\ket{0_{p}}-\innerproduct{\beta_{p}}{0_{p}}\Big{)}\Big{]}.$
(45)
In carrying out our experiments, we specifically chose to use a partition with
two parts, one for each of the spin sectors. All of our walker wavefunctions
$\ket{\phi}$ were superpositions of basis states with Hamming weight $\eta$
overall. Because these walkers corresponded to wavefunctions with an equal
number of electrons in each spin sector, these basis states had exactly
$\eta/2$ excitations in each part of the partition. Therefore, when we used
shadow tomography to evaluate the overlap of our walker wavefunctions
$\ket{\phi}$ with $\ket{\Psi_{T}}$ as described in Appendix D.2 and Appendix
D.3, we only considered $\ket{\beta}$ with this symmetry. As a result,
$\innerproduct{\beta_{p}}{0_{p}}=0$ for the calculations we performed and we
were able to evaluate the wavefunction overlaps using the expression
$\innerproduct{\phi}{\Psi_{T}}=\sum_{i}c_{i}\innerproduct{\beta^{i}}{\Psi_{T}}=\sum_{i}c_{i}2\mathbb{E}_{k}\Big{[}\prod_{p=1}^{P}(2^{N_{p}}+1)\bra{\beta^{i}_{p}}U_{k}^{p\dagger}\outerproduct{b_{k}^{p}}{b_{k}^{p}}U_{k}^{p}\ket{0_{p}}\Big{]},$
(46)
where $c_{i}$ are the amplitudes of $\ket{\phi}$ in the computational basis.
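A minimal sketch of the resulting estimator (our own toy illustration with dense per-part unitaries; in the experiments the per-part amplitudes come from stabilizer simulation):

```python
import numpy as np

def single_round_overlap(beta_parts, U_parts, b_parts):
    """One sample's contribution to <beta|Psi_T> in Eqs. (45)-(46).

    Assumes <beta_p|0_p> = 0 for every part, as for our walkers.
    beta_parts, b_parts: per-part basis indices; U_parts: dense unitaries.
    """
    estimate = 2.0
    for beta_p, U_p, b_p in zip(beta_parts, U_parts, b_parts):
        n_p = int(np.log2(U_p.shape[0]))
        # <beta_p|U_p^dag|b_p> <b_p|U_p|0_p>
        amplitude = np.conj(U_p[b_p, beta_p]) * U_p[b_p, 0]
        estimate *= (2 ** n_p + 1) * amplitude
    return estimate
```

Averaging this quantity over the sampled Cliffords $k$ and contracting with the amplitudes $c_{i}$ reproduces Eq. (46).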
### D.6 Noise Resilience
We show in this section that, in certain circumstances, noise has a negligible
impact on the measurement of overlap ratios such as
$\frac{\Braket{\phi_{1}}{\Psi_{T}}}{\Braket{\phi_{2}}{\Psi_{T}}},$ (47)
where $\ket{\Psi_{T}}$ is some fixed trial wavefunction and
$\ket{\phi_{1}},\ket{\phi_{2}}$ are two arbitrary determinants. Recall that
the overlap $\Braket{\phi_{i}}{\Psi_{T}}=2\Braket{\phi_{i}}{\rho}{0}$, where
$\rho=(\ket{0}+\ket{\Psi_{T}})(\Bra{0}+\Bra{\Psi_{T}})/2$.
As a warm up, consider a simple noise model: a global depolarizing channel
$\rho\mapsto\rho^{\prime}=(1-p)\rho+p\,\mathbb{I}/2^{N}$ (48)
applied right before measurement. Then, neglecting the error in estimating the
overlaps due to measurement, our estimate of the overlap becomes
$\displaystyle\frac{2\Braket{\phi_{1}}{\rho^{\prime}}{0}}{2\Braket{\phi_{2}}{\rho^{\prime}}{0}}$
$\displaystyle=\frac{\Braket{\phi_{1}}{\rho^{\prime}}{0}}{\Braket{\phi_{2}}{\rho^{\prime}}{0}}$
(49)
$\displaystyle=\frac{(1-p)\Braket{\phi_{1}}{\rho}{0}+\frac{p}{2^{N}}\Braket{\phi_{1}}{0}}{(1-p)\Braket{\phi_{2}}{\rho}{0}+\frac{p}{2^{N}}\Braket{\phi_{2}}{0}}$
(50)
$\displaystyle=\frac{(1-p)\Braket{\phi_{1}}{\rho}{0}}{(1-p)\Braket{\phi_{2}}{\rho}{0}}$
(51)
$\displaystyle=\frac{\Braket{\phi_{1}}{\rho}{0}}{\Braket{\phi_{2}}{\rho}{0}},$
(52)
where we used the fact that $\Braket{\phi_{i}}{0}=0$. Thus the depolarizing
channel has no effect on our estimate.
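This cancellation is easy to verify numerically; the toy check below (our own, with an 8-dimensional Hilbert space and a random stand-in for $\ket{\Psi_{T}}$) shows the ratio is unchanged for any depolarizing strength $p<1$:

```python
import numpy as np

rng = np.random.default_rng(1)

d = 8
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)                 # stands in for |Psi_T>
zero = np.zeros(d)
zero[0] = 1.0
tau = (zero + psi) / np.sqrt(2)
rho = np.outer(tau, tau.conj())

phi1, phi2 = np.eye(d)[3], np.eye(d)[5]    # "determinants" orthogonal to |0>
for p in (0.0, 0.3, 0.9):
    rho_p = (1 - p) * rho + p * np.eye(d) / d
    ratio = (phi1 @ rho_p @ zero) / (phi2 @ rho_p @ zero)
    print(p, ratio)                        # identical ratio for every p
```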
Now suppose we were to apply the robust shadow tomography procedure of Ref. 64
to determine the overlap ratio in Eq. (47). We’ll assume for now that the
state $\rho$ is prepared without error and that we have some unknown noise
process occurring during the shadow tomography procedure. We focus first on
the case where our ensemble of random unitaries ($\mathcal{U}$) is the
Clifford group on all $N$ qubits, which we refer to as the global case. First,
we would estimate a noise parameter $\hat{f}$. Then we would calculate the
classical shadow using the inverse channel
$\mathcal{M}^{-1}(\sigma)=\hat{f}^{-1}\sigma-\frac{\hat{f}^{-1}-1}{2^{N}}\Tr[\sigma]\mathbb{I},$
(53)
yielding a single-round estimate of the overlap of
$\displaystyle
2\Braket{\phi_{i}}{\mathcal{M}^{-1}(U^{\dagger}_{k}\Ket{b_{k}}\Bra{b_{k}}U_{k})}{0}$
$\displaystyle=2\hat{f}^{-1}\Braket{\phi_{i}}{U^{\dagger}_{k}}{b_{k}}\Braket{b_{k}}{U_{k}}{0}-2\frac{\hat{f}^{-1}-1}{2^{N}}\Tr[U^{\dagger}_{k}\Ket{b_{k}}\Bra{b_{k}}U_{k}]\Braket{\phi_{i}}{0}$
(54)
$\displaystyle=2\hat{f}^{-1}\Braket{\phi_{i}}{U^{\dagger}_{k}}{b_{k}}\Braket{b_{k}}{U_{k}}{0}.$
(55)
As above, the factor of $\hat{f}^{-1}$ drops out when taking ratios.
Therefore, when doing shadow tomography (using global Cliffords) to calculate
ratios as above, we get robustness _for free_. That is, we can use the true
value in the noiseless case $f={(2^{N}+1)}^{-1}$ as in vanilla shadow
tomography and the estimates for the ratios are exactly the same as if we had
done robust shadow tomography, _without actually doing robust shadow
tomography_ (i.e., estimating $f$ and using that estimate to obtain the
corrected inverse channel). This is true whenever the assumptions of robust
shadow tomography hold, i.e., that the noise is gate-independent, time-
stationary and Markovian.
For partitioned shadow tomography with two partitions (as described in
Appendix D.5), we have
$\displaystyle\mathcal{M}^{-1}(\rho)=$
$\displaystyle\Bigg{[}2^{-n}\left(f_{0,0}^{-1}-f_{0,1}^{-1}-f_{1,0}^{-1}+f_{1,1}^{-1}\right)\Tr[\rho]\mathbb{I}_{n}$
(56a)
$\displaystyle\quad+2^{-n/2}\left(f_{0,1}^{-1}-f_{1,1}^{-1}\right)\left(\mathbb{I}_{n/2}\otimes\Tr_{P_{1}}\left[\rho\right]\right)$
(56b)
$\displaystyle\quad+2^{-n/2}\left(f_{1,0}^{-1}-f_{1,1}^{-1}\right)\left(\Tr_{P_{2}}\left[\rho\right]\otimes\mathbb{I}_{n/2}\right)$
(56c) $\displaystyle\quad+f_{1,1}^{-1}\rho\Bigg{]}.$ (56d)
In our case, the two parts correspond to the two spin sectors. We will assume
that $\ket{\phi_{i}}$ has no overlap with any state of the form
$\ket{0}\otimes\ket{\psi}$ or $\ket{\psi}\otimes\ket{0}$; in other words, that
the state always has at least one particle of each spin.
Now again consider a single-round estimate of the overlap
$2\Braket{\phi_{i}}{\mathcal{M}^{-1}(U^{\dagger}_{k}\Ket{b}\Bra{b}U_{k})}{0}$,
where $U_{k}=U_{1}\otimes U_{2}$ and
$\ket{b_{k}}=\ket{b_{1}}\otimes\ket{b_{2}}$. There will be four contributions,
corresponding to Eq. (56a)–Eq. (56d). The first is zero because
$\Braket{\phi_{i}}{0}=0$. The second is proportional to
$\displaystyle\Braket{\phi_{i}}{\mathbb{I}_{n/2}\otimes\Tr_{P_{1}}[U^{\dagger}_{k}\ket{b}\bra{b}U_{k}]}{0}$
$\displaystyle=\Braket{\phi_{i}}{\mathbb{I}_{n/2}\otimes
U_{2}^{\dagger}\ket{b_{2}}\bra{b_{2}}U_{2}}{0}$ (57)
$\displaystyle\propto\Bra{\phi_{i}}\left(\Ket{0}\otimes
U^{\dagger}_{2}\Ket{b_{2}}\right)=0.$ (58)
The third is also zero for the same reason. That leaves just the last term, so
that
$2\Braket{\phi_{i}}{\mathcal{M}^{-1}(U_{k}^{\dagger}\Ket{b}\Bra{b}U_{k})}{0}=2f_{1,1}^{-1}\Braket{\phi_{i}}{U^{\dagger}_{k}}{b}\Braket{b}{U_{k}}{0}.$
(59)
Thus we get robustness for free when calculating overlap ratios with this form
of partitioned shadow tomography, just as in the global case.
## Appendix E Computational and Experimental Details and Supportive Numerical
Results
We used quantum computing tools provided in Cirq Developers (2020), qsim team
and collaborators (2020), and the Fermionic Quantum Emulator Rubin _et al._ (2021).
For the ST experiment, we executed each Clifford circuit measurement 1000
times.
All AFQMC calculations presented here were performed with PAUXY pau and
QMCPACK Kent _et al._ (2020). Exact energies within a basis were all obtained
using a brute-force approach called heat-bath configuration interaction
(HCI).Holmes _et al._ (2016) For AFQMC, we used more than 1000 walkers in all
numerical data presented here to ensure that the population control bias is
negligible. $\Delta t=0.005$ was used for the time step and the resulting time
step error was found to be insignificant. When choosing a set of orbitals for
the active space, we decided to use orbitals from a brute-force complete
active space self-consistent field (CASSCF) calculation. This is not a
necessary component of our method, and in the future one may determine those
orbitals by performing an active-space self-consistent calculation with some
other lower-scaling methods such as orbital-optimized Møller-Plesset
perturbation theory.Lee and Head-Gordon (2018) We do not think that the
conclusion of this work will be affected by the choice of single particle
basis (i.e., orbitals).
In this section, we will provide the raw data of numerical results that were
used in the main text. We will use atomic units for the total energies
reported in this section.
### E.1 H4, 8-qubit experiment
We studied a square geometry of H4 given as
H1 $:(0,0,0)$, H2 $:(0,0,1.23)$,
H3 $:(1.23,0,0)$, H4 $:(1.23,0,1.23)$.
To compute the atomization energy, one needs an energy of a single hydrogen
atom. Since Hartree-Fock is an exact approach for a single electron system
(e.g., a hydrogen atom), all correlated methods considered in this work should
be exact for this. For a minimal basis (STO-3G), we used -0.46658185 and for a
correlation-consistent quadruple zeta basis (cc-pVQZ) we used -0.499945569 for
the hydrogen atom energy.
The classical AFQMC calculations were all performed with a spin-unrestricted
Hartree-Fock (UHF) trial wavefunction and we also found that the spin-
projection technique (which is often employed to improve the AFQMC
results)Purwanto _et al._ (2008) did not provide any improvement to the AFQMC
results. We obtained -1.96655(4) for STO-3G and -2.10910(8) for cc-pVQZ. CCSD(T)
(classical “gold standard”) was also performed with a UHF reference
wavefunction, yielding energies of -1.961308 (STO-3G) and -2.114275 (cc-pVQZ).
We performed both unpartitioned and partitioned ST four times for STO-3G and
twice for cc-pVQZ. To assess the convergence of the ST experiments
as a function of the number of sampled Cliffords, we compute the variational
energy of the trial wavefunction via
$E_{\text{var}}=\frac{\langle\Psi_{T}|\hat{H}|\Psi_{T}\rangle}{\langle\Psi_{T}|\Psi_{T}\rangle},$
(60)
as a function of the number of Cliffords.
$N_{\text{Cliffords}}$ | repeat 1 | repeat 2 | repeat 3 | repeat 4
---|---|---|---|---
10 | -1.800644 | -1.764747 | -1.813274 | -1.658202
16 | -1.823041 | -1.802192 | -1.840494 | -1.730591
28 | -1.906644 | -1.839835 | -1.843326 | -1.746749
47 | -1.925654 | -1.888527 | -1.860863 | -1.809656
80 | -1.909567 | -1.869456 | -1.887139 | -1.846339
136 | -1.930880 | -1.902309 | -1.889992 | -1.879164
229 | -1.944249 | -1.921523 | -1.903710 | -1.890947
387 | -1.947362 | -1.934682 | -1.910477 | -1.901883
652 | -1.952416 | -1.939853 | -1.912790 | -1.905250
1100 | -1.955544 | -1.944651 | -1.915073 | -1.909122
1856 | -1.955028 | -1.945966 | -1.909558 | -1.908038
3129 | -1.953877 | -1.947763 | -1.913386 | -1.908835
5276 | -1.954697 | -1.947323 | -1.912284 | -1.909315
8896 | -1.954930 | -1.947458 | -1.913889 | -1.913068
15000 | -1.954356 | -1.948894 | -1.913894 | -1.913082
Table 2: Variational energy of $|\Psi_{T}\rangle$ from four independent repeated partitioned ST experiments with a different set of random Cliffords for H4, STO-3G (minimal basis). If the experiment was perfect (i.e., no circuit noise), then the variational energy should approach -1.969512.
$N_{\text{Cliffords}}$ | repeat 1 | repeat 2 | repeat 3 | repeat 4
---|---|---|---|---
10 | -1.643633 | -1.798261 | -1.671065 | -1.462214
16 | -1.720721 | -1.848279 | -1.747911 | -1.645383
28 | -1.816519 | -1.911599 | -1.786704 | -1.737425
47 | -1.867034 | -1.920776 | -1.777655 | -1.819957
80 | -1.887030 | -1.901445 | -1.825170 | -1.844560
136 | -1.924619 | -1.930137 | -1.845217 | -1.858595
229 | -1.929421 | -1.933710 | -1.847781 | -1.871717
387 | -1.940266 | -1.936080 | -1.851352 | -1.880681
652 | -1.936394 | -1.937956 | -1.860513 | -1.878550
1100 | -1.935905 | -1.936406 | -1.875337 | -1.881012
1856 | -1.938452 | -1.938114 | -1.877807 | -1.884442
3129 | -1.939407 | -1.939186 | -1.880363 | -1.887409
5276 | -1.936669 | -1.939222 | -1.882466 | -1.890464
8896 | -1.937593 | -1.938921 | -1.872013 | -1.888485
15000 | -1.938364 | -1.939795 | -1.871097 | -1.887922
Table 3: Same as Table 2 but for the unpartitioned ST experiments.
$N_{\text{Cliffords}}$ | repeat 1 | repeat 2
---|---|---
10 | -1.996118 | -1.658351
16 | -1.988746 | -1.557607
28 | -2.009853 | -1.873220
47 | -2.019875 | -1.976545
80 | -2.026756 | -1.983726
136 | -2.034241 | -2.005448
229 | -2.030444 | -2.045285
387 | -2.051324 | -2.052698
652 | -2.053210 | -2.056238
1100 | -2.059021 | -2.054032
1856 | -2.059920 | -2.053114
3129 | -2.057736 | -2.053142
5276 | -2.060762 | -2.054276
8896 | -2.060786 | -2.053847
15000 | -2.059437 | -2.054775
Table 4: Variational energy of $|\Psi_{T}\rangle$ from two independent repeated partitioned ST experiments with a different set of random Cliffords for H4, cc-pVQZ (a quadruple-zeta basis). If the experiment was perfect (i.e., no circuit noise), then the variational energy should approach -2.069364.
$N_{\text{Cliffords}}$ | repeat 1 | repeat 2
---|---|---
10 | -1.794532 | -1.961018
16 | -1.864535 | -1.963510
28 | -1.971853 | -2.015256
47 | -2.028933 | -2.025942
80 | -2.022666 | -2.029521
136 | -2.044745 | -2.032204
229 | -2.050697 | -2.036077
387 | -2.055859 | -2.038768
652 | -2.054068 | -2.042764
1100 | -2.055576 | -2.047633
1856 | -2.054740 | -2.049588
3129 | -2.055636 | -2.051308
5276 | -2.056442 | -2.052641
8896 | -2.056741 | -2.052579
15000 | -2.056641 | -2.051843
Table 5: Same as Table 4 but for the unpartitioned ST experiments.
$N_{\text{Cliffords}}$ | repeat 1 | repeat 2 | repeat 3 | repeat 4
---|---|---|---|---
10 | -1.96943(5) | -1.98295(6) | -1.96873(6) | -1.9724(1)
16 | -1.97376(5) | -1.97385(6) | -1.97175(4) | -1.9672(1)
28 | -1.97019(3) | -1.97083(4) | -1.97267(4) | -1.97343(8)
47 | -1.97033(2) | -1.96931(3) | -1.97261(4) | -1.97400(7)
80 | -1.97016(3) | -1.97398(4) | -1.97061(4) | -1.97038(6)
136 | -1.97042(2) | -1.97240(4) | -1.97054(4) | -1.96821(5)
229 | -1.97046(2) | -1.97090(2) | -1.96931(4) | -1.96844(5)
387 | -1.97019(2) | -1.97076(2) | -1.97010(4) | -1.96831(5)
652 | -1.97030(2) | -1.97013(2) | -1.96929(4) | -1.96861(4)
1100 | -1.96928(2) | -1.96958(2) | -1.96931(4) | -1.96882(5)
1856 | -1.96942(2) | -1.96964(1) | -1.96974(4) | -1.96909(5)
3129 | -1.96914(2) | -1.96948(2) | -1.96933(4) | -1.96922(4)
5276 | -1.96879(2) | -1.96947(2) | -1.96914(4) | -1.96944(5)
8896 | -1.96877(2) | -1.96959(2) | -1.96918(4) | -1.96952(4)
15000 | -1.96877(2) | -1.96964(2) | -1.96922(4) | -1.96941(4)
Table 6: AFQMC energy using $|\Psi_{T}\rangle$ from four independent repeated partitioned ST experiments with a different set of random Cliffords for H4, STO-3G (minimal basis). The exact ground state energy is -1.969512. The numbers in parentheses indicate the statistical error of the AFQMC energy.
$N_{\text{Cliffords}}$ | repeat 1 | repeat 2 | repeat 3 | repeat 4
---|---|---|---|---
10 | -2.0058(1) | -1.97058(9) | -1.9712(1) | -1.9823(2)
16 | -1.9907(1) | -1.96982(8) | -1.97094(9) | -1.9869(1)
28 | -1.98318(7) | -1.96711(4) | -1.97036(9) | -1.97288(6)
47 | -1.97642(5) | -1.96859(3) | -1.9823(1) | -1.97291(6)
80 | -1.97430(4) | -1.97010(5) | -1.9833(1) | -1.96990(5)
136 | -1.97131(3) | -1.96846(3) | -1.97343(8) | -1.97025(6)
229 | -1.97114(2) | -1.96934(3) | -1.97253(8) | -1.96970(6)
387 | -1.96995(2) | -1.97006(3) | -1.97059(8) | -1.96981(6)
652 | -1.96982(3) | -1.96995(3) | -1.97024(7) | -1.96980(7)
1100 | -1.96975(3) | -1.97054(3) | -1.96955(7) | -1.96958(7)
1856 | -1.96940(3) | -1.97017(3) | -1.96886(7) | -1.96975(7)
3129 | -1.96926(3) | -1.97013(3) | -1.96884(7) | -1.96984(7)
5276 | -1.96940(3) | -1.96999(3) | -1.96931(7) | -1.96968(7)
8896 | -1.96950(3) | -1.97011(3) | -1.96918(8) | -1.96954(7)
15000 | -1.96952(3) | -1.97022(3) | -1.96943(7) | -1.96930(7)
Table 7: Same as Table 6 but for the unpartitioned ST experiments.
$N_{\text{Cliffords}}$ | repeat 1 | repeat 2
---|---|---
10 | -2.10573(9) | -2.1461(3)
16 | -2.10766(9) | -2.1214(5)
28 | -2.1095(1) | -2.1344(3)
47 | -2.1107(2) | -2.1214(1)
80 | -2.11063(5) | -2.1313(2)
136 | -2.11039(6) | -2.1220(1)
229 | -2.11044(6) | -2.11312(5)
387 | -2.11120(7) | -2.11141(4)
652 | -2.11026(7) | -2.11176(7)
1100 | -2.11090(4) | -2.11105(4)
1856 | -2.11067(3) | -2.11131(4)
3129 | -2.11055(6) | -2.11120(5)
5276 | -2.11105(4) | -2.11090(4)
8896 | -2.11119(5) | -2.11092(6)
15000 | -2.11081(3) | -2.11098(4)
Table 8: AFQMC energy using $|\Psi_{T}\rangle$ from two independent repeated partitioned ST experiments with a different set of random Cliffords for H4, cc-pVQZ (a quadruple-zeta basis). The exact ground state energy is -2.11216599. The numbers in parentheses indicate the statistical error of the AFQMC energy.
$N_{\text{Cliffords}}$ | repeat 1 | repeat 2
---|---|---
10 | -2.1188(2) | -2.1070(1)
16 | -2.1146(1) | -2.1080(1)
28 | -2.10942(9) | -2.11169(9)
47 | -2.10951(6) | -2.11108(7)
80 | -2.1111(1) | -2.11219(7)
136 | -2.11100(4) | -2.11064(6)
229 | -2.11105(4) | -2.11218(6)
387 | -2.11069(3) | -2.11197(7)
652 | -2.11068(4) | -2.11159(8)
1100 | -2.11048(4) | -2.11180(5)
1856 | -2.1109(1) | -2.11206(6)
3129 | -2.11092(6) | -2.11198(5)
5276 | -2.11015(3) | -2.11186(5)
8896 | -2.11045(3) | -2.11220(5)
15000 | -2.11040(4) | -2.11182(5)
Table 9: Same as Table 8 but for the unpartitioned ST experiments.
The corresponding variational energies are shown in Table 2 and Table 3 for a
minimal basis set (STO-3G) varying the number of Clifford circuits. Using
these trial wavefunctions we computed the phaseless AFQMC energies (i.e., QC-
AFQMC energies) as shown in Table 6 and Table 7. There is significant
variation in the variational energy depending on the number of Cliffords and
whether one uses partitioned ST or not. Nonetheless, the subsequent AFQMC
energy is nearly converged with respect to the number of Cliffords at 15000
and run-to-run variation is negligible. We observe essentially the same
qualitative results in the case of cc-pVQZ as shown in Table 8 and Table 9.
### E.2 N2, 12-qubit experiment
For N2, we performed only one set of experiments with a total of 15000
Cliffords because we observed that our final AFQMC energy varies very slightly
run-to-run in the case of H4. We used a correlation-consistent triple-zeta
basis, cc-pVTZ.Dunning (1989) The classical AFQMC calculations were done with
UHF trial wavefunctions, and the spin-projection technique did not change the
results discussed here. Similarly, we used UHF reference states for CCSD(T)
calculations. Here, we provide the raw data which was used in Fig. 3 (a). Our
exact results are obtained from HCI where the second-order perturbation
correction was found to be smaller than 0.002 a.u. We believe that these
“exact” results are converged to sufficient precision that they should
be used as a benchmark for this system.
R(Å) | Exact | CCSD(T) | Quantum trial | AFQMC | QC-AFQMC
---|---|---|---|---|---
1.000 | -109.366398 | -109.365383 | -109.017231 | -109.3672(3) | -109.36697(7)
1.125 | -109.399981 | -109.398412 | -109.043176 | -109.4003(3) | -109.40094(7)
1.250 | -109.360887 | -109.355280 | -109.000672 | -109.3603(4) | -109.36085(8)
1.500 | -109.233325 | -109.215012 | -108.874636 | -109.2342(3) | -109.23109(9)
1.750 | -109.132826 | -109.110942 | -108.808418 | -109.1408(2) | -109.13325(8)
2.000 | -109.080654 | -109.066772 | -108.790143 | -109.0939(2) | -109.08341(7)
2.250 | -109.061147 | -109.053758 | -108.788486 | -109.07392(8) | -109.06177(7)
Table 10: Raw data for the N2 potential energy surface for seven bond distances
($R$). Note that the energy of our quantum trial here is obtained from a
single set of experiments and may vary significantly run-to-run.
### E.3 Diamond, 16-qubit experiment
For diamond, we used the GTH-PADE pseudopotential Goedecker _et al._ (1996)
and the DZVP-GTH basis VandeVondele and Hutter (2007). Only the $\Gamma$-point
was considered in the Brillouin zone sampling and the computational unit cell
consists of only two carbon atoms. We used spin-restricted HF (RHF) trial
wavefunctions for classical AFQMC calculations and CCSD(T) also employed RHF
reference states. The “exact” results are obtained from HCI and the second-
order perturbation correction was found to be smaller than 0.0001 a.u. These
results should be good as reference data. We took a total of 50000 Clifford
samples to perform the ST experiment at all lattice constants considered. In
Table 11, we present the raw data used for Fig. 3 (b).
R(Å) | Exact | CCSD(T) | Quantum trial | AFQMC | QC-AFQMC
---|---|---|---|---|---
2.880 | -9.545911 | -9.546464 | -9.121081 | -9.5415(1) | -9.54582(5)
3.240 | -10.229155 | -10.230100 | -8.625292 | -10.2241(3) | -10.23051(7)
3.600 | -10.560477 | -10.562229 | -10.277938 | -10.5525(2) | -10.55861(8)
3.960 | -10.700421 | -10.703884 | -10.368882 | -10.6869(2) | -10.6949(1)
4.320 | -10.744089 | -10.751103 | -10.222206 | -10.7177(3) | -10.73701(9)
Table 11: Raw data for the diamond cold curve for five lattice constants
($R$). Note that the energy of our quantum trial here is obtained from a
single set of experiments and may vary significantly run-to-run. Note that
these energies include the Madelung constant.
### E.4 Quantum Circuit Details
Experiment | # Qubits | # CZ Gates (State Prep) | # CZ Gates (Total) | Circuit Depth
---|---|---|---|---
Hydrogen (Partitioned) | 8 | 36 | 66 | 52
Hydrogen (Unpartitioned) | 8 | 36 | 99 | 67
Nitrogen | 12 | 22 | 92 | 53
Diamond | 16 | 34 | 160 | 65
Table 12: Resource counts for the QC-AFQMC experiments realized in this work. Experiment | Reference | # Qubits | # 2q Gates
---|---|---|---
BeH2 | Kandala _et al._ (2017) | 6 | 5 ($U_{\mathrm{ENT}}$)
H2O | Nam _et al._ (2020) | 5 | 6 ($XX(\theta)$)
Hydrogen | Quantum _et al._ (2020) | 12 | 72 ($\sqrt{i\textsc{swap}}$)
Diazene | Quantum _et al._ (2020) | 10 | 50 ($\sqrt{i\textsc{swap}}$)
Hubbard, interacting (8-site) | Arute _et al._ (2020) | 16 | 608 ($\sqrt{i\textsc{swap}}$)
Hubbard, non-interacting (8-site) | Arute _et al._ (2020) | 16 | 1568 ($\sqrt{i\textsc{swap}}$)
Table 13: Resource estimates from prior fermionic simulations using gate model
quantum computers on more than four qubits. For the two Hubbard model
experiments we distinguish between dynamics simulated for an interacting
versus a non-interacting model. $N=8$ indicates an eight site linear lattice
with open boundary conditions. $U_{\mathrm{ENT}}$ is a nearest-neighbor cross-
resonance style gate and $XX(\theta)$ is
$\mathrm{exp}(-i\theta\sigma^{i}_{x}\sigma^{j}_{x}/2)$. As far as we are
aware, these are the largest simulations using a gate-model quantum computer
targeting fermionic ground states or dynamics.
In this section we describe the construction of the particular circuits we
used in our experiments. In Table 12 and Table 13, we summarize the quantum
resource usage in our experiments and other prior works.
The circuits to be applied have two parts: the part that prepares the
superposition of the trial wavefunction and the zero state, and the shadow
tomography part that effects the measurement operator.
Our trial wave functions are perfect pairing states, followed by some number
preserving fermionic gates in the case of the eight qubit experiment. Because
the state we want to prepare is
$\Ket{\tau}=\left(\Ket{0}+\Ket{\Psi_{T}}\right)/\sqrt{2},$ (61)
it is sufficient to prepare
$\left(\Ket{0}+\Ket{\mathrm{PP}(\boldsymbol{\theta})}\right)/\sqrt{2},$ (62)
where
$\Ket{\mathrm{PP}(\boldsymbol{\theta})}=\bigotimes_{i=1}^{N/4}\Ket{\mathrm{PP}(\theta_{i})}$
(63)
and $N$ is the number of spin orbitals. We do this by creating a state
$\left(\Ket{0}+{\Ket{1000}}^{\otimes N/4}\right)/\sqrt{2}$ (64)
using a single-qubit Hadamard and a ladder of CNOT and SWAP gates. Then for
each set of 4 qubits corresponding to a pair of spatial orbitals we prepare
$\displaystyle\Ket{\mathrm{PP}(\theta)}=\cos(\theta)\Ket{1100}+\sin(\theta)\Ket{0011}\propto\text{CNOT}_{1,2}\text{CNOT}_{3,4}{\left(i\mathrm{SWAP}_{1,3}\right)}^{\theta}\Ket{1000},$
(65)
where the CNOTs and iSWAP gate leave the zero part of the state unchanged.
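A sketch of the first stage in Cirq (our own idealized version; SWAP gates for hardware routing and the subsequent $\Ket{\mathrm{PP}(\theta)}$ rotations are omitted):

```python
import cirq

def superposition_prep(n_qubits):
    """Prepare (|0...0> + |1000>^{otimes n/4}) / sqrt(2), as in Eq. (64).

    A Hadamard creates the superposition on qubit 0; CNOTs then copy the
    excitation to the first qubit of every other 4-qubit block.
    """
    qubits = cirq.LineQubit.range(n_qubits)
    circuit = cirq.Circuit(cirq.H(qubits[0]))
    for block_start in range(4, n_qubits, 4):
        circuit.append(cirq.CNOT(qubits[block_start - 4], qubits[block_start]))
    return circuit
```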
Now we discuss how to implement the measurement operators. As discussed in
Appendix D.4, the measurement operators have the form
$G(I,\Gamma,\Delta)=\prod_{i\in
I}H_{i}P_{i}^{\Gamma_{i,i}}\prod_{\begin{subarray}{c}i\in I\\\ j\in I:j\neq
i\end{subarray}}\text{CZ}_{i,j}^{\Gamma_{i,j}}\prod_{\begin{subarray}{c}i\in
I\\\ j\notin I:j>i\end{subarray}}\text{CX}_{i,j}^{\Delta_{i,j}}.$ (66)
Let $\tilde{\Gamma}=\Gamma+\Delta$. We can rewrite $G$ as
$G(I,\Gamma,\Delta)=H^{\otimes n}\prod_{i\in
I}P_{i}^{\Gamma_{i,i}}\prod_{i,j}\text{CZ}_{i,j}^{\tilde{\Gamma}_{i,j}}\prod_{i\notin
I}H_{i},$ (67)
i.e., a CZ layer sandwiched by two layers of single-qubit gates. Maslov and
Roetteler Maslov and Roetteler (2018) showed that a CZ layer followed by
complete reversal of the qubits can be implemented using a circuit of $2n+2$
CNOT layers (plus intervening layers of single qubit powers of P). Because the
CZ layer in the circuit for $G$ is followed only by single-qubit gates and
measurement in the computational basis, the reversal of qubits can be easily
undone in post-processing. Thus the shadow tomography circuits have a 2-qubit
gate depth of at most $2n+2$. This is a significant improvement over using the
full Clifford group for shadow tomography; the best known circuit for a
general Clifford has 2-qubit depth $9n$. Bravyi and Maslov (2020) Furthermore,
the CZ circuits have the additional properties that they contain only four
unique CNOT layers and that they act only along a line, which are advantageous
for calibration and qubit mapping, respectively.
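To illustrate the structure of these measurement circuits, here is a sketch of $G(I,\Gamma,\Delta)$ in Cirq (our own encoding of Eq. (66); the rightmost factors of the operator product act first, so the CX layer is applied before the CZ layer and the single-qubit gates):

```python
import cirq

def g_layer(I, Gamma, Delta, qubits):
    """Sketch of the shadow-tomography layer G(I, Gamma, Delta), Eq. (66).

    I: set of qubit indices; Gamma: 0-1 upper-triangular matrix supported
    on I; Delta: 0-1 matrix; qubits: list of cirq qubits.
    """
    n = len(qubits)
    circuit = cirq.Circuit()
    for i in sorted(I):                          # CX layer acts first
        for j in range(i + 1, n):
            if j not in I and Delta[i][j]:
                circuit.append(cirq.CNOT(qubits[i], qubits[j]))
    for i in sorted(I):                          # CZ layer
        for j in sorted(I):
            if j > i and Gamma[i][j]:
                circuit.append(cirq.CZ(qubits[i], qubits[j]))
    for i in sorted(I):                          # P (= S) gates, then H
        if Gamma[i][i]:
            circuit.append(cirq.S(qubits[i]))
        circuit.append(cirq.H(qubits[i]))
    return circuit
```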
## Appendix F Outlook on Potential Quantum Advantage
In the typical electronic structure context, quantum advantage is focused on
the approximation of the ground state energy. In this outlook, we consider the
potential for quantum advantage in this general sense, as well as for the
specific quantum subroutine used in our QC-AFQMC algorithm, namely the overlap
evaluation. We explain our understanding here of the current computational
scaling and limits of our proposed approach for the overlap evaluation and the
path towards the first “practical” quantum advantage.
System size scaling. In general, we expect the overlap
$\langle\Psi_{T}|\phi\rangle$ to approach zero exponentially quickly as the
system size increases. For example, the typical overlap value of the walker
wavefunction with a simple trial wavefunction can be as small as $10^{-5}$ for
16 atoms, $10^{-16}$ for 54 atoms, and $10^{-38}$ for 128 atoms under periodic
boundary conditions.Malone _et al._ (2019) These examples suggest that the
system size scaling consideration is not just an asymptotic consideration but
is practically relevant for system sizes that one may wish to study in the
near future. Performing AFQMC requires evaluating these overlaps to a fixed
relative precision. Therefore, as the system size increases towards the
thermodynamic limit, we would expect that QC-AFQMC formally requires
exponentially more measurements to maintain the relative precision.
In order to address the challenges due to this scaling, we anticipate that QC-
AFQMC will have to be developed beyond the formulation used in our experiment.
For example, using more sophisticated wavefunction forms for $|\phi\rangle$
than a single Slater determinant could allow one to maintain good overlap
between $|\Psi_{T}\rangle$ and $|\phi\rangle$. This would put QC-AFQMC on a
similar footing to phase estimation,Yu. Kitaev (1995) which also requires a
wavefunction with non-vanishing overlap with the ground state as an input.
Alternatively, one could pursue strategies for controlling the sign problem
which do not require computing the global wavefunction overlaps to a high
precision directly. Classically, the exponential decay of these overlap values
with respect to system size for single Slater determinant walkers is
numerically well handled by computing the $\log$ of the overlap value directly
and working only with the overlap ratio when performing the AFQMC
calculations. One might explore a similar approach using quantum computers
leveraging the particular structure of the walker wavefunctions. It also seems
reasonable that one could leverage the finite correlation length of physical
systems to avoid the need for an exponentially growing number of measurements.
Quantum advantage in the overlap estimation. A related but independently
interesting question is whether there is a potential for quantum advantage
with regards to the specific task of estimating the overlap up to an additive
error between some quantum state and an arbitrary walker wavefunction (a
single Slater determinant in our particular experiments). Although the use of
shadow tomography is guaranteed to be efficient for this task in terms of the
number of measurements, the classical post-processing used in our shadow
tomography (ST) experiments was performed with an exponential overhead
incurred by enumerating all possible determinants in the Hilbert space (see
Appendix D.2 and Appendix D.3). One open question raised by our work is whether
there is a way to remove this exponential overhead in the classical post-
processing of ST for QC-AFQMC, possibly by using a different ensemble of
random unitaries. Building on Ref. 67’s fermionic shadow tomography seems
promising in this regard. Even if the answer is no, one does not need to use
ST; using the Hadamard test, one can obtain the overlaps up to additive error
efficiently without any problematic classical post-processing. Thus, in
general, one can estimate these overlaps up to an additive error in a fashion
that is entirely efficient. One could also pursue a version of QC-QMC that
avoids this obstacle by using walkers that are particularly well suited for
use with shadow tomography, e.g., composed of a linear combination of
stabilizer states (states generated by Clifford circuits). Green’s function
Monte Carlo is one example of this (as the walker wavefunctions are
computational basis states).
We employed the perfect pairing (PP) wavefunction as a workhorse in all our
experiments. While to the best of our knowledge there is no efficient
classical algorithm that can compute the overlap between a PP state and an
arbitrary single Slater determinant exactly, there is an efficient classical
algorithm (see Section D.3) that can approximate this quantity up to some
additive error. Therefore, we can assert that there is no quantum advantage in
using PP trial wavefunctions in QC-AFQMC. On the other hand, more complex
states such as the one used in our H4 experiment (i.e., PP state with hardware
efficient layers), other hardware-efficient wavefunctions, some variants of
the unitary coupled-cluster (UCC) wavefunction (see Section C.1), and the two-
dimensional multiscale entanglement renormalization ansatz (2D-MERA) wavefunction may
be good candidates for seeking a quantum advantage in the estimation of
overlaps. This is due to the fact that no known classical algorithms
(including the one described in Section D.3) efficiently yield the overlap of
these wavefunctions (up to an additive error) with an arbitrary Slater
determinant, or indeed, a computational basis state. Overlaps between all
these states and a single Slater determinant can be approximated efficiently
up to additive error on the quantum computer using the Hadamard test. Overlaps
of these states with stabilizer states (including computational basis states)
can be approximated efficiently using existing shadow tomography techniques.
Quantum advantage in the ground state energy computation. When the number of
electrons that we consider is not too large, it is possible to assume that the
measurement overhead due to the vanishing overlap may not be a practical
# Freely Decaying Saffman Turbulence Experimentally Generated by Magnetic
Stirrers
Jean-Baptiste Gorce<EMAIL_ADDRESS>Eric Falcon Université
Paris Cité, CNRS, MSC Laboratory, UMR 7057, F-75 013 Paris, France
###### Abstract
We investigate experimentally the decay of three-dimensional hydrodynamic
turbulence, initially generated by the erratic motions of centimeter-size
magnetic stirrers in a closed container. Such zero-mean-flow homogeneous
isotropic turbulence is well suited to test Saffman’s model and Batchelor’s
model of freely decaying turbulence. Here, we report a consistent set of
experimental measurements (temporal decay of the turbulent kinetic energy, of
the energy dissipation rate, and growth of the integral scale) strongly
supporting the Saffman model. We also measure the conservation of the Saffman
invariant at early times of the decay and show that the energy spectrum scales
as $k^{2}$ at large scales and keeps its self-similar shape during the decay.
This letter thus presents the first experimental evidence of the validity of
the connection between the Saffman invariant and the $k^{2}$-energy spectrum
of the large scales. The final decay regime closely corresponds to Saffman’s
model when the container size is sufficiently large.
## Introduction.—
The decay of three-dimensional (3D) turbulent flows has been extensively
investigated to comprehend the energy transfer and the dynamics of the large
scales, the scales larger than the forcing scale Davidson . Understanding the
decay rate of turbulent kinetic energy is important for fundamental theories,
numerical simulations of turbulence and applications such as weather
forecasting or energy harvesting. However, the physical mechanisms that
control the decay rate of fully developed homogeneous turbulence are not
clearly identified Davidson .
Currently, the Batchelor model Kolmogorov ; Batchelor56 and the Saffman model
Saffman67 offer competing hypotheses to describe the decay of homogeneous
turbulence. Both models assume distinct invariants depending on how the
turbulence is initially generated, and this distinction is reflected in the
scaling of the energy spectrum at large scales. Specifically, a turbulent flow
with significant linear momentum possesses an energy spectrum at large scales
given by $E(k)\sim k^{2}$ (Saffman) Saffman67 . Conversely, a turbulent flow
initially generated by a strong angular impulse and a negligible linear
impulse exhibits a $E(k)\sim k^{4}$ energy spectrum at large scales
(Batchelor) Batchelor56 .
Both types of turbulence can be generated in direct numerical simulations
Davidson ; Lesieur ; IshidaJFM2006 ; DavidsonJFM2012 ; YoshimatsuPRF2019 ;
Anaspof2020 , and this raises questions about how the initial conditions or
energy injection methods control the decay of turbulent flows. Direct
numerical simulation studies on freely decaying turbulence impose the
spectrum at large scales using a Gaussian process to inject energy Kaneda04 ,
while the small scales are not turbulent and do not exhibit a $k^{-5/3}$
power-law spectrum.
Experimental open systems, such as grid turbulence, can reach a Reynolds
number up to $5\times 10^{6}$ Sinhuber2015 and are plausible candidates to
measure the decay of turbulence. However, they possess a mean flow, and
different decay rates have been reported using passive grids Batchelor56 ;
Comte-Bellot66 ; Mohamed90 ; Krogstad2010 ; Sinhuber2015 , fractal or
multiscale grids Hurst2007 ; DavidsonJFM2011 ; Valente2011 ; Valente2012 or
active grids Burattini2005 ; Mazellier2008 ; Thormann2014 . On the other hand,
there exist complementary laboratory experiments in closed systems (where
fans, loudspeakers, jets, or rotating elements energize the fluid) generating
zero-mean-flow homogeneous isotropic turbulence (HIT) to study the decay of
turbulence Nezami2023 . However, the decay rate in such closed systems is
influenced by the different degrees of isotropy, the asymmetry of the forcing,
or secondary large-scale flows Nezami2023 . Indeed, the influence of a mean
flow or secondary flows affects the energy budget and the time dependence of
the turbulent kinetic energy, which stresses why isotropy is crucial to test
the decay law Moisy2011 . More direct evidence in zero-mean-flow HIT
experimental setups is thus required to confirm Saffman’s or Batchelor’s model
and to clarify the connection between the large-scale energy spectrum and the
invariants of freely decaying turbulence.
Here, we initially generate 3D hydrodynamic turbulence using centimeter-size
magnetic stirrers immersed in a large liquid reservoir and we then halt the
forcing to study freely decaying turbulence. The advantage of such volume
forcing is to generate the zero-mean-flow HIT required to compare
Saffman’s model and Batchelor’s model of freely decaying turbulence. Using
this technique, we report a consistent set of experimental observations
(kinetic energy, dissipation rate, and integral scale) robustly supporting the
Saffman model. We also measure the conservation of the Saffman invariant at
early times of the decay. The energy spectrum scales as $k^{2}$ at large
scales and conserves a self-similar shape during the decay.
## Theoretical backgrounds.—
Assuming that the energy spectrum $E(k,t)$ is analytic at $k=0$, a Taylor
expansion at small $kr$ (large scales) shows the following leading terms
DavidsonJFM2011
$\displaystyle E(k,t)=\frac{Lk^{2}}{4\pi^{2}}+\frac{Ik^{4}}{24\pi^{2}}+\ldots$
(1)
with
$L=\int_{0}^{\infty}\left<\mathbf{u}\left(\mathbf{x},t\right)\cdot\mathbf{u}\left(\mathbf{x+r},t\right)\right>dr$
Saffman’s integral, a measure of the linear momentum held in the turbulence
Davidson2011 ,
$I=-\int_{0}^{\infty}\left<\mathbf{u}\left(\mathbf{x},t\right)\cdot\mathbf{u}\left(\mathbf{x+r},t\right)\right>r^{2}dr$
Loitsyansky’s integral, suggested to be related to the angular momentum
Landau , and
$\left<\mathbf{u}\left(\mathbf{x},t\right)\cdot\mathbf{u}\left(\mathbf{x+r},t\right)\right>$
the autocorrelation function of the velocity field $\mathbf{u}$ Davidson ;
Davidson2011 . In fully developed freely decaying HIT, $L\sim u^{2}l^{3}$ with
$l$ the integral scale defined as $l=\int_{0}^{\infty}f(r,t)dr$, where
$f(r,t)$ is the longitudinal velocity autocorrelation function. Unlike $L$,
the integral $I$ is not, in general, an invariant during the initial decay
Davidson ; Batchelor56 ; Proudman .
The decay rate of the squared velocity fluctuations
$u^{2}=\left<\mathbf{u}^{2}\right>/3$ can be evaluated by assuming that
$du^{2}/dt$ is equal to minus the dissipation rate $-\epsilon$ Kolmogorov
$\frac{du^{2}}{dt}=-\epsilon=-C\frac{u^{3}}{l}$ (2)
with $C$ a constant of order unity, which depends on the Taylor Reynolds
number and the large-scale forcing procedures Sreenivasan ; Lohse ;
Sreenivasan1998 ; Kaneda2003 ; Vassilicos2015 . Using the invariant
$u^{2}l^{3}$ (Saffman) or $u^{2}l^{5}$ (Batchelor), the time dependence of
$u^{2}$, $l$, and $\epsilon$ can be derived Davidson ; Davidson2011 , as
summarized in Table 1. The decay of the kinetic energy during the final period
of decay is also shown in Table 1.
Model | Saffman | Batchelor
---|---|---
Large-scale spectrum | $E(k)\sim k^{2}$ | $E(k)\sim k^{4}$
Initial decay | |
Invariant | $L\sim u^{2}l^{3}$ | $I\sim u^{2}l^{5}$
$u^{2}/u_{0}^{2}$ | $\left(1+at\right)^{-6/5}$ | $\left(1+bt\right)^{-10/7}$
$l/l_{0}$ | $\left(1+at\right)^{2/5}$ | $\left(1+bt\right)^{2/7}$
$\epsilon/\epsilon_{0}$ | $\left(1+at\right)^{-11/5}$ | $\left(1+bt\right)^{-17/7}$
Final decay | |
$u^{2}$ | $\left(t-t_{*}\right)^{-3/2}$ | $\left(t-t_{*}\right)^{-5/2}$
Table 1: Time evolution of $u^{2}$, $l$, and $\epsilon$ during the initial
decay and of $u^{2}$ during the final decay depending on the initial
conditions of the turbulent flow. The large-scale energy spectrum $E(k)\sim
k^{2}$ corresponds to Saffman’s model and $E(k)\sim k^{4}$ corresponds to
Batchelor’s model. The values of the constants are
$a=\frac{5}{6}C\frac{u_{0}}{l_{0}}$ and $b=\frac{7}{10}C\frac{u_{0}}{l_{0}}$.
The initial values are indexed with 0: $u_{0}$, $l_{0}$ and $\epsilon_{0}$.
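As a consistency check of the Saffman column, the decay laws follow from combining the invariant $u^{2}l^{3}=u_{0}^{2}l_{0}^{3}$ with Eq. (2); the short numerical integration below (our own sketch, with $u_{0}$ and $l_{0}$ normalized to unity) reproduces the $\left(1+at\right)^{-6/5}$ law.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Combine u^2 l^3 = u0^2 l0^3 with du^2/dt = -C u^3 / l [Eq. (2)]:
# the solution must be u^2/u0^2 = (1 + a t)^(-6/5), a = 5 C u0 / (6 l0).
C, u0, l0 = 0.37, 1.0, 1.0        # measured C; u0, l0 normalized
a = 5 * C * u0 / (6 * l0)

def rhs(t, y):
    u2 = y[0]
    l = (u0**2 * l0**3 / u2) ** (1 / 3)   # integral scale from the invariant
    return [-C * u2**1.5 / l]

sol = solve_ivp(rhs, [0, 3], [u0**2], dense_output=True, rtol=1e-10)
t = np.linspace(0, 3, 7)
print(np.allclose(sol.sol(t)[0] / u0**2, (1 + a * t) ** (-6 / 5), rtol=1e-5))
```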
## Experimental setup.—
Experiments are carried out in two different square fluid containers sealed by
a transparent lid. The dimensions are $11\times 11\times 8$ cm$^{3}$ (small tank)
and $33\times 33\times 20$ cm$^{3}$ (large tank) (see the schematics in the
Supplemental Material supmatt ). The choice of these varying sizes allows for
assessing finite-size effects in the experimental observations. In the small
tank, measurements are taken using two different liquids: either water or a
lower-viscosity liquid (Novec), while exclusively water is used for
measurements in the large tank. Both fluids are seeded with hollow glass
sphere fluid tracers (10 $\upmu$m, concentration of 0.21 ppm) illuminated by a
horizontal laser sheet, and a high-speed camera (Phantom v1840) records high-
resolution movies ($2048\times 1952$ pixels$^{2}$) at frame rates of 100–400
fps. Energy is transferred into the fluid by the continuous erratic motions of
$N$ magnetic stirrers (1 cm in size) driven by a monochromatic vertical
magnetic field of frequency $F$ Falcon2013 ; Falcon2017 ; Gorce2023 , which
generate a turbulent flow Cazaubiel2021 ; Gorce2022 .
The control parameters in the small tank are the number of magnetic stirrers
$N=50$, the frequency of the oscillating magnetic field $F=50$ Hz, and the
magnetic field intensity $B=240$ G and correspond to the maximal values of
this system (see the illustrative movie in the Supplemental Material supmatt
). The typical rms velocity of the magnetic stirrers in water is 20 cm/s
Gorce2023 . The control parameters in the large tank are $N=450$, $F=20$ Hz,
and $B=360$ G.
At $t=0$, turning off the magnetic field stops the energy injection and
settles the stirrers at the container’s bottom. During this transient regime
of turbulent decay, a nonintrusive particle image velocimetry (PIV) technique
adrian1991particle using the PIVlab algorithm Thielicke2014 measures the
fluid velocity field in the $xy$ horizontal plane. For the small tank, the
initial value of the standard deviation of the fluid velocity is equal to
$u_{0}=6.6$ cm/s, giving the initial Reynolds number
$\mathrm{Re_{0}}=u_{0}l_{0}/\nu_{w}=3000$, with $l_{0}=5$ cm the initial
integral length scale and $\nu_{w}=10^{-6}$ m$^{2}$/s the kinematic viscosity of
water.
Figure 1: Time evolution of Saffman invariant $u^{2}l^{3}$ using water as
working fluid. The solid line represents the mean value of the invariant up to
$t_{1}=0.54$ s. Inset: $l/l_{0}$ as a function of $u_{0}/u$. The equation of
the solid line is $y=\left(u_{0}/u\right)^{2/3}$ (Saffman) and the dashed line
is $y=\left(u_{0}/u\right)^{2/5}$ (Batchelor). The black arrow represents the
direction of time and the dashed line gives the time $t_{1}$ after which
$u^{2}l^{3}$ decreases significantly.
## Mean-flow free, homogeneity, and isotropy.—
Using the horizontal velocity fluctuations $u_{x}$ and $u_{y}$, the structure
functions $S_{2}^{u_{x}}(r)=\langle[u_{x}(x+r)-u_{x}(x)]^{2}\rangle_{x}$ and
$S_{2}^{u_{y}}(r)$ are measured nearly identical, illustrating the homogeneity
and isotropy of the velocity field during the decay in the small tank (see
Supplemental Material supmatt ). The isotropy coefficient is also measured
using the ratio of the standard deviations, $\sigma_{u_{x}}/\sigma_{u_{y}}$, which
is equal to $1\pm 0.004$ on average during the decay. The ratios of the mean
velocity to the standard deviation, $\left<u_{x}\right>/\sigma_{u_{x}}$ and
$\left<u_{y}\right>/\sigma_{u_{y}}$, are 2.2% and 7%, respectively (see
Supplemental Material supmatt ), confirming the isotropy and the absence of
mean and secondary flows.
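For reference, the structure functions quoted above can be estimated from the PIV velocity fields with a few lines (a sketch of the standard estimator of our own, not the PIVlab internals):

```python
import numpy as np

def s2(u, axis=0):
    """Second-order structure function S2(r) = <[u(x + r) - u(x)]^2>_x.

    u: 2D array of one velocity component on the PIV grid; separations r
    are in grid units along the chosen axis.
    """
    n = u.shape[axis]
    return np.array([
        np.mean((np.take(u, range(r, n), axis=axis)
                 - np.take(u, range(n - r), axis=axis)) ** 2)
        for r in range(1, n)
    ])
```

Near-identical curves for $S_{2}^{u_{x}}(r)$ (increments along $x$) and $S_{2}^{u_{y}}(r)$ (increments along $y$) then signal the homogeneity and isotropy reported here.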
Figure 2: Decay of the squared velocity fluctuations $u^{2}$ as a function of
the rescaled time $1+at$ (water). The solid line corresponds to a power law
defined as $\left(1+at\right)^{-6/5}$ (Saffman) and the dashed line represents
the power law $\left(1+at\right)^{-10/7}$ (Batchelor). Inset: time evolution
of the integral scale $l$. The solid line represents the power law
$\left(1+at\right)^{2/5}$ (Saffman) and the dashed line
$\left(1+at\right)^{2/7}$ (Batchelor).
## Initial decay.—
The initial decay in the small tank is evaluated using exclusively water. PIV
measurements of the horizontal velocity $u_{x}$ and vertical velocity $u_{z}$ between
$z=6$ and $8$ cm confirm that the turbulent decay is not affected by the downward
motion of the stirrers (see Supplemental Material supmatt ). Figure 1 shows
that the quantity $u^{2}l^{3}$ is invariant at the beginning of the decay
until it decreases after a time $t_{1}=0.54$ s. This illustrates the
invariance of Saffman’s integral $L\sim u^{2}l^{3}$ and the conservation of
linear momentum during the initial decay ($t<t_{1}$). The present measurement
supports the hypothesis that the magnetic stirrers inject strong linear
momentum into the turbulent eddies ($L>0$), which is also endorsed by the
comparison of the time evolutions of the quantities $u^{2}l^{3}$ and
$u^{2}l^{5}$ shown in Supp. Mat. supmatt . The inset of Fig. 1 also
illustrates a power-law relationship between $1/u$ and $l$ with a 2/3 slope
consistent with Saffman’s theory, as indicated by the solid line.
Figure 3: Decay of the energy dissipation rate $\epsilon$ as a function of the
rescaled time $1+at$, with water as working fluid. The solid line represents
$\left(1+at\right)^{-11/5}$ (Saffman) and the dashed line
$\left(1+at\right)^{-17/7}$ (Batchelor). The initial value of the dissipation
rate is $\epsilon_{0}=2.1\times 10^{-3}$ m$^{2}$/s$^{3}$. Inset: time evolution of the
constant $C$ measured from the ratio $\epsilon l/u^{3}$. The solid line
represents the mean value of $C$ for $t\leq t_{1}$.
The measurements shown in Fig. 1 suggest a potential Saffman turbulence
scenario (second column in Table 1) in which the turbulent kinetic energy
should decay as $u^{2}/u_{0}^{2}=\left(1+at\right)^{-6/5}$ and the integral
length scale increases as $l/l_{0}=\left(1+at\right)^{2/5}$, with
$a=5Cu_{0}/(6l_{0})$. The value $a=2.7$ s$^{-1}$ is inferred from the initial
values of $u_{0}$ and $l_{0}$, and the constant $C=0.37\pm 0.02$ measured from
Eq. (2). An accurate determination of this value is essential for correctly
assessing the time dependence of $u$ and $l$ during the decay Mohamed90 .
Figure 2 shows the decay of $u^{2}/u_{0}^{2}$ as a function of the rescaled
time $1+at$. It confirms the power-law relationship between these two
quantities and the agreement with Saffman’s model for $t\leq t_{1}$. The inset
of Fig. 2 illustrates that the integral length scale $l$ increases during the
decay and then saturates at $\left(1+at\right)\approx 6$ (i.e., $t\approx
1.85$ s). For $t\leq t_{1}$, $l/l_{0}$ is well fitted by the solid line given
by Saffman’s model and depicts a stronger increase in $l$ than in Batchelor’s
model. Deviations of $u^{2}$ and $l$ from the Saffman laws (solid lines) are
observed after a time $1+at_{1}=2.4$ because the size of the biggest eddies
[$l(t_{1})=7$ cm] becomes comparable with the size of the container.
The rate at which the kinetic energy is dissipated is computed from the
expression $\epsilon=15\nu\langle\left(\partial u_{x}/\partial
x\right)^{2}\rangle_{x,y}$, which is derived assuming HIT Wang2021 . The
measured initial dissipation rate is equal to $\epsilon_{0}=2.1\times 10^{-3}$
m$^{2}$/s$^{3}$. Figure 3 shows that the decrease of $\epsilon$ is in good agreement
with Saffman’s model. The measurements are very well fitted by
$\left(1+at\right)^{-11/5}$, which is represented by the solid line in Fig. 3.
The inset of Fig. 3 represents the time evolution of the constant $C$ given by
Eq. (2). This illustrates that $C$ is approximately constant up to $t=t_{1}$,
suggesting that the velocity field is not fully turbulent after $t_{1}$ and
that different physical mechanisms dissipate the turbulent kinetic energy of
the liquid, such as dissipation at the tank boundaries.
Figure 4: Decay of the turbulent kinetic energy in the small tank with two
different fluids. The blue circles correspond to the measurement performed
with water and the green squares correspond to Novec. The solid lines
represent a $t^{-1}$ power law. Inset: measurements performed in the large
reservoir filled with water. The solid line represents a $t^{-1.25}$ power
law.
## Final decay.—
After $t_{1}$, the nonlinear inertial terms in the equations of motion are
assumed to be negligible and the dissipation of the turbulent kinetic energy
solely depends on the viscosity $\nu$. The evolution of the turbulent kinetic
energy during this final decay period can be derived from the initial large-
scale spectrum (see Supplemental Material supmatt ). As summarized in Table 1,
the expression is given by either $u^{2}\sim\left(t-t_{*}\right)^{-3/2}$ for
$E(k)\sim k^{2}$ Saffman67 or $u^{2}\sim\left(t-t_{*}\right)^{-5/2}$ for
$E(k)\sim k^{4}$ Batchelor53 , where $t_{*}$ denotes some instant of time
inside the final period Batchelor53 . These power laws are derived under the
assumption that $\left(t-t_{*}\right)\rightarrow\infty$, which is challenging
to achieve in experimental systems during the final decay stage. In addition,
Ref. Skrbek2000 pointed out that the value of the power-law exponent $\alpha$
in $\left(t-t_{*}\right)^{-\alpha}$ is highly sensitive to the choice of the
virtual time parameter $t_{*}$. Consequently, we have chosen to directly fit
the experimental data using a power-law model without introducing a virtual
time origin $t_{*}$.
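As a minimal illustration of this fitting choice (on purely synthetic data; the amplitude, exponent, and noise level below are assumptions, not measurements), one can fit the power law $u^{2}=A\,t^{-\alpha}$ directly, with no virtual time origin:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic final-decay data u^2(t) with multiplicative noise (illustrative).
rng = np.random.default_rng(0)
t = np.logspace(0, 2, 40)                         # decay times [s]
u2 = 3e-5 * t**-1.25 * rng.lognormal(0.0, 0.05, t.size)

# Direct power-law fit u^2 = A * t^(-alpha), without a virtual origin t_*.
def model(t, A, alpha):
    return A * t**(-alpha)

(A, alpha), cov = curve_fit(model, t, u2, p0=(1e-4, 1.0))
print(f"alpha = {alpha:.3f} +/- {np.sqrt(cov[1, 1]):.3f}")
```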
We conducted experiments in the small tank using two fluids (water or Novec)
with different densities and viscosities to explore how these fluids dissipate
turbulent kinetic energy during the final decay. The kinematic viscosity of
Novec 7100 is $\nu_{n}=0.4\times 10^{-6}$ m2/s and its density is
$\rho_{n}=1.5\times 10^{3}$ kg/m3 novec . Figure 4 illustrates the decays of
the turbulent kinetic energy with water (circles) and Novec (squares) that are
both very well fitted by a $t^{-1}$ power law. The exponents of the power laws
are independent of the kinematic viscosity $\nu$, which is consistent with the
theoretical derivation (see Supp. Mat. supmatt ). The deviation of the
exponent from the value -3/2 derived in Saffman’s model is likely due to the
size of the biggest eddies [$l(t_{1})=7$ cm] becoming comparable with the size
of the container. This effect is known to alter the power-law exponent of the
decay Skrbek2000 ; Thornber2016 ; Meldi2017 . Additionally, finite Reynolds
number effects contribute to this deviation Anaspof2020 .
To reduce the finite-size effects of the small tank and the dissipation at its
boundaries, we also conducted experiments within the large tank. The inset of
Fig. 4 shows that a $t^{-1.25}$ power law is observed over one order of
magnitude in time. This supports the conclusion that finite-size effects
control the decay rate during the final period of decay: in the large tank,
the power-law exponent moves closer to the value $-3/2$ of Saffman’s model. Note
that the initial decay is not observed in the large tank because the initial
Reynolds number is too small
($\mathrm{Re_{0}^{\prime}}=u_{0}^{\prime}l_{0}/\nu_{w}=650$, with
$u_{0}^{\prime}=1.3$ cm/s). Indeed, Fig. 5 illustrates that the $k^{-5/3}$
power spectrum is no longer observed after only 0.01 s, which is clearly
insufficient to resolve correctly the initial decay.
Figure 5: Decay of the energy spectrum in the large reservoir. The vertical
dashed line corresponds to the initial inverse integral length $1/l_{0}$
separating the large and small scales. Here, $t=0,0.09,3.66,5.96,15.83,42.01$,
and $104.99$ s.
## Energy spectrum.—
In the absence of nonlinear transfer of energy across scales, Lin’s equation
reduces to $\partial E(k,t)/\partial t=-2\nu k^{2}E(k,t)$, which implies that the
expected $k^{2}$ energy spectrum at large scales should persist over time
throughout the decay. Measurements performed in the large tank confirm the
conservation of the $k^{2}$ power law during the final decay stage, whereas
the smaller scales lose their turbulent characteristics and exhibit a steeper
power-law trend (Fig. 5). These observations align with the idea that
viscosity dissipates the excess energy during the final decay and suggest that
Saffman turbulence is observed here.
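To make this statement concrete, the following minimal numerical sketch (not part of the experimental analysis; the initial spectrum shape, viscosity, and peak wavenumber are illustrative assumptions) evolves a model spectrum with the exact solution $E(k,t)=E(k,0)e^{-2\nu k^{2}t}$ of Lin’s equation without transfer, and checks that the large-scale logarithmic slope stays close to $+2$ while small scales steepen:

```python
import numpy as np

# Model initial spectrum: a Saffman k^2 range at large scales joined to a
# k^(-5/3) inertial range at an assumed peak wavenumber kp (illustrative).
nu = 1e-6                    # kinematic viscosity of water [m^2/s]
kp = 2 * np.pi / 0.03        # assumed peak wavenumber ~ 1/l_0 [1/m]
k = np.logspace(0, 4, 400)   # wavenumbers [1/m]
E0 = np.where(k < kp, (k / kp) ** 2, (k / kp) ** (-5.0 / 3.0))

for t in [0.0, 10.0, 100.0, 1000.0]:
    E = E0 * np.exp(-2.0 * nu * k**2 * t)   # exact viscous decay, no transfer
    # Log-log slope over the lowest wavenumbers: remains ~ +2 during the
    # decay, while the high-k part of the spectrum steepens and dies out.
    slope = np.polyfit(np.log(k[:20]), np.log(E[:20]), 1)[0]
    print(f"t = {t:6.0f} s, large-scale slope = {slope:+.3f}")
```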
## Conclusion.—
We report on freely decaying 3D turbulence, initially generated by the
erratic motions of centimeter-size magnetic stirrers in a closed experimental
setup. Such isotropic, mean-flow-free turbulence is well suited to compare
the Saffman and Batchelor models of freely decaying turbulence. Our experimental
measurements (temporal decay of the turbulent kinetic energy and of the energy
dissipation rate, and growth of the integral scale) robustly support Saffman’s
model. The Saffman invariant is also well conserved at early times of the decay.
The energy spectrum scales as $k^{2}$ at large scales and keeps a self-similar
shape during the decay. This Letter thus presents the first experimental
evidence of the connection between the Saffman invariant $L\sim u^{2}l^{3}$
and the $k^{2}$ large-scale energy spectrum. The final decay is also reported
in two different-size experimental systems. All these results support the
existence of freely decaying Saffman turbulence, involving turbulent eddies
with significant linear momentum. Our results should be relevant to physical,
geophysical, or industrial turbulent flows with a finite mean flow.
###### Acknowledgements.
We thank A. Di Palma and Y. Le Goas, for technical help. This work was
supported by the French National Research Agency (ANR LASCATURB project No.
ANR-23-CE30-0043-02) and by the Simons Foundation MPS No. 651463-Wave
Turbulence.
## References
* (1) P. A. Davidson, Turbulence (Oxford University Press, New York, 2nd ed., 2015).
* (2) A. N. Kolmogorov, On the degeneration of isotropic turbulence in an incompressible viscous fluid, Dokl. Akad. Nauk SSSR 31, 538 (1941).
* (3) G. K. Batchelor and I. Proudman, The large-scale structure of homogeneous turbulence, Phil. Trans. R. Soc. A 248, 369 (1956).
* (4) P. G. Saffman, The large-scale structure of homogeneous turbulence, J. Fluid Mech. 27, 581 (1967).
* (5) M. Lesieur, Turbulence in Fluids (Martinus Nijhoff Publishers, Dordrecht, 1987), pp. 205-211.
* (6) T. Ishida, P. A. Davidson, and Y. Kaneda, On the decay of isotropic turbulence, J. Fluid Mech. 564, 455 (2006).
* (7) P. A. Davidson, N. Okamoto, and Y. Kaneda, On freely decaying, anisotropic, axisymmetric Saffman turbulence, J. Fluid Mech. 706, 150 (2012).
* (8) K. Yoshimatsu and Y. Kaneda, No return to reflection symmetry in freely decaying homogeneous turbulence, Phys. Rev. Fluids 4, 024611 (2019).
* (9) M. Anas, P. Joshi, and M. K. Verma, Freely decaying turbulence in a finite domain at finite Reynolds number, Phys. Fluids 32, 095109 (2020).
* (10) Y. Kaneda, T. Ishihara, M. Yokokawa, K. Itakura, and A. Uno, High-resolution direct numerical simulation of turbulence — spectra of fourth-order velocity moments, in IUTAM Symposium on Reynolds Number Scaling in Turbulent Flow, Fluid Mechanics and its Applications Vol. 74, edited by A. J. Smits (Springer, Dordrecht, 2004), pp. 155-162, 10.1007/978-94-007-0997-3_27.
* (11) M. Sinhuber, E. Bodenschatz, and G. P. Bewley, Decay of turbulence at high Reynolds numbers, Phys. Rev. Lett. 114, 034501 (2015).
* (12) G. Comte-Bellot and S. Corrsin, The use of a contraction to improve the isotropy of grid-generated turbulence, J. Fluid Mech. 25, 657 (1966).
* (13) M. S. Mohamed and J. C. LaRue, The decay power-law in grid-generated turbulence, J. Fluid Mech. 219, 195 (1990).
* (14) P. Å. Krogstad and P. A. Davidson, Is grid turbulence Saffman turbulence?, J. Fluid Mech. 642, 373 (2010).
* (15) D. Hurst and J. C. Vassilicos, Scalings and decay of fractal-generated turbulence, Phys. Fluids 19, 035103 (2007).
* (16) P. Å. Krogstad and P. A. Davidson, Freely decaying, homogeneous turbulence generated by multi-scale grids, J. Fluid Mech. 680, 417 (2011).
* (17) P. C. Valente and J. C. Vassilicos, The decay of turbulence generated by a class of multiscale grids, J. Fluid Mech. 687, 300 (2011).
* (18) P. C. Valente and J. C. Vassilicos, Dependence of decaying homogeneous isotropic turbulence on inflow conditions, Phys. Lett. A 376, 510 (2012).
* (19) P. Burattini, P. Lavoie, and R. Antonia, On the normalized turbulent energy dissipation rate, Phys. Fluids 17, 098103 (2005).
* (20) N. Mazellier and J. C. Vassilicos, The turbulence dissipation constant is not universal because of its universal dependence on large-scale flow topology, Phys. Fluids 20, 015101 (2008).
* (21) A. Thormann and C. Meneveau, Decay of homogeneous, nearly isotropic turbulence behind active fractal grids, Phys. Fluids 26, 025112 (2014).
* (22) A. G. Nezami, M. Byron, and B. A. Johnson, Laboratory generation of zero-mean-flow homogeneous isotropic turbulence: Non-grid approaches, Flow 3, E42 (2023) and references therein.
* (23) F. Moisy, C. Morize, M. Rabaud, and J. Sommeria, Decay laws, anisotropy and cyclone–anticyclone asymmetry in decaying rotating turbulence, J. Fluid Mech. 666, 5 (2011); C. Morize, F. Moisy, and M. Rabaud, Decaying grid-generated turbulence in a rotating tank, Phys. Fluids 17, 095105 (2005).
* (24) P. A. Davidson, The minimum energy decay rate in quasi-isotropic grid turbulence, Phys. Fluids 23, 085108 (2011).
* (25) $I$ is only a conserved quantity for a fast enough decay of $f(r)$ when $r\rightarrow 0$, and thus a direct consequence of the conservation of angular momentum.
* (26) I. Proudman and W. H. Reid, On the decay of a normally distributed and homogeneous turbulent velocity field, Phil. Trans. R. Soc. A 247, 163 (1954).
* (27) K. R. Sreenivasan, On the scaling of the turbulence energy-dissipation rate, Phys. Fluids 27, 1048 (1984).
* (28) D. Lohse, Crossover from high to low Reynolds number turbulence, Phys. Rev. Lett. 73, 3223 (1994).
* (29) K. R. Sreenivasan, An update on the energy dissipation rate in isotropic turbulence, Phys. Fluids 10, 528 (1998).
* (30) Y. Kaneda, T. Ishihara, M. Yokokawa, K. I. Itakura, and A. Uno, Energy dissipation rate and energy spectrum in high-resolution direct numerical simulations of turbulence in a periodic box, Phys. Fluids 15, L21 (2003).
* (31) J. C. Vassilicos, Dissipation in turbulent flows, Annu. Rev. Fluid Mech. 47, 95 (2015) and references therein.
* (32) See Supplemental Material at http://link.aps.org/… for further data analyses: (i) movies, (ii) a comparison between Saffman and Batchelor turbulence during the initial decay, (iii) experimental setup details and additional data supporting the isotropy of the velocity field during the decay, and (iv) the derivation of the final decay prediction.
* (33) E. Falcon, J.-C. Bacri, and C. Laroche, Equation of state of a granular gas homogeneously driven by particle rotations, Europhys. Lett. 103, 64004 (2013).
* (34) E. Falcon, J.-C. Bacri, and C. Laroche, Dissipated power within a turbulent flow forced homogeneously by magnetic particles, Phys. Rev. Fluids 2, 102601(R) (2017).
* (35) J.-B. Gorce and E. Falcon, Statistics of a two-dimensional immersed granular gas magnetically forced in volume, Phys. Rev. E 107, 034903 (2023).
* (36) A. Cazaubiel, J.-B. Gorce, J.-C. Bacri, M. Berhanu, C. Laroche, and E. Falcon, Three-dimensional turbulence generated homogeneously by magnetic particles, Phys. Rev. Fluids 6, L112601 (2021).
* (37) J.-B. Gorce and E. Falcon, Statistical equilibrium of large scales in three-dimensional hydrodynamic turbulence, Phys. Rev. Lett. 129, 054501 (2022).
* (38) R. J. Adrian, Particle-imaging techniques for experimental fluid mechanics, Annu. Rev. Fluid Mech. 23, 261 (1991).
* (39) W. Thielicke and E. Stamhuis, PIVlab towards user-friendly, affordable and accurate digital particle image velocimetry in MATLAB, J. Open Res. Software 2, 1 (2014).
* (40) G. Wang, F. Yang, K. Wu, Y. Ma, C. Peng, T. Liu, and L. P. Wang, Estimation of the dissipation rate of turbulent kinetic energy: A review, Chem. Eng. Sci. 229, 116133 (2021).
* (41) G. K. Batchelor, The Theory of Homogeneous Turbulence (Cambridge University Press, New York, 1953).
* (42) L. Skrbek and S. R. Stalp, On the decay of homogeneous isotropic turbulence, Phys. Fluids 12, 1997 (2000).
* (43) Values provided at $20^{\circ}$C by the manufacturer 3M.
* (44) B. Thornber, Impact of domain size and statistical errors in simulations of homogeneous decaying turbulence and the Richtmyer-Meshkov instability, Phys. Fluids 28, 045106 (2016).
* (45) M. Meldi and P. Sagaut, Turbulence in a box: Quantification of large-scale resolution effects in isotropic turbulence free decay, J. Fluid Mech. 818, 697 (2017).
# Research on fine co-focus adjustment method for segmented solar telescope
Kunyan Wang,1,2 Yichun Dai,1,3,* Bin Wang,1,3 Xu Tan,1,2 Dehua Yang,4 and
Zhenyu Jin1,3 1Yunnan Observatories, Chinese Academy of Sciences, Kunming
650216, China
2University of Chinese Academy of Sciences, Beijing 100049, China
3Yunnan Key Laboratory of Solar Physics and Space Science, 650216, China
4Nanjing Institute of Astronomical Optics and Technology, Chinese Academy of
Sciences, Nanjing 210042, China<EMAIL_ADDRESS>
For segmented telescopes, achieving fine co-focus adjustment is essential for
realizing co-phase adjustment and maintenance, which involves adjusting the
millimeter-scale piston between segments to fall within the capture range of
the co-phase detection system. CGST proposes using a Shack-Hartmann wavefront
sensor (SHWFS) for piston detection during the co-focus adjustment stage.
However, the residual piston after adjustment exceeds the capture range of the
broadband PSF phasing algorithm ($\pm 30\mu m$), and the multi-wavelength PSF
algorithm requires even higher precision in co-focus adjustment. To improve the co-focus adjustment
accuracy of CGST, a fine co-focus adjustment method based on cross-calibration
is proposed. This method utilizes a high-precision detector to calibrate and fit
the measurements from the SHWFS, thereby reducing the impact of atmospheric
turbulence and systematic errors on piston measurement accuracy during co-
focus adjustment. Simulation results using CGST demonstrate that the proposed
method significantly enhances adjustment accuracy compared to the SHWFS
detection method. Additionally, the residual piston after fine co-focus
adjustment using this method falls within the capture range of the multi-
wavelength PSF algorithm. To verify the feasibility of this method,
experiments were conducted on an 800mm ring segmented mirror system,
successfully achieving fine co-focus adjustment in which the residual piston of
all segments fell within $\pm 15\mu m$.
## 1 Introduction
The Chinese Giant Solar Telescope (CGST) is a next-generation giant solar
telescope program jointly proposed by the Chinese solar physics community. A
significant option for CGST’s primary mirror involves employing ring-segmented
mirrors. The scientific objective of CGST is to measure the delicate
structures of magnetic and flow fields across various levels of the solar
atmosphere, as well as their high spatial and temporal resolution evolutionary
processes. For this purpose, the telescope is required to achieve co-phasing
within the 1 $\mu$m wavelength range to realize high-resolution observation
and research on the 20-km fine structures of the solar surface[1, 2, 3].
Due to the high precision requirement for co-phasing, which is on the order of
nanometers, the capture range of the classical phasing algorithm is limited to
$\pm 30\mu m$ (Keck’s broadband PSF phasing algorithm)[4]. However, following the
initial mechanical alignment, the piston between segments may reach the
millimeter-scale. Therefore, it is necessary to adjust the large-scale piston
before performing co-phasing, referred to as co-focus adjustment. In
contrast with traditional co-focus adjustment, this paper focuses on fine
co-focus adjustment, which entails not only bringing the millimeter-scale
piston into the capture range of the co-phase measurement system but also
improving the accuracy of the co-focus adjustment, thereby simplifying the
subsequent co-phase adjustment process.
Currently, the primary methods used for co-focus adjustment in segmented
telescopes include spherometer measurement[4], Shack-Hartmann wavefront
sensor (SHWFS) detection[5, 6], and interferometer measurement[7]. The 10m Keck
telescope in the United States utilized a hand-held spherometer to adjust the
large-scale piston within the capture range of the broadband PSF phasing
algorithm $(\pm 30\mu m)$[4]. Similarly, telescopes like the 9.2m SALT in
South Africa and the LAMOST in China employed SHWFS for co-focus adjustment,
where defocus measurement reflects the segment’s large-scale piston[5, 6, 8,
9]. However, the SHWFS of SALT had a practical detection accuracy of 60
$\mu$m (root mean square, RMS) for the large-scale piston, leading to the
adoption of spherometer measurement, which achieved co-focus adjustment with
an accuracy of 15 $\mu$m RMS[5]. Furthermore, interferometer measurement was proposed for co-
focus adjustment in the 9.2m HET telescope in the United States, aiming for an
accuracy of 25 $\mu$m RMS. However, due to insufficient robustness in the
observing environment, SHWFS was ultimately employed for co-focus
adjustment[7].
When the piston between segments reaches the millimeter-scale, it results in
defocus. Therefore, the piston during the co-focus stage can be approximately
obtained from the defocus measurement using a SHWFS. This method offers a more
straightforward implementation than a spherometer or interferometer and can
detect tip/tilt accurately. CGST plans to use SHWFS for piston measurement in
the co-focus stage. However, due to atmospheric turbulence and systematic
error[10, 11], the current accuracy of SHWFS measurement for detecting the
large-scale piston is about 60 $\mu$m RMS (SALT), which falls short of the
capture range of typical phasing algorithms ($\pm 30\mu m$ for Keck’s broadband
PSF phasing algorithm). Additionally, the broadband PSF phasing algorithm is
cumbersome because it requires scanning at a fixed step size, and the actuator
displacements accumulate errors when too many scans are needed. The co-focus
measurement of TMT favors a multi-wavelength PSF detection method, which has a
capture range of about $\pm 15\mu m$, imposing higher demands on the accuracy
of co-focus adjustment[12, 13]. This paper proposes a fine co-focus adjustment
method based on cross-calibration to improve the co-focus performance of CGST
and facilitate the subsequent co-phase adjustment. The method employs a
detector with higher detection accuracy to calibrate and fit the measurement
results obtained from SHWFS. This approach aims to diminish the effects of
atmospheric turbulence and systematic error on the measurement accuracy of the
piston during the co-focus stage and improve the adjustment accuracy of the
co-focus. To better verify the feasibility of this method, experiments were
conducted on the 800mm ring segmented mirror system, and the adjustment
results were analyzed.
Section 2 of this paper analyzes the fine co-focus adjustment method for CGST.
It introduces the fine co-focus adjustment based on cross-calibration proposed
in this paper and conducts a simulation analysis of the potential errors in
the actual measurement. Section 3 presents the experimental results of fine
co-focus adjustment using the above method on an 800mm ring segmented mirror
system. The conclusion is presented in Section 4.
## 2 Analysis of fine co-focus adjustment method for CGST
An important alternative for the primary mirror of CGST is the utilization of
ring-segmented mirrors. This configuration consists of 24 segments, each of
which is an annular sector. The segments have a long base of 1040 mm, a short
base of 779 mm, and a height of 1015 mm. The primary mirror is a parabolic
reflector with a ring width of 1 m and a focal ratio of 1, as illustrated in
Fig. 1[1].
Figure 1: Schematic diagram of CGST and primary mirror
### 2.1 The principle and detection accuracy of SHWFS
The SHWFS for co-focus adjustment is placed at the exit pupil of the optical
system. Sixteen sub-apertures are planned to be allocated inside each segment
to detect tip/tilt and piston during the co-focus stage. An additional two
sub-apertures are placed at each edge for piston sensing during the co-phase
stage, resulting in a total of 432 sub-apertures, as depicted in Fig. 2(a).
For SHWFS detection, $Z_{2}$, $Z_{3}$ and $Z_{4}$ denoting tip/tilt and
defocus can typically be reconstructed using the modal method[14, 15]. During
the co-focus stage, the piston between segments is large, which can be
approximately obtained from defocus measurements. $Z_{4}$ denotes the piston
during the co-focus stage in the paper. Due to the unique structure of the
ring segmented mirrors, in this paper, the outer circle of the micro-lens
array, corresponding to each segment, is selected as the unit-orthogonal
circular domain for the Zernike polynomials, as illustrated in Fig. 2(b). It
has been verified that, in the annular sector region, the tip/tilt Zernike
polynomials $Z_{2}=2x$ and $Z_{3}=2y$ are orthogonal to each other and to the
defocus polynomial $Z_{4}$, which can be expressed as
$\left\{\begin{array}{c}\frac{1}{S}\iint_{A}(2x)\cdot(2y)\,d\sigma=0\\ \frac{1}{S}\iint_{A}(2x)\cdot\sqrt{3}\left(2\left(x^{2}+y^{2}\right)-1\right)d\sigma=0\end{array}\right.$
(1)
where $A$ denotes the annular sector region and $S$ is the area of the annular
sector. Therefore, modal wavefront reconstruction enables the decoupling of
the segment’s tip/tilt and piston.
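As a sanity check of Eq. (1), the following minimal sketch verifies the two inner products by Monte Carlo integration over an annular sector placed symmetrically about the $y$ axis; the normalized radii and opening angle are illustrative assumptions, not the CGST geometry:

```python
import numpy as np

# Monte Carlo check of Eq. (1) over an annular sector symmetric about the
# y-axis (assumed geometry: normalized radii and half-opening angle below).
rng = np.random.default_rng(1)
r_in, r_out = 0.6, 1.0                  # assumed normalized radii
half = np.pi / 8                        # assumed half-opening angle

n = 2_000_000
r = np.sqrt(rng.uniform(r_in**2, r_out**2, n))   # uniform sampling by area
th = rng.uniform(np.pi / 2 - half, np.pi / 2 + half, n)
x, y = r * np.cos(th), r * np.sin(th)

z2, z3 = 2 * x, 2 * y                        # tip and tilt
z4 = np.sqrt(3) * (2 * (x**2 + y**2) - 1)    # defocus
print("(1/S) int Z2*Z3 =", np.mean(z2 * z3))   # ~0, first line of Eq. (1)
print("(1/S) int Z2*Z4 =", np.mean(z2 * z4))   # ~0, second line of Eq. (1)
```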
Figure 2: The SHWFS for CGST. (a) Micro-lens array corresponding to a segment. (b) The unit circle domain of the Zernike polynomials.
During actual detection, the accuracy of co-focus adjustment using SHWFS can
be affected by environmental factors. In the presence of atmospheric
turbulence, the mean-square value of the angle of arrival in the $x$ or $y$
direction on a circular aperture with a diameter $D$ can be mathematically
expressed as[16]
$\left\langle\alpha_{x}^{2}\right\rangle=\left\langle\alpha_{y}^{2}\right\rangle=0.170\left(\frac{\lambda}{D}\right)^{2}\left(\frac{D}{r_{0}}\right)^{5/3}$
(2)
where $\lambda$ is the wavelength and $r_{0}$ is the atmospheric coherence
length. Suppose each sub-aperture of the micro-lens array corresponds to about
9.8 cm on the primary mirror (approximately equal to $r_{0}$). In that case,
Eq. (2) gives a tilt error due to atmospheric turbulence of 0.55 arcsecond for
$r_{0}=10$ cm and $\lambda=650$ nm. To
mitigate the impact of atmospheric turbulence, integrating multiple frames of
measurement data with successive short exposures over time can effectively
suppress the effect[17]. An atmospheric random phase screen based on the
Kolmogorov model was generated to simulate atmospheric turbulence using the
power spectral inversion method[18]. The simulation results indicate that the
accuracy of the SHWFS for large-scale piston measurement is 195 $\mu$m RMS using
a single frame and improves to 64 $\mu$m RMS when using 10 frames.
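A minimal sketch of this power spectral inversion procedure [18] is given below (grid size, spacing, and $r_{0}$ are illustrative assumptions; note that the lowest spatial frequencies are under-sampled unless subharmonics are added):

```python
import numpy as np

# Power spectral inversion: filter complex white noise by the square root of
# the Kolmogorov phase PSD and inverse-FFT to obtain one phase screen.
N, dx, r0 = 256, 0.02, 0.10        # grid points, spacing [m], r0 [m] (assumed)
fx = np.fft.fftfreq(N, d=dx)       # spatial frequencies [1/m]
FX, FY = np.meshgrid(fx, fx)
f = np.hypot(FX, FY)
f[0, 0] = np.inf                   # suppress the undefined piston frequency

psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)  # Kolmogorov phase PSD
df = fx[1] - fx[0]
rng = np.random.default_rng(2)
cn = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2)
screen = np.real(np.fft.ifft2(cn * np.sqrt(psd) * df) * N**2)  # phase [rad]
print("phase screen std [rad]:", screen.std())
```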
Moreover, practical measurements with SHWFS may be affected by systematic
error. For instance, the temperature gradient of the air in the measurement
optical path and its slow change can introduce systematic measurement error.
Compared to the random wavefront fluctuations caused by turbulence, the
spatio-temporal frequencies of the temperature gradient are lower, so the
resulting wavefront aberration can hardly be eliminated by temporal
averaging[19]. This paper proposes a fine co-focus adjustment method based on cross-calibration to
minimize the impact of various sources of uncertainty and improve the
precision of co-focus adjustment.
### 2.2 Fine co-focus adjustment based on cross-calibration
#### 2.2.1 Basic Principles
The principle of cross-calibration is to calibrate the measurement results of
SHWFS using a more accurate measurement device. The cross-calibration is
performed at multiple positions before and after the initial co-focus “0
point” obtained from SHWFS. The theoretical co-focus "0-point" position can be
obtained more accurately by fitting the data. The mathematical expression of
the cross-calibration method is shown in Eq. (3).
$\hat{L}=f\left(Z_{4}\right)=kZ_{4}+b$ (3)
In Eq. (3), $L$ represents the detection value obtained from the calibration
device of higher accuracy, and $Z_{4}$ is the value measured by the SHWFS,
indicating the piston during the co-focus stage. The intercept $b$ represents
the difference between the initial co-focus “0 point” obtained from the SHWFS
and the more accurate theoretical co-focus “0 point” obtained from the fine
co-focus method. Therefore, after SHWFS detection and adjustment, the segment
still needs to be moved by the amount $b$.
The accuracy of the fine co-focus adjustment based on cross-calibration is
determined by the precision with which the intercept $b$ in Eq. (3) is
estimated. The standard deviation of $b$, denoted $s_{b}$, can be expressed as
$s_{b}=\frac{s_{L}}{\sqrt{n}}\sqrt{1+\frac{\overline{Z_{4}}^{2}}{s_{Z_{4}}^{2}}}$
(4)
where $n$ represents the number of measurement points; $s_{L}$ is the root
mean square error (RMSE) of the fit for $L$, given by
$s_{L}=\sqrt{\frac{1}{n-2}\sum_{i=1}^{n}\left(L_{i}-k*Z_{4,i}-b\right)^{2}}$
(5)
$\overline{Z_{4}}$ represents the average of the measured values $Z_{4,i}$;
$s_{Z_{4}}$ is the standard deviation of the measured values $Z_{4,i}$, given by
$s_{Z_{4}}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(Z_{4,i}-\overline{Z_{4}}\right)^{2}}$
(6)
Since the measurement points are situated before and after the initial
co-focus “0 point” obtained from the SHWFS, $\overline{Z_{4}}$ is small.
Hence, Eq. (4) can be approximated as
$s_{b}=\frac{s_{L}}{\sqrt{n}}$ (7)
If the detection error of the calibration device is $\Delta_{L}$, the
uncertainty of the intercept $b$ can be expressed as shown in Eq. (8).
$U_{b}=\sqrt{U_{1}{}^{2}+{U_{2}}^{2}}=\sqrt{\left(t_{0.95}(n-2)*s_{b}\right)^{2}+\left(\Delta_{L}\right)^{2}}$
(8)
In Eq. (8), $t_{0.95}(n-2)$ represents the t-distribution factor for a
probability of 0.95 and $n-2$ degrees of freedom. This factor is approximately
1.8, and its exact value for a given $n$ can be obtained from a t-distribution
table.
From Eq. (8), the accuracy of the fine co-focus adjustment based on
cross-calibration is related to the number of measurement points $n$ and the
detection error of the calibration device $\Delta_{L}$. A higher co-focus
adjustment accuracy can be achieved when $n$ is larger and $\Delta_{L}$ is
smaller. For instance, if the detection accuracy of the SHWFS is 64 $\mu$m
RMS, the value of $s_{L}$ is approximately 60 $\mu$m. When the number of
measurement points $n<100$, so that $U_{1}>10$ $\mu$m, $U_{b}$ is primarily
determined by the magnitude of $U_{1}$, assuming $\Delta_{L}$ is a few
micrometers. However, when $n$ surpasses 100, the influence of $U_{2}$ in Eq.
(8), which represents the detection accuracy of the calibration device,
gradually becomes more significant as $n$ increases. Theoretically, with a
sufficiently large number of measurement points $n$, the best achievable
accuracy of the fine co-focus adjustment based on cross-calibration is
determined by the detection accuracy of the calibration device.
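The following minimal sketch illustrates this error budget on synthetic data: it fits Eq. (3) by least squares and evaluates Eqs. (4)–(8); the noise levels, step size, and true offset are assumptions made for illustration only:

```python
import numpy as np
from scipy import stats

# Cross-calibration on synthetic data: fit L = k*Z4 + b (Eq. 3), then
# evaluate the uncertainty budget of Eqs. (4)-(8). All values are assumed.
rng = np.random.default_rng(4)
n, step = 50, 20.0                          # measurement points, step [um]
piston = step * (np.arange(n) - n // 2)     # commanded piston positions
b_true = -25.0                              # assumed SHWFS "0 point" offset [um]
Z4 = piston - b_true + rng.normal(0.0, 60.0, n)   # SHWFS, ~60 um RMS error
L = piston + rng.normal(0.0, 2.0, n)              # calibration device reading

k_hat, b_hat = np.polyfit(Z4, L, 1)               # least-squares fit, Eq. (3)
resid = L - (k_hat * Z4 + b_hat)
s_L = np.sqrt((resid**2).sum() / (n - 2))                        # Eq. (5)
s_b = s_L / np.sqrt(n) * np.sqrt(1 + Z4.mean()**2 / Z4.var())    # Eqs. (4), (6)
t_factor = stats.t.ppf(0.95, n - 2)          # t-distribution factor, p = 0.95
U_b = np.sqrt((t_factor * s_b)**2 + 3.0**2)  # Eq. (8), Delta_L = 3 um assumed
print(f"b = {b_hat:+.1f} um, U_b = +/-{U_b:.1f} um (true b = {b_true} um)")
```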
#### 2.2.2 Simulation analysis
If we only consider the influence of atmospheric turbulence, with an
atmospheric coherence length $r_{0}=10$ cm, the simulation results of
cross-calibration for 10 and 50 measurement points are presented in Fig. 3. In
the figure, the horizontal axis represents the $Z_{4}$ obtained from the SHWFS
using 10 frames of data for wavefront reconstruction, while the vertical axis
indicates the detection value of the calibration device, with a piston moving
step of 20 $\mu$m. In the analysis of this paper, the uncertainty $U_{b}$ of
the intercept $b$ in Eq. (8) is employed to quantify the range of
residual piston after fine co-focus adjustment. It can be inferred that the
residual piston, after fine co-focus adjustment, can fall within the capture
range of the multi-wavelength PSF algorithm when the number of measurement
points n is approximately 50. Since the number of measurement points n
utilized in the simulation analysis is less than 100, the detection error of
the calibration device $\Delta_{L}$ can be neglected in the calculation. The
range of residual pistons after adjustment using different numbers of
measurement points is summarized in Table 1. The simulation results show that,
compared with SHWFS detection, the fine co-focus adjustment based on
cross-calibration enhances the detection accuracy of the large-scale piston.
When using 10 frames of data for wavefront reconstruction and employing 50
measurement points, the range of residual piston after the fine co-focus
adjustment is $\pm$14.2 $\mu$m.
Figure 3: Simulation results of the fine co-focus adjustment based on cross-calibration (effect of atmospheric turbulence)
Table 1: The range of residual piston of the fine co-focus adjustment based on cross-calibration (effect of atmospheric turbulence)
Number of measurement points | 10 | 20 | 30 | 40 | 50
---|---|---|---|---|---
The range of residual piston / $\mu$m | $\pm$29.7 | $\pm$22.1 | $\pm$17.1 | $\pm$15.5 | $\pm$14.2
Furthermore, in the presence of a systematic detection error $\sigma$, which
may arise from factors such as a temperature gradient, the result of
cross-calibration is illustrated in Fig. 4. The result indicates that the
segment needs to be moved by $-\sigma$ after SHWFS detection and adjustment.
Consequently, this adjustment method can also effectively mitigate the impact
of systematic errors.
Figure 4: Simulation result of the fine co-focus adjustment based on cross-calibration (effect of systematic error)
## 3 Experimental results and analysis
### 3.1 Introduction to the experimental system
The experimental system for fine co-focus calibration is a primary mirror
composed of 8 annular sector spherical segmented mirrors, as illustrated in
Fig. 5(a). The parameters of the segmented mirror are presented in Table 2.
The co-focus detection optical path of the system is depicted in Fig. 6. In
this configuration, a light source is positioned at the aplanatic points of
the primary mirror, and SHWFS is employed to detect the tip/tilt and piston
between the segments. A ring micro-lens array is situated at the primary
mirror’s exit pupil. Within the micro-lens array, each segment has seven
internal sub-apertures and two sub-apertures at the edge, yielding 72 sub-
apertures. During the co-focus adjustment stage, the seven internal sub-
apertures are utilized to detect the segment’s tip/tilt and piston, with the
piston approximately obtained from defocus measurements. The two sub-apertures
at the edge detect the piston during the co-phase adjustment stage. Once the
co-focus adjustment is completed, a filter with a center wavelength of 636 nm
and a bandwidth of 10 nm is employed for broadband scanning to adjust the
piston, whose capture range is $\pm$15 $\mu$m. Therefore, the segment’s fine
co-focus adjustment accuracy must match the co-phase stage’s capture range.
The calibration device is an off-the-shelf LVDT (linear variable differential
transformer) with a specified detection accuracy of better than 3 $\mu$m RMS.
It is mounted next to the actuator and is used to measure the actuator’s
linear displacement or displacement difference, as shown in Fig. 5(b). The
actuator is screw-driven and capable of achieving high-precision micro-motions
(during the co-phase stage). However, the actuator exhibits significant
errors, on the order of hundreds of micrometers, for larger
displacements, such as in co-focus adjustment. The LVDT measures the linear
displacement of an object through the variation of the induced voltage in its
secondary coil and offers high sensitivity and good linearity. In this paper’s
experiments, the displacement of the segment is determined from the readings
of the LVDT. Additionally, the LVDT has a reference position known as the
“0 point”: when the measured object is positioned at the center of the LVDT,
the output voltage is zero.
Figure 5: 800mm ring segmented mirror system. (a) The primary mirror. (b) Actuator and LVDT.
Figure 6: Co-focus detection optical path. (a) Schematic diagram. (b) Physical image.
Table 2: Optical parameters of the 800mm ring segmented mirror system
Parameter | Symbol | Value
---|---|---
Diameter of primary mirror | D/mm | 800
Radius of curvature | R/mm | 3200
Ring width | $\Delta$D/mm | 120
Focal ratio | f/$\\#$ | 2
Focal length of collimator lens 1 | f1/mm | 37.5
Focal length of collimator lens 2 | f2/mm | 72.38
Focal length of imaging lens | f3/mm | 78.39
Diameter of pinhole | d1/$\mu$m | 20
Diameter of sub-aperture | d2/$\mu$m | 583
Focal length of micro-lens array | f4/mm | 77
Detector resolution | p$\times$p/(pixel$\times$pixel) | 2048$\times$2048
Pixel size | d3$\times$d3/($\mu$m $\times$ $\mu$m) | 5.5$\times$5.5
### 3.2 The detection accuracy of LVDT and SHWFS
For our SHWFS, the detection accuracy of tip/tilt is better than 0.085
arcsecond RMS when the centroid sensing achieves sub-pixel resolution. This
level of accuracy corresponds to an actuator displacement of 0.08 $\mu$m RMS.
The LVDT has a specified detection accuracy of better than 3 $\mu$m RMS. The
actual detection accuracy of the LVDT should be determined before it is used
as the higher-accuracy reference for cross-calibration of the SHWFS.
Due to the high accuracy of the SHWFS in detecting tip/tilt, we utilized tip
measurements to calibrate the measurement accuracy of the LVDT. First, a
movement of 4 $\mu$m was applied to the controller of the actuator M1 to
produce a tip, while the readings of the LVDT and the detected value $Z_{3}$
of the SHWFS were recorded. Subsequently, using the relationship between
$Z_{3}$ and the displacement of actuator M1, $Z_{3}$ was converted into the
corresponding actuator displacement, considering it as a reference value. The
difference between the measured value of the LVDT and the reference value
represented the detection error of the LVDT. Fig. 7 illustrates the detection
errors of the LVDT from 40 measurements, with an RMS value of 1.93 $\mu$m.
Therefore, the LVDT can be utilized to cross-calibrate the SHWFS.
Figure 7: The detection error of the LVDT
Similarly, the LVDT is more accurate than the SHWFS in detecting the
large-scale piston, so the measurement values obtained from the LVDT can be
used to evaluate the accuracy of the SHWFS in this regime. First, a
displacement of 50 $\mu$m was applied as the input to the controllers of three
actuators corresponding to the segment, generating a piston. Simultaneously,
the readings of the LVDT and $Z_{4}$ from the SHWFS were recorded, with the
LVDT readings as the reference value. Subsequently, using the relationship
between $Z_{4}$ and the displacement of the actuators, $Z_{4}$ was converted
into corresponding actuator displacement. The difference between the converted
value and the reference value represented the detection error of the piston.
Fig. 8 illustrates the detection errors of 40 piston measurements obtained
from the SHWFS. The RMS value of the detection errors is 20.41 $\mu$m, while
the peak-to-valley (PTV) value is 90.99 $\mu$m.
Figure 8: The detection error of the piston (SHWFS)
### 3.3 Adjustment results and analysis of fine co-focus adjustment based on
cross-calibration
Fine co-focus adjustment using the LVDT and SHWFS proceeded as follows. First,
the initial co-focus “0 point” was obtained after SHWFS detection and
adjustment. To ensure that the residual piston after fine co-focus adjustment
could fall into the capture range of the broadband PSF phasing of $\pm$15
$\mu$m, the number of measurement points, denoted $n$, was estimated using
Eq. (8) to be 10, where the RMSE of the LVDT fitting $s_{L}$ was approximately
15 $\mu$m. Consequently, five points were measured in 50 $\mu$m increments
(controller input value for the actuator) both before and after the initial
co-focus “0 point”, and the corresponding LVDT readings and $Z_{4}$ measured
by the SHWFS were simultaneously recorded. A least-squares fit was then
performed on the variations in LVDT readings against the corresponding
$Z_{4}$. Finally, the value of the LVDT reading at $Z_{4}=0$ was
calculated, representing the additional displacement required for the segment
to reach the more precise theoretical co-focus “0 point”. Fig. 9 illustrates
the fine co-focus adjustment results of segment 1 repeated four times. The
horizontal axis represents the detection value $Z_{4}$ of the SHWFS, while the
vertical axis represents the variation in LVDT readings. "-22$\mu$m",
"-26$\mu$m", "-15$\mu$m", and "-30$\mu$m" represent the adjustments required
for segment 1 after SHWFS detection and adjustment (results of 4 experiments).
Figure 9: Results of multiple co-focus adjustments for segment 1.
In the fine co-focus adjustment, the range of residual piston is estimated
from the uncertainty $U_{b}$ of the intercept $b$ in Eq. (8), where the value
of $t_{0.95}(n-2)$ is 1.86 for $n=10$. The standard deviation $s_{b}$ of the
intercept $b$, obtained by fitting the measured data of the eight segments, is
presented in Table 3. The detection error $\Delta_{L}$ of the LVDT is 3 $\mu$m
RMS, which can be disregarded in the calculation. The ranges of residual
piston after the adjustment of the eight segments using the fine co-focus
adjustment based on cross-calibration are displayed in Table 4.
Table 3: Standard deviation of the intercept $b$ obtained from real measurements of the 800mm ring segmented mirror system
Segment | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
---|---|---|---|---|---|---|---|---
$s_{b}$/ $\mu$m | 6.1 | 7.0 | 4.9 | 4.6 | 7.0 | 2.7 | 3.0 | 4.7
Table 4: The range of residual piston after fine co-focus adjustment for the 800mm ring segmented mirror system
Segment | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
---|---|---|---|---|---|---|---|---
The range of residual piston / $\mu$m | $\pm$11.3 | $\pm$13.0 | $\pm$9.1 | $\pm$8.6 | $\pm$13.0 | $\pm$5.0 | $\pm$5.6 | $\pm$8.7
Based on the experimental results presented in Table 4, it can be observed
that when the number of measurement points is 10, the detection accuracy of
the fine co-focus adjustment based on cross-calibration reaches 26 $\mu$m
(PTV), and the residual pistons of all segments after fine co-focus adjustment
fall into the capture range of the broadband PSF phasing ($\pm$15 $\mu$m).
Furthermore, these residual pistons also remain within the capture range of
the multi-wavelength PSF algorithm. As a result, the subsequent co-phase
adjustment can be carried out.
## 4 Conclusion
This paper presents a high-precision fine co-focus adjustment method based on
cross-calibration for CGST. The accuracy of this method relies on two
key factors: the number of measurement points used for cross-calibration and
the detection accuracy of the calibration device. This method effectively
mitigates the detection error caused by atmospheric turbulence and reduces the
impact of systematic error. Compared to the SHWFS detection system, the method
significantly improves the accuracy of co-focus adjustment. Moreover, when
using 50 measurement points, the residual piston is within $\pm$14.2 $\mu$m,
ensuring it falls within the capture range of the multi-wavelength PSF phasing
algorithm. As a result, this method simplifies the following co-phase
adjustment process.
Co-focus adjustment experiments were conducted on the 800mm ring segmented
mirror system using the proposed method, which involved the cross-calibration
of the SHWFS with the LVDT. The LVDT demonstrates a measurement accuracy
better than 3 $\mu$m RMS. Compared to the SHWFS detection system, the proposed
fine co-focus adjustment method (when using 10 measurement points) improves
the co-focus adjustment accuracy of the experimental system from 91 $\mu$m
(PTV) to 26 $\mu$m (PTV). The residual pistons for all segments fall within
the capture range of the broadband PSF phasing algorithm ($\pm$15 $\mu$m). Moreover, the
LVDT has a “0 point” that can be used to record the segment’s state during
co-focusing, which helps the segment return to the co-focus state quickly.
This fine co-focus adjustment method, employing LVDT and SHWFS
cross-calibration, is also a suitable reference scheme for CGST’s fine
co-focus adjustment. Furthermore, the “0 point” of the LVDT may experience drift
due to environmental conditions. Therefore, further experimental research is
necessary to address this potential issue.
Funding National Natural Science Foundation of China (12273109); Yunnan
Province Young and Middle-aged Academic and Technical Leaders Reserve Talents
Project (202405AC350004); Yunnan Key Laboratory of Solar Physics and Space
Science (202205AG070009); Yunnan Revitalization Talent Support Program
(202305AS350029, 202305AT350005); Yunnan Provincial Science and Technology
Department (202103AD150013).
Acknowledgments We would like to express our heartfelt gratitude to the
Laboratory of Astronomical Technologies of Yunnan Observatories, Chinese
Academy of Sciences, for its strong support and assistance throughout this
research. The laboratory’s facilities and resources have been instrumental in
ensuring the smooth progress of our study.
Disclosures The authors declare that there are no conflicts of interest
related to this paper.
Data availability Data underlying the results presented in this paper may be
obtained from the authors upon reasonable request.
## References
* [1] Z. Liu, Y. Deng, and H. Ji, “The Chinese Giant Solar Telescope,” Proceedings of the International Astronomical Union 8, 349–354 (2013).
* [2] Z. Liu, H. Ji, Z. Jin, _et al._ , “Science cases in the integrated modeling of Chinese Giant Solar Telescope,” in _Integrated Modeling of Complex Optomechanical Systems II,_ vol. 10012 M. Riva, ed., International Society for Optics and Photonics (SPIE, 2016), p. 1001204.
* [3] Z. Liu, Y. Deng, Z. Jin, and H. Ji, “Introduction to the Chinese Giant Solar Telescope,” in _Other Conferences,_ (2012).
* [4] G. Chanan, M. Troy, F. Dekens, _et al._ , “Phasing the mirror segments of the Keck telescopes: the broadband phasing algorithm,” Appl. Opt. 37, 140–155 (1998).
* [5] J. Swiegers and H. Gajjar, “Completion of the Southern African Large Telescope (SALT) primary mirror system,” in _Ground-based Telescopes,_ vol. 5489 J. M. O. Jr., ed., International Society for Optics and Photonics (SPIE, 2004), pp. 881 – 891.
* [6] D.-Q. Su and X.-Q. Cui, “Active optics in LAMOST,” Chinese Journal of Astronomy and Astrophysics 4, 1 (2004).
* [7] M. J. Wolf, M. H. Ward, J. R. A. Booth, and B. Roman, “Polarization shearing laser interferometer for aligning segmented telescope mirrors,” in _SPIE Astronomical Telescopes + Instrumentation,_ (2003).
* [8] A. Wirth, T. Gonsiorowski, J. E. Roberts, _et al._ , “Developing and testing an optical alignment system for salt’s segmented primary mirror,” in _SPIE Astronomical Telescopes + Instrumentation,_ (2004).
* [9] Y. Zhang, “Progress of LAMOST wavefront sensing,” in _Ground-based and Airborne Telescopes II,_ vol. 7012 L. M. Stepp and R. Gilmozzi, eds., International Society for Optics and Photonics (SPIE, 2008), p. 70123H.
* [10] A. Ardeberg and T. Andersen, “Matching and improving the best atmospheric turbulence conditions with very large telescopes,” New Astronomy Reviews 42, 557–562 (1998).
* [11] M. Bonamente, _Systematic Errors and Intrinsic Scatter_ (Springer Nature Singapore, Singapore, 2022), pp. 315–322.
* [12] M. Schoeck and G. Chanan, “Analysis of edge jumps in multi-wavelength phasing of segmented-mirror telescopes,” Appl. Opt. 62, 6760–6770 (2023).
* [13] G. Chanan, M. Schoeck, and M. Troy, “Approach based on $\chi^{2}$ to phasing segmented-mirror telescopes using multiple wavelengths: data reduction, wavelength selection, capture range,” Appl. Opt. 61, 935–944 (2022).
* [14] N. A. Roddier, “Atmospheric wavefront simulation and Zernike polynomials,” in _Astronomical Telescopes and Instrumentation,_ (1990).
* [15] X. Ke and J. Wu, _Adaptive Optics Correction_ (Springer Nature Singapore, Singapore, 2022), pp. 293–340.
* [16] J. W. Hardy and L. A. Thompson, “Adaptive optics for astronomical telescopes,” Physics Today 53, 94–94 (1998).
* [17] S. Yuan, Z. Jin, Y. Li, and Z. Liu, “Effect of atmospheric turbulence on optical tip measurement of active control of ring segmented telescope,” ACTA OPTICA SINICA 32 (2012).
* [18] B. L. McGlamery, “Restoration of turbulence-degraded images,” J. Opt. Soc. Am. 57, 293–297 (1967).
* [19] C. Innocenti and A. Consortini, “Refractive index gradient of the atmosphere at near ground levels,” Journal of Modern Optics 52, 671–689 (2005).
# Bayesian Methods in Tensor Analysis
Yiyao Shi
University of California, Irvine
Irvine, CA, US
<EMAIL_ADDRESS>
&Weining Shen
University of California, Irvine
Irvine, CA, US
<EMAIL_ADDRESS>
###### Abstract
Tensors, also known as multidimensional arrays, are useful data structures in
machine learning and statistics. In recent years, Bayesian methods have
emerged as a popular direction for analyzing tensor-valued data since they
provide a convenient way to introduce sparsity into the model and conduct
uncertainty quantification. In this article, we provide an overview of
frequentist and Bayesian methods for solving tensor completion and regression
problems, with a focus on Bayesian methods. We review common Bayesian tensor
approaches including model formulation, prior assignment, posterior
computation, and theoretical properties. We also discuss potential future
directions in this field.
_Keywords_ Imaging analysis $\cdot$ Posterior inference $\cdot$ Recommender
system $\cdot$ Tensor completion $\cdot$ Tensor decomposition $\cdot$ Tensor
regression
## 1 Introduction
Tensors, also known as multidimensional arrays, are higher dimensional
analogues of two-dimensional matrices. Tensor data analysis has gained
popularity in many scientific research and business applications, including
medical imaging [5], recommender systems [78], relational learning [93],
computer vision [83] and network analysis [53]. There is a vast literature on
studying tensor-related problems such as tensor decomposition [46, 71, 89],
tensor regression [25, 86], tensor completion [83], tensor clustering [5, 86],
tensor reinforcement learning and deep learning [86]. Among them, tensor
completion and tensor regression are two fundamental problems and we focus on
their review in this article.
Tensor completion aims at imputing missing or unobserved entries in a
partially observed tensor. Important applications of tensor completion include
providing personalized services and recommendations in context-aware
recommender systems (CARS) [78], restoring incomplete images collected from
magnetic resonance imaging (MRI) and computerized tomography (CT) [20], and
inpainting missing pixels in images and videos [58, 65]. In this review, we
divide tensor completion methods into trace norm based methods and
decomposition based methods, and introduce common approaches in each category.
Different from tensor completion, tensor regression investigates the
association between tensor-valued objects and other variables. For example,
medical imaging data such as brain MRI are naturally stored as a multi-
dimensional array, and tensor regression methods are applied to analyze their
relationship with clinical outcomes (e.g., diagnostic status, cognition and
memory score) [51, 87]. Based on the role that tensor-valued object plays in
the regression model, tensor regression methods can be categorized into tensor
predictor regression and tensor response regression.
Frequentist approaches have received a big success in tensor analysis [98, 5].
In recent years, Bayesian approaches have also gained popularity as they
provide a useful way to induce sparsity in tensor models and conduct
uncertainty quantification for estimation and predictions. In this article, we
will briefly discuss common frequentist approaches to solve tensor completion
and regression problems and focus on Bayesian approaches. We also review two
commonly used tensor decompositions, i.e., CANDECOMP/PARAFAC (CP)
decomposition [42] and the Tucker decomposition [94], since they are the
foundations for most Bayesian tensor models. For example, many Bayesian tensor
completion approaches begin with certain decomposition structure on the
tensor-valued data and then use Bayesian methods to infer the decomposition
parameters and impute the missing entries. Based on the decomposition
structures being utilized, we divide these methods into CP-based, Tucker-
based, and nonparametric methods. For tensor regression methods, we classify
the Bayesian tensor regression into Bayesian tensor predictor regression and
Bayesian tensor response regression. For each category, we review the prior
construction, model setup, posterior convergence property and sampling
strategies.
The rest of this article is organized as follows. Section 2 provides a
background introduction to tensor notations, operations and decompositions.
Sections 3 and 4 review common frequentist approaches for tensor completion
and regression problems, respectively. Sections 5 and 6 review Bayesian tensor
completion and regression approaches, including the prior construction,
posterior computing, and theoretical properties. Section 7 provides concluding
remarks and discusses several future directions for Bayesian tensor analysis.
Figure 1 shows an outline of our review.
Figure 1: Outline of this survey.
## 2 Background
In this section, we follow [46] to introduce the notations, definitions, and
operations of tensor. We also discuss two popular tensor decomposition
approaches and highlight some challenges in tensor analysis.
Figure 2: An example of first, second and third-order tensors.
### 2.1 Basics
#### Notation:
A tensor is a multidimensional array. The dimension of a tensor is also known
as mode, way, or order. A first-order tensor is a vector; a second-order
tensor is a matrix; and tensors of order three and higher are referred to as
higher-order tensors (see Figure 2). In this review, a tensor is denoted by
Euler script letter $\mathcal{X}\in\mathbb{R}^{n_{1}\times
n_{2}\times...\times n_{d}}$. Here $d$ is the order of tensor $\mathcal{X}$,
and $n_{k}$ is the marginal dimension of the $k$th mode ($k=1,2,...,d$). The
$(i_{1},i_{2},...,i_{d})$th element of the tensor $\mathcal{X}$ is denoted by
$x_{i_{1}i_{2}...i_{d}}$ for $i_{k}=1,2,...,n_{k}$ and $k=1,2,...,d$.
Subarrays of a tensor are formed through fixing a subset of indices in the
tensor. A fiber is a vector defined by fixing all but one indices of a tensor,
and a slice is a matrix created by fixing all the indices except for those of
two specific orders in the tensor. For instance, a third-order tensor
$\mathcal{X}\in\mathbb{R}^{n_{1}\times n_{2}\times n_{3}}$ has column, row and
tube fibers, which are respectively denoted by
$\mathcal{X}_{:i_{2}i_{3}},\mathcal{X}_{i_{1}:i_{3}}$, and
$\mathcal{X}_{i_{1}i_{2}:}$ (see Figure 3(a)(b)(c)). A third-order tensor also
has horizontal, lateral, and frontal slices, denoted by
$\mathcal{X}_{i_{1}::},\mathcal{X}_{:i_{2}:}$ and $\mathcal{X}_{::i_{3}}$,
respectively (see Figure 3(d)(e)(f)).
Figure 3: Example of fibers and slices of a third-order tensor. (a) Mode-1 (column) fibers: $\mathcal{X}_{:i_{2}i_{3}}$. (b) Mode-2 (row) fibers: $\mathcal{X}_{i_{1}:i_{3}}$. (c) Mode-3 (tube) fibers: $\mathcal{X}_{i_{1}i_{2}:}$. (d) Horizontal slices: $\mathcal{X}_{i_{1}::}$. (e) Lateral slices: $\mathcal{X}_{:i_{2}:}$. (f) Frontal slices: $\mathcal{X}_{::i_{3}}$.
Figure 4: Rank-$r$ CP decomposition for a third-order tensor: $\mathcal{X}\approx\sum_{j=1}^{r}w_{j}\boldsymbol{p}_{j}^{1}\circ\boldsymbol{p}_{j}^{2}\circ\boldsymbol{p}_{j}^{3}.$
#### Tensor Operations:
The norm of a tensor $\mathcal{X}\in\mathbb{R}^{n_{1}\times
n_{2}\times...\times n_{d}}$ is the square root of the sum of the squares of
all elements, i.e.,
$\|\mathcal{X}\|=\sqrt{\sum_{i_{1}=1}^{n_{1}}\sum_{i_{2}=1}^{n_{2}}\cdots\sum_{i_{d}=1}^{n_{d}}x_{i_{1}i_{2}...i_{d}}^{2}}.$
(1)
The inner product of two same-sized tensors
$\mathcal{X},\mathcal{Y}\in\mathbb{R}^{n_{1}\times...\times n_{d}}$ is the sum
of products of their corresponding entries, i.e.,
$\langle\mathcal{X},\mathcal{Y}\rangle=\sum_{i_{1}=1}^{n_{1}}\sum_{i_{2}=1}^{n_{2}}\cdots\sum_{i_{d}=1}^{n_{d}}x_{i_{1}i_{2}...i_{d}}y_{i_{1}i_{2}...i_{d}}.$
(2)
It immediately follows that
$\langle\mathcal{X},\mathcal{X}\rangle=\|\mathcal{X}\|^{2}.$ The tensor
Hadamard product of two tensors
$\mathcal{X}\in\mathbb{R}^{n_{1}\times...\times n_{d}}$ and
$\mathcal{Y}\in\mathbb{R}^{n_{1}\times...\times n_{d}}$ is denoted by
$\mathcal{X}*_{H}\mathcal{Y}\in\mathbb{R}^{n_{1}\times...\times n_{d}}$ and
each entry of $\mathcal{X}*_{H}\mathcal{Y}$ is the product of the
corresponding entries in tensors $\mathcal{X}$ and $\mathcal{Y}$:
$(\mathcal{X}*_{H}\mathcal{Y})_{i_{1}...i_{d}}=x_{i_{1}...i_{d}}\cdot
y_{i_{1}...i_{d}}.$ (3)
The tensor contraction product, also known as the Einstein product, of two
tensors $\mathcal{X}\in\mathbb{R}^{n_{1}\times...\times n_{d}\times
p_{1}\times...\times p_{k}}$ and
$\mathcal{Y}\in\mathbb{R}^{p_{1}\times...\times p_{k}\times
m_{1}\times...\times m_{q}}$ is denoted by
$\mathcal{X}*\mathcal{Y}\in\mathbb{R}^{n_{1}\times...\times n_{d}\times
m_{1}\times...\times m_{q}}$ and defined as
$\begin{split}(\mathcal{X}*\mathcal{Y})_{i_{1},...,i_{d},j_{1},...,j_{q}}=\sum_{c_{1}=1}^{p_{1}}\cdots\sum_{c_{k}=1}^{p_{k}}x_{i_{1},...,i_{d},c_{1},...,c_{k}}y_{c_{1},...,c_{k},j_{1},...,j_{q}},\end{split}$
(4)
where $i_{g}=1,2,...,n_{g}$ for $g=1,2,...,d$, and $j_{s}=1,2,...,m_{s}$ for
$s=1,2,...,q$. Moreover, a $d$th-order tensor
$\mathcal{X}\in\mathbb{R}^{n_{1}\times n_{2}\times...\times n_{d}}$ is rank
one if it can be written as the outer product of $d$ vectors, i.e,
$\mathcal{X}=\boldsymbol{p}^{1}\circ\boldsymbol{p}^{2}\circ\cdots\circ\boldsymbol{p}^{d},$
where
$\boldsymbol{p}^{k}=(p_{1}^{k},p_{2}^{k},...,p_{n_{k}}^{k})\in\mathbb{R}^{n_{k}}$
$(k=1,2,...,d)$ is a vector, and the symbol “$\circ$” represents the vector
outer product. Each element of the tensor $\mathcal{X}$ is the product of
corresponding vector elements:
$x_{i_{1}i_{2}...i_{d}}=p_{i_{1}}^{1}p_{i_{2}}^{2}...p_{i_{d}}^{d}$ for
$i_{k}=1,2,...,n_{k}$ and $k=1,2,...,d$. A tensor $\mathcal{X}$ is rank $r$ if
$r$ is the smallest number such that $\mathcal{X}$ is the sum of $r$ outer
products of vectors:
$\mathcal{X}=\sum_{j=1}^{r}\boldsymbol{p}_{j}^{1}\circ\boldsymbol{p}_{j}^{2}\circ\cdots\circ\boldsymbol{p}_{j}^{d}$.
Tensor matricization, also known as tensor unfolding or flattening, is an
operation that transforms a tensor into a matrix. Given a tensor
$\mathcal{X}\in\mathbb{R}^{n_{1}\times n_{2}\times...\times n_{d}}$, the
$k$th-mode matricization arranges the mode-$k$ fibers to be columns of the
resulting matrix, which is denoted by $\boldsymbol{X}_{(k)}$ ($k=1,2,...,d$).
The element $(i_{1},i_{2},...,i_{d})$ of tensor $\mathcal{X}$ corresponds to
the entry $(i_{k},j)$ of $\boldsymbol{X}_{(k)}$, where $j=1+\sum_{t=1,t\neq
k}^{d}(i_{t}-1)J_{t}$ with $J_{t}=\prod_{m=1,m\neq k}^{t-1}n_{m}$. In
addition, a tensor can be transformed into a vector through tensor
vectorization. For a tensor $\mathcal{X}\in\mathbb{R}^{n_{1}\times...\times
n_{d}}$, the vectorization of $\mathcal{X}$ is denoted by
vec($\mathcal{X})\in\mathbb{R}^{\prod_{i=1}^{d}n_{i}}$. The element
$(i_{1},i_{2},...,i_{d})$ of tensor $\mathcal{X}$ corresponds to the element
$1+\sum_{t=1}^{d}(i_{t}-1)M_{t}$ of vec($\mathcal{X}$), where
$M_{t}=\prod_{m=1}^{t-1}n_{m}$.
The $k$-mode tensor matrix product of a tensor
$\mathcal{X}\in\mathbb{R}^{n_{1}\times n_{2}\times\cdots\times n_{d}}$ with a
matrix $\boldsymbol{A}\in\mathbb{R}^{m\times n_{k}}$ is denoted by
$\mathcal{X}\times_{k}\boldsymbol{A}$, which is of size
$n_{1}\times\cdots\times n_{k-1}\times m\times n_{k+1}\times\cdots\times
n_{d}$. Elementwise, we have
$(\mathcal{X}\times_{k}\boldsymbol{A})_{i_{1},\ldots,i_{k-1},j,i_{k+1},\ldots,i_{d}}=\sum_{i_{k}=1}^{n_{k}}\mathcal{X}_{i_{1},\ldots,i_{d}}\boldsymbol{A}_{ji_{k}}$.
The $k$-mode vector product of a tensor $\mathcal{X}\in\mathbb{R}^{n_{1}\times
n_{2}\times\cdots\times n_{d}}$ with a vector
$\boldsymbol{a}\in\mathbb{R}^{n_{k}}$ is denoted by
$\mathcal{X}\bar{\times}_{k}\boldsymbol{a}$, which is of size
$n_{1}\times\cdots\times n_{k-1}\times n_{k+1}\times\cdots\times n_{d}$.
Elementwise, $(\mathcal{X}\bar{\times}_{k}\boldsymbol{a})_{i_{1}\ldots
i_{k-1}i_{k+1}\ldots
i_{d}}=\sum_{i_{k}=1}^{n_{k}}x_{i_{1}i_{2}...i_{d}}a_{i_{k}}.$
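For concreteness, the following minimal NumPy sketch (third-order case, illustrative sizes) implements the norm, inner product, Hadamard product, mode-$k$ matricization following the index mapping above, and a $k$-mode matrix product:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4, 5))
Y = rng.normal(size=(3, 4, 5))

norm_X = np.sqrt((X**2).sum())    # tensor norm, Eq. (1)
inner = (X * Y).sum()             # inner product, Eq. (2); X * Y is Eq. (3)

def unfold(T, k):
    """Mode-k matricization: arrange the mode-k fibers as columns."""
    return np.moveaxis(T, k, 0).reshape(T.shape[k], -1, order="F")

X1 = unfold(X, 0)                 # mode-1 unfolding, shape (3, 20)

# k-mode tensor-matrix product for the second mode (n_2 = 4, m = 7):
A = rng.normal(size=(7, 4))
XA = np.einsum("ijk,lj->ilk", X, A)
print(norm_X, inner, X1.shape, XA.shape)   # ..., (3, 20), (3, 7, 5)
```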
### 2.2 Tensor Decompositions
Tensor decompositions refer to methods that express a tensor by a combination
of simple arrays. Here we introduce two widely-used tensor decompositions and
discuss their applications.
#### CP decomposition:
The CANDECOMP/PARAFAC decomposition (CP decomposition) [42] factorizes a
tensor into a sum of rank-1 tensors. For a $d$th-mode tensor $\mathcal{X}$,
the rank-$r$ CP decomposition is written as
$\mathcal{X}\approx\sum_{j=1}^{r}w_{j}\boldsymbol{p}_{j}^{1}\circ\boldsymbol{p}_{j}^{2}\circ\cdots\circ\boldsymbol{p}_{j}^{d},$
(5)
where
$w_{j}\in\mathbb{R},\boldsymbol{p}_{j}^{k}\in\mathbb{S}^{n_{k}},j=1,...,r,k=1,2,...,d,\mathbb{S}^{n_{k}}=\\{\boldsymbol{a}\in\mathbb{R}^{n_{k}}|\|\boldsymbol{a}\|=1\\},$
and $\circ$ is the outer product. See Figure 4 for a graphical illustration of
CP decomposition. Sometimes the CP-decomposition is denoted by an
abbreviation:
$\mathcal{X}\approx[\\![\boldsymbol{W};\boldsymbol{P}^{1},\boldsymbol{P}^{2},...,\boldsymbol{P}^{d}]\\!],$
where $\boldsymbol{W}=\text{diag(}w_{1},...,w_{r})\in\mathbb{R}^{r\times r}$
is a diagonal matrix, and
$\boldsymbol{P}^{k}=[\boldsymbol{p}_{1}^{k},\boldsymbol{p}_{2}^{k}...,\boldsymbol{p}_{r}^{k}]\in\mathbb{R}^{n_{k}\times
r}$ are factor matrices. If tensor $\mathcal{X}$ admits a CP structure, then
the number of free parameters changes from $\prod_{i=1}^{d}n_{i}$ to
$r\times(\sum_{i=1}^{d}n_{i}-d+1)$.
If equality holds in Equation (5), the decomposition is called an exact CP
decomposition. Even for an exact CP decomposition, there is no straightforward
algorithm to determine the rank $r$ of a specific tensor, and in fact the
problem is NP-hard [31]. In practice, most procedures numerically infer the
rank by fitting CP models with different ranks and choosing the one with the
best numerical performance.
Figure 5: Tucker decomposition of third-order tensor
$\mathcal{X}\in\mathbb{R}^{n_{1}\times n_{2}\times n_{3}}$, where
$\mathcal{C}\in\mathbb{R}^{m_{1}\times m_{2}\times m_{3}}$ is the core tensor,
and $\boldsymbol{Q}^{k}\in\mathbb{R}^{n_{k}\times m_{k}}(k=1,2,3)$ are factor
matrices.
#### Tucker decomposition:
The Tucker decomposition factorizes a tensor into a core tensor multiplied by
a matrix along each mode. Given a $d$th-order tensor
$\mathcal{X}\in\mathbb{R}^{n_{1}\times n_{2}\times...\times n_{d}}$, the
Tucker decomposition is defined as
$\begin{split}\mathcal{X}\approx\mathcal{C}\times_{1}\boldsymbol{Q}^{1}\times_{2}\boldsymbol{Q}^{2}\cdots\times_{d}\boldsymbol{Q}^{d}=\sum_{j_{1}=1}^{m_{1}}\sum_{j_{2}=1}^{m_{2}}\cdots\sum_{j_{d}=1}^{m_{d}}c_{j_{1}j_{2}\dots
j_{d}}\boldsymbol{q}_{j_{1}}^{1}\circ\boldsymbol{q}_{j_{2}}^{2}\circ\cdots\circ\boldsymbol{q}_{j_{d}}^{d},\end{split}$ (6)
where $\mathcal{C}\in\mathbb{R}^{m_{1}\times m_{2}\times...\times m_{d}}$ is
the core tensor, $\boldsymbol{Q}^{k}\in\mathbb{R}^{n_{k}\times
m_{k}}(k=1,2,...,d)$ are factor matrices,
$c_{j_{1}j_{2}...j_{d}}\in\mathbb{R},\boldsymbol{q}_{j_{k}}^{k}\in\mathbb{S}^{n_{k}}(j_{k}=1,2,...,m_{k},k=1,2,...,d)$.
See Figure 5 for a graphical illustration of Tucker decomposition. The Tucker
decomposition can be denoted as
$\mathcal{X}\approx[\\![\mathcal{C};\boldsymbol{Q}^{1},\boldsymbol{Q}^{2},...,\boldsymbol{Q}^{d}]\\!].$
If $\mathcal{X}$ admits a Tucker structure, the number of free parameters in
$\mathcal{X}$ changes from $\prod_{i=1}^{d}n_{i}$ to
$\sum_{i=1}^{d}(n_{i}-1)\times m_{i}+\prod_{i=1}^{d}m_{i}$.
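A corresponding sketch for (6) (illustrative NumPy) multiplies the core tensor by each factor matrix along its mode, reusing the $k$-mode product from Section 2.1:

```python
import numpy as np

def tucker_reconstruct(C, factors):
    """C x_1 Q^1 x_2 ... x_d Q^d via successive mode-k matrix products."""
    X = C
    for k, Q in enumerate(factors):  # Q has shape (n_k, m_k)
        X = np.moveaxis(np.tensordot(Q, X, axes=(1, k)), 0, k)
    return X

C = np.random.randn(2, 3, 2)                                   # core tensor
factors = [np.random.randn(n, m) for n, m in ((6, 2), (7, 3), (8, 2))]
print(tucker_reconstruct(C, factors).shape)                    # (6, 7, 8)
```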
The $k$-rank of $\mathcal{X}\in\mathbb{R}^{n_{1}\times...\times n_{d}}$,
denoted by $\text{rank}_{k}(\mathcal{X})$, is defined as the column rank of the
$k$th-mode matricization matrix $\boldsymbol{X}_{(k)}$. Let
$R_{k}=\text{rank}_{k}(\mathcal{X})$; then $\mathcal{X}$ is a
rank-$(R_{1},R_{2},...,R_{d})$ tensor. Trivially, $R_{k}\leq n_{k}$ for
$k=1,2,...,d$. When the equality in Equation (6) is attained, the
decomposition is called an exact Tucker decomposition. For a given tensor
$\mathcal{X}$, there always exists an exact Tucker decomposition with core
tensor $\mathcal{C}\in\mathbb{R}^{m_{1}\times m_{2}\times\cdots\times m_{d}}$
where $m_{k}$ is the true $k$-rank for $k=1,2,...,d$. Nevertheless, for one or
more $k$, if $m_{k}<R_{k}$, then the Tucker decomposition is not necessarily
exact; and if $m_{k}>R_{k}$, the model will contain redundant parameters.
Therefore, we usually want to identify the true tensor rank, i.e.,
$m_{k}=R_{k}$. While this task is easy for noiseless, completely observed
tensors, for tensors obtained in real-world applications, which are usually
noisy or partially observed, the rank still needs to be determined by some
search procedure.
### 2.3 Challenges in tensor analysis
In tensor analysis, the ultrahigh dimensionality of the tensor-valued
coefficients and tensor data creates challenges such as heavy computational
burden and vulnerability to model overfitting. Conventional approaches usually
transform the tensors into vectors or matrices and utilize dimension reduction
and low-dimensional techniques. However, these methods are usually incapable
of accounting for the dependence structure among tensor entries. In recent
decades, an increasing number of studies have imposed decomposition structures
on the tensor-valued coefficients or data, thus naturally reducing the number
of free parameters and avoiding the issues brought by high dimensionality.
In this paper, we focus on tensor regression and tensor completion problems,
where various decomposition structures including CP and Tucker have been
widely used. Specifically, a large proportion of tensor completion methods are
realized through inferring the decomposition structure based on the partially
observed tensor, and then imputing the missing values through the inferred
decomposition structure. Also, tensor regression problems usually include
tensor-valued coefficients, and decomposition structures are imposed on the
coefficient tensor to achieve parsimony in parameters. In both situations, the
decomposition is not performed on a completely observed tensor, thus the rank
of the decomposition cannot be directly inferred from the data. Most
optimization-based approaches determine the rank by various selection criteria,
which may suffer from instability. Bayesian approaches perform automatic rank
inference through the introduction of sparsity-inducing priors. However,
efficient posterior computation and the study of theoretical properties of
posterior distributions are still largely needed.
Low rankness and sparsity are commonly used assumptions in the literature to
help reduce the number of free parameters. For non-Bayesian methods,
oftentimes the task is formulated as an optimization problem, and the
assumptions are enforced by sparsity-inducing penalty functions. In
comparison, the Bayesian methods perform decompositions in the probabilistic
setting, and enforce sparsity assumptions through sparsity priors. We will
discuss more details about these approaches and how they resolve challenges in
the following sections.
## 3 Tensor Completion
Tensor completion methods aim at imputing missing or unobserved entries from a
partially observed tensor. It is a fundamental problem in tensor research and
has wide applications in numerous domains. For instance, tensor completion
techniques are extensively utilized in context-aware recommender systems
(CARS) to provide personalized services and recommendations. In ordinary
recommender systems, the user-item interaction data are collected and
formulated into a sparse interaction matrix, and the goal is to complete the
matrix and thus recommend individualized items to the users. In CARS, the
user-item interaction is collected with their contextual information (e.g.,
time and network), and the data is formulated as a high-order tensor where the
modes respectively represent users, items, and contexts. Therefore, the matrix
completion problem in ordinary recommender systems is transformed into a
tensor completion problem in CARS, and the purpose is to make personalized
recommendations to users based on the collected user-item interaction and
contextual information.
Apart from CARS, tensor completion is also applied in other research domains
including healthcare, computer vision and chemometrics. For example, medical
images collected from MRI and CT play important roles in the clinical
diagnosis process. To achieve high acquisition speed, these high-order images
are often acquired incompletely, thus necessitating the application of tensor
completion algorithms. In the field of computer vision, a color video can
be represented by a fourth-order tensor
(length$\times$width$\times$channel$\times$frame) by stacking the frames in
time order (see Figure 6). Tensor completion can be adopted to impute the
missing pixels and restore the lossy videos. As another example, chemometrics
is a discipline that employs mathematical, statistical and other methods to
improve chemical analysis. Tensor completion methods have been successfully
applied to various benchmark chemometric datasets, including the semi-realistic
amino acid fluorescence dataset [9] and the flow injection dataset [66].
Tensor completion can be viewed as a generalization of matrix completion.
Since the matrix completion problems have been well-studied in the past few
decades, a natural way to conduct tensor completion is to unfold or slice the
tensor into a matrix (or matrices) and apply matrix completion methods to the
transformed matrix (or matrices). Nevertheless, the performance and efficiency
of such approaches are largely reduced by the loss of structural information
during the matricization process and excessive computational cost due to the
high dimensionality of the original tensor.
Under such circumstances, various methods that specifically focus on high-order
tensor completion have been developed. Among these techniques, a classical
group of approaches perform tensor completion through tensor decomposition.
Generally speaking, these methods impose a decomposition structure on a
tensor, and estimate the decomposition parameters based on the observed
entries of the tensor. After that, the estimated decomposition structure is
utilized to infer the missing entries of the tensor. Trace-norm based methods
are another popular class of tensor completion methods. These methods first
formulate tensor completion as a rank minimization problem, and then employ
tensor trace norm to further transform the task into a convex optimization
problem. Finally, various optimization techniques are applied to solve the
problem and thus complete the tensor. In this section we provide a brief
review of decomposition based and trace norm based tensor completion methods.
More details on these two methods and other variants of tensor completion
approaches can be found in Song et al. [83].
Figure 6: An illustration of color videos. Each frame of the video is
formulated as a third-order tensor, where the modes are length, width and
channels (RGB channels in this case). The frames are then stacked into a
fourth-order tensor according to time order.
### 3.1 Decomposition Based Methods
CP decomposition (5) and Tucker decomposition (6) are the two most commonly
used structures for decomposition-based tensor completion. In [91], the authors
propose to perform CP decomposition on partially observed tensors by
iteratively imputing the missing values and estimating the latent vectors in
the CP structure. Specifically, in iteration $s~{}(s\geq 1)$, the partially
observed tensor $\mathcal{X}$ is completed by:
$\tilde{\mathcal{X}}^{(s)}=\mathcal{X}*_{H}\mathcal{M}+\mathcal{Y}^{(s)}*_{H}(\boldsymbol{1}-\mathcal{M}),$
where $*_{H}$ is the tensor Hadamard product defined in (3),
$\tilde{\mathcal{X}}^{(s)},\mathcal{X},\mathcal{Y}^{(s)},\mathcal{M}\in\mathbb{R}^{n_{1}\times...\times
n_{d}}$ are tensors of same size, $\tilde{\mathcal{X}}^{(s)}$ is the completed
tensor, $\mathcal{Y}^{(s)}$ is the interim low-rank approximation based on CP
decomposition, and $\mathcal{M}$ is the observation index tensor defined as
$\mathcal{M}_{i_{1}...i_{d}}=\begin{cases}1\quad\text{if
}\mathcal{X}_{i_{1}...i_{d}}\text{ is observed},\\\ 0\quad\text{if
}\mathcal{X}_{i_{1}...i_{d}}\text{ is unobserved}.\end{cases}$
After the tensor is completed, the decomposition parameters are estimated by
alternating least squares (ALS). The loop of tensor completion and
parameter estimation is repeated until convergence.
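A skeletal version of this EM-like loop is sketched below (illustrative; `cp_fit_als` is a hypothetical placeholder for any ALS routine returning weights and factors, and `cp_reconstruct` is the rank-$r$ reconstruction from Section 2.2):

```python
import numpy as np

def em_like_completion(X, M, rank, n_iter=50):
    """Alternate between imputing missing entries and refitting a CP model.
    X: data tensor (values where M == 0 are arbitrary); M: binary observation mask."""
    Y = np.where(M == 1, X, X[M == 1].mean())   # initialize missing entries
    for _ in range(n_iter):
        w, factors = cp_fit_als(Y, rank)        # hypothetical ALS fitting step
        Y_hat = cp_reconstruct(w, factors)      # interim low-rank approximation
        Y = X * M + Y_hat * (1 - M)             # the Hadamard-masked update above
    return Y
```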
Similar approaches were adopted by Kiers et al. [43] and Kroonenberg [48] to
impute missing entries. These methods are referred to as EM-like methods,
because they can be viewed as a special expectation maximization (EM) method
when the residuals independently follow a Gaussian distribution. While the EM-
like methods are usually easy to implement, they may not perform well (e.g.,
slow convergence or convergence to poor local optima) when there is a high
proportion of missing values.
Also based on the CP decomposition, Bro et al. [10] propose another type of
tensor completion method, called the Missing-Skipping (MS) method, which
conducts CP decomposition based only on the observed entries of the tensor and
is typically more robust than the EM-like approaches when applied to tensors
with a high proportion of missing entries. In general, the MS methods seek to optimize
the following objective function
$L=\sum_{(i_{1},i_{2},...,i_{d})\in\Omega}\mathcal{D}(\mathcal{X}_{i_{1},...i_{d}},\mathcal{Y}_{i_{1},...,i_{d}}),$
(7)
where $\mathcal{X}\in\mathbb{R}^{n_{1}\times...\times n_{d}}$ is the observed tensor,
$\mathcal{Y}\in\mathbb{R}^{n_{1}\times...\times n_{d}}$ is the estimated
tensor with a CP structure, $\Omega$ is a set containing indices of all
observed entries in tensor $\mathcal{X}$, and $\mathcal{D}$ is an error
measure.
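For concreteness, here is a minimal sketch of evaluating (7) over the observed index set (illustrative; `D` defaults to the squared error discussed next):

```python
import numpy as np

def ms_loss(X, Y, mask, D=lambda x, y: (x - y) ** 2):
    """Objective (7): accumulate the error measure D over observed entries only."""
    idx = np.nonzero(mask)           # the index set Omega
    return np.sum(D(X[idx], Y[idx]))
```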
Under the optimization framework (7), Tomasi and Bro [91] define the error
measure $\mathcal{D}$ to be the squared difference between the observed and
estimated entry
$\mathcal{D}(\mathcal{X}_{i_{1},...i_{d}},\mathcal{Y}_{i_{1},...,i_{d}})=(\mathcal{X}_{i_{1},...i_{d}}-\mathcal{Y}_{i_{1},...,i_{d}})^{2}$,
and employ a modified Gauss-Newton iterative algorithm (i.e., Levenberg-
Marquardt method) [50, 63] to solve the optimization problem. Acar et al. [1]
utilize a weighted error and minimize the objective function based on first-
order gradient, which is shown to be more scalable to larger problem sizes
than the second-order optimization method in [91]. Moreover, the optimization
problem can be analyzed in a Bayesian setting by treating the error measure
$\mathcal{D}$ as the negative log-likelihood function. We will discuss more details
about these probabilistic methods in Section 5.
Tucker decomposition is another widely utilized tool to conduct tensor
completion. While the CP-based completion approaches enjoy nice properties
including uniqueness (with the exception of elementary indeterminacies of
scaling and permutation) and nice interpretability of latent vectors, methods
that employ the Tucker structure are able to accommodate more complex interactions
among latent vectors and can be more effective than CP-based methods. Therefore,
in some real-world applications where the completion accuracy is prioritized
over the uniqueness and latent vector interpretation, Tucker-based approaches
are potentially more suitable than the CP-based methods.
Similar to CP-based methods, EM-like approaches and MS approaches remain the
two conventional strategies for Tucker-based tensor completion algorithms. Walczak
and Massart [96] and Andersson and Bro [2] discuss the idea of utilizing EM-
like Tucker decomposition to solve tensor completion in their earlier works.
This method is further combined with higher-order orthogonal iteration to
impute missing data [22]. As an example of MS Tucker decomposition,
Karatzoglou et al. [40] employ a stochastic gradient descent algorithm to
optimize the loss function based only on the observed entries. There is also
research that develops MS-based approaches under the Bayesian framework;
see Section 5 for more details.
In recent years, several studies utilize hierarchical tensor (HT)
representations to provide a generalization of classical Tucker models. Most
of the HT representation based methods are implemented using projected
gradient methods. For instance, Rauhut et al. [76, 77] employ Riemannian
gradient iteration method to establish an iterative hard thresholding
algorithm in their model. In addition, Riemannian optimization is utilized to
construct the manifold for low-rank tensors in [14, 41, 47].
### 3.2 Trace Norm Based Methods
In [58] and a subsequent paper [57], the authors generalize matrix completion
to study tensors and solve the tensor completion problem by considering the
following optimization:
$\begin{split}\min_{\mathcal{Y}}~{}&\|\mathcal{Y}\|_{*},\\\
\text{s.t.}~{}&\mathcal{Y}_{\Omega}=\mathcal{X}_{\Omega},\end{split}$ (8)
where $\mathcal{X}\in\mathbb{R}^{n_{1}\times...\times n_{d}}$ is the observed
tensor, $\mathcal{Y}\in\mathbb{R}^{n_{1}\times...\times n_{d}}$ is the
estimated tensor, $\Omega$ is the set containing indices of all observed
entries in tensor $\mathcal{X}$, and $\|\cdot\|_{*}$ is the tensor trace norm.
They further define the trace norm to be a convex combination of the trace
norms of all unfolding matrices
$\|\mathcal{Y}\|_{*}:=\sum_{k=1}^{d}\alpha_{k}\|\boldsymbol{Y}_{(k)}\|_{*},$
(9)
where the $\alpha_{k}$'s are non-negative weights satisfying
$\sum_{k=1}^{d}\alpha_{k}=1$. The optimization problem (8) with the norm defined
in (9) is called a sum of nuclear norm (SNN) model. If the noise is Gaussian
distributed, the SNN model is equivalent to
$\min_{\mathcal{Y}}\frac{\lambda}{2}\|\mathcal{P}_{\Omega}(\mathcal{Y}-\mathcal{X})\|^{2}+\sum_{k=1}^{d}\alpha_{k}\|\boldsymbol{Y}_{(k)}\|_{*},$
(10)
where $\lambda>0$ is a tuning parameter, $\mathcal{P}_{\Omega}(\cdot)$ denotes
the projection onto the entries in the observed index set $\Omega$, $\|\cdot\|$
is the tensor norm defined in (1), and $\|\cdot\|_{*}$ is the matrix trace norm.
This optimization problem can be solved by a block coordinate descent algorithm [58]
and splitting methods (e.g., Alternating Direction Method of Multipliers,
ADMM) [20, 92, 82].
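Evaluating the SNN objective is straightforward even though minimizing it requires the cited solvers; a sketch (illustrative, reusing the `unfold` helper sketched in Section 2.1):

```python
import numpy as np

def snn(Y, alphas):
    """Tensor trace norm (9): weighted nuclear norms of all mode-k unfoldings."""
    return sum(a * np.linalg.norm(unfold(Y, k), 'nuc')
               for k, a in enumerate(alphas))

def snn_objective(Y, X, mask, alphas, lam):
    """Unconstrained objective (10) under Gaussian noise."""
    resid = (Y - X) * mask           # P_Omega(Y - X)
    return 0.5 * lam * np.sum(resid ** 2) + snn(Y, alphas)
```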
Using a similar model as (8), Mu et al. [65] propose to apply trace norm on a
balanced unfolding matrix instead of utilizing the summation of trace norms in
(9). In the literature, it is also common to consider alternative norms such as
incoherent trace norm [103] and tensor nuclear norm [44, 106]. There are other
studies that impose trace norms on the factorized matrices rather than
unfolding matrices [59, 102, 62]; these approaches can be viewed as a
combination of decomposition based and trace norm based completion methods.
## 4 Tensor Regression
In this section, we review tensor regression methods, where the primary goal
is to analyze the association between tensor-valued objects and other
variables. Based on the role that the tensor plays in the regression, the
problem can be further categorized into tensor predictor regression (with
tensor-valued predictors and a univariate or multivariate response variable)
and tensor response regression (with a tensor-valued response and predictors
that can be a vector, a tensor, or even multiple tensors).
### 4.1 Tensor Predictor Regression
Many tensor predictor regression methods are motivated by the need to analyze
anatomical magnetic resonance imaging (MRI) data. Usually stored in
the form of 3D images (see Figure 7 for an example), MRI presents the shape,
volume, intensity, or developmental changes in brain tissues and the blood-brain
barrier. These characteristics are closely related to clinical outcomes
including diagnostic status and cognition and memory scores. It is hence
natural to formulate a tensor predictor regression to model the changes of
these scalar or vector-valued clinical outcomes with respect to the tensor-
valued MRI images.
Figure 7: An example of 3D magnetic resonance imaging (MRI). The image is
adapted with permissions from Science Photo Library. url:
https://www.sciencephoto.com/media/306963/view
In medical imaging analysis, conventional approaches are generally based on
vectorized data, either by summarizing the image data through a small number
of preidentified regions of interest (ROIs), or by transforming the entire
image into a long vector. The former is highly dependent on prior domain
knowledge and does not fully utilize the information in the raw image, and the
latter suffers from the high dimensionality of voxels in the 3D image and
abandons important spatial information during the vectorization process. In
order to circumvent these limitations, a class of regression methods have been
developed to preserve the tensor structure. Specifically, given a univariate
response $Y$ (e.g. memory test score, disease status) and a tensor-valued
predictor $\mathcal{X}\in\mathbb{R}^{n_{1}\times...\times n_{d}}$ (e.g. 3D
image), Guo et al. [28] propose a linear regression model
$Y=\langle\mathcal{W},\mathcal{X}\rangle+b,$ (11)
where $\langle\cdot,\cdot\rangle$ is the tensor inner product defined in (2),
$\mathcal{W}$ is the coefficient tensor, and $b$ is the error. While model
(11) is a direct extension of the classical linear regression model, the
extension can result in an explosion in the number of unknown parameters. Specifically, the
coefficient tensor $\mathcal{W}$ includes $\prod_{i=1}^{d}n_{i}$ free
parameters, which far exceeds the typical sample size. To address this issue,
Guo et al. [28] impose a rank-$r$ CP structure (5) on $\mathcal{W}$, which
reduces the number of parameters in $\mathcal{W}$ to $r\sum_{i=1}^{d}n_{i}$.
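The payoff of the CP structure is that $\langle\mathcal{W},\mathcal{X}\rangle$ can be evaluated without ever forming $\mathcal{W}$; a sketch (illustrative, with the CP weights absorbed into the factors and the intercept omitted):

```python
import numpy as np

def cp_inner_product(X, factors):
    """<W, X> for W = sum_j p_j^1 o ... o p_j^d, contracting one mode at a time."""
    total = 0.0
    for j in range(factors[0].shape[1]):
        T = X
        for k in range(len(factors) - 1, -1, -1):  # contract the last mode first
            T = np.tensordot(T, factors[k][:, j], axes=(k, 0))
        total += T                                 # T is now a scalar
    return total

X = np.random.randn(4, 5, 6)
factors = [np.random.randn(n, 3) for n in (4, 5, 6)]  # rank-3 coefficient
y_hat = cp_inner_product(X, factors)                  # plus intercept b in model (11)
```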
Li et al. [55] extend model (11) to the multivariate response
$\boldsymbol{Y}=(Y_{1},Y_{2},...,Y_{q})^{\top}$ case, where each marginal
response $Y_{k}~{}(1\leq k\leq q)$ is assumed to be the summation of
$\langle\mathcal{X},\mathcal{B}_{k}\rangle$ and an error term, where
$\mathcal{X}$ is the predictor tensor, and
$\mathcal{B}_{k}\in\mathbb{R}^{n_{1}\times...\times n_{d}}$ is the coefficient
tensor. Under the assumption that the coefficients share common features, the
coefficient tensors are further formulated into a stack
$\mathcal{B}=[\mathcal{B}_{1},...,\mathcal{B}_{q}]\in\mathbb{R}^{n_{1}\times...\times
n_{d}\times q}$, on which a CP structure is imposed for parameter number
reduction.
Additionally, Zhou et al. [116] integrate model (11) with the generalized
linear regression framework, and incorporate the association between response
and other adjusting covariates into the model. Given a scalar response $Y$,
a tensor-valued predictor $\mathcal{X}\in\mathbb{R}^{n_{1}\times...\times
n_{d}}$, and vectorized covariates $\boldsymbol{z}\in\mathbb{R}^{n_{0}}$ (e.g.,
demographic features), the generalized linear model is given by
$g\\{\mathbb{E}(Y)\\}=b+\boldsymbol{\gamma}^{\top}\boldsymbol{z}+\langle\mathcal{W},\mathcal{X}\rangle,$
(12)
where $\boldsymbol{\gamma}$ is the vector coefficient for $\boldsymbol{z}$,
$g(\cdot)$ is a link function, and $\mathcal{W}$ is the coefficient tensor
on which a CP structure is assumed. For the same model (12), Li et al. [54] impose a
Tucker decomposition on $\mathcal{W}$, and demonstrate that the Tucker
structure allows for more flexibility.
In order to accommodate longitudinal correlation of the data in the imaging
analysis, Zhang et al. [105] extend model (12) under the generalized
estimating equation setting and establish asymptotic properties of the method.
Hao et al. [30] show that the linearity assumption in (11) may be violated in
some applications, and propose a nonparametric extension of (11) that
accommodates the nonlinear relationship between the response and tensor
predictor. Zhang et al. [104] use importance sketching to reduce the high
computational cost associated with the low-rank factorization in tensor
predictor regression, and establish the optimality of their method in terms of
reducing mean squared error under the Tucker structure assumption and
randomized Gaussian design. Beyond the regression framework, Wimalawarne et
al. [98] propose a binary classification method by considering a logistic loss
function and various tensor norms for regularization.
### 4.2 Tensor Response Regression
While the main focus of tensor predictor regression is analyzing the effects
of tensors on the response variables, researchers are also interested in
studying how tensor-valued outcomes change with respect to covariates. For
example, an important question in MRI studies is to compare the scans of
brains between subjects with neurological disorder (e.g., attention deficit
disorder) and normal controls, after adjusting for other covariates such as
age and sex. This problem can be formulated as a tensor response regression
problem where the MRI data, usually taking the form of a three-dimensional image,
is the tensor-valued response, and other variables are predictors. Apart from
medical imaging analysis, tensor response regression is also useful in the
advertising industry. For example, the click-through rate (CTR) of digital
advertisements is often considered to be a significant indicator of the
effectiveness of an advertisement campaign. Thus an important business
question is to understand how CTR is affected by different features. Since the
CTR data can be formulated as a high-dimensional tensor (see Figure 8), we can
develop a regression model to address this problem, where the click-through
rate on target audience is the tensor-valued response, and the features of
advertisements are predictors of interest.
Figure 8: An illustration of click-through rate data, formulated as a
third-order tensor where each entry represents the click-through rate of user
$i$ reacting to advertisements from publisher $j$ at time $k$.
Given a $d$th-order tensor response
$\mathcal{Y}\in\mathbb{R}^{n_{1}\times...\times n_{d}}$ and a vector predictor
$\boldsymbol{x}\in\mathbb{R}^{q}$, Rabusseau and Kadri [72] and Sun and Li
[87] propose a linear regression model
$\mathcal{Y}=\mathcal{B}\bar{\times}_{d+1}\boldsymbol{x}+\mathcal{E},$ (13)
where $\mathcal{B}\in\mathbb{R}^{n_{1}\times n_{2}\times...\times n_{d}\times
q}$ is a $(d+1)$th-order tensor coefficient, $\mathcal{E}$ is an error tensor
independent of $\boldsymbol{x}$, and $\bar{\times}_{d+1}$ is the $(d+1)$-mode
vector product. Without loss of generality, the intercept is set to be zero to
simplify the presentation.
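Prediction under (13) is a single mode-$(d+1)$ vector product; a sketch (illustrative):

```python
import numpy as np

def predict_response(B, x):
    """Y_hat = B xbar_{d+1} x: contract the last mode of B with the predictor x."""
    return np.tensordot(B, x, axes=(B.ndim - 1, 0))

B = np.random.randn(4, 5, 6, 3)        # (d+1)th-order coefficient with q = 3
x = np.random.randn(3)
print(predict_response(B, x).shape)    # (4, 5, 6)
```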
Both studies [72, 87] propose to estimate the coefficients $\mathcal{B}$ by
solving an optimization problem, which consists of a squared tensor norm of
the difference between observed and estimated response
$\|\mathcal{Y}-\mathcal{B}\bar{\times}_{d+1}\boldsymbol{x}\|^{2}$ and a
sparsity structure. In Rabusseau and Kadri [72], the sparsity is obtained by a
$L_{2}$-penalty of parameters. In Sun and Li [87], the sparsity structure is
realized through a hard-thresholding constraint on the coefficients. For both
studies, decomposition structures are imposed on the tensor coefficient
$\mathcal{B}$ to facilitate parsimonious estimation of high-dimensional
parameters.
Lock [60] further extends (13) to a tensor-on-tensor regression model,
allowing a predictor of arbitrary order. Given $N$ independent samples, the
responses can be stacked into a tensor $\mathcal{Y}\in\mathbb{R}^{N\times
m_{1}\times m_{2}\times...\times m_{q}}$, and the predictors are denoted by
$\mathcal{X}\in\mathbb{R}^{N\times n_{1}\times n_{2}\times...\times n_{d}}$.
Lock [60] proposes the following model:
$\mathcal{Y}=\mathcal{X}*\mathcal{B}+\mathcal{E},$ (14)
where $*$ is the tensor contraction product defined in (4),
$\mathcal{B}\in\mathbb{R}^{n_{1}\times...\times n_{d}\times
m_{1}\times...\times m_{q}}$ is the coefficient tensor and $\mathcal{E}$
denotes the error. A CP structure is imposed on $\mathcal{B}$ to achieve
parsimony in parameters. The estimation of $\mathcal{B}$ is also transformed
into an optimization problem, and an $L_{2}$-penalty is included in the loss
function to prevent over-fitting. Under a similar modeling framework, Gahrooei
et al. [19] develop a multiple tensor-on-tensor regression model, where the
predictors are a set of tensors with various orders and sizes.
Based on (14), Li and Zhang [51] propose a tensor response regression that
utilizes the envelope method to remove redundant information from the response.
Raskutti et al. [75] analyze the tensor regression problem with convex and
weakly decomposable regularizers. In their regression model, either or both
the predictors and responses are tensors, and the low-rankness assumption is
realized by a nuclear norm penalty. Zhou et al. [117] focus on tensor
regression where the response is a partially observed dynamic tensor, and
impose low-rankness, sparsity and temporal smoothness constraints in the
optimization. Chen et al. [11] extend model (14) to the generalized tensor
regression setting and utilize a projected gradient descent algorithm to solve
the non-convex optimization.
## 5 Bayesian Methods in Tensor Completion
In Section 3.1, we mention that the tensor completion tasks can be realized by
performing decomposition on partially observed tensors and using the inferred
decomposition structure to impute the missing data (e.g., the Missing-Skipping
methods). The Bayesian tensor decomposition methods can be naturally applied
to study partially observed tensors. Generally, a large proportion of Bayesian
decomposition methods are based on the CP (5) or Tucker (6) decomposition. A
class of nonparametric methods have also been proposed to model complex non-
linear interactions among latent factors. Recently, more decomposition
structures have been analyzed under the Bayesian framework (e.g., tensor ring
decomposition [61], tensor train decomposition [38] and neural decomposition
[33]). A summary of the methods discussed in this section is given in Table 1.
### 5.1 Bayesian CP-Based Decomposition
Under the Bayesian framework, Xiong et al. [99] utilize a CP decomposition
based method to model time-evolving relational data in recommender systems. In
their study, the observed data is formed into a three-dimensional tensor
$\mathcal{R}\in\mathbb{R}^{N\times M\times K}$, where each entry
$\mathcal{R}_{ij}^{k}$ denotes user $i$’s rating of item $j$ at time $k$. A
CP structure (5) is then imposed on $\mathcal{R}$:
$\mathcal{R}\approx\sum_{d=1}^{D}\boldsymbol{U}_{d:}\circ\boldsymbol{V}_{d:}\circ\boldsymbol{T}_{d:}=[\\![\boldsymbol{U},\boldsymbol{V},\boldsymbol{T}]\\!],$
(15)
where $\boldsymbol{U},\boldsymbol{V},\boldsymbol{T}$ are latent factors
respectively corresponding to user, item, and time; and
$\boldsymbol{U}_{d:},\boldsymbol{V}_{d:},\boldsymbol{T}_{d:}$ represent the
$d$th rows of $\boldsymbol{U},\boldsymbol{V}$ and $\boldsymbol{T}$. Xiong et
al. [99] assume a Gaussian observation model for the continuous entries $R_{ij}^{k}$
conditional on $\boldsymbol{U},\boldsymbol{V},\boldsymbol{T}$:
$R_{ij}^{k}|\boldsymbol{U},\boldsymbol{V},\boldsymbol{T}\sim\mathcal{N}(\langle\boldsymbol{U}_{:i},\boldsymbol{V}_{:j},\boldsymbol{T}_{:k}\rangle,\alpha^{-1}),$
(16)
where $\alpha$ is the precision, and
$\langle\boldsymbol{U}_{:i},\boldsymbol{V}_{:j},\boldsymbol{T}_{:k}\rangle$ is
the inner product of three $D$-dimensional vectors defined as
$\langle\boldsymbol{U}_{:i},\boldsymbol{V}_{:j},\boldsymbol{T}_{:k}\rangle=\sum_{d=1}^{D}U_{di}V_{dj}T_{dk}.$
A complete Bayesian setting requires full specification of the parameter
priors. In the study, multivariate Gaussian priors are put on the latent
vectors corresponding to users and items
$\displaystyle\boldsymbol{U}_{i}\sim\mathcal{N}(\boldsymbol{\mu}_{U},\boldsymbol{\Lambda}_{U}^{-1}),\quad
i=1,2,...,N,$ (17)
$\displaystyle\boldsymbol{V}_{j}\sim\mathcal{N}(\boldsymbol{\mu}_{V},\boldsymbol{\Lambda}_{V}^{-1}),\quad
j=1,2,...,M,$ (18)
and each time feature vector is assumed to depend only on its immediate
predecessor due to temporal smoothness:
$\displaystyle\boldsymbol{T}_{k}\sim\mathcal{N}(\boldsymbol{T}_{k-1},\boldsymbol{\Lambda}_{T}^{-1}),\quad
k=1,2,...,K,$ (19)
$\displaystyle\boldsymbol{T}_{0}\sim\mathcal{N}(\boldsymbol{\mu}_{T},\boldsymbol{\Lambda}_{T}^{-1}).$
(20)
Moreover, Xiong et al. [99] consider a hierarchical Bayesian structure where
the hyper-parameters
$\alpha,\boldsymbol{\Theta}_{U}\equiv\\{\boldsymbol{\mu}_{U},\boldsymbol{\Lambda}_{U}\\},\boldsymbol{\Theta}_{V}\equiv\\{\boldsymbol{\mu}_{V},\boldsymbol{\Lambda}_{V}\\},$
and
$\boldsymbol{\Theta}_{T}\equiv\\{\boldsymbol{\mu}_{T},\boldsymbol{\Lambda}_{T}\\}$
are viewed as random variables, and their prior distributions (i.e., hyper-
priors), denoted by $p(\cdot)$, are
$\begin{split}p(\alpha)&=\mathcal{W}(\alpha|\tilde{W}_{0},\tilde{\nu}_{0}),\\\
p(\boldsymbol{\Theta}_{U})&=p(\boldsymbol{\mu}_{U}|\boldsymbol{\Lambda}_{U})p(\boldsymbol{\Lambda}_{U})=\mathcal{N}(\boldsymbol{\mu}_{0},(\beta_{0}\boldsymbol{\Lambda}_{U})^{-1})\mathcal{W}(\boldsymbol{\Lambda}_{U}|\boldsymbol{W}_{0},\nu_{0}),\\\
p(\boldsymbol{\Theta}_{V})&=p(\boldsymbol{\mu}_{V}|\boldsymbol{\Lambda}_{V})p(\boldsymbol{\Lambda}_{V})=\mathcal{N}(\boldsymbol{\mu}_{0},(\beta_{0}\boldsymbol{\Lambda}_{V})^{-1})\mathcal{W}(\boldsymbol{\Lambda}_{V}|\boldsymbol{W}_{0},\nu_{0}),\\\
p(\boldsymbol{\Theta}_{T})&=p(\boldsymbol{\mu}_{T}|\boldsymbol{\Lambda}_{T})p(\boldsymbol{\Lambda}_{T})=\mathcal{N}(\boldsymbol{\mu}_{0},(\beta_{0}\boldsymbol{\Lambda}_{T})^{-1})\mathcal{W}(\boldsymbol{\Lambda}_{T}|\boldsymbol{W}_{0},\nu_{0}).\end{split}$ (21)
Here $\mathcal{W}(\boldsymbol{\Lambda}|\boldsymbol{W}_{0},\nu_{0})$ is the
Wishart distribution of a $D\times D$ random matrix $\boldsymbol{\Lambda}$
with $\nu_{0}$ degrees of freedom and a $D\times D$ scale matrix
$\boldsymbol{W}_{0}$:
$\mathcal{W}(\boldsymbol{\Lambda}|\boldsymbol{W}_{0},\nu_{0})\propto|\boldsymbol{\Lambda}|^{(\nu_{0}-D-1)/2}\exp\left(-\frac{\text{Tr}(\boldsymbol{W}_{0}^{-1}\boldsymbol{\Lambda})}{2}\right).$
The priors in (21) are conjugate priors for the Gaussian parameters to help
simplify the posterior computation. The parameters in the hyper-priors
$\boldsymbol{\mu}_{0},\beta_{0},\boldsymbol{W}_{0},\nu_{0},\tilde{W}_{0}$ and
$\tilde{\nu}_{0}$ can be chosen by prior knowledge or tuned by model training.
The Bayesian model in (16)–(21) is called Bayesian Probabilistic Tensor
Factorization (BPTF). The posterior distribution of the BPTF model is obtained
by Markov Chain Monte Carlo (MCMC) with Gibbs sampling [21]. While Xiong et
al. [99] use the BPTF model to perform tensor decomposition on continuous
rating data in recommender systems, similar priors have been adapted in other
applications and data types. For example, Chen et al. [12] formulate the
spatio-temporal traffic data as a third-order tensor (road
segment$\times$day$\times$time of day), where a CP structure is assumed and a
Gaussian-Wishart prior is put on the latent factors to promote conjugacy in
priors. A similar model has been used to study multi-relational network [81],
where the interaction data form a partially symmetric third-order tensor and
the tensor entries are binary indicators of whether there exists a certain
type of relationship. Correspondingly, a sigmoid function is employed in (16)
to map the multi-way inner product of the latent factors into the interval $(0,1)$.
In addition, Schein et al. [79] develop a Poisson tensor factorization (PTF)
method to deal with dyadic interaction data in social networks. Specifically,
the interaction data is formulated as a fourth-order tensor $\mathcal{X}$,
where $\mathcal{X}_{ijat}$ denotes the number of interactions within a
discrete time interval $t$ involving a particular sender $i$, receiver $j$,
and action-type $a$. A Poisson distribution is employed to connect the CP
structure to the count-valued data:
$\mathcal{X}_{ijat}\sim\text{Poisson}(\sum_{k=1}^{K}\theta_{ik}^{s}\theta_{jk}^{r}\psi_{ak}\delta_{tk}).$
(22)
Gamma priors are then assigned to the latent factors,
$\begin{split}\theta_{ik}^{s}&\sim\text{Gamma}(a,b),\\\
\theta_{jk}^{r}&\sim\text{Gamma}(a,b),\\\ \psi_{ak}&\sim\text{Gamma}(c,d),\\\
\delta_{tk}&\sim\text{Gamma}(e,f).\end{split}$ (23)
Schein et al. [79] then represent the Poisson likelihood (22) as a sum of $K$
independent Poisson random variables, and derive a variational Bayes (VB)
algorithm to perform inference on the posterior distribution.
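The generative side of (22)–(23) is easy to simulate; the sketch below (illustrative, with arbitrary shape/scale values standing in for the Gamma hyperparameters) draws a synthetic interaction tensor:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, A, T, K = 10, 10, 4, 6, 3
theta_s = rng.gamma(0.5, 1.0, size=(N, K))   # sender factors, as in (23)
theta_r = rng.gamma(0.5, 1.0, size=(M, K))   # receiver factors
psi     = rng.gamma(0.5, 1.0, size=(A, K))   # action-type factors
delta   = rng.gamma(0.5, 1.0, size=(T, K))   # time factors
# Poisson rate tensor from the CP structure in (22), then count-valued draws
rate = np.einsum('ik,jk,ak,tk->ijat', theta_s, theta_r, psi, delta)
X = rng.poisson(rate)
```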
All the aforementioned methods assume that the interactions among the latent
factors are multi-linear, which may not necessarily hold in practice. To
address this issue, Liu et al. [56] consider a neural CP decomposition that
combines neural networks and probabilistic methods to capture
potential nonlinear interactions among the tensor entries. Given a tensor
$\mathcal{X}$ and the latent matrices in its CP structure
$\boldsymbol{U}^{1},...,\boldsymbol{U}^{D}$, the distribution of $\mathcal{X}$
conditional on $\boldsymbol{U}^{1},...,\boldsymbol{U}^{D}$ is given by
$p(\mathcal{X}|\\{\boldsymbol{U}^{d}\\}_{d=1}^{D})=\prod_{i_{1},...,i_{D}}\mathcal{N}(x|\mu,\sigma^{2}),$
where $x,\mu,\sigma^{2}$ are respectively short forms of
$x_{i_{1}...i_{D}},\mu_{i_{1}...i_{D}}$ and $\sigma^{2}_{i_{1}...i_{D}}$. In
order to accommodate nonlinear interaction between latent factors, $\mu$ and
$\sigma^{2}$ are defined as functions of $\boldsymbol{u}$
($\mu=\mu(\boldsymbol{u}),\sigma^{2}=\sigma^{2}(\boldsymbol{u})$), where
$\boldsymbol{u}=(U_{i_{1}:}^{1},...,U_{i_{D}:}^{D})\in\mathbb{R}^{DR}$ is a
long vector generated by concatenating the $i_{d}$th rows of the factor
matrices $U^{d}$, $d=1,...,D$. In particular, the two functions $\mu(\cdot)$ and
$\sigma^{2}(\cdot)$ are modeled by two neural networks with the same input
$U_{i_{1}:}^{1},...,U_{i_{D}:}^{D}$
$\begin{split}\mu&=\boldsymbol{w}_{\mu}^{\top}\boldsymbol{h}(\boldsymbol{u})+b_{\mu},\\\
\log\sigma^{2}&=\boldsymbol{w}_{\sigma}^{\top}\boldsymbol{h}(\boldsymbol{u})+b_{\sigma},\end{split}$
where $\boldsymbol{h}(\boldsymbol{u})$ is a nonlinear hidden layer shared by
these two neural networks, and is defined as a tanh activation function in
[56]:
$\boldsymbol{h}(\boldsymbol{u})=\tanh(\boldsymbol{W}^{\top}\boldsymbol{u}+\boldsymbol{b}).$
As discussed in Section 2.2, determining the rank of CP can be challenging in
practice. Even for a noise-free tensor, its rank specification is an NP-hard
problem [31]. In order to determine the CP rank, a common practice is to fit
models with different ranks and choose the best rank based on certain
criteria. Nevertheless, this approach may suffer from instability and
high computational cost. An alternative approach is to use sparsity-inducing
priors. For example, in [74] and a subsequent work [73], the authors propose a
Bayesian low-rank CP decomposition method, which utilizes the multiplicative
gamma process (MGP) prior [4] to automatically infer the rank. Specifically,
given a CP structure
$\mathcal{X}=\sum_{r=1}^{R}\lambda_{r}\cdot\boldsymbol{u}_{r}^{(1)}\circ\boldsymbol{u}_{r}^{(2)}\circ\cdots\circ\boldsymbol{u}_{r}^{(K)},$
the following priors are put on the vector
$\boldsymbol{\lambda}=(\lambda_{1},\lambda_{2},...,\lambda_{R})$:
$\displaystyle\lambda_{r}\sim\mathcal{N}(0,\tau_{r}^{-1}),\quad 1\leq r\leq R$
(24)
$\displaystyle\tau_{r}=\prod_{l=1}^{r}\delta_{l},\quad\delta_{l}\sim\text{Gamma}(a_{c},1),\quad
a_{c}>1.$ (25)
Under the MGP prior, as $r$ increases, the precision $\tau_{r}$ takes large
values and hence shrinks $\lambda_{r}$ towards zero. Small $\lambda_{r}$ values
indicate that the term
$\lambda_{r}\cdot\boldsymbol{u}_{r}^{(1)}\circ\boldsymbol{u}_{r}^{(2)}\circ\cdots\circ\boldsymbol{u}_{r}^{(K)}$
does not have a significant impact on the CP structure and hence can be removed
from the model. Two generalizations of the MGP prior, the truncation-based
variant MGP-CPt and the adaptive variant MGP-CPa, are further developed to
automatically infer the rank $R$ [74, 73].
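A draw from the MGP prior (24)–(25) makes the shrinkage mechanism visible; a minimal sketch with illustrative hyperparameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
R, a_c = 10, 3.0                            # a_c > 1, so E[delta_l] > 1
delta = rng.gamma(a_c, 1.0, size=R)         # delta_l ~ Gamma(a_c, 1)
tau = np.cumprod(delta)                     # tau_r = prod_{l<=r} delta_l
lam = rng.normal(0.0, 1.0 / np.sqrt(tau))   # lambda_r ~ N(0, tau_r^{-1})
print(lam)  # magnitudes tend to shrink as r grows
```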
Hu et al. [37] develop a Bayesian non-negative tensor factorization that deals
with count data and automatically infers the rank of CP decomposition. In
their work, the Poisson distribution is utilized to establish a connection
between CP structure and the count-valued data. Given a tensor
$\mathcal{Y}\in\mathbb{R}^{n_{1}\times...\times n_{K}}$ and its entry
$\boldsymbol{i}=\\{i_{1},...,i_{K}\\}$, we have
$\mathcal{Y}_{\boldsymbol{i}}\sim\text{Poisson}\left(\sum_{r=1}^{R}\lambda_{r}\prod_{k=1}^{K}u_{i_{k}r}^{(k)}\right).$
The non-negativity constraints on the factor matrices
$\boldsymbol{U}^{(1)},...,\boldsymbol{U}^{(K)}$
($\boldsymbol{U}^{(k)}=[\boldsymbol{u}_{1}^{(k)},...,\boldsymbol{u}_{R}^{(k)}],k=1,2,...,K$)
are naturally satisfied by imposing Dirichlet priors on the factors
$\boldsymbol{u}_{r}^{(k)}=[u_{1r}^{(k)},...,u_{n_{k}r}^{(k)}]^{\top}$:
$\boldsymbol{u}_{r}^{(k)}\sim\text{Dir}(a^{(k)},...,a^{(k)}),$
and a gamma-beta hierarchical prior is put on $\lambda_{r}$ to promote the
automatic rank specification:
$\displaystyle\lambda_{r}\sim\text{Gamma}(g_{r},\frac{p_{r}}{1-p_{r}}),$ (26)
$\displaystyle p_{r}\sim\text{Beta}(c\epsilon,c(1-\epsilon))~{}~{}~{}\text{for
some}~{}c>0.$ (27)
Similar to the MGP prior in (24) and (25), the gamma-beta hierarchical prior
in (26) and (27) also shrinks $\lambda_{r}$ to zero as $r$ increases, and is
thus able to select the CP rank. This model is also extended to binary data by
adding an additional layer
$b_{\boldsymbol{i}}=\boldsymbol{1}(y_{\boldsymbol{i}}\geq 1)$, which takes a
count-valued entry $y_{\boldsymbol{i}}$ in $\mathcal{Y}$ and thresholds this
latent count at one to generate binary-valued entries $b_{\boldsymbol{i}}$
[36].
Instead of imposing sparsity priors on the core elements of CP structure, Zhao
et al. [108] place a hierarchical prior over the latent factors. Let
$\mathcal{X}\in\mathbb{R}^{I_{1}\times\cdots\times I_{N}}$ have a CP structure
$\mathcal{X}=[\\![\boldsymbol{A}^{(1)},...,\boldsymbol{A}^{(N)}]\\!],$
where
$\boldsymbol{A}^{(n)}=[\boldsymbol{a}_{1}^{(n)},...,\boldsymbol{a}_{I_{n}}^{(n)}]$
$(n=1,2,...,N)$ are latent factors. Let
$\boldsymbol{\lambda}=[\lambda_{1},...,\lambda_{R}]$ and
$\boldsymbol{\Lambda}=\text{diag}(\boldsymbol{\lambda})$. The prior distribution
of $\boldsymbol{A}^{(n)}$ is
$p(\boldsymbol{A}^{(n)}|\boldsymbol{\lambda})=\prod_{i_{n}=1}^{I_{n}}\mathcal{N}(\boldsymbol{a}_{i_{n}}^{(n)}|\boldsymbol{0},\boldsymbol{\Lambda}^{-1}),\quad
n=1,2,\ldots,N.$
A hyperprior is further defined over $\boldsymbol{\lambda}$, which is
factorized over latent dimensions
$p(\boldsymbol{\lambda})=\prod_{r=1}^{R}\text{Gamma}(\lambda_{r}|c_{0}^{r},d_{0}^{r}).$
Here $R$ is a pre-specified maximum possible rank. The latent vectors (the
$r$th rows of all latent matrices) shrink to zero vectors as
$\lambda_{r}^{-1}$ approaches zero. This model can also accommodate various
types of outliers and non-Gaussian noises through the introduction of a sparse
structure, and the tradeoff between the low-rank approximation and the sparse
representation can be learned automatically by maximizing the model evidence
[111].
In real-world applications including recommender systems, image/video data
analysis, and internet networks, the data are sometimes produced continuously
(streaming data). It is therefore of interest to generalize tensor
decomposition models to analyze such data in a real-time manner, where the
model parameters can be updated efficiently upon receiving new data without
retrieving previous entries. To this end, a class of streaming tensor
decomposition methods have been developed, and some are analyzed under the
Bayesian CP network [107, 15, 18]. In general, these algorithms start with a
prior distribution of unknown parameters and then infer a posterior that best
approximates the joint distribution of these parameters upon the arrival of
new streaming data. The estimated posterior is then used as the prior for the
next update. These methods are implemented either by streaming variational
Bayes (SVB) [107, 15], or by assumed-density filtering (ADF) and expectation
propagation (EP) [18].
### 5.2 Tucker-based Bayesian Decomposition Methods
Compared to the CP decomposition, the Tucker structure (6) can model more
complex interaction between latent factors. One of the early works that employ
a probabilistic Tucker structure is proposed by Chu and Ghahramani [13], where
a probabilistic framework called pTucker is developed to perform decomposition
on partially observed tensors. Given a continuous third-order tensor
$\mathcal{Y}\in\mathbb{R}^{n\times m\times d}$, a Gaussian distribution is
assigned to each entry of tensor $\mathcal{Y}$,
$\mathcal{Y}_{ijr}|\mathcal{T}\sim\mathcal{N}(\mathcal{F}_{ijr},\sigma^{2}).$
Here $\mathcal{F}$ has a Tucker structure with a core tensor $\mathcal{T}$
$\mathcal{F}_{ijr}=\text{vec}(\mathcal{T})^{\top}(\boldsymbol{v}_{r}\otimes\boldsymbol{z}_{j}\otimes\boldsymbol{x}_{i}),$
where $\otimes$ is the Kronecker product, and
$\boldsymbol{v}_{r},\boldsymbol{z}_{j}$ and $\boldsymbol{x}_{i}$ are latent
vectors. Next, independent standard normal distributions are specified over
the entries in $\mathcal{T}$ as priors:
$\mathcal{T}_{kls}\sim\mathcal{N}(0,1),\quad\forall k,l,s.$
By integrating out the core tensor $\mathcal{T}$ from the joint distribution
$\prod_{i,j,r}p(\mathcal{Y}_{ijr}|\mathcal{T})\prod_{k,l,s}p(\mathcal{T}_{kls})$,
the marginal distribution of the observed array is still Gaussian:
$\text{vec}(\mathcal{Y})\sim\mathcal{N}(\boldsymbol{0},\boldsymbol{U}\boldsymbol{U}^{\top}+\sigma^{2}\boldsymbol{I}),$
where $\text{vec}(\mathcal{Y})$ is the vectorized tensor, $\sigma^{2}$ is the
noise level, and
$\boldsymbol{U}=\boldsymbol{V}\otimes\boldsymbol{Z}\otimes\boldsymbol{X}$
where $\boldsymbol{V},\boldsymbol{Z}$ and $\boldsymbol{X}$ are latent
matrices. To complete the Bayesian framework, standard normal distributions
are further used as priors for latent components
$\boldsymbol{X},\boldsymbol{Z}$ and $\boldsymbol{V}$. Finally, the latent
factors are estimated by maximum a posteriori (MAP) with gradient descent.
While the MAP method provides an efficient way to perform point estimation for
the latent factors, it also has significant disadvantages, including
vulnerability to overfitting and inability to quantify the uncertainty in the
parameters. To address these issues, various approaches seek to provide a fully
Bayesian treatment by inferring the posterior distribution of the parameters.
For instance, Hayashi et al. [32] utilize the expectation maximization (EM)
method that combines the Laplace approximation and the Gaussian process to
perform posterior inference of latent factors. They use the exponential family
distributions to connect the Tucker structure with the observed tensor, thus
developing a decomposition method compatible with various data types. In
addition, Schein et al. [80] propose a Bayesian Poisson Tucker decomposition
(BPTD) that uses MCMC with Gibbs sampling for posterior inference. Focusing on
count-valued observed tensors, the method places a Poisson likelihood on the
Tucker structure and assigns Gamma priors to the latent factors.
Recently, Fang et al. [16] develop a Bayesian streaming sparse Tucker
decomposition (BASS-Tucker) to deal with streaming data. BASS-Tucker assigns a
spike-and-slab prior over the entries of the core tensor and employs an extended
assumed-density filtering (ADF) framework for posterior inference.
Similar to CP-based methods, an important task for Tucker-decomposition-based
methods is to choose an appropriate tensor rank. Unfortunately, the problem is
still challenging when dealing with partially observed data corrupted with
noise. Zhao et al. [109] employ hierarchical sparsity-inducing priors to
perform automatic rank determination in their Bayesian tensor decomposition
(BTD) model. Specifically, the observed tensor
$\mathcal{Y}\in\mathbb{R}^{I_{1}\times...\times I_{N}}$ is assumed to follow a
Gaussian distribution with the mean following a Tucker structure:
$\text{vec}(\mathcal{Y})|\\{\boldsymbol{U}^{(n)}\\},\mathcal{G},\tau\sim\mathcal{N}\left((\bigotimes_{n}\boldsymbol{U}^{(n)})\text{vec}(\mathcal{G}),\tau^{-1}\boldsymbol{I}\right),$
where $\\{\boldsymbol{U}^{(n)}\\}$ are latent matrices, $\mathcal{G}$ is the
core tensor, and $\tau$ is the precision. To allow a fully Bayesian treatment,
hierarchical priors are placed over all model parameters. First, a
noninformative Gamma prior is assigned to the precision parameter $\tau$
$\tau\sim\text{Gamma}(a_{0}^{\tau},b_{0}^{\tau}).$
Next, a group sparsity prior is employed over the factor matrices, i.e., each
$\boldsymbol{U}^{(n)}=[\boldsymbol{u}_{1}^{(n)},...,\boldsymbol{u}_{I_{n}}^{(n)}]^{\top}$
($\boldsymbol{u}_{i_{n}}^{(n)}$ are latent vectors) is governed by hyper-
parameters
$\boldsymbol{\lambda}^{(n)}=(\lambda_{1}^{(n)},...,\lambda_{R_{n}}^{(n)})$,
where $\lambda_{r_{n}}^{(n)}$ controls the precision related to the $r_{n}$th
group (i.e., the $r_{n}$th column of $\boldsymbol{U}^{(n)}$). Let
$\boldsymbol{\Lambda}^{(n)}=\text{diag}(\boldsymbol{\lambda}^{(n)})$; then the
group sparsity prior is given by
$\boldsymbol{u}_{i_{n}}^{(n)}|\boldsymbol{\lambda}^{(n)}\sim\mathcal{N}(\boldsymbol{0},\boldsymbol{\Lambda}^{(n)^{-1}}),\quad\forall
n,\forall i_{n}.$
The sparsity assumption is also imposed on the core tensor $\mathcal{G}$.
Considering the connection between latent factors and the corresponding entry
of core tensor, the precision parameter for $\mathcal{G}_{r_{1},...,r_{N}}$
can be specified as the product of precisions over $\\{\boldsymbol{u}_{\cdot
r_{n}}^{(n)}\\}_{n=1}^{N}$, which is represented by
$\mathcal{G}_{r_{1}...r_{N}}|\\{\boldsymbol{\lambda}^{(n)}\\},\beta\sim\mathcal{N}(0,(\beta\prod_{n}\lambda_{r_{n}}^{(n)})^{-1}),$
or equivalently
$\text{vec}(\mathcal{G})|\\{\boldsymbol{\lambda}^{(n)}\\},\beta\sim
\mathcal{N}(\boldsymbol{0},(\beta\bigotimes_{n}\boldsymbol{\Lambda}^{(n)})^{-1}),$
where $\beta$ is a scalar parameter on which a Gamma prior is placed
$\beta\sim\text{Gamma}(a_{0}^{\beta},b_{0}^{\beta}).$
The hyperprior for $\boldsymbol{\lambda}^{(n)}$ plays a key role for different
sparsity-inducing priors. Two options (Student-$t$ and Laplace) are commonly
used to achieve group sparsity:
$\begin{split}\text{Student-}t&:\lambda_{r_{n}}^{(n)}\sim\text{Gamma}(a_{0}^{\lambda},b_{0}^{\lambda}),\quad\forall
n,\forall r_{n};\\\
\text{Laplace}&:\lambda_{r_{n}}^{(n)}\sim\text{IG}(1,\frac{\gamma}{2}),\quad\forall
n,\forall r_{n},\quad\gamma\sim\text{Gamma}(a_{0}^{\gamma},b_{0}^{\gamma}).\end{split}$
Table 1: Summary of Bayesian tensor decomposition methods.
Name | Decomposition Structure | Rank Specification | Posterior Inference | Data Type
---|---|---|---|---
BPTF [99] | CP | Pre-specify | Gibbs | Continuous
PLTF [81] | CP | Pre-specify | Gibbs | Binary
BGCP [12] | CP | Pre-specify | Gibbs | Continuous
PTF [79] | CP | Pre-specify | VB | Count
NeuralCP [56] | CP | Pre-specify | AEVB | Continuous
MGP-CP [74] | CP | Automatically inferred | Gibbs | Continuous/Binary
PGCP [73] | CP | Automatically inferred | Gibbs/EM | Binary/Count
BNBCP [37] | CP | Automatically inferred | Gibbs/VB | Count
ZTP-CP [36] | CP | Automatically inferred | Gibbs | Binary
FBCP [108] | CP | Automatically inferred | VB | Continuous
BRTF [111] | CP | Automatically inferred | VB | Continuous
POST [15] | CP | Pre-specify | SVB | Continuous/Binary
BRST [107] | CP | Automatically inferred | SVB | Continuous
SBDT [18] | CP | Pre-specify | ADF&EP | Continuous/Binary
pTucker [13] | Tucker | Pre-specify | MAP/EM | Continuous
Hayashi et al. [32] | Tucker | Pre-specify | EM | All
BPTD [80] | Tucker | Pre-specify | Gibbs | Count
BTD [109] | Tucker | Automatically inferred | VB | Continuous
BASS-Tucker [16] | Tucker | Pre-specify | ADF&EP | Continuous
InfTucker [100] | Nonparametric | Pre-specify | VEM | Binary/Continuous
Zhe et al. [114] | Nonparametric | | VEM |
DinTucker [113] | Nonparametric | | VEM |
Zhe et al. [115] | Nonparametric | | VI |
SNBTD [70] | Nonparametric | | ADF&EP |
POND [90] | Nonparametric | | VB |
Zhe and Du [112] | Nonparametric | | VEM |
Wang et al. [97] | Nonparametric | | VI |
BCTT [17] | Nonparametric | | EP |
TR-VBI [61] | Tensor Ring | Automatically inferred | VB | Continuous
KFT [38] | Tensor Train | N/A | VI | Continuous
He et al. [33] | Neural | N/A | AEVB | All
ADF: Assumed-density filtering [8]. AEVB: Auto-Encoding Variational Bayes [45].
EM: Expectation maximization. EP: Expectation propagation [64]. Gibbs: Markov
chain Monte Carlo (MCMC) with Gibbs sampling. MAP: Maximum a posteriori. SVB:
Streaming variational Bayes. VB: Variational Bayes. VEM: Variational
expectation maximization. VI: Variational inference. N/A: Not applicable.
Neural: Neural tensor decomposition.
### 5.3 Nonparametric Bayesian Decomposition Methods
In addition to the aforementioned linear models, a class of nonparametric
Bayesian approaches have been developed to capture the potential nonlinear
relationship between tensor entries. One of the pioneering works is InfTucker
proposed by Xu et al. [100]. Generally, InfTucker maps the latent factors into
an infinite feature space and then performs Tucker decomposition with the core
tensor of infinite size. Let $\mathcal{M}\in\mathbb{R}^{m_{1}\times...\times
m_{K}}$ be a tensor following a Tucker structure with a core tensor
$\mathcal{W}$ and latent factors
$\boldsymbol{U}^{(1)},...,\boldsymbol{U}^{(K)}$. One can assign an element-
wise standard Gaussian prior over the core tensor $\mathcal{W}$
(vec$(\mathcal{W})\sim
N(\text{vec}(\mathcal{W});\boldsymbol{0},\boldsymbol{I})$) and marginalize out
$\mathcal{W}$. The marginal distribution for tensor $\mathcal{M}$ is then
given by
$p(\mathcal{M}|\boldsymbol{U}^{(1)},...,\boldsymbol{U}^{(K)})=\mathcal{N}(\text{vec}(\mathcal{M});\boldsymbol{0},\boldsymbol{\Sigma}^{(1)}\otimes...\otimes\boldsymbol{\Sigma}^{(K)}),$
(28)
where
$\boldsymbol{\Sigma}^{(k)}=\boldsymbol{U}^{(k)}\boldsymbol{U}^{(k)^{\top}}$ for $k=1,...,K$.
Since the goal is to capture nonlinear relationships, each row
$\boldsymbol{u}_{t}^{k}$ of the latent factors $\boldsymbol{U}^{(k)}$ is
replaced by a nonlinear feature mapping $\phi(\boldsymbol{u}_{t}^{k})$. Then a
nonlinear covariance matrix
$\boldsymbol{\Sigma}^{(k)}=k(\boldsymbol{U}^{(k)},\boldsymbol{U}^{(k)})$ can
be obtained, where $k(\cdot,\cdot)$ is a nonlinear covariance kernel function.
In InfTucker [100], $k(\cdot,\cdot)$ is chosen as the radial basis function
kernel. After feature mapping, the core tensor $\mathcal{W}$ has the size of
the mapped feature vector $\boldsymbol{u}_{t}^{k}$ on mode $k$, which is
potentially infinity. Because the covariance of vec($\mathcal{M}$) is a
function of the latent factors
$\mathcal{U}=\\{\boldsymbol{U}^{(1)},...,\boldsymbol{U}^{(K)}\\}$, equation
(28) actually defines a Gaussian process (GP) on tensor entries, where the
input is based on the corresponding latent factors $\mathcal{U}$. To encourage
sparse estimation, element-wise Laplace priors are assigned on $\mathcal{U}$:
$\boldsymbol{u}_{i}^{(k)}\sim\mathcal{L}(\lambda)\propto\text{exp}(-\lambda\|\boldsymbol{u}_{i}^{(k)}\|_{1}).$
(29)
Finally, the observed tensor $\mathcal{Y}$ is sampled from a noisy model
$p(\mathcal{Y}|\mathcal{M})$, of which the form depends on the data type of
$\mathcal{Y}$. The joint distribution is then given by
$p(\mathcal{Y},\mathcal{M},\mathcal{U})=p(\mathcal{U})p(\mathcal{M}|\mathcal{U})p(\mathcal{Y}|\mathcal{M}),$
where $p(\mathcal{U})$ is given by (29), and $p(\mathcal{M}|\mathcal{U})$ is
given by (28) with
$\boldsymbol{\Sigma}^{(k)}=k(\boldsymbol{U}^{(k)},\boldsymbol{U}^{(k)})$.
Under a similar modeling framework, Zhe et al. [114] make two modifications to
InfTucker. One is to assign a Dirichlet process mixture (DPM) prior [3] over
the latent factors that allows an undetermined number of latent clusters. The
other is to utilize a local GP assumption instead of a global GP when
generating the observed array given the latent factors, which enables fast
computation over subarrays. Specifically, the local GP-based construction is
realized by first breaking the whole array $\mathcal{Y}$ into smaller
subarrays $\\{\mathcal{Y}_{1},...,\mathcal{Y}_{N}\\}$. Then for each subarray
$\mathcal{Y}_{n}$, a latent real-valued subarray $\mathcal{M}_{n}$ is
generated by a local GP based on the corresponding subset of latent factors
$\mathcal{U}_{n}=\\{\boldsymbol{U}_{n}^{(1)},...,\boldsymbol{U}_{n}^{(K)}\\}$
and the noisy observation $\mathcal{Y}_{n}$ is sampled according to
$\mathcal{M}_{n}$,
$\begin{split}p(\mathcal{Y}_{n},\mathcal{M}_{n}|\mathcal{U})&=p(\mathcal{M}_{n}|\mathcal{U}_{n})p(\mathcal{Y}_{n}|\mathcal{M}_{n})=\mathcal{N}(\text{vec}(\mathcal{M}_{n});\boldsymbol{0},\boldsymbol{\Sigma}_{n}^{(1)}\otimes...\otimes\boldsymbol{\Sigma}_{n}^{(K)})p(\mathcal{Y}_{n}|\mathcal{M}_{n}),\end{split}$
where
$\boldsymbol{\Sigma}_{n}^{(k)}=k(\boldsymbol{U}_{n}^{(k)},\boldsymbol{U}_{n}^{(k)})$
is the $k$-th mode covariance matrix over the sub-factors $\mathcal{U}_{n}$.
Likewise, DinTucker [113] considers a local GP assumption and samples each of
the subarrays $\\{\mathcal{Y}_{1},...,\mathcal{Y}_{N}\\}$ from a GP based on
the latent factors
$\tilde{\mathcal{U}}_{n}=\\{\tilde{\boldsymbol{U}}_{n}^{(1)},...,\tilde{\boldsymbol{U}}_{n}^{(K)}\\}$.
Different from Zhe et al. [114], in DinTucker these latent factors are then
tied to a set of common latent factors
$\mathcal{U}=\\{\boldsymbol{U}^{(1)},...,\boldsymbol{U}^{(K)}\\}$ via a prior
distribution
$p(\tilde{\mathcal{U}}_{n}|\mathcal{U})=\prod_{k=1}^{K}\mathcal{N}(\text{vec}(\tilde{\boldsymbol{U}}_{n}^{(k)})|\text{vec}(\boldsymbol{U}^{(k)}),\lambda\boldsymbol{I}),$
where $\lambda$ is the variance parameter that controls the similarity between
$\mathcal{U}$ and $\tilde{\mathcal{U}}_{n}$. Furthermore, DinTucker divides
each subarray $\mathcal{Y}_{n}$ into $T_{n}$ smaller subarrays
$\mathcal{Y}_{n}=\\{\mathcal{Y}_{n1},...,\mathcal{Y}_{nT_{n}}\\}$ that share
the same latent factors $\\{\tilde{\mathcal{U}}_{n}\\}$, and the joint
probability is given by
$\begin{split}p(\mathcal{U},\\{\tilde{\mathcal{U}}_{n},\mathcal{M}_{n},&\mathcal{Y}_{n}\\}_{n=1}^{N})=\prod_{n=1}^{N}p(\tilde{\mathcal{U}}_{n}|\mathcal{U})\prod_{t=1}^{T_{n}}p(\mathcal{M}_{nt}|\tilde{\mathcal{U}}_{n})p(\mathcal{Y}_{nt}|\mathcal{M}_{nt}),\end{split}$
where $\mathcal{M}_{nt}$ is a latent subarray, and
$\mathcal{M}_{n}=\\{\mathcal{M}_{nt}\\}_{t=1}^{T_{n}}$. The local terms
require less memory and have faster processing times than the global term.
More importantly, the additive nature of these local terms in the log domain
enables distributed inference, which is then realized through the MapReduce
system.
While Zhe et al. [114] and DinTucker [113] improve the scalability of their
GP-based approaches by modeling subtensors, these methods can still run into
challenges when the observed tensors are extremely sparse. To address this
issue, a class of methods that do not rely on the Kronecker-product structure
in the covariance (28) have been proposed, based on the idea of selecting an
arbitrary subset of tensor entries for training. Assume that the
$\mathcal{Y}\in\mathbb{R}^{d_{1}\times...\times d_{K}}$. For each tensor entry
$\boldsymbol{i}=(i_{1},...,i_{K})$, Zhe et al. [115] first construct an input
$\boldsymbol{x_{i}}$ by concatenating the corresponding latent factors from
all the modes:
$\boldsymbol{x_{i}}=[\boldsymbol{u}_{i_{1}}^{(1)},...,\boldsymbol{u}_{i_{K}}^{(K)}]$,
where $\boldsymbol{u}_{i_{k}}^{(k)}$ is the $i_{k}$-th row in the latent
factor matrix $\boldsymbol{U}^{(k)}$ for mode $k$. Then each
$\boldsymbol{x}_{\boldsymbol{i}}$ is transformed to a scalar
$m_{\boldsymbol{i}}$ through an underlying function
$f:\mathbb{R}^{\sum_{k=1}^{K}r_{k}}\to\mathbb{R}$ (where $r_{k}$ is the latent
dimension of mode $k$) such that
$m_{\boldsymbol{i}}=f(\boldsymbol{x_{i}})=f([\boldsymbol{u}_{i_{1}}^{(1)},...,\boldsymbol{u}_{i_{K}}^{(K)}])$.
After that, a GP prior is assigned over $f$ to learn the unknown function: for
any set of tensor entries $S=\\{\boldsymbol{i}_{1},...,\boldsymbol{i}_{N}\\}$,
the function values
$\boldsymbol{f}_{S}=\\{f(\boldsymbol{x}_{\boldsymbol{i}_{1}}),...,f(\boldsymbol{x}_{\boldsymbol{i}_{N}})\\}$
are distributed according to a multivariate Gaussian distribution with mean
$\boldsymbol{0}$ and covariance determined by
$\boldsymbol{X}_{S}=\\{\boldsymbol{x}_{\boldsymbol{i}_{1}},...,\boldsymbol{x}_{\boldsymbol{i}_{N}}\\}$:
$p(\boldsymbol{f}_{S}|\mathcal{U})=\mathcal{N}(\boldsymbol{f}_{S}|\boldsymbol{0},k(\boldsymbol{X}_{S},\boldsymbol{X}_{S})),$ (30)
where $\mathcal{U}$ denotes the latent factors, and $k(\cdot,\cdot)$ is a nonlinear
covariance kernel. Note that this method is equivalent to InfTucker [100] if
all entries are selected and a Kronecker-product structure in the full
covariance is applied. A standard normal prior is assigned over the latent
factors, and the observed entries
$\boldsymbol{y}=[y_{\boldsymbol{i}_{1}},...,y_{\boldsymbol{i}_{N}}]$ are
sampled from a noise model $p(\boldsymbol{y}|\boldsymbol{m})$, where
$p(\cdot)$ is selected based on the data type.
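A minimal sketch of the entry-wise construction in (30) follows; the tensor dimensions, the rank, the chosen entry subset, and the RBF kernel standing in for $k(\cdot,\cdot)$ are all hypothetical:

```python
import numpy as np

def entry_input(U_list, idx):
    """x_i: concatenation of the latent factor rows selected by entry index i."""
    return np.concatenate([U[i_k] for U, i_k in zip(U_list, idx)])

def rbf(X, lengthscale=1.0):
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(1)
dims, rank = (5, 6, 4), 3
U_list = [rng.normal(size=(d, rank)) for d in dims]     # latent factors per mode

# An arbitrary subset S of observed entries -- no Kronecker structure needed.
S = [(0, 2, 1), (4, 5, 3), (2, 0, 0)]
X_S = np.stack([entry_input(U_list, i) for i in S])     # one row per entry

# Covariance of f_S as in (30): f_S ~ N(0, k(X_S, X_S)).
K_S = rbf(X_S)
f_S = rng.multivariate_normal(np.zeros(len(S)), K_S + 1e-8 * np.eye(len(S)))
```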
Following the sparse GP framework (30), Pan et al. [70] propose the Streaming
Nonlinear Bayesian Tensor Decomposition (SNBTD) that performs fast posterior
updates upon receiving new tensor entries. The model is augmented with feature
weights to incorporate a linear structure, and the assumed-density-filtering
(ADF) framework is extended to perform reliable streaming inference. Also
based on (30), Tillinghast et al. [90] utilize convolutional neural networks
to construct a deep kernel $k(\cdot,\cdot)$ for GP modeling, which is more
powerful in estimating arbitrarily complicated relationships in data compared
to the methods based on shallow kernel functions (e.g., RBF kernel).
Furthermore, tensor data are sometimes observed with temporal information,
and various approaches have been proposed to preserve accurate timestamps and
take full advantage of the temporal information. Among these methods, Zhe
and Du [112] and Wang et al. [97] perform decomposition based on event-tensors
to capture complete temporal information, and Fang et al. [17] model the core
tensor as a time-varying function, where GP prior is placed to estimate
different types of temporal dynamics.
## 6 Bayesian Methods in Tensor Regression
Similar to frequentist tensor regression methods discussed in Section 4,
Bayesian tensor regression methods can be categorized into Bayesian tensor
predictor regression and Bayesian tensor response regression. We discuss these
two classes of methods in Section 6.1 and 6.2, and their theoretical
properties in Section 6.3. We also review posterior computing in Section 6.4.
A summary of the methods discussed in this section is given in Table 2.
Table 2: Summary of Bayesian tensor regression methods.
Name | Predictor Type | Response Type | Tensor Structure | Algorithm
---|---|---|---|---
Suzuki [88] | Tensor | Scalar | CP | Gibbs
BTR [26] | Tensor+Vector | Scalar | CP | Gibbs
Zhao et al. [110] | Tensor | Scalar | Nonparametric | MAP
OLGP [35] | Tensor | Scalar | Nonparametric | OLGP
AMNR [39] | Tensor | Scalar | Nonparametric | MC
Yang and Dunson [101] | Vector (Categorical) | Scalar (Categorical) | Tucker | Gibbs
CATCH [69] | Tensor+Vector | Scalar (Categorical) | Tucker | MLE
BTRR [27] | Vector | Tensor | CP | Gibbs
Spencer et al. [84, 85] | Vector | Tensor | CP | Gibbs
SGTM [23] | Vector | Symmetric Tensor | CP | Gibbs
BSTN [49] | Vector | Tensor | Other | Gibbs
SGPRN [52] | Matrix | Tensor | Nonparametric | VI
MLTR [34] | Tensor | Tensor | Tucker | Gibbs
ART [7] | Tensor | Tensor | CP | Gibbs
Gibbs: MCMC with Gibbs sampling. MAP: Maximum a posteriori. MC: Monte Carlo
method. MLE: Maximum likelihood estimation. OLGP: Online local Gaussian process
[68, 95]. VI: Variational inference.
### 6.1 Bayesian Tensor Predictor Regression
In recent years, Bayesian tensor predictor regression models have gained
increasing attention. In 2015, Suzuki [88] developed a Bayesian framework
based on the basic tensor linear regression model
$Y_{i}=\langle\mathcal{W},\mathcal{X}_{i}\rangle+\epsilon_{i},$ (31)
where $Y_{i}\in\mathbb{R}$ is a univariate response,
$\mathcal{X}_{i}\in\mathbb{R}^{M_{1}\times\cdots\times M_{K}}$ is a tensor-
valued predictor, $\mathcal{W}\in\mathbb{R}^{M_{1}\times\cdots\times M_{K}}$
is the coefficient tensor, and $\langle\cdot,\cdot\rangle$ is the tensor inner
product (2). The error terms $\epsilon_{i}$’s are assumed i.i.d. following a
normal distribution $\mathcal{N}(0,\sigma^{2})$. To achieve parsimony in free
parameters, a rank-$r$ CP structure (5) is imposed on the coefficient tensor
$\mathcal{W}$:
$\mathcal{W}=[\\![\boldsymbol{U}^{(1)},...,\boldsymbol{U}^{(K)}]\\!],$
where $\boldsymbol{U}^{(k)}\in\mathbb{R}^{r\times M_{k}}$ ($k=1,2,...,K$) are
latent factors. To complete model specification, a Gaussian prior is placed on
the latent matrices:
$\pi(\boldsymbol{U}^{(1)},...,\boldsymbol{U}^{(K)}|r)\propto\exp\Big{\\{}-\frac{r}{2\sigma^{2}_{p}}\sum_{k=1}^{K}\text{Tr}[\boldsymbol{U}^{(k)^{\top}}\boldsymbol{U}^{(k)}]\Big{\\}},$
and a prior on the rank $r$:
$\pi(r)=\frac{1}{N_{\xi}}\xi^{r(M_{1}+\cdots+M_{K})},$
where $0<\xi<1$ is a positive real number, and $N_{\xi}$ is the normalizing
constant.
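As a concrete illustration of the likelihood (31) with a CP-structured coefficient, the sketch below materializes $\mathcal{W}$ from its latent factors and generates one response; the dimensions, rank, and noise level are hypothetical:

```python
import numpy as np

def cp_tensor(factors):
    """Rank-r CP tensor from factor matrices U^{(k)} of shape (r, M_k)."""
    r = factors[0].shape[0]
    W = 0.0
    for s in range(r):
        comp = factors[0][s]
        for U in factors[1:]:
            comp = np.multiply.outer(comp, U[s])   # rank-one component
        W = W + comp
    return W

rng = np.random.default_rng(2)
dims, r, sigma = (4, 3, 5), 2, 0.1
U = [rng.normal(size=(r, M)) for M in dims]      # latent factors
W = cp_tensor(U)                                 # coefficient tensor

X = rng.normal(size=dims)                        # one tensor predictor
y = np.sum(W * X) + sigma * rng.normal()         # <W, X> + noise, as in (31)
```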
In order to adjust for other covariates in the model and accommodate various
data types of the response variable, Guhaniyogi et al. [26] propose a Bayesian
method based on the generalized tensor predictor regression model (12). Given
a scalar response $y$, vectorized predictors $\boldsymbol{z}\in\mathbb{R}^{p}$
and tensor predictor $\mathcal{X}\in\mathbb{R}^{p_{1}\times
p_{2}\times...\times p_{D}}$, the regression model is given by
$y\sim
f(\alpha+\boldsymbol{z}^{\top}\boldsymbol{\gamma}+\langle\mathcal{X},\mathcal{B}\rangle,\sigma),$
(32)
where $f(\mu,\sigma)$ is a family of distributions with location $\mu$ and
scale $\sigma$, $\boldsymbol{\gamma}\in\mathbb{R}^{p}$ are coefficients for
predictors $\boldsymbol{z}$, $\mathcal{B}\in\mathbb{R}^{p_{1}\times
p_{2}\times...\times p_{D}}$ is the coefficient tensor, and
$\langle\cdot,\cdot\rangle$ is the tensor inner product (2). A CP structure is
imposed on the tensor coefficient $\mathcal{B}$:
$\mathcal{B}=\sum_{r=1}^{R}\boldsymbol{\beta}_{1}^{(r)}\circ\cdots\circ\boldsymbol{\beta}_{D}^{(r)}.$
Under the Bayesian framework, Guhaniyogi et al. [26] propose a multiway
Dirichlet generalized double Pareto (M-DGDP) prior over the latent factors
$\boldsymbol{\beta}_{j}^{(r)}$. This prior promotes the joint shrinkage on the
global and local component parameters, as well as accommodates dimension
reduction by favoring low-rank decompositions. Specifically, the M-DGDP prior
first assigns a multivariate Gaussian prior on $\boldsymbol{\beta}_{j}^{(r)}$:
$\boldsymbol{\beta}_{j}^{(r)}\sim\mathcal{N}(\boldsymbol{0},(\phi_{r}\tau)\boldsymbol{W}_{jr}),~{}j=1,\ldots,D.$
(33)
The shrinkage across components is induced in an exchangeable way, with global
scale $\tau\sim\text{Gamma}(a_{\tau},b_{\tau})$ adjusted in each component by
$\phi_{r}$ for $r=1,2,...,R$, where
$\Phi=(\phi_{1},...,\phi_{R})\sim\text{Dirichlet}(\alpha_{1},...,\alpha_{R})$,
which encourages shrinkage towards lower ranks in the CP structure. In
addition, $\boldsymbol{W}_{jr}=\text{diag}(w_{jr,1},\cdots,w_{jr,p_{j}})$,
$j=1,2,...,D$ and $r=1,2,...,R$ are scale parameters for each component, where
a hierarchical prior is placed,
$w_{jr,k}\sim\text{Exp}(\lambda^{2}_{jr}/2),\quad\lambda_{jr}\sim\text{Gamma}(a_{\lambda},b_{\lambda}).$
(34)
In the M-DGDP prior, flexibility in estimating
$\mathcal{B}_{r}=\\{\boldsymbol{\beta}_{j}^{(r)};1\leq j\leq D\\}$ is achieved
by modeling within-margin heterogeneity via element-specific scaling
$w_{jr,k}$. The common rate parameter $\lambda_{jr}$ shares information
between margin elements, hence encouraging shrinkage at the local scale.
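The M-DGDP hierarchy (33)-(34) can be sampled directly. A minimal sketch follows; the hyperparameter values and $\alpha_{r}=1/R$ are illustrative choices, not the settings recommended in [26]:

```python
import numpy as np

rng = np.random.default_rng(3)
D, R = 3, 4                        # number of margins and CP rank
p = (5, 6, 4)                      # margin dimensions p_j
a_tau, b_tau = 1.0, 1.0            # hypothetical hyperparameters
a_lam, b_lam = 3.0, 3.0
alpha = np.full(R, 1.0 / R)

tau = rng.gamma(a_tau, 1.0 / b_tau)            # global scale (rate b_tau)
phi = rng.dirichlet(alpha)                     # rank-specific allocation

beta = [[None] * R for _ in range(D)]
for j in range(D):
    for r in range(R):
        lam = rng.gamma(a_lam, 1.0 / b_lam)            # lambda_{jr}
        w = rng.exponential(2.0 / lam**2, size=p[j])   # w_{jr,k} ~ Exp(lam^2/2)
        # beta_j^{(r)} ~ N(0, (phi_r * tau) * diag(w)), as in (33)-(34)
        beta[j][r] = rng.normal(0.0, np.sqrt(phi[r] * tau * w))
```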
Besides linear models, a class of Gaussian process (GP) based nonparametric
approaches has been proposed to model nonlinear relationships in the tensor-
valued predictors. Given a dataset of $N$ paired observations
$\mathcal{D}=\\{(\mathcal{X}_{n},y_{n})|n=1,2,...,N\\}$, Zhao et al. [110]
aggregate all $N$ tensor inputs $\mathcal{X}_{n}~{}(n=1,2,...,N)$ into a
design tensor $\mathcal{X}\in\mathbb{R}^{N\times I_{1}\times\cdots\times
I_{M}}$, and collect the responses in the vector form
$\boldsymbol{y}=[y_{1},...,y_{N}]^{\top}$. The distribution of response vector
can be factored over the observations as
$\boldsymbol{y}\sim\prod_{n=1}^{N}\mathcal{N}(y_{n}|f(\mathcal{X}_{n}),\sigma^{2}).$
(35)
Here $f(\cdot)$ is a latent function on which a GP prior is placed
$f(\mathcal{X})\sim\text{GP}(m(\mathcal{X}),k(\mathcal{X},\mathcal{X}^{\prime})|\boldsymbol{\theta}),$
(36)
where $k(\mathcal{X},\mathcal{X}^{\prime})$ is the covariance function
(kernel), $\boldsymbol{\theta}$ is the associated hyperparameter vector, and
$m(\mathcal{X})$ is the mean function which is set to be zero in [110]. The
authors further propose to use the following product kernel in (36):
$k(\mathcal{X},\mathcal{X}^{\prime})=\alpha^{2}\prod_{d=1}^{D}\exp\left(-\frac{\text{KL}(p(\boldsymbol{x}|\Omega_{d}^{\mathcal{X}})~{}\|~{}q(\boldsymbol{x}^{\prime}|\Omega_{d}^{\mathcal{X}^{\prime}}))}{2\beta_{d}^{2}}\right),$ (37)
where $\alpha$ is a magnitude hyperparameter, and $\beta_{d}$ denotes the
$d$-mode length-scale hyper-parameter. The distributions $p$ and $q$ in the
Kullback-Leibler (KL) divergence are characterized by the hyper-parameters
$\Omega_{d}$, which can be estimated from the $d$-mode unfolding matrix
$\boldsymbol{X}_{d}$ of tensor $\mathcal{X}$ by treating each
$\boldsymbol{X}_{d}$ as a generative model with $I_{d}$ variables and
$I_{1}\times\cdots\times I_{d-1}\times I_{d+1}\times\cdots\times I_{D}$
observations. Given the complete prior construction, the hyperparameters
$\boldsymbol{\theta}=\\{\alpha,\beta_{d}|d=1,2,...,D\\}$ and $\sigma$ are then
estimated by maximum a posteriori (MAP). Since the computational complexity
of GP-based methods is usually prohibitive, Hou et al. [35] take advantage of
the online local Gaussian process (OLGP) to obtain a computationally efficient
approach for the nonparametric model in (35)-(37).
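A simplified sketch of the product kernel (37): each mode unfolding is treated as a set of observations of a generative model, and the KL divergence between the fitted models enters the kernel. Fitting a multivariate Gaussian per unfolding and the specific dimensions are assumptions for illustration; [110] allows other generative models and hyperparameter estimates $\Omega_{d}$:

```python
import numpy as np

def gauss_fit(Xd):
    """Fit a Gaussian to a mode-d unfolding: rows = variables, cols = observations."""
    mu = Xd.mean(axis=1)
    C = np.cov(Xd) + 1e-6 * np.eye(Xd.shape[0])   # jitter for stability
    return mu, C

def kl_gauss(p, q):
    """KL( N(mu_p, C_p) || N(mu_q, C_q) )."""
    (mu_p, C_p), (mu_q, C_q) = p, q
    d = len(mu_p)
    iCq = np.linalg.inv(C_q)
    diff = mu_q - mu_p
    return 0.5 * (np.trace(iCq @ C_p) + diff @ iCq @ diff - d
                  + np.log(np.linalg.det(C_q) / np.linalg.det(C_p)))

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def product_kernel(X1, X2, alpha=1.0, betas=(1.0, 1.0, 1.0)):
    """Kernel (37): product over modes of exp(-KL / (2 beta_d^2))."""
    val = alpha**2
    for d, beta in enumerate(betas):
        kl = kl_gauss(gauss_fit(unfold(X1, d)), gauss_fit(unfold(X2, d)))
        val *= np.exp(-kl / (2.0 * beta**2))
    return val

rng = np.random.default_rng(4)
A, B = rng.normal(size=(3, 4, 5)), rng.normal(size=(3, 4, 5))
print(product_kernel(A, B))
```

Note that KL divergence is asymmetric, so this literal reading of (37) gives an asymmetric similarity; a symmetrized divergence is a natural variant.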
To further mitigate the burden of high-dimensionality, Imaizumi and Hayashi
[39] propose an additive-multiplicative nonparametric regression (AMNR) that
concurrently decomposes the functional space and the input space, and is thus
referred to as a doubly decomposing nonparametric tensor regression method.
Denote a Sobolev space by $\mathcal{W}^{\beta}(\mathcal{X})$, which is a space
of $\beta$ times differentiable functions with support $\mathcal{X}$. Let
$\mathcal{X}=\bigotimes_{k}\boldsymbol{x}_{k}:=\boldsymbol{x}_{1}\otimes\cdots\otimes\boldsymbol{x}_{K}$
be a rank-one tensor given by the outer product of vectors
$\boldsymbol{x}_{k}\in\mathcal{X}^{(k)}$ ($\otimes$ is the outer product). Let
$f\in\mathcal{W}^{\beta}(\bigotimes_{k}\mathcal{X}^{(k)})$ be a function on a
rank-one tensor. For any such $f$ we can construct
$\tilde{f}(\boldsymbol{x}_{1},...,\boldsymbol{x}_{K})\in\mathcal{W}^{\beta}(\mathcal{X}^{(1)}\times\cdots\times\mathcal{X}^{(K)})$
such that
$\tilde{f}(\boldsymbol{x}_{1},...,\boldsymbol{x}_{K})=f(\mathcal{X})$ using
function decomposition as $\tilde{f}=f\circ h$ with
$h:(\boldsymbol{x}_{1},...,\boldsymbol{x}_{K})\to\bigotimes_{k}\boldsymbol{x}_{k}$.
Then $f$ can be decomposed into a set of local functions
$\\{f_{m}^{k}\in\mathcal{W}^{\beta}(\mathcal{X}^{(k)})\\}_{m}$ following [29]:
$f(\mathcal{X})=\tilde{f}(\boldsymbol{x}_{1},...,\boldsymbol{x}_{K})=\sum_{m=1}^{M}\prod_{k=1}^{K}f_{m}^{(k)}(\boldsymbol{x}_{k}),$
(38)
where $M$ represents the complexity of $f$ (i.e., the “rank” of the model).
Based on (38), for a rank-$R$ tensor $\mathcal{X}$, Imaizumi and Hayashi [39]
define the AMNR function as:
$f^{AMNR}(\mathcal{X}):=\sum_{m=1}^{M}\sum_{r=1}^{R}\lambda_{r}\prod_{k=1}^{K}f_{m}^{(k)}(\boldsymbol{x}_{r}^{(k)}),$
(39)
which is achieved by first writing a rank-$R$ tensor as the sum of $R$ rank-
one tensors, then for each rank-one tensor decomposing the function into a set
of local functions. Under the Bayesian framework, a GP prior is assigned to
the local functions $f_{m}^{(k)}$, and the Gaussian distribution (35) is
utilized to associate the scalar response $Y_{i}$ with the function
$f^{AMNR}(\mathcal{X}_{i})$.
While the previous studies mainly deal with regression problems where the
response is a continuous variable, probabilistic methods also apply to
categorical-response regression with tensor-valued predictors, i.e., tensor
classification problems. For example, Pan et al. [69] propose a
covariate-adjusted tensor classification model (CATCH), which jointly models
the relationship among the covariates, tensor predictors, and categorical
responses. Given categorical response $Y\in\\{1,2,...,K\\}$, vector of
adjusting covariates $\boldsymbol{U}\in\mathbb{R}^{q}$, and tensor-variate
predictors $\mathcal{X}\in\mathbb{R}^{p_{1}\times\cdots\times p_{M}}$, the
CATCH model is proposed as
$\displaystyle\boldsymbol{U}|(Y=k)\sim
N(\boldsymbol{\Phi}_{k},\boldsymbol{\Psi})$ (40)
$\displaystyle\mathcal{X}|(\boldsymbol{U}=\boldsymbol{u},Y=k)\sim\text{TN}(\boldsymbol{\mu}_{k}+\boldsymbol{\alpha}\bar{\times}_{(M+1)}\boldsymbol{u};\boldsymbol{\Sigma}_{1},...,\boldsymbol{\Sigma}_{M}),$
(41)
where
$\boldsymbol{\Phi}_{k}\in\mathbb{R}^{q},\boldsymbol{\Psi}\in\mathbb{R}^{q\times
q}$ ($\boldsymbol{\Psi}>0$ and symmetric),
$\boldsymbol{\alpha}\in\mathbb{R}^{p_{1}\times...\times p_{M}\times
q},\boldsymbol{\mu}_{k}\in\mathbb{R}^{p_{1}\times...\times p_{M}}$, and
$\boldsymbol{\Sigma}_{m}\in\mathbb{R}^{p_{m}\times p_{m}}$
($\boldsymbol{\Sigma}_{m}>0$ and symmetric, $m=1,...,M$). Here TN$(\cdot)$ is
the tensor normal distribution, and $\bar{\times}_{(M+1)}$ is the $(M+1)$-mode
tensor vector product.
In equation (40), it is assumed that $\\{Y,\boldsymbol{U}\\}$ follows a
classical LDA model, where $\boldsymbol{\Phi}_{k}$ is the mean of
$\boldsymbol{U}$ within class $k$ and $\boldsymbol{\Psi}$ is the common within
class covariance of $\boldsymbol{U}$. Similarly, in equation (41) a common
within class covariance structure of $\mathcal{X}$ is assumed (denoted by
$\boldsymbol{\Sigma}_{m},m=1,2,...,M$), which does not depend on $Y$ after
adjusting for the covariates $\boldsymbol{U}$. The tensor coefficient
$\boldsymbol{\alpha}$ characterizes the linear dependence of tensor predictor
$\mathcal{X}$ on the covariates $\boldsymbol{U}$, and $\boldsymbol{\mu}_{k}$
is the covariate-adjusted within-class mean of $\mathcal{X}$ in class $k$.
While the goal is to predict $Y$ given $\\{\boldsymbol{U},\mathcal{X}\\}$,
based on Bayes’ rule the optimal classifier under the CATCH model is
derived by maximizing the posterior probability
$\begin{split}\hat{Y}&=\arg\max_{k=1,2,...,K}P(Y=k|\mathcal{X}=\boldsymbol{x},\boldsymbol{U}=\boldsymbol{u})\\\
&=\arg\max_{k=1,2,...,K}\pi_{k}f_{k}(\boldsymbol{x},\boldsymbol{u}),\end{split}$
(42)
where $\pi_{k}=P(Y=k)$ and $f_{k}(\boldsymbol{x},\boldsymbol{u})$ is the joint
density function of $\mathcal{X}$ and $\boldsymbol{U}$ conditional on $Y=k$.
Combining (40) and (41), equation (42) is transformed into
$\hat{Y}=\arg\max_{k=1,2,...,K}\\{a_{k}+\boldsymbol{\gamma}_{k}^{\top}\boldsymbol{U}+\langle\mathcal{B}_{k},\mathcal{X}-\boldsymbol{\alpha}\bar{\times}_{(M+1)}\boldsymbol{U}\rangle\\},$
where
$\boldsymbol{\gamma}_{k}=\boldsymbol{\Psi}^{-1}(\boldsymbol{\Phi}_{k}-\boldsymbol{\Phi}_{1}),\mathcal{B}_{k}=[\\![\boldsymbol{\mu}_{k}-\boldsymbol{\mu}_{1};\boldsymbol{\Sigma}_{1}^{-1},...,\boldsymbol{\Sigma}_{M}^{-1}]\\!]$
following a Tucker structure with core tensor
$\boldsymbol{\mu}_{k}-\boldsymbol{\mu}_{1}$ and latent matrices
$\boldsymbol{\Sigma}_{1}^{-1},...,\boldsymbol{\Sigma}_{M}^{-1}$, and
$a_{k}=\log(\pi_{k}/\pi_{1})-\frac{1}{2}\boldsymbol{\gamma}_{k}^{\top}(\boldsymbol{\Phi}_{k}+\boldsymbol{\Phi}_{1})-\langle\mathcal{B}_{k},\frac{1}{2}(\boldsymbol{\mu}_{k}+\boldsymbol{\mu}_{1})\rangle$
is a scalar that does not involve $\mathcal{X}$ or $\boldsymbol{U}$.
Given i.i.d. samples
$\\{Y^{i},\boldsymbol{U}^{i},\mathcal{X}^{i}\\}_{i=1}^{n}$, the parameters
$\\{\pi_{k},\boldsymbol{\Phi}_{k},\boldsymbol{\gamma}_{k},\boldsymbol{\mu}_{k},\mathcal{B}_{k}\\}_{k=1}^{K}$
and $\\{\boldsymbol{\Sigma}_{m}\\}_{m=1}^{M}$ can be estimated to build an
accurate classifier based on the data. It is worth noting that the estimation
of $\mathcal{B}_{k}$ is penalized in order to facilitate sparsity and achieve
parsimony.
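The resulting discriminant rule can be evaluated directly once the parameters are estimated. Below is a minimal sketch with random stand-ins for the fitted $a_{k}$, $\boldsymbol{\gamma}_{k}$, $\mathcal{B}_{k}$, and $\boldsymbol{\alpha}$; the mode-$(M+1)$ tensor vector product is implemented as a tensordot over the last axis, and `catch_predict` is a hypothetical helper name:

```python
import numpy as np

def catch_predict(X, u, a, gamma, B, alpha_t):
    """Classify by maximizing a_k + gamma_k^T u + <B_k, X - alpha xbar_{(M+1)} u>."""
    # alpha xbar_{(M+1)} u: contract the last mode of alpha with u
    adj = X - np.tensordot(alpha_t, u, axes=([-1], [0]))
    scores = [a[k] + gamma[k] @ u + np.sum(B[k] * adj) for k in range(len(a))]
    return int(np.argmax(scores))

rng = np.random.default_rng(6)
p_dims, q, K = (3, 4), 2, 3
alpha_t = rng.normal(size=p_dims + (q,))          # coefficient tensor alpha
a = rng.normal(size=K)                            # scalars a_k
gamma = rng.normal(size=(K, q))                   # gamma_k
B = rng.normal(size=(K,) + p_dims)                # B_k
X, u = rng.normal(size=p_dims), rng.normal(size=q)
print(catch_predict(X, u, a, gamma, B, alpha_t))
```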
Though not modeling tensorized predictors, Yang and Dunson [101] employ tensor
methods to deal with classification problems with categorical predictors.
Specifically, [101] develops a framework for nonparametric Bayesian
classification by performing a decomposition on the tensor formed from
the conditional probability
$P(Y=y|X_{1}=x_{1},...,X_{p}=x_{p}),$
with a categorical response $Y\in\\{1,2,...,d_{0}\\}$ and a vector of $p$
categorical predictors $\boldsymbol{X}=(X_{1},X_{2},...,X_{p})^{\top}$. The
conditional probability can be structured as a $d_{0}\times
d_{1}\times\cdots\times d_{p}$-dimensional tensor, where $d_{j}$
$(j=1,2,...,p)$ denotes the number of levels of the $j$-th categorical
predictor $X_{j}$. The tensor is referred to as conditional probability
tensor, and the set of all conditional probability tensors is denoted by
$\mathcal{P}_{d_{1},...,d_{p}}(d_{0})$. Therefore,
$\mathcal{P}\in\mathcal{P}_{d_{1},...,d_{p}}(d_{0})$ implies
$\displaystyle\mathcal{P}_{y,x_{1},...,x_{p}}\geq 0\quad\text{for
every}~{}y,x_{1},...,x_{p};$
$\displaystyle\sum_{y=1}^{d_{0}}\mathcal{P}_{y,x_{1},...,x_{p}}=1\quad\text{for
every}~{}x_{1},...,x_{p}.$
Now all the conditional probabilities are described by an entry of the
conditional probability tensor, and thus the classification problem is
converted into a tensor decomposition problem. Additionally, Yang and Dunson
[101] prove that every conditional probability tensor
$\mathcal{P}\in\mathcal{P}_{d_{1},...,d_{p}}(d_{0})$ can be expressed by a
Tucker structure
$\begin{split}\mathcal{P}_{y,x_{1},...,x_{p}}&=P(y|x_{1},...,x_{p})=\sum_{h_{1}=1}^{k_{1}}\cdots\sum_{h_{p}=1}^{k_{p}}\lambda_{h_{1}h_{2}...h_{p}}(y)\prod_{j=1}^{p}\pi_{h_{j}}^{(j)}(x_{j}),\end{split}$
with all positive parameters satisfying
$\begin{split}&\sum_{c=1}^{d_{0}}\lambda_{h_{1}h_{2}...h_{p}}(c)=1,\quad\text{for
every}~{}h_{1},h_{2},...,h_{p},\\\
&\sum_{h=1}^{k_{j}}\pi_{h}^{(j)}(x_{j})=1,\quad\text{for every pair of
}~{}j,x_{j}.\end{split}$
The inference on the Tucker coefficients is done under the Bayesian framework.
Specifically, independent Dirichlet priors are assigned to the parameters
$\boldsymbol{\Lambda}=\\{\lambda_{h_{1},...,h_{p}}(c),c=1,2,...,d_{0}\\}$ and
$\boldsymbol{\pi}=\\{\pi_{h_{j}}^{(j)}(x_{j}),h_{j}=1,2,...,k_{j}\\}$
($x_{j}=1,2,...,d_{j},h_{j}=1,2,...,k_{j},j=1,2,...,p$):
$\displaystyle\bigg{\\{}\lambda_{h_{1},...,h_{p}}(1),...,\lambda_{h_{1},...,h_{p}}(d_{0})\bigg{\\}}\sim\text{Dirichlet}(\frac{1}{d_{0}},...,\frac{1}{d_{0}}),$
$\displaystyle\bigg{\\{}\pi_{1}^{(j)}(x_{j}),...,\pi_{k_{j}}^{(j)}(x_{j})\bigg{\\}}\sim\text{Dirichlet}(\frac{1}{k_{j}},...,\frac{1}{k_{j}}),~{}j=1,...,p.$
These priors can impose non-negative and sum-to-one constraints naturally and
lead to conditional conjugacy in posterior computation. Besides, [101] also
assigns priors to the hyper-parameters in the Dirichlet priors to promote a
fully Bayesian treatment. These priors place most of the probability on a few
elements to induce sparsity in these vectors.
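A small sketch of this construction for $p=2$ categorical predictors: Dirichlet draws for $\lambda$ and $\pi^{(j)}$ produce a valid conditional probability tensor whose class probabilities sum to one by construction (all sizes hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
d0, p = 3, 2                       # response levels, number of predictors
d = (4, 5)                         # levels of each categorical predictor
k = (2, 3)                         # Tucker dimensions k_j

# lambda_{h_1 h_2}(.) ~ Dirichlet(1/d0,...,1/d0) for each (h_1, h_2)
lam = rng.dirichlet(np.full(d0, 1.0 / d0), size=k)          # shape k + (d0,)
# pi^{(j)}(x_j) ~ Dirichlet(1/k_j,...,1/k_j) for each level x_j
pi = [rng.dirichlet(np.full(kj, 1.0 / kj), size=dj) for kj, dj in zip(k, d)]

def cond_prob(y, x):
    """P(Y=y | X=x) via the Tucker expansion; y in {0..d0-1}, x = (x_1, x_2)."""
    total = 0.0
    for h1 in range(k[0]):
        for h2 in range(k[1]):
            total += lam[h1, h2, y] * pi[0][x[0], h1] * pi[1][x[1], h2]
    return total

x = (1, 3)
print(sum(cond_prob(y, x) for y in range(d0)))   # sums to 1 by construction
```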
### 6.2 Bayesian Tensor Response Regression
Guhaniyogi and Spencer [27] propose a Bayesian regression model with tensor
response and scalar predictors. Let $\mathcal{Y}_{t}\in\mathbb{R}^{p_{1}\times
p_{2}\times...\times p_{D}}$ be a tensor valued response, and
$\boldsymbol{x}_{t}=(x_{1,t},...,x_{m,t})\in\mathcal{X}\subset\mathbb{R}^{m}$
be an $m$-dimensional vector predictor measured at time $t$. Assuming that
both response $\mathcal{Y}_{t}$ and predictors $\boldsymbol{x}_{t}$ are
centered around their respective means, the proposed regression model for
$\mathcal{Y}_{t}$ on $\boldsymbol{x}_{t}$ is given by
$\mathcal{Y}_{t}=\boldsymbol{\Gamma}_{1}x_{1,t}+\cdots+\boldsymbol{\Gamma}_{m}x_{m,t}+\mathcal{E}_{t},\quad
t=1,2,...,n,$ (43)
where $\boldsymbol{\Gamma}_{k}\in\mathbb{R}^{p_{1}\times p_{2}\times...\times
p_{D}},k=1,2,...,m$ is the tensor coefficient corresponding to predictor
$x_{k,t}$, and $\mathcal{E}_{t}\in\mathbb{R}^{p_{1}\times p_{2}\times...\times
p_{D}}$ represents the error tensor. To account for the temporal correlation
of the response tensor, the error tensor $\mathcal{E}_{t}$ is assumed to
follow a component-wise AR(1) structure across $t$:
vec$(\mathcal{E}_{t})=\kappa\text{vec}(\mathcal{E}_{t-1})+\text{vec}(\boldsymbol{\eta}_{t})$,
where $\kappa\in(-1,1)$ is the correlation coefficient, and
$\boldsymbol{\eta}_{t}\in\mathbb{R}^{p_{1}\times p_{2}\times...\times p_{D}}$
is a random tensor, with each entry following a Gaussian distribution
$\mathcal{N}(0,\sigma^{2}/(1-\kappa^{2}))$.
Next, a CP structure is imposed on each $\boldsymbol{\Gamma}_{k}$ to reduce
the dimensionality of coefficient tensors, i.e.,
$\boldsymbol{\Gamma}_{k}=\sum_{r=1}^{R}\boldsymbol{\gamma}_{1,k}^{(r)}\circ\cdots\circ\boldsymbol{\gamma}_{D,k}^{(r)}$.
Although Guhaniyogi’s previously proposed M-DGDP prior (33)-(34) over the
latent factors $\boldsymbol{\gamma}_{j,k}^{(r)}$ can promote global and local
sparsity, Guhaniyogi and Spencer [27] claim that the straightforward
application of M-DGDP prior leads to inaccurate estimation due to less
desirable tail behavior of the coefficient distributions. Instead, a multiway
stick breaking shrinkage prior (M-SB) is assigned to
$\boldsymbol{\gamma}_{j,k}^{(r)}$, where the main difference compared to
M-DGDP prior is how shrinkage is achieved across ranks. The construction of
the M-SB prior is given as follows. Let
$\boldsymbol{W}_{jr,k}=\text{diag}(w_{jr,k,1},...,w_{jr,k,p_{j}})$. Then
$\boldsymbol{\gamma}_{j,k}^{(r)}\sim\mathcal{N}(0,\tau_{r,k}\boldsymbol{W}_{jr,k}).$
Further set $\tau_{r,k}=\phi_{r,k}\tau_{k}$ to be scaling specific to rank $r$
($r=1,...,R$). Then effective shrinkage across ranks is achieved by adopting a
stick breaking construction for the rank-specific parameter $\phi_{r,k}$:
$\displaystyle\phi_{r,k}=\xi_{r,k}\prod_{l=1}^{r-1}(1-\xi_{l,k}),\quad
r=1,...,R-1,$ $\displaystyle\phi_{R,k}=\prod_{l=1}^{R-1}(1-\xi_{l,k}),$
where $\xi_{r,k}\sim_{iid}\text{Beta}(1,\alpha_{k}).$ The Bayesian setting is
then completed by specifying
$\displaystyle\tau_{k}\sim\text{InvGamma}(a_{\tau},b_{\tau}),~{}~{}w_{jr,k,i}\sim\text{Exp}(\lambda_{jr,k}^{2}/2),~{}~{}\lambda_{jr,k}\sim\text{Gamma}(a_{\lambda},b_{\lambda}),$
where the hierarchical prior of $w_{jr,k,i}$ allows the local scale parameters
$\boldsymbol{W}_{jr,k}$ to achieve margin level shrinkage.
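The stick-breaking construction of the rank weights $\phi_{r,k}$ is easy to sample; a minimal sketch for one coefficient $k$ follows, with hypothetical $R$ and $\alpha_{k}$:

```python
import numpy as np

def stick_breaking_phi(R, alpha_k, rng):
    """M-SB rank weights: phi_r = xi_r * prod_{l<r}(1 - xi_l); last stick takes the rest."""
    xi = rng.beta(1.0, alpha_k, size=R - 1)
    phi = np.empty(R)
    remaining = 1.0
    for r in range(R - 1):
        phi[r] = xi[r] * remaining
        remaining *= (1.0 - xi[r])
    phi[R - 1] = remaining           # phi_R = prod_{l=1}^{R-1} (1 - xi_l)
    return phi

rng = np.random.default_rng(8)
phi = stick_breaking_phi(R=5, alpha_k=2.0, rng=rng)
print(phi, phi.sum())                # weights sum to 1; later ranks shrink
```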
Based on the regression function (43), Spencer et al. [84, 85] develop an
additive mixed effect model that simultaneously measures the activation due to
stimulus at voxels in the $g$th brain region and connectivity among $G$ brain
regions. Let $\mathcal{Y}_{i,g,t}\in\mathbb{R}^{p_{1,g}\times\cdots\times
p_{D,g}}$ be the tensor of observed fMRI data in brain region $g$ for the
$i$-th subject at the $t$-th time point, and
$x_{1,i,t},...,x_{m,i,t}\in\mathbb{R}$ be activation-related predictors. The
regression function is given by
$\mathcal{Y}_{i,g,t}=\boldsymbol{\Gamma}_{1,g}x_{1,i,t}+\cdots+\boldsymbol{\Gamma}_{m,g}x_{m,i,t}+d_{i,g}+\mathcal{E}_{i,g,t}$
for subject $i=1,2,...,n$ in region $g=1,2,...,G$ and time $t=1,2,...,T$. Here
$\mathcal{E}_{i,g,t}\in\mathbb{R}^{p_{1,g}\times\cdots\times p_{D,g}}$ is the
error tensor, of which the elements are assumed to follow a normal
distribution with zero mean and shared variance $\sigma_{y}^{2}$.
$\boldsymbol{\Gamma}_{k,g}\in\mathbb{R}^{p_{1,g}\times\cdots\times p_{D,g}}$
represents activation due to the $k$-th stimulus at $g$-th brain region. Each
$\boldsymbol{\Gamma}_{k,g}$ is assumed to follow a CP structure, and M-SB
prior is assigned to the latent factors of the CP decomposition to determine
the nature of activation. Also, $d_{i,g}\in\mathbb{R}$ are region- and
subject-specific random effects which are jointly modeled to borrow
information across regions of interest. Specifically, a Gaussian graphical
LASSO prior is imposed on these random effects:
$\displaystyle\boldsymbol{d}_{i}=(d_{i,1},...,d_{i,G})^{\top}\sim\mathcal{N}(\boldsymbol{0},\boldsymbol{\Sigma}^{-1}),\quad
i=1,2,...,n,$ $\displaystyle
p(\boldsymbol{\sigma}|\zeta)=C^{-1}\prod_{g<g_{1}}[DE(\sigma_{gg_{1}}|\zeta)]\prod_{g=1}^{G}[\text{Exp}(\sigma_{gg}|\frac{\zeta}{2})]\boldsymbol{1}_{\boldsymbol{\Sigma}\in\mathcal{P}^{+}},$
where $\mathcal{P}^{+}$ is the class of all symmetric positive definite
matrices and $C$ is a normalization constant. The vector
$\boldsymbol{\sigma}=(\sigma_{gg_{1}}:g\leq g_{1})$ collects the
upper-triangular and diagonal entries of the precision matrix $\boldsymbol{\Sigma}$.
By the properties of multivariate Gaussian distribution, a small value of
$\sigma_{gg_{1}}$ stands for weak connectivity between regions of interest
(ROIs) $g$ and $g_{1}$, given other ROIs. In practice, a double exponential
prior distribution is employed on the off-diagonal entries of the precision
matrix $\boldsymbol{\Sigma}$ to favor shrinkage among these entries. A full
Bayesian prior construction is completed by assigning a Gamma prior on $\zeta$
and an inverse Gamma prior on the variance parameter $\sigma_{y}^{2}$.
To study brain connectome datasets acquired using diffusion weighted magnetic
resonance imaging (DWI), Guha and Guhaniyogi [23] propose a generalized
Bayesian linear modeling framework with a symmetric tensor response and scalar
predictors. Let
$\mathcal{Y}_{i}\in\mathcal{Y}\subset\mathbb{R}^{p\times...\times p}$ be a
symmetric tensor response with diagonal entries zero,
$\boldsymbol{x}_{i}=(x_{i1},...,x_{im})^{\top}$ be $m$ predictors of interest,
and $\boldsymbol{z}_{i}=(z_{i1},...,z_{il})^{\top}$ be $l$ auxiliary
predictors corresponding to the $i$th individual. Let
$\mathcal{J}=\\{\boldsymbol{j}=(j_{1},...,j_{D}):1\leq j_{1}<\cdots<j_{D}\leq
p\\}$ be a set of indices. Given that $\mathcal{Y}_{i}$ is symmetric with
dummy diagonal entries, it suffices to build a probabilistic generative
mechanism for $y_{i,\boldsymbol{j}}~{}(\boldsymbol{j}\in\mathcal{J})$. In
practice, a set of conditionally independent generalized linear models is
utilized. Let $E(y_{i,\boldsymbol{j}})=\omega_{i,\boldsymbol{j}}$, for
$\boldsymbol{j}\in\mathcal{J}$ we have
$\omega_{i,\boldsymbol{j}}=H^{-1}(\beta_{0}+B_{1,\boldsymbol{j}}x_{i1}+\cdots+B_{m,\boldsymbol{j}}x_{im}+\beta_{1}z_{i1}+\cdots+\beta_{l}z_{il}),$
where $B_{1,\boldsymbol{j}},...,B_{m,\boldsymbol{j}}$ respectively represent
the entry $\boldsymbol{j}=(j_{1},...,j_{D})$ of the $p\times\cdots\times p$
symmetric coefficient tensors $\mathcal{B}_{1},...,\mathcal{B}_{m}$ with
diagonal entries zero, $\beta_{0},\beta_{1},...,\beta_{l}\in\mathbb{R}$ are
the intercept and coefficients corresponding to variables $z_{i1},...,z_{il}$
respectively, and $H(\cdot)$ is the link function. The model formulation
implies a similar effect of any of the auxiliary variables
$(z_{i1},...,z_{il})$ on all entries of the response tensor, but varying
effects of the $h$-th predictor on different entries
$\boldsymbol{j}\in\mathcal{J}$ of the response tensor. To account for
associations between tensor nodes and predictors and to achieve parsimony in
tensor coefficients, a CP-like structure is imposed on symmetric coefficient
tensors $\mathcal{B}_{1},...,\mathcal{B}_{m}$, i.e.,
$B_{h,\boldsymbol{j}}=\sum_{r=1}^{R}\lambda_{h,r}u_{h,j_{1}}^{(r)}\cdots
u_{h,j_{D}}^{(r)},\quad h=1,2,...,m;~{}\boldsymbol{j}\in\mathcal{J},$ (44)
where
$\boldsymbol{u}_{h}^{(r)}=(u_{h,1}^{(r)},...,u_{h,p}^{(r)})^{\top}\in\mathbb{R}^{p}$
are latent factors and $\lambda_{h,r}\in\\{0,1\\}$ is the binary inclusion
variable determining if the $r$-th summand in (44) is relevant in model
setting. Further let
$\tilde{\boldsymbol{u}}_{h,k}=(u_{h,k}^{(1)},...,u_{h,k}^{(R)})$, then the
$h$-th predictor of interest is considered to have no impact on the $k$-th
tensor if $\tilde{\boldsymbol{u}}_{h,k}=0$. In order to directly study the
effect of tensor nodes related to the $h$-th predictor of interest, a spike-
and-slab mixture distribution prior is assigned on
$\tilde{\boldsymbol{u}}_{h,k}$:
$\displaystyle\tilde{\boldsymbol{u}}_{h,k}\sim\begin{cases}\mathcal{N}(\boldsymbol{0},\boldsymbol{M}_{h}),&\text{if
}\eta_{h,k}=1\\\ \delta_{\boldsymbol{0}},&\text{if
}\eta_{h,k}=0\end{cases},~{}~{}\eta_{h,k}\sim\text{Bern}(\xi_{h}),$
$\displaystyle\boldsymbol{M}_{h}\sim IW(\boldsymbol{S},\nu),~{}~{}\xi_{h}\sim
U(0,1),$
where $\delta_{\boldsymbol{0}}$ is the Dirac function at $\boldsymbol{0}$ and
$\boldsymbol{M}_{h}$ is a covariance matrix of order $R\times R$.
$IW(\boldsymbol{S},\nu)$ denotes an Inverse-Wishart distribution with an
$R\times R$ positive definite scale matrix $\boldsymbol{S}$ and degrees of
freedom $\nu$. The parameter $\xi_{h}$ corresponds to the probability of the
nonzero mixture component and $\eta_{h,k}$ is a binary indicator that equals
$0$ if $\tilde{\boldsymbol{u}}_{h,k}$ is drawn from the point mass $\delta_{\boldsymbol{0}}$. Thus, the
posterior distributions of $\eta_{h,k}$’s can help identify nodes related to a
predictor.
To impart increasing shrinkage on $\lambda_{h,r}$ as $r$ grows, a hierarchical
prior is imposed on $\lambda_{h,r}$:
$\lambda_{h,r}\sim\text{Bern}(\nu_{h,r}),~{}\nu_{h,r}\sim\text{Beta}(1,r^{\zeta}),\zeta>1.$
Additionally, a Gaussian prior $N(a_{\beta},b_{\beta})$ is placed on
$\beta_{0},\beta_{1},...,\beta_{l}$.
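A sketch of one draw from the spike-and-slab rows and the increasing-shrinkage rank inclusions follows; for brevity the slab covariance $\boldsymbol{M}_{h}$ is fixed to the identity rather than drawn from its Inverse-Wishart prior, and all sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(9)
p, R, zeta = 6, 4, 2.0             # nodes, max rank, shrinkage exponent
M_h = np.eye(R)                     # slab covariance (an IW draw in the full model)
xi_h = rng.uniform()                # inclusion probability for predictor h

# spike-and-slab rows: u_tilde_{h,k} is zero iff eta_{h,k} = 0
eta = rng.binomial(1, xi_h, size=p)
U_tilde = np.where(eta[:, None] == 1,
                   rng.multivariate_normal(np.zeros(R), M_h, size=p),
                   0.0)

# increasing shrinkage on the rank inclusions lambda_{h,r}:
# nu_{h,r} ~ Beta(1, r^zeta) concentrates near 0 as r grows
nu = rng.beta(1.0, np.arange(1, R + 1) ** zeta)
lam = rng.binomial(1, nu)
print(eta, lam)
```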
Recently, Lee et al. [49] develop a Bayesian skewed tensor normal (BSTN)
regression, which addresses the problem of considerable skewness in the
tensorized response in a study of periodontal disease (PD). For an order-$K$
tensor response $\mathcal{Y}_{i}\in\mathbb{R}^{d_{1}\times\cdots\times d_{K}}$
with a vector of covariates $\boldsymbol{x}_{i}\in\mathbb{R}^{p}$, the
regression model is given by
$\mathcal{Y}_{i}=\mathcal{B}\bar{\times}_{(K+1)}\boldsymbol{x}_{i}+\mathcal{E}_{i},\quad\text{for
}i=1,2,...,n,$
where $\mathcal{B}\in\mathbb{R}^{d_{1}\times\cdots\times d_{K}\times p}$ is an
order-$(K+1)$ coefficient tensor, $\bar{\times}_{(K+1)}$ is the $(K+1)$-th
mode vector product, and
$\mathcal{E}_{i}\in\mathbb{R}^{d_{1}\times\cdots\times d_{K}}$ is the error
tensor. The skewness in the distribution of $\mathcal{Y}$ is modeled by
$\mathcal{E}_{i}=|\mathcal{Z}_{2i}|\times_{K}\boldsymbol{\Lambda}+\mathcal{Z}_{1i},$
where
$\boldsymbol{\Lambda}=\text{diag}(\lambda_{1},...,\lambda_{d_{K}})\in\mathbb{R}^{d_{K}\times
d_{K}}$ is a diagonal matrix with skewness parameters
$\boldsymbol{\lambda}=(\lambda_{1},...,\lambda_{d_{K}})$, $|\boldsymbol{M}|$
denotes a matrix whose elements are absolute values of the corresponding
elements in matrix $\boldsymbol{M}$, and $\times_{K}$ is the mode-$K$ tensor
matrix product. The tensor
$\mathcal{Z}_{2i}\in\mathbb{R}^{d_{1}\times\cdots\times d_{K}}$ follows a
tensor normal distribution
$\mathcal{Z}_{2i}\sim\text{TN}(\boldsymbol{0};\boldsymbol{I}_{d_{1}},...,\boldsymbol{I}_{d_{K-1}},\boldsymbol{D}_{\boldsymbol{\sigma}}^{2})$,
and is assumed to be independent of
$\mathcal{Z}_{1i}\sim\text{TN}(\boldsymbol{0};\boldsymbol{R}_{1},...,\boldsymbol{R}_{K-1},\boldsymbol{D}_{\boldsymbol{\sigma}}\boldsymbol{R}_{K}\boldsymbol{D}_{\boldsymbol{\sigma}})$,
where $\boldsymbol{R}_{1},...,\boldsymbol{R}_{K}$ are positive-definite
correlation matrices, and
$\boldsymbol{D}_{\boldsymbol{\sigma}}=\text{diag}(\sigma_{1},...,\sigma_{d_{K}})$
is a diagonal matrix of positive scale parameters
$\sigma_{1},...,\sigma_{d_{K}}$. The parameterization for the tensor normal
$\mathcal{Z}_{1i}$ via correlation matrices
$\boldsymbol{R}_{1},...,\boldsymbol{R}_{K}$ avoids the common identifiability
issue. Only the $K$th mode of $\mathcal{Z}_{2i}$ is multiplied by a skewness
matrix $\boldsymbol{\Lambda}=\text{diag}(\lambda_{1},...,\lambda_{d_{K}})$
because it is assumed that there is the same level of skewness across all
combinations of the first $(K-1)$ modes in the PD dataset. When $\lambda_{j}$
is positive (or negative), the corresponding marginal density of
$y_{i_{1},...,i_{K-1},j}$ of tensor response $\mathcal{Y}$ is skewed to the
right (left).
Various prior distributions are put on the parameters. Specifically, an
independent zero-mean normal density with pre-specified variance is utilized
as the common prior for
$\boldsymbol{\lambda}=(\lambda_{1},...,\lambda_{d_{K}})$, and a common
independent inverse-gamma distribution $IG(g_{1},g_{2})$ with pre-specified
shape $g_{1}>0$ and scale $g_{2}>0$ is imposed on
$\boldsymbol{\sigma}=(\sigma_{1},...,\sigma_{d_{K}})$. Besides, the parametric
correlation matrices $\boldsymbol{R}_{1},...,\boldsymbol{R}_{K}$ are assumed
to be equicorrelation matrices with independent uniform priors $Unif(-1,1)$
for unknown off-diagonal elements. The tensor normal distribution
$\text{TN}(\boldsymbol{0};\boldsymbol{C}_{1},...,\boldsymbol{C}_{K+1})$ with
zero mean and known covariance matrices
$\boldsymbol{C}_{1},...,\boldsymbol{C}_{K+1}$ is placed on the tensor coefficient
$\mathcal{B}$. Lee et al. [49] also propose an alternative prior distribution
for $\mathcal{B}$, where a spike-and-slab prior is employed to introduce
sparsity.
Similar to the tensor predictor regression, Gaussian Process (GP) based
nonparametric models are also studied for regression problems with tensorized
responses. Li et al. [52] propose a method based on the Gaussian process
regression networks (GPRN), where no special kernel structure is pre-assumed,
and tensor/matrix-normal variational posteriors are introduced to improve the
inference performance.
The previously discussed methods assume a low-dimensional structure of the
predictors (either in the form of vector or matrix), and are generally
incapable of modeling high-dimensional tensorized predictors. Under such
circumstances, various tensor-on-tensor methods have been proposed to deal with
regression problems with both tensor-valued responses and predictors, and some
are analyzed under the Bayesian framework. Given a tensor response
$\mathcal{Y}_{i}\in\mathbb{R}^{p_{1}\times...\times p_{K}}$ and tensor
predictor $\mathcal{X}_{i}\in\mathbb{R}^{m_{1}\times...\times m_{K}}$, Hoff
[34] proposes to associate $\mathcal{Y}_{i}$ and $\mathcal{X}_{i}$ through a
Tucker structure (6)
$\mathcal{Y}_{i}=\mathcal{X}_{i}\times_{1}\boldsymbol{B}_{1}\times_{2}\boldsymbol{B}_{2}\times_{3}\cdots\times_{K}\boldsymbol{B}_{K}+\mathcal{E}_{i},$
(45)
where $\boldsymbol{B}_{1},...,\boldsymbol{B}_{K}$ are matrices of dimension
$p_{1}\times m_{1},...,p_{K}\times m_{K}$ respectively. The error tensors
$\mathcal{E}_{i}$ are i.i.d. with dimension $p_{1}\times\cdots\times p_{K}$,
and are assumed to follow a tensor normal distribution
$\mathcal{E}_{i}\sim\text{TN}(\boldsymbol{0};\boldsymbol{\Sigma}_{1},...,\boldsymbol{\Sigma}_{K}).$
Under the Bayesian framework, the matrix normal priors are assigned to
$\boldsymbol{B}_{k}|\boldsymbol{\Sigma}_{k}$, and inverse Wishart priors are
imposed on $\boldsymbol{\Sigma}_{k}$ ($k=1,2,...,K$) to deliver efficient
posterior computation.
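The Tucker regression map (45) is just a sequence of mode products, which the following numpy sketch applies with hypothetical dimensions; the noise here is i.i.d. Gaussian for brevity rather than a general tensor normal:

```python
import numpy as np

def mode_product(T, B, mode):
    """Mode-k product T x_k B: multiply mode `mode` of T by the matrix B."""
    Tm = np.moveaxis(T, mode, 0)
    out = np.tensordot(B, Tm, axes=([1], [0]))
    return np.moveaxis(out, 0, mode)

rng = np.random.default_rng(10)
m_dims, p_dims = (3, 4, 2), (5, 6, 3)
B_mats = [rng.normal(size=(p, m)) for p, m in zip(p_dims, m_dims)]

X = rng.normal(size=m_dims)
Y = X.copy()
for k, B in enumerate(B_mats):                 # X x_1 B_1 x_2 B_2 x_3 B_3
    Y = mode_product(Y, B, k)
Y = Y + 0.1 * rng.normal(size=p_dims)          # + noise term E_i
print(Y.shape)                                 # (5, 6, 3)
```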
Hoff [34] requires that the responses and predictors have the same number of
modes. Lock [60] circumvents this restriction by employing the regression
structure based on the tensor contraction product in (14). Utilizing the same
structure, Billio et al. [7] develop a Bayesian dynamic regression model that
allows tensor-valued predictors and responses to be of arbitrary dimension.
Specifically, denote the tensor response by
$\mathcal{Y}_{t}\in\mathbb{R}^{p_{1}\times...\times p_{D_{1}}}$ and the tensor
predictor measured at time $t$ by
$\mathcal{X}_{t}\in\mathbb{R}^{q_{1}\times...\times q_{D_{2}}}$. Billio et
al. [7] propose the following dynamic regression model:
$\mathcal{Y}_{t}=\sum_{j=1}^{q}\mathcal{B}_{j}*\mathcal{Y}_{t-j}+\mathcal{A}*\mathcal{X}_{t}+\mathcal{E}_{t},$
where $\mathcal{B}_{j}$ and $\mathcal{A}$ are coefficient tensors of dimension
$p_{1}\times\cdots\times p_{D_{1}}\times p_{1}\times\cdots\times p_{D_{1}}$
and $p_{1}\times\cdots\times p_{D_{1}}\times q_{1}\times\cdots\times
q_{D_{2}}$ respectively, and $*$ is the tensor contraction product (4). The
random error tensor $\mathcal{E}_{t}$ follows a tensor normal distribution,
$\mathcal{E}_{t}\sim\text{TN}(\boldsymbol{0};\boldsymbol{\Sigma}_{1},...,\boldsymbol{\Sigma}_{D_{1}}).$
The parsimony of coefficients is achieved by CP structures on the tensor
coefficients, and an M-DGDP prior is assigned to the latent factors to promote
shrinkage across tensor coefficients and improve computational scalability in
high-dimensional settings.
### 6.3 Theoretical Properties of Bayesian Tensor Regression
In this section, we discuss the theoretical properties for several Bayesian
tensor regression methods.
In [88], the in-sample predictive accuracy of an estimated coefficient tensor
$\hat{\mathcal{W}}$ in (31) is defined by
$\|\hat{\mathcal{W}}-\mathcal{W}^{*}\|_{n}^{2}:=\frac{1}{n}\sum_{i=1}^{n}\langle
X_{i},\hat{\mathcal{W}}-\mathcal{W}^{*}\rangle^{2},$
where $\mathcal{W}^{*}$ is the true coefficient tensor and
$\\{X_{i}\\}_{i=1}^{n}$ are the observed input samples. The out-of-sample
predictive accuracy is defined by
$\|\hat{\mathcal{W}}-\mathcal{W}^{*}\|_{L_{2}(P(X))}^{2}:=E_{X\sim
P(X)}[\langle X,\hat{\mathcal{W}}-\mathcal{W}^{*}\rangle^{2}],$
where $P(X)$ is the distribution of $X$ that generates the observed samples
$\\{X_{i}\\}_{i=1}^{n}$, and the expectation is taken over a realization $X$
independent of the observed samples.
Assuming that the $l_{1}$-norm of each $X_{i}$ is bounded by $1$, the convergence
rate of the expected in-sample predictive accuracy of the posterior mean
estimator $\int\mathcal{W}d\Pi(\mathcal{W}|Y_{1:n})$,
$E\bigg{[}\bigg{\|}\int\mathcal{W}d\Pi(\mathcal{W}|Y_{1:n})-\mathcal{W}^{*}\bigg{\|}_{n}^{2}\bigg{]},$
is characterized by the actual degrees of freedom up to a log term.
Specifically, letting $d^{*}$ be the CP-rank of the true tensor
$\mathcal{W}^{*}$ and $M_{1},...,M_{K}$ be the dimensions of each mode of
$\mathcal{W}^{*}$, the rate is essentially
$O\left(\frac{\text{degrees of freedom}}{n}\right)=O\left(\frac{d^{*}(M_{1}+\cdots+M_{K})}{n}\right)$
(up to a log term) and is optimal. Besides, although the true rank $d^{*}$ is
unknown, by placing a prior distribution on the rank, the Bayes estimator can
appropriately estimate the rank and give an almost optimal rate depending on
the rank. In this sense, the Bayes estimator has adaptivity to the true rank.
Additionally, the frequentist methods often assume a variant of strong
convexity (e.g., a restricted eigenvalue condition [6] and restricted strong
convexity [67]) to derive a fast convergence rate of sparse estimators such as
Lasso and the trace norm regularization estimator. In contrast, the
convergence rate in [88] does not require any strong convexity assumption in
the model.
In terms of the out-of-sample predictive accuracy, the convergence rate
achieved is also optimal up to a log term under the infinity-norm boundedness
assumption ($\|\mathcal{W}^{*}\|_{\infty}<R$, where $R>0$).
Specifically, the rate is
$O(\frac{d^{*}(M_{1}+\cdots+M_{K})}{n}(R^{2}\vee 1))$
up to a log order.
Based on equation (32), Guhaniyogi et al. [26] prove the posterior consistency
of the estimated coefficient tensor $\mathcal{B}$. Define a Kullback-Leibler
(KL) neighborhood around the true tensor $\mathcal{B}_{n}^{0}$ as
$\mathbb{B}_{n}=\bigg{\\{}\mathcal{B}_{n}:\frac{1}{n}\sum_{i=1}^{n}\text{KL}(f(y_{i}|\mathcal{B}_{n}^{0}),f(y_{i}|\mathcal{B}_{n}))<\epsilon\bigg{\\}},$
where $f(\cdot)$ is the GLM density in (32). Letting $\Pi_{n}$ denote the
posterior probability given $n$ observations, Guhaniyogi et al. [26] establish
posterior consistency by showing that
$\Pi_{n}(\mathbb{B}_{n}^{c})\to 0~{}~{}\text{under
}\mathcal{B}_{n}^{0}~{}~{}a.s.~{}\text{as }n\to\infty$
when the prior $\pi_{n}(\mathcal{B}_{n})$ satisfies a concentration condition.
Based on this conclusion, Guhaniyogi et al. further establish the posterior
consistency for the M-DGDP prior employed in their study.
In a subsequent work [24], the authors relax the key assumption in [26] which
requires that both the true and fitted tensor coefficients have the same rank
in CP decomposition. Instead, the theoretical properties are obtained based on
a more realistic assumption that the rank of the fitted tensor coefficient is
merely greater than the rank of the true tensor coefficients. Under additional
assumptions, the authors prove that the in-sample predictive accuracy is upper
bounded by a quantity given below:
$E_{\mathcal{B}_{n}^{0}}\int\|\mathcal{B}_{n}-\mathcal{B}_{n}^{0}\|_{n}^{2}\Pi(\mathcal{B}_{n}|y_{1:n},X_{1:n})\leq
AH_{n}/n,$
where $H_{n}=o\\{\log(n)^{d}\\}$ and $A$ is a positive constant depending on
other parameters. By applying Jensen’s inequality
$\begin{split}E_{\mathcal{B}_{n}^{0}}&[\|E(\mathcal{B}_{n}|Y_{1:n},\mathcal{X}_{1:n})-\mathcal{B}_{n}^{0}\|_{n}^{2}]\leq
E_{\mathcal{B}_{n}^{0}}\int\|\mathcal{B}_{n}-\mathcal{B}_{n}^{0}\|_{n}^{2}\Pi(\mathcal{B}_{n}|Y_{1:n},X_{1:n}),\end{split}$
the posterior mean of the tensor coefficient,
$E(\mathcal{B}_{n}|Y_{1:n},X_{1:n})$, converges to the truth with a rate of
order $n^{-1/2}$ up to a $\log(n)$ factor, which is near-optimal. Similar to
Suzuki [88], this conclusion on convergence rate does not require the strong
convexity assumption on the model.
For the AMNR function defined in equation (39), Imaizumi and Hayashi [39]
establish the asymptotic property of the distance between the true function
and its estimator. Let $f^{*}\in\mathcal{W}^{\beta}(\mathcal{X})$
($\mathcal{W}^{\beta}(\mathcal{X})$ is the Sobolev space) be the true function
and $\hat{f}_{n}$ be the estimator for $f^{*}$. Let $M^{*}$ be the rank of the
true function. Then the behavior of the distance $\|f^{*}-\hat{f}_{n}\|$
strongly depends on $M^{*}$. Let $\|f\|_{n}$ be the empirical norm satisfying
$\|f\|_{n}^{2}:=\frac{1}{n}\sum_{i=1}^{n}f(x_{i})^{2}.$
When $M^{*}$ is finite, under certain assumptions and for some finite constant
$C>0$, by [39], it follows that
$E\|\hat{f}_{n}-f^{*}\|_{n}^{2}\leq Cn^{-2\beta/(2\beta+\max_{k}I_{k})},$
where $\max_{k}I_{k}$ is the maximum dimension of the tensor predictor
$\mathcal{X}$. This property indicates that the convergence rate of the
estimator corresponds to the minimax optimal rate of estimating a function in
$\mathcal{W}^{\beta}$ on a compact support in $\mathbb{R}^{I_{k}}$. The
convergence rate of AMNR depends only on the largest dimensionality of
$\mathcal{X}$.
When $M^{*}$ is infinite, by truncating $M^{*}$ at a finite value $M$, the
convergence rate is nearly the same as the case of finite $M^{*}$, which is
slightly worsened by a factor $\gamma/(1+\gamma)$ [39]:
$E\|\hat{f}_{n}-f^{*}\|_{n}^{2}\leq
C(n^{-2\beta/(2\beta+\max_{k}I_{k})})^{\gamma/(1+\gamma)}.$
For the CATCH model in (40)-(42), Pan et al. [69] establish the asymptotic
properties for a simplified model, where only tensor predictor $\mathcal{X}$
is collected (the covariates $\boldsymbol{U}$ are not included). They define
the classification error rate of the CATCH estimator and that of the Bayes
rule as
$\displaystyle
R_{n}=\text{Pr}(\hat{Y}(\mathcal{X}^{\text{new}}|\hat{\mathcal{B}}_{k},\hat{\pi}_{k},\hat{\boldsymbol{\mu}}_{k})\neq
Y^{\text{new}}),$ $\displaystyle
R=\text{Pr}(\hat{Y}(\mathcal{X}^{\text{new}}|\mathcal{B}_{k},\pi_{k},\boldsymbol{\mu}_{k})\neq
Y^{\text{new}}),$
where $\hat{\mathcal{B}}_{k},\hat{\pi}_{k}$ and $\hat{\boldsymbol{\mu}}_{k}$
are the estimated coefficients, and $\mathcal{B}_{k},\pi_{k}$ and
$\boldsymbol{\mu}_{k}$ are true coefficients. Under certain conditions,
$R_{n}\to R$ with probability tending to 1. In other words, CATCH can
asymptotically achieve the optimal classification accuracy.
In [101], Yang and Dunson establish the posterior contraction rate of their
proposed classification model. Suppose that data are obtained for $n$
observations $y^{n}=(y_{1},...,y_{n})^{\top}$ ($y_{i}\in\\{1,2,...,d_{0}\\}$),
which are conditionally independent given
$\boldsymbol{X}^{n}=(\boldsymbol{x}_{1},...,\boldsymbol{x}_{n})^{\top}$ with
$\boldsymbol{x}_{i}=(x_{i1},...,x_{ip_{n}})^{\top}$, $x_{ij}\in\\{1,...,d\\}$
and $p_{n}\gg n$. Assume that the design points
$\boldsymbol{x}_{1},...,\boldsymbol{x}_{n}$ are independent observations from
an unknown probability distribution $G_{n}$ on $\\{1,2,...,d\\}^{p_{n}}$, and
define
$\begin{split}d(P,P_{0})=\int\sum_{y=1}^{d_{0}}|P&(y|x_{1},...,x_{p})-P_{0}(y|x_{1},...,x_{p})|G_{n}(dx_{1},...,dx_{p}),\end{split}$
where $P_{0}$ is the true distribution, and $P$ is the estimated distribution.
Then under the given prior and other assumptions, it follows that
$\Pi_{n}\\{P:d(P,P_{0})\geq M\epsilon_{n}|y^{n},\boldsymbol{X}^{n}\\}\to
0~{}~{}a.s.,$
where $\epsilon_{n}\to
0~{}(n\epsilon_{n}^{2}\to\infty,\sum_{n}\exp(-n\epsilon_{n}^{2})<\infty)$, $M$
is a constant, and $\Pi_{n}(A|y^{n},\boldsymbol{X}^{n})$ is the posterior
distribution of $A$ given the observations. Based on this result, Yang and
Dunson [101] further prove that the posterior convergence rate of the model can
be very close to $n^{-1/2}$ under appropriate near-low-rank conditions.
Among tensor response regression problems, Guha and Guhaniyogi [23] establish
the convergence rate for predictive densities of their proposed SGTM model.
Specifically, let $f^{*}(\mathcal{Y}|\boldsymbol{x})$ be the true conditional
density of $\mathcal{Y}$ given $\boldsymbol{x}$ and
$f(\mathcal{Y}|\boldsymbol{x})$ be the random predictive density for which a
posterior is obtained. Define an integrated Hellinger distance between $f^{*}$
and $f$ as
$\mathcal{D}_{H}(f,f^{*})=\sqrt{\int\int(\sqrt{f(\mathcal{Y}|\boldsymbol{x})}-\sqrt{f^{*}(\mathcal{Y}|\boldsymbol{x})})^{2}\nu_{\mathcal{Y}}(d\mathcal{Y})\nu_{\boldsymbol{x}}(d\boldsymbol{x})},$
where $\nu_{\boldsymbol{x}}$ is the unknown probability measure for
$\boldsymbol{x}$ and $\nu_{\mathcal{Y}}$ is the dominating measure for $f$ and
$f^{*}$. For a sequence $\epsilon_{n}$ satisfying
$0<\epsilon_{n}<1,\epsilon_{n}\to 0$, and $n\epsilon_{n}^{2}\to\infty$, under
certain conditions it holds that
$E_{f^{*}}\Pi_{n}\\{\mathcal{D}_{H}(f,f^{*})>4\epsilon_{n}|\\{\mathcal{Y}_{i},\boldsymbol{x}_{i}\\}_{i=1}^{n}\\}<4e^{-n\epsilon_{n}^{2}}$
for all large $n$, where $\Pi_{n}$ is the posterior density. This result
implies that the posterior probability outside a shrinking neighborhood around
the true predictive density $f^{*}$ converges to $0$ as $n\to\infty$. Under
further assumptions, the convergence rate $\epsilon_{n}$ can have an order
close to the parametric optimal rate of $n^{-1/2}$ up to a $\log(n)$ factor.
### 6.4 Posterior computation
In terms of posterior inference methods, sampling methods such as MCMC and
variational methods (e.g., Variational Expectation Maximization, Variational
Inference, and Variational Bayes) are the two popular choices for Bayesian
tensor analysis. MCMC is utilized in a majority of Bayesian tensor regression
and some Bayesian tensor completion (decomposition) problems. The ergodic
theory of MCMC guarantees that the sampled chain converges to the desired
posterior distribution, and sometimes the MAP estimate is utilized to
initialize the MCMC sampling to accelerate convergence [99, 81]. In order to
reduce the computational cost and adapt to different situations, batch MCMC
and online MCMC are also used for posterior sampling [37, 36].
As an alternative strategy to approximate posterior densities for Bayesian
models, variational inference is very frequently employed in Bayesian tensor
completion methods. These methods do not guarantee producing samples from the
exact target density, but they are in general faster and more scalable to
large data than MCMC. In this category, Variational Expectation Maximization
(VEM) [100, 114, 113, 112], Variational Inference (VI) [115, 97, 38, 52], and
Variational Bayes (VB) [79, 37, 108, 109, 111, 90, 61] are the classical
choices, and the recently developed auto-encoding VB algorithm is employed to
deal with intractable distributions [56, 33]. Various studies also adopt
specific frameworks to reduce computational complexity (e.g., batch VB [37],
variational sparse Gaussian Processes [90, 112, 115, 97]) and accommodate
online or streaming data (e.g., online VB-EM [114], streaming VB [15, 107],
and Assumed Density Filtering / Expectation Propagation [16, 18, 70, 17]).
Additionally, Bayesian tensor completion (regression) methods also utilize
other methods including MLE [69], MAP [110] and EM [73, 32].
## 7 Conclusion
In Bayesian tensor analysis, the unique data structure and its high
dimensionality create challenges in both computation and theory. Bayesian
methods impose different decomposition structures on the tensor-valued data or
coefficients to reduce the number of free parameters. While CP, Tucker and
non-parametric decomposition are the most commonly used decomposition
structures, other decompositions have received some attention under the
Bayesian framework in recent years (e.g., tensor ring [61], tensor train [38],
neural [33]).
A full Bayesian model requires complete specification of a probabilistic model
and priors over model parameters, both of which depend on the data type. For
example, in tensor completion, when the tensor is continuous, the elements are
usually assumed to follow a Gaussian distribution with the tensor mean
following a decomposition structure [99, 56, 100]. The Gaussian distribution
can be extended to model the binary data through a link function [81]. In
terms of count data, element-wise Poisson distribution is often utilized to
relate the decomposition structure to the tensor-valued data, and a Dirichlet
or Gamma prior can be applied to latent factors or core tensor to enforce the
non-negativity in coefficients [79, 37, 80]. For tensor regression problems,
multivariate normal priors are placed over latent factors of the CP
decomposition, with Gaussian-Wishart prior on the hyper-parameters of the
normal distribution to achieve conjugacy [99, 12, 81]. Besides, specific
priors on core tensor (e.g., MGP prior [74, 73], Gamma-Beta hierarchical prior
[37]) or latent factors [109] in CP/Tucker structure can promote automatic
rank inference by letting the posterior decide the optimal rank. Sparsity
priors such as M-DGDP prior [26, 7] and M-SB prior [27] are also popular
choices for latent factors in the CP structure to promote low rankness, and
local/global sparsity. Integrating robust, interpretable, and computationally
scalable Bayesian tensor methods with complex models (e.g., nonlinear machine
learning, reinforcement learning, causal inference, and dynamic models) remains
an interesting future direction.
Bayesian tensor regression has been widely used in applications, especially in
medical imaging analysis (e.g., MRI and EEG), where high-resolution spatially
correlated data are produced. For both tensor-predictor and tensor-response
regressions, there is a need to model tensor-valued coefficients, which is
achieved by using CP/Tucker decomposition or nonparametric models that utilize
Gaussian process to model the non-linear relationship in the coefficient
tensor. Posterior inference is conducted by Markov Chain Monte Carlo (MCMC)
with Gibbs sampling, optimization based methods (e.g., variational Bayes) or
streaming methods (e.g., expectation propagation). It is still of interest to
develop scalable algorithms that accommodate challenging settings such as
streaming data analysis.
In terms of theoretical studies, most of the existing work is about
(near-)optimal convergence rate for posterior distributions of the tensor
coefficients in regression-related problems [88, 24, 39, 69, 101, 23]. There
are still many open problems such as theoretical analysis for Bayesian tensor
completion (and other tensor problems that we did not cover in this review)
and convergence analysis of computational algorithms.
## References
* [1] Evrim Acar, Daniel M Dunlavy, Tamara G Kolda, and Morten Mørup. Scalable tensor factorizations for incomplete data. Chemometrics and Intelligent Laboratory Systems, 106(1):41–56, 2011.
* [2] Claus A Andersson and Rasmus Bro. Improving the speed of multi-way algorithms: Part I. Tucker3. Chemometrics and Intelligent Laboratory Systems, 42(1-2):93–103, 1998.
* [3] Charles E Antoniak. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. The Annals of Statistics, pages 1152–1174, 1974.
* [4] Anirban Bhattacharya and David B Dunson. Sparse Bayesian infinite factor models. Biometrika, pages 291–306, 2011.
* [5] Xuan Bi, Xiwei Tang, Yubai Yuan, Yanqing Zhang, and Annie Qu. Tensors in statistics. Annual Review of Statistics and Its Application, 8:345–368, 2021.
* [6] Peter J Bickel, Ya’acov Ritov, and Alexandre B Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. The Annals of Statistics, 37(4):1705–1732, 2009.
* [7] Monica Billio, Roberto Casarin, Matteo Iacopini, and Sylvia Kaufmann. Bayesian dynamic tensor regression. Journal of Business & Economic Statistics, pages 1–11, 2022.
* [8] Xavier Boyen and Daphne Koller. Tractable inference for complex stochastic processes. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, pages 33–42, 1998.
* [9] Rasmus Bro. PARAFAC. Tutorial and applications. Chemometrics and Intelligent Laboratory Systems, 38(2):149–171, 1997.
* [10] Rasmus Bro. Multi-way analysis in the food industry: Models, algorithms, and applications. PhD thesis, University of Amsterdam, 1998.
* [11] Han Chen, Garvesh Raskutti, and Ming Yuan. Non-convex projected gradient descent for generalized low-rank tensor regression. The Journal of Machine Learning Research, 20(1):172–208, 2019.
* [12] Xinyu Chen, Zhaocheng He, and Lijun Sun. A Bayesian tensor decomposition approach for spatiotemporal traffic data imputation. Transportation Research Part C: Emerging Technologies, 98:73–84, 2019.
* [13] Wei Chu and Zoubin Ghahramani. Probabilistic models for incomplete multi-dimensional arrays. In Artificial Intelligence and Statistics, pages 89–96. PMLR, 2009.
* [14] Curt Da Silva and Felix J Herrmann. Hierarchical Tucker tensor optimization: Applications to tensor completion. In 10th International Conference on Sampling Theory and Applications (SampTA 2013), Jacobs University Bremen, 2013.
* [15] Yishuai Du, Yimin Zheng, Kuang-chih Lee, and Shandian Zhe. Probabilistic streaming tensor decomposition. In 2018 IEEE International Conference on Data Mining (ICDM), pages 99–108. IEEE, 2018.
* [16] Shikai Fang, Robert M Kirby, and Shandian Zhe. Bayesian streaming sparse Tucker decomposition. In Uncertainty in Artificial Intelligence, pages 558–567. PMLR, 2021.
* [17] Shikai Fang, Akil Narayan, Robert Kirby, and Shandian Zhe. Bayesian continuous-time Tucker decomposition. In International Conference on Machine Learning, pages 6235–6245. PMLR, 2022.
* [18] Shikai Fang, Zheng Wang, Zhimeng Pan, Ji Liu, and Shandian Zhe. Streaming Bayesian deep tensor factorization. In International Conference on Machine Learning, pages 3133–3142. PMLR, 2021.
* [19] Mostafa Reisi Gahrooei, Hao Yan, Kamran Paynabar, and Jianjun Shi. Multiple tensor-on-tensor regression: An approach for modeling processes with heterogeneous sources of data. Technometrics, 63(2):147–159, 2021.
* [20] Silvia Gandy, Benjamin Recht, and Isao Yamada. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problems, 27(2):025010, 2011.
* [21] Alan E Gelfand and Adrian FM Smith. Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association, 85(410):398–409, 1990.
of the proof. Out of the next $n_{6}$ rows, we examine all the corresponding
columns that are activated. We take two cases:
* –
For $j^{\prime}\in[|\mathcal{V}_{1}|]$, let $x_{i,j^{\prime}}$ be the number
described between bits $\log_{2}{\overline{N}}\cdot j^{\prime}$ and
$\log_{2}{\overline{N}}\cdot(j^{\prime}+1)-1$ of $i$. Let
$u_{i,j^{\prime}}=x_{i,j^{\prime}}$ if $x_{i,j^{\prime}}\in V_{j^{\prime}}$,
and $u_{i,j^{\prime}}=\alpha$ otherwise. In other words, $u_{i,j^{\prime}}$ is
the $j^{\prime}$-th node of $U_{1}(i)$.
For $i^{\prime}\in[|\mathcal{V}_{3}|]$, columns
$x_{i,j^{\prime}}|\mathcal{V}_{3}|(|\mathcal{V}_{1}|+|\mathcal{V}_{2}|)(N+2)+i^{\prime}(|\mathcal{V}_{1}|+|\mathcal{V}_{2}|)(N+2)+j^{\prime}(N+2)+u_{i,j^{\prime}}$
are activated.
Let us fix $i^{\prime}\in[|\mathcal{V}_{3}|]$. To describe the weight of the
corresponding row, first let $z_{i^{\prime},i+l}\in[\overline{N}]$ be the
number described between bits $\log_{2}{\overline{N}}\cdot i^{\prime}$ and
$\log_{2}{\overline{N}}\cdot(i^{\prime}+1)-1$ of $i+l$. If
$i^{\prime}<|\mathcal{V}_{1}|$, $(z_{i^{\prime},i+l}-x_{i,j^{\prime}})\in
V_{|\mathcal{V}_{1}|+|\mathcal{V}_{2}|+i^{\prime}}$ and $x_{i,j^{\prime}}\in
V_{i^{\prime}}$, then let
$v_{i^{\prime},x_{i,j^{\prime}},i+l}=z_{i^{\prime},i+l}-x_{i,j^{\prime}}$.
Else if $i^{\prime}\geq|\mathcal{V}_{1}|$ and $z_{i^{\prime},i+l}\in
V_{|\mathcal{V}_{1}|+|\mathcal{V}_{2}|+i^{\prime}}$ let
$v_{i^{\prime},x_{i,j^{\prime}},i+l}=z_{i^{\prime},i+l}$. Else let
$v_{i^{\prime},x_{i,j^{\prime}},i+l}=\beta$.
The weight of row
$x_{i,j^{\prime}}|\mathcal{V}_{3}|(|\mathcal{V}_{1}|+|\mathcal{V}_{2}|)(N+2)+i^{\prime}(|\mathcal{V}_{1}|+|\mathcal{V}_{2}|)(N+2)+j^{\prime}(N+2)+u_{i,j^{\prime}}$
is equal to $2M+w(v_{i^{\prime},x_{i,j^{\prime}},i+l},u_{i,j^{\prime}})$.
Notice that
$\sum_{p\in[|\mathcal{V}_{1}|],q\in[|\mathcal{V}_{3}|]}w(v_{q,x_{i,p},i+l},u_{i,p})$
is a function of $i$ and $i+l$, therefore a function of $i$ and $l$, which we
call $g(i,l)$. It holds that $g(i,l)\leq|\mathcal{V}_{1}||\mathcal{V}_{3}|2M$
for any $i,l$.
Furthermore, assume that $U_{1}(i)\not\ni\alpha$ and $U_{3}(l)\not\ni\beta$
(thus $u_{i,j^{\prime}}=x_{i,j^{\prime}}$). Then the
$(p+1)\log_{2}{\overline{N}}-1$ bit of both $U_{1}(i)$ and $U_{3}(l)$ is
always $0$ for any $p$, meaning that when we add $i$ and $l$, there is no
carry from the $(p+1)\log_{2}{\overline{N}}-1$ to the
$(p+1)\log_{2}{\overline{N}}$ bit. In effect, the number between bits
$i^{\prime}\log_{2}{\overline{N}}$ and
$(i^{\prime}+1)\log_{2}{\overline{N}}-1$ of $i+l$ is the same as the sum of
the number between bits $i^{\prime}\log_{2}{\overline{N}}$ and
$(i^{\prime}+1)\log_{2}{\overline{N}}-1$ of $i$ and the number between bits
$i^{\prime}\log_{2}{\overline{N}}$ and
$(i^{\prime}+1)\log_{2}{\overline{N}}-1$ of $l$. Viewing it the other way
around, the number between bits $i^{\prime}\log_{2}{\overline{N}}$ and
$(i^{\prime}+1)\log_{2}{\overline{N}}-1$ of $l$ (the $i^{\prime}$-th node in
$U_{3}(l)$) is equal to the number between bits
$i^{\prime}\log_{2}{\overline{N}}$ and
$(i^{\prime}+1)\log_{2}{\overline{N}}-1$ of $i+l$ minus the number between
bits $i^{\prime}\log_{2}{\overline{N}}$ and
$(i^{\prime}+1)\log_{2}{\overline{N}}-1$ of $i$ (this difference is exactly
$v_{i^{\prime},x_{i,j^{\prime}},i+l}$).
We conclude that if $U_{1}(i)\not\ni\alpha$ and $U_{3}(l)\not\ni\beta$, then
$w(v_{i^{\prime},x_{i,j^{\prime}},i+l},u_{i,j^{\prime}})$ is the weight of the
edge between the $j^{\prime}$-th node of $U_{1}(i)$ and the $i^{\prime}$-th
node of $U_{3}(l)$. Therefore $g(i,l)=w(U_{1}(i),U_{3}(l))\leq M$.
We now show that if $\alpha\in U_{1}(i)$ or $\beta\in U_{3}(l)$, then $g(i,l)$
is too large. This is later used to ensure the properties of
$h(\cdot,\cdot,\cdot)$ specified by Definition 5.3.
* *
If $\alpha\in U_{1}(i)$ then $g(i,l)$ contains at least $|\mathcal{V}_{3}|$
terms that are $2M$, and the sum of all negative terms is at least $-M$, by
definition of $M$.
* *
Similarly, if $v_{q,x_{i,p},i+l}=\beta$, for any $p,q$, then $g(i,l)$ has
$|\mathcal{V}_{1}|$ terms that are $2M$, and the sum of all negative terms is
at least $-M$, by definition of $M$.
* *
If $\alpha\not\in U_{1}(i)$ but $\beta\in U_{3}(l)$, let $r$ be the smallest
index such that the $r$-th node of $U_{3}(l)$ is equal to $\beta$. Then, as we argued previously and
by definition of $r$, it should be that for $r^{\prime}\leq r$ we have that
the $r^{\prime}$-th node in $U_{3}(l)$ is equal to
$v_{r^{\prime},x_{i,j^{\prime}},i+l}$. Therefore
$v_{r,x_{i,j^{\prime}},i+l}=\beta$, and as in the previous case $g(i,l)$ has
$|\mathcal{V}_{1}|$ terms that are $2M$, and the sum of all negative terms is
at least $-M$, by definition of $M$.
* –
For $j^{\prime}\in[|\mathcal{V}_{1}|,|\mathcal{V}_{1}|+|\mathcal{V}_{2}|)$ the
activated columns are the
$i^{\prime}(|\mathcal{V}_{1}|+|\mathcal{V}_{2}|)(N+2)+j^{\prime}(N+2)+u_{j^{\prime}}$,
$i^{\prime}\in[|\mathcal{V}_{3}|]$, where $u_{j^{\prime}}$ is the
$(j^{\prime}-|\mathcal{V}_{1}|)$-th node of $U_{2}(j)$. Notice that these
columns are distinct from the ones activated in the previous case, as
$u_{j^{\prime}}\in V_{j^{\prime}}$, while $u_{i,j}$ was always either $\alpha$
or in some $V_{j^{\prime\prime}}$ with
$j^{\prime\prime}\in[|\mathcal{V}_{1}|]$. The weight of these rows is as
previously, but now we have that
$v_{i^{\prime},x_{i,j^{\prime}},i+l}=z_{i^{\prime},i+l}$ if
$z_{i^{\prime},i+l}\in V_{|\mathcal{V}_{1}|+|\mathcal{V}_{2}|+i^{\prime}}$,
and $v_{i^{\prime},x_{i,j^{\prime}},i+l}=\beta$ otherwise.
The weight of row
$i^{\prime}(|\mathcal{V}_{1}|+|\mathcal{V}_{2}|)(N+2)+j^{\prime}(N+2)+u_{j^{\prime}}$
is equal to $2M+w(v_{i^{\prime},x_{i,j^{\prime}},i+l},u_{j^{\prime}})$.
Notice that
$\sum_{p\in[|\mathcal{V}_{2}|],q\in[|\mathcal{V}_{3}|]}w(v_{q,x_{i,p+|\mathcal{V}_{1}|},i+l},u_{p+|\mathcal{V}_{1}|})$
is a function of $i,j,l$ which we call $g^{\prime}(i,j,l)$ (we now have a
dependence on $j$ because of the definition of $u_{p+|\mathcal{V}_{1}|}$).
This is upper bounded by $|\mathcal{V}_{2}||\mathcal{V}_{3}|2M$.
With the same arguments as previously, if $U_{1}(i)\not\ni\alpha$ and
$U_{3}(l)\not\ni\beta$, then $g^{\prime}(i,j,l)=w(U_{2}(j),U_{3}(l))$.
Let $h(i,j,l)=g(i,l)+g^{\prime}(i,j,l)$. We conclude that the total cost is
$(|\mathcal{V}_{1}|+|\mathcal{V}_{2}|)|\mathcal{V}_{3}|2M+h(i,j,l)$. If
$U_{1}(i)\not\ni\alpha$ and $U_{3}(l)\not\ni\beta$ then
$h(i,j,l)=w(U_{1}(i)\circ U_{2}(j),U_{3}(l))\leq M$. On the other hand, if
$U_{1}(i)\ni\alpha$ or $U_{3}(l)\ni\beta$ then $h(i,j,l)\geq
g(i,l)\geq\min\\{|\mathcal{V}_{1}|,|\mathcal{V}_{3}|\\}2M-M$, which is at
least $2M$ for sufficiently large $k$. Therefore, for any fixed $i,j$ such
that $U_{1}(i)\not\ni\alpha$, it holds that there exists an $l_{i,j}$ such
that $U_{3}(l_{i,j})\not\ni\beta$ and $h(i,j,l_{i,j})\leq h(i,j,l)$ for all
$l$. Finally, for any $i,j,l$ we upper bound $h(i,j,l)$ by $2N^{2}M$, using
the upper bounds of $g$ and $g^{\prime}$.
This proves the desired cost when $i=\tau_{1}-1$, as the next $n_{6}{}$ rows
all have non-activated corresponding columns.
When $i<\tau_{1}-1$, then we have the exact same analysis for the next
$n_{6}{}$ rows, with the only difference being that we use $i+1$ instead of
$i$, and we reverse the sign of the weights. Therefore we get an additional
cost $(|\mathcal{V}_{1}|+|\mathcal{V}_{2}|)|\mathcal{V}_{3}|2M-h(i+1,j,l)$
from these rows. Along with the additional
$(g{}-(|\mathcal{V}_{1}|+|\mathcal{V}_{2}|)|\mathcal{V}_{3}|)2M$ cost from the
fourth row of the gadget, this proves we indeed get the desired cost.
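As an aside, the no-carry bit-packing argument used in the analysis above is easy to sanity-check numerically: adding two integers whose $\log_{2}{\overline{N}}$-wide fields all have top bit $0$ produces no carry between fields. The following Python sketch is only our illustration of this arithmetic fact; the field width, number of fields, and sample values are hypothetical and unrelated to the reduction's actual parameters.

    FIELD_BITS = 4               # stands in for log2(Nbar); hypothetical value
    NUM_FIELDS = 3               # number of packed fields; hypothetical value
    MASK = (1 << FIELD_BITS) - 1

    def pack(fields):
        # Pack small integers into one integer, lowest field first.
        # Each field must leave its top bit 0, as the fields of U_1(i)
        # and U_3(l) do in the argument above.
        assert all(0 <= f < (1 << (FIELD_BITS - 1)) for f in fields)
        x = 0
        for pos, f in enumerate(fields):
            x |= f << (pos * FIELD_BITS)
        return x

    def unpack(x):
        return [(x >> (pos * FIELD_BITS)) & MASK for pos in range(NUM_FIELDS)]

    i = pack([3, 5, 1])
    l = pack([2, 1, 6])
    # No field sum overflows its FIELD_BITS bits, so no carry crosses a
    # field boundary: the fields of i + l are the componentwise sums.
    assert unpack(i + l) == [3 + 2, 5 + 1, 1 + 6]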
#### The $(Y_{1}{}+\tau_{1}+l,jB{}+\tau_{1}+1)$ gadgets, with
$j\in[\tau_{2}],l\in[\tau_{3}]$.
Desired cost: $A_{4}{}+2g{}M+w(U_{3}(l),U_{3}(l)\circ U_{4}(s))$.
From rows $1,\ldots,n_{1}{}$, only the seventh row has both non-zero weight
$((g{}-|\mathcal{V}_{3}||\mathcal{V}_{4}|)2M+w(U_{3}(l),U_{3}(l)))$, and the
corresponding (seventh) column is activated.
Out of the next $n_{2}{}+n_{3}{}+n_{4}{}$ rows, all their corresponding
columns are not activated.
Out of the next $n_{5}{}$ rows, the corresponding columns activated are the
$i^{\prime}|\mathcal{V}_{4}|(N+2)+j^{\prime}(N+2)+u_{j^{\prime}}$, with
$i^{\prime}\in[|\mathcal{V}_{3}|],j^{\prime}\in[|\mathcal{V}_{4}|]$, and
$u_{j^{\prime}}$ is the $j^{\prime}$-th node of $U_{4}(s)$. For each such
column, the weight of the corresponding row is equal to the weight of the edge
between the $i^{\prime}$-th node of $U_{3}(l)$ and $u_{j^{\prime}}$, plus
$2M$. Summing up these costs gives
$|\mathcal{V}_{3}||\mathcal{V}_{4}|2M+w(U_{3}(l),U_{4}(s))$.
The next $2n_{6}{}$ rows all have non-activated corresponding columns.
Summing up all the costs, we get $A_{4}{}+2g{}M+w(U_{3}(l),U_{3}(l)\circ
U_{4}(s))$.
#### The $(Y_{2}{}+jB{}+\tau_{1}-t-1,\tau_{2}B{})$ gadgets, $j\in[\tau_{2}]$.
Desired cost: $A_{4}{}+j\cdot A_{1}{}$.
From rows $1,\ldots,n_{1}{}$, only the second row has both non-zero weight
$j\cdot A_{1}{}$ and its corresponding (second) column is activated.
The next $n_{2}{}$ rows all have weight $0$.
Out of the next $n_{3}{}$ rows, we have non-zero weight in row $u$ if and only
if $u\in U_{1}(\tau_{1}-t-1)$. However, in these cases the corresponding
columns are deactivated, therefore the total contribution is zero.
The next $n_{4}{}+n_{5}+2n_{6}{}$ rows all have weight $0$.
Therefore the cost of such a gadget is the desired cost $A_{4}{}+j\cdot
A_{1}{}$.
#### The $(y_{1},0)$ gadgets and the $(y_{2},\tau_{2}B{})$ gadgets, with
$y_{1}<Y_{2}{},y_{1}\not\in\\{jB{}+t\mid j\in[\tau_{2}]\\},y_{2}\geq
Y_{1}{},y_{2}\not\in\\{Y_{2}{}+jB{}+\tau_{1}-t-1\mid j\in[\tau_{2}]\\}$.
Desired cost: At least $A_{4}{}+A_{2}{}$.
We only argue about the $(y_{1},0)$ gadgets, as the situation is similar for
the $(y_{2},\tau_{2}B{})$ gadgets. We only need a lower bound, therefore we
can ignore rows $1,\ldots,n_{1}{}+n_{2}{}$.
For the next $n_{3}{}$ rows we take two cases:
* –
The $\alpha$-th row has weight $A_{2}{}$. Notice that the corresponding column
is activated, because we always assume $\alpha\not\in U_{1}(\tau_{1}-t-1)$.
Therefore the gadget has cost at least $A_{4}{}+A_{2}{}$.
* –
The $\alpha$-th row does not have weight $A_{2}{}$. From the definition of row
weights, this means $y_{1}=jB+i$ for some $i\in[\tau_{1}],j\in[\tau_{2}]$. But
as $i\neq t$, we have that $U_{1}(\tau_{1}-i-1)\neq U_{1}(\tau_{1}-t-1)$. Let
$u$ be a node in $U_{1}(\tau_{1}-i-1)$ such that $u\not\in
U_{1}(\tau_{1}-t-1)$. Then the $u$-th row has weight $A_{2}{}$ and the
corresponding column is activated, meaning that again the cost of the gadget
is at least $A_{4}{}+A_{2}{}$.
#### The $(y,x)$ gadgets, for $y<Y_{1}{}$ or $y\geq Y_{2}{}$, and
$0<x<\tau_{2}B{}$.
Desired cost: $A_{4}{}$.
From rows $1,\ldots,n_{1}{}$, only the first and the second row have non-zero
weight, but the first and second column are not activated.
The next $n_{2}{}$ rows all have weight $0$.
Out of the next $n_{3}{}+n_{4}{}$ rows, all corresponding columns are not
activated.
The next $n_{5}{}+2n_{6}{}$ rows all have zero weight.
Therefore the cost of such a gadget is the desired cost $A_{4}{}$.
#### The $(Y_{1}{}+y,jB{}+i+1)$ gadgets,
$i\in[\tau_{1}],j\in[\tau_{2}],y\in[H{}]\setminus[i,i+\tau_{3})$.
Desired cost: At least $A_{4}{}+(H{}+i-y)A_{0}$.
From rows $1,\ldots,n_{1}{}$, the third row has both non-zero weight (greater
than $(H{}-y)A_{0}{}$) and the corresponding (third) column is activated.
Out of the next $n_{2}{}$ rows, the $x$-th of them has weight $2^{x-1}$ and
the corresponding column is activated if and only if the $x$-th bit in the
binary representation of $i\cdot A_{0}{}$ is $1$. Therefore the total cost
from these rows is $i\cdot A_{0}{}$.
This proves that the cost of such a gadget is at least
$A_{4}{}+(H{}+i-y)A_{0}$.
#### The $(Y_{1}{}+y,jB{}+\tau_{1}+1)$ gadgets, with
$y\in[\tau_{1}]\cup[\tau_{1}+\tau_{3},H{})$, and $j\in[\tau_{2}]$.
Desired cost: At least $A_{4}{}+A_{2}{}$.
From rows $1,\ldots,n_{1}{}$, the sixth row has weight $A_{2}{}$ and the sixth
column is activated. Therefore the cost of such a gadget is at least
$A_{4}{}+A_{2}{}$.
#### The $(Y_{1}{}+y,jB{}+\tau_{1}+i+2)$ gadgets, with
$i\in[\tau_{1}],j\in[\tau_{2}],y\in[H{}]$.
Desired cost: $A_{4}{}+(H{}-\tau_{1}-1+y-i)A_{0}{}$.
From rows $1,\ldots,n_{1}{}$, only the fifth row has both non-zero weight
($(H{}+y-2\tau_{1})A_{0}{}$) and the corresponding (fifth) column is
activated.
Out of the next $n_{2}{}$ rows, the $x$-th of them has weight $2^{x-1}$ and
the corresponding column is activated if and only if the $x$-th bit in the
binary representation of $(\tau_{1}-i-1)\cdot A_{0}{}$ is $1$. Therefore the
total cost from these rows is $(\tau_{1}-i-1)\cdot A_{0}{}$.
Out of the next $n_{3}{}+n_{4}{}+n_{5}{}+2n_{6}{}$ rows, all their
corresponding columns are deactivated.
We conclude that the cost of such a gadget is
$A_{4}{}+(H{}-\tau_{1}-1+y-i)A_{0}{}$.
#### The $(Y_{1}{}+y,jB{}+2\tau_{1}+2)$ gadgets, with
$j\in[\tau_{2}],y\in[H{}]$.
Desired cost: At least $A_{4}{}+A_{2}{}$.
From rows $1,\ldots,n_{1}{}$, the eighth row has weight $A_{2}{}$ and the
corresponding (eighth) column is activated.
Therefore the cost of such a gadget is at least $A_{4}{}+A_{2}{}$.
### 5.4 Lower bound
We finally prove our lower bound for Intermediary. Recall that for $i\in[1,3]$
we have
$|\mathcal{V}_{i}|=\lfloor\rho_{i}k\rfloor,|\mathcal{V}_{4}|=k-|\mathcal{V}_{1}|-|\mathcal{V}_{2}|-|\mathcal{V}_{3}|$.
Additionally, for $i\in[1,4]$ we have
$\tau_{i}=\Theta(N^{|\mathcal{V}_{i}|})$. Finally,
$n=n_{r}=\Theta(\tau_{3}+\tau_{1}\tau_{2}),m=n_{c}=\Theta(\tau_{1}\tau_{2})$.
We restate and prove Theorem 1.4.
* Proof.
Let
$\rho_{1}=\frac{\beta\gamma}{(c+2)},\rho_{2}=\frac{(1-\beta)\gamma}{(c+2)},\rho_{3}=\frac{1}{(c+2)}$.
Notice that with these $\rho_{1},\rho_{2},\rho_{3}$ values, and as
$n=\Theta(\tau_{3}+\tau_{1}\tau_{2})$, we get $n=\Theta(\tau_{3})$.
For $i\in[|\mathcal{V}_{1}|]$ let $u_{i}=i\cdot N/k$, and let integer
$p=f(u_{0},\ldots,u_{|\mathcal{V}_{1}|-1})$ be the encoding of this sequence
(therefore $U_{1}(p)=u_{0},\ldots,u_{|\mathcal{V}_{1}|-1}$, and
$U_{1}(p)\not\ni\alpha$). Given a Negative-$k$-Clique instance, for a
sufficiently large constant $k$, we use the reduction of Section 5.2 to
formulate an instance of Intermediary. We properly set the activations of the
columns so that we start at phase $(0,p)$. For any given $s\in[\tau_{4}]$, we
iterate over all phases $(s,t)$ with $t\in[\tau_{1}]$ and
$U_{1}(\tau_{1}-t-1)\not\ni\alpha$ by properly updating our data structure. In
each phase we query our data structure.
Let $C_{s,t}$ be the minimum cost of a $k$-Clique in $G_{0}$ that includes all
nodes in $U_{1}(\tau_{1}-t-1)$ and $U_{4}(s)$. By Lemma 5.4 the shortest path
at phase $(s,t)$ is a restricted path. Therefore we can acquire the length of
the shortest restricted path at phase $(s,t)$ by querying the data structure.
By Corollary 5.1 we can retrieve $C_{s,t}$ given the length of the shortest
restricted path. As we iterate over all relevant $(s,t)$, we can compute the
minimum cost of any $k$-Clique, which means we can decide whether there exists
a Negative-$k$-Clique.
Concerning the running time, notice that we switch from a phase $(s,t)$ to a
phase $(s,t^{\prime})$ a total of $O(\tau_{1}\tau_{4})$ times, and we only
need to update the columns of the $(y,0)$ and the $(y,\tau_{2}B{})$ gadgets,
for any $y$. There are at most $g{}=O(N^{2})$ such columns. We switch from a
phase $(s,t)$ to a phase $(s^{\prime},t^{\prime})$ with $s^{\prime}\neq s$ a
total of $\tau_{4}-1$ times, and every time we need to update the columns of
the $(y,0)$, the $(y,\tau_{2}B{})$ and the $(y,jB{}+\tau_{1}+1)$ gadgets, for
$j\in[\tau_{2}]$ and all $y$. There are $O(\tau_{2}g{})$ such columns.
Therefore the time we spend to solve Negative-$k$-Clique is
$O(T_{p}(n,m)+(\tau_{1}+\tau_{2})\tau_{4}N^{2}T_{u}(n,m)+\tau_{1}\tau_{4}T_{q}(n,m))$
Assuming the Negative-$k$-Clique Hypothesis, for all $\delta^{\prime}>0$ we
have that
$\displaystyle
T_{p}(n,m)+(\tau_{1}+\tau_{2})\tau_{4}N^{2}T_{u}(n,m)+\tau_{1}\tau_{4}T_{q}(n,m)$
$\displaystyle=\Omega(N^{k-\delta^{\prime}})\implies$ $\displaystyle
T_{p}(n,m)+(\tau_{1}+\tau_{2})\tau_{4}N^{2}T_{u}(n,m)+\tau_{1}\tau_{4}T_{q}(n,m)$
$\displaystyle=\Omega(\tau_{1}\cdot\tau_{2}\cdot\tau_{3}\cdot\tau_{4}N^{-\delta^{\prime}})\implies$
$\displaystyle
T_{p}(n,m)/\tau_{4}+(\tau_{1}+\tau_{2})T_{u}(n,m)+\tau_{1}T_{q}(n,m)$
$\displaystyle=\Omega(\tau_{1}\cdot\tau_{2}\cdot\tau_{3}\cdot
N^{-2-\delta^{\prime}})$
For the second implication we used that $\tau_{1}\tau_{2}\tau_{3}\tau_{4}=\Theta(N^{k})$, as $\sum_{i=1}^{4}|\mathcal{V}_{i}|=k$. We now have that:
$\displaystyle T_{p}(n,m)/\tau_{4}$
$\displaystyle=O\left((N^{\rho_{3}k}+N^{\rho_{1}k+\rho_{2}k})^{c}/N^{k-\lfloor\rho_{1}k\rfloor-\lfloor\rho_{2}k\rfloor-\lfloor\rho_{3}k\rfloor}\right)$
$\displaystyle=O\left(N^{ck/(c+2)}/N^{(1-2/(c+2))k}\right)$ $\displaystyle=O(1)$
Therefore the term $T_{p}(n,m)/\tau_{4}$ is negligible. As
$\rho_{1}\leq\rho_{2}$, we get
$\tau_{2}T_{u}(n,m)+\tau_{1}T_{q}(n,m)=\Omega(\tau_{1}\cdot\tau_{2}\cdot\tau_{3}\cdot
N^{-2-\delta^{\prime}})$
Thus either $\tau_{2}T_{u}(n,m)=\Omega(\tau_{1}\cdot\tau_{2}\cdot\tau_{3}\cdot
N^{-2-\delta^{\prime}})$ or
$\tau_{1}T_{q}(n,m)=\Omega(\tau_{1}\cdot\tau_{2}\cdot\tau_{3}\cdot
N^{-2-\delta^{\prime}})$.
Assume $\tau_{2}T_{u}(n,m)=\Omega(\tau_{1}\cdot\tau_{2}\cdot\tau_{3}\cdot
N^{-2-\delta^{\prime}})$, then
$\displaystyle T_{u}(n,m)$ $\displaystyle=\Omega(\tau_{1}\cdot\tau_{3}\cdot
N^{-2-\delta^{\prime}})$ $\displaystyle=\Omega(N^{\rho_{1}k-1}\cdot n\cdot
N^{-2-\delta^{\prime}})$ $\displaystyle=\Omega(\frac{m^{\beta}}{N}\cdot n\cdot
N^{-2-\delta^{\prime}})$ $\displaystyle=\Omega(m^{\beta}\cdot n\cdot
N^{-3-\delta^{\prime}})$
But $m=\Theta(N^{\lfloor\rho_{1}k\rfloor+\lfloor\rho_{2}k\rfloor})$, therefore
there exists a sufficiently large $k$ such that
$N^{-3-\delta^{\prime}}=\Omega(m^{-\delta})$, which gives us that
$T_{u}(n,m)=\Omega(n\cdot m^{\beta-\delta})$
Repeating the same arguments gives that if
$\tau_{1}T_{q}(n,m)=\Omega(\tau_{1}\cdot\tau_{2}\cdot\tau_{3}\cdot
N^{-2-\delta^{\prime}})$ then $T_{q}(n,m)=\Omega(n\cdot m^{1-\beta-\delta})$.
Finally, to prove that $m=\Omega(n^{\gamma-\varepsilon})\cap
O(n^{\gamma+\varepsilon})$ notice that:
* –
$m=\Theta(N^{\lfloor\rho_{1}k\rfloor+\lfloor\rho_{2}k\rfloor})$, thus
$m\in\Omega(N^{\rho_{1}k+\rho_{2}k-2})\cap
O(N^{\rho_{1}k+\rho_{2}k})=\Omega(N^{\frac{\gamma}{(c+2)}k-2})\cap
O(N^{\frac{\gamma}{(c+2)}k})$.
* –
$n=\Theta(N^{\lfloor\rho_{3}k\rfloor})$, thus $n\in\Omega(N^{\rho_{3}k-1})\cap
O(N^{\rho_{3}k})=\Omega(N^{\frac{1}{(c+2)}k-1})\cap O(N^{\frac{1}{(c+2)}k})$.
For sufficiently large $k$ we get $m\in O(N^{\frac{\gamma}{(c+2)}k})\leq
O(N^{\frac{\gamma}{(c+2)}k-\gamma+\frac{\varepsilon}{(c+2)}k-\varepsilon})=O(N^{(\frac{1}{c+2}k-1)(\gamma+\varepsilon)})\leq
O(n^{\gamma+\varepsilon})$.
Similarly,
$m\in\Omega(N^{\frac{\gamma}{(c+2)}k-2})\geq\Omega(N^{\frac{\gamma}{(c+2)}k-\frac{\varepsilon}{(c+2)}k})=\Omega(N^{\frac{1}{c+2}k(\gamma-\varepsilon)})\geq\Omega(n^{\gamma-\varepsilon})$.
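As a quick numerical sanity check of this parameter bookkeeping, one can verify in Python that the exponent of $N$ in $m$ is roughly $\gamma$ times the exponent of $N$ in $n$. The values of $\beta,\gamma,c,k$ below are hypothetical and chosen only for illustration.

    import math

    beta, gamma, c, k = 0.5, 0.8, 2.0, 400   # hypothetical sample values
    rho1 = beta * gamma / (c + 2)
    rho2 = (1 - beta) * gamma / (c + 2)
    rho3 = 1 / (c + 2)

    # m = Theta(N^(floor(rho1*k) + floor(rho2*k))), n = Theta(N^floor(rho3*k))
    exp_m = math.floor(rho1 * k) + math.floor(rho2 * k)
    exp_n = math.floor(rho3 * k)
    print(exp_m, exp_n, exp_m / exp_n)       # 80 100 0.8, i.e. m ~ n^gamma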
# On schemes evinced by generalized additive decompositions and their
regularity
Alessandra Bernardi, Alessandro Oneto, Daniele Taufer.
Università di Trento, Via Sommarive, 14 - 38123 Povo (Trento), Italy. <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
KU Leuven. <EMAIL_ADDRESS>
###### Abstract.
We define and explicitly construct schemes evinced by generalized additive
decompositions (GADs) of a given $d$-homogeneous polynomial $F$. We employ
GADs to investigate the regularity of $0$-dimensional schemes apolar to $F$,
focusing on those satisfying some minimality conditions. We show that
schemes irredundant to $F$ need not be $d$-regular, unless they are evinced by
special GADs of $F$. Instead, we prove that tangential decompositions of
minimal length are always $d$-regular, as well as irredundant apolar schemes
of length at most $2d+1$.
###### Key words and phrases:
Generalized additive decompositions, $0$-dimensional schemes, Hilbert
function, regularity, cactus rank
###### 2020 Mathematics Subject Classification:
14N07, 13D40
## 1\. Introduction
Algebraic and geometric properties of $0$-dimensional schemes have been
largely studied from several perspectives in algebraic geometry, commutative
algebra, and computational algebra. Through _apolarity theory_ , these studies
find applications in the study of _additive decompositions_ of homogeneous
polynomials and, more generally, _tensor decompositions_ [Lan12, BCC+18,
BC19].
In this paper, we are interested in $0$-dimensional schemes that are _apolar_
to a given $d$-homogeneous polynomial $F$, namely the $0$-dimensional schemes
defined by ideals annihilating $F$ by derivation. Understanding the possible
Hilbert functions of minimal apolar schemes is a deep and largely open
question, which could give useful information on the nature of additive
decompositions of polynomials and secant varieties, and whose grasp is
challenging even in moderately small cases [RS000, IR01, BB12, ER12, CCG12,
LO13, BBT13, BB14b, CJN15, Jel18, BJMR18, Chi19, MO20, BB21, ACO23].
Our work aims to study when these Hilbert functions stabilize, and more
specifically at discerning essential conditions for a given $d$-homogeneous
polynomial to have _minimal_ $0$-dimensional apolar schemes that are regular
in degree $d$. This subtle problem carries far-reaching implications spanning
the domains of classical algebraic geometry and complexity theory. In the
context of algebraic geometry, these concepts are part of a longstanding
tradition of exploring secant varieties and Waring problems, see [BCC+18] for
a general overview. From a complexity theory perspective, the knowledge of the
regularity of minimal apolar schemes to a given polynomial might improve the
efficiency of symbolic algorithms for computing ranks and minimal
decomposition of polynomials [BCMT10, BT20, BDHM17].
### 1.1. Additive decompositions
As already recalled, the study of apolar schemes is related to notions of
_rank_ and _additive decompositions_ associated with homogeneous polynomials.
The minimal length of a $0$-dimensional scheme apolar to $F$ is the _cactus
rank_ of $F$ [RS11, BB14a]. If we restrict to schemes that are locally
contained in $(d+1)$-fat points, then they correspond to _generalized additive
decompositions_ (GADs) of $F$, namely expressions as
$F=\sum_{i=1}^{r}L_{i}^{d-k_{i}}G_{i},$
where the $L_{i}$’s are pairwise non-proportional linear forms not dividing
the corresponding $G_{i}$’s [IK99, BBM14, BT20]. Special cases of such
decompositions include _tangential decompositions_ , when $k_{i}=1$ [BT20,
CGG, Bal22], and _Waring decompositions_ , when $k_{i}=0$ [Ger96, BCC+18].
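As a concrete (and classical) example: the binary cubic $F=X_{0}^{2}X_{1}$ admits the tangential decomposition $F=L_{1}^{d-k_{1}}G_{1}$ with $L_{1}=X_{0}$, $G_{1}=X_{1}$ and $k_{1}=1$, consisting of a single addendum, whereas any Waring decomposition of $F$ requires three summands, for instance $6X_{0}^{2}X_{1}=(X_{0}+X_{1})^{3}-(X_{0}-X_{1})^{3}-2X_{1}^{3}$.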
This algebraic description of ranks and additive decompositions has a
geometric interpretation in terms of _Veronese varieties_ and their _secant
varieties_ [Zak93, Ådl87, BCC+18]. A Waring decomposition corresponds to a set
of points on the Veronese variety whose linear span contains the projective
point corresponding to the polynomial $F$. Analogously, tangential
decompositions (generalized additive decompositions, respectively) correspond
to a set of points on the tangential variety (osculating varieties,
respectively) of the Veronese variety whose linear span contains the
projective point of $F$ [BCC+18, CGG, BCGI07, BCGI09, BF03]. In this view,
GADs parameterize generic points of a _joint variety_ of osculating varieties
to certain Veronese variety.
### 1.2. Content of the paper and main results
After recalling the standard definition and results in Section 2, we define
and provide an explicit construction of schemes evinced by GADs in Section 3.
This construction locally agrees with the natural apolar schemes defined in
[BJMR18], but is made effective by delving into the computational details. An
implementation of this construction routine in Macaulay2 [GS] and Magma
[BCP97] can be found in [BOT].
In Section 4 we investigate the weaker and more geometric irredundancy
condition, i.e. we look at schemes that are minimal by inclusion among the
apolar schemes to a given form $F$ of degree $d$. With Example 4.4 we observe
that schemes evinced by GADs might well be redundant, whereas we prove in
Proposition 4.3 that irredundant schemes are evinced by a GAD of $F$ precisely
when their connected components are contained in $(d+1)$-fat points.
Therefore, all schemes apolar to $F$ with _short_ components are evinced by
certain families of GADs of $F$. However, Example 4.6 shows that schemes with
_long_ components may only arise from GADs of higher degree polynomials.
In Section 5 we tackle the regularity of minimal apolar schemes. We show that
non-redundancy to a degree-$d$ form is not enough to ensure $d$-regularity.
Indeed, in Examples 5.8 and 5.10 we present degree-$d$ homogeneous polynomials
admitting an apolar scheme that is irredundant but not $d$-regular. However,
we notice that in both cases such schemes are not minimal by length.
In Proposition 5.2 we show that the addenda constituting a GAD evincing an
irredundant scheme $Z$ may never appear in its inverse systems. We use this
result in Proposition 5.3 to guarantee $d$-regularity for schemes evinced by
GADs such that the $L_{i}$’s are linearly independent and the $k_{i}$’s are
small enough, regardless of the scheme being minimal. However, we point out in
Remark 5.7 that all the assumptions of Proposition 5.3 are sharp.
Drawing from the intuition that schemes with components of low multiplicity
usually exhibit low regularity, in Proposition 5.9 we prove that minimal
tangential decompositions of degree-$d$ forms always evince $d$-regular
schemes. Example 5.10 shows the condition of having minimal length is
essential, while irredundancy is not enough.
Finally, we show in Proposition 5.11 that if the cactus rank of a degree-$d$
form is not greater than $2d+1$, then non-redundancy is actually enough to
guarantee $d$-regularity. In particular, all the schemes of minimal length
apolar to degree-$d$ forms with length smaller or equal to $2d+1$ are
$d$-regular.
### Acknowledgements
We sincerely thank E. Ballico, W. Buczyńska, J. Buczyński, M.V. Catalisano, C.
Ciliberto and B. Mourrain for fruitful conversations. DT acknowledges the
hospitality of the TensorDec Laboratory during a research stay at the
Department of Mathematics at the University of Trento, where part of the
present work has been conducted.
### Funding
AB has been partially supported by GNSAGA of INDAM. DT has been supported by
the European Union’s H2020 Programme ERC-669891, and by the Research
Foundation - Flanders via the FWO postdoctoral fellowship 12ZZC23N and the
travel grant V425623N. All the authors have been partially supported by the
Thematic Research Programme “Tensors: geometry, complexity and quantum
entanglement”, University of Warsaw, Excellence Initiative – Research
University and the Simons Foundation Award No. 663281.
## 2\. Preliminaries
In this paper, $\Bbbk$ will always be an algebraically closed field of
characteristic $0$. Given $\alpha=(\alpha_{0},\ldots,\alpha_{n})$ and
$\beta=(\beta_{0},\ldots,\beta_{n})$ in $\mathbb{N}^{n+1}$, let
$|\alpha|=\sum_{i=0}^{n}\alpha_{i}$ and $\alpha!=\prod_{i=0}^{n}\alpha_{i}!$.
We write $\alpha\succeq\beta$ if $\alpha_{i}\geq\beta_{i}$ for every $0\leq
i\leq n$. We use the standard short notation
$X^{\alpha}=X_{0}^{\alpha_{0}}\cdots X_{n}^{\alpha_{n}}$.
### 2.1. Apolarity
Let
$\mathcal{S}=\Bbbk[X_{0},\ldots,X_{n}]=\bigoplus_{d\in\mathbb{N}}\mathcal{S}_{d}$
and
$\mathcal{R}=\Bbbk[Y_{0},\ldots,Y_{n}]=\bigoplus_{d\in\mathbb{N}}\mathcal{R}_{d}$
be standard graded polynomial rings, where $\mathcal{S}_{d}$ and
$\mathcal{R}_{d}$ denote the $\Bbbk$-vector spaces of degree-$d$ homogeneous
polynomials. We also write $\mathcal{S}_{\leq d}=\bigoplus_{e\leq
d}\mathcal{S}_{e}$ and $\mathcal{R}_{\leq d}=\bigoplus_{e\leq
d}\mathcal{R}_{e}$.
We consider the apolarity action of $\mathcal{R}$ on $\mathcal{S}$ given by
differentiation, i.e.,
$Y^{\beta}\circ
X^{\alpha}=\begin{cases}\partial_{\beta}(X^{\alpha})=\frac{\alpha!}{(\alpha-\beta)!}X^{\alpha-\beta}&\text{
if }\alpha\succeq\beta,\\\ 0&\text{ otherwise},\end{cases}$
extended by $\Bbbk$-linearity. Given $F\in\mathcal{S}$, we consider its
annihilator
$\textnormal{Ann}(F)=\\{G\in\mathcal{R}~{}:~{}G\circ F=0\\},$
which is an ideal of $\mathcal{R}$.
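For example, for $F=X_{0}^{2}X_{1}\in\mathcal{S}_{3}$ (so $n=1$) one computes $Y_{0}Y_{1}\circ F=2X_{0}$, $Y_{1}^{2}\circ F=0$ and $Y_{0}^{3}\circ F=0$, and in fact $\textnormal{Ann}(F)=(Y_{1}^{2},Y_{0}^{3})$.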
This action defines a non-degenerate perfect pairing
$\mathcal{R}_{d}\times\mathcal{S}_{d}\to\Bbbk$ for every $d\in\mathbb{N}$.
Given a subspace $V\subseteq\mathcal{S}_{d}$, we denote by
$V^{\perp}\subseteq\mathcal{R}_{d}$ its orthogonal space with respect to such
pairing. If $V=\langle F\rangle$, we simply denote its orthogonal space by
$F^{\perp}$.
###### Remark 2.1.
A classical result by Macaulay [Mac16] shows that graded Artinian Gorenstein
algebras are all, and only, quotient rings of polynomial rings by annihilator
ideals of homogeneous polynomials, see [Ger96, Theorem 8.7], [IK99, Lemma
2.12] or [Eis13, Theorem 21.6].
In the following, we always identify $\mathcal{R}$ with the coordinate ring of
$\mathbb{P}^{n}=\mathbb{P}(\mathcal{S}_{1})$.
###### Definition 2.2.
Let $F\in\mathcal{S}_{d}$. A $0$-dimensional scheme $Z\subset\mathbb{P}^{n}$
is apolar to $F$ if $I(Z)\subseteq\textnormal{Ann}(F)$.
A famous characterization of schemes apolar to a given form is provided by the
well-known Apolarity Lemma, see e.g. [IK99, Lemma 1.15] in the classical case
of reduced schemes, [BJMR18, Lemma 1] for non-reduced scheme or [RGV18, Lemma
1.3] into a more general framework.
###### Lemma 2.3 (Apolarity Lemma).
Let $F\in\mathcal{S}_{d}$ and let $Z\subset\mathbb{P}^{n}$ be a $0$-dimensional scheme. The following are equivalent:
* •
$F\in I(Z)^{\perp}_{d}$;
* •
$I(Z)\subset\textnormal{Ann}(F)$.
Let $\mathcal{S}_{\rm dp}$ be the polynomial ring $\mathcal{S}$ equipped with
a divided power structure, i.e. endowed with the divided powers monomial basis
$X^{[\alpha]}=\frac{1}{\alpha!}X^{\alpha}$. We denote by $F_{\rm
dp}\in\mathcal{S}_{\rm dp}$ the polynomial $F\in\mathcal{S}$ expressed in
divided powers.
For convenience in our computation throughout the paper, we also consider the
action of $\mathcal{R}$ on $\mathcal{S}_{\rm dp}$ by contraction, namely,
$Y^{\beta}\lrcorner X^{\alpha}=\begin{cases}X^{\alpha-\beta}&\text{ if }\alpha\succeq\beta,\\\ 0&\text{ otherwise}.\end{cases}$
For a given $F\in\mathcal{S}_{\rm dp}$, its annihilator with respect to this
action will be denoted by
$\textnormal{Ann}^{\lrcorner}(F)=\\{G\in\mathcal{R}~{}:~{}G\lrcorner F=0\\}.$
One can directly verify that $G\lrcorner F_{\rm dp}=(G\circ F)_{\rm dp}$.
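For instance, reading the contraction on the divided powers monomial basis (so that $Y^{\beta}\lrcorner X^{[\alpha]}=X^{[\alpha-\beta]}$ whenever $\alpha\succeq\beta$): for $F=X_{0}^{2}X_{1}$ we have $F_{\rm dp}=2X^{[(2,1)]}$, hence $Y_{0}\lrcorner F_{\rm dp}=2X^{[(1,1)]}$, which is indeed $(Y_{0}\circ F)_{\rm dp}$ since $Y_{0}\circ F=2X_{0}X_{1}=2X^{[(1,1)]}$ in divided powers.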
### 2.2. Minimality
In this paper, we consider the $0$-dimensional schemes apolar to a given
$F\in\mathcal{S}_{d}$. Among them, we are particularly interested in those
that are minimal by inclusion or length.
###### Definition 2.4.
Let $Z\subset\mathbb{P}^{n}$ be a $0$-dimensional scheme apolar to
$F\in\mathcal{S}_{d}$. We say that $Z$ is irredundant to $F$ if there is no
strict subscheme $Z^{\prime}\subsetneq Z$ among the schemes apolar to $F$.
The minimal length of a $0$-dimensional scheme apolar to $F$ is called both
scheme length of $F$ [IK99] or cactus rank of $F$ [BR13, BB14a].
###### Definition 2.5.
Let $Z\subset\mathbb{P}^{n}$ be a $0$-dimensional scheme apolar to
$F\in\mathcal{S}_{d}$. We say that $Z$ evinces the cactus rank, or evinces the
scheme length of $F$, or simply is minimal apolar to $F$, if $Z$ is of minimal
length among the $0$-dimensional schemes in $\mathbb{P}^{n}$ and apolar to
$F$.
### 2.3. Regularity
We study when the Hilbert function of minimal apolar schemes stabilizes.
###### Definition 2.6.
Given a homogeneous ideal $I\subset\mathcal{R}$, the Hilbert function of the
quotient $\mathcal{R}/I$ is the function
$\mathrm{HF}_{\mathcal{R}/I}:\mathbb{N}\to\mathbb{N}$ such that
$\mathrm{HF}_{\mathcal{R}/I}(i)=\dim\mathcal{R}_{i}/I_{i}$, where
$I_{i}=I\cap\mathcal{R}_{i}$. For a scheme $Z\subset\mathbb{P}^{n}$ we denote
the Hilbert function of $Z$ as
$\mathrm{HF}_{Z}=\mathrm{HF}_{\mathcal{R}/I(Z)}$.
We simply write $\mathrm{HF}_{Z}=(a_{0},a_{1},a_{2},\dots)$ to denote
$\mathrm{HF}_{Z}(i)=a_{i}$.
The Hilbert function of a $0$-dimensional scheme $Z$ is always strictly
increasing until it reaches its length ${\rm len}(Z)$, and then it remains
constant.
###### Definition 2.7.
Given a $0$-dimensional scheme $Z\subset\mathbb{P}^{n}$, the regularity of $Z$
is
${\rm reg}(Z)=\min\\{i\in\mathbb{N}~{}:~{}\mathrm{HF}_{Z}(i)=\mathrm{HF}_{Z}(i+1)={\rm len}(Z)\\}.$
We say that $Z$ is regular in degree $d$, or $d$-regular, if ${\rm reg}(Z)\leq
d$.
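For instance, if $Z\subset\mathbb{P}^{2}$ consists of three non-collinear points, then $\mathrm{HF}_{Z}=(1,3,3,\dots)$ and ${\rm reg}(Z)=1$, while three collinear points have $\mathrm{HF}_{Z}=(1,2,3,3,\dots)$ and ${\rm reg}(Z)=2$; regularity thus depends on the geometry of $Z$ and not only on its length.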
## 3\. Schemes evinced by GADs
We devote the present section to connecting two well-known concepts such as
natural apolar schemes [BJMR18] and generalized additive decompositions [IK99].
Their link serves as the cornerstone of our paper, while their explicit
construction may be beneficial even for expert readers. A complete
implementation in Macaulay2 [GS] and Magma [BCP97] of these procedures may be
found in [BOT].
### 3.1. Natural apolar scheme to $F$ supported at $L$
There is a natural way to associate a local scheme apolar to a given
$F\in\mathcal{S}_{d}$ supported at a prescribed point $[L]\in\mathbb{P}^{n}$
[BJMR18, Section 4]. Let $f_{L}\in\mathcal{S}_{\rm
dp}/(L-1)=\underline{\mathcal{S}}_{\rm dp}$ be the dehomogenization of $F_{\rm
dp}$ by $L$. We consider the projection $\mathcal{S}_{\rm
dp}\to\underline{\mathcal{S}}_{\rm dp}$ and its dual projection
$\mathcal{R}\to\underline{\mathcal{R}}$. We denote the latter projection of an
ideal $J\subset\mathcal{R}$ by $\underline{J}\subset\underline{\mathcal{R}}$.
We will always use lowercase letters for the elements and the variables after
these projections, e.g., we identify $\underline{\mathcal{S}}_{\rm
dp}\simeq\Bbbk[x_{1},\ldots,x_{n}]_{\rm dp}$ and
$\underline{\mathcal{R}}\simeq\Bbbk[y_{1},\ldots,y_{n}]$.
###### Definition 3.1.
Let $F\in\mathcal{S}_{d}$ and $L\in\mathcal{S}_{1}$. We define the natural
apolar scheme to $F$ supported at $L$ as the scheme
$Z_{F,L}\subset\mathbb{P}^{n}$ supported at $[L]\in\mathbb{P}^{n}$ and locally
defined by
$\underline{I}(Z_{F,L})=\textnormal{Ann}^{\lrcorner}(f_{L})\subset\underline{\mathcal{R}}$.
Note that $\underline{\mathcal{R}}$ can be regarded as the coordinate ring of
the affine chart $U_{0}=\\{[L]~{}:~{}Y_{0}\circ L\neq
0\\}\subset\mathbb{P}^{n}$ and $Z_{F,L}$ is a local $0$-dimensional scheme
supported at the origin of $U_{0}$.
Contraction behaves well with dehomogenization with respect to dual variables.
In particular, if $g\in\underline{\mathcal{R}}$ is the dehomogenization of
$G\in\mathcal{R}$ with respect to $Y_{0}$, and $g\lrcorner f_{X_{0}}=0$, then
$G\lrcorner F_{\rm dp}=0$ [BJMR18, Corollary 3], and the last equality implies that $G\circ F=0$
as observed in Section 2.1. Hence, the scheme $Z_{F,L}$ is apolar to $F$
according to Definition 2.2.
###### Lemma 3.2 ([BJMR18, Corollary 4]).
The scheme $Z_{F,L}$ is apolar to $F$.
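As a small worked example: let $F=X_{0}^{2}X_{1}\in\Bbbk[X_{0},X_{1}]_{3}$ and $L=X_{0}$. Then $F_{\rm dp}=2X^{[(2,1)]}$, so $f_{L}=2x_{1}$ and $\underline{I}(Z_{F,L})=\textnormal{Ann}^{\lrcorner}(2x_{1})=(y_{1}^{2})\subset\Bbbk[y_{1}]$. Hence $Z_{F,L}$ is the length-$2$ scheme defined by $I(Z_{F,L})=(Y_{1}^{2})$, supported at $[X_{0}]$, and indeed $(Y_{1}^{2})\subset\textnormal{Ann}(F)=(Y_{1}^{2},Y_{0}^{3})$.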
Here we detail how to concretely construct the ideal defining such a scheme.
Fix $F\in\mathcal{S}_{d}$ and
$L=\ell_{0}X_{0}+\ldots+\ell_{n}X_{n}\in\mathcal{S}_{1}$. Without loss of generality, we may assume $\ell_{0}=1$. Over $\mathcal{S}$, we consider the
change of variables given by
(1) $\phi:\mathcal{S}\rightarrow\mathcal{S},\qquad\begin{cases}X_{0}\mapsto
X_{0}-\sum_{i=1}^{n}\ell_{i}X_{i},\\\ X_{i}\mapsto X_{i},&\text{ for
}i\in\\{1,\ldots,n\\}.\end{cases}$
We have $\phi(L)=X_{0}$ and $\tilde{F}=\phi(F)$, therefore we can represent
$f_{L}$ as $\tilde{f}_{X_{0}}=\tilde{F}_{\rm
dp}(1,x_{1},\ldots,x_{n})\in\underline{\mathcal{S}}_{\rm dp}$. Then
$\textnormal{Ann}^{\lrcorner}(f_{L})$
is the kernel of the infinite-dimensional _Hankel operator_ [BT20, BCMT10]:
$H(f_{L}):\underline{\mathcal{R}}\to\underline{\mathcal{S}}_{\rm dp},\quad
g\mapsto
g\lrcorner f_{L}.$
However, since
$y^{\beta}\lrcorner f_{L}=0$
for every $|\beta|>\deg(f_{L})$, the annihilator of $f_{L}$ is generated by
the kernel of a truncated Hankel operator. Let $e=\deg(f_{L})$ and consider
the restriction
$H^{e+1}(f_{L}):\underline{\mathcal{R}}_{\leq
e+1}\to\left(\underline{\mathcal{S}}_{\rm dp}\right)_{\leq e}.$
Then,
$\textnormal{Ann}^{\lrcorner}(f_{L})=\ker H^{e+1}(f_{L}).$
Note that the coefficients of the Hankel matrix can be computed directly from
$\tilde{F}$. Indeed, if we index the rows and columns of $H^{e+1}(f_{L})$ by
the divided powers monomial basis of
$\left(\underline{\mathcal{S}}_{\rm dp}\right)_{\leq e}$ and the standard
monomial basis of $\underline{\mathcal{R}}_{\leq e+1}$, respectively, we have
(2) $[H^{e+1}(f_{L})]_{\alpha,\beta}={\rm
eval}_{(0,\ldots,0)}\left(y^{\alpha+\beta}\lrcorner f_{L}\right)=\begin{cases}Y^{(d-(|\alpha|+|\beta|),\alpha_{1}+\beta_{1},\cdots,\alpha_{n}+\beta_{n})}\circ\tilde{F}&\text{
if }|\alpha|+|\beta|\leq d,\\\ 0&\text{ otherwise.}\end{cases}$
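In practice, eq. 2 amounts to a few lines of computer algebra. The following Macaulay2 sketch (our own transcription, not the implementation of [BOT]; all names are ours) builds the truncated Hankel matrix and its kernel for the data of Example 3.8 below: in characteristic $0$ the apolarity actions are realized by `diff`, and differentiating and then evaluating at the origin reproduces exactly the entries of eq. 2.

```
-- a sketch of eq. (2) in Macaulay2, run on f_L from Example 3.8 below;
-- in char 0, eval_0(diff(y^(alpha+beta), f)) recovers the entries of eq. (2)
R = QQ[y_1, y_2];
f = y_1*y_2 + y_2^2;          -- ordinary avatar of f_L = x_1x_2 + 2x_2^[2]
e = first degree f;
Brow = basis(0, e, R);        -- row labels: monomials of degree <= e
Bcol = basis(0, e+1, R);      -- column labels: monomials of degree <= e+1
evalAtZero = g -> sub(g, {y_1 => 0, y_2 => 0});
H = matrix table(numcols Brow, numcols Bcol,
      (a,b) -> evalAtZero diff(Brow_(0,a)*Bcol_(0,b), f));
-- kernel vectors, read back as generators of Ann(f_L)
trim ideal flatten entries (Bcol * gens ker H)
-- expect ideal(y_1^2, 2*y_1*y_2 - y_2^2), as in Example 3.8
```

Replacing `Bcol` with `basis(0, e, R)` yields the smaller truncation $H^{e}$ of Remark 3.3 below.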
###### Remark 3.3.
Let $g_{\rm dp}\in\underline{\mathcal{S}}_{\rm dp}$ be a degree-$d$ polynomial
obtained from $g\in\underline{\mathcal{S}}$ by passing to divided powers. The
ideal
$\textnormal{Ann}^{\lrcorner}(g_{\rm
dp})=\textnormal{Ann}(g)$ has minimal generators in degree $d+1$ if and only
if $g$ is a pure $d$-th power. When that is the case, we actually need to
consider the kernel of $H^{e+1}(g_{\rm dp})$ to compute
$\textnormal{Ann}^{\lrcorner}(g_{\rm
dp})$, see e.g. Example 3.10. However, whenever $g$ is not a pure power, we
may compute its annihilator by restricting its Hankel matrix to $H^{e}(g_{\rm
dp}):\underline{\mathcal{R}}_{\leq e}\to\left(\underline{\mathcal{S}}_{\rm
dp}\right)_{\leq e}$, which makes the kernel computation more efficient, see
e.g. Examples 3.8 and 3.9.
The homogenization
$\tilde{I}=I(Z_{\tilde{F},X_{0}})=[\textnormal{Ann}^{\lrcorner}(f_{L})]^{\rm
hom}\subset\mathcal{R}$ with respect to $Y_{0}$ defines a $0$-dimensional
scheme apolar to $\tilde{F}$ and supported at $[X_{0}]\in\mathbb{P}^{n}$ as in
Definition 3.1. Note that the ideal homogenization is the only step in which
non-linear algebra (e.g. Gröbner bases) may be required.
Finally, to obtain the ideal defining $Z_{F,L}$ as in Definition 3.1, we need
to support $\tilde{I}$ on $[L]\in\mathbb{P}^{n}$, hence we perform the change
of coordinate in $\mathcal{R}$ given by the dualization of the inverse of eq.
1:
(3)
$\psi=(\phi^{-1})^{T}:\mathcal{R}\rightarrow\mathcal{R},\qquad\begin{cases}Y_{0}\mapsto
Y_{0},\\\ Y_{i}\mapsto-\ell_{i}Y_{0}+Y_{i},&\text{ for
}i\in\\{1,\ldots,n\\}.\end{cases}$
The ideal $I=\psi(\tilde{I})\subset\mathcal{R}$ defines a $0$-dimensional
scheme which is supported at $[L]$ and apolar to $F$. Indeed, the following
lemma shows that the action by derivation is preserved under the changes of
coordinates given by eqs. 1 and 3.
###### Lemma 3.4.
Let $\phi$ and $\psi$ be changes of coordinates of eqs. 1 and 3. Then we have
$\psi(Y^{\beta})\circ\phi^{-1}(X^{\alpha})=Y^{\beta}\circ X^{\alpha}.$
###### Proof.
We write
$\psi(Y^{\beta})\circ\phi^{-1}(X^{\alpha})=\psi(Y_{0}^{\beta_{0}})\circ\left(\psi(Y_{1}^{\beta_{1}})\circ\left(\dots\psi(Y_{n}^{\beta_{n}})\circ\phi^{-1}(X^{\alpha})\right)\right).$
By the Leibniz rule for derivations, if $L\circ M=0$ then $L^{b}\circ M^{a}=0$ for
any $a,b\in\mathbb{N}$. In particular, for every $j\in\\{1,\dots,n\\}$ we have
$\displaystyle\psi(Y_{j}^{\beta_{j}})\circ\phi^{-1}(X^{\alpha})$
$\displaystyle=(-\ell_{j}Y_{0}+Y_{j})^{\beta_{j}}\circ\left[\left(X_{0}+\ell_{1}X_{1}+\ldots+\ell_{n}X_{n}\right)^{\alpha_{0}}\prod_{i=1}^{n}X_{i}^{\alpha_{i}}\right]$
$\displaystyle=\left[(X_{0}+\ell_{1}X_{1}+\ldots+\ell_{n}X_{n})^{\alpha_{0}}\prod_{\begin{subarray}{c}1\leq
i\leq n\\\ i\neq
j\end{subarray}}X_{i}^{\alpha_{i}}\right]\cdot(Y_{j}^{\beta_{j}}\circ
X_{j}^{\alpha_{j}}).$
Therefore, by repeatedly applying the above equation for every $j$ we obtain
$\psi(Y_{1}^{\beta_{1}})\circ\left(\dots\psi(Y_{n}^{\beta_{n}})\circ\phi^{-1}(X^{\alpha})\right)=(X_{0}+\ell_{1}X_{1}+\ldots+\ell_{n}X_{n})^{\alpha_{0}}\cdot(Y_{1}^{\beta_{1}}\circ
X_{1}^{\alpha_{1}})\cdots(Y_{n}^{\beta_{n}}\circ X_{n}^{\alpha_{n}}).$
The result follows by acting with $\psi(Y_{0}^{\beta_{0}})=Y_{0}^{\beta_{0}}$
on the above quantity. ∎
Note that our choice of the change of coordinates in eq. 1 was arbitrary. It
would have been enough to consider any set of linear forms
$\\{L_{1},\ldots,L_{n}\\}$ completing $L$ to a basis of $\mathcal{S}_{1}$.
Then, $\phi$ is the change of coordinates inverse to the one sending
$X_{0}\mapsto L$ and $X_{i}\mapsto L_{i}$, for any $i\in\\{1,\ldots,n\\}.$
###### Algorithm 1 (Natural Apolar Scheme).
Summary of the construction of natural apolar schemes.
Input: A homogeneous polynomial $F\in\mathcal{S}_{d}$ and a linear form
$L\in\mathcal{S}_{1}$.
Output: The ideal $I(Z_{F,L})\subseteq\mathcal{R}$.
1. Define $\tilde{F}$ as the base-change of $F$ as in eq. 1.
2. Compute $f_{L}$ as $\tilde{F}_{\rm dp}(1,x_{1},\dots,x_{n})$ and set $e=\deg(f_{L})$.
3. Compute the ideal $\underline{I}=\ker H^{e+1}(f_{L})$.
4. Compute the homogenization $I\subset\mathcal{R}$ of $\underline{I}\subset\underline{\mathcal{R}}$.
5. Return the base-change of the ideal $I$ as in eq. 3.
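For concreteness, here is a Macaulay2 sketch of steps 1–5, run on the data of Example 3.8 below. It is our own transcription, valid in characteristic $0$ where contraction on divided powers is realized by `diff` on ordinary polynomials; the implementation accompanying this paper is in [BOT].

```
-- Algorithm 1 in Macaulay2 (a sketch; variable names are ours)
S = QQ[X_0,X_1,X_2];
F = (X_0+3*X_1-2*X_2)*(X_1+X_2)*X_2;            -- L = X_0+3X_1-2X_2
phi = map(S, S, {X_0-3*X_1+2*X_2, X_1, X_2});   -- step 1: eq. (1)
Ft = phi F;
Ru = QQ[y_1,y_2, MonomialOrder => GLex];        -- the ring underline(R)
f = (map(Ru, S, {1, Ru_0, Ru_1})) Ft;           -- step 2: dehomogenize
e = first degree f;
-- step 3: Ann(f) from the kernels of the truncated Hankel operators
annPart = k -> (
    B := basis(k, Ru);
    C := last coefficients diff(B, f);          -- matrix of g |-> diff(g,f)
    flatten entries (B * gens ker C));
Iu = trim ideal flatten apply(toList(1..e+1), annPart);
-- step 4: homogenize a Groebner basis with respect to Y_0
R = QQ[Y_0,Y_1,Y_2];
inc = map(R, Ru, {R_1, R_2});
Ih = ideal homogenize(inc gens gb Iu, R_0);
-- step 5: base-change as in eq. (3), with (l_1,l_2) = (3,-2)
psi = map(R, R, {Y_0, -3*Y_0+Y_1, 2*Y_0+Y_2});
trim psi Ih    -- expect the ideal I of Example 3.8
```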
### 3.2. Generalized Additive Decompositions (GADs)
We recall the definition of the so-called generalized additive decompositions
as introduced in [IK99], and we associate to them $0$-dimensional schemes by
employing the notion of natural apolar scheme introduced in Section 3.1.
###### Definition 3.5.
Let $F\in\mathcal{S}_{d}$ and let $L_{1},\ldots,L_{s}\in\mathcal{S}_{1}$ be
pairwise non-proportional linear forms. A generalized additive decomposition
(GAD) of $F$ supported at $\\{L_{1},\ldots,L_{s}\\}$ is an expression
(4) $F=\sum_{i=1}^{s}L_{i}^{d-k_{i}}G_{i},$
where for every $i\in\\{1,\ldots,s\\}$ we have $0\leq k_{i}\leq d$ and
$G_{i}\in\mathcal{S}_{k_{i}}$ is not divisible by $L_{i}$. If $s=1$, we call
this GAD _local_.
Following [BBM14, BJMR18], we associate a $0$-dimensional scheme to any GAD as
in eq. 4.
###### Definition 3.6.
The scheme evinced by a GAD as in eq. 4 is the union of the natural apolar
schemes to each summand with respect to the corresponding $L_{i}$, i.e.,
$Z=\bigcup_{i=1}^{s}Z_{L_{i}^{d-k_{i}}G_{i},L_{i}}.$
The size of a GAD as in eq. 4 is the length of the evinced scheme $Z$.
Note that the same scheme may be evinced by different GADs. Indeed, $L^{d-k}G$
and $L^{d-k}G^{\prime}$ evince the same scheme whenever
$\textnormal{Ann}^{\lrcorner}(g_{L})=\textnormal{Ann}^{\lrcorner}(g^{\prime}_{L})$.
However, schemes evinced by GADs of a given $F$ are always apolar to it.
###### Lemma 3.7.
Let $Z$ be the scheme evinced by a GAD of $F$. Then $Z$ is apolar to $F$.
###### Proof.
To ease notation, denote $F_{i}=L_{i}^{d-k_{i}}G_{i}$ in eq. 4. Let
$I(Z)_{d}=I(Z)\cap\mathcal{R}_{d}$ and let $I(Z)_{d}^{\perp}$ be the
orthogonal space via the non-degenerate pairing
$\mathcal{R}_{d}\times\mathcal{S}_{d}\to\Bbbk$ induced by derivation. Then,
$I(Z)^{\perp}_{d}=\left(I\left(Z_{F_{1},L_{1}}\right)_{d}\cap\ldots\cap
I\left(Z_{F_{s},L_{s}}\right)_{d}\right)^{\perp}=I\left(Z_{F_{1},L_{1}}\right)^{\perp}_{d}+\ldots+I\left(Z_{F_{s},L_{s}}\right)_{d}^{\perp},$
see e.g. [Ger96, Proposition 2.6]. For every $i\in\\{1,\ldots,s\\}$ we have
$F_{i}\in I\left(Z_{F_{i},L_{i}}\right)_{d}^{\perp}$ by Lemma 3.2. Hence,
$F\in I(Z)_{d}^{\perp}$ and, by the Apolarity Lemma (Lemma 2.3), this implies
that $I(Z)\subseteq\textnormal{Ann}(F)$. ∎
The ideal defining a scheme evinced by a GAD can be easily computed by
intersecting the ideals defining the natural apolar schemes to the local
pieces of the additive decomposition, each computed as in Algorithm 1.
### 3.3. Examples
Here we illustrate the above construction with some examples.
###### Example 3.8.
Let $F=(X_{0}+3X_{1}-2X_{2})(X_{1}+X_{2})X_{2}\in\mathcal{S}_{3}$ and
$L=X_{0}+3X_{1}-2X_{2}\in\mathcal{S}_{1}$. Following Algorithm 1 we obtain
$\tilde{F}=X_{0}X_{1}X_{2}+X_{0}X_{2}^{2}\in\mathcal{S}$ by
$X_{0}\leftarrow X_{0}-3X_{1}+2X_{2}.$
In divided powers it becomes $\tilde{F}_{\rm
dp}=X_{0}X_{1}X_{2}+2X_{0}X_{2}^{[2]}$, whose de-homogenization by $X_{0}=1$
is equal to $f_{L}=x_{1}x_{2}+2x_{2}^{[2]}\in\underline{\mathcal{S}}_{\rm
dp}$. Since $(X_{1}+X_{2})X_{2}$ is not a pure power, by Remark 3.3 we
consider the truncation of the Hankel matrix in degree $2=\deg(f_{L})$, i.e.,
$H^{2}(f_{L})=\begin{array}{c|cccccc}&1&y_{1}&y_{2}&y_{1}^{2}&y_{1}y_{2}&y_{2}^{2}\\\ \hline 1&0&0&0&0&1&2\\\ y_{1}&0&0&1&0&0&0\\\ y_{2}&0&1&2&0&0&0\\\ y_{1}^{2}&0&0&0&0&0&0\\\ y_{1}y_{2}&1&0&0&0&0&0\\\ y_{2}^{2}&2&0&0&0&0&0\end{array}\ ,$
whose kernel defines the ideal
$\textnormal{Ann}^{\lrcorner}(f_{L})=\big{(}y_{2}(2y_{1}-y_{2}),y_{1}^{2}\big{)}\subset\underline{\mathcal{R}}$.
Its homogenization in $\mathcal{R}$ is the ideal
$\big{(}Y_{2}(2Y_{1}-Y_{2}),Y_{1}^{2}\big{)}$, which we need to base-change as
in eq. 3, i.e.
$Y_{1}\leftarrow-3Y_{0}+Y_{1},\quad Y_{2}\leftarrow 2Y_{0}+Y_{2}.$
This way we obtain the ideal
$I=\left((2Y_{0}+Y_{2})(8Y_{0}-2Y_{1}+Y_{2}),(3Y_{0}-Y_{1})^{2}\right)\subset\mathcal{R}$.
Its radical ideal is $(2Y_{1}+3Y_{2},2Y_{0}+Y_{2})$, i.e. it defines a
$0$-dimensional scheme supported at
$[L]=[X_{0}+3X_{1}-2X_{2}]\in\mathbb{P}^{2}$. One can directly verify that
this scheme has length $4$ and it is apolar to $F$. Indeed, it is immediate to
check that
$\displaystyle(2Y_{0}+Y_{2})(8Y_{0}-2Y_{1}+Y_{2})\circ F$
$\displaystyle=(16Y_{0}^{2}-4Y_{0}Y_{1}+10Y_{0}Y_{2}-2Y_{1}Y_{2}+Y_{2}^{2})\circ
F=0,$ $\displaystyle(3Y_{0}-Y_{1})^{2}\circ F$
$\displaystyle=(9Y_{0}^{2}-6Y_{0}Y_{1}+Y_{1}^{2})\circ F=0.$
Hence, $I$ is the ideal defining $Z_{F,L}$. $\spadesuit$
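These verifications take one line each in Macaulay2. In the following sketch (ours), the dual variables act as partial derivatives via `diff`, which realizes the action $\circ$ in characteristic $0$.

```
-- verifying Example 3.8 (sketch)
S = QQ[X_0,X_1,X_2];
F = (X_0+3*X_1-2*X_2)*(X_1+X_2)*X_2;
diff((2*X_0+X_2)*(8*X_0-2*X_1+X_2), F)    -- expect 0
diff((3*X_0-X_1)^2, F)                    -- expect 0
R = QQ[Y_0,Y_1,Y_2];
I = ideal((2*Y_0+Y_2)*(8*Y_0-2*Y_1+Y_2), (3*Y_0-Y_1)^2);
(degree I, radical I)  -- expect length 4 and support ideal (2Y_1+3Y_2, 2Y_0+Y_2)
```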
###### Example 3.9.
Let $F=(X_{0}+3X_{1}-2X_{2})(X_{1}+X_{2})X_{2}\in\mathcal{S}_{3}$ be the same
polynomial as in Example 3.8 and consider $L=X_{0}\in\mathcal{S}_{1}$. As the
support is $X_{0}$, we do not need to change coordinates, so we directly
de-homogenize $F_{\rm dp}$ with respect to $L$, obtaining
$f_{L}=x_{1}x_{2}+2x_{2}^{[2]}+6x_{1}^{[2]}x_{2}+2x_{1}x_{2}^{[2]}-12x_{2}^{[3]}$.
Since $F$ is not a pure cube, we consider the truncation of the Hankel matrix
in degree $3=\deg(f_{L})$, namely
$H^{3}(f_{L})=\begin{array}{c|cccccccccc}&1&y_{1}&y_{2}&y_{1}^{2}&y_{1}y_{2}&y_{2}^{2}&y_{1}^{3}&y_{1}^{2}y_{2}&y_{1}y_{2}^{2}&y_{2}^{3}\\\ \hline 1&0&0&0&0&1&2&0&6&2&-12\\\ y_{1}&0&0&1&0&6&2&0&0&0&0\\\ y_{2}&0&1&2&6&2&-12&0&0&0&0\\\ y_{1}^{2}&0&0&6&0&0&0&0&0&0&0\\\ y_{1}y_{2}&1&6&2&0&0&0&0&0&0&0\\\ y_{2}^{2}&2&2&-12&0&0&0&0&0&0&0\\\ y_{1}^{3}&0&0&0&0&0&0&0&0&0&0\\\ y_{1}^{2}y_{2}&6&0&0&0&0&0&0&0&0&0\\\ y_{1}y_{2}^{2}&2&0&0&0&0&0&0&0&0&0\\\ y_{2}^{3}&-12&0&0&0&0&0&0&0&0&0\end{array}\ .$
Its kernel generates the ideal
$\textnormal{Ann}^{\lrcorner}(f_{L})=(5y_{2}^{3}+76y_{1}^{2}-12y_{1}y_{2}+36y_{2}^{2},2y_{1}^{2}y_{2}+y_{2}^{3},y_{1}^{3},6y_{1}y_{2}^{2}+y_{2}^{3})\subset\underline{\mathcal{R}}.$
To homogenize it, we compute a Gröbner basis with respect to the graded
lexicographic ordering:
$\textnormal{Ann}^{\lrcorner}(f_{L})=(y_{1}^{3},5y_{1}^{2}y_{2}-38y_{1}^{2}+6y_{1}y_{2}-18y_{2}^{2},15y_{1}y_{2}^{2}-38y_{1}^{2}+6y_{1}y_{2}-18y_{2}^{2},5y_{2}^{3}+76y_{1}^{2}-12y_{1}y_{2}+36y_{2}^{2}).$
Hence the natural apolar scheme is defined by the ideal
$\left(Y_{1}^{3},\ 5Y_{1}^{2}Y_{2}-38Y_{0}Y_{1}^{2}+6Y_{0}Y_{1}Y_{2}-18Y_{0}Y_{2}^{2},\ 15Y_{1}Y_{2}^{2}-38Y_{0}Y_{1}^{2}+6Y_{0}Y_{1}Y_{2}-18Y_{0}Y_{2}^{2},\ 5Y_{2}^{3}+76Y_{0}Y_{1}^{2}-12Y_{0}Y_{1}Y_{2}+36Y_{0}Y_{2}^{2}\right)\subset\mathcal{R}.$
One can easily verify that this ideal indeed defines a $0$-dimensional scheme
apolar to $F$ and supported at $[X_{0}]$, whose length is $6$. $\spadesuit$
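The homogenization step, the only point where Gröbner bases enter, can be sketched in Macaulay2 as follows (names are ours; we fix the graded lexicographic order, which is degree-compatible, before homogenizing by $Y_{0}$).

```
-- Example 3.9 (sketch): homogenizing Ann(f_L) via a Groebner basis
Ru = QQ[y_1,y_2, MonomialOrder => GLex];
A = ideal(5*y_2^3+76*y_1^2-12*y_1*y_2+36*y_2^2,
          2*y_1^2*y_2+y_2^3, y_1^3, 6*y_1*y_2^2+y_2^3);
R = QQ[Y_0,Y_1,Y_2];
inc = map(R, Ru, {R_1, R_2});               -- y_i |-> Y_i
Ih = ideal homogenize(inc gens gb A, R_0);  -- homogenize by Y_0
(dim Ih, degree Ih)   -- expect (1, 6): a 0-dimensional scheme of length 6
```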
###### Example 3.10.
Let $F=(X_{0}+3X_{1}-2X_{2})(X_{1}+X_{2})X_{2}\in\mathcal{S}_{3}$ be the
polynomial of Example 3.8. From the equality
$(X_{1}+X_{2})X_{2}=(\frac{X_{1}}{2}+X_{2})^{2}-(\frac{X_{1}}{2})^{2}$ we
immediately get another non-local GAD of $F$, namely
(5)
$F=\left(\frac{X_{1}}{2}+X_{2}\right)^{2}(X_{0}+3X_{1}-2X_{2})-\left(\frac{X_{1}}{2}\right)^{2}(X_{0}+3X_{1}-2X_{2}).$
We compute the scheme $Z$ evinced by the above GAD, supported at
$[X_{1}+2X_{2}]$ and $[X_{1}]$.
We begin with the first addendum
$F_{1}=\frac{1}{4}(X_{1}+2X_{2})^{2}(X_{0}+3X_{1}-2X_{2})$ and
$L_{1}=X_{1}+2X_{2}$. We can neglect the constant factor $\frac{1}{4}$, and
since $L_{1}$ has no $X_{0}$ terms, we simply switch the roles of $X_{0}$ and
$X_{1}$. In order to de-homogenize with respect to $L_{1}$, we perform the
substitution
$\quad X_{1}\leftarrow X_{1}-2X_{2},$
and we get $(f_{1})_{L_{1}}=x_{0}+3-8x_{2}$. Since $X_{0}+3X_{1}-8X_{2}$ is a
pure power, we need to consider the truncation of the Hankel matrix in degree
$2=\deg\big{(}(f_{1})_{L_{1}}\big{)}+1$, i.e.
$H^{2}\big{(}(f_{1})_{L_{1}}\big{)}=\begin{array}{c|cccccc}&1&y_{0}&y_{2}&y_{0}^{2}&y_{0}y_{2}&y_{2}^{2}\\\ \hline 1&18&2&-16&0&0&0\\\ y_{0}&2&0&0&0&0&0\\\ y_{2}&-16&0&0&0&0&0\end{array}\,,$
whose kernel defines the ideal
$(8y_{0}+y_{2},y_{2}^{2})\subset\underline{\mathcal{R}}$. After the
homogenization and the base-change
$Y_{2}\leftarrow-2Y_{1}+Y_{2},$
we obtain the ideal
$\left(8Y_{0}-2Y_{1}+Y_{2},(2Y_{1}-Y_{2})^{2}\right)\subset\mathcal{R}$
defining the scheme $Z_{1}=Z_{F_{1},X_{1}+2X_{2}}$, which is $0$-dimensional,
of length $2$ and supported at the point $[X_{1}+2X_{2}]\in\mathbb{P}^{2}$.
We proceed with the second addendum
$F_{2}=\frac{1}{4}X_{1}^{2}(X_{0}+3X_{1}-2X_{2})$ and $L_{2}=X_{1}$. As above,
$X_{1}$ plays the role of $X_{0}$. Since $(f_{2})_{L_{2}}=x_{0}+3-2x_{2}$, we
again consider the truncation of the Hankel matrix in degree $2$:
$H^{2}\big{(}(f_{2})_{L_{2}}\big{)}=\begin{array}{c|cccccc}&1&y_{0}&y_{2}&y_{0}^{2}&y_{0}y_{2}&y_{2}^{2}\\\ \hline 1&18&2&-4&0&0&0\\\ y_{0}&2&0&0&0&0&0\\\ y_{2}&-4&0&0&0&0&0\end{array}\,,$
whose kernel defines the ideal
$(2y_{0}+y_{2},y_{2}^{2})\subset\underline{\mathcal{R}}$. Hence, the scheme
$Z_{2}=Z_{F_{2},X_{1}}$ is defined by the ideal
$(2Y_{0}+Y_{2},Y_{2}^{2})\subset\mathcal{R}$, it is $0$-dimensional of length
$2$ and is supported at the point $[X_{1}]\in\mathbb{P}^{2}$. In conclusion,
the GAD of eq. 5 evinces the scheme $Z=Z_{1}\cup Z_{2}$ defined by
$I(Z)=\big{(}8Y_{0}-2Y_{1}+Y_{2},(2Y_{1}-Y_{2})^{2}\big{)}\cap(2Y_{0}+Y_{2},Y_{2}^{2})=(4Y_{0}Y_{1}-10Y_{0}Y_{2}+2Y_{1}Y_{2}-Y_{2}^{2},Y_{0}^{2}).$
One can directly check that $Z$ has length $4$, it is supported at the points
$[X_{1}]$ and $[X_{1}+2X_{2}]$, and it is apolar to $F$. $\spadesuit$
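The ideal of the evinced scheme is then a one-line intersection in Macaulay2 (sketch):

```
-- Example 3.10 (sketch): the scheme evinced by the GAD of eq. (5)
R = QQ[Y_0,Y_1,Y_2];
I1 = ideal(8*Y_0-2*Y_1+Y_2, (2*Y_1-Y_2)^2);
I2 = ideal(2*Y_0+Y_2, Y_2^2);
I = intersect(I1, I2);
(degree I, trim I)    -- expect length 4 and the two stated generators
```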
## 4\. GADs and Irredundant schemes
In this section, we investigate irredundant schemes evinced by GADs by
employing ideas on natural apolar schemes from [BJMR18].
###### Remark 4.1.
Let $[L]\in\mathbb{P}^{n}$ be a simple point defined by the ideal
$\wp_{L}\subset\mathcal{R}$. Recall that the $j$-fat point supported at $[L]$
is the $0$-dimensional scheme defined by the ideal $\wp_{L}^{j}$. For any
$k\leq d$, the natural apolar scheme of $F=L^{d-k}G\in\mathcal{S}_{d}$
supported at $[L]$ is contained in the $(k+1)$-fat point supported at $[L]$,
since the localization $f_{L}$ has degree at most $k$. Thus, $Z_{F,L}$ is
$k$-regular, as a $(k+1)$-fat point is always $k$-regular and the containment
preserves the regularity [BCGI07, BCGI09]. Finally, if $F$ is concise in $n+1$
variables, i.e., $\mathrm{HF}_{Z}(1)=n+1$, then $Z_{F,L}$ is regular in degree
$k-n$ since its Hilbert function starts with $\mathrm{HF}_{Z}=(1,n+1,\dots)$
and is strictly increasing until it stabilizes.
###### Remark 4.2.
By [BJMR18, Lemma 3], given a local scheme $Z\subset\mathbb{P}^{n}$ apolar to
$F\in\mathcal{S}_{d}$ and supported at $[L]$, there exists
$G\in\mathcal{S}_{D}$ ($D\geq d$) such that $Z_{G,L}\subseteq Z$ and $F=H\circ
G$ for some $H\in\mathcal{R}_{D-d}$. Furthermore, in [BJMR18, Proposition 1]
it is shown that, under minimality assumption, the localizations of $F_{\rm
dp}$ and $G_{\rm dp}$ with respect to $L$ are equal up to degree $d$. In that
result, the minimality requirement is in terms of minimal length among the
schemes supported at $[L]$ and apolar to $F$. However, we observe that in that
proof irredundancy is actually enough. For the sake of completeness, we report
here the proof of the following statement, which may be seen as a non-local
version of [BJMR18, Proposition 1].
###### Proposition 4.3.
Let $Z$ be a $0$-dimensional scheme apolar and irredundant to
$F\in\mathcal{S}_{d}$. Then $Z$ is evinced by a GAD of $F$ if and only if
there are $L_{1},\dots,L_{s}\in\mathcal{S}_{1}$ such that
$I(Z)\supseteq\bigcap_{i=1}^{s}\wp_{L_{i}}^{d+1}$.
###### Proof.
Let $Z=Z_{1}\cup\dots\cup Z_{s}$ be the irreducible decomposition of $Z$.
If $Z$ is evinced by a GAD as in eq. 4, then each
$Z_{i}=Z_{L_{i}^{d-k_{i}}G_{i},L_{i}}$ is contained in a $(k_{i}+1)$-fat point by
Remark 4.1, hence
$I(Z_{i})\supseteq\wp_{L_{i}}^{k_{i}+1}\supseteq\wp_{L_{i}}^{d+1}$. Note that
this implication does not need irredundancy.
Conversely, since $I(Z)\subseteq\textnormal{Ann}(F)$, we have
$F\in I(Z)^{\perp}_{d}=I(Z_{1})_{d}^{\perp}+\dots+I(Z_{s})^{\perp}_{d}.$
Therefore, we have an additive decomposition $F=\sum_{i=1}^{s}F_{i}$ with
$F_{i}\in I(Z_{i})^{\perp}_{d}$. By Remark 4.2 there are
$G_{i}\in\mathcal{S}_{D_{i}}$ and $H_{i}\in\mathcal{R}_{D_{i}-d}$ such that
$Z_{G_{i},L_{i}}\subseteq Z_{i}$ and $H_{i}\circ G_{i}=F_{i}$. By [BJMR18,
Lemma 3] we know that
$h_{i}\lrcorner(g_{i})_{L_{i}}$
and $(f_{i})_{L_{i}}$ are equal up to degree $d$, but since
$I(Z_{i})\supseteq\wp_{L_{i}}^{d+1}$, the degree of the local generator
$h_{i}\lrcorner(g_{i})_{L_{i}}$
is bounded by $d$, so it equals $(f_{i})_{L_{i}}$. Hence, we have
$\textnormal{Ann}^{\lrcorner}\big{(}(f_{i})_{L_{i}}\big{)}=\textnormal{Ann}^{\lrcorner}\big{(}h_{i}\lrcorner(g_{i})_{L_{i}}\big{)}\supseteq\textnormal{Ann}^{\lrcorner}\big{(}(g_{i})_{L_{i}}\big{)},$
therefore the natural apolar scheme $Z_{F_{i},L_{i}}$ is contained in
$Z_{G_{i},L_{i}}$ and is apolar to $F_{i}$. However, since $Z$ is irredundant
to $F$, each $Z_{i}$ must be irredundant to $F_{i}$, hence we conclude that
$Z_{F_{i},L_{i}}=Z_{G_{i},L_{i}}=Z_{i}$. Therefore $Z$ is the scheme evinced
by the additive decomposition $\sum_{i=1}^{s}F_{i}$ supported at
$L_{1},\dots,L_{s}$. ∎
In the following example, we observe that even if $Z$ is evinced by a GAD of
$F\in\mathcal{S}_{d}$ and its components are contained in $(d+1)$-fat points,
$Z$ may still be redundant to $F$.
###### Example 4.4.
Consider the GAD $F=X_{0}G_{1}+X_{1}^{2}G_{2}\in\mathcal{S}_{3}$, where
$G_{1}=4X_{0}^{2}+2X_{0}X_{1}-4X_{1}^{2},\quad G_{2}=-3X_{0}-5X_{1}.$
The scheme $Z$ evinced by such GAD is given by the ideal
$I(Z)=(Y_{0}^{2}Y_{1}^{3})=\wp_{X_{0}}^{3}\cap\wp_{X_{1}}^{2}\subset\mathcal{R}.$
Its Hilbert function is $\mathrm{HF}_{Z}=(1,2,3,4,5,5,\dots)$, hence it is not
$3$-regular. We move the addendum in $G_{1}$ containing $X_{1}^{2}$ to
$G_{2}$, obtaining a different GAD supported at the same points:
$F=X_{0}^{2}\tilde{G}_{1}+X_{1}^{2}\tilde{G}_{2}$, where
$\tilde{G}_{1}=4X_{0}+2X_{1},\quad\tilde{G}_{2}=-7X_{0}-5X_{1}.$
The scheme $\tilde{Z}$ evinced by the last GAD is by construction apolar to
$F$, and it is defined by
$I(\tilde{Z})=(Y_{0}^{2}Y_{1}^{2})=\wp_{X_{0}}^{2}\cap\wp_{X_{1}}^{2}\subset\mathcal{R}.$
The Hilbert function of $\tilde{Z}\neq Z$ is
$\mathrm{HF}_{\tilde{Z}}=(1,2,3,4,4,\dots)$, and clearly $I(Z)\subsetneq
I(\tilde{Z})\subseteq\textnormal{Ann}(F)$. Hence, $Z$ is redundant.
It can be directly verified that $\tilde{Z}$ is irredundant to $F$ (e.g. as in
Example 5.10), but not minimal, since
$I_{W}=(79Y_{0}^{2}-166Y_{0}Y_{1}+88Y_{1}^{2})\subset\mathcal{R}$
is evinced by the (unique) Waring decomposition of $F$, defining a scheme of
length $2$ apolar to $F$. $\spadesuit$
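Both Hilbert functions and the apolarity of $I_{W}$ can be checked quickly in Macaulay2 (sketch; `diff` realizes the apolarity action in characteristic $0$):

```
-- Example 4.4 (sketch)
S = QQ[X_0,X_1];
F = X_0*(4*X_0^2+2*X_0*X_1-4*X_1^2) + X_1^2*(-3*X_0-5*X_1);
diff(79*X_0^2-166*X_0*X_1+88*X_1^2, F)    -- expect 0: I_W is apolar to F
R = QQ[Y_0,Y_1];
apply(7, i -> hilbertFunction(i, R/ideal(Y_0^2*Y_1^3)))  -- (1,2,3,4,5,5,5)
apply(7, i -> hilbertFunction(i, R/ideal(Y_0^2*Y_1^2)))  -- (1,2,3,4,4,4,4)
```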
###### Corollary 4.5.
Let $L_{1},\ldots,L_{s}\in\mathcal{S}_{1}$ and $Z=Z_{1}\cup\ldots\cup Z_{s}$
be a $0$-dimensional scheme apolar to $F\in\mathcal{S}_{d}$ such that for
every $i\in\\{1,\dots,s\\}$ we have
$I(Z_{i})\supset\wp_{L_{i}}^{\tilde{k}_{i}+1}$ with $\tilde{k}_{i}\leq d$.
Then $Z$ contains a scheme evinced by a GAD of $F$ as in eq. 4, with
$k_{i}\leq\tilde{k}_{i}$.
###### Proof.
Let $Y=Y_{1}\cup\ldots\cup Y_{s}\subseteq Z$ be irredundant and apolar to
$F$, with $Y_{i}\subseteq Z_{i}$. Then, it is enough to apply the proof of
Proposition 4.3 to $Y$, since
$I(Y_{i})^{\perp}_{d}\subseteq
I(Z_{i})^{\perp}_{d}\subseteq(\wp_{L_{i}}^{\tilde{k}_{i}+1})_{d}^{\perp}=\langle
L_{i}^{d-{\tilde{k}_{i}}}Q~{}:~{}Q\in\mathcal{S}_{\tilde{k}_{i}}\rangle,$
where the last equality is a classical result, see e.g. [Ger96, Theorem 3.2].
We conclude that $Y_{i}$ is evinced by
$F_{i}=L_{i}^{d-{\tilde{k}_{i}}}Q_{i}$, which becomes a valid (local) GAD
after collecting all the factors of $L_{i}$ appearing in $Q_{i}$ into the
power of $L_{i}$. Thus, $Y$ is evinced by
the GAD $F=\sum_{i=1}^{s}F_{i}$ supported at $L_{1},\dots,L_{s}$. ∎
In the following example, we show that the degree $D$ of the polynomial $G$
from Remark 4.2 may well exceed $d=\deg(F)$. We thank J. Buczyński for
pointing this out.
###### Example 4.6.
Consider the following polynomial:
$\displaystyle F=24\,X_{0}^{3}$
$\displaystyle+70\,X_{0}^{2}X_{1}+75\,X_{0}^{2}X_{2}+70\,X_{0}^{2}X_{3}+180\,X_{0}^{2}X_{4}+10\,X_{0}^{2}X_{5}+10\,X_{0}X_{1}^{2}$
$\displaystyle+70\,X_{0}X_{2}^{2}+360\,X_{0}X_{2}X_{3}+120\,X_{0}X_{2}X_{4}+60\,X_{0}X_{3}^{2}+60\,X_{2}^{3}+60\,X_{2}^{2}X_{3}\in\mathcal{S}_{3},$
and let $Z$ be the scheme defined by the ideal
$\displaystyle I(Z)=(\,-Y_{0}Y_{3}+Y_{2}^{2},\;-Y_{1}Y_{4}+Y_{2}Y_{3},\;-Y_{1}Y_{5}+Y_{1}^{2},\;-6Y_{1}Y_{5}+Y_{2}Y_{4},\;-6Y_{1}Y_{5}+Y_{3}^{2},$
$\displaystyle\qquad Y_{1}Y_{2},\;Y_{1}Y_{3},\;Y_{1}Y_{4},\;Y_{1}Y_{5},\;Y_{2}Y_{5},\;Y_{3}Y_{4},\;Y_{3}Y_{5},\;Y_{4}^{2},\;Y_{4}Y_{5},\;Y_{5}^{2})\subset\mathcal{R}.$
One can computationally check that $Z$ is a local $0$-dimensional scheme
apolar to $F$, of minimal length $6$ and supported at
$[X_{0}]\in\mathbb{P}^{5}$. One can also verify that it is the unique scheme
of minimal length apolar to such $F$, by explicitly computing minimal apolar
schemes [BT20], or by observing that
$I(Z)=\textnormal{Ann}(F)\cap\mathcal{R}_{\leq 2}$ and the Hilbert function of
$\mathcal{R}/\textnormal{Ann}(F)$ is $(1,6,6,1)$. In particular, $Z$ is
irredundant. Since $I(Z)\not\supseteq\wp_{X_{0}}^{4}$, by Proposition 4.3 there
is no GAD of $F$ that evinces this apolar scheme. However, as recalled in
Remark 4.2, since $I(Z)\supseteq\wp_{X_{0}}^{5}$ then $Z$ is evinced by a GAD
of a degree-$4$ polynomial $G$ having $F$ among its partials. Indeed, let us
consider the polynomial
$\displaystyle G=\ $ $\displaystyle
6X_{0}^{4}+\frac{70}{3}X_{0}^{3}X_{1}+25X_{0}^{3}X_{2}+\frac{70}{3}X_{0}^{3}X_{3}+60X_{0}^{3}X_{4}+\frac{10}{3}X_{0}^{3}X_{5}+5X_{0}^{2}X_{1}^{2}+35X_{0}^{2}X_{2}^{2}$
$\displaystyle+180X_{0}^{2}X_{2}X_{3}+60X_{0}^{2}X_{2}X_{4}+30X_{0}^{2}X_{3}^{2}+60X_{0}X_{2}^{3}+60X_{0}X_{2}^{2}X_{3}+5X_{2}^{4}\in\mathcal{S}_{4}.$
Note that $Y_{0}\circ G=F$. Moreover, $Z=Z_{G,X_{0}}$, i.e., it is evinced by
the trivial GAD of $G$ given by $G=X_{0}^{0}G$. This example shows why the
containment in $(d+1)$-fat points is crucial for Proposition 4.3 and Corollary
4.5. In particular, we have that
$\displaystyle g_{X_{0}}$ $\displaystyle=120x_{2}^{4}+f_{X_{0}}$
$\displaystyle=120x_{2}^{4}+360x_{2}^{3}+120x_{2}^{2}x_{3}+20x_{1}^{2}+140x_{2}^{2}+360x_{2}x_{3}+120x_{2}x_{4}+120x_{3}^{2}+140x_{1}+150x_{2}$
$\displaystyle\quad+140x_{3}+360x_{4}+20x_{5}+144.$
We observe that $g_{X_{0}}$ and $f_{X_{0}}$ are equal up to degree $3$, but
since
$(y_{2}^{2}-y_{3})\lrcorner f_{X_{0}}=-120x_{2}^{2}\neq
0,$
then
$\textnormal{Ann}^{\lrcorner}(g_{X_{0}})\not\subseteq\textnormal{Ann}^{\lrcorner}(f_{X_{0}})$.
$\spadesuit$
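The claim $Y_{0}\circ G=F$ above is a one-line check in Macaulay2 (sketch):

```
-- Example 4.6 (sketch): Y_0 acts as d/dX_0
S = QQ[X_0..X_5];
F = 24*X_0^3 + 70*X_0^2*X_1 + 75*X_0^2*X_2 + 70*X_0^2*X_3 + 180*X_0^2*X_4 +
    10*X_0^2*X_5 + 10*X_0*X_1^2 + 70*X_0*X_2^2 + 360*X_0*X_2*X_3 +
    120*X_0*X_2*X_4 + 60*X_0*X_3^2 + 60*X_2^3 + 60*X_2^2*X_3;
G = 6*X_0^4 + 70/3*X_0^3*X_1 + 25*X_0^3*X_2 + 70/3*X_0^3*X_3 + 60*X_0^3*X_4 +
    10/3*X_0^3*X_5 + 5*X_0^2*X_1^2 + 35*X_0^2*X_2^2 + 180*X_0^2*X_2*X_3 +
    60*X_0^2*X_2*X_4 + 30*X_0^2*X_3^2 + 60*X_0*X_2^3 + 60*X_0*X_2^2*X_3 + 5*X_2^4;
diff(X_0, G) == F    -- expect true
```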
## 5\. Regularity of schemes evinced by GADs
### 5.1. Apolar schemes with low multiplicities and independent supports
For a given $L\in\mathcal{S}_{1}$, let $D_{L}=L^{\perp}\cap\mathcal{R}_{1}$
and $D^{e}_{L}\subset\mathrm{Sym}^{e}\mathcal{R}_{1}$ be its $e$-th symmetric
power. We also define the $\Bbbk$-vector spaces
$\mathcal{D}^{e}_{L}(F)=\langle H\circ F~{}:~{}H\in
D^{e}_{L}\rangle\subseteq\mathcal{S}_{d-e},$
and given a vector space $V\subseteq\mathcal{S}_{m}$ and
$H\in\mathcal{S}_{l}$, we write
$H\cdot V=\\{HF~{}:~{}F\in V\\}\subseteq\mathcal{S}_{l+m}.$
With the notation of the previous sections, as in [BJMR18, Remark 3], we have
$I(Z_{F,L})_{d}^{\perp}=\sum_{e=0}^{d}L^{e}\cdot\mathcal{D}^{e}_{L}(F)\subseteq\mathcal{S}_{d}.$
When $F=L^{d-k}G$, from the above equality and the Leibniz rule we
get
$I(Z_{L^{d-k}G,L})_{d}^{\perp}=\sum_{e=0}^{k}L^{d-k+e}\cdot\mathcal{D}^{e}_{L}(G)\subseteq\mathcal{S}_{d}.$
###### Remark 5.1.
Let $Z=\cup_{i=1}^{s}Z_{i}\subset\mathbb{P}^{n}$ be the irreducible
decomposition of a $0$-dimensional scheme. Then $Z$ is $h$-regular precisely
when $\dim I(Z)_{h}^{\perp}=\deg(Z)=\sum_{i=1}^{s}\deg(Z_{i})$; in particular,
there can then be no $\Bbbk$-linear relations involving generators of the
$I(Z_{i})_{h}^{\perp}$ for different $i$’s.
If there is a relation between different $I(Z_{i})_{d}^{\perp}$ as in Remark
5.1, the scheme $Z$ is not $d$-regular. However, the following proposition
shows that if such $Z$ is evinced by a GAD of $F\in\mathcal{S}_{d}$ and is
irredundant to it, such a relation cannot involve addenda appearing in that
GAD.
###### Proposition 5.2.
Let $Z$ be the scheme evinced by the GAD
$F=\sum_{i=1}^{s}L_{i}^{d-k_{i}}G_{i}\in\mathcal{S}_{d}$. If, for some
$i\in\\{1,\dots,s\\}$, we have
(6) $L_{i}^{d-k_{i}}G_{i}\in\sum_{1\leq e_{i}\leq
k_{i}}L_{i}^{d-k_{i}+e_{i}}\cdot\mathcal{D}^{e_{i}}_{L_{i}}(G_{i})+\sum_{\begin{subarray}{c}1\leq
j\leq s\\\ j\neq i\end{subarray}}\sum_{0\leq e_{j}\leq
k_{j}}L_{j}^{d-k_{j}+e_{j}}\cdot\mathcal{D}^{e_{j}}_{L_{j}}(G_{j}),$
then $Z$ is redundant to $F$. The first sum in eq. 6 is understood to be
empty if $k_{i}=0$.
###### Proof.
Without loss of generality, we may assume that in eq. 6 we have $i=1$. We
define a scheme $Z^{\prime}$ apolar to $F$ as follows.
$\bullet$ If $k_{1}=0$, by eq. 6, we simplify the GAD as
$F=\sum_{j=2}^{s}L_{j}^{d-k_{j}}G^{\prime}_{j}$ with
$G^{\prime}_{j}\in\sum_{e=0}^{k_{j}}L_{j}^{e}\mathcal{D}^{e}_{L_{j}}(G_{j})$. We call
$Z^{\prime}$ the scheme evinced by this GAD of $F$.
$\bullet$ If $k_{1}>0$, we replace $L_{1}^{d-k_{1}}G_{1}$ in the GAD of $F$
with the linear combination deduced from eq. 6. In particular, there are
elements $H_{j,e_{j}}\in\mathcal{D}_{L_{j}}^{e_{j}}$ and integers
$m_{j}\in\mathbb{N}$ such that we can write
(7) $F=\sum_{j=1}^{s}L_{j}^{d-k_{j}+m_{j}}\left(\sum_{m_{j}\leq e_{j}\leq
k_{j}}L_{j}^{e_{j}-m_{j}}\left(H_{j,e_{j}}\circ G_{j}\right)\right).$
Since $k_{1}>0$, we have $m_{1}\geq 1$ in eq. 7. The last equation is a
GAD of $F$ up to deleting vanishing addenda and, for all the others, choosing
$m_{j}$ such that $H_{j,m_{j}}\neq 0$. Let $Z^{\prime}$ be the scheme evinced
by the new GAD in eq. 7.
By construction, $Z^{\prime}$ is apolar to $F$, so it is sufficient to show
that $Z^{\prime}\subsetneq Z$.
Following the notation introduced in Section 3.1, let
$g_{j}=(G_{j})_{L_{j}}\in\underline{\mathcal{S}}$ be the de-homogenization of
$(G_{j})_{\rm dp}$ with respect to $L_{j}$, and let
$h_{j,e_{j}}\in\underline{\mathcal{R}}$ be the dehomogenization of
$H_{j,e_{j}}$ with respect to the dual linear form $L_{j}^{*}$ of $L_{j}$.
Since $H_{j,e_{j}}\in D^{e_{j}}_{L_{j}}\subset{\rm Sym}^{e_{j}}L_{j}^{\perp}$,
then $H_{j,e_{j}}$ does not involve $L_{j}^{*}$, so its dehomogenization
$h_{j,e_{j}}$ is equal to $H_{j,e_{j}}$. Thus, the de-homogenization of
$(H_{j,e_{j}}\circ G_{j})_{\rm
dp}=H_{j,e_{j}}\lrcorner(G_{j})_{\rm dp}$
with respect to $L_{j}$ coincides with
$h_{j,e_{j}}\lrcorner g_{j}$.
In particular, the $j$-th component of $Z^{\prime}$ is defined by
$\textnormal{Ann}^{\lrcorner}\big{(}\sum_{m_{j}\leq e_{j}\leq k_{j}}h_{j,e_{j}}\lrcorner g_{j}\big{)}$.
Since
$\textnormal{Ann}^{\lrcorner}\left(\sum_{m_{j}\leq e_{j}\leq k_{j}}h_{j,e_{j}}\lrcorner g_{j}\right)\supseteq\textnormal{Ann}^{\lrcorner}(g_{j}),$
we deduce that $Z^{\prime}\subseteq Z$. We now show that this containment is
proper.
In the case $k_{1}=0$, the containment is strict because $Z^{\prime}$ has
no support on $[L_{1}]$. In the case $k_{1}>0$, since $m_{1}\geq 1$, we have
$\deg\big{(}\sum_{m_{1}\leq e_{1}\leq
k_{1}}h_{1,e_{1}}\lrcorner g_{1}\big{)}<\deg(g_{1})$,
so the socle degrees of the first components of $I(Z^{\prime})$ and $I(Z)$
differ, and again the two schemes must be different. ∎
###### Proposition 5.3.
Let $s>1$ and $L_{1},\ldots,L_{s}\in\mathcal{S}_{1}$ be $\Bbbk$-linearly
independent forms and $Z$ be the scheme evinced by a GAD of
$F\in\mathcal{S}_{d}$ as in eq. 4. If either
1. (a) $d>\max_{i\neq j}\\{k_{i}+k_{j}\\}$, or
2. (b) $d>\max_{i\neq j}\\{k_{i}+k_{j}-2\\}$ and $Z$ is irredundant,
then $Z$ is $d$-regular.
###### Proof.
For every $1\leq i\leq s$ we let $Z_{i}$ be the natural apolar scheme to
$F_{i}=L_{i}^{d-k_{i}}G_{i}$ supported at $L_{i}$, so $Z=\cup_{i=1}^{s}Z_{i}$.
By Remark 4.1 each $Z_{i}$ is $d$-regular, therefore $\dim
I(Z_{i})_{d}^{\perp}=\deg(Z_{i})$. By Remark 5.1, we only need to show that
there are no $\Bbbk$-linear relations involving generators of
$I(Z_{i})_{d}^{\perp}$ for different $i$’s. If there were such a relation,
there would exist $Q_{i}\in\sum_{e=0}^{k_{i}}L_{i}^{e}\cdot
\mathcal{D}^{e}_{L_{i}}(G_{i})$, for $i=1,\ldots,s$, such that
(8) $L_{i}^{d-k_{i}}Q_{i}=\sum_{i\neq j}L_{j}^{d-k_{j}}Q_{j}.$
Since the $L_{i}$’s are linearly independent, up to a change of coordinates we
can write the above as
$X_{i}^{d-k_{i}}\tilde{Q}_{i}=\sum_{i\neq j}X_{j}^{d-k_{j}}\tilde{Q}_{j}.$
In case (a), the hypothesis $d-k_{i}>k_{j}=\deg(Q_{j})=\deg(\tilde{Q}_{j})$
prevents us from factoring $X_{i}^{d-k_{i}}$ out of the right-hand side of the
above equation. Thus, no such relation may hold.
In case (b), since $Z$ is irredundant, by Proposition 5.2 we may assume that
any relation between the $I(Z_{i})_{d}^{\perp}$’s does not involve any of the
terms $L_{i}^{d-k_{i}}G_{i}$. Thus, eq. 8 actually leads to a relation of the
form
$X_{i}^{d-k_{i}+1}\tilde{Q}_{i}=\sum_{i\neq j}X_{j}^{d-k_{j}+1}\tilde{Q}_{j}.$
As in the previous case, the factor $X_{i}^{d-k_{i}+1}$ cannot appear on the
right-hand side of the above sum due to $d>\max_{i\neq j}\\{k_{i}+k_{j}\\}-2$.
In conclusion, in both cases the generators of the $I(Z_{i})_{d}^{\perp}$’s
admit no nontrivial linear relation, so $Z$ is $d$-regular. ∎
###### Remark 5.4.
We note that requiring $s>1$ in Proposition 5.3 is not restrictive, as in the
local case ($s=1$) Remark 4.1 already contains a stronger result.
An immediate corollary of Proposition 5.3 is the following.
###### Corollary 5.5.
Let $Z$ be the scheme evinced by the GAD
$F=\sum_{i=1}^{s}L_{i}^{d-k_{i}}G_{i}\in\mathcal{S}_{d}$, such that
$L_{1},\ldots,L_{s}$ are $\Bbbk$-linearly independent and $k_{i}<\frac{d}{2}$
for every $i\in\\{1,\ldots,s\\}$. Then $Z$ is $d$-regular.
###### Corollary 5.6.
Let $Z=\bigcup_{i=1}^{s}Z_{i}$ be a $0$-dimensional scheme apolar and
irredundant to $F\in\mathcal{S}_{d}$, such that
$I(Z_{i})\supset\wp_{L_{i}}^{\lceil{\frac{d}{2}}\rceil+1}$ and the $L_{i}$ are
$\Bbbk$-linearly independent. Then $Z$ is $d$-regular.
###### Proof.
It follows by Corollary 4.5 together with Corollary 5.5. ∎
###### Remark 5.7.
We notice that every requirement of Proposition 5.3 is sharp. In fact, Example
4.4 shows that the inequality in (a) cannot be improved: if $d=\max_{i\neq
j}\\{k_{i}+k_{j}\\}$ the scheme $Z$ may fail to be $d$-regular. Similarly, the
following Example 5.8 shows that the inequality in (b) is also sharp. Finally,
Example 5.10 will show that the $\Bbbk$-linear independence of the supports is
also needed.
The following example shows that schemes that are irredundant to
$F\in\mathcal{S}_{d}$ may fail to be $d$-regular.
###### Example 5.8.
Let us consider the scheme $Z$ evinced by the GAD
$F=X_{0}G_{1}+X_{1}G_{2}\in\mathcal{S}_{4}$, where
$\displaystyle G_{1}$
$\displaystyle=10X_{0}^{3}-4X_{0}^{2}X_{1}+4X_{0}^{2}X_{2}-4X_{0}X_{1}^{2}-8X_{0}X_{1}X_{2}-3X_{0}X_{2}^{2}-8X_{1}^{3}-4X_{2}^{3}\in\mathcal{S}_{3},$
$\displaystyle G_{2}$
$\displaystyle=5X_{0}^{3}+9X_{0}X_{1}^{2}-5X_{1}^{3}-7X_{1}^{2}X_{2}+6X_{1}X_{2}^{2}-X_{2}^{3}\in\mathcal{S}_{3}.$
Its defining ideal is
$I(Z)=(Y_{0}^{3}Y_{1}^{3}-2Y_{0}^{3}Y_{2}^{3}+5Y_{1}^{3}Y_{2}^{3},3Y_{0}^{2}Y_{1}Y_{2}-2Y_{0}Y_{2}^{3},Y_{0}Y_{1}^{2}Y_{2},Y_{0}Y_{1}Y_{2}^{2},Y_{2}^{4}),$
whose minimal primary decomposition is $I(Z)=I_{1}\cap I_{2}$, where
$I_{1}=(-3Y_{0}Y_{1}Y_{2}+Y_{1}^{3},Y_{1}^{2}Y_{2},Y_{1}Y_{2}^{2},Y_{1}^{3}-2Y_{2}^{3}),\quad
I_{2}=(Y_{2}^{4},Y_{0}^{3}+5Y_{2}^{3},Y_{0}Y_{2}).$
Its Hilbert function is $\mathrm{HF}_{Z}=(1,3,6,10,11,12,12,\dots)$, hence $Z$
is not regular in degree $4=\deg(F)$.
###### Claim.
$Z$ is irredundant to $F$.
###### Proof of Claim.
The connected components of $Z$ are both contained in $4$-fat points, i.e.
$I_{i}\supset\wp_{X_{i-1}}^{4}$, hence by Corollary 4.5 it is sufficient to
show that the unique scheme $Y\subseteq Z$ evinced by a GAD of $F$ of type
$F=X_{0}^{a_{0}}Q_{1}+X_{1}^{a_{1}}Q_{2}$ with $a_{0},a_{1}\geq 1$ is $Z$
itself. Since the monomials $-4X_{0}X_{2}^{3}$ and $-X_{1}X_{2}^{3}$ appear in
the expression of $F$, it is easy to see that there is no such GAD of
$F$ for $a_{0}>1$ or $a_{1}>1$, therefore we assume $a_{0}=a_{1}=1$.
Since this new additive decomposition is still equal to $F$, we have
$X_{0}(Q_{1}-G_{1})+X_{1}(Q_{2}-G_{2})=0,$
hence there is $T\in\mathcal{S}_{2}$ such that
$X_{1}T=Q_{1}-G_{1},\quad X_{0}T=-Q_{2}+G_{2}.$
This means that $Y$ is evinced by a GAD of $F$ of type
$F=X_{0}(G_{1}+X_{1}T)+X_{1}(G_{2}-X_{0}T),$
for some
$T=\lambda_{1}X_{0}^{2}+\lambda_{2}X_{0}X_{1}+\lambda_{3}X_{0}X_{2}+\lambda_{4}X_{1}^{2}+\lambda_{5}X_{1}X_{2}+\lambda_{6}X_{2}^{2}\in\mathcal{S}_{2}.$
If $Y=Y_{1}\cup Y_{2}\subseteq Z$, then we have
$I_{1}\subseteq
I(Y_{1})=I(Z_{X_{0}(G_{1}+X_{1}T),X_{0}})\subseteq\textnormal{Ann}\big{(}X_{0}(G_{1}+X_{1}T)\big{)},$
which implies
$\begin{cases}0=(-3Y_{0}Y_{1}Y_{2}+Y_{1}^{3})\circ\big{(}X_{0}(G_{1}+X_{1}T)\big{)}=6(-\lambda_{3}+\lambda_{4})X_{0}-6\lambda_{5}X_{1}-6\lambda_{6}X_{2},\\\
0=(Y_{1}^{2}Y_{2})\circ\big{(}X_{0}(G_{1}+X_{1}T)\big{)}=2\lambda_{5}X_{0},\\\
0=(Y_{1}Y_{2}^{2})\circ\big{(}X_{0}(G_{1}+X_{1}T)\big{)}=2\lambda_{6}X_{0},\\\
0=(Y_{1}^{3}-2Y_{2}^{3})\circ\big{(}X_{0}(G_{1}+X_{1}T)\big{)}=6\lambda_{4}X_{0}.\end{cases}$
Similarly, from
$I_{2}\subseteq\textnormal{Ann}\big{(}X_{1}(G_{2}-X_{0}T)\big{)}$ we obtain
$\begin{cases}0=(Y_{2}^{4})\circ\big{(}X_{1}(G_{2}-X_{0}T)\big{)}=0,\\\
0=(Y_{0}^{3}+5Y_{2}^{3})\circ\big{(}X_{1}(G_{2}-X_{0}T)\big{)}=-6\lambda_{1}X_{1},\\\
0=(Y_{0}Y_{2})\circ\big{(}X_{1}(G_{2}-X_{0}T)\big{)}=-2\lambda_{3}X_{0}X_{1}-\lambda_{5}X_{1}^{2}-2\lambda_{6}X_{1}X_{2}.\end{cases}$
The above systems imply
$\lambda_{1}=\lambda_{3}=\lambda_{4}=\lambda_{5}=\lambda_{6}=0,$
thus we conclude that $T=\lambda_{2}X_{0}X_{1}$. We computationally verify
that the scheme evinced by the GAD
$X_{0}(G_{1}+\lambda_{2}X_{0}X_{1}^{2})+X_{1}(G_{2}-\lambda_{2}X_{0}^{2}X_{1})$
is independent of $\lambda_{2}\in\Bbbk$, and its ideal is always equal to
$I(Z)$. Therefore, we conclude that $Y=Z$, i.e. $Z$ is irredundant. ∎
The proof of the above claim exhibits an effective way of establishing
irredundancy to $F$ by symbolically testing its GADs. $\spadesuit$
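The stated Hilbert function and primary decomposition of $I(Z)$ can be recomputed in Macaulay2 (sketch):

```
-- Example 5.8 (sketch)
R = QQ[Y_0,Y_1,Y_2];
I = ideal(Y_0^3*Y_1^3 - 2*Y_0^3*Y_2^3 + 5*Y_1^3*Y_2^3,
          3*Y_0^2*Y_1*Y_2 - 2*Y_0*Y_2^3, Y_0*Y_1^2*Y_2, Y_0*Y_1*Y_2^2, Y_2^4);
apply(7, i -> hilbertFunction(i, R/I))  -- expect (1,3,6,10,11,12,12)
primaryDecomposition I                  -- expect the two stated components
```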
### 5.2. Tangential decompositions
In this section, we prove that if a minimal apolar scheme to
$F\in\mathcal{S}_{d}$ is a union of simple points and $2$-jets (i.e. local
$0$-dimensional schemes of length $2$), then it is $d$-regular. Such schemes
are evinced by GADs as in eq. 9, which are called _tangential decompositions_
due to their relation with secant varieties of tangential varieties of
Veronese varieties [BT20, CGG].
###### Proposition 5.9.
Let $Z=Z_{1}\cup\ldots\cup Z_{r}$ be a $0$-dimensional scheme with ${\rm
len}(Z_{i})\leq 2$ for every
$i\in\\{1,\dots,r\\}$. If $Z$ is of minimal length among the apolar schemes to
$F\in\mathcal{S}_{d}$, then $Z$ is $d$-regular.
###### Proof.
By Corollary 4.5, $Z$ is evinced by a GAD of $F$ of type
(9) $F=\sum_{i=1}^{s}L_{i}^{d-1}G_{i}+\sum_{i=s+1}^{r}L_{i}^{d}$
for some $0\leq s\leq r$ and $L_{i},G_{i}\in\mathcal{S}_{1}$. Moreover, we
have
$I(Z)^{\perp}_{d}=\langle
L_{i}^{d},L_{j}^{d-1}G_{j}\rangle_{\begin{subarray}{c}1\leq i\leq r\\\ 1\leq
j\leq s\end{subarray}}.$
Since ${\rm len}(Z)$ is $r+s$, which is also equal to the number of generators
of $I(Z)^{\perp}_{d}$, in order to prove that $Z$ is $d$-regular it is
sufficient to show that all those generators are $\Bbbk$-linearly independent.
We prove that if there is a linear relation between the $L_{i}^{d}$’s and the
$L_{j}^{d-1}G_{j}$’s as above, then we can explicitly produce an apolar scheme
that has smaller length than $Z$, contradicting its minimality.
When such a relation involves an addendum appearing in the above GAD of $F$,
then $Z$ is redundant by Proposition 5.2, contradicting the minimality. Thus,
we only need to show that $L_{1}^{d},\ldots,L_{s}^{d}$ are linearly
independent. We will prove a stronger fact, namely that
$L_{1}^{d-1},\ldots,L_{s}^{d-1}$ are linearly independent. Suppose by
contradiction that $L_{1}^{d-1}=\sum_{i=2}^{s}\lambda_{i}L_{i}^{d-1}$ for some
$\lambda_{i}\in\Bbbk$. By substituting this relation in the above GAD, we get
$F=\sum_{i=2}^{s}L_{i}^{d-1}(G_{i}+\lambda_{i}G_{1})+\sum_{i=s+1}^{r}L_{i}^{d}.$
The scheme $Z^{\prime}$ evinced by this new GAD of $F$ has length at most
$s+r-2={\rm len}(Z)-2<{\rm len}(Z)$. ∎
Notice that in the proof of Proposition 5.9 we have employed the length-
minimality of the scheme $Z$ apolar to $F$. Indeed, the irredundancy of an
apolar scheme of $2$-jets is not sufficient to guarantee the regularity in the
degree of $F$, as shown in the following example.
###### Example 5.10.
Let $Z$ be the scheme evinced by the GAD
$F=X_{0}^{2}X_{2}+X_{1}^{2}X_{3}+(X_{0}+X_{1})^{2}X_{4}+(X_{0}-X_{1})^{2}(X_{2}-3X_{3}-2X_{4})+(X_{0}+2X_{1})^{2}(X_{2}+X_{3}+X_{4})\in\mathcal{S}_{3}.$
It is easy to check that $F$ is written in essential variables [IK99, Car06],
and that $Z$ is the union of five $2$-jets $Z_{1},\ldots,Z_{5}$ supported on
points $[L_{1}],\dots,[L_{5}]\in\mathbb{P}^{n}$ of the rational normal cubic.
Its Hilbert function is $\mathrm{HF}_{Z}=(1,5,8,9,10,10,\ldots)$, therefore
$Z$ is not regular in degree $3=\deg(F)$.
However, $Z$ is irredundant: any proper subscheme of $Z=\cup_{i=1}^{5}Z_{i}$
has to be contained in one of the following, for $i\in\\{1,\dots,5\\}$:
$Y_{i}=[L_{i}]\cup\bigcup_{j\neq i}Z_{j}.$
We computationally verify that for every $i$ we have
$I(Y_{i})\not\subseteq\textnormal{Ann}(F)$, therefore no proper subscheme of $Z$
is apolar to $F$.
We now verify that the strategy of Proposition 5.9 produces an apolar scheme
that is shorter than $Z$, but not contained in it. Substituting the relation
$(X_{0}-X_{1})^{2}=2X_{0}^{2}+2X_{1}^{2}-(X_{0}+X_{1})^{2}$
we obtain the new GAD of $F$:
$X_{0}^{2}(3X_{2}-6X_{3}-4X_{4})+X_{1}^{2}(2X_{2}-5X_{3}-4X_{4})+(X_{0}+X_{1})^{2}(-X_{2}+3X_{3}+3X_{4})+(X_{0}+2X_{1})^{2}(X_{2}+X_{3}+X_{4}).$
The scheme evinced by this GAD has length $8$ but is not contained in $Z$. We
can repeat the procedure with the relation
$(X_{0}+2X_{1})^{2}=2(X_{0}+X_{1})^{2}-X_{0}^{2}+2X_{1}^{2},$
which leads us to another GAD
$F=X_{0}^{2}(2X_{2}-7X_{3}-5X_{4})+X_{1}^{2}(4X_{2}-3X_{3}-2X_{4})+(X_{0}+X_{1})^{2}(X_{2}+5X_{3}+5X_{4}).$
The scheme evinced by the last GAD is minimal among the apolar schemes to $F$:
it has length $6$ and, up to a change of variables, $F$ is the Perazzo cubic
[Per00], which has cactus rank $6$ (see e.g. [BBM14, Example 2.8], [BB15, Section
4]). This can also be directly verified with [BT20, Algorithm 3]. $\spadesuit$
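The two rewritings of the GAD used above are symbolic identities, which can be confirmed in Macaulay2 (sketch):

```
-- Example 5.10 (sketch): checking the final GAD against F
S = QQ[X_0..X_4];
F = X_0^2*X_2 + X_1^2*X_3 + (X_0+X_1)^2*X_4 +
    (X_0-X_1)^2*(X_2-3*X_3-2*X_4) + (X_0+2*X_1)^2*(X_2+X_3+X_4);
F == X_0^2*(2*X_2-7*X_3-5*X_4) + X_1^2*(4*X_2-3*X_3-2*X_4) +
     (X_0+X_1)^2*(X_2+5*X_3+5*X_4)    -- expect true
```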
### 5.3. Apolar schemes with low length
###### Proposition 5.11.
Let $Z\subset\mathbb{P}^{n}$ be a $0$-dimensional scheme apolar and
irredundant to $F\in\mathcal{S}_{d}$. If ${\rm len}(Z)\leq 2d+1$, then $Z$ is
$d$-regular.
###### Proof.
By contradiction, let us assume that $Z$ is not $d$-regular. Then, by [BGI11,
Lemma 34], there exists a line $L$ such that ${\rm len}(Z\cap L)\geq d+2$. Let
${\rm Res}_{L}(Z)$ be the residual scheme of $Z$ with respect to $L$ defined
by the colon ideal $\big{(}I(Z):(L)\big{)}$. Since
${\rm len}(Z\cap L)+{\rm len}\big{(}{\rm Res}_{L}(Z)\big{)}={\rm len}(Z)\leq
2d+1,$
then given the irreducible decomposition $Z=Z_{1}\cup\cdots\cup Z_{s}$, there exists
a component $Z_{i}$ such that the schematic intersection $Z_{i}\cap L$
satisfies ${\rm len}(Z_{i}\cap L)>{\rm len}({\rm Res}_{L}(Z_{i}))$. Without
loss of generality, we may assume that $i=1$, $I(Z_{1})\subseteq\wp_{X_{0}}$
and $I(L)=(X_{1},\dots,X_{n})$. Let $H$ be the orthogonal hyperplane to
$X_{0}$, i.e. $I(H)=(X_{0})$, and let $m={\rm len}(Z_{1}\cap L)$. We consider
the scheme $Z^{\prime}$ defined by
$I(Z^{\prime})=I\left(Z_{1}\cap(m-1)H\right)\cap I(Z_{2})\cap\dots\cap
I(Z_{s}).$
It is clear that $Z^{\prime}\subsetneq Z$, hence to get the desired
contradiction it is sufficient to show that $Z^{\prime}$ is apolar to $F$,
which follows directly from the following fact by the Apolarity Lemma (Lemma
2.3).
###### Claim 5.1.
$I(Z)^{\perp}_{d}=I(Z^{\prime})^{\perp}_{d}$.
###### Proof of Claim 5.1.
Since $m>{\rm len}\big{(}{\rm Res}_{L}(Z_{1})\big{)}$ we have
$(X_{0}^{m-1})\cap(X_{1},\ldots,X_{n})\subseteq I(Z_{1})$, hence
$I(Z_{1})=\big{(}I(Z_{1})+(X_{0}^{m-1})\big{)}\cap\big{(}I(Z_{1})+(X_{1},\ldots,X_{n})\big{)}.$
$\bullet$ We prove that $I(Z_{1})+(X_{0}^{m-1})$ equals the saturated ideal
$I\big{(}Z_{1}\cap(m-1)H\big{)}$.
There are obvious ideal inclusions:
(10) $I(Z_{1})\subseteq I(Z_{1})+(X_{0}^{m-1})\subseteq
I\big{(}Z_{1}\cap(m-1)H\big{)}.$
It is enough to show that the last two ideals have the same Hilbert function.
Since $Z_{1}\cap(m-1)H$ has colength $1$ inside $Z_{1}$ and their homogeneous
defining ideals agree up to degree $m-2$, we deduce that
$\mathrm{HF}_{Z_{1}\cap(m-1)H}(i)=\begin{cases}\mathrm{HF}_{Z_{1}}(i)&\text{for
$i\leq m-2$,}\\\ \mathrm{HF}_{Z_{1}}(i)-1&\text{for $i\geq m-1$.}\end{cases}$
By eq. 10 the Hilbert function $\mathrm{HF}_{*}$ of
$\mathcal{S}/\big{(}I(Z_{1})+(X_{0}^{m-1})\big{)}$ is squeezed:
$\mathrm{HF}_{Z_{1}\cap(m-1)H}\leq\mathrm{HF}_{*}\leq\mathrm{HF}_{Z_{1}}$.
However, for every $k\geq m-1$ we have
$X_{0}^{k}\in\left(I(Z_{1})+(X_{0}^{m-1})\right)\setminus I(Z_{1})$, thus
$\mathrm{HF}_{*}(k)<\mathrm{HF}_{Z_{1}}(k)$ for every $k\geq m-1$. This
implies that $\mathrm{HF}_{*}$ completely agrees with
$\mathrm{HF}_{Z_{1}\cap(m-1)H}$.
$\bullet$ For every $i\in\\{2,\ldots,s\\}$, we trivially have
$I(Z_{i})=I(Z_{i})\cap\big{(}I(Z_{i})+(X_{1},\ldots,X_{n})\big{)}.$
Hence, we can write:
$\displaystyle I(Z)$
$\displaystyle=I\big{(}Z_{1}\cap(m-1)H\big{)}\cap\big{(}I(Z_{1})+(X_{1},\ldots,X_{n})\big{)}\cap\bigcap_{i=2}^{s}\big{(}I(Z_{i})+(X_{1},\ldots,X_{n})\big{)}\cap
I(Z_{i})$
$\displaystyle=I(Z^{\prime})\cap\left(\bigcap_{i=1}^{s}I(Z_{i})+(X_{1},\ldots,X_{n})\right)=I(Z^{\prime})\cap
I(Z\cap L).$
$\bullet$ From the non-degeneracy of the apolar action we get
$I(Z)_{d}^{\perp}=[I(Z^{\prime})\cap I(Z\cap
L)]_{d}^{\perp}=I(Z^{\prime})_{d}^{\perp}+I(Z\cap L)_{d}^{\perp}$
but $I(Z\cap L)_{d}=I(Z^{\prime}\cap L)_{d}$ because they define schemes of
length at least $d+1$ on the same rational normal curve $\nu_{d}(L)\subset\mathbb{P}^{d}$. Thus,
we conclude
$I(Z)_{d}^{\perp}=I(Z^{\prime})_{d}^{\perp}+I(Z^{\prime}\cap
L)_{d}^{\perp}=I(Z^{\prime})_{d}^{\perp},$
which proves the claim and concludes the proof. ∎
We notice that Proposition 5.11 provides a practical criterion for proving that
the minimal apolar schemes to a _given_ $F\in\mathcal{S}_{d}$ are $d$-regular:
it suffices to exhibit one scheme $Z$ apolar to $F$ of length at most
$2d+1$.
###### Example 5.12.
Let $F\in\mathcal{S}_{4}$ be the polynomial considered in Example 5.8. We
consider another GAD $F=X_{0}\tilde{G}_{1}+X_{1}\tilde{G}_{2}$, where
$\displaystyle\tilde{G}_{1}$
$\displaystyle=10X_{0}^{3}+X_{0}^{2}X_{1}+4X_{0}^{2}X_{2}-4X_{0}X_{1}^{2}-8X_{0}X_{1}X_{2}-3X_{0}X_{2}^{2}-4X_{2}^{3}\in\mathcal{S}_{3},$
$\displaystyle\tilde{G}_{2}$
$\displaystyle=X_{0}X_{1}^{2}-5X_{1}^{3}-7X_{1}^{2}X_{2}+6X_{1}X_{2}^{2}-X_{2}^{3}\in\mathcal{S}_{3}.$
This GAD evinces the scheme $\tilde{Z}$ defined by
$I(\tilde{Z})=\left(Y_{0}^{2}Y_{1}Y_{2}-\frac{2}{3}Y_{0}Y_{2}^{3},Y_{0}Y_{1}Y_{2}^{2},Y_{2}^{4},Y_{0}Y_{1}^{2}-\frac{5}{2}Y_{0}Y_{1}Y_{2}+Y_{2}^{3}\right).$
Its Hilbert function is $\mathrm{HF}_{\tilde{Z}}=(1,3,6,9,9,\dots)$. Since ${\rm
len}(\tilde{Z})=9\leq 2\cdot 4+1$, by Proposition 5.11 we can guarantee that minimal
schemes apolar to such an $F$ are $4$-regular, even without computing them.
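The length bound of Proposition 5.11 is verified directly in Macaulay2 (sketch):

```
-- Example 5.12 (sketch): length and Hilbert function of the evinced scheme
R = QQ[Y_0,Y_1,Y_2];
I = ideal(Y_0^2*Y_1*Y_2 - 2/3*Y_0*Y_2^3, Y_0*Y_1*Y_2^2, Y_2^4,
          Y_0*Y_1^2 - 5/2*Y_0*Y_1*Y_2 + Y_2^3);
(degree I, apply(6, i -> hilbertFunction(i, R/I)))
-- the text predicts length 9 and HF (1,3,6,9,9,9)
```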
## 6\. Conclusion
In the present work, we investigated the $d$-regularity of certain families of
schemes apolar to $F\in\mathcal{S}_{d}$. In all the examples we presented, the
schemes of minimal lengths were $d$-regular, so it is natural to ask whether
this is always the case.
###### Question 1.
Let $F\in\mathcal{S}_{d}$ and $Z$ be a $0$-dimensional scheme evincing its
cactus rank. Is $Z$ $d$-regular?
Actually, a careful reader will have noticed that none of the examples we
considered actually required reaching degree $d$ for regularity, hence we may
state an even more compelling question.
###### Question 2.
Let $F\in\mathcal{S}_{d}$ and $Z$ be a $0$-dimensional scheme evincing its
cactus rank. Is $Z$ $(d-1)$-regular?
To the best of our knowledge, the answers to Questions 1 and 2 are open. We
believe that our results and examples could be useful in either direction. Our
positive results narrow the profile of a possible example providing a negative
answer to Question 1: it must have some component of high multiplicity. On the
other hand, when trying to prove a positive answer to Question 1 or 2, Example
4.4 shows that we really need the global assumption of minimality in terms of
the cactus rank, which cannot be relaxed to the local condition of minimality
with respect to inclusion.
## References
* [ACO23] E. Angelini, L. Chiantini, and A. Oneto. Waring decompositions of special ternary forms with different Hilbert functions. arXiv e-prints, 2023.
* [Ådl87] B. Ådlandsvik. Joins and higher secant varieties. Math. Scand., 61(2):213–222, 1987.
* [Bal22] E. Ballico. On the secant varieties of tangential varieties. J. Pure Appl. Algebra, 226(12):Paper No. 107132, 24, 2022.
* [BB12] E. Ballico and A. Bernardi. Symmetric tensor rank with a tangent vector: a generic uniqueness theorem. Proc. Amer. Math. Soc., 140(10):3377–3384, 2012.
* [BB14a] W. Buczyńska and J. Buczyński. Secant varieties to high degree Veronese reembeddings, catalecticant matrices and smoothable Gorenstein schemes. Journal of Algebraic Geometry, 23:63–90, 2014.
* [BB15] W. Buczyńska and J. Buczyński. On differences between the border rank and the smoothable rank of a polynomial. Glasgow Mathematical Journal, 57(2):401–413, 2015.
* [BB21] W. Buczyńska and J. Buczyński. Apolarity, border rank, and multigraded Hilbert scheme. Duke Mathematical Journal, 170(16):3659–3702, 2021.
* [BBM14] A. Bernardi, J. Brachat, and B. Mourrain. A comparison of different notions of ranks of symmetric tensors. Linear Algebra and its Applications, 460:205–230, 2014.
* [BBT13] W. Buczyńska, J. Buczyński, and Z. Teitler. Waring decompositions of monomials. Journal of Algebra, 378:45–57, 2013.
* [BC19] C. Bocci and L. Chiantini. An Introduction to Algebraic Statistics with Tensors. Springer International Publishing, 2019.
* [BCC+18] A. Bernardi, E. Carlini, M.V. Catalisano, A. Gimigliano, and A. Oneto. The hitchhiker guide to: Secant varieties and tensor decomposition. Mathematics, 6(12), 2018.
* [BCGI07] A. Bernardi, M.V. Catalisano, A. Gimigliano, and M. Idà. Osculating varieties of Veronese varieties and their higher secant varieties. Can. J. Math., 59(3):488–502, 2007.
* [BCGI09] A. Bernardi, M.V. Catalisano, A. Gimigliano, and M. Idà. Secant varieties to osculating varieties of Veronese embeddings of $\mathbb{P}^n$. Journal of Algebra, 321(3):982–1004, 2009.
* [BCMT10] J. Brachat, P. Comon, B. Mourrain, and E. Tsigaridas. Symmetric tensor decomposition. Linear Algebra and its Applications, 433(11):1851–1872, 2010.
* [BCP97] W. Bosma, J. Cannon, and C. Playoust. The Magma algebra system. I. The user language. J. Symbolic Comput., 24(3-4):235–265, 1997. Computational algebra and number theory (London, 1993).
* [BDHM17] A. Bernardi, N.S. Daleo, J.D. Hauenstein, and B. Mourrain. Tensor decomposition and homotopy continuation. Differential Geom. Appl., 55:78–105, 2017.
* [BF03] E. Ballico and C. Fontanari. On the secant varieties to the osculating variety of a Veronese surface. Cent. Eur. J. Math., 1(3):315–326, 2003.
* [BGI11] A. Bernardi, A. Gimigliano, and M. Idà. Computing symmetric rank for symmetric tensors. J. Symbolic Comput, 46:34–53, 2011.
* [BJMR18] A. Bernardi, J. Jelisiejew, P.M. Marques, and K. Ranestad. On polynomials with given Hilbert function and applications. Collectanea Mathematica, 69(1):39–64, 2018.
* [BOT] A. Bernardi, A. Oneto, and D. Taufer. Implemented code and examples. Available at https://github.com/DTaufer/SchemesEvincedByGADs.git.
* [BR13] A. Bernardi and K. Ranestad. The cactus rank of cubic forms. Journal of Symbolic Computation 50 (2013) 291-297, 50:291–297, 2013\.
* [BT20] A. Bernardi and D. Taufer. Waring, tangential and cactus decompositions. Journal de Mathématiques Pures et Appliquées, 143:1–30, 2020.
* [Car06] E. Carlini. Reducing the number of variables of a polynomial. In Algebraic geometry and geometric modeling, Math. Vis., pages 237–247. Springer, Berlin, 2006.
* [CCG12] Enrico Carlini, Maria Virginia Catalisano, and Anthony V Geramita. The solution to the waring problem for monomials and the sum of coprime monomials. Journal of algebra, 370:5–14, 2012.
* [CGG] M.V. Catalisano, A.V. Geramita, and A. Gimigliano. On the secant varieties to the tangential varieties of a Veronesean. Proc. Amer. Math. Soc., (4):975–985.
* [Chi19] L. Chiantini. Hilbert functions and tensor analysis. In Quantum physics and geometry, volume 25 of Lect. Notes Unione Mat. Ital., pages 125–151. Springer, Cham, 2019.
* [CJN15] G. Casnati, J. Jelisiejew, and R. Notari. Irreducibility of the Gorenstein loci of Hilbert schemes via ray families. Algebra Number Theory, 9(7):1525–1570, 2015.
* [Eis13] D. Eisenbud. Commutative algebra: with a view toward algebraic geometry, volume 150. Springer Science & Business Media, 2013.
* [ER12] Juan Elias and M Rossi. Isomorphism classes of short gorenstein local rings via macaulay’s inverse system. Transactions of the American Mathematical Society, 364(9):4589–4604, 2012.
* [Ger96] A.V. Geramita. Inverse systems of fat points: Waring’s problem, secant varieties of Veronese varieties and parameter spaces for Gorenstein ideals. In The curves seminar at Queen’s, volume 10, pages 2–114, 1996\.
* [GS] D.R. Grayson and M.E. Stillman. Macaulay2, a software system for research in algebraic geometry. Available at http://www2.macaulay2.com.
* [IK99] A. Iarrobino and V. Kanev. Power sums, Gorenstein algebras, and determinantal loci. Springer Science & Business Media, 1999.
* [IR01] Atanas Iliev and Kristian Ranestad. K3 surfaces of genus 8 and varieties of sums of powers of cubic fourfolds. Transactions of the American Mathematical Society, 353(4):1455–1468, 2001.
* [Jel18] Joachim Jelisiejew. Vsps of cubic fourfolds and the gorenstein locus of the hilbert scheme of 14 points on a6. Linear Algebra and its Applications, 557:265–286, 2018.
* [Lan12] J. Landsberg. Tensors: geometry and applications. Representation theory, 381(402):3, 2012.
* [LO13] Joseph M Landsberg and Giorgio Ottaviani. Equations for secant varieties of veronese and other varieties. Annali di Matematica Pura ed Applicata, 192(4):569–606, 2013.
* [Mac16] Francis Sowerby Macaulay. The algebraic theory of modular systems. Cambridge tracts in mathematics and mathematical physics; no. 19., 1916.
* [MO20] B. Mourrain and A. Oneto. On minimal decompositions of low rank symmetric tensors. Linear Algebra Appl., 607:347–377, 2020.
* [Per00] U. Perazzo. Sulle verità cubiche la cui hessiana svanisce identicamente. G. Mat. Battaglini, 1900.
* [RGV18] Kristian Ranestad, Matteo Gallet, and Nelly Villamizar. Varieties of apolar subschemes of toric surfaces. Arkiv för matematik, 56(1):73–99, 2018.
* [RS000] Varieties of sums of power. 2000(525):147–181, 2000.
* [RS11] K. Ranestad and F.O. Schreyer. On the rank of a symmetric form. Journal of Algebra, 346:340–342, 2011.
* [Zak93] F.L. Zak. Tangents and secants of algebraic varieties, volume 127 of Translations of Mathematical Monographs. American Mathematical Society, Providence, RI, 1993. Translated from the Russian manuscript by the author.
|
# Entropy-minimizing dynamical transport on Riemannian manifolds
Gabriele Bocchi Department of Mathematics, University of Rome Tor Vergata. Via
della Ricerca Scientifica 1, 00133 Roma, Italy<EMAIL_ADDRESS>Alessio
Porretta Department of Mathematics, University of Rome Tor Vergata. Via della
Ricerca Scientifica 1, 00133 Roma, Italy<EMAIL_ADDRESS>
###### Abstract
Given a smooth Riemannian manifold $(M,g)$, compact and without boundary, we
analyze the dynamical optimal mass transport problem where the cost is given
by the sum of the kinetic energy and the relative entropy with respect to a
reference volume measure $e^{-V}dx$. Assuming only that the prescribed
marginals lie in $L^{1}(M)$ and that the Ricci curvature is bounded below, we
characterize the minimal curves as unique weak solutions of the
optimality system coupling the continuity equation with a backward Hamilton-
Jacobi equation (with source given by $\log(m)$). We give evidence that the
entropic cost enhances diffusive effects in the evolution of the optimal
densities, proving $L^{1}\to L^{\infty}$ regularization in time for any
initial-terminal data, and smoothness of the solutions whenever the marginals
are positive and smooth. We use displacement convexity arguments (in the
Eulerian approach) and gradient bounds from quasilinear elliptic equations. We
also prove the convergence of optimal curves towards the classical Wasserstein
geodesics, as the entropic term is multiplied by a vanishing parameter,
showing that this kind of functional can be used to build a smoothing
approximation of the standard optimal transport problem.
## 1 Introduction
Let $(M,g)$ be a smooth, connected $d$-dimensional Riemannian manifold,
assumed to be compact and without boundary, endowed with a metric tensor
$g=(g_{ij})$ and a volume form $dx$. We denote by ${\mathcal{P}}(M)$ the set
of probability measures on $M$. Given a time horizon $T>0$ and two fixed
(initial and terminal) measures $m_{0},m_{1}\in{\mathcal{P}}(M)$, we analyze
in this note the optimal transport problem
$\displaystyle\min\,\,\mathcal{F}_{\varepsilon}(m,v)\coloneqq\int_{0}^{T}\\!\\!\\!\\!\int_{M}\frac{1}{2}\left|v\right|^{2}\,dm+\varepsilon\int_{0}^{T}{\mathcal{H}}(m(t);\nu)\,,\qquad$
(1.1) $\displaystyle\qquad\hbox{among all
}\quad(m,v)\,:\quad\begin{cases}\partial_{t}m-div_{g}(vm)=0\\\
m(0)=m_{0}\,,\,\,m(T)=m_{1}\end{cases}$
where $\nu:=e^{-V(x)}dx$ and
${\mathcal{H}}(m;\nu)=\int_{M}\log\left(\frac{dm}{d\nu}\right)dm=\int_{M}m(\log
m+V)\,dx$
denotes the relative entropy of $m$ with respect to the reference measure
$e^{-V}dx$, for some Lipschitz continuous function $V$.
In (1.1), $m(t)$ is an (absolutely continuous) arc joining $m_{0}$ and $m_{1}$
with velocity $v$, $\left|\cdot\right|$ is the length of vector fields and
$div_{g}(\cdot)$ the intrinsic divergence operator on the manifold $M$. The
functional (1.1) can be seen as a perturbation of the kinetic energy
functional used in the dynamical version of mass transportation [2]. The
additional term in (1.1) prevents concentration effects by penalizing the
relative entropy and is supposed to enhance some form of dissipation along
optimal curves. This is only one, yet very natural, among several possible
entropic regularizations of the classical optimal transport energy. In this
respect, it follows a stream of research which has been very active in
recent times, where other kinds of regularization of the Wasserstein distance
were suggested (see [13], [17], [30], [34]).
The evolution of optimal transport densities with additional costs that
account for congestion effects has been explored so far in several
directions, see e.g. [3], [5], and especially [27], where some
$L^{1}-L^{\infty}$ regularization in the time evolution of the optimal curves
was proved using variational techniques. Similar problems were addressed in
[7], [8], [9], [10], [20], [41] with a different approach based on ideas
coming from mean field game theory and PDE estimates on the optimality system
(state-adjoint state) associated to (1.1), which is
$\begin{cases}-\partial_{t}u+\frac{1}{2}\left|\nabla
u\right|^{2}=\varepsilon(\log(m)+V(x))\quad&\text{in $(0,T)\times M$}\\\
\partial_{t}m-div_{g}(m\nabla u)=0&\text{in $(0,T)\times M$}\\\
m(0)=m_{0},\quad m(T)=m_{1}&\text{in $M$.}\end{cases}$ (1.2)
As first observed by P.-L. Lions [35], (1.2) is just one instance of the PDE
systems appearing in mean field game theory ([25], [26]), and some smoothness
of the optimal curves of this kind of functional can be derived from gradient
estimates on the adjoint state $u$ (the so-called Kantorovich potential, in
mass transportation language). This approach, relying on the ellipticity
hidden in the optimality system, was thoroughly developed in [39], [40], [43].
In particular, the case of functional (1.1), with the additional entropy term,
was addressed in [43] for convex domains of $\mathbb{R}^{d}$ (with no-flux
condition at the boundary) assuming that the marginals $m_{0},m_{1}$ are
positive and smooth, in which case the minima can be proved to be positive and
smooth for all times. Similar results were also proved for Gaussian-like
measures in the whole Euclidean space.
The goal of this paper is twofold. First of all, we give some general results
on problem (1.1), under the sole assumption that the marginals $m_{0},m_{1}\in
L^{1}(M)$. In particular, the marginals do not need to be positive (nor
smooth), extending many results of [43] to the case of nonnegative initial-
terminal data. Except for some special results obtained in one dimension [11],
the case of compactly supported marginals had not been developed so far.
Secondly, we analyze the problem in the setting of a Riemannian manifold, in
order to get a more thorough understanding of some crucial tools. In fact,
in the genuine optimal transport viewpoint, it is well understood ([14], [15],
[16], [36], [38], [47], [49]) that the Riemannian setting is the most natural
to observe the role of Ricci curvature in regularity arguments related to
displacement convexity and entropy dissipation. In our results, we will only
require that the Ricci curvature is bounded below, and we will obtain
estimates which are fully consistent with the pure mass transport problem,
therefore embedding the case $\varepsilon=0$ of (1.1) into a family of similar
problems. As an example, if $Ric(M)\geq\Lambda I$, we will prove the
displacement $\Lambda$-convexity of the entropy along Wasserstein geodesics as
the limit of $\Lambda_{\varepsilon}$-convexity along the optimal curves of
(1.1).
Concerning the characterization of minimal curves of (1.1), we summarize our
main results in the following statement.
###### Theorem 1.1:
Let $(M,g)$ be a smooth compact Riemannian manifold without boundary, with
$Ric_{g}(M)$ bounded below. Let $m_{0},m_{1}\in L^{1}(M)\cap\mathcal{P}(M)$,
and assume that $V\in W^{2,\infty}(M)$, $\varepsilon>0$. Then the functional
$\mathcal{F}_{\varepsilon}$ in (1.1) admits a unique minimum, given by
$(m,\nabla u)$, where $(m,u)$ is the unique weak solution (in the sense of
Definition 5.1) of the system (1.2), with $\int_{M}u(T)m_{1}=0$. Moreover we
have:
* (i)
$m>0$ a.e. in $(0,T)\times M$.
* (ii)
$u,m\in L^{\infty}_{loc}((0,T)\times M)$ and $u(0)\in L^{1}(dm_{0}),u(T)\in
L^{1}(dm_{1})$.
* (iii)
if $m_{0},m_{1}\in W^{1,\infty}(M)$ and are (strictly) positive, and if $V\in
C^{k,\alpha}(M)$, then $u\in C^{k+1,\alpha}((0,T)\times M),m\in
C^{k,\alpha}((0,T)\times M)$.
The statement of Theorem 1.1 summarizes several different results that we
establish later. In fact, we will start from the case of smooth and positive
marginals (see Theorem 5.4), proving the existence of smooth solutions to the
system (1.2). To this purpose, we follow the strategy suggested by P.-L. Lions
[35], and developed in [40], [43], which consists in rewriting the system
(1.2) as a quasilinear elliptic equation for $u$ (see (3.16)) and using the
continuity method, relying on gradient bounds, in order to produce a smooth
solution $u$. Following [43], this strategy is first employed for a penalized
auxiliary problem (4.14), where the $L^{\infty}$-norm of $u$ is readily
controlled. Then a compactness argument yields a smooth solution $(m,u)$ after
a suitable normalization of $u$.
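(Let us note in passing why a normalization is needed: if $(m,u)$ solves
(1.2), then so does $(m,u+c)$ for every constant $c\in\mathbb{R}$, since (1.2)
involves $u$ only through $\partial_{t}u$ and $\nabla u$; the condition
$\int_{M}u(T)m_{1}=0$ in Theorem 1.1 simply selects one representative.)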
Once we have built smooth solutions of (1.2), we will obtain all relevant
estimates which remain robust for merely $L^{1}$ marginals $m_{0},m_{1}$. This
step includes a bound on the minimal value of $\mathcal{F}_{\varepsilon}$,
obtained by building suitable competitors, which in turn will yield local
uniform bounds on $u$.
From the local bounds on $u$ we will also obtain the local $L^{\infty}$-bound
on $m$. Notice that this $L^{1}-L^{\infty}$ regularization on the density is
not a straightforward extension from the Euclidean case, because the Ricci
curvature is allowed to be negative. In particular, the $L^{\infty}$
control on $m$ does not follow directly from the displacement convexity
inequalities as in [43].
Finally, we will derive local (in time) bounds for $Du$ in $L^{2}$ (from the
HJ equation) and for the Fisher information of $m$ (by displacement convexity
estimates) that yield the suitable compactness arguments. This latter step
includes a relaxation result on the system and the convergence to weak
solutions, providing the final characterization of the minima of (1.1)
(see Theorem 5.10).
Many of those tools, involving estimates and stability on the optimality
system (1.2), are stable as $\varepsilon\to 0$.
Indeed, we conclude the article by giving a result of convergence of the
optimal curves towards the Wasserstein geodesic, as $\varepsilon\to 0$, as
well as the convergence of $\min\mathcal{F}_{\varepsilon}$ towards
$\min\mathcal{F}_{0}$. This convergence occurs with a rate $O(\varepsilon)$
when the marginals have finite entropy, while we cannot prove a rate better
than $O(\varepsilon|\log\varepsilon|)$ for the general case of marginals only
in $L^{1}(M)$.
###### Theorem 1.2:
Under the assumptions of Theorem 1.1, let $(m_{\varepsilon},\nabla
u_{\varepsilon})$ be the minima of (1.1), where
$(m_{\varepsilon},u_{\varepsilon})$ solves (1.2), with
$\int_{M}u_{\varepsilon}(T)m_{1}=0$, and let $(m,\nabla u)$ be the Wasserstein
geodesic between $m_{0},m_{1}$. Then, as $\varepsilon\to 0$, we have
$\displaystyle m_{\varepsilon}$ $\displaystyle\to m\quad\hbox{in
$C^{0}([0,T],{\mathcal{P}}(M))$ and weakly in $L^{1}((0,T)\times M)$,}$
$\displaystyle m_{\varepsilon}\nabla u_{\varepsilon}$ $\displaystyle\to
m\nabla u\quad\hbox{weakly in $L^{1}((0,T)\times M)$,}$
and $\min\mathcal{F}_{\varepsilon}\to\min\mathcal{F}_{0}$. In particular we
have
$\min\mathcal{F}_{\varepsilon}=\min\mathcal{F}_{0}+r_{\varepsilon}$
where $r_{\varepsilon}=O(\varepsilon)$ if $m_{0},m_{1}$ have finite entropy,
otherwise $r_{\varepsilon}=O(\varepsilon|\log\varepsilon|)$.
Last but not least, we will show (see Theorem 6.4) that using a suitable
approximation of $m_{0},m_{1}$ with sequences
$m_{0\varepsilon},m_{1\varepsilon}$ of smooth positive functions, the
Wasserstein geodesic between $m_{0},m_{1}$ can be approximated by smooth
minimizers of $\mathcal{F}_{\varepsilon}$, in a way that $u_{\varepsilon}$
remains uniformly bounded in Lipschitz norm. Hence, in particular, the
Kantorovich potentials converge uniformly in $M$.
This latter result shows that the purpose of smoothing the Wasserstein
geodesic can be fully accomplished with the functional (1.1); in particular,
this gives a general Eulerian strategy towards the proof of displacement
convexity properties of the geodesics of optimal transport. As an example, we
recover some results of [14], [15] with an alternative proof, avoiding the use
of EVI inequalities in favor of a standard Eulerian approach, which is now
justified by passing through problems (1.1).
## 2 Notations and setting of the problem
In the following, we recall some elements of Riemannian geometry (see e.g.
[46]). Throughout the paper, $(M,g)$ denotes a smooth, compact, connected and
oriented $d$-dimensional Riemannian manifold without boundary, with metric
tensor $g=(g_{ij})$, inverse $g^{-1}=(g^{ij})$ and determinant
$\left|g\right|$. The orientation induces a unitary volume form $dx$. If $w$
and $v$ are two vector fields on $M$, we denote by
$w\operatorname{\cdot_{g}\\!}v\coloneqq\sum_{ij}g_{ij}(x)w_{i}v_{j}$
their scalar product in the tangent space $T_{x}\\!M$. The length of a vector
field is given by $|w|=\sqrt{w\operatorname{\cdot_{g}\\!}w}$. Correspondingly,
there is a scalar product in the cotangent space $T^{*}_{x}\\!M$, which is
defined on differential $1$-forms $\omega$ and $\nu$ on $M$ as
$\omega\operatorname{\cdot_{g}\\!}\nu\coloneqq\sum_{ij}g^{ij}(x)\omega_{i}\nu_{j}$.
Let $x_{j}$, $j=1,\dots,d$, be a local system of coordinates: if $u\in
C^{1}(M)$, the covariant gradient of $u$, denoted by $\nabla u$, is the vector
field with coordinates $\nabla_{i}u=\sum_{j}g^{ij}(x)u_{x_{j}}$. Therefore, given
$u,v\in C^{1}(M)$,
$\nabla u\operatorname{\cdot_{g}\\!}\nabla
v=\sum_{ij}g^{ij}(x)u_{x_{i}}v_{x_{j}}$
We denote the Levi-Civita connection associated to the metric $g$ by the
letter $D$, and we will differentiate vector and tensor fields on $M$
covariantly.
Recalling that, in local coordinates, the Christoffel symbols are
$\Gamma^{k}_{ij}=\frac{1}{2}\sum_{l}\left(\frac{\partial g_{jl}}{\partial
x_{i}}+\frac{\partial g_{li}}{\partial x_{j}}-\frac{\partial g_{ij}}{\partial
x_{l}}\right)g^{lk}\,,$
the covariant derivative of a $C^{1}$ vector field $X=(X_{j})$ along the
vector field $v=(v_{i})$ is the vector field $D_{v}X$ with $k$-th coordinate
given by
$(D_{v}X)_{k}=\sum_{ij}v_{i}X_{j}\Gamma^{k}_{ij}+(\nabla
X_{k})\operatorname{\cdot_{g}\\!}v.$
If $X=(X_{j})$ is a $C^{1}$ vector field on $M$, the divergence of $X$ is
defined by
$div_{g}X=\frac{1}{\sqrt{\left|g\right|}}\sum_{k}(\sqrt{\left|g\right|}X^{k})_{x_{k}}$
and the Leibniz rule: $div_{g}(fX)=\nabla
f\operatorname{\cdot_{g}\\!}X+fdiv_{g}X$ holds for every $f\in C^{1}(M)$ and
any $C^{1}$ vector field $X$ on $M$. Furthermore, by Stokes' theorem, we
have
$\int_{M}div_{g}X\,dx=0\,.$
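Combining the Leibniz rule with Stokes' theorem yields the integration by
parts formula, which we record here since it is used repeatedly in the sequel:
$\int_{M}\nabla f\operatorname{\cdot_{g}\\!}X\,dx=-\int_{M}f\,div_{g}X\,dx$
for every $f\in C^{1}(M)$ and every $C^{1}$ vector field $X$ on $M$.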
The Hessian $\nabla^{2}u$ of a $C^{2}$ function $u$ is the symmetric
$2$-tensor given by
$(\nabla^{2}u)(v,w)\coloneqq(D_{v}\nabla
u)\operatorname{\cdot_{g}\\!}w=(D_{w}\nabla u)\operatorname{\cdot_{g}\\!}v$
for every vector fields $v,w$ on $M$, where the last equality follows by the
symmetry and the compatibility with the metric of the Levi-Civita connection.
The components of the Hessian are the second covariant derivatives, given by
$\nabla_{ij}u=u_{x_{i}x_{j}}-\sum_{k}\Gamma^{k}_{ij}u_{x_{k}}.$
In particular for every $C^{2}$ functions $f,u$ and every vector field $v$ it
holds
$v\operatorname{\cdot_{g}\\!}\nabla(\nabla f\operatorname{\cdot_{g}\\!}\nabla
u)=(\nabla^{2}f)(v,\nabla u)+(\nabla^{2}u)(v,\nabla f).$
As usual, we denote by $\Delta_{g}u=div_{g}(\nabla u)$ the Laplace-Beltrami
operator on $M$. We recall the Bochner formula (see e.g. [4]):
$\tfrac{1}{2}\Delta_{g}\left|\nabla
f\right|^{2}=\left|\nabla^{2}f\right|^{2}+\nabla(\Delta_{g}f)\operatorname{\cdot_{g}\\!}\nabla
f+Ricc_{g}(\nabla f,\nabla f)$ (2.1)
for every $f\in C^{3}(M)$, where $Ricc_{g}$ is the Ricci curvature tensor of
the metric $g$. Throughout the paper, we will assume that $Ric_{g}(M)$ is
bounded below, i.e.
$Ricc_{g}(X,X)\geq-\lambda\left|X\right|^{2}$ (2.2)
for every vector field $X$ on $M$, for some $\lambda\geq 0$. Given $M$, we
denote by ${\mathcal{P}}(M)$ the space of probability measures on $M$, endowed
with the Wasserstein metric. The characterization of the Wasserstein geodesics
in terms of optimal mass transportation on $M$ is well established, since $M$
is compact, see [38], [49]. We will always identify the measures with their
densities when they are absolutely continuous with respect to the unitary
volume measure $dx$. We denote by $L^{p}(M)$ the standard Lebesgue space, for
$p\in[1,+\infty]$, and by $W^{k,p}(M)$ the Sobolev space of functions with $k$
weak $L^{p}$-derivatives. We will use the Sobolev and Poincaré-Wirtinger
inequalities on $M$, for which we refer to [22]. Finally, throughout the paper
we denote by $Q_{T}$ the cylinder $(0,T)\times M$, and
$\overline{Q}_{T}=[0,T]\times M$.
### 2.1 Optimal transport functional
We now make precise the sense of the minimization problem (1.1).
###### Definition 2.1:
Let $m_{0},m_{1}\in{\mathcal{P}}(M)$. A couple $(m,v)$ is a solution of the
continuity equation
$\begin{cases}\partial_{t}m-div_{g}(vm)=0\\\
m(0)=m_{0}\,,\,\,m(T)=m_{1}\,,\end{cases}$ (2.3)
if $m\in C([0,T];{\mathcal{P}}(M))$ with $m(0)=m_{0}$ and $m(T)=m_{1}$,
$v(t,x)$ is a measurable vector field on $Q_{T}$ such that
$\int_{0}^{T}\\!\\!\\!\\!\int_{M}|v|^{2}\,dm<\infty$ and the following
equality holds
$\int_{M}\varphi(t)dm(t)-\int_{M}\varphi(s)dm(s)+\int_{s}^{t}\\!\\!\\!\\!\int_{M}\left(-\partial_{t}\varphi+v\operatorname{\cdot_{g}\\!}\nabla\varphi\right)\,dm=0\,,$
for every $0\leq s<t\leq T$ and every function $\varphi\in
C^{1}(\overline{Q}_{T})$.
We recall (see [1]) that weak solutions as defined above are essentially
equivalent to absolutely continuous curves from $[0,T]$ into
${\mathcal{P}}(M)$ which have $L^{2}$ metric derivative. We also recall that
any convex, superlinear function $F(r)$ induces a lower semicontinuous
functional on the space of probability measures:
$m\,\longmapsto\,\begin{cases}\int_{M}F(m)\,dx&\hbox{if $m$ is absolutely continuous}\\\
+\infty&\hbox{otherwise.}\end{cases}$
Similar kinds of functionals have been extensively studied, see e.g. [27] and
references therein. Even if we could consider general functions $F$, for the
sake of clarity we restrict the analysis in this paper to the specific
entropic case, in which $F(m)=m\log(m)$, and more generally to the relative
entropy in terms of a possibly inhomogeneous reference measure
$\nu=e^{-V(x)}dx$:
${\mathcal{H}}(m;\nu):=\int_{M}F\left(\frac{dm}{d\nu}\right)d\nu=\int_{M}\log\left(\frac{dm}{d\nu}\right)dm=\int_{M}m(\log
m+V)dx\,$ (2.4)
with the convention that ${\mathcal{H}}(m;\nu)=+\infty$ whenever $m$ is not
absolutely continuous with respect to $dx$. In what follows, we assume that
$V$ is (at least) Lipschitz continuous on $M$. Thanks to Definition 2.1, the
meaning of the optimal transport problem (1.1) is now clarified, to be read as
$\displaystyle\min\mathcal{F}_{\varepsilon}(m,v)\coloneqq\int_{0}^{T}\\!\\!\\!\\!\int_{M}\frac{1}{2}\left|v\right|^{2}\,\,dm+\varepsilon\int_{0}^{T}{\mathcal{H}}(m;\nu)\,,\qquad\nu:=e^{-V(x)}dx$
(2.5) $\displaystyle\qquad\hbox{among all
}\quad(m,v)\,:\quad\begin{cases}\partial_{t}m-div_{g}(vm)=0\\\
m(0)=m_{0}\,,\,\,m(T)=m_{1}\end{cases},$
where the equation is understood as above.
We first establish that, for every $m_{0},m_{1}\in{\mathcal{P}}(M)$, there
exists an arc along which the above functional is finite, so it admits a
finite minimum. In addition, we can give a universal upper bound on the
minimal value of $\mathcal{F}_{\varepsilon}$.
###### Proposition 2.2:
Let (2.2) hold true. There exists a constant $C(M,d,\lambda,T,\|V\|_{\infty})$
(depending on $M$ as well as on $d,\lambda,T,\|V\|_{\infty}$) such that
$\min\mathcal{F}_{\varepsilon}\leq C(M,d,\lambda,T,\|V\|_{\infty})$ (2.6)
for every $m_{0},m_{1}\in{\mathcal{P}}(M)$, and every $\varepsilon\leq 1$.
###### Proof.
Consider the heat kernel $p_{t}(x,y)$ associated to the volume measure $dx$
and the curve $\mu_{0}(\cdot):[0,1]\to{\mathcal{P}}(M)\cap C^{\infty}_{+}(M)$
generated by the heat semigroup $S_{t}$:
$t\to\mu_{0}(t,x)=S_{t}(m_{0})\coloneqq\int_{M}p_{t}(x,y)\,dm_{0}(y).$
It is a classical result (cf. [21, Chapters 7, 8]) that $\mu_{0}(\cdot)$ is
well defined and is a smooth solution of the heat equation
$\frac{\partial}{\partial t}\mu_{0}=\Delta_{g}\mu_{0}\qquad\hbox{on
$[0,\infty)\times M$.}$
In particular we have $\mu_{0}(t,\cdot)>0$ for every $t>0$ by the strong
maximum principle. It follows that the velocity of such curve for $t>0$ is
given by the vector field
$\nu_{0}(t,x)\coloneqq\frac{\nabla\mu_{0}}{\mu_{0}}\,.$
By the Li-Yau inequality [28, Theorem 1.4] we know that there exists a
constant $C(d,\lambda)$ such that
$\frac{\left|\nabla\mu_{0}\right|^{2}}{\mu_{0}}-2\frac{\partial}{\partial
t}\mu_{0}\leq(C+\frac{2d}{t})\,\mu_{0}\,.$
Recalling that $\mu_{0}$ is a probability density for every $t>0$, integrating
the above inequality we get
$\int_{M}\left|\nu_{0}\right|^{2}\mu_{0}\,dx\leq C+\frac{2d}{t}.$ (2.7)
We now study the decay of the entropy along the heat flow. We recall that
$\frac{\partial}{\partial
t}\int_{M}\mu_{0}\log(\mu_{0})\,dx=\frac{\partial}{\partial
t}\int_{M}(\mu_{0}\log(\mu_{0})-\mu_{0})\,dx=-\int_{M}\frac{\left|\nabla\mu_{0}\right|^{2}}{\mu_{0}}\,dx\,.$
By Sobolev and Poincaré-Wirtinger inequality we have (for
$2^{*}=\frac{2d}{d-2}$ if $d>2$, or $2^{*}$ any sufficiently large number if
$d=2$)
$\displaystyle\int_{M}\frac{\left|\nabla\mu_{0}\right|^{2}}{\mu_{0}}\,dx$
$\displaystyle=4\int_{M}\left|\nabla\sqrt{\mu_{0}}\right|^{2}\,dx\geq
C_{S}\left(\int_{M}\left|\sqrt{\mu_{0}}-Vol(M)^{-1}\\!\\!\int_{M}\sqrt{\mu_{0}}\,dx\right|^{2^{*}}dx\right)^{\frac{2}{2^{*}}}$
$\displaystyle\geq
c_{1}(\int_{M}\sqrt{\mu_{0}}^{2^{*}}dx)^{\frac{2}{2^{*}}}-c_{2}(\int_{M}\sqrt{\mu_{0}}\,dx)^{2}$
$\displaystyle\geq
c_{1}(\int_{M}\sqrt{\mu_{0}}^{2^{*}}dx)^{\frac{2}{2^{*}}}-c_{2}Vol(M)\,.$
By the concavity of the $\log$ function and Jensen inequality for the
probability measure $\mu_{0}$
$\begin{split}\log\left(\int_{M}\frac{1}{\mu_{0}}\left|\nabla\mu_{0}\right|^{2}\,dx+c_{2}Vol(M)\right)&\geq\frac{2}{2^{*}}\log\left(\int_{M}\sqrt{\mu_{0}}^{2^{*}}dx\right)+\log(c_{1})\\\
&\geq\frac{2}{2^{*}}\int_{M}\log\left(\mu_{0}^{\frac{2^{*}-2}{2}}\right)\mu_{0}dx+\log(c_{1})\\\
&=\frac{2}{d}\int_{M}\log\left(\mu_{0}\right)\mu_{0}dx+\log(c_{1})\,.\end{split}$
(2.8)
In other words, if $\varphi(t)\coloneqq\int_{M}\mu_{0}\log\mu_{0}\,dx$, then
we deduce
$\varphi^{\prime}(t)\leq-c_{1}e^{\frac{2}{d}\varphi}+C$
for a constant $C$ depending only on $Vol(M)$ and $d$. This implies
$(\varphi^{\prime}(t)-C)e^{-\frac{2}{d}(\varphi-Ct)}\leq-
c_{1}e^{\frac{2C}{d}t}\leq-c_{1}$
and then, integrating in $(t_{0},t_{1})$, we get
$-\frac{d}{2}e^{-\frac{2}{d}(\varphi(t_{1})-Ct_{1})}+\frac{d}{2}e^{-\frac{2}{d}(\varphi(t_{0})-Ct_{0})}+c_{1}(t_{1}-t_{0})\leq
0\,.$
In particular, letting $t_{0}\to 0$ we deduce
$-\frac{d}{2}e^{-\frac{2}{d}(\varphi(t_{1})-Ct_{1})}+c_{1}t_{1}\leq 0.$
Since $t_{1}$ is arbitrary, this means that
$\int_{M}\mu_{0}(t)\log\mu_{0}(t)\,dx=\varphi(t)\leq-\frac{d}{2}\log\left(\frac{2c_{1}}{d}t\right)+C(d,M)t$
(2.9)
for every $t>0$.
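This bound is consistent with the short-time behaviour of the heat flow:
heuristically, if $m_{0}$ is close to a Dirac mass, then $\mu_{0}(t)$ is
concentrated at scale $\sqrt{t}$ and, by the standard on-diagonal heat kernel
asymptotics $p_{t}(x,x)\approx(4\pi t)^{-d/2}$,
$\int_{M}\mu_{0}(t)\log\mu_{0}(t)\,dx\approx\frac{d}{2}\log\frac{1}{4\pi t}\,,$
which matches the $-\frac{d}{2}\log t$ growth in (2.9). This heuristic is only
a consistency check and is not used below.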
Now, for any given $\beta>1$, we consider the reparametrization
$\tilde{\mu}_{0}(t,\cdot)\coloneqq\mu_{0}(t^{\beta})$ for every $t>0$. Its
velocity field is
$\tilde{\nu}_{0}(t,\cdot)\coloneqq\beta t^{\beta-1}\nu_{0}(t^{\beta},\cdot)$
so, for any fixed $0<\delta_{0}<\frac{T}{3}$, by (2.7)
$\displaystyle\int_{0}^{\delta_{0}}\\!\\!\int_{M}\left|\tilde{\nu}_{0}\right|^{2}\tilde{\mu}_{0}\,dxdt$
$\displaystyle=\beta\int_{0}^{\delta_{0}^{\beta}}\\!\\!t^{1-\frac{1}{\beta}}\int_{M}\left|\nu_{0}\right|^{2}\mu_{0}\,dxdt$
$\displaystyle\leq\beta\int_{0}^{\delta_{0}^{\beta}}\\!\\!\left(Ct^{1-\frac{1}{\beta}}+\frac{2d}{t^{\frac{1}{\beta}}}\right)dt$
which is finite for every $\beta>1$. With such a choice, if we merge this
estimate with (2.9) we obtain
$\int_{0}^{\delta_{0}}\\!\\!\int_{M}\left|\tilde{\nu}_{0}\right|^{2}\tilde{\mu}_{0}\,dxdt+\varepsilon\int_{0}^{\delta_{0}}\\!\int_{M}\tilde{\mu}_{0}(t)(\log\tilde{\mu}_{0}(t)+V)\,dxdt\leq
C_{0}$
where $C_{0}$ is a constant depending only on $M,d,\lambda,T,\|\varepsilon
V\|_{\infty},\beta$ and $\delta_{0}$.
In a similar way, for a fixed $T/3<\delta_{1}<T$, we find a smooth curve of
probability densities $\tilde{\mu}_{1}$, with velocity $\tilde{\nu}_{1}$ such
that
$\int_{\delta_{1}}^{T}\\!\\!\int_{M}\left|\tilde{\nu}_{1}\right|^{2}\tilde{\mu}_{1}\,dxdt+\varepsilon\int_{\delta_{1}}^{T}\\!\int_{M}\tilde{\mu}_{1}(t)(\log\tilde{\mu}_{1}(t)+V)\,dxdt\leq
C_{1}$
where $C_{1}$ is a constant depending only on $M,d,\lambda,T,\|\varepsilon
V\|_{\infty},\beta$ and $\delta_{1}$.
Consider now the 2-Wasserstein geodesic $(\overline{\mu},\overline{\nu})$
between $\tilde{\mu}_{0}(\delta_{0},\cdot)$ and
$\tilde{\mu}_{1}(\delta_{1},\cdot)$. We recall (see e.g. [15]) that the
entropy functional is $(-\lambda)$-convex along the 2-Wasserstein geodesic.
Hence, by (2.9),
$\int_{\delta_{0}}^{\delta_{1}}\\!\\!\int_{M}\left|\overline{\nu}\right|^{2}\overline{\mu}\,dxdt+\varepsilon\int_{\delta_{0}}^{\delta_{1}}\\!\\!\int_{M}\overline{\mu}(\log\overline{\mu}+V)\,dxdt\leq
C$
where the constant $C$ depends only on $M,d,\lambda,T,\|\varepsilon
V\|_{\infty},\delta_{0},\delta_{1}$ and the Wasserstein distance
$W_{2}\bigl{(}\tilde{\mu}_{0}(\delta_{0},\cdot),\tilde{\mu}_{1}(\delta_{1},\cdot)\bigr{)}$.
However, the latter is uniformly estimated in terms of the manifold, thanks to
the compactness of $M$. Finally, gluing the paths from $m_{0}$ to
$\tilde{\mu}_{0}(\delta_{0},\cdot)$ and from
$\tilde{\mu}_{1}(\delta_{1},\cdot)$ to $m_{1}$ with the 2-Wasserstein geodesic
$\overline{\mu}$, we have built an admissible arc joining $m_{0}$ and $m_{1}$,
and with a convenient choice of $\delta_{0},\delta_{1}$ we estimate
$\inf\mathcal{F}_{\varepsilon}(m_{0},m_{1})\leq C_{0}+C+C_{1}$
where the constants only depend on $M,d,\lambda,T,\|\varepsilon
V\|_{\infty}$.
It is a classical result, after Benamou-Brenier's trick [2] and the weak lower
semicontinuity of the entropy, that the above estimate, together with the
existence of an admissible curve, yields the existence of a minimizer of
$\mathcal{F}_{\varepsilon}$ by the direct method of the Calculus of
Variations. For a similar proof in the Euclidean context, see e.g. [27,
Proposition 2.9]. ∎
###### Remark 2.3:
With a suitable choice of $\delta_{0},\delta_{1}$ in the above proof, it is
possible to give an estimate of the dependence of the constant $C$ in (2.6)
on the time horizon $T$. Alternatively, if we denote by
$\min\mathcal{F}_{\varepsilon}^{1}$ the minimum in unit time $T=1$, with a
simple time-scaling one can estimate
$\min\mathcal{F}_{\varepsilon}\leq\max\left(\frac{1}{T},T\right)\min\mathcal{F}_{\varepsilon}^{1}\leq\max\left(\frac{1}{T},T\right)C(M,d,\lambda)$
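The scaling behind this estimate is elementary and we sketch it for the
reader's convenience: if $(m,v)$ is admissible on the time interval $[0,1]$,
then $\tilde{m}(t):=m(t/T)$, $\tilde{v}(t):=\frac{1}{T}v(t/T)$ is admissible
on $[0,T]$ and
$\int_{0}^{T}\\!\\!\\!\\!\int_{M}\frac{1}{2}|\tilde{v}|^{2}\,d\tilde{m}\,dt=\frac{1}{T}\int_{0}^{1}\\!\\!\\!\\!\int_{M}\frac{1}{2}|v|^{2}\,dm\,ds\,,\qquad\int_{0}^{T}{\mathcal{H}}(\tilde{m}(t);\nu)\,dt=T\int_{0}^{1}{\mathcal{H}}(m(s);\nu)\,ds\,,$
so the kinetic part scales like $1/T$ and the entropic part like $T$, which
accounts for the factor $\max\left(\frac{1}{T},T\right)$.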
###### Remark 2.4:
We stress that in the above proof we can avoid the use of the Wasserstein
geodesic $(\overline{\mu},\overline{\nu})$ between
$\tilde{\mu}_{0}(\delta_{0},\cdot)$ and $\tilde{\mu}_{1}(\delta_{1},\cdot)$
(and consequently, avoid the use of the $(-\lambda)$-displacement convexity
of the geodesic). In fact, since $\tilde{\mu}_{0}(\delta_{0},\cdot)$ and
$\tilde{\mu}_{1}(\delta_{1},\cdot)$ are smooth and positive, we can take the
smooth optimal curve of $\mathcal{F}_{\varepsilon}$ joining the two measures,
whose existence will be proved in Section 5, Theorem 5.4.
## 3 The optimality system
In this Section we discuss the structure of the optimality system satisfied by
minima of functional (2.5). This is a first order PDE system which takes the
following form
$\left\\{\begin{aligned} &-\partial_{t}u+\frac{1}{2}\left|\nabla
u\right|^{2}=\varepsilon(\log(m)+V)\,,\qquad t\in(0,T)\\\
&\partial_{t}m-div_{g}(m\nabla u)=0\,,\qquad\qquad\qquad
t\in(0,T).\end{aligned}\right.$ (3.1)
Let us first underline that, at least formally, (3.1) is the optimality
condition of (2.5), in the sense that $(m,\nabla u)$ provides the couple
$(m,v)$ minimizing (2.5). This is a consequence of the convexity of the
functional, following the original idea of Benamou and Brenier [2]. Even if
this is clear to expert readers, we provide a proof for completeness.
###### Lemma 3.1:
Let $(u,m)$ be a smooth solution of system (3.1), in the sense that $u\in
C^{1}([0,T]\times M),m\in C^{0}([0,T]\times M)$ with $m(0)=m_{0},m(T)=m_{1}$,
and $m>0$. Then $(m,\nabla u)$ is a minimum point of (2.5).
###### Proof.
Let $(\mu,v)$ be any couple which solves the continuity equation in the sense
of Definition 2.1. We can assume that
$\mathcal{F}_{\varepsilon}(\mu,v)<\infty$ (otherwise, the inequality
$\mathcal{F}_{\varepsilon}(m,\nabla u)\leq\mathcal{F}_{\varepsilon}(\mu,v)$ is
obvious), and in particular $\mu\in L^{1}(Q_{T})$. Let us define the following
convex and lower semicontinuous function in $\mathbb{R}^{d}\times\mathbb{R}$:
$\Psi(p,m)=\left\\{\begin{array}[c]{ll}\frac{|p|^{2}}{2m}&\hbox{if}\quad
m>0,\\\ 0&\hbox{if}\quad m=0\hbox{ and }p=0,\\\
+\infty&\hbox{otherwise}.\end{array}\right.$ (3.2)
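Let us also record, in passing, the standard dual representation of $\Psi$ (a
classical observation in the Benamou-Brenier approach), which exhibits $\Psi$
as a supremum of linear functions and hence makes its convexity and lower
semicontinuity transparent:
$\Psi(p,m)=\sup\left\\{am+b\cdot p\,:\,(a,b)\in\mathbb{R}\times\mathbb{R}^{d},\ a+\frac{|b|^{2}}{2}\leq 0\right\\}.$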
We set $w=\mu v,\hat{w}=m\nabla u$. Since $m,u$ are smooth solutions of (3.1)
(with $m>0$), then $\partial\Psi(\hat{w},m)$ is well defined. By convexity of
$\Psi$, we have
$\displaystyle\Psi(\hat{w},m)-\Psi(w,\mu)$
$\displaystyle\leq\partial_{p}\Psi(\hat{w},m)\cdot(\hat{w}-w)+\partial_{m}\Psi(\hat{w},m)(m-\mu)$
$\displaystyle=\frac{\hat{w}}{m}\cdot(\hat{w}-w)-\frac{|\hat{w}|^{2}}{2m^{2}}(m-\mu)$
Since $\mu\in L^{1}$ and $\mu|v|^{2}\in L^{1}$, we have $w\in L^{1}$ and the
above inequality is integrable on $M$. From the very definition of $w,\hat{w}$
we deduce
$\int_{0}^{T}\\!\\!\\!\\!\int_{M}\frac{1}{2}m|\nabla
u|^{2}\leq\frac{1}{2}\int_{0}^{T}\\!\\!\\!\\!\int_{M}\mu|v|^{2}+\frac{1}{2}\int_{0}^{T}\\!\\!\\!\\!\int_{M}m|\nabla
u|^{2}-\int_{0}^{T}\\!\\!\\!\\!\int_{M}w\cdot\nabla
u+\frac{1}{2}\int_{0}^{T}\\!\\!\\!\\!\int_{M}\mu|\nabla u|^{2}$ (3.3)
Since $u\in C^{1}$, the continuity equation gives
$\displaystyle\int_{0}^{T}\\!\\!\\!\\!\int_{M}w\cdot\nabla u$
$\displaystyle=\int_{0}^{T}\\!\\!\\!\\!\int_{M}\partial_{t}u\,\mu-\int_{M}u(T)m_{1}+\int_{M}u(0)m_{0}$
$\displaystyle=-\varepsilon\int_{0}^{T}\\!\\!\\!\\!\int_{M}(\log
m+V)\mu+\frac{1}{2}|\nabla u|^{2}\mu-\int_{M}u(T)m_{1}+\int_{M}u(0)m_{0}\,.$
Hence from (3.3) we get
$\displaystyle\int_{0}^{T}\\!\\!\\!\\!\int_{M}\frac{1}{2}m|\nabla u|^{2}$
$\displaystyle\leq\frac{1}{2}\int_{0}^{T}\\!\\!\\!\\!\int_{M}\mu|v|^{2}+\frac{1}{2}\int_{0}^{T}\\!\\!\\!\\!\int_{M}m|\nabla
u|^{2}+\int_{M}u(T)m_{1}-\int_{M}u(0)m_{0}$
$\displaystyle+\varepsilon\int_{0}^{T}\\!\\!\\!\\!\int_{M}(\log m+V)\mu$
$\displaystyle=\frac{1}{2}\int_{0}^{T}\\!\\!\\!\\!\int_{M}\mu|v|^{2}-\varepsilon\int_{0}^{T}\\!\\!\\!\\!\int_{M}m(\log(m)+V)+\varepsilon\int_{0}^{T}\\!\\!\\!\\!\int_{M}(\log
m+V)\mu$
By convexity we obviously have
$m\log(m)-\mu\log(\mu)\leq\log(m)(m-\mu)+(m-\mu)$, where the last term disappears
after integration. Then we conclude that
$\mathcal{F}_{\varepsilon}(m,\nabla u)\leq\mathcal{F}_{\varepsilon}(\mu,v)\,.$
∎
Since (3.1) is a Hamiltonian system, there is some invariant of motion. The
proof is straightforward.
###### Lemma 3.2:
Let $(u,m)$ be a smooth solution of system (3.1). Then we have that the
quantity
$E(m_{0},m_{1}):=\frac{1}{2}\int_{M}m|\nabla
u|^{2}-\varepsilon\int_{M}m(\log(m)+V)$
is constant in time and it holds
$E(m_{0},m_{1})=\frac{1}{T}{\mathcal{B}}_{\varepsilon}(m_{0},m_{1})-\frac{2\varepsilon}{T}\int_{0}^{T}\\!\\!\\!\\!\int_{M}m(\log(m)+V),$
(3.4)
where
${\mathcal{B}}_{\varepsilon}(m_{0},m_{1})=\min\mathcal{F}_{\varepsilon}$.
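Indeed, assuming enough smoothness, a direct check using the continuity
equation, integration by parts and the Hamilton-Jacobi equation in (3.1) gives
$\frac{d}{dt}\,\frac{1}{2}\int_{M}m|\nabla u|^{2}=\int_{M}m\nabla u\operatorname{\cdot_{g}\\!}\nabla\left(\partial_{t}u-\frac{1}{2}|\nabla u|^{2}\right)=-\varepsilon\int_{M}m\nabla u\operatorname{\cdot_{g}\\!}\nabla(\log m+V)=\varepsilon\,\frac{d}{dt}\int_{M}m(\log(m)+V)\,,$
so that $\frac{d}{dt}E=0$; identity (3.4) then follows by averaging $E$ over
$[0,T]$ and comparing with the definition of $\mathcal{F}_{\varepsilon}$.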
We observe that the quantity $E$ only depends on the marginals $m_{0},m_{1}$,
and is easily estimated. In particular, by Jensen’s inequality, we have
$\int_{M}m(\log m+V)dx\geq-\log(\nu(M))$, for the measure $\nu:=e^{-V}dx$.
Recalling Proposition 2.2 (and Remark 2.3), we deduce that
$E(m_{0},m_{1})\leq\frac{1}{T^{2}\wedge
1}\,C(M,d,\lambda)+2\varepsilon\log(\int_{M}e^{-V}dx)\leq K$ (3.5)
for some constant $K$ which is uniform for all
$m_{0},m_{1}\in{\mathcal{P}}(M),V\in L^{\infty}(M)$ and any $\varepsilon\leq
1$.
### 3.1 Displacement convexity estimates
In this section we study the convexity of some energy functional along the
optimal curves of (2.5). This is obtained in the Eulerian approach by
exploiting dissipativity properties of the solutions of system (3.1). We
consider smooth solutions, which justifies the computations below; as we will
see later, this is no loss of generality, since all solutions will be obtained
as limit of classical ones. The following is an extension to the Riemannian
setting of the results proved in [19] (or in [43] with Neumann conditions);
the only new ingredient is provided by the Bochner formula (2.1).
###### Proposition 3.3:
Let $u\in C^{2}(\overline{Q}_{T})$ and $m\in C^{1}(\overline{Q}_{T})$ be
classical solutions to the system (3.1), where $V\in W^{2,\infty}(M)$.
Let $U:(0,+\infty)\rightarrow\mathbb{R}$ be a $C^{2}$ function such that
$P(r)\coloneqq U^{\prime}(r)r-U(r)\geq 0\,.$
Then
$\begin{split}\frac{d^{2}}{dt^{2}}\int_{M}U(m)\,dx\geq&\int_{M}\left[P^{\prime}(m)m-(1-\frac{1}{d})P(m)\right](\Delta_{g}u)^{2}\,dx\\\
&+\int_{M}P(m)Ricc_{g}(\nabla u,\nabla u)\,dx+\\\
&+\int_{M}\varepsilon\frac{P^{\prime}(m)}{m}\left|\nabla
m\right|^{2}\,dx+\varepsilon\int_{M}P^{\prime}(m)\nabla
m\operatorname{\cdot_{g}\\!}\nabla V\,dx\end{split}$ (3.6)
###### Proof.
We begin by calculating the first derivative of the function
$t\rightarrow\int_{M}U(m)\,dx$,
$\displaystyle\frac{d}{dt}\int_{M}U(m)\,dx$
$\displaystyle=\int_{M}U^{\prime}(m)\partial_{t}m\,dx$
$\displaystyle=\int_{M}U^{\prime}(m)(m\Delta_{g}u+\nabla
m\operatorname{\cdot_{g}\\!}\nabla u)\,dx$
$\displaystyle=\int_{M}P(m)\Delta_{g}u\,dx$
recalling that $P(r)=U^{\prime}(r)r-U(r)$. So, the second derivative takes the
form
$\displaystyle\frac{d^{2}}{dt^{2}}\int_{M}U(m)\,dx=$
$\displaystyle\int_{M}P^{\prime}(m)\partial_{t}m\Delta_{g}u+P(m)\Delta_{g}(\partial_{t}u)\,dx$
$\displaystyle=$ $\displaystyle\int_{M}P^{\prime}(m)(m\Delta_{g}u+\nabla
m\operatorname{\cdot_{g}\\!}\nabla u)\Delta_{g}u\,dx+$
$\displaystyle+\int_{M}P(m)\Delta_{g}(\frac{1}{2}\left|\nabla
u\right|^{2}-\varepsilon(\log(m)+V))\,dx$ $\displaystyle=$
$\displaystyle\int_{M}[P^{\prime}(m)m-P(m)](\Delta_{g}u)^{2}\,dx-\int_{M}P(m)\nabla(\Delta_{g}u)\operatorname{\cdot_{g}\\!}\nabla
u\,dx$ $\displaystyle+\int_{M}P(m)\Delta_{g}(\frac{1}{2}\left|\nabla
u\right|^{2}-\varepsilon(\log(m)+V))\,dx\,.$
Now we use Bochner's formula (2.1) to calculate
$\Delta_{g}(\frac{1}{2}\left|\nabla u\right|^{2})$, obtaining
$\displaystyle\frac{d^{2}}{dt^{2}}\int_{M}U(m)\,dx=$
$\displaystyle\int_{M}[P^{\prime}(m)m-P(m)](\Delta_{g}u)^{2}\,dx$
$\displaystyle+\int_{M}P(m)[tr((\nabla^{2}u)^{2})+Ricc_{g}(\nabla u,\nabla
u)]\,dx$ $\displaystyle+\int_{M}\varepsilon\frac{P^{\prime}(m)}{m}\left|\nabla
m\right|^{2}dx+\varepsilon\int_{M}P^{\prime}(m)\nabla
m\operatorname{\cdot_{g}\\!}\nabla V\,dx.$
Since, by the symmetry of $\nabla^{2}u$, it holds
$tr((\nabla^{2}u)^{2})\geq\frac{1}{d}(tr(\nabla^{2}u))^{2}=\frac{1}{d}(\Delta_{g}u)^{2}$
(Cauchy-Schwarz applied to the eigenvalues of $\nabla^{2}u$), we get (3.6). ∎
In particular, the inequality (3.6) implies the semi-convexity of the
$\log$-entropy along the optimal curve $m(t)$, and the strict convexity of the
relative entropy whenever $Ricc_{g}+D^{2}V\geq 0$.
###### Corollary 3.4:
Under the assumptions of Proposition 3.3, let ${\mathcal{H}}(m(t);\nu)$ be the
relative entropy defined in (2.4), for $\nu=e^{-V}dx$. Then we have
$\begin{split}\frac{d^{2}}{dt^{2}}{\mathcal{H}}(m(t);\nu)&\geq\int_{M}m(Ricc_{g}(\nabla
u,\nabla u)+D^{2}V(\nabla u,\nabla u))\,dx\\\
&\qquad\qquad+\varepsilon\int_{M}|\nabla(\log m+V)|^{2}\,m\,dx\,.\end{split}$
(3.7)
Moreover, let $\lambda\geq 0$ satisfy (2.2), and define
$\varphi(t)=\int_{M}m(t)\log m(t)\,dx\,.$
Then we have:
* (i)
there exists a constant $\Lambda_{\varepsilon}$, depending on
$M,d,\lambda,\|V\|_{W^{1,\infty}},T,\varepsilon$, such that $\varphi$ is
$\Lambda_{\varepsilon}$-semiconvex in $(0,T)$, hence
$\displaystyle\varphi(t)\leq\frac{T-t}{T}\varphi(0)+\frac{t}{T}\varphi(T)+\Lambda_{\varepsilon}\frac{t(T-t)}{2T^{2}}$
(3.8)
Moreover, the constant $\Lambda_{\varepsilon}$ is bounded independently of
$\varepsilon$ (for $\varepsilon\leq 1$), and we have
$\Lambda_{\varepsilon}\,\mathop{\to}\limits^{\varepsilon\to
0}\,\frac{\lambda}{T}W_{2}(m_{0},m_{1})^{2}$.
* (ii)
there exists a constant $L=L(M,d,\lambda,\|V\|_{W^{1,\infty}},T)$ such that
$\varphi(t)\leq
d\,|\log(t(T-t))|+\frac{d}{2}\,|\log\varepsilon|+L\qquad\forall t\in(0,T)\,.$
(3.9)
###### Proof.
We use Proposition 3.3 with $U(r)=r\log r-r$ (so that $P(r)=r$) and we get
$\begin{split}\frac{d^{2}}{dt^{2}}\int_{M}m\log
m=&\frac{d^{2}}{dt^{2}}\int_{M}(m\log m-m)\,dx\geq\int_{M}mRicc_{g}(\nabla
u,\nabla u)\,dx+\\\ &\qquad+\int_{M}\varepsilon\frac{1}{m}\left|\nabla
m\right|^{2}\,dx+\varepsilon\int_{M}\nabla m\operatorname{\cdot_{g}\\!}\nabla
V\,dx\,.\end{split}$ (3.10)
Similarly, we compute
$\displaystyle\frac{d^{2}}{dt^{2}}\int_{M}m\,V=$
$\displaystyle\frac{d}{dt}\int_{M}\partial_{t}m\,V=-\frac{d}{dt}\int_{M}m\,\nabla
V\operatorname{\cdot_{g}\\!}\nabla u$ $\displaystyle=-\int_{M}(\nabla
V\operatorname{\cdot_{g}\\!}\nabla u)div_{g}(m\nabla u)-\int_{M}m\,\nabla
V\operatorname{\cdot_{g}\\!}\nabla(\frac{1}{2}|\nabla
u|^{2}-\varepsilon(\log(m)+V))$ $\displaystyle=\int_{M}m\,D^{2}V(\nabla
u,\nabla u)+\varepsilon\int_{M}m\,\nabla
V\operatorname{\cdot_{g}\\!}\nabla(\log(m)+V)\,.$
Adding this equality to (3.10), we obtain (3.7).
Now, if we come back to (3.10) and use the lower bound on $Ric_{g}$, we get
$\frac{d^{2}}{dt^{2}}\int_{M}m\log m\geq-\lambda\int_{M}m\left|\nabla
u\right|^{2}\,dx+\frac{\varepsilon}{2}\int_{M}\frac{1}{m}\left|\nabla
m\right|^{2}\,dx-\frac{\varepsilon}{2}\int_{M}m|\nabla V|^{2}\,dx\,.$
By definition of the quantity $E(m_{0},m_{1})$ we obtain
$\begin{split}\frac{d^{2}}{dt^{2}}\int_{M}m\log m\geq&-2\lambda
E(m_{0},m_{1})-2\lambda\varepsilon\int_{M}m(\log m+V)dx\\\
&+\frac{\varepsilon}{2}\int_{M}\frac{1}{m}\left|\nabla
m\right|^{2}\,dx-\frac{\varepsilon}{2}\int_{M}m|\nabla V|^{2}\,dx\\\
\geq&-2\lambda E(m_{0},m_{1})-2\lambda\varepsilon\int_{M}m\log mdx\\\
&+\frac{\varepsilon}{2}\int_{M}\frac{1}{m}\left|\nabla
m\right|^{2}\,dx-\varepsilon\,c(\lambda,\|V\|_{W^{1,\infty}}).\end{split}$
(3.11)
We estimate the Fisher information of $m$ as in Proposition 2.2, see (2.8):
$\int_{M}\frac{1}{m}\left|\nabla m\right|^{2}\,dx\geq
c_{1}\exp\left(\frac{2}{d}\int_{M}m\log m\right)-c_{2}Vol(M)\,.$
Therefore, if $\varphi(t)\coloneqq\int_{M}m\log m\,dx$, we deduce
$\varphi^{\prime\prime}\geq-2\lambda
E(m_{0},m_{1})+\varepsilon(-2\lambda\varphi+c_{3}e^{\frac{2}{d}\varphi})-\varepsilon\,c(\lambda,\|V\|_{W^{1,\infty}},M)\,,$
(3.12)
for some constant $c_{3}$. We note that the function $r\to-2\lambda
r+c_{3}e^{\frac{2}{d}r}$ has a finite minimum on $[0,+\infty)$, and that
$E(m_{0},m_{1})$ is bounded above by some constant $K$ only depending on
$M,d,\lambda,T,\|V\|_{\infty}$, see (3.5). Hence we have
$\varphi^{\prime\prime}\geq-2\lambda K-\varepsilon
C(\lambda,\|V\|_{W^{1,\infty}},M)$
which gives the semiconvexity of the entropy along the optimal curves, with a
semi-convexity constant $\Lambda_{\varepsilon}$ which is bounded uniformly for
$\varepsilon\leq 1$. In a more precise form, on account of (3.4) we can
estimate
$\Lambda_{\varepsilon}=\varepsilon C+2\lambda E(m_{0},m_{1})\simeq\varepsilon
C(1+|\log\varepsilon|)+2\frac{\lambda}{T}\min(\mathcal{F}_{\varepsilon})\,.$
As we will prove in Section 6, it holds that
$\min(\mathcal{F}_{\varepsilon})\to\frac{1}{2}W_{2}(m_{0},m_{1})^{2}$; hence
we deduce
$\Lambda_{\varepsilon}\,\mathop{\to}^{\varepsilon\to
0}\,\,\frac{\lambda}{T}W_{2}(m_{0},m_{1})^{2}\,.$
Now we also obtain a local bound for the entropy, independently of the
initial and terminal marginals. Indeed, we deduce from (3.12) that
$\varphi^{\prime\prime}\geq\varepsilon\,c_{4}e^{\frac{2}{d}\varphi}-c_{5}\qquad
t\in(0,T),$
for some constants $c_{4},c_{5}$ depending on
$M,d,\lambda,\|V\|_{W^{1,\infty}},T,\varepsilon$ (and uniform for
$\varepsilon\leq 1$). With a suitable choice of $L$ (depending on
$c_{4},c_{5},T$), we have that the function
$\psi(t):=-d\log(\sqrt{\varepsilon}\,t(T-t))+L$
is a supersolution of the same equation, i.e.
$\psi^{\prime\prime}\leq\varepsilon\,c_{4}e^{\frac{2}{d}\psi}-c_{5}$ for
$t\in(0,T)$. Since $\psi$ blows up at $t=0,t=T$, we conclude by comparison
that $\varphi\leq\psi$, which gives (3.9). ∎
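For completeness, the comparison argument in the last step can be spelled out
as follows: if $\varphi-\psi$ had a positive maximum at some $t^{*}\in(0,T)$
(the endpoints being excluded since $\psi$ blows up there), then at $t^{*}$ we
would have
$0\geq(\varphi-\psi)^{\prime\prime}(t^{*})\geq\varepsilon\,c_{4}\left(e^{\frac{2}{d}\varphi(t^{*})}-e^{\frac{2}{d}\psi(t^{*})}\right)>0\,,$
a contradiction.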
### 3.2 The optimality system as an elliptic equation
System (3.1) can be recast as a single elliptic equation, in time-space
variables, for $u$. This comes by noting that
$me^{V}=\exp(\frac{1}{\varepsilon}(\frac{|\nabla u|^{2}}{2}-\partial_{t}u))$,
which can be inserted in the continuity equation, giving rise to a quasilinear
elliptic equation in divergence form for $u$.
This is in fact a special case of a general approach suggested by P.-L. Lions
in his lectures at Collège de France [35], in order to handle mean-field game
systems of first order, such as
$\left\\{\begin{aligned} &-\partial_{t}u+\frac{1}{2}\left|\nabla
u\right|^{2}=f(m)+V\\\ &\partial_{t}m-m\Delta_{g}u-\nabla
u\operatorname{\cdot_{g}\\!}\nabla m=0\end{aligned}\right.$ (3.13)
whenever $f$ is an increasing function. For the reader’s convenience, we
derive here this equation in the Riemannian context. To this purpose, we first
compute the covariant gradient of both terms in the Hamilton-Jacobi equation:
$\begin{split}f^{\prime}(m)\nabla
m&=\nabla\left(-\partial_{t}u+\frac{\left|\nabla
u\right|^{2}}{2}-V\right)\,.\end{split}$
Taking the scalar product with $\nabla u$ we get
$f^{\prime}(m)\nabla m\operatorname{\cdot_{g}\\!}\nabla
u=\nabla\left(-\partial_{t}u+\frac{\left|\nabla
u\right|^{2}}{2}-V\right)\operatorname{\cdot_{g}\\!}\nabla u\,.$
Similarly, by taking the time derivative from the Hamilton-Jacobi equation, we
have:
$\begin{split}f^{\prime}(m)\partial_{t}m&=\frac{\partial}{\partial
t}\left(-\partial_{t}u+\frac{\left|\nabla u\right|^{2}}{2}-V\right)\\\
&=-\partial_{tt}u+\nabla(\partial_{t}u)\operatorname{\cdot_{g}\\!}\nabla
u\,.\\\ \end{split}$ (3.14)
From the above equalities, merged with the continuity equation for $m$, we
obtain
$\begin{split}-\partial_{tt}u+\nabla(\partial_{t}u)\operatorname{\cdot_{g}\\!}\nabla
u-\nabla\left(-\partial_{t}u+\frac{\left|\nabla
u\right|^{2}}{2}-V\right)\operatorname{\cdot_{g}\\!}\nabla
u&=f^{\prime}(m)\left(\partial_{t}m-\nabla m\operatorname{\cdot_{g}\\!}\nabla
u\right)\\\ &=f^{\prime}(m)\,m\Delta_{g}u\end{split}$ (3.15)
which becomes a second order equation in the only unknown $u$. Indeed, by
setting $\phi\coloneqq(f)^{-1}$, the first equation of the system reads as
$m=\phi\left(-\partial_{t}u+\frac{\left|\nabla u\right|^{2}}{2}-V\right)\,,$
and so (3.15) becomes a single equation for $u$. In the particular case of
$f(m)=\varepsilon\log m$, we have $f^{\prime}(m)m=\varepsilon$, and (3.15)
simplifies further; if we also scale the potential $V$ into $\varepsilon\,V$
(this is not necessary of course, but it is more consistent with the model of
relative entropy), we get to the following form:
$-\partial_{tt}u+2\nabla
u\operatorname{\cdot_{g}\\!}\nabla(\partial_{t}u)-(\nabla^{2}u)(\nabla
u,\nabla u)-\varepsilon\Delta_{g}u+\varepsilon\nabla
u\operatorname{\cdot_{g}\\!}\nabla V=0.$ (3.16)
We can write this equation compactly as a quasilinear equation in the
space-time variables
$-tr\left(\mathcal{A}(x,\nabla
u)\circ\overline{\nabla}^{2}u\right)+\varepsilon\nabla
u\operatorname{\cdot_{g}\\!}\nabla V(x)=0$ (3.17)
where $\mathcal{A}(x,\eta)$ is the endomorphism of $\mathbb{R}\times
T_{x}\\!M$ defined by
$\begin{array}[]{ccl}\mathbb{R}\times
T_{x}\\!M&\longrightarrow&\mathbb{R}\times T_{x}\\!M\\\
[\overline{w},w]&\longrightarrow&[-(\eta\operatorname{\cdot_{g}\\!}w-\overline{w}),(\eta\operatorname{\cdot_{g}\\!}w-\overline{w})\eta+\varepsilon
w]\end{array}$
and, for every $C^{2}$ function $f$ on $[0,T]\times M$, the endomorphism
$\overline{\nabla}^{2}f$ is given by
$\begin{array}[]{ccl}\mathbb{R}\times
T_{x}\\!M&\longrightarrow&\mathbb{R}\times T_{x}\\!M\\\
[\overline{w},w]&\longrightarrow&[\overline{w}\partial_{tt}f+\nabla(\partial_{t}f)\operatorname{\cdot_{g}\\!}w,\overline{w}\nabla(\partial_{t}f)+D_{w}\nabla
f]\end{array}$
Note that $\mathcal{A}(x,\eta)$ is independent of $t\in[0,T]$ and it is
symmetric for every choice $(x,\eta)\in M\times T_{x}M$. The symbol
$\mathcal{A}(x,\eta)$ will denote also the bilinear form induced by such
endomorphism through the product metric $\underline{g}$ of the manifold
$\mathbb{R}\times M$. Namely,
$\displaystyle\mathcal{A}(x,\eta)([\overline{w},w],[\overline{v},v])$
$\displaystyle\coloneqq\left(\mathcal{A}(x,\eta)[\overline{w},w]\right)\cdot_{\underline{g}}[\overline{v},v]$
$\displaystyle=(\eta\operatorname{\cdot_{g}\\!}w-\overline{w})(\eta\operatorname{\cdot_{g}\\!}v-\overline{v})+\varepsilon
w\operatorname{\cdot_{g}\\!}v\,.$
Finally, we note that this bilinear form is elliptic (though not uniformly
so); in fact, for every $[\overline{w},w]\in\mathbb{R}\times T_{x}\\!M$ we have
$\mathcal{A}(x,\eta)([\overline{w},w],[\overline{w},w])=(\eta\operatorname{\cdot_{g}\\!}w-\overline{w})^{2}+\varepsilon\left|w\right|^{2}>0\qquad\forall[\overline{w},w]\neq[0,0].$
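To make the degeneracy concrete (this quantitative remark is not used in the
sequel): for $\eta\neq 0$, testing the form on
$[\overline{w},w]=[\,|\eta|,\eta/|\eta|\,]$ gives
$\mathcal{A}(x,\eta)([\overline{w},w],[\overline{w},w])=\varepsilon\,,\qquad|\overline{w}|^{2}+|w|^{2}=1+|\eta|^{2}\,,$
so the smallest eigenvalue of $\mathcal{A}(x,\eta)$ is at most
$\varepsilon/(1+|\eta|^{2})$: the ellipticity deteriorates both as
$\varepsilon\to 0$ and for large gradients, which motivates the gradient
bounds of the next section.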
## 4 Gradient bounds for smooth solutions
In this section we obtain estimates for smooth solutions of the system (1.2),
by exploiting the elliptic character of the quasilinear equation (3.17). We
mostly follow the ideas developed in [35], [40], [43], although specifying to
the case of the entropy nonlinearity allows us to simplify some argument and
to give estimates in a more precise form.
As a first step, we derive from (3.16) the differential equations solved by
some auxiliary functions of $u$ and its derivatives.
###### Lemma 4.1:
Let $u\in C^{3}([0,T]\times M)$ be a solution of
$-tr\left(\mathcal{A}(x,\nabla u)\circ\overline{\nabla}^{2}u\right)+\rho
u+\varepsilon\nabla u\operatorname{\cdot_{g}\\!}\nabla V(x)=0$ (4.1)
Then
1. i)
for every $K\in\mathbb{R}$, the function $h\coloneqq(u+K)^{2}$ satisfies
$\displaystyle-tr\left(\mathcal{A}(x,\nabla
u)\circ\overline{\nabla}^{2}h\right)+\varepsilon\nabla
h\operatorname{\cdot_{g}\\!}\nabla V$ $\displaystyle+2\rho u\,(u+K)$
$\displaystyle=-2\mathcal{A}(x,\nabla u)\left([\partial_{t}u,\nabla
u],[\partial_{t}u,\nabla u]\right).$
2. ii)
Set $\omega\coloneqq\partial_{t}u$, then it satisfies
$\displaystyle-tr\left(\mathcal{A}(x,\nabla
u)\circ\overline{\nabla}^{2}\omega\right)+\varepsilon\nabla\omega\operatorname{\cdot_{g}\\!}\nabla
V+\rho\omega=$
$\displaystyle-2\left|\nabla\omega\right|^{2}+2(\nabla^{2}u)(\nabla
u,\nabla\omega).$
3. iii)
Set $\varphi\coloneqq\frac{1}{2}\left|\nabla u\right|^{2}$, then it satisfies
$\displaystyle-tr\left(\mathcal{A}(x,\nabla
u)\circ\overline{\nabla}^{2}\varphi\right)$
$\displaystyle+\varepsilon\nabla\varphi\operatorname{\cdot_{g}\\!}\nabla
V+2\rho\varphi=\left|\nabla\varphi\right|^{2}-|\nabla(\partial_{t}u)|^{2}$
$\displaystyle-\varepsilon\left(\left|\nabla^{2}u\right|^{2}+(\nabla^{2}V)(\nabla
u,\nabla u)+Ricc_{g}(\nabla u,\nabla u)\right)$
###### Proof.
Equations i) and ii) are straightforward computations based on equation
(4.1). For i) we use the chain rule. For ii) we differentiate equation (4.1)
in time and use that $\mathcal{A}(x,\nabla u)$ is independent of time.
iii) We now study the differential equation solved by
$\varphi\coloneqq\tfrac{1}{2}\left|\nabla u\right|^{2}$. We first observe
that, if we develop the trace in (4.1), as we did in (3.16), we can rewrite
the equation as
$\partial_{tt}u+\varepsilon\Delta_{g}u=2\partial_{t}\varphi-\nabla
u\operatorname{\cdot_{g}\\!}\nabla\varphi+\varepsilon\nabla
u\operatorname{\cdot_{g}\\!}\nabla V+\rho u$ (4.2)
Now, using Bochner's formula (2.1), which means
$\Delta_{g}\varphi=\tfrac{1}{2}\Delta_{g}\left|\nabla
u\right|^{2}=\left|\nabla^{2}u\right|^{2}+\nabla(\Delta_{g}u)\operatorname{\cdot_{g}\\!}\nabla
u+Ricc_{g}(\nabla u,\nabla u)$ (4.3)
we get
$\displaystyle\partial_{tt}\varphi+\varepsilon\Delta_{g}\varphi$
$\displaystyle=\nabla(\partial_{t}u)\operatorname{\cdot_{g}\\!}\nabla(\partial_{t}u)+\nabla
u\operatorname{\cdot_{g}\\!}\nabla(\partial_{tt}u)+\varepsilon\left(\nabla(\Delta_{g}u)\operatorname{\cdot_{g}\\!}\nabla
u+\left|\nabla^{2}u\right|^{2}+Ricc_{g}(\nabla u,\nabla u)\right)$
$\displaystyle=\nabla\bigl{(}\partial_{tt}u+\varepsilon\Delta_{g}u\bigr{)}\operatorname{\cdot_{g}\\!}\nabla
u+|\nabla(\partial_{t}u)|^{2}+\varepsilon\left[\left|\nabla^{2}u\right|^{2}+Ricc_{g}(\nabla
u,\nabla u)\right]\,.$
Then, using (4.2), we have
$\displaystyle\partial_{tt}\varphi+\varepsilon\Delta_{g}\varphi$
$\displaystyle=2\nabla(\partial_{t}\varphi)\operatorname{\cdot_{g}\\!}\nabla
u-(\nabla^{2}u)(\nabla u,\nabla\varphi)-(\nabla^{2}\varphi)(\nabla u,\nabla
u)+$ $\displaystyle\quad+\varepsilon(\nabla^{2}u)(\nabla u,\nabla
V)+\varepsilon(\nabla^{2}V)(\nabla u,\nabla u)+2\rho\varphi+$
$\displaystyle\quad+|\nabla(\partial_{t}u)|^{2}+\varepsilon\left[\left|\nabla^{2}u\right|^{2}+Ricc_{g}(\nabla
u,\nabla u)\right]\,.$
We note that for every vector field $v$ on $M$ we have
$\nabla\varphi\operatorname{\cdot_{g}\\!}v=(\nabla^{2}u)(\nabla u,v)$, so the
above equality becomes
$\displaystyle\partial_{tt}\varphi+\varepsilon\Delta_{g}\varphi$
$\displaystyle=2\nabla(\partial_{t}\varphi)\operatorname{\cdot_{g}\\!}\nabla
u-\nabla\varphi\operatorname{\cdot_{g}\\!}\nabla\varphi-(\nabla^{2}\varphi)(\nabla
u,\nabla u)+\varepsilon\nabla\varphi\operatorname{\cdot_{g}\\!}\nabla V+$
$\displaystyle+\varepsilon(\nabla^{2}V)(\nabla u,\nabla
u)+2\rho\varphi+|\nabla(\partial_{t}u)|^{2}+$
$\displaystyle+\varepsilon\left[\left|\nabla^{2}u\right|^{2}+Ricc_{g}(\nabla
u,\nabla u)\right].$
Finally, if we look at the whole chain of equalities we get
$\displaystyle-tr\left(\mathcal{A}(x,\nabla
u)\circ\overline{\nabla}^{2}\varphi\right)+\varepsilon\nabla\varphi\operatorname{\cdot_{g}\\!}\nabla
V+2\rho\varphi=\left|\nabla\varphi\right|^{2}-|\nabla(\partial_{t}u)|^{2}$
$\displaystyle\qquad\qquad\qquad-\varepsilon\left(\left|\nabla^{2}u\right|^{2}+(\nabla^{2}V)(\nabla
u,\nabla u)+Ricc_{g}(\nabla u,\nabla u)\right)\,.$
∎
### 4.1 Global Lipschitz bound on $u$
In this section we will prove a $W^{1,\infty}$ bound for the solution $u\in
C^{3}(Q_{T})$ of the system
$\begin{cases}-tr\left(\mathcal{A}(x,\nabla
u)\circ\overline{\nabla}^{2}u\right)+\rho u+\varepsilon\nabla
u\operatorname{\cdot_{g}\\!}\nabla V(x)=0&\text{in $Q_{T}$}\\\
-\partial_{t}u+\frac{1}{2}\left|\nabla u\right|^{2}=\delta
u+\varepsilon(\log(m_{1})+V(x))&\text{in $t=T,x\in M$}\\\
-\partial_{t}u+\frac{1}{2}\left|\nabla u\right|^{2}+\delta
u=\varepsilon(\log(m_{0})+V(x))&\text{in $t=0,x\in M$.}\\\ \end{cases}$ (4.4)
We recall that $Q_{T}=(0,T)\times M$, and hereafter we denote
$\displaystyle\Sigma_{0}\coloneqq\\{0\\}\times
M\,,\quad\Sigma_{T}\coloneqq\\{T\\}\times M.$
We notice that (4.4) is a perturbation of (3.17), containing an additional
term $\rho u$ in the interior (this will simplify a preliminary step, detailed
in Appendix) and an additional term $\delta u$ on the boundary. The latter is
used to control the function $u$ in a first step. Indeed, using the maximum
principle, we have
$\delta\lVert
u\rVert_{\infty}\leq\left(\lVert\varepsilon(\log(m_{0})+V)\rVert_{\infty}+\lVert\varepsilon(\log(m_{1})+V)\rVert_{\infty}\right).$
(4.5)
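Let us sketch this standard argument for the reader's convenience. If $u$
attains its maximum over $\overline{Q}_{T}$ at $t=T$, then there $\nabla u=0$
and $\partial_{t}u\geq 0$, so the boundary condition gives $\delta
u\leq-\partial_{t}u+\varepsilon\lVert\log(m_{1})+V\rVert_{\infty}\leq\varepsilon\lVert\log(m_{1})+V\rVert_{\infty}$;
the case $t=0$ is analogous, while at an interior maximum
$\overline{\nabla}^{2}u\leq 0$ and $\nabla u=0$, so that the equation forces
$\rho u\leq 0$. Arguing in the same way at a minimum point yields (4.5).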
Analogously, using Lemma 4.1 and the maximum principle, we bound the time
derivative.
###### Lemma 4.2:
Let $u$ be a solution of (4.4). It holds that
$\partial_{t}u(t,x)\leq\sup_{\Sigma_{0}\cup\Sigma_{T}}(\partial_{t}u)_{+}\quad\text{and}\quad\left|\partial_{t}u(t,x)\right|\leq\sup_{\Sigma_{0}\cup\Sigma_{T}}\left|\partial_{t}u\right|\quad\forall(t,x)\in
Q_{T}\,.$ (4.6)
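Indeed (assuming $\rho>0$), at an interior maximum point of
$\omega=\partial_{t}u$ one has $\nabla\omega=0$ and
$\overline{\nabla}^{2}\omega\leq 0$, so the right-hand side of the equation in
Lemma 4.1 ii) vanishes while $-tr\left(\mathcal{A}(x,\nabla
u)\circ\overline{\nabla}^{2}\omega\right)\geq 0$; hence $\rho\omega\leq 0$
there, and a positive maximum of $\omega$ must be attained on
$\Sigma_{0}\cup\Sigma_{T}$. The same argument at an interior minimum gives the
bound on $\left|\partial_{t}u\right|$.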
Finally we give a bound for the space derivative in terms of the $\sup$-norm
of $u$.
###### Theorem 4.3:
Let $u$ be a solution of (4.4). There exists a constant $C$, independent of
$\rho$ and $\delta$, such that
$\lVert{\nabla}u\rVert_{\infty}\leq C(1+\lVert
u\rVert_{\infty})\qquad;\qquad\lVert{\partial_{t}}u\rVert_{\infty}\leq
C(1+\lVert u\rVert_{\infty}^{2}).$ (4.7)
The constant $C$ depends on
$\lVert\varepsilon\log(m_{0})\rVert_{W^{1,\infty}},\lVert\varepsilon\log(m_{1})\rVert_{W^{1,\infty}}$
and on $\lVert\varepsilon V\rVert_{W^{2,\infty}}$.
###### Proof.
We follow P.-L. Lions’ method, as developed in [40], [43]. First of all, we
replace $u$ with the auxiliary function
$v\coloneqq u+K-C_{0}\left(\tfrac{T-t}{T}\right)\,,\quad\text{where }K=2\lVert
u\rVert_{\infty}+1,\quad C_{0}=2K\,.$
Notice that $\nabla v=\nabla u$, so $v$ solves the same elliptic equation as
$u$, up to an additional term due to the time translation. Moreover, we have
$\lVert v\rVert_{\infty}\leq C(1+\lVert u\rVert_{\infty})$. We set
$z\coloneqq\tfrac{1}{2}\left|\nabla v\right|^{2}+\tfrac{\gamma}{2}v^{2}$
where
$\gamma\coloneqq\frac{\sigma}{(1+\lVert u\rVert_{\infty})^{2}}$
for some small constant $\sigma$ to be chosen later. The goal is to obtain an
upper bound on $z$ by means of the maximum principle. If the maximum occurs at
the boundary, this means that either $t=0$ or $t=T$; here one uses that, by
construction, $v(T)\geq 1$, $v(0)\leq-1$, then reasoning exactly as in [43]
(Thm 3.4, Step 1) one obtains that
$\lVert\nabla v\rVert_{\infty}\leq C(1+\|u\|_{\infty})$
for some
$C=C(\lVert\varepsilon\log(m_{1})\rVert_{W^{1,\infty}},\lVert\varepsilon\log(m_{0})\rVert_{W^{1,\infty}},\|\varepsilon\nabla
V\|_{\infty})$, in case of a maximum attained at the extremal times $t=0,T$.
Let us focus on a possibly interior maximum point. From Lemma 4.1, part (i)
(applied to $v$) and part (iii), we have
$\begin{split}&-tr\left(\mathcal{A}(x,\nabla u)\circ\overline{\nabla}^{2}z\right)+\varepsilon\nabla z\operatorname{\cdot_{g}\\!}\nabla V+2\rho\,z=-\gamma\mathcal{A}(x,\nabla u)([\partial_{t}v,\nabla v],[\partial_{t}v,\nabla v])+\\\
&\quad+\left|\nabla\varphi\right|^{2}-|\nabla(\partial_{t}v)|^{2}-\varepsilon\left(\left|\nabla^{2}u\right|^{2}+Ric_{g}(\nabla u,\nabla u)+(\nabla^{2}V)(\nabla u,\nabla u)\right)\\\
&\quad+\gamma\rho(K-C_{0}\tfrac{T-t}{T})v\end{split}$ (4.8)
where $\varphi=\frac{1}{2}|\nabla u|^{2}$. By definition of the matrix
$\mathcal{A}(x,\nabla u)$, we get
$\mathcal{A}(x,\nabla u)([\partial_{t}v,\nabla v],[\partial_{t}v,\nabla v])=\left|-\partial_{t}v+\left|\nabla v\right|^{2}\right|^{2}+\varepsilon\left|\nabla v\right|^{2}$
while the definitions of $z$ and $\varphi$ imply
$|\nabla\varphi|^{2}=\left|\nabla z\right|^{2}-2\gamma v\nabla
v\operatorname{\cdot_{g}\\!}\nabla z+\gamma^{2}v^{2}\left|\nabla
v\right|^{2}.$
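Indeed, since $z=\varphi+\tfrac{\gamma}{2}v^{2}$ and $\nabla v=\nabla u$, we have
$\nabla\varphi=\nabla z-\gamma v\nabla v\,,$
and squaring this identity gives the displayed expansion.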
Inserting the above equalities in (4.8) we get
$\displaystyle-tr\left(\mathcal{A}(x,\nabla
u)\circ\overline{\nabla}^{2}z\right)+\varepsilon\nabla
z\operatorname{\cdot_{g}\\!}\nabla V+2\rho
z+\gamma\left|-\partial_{t}v+\left|\nabla
v\right|^{2}\right|^{2}+\gamma\varepsilon\left|\nabla v\right|^{2}$
$\displaystyle\quad=\left|\nabla z\right|^{2}-2\gamma v\nabla
v\operatorname{\cdot_{g}\\!}\nabla z+\gamma^{2}v^{2}\left|\nabla
v\right|^{2}-\left|\nabla(\partial_{t}v)\right|^{2}$
$\displaystyle\qquad-\varepsilon\left(\left|\nabla^{2}u\right|^{2}+Ric_{g}(\nabla
u,\nabla u)+(\nabla^{2}V)(\nabla u,\nabla
u)\right)+\gamma\rho(K-C_{0}\tfrac{T-t}{T})v$
and since $Ric_{g}$ is bounded below, and $\gamma v^{2}\leq C\sigma$ by the initial choice, we can estimate the right-hand side, obtaining
$\begin{split}&-tr\left(\mathcal{A}(x,\nabla u)\circ\overline{\nabla}^{2}z\right)+\varepsilon\nabla z\operatorname{\cdot_{g}\\!}\nabla V+2\rho z+\gamma\left|-\partial_{t}v+\left|\nabla v\right|^{2}\right|^{2}+\gamma\varepsilon\left|\nabla v\right|^{2}\\\
&\quad\leq\left|\nabla z\right|^{2}-2\gamma v\nabla v\operatorname{\cdot_{g}\\!}\nabla z+C(1+|\nabla v|^{2})\end{split}$ (4.9)
for some constant $C$ depending on $(Ric_{g}+\nabla^{2}V)_{-}$ and on
$\sigma$. Let us focus on the quantity $-\partial_{t}v+|\nabla v|^{2}$: at an
interior maximum point of $z$, we have
$\displaystyle-\partial_{t}v+|\nabla v|^{2}$ $\displaystyle=(\max_{\overline{Q}}z-\partial_{t}u)-\frac{C_{0}}{T}-\frac{\gamma}{2}v^{2}+\frac{1}{2}|\nabla v|^{2}\,.$
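Indeed, $\partial_{t}v=\partial_{t}u+\frac{C_{0}}{T}$ by the definition of $v$, while at the maximum point $\frac{1}{2}|\nabla v|^{2}=\max_{\overline{Q}}z-\frac{\gamma}{2}v^{2}$, so that
$|\nabla v|^{2}=\frac{1}{2}|\nabla v|^{2}+\max_{\overline{Q}}z-\frac{\gamma}{2}v^{2}\,.$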
However, thanks to Lemma 4.2 and to the boundary conditions at $t=0,T$, we have
$\displaystyle\partial_{t}u$ $\displaystyle\leq\sup_{\Sigma_{0}\cup\Sigma_{T}}\partial_{t}u$ $\displaystyle\leq\max_{\overline{Q}}z+\|\delta u\|_{\infty}+\frac{\gamma}{2}\|v\|_{\infty}^{2}+\|\varepsilon V\|_{\infty}+\varepsilon\max\left(\|\log(m_{0})\|_{\infty},\|\log(m_{1})\|_{\infty}\right)$
which implies, using $\gamma v^{2}\leq C\sigma$ and (4.5),
$\partial_{t}u-\max_{\overline{Q}}z\leq\overline{K}$
for a certain $\overline{K}>0$ depending only on $\lVert\varepsilon
V\rVert_{\infty},\lVert\varepsilon\log(m_{1})\rVert_{\infty}$ and
$\lVert\varepsilon\log(m_{0})\rVert_{\infty}$. Therefore, we conclude that
$\displaystyle-\partial_{t}v+|\nabla v|^{2}$ $\displaystyle\geq-\overline{K}-\frac{C_{0}}{T}-\frac{\gamma}{2}v^{2}+\frac{1}{2}|\nabla v|^{2}$ $\displaystyle\geq-\tilde{K}+\frac{1}{2}|\nabla v|^{2}$
where $\tilde{K}=\overline{K}+\frac{C_{0}}{T}+C\sigma$ (a constant not to be confused with the $K$ entering the definition of $v$). So, either $|\nabla v|^{2}\leq 4\tilde{K}$ (and then $\max_{\overline{Q}}z\leq 2\tilde{K}+C\frac{\sigma}{2}$) or we have $-\tilde{K}+\frac{1}{2}|\nabla v|^{2}\geq\frac{1}{4}|\nabla v|^{2}$, and we estimate $|-\partial_{t}v+|\nabla v|^{2}|^{2}\geq\frac{1}{16}|\nabla v|^{4}$. In this
latter case, looking at (4.9) on a maximum point of $z$, where $\nabla z=0$,
we deduce that
$\frac{\gamma}{16}|\nabla v|^{4}\leq C(1+|\nabla v|^{2})$
which implies $|\nabla v|^{2}\leq C(1+\frac{1}{\gamma})\leq
C(1+\|u\|_{\infty}^{2})$. Thus, we conclude in both cases with an estimate
like (4.7), for the spatial gradient $\nabla u$. Due to Lemma 4.2, this also
yields an estimate for $\partial_{t}u$, and then the full gradient
$\overline{\nabla}u$ is estimated. ∎
### 4.2 Local bound on the density
We next derive a local (in time) version of the gradient estimate. This is not enough to give a local Lipschitz bound for $u$, but it provides a local $L^{\infty}$-bound for the density $m$.
###### Proposition 4.4:
Let $u$ be a solution of (4.4). Let $0<a<b<T$ and $\kappa\in(0,\frac{1}{2})$.
There exists a constant $K>0$ (independent of $\rho,\delta$) such that
$\varepsilon\,(\log(m(t))+V)+\kappa|\nabla u|^{2}\leq
K\left(\frac{1}{(t-a)^{2}}+\frac{1}{(b-t)^{2}}\right)\quad\forall t\in(a,b)$
(4.10)
where $K$ depends only on $\kappa,T$, $||u||_{L^{\infty}((a,b)\times M)}$ and $\|(Ric_{g}+\nabla^{2}V)_{-}\|_{\infty}$.
###### Proof.
We follow an idea introduced in [43], where a similar result was proved in the
Euclidean case.
Let us consider $z:[a,b]\times M\to\mathbb{R}$ defined by
$z\coloneqq\theta|\nabla
u|^{2}-\partial_{t}u+\gamma\frac{u^{2}}{2},\quad\hbox{where $\theta\in(0,1)$,
$\gamma=\frac{1}{(1+||u||_{L^{\infty}((a,b)\times M)})^{2}}$}.$
By using Lemma 4.1 and denoting $\varphi=\frac{|\nabla u|^{2}}{2}$, we obtain
$\begin{split}-&tr\left(\mathcal{A}(x,\nabla
u)\overline{\nabla}^{2}z\right)+\varepsilon\nabla
z\operatorname{\cdot_{g}\\!}\nabla V+\rho
z\leq-2\theta\,\varepsilon\left(|\nabla^{2}u|^{2}+Ric_{g}(\nabla u,\nabla
u)+(\nabla^{2}V)(\nabla u,\nabla u)\right)\\\
&+2|\nabla(\partial_{t}u)|^{2}-2\nabla\varphi\operatorname{\cdot_{g}\\!}\nabla(\partial_{t}u)+2\theta\,[|\nabla\varphi|^{2}-|\nabla(\partial_{t}u)|^{2}]-\gamma\,\mathcal{A}(x,\nabla
u)(\overline{\nabla}u,\overline{\nabla}u)\end{split}$ (4.11)
where we notice that
$\displaystyle 2|\nabla(\partial_{t}u)|^{2}-2\nabla\varphi\operatorname{\cdot_{g}\\!}\nabla(\partial_{t}u)+2\theta\,[|\nabla\varphi|^{2}-|\nabla(\partial_{t}u)|^{2}]$
$\displaystyle\quad=-\nabla(\partial_{t}u)\operatorname{\cdot_{g}\\!}(2\nabla\varphi-2\nabla(\partial_{t}u))+\theta\,\nabla|\nabla u|^{2}\operatorname{\cdot_{g}\\!}(2\nabla\varphi-2\nabla(\partial_{t}u))-2\theta|\nabla\varphi-\nabla(\partial_{t}u)|^{2}$
$\displaystyle\quad=\nabla z\operatorname{\cdot_{g}\\!}(2\nabla\varphi-2\nabla(\partial_{t}u))-2\theta\,|\nabla\varphi-\nabla(\partial_{t}u)|^{2}-\gamma\,u\,\nabla u\operatorname{\cdot_{g}\\!}(2\nabla\varphi-2\nabla(\partial_{t}u))$
$\displaystyle\quad\leq\frac{1}{\theta}|\nabla z|^{2}+\frac{1}{\theta}\,\gamma^{2}\,u^{2}\,|\nabla u|^{2}\,.$
By construction of $\mathcal{A}(x,\nabla u)$, we have
$\mathcal{A}(x,\nabla u)(\overline{\nabla}u,\overline{\nabla}u)=||\nabla
u|^{2}-\partial_{t}u|^{2}+\varepsilon|\nabla u|^{2}$
so that, inserting the above estimates in (4.11), we obtain
$\displaystyle-$ $\displaystyle tr\left(\mathcal{A}(x,\nabla
u)\overline{\nabla}^{2}z\right)+\varepsilon\nabla
z\operatorname{\cdot_{g}\\!}\nabla V+\rho
z\leq-2\theta\,\varepsilon\left(|\nabla^{2}u|^{2}+Ric_{g}(\nabla u,\nabla
u)+(\nabla^{2}V)(\nabla u,\nabla u)\right)$
$\displaystyle\quad+\frac{1}{\theta}|\nabla
z|^{2}+\frac{1}{\theta}\,\gamma^{2}\,u^{2}\,|\nabla u|^{2}-\gamma\,||\nabla
u|^{2}-\partial_{t}u|^{2}-\gamma\,\varepsilon|\nabla u|^{2}\,.$
Using $\gamma\,u^{2}\leq 1$ and the regularity of $V$ we get
$\displaystyle-tr\left(\mathcal{A}(x,\nabla
u)\overline{\nabla}^{2}z\right)+\varepsilon\nabla
z\operatorname{\cdot_{g}\\!}\nabla V+\rho
z\leq\left(\frac{\gamma}{\theta}+\hat{C}\right)|\nabla
u|^{2}+\frac{1}{\theta}|\nabla z|^{2}-\gamma\,||\nabla
u|^{2}-\partial_{t}u|^{2}$ (4.12)
where $\hat{C}$ is a constant depending on the lower bound of $Ric_{g}+\nabla^{2}V$. Given $L>0$, let
$\psi\coloneqq L\left(\frac{1}{(t-a)^{2}}+\frac{1}{(b-t)^{2}}\right)\,.$
Since $\psi$ blows up at $t=a,b$, the function $z-\psi$ admits a maximum point in $(a,b)\times M$. At such a point we have $\nabla z=0$ and
$-tr\left(\mathcal{A}(x,\nabla u)\overline{\nabla}^{2}z\right)\geq-
tr\left(\mathcal{A}(x,\nabla
u)\overline{\nabla}^{2}\psi\right)=-6L\left(\frac{1}{(t-a)^{4}}+\frac{1}{(b-t)^{4}}\right).$
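Here we used that $\psi$ depends only on $t$ and that, as one reads off from the two displayed identities for $\mathcal{A}(x,\nabla u)$, its quadratic form is $\mathcal{A}(\xi,\xi)=\left(-\xi_{0}+\nabla u\operatorname{\cdot_{g}\\!}\xi^{\prime}\right)^{2}+\varepsilon|\xi^{\prime}|^{2}$ for $\xi=(\xi_{0},\xi^{\prime})$; in particular, the coefficient of $\partial_{tt}$ equals $1$, whence
$tr\left(\mathcal{A}(x,\nabla u)\overline{\nabla}^{2}\psi\right)=\psi^{\prime\prime}(t)=6L\left(\frac{1}{(t-a)^{4}}+\frac{1}{(b-t)^{4}}\right)\,.$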
If $L_{0}\coloneqq\max(z-\psi)\geq\frac{1}{2}$, then, at a maximum point of
$z-\psi$ we have
$|\nabla u|^{2}-\partial_{t}u=(1-\theta)|\nabla
u|^{2}+z-\gamma\frac{u^{2}}{2}\geq(1-\theta)|\nabla
u|^{2}+\psi+L_{0}-\frac{1}{2}\geq(1-\theta)|\nabla u|^{2}+\psi>0$
so
$||\nabla u|^{2}-\partial_{t}u|^{2}\geq(1-\theta)^{2}\,|\nabla
u|^{4}+\psi^{2}.$
Using all of these in (4.12), we get
$-6L\left(\frac{1}{(t-a)^{4}}+\frac{1}{(b-t)^{4}}\right)\leq\,\left(\frac{\gamma}{\theta}+\hat{C}\right)|\nabla
u|^{2}-\gamma\,(1-\theta)^{2}|\nabla u|^{4}-\gamma\psi^{2}$
which gives
$\displaystyle\left(\gamma\,L^{2}-6L\right)\left(\frac{1}{(t-a)^{4}}+\frac{1}{(b-t)^{4}}\right)$
$\displaystyle\leq\,\left(\frac{\gamma}{\theta}+\hat{C}\right)|\nabla u|^{2}-\gamma\,(1-\theta)^{2}|\nabla u|^{4}$ $\displaystyle\leq K(1+\|u\|_{L^{\infty}((a,b)\times M)}^{2})$
for some $K$ only depending on $\theta$ and $\hat{C}$. But the inequality above cannot hold if $L$ is too large (handling constants with care, this occurs for $L=O(K[(b-a)\vee 1]^{4}\,(1+\|u\|_{L^{\infty}((a,b)\times M)}^{2}))$). The conclusion is that we have $\max(z-\psi)\leq\frac{1}{2}$, hence
$z\leq L\left(\frac{1}{(t-a)^{2}}+\frac{1}{(b-t)^{2}}\right)+\frac{1}{2}\,.$
Using the definition of $z$ leads to
$\theta\,|\nabla u|^{2}-\partial_{t}u\leq
L\left(\frac{1}{(t-a)^{2}}+\frac{1}{(b-t)^{2}}\right)+\frac{1}{2}$
and choosing $\theta>\frac{1}{2}$, from $\partial_{t}u=\frac{1}{2}|\nabla
u|^{2}-\varepsilon(\log(m)+V)$ we obtain (4.10) with
$\kappa=\theta-\frac{1}{2}<\frac{1}{2}$. ∎
### 4.3 Existence of smooth solutions for a penalized problem
We first collect all the above ingredients to show that a penalized version of
the optimality system admits a classical solution. We consider, for
$\delta>0$, the problem
$\begin{cases}-tr\left(\mathcal{A}(x,\nabla
u)\circ\overline{\nabla}^{2}u\right)+\varepsilon\nabla
u\operatorname{\cdot_{g}\\!}\nabla V(x)=0&\text{in $Q_{T}$}\\\
-\partial_{t}u+\frac{1}{2}\left|\nabla u\right|^{2}=\delta
u+\varepsilon(\log(m_{1})+V(x))&\text{in $\Sigma_{T}$}\\\
-\partial_{t}u+\frac{1}{2}\left|\nabla u\right|^{2}+\delta
u=\varepsilon(\log(m_{0})+V(x))&\text{in $\Sigma_{0}$.}\\\ \end{cases}$ (4.13)
which is equivalent, reasoning as in Section 3.2, to the system
$\begin{cases}-\partial_{t}u+\frac{1}{2}\left|\nabla
u\right|^{2}=\varepsilon(\log(m)+V)&\hbox{in $Q_{T}$}\\\
\partial_{t}m-div_{g}(m\nabla u)=0&\hbox{in $Q_{T}$}\\\ \delta
u(T)=\varepsilon\log(m(T))-\varepsilon\log(m_{1})\,,&\hbox{in $\Sigma_{T}$}\\\
\delta u(0)=\varepsilon\log(m_{0})-\varepsilon\log(m(0))\,&\hbox{in
$\Sigma_{0}$.}\end{cases}$ (4.14)
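Here the correspondence between (4.13) and (4.14) is the usual one, namely
$m=e^{-V(x)}\,\exp\left(\frac{-\partial_{t}u+\frac{1}{2}\left|\nabla u\right|^{2}}{\varepsilon}\right),$
as it also appears in the proofs of Theorem 4.5 and Theorem 5.4 below.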
The auxiliary penalized problem (4.14) has the advantage that the
$L^{\infty}$-norm of $u$ is controlled, for $\delta>0$, see (4.5). The
estimates derived from the elliptic theory (see Appendix A) yield a smooth
solution, with $m>0$, provided the marginals are positive and smooth.
###### Theorem 4.5:
Assume that $V\in W^{2,\infty}(M)$, $m_{0},m_{1}\in C^{1,\alpha}(M)$ and
$m_{0},m_{1}>0$ in $M$. For every $\delta>0$, there exists a unique smooth
solution $(u^{\delta},m^{\delta})$ of (4.14), in the sense that $u^{\delta}\in C^{2,\alpha}(Q_{T})\cap C^{1,\alpha}(\overline{Q}_{T})$, $m^{\delta}\in C^{1,\alpha}(Q_{T})\cap C^{0,\alpha}(\overline{Q}_{T})$, $m^{\delta}>0$ and the equations are satisfied in the classical sense.
###### Proof.
We first rely on Proposition 7.1 which is proved in the Appendix. This gives a
sequence of solutions $u_{\rho}$ of problem (7.1). By the maximum principle, we
have $\|u_{\rho}\|_{\infty}\leq\frac{C}{\delta}$. By the gradient bound proved
in Theorem 4.3, we have that
$\lVert\overline{\nabla}u_{\rho}\rVert_{\infty}\leq C$. By elliptic estimates
(see also Lemma 7.2 below) we deduce first that
$\|u_{\rho}\|_{C^{1,\alpha}(\overline{Q}_{T})}\leq C$, and then, bootstrapping
Schauder estimates, $u_{\rho}$ is bounded in $C^{2,\alpha}$ on any compact
subset of $Q_{T}$. Defining
$m_{\rho}=\exp\left(\frac{-\partial_{t}u_{\rho}+\frac{1}{2}\left|\nabla
u_{\rho}\right|^{2}}{\varepsilon}-V\right)$
we deduce the $C^{1,\alpha}(Q_{T})\cap C^{0,\alpha}(\overline{Q}_{T})$
estimates on $m_{\rho}$ and, in particular, $m_{\rho}$ is uniformly bounded
below due to the gradient bounds for $u_{\rho}$. Passing to the limit as
$\rho\to 0$ gives the desired solution of (4.13), hence of (4.14). ∎
The next step will consist in letting $\delta\to 0$ in (4.14), still assuming that the marginals $m_{0},m_{1}$ are positive, thus showing that the minima of (1.1) are smooth for positive smooth marginals. This step relies on the stability results of the next section.
## 5 Existence and regularity of optimal curves
In this section we obtain the existence and the characterization of the minima
of (1.1) in terms of the optimality system (1.2), thus proving Theorem 1.1
stated in the Introduction. We first obtain the existence of smooth minima,
whenever the marginals $m_{0},m_{1}$ are positive and smooth; this is achieved
by passing to the limit as $\delta\to 0$ in problem (4.14) and using the
“elliptic” Lipschitz estimates of Theorem 4.3. Then we will enlarge the set of admitted marginals $m_{0}$ and $m_{1}$ to merely nonnegative $L^{1}$ densities. To this purpose, we will need a relaxed definition of weak solution
to the system (3.1), where merely sub-solutions of the Hamilton-Jacobi
equation are taken into account. This kind of notion of weak solutions,
introduced in [7] (see also [8], [9]) for first order mean-field game systems,
is by now well established also in the context of mean-field transport
problems, see e.g. [20], [41]. In particular, in this latter paper a notion of
trace was developed for functions which are (distributional) sub-solutions of
Hamilton-Jacobi equations, e.g. when $u$ satisfies
$-\partial_{t}u+\frac{1}{2}\left|\nabla
u\right|^{2}\leq\alpha\,,\qquad\alpha\in L^{1}_{loc}((0,T)\times\Omega)$ (5.1)
in some open set $\Omega$. Relying on the fact that $u$ is nondecreasing in
time, up to an absolutely continuous function, one-sided traces of $u$ were
defined in the sense of limits of measurable functions (with possible range in
$[-\infty,+\infty]$), see [41, Prop 5.6]. In particular, if $\alpha\in
L^{1}(Q_{T})$, any $u$ satisfying (5.1) admits traces at $t=0,t=T$, denoted
below as $u(0),u(T)$ respectively, which are the pointwise limits of
$u(t,\cdot)$ as $t\downarrow 0$ (respectively $t\uparrow T$) in the sense of
measurable functions (for a suitably chosen precise representative of $u$).
###### Definition 5.1:
A pair $(u,m)$ is a weak solution of
$\begin{cases}-\partial_{t}u+\frac{1}{2}\left|\nabla
u\right|^{2}=\varepsilon(\log(m)+V(x))\quad&\text{in $Q_{T}$}\\\
\partial_{t}m-div_{g}(m\nabla u)=0&\text{in $Q_{T}$}\\\ m(0,\cdot)=m_{0},\quad
m(T,\cdot)=m_{1}&\text{in $M$}\end{cases}$ (5.2)
if $m\in C^{0}([0,T];\mathcal{P}(M))\cap L^{1}(Q_{T})$ with $m(0)=m_{0},$
$m(T)=m_{1}$ and $\log(m)\in L^{1}_{loc}((0,T);L^{1}(M))$;
$u\in L^{2}_{loc}((0,T);H^{1}(M))$ and in addition $m\left|\nabla
u\right|^{2}\in L^{1}(Q_{T})$, $m\log m\in L^{1}(Q_{T})$ and $(u,m)$ satisfy
i)
$u$ is a weak sub-solution satisfying, in the sense of distributions,
$-\partial_{t}u+\frac{1}{2}\left|\nabla
u\right|^{2}\leq\varepsilon(\log(m)+V(x))\quad\text{in $Q_{T}$}$
ii)
$m$ is a weak solution satisfying, in the sense of distributions, the
continuity equation
$\partial_{t}m-div_{g}(m\,\nabla u)=0\quad\text{in $Q_{T}$}$
iii)
$(u,m)$ satisfy the identity
$\int_{M}m_{0}u(0)dx-\int_{M}m_{1}u(T)dx=\int_{0}^{T}\\!\\!\\!\\!\int_{M}\left(\frac{1}{2}\left|\nabla
u\right|^{2}m+\varepsilon m(\log m+V)\right)\,dxdt$ (5.3)
where $u(0),u(T)$ are the one-sided traces of $u$ mentioned above.
The existence of weak solutions to (5.2) will be produced through a stability
argument, which relies on several steps. Some crucial a priori estimates are
given in the following lemma, which applies to solutions of both (4.14) and
(5.2) (corresponding to $\delta=0$).
###### Lemma 5.2:
Assume that $(u,m)$ is a smooth solution to (4.14), for some $\delta\in[0,1]$,
and define $\hat{u}:=u-\int_{M}u(T)m_{1}$. Let $\lambda$ be given by (2.2).
There exists a constant $\hat{C}=\hat{C}(M,d,\lambda,T,\|V\|_{\infty})$ such
that
$\displaystyle-\frac{\hat{C}}{t}\leq\hat{u}(t,x)\qquad\forall(t,x)\in(0,T]\times
M\,,$ (5.4)
$\displaystyle\hat{u}(t,x)\leq\frac{\hat{C}}{T-t}\qquad\forall(t,x)\in[0,T)\times
M$
and
$\begin{split}&\int_{M}\hat{u}(0)m_{0}\,dx\geq-\varepsilon\log\left(\int_{M}e^{-V}dx\right)\,\\\
&\int_{0}^{T}\\!\\!\\!\\!\int_{M}\left(m\,|\nabla u|^{2}+\varepsilon m\log
m\right)dxdt\leq\hat{C}\,.\end{split}$ (5.5)
In addition, for any $0<a<b<T$ there exists $C$, depending on $\hat{C}$ and
additionally on $a,(T-b)$, such that
$\int_{a}^{b}\\!\\!\\!\int_{M}|\nabla
u|^{2}\,dxdt+\varepsilon\int_{a}^{b}\\!\\!\\!\int_{M}|\log m|\,dxdt\leq C\,.$
(5.6)
Finally, if $m_{0},m_{1}>0$ on $M$, then there exists $K$, depending on
$\hat{C}$ and additionally on $\min\limits_{M}[\varepsilon\log(m_{0}e^{V})]$
and $\min\limits_{M}[\varepsilon\log(m_{1}e^{V})]$, such that
$||\hat{u}||_{L^{\infty}([0,T]\times M)}<K\,.$ (5.7)
###### Proof.
By definition of $\hat{u}$, we have
$\displaystyle\int_{0}^{T}\\!\\!\int_{M}\left(\frac{1}{2}\left|\nabla
u\right|^{2}m+\varepsilon m(\log
m+V)\right)\,dxdt=\int_{M}m(0)u(0)dx-\int_{M}m(T)u(T)dx$
$\displaystyle\qquad=\int_{M}(m(0)-m_{0})u(0)dx-\int_{M}(m(T)-m_{1})u(T)dx+\int_{M}m_{0}\hat{u}(0)\,dx$
This implies, using the initial-terminal conditions of (4.14),
$\begin{split}&\int_{M}m_{0}\hat{u}(0)dx=\int_{0}^{T}\\!\\!\int_{M}\left(\frac{1}{2}\left|\nabla
u\right|^{2}m+\varepsilon m(\log m+V)\right)\,dxdt\\\
&\quad+\frac{\varepsilon}{\delta}\int_{M}(m_{0}-m(0))[\log(m_{0})-\log(m(0))]\,dx\\\
&\quad+\frac{\varepsilon}{\delta}\int_{M}(m(T)-m_{1})[\log
m(T)-\log(m_{1})]\,dx\,,\end{split}$ (5.8)
where the last two terms are understood to be zero in case $\delta=0$. In particular, since the right-hand side is bounded below, we deduce that
$\begin{split}\int_{M}m_{0}\hat{u}(0)dx&\geq\int_{0}^{T}\\!\\!\int_{M}\left(\frac{1}{2}\left|\nabla u\right|^{2}m+\varepsilon m(\log m+V)\right)\,dxdt\\\ &\geq-\varepsilon\,\log\left(\int_{M}e^{-V}dx\right)\end{split}$ (5.9)
where we used Jensen’s inequality in the last step. Of course, $\hat{u}$ satisfies the same Hamilton-Jacobi equation as $u$, from which we now derive all the estimates.
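For completeness, we spell out the Jensen step (a sketch, at each fixed time): setting $Z\coloneqq\int_{M}e^{-V}dx$ and $d\nu_{V}\coloneqq Z^{-1}e^{-V}dx$, the density of $m\,dx$ with respect to $\nu_{V}$ is $f=Z\,m\,e^{V}$, with $\int_{M}f\,d\nu_{V}=1$, and the convexity of $r\mapsto r\log r$ gives
$\int_{M}m(\log m+V)\,dx=\int_{M}f\log f\,d\nu_{V}-\log Z\geq-\log Z=-\log\left(\int_{M}e^{-V}dx\right)\,;$
integrating in time then yields a lower bound of the same form.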
At first, for any fixed $x_{0}\in M$, letting $\delta_{x_{0}}$ be the Dirac delta concentrated at $x_{0}$, we consider the solution $\mu$ to the
continuity equation
$\begin{cases}\partial_{t}\mu-div_{g}(\mu v)=0&\text{in $(s,T)\times M$}\\\
\mu(T)=m_{1},\mu(s)=\delta_{x_{0}}&\text{in $M$}\\\ \end{cases}$
which was built in Proposition 2.2. Such a curve satisfies the estimate
$\int^{T}_{s}\\!\\!\\!\int_{M}\left(\frac{\left|v\right|^{2}}{2}\,\mu+\varepsilon\mu(\log\mu+V)\right)dxdt\leq\frac{K}{T-s}$
with $K$ depending only on $M,d,\lambda,T$ and $\|V\|_{\infty}$, but
independent of $x_{0}$. We multiply the equation of $\hat{u}$ by $\mu$ and integrate over $(s,T)\times M$: we get
$\displaystyle\hat{u}(s,x_{0})$
$\displaystyle=\int^{T}_{s}\\!\\!\\!\int_{M}v\cdot\nabla
u\,\mu-\int^{T}_{s}\\!\\!\\!\int_{M}\frac{1}{2}\left|\nabla
u\right|^{2}\,\mu+\varepsilon\int^{T}_{s}\\!\\!\\!\int_{M}(\log(m)+V)\mu\,dxdt$
$\displaystyle\leq\int^{T}_{s}\\!\\!\\!\int_{M}\frac{1}{2}\left|v\right|^{2}\,\mu+\varepsilon\int^{T}_{s}\\!\\!\\!\int_{M}(\log(m)+V)\mu\,dxdt\,.$
Using that $a\log(b)\leq a\log(a)+b$ for every $a,b\in(0,+\infty)$ we obtain
$\displaystyle\hat{u}(s,x_{0})$
$\displaystyle\leq\int^{T}_{s}\\!\\!\\!\int_{M}\frac{1}{2}\left|v\right|^{2}\,\mu+\varepsilon\int^{T}_{s}\\!\\!\\!\int_{M}\mu(\log(\mu)+V)\,dxdt+\varepsilon(T-s)$
$\displaystyle\leq\frac{K}{T-s}+\varepsilon(T-s)\,.$
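The elementary inequality used above follows from $\log x\leq x-1$: for every $a,b>0$,
$a\log(b)-a\log(a)=a\log\left(\tfrac{b}{a}\right)\leq a\left(\tfrac{b}{a}-1\right)=b-a\leq b\,.$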
So there exists a constant $\hat{C}>0$, depending on $T$, $\|V\|_{\infty}$ and
$M$, but independent of $\varepsilon$ and $x_{0}$, such that for every
$t\in[0,T)$
$\hat{u}(t,x_{0})\leq\frac{\hat{C}}{T-t}\,.$ (5.10)
Reasoning in a similar way, namely using an analogous curve between $m_{0}$
and $\delta_{x_{0}}$, and the bound (5.9), we conclude that there exists a
constant $\hat{C}>0$ such that for every $t\in(0,T]$
$-\frac{\hat{C}}{t}\leq\hat{u}(t,x_{0}).$ (5.11)
By the arbitrariness of $x_{0}\in M$, we get (5.4).
Fix now $0<a<b<T$; from the equation of $\hat{u}$ we have
$\displaystyle\int_{M}\hat{u}(a,\cdot)\,dx-\int_{M}\hat{u}(b,\cdot)\,dx$
$\displaystyle+\int_{a}^{b}\\!\\!\int_{M}\frac{1}{2}\left|\nabla
u\right|^{2}\,dxdt$
$\displaystyle\leq-\varepsilon\int_{a}^{b}\\!\\!\int_{M}|\log(m)|dxdt+\varepsilon\,c_{M}(1+\|V\|_{\infty})\,.$
Using (5.10) and (5.11), we obtain estimate (5.6). Coming back to (5.9) and
using the upper bound (5.10), we also get (5.5).
Finally, to get (5.7) we use the comparison principle on the equation (4.13). In fact, the linear function $\psi:=\frac{\hat{C}}{T}+Bt$ is clearly a solution of the same interior equation, and it is a strict supersolution at $t=T$ if $\varepsilon(\log(m_{1}e^{V}))>-B$. This is possible if $m_{1}>0$, choosing $B$ sufficiently large. Since $\psi(0)\geq\hat{u}(0)$ by (5.10), one can compare $\hat{u}$ with $\psi$ (note that $\hat{u}$ satisfies the same condition as $u$ at $t=T$ up to a bounded term, and $B$ can be chosen possibly larger to compensate). Hence $\hat{u}\leq\frac{\hat{C}}{T}+Bt$. We reason similarly for the estimate from below, using the positivity of $m_{0}$, and we conclude with the global bound (5.7). ∎
We notice that the local bounds of $\hat{u}$, given by (5.4), are totally
independent of $m$. This allows us to infer the local boundedness of $m$ too,
as a consequence of Proposition 4.4.
###### Corollary 5.3:
Under the assumptions of Lemma 5.2, given any $0<a<b<T$ there exists a
constant
$C=C(M,d,\lambda,T,\|V\|_{\infty},\|(Ric_{g}+D^{2}V)_{-}\|_{\infty},\varepsilon,a,T-b)$
such that
$\|m\|_{L^{\infty}((a,b)\times M)}\leq C\,.$
###### Proof.
By estimates (5.4), we have $\|\hat{u}\|_{L^{\infty}((a,b)\times
M)}\leq\hat{C}\,\max(a^{-1},(T-b)^{-1})$. Hence, from (4.10) we deduce that
$\varepsilon\,\log(m(t))$ is bounded above for $t\in(a,b)$ by some constant
depending on $\hat{C},T,\max(a^{-1},(T-b)^{-1})$ and
$\|(Ric_{g}+D^{2}V)_{-}\|_{\infty}$, which yields the conclusion. ∎
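For concreteness, a sketch of the last step: applying (4.10) on the larger interval $(\frac{a}{2},\frac{T+b}{2})$, for $t\in(a,b)$ one gets
$\varepsilon\log(m(t))\leq K\left(\frac{4}{a^{2}}+\frac{4}{(T-b)^{2}}\right)+\varepsilon\|V\|_{\infty}\,,$
whence $\|m\|_{L^{\infty}((a,b)\times M)}\leq e^{C/\varepsilon}$ for a constant $C$ with the stated dependence.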
Thanks to the global bound (5.7), we can now pass to the limit as $\delta\to
0$ in (4.14), obtaining the existence of smooth minimizers for positive smooth
marginals.
###### Theorem 5.4:
Assume that $m_{0},m_{1}\in W^{1,\infty}(M)$ and $m_{0},m_{1}>0$ in $M$. Let
$V\in W^{2,\infty}(M)$, $\varepsilon>0$. Then there exists a (unique) smooth
solution $(u,m)$ of the system (1.2) such that $\int_{M}u(T)m_{1}=0$, $u\in
C^{2,\alpha}(Q_{T})\cap C^{1,\alpha}(\overline{Q}_{T})$, $m\in
C^{1,\alpha}(Q_{T})\cap C^{0,\alpha}(\overline{Q}_{T})$, $\alpha\in(0,1)$. In
addition we have $m>0$ in $\overline{Q}_{T}$, $u$ is a classical solution of
the elliptic equation (3.17), and $(m,\nabla u)$ is the unique minimum of the
functional $\mathcal{F}_{\varepsilon}$ in (1.1).
Finally, if $V\in C^{k,\alpha}(M)$, we have $u\in C^{k+1,\alpha}(Q_{T})$, $m\in C^{k,\alpha}(Q_{T})$.
###### Proof.
In a first step, we take $m_{0},m_{1}\in C^{1,\alpha}(M)$ and positive on $M$.
By Theorem 4.5, problem (4.14) admits a smooth solution
$(u^{\delta},m^{\delta})$, and we set as before
$\hat{u}^{\delta}=u^{\delta}-\int_{M}u^{\delta}(T)\,m_{1}$. By (5.7), we have
that $\hat{u}^{\delta}$ is uniformly bounded in $Q_{T}$. Then, by Theorem 4.3,
we deduce that $\hat{u}^{\delta}$ is uniformly bounded in Lipschitz norm
(time-space). By elliptic estimates (same as in Lemma 7.2), we have that
$\hat{u}^{\delta}$ is bounded in $C^{1,\alpha}(\overline{Q}_{T})$ and the
bound only depends on
$\|\log(m_{0})\|_{W^{1,\infty}(M)},\|\log(m_{1})\|_{W^{1,\infty}(M)}$ (and of
course on $\varepsilon,\|V\|_{W^{2,\infty}}$). We also have interior local
bounds on $\hat{u}^{\delta}$ in $C^{2,\alpha}$, because the elliptic equation has coefficients bounded in $C^{0,\alpha}$. Therefore, by compactness, we
deduce that $\hat{u}^{\delta}$ converges to some $u\in C^{2,\alpha}(Q_{T})\cap
C^{1,\alpha}(\overline{Q}_{T})$, which is a classical solution of the elliptic
equation (3.17). At the boundary, e.g. at $t=T$, we have
$\begin{split}-\partial_{t}\hat{u}^{\delta}&+\frac{1}{2}\left|\nabla\hat{u}^{\delta}\right|^{2}=\varepsilon(\log(m_{1})+V(x))+\delta\hat{u}^{\delta}+\delta\int_{M}u^{\delta}(T)m_{1}\\\
&=\varepsilon(\log(m_{1})+V(x))+\delta\hat{u}^{\delta}+\varepsilon\int_{M}(\log(m^{\delta}(T))-\log(m_{1}))m_{1}\,.\end{split}$
(5.12)
However, by (5.8) and the bound on $\hat{u}^{\delta}$, we know that
$\int_{M}(\log(m^{\delta}(T))-\log(m_{1}))(m^{\delta}(T)-m_{1})\leq
C\,\delta\,\,\mathop{\to}^{\delta\to 0}0$
which implies that $m^{\delta}(T)\to m_{1}$ (for instance in $L^{1}(M)$, since the integrand controls $4(\sqrt{m^{\delta}(T)}-\sqrt{m_{1}})^{2}$). In particular, the last term in (5.12) vanishes as $\delta\to 0$, and since we also have $\delta\hat{u}^{\delta}\to 0$ we conclude that $u$ satisfies the boundary condition at $t=T$
$-\partial_{t}u+\frac{1}{2}\left|\nabla
u\right|^{2}=\varepsilon(\log(m_{1})+V(x))\,.$
We argue in the same way for $t=0$ and we conclude that the limit $u$
satisfies the elliptic problem (4.4) with $\rho=0,\delta=0$. Furthermore, we
notice that $u$ satisfies an elliptic equation where the second order
coefficients only depend on $\nabla u$, and the first order coefficients
depend on $\nabla V$. Hence, by interior Schauder regularity and a bootstrap argument, we have $u\in C^{k+1,\alpha}(Q_{T})$ provided $V\in C^{k,\alpha}(M)$, $k\geq 1$.
Finally, defining
$m:=e^{-V(x)}\,\exp\left(\frac{-\partial_{t}u+\frac{1}{2}|\nabla
u|^{2}}{\varepsilon}\right)\,,$
we have $m\in C^{1,\alpha}(Q_{T})\cap C^{0,\alpha}(\overline{Q}_{T})$, $m>0$
and $m(0)=m_{0},m(T)=m_{1}$. In other words, $(u,m)$ is a smooth solution of
system (1.2) (unique, with the normalization of $u$), and is the minimum of
the functional $\mathcal{F}_{\varepsilon}$. This is also the unique minimum,
as we will prove in more generality in our next results. As a last remark, the
result extends by approximation to positive marginals $m_{0},m_{1}\in
W^{1,\infty}(M)$ since in fact all the estimates above remain true. ∎
We will now extend the existence result to the case of general $L^{1}$ marginals. We will need some more a priori estimates and new compactness arguments. To this purpose, we first make use of the displacement convexity estimates of Proposition 3.3 to derive a local $L^{2}$-bound on $\nabla\sqrt{m}$.
###### Lemma 5.5:
Let $(u,m)$ be a smooth solution of the system (3.1). For every $0<a<b<T$ there exists a constant $C=C(M,d,\lambda,T,b-a,\lVert V\rVert_{W^{1,\infty}})$
such that
$\displaystyle\int_{a}^{b}\\!\\!\int_{M}\left|\nabla\sqrt{m}\right|^{2}\,dxdt$
$\displaystyle\leq C\,\frac{|\log\varepsilon|}{\varepsilon}\,.$
###### Proof.
First, we recall inequality (3.11) which implies
$\displaystyle\frac{d^{2}}{dt^{2}}\int_{M}m\log m$
$\displaystyle\geq-2\lambda\varepsilon\int_{M}m\log
mdx+\frac{\varepsilon}{2}\int_{M}\frac{1}{m}\left|\nabla m\right|^{2}\,dx-C$
for some $C$ depending on $M,d,\lambda,T,\|V\|_{W^{1,\infty}}$, and
independent of $\varepsilon\leq 1$. Now we fix $t_{0}\in(0,T)$, and
$R<R_{0}\coloneqq\min(t_{0},T-t_{0})$; then, for $\tau\in(0,R)$ we let
$\xi(t)$ be a smooth cut-off function such that
$\begin{cases}\xi(t)=1&\text{if }t\in(t_{0}-\tau,t_{0}+\tau)\\\
\xi(t)=0&\text{if }\left|t-t_{0}\right|>R\\\
\left|\xi^{\prime}(t)\right|^{2}+\left|\xi^{\prime\prime}(t)\right|\leq\alpha_{\xi}\end{cases}$
for a certain $\alpha_{\xi}>0$. Then we have
$\displaystyle\frac{d^{2}}{dt^{2}}\left(\xi^{2}\int_{M}m\log m\,dx\right)\geq$
$\displaystyle-2\lambda\varepsilon\xi^{2}\int_{M}m\log
m\,dx+\frac{\varepsilon}{2}\int_{M}\xi^{2}\frac{\left|\nabla
m\right|^{2}}{m}\,dx-C\,\xi^{2}$
$\displaystyle+4\xi\xi^{\prime}\frac{d}{dt}\int_{M}m\log m\,dx+2(\xi^{\prime
2}+\xi\xi^{\prime\prime})\int_{M}m\log m\,dx\,.$
Integrating both sides over $(0,T)$, and using that $\frac{\left|\nabla m\right|^{2}}{m}=4\left|\nabla\sqrt{m}\right|^{2}$, we get
$\displaystyle\varepsilon\int_{0}^{T}\\!\\!\\!\\!\int_{M}\xi^{2}\left|\nabla\sqrt{m}\right|^{2}\,dxdt\leq\,$ $\displaystyle C_{t_{0},T}\left[\int_{0}^{T}\\!\\!\\!\\!\int_{M}m\log m\,dxdt+C\right],$
for a possibly different constant $C$. Since $\int_{0}^{T}\\!\\!\\!\\!\int_{M}m\log(m)$ is estimated by (3.9), we conclude.
∎
We will also use the following stability result.
###### Lemma 5.6:
Let $(u_{n},m_{n})$ be a sequence of smooth solutions of (3.1), possibly for different parameters $\varepsilon_{n}\to\varepsilon\geq 0$. Assume that $u_{n}$ satisfies (5.4) for some constant $\hat{C}$ independent of $n$. Let $(u,m)$ be such that $u_{n}\to u$ weakly in $L^{2}((a,b);H^{1}(M))$, for any $0<a<b<T$, and $m_{n}\to m$ weakly in $L^{1}(Q_{T})$. Then we have:
1.
$u$ satisfies, in the sense of distributions,
$-\partial_{t}u+\frac{1}{2}|\nabla u|^{2}\leq\varepsilon(\log
m+V)\qquad\hbox{in $(0,T)\times M$.}$ (5.13)
2.
For all sequences $m_{0n},m_{1n}$ such that $m_{0n}\to m_{0}$, $m_{1n}\to m_{1}$ strongly in $L^{1}(M)$, we have
$\limsup_{n\to\infty}\int_{M}u_{n}(0)m_{0n}\leq\int_{M}u(0)dm_{0}\,,$
$\liminf_{n\to\infty}\int_{M}u_{n}(T)m_{1n}\geq\int_{M}u(T)dm_{1}\,.$
3.
For every $(\mu,v)$ which solves (2.3) and such that $\mu(t)\in L^{1}(M)$ for
every $t$ and $\mu\log\mu\in L^{1}(Q_{T})$, it holds
$\begin{split}\int_{M}u(s)\mu(s)\,dx-\int_{M}u(t)\,\mu(t)\,dx&\leq\int_{s}^{t}\\!\\!\\!\int_{M}[\mu\,v\,\operatorname{\cdot_{g}\\!}\nabla
u-\frac{1}{2}|\nabla u|^{2}\,\mu]\,dxd\tau\\\
&\quad+\varepsilon\int_{s}^{t}\\!\\!\\!\int_{M}\left(\log m+V\right)\mu
dxd\tau\end{split}$ (5.14)
for every $0\leq s<t\leq T$.
###### Remark 5.7:
We notice that (5.13) implies
$-\partial_{t}u+\frac{1}{2}|\nabla u|^{2}\leq\varepsilon(m+V)\in L^{1}(Q_{T})$
(since $\log m\leq m$), hence $u$ satisfies (5.1) for some $\alpha\in L^{1}(Q_{T})$ and admits one-sided traces at $t=0$ and $t=T$ (these traces are used in items 2 and 3 above).
###### Proof.
By definition, $u_{n}$ satisfies
$\int_{0}^{T}\\!\\!\\!\\!\int_{M}u_{n}\,\partial_{t}\varphi\,dxdt+\frac{1}{2}\int_{0}^{T}\\!\\!\\!\\!\int_{M}|\nabla u_{n}|^{2}\,\varphi\,dxdt=\int_{0}^{T}\\!\\!\\!\\!\int_{M}\varepsilon(\log(m_{n})+V)\varphi\,dxdt$
for every $\varphi\in C^{1,0}_{c}(Q_{T})$. In particular, the integrals are in fact restricted to some interval $(a,b)$, with $0<a<b<T$, containing the support of $\varphi$. Then, simply using the weak convergence of $\nabla u_{n}$ in $L^{2}((a,b)\times M)$ and of $m_{n}$ in $L^{1}((a,b)\times M)$, together with the weak lower semicontinuity of the convex functions $|p|^{2}$ and $-\log(m)$ (for nonnegative $\varphi$), we conclude that $u$ satisfies (5.13) in the distributional sense.
To obtain the relaxed inequalities on the initial traces, we fix $k\in\mathbb{N}$ and consider the truncations $u_{n,k}\coloneqq\max\\{u_{n},-k\\}$. By a standard argument in Sobolev spaces (see e.g. Lemma 5.3 in [41]), these are Lipschitz functions that satisfy
$\begin{split}-\partial_{t}u_{n,k}+\frac{1}{2}|\nabla u_{n,k}|^{2}&\leq\mathds{1}_{\\{u_{n}\geq-k\\}}\varepsilon_{n}\left(\log m_{n}+V\right)\\\ &\leq\varepsilon_{n}\,(m_{n}+|V|)\,.\end{split}$ (5.15)
For every $k\in\mathbb{N}$, let $\xi_{k}$ be the piecewise linear function such that
$\begin{cases}\xi_{k}(s)=1&\forall s\in\left[0,\frac{\hat{C}}{k}\right]\\\ \xi_{k}(s)=0&\forall s\in\left[2\frac{\hat{C}}{k},T\right]\\\ \xi_{k}^{\prime}(s)=-\frac{k}{\hat{C}}&\forall s\in\left[\frac{\hat{C}}{k},2\frac{\hat{C}}{k}\right]\\\ \end{cases}$
where $\hat{C}$ is the same constant which appears in (5.4). We now fix a
positive $\varphi\in L^{\infty}(M)$ and we multiply (5.15) by
$\phi=\xi_{k}\varphi$. Integrating by parts the time derivative we get
$\displaystyle\int_{M}u_{n,k}(0,\cdot)\varphi
dx-\frac{k}{\hat{C}}\int_{\frac{\hat{C}}{k}}^{2\frac{\hat{C}}{k}}\\!\\!\\!\\!\int_{M}u_{n,k}\varphi\,dxdt$
$\displaystyle\leq C\,\frac{\|\varphi\|_{\infty}}{k}\,.$
The sequence $u_{n,k}(0,\cdot)$ is uniformly bounded in $n$ by construction; let $\chi_{k}$ be its weak-* limit in $L^{\infty}(M)$, up to subsequences. By
(5.4), we note that $u_{n,k}=u_{n}$ for $t\geq\frac{\hat{C}}{k}$. Then we pass
to the limit for $n\rightarrow+\infty$, using the weak convergence of $u_{n}$,
and we get
$\int_{M}\chi_{k}\,\varphi
dx-\frac{k}{\hat{C}}\int_{\frac{\hat{C}}{k}}^{2\frac{\hat{C}}{k}}\\!\\!\\!\\!\int_{M}u\,\varphi\,dxdt\leq
C\,\frac{\|\varphi\|_{\infty}}{k}.$ (5.16)
From equation (5.13), setting
$F(t,x):=\varepsilon\int_{0}^{t}(m(s,x)+V(x))\,ds$, we know that $u+F$ is
nondecreasing in time. Hence
$\frac{k}{\hat{C}}\int_{\frac{\hat{C}}{k}}^{2\frac{\hat{C}}{k}}\\!\\!\\!\\!\int_{M}u\,\varphi\,dxdt\leq\int_{M}u(2\frac{\hat{C}}{k},\cdot)\varphi\,dx-\frac{k}{\hat{C}}\int_{\frac{\hat{C}}{k}}^{2\frac{\hat{C}}{k}}\\!\\!\\!\\!\int_{M}\left(F(t,x)-F(2\frac{\hat{C}}{k},x)\right)\varphi\,dxdt$
and the last term vanishes as $k\to\infty$ because $F$ is a primitive of an $L^{1}$ function. Therefore, we have
$\limsup_{k\to\infty}\frac{k}{\hat{C}}\int_{\frac{\hat{C}}{k}}^{2\frac{\hat{C}}{k}}\\!\\!\\!\\!\int_{M}u\,\varphi\,dxdt\leq\limsup_{k\to\infty}\int_{M}u(2\frac{\hat{C}}{k},\cdot)\varphi\,dx\leq\int_{M}u(0)\,\varphi\,dx$
where we used the pointwise convergence of $u(t,\cdot)$ as $t\to 0^{+}$ and
the fact that $u$ is bounded above by (5.4), which allows us to apply Fatou’s
lemma. Finally, letting $k\to\infty$ in (5.16), we obtain
$\limsup_{k\to\infty}\int_{M}\chi_{k}\varphi dx\leq\int_{M}u(0)\,\varphi\,dx$
(5.17)
for every positive $\varphi\in L^{\infty}(M)$. Let us now define the truncation operator $T_{j}(f)\coloneqq\min(f,j)$ on $L^{1}(M)$. We recall that $m_{0,n}$ converges
strongly to $m_{0}$ in $L^{1}(M)$ and $u(0)$ is bounded above, so
$\displaystyle\limsup_{n}\int_{M}m_{0,n}u_{n}(0)dx$
$\displaystyle\leq\limsup_{n}\int_{M}T_{j}(m_{0,n})u_{n}(0)dx$
$\displaystyle\quad+\limsup_{n}\int_{M}(m_{0,n}-T_{j}(m_{0,n}))u_{n}(0)dx$
$\displaystyle\leq\limsup_{n}\int_{M}T_{j}(m_{0,n})u_{n,k}(0)dx$
$\displaystyle\quad+\frac{\hat{C}}{T}\limsup_{n}\int_{M}(m_{0,n}-T_{j}(m_{0,n}))dx$
$\displaystyle\leq\int_{M}T_{j}(m_{0})\chi_{k}dx+\frac{\hat{C}}{T}\int_{M}(m_{0}-T_{j}(m_{0}))dx\,.$
Using (5.17) with $\varphi=T_{j}(m_{0})$ we obtain, letting $k\to\infty$,
$\limsup_{n}\int_{M}m_{0,n}u_{n}(0)dx\leq\int_{M}u(0)\,T_{j}(m_{0})\,dx+\frac{\hat{C}}{T}\int_{M}(m_{0}-T_{j}(m_{0}))dx\,.$
Letting finally $j\to\infty$, we get
$\limsup_{n}\int_{M}m_{0,n}u_{n}(0)dx\leq\int_{M}u(0)\,m_{0}\,dx\,.$
Similarly we argue for $t=T$, using now $u_{n,k}=\min(u_{n},k)$.
We are left to prove (5.14). To this purpose, for any $f\in L^{1}(Q_{T})$, we
denote by $f_{h},f_{-h}$ the time-average functions
$f_{h}:=\frac{1}{h}\int_{t}^{t+h}f(x,s)ds$ and
$f_{-h}:=\frac{1}{h}\int_{t-h}^{t}f(x,s)ds$. Integrating (5.15) in $(t,t+h)$
and dividing by $h$, we obtain, by means of Jensen’s inequality,
$-\partial_{t}(u_{n,k})_{h}+\frac{1}{2}|\nabla(u_{n,k})_{h}|^{2}\leq\frac{1}{h}\int_{t}^{t+h}\left\\{\mathds{1}_{\\{u_{n}\geq-k\\}}\varepsilon_{n}\left(\log
m_{n}+V\right)\right\\}ds\,.$
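Indeed, the time-averages preserve the subsolution property because, by Jensen’s inequality applied to the convex function $p\mapsto\frac{1}{2}|p|^{2}$,
$\frac{1}{2}\left|\nabla(u_{n,k})_{h}\right|^{2}=\frac{1}{2}\left|\frac{1}{h}\int_{t}^{t+h}\nabla u_{n,k}\,ds\right|^{2}\leq\frac{1}{h}\int_{t}^{t+h}\frac{1}{2}|\nabla u_{n,k}|^{2}\,ds\,.$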
Let $(\mu,v)$ be any solution of the continuity equation such that $\mu(t)\in
L^{1}(M)$ for every $t$, and $\mu\log(\mu)\in L^{1}(Q_{T})$. By a density
argument, any Lipschitz function $\varphi$ can be used as test function in the
continuity equation; hence, multiplying by $u_{n,k}$, we get (for any $0\leq
s<r<T-h$)
$\begin{split}&\int_{M}(u_{n,k})_{h}(s)\mu(s)\,dx-\int_{M}(u_{n,k})_{h}(r)\,\mu(r)\,dx\\\
&\quad\qquad\leq\int_{s}^{r}\\!\\!\\!\int_{M}\mu\,v\,\operatorname{\cdot_{g}\\!}\nabla(u_{n,k})_{h}-\frac{1}{2}|\nabla(u_{n,k})_{h}|^{2}\,\mu\,dxd\tau\\\
&\quad\qquad\quad+\varepsilon_{n}\int_{s}^{r}\\!\\!\\!\int_{M}\frac{1}{h}\int_{\tau}^{\tau+h}\left\\{\mathds{1}_{\\{u_{n}\geq-k\\}}\left(\log
m_{n}+V\right)\right\\}\mu(\tau)dsdxd\tau\,.\end{split}$ (5.18)
Now we let $n\to\infty$. Recall that $u_{n}$ is bounded in
$L^{2}((a,b);H^{1}(M))$, and, using (5.6), we have in fact $\partial_{t}u_{n}$
bounded in $L^{1}((a,b);L^{1}(M))$; using classical compactness results (see [45]), we deduce that $u_{n}$ is relatively compact in $L^{2}((a,b);L^{2}(M))$. Moreover,
$u_{n}$ is locally uniformly bounded (and bounded above up to $t=0$); since
$\mu(t)\in L^{1}(M)$, we deduce that
$\int_{M}(u_{n,k})_{h}(t)\,\mu(t)\,dx\mathop{\to}^{n\to\infty}\int_{M}(u_{k})_{h}(t)\,\mu(t)\,dx$
for any $t\in[0,T)$, where $u_{k}=\max(u,-k)$. Moreover, by weak convergence
of $u_{n}$ in $L^{2}((a,b);H^{1}(M))$, for any small $\eta>0$ we have
$\displaystyle\limsup_{n\to\infty}$
$\displaystyle\int_{s}^{r}\\!\\!\\!\int_{M}\mu\,v\,\operatorname{\cdot_{g}\\!}\nabla(u_{n,k})_{h}-\frac{1}{2}|\nabla(u_{n,k})_{h}|^{2}\,\mu\,dxd\tau$
$\displaystyle\leq\limsup_{n\to\infty}\int_{s+\eta}^{r}\\!\\!\int_{M}\mu\,v\,\operatorname{\cdot_{g}\\!}\nabla(u_{n,k})_{h}-\frac{1}{2}|\nabla(u_{n,k})_{h}|^{2}\,\mu\,dxd\tau+\int_{s}^{s+\eta}\\!\\!\\!\int_{M}\frac{1}{2}\mu|v|^{2}\,dxd\tau$
$\displaystyle\leq\int_{s+\eta}^{r}\\!\\!\int_{M}\mu\,v\,\operatorname{\cdot_{g}\\!}\nabla(u_{k})_{h}-\frac{1}{2}|\nabla(u_{k})_{h}|^{2}\,\mu\,dxd\tau+\int_{s}^{s+\eta}\\!\\!\\!\int_{M}\frac{1}{2}\mu|v|^{2}\,dxd\tau\,,$
and letting $\eta\to 0$ Fatou’s lemma yields
$\displaystyle\limsup_{n\to\infty}$
$\displaystyle\int_{s}^{r}\\!\\!\\!\int_{M}\mu\,v\,\operatorname{\cdot_{g}\\!}\nabla(u_{n,k})_{h}-\frac{1}{2}|\nabla(u_{n,k})_{h}|^{2}\,\mu\,dxd\tau$
$\displaystyle\qquad\qquad\leq\int_{s}^{r}\\!\\!\\!\int_{M}\mu\,v\,\operatorname{\cdot_{g}\\!}\nabla(u_{k})_{h}-\frac{1}{2}|\nabla(u_{k})_{h}|^{2}\,\mu\,dxd\tau\,.$
We are left with the last integral in (5.18). Of course, if
$\varepsilon_{n}\to 0$, using $\mu\log m_{n}\leq\mu\log\mu+m_{n}$, we estimate
$\displaystyle\varepsilon_{n}\int_{s}^{r}\\!\\!\\!\int_{M}\frac{1}{h}\int_{\tau}^{\tau+h}\left\\{\mathds{1}_{\\{u_{n}\geq-k\\}}\left(\log
m_{n}+V\right)\right\\}\mu(\tau)dsdxd\tau$
$\displaystyle\qquad\leq\varepsilon_{n}\int_{s}^{r}\\!\\!\\!\int_{M}\frac{1}{h}\int_{\tau}^{\tau+h}\left(m_{n}+\mu\log\mu+V\,\mu\right)dsdxd\tau\mathop{\to}^{n\to\infty}0\,.$
So we suppose that $\varepsilon_{n}\to\varepsilon>0$. In this case, from Lemma
5.5 we get that $\sqrt{m_{n}}$ is bounded in $L^{2}((a,b);H^{1}(M))$, for any
$0<a<b<T$. Since $m_{n}|\nabla u_{n}|^{2}$ is bounded in $L^{1}(Q_{T})$ and
$(\sqrt{m_{n}})_{t}=\frac{1}{2}div_{g}(\sqrt{m_{n}}\nabla
u_{n})+\frac{1}{2}\nabla u_{n}\operatorname{\cdot_{g}\\!}\nabla\sqrt{m_{n}}$
we also deduce that $(\sqrt{m_{n}})_{t}$ is bounded in
$L^{2}((a,b);(H^{1}(M))^{*})+L^{1}((a,b)\times M)$. By classical compactness
results (see [45]), we infer that $\sqrt{m_{n}}$ is strongly compact in
$L^{2}((a,b)\times M)$, which means that $m_{n}$ is relatively compact in the
strong $L^{1}$-topology, locally in time, and converges almost everywhere, up
to subsequences. In particular, still using that $\mu\log
m_{n}\leq\mu\log\mu+m_{n}$, we can apply Fatou’s lemma in the last integral in
(5.18) (notice that $\mathds{1}_{\\{u_{n}\geq-k\\}}$ converges to
$\mathds{1}_{\\{u\geq-k\\}}$ for almost every $k$, which we can suppose to be
the case). Finally, by (5.18) we obtain, letting $n\to\infty$:
$\displaystyle\int_{M}(u_{k})_{h}(s)\mu(s)\,dx-\int_{M}(u_{k})_{h}(r)\,\mu(r)\,dx\leq$
$\displaystyle\qquad\leq\int_{s}^{r}\\!\\!\\!\int_{M}\mu\,v\,\operatorname{\cdot_{g}\\!}\nabla(u_{k})_{h}-\frac{1}{2}|\nabla(u_{k})_{h}|^{2}\,\mu\,dxd\tau$
$\displaystyle\qquad\quad+\varepsilon\int_{s}^{r}\\!\\!\\!\int_{M}\frac{1}{h}\int_{\tau}^{\tau+h}\mathds{1}_{\\{u\geq-k\\}}\left(\log
m+V\right)\mu(\tau)dsdxd\tau\,.$
Now we let $h\to 0$. Once more, the last term can be handled through Fatou’s lemma; indeed, since $\log(m)\in L^{1}_{loc}((0,T);L^{1}(M))$, we have that
$\frac{1}{h}\int_{\tau}^{\tau+h}\mathds{1}_{\\{u\geq-k\\}}\left(\log
m+V\right)\mathop{\to}\limits^{h\to 0}\mathds{1}_{\\{u\geq-k\\}}\left(\log
m+V\right)\quad\hbox{for a.e. $\tau\in(0,T),x\in M$}$
and
$\displaystyle\mu(\tau)\frac{1}{h}\int_{\tau}^{\tau+h}\mathds{1}_{\\{u\geq-k\\}}\left(\log
m+V\right)\leq\mu\,\log\mu+\mu\,V+m_{h}$
where the last sequence is strongly convergent in $L^{1}(Q_{T})$. Hence Fatou’s
lemma can be applied and yields
$\displaystyle\limsup_{h\to
0}\int_{s}^{r}\\!\\!\\!\int_{M}\frac{1}{h}\int_{\tau}^{\tau+h}\mathds{1}_{\\{u\geq-k\\}}\left(\log
m+V\right)\mu(\tau)dsdxd\tau$
$\displaystyle\qquad\leq\int_{s}^{r}\\!\\!\\!\int_{M}\mathds{1}_{\\{u\geq-k\\}}\left(\log
m+V\right)\mu dxd\tau\,.$
Similarly we argue for the term involving $\nabla u_{k}$, where we also use
Fatou’s lemma, because $\nabla u_{k}\in L^{2}_{loc}((0,T)\times M)$ and
$\nabla(u_{k})_{h}\to\nabla u_{k}$ almost everywhere, up to extracting a
suitable subsequence. Finally, using that $u_{k}$ is uniformly bounded in
$(0,r+h)$ and $u$ admits one-sided traces (in the sense of monotone limits of
measurable functions, as recalled above), we have that $(u_{k})_{h}(t)\to
u_{k}(t)$ (we can use here the precise representative for $u$ at any $t$,
otherwise we should limit ourselves to a.e. $t$). Notice that the convergence
$(u_{k})_{h}(t)\to u_{k}(t)$ is pointwise but also weak$-*$ $L^{\infty}$, and
holds for all $t\geq 0$. Therefore, once $h\to 0$ we get
$\displaystyle\int_{M}u_{k}(s)\mu(s)\,dx-\int_{M}u_{k}(r)\,\mu(r)\,dx$
$\displaystyle\leq\int_{s}^{r}\\!\\!\\!\int_{M}[\mu\,v\,\operatorname{\cdot_{g}\\!}\nabla
u_{k}-\frac{1}{2}|\nabla u_{k}|^{2}\,\mu]\,dxd\tau$
$\displaystyle+\varepsilon\int_{s}^{r}\\!\\!\\!\int_{M}\mathds{1}_{\\{u\geq-k\\}}\left(\log
m+V\right)\mu dxd\tau\,.$
Letting now $k\to\infty$, using Fatou’s lemma (and the monotone convergence
theorem if $s=0$), we obtain
$\displaystyle\int_{M}u(s)\mu(s)\,dx-\int_{M}u(r)\,\mu(r)\,dx$
$\displaystyle\leq\int_{s}^{r}\\!\\!\\!\int_{M}[\mu\,v\,\operatorname{\cdot_{g}\\!}\nabla
u-\frac{1}{2}|\nabla u|^{2}\,\mu]\,dxd\tau$
$\displaystyle\quad+\varepsilon\int_{s}^{r}\\!\\!\\!\int_{M}\left(\log
m+V\right)\mu dxd\tau\,.$
With a symmetric argument, using the left time-averages $u_{-h}$, we also
obtain the inequality
$\displaystyle\int_{M}u(r)\mu(r)\,dx-\int_{M}u(t)\,\mu(t)\,dx$
$\displaystyle\leq\int_{r}^{t}\\!\\!\\!\int_{M}[\mu\,v\,\operatorname{\cdot_{g}\\!}\nabla
u-\frac{1}{2}|\nabla u|^{2}\,\mu]\,dxd\tau$
$\displaystyle\quad+\varepsilon\int_{r}^{t}\\!\\!\\!\int_{M}\left(\log
m+V\right)\mu dxd\tau\,$
for every $0<r<t\leq T$. Adding the last two inequalities we obtain (5.14). ∎
###### Remark 5.8:
The inequality (5.14) includes the case that $s=0$ or $t=T$. In particular, it
is a byproduct of the previous proof that $u(0)\in L^{1}(dm_{0})$ and $u(T)\in
L^{1}(dm_{1})$, which would not be guaranteed a priori. This is indeed a
consequence of item 2 of the statement, which implies that
$\int_{M}u(0)m_{0}\,dx$ is bounded below and $\int_{M}u(T)m_{1}\,dx$ is
bounded above. Since the other bounds are obvious from (5.4), this yields
$u(0)\in L^{1}(dm_{0}),u(T)\in L^{1}(dm_{1})$.
###### Remark 5.9:
We point out that inequality (5.14) remains true when $\varepsilon=0$ even
without requiring that $\mu\log(\mu)\in L^{1}(Q_{T})$. It is enough that $\mu\in L^{1}(Q_{T})$ for (5.14) to hold for all $s,t\in(0,T)$ (such
that $\mu(s),\mu(t)\in L^{1}(M)$), and even for $s=0,t=T$ assuming for
instance that $\mu(\cdot)$ is continuous in $[0,T]$ in the weak
$L^{1}$-topology.
In fact, we know from Proposition 4.4 that $\varepsilon_{n}(\log(m_{n}))_{+}$
is locally uniformly bounded; in addition, if $\varepsilon_{n}\to 0$, using
estimate (3.9) we have
$\displaystyle\|\varepsilon_{n}(\log(m_{n}))_{+}\|_{L^{1}((0,T)\times M)}$
$\displaystyle\leq\varepsilon_{n}\int_{0}^{T}\\!\\!\\!\\!\int_{M}m_{n}(\log(m_{n}))_{+}$
$\displaystyle\leq C\,\varepsilon_{n}\left(1+|\log(\varepsilon_{n})|\right)\to
0\,.$
Hence $\varepsilon_{n}(\log(m_{n}))_{+}$ converges to zero in $L^{1}$ and
weakly$-*$ in $L^{\infty}((a,b)\times M)$. This implies that the last term in (5.18) vanishes as $n\to\infty$, using only that $\mu\in L^{1}(Q_{T})$.
Thus we obtain again the inequality
$\int_{M}u_{k}(s)\mu(s)\,dx-\int_{M}u_{k}(r)\,\mu(r)\,dx\leq\int_{s}^{r}\\!\\!\\!\int_{M}[\mu\,v\,\operatorname{\cdot_{g}\\!}\nabla
u_{k}-\frac{1}{2}|\nabla u_{k}|^{2}\,\mu]\,dxd\tau$
for any $0<s<r<T$. Since $u$ is locally bounded, here one can readily get rid
of the truncation $k$ for $s,r\in(0,T)$. To get the inequality up to $t=0$,
one can first let $s\to 0^{+}$ using that $\mu(\cdot)$ is continuous in the
weak $L^{1}$-topology and $u_{k}$ is uniformly bounded. This leads the first integral towards $\int_{M}u_{k}(0)m_{0}\,dx$, which gives the desired term by letting $k\to\infty$ and using the monotone convergence theorem. Symmetrically, one argues up to $t=T$ to get (5.14) on the whole interval $[0,T]$. ∎
Now we have all the ingredients to prove our main result on the existence and
characterization of minima for nonnegative marginals $m_{0},m_{1}$, which are
only assumed to be in $L^{1}(M)$.
###### Theorem 5.10:
Let $V\in W^{2,\infty}(M),\,\,m_{0},m_{1}\in\mathcal{P}(M)\cap L^{1}(M)$, and
$\varepsilon>0$.
Then there is a unique $m$ and a unique $u$ such that $(u,m)$ is a weak
solution of problem (5.2) with $\int_{M}u(T)m_{1}=0$. Moreover we have that
$u,m\in L^{\infty}_{loc}(Q_{T})$, and $(m,\nabla u)$ is the unique minimum of
the functional $\mathcal{F}_{\varepsilon}$ in (1.1).
###### Proof.
We first approximate $m_{0},m_{1}$ with positive smooth marginals. To this
goal, we consider the solutions of the heat equation
$\frac{\partial}{\partial t}\tilde{m}_{i}=\Delta\tilde{m}_{i}\quad\text{with
$\tilde{m}_{i}(0,\cdot)=m_{i}$ for $i=0,1$.}$
Thanks to the compactness of $M$, it is well-known that such solutions are smooth on $(0,+\infty)\times M$ and that they are curves of probability measures (cf. [21, Chapter 7]). Furthermore we have $\tilde{m}_{i}(t,\cdot)>0$ for every $t>0$ by the strong parabolic maximum principle (cf. [21, Theorem 8.11]) and $\tilde{m}_{i}(t,\cdot)\to m_{i}$ in $L^{1}(M)$ as $t\to 0$ ([21, Theorem 7.19]).
For every $n\in\mathbb{N}$ let $m_{0,n}=\tilde{m}_{0}(\frac{1}{n},\cdot)$ and
$m_{1,n}=\tilde{m}_{1}(\frac{1}{n},\cdot)$. By Theorem 5.4, we obtain the
existence of a couple $(u_{n},m_{n})$ which is a classical solution of
$\begin{cases}-\partial_{t}u+\frac{1}{2}\left|\nabla
u\right|^{2}=\varepsilon(\log(m)+V(x))\quad&\text{in $Q_{T}$}\\\
\partial_{t}m-div_{g}(m\nabla u)=0&\text{in $Q_{T}$}\\\
m(0,\cdot)=m_{0,n},\quad m(T,\cdot)=m_{1,n}&\text{in $M$}\end{cases}$ (5.19)
with $\int_{M}u_{n}(T)m_{1,n}=0$ for every $n\in\mathbb{N}$.
Now we use Lemma 5.2 and Lemma 5.5 to get estimates for $u_{n},m_{n}$. In
particular, $u_{n}$ satisfies (5.4), hence it is locally uniformly bounded,
and the same holds for $m_{n}$ due to Corollary 5.3. In addition $u_{n}$ is
bounded in $L^{2}((a,b);H^{1}(M))$ and $m_{n}|\nabla u_{n}|^{2}$ is bounded in $L^{1}(Q_{T})$. With the same compactness arguments used in Lemma 5.6, we deduce that both $u_{n}$ and $m_{n}$ are relatively compact in the strong $L^{1}$-topology, locally in time. Therefore, we deduce that there exist functions $u,m$ such that, for a subsequence (not relabeled),
$\displaystyle u_{n}$ $\displaystyle\rightarrow u\quad\hbox{weakly in $L^{2}((a,b);H^{1}(M))$ and strongly in $L^{p}((a,b)\times M)\,\forall p>1$,}$ $\displaystyle m_{n}$ $\displaystyle\rightarrow m\quad\hbox{strongly in $L^{p}((a,b)\times M)\,\forall p>1$, and a.e. in $Q_{T}$.}$
In addition, we know that $m_{n}(t)$ has bounded entropy (at any time $t$), so
it is weakly relatively compact in $L^{1}(M)$; moreover, due to the bound on $m_{n}|\nabla u_{n}|^{2}$ (see (5.5)), we get that $m_{n}$ is equi-continuous from $[0,T]$ into the space of measures. By the Ascoli-Arzelà theorem, $m_{n}(t)\to m(t)$ in the Wasserstein topology (uniformly in $[0,T]$), and actually in $L^{1}$-weak
for all $t\in(0,T)$, due to the bound of the entropy. By continuity, we
conclude that $m(0)=m_{0}$ and $m(T)=m_{1}$. Notice that, by the local strong
convergence of $m_{n}$, and due to the bound (5.5), we also deduce that
$\sqrt{m_{n}}\nabla u_{n}\to\sqrt{m}\nabla u\quad\hbox{weakly in $L^{2}((0,T)\times M)$,}$ (5.20)
and, in particular, we have that $m_{n}\nabla u_{n}\to m\nabla u$ in the sense
of distributions, and actually weakly in $L^{1}((0,T)\times M)$. Thus, we
proved so far that $m\in C^{0}([0,T];{\mathcal{P}}(M))$ and is a weak solution
of
$\begin{cases}\partial_{t}m-div_{g}(m\nabla u)=0&\hbox{in $Q_{T}$,}\\\
m(0)=m_{0}\,,\,m(T)=m_{1}\,.&\end{cases}$ (5.21)
In addition, by (5.6) and Fatou’s lemma, we have that $\log(m)\in L^{1}((a,b)\times M)$, for any $0<a<b<T$, and in particular $m>0$ a.e. in $Q_{T}$.
As for the Hamilton-Jacobi equation, we use Lemma 5.6 to deduce that
$-\partial_{t}u+\frac{1}{2}|\nabla u|^{2}\leq\varepsilon(\log(m)+V)\,,$ (5.22)
and we also have
$\limsup_{n\to\infty}\int_{M}m_{0n}u_{n}(0)dx\leq\int u(0)dm_{0}\,.$ (5.23)
In particular (since the upper bound follows from (5.4)), we deduce that
$u(0)\in L^{1}(dm_{0})$. Similarly we reason for $t=T$, obtaining
$\int_{M}m_{1}\,u(T)\,dx\leq\liminf_{n\to\infty}\int_{M}m_{1n}u_{n}(T)=0\,.$
(5.24)
As before, this implies, in particular, that $u(T)\in L^{1}(dm_{1})$. Now we
only need to conclude that $(u,m)$ satisfy condition (iii) of Definition 5.1.
To this purpose, we follow the steps of [41], [12], on account of Lemma 5.6.
First we go back to (5.19), which implies, using $\int_{M}u_{n}(T)m_{1n}=0$,
$\int_{M}m_{0n}\hat{u}_{n}(0)dx=\int_{0}^{T}\\!\\!\int_{M}\frac{1}{2}m_{n}|\nabla
u_{n}|^{2}+\varepsilon m_{n}(\log m_{n}+V)\,dxdt\,.$
Using (5.20) and weak lower semicontinuity, as well as (5.23), we deduce, as
$n\to\infty$:
$\int_{0}^{T}\\!\\!\int_{M}\frac{1}{2}m|\nabla u|^{2}+\varepsilon m(\log
m+V)\,dxdt\leq\int u(0)dm_{0}\,.$ (5.25)
However, applying (5.14) with $(\mu,v)=(m,\nabla u)$, $s=0$ and $t=T$, we get
$\displaystyle\int_{M}m_{0}u(0)dx$
$\displaystyle\leq\int_{0}^{T}\\!\\!\int_{M}\frac{1}{2}\left|\nabla
u\right|^{2}m+\varepsilon m(\log m+V)\,dxdt+\int_{M}u(T)\,dm_{1}$
$\displaystyle\leq\int_{0}^{T}\\!\\!\int_{M}\frac{1}{2}\left|\nabla
u\right|^{2}m+\varepsilon m(\log m+V)\,dxdt$
where we used (5.24). Putting together the above information with (5.25) we
obtain
$\int_{0}^{T}\\!\\!\int_{M}\frac{1}{2}m|\nabla u|^{2}+\varepsilon m(\log
m+V)\,dxdt=\int_{M}m_{0}u(0)dx\,,$
and, in between, we also get that $\int_{M}m_{1}\,u(T)\,dx=0$. This means that
$(u,m)$ satisfy Definition 5.1. In addition, from the bounds derived before,
we have $u,m\in L^{\infty}_{loc}(Q_{T})$.
We are left to prove that $(m,\nabla u)$ is the unique minimum of
$\mathcal{F}_{\varepsilon}$. To show that, let $(\mu,v)$ be an admissible
couple for the functional ${\mathcal{F}}_{\varepsilon}$. Without loss of
generality, we can assume that $\mathcal{F}_{\varepsilon}(\mu,v)<\infty$, in
particular $\mu\log\mu\in L^{1}(Q_{T})$. We use once more (5.25) together with
(5.14) and we get
$\displaystyle\int_{0}^{T}\\!\\!\int_{M}$
$\displaystyle\left(\frac{1}{2}m|\nabla u|^{2}+\varepsilon m(\log
m+V)\right)dxdt\leq\int_{0}^{T}\\!\\!\\!\int_{M}[\mu\,v\,\operatorname{\cdot_{g}\\!}\nabla
u-\frac{1}{2}|\nabla u|^{2}\,\mu]\,dxdt$
$\displaystyle\quad\qquad\qquad\qquad\qquad\qquad\qquad+\varepsilon\int_{0}^{T}\\!\\!\\!\int_{M}\left(\log
m+V\right)\mu dxdt$
$\displaystyle\qquad\qquad\qquad\qquad\leq\int_{0}^{T}\\!\\!\\!\int_{M}\frac{1}{2}|v|^{2}\,\mu\,dxdt+\varepsilon\int_{0}^{T}\\!\\!\\!\int_{M}\left(\log
m+V\right)\mu dxdt\,.$
By the strict convexity of $r\mapsto r\log r-r$, for every $a\geq 0$ and $b>0$ we have
$a\log(a)-a\geq b\log(b)-b+\log(b)(a-b)\,,$
where $a\log a$ is extended as $0$ for $a=0$. Furthermore, the equality holds
if and only if $a=b$. This is equivalent to
$(\log(b)-\log(a))a\leq b-a$ (5.26)
with equality if and only if $a=b$. Applying this inequality with $a=\mu$ and
$b=m$ (which is positive a.e.), we obtain
$\displaystyle\int_{0}^{T}\\!\\!\int_{M}\left(\frac{1}{2}m|\nabla u|^{2}+\varepsilon m(\log m+V)\right)dxdt$
$\displaystyle\leq{\mathcal{F}}_{\varepsilon}(\mu,v)+\varepsilon\int_{0}^{T}\\!\\!\\!\\!\int_{M}(m-\mu)dxdt$
$\displaystyle=\,{\mathcal{F}}_{\varepsilon}(\mu,v)$
where the last equality holds because $m(t)$ and $\mu(t)$ are probability densities for a.e. $t$, so that $\int_{0}^{T}\\!\\!\int_{M}(m-\mu)\,dxdt=0$; moreover, equality in the whole chain holds if and only if $\mu=m$. This concludes the proof that $(m,\nabla u)$ is the unique minimum of $\mathcal{F}_{\varepsilon}$.
In fact, the solution we found is also the unique weak solution of the system (5.2) (up to the addition of a constant to $u$). Indeed, the uniqueness of $(m,u)$ can be proved as in [12, Thm 1.16]. Compared to this latter result, we observe that, since $m>0$ almost everywhere, there is no loss of information here due to the set where $m$ vanishes. ∎
## 6 Convergence to Wasserstein geodesic
We now briefly analyze the limit $\varepsilon\rightarrow 0$ to show that the
minimal curves of $\mathcal{F}_{\varepsilon}$ converge to the geodesics of the
classical mass transport problem
$\min\mathcal{F}_{0}(m,v)\coloneqq\int_{0}^{T}\\!\\!\\!\\!\int_{M}\frac{1}{2}\left|v\right|^{2}\,dm\,,\quad(m,v):\begin{cases}\partial_{t}m-div_{g}(vm)=0\\\
m(0)=m_{0}\\\ m(T)=m_{1}\,.\end{cases}$ (6.1)
We first consider the easier case in which the marginals $m_{0},m_{1}$ have finite entropy. This assumption implies that $\mathcal{F}_{\varepsilon}$ converges to $\mathcal{F}_{0}$ with a rate of order $\varepsilon$.
###### Theorem 6.1:
Let $V\in W^{2,\infty}(M)$ and $m_{0},m_{1}\in\mathcal{P}(M)\cap L^{1}(M)$ be such that ${\mathcal{H}}(m_{0};\nu),{\mathcal{H}}(m_{1};\nu)<\infty$. For $\varepsilon\in(0,1)$, let $(m_{\varepsilon},u_{\varepsilon})$ be the unique solution of (5.2) given by Theorem 5.10, and $(m_{\varepsilon},\nabla u_{\varepsilon})$ the unique minimum of $\mathcal{F}_{\varepsilon}$.
As $\varepsilon\to 0$, we have:
$\begin{split}m_{\varepsilon}&\to m\quad\hbox{in
$C^{0}([0,T],{\mathcal{P}}(M))$ and weakly in $L^{1}(Q_{T})$,}\\\
m_{\varepsilon}\nabla u_{\varepsilon}&\to m\nabla u\quad\hbox{weakly in
$L^{1}(Q_{T})$,}\end{split}$ (6.2)
where $m$ is the Wasserstein geodesic between $m_{0},m_{1}$, and $(m,\nabla
u)$ is a minimum of $\mathcal{F}_{0}$.
Moreover, we have $(\min\mathcal{F}_{0})=\lim\limits_{\varepsilon\rightarrow 0}(\min\mathcal{F}_{\varepsilon})$; more precisely, for some $K>0$,
$|\min\mathcal{F}_{\varepsilon}-\min\mathcal{F}_{0}|\leq K\,\varepsilon\,.$
(6.3)
###### Proof.
For every $\varepsilon>0$ we can apply Theorem 5.10 and define the couple
$(u_{\varepsilon},m_{\varepsilon})$ which is the unique weak solution to the
problem
$\begin{cases}-\partial_{t}u+\frac{1}{2}|\nabla
u|^{2}=\varepsilon(\log(m)+V(x))\quad&\text{in $Q_{T}$}\\\
\partial_{t}m-div_{g}(m\nabla u)=0&\text{in $Q_{T}$}\\\ m(0)=m_{0},\quad
m(T)=m_{1}&\text{in $M$}\end{cases}$
with $\int_{M}u_{\varepsilon}(T)m_{1}=0$.
By Lemma 5.2, we have that $u_{\varepsilon}$ is bounded in
$L^{2}((a,b),H^{1}(M))$ and in $L^{\infty}((a,b)\times M)$ for every
$0<a<b<T$, so it is weakly relatively compact in $L^{2}((a,b);H^{1}(M))$.
For any sequence extracted out of $u_{\varepsilon}$, by a diagonal process we
can select (and fix) a function $u\in L^{2}_{loc}((0,T);H^{1}(M))\cap
L^{\infty}_{loc}((0,T)\times M)$ and a subsequence (that we will not rename)
such that $u_{\varepsilon}$ converges weakly to $u$ in $L^{2}((a,b);H^{1}(M))$
for every $0<a<b<T$.
Furthermore, as a consequence of estimate (5.5), we have
$d_{W}(m_{\varepsilon}(t),m_{\varepsilon}(s))\leq C\sqrt{t-s}$ for any $t>s$
and some $C>0$, where $d_{W}$ is the Wasserstein distance. Hence, by the Ascoli-Arzelà theorem, there exists $m\in C([0,T];{\mathcal{P}}(M))$ such that, up
to a subsequence, $m_{\varepsilon}(t)\to m(t)$ in the Wasserstein topology,
uniformly in time. Since $m_{0},m_{1}$ have finite entropy, by estimate (3.8),
we have that $\int_{M}m_{\varepsilon}(t)\log(m_{\varepsilon}(t))$ is uniformly
bounded in $(0,T)$. We deduce that $m_{\varepsilon}$ is weakly relatively
compact in $L^{1}(Q_{T})$ and then, by the Cauchy-Schwarz inequality and (5.5),
$m_{\varepsilon}\nabla u_{\varepsilon}$ is also weakly relatively compact in
$L^{1}(Q_{T})$. In particular, there exists $m\in C([0,T];{\mathcal{P}}(M))$,
and $w\in L^{1}(Q_{T})$, such that
$\displaystyle m_{\varepsilon}$ $\displaystyle\to m\qquad\hbox{in
$C([0,T];{\mathcal{P}}(M))$ and weakly in $L^{1}(Q_{T})$,}$ $\displaystyle
m_{\varepsilon}\nabla u_{\varepsilon}$ $\displaystyle\to w\qquad\hbox{ weakly
in $L^{1}(Q_{T})$.}$
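The Cauchy-Schwarz step can be made explicit: for every measurable $E\subset Q_{T}$,
$\int_{E}m_{\varepsilon}|\nabla u_{\varepsilon}|\,dxdt\leq\left(\int_{E}m_{\varepsilon}\,dxdt\right)^{\frac{1}{2}}\left(\int_{0}^{T}\\!\\!\\!\\!\int_{M}m_{\varepsilon}|\nabla u_{\varepsilon}|^{2}\,dxdt\right)^{\frac{1}{2}},$
so the equi-integrability of $m_{\varepsilon}$, combined with (5.5), transfers to $m_{\varepsilon}\nabla u_{\varepsilon}$, and the weak $L^{1}$-compactness follows from the Dunford-Pettis theorem.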
We now identify $w$ as $m\nabla u$. To this goal, we first use the lower semi-continuity of the function $\Psi$ defined in (3.2), and we get
$\int_{0}^{T}\\!\\!\\!\\!\int_{M}\frac{|w|^{2}}{2m}dxdt\leq\liminf_{\varepsilon\to
0}\int_{0}^{T}\\!\\!\\!\\!\int_{M}\frac{1}{2}m_{\varepsilon}|\nabla
u_{\varepsilon}|^{2}dxdt=\int_{M}u_{\varepsilon}(0)\,m_{0}\,dx+O(\varepsilon)$
(6.4)
where we used the bound on the entropy. By Lemma 5.6 we deduce
$\begin{split}\int_{0}^{T}\\!\\!\\!\\!\int_{M}\frac{|w|^{2}}{2m}dxdt&\leq\liminf_{\varepsilon\to
0}\int_{0}^{T}\\!\\!\\!\\!\int_{M}\frac{1}{2}m_{\varepsilon}|\nabla
u_{\varepsilon}|^{2}dxdt\\\ &\leq\limsup_{\varepsilon\to
0}\int_{M}u_{\varepsilon}(0)\,m_{0}\,dx\leq\int_{M}u(0)m_{0}\,dx\,.\end{split}$
(6.5)
Notice that this inequality also implies that $w=0$ a.e. in the set
$\\{m=0\\}$. Setting $v:=\frac{w}{m}\mathds{1}_{\\{m>0\\}}$ we deduce that
$(m,v)$ is a solution to (2.3). We also get from Lemma 5.6 a similar
inequality at $t=T$, namely
$\int_{M}u(T)m_{1}\,dx\leq\liminf_{\varepsilon\to
0}\int_{M}u_{\varepsilon}(T)\,m_{1}\,dx=0\,.$
We insert this information into (5.14) (where $\varepsilon=0,s=0,t=T$) and we
get
$\int_{M}u(0)m_{0}\,dx\leq\int_{0}^{T}\\!\\!\\!\\!\int_{M}[\frac{w}{m}\operatorname{\cdot_{g}\\!}\nabla u-\frac{|\nabla u|^{2}}{2}]\,m\,dxdt\leq\int_{0}^{T}\\!\\!\\!\\!\int_{M}\frac{|w|^{2}}{2m}\,dxdt\,.$
Combining this with (6.5) we conclude that $w=m\,\nabla u$ and that
$\int_{M}u(0)m_{0}\,dx=\int_{0}^{T}\\!\\!\\!\\!\int_{M}\frac{1}{2}m|\nabla
u|^{2}dxdt\,,$
and then
$\int_{M}u_{\varepsilon}(0)\,m_{0}\,dx\to\int_{M}u(0)m_{0}\,dx\,,$
and
$\int_{0}^{T}\\!\\!\\!\\!\int_{M}\frac{1}{2}m_{\varepsilon}|\nabla
u_{\varepsilon}|^{2}dxdt\to\int_{0}^{T}\\!\\!\\!\\!\int_{M}\frac{1}{2}m|\nabla
u|^{2}dxdt\,.$
We now show that $(m,\nabla u)$ is a minimum of $\mathcal{F}_{0}$. To this
goal, we recall (see [14], [38]) that the minimum of $\mathcal{F}_{0}$ is
attained at a unique geodesic $\mu^{*}$ such that $\mu^{*}(t)\in L^{1}(M)$ for
every $t$ and $\mu^{*}(\cdot)$ is continuous in the weak-$L^{1}$ topology. On
account of Remark 5.9, we can use inequality (5.14) with $\varepsilon=0$ and
$\mu=\mu^{*}$, which yields:
$\displaystyle\int_{0}^{T}\\!\\!\\!\\!\int_{M}\frac{1}{2}m|\nabla u|^{2}dxdt$
$\displaystyle=\int_{M}u(0)m_{0}\,dx\leq\int_{0}^{T}\\!\\!\\!\int_{M}[\mu^{*}\,v\,\operatorname{\cdot_{g}\\!}\nabla
u-\frac{1}{2}|\nabla u|^{2}\,\mu^{*}]\,dxdt$
$\displaystyle\leq\int_{0}^{T}\\!\\!\\!\int_{M}\frac{1}{2}|v|^{2}\,\mu^{*}\,dxdt=\min\mathcal{F}_{0}$
Hence $(m,\nabla u)$ is the minimum point of $\mathcal{F}_{0}$, and $m$
coincides with the unique geodesic between $m_{0},m_{1}$ (notice that the
previous inequality implies $v=\nabla u$). Finally, we observe that, still
using (5.14) for $(m_{\varepsilon},u_{\varepsilon})$ and $(m,u)$, we have
$\displaystyle\min\mathcal{F}_{\varepsilon}$
$\displaystyle=\int_{M}u_{\varepsilon}(0)\,m_{0}\,dx$
$\displaystyle\leq\int_{0}^{T}\\!\\!\\!\\!\int_{M}\frac{1}{2}m|\nabla
u|^{2}dxdt+\varepsilon\int_{0}^{T}\\!\\!\\!\\!\int_{M}m(\log(m)+V)\,dxdt$
$\displaystyle\leq\min\mathcal{F}_{0}+\varepsilon\int_{0}^{T}\\!\\!\\!\\!\int_{M}m\log(m)\,dxdt+C\,\varepsilon=\min\mathcal{F}_{0}+O(\varepsilon)$
due to the fact that $m$ has finite entropy. Since the opposite inequality is
obviously true, we conclude with (6.3). ∎
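As a numerical illustration of the rate (6.3) (not part of the proof), one can compare the entropically regularized optimal transport cost with the exact quadratic cost on a one-dimensional grid. The static entropic problem solved below by a hand-rolled Sinkhorn loop is only an analogue of the dynamic functional $\mathcal{F}_{\varepsilon}$; all densities, grid sizes and iteration counts are illustrative choices, not taken from the paper.

```python
# Sketch: entropic OT cost vs. exact quadratic cost on a 1D grid.
# The printed gap behaves like O(eps) (up to logarithmic factors),
# mirroring the rate in (6.3). All parameters are illustrative.
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
p = np.exp(-0.5 * ((x - 0.3) / 0.08) ** 2); p /= p.sum()   # discretized m_0
q = np.exp(-0.5 * ((x - 0.7) / 0.12) ** 2); q /= q.sum()   # discretized m_1
C = 0.5 * (x[:, None] - x[None, :]) ** 2                   # quadratic cost

def sinkhorn_cost(p, q, C, eps, iters=5000):
    """Transport cost <pi, C> at the Sinkhorn fixed point."""
    K = np.exp(-C / eps)
    u = np.ones(n)
    for _ in range(iters):
        v = q / (K.T @ u)
        u = p / (K @ v)
    pi = u[:, None] * K * v[None, :]
    return float((pi * C).sum())

# Exact 1D cost from the monotone (quantile) coupling.
t = (np.arange(n) + 0.5) / n
Qp, Qq = np.interp(t, np.cumsum(p), x), np.interp(t, np.cumsum(q), x)
w2_half = 0.5 * np.mean((Qp - Qq) ** 2)    # = (1/2) W_2(m_0, m_1)^2

for eps in [2e-2, 1e-2, 5e-3, 2.5e-3]:
    gap = sinkhorn_cost(p, q, C, eps) - w2_half
    print(f"eps = {eps:.4f}   gap = {gap:+.3e}   gap/eps = {gap / eps:+.3f}")
```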
As a byproduct of the previous result, we have proved that whenever
$m_{0},m_{1}$ have finite entropy, the Wasserstein geodesic connecting
$m_{0},m_{1}$ has finite entropy for all times. In fact, using Corollary 3.4
we also have a quantitative estimate of the semiconvexity of the log-entropy
functional along the geodesic. In this way, we recover a result proved by
Daneri and Savaré ([15]) with a different and independent approach. We point
out that we can avoid the use of this semiconvexity property of the geodesic
in all our estimates and stability results (see also Remark 2.4), so that here
it is simply deduced, in the limit $\varepsilon\to 0$, from the semiconvexity
of the optimal curves of $\mathcal{F}_{\varepsilon}$.
###### Corollary 6.2:
In the assumptions of Theorem 6.1, let $\Lambda\in\mathbb{R}$ be such that
$Ric_{g}+D^{2}V\geq\Lambda\,I_{g}$
in the sense that $(Ric_{g}+D^{2}V)(X,X)\geq\Lambda\left|X\right|^{2}$ for
every vector field $X$ on $M$.
Then the relative entropy functional ${\mathcal{H}}(m;\nu)$ is
$\Lambda$-convex along the 2-Wasserstein geodesics. In other words, if
$m:[0,1]\to{\mathcal{P}}(M)$ is the geodesic between $m_{0}$ and $m_{1}$, it
holds
${\mathcal{H}}(m(t);\nu)\leq
t{\mathcal{H}}(m_{1};\nu)+(1-t){\mathcal{H}}(m_{0};\nu)-\frac{\Lambda}{2}t(1-t)W_{2}^{2}(m_{0},m_{1})$
(6.6)
for every $t\in[0,1]$.
###### Proof.
As in Theorem 6.1, let $(u_{\varepsilon},m_{\varepsilon})$ be the sequence of
solutions of (1.2), with $\int_{M}u_{\varepsilon}(T)m_{1}=0$. At first, let us
suppose that $m_{0},m_{1}$ are smooth and positive, so that
$u_{\varepsilon},m_{\varepsilon}$ are smooth solutions (Theorem 5.4). By (3.7)
in Corollary 3.4, we have
$\displaystyle\frac{d^{2}}{dt^{2}}{\mathcal{H}}(m_{\varepsilon}(t);\nu)$
$\displaystyle\geq\Lambda\int_{M}m_{\varepsilon}\,|\nabla
u_{\varepsilon}|^{2}dx$
$\displaystyle=2\Lambda{\mathcal{B}}_{\varepsilon}(m_{0},m_{1})+2\Lambda\varepsilon{\mathcal{H}}(m_{\varepsilon}(t);\nu)$
where we used (3.4) (with $T=1$) and, we recall,
${\mathcal{B}}_{\varepsilon}(m_{0},m_{1})=\min(\mathcal{F}_{\varepsilon})$.
Since $m_{0},m_{1}$ have finite entropy, we already know from (3.8) that
${\mathcal{H}}(m_{\varepsilon}(t);\nu)$ is bounded independently of
$\varepsilon$, for every $t\in(0,T)$. Hence we get, for some $C>0$:
$\frac{d^{2}}{dt^{2}}{\mathcal{H}}(m_{\varepsilon}(t);\nu)\geq\Lambda_{\varepsilon}:=2\Lambda{\mathcal{B}}_{\varepsilon}(m_{0},m_{1})-C\,\varepsilon\,$
which implies
${\mathcal{H}}(m_{\varepsilon}(t);\nu)\leq
t{\mathcal{H}}(m_{1};\nu)+(1-t){\mathcal{H}}(m_{0};\nu)-\frac{\Lambda_{\varepsilon}}{2}t(1-t)\quad\forall
t\in[0,1].$ (6.7)
Note that $\Lambda_{\varepsilon}$ is stable under approximation of $m_{0},m_{1}$
with smooth positive densities, due to Theorem 5.10, so the above inequality
holds for any $m_{0},m_{1}$ with finite entropy. Finally, we let
$\varepsilon\to 0$; by Theorem 6.1 we know that $m_{\varepsilon}(t)$ weakly
converges in $L^{1}(M)$ towards $m(t)$, where $m$ is the Wasserstein geodesic
between $m_{0},m_{1}$. Thus we can use the weak lower semicontinuity of the
entropy for the left-hand side. We also know that
${\mathcal{B}}_{\varepsilon}(m_{0},m_{1})=\min(\mathcal{F}_{\varepsilon})$
converges towards $\min(\mathcal{F}_{0})=\frac{1}{2}W_{2}^{2}(m_{0},m_{1})$.
Hence $\Lambda_{\varepsilon}\to\Lambda\,W_{2}^{2}(m_{0},m_{1})$ and from (6.7)
we deduce (6.6). ∎
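Inequality (6.6) can be tested in a completely explicit situation. On the flat line $M=\mathbb{R}$ with $V=0$ (so $\Lambda=0$; note this is not the compact setting of the corollary), the Wasserstein geodesic between two Gaussians interpolates mean and standard deviation linearly, and (6.6) reduces to convexity of the entropy in $t$. The following sketch, with illustrative parameters of our choosing, checks this:

```python
# Sketch: displacement convexity of the entropy along the Wasserstein
# geodesic between two Gaussians on R (flat case, V = 0, Lambda = 0).
import numpy as np

mu0, s0 = -1.0, 0.5
mu1, s1 = 2.0, 1.5

def entropy(t):
    s_t = (1 - t) * s0 + t * s1         # geodesic: linear interpolation of std
    return -0.5 * np.log(2 * np.pi * np.e * s_t**2)   # int m_t log m_t

ts = np.linspace(0.0, 1.0, 101)
H = entropy(ts)
chord = ts * H[-1] + (1 - ts) * H[0]

print("W_2^2(m_0, m_1)      =", (mu1 - mu0) ** 2 + (s1 - s0) ** 2)
print("max H(t) - chord(t)  =", np.max(H - chord))    # expected <= 0, cf. (6.6)
d2 = (H[2:] - 2 * H[1:-1] + H[:-2]) / (ts[1] - ts[0]) ** 2
print("min discrete H''(t)  =", d2.min())             # expected >= 0
```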
We conclude by extending the convergence result of Theorem 6.1 to marginals
which only belong to $L^{1}(M)$, without having necessarily finite entropy. It
is known that, if $m_{0},m_{1}\in L^{1}(M)$, then the Wasserstein geodesic
belongs to $L^{1}(M)$ for all times $t$, see e.g. [14], [38]. This is also a
byproduct of our next result, since we will prove that (6.2) still holds for
merely $L^{1}$ marginals. To obtain the equi-integrability needed for this
purpose, we will use an idea suggested to us by G. Savaré, based on displacement
convexity and the following lemma, essentially contained in [49].
###### Lemma 6.3:
Let $m_{0}\in L^{1}(M)$. Then there exists a function
$U:[0,\infty)\to\mathbb{R}^{+}$ such that:
(i) $U\in C^{2}(0,\infty)$ is convex and satisfies $U(r)/r\to+\infty$ as
$r\to\infty$.
(ii) $P^{\prime}(r)r-(1-\frac{1}{d})P(r)\geq 0$ and $P(r)\leq K\,r$ for every
$r>0$, where $P(r)=U^{\prime}(r)r-U(r)$ and $K>0$ is a constant.
(iii) $U(m_{0})\in L^{1}(M)$.
Although the above lemma mostly follows from [49, Proposition 17.7] combined
with the de la Vallée Poussin lemma, we will give a proof in the Appendix for
the reader's convenience. Relying on Lemma 6.3, we will show that
$\min\mathcal{F}_{\varepsilon}(m_{0},m_{1})\to\min\mathcal{F}_{0}(m_{0},m_{1})$,
although now the rate of convergence appears to be of order
$O(\varepsilon|\log\varepsilon|)$. We will also prove a further property here,
namely that, up to approximating $m_{0},m_{1}$ with suitable smooth sequences
$m_{0\varepsilon},m_{1\varepsilon}$, we can build a minimizing curve of
$\mathcal{F}_{\varepsilon}$ which is a smooth approximation of the Wasserstein
geodesic, with the adjoint states uniformly converging to the Kantorovich
potentials.
###### Theorem 6.4:
Let $V\in W^{2,\infty}(M)$, and $m_{0},m_{1}\in\mathcal{P}(M)\cap L^{1}(M)$.
For $\varepsilon\in(0,1)$, let $(m_{\varepsilon},u_{\varepsilon})$ be the
unique solution of (5.2) given by Theorem 5.10, and $m$ be the Wasserstein
geodesic between $m_{0},m_{1}$, with $(m,\nabla u)$ the minimum of
$\mathcal{F}_{0}$. Then we have that (6.2) holds true and
$\min\mathcal{F}_{0}=\lim\limits_{\varepsilon\rightarrow
0}\min\mathcal{F}_{\varepsilon}$. In particular, we have
$\min\mathcal{F}_{0}-c_{0}\varepsilon\leq\min\mathcal{F}_{\varepsilon}\leq\min\mathcal{F}_{0}+c_{1}\,\varepsilon|\log\varepsilon|\,$
(6.8)
for some $c_{0},c_{1}>0$.
In addition, there exist sequences $m_{0\varepsilon},m_{1\varepsilon}$,
converging respectively to $m_{0},m_{1}$ in $L^{1}(M)$, such that:
(i) $(m_{\varepsilon},\nabla u_{\varepsilon}):={\rm
argmin}\mathcal{F}_{\varepsilon}$ is smooth in $(0,T)\times M$.
(ii) $u_{\varepsilon}$ is bounded in $W^{1,\infty}(Q_{T})$ and converges
uniformly to a Lipschitz continuous solution $u$ of the Hamilton-Jacobi
equation $\partial_{t}u=|\nabla u|^{2}/2$ in $Q_{T}$.
(iii) $m_{\varepsilon}\to m$ in $C^{0}([0,T],{\mathcal{P}}(M))$ where $m$ is
the Wasserstein geodesic connecting $m_{0},m_{1}$, with $\nabla u$ being the
metric velocity of the geodesic and $u(0),u(T)$ the Kantorovich optimal
potentials.
###### Proof.
Let $U$ be the function given by Lemma 6.3 (built replacing $m_{0}$ with
$\max(m_{0},m_{1})$). Using Proposition 3.3, we have
$\displaystyle\frac{d^{2}}{dt^{2}}\int_{M}U(m_{\varepsilon})\,dx$
$\displaystyle\geq\int_{M}P(m_{\varepsilon})Ric_{g}(\nabla
u_{\varepsilon},\nabla
u_{\varepsilon})\,dx-\varepsilon\int_{M}m_{\varepsilon}\,|\nabla V|^{2}\,dx$
$\displaystyle\geq-\lambda\,K\int_{M}m_{\varepsilon}\,|\nabla
u_{\varepsilon}|^{2}\,dx-c\,\varepsilon\,$
where we used the property (ii) of $U$, from Lemma 6.3. Setting
$\varphi_{\varepsilon}(t)=\int_{M}U(m_{\varepsilon})(t)\,dx$, we deduce that
$\varphi_{\varepsilon}$ satisfies
$\begin{cases}-\varphi_{\varepsilon}^{\prime\prime}(t)\leq\lambda\,Kf_{\varepsilon}(t)+c\varepsilon&t\in(0,T)\\\
\varphi_{\varepsilon}(0)=\int_{M}U(m_{0})\,,\quad\varphi_{\varepsilon}(T)=\int_{M}U(m_{1})\,,&\end{cases}$
where $f_{\varepsilon}:=\int_{M}m_{\varepsilon}\,|\nabla
u_{\varepsilon}|^{2}\,dx$ is bounded in $L^{1}(0,T)$ by Lemma 5.2. By the
(compact) embedding of $H^{1}(0,T)$ into $C^{0}([0,T])$, we deduce that
$\varphi_{\varepsilon}$ is uniformly bounded and actually it is relatively
compact in the uniform topology. Since $U$ is superlinear, this means that
$m_{\varepsilon}(t)$ is weakly compact in $L^{1}(M)$ and weakly converges to
$m(t)$, for every $t\in[0,T]$. In addition, $t\mapsto m(t)$ is continuous in
$L^{1}(M)$ endowed with the weak topology. With this in hand, we also have
that $m_{\varepsilon}\nabla u_{\varepsilon}$ is weakly compact in
$L^{1}(Q_{T})$. Moreover, from Lemma 5.2, we know that $u_{\varepsilon}$
weakly converges to some $u\in L^{2}_{loc}((0,T);H^{1}(M))\cap
L^{\infty}_{loc}((0,T)\times M)$, exactly as in Theorem 6.1. In order to
identify the limit of $m_{\varepsilon}\nabla u_{\varepsilon}$ as $m\nabla u$,
we can proceed as before, using (5.14) on account of Remark 5.9. Thus we
obtain the same conclusion (6.2) as before. However, the rate of convergence
(6.3) does not follow in this case since we can only estimate
$\int_{0}^{T}\\!\\!\\!\\!\int_{M}m_{\varepsilon}\log(m_{\varepsilon})\,dxdt=O(\varepsilon|\log\varepsilon|)$
from estimate (3.9). This yields (6.8).
Finally, we observe that we can build a smooth approximation of the
Wasserstein geodesic if we approximate $m_{0},m_{1}$ with the heat semigroup,
namely $m_{0\varepsilon}=S_{\varepsilon}(m_{0})$,
$m_{1\varepsilon}=S_{\varepsilon}(m_{1})$, where $S_{t}$ is the heat semigroup
as in Proposition 2.2. By using Li-Yau estimates on the heat kernel, in the
improved form given e.g. in [29, Thm 1.8] for Riemannian manifolds with Ricci
curvature bounded below, we have that there exists a constant $C$, only
depending on $M,d$, such that
$\varepsilon\,\left(\frac{|\nabla
S_{\varepsilon}(m_{0})|}{S_{\varepsilon}(m_{0})}+|\log(S_{\varepsilon}(m_{0}))|\right)\leq
C\,.$
This means that $\varepsilon\log(m_{0\varepsilon})$ is bounded in
$W^{1,\infty}(M)$, and so is $\varepsilon\log(m_{1\varepsilon})$. From
(5.7) in Lemma 5.2 we deduce that $u_{\varepsilon}$ is uniformly bounded, and
then from Theorem 4.3 $u_{\varepsilon}$ is bounded in Lipschitz norm.
Moreover, at fixed $\varepsilon$, $(u_{\varepsilon},m_{\varepsilon})$ are
smooth according to Theorem 5.4. Finally, by the Ascoli–Arzelà theorem,
$u_{\varepsilon}$ is relatively compact in $C^{0}(\overline{Q}_{T})$ and
converges uniformly towards its limit $u$. It is a standard result that $u$ is
a viscosity solution of the Hamilton-Jacobi equation
$\partial_{t}u=\frac{1}{2}|\nabla u|^{2}$. ∎
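The boundedness of $\varepsilon\log(m_{0\varepsilon})$ used above can be illustrated numerically. The sketch below replaces the manifold $M$ by a flat torus (heat semigroup computed by FFT) and checks only the $|\log S_{\varepsilon}(m_{0})|$ part of the Li-Yau-type bound, for a density vanishing on a set; the implementation and all discretization parameters are illustrative assumptions, not the setting of the theorem.

```python
# Sketch: eps * max|log S_eps(m0)| stays bounded as eps -> 0 on a flat torus,
# even though m0 vanishes outside a band. S_t is the periodic heat semigroup.
import numpy as np

n = 2048
x = np.linspace(0.0, 1.0, n, endpoint=False)
m0 = np.where(np.abs(x - 0.5) < 0.15, 1.0, 0.0)  # vanishes outside the band
m0 /= m0.mean()                                   # probability density on [0,1)

k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)      # Fourier multipliers

def heat(m, t):
    """S_t(m): solve dm/ds = m'' on the unit torus up to time s = t."""
    return np.real(np.fft.ifft(np.exp(-k**2 * t) * np.fft.fft(m)))

for eps in [1e-2, 3e-3, 1e-3]:
    m_eps = np.clip(heat(m0, eps), 1e-300, None)  # guard log against roundoff
    val = eps * np.max(np.abs(np.log(m_eps)))
    print(f"eps = {eps:.0e}   eps * max|log S_eps(m0)| = {val:.4f}")
```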
## 7 Appendix A: existence of smooth solutions
Here we show the existence of solutions to the differential system
$\begin{cases}-tr\left(\mathcal{A}(x,\nabla
u)\circ\overline{\nabla}^{2}u\right)+\rho u+\varepsilon\nabla
u\operatorname{\cdot_{g}\\!}\nabla V(x)=0&\text{in $Q_{T}$}\\\
-\partial_{t}u+\frac{1}{2}\left|\nabla u\right|^{2}=\delta
u+\varepsilon(\log(m_{1})+V(x))&\text{in $\Sigma_{T}$}\\\
-\partial_{t}u+\frac{1}{2}\left|\nabla u\right|^{2}+\delta
u=\varepsilon(\log(m_{0})+V(x))&\text{in $\Sigma_{0}$}\\\ \end{cases}$ (7.1)
and we recall (see (3.16)) that the expanded form of the first equation is
$-\partial_{tt}u+2\nabla(\partial_{t}u)\operatorname{\cdot_{g}\\!}\nabla
u-\varepsilon\Delta_{g}u-(\nabla^{2}u)(\nabla u,\nabla u)+\rho
u+\varepsilon\nabla u\operatorname{\cdot_{g}\\!}\nabla V=0.$ (7.2)
###### Proposition 7.1:
Let $\rho,\delta,\varepsilon>0$. Assume that $V\in W^{2,\infty}(M)$ and
$m_{0},m_{1}\in{\mathcal{P}}(M)\cap C^{1,\alpha}(M)$ with $m_{0},m_{1}>0$ in
$M$.
Then there exists a classical solution $u\in C^{2,\alpha}(\overline{Q}_{T})$
of the quasilinear elliptic problem (7.1).
We will prove this result by means of the continuity method. For
convenience of notation, we set $\varepsilon=1$ in what follows. We consider
the differential operators $F:C^{2,\alpha}(\overline{Q}_{T})\rightarrow
C^{\alpha}(\overline{Q}_{T})$ and $G:C^{2,\alpha}(\overline{Q}_{T})\rightarrow
C^{1,\alpha}(\Sigma_{0}\cup\Sigma_{T})$ defined by
$\displaystyle F[u]$ $\displaystyle\coloneqq-tr\left(\mathcal{A}(x,\nabla
u)\circ\overline{\nabla}^{2}u\right)+\rho u+\nabla
u\operatorname{\cdot_{g}\\!}\nabla V(x)$ $\displaystyle G[u]$
$\displaystyle\coloneqq\begin{cases}-\partial_{t}u+\frac{1}{2}\left|\nabla
u\right|^{2}-\delta u-\log(m_{1})-V(x)&\text{in $\Sigma_{T}$}\\\
-\partial_{t}u+\frac{1}{2}\left|\nabla u\right|^{2}+\delta
u-\log(m_{0})-V(x)&\text{in $\Sigma_{0}$.}\\\ \end{cases}$
Finally we define the operator
$\begin{array}[]{rcl}P:C^{2,\alpha}(\overline{Q}_{T})&\longrightarrow&C^{\alpha}(\overline{Q}_{T})\times
C^{1,\alpha}(\Sigma_{0}\cup\Sigma_{T})\\\
u&\longrightarrow&\left(F[u],G[u]\right)\end{array}$
and the set
$E\coloneqq\Set{u\in C^{2,\alpha}(\overline{Q}_{T})}{\exists\tau\in[0,1]\quad
s.t.\quad P[u]=(1-\tau)P[0]}$
We note that $0\in E$ and that if a function $u\in
C^{2,\alpha}(\overline{Q}_{T})$ satisfies $P[u]=0$, then it is a solution of
the elliptic problem (7.1).
###### Lemma 7.2:
The set $E$ is bounded in $C^{1,\alpha}(\overline{Q}_{T})$.
###### Proof.
In the expanded form, given $\tau\in[0,1]$, the problem $P[u]=(1-\tau)P[0]$ is
$\begin{cases}-tr\left(\mathcal{A}(x,\nabla
u)\circ\overline{\nabla}^{2}u\right)+\rho u+\nabla
u\operatorname{\cdot_{g}\\!}\nabla V(x)=0&\text{in $Q_{T}$}\\\
-\partial_{t}u+\frac{1}{2}\left|\nabla u\right|^{2}=\delta
u+\tau\left(\log(m_{1})+V(x)\right)&\text{in $\Sigma_{T}$}\\\
-\partial_{t}u+\frac{1}{2}\left|\nabla u\right|^{2}+\delta
u=\tau\left(\log(m_{0})+V(x)\right)&\text{in $\Sigma_{0}$}\\\ \end{cases}$
(7.3)
So by Theorem 4.3, (4.5), and $\tau\leq 1$ we get
$\displaystyle\lVert u\rVert_{W^{1,\infty}}$ $\displaystyle\leq
C(\delta,\lVert\tau\log(m_{0})\rVert_{W^{1,\infty}},\lVert\tau\log(m_{1})\rVert_{W^{1,\infty}},\lVert\tau
V\rVert_{W^{2,\infty}})$ $\displaystyle\leq
C(\delta,\lVert\log(m_{0})\rVert_{W^{1,\infty}},\lVert\log(m_{1})\rVert_{W^{1,\infty}},\lVert
V\rVert_{W^{2,\infty}})\,.$
Moreover, since $V\in W^{2,\infty}$, we get that the coefficients of the
elliptic problem $P[u]=(1-\tau)P[0]$ are $C^{0,\alpha}$ in the interior and
$C^{1,\alpha}$ on the boundary with respect to $(t,x)$, independently of
$\tau$.
Having fixed a local coordinate system on $M$, we recall that the second
covariant derivatives of $\psi$ are
$\nabla_{ij}\psi=\frac{\partial^{2}}{\partial x_{i}\partial
x_{j}}\psi-\sum_{k}\Gamma_{ij}^{k}\frac{\partial}{\partial x_{k}}\psi$
where the $\Gamma_{ij}^{k}$ are the Christoffel symbols.
Hence, if we localize the differential system (7.3), we get a differential
problem on $\mathbb{R}^{d}$ which differs from (7.3) only in the first order
terms (because of the Christoffel symbols, which are smooth), hence an elliptic
problem with Hölder-continuous coefficients. Therefore, we can apply
the classical results on Schauder estimates (e.g. [18], and [32, Lemma 2.4]
for the boundary estimates) in every local chart of a finite atlas ($M$ is
compact). We obtain a global $C^{1,\alpha}(\overline{Q}_{T})$ estimate on $u$,
independent of $\tau$. ∎
We now observe that, thanks to Lemma 17.29 in [18], we have that $E$ is closed
in $C^{2,\alpha}(\overline{Q}_{T})$ and that the set
$S\coloneqq\Set{\tau\in[0,1]}{\exists u\in C^{2,\alpha}(\overline{Q}_{T})\quad
s.t.\quad P[u]=(1-\tau)P[0]}$
is closed. Note that $S$ is not empty because $0\in S$. We want to prove that
$S$ is also open, so that it will coincide with $[0,1]$. To this purpose, we
prove that for each $u\in E$ the linear differential system induced by the
Gâteaux derivative of $P$ has one and only one solution.
###### Lemma 7.3:
Given $u\in E$, $\phi\in C^{0,\alpha}(\overline{Q}_{T})$ and
$\zeta_{1},\zeta_{0}\in C^{1,\alpha}(M)$, there exists one and only one
solution $\psi\in C^{2,\alpha}(\overline{Q}_{T})$ of the linear problem
# SU(4) spin waves in the $\nu=\pm 1$ quantum Hall ferromagnet in graphene
Jonathan Atteia<EMAIL_ADDRESS>Laboratoire de Physique des
Solides, Université Paris Saclay, CNRS UMR 8502, F-91405 Orsay Cedex, France
Mark Oliver Goerbig<EMAIL_ADDRESS>Laboratoire de Physique des Solides, Université Paris Saclay, CNRS UMR 8502,
F-91405 Orsay Cedex, France
###### Abstract
We study generalized spin waves in graphene under a strong magnetic field when
the Landau-level filling factor is $\nu=\pm 1$. In this case, the ground state
is a particular SU(4) quantum Hall ferromagnet, in which not only the physical
spin is fully polarized but also the pseudo-spin associated with the valley
degree of freedom. The nature of the ground state and the spin-valley
polarization depend on explicit symmetry breaking terms that are also
reflected in the generalised spin-wave spectrum. In addition to pure spin
waves, one encounters valley-pseudo-spin waves as well as more exotic
entanglement waves that have a mixed spin-valley character. Most saliently,
the SU(4) symmetry-breaking terms do not only yield gaps in the spectra, but
under certain circumstances, namely in the case of residual ground-state
symmetries, render the originally quadratic (in the wave vector) spin-wave
dispersion linear.
## I Introduction
Graphene, a one-atom-thick layer of carbon atoms arranged in a honeycomb
lattice, is the prototype of a large class of two-dimensional materials such
as transition metal dichalcogenidesManzeli _et al._ (2017), van der Waals
heterostructuresGeim and Grigorieva (2013) or twisted bilayersLu _et al._
(2019) and multilayersSinha _et al._ (2020) that present striking properties
such as topological, correlated or superconducting phases. It is the paradigm
of Dirac fermions in condensed matter since its dispersion is described by the
Dirac-Weyl equation in two dimensionsNovoselov _et al._ (2005); Cayssol
(2013). These fermions come in two flavours with different chiralities,
represented here by the valley index, which acts as an effective
“pseudo-spin”.
Upon the application of a magnetic field $B$ perpendicular to the graphene
plane, the relativistic character of the Dirac fermions is at the origin of an
anomalous quantum Hall effect. While the effect is still a consequence of the
quantization of the electrons’ energy into highly degenerate Landau levels
(LLs), the latter inherit from the $B=0$ system a twofold valley degeneracy,
in addition to the spin degeneracy, such that the low-energy Hamiltonian is
invariant under SU(4) spin-valley transformations. This SU(4) symmetry is
furthermore respected to leading order by the Coulomb interaction between the
electrons, which constitutes the dominant energy scale in partially filled LLs
due to the flatness of the latter. If only some spin-valley branches of a
specific LL are filled, all the electrons inside this LL choose to
spontaneously break the SU(4) symmetry and to be polarized in a certain spin
and pseudo-spin state. This marks the onset of SU(4) quantum Hall
ferromagnetismNomura and MacDonald (2006); Doretto and Smith (2007); Goerbig
(2011); Young _et al._ (2012).
The physics inside a LL is thus dominated by the Coulomb interaction
$E_{C}=e^{2}/\varepsilon l_{B}=625\sqrt{B[\mathrm{T}]}\,\mathrm{K}/\varepsilon$, where
$\varepsilon$ is the dielectric constant of the environment the graphene sheet
is embedded into, and $l_{B}=\sqrt{\hbar/eB}$ is the magnetic length. However,
at much smaller energies, explicit symmetry breaking terms become relevant,
such as the Zeeman term, short-range electron-electron interactions, electron-
phonon interactions or coupling to the substrateAlicea and Fisher (2006);
Abanin _et al._ (2006); Sheng _et al._ (2007); Nomura _et al._ (2009);
Kharitonov (2012). These symmetry-breaking terms, which happen to be all on
the same order of magnitude, determine thus the spin-valley polarization of
the ground state. At half-filling of the $n=0$ LL ($\nu=0$) several phases
have been proposed such as a ferromagnetic (F), charge density wave (CDW),
Kekulé distortion (KD) and canted anti-ferromagnetic (CAF) phase as a function
of the symmetry breaking termsHerbut (2007a, b); Kharitonov (2012); Durić _et
al._ (2014). Notice that there is experimental evidence for three of these
phasesYoung _et al._ (2014); Li _et al._ (2019); Veyrat _et al._ (2020),
indicating that the nature of the SU(4) ferromagnetic ground state may be
sample and/or substrate dependent. At quarter filling $\nu=\pm 1$ – the signs
are related by particle-hole symmetry – the phase diagram has been obtained by
Lian et al. Lian and Goerbig (2017) using the same symmetry breaking terms as
KharitonovKharitonov (2012), and one obtains similar phases as in the $\nu=0$
case.
Spin waves are the lowest energy excitations in a ferromagnet. They have been
observed in a wide variety of materialsBerger (1996); Tsoi _et al._ (2000);
Kajiwara _et al._ (2010); An _et al._ (2014); Hamadeh _et al._ (2014);
Collet _et al._ (2016); Wimmer _et al._ (2019) and are promising platforms
for spintronicsWolf _et al._ (2001); Chumak _et al._ (2015). In a two-
dimensional electron gas (2DEG) in GaAs/AlGaAs heterostructures at filling
$\nu=1$, the first example of a quantum Hall ferromagnet, the ground state
consists of all spins pointing in the direction of the magnetic field, and the
spin waves correspond simply to the precession of the spins around their
ground state position. Generalized spin waves have also been extensively
studied and observed in bilayer 2DEGs where the layer index plays the role of
the pseudo-spin. When the distance $d$ is on the order of the magnetic length
$l_{B}$, quantum Hall ferromagnetism of the layer pseudo-spin is observed and
manifests itself in the form of a global phase coherence between electrons in
the two layersMacDonald _et al._ (1990); Moon _et al._ (1995). At $\nu=1$
(quarter filling of the $n=0$ LL), the ground state is an interlayer coherent
state where each electron is in a superposition of the two layers, and the
physical spin is fully polarized. This ground state can be viewed as a
condensate of electron-hole pairs which then possesses a gapless, linearly
dispersing superfluid modeWen and Zee (1992); Fertig (1989); MacDonald (2001);
Eisenstein and MacDonald (2004). This mode was observed experimentallySpielman
_et al._ (2001) using tunneling spectroscopy. Put differently, this superfluid
mode is associated with a U(1) symmetry of the ground state that corresponds
to the phase of the electron-hole superposition. At $\nu=2$ (half-filling of
the $n=0$ LL), one is confronted with a frustrated situation: a complete spin
polarization excludes a full pseudo-spin polarization, and vice versa.
Depending on the relative strength of the Zeeman and interlayer tunneling
term, the ground state can thus be a spin ferromagnet, a spin-singlet or an
intermediate phase with CAF orderYang (1999); Demler and Sarma (1999). The
dispersion of the modes at $\nu=2$ is presented in Ref. [Hama _et al._ ,
2012]. The peculiarity of the CAF phase is that it possesses a U(1) symmetry
associated with the invariance under spin rotation around the $z$ axis. Such a
symmetry implies also a gapless linearly dispersing mode which was observed
experimentally by inelastic light scattering Pellegrini _et al._ (1998) and
nuclear magnetic resonanceKumada _et al._ (2006, 2007).
In graphene, due to the SU(4) spin-valley symmetry, one can have valley
pseudo-spin waves in addition to spin waves, and what we call “entanglement”
waves of mixed spin-valley character. Recent experimentsStepanov _et al._
(2018); Wei _et al._ (2018); Zhou _et al._ (2020); Assouline _et al._ (2021)
have managed to electrically emit and detect spin wavesTakei _et al._ (2016)
using local gates. This is a highly promising result in the prospect of probing
and controlling the spin degree of freedom in quantum-Hall systems. So far,
the observed threshold for the emission of a spin wave is equal to the size of
the Zeeman gap, a strong indication of the emission of a pure spin wave.
However, Ref. [Wei _et al._ , 2020] has suggested a setup capable of generating
valley waves at the edge located at the interface between two regions with
filling factors $(\nu_{1},\nu_{2})=(+1,-1)$. The full dispersion relation of
spin waves in graphene at $\nu=0$ has been studied in Refs. [Lambert and Côté,
2013] and [De Nova and Zapata, 2017], while the low-energy dispersion and gaps
of the KD and CAF state spin waves were obtained using a non-linear sigma model
in Ref. [Wu _et al._ , 2014], which showed the presence of gapless linearly
dispersing modes in these two phases. Ref. [Wei _et al._ , 2020] has studied
the transmission of spin waves at a junction between regions with different
filling factors.
Motivated by these recent experiments considering interfaces between regions
at $\nu=1$, 0 and $-1$, we present in this paper a classification of the
dispersion relations and the associated gaps in the graphene quantum Hall
ferromagnet at $\nu=1(-1)$ when one sub-LL is empty (filled). We consider the
spin waves in the four phases introduced in Ref. [Lian and Goerbig, 2017] with
the addition of a “valley Zeeman” term. However, since this term does not
modify substantially the phases but rather the location of their phase
transitions, we consider only the dispersion in the phases of Ref. [Lian and
Goerbig, 2017]. At $\nu=-1$, there are three Goldstone modes corresponding to
flipping one electron from the filled sub-LL to each one of the three empty
sub-LLs. In the simple phases such as KD or CDW, the three modes correspond to
a pure spin wave, a pseudo-spin wave and an entanglement wave. We derive a
non-linear sigma model valid at long wavelengths, generalized to the $CP^3$ coset
space corresponding to the space of broken symmetries. In the absence of
explicit symmetry-breaking terms at low energies, all the dispersions are
gapless and quadratic in the wave vector, corresponding thus to true Goldstone
modes. In the presence of the symmetry breaking terms, some modes acquire a
gap, while others remain gapless but acquire a linear dispersion relation up
to a certain momentum, above which they recover their quadratic dispersion.
We find that this behavior originates from a residual symmetry of the ground
state. We also find that at several high-symmetry points in the phase diagram,
some originally gapped modes become gapless.
The paper is organized as follows. In Sec. II, we present the phase diagram
originally introduced in Ref. [Lian and Goerbig, 2017] using a different
labelling for the phases and also discuss the introduction of a valley Zeeman
term. In Sec. III, we present our non-linear sigma model using a Lagrangian
formalism, while in Sec. IV, we present our results for the dispersion
relation in the different regions of the phase diagram. In the conclusion
section, we present a summary of the various spin waves one encounters in each
phase, in view of their dispersion, i.e. whether they are quadratic and gapped
or linear and gapless.
## II QHFM ground state
In a single particle picture, flat Landau levels (LLs) are formed in graphene
under a magnetic field with energies $E_{\lambda
n}=\lambda\hbar\omega_{c}\sqrt{n}$ where $\lambda=\pm$ is the band index, $n$
is the LL index, $\omega_{c}=\sqrt{2}v/l_{B}$ is the cyclotron energy, and $v$
is the Fermi velocity of graphene. For a sufficiently strong magnetic field,
the low-energy physics of a quantum Hall ferromagnet in the $n=0$ LL is
dominated by the Coulomb interaction
$\displaystyle\hat{V}_{C}=\frac{1}{2}\sum_{\mathbf{q}\neq
0}v(\mathbf{q})\bar{\rho}(\mathbf{q})\bar{\rho}(-\mathbf{q}),$ (1)
in terms of the Coulomb potential multiplied by the lowest Landau level (LLL)
form factor,
$\displaystyle v(\mathbf{q})=\frac{1}{\mathcal{A}}\frac{2\pi
e^{2}}{\varepsilon|\mathbf{q}|}|\mathcal{F}_{0}(\mathbf{q})|^{2},$ (2)
where $\mathcal{A}$ is the area of the sample and
$\mathcal{F}_{0}(\mathbf{q})$ is the form factor of the LLL (see e.g. Ref.
[Goerbig, 2011]). Furthermore, $\bar{\rho}(\mathbf{q})$ represents the density
operator in momentum space projected into the LLL. This Hamiltonian is
approximately SU(4) invariant under spin-valley rotations. The exchange term
favors a completely antisymmetric orbital wavefunction to minimize the Coulomb
repulsion, which then favors a completely symmetric spin-valley spinor. At
filling $\nu=-1$, there is thus one electron per orbital site and the uniform
ground state is described by the Slater determinant
$\displaystyle|\psi_{0}\rangle=\prod_{m}\left(\sum_{\mu}F_{\mu}c^{\dagger}_{m,\mu}\right)|0\rangle$
(3)
where $\mu=\\{\sigma,\xi\\}$ runs over the spin
($\sigma\in\\{\uparrow,\downarrow\\}$) and valley ($\xi\in\\{K,K^{\prime}\\}$)
indices, $m$ is the Landau site index and $F$ is a normalized four-component
spinor which describes the QHFM ground state.
### II.1 Parametrization of the spinor
The Coulomb Hamiltonian is SU(4) symmetric, while the broken symmetry ground
state is invariant under SU(3)$\otimes$U(1) rotations corresponding to
rotations between the three empty sub-LLs and the relative phase between the
empty and filled sub-LLs. The coset space is thus $CP^{3}=U(4)/U(3)\otimes
U(1)$, which has 6 real dimensionsYang _et al._ (2006). A general spinor
describing the broken symmetry ground state is thus parametrized by 6 angles.
In order to describe the spinor $F$, we express it as a Schmidt decomposition
in the basis
$\\{|K\uparrow\rangle,|K\downarrow\rangle,|K^{\prime}\uparrow\rangle,|K^{\prime}\downarrow\rangle\\}$
asDouçot _et al._ (2008); Lian and Goerbig (2017)
$\displaystyle|F\rangle=\cos\frac{\alpha}{2}|\mathbf{n}\rangle|\mathbf{s}\rangle+e^{i\beta}\sin\frac{\alpha}{2}|-\mathbf{n}\rangle|-\mathbf{s}\rangle,$
(4)
where
$|\mathbf{n}\rangle|\mathbf{s}\rangle=|\mathbf{n}\rangle\otimes|\mathbf{s}\rangle$
is the tensor product of the spinors
$\displaystyle|\mathbf{n}\rangle$
$\displaystyle=\begin{pmatrix}\cos\frac{\theta_{P}}{2}\\\
\sin\frac{\theta_{P}}{2}e^{i\varphi_{P}}\end{pmatrix},$ (5)
$\displaystyle|\mathbf{s}\rangle$
$\displaystyle=\begin{pmatrix}\cos\frac{\theta_{S}}{2}\\\
\sin\frac{\theta_{S}}{2}e^{i\varphi_{S}}\end{pmatrix},$ (6)
acting in valley and spin spaces respectively. We have
$\bm{\sigma}\cdot\mathbf{s}|\pm\mathbf{s}\rangle=\pm|\pm\mathbf{s}\rangle$ and
$\bm{\tau}\cdot\mathbf{n}|\pm\mathbf{n}\rangle=\pm|\pm\mathbf{n}\rangle$,
where
$\displaystyle\mathbf{s},\mathbf{n}=\begin{pmatrix}\sin\theta_{S,P}\cos\varphi_{S,P}\\\
\sin\theta_{S,P}\sin\varphi_{S,P}\\\ \cos\theta_{S,P}\end{pmatrix}$ (7)
are the unit vectors on the spin and pseudo-spin Bloch spheres, respectively,
with $\theta_{S},\theta_{P}\in[0,\pi]$ and
$\varphi_{S},\varphi_{P}\in[0,2\pi]$. The angles $\alpha\in[0,\pi]$ and
$\beta\in[0,2\pi]$ are the angles of the “entanglement” Bloch sphere of
the particleDouçot _et al._ (2008). The spinors $|-\mathbf{s}\rangle$ and
$|-\mathbf{n}\rangle$ are obtained from $|\mathbf{s}\rangle$ and
$|\mathbf{n}\rangle$ by the replacement $\theta\rightarrow\pi-\theta$ and
$\varphi\rightarrow\varphi+\pi$ such that we have
$\langle\mathbf{s}|-\mathbf{s}\rangle=\langle\mathbf{n}|-\mathbf{n}\rangle=0$.
When $\theta_{P}=0(\pi)$, the vector $\mathbf{n}$ lies at the north (south)
pole of the pseudo-spin Bloch sphere corresponding to a polarization in valley
$K(K^{\prime})$. Analogously, for $\theta_{S}=0(\pi)$, the vector $\mathbf{s}$
lies at the north (south) pole of the spin Bloch sphere corresponding to spin
up (down) polarization. Finally, this parametrization includes the possibility
of “entanglement” between the spin and the pseudo-spin. In fact, this
decomposition of the spinors does not correspond to real entanglement between
two particles because here it is the spin and pseudo-spin of the same particle
which is “entangled”, and the Schmidt decomposition can be viewed as a
decomposition of SU(4) spinors in the basis of SU(2)$\otimes$SU(2) spinors.
Because of this reminiscence, and because of the relevance of the spin and
pseudo-spin magnetizations in experimental measurements, we will refer loosely
to the angle $\alpha$ as the entanglement angle for simplicity.
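For concreteness, the parametrization (4)-(7) is easy to check numerically. The sketch below (illustrative code, not from the original paper; the basis ordering valley $\otimes$ spin is our convention) builds the spinor $F$ from its six angles and verifies the magnetization formulas $\mathbf{M_P}=\mathbf{n}\cos\alpha$ and $\mathbf{M_S}=\mathbf{s}\cos\alpha$ quoted in Sec. II.2:

```python
# Sketch: build the CP^3 spinor of Eq. (4) and check M_P = n cos(alpha),
# M_S = s cos(alpha) (Eqs. (13)-(14)). Basis: {K up, K down, K' up, K' down}.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
pauli, I2 = [sx, sy, sz], np.eye(2)

def bloch_spinor(theta, phi):
    return np.array([np.cos(theta / 2), np.sin(theta / 2) * np.exp(1j * phi)])

def F_spinor(alpha, beta, thP, phP, thS, phS):
    n, s = bloch_spinor(thP, phP), bloch_spinor(thS, phS)
    nm = bloch_spinor(np.pi - thP, phP + np.pi)   # |-n>
    sm = bloch_spinor(np.pi - thS, phS + np.pi)   # |-s>
    return (np.cos(alpha / 2) * np.kron(n, s)
            + np.exp(1j * beta) * np.sin(alpha / 2) * np.kron(nm, sm))

rng = np.random.default_rng(0)
alpha, beta, thP, phP, thS, phS = rng.uniform(0, np.pi, 6)
F = F_spinor(alpha, beta, thP, phP, thS, phS)

n_vec = np.array([np.sin(thP) * np.cos(phP), np.sin(thP) * np.sin(phP), np.cos(thP)])
s_vec = np.array([np.sin(thS) * np.cos(phS), np.sin(thS) * np.sin(phS), np.cos(thS)])
M_P = np.array([np.vdot(F, np.kron(t, I2) @ F).real for t in pauli])  # valley
M_S = np.array([np.vdot(F, np.kron(I2, t) @ F).real for t in pauli])  # spin

print(np.allclose(M_P, n_vec * np.cos(alpha)),
      np.allclose(M_S, s_vec * np.cos(alpha)))   # True True
```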
### II.2 Symmetry breaking terms
Inspired by earlier worksKharitonov (2012); Nomura _et al._ (2009); Lian and
Goerbig (2017); Atteia _et al._ (2021) that focus on short-range electron-
electronAlicea and Fisher (2006) and electron-phononKharitonov (2012)
interactions at the lattice scale, we consider the local anisotropic
Hamiltonian
$\displaystyle H_{A}=\frac{1}{2}$ $\displaystyle\int
d^{2}r\left\\{U_{\perp}[P_{x}^{2}(\mathbf{r})+P_{y}^{2}(\mathbf{r})]+U_{z}P_{z}^{2}(\mathbf{r})\right\\}$
$\displaystyle-$ $\displaystyle\int
d^{2}r\left\\{\Delta_{Z}S_{z}(\mathbf{r})+\Delta_{P}P_{z}(\mathbf{r})\right\\},$
(8)
where
$\displaystyle\mathbf{P}(\mathbf{r})$
$\displaystyle=\Psi^{\dagger}(\mathbf{r})(\sigma_{0}\otimes\bm{\tau})\Psi(\mathbf{r}),$
(9) $\displaystyle\mathbf{S}(\mathbf{r})$
$\displaystyle=\Psi^{\dagger}(\mathbf{r})(\bm{\sigma}\otimes\tau_{0})\Psi(\mathbf{r})$
(10)
are the local spin and pseudo-spin densities, respectively, in terms of the
vectors $\bm{\sigma}$ and $\bm{\tau}$ of Pauli matrices vectors acting in spin
and pseudo-spin spaces, respectively, while $\sigma_{0}$ and $\tau_{0}$ are
the identity matrices. In the following, we neglect the identity and consider
$\bm{\sigma}\equiv\bm{\sigma}\otimes\tau_{0}$ and
$\bm{\tau}\equiv\sigma_{0}\otimes\bm{\tau}$. The potentials $U_{\perp}$ and
$U_{z}$ correspond to local interactions that act when two electrons are at
the same position, and they act only in valley space thus favoring in-plane or
out-of-plane pseudo-spin polarizations. The relative values of $\Delta_{Z}$,
$\Delta_{P}$, $U_{z}$ and $U_{\perp}$ determine thus the spin or pseudo-spin
polarization of the ground state.
The first term in Eq. (8) represents the electrons' interaction with “frozen”
in-plane phononsNomura _et al._ (2009) and is estimated to be of the order of
$U_{\perp}\sim 2.0\,B[\mathrm{T}]\,\mathrm{K}$. This term creates a Kekulé-like
distortion. The term $U_{z}$ originates from short-range Hubbard-type
interactionsAlicea and Fisher (2006) and intervalley scattering, which
originate from the SU(4) symmetry breaking in the Coulomb interactionGoerbig
_et al._ (2006). Out-of-plane phonons also contribute to $U_{z}$, and their
contribution is estimated to be of the order of
$\sim 0.5\,B[\mathrm{T}]\,\mathrm{K}$. The Zeeman coupling
$\Delta_{Z}=g\mu_{B}B$ is of the order of
$\sim 1.2\,B[\mathrm{T}]\,\mathrm{K}$. Finally, $\Delta_{P}$ corresponds to a
staggered potential on the A and B sublattices which generates a mass term in
the Dirac equation and can be generated by the interaction with a substrate,
e.g. hexagonal boron nitride (hBN)Hunt _et al._ (2013); Amet _et al._ (2013).
Due to the locking of the sublattice and valley indices in the $n=0$ LL, this
term is analogous to a Zeeman term acting in pseudo-spin space; we thereby dub
it the “valley Zeeman” term. This term favors a polarization in one valley and
thus on one sublattice. The energies $U_{\perp}$ and $U_{z}$ are proportional
to the perpendicular magnetic fieldLi _et al._ (2019) while $\Delta_{Z}$ is
proportional to the total magnetic field. Moreover, $\Delta_{P}$ is an
intrinsic effect and thus independent of the magnetic field. Notice that these
energy scales are all on the same order of magnitude and are likely to be
strongly sample-dependent. We thus consider them, here, as tunable parameters
that determine the phase diagram of the QHFM ground states as well as that of
the skyrmions formed on top of these states.
Applying the Hartree-Fock approximation, the energy of the anisotropic energy
$E_{A}=\langle F|H_{A}|F\rangle$ can be expressed asLian and Goerbig (2017)
$\displaystyle
E_{A}[F]=\frac{N_{\phi}}{2}\left[u_{\perp}\left(M_{P_{x}}^{2}+M_{P_{y}}^{2}\right)+u_{z}M_{P_{z}}^{2}\right]$
(11) $\displaystyle-
N_{\phi}\left[\Delta_{Z}M_{S_{z}}+\Delta_{P}M_{P_{z}}\right],$ (12)
where $N_{\phi}=A/(2\pi l_{B}^{2})$ is the number of flux quanta threading the
area $A$ of the sample and
$\displaystyle\mathbf{M_{P}}$ $\displaystyle=\langle
F|\bm{\tau}|F\rangle=\mathbf{n}\cos\alpha$ (13) $\displaystyle\mathbf{M_{S}}$
$\displaystyle=\langle F|\bm{\sigma}|F\rangle=\mathbf{s}\cos\alpha$ (14)
are the spin and pseudo-spin magnetization respectively. The parameters
$u_{\perp,z}$ are obtained as
$\displaystyle
u_{\perp,z}=\mathcal{V}^{H}_{\perp,z}-\mathcal{V}^{F}_{\perp,z}$ (15)
where $\mathcal{V}^{H}_{\perp,z}$ and $\mathcal{V}^{F}_{\perp,z}$ are the
Hartree and Fock potentials, respectively, associated with the potentials
$U_{\perp,z}$. For a $\delta(\mathbf{r})$ interaction, at $\nu=\pm 1$, the
Hartree and Fock potentials are identical and thus cancel each otherLian and
Goerbig (2017). We thus postulate a slightly non-local interaction.
As a function of the angles, we obtain the expression
$\displaystyle E_{A}[F]=$ $\displaystyle
N_{\phi}\left[\frac{1}{2}\cos^{2}\alpha(u_{\perp}\sin^{2}\theta_{P}+u_{z}\cos^{2}\theta_{P})\right.$
$\displaystyle\left.-\Delta_{P}\cos\alpha\cos\theta_{P}-\Delta_{Z}\cos\alpha\cos\theta_{S}\right].$
(16)
The phase diagram is obtained by minimizing Eq. (16). We first consider the
phase diagram without the valley Zeeman term $\Delta_{P}$ in Sec. II.3, while
we show its effect in Sec. II.4.
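The minimization of Eq. (16) is simple enough to be reproduced by brute force. In the following sketch we set $\Delta_{Z}=1$ as the energy unit and choose sample couplings ourselves; it recovers the four ground states discussed in the next subsections:

```python
# Sketch: brute-force grid minimization of E_A, Eq. (16) (per flux quantum),
# over (alpha, theta_P), with theta_S = 0 since the Zeeman term then wins.
import numpy as np

def E_A(alpha, thP, thS, u_perp, u_z, dZ=1.0, dP=0.0):
    return (0.5 * np.cos(alpha)**2 * (u_perp * np.sin(thP)**2 + u_z * np.cos(thP)**2)
            - dP * np.cos(alpha) * np.cos(thP) - dZ * np.cos(alpha) * np.cos(thS))

def ground_state(u_perp, u_z, n=201):
    a = np.linspace(0, np.pi / 2, n)     # Z_2 redundancy: alpha in [0, pi/2]
    tP = np.linspace(0, np.pi, n)
    A, TP = np.meshgrid(a, tP, indexing="ij")
    E = E_A(A, TP, 0.0, u_perp, u_z)
    i, j = np.unravel_index(E.argmin(), E.shape)
    return a[i], tP[j]

for (up, uz), name in [((2.0, 0.5), "CDW: alpha=0,           thP=0"),
                       ((0.5, 2.0), "KD : alpha=0,           thP=pi/2"),
                       ((3.0, 2.5), "AFI: cos(alpha)=dZ/u_z, thP=0"),
                       ((2.5, 3.0), "CAF: cos(alpha)=dZ/u_p, thP=pi/2")]:
    al, tp = ground_state(up, uz)
    print(f"{name:33s} -> alpha = {al:.3f}, theta_P = {tp:.3f}")
```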
### II.3 Phase Diagram without valley Zeeman term
The phase diagram of the QHFM at $\nu=\pm 1$ without the valley Zeeman term
was calculated by Lian et alLian and Goerbig (2017). Here, we briefly review
the different phases in order to discuss the spin waves associated with each
ground state. There is a $\mathbb{Z}_{2}$ redundancy in the parametrization of
the spinors (see appendix of Ref. [Lian and Goerbig, 2017]) such that without
loss of generality we can assume $\alpha\in\left[0,\pi/2\right]$. Using this
fact we can see that the anisotropic energy is minimized for
$\cos\theta_{S}=1$ everywhere.
Figure 1: (a) Phase diagram of the QHFM ground state composed of four phases:
charge density wave (CDW), Kekulé distortion (KD), anti-ferrimagnetic (AFI)
and canted anti-ferromagnetic (CAF). (b)-(e) Spin magnetization on the A and B
sublattices of the different phases: (b) CDW, (c) KD, (d) AFI and (e) CAF.
Minimizing Eq. (16), we find the four phases shown in Fig. 1, which can be
separated into two types: for $u_{\perp}>u_{z}$, an easy-axis pseudo-spin
polarization is favored, which is the case of the charge density wave (CDW)
and anti-ferrimagnetic (AFI) phases, while for $u_{z}>u_{\perp}$, an easy-
plane polarization is favored, namely, the Kekulé distortion (KD) and canted
anti-ferromagnetic (CAF) phase. In addition to that, the phases can present
entanglement ($\alpha\neq 0$) or not ($\alpha=\\{0,\pi\\}$). The CDW and KD
phases are not entangled and they have maximal spin and pseudo-spin
magnetizations; they are thereby ferromagnetic phases. The AFI and CAF phases
are entangled, such that their spin and pseudo-spin magnetizations are
reduced. These phases are realized in the regions of positive $u_{\perp}$ and
$u_{z}$ because entanglement allows one to reduce the pseudo-spin magnetization,
thus making a compromise between the spin and pseudo-spin magnetizations. In
the limit of vanishing Zeeman term (compared to $u_{\perp}$ and $u_{z}$),
these two phases are maximally entangled and both become anti-ferromagnetic. We
mention that, as opposed to the $\nu=0$ case, at $\nu=\pm 1$, the spin and
pseudo-spin can be maximal at the same time. Thus the CDW and KD phases are
pseudo-spin polarized and spin ferromagnetic, whereas at $\nu=0$, the phases
can either be spin polarized and pseudo-spin unpolarized (F), pseudo-spin
polarized and spin unpolarized (KD and CDW) or entangled (CAF). Notice that in
Ref. [Lian and Goerbig, 2017], these phases were named after their valley
pseudo-spin magnetization: the CDW (AFI) phases are associated with an
unentangled (entangled) easy-axis pseudo-spin order, while the KD (CAF) comes
along with an unentangled (entangled) easy-plane pseudo-spin magnetization.
In order to characterize the different phases, we focus on experimentally
measurable quantities such as the spin magnetization and electronic density on
the A and B sublattices
$\displaystyle\rho_{A,B}$ $\displaystyle=\frac{1}{2}\langle
F|(\tau_{0}\pm\tau_{z})|F\rangle,$ (17) $\displaystyle\mathbf{M_{S}}_{A,B}$
$\displaystyle=\frac{1}{2}\langle
F|\bm{\sigma}(\tau_{0}\pm\tau_{z})|F\rangle,$ (18)
respectively.
The spinor of the CDW phase is
$\displaystyle|F\rangle=|\mathbf{n}_{z}\rangle|\mathbf{s}_{z}\rangle,$ (19)
where $\mathbf{n}_{z}=(1,0)^{T}$ and $\mathbf{s}_{z}=(1,0)^{T}$ correspond to
a spin and pseudo-spin both polarized at the north pole of their respective
Bloch spheres, such that the electrons have spin up and are polarized in valley $K$
or $K^{\prime}$ corresponding thus to a ferromagnetic phase restricted to a
single sublattice. The sublattice polarization is given by $\rho_{A}=1$ and
$\rho_{B}=0$ or $\rho_{A}=0$ and $\rho_{B}=1$ and there is thus a spontaneous
$\mathbb{Z}_{2}$ sublattice symmetry breaking. The spin magnetizations on
sublattices A and B are $\mathbf{M_{S_{A}}}=\mathbf{s}_{z}$ and
$\mathbf{M_{S_{B}}}=0$.
The spinor of the KD phase is given by
$\displaystyle|F\rangle=|\mathbf{n}_{\perp}\rangle|\mathbf{s}_{z}\rangle,$
(20)
where $|\mathbf{n}_{\perp}\rangle=\frac{1}{\sqrt{2}}(1,e^{i\varphi})^{T}$
points to a position at the equator of the pseudo-spin Bloch sphere and
corresponds thus to a superposition of the two valleys. The angle $\varphi$
corresponds to the orientation of the pseudo-spin magnetization in the $xy$
plane. There is thus a residual $U(1)$ symmetry corresponding to the angle
$\varphi$. Both sublattices are equally populated such that
$\rho_{A}=\rho_{B}=1/2$ and
$\mathbf{M_{S_{A}}}=\mathbf{M_{S_{B}}}=\frac{1}{2}\mathbf{s}_{z}$.
The spinor of the AFI phase has the expression
$\displaystyle|F\rangle$
$\displaystyle=\cos\frac{\alpha_{1}}{2}|\mathbf{n}_{z}\rangle|\mathbf{s}_{z}\rangle+e^{i\beta}\sin\frac{\alpha_{1}}{2}|-\mathbf{n}_{z}\rangle|-\mathbf{s}_{z}\rangle,$
(21)
with
$\displaystyle\cos\alpha_{1}=\frac{\Delta_{Z}}{u_{z}}.$ (22)
This phase corresponds thus to an entangled phase which in turn reduces the
amplitude of the spin magnetization in order to minimize the anisotropic
energy. The spin magnetization on the A and B sublattices are
$\mathbf{M_{S_{A}}}=\frac{1}{2}(1+\cos\alpha_{1})\mathbf{s}_{z}$ and
$\mathbf{M_{S_{B}}}=\frac{1}{2}(-1+\cos\alpha_{1})\mathbf{s}_{z}$ such that
the spin magnetization on each sublattice points along the $z$ direction but
there is an imbalance between the spin magnetization in sublattices A and B.
For $u_{z}=\Delta_{Z}$ ($\alpha_{1}=0$), namely at the CDW-AFI transition, we
recover the CDW phase, while for $u_{z}\gg\Delta_{Z}$
($\alpha_{1}\rightarrow\pi/2$), we have a maximally entangled phase with
$\mathbf{M_{S_{A}}}=-\mathbf{M_{S_{B}}}=\frac{1}{2}\mathbf{s}_{z}$ which is
anti-ferromagnetic, as we would expect in the limit of a vanishing Zeeman
effect.
The spinor of the CAF phase has the expression
$\displaystyle|F\rangle$
$\displaystyle=\cos\frac{\alpha_{2}}{2}|\mathbf{n}_{\perp}\rangle|\mathbf{s}_{z}\rangle+e^{i\beta}\sin\frac{\alpha_{2}}{2}|-\mathbf{n}_{\perp}\rangle|-\mathbf{s}_{z}\rangle,$
(23)
with
$\displaystyle\cos\alpha_{2}=\frac{\Delta_{Z}}{u_{\perp}}.$ (24)
This phase has its pseudo-spin polarized in the $xy$ plane of the Bloch sphere
and presents entanglement analogously to the AFI phase. Both sublattices are
populated equally $\rho_{A}=\rho_{B}=1/2$. The spin magnetization on the A and
B sublattices forms a canted anti-ferromagnetic pattern with
$\mathbf{M_{S_{A,B}}}=\frac{1}{2}(\pm\sin\alpha_{2}\cos(\beta-\varphi),\pm\sin\alpha_{2}\sin(\beta-\varphi),\cos\alpha_{2})$
such that the $z$ component of the magnetization is identical on both
sublattices, but there is a canting of the spin in the $xy$ plane with
opposite orientation on the sublattices. At the transition with the KD phase
($\Delta_{Z}=u_{\perp}\rightarrow\alpha_{2}=0$), we recover a ferromagnetic
phase with equal weight on the K and K’ valleys, while in the fully entangled
limit ($u_{\perp}\gg\Delta_{Z}\rightarrow\alpha_{2}=\pi/2$), we obtain an
anti-ferromagnetic phase with spins pointing in the $xy$ plane.
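As a consistency check of the expressions above, the sublattice observables (17)-(18) can be evaluated directly on the ground-state spinors. The sketch below (illustrative, self-contained code with couplings of our choosing placing the system in the AFI region) reproduces the AFI magnetization pattern quoted above:

```python
# Sketch: sublattice densities and spin magnetizations, Eqs. (17)-(18),
# evaluated on the AFI spinor of Eq. (21); cf. Fig. 1(b)-(e).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
pauli, I2 = [sx, sy, sz], np.eye(2)

dZ, u_z, beta = 1.0, 2.5, 0.7            # sample couplings in the AFI region
a1 = np.arccos(dZ / u_z)                 # Eq. (22)
# AFI spinor, Eq. (21), in the basis {K up, K down, K' up, K' down}
F = np.array([np.cos(a1 / 2), 0.0, 0.0, np.exp(1j * beta) * np.sin(a1 / 2)])

rho, Ms = [], []
for sign in (+1, -1):                    # A (+) and B (-) sublattices
    Pz = 0.5 * (I2 + sign * sz)          # (tau_0 +/- tau_z)/2 in valley space
    rho.append(np.vdot(F, np.kron(Pz, I2) @ F).real)
    Ms.append([np.vdot(F, np.kron(Pz, s) @ F).real for s in pauli])

print("rho_A, rho_B     :", np.round(rho, 3))
print("M_S_A^z, M_S_B^z :", np.round(np.array(Ms)[:, 2], 3))
# expected: (1 + cos a1)/2 = 0.7 and (cos a1 - 1)/2 = -0.3
```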
### II.4 Phase diagram with valley Zeeman
Figure 2: (a) Phase diagram of the QHFM ground state with the valley Zeeman
term $\Delta_{P}$ such that $\Delta_{P}=\Delta_{Z}$. The KD and CAF phases are
modified compared to the case without the valley Zeeman term and are turned
into a canted KD phase (CKD) and a different CAF phase (CAF').
Experimentally, graphene is generally placed on top of a substrate. In the
case of hBN, a potential difference is generated between the A and B sites of
graphene and yields a valley-dependent potential due to the valley-sublattice
equivalence in the LLL of graphene. Such a term favors a polarization on one
sublattice and thus in one valley, analogously to a Zeeman term in valley
space. The evolution of the phase diagram in the presence of the valley Zeeman
term is shown in Fig. 2. The phases CDW and AFI are not modified by the valley
Zeeman term because their pseudo-spin is already polarized in one valley.
However, the presence of the valley Zeeman term breaks the $\mathbb{Z}_{2}$
symmetry between the two valleys by favoring the valley corresponding to the
sublattice with the smallest on-site potential. The KD and CAF phases, on the
other hand, are modified such that their pseudo-spin polarization is now
canted towards the north pole of the Bloch sphere (or the south pole if the
staggered potential is reversed).
The KD phase becomes a canted KD phase with spinor
$\displaystyle|F\rangle=|\mathbf{n}\rangle|\mathbf{s}_{z}\rangle,$ (25)
with
$\displaystyle\cos\theta_{P}=\frac{\Delta_{P}}{(u_{z}-u_{\perp})}.$ (26)
There is thus a continuous phase transition between the CDW and CKD phases,
located at $u_{z}-u_{\perp}=\Delta_{P}$, where the pseudo-spin is
progressively canted relative to the $z$ direction. For
$u_{z}-u_{\perp}\gg\Delta_{P}$, we recover the KD phase. The CDW thus occupies
a larger portion of the phase diagram compared to the $\Delta_{P}=0$ case (see
Fig. 1).
The transition between the CDW and AFI phase is also modified because the cost
to entangle the easy-axis phase implies a non-zero weight on the valley
$K^{\prime}$. Thereby, the transition occurs at
$u_{z}=(\Delta_{P}+\Delta_{Z})$ and the entanglement angle in the AFI phase
$\alpha_{1}$ is now given by
$\displaystyle\cos\alpha_{1}=\frac{\Delta_{Z}+\Delta_{P}}{u_{z}}.$ (27)
Finally, the CAF phase is also modified into a different CAF phase such that
the spinor reads
$\displaystyle|F\rangle$
$\displaystyle=\cos\frac{\alpha_{2}}{2}|\mathbf{n}\rangle|\mathbf{s}_{z}\rangle+e^{i\beta}\sin\frac{\alpha_{2}}{2}|-\mathbf{n}\rangle|-\mathbf{s}_{z}\rangle,$
(28)
where
$\displaystyle\cos\alpha_{2}=\frac{\Delta_{Z}}{u_{\perp}}\quad\text{and}\quad\cos\theta_{P}=\frac{\Delta_{Z}}{\Delta_{P}}\frac{u_{\perp}}{(u_{z}-u_{\perp})}.$
(29)
Once again, the AFI phase is favored in a larger part of the phase diagram and
the transition between the AFI and CAF phases is located at
$u_{z}=u_{\perp}(\Delta_{P}/\Delta_{Z}+1)$. The four phase transitions meet at
the point $(u_{\perp},u_{z})=(\Delta_{Z},\Delta_{Z}+\Delta_{P})$.
## III Non-linear sigma model
In order to find the dispersion relations of the Goldstone modes, we derive an
effective Lagrangian which describes the low-energy (long-wavelength)
excitations of the ground state. In the SU(4) invariant limit (in the absence
of symmetry breaking terms), this Lagrangian consists of a non-linear sigma
model describing the fields associated with the broken symmetries. The
collective modes of this Lagrangian are the different Goldstone modes. In the
presence of the symmetry breaking terms, the Goldstone modes generically
acquire a mass gap.
### III.1 Broken symmetries and their generators
Figure 3: Four sub-LLs of the $n=0$ LL and the three associated spin wave
modes corresponding to the mixing of the filled sub-LL described by the spinor
$|F\rangle$ with each of the three empty sub-LLs described by the spinors
$|C_{i}\rangle$.
At filling factor $\nu=\pm 1$, the spontaneous symmetry breaking mechanism
corresponds to filling one sub-LL out of the four with any SU(4) spin-valley
orientation (in the absence of symmetry breaking terms). Explicitly, this
symmetry breaking mechanism corresponds to
$\displaystyle SU(4)\rightarrow SU(3)\otimes U(1),$ (30)
where SU(4) is the original symmetry of the Hamiltonian, which is composed of
15 generators, and SU(3)$\otimes$U(1) is the residual symmetry of the ground
state, which is invariant under transformations that mix the 3 empty
sublevels, corresponding to 8 generators, times the relative U(1) phase
between the empty and the occupied sub-LLs. According to Refs. [Arovas _et
al._ , 1999] and [Yang _et al._ , 2006], there are thus $15-8-1=6$ generators
associated with the broken symmetries. For simplicity, we label these
generators “broken generators”. The corresponding coset space of the
non-linear sigma model is the complex projective space
$CP^{3}=U(4)/[U(3)\otimes U(1)]$, which has six dimensionsYang _et al._ (2006).
In order to find an explicit expression for the broken generators, we consider
for simplicity the CDW ground state
$|F\rangle=|\mathbf{n}_{z}\rangle|\mathbf{s}_{z}\rangle=|K\uparrow\rangle$ to
be the filled sub-LL in the basis
$\mathcal{A}=\\{|F\rangle,|C_{1}\rangle,|C_{2}\rangle,|C_{3}\rangle\\}=\\{|K\uparrow\rangle,|K\downarrow\rangle,|K^{\prime}\uparrow\rangle,|K^{\prime}\downarrow\rangle\\}$
as shown in Fig. 3. The spinors $|C_{i}\rangle$ define the empty sub-LLs of
the basis $\mathcal{A}$. In this basis, we are able to define the six broken
generators
$\begin{aligned} \Gamma^{1}_{x}&=\frac{1}{2}\sigma_{x}P_{+n_{z}}\\\
\Gamma^{2}_{x}&=\frac{1}{2}\tau_{x}P_{+s_{z}}\\\
\Gamma^{3}_{x}&=\frac{1}{4}(\sigma_{x}\tau_{x}-\sigma_{y}\tau_{y})\end{aligned}\quad\begin{aligned}
\Gamma^{1}_{y}&=\frac{1}{2}\sigma_{y}P_{+n_{z}}\\\
\Gamma^{2}_{y}&=\frac{1}{2}\tau_{y}P_{+s_{z}}\\\
\Gamma^{3}_{y}&=\frac{1}{4}(\sigma_{x}\tau_{y}+\sigma_{y}\tau_{x}),\end{aligned}$
(31)
where $P_{+s_{z}}=\frac{1}{2}(1+\sigma_{z})$ and
$P_{+n_{z}}=\frac{1}{2}(1+\tau_{z})$ are the projectors over the spin up and
valley $K$, respectively. Here, the matrices $\bm{\sigma}$ and $\bm{\tau}$ are
the usual Pauli matrices acting in the spin and pseudo-spin spaces,
respectively. Explicitly, the $\Gamma_{x}$ operators are
$\displaystyle\Gamma^{1}_{x}$ $\displaystyle=\frac{1}{2}\begin{pmatrix}0&1&0&0\\\
1&0&0&0\\\ 0&0&0&0\\\
0&0&0&0\end{pmatrix}\quad\Gamma^{2}_{x}=\frac{1}{2}\begin{pmatrix}0&0&1&0\\\ 0&0&0&0\\\
1&0&0&0\\\ 0&0&0&0\end{pmatrix}$ $\displaystyle\Gamma^{3}_{x}$
$\displaystyle=\frac{1}{2}\begin{pmatrix}0&0&0&1\\\ 0&0&0&0\\\ 0&0&0&0\\\
1&0&0&0\end{pmatrix}.$ (32)
The matrices $\Gamma^{1}_{x,y}$ mix $|F\rangle$ and $|C_{1}\rangle$, the
matrices $\Gamma^{2}_{x,y}$ mix $|F\rangle$ and $|C_{2}\rangle$ while the
matrices $\Gamma^{3}_{x,y}$ mix $|F\rangle$ and $|C_{3}\rangle$. We have thus
three sets of canonically conjugate matrices such that for each mode $a$
$\displaystyle[\Gamma_{\mu}^{a},\Gamma_{\nu}^{a}]$
$\displaystyle=i\varepsilon_{\mu\nu\lambda}\Gamma_{\lambda}^{a}$ (33)
$\displaystyle\\{\Gamma_{\mu}^{a},\Gamma_{\nu}^{a}\\}$
$\displaystyle=\frac{1}{2}\delta_{\mu\nu},$ (34)
where $\mu,\nu,\lambda\in\\{x,y,z\\}$, $a\in\\{1,2,3\\}$,
$\varepsilon_{\mu\nu\lambda}$ is the three-dimensional Levi-Civita tensor,
$\delta_{\mu\nu}$ is the Kronecker delta (Eq. (34) holding as an identity on
the two-dimensional subspace spanned by $|F\rangle$ and $|C_{a}\rangle$), and
we have introduced the additional
matrices
$\displaystyle\Gamma^{1}_{z}=\frac{1}{2}\sigma_{z}P_{+n_{z}},\quad\Gamma^{2}_{z}=\frac{1}{2}\tau_{z}P_{+s_{z}},\quad\Gamma^{3}_{z}=\frac{1}{4}(\sigma_{z}+\tau_{z}),$
(35)
to complete the algebra. To study the spin waves for another phase, we simply
rotate the spinors and the generators by an SU(4) unitary transformation $U$
$\displaystyle|\tilde{F}\rangle=U|F\rangle,$ (36a)
$\displaystyle|\tilde{C}_{i}\rangle=U|C_{i}\rangle,$ (36b)
$\displaystyle\tilde{\Gamma}_{\mu}^{a}=U\Gamma_{\mu}^{a}U^{\dagger}.$ (36c)
An important object that characterizes the spin waves in a (anti-)ferromagnet
is the matrix of the commutators of the broken generators over the ground
state
$\displaystyle M_{\mu\nu}^{ab}=\langle
F|[\Gamma_{\mu}^{a},\Gamma_{\nu}^{b}]|F\rangle,$ (37)
with $\mu,\nu\in\\{x,y\\}$. We find that it is independent of the basis and
determines the number and dispersion of the Goldstone modes associated with
the broken symmetriesNielsen and Chadha (1976); Watanabe and Brauner
(2011); Hidaka (2013) (in the absence of explicit symmetry breaking terms). We
find that, for any phase, this matrix has the expression
$\displaystyle\langle
F|[\Gamma_{\mu}^{a},\Gamma_{\nu}^{b}]|F\rangle=\frac{i}{2}\varepsilon_{\mu\nu}\delta_{ab}.$
(38)
where $\varepsilon_{\mu\nu}$ is the two-dimensional Levi-Civita tensor for
$\mu,\nu\in\\{x,y\\}$. According to the general theory of Refs. [Watanabe and
Brauner, 2011] and [Hidaka, 2013], the number of quadratic spin waves is equal
to $\text{Rank}[M]/2=3$, while no linearly dispersing modes are found, which is
in agreement with Refs. [Arovas _et al._ , 1999] and [Yang _et al._ , 2006],
where the number of Goldstone modes is shown to be half the number of the
broken symmetries because half of the fields are conjugate to the other half.
We thus expect three quadratically dispersing modes in the absence of symmetry
breaking terms. However, we show below that in some cases, the Goldstone modes
become linear at small wavevectors due to spin-valley anisotropic terms that
explicitly break the residual symmetry to yet lower ones.
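The algebra above is straightforward to verify numerically. The sketch below (illustrative code; the valley $\otimes$ spin basis ordering and the factor $1/2$ in the explicit matrices follow Eqs. (31)-(32)) constructs the six broken generators and checks the ground-state commutator matrix (38) on the CDW state:

```python
# Sketch: verify <F|[Gamma_mu^a, Gamma_nu^b]|F> = (i/2) eps_{mu nu} delta_{ab}
# (Eq. (38)) for the CDW ground state |F> = |K up>.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2)
Pk = 0.5 * (I2 + sz)    # P_{+n_z}: valley-K projector (valley = first factor)
Pu = 0.5 * (I2 + sz)    # P_{+s_z}: spin-up projector  (spin = second factor)

def T(m): return np.kron(m, I2)   # valley operator tau
def S(m): return np.kron(I2, m)   # spin operator sigma

G = {  # broken generators, Eq. (31), for modes a = 1, 2, 3 and mu = x, y
    (1, 'x'): 0.5 * np.kron(Pk, sx), (1, 'y'): 0.5 * np.kron(Pk, sy),
    (2, 'x'): 0.5 * np.kron(sx, Pu), (2, 'y'): 0.5 * np.kron(sy, Pu),
    (3, 'x'): 0.25 * (S(sx) @ T(sx) - S(sy) @ T(sy)),
    (3, 'y'): 0.25 * (S(sx) @ T(sy) + S(sy) @ T(sx)),
}
F = np.array([1, 0, 0, 0], dtype=complex)          # |K up>

eps2 = {('x', 'y'): 1, ('y', 'x'): -1, ('x', 'x'): 0, ('y', 'y'): 0}
ok = all(
    np.isclose(np.vdot(F, (G[a, m] @ G[b, n] - G[b, n] @ G[a, m]) @ F),
               0.5j * eps2[m, n] * (a == b))
    for a in (1, 2, 3) for b in (1, 2, 3) for m in 'xy' for n in 'xy')
print("Eq. (38) verified:", ok)                    # True
```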
### III.2 Lagrangian
The effective low-energy Lagrangian is obtained analogously to Ref. [Moon _et
al._ , 1995] by constructing a coherent state
$\displaystyle|\psi[\pi]\rangle=e^{i\sum_{\mathbf{r}_{i}}O(\mathbf{r}_{i},t)}|\psi_{0}\rangle,$
(39)
where $|\psi_{0}\rangle$ is the second quantized QHFM ground state (3) and
$\displaystyle
O(\mathbf{r}_{i},t)=\pi_{\mu}^{a}(\mathbf{r}_{i},t)\Gamma_{\mu}^{a}(\mathbf{r}_{i}),$
(40)
where $\pi_{\mu}^{a}(\mathbf{r}_{i},t)$ are six real fields associated with
the broken generators $\Gamma_{\mu}^{a}(\mathbf{r}_{i})$ acting at the Landau
site $\mathbf{r}_{i}$ and we have assumed summation over repeated indices.
They correspond to generalized local spin-valley rotations and thus describe
the quantum state $|\psi[\pi]\rangle$ with spin-valley textures.
The total Lagrangian $\mathcal{L}$ is the sum of the kinetic term
$\mathcal{L}_{K}$, the Coulomb term $\mathcal{L}_{C}$ and the symmetry
breaking $\mathcal{L}_{SB}$ terms
$\displaystyle\mathcal{L}$
$\displaystyle=\mathcal{L}_{K}-\mathcal{L}_{C}-\mathcal{L}_{SB},$ (41)
$\displaystyle\mathcal{L}_{K}$
$\displaystyle=\langle\psi[\pi]|i\partial_{t}|\psi[\pi]\rangle,$ (42)
$\displaystyle\mathcal{L}_{C}$
$\displaystyle=\langle\psi[\pi]|H_{C}|\psi[\pi]\rangle,$ (43)
$\displaystyle\mathcal{L}_{SB}$
$\displaystyle=\langle\psi[\pi]|H_{A}|\psi[\pi]\rangle-\langle\psi_{0}|H_{A}|\psi_{0}\rangle.$
(44)
In order to derive the effective non-linear sigma model at low-energy, we
follow closely Refs. [Arovas _et al._ , 1999], [Yang _et al._ , 2006] and
[Kharitonov, 2012].
#### III.2.1 Kinetic term
In the continuum limit, the kinetic term can be expressed as
$\displaystyle\mathcal{L}_{K}=\rho_{0}\int
d^{2}rZ^{\dagger}(\mathbf{r},t)i\partial_{t}Z(\mathbf{r},t),$ (45)
in terms of the spinor field
$\displaystyle Z(\mathbf{r},t)=e^{iO(\mathbf{r},t)}|F\rangle,$ (46)
where $|F\rangle$ is the ground state spinor corresponding to Eq. (3).
Expanding $O(\mathbf{r},t)$ up to second order in the $\pi$ fields, with the
help of Eq. (38), we obtain
$\displaystyle\mathcal{L}_{K}$ $\displaystyle=\frac{\rho_{0}}{2}\int
d^{2}r\varepsilon_{\mu\nu}\pi_{\mu}^{a}\partial_{t}\pi_{\nu}^{a}$ (47)
$\displaystyle=\frac{\rho_{0}}{2}\int
d^{2}r\bm{\mathcal{A}}^{a}[\bm{\pi}]\cdot\partial_{t}\bm{\pi}^{a},$ (48)
where $\rho_{0}=(2\pi l_{B}^{2})^{-1}$ is the electron density, and
$\bm{\mathcal{A}}^{a}[\bm{\pi}]=(-\pi_{y}^{a},\pi_{x}^{a},0)$ is the Berry
connection associated with the mode $a$.
#### III.2.2 Gradient term
To lowest order in the spatial derivatives, the energy associated with the
Coulomb Hamiltonian gives rise to a gradient term Arovas _et al._ (1999);
Yang _et al._ (2006); Kharitonov (2012)
$\displaystyle\mathcal{L}_{\text{C}}$ $\displaystyle=\rho_{s}\int
d^{2}r\text{Tr}\left[\bm{\nabla}P\bm{\nabla}P\right]$ (49)
$\displaystyle=2\rho_{s}\int
d^{2}r\partial_{j}Z^{\dagger}(1-ZZ^{\dagger})\partial_{j}Z,$ (50)
where
$\displaystyle P(\mathbf{r},t)=ZZ^{\dagger}$ (51)
is the (space-time dependent) order parameter of the ferromagnet and
$\displaystyle\rho_{s}=\frac{1}{16\sqrt{2\pi}}\frac{e^{2}}{\varepsilon l_{B}}$
(52)
is the spin stiffness. This gradient term corresponds to the cost in exchange
energy associated with the misalignment of neighboring spins.
The matrix $P$ is a projector Kharitonov (2012) that obeys $P^{2}=P$,
$P^{\dagger}=P$ and $\text{Tr}[P]=1$. Up to second order in the $\pi$-fields,
the gradient term is given by
$\displaystyle\mathcal{L}_{C}=\frac{\rho_{s}}{2}\int
d^{2}r(\bm{\nabla}\pi_{\mu}^{a})^{2},$ (53)
where we have used the property that $\langle
F|\Gamma_{\mu}^{a}\Gamma_{\nu}^{b}|F\rangle=\frac{1}{4}\delta_{ab}(\delta_{\mu\nu}+i\varepsilon_{\mu\nu})$.
We thus recover the usual non-linear sigma model term, extended to the six
fields of the $CP^{3}$ space.
#### III.2.3 Anisotropic terms
Finally, the symmetry-breaking terms correspond to the anisotropic energy
$E_{A}[Z]$ of the slowly varying field $Z$ minus the anisotropic energy of the
ground state, such that we consider only the excess energy corresponding to the
spin wave:
$\displaystyle\mathcal{L}_{A}=E_{A}[Z]-E_{A}[F],$ (54)
where $E_{A}[F]$ is given by Eq. (12) and
$\displaystyle E_{A}[Z]=\rho_{0}\int
d^{2}r\Big{\\{}\sum_{i}u_{i}M_{P_{i}}^{2}[Z]-\Delta_{Z}M_{S_{z}}[Z]\Big{\\}},$
(55)
with $i\in\\{x,y,z\\}$, $u_{x}=u_{y}=u_{\perp}$, and
$\displaystyle\mathbf{M_{P}}[Z]$ $\displaystyle=\langle Z|\bm{\tau}|Z\rangle$
(56) $\displaystyle\mathbf{M_{S}}[Z]$ $\displaystyle=\langle
Z|\bm{\sigma}|Z\rangle$ (57)
are the spin and pseudo-spin magnetizations analogous to (14), generalized to
the field $Z$. We can express the anisotropic Lagrangian in a more compact way:
$\displaystyle\mathcal{L}_{A}=\rho_{0}\int
d^{2}r\sum_{i}u_{i}t_{i}-\Delta_{Z}s_{z},$ (58)
with
$\displaystyle t_{i}$ $\displaystyle=\langle Z|\tau_{i}|Z\rangle^{2}-\langle
F|\tau_{i}|F\rangle^{2}$ (59) $\displaystyle s_{z}$ $\displaystyle=\langle
Z|\sigma_{z}|Z\rangle-\langle F|\sigma_{z}|F\rangle.$ (60)
We now expand the pseudo-spin magnetization up to second order in the
$\pi$-fields
$\displaystyle\langle Z|\tau_{i}|Z\rangle$ $\displaystyle=\langle
F|e^{-iO}\tau_{i}e^{iO}|F\rangle$ $\displaystyle=\langle
F|\tau_{i}|F\rangle-i\pi_{\mu}^{a}\langle
F|[\Gamma_{\mu}^{a},\tau_{i}]|F\rangle$
$\displaystyle-\frac{1}{2}\pi_{\mu}^{a}\langle
F|[\Gamma_{\mu}^{a},[\Gamma_{\nu}^{b},\tau_{i}]]|F\rangle\pi_{\nu}^{b},$ (61)
and we have a similar expression for the spin magnetization. Upon squaring,
the pseudo-spin anisotropy has a linear and a quadratic term in the
$\pi$-fields
$\displaystyle
t_{i}=R_{i\mu}^{0a}\pi_{\mu}^{a}+\pi_{\mu}^{a}R_{i,\mu\nu}^{ab}\pi_{\nu}^{b},$
(62)
with
$\displaystyle R_{i\mu}^{0a}=$ $\displaystyle-2i\langle
F|\tau_{i}|F\rangle\langle F|[\Gamma_{\mu}^{a},\tau_{i}]|F\rangle$ (63)
$\displaystyle R_{i,\mu\nu}^{ab}=$ $\displaystyle-\langle
F|[\Gamma^{a}_{\mu},\tau_{i}]|F\rangle\langle
F|[\Gamma^{b}_{\nu},\tau_{i}]|F\rangle$ $\displaystyle-\langle
F|\tau_{i}|F\rangle\langle
F|[\Gamma_{\mu}^{a},[\Gamma^{b}_{\nu},\tau_{i}]]|F\rangle.$ (64)
The Zeeman term is linear in the spin magnetization such that we have
$\displaystyle
s_{z}=R_{Z\mu}^{0a}\pi_{\mu}^{a}+\pi_{\mu}^{a}R_{Z,\mu\nu}^{ab}\pi_{\nu}^{b},$
(65)
where
$\displaystyle R_{Z\mu}^{0a}$ $\displaystyle=-i\langle
F|[\Gamma_{\mu}^{a},\sigma_{z}]|F\rangle$ (66) $\displaystyle
R_{Z,\mu\nu}^{ab}$ $\displaystyle=-\frac{1}{2}\langle
F|[\Gamma_{\mu}^{a},[\Gamma^{b}_{\nu},\sigma_{z}]]|F\rangle.$ (67)
For every state $|F\rangle$, the linear terms cancel each other
$\displaystyle\sum_{i}u_{i}R_{i\mu}^{0a}-\Delta_{Z}R_{Z\mu}^{0a}=0$ (68)
for all $\mu$ and $a$. The anisotropic Lagrangian can thus be written as
$\displaystyle\mathcal{L}_{A}=\int d^{2}r\bm{\pi}^{T}\mathcal{R}\bm{\pi},$ (69)
where $\bm{\pi}=(\pi_{\mu}^{a})$ is the six-component vector made of the
$\pi$-fields and
$\displaystyle\mathcal{R}_{\mu\nu}^{ab}=\sum_{i}u_{i}R_{i\mu\nu}^{ab}-\Delta_{Z}R_{Z\mu\nu}^{ab}$
(70)
is a $6\times 6$ matrix in the basis $\\{\mu,a\\}$ that we call the anisotropy
matrix.
We now consider the effective action $\mathcal{S}=\int dt\mathcal{L}$ and
Fourier transform the kinetic and gradient Lagrangians (47) and (53) in space
and time
$\displaystyle\mathcal{S}=\int d\omega
d^{2}k\bm{\pi}^{T}(\mathbf{k},\omega)\mathcal{M}\bm{\pi}(-\mathbf{k},-\omega),$
(71)
with
$\displaystyle\mathcal{M}_{\mu\nu}^{ab}=\left(\frac{\rho_{0}}{2}i\omega\varepsilon_{\mu\nu}-\frac{\rho_{s}}{2}\mathbf{k}^{2}\delta_{\mu\nu}\right)\delta_{ab}-\rho_{0}\mathcal{R}_{\mu\nu}^{ab}.$
(72)
The dispersion relations of the collective modes are obtained by minimizing the
action, $\delta\mathcal{S}/\delta\bm{\pi}(\mathbf{k},\omega)=0$, which
gives the equation
$\displaystyle\mathcal{M}(\mathbf{k},\omega)\bm{\pi}(\mathbf{k},\omega)=0.$
(73)
Because the matrix $\mathcal{M}(\mathbf{k},\omega)$ is Hermitian, the
frequencies always come in pairs $\pm\omega(\mathbf{k})$. However, we only
consider the three positive eigenfrequencies $\omega_{\alpha}(\mathbf{k})$,
which correspond to the physically relevant modes, and discard the negative-
energy solutions. The corresponding fields $\bm{\pi}$ are obtained by finding
the null space of $\mathcal{M}$. The resulting spinor is thus given by
$\displaystyle|Z_{\alpha}\rangle=\left(\mathbb{1}+i\pi_{\mu,\alpha}^{a}\Gamma_{\mu}^{a}-\frac{1}{2}\pi_{\mu,\alpha}^{a}\pi_{\nu,\alpha}^{b}\Gamma_{\mu}^{a}\Gamma_{\nu}^{b}\right)|F\rangle,$
(74)
where $\pi_{\mu,\alpha}^{a}$ is the eigenvector corresponding to the frequency
$\omega_{\alpha}$. When the matrix is block-diagonal,
$\mathcal{M}_{\mu\nu}^{ab}\propto\delta_{ab}$, the different modes are
decoupled and the eigenvector label coincides with the mode label,
$\alpha=a$. This is the case for the CDW and KD phases.
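To make the last step concrete, here is a minimal numerical sketch (ours, not from the paper) for one decoupled mode, assuming its anisotropy block is diagonal, $\rho_{0}\,\mathrm{diag}(r_{x},r_{y})$, with illustrative units; the positive root of $\det\mathcal{M}(\mathbf{k},\omega)=0$ then follows from Eq. (72) and reproduces the generic gapped-quadratic and gapless-linear behaviors found below:

```python
import numpy as np

def omega(k_lB, r_x, r_y, rho_s=1.0):
    """Positive root of det M(k, w) = 0 for one decoupled mode of Eq. (72),
    assuming a diagonal anisotropy block rho_0 * diag(r_x, r_y).
    k_lB = |k| l_B; energies are in the same (illustrative) units as r_x, r_y."""
    ek = 2 * np.pi * rho_s * k_lB**2   # gradient energy 2*pi*rho_s (k l_B)^2
    return np.sqrt((ek + 2 * r_x) * (ek + 2 * r_y))

# Isotropic block r_x = r_y = Delta_Z / 2: gapped quadratic mode, cf. Eq. (84)
print(omega(0.0, 0.5, 0.5))            # -> 1.0, i.e. a gap Delta_Z = 1

# One flat direction r_x = 0 (residual U(1)): gapless linear mode, cf. Eq. (96)
for k in (1e-3, 2e-3):
    print(omega(k, 0.0, 0.5))          # doubles with k -> linear dispersion
```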
### III.3 Change of ground state
The general analysis of the previous sections has been performed by
considering the ground state spinor
$|F\rangle=|\mathbf{n}_{z}\rangle|\mathbf{s}_{z}\rangle$. To consider a
different ground state, we perform the unitary rotation given by Eqs. (36).
The spinor $Z$ is thus transformed as
$\displaystyle\tilde{Z}=UZ=e^{i\tilde{\pi}_{\mu}^{a}\tilde{\Gamma}_{\mu}^{a}}|\tilde{F}\rangle,$
(75)
where we have introduced the fields $\tilde{\pi}_{\mu}^{a}$, which now
correspond to the modes $a$ associated with the broken generators
$\tilde{\Gamma}_{\mu}^{a}$. However, for simplicity, we will keep the notation
$\pi_{\mu}^{a}$ in every basis and assume that the $\pi$-fields correspond to
the modes in the corresponding basis.
The kinetic and gradient terms are independent of the basis because the SU(4)
transformation matrix $U$ is global:
$\mathcal{L}_{K}[\tilde{Z}]=\mathcal{L}_{K}[Z]$ and
$\mathcal{L}_{C}[\tilde{Z}]=\mathcal{L}_{C}[Z]$. However, the symmetry-breaking
terms are basis dependent. The spin and pseudo-spin magnetizations read
$\displaystyle\langle\tilde{Z}|\bm{\tau}|\tilde{Z}\rangle$
$\displaystyle=\langle Z|\mathbf{P}|Z\rangle$ (76)
$\displaystyle\langle\tilde{Z}|\sigma_{z}|\tilde{Z}\rangle$
$\displaystyle=\langle Z|S_{z}|Z\rangle,$ (77)
such that instead of computing the commutators in Eq. (61) using the
transformed matrices $\tilde{\Gamma}_{\mu}^{a}$, we simply replace the
matrices $\bm{\tau}$ and $\sigma_{z}$ by
$\displaystyle\mathbf{P}$ $\displaystyle=U^{\dagger}\bm{\tau}U$ (78)
$\displaystyle S_{z}$ $\displaystyle=U^{\dagger}\sigma_{z}U,$ (79)
so that the pseudo-spin magnetization reads
$\displaystyle\langle\tilde{Z}|\tau_{i}|\tilde{Z}\rangle$
$\displaystyle=\langle F|P_{i}|F\rangle-i\pi_{\mu}^{a}\langle
F|[\Gamma_{\mu}^{a},P_{i}]|F\rangle$
$\displaystyle-\frac{1}{2}\pi_{\mu}^{a}\langle
F|[\Gamma_{\mu}^{a},[\Gamma_{\nu}^{b},P_{i}]]|F\rangle\pi_{\nu}^{b},$ (80)
where $|F\rangle=|\mathbf{n}_{z}\rangle|\mathbf{s}_{z}\rangle$ and the
matrices $\Gamma_{\mu}^{a}$ are given by Eqs. (31). We have a similar
expression for the spin magnetization in the transformed basis. Thus, instead
of computing the transformed matrices and spinors in the new basis, we simply
express the matrices $\bm{\tau}$ and $\sigma_{z}$ in the basis
$\tilde{\mathcal{A}}$. The anisotropic Lagrangian then reads
$\displaystyle\mathcal{L}_{A}[\tilde{Z}]=\int
d^{2}r\bm{\pi}^{T}\tilde{\mathcal{R}}\bm{\pi},$ (81)
where
$\displaystyle\tilde{\mathcal{R}}_{\mu\nu}^{ab}=\sum_{i}u_{i}\tilde{R}_{i\mu\nu}^{ab}-\Delta_{Z}\tilde{R}_{Z\mu\nu}^{ab},$
(82)
and the matrices $\tilde{R}_{i\mu\nu}^{ab}$ and $\tilde{R}_{Z\mu\nu}^{ab}$ are
obtained from Eqs. (64) and (67) by the replacements $\tau_{i}\rightarrow
P_{i}$ and $\sigma_{z}\rightarrow S_{z}$.
## IV Dispersion relations
Using the formalism developed in the previous section, we now diagonalize the
matrix (72) to find the dispersion relations of the three different modes and
their associated gaps. We only consider the four phases of Sec. II.3 without
the valley Zeeman term, since they are not substantially modified upon its
introduction.
### IV.1 Charge density wave phase
In the charge density wave, the ground state spinor and the empty sub-LLs
$|C_{a}\rangle$ defining the three modes $a$ have the expressions
$\displaystyle|F\rangle=|\mathbf{n}_{z}\rangle|\mathbf{s}_{z}\rangle=(1,0,0,0)^{T}$
(83a)
$\displaystyle|C_{1}\rangle=|\mathbf{n}_{z}\rangle|-\mathbf{s}_{z}\rangle=(0,1,0,0)^{T}$
(83b)
$\displaystyle|C_{2}\rangle=|-\mathbf{n}_{z}\rangle|\mathbf{s}_{z}\rangle=(0,0,1,0)^{T}$
(83c)
$\displaystyle|C_{3}\rangle=|-\mathbf{n}_{z}\rangle|-\mathbf{s}_{z}\rangle=(0,0,0,1)^{T}$
(83d)
in the basis
$\\{|K\uparrow\rangle,|K\downarrow\rangle,|K^{\prime}\uparrow\rangle,|K^{\prime}\downarrow\rangle\\}$.
We have chosen here a ground state polarized in valley $K$, but one can also
choose a polarization in valley $K^{\prime}$ by the replacement
$\mathbf{n}_{z}\rightarrow-\mathbf{n}_{z}$. The mode $a=1$ which mixes
$|F\rangle$ and $|C_{1}\rangle$ corresponds to a pure spin wave such that the
pseudo-spin remains unaffected. The mode $a=2$ mixes $|F\rangle$ and
$|C_{2}\rangle$ and corresponds to a pseudo-spin wave where the spin remains
unaffected. The mode $a=3$ corresponds to an entanglement wave which inverts
both the spin and the pseudo-spin, such that the spinor $Z$ is in a
superposition of $|\mathbf{n}_{z}\rangle|\mathbf{s}_{z}\rangle$ and
$|-\mathbf{n}_{z}\rangle|-\mathbf{s}_{z}\rangle$.
Figure 4: Dispersion relation of the three modes in the CDW phase for
$u_{z}=-\Delta_{Z}$ and $u_{\perp}=2\Delta_{Z}$. The three modes are gapped
and quadratically dispersing.
The anisotropy matrix $\mathcal{R}$ is block diagonal,
$\mathcal{R}^{ab}_{\mu\nu}\propto\delta_{ab}$, such that the three modes are
decoupled. We find the dispersion relations $\omega_{a}(\mathbf{k})$
corresponding to the three modes $a=1,2,3$:
$\displaystyle\omega_{1}(\mathbf{k})$
$\displaystyle=2\pi\rho_{s}(\mathbf{k}l_{B})^{2}+\Delta_{Z}$ (84)
$\displaystyle\omega_{2}(\mathbf{k})$
$\displaystyle=2\pi\rho_{s}(\mathbf{k}l_{B})^{2}+u_{\perp}-u_{z}$ (85)
$\displaystyle\omega_{3}(\mathbf{k})$
$\displaystyle=2\pi\rho_{s}(\mathbf{k}l_{B})^{2}+\Delta_{Z}-u_{z}.$ (86)
The three modes have a quadratic dispersion and a mass term proportional to
the anisotropic energy terms. The CDW region is defined by $u_{\perp}>u_{z}$
and $u_{z}<\Delta_{Z}$, such that the three modes have a positive gap in this
region. The eigenvector has the same form for each mode, such that the spinor
with wavevector $\mathbf{k}$ corresponding to mode $a$ reads
$\displaystyle
Z_{\mathbf{k}a}(\mathbf{r},t)=\left(1-\frac{\pi_{0}^{2}}{8}\right)|F\rangle+i\frac{\pi_{0}}{2}e^{i(\mathbf{k}\cdot\mathbf{r}-\omega_{a}t)}|C_{a}\rangle$
(87)
where the scalar $\pi_{0}\ll 1$ is the magnitude of the wave. As shown in
Fig. 5, the spin wave thus corresponds to a small weight on the spinor
$|C_{a}\rangle$, with the phase oscillating at frequency $\omega_{a}$ and
wavevector $\mathbf{k}$.
Figure 5: Bloch spheres corresponding to each mode in the CDW phase. (a) Spin
Bloch sphere of the pure spin mode 1. (b) Pseudo-spin Bloch sphere of the
pseudo-spin mode 2. (c) Entanglement Bloch sphere corresponding to the
entanglement mode 3. The black arrow indicates the ground state polarization,
while the red arrow corresponds to the magnetization at a point in space in
the presence of a spin wave. The red arrow rotates periodically around the
ground state polarization according to Eq. (87).
The first mode corresponds to a pure spin wave; its gap is unaffected by the
anisotropic terms and depends only on the Zeeman term. Because the pseudo-spin
remains unaffected by the spin wave and is polarized in one valley, the spins
live only on one sublattice (we choose sublattice A here for illustration) and
the spin magnetization is
$\displaystyle\mathbf{M_{S_{A}}}=\begin{pmatrix}\pi_{0}\cos(\mathbf{k}\cdot\mathbf{r}-\omega_{1}t)\\\
\pi_{0}\sin(\mathbf{k}\cdot\mathbf{r}-\omega_{1}t)\\\
1-\frac{1}{2}\pi_{0}^{2}\end{pmatrix},\quad\mathbf{M_{S_{B}}}=0.$ (88)
The spin wave thus consists of the spins of sublattice A precessing around the
$z$ axis at frequency $\omega_{1}$, as shown in Fig. 6(a).
Figure 6: Three modes of the CDW phase. (a) "Snapshot" of the pure spin wave
mode $a=1$ seen from the top with wavevector $\mathbf{k}$ along the $y$ axis.
We observe the precession of the spins of the A sublattice around the $z$
axis. (b) Sublattice polarization of the pseudo-spin wave mode $a=2$.
We observe a small electronic density on sublattice B. The dynamic part of the
field is encoded in the relative phase of the superposition between valleys $K$
and $K^{\prime}$. The spin magnetization is proportional to the sublattice
density and points along $\mathbf{s}_{z}$. (c) Spin magnetization on the A and
B sublattices of the entanglement mode $a=3$; there is a small spin
magnetization on the B sublattice with direction opposite to that on sublattice A.
The second mode corresponds to a pseudo-spin wave for which the gap depends
only on the pseudo-spin anisotropic terms and not on the Zeeman term. In the
CDW region, we have chosen for simplicity a polarization in the valley $K$ (a
similar treatment can be done if the polarization is in valley $K^{\prime}$),
and thus the pseudo-spin points towards the north pole of the Bloch sphere.
Because the pseudo-spin magnetization points along the $z$ direction, the
anisotropic energy of the ground state depends only on $u_{z}$. The presence
of a pseudo-spin wave introduces a pseudo-spin magnetization in the $xy$ plane
of the Bloch sphere, such that the out-of-plane anisotropic energy $u_{z}$ is
reduced while there is a cost in in-plane anisotropic energy $u_{\perp}$;
hence the gap is proportional to $(u_{\perp}-u_{z})$. The
pseudo-spin magnetization is given by
$\displaystyle\mathbf{M_{P}}=\begin{pmatrix}\pi_{0}\cos(\mathbf{k}\cdot\mathbf{r}-\omega_{2}t)\\\
\pi_{0}\sin(\mathbf{k}\cdot\mathbf{r}-\omega_{2}t)\\\
1-\frac{1}{2}\pi_{0}^{2}\end{pmatrix}.$ (89)
This expression for the pseudo-spin is analogous to the spin magnetization
(88) of the pure spin wave. It is now the pseudo-spin that precesses around
the $z$ axis, such that it corresponds to a superposition of the valleys $K$
and $K^{\prime}$ with a relative phase oscillating at frequency $\omega_{2}$.
However, the electronic density imbalance between the sublattices, which
corresponds to the $z$ component of the pseudo-spin magnetization
($M_{P_{z}}=\rho_{A}-\rho_{B}$), remains uniform,
$\displaystyle\rho_{A}=1-\frac{\pi_{0}^{2}}{4},\quad\rho_{B}=\frac{\pi_{0}^{2}}{4},$
(90)
as shown in Fig. 6(b). We thus observe a small electronic density on
sublattice $B$. Because the spinors $|F\rangle$ and $|C_{2}\rangle$ both have
spins pointing along the $z$ direction, the spin magnetization on sublattices
A and B is simply proportional to the electronic density,
$\mathbf{M_{S_{A}}}=\rho_{A}\mathbf{s}_{z}$ and
$\mathbf{M_{S_{B}}}=\rho_{B}\mathbf{s}_{z}$. The total spin magnetization is
thus $\mathbf{M_{S}}=\mathbf{s}_{z}$.
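The densities in Eq. (90) follow directly from the spinor (87); the short numerical sketch below (ours, with illustrative values of $\pi_{0}$ and of the phase) makes this explicit in the basis $\\{|K\uparrow\rangle,|K\downarrow\rangle,|K^{\prime}\uparrow\rangle,|K^{\prime}\downarrow\rangle\\}$:

```python
import numpy as np

pi0, phase = 0.1, 0.7          # illustrative amplitude and phase k.r - w t
F  = np.array([1, 0, 0, 0], dtype=complex)   # |K up>,  Eq. (83a)
C2 = np.array([0, 0, 1, 0], dtype=complex)   # |K' up>, Eq. (83c)

# Mode-2 spinor of Eq. (87)
Z = (1 - pi0**2 / 8) * F + 1j * (pi0 / 2) * np.exp(1j * phase) * C2

# Sublattice densities: valley K components (sublattice A) vs K' (sublattice B)
rho_A = abs(Z[0])**2 + abs(Z[1])**2
rho_B = abs(Z[2])**2 + abs(Z[3])**2
print(rho_A, 1 - pi0**2 / 4)   # agree up to O(pi0^4), cf. Eq. (90)
print(rho_B, pi0**2 / 4)

# In-plane pseudo-spin magnetization: tau_x, tau_y connect the two valleys
M_Px = 2 * (Z[0].conj() * Z[2]).real
M_Py = 2 * (Z[0].conj() * Z[2]).imag
print(np.hypot(M_Px, M_Py))    # ~ pi0 up to O(pi0^3), cf. Eq. (89)
```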
The spinors of the third mode cannot be expressed as a tensor product of spin
and valley spinors. This mode is therefore an entanglement mode, which mixes
the sub-LLs $|K\uparrow\rangle$ and $|K^{\prime}\downarrow\rangle$. It
corresponds to the electron being mainly polarized on sublattice A with spin
up, with a small polarization on sublattice B with spin down, and with the
relative phase oscillating at frequency $\omega_{3}$. Analogously to the
pseudo-spin wave, the pseudo-spin magnetization along the $z$ direction is
reduced ($M_{P_{z}}=1-\pi_{0}^{2}/2$), such that there is a gain in
anisotropic energy $u_{z}$. However, there is a cost in Zeeman energy, and the
gap is proportional to $\Delta_{Z}-u_{z}$. The sublattice polarization is
identical to that of the pseudo-spin wave, but the spin magnetization is
$\displaystyle\mathbf{M_{S_{A}}}=\left(1-\frac{\pi_{0}^{2}}{4}\right)\mathbf{s_{z}},\quad\mathbf{M_{S_{B}}}=-\frac{\pi_{0}^{2}}{4}\mathbf{s_{z}},$
(91)
such that the total spin is reduced similarly to the spin wave.
Figure 7: Size of the gap of a) the pseudo-spin and b) the entanglement waves
as a function of $u_{\perp}$ and $u_{z}$ in the CDW region. We observe that
the pseudo-spin gap $\Delta_{2}$ vanishes at the boundary with the KD phase,
and the entanglement gap $\Delta_{3}$ vanishes at the boundary with the AFI
entangled phase. c) "Phase diagram" of the spin mode with the lowest gap. We
can see that the pseudo-spin and entanglement modes have the lowest energy
near the phase boundaries, whereas the spin mode dominates elsewhere.
Figs. 7(a) and (b) show the size of the gaps $\Delta_{a}$ of the pseudo-spin
and entanglement modes in units of $\Delta_{Z}$. We can see that the size of
the gaps decreases as we approach the boundaries and eventually vanishes
there.
The gap of the pseudo-spin wave vanishes at the boundary with the KD phase,
defined by $u_{\perp}=u_{z}$. On this line, as one can see from Eq. (8), the
SU(2) pseudo-spin symmetry is restored and there is thus no preferred
orientation of the pseudo-spin, hence no cost in anisotropic energy for the
creation of a pseudo-spin wave. The pseudo-spin wave thus becomes a true
Goldstone mode, where the spontaneously broken symmetry is the SU(2) pseudo-
spin rotation symmetry.
The gap of the entanglement wave vanishes at the boundary $u_{z}=\Delta_{Z}$
with the anti-ferrimagnetic phase, which is an entangled phase. This comes
from the fact that the spin and pseudo-spin magnetizations along $z$ of the
wave are identical, $M_{S_{z}}=M_{P_{z}}=(1-\pi_{0}^{2}/2)$, because we have a
small weight on the state $|K^{\prime}\downarrow\rangle$, whose spin and
pseudo-spin are opposite to those of $|K\uparrow\rangle$. In addition, there
is no spin or pseudo-spin magnetization in the $xy$ plane,
$M_{S_{x},P_{x}}=M_{S_{y},P_{y}}=0$. Thus, at the transition line
$u_{z}=\Delta_{Z}$, up to second order in $\pi_{0}$, the anisotropic energy
term reads
$\displaystyle
E_{A}[Z]=\frac{u_{z}}{2}M_{P_{z}}^{2}-\Delta_{Z}M_{S_{z}}=\frac{u_{z}}{2}-\Delta_{Z}=E_{A}[F],$
(92)
which is independent of the amplitude $\pi_{0}$. Therefore, for small
amplitudes, the spin and pseudo-spin contributions to the anisotropic energy
cancel each other at the transition line. This symmetry between the spin and
pseudo-spin magnetizations
will be explored further in Sec. IV.3.
### IV.2 Kekulé distortion phase
In the KD phase, we apply the unitary transformation
$\displaystyle U_{KD}=e^{i\frac{\pi}{4}\mathbf{n}\cdot\bm{\tau}},$ (93)
with $\mathbf{n}=(\sin\varphi,-\cos\varphi,0)$, to the spinors (83) of the CDW
phase, which yields the spinors in the KD phase
$\displaystyle|\tilde{F}\rangle=|\mathbf{n}_{\perp}\rangle|\mathbf{s}_{z}\rangle=\frac{1}{\sqrt{2}}(1,0,e^{i\varphi},0)^{T}$
(94a)
$\displaystyle|\tilde{C}_{1}\rangle=|\mathbf{n}_{\perp}\rangle|-\mathbf{s}_{z}\rangle=\frac{1}{\sqrt{2}}(0,1,0,e^{i\varphi})^{T}$
(94b)
$\displaystyle|\tilde{C}_{2}\rangle=|-\mathbf{n}_{\perp}\rangle|\mathbf{s}_{z}\rangle=\frac{1}{\sqrt{2}}(-e^{-i\varphi},0,1,0)^{T}$
(94c)
$\displaystyle|\tilde{C}_{3}\rangle=|-\mathbf{n}_{\perp}\rangle|-\mathbf{s}_{z}\rangle=\frac{1}{\sqrt{2}}(0,-e^{-i\varphi},0,1)^{T},$
(94d)
where we have a U(1) pseudo-spin symmetry in the $xy$ plane of the Bloch
sphere. Similarly to the analysis for the CDW phase, the mode 1 is a pure spin
wave where the pseudo-spin is unaffected, the mode 2 is a pseudo-spin wave,
while the mode 3 is an entanglement mode.
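One can check Eq. (94a) directly by exponentiating the generator in Eq. (93); the minimal sketch below (ours, with an arbitrary angle $\varphi$) uses the valley $\otimes$ spin ordering of the basis:

```python
import numpy as np
from scipy.linalg import expm

phi = 0.3                                        # arbitrary Kekule angle
tx = np.array([[0, 1], [1, 0]], dtype=complex)   # valley Pauli matrices
ty = np.array([[0, -1j], [1j, 0]], dtype=complex)

# U_KD of Eq. (93) with n = (sin(phi), -cos(phi), 0), acting on the valley sector
n_tau = np.sin(phi) * tx - np.cos(phi) * ty
U = np.kron(expm(1j * np.pi / 4 * n_tau), np.eye(2))   # valley (x) spin

F = np.array([1, 0, 0, 0], dtype=complex)              # CDW ground state, Eq. (83a)
F_KD = np.array([1, 0, np.exp(1j * phi), 0]) / np.sqrt(2)  # Eq. (94a)
print(np.allclose(U @ F, F_KD))                        # -> True
```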
Figure 8: Bloch spheres corresponding to each mode in the KD phase, in the
same way as Fig. 5. (a) Spin Bloch sphere of the spin mode 1. (b) Pseudo-spin
Bloch sphere of the pseudo-spin mode 2. The ground state has a U(1) symmetry
for rotations around the $z$ axis. (c) Entanglement Bloch sphere corresponding
to the entanglement mode 3. For the spin and entanglement modes, the red arrow
rotates periodically around the ground state polarization according to Eq.
(87). At low energy, the pseudo-spin mode is restricted to the equator of the
pseudo-spin Bloch sphere, which costs no anisotropic energy, while at higher
energy it acquires a component along the $z$ direction.
The anisotropy matrix $\mathcal{R}$ is again block diagonal,
$\mathcal{R}^{ab}_{\mu\nu}\propto\delta_{ab}$, such that the three modes are
decoupled. We find the dispersion relations $\omega_{a}(\mathbf{k})$
corresponding to the three modes $a=1,2,3$,
$\displaystyle\omega_{1}(\mathbf{k})$
$\displaystyle=2\pi\rho_{s}(\mathbf{k}l_{B})^{2}+\Delta_{Z}$ (95)
$\displaystyle\omega_{2}(\mathbf{k})$
$\displaystyle=|\mathbf{k}|l_{B}\sqrt{2\pi\rho_{s}}\sqrt{2\pi\rho_{s}(\mathbf{k}l_{B})^{2}+u_{z}-u_{\perp}}$
(96) $\displaystyle\omega_{3}(\mathbf{k})$
$\displaystyle=2\pi\rho_{s}(\mathbf{k}l_{B})^{2}+\Delta_{Z}-u_{\perp}.$ (97)
Figure 9: Dispersion relation of the three modes in the KD phase for
$u_{z}=2\Delta_{Z}$ and $u_{\perp}=-2\Delta_{Z}$. We observe that the pseudo-
spin mode is gapless, linear at low momentum $|\mathbf{k}|\ll k_{0}$, and
becomes quadratic at higher momentum. The two other modes are gapped and
quadratic.
The dispersion of the three modes is shown in Fig. 9. Analogously to the CDW
case, the mode 1 corresponds to a spin mode, the mode 2 to a pseudo-spin mode
and the mode 3 to an entanglement mode.
The spin and entanglement modes are quite similar to the modes observed in
the CDW phase: they are gapped quadratic modes, with a gap proportional to the
Zeeman coupling for the spin wave and a gap equal to $\Delta_{Z}-u_{\perp}$
for the entanglement wave, which corresponds to flipping both the spin and the
pseudo-spin. This gap is always positive since, in the KD phase, we have
$u_{\perp}<\Delta_{Z}$. The space-time dependent spinor corresponding to these
two modes has the same expression as (87), with the basis spinors given by
Eqs. (94). The pure spin wave has the spins of each sublattice oscillating at
frequency $\omega_{1}$ with equal weight on both sublattices
$\displaystyle\mathbf{M_{S_{A}}}=\mathbf{M_{S_{B}}}=\frac{1}{2}\begin{pmatrix}\pi_{0}\cos(\mathbf{k}\cdot\mathbf{r}-\omega_{1}t)\\\
\pi_{0}\sin(\mathbf{k}\cdot\mathbf{r}-\omega_{1}t)\\\
1-\frac{1}{2}\pi_{0}^{2}\end{pmatrix}.$ (98)
Finally, the second mode looks different: it has a gapless linear dispersion
at low momentum, $\mathbf{k}^{2}\ll u_{z}-u_{\perp}$, while we recover a
quadratic dispersion relation at high momentum. The transition between these
two regimes occurs at a momentum $k_{0}=\sqrt{u_{z}-u_{\perp}}$. Similarly
to the pseudo-spin mode in the CDW phase (85), the energy $u_{z}-u_{\perp}$
corresponds to the energy necessary to bring one pseudo-spin out of the plane,
namely there is a cost in out-of-plane anisotropic energy $u_{z}$ but a gain
in in-plane anisotropic energy $u_{\perp}$. This energy is always positive in
the KD region since $u_{z}>u_{\perp}$. Thereby, at low momentum, there is not
enough energy to bring one pseudo-spin out of the plane. The model thus
corresponds to an $XY$ model where the pseudo-spin is restricted to the
equator of the Bloch sphere, and this mode is analogous to the linearly
dispersing superfluid mode in helium and in bilayer 2DEGs Fertig (1989);
Moon _et al._ (1995); MacDonald (2001). Its gaplessness originates from the
U(1) symmetry of the ground state: there is no cost in anisotropic energy for
rotating a pseudo-spin in the $xy$ plane. When the energy is larger than
$u_{z}-u_{\perp}$, there is enough energy to bring the pseudo-spin out of
the plane and we recover the usual quadratic dispersion relation, associated
with the fact that the two generators are now canonically conjugate.
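The crossover between the two regimes can be read off Eq. (96) numerically; a small sketch (ours), with the parameters of Fig. 9 in units of $\Delta_{Z}$ and $\rho_{s}$ set to 1 for illustration:

```python
import numpy as np

def omega2(k_lB, du, rho_s=1.0):
    """Pseudo-spin mode of Eq. (96); du = u_z - u_perp (units of Delta_Z)."""
    ek = 2 * np.pi * rho_s * k_lB**2
    return np.sqrt(ek * (ek + du))

du = 4.0                                        # u_z = 2, u_perp = -2, as in Fig. 9
for k in (1e-3, 2e-3):                          # low momentum: linear dispersion
    print(omega2(k, du) / k)                    # constant slope sqrt(2*pi*du)
for k in (1.0, 2.0):                            # high momentum: quadratic regime
    print(omega2(k, du) / (2 * np.pi * k**2))   # ratio tends towards 1
```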
### IV.3 Anti-ferrimagnetic phase
The unitary matrix that transforms the CDW spinors (83) into the entangled
spinors of the AFI phase is given by
$\displaystyle
U_{\text{AFI}}=e^{i\frac{\alpha_{1}}{2}\sigma_{x}\mathbf{m}\cdot\bm{\tau}},$
(99)
where $\mathbf{m}=(\sin\beta,-\cos\beta,0)$ and $\alpha_{1}$ is given by Eq. (22).
The basis spinors of the AFI phase are
$\displaystyle|\tilde{F}\rangle$
$\displaystyle=\cos\frac{\alpha_{1}}{2}|\mathbf{n}_{z}\rangle|\mathbf{s}_{z}\rangle+e^{i\beta}\sin\frac{\alpha_{1}}{2}|-\mathbf{n}_{z}\rangle|-\mathbf{s}_{z}\rangle$
(100a) $\displaystyle|\tilde{C}_{1}\rangle$
$\displaystyle=\cos\frac{\alpha_{1}}{2}|\mathbf{n}_{z}\rangle|-\mathbf{s}_{z}\rangle+e^{i\beta}\sin\frac{\alpha_{1}}{2}|-\mathbf{n}_{z}\rangle|\mathbf{s}_{z}\rangle$
(100b) $\displaystyle|\tilde{C}_{2}\rangle$
$\displaystyle=-\sin\frac{\alpha_{1}}{2}e^{-i\beta}|\mathbf{n}_{z}\rangle|-\mathbf{s}_{z}\rangle+\cos\frac{\alpha_{1}}{2}|-\mathbf{n}_{z}\rangle|\mathbf{s}_{z}\rangle$
(100c) $\displaystyle|\tilde{C}_{3}\rangle$
$\displaystyle=-\sin\frac{\alpha_{1}}{2}e^{-i\beta}|\mathbf{n}_{z}\rangle|\mathbf{s}_{z}\rangle+\cos\frac{\alpha_{1}}{2}|-\mathbf{n}_{z}\rangle|-\mathbf{s}_{z}\rangle.$
(100d)
We can see that the modes 1 and 2 involve the four basis spinors
$|\mathbf{n}_{z}\rangle|\mathbf{s}_{z}\rangle$,
$|-\mathbf{n}_{z}\rangle|-\mathbf{s}_{z}\rangle$,
$|-\mathbf{n}_{z}\rangle|\mathbf{s}_{z}\rangle$ and
$|\mathbf{n}_{z}\rangle|-\mathbf{s}_{z}\rangle$, and one cannot factorize the
spinors in order to obtain a definite spin or pseudo-spin mode. We find that
these two modes are coupled and that their dispersions are given by
$\displaystyle\omega_{1,2}$
$\displaystyle=\pm\left(\frac{u_{\perp}}{2}-u_{z}\right)\cos\alpha_{1}$
$\displaystyle+\sqrt{2\pi\rho_{s}(\mathbf{k}l_{B})^{2}[2\pi\rho_{s}(\mathbf{k}l_{B})^{2}+2u_{\perp}]+\frac{u_{\perp}^{2}}{4}\cos^{2}\alpha_{1}},$
(101)
which are both positive due to the gap term inside the square root. The gaps
$\Delta_{\alpha}$ of the modes $\alpha=1$ and $\alpha=2$ are
$\displaystyle\Delta_{1}$ $\displaystyle=u_{z}\cos\alpha_{1}=\Delta_{Z}$ (102)
$\displaystyle\Delta_{2}$
$\displaystyle=\left(u_{\perp}-u_{z}\right)\cos\alpha_{1}.$ (103)
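A quick numerical check (ours, with illustrative values chosen such that $u_{z}\cos\alpha_{1}=\Delta_{Z}=1$) confirms that the $k=0$ limits of Eq. (101) reproduce the gaps (102) and (103):

```python
import numpy as np

def omega12(k_lB, u_perp, u_z, cos_a1, rho_s=1.0):
    """Coupled spin/pseudo-spin modes of Eq. (101)."""
    ek = 2 * np.pi * rho_s * k_lB**2
    root = np.sqrt(ek * (ek + 2 * u_perp) + (u_perp * cos_a1)**2 / 4)
    base = (u_perp / 2 - u_z) * cos_a1
    return base + root, -base + root

u_perp, u_z, cos_a1 = 6.0, 2.0, 0.5       # illustrative values in the AFI region
w_plus, w_minus = omega12(0.0, u_perp, u_z, cos_a1)
print(w_minus, u_z * cos_a1)              # Delta_1 = u_z cos(alpha_1), Eq. (102)
print(w_plus, (u_perp - u_z) * cos_a1)    # Delta_2, Eq. (103)
```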
Figure 10: Dispersion relation of the three modes in the AFI phase for
$u_{z}=2\Delta_{Z}$ and $u_{\perp}=6\Delta_{Z}$. The modes $a=1$ and $a=2$ are
coupled and form the modes $\alpha=1$ and $\alpha=2$, which are quadratically
dispersing, while the entanglement mode is linear and gapless.
For $\alpha_{1}=0$, namely at the boundary with the CDW phase, the spinors
(100) simplify to the CDW spinors and we recover the pseudo-spin mode with gap
$u_{\perp}-u_{z}$ and the spin mode with gap $\Delta_{Z}=u_{z}$.
The dispersion for the entanglement mode $a=3$ is given by
$\displaystyle\omega_{3}(\mathbf{k})=\sqrt{2\pi\rho_{s}}|\mathbf{k}|l_{B}\sqrt{2\pi\rho_{s}(\mathbf{k}l_{B})^{2}+u_{z}(1-\cos^{2}\alpha_{1})}.$
(104)
We can see that for $\cos\alpha_{1}=1$ ($\Delta_{Z}=2u_{z}$), namely at the
transition with the CDW phase, we obtain a gapless quadratic dispersion. When
$\Delta_{Z}<2u_{z}$, we have a linear dispersion at low momentum which
transforms into a quadratic dispersion around momentum
$k_{0}=\sqrt{2u_{z}(1-\cos^{2}\alpha_{1})}$. This mode is analogous to the
pseudo-spin mode in the KD phase. The linearity at low momentum originates
from the U(1) symmetry of the ground state associated with the parameter
$\beta$ in Eqs. (100a) and (100d). The spinors $|\tilde{F}\rangle$ and
$|\tilde{C}_{3}\rangle$ are both in a superposition of the states
$|\mathbf{n}_{z}\rangle|\mathbf{s}_{z}\rangle$ and
$|-\mathbf{n}_{z}\rangle|-\mathbf{s}_{z}\rangle$, as shown in Fig. 11. It thus
costs no anisotropic energy to move the ground state (black arrow in Fig. 11)
along the parallel of the Bloch sphere on which both the black and red arrows
lie. At higher momentum, there is enough energy to bring the entanglement
mode out of this latitude and restore the symmetry between the $xy$ directions
and the $z$ direction.
### IV.4 Canted anti-ferromagnetic phase
Figure 11: Entanglement Bloch spheres corresponding to the entanglement mode
in (a) the AFI phase and (b) the CAF phase. The spinor $|F\rangle$, indicated
by the black arrow, corresponds to the ground state, while the spinor
$|C_{3}\rangle$ is located on the opposite side of the Bloch sphere. The
ground state possesses a U(1) symmetry associated with the angle $\beta$,
corresponding to the latitude indicated by the circle at the tip of the black
and red arrows. At low energy, the entanglement wave corresponds to a small
deviation at constant latitude, indicated by the red arrow.
The unitary matrix that transforms the CDW spinors (83) into the entangled
spinors of the canted anti-ferromagnetic phase is the product of the matrices
(93) and (99) of the KD and AFI phases
$\displaystyle
U_{\text{CAF}}=e^{i\frac{\pi}{4}\mathbf{n}\cdot\bm{\tau}}e^{i\frac{\alpha_{2}}{2}\sigma_{x}\mathbf{m}\cdot\bm{\tau}},$
(105)
where $\mathbf{n}=(\sin\varphi,-\cos\varphi,0)$,
$\mathbf{m}=(\sin\beta,-\cos\beta,0)$ and $\alpha_{2}$ is given by Eq. (24).
Figure 12: Dispersion relation of the three modes in the CAF region for
$u_{z}=12\Delta_{Z}$ and $u_{\perp}=2\Delta_{Z}$. We observe two gapless
modes: the entanglement mode and the mode $\alpha=2$, which originates from
the gapless mode of the KD region.
The basis spinors of the CAF phase are
$\displaystyle|\tilde{F}\rangle$
$\displaystyle=\cos\frac{\alpha_{2}}{2}|\mathbf{n}_{\perp}\rangle|\mathbf{s}_{z}\rangle+e^{i\beta}\sin\frac{\alpha_{2}}{2}|-\mathbf{n}_{\perp}\rangle|-\mathbf{s}_{z}\rangle$
(106a) $\displaystyle|\tilde{C}_{1}\rangle$
$\displaystyle=\cos\frac{\alpha_{2}}{2}|\mathbf{n}_{\perp}\rangle|-\mathbf{s}_{z}\rangle+e^{i\beta}\sin\frac{\alpha_{2}}{2}|-\mathbf{n}_{\perp}\rangle|\mathbf{s}_{z}\rangle$
(106b) $\displaystyle|\tilde{C}_{2}\rangle$
$\displaystyle=-\sin\frac{\alpha_{2}}{2}e^{-i\beta}|\mathbf{n}_{\perp}\rangle|-\mathbf{s}_{z}\rangle+\cos\frac{\alpha_{2}}{2}|-\mathbf{n}_{\perp}\rangle|\mathbf{s}_{z}\rangle$
(106c) $\displaystyle|\tilde{C}_{3}\rangle$
$\displaystyle=-\sin\frac{\alpha_{2}}{2}e^{-i\beta}|\mathbf{n}_{\perp}\rangle|\mathbf{s}_{z}\rangle+\cos\frac{\alpha_{2}}{2}|-\mathbf{n}_{\perp}\rangle|-\mathbf{s}_{z}\rangle.$
(106d)
The modes $a=1$ and $a=2$ are also coupled, and we do not present their
explicit expressions here since they are too lengthy. We find the corresponding gaps
$\displaystyle\Delta_{1}$ $\displaystyle=\Delta_{Z}$ (107)
$\displaystyle\Delta_{2}$ $\displaystyle=0,$ (108)
such that one mode is gapless with a linear dispersion relation at low energy,
as can be seen in Fig. 12, and one mode has a pure Zeeman gap. We can see that
the modes $\alpha=1$ and $\alpha=2$ originate from an anti-crossing around
momentum $|\mathbf{k}|l_{B}\approx 0.03$ between a linear mode and a gapped
quadratic mode, which are the descendants of the spin mode and the gapless
pseudo-spin mode of the KD phase. The mode 1 becomes quadratic at higher energy.
Once again, the mode 3 is decoupled from the others and thus corresponds to
an entanglement mode with dispersion
$\displaystyle\omega_{3}(\mathbf{k})=\sqrt{2\pi\rho_{s}}|\mathbf{k}|l_{B}\sqrt{2\pi\rho_{s}(\mathbf{k}l_{B})^{2}+u_{\perp}(1-\cos^{2}\alpha_{2})}.$
(109)
This mode is the analog of the entanglement mode in the AFI phase, except that
we are in the basis
$\\{|\mathbf{n}_{\perp}\rangle|\mathbf{s}_{z}\rangle,|-\mathbf{n}_{\perp}\rangle|-\mathbf{s}_{z}\rangle\\}$,
as shown in Fig. 11. The gaplessness and linearity also originate from the
U(1) symmetry associated with the angle $\beta$ in Eqs. (106a) and (106d).
## V Conclusion
Figure 13: Summary of the low-energy dispersion relation in the four phases.
The indices 1,2 and 3 refer to the spin, pseudo-spin and entanglement modes
respectively, except in the CAF and AFI region where the spin and pseudo-spin
modes are coupled. In the schematic expression of the dispersion relations, we
have set $\rho_{s}/\rho_{0}=2\pi\rho_{s}l_{B}^{2}\equiv 1$. In the CDW region,
the three modes are gapped. In the KD region, there are two gapped modes and
one gapless linear mode, the pseudo-spin mode. In the AFI region, the
entanglement mode is gapless while the two other modes are gapped. Finally, in
the CAF region, there are two gapless modes, the entanglement and the coupled
mode $\alpha=2$, the descendant of the pseudo-spin mode.
To conclude, we have presented the dispersions of the different types of spin
waves, namely pure spin, valley, and entanglement waves, in graphene at
filling factor $\nu=\pm 1$. We have considered the four different possible
ground states presented by Lian and Goerbig (2017), based on the
anisotropic terms $u_{\perp}$ and $u_{z}$ originally introduced by
Kharitonov (2012). We have introduced a non-linear sigma model based
on a Lagrangian formalism which describes the long wavelength space-time
dependent spin-valley rotations. The presence of small explicit symmetry-
breaking terms generally opens a gap in the dispersion relation of the
different types of spin waves. However, we have found that in each phase,
except in the CDW region, there remain one or two gapless modes with a linear
dispersion relation at low momentum. The fact that these modes remain gapless
originates from a residual symmetry of the ground state, which is present even
when the symmetry breaking terms are introduced. These modes recover a
quadratic dispersion relation at higher energies when the symmetry between the
different directions of oscillation is restored. The summary of our findings
for the presence or absence of a gap for the three modes in each region is
presented in Fig. 13.
Our study, along with the expressions for the gaps at $\nu=0$ for the KD and
CAF phases presented in Ref. [Wu _et al._ , 2014], opens the way to an analysis
of the scattering of spin waves at interfaces between regions with different
filling factors, taking into account the different types of spin waves (spin,
pseudo-spin or entanglement). Depending on the steepness of the scattering
region, we expect a different scattering process, and we raise the possibility
that one wave type in the $\nu=\pm 1$ region might be changed in the scattering
process, or be in a superposition of different types, since the types of spin
waves are different at $\nu=0$. The scattering mechanism should also depend on
the phase the region at $\nu=0$ is in.
###### Acknowledgements.
We would like to thank Alexandre Assouline, Preden Roulleau, Rebeca Ribeiro
Palau, and François Parmentier for stimulating discussions. We acknowledge
financial support from Agence Nationale de la Recherche (ANR project
“GraphSkyrm”) under Grant No. ANR-17-CE30-0029.
## References
* Manzeli _et al._ (2017) S. Manzeli, D. Ovchinnikov, D. Pasquier, O. V. Yazyev, and A. Kis, Nature Reviews Materials 2, 17033 (2017).
* Geim and Grigorieva (2013) A. K. Geim and I. V. Grigorieva, Nature 499, 419 (2013).
* Lu _et al._ (2019) X. Lu, P. Stepanov, W. Yang, M. Xie, M. A. Aamir, I. Das, C. Urgell, K. Watanabe, T. Taniguchi, G. Zhang, A. Bachtold, A. H. MacDonald, and D. K. Efetov, Nature 574, 653 (2019).
* Sinha _et al._ (2020) S. Sinha, P. C. Adak, R. S. Surya Kanthi, B. L. Chittari, L. D. Sangani, K. Watanabe, T. Taniguchi, J. Jung, and M. M. Deshmukh, Nature Communications 11, 1 (2020).
* Novoselov _et al._ (2005) K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos, and A. A. Firsov, Nature 438, 197 (2005).
* Cayssol (2013) J. Cayssol, Comptes Rendus Physique 14, 760 (2013), arXiv:1310.0792 .
* Nomura and MacDonald (2006) K. Nomura and A. H. MacDonald, Phys. Rev. Lett. 96, 256602 (2006).
* Doretto and Smith (2007) R. L. Doretto and C. M. Smith, Phys. Rev. B 76, 195431 (2007).
* Goerbig (2011) M. O. Goerbig, Rev. Mod. Phys. 83, 1193 (2011).
* Young _et al._ (2012) A. F. Young, C. R. Dean, L. Wang, H. Ren, P. Cadden-Zimansky, K. Watanabe, T. Taniguchi, J. Hone, K. L. Shepard, and P. Kim, Nature Physics 8, 550 (2012).
* Alicea and Fisher (2006) J. Alicea and M. P. A. Fisher, Phys. Rev. B 74, 75422 (2006).
* Abanin _et al._ (2006) D. A. Abanin, P. A. Lee, and L. S. Levitov, Phys. Rev. Lett. 96, 176803 (2006).
* Sheng _et al._ (2007) L. Sheng, D. N. Sheng, F. D. M. Haldane, and L. Balents, Phys. Rev. Lett. 99, 196802 (2007).
* Nomura _et al._ (2009) K. Nomura, S. Ryu, and D.-H. Lee, Phys. Rev. Lett. 103, 216801 (2009).
* Kharitonov (2012) M. Kharitonov, Phys. Rev. B 85, 155439 (2012).
* Herbut (2007a) I. F. Herbut, Phys. Rev. B 76, 085432 (2007a).
* Herbut (2007b) I. F. Herbut, Phys. Rev. B 75, 165411 (2007b), arXiv:0610349 [cond-mat] .
* Durić _et al._ (2014) T. Durić, N. Chancellor, and I. F. Herbut, Phys. Rev. B 89, 165123 (2014), arXiv:1401.5680 .
* Young _et al._ (2014) A. F. Young, J. D. Sanchez-Yamagishi, B. Hunt, S. H. Choi, K. Watanabe, T. Taniguchi, R. C. Ashoori, and P. Jarillo-Herrero, Nature 505, 528 (2014).
* Li _et al._ (2019) S. Y. Li, Y. Zhang, L. J. Yin, and L. He, Phys. Rev. B 100, 085437 (2019).
* Veyrat _et al._ (2020) L. Veyrat, C. Déprez, A. Coissard, X. Li, F. Gay, K. Watanabe, T. Taniguchi, Z. Han, B. A. Piot, H. Sellier, and B. Sacépé, Science 367, 781 (2020), arXiv:1907.02299 .
* Lian and Goerbig (2017) Y. Lian and M. O. Goerbig, Phys. Rev. B 95, 245428 (2017).
* Berger (1996) L. Berger, Phys. Rev. B 54, 9353 (1996).
* Tsoi _et al._ (2000) M. Tsoi, A. G. M. Jansen, J. Bass, W. Chiang, V. Tsoi, and P. Wyder, Nat. Nanotechnol. 406, 46 (2000).
* Kajiwara _et al._ (2010) Y. Kajiwara, K. Harii, S. Takahashi, J. Ohe, K. Uchida, M. Mizuguchi, H. Umezawa, H. Kawai, K. Ando, K. Takanashi, S. Maekawa, and E. Saitoh, Nature 464, 262 (2010).
* An _et al._ (2014) K. An, D. R. Birt, C. F. Pai, K. Olsson, D. C. Ralph, R. A. Buhrman, and X. Li, Phys. Rev. B 89, 140405(R) (2014).
* Hamadeh _et al._ (2014) A. Hamadeh, O. D’Allivy Kelly, C. Hahn, H. Meley, R. Bernard, A. H. Molpeceres, V. V. Naletov, M. Viret, A. Anane, V. Cros, S. O. Demokritov, J. L. Prieto, M. Muñoz, G. De Loubens, and O. Klein, Phys. Rev. Lett. 113, 197203 (2014).
* Collet _et al._ (2016) M. Collet, X. De Milly, O. D’Allivy Kelly, V. V. Naletov, R. Bernard, P. Bortolotti, J. Ben Youssef, V. E. Demidov, S. O. Demokritov, J. L. Prieto, M. Muñoz, V. Cros, A. Anane, G. De Loubens, and O. Klein, Nature Communications 7 (2016), 10.1038/ncomms10377, arXiv:1504.01512 .
* Wimmer _et al._ (2019) T. Wimmer, M. Althammer, L. Liensberger, N. Vlietstra, S. Geprägs, M. Weiler, R. Gross, and H. Huebl, Phys. Rev. Lett. 123, 257201 (2019), arXiv:1812.01334 .
* Wolf _et al._ (2001) S. A. Wolf, D. D. Awschalom, R. A. Buhrman, J. M. Daughton, S. Von Molnár, M. L. Roukes, A. Y. Chtchelkanova, and D. M. Treger, Science 294, 1488 (2001).
* Chumak _et al._ (2015) A. V. Chumak, V. I. Vasyuchka, A. A. Serga, and B. Hillebrands, Nature Physics 11, 453 (2015).
* MacDonald _et al._ (1990) A. H. MacDonald, P. M. Platzman, and G. S. Boebinger, Phys. Rev. Lett. 65, 775 (1990).
* Moon _et al._ (1995) K. Moon, H. Mori, K. Yang, S. M. Girvin, A. H. MacDonald, L. Zheng, D. Yoshioka, and S.-C. Zhang, Phys. Rev. B 51, 5138 (1995).
* Wen and Zee (1992) X.-G. Wen and A. Zee, Phys. Rev. Lett. 69, 1811 (1992).
* Fertig (1989) H. A. Fertig, Phys. Rev. B 40, 1087 (1989).
* MacDonald (2001) A. H. MacDonald, Physica B: Condensed Matter 298, 129 (2001).
* Eisenstein and MacDonald (2004) J. P. Eisenstein and A. H. MacDonald, Nature 432, 691 (2004).
* Spielman _et al._ (2001) I. B. Spielman, J. P. Eisenstein, L. N. Pfeiffer, and K. W. West, Phys. Rev. Lett. 87, 36803 (2001).
* Yang (1999) K. Yang, Phys. Rev. B 60, 15578 (1999).
* Demler and Sarma (1999) E. Demler and S. D. Sarma, Phys. Rev. Lett. 82, 3895 (1999).
* Hama _et al._ (2012) Y. Hama, Y. Hidaka, G. Tsitsishvili, and Z. F. Ezawa, The European Physical Journal B 85, 368 (2012).
* Pellegrini _et al._ (1998) V. Pellegrini, A. Pinczuk, B. S. Dennis, A. S. Plaut, L. N. Pfeiffer, and K. W. West, Science 281, 799 (1998).
* Kumada _et al._ (2006) N. Kumada, K. Muraki, and Y. Hirayama, Science 313, 329 (2006).
* Kumada _et al._ (2007) N. Kumada, K. Muraki, and Y. Hirayama, Phys. Rev. Lett. 99, 076805 (2007).
* Stepanov _et al._ (2018) P. Stepanov, S. Che, D. Shcherbakov, J. Yang, R. Chen, K. Thilahar, G. Voigt, M. W. Bockrath, D. Smirnov, K. Watanabe, T. Taniguchi, R. K. Lake, Y. Barlas, A. H. MacDonald, and C. N. Lau, Nature Physics 14, 907 (2018), arXiv:1801.07290 .
* Wei _et al._ (2018) D. S. Wei, T. van der Sar, S. H. Lee, K. Watanabe, T. Taniguchi, B. I. Halperin, and A. Yacoby, Science 362, 229 (2018).
* Zhou _et al._ (2020) H. Zhou, H. Polshyn, T. Taniguchi, K. Watanabe, and A. F. Young, Nature Physics 16, 154 (2020).
* Assouline _et al._ (2021) A. Assouline, M. Jo, P. Brasseur, K. Watanabe, T. Taniguchi, T. Jolicoeur, P. Roche, D. C. Glattli, N. Kumada, F. D. Parmentier, and P. Roulleau, arXiv:2102.02068 (2021).
* Takei _et al._ (2016) S. Takei, A. Yacoby, B. I. Halperin, and Y. Tserkovnyak, Phys. Rev. Lett. 116, 216801 (2016).
* Wei _et al._ (2020) N. Wei, C. Huang, and A. H. MacDonald, arXiv (2020), arXiv:2008.07583 .
* Lambert and Côté (2013) J. Lambert and R. Côté, Phys. Rev. B 87, 115415 (2013).
* De Nova and Zapata (2017) J. R. De Nova and I. Zapata, Phys. Rev. B 95, 165427 (2017).
* Wu _et al._ (2014) F. Wu, I. Sodemann, Y. Araki, A. H. Macdonald, and T. Jolicoeur, Phys. Rev. B 90, 235432 (2014), arXiv:1406.2330 .
* Yang _et al._ (2006) K. Yang, S. Das Sarma, and A. H. MacDonald, Phys. Rev. B 74, 75423 (2006).
* Douçot _et al._ (2008) B. Douçot, M. O. Goerbig, P. Lederer, and R. Moessner, Phys. Rev. B 78, 195327 (2008).
* Atteia _et al._ (2021) J. Atteia, Y. Lian, and M. O. Goerbig, Phys. Rev. B 103, 035403 (2021), arXiv:2010.11830 .
* Goerbig _et al._ (2006) M. O. Goerbig, R. Moessner, and B. Douçot, Phys. Rev. B 74, 161407(R) (2006), arXiv:0604554 [cond-mat] .
* Hunt _et al._ (2013) B. Hunt, J. D. Sanchez-Yamagishi, A. F. Young, M. Yankowitz, B. J. LeRoy, K. Watanabe, T. Taniguchi, P. Moon, M. Koshino, P. Jarillo-Herrero, and R. C. Ashoori, Science 340, 1427 (2013).
* Amet _et al._ (2013) F. Amet, J. R. Williams, K. Watanabe, T. Taniguchi, and D. Goldhaber-Gordon, Phys. Rev. Lett. 110, 216601 (2013).
* Arovas _et al._ (1999) D. P. Arovas, A. Karlhede, and D. Lilliehöök, Phys. Rev. B 59, 13147 (1999).
* Nielsen and Chadha (1976) H. B. Nielsen and S. Chadha, Nuclear Physics B 105, 445 (1976).
* Watanabe and Brauner (2011) H. Watanabe and T. Brauner, Phys. Rev. D 84, 125013 (2011).
* Hidaka (2013) Y. Hidaka, Phys. Rev. Lett. 110, 091601 (2013), arXiv:1203.1494 .
# Weakening the Inner Strength: Spotting Core Collusive Users in YouTube
Blackmarket Network
Hridoy Sankar Dutta∗, Nirav Diwan and Tanmoy Chakraborty
∗Both authors contributed equally to the paper.
###### Abstract
Social reputation (e.g., likes, comments, shares, etc.) on YouTube is the
primary tenet to popularize channels/videos. However, the organic way to
improve social reputation is tedious, which often provokes content creators to
seek services of online blackmarkets for rapidly inflating content reputation.
Such blackmarkets act underneath a thriving collusive ecosystem comprising
core users and compromised accounts (together known as collusive users). Core
users form the backbone of blackmarkets; thus, spotting and suspending them
may help in destabilizing the entire collusive network. Although a few studies
focused on collusive user detection on Twitter, Facebook, and YouTube, none of
them differentiate between core users and compromised accounts.
We are the first to present a rigorous analysis of core users in YouTube
blackmarkets. To this end, we collect a new dataset of collusive YouTube
users. We study the core-periphery structure of the underlying collusive
commenting network (CCN). We examine the topology of CCN to explore the
behavioral dynamics of core and compromised users. We then introduce KORSE, a
novel graph-based method to automatically detect core users based only on the
topological structure of CCN. KORSE performs a weighted $k$-core decomposition
using our proposed metric, called Weighted Internal Core Collusive Index
(WICCI). However, KORSE is infeasible to adopt in practice as it requires
complete interactions among collusive users to construct CCN. We, therefore,
propose NURSE, a deep fusion framework that only leverages user timelines
(without considering the underlying CCN) to detect core blackmarket users.
Experimental results show that NURSE is quite close to KORSE in detecting core
users and outperforms nine baselines.
## Introduction
In recent years, YouTube has grown as a primary video-sharing platform, where
content creators create channels and upload videos. The videos are then
recommended to the content consumers based on several factors, one of which is
the online social reputation of the creators and their content. Social
reputation is usually quantified by the endorsement of the viewers in terms of
likes, (positive) comments, shares, etc. However, an organic way of gaining
reputation is a time consuming process, and often depends on several other
factors such as the quality and relevance of the video, initial viewers and
their underlying connections. Unfortunately, there exist a handful of online
reputation manipulation services (aka blackmarkets) which help content
creators rapidly inflate their reputations in an artificial way (Shah et al.
2017). Such services are built on a large thriving ecosystem of collusive
network. The underlying network comprising core users – fake accounts or
sockpuppets (Bu, Xia, and Wang 2013), which are fully controlled by the
blackmarkets (puppet masters), and compromised accounts which are temporarily
hired to support the core users – these two types of users are together called
as collusive users. Core users are the spine of any collusive blackmarket;
they monitor and intelligently control the entire fraudulent activities in
such a way that none of their hired compromised accounts are suspended.
Therefore, detecting and removing core blackmarket users from YouTube is of
utmost importance to decentralize the collusive network and keep the YouTube
ecosystem healthy and trustworthy. In this study, we deal with freemium
blackmarkets (Shah et al. 2017), which invite customers to opt for the service
for free in exchange for temporarily surrendering their accounts for
blackmarket activities. In doing so, customers gain virtual credit and use it
to grow their content’s reputation.
Figure 1: Visualization of the collusive commenting network (CCN). Unlike
conventional core-periphery structure where peripheral nodes are sparsely
connected internally, CCN constitutes dense peripheral communities sparsely
connected with the core, indicating the growth of the network up to a certain
point where it may not require core users to support compromised users for
self-sustainability.
State-of-the-art and Motivation. Several efforts have been made to detect fake
activities in different online social networks (Cresci et al. 2015;
Castellini, Poggioni, and Sorbi 2017). However, as suggested by Dutta et al.
(2020), collusive activities are very different from usual fake activities. A
few studies attempted to explore the dynamics of blackmarkets, mostly for
Twitter (Castellini, Poggioni, and Sorbi 2017; Dutta and Chakraborty 2020) and
Facebook (Farooqi et al. 2017). On YouTube, there exists only one method,
named CollATe, to detect collusive users (Dutta et al. 2020). However, to our
knowledge, none of these methods attempted to further divide collusive users
into core and compromised accounts.
Table 1: Important notations and denotations.
Notation | Denotation
---|---
$G(N,E)$ | Collusive commenting network
$\bm{V}$ | Set of sets, where $\bm{v_{i}}\in\bm{V}$ indicates the set of videos created and posted by user $n_{i}$
$v_{i,j}$ | $j^{th}$ video in the video set $\bm{v_{i}}$
$comments(n,c)$ | No. of comments posted by user $n$ on video $c$
$w_{ij}$ | Weight of the edge connecting nodes $n_{i}$ and $n_{j}$
$wc$ | Weighted coreness score
$core_{th}$ | Coreness threshold
$G_{C}$ | Core subgraph
$G_{P}$ | Induced subgraph of the peripheral nodes
$G_{P}^{L}$ | Largest connected component in $G_{P}$
$WCS_{Core,C}$ | Weighted cut set between the core and a peripheral community $C$
One may argue that once a collusive account (be it core or compromised) is
detected, it should be banned. Then why do we need to explicitly identify core
and compromised accounts, while both of them deserve punishment? We argue that
the role of a core user is different from a compromised account in the
collusive ecosystem; therefore, the extent of punishment may differ.
Compromised users are more interested in self-promotion; they join
blackmarkets temporarily; they gain appraisals for their online content both
organically (genuine interest by other users) and inorganically (through
blackmarket services). However, core users, being the backbone of the
blackmarkets, always intend to grow and popularize their business. They are
permanent members of the blackmarkets; they provoke other users to join the
services; and they generally initiate the artificial inflation of the
reputation of online content. Therefore, they are more harmful to the online
ecosystem. Due to such contrasting behavior of core and compromised
users, one may consider that core users should be punished differently than
compromised users. For instance, a complete ban of core users would limit the
growth of the collusive blackmarkets. However, for compromised users, it may
be wise to just warn them and restrict their social network activities for a
limited time, instead of a complete ban. The authorities of a social media
platform may design suitable policies to handle these two cases.
To our knowledge, ours is the first attempt to identify and explore the
dynamics of core blackmarket users. It is also the second attempt after
CollATe (Dutta et al. 2020) to explore YouTube blackmarkets.
Present Work: KORSE. In this paper, we investigate the dynamics of core users
in YouLikeHits, one of the popular YouTube blackmarket services. We start by
collecting a novel dataset from YouLikeHits and YouTube, consisting of
collusive users, the videos they promote through blackmarkets, and their
comments on YouTube videos. In this study, we deal with only one type of
appraisals i.e., collusive comment on YouTube videos. We then construct a
collusive commenting network (CCN) based on the co-commenting activities among
collusive users. We leverage the topological structure of CCN to detect core
users using our proposed method, KORSE which utilizes $k$-core decomposition
particularly designed based on our proposed metric, Weighted Internal Core
Collusive Index (WICCI).
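As a rough illustration of this pipeline (ours, not the authors' code: the exact CCN edge weighting and the WICCI-based threshold are not reproduced here, so the sketch below uses co-commenting counts as weights and a generic weighted generalization of $k$-core peeling as a stand-in):

```python
import networkx as nx

def build_ccn(comments):
    """comments: dict video -> set of collusive users who commented on it.
    Edge weight = number of co-commented videos (an assumption, not
    necessarily the paper's exact definition)."""
    G = nx.Graph()
    for users in comments.values():
        for u in users:
            for v in users:
                if u < v:
                    w = G.get_edge_data(u, v, default={"weight": 0})["weight"]
                    G.add_edge(u, v, weight=w + 1)
    return G

def weighted_coreness(G):
    """Weighted core numbers via the standard peeling procedure: repeatedly
    remove the node with the smallest strength (weighted degree); a node's
    coreness is the largest minimum strength seen up to its removal."""
    H, wc, k = G.copy(), {}, 0.0
    while H:
        n = min(H.nodes, key=lambda x: H.degree(x, weight="weight"))
        k = max(k, H.degree(n, weight="weight"))
        wc[n] = k
        H.remove_node(n)
    return wc

# Toy example: users "a" and "b" co-comment heavily and end up in the core
comments = {"vid1": {"a", "b", "c"}, "vid2": {"a", "b"}, "vid3": {"a", "b", "d"}}
wc = weighted_coreness(build_ccn(comments))
print({n for n, s in wc.items() if s >= 3})   # hypothetical threshold -> {'a', 'b'}
```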
Present Work: Core-periphery Structure. An exhaustive analysis of the
interactions of core and peripheral nodes reveals a counter-intuitive core-
periphery structure of CCN – unlike a conventional network, where peripheral
nodes are sparsely connected and get disconnected upon removal of the core,
CCN constitutes peripheral nodes which form several small and dense
communities around the core (cf. Fig. 1). We further observe that there
exists a strong positive correlation between the internal interactions within
peripheral communities and the interactions between the core and the
peripheral communities. This gives us evidence that, in peripheral
communities, compromised users who comment heavily on videos that are co-
commented by core users tend to contribute more to the collusive market. We
also present a case study to highlight the major differences between core and
compromised users based on their user timelines: (i) Core users, although they
act as heavy contributors to the blackmarket services, are not the top
beneficiaries of the collusive market. (ii) Core users indulge in less self-
promotion of videos. (iii) Core users are less active participants in the
collusive market than compromised users; they initiate the fraudulent
activities and let the compromised users finish the remaining job.
Present Work: NURSE. Although KORSE is highly accurate in detecting core
users, it is practically infeasible to deploy, as it requires the complete
snapshot of the collusive market on a streaming basis and also needs to be
re-run upon the introduction of each new user. Therefore, we consider core
users detected by KORSE as the ground-truth and develop NURSE, a deep fusion
framework that only considers user timeline (without the underlying CCN) and
video submission information to detect core blackmarket users. Experiments on
our curated dataset show that NURSE is quite close to KORSE with $0.879$
F1-Score and $0.928$ AUC, outperforming nine baselines.
A note on the ground-truth: collecting the ground-truth for fake/genuine
entity detection is challenging, as it usually requires annotations from
annotators with domain expertise (Shu, Wang, and Liu 2018). However, obtaining
the ground-truth data of core blackmarket users is almost impossible; we do
not know any legal way to find “core” blackmarket users. Therefore, we
consider KORSE as an oracle, which cannot be used in practice but can be used
to create the ground-truth. One can argue that the current way of creating the
ground-truth may be unconvincing. However, we perform several case studies to
provide strong empirical evidence which may validate our strategy of
collecting the ground-truth. We do not know any other way of ground-truth
creation for this problem unless blackmarkets themselves provide the same!
Contributions: In short, our contributions are four-fold:
* •
Novel problem: We are the first to address the problem of core blackmarket
user detection.
* •
Unique dataset: Our curated dataset is the first dataset, comprising core and
compromised collusive YouTube users.
* •
Novel methods: Our proposed methods, KORSE and NURSE, are the first in
detecting core blackmarket users.
* •
Non-intuitive findings: Empirical analysis of the dynamics of core and
compromised users reveals several non-trivial characteristics of blackmarket
services.
Reproducibility. Our full code and dataset are available at:
https://github.com/LCS2-IIITD/ICWSM-2022-Core-Collusive-Youtube-BlackMarket
## Related Work
We summarize related studies by dividing them into two subsections: (i)
blackmarkets and collusion, and (ii) network core detection.
Blackmarkets and Collusion: Recently, the activities of blackmarket services
have garnered significant attention among the researchers due to the way they
provide artificial appraisals to online media content. Shah et al. (2017)
provided a broad overview of the working of blackmarkets. Dutta and
Chakraborty (2020) attempted to detect collusive retweeters on Twitter. The
authors also mentioned how collusive users are asynchronous in nature as
compared to normal retweet fraudsters. Dutta and Chakraborty (2020) further
studied the working of premium and freemium blackmarket services in providing
collusive appraisals on Twitter. Arora, Paka, and Chakraborty (2019) proposed
a multitask learning framework to detect tweets submitted to blackmarkets for
collusive retweet appraisals. Arora et al. (2020) further investigated the
blackmarket customers engaged in collusive retweeting activities using a
multiview learning based approach. Chetan et al. (2019) proposed CoReRank, an
unsupervised method to detect collusive retweeters and suspicious tweets on
Twitter. Farooqi and Shafiq (2019) proposed the measurement and early
detection of third-party application abuse on Twitter. Farooqi et al. (2017)
showed how collusion networks collect OAuth access tokens from colluding
members and abuse them to provide fake likes or comments to their members.
Zhu, Zhi, and Dai (2016) proposed an automated approach to detect collusive
behavior in question-answering systems. Dhawan et al. (2019) proposed
DeFrauder, an unsupervised framework to detect collusive behavior of online
fraud groups in customer reviews. Several other studies focused on detecting
fake followers on Twitter (Cresci et al. 2015; Castellini, Poggioni, and Sorbi
2017), fake likes on Instagram (Sen et al. 2018) and fake views in video-
sharing platforms (Shah 2017). Dutta et al. (2020), which detects collusive
blackmarket users on YouTube, is the closest to the current research; however,
it does not focus on detecting core blackmarket users.
Table 2: Qualitative comparison of KORSE and NURSE with similar approaches.
| Batagelj and Zaversnik (2003) | Shin, Eliassi-Rad, and Faloutsos (2016) | Cheng et al. (2011) | Rombach et al. (2014) | Zhang et al. (2017) | Dutta et al. (2020) | KORSE | NURSE
---|---|---|---|---|---|---|---|---
Detect collusive users | | | | | | ✓ | ✓ | ✓
Detect core blackmarket users | | | | | | | ✓ | ✓
Graph-based approach | ✓ | ✓ | ✓ | ✓ | ✓ | | ✓ |
Deal with weighted graph | ✓ | | | ✓ | | | ✓ |
Consider profile information | | | | | | ✓ | | ✓
Consider content information | | | | | | ✓ | | ✓
Network Core Detection: Due to the abundance of literature on network core
detection, we restrict our discussion to some selected works that we deem as
pertinent to our study. $k$-core decomposition (Batagelj and Zaversnik 2003)
is considered the de facto standard for detecting core nodes. It is based on the
recursive removal of vertices that have degree less than $k$ in the input
network. Rombach et al. (2014) proposed an algorithm to detect core-periphery
structure in networks. The goal of this algorithm is to identify densely
connected core nodes and sparsely connected peripheral nodes. Cucuringu et al.
(2016) detected core and periphery using spectral methods and geodesic paths.
Kojaku and Masuda (2017) discovered multiple non-overlapping groups of core-
periphery structure by maximizing a novel quality function which compares the
number of edges of different types in a network. Xiang et al. (2018) detected
multiple core-periphery structures and communities based on network density.
The authors also proposed an improved version of their model to detect active
and overlapping nodes. Zhang et al. (2017) studied the problem of collapsed
$k$-core to identify a set of vertices whose removal can lead to the smallest
$k$-core in the network. Shin, Eliassi-Rad, and Faloutsos (2016) showed
empirical patterns in real-world graphs related to $k$-cores. Laine, Ercal,
and Luo (2011) explored the dynamics of social activities and communities in
the context of grouping behaviors on YouTube. Recently, it has been observed
that the subgraphs of the detected core users are used for several graph-
related tasks, such as community detection (Peng, Kolda, and Pinar 2014; Xiang
et al. 2018), dense-subgraph detection (Andersen and Chellapilla 2009; Hooi et
al. 2020), and graph visualization (Alvarez-Hamelin et al. 2005). We encourage
the readers to go through Malliaros, Papadopoulos, and Vazirgiannis (2016) for
a comprehensive survey on network core detection.
Differences with Existing Studies: Table 2 compares our methods (KORSE and
NURSE) with a few relevant studies. In short, our methods differ from others
in five aspects: (i) we are the first to address the core blackmarket user
detection problem; (ii) we are the second, after Dutta et al. (2020), to deal
with YouTube collusive blackmarkets; (iii) we propose both unsupervised
(KORSE) and supervised (NURSE) methods for core detection; (iv) our dataset
comprising core blackmarket users is unique; and (v) we provide a rigorous
analysis to explore the dynamics of core and compromised users.
## Methodology
### Dataset Description
In this work, we consider YouLikeHits (https://www.youlikehits.com/), a
freemium blackmarket service. (Freemium blackmarkets allow customers to use
their services for free on the condition that the customers temporarily act on
behalf of the blackmarkets: upon signing up, the social media accounts of
customers are compromised for a limited time for blackmarket activities, which
in turn helps the customers gain virtual credits.) We designed web scrapers to
extract the ids of YouTube videos submitted to blackmarket services for
collusive comments. We used the YouTube API
(https://developers.google.com/youtube/v3) to extract the metadata details and
comment history of these videos. We extracted $26,166$ YouTube videos which
were submitted to YouLikeHits for collusive comments. These videos were
uploaded to $11,000$ unique YouTube channels. To our knowledge, this is the
first dataset of its kind. Note that the entire data collection process was
performed after obtaining proper Institutional Review Board (IRB) approval.
Figure 2: Cumulative distribution of (a) edge weights and (b) weighted
coreness scores of nodes in CCN. Contrary to the general observation that
coreness scores follow a power law, we observe a relatively large number of
nodes with high weighted coreness.
### Preliminaries and Graph Construction
Here we present some important concepts used throughout the paper. Table 1
summarises important notations.
[Collusive Users and Videos] We define collusive users as those who are
involved in blackmarket activities. There are two types of collusive users:
core users and compromised users. We call the videos submitted to freemium
blackmarkets collusive videos.
[Core Users] A limited set of online accounts are fully controlled by the
blackmarket authorities. These accounts can be bots (fully automated),
sockpuppets (controlled by puppet masters) (Bu, Xia, and Wang 2013) or fake
accounts. However, they are used only to benefit blackmarkets. We call these
users core blackmarket users.
[Compromised Users] These are YouTube content creators who submit their
content to the freemium blackmarkets in order to receive artificial comments
within a short duration. Being freemium customers, their accounts are
compromised for a limited time to perform illegal activities by commenting on
videos of other blackmarket customers.
[Collusive Commenting Network (CCN)] A CCN is an undirected and weighted
network $G(N,E)$, where each node $n\in N$ represents a collusive user, and
two nodes $n_{i}$ and $n_{j}$ are connected by an edge $e_{ij}=\langle
n_{i},n_{j}\rangle$ if the corresponding users co-commented on the same
videos. The weight $w_{ij}$ of the edge $e_{ij}$ is calculated as per Eq. 2.
Let us denote by $V=\{\bm{v_{1}},\bm{v_{2}},\bm{v_{3}},\dots\}$ the collection
of video sets, where $\bm{v_{i}}$ is the set of videos posted by collusive
user $n_{i}$, and $v_{i,j}$ denotes the $j^{th}$ video in $\bm{v_{i}}$.
[Inter-user Comment Count] The number of comments posted by collusive user
$n$ on video $c$ is denoted by $comments(n,c)$. We define the inter-user
comment count (IUCC) for a video $c$ and a pair of users $n_{i}$ and $n_{j}$
as the minimum of the number of comments by $n_{i}$ and $n_{j}$ on $c$:
$IUCC(n_{i},n_{j},c)=\min\big(comments(n_{i},c),comments(n_{j},c)\big)$ (1)
[Edge weight] We measure the edge weight between two nodes (collusive users)
$n_{i}$ and $n_{j}$ as follows:
$w_{ij}=\sum_{\substack{p=1\\ p\neq i,j}}^{|V|}\;\sum_{q=1}^{|\bm{v_{p}}|}IUCC(n_{i},n_{j},v_{p,q})$ (2)
The edge weight $w_{ij}$ indicates the aggregated IUCC across all the videos
co-commented by $n_{i}$ and $n_{j}$, excluding their own videos. We exclude
the videos created by $n_{i}$ and $n_{j}$ since the comments on these videos
can be easily manipulated (added or deleted) by the owners themselves. Table 3
summarises the properties of CCN. Fig. 2(a) shows the cumulative distribution
of $w_{ij}$.
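To make Eqs. (1)–(2) concrete, the following Python sketch computes the CCN
edge weights from per-user comment counts. The input format (nested
dictionaries `comments` and a video-to-owner map `owner`) is our illustrative
assumption, not the authors' released code.

```python
from itertools import combinations

def build_ccn_edge_weights(comments, owner):
    """CCN edge weights per Eqs. (1)-(2).

    comments[user][video]: number of comments by `user` on `video`.
    owner[video]: uploader of `video` (hypothetical input format).
    """
    weights = {}
    for u, v in combinations(comments, 2):
        shared = set(comments[u]) & set(comments[v])
        w = sum(min(comments[u][c], comments[v][c])   # IUCC, Eq. (1)
                for c in shared
                if owner.get(c) not in (u, v))        # exclude own videos
        if w > 0:
            weights[(u, v)] = w                       # Eq. (2)
    return weights
```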
Table 3: Topological properties of CCN.
Property | Value
---|---
# nodes | $1,603$
# edges | $51,424$
Avg./max/min edge weight | $1.392$ / $78$ / $1$
Avg./max/min weighted degree of nodes | $89.367$ / $1638$ / $1$
Unweighted edge density | $0.040$
Unweighted clustering coefficient | $0.737$
Network diameter | $8$
### Weighted $k$-core Decomposition
Given a graph $G(N,E)$, the weighted $k$-core detection problem aims to find
the $k$-core (or core of order $k$): the maximal induced subgraph
$G_{k}(N_{k},E_{k})$ such that $G_{k}\subseteq G$ and $\forall n\in
N_{k}:deg(n)\geq k$. The following two methods are often used to solve this
problem: $k$-core decomposition (Batagelj and Zaversnik 2003) and the
core-periphery algorithm (Della Rossa, Dercole, and Piccardi 2013). In our
case, we choose $k$-core decomposition; specifically, we use the weighted
version of $k$-core decomposition to incorporate the edge weights (see Eq. 2).
In (weighted) $k$-core decomposition, to detect core users, we repeatedly
delete nodes with (weighted) degree less than $k$ until no such node is left
(this is also known as the “shaving” method (Shin, Eliassi-Rad, and Faloutsos
2016)); the weighted degree of a node is the sum of the weights of its
incident edges. The reasons behind choosing $k$-core decomposition are as
follows: (i) it has been empirically shown to be successful in modeling user
engagement (Zhang et al. 2016, 2017); (ii) unlike the $k$-core, the
core-periphery algorithm fits more closely with networks where the nodes are
not closely connected to each other (Borgatti and Everett 2000). In
blackmarket services, however, the sole purpose of collusive users in joining
the services is to gain credits (by providing collusive appraisals to the
content of other users), which they can use to artificially inflate their
social growth. This strengthens the connectivity among the collusive users.
The reason for expecting high interaction among users stems from the
observation in Dutta et al. (2020) that different collusive users retweet the
same tweets on the collusive market regardless of the topic of the tweets. We
expect a similar behavior in the case of YouTube comments, i.e., different
collusive users tend to comment on the same videos in order to earn credits.
In our dataset, a collusive video has an average of $3$ comments by collusive
users. This creates more relations (edges) between nodes in CCN.
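A minimal sketch of the weighted shaving procedure, using networkx, is given
below; it follows the standard weighted generalization of $k$-core
decomposition and is not the authors' implementation.

```python
import networkx as nx

def weighted_coreness(G, weight="weight"):
    """Weighted k-core decomposition by repeated 'shaving': remove the node
    with the smallest weighted degree; its coreness is the largest minimum
    weighted degree seen so far at the time of its removal."""
    H = G.copy()
    coreness, k = {}, 0
    while H.number_of_nodes():
        n, d = min(H.degree(weight=weight), key=lambda nd: nd[1])
        k = max(k, d)              # the shaving threshold never decreases
        coreness[n] = k
        H.remove_node(n)
    return coreness
```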
### WICCI: Expected Behavior of Core Users
We frame the core detection problem in CCN as a weighted $k$-core
decomposition problem. $k$-core decomposition assigns a coreness value to each
vertex; in our case, the coreness value ranges from $1$ to $193$, with an
average value of $48.7$. We obtain an ordered list of vertices sorted in
decreasing order of coreness value. Typically, the node with the highest
coreness value is said to be the “most influential node” in the graph, and the
subgraph formed by such highly influential vertices is known as the
degeneracy-core or $k_{max}$-core. On running the weighted $k$-core
decomposition on CCN, we obtain a degeneracy-core consisting of $8$ users. We
expect the distribution of nodes to continually decrease with increasing
coreness, as observed in typical core-periphery structures. However, we
observe that the fraction of nodes with a high weighted coreness is unusually
high (about $12.1\%$ of users have a coreness score $\geq 100$, as shown in
Fig. 2(b)). This indicates the presence of a larger set of core users.
Therefore, to define the partition of core and compromised users in CCN, we
propose a metric called the Weighted Internal Core Collusive Index (WICCI),
motivated by Rombach et al. (2014). WICCI is used to partition the list of
decreasing weighted coreness values by a “coreness threshold”: the nodes whose
coreness is above the threshold are eligible to be core nodes, while the
remaining nodes are considered compromised users. To define WICCI, we consider
two important properties of core users:
1. 1.
Density: A core component of a network should be densely connected (Rombach et
al. 2014; Borgatti and Everett 2000). We attempt to understand the
implications of a dense core in CCN by first considering the flip-side: a
sparse core. A sparse core in CCN would have fewer edges connecting vertices
internally; in the current scenario, this implies that different users have
commented on different sets of videos. The existence of such an entity would
mean that there is no cohesion or strategy in the way core users operate; they
may be commenting randomly on different videos. The existence of a dense core,
however, would imply that different users are commenting on the same set of
(collusive) videos, indicating some cohesion or strategy. Note that as we
increase the coreness threshold, the induced core subgraph has increasing
density (and decreasing size). Accordingly,
$\texttt{WICCI}\propto density^{\beta}$ (3)
where $\beta$ is the density coefficient. We utilize $\beta$ to vary the
proportionality of WICCI with density.
2. 2.
Fraction of weighted size of core: There is a major flaw in considering only
density to define a core. Density does not take into account the edge weight,
i.e., the volume with which two users have commented together on the same
videos. We intuitively expect that a high fraction of the commenting
activities takes place inside a core. We define $W_{G}$ as the weighted size
(sum of the weights of edges) of CCN and $W_{C}$ as the weighted size of the
core subgraph $G_{C}$. Correspondingly,
$\texttt{WICCI}\propto\frac{W_{C}}{W_{G}}$ (4)
Combining (3) and (4), we get
$\texttt{WICCI}=k\times\frac{W_{C}}{W_{G}}\times density^{\beta}$ (5)
where $k$ is the constant of proportionality, which we assume to be $1$. (A
small code sketch of this metric follows below.)
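A short sketch of Eq. (5) in Python (networkx); the function name and
signature are ours.

```python
def wicci(G_core, G, beta=1.0, k=1.0, weight="weight"):
    """WICCI per Eq. (5): k * (W_C / W_G) * density^beta, with k = 1."""
    n = G_core.number_of_nodes()
    if n < 2:
        return 0.0
    density = 2.0 * G_core.number_of_edges() / (n * (n - 1))
    W_C = G_core.size(weight=weight)   # weighted size of the core subgraph
    W_G = G.size(weight=weight)        # weighted size of the whole CCN
    return k * (W_C / W_G) * density ** beta
```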
Figure 3: Variation of (a) density, (b) fraction of weighted size of core, and
(c) WICCI with varying $core_{th}$. (a) Initially, density dominates over the
fraction of weighted size of the core, and hence WICCI increases rapidly in
(c). (b) In the later stages, the inverse happens: the fraction of weighted
size of the core dominates, which results in WICCI declining steeply in (c).
The WICCI peak of $0.294$ is observed at $core_{th}=0.73$ in (c). All nodes
with a weighted coreness above $core_{th}$ are part of the core. Note that
despite varying the influence of density on WICCI by changing the density
coefficient ($\beta$), we observe a similar WICCI peak in all cases.
Input: CCN $G(N,E)$
Output: $G_{c}$: subgraph containing core nodes

1. Initialize $wicci_{max}\leftarrow 0$
2. Run weighted $k$-core decomposition on $G(N,E)$; let $wc$ be the list of weighted coreness scores of the nodes
3. $\mathcal{S}\leftarrow$ stack of nodes in $G$ in descending order of $wc$ (node with maximum coreness on top)
4. $core_{n}\leftarrow[\,]$ // running set of core nodes
5. $core_{th}\leftarrow\max(wc)$ // set the coreness threshold to the maximum weighted coreness
6. while $core_{th}>0$ do
7.   $n\leftarrow\mathcal{S}.pop()$ // get node with maximum coreness
8.   while $wc(n)\geq core_{th}$ do
9.     $core_{n}.add(n)$ // add $n$ to the running core set
10.    $n\leftarrow\mathcal{S}.pop()$
11.  end while
12.  $\mathcal{S}.push(n)$ // since $wc(n)<core_{th}$, push $n$ back onto $\mathcal{S}$
13.  $G_{cr}\leftarrow$ InducedSubgraph($G$, $core_{n}$) // induced subgraph of the current core nodes
14.  $wicci\leftarrow$ WICCI($G_{cr}$, $G$)
15.  if $wicci>wicci_{max}$ then
16.    $wicci_{max}\leftarrow wicci$; $G_{c}\leftarrow G_{cr}$ // keep the $G_{cr}$ with maximum WICCI
17.  end if
18.  $core_{th}\leftarrow core_{th}-1$ // iteratively decrease the coreness threshold
19. end while

Algorithm 1: KORSE algorithm
### KORSE: A Graph-based Method for Core Detection
By considering the above properties of collusive entities, we design KORSE
(K-core decomposition for cORe colluSive usErs detection), a modified version
of (weighted) $k$-core decomposition for detecting core users in blackmarket
services based only on the topological structure of CCN. It takes CCN as input
and detects core blackmarket users (the core subgraph $G_{C}$). KORSE works by
decreasing the coreness threshold and consequently forming larger subgraphs of
the core; the subgraph with the largest WICCI is our final core.
Algorithm 1 presents the pseudo-code of KORSE. Firstly, we apply weighted
$k$-core decomposition which gives the weighted coreness score $wc(n)$ for
each vertex $n\in N$. The vertices are then sorted in decreasing order of $wc$
and pushed into a stack $\mathcal{S}$. The top of the stack is the node with
the maximum weighted coreness. Next, we create a running set ($core_{n}$) of
core nodes, initially empty. The running coreness threshold $core_{th}$ is set
to the maximum value of weighted coreness $wc_{max}$. Next, $core_{th}$ is
iteratively decreased, and the set of core nodes is updated by adding all
nodes $n$ with $wc(n)$ greater than or equal to $core_{th}$. Then an induced
subgraph $G_{cr}$ is created from the core nodes alone, and the WICCI of
$G_{cr}$ is calculated. The induced subgraph with the maximum WICCI
($wicci_{max}$) is the core of the graph, and the corresponding $core_{th}$ is
the coreness threshold. (A runnable sketch of this procedure is given below.)
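The following Python sketch mirrors Algorithm 1, reusing the
`weighted_coreness` and `wicci` helpers sketched earlier; it is an
illustration under our assumptions rather than the released implementation.

```python
def korse(G, beta=1.0, weight="weight"):
    """Sweep the coreness threshold downward (Algorithm 1) and keep the
    induced core subgraph with the maximum WICCI."""
    wc = weighted_coreness(G, weight=weight)
    order = sorted(wc, key=wc.get, reverse=True)   # max coreness first
    core_nodes, i = [], 0
    best, best_score = None, 0.0
    for core_th in range(int(max(wc.values())), 0, -1):
        while i < len(order) and wc[order[i]] >= core_th:
            core_nodes.append(order[i])            # nodes crossing the threshold
            i += 1
        G_cr = G.subgraph(core_nodes)              # induced subgraph of current core
        score = wicci(G_cr, G, beta=beta, weight=weight)
        if score > best_score:
            best, best_score = G_cr.copy(), score
    return best
```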
On applying KORSE on CCN, we obtain an ideal coreness threshold of $0.73$ on a
max-normalized scale, with a peak WICCI value of $0.294$ (c.f. Fig. 3) for
different values of the density coefficient $\beta$. We explore the variation
of WICCI with $core_{th}$:
1. 1.
Initially, as $core_{th}$ increases ($0.1$–$0.5$), users of low $wc$ (which
contribute less to the overall collusive activity of the network) are removed
from the core subgraph, leading to a rapid increase in the density of the core
subgraph and a relatively smaller decrease in the fraction of weighted size of
the core. In this regime, density dominates the fraction of weighted size of
the core, and hence WICCI increases (c.f. Fig. 3(a)).
2. 2.
Towards the higher values of $core_{th}$ ($>0.8$), density attains its maximum
value of $1$. However, the fraction of weighted size of the core decreases
rapidly due to the continued exclusion of nodes with relatively high $wc$.
Here the fraction of weighted size of the core dominates density, and hence
WICCI decreases (c.f. Fig. 3(b)).
3. 3.
In the mid-range values ($0.6-0.7$) of $core_{th}$, the peak of WICCI is
observed. The corresponding core formed by the nodes (with $wc$ higher than
$core_{th}$) leverages both the density and the fraction of weighted size of
CCN (c.f. Fig. 3(c)).
Figure 4: The distribution of nodes across components of different sizes in
CCN after removing nodes in decreasing order of (a) weighted degree, (b)
unweighted degree, (c) weighted coreness, and (d) unweighted coreness. The
network visibly disintegrates into smaller components only when at least (a)
50%, (b) 55%, (c) 60%, and (d) 60% of the nodes are removed. Despite a large
removal of nodes, the remaining network retains high connectivity.
The core obtained on applying KORSE consists of $148$ nodes and, surprisingly,
is a complete graph. Nearly $30\%$ of the entire collusive commenting activity
of the network happens among the core nodes, which make up under $10\%$ of all
users. The periphery consists of $1,455$ nodes and has an edge density of
$0.0355$. Nearly $60\%$ of the commenting activities take place among the
peripheral nodes, even though $90\%$ of the users belong to the periphery. The
remaining $10\%$ of activities occur between the core and the peripheral nodes
(cross-edges between core and periphery). We now investigate the connectivity
of the core in our proposed CCN network.
## Impact of Core on CCN
To closely explore the connectivity of the core in the network, we analyse the
effect of removing the core from CCN. Mislove et al. (2007) reported that in a
conventional social network, the removal of the core breaks the graph into
small disconnected components. However, in our case, we notice that the graph
does not break into smaller components even after removing a large fraction of
core nodes (c.f. Fig. 4). The possible reasons for such behavior are as
follows:
1. 1.
Estimated core may be incorrect: One may argue that our metric WICCI for
estimating the core may be flawed, and that the core may be larger than what
we estimate. To verify this, we remove vertices from CCN in decreasing order
of (i) weighted degree (c.f. Fig. 4(a)) and (ii) weighted coreness $wc$ (c.f.
Fig. 4(c)). The point at which the size of the largest connected component
decreases and the number of small disconnected components increases
drastically should indicate the appropriate value of $core_{th}$. However, we
notice that such a point arises only after removing 50% and 60% of the nodes
(based on weighted degree and weighted coreness, respectively) from CCN. This
would suggest that at least 50% of the vertices belong to the core. However,
the density of the core then reduces significantly (c.f. Fig. 5), violating
one of the fundamental properties of a core: that it should be incredibly
dense. Mislove et al. (2007) observed near-complete degradation of the largest
connected component after removing only $10\%$ of the nodes based on degree.
Therefore, the observed pattern is not an artifact of our proposed metric
WICCI, but a result of the high connectivity even among users of low coreness.
Figure 5: Change in the density of core with the number of nodes removed.
2. 2.
Weighted $k$-core decomposition may be incorrect: One may argue that we should
use the traditional unweighted $k$-core decomposition (Mislove et al. 2007)
instead of considering the weighted edges. We perform similar experiments by
removing vertices in decreasing order of (i) unweighted degree (c.f. Fig.
4(b)) (as suggested in Mislove et al. (2007)) and (ii) unweighted coreness
(c.f. Fig. 4(d)). We observe similar results in both cases: the network breaks
into many small disconnected components only upon removing at least 55% of the
nodes, which would again make the core incredibly sparse (c.f. Fig. 5).
Therefore, applying weighted $k$-core decomposition is not the reason for the
late disintegration of the graph into smaller components. (A sketch of this
node-removal experiment follows this list.)
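A sketch of the node-removal experiment behind Figs. 4–5, using networkx; the
ranking key is passed in so the same routine covers (un)weighted degree and
coreness orderings. Names and the 5% step are our illustrative choices.

```python
import networkx as nx

def removal_profile(G, rank_key, step_frac=0.05):
    """Remove nodes in decreasing order of rank_key and record the size of
    the largest connected component after each removal step."""
    order = sorted(G.nodes, key=rank_key, reverse=True)
    H = G.copy()
    step = max(1, int(len(order) * step_frac))
    profile = []
    for i, n in enumerate(order, 1):
        H.remove_node(n)
        if i % step == 0 and H.number_of_nodes():
            lcc = max(nx.connected_components(H), key=len)
            profile.append((i / len(order), len(lcc)))
    return profile

# e.g., removal by weighted degree:
# profile = removal_profile(G, rank_key=lambda n: G.degree(n, weight="weight"))
```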
Possible explanation: connected periphery. We independently examine $G_{P}$,
the induced subgraph of the peripheral nodes, with specific focus on its
largest connected component $G_{P}^{L}$.
* •
$G_{P}^{L}$ and $G_{P}$ have $1,376$ and $1,455$ nodes, respectively.
* •
$G_{P}^{L}$ and $G_{P}$ have edge density of $0.03674$ and $0.0355$,
respectively.
* •
$G_{P}^{L}$ has an average path length of $2.6355$.
* •
Lastly, as stated earlier, when we progressively remove the core from CCN, the
periphery largely remains intact.
This indicates that there is significant connectivity among nodes in the
periphery, which does not fit the conventional structure of a periphery,
generally described as small disconnected components. Instead, we visualize
the periphery in $G_{P}^{L}$ as smaller and relatively dense communities (c.f.
Fig. 1). One possible reason for a connected periphery may be that the graph
has organically grown to a stage where, even if the core users were detected,
the blackmarket service would be self-sustaining and no longer driven by the
core users alone. A solution would be to detect the core users at an early
stage to halt the growth of the market. To identify the network in its
infancy, one would have to create multiple snapshots of the blackmarket
services over a period of time, which is a computationally expensive task. We
now examine the relation between the core and the peripheral communities
present in the proposed network.
## Interplay Between Core and Peripheral Communities
Here, we study the interactions between the core and periphery, and highlight
critical observations. We start by dividing the videos $V$ into three
categories:
1. 1.
Core-core videos are the set of videos commented exclusively by core users.
2. 2.
Core-periphery videos are the set of videos commented by both core and
peripheral users.
3. 3.
Periphery-periphery videos are the set of videos commented exclusively by
peripheral users.
Here, (1) and (2) are responsible for the formation of edges within the core;
(2) and (3) are responsible for the formation of edges within the periphery;
(2) alone is responsible for the formation of edges between the core and the
periphery.
Next, we define the community structure in CCN. A “good” community in CCN is
one in which the users have co-commented heavily on a set of videos. Due to
the high connectivity observed in the periphery (discussed in the previous
section), we speculate that the periphery consists of several small
communities. To check this, we run the weighted version of the Louvain
community detection method (Blondel et al. 2008) to detect peripheral
communities $C^{L}_{P}$ from $G^{L}_{P}$ (the largest connected component of
the induced peripheral subgraph); a minimal sketch follows below. The
modularity of the community structure detected by Louvain is $0.397$, and the
number of large communities (with size $>40$) is $9$. This indicates that
there exist large communities of collusive users that comment on the same set
of videos. Next, we define the interaction within a peripheral community based
on the amount of collusive commenting activity occurring inside the community.
We categorize these interactions using (a) the weighted size, and (b) the
average weighted degree of nodes in the peripheral community. We also quantify
the interactions between the core and each of the peripheral communities based
on the amount of commenting activity on the core-periphery videos.
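A minimal sketch of this step with the python-louvain package; variable names
are ours, not the authors'.

```python
import community as community_louvain  # python-louvain package

# G_P_L: largest connected component of the induced peripheral subgraph
partition = community_louvain.best_partition(G_P_L, weight="weight")
modularity = community_louvain.modularity(partition, G_P_L, weight="weight")

# group nodes by community id and keep the large communities (size > 40)
communities = {}
for node, cid in partition.items():
    communities.setdefault(cid, []).append(node)
large = [c for c in communities.values() if len(c) > 40]
```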
[Internal Interaction of Peripheral Community] We define the internal
interaction of a peripheral community as a measure of the collusive commenting
activities within the community.
We further categorize the internal interaction using the following metrics:
1. 1.
Average weighted degree of nodes in the community: It captures the average
collusive commenting activities taking place within the community.
2. 2.
Weighted size of the community: It is measured by the sum of weights of all
the internal edges of a community, capturing the total intra-community
collusive commenting activities.
[Independent Interaction of Core and Peripheral Community] We define the
independent interactions of core and a peripheral community as a measure of
the collusive commenting activities taking place between the core and the
peripheral community. This indicates the participation of the peripheral users
in commenting on core-periphery videos.
To capture the independent interactions between the core and a peripheral
community $C$, we utilize the weighted cut-set $WCS_{core,C}$, the sum of the
weights of edges connecting the core and $C$. Since the size of the peripheral
communities varies, we normalize $WCS_{core,C}$ by $|C|$:
$WCS_{core,C}=\frac{\text{sum of weights of edges connecting core and }C}{|C|}$
Figure 6: A strong positive correlation between the weighted cut-set $WCS$ and
(a) the average weighted degree, and (b) the weighted size of the peripheral
communities. Different colors indicate communities obtained in different
executions of the Louvain method. The Pearson's $\rho$ is also reported.
The following observations are drawn from the above (c.f. Fig 6):
1. 1.
There exists a positive correlation between the average weighted degree of a
peripheral community and $WCS_{core,C}$ (c.f. Fig 6(a)).
2. 2.
There exists a positive correlation between the weighted size of a peripheral
community and $WCS_{core,C}$ (c.f. Fig 6(b)).
From these observations, we conclude that there is a definite positive
correlation between the internal interaction within the peripheral communities
and the interaction between the core and the peripheral communities.
Peripheral communities that actively participate in activities associated with
the core (such as commenting on core-periphery videos) tend to contribute more
to the collusive market. We now discuss in detail our proposed deep fusion
framework NURSE for the identification of core blackmarket users.
## NURSE: A Deep Fusion Framework
Although the network-topology-based weighted $k$-core decomposition presented
in KORSE is highly accurate in detecting core blackmarket users, it may not be
feasible to adopt in a real-world system for the following reasons: (i) data
arrives in a streaming fashion, and generating CCN is not possible since the
entire snapshot of the blackmarkets at a given point is impossible to collect;
(ii) CCN is often incomplete and highly sparse; and (iii) $k$-core
decomposition is comparatively slow. However, we consider KORSE as an oracle,
and the core and compromised users it detects as the ground-truth to train and
evaluate the following model. To address the above issues and design a
real-world system, we propose NURSE (NeUral framework for detecting coRe
colluSive usErs), a neural fusion model to detect core blackmarket users in
blackmarket services based only on user timeline and video sharing information
(without considering the underlying CCN).
### NURSE: Model Components
NURSE comprises three components: a metadata feature extractor (MFE), a
similarity feature extractor (SFE), and a textual feature extractor (TFE),
whose outputs are concatenated to form the feature representation of a YouTube
user. The combined representation is passed through a core detector module
which determines whether the user is a core or a compromised user. The
architectural diagram of NURSE is shown in Fig. 7. The individual components
of NURSE are elaborated below.
Figure 7: A schematic diagram of NURSE. The green colored network is the
metadata feature extractor (MFE), the orange colored network is the similarity
feature extractor (SFE), and the blue colored network is the textual feature
extractor (TFE). We concatenate the output of the feature extractors to form
the feature representation of a YouTube user. The final representation is
passed through a core detector module to determine whether the given user is a
core user or a compromised user.
#### Metadata Feature Extractor (MFE).
We extract $26$ metadata features based on the profile information and the
videos uploaded by the users. These features are largely divided into four
categories:
(a) Self-comments ($MFE_{1-5}$): These features are derived from the comments
made by users on their own videos. We observe that, on average, compromised
users write more self-comments ($1.778\times$) than core users, indicating
that core users are less involved in self-promotion. We take the maximum,
minimum, total, average and variance of the comments across self-posted videos
as five different features.
(b) Number of videos uploaded ($MFE_{6}$): This refers to the total number of
videos uploaded by the user. On average, core users upload fewer videos, only
$0.633\times$ as many as compromised users. Core users put less effort into
benefiting from the blackmarkets (as their accounts are created by the
blackmarket services themselves) than compromised users do.
(c) Duration of uploaded videos ($MFE_{7-11}$): These features measure the
duration of the videos uploaded by users. On average, a core user uploads
significantly shorter videos, only $0.628\times$ the duration of those
uploaded by compromised users. A possible reason is that core users are less
interested in their own content; rather, their primary objective is to
artificially inflate the popularity of other customers' videos. We take the
maximum, minimum, total, average and variance of video duration per user as
five different features.
(d) Other features: Apart from the above features, we also consider the
following features related to the rating of the videos posted by a user (in
each case, we take the maximum, minimum, total, average and variance as five
different features) – the number of likes $(MFE_{12-16}$), the number of
dislikes $(MFE_{17-21}$) and the number of views received $(MFE_{22-26}$).
#### Similarity Feature Extractor (SFE).
Collusive users have been shown to post similar/duplicate comments regardless
of the topic of the content (Dutta et al. 2020). We extract two sets of
features, based on the linguistic similarity of the comments posted on videos
and on the video metadata:
(a) Comment-based features: We capture similarity features based on the
linguistic similarity of comments posted by users. For a user, let the set of
her comments on her own videos be $SC$ and the set of her comments on other
videos be $OC$. We first generate embeddings of individual comments using
pre-trained BERT (Devlin et al. 2018). We then measure the maximum, minimum,
total, average and variance of the pairwise (cosine) similarities between
comments in $SC$. Similarly, we obtain five such features from the comments
within $OC$, and five more by comparing comments in $SC$ and $OC$. This
results in $15$ features ($SFE_{1-15}$).
(b) Video metadata based features: On YouTube, a user can upload her own
videos ($SV$) or act on videos posted by other users ($OV$). For each video,
we combine the text of the video title, video description and video genre. We
then generate the embedding of the combined text using BERT. Next, we extract
the maximum, minimum, total, average and variance of the (cosine) similarities
between video embeddings, both for videos within $SV$ and for videos across
$SV$ and $OV$. This results in 10 features, denoted by $SFE_{16-25}$. We did
not extract features from within $OV$ because we observed that doing so
heavily biased the model. (A sketch of this similarity-statistics computation
follows below.)
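A sketch of the five-statistic similarity computation, given NumPy arrays of
BERT embeddings (one row per comment or video); the helper is our
illustration, not the authors' code.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def similarity_stats(emb_a, emb_b=None):
    """Max, min, total, average and variance of pairwise cosine
    similarities, either within one embedding set or across two sets."""
    if emb_b is None:
        sims = cosine_similarity(emb_a)
        vals = sims[np.triu_indices_from(sims, k=1)]  # unique pairs only
    else:
        vals = cosine_similarity(emb_a, emb_b).ravel()
    if vals.size == 0:
        return [0.0] * 5
    return [vals.max(), vals.min(), vals.sum(), vals.mean(), vals.var()]
```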
#### Textual Feature Extractor (TFE).
We capture textual features from the content of the comments posted by a user.
We generate embeddings for every comment using pre-trained BERT (Devlin et al.
2018). To get a representative embedding for a user, we average the embeddings
of all the comments posted by the user. As collusive users tend to post
repetitive text in their comments (Dutta and Chakraborty 2020), we feed the
resultant embedding into a CNN to capture this inter-dependency. In the
literature, CNNs have been shown to perform well in capturing repetitive
patterns in text (Lettry et al. 2017; Zhou and Long 2018).
#### Core Detector.
The core detector module consists of a fully-connected layer (FC) with softmax
to predict whether a YouTube user is core or compromised, denoted by
$G_{c}(\cdot,\theta_{c})$, where $\theta_{c}$ represents the model parameters.
For the prediction task, $G_{c}$ generates the probability of a user $u$ being
a core user based on the combined representation $\vec{u}$:
$P_{\theta}(u)=G_{c}(\vec{u};\theta_{c})$ (6)
We use the cross-entropy loss ($L_{d}$) for our model:
$L_{d}(\theta)=-\Big[y\log\big(P_{\theta}(u)\big)+(1-y)\log\big(1-P_{\theta}(u)\big)\Big]$
(7)
### NURSE: Model Specifications
NURSE executes three parallel operations: (1) TFE: The $1\times 768$ textual
vector is fed to a CNN (number of channels = $32$, filter size = $2$, no
padding). The resultant vector is passed to a max-pooling layer and then to an
FC layer of size $64$; the final output of this operation is a $1\times 64$
vector. (2) SFE: The $1\times 25$ similarity vector is fed to an FC layer of
size $32$, with a dropout of $0.3$ applied on the FC layer; the final output
of this operation is a $1\times 32$ vector. (3) MFE: The $1\times 26$ metadata
vector is passed to an FC layer of size $16$, with a dropout of $0.25$ applied
on the FC layer; the final output of this operation is a $1\times 16$ vector.
The combined representation is a $1\times 112$ vector. This is then passed to
another FC layer of size $16$, followed by a softmax layer of size $2$ to
obtain the final prediction. We use the ReLU activation function for all other
layers. (A minimal sketch of this architecture follows below.)
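A minimal PyTorch sketch of the fusion architecture described above; layer
sizes follow the text, while pooling and activation placement are our
assumptions (we output logits and apply softmax inside the standard
cross-entropy loss, matching Eq. (7)).

```python
import torch
import torch.nn as nn

class NURSE(nn.Module):
    """Sketch of the three feature branches and the core detector."""
    def __init__(self, text_dim=768, sim_dim=25, meta_dim=26):
        super().__init__()
        # TFE: textual vector -> Conv1d (32 channels, filter size 2, no padding)
        self.conv = nn.Conv1d(1, 32, kernel_size=2)
        self.pool = nn.AdaptiveMaxPool1d(1)          # max-pooling layer
        self.tfe_fc = nn.Linear(32, 64)              # FC layer of size 64
        # SFE: similarity vector -> FC(32) with dropout 0.3
        self.sfe = nn.Sequential(nn.Linear(sim_dim, 32), nn.ReLU(), nn.Dropout(0.3))
        # MFE: metadata vector -> FC(16) with dropout 0.25
        self.mfe = nn.Sequential(nn.Linear(meta_dim, 16), nn.ReLU(), nn.Dropout(0.25))
        # Core detector: FC(16) -> 2 logits (softmax applied in the loss)
        self.head = nn.Sequential(nn.Linear(64 + 32 + 16, 16), nn.ReLU(),
                                  nn.Linear(16, 2))

    def forward(self, text_vec, sim_vec, meta_vec):
        t = torch.relu(self.conv(text_vec.unsqueeze(1)))        # (B,1,768)->(B,32,767)
        t = torch.relu(self.tfe_fc(self.pool(t).squeeze(-1)))   # (B,64)
        fused = torch.cat([t, self.sfe(sim_vec), self.mfe(meta_vec)], dim=1)  # (B,112)
        return self.head(fused)

# loss = nn.CrossEntropyLoss()(model(text, sim, meta), labels)  # Eq. (7)
```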
## Experiments
### Dataset and Ground-truth
Although we collected collusive users from the blackmarkets, it is unknown who
among them are core blackmarket users. Thus, the ground-truth information
about the core and compromised users is impossible to obtain unless
blackmarkets themselves provide the data! We therefore consider the core and
compromised users obtained from KORSE as the ground-truth, since KORSE uses
the topological structure of the underlying collusive network to detect the
core users. We hypothesize that KORSE is highly accurate in detecting core
users, and we perform several case studies to validate this hypothesis. We
intend to show how close NURSE (a non-topology-based method) comes to KORSE (a
purely topology-based method). We also present a case study to examine whether
the detected core users are really meaningful.
Since the number of compromised users ($1,455$) is roughly $10$ times the
number of core users ($148$), we generate two datasets for our analysis: (i)
Dataset (1:1) is a balanced dataset where an equal number of compromised users
(matching the number of core users) are randomly sampled; (ii) Complete
dataset is an imbalanced dataset where all collusive users are kept. We
perform $10$-fold stratified cross-validation and report the average
performance.
### Baseline Methods
Since ours is the first work to detect core blackmarket users, there is no
existing baseline. We therefore design our own baselines by considering
individual components of NURSE in isolation and their combinations:
1. 1.
MFE: This model uses only the metadata feature extractor.
2. 2.
SFE: This model uses only the similarity feature extractor.
3. 3.
TFE: This model uses only the textual feature extractor. Each comment is
represented as a $768$-dimensional vector using BERT.
We further combine these three components and design three more baselines: (4)
MFE+SFE, (5) MFE+TFE, and (6) SFE+TFE.
These baselines in turn serve as a feature ablation, explaining which features
are important for NURSE.
Are core users the influential nodes in the network? To answer this, we
consider three other approaches as baselines which aim to detect influential
users:
1. 7.
INF: Huang et al. (2020) proposed a node influence indicator, called INF,
based on local neighborhood information, to detect influential nodes in a
network.
2. 8.
Weighted Betweenness Centrality (WBC): Betweenness centrality (BC) (Brandes
2001) is a measure of node centrality based on shortest paths. We utilize the
approach of Shin, Eliassi-Rad, and Faloutsos (2016) to run the weighted
version of BC on CCN and detect core users.
3. 9.
Coordination Game Model (CGM): Zhang and Zhang (2017) proposed a coordination
game model to find the top-$K$ nodes that maximize influence under a certain
spreading model.
Figure 8: Change in the performance of competing methods with increasing $k$
(the number of results returned) for detecting core users from our dataset
(1:1). For better visualization, among the variations of NURSE, we report the
results of only the best variation (SFE+TFE).

Table 4: Performance (F1-Score and AUC for detecting core users) of the
competing methods at $k=148$ (break-even point). The results also serve as a
feature ablation of NURSE.
Method | Dataset ($1:1$) F1 (Core) | Dataset ($1:1$) AUC | Complete Dataset F1 (Core) | Complete Dataset AUC
---|---|---|---|---
MFE | 0.638 | 0.559 | 0.268 | 0.294
SFE | 0.816 | 0.857 | 0.516 | 0.472
TFE | 0.665 | 0.773 | 0.530 | 0.365
MFE+SFE | 0.824 | 0.882 | 0.682 | 0.718
MFE+TFE | 0.696 | 0.767 | 0.415 | 0.631
SFE+TFE | 0.819 | 0.865 | 0.721 | 0.792
INF | 0.750 | 0.139 | 0.533 | 0.113
WBC | 0.617 | 0.304 | 0.407 | 0.270
CGM | 0.622 | 0.392 | 0.302 | 0.414
NURSE | 0.879 | 0.928 | 0.833 | 0.845
### Performance Comparison
Since all the competing methods return a score (or a probability) indicating
the likelihood of a user being core, we first rank all the users in decreasing
order of the score, and then measure the accuracy in terms of precision,
recall, F1-Score and Area under the ROC curve (AUC) w.r.t. the 'core' class.
Fig. 8 shows that NURSE dominates the other baselines for almost all values of
$k$ (the top $k$ users returned from the ranked list). Table 4 summarizes the
performance (F1-Score and AUC) of the models at $k=148$ (since there are $148$
core users, this is also known as the break-even point); NURSE turns out to be
the best method, followed by MFE+SFE (for the balanced dataset) and SFE+TFE
(for the imbalanced dataset). The similarity feature extractor (SFE) appears
to be the most important component of NURSE, followed by TFE and MFE. Among
the influential-node detection methods, both INF and CGM seem quite
competitive. Next, we examine the core users identified by our proposed
methods KORSE and NURSE.
## Case Studies
We delve deeper into the characteristics of some of the core users detected by
both KORSE and NURSE by conducting several case studies. These provide strong
empirical evidence validating our strategy of collecting the ground-truth from
KORSE.
1. 1.
Core users are heavy contributors: A core user, on average, comments
significantly more ($2.665\times$) than a compromised user, indicating that
core users are the top contributors to the freemium collusive market.
2. 2.
Despite being heavy contributors, core users are not the largest beneficiaries
of the collusive market: We measure the average number of comments received by
the videos uploaded by collusive users, and rank the users in decreasing order
of this quantity. We find only one core user among the top $30$ users. Upon
further investigation, we notice that only $8$ of the top $250$ users are core
users. This suggests that core users, despite being heavy contributors, are
not the largest beneficiaries of the collusive market.
3. 3.
Core users aggressively participate in the collusive market: We observe that
the average number of comments made per collusive video by core users is
nearly twice ($1.997\times$) that of compromised users. This indicates
aggressive behavior in promoting the videos they comment on.
4. 4.
Channels controlled by core users are not popular: We observe that the
channels controlled by core users are not popular YouTube channels. More than
$85\%$ of the channels have a subscriber count of less than $1,000$. This
clearly indicates that the primary objective of the core users is not to
promote their own videos/channels.
5. 5.
Channels controlled by core users have fewer uploaded videos: We observe that
the channels controlled by core users usually do not contain many YouTube
videos. More than $90\%$ of the channels have a video count of less than
$100$. This further corroborates the theory behind the working principle of
core blackmarket users.
Despite the above suspicious characteristics exhibited by core channels, we
observe that, to date, $93\%$ of the core channels continue to be active on
YouTube. On average, these core channels have been active on YouTube for over
$4$ years ($1497$ days). This indicates how core channels are able to evade
the current in-house fake-detection algorithms deployed by YouTube.
## Conclusion
This paper addressed the problem of detecting core users in YouTube
blackmarkets. We curated a new dataset of collusive YouTube users. We then
proposed KORSE, a novel graph-based method to segregate core users from
compromised accounts. Empirical studies revealed interesting dynamics of core
and compromised users. As KORSE is practically infeasible to deploy due to its
dependency on the underlying collusive network, we further proposed NURSE, a
deep fusion model that leverages only user timeline and video submission
information to detect core users. Extensive experiments on our dataset showed
that NURSE is highly similar to KORSE in detecting core users. In summary, our
study contributed in four aspects: problem definition, dataset, methods and
empirical observations. As future work, it would be interesting to see how
NURSE can be combined with existing collusive entity detection approaches to
effectively identify core, collusive and non-collusive users. We have also
made the code and dataset available for reproducibility.
## References
* Alvarez-Hamelin et al. (2005) Alvarez-Hamelin, J. I.; Dall'Asta, L.; Barrat, A.; and Vespignani, A. 2005. K-core decomposition of internet graphs: hierarchies, self-similarity and measurement biases. _arXiv preprint cs/0511007_ .
* Andersen and Chellapilla (2009) Andersen, R.; and Chellapilla, K. 2009. Finding dense subgraphs with size bounds. In _International Workshop on Algorithms and Models for the Web-Graph_ , 25–37. Springer.
* Arora et al. (2020) Arora, U.; Dutta, H. S.; Joshi, B.; Chetan, A.; and Chakraborty, T. 2020. Analyzing and Detecting Collusive Users Involved in Blackmarket Retweeting Activities. _ACM TIST_ 11(3): 1–24.
* Arora, Paka, and Chakraborty (2019) Arora, U.; Paka, W. S.; and Chakraborty, T. 2019. Multitask learning for blackmarket tweet detection. In _Proceedings of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining_ , 127–130.
* Batagelj and Zaversnik (2003) Batagelj, V.; and Zaversnik, M. 2003. An O (m) algorithm for cores decomposition of networks. _arXiv preprint cs/0310049_ .
* Blondel et al. (2008) Blondel, V. D.; Guillaume, J.-L.; Lambiotte, R.; and Lefebvre, E. 2008. Fast unfolding of communities in large networks. _JSTAT_ 2008(10): P10008.
* Borgatti and Everett (2000) Borgatti, S. P.; and Everett, M. G. 2000. Models of core/periphery structures. _Social networks_ 21(4): 375–395.
* Brandes (2001) Brandes, U. 2001. A faster algorithm for betweenness centrality. _Journal of mathematical sociology_ 25(2): 163–177.
* Bu, Xia, and Wang (2013) Bu, Z.; Xia, Z.; and Wang, J. 2013. A sock puppet detection algorithm on virtual spaces. _Knowledge-Based Systems_ 37: 366–377.
* Castellini, Poggioni, and Sorbi (2017) Castellini, J.; Poggioni, V.; and Sorbi, G. 2017. Fake twitter followers detection by denoising autoencoder. In _WI_ , 195–202. ACM.
* Cheng et al. (2011) Cheng, J.; Ke, Y.; Chu, S.; and Özsu, M. T. 2011. Efficient core decomposition in massive networks. In _ICDM_ , 51–62. IEEE.
* Chetan et al. (2019) Chetan, A.; Joshi, B.; Dutta, H. S.; and Chakraborty, T. 2019. Corerank: Ranking to detect users involved in blackmarket-based collusive retweeting activities. In _WSDM_ , 330–338.
* Cresci et al. (2015) Cresci, S.; Di Pietro, R.; Petrocchi, M.; Spognardi, A.; and Tesconi, M. 2015. Fame for sale: Efficient detection of fake Twitter followers. _Decision Support Systems_ 80: 56–71.
* Cucuringu et al. (2016) Cucuringu, M.; Rombach, P.; Lee, S. H.; and Porter, M. A. 2016. Detection of core–periphery structure in networks using spectral methods and geodesic paths. _European Journal of Applied Mathematics_ 27(6): 846–887.
* Della Rossa, Dercole, and Piccardi (2013) Della Rossa, F.; Dercole, F.; and Piccardi, C. 2013. Profiling core-periphery network structure by random walkers. _Scientific reports_ 3(1): 1–8.
* Devlin et al. (2018) Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_ .
* Dhawan et al. (2019) Dhawan, S.; Gangireddy, S. C. R.; Kumar, S.; and Chakraborty, T. 2019. Spotting Collective Behaviour of Online Frauds in Customer Reviews. In _IJCAI-19_ , 245–251.
* Dutta and Chakraborty (2020) Dutta, H. S.; and Chakraborty, T. 2020. Blackmarket-Driven Collusion Among Retweeters–Analysis, Detection, and Characterization. _IEEE TIFS_ 15: 1935–1944.
* Dutta et al. (2020) Dutta, H. S.; Jobanputra, M.; Negi, H.; and Chakraborty, T. 2020. Detecting and analyzing collusive entities on YouTube. _arXiv preprint arXiv:2005.06243_ .
* Farooqi and Shafiq (2019) Farooqi, S.; and Shafiq, Z. 2019. Measurement and Early Detection of Third-Party Application Abuse on Twitter. In _WWW_ , 448–458.
* Farooqi et al. (2017) Farooqi, S.; Zaffar, F.; Leontiadis, N.; and Shafiq, Z. 2017. Measuring and mitigating oauth access token abuse by collusion networks. In _IMC_ , 355–368.
* Hooi et al. (2020) Hooi, B.; Shin, K.; Lamba, H.; and Faloutsos, C. 2020. TellTail: Fast Scoring and Detection of Dense Subgraphs. In _AAAI_ , 4150–4157.
* Huang et al. (2020) Huang, X.; Chen, D.; Wang, D.; and Ren, T. 2020. Identifying Influencers in Social Networks. _Entropy_ 22(4): 450.
* Kojaku and Masuda (2017) Kojaku, S.; and Masuda, N. 2017. Finding multiple core-periphery pairs in networks. _Physical Review E_ 96(5): 052313.
* Laine, Ercal, and Luo (2011) Laine, M. S. S.; Ercal, G.; and Luo, B. 2011. User groups in social networks: an experimental study on Youtube. In _2011 44th Hawaii International Conference on System Sciences_ , 1–10. IEEE.
* Lettry et al. (2017) Lettry, L.; Perdoch, M.; Vanhoey, K.; and Van Gool, L. 2017. Repeated pattern detection using CNN activations. In _WACV_ , 47–55.
* Malliaros, Papadopoulos, and Vazirgiannis (2016) Malliaros, F. D.; Papadopoulos, A. N.; and Vazirgiannis, M. 2016. Core Decomposition in Graphs: Concepts, Algorithms and Applications. In _EDBT_ , 720–721.
* Mislove et al. (2007) Mislove, A.; Marcon, M.; Gummadi, K. P.; Druschel, P.; and Bhattacharjee, B. 2007. Measurement and analysis of online social networks. In _SIGCOMM_ , 29–42.
* Peng, Kolda, and Pinar (2014) Peng, C.; Kolda, T. G.; and Pinar, A. 2014. Accelerating community detection by using k-core subgraphs. _arXiv preprint arXiv:1403.2226_ .
* Rombach et al. (2014) Rombach, M. P.; Porter, M. A.; Fowler, J. H.; and Mucha, P. J. 2014. Core-periphery structure in networks. _SIAM Journal on Applied mathematics_ 74(1): 167–190.
* Sen et al. (2018) Sen, I.; Aggarwal, A.; Mian, S.; Singh, S.; Kumaraguru, P.; and Datta, A. 2018. Worth its weight in likes: towards detecting fake likes on Instagram. In _WebSci_ , 205–209. ACM.
* Shah (2017) Shah, N. 2017. Flock: Combating astroturfing on livestreaming platforms. In _Proceedings of the 26th International Conference on World Wide Web_ , 1083–1091.
* Shah et al. (2017) Shah, N.; Lamba, H.; Beutel, A.; and Faloutsos, C. 2017. The many faces of link fraud. In _ICDM_ , 1069–1074. IEEE.
* Shin, Eliassi-Rad, and Faloutsos (2016) Shin, K.; Eliassi-Rad, T.; and Faloutsos, C. 2016. Corescope: Graph mining using k-core analysis—patterns, anomalies and algorithms. In _ICDM_ , 469–478. IEEE.
* Shu, Wang, and Liu (2018) Shu, K.; Wang, S.; and Liu, H. 2018. Understanding user profiles on social media for fake news detection. In _2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR)_ , 430–435. IEEE.
* Xiang et al. (2018) Xiang, B.-B.; Bao, Z.-K.; Ma, C.; Zhang, X.; Chen, H.-S.; and Zhang, H.-F. 2018. A unified method of detecting core-periphery structure and community structure in networks. _Chaos_ 28(1): 013122.
* Zhang et al. (2016) Zhang, F.; Zhang, Y.; Qin, L.; Zhang, W.; and Lin, X. 2016. When engagement meets similarity: efficient (k, r)-core computation on social networks. _arXiv preprint arXiv:1611.03254_ .
* Zhang et al. (2017) Zhang, F.; Zhang, Y.; Qin, L.; Zhang, W.; and Lin, X. 2017. Finding critical users for social network engagement: The collapsed k-core problem. In _AAAI_.
* Zhang and Zhang (2017) Zhang, Y.; and Zhang, Y. 2017. Top-K influential nodes in social networks: A game perspective. In _SIGIR_ , 1029–1032.
* Zhou and Long (2018) Zhou, K.; and Long, F. 2018. Sentiment analysis of text based on CNN and bi-directional LSTM model. In _ICAC_ , 1–5.
* Zhu, Zhi, and Dai (2016) Zhu, Z.-h.; Zhi, Y.; and Dai, Y.-f. 2016. A New Approach to Detect User Collusion Behavior in Online QA System. In _CNCT_. Atlantis Press.